Science.gov

Sample records for a-posteriori map estimation

  1. A posteriori error estimates for Maxwell equations

    NASA Astrophysics Data System (ADS)

    Schoeberl, Joachim

    2008-06-01

    Maxwell equations are posed as variational boundary value problems in the function space H(curl) and are discretized by Nedelec finite elements. In Beck et al., 2000, a residual-type a posteriori error estimator was proposed and analyzed under certain conditions on the domain. In the present paper, we prove the reliability of that error estimator on Lipschitz domains. The key is to establish new error estimates for the commuting quasi-interpolation operators recently introduced in J. Schoeberl, Commuting quasi-interpolation operators for mixed finite elements. Similar estimates are required for additive Schwarz preconditioning. To incorporate boundary conditions, we establish a new extension result.

  2. An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.

    ERIC Educational Resources Information Center

    De Ayala, R. J.; And Others

    Expected a posteriori (EAP) estimation has a number of advantages over maximum likelihood (ML) estimation or maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression toward the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…
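
    To make the EAP idea above concrete, the following is a minimal sketch that computes an EAP ability estimate and its posterior standard deviation by numerical quadrature under a two-parameter logistic model; the item parameters, response pattern, and quadrature settings are illustrative, not taken from the study.

        # Minimal sketch: EAP ability estimation for a 2PL IRT model via
        # numerical quadrature over a standard-normal prior. All parameter
        # values below are illustrative.
        import numpy as np

        def irt_2pl(theta, a, b):
            # P(correct | theta) under the two-parameter logistic model
            return 1.0 / (1.0 + np.exp(-a * (theta - b)))

        def eap_estimate(responses, a, b, n_points=61):
            theta = np.linspace(-4.0, 4.0, n_points)        # quadrature grid
            prior = np.exp(-0.5 * theta**2)                 # N(0,1) prior, up to a constant
            like = np.ones_like(theta)
            for u, ai, bi in zip(responses, a, b):
                p = irt_2pl(theta, ai, bi)
                like *= p**u * (1.0 - p)**(1 - u)           # likelihood of the pattern
            post = prior * like
            post /= post.sum()
            eap = (theta * post).sum()                      # posterior mean = EAP estimate
            psd = np.sqrt(((theta - eap)**2 * post).sum())  # posterior SD (standard error)
            return eap, psd

        a = np.array([1.2, 0.8, 1.5, 1.0])    # illustrative discriminations
        b = np.array([-0.5, 0.0, 0.5, 1.0])   # illustrative difficulties
        print(eap_estimate([1, 1, 0, 1], a, b))

    Because the posterior mean always exists, the estimate is defined for every response pattern, including all-correct and all-incorrect patterns, which is one of the advantages the abstract lists.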

  3. Extracting volatility signal using maximum a posteriori estimation

    NASA Astrophysics Data System (ADS)

    Neto, David

    2016-11-01

    This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in its realizations and, consequently, heavy-tailed marginal distributions for the log-returns. We consider two routes to choose the regularization and we compare our MAP estimate to a realized volatility measure for three exchange rates.
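
    A plausible reading of this recipe, not the paper's exact formulation: with Gaussian log-returns given a latent log-volatility h, a double exponential (Laplace) prior on the increments of h yields an L1 penalty whose minimizer admits sharp jumps. The weight lam below stands in for the regularization the paper selects by two different routes.

        # Hedged sketch: MAP estimate of a log-volatility path h assuming
        # returns r_t ~ N(0, exp(h_t)) and a Laplace random-walk prior on h.
        import numpy as np
        from scipy.optimize import minimize

        def neg_log_posterior(h, r, lam):
            # Gaussian negative log-likelihood of the returns given h
            nll = 0.5 * np.sum(h + r**2 * np.exp(-h))
            # Laplace (double exponential) increment prior -> L1 jump penalty
            return nll + lam * np.sum(np.abs(np.diff(h)))

        rng = np.random.default_rng(0)
        h_true = np.concatenate([np.full(200, -2.0), np.full(200, 0.5)])  # one sharp jump
        r = rng.normal(0.0, np.exp(h_true / 2))
        # a generic smooth optimizer is used for brevity; dedicated solvers
        # handle the non-smooth L1 term more reliably
        res = minimize(neg_log_posterior, np.zeros_like(r), args=(r, 5.0),
                       method="L-BFGS-B")
        h_map = res.x          # the denoised volatility signal is exp(h_map / 2)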

  4. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…

  6. A posteriori pointwise error estimates for the boundary element method

    SciTech Connect

    Paulino, G.H.; Gray, L.J.; Zarikian, V.

    1995-01-01

    This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two-dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.

  7. Implicit a posteriori error estimates for the Maxwell equations

    NASA Astrophysics Data System (ADS)

    Izsak, Ferenc; Harutyunyan, Davit; van der Vegt, Jaap J. W.

    2008-09-01

    An implicit a posteriori error estimation technique is presented and analyzed for the numerical solution of the time-harmonic Maxwell equations using Nedelec edge elements. For this purpose we define a weak formulation for the error on each element and provide an efficient and accurate numerical solution technique to solve the error equations locally. We investigate the well-posedness of the error equations and also consider the related eigenvalue problem for cubic elements. Numerical results for both smooth and non-smooth problems, including a problem with reentrant corners, show that an accurate prediction is obtained for the local error, and in particular the error distribution, which provides essential information to control an adaptation process. The error estimation technique is also compared with existing methods and provides significantly sharper estimates for a number of reported test cases.

  8. Cost functions to estimate a posteriori probabilities in multiclass problems.

    PubMed

    Cid-Sueiro, J; Arribas, J I; Urbán-Muñoz, S; Figueiras-Vidal, A R

    1999-01-01

    The problem of designing cost functions to estimate a posteriori probabilities in multiclass problems is addressed in this paper. We establish necessary and sufficient conditions that these costs must satisfy in one-class one-output networks whose outputs are consistent with probability laws. We focus our attention on a particular subset of the corresponding cost functions, those satisfying two properties of practical interest: symmetry and separability (well-known cost functions, such as the quadratic cost or the cross entropy, are particular cases in this subset). Finally, we present a universal stochastic gradient learning rule for single-layer networks, in the sense of minimizing a general version of these cost functions for a wide family of nonlinear activation functions.
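
    A minimal illustration of the underlying principle, relying on the standard result that minimizing cross entropy with a softmax output drives the outputs toward the a posteriori class probabilities (the paper's general cost family and learning rule are not reproduced here): a single-layer network trained by stochastic gradient on synthetic Gaussian classes.

        # Sketch: stochastic gradient descent on cross entropy for a
        # single-layer softmax network; outputs approximate P(class | x).
        import numpy as np

        rng = np.random.default_rng(5)
        n, k = 3000, 3
        means = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])  # synthetic classes
        labels = rng.integers(0, k, n)
        X = rng.normal(size=(n, 2)) + means[labels]
        W = np.zeros((k, 3))                       # weights, last column is the bias
        eye = np.eye(k)
        for epoch in range(20):
            for i in rng.permutation(n):
                z = np.append(X[i], 1.0)
                s = W @ z
                p = np.exp(s - s.max()); p /= p.sum()        # softmax output
                W += 0.05 * np.outer(eye[labels[i]] - p, z)  # stochastic gradient rule
        # softmax(W @ [x1, x2, 1]) now approximates the class posterior at x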

  9. Object detection and amplitude estimation based on maximum a posteriori reconstructions

    SciTech Connect

    Hanson, K.M.

    1990-01-01

    We report on the behavior of the linear maximum a posteriori (MAP) tomographic reconstruction technique as a function of the assumed rms noise $\sigma_n$ in the measurements, which specifies the degree of confidence in the measurement data. The unconstrained MAP reconstructions are evaluated on the basis of the performance of two related tasks: object detection and amplitude estimation. It is found that the detectability of medium-sized discs remains constant up to relatively large $\sigma_n$ before slowly diminishing. However, the amplitudes of the discs estimated from the MAP reconstructions increasingly deviate from their actual values as $\sigma_n$ increases.
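
    The role of the assumed rms noise can be illustrated with a generic linear (Gaussian-prior) MAP reconstruction; the matrices below are illustrative stand-ins rather than the study's tomographic system. As $\sigma_n$ grows, the prior dominates and the recovered amplitude of a disc-like feature shrinks, mirroring the amplitude bias the abstract reports.

        # Sketch: closed-form linear MAP reconstruction
        # x_map = argmin ||A x - y||^2 / sigma_n^2 + x' R x
        import numpy as np

        def linear_map(A, y, sigma_n, R):
            H = A.T @ A / sigma_n**2 + R          # posterior precision
            return np.linalg.solve(H, A.T @ y / sigma_n**2)

        rng = np.random.default_rng(1)
        A = rng.normal(size=(120, 50))             # illustrative measurement matrix
        x_true = np.zeros(50); x_true[20:30] = 1.0 # a disc-like feature
        y = A @ x_true + rng.normal(0.0, 0.5, size=120)
        R = np.eye(50)                             # simple Gaussian prior precision
        for s in (0.5, 2.0, 8.0):                  # increasing assumed rms noise
            x = linear_map(A, y, s, R)
            print(s, round(float(x[20:30].mean()), 3))  # amplitude shrinks as s grows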

  10. An Anisotropic A posteriori Error Estimator for CFD

    NASA Astrophysics Data System (ADS)

    Feijóo, Raúl A.; Padra, Claudio; Quintana, Fernando

    In this article, a robust anisotropic adaptive algorithm is presented, to solve compressible-flow equations using a stabilized CFD solver and automatic mesh generators. The association includes a mesh generator, a flow solver, and an a posteriori error-estimator code. The estimator was selected among several choices available (Almeida et al. (2000). Comput. Methods Appl. Mech. Engng, 182, 379-400; Borges et al. (1998). "Computational mechanics: new trends and applications". Proceedings of the 4th World Congress on Computational Mechanics, Bs.As., Argentina), giving a powerful computational tool. The main aim is to capture solution discontinuities, in this case shocks, using the least amount of computational resources, i.e. elements, compatible with a solution of good quality. This leads to high aspect-ratio elements (stretching). To achieve this, a directional error estimator was specifically selected. The numerical results show good behavior of the error estimator, resulting in strongly adapted meshes in few steps, typically three or four iterations, enough to capture shocks using a moderate and well-distributed number of elements.

  11. Three-dimensional super-resolution structured illumination microscopy with maximum a posteriori probability image estimation.

    PubMed

    Lukeš, Tomáš; Křížek, Pavel; Švindrych, Zdeněk; Benda, Jakub; Ovesný, Martin; Fliegel, Karel; Klíma, Miloš; Hagen, Guy M

    2014-12-01

    We introduce and demonstrate a new high performance image reconstruction method for super-resolution structured illumination microscopy based on maximum a posteriori probability estimation (MAP-SIM). Imaging performance is demonstrated on a variety of fluorescent samples of different thickness, labeling density and noise levels. The method provides good suppression of out of focus light, improves spatial resolution, and allows reconstruction of both 2D and 3D images of cells even in the case of weak signals. The method can be used to process both optical sectioning and super-resolution structured illumination microscopy data to create high quality super-resolution images.

  12. Maximum a posteriori probability estimation for localizing damage using ultrasonic guided waves

    NASA Astrophysics Data System (ADS)

    Flynn, Eric B.; Todd, Michael D.; Wilcox, Paul D.; Drinkwater, Bruce W.; Croxford, Anthony J.

    2011-04-01

    Presented is an approach to damage localization for guided wave structural health monitoring (GWSHM) in plate-like structures. In this mode of SHM, transducers excite and sense guided waves in order to detect and characterize the presence of damage. The premise of the presented localization approach is simple: use as the estimated damage location the point on the structure with the maximum a posteriori probability (MAP) of being the location of damage (i.e., the most probable location given a set of sensor measurements). This is accomplished by constructing a minimally-informed statistical model of the GWSHM process. Parameters of the model which are unknown, such as scattered wave amplitude, are assigned non-informative Bayesian prior distributions and averaged out of the a posteriori probability calculation. Using an ensemble of measurements from an instrumented plate with stiffening stringers, the performance of the MAP estimate is compared to that of what were found to be the two most effective previously reported algorithms. The MAP estimate proved superior in nearly all test cases and was particularly effective in localizing damage using very sparse arrays of as few as three transducers.

  13. Phylogenetic assignment of Mycobacterium tuberculosis Beijing clinical isolates in Japan by maximum a posteriori estimation.

    PubMed

    Seto, Junji; Wada, Takayuki; Iwamoto, Tomotada; Tamaru, Aki; Maeda, Shinji; Yamamoto, Kaori; Hase, Atsushi; Murakami, Koichi; Maeda, Eriko; Oishi, Akira; Migita, Yuji; Yamamoto, Taro; Ahiko, Tadayuki

    2015-10-01

    Intra-species phylogeny of Mycobacterium tuberculosis has been regarded as a clue for estimating its potential to develop drug resistance and various epidemiological tendencies. Genotypic characterization of variable number of tandem repeats (VNTR), a standard tool to ascertain transmission routes, has been improving as a public health effort, but determining phylogenetic information from those efforts alone is difficult. We present a platform based on maximum a posteriori (MAP) estimation to estimate phylogenetic information for M. tuberculosis clinical isolates from individual profiles of VNTR types. This study used 1245 M. tuberculosis clinical isolates obtained throughout Japan for construction of an MAP estimation formula. Two MAP estimation formulae, one classifying the Beijing family against other lineages and one classifying five Beijing sublineages (ST11/26, STK, ST3, and ST25/19 belonging to the ancient Beijing subfamily and the modern Beijing subfamily), were created based on 24-loci VNTR (24Beijing-VNTR) profiles and phylogenetic information of the isolates. Recursive estimation based on the formulae showed high concordance with the authentic phylogeny of the isolates by multi-locus sequence typing (MLST). The formulae might further support phylogenetic estimation of the Beijing lineage M. tuberculosis from the VNTR genotype with various geographic backgrounds. These results suggest that MAP estimation can function as a reliable probabilistic process to append phylogenetic information to VNTR genotypes of M. tuberculosis independently, which might improve the usage of genotyping data for the control, understanding, prevention, and treatment of TB.
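
    The study's estimation formulae are not reproduced in the record, but the generic principle of MAP assignment from a discrete VNTR profile can be sketched as a naive-Bayes-style classifier: choose the lineage maximizing the prior times the per-locus likelihood. All probabilities and lineage names below are illustrative.

        # Hedged sketch: MAP class assignment from a discrete genotype profile,
        # lineage_hat = argmax_c P(c) * prod_i P(x_i | c).
        import numpy as np

        def map_assign(profile, priors, cond_tables):
            # profile: repeat counts, one per locus
            # priors: dict lineage -> prior probability
            # cond_tables: dict lineage -> list of dicts P(count | lineage) per locus
            best, best_lp = None, -np.inf
            for c, prior in priors.items():
                lp = np.log(prior)
                for locus, count in enumerate(profile):
                    lp += np.log(cond_tables[c][locus].get(count, 1e-6))  # smoothing
                if lp > best_lp:
                    best, best_lp = c, lp
            return best

        priors = {"ancient_Beijing": 0.4, "modern_Beijing": 0.6}   # illustrative
        cond_tables = {
            "ancient_Beijing": [{3: 0.7, 4: 0.3}, {2: 0.5, 5: 0.5}],
            "modern_Beijing":  [{3: 0.2, 4: 0.8}, {2: 0.9, 5: 0.1}],
        }
        print(map_assign([4, 2], priors, cond_tables))  # -> modern_Beijing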

  14. A-Posteriori Error Estimation for Hyperbolic Conservation Laws with Constraint

    NASA Technical Reports Server (NTRS)

    Barth, Timothy

    2004-01-01

    This lecture considers a-posteriori error estimates for the numerical solution of conservation laws with time invariant constraints such as those arising in magnetohydrodynamics (MHD) and gravitational physics. Using standard duality arguments, a-posteriori error estimates for the discontinuous Galerkin finite element method are then presented for MHD with solenoidal constraint. From these estimates, a procedure for adaptive discretization is outlined. A taxonomy of Green's functions for the linearized MHD operator is given which characterizes the domain of dependence for pointwise errors. The extension to other constrained systems such as the Einstein equations of gravitational physics are then considered. Finally, future directions and open problems are discussed.

  16. Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.; Thompson, Vanessa M.

    2011-01-01

    A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…

  17. Weighted Maximum-a-Posteriori Estimation in Tests Composed of Dichotomous and Polytomous Items

    ERIC Educational Resources Information Center

    Sun, Shan-Shan; Tao, Jian; Chang, Hua-Hua; Shi, Ning-Zhong

    2012-01-01

    For mixed-type tests composed of dichotomous and polytomous items, polytomous items often yield more information than dichotomous items. To reflect the difference between the two types of items and to improve the precision of ability estimation, an adaptive weighted maximum-a-posteriori (WMAP) estimation is proposed. To evaluate the performance of…

  19. Explicit a posteriori error estimates for eigenvalue analysis of heterogeneous elastic structures.

    SciTech Connect

    Walsh, Timothy Francis; Reese, Garth M.; Hetmaniuk, Ulrich L.

    2005-07-01

    An a posteriori error estimator is developed for the eigenvalue analysis of three-dimensional heterogeneous elastic structures. It constitutes an extension of a well-known explicit estimator to heterogeneous structures. We prove that our estimates are independent of the variations in material properties and independent of the polynomial degree of finite elements. Finally, we study numerically the effectivity of this estimator on several model problems.

  20. Maximum A Posteriori Bayesian Estimation of Chromatographic Parameters by Limited Number of Experiments.

    PubMed

    Wiczling, Paweł; Kubik, Łukasz; Kaliszan, Roman

    2015-07-21

    The aim of this work was to develop a nonlinear mixed-effect chromatographic model able to describe the retention times of weak acids and bases in all possible combinations of organic modifier content and mobile-phase pH. Further, we aimed to identify the influence of basic covariates, like lipophilicity (log P), dissociation constant (pKa), and polar surface area (PSA), on the intercompound variability of chromatographic parameters. Lastly, we aimed to propose an optimal limited experimental design for the estimation of the parameters through a maximum a posteriori (MAP) Bayesian method, to facilitate the method development process. The data set comprised retention times for two series of organic modifier content collected at different pH for a large series of acids and bases. The obtained typical parameters and their distribution were subsequently used as priors to improve the estimation process from a reduced design with a variable number of preliminary experiments. The MAP Bayesian estimator was validated using two external-validation data sets. A common literature model was used to relate analyte retention time with mobile-phase pH and organic modifier content. A set of QSRR-based covariate relationships was established. It turned out that four preliminary experiments and prior information that includes analyte pKa, log P, acid/base type, and PSA are sufficient to accurately predict analyte retention under virtually all combined changes of pH and organic modifier content. The MAP Bayesian estimator of all important chromatographic parameters controlling retention in pH/organic modifier gradients was developed. It can be used to improve parameter estimation from a limited experimental design.

  1. Image Bit-depth Enhancement via Maximum-A-Posteriori Estimation of AC Signal.

    PubMed

    Wan, Pengfei; Cheung, Gene; Florencio, Dinei; Zhang, Cha; Au, Oscar C

    2016-04-13

    When images at low bit-depth are rendered at high bit-depth displays, missing least significant bits need to be estimated. We study the image bit-depth enhancement problem: estimating an original image from its quantized version from a minimum mean squared error (MMSE) perspective. We first argue that a graph-signal smoothness prior, one defined on a graph embedding the image structure, is an appropriate prior for the bit-depth enhancement problem. We next show that solving for the MMSE solution directly is in general too computationally expensive to be practical. We then propose an efficient approximation strategy. Specifically, we first estimate the AC component of the desired signal in a maximum a posteriori (MAP) formulation, efficiently computed via convex programming. We then compute the DC component with an MMSE criterion in closed form given the computed AC component. Experiments show that our proposed two-step approach has improved performance over conventional bit-depth enhancement schemes in both objective and subjective comparisons.
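
    The first, MAP step can be sketched in one dimension: find a signal that is smooth in the graph-Laplacian sense while remaining inside each sample's quantization bin. The paper works on an image-structure graph via convex programming; the path-graph projected-gradient version below is only a hedged illustration of the same smoothness-prior-plus-bin-constraint principle.

        # Sketch: minimize x' L x (path-graph Laplacian) subject to each
        # sample staying inside its quantization bin, by projected gradient.
        import numpy as np

        def map_dequantize(q, bin_width, n_iter=500, step=0.1):
            lo, hi = q - bin_width / 2, q + bin_width / 2  # each sample's bin
            x = q.astype(float).copy()
            for _ in range(n_iter):
                d = np.diff(x)
                g = np.zeros_like(x)            # gradient of x' L x on a path graph
                g[:-1] -= 2 * d
                g[1:] += 2 * d
                x = np.clip(x - step * g, lo, hi)   # project back into the bins
            return x

        q = np.round(np.sin(np.linspace(0.0, 3.0, 50)) * 4) / 4  # coarsely quantized signal
        x_hat = map_dequantize(q, bin_width=0.25)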

  2. A posteriori error estimation for hp -adaptivity for fourth-order equations

    NASA Astrophysics Data System (ADS)

    Moore, Peter K.; Rangelova, Marina

    2010-04-01

    A posteriori error estimates developed to drive hp-adaptivity for second-order reaction-diffusion equations are extended to fourth-order equations. A C^1 hierarchical finite element basis is constructed from Hermite-Lobatto polynomials. A priori estimates of the error in several norms for both the interpolant and finite element solution are derived. In the latter case this requires a generalization of the well-known Aubin-Nitsche technique to time-dependent fourth-order equations. We show that the finite element solution and corresponding Hermite-Lobatto interpolant are asymptotically equivalent. A posteriori error estimators based on this equivalence for solutions at two orders are presented. Both are shown to be asymptotically exact on grids of uniform order. These estimators can be used to control various adaptive strategies. Computational results for linear steady-state and time-dependent equations corroborate the theory and demonstrate the effectiveness of the estimators in adaptive settings.

  3. Superconvergence and recovery type a posteriori error estimation for hybrid stress finite element method

    NASA Astrophysics Data System (ADS)

    Bai, YanHong; Wu, YongKe; Xie, XiaoPing

    2016-09-01

    Superconvergence and a posteriori error estimators of recovery type are analyzed for the 4-node hybrid stress quadrilateral finite element method proposed by Pian and Sumihara (Int. J. Numer. Meth. Engrg., 1984, 20: 1685-1695) for linear elasticity problems. Uniform superconvergence of order $O(h^{1+\min\{\alpha,1\}})$ with respect to the Lamé constant $\lambda$ is established for both the recovered gradients of the displacement vector and the stress tensor under a mesh assumption, where $\alpha>0$ is a parameter characterizing the distortion of meshes from parallelograms to quadrilaterals. A posteriori error estimators based on the recovered quantities are shown to be asymptotically exact. Numerical experiments confirm the theoretical results.

  4. Evaluation of a Maximum A-Posteriori Slope Estimator for a Hartmann Wavefront Sensor

    DTIC Science & Technology

    1997-12-01

    Evaluation of a Maximum A-Posteriori Slope Estimator for a Hartmann Wavefront Sensor. Thesis presented to the Faculty of the School of Engineering of the Air Force Institute of Technology, Air University, in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering, by Troy B. Van… …other post-processing techniques such as inverse filtering or blind deconvolution [1, 20]. Significant research has been done by the Air Force Maui…

  5. A Posteriori Error Estimation for Discontinuous Galerkin Approximations of Hyperbolic Systems

    NASA Technical Reports Server (NTRS)

    Larson, Mats G.; Barth, Timothy J.

    1999-01-01

    This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.

  6. A posteriori error estimation of H1 mixed finite element method for the Benjamin-Bona-Mahony equation

    NASA Astrophysics Data System (ADS)

    Shafie, Sabarina; Tran, Thanh

    2017-08-01

    Error estimates for an H1 mixed finite element method applied to the Benjamin-Bona-Mahony equation are considered. The problem is reformulated into a system of first order partial differential equations, which allows an approximation of the unknown function and its derivative. Local parabolic error estimates are introduced to approximate the true errors from the computed solutions; these are the so-called a posteriori error estimates. Numerical experiments show that the a posteriori error estimates converge to the true errors of the problem.

  7. A-posteriori error estimation for the finite point method with applications to compressible flow

    NASA Astrophysics Data System (ADS)

    Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio

    2017-08-01

    An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.

  8. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine the variance difference between the maximum likelihood and expected a posteriori estimation methods viewed in terms of the number of items in an aptitude test. The variance reflects the accuracy achieved by both the maximum likelihood and Bayesian estimation methods. The test consists of three subtests, each with 40 multiple-choice…

  9. A maximum a posteriori probability time-delay estimation for seismic signals

    NASA Astrophysics Data System (ADS)

    Carrier, A.; Got, J.-L.

    2014-09-01

    Cross-correlation and cross-spectral time delays often exhibit strong outliers due to ambiguities or cycle jumps in the correlation function. Their number increases when signal-to-noise ratio, signal similarity or spectral bandwidth decreases. Such outliers heavily determine the time-delay probability density function and the results of further computations (e.g. double-difference location and tomography) using these time delays. In the present research we express cross-correlation as a function of the squared difference between signal amplitudes and show that the two are closely related. We use this difference as a cost function whose minimum is reached when the signals are aligned. Ambiguities may be removed in this function by using a priori information. We propose using the traveltime difference as a priori time-delay information. By modelling the probability density function of the traveltime difference by a Cauchy distribution and the probability density function of the data (differences of seismic signal amplitudes) by a Laplace distribution we are able to find explicitly the time-delay a posteriori probability density function. The location of the maximum of this a posteriori probability density function is the maximum a posteriori time-delay estimation for earthquake signals. Using this estimation to calculate time delays for earthquakes on the south flank of Kilauea statistically improved the cross-correlation time-delay estimation for these data and resulted in successful double-difference relocation for an increased number of earthquakes. This robust time-delay estimation improves the spatiotemporal resolution of seismicity rates in the south flank of Kilauea.
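
    Under the distributional assumptions stated in the abstract (a Laplace data term on amplitude differences and a Cauchy prior on the traveltime difference), the MAP time delay can be sketched as a one-dimensional search over candidate integer lags; the scale parameters b and gamma below are illustrative.

        # Sketch: MAP time-delay estimate with a Laplace data term and a
        # Cauchy prior centred on the predicted traveltime difference.
        import numpy as np

        def map_delay(x, y, tau_prior, max_lag, b=1.0, gamma=2.0):
            best_tau, best_cost = 0, np.inf
            for tau in range(-max_lag, max_lag + 1):
                if tau >= 0:
                    d = x[tau:] - y[:len(y) - tau]
                else:
                    d = x[:tau] - y[-tau:]
                nll = np.sum(np.abs(d)) / b                       # Laplace data term
                nlp = np.log(1 + ((tau - tau_prior) / gamma)**2)  # Cauchy prior term
                if nll + nlp < best_cost:
                    best_tau, best_cost = tau, nll + nlp
            return best_tau

        rng = np.random.default_rng(2)
        s = rng.normal(size=300)
        x = np.roll(s, 5) + 0.1 * rng.normal(size=300)
        y = s + 0.1 * rng.normal(size=300)
        print(map_delay(x, y, tau_prior=4, max_lag=20))  # close to the true lag of 5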

  10. A posteriori error estimates for the Johnson–Nédélec FEM–BEM coupling

    PubMed Central

    Aurada, M.; Feischl, M.; Karkulik, M.; Praetorius, D.

    2012-01-01

    Only very recently, Sayas [The validity of Johnson–Nédélec's BEM-FEM coupling on polygonal interfaces. SIAM J Numer Anal 2009;47:3451–63] proved that the Johnson–Nédélec one-equation approach from [On the coupling of boundary integral and finite element methods. Math Comput 1980;35:1063–79] provides a stable coupling of finite element method (FEM) and boundary element method (BEM). In our work, we now adapt the analytical results for different a posteriori error estimates developed for the symmetric FEM–BEM coupling to the Johnson–Nédélec coupling. More precisely, we analyze the weighted-residual error estimator, the two-level error estimator, and different versions of (h−h/2)-based error estimators. In numerical experiments, we use these estimators to steer h-adaptive algorithms, and compare the effectivity of the different approaches. PMID:22347772

  11. Local a posteriori estimates for pointwise gradient errors in finite element methods for elliptic problems

    NASA Astrophysics Data System (ADS)

    Demlow, Alan

    2007-03-01

    We prove local a posteriori error estimates for pointwise gradient errors in finite element methods for a second-order linear elliptic model problem. First we split the local gradient error into a computable local residual term and a weaker global norm of the finite element error (the "pollution term"). Using a mesh-dependent weight, the residual term is bounded in a sharply localized fashion. In specific situations the pollution term may also be bounded by computable residual estimators. On nonconvex polygonal and polyhedral domains in two and three space dimensions, we may choose estimators for the pollution term which do not employ specific knowledge of corner singularities and which are valid on domains with cracks. The finite element mesh is only required to be simplicial and shape-regular, so that highly graded and unstructured meshes are allowed.

  12. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    NASA Astrophysics Data System (ADS)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
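
    The approximate-solve step mentioned above can be sketched generically: a few symmetric Gauss-Seidel sweeps (one forward and one backward pass per sweep) applied to a global error problem A e = r; the matrix and right-hand side below are illustrative stand-ins.

        # Sketch: symmetric Gauss-Seidel sweeps as an inexpensive approximate
        # solver for A e = r, instead of an exact global solve.
        import numpy as np

        def sym_gauss_seidel(A, r, n_sweeps=3):
            n = len(r)
            e = np.zeros(n)
            for _ in range(n_sweeps):
                for order in (range(n), reversed(range(n))):   # forward, then backward
                    for i in order:
                        e[i] = (r[i] - A[i] @ e + A[i, i] * e[i]) / A[i, i]
            return e

        A = (np.diag(np.full(50, 4.0))
             + np.diag(np.full(49, -1.0), 1)
             + np.diag(np.full(49, -1.0), -1))
        r = np.ones(50)
        e_approx = sym_gauss_seidel(A, r)   # a few sweeps often suffice in practice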

  13. Noise-bias and polarization-artifact corrected optical coherence tomography by maximum a-posteriori intensity estimation

    PubMed Central

    Chan, Aaron C.; Hong, Young-Joo; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2017-01-01

    We propose using maximum a-posteriori (MAP) estimation to improve the image signal-to-noise ratio (SNR) in polarization diversity (PD) optical coherence tomography. PD-detection removes polarization artifacts, which are common when imaging highly birefringent tissue or when using a flexible fiber catheter. However, dividing the probe power to two polarization detection channels inevitably reduces the SNR. Applying MAP estimation to PD-OCT allows for the removal of polarization artifacts while maintaining and improving image SNR. The effectiveness of the MAP-PD method is evaluated by comparing it with MAP-non-PD, intensity averaged PD, and intensity averaged non-PD methods. Evaluation was conducted in vivo with human eyes. The MAP-PD method is found to be optimal, demonstrating high SNR and artifact suppression, especially for highly birefringent tissue, such as the peripapillary sclera. The MAP-PD based attenuation coefficient image also shows better differentiation of attenuation levels than non-MAP attenuation images. PMID:28736656

  14. A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes

    DOE PAGES

    Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.

    2017-02-05

    Implicit–Explicit (IMEX) schemes are widely used for time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.

  15. Finite Element A Posteriori Error Estimation for Heat Conduction. Degree awarded by George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Lang, Christapher G.; Bey, Kim S. (Technical Monitor)

    2002-01-01

    This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.

  16. Noise stochastic corrected maximum a posteriori estimator for birefringence imaging using polarization-sensitive optical coherence tomography

    PubMed Central

    Kasaragod, Deepa; Makita, Shuichi; Hong, Young-Joo; Yasuno, Yoshiaki

    2017-01-01

    This paper presents a noise-stochastic corrected maximum a posteriori estimator for birefringence imaging using Jones matrix optical coherence tomography. The estimator described in this paper is based on the relationship between the probability distribution functions of the measured birefringence and the effective signal to noise ratio (ESNR) as well as the true birefringence and the true ESNR. The Monte Carlo method is used to numerically describe this relationship and adaptive 2D kernel density estimation provides the likelihood for a posteriori estimation of the true birefringence. The new estimator, which incorporates a stochastic model of the ESNR, shows improved estimation in comparison to the old estimator; both are based on the Jones matrix noise model. A comparison with the mean estimator is also made. Numerical simulation validates the superiority of the new estimator. The superior performance of the new estimator was also shown by in vivo measurement of the optic nerve head. PMID:28270974

  17. Edge-based a posteriori error estimators for generation of d-dimensional quasi-optimal meshes

    SciTech Connect

    Lipnikov, Konstantin; Agouzal, Abdellatif; Vassilevski, Yuri

    2009-01-01

    We present a new method of metric recovery for minimization of $L_p$-norms of the interpolation error or its gradient. The method uses edge-based a posteriori error estimates. The method is analyzed for conformal simplicial meshes in spaces of arbitrary dimension d.

  18. Item exposure control for multidimensional computer adaptive testing under maximum likelihood and expected a posteriori estimation.

    PubMed

    Huebner, Alan R; Wang, Chun; Quinlan, Kari; Seubert, Lauren

    2016-12-01

    Item bank stratification has been shown to be an effective method for combating item overexposure in both uni- and multidimensional computer adaptive testing. However, item bank stratification cannot guarantee that items will not be overexposed, that is, exposed at a rate exceeding some prespecified threshold. In this article, we propose enhancing stratification for multidimensional computer adaptive tests by combining it with the item eligibility method, a technique for controlling the maximum exposure rate in computerized tests. The performance of the method was examined via a simulation study and compared to existing methods of item selection and exposure control. Also, for the first time, maximum likelihood (MLE) and expected a posteriori (EAP) estimation of examinee ability were compared side by side in a multidimensional computer adaptive test. The simulation suggested that the proposed method is effective in suppressing the maximum item exposure rate with very little loss of measurement accuracy and precision. As compared to MLE, EAP generates smaller mean squared errors of the ability estimates in all simulation conditions.

  19. A Kernel Density Estimator-Based Maximum A Posteriori Image Reconstruction Method for Dynamic Emission Tomography Imaging.

    PubMed

    Ihsani, Alvin; Farncombe, Troy H

    2016-05-01

    A novel maximum a posteriori (MAP) method for dynamic single-photon emission computed tomography image reconstruction is proposed. The prior probability is modeled as a multivariate kernel density estimator (KDE), effectively modeling the prior probability non-parametrically, with the aim of reducing the effects of artifacts arising from inconsistencies in projection measurements in low-count regimes where projections are dominated by noise. The proposed prior spatially and temporally limits the variation of time-activity functions (TAFs) and attracts similar TAFs together. The similarity between TAFs is determined by the spatial and range scaling parameters of the KDE-like prior. The resulting iterative image reconstruction method is evaluated using two simulated phantoms, namely the extended cardiac-torso (XCAT) heart phantom and a simulated Mini-Deluxe Phantom. The phantoms were chosen to observe the effects of the proposed prior on the TAFs based on the vicinity and abutments of regions with different activities. Our results show the effectiveness of the proposed iterative reconstruction method, especially in low-count regimes, which provides better uniformity within each region of activity, significant reduction of spatiotemporal variations caused by noise, and sharper separation between different regions of activity than expectation maximization and an MAP method employing a more traditional Gibbs prior.

  1. A functional-type a posteriori error estimate of approximate solutions for Reissner-Mindlin plates and its implementation

    NASA Astrophysics Data System (ADS)

    Frolov, Maxim; Chistiakova, Olga

    2017-06-01

    This paper is devoted to a numerical justification of a recent a posteriori error estimate for Reissner-Mindlin plates. This majorant provides reliable control of the accuracy of any conforming approximate solution of the problem, including solutions obtained with commercial software for mechanical engineering. The estimate is developed on the basis of the functional approach and is applicable to several types of boundary conditions. To verify the approach, numerical examples with mesh refinements are provided.

  2. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J.D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  5. Maximum a posteriori estimation of crystallographic phases in X-ray diffraction tomography

    PubMed Central

    Gürsoy, Doğa; Biçer, Tekin; Almer, Jonathan D.; Kettimuthu, Raj; Stock, Stuart R.; De Carlo, Francesco

    2015-01-01

    A maximum a posteriori approach is proposed for X-ray diffraction tomography for reconstructing the three-dimensional spatial distribution of crystallographic phases and orientations of polycrystalline materials. The approach maximizes the a posteriori density, which includes a Poisson log-likelihood and an a priori term that reinforces expected solution properties such as smoothness or local continuity. The reconstruction method is validated with experimental data acquired from a section of the spinous process of a porcine vertebra collected at the 1-ID-C beamline of the Advanced Photon Source, at Argonne National Laboratory. The reconstruction results show significant improvement in the reduction of aliasing and streaking artefacts, and improved robustness to noise and undersampling compared to conventional analytical inversion approaches. The approach has the potential to reduce data acquisition times and significantly improve beamtime efficiency. PMID:25939627
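
    The objective the abstract describes can be sketched generically: maximize a Poisson log-likelihood of the measured counts under forward projection plus a smoothness prior. The projected-gradient version below, with an illustrative projection matrix and penalty weight beta, is a hedged sketch rather than the paper's solver.

        # Sketch: maximize sum(y*log(Ax) - Ax) - beta*sum((x_{i+1}-x_i)^2),
        # keeping x positive, by projected gradient ascent.
        import numpy as np

        def map_reconstruct(A, y, beta=0.1, step=1e-3, n_iter=2000):
            m, n = A.shape
            x = np.ones(n)
            for _ in range(n_iter):
                Ax = np.maximum(A @ x, 1e-9)
                g_like = A.T @ (y / Ax - 1.0)   # Poisson log-likelihood gradient
                d = np.diff(x)
                g_pen = np.zeros(n)             # gradient of the smoothness penalty
                g_pen[:-1] -= 2 * beta * d
                g_pen[1:] += 2 * beta * d
                # small fixed step for brevity; real solvers adapt the step size
                x = np.maximum(x + step * (g_like - g_pen), 1e-9)
            return x

        rng = np.random.default_rng(4)
        A = rng.uniform(0.0, 1.0, size=(80, 40))   # illustrative projection matrix
        x_true = 10.0 * np.exp(-0.5 * ((np.arange(40) - 20) / 5.0)**2)
        y = rng.poisson(A @ x_true).astype(float)
        x_map = map_reconstruct(A, y)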

  6. A model selection algorithm for a posteriori probability estimation with neural networks.

    PubMed

    Arribas, Juan Ignacio; Cid-Sueiro, Jesús

    2005-07-01

    This paper proposes a novel algorithm to jointly determine the structure and the parameters of an a posteriori probability model based on neural networks (NNs). It makes use of well-known ideas of pruning, splitting, and merging neural components and takes advantage of the probabilistic interpretation of these components. The algorithm, called a posteriori probability model selection (PPMS), is applied to an NN architecture called the generalized softmax perceptron (GSP), whose outputs can be understood as probabilities, although the results shown can be extended to more general network architectures. Learning rules are derived from the application of the expectation-maximization algorithm to the GSP-PPMS structure. Simulation results show the advantages of the proposed algorithm with respect to other schemes.

  7. Maximum a Posteriori (MAP) Estimates for Hyperspectral Image Enhancement

    DTIC Science & Technology

    2004-09-01

    …N high-resolution multispectral pixels (Figure 2.3: Multispectral Data Cube)… point processor with an attached vector processor unit (VPU). Each… The spectral response matrix s (if provided at all) should be provided in the spectral domain. If PCA analysis is to be performed (i.e. pca_mode > 0), s will be… subject to $\sum_{j=1}^{P} s_j = 1$ and $s_j \ge 0$ for $1 \le j \le P$, in order to determine a normalized spectral response vector s.

  8. An Analysis of a Finite Element Method for Convection-Diffusion Problems. Part II. A Posteriori Error Estimates and Adaptivity.

    DTIC Science & Technology

    1983-03-01

    An Analysis of a Finite Element Method for Convection-Diffusion Problems. Part II: A Posteriori Error Estimates and Adaptivity, by W. G. Szymczak and I. Babuška, March 1983. Supported under ONR contract N00014-77-0623.

  9. Blind deconvolution of images with model discrepancies using maximum a posteriori estimation with heavy-tailed priors

    NASA Astrophysics Data System (ADS)

    Kotera, Jan; Šroubek, Filip

    2015-02-01

    Single image blind deconvolution aims to estimate the unknown blur from a single observed blurred image and recover the original sharp image. Such a task is severely ill-posed, and typical approaches involve heuristic or otherwise unexplained steps to arrive at an acceptable solution. We show that a straightforward maximum a posteriori estimation incorporating sparse priors and a mechanism to deal with boundary artifacts, combined with an efficient numerical method, can produce results which compete with or outperform much more complicated state-of-the-art methods. Our method is naturally extended to deal with overexposure in low-light photography, where the linear blurring model is violated.

  10. An a-posteriori error estimator for linear elastic fracture mechanics using the stable generalized/extended finite element method

    NASA Astrophysics Data System (ADS)

    Lins, R. M.; Ferreira, M. D. C.; Proença, S. P. B.; Duarte, C. A.

    2015-12-01

    In this study, a recovery-based a-posteriori error estimator originally proposed for the Corrected XFEM is investigated in the framework of the stable generalized FEM (SGFEM). Both Heaviside and branch functions are adopted to enrich the approximations in the SGFEM. Some necessary adjustments to adapt the expressions defining the enhanced stresses in the original error estimator are discussed in the SGFEM framework. Relevant aspects such as effectivity indexes, error distribution, convergence rates and accuracy of the recovered stresses are used in order to highlight the main findings and the effectiveness of the error estimator. Two benchmark problems of 2-D fracture mechanics are selected to assess the robustness of the error estimator investigated here. The main findings of this investigation are that the SGFEM shows higher accuracy than G/XFEM and a reduced sensitivity to blending element issues, and that the error estimator can accurately capture these features of both methods.

  11. A Maximum a Posteriori Estimation Framework for Robust High Dynamic Range Video Synthesis

    NASA Astrophysics Data System (ADS)

    Li, Yuelong; Lee, Chul; Monga, Vishal

    2017-03-01

    High dynamic range (HDR) image synthesis from multiple low dynamic range (LDR) exposures continues to be actively researched. The extension to HDR video synthesis is a topic of significant current interest due to potential cost benefits. For HDR video, a stiff practical challenge presents itself in the form of accurate correspondence estimation of objects between video frames. In particular, loss of data resulting from poor exposures and varying intensity make conventional optical flow methods highly inaccurate. We avoid exact correspondence estimation by proposing a statistical approach via maximum a posteriori (MAP) estimation, and under appropriate statistical assumptions and choice of priors and models, we reduce it to an optimization problem of solving for the foreground and background of the target frame. We obtain the background through rank minimization and estimate the foreground via a novel multiscale adaptive kernel regression technique, which implicitly captures local structure and temporal motion by solving an unconstrained optimization problem. Extensive experimental results on both real and synthetic datasets demonstrate that our algorithm is more capable of delivering high-quality HDR videos than current state-of-the-art methods, under both subjective and objective assessments. Furthermore, a thorough complexity analysis reveals that our algorithm achieves better complexity-performance trade-off than conventional methods.

  12. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.

  13. Efficient computation of the maximum a posteriori path and parameter estimation in integrate-and-fire and more general state-space models.

    PubMed

    Koyama, Shinsuke; Paninski, Liam

    2010-08-01

    A number of important data analysis problems in neuroscience can be solved using state-space models. In this article, we describe fast methods for computing the exact maximum a posteriori (MAP) path of the hidden state variable in these models, given spike train observations. If the state transition density is log-concave and the observation model satisfies certain standard assumptions, then the optimization problem is strictly concave and can be solved rapidly with Newton-Raphson methods, because the Hessian of the log-likelihood is block tridiagonal. We can further exploit this block-tridiagonal structure to develop efficient parameter estimation methods for these models. We describe applications of this approach to neural decoding problems, with a focus on the classic integrate-and-fire model as a key example.
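
    A scalar-state instance of this idea can be sketched as follows: for an AR(1) Gaussian latent path with Poisson spike-count observations, the Hessian of the negative log posterior is tridiagonal, so each Newton step costs O(T) via a banded solve. The model and parameters below are illustrative, and the undamped Newton iteration is kept plain for brevity.

        # Sketch: Newton-Raphson MAP path for x_t = a*x_{t-1} + N(0, q),
        # with observations y_t ~ Poisson(exp(x_t)); tridiagonal Hessian.
        import numpy as np
        from scipy.linalg import solve_banded

        def map_path(y, a=0.95, q=0.1, n_iter=15):
            T = len(y)
            # tridiagonal prior precision P of the Gaussian AR(1) path
            diag = np.full(T, (1 + a**2) / q); diag[-1] = 1 / q
            off = np.full(T - 1, -a / q)
            x = np.zeros(T)
            for _ in range(n_iter):
                lam = np.exp(x)
                Px = diag * x
                Px[:-1] += off * x[1:]
                Px[1:] += off * x[:-1]
                grad = lam - y + Px                    # gradient of neg log posterior
                ab = np.zeros((3, T))                  # banded Hessian: diag(lam) + P
                ab[0, 1:] = off
                ab[1] = lam + diag
                ab[2, :-1] = off
                x = x - solve_banded((1, 1), ab, grad) # Newton step, O(T) per iteration
            return x

        rng = np.random.default_rng(3)
        x_true = np.cumsum(rng.normal(0.0, 0.3, 200)) * 0.2
        y = rng.poisson(np.exp(x_true))
        x_map = map_path(y)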

  14. Reliable and efficient a posteriori error estimation for adaptive IGA boundary element methods for weakly-singular integral equations

    NASA Astrophysics Data System (ADS)

    Feischl, Michael; Gantner, Gregor; Praetorius, Dirk

    2015-06-01

    We consider the Galerkin boundary element method (BEM) for weakly-singular integral equations of the first kind in 2D. We analyze some residual-type a posteriori error estimator which provides a lower as well as an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the frame of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence.

  15. A posteriori error estimates for continuous/discontinuous Galerkin approximations of the Kirchhoff-Love buckling problem

    NASA Astrophysics Data System (ADS)

    Hansbo, Peter; Larson, Mats G.

    2015-11-01

    Second order buckling theory involves a one-way coupled problem where the stress tensor from a plane stress problem appears in an eigenvalue problem for the fourth order Kirchhoff plate. In this paper we present an a posteriori error estimate for the critical buckling load and mode corresponding to the smallest eigenvalue and associated eigenvector. A particular feature of the analysis is that we take into account the effect of approximate computation of the stress tensor, and we also provide an error indicator for the plane stress problem. The Kirchhoff plate is discretized using a continuous/discontinuous finite element method based on standard continuous piecewise polynomial finite element spaces. The same finite element spaces can be used to solve the plane stress problem.

  16. Maximum a posteriori decoder for digital communications

    NASA Technical Reports Server (NTRS)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  17. ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve

    PubMed Central

    Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

    2014-01-01

    In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725
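
    For readers unfamiliar with the recovery idea behind ZZ-type estimators, a minimal 1D FEM version is sketched below; the papers above transfer this idea to BEM, so this is an illustrative simplification, not their method:

        import numpy as np

        def zz_indicators(x, u):
            """ZZ-type error indicators for a P1 function u on 1D nodes x.

            The piecewise-constant gradient is recovered into a nodal field
            by patchwise averaging; the indicator on each element is the L2
            norm of (recovered - raw) gradient, as in Zienkiewicz-Zhu."""
            h = np.diff(x)
            grad = np.diff(u) / h                   # one value per element
            recov = np.empty(len(x))                # nodal recovered gradient
            recov[1:-1] = (h[:-1] * grad[:-1] + h[1:] * grad[1:]) / (h[:-1] + h[1:])
            recov[0], recov[-1] = grad[0], grad[-1]
            diff_l = recov[:-1] - grad              # mismatch at left nodes
            diff_r = recov[1:] - grad               # mismatch at right nodes
            # Elementwise L2 norm via the trapezoidal rule.
            return np.sqrt(h * (diff_l ** 2 + diff_r ** 2) / 2.0)

        x = np.linspace(0.0, 1.0, 11)
        u = np.sin(np.pi * x)                       # interpolant of a smooth u
        print(zz_indicators(x, u).round(4))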

  18. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and A Posteriori Error Estimation Methods

    SciTech Connect

    Ginting, Victor

    2014-03-15

    It was demonstrated that a posteriori analyses in general, and those using adjoint methods in particular, can accurately and efficiently compute numerical error estimates and sensitivities for critical Quantities of Interest (QoIs) that depend on a large number of parameters. Activities included: analysis and implementation of several time integration techniques for solving systems of ODEs as typically obtained from spatial discretization of PDE systems; multirate integration methods for ordinary differential equations; formulation and analysis of an iterative multi-discretization Galerkin finite element method for multi-scale reaction-diffusion equations; investigation of an inexpensive postprocessing technique to estimate the error of finite element solutions of second-order quasi-linear elliptic problems measured in some global metrics; investigation of an application of residual-based a posteriori error estimates to the symmetric interior penalty discontinuous Galerkin method for solving a class of second order quasi-linear elliptic problems; a posteriori analysis of explicit time integrations for systems of linear ordinary differential equations; derivation of accurate a posteriori goal-oriented error estimates for a user-defined quantity of interest for two classes of first and second order IMEX schemes for advection-diffusion-reaction problems; postprocessing of finite element solutions; and a Bayesian framework for uncertainty quantification of porous media flows.

  19. SU-E-J-170: Beyond Single-Cycle 4DCT: Maximum a Posteriori (MAP) Reconstruction-Based Binning-Free Multicycle 4DCT for Lung Radiotherapy

    SciTech Connect

    Cheung, Y; Sawant, A; Hinkle, J; Joshi, S

    2014-06-01

    Purpose: Thoracic motion changes from cycle to cycle and day to day. Conventional 4DCT does not capture these cycle-to-cycle variations. We present initial results of a novel 4DCT reconstruction technique based on maximum a posteriori (MAP) reconstruction. The technique uses the same acquisition process (and therefore dose) as a conventional 4DCT in order to create a high spatiotemporal resolution cine CT that captures several breathing cycles. Methods: Raw 4DCT data were acquired from a lung cancer patient. The continuous 4DCT was reconstructed using the MAP algorithm, which uses the raw, time-stamped CT data to reconstruct images while simultaneously estimating deformation in the subject's anatomy. This framework incorporates physical effects such as hysteresis and is robust to detector noise and irregular breathing patterns. The 4D image is described in terms of a 3D reference image defined at one end of the hysteresis loop, and two deformation vector fields (DVFs) corresponding to inhale motion and exhale motion respectively. The MAP method uses all of the CT projection data and maximizes the log posterior in order to iteratively estimate a time-variant deformation vector field that describes the entire moving and deforming volume. Results: The MAP 4DCT yielded CT-quality images for multiple cycles corresponding to the entire duration of CT acquisition, unlike the conventional 4DCT, which only yielded a single cycle. Variations such as amplitude and frequency changes and baseline shifts were clearly captured by the MAP 4DCT. Conclusion: We have developed a novel, binning-free, parameterized 4DCT reconstruction technique that can capture cycle-to-cycle variations of respiratory motion. This technique provides an invaluable tool for respiratory motion management research. This work was supported by funding from the National Institutes of Health and VisionRT Ltd. Amit Sawant receives research funding from Varian Medical Systems, Vision RT and Elekta.

  20. Quantifying the impact of material-model error on macroscale quantities-of-interest using multiscale a posteriori error-estimation techniques

    SciTech Connect

    Brown, Judith A.; Bishop, Joseph E.

    2016-07-20

    An a posteriori error-estimation framework is introduced to quantify and reduce modeling errors resulting from approximating complex mesoscale material behavior with a simpler macroscale model. Such errors may be prevalent when modeling welds and additively manufactured structures, where spatial variations and material textures may be present in the microstructure. We consider a case where a <100> fiber texture develops in the longitudinal scanning direction of a weld. Transversely isotropic elastic properties are obtained through homogenization of a microstructural model with this texture and are considered the reference weld properties within the error-estimation framework. Conversely, isotropic elastic properties are considered approximate weld properties since they contain no representation of texture. Errors introduced by using isotropic material properties to represent a weld are assessed through a quantified error bound in the elastic regime. Lastly, an adaptive error reduction scheme is used to determine the optimal spatial variation of the isotropic weld properties to reduce the error bound.

  2. A posteriori error estimators for the discrete ordinates approximation of the one-speed neutron transport equation

    SciTech Connect

    O'Brien, S.; Azmy, Y. Y.

    2013-07-01

    When calculating numerical solutions of the neutron transport equation it is important to have a measure of the accuracy of the solution. As the true solution is generally not known, a suitable estimation of the error must be made. The steady state transport equation possesses discretization errors in all its independent variables: angle, energy and space. In this work only spatial discretization errors are considered. An exact transport solution, in which the degree of regularity of the exact flux across the singular characteristic is controlled, is manufactured to determine the numerical solution's true discretization error. This solution is then projected onto a Legendre polynomial space in order to form an exact solution on the same basis space as the Discontinuous Galerkin Finite Element Method (DGFEM) numerical solution, enabling computation of the true error. Over a series of test problems the true error is compared to the error estimated by the Ragusa and Wang (RW), residual source (LER) and cell discontinuity (JD) estimators. The validity and accuracy of the considered estimators are primarily assessed by considering the effectivity index and the global L2 norm of the error. In general, RW excels at approximating the true error distribution but usually under-estimates its magnitude; the LER estimator emulates the true error distribution but frequently over-estimates the magnitude of the true error; the JD estimator poorly captures the true error distribution and generally under-estimates the error about singular characteristics but over-estimates it elsewhere. (authors)
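
    The effectivity index used in such assessments is the ratio of estimated to true error norms, with unity being ideal. A toy sketch with invented numbers, not values from the paper:

        import numpy as np

        def effectivity_index(eta, err):
            """Global effectivity: estimated over true error norm."""
            return np.linalg.norm(eta) / np.linalg.norm(err)

        true_err = np.array([0.02, 0.05, 0.01, 0.08])
        estimators = {"RW": [0.015, 0.040, 0.010, 0.060],   # under-estimates
                      "LER": [0.030, 0.070, 0.020, 0.100],  # over-estimates
                      "JD": [0.050, 0.010, 0.040, 0.020]}   # poor distribution
        for name, eta in estimators.items():
            print(name, round(effectivity_index(np.array(eta), true_err), 3))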

  3. An asymptotically exact, pointwise, a posteriori error estimator for the finite element method with super convergence properties

    SciTech Connect

    Hugger, J.

    1995-12-31

    When the finite element solution of a variational problem possesses certain superconvergence properties, it is possible, very inexpensively, to obtain a correction term providing an additional order of approximation of the solution. The correction can be used for error estimation locally or globally in whatever norm is preferred, or, if no error estimation is wanted, it can be used for postprocessing of the solution to improve its quality. In this paper such a correction term is described for the general case of n-dimensional, linear or nonlinear problems. Computational evidence of the performance in one space dimension is given, with special attention to the effects of the appearance of singularities and zeros of derivatives in the exact solution.

  4. Quantifying the impact of material-model error on macroscale quantities-of-interest using multiscale a posteriori error-estimation techniques

    DOE PAGES

    Brown, Judith A.; Bishop, Joseph E.

    2016-07-20

    An a posteriori error-estimation framework is introduced to quantify and reduce modeling errors resulting from approximating complex mesoscale material behavior with a simpler macroscale model. Such errors may be prevalent when modeling welds and additively manufactured structures, where spatial variations and material textures may be present in the microstructure. We consider a case where a <100> fiber texture develops in the longitudinal scanning direction of a weld. Transversely isotropic elastic properties are obtained through homogenization of a microstructural model with this texture and are considered the reference weld properties within the error-estimation framework. Conversely, isotropic elastic properties are considered approximate weld properties since they contain no representation of texture. Errors introduced by using isotropic material properties to represent a weld are assessed through a quantified error bound in the elastic regime. Lastly, an adaptive error reduction scheme is used to determine the optimal spatial variation of the isotropic weld properties to reduce the error bound.

  5. Reliable and efficient a posteriori error estimation for adaptive IGA boundary element methods for weakly-singular integral equations

    PubMed Central

    Feischl, Michael; Gantner, Gregor; Praetorius, Dirk

    2015-01-01

    We consider the Galerkin boundary element method (BEM) for weakly-singular integral equations of the first kind in 2D. We analyze a residual-type a posteriori error estimator which provides a lower as well as an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the frame of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence. PMID:26085698

  6. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    PubMed

    Curtis, Tyler E; Roeder, Ryan K

    2017-07-06

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in
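
    To make the decomposition step itself concrete: with a calibrated basis matrix in hand, a per-voxel MAP estimate under simplifying Gaussian noise and prior assumptions reduces to regularized least squares. The sketch below is a hedged stand-in for the paper's estimator; all matrices and variances are invented:

        import numpy as np

        def map_decompose(y, M, prior_cov, noise_var):
            """MAP material decomposition per voxel under Gaussian noise and
            prior: minimize ||y - M c||^2 / noise_var + c^T prior_cov^-1 c.
            y: measurement per energy bin; M: bins x materials basis matrix
            (the calibrated quantity studied in the paper)."""
            A = M.T @ M / noise_var + np.linalg.inv(prior_cov)
            b = M.T @ y / noise_var
            return np.linalg.solve(A, b)

        # Toy 5-bin, 3-material (gadolinium, calcium, water) example.
        rng = np.random.default_rng(2)
        M = rng.uniform(0.1, 1.0, size=(5, 3))     # stand-in calibrated basis
        c_true = np.array([0.4, 0.2, 0.9])
        y = M @ c_true + 0.01 * rng.standard_normal(5)
        print(map_decompose(y, M, prior_cov=np.eye(3) * 10.0, noise_var=1e-4))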

  7. A comparative estimation of the errors in the sunspot coordinate catalog compiled at Cuba and the methods of their a posteriori decrease.

    NASA Astrophysics Data System (ADS)

    Nagovitsyn, Yu. A.; Nikonov, O. V.; Perez Doval, J.

    1992-06-01

    A comparison of the accuracy of the Cuba, Greenwich and Debrecen catalogs of sunspot coordinates has been made. A new method for a posteriori decrease of coordinate errors is given. The following conclusions have been made: 1. The accuracy of absolute heliographic coordinates is 0.26 of a heliographic degree for the Cuban catalog and 0.32 for the Greenwich catalog. 2. Reduction to smoothed coordinate values improves the accuracy by a factor of 1.5. 3. Reduction, within the frame of the proposed technique, to "pseudorelative" coordinates enables an improvement of the initial accuracy of sunspot coordinate measurement by 5 - 7 times.

  8. A Posteriori Analysis for Hydrodynamic Simulations Using Adjoint Methodologies

    SciTech Connect

    Woodward, C S; Estep, D; Sandelin, J; Wang, H

    2009-02-26

    This report contains results of analysis done during an FY08 feasibility study investigating the use of adjoint methodologies for a posteriori error estimation for hydrodynamics simulations. We developed an approach to adjoint analysis for these systems through the use of modified equations and viscosity solutions. Targeting first the 1D Burgers equation, we include a verification of the adjoint operator for the modified equation for the Lax-Friedrichs scheme, then derivations of an a posteriori error analysis for a finite difference scheme and a discontinuous Galerkin scheme applied to this problem. We include some numerical results showing the use of the error estimate. Lastly, we develop a computable a posteriori error estimate for the MAC scheme applied to the stationary Navier-Stokes equations.
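
    The adjoint mechanism at the heart of such analyses is easiest to see for a linear algebraic system, where the error in a quantity of interest equals the adjoint-weighted residual exactly. A minimal sketch, not the report's Burgers or MAC machinery:

        import numpy as np

        def adjoint_qoi_error(A, f, g, u_h):
            """A posteriori error in the QoI J(u) = g.u for A u = f, given an
            approximate u_h: for linear problems J(u) - J(u_h) = w.r exactly,
            with adjoint A^T w = g and residual r = f - A u_h."""
            w = np.linalg.solve(A.T, g)
            r = f - A @ u_h
            return w @ r

        rng = np.random.default_rng(3)
        A = rng.standard_normal((6, 6)) + 6 * np.eye(6)
        f, g = rng.standard_normal(6), rng.standard_normal(6)
        u = np.linalg.solve(A, f)
        u_h = u + 1e-3 * rng.standard_normal(6)   # perturbed "numerical" solution
        print(adjoint_qoi_error(A, f, g, u_h), g @ (u - u_h))   # the two agree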

  9. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and a-Posteriori Error Estimation Methods

    SciTech Connect

    Estep, Donald

    2015-11-30

    This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.

  10. Comparing Mapped Plot Estimators

    Treesearch

    Paul C. Van Deusen

    2006-01-01

    Two alternative derivations of estimators for mean and variance from mapped plots are compared by considering the models that support the estimators and by simulation. It turns out that both models lead to the same estimator for the mean but lead to very different variance estimators. The variance estimators based on the least valid model assumptions are shown to...

  11. Time Required to Compute A Posteriori Probabilities,

    DTIC Science & Technology

    The paper discusses the time required to compute a posteriori probabilities using Bayes’ Theorem. In a two-hypothesis example it is shown that, to... Bayes’ Theorem as the group operation. Winograd’s results concerning the lower bound on the time required to perform a group operation on a finite group using logical circuitry are therefore applicable. (Author)

  12. [ETHICAL PRINCIPLES AND A POSTERIORI JUSTIFICATIONS].

    PubMed

    Heintz, Monica

    2015-12-01

    It is difficult to conceive that the human being, while being the same everywhere, could be cared for in such different ways in other societies. Anthropologists acknowledge that the diversity of cultures implies a diversity of moral values, and thus that in a multicultural society the individual can draw upon different moral frames to justify the peculiarities of his/her demands for care. But how can we determine which moral frame catalyzes behaviour when all we can record are a posteriori justifications of actions? In most multicultural societies where several moral frames coexist, there is an implicit hierarchy between ethical systems, derived from a hierarchy of power, which falsifies these a posteriori justifications. Moreover, anthropologists often fail to acknowledge that individual behaviour does not always reflect individual values, but is more often the result of negotiations between the moral frames available in society and the individual's own desires and personal experience. This is certainly due to the difficulty of accounting for a dynamic and complex interplay of moral values that cannot be analysed as a system. The impact of individual experience on the way individuals give or receive care may also be only weakly linked to a moral system, even when this reference comes up explicitly in the a posteriori justifications.

  13. Comparison of minimum-norm maximum likelihood and maximum a posteriori wavefront reconstructions for large adaptive optics systems.

    PubMed

    Béchet, Clémentine; Tallon, Michel; Thiébaut, Eric

    2009-03-01

    The performance of various estimators for wavefront sensing applications such as adaptive optics (AO) is compared. Analytical expressions for the bias and variance terms in the mean squared error (MSE) are derived for the minimum-norm maximum likelihood (MNML) and the maximum a posteriori (MAP) reconstructors. The MAP estimator is analytically demonstrated to yield an optimal trade-off that reduces the MSE, hence leading to a better Strehl ratio. The implications for AO applications are quantified through simulations on 8-m- and 42-m-class telescopes. We show that the MAP estimator can achieve half the MSE of MNML methods. Large AO systems can thus benefit from the high quality of MAP reconstruction in O(n) operations, thanks to the fast fractal iterative method (FrIM) algorithm (Thiébaut and Tallon, submitted to J. Opt. Soc. Am. A).
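
    A small simulation in the spirit of this comparison, with an invented sensor matrix and a stand-in prior covariance (the study itself uses turbulence statistics and the FrIM solver):

        import numpy as np

        def mnml_reconstruct(A, y):
            """Minimum-norm maximum-likelihood: pseudo-inverse solution."""
            return np.linalg.pinv(A) @ y

        def map_reconstruct(A, y, C_w, noise_var):
            """MAP: regularized normal equations with wavefront prior C_w."""
            A_t = A.T / noise_var
            return np.linalg.solve(A_t @ A + np.linalg.inv(C_w), A_t @ y)

        rng = np.random.default_rng(4)
        n_meas, n_modes = 40, 20
        A = rng.standard_normal((n_meas, n_modes))          # stand-in sensor
        C_w = np.diag(1.0 / (1.0 + np.arange(n_modes)) ** 2)  # decaying modes
        w_true = np.sqrt(np.diag(C_w)) * rng.standard_normal(n_modes)
        y = A @ w_true + 0.5 * rng.standard_normal(n_meas)
        for rec in (mnml_reconstruct(A, y), map_reconstruct(A, y, C_w, 0.25)):
            print(np.mean((rec - w_true) ** 2))             # MAP MSE is lower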

  14. Relative Precision of Ability Estimation in Polytomous CAT: A Comparison under the Generalized Partial Credit Model and Graded Response Model.

    ERIC Educational Resources Information Center

    Wang, Shudong; Wang, Tianyou

    The purpose of this Monte Carlo study was to evaluate the relative accuracy of T. Warm's weighted likelihood estimate (WLE) compared to maximum likelihood estimate (MLE), expected a posteriori estimate (EAP), and maximum a posteriori estimate (MAP), using the generalized partial credit model (GPCM) and graded response model (GRM) under a variety…

  15. A Comparison of Maximum Likelihood and Bayesian Estimation for Polychoric Correlation Using Monte Carlo Simulation

    ERIC Educational Resources Information Center

    Choi, Jaehwa; Kim, Sunhee; Chen, Jinsong; Dannels, Sharon

    2011-01-01

    The purpose of this study is to compare the maximum likelihood (ML) and Bayesian estimation methods for polychoric correlation (PCC) under diverse conditions using a Monte Carlo simulation. Two new Bayesian estimates, maximum a posteriori (MAP) and expected a posteriori (EAP), are compared to ML, the classic solution, to estimate PCC. Different…

  17. Maximum a posteriori video super-resolution using a new multichannel image prior.

    PubMed

    Belekos, Stefanos P; Galatsanos, Nikolaos P; Katsaggelos, Aggelos K

    2010-06-01

    Super-resolution (SR) is the term used to define the process of estimating a high-resolution (HR) image or a set of HR images from a set of low-resolution (LR) observations. In this paper we propose a class of SR algorithms based on the maximum a posteriori (MAP) framework. These algorithms utilize a new multichannel image prior model, along with the state-of-the-art single channel image prior and observation models. A hierarchical (two-level) Gaussian nonstationary version of the multichannel prior is also defined and utilized within the same framework. Numerical experiments comparing the proposed algorithms among themselves and with other algorithms in the literature, demonstrate the advantages of the adopted multichannel approach.

  18. Maximum a posteriori CMB lensing reconstruction

    NASA Astrophysics Data System (ADS)

    Carron, Julien; Lewis, Antony

    2017-09-01

    Gravitational lensing of the cosmic microwave background (CMB) is a valuable cosmological signal that correlates to tracers of large-scale structure and acts as an important source of confusion for primordial B-mode polarization. State-of-the-art lensing reconstruction analyses use quadratic estimators, which are easily applicable to data. However, these estimators are known to be suboptimal, in particular for polarization, and large improvements are expected to be possible for high signal-to-noise polarization experiments. We develop a method and numerical code, lensit, that efficiently finds the most probable lensing map, introducing no significant approximations to the lensed CMB likelihood, and applicable to beamed and masked data with inhomogeneous noise. It works by iteratively reconstructing the primordial unlensed CMB using a deflection estimate and its inverse, and removing residual lensing from these maps with quadratic estimator techniques. Roughly linear computational cost is maintained due to fast convergence of iterative searches, combined with the local nature of lensing. The method achieves the maximal improvement in signal to noise expected from analytical considerations on the unmasked parts of the sky. Delensing with this optimal map leads to forecast tensor-to-scalar ratio parameter errors improved by a factor ≃2 compared to the quadratic estimator in a CMB stage IV configuration.

  19. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    NASA Astrophysics Data System (ADS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-11-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators derived from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the PAPA. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with the performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms the EM-TV in all aspects, including convergence speed, noise in the reconstructed images and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality.

  20. Mapped Plot Patch Size Estimates

    Treesearch

    Paul C. Van Deusen

    2005-01-01

    This paper demonstrates that the mapped plot design is relatively easy to analyze and describes existing formulas for mean and variance estimators. New methods are developed for using mapped plots to estimate average patch size of condition classes. The patch size estimators require assumptions about the shape of the condition class, limiting their utility. They may...

  1. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction

    PubMed Central

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators derived from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with the performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms the EM-TV in all aspects, including convergence speed, noise in the reconstructed images and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835

  2. Simultaneous maximum a posteriori longitudinal PET image reconstruction

    NASA Astrophysics Data System (ADS)

    Ellis, Sam; Reader, Andrew J.

    2017-09-01

    Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.
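
    A toy sketch of the one-step-late MAP coupling idea in 1D, with an invented system matrix and count data; the real method operates on full 3D PET data and system models:

        import numpy as np

        def osl_map_slr(A, y1, y2, beta=0.05, iters=50):
            """One-step-late MAP-EM, jointly reconstructing two longitudinal
            images with a quadratic penalty on their voxel-wise differences
            (the coupling idea of MAP-SLR, in a toy 1D geometry)."""
            n = A.shape[1]
            x1, x2 = np.ones(n), np.ones(n)
            sens = A.T @ np.ones(A.shape[0])         # sensitivity image
            for _ in range(iters):
                # Penalty gradients evaluated at the current estimates (OSL).
                g1, g2 = beta * (x1 - x2), beta * (x2 - x1)
                x1 = x1 / (sens + g1) * (A.T @ (y1 / (A @ x1 + 1e-12)))
                x2 = x2 / (sens + g2) * (A.T @ (y2 / (A @ x2 + 1e-12)))
            return x1, x2

        rng = np.random.default_rng(5)
        A = rng.uniform(size=(60, 30))               # stand-in system matrix
        truth = np.ones(30); truth[10:15] = 4.0      # background plus lesion
        y1 = rng.poisson(A @ truth)
        y2 = rng.poisson(A @ truth)
        x1, x2 = osl_map_slr(A, y1, y2)
        print(np.round(x1[8:17], 2))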

  3. Rigorous a posteriori assessment of accuracy in EMG decomposition.

    PubMed

    McGill, Kevin C; Marateb, Hamid R

    2011-02-01

    If electromyography (EMG) decomposition is to be a useful tool for scientific investigation, it is essential to know that the results are accurate. Because of background noise, waveform variability, motor-unit action potential (MUAP) indistinguishability, and perplexing superpositions, accuracy assessment is not straightforward. This paper presents a rigorous statistical method for assessing decomposition accuracy based only on evidence from the signal itself. The method uses statistical decision theory in a Bayesian framework to integrate all the shape- and firing-time-related information in the signal to compute an objective a posteriori measure of confidence in the accuracy of each discharge in the decomposition. The assessment is based on the estimated statistical properties of the MUAPs and noise and takes into account the relative likelihood of every other possible decomposition. The method was tested on 3 pairs of real EMG signals containing 4-7 active MUAP trains per signal that had been decomposed by a human expert. It rated 97% of the identified MUAP discharges as accurate to within ± 0.5 ms with a confidence level of 99%, and detected six decomposition errors. Cross-checking between signal pairs verified all but two of these assertions. These results demonstrate that the approach is reliable and practical for real EMG signals.

  4. Segmenting pectoralis muscle on digital mammograms by a Markov random field-maximum a posteriori model

    PubMed Central

    Ge, Mei; Mainprize, James G.; Mawdsley, Gordon E.; Yaffe, Martin J.

    2014-01-01

    Accurate and automatic segmentation of the pectoralis muscle is essential in many breast image processing procedures, for example, in the computation of volumetric breast density from digital mammograms. Its segmentation is a difficult task due to the heterogeneity of the region, neighborhood complexities, and shape variability. The segmentation is achieved by pixel classification through a Markov random field (MRF) image model. Using the image intensity feature as observable data and local spatial information as a priori knowledge, the posterior distribution is estimated in a stochastic process. With a variable potential component in the energy function, and by the maximum a posteriori (MAP) estimate of the labeling image, given the image intensity feature which is assumed to follow a Gaussian distribution, we achieve convergence properties in an appropriate sense by Metropolis sampling the posterior distribution of the selected energy function. By proposing an adjustable spatial constraint, the MRF-MAP model is able to embody the shape requirement and provide the required flexibility for the model parameter fitting process. We demonstrate that accurate and robust segmentation can be achieved for the curving-triangle-shaped pectoralis muscle in medio-lateral-oblique (MLO) view and the semielliptic-shaped muscle in cranio-caudal (CC) view digital mammograms. The applicable mammograms can be either "For Processing" or "For Presentation" image formats. The algorithm was developed using 56 MLO-view and 79 CC-view FFDM "For Processing" images, and quantitatively evaluated against a random selection of 122 MLO-view and 173 CC-view FFDM images of both presentation intent types. PMID:26158068
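
    A stripped-down illustration of MAP labeling under an MRF prior, using iterated conditional modes on a two-class Potts model as a deterministic stand-in for the paper's Metropolis-sampled, adjustable-constraint model; all parameters below are invented:

        import numpy as np

        def icm_segment(img, mu, sigma, beta=1.0, iters=5):
            """MAP labeling of a 2-class MRF (Potts prior, Gaussian
            likelihood) by iterated conditional modes."""
            H, W = img.shape
            labels = (img > img.mean()).astype(int)  # initial guess
            for _ in range(iters):
                for i in range(H):
                    for j in range(W):
                        best, best_e = labels[i, j], np.inf
                        for k in (0, 1):
                            e = (img[i, j] - mu[k]) ** 2 / (2 * sigma[k] ** 2)
                            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                                ni, nj = i + di, j + dj
                                if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] != k:
                                    e += beta        # Potts disagreement cost
                            if e < best_e:
                                best, best_e = k, e
                        labels[i, j] = best
            return labels

        rng = np.random.default_rng(7)
        img = np.zeros((20, 20)); img[:, :8] = 1.0   # bright "muscle" region
        img += 0.3 * rng.standard_normal(img.shape)
        seg = icm_segment(img, mu=[0.0, 1.0], sigma=[0.3, 0.3])
        print(seg.sum())                             # roughly 8 * 20 pixels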

  5. Joint MAP bias estimation and data association: algorithms

    NASA Astrophysics Data System (ADS)

    Danford, Scott; Kragel, Bret; Poore, Aubrey

    2007-09-01

    The problem of joint maximum a posteriori (MAP) bias estimation and data association belongs to a class of nonconvex mixed integer nonlinear programming problems. These problems are difficult to solve due to both the combinatorial nature of the problem and the nonconvexity of the objective function or constraints. A specific problem that has received some attention in the tracking literature is the target object map problem, in which one tries to match a set of tracks as observed by two different sensors in the presence of biases, which are modeled here as a translation between the track states. The general framework also applies to problems in which the costs are general nonlinear functions of the biases. The goal of this paper is to present a class of algorithms based on the branch and bound framework and the "all-pairs" and k-best heuristics that provide a good initial upper bound for a branch and bound algorithm. These heuristics can be used as part of a real-time algorithm or as part of an "anytime algorithm" within the branch and bound framework. In addition, we consider both the A*-search and depth-first search procedures as well as several efficiency improvements such as gating. While this paper focuses on the algorithms, a second paper will focus on simulations.
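
    A simple alternating heuristic of the kind that can seed such a branch-and-bound search: fix the bias and solve the assignment, then fix the assignment and re-estimate the translation bias. This is an illustrative sketch, not the paper's algorithm:

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def associate_with_bias(s1, s2, iters=10):
            """Alternate between (i) the optimal assignment for a fixed
            translation bias and (ii) the least-squares bias for a fixed
            assignment (the MAP bias under a flat prior)."""
            bias = np.zeros(s1.shape[1])
            for _ in range(iters):
                cost = np.linalg.norm(s1[:, None, :] + bias - s2[None, :, :], axis=2)
                rows, cols = linear_sum_assignment(cost)
                bias = (s2[cols] - s1[rows]).mean(axis=0)
            return bias, list(zip(rows, cols))

        rng = np.random.default_rng(6)
        tracks = rng.uniform(0, 100, size=(8, 2))
        true_bias = np.array([3.0, -1.5])
        observed = rng.permutation(tracks + true_bias
                                   + 0.1 * rng.standard_normal((8, 2)))
        bias, match = associate_with_bias(tracks, observed)
        print(np.round(bias, 2))                     # close to the true bias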

  6. A posteriori correction for source decay in 3D bioluminescent source localization using multiview measured data

    NASA Astrophysics Data System (ADS)

    Sun, Li; Wang, Pu; Tian, Jie; Liu, Dan; Wang, Ruifang

    2009-02-01

    As a novel optical molecular imaging technique, bioluminescence tomography (BLT) can be used to monitor biological activities non-invasively at the cellular and molecular levels. In most known BLT studies, however, the time variation of the bioluminescent source is neglected. This gives rise to inconsistent views during the multiview continuous-wave measurement: the real measured data from different views effectively come from 'different' bioluminescent sources, which can introduce large errors into the reconstruction. In this paper, an a posteriori correction strategy for adaptive FEM-based reconstruction is proposed and developed. The method helps to improve source localization by accounting for the bioluminescent energy variance during the multiview measurement. In the method, the correction of boundary signals by means of the a posteriori correction strategy, which adopts the energy ratio of measured data in the overlapping domains between adjacent measurements as the correcting factor, eliminates the effect of the inconsistent views. Adaptive mesh refinement with a posteriori error estimation then helps to improve the precision and efficiency of BLT reconstruction. In addition, a priori permissible source region selection based on the surface measured data further reduces the ill-posedness of BLT and enhances numerical stability. Finally, three-dimensional numerical simulations using a heterogeneous phantom are performed. The numerically measured data are generated by the Monte Carlo (MC) method, which is regarded as the gold standard and avoids the inverse crime. The reconstructed result with correction shows more accuracy compared to that without correction.

  7. Estimating uncertainty in map intersections

    Treesearch

    Ronald E. McRoberts; Mark A. Hatfield; Susan J. Crocker

    2009-01-01

    Traditionally, natural resource managers have asked the question "How much?" and have received sample-based estimates of resource totals or means. Increasingly, however, the same managers are now asking the additional question "Where?" and are expecting spatially explicit answers in the form of maps. Recent development of natural resource databases...

  8. Estimating A Reference Standard Segmentation With Spatially Varying Performance Parameters: Local MAP STAPLE

    PubMed Central

    Commowick, Olivier; Akhondi-Asl, Alireza; Warfield, Simon K.

    2012-01-01

    We present a new algorithm, called local MAP STAPLE, to estimate from a set of multi-label segmentations both a reference standard segmentation and spatially varying performance parameters. It is based on a sliding window technique to estimate the segmentation and the segmentation performance parameters for each input segmentation. In order to allow for optimal fusion from the small amount of data in each local region, and to account for the possibility of labels not being observed in a local region of some (or all) input segmentations, we introduce prior probabilities for the local performance parameters through a new Maximum A Posteriori formulation of STAPLE. Further, we propose an expression to compute confidence intervals in the estimated local performance parameters. We carried out several experiments with local MAP STAPLE to characterize its performance and value for local segmentation evaluation. First, with simulated segmentations with known reference standard segmentation and spatially varying performance, we show that local MAP STAPLE performs better than both STAPLE and majority voting. Then we present evaluations with data sets from clinical applications. These experiments demonstrate that spatial adaptivity in segmentation performance is an important property to capture. We compared the local MAP STAPLE segmentations to STAPLE, and to previously published fusion techniques and demonstrate the superiority of local MAP STAPLE over other state-of-the-art algorithms. PMID:22562727

  9. Real-time maximum a-posteriori image reconstruction for fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Jabbar, Anwar A.; Dilipkumar, Shilpa; C K, Rasmi; Rajan, K.; Mondal, Partha P.

    2015-08-01

    Rapid reconstruction of multidimensional image is crucial for enabling real-time 3D fluorescence imaging. This becomes a key factor for imaging rapidly occurring events in the cellular environment. To facilitate real-time imaging, we have developed a graphics processing unit (GPU) based real-time maximum a-posteriori (MAP) image reconstruction system. The parallel processing capability of GPU device that consists of a large number of tiny processing cores and the adaptability of image reconstruction algorithm to parallel processing (that employ multiple independent computing modules called threads) results in high temporal resolution. Moreover, the proposed quadratic potential based MAP algorithm effectively deconvolves the images as well as suppresses the noise. The multi-node multi-threaded GPU and the Compute Unified Device Architecture (CUDA) efficiently execute the iterative image reconstruction algorithm that is ≈200-fold faster (for large dataset) when compared to existing CPU based systems.

  10. Anatomical labeling of the circle of willis using maximum a posteriori graph matching.

    PubMed

    Robben, David; Sunaert, Stefan; Thijs, Vincent; Wilms, Guy; Maes, Frederik; Suetens, Paul

    2013-01-01

    A new method for anatomically labeling the vasculature is presented and applied to the Circle of Willis. Our method converts the segmented vasculature into a graph that is matched with an annotated graph atlas in a maximum a posteriori (MAP) way. The MAP matching is formulated as a quadratic binary programming problem which can be solved efficiently. Unlike previous methods, our approach can handle non-tree-like vasculature and large topological differences. The method is evaluated in a leave-one-out test on MRA of 30 subjects, where it achieves a sensitivity of 93% and a specificity of 85% with an average error of 1.5 mm on matching bifurcations in the vascular graph.

  11. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    NASA Astrophysics Data System (ADS)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  12. A posteriori operation detection in evolving software models

    PubMed Central

    Langer, Philip; Wimmer, Manuel; Brosch, Petra; Herrmannsdörfer, Markus; Seidl, Martina; Wieland, Konrad; Kappel, Gerti

    2013-01-01

    As every software artifact, also software models are subject to continuous evolution. The operations applied between two successive versions of a model are crucial for understanding its evolution. Generic approaches for detecting operations a posteriori identify atomic operations, but neglect composite operations, such as refactorings, which leads to cluttered difference reports. To tackle this limitation, we present an orthogonal extension of existing atomic operation detection approaches for detecting also composite operations. Our approach searches for occurrences of composite operations within a set of detected atomic operations in a post-processing manner. One major benefit is the reuse of specifications available for executing composite operations also for detecting applications of them. We evaluate the accuracy of the approach in a real-world case study and investigate the scalability of our implementation in an experiment. PMID:23471366

  13. Effects of using a posteriori methods for the conservation of integral invariants. [for weather forecasting

    NASA Technical Reports Server (NTRS)

    Takacs, Lawrence L.

    1988-01-01

    The nature and effect of using a posteriori adjustments to nonconservative finite-difference schemes to enforce integral invariants of the corresponding analytic system are examined. The method of a posteriori integral constraint restoration is analyzed for the case of linear advection, and the harmonic response associated with the a posteriori adjustments is examined in detail. The conservative properties of the shallow water system are reviewed, and the constraint restoration algorithm applied to the shallow water equations are described. A comparison is made between forecasts obtained using implicit and a posteriori methods for the conservation of mass, energy, and potential enstrophy in the complete nonlinear shallow-water system.
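
    A minimal sketch of the restoration idea for the simplest invariant, the mass integral, after a generic nonconservative update step; the additive correction below is illustrative, and the paper's scheme also treats energy and potential enstrophy:

        import numpy as np

        def restore_mass(q, q_prev, dx):
            """A posteriori restoration of a conserved integral: after a
            (possibly nonconservative) update q_prev -> q, shift the field
            uniformly so the discrete mass integral is restored."""
            mass_err = np.sum(q - q_prev) * dx
            return q - mass_err / (dx * q.size)

        dx = 1.0 / 100
        x = np.arange(100) * dx
        q0 = np.exp(-100 * (x - 0.5) ** 2)
        q1 = np.roll(q0, 1) * 1.001                  # stand-in lossy advection step
        q1c = restore_mass(q1, q0, dx)
        print(np.sum(q0) * dx, np.sum(q1c) * dx)     # mass integral restored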

  15. An a posteriori-driven adaptive Mixed High-Order method with application to electrostatics

    NASA Astrophysics Data System (ADS)

    Di Pietro, Daniele A.; Specogna, Ruben

    2016-12-01

    In this work we propose an adaptive version of the recently introduced Mixed High-Order method and showcase its performance on a comprehensive set of academic and industrial problems in computational electromagnetism. The latter include, in particular, the numerical modeling of comb-drive and MEMS devices. Mesh adaptation is driven by newly derived, residual-based error estimators. The resulting method has several advantageous features: it supports fairly general meshes, it enables arbitrary approximation orders, and it has a moderate computational cost thanks to hybridization and static condensation. The a posteriori-driven mesh refinement is shown to significantly enhance performance on problems featuring singular solutions, allowing the high approximation order to be fully exploited.

  16. A posteriori compensation of the systematic error due to polynomial interpolation in digital image correlation

    NASA Astrophysics Data System (ADS)

    Baldi, Antonio; Bertolino, Filippo

    2013-10-01

    It is well known that displacement components estimated using digital image correlation are affected by a systematic error due to the polynomial interpolation required by the numerical algorithm. The magnitude of the bias depends on the characteristics of the speckle pattern (i.e., the frequency content of the image), on the fractional part of the displacements and on the type of polynomial used for intensity interpolation. In the literature, B-Spline polynomials are pointed out as introducing the smallest errors, whereas bilinear and cubic interpolants generally give the worst results. However, the small bias of B-Spline polynomials is partially counterbalanced by a somewhat larger execution time. We try to improve the accuracy of lower order polynomials by a posteriori correction of their results, so as to obtain a faster and more accurate analysis.

  17. Electron transport in magnetrons by a posteriori Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Costin, C.; Minea, T. M.; Popa, G.

    2014-02-01

    Electron transport across magnetic barriers is crucial in all magnetized plasmas. It governs not only the plasma parameters in the volume, but also the fluxes of charged particles towards the electrodes and walls. It is particularly important in high-power impulse magnetron sputtering (HiPIMS) reactors, influencing the quality of the deposited thin films, since this type of discharge is characterized by an increased ionization fraction of the sputtered material. Transport coefficients of electron clouds released both from the cathode and from several locations in the discharge volume are calculated for a HiPIMS discharge with pre-ionization operated in argon at 0.67 Pa and for very short pulses (a few µs) using the a posteriori Monte Carlo simulation technique. For this type of discharge, electron transport is characterized by strong temporal and spatial dependence. Both the drift velocity and the diffusion coefficient depend on the releasing position of the electron cloud. They exhibit minimum values at the centre of the race-track for the secondary electrons released from the cathode. The diffusion coefficient of the same electrons increases by a factor of 2 to 4 when the cathode voltage is doubled, in the first 1.5 µs of the pulse. These parameters are discussed with respect to empirical Bohm diffusion.

  18. Universal estimates for critical circle mappings.

    PubMed

    Khanin, K. M.

    1991-08-01

    A thermodynamic formalism is constructed for critical circle mappings. It is used to prove universal estimates for the asymptotic behavior of renormalized mappings. Certain applications of statistical mechanics to research on the ergodic properties of critical homeomorphisms of a circle are also discussed.

  19. 4D maximum a posteriori reconstruction in dynamic SPECT using a compartmental model-based prior.

    PubMed

    Kadrmas, D J; Gullberg, G T

    2001-05-01

    A 4D ordered-subsets maximum a posteriori (OSMAP) algorithm for dynamic SPECT is described which uses a temporal prior that constrains each voxel's behaviour in time to conform to a compartmental model. No a priori limitations on kinetic parameters are applied; rather, the parameter estimates evolve as the algorithm iterates to a solution. The estimated parameters and time-activity curves are used within the reconstruction algorithm to model changes in the activity distribution as the camera rotates, avoiding artefacts due to inconsistencies of data between projection views. This potentially allows for fewer, longer-duration scans to be used and may have implications for noise reduction. The algorithm was evaluated qualitatively using dynamic 99mTc-teboroxime SPECT scans in two patients, and quantitatively using a series of simulated phantom experiments. The OSMAP algorithm resulted in images with better myocardial uniformity and definition, gave time-activity curves with reduced noise variations, and provided wash-in parameter estimates with better accuracy and lower statistical uncertainty than those obtained from conventional ordered-subsets expectation-maximization (OSEM) processing followed by compartmental modelling. The new algorithm effectively removed the bias in k21 estimates due to inconsistent projections for sampling schedules as slow as 60 s per timeframe, but no improvement in wash-out parameter estimates was observed in this work. The proposed dynamic OSMAP algorithm provides a flexible framework which may benefit a variety of dynamic tomographic imaging applications.

  20. On Evaluation of Recharge Model Uncertainty: a Priori and a Posteriori

    SciTech Connect

    Ming Ye; Karl Pohlmann; Jenny Chapman; David Shafer

    2006-01-30

    Hydrologic environments are open and complex, rendering them prone to multiple interpretations and mathematical descriptions. Hydrologic analyses typically rely on a single conceptual-mathematical model, which ignores conceptual model uncertainty and may result in biased predictions and under-estimation of predictive uncertainty. This study assesses the conceptual model uncertainty residing in five recharge models developed to date by different researchers, based on different theories, for the Nevada and Death Valley area, California. A recently developed statistical method, Maximum Likelihood Bayesian Model Averaging (MLBMA), is utilized for this analysis. In a Bayesian framework, the recharge model uncertainty is assessed, a priori, using expert judgments collected through an expert elicitation in the form of prior probabilities of the models. The uncertainty is then evaluated, a posteriori, by updating the prior probabilities to estimate posterior model probabilities. The updating is conducted through maximum likelihood inverse modeling, by calibrating the Death Valley Regional Flow System (DVRFS) model corresponding to each recharge model against observations of head and flow. Calibration results of the DVRFS for the five recharge models are used to estimate three information criteria (AIC, BIC, and KIC) used to rank and discriminate among these models. Posterior probabilities of the five recharge models, evaluated using KIC, are used as weights to average head predictions, which gives the posterior mean and variance. The posterior quantities incorporate both parametric and conceptual model uncertainties.
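
    The MLBMA weighting step can be sketched compactly: posterior model probabilities follow from KIC differences and elicited priors, and predictions are averaged with a between-model variance term. The numbers below are illustrative only:

        import numpy as np

        def posterior_model_probs(kic, prior):
            """Posterior model probabilities from KIC values and priors:
            p(M_k | D) proportional to prior_k * exp(-0.5*(KIC_k - KIC_min))."""
            delta = np.asarray(kic) - np.min(kic)
            w = np.asarray(prior) * np.exp(-0.5 * delta)
            return w / w.sum()

        def averaged_prediction(means, variances, probs):
            """Model-averaged mean and total variance (within- plus
            between-model spread)."""
            mean = np.sum(probs * means)
            var = np.sum(probs * (variances + (means - mean) ** 2))
            return mean, var

        kic = [112.3, 110.1, 118.7, 111.5, 115.0]    # illustrative values only
        prior = [0.2] * 5                            # equal elicited priors
        p = posterior_model_probs(kic, prior)
        print(np.round(p, 3))
        print(averaged_prediction(np.array([5.1, 4.8, 5.6, 5.0, 5.3]),
                                  np.array([0.2, 0.3, 0.2, 0.25, 0.2]), p))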

  1. A Posteriori Finite Element Bounds for Sensitivity Derivatives of Partial-Differential-Equation Outputs. Revised

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Patera, Anthony T.; Peraire, Jaume

    1998-01-01

    We present a Neumann-subproblem a posteriori finite element procedure for the efficient and accurate calculation of rigorous, 'constant-free' upper and lower bounds for sensitivity derivatives of functionals of the solutions of partial differential equations. The design motivation for sensitivity derivative error control is discussed; the a posteriori finite element procedure is described; the asymptotic bounding properties and computational complexity of the method are summarized; and illustrative numerical results are presented.

  2. A MAP approach for joint motion estimation, segmentation, and super resolution.

    PubMed

    Shen, Huanfeng; Zhang, Liangpei; Huang, Bo; Li, Pingxiang

    2007-02-01

    Super resolution image reconstruction allows the recovery of a high-resolution (HR) image from several low-resolution images that are noisy, blurred, and downsampled. In this paper, we present a joint formulation for a complex super-resolution problem in which the scenes contain multiple independently moving objects. This formulation is built upon the maximum a posteriori (MAP) framework, which judiciously combines motion estimation, segmentation, and super resolution. A cyclic coordinate descent optimization procedure is used to solve the MAP formulation, in which the motion fields, segmentation fields, and HR image are estimated alternately, each given the other two. Specifically, gradient-based methods are employed to solve for the HR image and motion fields, and an iterated conditional modes method is used to obtain the segmentation fields. The proposed algorithm has been tested using a synthetic image sequence, the "Mobile and Calendar" sequence, and the original "Motorcycle and Car" sequence. The experimental results and error analyses verify the efficacy of this algorithm.

  3. MAP estimators for piecewise continuous inversion

    NASA Astrophysics Data System (ADS)

    Dunlop, M. M.; Stuart, A. M.

    2016-10-01

    We study the inverse problem of estimating a field u_a from data comprising a finite set of nonlinear functionals of u_a, subject to additive noise; we denote this observed data by y. Our interest is in the reconstruction of piecewise continuous fields u_a in which the discontinuity set is described by a finite number of geometric parameters a. Natural applications include groundwater flow and electrical impedance tomography. We take a Bayesian approach, placing a prior distribution on u_a and determining the conditional distribution on u_a given the data y. It is then natural to study maximum a posteriori (MAP) estimators. Recently (Dashti et al 2013 Inverse Problems 29 095017) it has been shown that MAP estimators can be characterised as minimisers of a generalised Onsager-Machlup functional, in the case where the prior measure is a Gaussian random field. We extend this theory to a more general class of prior distributions which allows for piecewise continuous fields. Specifically, the prior field is assumed to be piecewise Gaussian with random interfaces between the different Gaussians defined by a finite number of parameters. We also make connections with recent work on MAP estimators for linear problems and possibly non-Gaussian priors (Helin and Burger 2015 Inverse Problems 31 085009) which employs the notion of Fomin derivative. In showing applicability of our theory we focus on the groundwater flow and EIT models, though the theory holds more generally. Numerical experiments are implemented for the groundwater flow model, demonstrating the feasibility of determining MAP estimators for these piecewise continuous models, but also that the geometric formulation can lead to multiple nearby (local) MAP estimators. We relate these MAP estimators to the behaviour of output from MCMC samples of the posterior, obtained using a state-of-the-art function space Metropolis-Hastings method.
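
    For reference, in the Gaussian-prior case of Dashti et al (2013) cited above, the generalised Onsager-Machlup functional takes the following form (a sketch; Φ denotes the negative log-likelihood and E the Cameron-Martin space of the prior):

    ```latex
    % MAP estimators are characterised as minimisers of
    I(u) = \Phi(u; y) + \tfrac{1}{2}\,\|u\|_{E}^{2},
    % where \mu_0 = N(0, C) is the Gaussian prior and
    % (E, \|\cdot\|_E) is its Cameron-Martin space.
    ```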

  4. New method for tuning hyperparameter for the total variation norm in the maximum a posteriori ordered subsets expectation maximization reconstruction in SPECT myocardial perfusion imaging

    NASA Astrophysics Data System (ADS)

    Yang, Zhaoxia; Krol, Andrzej; Xu, Yuesheng; Feiglin, David H.

    2011-03-01

    In order to improve the tradeoff between noise and bias, and to improve uniformity of the reconstructed myocardium while preserving spatial resolution in parallel-beam collimator SPECT myocardial perfusion imaging (MPI), we investigated the most advantageous approach to providing a reliable estimate of the optimal value of the hyperparameter for the Total Variation (TV) norm in iterative Bayesian Maximum A Posteriori Ordered Subsets Expectation Maximization (MAP-OSEM) one-step-late tomographic reconstruction with a Gibbs prior. Our aim was to find the optimal hyperparameter value corresponding to the lowest bias at the lowest noise while maximizing uniformity and spatial resolution of the reconstructed myocardium in SPECT MPI. We found that the L-curve method, which is by definition a global technique, provides good guidance for selecting the optimal hyperparameter value. However, for a heterogeneous object such as the human thorax, fine-tuning of the hyperparameter value can only be accomplished by means of a local method such as the proposed bias-noise distance (BND) curve. We established that our BND-curve method provides an accurate estimate of the optimized hyperparameter value as long as the region-of-interest volume for which it is defined is sufficiently large and located sufficiently close to the myocardium.
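
    A minimal sketch of the L-curve selection step mentioned above, assuming reconstructions have already been computed over a grid of hyperparameter values; the corner is taken as the point of maximum curvature in log-log coordinates (function and variable names are illustrative):

    ```python
    import numpy as np

    def l_curve_corner(residual_norms, penalty_norms, betas):
        """Pick the hyperparameter at the corner (max curvature) of the L-curve.

        residual_norms[i], penalty_norms[i]: data misfit and TV norm of the
        reconstruction obtained with hyperparameter betas[i] (monotone grid).
        """
        x = np.log(np.asarray(residual_norms))
        y = np.log(np.asarray(penalty_norms))
        # First and second derivatives with respect to the grid index.
        dx, dy = np.gradient(x), np.gradient(y)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        curvature = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
        return betas[int(np.nanargmax(curvature))]
    ```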

  5. Comparative assessment of four a-posteriori uncertainty quantification methods for PIV data

    NASA Astrophysics Data System (ADS)

    Vlachos, Pavlos; Sciacchitano, Andrea; Neal, Douglas; Smith, Barton; Warner, Scott

    2014-11-01

    Particle Image Velocimetry (PIV) is a well-established technique for the measurement of flow velocity in a two- or three-dimensional domain. As in any other technique, PIV data are affected by measurement errors, defined as the difference between the measured velocity and its actual value, which is unknown. The objective of uncertainty quantification is to estimate an interval that contains the (unknown) actual error magnitude with a certain probability. In the present work, four methods for the a-posteriori uncertainty quantification of PIV data are assessed: the uncertainty surface method (Timmins et al., 2012), the particle disparity approach (Sciacchitano et al., 2013), the peak ratio approach (Charonko and Vlachos, 2013), and the correlation statistics method (Wieneke, 2014). For the assessment, a dedicated experimental database of a rectangular jet flow was produced (Neal et al., 2014) in which a reference velocity is known with a high degree of confidence. The comparative assessment has shown the strengths and weaknesses of the four uncertainty quantification methods under different flow fields and imaging conditions.

  6. Comparison of an assumption-free Bayesian approach with Optimal Sampling Schedule to a maximum a posteriori Approach for Personalizing Cyclophosphamide Dosing.

    PubMed

    Laínez, José M; Orcun, Seza; Pekny, Joseph F; Reklaitis, Gintaras V; Suvannasankha, Attaya; Fausel, Christopher; Anaissie, Elias J; Blau, Gary E

    2014-01-01

    Variable metabolism, dose-dependent efficacy, and a narrow therapeutic target of cyclophosphamide (CY) suggest that dosing based on individual pharmacokinetics (PK) will improve efficacy and minimize toxicity. Real-time individualized CY dose adjustment was previously explored using a maximum a posteriori (MAP) approach based on five serum PK samples in patients with hematologic malignancy undergoing stem cell transplantation. The MAP approach resulted in an improved toxicity profile without sacrificing efficacy. However, extensive PK sampling is costly and not generally applicable in the clinic. We hypothesize that the assumption-free Bayesian approach (AFBA) can reduce sampling requirements while improving the accuracy of results. We retrospectively analyzed previously published CY PK data from 20 patients undergoing stem cell transplantation. In that study, individual PK parameters were estimated by MAP-based Bayesian estimation to predict individualized day-2 doses of CY. Based on these data, we used the AFBA to select the optimal sampling schedule and compare the projected probability of achieving the therapeutic end points. By optimizing the sampling schedule with the AFBA, an effective individualized PK characterization can be obtained with only two blood draws, at 4 and 16 hours after administration on day 1. The second-day doses selected with the AFBA differed significantly from those of the MAP approach and carried, on average, a 37% higher probability of attaining the therapeutic targets. The AFBA, based on cutting-edge statistical and mathematical tools, allows accurate individualized dosing of CY with simplified PK sampling. This highly accessible approach holds great promise for improving efficacy, reducing toxicities, and lowering treatment costs. © 2013 Pharmacotherapy Publications, Inc.

  7. Statistical modeling and MAP estimation for body fat quantification with MRI ratio imaging

    NASA Astrophysics Data System (ADS)

    Wong, Wilbur C. K.; Johnson, David H.; Wilson, David L.

    2008-03-01

    We are developing small animal imaging techniques to characterize the kinetics of lipid accumulation/reduction of fat depots in response to genetic/dietary factors associated with obesity and metabolic syndromes. Recently, we developed an MR ratio imaging technique that approximately yields lipid/{lipid + water}. In this work, we develop a statistical model for the ratio distribution that explicitly includes a partial volume (PV) fraction of fat and a mixture of a Rician and multiple Gaussians. Monte Carlo hypothesis testing showed that our model was valid over a wide range of coefficient of variation of the denominator distribution (c.v.: 0-0.20) and correlation coefficient among the numerator and denominator (ρ: 0-0.95), which cover the typical values that we found in MRI data sets (c.v.: 0.027-0.063, ρ: 0.50-0.75). Then a maximum a posteriori (MAP) estimate for the fat percentage per voxel is proposed. Using a digital phantom with many PV voxels, we found that ratio values were not linearly related to PV fat content and that our method accurately described the histogram. In addition, the new method estimated the ground truth within +1.6% vs. +43% for an approach using an uncorrected ratio image, when we simply threshold the ratio image. On the six genetically obese rat data sets, the MAP estimate gave total fat volumes of 279 +/- 45 mL, values 21% smaller than those from the uncorrected ratio images, principally due to the non-linear PV effect. We conclude that our algorithm can increase the accuracy of fat volume quantification even in regions having many PV voxels, e.g. ectopic fat depots.

  8. Ontology based log content extraction engine for a posteriori security control.

    PubMed

    Azkia, Hanieh; Cuppens-Boulahia, Nora; Cuppens, Frédéric; Coatrieux, Gouenou

    2012-01-01

    In a posteriori access control, users are accountable for the actions they performed and must provide evidence, when required by some legal authority for instance, to prove that these actions were legitimate. Generally, log files contain the data needed to achieve this goal. This logged data can be recorded in several formats; we consider here IHE-ATNA (Integrating the Healthcare Enterprise-Audit Trail and Node Authentication) as the log format. The difficulty lies in extracting useful information regardless of the log format. A posteriori access control frameworks often include a log filtering engine that provides this extraction function. In this paper we define and enforce this function by building an IHE-ATNA-based ontology model, which we query using SPARQL, and show how a posteriori security controls are made more effective and easier based on this function.
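
    As a sketch of how such an ontology-based extraction function might be queried, the snippet below runs a SPARQL query with rdflib over a hypothetical RDF serialization of ATNA audit records; the file name, namespace, and properties are invented for illustration and do not reproduce the paper's model.

    ```python
    from rdflib import Graph

    # Load an RDF serialization of IHE-ATNA audit records
    # (hypothetical file and ontology namespace).
    g = Graph()
    g.parse("atna_audit_log.ttl", format="turtle")

    query = """
    PREFIX atna: <http://example.org/atna#>
    SELECT ?event ?user ?patient ?time WHERE {
        ?event a atna:AuditEvent ;
               atna:activeParticipant ?user ;
               atna:patientRecord ?patient ;
               atna:timestamp ?time .
    }
    ORDER BY ?time
    """
    for row in g.query(query):
        print(row.event, row.user, row.patient, row.time)
    ```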

  9. Uncertainty estimation for map-based analyses

    Treesearch

    Ronald E. McRoberts; Mark A. Hatfield; Susan J. Crocker

    2010-01-01

    Traditionally, natural resource managers have asked the question, “How much?” and have received sample-based estimates of resource totals or means. Increasingly, however, the same managers are now asking the additional question, “Where?” and are expecting spatially explicit answers in the form of maps. Recent development of natural resource databases, access to...

  10. A posteriori information effects on culpability judgments from a cross-cultural perspective.

    PubMed

    Wan, Wendy W N; Chiu, Chi-Yue; Luk, Chung-Leung

    2005-10-01

    A posteriori information about the moral attributes of the victim of a crime can affect an observer's judgment on the culpability of the actor of the crime so that negative moral attributes of the victim will lead to a lower judgment of culpability. The authors found this effect of a posteriori information among 118 American and 123 Chinese participants, but the underlying mechanisms were different between the two cultural groups. The Americans considered the psychological state of the actor during the crime, whereas the Chinese considered the morality of the actor during the crime. The authors discussed these results in light of the respondents' implicit theories of morality.

  11. A Posteriori Correction of Forecast and Observation Error Variances

    NASA Technical Reports Server (NTRS)

    Rukhovets, Leonid

    2005-01-01

    The proposed method of total observation and forecast error variance correction is based on the assumption that the "observed-minus-forecast" residuals (O-F) are normally distributed, where O is an observed value and F is usually a short-term model forecast. This assumption is acceptable for several types of observations (except humidity) which are not grossly in error. The degree of nearness to a normal distribution can be estimated by the skewness (lack of symmetry) a_3 = mu_3/sigma^3 and the kurtosis a_4 = mu_4/sigma^4 - 3, where mu_i is the i-th order central moment and sigma is the standard deviation. It is well known that for a normal distribution a_3 = a_4 = 0.
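
    A small sketch of the normality check described above, computing the sample skewness and excess kurtosis of a vector of O-F residuals (illustrative only):

    ```python
    import numpy as np

    def skewness_kurtosis(omf):
        """Sample a_3 (skewness) and a_4 (excess kurtosis) of O-F residuals."""
        omf = np.asarray(omf, dtype=float)
        centered = omf - omf.mean()
        sigma = centered.std()
        a3 = np.mean(centered**3) / sigma**3
        a4 = np.mean(centered**4) / sigma**4 - 3.0
        return a3, a4   # both near 0 for approximately normal residuals
    ```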

  12. Analysis of the geophysical data using a posteriori algorithms

    NASA Astrophysics Data System (ADS)

    Voskoboynikova, Gyulnara; Khairetdinov, Marat

    2016-04-01

    The monitoring, prediction, and prevention of extraordinary natural and technogenic events are priority problems of our time. These events include earthquakes, volcanic eruptions, lunar-solar tides, landslides, falling celestial bodies, explosions of stockpiled ammunition, and the numerous quarry blasts in open coal mines that provoke technogenic earthquakes. Monitoring is based on a number of successive stages, which include remote registration of event responses and measurement of the main parameters, such as the arrival times of seismic waves or the original waveforms. At the final stage, the inverse problems associated with determining the geographic location and time of the registered event are solved. Improving the accuracy of parameter estimation from the original records under high noise is therefore an important problem. As is known, the main measurement errors arise from the influence of external noise, the difference between the real and model structures of the medium, imprecision in defining the time at the event epicenter, and instrumental errors. Therefore, a posteriori algorithms that are more accurate than known algorithms are proposed and investigated. They are based on a combination of a discrete optimization method and a fractal approach for the joint detection and estimation of arrival times in quasi-periodic waveform sequences in geophysical monitoring problems, with improved accuracy. Existing alternative approaches to these problems do not provide the required accuracy. The proposed algorithms are considered for the tasks of vibration sounding of the Earth during lunar and solar tides, and for the problem of monitoring the borehole seismic source location in commercial drilling.

  13. Evaluation of Techniques Used to Estimate Cortical Feature Maps

    PubMed Central

    Katta, Nalin; Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.

    2011-01-01

    Functional properties of neurons are often distributed nonrandomly within a cortical area and form topographic maps that reveal insights into neuronal organization and interconnection. Some functional maps, such as in visual cortex, are fairly straightforward to discern with a variety of techniques, while other maps, such as in auditory cortex, have resisted easy characterization. In order to determine appropriate protocols for establishing accurate functional maps in auditory cortex, artificial topographic maps were probed under various conditions, and the accuracy of estimates formed from the actual maps was quantified. Under these conditions, low-complexity maps such as sound frequency can be estimated accurately with as few as 25 total samples (e.g., electrode penetrations or imaging pixels) if neural responses are averaged together. More samples are required to achieve the highest estimation accuracy for higher complexity maps, and averaging improves map estimate accuracy even more than increasing sampling density. Undersampling without averaging can result in misleading map estimates, while undersampling with averaging can lead to the false conclusion of no map when one actually exists. Uniform sample spacing only slightly improves map estimation over nonuniform sample spacing typical of serial electrode penetrations. Tessellation plots commonly used to visualize maps estimated using nonuniform sampling are always inferior to linearly interpolated estimates, although differences are slight at higher sampling densities. Within primary auditory cortex, then, multiunit sampling with at least 100 samples would likely result in reasonable feature map estimates for all but the highest complexity maps and the highest variability that might be expected. PMID:21889537

  14. A Feedback Finite Element Method with a Posteriori Error Estimation. Part 1. The Finite Element Method and Some Basic Properties of the A Posteriori Error Estimator.

    DTIC Science & Technology

    1984-10-01

    Mesztenyi, W. Szymczak, FEARS User's Manual for Univac 1100, Tech. Note BN-991, Institute for Physical Science and Technology, University of Maryland; Mesztenyi, W. Szymczak, FEARS Details of Mathematical Formulation, Tech. Note BN-994, Institute for Physical Science and Technology, University of Maryland.

  15. Inverse modeling of the (137)Cs source term of the Fukushima Dai-ichi Nuclear Power Plant accident constrained by a deposition map monitored by aircraft.

    PubMed

    Yumimoto, Keiya; Morino, Yu; Ohara, Toshimasa; Oura, Yasuji; Ebihara, Mitsuru; Tsuruta, Haruo; Nakajima, Teruyuki

    2016-11-01

    The amount of (137)Cs released by the Fukushima Dai-ichi Nuclear Power Plant accident of 11 March 2011 was inversely estimated by integrating an atmospheric dispersion model, an a priori source term, and a map of deposition recorded by aircraft. The a posteriori source term resolved finer (hourly) variations than the a priori term and put the (137)Cs released from 11 March to 2 April at 8.12 PBq. Although the time series of the a posteriori source term was generally similar to that of the a priori source term, notable modifications were found in the periods when the a posteriori source term was well constrained by the observations. The spatial pattern of (137)Cs deposition simulated with the a posteriori source term showed better agreement with the (137)Cs deposition monitored by aircraft. The a posteriori source term increased (137)Cs deposition in the Naka-dori region (the central part of Fukushima Prefecture) by 32.9% and considerably improved the underestimated a priori (137)Cs deposition. Deposition values measured at 16 stations, and surface atmospheric concentrations collected on the filter tape of suspended-particulate-matter samplers, were used for validation of the a posteriori results. A great improvement was found in surface atmospheric concentration on 15 March; the a posteriori source term reduced the root mean square error, normalized mean error, and normalized mean bias by 13.4, 22.3, and 92.0%, respectively, for the hourly values. However, limited improvements were observed in some periods and areas owing to the difficulty of simulating accurate wind fields and the lack of observational constraints. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. [Experience with using a mathematical model for evaluation of a posteriori occupational risk].

    PubMed

    Piktushanskaia, T E

    2009-01-01

    The author analyzed changes in occupational morbidity among workers of the leading economic branches of the Russian Federation and gave a prognosis of occupational morbidity levels for the near and distant future. The morbidity level shows a reliably decreasing trend, attributable to a long decline in the rate at which occupational diseases are diagnosed in periodic medical examinations. The author specified a mathematical model for evaluating a posteriori occupational risk, based on materials from periodic medical examinations of coal miners.

  17. Consistent robust a posteriori error majorants for approximate solutions of diffusion-reaction equations

    NASA Astrophysics Data System (ADS)

    Korneev, V. G.

    2016-11-01

    The efficiency of error control for numerical solutions of partial differential equations depends entirely on two factors: the accuracy of the a posteriori error majorant, and the computational cost of evaluating it for some test function/vector-function, plus the cost of obtaining the latter. In this paper, consistency of an a posteriori bound means that it is of the same order as the respective unimprovable a priori bound; consistency is therefore the basic characteristic related to the first factor. The paper is dedicated to elliptic diffusion-reaction equations. We present a guaranteed robust a posteriori error majorant effective at any nonnegative constant reaction coefficient (r.c.). For a wide range of finite element solutions on quasiuniform meshes the majorant is consistent. For large values of the r.c. the majorant coincides with the majorant of Aubin (1972), which, as is known, is inconsistent for relatively small r.c. (< c h^(-2)) and loses its meaning as the r.c. approaches zero. Our majorant also improves some other majorants derived for the Poisson and reaction-diffusion equations.

  18. Prototyping and FPGA-based MAP synchronizer for very high rate FQPSK

    NASA Technical Reports Server (NTRS)

    Gray, A.; Kang, E.

    2001-01-01

    While fundamental formulations of maximum a posteriori (MAP) estimation for symbol timing [1] have been in existence for some time, MAP estimation has generally not seen widespread use in communications receivers because of its relatively greater complexity compared to other designs. However, MAP has been shown to provide significant performance advantages for the acquisition and tracking of digital modulations under low-SNR conditions when compared to traditional techniques such as the data transition tracking loop [2].

  19. Nonmarket valuation of water quality in a rural transition economy in Turkey applying an a posteriori bid design

    NASA Astrophysics Data System (ADS)

    Bederli Tümay, Aylin; Brouwer, Roy

    2007-05-01

    In this paper, we investigate the economic benefits associated with public investments in wastewater treatment in one of the special protected areas along Turkey's touristic Mediterranean coast, the Köyceǧiz-Dalyan watershed. The benefits, measured in terms of boatable, fishable, swimmable and drinkable water quality, are estimated using a public survey format following the contingent valuation (CV) method. The study presented here is the first of its kind in Turkey. The study's main objective is to assess public perception, understanding, and valuation of improved wastewater treatment facilities in the two largest population centers in the watershed, facing the same water pollution problems as a result of lack of appropriate wastewater treatment. We test the validity and reliability of the application of the CV methodology to this specific environmental problem in a rural transition economy and evaluate the transferability of the results within the watershed. In order to facilitate willingness to pay (WTP) value elicitation we apply a novel dichotomous choice procedure where bid design takes place a posteriori instead of a priori. The statistical efficiency of different bid vectors is evaluated in terms of the estimated welfare measures' mean square errors using Monte Carlo simulation. The robustness of bid function specification is analyzed through average WTP and standard deviation estimated using parametric and nonparametric methods.

  20. Application of a posteriori granddaughter and modified granddaughter designs to determine Holstein haplotype effects.

    PubMed

    Weller, J I; VanRaden, P M; Wiggans, G R

    2013-08-01

    A posteriori and modified granddaughter designs were applied to determine haplotype effects for Holstein bulls and cows with BovineSNP50 [~50,000 single nucleotide polymorphisms (SNP); Illumina Inc., San Diego, CA] genotypes. The a posteriori granddaughter design was applied to 52 sire families, each with ≥100 genotyped sons with genetic evaluations based on progeny tests. For 33 traits (milk, fat, and protein yields; fat and protein percentages; somatic cell score; productive life; daughter pregnancy rate; heifer and cow conception rates; service-sire and daughter calving ease; service-sire and daughter stillbirth; 18 conformation traits; and net merit), the analysis was applied to the autosomal segment with the SNP with the greatest effect in the genomic evaluation of each trait. All traits except 2 had a within-family haplotype effect. The same design was applied with the genetic evaluations of sons corrected for SNP effects associated with chromosomes besides the one under analysis. The number of within-family contrasts was 166 without adjustment and 211 with adjustment. Of the 52 bulls analyzed, 36 had BovineHD (high density; Illumina Inc.) genotypes that were used to test for concordance between sire quantitative trait loci and SNP genotypes; complete concordance was not obtained for any effects. Of the 31 traits with effects from the a posteriori granddaughter design, 21 were analyzed with the modified granddaughter design. Only sires with a contrast for the a posteriori granddaughter design and ≥200 granddaughters with a record usable for genetic evaluation were included. Calving traits could not be analyzed because individual cow evaluations were not computed. Eight traits had within-family haplotype effects. With respect to milk and fat yields and fat percentage, the results on Bos taurus autosome (BTA) 14 corresponded to the hypothesis that a missense mutation in the diacylglycerol O-acyltransferase 1 (DGAT1) gene is the main causative mutation

  1. Residential electricity load decomposition method based on maximum a posteriori probability

    NASA Astrophysics Data System (ADS)

    Shan, Guangpu; Zhou, Heng; Liu, Song; Liu, Peng

    2017-05-01

    To address the problems of high computational complexity and limited accuracy in load decomposition, a load decomposition method based on maximum a posteriori probability is proposed, with the steady-state current of each electrical appliance chosen as the load characteristic; according to Bayes' formula, the electricity consumption information of all appliances can be recovered exactly at any given time. Experimental results show that the method can identify the running state of each appliance and achieves higher decomposition accuracy. In addition, the required data can be collected by the common smart meters readily available on the market, reducing the hardware cost.
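
    A toy sketch of the MAP decomposition idea, assuming known per-appliance steady-state currents, Gaussian measurement noise, and independent Bernoulli priors over ON/OFF states; all numbers are hypothetical, not values from the paper.

    ```python
    import numpy as np
    from itertools import product

    # Hypothetical steady-state currents (A) of three appliances when ON.
    currents = np.array([0.6, 2.4, 5.1])
    sigma = 0.2                        # measurement noise std (assumption)
    p_on = np.array([0.3, 0.5, 0.1])   # prior ON-probabilities (assumption)

    def map_states(i_measured):
        """MAP estimate of the ON/OFF state vector from one aggregate reading."""
        best, best_logpost = None, -np.inf
        for s in product([0, 1], repeat=len(currents)):
            s = np.array(s)
            # Gaussian log-likelihood of the aggregate current ...
            loglik = -0.5 * ((i_measured - s @ currents) / sigma) ** 2
            # ... plus independent Bernoulli log-priors (Bayes' formula).
            logprior = np.sum(np.log(np.where(s, p_on, 1 - p_on)))
            if loglik + logprior > best_logpost:
                best, best_logpost = s, loglik + logprior
        return best

    print(map_states(3.0))   # -> array([1, 1, 0]) for these numbers
    ```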

  2. Improved Phrase Translation Modeling Using Maximum A-Posteriori (MAP) Adaptation

    DTIC Science & Technology

    2013-07-01

    Distribution A. Approved for public release; distribution unlimited.

  3. Constrained map-based inventory estimation

    Treesearch

    Paul C. Van Deusen; Francis A. Roesch

    2007-01-01

    A region can conceptually be tessellated into polygons at different scales or resolutions. Likewise, samples can be taken from the region to determine the value of a polygon variable for each scale. Sampled polygons can be used to estimate values for other polygons at the same scale. However, estimates should be compatible across the different scales. Estimates are...

  4. Simple a posteriori slope limiter (Post Limiter) for high resolution and efficient flow computations

    NASA Astrophysics Data System (ADS)

    Kitamura, Keiichi; Hashimoto, Atsushi

    2017-07-01

    A simple and efficient a posteriori slope limiter ("Post Limiter") is proposed for the compressible Navier-Stokes and Euler equations and examined in 1D and 2D. The Post Limiter tries to employ un-limited solutions where and when possible (even at shocks) and to blend the un-limited and (1st-order) limited solutions smoothly, leading to an equivalently four-times-finer resolution in 1D. The idea was inspired by the a posteriori limiting approaches originally developed by Clain et al. (2011) [18] for higher-order flow computations, but the approach proposed here is an alternative suited to, and simplified for, 2nd-order spatial accuracy, with improvements in both solution quality and convergence. In fact, no iteration process is required to determine optimal orders of accuracy, since the limited and un-limited values are both available at once at 2nd order. In 2D, several numerical examples have been treated, and both the κ = 1/3 MUSCL (in a structured solver) and Green-Gauss (in an unstructured solver) reconstructions demonstrated resolution improvement (nearly 4 × 4 times), convergence acceleration, and removal of numerical noise. Even on triangular meshes (on which least-squares reconstruction is used), the unstructured solver showed improved solutions if cell geometries (cell-orientation angles) are properly taken into account. The Post Limiter is therefore readily incorporated into existing codes.

  5. Phylogenomics and a posteriori data partitioning resolve the Cretaceous angiosperm radiation Malpighiales.

    PubMed

    Xi, Zhenxiang; Ruhfel, Brad R; Schaefer, Hanno; Amorim, André M; Sugumaran, M; Wurdack, Kenneth J; Endress, Peter K; Matthews, Merran L; Stevens, Peter F; Mathews, Sarah; Davis, Charles C

    2012-10-23

    The angiosperm order Malpighiales includes ~16,000 species and constitutes up to 40% of the understory tree diversity in tropical rain forests. Despite remarkable progress in angiosperm systematics during the last 20 y, relationships within Malpighiales remain poorly resolved, possibly owing to its rapid rise during the mid-Cretaceous. Using phylogenomic approaches, including analyses of 82 plastid genes from 58 species, we identified 12 additional clades in Malpighiales and substantially increased resolution along the backbone. This greatly improved phylogeny revealed a dynamic history of shifts in net diversification rates across Malpighiales, with bursts of diversification noted in the Barbados cherries (Malpighiaceae), cocas (Erythroxylaceae), and passion flowers (Passifloraceae). We found that commonly used a priori approaches for partitioning concatenated data in maximum likelihood analyses, by gene or by codon position, performed poorly relative to the use of partitions identified a posteriori using a Bayesian mixture model. We also found better branch support in trees inferred from a taxon-rich, data-sparse matrix, which deeply sampled only the phylogenetically critical placeholders, than in trees inferred from a taxon-sparse matrix with little missing data. Although this matrix has more missing data, our a posteriori partitioning strategy reduced the possibility of producing multiple distinct but equally optimal topologies and increased phylogenetic decisiveness, compared with the strategy of partitioning by gene. These approaches are likely to help improve phylogenetic resolution in other poorly resolved major clades of angiosperms and to be more broadly useful in studies across the Tree of Life.

  6. Quantitative evaluation of efficiency of the methods for a posteriori filtration of the slip-rate time histories

    NASA Astrophysics Data System (ADS)

    Kristekova, M.; Galis, M.; Moczo, P.; Kristek, J.

    2012-04-01

    Simulated slip-rate time histories are often not free from spurious high-frequency oscillations, because the spatial grid used is not fine enough to properly discretize possibly broad-spectrum slip-rate and stress variations and the spatial breakdown zone of the propagating rupture. To reduce the oscillations, some numerical modelers apply artificial damping; an alternative is the application of the adaptive smoothing algorithm (ASA, Galis et al. 2010). Other modelers, however, rely on a posteriori filtration. If the oscillations do not affect (change) the development and propagation of the rupture during the simulation, a posteriori filtration can be applied to reduce them. Often, however, a posteriori filtration is a problematic trade-off between suppression of oscillations and distortion of the true slip rate. We present a quantitative comparison of the efficiency of several methods. We analyzed slip-rate time histories simulated by the FEM-TSN method. Signals containing spurious high-frequency oscillations, and signals after application of a posteriori filtering, were compared to a reference signal. The reference signal was created by careful, iterative, adjusted denoising of the slip rate simulated on the finest (technically feasible) spatial grid. We performed extensive numerical simulations to test the efficiency of a posteriori filtration for slip rates with different levels and kinds of spurious oscillations. We show that time-frequency analysis and the time-frequency misfit criteria (Kristekova et al. 2006, 2009) are suitable tools for evaluating the efficiency of a posteriori filtration methods and are also clear indicators of possible distortions introduced by the filtration.

  7. Disparity map estimation using image pyramid

    NASA Astrophysics Data System (ADS)

    Roszkowski, Mikołaj

    2013-10-01

    The task of a short-baseline stereo matching algorithm is to calculate the disparity map given two rectified images of one scene. Most algorithms assume that a maximal possible disparity exists and search all disparities in the range from 1 to this maximal disparity. For large images and a wide disparity search range this can be very computationally demanding. In this article, a simple coarse-to-fine hierarchical matching method based on the Gaussian pyramid and local stereo matching is investigated. Such an approach allows a significant reduction in the number of disparities searched compared to the full-search algorithm. Moreover, it is shown that grouping pixels into simple square regions is in most cases sufficient to avoid the significant errors that typically appear at disparity-map discontinuities when hierarchical schemes are used. Finally, it is shown that in most cases the disparity map obtained using the investigated algorithm is of comparable quality to one obtained using a full-search local stereo algorithm.

  8. Combined Uncertainty and A-Posteriori Error Bound Estimates for CFD Calculations: Theory and Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    Simulation codes often utilize finite-dimensional approximation, resulting in numerical error. Examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty, for example, uncertain parameters and fields associated with the imposition of initial and boundary data, and uncertain physical model parameters such as chemical reaction rates, mixture model parameters, material property parameters, etc.

  9. Reconstruction algorithm in compressed sensing based on maximum a posteriori estimation

    NASA Astrophysics Data System (ADS)

    Takeda, Koujin; Kabashima, Yoshiyuki

    2013-12-01

    We propose a systematic method for constructing a sparse data reconstruction algorithm in compressed sensing at relatively low computational cost for a general observation matrix. It is known that the cost of l1-norm minimization using a standard linear programming algorithm is O(N^3). We show that this cost can be reduced to O(N^2) by applying the approach of posterior maximization. Furthermore, in principle, the algorithm from our approach is expected to achieve the widest successful reconstruction region, as evaluated by theoretical arguments. We also discuss the relation between the belief-propagation-based reconstruction algorithm introduced in preceding works and our approach.
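
    The paper's algorithm derives from posterior maximization; as a simpler illustration of the same MAP view of sparse reconstruction (a Laplace prior on x yields l1-regularized least squares), the sketch below implements iterative soft thresholding (ISTA), whose per-iteration cost is dominated by matrix-vector products. This is a generic stand-in, not the authors' algorithm.

    ```python
    import numpy as np

    def ista(A, y, lam, n_iter=200):
        """MAP reconstruction for y = A x + noise under a Laplace (sparse)
        prior, i.e. l1-regularized least squares, via iterative soft
        thresholding. Each iteration costs O(MN) matrix-vector work."""
        L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = x - (A.T @ (A @ x - y)) / L  # gradient step on the data term
            x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
        return x
    ```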

  10. Analysis of the Efficiency of an A-Posteriori Error Estimator for Linear Triangular Finite Elements

    DTIC Science & Technology

    1991-06-01

    Release 1.0, NOETIC Tech. Corp., St. Louis, Missouri, 1985. [28] R. Verfürth, FEMFLOW user guide, Version 1, Report, Universität Zürich, 1989. [29] R...

  11. A Posteriori Error Estimation of Adaptive Finite Difference Schemes for Hyperbolic Systems

    DTIC Science & Technology

    1988-06-01

    scheme have been studied by Ciment (ref 24), Fritts (ref 25), Hoffman (ref 26), Osher and Sanders (ref 27), Sanders (ref 28), and Mastin (ref 29). ... Methods for Partial Differential Equations, SIAM, Philadelphia, 1983. 24. Ciment, M., "Stable Difference Schemes With Uneven Mesh Spacings," Math. Comp

  12. Mapping quantitative trait Loci using generalized estimating equations.

    PubMed Central

    Lange, C; Whittaker, J C

    2001-01-01

    A number of statistical methods are now available to map quantitative trait loci (QTL) relative to markers. However, no existing methodology can simultaneously map QTL for multiple nonnormal traits. In this article we rectify this deficiency by developing a QTL-mapping approach based on generalized estimating equations (GEE). Simulation experiments are used to illustrate the application of the GEE-based approach. PMID:11729173

  13. Can Visually Impaired Children Use Tactile Maps to Estimate Directions?

    ERIC Educational Resources Information Center

    Ungar, S.; And Others

    1994-01-01

    Eighty-eight children (either totally blind or with residual vision) estimated directions between landmarks in a large scale layout of objects. Children experienced the layout either directly by walking around it or indirectly by examining a tactile map. Use of tactile maps considerably facilitated the performance of the blind children. (Author/DB)

  14. Estimating mapped-plot forest attributes with ratios of means

    Treesearch

    S.J. Zarnoch; W.A. Bechtold

    2000-01-01

    The mapped-plot design utilized by the U.S. Department of Agriculture (USDA) Forest Inventory and Analysis and the National Forest Health Monitoring Programs is described. Data from 2458 forested mapped plots systematically spread across 25 States reveal that 35 percent straddle multiple conditions. The ratio-of-means estimator is developed as a method to obtain...

  15. Satellite-map position estimation for the Mars rover

    NASA Technical Reports Server (NTRS)

    Hayashi, Akira; Dean, Thomas

    1989-01-01

    A method for locating the Mars rover using an elevation map generated from satellite data is described. In exploring its environment, the rover is assumed to generate a local rover-centered elevation map that can be used to extract information about the relative position and orientation of landmarks corresponding to local maxima. These landmarks are integrated into a stochastic map which is then matched with the satellite map to obtain an estimate of the robot's current location. The landmarks are not explicitly represented in the satellite map. The results of the matching algorithm correspond to a probabilistic assessment of whether or not the robot is located within a given region of the satellite map. By assigning a probabilistic interpretation to the information stored in the satellite map, researchers are able to provide a precise characterization of the results computed by the matching algorithm.

  16. Machine learning source separation using maximum a posteriori nonnegative matrix factorization.

    PubMed

    Gao, Bin; Woo, Wai Lok; Ling, Bingo W-K

    2014-07-01

    A novel unsupervised machine learning algorithm for single-channel source separation is presented. The proposed method is based on nonnegative matrix factorization, optimized under the framework of maximum a posteriori probability and the Itakura-Saito divergence. The method enables a generalized criterion for variable sparseness to be imposed on the solution, and prior information to be explicitly incorporated through the basis vectors. In addition, the method is scale invariant, treating both low- and high-energy components of a signal with equal importance. The proposed algorithm is a more complete and efficient approach to the matrix factorization of signals that exhibit temporal dependency of their frequency patterns. Experimental tests have been conducted and compared with other algorithms to verify the efficiency of the proposed method.
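
    A minimal sketch of the Itakura-Saito NMF core underlying such a method, using the standard multiplicative updates; the paper's MAP sparseness criterion and basis-vector priors are omitted here, and the initialization and names are illustrative.

    ```python
    import numpy as np

    def is_nmf(V, k, n_iter=200, eps=1e-12):
        """Multiplicative updates for NMF (V ~ W @ H) under the Itakura-Saito
        divergence: the maximum-likelihood core only, without the MAP priors."""
        F, N = V.shape
        rng = np.random.default_rng(0)
        W = rng.random((F, k)) + eps
        H = rng.random((k, N)) + eps
        for _ in range(n_iter):
            WH = W @ H + eps
            H *= (W.T @ (V / WH**2)) / (W.T @ (1.0 / WH))
            WH = W @ H + eps
            W *= ((V / WH**2) @ H.T) / ((1.0 / WH) @ H.T)
        return W, H
    ```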

  17. Conjugate quasilinear Dirichlet and Neumann problems and a posteriori error bounds

    NASA Technical Reports Server (NTRS)

    Lavery, J. E.

    1976-01-01

    Quasilinear Dirichlet and Neumann problems on a rectangle D with boundary D' are considered. Conjugate problems, that is, pairs consisting of one Dirichlet and one Neumann problem the minima of whose energies add to zero, are introduced. From the concept of conjugate problems, two-sided bounds for the energy of the exact solution of any given Dirichlet or Neumann problem are constructed. These two-sided bounds on the energy at the exact solution are in turn used to obtain a posteriori error bounds for the norm of the difference between the approximate and exact solutions of the problem. These bounds do not involve the unknown exact solution and are easily computed numerically.

  18. A posteriori correction of camera characteristics from large image data sets.

    PubMed

    Afanasyev, Pavel; Ravelli, Raimond B G; Matadeen, Rishi; De Carlo, Sacha; van Duinen, Gijs; Alewijnse, Bart; Peters, Peter J; Abrahams, Jan-Pieter; Portugal, Rodrigo V; Schatz, Michael; van Heel, Marin

    2015-06-11

    Large datasets are emerging in many fields of image processing including: electron microscopy, light microscopy, medical X-ray imaging, astronomy, etc. Novel computer-controlled instrumentation facilitates the collection of very large datasets containing thousands of individual digital images. In single-particle cryogenic electron microscopy ("cryo-EM"), for example, large datasets are required for achieving quasi-atomic resolution structures of biological complexes. Based on the collected data alone, large datasets allow us to precisely determine the statistical properties of the imaging sensor on a pixel-by-pixel basis, independent of any "a priori" normalization routinely applied to the raw image data during collection ("flat field correction"). Our straightforward "a posteriori" correction yields clean linear images as can be verified by Fourier Ring Correlation (FRC), illustrating the statistical independence of the corrected images over all spatial frequencies. The image sensor characteristics can also be measured continuously and used for correcting upcoming images.

  19. Facial Expression Recognition by Supervised Independent Component Analysis Using MAP Estimation

    NASA Astrophysics Data System (ADS)

    Chen, Fan; Kotani, Kazunori

    Permutation ambiguity of classical Independent Component Analysis (ICA) may cause problems in feature extraction for pattern classification. Especially when only a small subset of components is derived from the data, these components may not be the most distinctive for classification, because ICA is an unsupervised method. We include a selective prior on the de-mixing coefficients in classical ICA to alleviate the problem. Since the prior is constructed from the classification information in the training data, we refer to the proposed ICA model with a selective prior as supervised ICA (sICA). We formulate the learning rule for sICA using a Maximum a Posteriori (MAP) scheme and further derive a fixed-point algorithm for learning the de-mixing matrix. We investigate the performance of sICA in facial expression recognition in terms of both recognition rate and robustness, even with few independent components.

  20. A MAP estimator based on geometric Brownian motion for sample distances of laser triangulation data

    NASA Astrophysics Data System (ADS)

    Herrmann, Markus; Otesteanu, Marius

    2016-11-01

    The proposed algorithm is designed to enhance the line-detection stability in laser-stripe sensors. Despite their many features and capabilities, these sensors become unstable when measuring in dark or strongly-reflective environments. Ambiguous points within a camera image can appear on dark surfaces and be confused with noise when the laser-reflection intensity approaches noise level. Similar problems arise when strong reflections within the sensor image have intensities comparable to that of the laser. In these circumstances, it is difficult to determine the most probable point for the laser line. Hence, the proposed algorithm introduces a maximum a posteriori estimator, based on geometric Brownian motion, to provide a range estimate for the expected location of the reflected laser line.

  1. Distortion Estimates for Negative Schwarzian Maps.

    DTIC Science & Technology

    1988-02-29

    ... continuous with respect to Lebesgue measure. ... The distortion dis(f) is invariant under changes of scale in the domain and is multiplied by the inverse of a scaling factor.

  2. Estimating a Path through a Map of Decision Making

    PubMed Central

    Brock, William A.; Bentley, R. Alexander; O'Brien, Michael J.; Caiado, Camilia C. S.

    2014-01-01

    Studies of the evolution of collective behavior consider the payoffs of individual versus social learning. We have previously proposed that the relative magnitude of social versus individual learning could be compared against the transparency of payoff, also known as the “transparency” of the decision, through a heuristic, two-dimensional map. Moving from west to east, the estimated strength of social influence increases. As the decision maker proceeds from south to north, transparency of choice increases, and it becomes easier to identify the best choice itself and/or the best social role model from whom to learn (depending on position on east–west axis). Here we show how to parameterize the functions that underlie the map, how to estimate these functions, and thus how to describe estimated paths through the map. We develop estimation methods on artificial data sets and discuss real-world applications such as modeling changes in health decisions. PMID:25369369

  3. Covariance and correlation estimation in electron-density maps.

    PubMed

    Altomare, Angela; Cuocci, Corrado; Giacovazzo, Carmelo; Moliterni, Anna; Rizzi, Rosanna

    2012-03-01

    Quite recently two papers have been published [Giacovazzo & Mazzone (2011). Acta Cryst. A67, 210-218; Giacovazzo et al. (2011). Acta Cryst. A67, 368-382] which calculate the variance in any point of an electron-density map at any stage of the phasing process. The main aim of the papers was to associate a standard deviation to each pixel of the map, in order to obtain a better estimate of the map reliability. This paper deals with the covariance estimate between points of an electron-density map in any space group, centrosymmetric or non-centrosymmetric, no matter the correlation between the model and target structures. The aim is as follows: to verify if the electron density in one point of the map is amplified or depressed as an effect of the electron density in one or more other points of the map. High values of the covariances are usually connected with undesired features of the map. The phases are the primitive random variables of our probabilistic model; the covariance changes with the quality of the model and therefore with the quality of the phases. The conclusive formulas show that the covariance is also influenced by the Patterson map. Uncertainty on measurements may influence the covariance, particularly in the final stages of the structure refinement; a general formula is obtained taking into account both phase and measurement uncertainty, valid at any stage of the crystal structure solution.

  4. Offshore wind resource estimation from satellite SAR wind field maps

    NASA Astrophysics Data System (ADS)

    Hasager, C. B.; Nielsen, M.; Astrup, P.; Barthelmie, R.; Dellwik, E.; Jensen, N. O.; Jørgensen, B. H.; Pryor, S. C.; Rathmann, O.; Furevik, B. R.

    2005-10-01

    A wind resource estimation study based on a series of 62 satellite wind field maps is presented. The maps were retrieved from imaging synthetic aperture radar (SAR) data. The wind field maps were used as input to the software RWT, which calculates the offshore wind resource based on spatial averaging (footprint modelling) of the wind statistic in each satellite image. The calculated statistics can then be input to the program WAsP and used in lieu of in-situ observations by meteorological instruments. A regional wind climate map based on satellite SAR images delineates significant spatial wind speed variations. The site of investigation was Horns Rev in the North Sea, where a meteorological time series is used for comparison. The advantages and limitations of these new techniques, which seem particularly useful for mapping of the regional wind climate, are discussed.

  5. Estimation of geometrically undistorted B0 inhomogeneity maps

    NASA Astrophysics Data System (ADS)

    Matakos, A.; Balter, J.; Cao, Y.

    2014-09-01

    Geometric accuracy of MRI is one of the main concerns for its use as a sole image modality in precision radiation therapy (RT) planning. In a state-of-the-art scanner, system-level geometric distortions are within acceptable levels for precision RT. However, subject-induced B0 inhomogeneity may vary substantially, especially at air-tissue interfaces. Recent studies have shown that distortion levels of more than 2 mm near the sinus and ear canal are possible due to subject-induced field inhomogeneity. These distortions can be corrected with the use of accurate B0 inhomogeneity field maps. Most existing methods estimate these field maps from dual gradient-echo (GRE) images acquired at two different echo times, under the assumption that the GRE images are practically undistorted. However, distortion that may exist in the GRE images can result in estimated field maps that are distorted in both geometry and intensity, leading to inaccurate correction of clinical images. This work proposes a method for estimating undistorted field maps from GRE acquisitions using an iterative joint estimation technique. The proposed method yields geometrically corrected GRE images and undistorted field maps that can also be used for the correction of images acquired by other sequences. The proposed method is validated through simulation and phantom experiments and applied to patient data. Our simulation results show that our method reduces the root-mean-squared error of the estimated field map from the ground truth ten-fold compared to the distorted field map. Both the geometric distortion and the intensity corruption (artifact) in the images caused by the B0 field inhomogeneity are corrected almost completely. Our phantom experiment showed an improvement in geometric correction of approximately 1 mm at an air-water interface when using the undistorted field map compared to a distorted field map. The proposed method for undistorted field map estimation can lead to improved geometric
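
    For context, the conventional dual-echo field-map estimate that such iterative joint methods start from, and then correct, can be sketched as follows (standard phase-difference formula; variable names are illustrative and phase wrapping is ignored):

    ```python
    import numpy as np

    def dual_echo_fieldmap(echo1, echo2, te1, te2):
        """Conventional (distorted) B0 map from two complex GRE echoes.

        The phase accrued between echoes is proportional to the off-resonance
        field: delta_B0 [Hz] = angle(conj(e1) * e2) / (2*pi*(TE2 - TE1)).
        """
        dphi = np.angle(np.conj(echo1) * echo2)    # wrapped phase difference
        return dphi / (2.0 * np.pi * (te2 - te1))  # field map in Hz
    ```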

  6. Using known map category marginal frequencies to improve estimates of thematic map accuracy

    NASA Technical Reports Server (NTRS)

    Card, D. H.

    1982-01-01

    By means of two simple sampling plans suggested in the accuracy-assessment literature, it is shown how one can use knowledge of map-category relative sizes to improve estimates of various probabilities. The fact that maximum likelihood estimates of cell probabilities for the simple random sampling and map category-stratified sampling were identical has permitted a unified treatment of the contingency-table analysis. A rigorous analysis of the effect of sampling independently within map categories is made possible by results for the stratified case. It is noted that such matters as optimal sample size selection for the achievement of a desired level of precision in various estimators are irrelevant, since the estimators derived are valid irrespective of how sample sizes are chosen.

  7. A Posteriori Study of a DNS Database Describing Supercritical Binary-Species Mixing

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Taskinoglu, Ezgi

    2012-01-01

    Currently, the modeling of supercritical-pressure flows through Large Eddy Simulation (LES) uses models derived for atmospheric-pressure flows. Those atmospheric-pressure flows do not exhibit the high density-gradient-magnitude features observed in both experiments and simulations of supercritical-pressure flows in the case of two-species mixing. To assess whether current LES modeling is appropriate, and to propose higher-fidelity models if it is found not to be, an LES a posteriori study has been conducted for a mixing layer that initially contains different species in the lower and upper streams, and where the initial pressure is larger than the critical pressure of either species. An initially imposed vorticity perturbation promotes roll-up and a double pairing of four initial spanwise vortices into an ultimate vortex that reaches a transitional state. The LES equations consist of the differential conservation equations coupled with a real-gas equation of state, and the equation set uses transport properties depending on the thermodynamic variables. Unlike all LES models to date, the differential equations contain, in addition to the subgrid-scale (SGS) fluxes, a new SGS term that is a pressure correction in the momentum equation. This additional term results from filtering the Direct Numerical Simulation (DNS) equations and represents the gradient of the difference between the filtered pressure and the pressure computed from the filtered flow field. A previous a priori analysis, using a DNS database for the same configuration, found this term to be of leading order in the momentum equation, a fact traced to the existence of high density-gradient-magnitude regions that populated the entire flow; in that study, models were proposed for the SGS fluxes as well as for this new term. In the present study, the previously proposed constant-coefficient SGS-flux models of the a priori investigation are tested a posteriori in LES, devoid of, or including, the

  8. Modelling of turbulent lifted jet flames using flamelets: a priori assessment and a posteriori validation

    NASA Astrophysics Data System (ADS)

    Ruan, Shaohong; Swaminathan, Nedunchezhian; Darbyshire, Oliver

    2014-03-01

    This study focuses on the modelling of turbulent lifted jet flames using flamelets and a presumed Probability Density Function (PDF) approach, with interest in both flame lift-off height and flame brush structure. First, the flamelet models used to capture contributions from the premixed and non-premixed modes of partially premixed combustion in the lifted jet flame are assessed using Direct Numerical Simulation (DNS) data for a turbulent lifted hydrogen jet flame. The joint PDFs of mixture fraction Z and progress variable c, including their statistical correlation, are obtained using a copula method, which is also validated against the DNS data. Statistically independent PDFs are found to be generally inadequate to represent the joint PDFs from the DNS data. The effects of the Z-c correlation and of the contribution from the non-premixed combustion mode on the flame lift-off height are studied systematically by including one effect at a time in the simulations used for a posteriori validation. A simple model including the effects of chemical kinetics and scalar dissipation rate is suggested and used for the non-premixed combustion contributions. The results clearly show that both the Z-c correlation and non-premixed combustion effects are required in the premixed flamelets approach to obtain good agreement with the measured flame lift-off heights as a function of jet velocity. The flame brush structure reported in earlier experimental studies is also captured reasonably well at various axial positions. It appears that flame stabilisation is influenced by both premixed and non-premixed combustion modes, and by their mutual interaction.
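
    A copula construction of the kind described can be illustrated compactly. The sketch below uses a Gaussian copula with presumed Beta marginals; the paper's specific copula family and marginal choices may differ.

        import numpy as np
        from scipy import stats

        def copula_joint_samples(rho, n, marginal_z, marginal_c, seed=0):
            """Draw (Z, c) pairs with prescribed marginals and a Gaussian
            copula encoding their statistical correlation rho."""
            rng = np.random.default_rng(seed)
            cov = np.array([[1.0, rho], [rho, 1.0]])
            g = rng.multivariate_normal([0.0, 0.0], cov, size=n)
            u = stats.norm.cdf(g)            # correlated uniform marginals
            return marginal_z.ppf(u[:, 0]), marginal_c.ppf(u[:, 1])

        # e.g. Beta marginals for mixture fraction Z and progress variable c:
        z, c = copula_joint_samples(0.6, 100000, stats.beta(2, 5), stats.beta(4, 2))

    Setting rho = 0 recovers the statistically independent joint PDF that the abstract reports to be generally inadequate.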

  9. LIME: Low-light Image Enhancement via Illumination Map Estimation.

    PubMed

    Guo, Xiaojie; Li, Yu; Ling, Haibin

    2016-12-14

    When one captures images in low-light conditions, the images often suffer from low visibility. Besides degrading the visual aesthetics of images, this poor quality may also significantly degrade the performance of many computer vision and multimedia algorithms that are primarily designed for high-quality inputs. In this paper, we propose a simple yet effective low-light image enhancement (LIME) method. More concretely, the illumination of each pixel is first estimated individually by finding the maximum value in the R, G and B channels. Further, we refine the initial illumination map by imposing a structure prior on it to obtain the final illumination map. Given the well-constructed illumination map, the enhancement can be achieved accordingly. Experiments on a number of challenging low-light images are presented to reveal the efficacy of our LIME and to show its superiority over several state-of-the-art methods in terms of enhancement quality and efficiency.
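
    The initial illumination estimate is simple enough to state directly. The sketch below implements the max-over-channels initialisation and the Retinex-style division; LIME's structure-aware refinement of the map is omitted and replaced by a plain gamma adjustment, so this is only a rough approximation of the published method.

        import numpy as np

        def lime_like_enhance(img, gamma=0.8, eps=1e-3):
            """img: float RGB array in [0, 1], shape (H, W, 3)."""
            t = img.max(axis=2)                 # initial illumination map
            t = np.clip(t, eps, 1.0) ** gamma   # stand-in for the refinement
            return np.clip(img / t[..., None], 0.0, 1.0)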

  10. MAP Estimation of Chin and Cheek Contours in Video Sequences

    NASA Astrophysics Data System (ADS)

    Kampmann, Markus

    2004-12-01

    An algorithm for the estimation of chin and cheek contours in video sequences is proposed. This algorithm exploits a priori knowledge about the shape and position of chin and cheek contours in images. Exploiting knowledge about the shape, a parametric 2D model representing chin and cheek contours is introduced. Exploiting knowledge about the position, a MAP estimator is developed that takes into account the observed luminance gradient as well as a priori probabilities of chin and cheek contour positions. The proposed algorithm was tested with head-and-shoulder video sequences (image resolution CIF). In nearly 70% of all investigated video frames, a subjectively error-free estimation could be achieved. The average 2D estimation error lies between 2.4 and an upper value given only in the full text.

  11. The MAP Spacecraft Angular State Estimation After Sensor Failure

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    2003-01-01

    This work describes two algorithms for computing the angular rate and attitude in case of a gyro and a star tracker failure on the Microwave Anisotropy Probe (MAP) satellite, which was placed at the L2 parking point, from where it collects data to determine the origin of the universe. The nature of the problem is described, two algorithms are suggested, an observability study is carried out, and real MAP data are used to assess the merit of the algorithms. It is shown that one of the algorithms yields a good estimate of the rates but not of the attitude, whereas the other yields a good estimate of the rate as well as of two of the three attitude angles. The estimation of the third angle depends on the initial state estimate. There is an apparent contradiction between this result and the outcome of the observability analysis; an explanation of this contradiction is given in the paper. Although this work treats a particular spacecraft, the conclusions have far-reaching consequences.

  12. Determination of quantitative trait variants by concordance via application of the a posteriori granddaughter design to the U.S. Holstein population

    USDA-ARS?s Scientific Manuscript database

    Experimental designs that exploit family information can provide substantial predictive power in quantitative trait variant discovery projects. Concordance between quantitative trait locus genotype as determined by the a posteriori granddaughter design and marker genotype was determined for 29 trai...

  13. Improving hyperspectral band selection by constructing an estimated reference map

    NASA Astrophysics Data System (ADS)

    Guo, Baofeng; Damper, Robert I.; Gunn, Steve R.; Nelson, James D. B.

    2014-01-01

    We investigate band selection for hyperspectral image classification. Mutual information (MI) measures the statistical dependence between two random variables. By modeling the reference map as one of the two random variables, MI can therefore be used to select the bands that are more useful for image classification. A new method is proposed to estimate the MI using an optimally constructed reference map, reducing reliance on ground-truth information. To reduce interference from noise and clutter, the reference map is constructed by averaging a subset of spectral bands chosen for their capability to approximate the ground truth. To find these bands automatically, we develop a search strategy consisting of a differentiable MI measure, a gradient-ascent algorithm, and random-start optimization. Experiments on the AVIRIS 92AV3C and Pavia University scene datasets show that the proposed method outperforms the benchmark methods. In the AVIRIS 92AV3C dataset, up to 55% of the bands can be removed without significant loss of classification accuracy, compared with 40% when using the reference map supplied with the dataset. Meanwhile, performance is much more robust to accuracy degradation when more than 60% of the bands are cut, indicating better agreement in the MI calculation. In the Pavia University scene dataset, using 45 bands achieved 86.18% classification accuracy, only 1.5% lower than using all 103 bands.
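
    The band-ranking criterion is ordinary mutual information, which can be estimated from a joint histogram. A minimal version, with our own binning choice, is shown below.

        import numpy as np

        def mutual_information(band, reference, bins=64):
            """MI between one spectral band and the (estimated) reference
            map, both given as arrays over the same pixels."""
            joint, _, _ = np.histogram2d(band.ravel(), reference.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    Bands are then ranked by their MI with the constructed reference map, and the top-ranked subset is kept for classification.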

  14. Estimation of Genetic Effects and Genotype-Phenotype Maps

    PubMed Central

    Le Rouzic, Arnaud; Álvarez-Castro, José M.

    2008-01-01

    Determining the genetic architecture of complex traits is a necessary step toward understanding phenotypic changes in natural, experimental and domestic populations. However, this remains a major challenge for modern genetics, since the estimation of genetic effects tends to be complicated by genetic interactions, which lead to changes in the effect of allelic substitutions depending on the genetic background. Recent progress in statistical tools that aim to describe and quantify genetic effects has meaningfully improved the efficiency and availability of genotype-to-phenotype mapping methods. In this contribution, we facilitate the practical use of the recently published ‘NOIA’ quantitative framework by providing an implementation of linear and multilinear regressions, the change-of-reference operation, and genotype-to-phenotype mapping in a package (‘noia’) for the software R, and we discuss the theoretical and practical benefits evolutionary and quantitative geneticists may find in using proper modeling strategies to quantify the effects of genes. PMID:19204820

  15. Effects of Estimation Bias on Multiple-Category Classification with an IRT-Based Adaptive Classification Procedure

    ERIC Educational Resources Information Center

    Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.

    2006-01-01

    The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…

  17. Mass storage estimates for the digital mapping era.

    USGS Publications Warehouse

    Light, D.L.

    1986-01-01

    Proponents of the digital era recognize that a breakthrough in mass storage technology may be required to attain a reasonable degree of computerization of the cartographic mapping and data management process. This paper provides the rationale for estimating that about 10^14 bits of digital mass storage are needed for developing a digital 1:24 000-scale topographic data base of the US. Also, it will discuss the optical disk as a leading candidate for handling the mass storage dilemma.-from Author

  18. Arbitrary-Lagrangian-Eulerian Discontinuous Galerkin schemes with a posteriori subcell finite volume limiting on moving unstructured meshes

    NASA Astrophysics Data System (ADS)

    Boscheri, Walter; Dumbser, Michael

    2017-10-01

    Lagrangian formulations that are based on a fixed computational grid and which instead evolve the mapping of the reference configuration to the current one. Our new Lagrangian-type DG scheme adopts the novel a posteriori sub-cell finite volume limiter method recently developed in [62] for fixed unstructured grids. In this approach, the validity of the candidate solution produced in each cell by an unlimited ADER-DG scheme is verified against a set of physical and numerical detection criteria, such as the positivity of pressure and density, the absence of floating-point errors (NaN), and the satisfaction of a relaxed discrete maximum principle (DMP) in the sense of polynomials. Those cells which do not satisfy all of the above criteria are flagged as troubled cells and are recomputed with the aid of a more robust second-order TVD finite volume scheme. To preserve the sub-cell resolution capability of the original DG scheme, the FV limiter is run on a sub-grid that is 2N + 1 times finer than the mesh of the original unlimited DG scheme. The new sub-cell averages are then gathered back into a high-order DG polynomial by the usual conservative finite volume reconstruction operator. The numerical convergence rates of the new ALE ADER-DG schemes are studied up to fourth order in space and time, and several test problems are simulated in order to check the accuracy and robustness of the proposed numerical method in the context of the Euler and Navier-Stokes equations for compressible gas dynamics, considering both inviscid and viscous fluids. Finally, an application inspired by Inertial Confinement Fusion (ICF) type flows is considered by solving the Euler equations and the PDEs of viscous and resistive magnetohydrodynamics (VRMHD).

  19. Estimating and Mapping the Population at Risk of Sleeping Sickness

    PubMed Central

    Franco, José R.; Paone, Massimo; Diarra, Abdoulaye; Ruiz-Postigo, José Antonio; Fèvre, Eric M.; Mattioli, Raffaele C.; Jannin, Jean G.

    2012-01-01

    Background Human African trypanosomiasis (HAT), also known as sleeping sickness, persists as a public health problem in several sub-Saharan countries. Evidence-based, spatially explicit estimates of population at risk are needed to inform planning and implementation of field interventions, monitor disease trends, raise awareness and support advocacy. Comprehensive, geo-referenced epidemiological records from HAT-affected countries were combined with human population layers to map five categories of risk, ranging from “very high” to “very low,” and to estimate the corresponding at-risk population. Results Approximately 70 million people distributed over a surface of 1.55 million km² are estimated to be at different levels of risk of contracting HAT. Trypanosoma brucei gambiense accounts for 82.2% of the population at risk, the remaining 17.8% being at risk of infection from T. b. rhodesiense. Twenty-one million people live in areas classified as moderate to very high risk, where more than 1 HAT case per 10,000 inhabitants per annum is reported. Discussion Updated estimates of the population at risk of sleeping sickness were made, based on quantitative information on the reported cases and the geographic distribution of human population. Due to substantial methodological differences, it is not possible to make direct comparisons with previous figures for at-risk population. By contrast, it will be possible to explore trends in the future. The presented maps of different HAT risk levels will help to develop site-specific strategies for control and surveillance, and to monitor progress achieved by ongoing efforts aimed at the elimination of sleeping sickness. PMID:23145192

  20. Estimating and mapping ecological processes influencing microbial community assembly

    PubMed Central

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; Konopka, Allan E.

    2015-01-01

    Ecological community assembly is governed by a combination of (i) selection resulting from among-taxa differences in performance; (ii) dispersal resulting from organismal movement; and (iii) ecological drift resulting from stochastic changes in population sizes. The relative importance and nature of these processes can vary across environments. Selection can be homogeneous or variable, and while dispersal is a rate, we conceptualize extreme dispersal rates as two categories; dispersal limitation results from limited exchange of organisms among communities, and homogenizing dispersal results from high levels of organism exchange. To estimate the influence and spatial variation of each process we extend a recently developed statistical framework, use a simulation model to evaluate the accuracy of the extended framework, and use the framework to examine subsurface microbial communities over two geologic formations. For each subsurface community we estimate the degree to which it is influenced by homogeneous selection, variable selection, dispersal limitation, and homogenizing dispersal. Our analyses revealed that the relative influences of these ecological processes vary substantially across communities even within a geologic formation. We further identify environmental and spatial features associated with each ecological process, which allowed mapping of spatial variation in ecological-process-influences. The resulting maps provide a new lens through which ecological systems can be understood; in the subsurface system investigated here they revealed that the influence of variable selection was associated with the rate at which redox conditions change with subsurface depth. PMID:25983725

  1. Estimating and mapping ecological processes influencing microbial community assembly

    DOE PAGES

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; ...

    2015-05-01

    Ecological community assembly is governed by a combination of (i) selection resulting from among-taxa differences in performance; (ii) dispersal resulting from organismal movement; and (iii) ecological drift resulting from stochastic changes in population sizes. The relative importance and nature of these processes can vary across environments. Selection can be homogeneous or variable, and while dispersal is a rate, we conceptualize extreme dispersal rates as two categories; dispersal limitation results from limited exchange of organisms among communities, and homogenizing dispersal results from high levels of organism exchange. To estimate the influence and spatial variation of each process we extend a recently developed statistical framework, use a simulation model to evaluate the accuracy of the extended framework, and use the framework to examine subsurface microbial communities over two geologic formations. For each subsurface community we estimate the degree to which it is influenced by homogeneous selection, variable selection, dispersal limitation, and homogenizing dispersal. Our analyses revealed that the relative influences of these ecological processes vary substantially across communities even within a geologic formation. We further identify environmental and spatial features associated with each ecological process, which allowed mapping of spatial variation in ecological-process-influences. The resulting maps provide a new lens through which ecological systems can be understood; in the subsurface system investigated here they revealed that the influence of variable selection was associated with the rate at which redox conditions change with subsurface depth.

  2. Estimating and mapping ecological processes influencing microbial community assembly

    SciTech Connect

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; Konopka, Allan E.

    2015-05-01

    Ecological community assembly is governed by a combination of (i) selection resulting from among-taxa differences in performance; (ii) dispersal resulting from organismal movement; and (iii) ecological drift resulting from stochastic changes in population sizes. The relative importance and nature of these processes can vary across environments. Selection can be homogeneous or variable, and while dispersal is a rate, we conceptualize extreme dispersal rates as two categories; dispersal limitation results from limited exchange of organisms among communities, and homogenizing dispersal results from high levels of organism exchange. To estimate the influence and spatial variation of each process we extend a recently developed statistical framework, use a simulation model to evaluate the accuracy of the extended framework, and use the framework to examine subsurface microbial communities over two geologic formations. For each subsurface community we estimate the degree to which it is influenced by homogeneous selection, variable selection, dispersal limitation, and homogenizing dispersal. Our analyses revealed that the relative influences of these ecological processes vary substantially across communities even within a geologic formation. We further identify environmental and spatial features associated with each ecological process, which allowed mapping of spatial variation in ecological-process-influences. The resulting maps provide a new lens through which ecological systems can be understood; in the subsurface system investigated here they revealed that the influence of variable selection was associated with the rate at which redox conditions change with subsurface depth.

  3. Pose Estimation and Mapping Using Catadioptric Cameras with Spherical Mirrors

    NASA Astrophysics Data System (ADS)

    Ilizirov, Grigory; Filin, Sagi

    2016-06-01

    Catadioptric cameras have the advantage of broadening the field of view and revealing otherwise occluded object parts. However, they differ geometrically from standard central-perspective cameras because light reflection from the mirror surface alters the collinearity relation and introduces severe non-linear distortions of the imaged scene. Accommodating these features, we present in this paper a novel model for pose estimation and reconstruction while imaging through spherical mirrors. We derive a closed-form equivalent to the collinearity principle via which we estimate the system's parameters. Our model yields a resection-like solution which can be developed into a linear one. We show that accurate estimates can be derived with only a small set of control points. Analysis shows that the control configuration in the orientation scheme is rather flexible and that high levels of accuracy can be reached in both pose estimation and mapping. Clearly, the ability to model objects which fall outside of the immediate camera field of view offers an appealing means to supplement 3-D reconstruction and modeling.

  4. Optimizing spectral wave estimates with adjoint-based sensitivity maps

    NASA Astrophysics Data System (ADS)

    Orzech, Mark; Veeramony, Jay; Flampouris, Stylianos

    2014-04-01

    A discrete numerical adjoint has recently been developed for the stochastic wave model SWAN. In the present study, this adjoint code is used to construct spectral sensitivity maps for two nearshore domains. The maps display the correlations of spectral energy levels throughout the domain with the observed energy levels at a selected location or region of interest (LOI/ROI), providing a full spectrum of values at all locations in the domain. We investigate the effectiveness of sensitivity maps based on significant wave height (H_s) in determining alternate offshore instrument deployment sites when a chosen nearshore location or region is inaccessible. Wave and bathymetry datasets are employed from one shallower, small-scale domain (Duck, NC) and one deeper, larger-scale domain (San Diego, CA). The effects of seasonal changes in wave climate, errors in bathymetry, and multiple assimilation points on sensitivity map shapes and model performance are investigated. Model accuracy is evaluated by comparing spectral statistics as well as with an RMS skill score, which estimates a mean model-data error across all spectral bins. Results indicate that data assimilation from identified high-sensitivity alternate locations consistently improves model performance at nearshore LOIs, while assimilation from low-sensitivity locations results in lesser or no improvement. Use of sub-sampled or alongshore-averaged bathymetry has a domain-specific effect on model performance when assimilating from a high-sensitivity alternate location. When multiple alternate assimilation locations are used from areas of lower sensitivity, model performance may be worse than with a single, high-sensitivity assimilation point.

  5. A posteriori registration and subtraction of periapical radiographs for the evaluation of external apical root resorption after orthodontic treatment

    PubMed Central

    Chibinski, Ana Cláudia; Coelho, Ulisses; Wambier, Letícia Stadler; Zedebski, Rosário de Arruda Moura; de Moraes, Mari Eli Leonelli; de Moraes, Luiz Cesar

    2016-01-01

    Purpose This study employed a posteriori registration and subtraction of radiographic images to quantify apical root resorption in maxillary permanent central incisors after orthodontic treatment, and assessed whether the external apical root resorption (EARR) was related to a range of parameters involved in the treatment. Materials and Methods A sample of 79 patients (mean age, 13.5±2.2 years) with no history of trauma or endodontic treatment of the maxillary permanent central incisors was selected. Periapical radiographs taken before and after orthodontic treatment were digitized and imported into the Regeemy software. Based on an analysis of the post-treatment radiographs, the length of the incisors was measured using ImageJ software. The mean EARR was expressed in pixels and as relative root resorption (%). The patient's age and gender, tooth extraction, use of elastics, and treatment duration were evaluated to identify possible correlations with EARR. Results The mean EARR observed was 15.44±12.1 pixels (5.1% resorption). No differences in mean EARR were observed according to patient characteristics (gender, age) or treatment parameters (use of elastics, treatment duration). The only parameter that influenced the mean EARR was the need for tooth extraction. Conclusion A posteriori registration and subtraction of periapical radiographs was a suitable method to quantify EARR after orthodontic treatment, and the need for tooth extraction increased the extent of root resorption after orthodontic treatment. PMID:27051635

  6. Unsupervised Mineralogical Mapping for Fast Exploration of CRISM Imagery

    NASA Astrophysics Data System (ADS)

    Allender, E. J.; Stepinski, T.

    2014-07-01

    We propose an unsupervised analysis of CRISM TRDR imagery which generates maps displaying the locations of unique mineral classes and provides information for a posteriori interpretation of these classes.

  7. Ultrasonic noninvasive temperature estimation using echoshift gradient maps: simulation results.

    PubMed

    Techavipoo, Udomchai; Chen, Quan; Varghese, Tomy

    2005-07-01

    Percutaneous ultrasound-image-guided radiofrequency (rf) ablation is an effective treatment for patients with hepatic malignancies who are excluded from surgical resection due to other complications. However, ablated regions are not clearly differentiated from normal untreated regions in conventional ultrasound imaging, owing to similar echogenic tissue properties. In this paper, we investigate the statistics that govern the relationship between temperature elevation and the corresponding temperature map derived from the gradient of the echo shifts estimated from consecutive ultrasound radiofrequency signals. A relationship derived using experimental data on sound speed and tissue expansion variations, measured on canine liver tissue samples at different elevated temperatures, is used to generate simulated ultrasound radiofrequency data. The simulated data set is then used to statistically estimate the accuracy and precision of the temperature distributions obtained. The results show that temperature increases between 37 and 67 degrees C can be estimated with standard deviations of +/- 3 degrees C. Our results also indicate that the correlation coefficient between consecutive radiofrequency signals should be greater than 0.85 to obtain accurate temperature estimates.

  8. A Novel Gibbs Maximum A Posteriori (GMAP) Approach on Bayesian Nonlinear Mixed-Effects Population Pharmacokinetics (PK) Models

    PubMed Central

    Kim, Seongho; Hall, Stephen D.; Li, Lang

    2009-01-01

    In this paper, various Bayesian Markov chain Monte Carlo (MCMC) methods and the proposed Gibbs maximum a posteriori (GMAP) algorithm are compared for implementing the nonlinear mixed-effects model in pharmacokinetics (PK) studies. An intravenous two-compartmental PK model is adopted to fit the PK data from the midazolam (MDZ) studies, which recruited 24 individuals with 9 different time points per subject. A three-stage hierarchical nonlinear mixed model is constructed. Data analysis and model performance comparisons show that GMAP converges the fastest and provides reliable results. Meanwhile, data augmentation (DA) methods are used for the random-walk Metropolis method. Data analysis shows that the convergence speed of random-walk Metropolis can be improved by DA, but none of these variants is as fast as GMAP. The performance of GMAP and the various MCMC algorithms is compared through midazolam data analysis and simulation. PMID:20183435

  9. Allowing for MSD prevention during facilities planning for a public service: an a posteriori analysis of 10 library design projects.

    PubMed

    Bellemare, Marie; Trudel, Louis; Ledoux, Elise; Montreuil, Sylvie; Marier, Micheline; Laberge, Marie; Vincent, Patrick

    2006-01-01

    Research was conducted to identify an ergonomics-based intervention model designed to factor in musculoskeletal disorder (MSD) prevention when library projects are being designed. The first stage of the research involved an a posteriori analysis of 10 recent redesign projects. The purpose of the analysis was to document perceptions about the attention given to MSD prevention measures over the course of a project on the part of 2 categories of employees: librarians responsible for such projects and personnel working in the libraries before and after changes. Subjects were interviewed in focus groups. Outcomes of the analysis can guide our ergonomic assessment of current situations and contribute to a better understanding of the way inclusion or improvement of prevention measures can support the workplace design process.

  10. Comparison of 3D Maximum A Posteriori and Filtered Backprojection algorithms for high resolution animal imaging in microPET

    SciTech Connect

    Chatziioannou, A.; Qi, J.; Moore, A.; Annala, A.; Nguyen, K.; Leahy, R.M.; Cherry, S.R.

    2000-01-01

    We have evaluated the performance of two three-dimensional reconstruction algorithms with data acquired from microPET, a high-resolution tomograph dedicated to small-animal imaging. The first was a linear filtered-backprojection algorithm (FBP) with reprojection of the missing data, and the second was a statistical maximum a posteriori probability algorithm (MAP). The two algorithms were evaluated in terms of their resolution performance, both in phantoms and in vivo. Sixty independent realizations of a phantom simulating the brain of a baby monkey were acquired, each containing 3 million counts. Each of these realizations was reconstructed independently with both algorithms. The ensemble of the sixty reconstructed realizations was used to estimate the standard deviation as a measure of the noise for each reconstruction algorithm. More detail was recovered in the MAP reconstruction without an increase in noise relative to FBP. Studies in a simple cylindrical compartment phantom demonstrated improved recovery of known activity ratios with MAP. Finally, in vivo studies also demonstrated a clear improvement in spatial resolution using the MAP algorithm. The quantitative accuracy of the MAP reconstruction was also evaluated by comparison with autoradiography and direct well counting of tissue samples and was shown to be superior.

  11. Ability Estimation for Conventional Tests.

    ERIC Educational Resources Information Center

    Kim, Jwa K.; Nicewander, W. Alan

    1993-01-01

    Bias, standard error, and reliability of five ability estimators were evaluated using Monte Carlo estimates of the unknown conditional means and variances of the estimators. Results indicate that estimates based on Bayesian modal, expected a posteriori, and weighted likelihood estimators were reasonably unbiased with relatively small standard…

  12. Maps of Dust IR Emission for Use in Estimation of Reddening and CMBR Foregrounds

    NASA Astrophysics Data System (ADS)

    Schlegel, D. J.; Finkbeiner, D. P.; Davis, Marc

    1997-12-01

    We present a full-sky 100 μm map that is a reprocessed composite of the COBE/DIRBE and IRAS/ISSA maps, with the zodiacal foreground and confirmed point sources removed. We have constructed a map of the dust temperature, so that the 100 μm map can be converted to a map proportional to dust column density. The dust temperature varies from 17 K to 21 K, which is modest but does modify the estimate of the dust column by a factor of 5. The result of these manipulations is a map with DIRBE-quality calibration and IRAS resolution. A wealth of filamentary detail is apparent on many different scales at all Galactic latitudes. In high-latitude regions, the dust map correlates well with maps of HI emission, but deviations are significant. To generate the full-sky dust maps, we must first remove zodiacal light contamination as well as a possible cosmic infrared background (CIB). For the 100 μm map no significant CIB is detected, but in the 140 μm and 240 μm maps, where the zodiacal contamination is weaker, we detect the CIB at surprisingly high flux levels of 30 ± 8 nW/m²/sr at 140 μm and 16 ± 3.4 nW/m²/sr at 240 μm (95% confidence), an integrated flux ~2 times that extrapolated from optical galaxies in the Hubble Deep Field. The primary use of these maps is likely to be as a new estimator of Galactic extinction. To calibrate our maps, we assume a standard reddening law and use the colors of elliptical galaxies. We demonstrate that the new maps are twice as accurate as the older Burstein-Heiles reddening estimates in regions of low and moderate reddening. The maps are expected to be significantly more accurate in regions of high reddening. These dust maps will also be useful for estimating millimeter emission that contaminates CMBR experiments and for estimating soft X-ray absorption.

  13. Decision-making in structure solution using Bayesian estimates of map quality: the PHENIX AutoSol wizard

    SciTech Connect

    Terwilliger, Thomas C; Adams, Paul D; Read, Randy J; Mccoy, Airlie J

    2008-01-01

    Ten measures of experimental electron-density-map quality are examined and the skewness of electron density is found to be the best indicator of actual map quality. A Bayesian approach to estimating map quality is developed and used in the PHENIX AutoSol wizard to make decisions during automated structure solution.
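
    The winning indicator is cheap to compute: a well-phased experimental map concentrates density into a few sharp peaks and so has a long positive tail. A one-line version of the statistic, under our own naming:

        import numpy as np
        from scipy.stats import skew

        def map_skewness(density_grid):
            """Skewness of electron density over the map grid; higher
            values indicate a more interpretable, better-phased map."""
            return float(skew(np.asarray(density_grid).ravel()))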

  14. Estimating Cortical Feature Maps with Dependent Gaussian Processes.

    PubMed

    Hughes, Nicholas J; Goodhill, Geoffrey J

    2017-10-01

    A striking example of brain organisation is the stereotyped arrangement of cell preferences in the visual cortex for edges of particular orientations in the visual image. These "orientation preference maps" appear to have remarkably consistent statistical properties across many species. However, fine-scale analysis of these properties requires the accurate reconstruction of maps from imaging data, which is highly noisy. A new approach for solving this reconstruction problem is to use Bayesian Gaussian process methods, which produce more accurate results than classical techniques. However, so far this work has not considered the fact that maps for several other features of visual input coexist with the orientation preference map and that these maps have mutually dependent spatial arrangements. Here we extend the Gaussian process framework to the multiple-output case, so that we can consider multiple maps simultaneously. We demonstrate that this improves reconstruction of multiple maps compared with both classical techniques and the single-output approach, can encode the empirically observed relationships, and is easily extensible. This provides the first principled approach for studying the spatial relationships between feature maps in visual cortex.
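
    For a feel of the single-output baseline that the paper extends, the sketch below denoises one scalar feature map with an ordinary Gaussian process; the multiple-output case replaces the RBF kernel with a covariance coupling several maps. The synthetic map and kernel settings are illustrative only.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)
        xx, yy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
        X = np.column_stack([xx.ravel(), yy.ravel()])
        true_map = np.sin(4 * np.pi * xx) * np.cos(4 * np.pi * yy)
        y = true_map.ravel() + 0.3 * rng.standard_normal(X.shape[0])

        kernel = 1.0 * RBF(length_scale=0.1) + WhiteKernel(noise_level=0.1)
        gp = GaussianProcessRegressor(kernel=kernel).fit(X, y)
        denoised = gp.predict(X).reshape(xx.shape)  # posterior mean map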

  15. A Posteriori Analysis of Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    SciTech Connect

    Donald Estep; Michael Holst; Simon Tavener

    2010-02-08

    This project was concerned with accurate computational error estimation for numerical solutions of multiphysics, multiscale systems that couple different physical processes acting across a large range of scales relevant to the interests of the DOE. Multiscale, multiphysics models are characterized by intimate interactions between different physics across a wide range of scales. This poses significant computational challenges addressed by the proposal, including: (1) accurate and efficient computation; (2) complex stability; and (3) linking different physics. The research in this project focused on Multiscale Operator Decomposition (MOD) methods for solving multiphysics problems. The general approach is to decompose a multiphysics problem into components involving simpler physics over a relatively limited range of scales, and then to seek the solution of the entire system through some sort of iterative procedure involving solutions of the individual components. MOD is a very widely used technique for solving multiphysics, multiscale problems; it is heavily used throughout the DOE computational landscape. This project made a major advance in the analysis of the solution of multiscale, multiphysics problems.

  16. How BenMAP-CE Estimates the Health and Economic Effects of Air Pollution

    EPA Pesticide Factsheets

    The BenMAP-CE tool estimates the number and economic value of health impacts resulting from changes in air quality - specifically, ground-level ozone and fine particles. Learn what data BenMAP-CE uses and how the estimates are calculated.

  17. Surface height map estimation from a single image using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Zhou, Xiaowei; Zhong, Guoqiang; Qi, Lin; Dong, Junyu; Pham, Tuan D.; Mao, Jianzhou

    2017-02-01

    Surface height map estimation is an important task in high-resolution 3D reconstruction. This task differs from general scene depth estimation in that surface height maps contain more high-frequency information, or fine details. Existing methods based on radar or other equipment can be used for large-scale scene depth recovery, but may fail in small-scale surface height map estimation. Although some methods are available for surface height reconstruction from multiple images, e.g. photometric stereo, height map estimation directly from a single image remains a challenging issue. In this paper, we present a novel method based on convolutional neural networks (CNNs) for estimating the height map from a single image, without any extra equipment or prior knowledge of the image contents. Experimental results on procedural and real texture datasets show the proposed algorithm is effective and reliable.
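
    A toy fully convolutional regressor conveys the idea of mapping a single texture image to a per-pixel height map; the paper's actual architecture and training data are not reproduced here.

        import torch
        import torch.nn as nn

        class HeightNet(nn.Module):
            """Grayscale texture in, per-pixel height out (illustrative)."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1),
                )
            def forward(self, x):
                return self.net(x)

        model = HeightNet()
        x = torch.rand(4, 1, 64, 64)                   # texture patches
        target = torch.rand(4, 1, 64, 64)              # height maps (dummy)
        loss = nn.functional.mse_loss(model(x), target)
        loss.backward()                                # one gradient step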

  18. MAPPING SPATIAL ACCURACY AND ESTIMATING LANDSCAPE INDICATORS FROM THEMATIC LAND COVER MAPS USING FUZZY SET THEORY

    EPA Science Inventory

    The accuracy of thematic map products is not spatially homogenous, but instead variable across most landscapes. Properly analyzing and representing the spatial distribution (pattern) of thematic map accuracy would provide valuable user information for assessing appropriate applic...

  20. A Priori and a Posteriori Dietary Patterns during Pregnancy and Gestational Weight Gain: The Generation R Study.

    PubMed

    Tielemans, Myrte J; Erler, Nicole S; Leermakers, Elisabeth T M; van den Broek, Marion; Jaddoe, Vincent W V; Steegers, Eric A P; Kiefte-de Jong, Jessica C; Franco, Oscar H

    2015-11-12

    Abnormal gestational weight gain (GWG) is associated with adverse pregnancy outcomes. We examined whether dietary patterns are associated with GWG. Participants included 3374 pregnant women from a population-based cohort in the Netherlands. Dietary intake during pregnancy was assessed with food-frequency questionnaires. Three a posteriori-derived dietary patterns were identified using principal component analysis: a "Vegetable, oil and fish", a "Nuts, high-fiber cereals and soy", and a "Margarine, sugar and snacks" pattern. The a priori-defined dietary pattern was based on national dietary recommendations. Weight was repeatedly measured around 13, 20 and 30 weeks of pregnancy; pre-pregnancy and maximum weight were self-reported. Normal weight women with high adherence to the "Vegetable, oil and fish" pattern had higher early-pregnancy GWG than those with low adherence (43 g/week (95% CI 16; 69) for highest vs. lowest quartile (Q)). Adherence to the "Margarine, sugar and snacks" pattern was associated with a higher prevalence of excessive GWG (OR 1.45 (95% CI 1.06; 1.99) Q4 vs. Q1). Normal weight women with higher scores on the "Nuts, high-fiber cereals and soy" pattern had more moderate GWG than women with lower scores (-0.01 (95% CI -0.02; -0.00) per SD). The a priori-defined pattern was not associated with GWG. To conclude, specific dietary patterns may play a role in early pregnancy but are not consistently associated with GWG.

  1. The Mapping Model: A Cognitive Theory of Quantitative Estimation

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2008-01-01

    How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…

  2. On the optimality of the MAP estimation loop for carrier phase tracking BPSK and QPSK signals

    NASA Technical Reports Server (NTRS)

    Simon, M. K.

    1979-01-01

    Starting with MAP estimation theory as a basis for optimally estimating the carrier phase of BPSK and QPSK modulations, it is shown in this paper that the closed-loop phase trackers motivated by this approach are indeed closed-loop optimum in the minimum mean-square phase-tracking-jitter sense. The corresponding squaring-loss performance of these so-called MAP estimation loops is compared with that of more practical implementations wherein the hyperbolic tangent nonlinearity is approximated by simpler functions.
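
    The practical content of the result can be seen in the detector nonlinearity. For BPSK, the MAP-motivated phase detector forms e = Q*g(I) from the in-phase and quadrature matched-filter outputs, with g = tanh under the MAP derivation; replacing tanh by a linear or hard-limiting function gives the simpler loops whose squaring loss the paper compares. The Monte Carlo S-curve sketch below is our own illustration, not the paper's analysis, and the scaling conventions are assumptions.

        import numpy as np

        def s_curve(phase_err, snr_db, g, n=200000, seed=1):
            """Mean detector output vs. static phase error for BPSK."""
            rng = np.random.default_rng(seed)
            snr = 10.0 ** (snr_db / 10.0)
            bits = rng.choice([-1.0, 1.0], size=n)
            noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) \
                    / np.sqrt(2.0 * snr)
            r = (bits + noise) * np.exp(1j * phase_err)
            return np.mean(r.imag * g(2.0 * snr * r.real))

        for name, g in [("MAP/tanh", np.tanh), ("linear", lambda v: v),
                        ("hard limit", np.sign)]:
            print(name, round(s_curve(0.2, 0.0, g), 4))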

  3. Model-Based MR Parameter Mapping with Sparsity Constraints: Parameter Estimation and Performance Bounds

    PubMed Central

    Zhao, Bo; Lam, Fan; Liang, Zhi-Pei

    2014-01-01

    MR parameter mapping (e.g., T1 mapping, T2 mapping, T2∗ mapping) is a valuable tool for tissue characterization. However, its practical utility has been limited due to long data acquisition times. This paper addresses this problem with a new model-based parameter mapping method. The proposed method utilizes a formulation that integrates the explicit signal model with sparsity constraints on the model parameters, enabling direct estimation of the parameters of interest from highly undersampled, noisy k-space data. An efficient greedy-pursuit algorithm is described to solve the resulting constrained parameter estimation problem. Estimation-theoretic bounds are also derived to analyze the benefits of incorporating sparsity constraints and benchmark the performance of the proposed method. The theoretical properties and empirical performance of the proposed method are illustrated in a T2 mapping application example using computer simulations. PMID:24833520
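
    For orientation, the fully sampled baseline that such methods accelerate is a voxelwise exponential fit. The sketch below fits T2 from a handful of echo times (values hypothetical); the paper's contribution is to estimate the same parameters directly from undersampled k-space under sparsity constraints.

        import numpy as np
        from scipy.optimize import curve_fit

        def fit_t2(te_ms, signal):
            """Mono-exponential fit S(TE) = S0 * exp(-TE / T2)."""
            model = lambda te, s0, t2: s0 * np.exp(-te / t2)
            (s0, t2), _ = curve_fit(model, te_ms, signal,
                                    p0=(float(signal[0]), float(te_ms.mean())))
            return s0, t2

        te = np.array([10.0, 30.0, 50.0, 70.0, 90.0])       # ms
        sig = 100.0 * np.exp(-te / 55.0)                    # noiseless toy data
        s0, t2 = fit_t2(te, sig)                            # t2 ~ 55 ms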

  4. An embedded saliency map estimator scheme: application to video encoding.

    PubMed

    Tsapatsoulis, Nicolas; Rapantzikos, Konstantinos; Pattichis, Constantinos

    2007-08-01

    In this paper we propose a novel saliency-based computational model for visual attention. This model processes both top-down (goal directed) and bottom-up information. Processing in the top-down channel creates the so called skin conspicuity map and emulates the visual search for human faces performed by humans. This is clearly a goal directed task but is generic enough to be context independent. Processing in the bottom-up information channel follows the principles set by Itti et al. but it deviates from them by computing the orientation, intensity and color conspicuity maps within a unified multi-resolution framework based on wavelet subband analysis. In particular, we apply a wavelet based approach for efficient computation of the topographic feature maps. Given that wavelets and multiresolution theory are naturally connected the usage of wavelet decomposition for mimicking the center surround process in humans is an obvious choice. However, our implementation goes further. We utilize the wavelet decomposition for inline computation of the features (such as orientation angles) that are used to create the topographic feature maps. The bottom-up topographic feature maps and the top-down skin conspicuity map are then combined through a sigmoid function to produce the final saliency map. A prototype of the proposed model was realized through the TMDSDMK642-0E DSP platform as an embedded system allowing real-time operation. For evaluation purposes, in terms of perceived visual quality and video compression improvement, a ROI-based video compression setup was followed. Extended experiments concerning both MPEG-1 as well as low bit-rate MPEG-4 video encoding were conducted showing significant improvement in video compression efficiency without perceived deterioration in visual quality.

  5. Relative Camera Pose Estimation Method Using Optimization on the Manifold

    NASA Astrophysics Data System (ADS)

    Cheng, C.; Hao, X.; Li, J.

    2017-05-01

    To solve the problem of relative camera pose estimation, a method using optimization with respect to the manifold is proposed. First, the general state estimation model using optimization is derived, passing from the maximum a posteriori (MAP) model to the nonlinear least squares (NLS) model. Then the camera pose estimation model is applied to the general state estimation model, with the parameterization of the rigid-body transformation represented by the Lie group/algebra. The Jacobian of the point-pose model with respect to the Lie group/algebra is derived in detail, and thus the optimization model of the rigid-body transformation is established. Experimental results show that, compared with the original algorithms, the approaches with optimization obtain higher accuracy in both rotation and translation, while avoiding the singularity of the Euler-angle parameterization of rotation. Thus the proposed method can estimate relative camera pose with high accuracy and robustness.
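
    The core manifold machinery is the exponential map from the Lie algebra to the group. A minimal SO(3) version via the Rodrigues formula is shown below; updating the rotation as R <- exp(delta) * R keeps iterates on the manifold and avoids the Euler-angle singularity noted above.

        import numpy as np

        def so3_exp(phi):
            """Rotation vector phi in so(3) -> rotation matrix in SO(3)."""
            theta = np.linalg.norm(phi)
            if theta < 1e-12:
                return np.eye(3)
            k = phi / theta
            K = np.array([[0.0, -k[2], k[1]],
                          [k[2], 0.0, -k[0]],
                          [-k[1], k[0], 0.0]])
            return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

        R = so3_exp(np.array([0.0, 0.0, np.pi / 2]))   # 90-degree rotation about z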

  6. MASS STORAGE ESTIMATES FOR THE DIGITAL MAPPING AREA.

    USGS Publications Warehouse

    Light, Donald L.

    1983-01-01

    Modern computer technology offers cartographers the potential for transition from conventional film-oriented methods to digital techniques as the way of mapping in the future. Traditional methods utilizing silver halide aerial and lithographic films for storage are time proven, and film is a very high density archival storage media. In view of this, proponents of the digital era recognize that a breakthrough in mass storage technology may be required to attain a reasonable degree of computerization of the cartographic mapping and data management process.

  7. A Priori and a Posteriori Dietary Patterns during Pregnancy and Gestational Weight Gain: The Generation R Study

    PubMed Central

    Tielemans, Myrte J.; Erler, Nicole S.; Leermakers, Elisabeth T. M.; van den Broek, Marion; Jaddoe, Vincent W. V.; Steegers, Eric A. P.; Kiefte-de Jong, Jessica C.; Franco, Oscar H.

    2015-01-01

    Abnormal gestational weight gain (GWG) is associated with adverse pregnancy outcomes. We examined whether dietary patterns are associated with GWG. Participants included 3374 pregnant women from a population-based cohort in the Netherlands. Dietary intake during pregnancy was assessed with food-frequency questionnaires. Three a posteriori-derived dietary patterns were identified using principal component analysis: a “Vegetable, oil and fish”, a “Nuts, high-fiber cereals and soy”, and a “Margarine, sugar and snacks” pattern. The a priori-defined dietary pattern was based on national dietary recommendations. Weight was repeatedly measured around 13, 20 and 30 weeks of pregnancy; pre-pregnancy and maximum weight were self-reported. Normal weight women with high adherence to the “Vegetable, oil and fish” pattern had higher early-pregnancy GWG than those with low adherence (43 g/week (95% CI 16; 69) for highest vs. lowest quartile (Q)). Adherence to the “Margarine, sugar and snacks” pattern was associated with a higher prevalence of excessive GWG (OR 1.45 (95% CI 1.06; 1.99) Q4 vs. Q1). Normal weight women with higher scores on the “Nuts, high-fiber cereals and soy” pattern had more moderate GWG than women with lower scores (−0.01 (95% CI −0.02; −0.00) per SD). The a priori-defined pattern was not associated with GWG. To conclude, specific dietary patterns may play a role in early pregnancy but are not consistently associated with GWG. PMID:26569303

  8. Auto-SOM: recursive parameter estimation for guidance of self-organizing feature maps.

    PubMed

    Haese, K; Goodhill, G J

    2001-03-01

    An important technique for exploratory data analysis is to form a mapping from the high-dimensional data space to a low-dimensional representation space such that neighborhoods are preserved. A popular method for achieving this is Kohonen's self-organizing map (SOM) algorithm. However, in its original form, this requires the user to choose the values of several parameters heuristically to achieve good performance. Here we present the Auto-SOM, an algorithm that estimates the learning parameters during the training of SOMs automatically. The application of Auto-SOM provides the facility to avoid neighborhood violations up to a user-defined degree in either mapping direction. Auto-SOM consists of a Kalman filter implementation of the SOM coupled with a recursive parameter estimation method. The Kalman filter trains the neurons' weights with estimated learning coefficients so as to minimize the variance of the estimation error. The recursive parameter estimation method estimates the width of the neighborhood function by minimizing the prediction error variance of the Kalman filter. In addition, the "topographic function" is incorporated to measure neighborhood violations and prevent the map's converging to configurations with neighborhood violations. It is demonstrated that neighborhoods can be preserved in both mapping directions as desired for dimension-reducing applications. The development of neighborhood-preserving maps and their convergence behavior is demonstrated by three examples accounting for the basic applications of self-organizing feature maps.

  9. MAPPING SPATIAL ACCURACY AND ESTIMATING LANDSCAPE INDICATORS FROM THEMATIC LAND COVER MAPS USING FUZZY SET THEORY

    EPA Science Inventory

    This paper presents a fuzzy set-based method of mapping spatial accuracy of thematic map and computing several ecological indicators while taking into account spatial variation of accuracy associated with different land cover types and other factors (e.g., slope, soil type, etc.)...

  11. A posteriori error estimation of h-p finite element approximations of frictional contact problems

    NASA Astrophysics Data System (ADS)

    Lee, C. Y.; Oden, J. T.

    1994-03-01

    Dynamic and static frictional contact problems are described using the normal compliance law on the contact boundary. Dynamic problems are recast as quasistatic problems by time discretization. An a posteriori error estimator is developed for the nonlinear elliptic equations of the corresponding static or quasistatic problems. The a posteriori error estimator is applied to a frictionless case and extended to frictional contact problems. An adaptive strategy is introduced, and h-p finite element meshes are obtained through a procedure based on a priori and a posteriori error estimations. Numerical examples are given to support the theoretical results.

  12. Multiresolution field map estimation using golden section search for water-fat separation.

    PubMed

    Lu, Wenmiao; Hargreaves, Brian A

    2008-07-01

    Many diagnostic MRI sequences demand reliable and uniform fat suppression. Multipoint water-fat separation methods, which are based on chemical-shift-induced phase differences, have shown great success in the presence of field inhomogeneities. This work presents a computationally efficient and robust field map estimation method. The method begins by subsampling the image data into a multiresolution image pyramid, and then uses a golden section search to directly locate possible field map values at the coarsest level of the pyramid. The field map estimate is refined and propagated to increasingly finer resolutions in an efficient manner until the full-resolution field map is obtained for the final water-fat separation. The proposed method is validated with multiecho sequences, where long echo spacings normally impose great challenges on reliable field map estimation.
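
    Golden-section search itself is a dozen lines; the sketch below shows the generic routine, with the cost function and search interval left application-specific (in the paper they come from the water-fat signal model at the coarsest pyramid level).

        def golden_section_min(f, a, b, tol=1e-6):
            """Minimize a unimodal 1-D function f on [a, b]."""
            invphi = (5.0 ** 0.5 - 1.0) / 2.0          # 1/golden ratio
            c, d = b - invphi * (b - a), a + invphi * (b - a)
            fc, fd = f(c), f(d)
            while abs(b - a) > tol:
                if fc < fd:
                    b, d, fd = d, c, fc          # shrink right; reuse f(c)
                    c = b - invphi * (b - a)
                    fc = f(c)
                else:
                    a, c, fc = c, d, fd          # shrink left; reuse f(d)
                    d = a + invphi * (b - a)
                    fd = f(d)
            return 0.5 * (a + b)

        x_min = golden_section_min(lambda x: (x - 0.3) ** 2, -1.0, 1.0)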

  13. Computational approaches and software tools for genetic linkage map estimation in plants.

    PubMed

    Cheema, Jitender; Dicks, Jo

    2009-11-01

    Genetic maps are an important component within the plant biologist's toolkit, underpinning crop plant improvement programs. The estimation of plant genetic maps is a conceptually simple yet computationally complex problem, growing ever more so with the development of inexpensive, high-throughput DNA markers. The challenge for bioinformaticians is to develop analytical methods and accompanying software tools that can cope with datasets of differing sizes, from tens to thousands of markers, that can incorporate the expert knowledge that plant biologists typically use when developing their maps, and that facilitate user-friendly approaches to achieving these goals. Here, we aim to give a flavour of computational approaches for genetic map estimation, discussing briefly many of the key concepts involved, and describing a selection of software tools that employ them. This review is intended both for plant geneticists as an introduction to software tools with which to estimate genetic maps, and for bioinformaticians as an introduction to the underlying computational approaches.

  14. Comparative analysis of a-priori and a-posteriori dietary patterns using state-of-the-art classification algorithms: a case/case-control study.

    PubMed

    Kastorini, Christina-Maria; Papadakis, George; Milionis, Haralampos J; Kalantzi, Kallirroi; Puddu, Paolo-Emilio; Nikolaou, Vassilios; Vemmos, Konstantinos N; Goudevenos, John A; Panagiotakos, Demosthenes B

    2013-11-01

    To compare the accuracy of a-priori and a-posteriori dietary patterns in the prediction of acute coronary syndrome (ACS) and ischemic stroke. This is actually the first study to employ state-of-the-art classification methods for this purpose. During 2009-2010, 1000 participants were enrolled; 250 consecutive patients with a first ACS and 250 controls (60±12 years, 83% males), as well as 250 consecutive patients with a first stroke and 250 controls (75±9 years, 56% males). The controls were population-based and age-sex matched to the patients. The a-priori dietary patterns were derived from the validated MedDietScore, whereas the a-posteriori ones were extracted from principal components analysis. Both approaches were modeled using six classification algorithms: multiple logistic regression (MLR), naïve Bayes, decision trees, repeated incremental pruning to produce error reduction (RIPPER), artificial neural networks and support vector machines. The classification accuracy of the resulting models was evaluated using the C-statistic. For the ACS prediction, the C-statistic varied from 0.587 (RIPPER) to 0.807 (MLR) for the a-priori analysis, while for the a-posteriori one, it fluctuated between 0.583 (RIPPER) and 0.827 (MLR). For the stroke prediction, the C-statistic varied from 0.637 (RIPPER) to 0.767 (MLR) for the a-priori analysis, and from 0.617 (decision tree) to 0.780 (MLR) for the a-posteriori. Both dietary pattern approaches achieved equivalent classification accuracy over most classification algorithms. The choice, therefore, depends on the application at hand. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Optimizing Spectral Wave Estimates with Adjoint-Based Sensitivity Maps

    DTIC Science & Technology

    2014-02-18

[Abstract available only as figure-caption fragments: a spectrum (left panel) at the nearshore location of interest, indicated with a blue asterisk, with black contours plotted on top of grayscale Duck bathymetry; a set of simulations examines the effectiveness of spectral sensitivity maps in a very shallow, mildly sloping, CRAB-surveyed surf-zone environment north of the pier, with an alternate "accessible" region of roughly 200 m by 1,000 m arbitrarily defined.]

  16. Map scale effects on estimating the number of undiscovered mineral deposits

    USGS Publications Warehouse

    Singer, D.A.; Menzie, W.D.

    2008-01-01

Estimates of numbers of undiscovered mineral deposits, fundamental to assessing mineral resources, are affected by map scale. Where consistently defined deposits of a particular type are estimated, spatial and frequency distributions of deposits are linked: some frequency distributions can be generated by processes that are random in space, whereas others are generated by processes that suggest spatial clustering. Possible spatial distributions of mineral deposits and their related frequency distributions are affected by map scale and the associated inclusion of non-permissive or covered geological settings. More generalized map scales are more likely to include geologic settings that are not actually permissive for the deposit type, or unreported cover over permissive areas, so that deposits appear clustered. We propose a model that captures the effects of map scale and the related inclusion of non-permissive geologic settings on estimates of numbers of deposits: the zero-inflated Poisson distribution. Effects of map scale as represented by the zero-inflated Poisson distribution suggest that the appearance of deposit clustering should diminish as mapping becomes more detailed, because the number of inflated zeros decreases with more detailed maps. Based on observed worldwide relationships between map scale and areas permissive for deposit types, mapping at a scale with twice the detail should cut the permissive area of a porphyry copper tract to 29% and of a volcanic-hosted massive sulfide tract to 50% of their original sizes. Thus some direct benefits of mapping an area at a more detailed scale are significant reductions in areas permissive for deposit types, increased deposit density and, as a consequence, reduced uncertainty in the estimate of the number of undiscovered deposits. Exploration enterprises benefit from reduced areas requiring
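
    A minimal sketch of the proposed model: the zero-inflated Poisson mixes a point mass at zero, standing in for non-permissive or covered settings swept in by a generalized map, with an ordinary Poisson count of deposits. Parameter values below are illustrative only:

        import numpy as np
        from scipy.stats import poisson

        def zip_pmf(k, pi, lam):
            """Zero-inflated Poisson: with probability pi the tract yields a
            structural zero (non-permissive geology included by a generalized map);
            otherwise the deposit count is Poisson(lam)."""
            return (1.0 - pi) * poisson.pmf(k, lam) + pi * (k == 0)

        k = np.arange(6)
        print(zip_pmf(k, pi=0.4, lam=1.5))   # coarse map: many inflated zeros
        print(zip_pmf(k, pi=0.1, lam=1.5))   # detailed map: fewer inflated zeros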

  17. Estimating and mapping of soil carbon stock using satellite data

    NASA Astrophysics Data System (ADS)

    Hongo, C.; Tamura, E.; Aijima, K.; Niwa, K.

    2014-12-01

Recently, carbon capture and storage has been attracting attention as a method for mitigating global warming in the agricultural sphere. In Japan, because the topography is complicated, precise ground-based monitoring and investigation are limited, so remote sensing is expected to provide a precise and effective investigation method. In previous research in Japan, Sekiya et al. (2010) estimated the soil carbon stock from the soil surface down to 100 cm depth in Hokkaido. However, the estimated values may not reflect the current situation, because that research used relatively old soil survey data from the 1960s to the 1970s to estimate the soil carbon stock. Against this background, we developed an estimation method using satellite data to evaluate soil carbon stocks in agricultural fields over a wide area, to be used as fundamental data. The results of our study suggest a significant correlation between the amount of soil carbon and the reflectance value from the visible to the near-infrared wavelength region. This is because the soil becomes darker, and its absorbance from the visible to the near-infrared region increases, as the soil carbon content increases. In particular, a high negative correlation is found between the reflectance value in the red wavelength and the soil carbon stock in the SPOT satellite data of 2013.
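
    The reported negative correlation suggests a simple single-band regression estimator. The sketch below, with made-up reflectance and carbon values rather than the authors' measurements, shows the general shape of such a calibration:

        import numpy as np

        # Hypothetical field samples: red-band reflectance (%) and soil carbon (kg C/m2).
        red = np.array([22.0, 19.5, 17.8, 15.2, 13.1, 11.0])
        carbon = np.array([3.1, 4.0, 4.8, 6.2, 7.5, 8.9])

        slope, intercept = np.polyfit(red, carbon, 1)    # negative slope expected
        r = np.corrcoef(red, carbon)[0, 1]
        print(f"carbon ~ {slope:.2f} * red + {intercept:.2f}, r = {r:.2f}")

        # Apply the calibration to every pixel of a (toy) red-band scene.
        scene = np.full((4, 4), 16.0)
        carbon_map = slope * scene + intercept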

  18. Debris flow risk mapping on medium scale and estimation of prospective economic losses

    NASA Astrophysics Data System (ADS)

    Blahut, Jan; Sterlacchini, Simone

    2010-05-01

Delimitation of potential zones affected by debris flow hazard, mapping of areas at risk, and estimation of future economic damage provide important information for spatial planners and local administrators in all countries endangered by this type of phenomenon. This study presents a medium scale (1:25 000 - 1:50 000) analysis applied in the Consortium of Mountain Municipalities of Valtellina di Tirano (Italian Alps, Lombardy Region). In this area a debris flow hazard map was coupled with information about the elements at risk to obtain monetary values of prospective damage. Two available hazard maps were obtained from GIS medium scale modelling. Probability estimations of debris flow occurrence were calculated using existing susceptibility maps and two sets of aerial images. Value was assigned to the elements at risk according to the official information on housing costs and land value from the Territorial Agency of the Lombardy Region. In the first risk map, vulnerability values were assumed to be 1. The second risk map uses three classes of vulnerability values qualitatively estimated according to the possible debris flow propagation. Risk curves summarizing the possible economic losses were calculated. Finally, these maps of economic risk were compared to maps derived from qualitative evaluation of the values of the elements at risk.

  19. Mapping of Estimations and Prediction Intervals Using Extreme Learning Machines

    NASA Astrophysics Data System (ADS)

    Leuenberger, Michael; Kanevski, Mikhail

    2015-04-01

Due to the large amount and complexity of data available nowadays in environmental sciences, we face the need for more robust methodology allowing analysis and understanding of the phenomena under study. One particular but very important aspect of this understanding is the reliability of generated prediction models. From the data collection to the prediction map, several sources of error can occur and affect the final result. These sources are mainly identified as uncertainty in the data (data noise) and uncertainty in the model. Their combination leads to the so-called prediction interval. Quantifying these two categories of uncertainty allows a finer understanding of the phenomena under study and a better assessment of the prediction accuracy. The present research deals with a methodology combining a machine learning algorithm (ELM - Extreme Learning Machine) with a bootstrap-based procedure. Developed by G.-B. Huang et al. (2006), ELM is an artificial neural network following the structure of a multilayer perceptron (MLP) with one single hidden layer. Compared to the classical MLP, ELM learns faster without loss of accuracy and needs only one hyperparameter to be fitted (the number of nodes in the hidden layer). The key steps of the proposed method are as follows: sample from the original data a variety of subsets using bootstrapping; from these subsets, train and validate ELM models; and compute residuals. Then, the same procedure is performed a second time on only the squared training residuals. Finally, taking into account the two modelling levels allows developing the mean prediction map, the model uncertainty variance, and the data noise variance, as sketched below. The proposed approach is illustrated using geospatial data. References: Efron B. and Tibshirani R. 1986, Bootstrap Methods for Standard Errors, Confidence Intervals, and Other Measures of Statistical Accuracy, Statistical Science, vol. 1: 54-75. Huang G.-B., Zhu Q.-Y., and Siew C.-K. 2006
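
    A minimal sketch of the two-level procedure, assuming a toy one-dimensional dataset and a bare-bones ELM (random hidden layer plus least-squares readout) in place of the authors' implementation:

        import numpy as np

        rng = np.random.default_rng(1)

        def elm_fit(X, y, n_hidden=20):
            """Extreme Learning Machine: random hidden weights, least-squares readout."""
            W = rng.normal(size=(X.shape[1], n_hidden))
            b = rng.normal(size=n_hidden)
            hidden = lambda Xn: np.tanh(Xn @ W + b)
            beta, *_ = np.linalg.lstsq(hidden(X), y, rcond=None)
            return lambda Xn: hidden(Xn) @ beta

        # Toy 1-D data standing in for the geospatial case study.
        X = rng.uniform(-3, 3, size=(200, 1))
        y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=200)
        Xg = np.linspace(-3, 3, 100)[:, None]

        # Level 1: bootstrap ensemble -> mean prediction map and model-uncertainty variance.
        boot = []
        for _ in range(50):
            idx = rng.integers(0, len(X), len(X))
            boot.append(elm_fit(X[idx], y[idx])(Xg))
        boot = np.asarray(boot)
        mean_map, model_var = boot.mean(axis=0), boot.var(axis=0)

        # Level 2: an ELM trained on squared training residuals -> data-noise variance.
        resid2 = (y - elm_fit(X, y)(X)) ** 2
        noise_var = np.clip(elm_fit(X, resid2)(Xg), 0.0, None)

        # 95% prediction interval combining both uncertainty sources.
        half_width = 1.96 * np.sqrt(model_var + noise_var)
        lower, upper = mean_map - half_width, mean_map + half_width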

  20. Influence of resolution in irrigated area mapping and area estimation

    USGS Publications Warehouse

    Velpuri, N.M.; Thenkabail, P.S.; Gumma, M.K.; Biradar, C.; Dheeravath, V.; Noojipady, P.; Yuanjie, L.

    2009-01-01

The overarching goal of this paper was to determine how irrigated areas change with the resolution (or scale) of imagery. Specific objectives investigated were to (a) map irrigated areas using four distinct spatial resolutions (or scales), (b) determine how irrigated areas change with resolution, and (c) establish the causes of differences in resolution-based irrigated areas. The study was conducted in the very large Krishna River basin (India), which has a high degree of formal contiguous and informal fragmented irrigated areas. The irrigated areas were mapped using satellite sensor data at four distinct resolutions: (a) NOAA AVHRR Pathfinder 10,000 m, (b) Terra MODIS 500 m, (c) Terra MODIS 250 m, and (d) Landsat ETM+ 30 m. The proportions of irrigated area relative to the Landsat 30 m derived irrigated area (9.36 million hectares for the Krishna basin) were (a) 95 percent using MODIS 250 m, (b) 93 percent using MODIS 500 m, and (c) 86 percent using AVHRR 10,000 m. In this study, it was found that the precise locations of the irrigated areas were better established using finer spatial resolution data. A strong relationship (R² = 0.74 to 0.95) was observed between irrigated areas determined using the various resolutions. This study proved the hypothesis that "the finer the spatial resolution of the sensor used, the greater the irrigated area derived," since at finer spatial resolutions, fragmented areas are detected better. Accuracies and errors were established consistently for three classes (surface water irrigated, ground water/conjunctive use irrigated, and nonirrigated) across the four resolutions mentioned above. The results showed that the Landsat data provided significantly higher overall accuracy (84 percent) when compared to MODIS 500 m (77 percent), MODIS 250 m (79 percent), and AVHRR 10,000 m (63 percent). © 2009 American Society for Photogrammetry and Remote Sensing.

  1. An hp-adaptivity and error estimation for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1995-01-01

    This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.

  2. Multiple Illuminant Colour Estimation via Statistical Inference on Factor Graphs.

    PubMed

    Mutimbu, Lawrence; Robles-Kelly, Antonio

    2016-08-31

This paper presents a method to recover a spatially varying illuminant colour estimate from scenes lit by multiple light sources. Starting with the image formation process, we formulate the illuminant recovery problem in a statistically data-driven setting. To do this, we use a factor graph defined across the scale space of the input image. In the graph, we utilise a set of illuminant prototypes computed using a data-driven approach. As a result, our method delivers a pixelwise illuminant colour estimate without recourse to illuminant libraries or user input. The use of a factor graph also allows the illuminant estimates to be recovered using a maximum a posteriori (MAP) inference process. Moreover, we compute the probability marginals by performing a Delaunay triangulation on our factor graph. We illustrate the utility of our method for pixelwise illuminant colour recovery on widely available datasets and compare against a number of alternatives. We also show sample colour correction results on real-world images.

  3. Spatiotemporal System Identification With Continuous Spatial Maps and Sparse Estimation.

    PubMed

    Aram, Parham; Kadirkamanathan, Visakan; Anderson, Sean R

    2015-11-01

We present a framework for the identification of spatiotemporal linear dynamical systems. We use a state-space model representation that has the following attributes: 1) the number of spatial observation locations is decoupled from the model order; 2) the model allows for spatial heterogeneity; 3) the model representation is continuous over space; and 4) the model parameters can be identified in a simple and sparse estimation procedure. The model identification procedure we propose has four steps: 1) decomposition of the continuous spatial field using a finite set of basis functions, where spatial frequency analysis is used to determine basis function width and spacing, such that the main spatial frequency contents of the underlying field can be captured; 2) initialization of states in closed form; 3) initialization of state-transition and input matrix model parameters using sparse regression, the least absolute shrinkage and selection operator (LASSO) method; and 4) joint state and parameter estimation using an iterative Kalman-filter/sparse-regression algorithm. To investigate the performance of the proposed algorithm we use data generated by the Kuramoto model of spatiotemporal cortical dynamics. The identification algorithm performs successfully, predicting the spatiotemporal field with high accuracy, whilst the sparse regression leads to a compact model.
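
    The iterative step 4 can be sketched as alternating between a Kalman filter pass (states given the transition matrix) and a LASSO regression (transition matrix given the filtered states). The snippet below omits the basis-function decomposition and input matrix of the full model and uses synthetic data:

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n, T = 6, 200                                 # basis-function states, time steps
        A_true = np.diag(np.full(n, 0.9))             # sparse state-transition matrix
        A_true[0, 1] = 0.3
        x = np.zeros((T, n))
        for t in range(1, T):
            x[t] = A_true @ x[t - 1] + rng.normal(scale=0.1, size=n)
        y = x + rng.normal(scale=0.05, size=x.shape)  # noisy observations of the states

        A = 0.5 * np.eye(n)                           # initial guess of the transition matrix
        Q, R = 0.01 * np.eye(n), 0.0025 * np.eye(n)   # process / observation noise covariances
        for _ in range(5):                            # iterate: filter states, re-fit A
            # Kalman filter pass with the current A (identity observation matrix).
            xf = np.zeros_like(y)
            P = np.eye(n)
            for t in range(1, T):
                xp = A @ xf[t - 1]
                Pp = A @ P @ A.T + Q
                K = Pp @ np.linalg.inv(Pp + R)
                xf[t] = xp + K @ (y[t] - xp)
                P = (np.eye(n) - K) @ Pp
            # Sparse (LASSO) regression of x_t on x_{t-1} re-estimates each row of A.
            A = np.vstack([Lasso(alpha=1e-3, fit_intercept=False)
                           .fit(xf[:-1], xf[1:, i]).coef_ for i in range(n)])
        print(np.round(A, 2))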

  4. A Probabilistic Method for Image Enhancement With Simultaneous Illumination and Reflectance Estimation.

    PubMed

    Fu, Xueyang; Liao, Yinghao; Zeng, Delu; Huang, Yue; Zhang, Xiao-Ping; Ding, Xinghao

    2015-12-01

    In this paper, a new probabilistic method for image enhancement is presented based on a simultaneous estimation of illumination and reflectance in the linear domain. We show that the linear domain model can better represent prior information for better estimation of reflectance and illumination than the logarithmic domain. A maximum a posteriori (MAP) formulation is employed with priors of both illumination and reflectance. To estimate illumination and reflectance effectively, an alternating direction method of multipliers is adopted to solve the MAP problem. The experimental results show the satisfactory performance of the proposed method to obtain reflectance and illumination with visually pleasing enhanced results and a promising convergence rate. Compared with other testing methods, the proposed method yields comparable or better results on both subjective and objective assessments.

  5. A method for estimating and removing streaking artifacts in quantitative susceptibility mapping.

    PubMed

    Li, Wei; Wang, Nian; Yu, Fang; Han, Hui; Cao, Wei; Romero, Rebecca; Tantiwongkosi, Bundhit; Duong, Timothy Q; Liu, Chunlei

    2015-03-01

    Quantitative susceptibility mapping (QSM) is a novel MRI method for quantifying tissue magnetic property. In the brain, it reflects the molecular composition and microstructure of the local tissue. However, susceptibility maps reconstructed from single-orientation data still suffer from streaking artifacts which obscure structural details and small lesions. We propose and have developed a general method for estimating streaking artifacts and subtracting them from susceptibility maps. Specifically, this method uses a sparse linear equation and least-squares (LSQR)-algorithm-based method to derive an initial estimation of magnetic susceptibility, a fast quantitative susceptibility mapping method to estimate the susceptibility boundaries, and an iterative approach to estimate the susceptibility artifact from ill-conditioned k-space regions only. With a fixed set of parameters for the initial susceptibility estimation and subsequent streaking artifact estimation and removal, the method provides an unbiased estimate of tissue susceptibility with negligible streaking artifacts, as compared to multi-orientation QSM reconstruction. This method allows for improved delineation of white matter lesions in patients with multiple sclerosis and small structures of the human brain with excellent anatomical details. The proposed methodology can be extended to other existing QSM algorithms. Copyright © 2014. Published by Elsevier Inc.
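
    For orientation, the sketch below builds the dipole kernel, identifies the ill-conditioned k-space region responsible for streaking, and performs a crude truncated division as an initial susceptibility estimate; this is a simplified stand-in for the paper's LSQR-based initial estimation, not its implementation:

        import numpy as np

        def dipole_kernel(shape):
            """Dipole kernel D(k) = 1/3 - kz^2/|k|^2. QSM inversion divides the field
            by D, which is ill-conditioned near the conical surface where D ~ 0; this
            region is the source of the streaking artifacts."""
            kx, ky, kz = np.meshgrid(*[np.fft.fftfreq(s) for s in shape], indexing="ij")
            k2 = kx**2 + ky**2 + kz**2
            with np.errstate(divide="ignore", invalid="ignore"):
                D = 1.0 / 3.0 - kz**2 / k2
            D[0, 0, 0] = 0.0
            return D

        shape = (64, 64, 64)
        D = dipole_kernel(shape)
        ill = np.abs(D) < 0.1                          # ill-conditioned k-space region

        # Crude initial estimate by truncated k-space division: regularize only the
        # ill-conditioned region (a stand-in for the LSQR-based initial estimation).
        field = np.random.default_rng(0).normal(size=shape)   # toy tissue field map
        F = np.fft.fftn(field)
        D_safe = np.where(ill, 0.1 * np.where(D >= 0, 1.0, -1.0), D)
        chi0 = np.real(np.fft.ifftn(F / D_safe))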

  6. A method for estimating and removing streaking artifacts in quantitative susceptibility mapping

    PubMed Central

    Li, Wei; Wang, Nian; Yu, Fang; Han, Hui; Cao, Wei; Romero, Rebecca; Tantiwongkosi, Bundhit; Duong, Timothy Q.; Liu, Chunlei

    2015-01-01

    Quantitative susceptibility mapping (QSM) is a novel MRI method for quantifying tissue magnetic property. In the brain, it reflects the molecular composition and microstructure of the local tissue. However, susceptibility maps reconstructed from single-orientation data still suffer from streaking artifacts which obscure structural details and small lesions. We propose and have developed a general method for estimating streaking artifacts and subtracting them from susceptibility maps. Specifically, this method uses a sparse linear equation and least-squares (LSQR)-algorithm-based method to derive an initial estimation of magnetic susceptibility, a fast quantitative susceptibility mapping method to estimate the susceptibility boundaries, and an iterative approach to estimate the susceptibility artifact from ill-conditioned k-space regions only. With a fixed set of parameters for the initial susceptibility estimation and subsequent streaking artifact estimation and removal, the method provides an unbiased estimate of tissue susceptibility with negligible streaking artifacts, as compared to multi-orientation QSM reconstruction. This method allows for improved delineation of white matter lesions in patients with multiple sclerosis and small structures of the human brain with excellent anatomical details. The proposed methodology can be extended to other existing QSM algorithms. PMID:25536496

  7. Local mapping of detector response for reliable quantum state estimation.

    PubMed

    Cooper, Merlin; Karpiński, Michał; Smith, Brian J

    2014-07-14

    Improved measurement techniques are central to technological development and foundational scientific exploration. Quantum physics relies on detectors sensitive to non-classical features of systems, enabling precise tests of physical laws and quantum-enhanced technologies including precision measurement and secure communications. Accurate detector response calibration for quantum-scale inputs is key to future research and development in these cognate areas. To address this requirement, quantum detector tomography has been recently introduced. However, this technique becomes increasingly challenging as the complexity of the detector response and input space grow in a number of measurement outcomes and required probe states, leading to further demands on experiments and data analysis. Here we present an experimental implementation of a versatile, alternative characterization technique to address many-outcome quantum detectors that limits the input calibration region and does not involve numerical post processing. To demonstrate the applicability of this approach, the calibrated detector is subsequently used to estimate non-classical photon number states.

  8. Precise Point Positioning with Ionosphere Estimation and application of Regional Ionospheric Maps

    NASA Astrophysics Data System (ADS)

    Galera Monico, J. F.; Marques, H. A.; Rocha, G. D. D. C.

    2015-12-01

The ionosphere is one of the most difficult error sources to model in GPS positioning, especially when processing data collected by single frequency receivers. For Precise Point Positioning (PPP) with single frequency data, the available options include, for example, using the Klobuchar model or applying Global Ionosphere Maps (GIM). The GIM contain Vertical Total Electron Content (VTEC) values that are commonly estimated from a global network with poor coverage in certain regions. For this reason Regional Ionosphere Maps (RIM) have been developed from local GNSS networks, for instance the La Plata Ionospheric Model (LPIM) developed in the context of SIRGAS (Geocentric Reference System for the Americas). The South American RIM are produced with data from nearly 50 GPS ground receivers; because these maps are generated for each hour with a spatial resolution of one degree, they are expected to provide better accuracy in GPS positioning for that region. Another possibility for correcting ionospheric effects in PPP is to apply an ionosphere estimation technique based on the Kalman filter. In this case, the ionosphere can be treated as a stochastic process, and a good initial guess is necessary, which can be obtained from an ionospheric map. In this paper we present the methodology involved in ionosphere estimation using a Kalman filter, and also the application of global and regional ionospheric maps in PPP as the first guess. The ionosphere estimation strategy was implemented in the in-house software RT_PPP, which is capable of performing PPP with either single or dual frequency data. GPS data from a Brazilian station near the equatorial region were processed, and results with regional maps were compared with those using global maps. Improvements of the order of 15% were observed. In the case of ionosphere estimation, the estimated coordinates were compared with the ionosphere-free solution, and after PPP convergence the results reached centimeter accuracy.
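
    A minimal sketch of the estimation idea: a one-state Kalman filter treating the slant ionospheric delay as a random walk, seeded with a first guess from an ionospheric map. The model, noise values and observable are illustrative and do not reproduce RT_PPP:

        import numpy as np

        rng = np.random.default_rng(0)

        # One-state Kalman filter: slant ionospheric delay (m) as a random walk.
        iono_map_guess = 4.0        # first guess taken from a GIM/RIM map (m)
        x, P = iono_map_guess, 1.0  # state and its variance
        Q, R = 0.01, 0.25           # process noise (random walk) and measurement noise

        truth = 4.8
        for epoch in range(60):
            z = truth + rng.normal(scale=np.sqrt(R))   # toy ionospheric observable
            P = P + Q                                   # predict (random walk)
            K = P / (P + R)                             # Kalman gain
            x = x + K * (z - x)                         # update
            P = (1.0 - K) * P
        print(f"estimated delay: {x:.2f} m")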

  9. Robust Parallel Motion Estimation and Mapping with Stereo Cameras in Underground Infrastructure

    NASA Astrophysics Data System (ADS)

    Liu, Chun; Li, Zhengning; Zhou, Yuan

    2016-06-01

We developed a novel robust motion estimation method for localization and mapping in underground infrastructure using a pre-calibrated rigid stereo camera rig. Localization and mapping in underground infrastructure is important to safety, yet it is also nontrivial, since most underground infrastructure has poor lighting conditions and featureless structure. To overcome these difficulties, we found that a parallel system is more efficient than the EKF-based SLAM approach, since it divides the motion estimation and 3D mapping tasks into separate threads, eliminating the data-association problem, which is quite an issue in SLAM. Moreover, the motion estimation thread takes advantage of a state-of-the-art robust visual odometry algorithm, which remains highly functional under low illumination and provides accurate pose information. We designed and built an unmanned vehicle and used it to collect a dataset in an underground garage. The parallel system was evaluated on this dataset. Motion estimation results indicated a relative position error of 0.3%, and 3D mapping results showed a mean position error of 13 cm. Off-line processing reduced the position error to 2 cm. Evaluation on the actual dataset showed that our system is capable of robust motion estimation and accurate 3D mapping in poor illumination and featureless underground environments.

  10. Local mapping of detector response for reliable quantum state estimation

    PubMed Central

    Cooper, Merlin; Karpiński, Michał; Smith, Brian J.

    2014-01-01

    Improved measurement techniques are central to technological development and foundational scientific exploration. Quantum physics relies on detectors sensitive to non-classical features of systems, enabling precise tests of physical laws and quantum-enhanced technologies including precision measurement and secure communications. Accurate detector response calibration for quantum-scale inputs is key to future research and development in these cognate areas. To address this requirement, quantum detector tomography has been recently introduced. However, this technique becomes increasingly challenging as the complexity of the detector response and input space grow in a number of measurement outcomes and required probe states, leading to further demands on experiments and data analysis. Here we present an experimental implementation of a versatile, alternative characterization technique to address many-outcome quantum detectors that limits the input calibration region and does not involve numerical post processing. To demonstrate the applicability of this approach, the calibrated detector is subsequently used to estimate non-classical photon number states. PMID:25019300

  11. Improvement of ocean state estimation by assimilating mapped Argo drift data.

    PubMed

    Masuda, Shuhei; Sugiura, Nozomi; Osafune, Satoshi; Doi, Toshimasa

    2014-01-01

We investigated the impact of assimilating a mapped dataset of subsurface ocean currents into an ocean state estimation. We carried out two global ocean state estimations from 2000 to 2007 using the K7 four-dimensional variational data synthesis system, one of which included an additional map of climatological geostrophic currents estimated from the global set of Argo floats. We assessed the representativeness of the volume transport in the two exercises. The assimilation of Argo ocean current data at only one level, 1000 dbar depth, had subtle impacts on the estimated volume transports, which were strongest in the subtropical North Pacific. The corrections at 10°N, where the impact was most notable, arose through the nearly complete offset of wind stress curl by the data synthesis system in conjunction with the first mode baroclinic Rossby wave adjustment. Our results imply that subsurface current data can be effective for improving the estimation of global oceanic circulation by a data synthesis.

  12. Retrospective estimation of the susceptibility driven field map for distortion correction in echo planar imaging.

    PubMed

    Takeda, Hiroyuki; Kim, Boklye

    2013-01-01

Echo planar imaging (EPI), the sequence used for acquiring functional MRI (fMRI) time series data, provides the advantage of high temporal resolution but is also highly sensitive to magnetic field inhomogeneity, resulting in geometric distortions. A static field-inhomogeneity map measured before or after the fMRI scan to correct for such distortions does not account for magnetic field changes due to head motion during the time series acquisition. In practice, the field map changes dynamically with head motion during the scan, leading to variations in the geometric distortion. In this work, we model the field inhomogeneity as the sum of an object-dependent term and a scanner-dependent term. The object-specific term varies with the object's magnetic susceptibility and orientation, i.e., head position with respect to B0; thus, a simple transformation of the acquired field map may not yield an accurate field map. We assume that the scanner-specific field remains unchanged and independent of head motion. Our approach in this study is to retrospectively estimate the object's magnetic susceptibility (chi) map from an observed high-resolution static field map, using an estimator derived from a probability density function of non-uniform noise. This approach is capable of finding the susceptibility map regardless of the wrapping effect. A dynamic field map at each head position can then be estimated by applying a rigid body transformation to the estimated chi-map, followed by 3-D susceptibility voxel convolution (SVC), a physics-based discrete convolution model for computing chi-induced field inhomogeneity.

  13. The Effect of Map Boundary on Estimates of Landscape Resistance to Animal Movement

    PubMed Central

    Koen, Erin L.; Garroway, Colin J.; Wilson, Paul J.; Bowman, Jeff

    2010-01-01

    Background Artificial boundaries on a map occur when the map extent does not cover the entire area of study; edges on the map do not exist on the ground. These artificial boundaries might bias the results of animal dispersal models by creating artificial barriers to movement for model organisms where there are no barriers for real organisms. Here, we characterize the effects of artificial boundaries on calculations of landscape resistance to movement using circuit theory. We then propose and test a solution to artificially inflated resistance values whereby we place a buffer around the artificial boundary as a substitute for the true, but unknown, habitat. Methodology/Principal Findings We randomly assigned landscape resistance values to map cells in the buffer in proportion to their occurrence in the known map area. We used circuit theory to estimate landscape resistance to organism movement and gene flow, and compared the output across several scenarios: a habitat-quality map with artificial boundaries and no buffer, a map with a buffer composed of randomized habitat quality data, and a map with a buffer composed of the true habitat quality data. We tested the sensitivity of the randomized buffer to the possibility that the composition of the real but unknown buffer is biased toward high or low quality. We found that artificial boundaries result in an overestimate of landscape resistance. Conclusions/Significance Artificial map boundaries overestimate resistance values. We recommend the use of a buffer composed of randomized habitat data as a solution to this problem. We found that resistance estimated using the randomized buffer did not differ from estimates using the real data, even when the composition of the real data was varied. Our results may be relevant to those interested in employing Circuitscape software in landscape connectivity and landscape genetics studies. PMID:20668690

  14. The effect of map boundary on estimates of landscape resistance to animal movement.

    PubMed

    Koen, Erin L; Garroway, Colin J; Wilson, Paul J; Bowman, Jeff

    2010-07-26

    Artificial boundaries on a map occur when the map extent does not cover the entire area of study; edges on the map do not exist on the ground. These artificial boundaries might bias the results of animal dispersal models by creating artificial barriers to movement for model organisms where there are no barriers for real organisms. Here, we characterize the effects of artificial boundaries on calculations of landscape resistance to movement using circuit theory. We then propose and test a solution to artificially inflated resistance values whereby we place a buffer around the artificial boundary as a substitute for the true, but unknown, habitat. We randomly assigned landscape resistance values to map cells in the buffer in proportion to their occurrence in the known map area. We used circuit theory to estimate landscape resistance to organism movement and gene flow, and compared the output across several scenarios: a habitat-quality map with artificial boundaries and no buffer, a map with a buffer composed of randomized habitat quality data, and a map with a buffer composed of the true habitat quality data. We tested the sensitivity of the randomized buffer to the possibility that the composition of the real but unknown buffer is biased toward high or low quality. We found that artificial boundaries result in an overestimate of landscape resistance. Artificial map boundaries overestimate resistance values. We recommend the use of a buffer composed of randomized habitat data as a solution to this problem. We found that resistance estimated using the randomized buffer did not differ from estimates using the real data, even when the composition of the real data was varied. Our results may be relevant to those interested in employing Circuitscape software in landscape connectivity and landscape genetics studies.

  15. Parsimonious estimation of sex-specific map distances by stepwise maximum likelihood regression

    SciTech Connect

    Fann, C.S.J.; Ott, J.

    1995-10-10

In human genetic maps, differences between female (x_f) and male (x_m) map distances may be characterized by the ratio, R = x_f/x_m, or the relative difference, Q = (x_f - x_m)/(x_f + x_m) = (R - 1)/(R + 1). For a map of genetic markers spread along a chromosome, Q(d) may be viewed as a graph of Q versus the midpoints, d, of the map intervals. To estimate male and female map distances for each interval, a novel method is proposed to evaluate the most parsimonious trend of Q(d) along the chromosome, where Q(d) is expressed as a polynomial in d. Stepwise maximum likelihood polynomial regression of Q is described. The procedure has been implemented in a FORTRAN program package, TREND, and is applied to data on chromosome 18. 11 refs., 2 figs., 3 tabs.
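
    For illustration, an ordinary least-squares polynomial fit of Q against d (hypothetical values) stands in below for the stepwise maximum likelihood regression implemented in TREND:

        import numpy as np

        # Hypothetical interval midpoints d (cM) and observed relative differences
        # Q = (x_f - x_m) / (x_f + x_m) along a chromosome.
        d = np.array([5.0, 15.0, 30.0, 50.0, 70.0, 90.0])
        Q = np.array([0.10, 0.18, 0.30, 0.35, 0.28, 0.15])

        coeffs = np.polyfit(d, Q, deg=2)        # quadratic trend Q(d)
        Q_hat = np.polyval(coeffs, d)
        R_hat = (1.0 + Q_hat) / (1.0 - Q_hat)   # back-transform to the ratio R = x_f/x_m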

  16. Searching for primordial non-Gaussianity in Planck CMB maps using a combined estimator

    SciTech Connect

Novaes, C.P.; Wuensche, C.A.; Bernui, A.; Ferreira, I.S.

    2014-01-01

The extensive search for deviations from Gaussianity in cosmic microwave background radiation (CMB) data is very important due to the information about the very early moments of the universe encoded there. Recent analyses from Planck CMB data do not exclude the presence of non-Gaussianity of small amplitude, although they are consistent with the Gaussian hypothesis. The use of different techniques is essential to provide information about types and amplitudes of non-Gaussianities in the CMB data. In particular, we find it interesting to construct an estimator based upon the combination of two powerful statistical tools that appears to be sensitive enough to detect tiny deviations from Gaussianity in CMB maps. This estimator combines the Minkowski functionals with a Neural Network, maximizing a tool widely used to study non-Gaussian signals with a reinforcement of another tool designed to identify patterns in a data set. We test our estimator by analyzing simulated CMB maps contaminated with different amounts of local primordial non-Gaussianity quantified by the dimensionless parameter f_NL. We apply it to these sets of CMB maps and find a ≳ 98% chance of positive detection, even for small intensity local non-Gaussianity like f_NL = 38±18, the current limit from Planck data for large angular scales. Additionally, we test the suitability to distinguish between primary and secondary non-Gaussianities: first we train the Neural Network with two sets, one of nearly Gaussian CMB maps (|f_NL| ≤ 10) but contaminated with realistic inhomogeneous Planck noise (i.e., secondary non-Gaussianity) and the other of non-Gaussian CMB maps, that is, maps endowed with weak primordial non-Gaussianity (28 ≤ f_NL ≤ 48); after that we test an ensemble composed of CMB maps either with one of these non-Gaussian contaminations, and find out that our method successfully classifies ∼ 95% of the tested maps as being CMB maps containing primordial or

  17. Estimation of high-resolution dust column density maps. Empirical model fits

    NASA Astrophysics Data System (ADS)

    Juvela, M.; Montillaud, J.

    2013-09-01

Context. Sub-millimetre dust emission is an important tracer of the column density N of dense interstellar clouds. One has to combine surface brightness information at different spatial resolutions, and specific methods are needed to derive N at a resolution higher than the lowest resolution of the observations. Some methods have been discussed in the literature, including a method (in the following, method B) that constructs the N estimate in stages, where the smallest spatial scales are derived using only the shortest wavelength maps. Aims: We propose simple model fitting as a flexible way to estimate high-resolution column density maps. Our goal is to evaluate the accuracy of this procedure and to determine whether it is a viable alternative for making these maps. Methods: The new method fits model maps of column density (or intensity at a reference wavelength) and colour temperature. The model is fitted using Markov chain Monte Carlo methods, comparing model predictions with observations at their native resolution. We analyse simulated surface brightness maps and compare the method's accuracy with that of method B and with the results that would be obtained using high-resolution observations without noise. Results: The new method is able to produce reliable column density estimates at a resolution significantly higher than the lowest resolution of the input maps. Compared to method B, it is relatively resilient against the effects of noise. The method is computationally more demanding, but is feasible even in the analysis of large Herschel maps. Conclusions: The proposed empirical modelling method E is demonstrated to be a good alternative for calculating high-resolution column density maps, even with considerable super-resolution. Both methods E and B include the potential for further improvements, e.g., in the form of better a priori constraints.

  18. Mapping analyses to estimate EQ-5D utilities and responses based on Oxford Knee Score.

    PubMed

    Dakin, Helen; Gray, Alastair; Murray, David

    2013-04-01

The Oxford Knee Score (OKS) is a validated 12-item measure of knee replacement outcomes. An algorithm to estimate EQ-5D utilities from OKS would facilitate cost-utility analysis of studies using OKS but not generic health state preference measures. We estimate mapping (or cross-walking) models that predict EQ-5D utilities and/or responses based on OKS. We also compare different model specifications and assess whether different datasets yield different mapping algorithms. Models were estimated using data from the Knee Arthroplasty Trial and the UK Patient Reported Outcome Measures dataset, giving a combined estimation dataset of 134,269 questionnaires from 81,213 knee replacement patients and an internal validation dataset of 45,213 questionnaires from 27,397 patients. The best model was externally validated on registry data (10,002 observations from 4,505 patients) from the South West London Elective Orthopaedic Centre. Eight models of the relationship between OKS and EQ-5D were evaluated, including ordinary least squares, generalized linear models, two-part models, three-part models and response mapping. A multinomial response mapping model using OKS responses to predict EQ-5D response levels had the best prediction accuracy, with two-part and three-part models also performing well. In the external validation sample, this model had a mean squared error of 0.033 and a mean absolute error of 0.129. Relative model performance, coefficients and predictions differed slightly but significantly between the two estimation datasets. The resulting response mapping algorithm can be used to predict EQ-5D utilities and responses from OKS responses. Response mapping appears to perform particularly well in large datasets.
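
    The response mapping approach can be sketched as one multinomial logistic model per EQ-5D dimension, each predicting the response level from the 12 OKS item responses; the snippet uses synthetic data and scikit-learn as stand-ins for the study's estimation:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 1000
        oks = rng.integers(0, 5, size=(n, 12))   # 12 OKS items, each scored 0-4

        # Synthetic EQ-5D responses: 5 dimensions, 3 levels each, loosely tied to
        # the OKS total (not real patient data).
        total = oks.sum(axis=1)
        eq5d = np.column_stack([
            np.digitize(total + rng.normal(scale=5, size=n), [20, 35])
            for _ in range(5)
        ])

        # Response mapping: one multinomial logistic model per EQ-5D dimension.
        models = [LogisticRegression(max_iter=1000).fit(oks, eq5d[:, j])
                  for j in range(5)]
        predicted_levels = np.column_stack([m.predict(oks) for m in models])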

  19. Stress Recovery and Error Estimation for Shell Structures

    NASA Technical Reports Server (NTRS)

    Yazdani, A. A.; Riggs, H. R.; Tessler, A.

    2000-01-01

    The Penalized Discrete Least-Squares (PDLS) stress recovery (smoothing) technique developed for two dimensional linear elliptic problems is adapted here to three-dimensional shell structures. The surfaces are restricted to those which have a 2-D parametric representation, or which can be built-up of such surfaces. The proposed strategy involves mapping the finite element results to the 2-D parametric space which describes the geometry, and smoothing is carried out in the parametric space using the PDLS-based Smoothing Element Analysis (SEA). Numerical results for two well-known shell problems are presented to illustrate the performance of SEA/PDLS for these problems. The recovered stresses are used in the Zienkiewicz-Zhu a posteriori error estimator. The estimated errors are used to demonstrate the performance of SEA-recovered stresses in automated adaptive mesh refinement of shell structures. The numerical results are encouraging. Further testing involving more complex, practical structures is necessary.

  20. Stress Recovery and Error Estimation for Shell Structures

    NASA Technical Reports Server (NTRS)

    Yazdani, A. A.; Riggs, H. R.; Tessler, A.

    2000-01-01

    The Penalized Discrete Least-Squares (PDLS) stress recovery (smoothing) technique developed for two dimensional linear elliptic problems is adapted here to three-dimensional shell structures. The surfaces are restricted to those which have a 2-D parametric representation, or which can be built-up of such surfaces. The proposed strategy involves mapping the finite element results to the 2-D parametric space which describes the geometry, and smoothing is carried out in the parametric space using the PDLS-based Smoothing Element Analysis (SEA). Numerical results for two well-known shell problems are presented to illustrate the performance of SEA/PDLS for these problems. The recovered stresses are used in the Zienkiewicz-Zhu a posteriori error estimator. The estimated errors are used to demonstrate the performance of SEA-recovered stresses in automated adaptive mesh refinement of shell structures. The numerical results are encouraging. Further testing involving more complex, practical structures is necessary.

  1. Decision-Making in Structure Solution using Bayesian Estimates of Map Quality: The PHENIX AutoSol Wizard

    SciTech Connect

    Terwilliger, T. C.; Adams, P. D.; Read, R. J.; McCoy, A. J.; Moriarty, Nigel W.; Grosse-Kunstleve, R. W.; Afonine, P. V.; Zwart, P. H.; Hung, L.-W.

    2009-03-01

    Estimates of the quality of experimental maps are important in many stages of structure determination of macromolecules. Map quality is defined here as the correlation between a map and the map calculated based on a final refined model. Here we examine 10 different measures of experimental map quality using a set of 1359 maps calculated by reanalysis of 246 solved MAD, SAD, and MIR datasets. A simple Bayesian approach to estimation of map quality from one or more measures is presented. We find that a Bayesian estimator based on the skew of histograms of electron density is the most accurate of the 10 individual Bayesian estimators of map quality examined, with a correlation between estimated and actual map quality of 0.90. A combination of the skew of electron density with the local correlation of rms density gives a further improvement in estimating map quality, with an overall correlation coefficient of 0.92. The PHENIX AutoSol Wizard carries out automated structure solution based on any combination of SAD, MAD, SIR, or MIR datasets. The Wizard is based on tools from the PHENIX package and uses the Bayesian estimates of map quality described here to choose the highest-quality solutions after experimental phasing.
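
    A minimal sketch of a skew-based Bayesian estimator: compute the skewness of the density histogram, then form a posterior over discretized map-quality bins using assumed Gaussian likelihoods; the calibration numbers are hypothetical, not PHENIX's:

        import numpy as np
        from scipy.stats import skew, norm

        # Skew of the electron-density histogram: good experimental maps have a
        # positive tail (density concentrated at atoms); poor maps are symmetric.
        rho = np.random.default_rng(0).gamma(2.0, 1.0, size=100_000) - 2.0  # toy map
        s = skew(rho)

        # Toy Bayesian estimator over discretized map quality (correlation bins),
        # with Gaussian likelihoods p(skew | quality) assumed known from training maps.
        quality_bins = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
        lik_mean = np.array([0.05, 0.3, 0.6, 0.9, 1.3])    # hypothetical calibration
        lik_sd = 0.25
        prior = np.full(5, 0.2)
        posterior = prior * norm.pdf(s, loc=lik_mean, scale=lik_sd)
        posterior /= posterior.sum()
        print("estimated map quality:", quality_bins @ posterior)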

  2. Position Estimation and Local Mapping Using Omnidirectional Images and Global Appearance Descriptors

    PubMed Central

    Berenguer, Yerai; Payá, Luis; Ballesta, Mónica; Reinoso, Oscar

    2015-01-01

    This work presents some methods to create local maps and to estimate the position of a mobile robot, using the global appearance of omnidirectional images. We use a robot that carries an omnidirectional vision system on it. Every omnidirectional image acquired by the robot is described only with one global appearance descriptor, based on the Radon transform. In the work presented in this paper, two different possibilities have been considered. In the first one, we assume the existence of a map previously built composed of omnidirectional images that have been captured from previously-known positions. The purpose in this case consists of estimating the nearest position of the map to the current position of the robot, making use of the visual information acquired by the robot from its current (unknown) position. In the second one, we assume that we have a model of the environment composed of omnidirectional images, but with no information about the location of where the images were acquired. The purpose in this case consists of building a local map and estimating the position of the robot within this map. Both methods are tested with different databases (including virtual and real images) taking into consideration the changes of the position of different objects in the environment, different lighting conditions and occlusions. The results show the effectiveness and the robustness of both methods. PMID:26501289
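
    The descriptor-and-matching idea can be sketched as follows, using scikit-image's Radon transform on toy images in place of the paper's omnidirectional pipeline; the nearest map position is chosen by cosine similarity between descriptors:

        import numpy as np
        from skimage.transform import radon

        def global_descriptor(img, n_angles=60):
            """Global appearance descriptor: the Radon transform of an image,
            flattened and normalized (a simplified stand-in for the paper's
            descriptor)."""
            theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
            d = radon(img, theta=theta, circle=False).ravel()
            return d / np.linalg.norm(d)

        rng = np.random.default_rng(0)
        map_images = [rng.random((64, 64)) for _ in range(10)]   # toy map of 10 poses
        map_desc = np.array([global_descriptor(im) for im in map_images])

        query = map_images[3] + 0.05 * rng.random((64, 64))      # image from unknown pose
        q = global_descriptor(query)
        nearest = int(np.argmax(map_desc @ q))                   # cosine similarity
        print("estimated nearest map position:", nearest)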

  3. Position estimation and local mapping using omnidirectional images and global appearance descriptors.

    PubMed

    Berenguer, Yerai; Payá, Luis; Ballesta, Mónica; Reinoso, Oscar

    2015-10-16

    This work presents some methods to create local maps and to estimate the position of a mobile robot, using the global appearance of omnidirectional images. We use a robot that carries an omnidirectional vision system on it. Every omnidirectional image acquired by the robot is described only with one global appearance descriptor, based on the Radon transform. In the work presented in this paper, two different possibilities have been considered. In the first one, we assume the existence of a map previously built composed of omnidirectional images that have been captured from previously-known positions. The purpose in this case consists of estimating the nearest position of the map to the current position of the robot, making use of the visual information acquired by the robot from its current (unknown) position. In the second one, we assume that we have a model of the environment composed of omnidirectional images, but with no information about the location of where the images were acquired. The purpose in this case consists of building a local map and estimating the position of the robot within this map. Both methods are tested with different databases (including virtual and real images) taking into consideration the changes of the position of different objects in the environment, different lighting conditions and occlusions. The results show the effectiveness and the robustness of both methods.

  4. Precipitation estimation in mountainous terrain using multivariate geostatistics. Part II: isohyetal maps

    USGS Publications Warehouse

    Hevesi, Joseph A.; Flint, Alan L.; Istok, Jonathan D.

    1992-01-01

    Values of average annual precipitation (AAP) may be important for hydrologic characterization of a potential high-level nuclear-waste repository site at Yucca Mountain, Nevada. Reliable measurements of AAP are sparse in the vicinity of Yucca Mountain, and estimates of AAP were needed for an isohyetal mapping over a 2600-square-mile watershed containing Yucca Mountain. Estimates were obtained with a multivariate geostatistical model developed using AAP and elevation data from a network of 42 precipitation stations in southern Nevada and southeastern California. An additional 1531 elevations were obtained to improve estimation accuracy. Isohyets representing estimates obtained using univariate geostatistics (kriging) defined a smooth and continuous surface. Isohyets representing estimates obtained using multivariate geostatistics (cokriging) defined an irregular surface that more accurately represented expected local orographic influences on AAP. Cokriging results included a maximum estimate within the study area of 335 mm at an elevation of 7400 ft, an average estimate of 157 mm for the study area, and an average estimate of 172 mm at eight locations in the vicinity of the potential repository site. Kriging estimates tended to be lower in comparison because the increased AAP expected for remote mountainous topography was not adequately represented by the available sample. Regression results between cokriging estimates and elevation were similar to regression results between measured AAP and elevation. The position of the cokriging 250-mm isohyet relative to the boundaries of pinyon pine and juniper woodlands provided indirect evidence of improved estimation accuracy because the cokriging result agreed well with investigations by others concerning the relationship between elevation, vegetation, and climate in the Great Basin. Calculated estimation variances were also mapped and compared to evaluate improvements in estimation accuracy. Cokriging estimation variances

  5. Map correlation method: Selection of a reference streamgage to estimate daily streamflow at ungaged catchments.

    USGS Publications Warehouse

    Archfield, Stacey A.; Vogel, Richard M.

    2010-01-01

    Daily streamflow time series are critical to a very broad range of hydrologic problems. Whereas daily streamflow time series are readily obtained from gaged catchments, streamflow information is commonly needed at catchments for which no measured streamflow information exists. At ungaged catchments, methods to estimate daily streamflow time series typically require the use of a reference streamgage, which transfers properties of the streamflow time series at a reference streamgage to the ungaged catchment. Therefore, the selection of a reference streamgage is one of the central challenges associated with estimation of daily streamflow at ungaged basins. The reference streamgage is typically selected by choosing the nearest streamgage; however, this paper shows that selection of the nearest streamgage does not provide a consistent selection criterion. We introduce a new method, termed the map-correlation method, which selects the reference streamgage whose daily streamflows are most correlated with an ungaged catchment. When applied to the estimation of daily streamflow at 28 streamgages across southern New England, daily streamflows estimated by a reference streamgage selected using the map-correlation method generally provides improved estimates of daily streamflow time series over streamflows estimated by the selection and use of the nearest streamgage. The map correlation method could have potential for many other applications including identifying redundancy and uniqueness in a streamgage network, calibration of rainfall runoff models at ungaged sites, as well as for use in catchment classification.
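
    The selection criterion, choose the candidate reference streamgage whose daily flows correlate best with the site of interest rather than the nearest one, can be sketched with pandas; here a short concurrent record stands in for the kriged correlation map the method actually uses at truly ungaged sites:

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        days = pd.date_range("2020-01-01", periods=365)

        # Toy daily flows at three gaged catchments (two share a climate signal).
        base = rng.gamma(2.0, 1.0, size=365)
        gages = pd.DataFrame({
            "gage_A": base + rng.normal(scale=0.2, size=365),
            "gage_B": base + rng.normal(scale=0.8, size=365),
            "gage_C": rng.gamma(2.0, 1.0, size=365),
        }, index=days)

        # Selection step: pick the candidate most correlated with the site of
        # interest (represented here by a short concurrent record).
        short_record = pd.Series(base[:60] + rng.normal(scale=0.3, size=60),
                                 index=days[:60])
        corr = gages.iloc[:60].corrwith(short_record)
        print("reference streamgage:", corr.idxmax())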

  6. Decision-making in structure solution using Bayesian estimates of map quality: the PHENIX AutoSol wizard

    SciTech Connect

    Terwilliger, Thomas C.; Adams, Paul D.; Read, Randy J.; McCoy, Airlie J.; Moriarty, Nigel W.; Grosse-Kunstleve, Ralf W.; Afonine, Pavel V.; Zwart, Peter H.; Hung, Li-Wei

    2009-06-01

    Ten measures of experimental electron-density-map quality are examined and the skewness of electron density is found to be the best indicator of actual map quality. A Bayesian approach to estimating map quality is developed and used in the PHENIX AutoSol wizard to make decisions during automated structure solution. Estimates of the quality of experimental maps are important in many stages of structure determination of macromolecules. Map quality is defined here as the correlation between a map and the corresponding map obtained using phases from the final refined model. Here, ten different measures of experimental map quality were examined using a set of 1359 maps calculated by re-analysis of 246 solved MAD, SAD and MIR data sets. A simple Bayesian approach to estimation of map quality from one or more measures is presented. It was found that a Bayesian estimator based on the skewness of the density values in an electron-density map is the most accurate of the ten individual Bayesian estimators of map quality examined, with a correlation between estimated and actual map quality of 0.90. A combination of the skewness of electron density with the local correlation of r.m.s. density gives a further improvement in estimating map quality, with an overall correlation coefficient of 0.92. The PHENIX AutoSol wizard carries out automated structure solution based on any combination of SAD, MAD, SIR or MIR data sets. The wizard is based on tools from the PHENIX package and uses the Bayesian estimates of map quality described here to choose the highest quality solutions after experimental phasing.

  7. Parallel computation of a maximum-likelihood estimator of a physical map.

    PubMed Central

    Bhandarkar, S M; Machaka, S A; Shete, S S; Kota, R N

    2001-01-01

    Reconstructing a physical map of a chromosome from a genomic library presents a central computational problem in genetics. Physical map reconstruction in the presence of errors is a problem of high computational complexity that provides the motivation for parallel computing. Parallelization strategies for a maximum-likelihood estimation-based approach to physical map reconstruction are presented. The estimation procedure entails a gradient descent search for determining the optimal spacings between probes for a given probe ordering. The optimal probe ordering is determined using a stochastic optimization algorithm such as simulated annealing or microcanonical annealing. A two-level parallelization strategy is proposed wherein the gradient descent search is parallelized at the lower level and the stochastic optimization algorithm is simultaneously parallelized at the higher level. Implementation and experimental results on a distributed-memory multiprocessor cluster running the parallel virtual machine (PVM) environment are presented using simulated and real hybridization data. PMID:11238392

  8. Minimizing biases in estimating the reorganization of human visual areas with BOLD retinotopic mapping

    PubMed Central

    Binda, Paola; Thomas, Jessica M.; Boynton, Geoffrey M.; Fine, Ione

    2013-01-01

    There is substantial interest in using functional magnetic resonance imaging (fMRI) retinotopic mapping techniques to examine reorganization of the occipital cortex after vision loss in humans and nonhuman primates. However, previous reports suggest that standard phase encoding and the more recent population Receptive Field (pRF) techniques give biased estimates of retinotopic maps near the boundaries of retinal or cortical scotomas. Here we examine the sources of this bias and show how it can be minimized with a simple modification of the pRF method. In normally sighted subjects, we measured fMRI responses to a stimulus simulating a foveal scotoma; we found that unbiased retinotopic map estimates can be obtained in early visual areas, as long as the pRF fitting algorithm takes the scotoma into account and a randomized “multifocal” stimulus sequence is used. PMID:23788461

  9. Simulation appraisal of the adequacy of numbers of background markers for relationship estimation in association mapping

    USDA-ARS?s Scientific Manuscript database

    The number of background markers and sample size are two common issues that need to be addressed in many association mapping studies. Our objectives were (1) to investigate the robustness of genetic relatedness estimates based on different numbers of background markers via model testing and variance...

  10. EFFECTS OF IMPROVED PRECIPITATION ESTIMATES ON AUTOMATED RUNOFF MAPPING: EASTERN UNITED STATES

    EPA Science Inventory

    We evaluated maps of runoff created by means of two automated procedures. We implemented each procedure using precipitation estimates of both 5-km and 10-km resolution from PRISM (Parameter-elevation Regressions on Independent Slopes Model). Our goal was to determine if using the...

  11. Estimation of the Local Incidence Angle Map from a Single SAR Image

    NASA Astrophysics Data System (ADS)

    Di Martino, Gerardo; Di Simone, Alessio; Iodice, Antonio; Riccio, Daniele; Ruello, Giuseppe

    2016-08-01

The ongoing ESA SENTINEL-1 mission witnesses the key role of synthetic aperture radar (SAR) systems in Earth observation and monitoring by means of continuous radar mapping of our planet's surface. By exploiting the peculiarities of the radiation-matter interaction, SAR data contain a wealth of information concerning the physical and chemical properties of the illuminated surface. Due to the large number of surface parameters influencing SAR data formation, very few scientific papers address the estimation of such parameters directly from a single SAR image. In this paper, a technique for estimating the local incidence angle map from a single SAR image is derived. The proposed method relies on a solid theoretical background and well-assessed models and methods. The efficacy of the new estimation technique is assessed on both simulated and actual SAR images.

  12. Class-specific weighting for Markov random field estimation: application to medical image segmentation.

    PubMed

    Monaco, James P; Madabhushi, Anant

    2012-12-01

    Many estimation tasks require Bayesian classifiers capable of adjusting their performance (e.g. sensitivity/specificity). In situations where the optimal classification decision can be identified by an exhaustive search over all possible classes, means for adjusting classifier performance, such as probability thresholding or weighting the a posteriori probabilities, are well established. Unfortunately, analogous methods compatible with Markov random fields (i.e. large collections of dependent random variables) are noticeably absent from the literature. Consequently, most Markov random field (MRF) based classification systems typically restrict their performance to a single, static operating point (i.e. a paired sensitivity/specificity). To address this deficiency, we previously introduced an extension of maximum posterior marginals (MPM) estimation that allows certain classes to be weighted more heavily than others, thus providing a means for varying classifier performance. However, this extension is not appropriate for the more popular maximum a posteriori (MAP) estimation. Thus, a strategy for varying the performance of MAP estimators is still needed. Such a strategy is essential for several reasons: (1) the MAP cost function may be more appropriate in certain classification tasks than the MPM cost function, (2) the literature provides a surfeit of MAP estimation implementations, several of which are considerably faster than the typical Markov Chain Monte Carlo methods used for MPM, and (3) MAP estimation is used far more often than MPM. Consequently, in this paper we introduce multiplicative weighted MAP (MWMAP) estimation, achieved via the incorporation of multiplicative weights into the MAP cost function, which allows certain classes to be preferred over others. This creates a natural bias for specific classes, and consequently a means for adjusting classifier performance. Similarly, we show how this multiplicative weighting strategy can be applied to the MPM
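
    The toy sketch below shows the effect of multiplicative class weighting on a pixel-wise MAP decision rule for two Gaussian classes; it omits the MRF neighborhood term that is the paper's actual setting, and all distributions and weights are invented for illustration.

```python
# Scaling the posterior of class 1 by a weight w biases MAP decisions toward
# that class, sweeping out different sensitivity/specificity operating points.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
labels = rng.random(n) < 0.3                          # class-1 prevalence 30%
x = np.where(labels, rng.normal(1.0, 1.0, n), rng.normal(-1.0, 1.0, n))

def posterior(x, mu, prior):                          # unnormalized posterior
    return prior * np.exp(-0.5 * (x - mu) ** 2)

p0 = posterior(x, -1.0, 0.7)
p1 = posterior(x, 1.0, 0.3)

for w in (0.5, 1.0, 2.0, 4.0):
    decide1 = w * p1 > p0                             # weighted MAP rule
    sens = np.mean(decide1[labels])
    spec = np.mean(~decide1[~labels])
    print(f"w={w:3.1f}  sensitivity={sens:.3f}  specificity={spec:.3f}")
```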

  13. Estimation of flood environmental effects using flood zone mapping techniques in Halilrood Kerman, Iran.

    PubMed

    Boudaghpour, Siamak; Bagheri, Majid; Bagheri, Zahra

    2014-01-01

    High flood occurrences with large environmental damages have shown a growing trend in Iran. Dynamic movements of water during a flood cause different environmental damages in geographical areas with different characteristics such as topographic conditions. In general, environmental effects and damages caused by a flood in an area can be investigated from different points of view. The current essay aims to detect environmental effects of flood occurrences in the Halilrood catchment area of Kerman province in Iran using flood zone mapping techniques. The intended flood zone map was produced in four steps. Steps 1 to 3 pave the way to calculate and estimate the flood zone map in the study area, while step 4 estimates the environmental effects of flood occurrence. Based on our studies, a wide range of accuracy in estimating the environmental effects of flood occurrence was achieved using flood zone mapping techniques. Moreover, it was identified that the existence of the Jiroft dam in the study area can decrease the flood zone from 260 hectares to 225 hectares and reduce flood peak intensity by 20%. As a result, 14% of the flood zone in the study area can be protected environmentally.

  14. Sensitivity of land use change emission estimates to historical land use and land cover mapping

    NASA Astrophysics Data System (ADS)

    Peng, Shushi; Ciais, Philippe; Maignan, Fabienne; Li, Wei; Chang, Jinfeng; Wang, Tao; Yue, Chao

    2017-04-01

    The carbon emissions from land use and land cover change (ELUC) are an important anthropogenic component of the global carbon budget. Yet these emissions have a large uncertainty. Uncertainty in historical land use and land cover change (LULCC) maps and their implementation in global vegetation models is one of the key sources of the spread of ELUC calculated by global vegetation models. In this study, we used the Organizing Carbon and Hydrology in Dynamic Ecosystems terrestrial biosphere model to investigate how the different transition rules to define the priority of conversion from natural vegetation to agricultural land affect the historical reconstruction of plant functional types (PFTs) and ELUC. First, we reconstructed 10 sets of historical PFT maps using different transition rules and two methods. Then, we calculated ELUC from these 10 different historical PFT maps and an additional published PFT reconstruction, using the difference between two sets of simulations (with and without LULCC). The total area of forest loss is highly correlated with the total simulated ELUC (R2 = 0.83, P < 0.001) across the reconstructed PFT maps, which indicates that the choice of transition rules is a critical (and often overlooked) decision affecting the simulated ELUC. In addition to the choice of a transition rule, the initial land cover map and the reconstruction method for the reconstruction of historical PFT maps have an important impact on the resultant estimates of ELUC.

  15. Estimated flood-inundation maps for Cowskin Creek in western Wichita, Kansas

    USGS Publications Warehouse

    Studley, Seth E.

    2003-01-01

    The October 31, 1998, flood on Cowskin Creek in western Wichita, Kansas, caused millions of dollars in damages. Emergency management personnel and flood mitigation teams had difficulty in efficiently identifying areas affected by the flooding, and no warning was given to residents because flood-inundation information was not available. To provide detailed information about future flooding on Cowskin Creek, high-resolution estimated flood-inundation maps were developed using geographic information system technology and advanced hydraulic analysis. Two-foot-interval land-surface elevation data from a 1996 flood insurance study were used to create a three-dimensional topographic representation of the study area for hydraulic analysis. The data computed from the hydraulic analyses were converted into geographic information system format with software from the U.S. Army Corps of Engineers' Hydrologic Engineering Center. The results were overlaid on the three-dimensional topographic representation of the study area to produce maps of estimated flood-inundation areas and estimated depths of water in the inundated areas for 1-foot increments on the basis of stream stage at an index streamflow-gaging station. A Web site (http://ks.water.usgs.gov/Kansas/cowskin.floodwatch) was developed to provide the public with information pertaining to flooding in the study area. The Web site shows graphs of the real-time streamflow data for U.S. Geological Survey gaging stations in the area and monitors the National Weather Service Arkansas-Red Basin River Forecast Center for Cowskin Creek flood-forecast information. When a flood is forecast for the Cowskin Creek Basin, an estimated flood-inundation map is displayed for the stream stage closest to the National Weather Service's forecasted peak stage. Users of the Web site are able to view the estimated flood-inundation maps for selected stages at any time and to access information about this report and about flooding in general. Flood

  16. Relative risk estimation for malaria disease mapping based on stochastic SIR-SI model in Malaysia

    NASA Astrophysics Data System (ADS)

    Samat, Nor Azah; Ma'arof, Syafiqah Husna Mohd Imam

    2016-10-01

    Disease mapping is the study of the geographical distribution of a disease, representing epidemiological data spatially. The production of maps is important to identify areas that deserve closer scrutiny or more attention. In this study, a mosquito-borne disease, malaria, is the focus of our application. Malaria is caused by parasites of the genus Plasmodium and is transmitted to people through the bites of infected female Anopheles mosquitoes. Precautionary steps need to be considered in order to prevent malaria from spreading around the world, especially in tropical and subtropical countries, where the number of cases would otherwise increase. Thus, the purpose of this paper is to discuss a stochastic model employed to estimate the relative risk of malaria in Malaysia. The outcomes of the analysis include a malaria risk map for all 16 states in Malaysia, revealing the high- and low-risk areas of malaria occurrence.

  17. Estimating Sulfur hexafluoride (SF6) emissions in China using atmospheric observations and inverse modeling

    NASA Astrophysics Data System (ADS)

    Fang, X.; Thompson, R.; Saito, T.; Yokouchi, Y.; Li, S.; Kim, J.; Kim, K.; Park, S.; Graziosi, F.; Stohl, A.

    2013-12-01

    With a global warming potential of around 22800 over a 100-year time horizon, sulfur hexafluoride (SF6) is one of the greenhouse gases regulated under the Kyoto Protocol. Global SF6 emissions have been increasing since circa the year 2000. The reason for this increase has been inferred to be rapidly increasing emissions in developing countries that are not obligated to report their annual emissions to the United Nations Framework Convention on Climate Change, notably China. In this study, SF6 emissions during the period 2006-2012 for China and other East Asian countries were determined using in-situ atmospheric measurements and inverse modeling. We performed various inversion sensitivity tests, which show that the largest uncertainties in the a posteriori Chinese emissions are associated with the a priori emissions used and their uncertainty, the station network, as well as the meteorological input data. The overall relative uncertainty of the a posteriori emissions in China is estimated to be 17% in 2008. Based on the sensitivity tests, we employed the optimal parameters in our inversion setup and performed yearly inversions for the study period. Inversion results show that the total a posteriori SF6 emissions from China increased from 1420 ± 245 Mg/yr in 2006 to 2741 ± 472 Mg/yr in 2009 and stabilized thereafter. The rapid increase in emissions reflected a fast increase in SF6 consumption in China, a result also found in bottom-up estimates. The a posteriori emission map shows high emissions concentrated in populated parts of China. During the period 2006-2012, emissions in northwestern and northern China peaked around the year 2009, while emissions in eastern, central and northeastern China grew gradually during almost the whole period. Fluctuating emissions are observed for southwestern China. These regional differences are likely caused by changes in provincial SF6 usage and by shifts of usage among different sectors.

  18. Estimation and mapping of uranium content of geological units in France.

    PubMed

    Ielsch, G; Cuney, M; Buscail, F; Rossi, F; Leon, A; Cushing, M E

    2017-01-01

    In France, natural radiation accounts for most of the population's exposure to ionizing radiation. The Institute for Radiological Protection and Nuclear Safety (IRSN) carries out studies to evaluate the variability of natural radioactivity over the French territory. In this framework, the present study evaluated uranium concentrations in bedrock. The objective was to provide an estimate of the uranium content of each geological unit defined in the geological map of France (1:1,000,000). The methodology was based on the interpretation of existing geochemical data (results of whole-rock sample analyses) and on knowledge of the petrology and lithology of the geological units, which yielded a first estimate of the uranium content of the rocks. This first estimate was then refined using additional information. For example, some particular or regional sedimentary rocks that could present uranium contents higher than those generally observed for their lithologies were identified. Moreover, mining databases provided information on the location of uranium and coal/lignite mines and thus indicated the location of particular uranium-rich rocks. The geological units, defined from boundaries extracted from the geological map of France (1:1,000,000), were finally classified into 5 categories based on their mean uranium content. The map obtained provides useful data for establishing the geogenic radon map of France, but also for mapping countrywide exposure to terrestrial radiation and for evaluating background levels of natural radioactivity used in impact assessments of anthropogenic activities. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Estimating random signal parameters from noisy images with nuisance parameters

    PubMed Central

    Whitaker, Meredith Kathryn; Clarkson, Eric; Barrett, Harrison H.

    2008-01-01

    In a pure estimation task, an object of interest is known to be present, and we wish to determine numerical values for parameters that describe the object. This paper compares the theoretical framework, implementation method, and performance of two estimation procedures. We examined the performance of these estimators for tasks such as estimating signal location, signal volume, signal amplitude, or any combination of these parameters. The signal is embedded in a random background to simulate the effect of nuisance parameters. First, we explore the classical Wiener estimator, which operates linearly on the data and minimizes the ensemble mean-squared error. The results of our performance tests indicate that the Wiener estimator can estimate amplitude and shape once a signal has been located, but is fundamentally unable to locate a signal regardless of the quality of the image. Given these new results on the fundamental limitations of Wiener estimation, we extend our methods to include more complex data processing. We introduce and evaluate a scanning-linear estimator that performs impressively for location estimation. The scanning action of the estimator refers to seeking a solution that maximizes a linear metric, thereby requiring a global-extremum search. The linear metric to be optimized can be derived as a special case of maximum a posteriori (MAP) estimation when the likelihood is Gaussian and a slowly varying covariance approximation is made. PMID:18545527

  20. Estimation of Stand Height and Forest Volume Using High Resolution Stereo Photography and Forest Type Map

    NASA Astrophysics Data System (ADS)

    Kim, K. M.

    2016-06-01

    Traditional field methods for measuring tree heights are often too costly and time consuming. An alternative remote sensing approach is to measure tree heights from digital stereo photographs, which is more practical for forest managers and less expensive than LiDAR or synthetic aperture radar. This work proposes an estimation of stand height and forest volume (m³/ha) using a normalized digital surface model (nDSM) from high-resolution stereo photography (25 cm resolution) and a forest type map. The study area was located in the Mt. Maehwa model forest in Hong Chun-Gun, South Korea. The forest type map has four attributes per stand: major species, age class, DBH class and crown density class. Overlapping aerial photos were taken in September 2013 and a digital surface model (DSM) was created by photogrammetric methods (aerial triangulation, digital image matching). A digital terrain model (DTM) was then created by filtering the DSM, and the DTM was subtracted from the DSM pixel by pixel, resulting in the nDSM, which represents object heights (buildings, trees, etc.). Two independent variables from the nDSM were used to estimate forest stand volume: crown density (%) and stand height (m). First, crown density was calculated using a canopy segmentation method considering live crown ratio. Next, stand height was produced by averaging individual tree heights in a stand using Esri's ArcGIS and the USDA Forest Service's FUSION software. Finally, stand volume was estimated and mapped using aerial photo stand volume equations by species, which have two independent variables, crown density and stand height. South Korea has a historical imagery archive which can show forest change over 40 years of successful forest rehabilitation. For a future study, a forest volume change map (1970s-present) will be produced using this stand volume estimation method and the historical imagery archive.
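
    A minimal sketch of the height-extraction step described above, assuming the DSM and DTM rasters are already in hand: the nDSM is their difference, and stand height is the mean of nDSM values within each stand. Elevations and stand labels here are synthetic placeholders.

```python
# nDSM = DSM - DTM, then per-stand mean height from an integer label raster.
import numpy as np

rng = np.random.default_rng(3)
dsm = 100 + rng.random((50, 50)) * 30        # surface elevations (m)
dtm = 100 + rng.random((50, 50)) * 5         # bare-earth elevations (m)
ndsm = np.clip(dsm - dtm, 0, None)           # object heights; clamp negatives

# Four quadrant "stands" encoded as an integer label raster.
stands = (np.arange(50)[:, None] // 25) * 2 + (np.arange(50)[None, :] // 25)
for sid in np.unique(stands):
    print(f"stand {sid}: mean height {ndsm[stands == sid].mean():.1f} m")
```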

  1. Arsenic risk mapping in Bangladesh: a simulation technique of cokriging estimation from regional count data.

    PubMed

    Hassan, M Manzurul; Atkins, Peter J

    2007-10-01

    Risk analysis with spatial interpolation methods from a regional database on to a continuous surface is of contemporary interest. Groundwater arsenic poisoning in Bangladesh and its impact on human health has been one of the "biggest environmental health disasters" of recent years. It is ironic that so many tubewells have been installed in recent times for pathogen-free drinking water but the water pumped is often contaminated with toxic levels of arsenic. This paper seeks to analyse the spatial pattern of arsenic risk by mapping composite "problem regions" in southwest Bangladesh. It also examines the cokriging interpolation method in analysing the suitability of isopleth maps for different risk areas. GIS-based data processing and spatial analysis were used for this research, along with state-of-the-art decision-making techniques. Apart from the GIS-based buffering and overlay mapping operations, a cokriging interpolation method was adopted because of its exact interpolation capacity. The paper presents an interpolation of regional estimates of arsenic data for spatial risk mapping that overcomes the areal bias problem for administrative boundaries. Moreover, the functionality of the cokriging method demonstrates the suitability of isopleth maps that are easy to read.

  2. Soil amplification maps for estimating earthquake ground motions in the Central US

    USGS Publications Warehouse

    Bauer, R.A.; Kiefer, J.; Hester, N.

    2001-01-01

    The State Geologists of the Central United States Earthquake Consortium (CUSEC) are developing maps to assist State and local emergency managers and community officials in evaluating the earthquake hazards for the CUSEC region. The state geological surveys have worked together to produce a series of maps that show seismic shaking potential for eleven 1 × 2 degree (scale 1:250 000, or 1 in. ≈ 3.9 miles) quadrangles that cover the high-risk area of the New Madrid Seismic Zone in eight states. Shear wave velocity values for the surficial materials were gathered and used to classify the soils according to their potential to amplify earthquake ground motions. Geologic base maps of surficial materials or 3-D material maps, either existing or produced for this project, were used in conjunction with shear wave velocities to classify the soils for the upper 15-30 m. These maps are available in an electronic form suitable for inclusion in the federal emergency management agency's earthquake loss estimation program (HAZUS). © 2001 Elsevier Science B.V. All rights reserved.

  3. Monopole and dipole estimation for multi-frequency sky maps by linear regression

    NASA Astrophysics Data System (ADS)

    Wehus, I. K.; Fuskeland, U.; Eriksen, H. K.; Banday, A. J.; Dickinson, C.; Ghosh, T.; Górski, K. M.; Lawrence, C. R.; Leahy, J. P.; Maino, D.; Reich, P.; Reich, W.

    2017-01-01

    We describe a simple but efficient method for deriving a consistent set of monopole and dipole corrections for multi-frequency sky map data sets, allowing robust parametric component separation with the same data set. The computational core of this method is linear regression between pairs of frequency maps, often called T-T plots. Individual contributions from monopole and dipole terms are determined by performing the regression locally in patches on the sky, while the degeneracy between different frequencies is lifted whenever the dominant foreground component exhibits a significant spatial spectral index variation. Based on this method, we present two different, but each internally consistent, sets of monopole and dipole coefficients for the nine-year WMAP, Planck 2013, SFD 100 μm, Haslam 408 MHz and Reich & Reich 1420 MHz maps. The two sets have been derived with different analysis assumptions and data selection, and provide an estimate of residual systematic uncertainties. In general, our values are in good agreement with previously published results. Among the most notable results are a relative dipole between the WMAP and Planck experiments of 10-15μK (depending on frequency), an estimate of the 408 MHz map monopole of 8.9 ± 1.3 K, and a non-zero dipole in the 1420 MHz map of 0.15 ± 0.03 K pointing towards Galactic coordinates (l,b) = (308°,-36°) ± 14°. These values represent the sum of any instrumental and data processing offsets, as well as any Galactic or extra-Galactic component that is spectrally uniform over the full sky.
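
    A toy version of the T-T plot regression at the core of this method, for a single sky patch treated as a 1-D pixel vector: regressing one frequency map on another recovers the foreground spectral scaling as the slope and the relative monopole as the intercept. All amplitudes are synthetic; the 8.9 offset merely echoes the scale of the 408 MHz monopole quoted above.

```python
# Linear regression between two simulated frequency maps sharing a foreground.
import numpy as np

rng = np.random.default_rng(4)
foreground = np.abs(rng.standard_normal(2000)) * 50      # common sky signal
map_a = 1.0 * foreground + rng.normal(0, 2, 2000)
map_b = 0.6 * foreground + 8.9 + rng.normal(0, 2, 2000)  # offset to recover

slope, intercept = np.polyfit(map_a, map_b, 1)
print(f"slope={slope:.3f} (true 0.6), monopole offset={intercept:.2f} (true 8.9)")
```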

  4. Direct estimation of tracer-kinetic parameter maps from highly undersampled brain dynamic contrast enhanced MRI.

    PubMed

    Guo, Yi; Lingala, Sajan Goud; Zhu, Yinghua; Lebel, R Marc; Nayak, Krishna S

    2017-10-01

    The purpose of this work was to develop and evaluate a T1-weighted dynamic contrast enhanced (DCE) MRI methodology where tracer-kinetic (TK) parameter maps are directly estimated from undersampled (k,t)-space data. The proposed reconstruction involves solving a nonlinear least squares optimization problem that includes explicit use of a full forward model to convert parameter maps to (k,t)-space, utilizing the Patlak TK model. The proposed scheme is compared against an indirect method that creates intermediate images by parallel imaging and compressed sensing prior to TK modeling. Thirteen fully sampled brain tumor DCE-MRI scans with 5-second temporal resolution were retrospectively undersampled at rates R = 20, 40, 60, 80, and 100 for each dynamic frame. TK maps are quantitatively compared based on root mean-squared-error (rMSE) and Bland-Altman analysis. The approach is also applied to four prospectively R = 30 undersampled whole-brain DCE-MRI data sets. In the retrospective study, the proposed method performed statistically better than the indirect method at R ≥ 80 for all 13 cases. This approach provided restoration of TK parameter values with fewer errors in tumor regions of interest, an improvement compared to a state-of-the-art indirect method. Applied prospectively, the proposed method provided whole-brain, high-resolution TK maps with good image quality. Model-based direct estimation of TK maps from (k,t)-space DCE-MRI data is feasible and is compatible with up to 100-fold undersampling. Magn Reson Med 78:1566-1578, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
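
    Since the Patlak model named above is linear in its two parameters, per-voxel fitting reduces to ordinary least squares, as this image-domain toy shows on synthetic curves; the paper's actual contribution, fitting through an undersampled (k,t)-space forward model, is omitted here, and the arterial input function is invented.

```python
# Patlak model: Ct(t) = Ktrans * integral(Cp) + vp * Cp(t), linear in (Ktrans, vp).
import numpy as np

t = np.linspace(0, 5, 50)                          # minutes
cp = t * np.exp(1 - t)                             # toy arterial input function
icp = np.cumsum(cp) * (t[1] - t[0])                # running integral of Cp

ktrans_true, vp_true = 0.08, 0.05
ct = ktrans_true * icp + vp_true * cp
ct += 0.002 * np.random.default_rng(5).standard_normal(50)   # noise

A = np.column_stack([icp, cp])                     # Patlak design matrix
ktrans, vp = np.linalg.lstsq(A, ct, rcond=None)[0]
print(f"Ktrans={ktrans:.4f} (true {ktrans_true}), vp={vp:.4f} (true {vp_true})")
```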

  5. Predictive analysis and mapping of indoor radon concentrations in a complex environment using kernel estimation: an application to Switzerland.

    PubMed

    Kropat, Georg; Bochud, Francois; Jaboyedoff, Michel; Laedermann, Jean-Pascal; Murith, Christophe; Palacios Gruson, Martha; Baechler, Sébastien

    2015-02-01

    The aim of this study was to develop models based on kernel regression and probability estimation in order to predict and map indoor radon concentrations (IRC) in Switzerland by taking into account all of the following: architectural factors, spatial relationships between the measurements, as well as geological information. We looked at about 240,000 IRC measurements carried out in about 150,000 houses. As predictor variables we included building type, foundation type, year of construction, detector type, geographical coordinates, altitude, temperature and lithology in the kernel estimation models. We developed predictive maps as well as a map of the local probability of exceeding 300 Bq/m³. Additionally, we developed a map of a confidence index in order to estimate the reliability of the probability map. Our models were able to explain 28% of the variation in the IRC data. All variables added information to the model. The model estimation yielded a bandwidth for each variable, making it possible to characterize the influence of each variable on the IRC estimation. Furthermore, we assessed the mapping characteristics of kernel estimation overall as well as by municipality. Overall, our model reproduces spatial IRC patterns which were already obtained earlier. On the municipal level, we could show that our model accounts well for IRC trends within municipal boundaries. Finally, we found that different building characteristics result in different IRC maps. Maps corresponding to detached houses with concrete foundations indicate systematically smaller IRC than maps corresponding to farms with earth foundations. IRC mapping based on kernel estimation is a powerful tool to predict and analyze IRC on a large scale as well as at a local level. This approach makes it possible to develop tailor-made maps for different architectural elements and measurement conditions while accounting at the same time for geological information and spatial relations between IRC measurements. Copyright © 2014 Elsevier B.V. All rights reserved.
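
    A minimal Nadaraya-Watson kernel regression in the spirit of the abstract, reduced to one synthetic predictor: a prediction is the kernel-weighted mean of nearby measurements, with the bandwidth h playing the role the per-variable bandwidths play in the full model. All data are invented.

```python
# Kernel-weighted local mean with a Gaussian kernel of bandwidth h.
import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(0, 10, 300)                          # e.g. standardized altitude
y = 100 + 40 * np.sin(x) + rng.normal(0, 10, 300)    # toy IRC-like response

def kernel_regress(x0, x, y, h):
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)           # Gaussian kernel weights
    return np.sum(w * y) / np.sum(w)

for x0 in (2.0, 5.0, 8.0):
    print(f"x0={x0}: estimate {kernel_regress(x0, x, y, h=0.5):.1f}")
```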

  6. The Effect of Sensor Failure on the Attitude and Rate Estimation of MAP Spacecraft

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    2003-01-01

    This work describes two algorithms for computing the angular rate and attitude in case of a gyro and Star Tracker failure in the Microwave Anisotropy Probe (MAP) satellite, which was placed at the L2 parking point, from where it collects data to determine the origin of the universe. The nature of the problem is described, two algorithms are suggested, an observability study is carried out, and real MAP data are used to determine the merit of the algorithms. It is shown that one of the algorithms yields a good estimate of the rates but not of the attitude, whereas the other algorithm yields a good estimate of the rate as well as of two of the three attitude angles. The estimation of the third angle depends on the initial state estimate. There is a contradiction between this result and the outcome of the observability analysis; an explanation of this contradiction is given in the paper. Although this work treats a particular spacecraft, its conclusions are more general.

  7. An evaluation of Bayesian estimators and PDF models for despeckling in the undecimated wavelet domain

    NASA Astrophysics Data System (ADS)

    Alparone, Luciano; Argenti, Fabrizio; Bianchi, Tiziano; Lapini, Alessandro

    2010-10-01

    The goal of this paper is an evaluation of Bayesian estimators: Minimum Mean Square Error (MMSE), Minimum Mean Absolute Error (MMAE) and Maximum A-posteriori Probability (MAP). The estimations have been carried out in the undecimated wavelet domain. Bayesian estimation requires probability density function (PDF) models for the wavelet coefficients of the reflectivity and of the signal-dependent noise. In this work several combinations of PDFs are assessed. Closed-form solutions for MMSE, MMAE and MAP have been derived whenever possible, and numerical solutions otherwise. Experimental results carried out on simulated noisy images show the cost-performance trade-off of the different estimators in conjunction with the PDF models. MAP estimation with a generalized Gaussian (GG) PDF for the wavelet coefficients of both reflectivity and signal-dependent noise (GG - GG) yields the best performance. MAP with Laplacian - Gaussian (L - G) performs only 0.07 dB worse than MAP with GG - GG. However, the former admits a closed-form solution and its computational cost is more than ten times lower than that of the latter. Results on true single-look high-resolution Cosmo-SkyMed SAR images provided by the Italian Space Agency (ASI) are presented and discussed.

  8. A framework for estimating forest disturbance intensity from successive remotely sensed biomass maps: moving beyond average biomass loss estimates.

    PubMed

    Hill, T C; Ryan, C M; Williams, M

    2015-12-01

    The success of satellites in mapping deforestation has been invaluable for improving our understanding of the impacts and nature of land cover change and carbon balance. However, current satellite approaches struggle to quantify the intensity of forest disturbance, i.e. whether the average rate of biomass loss for a region arises from heavy disturbance focused in a few locations, or the less severe disturbance of a wider area. The ability to distinguish between these very different disturbance regimes remains critical for forest managers and ecologists. We put forward a framework for describing all intensities of forest disturbance, from deforestation to widespread low intensity disturbance. By grouping satellite observations into ensembles with a common disturbance regime, the framework is able to mitigate the impacts of the poor signal-to-noise ratio that limits current satellite observations. Using an observation system simulation experiment we demonstrate that the framework can be applied to provide estimates of the mean biomass loss rate, as well as distinguish the intensity of the disturbance. The approach is robust despite the large random and systematic errors typical of biomass maps derived from radar. The best accuracies are achieved with ensembles of ≥1600 pixels (≥1 km² with 25 by 25 m pixels). The framework we describe provides a novel way to describe and quantify the intensity of forest disturbance, which could help to provide information on the causes of both natural and anthropogenic forest loss; such information is vital for effective forest and climate policy formulation.

  9. Quadratic functions used in the estimation of projections of antique maps

    NASA Astrophysics Data System (ADS)

    Molnar, Gabor

    2010-05-01

    "Substituting projection" is a projection defined by a set of projection parameters in GIS systems, for a map, whose "true" projection parameters are not known. Defining "substituting projection" is easy for maps with overprinted meridians and parallels. In this case any projection is acceptable as a "substituting projection" that has a same shape for geographic coordinates, as it is printed on the map. This makes it possible to find out the projection type at least (conic, cylindrical or azimuthal) using basic cartometry rules. After defining the "substituting projection", a linear or a Helmert transformation can be used to transform the image into this projection. If we do not have geographic coordinates overprinted on the map, we still can estimate them, using map features as ground control points (GCPs), and using the geographic coordinates of these GCPs. Some GIS software applied for georeferencing raster maps are capable to show the transformed grid defined by the GGPs' coordinates on the raster image. If we use second or third order polynomial (quadratic or cubic) transformation, the transformed geographical coordinate grid can be regarded as an overprinted geographical grid. In this case this "overprint" can be used for finding out the projection type and parameters. In this case, the root-mean-square (RMS) of the residual error of the GCPs measured and transformed coordinates is not negligible. We get similar RMS errors if we use modern coordinate system coordinates as GCP coordinates. In this case the magnitude of the RMS errors is almost independent of the system chosen. This RMS error is due to by the inaccurate measurement of location coordinates during the mapping process, and can not be reduced choosing another projection. If we found out a good parameter set for "substituting projection" this RMS error is the same magnitude, even if we use a linear or Helmert-type transformation. This can be used as a criteria for selecting between projection types

  10. Needlet estimation of cross-correlation between CMB lensing maps and LSS

    NASA Astrophysics Data System (ADS)

    Bianchini, Federico; Renzi, Alessandro; Marinucci, Domenico

    2016-11-01

    In this paper we develop a novel needlet-based estimator to investigate the cross-correlation between cosmic microwave background (CMB) lensing maps and large-scale structure (LSS) data. We compare this estimator with its harmonic counterpart and, in particular, we analyze the bias effects of different forms of masking. In order to address this bias, we also implement a MASTER-like technique in the needlet case. The resulting estimator turns out to have an extremely good signal-to-noise performance. Our analysis aims at expanding and optimizing the operating domains in CMB-LSS cross-correlation studies, similarly to CMB needlet data analysis. It is motivated especially by next generation experiments (such as Euclid) which will allow us to derive much tighter constraints on cosmological and astrophysical parameters through cross-correlation measurements between CMB and LSS.

  11. Combining MODIS and Landsat imagery to estimate and map boreal forest cover loss

    USGS Publications Warehouse

    Potapov, P.; Hansen, Matthew C.; Stehman, S.V.; Loveland, T.R.; Pittman, K.

    2008-01-01

    Estimation of forest cover change is important for boreal forests, one of the most extensive forested biomes, due to its unique role in global timber stock, carbon sequestration and deposition, and high vulnerability to the effects of global climate change. We used time-series data from the MODerate Resolution Imaging Spectroradiometer (MODIS) to produce annual forest cover loss hotspot maps. These maps were used to assign all blocks (18.5 by 18.5 km) partitioning the boreal biome into strata of high, medium and low likelihood of forest cover loss. A stratified random sample of 118 blocks was interpreted for forest cover and forest cover loss using high spatial resolution Landsat imagery from 2000 and 2005. Area of forest cover gross loss from 2000 to 2005 within the boreal biome is estimated to be 1.63% (standard error 0.10%) of the total biome area, and represents a 4.02% reduction in year 2000 forest cover. The proportion of identified forest cover loss relative to regional forest area is much higher in North America than in Eurasia (5.63% versus 3.00%). Of the total forest cover loss identified, 58.9% is attributable to wildfires. The MODIS pan-boreal change hotspot estimates reveal significant increases in forest cover loss due to wildfires in 2002 and 2003, with 2003 being the peak year of loss within the 5-year study period. Overall, the precision of the aggregate forest cover loss estimates derived from the Landsat data and the value of the MODIS-derived map displaying the spatial and temporal patterns of forest loss demonstrate the efficacy of this protocol for operational, cost-effective, and timely biome-wide monitoring of gross forest cover loss.

  12. Estimating the resolution limit of the map equation in community detection

    NASA Astrophysics Data System (ADS)

    Kawamoto, Tatsuro; Rosvall, Martin

    2015-01-01

    A community detection algorithm is considered to have a resolution limit if the scale of the smallest modules that can be resolved depends on the size of the analyzed subnetwork. The resolution limit is known to prevent some community detection algorithms from accurately identifying the modular structure of a network. In fact, any global objective function for measuring the quality of a two-level assignment of nodes into modules must have some sort of resolution limit or an external resolution parameter. However, it is yet unknown how the resolution limit affects the so-called map equation, which is known to be an efficient objective function for community detection. We derive an analytical estimate and conclude that the resolution limit of the map equation is set by the total number of links between modules instead of the total number of links in the full network as for modularity. This mechanism makes the resolution limit much less restrictive for the map equation than for modularity; in practice, it is orders of magnitudes smaller. Furthermore, we argue that the effect of the resolution limit often results from shoehorning multilevel modular structures into two-level descriptions. As we show, the hierarchical map equation effectively eliminates the resolution limit for networks with nested multilevel modular structures.

  13. Estimating the social value of geologic map information: A regulatory application

    USGS Publications Warehouse

    Bernknopf, R.L.; Brookshire, D.S.; McKee, M.; Soller, D.R.

    1997-01-01

    People frequently regard the landscape as part of a static system. The mountains and rivers that cross the landscape, and the bedrock that supports the surface, change little during the course of a lifetime. Society can alter the geologic history of an area and, in so doing, affect the occurrence and impact of environmental hazards. For example, changes in land use can induce changes in erosion, sedimentation, and ground-water supply. As the environmental system is changed by both natural processes and human activities, the system's capacity to respond to additional stresses also changes. Information such as geologic maps describes the physical world and is critical for identifying solutions to land use and environmental issues. In this paper, a method is developed for estimating the economic value of applying geologic map information to siting a waste disposal facility. An improvement in geologic map information is shown to have a net positive value to society. Such maps enable planners to make superior land management decisions.

  14. Development of a Greek solar map based on solar model estimations

    NASA Astrophysics Data System (ADS)

    Kambezidis, H. D.; Psiloglou, B. E.; Kavadias, K. A.; Paliatsos, A. G.; Bartzokas, A.

    2016-05-01

    The recognition of renewable energy sources (RES) as the only environmentally friendly solution for power generation has moved solar systems to the forefront of the energy market in the last decade. The installed solar power capacity doubles almost every two years in many European countries, including Greece. This rise has brought the need for reliable predictions of meteorological data that can easily be utilized for proper RES site allocation. The absence of solar measurements has therefore raised the demand for deploying a suitable model in order to create a solar map. The generation of a solar map for Greece could provide solid foundations for predicting the energy production of a solar power plant installed in the area, by providing an estimate of the solar energy acquired at each longitude and latitude of the map. In the present work, the well-known Meteorological Radiation Model (MRM), a broadband solar radiation model, is employed. This model utilizes common meteorological data, such as air temperature, relative humidity, barometric pressure and sunshine duration, in order to calculate solar radiation for areas where radiation measurements are not available. Hourly values of the above meteorological parameters were acquired from 39 meteorological stations evenly dispersed around Greece, and hourly values of solar radiation were calculated with the MRM. Then, by using an integrated spatial interpolation method, a Greek solar energy map was generated, providing annual solar energy values all over Greece.

  15. POWER ASYMMETRY IN WMAP AND PLANCK TEMPERATURE SKY MAPS AS MEASURED BY A LOCAL VARIANCE ESTIMATOR

    SciTech Connect

    Akrami, Y.; Fantaye, Y.; Eriksen, H. K.; Hansen, F. K.; Shafieloo, A.; Banday, A. J.; Górski, K. M. E-mail: y.t.fantaye@astro.uio.no

    2014-04-01

    We revisit the question of hemispherical power asymmetry in the WMAP and Planck temperature sky maps by measuring the local variance over the sky and on disks of various sizes. For the 2013 Planck sky map we find that none of the 1000 available isotropic Planck "Full Focal Plane" simulations have a larger variance asymmetry than that estimated from the data, suggesting the presence of an anisotropic signature formally significant at least at the 3.3σ level. For the WMAP 9 year data we find that 5 out of 1000 simulations have a larger asymmetry. The preferred direction for the asymmetry from the Planck data is (l, b) = (212°, –13°), in good agreement with previous reports of the same hemispherical power asymmetry.
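
    The local-variance statistic can be imitated on a flat 2-D toy map, as below: compute block-wise variances, take a "hemispherical" difference, and compare the observed value against isotropic simulations for an empirical p-value. The block size and the injected 5% asymmetry are arbitrary choices, not values from the paper.

```python
# Empirical p-value for a local-variance asymmetry against isotropic sims.
import numpy as np

rng = np.random.default_rng(8)

def hemi_asymmetry(m):
    v = m.reshape(8, 16, 8, 16).var(axis=(1, 3))   # variance in 16x16 blocks
    return v[:4].mean() - v[4:].mean()             # top half minus bottom half

data = rng.standard_normal((128, 128))
data[:64] *= 1.05                                  # mildly boosted hemisphere
obs = hemi_asymmetry(data)

sims = [hemi_asymmetry(rng.standard_normal((128, 128))) for _ in range(1000)]
p = np.mean(np.abs(sims) >= np.abs(obs))
print(f"observed asymmetry {obs:.4f}, empirical p = {p:.3f}")
```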

  16. Motion Correction for Myocardial T1 Mapping using Image Registration with Synthetic Image Estimation

    PubMed Central

    Xue, Hui; Shah, Saurabh; Greiser, Andreas; Guetter, Christoph; Littmann, Arne; Jolly, Marie-Pierre; Arai, Andrew E; Zuehlsdorff, Sven; Guehring, Jens; Kellman, Peter

    2013-01-01

    Quantification of myocardial T1 relaxation has potential value in the diagnosis of both ischemic and non-ischemic cardiomyopathies. Image acquisition using the Modified Look-Locker Inversion Recovery technique is clinically feasible for T1 mapping. However, respiratory motion limits its applicability and degrades the accuracy of T1 estimation. The robust registration of acquired inversion recovery images is particularly challenging due to the large changes in image contrast, especially for those images acquired near the signal null point of the inversion recovery and other inversion times for which there is little tissue contrast. In this paper, we propose a novel motion correction algorithm. This approach is based on estimating synthetic images presenting contrast changes similar to the acquired images. The estimation of synthetic images is formulated as a variational energy minimization problem. Validation on a consecutive patient data cohort shows that this strategy can perform robust non-rigid registration to align inversion recovery images experiencing significant motion and lead to suppression of motion induced artifacts in the T1 map. PMID:22135227

  17. Estimation of agricultural pesticide use in drainage basins using land cover maps and county pesticide data

    USGS Publications Warehouse

    Nakagaki, Naomi; Wolock, David M.

    2005-01-01

    A geographic information system (GIS) was used to estimate agricultural pesticide use in the drainage basins of streams that are studied as part of the U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program. Drainage basin pesticide use estimates were computed by intersecting digital maps of drainage basin boundaries with an enhanced version of the National Land Cover Data 1992 combined with estimates of 1992 agricultural pesticide use in each United States county. This report presents the methods used to quantify agricultural pesticide use in drainage basins using a GIS and includes the estimates of atrazine use applied to row crops, small-grain crops, and fallow lands in 150 watersheds in the conterminous United States. Basin atrazine use estimates are presented to compare and analyze the results that were derived from 30-meter and 1-kilometer resolution land cover and county pesticide use data, and drainage basin boundaries at various grid cell resolutions. Comparisons of the basin atrazine use estimates derived from watershed boundaries, county pesticide use, and land cover data sets at different resolutions, indicated that overall differences were minor. The largest potential for differences in basin pesticide use estimates between those derived from the 30-meter and 1-kilometer resolution enhanced National Land Cover Data 1992 exists wherever there are abrupt agricultural land cover changes along the basin divide. Despite the limitations of the drainage basin pesticide use data described in this report, the basin estimates provide consistent and comparable indicators of agricultural pesticide application in surface-water drainage basins studied in the NAWQA Program.

  18. Estimating and Mapping Urban Impervious Surfaces: Reflection on Spectral, Spatial, and Temporal Resolutions

    NASA Astrophysics Data System (ADS)

    Weng, Q.

    2007-12-01

    Impervious surface is a key indicator of urban environmental quality and urbanization degree. Estimation and mapping of impervious surfaces in urban areas from remotely sensed digital images has therefore attracted increasing attention in recent years. In this paper, satellite images with various spectral, spatial, and temporal resolutions are employed to examine the effects of these remote sensing data characteristics on the mapping accuracy of urban impervious surfaces. The study area was the city proper of Indianapolis (Marion County), Indiana, United States. Linear spectral mixture analysis was applied to generate high albedo, low albedo, vegetation, and soil fraction images (endmembers) from the satellite images, and impervious surfaces were then estimated by adding the high albedo and low albedo fraction images. A comparison of EO-1 ALI (multispectral) and Hyperion (hyperspectral) images indicates that the Hyperion image was more effective in discerning low albedo surface materials, especially with the spectral bands in the mid-infrared region. Linear spectral mixture modeling was found more useful for medium spatial resolution images, such as Landsat TM/ETM+ and ASTER images, due to the existence of a large number of mixed pixels in urban areas. The model, however, may not be suitable for high spatial resolution images, such as IKONOS images, because mixed pixels have less influence there. The shadow problem in high spatial resolution images, caused by tall buildings and large tree crowns, is a challenge in impervious surface extraction. Alternative image processing algorithms such as decision tree classifiers may be more appropriate for achieving high mapping accuracy. For mid-latitude cities, seasonal vegetation phenology has a significant effect on the spectral response of terrestrial features, and image analysis must therefore take this environmental characteristic into account. Three ASTER images, acquired on April 5, 2004, June 16, 2001, and October 3, 2000

  19. A simple algorithm to estimate the effective regional atmospheric parameters for thermal-inertia mapping

    USGS Publications Warehouse

    Watson, K.; Hummer-Miller, S.

    1981-01-01

    A method based solely on remote sensing data has been developed to estimate those meteorological effects which are required for thermal-inertia mapping. It assumes that the atmospheric fluxes are spatially invariant and that the solar, sky, and sensible heat fluxes can be approximated by a simple mathematical form. Coefficients are determined by a least-squares fit of observational data to our thermal model. A comparison between field measurements and the model-derived flux shows the type of agreement which can be achieved. An analysis of the limitations of the method is also provided. © 1981.

  20. Comparison of a fully mapped plot design to three alternative designs for volume and area estimates using Maine inventory data

    Treesearch

    Stanford L. Arner

    1998-01-01

    A fully mapped plot design is compared to three alternative designs using data collected for the recent inventory of Maine's forest resources. Like the fully mapped design, one alternative eliminates the bias of previous procedures, and should be less costly and more consistent. There was little difference in volume and area estimates or in sampling errors among...

  1. PDV Uncertainty Estimation & Methods Comparison

    SciTech Connect

    Machorro, E.

    2011-11-01

    Several methods are presented for estimating the rapidly changing instantaneous frequency of a time varying signal that is contaminated by measurement noise. Useful a posteriori error estimates for several methods are verified numerically through Monte Carlo simulation. However, given the sampling rates of modern digitizers, sub-nanosecond variations in velocity are shown to be reliably measurable in most (but not all) cases. Results support the hypothesis that in many PDV regimes of interest, sub-nanosecond resolution can be achieved.
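
    One standard estimator for a rapidly varying instantaneous frequency, sketched below on a synthetic chirp: short-time FFT peak picking, where the window length trades time resolution against frequency noise (the tension the error estimates above quantify). The sampling rate and signal are invented for illustration, and this is only one of several methods the report compares.

```python
# Short-time FFT peak picking on a noisy synthetic chirp.
import numpy as np

fs = 1e9                                            # toy 1 GS/s digitizer
t = np.arange(20000) / fs
f_inst = 50e6 + 2e6 * np.sin(2 * np.pi * 1e5 * t)   # true time-varying frequency
phase = 2 * np.pi * np.cumsum(f_inst) / fs
sig = np.cos(phase) + 0.2 * np.random.default_rng(13).standard_normal(t.size)

win = 512
for start in range(0, t.size - win, 4000):
    seg = sig[start:start + win] * np.hanning(win)  # windowed segment
    spec = np.abs(np.fft.rfft(seg))
    f_hat = np.fft.rfftfreq(win, 1 / fs)[spec.argmax()]
    print(f"t={start / fs * 1e6:6.2f} us: f~{f_hat / 1e6:5.1f} MHz "
          f"(true {f_inst[start + win // 2] / 1e6:5.1f} MHz)")
```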

  2. Nonlinear Regularization for Per Voxel Estimation of Magnetic Susceptibility Distributions from MRI Field Maps

    PubMed Central

    Kressler, Bryan; de Rochefort, Ludovic; Liu, Tian; Spincemaille, Pascal; Jiang, Quan; Wang, Yi

    2010-01-01

    Magnetic susceptibility is an important physical property of tissues, and can be used as a contrast mechanism in magnetic resonance imaging. Recently, targeting contrast agents by conjugation with signaling molecules and labeling stem cells with contrast agents have become feasible. These contrast agents are strongly paramagnetic, and the ability to quantify magnetic susceptibility could allow accurate measurement of signaling and cell localization. Presented here is a technique to estimate arbitrary magnetic susceptibility distributions by solving an ill-posed inversion problem from field maps obtained in an MRI scanner. Two regularization strategies are considered, conventional Tikhonov regularization, and a sparsity promoting nonlinear regularization using the ℓ1 norm. Proof of concept is demonstrated using numerical simulations, phantoms, and in a stroke model mouse. Initial experience indicates that the nonlinear regularization better suppresses noise and streaking artifacts common in susceptibility estimation. PMID:19502123
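
    A schematic comparison of the two regularization strategies named above, on a small synthetic ill-posed system rather than real field maps: Tikhonov regularization has a closed-form solution, while the ℓ1 penalty is minimized here with ISTA (proximal gradient descent). Problem sizes and the sparse truth are invented.

```python
# Tikhonov (closed form) versus l1 via ISTA on an underdetermined system.
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((40, 100))                 # underdetermined forward model
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]             # sparse susceptibility-like truth
b = A @ x_true + 0.01 * rng.standard_normal(40)

lam = 0.05
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(100), A.T @ b)   # Tikhonov

x = np.zeros(100)
step = 1.0 / np.linalg.norm(A, 2) ** 2             # 1 / Lipschitz constant
for _ in range(500):                               # ISTA iterations
    g = x - step * A.T @ (A @ x - b)               # gradient step on 0.5*||Ax-b||^2
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)    # soft threshold

print("Tikhonov error:", round(float(np.linalg.norm(x_tik - x_true)), 3))
print("l1/ISTA  error:", round(float(np.linalg.norm(x - x_true)), 3))
```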

  3. MareyMap Online: A User-Friendly Web Application and Database Service for Estimating Recombination Rates Using Physical and Genetic Maps.

    PubMed

    Siberchicot, Aurélie; Bessy, Adrien; Guéguen, Laurent; Marais, Gabriel A B

    2017-10-01

    Given the importance of meiotic recombination in biology, there is a need to develop robust methods to estimate meiotic recombination rates. A popular approach, called the Marey map approach, relies on comparing genetic and physical maps of a chromosome to estimate local recombination rates. In the past, we have implemented this approach in an R package called MareyMap, which includes many functionalities useful to get reliable recombination rate estimates in a semi-automated way. MareyMap has been used repeatedly in studies looking at the effect of recombination on genome evolution. Here, we propose a simpler user-friendly web service version of MareyMap, called MareyMap Online, which allows a user to get recombination rates from her/his own data or from a publicly available database that we offer in a few clicks. When the analysis is done, the user is asked whether her/his curated data can be placed in the database and shared with other users, which we hope will make meta-analysis on recombination rates including many species easy in the future. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  4. Estimating cross-validatory predictive p-values with integrated importance sampling for disease mapping models.

    PubMed

    Li, Longhai; Feng, Cindy X; Qiu, Shi

    2017-06-30

    An important statistical task in disease mapping problems is to identify divergent regions with unusually high or low risk of disease. Leave-one-out cross-validatory (LOOCV) model assessment is the gold standard for estimating predictive p-values that can flag such divergent regions. However, actual LOOCV is time-consuming because one needs to rerun a Markov chain Monte Carlo analysis for each posterior distribution in which an observation is held out as a test case. This paper introduces a new method, called integrated importance sampling (iIS), for estimating LOOCV predictive p-values with only Markov chain samples drawn from the posterior based on the full data set. The key step in iIS is that we integrate away the latent variables associated with the test observation with respect to their conditional distribution without reference to the actual observation. By following the general theory for importance sampling, the formula used by iIS can be proved to be equivalent to the LOOCV predictive p-value. We compare iIS and three other existing methods in the literature on two disease mapping datasets. Our empirical results show that the predictive p-values estimated with iIS are almost identical to the predictive p-values estimated with actual LOOCV and outperform those given by the existing three methods, namely, posterior predictive checking, ordinary importance sampling, and the ghosting method of Marshall and Spiegelhalter (2003). Copyright © 2017 John Wiley & Sons, Ltd.
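
    The sketch below implements only the ordinary importance sampling baseline the paper compares against, on a conjugate normal-mean toy model: full-posterior samples are reweighted by 1/p(y_i | mu) to approximate the held-out posterior. The iIS refinement, integrating out latent variables tied to the test observation, has no counterpart in this latent-free toy.

```python
# Ordinary importance sampling for a leave-one-out predictive p-value.
import numpy as np

rng = np.random.default_rng(10)
y = rng.normal(1.0, 1.0, 30)
y[0] = 4.0                                        # one divergent observation

# Posterior of mu under N(mu, 1) likelihood and a flat prior: N(ybar, 1/n).
theta = rng.normal(y.mean(), 1 / np.sqrt(len(y)), 5000)

def lik(yi, mu):                                  # N(yi | mu, 1) density
    return np.exp(-0.5 * (yi - mu) ** 2) / np.sqrt(2 * np.pi)

w = 1.0 / lik(y[0], theta)                        # importance weights
yrep = rng.normal(theta, 1.0)                     # posterior predictive draws
p = np.sum(w * (yrep >= y[0])) / np.sum(w)        # weighted predictive p-value
print(f"LOO predictive p-value for y[0]: {p:.3f}")
```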

  5. Play estimation with motions and textures with automatic generation of template space-time map

    NASA Astrophysics Data System (ADS)

    Aoki, Kyota; Aita, Ryo; Fukiba, Takuro

    2015-07-01

    Retrieving small-scale events from small videos is easy, as is retrieving medium-scale events from large videos; the difficulty lies in retrieving small-scale events from large videos. There is a strong need for estimating plays in sports videos, where plays are described by the motions of players. This paper proposes a play retrieval method based on both motion compensation vectors and normal color frames in MPEG sports videos. The work uses 1-dimensional degenerate descriptions of each motion image between two adjacent frames. Concatenating these 1-dimensional descriptions along the time axis yields a space-time map that describes a sequence of frames as a 2-dimensional image. Using this space-time map on motion compensation vector frames and normal color frames, the work shows how to create a new, better template from a single template for retrieving a small number of plays in a huge number of frames. In an experiment, the resulting F-measure reaches 0.955.

  6. Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors

    PubMed Central

    Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis

    2010-01-01

    In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows the robots to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional position of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the position of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment. PMID:22399930

  7. Attenuation correction in SPECT images using attenuation map estimation with its emission data

    NASA Astrophysics Data System (ADS)

    Tavakoli, Meysam; Naji, Maryam; Abdollahi, Ali; Kalantari, Faraz

    2017-03-01

    Photon attenuation during SPECT imaging significantly degrades the diagnostic outcome and the quantitative accuracy of the final reconstructed images. It is well known that attenuation correction can be performed with iterative reconstruction methods if we have access to the attenuation map. Two methods have been used to calculate the attenuation map: transmission-based and transmissionless techniques. In this phantom study, we evaluated the importance of attenuation correction by quantitative evaluation of the errors associated with each method. For the transmissionless approach, the attenuation map was estimated from the emission data only. An EM algorithm with an attenuation model was developed and used for attenuation correction during image reconstruction. Finally, a comparison was made between images reconstructed using our OSEM code and the analytical FBP method, before and after attenuation correction. The measurements showed that our programs are capable of reconstructing SPECT images and correcting for attenuation effects. Moreover, to evaluate reconstructed image quality before and after attenuation correction we applied a novel approach using the Image Quality Index. Attenuation correction increases the quality and quantitative accuracy in both methods; this increase is independent of activity for the quantity factor and decreases with activity for the quality factor. In the EM algorithm, it is necessary to use regularization to obtain the true distribution of attenuation coefficients.
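
    For reference on the reconstruction side of this work, here is a bare-bones MLEM loop on a random toy system matrix with attenuation factors folded into its elements; the abstract's transmissionless estimation of the attenuation map itself from emission data is not attempted here, and all sizes and data are invented.

```python
# MLEM: x <- x * (A^T (y / Ax)) / (A^T 1), on a toy attenuated system matrix.
import numpy as np

rng = np.random.default_rng(12)
n_pix, n_det = 16, 32
A = rng.random((n_det, n_pix)) * np.exp(-rng.random((n_det, n_pix)))
x_true = rng.random(n_pix) * 10
y = rng.poisson(A @ x_true)                       # Poisson emission data

x = np.ones(n_pix)                                # uniform initial estimate
sens = A.sum(axis=0)                              # sensitivity image, A^T 1
for _ in range(100):                              # MLEM multiplicative updates
    proj = np.maximum(A @ x, 1e-12)
    x *= (A.T @ (y / proj)) / sens
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```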

  8. Piecewise Linear Slope Estimation.

    PubMed

    Ingle, A N; Sethares, W A; Varghese, T; Bucklew, J A

    2014-11-01

    This paper presents a method for directly estimating slope values in a noisy piecewise linear function. By imposing a Markov structure on the sequence of slopes, piecewise linear fitting is posed as a maximum a posteriori estimation problem. A dynamic program efficiently solves this by traversing a linearly growing trellis. The alternating maximization algorithm (a kind of pseudo-EM method) is used to estimate the model parameters from data and its convergence behavior is analyzed. Ultrasound shear wave imaging is presented as a primary application. The algorithm is general enough for applicability in other fields, as suggested by an application to the estimation of shifts in financial interest rate data.
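
    The dynamic program over a trellis of slope states can be sketched as a standard Viterbi recursion. The Python below is a minimal illustration under simplifying assumptions (unit sample spacing, a quantized slope set, a fixed stay probability); the paper additionally estimates the model parameters with an alternating maximization algorithm.

        import numpy as np

        def map_slopes(y, slope_states, noise_std=0.1, p_stay=0.95):
            """MAP slope sequence for unit-spaced samples y via Viterbi DP.

            First differences d[k] = y[k+1] - y[k] are modeled as the
            current slope plus Gaussian noise; the slope follows a Markov
            chain that keeps its state with probability p_stay.
            """
            d = np.diff(y)
            S, K = len(slope_states), len(d)
            trans = np.full((S, S), (1 - p_stay) / (S - 1))
            np.fill_diagonal(trans, p_stay)
            log_trans = np.log(trans)
            # emission log-likelihoods (constants dropped), shape (K, S)
            em = -0.5 * ((d[:, None] - np.asarray(slope_states)[None, :])
                         / noise_std) ** 2
            cost = em[0].copy()
            back = np.zeros((K, S), dtype=int)
            for k in range(1, K):
                scores = cost[:, None] + log_trans      # (from, to)
                back[k] = scores.argmax(axis=0)
                cost = scores.max(axis=0) + em[k]
            path = [int(cost.argmax())]
            for k in range(K - 1, 0, -1):
                path.append(back[k][path[-1]])
            return [slope_states[s] for s in reversed(path)]

        # Toy piecewise linear signal: slope 1 then slope -2, with noise.
        rng = np.random.default_rng(2)
        y = np.concatenate([np.arange(30), 29 - 2 * np.arange(30)]) \
            + rng.normal(0, 0.1, 60)
        print(map_slopes(y, slope_states=[-2, -1, 0, 1, 2])[:5])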

  9. Piecewise Linear Slope Estimation

    PubMed Central

    Sethares, W. A.; Bucklew, J. A.

    2015-01-01

    This paper presents a method for directly estimating slope values in a noisy piecewise linear function. By imposing a Markov structure on the sequence of slopes, piecewise linear fitting is posed as a maximum a posteriori estimation problem. A dynamic program efficiently solves this by traversing a linearly growing trellis. The alternating maximization algorithm (a kind of pseudo-EM method) is used to estimate the model parameters from data and its convergence behavior is analyzed. Ultrasound shear wave imaging is presented as a primary application. The algorithm is general enough for applicability in other fields, as suggested by an application to the estimation of shifts in financial interest rate data. PMID:26229417

  10. Paddy field mapping and yield estimation by satellite imagery and in situ observations

    NASA Astrophysics Data System (ADS)

    Oyoshi, K.; Sobue, S.

    2011-12-01

    Since Asian countries are responsible for approximately 90% of world rice production and consumption, rice is the most significant cereal crop in Asia. To ensure food security and to adopt mitigation strategies or policies for managing food shortages, timely and accurate statistics of rice production are essential. Creating accurate statistics of rice production from ground-based measurements is time consuming and costly. Hence, satellite remote sensing is expected to contribute to food security through the systematic collection of information such as crop growth or yield estimates. In 2011, the Japan Aerospace Exploration Agency (JAXA) began collaborating with GISTDA (Geo-Informatics and Space Technology Development Agency, Thailand) on research projects for rice yield estimation that integrate satellite imagery and in situ data. Thailand is one of the largest rice-producing countries and the largest rice-exporting country, so rice-related statistics are imperative for the country's food security and economy. However, optical satellite observation in the tropics, including Thailand, is highly limited because the area is frequently covered by cloud. In contrast, the Japanese microwave sensor, the Phased-Array L-Band Synthetic Aperture Radar (PALSAR) on board the Advanced Land Observing Satellite (ALOS), is well suited to monitoring cloudy areas such as Southeast Asia, because PALSAR can penetrate clouds and collect land-surface information even when the area is covered by cloud. In this study, rice crop yield over Khon Kaen, in northeastern Thailand, was estimated by combining satellite imagery and in-situ observation. The study consists of two main parts: paddy field mapping and yield estimation with a numerical crop model. First, paddy field areas were detected by integrating PALSAR and AVNIR-2 data. PALSAR imagery has much speckle noise and the borders between land covers are ambiguous compared to those of optical sensors. To overcome this

  11. Identification of change-points in the relationship between food groups in the Mediterranean diet and overall mortality: an 'a posteriori' approach.

    PubMed

    Sofi, Francesco; Abbate, Rosanna; Gensini, Gian Franco; Casini, Alessandro; Trichopoulou, Antonia; Bamia, Christina

    2012-03-01

    Adherence to the Mediterranean diet has been shown to be associated with better health and greater survival. The aim of the present study was to identify change-points in the relationship between the food groups composing the Mediterranean diet and overall mortality. The population of the Greek EPIC prospective cohort study (23,349 adult men and women in the Greek EPIC sample who had not previously been diagnosed with cancer, coronary heart disease or diabetes mellitus at enrolment) was analysed. Segmented logistic regression analysis was conducted to examine the association between each of the food groups contributing to the Mediterranean diet score and overall mortality. This analysis allowed the determination of the following change-points: among men, 1 change-point for vegetables, legumes, cereals, fish and seafood, and dairy products, and 2 change-points for fruit and nuts, meat and meat products, and ethanol; among women, 1 change-point for legumes and for fish and seafood, and 2 change-points for the remaining food groups. These cut-off points were used to construct an 'a posteriori' score that may be better at capturing the health-promoting potential of the traditional Mediterranean diet. Identification of change-points in the relationship between components of the Mediterranean diet and mortality can be used to increase the discriminatory ability of a widely used Mediterranean diet score in relation to mortality.

  12. A simple robust and accurate a posteriori sub-cell finite volume limiter for the discontinuous Galerkin method on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Dumbser, Michael; Loubère, Raphaël

    2016-08-01

    In this paper we propose a simple, robust and accurate nonlinear a posteriori stabilization of the Discontinuous Galerkin (DG) finite element method for the solution of nonlinear hyperbolic PDE systems on unstructured triangular and tetrahedral meshes in two and three space dimensions. This novel a posteriori limiter, which has been recently proposed for the simple Cartesian grid case in [62], is able to resolve discontinuities at a sub-grid scale and is substantially extended here to general unstructured simplex meshes in 2D and 3D. It can be summarized as follows: At the beginning of each time step, an approximation of the local minimum and maximum of the discrete solution is computed for each cell, taking into account also the vertex neighbors of an element. Then, an unlimited discontinuous Galerkin scheme of approximation degree N is run for one time step to produce a so-called candidate solution. Subsequently, an a posteriori detection step checks the unlimited candidate solution at time t^{n+1} for positivity, absence of floating point errors and whether the discrete solution has remained within or at least very close to the bounds given by the local minimum and maximum computed in the first step. Elements that do not satisfy all the previously mentioned detection criteria are flagged as troubled cells. For these troubled cells, the candidate solution is discarded as inappropriate and consequently needs to be recomputed. Within these troubled cells the old discrete solution at the previous time t^n is scattered onto small sub-cells (N_s = 2N + 1 sub-cells per element edge), in order to obtain a set of sub-cell averages at time t^n. Then, a more robust second order TVD finite volume scheme is applied to update the sub-cell averages within the troubled DG cells from time t^n to time t^{n+1}. The new sub-grid data at time t^{n+1} are finally gathered back into a valid cell-centered DG polynomial of degree N by using a classical conservative and higher order
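
    The detection step can be illustrated in isolation. The following Python sketch applies a discrete-maximum-principle style check with a small tolerance to 1-D cell averages; it is a hypothetical simplification of the paper's criteria, which operate on unstructured simplex meshes and include further checks.

        import numpy as np

        def flag_troubled_cells(u_old, u_candidate, rel_tol=1e-3):
            """A posteriori bounds check on candidate cell averages.

            A cell is flagged if its candidate value leaves the [min, max]
            envelope of its old value and immediate neighbors (up to a small
            relative tolerance), or is not a finite number.
            """
            n = len(u_old)
            troubled = np.zeros(n, dtype=bool)
            for i in range(n):
                nbhd = u_old[max(i - 1, 0):min(i + 2, n)]
                lo, hi = nbhd.min(), nbhd.max()
                delta = rel_tol * max(abs(lo), abs(hi), 1e-14)
                ok = (np.isfinite(u_candidate[i])
                      and lo - delta <= u_candidate[i] <= hi + delta)
                troubled[i] = not ok
            return troubled

        u_old = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
        u_cand = np.array([1.0, 1.2, 0.5, 0.0, 0.0])  # 1.2 overshoots its envelope
        print(flag_troubled_cells(u_old, u_cand))     # [False True False False False]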

  13. A-Posteriori Error Estimates for Mixed Finite Element and Finite Volume Methods for Problems Coupled Through a Boundary with Non-Matching Grids

    DTIC Science & Technology

    2013-08-01

    … ⊗ M^0_{−1}(∆_{y,i})] × [M^0_{−1}(∆_{x,i}) ⊗ M^1_0(∆_{y,i})], i = L, R; Λ^h = M^1_{−1}(∆_{Γ_I}). The mixed finite element (mortar) method reads: compute p^h_i ∈ W^h_i, u^h_i ∈ V^h_i, … with right-hand side (D_L F_L, −D_R F_R, 0)^T (2.7), where we abuse notation to let u^h_i, p^h_i, and ξ^h denote the vectors of nodal values for the finite element functions … p^h_i ∈ W^h_i, u^h_i ∈ V^h_i, ξ^h ∈ Λ^h, i = L, R, satisfying (a^{−1}_L u^h_L, v_L)_{M,T} − (p^h_L, ∇·v_L) + ⟨P_{R→L}(p^h_R), n·v_L⟩_{Γ_I} = −⟨g_L, n·v_L⟩_{Γ_{L,M}}, (∇·u^h_L, w_L) = (f_L, …

  14. Pollution Error in the h-Version of the Finite Element Method and the Local Quality of A-Posteriori Error Estimators

    DTIC Science & Technology

    1994-02-01

    … stop; otherwise proceed to the next step. Here N denotes the number of elements in the mesh T_h. 4. Compute the target error e_target for the optimal mesh … (using the principle of equidistribution of error), namely e_target = … (4.2). 5. For each element τ, predict the optimal local mesh-size from … the formula h*_τ = (… / e_target)^r h_τ (4.3). Here h*_τ is the predicted optimal mesh-size for the subdomain within the element τ, and r is an exponent which …

  15. European annual cosmic-ray dose map and estimation of population exposure

    NASA Astrophysics Data System (ADS)

    Cinelli, Giorgia; Gruber, Valeria; De Felice, Luca; Bossew, Peter; Hernández-Ceballos, Miguel Angel; Tollefsen, Tore; Mundigl, Stefan; De Cort, Marc

    2017-04-01

    The Earth is continually bombarded by high energy cosmic-ray particles, and the worldwide average exposure to cosmic rays represents about 13% of the total annual effective dose received by the population. Therefore, assessment of cosmic-ray exposure at ground level is of great interest for better understanding population exposure to ionizing radiation. In the present work the annual effective dose resulting from cosmic radiation (photons, direct ionizing and neutron components) at ground level has been calculated following a simple methodology based only on elevation data. The European annual cosmic-ray dose map, at 1 km resolution, is presented and described. It reports the annual effective dose that a person may receive from cosmic rays at ground level, ranging from about 300 to 4000 microSv. The spatial distribution of the cosmic-ray dose rate over Europe naturally reflects the elevation map. The map shows that for half of the considered territory the annual cosmic-ray dose is below 360 microSv and for less than 1% it is above 1000 microSv. The highest values are obtained at the highest places of Europe, such as the Alps, the Pyrenees, and eastern Turkey (with mountains above 3000 masl), in the latter reaching the maximum value of 4000 microSv. By contrast, the minimum value of 300 microSv at sea level coincides mainly with coastal locations. The map is part of the European Atlas of Natural Radiation, and it will be useful for estimating the annual dose that the public may receive from natural radioactivity. Moreover, thanks to the availability of population data, the annual cosmic-ray collective dose has been evaluated and the population-weighted average annual effective dose (per capita) due to cosmic rays has been estimated for each European country considered. The values range from about 300 microSv (Iceland) to 400 microSv (Turkey) per capita. The average value for all the countries considered is 330 microSv per capita. This work represents a starting point in

  16. Estimating the age of healthy infants from quantitative myelin water fraction maps.

    PubMed

    Dean, Douglas C; O'Muircheartaigh, Jonathan; Dirks, Holly; Waskiewicz, Nicole; Lehman, Katie; Walker, Lindsay; Piryatinsky, Irene; Deoni, Sean C L

    2015-04-01

    The trajectory of the developing brain is characterized by a sequence of complex, nonlinear patterns that occur at systematic stages of maturation. Although significant prior neuroimaging research has shed light on these patterns, the challenge of accurately characterizing brain maturation, and identifying areas of accelerated or delayed development, remains. Altered brain development, particularly during the earliest stages of life, is believed to be associated with many neurological and neuropsychiatric disorders. In this work, we develop a framework to construct voxel-wise estimates of brain age based on magnetic resonance imaging measures sensitive to myelin content. 198 myelin water fraction (VFM) maps were acquired from healthy male and female infants and toddlers, 3 to 48 months of age, and used to train a sigmoidal-based maturational model. The validity of the approach was then established by testing the model on 129 different VFM datasets. Results revealed the approach to have high accuracy, with a mean absolute percent error of 13% in males and 14% in females, and high predictive ability, with correlation coefficients between estimated and true ages of 0.945 in males and 0.94 in females. This work represents a new approach toward mapping brain maturity, and may provide a more faithful staging of brain maturation in infants beyond chronological or gestation-corrected age, allowing earlier identification of atypical regional brain development.
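
    A minimal sketch of the sigmoidal-model idea in Python: fit a sigmoid of age to (synthetic) VFM data and invert it to map a VFM value back to an age. The functional form and parameter values are illustrative assumptions, not the study's fitted model.

        import numpy as np
        from scipy.optimize import curve_fit

        def sigmoid(age, amplitude, rate, midpoint):
            """Sigmoidal growth of myelin water fraction with age (months)."""
            return amplitude / (1.0 + np.exp(-rate * (age - midpoint)))

        # Synthetic training data standing in for voxel-wise VFM measurements.
        rng = np.random.default_rng(3)
        ages = rng.uniform(3, 48, 200)
        vfm = sigmoid(ages, 0.12, 0.15, 18.0) + rng.normal(0, 0.005, ages.size)

        params, _ = curve_fit(sigmoid, ages, vfm, p0=[0.1, 0.1, 20.0])

        def estimate_age(vfm_value, amplitude, rate, midpoint):
            """Invert the fitted sigmoid to map a VFM value back to an age."""
            v = np.clip(vfm_value / amplitude, 1e-6, 1 - 1e-6)
            return midpoint + np.log(v / (1.0 - v)) / rate

        print(estimate_age(sigmoid(24.0, *params), *params))   # ~24 months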

  17. The potential of more accurate InSAR covariance matrix estimation for land cover mapping

    NASA Astrophysics Data System (ADS)

    Jiang, Mi; Yong, Bin; Tian, Xin; Malhotra, Rakesh; Hu, Rui; Li, Zhiwei; Yu, Zhongbo; Zhang, Xinxin

    2017-04-01

    Synthetic aperture radar (SAR) and interferometric SAR (InSAR) provide both structural and electromagnetic information about the ground surface and have therefore been widely used for land cover classification. However, relatively few studies have investigated SAR datasets over richly textured areas where heterogeneous land covers exist and intermingle over short distances. One of the main difficulties is that the shapes of the structures in a SAR image cannot be represented in detail, as mixed pixels are likely to occur when conventional InSAR parameter estimation methods are used. To solve this problem and further extend previous research into remote monitoring of urban environments, we address the use of accurate InSAR covariance matrix estimation to improve the accuracy of land cover mapping. The standard and updated methods were tested using the HH-polarization TerraSAR-X dataset and compared with each other using the random forest classifier. A detailed accuracy assessment compiled for six types of surfaces shows that the updated method outperforms the standard approach by around 9%, with an overall accuracy of 82.46% over areas with rich texture in Zhuhai, China. This paper demonstrates that the accuracy of land cover mapping can benefit from enhancement of the quality of the observations, in addition to the classifier selection and multi-source data integration reported in previous studies.

  18. 3D viscosity maps for Greenland and effect on GRACE mass balance estimates

    NASA Astrophysics Data System (ADS)

    van der Wal, Wouter; Xu, Zheng

    2016-04-01

    The GRACE satellite mission measures mass loss of the Greenland ice sheet. To correct for glacial isostatic adjustment (GIA), numerical models are used. Although generally found to be a small signal, the full range of possible GIA models has not been explored yet. In particular, low viscosities due to a wet mantle and high temperatures due to the nearby Iceland hotspot could have a significant effect on GIA gravity rates. The goal of this study is to present a range of possible viscosity maps and investigate the effect on GRACE mass balance estimates. Viscosity is derived using flow laws for olivine. Mantle temperature is computed from global seismology models, based on temperature derivatives for different mantle compositions. An indication of grain sizes is obtained from xenolith findings at a few locations. We also investigate the weakening effect of the presence of melt. To calculate gravity rates, we use a finite-element GIA model with the 3D viscosity maps and the ICE-5G loading history. GRACE mass balances for mascons in Greenland are derived with a least-squares inversion, using separate constraints for the inland and coastal areas of Greenland. Biases in the least-squares inversion are corrected using scale factors estimated from a simulation based on a surface mass balance model (Xu et al., submitted to The Cryosphere). Model results show enhanced gravity rates in the west and south of Greenland with 3D viscosity maps, compared to GIA models with 1D viscosity. The effect on regional mass balance is up to 5 Gt/year. Regional low viscosity can make present-day gravity rates sensitive to ice thickness changes over the last decades. Therefore, an improved ice loading history for these time scales is needed.

  19. Winter wheat mapping combining variations before and after estimated heading dates

    NASA Astrophysics Data System (ADS)

    Qiu, Bingwen; Luo, Yuhan; Tang, Zhenghong; Chen, Chongcheng; Lu, Difei; Huang, Hongyu; Chen, Yunzhi; Chen, Nan; Xu, Weiming

    2017-01-01

    Accurate and updated information on winter wheat distribution is vital for food security. The intra-class variability of the temporal profiles of vegetation indices presents substantial challenges to current time series-based approaches. This study developed a new method to identify winter wheat over large regions through a transformation and metric-based approach. First, trend surfaces were established to identify key phenological parameters of winter wheat based on altitude and latitude, with reference to crop calendar data from the agro-meteorological stations. Second, two phenology-based indicators were developed based on the EVI2 differences between the estimated heading and seedling/harvesting dates and the change amplitudes. These two phenology-based indicators revealed variations during the estimated early and late growth stages. Finally, winter wheat data were extracted based on these two metrics. The winter wheat mapping method was applied to China based on the 250 m 8-day composite Moderate Resolution Imaging Spectroradiometer (MODIS) 2-band Enhanced Vegetation Index (EVI2) time series datasets. Accuracy was validated with field survey data, agricultural census data, and Landsat-interpreted results in test regions. When evaluated with 653 field survey sites and Landsat image interpreted data, the overall accuracies of MODIS-derived images in 2012-2013 were 92.19% and 88.86%, respectively. The MODIS-derived winter wheat areas accounted for over 82% of the variability at the municipal level when compared with agricultural census data. The winter wheat mapping method developed in this study demonstrates great adaptability to intra-class variability of the vegetation temporal profiles and has great potential for further applications to broader regions and other types of agricultural crop mapping.
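
    For illustration, the two phenology-based indicators can be computed per pixel from an EVI2 time series as simple differences anchored at the estimated dates. The EVI2 formula below is the standard two-band index (Jiang et al., 2008); the date indices are hypothetical inputs standing in for the paper's trend-surface estimates.

        import numpy as np

        def evi2(nir, red):
            """Two-band Enhanced Vegetation Index (Jiang et al., 2008)."""
            return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

        def phenology_indicators(evi2_series, seedling, heading, harvest):
            """Early- and late-season change amplitudes, per pixel.

            evi2_series : (time, ...) EVI2 stack; seedling/heading/harvest
            are time indices from an external phenology model.
            """
            early = evi2_series[heading] - evi2_series[seedling]  # green-up
            late = evi2_series[heading] - evi2_series[harvest]    # senescence
            return early, late

        ts = np.array([0.15, 0.35, 0.62, 0.58, 0.22])  # one pixel, 5 dates
        print(phenology_indicators(ts, seedling=0, heading=2, harvest=4))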

  20. Video attention deviation estimation using inter-frame visual saliency map analysis

    NASA Astrophysics Data System (ADS)

    Feng, Yunlong; Cheung, Gene; Le Callet, Patrick; Ji, Yusheng

    2012-01-01

    A viewer's visual attention during video playback is the matching of his eye gaze movement to the changing video content over time. If the gaze movement matches the video content (e.g., following a rolling soccer ball), then the viewer keeps his visual attention. If the gaze location moves from one video object to another, then the viewer shifts his visual attention. A video that causes a viewer to shift his attention often is a "busy" video. Determining which video content is busy is an important practical problem; a busy video is difficult for an encoder to deploy region-of-interest (ROI) based bit allocation on, and hard for a content provider to insert additional overlays like advertisements into, making the video even busier. One way to determine the busyness of video content is to conduct eye gaze experiments with a sizable group of test subjects, but this is time-consuming and cost-ineffective. In this paper, we propose an alternative method to determine the busyness of video, formally called video attention deviation (VAD): analyzing the spatial visual saliency maps of the video frames across time. We first derive transition probabilities of a Markov model for eye gaze using saliency maps of a number of consecutive frames. We then compute the steady state probability of the saccade state in the model, which is our estimate of VAD. We demonstrate that the steady state probability for saccade computed using saliency map analysis matches that computed using actual gaze traces for a range of videos with different degrees of busyness. Further, our analysis can also be used to segment video into shorter clips of different degrees of busyness by computing the Kullback-Leibler divergence using consecutive motion compensated saliency maps.
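
    The final step, turning a gaze transition matrix into a VAD score, reduces to computing the stationary distribution of a small Markov chain. A minimal Python sketch with made-up transition probabilities (the paper derives them from consecutive saliency maps):

        import numpy as np

        def steady_state(P):
            """Stationary distribution of a row-stochastic matrix P."""
            vals, vecs = np.linalg.eig(P.T)
            pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
            return pi / pi.sum()

        # Two-state gaze model: state 0 = fixation/tracking, state 1 = saccade.
        P = np.array([[0.92, 0.08],
                      [0.60, 0.40]])
        print(steady_state(P)[1])  # steady-state saccade probability, the VAD estimate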

  1. Leptospirosis in American Samoa – Estimating and Mapping Risk Using Environmental Data

    PubMed Central

    Lau, Colleen L.; Clements, Archie C. A.; Skelly, Chris; Dobson, Annette J.; Smythe, Lee D.; Weinstein, Philip

    2012-01-01

    Background The recent emergence of leptospirosis has been linked to many environmental drivers of disease transmission. Accurate epidemiological data are lacking because of under-diagnosis, poor laboratory capacity, and inadequate surveillance. Predictive risk maps have been produced for many diseases to identify high-risk areas for infection and guide allocation of public health resources, and are particularly useful where disease surveillance is poor. To date, no predictive risk maps have been produced for leptospirosis. The objectives of this study were to estimate leptospirosis seroprevalence at geographic locations based on environmental factors, produce a predictive disease risk map for American Samoa, and assess the accuracy of the maps in predicting infection risk. Methodology and Principal Findings Data on seroprevalence and risk factors were obtained from a recent study of leptospirosis in American Samoa. Data on environmental variables were obtained from local sources, and included rainfall, altitude, vegetation, soil type, and location of backyard piggeries. Multivariable logistic regression was performed to investigate associations between seropositivity and risk factors. Using the multivariable models, seroprevalence at geographic locations was predicted based on environmental variables. Goodness of fit of models was measured using area under the curve of the receiver operating characteristic, and the percentage of cases correctly classified as seropositive. Environmental predictors of seroprevalence included living below median altitude of a village, in agricultural areas, on clay soil, and higher density of piggeries above the house. Models had acceptable goodness of fit, and correctly classified ∼84% of cases. Conclusions and Significance Environmental variables could be used to identify high-risk areas for leptospirosis. Environmental monitoring could potentially be a valuable strategy for leptospirosis control, and allow us to move from disease

  2. Use of plume mapping data to estimate chlorinated solvent mass loss

    USGS Publications Warehouse

    Barbaro, J.R.; Neupane, P.P.

    2006-01-01

    Results from a plume mapping study from November 2000 through February 2001 in the sand-and-gravel surficial aquifer at Dover Air Force Base, Delaware, were used to assess the occurrence and extent of chlorinated solvent mass loss by calculating mass fluxes across two transverse cross sections and by observing changes in concentration ratios and mole fractions along a longitudinal cross section through the core of the plume. The plume mapping investigation was conducted to determine the spatial distribution of chlorinated solvents migrating from former waste disposal sites. Vertical contaminant concentration profiles were obtained with a direct-push drill rig and multilevel piezometers. These samples were supplemented with additional ground water samples collected with a minipiezometer from the bed of a perennial stream downgradient of the source areas. Results from the field program show that the plume, consisting mainly of tetrachloroethylene (PCE), trichloroethene (TCE), and cis-1,2-dichloroethene (cis-1,2-DCE), was approximately 670 m in length and 120 m in width, extended across much of the 9- to 18-m thickness of the surficial aquifer, and discharged to the stream in some areas. The analyses of the plume mapping data show that losses of the parent compounds, PCE and TCE, were negligible downgradient of the source. In contrast, losses of cis-1,2-DCE, a daughter compound, were observed in this plume. These losses very likely resulted from biodegradation, but the specific reaction mechanism could not be identified. This study demonstrates that plume mapping data can be used to estimate the occurrence and extent of chlorinated solvent mass loss from biodegradation and assess the effectiveness of natural attenuation as a remedial measure.

  3. A system to geometrically rectify and map airborne scanner imagery and to estimate ground area. [by computer

    NASA Technical Reports Server (NTRS)

    Spencer, M. M.; Wolf, J. M.; Schall, M. A.

    1974-01-01

    A system of computer programs was developed that performs geometric rectification and line-by-line mapping of airborne multispectral scanner data to ground coordinates and estimates ground area. The system requires aircraft attitude and positional information furnished by ancillary aircraft equipment, as well as ground control points. The geometric correction and mapping procedure locates the scan lines, or the pixels on each line, in terms of map grid coordinates. The area estimation procedure gives the ground area for each pixel or for a predesignated parcel specified in map grid coordinates. Exercising the system with simulated data showed both the uncorrected video and the corrected imagery and produced area estimates accurate to better than 99.7%.

  4. Mapping.

    ERIC Educational Resources Information Center

    Kinney, Douglas M.; McIntosh, Willard L.

    1979-01-01

    The area of geological mapping in the United States in 1978 increased greatly over that reported in 1977; state geological maps were added for California, Idaho, Nevada, and Alaska last year. (Author/BB)

  5. Zero and first order phase shift correction for field map estimation with dual-echo GRE using bipolar gradients

    PubMed Central

    Yeo, Desmond T. B.; Chenevert, Thomas L.; Fessler, Jeffrey A.; Kim, Boklye

    2007-01-01

    A simple phase error correction technique used for field map estimation with a generally available dual-echo GRE sequence is presented. Magnetic field inhomogeneity maps estimated using two separate GRE volume acquisitions at different echo times are prone to dynamic motion errors between the acquisitions. With the dual-echo sequence, the data are collected during two back-to-back readout gradients of opposite polarity after a single RF pulse, so inter-echo motion artifacts and alignment errors in field map estimation can be factored out. Residual phase error from the asymmetric readout pulses is modeled as an affine term in the readout direction. Results from phantom and human data suggest that the first order phase correction term stays constant over time and, hence, can be applied to different data acquired with the same protocol over time. The zero order phase correction term may change with time and is estimated empirically for different scans. PMID:17442524
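
    A sketch of the overall recipe in Python: take the phase difference of the two complex echoes, fit and remove a linear (first order) phase ramp along the readout direction, and scale by the echo spacing. This is a simplified illustration of the idea, not the paper's method: the zero order term is estimated empirically per scan in the paper rather than fitted, and the toy data below contain no genuine linear field variation that the fit could confound.

        import numpy as np

        def field_map(echo1, echo2, delta_te, readout_axis=1):
            """Field map (Hz) from a dual-echo GRE acquisition.

            echo1, echo2 : complex images at the two echo times.
            delta_te     : echo spacing in seconds.
            A first order phase ramp along the readout axis, attributed to
            the opposed-polarity (bipolar) readouts, is estimated by a
            linear fit to the mean phase profile and removed.
            """
            phase = np.angle(echo2 * np.conj(echo1))       # wrapped phase difference
            profile = phase.mean(axis=1 - readout_axis)    # mean over the other axis
            x = np.arange(profile.size)
            slope, _ = np.polyfit(x, profile, 1)           # affine phase error model
            ramp = slope * x
            corrected = phase - (ramp if readout_axis == 1 else ramp[:, None])
            return corrected / (2 * np.pi * delta_te)

        # Toy example: constant 10 Hz off-resonance plus a synthetic ramp.
        te = 2.5e-3
        x = np.arange(64)
        true_phase = 2 * np.pi * 10.0 * te + 0.01 * x
        e1 = np.ones((64, 64), dtype=complex)
        e2 = np.exp(1j * true_phase)[None, :] * e1
        print(field_map(e1, e2, te).mean())   # ~10 Hz once the ramp is removed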

  6. Hydrograph sensitivity to estimates of map impervious cover: a WinHSPF BASINS case study

    NASA Astrophysics Data System (ADS)

    Endreny, Theodore A.; Somerlot, Christopher; Hassett, James M.

    2003-04-01

    The BASINS geographic information system hydrologic toolkit was designed to compute total maximum daily loads, which are often derived by combining water quantity estimates with pollutant concentration estimates. In this paper the BASINS toolkit PLOAD and WinHSPF sub-models are briefly described, and then a 0.45 km2 headwater watershed in the New York Croton River area is used for a case study illustrating a full WinHSPF implementation. The goal of the Croton study was to determine the sensitivity of WinHSPF hydrographs to changes in land cover map inputs. This scenario occurs when scaling the WinHSPF model from the smaller 0.45 km2 watershed to the larger 1000 km2 management basin of the entire Croton area. Methods used to test model sensitivity included first calibrating the WinHSPF hydrograph using research-monitored precipitation and discharge data together with high spatial resolution and accuracy land cover data of impervious and pervious areas, and then swapping three separate land cover files, known as GIRAS, MRLC, and DOQQ data, into the calibrated model. Research results indicated that with the land cover swapping, WinHSPF peak flows in December 2001 hydrographs ranged between 35% underestimation and 20% overestimation, and that errors in land-cover-derived runoff ratios for storm totals and peak flows tracked with the land cover data estimates of impervious area.

  7. Efficient dense blur map estimation for automatic 2D-to-3D conversion

    NASA Astrophysics Data System (ADS)

    Vosters, L. P. J.; de Haan, G.

    2012-03-01

    Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can only be reliably estimated on edges. Therefore, Bea et al. [1] first proposed an optimization-based approach to propagate focus to non-edge image portions, for single image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements for solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose a fast, efficient, low latency, line-scanning-based focus propagation, which avoids the need for complex multigrid or (multilevel) preconditioning techniques. In addition we propose facial blur compensation to compensate for false shading edges that cause incorrect blur estimates in people's faces. In general, shading leads to incorrect focus estimates, which may lead to unnatural 3D and visual discomfort. Since visual attention mostly tends toward faces, our solution addresses the most distracting errors. A subjective assessment by paired comparison on a set of challenging low depth-of-field images shows that the proposed approach achieves 3D image quality equal to that of optimization-based approaches, and that facial blur compensation results in a significant improvement.

  8. A Unified Maximum Likelihood Framework for Simultaneous Motion and T1 Estimation in Quantitative MR T1 Mapping.

    PubMed

    Ramos-Llorden, Gabriel; den Dekker, Arnold J; Van Steenkiste, Gwendolyn; Jeurissen, Ben; Vanhevel, Floris; Van Audekerke, Johan; Verhoye, Marleen; Sijbers, Jan

    2017-02-01

    In quantitative MR T1 mapping, the spin-lattice relaxation time T1 of tissues is estimated from a series of T1-weighted images. As T1 estimation is a voxel-wise procedure, correct spatial alignment of the T1-weighted images is crucial. Conventionally, the T1-weighted images are first registered based on a general-purpose registration metric, after which the T1 map is estimated. However, as demonstrated in this paper, such a two-step approach leads to a bias in the final T1 map. In our work, instead of considering motion correction as a preprocessing step, we recover the motion-free T1 map using a unified estimation approach. In particular, we propose a unified framework where the motion parameters and the T1 map are simultaneously estimated with a Maximum Likelihood (ML) estimator. With our framework, the relaxation model, the motion model, and the data statistics are jointly incorporated to provide substantially more accurate motion and T1 parameter estimates. Experiments with realistic Monte Carlo simulations show that the proposed unified ML framework outperforms the conventional two-step approach as well as state-of-the-art model-based approaches, in terms of both motion and T1 map accuracy and mean-square error. Furthermore, the proposed method was additionally validated in a controlled experiment with real T1-weighted data and with two in vivo human brain T1-weighted data sets, showing its applicability in real-life scenarios.

  9. A parametric estimation approach to instantaneous spectral imaging.

    PubMed

    Oktem, Figen S; Kamalabadi, Farzad; Davila, Joseph M

    2014-12-01

    Spectral imaging, the simultaneous imaging and spectroscopy of a radiating scene, is a fundamental diagnostic technique in the physical sciences with widespread application. Due to the intrinsic limitation of two-dimensional (2D) detectors in capturing inherently three-dimensional (3D) data, spectral imaging techniques conventionally rely on a spatial or spectral scanning process, which renders them unsuitable for dynamic scenes. In this paper, we present a nonscanning (instantaneous) spectral imaging technique that estimates the physical parameters of interest by combining measurements with a parametric model and solving the resultant inverse problem computationally. The associated inverse problem, which can be viewed as a multiframe semiblind deblurring problem (with shift-variant blur), is formulated as a maximum a posteriori (MAP) estimation problem since in many such experiments prior statistical knowledge of the physical parameters can be well estimated. Subsequently, an efficient dynamic programming algorithm is developed to find the global optimum of the nonconvex MAP problem. Finally, the algorithm and the effectiveness of the spectral imaging technique are illustrated for an application in solar spectral imaging. Numerical simulation results indicate that the physical parameters can be estimated with the same order of accuracy as state-of-the-art slit spectroscopy but with the added benefit of an instantaneous, 2D field-of-view. This technique will be particularly useful for studying the spectra of dynamic scenes encountered in space remote sensing.

  10. Economic analysis of the first 20 years of universal hepatitis B vaccination program in Italy: an a posteriori evaluation and forecast of future benefits.

    PubMed

    Boccalini, Sara; Taddei, Cristina; Ceccherini, Vega; Bechini, Angela; Levi, Miriam; Bartolozzi, Dario; Bonanni, Paolo

    2013-05-01

    Italy was one of the first countries in the world to introduce a routine vaccination program against HBV for newborns and 12-y-old children. From a clinical point of view, this strategy was clearly successful. The objective of our study was to verify whether, 20 y after its implementation, hepatitis B universal vaccination had positive effects also from an economic point of view. An a posteriori analysis evaluated the impact that the hepatitis B immunization program has had up to the present day. The implementation of vaccination brought an extensive reduction of the burden of hepatitis B-related diseases in the Italian population. As a consequence, the past and future savings due to avoided clinical costs are particularly high. We obtained a return on investment (ROI) nearly equal to 1 from the National Health Service (NHS) perspective, and a benefit-to-cost ratio (BCR) slightly less than 1 from the Societal perspective, considering only the first 20 y from the start of the program. Over the longer time horizon, ROI and BCR values were positive (2.78 and 2.46, respectively). The break-even point was already achieved a few years ago for the NHS and for the Society, and since then more and more money has progressively been saved. The implementation of universal hepatitis B vaccination was very favorable during the first 20 y of adoption, and further benefits will become increasingly evident in the future. The hepatitis B vaccination program in Italy is a clear example of the great impact that universal immunization can provide in the medium-to-long term when health care authorities are wise enough to invest in prevention.

  11. DMI measurements impact on a position estimation with lack of GNSS signals during Mobile Mapping

    NASA Astrophysics Data System (ADS)

    Bobkowka, K.; Nykiel, G.; Tysiąc, P.

    2017-07-01

    Nowadays, Mobile Laser Scanning is in common use alongside traditional geodetic measurements. The data provided by the system are characterized by high precision and flexibility. For precise mapping, the accuracy of the data must be maintained; in Poland, according to ministerial regulations, the positional error of the data should not exceed 10 cm. With a fully operational system this is easy to uphold, but there are situations in which the signal from the INS alone is not enough to preserve it. This paper presents the use of a DMI (distance measurement instrument) in Mobile Laser Scanning as support for position estimation when satellite signals are lost, such as when the vehicle carrying the platform enters a tunnel. To compare the results, several tunnel entrances were recorded. This research helps in understanding the use of a DMI in mobile data acquisition under different acquisition conditions.

  12. Mapping Antarctic Crustal Thickness using Gravity Inversion and Comparison with Seismic Estimates

    NASA Astrophysics Data System (ADS)

    Kusznir, Nick; Ferraccioli, Fausto; Jordan, Tom

    2017-04-01

    Using gravity anomaly inversion, we produce comprehensive regional maps of crustal thickness and oceanic lithosphere distribution for Antarctica and the Southern Ocean. Crustal thicknesses derived from gravity inversion are consistent with seismic estimates. We determine Moho depth, crustal basement thickness, continental lithosphere thinning (1-1/β) and ocean-continent transition location using a 3D spectral domain gravity inversion method, which incorporates a lithosphere thermal gravity anomaly correction (Chappell & Kusznir 2008). The gravity anomaly contribution from ice thickness is included in the gravity inversion, as is the contribution from sediments which assumes a compaction controlled sediment density increase with depth. Data used in the gravity inversion are elevation and bathymetry, free-air gravity anomaly, the Bedmap 2 ice thickness and bedrock topography compilation south of 60 degrees south and relatively sparse constraints on sediment thickness. Ocean isochrons are used to define the cooling age of oceanic lithosphere. Crustal thicknesses from gravity inversion are compared with independent seismic estimates, which are still relatively sparse over Antarctica. Our gravity inversion study predicts thick crust (> 45 km) under interior East Antarctica, which is penetrated by narrow continental rifts featuring relatively thinner crust. The largest crustal thicknesses predicted from gravity inversion lie in the region of the Gamburtsev Subglacial Mountains, and are consistent with seismic estimates. The East Antarctic Rift System (EARS), a major Permian to Cretaceous age rift system, is imaged by our inversion and appears to extend from the continental margin at the Lambert Rift to the South Pole region, a distance of 2500 km. Offshore an extensive region of either thick oceanic crust or highly thinned continental crust lies adjacent to Oates Land and north Victoria Land, and also off West Antarctica around the Amundsen Ridges. Thin crust is

  13. Multi-crop area estimation and mapping on a microprocessor/mainframe network

    NASA Technical Reports Server (NTRS)

    Sheffner, E.

    1985-01-01

    The data processing system is outlined for a 1985 test aimed at determining the performance characteristics of area estimation and mapping procedures connected with the California Cooperative Remote Sensing Project. The project is a joint effort of the USDA Statistical Reporting Service-Remote Sensing Branch, the California Department of Water Resources, NASA-Ames Research Center, and the University of California Remote Sensing Research Program. One objective of the program was to study performance when data processing is done on a microprocessor/mainframe network under operational conditions. The 1985 test covered the hardware, software, and network specifications and the integration of these three components. Plans for the year - including planned completion of PEDITOR software, testing of software on MIDAS, and accomplishment of data processing on the MIDAS-VAX-CRAY network - are discussed briefly.

  14. CROSS-DISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: A Novel Method for the Initial-Condition Estimation of a Tent Map

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Gao, Yong; Yang, Yuan

    2009-07-01

    Based on the connection between the tent map and the saw tooth map or Bernoulli map, a novel method for the initial-condition estimation of the tent map is presented. In the method, the symbolic sequence generated from the tent map is first converted to the forms obtained from the saw tooth map and Bernoulli map; the relationship between the symbolic sequence and the initial condition of the tent map can then be obtained from the easily derived initial-condition estimation equations, so that the estimation of the tent map's initial condition is finally achieved. The method is computationally simple and the error of the estimator is less than 1/2^N. The method is verified by software simulation.
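
    The estimator is simple enough to state in full. The Python sketch below generates a tent-map symbolic sequence and recovers the initial condition through the tent-to-Bernoulli conversion (a cumulative XOR turns tent symbols into binary digits), with error below 1/2^N for N symbols as stated above. The code illustrates the published idea and is not the authors' implementation.

        import numpy as np

        def tent_symbols(x0, n):
            """Iterate the tent map and record the symbolic sequence."""
            x, sym = x0, []
            for _ in range(n):
                sym.append(0 if x < 0.5 else 1)
                x = 2 * x if x < 0.5 else 2 * (1 - x)
            return sym

        def estimate_x0(symbols):
            """Recover the initial condition from the symbolic sequence.

            The tent map is conjugate to the Bernoulli shift: the cumulative
            XOR of the tent symbols gives the binary digits of x0.
            """
            x0, bit = 0.0, 0
            for k, s in enumerate(symbols, start=1):
                bit ^= s                     # tent symbol -> binary digit
                x0 += bit * 2.0 ** (-k)
            return x0

        symbols = tent_symbols(0.6178, 40)
        print(estimate_x0(symbols))          # ~0.6178, error below 2**-40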

  15. Reducing Uncertainties in Satellite-derived Forest Aboveground Biomass Estimates using a High Resolution Forest Cover Map

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Ganguly, S.; Nemani, R. R.; Milesi, C.; Basu, S.; Kumar, U.

    2014-12-01

    Several studies to date have provided an extensive knowledge base for estimating forest aboveground biomass (AGB), and recent advances in space-based modeling of the 3-D canopy structure, combined with canopy reflectance measured by passive optical sensors and radar backscatter, are providing improved satellite-derived AGB density mapping for large scale carbon monitoring applications. A key limitation in forest AGB estimation from remote sensing, however, is the large uncertainty in forest cover estimates from coarse-to-medium resolution satellite-derived land cover maps (at present limited to the 30 m resolution of the USGS NLCD program). The uncertainties in forest cover estimates at the Landsat scale result in high uncertainties in AGB estimation, predominantly in heterogeneous forest and urban landscapes. We have successfully developed an approach using a machine learning algorithm and High-Performance Computing with NAIP airborne imagery for mapping tree cover at 1 m over California and Maryland. In a comparison with high resolution LiDAR data available over selected regions in the two states, we found our results to be promising both in terms of accuracy and in our ability to scale nationally. The generated 1-m forest cover map will be aggregated to the Landsat spatial grid to demonstrate differences in AGB estimates (pixel-level AGB density, total AGB at aggregated scales like ecoregions and counties) when using a native 30-m forest cover map versus a 30-m map derived from a higher resolution dataset. The process will also be complemented with a LiDAR-derived AGB estimate at the 30-m scale to aid in true validation.
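
    The aggregation from a 1 m binary tree-cover map to a 30 m fractional-cover grid is a straightforward block average. A minimal Python sketch (array sizes and cover fraction are made up for illustration):

        import numpy as np

        def aggregate_cover(binary_cover, block=30):
            """Aggregate a fine binary tree-cover map to fractional cover.

            binary_cover : 2-D array of 0/1 pixels (e.g., 1 m tree/no-tree
                           labels); dimensions must be multiples of `block`
                           (e.g., 30 for a 30 m Landsat grid).
            """
            h, w = binary_cover.shape
            return (binary_cover
                    .reshape(h // block, block, w // block, block)
                    .mean(axis=(1, 3)))       # fraction of tree pixels per cell

        rng = np.random.default_rng(4)
        fine = (rng.random((90, 90)) < 0.3).astype(float)  # ~30% simulated cover
        print(aggregate_cover(fine).shape)                 # (3, 3) coarse grid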

  16. Magnitude, Location, and Ground Motion Estimates Derived From the Community Internet Intensity Maps

    NASA Astrophysics Data System (ADS)

    Quitoriano, V.; Wald, D. J.; Hattori, M. F.; Ebel, J. E.

    2002-12-01

    As is typical for stable continental region events, the 2002 Au Sable Forks, NY, and Evansville, IN, earthquakes had a dearth of ground motion recordings. In contrast, the USGS collected over 9,300 and 6,600 Internet responses for these two events, respectively, through the Community Internet Intensity Map (CIIM) Web pages, providing a valuable collection of intensity data. CIIM is an automatic system for rapidly generating seismic intensity maps based on shaking and damage reports collected from Internet users immediately following felt earthquakes in the United States. These intensities (CII) have been shown to be comparable to USGS Modified Mercalli Intensities (MMI). Given the CII for an event, we have developed tools that make it possible to generate ground motion estimates in the absence of data from seismic instruments. We compare mean ground motion estimates based on the ShakeMap instrumental intensity relations with values computed from a Bayesian approach, based on combining probabilities of ground motion amplitudes for a given intensity with those for regionally appropriate attenuation relationships. We also present a method for deriving earthquake magnitude and location automatically, updated as a function of time, from online responses based on the algorithm of Bakun and Wentworth. We perform a grid search centered on the area with the highest intensity responses, treat each node as a 'trial epicenter', and determine the magnitude and intensity centroid that best fit the CII observations according to a region-dependent intensity-distance attenuation relation. We use the M4.9 2002 Gilroy, CA, event to test all these new tools since it was well recorded by strong motion instruments and had an impressive CIIM response. We show that the epicenter and ground motions determined from the CIIM data correlate well with instrumentally derived parameters. We then apply these methods to the Au Sable Forks, NY, and Evansville, IN, events. To show the
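
    The Bakun-Wentworth style grid search can be sketched compactly: for each trial epicenter, the best-fitting magnitude under an intensity-distance attenuation relation has a closed form, and the node minimizing the residual misfit is kept. The Python below uses illustrative attenuation coefficients, not a calibrated regional relation.

        import numpy as np

        def locate_and_size(obs_xy, obs_mmi, grid_x, grid_y,
                            a=1.0, b=1.5, c=3.0, depth=10.0):
            """Grid search over trial epicenters, Bakun-Wentworth style.

            At each node, the magnitude minimizing the squared misfit of
            MMI = a + b*M - c*log10(hypocentral distance) is found in
            closed form; the node with the smallest RMS residual wins.
            """
            best = (np.inf, None, None)
            for gx in grid_x:
                for gy in grid_y:
                    d = np.hypot(obs_xy[:, 0] - gx, obs_xy[:, 1] - gy)
                    r = np.sqrt(d ** 2 + depth ** 2)
                    m = np.mean((obs_mmi - a + c * np.log10(r)) / b)
                    resid = obs_mmi - (a + b * m - c * np.log10(r))
                    rms = np.sqrt(np.mean(resid ** 2))
                    if rms < best[0]:
                        best = (rms, (gx, gy), m)
            return best   # (misfit, epicenter, magnitude)

        # Synthetic test: true epicenter (0, 0), M 4.9, noisy intensities.
        rng = np.random.default_rng(5)
        xy = rng.uniform(-50, 50, size=(200, 2))
        r = np.sqrt(np.sum(xy ** 2, axis=1) + 10.0 ** 2)
        mmi = 1.0 + 1.5 * 4.9 - 3.0 * np.log10(r) + rng.normal(0, 0.3, 200)
        print(locate_and_size(xy, mmi,
                              np.linspace(-20, 20, 21), np.linspace(-20, 20, 21)))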

  17. Maps of Dust Infrared Emission for Use in Estimation of Reddening and Cosmic Microwave Background Radiation Foregrounds

    NASA Astrophysics Data System (ADS)

    Schlegel, David J.; Finkbeiner, Douglas P.; Davis, Marc

    1998-06-01

    We present a full-sky 100 μm map that is a reprocessed composite of the COBE/DIRBE and IRAS/ISSA maps, with the zodiacal foreground and confirmed point sources removed. Before using the ISSA maps, we remove the remaining artifacts from the IRAS scan pattern. Using the DIRBE 100 and 240 μm data, we have constructed a map of the dust temperature so that the 100 μm map may be converted to a map proportional to dust column density. The dust temperature varies from 17 to 21 K, which is modest but does modify the estimate of the dust column by a factor of 5. The result of these manipulations is a map with DIRBE quality calibration and IRAS resolution. A wealth of filamentary detail is apparent on many different scales at all Galactic latitudes. In high-latitude regions, the dust map correlates well with maps of H I emission, but deviations are coherent in the sky and are especially conspicuous in regions of saturation of H I emission toward denser clouds and of formation of H2 in molecular clouds. In contrast, high-velocity H I clouds are deficient in dust emission, as expected. To generate the full-sky dust maps, we must first remove zodiacal light contamination, as well as a possible cosmic infrared background (CIB). This is done via a regression analysis of the 100 μm DIRBE map against the Leiden-Dwingeloo map of H I emission, with corrections for the zodiacal light via a suitable expansion of the DIRBE 25 μm flux. This procedure removes virtually all traces of the zodiacal foreground. For the 100 μm map no significant CIB is detected. At longer wavelengths, where the zodiacal contamination is weaker, we detect the CIB at surprisingly high flux levels of 32 +/- 13 nW m-2 sr-1 at 140 μm and of 17 +/- 4 nW m-2 sr-1 at 240 μm (95% confidence). This integrated flux is ~2 times that extrapolated from optical galaxies in the Hubble Deep Field. The primary use of these maps is likely to be as a new estimator of Galactic extinction. To calibrate our maps, we assume a

  18. Multi-Scale hierarchical generation of PET parametric maps: application and testing on a [11C]DPN study.

    PubMed

    Rizzo, G; Turkheimer, F E; Keihaninejad, S; Bose, S K; Hammers, A; Bertoldo, A

    2012-02-01

    We propose a general approach to generating parametric maps. It consists of a multi-stage hierarchical scheme where, starting from the kinetic analysis of the whole brain, we cascade the kinetic information down to anatomical systems that are akin in terms of receptor densities, and then down to the voxel level. A priori classes of voxels are generated either by anatomical atlas segmentation or by functional segmentation using unsupervised clustering. Kinetic properties are transmitted to the voxels in each class using the maximum a posteriori (MAP) estimation method. We validate the novel method on a [11C]diprenorphine (DPN) test-retest dataset that represents a challenge to estimation given [11C]DPN's slow equilibration in tissue. The estimated parametric maps of volume of distribution (VT) reflect the opioid receptor distributions known from previous [11C]DPN studies. When priors are derived from the anatomical atlas, there is excellent agreement and strong correlation between voxel MAP and ROI results, and excellent test-retest reliability for all subjects but one. Voxel-level results did not change when priors were defined through unsupervised clustering. This new method is fast (i.e., 15 min per subject) and, applied to [11C]DPN data, achieves accurate quantification of VT as well as high quality VT images. Moreover, the way the priors are defined (i.e., using an anatomical atlas or unsupervised clustering) does not affect the estimates.

  19. THREaD Mapper Studio: a novel, visual web server for the estimation of genetic linkage maps.

    PubMed

    Cheema, Jitender; Ellis, T H Noel; Dicks, Jo

    2010-07-01

    The estimation of genetic linkage maps is a key component in plant and animal research, providing both an indication of the genetic structure of an organism and a mechanism for identifying candidate genes associated with traits of interest. Because of this importance, several computational solutions to genetic map estimation exist, mostly implemented as stand-alone software packages. However, the estimation process is often largely hidden from the user. Consequently, problems such as a program crashing may occur that leave a user baffled. THREaD Mapper Studio (http://cbr.jic.ac.uk/threadmapper) is a new web site that implements a novel, visual and interactive method for the estimation of genetic linkage maps from DNA markers. The rationale behind the web site is to make the estimation process as transparent and robust as possible, while also allowing users to use their expert knowledge during analysis. Indeed, the 3D visual nature of the tool allows users to spot features in a data set, such as outlying markers and potential structural rearrangements that could cause problems with the estimation procedure and to account for them in their analysis. Furthermore, THREaD Mapper Studio facilitates the visual comparison of genetic map solutions from third party software, aiding users in developing robust solutions for their data sets.

  20. THREaD Mapper Studio: a novel, visual web server for the estimation of genetic linkage maps

    PubMed Central

    Cheema, Jitender; Ellis, T. H. Noel; Dicks, Jo

    2010-01-01

    The estimation of genetic linkage maps is a key component in plant and animal research, providing both an indication of the genetic structure of an organism and a mechanism for identifying candidate genes associated with traits of interest. Because of this importance, several computational solutions to genetic map estimation exist, mostly implemented as stand-alone software packages. However, the estimation process is often largely hidden from the user. Consequently, problems such as a program crashing may occur that leave a user baffled. THREaD Mapper Studio (http://cbr.jic.ac.uk/threadmapper) is a new web site that implements a novel, visual and interactive method for the estimation of genetic linkage maps from DNA markers. The rationale behind the web site is to make the estimation process as transparent and robust as possible, while also allowing users to use their expert knowledge during analysis. Indeed, the 3D visual nature of the tool allows users to spot features in a data set, such as outlying markers and potential structural rearrangements that could cause problems with the estimation procedure and to account for them in their analysis. Furthermore, THREaD Mapper Studio facilitates the visual comparison of genetic map solutions from third party software, aiding users in developing robust solutions for their data sets. PMID:20494977

  1. Stability estimate for the aligned magnetic field in a periodic quantum waveguide from Dirichlet-to-Neumann map

    SciTech Connect

    Mejri, Youssef

    2016-06-15

    In this article, we study the boundary inverse problem of determining the aligned magnetic field appearing in the magnetic Schrödinger equation in a periodic quantum cylindrical waveguide from knowledge of the Dirichlet-to-Neumann map. We prove a Hölder stability estimate with respect to the Dirichlet-to-Neumann map by means of geometrical optics solutions of the magnetic Schrödinger equation.
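
    Schematically, a Hölder stability estimate of this kind has the following form (written in LaTeX; the precise norms, constant and exponent are those of the paper):

        \| a_1 - a_2 \| \;\le\; C \, \| \Lambda_{a_1} - \Lambda_{a_2} \|^{\kappa},
        \qquad 0 < \kappa \le 1,

    where a_j are the aligned magnetic fields and Λ_{a_j} the corresponding Dirichlet-to-Neumann maps, so that closeness of the boundary data forces closeness of the fields.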

  2. Unsupervised partial volume estimation using 3D and statistical priors

    NASA Astrophysics Data System (ADS)

    Tardif, Pierre M.

    2001-07-01

    Our main objective is to compute volumes of interest in magnetic resonance imaging (MRI) data. We suggest a method based on maximum a posteriori (MAP) estimation. Using texture models, we propose a new partial volume determination. We model tissues using generalized Gaussian distributions fitted from a mixture of their gray levels and texture information. Texture information relies on estimation errors from multiresolution and multispectral autoregressive models. A uniform distribution handles large estimation errors when dealing with unknown tissues. An initial segmentation, needed by the multiresolution deterministic relaxation segmentation algorithm, is found using an anatomical atlas. To model the a priori information, we use a full 3-D extension of Markov random fields. Our 3-D extension is straightforward, easily implemented, and includes single label probability. Using the initial segmentation map and initial tissue models, iterative updates are made to the segmentation map and tissue models. Updating the tissue models removes field inhomogeneities. Partial volumes are computed from the final segmentation map and tissue models. Preliminary results are encouraging.
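
    The voxel-wise MAP classification at the heart of such a method can be sketched with generalized Gaussian class models. The Python below scores each intensity under every tissue model plus a log prior and takes the argmax; it is a simplified illustration with made-up parameters that omits the texture features and the 3-D Markov random field prior described above.

        import numpy as np
        from scipy.special import gamma

        def gg_logpdf(x, mu, alpha, beta):
            """Log-density of a generalized Gaussian: scale alpha, shape beta."""
            return (np.log(beta) - np.log(2 * alpha)
                    - np.log(gamma(1.0 / beta))
                    - (np.abs(x - mu) / alpha) ** beta)

        def map_labels(intensities, tissues, log_priors):
            """Voxel-wise MAP tissue labels from intensity alone.

            tissues    : list of (mu, alpha, beta) generalized Gaussian models.
            log_priors : per-class log prior probabilities; a full version
                         would use a spatial (MRF) prior instead.
            """
            scores = np.stack([gg_logpdf(intensities, *t) + lp
                               for t, lp in zip(tissues, log_priors)])
            return scores.argmax(axis=0)

        tissues = [(30.0, 8.0, 2.0), (80.0, 10.0, 1.5)]  # two made-up classes
        labels = map_labels(np.array([25.0, 70.0, 95.0]),
                            tissues, np.log([0.5, 0.5]))
        print(labels)    # [0 1 1]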

  3. Mapping Oil and Gas Development Potential in the US Intermountain West and Estimating Impacts to Species

    PubMed Central

    Copeland, Holly E.; Doherty, Kevin E.; Naugle, David E.; Pocewicz, Amy; Kiesecker, Joseph M.

    2009-01-01

    Background Many studies have quantified the indirect effect of hydrocarbon-based economies on climate change and biodiversity, concluding that a significant proportion of species will be threatened with extinction. However, few studies have measured the direct effect of new energy production infrastructure on species persistence. Methodology/Principal Findings We propose a systematic way to forecast patterns of future energy development and calculate impacts to species using spatially-explicit predictive modeling techniques to estimate oil and gas potential and create development build-out scenarios by seeding the landscape with oil and gas wells based on underlying potential. We illustrate our approach for the greater sage-grouse (Centrocercus urophasianus) in the western US and translate the build-out scenarios into estimated impacts on sage-grouse. We project that future oil and gas development will cause a 7–19 percent decline from 2007 sage-grouse lek population counts and impact 3.7 million ha of sagebrush shrublands and 1.1 million ha of grasslands in the study area. Conclusions/Significance Maps of where oil and gas development is anticipated in the US Intermountain West can be used by decision-makers intent on minimizing impacts to sage-grouse. This analysis also provides a general framework for using predictive models and build-out scenarios to anticipate impacts to species. These predictive models and build-out scenarios allow tradeoffs to be considered between species conservation and energy development prior to implementation. PMID:19826472

  4. Mapping oil and gas development potential in the US Intermountain West and estimating impacts to species.

    PubMed

    Copeland, Holly E; Doherty, Kevin E; Naugle, David E; Pocewicz, Amy; Kiesecker, Joseph M

    2009-10-14

    Many studies have quantified the indirect effect of hydrocarbon-based economies on climate change and biodiversity, concluding that a significant proportion of species will be threatened with extinction. However, few studies have measured the direct effect of new energy production infrastructure on species persistence. We propose a systematic way to forecast patterns of future energy development and calculate impacts to species using spatially-explicit predictive modeling techniques to estimate oil and gas potential and create development build-out scenarios by seeding the landscape with oil and gas wells based on underlying potential. We illustrate our approach for the greater sage-grouse (Centrocercus urophasianus) in the western US and translate the build-out scenarios into estimated impacts on sage-grouse. We project that future oil and gas development will cause a 7-19 percent decline from 2007 sage-grouse lek population counts and impact 3.7 million ha of sagebrush shrublands and 1.1 million ha of grasslands in the study area. Maps of where oil and gas development is anticipated in the US Intermountain West can be used by decision-makers intent on minimizing impacts to sage-grouse. This analysis also provides a general framework for using predictive models and build-out scenarios to anticipate impacts to species. These predictive models and build-out scenarios allow tradeoffs to be considered between species conservation and energy development prior to implementation.

  5. Global epidemiology of sickle haemoglobin in neonates: a contemporary geostatistical model-based map and population estimates

    PubMed Central

    Piel, Frédéric B; Patil, Anand P; Howes, Rosalind E; Nyangiri, Oscar A; Gething, Peter W; Dewi, Mewahyu; Temperley, William H; Williams, Thomas N; Weatherall, David J; Hay, Simon I

    2013-01-01

    Summary Background Reliable estimates of populations affected by diseases are necessary to guide efficient allocation of public health resources. Sickle haemoglobin (HbS) is the most common and clinically significant haemoglobin structural variant, but no contemporary estimates exist of the global populations affected. Moreover, the precision of available national estimates of heterozygous (AS) and homozygous (SS) neonates is unknown. We aimed to provide evidence-based estimates at various scales, with uncertainty measures. Methods Using a database of sickle haemoglobin surveys, we created a contemporary global map of HbS allele frequency distribution within a Bayesian geostatistical model. The pairing of this map with demographic data enabled calculation of global, regional, and national estimates of the annual number of AS and SS neonates. Subnational estimates were also calculated in data-rich areas. Findings Our map shows subnational spatial heterogeneities and high allele frequencies across most of sub-Saharan Africa, the Middle East, and India, as well as gene flow following migrations to western Europe and the eastern coast of the Americas. Accounting for local heterogeneities and demographic factors, we estimated that the global number of neonates affected by HbS in 2010 included 5 476 000 (IQR 5 291 000–5 679 000) AS neonates and 312 000 (294 000–330 000) SS neonates. These global estimates are higher than previous conservative estimates. Important differences predicted at the national level are discussed. Interpretation HbS will have an increasing effect on public health systems. Our estimates can help countries and the international community gauge the need for appropriate diagnoses and genetic counselling to reduce the number of neonates affected. Similar mapping and modelling methods could be used for other inherited disorders. Funding The Wellcome Trust. PMID:23103089

  6. Global epidemiology of sickle haemoglobin in neonates: a contemporary geostatistical model-based map and population estimates.

    PubMed

    Piel, Frédéric B; Patil, Anand P; Howes, Rosalind E; Nyangiri, Oscar A; Gething, Peter W; Dewi, Mewahyu; Temperley, William H; Williams, Thomas N; Weatherall, David J; Hay, Simon I

    2013-01-12

    Reliable estimates of populations affected by diseases are necessary to guide efficient allocation of public health resources. Sickle haemoglobin (HbS) is the most common and clinically significant haemoglobin structural variant, but no contemporary estimates exist of the global populations affected. Moreover, the precision of available national estimates of heterozygous (AS) and homozygous (SS) neonates is unknown. We aimed to provide evidence-based estimates at various scales, with uncertainty measures. Using a database of sickle haemoglobin surveys, we created a contemporary global map of HbS allele frequency distribution within a Bayesian geostatistical model. The pairing of this map with demographic data enabled calculation of global, regional, and national estimates of the annual number of AS and SS neonates. Subnational estimates were also calculated in data-rich areas. Our map shows subnational spatial heterogeneities and high allele frequencies across most of sub-Saharan Africa, the Middle East, and India, as well as gene flow following migrations to western Europe and the eastern coast of the Americas. Accounting for local heterogeneities and demographic factors, we estimated that the global number of neonates affected by HbS in 2010 included 5,476,000 (IQR 5,291,000-5,679,000) AS neonates and 312,000 (294,000-330,000) SS neonates. These global estimates are higher than previous conservative estimates. Important differences predicted at the national level are discussed. HbS will have an increasing effect on public health systems. Our estimates can help countries and the international community gauge the need for appropriate diagnoses and genetic counselling to reduce the number of neonates affected. Similar mapping and modelling methods could be used for other inherited disorders. The Wellcome Trust. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Estimation of Purkinje trees from electro-anatomical mapping of the left ventricle using minimal cost geodesics.

    PubMed

    Cárdenes, Rubén; Sebastian, Rafael; Soto-Iglesias, David; Berruezo, Antonio; Camara, Oscar

    2015-08-01

    The electrical activation of the heart is a complex physiological process that is essential for the understanding of several cardiac dysfunctions, such as ventricular tachycardia (VT). Nowadays, patient-specific activation times on ventricular chambers can be estimated from electro-anatomical maps, providing crucial information to clinicians for guiding cardiac radio-frequency ablation treatment. However, some relevant electrical pathways such as those of the Purkinje system are very difficult to interpret from these maps due to sparsity of data and the limited spatial resolution of the system. We present here a novel method to estimate these fast electrical pathways from the local activation time (LAT) maps obtained from electro-anatomical mapping. The location of Purkinje-myocardial junctions (PMJs) is estimated by considering them as critical points of a distance map defined by the activation maps, and then minimal cost geodesic paths are computed on the ventricular surface between the detected junctions. Experiments to validate the proposed method have been carried out on simplified and realistic simulated data, showing good performance in recovering the main characteristics of simulated Purkinje networks (e.g. PMJs). A feasibility study with real cases of fascicular VT was also performed, showing promising results.

  8. National-scale crop type mapping and area estimation using multi-resolution remote sensing and field survey

    NASA Astrophysics Data System (ADS)

    Song, X. P.; Potapov, P.; Adusei, B.; King, L.; Khan, A.; Krylov, A.; Di Bella, C. M.; Pickens, A. H.; Stehman, S. V.; Hansen, M.

    2016-12-01

    Reliable and timely information on agricultural production is essential for ensuring world food security. Freely available medium-resolution satellite data (e.g. Landsat, Sentinel) offer the possibility of improved global agriculture monitoring. Here we develop and test a method for estimating in-season crop acreage using a probability sample of field visits and producing wall-to-wall crop type maps at national scales. The method is first illustrated for soybean cultivated area in the US for 2015. A stratified, two-stage cluster sampling design was used to collect field data to estimate national soybean area. The field-based estimate employed historical soybean extent maps from the U.S. Department of Agriculture (USDA) Cropland Data Layer to delineate and stratify U.S. soybean growing regions. The estimated 2015 U.S. soybean cultivated area based on the field sample was 341,000 km2 with a standard error of 23,000 km2. This result is 1.0% lower than USDA's 2015 June survey estimate and 1.9% higher than USDA's 2016 January estimate. Our area estimate was derived in early September, about 2 months ahead of harvest. To map soybean cover, the Landsat image archive for the year 2015 growing season was processed using an active learning approach. Overall accuracy of the soybean map was 84%. The field-based sample estimated area was then used to calibrate the map such that the soybean acreage of the map derived through pixel counting matched the sample-based area estimate. The strength of the sample-based area estimation lies in the stratified design that takes advantage of the spatially explicit cropland layers to construct the strata. The success of the mapping was built upon an automated system which transforms Landsat images into standardized time-series metrics. The developed method produces reliable and timely information on soybean area in a cost-effective way and could be implemented in an operational mode. The approach has also been applied for other crops in
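
    The core of the field-based estimator above is standard stratified sampling arithmetic. Below is a minimal sketch of a single-stage stratified estimator of total area and its standard error; the stratum sizes and sampled areas are invented for illustration, and the study's two-stage cluster design and subsequent map calibration step are not reproduced.

```python
import numpy as np

def stratified_area_estimate(strata):
    """Stratified estimate of total crop area and its standard error.

    `strata` maps stratum name -> (N_h, a_h), where N_h is the number of
    sampling units in the stratum and a_h is an array of observed crop
    areas (km^2) for the n_h sampled units.
    """
    total, var = 0.0, 0.0
    for N_h, a_h in strata.values():
        a_h = np.asarray(a_h, dtype=float)
        n_h = a_h.size
        total += N_h * a_h.mean()
        # Variance of the stratum total, with finite population correction.
        var += N_h**2 * (1 - n_h / N_h) * a_h.var(ddof=1) / n_h
    return total, np.sqrt(var)

# Illustrative two-stratum example (all numbers are made up):
strata = {
    "high_density": (5000, [62.1, 58.4, 70.2, 66.8]),
    "low_density": (20000, [3.2, 0.0, 5.1, 1.7, 2.4]),
}
est, se = stratified_area_estimate(strata)
print(f"area = {est:.0f} km^2, SE = {se:.0f} km^2")
```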

  9. Integration of Self-Organizing Map (SOM) and Kernel Density Estimation (KDE) for network intrusion detection

    NASA Astrophysics Data System (ADS)

    Cao, Yuan; He, Haibo; Man, Hong; Shen, Xiaoping

    2009-09-01

    This paper proposes an approach to integrate the self-organizing map (SOM) and kernel density estimation (KDE) techniques for the anomaly-based network intrusion detection (ABNID) system to monitor the network traffic and capture potential abnormal behaviors. With the continuous development of network technology, information security has become a major concern for cyber system research. In modern net-centric and tactical warfare networks, the situation is more critical: real-time protection of the availability, confidentiality, and integrity of the networked information must be provided. To this end, in this work we propose to explore the learning capabilities of SOM, and integrate it with KDE for network intrusion detection. KDE is used to estimate the distributions of the observed random variables that describe the network system and determine whether the network traffic is normal or abnormal. Meanwhile, the learning and clustering capabilities of SOM are employed to obtain well-defined data clusters to reduce the computational cost of the KDE. The principle of learning in SOM is to self-organize the network of neurons to seek similar properties for certain input patterns. Therefore, SOM can form an approximation of the distribution of input space in a compact fashion, reduce the number of terms in a kernel density estimator, and thus improve the efficiency of the intrusion detection. We test the proposed algorithm on real-world data sets obtained from the Integrated Network Based Ohio University's Network Detective Service (INBOUNDS) system to show the effectiveness and efficiency of this method.
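
    A minimal sketch of the SOM-plus-KDE idea: train a small self-organized codebook on normal traffic, then build the kernel density estimator over the codebook vectors rather than over all raw samples. The SOM below is a deliberately simple NumPy implementation and the features are synthetic; the INBOUNDS feature set and the authors' exact training schedule are not reproduced.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def train_som(data, rows=8, cols=8, iters=5000, lr0=0.5, sigma0=3.0):
    """Minimal SOM: returns the trained codebook, shape (rows*cols, d)."""
    d = data.shape[1]
    codebook = rng.uniform(data.min(0), data.max(0), size=(rows * cols, d))
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((codebook - x) ** 2).sum(1))   # best matching unit
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        h = np.exp(-((grid - grid[bmu]) ** 2).sum(1) / (2 * sigma**2))
        codebook += lr * h[:, None] * (x - codebook)
    return codebook

# Toy "normal traffic" features; a real system would use flow statistics.
normal = rng.normal(0, 1, size=(2000, 3))
codebook = train_som(normal)

# KDE over the (much smaller) codebook instead of all raw samples.
kde = gaussian_kde(codebook.T)
threshold = np.quantile(kde(normal.T), 0.01)   # 1% lowest-density cutoff

test = np.vstack([rng.normal(0, 1, (5, 3)), rng.normal(6, 1, (5, 3))])
print(kde(test.T) < threshold)   # True flags a potential anomaly
```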

  10. Kinematic state estimation and motion planning for stochastic nonholonomic systems using the exponential map.

    PubMed

    Park, Wooram; Liu, Yan; Zhou, Yu; Moses, Matthew; Chirikjian, Gregory S

    2008-04-11

    A nonholonomic system subjected to external noise from the environment, or internal noise in its own actuators, will evolve in a stochastic manner described by an ensemble of trajectories. This ensemble of trajectories is equivalent to the solution of a Fokker-Planck equation that typically evolves on a Lie group. If the most likely state of such a system is to be estimated, and plans for subsequent motions from the current state are to be made so as to move the system to a desired state with high probability, then modeling how the probability density of the system evolves is critical. Methods for solving Fokker-Planck equations that evolve on Lie groups then become important. Such equations can be solved using the operational properties of group Fourier transforms in which irreducible unitary representation (IUR) matrices play a critical role. Therefore, we develop a simple approach for the numerical approximation of all the IUR matrices for two of the groups of most interest in robotics: the rotation group in three-dimensional space, SO(3), and the Euclidean motion group of the plane, SE(2). This approach uses the exponential mapping from the Lie algebras of these groups, and takes advantage of the sparse nature of the Lie algebra representation matrices. Other techniques for density estimation on groups are also explored. The computed densities are applied in the context of probabilistic path planning for a kinematic cart in the plane and flexible needle steering in three-dimensional space. In these examples the injection of artificial noise into the computational models (rather than noise in the actual physical systems) serves as a tool to search the configuration spaces and plan paths. Finally, we illustrate how density estimation problems arise in the characterization of physical noise in orientational sensors such as gyroscopes.
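
    The exponential map the method relies on has a simple closed form for SE(2). A small sketch, assuming the usual twist convention (v1, v2, omega):

```python
import numpy as np

def exp_se2(v1, v2, omega):
    """Exponential map from the Lie algebra se(2) to SE(2).

    (v1, v2) is the translational part of the twist and omega the
    rotational part; returns a 3x3 homogeneous transform.
    """
    if abs(omega) < 1e-9:                       # pure translation limit
        R = np.eye(2)
        t = np.array([v1, v2])
    else:
        c, s = np.cos(omega), np.sin(omega)
        R = np.array([[c, -s], [s, c]])
        V = np.array([[s, -(1 - c)], [1 - c, s]]) / omega
        t = V @ np.array([v1, v2])
    g = np.eye(3)
    g[:2, :2], g[:2, 2] = R, t
    return g

# Unit-speed forward motion while turning: traces an arc of a circle.
print(exp_se2(1.0, 0.0, np.pi / 2))
```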

  11. Kinematic state estimation and motion planning for stochastic nonholonomic systems using the exponential map

    PubMed Central

    Park, Wooram; Liu, Yan; Zhou, Yu; Moses, Matthew; Chirikjian, Gregory S.

    2010-01-01

    SUMMARY A nonholonomic system subjected to external noise from the environment, or internal noise in its own actuators, will evolve in a stochastic manner described by an ensemble of trajectories. This ensemble of trajectories is equivalent to the solution of a Fokker-Planck equation that typically evolves on a Lie group. If the most likely state of such a system is to be estimated, and plans for subsequent motions from the current state are to be made so as to move the system to a desired state with high probability, then modeling how the probability density of the system evolves is critical. Methods for solving Fokker-Planck equations that evolve on Lie groups then become important. Such equations can be solved using the operational properties of group Fourier transforms in which irreducible unitary representation (IUR) matrices play a critical role. Therefore, we develop a simple approach for the numerical approximation of all the IUR matrices for two of the groups of most interest in robotics: the rotation group in three-dimensional space, SO(3), and the Euclidean motion group of the plane, SE(2). This approach uses the exponential mapping from the Lie algebras of these groups, and takes advantage of the sparse nature of the Lie algebra representation matrices. Other techniques for density estimation on groups are also explored. The computed densities are applied in the context of probabilistic path planning for a kinematic cart in the plane and flexible needle steering in three-dimensional space. In these examples the injection of artificial noise into the computational models (rather than noise in the actual physical systems) serves as a tool to search the configuration spaces and plan paths. Finally, we illustrate how density estimation problems arise in the characterization of physical noise in orientational sensors such as gyroscopes. PMID:20454468

  12. Multiresolution MAP despeckling of SAR images based on locally adaptive generalized Gaussian pdf modeling.

    PubMed

    Argenti, Fabrizio; Bianchi, Tiziano; Alparone, Luciano

    2006-11-01

    In this paper, a new despeckling method based on undecimated wavelet decomposition and maximum a posteriori (MAP) estimation is proposed. Such a method relies on the assumption that the probability density function (pdf) of each wavelet coefficient is generalized Gaussian (GG). The major novelty of the proposed approach is that the parameters of the GG pdf are taken to be space-varying within each wavelet frame. Thus, they may be adjusted to spatial image context, not only to scale and orientation. Since the MAP equation to be solved is a function of the parameters of the assumed pdf model, the variance and shape factor of the GG function are derived from the theoretical moments, which depend on the moments and joint moments of the observed noisy signal and on the statistics of speckle. The solution of the MAP equation yields the MAP estimate of the wavelet coefficients of the noise-free image. The restored SAR image is synthesized from such coefficients. Experimental results, carried out on both synthetic speckled images and true SAR images, demonstrate that MAP filtering can be successfully applied to SAR images represented in the shift-invariant wavelet domain, without resorting to a logarithmic transformation.
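
    The space-varying GG parameters are obtained from moments. As an illustration of that step only (not the paper's full MAP filter), the shape factor of a zero-mean generalized Gaussian can be recovered by numerically inverting the moment ratio E[|x|]^2/E[x^2] = Γ(2/ν)^2 / (Γ(1/ν)Γ(3/ν)):

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def gg_shape_from_moments(x):
    """Estimate the generalized Gaussian shape factor by moment matching.

    For a zero-mean GG with shape nu, E[|x|]^2 / E[x^2] equals
    r(nu) = Gamma(2/nu)^2 / (Gamma(1/nu) * Gamma(3/nu)); we invert r
    numerically from the sample moments.
    """
    x = np.asarray(x, dtype=float)
    ratio = np.mean(np.abs(x)) ** 2 / np.mean(x**2)
    r = lambda nu: gamma(2 / nu) ** 2 / (gamma(1 / nu) * gamma(3 / nu)) - ratio
    return brentq(r, 0.1, 10.0)   # r is monotone increasing in nu

# Laplacian samples (a GG with nu = 1): the estimate should be near 1.
x = np.random.default_rng(1).laplace(size=200_000)
print(gg_shape_from_moments(x))   # ~1.0
```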

  13. Parameter adaptive estimation of random processes

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Vanlandingham, H. F.

    1975-01-01

    This paper is concerned with the parameter adaptive least squares estimation of random processes. The main result is a general representation theorem for the conditional expectation of a random variable on a product probability space. Using this theorem along with the general likelihood ratio expression, the least squares estimate of the process is found in terms of the parameter conditioned estimates. The stochastic differential for the a posteriori probability and the stochastic differential equation for the a posteriori density are found by using simple stochastic calculus on the representations obtained. The results are specialized to the case when the parameter has a discrete distribution. The results can be used to construct an implementable recursive estimator for certain types of nonlinear filtering problems. This is illustrated by some simple examples.
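
    For the discrete-distribution case, the recursion reduces to multiplying the a posteriori probabilities of the candidate parameter values by the likelihood of each new observation and renormalizing; the least squares estimate is then the posterior-weighted mixture of the parameter-conditioned estimates. A toy scalar sketch (Gaussian observations of an unknown mean drawn from a finite candidate set; the paper's general product-space setting is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)

# Candidate parameter values and a uniform prior over them.
thetas = np.array([-1.0, 0.0, 0.5, 2.0])
post = np.full(len(thetas), 1 / len(thetas))

sigma = 1.0
true_theta = 0.5

for _ in range(100):
    y = true_theta + sigma * rng.normal()
    # Recursive Bayes update of the a posteriori probabilities.
    lik = np.exp(-0.5 * ((y - thetas) / sigma) ** 2)
    post = post * lik
    post /= post.sum()

# Least squares estimate = posterior-weighted mix of the
# parameter-conditioned estimates (here simply the candidates).
print("posterior:", np.round(post, 3))
print("estimate:", post @ thetas)
```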

  14. Bayesian optimization of perfusion and transit time estimation in PASL-MRI.

    PubMed

    Santos, Nuno; Sanches, João; Figueiredo, Patrícia

    2010-01-01

    Pulsed Arterial Spin Labeling (PASL) techniques potentially allow the absolute, non-invasive quantification of brain perfusion and arterial transit time. This can be achieved by fitting a kinetic model to the data acquired at a number of inversion time points (TI). The intrinsically low SNR of PASL data, together with the uncertainty in the model parameters, can hinder the estimation of the parameters of interest. Here, a two-compartment kinetic model is used to estimate perfusion and transit time, based on a Maximum a Posteriori (MAP) criterion. A priori information concerning the physiological variation of the multiple model parameters is used to guide the solution. Monte Carlo simulations are performed to compare the accuracy of our proposed Bayesian estimation method with a conventional Least Squares (LS) approach, using four different sets of TI points. Each set is obtained either with a uniform distribution or an optimal sampling strategy designed based on the same MAP criterion. We show that the estimation errors are minimized when our proposed Bayesian estimation method is employed in combination with an optimal set of sampling points. In conclusion, our results indicate that PASL perfusion and transit time measurements would benefit from a Bayesian approach for the optimization of both the sampling strategy and the estimation algorithm, whereby prior information on the parameters is used.
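
    A sketch of the MAP fitting step, with Gaussian priors on perfusion and transit time. The kinetic curve below is a simplified single-compartment (Buxton-style) form with invented constants, standing in for the authors' two-compartment model:

```python
import numpy as np
from scipy.optimize import minimize

T1b, tau = 1.6, 0.8          # blood T1 (s) and bolus duration (s), fixed

def signal(ti, f, dt):
    """Schematic PASL kinetic curve: zero before the transit time dt,
    then inflow weighted by T1 decay of the label."""
    s = np.zeros_like(ti)
    rise = (ti >= dt) & (ti < dt + tau)
    plateau = ti >= dt + tau
    s[rise] = f * (ti[rise] - dt) * np.exp(-ti[rise] / T1b)
    s[plateau] = f * tau * np.exp(-ti[plateau] / T1b)
    return s

def neg_log_posterior(p, ti, y, noise_sd, prior_mean, prior_sd):
    f, dt = p
    resid = (y - signal(ti, f, dt)) / noise_sd
    prior = (np.array([f, dt]) - prior_mean) / prior_sd
    return 0.5 * np.sum(resid**2) + 0.5 * np.sum(prior**2)

rng = np.random.default_rng(3)
ti = np.linspace(0.2, 3.0, 12)                    # inversion times (s)
y = signal(ti, 60.0, 0.7) + rng.normal(0, 2.0, ti.size)

prior_mean, prior_sd = np.array([55.0, 0.8]), np.array([20.0, 0.3])
fit = minimize(neg_log_posterior, x0=prior_mean, method="Nelder-Mead",
               args=(ti, y, 2.0, prior_mean, prior_sd))
print("MAP perfusion, transit time:", fit.x)
```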

  15. MAPS

    Atmospheric Science Data Center

    2014-07-03

    ... Measurement of Air Pollution from Satellites (MAPS) data were collected during Space Shuttle flights in 1981, ...

  16. A neural-network based estimator to search for primordial non-Gaussianity in Planck CMB maps

    SciTech Connect

    Novaes, C.P.; Bernui, A.; Ferreira, I.S.; Wuensche, C.A. E-mail: bernui@on.br E-mail: ca.wuensche@inpe.br

    2015-09-01

    We present an upgraded combined estimator, based on Minkowski Functionals and Neural Networks, with excellent performance in detecting primordial non-Gaussianity in simulated maps that also contain a weighted mixture of Galactic contaminations, besides real pixel noise from Planck cosmic microwave background radiation data. We rigorously test the efficiency of our estimator considering several plausible scenarios for residual non-Gaussianities in the foreground-cleaned Planck maps, with the aim of optimizing the training procedure of the Neural Network to discriminate between contaminations with primordial and secondary non-Gaussian signatures. We look for constraints of primordial local non-Gaussianity at large angular scales in the foreground-cleaned Planck maps. For the SMICA map we found f_NL = 33 ± 23, at 1σ confidence level, in excellent agreement with the WMAP-9yr and Planck results. In addition, for the other three Planck maps we obtain similar constraints with values in the interval f_NL ∈ [33, 41], concomitant with the fact that these maps manifest distinct features in reported analyses, like having different pixel noise intensities.

  17. A neural-network based estimator to search for primordial non-Gaussianity in Planck CMB maps

    NASA Astrophysics Data System (ADS)

    Novaes, C. P.; Bernui, A.; Ferreira, I. S.; Wuensche, C. A.

    2015-09-01

    We present an upgraded combined estimator, based on Minkowski Functionals and Neural Networks, with excellent performance in detecting primordial non-Gaussianity in simulated maps that also contain a weighted mixture of Galactic contaminations, besides real pixel noise from Planck cosmic microwave background radiation data. We rigorously test the efficiency of our estimator considering several plausible scenarios for residual non-Gaussianities in the foreground-cleaned Planck maps, with the aim of optimizing the training procedure of the Neural Network to discriminate between contaminations with primordial and secondary non-Gaussian signatures. We look for constraints of primordial local non-Gaussianity at large angular scales in the foreground-cleaned Planck maps. For the SMICA map we found fNL = 33 ± 23, at 1σ confidence level, in excellent agreement with the WMAP-9yr and Planck results. In addition, for the other three Planck maps we obtain similar constraints with values in the interval fNL ∈ [33, 41], concomitant with the fact that these maps manifest distinct features in reported analyses, like having different pixel noise intensities.

  18. Estimates of the Lightning NOx Profile in the Vicinity of the North Alabama Lightning Mapping Array

    NASA Technical Reports Server (NTRS)

    Koshak, William J.; Peterson, Harold S.; McCaul, Eugene W.; Blazar, Arastoo

    2010-01-01

    The NASA Marshall Space Flight Center Lightning Nitrogen Oxides Model (LNOM) is applied to August 2006 North Alabama Lightning Mapping Array (NALMA) data to estimate the (unmixed and otherwise environmentally unmodified) vertical source profile of lightning nitrogen oxides, NOx = NO + NO2. Data from the National Lightning Detection Network™ (NLDN) is also employed. This is part of a larger effort aimed at building a more realistic lightning NOx emissions inventory for use by the U.S. Environmental Protection Agency (EPA) Community Multiscale Air Quality (CMAQ) modeling system. Overall, special attention is given to several important lightning variables including: the frequency and geographical distribution of lightning in the vicinity of the NALMA network, lightning type (ground or cloud flash), lightning channel length, channel altitude, channel peak current, and the number of strokes per flash. Laboratory spark chamber results from the literature are used to convert 1-meter channel segments (that are located at a particular known altitude; i.e., air density) to NOx concentration. The resulting lightning NOx source profiles are discussed.

  19. Estimates of the Lightning NOx Profile in the Vicinity of the North Alabama Lightning Mapping Array

    NASA Technical Reports Server (NTRS)

    Koshak, William J.; Peterson, Harold

    2010-01-01

    The NASA Marshall Space Flight Center Lightning Nitrogen Oxides Model (LNOM) is applied to August 2006 North Alabama Lightning Mapping Array (LMA) data to estimate the raw (i.e., unmixed and otherwise environmentally unmodified) vertical profile of lightning nitrogen oxides, NOx = NO + NO2. This is part of a larger effort aimed at building a more realistic lightning NOx emissions inventory for use by the U.S. Environmental Protection Agency (EPA) Community Multiscale Air Quality (CMAQ) modeling system. Data from the National Lightning Detection Network™ (NLDN) is also employed. Overall, special attention is given to several important lightning variables including: the frequency and geographical distribution of lightning in the vicinity of the LMA network, lightning type (ground or cloud flash), lightning channel length, channel altitude, channel peak current, and the number of strokes per flash. Laboratory spark chamber results from the literature are used to convert 1-meter channel segments (that are located at a particular known altitude; i.e., air density) to NOx concentration. The resulting raw NOx profiles are discussed.

  20. Estimation and Mapping of Coastal Mangrove Biomass Using Both Passive and Active Remote Sensing Method

    NASA Astrophysics Data System (ADS)

    Yiqiong, L.; Lu, W.; Zhou, J.; Gan, W.; Cui, X.; Lin, G., Sr.

    2015-12-01

    Mangrove forests play an important role in the global carbon cycle, but carbon stocks in different mangrove forests are not easily measured at large scale. In this research, both active and passive remote sensing methods were used to estimate the aboveground biomass of dominant mangrove communities in Zhanjiang National Mangrove Nature Reserve in Guangdong, China. We set up a decision tree including spectral, texture, position and geometry indexes to achieve mangrove inter-species classification among five main species (Aegiceras corniculatum, Avicennia marina, Bruguiera gymnorrhiza, Kandelia candel, and Sonneratia apetala) using 5.8 m multispectral ZY-3 images. In addition, Lidar data were collected and used to obtain the canopy height of different mangrove species. Then, regression equations between the field-measured aboveground biomass and the canopy height deduced from Lidar data were established for these mangrove species. By combining these results, we were able to establish a relatively accurate method for differentiating mangrove species and mapping their aboveground biomass distribution at the estuary scale, which could be applied to mangrove forests in other regions.

  1. Flood Finder: Mobile-based automated water level estimation and mapping during floods

    NASA Astrophysics Data System (ADS)

    Pongsiriyaporn, B.; Jariyavajee, C.; Laoharawee, N.; Narkthong, N.; Pitichat, T.; Goldin, S. E.

    2014-02-01

    Every year, Southeast Asia faces numerous flooding disasters, resulting in very high human and economic loss. Responding to a sudden flood is difficult due to the lack of accurate and up-to-date information about the incoming water status. We have developed a mobile application called Flood Finder to solve this problem. Flood Finder allows smartphone users to measure, share and search for water level information at specified locations. The application uses image processing to compute the water level from a photo taken by users. The photo must be of a known reference object with a standard size. These water levels are more reliable and consistent than human estimates since they are derived from an algorithmic measuring function. Flood Finder uploads water level readings to the server, where they can be searched and mapped by other users via the mobile phone app or standard browsers. Given the widespread availability of smartphones in Asia, Flood Finder can provide more accurate and up-to-date information for better preparation for a flood disaster as well as life safety and property protection.
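
    Once a standard-size reference object is detected, the measuring function reduces to pixel-scale arithmetic. A hypothetical minimal version (the app's actual image processing pipeline is not described in the abstract, and a roughly fronto-parallel view is assumed):

```python
def water_level_m(object_height_m, object_width_m, visible_height_px, width_px):
    """Estimate the water level on a reference object of known size.

    The visible width gives the pixel scale; the visible height above the
    waterline then converts to metres, and the submerged part is the rest.
    """
    px_per_m = width_px / object_width_m
    visible_m = visible_height_px / px_per_m
    return object_height_m - visible_m

# A 1.2 m x 0.3 m marker: 150 px wide, 375 px visible above the water.
print(water_level_m(1.2, 0.3, 375, 150))   # 0.45 m
```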

  2. A stochastic approach to estimate the uncertainty of dose mapping caused by uncertainties in b-spline registration

    SciTech Connect

    Hub, Martina; Thieke, Christian; Kessler, Marc L.; Karger, Christian P.

    2012-04-15

    Purpose: In fractionated radiation therapy, image guidance with daily tomographic imaging is increasingly becoming clinical routine. In principle, this allows for daily computation of the delivered dose and for accumulation of these daily dose distributions to determine the actually delivered total dose to the patient. However, uncertainties in the mapping of the images can translate into errors of the accumulated total dose, depending on the dose gradient. In this work, an approach to estimate the uncertainty of mapping between medical images is proposed that identifies areas bearing a significant risk of inaccurate dose accumulation. Methods: This method accounts for the geometric uncertainty of image registration and the heterogeneity of the dose distribution, which is to be mapped. Its performance is demonstrated in the context of dose mapping based on b-spline registration. It is based on evaluation of the sensitivity of dose mapping to variations of the b-spline coefficients combined with evaluation of the sensitivity of the registration metric with respect to the variations of the coefficients. It was evaluated on patient data that was deformed using a breathing model, where the ground truth of the deformation, and hence the actual true dose mapping error, is known. Results: The proposed approach has the potential to distinguish areas of the image where dose mapping is likely to be accurate from other areas of the same image, where a larger uncertainty must be expected. Conclusions: An approach to identify areas where dose mapping is likely to be inaccurate was developed and implemented. This method was tested for dose mapping, but it may be applied in the context of other mapping tasks as well.

  3. Fractal-Based Lightning Channel Length Estimation from Convex-Hull Flash Areas for DC3 Lightning Mapping Array Data

    NASA Technical Reports Server (NTRS)

    Bruning, Eric C.; Thomas, Ronald J.; Krehbiel, Paul R.; Rison, William; Carey, Larry D.; Koshak, William; Peterson, Harold; MacGorman, Donald R.

    2013-01-01

    We will use VHF Lightning Mapping Array data to estimate NOx per flash and per unit channel length, including the vertical distribution of channel length. What's the best way to find channel length from VHF sources? This paper presents the rationale for the fractal method, which is closely related to the box-covering method.
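
    For concreteness, a box-counting sketch: count the boxes N(s) occupied by the VHF sources over a range of box sizes s and fit the slope of log N(s) against log s. One crude proxy for channel length is then N(s)·s at the finest reliable scale. The points below are synthetic (a straight line, expected dimension near 1), not LMA data:

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Fractal (box-counting) dimension of a 3D point set, e.g. VHF sources.

    Counts occupied boxes N(s) at each box size s and fits
    log N(s) = -D log s + c.
    """
    points = np.asarray(points, dtype=float)
    counts = []
    for s in scales:
        boxes = np.floor(points / s).astype(int)
        counts.append(len(np.unique(boxes, axis=0)))
    D, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return -D

# Points along a straight line should give D close to 1.
t = np.random.default_rng(4).uniform(0, 1000, 5000)
line = np.stack([t, 2 * t, 0.5 * t], axis=1)
print(box_counting_dimension(line, scales=[10, 20, 40, 80, 160]))
```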

  4. Fusion of Kinect depth data with trifocal disparity estimation for near real-time high quality depth maps generation

    NASA Astrophysics Data System (ADS)

    Boisson, Guillaume; Kerbiriou, Paul; Drazic, Valter; Bureller, Olivier; Sabater, Neus; Schubert, Arno

    2014-03-01

    Generating depth maps along with video streams is valuable for Cinema and Television production. Thanks to the improvements of depth acquisition systems, the challenge of fusion between depth sensing and disparity estimation is widely investigated in computer vision. This paper presents a new framework for generating depth maps from a rig made of a professional camera with two satellite cameras and a Kinect device. A new disparity-based calibration method is proposed so that registered Kinect depth samples become perfectly consistent with disparities estimated between rectified views. Also, a new hierarchical fusion approach is proposed for combining on-the-fly depth sensing and disparity estimation in order to circumvent their respective weaknesses. Depth is determined by minimizing a global energy criterion that takes into account the matching reliability and the consistency with the Kinect input. The depth maps thus generated are relevant both in uniform and textured areas, without holes due to occlusions or structured light shadows. Our GPU implementation reaches 20fps for generating quarter-pel accurate HD720p depth maps along with the main view, which is close to real-time performance for video applications. The estimated depth is high quality and suitable for 3D reconstruction or virtual view synthesis.

  5. Wind resource estimation and mapping at the National Renewable Energy Laboratory

    SciTech Connect

    Schwartz, M.

    1999-07-01

    The National Renewable Energy Laboratory (NREL) has developed an automated technique for wind resource mapping to aid in the acceleration of wind energy deployment. The new automated mapping system was developed with the following two primary goals: (1) to produce a more consistent and detailed analysis of the wind resource for a variety of physiographic settings, particularly in areas of complex terrain; and (2) to generate high quality map products on a timely basis. Using computer mapping techniques reduces the time it takes to produce a wind map that reflects a consistent analysis of the distribution of the wind resource throughout the region of interest. NREL's mapping system uses commercially available geographic information system software packages. Regional wind resource maps using this new system have been produced for areas of the US, Mexico, Chile, Indonesia, and China. Countrywide wind resource assessments are under way for the Philippines, the Dominican Republic, and Mongolia. Regional assessments in Argentina and Russia are scheduled to begin soon.

  6. Hyper-resolution aquifer map of North America: estimating alluvial aquifer thickness, vertical structure, and conductivities

    NASA Astrophysics Data System (ADS)

    de Graaf, I.; Condon, L. E.; Maxwell, R. M.

    2016-12-01

    The lack of robust, spatially distributed data on the subsurface is a major limitation for complex and realistic groundwater dynamics within large-scale land surface, hydrologic, and climate models. Improving these inputs will enable a more realistic physical representation of the groundwater system and is especially needed as these models move to higher resolutions. Here, we present a new parameterization of aquifer stratification and a three-dimensional input dataset over continental North America. We estimated the thickness of alluvial aquifers for North America at 250m2 resolution based on terrain attributes, such as curvature, and calibrated this with U.S. groundwater studies. Also, the spatial distribution, thickness, and depth of confining layers are estimated using information from U.S. groundwater studies. A dataset of aquifer thickness was not previously available at this level of detail, over this extent. The newly derived aquifer map is used as an input to the integrated physical hydrological model ParFlow. Two smaller domains, representing different hydrogeological settings (i.e. parts of the Central Valley and High Plains), were selected for a sensitivity analysis. In this sensitivity analysis, we perturbed model parameter values and vertical and horizontal resolution under steady-state forcing (precipitation - evaporation). Specifically, the model was run with various conductivities, using global scale data of Gleeson et al. (2014) and regional scale data of U.S.G.S. groundwater studies, and various vertical and horizontal (i.e. 1km2, 250m2) discretizations. Simulated groundwater depths and streamflow were evaluated against observations. The results show that model performance improves with increased horizontal and vertical discretization, and that variation in conductivity has the highest impact on the spatial distribution of groundwater depth and simulated streamflow. In future work, human water demands will be added to the model to study the sensitivity of

  7. Improving parenchyma segmentation by simultaneous estimation of tissue property T1 map and group-wise registration of inversion recovery MR breast images.

    PubMed

    Xing, Ye; Xue, Zhong; Englander, Sarah; Schnall, Mitchell; Shen, Dinggang

    2008-01-01

    The parenchyma tissue in the breast is strongly related to predictive biomarkers of breast cancer. To better segment parenchyma, we perform segmentation on an estimated tissue property (T1) map. To improve the estimation of the tissue property (T1) that underlies parenchyma segmentation, we present an integrated algorithm for simultaneous T1 map estimation, T1 map based parenchyma segmentation and group-wise registration on a series of inversion recovery magnetic resonance (MR) breast images. The advantage of using this integrated algorithm is that the simultaneous T1 map estimation (E-step) and group-wise registration (R-step) could benefit each other and jointly improve parenchyma segmentation. In particular, in the E-step, T1 map based segmentation could help perform an edge-preserving smoothing on the tentatively estimated noisy T1 map, and could also help provide tissue probability maps to be robustly registered in the R-step. Meanwhile, the improved estimation of the T1 map could help segment parenchyma in a more accurate way. In the R-step, for robust registration, the group-wise registration is performed on the tissue probability maps produced in the E-step, rather than the original inversion recovery MR images, since tissue probability maps are the intrinsic tissue property which is invariant to the use of different imaging parameters. The better alignment of images achieved in the R-step can help improve T1 map estimation and indirectly the T1 map based parenchyma segmentation. By iteratively performing the E-step and R-step, we can simultaneously obtain better results for T1 map estimation, T1 map based segmentation, group-wise registration, and finally parenchyma segmentation.
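
    The E-step rests on voxel-wise T1 estimation from the inversion recovery signal. A sketch of that single-voxel fit, using the magnitude model |M0(1 - 2exp(-TI/T1))| with invented acquisition times and ignoring the segmentation/registration coupling:

```python
import numpy as np
from scipy.optimize import curve_fit

def ir_signal(ti, m0, t1):
    """Magnitude inversion-recovery signal for one voxel."""
    return np.abs(m0 * (1 - 2 * np.exp(-ti / t1)))

rng = np.random.default_rng(5)
ti = np.array([0.1, 0.3, 0.6, 1.0, 1.8, 3.0])        # inversion times (s)
y = ir_signal(ti, 1000.0, 1.2) + rng.normal(0, 5.0, ti.size)

(m0, t1), _ = curve_fit(ir_signal, ti, y, p0=[900.0, 1.0])
print(f"T1 = {t1:.3f} s")                            # ~1.2 s
```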

  8. A New Way to estimate volcanic hazards and present multi-hazard maps

    NASA Astrophysics Data System (ADS)

    Germa, A.; Connor, C.; Connor, L.; Malservisi, R.

    2013-12-01

    To understand long term hazards in distributed volcanic systems, we are developing a research framework to relate statistical models of spatial intensity (vents per unit area), volume intensity (erupted volume per unit area) and volume-flux intensity (erupted volume per unit time and area) to conceptual models of the subsurface processes of magma storage and transport. The distribution of mapped vents and volumes erupted from these vents are used to develop nonparametric (kernel density) statistical models for distributed volcanic systems. Using radiometric age determinations of vents and erupted units, we then estimate the recurrence rate of volcanism and associated uncertainty using a Monte Carlo approach. The outputs of Monte Carlo simulation of recurrence rates allow us to produce dynamic statistical maps that reveal the spatio-temporal evolution of volcanic activity within the field studied. To further improve our research framework, we have implemented solutions to differential equations governing magma production and transport to model subsurface processes of magma ascent. This behavior can be statistically approximated by modeling the flow of a viscous fluid within a homogeneous porous medium using Darcy's law with variable conductivity dependent on flow rate and lithospheric stresses (Bonafede and Boschi, 1992; Bonafede and Cenni, 1998). Using this continuous formulation, additional complexities that influence magma migration such as complex sources, magma generation, magma rheology, tectonic stresses, and/or anisotropic/heterogeneous behavior of the porous medium, can be simply implemented by varying the choice of source and conductivity parameters. In this way we can explore physical processes that may give rise to heterogeneous flux in numerical models and relate these outputs to observed vent distributions and volume flux at the surface. Overall, data extracted from our research framework should link statistical models of volcano distribution with the

  9. Estimation of the parameter covariance matrix for a one-compartment cardiac perfusion model estimated from a dynamic sequence reconstructed using MAP iterative reconstruction algorithms

    SciTech Connect

    Gullberg, Grant T.; Huesman, Ronald H.; Reutter, Bryan W.; Qi, Jinyi; Ghosh Roy, Dilip N.

    2004-01-01

    In dynamic cardiac SPECT estimates of kinetic parameters of a one-compartment perfusion model are usually obtained in a two step process: 1) first a MAP iterative algorithm, which properly models the Poisson statistics and the physics of the data acquisition, reconstructs a sequence of dynamic reconstructions, 2) then kinetic parameters are estimated from time activity curves generated from the dynamic reconstructions. This paper provides a method for calculating the covariance matrix of the kinetic parameters, which are determined using weighted least squares fitting that incorporates the estimated variance and covariance of the dynamic reconstructions. For each transaxial slice, sets of sequential tomographic projections are reconstructed into a sequence of transaxial reconstructions, using for each reconstruction in the time sequence an iterative MAP reconstruction to calculate the maximum a posteriori reconstructed estimate. Time-activity curves for a sum of activity in a blood region inside the left ventricle and a sum in a cardiac tissue region are generated. Also, curves for the variance of the two estimates of the sum and for the covariance between the two ROI estimates are generated as a function of time at convergence using an expression obtained from the fixed-point solution of the statistical error of the reconstruction. A one-compartment model is fit to the tissue activity curves assuming a noisy blood input function to give weighted least squares estimates of blood volume fraction, wash-in and wash-out rate constants specifying the kinetics of 99mTc-teboroxime for the left ventricular myocardium. Numerical methods are used to calculate the second derivative of the chi-square criterion to obtain estimates of the covariance matrix for the weighted least square parameter estimates. Even though the method requires one matrix inverse for each time interval of tomographic acquisition, efficient estimates of the tissue kinetic parameters in a dynamic cardiac SPECT study can be obtained with

  10. Scatterer size and concentration estimation technique based on a 3D acoustic impedance map from histologic sections

    NASA Astrophysics Data System (ADS)

    Mamou, Jonathan; Oelze, Michael L.; O'Brien, William D.; Zachary, James F.

    2004-05-01

    Accurate estimates of scatterer parameters (size and acoustic concentration) are beneficial adjuncts to characterize disease from ultrasonic backscatter measurements. An estimation technique was developed to obtain parameter estimates from the Fourier transform of the spatial autocorrelation function (SAF). A 3D impedance map (3DZM) is used to obtain the SAF of tissue. 3DZMs are obtained by aligning digitized light microscope images from histologic preparations of tissue. Estimates were obtained for simulated 3DZMs containing spherical scatterers randomly located: relative errors were less than 3%. Estimates were also obtained from a rat fibroadenoma and a 4T1 mouse mammary tumor (MMT). Tissues were fixed (10% neutral-buffered formalin), embedded in paraffin, serially sectioned and stained with H&E. 3DZM results were compared to estimates obtained independently from ultrasonic backscatter measurements. For the fibroadenoma and MMT, average scatterer diameters were 91 and 31.5 μm, respectively. Ultrasonic measurements yielded average scatterer diameters of 105 and 30 μm, respectively. The 3DZM estimation scheme showed results similar to those obtained by the independent ultrasonic measurements. The 3D impedance maps show promise as a powerful tool to characterize ultrasonic scattering sites of tissue. [Work supported by the University of Illinois Research Board.]

  11. A priori and a posteriori investigations for developing large eddy simulations of multi-species turbulent mixing under high-pressure conditions

    SciTech Connect

    Borghesi, Giulio; Bellan, Josette

    2015-03-15

    , and the filtered species mass fluxes. Improved models were developed based on a scale-similarity approach and were found to perform considerably better than the classical ones. These improved models were also assessed in an a posteriori study. Different combinations of the standard models and the improved ones were tested. At the relatively small Reynolds numbers achievable in DNS and at the relatively small filter widths used here, the standard models for the filtered pressure, the filtered heat flux, and the filtered species fluxes were found to yield accurate results for the morphology of the large-scale structures present in the flow. Analysis of the temporal evolution of several volume-averaged quantities representative of the mixing layer growth, and of the cross-stream variation of homogeneous-plane averages and second-order correlations, as well as of visualizations, indicated that the models performed equivalently for the conditions of the simulations. The expectation is that at the much larger Reynolds numbers and much larger filter widths used in practical applications, the improved models will have much more accurate performance than the standard one.

  12. Efficient estimation of thermodynamic state incorporating Bayesian model order selection

    NASA Astrophysics Data System (ADS)

    Lanterman, Aaron D.; Cooper, Matthew L.; Miller, Michael I.

    1999-08-01

    The recognition of targets in infrared scenes is complicated by the wide variety of appearances associated with different thermodynamic states. We represent the variability in the thermodynamic signatures of targets via an expansion in terms of 'eigentanks' derived from a principal component analysis performed over the target's surface. Employing a Poisson sensor likelihood, or equivalently a likelihood based on Csiszar's I-divergence, a natural discrepancy measure for nonnegative images, yields a coupled set of nonlinear equations which must be solved to compute maximum a posteriori estimates of the thermodynamic expansion coefficients. We propose a weighted least-squares approximation to the Poisson loglikelihood for which the MAP estimates are solutions of linear equations. Bayesian model order estimation techniques are employed to choose the number of coefficients; this prevents target models with numerous eigentanks in their representation from having an unfair advantage over simple target models. The Bayesian integral is approximated by Schwarz's application of Laplace's method of integration; this technique is closely related to Rissanen's minimum description length and Wallace's minimum message length criteria. Our implementation of these techniques on Silicon Graphics computers exploits the flexible nature of their rendering engines. The implementation is illustrated in estimating the orientation of a tank and the optimum number of representative eigentanks for real data provided by the U.S. Army Night Vision and Electronic Sensors Directorate.
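
    The weighted least-squares approximation is what makes the MAP estimates solutions of linear equations: approximating the variance of each Poisson count by the count itself turns the negative log-likelihood into a quadratic in the expansion coefficients. A sketch with a random nonnegative basis standing in for the eigentank images (the Bayesian model order selection step is not shown):

```python
import numpy as np

def wls_poisson_fit(A, y):
    """Weighted least-squares approximation to the Poisson log-likelihood.

    For y_i ~ Poisson((A c)_i), approximating var(y_i) by y_i turns the
    negative log-likelihood into a quadratic in the coefficients c, so
    the estimate solves linear normal equations.
    """
    w = 1.0 / np.maximum(y, 1.0)               # inverse-variance weights
    AtW = A.T * w
    return np.linalg.solve(AtW @ A, AtW @ y)

rng = np.random.default_rng(6)
A = rng.uniform(0.0, 1.0, size=(500, 3))       # three basis "images"
c_true = np.array([40.0, 10.0, 5.0])
y = rng.poisson(A @ c_true).astype(float)
print(wls_poisson_fit(A, y))                   # close to c_true
```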

  13. Mapping Transient Hyperventilation Induced Alterations with Estimates of the Multi-Scale Dynamics of BOLD Signal

    PubMed Central

    Kiviniemi, Vesa; Remes, Jukka; Starck, Tuomo; Nikkinen, Juha; Haapea, Marianne; Silven, Olli; Tervonen, Osmo

    2009-01-01

    Temporal blood oxygen level dependent (BOLD) contrast signals in functional MRI during rest may be characterized by power spectral distribution (PSD) trends of the form 1/f^α. Trends with 1/f characteristics comprise fractal properties with repeating oscillation patterns in multiple time scales. Estimates of the fractal properties enable the quantification of phenomena that may otherwise be difficult to measure, such as transient, non-linear changes. In this study it was hypothesized that the fractal metrics of 1/f BOLD signal trends can map changes related to dynamic, multi-scale alterations in cerebral blood flow (CBF) after a transient hyperventilation challenge. Twenty-three normal adults were imaged in a resting-state before and after hyperventilation. Different variables (1/f trend constant α, fractal dimension Df, and Hurst exponent H) characterizing the trends were measured from BOLD signals. The results show that fractal metrics of the BOLD signal follow the fractional Gaussian noise model, even during the dynamic CBF change that follows hyperventilation. The most dominant effect on the fractal metrics was detected in grey matter, in line with previous hyperventilation vaso-reactivity studies. α was also able to differentiate blood vessels from grey matter changes. Df was most sensitive to grey matter. H correlated with default mode network areas before hyperventilation but this pattern vanished after hyperventilation due to a global increase in H. In the future, resting-state fMRI combined with fractal metrics of the BOLD signal may be used for analyzing multi-scale alterations of cerebral blood flow. PMID:19636388
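
    The exponent α is typically estimated from a straight-line fit in log-log coordinates; for fractional Gaussian noise it is commonly related to the Hurst exponent by α = 2H - 1. A sketch using a Welch periodogram on synthetic signals (not BOLD data):

```python
import numpy as np
from scipy.signal import welch

def spectral_exponent(x, fs=0.5):
    """Estimate alpha in PSD(f) ~ 1/f^alpha for a time series
    by a straight-line fit in log-log coordinates."""
    f, pxx = welch(x, fs=fs, nperseg=256)
    keep = f > 0                               # drop the DC bin
    slope, _ = np.polyfit(np.log(f[keep]), np.log(pxx[keep]), 1)
    return -slope

# White noise has alpha ~ 0; cumulatively summing it steepens the spectrum.
rng = np.random.default_rng(7)
x = rng.normal(size=4096)
print(spectral_exponent(x))              # ~0
print(spectral_exponent(np.cumsum(x)))   # ~2 (random walk)
```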

  14. Terrestrial Laser Scanning to Estimate Hydraulic Resistance for Floodplain Mapping and Hydraulic Studies

    NASA Astrophysics Data System (ADS)

    Minear, J. T.

    2016-12-01

    One of the primary unknown parameters in the hydraulic analyses used for floodplain studies is hydraulic resistance. A better understanding of hydraulic resistance would be highly useful for understanding future floods, improving higher dimensional flood modeling (2D+), as well as correctly calculating flood discharges for floods that are not directly measured. The relationship of measured floodplain parameters to hydraulic resistance is difficult to objectively quantify in the field, partially because resistance occurs at a variety of scales (i.e. grain, unit and reach) and because individual resistance elements, such as trees, grass and sediment grains, are inherently difficult to measure. Terrestrial Laser Scanning (TLS, also known as Ground-based LiDAR) has shown great ability to rapidly collect high-resolution topographic datasets for geomorphic and hydrodynamic studies and can be used to objectively quantify hydraulic resistance parameters in the field. Because of its speed in data collection and remote sensing ability, TLS can be used both for pre-flood and post-flood studies that require relatively quick response in relatively dangerous settings. Using datasets collected from experimental flume runs as well as field studies of several rivers in California and post-flood rivers in Colorado, this study evaluates the use of TLS to estimate hydraulic resistance, particularly from grain-scale elements. Experimental laboratory runs with bed grain size held constant but with varying grain-scale protrusion as measured by TLS have shown a nearly twenty-fold variation in measured hydraulic resistance. The ideal application of these TLS datasets would be in combination with vegetation and bedform element resistance and aerial lidar to extrapolate resistance measurements to much larger areas for floodplain mapping and hydraulic studies.

  15. MAP reconstruction for Fourier rebinned TOF-PET data.

    PubMed

    Bai, Bing; Lin, Yanguang; Zhu, Wentao; Ren, Ran; Li, Quanzheng; Dahlbom, Magnus; DiFilippo, Frank; Leahy, Richard M

    2014-02-21

    Time-of-flight (TOF) information improves the signal-to-noise ratio in positron emission tomography (PET). The computation cost in processing TOF-PET sinograms is substantially higher than for nonTOF data because the data in each line of response is divided among multiple TOF bins. This additional cost has motivated research into methods for rebinning TOF data into lower dimensional representations that exploit redundancies inherent in TOF data. We have previously developed approximate Fourier methods that rebin TOF data into either three-dimensional (3D) nonTOF or 2D nonTOF formats. We refer to these methods respectively as FORET-3D and FORET-2D. Here we describe maximum a posteriori (MAP) estimators for use with FORET rebinned data. We first derive approximate expressions for the variance of the rebinned data. We then use these results to rescale the data so that the variance and mean are approximately equal allowing us to use the Poisson likelihood model for MAP reconstruction. MAP reconstruction from these rebinned data uses a system matrix in which the detector response model accounts for the effects of rebinning. Using these methods we compare the performance of FORET-2D and 3D with TOF and nonTOF reconstructions using phantom and clinical data. Our phantom results show a small loss in contrast recovery at matched noise levels using FORET compared to reconstruction from the original TOF data. Clinical examples show FORET images that are qualitatively similar to those obtained from the original TOF-PET data but with a small increase in variance at matched resolution. Reconstruction time is reduced by a factor of 5 and 30 using FORET3D+MAP and FORET2D+MAP respectively compared to 3D TOF MAP, which makes these methods attractive for clinical applications.

  16. State estimation in large-scale open channel networks using sequential Monte Carlo methods: Optimal sampling importance resampling and implicit particle filters

    NASA Astrophysics Data System (ADS)

    Rafiee, Mohammad; Barrau, Axel; Bayen, Alexandre M.

    2013-06-01

    This article investigates the performance of Monte Carlo-based estimation methods for estimation of flow state in large-scale open channel networks. After constructing a state space model of the flow based on the Saint-Venant equations, we implement the optimal sampling importance resampling filter to perform state estimation in a case in which measurements are available at every time step. Considering a case in which measurements become available intermittently, a random-map implementation of the implicit particle filter is applied to estimate the state trajectory in the interval between the measurements. Finally, some heuristics are proposed, which are shown to improve the estimation results and lower the computational cost. In the first heuristic, considering the case in which measurements are available at every time step, we apply the implicit particle filter over time intervals of a desired size while incorporating all the available measurements over the corresponding time interval. As a second heuristic method, we introduce a maximum a posteriori (MAP) method, which does not require sampling. It will be seen, through implementation, that the MAP method provides more accurate results in the case of our application while having a smaller computational cost. All estimation methods are tested on a network of 19 tidally forced subchannels and 1 reservoir, Clifton Court Forebay, in the Sacramento-San Joaquin Delta in California, and numerical results are presented.
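
    A sketch of the basic SIR recursion (propagate, weight by the measurement likelihood, resample), with a scalar AR(1) toy model standing in for the discretized Saint-Venant state; the paper's optimal-proposal and implicit-particle variants are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(8)

def sir_step(particles, y, propagate, likelihood):
    """One sampling importance resampling step: propagate, weight by the
    measurement likelihood, then resample in proportion to the weights."""
    particles = propagate(particles)
    w = likelihood(y, particles)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Toy scalar 'stage' dynamics and a Gaussian measurement model.
propagate = lambda x: 0.95 * x + rng.normal(0, 0.2, x.shape)
likelihood = lambda y, x: np.exp(-0.5 * ((y - x) / 0.5) ** 2)

true_x, particles = 2.0, rng.normal(0, 1, 1000)
for _ in range(50):
    true_x = 0.95 * true_x + rng.normal(0, 0.2)
    y = true_x + rng.normal(0, 0.5)
    particles = sir_step(particles, y, propagate, likelihood)

print(true_x, particles.mean())   # the posterior mean tracks the true state
```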

  17. A genetic map and germplasm diversity estimation of Mangifera indica (mango) with SNPs

    USDA-ARS?s Scientific Manuscript database

    Mango (Mangifera indica) is often referred to as the “King of Fruits”. As the first steps in developing a mango genomics project, we genotyped 582 individuals comprising six mapping populations with 1054 SNP markers. The resulting consensus map had 20 linkage groups defined by 726 SNP markers with...

  18. Suitability estimation for urban development using multi-hazard assessment map.

    PubMed

    Bathrellos, George D; Skilodimou, Hariklia D; Chousianitis, Konstantinos; Youssef, Ahmed M; Pradhan, Biswajeet

    2017-01-01

    Preparation of natural hazard maps is essential for urban development. The main scope of this study is to synthesize natural hazard maps in a single multi-hazard map and thus to identify suitable areas for urban development. The study area is the drainage basin of Xerias stream (Northeastern Peloponnesus, Greece), which has frequently suffered damage from landslides, floods and earthquakes. Landslide, flood and seismic hazard assessment maps were separately generated and further combined by applying the Analytical Hierarchy Process (AHP) and utilizing a Geographical Information System (GIS) to produce a multi-hazard map. This map represents the potential suitability map for urban development in the study area and was evaluated by means of uncertainty analysis. The outcome revealed that the most suitable areas are distributed in the southern part of the study area, where the landslide, flood and seismic hazards are at low and very low levels. The uncertainty analysis shows small differences in the spatial distribution of the suitability zones. The produced suitability map for urban development shows satisfactory agreement between the suitability zones and the landslide and flood phenomena that have affected the study area. Finally, 40% of the existing urban pattern boundaries and 60% of the current road network are located within the limits of low and very low suitability zones.
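
    The AHP step combines the individual hazard maps with weights taken from the principal eigenvector of a pairwise comparison matrix. A sketch with an invented comparison matrix (the study's actual judgments are not given in the abstract), including Saaty's consistency ratio:

```python
import numpy as np

def ahp_weights(P):
    """AHP priority weights: principal eigenvector of the pairwise
    comparison matrix, plus Saaty's consistency ratio."""
    P = np.asarray(P, dtype=float)
    vals, vecs = np.linalg.eig(P)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = len(P)
    ci = (vals[k].real - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index
    return w, ci / ri

# Pairwise importance of landslide vs flood vs seismic hazard (illustrative).
P = [[1, 2, 3],
     [1/2, 1, 2],
     [1/3, 1/2, 1]]
w, cr = ahp_weights(P)
print("weights:", np.round(w, 3), "CR:", round(cr, 3))  # CR < 0.1 is consistent
# multi_hazard = w[0]*landslide_map + w[1]*flood_map + w[2]*seismic_map
```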

  19. Model Estimation and Selection towards Unconstrained Real-Time Tracking and Mapping.

    PubMed

    Gauglitz, Steffen; Sweeney, Chris; Ventura, Jonathan; Turk, Matthew; Höllerer, Tobias

    2014-06-01

    We present an approach and prototype implementation to initialization-free real-time tracking and mapping that supports any type of camera motion in 3D environments, that is, parallax-inducing as well as rotation-only motions. Our approach effectively behaves like a keyframe-based Simultaneous Localization and Mapping system or a panorama tracking and mapping system, depending on the camera movement. It seamlessly switches between the two modes and is thus able to track and map through arbitrary sequences of parallax-inducing and rotation-only camera movements. The system integrates both model-based and model-free tracking, automatically choosing between the two depending on the situation, and subsequently uses the "Geometric Robust Information Criterion" to decide whether the current camera motion can best be represented as a parallax-inducing motion or a rotation-only motion. It continues to collect and map data after tracking failure by creating separate tracks which are later merged if they are found to overlap. This is in contrast to most existing tracking and mapping systems, which suspend tracking and mapping and thus discard valuable data until relocalization with respect to the initial map is successful. We tested our prototype implementation on a variety of video sequences, successfully tracking through different camera motions and fully automatically building combinations of panoramas and 3D structure.

  20. Estimating missing hourly climatic data using artificial neural network for energy balance based ET mapping applications

    USDA-ARS's Scientific Manuscript database

    Remote sensing based evapotranspiration (ET) mapping has become an important tool for water resources management at a regional scale. Accurate hourly climatic data and reference ET are crucial input for successfully implementing remote sensing based ET models such as Mapping ET with internal calibra...

  1. Chemical species separation with simultaneous estimation of field map and T2* using a k-space formulation.

    PubMed

    Honorato, Jose Luis; Parot, Vicente; Tejos, Cristian; Uribe, Sergio; Irarrazaval, Pablo

    2012-08-01

    Chemical species separation techniques in image space are prone to several distortions, including signal accentuation at borders and geometric warping from field inhomogeneity. These errors come from neglecting intra-echo time variations. In this work, we present a new approach for chemical species separation in MRI with simultaneous estimation of the field map and T2* decay, formulated entirely in k-space. In this approach, the time map is used to model the phase accrual from off-resonance precession and also the amplitude decay due to T2*. Our technique fits the signal model directly in k-space to the acquired data, minimizing the l2-norm with an interior-point algorithm. Standard two-dimensional gradient echo sequences in the thighs and head were used to demonstrate the technique. With this approach, we obtained excellent estimates of the species, the field inhomogeneity, and the T2* decay images. The results do not suffer from geometric distortions caused by the chemical shift or the field inhomogeneity. Importantly, as the T2* map is well positioned, the species signal at borders is correctly estimated. Considering intra-echo time variations in a complete signal model in k-space for separating species yields superior estimation of the variables of interest when compared to existing methods.
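
    As a much-simplified single-voxel analogue of such a joint fit (the paper fits the full k-space model across the image), the sketch below estimates water and fat amplitudes, a field-map frequency, and R2* = 1/T2* from complex multi-echo samples by nonlinear least squares; the echo times, fat shift, and noise level are assumed for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

df_fat = -440.0                    # fat chemical shift in Hz (assumed, ~3 T)
t = np.linspace(0.001, 0.012, 8)   # sample times in seconds (assumed)

def model(p, t):
    rw, rf, psi, r2s = p           # water, fat, field map (Hz), R2* (1/s)
    return (rw + rf * np.exp(2j * np.pi * df_fat * t)) \
           * np.exp(2j * np.pi * psi * t) * np.exp(-r2s * t)

def residuals(p, t, s):
    e = model(p, t) - s
    return np.concatenate([e.real, e.imag])   # least_squares wants real residuals

# Synthetic "acquired" data with known ground truth.
rng = np.random.default_rng(2)
p_true = [1.0, 0.4, 30.0, 50.0]
s = model(p_true, t) + 0.01 * (rng.normal(size=t.size) + 1j * rng.normal(size=t.size))

fit = least_squares(residuals, x0=[0.5, 0.5, 0.0, 20.0], args=(t, s))
print("water, fat, field map (Hz), R2* (1/s):", np.round(fit.x, 2))
```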

  2. COSMIC MICROWAVE BACKGROUND POLARIZATION AND TEMPERATURE POWER SPECTRA ESTIMATION USING LINEAR COMBINATION OF WMAP 5 YEAR MAPS

    SciTech Connect

    Samal, Pramoda Kumar; Jain, Pankaj; Saha, Rajib; Prunet, Simon; Souradeep, Tarun

    2010-05-01

    We estimate cosmic microwave background (CMB) polarization and temperature power spectra using Wilkinson Microwave Anisotropy Probe (WMAP) 5 year foreground contaminated maps. The power spectrum is estimated by using a model-independent method that does not directly utilize either the diffuse foreground templates or the detector noise model. The method essentially consists of two steps: (1) removal of diffuse foreground contamination by forming linear combinations of individual maps in harmonic space and (2) cross-correlation of foreground-cleaned maps to minimize detector noise bias. For the temperature power spectrum we also estimate and subtract residual unresolved point source contamination in the cross-power spectrum using the point source model provided by the WMAP science team. Our TT, TE, and EE power spectra are in good agreement with the published results of the WMAP science team. We perform detailed numerical simulations to test for bias in our procedure. We find that the bias is small in almost all cases. A negative bias at low l in the TT power spectrum has been pointed out in an earlier publication. We find that the bias-corrected quadrupole power l(l+1)C_l/2π is 532 μK², approximately 2.5 times the estimate (213.4 μK²) made by the WMAP team.
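
    Step (1) is an internal-linear-combination idea: choose per-channel weights that sum to one, preserving the CMB (which has the same amplitude in thermodynamic units in every channel), while minimizing the variance of the combined map. A minimal sketch with synthetic channel maps:

```python
import numpy as np

def ilc_weights(cov):
    """Variance-minimizing weights subject to sum(w) = 1."""
    e = np.ones(cov.shape[0])        # CMB response vector
    cinv_e = np.linalg.solve(cov, e)
    return cinv_e / (e @ cinv_e)

# Five hypothetical frequency channels; rows = channels, columns = pixels
# (or harmonic coefficients, for a harmonic-space cleaning).
rng = np.random.default_rng(3)
cmb = rng.normal(size=10000)
foreground = np.outer(rng.uniform(0.5, 2.0, 5), rng.normal(size=10000))
maps = cmb + foreground + 0.2 * rng.normal(size=(5, 10000))

w = ilc_weights(np.cov(maps))
cleaned = w @ maps                   # foreground-suppressed map
print("weights:", np.round(w, 3), "sum:", w.sum())
```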

  3. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied in ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging, used to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images, introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation, namely the nonlinear dependence of phase retardation and birefringence on SNR, was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurements was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a-posteriori (MAP) estimator and demonstrated quantitative birefringence imaging [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and
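
    Operationally, such an estimator amounts to a table lookup: for each measured (retardation, SNR) pair, choose the candidate true retardation that maximizes the pre-computed posterior. The sketch below substitutes a simple Gaussian for the Monte-Carlo-derived PDF; the grid, the noise scaling with SNR, and the optional prior are assumptions for illustration only.

```python
import numpy as np

b_grid = np.linspace(0.0, 1.0, 501)   # candidate true retardations (normalised)

def likelihood(m, b, snr):
    """Stand-in for the MC pre-computed PDF p(measured m | true b, SNR)."""
    sigma = 0.5 / snr                 # assumed: noise shrinks with SNR
    return np.exp(-0.5 * ((m - b) / sigma) ** 2)

def map_estimate(m, snr, prior=None):
    post = likelihood(m, b_grid, snr)
    if prior is not None:             # optional prior over the true value
        post = post * prior
    return b_grid[np.argmax(post)]    # MAP estimate: mode of the posterior

print(map_estimate(m=0.42, snr=15.0))
```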

  4. Searching for optimal setting conditions in technological processes using parametric estimation models and neural network mapping approach: a tutorial.

    PubMed

    Fjodorova, Natalja; Novič, Marjana

    2015-09-03

    Engineering optimization is a practical goal in manufacturing and service industries. In this tutorial we present the concept of traditional parametric estimation models (Factorial Design (FD) and Central Composite Design (CCD)) for finding optimal setting parameters of technological processes. We then describe the 2D mapping method based on Auto-Associative Neural Networks (ANN), particularly the Feed-Forward Bottleneck Neural Network (FFBN NN), in comparison with the traditional methods. The FFBN NN mapping technique enables visualization of all optimal solutions of the considered processes by projecting both input and output parameters into the same coordinates of a 2D map, which supports a more efficient way of improving the performance of existing systems. The two methods were compared on the optimization of solder paste printing processes as well as the optimization of cheese properties. Applying both methods enables a double check, which increases the reliability of the selected optima or specification limits.
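
    The core mechanism is a bottleneck autoencoder: input settings and output quality measures are concatenated and reconstructed through a 2-neuron hidden layer whose activations provide the 2D map coordinates. A minimal scikit-learn sketch; the toy process data and layer sizes are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(4)
X_in = rng.uniform(size=(200, 4))                                # setting parameters
X_out = np.c_[X_in[:, 0] * X_in[:, 1], X_in[:, 2] + X_in[:, 3]]  # toy quality responses
X = MinMaxScaler().fit_transform(np.c_[X_in, X_out])             # inputs + outputs together

# Autoencoder with a 2-neuron bottleneck: learn to reconstruct X from X.
ae = MLPRegressor(hidden_layer_sizes=(8, 2, 8), activation="tanh",
                  max_iter=5000, random_state=0)
ae.fit(X, X)

def encode(ae, X):
    """Propagate to the bottleneck layer to get 2D map coordinates."""
    a = X
    for W, b in zip(ae.coefs_[:2], ae.intercepts_[:2]):
        a = np.tanh(a @ W + b)
    return a

coords_2d = encode(ae, X)
print(coords_2d[:5])   # each sample's position on the 2D map
```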

  5. G6PD Deficiency Prevalence and Estimates of Affected Populations in Malaria Endemic Countries: A Geostatistical Model-Based Map

    PubMed Central

    Howes, Rosalind E.; Piel, Frédéric B.; Patil, Anand P.; Nyangiri, Oscar A.; Gething, Peter W.; Dewi, Mewahyu; Hogg, Mariana M.; Battle, Katherine E.; Padilla, Carmencita D.; Baird, J. Kevin; Hay, Simon I.

    2012-01-01

    Background Primaquine is a key drug for malaria elimination. In addition to being the only drug active against the dormant relapsing forms of Plasmodium vivax, primaquine is the sole effective treatment of infectious P. falciparum gametocytes, and may interrupt transmission and help contain the spread of artemisinin resistance. However, primaquine can trigger haemolysis in patients with a deficiency in glucose-6-phosphate dehydrogenase (G6PDd). Little information is available about the distribution of individuals at risk of primaquine-induced haemolysis. We present a continuous evidence-based prevalence map of G6PDd and estimates of affected populations, together with a national index of relative haemolytic risk. Methods and Findings Representative community surveys of phenotypic G6PDd prevalence were identified for 1,734 spatially unique sites. These surveys formed the evidence base for a Bayesian geostatistical model adapted to the gene's X-linked inheritance, which predicted a G6PDd allele frequency map across malaria endemic countries (MECs) and generated population-weighted estimates of affected populations. The highest median prevalence (peaking at 32.5%) was predicted across sub-Saharan Africa and the Arabian Peninsula. Although G6PDd prevalence was generally lower across central and southeast Asia, rarely exceeding 20%, the majority of G6PDd individuals (67.5% median estimate) were from Asian countries. We estimated a G6PDd allele frequency of 8.0% (interquartile range: 7.4–8.8) across MECs, and 5.3% (4.4–6.7) within malaria-eliminating countries. The reliability of the map is contingent on the underlying data informing the model; population heterogeneity can only be represented by the available surveys, and important weaknesses exist in the map across data-sparse regions. Uncertainty metrics are used to quantify some aspects of these limitations in the map. Finally, we assembled a database of G6PDd variant occurrences to inform a national-level index of
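
    The X-linked adaptation matters because a given allele frequency translates differently into phenotypic prevalence for males and females. A worked example using the study's 8.0% allele frequency, assuming Hardy-Weinberg proportions:

```python
# Hemizygous males express deficiency at the allele frequency q; fully
# deficient (homozygous) females at q**2; heterozygous females (2q(1-q))
# show intermediate, variable expression.
q = 0.080                       # G6PDd allele frequency across MECs
males = q
females_hom = q ** 2
females_het = 2 * q * (1 - q)
print(f"males deficient:              {males:.1%}")
print(f"females homozygous deficient: {females_hom:.2%}")
print(f"females heterozygous:         {females_het:.1%}")
```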

  6. Estimating and mapping the incidence of dengue and chikungunya in Honduras during 2015 using Geographic Information Systems (GIS).

    PubMed

    Zambrano, Lysien I; Sierra, Manuel; Lara, Bredy; Rodríguez-Núñez, Iván; Medina, Marco T; Lozada-Riascos, Carlos O; Rodríguez-Morales, Alfonso J

    Geographical information systems (GIS) have been used extensively to develop epidemiological maps for dengue, but not for other emerging arboviral diseases, nor in Central America. Surveillance case data (2015) were used to estimate annual incidence rates of dengue and chikungunya (cases/100,000 pop) to develop the first such maps for the departments and municipalities of Honduras. The GIS software used was Kosmo Desktop 3.0RC1(®). Four thematic maps were developed according to department, municipality, and disease incidence rate. A total of 19,289 cases of dengue and 85,386 of chikungunya were reported (median, 726 cases/week for dengue and 1460 for chikungunya). The highest peaks were observed at weeks 25 and 27, respectively. There was an association between progression by week (p<0.0001). The cumulative crude national rate was estimated at 224.9 cases/100,000 pop for dengue and 995.6 for chikungunya. The incidence rate ratio between chikungunya and dengue was 4.42 (ranging in municipalities from 0.0 up to 893.0 [San Vicente Centenario]). The burden of both arboviral diseases is concentrated in the capital Central District (>37% for both). GIS-based epidemiological maps can guide decision-making for the prevention and control of diseases that still represent significant problems in the region and the country, as well as for emerging conditions. Copyright © 2016 King Saud Bin Abdulaziz University for Health Sciences. Published by Elsevier Ltd. All rights reserved.
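
    The reported rates follow from simple normalisation per 100,000 population; back-calculating from the case counts above implies a population of roughly 8.58 million, which is assumed here:

```python
pop = 8_576_000                       # implied by the reported rates (assumption)
dengue_cases, chik_cases = 19_289, 85_386

rate_dengue = dengue_cases / pop * 100_000   # cases per 100,000 pop
rate_chik = chik_cases / pop * 100_000
print(f"dengue {rate_dengue:.1f}, chikungunya {rate_chik:.1f} per 100,000")
print(f"incidence rate ratio: {rate_chik / rate_dengue:.2f}")
```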

  7. Real-time temperature estimation and monitoring of HIFU ablation through a combined modeling and passive acoustic mapping approach.

    PubMed

    Jensen, C R; Cleveland, R O; Coussios, C C

    2013-09-07

    Passive acoustic mapping (PAM) has recently been demonstrated as a method of monitoring focused ultrasound therapy by reconstructing the emissions created by inertially cavitating bubbles (Jensen et al 2012 Radiology 262 252-61). The published method sums the energy emitted by cavitation from the focal region within the tissue and uses a threshold to determine when sufficient energy has been delivered for ablation. The present work builds on this approach to provide high-intensity focused ultrasound (HIFU) treatment monitoring software that displays both real-time temperature maps and a prediction of the ablated tissue region. This is achieved by determining heat deposition from two sources: (i) acoustic absorption of the primary HIFU beam, which is calculated via a nonlinear model, and (ii) absorption of energy from bubble acoustic emissions, which is estimated from measurements. The two sources of heat are used as inputs to the bioheat equation, which gives an estimate of the temperature of the tissue as well as estimates of tissue ablation. The method has been applied to ex vivo ox liver samples; the estimated temperature is compared to the measured temperature and shows good agreement, capturing the effect of cavitation-enhanced heating on temperature evolution. In conclusion, it is demonstrated that by using PAM together with predictions of heating it is possible to produce an evolving estimate of cell death during exposure in order to guide ablative HIFU therapy.
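
    The temperature estimate comes from integrating the bioheat equation with the two heat sources added. Below is a 1D explicit finite-difference sketch of a Pennes-type model; all tissue constants and source magnitudes are illustrative assumptions, not the paper's values.

```python
import numpy as np

rho, c, k = 1050.0, 3600.0, 0.5     # density (kg/m^3), heat capacity, conductivity
w_b, c_b, T_a = 0.5, 3800.0, 37.0   # perfusion (kg/m^3/s), blood heat cap., arterial T
dx, dt, n = 1e-3, 0.01, 101         # grid spacing (m), time step (s), grid size

T = np.full(n, 37.0)                # initial tissue temperature (deg C)
x = (np.arange(n) - n // 2) * dx
Q_hifu = 2e6 * np.exp(-(x / 2e-3) ** 2)   # absorbed HIFU beam power, W/m^3 (assumed)
Q_cav = 0.3 * Q_hifu                      # cavitation-enhanced heating (assumed)

for _ in range(500):                # 5 s of insonation
    lap = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx ** 2
    lap[0] = lap[-1] = 0.0          # crude fixed-edge boundary handling
    T += (k * lap - w_b * c_b * (T - T_a) + Q_hifu + Q_cav) * dt / (rho * c)

# An ablation estimate would apply a thermal-dose criterion (e.g. CEM43)
# to the temperature history rather than to the final temperature alone.
print("peak temperature:", round(T.max(), 1), "deg C")
```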

  8. Mapping land water and energy balance relations through conditional sampling of remote sensing estimates of atmospheric forcing and surface states

    NASA Astrophysics Data System (ADS)

    Farhadi, Leila; Entekhabi, Dara; Salvucci, Guido

    2016-04-01

    In this study, we develop and apply a mapping estimation capability for key unknown parameters that link the surface water and energy balance equations. The method is applied to the Gourma region in West Africa. The accuracy of the estimation method at point scale was previously examined using flux tower data. In this study, the capability is scaled to be applicable with remotely sensed data products and hence allow mapping. Parameters of the system are estimated through a process that links atmospheric forcing (precipitation and incident radiation), surface states, and unknown parameters. Based on conditional averaging of land surface temperature and moisture states, a single objective function is posed that measures moisture- and temperature-dependent errors solely in terms of observed forcings and surface states. This objective function is minimized with respect to the parameters to identify evapotranspiration and drainage models and estimate water and energy balance flux components. The uncertainty of the estimated parameters (and the associated statistical confidence limits) is obtained through the inverse of the Hessian of the objective function, which is an approximation of the covariance matrix. This calibration-free method is applied to the mesoscale region of Gourma in West Africa using multiplatform remote sensing data. The retrievals are verified against tower-flux field site data and physiographic characteristics of the region. The focus is to find the functional form of the dependence of evaporative fraction on soil moisture, a key closure function for surface and subsurface heat and moisture dynamics, using remote sensing data.
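
    The minimize-then-invert-the-Hessian pattern is generic and easy to sketch. Here a placeholder quadratic misfit stands in for the conditional-averaging objective, and the inverse Hessian (as approximated by BFGS) scaled by the residual variance serves as the parameter covariance; all data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def objective(theta, X, y):
    r = y - X @ theta                 # placeholder misfit
    return 0.5 * np.sum(r ** 2)

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 3))
theta_true = np.array([0.6, -1.2, 0.3])
y = X @ theta_true + 0.1 * rng.normal(size=500)

res = minimize(objective, x0=np.zeros(3), args=(X, y), method="BFGS")
sigma2 = 2.0 * res.fun / (len(y) - 3)     # residual variance estimate
cov = sigma2 * res.hess_inv               # inverse Hessian as covariance proxy
se = np.sqrt(np.diag(cov))
print("estimates:", np.round(res.x, 3))
print("95% CI half-widths:", np.round(1.96 * se, 3))
```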

  9. Towards a publicly available, map-based regional software tool to estimate unregulated daily streamflow at ungauged rivers

    USGS Publications Warehouse

    Archfield, Stacey A.; Steeves, Peter A.; Guthrie, John D.; Ries, Kernell G.

    2013-01-01

    Streamflow information is critical for addressing any number of hydrologic problems. Often, streamflow information is needed at locations that are ungauged and, therefore, have no observations on which to base water management decisions. Furthermore, there has been increasing need for daily streamflow time series to manage rivers for both human and ecological functions. To facilitate negotiation between human and ecological demands for water, this paper presents the first publicly available, map-based, regional software tool to estimate historical, unregulated, daily streamflow time series (streamflow not affected by human alteration such as dams or water withdrawals) at any user-selected ungauged river location. The map interface allows users to locate and click on a river location, which then links to a spreadsheet-based program that computes estimates of daily streamflow for the river location selected. For a demonstration region in the northeast United States, daily streamflow was, in general, shown to be reliably estimated by the software tool. Estimating the highest and lowest streamflows that occurred in the demonstration region over the period from 1960 through 2004 also was accomplished but with more difficulty and limitations. The software tool provides a general framework that can be applied to other regions for which daily streamflow estimates are needed.

  10. Use of Pan-Tropical Biomass Maps and Deforestation Datasets to Derive Carbon Loss Estimates for the Amazon Biome

    NASA Astrophysics Data System (ADS)

    Langner, Andeas; Shimabukuro, Yosio; Achard, Frederic; Simonetti, Dario; Mitchard, Edward

    2015-04-01

    IPCC Tier 1 above-ground biomass (AGB) default values per ecological zone have high uncertainties. Remote sensing based pan-tropical biomass maps can be used to derive more realistic Tier 1 values and furthermore allow a pixel-level analysis. Such an approach enables more robust AGB estimates at ecological scale, as the geospatial pattern of AGB in tropical forests is taken into account. Our study investigates the impact of different activity (deforestation) datasets and carbon emission factors on carbon loss over the last decade for the Brazilian Amazon. Estimates of the carbon loss vary strongly: up to 83% and 66% relative difference depending upon the emission and activity datasets used, respectively. While the Brazilian carbon map delivers higher carbon estimates than the remote sensing based AGB datasets, the Brazilian activity dataset shows lower deforestation rates than the Tree-cover product. However, the sample-based TREES approach delivers deforestation estimates which are quite close to the official Brazilian data. The combination of emission and activity data over the period 2007-2012 leads to carbon loss estimates that range from 59 to 172 megatons per year. A spatially explicit approach requires low uncertainties at the pixel level. Thus, a combination of highly accurate activity data and spatially explicit AGB data, such as provided by the newly developed data fusion model, is recommended.
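
    The underlying bookkeeping is activity data multiplied by an emission factor. A worked example with assumed inputs (for comparison, the study's estimates span 59 to 172 megatons of carbon per year):

```python
area_ha = 6.0e5            # assumed annual deforestation, hectares
agb_t_per_ha = 250.0       # assumed above-ground biomass, tonnes per hectare
carbon_fraction = 0.47     # IPCC default carbon fraction of dry biomass

carbon_loss_mt = area_ha * agb_t_per_ha * carbon_fraction / 1e6
print(f"carbon loss: {carbon_loss_mt:.0f} Mt C per year")
```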

  11. Bayes filter modification for drivability map estimation with observations from stereo vision

    NASA Astrophysics Data System (ADS)

    Panchenko, Aleksei; Prun, Viktor; Turchenkov, Dmitri

    2017-02-01

    Reconstruction of a drivability map for a moving vehicle is a well-known research topic in applied robotics. Here, we consider creating such a map for an autonomous truck on a generally planar surface containing discrete obstacles. The source of measurements for the truck is a calibrated pair of cameras. The stereo system detects and reconstructs several types of objects, such as road borders, other vehicles, pedestrians, and general tall or highly saturated objects (e.g. road cones). For creating a robust mapping module we use a modification of Bayes filtering, which introduces some novel techniques for the occupancy-map update step. Specifically, our modified version handles false-positive measurement errors, stereo shading, and obstacle occlusion. We implemented the technique and achieved real-time 15 FPS computation on an industrial shake-proof PC. Our real-world experiments show the positive effect of the filtering step.
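
    A standard Bayes-filter occupancy update works in log-odds. The sketch below shows one simple way such an update can be made conservative toward false positives and can skip occluded or shaded cells entirely; the increments and layout are assumptions, not the paper's exact modification.

```python
import numpy as np

def update(L, meas, l_occ=0.6, l_free=-0.4, l_min=-4.0, l_max=4.0):
    """One log-odds update. meas: +1 observed occupied, -1 observed free,
    0 occluded/shaded (no update). A small l_occ relative to |l_free| makes
    the map slower to commit to false-positive detections."""
    L = L + np.where(meas > 0, l_occ, 0.0) + np.where(meas < 0, l_free, 0.0)
    return np.clip(L, l_min, l_max)     # clamping keeps the map able to change

L = np.zeros((100, 100))                # log-odds map; 0 means p(occupied) = 0.5
meas = np.zeros((100, 100), dtype=int)
meas[40:43, 50:53] = 1                  # stereo-detected obstacle
meas[45:90, 45:60] = -1                 # ground seen as free
L = update(L, meas)

p = 1.0 - 1.0 / (1.0 + np.exp(L))       # back to occupancy probability
print(round(p[41, 51], 3), round(p[60, 50], 3))
```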

  12. Fine mapping and single nucleotide polymorphism effects estimation on pig chromosomes 1, 4, 7, 8, 17 and X

    PubMed Central

    Hidalgo, André M.; Lopes, Paulo S.; Paixão, Débora M.; Silva, Fabyano F.; Bastiaansen, John W.M.; Paiva, Samuel R.; Faria, Danielle A.; Guimarães, Simone E.F.

    2013-01-01

    Fine mapping of quantitative trait loci (QTL) from previous linkage studies was performed on pig chromosomes 1, 4, 7, 8, 17, and X, which were known to harbor QTL. Traits were divided into: growth performance, carcass, internal organs, cut yields, and meat quality. Fifty families of an F2 population were used, produced by crossing local Brazilian Piau boars with commercial sows. The linkage map consisted of 237 SNP and 37 microsatellite markers covering 866 centimorgans. QTL were identified by regression interval mapping using GridQTL. Individual marker effects were estimated by Bayesian LASSO regression using R. In total, 32 QTL affecting the evaluated traits were detected along the chromosomes studied. Seven of the QTL were known from previous studies using our F2 population, and 25 novel QTL resulted from the increased marker coverage. Six of the seven QTL that were significant at the 5% genome-wide level had SNPs within their confidence interval whose effects were among the 5% largest effects. The combined use of microsatellites along with SNP markers increased the saturation of the genome map and led to smaller confidence intervals for the QTL. The results showed that the tested models yield similar improvements in QTL mapping accuracy. PMID:24385854

  13. A priori and a posteriori approaches for finding genes of evolutionary interest in non-model species: osmoregulatory genes in the kidney transcriptome of the desert rodent Dipodomys spectabilis (banner-tailed kangaroo rat).

    PubMed

    Marra, Nicholas J; Eo, Soo Hyung; Hale, Matthew C; Waser, Peter M; DeWoody, J Andrew

    2012-12-01

    One common goal in evolutionary biology is the identification of genes underlying adaptive traits of evolutionary interest. Recently next-generation sequencing techniques have greatly facilitated such evolutionary studies in species otherwise depauperate of genomic resources. Kangaroo rats (Dipodomys sp.) serve as exemplars of adaptation in that they inhabit extremely arid environments, yet require no drinking water because of ultra-efficient kidney function and osmoregulation. As a basis for identifying water conservation genes in kangaroo rats, we conducted a priori bioinformatics searches in model rodents (Mus musculus and Rattus norvegicus) to identify candidate genes with known or suspected osmoregulatory function. We then obtained 446,758 reads via 454 pyrosequencing to characterize genes expressed in the kidney of banner-tailed kangaroo rats (Dipodomys spectabilis). We also determined candidates a posteriori by identifying genes that were overexpressed in the kidney. The kangaroo rat sequences revealed nine different a priori candidate genes predicted from our Mus and Rattus searches, as well as 32 a posteriori candidate genes that were overexpressed in kidney. Mutations in two of these genes, Slc12a1 and Slc12a3, cause human renal diseases that result in the inability to concentrate urine. These genes are likely key determinants of physiological water conservation in desert rodents. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. METRIC model for the estimation and mapping of evapotranspiration in a super intensive olive orchard in Southern Portugal

    NASA Astrophysics Data System (ADS)

    Pôças, Isabel; Nogueira, António; Paço, Teresa A.; Sousa, Adélia; Valente, Fernanda; Silvestre, José; Andrade, José A.; Santos, Francisco L.; Pereira, Luís S.; Allen, Richard G.

    2013-04-01

    Satellite-based surface energy balance models have been successfully applied to estimate and map evapotranspiration (ET). The METRIC model (Mapping EvapoTranspiration at high Resolution using Internalized Calibration) is one such model. METRIC has been widely used over an extensive range of vegetation types and applications, mostly focusing on annual crops. In the current study, the single-layer-blended METRIC model was applied to Landsat5 TM and Landsat7 ETM+ images to produce estimates of evapotranspiration (ET) in a super intensive olive orchard in Southern Portugal. In sparse woody canopies such as olive orchards, some adjustments in the METRIC application must be considered, relating to the estimation of vegetation temperature, momentum roughness length, and sensible heat flux (H) for tall vegetation. To minimize biases in H estimates due to uncertainties in the definition of momentum roughness length, the Perrier function, based on leaf area index and tree canopy architecture and associated with an adjusted estimation of crop height, was used to obtain momentum roughness length estimates. Additionally, to minimize biases in surface temperature simulations due to soil and shadow effects, the computation of radiometric temperature considered a three-source condition, where Ts = fc·Tc + fshadow·Tshadow + fsunlit·Tsunlit. As such, the surface temperature (Ts), derived from the thermal band of the Landsat images, integrates the temperature of the canopy (Tc), the temperature of the shaded ground surface (Tshadow), and the temperature of the sunlit ground surface (Tsunlit), according to the relative fractions of vegetation (fc), shadow (fshadow), and sunlit (fsunlit) ground surface, respectively. As the sunlit canopies are the primary source of energy exchange, the effective temperature of the canopy was estimated by solving the three-source condition equation for Tc. To evaluate METRIC performance in estimating ET over the olive grove, several parameters derived from the
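
    Solving the three-source mixture for the canopy temperature is a one-line rearrangement; a worked example with assumed fractions and temperatures:

```python
Ts = 302.0                                 # Landsat-derived radiometric temperature (K)
fc, fshadow, fsunlit = 0.35, 0.25, 0.40    # cover fractions (must sum to 1)
Tshadow, Tsunlit = 298.0, 310.0            # shaded / sunlit ground temperatures (K)

# Ts = fc*Tc + fshadow*Tshadow + fsunlit*Tsunlit, solved for Tc:
Tc = (Ts - fshadow * Tshadow - fsunlit * Tsunlit) / fc
print(f"canopy temperature: {Tc:.1f} K")
```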

  15. MAPL: Tissue microstructure estimation using Laplacian-regularized MAP-MRI and its application to HCP data.

    PubMed

    Fick, Rutger H J; Wassermann, Demian; Caruyer, Emmanuel; Deriche, Rachid

    2016-07-01

    The recovery of microstructure-related features of the brain's white matter is a current challenge in diffusion MRI. To robustly estimate these important features from multi-shell diffusion MRI data, we propose to analytically regularize the coefficient estimation of the Mean Apparent Propagator (MAP)-MRI method using the norm of the Laplacian of the reconstructed signal. We first compare our approach, which we call MAPL, with competing, state-of-the-art functional basis approaches. We show that it outperforms the original MAP-MRI implementation and the recently proposed modified Spherical Polar Fourier (mSPF) basis with respect to signal fitting and reconstruction of the Ensemble Average Propagator (EAP) and Orientation Distribution Function (ODF) in noisy, sparsely sampled data of a physical phantom with reference gold standard data. Then, to reduce the variance of parameter estimation using multi-compartment tissue models, we propose to use MAPL's signal fitting and extrapolation as a preprocessing step. We study the effect of MAPL on the estimation of axon diameter using a simplified Axcaliber model and axonal dispersion using the Neurite Orientation Dispersion and Density Imaging (NODDI) model. We show the positive effect of using it as a preprocessing step in estimating and reducing the variances of these parameters in the Corpus Callosum of six different subjects of the MGH Human Connectome Project. Finally, we correlate the estimated axon diameter, dispersion and restricted volume fractions with Fractional Anisotropy (FA) and clearly show that changes in FA significantly correlate with changes in all estimated parameters. Overall, we illustrate the potential of using a well-regularized functional basis together with multi-compartment approaches to recover important microstructure tissue parameters with much less variability, thus contributing to the challenge of better understanding microstructure-related features of the brain's white matter.
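
    At its core, Laplacian regularization of a linear basis fit has the familiar closed form c = (A^T A + lam*U)^(-1) A^T y, where U collects inner products of the basis functions' Laplacians. A minimal sketch of the algebra; the basis matrix is random and U is a placeholder identity, so this shows the estimator's structure, not the MAP-MRI basis itself.

```python
import numpy as np

rng = np.random.default_rng(6)
n_samples, n_coef = 60, 20
A = rng.normal(size=(n_samples, n_coef))   # basis evaluated at the q-space samples
U = np.eye(n_coef)                         # placeholder for the Laplacian Gram matrix
y = A @ rng.normal(size=n_coef) + 0.05 * rng.normal(size=n_samples)

lam = 0.2                                  # regularization weight (e.g. cross-validated)
c = np.linalg.solve(A.T @ A + lam * U, A.T @ y)
signal_fit = A @ c                         # smoothed signal used for later extrapolation
print(np.round(c[:5], 3))
```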

  16. Crop Frequency Mapping for Land Use Intensity Estimation During Three Decades

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael; Tindall, Dan

    2016-08-01

    Crop extent and frequency maps are an important input to inform the debate around land value and competing land uses, food security, and the sustainability of agricultural practices. Such spatial datasets are likely to support decisions on natural resource management, planning and policy. The complete Landsat Time Series (LTS) archive for 23 Landsat footprints in western Queensland from 1987 to 2015 was used in a multi-temporal mapping approach. Spatial, spectral and temporal information were combined in multiple crop-modelling steps, supported by on-ground training data sampled across space and time for the classes Crop and No-Crop. Temporal information within the summer and winter growing seasons of each year was summarised and combined with various vegetation indices and band ratios computed from a mid-season spectral-composite image. All available temporal information was spatially aggregated to the scale of image segments in the mid-season composite for each growing season and used to train a random forest classifier for a Crop and No-Crop classification. Validation revealed that the predictive accuracy varied by growing season and region within k = 0.88 to 0.97, and the maps are thus suitable for mapping current and historic cropping activity. Crop frequency maps were produced for all regions at different time intervals. The crop frequency maps were validated separately against a historic crop information time series. Different land-use intensities and conversions, e.g. from agriculture to pasture, are apparent, and potential drivers of these conversions are discussed.

  17. Kalman estimator- and general linear model-based on-line brain activation mapping by near-infrared spectroscopy

    PubMed Central

    2010-01-01

    Background Near-infrared spectroscopy (NIRS) is a non-invasive neuroimaging technique that has recently been developed to measure the changes of cerebral blood oxygenation associated with brain activities. To date, for functional brain mapping applications, there is no standard on-line method for analysing NIRS data. Methods In this paper, a novel on-line NIRS data analysis framework taking advantage of both the general linear model (GLM) and the Kalman estimator is devised. The Kalman estimator is used to update the GLM coefficients recursively, and one critical coefficient regarding brain activity is then passed to a t-statistical test. The t-statistical test result is used to update a topographic brain activation map. Meanwhile, a set of high-pass filters is plugged into the GLM to prevent very low-frequency noises, and an autoregressive (AR) model is used to prevent the temporal correlation caused by physiological noises in NIRS time series. A set of data recorded in finger tapping experiments is studied using the proposed framework. Results The obtained results suggest that the method can effectively track the task-related brain activation areas and prevent noise distortion in the estimation while the experiment is running, demonstrating the potential of the proposed method for real-time NIRS-based brain imaging. Conclusions This paper presents a novel on-line approach for analysing NIRS data for functional brain mapping applications. This approach demonstrates the potential of a real-time-updating topographic brain activation map. PMID:21138595
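
    Recursively updating the GLM coefficients with a Kalman step (for a static coefficient vector this reduces to recursive least squares) and re-testing the task coefficient at every sample can be sketched per channel as follows; the regressors, noise levels, and initial covariance are toy assumptions.

```python
import numpy as np

def kalman_step(beta, P, x, y, r):
    """One measurement update for a static-state Kalman filter / RLS."""
    x = x.reshape(-1, 1)
    S = float(x.T @ P @ x) + r              # innovation variance
    K = (P @ x) / S                         # gain
    beta = beta + (K * (y - float(x.T @ beta))).ravel()
    P = P - K @ x.T @ P
    return beta, P

rng = np.random.default_rng(7)
n, p = 400, 3
task = (np.sin(np.linspace(0, 20, n)) > 0).astype(float)   # boxcar task regressor
X = np.c_[task, np.linspace(0, 1, n), np.ones(n)]          # task, drift, constant
y = X @ np.array([0.8, 0.1, 0.0]) + 0.2 * rng.normal(size=n)

beta, P = np.zeros(p), 10.0 * np.eye(p)
for i in range(n):
    beta, P = kalman_step(beta, P, X[i], y[i], r=0.2 ** 2)
    t_stat = beta[0] / np.sqrt(P[0, 0])     # running t-value of the task coefficient
print(np.round(beta, 3), round(t_stat, 1))
```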

  18. Evaluation of a direct 4D reconstruction method using generalised linear least squares for estimating nonlinear micro-parametric maps.

    PubMed

    Angelis, Georgios I; Matthews, Julian C; Kotasidis, Fotis A; Markiewicz, Pawel J; Lionheart, William R; Reader, Andrew J

    2014-11-01

    Estimation of nonlinear micro-parameters is a computationally demanding and fairly challenging process, since it involves the use of rather slow iterative nonlinear fitting algorithms and often results in very noisy voxel-wise parametric maps. Direct reconstruction algorithms can provide parametric maps with reduced variance, but usually the overall reconstruction is impractically time-consuming with common nonlinear fitting algorithms. In this work we employed a recently proposed direct parametric image reconstruction algorithm to estimate the parametric maps of all micro-parameters of a two-tissue compartment model, used to describe the kinetics of [18F]FDG. The algorithm decouples the tomographic and the kinetic modelling problems, allowing the use of previously developed post-reconstruction methods, such as the generalised linear least squares (GLLS) algorithm. Results on both clinical and simulated data showed that the proposed direct reconstruction method provides considerable quantitative and qualitative improvements for all micro-parameters compared to the conventional post-reconstruction fitting method. Additionally, region-wise comparison of all parametric maps against the well-established filtered back projection followed by post-reconstruction nonlinear fitting, as well as the direct Patlak method, showed substantial quantitative agreement in all regions. The proposed direct parametric reconstruction algorithm is a promising approach towards the estimation of all individual micro-parameters of any compartment model. In addition, due to the linearised nature of the GLLS algorithm, the fitting step can be implemented very efficiently and therefore does not considerably affect the overall reconstruction time.

  19. Stability estimate for the hyperbolic inverse boundary value problem by local Dirichlet-to-Neumann map

    NASA Astrophysics Data System (ADS)

    Bellassoued, M.; Jellali, D.; Yamamoto, M.

    2008-07-01

    In this paper we consider the stability of the inverse problem of determining a function q(x) in a wave equation in a bounded smooth domain from boundary observations. This information is enclosed in the hyperbolic (dynamic) Dirichlet-to-Neumann map associated to the solutions of the wave equation. We prove, in the case of n ≥ 2, that q(x) is uniquely determined by the Dirichlet-to-Neumann map restricted to a subboundary, with a stability estimate of double-logarithmic type.

  20. A feasibility study on estimation of tissue mixture contributions in 3D arterial spin labeling sequence

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Pu, Huangsheng; Zhang, Xi; Li, Baojuan; Liang, Zhengrong; Lu, Hongbing

    2017-03-01

    Arterial spin labeling (ASL) provides a noninvasive measurement of cerebral blood flow (CBF). Due to its relatively low spatial resolution, the accuracy of CBF measurement is affected by the partial volume (PV) effect. To obtain an accurate CBF estimate, the contribution of each tissue type in the mixture is desirable. In current ASL studies, this is generally obtained by registering the ASL image to a structural image. This approach yields the probability of each tissue type inside each voxel, but it also introduces errors, including registration-algorithm error and imaging error in the ASL and structural scans. Therefore, estimating the tissue mixture percentages directly from ASL data is greatly needed. Under the assumption that the ASL signal follows a Gaussian distribution and that the tissue types are independent, a maximum a posteriori expectation-maximization (MAP-EM) approach was formulated to estimate the contribution of each tissue type to the observed perfusion signal at each voxel. Considering the sensitivity of MAP-EM to initialization, an approximately accurate initialization was obtained using a 3D fuzzy c-means method. Our preliminary results demonstrated that the GM and WM patterns across the perfusion image can be sufficiently visualized by the voxel-wise tissue mixtures, which may be promising for the diagnosis of various brain diseases.

  1. A COMPARISON OF MAPPED ESTIMATES OF LONG-TERM RUNOFF IN THE NORTHEAST UNITED STATES

    EPA Science Inventory

    We evaluated the relative accuracy of four methods of producing maps of long-term runoff for part of the northeast United States: MAN, a manual procedure that incorporates expert opinion in contour placement; RPRIS, an automated procedure based on water balance considerations, Pn...

  2. Estimating missing hourly climatic data using artificial neural network for energy balance based ET mapping applications

    USDA-ARS's Scientific Manuscript database

    Remote sensing based evapotranspiration (ET) mapping is an important improvement for water resources management. Hourly climatic data and reference ET are crucial for implementing remote sensing based ET models such as METRIC and SEBAL. In Turkey, data on all climatic variables may not be available ...

  3. Mapping of prospectivity and estimation of number of undiscovered prospects for lode gold, southwestern Ashanti Belt, Ghana

    NASA Astrophysics Data System (ADS)

    Carranza, Emmanuel John M.; Owusu, Emmanuel A.; Hale, Martin

    2009-07-01

    In the southwestern part of the Ashanti Belt, the results of fractal and Fry analyses of the spatial pattern of 51 known mines/prospects of (mostly lode) gold deposits, and the results of analysis of their spatial associations with faults and fault intersections, suggest different predominant structural controls on lode gold mineralisation at local and district scales. Intersections of NNE- and NW-trending faults were likely predominant in local-scale structural controls on lode gold mineralisation, whilst NNE-trending faults were likely predominant in district-scale structural controls. The results of the spatial analyses facilitate the conceptualisation and selection of spatial evidence layers for lode gold prospectivity mapping in the study area. Applying the derived map of lode gold prospectivity together with a map of the radial density of spatially coherent lode gold mines/prospects results in a one-level prediction of 37 undiscovered lode gold prospects. Applying quantified radial-density fractal dimensions of the spatial pattern of spatially coherent lode gold mines/prospects results in an estimate of 40 undiscovered lode gold prospects. The study concludes that analysis of the spatial pattern of discovered mineral deposits is the key to a strong link between mineral prospectivity mapping and assessment of undiscovered mineral deposits.

  4. Using satellite image-based maps and ground inventory data to estimate the area of the remaining Atlantic forest in the Brazilian state of Santa Catarina

    Treesearch

    Alexander C. Vibrans; Ronald E. McRoberts; Paolo Moser; Adilson L. Nicoletti

    2013-01-01

    Estimation of large area forest attributes, such as area of forest cover, from remote sensing-based maps is challenging because of image processing, logistical, and data acquisition constraints. In addition, techniques for estimating and compensating for misclassification and estimating uncertainty are often unfamiliar. Forest area for the state of Santa Catarina in...

  5. Mapping global land cover in 2001 and 2010 with spatial-temporal consistency at 250 m resolution

    NASA Astrophysics Data System (ADS)

    Wang, Jie; Zhao, Yuanyuan; Li, Congcong; Yu, Le; Liu, Desheng; Gong, Peng

    2015-05-01

    Global land cover types in 2001 and 2010 were mapped at 250 m resolution with multi-year time series Moderate Resolution Imaging Spectroradiometer (MODIS) data. The map for each single year was produced not only from data of that particular year but also from data acquired in the preceding and subsequent years as temporal context. Slope data and the geographical coordinates of pixels were also used. The classification system was derived from the finer resolution observation and monitoring of global land cover (FROM-GLC) project. Samples were based on the 2010 FROM-GLC project, and samples for other years were obtained by excluding those that had changed relative to 2010. A random forest classifier was used to obtain original class labels and to estimate class probabilities for 2000-2002 and 2009-2011. The overall accuracies estimated from cross validation of samples are 74.93% for 2001 and 75.17% for 2010. The classification results were further improved through post-processing. A spatial-temporal consistency model, Maximum a Posteriori Markov Random Fields (MAP-MRF), was first applied to improve the land cover classification for each set of three consecutive years. The MRF outputs for 2001 and 2010 were then processed with a rule-based label adjustment method, with MOD44B, slope, and composited EVI series as auxiliary data. The label adjustment process relabeled over-classified forests, water bodies, and barren lands to the alternative classes with maximum probabilities.
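
    MAP-MRF post-processing trades each pixel's class probabilities against label agreement with its neighbours. Below is a generic, spatial-only stand-in using iterated conditional modes (ICM); the smoothing weight and toy probabilities are assumptions, and the paper's model additionally spans three consecutive years.

```python
import numpy as np

def icm(prob, beta=0.8, n_iter=5):
    """prob: (H, W, K) per-pixel class probabilities from the classifier."""
    labels = prob.argmax(axis=2)
    logp = np.log(prob + 1e-12)
    H, W, K = prob.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                neigh = [labels[a, b] for a, b in
                         ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= a < H and 0 <= b < W]
                agree = np.array([sum(nb == k for nb in neigh) for k in range(K)])
                labels[i, j] = np.argmax(logp[i, j] + beta * agree)  # data + smoothness
    return labels

rng = np.random.default_rng(8)
prob = rng.dirichlet([1.0, 1.0, 1.0], size=(20, 20))   # toy classifier output
print(icm(prob)[:3, :3])
```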

  6. Image informative maps for component-wise estimating parameters of signal-dependent noise

    NASA Astrophysics Data System (ADS)

    Uss, Mykhail L.; Vozel, Benoit; Lukin, Vladimir V.; Chehdi, Kacem

    2013-01-01

    We deal with the problem of blind parameter estimation of signal-dependent noise from mono-component image data. Multispectral or color images can be processed in a component-wise manner. The main results rest on the assumption that the image-texture and noise-parameter estimation problems are interdependent. A two-dimensional fractal Brownian motion (fBm) model is used for locally describing image texture. A polynomial model describes the dependence of the signal-dependent noise variance on image intensity. Using the maximum likelihood approach, estimates of both the fBm-model and noise parameters are obtained. It is demonstrated that the Fisher information (FI) on noise parameters contained in an image is distributed nonuniformly over intensity coordinates (an image intensity range). It is also shown how to find the most informative intensities and the corresponding image areas for a given noisy image. The proposed estimator benefits from these detected areas to improve the estimation accuracy of the signal-dependent noise parameters. Finally, the potential estimation accuracy (Cramér-Rao Lower Bound, or CRLB) of the noise parameters is derived, providing confidence intervals of these estimates for a given image. In the experiment, the proposed and existing state-of-the-art noise variance estimators are compared for a large image database using CRLB-based statistical efficiency criteria.

  7. New features added to EVALIDator: ratio estimation and county choropleth maps

    Treesearch

    Patrick D. Miles; Mark H. Hansen

    2012-01-01

    The EVALIDator Web application, developed in 2007, provides estimates and sampling errors for many user selected forest statistics from the Forest Inventory and Analysis Database (FIADB). Among the statistics estimated are forest area, number of trees, biomass, volume, growth, removals, and mortality. A new release of EVALIDator, developed in 2012, has an option to...

  8. Considerations in Forest Growth Estimation Between Two Measurements of Mapped Forest Inventory Plots

    Treesearch

    Michael T. Thompson

    2006-01-01

    Several aspects of the enhanced Forest Inventory and Analysis (FIA) program's national plot design complicate change estimation. The design incorporates up to three separate plot sizes (microplot, subplot, and macroplot) to sample trees of different sizes. Because multiple plot sizes are involved, change estimators designed for polyareal plot sampling, such as those...

  9. Using a remote sensing-based, percent tree cover map to enhance forest inventory estimation

    Treesearch

    Ronald E. McRoberts; Greg C. Liknes; Grant M. Domke

    2014-01-01

    For most national forest inventories, the variables of primary interest to users are forest area and growing stock volume. The precision of estimates of parameters related to these variables can be increased using remotely sensed auxiliary variables, often in combination with stratified estimators. However, acquisition and processing of large amounts of remotely sensed...

  10. Extending the Precipitation Map Offshore Using Daily and 3-Hourly Combined Precipitation Estimates

    NASA Technical Reports Server (NTRS)

    Huffman, George J.; Adler, Robert F.; Bolvin, David T.; Curtis, Scott; Einaudi, Franco (Technical Monitor)

    2001-01-01

    One of the difficulties in studying landfalling extratropical cyclones along the Pacific Coast is the lack of antecedent data over the ocean, including precipitation. Recent research on combining various satellite-based precipitation estimates opens the possibility of realistic precipitation estimates on a global 1 deg. x 1 deg. latitude-longitude grid at the daily or even 3-hourly interval. The goal of this work is to provide quantitative precipitation estimates that correctly represent the precipitation-related variables in the hydrological cycle: surface accumulations (fresh-water flux into oceans), frequency and duration statistics, net latent heating, etc.

  11. The Kullback-Leibler divergence as an estimator of the statistical properties of CMB maps

    SciTech Connect

    Ben-David, Assaf; Jackson, Andrew D.; Liu, Hao

    2015-06-01

    The identification of unsubtracted foreground residuals in cosmic microwave background maps on large scales is of crucial importance for the analysis of polarization signals. These residuals add a non-Gaussian contribution to the data. We propose the Kullback-Leibler (KL) divergence as an effective, non-parametric test on the one-point probability distribution function of the data. Motivated by information theory, the KL divergence takes into account the entire range of the distribution and is highly non-local. We demonstrate its use by analyzing the large scales of the Planck 2013 SMICA temperature fluctuation map and find it consistent with the expected distribution at a level of 6%. Comparing the results to those obtained using the more popular Kolmogorov-Smirnov test, we find the two methods to be in general agreement.
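
    A minimal version of such a one-point test: bin the map values, compare the empirical histogram against the reference distribution with the KL divergence, and calibrate the result against simulated maps. Everything below (data, binning, reference) is synthetic.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

rng = np.random.default_rng(9)
pixels = rng.normal(0.0, 1.0, 50_000)          # stand-in for masked map values
bins = np.linspace(-5, 5, 101)
hist, _ = np.histogram(pixels, bins=bins)

centers = 0.5 * (bins[:-1] + bins[1:])
expected = np.exp(-0.5 * centers ** 2)         # Gaussian reference on the same bins

d = kl_divergence(hist.astype(float), expected)
# Significance: compare d with the distribution of KL values obtained from
# many simulated Gaussian maps of the same size and mask.
print(d)
```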

  12. Heavy metals of Santiago Island (Cape Verde) top soils: Estimated Background Value maps and environmental risk assessment

    NASA Astrophysics Data System (ADS)

    Cabral Pinto, M. M. S.; Ferreira da Silva, E.; Silva, M. M. V. G.; Melo-Gonçalves, P.

    2015-01-01

    In this work we present maps of estimated background values of some harmful metals (As, Cd, Co, Cr, Cu, Hg, Mn, Ni, Pb, V, and Zn) in the soils of Santiago Island, Cape Verde, analyse their relationships with the geological cartography, and assess their environmental risks. The geochemical survey (soil sampling at a spatial resolution of 3 sites per 10 km2, sample preparation, geochemical analysis, data treatment, and mapping) was conducted following the guidelines proposed by the International Projects IGCP 259 and IGCP 360. The concentration of the selected elements was determined in the fraction <2 mm. Each sample was digested with aqua regia and analysed by ICP-MS. The Estimated Background Value spatial distributions of the studied metals are found to be strongly linked to the geological cartography. These links are identified by direct comparison of the geochemical maps with the geological cartography, and confirmed by both simple statistics and a Principal Component Analysis. The metals with higher loadings in the first Principal Component (Ni, Cr, Co, Cu, and V) clearly show the influence of a lithology rich in siderophile elements, typical of basic rocks and their related minerals. The elements with higher loadings in the second Principal Component (Mn, Zn, Pb, As, Hg, and Cd) are chalcophile elements, except for Mn, but anthropogenic contamination by these elements cannot be ruled out. We propose an index to numerically assess the environmental risk of one element, which we call the Environmental Risk Index, and a Multi-element Index, which is simply the average taken over all elements. The occurrence of values greater than 1 in the maps of the Environmental Risk Index shows where the content of the respective element is above the permissible levels according to the available legislation for agricultural and residential purposes. The same applies to the multi-element risk index maps. High values of these risk indices are found, both for
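
    The Environmental Risk Index reduces to the measured content divided by the permissible level, with values above 1 flagging exceedance, and the multi-element index to the average over elements. A sketch with placeholder thresholds (not the legislation values used in the study):

```python
import numpy as np

permissible = {"As": 11.0, "Cd": 1.0, "Pb": 45.0}   # mg/kg, assumed limits
content = {"As": np.array([3.0, 14.0]),             # two example grid cells
           "Cd": np.array([0.4, 1.5]),
           "Pb": np.array([20.0, 80.0])}

eri = {el: content[el] / permissible[el] for el in permissible}
multi_element = np.mean(list(eri.values()), axis=0)
print({el: np.round(v, 2) for el, v in eri.items()})
print("multi-element index:", np.round(multi_element, 2))
```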

  13. Decoding fMRI events in sensorimotor motor network using sparse paradigm free mapping and activation likelihood estimates.

    PubMed

    Tan, Francisca M; Caballero-Gaudes, César; Mullinger, Karen J; Cho, Siu-Yeung; Zhang, Yaping; Dryden, Ian L; Francis, Susan T; Gowland, Penny A

    2017-08-16

    Most functional MRI (fMRI) studies map task-driven brain activity using a block or event-related paradigm. Sparse paradigm free mapping (SPFM) can detect the onset and spatial distribution of BOLD events in the brain without prior timing information, but relating the detected events to brain function remains a challenge. In this study, we developed a decoding method for SPFM using a coordinate-based meta-analysis method, activation likelihood estimation (ALE). We defined meta-maps of statistically significant ALE values that correspond to types of events and calculated a summation overlap between the normalized meta-maps and SPFM maps. As a proof of concept, this framework was applied to relate SPFM-detected events in the sensorimotor network (SMN) to six motor functions (left/right fingers, left/right toes, swallowing, and eye blinks). We validated the framework using simultaneous electromyography (EMG)-fMRI experiments and motor tasks with short and long durations and random interstimulus intervals. The decoding scores were considerably lower for eye movements than for the other movement types tested. The average success rates for short and long motor events were 77 ± 13% and 74 ± 16%, respectively, excluding eye movements. We found good agreement between the decoding results and EMG for most events and subjects, with sensitivity ranging between 55% and 100%, excluding eye movements. The proposed method was then used to classify the movement types of spontaneous single-trial events in the SMN during resting state, which produced an average success rate of 22 ± 12%. Finally, this article discusses methodological implications and improvements to increase the decoding performance. Hum Brain Mapp, 2017. © 2017 Wiley Periodicals, Inc.

  14. Black-backed woodpecker habitat suitability mapping using conifer snag basal area estimated from airborne laser scanning

    NASA Astrophysics Data System (ADS)

    Casas Planes, Á.; Garcia, M.; Siegel, R.; Koltunov, A.; Ramirez, C.; Ustin, S.

    2015-12-01

    Occupancy and habitat suitability models for snag-dependent wildlife species are commonly defined as a function of snag basal area. Although critical for predicting or assessing habitat suitability, spatially distributed estimates of snag basal area are not generally available across landscapes at spatial scales relevant for conservation planning. This study evaluates the use of airborne laser scanning (ALS) to 1) identify individual conifer snags and map their basal area across a recently burned forest, and 2) map habitat suitability for a wildlife species known to be dependent on snag basal area, specifically the black-backed woodpecker (Picoides arcticus). This study focuses on the Rim Fire, a megafire that took place in 2013 in the Sierra Nevada Mountains of California, creating large patches of medium- and high-severity burned forest. We use forest inventory plots, single-tree ALS-derived metrics, and Gaussian process classification and regression to identify conifer snags and estimate their stem diameter and basal area. We then use the results to map habitat suitability for the black-backed woodpecker using thresholds for conifer basal area from a previously published habitat suitability model. Local maxima detection and watershed segmentation algorithms resulted in 75% detection of trees with stem diameter larger than 30 cm. Snags are identified with an overall accuracy of 91.8%, and conifer snags are identified with an overall accuracy of 84.8%. Finally, Gaussian process regression reliably estimated stem diameter (R² = 0.8) using height and crown area. This work provides a fast and efficient methodology to characterize the extent of a burned forest at the tree level and a critical tool for early wildlife assessment in post-fire forest management and biodiversity conservation.

  15. Advancement of estimation fidelity in continuous quantum measurement

    NASA Astrophysics Data System (ADS)

    Diósi, Lajos

    2002-03-01

    We estimate an unknown qubit from the long sequence of n random polarization measurements of precision Δ. Using the standard Ito stochastic equations of the a posteriori state in the continuous measurement limit, we calculate the advancement of fidelity. We show that the standard optimum value 2/3 is achieved asymptotically for n ≫ Δ²/96 ≫ 1. We append a brief derivation of novel Ito equations for the estimate state.

  16. Improved 3D Look-Locker Acquisition Scheme and Angle Map Filtering Procedure for T1 Estimation

    PubMed Central

    Hui, CheukKai; Esparza-Coss, Emilio; Narayana, Ponnada A

    2013-01-01

    The 3D Look-Locker (LL) acquisition is a widely used, fast and efficient T1 mapping method. However, the multi-shot approach of 3D LL acquisition can introduce reconstruction artifacts that result in intensity distortions. Traditional 3D LL acquisition generally utilizes a centric encoding scheme that is limited to a single phase-encoding direction in k-space. To optimize the k-space segmentation, an elliptical scheme with two phase-encoding directions is implemented for the LL acquisition. This elliptical segmentation can reduce the intensity errors in the reconstructed images and improve the final T1 estimation. One of the major sources of error in LL-based T1 estimation is the lack of accurate knowledge of the actual flip angle. A multi-parameter curve-fitting procedure can account for some of the variability in the flip angle. However, curve fitting can also introduce errors in the estimated flip angle that result in incorrect T1 values. A filtering procedure based on goodness of fit (GOF) is proposed to reduce the effect of false flip-angle estimates. Filtering based on GOF weighting can remove likely incorrect angles that result in a bad curve fit. Simulation, phantom, and in vivo studies have demonstrated that these techniques can improve the accuracy of 3D LL T1 estimation. PMID:23784967

  17. Estimation and Mapping of the Winter-Time Increase of the Water Ice Amount in the Martian Surface Soil Based on the TES TI Seasonal Variations Analysis

    NASA Astrophysics Data System (ADS)

    Kuzmin, R. O.; Zabalueva, E. V.; Christensen, P. R.

    2008-03-01

    In this work we present preliminary results of a new method for estimating and globally mapping the winter-time increase of water ice in the Martian surface soil, based on analysis of the seasonal variations of the TES TI data.

  18. Construction of invariant whiskered tori by a parameterization method. Part II: Quasi-periodic and almost periodic breathers in coupled map lattices

    NASA Astrophysics Data System (ADS)

    Fontich, Ernest; de la Llave, Rafael; Sire, Yannick

    2015-09-01

    We construct quasi-periodic and almost periodic solutions for coupled Hamiltonian systems on an infinite lattice which is translation invariant. The couplings can be long range, provided that they decay moderately fast with respect to the distance. For the solutions we construct, most of the sites are moving in a neighborhood of a hyperbolic fixed point, but there are oscillating sites clustered around a sequence of nodes. The amplitude of these oscillations does not need to tend to zero. In particular, the almost periodic solutions do not decay at infinity. The main result is an a posteriori theorem. We formulate an invariance equation. Solutions of this equation are embeddings of an invariant torus on which the motion is conjugate to a rotation. We show that, if there is an approximate solution of the invariance equation that satisfies some non-degeneracy conditions, there is a true solution close by. This does not require that the system is close to integrable, hence it can be used to validate numerical calculations or formal expansions. The proof of this a posteriori theorem is based on a Nash-Moser iteration, which does not use transformation theory. Simpler versions of the scheme were developed in [28]. One technical tool, important for our purposes, is the use of weighted spaces that capture the idea that the maps under consideration are local interactions. Using these weighted spaces, the estimates of iterative steps are similar to those in finite dimensional spaces. In particular, the estimates are independent of the number of nodes that get excited. Using these techniques, given two breathers, we can place them apart and obtain an approximate solution, which leads to a true solution nearby. By repeating the process infinitely often, we can get solutions with infinitely many frequencies which do not tend to zero at infinity.

  19. Estimated flood-inundation mapping for the Lower Blue River in Kansas City, Missouri, 2003-2005

    USGS Publications Warehouse

    Kelly, Brian P.; Rydlund, Jr., Paul H.

    2006-01-01

    The U.S. Geological Survey, in cooperation with the city of Kansas City, Missouri, began a study in 2003 of the lower Blue River in Kansas City, Missouri, from Gregory Boulevard to the mouth at the Missouri River to determine the estimated extent of flood inundation in the Blue River valley from flooding on the lower Blue River and from Missouri River backwater. Much of the lower Blue River flood plain is covered by industrial development. Rapid development in the upper end of the watershed has increased the volume of runoff, and thus the discharge of flood events for the Blue River. Modifications to the channel of the Blue River began in late 1983 in response to the need for flood control. By 2004, the channel had been widened and straightened from the mouth to immediately downstream from Blue Parkway to convey a 30-year flood. A two-dimensional depth-averaged flow model was used to simulate flooding within a 2-mile study reach of the Blue River between 63rd Street and Blue Parkway. Hydraulic simulation of the study reach provided information for the design and performance of proposed hydraulic structures and channel improvements and for the production of estimated flood-inundation maps and maps representing an areal distribution of water velocity, both magnitude and direction. Flood profiles of the Blue River were developed between Gregory Boulevard and 63rd Street from stage elevations calculated from high water marks from the flood of May 19, 2004; between 63rd Street and Blue Parkway from two-dimensional hydraulic modeling conducted for this study; and between Blue Parkway and the mouth from an existing one-dimensional hydraulic model by the U.S. Army Corps of Engineers. Twelve inundation maps were produced at 2-foot intervals for Blue Parkway stage elevations from 750 to 772 feet. Each map is associated with National Weather Service flood-peak forecast locations at 63rd Street, Blue Parkway, Stadium Drive, U.S. Highway 40, 12th Street, and the Missouri River

  20. Detection, mapping and estimation of rate of spread of grass fires from southern African ERTS-1 imagery

    NASA Technical Reports Server (NTRS)

    Wightman, J. M.

    1973-01-01

    Sequential band-6 imagery of the Zambesi Basin of southern Africa recorded substantial changes in burn patterns resulting from late dry season grass fires. One example, from northern Botswana, indicates that a fire consumed approximately 70 square miles of grassland over a 24-hour period. Another example, from western Zambia, indicates increased fire activity over a 19-day period. Other examples clearly define the area of widespread grass fires in Angola, Botswana, Rhodesia and Zambia. From the fire patterns visible on the sequential portions of the imagery, and the time intervals involved, the rates of spread of the fires are estimated and compared with estimates derived from experimental burning plots in Zambia and Canada. It is concluded that sequential ERTS-1 imagery, of the quality studied, clearly provides the information needed to detect and map grass fires and to monitor their rates of spread in this region during the late dry season.

  1. 3D modelling of soil texture: mapping and incertitude estimation in centre-France

    NASA Astrophysics Data System (ADS)

    Ciampalini, Rossano; Martin, Manuel P.; Saby, Nicolas P. A.; Richer de Forges, Anne C.; Nehlig, Pierre; Martelet, Guillaume; Arrouays, Dominique

    2014-05-01

    Soil texture is an important component of all soil physical-chemical processes. The spatial variability of soil texture plays a crucial role in the evaluation and modelling of all distributed processes. The object of this study is to determine the spatial variation of the soil granulometric fractions (i.e., clay, silt, sand) in the region "Centre" of France in relation to the main controlling factors, and to create extended maps of these properties following GlobalSoilMap specifications. For this purpose we used 2487 soil profiles from the French soil database (IGCS - Inventory Management and Soil Conservation), and continuous depth functions of the properties within the soil profiles were calculated with a quadratic-spline methodology, optimising the spline parameters in each soil profile. We used environmental covariates to predict soil properties within the region at the depth intervals 0-5, 5-15, 15-30, 30-60, 60-100, and 100-200 cm. As environmental covariates, we used the SRTM and ASTER DEMs, with 90 m and 30 m resolution respectively, to generate terrain parameters and topographic indexes; other covariates were gamma-ray maps, Corine land cover, and the available geological and soil maps of the region at scales of 1M, 250k and 50k. Soil texture is modelled by applying compositional data analysis, namely the alr-transform (Aitchison, 1986), which accounts in the statistical calculations for the complementary dependence between the different granulometric classes (i.e., the 100% constraint). The prediction models of the alr-transformed variables were developed using boosted regression trees (BRT), followed by a linear mixed model (LMM) that separates a fixed effect from a random effect related to the continuous, spatially correlated variation of the property. In this case, the LMM is applied to the two co-regionalized properties (clay and sand alr-transforms). Model uncertainty mapping represents a practical way to describe efficiency and limits of

  2. Estimating Spatial Variations in Soil Organic Carbon Using Hyperspectral Data and Map Algebra

    NASA Astrophysics Data System (ADS)

    Jaber, S.; Lant, C.

    2009-04-01

    Soil organic carbon (SOC) sequestration is a component of larger strategies to control the accumulation of greenhouse gases that are causing global warming. To implement this approach, it is necessary to improve the methods of measuring SOC content under normal field conditions. Among these methods are indirect remote sensing and geographic information systems (GIS) techniques that are required to provide non-intrusive, low-cost, and spatially continuous information covering large areas on a repetitive basis. This study evaluates the effectiveness of hyperspectral data in improving existing remote sensing methodologies for measuring SOC content. The study area is Big Creek Watershed (BCW) in Southern Illinois, USA. Composite soil samples were collected from 303 representative pixels along the Hyperion coverage area of the watershed. Two linear multiple regression models predicting SOC were calibrated and validated: an all-variables model and a raster-variables-only model. Map algebra was implemented to extrapolate the raster-variables-only model and produce a SOC map for the BCW. Hyperion data improved the predictability of SOC compared to multispectral satellite remote sensing sensors, with a correlation coefficient (R) of 0.37 and a root mean square error (RMSE) of 3.19 metric tons per hectare to a 15-cm depth in the validation sample. Hyperspectral data cannot capture small annual variations in SOC, but can measure decadal variations associated with changes in tillage or crop rotation with fair accuracy; RMSEs are as low as 34 percent of field-measured changes in SOC due to changes in tillage and as low as 59 percent for changes in crop rotation. These ranges of error likely need to be reduced further if hyperspectral data are to be used as the basis of carbon sequestration credit programs. Hyperspectral data combined with map algebra can measure total SOC pools in various ecosystem or soil types to within a few percent error.
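
    The calibrate-then-extrapolate workflow above can be made concrete with a toy sketch: fit a linear model on sampled pixels, then apply the coefficients band-by-band as map algebra. All rasters, sample locations, and coefficients below are synthetic stand-ins, not the study's Hyperion variables.

        import numpy as np

        rng = np.random.default_rng(42)
        # Two stand-in raster predictors (e.g., reflectance-derived variables).
        band1 = rng.random((200, 200))
        band2 = rng.random((200, 200))

        # Synthetic "field" SOC samples drawn at known pixels, with noise.
        rows, cols = rng.integers(0, 200, 300), rng.integers(0, 200, 300)
        soc = (5.0 + 3.0 * band1[rows, cols] - 2.0 * band2[rows, cols]
               + rng.normal(0, 0.5, 300))

        # Calibrate the raster-variables-only model by ordinary least squares.
        X = np.column_stack([np.ones(300), band1[rows, cols], band2[rows, cols]])
        beta, *_ = np.linalg.lstsq(X, soc, rcond=None)

        # Map algebra: apply the fitted coefficients pixel-by-pixel.
        soc_map = beta[0] + beta[1] * band1 + beta[2] * band2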

  3. Mapping of GDOP estimates through the use of LiDAR data

    NASA Astrophysics Data System (ADS)

    Amolins, Krista

    The positioning accuracy of the Global Positioning System (GPS) and other Global Navigation Satellite Systems is affected by the configuration of visible satellites. Dilution of Precision (DOP) values are a measure of the strength of the satellite configuration but the software tools currently available for calculating DOP values have a limited ability to take into account obstructions. Determining when the best satellite configuration will be observable at a particular location requires identifying obstructions in the area and ascertaining whether they are blocking satellite signals. In this research, Light Detection and Ranging (LiDAR) data were used to locate all the obstructions around each terrain point by extracting and comparing two surfaces, one that represented obstructions and one that represented the terrain. Once all the obstructions in a selected area had been identified, GPS satellite location data were used to determine satellite visibility at different epochs and to calculate GDOP (Geometrical DOP) at locations where at least four satellites were visible. Maps were then generated for each epoch showing the GDOP values over the selected area. Some small differences were noted between the clear sky GDOP values calculated by the proposed method and those output by an available software planning tool and in a few cases there was a discrepancy in the number of visible satellites identified due to slight differences in the calculated satellite elevations. Nevertheless, the maps produced by the proposed method give a better representation of the GDOP values in the field than do traditional methods or other software tools.
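
    Once the set of visible (unobstructed) satellites at an epoch is known, GDOP follows from the standard geometry matrix; a minimal sketch, assuming unit line-of-sight vectors from the receiver to each visible satellite as input:

        import numpy as np

        def gdop(unit_vectors):
            # unit_vectors: (n_sats, 3) array of receiver-to-satellite
            # unit vectors; at least four satellites are required.
            u = np.asarray(unit_vectors, dtype=float)
            H = np.hstack([u, -np.ones((u.shape[0], 1))])   # geometry matrix
            Q = np.linalg.inv(H.T @ H)                      # cofactor matrix
            return float(np.sqrt(np.trace(Q)))              # GDOP

    In the mapping scheme described above, this value would be computed per terrain point per epoch, with obstructed satellites removed from the input list before the calculation.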

  4. Effects of shipping on marine acoustic habitats in Canadian Arctic estimated via probabilistic modeling and mapping.

    PubMed

    Aulanier, Florian; Simard, Yvan; Roy, Nathalie; Gervaise, Cédric; Bandet, Marion

    2017-08-29

    Canadian Arctic and Subarctic regions experience a rapid decrease of sea ice accompanied by increasing shipping traffic. The resulting time-space changes in shipping noise are studied for four key regions of this pristine environment, for 2013 traffic conditions and a hypothetical tenfold traffic increase. A probabilistic modeling and mapping framework, called Ramdam, which integrates the intrinsic variability and uncertainties of shipping noise and its effects on marine habitats, is developed and applied. A substantial transformation of soundscapes is observed in areas where shipping noise changes from a present occasional-transient contributor to a dominant noise source. Examination of impacts on low-frequency mammals within ecologically and biologically significant areas reveals that shipping noise has the potential to trigger behavioral responses and masking in the future, although no risk of temporary or permanent hearing threshold shifts is noted. Such probabilistic modeling and mapping is strategic in marine spatial planning around these emerging noise issues. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  5. Enhancing the applicability of Kohonen Self-Organizing Map (KSOM) estimator for gap-filling in hydrometeorological timeseries data

    NASA Astrophysics Data System (ADS)

    Nanda, Trushnamayee; Sahoo, Bhabagrahi; Chatterjee, Chandranath

    2017-06-01

    The Kohonen Self-Organizing Map (KSOM) estimator is prescribed as a useful tool for infilling missing data in hydrometeorology. However, when the performance of the KSOM estimator was tested in this study for gap-filling in streamflow, rainfall, evapotranspiration (ET), and temperature timeseries data collected from 30 gauging stations in India under missing-data situations, it appeared that the KSOM modeling performance could be improved further. Consequently, this study examines whether the length of record of the historical data and its variability have any effect on the performance of the KSOM, and whether including the temporal distribution of the timeseries data and the nature of outliers in the KSOM framework enhances its performance further. It is established that the KSOM framework should include the coefficient of variation of the datasets in determining the number of map units, rather than treating that number as a function of the sample data size alone. This could help to upscale and generalize the applicability of the KSOM for varied hydrometeorological data types.
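
    A hedged sketch of the sizing idea above, assuming the common SOM heuristic of roughly 5*sqrt(N) map units as the baseline; the CV-based scaling shown is purely illustrative, since the abstract does not give the authors' exact formula.

        import numpy as np

        def n_map_units(data, k=5.0):
            # data: 1D array of the timeseries used to train the KSOM.
            data = np.asarray(data, dtype=float)
            cv = data.std() / abs(data.mean())    # coefficient of variation
            base = k * np.sqrt(data.size)         # classical size heuristic
            # Illustrative assumption: let higher variability grow the map.
            return max(4, int(round(base * (1.0 + cv))))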

  6. OligoHeatMap (OHM): an online tool to estimate and display hybridizations of oligonucleotides onto DNA sequences.

    PubMed

    Croce, Olivier; Chevenet, François; Christen, Richard

    2008-07-01

    The efficiency of molecular methods involving DNA/DNA hybridizations depends on the accurate prediction of the melting temperature (T(m)) of the duplex. Many software tools are available for T(m) calculation, but difficulties arise when one wishes to check whether a given oligomer (PCR primer or probe) hybridizes well to more than a single sequence. Moreover, the presence of mismatches within the duplex is not sufficient to estimate specificity, as it does not always significantly decrease the T(m). OHM (OligoHeatMap) is an online tool able to provide estimates of T(m) for a set of oligomers and a set of aligned sequences, not only as text files of complete results but also in a graphical way: T(m) values are translated into colors and displayed as a heat map image, either stand-alone or for use by software such as TreeDyn to be included in a phylogenetic tree. OHM is freely available at http://bioinfo.unice.fr/ohm/, with links to the full source code and online help.
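
    A toy illustration of the heat-map idea: compute a T(m) estimate for one oligo against each aligned target and bin it into a color class. The Wallace rule used here (2 °C per matched A/T, 4 °C per matched G/C) is a deliberate simplification of the more rigorous thermodynamic models a tool like OHM would use; the sequences and the printed "color" classes are hypothetical.

        def wallace_tm(oligo, target):
            # Count only matched bases, so mismatches lower the estimate.
            matched = [b for b, t in zip(oligo.upper(), target.upper()) if b == t]
            return sum(2 if b in "AT" else 4 for b in matched)

        oligo = "ACGTGCAT"
        targets = ["ACGTGCAT", "ACGAGCAT", "TCGTGCAA"]
        for t in targets:
            tm = wallace_tm(oligo, t)
            shade = "high" if tm >= 20 else "low"   # stand-in for a color ramp
            print(f"{t}  Tm~{tm} C  ({shade})")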

  7. Estimation and mapping of wet and dry mercury deposition across northeastern North America

    USGS Publications Warehouse

    Miller, E.K.; Vanarsdale, A.; Keeler, G.J.; Chalmers, A.; Poissant, L.; Kamman, N.C.; Brulotte, R.

    2005-01-01

    Whereas many ecosystem characteristics and processes influence mercury accumulation in higher trophic-level organisms, the mercury flux from the atmosphere to a lake and its watershed is a likely factor in potential risk to biota. Atmospheric deposition clearly affects mercury accumulation in soils and lake sediments. Thus, knowledge of spatial patterns in atmospheric deposition may provide information for assessing the relative risk for ecosystems to exhibit excessive biotic mercury contamination. Atmospheric mercury concentrations in aerosol, vapor, and liquid phases from four observation networks were used to estimate regional surface concentration fields. Statistical models were developed to relate sparsely measured mercury vapor and aerosol concentrations to the more commonly measured mercury concentration in precipitation. High spatial resolution deposition velocities for different phases (precipitation, cloud droplets, aerosols, and reactive gaseous mercury (RGM)) were computed using inferential models. An empirical model was developed to estimate gaseous elemental mercury (GEM) deposition. Spatial patterns of estimated total mercury deposition were complex. Generally, deposition was higher in the southwest and lower in the northeast. Elevation, land cover, and proximity to urban areas modified the general pattern. The estimated net GEM and RGM fluxes were each greater than or equal to wet deposition in many areas. Mercury assimilation by plant foliage may provide a substantial input of methyl-mercury (MeHg) to ecosystems. © 2005 Springer Science+Business Media, Inc.

  8. Estimating and mapping the incidence of giardiasis in Colombia, 2009-2013.

    PubMed

    Rodríguez-Morales, Alfonso J; Granados-Álvarez, Santiago; Escudero-Quintero, Harold; Vera-Polania, Felipe; Mondragon-Cardona, Alvaro; Díaz-Quijano, Fredi Alexander; Sosa-Valencia, Leonardo; Lozada-Riascos, Carlos O; Escobedo, Angel A; Liseth, Olivia; Haque, Ubydul

    2016-08-01

    Giardiasis is one of the most common intestinal infections in the world. There have been no national studies on the morbidity of giardiasis in Colombia. In this study, incidence rates of giardiasis were estimated for the years 2009-2013. An observational, retrospective study of the giardiasis incidence in Colombia, 2009-2013, was performed using data extracted from the personal health records system (Registro Individual de Prestación de Servicios, RIPS). Official population estimates from the National Department of Statistics (DANE) were used for the estimation of crude and adjusted incidence rates (cases/100 000 population). During the period studied, 15 851 cases were reported (median 3233/year; 5-year cumulated crude national rate of 33.97 cases/100 000 population). Of these, 50.3% were female; 58.4% were <10 years old and 14.8% were 10-19 years old. By region, 17.7% were from Bogotá (10.07 cases/100 000 population, 2009), 10.9% from Antioquia (9.42, 2009), 8.6% from Atlántico (15.67, 2009), and 6.5% from Risaralda (33.38, 2009). Cases were reported in all departments (even insular areas). As giardiasis is neglected in many countries, surveillance is not regularly undertaken. Despite its limitations, this study is the first attempt to provide estimates of national giardiasis incidence with consistent findings regarding affected age groups and geographical distribution. Copyright © 2016. Published by Elsevier Ltd.

  9. Mapping the Origins of Time: Scalar Errors in Infant Time Estimation

    ERIC Educational Resources Information Center

    Addyman, Caspar; Rocha, Sinead; Mareschal, Denis

    2014-01-01

    Time is central to any understanding of the world. In adults, estimation errors grow linearly with the length of the interval, much faster than would be expected of a clock-like mechanism. Here we present the first direct demonstration that this is also true in human infants. Using an eye-tracking paradigm, we examined 4-, 6-, 10-, and…

  11. Developing Methods for Fraction Cover Estimation Toward Global Mapping of Ecosystem Composition

    NASA Astrophysics Data System (ADS)

    Roberts, D. A.; Thompson, D. R.; Dennison, P. E.; Green, R. O.; Kokaly, R. F.; Pavlick, R.; Schimel, D.; Stavros, E. N.

    2016-12-01

    Terrestrial vegetation seldom covers an entire pixel due to spatial mixing at many scales. Estimating the fractional contributions of photosynthetic green vegetation (GV), non-photosynthetic vegetation (NPV), and substrate (soil, rock, etc.) to mixed spectra can significantly improve quantitative remote measurement of terrestrial ecosystems. Traditional methods for estimating fractional vegetation cover rely on vegetation indices that are sensitive to variable substrate brightness, NPV and sun-sensor geometry. Spectral mixture analysis (SMA) is an alternate framework that provides estimates of fractional cover. However, simple SMA, in which the same set of endmembers is used for an entire image, fails to account for natural spectral variability within a cover class. Multiple Endmember Spectral Mixture Analysis (MESMA) is a variant of SMA that allows the number and types of pure spectra to vary on a per-pixel basis, thereby accounting for endmember variability and generating more accurate cover estimates, but at a higher computational cost. Routine generation and delivery of GV, NPV, and substrate (S) fractions using MESMA is currently in development for large, diverse datasets acquired by the Airborne Visible Infrared Imaging Spectrometer (AVIRIS). We present initial results, including our methodology for ensuring consistency and generalizability of fractional cover estimates across a wide range of regions, seasons, and biomes. We also assess uncertainty and provide a strategy for validation. GV, NPV, and S fractions are an important precursor for deriving consistent measurements of ecosystem parameters such as plant stress and mortality, functional trait assessment, disturbance susceptibility and recovery, and biomass and carbon stock assessment. Copyright 2016 California Institute of Technology. All Rights Reserved. We acknowledge support of the US Government, NASA, the Earth Science Division and Terrestrial Ecology program.

  12. Multipoint linkage mapping using sibpairs: non-parametric estimation of trait effects with quantitative covariates.

    PubMed

    Chiou, Jeng-Min; Liang, Kung-Yee; Chiu, Yen-Feng

    2005-01-01

    Multipoint linkage analysis using sibpair designs remains a common approach to help investigators to narrow chromosomal regions for traits (either qualitative or quantitative) of interest. Despite its popularity, the success of this approach depends heavily on how issues such as genetic heterogeneity, gene-gene, and gene-environment interactions are properly handled. If addressed properly, the likelihood of detecting genetic linkage and of efficiently estimating the location of the trait locus would be enhanced, sometimes drastically. Previously, we have proposed an approach to deal with these issues by modeling the genetic effect of the target trait locus as a function of covariates pertaining to the sibpairs. Here the genetic effect is simply the probability that a sibpair shares the same allele at the trait locus from their parents. Such modeling helps to divide the sibpairs into more homogeneous subgroups, which in turn helps to enhance the chance to detect linkage. One limitation of this approach is the need to categorize the covariates so that a small and fixed number of genetic effect parameters are introduced. In this report, we take advantage of the fact that nowadays multiple markers are readily available for genotyping simultaneously. This suggests that one could estimate the dependence of the genetic effect on the covariates nonparametrically. We present an iterative procedure to estimate (1) the genetic effect nonparametrically and (2) the location of the trait locus through estimating functions developed by Liang et al. ([2001a] Hum Hered 51:67-76). We apply this new method to the linkage study of schizophrenia to illustrate how the onset ages of each sibpair may help to address the issue of genetic heterogeneity. This analysis sheds new light on the dependence of the trait effect on onset ages from affected sibpairs, an observation not revealed previously. In addition, we have carried out some simulation work, which suggests that this method provides

  13. On Estimation of Contamination from Hydrogen Cyanide in Carbon Monoxide Line-intensity Mapping

    NASA Astrophysics Data System (ADS)

    Chung, Dongwoo T.; Li, Tony Y.; Viero, Marco P.; Church, Sarah E.; Wechsler, Risa H.

    2017-09-01

    Line-intensity mapping surveys probe large-scale structure through spatial variations in molecular line emission from a population of unresolved cosmological sources. Future such surveys of carbon monoxide line emission, specifically the CO(1-0) line, face potential contamination from a disjointed population of sources emitting in a hydrogen cyanide emission line, HCN(1-0). This paper explores the potential range of the strength of HCN emission and its effect on the CO auto power spectrum, using simulations with an empirical model of the CO/HCN–halo connection. We find that effects on the observed CO power spectrum depend on modeling assumptions but are very small for our fiducial model, which is based on current understanding of the galaxy–halo connection. Given the fiducial model, we expect the bias in overall CO detection significance due to HCN to be less than 1%.
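
    The quantity at stake, the auto power spectrum of a gridded intensity map with a weak contaminant added, can be sketched as follows; the grid size, box size, and 5% contaminant amplitude are arbitrary stand-ins, not values from the paper.

        import numpy as np

        n, L = 64, 100.0                     # grid cells per side, box size (arbitrary units)
        rng = np.random.default_rng(1)
        co_map = rng.normal(size=(n, n, n))            # stand-in CO intensity field
        hcn_map = 0.05 * rng.normal(size=(n, n, n))    # weak contaminant field
        total = co_map + hcn_map

        # Auto power spectrum of the summed map, still to be binned in |k|.
        delta_k = np.fft.fftn(total) * (L / n) ** 3
        power = np.abs(delta_k) ** 2 / L ** 3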

  14. Nonprofit health care services marketing: persuasive messages based on multidimensional concept mapping and direct magnitude estimation.

    PubMed

    Hall, Michael L

    2009-01-01

    Persuasive messages for marketing healthcare services in general and coordinated care in particular are more important now for providers, hospitals, and third-party payers than ever before. The combination of measurement-based information and creativity may be among the most critical factors in reaching markets or expanding markets. The research presented here provides an approach to marketing coordinated care services which allows healthcare managers to plan persuasive messages given the market conditions they face. Using market respondents' thinking about product attributes combined with distance measurement between pairs of product attributes, a conceptual marketing map is presented and applied to advertising, message copy, and delivery. The data reported here are representative of the potential caregivers for which the messages are intended. Results are described with implications for application to coordinated care services. Theory building and marketing practice are discussed in the light of findings and methodology.

  15. Site-specific Probabilistic Seismic Hazard Map of Himachal Pradesh, India. Part II. Hazard Estimation

    NASA Astrophysics Data System (ADS)

    Muthuganeisan, Prabhu; Raghukanth, S. T. G.

    2016-08-01

    This article presents the site-specific probabilistic seismic hazard of the Himachal Pradesh province, situated in a seismically active region of the northwest Himalaya, using the ground motion relations presented in a companion article. Seismic recurrence parameters for all the documented probable sources are established from an updated earthquake catalogue. Contour maps of probable spectral acceleration at 0, 0.2, and 1 s (5% damping) are presented for 475- and 2475-year return periods. Hazard curves and uniform hazard response spectra are also presented for all the important cities in this province. Results indicate that the present codal provision underestimates the seismic hazard at the cities of Bilaspur, Shimla, Hamirpur, Chamba, Mandi, and Solan. In addition, regions near Bilaspur and Chamba exhibit higher hazard levels than reported in the literature.

  16. On Estimation of Contamination from Hydrogen Cyanide in Carbon Monoxide Line-intensity Mapping

    DOE PAGES

    Chung, Dongwoo T.; Li, Tony Y.; Viero, Marco P.; ...

    2017-08-31

    Here, line-intensity mapping surveys probe large-scale structure through spatial variations in molecular line emission from a population of unresolved cosmological sources. Future such surveys of carbon monoxide line emission, specifically the CO(1-0) line, face potential contamination from a disjointed population of sources emitting in a hydrogen cyanide emission line, HCN(1-0). This paper explores the potential range of the strength of HCN emission and its effect on the CO auto power spectrum, using simulations with an empirical model of the CO/HCN–halo connection. We find that effects on the observed CO power spectrum depend on modeling assumptions but are very small for our fiducial model, which is based on current understanding of the galaxy–halo connection. Given the fiducial model, we expect the bias in overall CO detection significance due to HCN to be less than 1%.

  17. Mapping anuran habitat suitability to estimate effects of grassland and wetland conservation programs

    USGS Publications Warehouse

    Mushet, David M.; Euliss, Ned H.; Stockwell, Craig A.

    2012-01-01

    The conversion of the Northern Great Plains of North America to a landscape favoring agricultural commodity production has negatively impacted wildlife habitats. To offset impacts, conservation programs have been implemented by the U.S. Department of Agriculture and other agencies to restore grassland and wetland habitat components. To evaluate effects of these efforts on anuran habitats, we used call survey data and environmental data in ecological niche factor analyses implemented through the program Biomapper to quantify habitat suitability for five anuran species within a 196 km2 study area. Our amphibian call surveys identified Northern Leopard Frogs (Lithobates pipiens), Wood Frogs (Lithobates sylvaticus), Boreal Chorus Frogs (Pseudacris maculata), Great Plains Toads (Anaxyrus cognatus), and Woodhouse’s Toads (Anaxyrus woodhousii) occurring within the study area. Habitat suitability maps developed for each species revealed differing patterns of suitable habitat among species. The most significant findings of our mapping effort were 1) the influence of deep-water overwintering wetlands on suitable habitat for all species encountered except the Boreal Chorus Frog; 2) the lack of overlap between areas of core habitat for both the Northern Leopard Frog and Wood Frog compared to the core habitat for both toad species; and 3) the importance of conservation programs in providing grassland components of Northern Leopard Frog and Wood Frog habitat. The differences in habitats suitable for the five species we studied in the Northern Great Plains, i.e., their ecological niches, highlight the importance of utilizing an ecosystem based approach that considers the varying needs of multiple species in the development of amphibian conservation and management plans.

  18. Estimation and Mapping of Clouds and Rainfall Areas with an Interactive Computer.

    DTIC Science & Technology

    1982-12-01

    Naval Postgraduate School thesis: Estimation and Mapping of Cloud and Rainfall Areas with an Interactive Computer, by Cynthia A. Nelson, Monterey, CA, December 1982 (unclassified; approved for public release, distribution unlimited). The legible portion of the record indicates that the satellite imagery was manually evaluated and compared to the computer-generated output, and that reasonably good patterns of cloud types were obtained.

  19. Strategies for statistical thresholding of source localization maps in magnetoencephalography and estimating source extent.

    PubMed

    Maksymenko, Kostiantyn; Giusiano, Bernard; Roehri, Nicolas; Bénar, Christian-G; Badier, Jean-Michel

    2017-10-01

    Magnetoencephalography allows defining non-invasively the spatio-temporal activation of brain networks thanks to source localization algorithms. A major difficulty of MNE and beamforming methods, two classically used techniques, is the definition of proper thresholds that allow deciding the extent of activated cortex. We investigated two strategies for computing a threshold, taking into account the difficult multiple-comparison issue. The strategies were based either on parametric statistics (Bonferroni, FDR correction) or on empirical estimates (local FDR and a custom measure based on the survival function). The simulations showed that parametric methods based on the sole estimation of H0 (Bonferroni, FDR) performed poorly, in particular in high-SNR situations. This is due to the spatial leakage originating from the source localization methods, which gives a 'blurred' reconstruction of the patch extension: the higher the SNR, the more visible this effect. Adaptive methods such as local FDR or our proposed 'concavity threshold' performed better than Bonferroni or classical FDR. We present an application to real data originating from auditory stimulation in MEG. In order to estimate source extent, adaptive strategies should be preferred to parametric statistics when dealing with 'leaking' source reconstruction algorithms. Copyright © 2017 Elsevier B.V. All rights reserved.
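
    For reference, the two parametric corrections that the study found to perform poorly are both short computations over the per-location p-values; a generic sketch, assuming the p-values have already been computed under H0:

        import numpy as np

        def bonferroni(pvals, alpha=0.05):
            # Reject where p < alpha / m.
            pvals = np.asarray(pvals)
            return pvals < alpha / pvals.size

        def benjamini_hochberg(pvals, alpha=0.05):
            # Classical FDR: largest k with p_(k) <= alpha * k / m.
            pvals = np.asarray(pvals)
            order = np.argsort(pvals)
            ranked = pvals[order]
            thresh = alpha * np.arange(1, pvals.size + 1) / pvals.size
            passed = ranked <= thresh
            k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
            mask = np.zeros(pvals.size, dtype=bool)
            mask[order[:k]] = True
            return mask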

  20. Transmission map estimation of weather-degraded images using a hybrid of recurrent fuzzy cerebellar model articulation controller and weighted strategy

    NASA Astrophysics Data System (ADS)

    Wang, Jyun-Guo; Tai, Shen-Chuan; Lin, Cheng-Jian

    2016-08-01

    This study proposes a hybrid of a recurrent fuzzy cerebellar model articulation controller (RFCMAC) and a weighted strategy for restoring single-image visibility in a degraded image. The proposed RFCMAC model is used to estimate the transmission map. The average value of the brightest 1% of pixels in a hazy image is used for atmospheric light estimation. A new adaptive weighted estimation is then used to refine the transmission map and remove the halo artifact from the sharp edges. Experimental results show that the proposed method has better dehazing capability compared to state-of-the-art techniques and is suitable for real-world applications.
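
    Under the standard haze image model I = J*t + A*(1 - t), the two steps the abstract names, estimating atmospheric light A from the brightest 1% of pixels and inverting the model given a transmission map t, reduce to a few lines. This sketch takes the transmission map as a given input rather than reproducing the RFCMAC estimator.

        import numpy as np

        def estimate_atmospheric_light(hazy):
            # hazy: float RGB array in [0, 1], shape (H, W, 3).
            flat = hazy.reshape(-1, hazy.shape[-1])
            brightness = flat.mean(axis=1)
            top = flat[np.argsort(brightness)[-max(1, len(flat) // 100):]]
            return top.mean(axis=0)              # mean of the brightest 1%

        def dehaze(hazy, transmission, t_min=0.1):
            # Invert I = J*t + A*(1 - t) for the scene radiance J.
            A = estimate_atmospheric_light(hazy)
            t = np.clip(transmission, t_min, 1.0)[..., None]
            return np.clip((hazy - A) / t + A, 0.0, 1.0)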

  1. Low Dose CT Filtering in the Image Domain Using MAP Algorithms

    NASA Astrophysics Data System (ADS)

    Geraldo, Rafael J.; Cura, Luis M. V.; Cruvinel, Paulo E.; Mascarenhas, Nelson D. A.

    2017-06-01

    The purpose of this paper is to present two new noise-reduction filtering techniques in the CT image space, in order to provide better quality for images acquired with low radiation exposure. For the noise reduction, a new denoising technique is presented based on a pointwise Maximum a Posteriori (MAP) criterion. The noise is considered Gaussian with zero mean, as observed experimentally, and the variance is estimated by considering a signal-independent noise. For the a priori density of the signal, we used different non-negative probability densities (reflecting the fact that the pixels of an image are non-negative). In another approach, the histogram of the images was segmented into unimodal parts and each segment was filtered using the filter based on the MAP criterion with the a priori density that best fits it. After filtering, the method is evaluated using the following criteria: Peak Signal-to-Noise Ratio, Universal Image Quality Index and Structural Similarity Index. The 2D filtering results are compared with the results obtained by a pointwise Wiener filter. Results on simulated and real CT images show that the proposed techniques increase image quality and improve the use of a low-dose CT protocol.
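
    To make the pointwise-MAP idea concrete: with zero-mean Gaussian noise of known variance and a non-negative exponential prior (one of many possible non-negative densities), the MAP estimate has a closed form. This is an illustrative special case, not the paper's full per-segment procedure.

        import numpy as np

        def map_exponential(noisy, sigma2, lam):
            # Posterior log-density per pixel, up to constants:
            #   -(y - x)^2 / (2 * sigma2) - lam * x,  for x >= 0
            # Its maximizer is x = y - lam * sigma2, clamped at zero.
            return np.maximum(0.0, noisy - lam * sigma2)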

  2. Computer code for estimating installed performance of aircraft gas turbine engines. Volume 3: Library of maps

    NASA Technical Reports Server (NTRS)

    Kowalski, E. J.

    1979-01-01

    A computerized method which utilizes engine performance data to estimate the installed performance of aircraft gas turbine engines is presented. The installation accounting includes engine weight and dimensions, inlet and nozzle internal performance and drag, inlet and nacelle weight, and nacelle drag. The use of two data base files to represent the engine and the inlet/nozzle/aftbody performance characteristics is discussed. The existing library of performance characteristics for inlets and nozzle/aftbodies and an example of the 1000 series of engine data tables are presented.

  3. Multi-Target Joint Detection and Estimation Error Bound for the Sensor with Clutter and Missed Detection

    PubMed Central

    Lian, Feng; Zhang, Guang-Hua; Duan, Zhan-Sheng; Han, Chong-Zhao

    2016-01-01

    The error bound is a typical measure of the limiting performance of all filters for the given sensor measurement setting. This is of practical importance in guiding the design and management of sensors to improve target tracking performance. Within the random finite set (RFS) framework, an error bound for joint detection and estimation (JDE) of multiple targets using a single sensor with clutter and missed detection is developed by using multi-Bernoulli or Poisson approximation to multi-target Bayes recursion. Here, JDE refers to jointly estimating the number and states of targets from a sequence of sensor measurements. In order to obtain the results of this paper, all detectors and estimators are restricted to maximum a posteriori (MAP) detectors and unbiased estimators, and the second-order optimal sub-pattern assignment (OSPA) distance is used to measure the error metric between the true and estimated state sets. The simulation results show that clutter density and detection probability have significant impact on the error bound, and the effectiveness of the proposed bound is verified by indicating the performance limitations of the single-sensor probability hypothesis density (PHD) and cardinalized PHD (CPHD) filters for various clutter densities and detection probabilities. PMID:26828499
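
    A generic sketch of the second-order OSPA distance used above as the error metric between true and estimated state sets; the cutoff c = 10 is arbitrary, and targets are assumed to be points in a Euclidean space of fixed dimension.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def ospa(X, Y, c=10.0, p=2):
            # X, Y: arrays of shape (m, dim) and (n, dim).
            X, Y = np.asarray(X, float), np.asarray(Y, float)
            m, n = len(X), len(Y)
            if m == 0 and n == 0:
                return 0.0
            if m > n:                            # ensure m <= n
                X, Y, m, n = Y, X, n, m
            D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
            D = np.minimum(D, c)                 # cutoff distances at c
            rows, cols = linear_sum_assignment(D ** p)
            cost = (D[rows, cols] ** p).sum() + (n - m) * c ** p
            return (cost / n) ** (1.0 / p)       # localization + cardinality error

        # Example: two true targets vs. three estimates (one spurious).
        X = np.array([[0.0, 0.0], [5.0, 5.0]])
        Y = np.array([[0.2, -0.1], [4.8, 5.3], [9.0, 1.0]])
        print(ospa(X, Y))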

  4. Multi-Target Joint Detection and Estimation Error Bound for the Sensor with Clutter and Missed Detection.

    PubMed

    Lian, Feng; Zhang, Guang-Hua; Duan, Zhan-Sheng; Han, Chong-Zhao

    2016-01-28

    The error bound is a typical measure of the limiting performance of all filters for the given sensor measurement setting. This is of practical importance in guiding the design and management of sensors to improve target tracking performance. Within the random finite set (RFS) framework, an error bound for joint detection and estimation (JDE) of multiple targets using a single sensor with clutter and missed detection is developed by using multi-Bernoulli or Poisson approximation to multi-target Bayes recursion. Here, JDE refers to jointly estimating the number and states of targets from a sequence of sensor measurements. In order to obtain the results of this paper, all detectors and estimators are restricted to maximum a posteriori (MAP) detectors and unbiased estimators, and the second-order optimal sub-pattern assignment (OSPA) distance is used to measure the error metric between the true and estimated state sets. The simulation results show that clutter density and detection probability have significant impact on the error bound, and the effectiveness of the proposed bound is verified by indicating the performance limitations of the single-sensor probability hypothesis density (PHD) and cardinalized PHD (CPHD) filters for various clutter densities and detection probabilities.

  5. Glacier Facies Mapping and Movement Estimation Using Remote Sensing Techniques: A Case Study at Samudra Tapu Glacier

    NASA Astrophysics Data System (ADS)

    Sood, S.; Thakur, P. K.

    2016-12-01

    Glaciers are directly affected by the recent trends of global warming. Himalayan glaciers are located near the Tropic of Cancer, a belt that receives more heat, and are therefore more sensitive to climate change. Because of the highly rugged terrain and the inaccessibility of certain areas, satellite-derived information can be used to monitor glaciers. The Samudra Tapu glacier used in this study is located in the Great Himalayan range of the north-west Himalaya. Distinct glacier facies are visible in multi-temporal SAR datasets representing different seasons. Fully polarimetric SAR data were used to identify different glacier facies: percolation facies, ice walls, ice facies, refrozen snow, and supraglacial debris. Object-oriented classification was used to map the various glacier facies, and from the classified maps the altitudes of the snow line and firn line were determined. More than 50% of the total glacier area was found to be accumulation region. The Interferometric Synthetic Aperture Radar (InSAR) technique was used for glacier surface velocity estimation using European Remote Sensing Satellite (ERS-1/2) tandem data. High coherence of the SAR return signal was obtained for the one-day temporal difference. A mean velocity of 24 cm/day was estimated for the month of May, with the highest flow rates seen in the high-accumulation area of the northern branch. Spatial analysis of the velocity patterns with respect to slope and aspect shows that high flow rates occur on southern slopes and that movement rates generally increase with slope. A feature-tracking approach was used to estimate long-term and seasonal glacier flow using SAR and optical datasets. The results clearly suggest that glacier flow varies with season and that the rate of ice flow has changed over the years. Mapping the extent of accumulation and ablation areas and also the rate at which the ice flows in these regions as these are important factors directly related to

  6. Modified total variation norm for the maximum a posteriori ordered subsets expectation maximization reconstruction in fan-beam SPECT brain perfusion imaging

    NASA Astrophysics Data System (ADS)

    Krol, Andrzej; Yang, Zhaoxia; Xu, Yuesheng; Wismüller, Axel; Feiglin, David H.

    2011-03-01

    The anisotropic geometry of Fan-Beam Collimator (FBC) Single Photon Emission Tomography (SPECT) is used in brain perfusion imaging, with the clinical goals of quantifying regional cerebral blood flow and accurately determining the location and extent of brain perfusion defects. One of the difficult issues that needs to be addressed is the partial volume effect. The purpose of this study was to minimize the partial volume effect while preserving the optimal tradeoff between noise and bias, and maintaining spatial resolution in reconstructed images acquired in the FBC geometry. We modified the conventional isotropic TV (L1) norm, which has only one hyperparameter, and replaced it with two independent TV (L1) norms (TVxy and TVz) along two orthogonal basis vectors (XY, Z) in the 3D reconstruction space. We investigated whether the anisotropic norm with two hyperparameters (βxy and βz, where z is parallel to the axis of rotation) performed better in FBC-SPECT reconstruction than the conventional isotropic norm with one hyperparameter (β) only. We found that MAP-OSEM reconstruction with the modified TV norm produced images with a smaller partial volume effect than the conventional TV norm, at the cost of a slight increase in bias and noise.
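
    The modified penalty can be sketched directly: split the single TV hyperparameter β into βxy and βz and weight in-plane and axial finite differences separately. The (z, y, x) axis ordering below is an assumption of this sketch, not taken from the paper.

        import numpy as np

        def anisotropic_tv(vol, beta_xy, beta_z):
            # vol: 3D reconstruction volume, axes ordered (z, y, x).
            dz = np.diff(vol, axis=0)            # axial differences
            dy = np.diff(vol, axis=1)            # in-plane differences
            dx = np.diff(vol, axis=2)
            tv_xy = np.abs(dx).sum() + np.abs(dy).sum()
            tv_z = np.abs(dz).sum()
            # Two hyperparameters instead of one isotropic beta.
            return beta_xy * tv_xy + beta_z * tv_z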

  7. Remote sensing techniques for mapping range sites and estimating range yield

    NASA Technical Reports Server (NTRS)

    Benson, L. A.; Frazee, C. J.; Waltz, F. A.; Reed, C.; Carey, R. L.; Gropper, J. L.

    1974-01-01

    Image interpretation procedures for determining range yield and for extrapolating range information were investigated for an area of the Pine Ridge Indian Reservation in southwestern South Dakota. Soil and vegetative data collected in the field utilizing a grid sampling design and digital film data from color infrared film and black and white films were analyzed statistically using correlation and regression techniques. The pattern recognition techniques used were K-class, mode seeking, and thresholding. The herbage yield equation derived for the detailed test site was used to predict yield for an adjacent similar field. The herbage yield estimate for the adjacent field was 1744 lbs. of dry matter per acre and was favorably compared to the mean yield of 1830 lbs. of dry matter per acre based upon ground observations. Also an inverse relationship was observed between vegetative cover and the ratio of MSS 5 to MSS 7 of ERTS-1 imagery.

  8. Computation of probabilistic hazard maps and source parameter estimation for volcanic ash transport and dispersion

    SciTech Connect

    Madankan, R.; Pouget, S.; Singla, P.; Bursik, M.; Dehn, J.; Jones, M.; Patra, A.; Pavolonis, M.; Pitman, E.B.; Singh, T.; Webley, P.

    2014-08-15

    Volcanic ash advisory centers are charged with forecasting the movement of volcanic ash plumes, for aviation, health and safety preparation. Deterministic mathematical equations model the advection and dispersion of these plumes. However, initial plume conditions – height, profile of particle location, volcanic vent parameters – are known only approximately at best, and other features of the governing system such as the windfield are stochastic. These uncertainties make forecasting plume motion difficult. As a result of these uncertainties, ash advisories based on a deterministic approach tend to be conservative and often over- or underestimate the extent of a plume. This paper presents an end-to-end framework for a probabilistic approach to ash plume forecasting. The framework uses an ensemble of solutions, guided by the Conjugate Unscented Transform (CUT) method for evaluating expectation integrals. This ensemble is used to construct a polynomial chaos expansion that can be sampled cheaply to provide a probabilistic model forecast. The CUT method is then combined with a minimum variance condition to provide a full posterior pdf of the uncertain source parameters, based on observed satellite imagery. The April 2010 eruption of the Eyjafjallajökull volcano in Iceland is employed as a test example. The puff advection/dispersion model is used to hindcast the motion of the ash plume through time, concentrating on the period 14–16 April 2010. Variability in the height and particle loading of that eruption is introduced through a volcano column model called bent. Output uncertainty due to the assumed uncertain input parameter probability distributions, and a probabilistic spatial-temporal estimate of ash presence, are computed.
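
    The probabilistic output can be illustrated independently of the CUT/polynomial-chaos machinery: given an ensemble of plume simulations, the probability of ash presence per grid cell is simply the fraction of members exceeding a concentration threshold. Everything below is a synthetic stand-in for the real ensemble.

        import numpy as np

        rng = np.random.default_rng(7)
        # 200 ensemble members over a 50 x 50 grid (synthetic concentrations).
        ensemble = rng.gamma(2.0, 1.0, size=(200, 50, 50))
        threshold = 2.0                        # ash-presence cutoff (arbitrary)

        # Probabilistic hazard map: per-cell fraction of members above threshold.
        prob_ash = (ensemble > threshold).mean(axis=0)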

  9. Estimated Flood Discharges and Map of Flood-Inundated Areas for Omaha Creek, near Homer, Nebraska, 2005

    USGS Publications Warehouse

    Dietsch, Benjamin J.; Wilson, Richard C.; Strauch, Kellan R.

    2008-01-01

    Repeated flooding of Omaha Creek has caused damage in the Village of Homer. Long-term degradation and bridge scouring have changed substantially the channel characteristics of Omaha Creek. Flood-plain managers, planners, homeowners, and others rely on maps to identify areas at risk of being inundated. To identify areas at risk for inundation by a flood having a 1-percent annual probability, maps were created using topographic data and water-surface elevations resulting from hydrologic and hydraulic analyses. The hydrologic analysis for the Omaha Creek study area was performed using historical peak flows obtained from the U.S. Geological Survey streamflow gage (station number 06601000). Flood frequency and magnitude were estimated using the PEAKFQ Log-Pearson Type III analysis software. The U.S. Army Corps of Engineers' Hydrologic Engineering Center River Analysis System, version 3.1.3, software was used to simulate the water-surface elevation for flood events. The calibrated model was used to compute streamflow-gage stages and inundation elevations for the discharges corresponding to floods of selected probabilities. Results of the hydrologic and hydraulic analyses indicated that flood inundation elevations are substantially lower than from a previous study.
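
    The core of a PEAKFQ-style Log-Pearson Type III analysis can be sketched in a few lines: fit a Pearson III distribution to the log10 annual peaks and read off the 1-percent-annual-probability quantile. The peak values below are synthetic, and Bulletin-17-style refinements (regional skew weighting, low-outlier screening) are omitted.

        import numpy as np
        from scipy.stats import pearson3, skew

        # Synthetic annual peak discharges (cubic feet per second).
        peaks = np.array([820, 1500, 640, 2100, 980, 1230, 760, 1840, 1100, 930.0])
        logq = np.log10(peaks)

        g = skew(logq, bias=False)                     # station skew of the logs
        dist = pearson3(g, loc=logq.mean(), scale=logq.std(ddof=1))
        q100 = 10 ** dist.ppf(0.99)                    # 1% annual exceedance flood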

  10. Temperature mapping in bread dough using SE and GE two-point MRI methods: experimental and theoretical estimation of uncertainty.

    PubMed

    Lucas, Tiphaine; Musse, Maja; Bornert, Mélanie; Davenel, Armel; Quellec, Stéphane

    2012-04-01

    Two-dimensional (2D)-SE, 2D-GE and tri-dimensional (3D)-GE two-point T(1)-weighted MRI methods were evaluated in this study in order to maximize the accuracy of temperature mapping of bread dough during thermal processing. Uncertainties were propagated throughout each protocol of measurement, and comparisons demonstrated that all the methods with comparable acquisition times minimized the temperature uncertainty to similar extent. The experimental uncertainties obtained with low-field MRI were also compared to the theoretical estimations. Some discrepancies were reported between experimental and theoretical values of uncertainties of temperature; however, experimental and theoretical trends with varying parameters agreed to a large extent for both SE and GE methods. The 2D-SE method was chosen for further applications on prefermented dough because of its lower sensitivity to susceptibility differences in porous media. It was applied for temperature mapping in prefermented dough during chilling prior to freezing and compared locally to optical fiber measurements. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. [Application of weighted minimum-norm estimation with Tikhonov regularization for neuromagnetic source imaging].

    PubMed

    Hu, Jing; Hu, Jie; Wang, Yuanmei

    2003-03-01

    In magnetoencephalography (MEG) inverse research, according to the point-source model and the distributed-source model, neuromagnetic source reconstruction methods are classified as parametric current dipole localization and nonparametric source imaging (or current density reconstruction). The MEG source imaging technique can be formulated as an inherently ill-posed and highly underdetermined linear inverse problem. In order to yield a robust and plausible neural current distribution image, various approaches have been proposed; among these, weighted minimum-norm estimation with Tikhonov regularization is a popular technique. The authors present a relatively comprehensive theoretical framework. Following a discussion of its development, several regularized minimum-norm algorithms are described in detail, including depth normalization, low-resolution electromagnetic tomography (LORETA), the focal underdetermined system solver (FOCUSS), and selective minimum-norm (SMN). In addition, some other imaging methods are explained as well, e.g., the maximum entropy method (MEM), methods incorporating other brain functional information such as fMRI data, and the maximum a posteriori (MAP) method using a Markov random field model. From the generalized point of view based on minimum-norm estimation with Tikhonov regularization, all these algorithms aim to resolve the tradeoff between fidelity to the measured data and constraining assumptions about the neural source configuration, such as anatomical and physiological information. In conclusion, almost all source imaging approaches can be made consistent with regularized minimum-norm estimation to some extent.

  12. Model-based decoding, information estimation, and change-point detection techniques for multineuron spike trains.

    PubMed

    Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam

    2011-01-01

    One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
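
    Item (1) can be illustrated with the simplest case that admits a closed form: a linear-Gaussian encoding model with a Gaussian stimulus prior, where the MAP estimate is a single regularized solve. The paper's point-process models instead require iterative ascent of a concave log-likelihood; this sketch only conveys the MAP logic, and all names are hypothetical.

        import numpy as np

        def map_decode(K, r, sigma2, prior_cov):
            # Model: r = K @ s + noise, noise ~ N(0, sigma2 * I), s ~ N(0, prior_cov).
            # The posterior is Gaussian, so its mode (the MAP estimate) solves
            # (K'K / sigma2 + C^-1) s = K'r / sigma2.
            A = K.T @ K / sigma2 + np.linalg.inv(prior_cov)
            b = K.T @ r / sigma2
            return np.linalg.solve(A, b)         # posterior mode = MAP estimate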

  13. Evaluating the condition of a mangrove forest of the Mexican Pacific based on an estimated leaf area index mapping approach.

    PubMed

    Kovacs, J M; King, J M L; Flores de Santiago, F; Flores-Verdugo, F

    2009-10-01

    Given the alarming global rates of mangrove forest loss it is important that resource managers have access to updated information regarding both the extent and condition of their mangrove forests. Mexican mangroves in particular have been identified as experiencing an exceptionally high annual rate of loss. However, conflicting remote-sensing studies of the current state of many of these forests may be hindering efforts to conserve and manage what remains. Focusing on one such system, the Teacapán-Agua Brava-Las Haciendas estuarine-mangrove complex of the Mexican Pacific, an attempt was made to develop a rapid method of mapping the current condition of the mangroves based on estimated LAI. Specifically, using an AccuPAR LP-80 Ceptometer, 300 indirect in situ LAI measurements were taken at various sites within the black mangrove (Avicennia germinans) dominated forests of the northern section of this system. From this sample, 225 measurements were then used to develop linear regression models based on their relationship with corresponding values derived from QuickBird very high resolution optical satellite data. Regression analyses of the in situ LAI with both the normalized difference vegetation index (NDVI) and the simple ratio (SR) vegetation index revealed significant positive relationships (LAI versus NDVI: R2 = 0.63; LAI versus SR: R2 = 0.68). Moreover, using the remaining sample, further examination of standard errors and an F test of the residual variances indicated little difference between the two models. Based on the NDVI model, a map of estimated mangrove LAI was then created. Excluding the dead mangrove areas (i.e., LAI = 0), which represented 40% of the total 30.4 km(2) of mangrove area identified in the scene, a mean estimated LAI value of 2.71 was recorded. By grouping the healthy fringe mangrove with the healthy riverine mangrove and by grouping the dwarf mangrove together with the poor condition
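
    The mapping step reduces to a linear fit between in situ LAI and NDVI applied back over the NDVI raster; a sketch with synthetic numbers (the study's own fit, R2 = 0.63, was on its 225 calibration samples):

        import numpy as np

        # Synthetic calibration pairs standing in for field LAI vs. image NDVI.
        ndvi_samples = np.array([0.35, 0.52, 0.61, 0.44, 0.70, 0.28])
        lai_samples = np.array([1.2, 2.3, 2.9, 1.8, 3.4, 0.9])
        b, a = np.polyfit(ndvi_samples, lai_samples, 1)    # slope, intercept

        # Apply the model pixel-by-pixel to a (synthetic) NDVI raster.
        ndvi_raster = np.clip(
            np.random.default_rng(3).normal(0.5, 0.15, (100, 100)), 0.0, 1.0)
        lai_map = np.clip(a + b * ndvi_raster, 0.0, None)  # estimated LAI map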

  14. Estimating population abundance and mapping distribution of wintering sea ducks in coastal waters of the mid-Atlantic

    USGS Publications Warehouse

    Koneff, M.D.; Royle, J. Andrew; Forsell, D.J.; Wortham, J.S.; Boomer, G.S.; Perry, M.C.

    2005-01-01

    Survey design for wintering scoters (Melanitta sp.) and other sea ducks that occur in offshore waters is challenging because these species have large ranges, are subject to distributional shifts among years and within a season, and can occur in aggregations. Interest in winter sea duck population abundance surveys has grown in recent years. This interest stems from concern over the population status of some sea ducks; limitations of extant breeding waterfowl survey programs in North America and the logistical challenges and costs of conducting surveys in northern breeding regions; high winter area philopatry in some species and its potential conservation implications; and increasing concern over offshore development and other threats to sea duck wintering habitats. The efficiency and practicality of statistically rigorous monitoring strategies for mobile, aggregated wintering sea duck populations have not been sufficiently investigated. This study evaluated a 2-phase adaptive stratified strip-transect sampling plan to estimate the wintering population size of scoters, long-tailed ducks (Clangula hyemalis), and other sea ducks, and to provide information on distribution. The sampling plan results in an optimal allocation of a fixed sampling effort among offshore strata in the U.S. mid-Atlantic coast region. Phase 1 transect selection probabilities were based on historic distribution and abundance data, while Phase 2 selection probabilities were based on observations made during Phase 1 flights. Distance sampling methods were used to estimate detection rates. Environmental variables thought to affect detection rates were recorded during the survey, and post-stratification and covariate modeling were investigated to reduce the effect of heterogeneity on detection estimation. We assessed cost-precision tradeoffs under a number of fixed-cost sampling scenarios using Monte Carlo simulation. We discuss advantages and limitations of this sampling design for estimating wintering sea duck

  15. Estimation of elasticity map of soft biological tissue mimicking phantom using laser speckle contrast analysis

    NASA Astrophysics Data System (ADS)

    Suheshkumar Singh, M.; Rajan, K.; Vasu, R. M.

    2011-05-01

    Scattering of coherent light from scattering particles causes a phase shift in the scattered light. The interference of unscattered and scattered light causes the formation of speckles. When the scattering particles vibrate under the influence of an ultrasound (US) pressure wave, the phase shift fluctuates, thereby causing fluctuation in speckle intensity. We use laser speckle contrast analysis (LSCA) to reconstruct a map of the elastic property (Young's modulus) of a soft-tissue-mimicking phantom. The displacement of the scatterers is inversely related to the Young's modulus of the medium, and the elastic properties of soft biological tissues vary many-fold with malignancy. The experimental results show that laser speckle contrast (LSC) is very sensitive to pathological changes in a soft tissue medium. The experiments are carried out on a phantom with two cylindrical inclusions, each 6 mm in diameter and separated by 8 mm. Three samples are made. One inclusion has a Young's modulus E of 40 kPa; the second inclusion has either a Young's modulus E of 20 kPa, a scattering coefficient of μs' = 3.00 mm-1, or an absorption coefficient of μa = 0.03 mm-1. The optical absorption coefficient (μa), reduced scattering coefficient (μs'), and Young's modulus of the background are μa = 0.01 mm-1, μs' = 1.00 mm-1, and 12 kPa, respectively. The experiments are carried out on all three phantoms. On the phantom with two inclusions of Young's modulus 20 and 40 kPa, the measured relative speckle image contrasts are 36.55% and 63.72%, respectively. Experiments are repeated on phantoms with inclusions of μa = 0.03 mm-1, E = 40 kPa, and μs' = 3.00 mm-1. The results show that it is possible to detect inclusions with contrasts in optical absorption, optical scattering, and Young's modulus. Studies of the variation of laser speckle contrast with ultrasound driving force for various values of μa, μs', and Young's modulus of the tissue-mimicking medium are also carried out.

  16. Error Estimation And Accurate Mapping Based ALE Formulation For 3D Simulation Of Friction Stir Welding

    NASA Astrophysics Data System (ADS)

    Guerdoux, Simon; Fourment, Lionel

    2007-05-01

    An Arbitrary Lagrangian Eulerian (ALE) formulation is developed to simulate the different stages of the Friction Stir Welding (FSW) process with the FORGE3® F.E. software. A splitting method is utilized: a) the material velocity/pressure and temperature fields are calculated; b) the mesh velocity is derived from the domain boundary evolution and an adaptive refinement criterion provided by error estimation; c) P1 and P0 variables are remapped. Different velocity computation and remap techniques have been investigated, providing significant improvement with respect to more standard approaches. The proposed ALE formulation is applied to FSW simulation. Both steady-state welding and transient phases are simulated, showing good robustness and accuracy of the developed formulation. Friction parameters are identified for an Eulerian steady-state simulation by comparison with experimental results. Void formation can be simulated. Simulations of the transient plunge and welding phases help to better understand the deposition process that occurs at the trailing edge of the probe. The flexibility and robustness of the model finally allow investigation of the influence of new tooling designs on the deposition process.

  17. Total protein measurement in canine cerebrospinal fluid: agreement between a turbidimetric assay and 2 dye-binding methods and determination of reference intervals using an indirect a posteriori method.

    PubMed

    Riond, B; Steffen, F; Schmied, O; Hofmann-Lehmann, R; Lutz, H

    2014-03-01

    In veterinary clinical laboratories, qualitative tests for total protein measurement in canine cerebrospinal fluid (CSF) have been replaced by quantitative methods, which can be divided into dye-binding assays and turbidimetric methods. There is a lack of validation data and reference intervals (RIs) for these assays. The aim of the present study was to assess agreement between the turbidimetric benzethonium chloride method and 2 dye-binding methods (the Pyrogallol Red-Molybdate [PRM] method and the Coomassie Brilliant Blue [CBB] technique) for measurement of total protein concentration in canine CSF. Furthermore, RIs were determined for all 3 methods using an indirect a posteriori method. For assay comparison, a total of 118 canine CSF specimens were analyzed. For RI calculation, clinical records of 401 canine patients with normal CSF analysis were studied and classified according to their final diagnosis into pathologic and nonpathologic values. The turbidimetric assay showed excellent agreement with the PRM assay (mean bias 0.003 g/L [-0.26 to 0.27]). The CBB method generally showed higher total protein values than the turbidimetric and PRM assays (mean bias -0.14 g/L for both). From 90 of the 401 canine patients, nonparametric reference intervals (2.5%, 97.5% quantiles) were calculated (turbidimetric assay and PRM method: 0.08-0.35 g/L [90% CI: 0.07-0.08/0.33-0.39]; CBB method: 0.17-0.55 g/L [90% CI: 0.16-0.18/0.52-0.61]). Total protein concentration in canine CSF specimens remained stable for up to 6 months of storage at -80°C. Due to variations among methods, RIs for total protein concentration in canine CSF have to be calculated for each method. The a posteriori method of RI calculation described here should encourage other veterinary laboratories to establish RIs that are laboratory-specific. ©2014 American Society for Veterinary Clinical Pathology and European Society for Veterinary Clinical Pathology.
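
    The nonparametric reference intervals reported above are the 2.5% and 97.5% quantiles of the nonpathologic values. A minimal sketch of that calculation, with bootstrap confidence limits on the interval bounds (the bootstrap is our assumption about how such 90% CIs can be obtained, not necessarily the authors' procedure):

        import numpy as np

        def reference_interval(values, lo=0.025, hi=0.975, n_boot=2000, seed=1):
            """Nonparametric RI with bootstrap 90% CIs on each limit."""
            rng = np.random.default_rng(seed)
            values = np.asarray(values, dtype=float)
            ri = np.quantile(values, [lo, hi])
            boots = np.array([
                np.quantile(rng.choice(values, size=values.size, replace=True), [lo, hi])
                for _ in range(n_boot)
            ])
            ci = np.quantile(boots, [0.05, 0.95], axis=0)  # 90% CI per limit
            return ri, ci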

  18. Developing a 30-m grassland productivity estimation map for central Nebraska using 250-m MODIS and 30-m Landsat-8 observations

    USGS Publications Warehouse

    Gu, Yingxin; Wylie, Bruce K.

    2015-01-01

    Accurately estimating aboveground vegetation biomass productivity is essential for local ecosystem assessment and best land management practice. Satellite-derived growing season time-integrated Normalized Difference Vegetation Index (GSN) has been used as a proxy for vegetation biomass productivity. A 250-m grassland biomass productivity map for the Greater Platte River Basin had been developed based on the relationship between Moderate Resolution Imaging Spectroradiometer (MODIS) GSN and Soil Survey Geographic (SSURGO) annual grassland productivity. However, the 250-m MODIS grassland biomass productivity map does not capture detailed ecological features (or patterns) and may result in only generalized estimation of the regional total productivity. Developing a high or moderate spatial resolution (e.g., 30-m) productivity map to better understand the regional detailed vegetation condition and ecosystem services is preferred. The 30-m Landsat data provide spatial detail for characterizing human-scale processes and have been successfully used for land cover and land change studies. The main goal of this study is to develop a 30-m grassland biomass productivity estimation map for central Nebraska, leveraging 250-m MODIS GSN and 30-m Landsat data. A rule-based piecewise regression GSN model based on MODIS and Landsat (r = 0.91) was developed, and a 30-m MODIS equivalent GSN map was generated. Finally, a 30-m grassland biomass productivity estimation map, which provides spatially detailed ecological features and conditions for central Nebraska, was produced. The resulting 30-m grassland productivity map was generally supported by the SSURGO biomass production map and will be useful for regional ecosystem study and local land management practices.

  19. The October 2015 flash-floods in south eastern France: hydrological analyses, inundation mapping and impact estimations

    NASA Astrophysics Data System (ADS)

    Payrastre, Olivier; Bourgin, François; Lebouc, Laurent; Le Bihan, Guillaume; Gaume, Eric

    2017-04-01

    The October 2015 flash floods in south eastern France caused more than twenty fatalities, heavy damage, and large economic losses in high-density urban areas of the Mediterranean coast, including the cities of Mandelieu-La Napoule, Cannes and Antibes. Following a post-event survey and preliminary analyses conducted within the framework of the Hymex project, we set up a complete simulation chain at the regional scale to better understand this outstanding event. Rainfall-runoff simulations, inundation mapping and a first estimation of the impacts are conducted following the approach developed and successfully applied to two large flash-flood events in two different French regions (Gard in 2002 and Var in 2010) by Le Bihan (2016). A distributed rainfall-runoff model applied at high resolution over the whole area - including numerous small ungauged basins - is used to feed a semi-automatic hydraulic approach (the Cartino method) applied along the river network, including small tributaries. Estimation of the impacts is then performed based on the delineation of the flooded areas and on geographic databases identifying buildings and population at risk.

  20. A document image model and estimation algorithm for optimized JPEG decompression.

    PubMed

    Wong, Tak-Shing; Bouman, Charles A; Pollak, Ilya; Fan, Zhigang

    2009-11-01

    The JPEG standard is one of the most prevalent image compression schemes in use today. While JPEG was designed for use with natural images, it is also widely used for the encoding of raster documents. Unfortunately, JPEG's characteristic blocking and ringing artifacts can severely degrade the quality of text and graphics in complex documents. We propose a JPEG decompression algorithm which is designed to produce substantially higher quality images from the same standard JPEG encodings. The method works by incorporating a document image model into the decoding process which accounts for the wide variety of content in modern complex color documents. It first segments the JPEG-encoded document into regions corresponding to background, text, and picture content. The regions corresponding to text and background are then decoded using maximum a posteriori (MAP) estimation. Most importantly, the MAP reconstruction of the text regions uses a model which accounts for the spatial characteristics of text and graphics. Our experimental comparisons to baseline JPEG decoding, as well as to three other decoding schemes, demonstrate that our method substantially improves the quality of decoded images, both visually and as measured by PSNR.
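
    The MAP step described here maximizes a posterior that trades off fidelity to the decoded JPEG data against a prior image model. As a toy illustration of the same principle (a 1-D quadratic smoothness prior with a closed-form solution; the paper's text-region model is far more elaborate), a minimal sketch:

        import numpy as np

        def map_denoise_1d(y, lam=5.0):
            """MAP estimate under y = x + noise with a Gaussian smoothness prior.

            Minimizes ||x - y||^2 + lam * ||D x||^2 for the first-difference
            operator D; the optimum solves (I + lam * D^T D) x = y.
            """
            n = y.size
            D = np.diff(np.eye(n), axis=0)       # (n-1) x n first differences
            return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)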

  1. Individual Thresholding of Voxel-based Functional Connectivity Maps. Estimation of Random Errors by Means of Surrogate Time Series.

    PubMed

    Griffanti, L; Baglio, F; Laganà, M M; Preti, M G; Cecconi, P; Clerici, M; Nemni, R; Baselli, G

    2015-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Biosignal Interpretation: Advanced Methods for Neural Signals and Images". Voxel-based functional connectivity analysis is a common method for resting-state fMRI data. However, correlations between the seed and other brain voxels are corrupted by random estimation errors, yielding false connections within the functional connectivity map (FCmap). These errors must be taken into account for a correct interpretation of single-subject results. We estimated the statistical range of random errors and propose two methods for individually setting the correlation threshold of FCmaps. We assessed the amount of random error by means of surrogate time series and described its distribution within the brain. On the basis of these results, the FCmaps of the posterior cingulate cortex (PCC) from 15 healthy subjects were thresholded with two innovative methods: the first computes a single (global) threshold value applied to all brain voxels, while the second sets a different (local) threshold for each voxel of the FCmap. The distribution of random errors within the brain was observed to be homogeneous and, after thresholding with both methods, the default mode network areas were well identifiable. The two methods yielded similar results; however, the application of a global threshold to all brain voxels requires a reduced computational load. The inter-subject variability of the global threshold was observed to be very low and not correlated with age. Global threshold values are also almost independent of the number of surrogates used for their computation, so the analyses can be optimized using a reduced number of surrogate time series. We demonstrated the efficacy of FCmap thresholding based on random error estimation. This method can be used for a reliable single-subject analysis and could also be applied in clinical setting, to compute individual
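
    Surrogate time series for this kind of null distribution are commonly generated by phase randomization, which preserves the power spectrum of the seed while destroying any genuine temporal coupling; this is our assumption about the construction, offered only as an illustration of how a global correlation threshold could be derived:

        import numpy as np

        def phase_randomize(x, rng):
            """Surrogate with the same amplitude spectrum as x but random phases."""
            X = np.fft.rfft(x)
            phases = rng.uniform(0.0, 2.0 * np.pi, X.size)
            phases[0] = 0.0                      # keep the mean component real
            if x.size % 2 == 0:
                phases[-1] = 0.0                 # keep the Nyquist component real
            return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)

        def global_threshold(seed_ts, voxel_ts, n_surr=1000, alpha=0.05, seed=0):
            """Correlation magnitude exceeded by chance with probability alpha."""
            rng = np.random.default_rng(seed)
            null = np.empty(n_surr)
            for i in range(n_surr):
                surr = phase_randomize(seed_ts, rng)
                null[i] = abs(np.corrcoef(surr, voxel_ts)[0, 1])
            return np.quantile(null, 1.0 - alpha)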

  2. Combination of computer simulations and experimental measurements as the training dataset for statistical estimation of epicardial activation maps from venous catheter recordings.

    PubMed

    Cunedioğlu, Uğur; Yilmaz, Bülent

    2009-03-01

    One of the epicardial mapping techniques requires the insertion of multiple multi-electrode catheters into the coronary vessels. The recordings from the intracoronary catheters reflect the electrical activity on the nearby epicardial sites; however, most of the epicardial surface is still inaccessible. In order to overcome this limited-access problem, a method called linear least squares estimation was proposed for the reconstruction of high-resolution maps from sparse measurements. In this technique, the relationship between catheter measurements and the remaining sites on the epicardium is created from previously obtained high-resolution maps (the training dataset). Even though open-chest surgery is still a relatively frequent occurrence, the additional burden of obtaining epicardial maps might impose an important risk on the patient. In this study, we hypothesize that epicardial maps created from computer simulations might be used in combination with the experimental data. In order to test this hypothesis, we used high-resolution epicardial activation maps acquired from 13 experiments performed on canine hearts that were stimulated via unipolar pacing from sites distributed all over the epicardium. We investigated the feasibility of the Aliev-Panfilov model, which generated focal epicardial arrhythmias on the Auckland heart geometry. We started the simulations from the sites that corresponded to the pacing sites on the experimental geometry after a registration procedure between the experimental and simulation geometries. We then compared the simulation results with the corresponding experimental activation maps. Finally, we included simulated activation maps alone (100%) and in combination with experimental maps (simulated maps constituting 90%, 75%, 50%, 25%, 10%, and 0% of the training dataset) in the training set, performed the statistical estimation, and obtained the error statistics. The mean correlation coefficient (CC) between the simulated epicardial activation
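
    The linear least squares estimator referred to here learns, from training maps, a linear operator taking the catheter-accessible sites to the full epicardial map. A hedged sketch of that idea (ridge-regularized for numerical stability; function names and the regularization constant are our assumptions):

        import numpy as np

        def fit_linear_estimator(Y_train, idx_meas, reg=1e-3):
            """Learn T such that full_map ~= T @ measurements.

            Y_train  : (n_sites, n_maps) training activation maps
            idx_meas : indices of sites sampled by the intracoronary catheters
            """
            M = Y_train[idx_meas]                      # (n_meas, n_maps)
            G = M @ M.T + reg * np.eye(M.shape[0])     # regularized Gram matrix
            return Y_train @ M.T @ np.linalg.inv(G)

        def estimate_full_map(T, measurements):
            return T @ measurements

    Mixing simulated and experimental maps, as tested in the study, simply changes which columns enter Y_train.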

  3. NASA/BLM Applications Pilot Test (APT), phase 2. Volume 1: Executive summary. [vegetation mapping and production estimation in northwestern Arizona

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Data from LANDSAT, low-altitude color aerial photography, and ground visits were combined and used to produce vegetation cover maps and to estimate productivity of range, woodland, and forest resources in northwestern Arizona. A planning session, two workshops, and four status reviews were held to assist technology transfer from NASA. Computer-aided digital classification of LANDSAT data was selected as a major source of input data. An overview is presented of the data processing, data collection, productivity estimation, and map verification techniques used. Cost analysis and digital LANDSAT products are also considered.

  4. S-SAD phasing of monoclinic histidine kinase from Brucella abortus combining data from multiple crystals and orientations: an example of data-collection strategy and a posteriori analysis of different data combinations.

    PubMed

    Klinke, Sebastián; Foos, Nicolas; Rinaldi, Jimena J; Paris, Gastón; Goldbaum, Fernando A; Legrand, Pierre; Guimarães, Beatriz G; Thompson, Andrew

    2015-07-01

    The histidine kinase (HK) domain belonging to the light-oxygen-voltage histidine kinase (LOV-HK) from Brucella abortus is a member of the HWE family, for which no structural information is available, and has low sequence identity (20%) to the closest HK present in the PDB. The 'off-edge' S-SAD method in macromolecular X-ray crystallography was used to solve the structure of the HK domain from LOV-HK at low resolution from crystals in a low-symmetry space group (P21) and with four copies in the asymmetric unit (∼108 kDa). Data were collected both from multiple crystals (diffraction limit varying from 2.90 to 3.25 Å) and from multiple orientations of the same crystal, using the κ-geometry goniostat on SOLEIL beamline PROXIMA 1, to obtain 'true redundancy'. Data from three different crystals were combined for structure determination. An optimized HK construct bearing a shorter cloning artifact yielded crystals that diffracted X-rays to 2.51 Å resolution and that were used for final refinement of the model. Moreover, a thorough a posteriori analysis using several different combinations of data sets allowed us to investigate the impact of the data-collection strategy on the success of the structure determination.

  5. A comparison of two estimates of standard error for a ratio-of-means estimator for a mapped-plot sample design in southeast Alaska.

    Treesearch

    Willem W.S. van Hees

    2002-01-01

    Comparisons of estimated standard error for a ratio-of-means (ROM) estimator are presented for forest resource inventories conducted in southeast Alaska between 1995 and 2000. Estimated standard errors for the ROM were generated by using a traditional variance estimator and also approximated by bootstrap methods. Estimates of standard error generated by both...

  6. Ecosystem services - from assessments of estimations to quantitative, validated, high-resolution, continental-scale mapping via airborne LIDAR

    NASA Astrophysics Data System (ADS)

    Zlinszky, András; Pfeifer, Norbert

    2016-04-01

    "Ecosystem services" defined vaguely as "nature's benefits to people" are a trending concept in ecology and conservation. Quantifying and mapping these services is a longtime demand of both ecosystems science and environmental policy. The current state of the art is to use existing maps of land cover, and assign certain average ecosystem service values to their unit areas. This approach has some major weaknesses: the concept of "ecosystem services", the input land cover maps and the value indicators. Such assessments often aim at valueing services in terms of human currency as a basis for decision-making, although this approach remains contested. Land cover maps used for ecosystem service assessments (typically the CORINE land cover product) are generated from continental-scale satellite imagery, with resolution in the range of hundreds of meters. In some rare cases, airborne sensors are used, with higher resolution but less covered area. Typically, general land cover classes are used instead of categories defined specifically for the purpose of ecosystem service assessment. The value indicators are developed for and tested on small study sites, but widely applied and adapted to other sites far away (a process called benefit transfer) where local information may not be available. Upscaling is always problematic since such measurements investigate areas much smaller than the output map unit. Nevertheless, remote sensing is still expected to play a major role in conceptualization and assessment of ecosystem services. We propose that an improvement of several orders of magnitude in resolution and accuracy is possible through the application of airborne LIDAR, a measurement technique now routinely used for collection of countrywide three-dimensional datasets with typically sub-meter resolution. However, this requires a clear definition of the concept of ecosystem services and the variables in focus: remote sensing can measure variables closely related to "ecosystem

  7. Development of an integrated genetic map of a sugarcane (Saccharum spp.) commercial cross, based on a maximum-likelihood approach for estimation of linkage and linkage phases.

    PubMed

    Garcia, A A F; Kido, E A; Meza, A N; Souza, H M B; Pinto, L R; Pastina, M M; Leite, C S; Silva, J A G da; Ulian, E C; Figueira, A; Souza, A P

    2006-01-01

    Sugarcane (Saccharum spp.) is a clonally propagated outcrossing polyploid crop of great importance in tropical agriculture. Up to now, all sugarcane genetic maps had been developed using full-sib progenies derived either from interspecific crosses or from selfing, approaches not directly adopted in conventional breeding. We have developed a single integrated genetic map using a population derived from a cross between two pre-commercial cultivars ('SP80-180' x 'SP80-4966'), using a novel approach based on the simultaneous maximum-likelihood estimation of linkage and linkage phases, specially designed for outcrossing species. From a total of 1,118 single-dose markers (RFLP, SSR and AFLP) identified, 39% derived from a testcross configuration between the parents, segregating in a 1:1 fashion, while 61% segregated 3:1, representing markers heterozygous in both parents with the same genotypes. The markers segregating 3:1 were used to establish linkage between the testcross markers. The final map comprised 357 linked markers, including 57 RFLPs, 64 SSRs and 236 AFLPs, assigned to 131 co-segregation groups, considering a LOD score of 5 and a maximum recombination fraction of 0.375, with map distances estimated by the Kosambi function. The co-segregation groups represented a total map length of 2,602.4 cM, with a marker density of 7.3 cM. When the same data were analyzed using the JoinMap software, only 217 linked markers were assigned to 98 co-segregation groups, spanning 1,340 cM, with a marker density of 6.2 cM. The maximum-likelihood approach reduced the number of unlinked markers to 761 (68.0%), compared to 901 (80.5%) using JoinMap. All the co-segregation groups obtained using JoinMap were present in the map constructed based on the maximum-likelihood method. Differences in the marker order within the co-segregation groups were observed between the two maps. Based on RFLP and SSR markers, 42 of the 131 co-segregation groups were assembled into 12 putative
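
    For reference, the Kosambi map function cited above converts a recombination fraction r (0 <= r < 0.5) into a genetic map distance; a small sketch:

        import math

        def kosambi_cM(r):
            """Kosambi map distance in centimorgans for recombination fraction r."""
            return 25.0 * math.log((1.0 + 2.0 * r) / (1.0 - 2.0 * r))

        # e.g. kosambi_cM(0.375) ~= 48.7 cM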

  8. Ice Sheet Roughness Estimation Based on Impulse Responses Acquired in the Global Ice Sheet Mapping Orbiter Mission

    NASA Astrophysics Data System (ADS)

    Niamsuwan, N.; Johnson, J. T.; Jezek, K. C.; Gogineni, P.

    2008-12-01

    The Global Ice Sheet Mapping Orbiter (GISMO) mission was developed to address scientific needs to understand the polar ice subsurface structure. This NASA Instrument Incubator Program project is a collaboration between Ohio State University, the University of Kansas, Vexcel Corporation, and NASA. The GISMO design utilizes an interferometric SAR (InSAR) strategy in which ice sheet reflected signals received by a dual-antenna system are used to produce an interference pattern. The resulting interferogram can be used to filter out surface clutter so as to reveal the signals scattered from the base of the ice sheet. These signals are further processed to produce 3D images representing the basal topography of the ice sheet. The GISMO airborne field campaigns conducted over the past three years have provided a set of useful data for studying geophysical properties of the Greenland ice sheet. While topography information can be obtained using interferometric SAR processing techniques, ice sheet roughness statistics can also be derived by a relatively simple procedure that involves analyzing the power levels and the shape of the radar impulse response waveforms. An electromagnetic scattering model describing GISMO impulse responses has previously been proposed and validated. This model suggested that the rms heights and correlation lengths of the upper surface profile can be determined from the peak power and the decay rate of the pulse return waveform, respectively. This presentation will demonstrate a procedure for estimating the roughness of ice surfaces by fitting the GISMO impulse response model to retrieved waveforms from selected GISMO flights. Furthermore, an extension of this procedure to estimate the scattering coefficient of the glacier bed will be addressed as well. Planned future applications involving the classification of glacier bed conditions based on the derived scattering coefficients will also be described.

  9. Drift-Free Indoor Navigation Using Simultaneous Localization and Mapping of the Ambient Heterogeneous Magnetic Field

    NASA Astrophysics Data System (ADS)

    Chow, J. C. K.

    2017-09-01

    In the absence of external reference position information (e.g. surveyed targets or Global Navigation Satellite Systems) Simultaneous Localization and Mapping (SLAM) has proven to be an effective method for indoor navigation. The positioning drift can be reduced with regular loop-closures and global relaxation as the backend, thus achieving a good balance between exploration and exploitation. Although vision-based systems like laser scanners are typically deployed for SLAM, these sensors are heavy, energy inefficient, and expensive, making them unattractive for wearables or smartphone applications. However, the concept of SLAM can be extended to non-optical systems such as magnetometers. Instead of matching features such as walls and furniture using some variation of the Iterative Closest Point algorithm, the local magnetic field can be matched to provide loop-closure and global trajectory updates in a Gaussian Process (GP) SLAM framework. With a MEMS-based inertial measurement unit providing a continuous trajectory, and the matching of locally distinct magnetic field maps, experimental results in this paper show that a drift-free navigation solution in an indoor environment with millimetre-level accuracy can be achieved. The GP-SLAM approach presented can be formulated as a maximum a posteriori estimation problem and it can naturally perform loop-detection, feature-to-feature distance minimization, global trajectory optimization, and magnetic field map estimation simultaneously. Spatially continuous features (i.e. smooth magnetic field signatures) are used instead of discrete feature correspondences (e.g. point-to-point) as in conventional vision-based SLAM. These position updates from the ambient magnetic field also provide enough information for calibrating the accelerometer bias and gyroscope bias in-use. The only restriction for this method is the need for magnetic disturbances (which is typically not an issue for indoor environments); however, no assumptions

  10. Bathymetric map, area/capacity table, and sediment volume estimate for Millwood Lake near Ashdown, Arkansas, 2013

    USGS Publications Warehouse

    Richards, Joseph M.; Green, W. Reed

    2013-01-01

    Millwood Lake, in southwestern Arkansas, was constructed and is operated by the U.S. Army Corps of Engineers (USACE) for flood-risk reduction, water supply, and recreation. The lake was completed in 1966 and it is likely that with time sedimentation has resulted in the reduction of storage capacity of the lake. The loss of storage capacity can cause less water to be available for water supply, and lessens the ability of the lake to mitigate flooding. Excessive sediment accumulation also can cause a reduction in aquatic habitat in some areas of the lake. Although many lakes operated by the USACE have periodic bathymetric and sediment surveys, none have been completed for Millwood Lake. In March 2013, the U.S. Geological Survey (USGS), in cooperation with the USACE, surveyed the bathymetry of Millwood Lake to prepare an updated bathymetric map and area/capacity table. The USGS also collected sediment thickness data in June 2013 to estimate the volume of sediment accumulated in the lake.

  11. Height estimation improvement via baseline calibration for a dual-pass, dual-antenna ground mapping IFSAR system.

    SciTech Connect

    Martinez, Ana; Jamshidi, Mohammad; Bickel, Douglas Lloyd; Doerry, Armin Walter

    2003-07-01

    Data collection for interferometric synthetic aperture radar (IFSAR) mapping systems currently utilizes two operation modes. A single-antenna, dual-pass IFSAR operation mode is the first, in which a platform carrying a single antenna traverses a flight path by the scene of interest twice, collecting data. A dual-antenna, single-pass IFSAR operation mode is the second, in which a platform possessing two antennas flies past the scene of interest collecting data. There are advantages and disadvantages associated with both of these data collection modes. The single-antenna, dual-pass IFSAR operation mode possesses an imprecise knowledge of the antenna baseline length but allows for large antenna baseline lengths. This imprecise antenna baseline length knowledge lends itself to inaccurate target height scaling. The dual-antenna, single-pass IFSAR operation mode allows for precise knowledge of the limited antenna baseline length, but this limited baseline length leads to increased target height noise. This paper presents a new, innovative dual-antenna, dual-pass IFSAR operation mode which overcomes the disadvantages of the two current IFSAR operation modes. Improved target height information is obtained with this new mode by accurately estimating the antenna baseline length between the dual flight passes using the data itself. Consequently, this new IFSAR operation mode possesses the target height scaling accuracies of the dual-antenna, single-pass operation mode and the height-noise performance of the single-antenna, dual-pass operation mode.

  12. Unsupervised self-organized mapping: a versatile empirical tool for object selection, classification and redshift estimation in large surveys

    NASA Astrophysics Data System (ADS)

    Geach, James E.

    2012-01-01

    We present an application of unsupervised machine learning - the self-organized map (SOM) - as a tool for visualizing, exploring and mining the catalogues of large astronomical surveys. Self-organization culminates in a low-resolution representation of the 'topology' of a parameter volume, and this can be exploited in various ways pertinent to astronomy. Using data from the Cosmological Evolution Survey (COSMOS), we demonstrate two key astronomical applications of the SOM: (i) object classification and selection, using galaxies with active galactic nuclei as an example, and (ii) photometric redshift estimation, illustrating how SOMs can be used as totally empirical predictive tools. With a training set of ~3800 galaxies with zspec ≤ 1, we achieve photometric redshift accuracies competitive with other (mainly template fitting) techniques that use a similar number of photometric bands [σ(Δz) = 0.03 with a ~2 per cent outlier rate when using u* band to 8 μm photometry]. We also test the SOM as a photo-z tool using the PHoto-z Accuracy Testing (PHAT) synthetic catalogue of Hildebrandt et al., which compares several different photo-z codes using a common input/training set. We find that the SOM can deliver accuracies that are competitive with many of the established template fitting and empirical methods. This technique is not without clear limitations, which are discussed, but we suggest it could be a powerful tool in the era of extremely large ('petabyte') databases, where efficient data mining is a paramount concern.
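
    For concreteness, a minimal self-organized map training loop (a generic SOM sketch, not the authors' code; grid size, decay rates, and iteration count are assumed hyperparameters):

        import numpy as np

        def train_som(data, grid=(10, 10), n_iter=5000, lr0=0.5, sigma0=3.0, seed=0):
            """Fit a 2-D SOM to rows of `data` (n_samples, n_features)."""
            rng = np.random.default_rng(seed)
            h, w = grid
            weights = rng.standard_normal((h, w, data.shape[1]))
            yy, xx = np.mgrid[0:h, 0:w]
            for t in range(n_iter):
                x = data[rng.integers(len(data))]
                # find the best-matching unit for this sample
                d = np.linalg.norm(weights - x, axis=2)
                by, bx = np.unravel_index(np.argmin(d), d.shape)
                # exponentially decaying learning rate and neighbourhood radius
                frac = t / n_iter
                lr = lr0 * np.exp(-3.0 * frac)
                sigma = sigma0 * np.exp(-3.0 * frac)
                nb = np.exp(-((yy - by) ** 2 + (xx - bx) ** 2) / (2.0 * sigma ** 2))
                weights += lr * nb[:, :, None] * (x - weights)
            return weights

    For photometric redshifts, each trained cell can then be labelled with, for example, the mean spectroscopic redshift of the training galaxies mapping to it, turning the SOM into an empirical predictor.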

  13. Estimating Integrated Water Vapor (IWV) regional map distribution using METEOSAT satellite data and GPS Zenith Wet Delay (ZWD)

    NASA Astrophysics Data System (ADS)

    Reuveni, Y.; Leontiev, A.

    2016-12-01

    Using GPS satellite signals, we can study atmospheric processes and coupling mechanisms, which can help us understand the physical conditions in the upper atmosphere that might lead to, or act as proxies for, severe weather events such as extreme storms and flooding. GPS signals received by geodetic stations on the ground are multi-purpose and can also provide estimates of tropospheric zenith delays, which can be converted into mm-accuracy Precipitable Water Vapor (PWV) using collocated pressure and temperature measurements on the ground. Here, we present the use of Israel's geodetic GPS receiver network for extracting tropospheric zenith path delays, combined with near real-time (RT) METEOSAT-10 water vapor (WV) and surface temperature pixel intensity values (the 7.3 and 12.1 channels, respectively), in order to obtain absolute IWV (kg/m2) or PWV (mm) map distributions. The results show good agreement between the absolute values obtained from our triangulation strategy, based solely on GPS Zenith Total Delays (ZTD) and METEOSAT-10 surface temperature data, and the available radiosonde IWV/PWV absolute values. The presented strategy can provide unprecedented temporal and spatial IWV/PWV distributions, which are needed as part of the accurate and comprehensive initial conditions provided by upper-air observation systems at temporal and spatial resolutions consistent with the models assimilating them.
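
    The ZWD-to-PWV conversion underlying this kind of work is a standard one: PWV = Pi(Tm) * ZWD, where the dimensionless factor Pi depends on the water-vapour-weighted mean temperature Tm of the atmosphere (often approximated from surface temperature as Tm ~ 70.2 + 0.72*Ts, following Bevis et al.). A sketch using commonly quoted constants (our assumption, not values taken from this record):

        def pwv_from_zwd(zwd_m, tm_kelvin):
            """Precipitable water vapour (mm) from zenith wet delay (m)."""
            rho_w = 1000.0        # density of liquid water, kg m^-3
            R_v = 461.5           # gas constant of water vapour, J kg^-1 K^-1
            k3 = 3.739e5 * 1e-2   # K^2 Pa^-1 (3.739e5 K^2 hPa^-1)
            k2p = 22.1 * 1e-2     # K Pa^-1   (22.1 K hPa^-1)
            pi_factor = 1e6 / (rho_w * R_v * (k3 / tm_kelvin + k2p))
            return pi_factor * zwd_m * 1000.0   # metres -> millimetres

        # e.g. pwv_from_zwd(0.15, 275.0) ~= 23.5 mm  (Pi ~= 0.157)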

  14. Mapping grey matter reductions in schizophrenia: an anatomical likelihood estimation analysis of voxel-based morphometry studies.

    PubMed

    Fornito, A; Yücel, M; Patti, J; Wood, S J; Pantelis, C

    2009-03-01

    Voxel-based morphometry (VBM) is a popular tool for mapping neuroanatomical changes in schizophrenia patients. Several recent meta-analyses have identified the brain regions in which patients most consistently show grey matter reductions, although they have not examined whether such changes reflect differences in grey matter concentration (GMC) or grey matter volume (GMV). These measures assess different aspects of grey matter integrity, and may therefore reflect different pathological processes. In this study, we used the Anatomical Likelihood Estimation procedure to analyse significant differences reported in 37 VBM studies of schizophrenia patients, incorporating data from 1646 patients and 1690 controls, and compared the findings of studies using either GMC or GMV to index grey matter differences. Analysis of all studies combined indicated that grey matter reductions in a network of frontal, temporal, thalamic and striatal regions are among the most frequently reported in the literature. GMC reductions were generally larger and more consistent than GMV reductions, and were more frequent in the insula, medial prefrontal, medial temporal and striatal regions. GMV reductions were more frequent in dorso-medial frontal cortex, and lateral and orbital frontal areas. These findings support the primacy of frontal, limbic, and subcortical dysfunction in the pathophysiology of schizophrenia, and suggest that the grey matter changes observed with MRI may not necessarily result from a unitary pathological process.

  15. Estimator reduction and convergence of adaptive BEM.

    PubMed

    Aurada, Markus; Ferraz-Leite, Samuel; Praetorius, Dirk

    2012-06-01

    A posteriori error estimation and related adaptive mesh-refining algorithms have proven to be powerful tools in modern scientific computing. Contrary to adaptive finite element methods, convergence of adaptive boundary element schemes is, however, widely open. We propose a relaxed notion of convergence of adaptive boundary element schemes. Instead of asking for convergence of the error to zero, we only aim to prove estimator convergence in the sense that the adaptive algorithm drives the underlying error estimator to zero. We observe that certain error estimators satisfy an estimator reduction property which is sufficient for estimator convergence. The elementary analysis is based only on Dörfler marking and inverse estimates, not on reliability and efficiency of the error estimator at hand. In particular, our approach gives a first mathematical justification for the proposed steering of anisotropic mesh-refinements, which is mandatory for optimal convergence behavior in 3D boundary element computations.
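
    Dörfler marking, on which the analysis rests, selects a (nearly) minimal set of elements whose squared error indicators make up a fixed fraction theta of the total. A minimal sketch:

        import numpy as np

        def doerfler_mark(eta, theta=0.5):
            """Indices M of smallest cardinality with sum(eta[M]^2) >= theta * sum(eta^2)."""
            eta2 = np.asarray(eta, dtype=float) ** 2
            order = np.argsort(eta2)[::-1]              # largest indicators first
            csum = np.cumsum(eta2[order])
            k = int(np.searchsorted(csum, theta * eta2.sum())) + 1
            return order[:k]

    The marked elements are then refined, the indicators are recomputed, and the estimator reduction property drives the indicator sum to zero.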

  16. A priori and a posteriori dietary patterns at the age of 1 year and body composition at the age of 6 years: the Generation R Study.

    PubMed

    Voortman, Trudy; Leermakers, Elisabeth T M; Franco, Oscar H; Jaddoe, Vincent W V; Moll, Henriette A; Hofman, Albert; van den Hooven, Edith H; Kiefte-de Jong, Jessica C

    2016-08-01

    Dietary patterns have been linked to obesity in adults; however, not much is known about this association in early childhood. We examined associations of different types of dietary patterns in 1-year-old children with body composition at school age in 2026 children participating in a population-based cohort study. Dietary intake at the age of 1 year was assessed with a food-frequency questionnaire. At the children's age of 6 years we measured their body composition with dual-energy X-ray absorptiometry, and we calculated body mass index, fat mass index (FMI), and fat-free mass index (FFMI). Three dietary pattern approaches were used: (1) an a priori-defined diet quality score; (2) dietary patterns based on variation in food intake, derived from principal component analysis (PCA); and (3) dietary patterns based on variations in FMI and FFMI, derived with reduced rank regression (RRR). Both the a priori-defined diet score and a 'Health-conscious' PCA pattern were characterized by a high intake of fruit, vegetables, grains, and vegetable oils; after adjustment for confounders, children with higher adherence to these patterns had a higher FFMI at 6 years [0.19 SD (95% CI 0.08; 0.30) per SD increase in diet score], but no different FMI. One of the two RRR patterns was also positively associated with FFMI and was characterized by intake of whole grains, pasta and rice, and vegetable oils. Our results suggest that different a priori- and a posteriori-derived health-conscious dietary patterns in early childhood are associated with a higher fat-free mass, but not with fat mass, in later childhood.

  17. Development and Clinical Evaluation of a Three-Dimensional Cone-Beam Computed Tomography Estimation Method Using a Deformation Field Map

    SciTech Connect

    Ren, Lei; Chetty, Indrin J.; Zhang Junan; Jin Jianyue; Wu, Q. Jackie; Yan Hui; Brizel, David M.; Lee, W. Robert; Movsas, Benjamin; Yin Fangfang

    2012-04-01

    Purpose: To develop a three-dimensional (3D) cone-beam computed tomography (CBCT) estimation method using a deformation field map, and to evaluate and optimize the efficiency and accuracy of the method for use in the clinical setting. Methods and Materials: We propose a method to estimate patient CBCT images using prior information and a deformation model. Patients' previous CBCT data are used as the prior information, and the new CBCT volume to be estimated is considered as a deformation of the prior image volume. The deformation field map is solved by minimizing deformation energy and maintaining new projection data fidelity using a nonlinear conjugate gradient method. This method was implemented in 3D form using hardware acceleration and a multi-resolution scheme, and it was evaluated for different scan angles, projection numbers, and scan directions using liver, lung, and prostate cancer patient data. The accuracy of the estimation was evaluated by comparing the organ volume difference and the similarity between the estimated CBCT and the CBCT reconstructed from fully sampled projections. Results: Results showed that scan direction and number of projections do not have significant effects on the CBCT estimation accuracy. The total scan angle is the dominant factor affecting the accuracy of the CBCT estimation algorithm. Larger scan angles yield better estimation accuracy than smaller scan angles. Lung cancer patient data showed that the estimation error of the 3D lung tumor volume was reduced from 13.3% to 4.3% when the scan angle was increased from 60° to 360° using 57 projections. Conclusions: The proposed estimation method is applicable for 3D DTS, 3D CBCT, four-dimensional CBCT, and four-dimensional DTS image estimation. This method has the potential for significantly reducing the imaging dose and improving the image quality by removing the organ distortion artifacts and streak artifacts shown in images reconstructed by the conventional

  18. Goal-oriented explicit residual-type error estimates in XFEM

    NASA Astrophysics Data System (ADS)

    Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin

    2013-08-01

    A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.

  19. Illness Mapping: a time and cost effective method to estimate healthcare data needed to establish community-based health insurance

    PubMed Central

    2012-01-01

    Background Most healthcare spending in developing countries is private out-of-pocket. One explanation for low penetration of health insurance is that poorer individuals doubt their ability to enforce insurance contracts. Community-based health insurance schemes (CBHI) are a solution, but launching CBHI requires obtaining accurate local data on morbidity, healthcare utilization and other details to inform package design and pricing. We developed the “Illness Mapping” method (IM) for data collection (faster and cheaper than household surveys). Methods IM is a modification of two non-interactive consensus group methods (Delphi and Nominal Group Technique) to operate as interactive methods. We elicited estimates from “Experts” in the target community on morbidity and healthcare utilization. Interaction between facilitator and experts became essential to bridge literacy constraints and to reach consensus. The study was conducted in Gaya District, Bihar (India) during April-June 2010. The intervention included the IM and a household survey (HHS). IM included 18 women’s and 17 men’s groups. The HHS was conducted in 50 villages with 1,000 randomly selected households (6,656 individuals). Results We found good agreement between the two methods on overall prevalence of illness (IM: 25.9% ±3.6; HHS: 31.4%) and on prevalence of acute (IM: 76.9%; HHS: 69.2%) and chronic illnesses (IM: 20.1%; HHS: 16.6%). We also found good agreement on incidence of deliveries (IM: 3.9% ±0.4; HHS: 3.9%), and on hospital deliveries (IM: 61.0% ±5.4; HHS: 51.4%). For hospitalizations, we obtained a lower estimate from the IM (1.1%) than from the HHS (2.6%). The IM required less time and less person-power than a household survey, which translates into reduced costs. Conclusions We have shown that our Illness Mapping method can be carried out at lower financial and human cost for sourcing essential local data, at acceptably accurate levels. In view of the good fit of results

  20. Quantitative estimation of Tropical Rainfall Mapping Mission precipitation radar signals from ground-based polarimetric radar observations

    NASA Astrophysics Data System (ADS)

    Bolen, Steven M.; Chandrasekar, V.

    2003-06-01

    The Tropical Rainfall Mapping Mission (TRMM) is the first mission dedicated to measuring rainfall from space using radar. The precipitation radar (PR) is one of several instruments aboard the TRMM satellite, which operates in a nearly circular orbit with nominal altitude of 350 km, inclination of 35°, and period of 91.5 min. The PR is a single-frequency Ku-band instrument designed to yield information about the vertical storm structure so as to gain insight into the intensity and distribution of rainfall. Attenuation effects on PR measurements, however, can be significant and as high as 10-15 dB. This can seriously impair the accuracy of rain rate retrieval algorithms derived from PR signal returns. Quantitative estimation of PR attenuation is made along the PR beam via ground-based polarimetric observations to validate the attenuation correction procedures used by the PR. The reflectivity (Zh) at horizontal polarization and the specific differential phase (Kdp) are found along the beam from S-band ground radar measurements, and theoretical modeling is used to determine the expected specific attenuation (k) along the space-Earth path at Ku-band frequency from these measurements. A theoretical k-Kdp relationship is determined for rain when Kdp ≥ 0.5°/km, and a power-law relationship, k = a·Zh^b, is determined for light rain and other types of hydrometeors encountered along the path. After alignment and resolution volume matching between ground and PR measurements, the two-way path-integrated attenuation (PIA) is calculated along the PR propagation path by integrating the specific attenuation along the path. The PR reflectivity derived after removing the PIA is also compared against ground radar observations.
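
    The final step described above, integrating the specific attenuation along the beam into a two-way PIA, together with the piecewise k estimate, can be sketched as follows (coefficients are placeholders, and the k-Kdp relation is taken linear here for simplicity; the paper derives the actual relationships from theoretical modeling):

        import numpy as np

        def specific_attenuation(zh_dbz, kdp_deg_per_km, a, b, c):
            """Piecewise k (dB/km): a k-Kdp relation in rain (Kdp >= 0.5 deg/km),
            otherwise a power law k = a * Zh^b with Zh in linear units."""
            zh_lin = 10.0 ** (np.asarray(zh_dbz) / 10.0)     # dBZ -> linear
            return np.where(np.asarray(kdp_deg_per_km) >= 0.5,
                            c * np.asarray(kdp_deg_per_km),
                            a * zh_lin ** b)

        def two_way_pia(k_db_per_km, ds_km):
            """Two-way path-integrated attenuation (dB), rectangle rule."""
            return 2.0 * np.sum(k_db_per_km * ds_km)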

  1. From EEG to BOLD: brain mapping and estimating transfer functions in simultaneous EEG-fMRI acquisitions.

    PubMed

    Sato, João R; Rondinoni, Carlo; Sturzbecher, Marcio; de Araujo, Draulio B; Amaro, Edson

    2010-05-01

    Simultaneous acquisition of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) aims to disentangle the description of brain processes by exploiting the advantages of each technique. Most studies in this field focus on exploring the relationships between fMRI signals and the power spectrum in some specific frequency bands (alpha, beta, etc.). On the other hand, brain mapping of EEG signals (e.g., interictal spikes in epileptic patients) usually assumes a haemodynamic response function, as a rough approximation, for a parametric analysis applying the GLM. The integration of the information provided by the high spatial resolution of MR images and the high temporal resolution of EEG may be improved by relating them through transfer functions, which allow the identification of neurally driven areas without strong assumptions about haemodynamic response shapes or the homogeneity of brain haemodynamics. The difference in sampling rates is the first obstacle to a full integration of EEG and fMRI information. Moreover, a parametric specification of a function representing the commonalities of both signals is not established. In this study, we introduce a new data-driven method for estimating the transfer function from the EEG signal to the fMRI signal at the EEG sampling rate. This approach avoids EEG subsampling to the fMRI time resolution and naturally provides a test for EEG predictive power over BOLD signal fluctuations, in a well-established statistical framework. We illustrate this concept in resting state (eyes closed) and visual simultaneous fMRI-EEG experiments. The results point out that it is possible to predict the BOLD fluctuations in occipital cortex by using EEG measurements.

  2. Multisensory processing of naturalistic objects in motion: a high-density electrical mapping and source estimation study.

    PubMed

    Senkowski, Daniel; Saint-Amour, Dave; Kelly, Simon P; Foxe, John J

    2007-07-01

    In everyday life, we continuously and effortlessly integrate the multiple sensory inputs from objects in motion. For instance, the sound and the visual percept of vehicles in traffic provide us with complementary information about the location and motion of vehicles. Here, we used high-density electrical mapping and local auto-regressive average (LAURA) source estimation to study the integration of multisensory objects in motion as reflected in event-related potentials (ERPs). A randomized stream of naturalistic multisensory-audiovisual (AV), unisensory-auditory (A), and unisensory-visual (V) "splash" clips (i.e., a drop falling and hitting a water surface) was presented among non-naturalistic abstract motion stimuli. The visual clip onset preceded the "splash" onset by 100 ms for multisensory stimuli. For naturalistic objects early multisensory integration effects beginning 120-140 ms after sound onset were observed over posterior scalp, with distributed sources localized to occipital cortex, temporal lobule, insular, and medial frontal gyrus (MFG). These effects, together with longer latency interactions (210-250 and 300-350 ms) found in a widespread network of occipital, temporal, and frontal areas, suggest that naturalistic objects in motion are processed at multiple stages of multisensory integration. The pattern of integration effects differed considerably for non-naturalistic stimuli. Unlike naturalistic objects, no early interactions were found for non-naturalistic objects. The earliest integration effects for non-naturalistic stimuli were observed 210-250 ms after sound onset including large portions of the inferior parietal cortex (IPC). As such, there were clear differences in the cortical networks activated by multisensory motion stimuli as a consequence of the semantic relatedness (or lack thereof) of the constituent sensory elements.

  3. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP when the rate is a random variable with a probability density function of the form cx^k(1-x)^m is considered, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
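
    The density referred to here, c·x^k(1-x)^m, is a Beta-family prior, which is conjugate to binomial observations; the posterior-mean (MMSE) rate estimate is therefore linear in the observed counts. A small sketch of that closed form (our illustration of the stated linearity, not the paper's notation):

        def mmse_rate_estimate(k, m, successes, trials):
            """Posterior mean of a binomial rate under a prior density
            proportional to x^k (1-x)^m, i.e. Beta(k+1, m+1)."""
            alpha, beta = k + 1.0, m + 1.0
            return (alpha + successes) / (alpha + beta + trials)

    The estimate is an affine function of the observed count, which is why the optimum MMSE estimator is linear for this prior class.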

  4. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  5. Speech enhancement via two-stage dual tree complex wavelet packet transform with a speech presence probability estimator

    NASA Astrophysics Data System (ADS)

    Sun, Pengfei; Qin, Jun

    2017-02-01

    In this paper, a two-stage dual tree complex wavelet packet transform (DTCWPT) based speech enhancement algorithm is proposed, in which a speech presence probability (SPP) estimator and a generalized minimum mean squared error (MMSE) estimator are developed. To overcome the signal distortions caused by the downsampling of the WPT, a two-stage analytic decomposition concatenating undecimated WPT (UWPT) and decimated WPT is employed. An SPP estimator in the DTCWPT domain is derived based on a generalized Gamma distribution of speech and a Gaussian noise assumption. The validation results show that the proposed algorithm obtains improved perceptual evaluation of speech quality (PESQ) and segmental signal-to-noise ratio (SegSNR) in low-SNR nonstationary noise, compared with four other state-of-the-art speech enhancement algorithms: optimally modified LSA (OM-LSA), soft masking using a posteriori SNR uncertainty (SMPO), a posteriori SPP-based MMSE estimation (MMSE-SPP), and adaptive Bayesian wavelet thresholding (BWT).

  6. Global Forest Canopy Height Maps Validation and Calibration for The Potential of Forest Biomass Estimation in The Southern United States

    NASA Astrophysics Data System (ADS)

    Ku, N. W.; Popescu, S. C.

    2015-12-01

    In the past few years, three global forest canopy height maps have been released. Lefsky (2010) first utilized the Geoscience Laser Altimeter System (GLAS) on the Ice, Cloud and land Elevation Satellite (ICESat) and Moderate Resolution Imaging Spectroradiometer (MODIS) data to generate a global forest canopy height map in 2010. Simard et al. (2011) integrated GLAS data and other ancillary variables, such as MODIS, Shuttle Radar Topography Mission (SRTM), and climatic data, to generate another global forest canopy height map in 2011. Los et al. (2012) also used GLAS data to create a vegetation height map in 2012. Several studies have attempted to compare these global height maps to other sources of data. Bolton et al. (2013) concluded that Simard's forest canopy height map has strong agreement with airborne lidar-derived heights. Los' map is a coarse-spatial-resolution vegetation height map with a 0.5 decimal degree horizontal resolution (around 50 km in the US), which is not feasible for the purpose of our research. Thus, Simard's global forest canopy height map is the primary map for this research study. The main objectives of this research were to validate and calibrate Simard's map with airborne lidar data and other ancillary variables in the southern United States. The airborne lidar data were collected between 2010 and 2012 from: (1) the NASA LiDAR, Hyperspectral & Thermal Image (G-LiHT) program; (2) the National Ecological Observatory Network's (NEON) prototype data sharing program; (3) the NSF Open Topography Facility; and (4) the Department of Ecosystem Science and Management at Texas A&M University. The airborne lidar study areas also cover a wide variety of vegetation types across the southern US. The airborne lidar data are post-processed to generate lidar-derived metrics and assigned to four classes of point cloud data: data with ground points, above 1 m, above 3 m, and above 5 m. The root mean square error (RMSE) and

  7. An objective method for the production of isopach maps and implications for the estimation of tephra deposit volumes and their uncertainties.

    PubMed

    Engwell, S L; Aspinall, W P; Sparks, R S J

    Characterization of explosive volcanic eruptive processes from the interpretation of deposits is key to assessing volcanic hazard and risk, particularly for infrequent large explosive eruptions and those whose deposits are transient in the geological record. While eruption size, determined by measurement and interpretation of tephra fall deposits, is of particular importance, uncertainties for such measurements and volume estimates are rarely presented. Here, tephra volume estimates are derived from isopach maps produced by modeling raw thickness data as cubic B-spline curves under tension. Isopachs are objectively determined in relation to the original data and enable limitations in volume estimates from published maps to be investigated. The eruption volumes derived using spline isopachs differ from selected published estimates by 15-40%, reflecting uncertainties in the volume estimation process. The formalized analysis enables identification of sources of uncertainty; eruptive volume uncertainties (>30%) are much greater than thickness measurement uncertainties (~10%). The number of measurements is a key factor in volume estimate uncertainty, regardless of the method utilized for isopach production. Deposits processed using the cubic B-spline method are well described by 60 measurements distributed across each deposit; however, this figure is deposit- and distribution-dependent, increasing for geometrically complex deposits, such as those exhibiting bilobate dispersion.

  8. MAP Algorithms for Decoding Linear Block Codes Based on Sectionalized Trellis Diagrams

    NASA Technical Reports Server (NTRS)

    Lui, Ye; Lin, Shu; Fossorier, Marc P. C.

    2000-01-01

    The maximum a posteriori probability (MAP) algorithm is a trellis-based MAP decoding algorithm. It is the heart of turbo (or iterative) decoding that achieves an error performance near the Shannon limit. Unfortunately, the implementation of this algorithm requires large computation and storage. Furthermore, its forward and backward recursions result in long decoding delay. For practical applications, this decoding algorithm must be simplified and its decoding complexity and delay must be reduced. In this paper, the MAP algorithm and its variations, such as log-MAP and max-log-MAP algorithms, are first applied to sectionalized trellises for linear block codes and carried out as two-stage decodings. Using the structural properties of properly sectionalized trellises, the decoding complexity and delay of the MAP algorithms can be reduced. Computation-wise optimum sectionalizations of a trellis for MAP algorithms are investigated. Also presented in this paper are bidirectional and parallel MAP decodings.
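
    The log-MAP and max-log-MAP variants mentioned above differ only in how they evaluate log(e^a + e^b) in the forward and backward recursions: log-MAP keeps the Jacobian-logarithm correction term, while max-log-MAP drops it. A minimal sketch of that core operation:

        import math

        def max_star(a, b, exact=True):
            """Jacobian logarithm log(exp(a) + exp(b)).

            exact=True  -> log-MAP (max plus correction term)
            exact=False -> max-log-MAP approximation (max only)
            """
            m = max(a, b)
            return m + math.log1p(math.exp(-abs(a - b))) if exact else m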

  10. Real-Time Estimation of Earthquake Location, Magnitude and Rapid Shake map Computation for the Campania Region, Southern Italy

    NASA Astrophysics Data System (ADS)

    Zollo, A.; Convertito, V.; de Matteis, R.; Iannaccone, G.; Lancieri, M.; Lomax, A.; Satriano, C.

    2005-12-01

    A prototype system for earthquake early warning and rapid shake map evaluation is being developed and tested in southern Italy based on a dense, wide dynamic-range seismic network (accelerometers + seismometers) under installation in the Apenninic belt region (Irpinia Seismic Network). This system forms a regional Earthquake Early Warning System consisting of a seismic sensor network covering a portion of the expected epicentral area for large earthquakes. Considering a warning window ranging from tens of seconds before to hundreds of seconds after an earthquake, several public infrastructures and buildings of strategic relevance (hospitals, gas pipelines, railways, railroads, ...) of the Regione Campania are potential test-sites for testing innovative technologies for data acquisition, processing and transmission. A potential application of an early warning system in the Campania region based on the Irpinia network should consider an expected time delay to the first energetic S wave train varying from 14-20 s at 40-60 km distance to 26-30 s at about 80-100 km, for a crustal earthquake occurring in the source region. The latter is the typical time window available for mitigating earthquake effects through early warning in the city of Naples (about 2 million inhabitants including suburbs). We have developed a method for real time earthquake location following a probabilistic approach. The earthquake location is expressed as a probability density function for the hypocenter location in 3D space based on the concept of equal differential-time (EDT). It provides a location as the maximum of a stack over quasi-hyperbolic surfaces. On each surface the difference of calculated travel-times at a pair of stations is equal to the difference of observed arrival times at the same pair of stations. For an increasing number of P-wave readings, progressively acquired in the short time after the occurrence of an earthquake, the EDT method can be generalized by
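
    The EDT stacking idea can be sketched on a small 2D grid: for each candidate location, count the station pairs whose calculated travel-time difference matches the observed arrival-time difference, and take the grid maximum. The constant velocity, station geometry, and synthetic picks below are assumptions for illustration only:

        import numpy as np
        from itertools import combinations

        v = 5.0  # assumed constant P velocity, km/s
        stations = np.array([[0, 0], [40, 0], [0, 40], [40, 40]], float)  # km
        true_src = np.array([12.0, 27.0])
        obs_t = np.linalg.norm(stations - true_src, axis=1) / v  # synthetic picks

        # Stack EDT surfaces on a grid: count pairs with matching
        # differential times within a tolerance.
        xs = ys = np.linspace(0, 40, 81)
        stack = np.zeros((ys.size, xs.size))
        tol = 0.05  # s
        for iy, y in enumerate(ys):
            for ix, x in enumerate(xs):
                tt = np.linalg.norm(stations - np.array([x, y]), axis=1) / v
                for i, j in combinations(range(len(stations)), 2):
                    if abs((tt[i] - tt[j]) - (obs_t[i] - obs_t[j])) < tol:
                        stack[iy, ix] += 1

        iy, ix = np.unravel_index(np.argmax(stack), stack.shape)
        print(f"EDT maximum at x={xs[ix]:.1f} km, y={ys[iy]:.1f} km")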

  11. Estimating surface fluxes of very short-lived halogens from aircraft measurements over the tropical Western Pacific

    NASA Astrophysics Data System (ADS)

    Feng, Liang; Palmer, Paul I.; Butler, Robyn; Harris, Neil; Carpenter, Lucy; Andrews, Steve; Atlas, Elliot; Pan, Laura; Salawitch, Ross; Donets, Valeria; Schauffler, Sue

    2016-04-01

    We use an inverse model approach to quantitatively understand the ocean flux and atmospheric transport of very short-lived halogenated species (VSLS) measured during the coordinated NERC CAST and NCAR CONTRAST aircraft campaigns over the Western Pacific during January/February 2014. To achieve this we have developed a nested GEOS-Chem chemistry transport model simulation of bromoform (CHBr3) and dibromomethane (CH2Br2), which has a spatial resolution of 0.25° (latitude) × 0.3125° (longitude) over the tropical Western Pacific region, and is fed by boundary conditions from a coarser version of the model. We use archived 3-hourly 3-D fields of OH and j-values for CHBr3 photolysis, allowing us to linearly decompose these gases into tagged contributions from different geographical regions. Using these tagged tracers, we are able to use the maximum a posteriori probability (MAP) approach to estimate the VSLS sources by fitting the model to observations. We find that the resulting VSLS fluxes are significantly different from some previous studies. To interpret the results, we describe several observation system simulation experiments to understand the sensitivity of these flux estimates to observation errors as well as to the uncertainty in the boundary condition imposed around the nested grid.
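
    For a linear tagged-tracer model with Gaussian prior and observation errors, the MAP flux estimate has the standard closed form x_hat = x_a + (H^T R^-1 H + B^-1)^-1 H^T R^-1 (y - H x_a). A toy sketch with a made-up Jacobian, not the actual GEOS-Chem configuration:

        import numpy as np

        # Toy linear model y = H x + noise: 3 regional flux scalings, 5 obs
        H = np.array([[0.8, 0.1, 0.0],
                      [0.5, 0.4, 0.1],
                      [0.2, 0.6, 0.2],
                      [0.1, 0.3, 0.6],
                      [0.0, 0.1, 0.9]])
        x_true = np.array([1.4, 0.7, 1.1])
        rng = np.random.default_rng(0)
        y = H @ x_true + rng.normal(0, 0.05, size=5)

        x_a = np.ones(3)                             # a priori flux scalings
        B_inv = np.linalg.inv(0.25 * np.eye(3))      # inverse prior covariance
        R_inv = np.linalg.inv(0.05**2 * np.eye(5))   # inverse obs covariance

        # MAP minimizer of the Gaussian cost function
        A = H.T @ R_inv @ H + B_inv
        x_hat = x_a + np.linalg.solve(A, H.T @ R_inv @ (y - H @ x_a))
        print(x_hat)  # should land close to x_true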

  12. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter

  13. Advances in 3D soil mapping and water content estimation using multi-channel ground-penetrating radar

    NASA Astrophysics Data System (ADS)

    Moysey, S. M.

    2011-12-01

    Multi-channel ground-penetrating radar systems have recently become widely available, thereby opening new possibilities for shallow imaging of the subsurface. One advantage of these systems is that they can significantly reduce survey times by simultaneously collecting multiple lines of GPR reflection data. As a result, it is becoming more practical to complete 3D surveys - particularly in situations where the subsurface undergoes rapid changes, e.g., when monitoring infiltration and redistribution of water in soils. While 3D and 4D surveys can provide a degree of clarity that significantly improves interpretation of the subsurface, an even more powerful feature of the new multi-channel systems for hydrologists is their ability to collect data using multiple antenna offsets. Common mid-point (CMP) surveys have been widely used to estimate radar wave velocities, which can be related to water contents, by sequentially increasing the distance, i.e., offset, between the source and receiver antennas. This process is highly labor-intensive using single-channel systems and therefore such surveys are often only performed at a few locations at any given site. In contrast, with multi-channel GPR systems it is possible to physically arrange an array of antennas at different offsets, such that a CMP-style survey is performed at every point along a radar transect. It is then possible to process this data to obtain detailed maps of wave velocity with a horizontal resolution on the order of centimeters. In this talk I review concepts underlying multi-channel GPR imaging with an emphasis on multi-offset profiling for water content estimation. Numerical simulations are used to provide examples that illustrate situations where multi-offset GPR profiling is likely to be successful, with an emphasis on considering how issues like noise, soil heterogeneity, vertical variations in water content and weak reflection returns affect algorithms for automated analysis of the data. Overall
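
    The velocity-estimation step described here rests on the hyperbolic moveout relation t^2 = t0^2 + x^2/v^2, which is linear in x^2 and can be fit by least squares; water content can then be estimated from the inferred permittivity with a petrophysical relation such as Topp's. A sketch with a hypothetical CMP gather:

        import numpy as np

        c = 0.3  # speed of light in free space, m/ns

        # Hypothetical CMP gather: offsets (m) and reflection arrivals (ns)
        offset = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
        t_obs = np.array([20.2, 21.0, 22.4, 24.1, 26.2])

        # Hyperbolic moveout: t^2 is linear in x^2, so estimate the slope
        # (1/v^2) and intercept (t0^2) by least squares.
        slope, intercept = np.polyfit(offset**2, t_obs**2, 1)
        v = 1.0 / np.sqrt(slope)   # m/ns
        eps_r = (c / v) ** 2       # relative permittivity

        # Topp et al. (1980) relation: volumetric water content
        theta = -5.3e-2 + 2.92e-2*eps_r - 5.5e-4*eps_r**2 + 4.3e-6*eps_r**3
        print(f"v = {v:.3f} m/ns, eps_r = {eps_r:.1f}, theta ~= {theta:.3f}")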

  14. Estimation of diffusion properties in three-way fiber crossings without overfitting

    NASA Astrophysics Data System (ADS)

    Yang, Jianfei; Poot, Dirk H. J.; van Vliet, Lucas J.; Vos, Frans M.

    2015-12-01

    Diffusion-weighted magnetic resonance imaging permits assessment of the structural integrity of the brain’s white matter. This requires unbiased and precise quantification of diffusion properties. We aim to estimate such properties in simple and complex fiber geometries up to three-way fiber crossings using rank-2 tensor model selection. A maximum a-posteriori (MAP) estimator is employed to determine the parameters of a constrained triple tensor model. A prior is imposed on the parameters to avoid the degeneracy of the model estimation. This prior maximizes the divergence between the three tensor’s principal orientations. A new model selection approach quantifies the extent to which the candidate models are appropriate, i.e. a single-, dual- or triple-tensor model. The model selection precludes overfitting to the data. It is based on the goodness of fit and information complexity measured by the total Kullback-Leibler divergence (ICOMP-TKLD). The proposed framework is compared to maximum likelihood estimation on phantom data of three-way fiber crossings. It is also compared to the ball-and-stick approach from the FMRIB Software Library (FSL) on experimental data. The spread in the estimated parameters reduces significantly due to the prior. The fractional anisotropy (FA) could be precisely estimated with MAP down to an angle of approximately 40° between the three fibers. Furthermore, volume fractions between 0.2 and 0.8 could be reliably estimated. The configurations inferred by our method corresponded to the anticipated neuro-anatomy both in single fibers and in three-way fiber crossings. The main difference with FSL was in single fiber regions. Here, ICOMP-TKLD predominantly inferred a single fiber configuration, as preferred, whereas FSL mostly selected dual or triple order ball-and-stick models. The prior of our MAP estimator enhances the precision of the parameter estimation, without introducing a bias. Additionally, our model selection effectively

  15. Estimation of diffusion properties in three-way fiber crossings without overfitting.

    PubMed

    Yang, Jianfei; Poot, Dirk H J; van Vliet, Lucas J; Vos, Frans M

    2015-12-07

    Diffusion-weighted magnetic resonance imaging permits assessment of the structural integrity of the brain's white matter. This requires unbiased and precise quantification of diffusion properties. We aim to estimate such properties in simple and complex fiber geometries up to three-way fiber crossings using rank-2 tensor model selection. A maximum a-posteriori (MAP) estimator is employed to determine the parameters of a constrained triple tensor model. A prior is imposed on the parameters to avoid the degeneracy of the model estimation. This prior maximizes the divergence between the three tensor's principal orientations. A new model selection approach quantifies the extent to which the candidate models are appropriate, i.e. a single-, dual- or triple-tensor model. The model selection precludes overfitting to the data. It is based on the goodness of fit and information complexity measured by the total Kullback-Leibler divergence (ICOMP-TKLD). The proposed framework is compared to maximum likelihood estimation on phantom data of three-way fiber crossings. It is also compared to the ball-and-stick approach from the FMRIB Software Library (FSL) on experimental data. The spread in the estimated parameters reduces significantly due to the prior. The fractional anisotropy (FA) could be precisely estimated with MAP down to an angle of approximately 40° between the three fibers. Furthermore, volume fractions between 0.2 and 0.8 could be reliably estimated. The configurations inferred by our method corresponded to the anticipated neuro-anatomy both in single fibers and in three-way fiber crossings. The main difference with FSL was in single fiber regions. Here, ICOMP-TKLD predominantly inferred a single fiber configuration, as preferred, whereas FSL mostly selected dual or triple order ball-and-stick models. The prior of our MAP estimator enhances the precision of the parameter estimation, without introducing a bias. Additionally, our model selection effectively balances
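
    As a small aside to the two records above, the fractional anisotropy they estimate is a fixed function of the eigenvalues of each rank-2 diffusion tensor. A minimal sketch with illustrative eigenvalues (not data from the study):

        import numpy as np

        def fractional_anisotropy(evals):
            # FA from the eigenvalues of a rank-2 diffusion tensor
            evals = np.asarray(evals, float)
            md = evals.mean()  # mean diffusivity
            num = np.sqrt(((evals - md) ** 2).sum())
            den = np.sqrt((evals ** 2).sum())
            return np.sqrt(1.5) * num / den

        # Hypothetical eigenvalues (mm^2/s) of one tensor in a crossing voxel
        print(fractional_anisotropy([1.7e-3, 0.3e-3, 0.3e-3]))  # ~0.80, fiber-like
        print(fractional_anisotropy([1.0e-3, 1.0e-3, 1.0e-3]))  # 0.0, isotropic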

  16. MAP Algorithms for Decoding Linear Block Codes Based on Sectionalized Trellis Diagrams

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1999-01-01

    The MAP algorithm is a trellis-based maximum a posteriori probability decoding algorithm. It is the heart of the turbo (or iterative) decoding which achieves an error performance near the Shannon limit. Unfortunately, the implementation of this algorithm requires large computation and storage. Furthermore, its forward and backward recursions result in long decoding delay. For practical applications, this decoding algorithm must be simplified and its decoding complexity and delay must be reduced. In this paper, the MAP algorithm and its variations, such as Log-MAP and Max-Log-MAP algorithms, are first applied to sectionalized trellises for linear block codes and carried out as two-stage decodings. Using the structural properties of properly sectionalized trellises, the decoding complexity and delay of the MAP algorithms can be reduced. Computation-wise optimum sectionalizations of a trellis for MAP algorithms are investigated. Also presented in this paper are bi-directional and parallel MAP decodings.

  17. Method for estimating potential wetland extent by utilizing streamflow statistics and flood-inundation mapping techniques: Pilot study for land along the Wabash River near Terre Haute, Indiana

    USGS Publications Warehouse

    Kim, Moon H.; Ritz, Christian T.; Arvin, Donald V.

    2012-01-01

    Potential wetland extents were estimated for a 14-mile reach of the Wabash River near Terre Haute, Indiana. This pilot study was completed by the U.S. Geological Survey in cooperation with the U.S. Department of Agriculture, Natural Resources Conservation Service (NRCS). The study showed that potential wetland extents can be estimated by analyzing streamflow statistics with the available streamgage data, calculating the approximate water-surface elevation along the river, and generating maps by use of flood-inundation mapping techniques. Planning successful restorations for Wetland Reserve Program (WRP) easements requires a determination of areas that show evidence of being in a zone prone to sustained or frequent flooding. Zone determinations of this type are used by WRP planners to define the actively inundated area and make decisions on restoration-practice installation. According to WRP planning guidelines, a site needs to show evidence of being in an "inundation zone" that is prone to sustained or frequent flooding for a period of 7 consecutive days at least once every 2 years on average in order to meet the planning criteria for determining a wetland for a restoration in agricultural land. By calculating the annual highest 7-consecutive-day mean discharge with a 2-year recurrence interval (7MQ2) at a streamgage on the basis of available streamflow data, one can determine the water-surface elevation corresponding to the calculated flow that defines the estimated inundation zone along the river. By using the estimated water-surface elevation ("inundation elevation") along the river, an approximate extent of potential wetland for a restoration in agricultural land can be mapped. As part of the pilot study, a set of maps representing the estimated potential wetland extents was generated in a geographic information system (GIS) application by combining (1) a digital water-surface plane representing the surface of inundation elevation that sloped in the downstream
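
    The 7MQ2 statistic described above (annual highest 7-consecutive-day mean discharge with a 2-year recurrence interval) can be approximated as the median of the annual series of 7-day maximum means, since a 2-year recurrence corresponds to 50% annual exceedance. A sketch with synthetic daily discharge, not the study's streamgage record:

        import numpy as np
        import pandas as pd

        # Synthetic daily mean discharge record at a streamgage (cfs)
        rng = np.random.default_rng(1)
        dates = pd.date_range("1990-01-01", "2009-12-31", freq="D")
        q = pd.Series(50 + 200 * rng.gamma(2.0, 1.0, len(dates)), index=dates)

        # Annual highest 7-consecutive-day mean discharge
        q7 = q.rolling(7).mean()
        annual_max_7day = q7.groupby(q7.index.year).max()

        # 2-year recurrence ~ 50% annual exceedance probability,
        # i.e. the median of the annual series.
        q7m2 = annual_max_7day.median()
        print(f"7MQ2 ~= {q7m2:.0f} cfs")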

  18. Solving inverse problems with piecewise linear estimators: from Gaussian mixture models to structured sparsity.

    PubMed

    Yu, Guoshen; Sapiro, Guillermo; Mallat, Stéphane

    2012-05-01

    A general framework for solving image inverse problems with piecewise linear estimations is introduced in this paper. The approach is based on Gaussian mixture models, which are estimated via a maximum a posteriori expectation-maximization algorithm. A dual mathematical interpretation of the proposed framework with a structured sparse estimation is described, which shows that the resulting piecewise linear estimate stabilizes the estimation when compared with traditional sparse inverse problem techniques. We demonstrate that, in a number of image inverse problems, including interpolation, zooming, and deblurring of narrow kernels, the same simple and computationally efficient algorithm yields results in the same ballpark as that of the state of the art.
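
    Conditional on the Gaussian component selected by the MAP-EM procedure, the estimate described above is linear (a Wiener filter). A toy sketch for one component and a masking operator; the patch model below is invented for illustration:

        import numpy as np

        rng = np.random.default_rng(2)

        # One Gaussian class (mu, Sigma), standing in for a component selected
        # by the MAP-EM procedure; a made-up 8-dim patch model.
        d = 8
        mu = np.zeros(d)
        U = rng.normal(size=(d, d))
        Sigma = U @ U.T / d + 0.1 * np.eye(d)

        # Degraded observation y = H x + w (here: masking half the samples)
        H = np.eye(d)[::2]
        sigma2 = 0.01
        x = rng.multivariate_normal(mu, Sigma)
        y = H @ x + rng.normal(0, np.sqrt(sigma2), H.shape[0])

        # Piecewise linear (Wiener) MAP estimate for the selected Gaussian:
        # x_hat = mu + Sigma H^T (H Sigma H^T + sigma2 I)^-1 (y - H mu)
        G = Sigma @ H.T @ np.linalg.inv(H @ Sigma @ H.T + sigma2 * np.eye(H.shape[0]))
        x_hat = mu + G @ (y - H @ mu)
        print(np.round(x_hat - x, 3))  # residual of the linear estimate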

  19. Estimation of two-dimensional intraventricular velocity and pressure maps by digital processing conventional color-Doppler sequences

    NASA Astrophysics Data System (ADS)

    Garcia, Damien; Del Alamo, Juan C.; Tanne, David; Cortina, Cristina; Yotti, Raquel; Fernandez-Aviles, Francisco; Bermejo, Javier

    2008-11-01

    Clinical echocardiographic quantification of blood flow in the left ventricle is limited because Doppler methods only provide one velocity component. We developed a new technique to obtain two-dimensional flow maps from conventional transthoracic echocardiographic acquisitions. Velocity and pressure maps were calculated from color-Doppler velocity (apical long-axis view) by solving the continuity and Euler equations under the assumptions of zero transverse fluxes of mass and momentum. This technique is fast, clinically compliant and does not require any specific training. Particle image velocimetry experiments performed in an atrioventricular duplicator showed that the circulation and size of the diastolic vortex were quantified accurately. Micromanometer measurements in pigs showed that apex-base pressure differences extracted from the two-dimensional maps qualitatively agreed with the invasive data. Initial clinical measurements in healthy volunteers showed a large prograde vortex. Additional retrograde vortices appeared in patients with dilated cardiomyopathy and left ventricular hypertrophy.

  20. www.common-metrics.org: a web application to estimate scores from different patient-reported outcome measures on a common scale.

    PubMed

    Fischer, H Felix; Rose, Matthias

    2016-10-19

    Recently, a growing number of Item Response Theory (IRT) models have been published, which allow estimation of a common latent variable from data derived with different Patient Reported Outcomes (PROs). When using data from different PROs, direct estimation of the latent variable has some advantages over the use of sum score conversion tables. However, it requires substantial proficiency in the field of psychometrics to fit such models using contemporary IRT software. We developed a web application ( http://www.common-metrics.org ), which allows easier estimation of latent variable scores using IRT models that calibrate different measures on instrument-independent scales. Currently, the application allows estimation using six different IRT models for Depression, Anxiety, and Physical Function. Based on published item parameters, users of the application can directly obtain latent trait estimates using expected a posteriori (EAP) estimation for sum scores as well as for specific response patterns, Bayes modal (MAP), weighted likelihood estimation (WLE) and maximum likelihood (ML) methods, under three different prior distributions. The obtained estimates can be downloaded and analyzed using standard statistical software. This application enhances the usability of IRT modeling for researchers by allowing comparison of the latent trait estimates over different PROs, such as the Patient Health Questionnaire Depression (PHQ-9) and Anxiety (GAD-7) scales, the Center of Epidemiologic Studies Depression Scale (CES-D), the Beck Depression Inventory (BDI), PROMIS Anxiety and Depression Short Forms, and others. Advantages of this approach include comparability of data derived with different measures and tolerance against missing values. The validity of the underlying models needs to be investigated in the future.
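
    For a two-parameter logistic model, the EAP estimate offered by such tools is a ratio of quadrature sums over the posterior. A minimal sketch with illustrative item parameters, not the calibrated parameters used by the web application:

        import numpy as np

        # 2PL item parameters (discrimination a, difficulty b), illustrative
        a = np.array([1.2, 0.8, 1.5, 1.0])
        b = np.array([-0.5, 0.0, 0.7, 1.2])
        resp = np.array([1, 1, 0, 0])  # observed response pattern

        # Quadrature grid with a standard normal prior
        theta = np.linspace(-4, 4, 81)
        prior = np.exp(-0.5 * theta**2)

        # Likelihood of the response pattern at each quadrature point
        p = 1.0 / (1.0 + np.exp(-a[:, None] * (theta[None, :] - b[:, None])))
        like = np.prod(np.where(resp[:, None] == 1, p, 1 - p), axis=0)

        post = like * prior
        eap = np.sum(theta * post) / np.sum(post)   # expected a posteriori
        psd = np.sqrt(np.sum((theta - eap)**2 * post) / np.sum(post))
        print(f"EAP = {eap:.3f}, posterior SD = {psd:.3f}")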

  1. Estimation of Flow Duration Curve for Ungauged Catchments using Adaptive Neuro-Fuzzy Inference System and Map Correlation Method: A Case Study from Turkey

    NASA Astrophysics Data System (ADS)

    Kentel, E.; Dogulu, N.

    2015-12-01

    In Turkey the experience and data required for hydrological model setup are limited and very often not available. Moreover, there are many ungauged catchments with many planned projects aimed at utilization of water resources, including development of the existing hydropower potential. This situation makes runoff prediction an increasingly significant challenge at poorly gauged and ungauged locations where small hydropower plants, reservoirs, etc. are planned. Flow duration curves have many practical applications in hydrology and integrated water resources management. Estimation of the flow duration curve (FDC) at ungauged locations is essential, particularly for hydropower feasibility studies and selection of installed capacities. In this study, we test and compare the performances of two methods for estimating FDCs in the Western Black Sea catchment, Turkey: (i) FDC based on Map Correlation Method (MCM) flow estimates. MCM is a recently proposed method (Archfield and Vogel, 2010) which uses geospatial information to estimate flow. Flow measurements of stream gauging stations near the ungauged location are the only data requirement for this method, which makes MCM very attractive for flow estimation in Turkey. (ii) The Adaptive Neuro-Fuzzy Inference System (ANFIS) is a data-driven method used to relate the FDC to a number of variables representing catchment and climate characteristics; its ease of implementation makes it very useful for practical purposes. Both methods use easily collectable data and are computationally efficient. Comparison of the results is based on two measures: the root mean squared error (RMSE) and the Nash-Sutcliffe Efficiency (NSE) value. Ref: Archfield, S. A., and R. M. Vogel (2010), Map correlation method: Selection of a reference streamgage to estimate daily streamflow at ungaged catchments, Water Resour. Res., 46, W10513, doi:10.1029/2009WR008481.
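
    The two comparison measures named above are straightforward to compute; a minimal sketch with hypothetical observed and simulated FDC quantiles:

        import numpy as np

        def rmse(obs, sim):
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return np.sqrt(np.mean((sim - obs) ** 2))

        def nse(obs, sim):
            # Nash-Sutcliffe Efficiency: 1 = perfect, 0 = no better than mean
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        obs = [12.0, 9.5, 7.1, 5.8, 4.9, 4.2]  # hypothetical quantiles, m^3/s
        sim = [11.4, 10.1, 7.5, 5.2, 4.6, 4.4]
        print(f"RMSE = {rmse(obs, sim):.2f} m^3/s, NSE = {nse(obs, sim):.3f}")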

  2. Difficulties with estimating city-wide urban forest cover change from national, remotely-sensed tree canopy maps

    Treesearch

    Jeffrey T. Walton

    2008-01-01

    Two datasets of percent urban tree canopy cover were compared. The first dataset was based on a 1991 AVHRR forest density map. The second was the US Geological Survey's National Land Cover Database (NLCD) 2001 sub-pixel tree canopy. A comparison of these two tree canopy layers was conducted in 36 census designated places of western New York State. Reference data...

  3. Mapping land cover and estimating forest structure using satellite imagery and coarse resolution lidar in the Virgin Islands

    Treesearch

    Todd Kennaway; Eileen Helmer; Michael Lefsky; Thomas Brandeis; Kirk Sherrill

    2009-01-01

    Current information on land cover, forest type and forest structure for the Virgin Islands is critical to land managers and researchers for accurate forest inventory and ecological monitoring. In this study, we use cloud free image mosaics of panchromatic sharpened Landsat ETM+ images and decision tree classification software to map land cover and forest type for the...

  4. Mapping land cover and estimating forest structure using satellite imagery and coarse resolution lidar in the Virgin Islands

    Treesearch

    T.A. Kennaway; E.H. Helmer; M.A. Lefsky; T.A. Brandeis; K.R. Sherill

    2008-01-01

    Current information on land cover, forest type and forest structure for the Virgin Islands is critical to land managers and researchers for accurate forest inventory and ecological monitoring. In this study, we use cloud free image mosaics of panchromatic sharpened Landsat ETM+ images and decision tree classification software to map land cover and forest type for the...

  5. Estimating flow parameter distributions using ground-penetrating radar and hydrological measurements during transient flow in the vadose zone

    SciTech Connect

    Kowalsky, Michael B.; Finsterle, Stefan; Rubin, Yoram

    2003-07-01

    Methods for estimating the parameter distributions necessary for modeling fluid flow and contaminant transport in the shallow subsurface are in great demand. Soil properties such as permeability, porosity, and water retention are typically estimated through the inversion of hydrological data (e.g., measurements of capillary pressure and water saturation). However, ill-posedness and non-uniqueness commonly arise in such non-linear inverse problems making their solutions elusive. Incorporating additional types of data, such as from geophysical methods, may greatly improve the success of inverse modeling. In particular, ground-penetrating radar (GPR) methods have proven sensitive to subsurface fluid flow processes and appear promising for such applications. In the present work, an inverse technique is presented which allows for the estimation of flow parameter distributions and the prediction of flow phenomena using GPR and hydrological measurements collected during a transient flow experiment. Specifically, concepts from the pilot point method were implemented in a maximum a posteriori (MAP) framework to allow for the generation of permeability distributions that are conditional to permeability point measurements, that maintain specified patterns of spatial correlation, and that are consistent with geophysical and hydrological data. The current implementation of the approach allows for additional flow parameters to be estimated concurrently if they are assumed uniform and uncorrelated with the permeability distribution. (The method itself allows for heterogeneity in these parameters to be considered, and it allows for parameters of the petrophysical and semivariogram models to be estimated as well.) Through a synthetic example, performance of the method is evaluated under various conditions, and some conclusions are made regarding the joint use of transient GPR and hydrological measurements in estimating fluid flow parameters in the vadose zone.

  6. Some New Methods for Mapping Ratings to the NAEP Theta Scale To Support Estimation of NAEP Achievement Level Boundaries.

    ERIC Educational Resources Information Center

    Davey, Tim; And Others

    Some standard-setting methods require judges to estimate the probability that an examinee who just meets an achievement standard will answer each of a set of items correctly. These probability estimates are then used to infer the values on some latent scale that, in theory, determines an examinee's responses. The paper focuses on the procedures…

  7. Sensitivity of Global Upper Ocean Heat Content Estimates to Mapping Methods, XBT Bias Corrections, and Baseline Climatologies

    NASA Astrophysics Data System (ADS)

    Boyer, T.; Domingues, C. M.; Good, S. A.; Johnson, G. C.; Lyman, J. M.; Ishii, M.; Gouretski, V. V.; Willis, J. K.; Antonov, J.; Church, J. A.; Cowley, R.; Bindoff, N. L.; Wijffels, S. A.

    2016-02-01

    Ocean warming accounts for the majority of Earth's current energy imbalance. Historic Ocean Heat Content (OHC) changes are important to understanding our changing climate. Calculations of OHC anomalies (OHCA) from in situ measurements provide estimates of these changes. Uncertainties in OHCA estimates arise from temporal and spatial inhomogeneity of subsurface ocean temperature measurements, instrument bias corrections, and the definitions of a mean ocean climatology from which anomalies are calculated. To quantify the uncertainties and biases that these factors contribute to different OHCA estimation methods, as well as the spread among the methods themselves, seven groups used the same data set with the same quality control to calculate eight OHCA estimates for 0-700 m depth, and comparisons were made.

  8. Land Cover Mapping using GEOBIA to Estimate Loss of Salacca zalacca Trees in Landslide Area of Clapar, Madukara District of Banjarnegara

    NASA Astrophysics Data System (ADS)

    Permata, Anggi; Juniansah, Anwar; Nurcahyati, Eka; Dimas Afrizal, Mousafi; Adnan Shafry Untoro, Muhammad; Arifatha, Na'ima; Ramadhani Yudha Adiwijaya, Raden; Farda, Nur Mohammad

    2016-11-01

    Landslides are unpredictable natural disasters that commonly happen in high-slope areas. Small-format aerial photography is an acquisition method that can reach and obtain high resolution spatial data faster than other methods, and it provides data such as orthomosaics and Digital Surface Models (DSM). The study area contained the landslide in Clapar, Madukara District of Banjarnegara. Aerial photographs of the landslide area offered the advantage of clear object visibility. Object characteristics such as shape, size, and texture were clearly seen; therefore, GEOBIA (Geographic Object-Based Image Analysis) was a suitable method for classifying land cover in the study area. Unlike the per-pixel analyst (PPA) method, which uses spectral information as the basis of object detection, GEOBIA can use spatial elements as the classification basis to establish a land cover map with better accuracy. The GEOBIA method used a classification hierarchy to divide post-disaster land cover into three main objects: vegetation, landslide/soil, and building. These three classes were required to obtain more detailed information for estimating the loss caused by the landslide and establishing a land cover map of the landslide area. Estimating loss in the landslide area relates to damage to Salak (Salacca zalacca) plantations. The number of Salak trees swept away by the landslide was estimated under the assumption that every tree damaged by the landslide had the same age and production class as the undamaged trees. The loss calculation was done by approximating the number of damaged trees in the landslide area using data on trees around the area acquired from the GEOBIA classification.

  9. Spatio-temporal reconstruction of air temperature maps and their application to estimate rice growing season heat accumulation using multi-temporal MODIS data.

    PubMed

    Zhang, Li-wen; Huang, Jing-feng; Guo, Rui-fang; Li, Xin-xing; Sun, Wen-bo; Wang, Xiu-zhen

    2013-02-01

    The accumulation of thermal time usually represents the local heat resources to drive crop growth. Maps of temperature-based agro-meteorological indices are commonly generated by the spatial interpolation of data collected from meteorological stations with coarse geographic continuity. To solve the critical problems of estimating air temperature (T(a)) and filling in missing pixels due to cloudy and low-quality images in growing degree days (GDDs) calculation from remotely sensed data, a novel spatio-temporal algorithm for T(a) estimation from Terra and Aqua moderate resolution imaging spectroradiometer (MODIS) data was proposed. This is a preliminary study to calculate heat accumulation, expressed in accumulative growing degree days (AGDDs) above 10 °C, from reconstructed T(a) based on MODIS land surface temperature (LST) data. The verification results of maximum T(a), minimum T(a), GDD, and AGDD from MODIS-derived data against meteorological calculation all showed high correlations, significant at the 0.01 level. Overall, MODIS-derived AGDD was slightly underestimated, with almost 10% relative error. However, the feasibility of employing AGDD anomaly maps to characterize the 2001-2010 spatio-temporal variability of heat accumulation and estimating the 2011 heat accumulation distribution using only MODIS data was finally demonstrated in the current paper. Our study may supply a novel way to calculate AGDD in heat-related studies concerning crop growth monitoring, agricultural climatic regionalization, and agro-meteorological disaster detection at the regional scale.

  10. Spatio-temporal reconstruction of air temperature maps and their application to estimate rice growing season heat accumulation using multi-temporal MODIS data*

    PubMed Central

    Zhang, Li-wen; Huang, Jing-feng; Guo, Rui-fang; Li, Xin-xing; Sun, Wen-bo; Wang, Xiu-zhen

    2013-01-01

    The accumulation of thermal time usually represents the local heat resources to drive crop growth. Maps of temperature-based agro-meteorological indices are commonly generated by the spatial interpolation of data collected from meteorological stations with coarse geographic continuity. To solve the critical problems of estimating air temperature (T a) and filling in missing pixels due to cloudy and low-quality images in growing degree days (GDDs) calculation from remotely sensed data, a novel spatio-temporal algorithm for T a estimation from Terra and Aqua moderate resolution imaging spectroradiometer (MODIS) data was proposed. This is a preliminary study to calculate heat accumulation, expressed in accumulative growing degree days (AGDDs) above 10 °C, from reconstructed T a based on MODIS land surface temperature (LST) data. The verification results of maximum T a, minimum T a, GDD, and AGDD from MODIS-derived data against meteorological calculation all showed high correlations, significant at the 0.01 level. Overall, MODIS-derived AGDD was slightly underestimated, with almost 10% relative error. However, the feasibility of employing AGDD anomaly maps to characterize the 2001–2010 spatio-temporal variability of heat accumulation and estimating the 2011 heat accumulation distribution using only MODIS data was finally demonstrated in the current paper. Our study may supply a novel way to calculate AGDD in heat-related studies concerning crop growth monitoring, agricultural climatic regionalization, and agro-meteorological disaster detection at the regional scale. PMID:23365013
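
    The core GDD/AGDD arithmetic used in the two records above is simple: daily GDD is the mean of the temperature extremes minus the 10 °C base, floored at zero, and AGDD is its running sum. A sketch with hypothetical reconstructed Ta values:

        import numpy as np

        BASE = 10.0  # base temperature for rice, deg C

        def gdd(tmax, tmin, base=BASE):
            # Daily growing degree days from max/min air temperature
            return np.maximum((np.asarray(tmax) + np.asarray(tmin)) / 2.0 - base, 0.0)

        # Hypothetical week of MODIS-derived Ta extremes (deg C)
        tmax = [28.1, 30.4, 27.5, 25.0, 31.2, 29.8, 26.3]
        tmin = [18.0, 19.5, 17.2, 14.8, 20.1, 19.0, 16.4]

        daily = gdd(tmax, tmin)
        agdd = np.cumsum(daily)  # accumulative growing degree days
        print(daily.round(1))
        print(f"AGDD after 7 days = {agdd[-1]:.1f} deg C d")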

  11. Adjoint Error Estimation for Linear Advection

    SciTech Connect

    Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S

    2011-03-30

    An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.

  12. Nonintrusive estimation of anisotropic stiffness maps of heterogeneous steel welds for the improvement of ultrasonic array inspection.

    PubMed

    Fan, Zheng; Mark, Alison F; Lowe, Michael J S; Withers, Philip J

    2015-08-01

    It is challenging to inspect austenitic welds nondestructively using ultrasonic waves because the spatially varying elastic anisotropy of weld microstructures can lead to the deviation of ultrasound. Models have been developed to predict the propagation of ultrasound in such welds once the weld stiffness heterogeneity is known. Consequently, it is desirable to have a means of measuring the variation in elastic anisotropy experimentally so as to be able to correct for deviations in ultrasonic pathways for the improvement of weld inspection. This paper investigates the use of external nonintrusive ultrasonic array measurements to construct such weld stiffness maps, representing the orientation of the stiffness tensor according to location in the weld cross section. An inverse model based on a genetic algorithm has been developed to recover a small number of key parameters in an approximate model of the weld map, making use of ultrasonic array measurements. The approximate model of the weld map uses the Modeling of anIsotropy based on Notebook of Arcwelding (MINA) formulation, which is one of the representations that has been proposed by other researchers to provide a simple, yet physically based, description of the overall variations of orientations of the stiffness tensors over the weld cross section. The choice of sensitive ultrasonic modes as well as the best monitoring positions have been discussed to achieve a robust inversion. Experiments have been carried out on a 60-mm-thick multipass tungsten inert gas (TIG) weld to validate the findings of the modeling, showing very good agreement. This work shows that ultrasonic array measurements can be used on a single side of a butt-welded plate, such that there is no need to access the remote side, to construct an approximate but useful weld map of the spatial variations in anisotropic stiffness orientation that occur within the weld.

  13. Use of a satellite-derived land cover map to estimate transport of radiocaesium to surface waters.

    PubMed

    Smith, J T; Howard, D C; Wright, S M; Naylor, C; Brookes, A M; Hilton, J; Howard, B J

    1998-01-08

    During the weeks to months after the deposition of radioactive fallout, the initial concentration of radioactivity in rivers and lakes declines as a result of flushing and removal to bottom sediments. In the long-term, however, radioactivity in the water body can remain at significant levels as a result of secondary contamination processes. In particular, it is known that soils contaminated by long-lived radionuclides such as 137Cs and 90Sr provide a significant source to surface waters over a period of years after fallout. Using The Land Cover Map of Great Britain, a satellite-derived land cover map as a surrogate indicator of soil type, we have related catchment land cover type to long-term 137Cs activity concentrations in 27 lakes in Cumbria, UK. The study has shown that satellite-derived maps could be used to indicate areas vulnerable to high long-term 137Cs transport to surface waters in the event of a nuclear accident. In these Cumbrian lakes, it appears that residual 137Cs levels are determined by transfers of 137Cs from contaminated catchments rather than within-lake processes. Only three of the cover types, open shrub moor, bog and dense shrub moor, as identified by the satellite, are needed to explain over 90% of the variation in long-term 137Cs activity concentrations in the lakes, and these have been shown to correlate spatially with occurrence of organic soils.

  14. On the downscaling of actual evapotranspiration maps based on combination of MODIS and landsat-based actual evapotranspiration estimates

    USGS Publications Warehouse

    Singh, Ramesh K.; Senay, Gabriel B.; Velpuri, Naga Manohar; Bohms, Stefanie; Verdin, James P.

    2014-01-01

    Downscaling is one of the important ways of utilizing the combined benefits of the high temporal resolution of Moderate Resolution Imaging Spectroradiometer (MODIS) images and the fine spatial resolution of Landsat images. We evaluated the regression-with-intercept method and developed the Linear with Zero Intercept (LinZI) method for downscaling MODIS-based monthly actual evapotranspiration (AET) maps to Landsat-scale monthly AET maps for the Colorado River Basin for 2010. We used the 8-day MODIS land surface temperature product (MOD11A2) and 328 cloud-free Landsat images for computing AET maps and downscaling. The regression-with-intercept method does have limitations in downscaling if the slope and intercept are computed over a large area. A good agreement was obtained between downscaled monthly AET using the LinZI method and the eddy covariance measurements from seven flux sites within the Colorado River Basin. The mean bias ranged from −16 mm (underestimation) to 22 mm (overestimation) per month, and the coefficient of determination varied from 0.52 to 0.88. Some discrepancies between measured and downscaled monthly AET at two flux sites were found to be due to the prevailing flux footprint. A reasonable comparison was also obtained between downscaled monthly AET using the LinZI method and the gridded FLUXNET dataset. The downscaled monthly AET nicely captured the temporal variation in sampled land cover classes. The proposed LinZI method can be used at finer temporal resolution (such as 8 days) with further evaluation. The proposed downscaling method will be very useful in advancing the application of remotely sensed images in water resources planning and management.
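
    The LinZI method, as described, amounts to a least-squares line through the origin between coincident MODIS- and Landsat-scale AET values. A minimal sketch with hypothetical paired monthly AET, not the study's imagery:

        import numpy as np

        # Hypothetical coincident monthly AET (mm): MODIS pixels vs Landsat
        # pixels aggregated to the same footprint
        modis_aet = np.array([42.0, 55.3, 61.8, 70.2, 88.5])
        landsat_aet = np.array([40.1, 53.0, 60.2, 69.0, 85.9])

        # Linear with Zero Intercept: least-squares slope through the origin
        slope = np.sum(modis_aet * landsat_aet) / np.sum(modis_aet ** 2)

        # Downscale a new MODIS-scale value to the Landsat scale
        downscaled = slope * 65.0
        print(f"slope = {slope:.3f}, downscaled AET = {downscaled:.1f} mm/month")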

  15. Mapping regions of equifinality in the parameter space - A method to evaluate inverse estimates and to plan experimental boundary conditions

    NASA Astrophysics Data System (ADS)

    Wehrer, Markus; Totsche, Kai Uwe

    2010-05-01

    Only the combination of physical models and experiments can elucidate the processes of reactive transport in porous media. Column-scale percolation experiments offer a great opportunity to identify and quantify processes of reactive transport. In contrast to batch experiments, approximately natural flow dynamics can be realized. However, due to the complexity of interactions and the wide range of parameters, the experiment can be insensitive to the process of interest, and misinterpretation of the results is likely. In this talk we show how numerical tools can be applied for thorough planning and evaluation of experiments. The central tool is a map of regions of equifinality, gained by a thorough sensitivity analysis of the parameter space. This tool can help, on the one hand, to plan the experimental boundary conditions such that the results are sensitive to the process of interest. On the other hand, it provides information on the reliability of inversely gained parameters of flow and transport. We show results from all three phases of the method. In the first phase, the equifinality maps are used to choose an appropriate boundary condition for the experiment. In the second phase, the corresponding column experiments are conducted and simulated inversely. We show breakthrough curves from such experiments with materials from different soils and sites (coke oven sites, abandoned industrial sites, demolition debris, municipal waste incineration ash). The columns were subjected to multiple flow interruptions and different flow velocities, and parameters of reactive transport were obtained by inverse simulation. The third phase consisted of an evaluation of the reliability of the parameters, applying again the maps of equifinality. Some drawbacks of the model could be identified, giving valuable hints on the actual processes.

  16. A Limited Sampling Schedule to Estimate Individual Pharmacokinetic Parameters of Fludarabine in Hematopoietic Cell Transplant Patients

    PubMed Central

    Salinger, David H.; Blough, David K.; Vicini, Paolo; Anasetti, Claudio; O’Donnell, Paul V.; Sandmaier, Brenda M.; McCune, Jeannine S.

    2009-01-01

    Purpose Fludarabine monophosphate (fludarabine) is frequently administered to patients receiving a reduced-intensity conditioning regimen for allogeneic hematopoietic cell transplant (HCT) in an ambulatory care setting. These patients experience significant interpatient variability in clinical outcomes, potentially due to pharmacokinetic variability in 2-fluoroadenine (F-ara-A) plasma concentrations. To test such hypotheses, patient compliance with blood sampling should be optimized by the development of a minimally intrusive limited sampling schedule (LSS) to characterize F-ara-A pharmacokinetics. To this end, we sought to create the first F-ara-A population pharmacokinetic model and subsequently a LSS. Experimental Design A retrospective evaluation of F-ara-A pharmacokinetics was conducted after one or more doses of daily IV fludarabine in 42 adult HCT recipients. NONMEM software was used to estimate the population pharmacokinetic parameters and compute the area under the concentration-time curve (AUC). Results A two-compartment model best fit the data. A LSS was constructed using a simulation approach, seeking to minimize the scaled mean square error (sMSE) for the AUC for each simulated individual. The LSS times chosen were 0.583 hour (hr), 1.5 hr, 6.5 hr and 24 hr after the start of the 30-minute fludarabine infusion. Conclusions The pharmacokinetics of F-ara-A in an individual HCT patient can be accurately estimated by obtaining 4 blood samples (using the LSS) and maximum a posteriori (MAP) Bayesian estimation. These are essential tools for prospective pharmacodynamic studies seeking to determine if clinical outcomes are related to F-ara-A pharmacokinetics in patients receiving IV fludarabine in the ambulatory clinic. PMID:19671874

  17. Estimated Flood-Inundation Mapping for the Upper Blue River, Indian Creek, and Dyke Branch in Kansas City, Missouri, 2006-08

    USGS Publications Warehouse

    Kelly, Brian P.; Huizinga, Richard J.

    2008-01-01

    In the interest of improved public safety during flooding, the U.S. Geological Survey, in cooperation with the city of Kansas City, Missouri, completed a flood-inundation study of the Blue River in Kansas City, Missouri, from the U.S. Geological Survey streamflow gage at Kenneth Road to 63rd Street, of Indian Creek from the Kansas-Missouri border to its mouth, and of Dyke Branch from the Kansas-Missouri border to its mouth, to determine the estimated extent of flood inundation at selected flood stages on the Blue River, Indian Creek, and Dyke Branch. The results of this study spatially interpolate information provided by U.S. Geological Survey gages, Kansas City Automated Local Evaluation in Real Time gages, and the National Weather Service flood-peak prediction service that comprise the Blue River flood-alert system and are a valuable tool for public officials and residents to minimize flood deaths and damage in Kansas City. To provide public access to the information presented in this report, a World Wide Web site (http://mo.water.usgs.gov/indep/kelly/blueriver) was created that displays the results of two-dimensional modeling between Hickman Mills Drive and 63rd Street, estimated flood-inundation maps for 13 flood stages, the latest gage heights, and National Weather Service stage forecasts for each forecast location within the study area. The results of a previous study of flood inundation on the Blue River from 63rd Street to the mouth also are available. In addition the full text of this report, all tables and maps are available for download (http://pubs.usgs.gov/sir/2008/5068). Thirteen flood-inundation maps were produced at 2-foot intervals for water-surface elevations from 763.8 to 787.8 feet referenced to the Blue River at the 63rd Street Automated Local Evaluation in Real Time stream gage operated by the city of Kansas City, Missouri. Each map is associated with gages at Kenneth Road, Blue Ridge Boulevard, Kansas City (at Bannister Road), U.S. Highway 71

  18. Dual-energy digital mammography: calibration and inverse-mapping techniques to estimate calcification thickness and glandular-tissue ratio.

    PubMed

    Kappadath, S Cheenu; Shaw, Chris C

    2003-06-01

    Breast cancer may manifest as microcalcifications in x-ray mammography. Small microcalcifications, essential to the early detection of breast cancer, are often obscured by overlapping tissue structures. Dual-energy imaging, where separate low- and high-energy images are acquired and synthesized to cancel the tissue structures, may improve the ability to detect and visualize microcalcifications. Transmission measurements at two different kVp values were made on breast-tissue-equivalent materials under narrow-beam geometry using an indirect flat-panel mammographic imager. The imaging scenario consisted of variable aluminum thickness (to simulate calcifications) and variable glandular ratio (defined as the ratio of the glandular-tissue thickness to the total tissue thickness) for a fixed total tissue thickness, i.e., the clinical situation of microcalcification imaging with varying tissue composition under breast compression. The coefficients of the inverse-mapping functions used to determine material composition from dual-energy measurements were calculated by a least-squares analysis. The linear function poorly modeled both the aluminum thickness and the glandular ratio. The inverse-mapping functions were found to be well described by analytic functions of second (conic) or third (cubic) order. By comparing the model predictions with the calibration values, the root-mean-square residuals for both the cubic and the conic functions were approximately 50 µm for the aluminum thickness and approximately 0.05 for the glandular ratio.
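
    The conic (second-order) inverse-mapping fit described above can be sketched as an ordinary least-squares problem on a design matrix containing all terms up to second order in the two log-signals. The forward model generating the calibration data below is invented for illustration, not the paper's measured transmission data:

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic calibration: low/high-energy log-signals for known
        # aluminum thickness (mm) and glandular ratio
        n = 200
        t_al = rng.uniform(0.0, 2.0, n)   # calcification-equivalent thickness
        g = rng.uniform(0.0, 1.0, n)      # glandular ratio
        L = 0.9*t_al + 0.5*g + 0.02*t_al*g + rng.normal(0, 0.005, n)
        H = 0.5*t_al + 0.3*g - 0.01*g**2 + rng.normal(0, 0.005, n)

        # Conic inverse-mapping function t_al = f(L, H): design matrix with
        # all terms up to second order, solved by least squares.
        X = np.column_stack([np.ones(n), L, H, L*H, L**2, H**2])
        coef, *_ = np.linalg.lstsq(X, t_al, rcond=None)

        pred = X @ coef
        rms_residual = np.sqrt(np.mean((pred - t_al) ** 2))
        print(f"RMS residual of conic fit: {rms_residual*1000:.1f} microns")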

  19. Estimating and mapping chlorophyll content for a heterogeneous grassland: Comparing prediction power of a suite of vegetation indices across scales between years

    NASA Astrophysics Data System (ADS)

    Tong, Alexander; He, Yuhong

    2017-04-01

    This study investigates the performance of existing vegetation indices for retrieving chlorophyll content for a semi-arid mixed grass prairie ecosystem across scales, using in situ data collected in 2012 and 2013. A total of 144 published broadband (21) and narrowband (123) vegetation indices are evaluated for estimating chlorophyll content. Results indicate that narrowband indices that utilize reflectance data from one or more wavelengths in the red-edge region (∼690-750 nm) perform better. Broadband indices are found to be as effective as narrowband indices for chlorophyll content estimation at both leaf and canopy scales. The empirical relationships are generally stronger at the canopy scale than the leaf scale, attributable to the fact that leaf samples are collected during the peak growing season, when chlorophyll levels across plant species are relatively uniform. SPOT-5 and CASI-550 derived chlorophyll maps achieve map accuracies of 63.56% and 78.88%, respectively. The assessment of vegetation chlorophyll at the canopy level, especially using remote sensing imagery, is important for providing information pertaining to ecosystem health, such as the physiological status, productivity, or phenology of vegetation.
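
    As a pointer to the kinds of indices compared above, a broadband index (NDVI) and a red-edge chlorophyll index can be computed directly from band reflectances; the reflectance values below are illustrative only:

        # Hypothetical reflectance samples at selected bands (0-1 scale)
        r670, r720, r790 = 0.045, 0.18, 0.42   # red, red-edge, NIR

        ndvi = (r790 - r670) / (r790 + r670)   # broadband-style index
        ci_red_edge = r790 / r720 - 1.0        # red-edge chlorophyll index
        print(f"NDVI = {ndvi:.3f}, CI_red-edge = {ci_red_edge:.3f}")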

  20. Source estimates for MEG/EEG visual evoked responses constrained by multiple, retinotopically-mapped stimulus locations.

    PubMed

    Hagler, Donald J; Halgren, Eric; Martinez, Antigona; Huang, Mingxiong; Hillyard, Steven A; Dale, Anders M

    2009-04-01

    Studying the human visual system with high temporal resolution is a significant challenge due to the limitations of the available, noninvasive measurement tools. MEG and EEG provide the millisecond temporal resolution necessary for answering questions about intracortical communication involved in visual processing, but source estimation is ill-posed and unreliable when multiple, simultaneously active areas are located close together. To address this problem, we have developed a retinotopy-constrained source estimation method to calculate the time courses of activation in multiple visual areas. Source estimation was disambiguated by: (1) fixing MEG/EEG generator locations and orientations based on fMRI retinotopy and surface tessellations constructed from high-resolution MRI images; and (2) solving for many visual field locations simultaneously in MEG/EEG responses, assuming source current amplitudes to be constant or varying smoothly across the visual field. Because of these constraints on the solutions, estimated source waveforms become less sensitive to sensor noise or random errors in the specification of the retinotopic dipole models. We demonstrate the feasibility of this method and discuss future applications such as studying the timing of attentional modulation in individual visual areas.

  1. Health risk estimates for groundwater and soil contamination in the Slovak Republic: a convenient tool for identification and mapping of risk areas.

    PubMed

    Fajčíková, K; Cvečková, V; Stewart, A; Rapant, S

    2014-10-01

    We undertook a quantitative estimation of health risks to residents living in the Slovak Republic and exposed to contaminated groundwater (ingestion by adult population) and/or soils (ingestion by adult and child population). Potential risk areas were mapped to give a visual presentation at basic administrative units of the country (municipalities, districts, regions) for easy discussion with policy and decision-makers. The health risk estimates were calculated by US EPA methods, applying threshold values for chronic risk and non-threshold values for cancer risk. The potential health risk was evaluated for As, Ba, Cd, Cu, F, Hg, Mn, NO3 (-), Pb, Sb, Se and Zn for groundwater and As, B, Ba, Be, Cd, Cu, F, Hg, Mn, Mo, Ni, Pb, Sb, Se and Zn for soils. An increased health risk was identified mainly in historical mining areas highly contaminated by geogenic-anthropogenic sources (ore deposit occurrence, mining, metallurgy). Arsenic and antimony were the most significant elements in relation to health risks from groundwater and soil contamination in the Slovak Republic contributing a significant part of total chronic risk levels. Health risk estimation for soil contamination has highlighted the significance of exposure through soil ingestion in children. Increased cancer risks from groundwater and soil contamination by arsenic were noted in several municipalities and districts throughout the country in areas with significantly high arsenic levels in the environment. This approach to health risk estimations and visualization represents a fast, clear and convenient tool for delineation of risk areas at national and local levels.
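
    The US EPA threshold (chronic) and non-threshold (cancer) risk equations referenced above combine a chronic daily intake with a reference dose or a slope factor. A sketch for arsenic ingestion from groundwater; the exposure values are illustrative, and the arsenic toxicity values follow commonly cited IRIS numbers, which should be verified against current guidance before any real use:

        # Chronic daily intake: CDI = C * IR * EF * ED / (BW * AT)
        C   = 0.02    # As concentration in groundwater, mg/L (hypothetical)
        IR  = 2.0     # water ingestion rate, L/day (adult)
        EF  = 350.0   # exposure frequency, days/year
        ED  = 30.0    # exposure duration, years
        BW  = 70.0    # body weight, kg
        AT_nc = ED * 365.0     # averaging time, noncancer, days
        AT_c  = 70.0 * 365.0   # averaging time, cancer (lifetime), days

        RfD = 3.0e-4  # reference dose for As, mg/(kg*day)
        SF  = 1.5     # oral cancer slope factor for As, (mg/(kg*day))^-1

        cdi_nc = C * IR * EF * ED / (BW * AT_nc)
        cdi_c  = C * IR * EF * ED / (BW * AT_c)

        hq = cdi_nc / RfD   # hazard quotient (>1 flags potential concern)
        risk = cdi_c * SF   # incremental lifetime cancer risk
        print(f"HQ = {hq:.2f}, cancer risk = {risk:.1e}")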

  2. Model Parameter Estimation Using Ensemble Data Assimilation: A Case with the Nonhydrostatic Icosahedral Atmospheric Model NICAM and the Global Satellite Mapping of Precipitation Data

    NASA Astrophysics Data System (ADS)

    Kotsuki, Shunji; Terasaki, Koji; Yashiro, Hasashi; Tomita, Hirofumi; Satoh, Masaki; Miyoshi, Takemasa

    2017-04-01

    This study aims to improve precipitation forecasts from numerical weather prediction (NWP) models through effective use of satellite-derived precipitation data. Kotsuki et al. (2016, JGR-A) successfully improved the precipitation forecasts by assimilating the Japan Aerospace eXploration Agency (JAXA)'s Global Satellite Mapping of Precipitation (GSMaP) data into the Nonhydrostatic Icosahedral Atmospheric Model (NICAM) at 112-km horizontal resolution. Kotsuki et al. mitigated the non-Gaussianity of the precipitation variables by the Gaussian transform method for observed and forecasted precipitation using the previous 30-day precipitation data. This study extends the previous study by Kotsuki et al. and explores an online estimation of model parameters using ensemble data assimilation. We choose two globally-uniform parameters, one is the cloud-to-rain auto-conversion parameter of the Berry's scheme for large scale condensation and the other is the relative humidity threshold of the Arakawa-Schubert cumulus parameterization scheme. We perform the online-estimation of the two model parameters with an ensemble transform Kalman filter by assimilating the GSMaP precipitation data. The estimated parameters improve the analyzed and forecasted mixing ratio in the lower troposphere. Therefore, the parameter estimation would be a useful technique to improve the NWP models and their forecasts. This presentation will include the most recent progress up to the time of the symposium.
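
    State augmentation makes parameter estimation a by-product of the ensemble analysis: the parameter ensemble is updated by the same Kalman gain as the model state. The sketch below uses a stochastic EnKF on a toy scalar model as a stand-in for the ETKF and NICAM/GSMaP system used in the study; the model, parameter, and observation values are all invented:

        import numpy as np

        rng = np.random.default_rng(4)

        # Toy "model": forecast precipitation depends on one uncertain parameter
        def forecast(param):
            return 5.0 + 2.0 * param  # stand-in for a precipitation forecast

        true_param = 1.3
        y_obs = forecast(true_param) + rng.normal(0, 0.2)  # GSMaP-like obs

        # Ensemble of parameter values (state augmentation: the parameter is
        # updated by the same Kalman analysis as the model state)
        ens = rng.normal(1.0, 0.3, 40)
        hx = np.array([forecast(p) for p in ens])

        # Stochastic EnKF update with perturbed observations
        r = 0.2**2
        cov_py = np.cov(ens, hx)[0, 1]
        var_y = np.var(hx, ddof=1) + r
        gain = cov_py / var_y
        ens_a = ens + gain * (y_obs + rng.normal(0, 0.2, ens.size) - hx)
        print(f"prior mean {ens.mean():.2f} -> posterior mean {ens_a.mean():.2f}")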

  3. Planck intermediate results. XLVI. Reduction of large-scale systematic effects in HFI polarization maps and estimation of the reionization optical depth

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Aghanim, N.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Ballardini, M.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Basak, S.; Battye, R.; Benabed, K.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Carron, J.; Challinor, A.; Chiang, H. C.; Colombo, L. P. L.; Combet, C.; Comis, B.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Di Valentino, E.; Dickinson, C.; Diego, J. M.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fantaye, Y.; Finelli, F.; Forastieri, F.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frolov, A.; Galeotta, S.; Galli, S.; Ganga, K.; Génova-Santos, R. T.; Gerbino, M.; Ghosh, T.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Helou, G.; Henrot-Versillé, S.; Herranz, D.; Hivon, E.; Huang, Z.; Ilić, S