Science.gov

Sample records for a-posteriori map estimation

  1. An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.

    ERIC Educational Resources Information Center

    De Ayala, R. J.; And Others

    Expected a posteriori (EAP) estimation has a number of advantages over maximum likelihood estimation or maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…
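
    For concreteness, the following is a minimal sketch of EAP ability estimation under a two-parameter logistic (2PL) model, assuming a standard normal prior and a fixed quadrature grid; the item parameters and response pattern are invented for illustration and are not from the study.

    ```python
    import numpy as np

    def eap_estimate(responses, a, b, grid=np.linspace(-4, 4, 81)):
        """EAP ability estimate and posterior SD under a 2PL model."""
        # 2PL correct-response probabilities at every quadrature point
        p = 1.0 / (1.0 + np.exp(-a * (grid[:, None] - b)))
        like = np.prod(np.where(responses, p, 1.0 - p), axis=1)
        post = like * np.exp(-0.5 * grid**2)        # standard normal prior
        post /= post.sum()
        theta = np.sum(grid * post)                 # posterior mean (EAP)
        psd = np.sqrt(np.sum((grid - theta) ** 2 * post))
        return theta, psd

    a = np.array([1.2, 0.8, 1.5])                   # discriminations (toy)
    b = np.array([-0.5, 0.0, 1.0])                  # difficulties (toy)
    print(eap_estimate(np.array([1, 1, 0]), a, b))
    ```

    Because the EAP estimate is a posterior mean computed by quadrature, it stays finite even for all-correct or all-incorrect response patterns, which is the advantage over ML noted above.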

  2. Extracting volatility signal using maximum a posteriori estimation

    NASA Astrophysics Data System (ADS)

    Neto, David

    2016-11-01

    This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose, maximum a posteriori (MAP) estimation is performed. A double-exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in its realizations and, consequently, heavy-tailed marginal distributions of the log-returns. We consider two routes for choosing the regularization, and we compare our MAP estimate to the realized volatility measure for three exchange rates.
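
    A minimal sketch of this kind of MAP problem, assuming log-returns r_t ~ N(0, exp(2 h_t)) and a Laplace (double-exponential) prior on the increments of the log-volatility h, which permits sharp jumps. The regularization weight lam is a placeholder, and a generic smooth optimizer stands in where a dedicated nonsmooth solver would be preferable.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_log_posterior(h, r, lam):
        # Gaussian likelihood of returns given log-volatility h
        nll = np.sum(h + 0.5 * r**2 * np.exp(-2.0 * h))
        # Laplace prior on increments of h allows occasional sharp jumps;
        # the |.| term is nonsmooth, so L-BFGS-B only approximates the MAP path
        return nll + lam * np.sum(np.abs(np.diff(h)))

    rng = np.random.default_rng(0)
    r = rng.normal(0.0, 0.01, size=500)             # toy "log-returns"
    h0 = np.full(r.size, np.log(r.std()))           # flat initial guess
    res = minimize(neg_log_posterior, h0, args=(r, 10.0), method="L-BFGS-B")
    vol_map = np.exp(res.x)                         # denoised volatility signal
    ```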

  3. A-posteriori error estimation for second order mechanical systems

    NASA Astrophysics Data System (ADS)

    Ruiner, Thomas; Fehr, Jörg; Haasdonk, Bernard; Eberhard, Peter

    2012-06-01

    One important issue in the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. As far as safety questions are concerned, knowledge about the error introduced by this reduction is helpful and very important. In this work, an a-posteriori error estimator for linear first order systems is extended to error estimation for mechanical second order systems. Due to the special second order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that it is independent of the reduction technique used. Therefore, it can be used for moment-matching, Gramian-matrix or modal based model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.

  4. A posteriori error estimates in voice source recovery

    NASA Astrophysics Data System (ADS)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of voice source pulse recovery from a segment of a speech signal is considered. A special mathematical model relating these quantities is used for the solution. A variational method for solving the inverse problem of voice source recovery is proposed for a new parametric class of sources, namely piecewise-linear sources (PWL-sources). A technique for a posteriori numerical error estimation of the obtained solutions is also presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, along with a corresponding study of the a posteriori error estimates. Numerical experiments on speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. It is noted that the a posteriori error estimates can be used as a quality criterion for the obtained voice source pulses in application to speaker recognition.

  5. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…
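
    The nonuniqueness at issue can be made visible with a brute-force scan of the 2PL log-likelihood over a theta grid; the item parameters and the aberrant response pattern below are invented so that more than one interior local maximum can appear.

    ```python
    import numpy as np

    def loglik(theta, resp, a, b):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return np.sum(np.where(resp, np.log(p), np.log(1.0 - p)))

    a = np.array([2.0, 0.5, 2.5])                   # toy discriminations
    b = np.array([-1.0, 0.0, 1.5])                  # toy difficulties
    resp = np.array([0, 1, 1])                      # aberrant pattern
    grid = np.linspace(-5, 5, 1001)
    ll = np.array([loglik(t, resp, a, b) for t in grid])
    is_max = (ll[1:-1] > ll[:-2]) & (ll[1:-1] > ll[2:])
    print("interior local maxima near theta =", grid[1:-1][is_max])
    ```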

  6. An Anisotropic A posteriori Error Estimator for CFD

    NASA Astrophysics Data System (ADS)

    Feijóo, Raúl A.; Padra, Claudio; Quintana, Fernando

    In this article, a robust anisotropic adaptive algorithm is presented for solving compressible-flow equations using a stabilized CFD solver and automatic mesh generators. The combination includes a mesh generator, a flow solver, and an a posteriori error-estimator code. The estimator was selected among several available choices (Almeida et al. (2000). Comput. Methods Appl. Mech. Engng, 182, 379-400; Borges et al. (1998). "Computational mechanics: new trends and applications". Proceedings of the 4th World Congress on Computational Mechanics, Bs.As., Argentina), giving a powerful computational tool. The main aim is to capture solution discontinuities, in this case shocks, using the least amount of computational resources (i.e., elements) compatible with a solution of good quality. This leads to high aspect-ratio elements (stretching). To achieve this, a directional error estimator was specifically selected. The numerical results show good behavior of the error estimator, resulting in strongly adapted meshes in few steps, typically three or four iterations, enough to capture shocks using a moderate and well-distributed number of elements.

  7. A posteriori model validation for the temporal order of directed functional connectivity maps

    PubMed Central

    Beltz, Adriene M.; Molenaar, Peter C. M.

    2015-01-01

    A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group and individual levels while incorporating external input; white noise tests were then run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data). PMID:26379489

  8. A posteriori model validation for the temporal order of directed functional connectivity maps.

    PubMed

    Beltz, Adriene M; Molenaar, Peter C M

    2015-01-01

    A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group and individual levels while incorporating external input; white noise tests were then run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data).

  9. Automatic lung lobe segmentation using particles, thin plate splines, and maximum a posteriori estimation.

    PubMed

    Ross, James C; San José Estépar, Raúl; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K; Washko, George R

    2010-01-01

    We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases.
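
    As a sketch of the final surface-fitting stage only, the snippet below fits a thin plate spline through scattered candidate fissure points with SciPy's RBFInterpolator; the point coordinates are synthetic stand-ins for the particle output, and the MAP filtering step is not reproduced.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(1)
    xy = rng.uniform(0, 100, size=(200, 2))         # (x, y) of fissure particles
    z = 50 + 0.1 * xy[:, 0] + rng.normal(0, 0.5, 200)   # noisy fissure heights

    # Smoothing > 0 keeps the surface from chasing residual noise particles
    tps = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=1.0)
    grid = np.mgrid[0:100:64j, 0:100:64j].reshape(2, -1).T
    surface = tps(grid).reshape(64, 64)             # interpolated lobe boundary
    ```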

  10. Automatic Lung Lobe Segmentation Using Particles, Thin Plate Splines, and Maximum a Posteriori Estimation

    PubMed Central

    Ross, James C.; Estépar, Raúl San José; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K.; Washko, George R.

    2011-01-01

    We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases. PMID:20879396

  11. A-Posteriori Error Estimation for Hyperbolic Conservation Laws with Constraint

    NASA Technical Reports Server (NTRS)

    Barth, Timothy

    2004-01-01

    This lecture considers a-posteriori error estimates for the numerical solution of conservation laws with time invariant constraints such as those arising in magnetohydrodynamics (MHD) and gravitational physics. Using standard duality arguments, a-posteriori error estimates for the discontinuous Galerkin finite element method are then presented for MHD with solenoidal constraint. From these estimates, a procedure for adaptive discretization is outlined. A taxonomy of Green's functions for the linearized MHD operator is given which characterizes the domain of dependence for pointwise errors. The extension to other constrained systems, such as the Einstein equations of gravitational physics, is then considered. Finally, future directions and open problems are discussed.

  12. Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.; Thompson, Vanessa M.

    2011-01-01

    A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…

  13. Weighted Maximum-a-Posteriori Estimation in Tests Composed of Dichotomous and Polytomous Items

    ERIC Educational Resources Information Center

    Sun, Shan-Shan; Tao, Jian; Chang, Hua-Hua; Shi, Ning-Zhong

    2012-01-01

    For mixed-type tests composed of dichotomous and polytomous items, polytomous items often yield more information than dichotomous items. To reflect the difference between the two types of items and to improve the precision of ability estimation, an adaptive weighted maximum-a-posteriori (WMAP) estimation is proposed. To evaluate the performance of…

  14. An Analysis of a Finite Element Method for Convection-Diffusion Problems. Part II. A Posteriori Error Estimates and Adaptivity.

    DTIC Science & Technology

    1983-03-01

    An Analysis of a Finite Element Method for Convection-Diffusion Problems. Part II: A Posteriori Error Estimates and Adaptivity, by W. G. Szymczak and I. Babuška.

  15. A Posteriori Error Estimation for Discontinuous Galerkin Approximations of Hyperbolic Systems

    NASA Technical Reports Server (NTRS)

    Larson, Mats G.; Barth, Timothy J.

    1999-01-01

    This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.

  16. A-posteriori error estimation for the finite point method with applications to compressible flow

    NASA Astrophysics Data System (ADS)

    Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio

    2017-08-01

    An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.

  17. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine the variance difference between the maximum likelihood and expected a posteriori estimation methods as a function of the number of test items in an aptitude test. The variance represents the accuracy achieved by both the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…

  18. Sparsity-promoting and edge-preserving maximum a posteriori estimators in non-parametric Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Agapiou, Sergios; Burger, Martin; Dashti, Masoumeh; Helin, Tapio

    2018-04-01

    We consider the inverse problem of recovering an unknown functional parameter u in a separable Banach space, from a noisy observation vector y of its image through a known, possibly non-linear, map $\mathcal{G}$. We adopt a Bayesian approach to the problem and consider Besov space priors (see Lassas et al (2009 Inverse Problems Imaging 3 87-122)), which are well-known for their edge-preserving and sparsity-promoting properties and have recently attracted wide attention especially in the medical imaging community. Our key result is to show that in this non-parametric setup the maximum a posteriori (MAP) estimates are characterized by the minimizers of a generalized Onsager-Machlup functional of the posterior. This is done independently for the so-called weak and strong MAP estimates, which as we show coincide in our context. In addition, we prove a form of weak consistency for the MAP estimators in the infinitely informative data limit. Our results are remarkable for two reasons: first, the prior distribution is non-Gaussian and does not meet the smoothness conditions required in previous research on non-parametric MAP estimates. Second, the result analytically justifies existing uses of the MAP estimate in finite but high dimensional discretizations of Bayesian inverse problems with the considered Besov priors.

  19. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    NASA Astrophysics Data System (ADS)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
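
    A minimal sketch of the inexact solve mentioned above: a few symmetric Gauss-Seidel sweeps (one forward and one backward pass per sweep) on a toy symmetric positive-definite system standing in for the global error problem.

    ```python
    import numpy as np

    def sym_gauss_seidel(A, b, x, sweeps=4):
        n = len(b)
        for _ in range(sweeps):
            # forward pass followed by backward pass
            for i in list(range(n)) + list(range(n - 1, -1, -1)):
                x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(sym_gauss_seidel(A, b, np.zeros(3)))      # close to np.linalg.solve(A, b)
    ```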

  20. A posteriori noise estimation in variable data sets. With applications to spectra and light curves

    NASA Astrophysics Data System (ADS)

    Czesla, S.; Molle, T.; Schmitt, J. H. M. M.

    2018-01-01

    Most physical data sets contain a stochastic contribution produced by measurement noise or other random sources along with the signal. Usually, neither the signal nor the noise is accurately known prior to the measurement, so that both have to be estimated a posteriori. We have studied a procedure to estimate the standard deviation of the stochastic contribution assuming normality and independence, requiring a sufficiently well-sampled data set to yield reliable results. This procedure is based on estimating the standard deviation in a sample of weighted sums of arbitrarily sampled data points and is identical to the so-called DER_SNR algorithm for specific parameter settings. To demonstrate the applicability of our procedure, we present applications to synthetic data, high-resolution spectra, and a large sample of space-based light curves and, finally, give guidelines for applying the procedure in situations not explicitly considered here, to promote its adoption in data analysis.
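
    For reference, a sketch of the DER_SNR special case the procedure reduces to, assuming independent Gaussian noise on a well-sampled series; the constant 1.482602/sqrt(6) scales the median absolute second difference (taken two pixels apart) into a standard deviation.

    ```python
    import numpy as np

    def der_snr_noise(flux):
        flux = np.asarray(flux, dtype=float)
        # |2 f_i - f_{i-2} - f_{i+2}| cancels locally linear signal
        d = np.abs(2.0 * flux[2:-2] - flux[:-4] - flux[4:])
        return 1.482602 / np.sqrt(6.0) * np.median(d)

    rng = np.random.default_rng(2)
    x = np.linspace(0, 1, 2000)
    spectrum = np.sin(8 * x) + rng.normal(0, 0.05, x.size)
    print(der_snr_noise(spectrum))                  # recovers roughly 0.05
    ```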

  1. A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes

    DOE PAGES

    Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.

    2017-02-05

    Implicit–Explicit (IMEX) schemes are widely used time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.
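
    For orientation, a sketch of the simplest member of the IMEX family, a first-order IMEX Euler step that treats a stiff linear operator implicitly and the remaining terms explicitly; the multi-stage schemes and the adjoint-based error machinery of the paper are not reproduced.

    ```python
    import numpy as np

    def imex_euler_step(u, dt, L, N):
        # Solve (I - dt*L) u_next = u + dt*N(u): implicit in L, explicit in N
        I = np.eye(len(u))
        return np.linalg.solve(I - dt * L, u + dt * N(u))

    L = np.array([[-100.0, 0.0], [0.0, -1.0]])      # stiff linear part
    N = lambda u: np.array([np.sin(u[1]), 0.1 * u[0]])  # nonstiff part
    u = np.array([1.0, 1.0])
    for _ in range(10):
        u = imex_euler_step(u, 0.05, L, N)
    ```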

  2. A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes

    SciTech Connect

    Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.

    Implicit–Explicit (IMEX) schemes are widely used time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.

  3. An a-posteriori finite element error estimator for adaptive grid computation of viscous incompressible flows

    NASA Astrophysics Data System (ADS)

    Wu, Heng

    2000-10-01

    In this thesis, an a-posteriori error estimator is presented and employed for solving viscous incompressible flow problems. In an effort to detect local flow features, such as vortices and separation, and to resolve flow details precisely, a velocity angle error estimator e_θ, based on the spatial derivative of the velocity direction field, is designed and constructed. The a-posteriori error estimator corresponds to the antisymmetric part of the deformation-rate tensor, and it is sensitive to the second derivative of the velocity angle field. A discussion of its rationale reveals that the velocity angle error estimator is a curvature error estimator, and its value reflects the accuracy of streamline curves. It is also found that the velocity angle error estimator contains the nonlinear convective term of the Navier-Stokes equations, and it identifies and computes the direction difference when the convective acceleration direction and the flow velocity direction differ. Through benchmarking computed variables against the analytic solution of Kovasznay flow or the finest grid of cavity flow, it is demonstrated that the velocity angle error estimator performs better than the strain error estimator. The benchmarking work also shows that the computed profile obtained by using e_θ achieves the best match with the true θ field, and that it is asymptotic to the true θ variation field, with the promise of fewer unknowns. Unstructured grids are adapted by employing local cell division as well as unrefinement of transition cells. Using element and node classes can efficiently construct a hierarchical data structure which provides cell and node inter-reference at each adaptive level. Employing element pointers and node pointers can dynamically maintain the connection of adjacent elements and adjacent nodes, and thus avoids time-consuming search processes. The adaptive scheme is applied to viscous incompressible flow at different

  4. Finite Element A Posteriori Error Estimation for Heat Conduction. Degree awarded by George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Lang, Christapher G.; Bey, Kim S. (Technical Monitor)

    2002-01-01

    This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modeling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.

  5. Noise stochastic corrected maximum a posteriori estimator for birefringence imaging using polarization-sensitive optical coherence tomography

    PubMed Central

    Kasaragod, Deepa; Makita, Shuichi; Hong, Young-Joo; Yasuno, Yoshiaki

    2017-01-01

    This paper presents a noise-stochastic corrected maximum a posteriori estimator for birefringence imaging using Jones matrix optical coherence tomography. The estimator described in this paper is based on the relationship between the probability distribution functions of the measured birefringence and the effective signal-to-noise ratio (ESNR), as well as the true birefringence and the true ESNR. The Monte Carlo method is used to describe this relationship numerically, and adaptive 2D kernel density estimation provides the likelihood for a posteriori estimation of the true birefringence. Improved estimation is shown for the new estimator with the stochastic model of the ESNR, in comparison to the old estimator, both based on the Jones matrix noise model. A comparison with the mean estimator is also performed. Numerical simulation validates the superiority of the new estimator, whose superior performance was also shown by in vivo measurement of the optic nerve head. PMID:28270974

  6. Adaptive reduction of constitutive model-form error using a posteriori error estimation techniques

    SciTech Connect

    Bishop, Joseph E.; Brown, Judith Alice

    In engineering practice, models are typically kept as simple as possible for ease of setup and use, computational efficiency, maintenance, and overall reduced complexity to achieve robustness. In solid mechanics, a simple and efficient constitutive model may be favored over one that is more predictive but is difficult to parameterize, is computationally expensive, or is simply not available within a simulation tool. In order to quantify the modeling error due to the choice of a relatively simple and less predictive constitutive model, we adopt the use of a posteriori model-form error-estimation techniques. Based on local error indicators in the energy norm, an algorithm is developed for reducing the modeling error by spatially adapting the material parameters in the simpler constitutive model. The resulting material parameters are not material properties per se, but depend on the given boundary-value problem. As a first step to the more general nonlinear case, we focus here on linear elasticity, in which the “complex” constitutive model is general anisotropic elasticity and the chosen simpler model is isotropic elasticity. The algorithm for adaptive error reduction is demonstrated using two examples: (1) a transversely isotropic plate with a hole subjected to tension, and (2) a transversely isotropic tube with two side holes subjected to torsion.

  7. Adaptive reduction of constitutive model-form error using a posteriori error estimation techniques

    DOE PAGES

    Bishop, Joseph E.; Brown, Judith Alice

    2018-06-15

    In engineering practice, models are typically kept as simple as possible for ease of setup and use, computational efficiency, maintenance, and overall reduced complexity to achieve robustness. In solid mechanics, a simple and efficient constitutive model may be favored over one that is more predictive but is difficult to parameterize, is computationally expensive, or is simply not available within a simulation tool. In order to quantify the modeling error due to the choice of a relatively simple and less predictive constitutive model, we adopt the use of a posteriori model-form error-estimation techniques. Based on local error indicators in the energy norm, an algorithm is developed for reducing the modeling error by spatially adapting the material parameters in the simpler constitutive model. The resulting material parameters are not material properties per se, but depend on the given boundary-value problem. As a first step to the more general nonlinear case, we focus here on linear elasticity, in which the “complex” constitutive model is general anisotropic elasticity and the chosen simpler model is isotropic elasticity. The algorithm for adaptive error reduction is demonstrated using two examples: (1) a transversely isotropic plate with a hole subjected to tension, and (2) a transversely isotropic tube with two side holes subjected to torsion.

  8. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J.D., E-mail: jdjakem@sandia.gov; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  9. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE PAGES

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  10. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are also of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.

  11. Maximum a posteriori decoder for digital communications

    NASA Technical Reports Server (NTRS)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
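
    A sketch of the generic MAP decision rule that underlies such a decoder, assuming additive white Gaussian noise and known candidate waveforms; this is illustrative only and is not the patented generalized estimator-correlator with phase estimation.

    ```python
    import numpy as np

    def map_decode(y, candidates, log_priors, noise_var):
        # log p(s_k | y) = log p(s_k) + (<y, s_k> - |s_k|^2 / 2) / noise_var + const
        scores = [lp + (y @ s - 0.5 * s @ s) / noise_var
                  for s, lp in zip(candidates, log_priors)]
        return int(np.argmax(scores))

    t = np.linspace(0, 1, 256)
    cands = [np.cos(2 * np.pi * 4 * t), np.cos(2 * np.pi * 4 * t + np.pi)]  # BPSK pair
    rng = np.random.default_rng(3)
    y = cands[1] + rng.normal(0, 0.8, t.size)
    print(map_decode(y, cands, [np.log(0.5)] * 2, 0.64))    # expected: 1
    ```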

  12. Mean phase predictor for maximum a posteriori demodulator

    NASA Technical Reports Server (NTRS)

    Altes, Richard A. (Inventor)

    1996-01-01

    A system and method for optimal maximum a posteriori (MAP) demodulation using a novel mean phase predictor. The mean phase predictor conducts cumulative averaging over multiple blocks of phase samples to provide accurate prior mean phases, to be input into a MAP phase estimator.

  13. ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve

    PubMed Central

    Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

    2014-01-01

    In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725

  14. Adaptive vibrational configuration interaction (A-VCI): A posteriori error estimation to efficiently compute anharmonic IR spectra.

    PubMed

    Garnier, Romain; Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier

    2016-05-28

    A new variational algorithm called adaptive vibrational configuration interaction (A-VCI), intended for the resolution of the vibrational Schrödinger equation, was developed. The main advantage of this approach is to efficiently reduce the dimension of the active space generated in the configuration interaction (CI) process. Here, we assume that the Hamiltonian is written as a sum of products of operators. The adaptive algorithm was developed with the use of three correlated conditions, i.e., a suitable starting space, a criterion for convergence, and a procedure to expand the approximate space. The speed of the algorithm was increased with the use of an a posteriori error estimator (the residual) to select the most relevant direction in which to expand the space. Two examples have been selected for benchmarking. In the case of H2CO, we mainly study the performance of the A-VCI algorithm: comparison with the variation-perturbation method, choice of the initial space, and residual contributions. For CH3CN, we compare the A-VCI results with a computed reference spectrum using the same potential energy surface and for an active space reduced by about 90%.

  15. Adaptive vibrational configuration interaction (A-VCI): A posteriori error estimation to efficiently compute anharmonic IR spectra

    NASA Astrophysics Data System (ADS)

    Garnier, Romain; Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier

    2016-05-01

    A new variational algorithm called adaptive vibrational configuration interaction (A-VCI), intended for the resolution of the vibrational Schrödinger equation, was developed. The main advantage of this approach is to efficiently reduce the dimension of the active space generated in the configuration interaction (CI) process. Here, we assume that the Hamiltonian is written as a sum of products of operators. The adaptive algorithm was developed with the use of three correlated conditions, i.e., a suitable starting space, a criterion for convergence, and a procedure to expand the approximate space. The speed of the algorithm was increased with the use of an a posteriori error estimator (the residual) to select the most relevant direction in which to expand the space. Two examples have been selected for benchmarking. In the case of H2CO, we mainly study the performance of the A-VCI algorithm: comparison with the variation-perturbation method, choice of the initial space, and residual contributions. For CH3CN, we compare the A-VCI results with a computed reference spectrum using the same potential energy surface and for an active space reduced by about 90%.

  16. Accuracy and Variability of Item Parameter Estimates from Marginal Maximum a Posteriori Estimation and Bayesian Inference via Gibbs Samplers

    ERIC Educational Resources Information Center

    Wu, Yi-Fang

    2015-01-01

    Item response theory (IRT) uses a family of statistical models for estimating stable characteristics of items and examinees and defining how these characteristics interact in describing item and test performance. With a focus on the three-parameter logistic IRT (Birnbaum, 1968; Lord, 1980) model, the current study examines the accuracy and…

  17. Using meta-information of a posteriori Bayesian solutions of the hypocentre location task for improving accuracy of location error estimation

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2015-06-01

    The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analysed in many branches of physics, including seismology and oceanology, to name a few. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. Although estimating the location of earthquake foci is relatively simple, quantitative estimation of the location accuracy is a really challenging task even if the probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling and a priori uncertainties. In this paper, we address this task when the statistics of observational and/or modelling errors are unknown. This common situation requires the introduction of a priori constraints on the likelihood (misfit) function, which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we propose an approach based on an analysis of Shannon's entropy calculated for the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries some information on the uncertainties of the solution found.
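
    A sketch of the proposed meta-characteristic on a synthetic one-dimensional posterior: the Shannon entropy of the discretized a posteriori PDF, where a sharper posterior yields a lower entropy.

    ```python
    import numpy as np

    def shannon_entropy(pdf):
        mass = pdf / pdf.sum()                      # probability mass per grid cell
        nz = mass > 0
        return -np.sum(mass[nz] * np.log(mass[nz]))  # entropy in nats

    x = np.linspace(-5, 5, 501)
    for width in (0.3, 0.7, 2.0):
        posterior = np.exp(-0.5 * (x / width) ** 2)  # synthetic a posteriori PDF
        print(width, shannon_entropy(posterior))     # sharper -> lower entropy
    ```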

  18. Population pharmacokinetics and maximum a posteriori probability Bayesian estimator of abacavir: application of individualized therapy in HIV-infected infants and toddlers.

    PubMed

    Zhao, Wei; Cella, Massimo; Della Pasqua, Oscar; Burger, David; Jacqz-Aigrain, Evelyne

    2012-04-01

    Abacavir is used to treat HIV infection in both adults and children. The recommended paediatric dose is 8 mg kg(-1) twice daily up to a maximum of 300 mg twice daily. Weight was identified as the central covariate influencing pharmacokinetics of abacavir in children. A population pharmacokinetic model was developed to describe both once and twice daily pharmacokinetic profiles of abacavir in infants and toddlers. Standard dosage regimen is associated with large interindividual variability in abacavir concentrations. A maximum a posteriori probability Bayesian estimator of AUC(0–t) based on three time points (0, 1 or 2, and 3 h) is proposed to support area under the concentration-time curve (AUC) targeted individualized therapy in infants and toddlers. To develop a population pharmacokinetic model for abacavir in HIV-infected infants and toddlers, which will be used to describe both once and twice daily pharmacokinetic profiles, identify covariates that explain variability and propose optimal time points to optimize the area under the concentration-time curve (AUC) targeted dosage and individualize therapy. The pharmacokinetics of abacavir was described with plasma concentrations from 23 patients using nonlinear mixed-effects modelling (NONMEM) software. A two-compartment model with first-order absorption and elimination was developed. The final model was validated using bootstrap, visual predictive check and normalized prediction distribution errors. The Bayesian estimator was validated using the cross-validation and simulation-estimation method. The typical population pharmacokinetic parameters and relative standard errors (RSE) were apparent systemic clearance (CL) 13.4 l h−1 (RSE 6.3%), apparent central volume of distribution 4.94 l (RSE 28.7%), apparent peripheral volume of distribution 8.12 l (RSE 14.2%), apparent intercompartment clearance 1.25 l h−1 (RSE 16.9%) and absorption rate constant 0.758 h−1 (RSE 5.8%). The covariate analysis

  19. Population pharmacokinetics and maximum a posteriori probability Bayesian estimator of abacavir: application of individualized therapy in HIV-infected infants and toddlers

    PubMed Central

    Zhao, Wei; Cella, Massimo; Della Pasqua, Oscar; Burger, David; Jacqz-Aigrain, Evelyne

    2012-01-01

    AIMS To develop a population pharmacokinetic model for abacavir in HIV-infected infants and toddlers, which will be used to describe both once and twice daily pharmacokinetic profiles, identify covariates that explain variability and propose optimal time points to optimize the area under the concentration–time curve (AUC) targeted dosage and individualize therapy. METHODS The pharmacokinetics of abacavir was described with plasma concentrations from 23 patients using nonlinear mixed-effects modelling (NONMEM) software. A two-compartment model with first-order absorption and elimination was developed. The final model was validated using bootstrap, visual predictive check and normalized prediction distribution errors. The Bayesian estimator was validated using the cross-validation and simulation–estimation method. RESULTS The typical population pharmacokinetic parameters and relative standard errors (RSE) were apparent systemic clearance (CL) 13.4 l h−1 (RSE 6.3%), apparent central volume of distribution 4.94 l (RSE 28.7%), apparent peripheral volume of distribution 8.12 l (RSE 14.2%), apparent intercompartment clearance 1.25 l h−1 (RSE 16.9%) and absorption rate constant 0.758 h−1 (RSE 5.8%). The covariate analysis identified weight as the individual factor influencing the apparent oral clearance: CL = 13.4 × (weight/12)^1.14. The maximum a posteriori probability Bayesian estimator, based on three concentrations measured at 0, 1 or 2, and 3 h after drug intake allowed predicting individual AUC0–t. CONCLUSIONS The population pharmacokinetic model developed for abacavir in HIV-infected infants and toddlers accurately described both once and twice daily pharmacokinetic profiles. The maximum a posteriori probability Bayesian estimator of AUC0–t was developed from the final model and can be used routinely to optimize individual dosing. PMID:21988586
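
    To illustrate the three-point MAP Bayesian idea, here is a deliberately simplified sketch: a one-compartment oral model with lognormal priors on clearance and volume, fitted to three sparse samples. The dose, observations, and variance terms are invented; the paper itself uses a two-compartment NONMEM model, and only ka and the population CL and V below are taken from its reported values.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    DOSE, KA = 96.0, 0.758                          # mg (toy), 1/h (from paper)

    def conc(t, cl, v):                             # one-compartment oral model
        ke = cl / v
        return DOSE * KA / (v * (KA - ke)) * (np.exp(-ke * t) - np.exp(-KA * t))

    def neg_log_post(logp, t, obs, pop, omega2, sigma2):
        cl, v = np.exp(logp)
        resid = obs - conc(t, cl, v)                # residual (data) term
        prior = np.sum((logp - np.log(pop)) ** 2 / (2 * omega2))
        return np.sum(resid**2) / (2 * sigma2) + prior

    t = np.array([0.5, 2.0, 3.0])                   # h; sparse sampling design
    obs = np.array([4.1, 3.0, 2.2])                 # mg/L, invented
    pop = np.array([13.4, 4.94])                    # population CL (l/h), V (l)
    fit = minimize(neg_log_post, np.log(pop),
                   args=(t, obs, pop, 0.1, 0.25), method="Nelder-Mead")
    cl_map, v_map = np.exp(fit.x)                   # individual MAP estimates
    ```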

  20. Considerations about expected a posteriori estimation in adaptive testing: adaptive a priori, adaptive correction for bias, and adaptive integration interval.

    PubMed

    Raiche, Gilles; Blais, Jean-Guy

    2009-01-01

    In a computerized adaptive test, we would like to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Unfortunately, decreasing the number of items is accompanied by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. The authors suggest that it is possible to reduce the bias, and even the standard error of the estimate, by applying to each provisional estimation one or a combination of the following strategies: adaptive correction for bias proposed by Bock and Mislevy (1982), adaptive a priori estimate, and adaptive integration interval.

  1. Quantifying the impact of material-model error on macroscale quantities-of-interest using multiscale a posteriori error-estimation techniques

    DOE PAGES

    Brown, Judith A.; Bishop, Joseph E.

    2016-07-20

    An a posteriori error-estimation framework is introduced to quantify and reduce modeling errors resulting from approximating complex mesoscale material behavior with a simpler macroscale model. Such errors may be prevalent when modeling welds and additively manufactured structures, where spatial variations and material textures may be present in the microstructure. We consider a case where a <100> fiber texture develops in the longitudinal scanning direction of a weld. Transversely isotropic elastic properties are obtained through homogenization of a microstructural model with this texture and are considered the reference weld properties within the error-estimation framework. Conversely, isotropic elastic properties are considered approximate weld properties since they contain no representation of texture. Errors introduced by using isotropic material properties to represent a weld are assessed through a quantified error bound in the elastic regime. Lastly, an adaptive error reduction scheme is used to determine the optimal spatial variation of the isotropic weld properties to reduce the error bound.

  2. Reliable and efficient a posteriori error estimation for adaptive IGA boundary element methods for weakly-singular integral equations

    PubMed Central

    Feischl, Michael; Gantner, Gregor; Praetorius, Dirk

    2015-01-01

    We consider the Galerkin boundary element method (BEM) for weakly-singular integral equations of the first kind in 2D. We analyze some residual-type a posteriori error estimator which provides a lower as well as an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the framework of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence. PMID:26085698

  3. Maximum a posteriori Bayesian estimation of mycophenolic Acid area under the concentration-time curve: is this clinically useful for dosage prediction yet?

    PubMed

    Staatz, Christine E; Tett, Susan E

    2011-12-01

    This review seeks to summarize the available data about Bayesian estimation of the area under the plasma concentration-time curve (AUC) and dosage prediction for mycophenolic acid (MPA), and to evaluate whether sufficient evidence is available for routine use of Bayesian dosage prediction in clinical practice. A literature search identified 14 studies that assessed the predictive performance of maximum a posteriori Bayesian estimation of MPA AUC and one report that retrospectively evaluated how closely dosage recommendations based on Bayesian forecasting achieved targeted MPA exposure. Studies to date have mostly been undertaken in renal transplant recipients, with limited investigation in patients treated with MPA for autoimmune disease or haematopoietic stem cell transplantation. All of these studies have involved use of the mycophenolate mofetil (MMF) formulation of MPA, rather than the enteric-coated mycophenolate sodium (EC-MPS) formulation. Bias associated with estimation of MPA AUC using Bayesian forecasting was generally less than 10%. However, some difficulties with imprecision were evident, with values ranging from 4% to 34% (based on estimation involving two or more concentration measurements). Evaluation of whether MPA dosing decisions based on Bayesian forecasting (by the free website service https://pharmaco.chu-limoges.fr) achieved target drug exposure has only been undertaken once. When MMF dosage recommendations were applied by clinicians, a higher proportion (72-80%) of subsequent estimated MPA AUC values were within the 30-60 mg · h/L target range, compared with when dosage recommendations were not followed (only 39-57% within target range). Such findings provide evidence that Bayesian dosage prediction is clinically useful for achieving target MPA AUC. This study, however, was retrospective and focussed only on adult renal transplant recipients. Furthermore, in this study, Bayesian-generated AUC estimations and dosage predictions were not compared

  4. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    PubMed

    Curtis, Tyler E; Roeder, Ryan K

    2017-10-01

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in
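
    A sketch of the image-domain decomposition step at a single voxel, given a basis matrix calibrated as described: each column holds one material's attenuation across the five energy bins. Nonnegative least squares stands in for the study's maximum a posteriori estimator, and every number below is a placeholder.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Rows: five energy bins; columns: gadolinium, calcium, water (all toy values;
    # the jump in the Gd column mimics a k-edge straddled by a bin edge).
    A = np.array([[0.90, 0.55, 1.00],
                  [0.70, 0.48, 1.00],
                  [1.40, 0.40, 1.00],
                  [1.10, 0.33, 1.00],
                  [0.85, 0.28, 1.00]])
    true = np.array([0.03, 0.20, 0.90])             # Gd, Ca, water contributions
    y = A @ true + np.random.default_rng(4).normal(0, 0.01, 5)
    coeffs, _ = nnls(A, y)                          # decomposed composition
    print(coeffs)
    ```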

  5. Practical Considerations about Expected A Posteriori Estimation in Adaptive Testing: Adaptive A Priori, Adaptive Correction for Bias, and Adaptive Integration Interval.

    ERIC Educational Resources Information Center

    Raiche, Gilles; Blais, Jean-Guy

    In a computerized adaptive test (CAT), it would be desirable to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Decreasing the number of items is accompanied, however, by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. G. Raiche (2000) has…

  6. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and a-Posteriori Error Estimation Methods

    SciTech Connect

    Estep, Donald

    2015-11-30

    This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.

  7. Comparing Mapped Plot Estimators

    Treesearch

    Paul C. Van Deusen

    2006-01-01

    Two alternative derivations of estimators for mean and variance from mapped plots are compared by considering the models that support the estimators and by simulation. It turns out that both models lead to the same estimator for the mean but lead to very different variance estimators. The variance estimators based on the least valid model assumptions are shown to...

  8. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    PubMed

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In the level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show the desirable performance of our method.

  9. Maximum a posteriori classification of multifrequency, multilook, synthetic aperture radar intensity data

    NASA Technical Reports Server (NTRS)

    Rignot, E.; Chellappa, R.

    1993-01-01

    We present a maximum a posteriori (MAP) classifier for classifying multifrequency, multilook, single polarization SAR intensity data into regions or ensembles of pixels of homogeneous and similar radar backscatter characteristics. A model for the prior joint distribution of the multifrequency SAR intensity data is combined with a Markov random field for representing the interactions between region labels to obtain an expression for the posterior distribution of the region labels given the multifrequency SAR observations. The maximization of the posterior distribution yields Bayes's optimum region labeling or classification of the SAR data or its MAP estimate. The performance of the MAP classifier is evaluated by using computer-simulated multilook SAR intensity data as a function of the parameters in the classification process. Multilook SAR intensity data are shown to yield higher classification accuracies than one-look SAR complex amplitude data. The MAP classifier is extended to the case in which the radar backscatter from the remotely sensed surface varies within the SAR image because of incidence angle effects. The results obtained illustrate the practicality of the method for combining SAR intensity observations acquired at two different frequencies and for improving classification accuracy of SAR data.
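
    A stripped-down sketch of the MAP labelling idea (single-frequency rather than the record's multifrequency case; the class means, look number, and smoothing weight are illustrative assumptions): multilook intensity is Gamma-distributed around each class mean, a Potts prior encodes the label interactions, and iterated conditional modes (ICM) maximises the posterior.

      import numpy as np
      from scipy.special import gammaln

      def gamma_loglik(I, mu, L=4):
          # Log-likelihood of L-look SAR intensity with mean backscatter mu.
          return (L * np.log(L / mu) + (L - 1) * np.log(I)
                  - L * I / mu - gammaln(L))

      def map_icm(I, class_means, beta=1.0, n_iter=10):
          ll = np.stack([gamma_loglik(I, m) for m in class_means], axis=-1)
          labels = ll.argmax(-1)                  # ML initialisation
          for _ in range(n_iter):                 # ICM sweeps
              post = ll.copy()
              for k in range(ll.shape[-1]):
                  for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                      # Potts prior: reward agreement with the 4-neighbours
                      post[..., k] += beta * (np.roll(labels, shift, (0, 1)) == k)
              labels = post.argmax(-1)
          return labels

      rng = np.random.default_rng(0)
      truth = np.zeros((64, 64), int); truth[:, 32:] = 1
      means = np.array([1.0, 3.0])
      I = rng.gamma(4.0, means[truth] / 4.0)      # simulated 4-look intensity
      print("accuracy:", (map_icm(I, means) == truth).mean())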

  10. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    NASA Astrophysics Data System (ADS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-11-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the PAPA. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality.

  11. Mapped Plot Patch Size Estimates

    Treesearch

    Paul C. Van Deusen

    2005-01-01

    This paper demonstrates that the mapped plot design is relatively easy to analyze and describes existing formulas for mean and variance estimators. New methods are developed for using mapped plots to estimate average patch size of condition classes. The patch size estimators require assumptions about the shape of the condition class, limiting their utility. They may...

  12. Simultaneous maximum a posteriori longitudinal PET image reconstruction

    NASA Astrophysics Data System (ADS)

    Ellis, Sam; Reader, Andrew J.

    2017-09-01

    Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of count levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.

  13. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction

    PubMed Central

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. PMID:23271835

  14. A filtering approach to edge preserving MAP estimation of images.

    PubMed

    Humphrey, David; Taubman, David

    2011-05-01

    The authors present a computationally efficient technique for maximum a posteriori (MAP) estimation of images in the presence of both blur and noise. The image is divided into statistically independent regions. Each region is modelled with a WSS Gaussian prior. Classical Wiener filter theory is used to generate a set of convex sets in the solution space, with the solution to the MAP estimation problem lying at the intersection of these sets. The proposed algorithm uses an underlying segmentation of the image, and means of determining and refining this segmentation are described. The algorithm is suitable for a range of image restoration problems, as it provides a computationally efficient means to deal with the shortcomings of Wiener filtering without sacrificing the computational simplicity of the filtering approach. The algorithm is also of interest from a theoretical viewpoint as it provides a continuum of solutions between Wiener filtering and inverse filtering depending upon the segmentation used. We do not attempt to show here that the proposed method is the best general approach to the image reconstruction problem. However, related work referenced herein shows excellent performance in the specific problem of demosaicing.
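
    The Wiener building block the record combines region-by-region can be written compactly in the Fourier domain. This whole-image sketch assumes a known PSF and a constant noise-to-signal power ratio, both illustrative; note that letting nsr go to zero recovers the inverse filter, the other end of the continuum mentioned above.

      import numpy as np

      def wiener_deconvolve(y, h, nsr):
          # y: blurred, noisy image; h: PSF in an array of the same shape,
          # centred at index [0, 0]; nsr: noise-to-signal power ratio.
          H = np.fft.fft2(h)
          G = np.conj(H) / (np.abs(H) ** 2 + nsr)   # nsr -> 0 gives the
          return np.real(np.fft.ifft2(G * np.fft.fft2(y)))  # inverse filter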

  15. A POSTERIORI ERROR ANALYSIS OF TWO STAGE COMPUTATION METHODS WITH APPLICATION TO EFFICIENT DISCRETIZATION AND THE PARAREAL ALGORITHM.

    PubMed

    Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff

    2016-01-01

    We consider numerical methods for initial value problems that employ a two stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two stage computations then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates various variations in the two stage computation and in formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.

  16. A Posteriori Restoration of Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Brown, R.; Boden, A. F.

    1995-01-01

    The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.

  17. Improved Topographic Mapping Through Multi-Baseline SAR Interferometry with MAP Estimation

    NASA Astrophysics Data System (ADS)

    Dong, Yuting; Jiang, Houjun; Zhang, Lu; Liao, Mingsheng; Shi, Xuguo

    2015-05-01

    There is an inherent contradiction between the sensitivity of height measurement and the accuracy of phase unwrapping for SAR interferometry (InSAR) over rough terrain. This contradiction can be resolved by multi-baseline InSAR analysis, which exploits multiple phase observations with different normal baselines to improve phase unwrapping accuracy, or even avoid phase unwrapping. In this paper we propose a maximum a posteriori (MAP) estimation method assisted by SRTM DEM data for multi-baseline InSAR topographic mapping. Based on our method, a data processing flow is established and applied in processing multi-baseline ALOS/PALSAR dataset. The accuracy of resultant DEMs is evaluated by using a standard Chinese national DEM of scale 1:10,000 as reference. The results show that multi-baseline InSAR can improve DEM accuracy compared with single-baseline case. It is noteworthy that phase unwrapping is avoided and the quality of multi-baseline InSAR DEM can meet the DTED-2 standard.

  18. Estimating uncertainty in map intersections

    Treesearch

    Ronald E. McRoberts; Mark A. Hatfield; Susan J. Crocker

    2009-01-01

    Traditionally, natural resource managers have asked the question "How much?" and have received sample-based estimates of resource totals or means. Increasingly, however, the same managers are now asking the additional question "Where?" and are expecting spatially explicit answers in the form of maps. Recent development of natural resource databases...

  19. Maximum a posteriori resampling of noisy, spatially correlated data

    NASA Astrophysics Data System (ADS)

    Goff, John A.; Jenkins, Chris; Calder, Brian

    2006-08-01

    In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application. We present here an alternative to filtering: a newly developed method for correcting noise in data by finding the "best" value given available information. The motivating rationale is that data points that are close to each other in space cannot differ by "too much," where "too much" is governed by the field covariance. Data with large uncertainties will frequently violate this condition and therefore ought to be corrected, or "resampled." Our solution for resampling is determined by the maximum of the a posteriori density function defined by the intersection of (1) the data error probability density function (pdf) and (2) the conditional pdf, determined by the geostatistical kriging algorithm applied to proximal data values. A maximum a posteriori solution can be computed sequentially going through all the data, but the solution depends on the order in which the data are examined. We approximate the global a posteriori solution by randomizing this order and taking the average. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum a posteriori resampling algorithm. The method is also applied to three marine geology/geophysics data examples, demonstrating the viability of the method for diverse applications: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is a combination of both analytic (low uncertainty) and word-based (higher uncertainty) sources; and (3) side-scan backscatter data from the Martha's Vineyard Coastal Observatory which are, as
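
    Under Gaussian assumptions the "best" value described here has a closed form: the MAP of the product of the data-error pdf and the kriging conditional pdf is a precision-weighted average. The covariance model, the known zero mean, and all numbers below are illustrative assumptions, and this sketch omits the randomized sequential ordering and averaging used in the record.

      import numpy as np

      def kriging_predict(x0, xs, zs, var=1.0, scale=10.0):
          # Simple kriging (known zero mean) with an exponential covariance.
          C = var * np.exp(-np.abs(xs[:, None] - xs[None, :]) / scale)
          c0 = var * np.exp(-np.abs(xs - x0) / scale)
          w = np.linalg.solve(C, c0)
          return w @ zs, var - w @ c0      # conditional mean and variance

      def map_resample(z_obs, var_obs, z_pred, var_pred):
          # MAP of the product of two Gaussian pdfs: precision-weighted mean.
          prec = 1.0 / var_obs + 1.0 / var_pred
          return (z_obs / var_obs + z_pred / var_pred) / prec

      xs = np.array([0.0, 5.0, 20.0])
      zs = np.array([1.0, 3.0, 1.2])       # the value at x = 5 is suspect
      z_pred, var_pred = kriging_predict(5.0, xs[[0, 2]], zs[[0, 2]])
      print(map_resample(zs[1], 4.0, z_pred, var_pred))  # pulled toward ~1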

  20. Estimating A Reference Standard Segmentation With Spatially Varying Performance Parameters: Local MAP STAPLE

    PubMed Central

    Commowick, Olivier; Akhondi-Asl, Alireza; Warfield, Simon K.

    2012-01-01

    We present a new algorithm, called local MAP STAPLE, to estimate from a set of multi-label segmentations both a reference standard segmentation and spatially varying performance parameters. It is based on a sliding window technique to estimate the segmentation and the segmentation performance parameters for each input segmentation. In order to allow for optimal fusion from the small amount of data in each local region, and to account for the possibility of labels not being observed in a local region of some (or all) input segmentations, we introduce prior probabilities for the local performance parameters through a new Maximum A Posteriori formulation of STAPLE. Further, we propose an expression to compute confidence intervals in the estimated local performance parameters. We carried out several experiments with local MAP STAPLE to characterize its performance and value for local segmentation evaluation. First, with simulated segmentations with known reference standard segmentation and spatially varying performance, we show that local MAP STAPLE performs better than both STAPLE and majority voting. Then we present evaluations with data sets from clinical applications. These experiments demonstrate that spatial adaptivity in segmentation performance is an important property to capture. We compared the local MAP STAPLE segmentations to STAPLE, and to previously published fusion techniques and demonstrate the superiority of local MAP STAPLE over other state-of-the-art algorithms. PMID:22562727

  1. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    NASA Astrophysics Data System (ADS)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  2. A posteriori operation detection in evolving software models

    PubMed Central

    Langer, Philip; Wimmer, Manuel; Brosch, Petra; Herrmannsdörfer, Markus; Seidl, Martina; Wieland, Konrad; Kappel, Gerti

    2013-01-01

    Like every software artifact, software models are subject to continuous evolution. The operations applied between two successive versions of a model are crucial for understanding its evolution. Generic approaches for detecting operations a posteriori identify atomic operations, but neglect composite operations, such as refactorings, which leads to cluttered difference reports. To tackle this limitation, we present an orthogonal extension of existing atomic operation detection approaches that also detects composite operations. Our approach searches for occurrences of composite operations within a set of detected atomic operations in a post-processing manner. One major benefit is that the specifications available for executing composite operations are reused to detect applications of them. We evaluate the accuracy of the approach in a real-world case study and investigate the scalability of our implementation in an experiment. PMID:23471366

  3. The role of a posteriori mathematics in physics

    NASA Astrophysics Data System (ADS)

    MacKinnon, Edward

    2018-05-01

    The calculus that co-evolved with classical mechanics relied on definitions of functions and differentials that accommodated physical intuitions. In the early nineteenth century mathematicians began the rigorous reformulation of calculus and eventually succeeded in putting almost all of mathematics on a set-theoretic foundation. Physicists traditionally ignore this rigorous mathematics. Physicists often rely on a posteriori math, a practice of using physical considerations to determine mathematical formulations. This is illustrated by examples from classical and quantum physics. A justification of such practice stems from a consideration of the role of phenomenological theories in classical physics and effective theories in contemporary physics. This relates to the larger question of how physical theories should be interpreted.

  4. Effects of using a posteriori methods for the conservation of integral invariants. [for weather forecasting

    NASA Technical Reports Server (NTRS)

    Takacs, Lawrence L.

    1988-01-01

    The nature and effect of using a posteriori adjustments to nonconservative finite-difference schemes to enforce integral invariants of the corresponding analytic system are examined. The method of a posteriori integral constraint restoration is analyzed for the case of linear advection, and the harmonic response associated with the a posteriori adjustments is examined in detail. The conservative properties of the shallow water system are reviewed, and the constraint restoration algorithm as applied to the shallow water equations is described. A comparison is made between forecasts obtained using implicit and a posteriori methods for the conservation of mass, energy, and potential enstrophy in the complete nonlinear shallow-water system.
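
    A toy sketch of the constraint-restoration idea for linear advection (the grid, Courant number, and initial profile are assumed for illustration; this is not the record's shallow-water implementation): a first-order upwind step on a periodic grid conserves the discrete mass exactly but dissipates the quadratic invariant, which is restored a posteriori after each step by rescaling deviations from the mean.

      import numpy as np

      nx, c = 200, 0.5                            # grid size, Courant number
      u = np.exp(-0.5 * ((np.arange(nx) - 50.0) / 8.0) ** 2)
      mass0, energy0 = u.sum(), (u ** 2).sum()    # integral invariants
      for _ in range(400):
          u = (1 - c) * u + c * np.roll(u, 1)     # upwind step: keeps mass,
                                                  # dissipates sum(u**2)
          ubar = u.mean()
          dev = u - ubar
          alpha = np.sqrt((energy0 - nx * ubar ** 2) / (dev ** 2).sum())
          u = ubar + alpha * dev                  # restore energy, keep mass
      print(abs(u.sum() - mass0), abs((u ** 2).sum() - energy0))  # both ~0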

  5. MAP Estimators for Piecewise Continuous Inversion

    DTIC Science & Technology

    2016-08-08

    MAP estimators for piecewise continuous inversion. M. M. Dunlop and A. M. Stuart, Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK. Published 8 August 2016. Abstract: We study the inverse problem of estimating a field u_a from data comprising a finite set of nonlinear functionals of u_a... It is then natural to study maximum a posteriori (MAP) estimators. Recently (Dashti et al 2013 Inverse Problems 29 095017) it has been shown that MAP...

  6. Application of a posteriori granddaughter and modified granddaughter designs to determine Holstein haplotype effects

    USDA-ARS's Scientific Manuscript database

    A posteriori and modified granddaughter designs were applied to determine haplotype effects for Holstein bulls and cows with BovineSNP50 genotypes. The a posteriori granddaughter design was applied to 52 sire families, each with >100 genotyped sons with genetic evaluations based on progeny tests. Fo...

  7. Application of a posteriori granddaughter and modified granddaughter designs to determine Holstein haplotype effects

    USDA-ARS's Scientific Manuscript database

    A posteriori and modified granddaughter designs were applied to determine haplotype effects for Holstein bulls and cows with BovineSNP50 genotypes. The a posteriori granddaughter design was applied to 52 sire families, each with >100 genotyped sons with genetic evaluations based on progeny tests. Fo...

  8. A Posteriori Finite Element Bounds for Sensitivity Derivatives of Partial-Differential-Equation Outputs. Revised

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Patera, Anthony T.; Peraire, Jaume

    1998-01-01

    We present a Neumann-subproblem a posteriori finite element procedure for the efficient and accurate calculation of rigorous, 'constant-free' upper and lower bounds for sensitivity derivatives of functionals of the solutions of partial differential equations. The design motivation for sensitivity derivative error control is discussed; the a posteriori finite element procedure is described; the asymptotic bounding properties and computational complexity of the method are summarized; and illustrative numerical results are presented.

  9. Research on adaptive optics image restoration algorithm based on improved joint maximum a posteriori method

    NASA Astrophysics Data System (ADS)

    Zhang, Lijuan; Li, Yang; Wang, Junnan; Liu, Ying

    2018-03-01

    In this paper, we propose a point spread function (PSF) reconstruction method and joint maximum a posteriori (JMAP) estimation method for adaptive optics image restoration. Using the JMAP method as the basic principle, we establish the joint log likelihood function of multi-frame adaptive optics (AO) images based on Gaussian image noise models. To begin with, combining the observation conditions and AO system characteristics, a predicted PSF model for the wavefront phase effect is developed; then, we build up iterative solution formulas of the AO image based on our proposed algorithm, addressing the implementation process of the multi-frame AO image joint deconvolution method. We conduct a series of experiments on simulated and real degraded AO images to evaluate our proposed algorithm. Compared with the Wiener iterative blind deconvolution (Wiener-IBD) algorithm and the Richardson-Lucy IBD algorithm, our algorithm has better restoration effects than the others, including higher peak signal-to-noise ratio (PSNR) and Laplacian sum (LS) values. The research results have certain application value for actual AO image restoration.

  10. Statistical modeling and MAP estimation for body fat quantification with MRI ratio imaging

    NASA Astrophysics Data System (ADS)

    Wong, Wilbur C. K.; Johnson, David H.; Wilson, David L.

    2008-03-01

    We are developing small animal imaging techniques to characterize the kinetics of lipid accumulation/reduction of fat depots in response to genetic/dietary factors associated with obesity and metabolic syndromes. Recently, we developed an MR ratio imaging technique that approximately yields lipid/{lipid + water}. In this work, we develop a statistical model for the ratio distribution that explicitly includes a partial volume (PV) fraction of fat and a mixture of a Rician and multiple Gaussians. Monte Carlo hypothesis testing showed that our model was valid over a wide range of coefficient of variation of the denominator distribution (c.v.: 0-0.20) and correlation coefficient among the numerator and denominator (ρ: 0-0.95), which cover the typical values that we found in MRI data sets (c.v.: 0.027-0.063, ρ: 0.50-0.75). Then a maximum a posteriori (MAP) estimate for the fat percentage per voxel is proposed. Using a digital phantom with many PV voxels, we found that ratio values were not linearly related to PV fat content and that our method accurately described the histogram. In addition, the new method estimated the ground truth within +1.6% vs. +43% for an approach using an uncorrected ratio image, when we simply threshold the ratio image. On the six genetically obese rat data sets, the MAP estimate gave total fat volumes of 279 +/- 45 mL, values 21% smaller than those from the uncorrected ratio images, principally due to the non-linear PV effect. We conclude that our algorithm can increase the accuracy of fat volume quantification even in regions having many PV voxels, e.g. ectopic fat depots.

  11. Ontology based log content extraction engine for a posteriori security control.

    PubMed

    Azkia, Hanieh; Cuppens-Boulahia, Nora; Cuppens, Frédéric; Coatrieux, Gouenou

    2012-01-01

    In a posteriori access control, users are accountable for actions they performed and must provide evidence, when required by some legal authorities for instance, to prove that these actions were legitimate. Generally, log files contain the data needed to achieve this goal. This logged data can be recorded in several formats; we consider here IHE-ATNA (Integrating the Healthcare Enterprise-Audit Trail and Node Authentication) as the log format. The difficulty lies in extracting useful information regardless of the log format. A posteriori access control frameworks often include a log filtering engine that provides this extraction function. In this paper we define and enforce this function by building an IHE-ATNA based ontology model, which we query using SPARQL, and show how the a posteriori security controls are made effective and easier based on this function.

  12. Uncertainty estimation for map-based analyses

    Treesearch

    Ronald E. McRoberts; Mark A. Hatfield; Susan J. Crocker

    2010-01-01

    Traditionally, natural resource managers have asked the question, “How much?” and have received sample-based estimates of resource totals or means. Increasingly, however, the same managers are now asking the additional question, “Where?” and are expecting spatially explicit answers in the form of maps. Recent development of natural resource databases, access to...

  13. A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    DTIC Science & Technology

    2014-04-01

    TR-14-33. A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems. Donald Estep and Michael... April 2014. HDTRA1-09-1-0036. Approved for public release, distribution is unlimited. Related publication: Barrier methods for critical exponent problems in geometric analysis and mathematical physics, J. Erway and M. Holst, submitted for publication.

  14. Application of the a posteriori granddaughter design to the Holstein genome

    USDA-ARS's Scientific Manuscript database

    An a posteriori granddaughter design was applied to determine haplotype effects for the Holstein genome. A total of 52 grandsire families, each with >=100 genotyped sons with genetic evaluations based on progeny tests, were analyzed for 33 traits (milk, fat, and protein yields; fat and protein perce...

  15. A Posteriori Correction of Forecast and Observation Error Variances

    NASA Technical Reports Server (NTRS)

    Rukhovets, Leonid

    2005-01-01

    The proposed method of total observation and forecast error variance correction is based on the assumption of a normal distribution of "observed-minus-forecast" residuals (O-F), where O is an observed value and F is usually a short-term model forecast. This assumption can be accepted for several types of observations (except humidity) which are not grossly in error. The degree of nearness to a normal distribution can be estimated by the skewness (lack of symmetry) a₃ = μ₃/σ³ and the kurtosis a₄ = μ₄/σ⁴ - 3, where μᵢ is the i-th order central moment and σ is the standard deviation. It is well known that for a normal distribution a₃ = a₄ = 0.
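
    A direct numerical check of these two indices (the synthetic residuals are an assumption for illustration):

      import numpy as np

      def normality_indices(omf):
          # Skewness a3 and excess kurtosis a4 of O-F residuals.
          d = omf - omf.mean()
          sigma = d.std()
          a3 = (d ** 3).mean() / sigma ** 3      # lack of symmetry
          a4 = (d ** 4).mean() / sigma ** 4 - 3  # 0 for a normal distribution
          return a3, a4

      rng = np.random.default_rng(1)
      print(normality_indices(rng.normal(0.0, 1.5, 100_000)))  # both near 0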

  16. Analysis of the geophysical data using a posteriori algorithms

    NASA Astrophysics Data System (ADS)

    Voskoboynikova, Gyulnara; Khairetdinov, Marat

    2016-04-01

    The monitoring, prediction, and prevention of extraordinary natural and technogenic events are priority problems of our time. Such events include earthquakes, volcanic eruptions, lunar-solar tides, landslides, falling celestial bodies, explosions of stockpiled ammunition, and the numerous quarry explosions in open coal mines that provoke technogenic earthquakes. Monitoring is based on a number of successive stages, which include remote registration of the event responses and measurement of the main parameters, such as the arrival times of seismic waves or the original waveforms. At the final stage, the inverse problems of determining the geographic location and time of the registered event are solved. Improving the accuracy of parameter estimation from the original records under high noise is therefore an important problem. As is known, the main measurement errors arise from the influence of external noise, the difference between the real and model structures of the medium, imprecision in the timing of the event epicenter, and instrumental errors. We therefore propose and investigate a posteriori algorithms that are more accurate than known algorithms. They are based on a combination of a discrete optimization method and a fractal approach for the joint detection and estimation of arrival times in quasi-periodic waveform sequences, for geophysical monitoring problems with improved accuracy. Alternative approaches existing today do not provide the required accuracy. The proposed algorithms are considered for the tasks of vibration sounding of the Earth during lunar and solar tides, and for the problem of monitoring the borehole seismic source location in commercial drilling.

  17. Evaluation of Techniques Used to Estimate Cortical Feature Maps

    PubMed Central

    Katta, Nalin; Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.

    2011-01-01

    Functional properties of neurons are often distributed nonrandomly within a cortical area and form topographic maps that reveal insights into neuronal organization and interconnection. Some functional maps, such as in visual cortex, are fairly straightforward to discern with a variety of techniques, while other maps, such as in auditory cortex, have resisted easy characterization. In order to determine appropriate protocols for establishing accurate functional maps in auditory cortex, artificial topographic maps were probed under various conditions, and the accuracy of estimates formed from the actual maps was quantified. Under these conditions, low-complexity maps such as sound frequency can be estimated accurately with as few as 25 total samples (e.g., electrode penetrations or imaging pixels) if neural responses are averaged together. More samples are required to achieve the highest estimation accuracy for higher complexity maps, and averaging improves map estimate accuracy even more than increasing sampling density. Undersampling without averaging can result in misleading map estimates, while undersampling with averaging can lead to the false conclusion of no map when one actually exists. Uniform sample spacing only slightly improves map estimation over nonuniform sample spacing typical of serial electrode penetrations. Tessellation plots commonly used to visualize maps estimated using nonuniform sampling are always inferior to linearly interpolated estimates, although differences are slight at higher sampling densities. Within primary auditory cortex, then, multiunit sampling with at least 100 samples would likely result in reasonable feature map estimates for all but the highest complexity maps and the highest variability that might be expected. PMID:21889537
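
    A sketch of the kind of comparison this record describes (the artificial map, noise level, and sample counts are assumptions): sample a smooth feature map at scattered sites, average repeated noisy responses, and reconstruct by linear interpolation.

      import numpy as np
      from scipy.interpolate import griddata

      rng = np.random.default_rng(2)
      gy, gx = np.mgrid[0:1:64j, 0:1:64j]
      true_map = np.sin(3 * gx) + 0.5 * gy     # low-complexity feature map

      pts = rng.random((100, 2))               # 100 sampling sites (x, y)
      def sampled(noise_sd, repeats):
          vals = np.sin(3 * pts[:, 0]) + 0.5 * pts[:, 1]
          noise = rng.normal(0.0, noise_sd, (repeats, len(pts)))
          return (vals + noise).mean(axis=0)   # average over repeats

      for repeats in (1, 10):                  # averaging improves accuracy
          est = griddata(pts, sampled(0.5, repeats), (gx, gy), method="linear")
          print(repeats, "repeat(s) -> MSE", np.nanmean((est - true_map) ** 2))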

  18. Automatic simplification of systems of reaction-diffusion equations by a posteriori analysis.

    PubMed

    Maybank, Philip J; Whiteley, Jonathan P

    2014-02-01

    Many mathematical models in biology and physiology are represented by systems of nonlinear differential equations. In recent years these models have become increasingly complex in order to explain the enormous volume of data now available. A key role of modellers is to determine which components of the model have the greatest effect on a given observed behaviour. An approach for automatically fulfilling this role, based on a posteriori analysis, has recently been developed for nonlinear initial value ordinary differential equations [J.P. Whiteley, Model reduction using a posteriori analysis, Math. Biosci. 225 (2010) 44-52]. In this paper we extend this model reduction technique for application to both steady-state and time-dependent nonlinear reaction-diffusion systems. Exemplar problems drawn from biology are used to demonstrate the applicability of the technique. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. [Methods of a posteriori identification of food patterns in Brazilian children: a systematic review].

    PubMed

    Carvalho, Carolina Abreu de; Fonsêca, Poliana Cristina de Almeida; Nobre, Luciana Neri; Priore, Silvia Eloiza; Franceschini, Sylvia do Carmo Castro

    2016-01-01

    The objective of this study is to provide guidance for identifying dietary patterns using the a posteriori approach, and to analyze the methodological aspects of the studies conducted in Brazil that identified the dietary patterns of children. Articles were selected from the Latin American and Caribbean Literature on Health Sciences, Scientific Electronic Library Online and Pubmed databases. The key words were: Dietary pattern; Food pattern; Principal Components Analysis; Factor analysis; Cluster analysis; Reduced rank regression. We included studies that identified dietary patterns of children using the a posteriori approach. Seven studies published between 2007 and 2014 were selected, six of which were cross-sectional and one a cohort study. Five studies used the food frequency questionnaire for dietary assessment; one used a 24-hour dietary recall and the other a food list. The exploratory method used in most publications was principal components factor analysis, followed by cluster analysis. The sample size of the studies ranged from 232 to 4231, the values of the Kaiser-Meyer-Olkin test from 0.524 to 0.873, and Cronbach's alpha from 0.51 to 0.69. Few Brazilian studies have identified dietary patterns of children using the a posteriori approach, and principal components factor analysis was the technique most used.

  20. Constrained map-based inventory estimation

    Treesearch

    Paul C. Van Deusen; Francis A. Roesch

    2007-01-01

    A region can conceptually be tessellated into polygons at different scales or resolutions. Likewise, samples can be taken from the region to determine the value of a polygon variable for each scale. Sampled polygons can be used to estimate values for other polygons at the same scale. However, estimates should be compatible across the different scales. Estimates are...

  1. Combined Uncertainty and A-Posteriori Error Bound Estimates for CFD Calculations: Theory and Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    Simulation codes often utilize finite-dimensional approximation, resulting in numerical error. Examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty, for example, uncertain parameters and fields associated with the imposition of initial and boundary data, uncertain physical model parameters such as chemical reaction rates, mixture model parameters, material property parameters, etc.

  2. Analysis of the Efficiency of an A-Posteriori Error Estimator for Linear Triangular Finite Elements

    DTIC Science & Technology

    1991-06-01


  3. Estimating mapped-plot forest attributes with ratios of means

    Treesearch

    S.J. Zarnoch; W.A. Bechtold

    2000-01-01

    The mapped-plot design utilized by the U.S. Department of Agriculture (USDA) Forest Inventory and Analysis and the National Forest Health Monitoring Programs is described. Data from 2458 forested mapped plots systematically spread across 25 States reveal that 35 percent straddle multiple conditions. The ratio-of-means estimator is developed as a method to obtain...

  4. Satellite-map position estimation for the Mars rover

    NASA Technical Reports Server (NTRS)

    Hayashi, Akira; Dean, Thomas

    1989-01-01

    A method for locating the Mars rover using an elevation map generated from satellite data is described. In exploring its environment, the rover is assumed to generate a local rover-centered elevation map that can be used to extract information about the relative position and orientation of landmarks corresponding to local maxima. These landmarks are integrated into a stochastic map which is then matched with the satellite map to obtain an estimate of the robot's current location. The landmarks are not explicitly represented in the satellite map. The results of the matching algorithm correspond to a probabilistic assessment of whether or not the robot is located within a given region of the satellite map. By assigning a probabilistic interpretation to the information stored in the satellite map, researchers are able to provide a precise characterization of the results computed by the matching algorithm.

  5. A MAP blind image deconvolution algorithm with bandwidth over-constrained

    NASA Astrophysics Data System (ADS)

    Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong

    2018-03-01

    We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with a bandwidth over-constraint and total variation (TV) regularization to recover a clear image from AO-corrected images. The point spread functions (PSFs) are estimated with their bandwidth constrained to be less than the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise magnification. The performance is demonstrated on simulated data.
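
    A minimal sketch of what such a bandwidth over-constraint can look like in practice (the cutoff value, grid, and projection details are assumptions, not taken from the record): after each PSF update, optical-transfer-function components beyond the cutoff are zeroed and the PSF is renormalised.

      import numpy as np

      def project_psf(psf, cutoff):
          # Project a PSF estimate onto the band-limited, non-negative,
          # unit-sum constraint set.
          otf = np.fft.fft2(psf)
          fy = np.fft.fftfreq(psf.shape[0])[:, None]
          fx = np.fft.fftfreq(psf.shape[1])[None, :]
          otf[np.hypot(fx, fy) > cutoff] = 0.0   # enforce the band limit
          psf = np.real(np.fft.ifft2(otf))
          psf = np.clip(psf, 0.0, None)          # keep the PSF non-negative
          return psf / psf.sum()                 # unit energy

      rng = np.random.default_rng(4)
      psf0 = rng.random((64, 64)); psf0 /= psf0.sum()
      psf = project_psf(psf0, cutoff=0.2)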

  6. Estimation of chaotic coupled map lattices using symbolic vector dynamics

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Pei, Wenjiang; Cheung, Yiu-ming; Shen, Yi; He, Zhenya

    2010-01-01

    In [K. Wang, W.J. Pei, Z.Y. He, Y.M. Cheung, Phys. Lett. A 367 (2007) 316], an original method based on symbolic vector dynamics was proposed for initial condition estimation in an additive white Gaussian noise environment. The estimation precision of this method is determined by the symbolic errors of the symbolic vector sequence obtained by symbolizing the received signal. This Letter further develops the symbolic vector dynamical estimation method. We correct symbolic errors with the backward vector and the estimated values by using different symbols, and thus the estimation precision can be improved. Both theoretical and experimental results show that this algorithm enables us to recover the initial condition of a coupled map lattice exactly in both noisy and noise-free cases. We thereby provide novel analytical techniques for understanding turbulence in coupled map lattices.

  7. Covariance and correlation estimation in electron-density maps.

    PubMed

    Altomare, Angela; Cuocci, Corrado; Giacovazzo, Carmelo; Moliterni, Anna; Rizzi, Rosanna

    2012-03-01

    Quite recently two papers have been published [Giacovazzo & Mazzone (2011). Acta Cryst. A67, 210-218; Giacovazzo et al. (2011). Acta Cryst. A67, 368-382] which calculate the variance at any point of an electron-density map at any stage of the phasing process. The main aim of the papers was to associate a standard deviation with each pixel of the map, in order to obtain a better estimate of the map reliability. This paper deals with the covariance estimate between points of an electron-density map in any space group, centrosymmetric or non-centrosymmetric, regardless of the correlation between the model and target structures. The aim is to verify whether the electron density in one point of the map is amplified or depressed as an effect of the electron density in one or more other points of the map. High values of the covariances are usually connected with undesired features of the map. The phases are the primitive random variables of our probabilistic model; the covariance changes with the quality of the model and therefore with the quality of the phases. The conclusive formulas show that the covariance is also influenced by the Patterson map. Uncertainty in measurements may influence the covariance, particularly in the final stages of the structure refinement; a general formula is obtained taking into account both phase and measurement uncertainty, valid at any stage of the crystal structure solution.

  8. Incorporating priors on expert performance parameters for segmentation validation and label fusion: a maximum a posteriori STAPLE

    PubMed Central

    Commowick, Olivier; Warfield, Simon K

    2010-01-01

    In order to evaluate the quality of segmentations of an image and assess intra- and inter-expert variability in segmentation performance, an Expectation Maximization (EM) algorithm for Simultaneous Truth And Performance Level Estimation (STAPLE) was recently developed. This algorithm, originally presented for segmentation validation, has since been used for many applications, such as atlas construction and decision fusion. However, the manual delineation of structures of interest is a very time consuming and burdensome task. Further, as the time required and burden of manual delineation increase, the accuracy of the delineation is decreased. Therefore, it may be desirable to ask the experts to delineate only a reduced number of structures or the segmentation of all structures by all experts may simply not be achieved. Fusion from data with some structures not segmented by each expert should be carried out in a manner that accounts for the missing information. In other applications, locally inconsistent segmentations may drive the STAPLE algorithm into an undesirable local optimum, leading to misclassifications or misleading experts performance parameters. We present a new algorithm that allows fusion with partial delineation and which can avoid convergence to undesirable local optima in the presence of strongly inconsistent segmentations. The algorithm extends STAPLE by incorporating prior probabilities for the expert performance parameters. This is achieved through a Maximum A Posteriori formulation, where the prior probabilities for the performance parameters are modeled by a beta distribution. We demonstrate that this new algorithm enables dramatically improved fusion from data with partial delineation by each expert in comparison to fusion with STAPLE. PMID:20879379
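
    The core of the Maximum A Posteriori extension can be illustrated with the closed-form smoothing a Beta prior induces on a Bernoulli performance parameter (the prior strength a, b and the counts below are illustrative assumptions; the full algorithm embeds this update inside EM):

      def map_sensitivity(tp, fn, a=8.0, b=2.0):
          # Mode of the Beta(a + tp, b + fn) posterior for an expert's
          # sensitivity; reduces to the ML ratio tp/(tp + fn) when a = b = 1.
          return (tp + a - 1.0) / (tp + fn + a + b - 2.0)

      # Small-sample case: the prior keeps the estimate away from 1.0.
      print(map_sensitivity(3, 0))   # 0.909... rather than the ML value 1.0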

  9. Incorporating priors on expert performance parameters for segmentation validation and label fusion: a maximum a posteriori STAPLE.

    PubMed

    Commowick, Olivier; Warfield, Simon K

    2010-01-01

    In order to evaluate the quality of segmentations of an image and assess intra- and inter-expert variability in segmentation performance, an Expectation Maximization (EM) algorithm for Simultaneous Truth And Performance Level Estimation (STAPLE) was recently developed. This algorithm, originally presented for segmentation validation, has since been used for many applications, such as atlas construction and decision fusion. However, the manual delineation of structures of interest is a very time consuming and burdensome task. Further, as the time required and burden of manual delineation increase, the accuracy of the delineation is decreased. Therefore, it may be desirable to ask the experts to delineate only a reduced number of structures or the segmentation of all structures by all experts may simply not be achieved. Fusion from data with some structures not segmented by each expert should be carried out in a manner that accounts for the missing information. In other applications, locally inconsistent segmentations may drive the STAPLE algorithm into an undesirable local optimum, leading to misclassifications or misleading experts performance parameters. We present a new algorithm that allows fusion with partial delineation and which can avoid convergence to undesirable local optima in the presence of strongly inconsistent segmentations. The algorithm extends STAPLE by incorporating prior probabilities for the expert performance parameters. This is achieved through a Maximum A Posteriori formulation, where the prior probabilities for the performance parameters are modeled by a beta distribution. We demonstrate that this new algorithm enables dramatically improved fusion from data with partial delineation by each expert in comparison to fusion with STAPLE.

  10. On the Least-Squares Fitting of Correlated Data: a Priori vs a Posteriori Weighting

    NASA Astrophysics Data System (ADS)

    Tellinghuisen, Joel

    1996-10-01

    One of the methods in common use for analyzing large data sets is a two-step procedure, in which subsets of the full data are first least-squares fitted to a preliminary set of parameters, and the latter are subsequently merged to yield the final parameters. The second step of this procedure is properly a correlated least-squares fit and requires the variance-covariance matrices from the first step to construct the weight matrix for the merge. There is, however, an ambiguity concerning the manner in which the first-step variance-covariance matrices are assessed, which leads to different statistical properties for the quantities determined in the merge. The issue is one of a priori vs a posteriori assessment of weights, which is an application of what was originally called internal vs external consistency by Birge [Phys. Rev. 40, 207-227 (1932)] and Deming ("Statistical Adjustment of Data." Dover, New York, 1964). In the present work the simplest case of a merge fit, that of an average as obtained from a global fit vs a two-step fit of partitioned data, is used to illustrate that only in the case of a priori weighting do the results have the usually expected and desired statistical properties: normal distributions for residuals, t distributions for parameters assessed a posteriori, and χ² distributions for variances.
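
    A numerical illustration of the ambiguity (the simulation setup, the known σ = 1, and the six equal partitions are assumptions): merging subset means with a priori weights from the known variances reproduces the global fit exactly, while a posteriori weights, rescaled by each subset's sample variance, generally do not.

      import numpy as np

      rng = np.random.default_rng(3)
      data = rng.normal(5.0, 1.0, 120)       # known sigma = 1
      subsets = np.split(data, 6)

      means = np.array([s.mean() for s in subsets])
      v_prior = np.array([1.0 / len(s) for s in subsets])            # a priori
      v_post = np.array([s.var(ddof=1) / len(s) for s in subsets])   # a posteriori

      for v in (v_prior, v_post):
          w = 1.0 / v
          print((w @ means) / w.sum())       # merged estimates
      print(data.mean())                     # global fit (= a priori merge)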

  11. A priori and a posteriori analysis of the flow around a rectangular cylinder

    NASA Astrophysics Data System (ADS)

    Cimarelli, A.; Leonforte, A.; Franciolini, M.; De Angelis, E.; Angeli, D.; Crivellini, A.

    2017-11-01

    The definition of a correct mesh resolution and modelling approach for the Large Eddy Simulation (LES) of the flow around a rectangular cylinder is recognized to be a rather elusive problem, as shown by the large scatter of LES results in the literature. In the present work, we aim at assessing this issue by performing an a priori analysis of Direct Numerical Simulation (DNS) data of the flow. This approach allows us to measure the ability of the LES field to reproduce the main flow features as a function of the resolution employed. Based on these results, we define a mesh resolution that balances the competing needs of reducing the computational costs and of adequately resolving the flow dynamics. The effectiveness of the proposed resolution method is then verified by means of an a posteriori analysis of actual LES data obtained with the implicit LES approach given by the numerical properties of the Discontinuous Galerkin spatial discretization technique. The present work represents a first step towards a best practice for LES of separating and reattaching flows.

  12. Person authentication using brainwaves (EEG) and maximum a posteriori model adaptation.

    PubMed

    Marcel, Sébastien; Millán, José Del R

    2007-04-01

    In this paper, we investigate the use of brain activity for person authentication. It has been shown in previous studies that the brain-wave pattern of every individual is unique and that the electroencephalogram (EEG) can be used for biometric identification. EEG-based biometry is an emerging research topic and we believe that it may open new research directions and applications in the future. However, very little work has been done in this area, and it focused mainly on person identification rather than person authentication. Person authentication aims to accept or to reject a person claiming an identity, i.e., comparing biometric data to one template, while the goal of person identification is to match the biometric data against all the records in a database. We propose the use of a statistical framework based on Gaussian Mixture Models and Maximum A Posteriori model adaptation, successfully applied to speaker and face authentication, which can deal with only one training session. We perform intensive experimental simulations using several strict train/test protocols to show the potential of our method. We also show that there are some mental tasks that are more appropriate for person authentication than others.

  13. Simple Form of MMSE Estimator for Super-Gaussian Prior Densities

    NASA Astrophysics Data System (ADS)

    Kittisuwan, Pichid

    2015-04-01

    The denoising methods that have become popular in recent years for additive white Gaussian noise (AWGN) are Bayesian estimation techniques, e.g., maximum a posteriori (MAP) and minimum mean square error (MMSE). For super-Gaussian prior densities, it is well known that the MMSE estimator has a complicated form. In this work, we derive the MMSE estimator using a Taylor series. We show that the proposed estimator also leads to a simple formula. An extension of this estimator to the Pearson type VII prior density is also offered. The experimental results show that the proposed approximation to the original MMSE nonlinearity is reasonably good.
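
    For contrast with the MMSE case discussed here, the MAP estimator under one classical super-Gaussian prior does have a simple exact form: with a Laplacian (double-exponential) prior and AWGN it reduces to soft thresholding (the noise and prior scales below are illustrative assumptions).

      import numpy as np

      def map_laplacian(y, sigma_n, b):
          # argmax_x [ -(y - x)**2 / (2 * sigma_n**2) - |x| / b ]
          t = sigma_n ** 2 / b                   # shrinkage threshold
          return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

      y = np.array([-2.0, -0.3, 0.1, 1.5, 4.0])
      print(map_laplacian(y, sigma_n=1.0, b=2.0))  # small coefficients -> 0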

  14. Using known map category marginal frequencies to improve estimates of thematic map accuracy

    NASA Technical Reports Server (NTRS)

    Card, D. H.

    1982-01-01

    By means of two simple sampling plans suggested in the accuracy-assessment literature, it is shown how one can use knowledge of map-category relative sizes to improve estimates of various probabilities. The fact that maximum likelihood estimates of cell probabilities for the simple random sampling and map category-stratified sampling were identical has permitted a unified treatment of the contingency-table analysis. A rigorous analysis of the effect of sampling independently within map categories is made possible by results for the stratified case. It is noted that such matters as optimal sample size selection for the achievement of a desired level of precision in various estimators are irrelevant, since the estimators derived are valid irrespective of how sample sizes are chosen.
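
    A small sketch of the weighting this permits (the map-category proportions and confusion counts are made-up numbers): with sampling stratified by map category, cell probabilities follow from the known category sizes, and the accuracy measures follow from the cells.

      import numpy as np

      pi = np.array([0.6, 0.3, 0.1])       # known map-category relative sizes
      # n[i, j]: reference class i vs map class j, sampled within map classes
      n = np.array([[48.0, 6.0, 2.0],
                    [2.0, 40.0, 3.0],
                    [0.0, 4.0, 45.0]])
      p = pi * n / n.sum(axis=0)           # cell probabilities p_ij
      print("overall accuracy:", np.trace(p))
      print("user's accuracies:", np.diag(p) / p.sum(axis=0))
      print("producer's accuracies:", np.diag(p) / p.sum(axis=1))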

  15. Cardiac conduction velocity estimation from sequential mapping assuming known Gaussian distribution for activation time estimation error.

    PubMed

    Shariat, Mohammad Hassan; Gazor, Saeed; Redfearn, Damian

    2016-08-01

    In this paper, we study the problem of cardiac conduction velocity (CCV) estimation for sequential intracardiac mapping. We assume that the intracardiac electrograms of several cardiac sites are sequentially recorded, their activation times (ATs) are extracted, and the corresponding wavefronts are specified. The locations of the mapping catheter's electrodes and the ATs of the wavefronts are used here for the CCV estimation. We assume that the extracted ATs include some estimation errors, which we model with zero-mean white Gaussian noise values with known variances. Assuming stable planar wavefront propagation, we derive the maximum likelihood CCV estimator when the synchronization times between various recording sites are unknown. We analytically evaluate the performance of the CCV estimator and provide its mean square estimation error. Our simulation results confirm the accuracy of the proposed method and the error analysis of the proposed CCV estimator.
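
    In the simpler simultaneous-recording case (no unknown synchronization times, which is the harder problem this record addresses), the planar-wavefront fit is an ordinary least-squares problem; the electrode positions and activation times below are illustrative.

      import numpy as np

      # Fit t_i = t0 + s . x_i, where s is the slowness vector; the
      # conduction velocity is 1 / |s|.
      xy = np.array([[0., 0.], [4., 0.], [0., 4.], [4., 4.], [2., 2.]])  # mm
      t = np.array([0.0, 5.0, 2.0, 7.0, 3.5])                           # ms
      A = np.c_[np.ones(len(t)), xy]
      coef, *_ = np.linalg.lstsq(A, t, rcond=None)
      s = coef[1:]                                   # slowness (ms/mm)
      print("CV =", 1.0 / np.linalg.norm(s), "mm/ms")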

  16. A Posteriori Study of a DNS Database Describing Supercritical Binary-Species Mixing

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Taskinoglu, Ezgi

    2012-01-01

    Currently, the modeling of supercritical-pressure flows through Large Eddy Simulation (LES) uses models derived for atmospheric-pressure flows. Those atmospheric-pressure flows do not exhibit the high density-gradient magnitude features observed both in experiments and simulations of supercritical-pressure flows in the case of two-species mixing. To assess whether the current LES modeling is appropriate, and to propose higher-fidelity models if it is found wanting, an a posteriori LES study has been conducted for a mixing layer that initially contains different species in the lower and upper streams, and in which the initial pressure is larger than the critical pressure of either species. An initially-imposed vorticity perturbation promotes roll-up and a double pairing of four initial span-wise vortices into an ultimate vortex that reaches a transitional state. The LES equations consist of the differential conservation equations coupled with a real-gas equation of state, and the equation set uses transport properties depending on the thermodynamic variables. Unlike all LES models to date, the differential equations contain, in addition to the subgrid-scale (SGS) fluxes, a new SGS term that is a pressure correction in the momentum equation. This additional term results from filtering the Direct Numerical Simulation (DNS) equations, and represents the gradient of the difference between the filtered pressure and the pressure computed from the filtered flow field. A previous a priori analysis, using a DNS database for the same configuration, found this term to be of leading order in the momentum equation, a fact traced to the existence of high density-gradient magnitude regions that populated the entire flow; in that study, models were proposed for the SGS fluxes as well as for this new term. In the present study, the previously proposed constant-coefficient SGS-flux models of the a priori investigation are tested a posteriori in LES, either devoid of or including the pressure-correction term.

  17. Modelling of turbulent lifted jet flames using flamelets: a priori assessment and a posteriori validation

    NASA Astrophysics Data System (ADS)

    Ruan, Shaohong; Swaminathan, Nedunchezhian; Darbyshire, Oliver

    2014-03-01

    This study focuses on the modelling of turbulent lifted jet flames using flamelets and a presumed Probability Density Function (PDF) approach, with interest in both flame lift-off height and flame brush structure. First, flamelet models used to capture contributions from the premixed and non-premixed modes of the partially premixed combustion in the lifted jet flame are assessed using Direct Numerical Simulation (DNS) data for a turbulent lifted hydrogen jet flame. The joint PDFs of mixture fraction Z and progress variable c, including their statistical correlation, are obtained using a copula method, which is also validated against the DNS data. The statistically independent PDFs are found to be generally inadequate to represent the joint PDFs from the DNS data. The effects of the Z-c correlation and of the contribution from the non-premixed combustion mode on the flame lift-off height are studied systematically by including one effect at a time in the simulations used for a posteriori validation. A simple model including the effects of chemical kinetics and scalar dissipation rate is suggested and used for the non-premixed combustion contributions. The results clearly show that both the Z-c correlation and non-premixed combustion effects are required in the premixed flamelets approach to obtain good agreement with the measured flame lift-off heights as a function of jet velocity. The flame brush structure reported in earlier experimental studies is also captured reasonably well at various axial positions. Flame stabilisation thus appears to be influenced by both premixed and non-premixed combustion modes and their mutual influences.
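
    The copula construction referred to above can be illustrated briefly. A Gaussian copula couples presumed marginals of Z and c through a single correlation parameter; the Beta marginal shapes below are placeholders, not the paper's calibrated PDFs:

      import numpy as np
      from scipy.stats import beta, norm

      def joint_pdf(z, c, rho, fz=beta(2, 5), fc=beta(2, 2)):
          """Joint PDF of mixture fraction Z and progress variable c built from
          presumed marginals and a Gaussian copula with correlation rho."""
          a, b = norm.ppf(fz.cdf(z)), norm.ppf(fc.cdf(c))
          cop = np.exp(-(rho**2 * (a**2 + b**2) - 2.0 * rho * a * b)
                       / (2.0 * (1.0 - rho**2))) / np.sqrt(1.0 - rho**2)
          return cop * fz.pdf(z) * fc.pdf(c)

      # rho = 0 recovers the statistically independent joint PDF, which the DNS
      # comparison above found to be generally inadequate.
      print(joint_pdf(0.3, 0.5, rho=0.6), joint_pdf(0.3, 0.5, rho=0.0))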

  18. Adaptive-Mesh-Refinement for hyperbolic systems of conservation laws based on a posteriori stabilized high order polynomial reconstructions

    NASA Astrophysics Data System (ADS)

    Semplice, Matteo; Loubère, Raphaël

    2018-02-01

    In this paper we propose a third-order accurate finite volume scheme based on a posteriori limiting of polynomial reconstructions within an Adaptive-Mesh-Refinement (AMR) simulation code for hydrodynamics equations in 2D. The a posteriori limiting is based on the detection of problematic cells in a so-called candidate solution computed at each stage of a third-order Runge-Kutta scheme. The detection criteria may include properties derived from physics, such as positivity, from numerics, such as non-oscillatory behavior, or from computer requirements, such as the absence of NaNs. Troubled cell values are discarded and recomputed, starting again from the previous time step, using a more dissipative scheme, but only locally, close to these cells. By locally decrementing the degree of the polynomial reconstructions from 2 to 0, we switch from a third-order to a first-order accurate but more stable scheme. An entropy indicator sensor is used to refine/coarsen the mesh. This sensor is also employed in an a posteriori manner: if some refinement is needed at the end of a time step, then the current time step is recomputed with the refined mesh, but only locally, close to the new cells. We show on a large set of numerical tests that this a posteriori limiting procedure coupled with the entropy-based AMR technology can maintain not only optimal accuracy on smooth flows but also stability on discontinuous profiles such as shock waves, contacts, and interfaces. Moreover, numerical evidence shows that this approach is at least comparable in terms of accuracy and cost to a more classical CWENO approach within the same AMR context.
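
    The detect-and-recompute logic is easy to demonstrate on a toy problem. The sketch below applies the same a posteriori idea to 1D linear advection: an unlimited second-order candidate is checked cell-by-cell against finiteness and a discrete maximum principle, and only troubled cells are recomputed with a dissipative first-order scheme (a minimal illustration, not the paper's third-order AMR method):

      import numpy as np

      def candidate(u, nu):
          """Unlimited higher-order (Lax-Wendroff) update: the candidate solution."""
          up, um = np.roll(u, -1), np.roll(u, 1)
          return u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2.0 * u + um)

      def fallback(u, nu):
          """Robust first-order upwind update used only where trouble is detected."""
          return u - nu * (u - np.roll(u, 1))

      def mood_step(u, nu):
          cand = candidate(u, nu)
          lo = np.minimum(np.roll(u, 1), np.minimum(u, np.roll(u, -1)))
          hi = np.maximum(np.roll(u, 1), np.maximum(u, np.roll(u, -1)))
          # Detection: non-finite values or discrete maximum principle violation.
          bad = ~np.isfinite(cand) | (cand < lo - 1e-12) | (cand > hi + 1e-12)
          cand[bad] = fallback(u, nu)[bad]     # recompute only the troubled cells
          return cand

      x = np.arange(200)
      u = np.where((x > 50) & (x < 100), 1.0, 0.0)    # square pulse
      for _ in range(100):
          u = mood_step(u, nu=0.5)
      print(u.min(), u.max())   # stays essentially within [0, 1]: no oscillations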

  19. Estimating floodwater depths from flood inundation maps and topography

    USGS Publications Warehouse

    Cohen, Sagy; Brakenridge, G. Robert; Kettner, Albert; Bates, Bradford; Nelson, Jonathan M.; McDonald, Richard R.; Huang, Yu-Fen; Munasinghe, Dinuke; Zhang, Jiaqi

    2018-01-01

    Information on flood inundation extent is important for understanding societal exposure, water storage volumes, flood wave attenuation, future flood hazard, and other variables. A number of organizations now provide flood inundation maps based on satellite remote sensing. These data products can efficiently and accurately provide the areal extent of a flood event, but they do not provide floodwater depth, an important attribute for first responders and damage assessment. Here we present a new methodology and a GIS-based tool, the Floodwater Depth Estimation Tool (FwDET), for estimating floodwater depth based solely on an inundation map and a digital elevation model (DEM). We compare the FwDET results against water depth maps derived from hydraulic simulation of two flood events: a large-scale event, for which we use a medium-resolution (10 m) input layer, and a small-scale event, for which we use a high-resolution (LiDAR; 1 m) input. Further testing is performed on two inundation maps with a number of challenging features, including a narrow valley, a large reservoir, and an urban setting. The results show that FwDET can accurately calculate floodwater depth for diverse flooding scenarios, but it exhibits considerable bias in locations where the inundation extent does not align well with the DEM. In these locations, manual adjustment or higher-spatial-resolution input is required.
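
    The heart of the approach fits in a few lines: each flooded cell borrows the DEM elevation of its nearest non-flooded boundary cell as the local water-surface elevation, and depth is the non-negative difference with the cell's own elevation. A simplified sketch on a tiny synthetic grid (brute-force nearest-neighbour search; the published tool adds considerably more machinery):

      import numpy as np

      def estimate_depth(dem, flooded):
          pad = np.pad(flooded, 1, constant_values=False)
          neigh = pad[:-2, 1:-1] | pad[2:, 1:-1] | pad[1:-1, :-2] | pad[1:-1, 2:]
          boundary = ~flooded & neigh                 # dry cells touching the flood
          by, bx = np.nonzero(boundary)
          fy, fx = np.nonzero(flooded)
          d2 = (fy[:, None] - by) ** 2 + (fx[:, None] - bx) ** 2
          wse = dem[by, bx][d2.argmin(axis=1)]        # nearest boundary elevation
          depth = np.zeros_like(dem)
          depth[fy, fx] = np.maximum(wse - dem[fy, fx], 0.0)
          return depth

      dem = np.array([[5.0, 4.0, 3.0, 4.0],
                      [5.0, 3.0, 2.0, 4.0],
                      [5.0, 4.0, 3.0, 4.0]])
      print(estimate_depth(dem, flooded=dem <= 3.0))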

  20. Determination of quantitative trait variants by concordance via application of the a posteriori granddaughter design to the U.S. Holstein population

    USDA-ARS?s Scientific Manuscript database

    Experimental designs that exploit family information can provide substantial predictive power in quantitative trait variant discovery projects. Concordance between quantitative trait locus genotype as determined by the a posteriori granddaughter design and marker genotype was determined for 29 trai...

  1. The MAP Spacecraft Angular State Estimation After Sensor Failure

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    2003-01-01

    This work describes two algorithms for computing the angular rate and attitude in the case of gyro and star tracker failures on the Microwave Anisotropy Probe (MAP) satellite, which was placed at the L2 Lagrange point, from where it collects data to determine the origin of the universe. The nature of the problem is described, two algorithms are suggested, an observability study is carried out, and real MAP data are used to determine the merit of the algorithms. It is shown that one of the algorithms yields a good estimate of the rates but not of the attitude, whereas the other algorithm yields a good estimate of the rate as well as of two of the three attitude angles. The estimation of the third angle depends on the initial state estimate. This result contradicts the outcome of the observability analysis; an explanation of the contradiction is given in the paper. Although this work treats a particular spacecraft, the conclusions have far-reaching consequences.

  2. Stable Estimation of a Covariance Matrix Guided by Nuclear Norm Penalties

    PubMed Central

    Chi, Eric C.; Lange, Kenneth

    2014-01-01

    Estimation of a covariance matrix or its inverse plays a central role in many statistical methods. For these methods to work reliably, estimated matrices must not only be invertible but also well-conditioned. The current paper introduces a novel prior to ensure a well-conditioned maximum a posteriori (MAP) covariance estimate. The prior shrinks the sample covariance estimator towards a stable target and leads to a MAP estimator that is consistent and asymptotically efficient. Thus, the MAP estimator gracefully transitions towards the sample covariance matrix as the number of samples grows relative to the number of covariates. The utility of the MAP estimator is demonstrated in two standard applications – discriminant analysis and EM clustering – in this sampling regime. PMID:25143662
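
    As a generic illustration of why shrinkage restores conditioning (using simple linear shrinkage toward a scaled identity target rather than the paper's nuclear-norm-type prior), consider:

      import numpy as np

      def shrunk_covariance(X, alpha=0.2):
          """Blend the sample covariance with a scaled identity target.
          Generic linear shrinkage, shown only to illustrate the conditioning
          effect; the paper's MAP estimator uses a different prior."""
          p = X.shape[1]
          S = np.cov(X, rowvar=False, bias=True)
          mu = np.trace(S) / p                       # scale of the identity target
          return (1.0 - alpha) * S + alpha * mu * np.eye(p)

      rng = np.random.default_rng(2)
      X = rng.normal(size=(30, 25))       # sample size barely exceeds dimension
      S = np.cov(X, rowvar=False, bias=True)
      print(np.linalg.cond(S), np.linalg.cond(shrunk_covariance(X)))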

  3. Effects of Estimation Bias on Multiple-Category Classification with an IRT-Based Adaptive Classification Procedure

    ERIC Educational Resources Information Center

    Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.

    2006-01-01

    The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…
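
    Of the estimators compared above, expected a posteriori (EAP) is the most mechanical to sketch: it is the posterior mean of ability computed by quadrature. A minimal version for a 2PL model with a standard normal prior (item parameters invented for illustration):

      import numpy as np

      def eap_ability(responses, a, b, n_quad=61):
          """EAP ability estimate and posterior SD for a 2PL IRT model."""
          theta = np.linspace(-4.0, 4.0, n_quad)
          prior = np.exp(-0.5 * theta**2)                      # N(0, 1), unnormalized
          p = 1.0 / (1.0 + np.exp(-a[:, None] * (theta - b[:, None])))
          like = np.prod(np.where(responses[:, None] == 1, p, 1.0 - p), axis=0)
          post = prior * like
          post /= post.sum()
          eap = np.sum(theta * post)                           # posterior mean
          psd = np.sqrt(np.sum((theta - eap) ** 2 * post))     # posterior SD
          return eap, psd

      a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # discriminations (illustrative)
      b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # difficulties (illustrative)
      print(eap_ability(np.array([1, 1, 1, 0, 0]), a, b))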

  4. Arbitrary-Lagrangian-Eulerian Discontinuous Galerkin schemes with a posteriori subcell finite volume limiting on moving unstructured meshes

    NASA Astrophysics Data System (ADS)

    Boscheri, Walter; Dumbser, Michael

    2017-10-01

    ...Lagrangian formulations that are based on a fixed computational grid and which instead evolve the mapping of the reference configuration to the current one. Our new Lagrangian-type DG scheme adopts the novel a posteriori sub-cell finite volume limiter method recently developed in [62] for fixed unstructured grids. In this approach, the validity of the candidate solution produced in each cell by an unlimited ADER-DG scheme is verified against a set of physical and numerical detection criteria, such as the positivity of pressure and density, the absence of floating point errors (NaN), and the satisfaction of a relaxed discrete maximum principle (DMP) in the sense of polynomials. Those cells which do not satisfy all of the above criteria are flagged as troubled cells and are recomputed with the aid of a more robust second-order TVD finite volume scheme. To preserve the sub-cell resolution capability of the original DG scheme, the FV limiter is run on a sub-grid that is 2N + 1 times finer than the mesh of the original unlimited DG scheme. The new sub-cell averages are then gathered back into a high-order DG polynomial by the usual conservative finite volume reconstruction operator. The numerical convergence rates of the new ALE ADER-DG schemes are studied up to fourth order in space and time, and several test problems are simulated in order to check the accuracy and robustness of the proposed numerical method in the context of the Euler and Navier-Stokes equations for compressible gas dynamics, considering both inviscid and viscous fluids. Finally, an application inspired by Inertial Confinement Fusion (ICF) type flows is considered by solving the Euler equations and the PDEs of viscous and resistive magnetohydrodynamics (VRMHD).

  5. Estimating and mapping the population at risk of sleeping sickness.

    PubMed

    Simarro, Pere P; Cecchi, Giuliano; Franco, José R; Paone, Massimo; Diarra, Abdoulaye; Ruiz-Postigo, José Antonio; Fèvre, Eric M; Mattioli, Raffaele C; Jannin, Jean G

    2012-01-01

    Human African trypanosomiasis (HAT), also known as sleeping sickness, persists as a public health problem in several sub-Saharan countries. Evidence-based, spatially explicit estimates of population at risk are needed to inform planning and implementation of field interventions, monitor disease trends, raise awareness and support advocacy. Comprehensive, geo-referenced epidemiological records from HAT-affected countries were combined with human population layers to map five categories of risk, ranging from "very high" to "very low," and to estimate the corresponding at-risk population. Approximately 70 million people distributed over a surface of 1.55 million km² are estimated to be at different levels of risk of contracting HAT. Trypanosoma brucei gambiense accounts for 82.2% of the population at risk, the remaining 17.8% being at risk of infection from T. b. rhodesiense. Twenty-one million people live in areas classified as moderate to very high risk, where more than 1 HAT case per 10,000 inhabitants per annum is reported. Updated estimates of the population at risk of sleeping sickness were made, based on quantitative information on the reported cases and the geographic distribution of human population. Due to substantial methodological differences, it is not possible to make direct comparisons with previous figures for at-risk population. By contrast, it will be possible to explore trends in the future. The presented maps of different HAT risk levels will help to develop site-specific strategies for control and surveillance, and to monitor progress achieved by ongoing efforts aimed at the elimination of sleeping sickness.

  6. Estimating and mapping ecological processes influencing microbial community assembly

    DOE PAGES

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; ...

    2015-05-01

    Ecological community assembly is governed by a combination of (i) selection resulting from among-taxa differences in performance; (ii) dispersal resulting from organismal movement; and (iii) ecological drift resulting from stochastic changes in population sizes. The relative importance and nature of these processes can vary across environments. Selection can be homogeneous or variable, and while dispersal is a rate, we conceptualize extreme dispersal rates as two categories; dispersal limitation results from limited exchange of organisms among communities, and homogenizing dispersal results from high levels of organism exchange. To estimate the influence and spatial variation of each process we extend a recently developed statistical framework, use a simulation model to evaluate the accuracy of the extended framework, and use the framework to examine subsurface microbial communities over two geologic formations. For each subsurface community we estimate the degree to which it is influenced by homogeneous selection, variable selection, dispersal limitation, and homogenizing dispersal. Our analyses revealed that the relative influences of these ecological processes vary substantially across communities even within a geologic formation. We further identify environmental and spatial features associated with each ecological process, which allowed mapping of spatial variation in ecological-process-influences. The resulting maps provide a new lens through which ecological systems can be understood; in the subsurface system investigated here they revealed that the influence of variable selection was associated with the rate at which redox conditions change with subsurface depth.

  7. Estimating and mapping ecological processes influencing microbial community assembly

    PubMed Central

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; Konopka, Allan E.

    2015-01-01

    Ecological community assembly is governed by a combination of (i) selection resulting from among-taxa differences in performance; (ii) dispersal resulting from organismal movement; and (iii) ecological drift resulting from stochastic changes in population sizes. The relative importance and nature of these processes can vary across environments. Selection can be homogeneous or variable, and while dispersal is a rate, we conceptualize extreme dispersal rates as two categories; dispersal limitation results from limited exchange of organisms among communities, and homogenizing dispersal results from high levels of organism exchange. To estimate the influence and spatial variation of each process we extend a recently developed statistical framework, use a simulation model to evaluate the accuracy of the extended framework, and use the framework to examine subsurface microbial communities over two geologic formations. For each subsurface community we estimate the degree to which it is influenced by homogeneous selection, variable selection, dispersal limitation, and homogenizing dispersal. Our analyses revealed that the relative influences of these ecological processes vary substantially across communities even within a geologic formation. We further identify environmental and spatial features associated with each ecological process, which allowed mapping of spatial variation in ecological-process-influences. The resulting maps provide a new lens through which ecological systems can be understood; in the subsurface system investigated here they revealed that the influence of variable selection was associated with the rate at which redox conditions change with subsurface depth. PMID:25983725

  8. Optimizing spectral wave estimates with adjoint-based sensitivity maps

    NASA Astrophysics Data System (ADS)

    Orzech, Mark; Veeramony, Jay; Flampouris, Stylianos

    2014-04-01

    A discrete numerical adjoint has recently been developed for the stochastic wave model SWAN. In the present study, this adjoint code is used to construct spectral sensitivity maps for two nearshore domains. The maps display the correlations of spectral energy levels throughout the domain with the observed energy levels at a selected location or region of interest (LOI/ROI), providing a full spectrum of values at all locations in the domain. We investigate the effectiveness of sensitivity maps based on significant wave height (Hs) in determining alternate offshore instrument deployment sites when a chosen nearshore location or region is inaccessible. Wave and bathymetry datasets are employed from one shallower, small-scale domain (Duck, NC) and one deeper, larger-scale domain (San Diego, CA). The effects of seasonal changes in wave climate, errors in bathymetry, and multiple assimilation points on sensitivity map shapes and model performance are investigated. Model accuracy is evaluated by comparing spectral statistics as well as with an RMS skill score, which estimates a mean model-data error across all spectral bins. Results indicate that data assimilation from identified high-sensitivity alternate locations consistently improves model performance at nearshore LOIs, while assimilation from low-sensitivity locations results in lesser or no improvement. Use of sub-sampled or alongshore-averaged bathymetry has a domain-specific effect on model performance when assimilating from a high-sensitivity alternate location. When multiple alternate assimilation locations are used from areas of lower sensitivity, model performance may be worse than with a single, high-sensitivity assimilation point.

  9. A Novel A Posteriori Investigation of Scalar Flux Models for Passive Scalar Dispersion in Compressible Boundary Layer Flows

    NASA Astrophysics Data System (ADS)

    Braman, Kalen; Raman, Venkat

    2011-11-01

    A novel direct numerical simulation (DNS) based a posteriori technique has been developed to investigate scalar transport modeling error. The methodology is used to test Reynolds-averaged Navier-Stokes turbulent scalar flux models for compressible boundary layer flows. Time-averaged DNS velocity and turbulence fields provide the information necessary to evolve the time-averaged scalar transport equation without requiring the use of turbulence modeling. With this technique, passive dispersion of a scalar from a boundary layer surface in a supersonic flow is studied with scalar flux modeling error isolated from any flowfield modeling errors. Several different scalar flux models are used. It is seen that the simple gradient diffusion model overpredicts scalar dispersion, while anisotropic scalar flux models underpredict dispersion. Further, the use of more complex models does not necessarily guarantee an increase in predictive accuracy, indicating that key physics is missing from existing models. Using comparisons of both a priori and a posteriori scalar flux evaluations with DNS data, the main modeling shortcomings are identified. Results will be presented for different boundary layer conditions.

  10. A posteriori registration and subtraction of periapical radiographs for the evaluation of external apical root resorption after orthodontic treatment

    PubMed Central

    Chibinski, Ana Cláudia; Coelho, Ulisses; Wambier, Letícia Stadler; Zedebski, Rosário de Arruda Moura; de Moraes, Mari Eli Leonelli; de Moraes, Luiz Cesar

    2016-01-01

    Purpose This study employed a posteriori registration and subtraction of radiographic images to quantify the apical root resorption in maxillary permanent central incisors after orthodontic treatment, and assessed whether the external apical root resorption (EARR) was related to a range of parameters involved in the treatment. Materials and Methods A sample of 79 patients (mean age, 13.5±2.2 years) with no history of trauma or endodontic treatment of the maxillary permanent central incisors was selected. Periapical radiographs taken before and after orthodontic treatment were digitized and imported to the Regeemy software. Based on an analysis of the posttreatment radiographs, the length of the incisors was measured using Image J software. The mean EARR was described in pixels and relative root resorption (%). The patient's age and gender, tooth extraction, use of elastics, and treatment duration were evaluated to identify possible correlations with EARR. Results The mean EARR observed was 15.44±12.1 pixels (5.1% resorption). No differences in the mean EARR were observed according to patient characteristics (gender, age) or treatment parameters (use of elastics, treatment duration). The only parameter that influenced the mean EARR of a patient was the need for tooth extraction. Conclusion A posteriori registration and subtraction of periapical radiographs was a suitable method to quantify EARR after orthodontic treatment, and the need for tooth extraction increased the extent of root resorption after orthodontic treatment. PMID:27051635

  11. A posteriori registration and subtraction of periapical radiographs for the evaluation of external apical root resorption after orthodontic treatment.

    PubMed

    Kreich, Eliane Maria; Chibinski, Ana Cláudia; Coelho, Ulisses; Wambier, Letícia Stadler; Zedebski, Rosário de Arruda Moura; de Moraes, Mari Eli Leonelli; de Moraes, Luiz Cesar

    2016-03-01

    This study employed a posteriori registration and subtraction of radiographic images to quantify the apical root resorption in maxillary permanent central incisors after orthodontic treatment, and assessed whether the external apical root resorption (EARR) was related to a range of parameters involved in the treatment. A sample of 79 patients (mean age, 13.5±2.2 years) with no history of trauma or endodontic treatment of the maxillary permanent central incisors was selected. Periapical radiographs taken before and after orthodontic treatment were digitized and imported to the Regeemy software. Based on an analysis of the posttreatment radiographs, the length of the incisors was measured using Image J software. The mean EARR was described in pixels and relative root resorption (%). The patient's age and gender, tooth extraction, use of elastics, and treatment duration were evaluated to identify possible correlations with EARR. The mean EARR observed was 15.44±12.1 pixels (5.1% resorption). No differences in the mean EARR were observed according to patient characteristics (gender, age) or treatment parameters (use of elastics, treatment duration). The only parameter that influenced the mean EARR of a patient was the need for tooth extraction. A posteriori registration and subtraction of periapical radiographs was a suitable method to quantify EARR after orthodontic treatment, and the need for tooth extraction increased the extent of root resorption after orthodontic treatment.

  12. Allowing for MSD prevention during facilities planning for a public service: an a posteriori analysis of 10 library design projects.

    PubMed

    Bellemare, Marie; Trudel, Louis; Ledoux, Elise; Montreuil, Sylvie; Marier, Micheline; Laberge, Marie; Vincent, Patrick

    2006-01-01

    Research was conducted to identify an ergonomics-based intervention model designed to factor in musculoskeletal disorder (MSD) prevention when library projects are being designed. The first stage of the research involved an a posteriori analysis of 10 recent redesign projects. The purpose of the analysis was to document perceptions about the attention given to MSD prevention measures over the course of a project on the part of 2 categories of employees: librarians responsible for such projects and personnel working in the libraries before and after changes. Subjects were interviewed in focus groups. Outcomes of the analysis can guide our ergonomic assessment of current situations and contribute to a better understanding of the way inclusion or improvement of prevention measures can support the workplace design process.

  13. Improving estimates of genetic maps: a meta-analysis-based approach.

    PubMed

    Stewart, William C L

    2007-07-01

    Inaccurate genetic (or linkage) maps can reduce the power to detect linkage, increase type I error, and distort haplotype and relationship inference. To improve the accuracy of existing maps, I propose a meta-analysis-based method that combines independent map estimates into a single estimate of the linkage map. The method uses the variance of each independent map estimate to combine them efficiently, whether the map estimates use the same set of markers or not. As compared with a joint analysis of the pooled genotype data, the proposed method is attractive for three reasons: (1) it has comparable efficiency to the maximum likelihood map estimate when the pooled data are homogeneous; (2) relative to existing map estimation methods, it can have increased efficiency when the pooled data are heterogeneous; and (3) it avoids the practical difficulties of pooling human subjects data. On the basis of simulated data modeled after two real data sets, the proposed method can reduce the sampling variation of linkage maps commonly used in whole-genome linkage scans. Furthermore, when the independent map estimates are also maximum likelihood estimates, the proposed method performs as well as or better than when they are estimated by the program CRIMAP. Since variance estimates of maps may not always be available, I demonstrate the feasibility of three different variance estimators. Overall, the method should prove useful to investigators who need map positions for markers not contained in publicly available maps, and to those who wish to minimize the negative effects of inaccurate maps. Copyright 2007 Wiley-Liss, Inc.
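
    The combination step itself is standard inverse-variance weighting, as the brief sketch below shows for a single map position estimated by three hypothetical studies:

      import numpy as np

      def combine_estimates(positions, variances):
          """Inverse-variance-weighted pooling of independent estimates of the
          same map position; returns the pooled estimate and its variance."""
          w = 1.0 / np.asarray(variances, dtype=float)
          pooled = np.sum(w * np.asarray(positions)) / np.sum(w)
          return pooled, 1.0 / np.sum(w)

      est, var = combine_estimates([12.1, 11.4, 12.8], [0.40, 0.90, 0.25])
      print(f"pooled position = {est:.2f} cM, SE = {np.sqrt(var):.2f} cM")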

  14. Local-Mesh, Local-Order, Adaptive Finite Element Methods with a Posteriori Error Estimators for Elliptic Partial Differential Equations.

    DTIC Science & Technology

    1981-12-01

    [Largely illegible OCR text; the recoverable fragments acknowledge support from contract ...C-0076, the Department of Energy (DOE Grant DE-AC02-77ET53053), the National Science Foundation (Graduate Fellowship), and Yale University, and note that with the finite element method the choice of discretization is left to the user, who must base the decision on experience with similar equations.]

  15. MAPPING SPATIAL ACCURACY AND ESTIMATING LANDSCAPE INDICATORS FROM THEMATIC LAND COVER MAPS USING FUZZY SET THEORY

    EPA Science Inventory

    The accuracy of thematic map products is not spatially homogenous, but instead variable across most landscapes. Properly analyzing and representing the spatial distribution (pattern) of thematic map accuracy would provide valuable user information for assessing appropriate applic...

  16. How BenMAP-CE Estimates the Health and Economic Effects of Air Pollution

    EPA Pesticide Factsheets

    The BenMAP-CE tool estimates the number and economic value of health impacts resulting from changes in air quality - specifically, ground-level ozone and fine particles. Learn what data BenMAP-CE uses and how the estimates are calculated.

  17. The Mapping Model: A Cognitive Theory of Quantitative Estimation

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2008-01-01

    How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…

  18. MAPPING SPATIAL ACCURACY AND ESTIMATING LANDSCAPE INDICATORS FROM THEMATIC LAND COVER MAPS USING FUZZY SET THEORY

    EPA Science Inventory

    This paper presents a fuzzy set-based method of mapping spatial accuracy of thematic map and computing several ecological indicators while taking into account spatial variation of accuracy associated with different land cover types and other factors (e.g., slope, soil type, etc.)...

  19. A Priori and a Posteriori Dietary Patterns during Pregnancy and Gestational Weight Gain: The Generation R Study

    PubMed Central

    Tielemans, Myrte J.; Erler, Nicole S.; Leermakers, Elisabeth T. M.; van den Broek, Marion; Jaddoe, Vincent W. V.; Steegers, Eric A. P.; Kiefte-de Jong, Jessica C.; Franco, Oscar H.

    2015-01-01

    Abnormal gestational weight gain (GWG) is associated with adverse pregnancy outcomes. We examined whether dietary patterns are associated with GWG. Participants included 3374 pregnant women from a population-based cohort in the Netherlands. Dietary intake during pregnancy was assessed with food-frequency questionnaires. Three a posteriori-derived dietary patterns were identified using principal component analysis: a “Vegetable, oil and fish”, a “Nuts, high-fiber cereals and soy”, and a “Margarine, sugar and snacks” pattern. The a priori-defined dietary pattern was based on national dietary recommendations. Weight was repeatedly measured around 13, 20 and 30 weeks of pregnancy; pre-pregnancy and maximum weight were self-reported. Normal weight women with high adherence to the “Vegetable, oil and fish” pattern had higher early-pregnancy GWG than those with low adherence (43 g/week (95% CI 16; 69) for highest vs. lowest quartile (Q)). Adherence to the “Margarine, sugar and snacks” pattern was associated with a higher prevalence of excessive GWG (OR 1.45 (95% CI 1.06; 1.99) Q4 vs. Q1). Normal weight women with higher scores on the “Nuts, high-fiber cereals and soy” pattern had more moderate GWG than women with lower scores (−0.01 (95% CI −0.02; −0.00) per SD). The a priori-defined pattern was not associated with GWG. To conclude, specific dietary patterns may play a role in early pregnancy but are not consistently associated with GWG. PMID:26569303

  20. Optimizing Spectral Wave Estimates with Adjoint-Based Sensitivity Maps

    DTIC Science & Technology

    2014-02-18

    [Fragmentary OCR text; recoverable content:] Orzech MD, Ngodock HE (2013) Validation of a wave data assimilation system based on SWAN. Geophys Res Abst (15), EGU2013-5951-1, EGU General Assembly. ... Sensitivity maps are generally constructed for a selected system indicator (e.g., vorticity) by computing the differential of the spectral action balance equation, generally initialized at the offshore boundary with spectral wave and other outputs from regional models.

  1. Map scale effects on estimating the number of undiscovered mineral deposits

    USGS Publications Warehouse

    Singer, D.A.; Menzie, W.D.

    2008-01-01

    Estimates of numbers of undiscovered mineral deposits, fundamental to assessing mineral resources, are affected by map scale. Where consistently defined deposits of a particular type are estimated, spatial and frequency distributions of deposits are linked, in that some frequency distributions can be generated by processes operating randomly in space whereas others are generated by processes suggesting clustering in space. Possible spatial distributions of mineral deposits and their related frequency distributions are affected by map scale and the associated inclusion of non-permissive or covered geological settings. More generalized map scales are more likely to include geologic settings that are not actually permissive for the deposit type, or unreported cover over permissive areas, resulting in the appearance of deposit clustering. Thus, overly generalized map scales can make deposits appear clustered. We propose a model that captures the effects of map scale and the related inclusion of non-permissive geologic settings on estimates of numbers of deposits: the zero-inflated Poisson distribution. Effects of map scale as represented by the zero-inflated Poisson distribution suggest that the appearance of deposit clustering should diminish as mapping becomes more detailed, because the number of inflated zeros decreases with more detailed maps. Based on observed worldwide relationships between map scale and areas permissive for deposit types, mapping at a scale with twice the detail should cut the permissive area of a porphyry copper tract to 29% and of a volcanic-hosted massive sulfide tract to 50% of their original sizes. Thus some direct benefits of mapping an area at a more detailed scale are significant reductions in areas permissive for deposit types, increased deposit density and, as a consequence, reduced uncertainty in the estimate of the number of undiscovered deposits. Exploration enterprises benefit from the reduced areas requiring investigation.
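
    The proposed model is easy to state concretely: with probability pi0 a delineated tract is effectively barren (e.g., non-permissive or covered geology hidden by map generalization), and otherwise deposit counts follow a Poisson law. A short sketch of the resulting zero-inflated Poisson probabilities (parameter values invented):

      import numpy as np
      from scipy.stats import poisson

      def zip_pmf(k, lam, pi0):
          """Zero-inflated Poisson: P(N = 0) gains an extra pi0 mass."""
          base = (1.0 - pi0) * poisson.pmf(k, lam)
          return np.where(k == 0, pi0 + base, base)

      k = np.arange(6)
      print(zip_pmf(k, lam=2.0, pi0=0.5))   # coarse map: inflated zeros
      print(zip_pmf(k, lam=2.0, pi0=0.1))   # detailed map: fewer inflated zeros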

  2. An hp-adaptivity and error estimation for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1995-01-01

    This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.
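
    The three-step strategy can be caricatured by an error-indicator-driven refinement loop. The toy sketch below does pure h-refinement of a 1D piecewise-linear approximation, using interpolation error at element midpoints as a stand-in for the element residual estimator (the actual method also adapts the polynomial degree p):

      import numpy as np

      f = lambda x: np.tanh(20.0 * (x - 0.5))       # steep feature near x = 0.5

      def indicator(xl, xr):
          """Cheap a posteriori indicator: midpoint deviation of f from its
          linear interpolant on the element [xl, xr]."""
          return abs(f(0.5 * (xl + xr)) - 0.5 * (f(xl) + f(xr)))

      nodes = list(np.linspace(0.0, 1.0, 5))        # coarse initial mesh
      tol = 1e-3
      while True:
          errs = [indicator(xl, xr) for xl, xr in zip(nodes[:-1], nodes[1:])]
          worst = int(np.argmax(errs))
          if errs[worst] < tol:
              break                                  # estimated error below target
          nodes.insert(worst + 1, 0.5 * (nodes[worst] + nodes[worst + 1]))

      print(len(nodes), "nodes, clustered around x = 0.5")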

  3. Influence of resolution in irrigated area mapping and area estimation

    USGS Publications Warehouse

    Velpuri, N.M.; Thenkabail, P.S.; Gumma, M.K.; Biradar, C.; Dheeravath, V.; Noojipady, P.; Yuanjie, L.

    2009-01-01

    The overarching goal of this paper was to determine how irrigated areas change with the resolution (or scale) of the imagery. Specific objectives were to (a) map irrigated areas at four distinct spatial resolutions (or scales), (b) determine how irrigated areas change with resolution, and (c) establish the causes of differences in resolution-based irrigated areas. The study was conducted in the very large Krishna River basin (India), which has a high degree of formal contiguous and informal fragmented irrigated areas. The irrigated areas were mapped using satellite sensor data at four distinct resolutions: (a) NOAA AVHRR Pathfinder 10,000 m, (b) Terra MODIS 500 m, (c) Terra MODIS 250 m, and (d) Landsat ETM+ 30 m. The proportions of irrigated area relative to the Landsat 30 m derived irrigated area (9.36 million hectares for the Krishna basin) were (a) 95 percent using MODIS 250 m, (b) 93 percent using MODIS 500 m, and (c) 86 percent using AVHRR 10,000 m. It was found that the precise locations of the irrigated areas were better established using finer spatial resolution data. A strong relationship (R² = 0.74 to 0.95) was observed between irrigated areas determined at the various resolutions. This study confirmed the hypothesis that "the finer the spatial resolution of the sensor used, the greater the irrigated area derived," since at finer spatial resolutions, fragmented areas are detected better. Accuracies and errors were established consistently for three classes (surface water irrigated, ground water/conjunctive use irrigated, and nonirrigated) across the four resolutions mentioned above. The results showed that the Landsat data provided significantly higher overall accuracies (84 percent) when compared to MODIS 500 m (77 percent), MODIS 250 m (79 percent), and AVHRR 10,000 m (63 percent). © 2009 American Society for Photogrammetry and Remote Sensing.

  4. Debris flow risk mapping on medium scale and estimation of prospective economic losses

    NASA Astrophysics Data System (ADS)

    Blahut, Jan; Sterlacchini, Simone

    2010-05-01

    Delimitation of potential zones affected by debris flow hazard, mapping of areas at risk, and estimation of future economic damage provide important information for spatial planners and local administrators in all countries endangered by this type of phenomenon. This study presents a medium-scale (1:25,000-1:50,000) analysis applied in the Consortium of Mountain Municipalities of Valtellina di Tirano (Italian Alps, Lombardy Region). In this area, a debris flow hazard map was coupled with information about the elements at risk to obtain monetary values of prospective damage. Two available hazard maps were obtained from medium-scale GIS modelling. Probabilities of debris flow occurrence were estimated using existing susceptibility maps and two sets of aerial images. Value was assigned to the elements at risk according to the official information on housing costs and land value from the Territorial Agency of Lombardy Region. In the first risk map, vulnerability values were assumed to be 1. The second risk map uses three classes of vulnerability values qualitatively estimated according to the possible debris flow propagation. Risk curves summarizing the possible economic losses were calculated. Finally, these maps of economic risk were compared to maps derived from qualitative evaluation of the values of the elements at risk.

  5. Multiple Illuminant Colour Estimation via Statistical Inference on Factor Graphs.

    PubMed

    Mutimbu, Lawrence; Robles-Kelly, Antonio

    2016-08-31

    This paper presents a method to recover a spatially varying illuminant colour estimate from scenes lit by multiple light sources. Starting with the image formation process, we formulate the illuminant recovery problem in a statistically data-driven setting. To do this, we use a factor graph defined across the scale space of the input image. In the graph, we utilise a set of illuminant prototypes computed using a data-driven approach. As a result, our method delivers a pixelwise illuminant colour estimate without the need for libraries or user input. The use of a factor graph also allows the illuminant estimates to be recovered via a maximum a posteriori (MAP) inference process. Moreover, we compute the probability marginals by performing a Delaunay triangulation on our factor graph. We illustrate the utility of our method for pixelwise illuminant colour recovery on widely available datasets and compare against a number of alternatives. We also show sample colour correction results on real-world images.
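
    As a stripped-down illustration of MAP inference over such a graph, the sketch below runs max-product (Viterbi) dynamic programming on a 1-D chain of image regions rather than the paper's scale-space factor graph: each region has a cost for every illuminant prototype, and a Potts term encourages neighbouring regions to agree (all numbers invented):

      import numpy as np

      def map_illuminants(unary, smooth=0.5):
          """MAP prototype assignment on a chain: unary[i, k] is the cost of
          giving region i prototype k; a Potts penalty couples neighbours."""
          n, K = unary.shape
          pair = smooth * (1.0 - np.eye(K))
          cost, back = unary[0].copy(), np.zeros((n, K), dtype=int)
          for i in range(1, n):
              total = cost[:, None] + pair          # (previous label, current label)
              back[i] = total.argmin(axis=0)
              cost = total.min(axis=0) + unary[i]
          labels = np.empty(n, dtype=int)
          labels[-1] = int(cost.argmin())
          for i in range(n - 1, 0, -1):
              labels[i - 1] = back[i, labels[i]]
          return labels

      unary = np.array([[0.1, 0.9, 0.8],            # region-to-prototype costs
                        [0.4, 0.3, 0.9],
                        [0.8, 0.2, 0.7],
                        [0.7, 0.3, 0.6]])
      print(map_illuminants(unary))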

  6. Functional mapping of reaction norms to multiple environmental signals through nonparametric covariance estimation

    PubMed Central

    2011-01-01

    Background The identification of genes or quantitative trait loci that are expressed in response to different environmental factors such as temperature and light, through functional mapping, critically relies on precise modeling of the covariance structure. Previous work used separable parametric covariance structures, such as a Kronecker product of autoregressive one [AR(1)] matrices, that do not account for interaction effects of different environmental factors. Results We implement a more robust nonparametric covariance estimator to model these interactions within the framework of functional mapping of reaction norms to two signals. Our results from Monte Carlo simulations show that this estimator can be useful in modeling interactions that exist between two environmental signals. The interactions are simulated using nonseparable covariance models with spatio-temporal structural forms that mimic interaction effects. Conclusions The nonparametric covariance estimator has an advantage over separable parametric covariance estimators in the detection of QTL location, thus extending the breadth of use of functional mapping in practical settings. PMID:21269481

  7. Accuracy and precision of stream reach water surface slopes estimated in the field and from maps

    USGS Publications Warehouse

    Isaak, D.J.; Hubert, W.A.; Krueger, K.L.

    1999-01-01

    The accuracy and precision of five tools used to measure stream water surface slope (WSS) were evaluated. Water surface slopes estimated in the field with a clinometer or from topographic maps used in conjunction with a map wheel or geographic information system (GIS) were significantly higher than WSS estimated in the field with a surveying level (biases of 34, 41, and 53%, respectively). Accuracy of WSS estimates obtained with an Abney level did not differ from surveying level estimates, but conclusions regarding the accuracy of Abney levels and clinometers were weakened by intratool variability. The surveying level estimated WSS most precisely (coefficient of variation [CV] = 0.26%), followed by the GIS (CV = 1.87%), map wheel (CV = 6.18%), Abney level (CV = 13.68%), and clinometer (CV = 21.57%). Estimates of WSS measured in the field with an Abney level and estimated for the same reaches with a GIS used in conjunction with 1:24,000-scale topographic maps were significantly correlated (r = 0.86), but there was a tendency for the GIS to overestimate WSS. Detailed accounts of the methods used to measure WSS and recommendations regarding the measurement of WSS are provided.

  8. Robust Parallel Motion Estimation and Mapping with Stereo Cameras in Underground Infrastructure

    NASA Astrophysics Data System (ADS)

    Liu, Chun; Li, Zhengning; Zhou, Yuan

    2016-06-01

    We developed a novel robust motion estimation method for localization and mapping in underground infrastructure using a pre-calibrated rigid stereo camera rig. Localization and mapping in underground infrastructure is important to safety, yet it is also nontrivial, since most underground infrastructure has poor lighting conditions and featureless structure. Overcoming these difficulties, we found that a parallel system is more efficient than the EKF-based SLAM approach, since it divides the motion estimation and 3D mapping tasks into separate threads, eliminating the data-association problem that is quite an issue in SLAM. Moreover, the motion estimation thread takes advantage of a state-of-the-art robust visual odometry algorithm that functions well under low illumination and provides accurate pose information. We designed and built an unmanned vehicle and used it to collect a dataset in an underground garage. The parallel system was evaluated on this dataset. Motion estimation results indicated a relative position error of 0.3%, and 3D mapping results showed a mean position error of 13 cm; off-line processing reduced the position error to 2 cm. Performance evaluation on the actual dataset showed that our system is capable of robust motion estimation and accurate 3D mapping in poorly illuminated and featureless underground environments.

  9. The Effect of Map Boundary on Estimates of Landscape Resistance to Animal Movement

    PubMed Central

    Koen, Erin L.; Garroway, Colin J.; Wilson, Paul J.; Bowman, Jeff

    2010-01-01

    Background Artificial boundaries on a map occur when the map extent does not cover the entire area of study; edges on the map do not exist on the ground. These artificial boundaries might bias the results of animal dispersal models by creating artificial barriers to movement for model organisms where there are no barriers for real organisms. Here, we characterize the effects of artificial boundaries on calculations of landscape resistance to movement using circuit theory. We then propose and test a solution to artificially inflated resistance values whereby we place a buffer around the artificial boundary as a substitute for the true, but unknown, habitat. Methodology/Principal Findings We randomly assigned landscape resistance values to map cells in the buffer in proportion to their occurrence in the known map area. We used circuit theory to estimate landscape resistance to organism movement and gene flow, and compared the output across several scenarios: a habitat-quality map with artificial boundaries and no buffer, a map with a buffer composed of randomized habitat quality data, and a map with a buffer composed of the true habitat quality data. We tested the sensitivity of the randomized buffer to the possibility that the composition of the real but unknown buffer is biased toward high or low quality. We found that artificial boundaries result in an overestimate of landscape resistance. Conclusions/Significance Artificial map boundaries overestimate resistance values. We recommend the use of a buffer composed of randomized habitat data as a solution to this problem. We found that resistance estimated using the randomized buffer did not differ from estimates using the real data, even when the composition of the real data was varied. Our results may be relevant to those interested in employing Circuitscape software in landscape connectivity and landscape genetics studies. PMID:20668690

  10. Fat fraction bias correction using T1 estimates and flip angle mapping.

    PubMed

    Yang, Issac Y; Cui, Yifan; Wiens, Curtis N; Wade, Trevor P; Friesen-Waldner, Lanette J; McKenzie, Charles A

    2014-01-01

    To develop a new method of reducing T1 bias in proton density fat fraction (PDFF) measured with iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL). PDFF maps reconstructed from high flip angle IDEAL measurements were simulated and acquired from phantoms and volunteer L4 vertebrae. T1 bias was corrected using a priori T1 values for water and fat, both with and without flip angle correction. Signal-to-noise ratio (SNR) maps were used to measure precision of the reconstructed PDFF maps. PDFF measurements acquired using small flip angles were then compared to both sets of corrected large flip angle measurements for accuracy and precision. Simulations show similar results in PDFF error between small flip angle measurements and corrected large flip angle measurements as long as T1 estimates were within one standard deviation from the true value. Compared to low flip angle measurements, phantom and in vivo measurements demonstrate better precision and accuracy in PDFF measurements if images were acquired at a high flip angle, with T1 bias corrected using T1 estimates and flip angle mapping. T1 bias correction of large flip angle acquisitions using estimated T1 values with flip angle mapping yields fat fraction measurements of similar accuracy and superior precision compared to low flip angle acquisitions. Copyright © 2013 Wiley Periodicals, Inc.

  11. Stress Recovery and Error Estimation for Shell Structures

    NASA Technical Reports Server (NTRS)

    Yazdani, A. A.; Riggs, H. R.; Tessler, A.

    2000-01-01

    The Penalized Discrete Least-Squares (PDLS) stress recovery (smoothing) technique developed for two dimensional linear elliptic problems is adapted here to three-dimensional shell structures. The surfaces are restricted to those which have a 2-D parametric representation, or which can be built-up of such surfaces. The proposed strategy involves mapping the finite element results to the 2-D parametric space which describes the geometry, and smoothing is carried out in the parametric space using the PDLS-based Smoothing Element Analysis (SEA). Numerical results for two well-known shell problems are presented to illustrate the performance of SEA/PDLS for these problems. The recovered stresses are used in the Zienkiewicz-Zhu a posteriori error estimator. The estimated errors are used to demonstrate the performance of SEA-recovered stresses in automated adaptive mesh refinement of shell structures. The numerical results are encouraging. Further testing involving more complex, practical structures is necessary.

  12. Position Estimation and Local Mapping Using Omnidirectional Images and Global Appearance Descriptors

    PubMed Central

    Berenguer, Yerai; Payá, Luis; Ballesta, Mónica; Reinoso, Oscar

    2015-01-01

    This work presents methods to create local maps and to estimate the position of a mobile robot using the global appearance of omnidirectional images. We use a robot that carries an omnidirectional vision system. Every omnidirectional image acquired by the robot is described with a single global appearance descriptor based on the Radon transform. Two different scenarios are considered in this paper. In the first, we assume the existence of a previously built map composed of omnidirectional images captured from known positions. The goal in this case is to estimate which position of the map is nearest to the current position of the robot, making use of the visual information acquired from its current (unknown) position. In the second, we assume a model of the environment composed of omnidirectional images, but with no information about where the images were acquired. The goal in this case is to build a local map and estimate the position of the robot within this map. Both methods are tested with different databases (including virtual and real images), taking into consideration changes in the position of different objects in the environment, different lighting conditions, and occlusions. The results show the effectiveness and robustness of both methods. PMID:26501289
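
    A compact sketch of the first scenario follows: every stored omnidirectional image is reduced to a single global Radon-transform descriptor, and the robot's current view is matched to the nearest stored descriptor (random arrays stand in for real panoramic images, and the distance measure is plain Euclidean rather than the ones studied in the paper):

      import numpy as np
      from skimage.transform import radon

      def radon_descriptor(img, n_angles=60):
          """One global appearance descriptor per image: its Radon transform,
          flattened and normalized."""
          theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
          d = radon(img, theta=theta, circle=False).ravel()
          return d / np.linalg.norm(d)

      def nearest_map_position(query, map_images):
          q = radon_descriptor(query)
          dists = [np.linalg.norm(q - radon_descriptor(m)) for m in map_images]
          return int(np.argmin(dists))             # index of the closest map image

      rng = np.random.default_rng(3)
      map_images = [rng.random((64, 64)) for _ in range(10)]   # stand-in "map"
      query = map_images[4] + 0.05 * rng.random((64, 64))      # noisy revisit
      print(nearest_map_position(query, map_images))           # expected: 4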

  13. Precipitation estimation in mountainous terrain using multivariate geostatistics. Part II: isohyetal maps

    USGS Publications Warehouse

    Hevesi, Joseph A.; Flint, Alan L.; Istok, Jonathan D.

    1992-01-01

    Values of average annual precipitation (AAP) may be important for hydrologic characterization of a potential high-level nuclear-waste repository site at Yucca Mountain, Nevada. Reliable measurements of AAP are sparse in the vicinity of Yucca Mountain, and estimates of AAP were needed for an isohyetal mapping over a 2600-square-mile watershed containing Yucca Mountain. Estimates were obtained with a multivariate geostatistical model developed using AAP and elevation data from a network of 42 precipitation stations in southern Nevada and southeastern California. An additional 1531 elevations were obtained to improve estimation accuracy. Isohyets representing estimates obtained using univariate geostatistics (kriging) defined a smooth and continuous surface. Isohyets representing estimates obtained using multivariate geostatistics (cokriging) defined an irregular surface that more accurately represented expected local orographic influences on AAP. Cokriging results included a maximum estimate within the study area of 335 mm at an elevation of 7400 ft, an average estimate of 157 mm for the study area, and an average estimate of 172 mm at eight locations in the vicinity of the potential repository site. Kriging estimates tended to be lower in comparison because the increased AAP expected for remote mountainous topography was not adequately represented by the available sample. Regression results between cokriging estimates and elevation were similar to regression results between measured AAP and elevation. The position of the cokriging 250-mm isohyet relative to the boundaries of pinyon pine and juniper woodlands provided indirect evidence of improved estimation accuracy because the cokriging result agreed well with investigations by others concerning the relationship between elevation, vegetation, and climate in the Great Basin. Calculated estimation variances were also mapped and compared to evaluate improvements in estimation accuracy. Cokriging estimation variances

  14. Metrics and Mappings: A Framework for Understanding Real-World Quantitative Estimation.

    ERIC Educational Resources Information Center

    Brown, Norman R.; Siegler, Robert S.

    1993-01-01

    A metrics and mapping framework is proposed to account for how heuristics, domain-specific reasoning, and intuitive statistical induction processes are integrated to generate estimates. Results of 4 experiments involving 188 undergraduates illustrate framework usefulness and suggest when people use heuristics and when they emphasize…

  15. EFFECTS OF IMPROVED PRECIPITATION ESTIMATES ON AUTOMATED RUNOFF MAPPING: EASTERN UNITED STATES

    EPA Science Inventory

    We evaluated maps of runoff created by means of two automated procedures. We implemented each procedure using precipitation estimates of both 5-km and 10-km resolution from PRISM (Parameter-elevation Regressions on Independent Slopes Model). Our goal was to determine if using the...

  16. Estimation of flood environmental effects using flood zone mapping techniques in Halilrood Kerman, Iran.

    PubMed

    Boudaghpour, Siamak; Bagheri, Majid; Bagheri, Zahra

    2014-01-01

    High flood occurrences with large environmental damages show a growing trend in Iran. The dynamic movement of water during a flood causes different environmental damages in geographical areas with different characteristics, such as topographic conditions. In general, the environmental effects and damages caused by a flood in an area can be investigated from different points of view. The current study aims to detect the environmental effects of flood occurrences in the Halilrood catchment area of Kerman province in Iran using flood zone mapping techniques. The intended flood zone map was produced in four steps: steps 1 to 3 calculate and estimate the flood zone map for the study area, while step 4 estimates the environmental effects of flood occurrence. Based on our studies, flood zone mapping techniques provide a wide range of accuracy for estimating the environmental effects of flood occurrence. Moreover, it was identified that the existence of the Jiroft dam in the study area can decrease the flood zone from 260 hectares to 225 hectares and can also decrease flood peak intensity by 20%. As a result, 14% of the flood zone in the study area can be saved environmentally.

  17. Accurate motor mapping in awake common marmosets using micro-electrocorticographical stimulation and stochastic threshold estimation

    NASA Astrophysics Data System (ADS)

    Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi

    2018-06-01

    Objective. The motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations remain. Test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection, and thus lacked quantitative measurements. A reliable and quantitative motor map is important to elucidate the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations based on motor thresholds, which were stochastically estimated from motor evoked potentials recorded via chronically implanted micro-electrocorticographical (µECoG) electrode arrays, in common marmosets. Approach. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated through a modified maximum likelihood threshold-hunting algorithm fitted with the recorded data from marmosets. Further, a computer simulation confirmed the reliability of the algorithm. Main results. The computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm enabled estimation of the motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Significance. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we were able to achieve reliable motor mapping in common marmosets with the ECS system.
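
    To make the threshold-hunting step concrete, here is a grid-search maximum-likelihood threshold estimate in the spirit of the Awiszus-type algorithm the study modifies. The cumulative-Gaussian response curve, relative slope, and trial data are illustrative assumptions, not the study's parameters.

    ```python
    # Sketch of maximum-likelihood threshold hunting from (intensity, response)
    # trials; after each trial the next stimulus would be placed at the estimate.
    import numpy as np
    from scipy.stats import norm

    def ml_threshold(intensities, responses,
                     grid=np.linspace(1, 100, 991), rel_slope=0.07):
        """Grid value of threshold maximizing the likelihood of 0/1 responses
        under a cumulative-Gaussian curve with spread proportional to threshold."""
        loglik = np.zeros_like(grid)
        for s, r in zip(intensities, responses):
            p = norm.cdf((s - grid) / (rel_slope * grid))  # P(response | threshold)
            p = np.clip(p, 1e-12, 1 - 1e-12)
            loglik += np.log(p) if r else np.log1p(-p)
        return grid[np.argmax(loglik)]

    # Toy trial sequence: stimulus intensities and observed MEP presence/absence.
    stim = np.array([40, 55, 48, 52, 50])
    resp = np.array([0, 1, 0, 1, 1])
    print(ml_threshold(stim, resp))
    ```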

  18. Accurate motor mapping in awake common marmosets using micro-electrocorticographical stimulation and stochastic threshold estimation.

    PubMed

    Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi

    2018-06-01

    The motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations remain. Test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection, and thus lacked quantitative measurements. A reliable and quantitative motor map is important to elucidate the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations based on motor thresholds, which were stochastically estimated from motor evoked potentials recorded via chronically implanted micro-electrocorticographical (µECoG) electrode arrays, in common marmosets. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated through a modified maximum likelihood threshold-hunting algorithm fitted with the recorded data from marmosets. Further, a computer simulation confirmed the reliability of the algorithm. The computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm enabled estimation of the motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we were able to achieve reliable motor mapping in common marmosets with the ECS system.

  19. Two techniques for mapping and area estimation of small grains in California using Landsat digital data

    NASA Technical Reports Server (NTRS)

    Sheffner, E. J.; Hlavka, C. A.; Bauer, E. M.

    1984-01-01

    Two techniques have been developed for the mapping and area estimation of small grains in California from Landsat digital data. The two techniques are Band Ratio Thresholding, a semi-automated version of a manual procedure, and LCLS, a layered classification technique which can be fully automated and is based on established clustering and classification technology. Preliminary evaluation results indicate that the two techniques have potential for providing map products that can be incorporated into existing inventory procedures, and for providing automated alternatives both to traditional inventory techniques and to those that currently employ Landsat imagery.

  20. Estimated flood-inundation maps for Cowskin Creek in western Wichita, Kansas

    USGS Publications Warehouse

    Studley, Seth E.

    2003-01-01

    The October 31, 1998, flood on Cowskin Creek in western Wichita, Kansas, caused millions of dollars in damages. Emergency management personnel and flood mitigation teams had difficulty in efficiently identifying areas affected by the flooding, and no warning was given to residents because flood-inundation information was not available. To provide detailed information about future flooding on Cowskin Creek, high-resolution estimated flood-inundation maps were developed using geographic information system technology and advanced hydraulic analysis. Two-foot-interval land-surface elevation data from a 1996 flood insurance study were used to create a three-dimensional topographic representation of the study area for hydraulic analysis. The data computed from the hydraulic analyses were converted into geographic information system format with software from the U.S. Army Corps of Engineers' Hydrologic Engineering Center. The results were overlaid on the three-dimensional topographic representation of the study area to produce maps of estimated flood-inundation areas and estimated depths of water in the inundated areas for 1-foot increments on the basis of stream stage at an index streamflow-gaging station. A Web site (http://ks.water.usgs.gov/Kansas/cowskin.floodwatch) was developed to provide the public with information pertaining to flooding in the study area. The Web site shows graphs of the real-time streamflow data for U.S. Geological Survey gaging stations in the area and monitors the National Weather Service Arkansas-Red Basin River Forecast Center for Cowskin Creek flood-forecast information. When a flood is forecast for the Cowskin Creek Basin, an estimated flood-inundation map is displayed for the stream stage closest to the National Weather Service's forecasted peak stage. Users of the Web site are able to view the estimated flood-inundation maps for selected stages at any time and to access information about this report and about flooding in general. Flood

  1. Aporrectodea caliginosa, a relevant earthworm species for a posteriori pesticide risk assessment: current knowledge and recommendations for culture and experimental design.

    PubMed

    Bart, Sylvain; Amossé, Joël; Lowe, Christopher N; Mougin, Christian; Péry, Alexandre R R; Pelosi, Céline

    2018-06-21

    Ecotoxicological tests with earthworms are widely used and are mandatory for the risk assessment of pesticides prior to registration and commercial use. The current model species for standardized tests is Eisenia fetida or Eisenia andrei. However, these species are absent from agricultural soils and often less sensitive to pesticides than other earthworm species found in mineral soils. To move towards a better assessment of pesticide effects on non-target organisms, there is a need to perform a posteriori tests using relevant species. The endogeic species Aporrectodea caliginosa (Savigny, 1826) is representative of cultivated fields in temperate regions and is suggested as a relevant model test species. After providing information on its taxonomy, biology, and ecology, we reviewed current knowledge concerning its sensitivity towards pesticides. Moreover, we highlighted research gaps and promising perspectives. Finally, advice and recommendations are given for the establishment of laboratory cultures and experiments using this soil-dwelling earthworm species.

  2. Relative risk estimation for malaria disease mapping based on stochastic SIR-SI model in Malaysia

    NASA Astrophysics Data System (ADS)

    Samat, Nor Azah; Ma'arof, Syafiqah Husna Mohd Imam

    2016-10-01

    Disease mapping is a study of the geographical distribution of a disease to represent epidemiology data spatially. The production of maps is important to identify areas that deserve closer scrutiny or more attention. In this study, a mosquito-borne disease called malaria is the focus of our application. Malaria is caused by parasites of the genus Plasmodium and is transmitted to people through the bites of infected female Anopheles mosquitoes. Precautionary steps need to be considered in order to prevent the malaria parasite from spreading around the world, especially in tropical and subtropical countries, where the number of malaria cases would otherwise increase. Thus, the purpose of this paper is to discuss a stochastic model employed to estimate the relative risk of malaria disease in Malaysia. The outcomes of the analysis include a malaria risk map for all 16 states in Malaysia, revealing the high- and low-risk areas of malaria occurrence.

  3. Estimation and mapping of uranium content of geological units in France.

    PubMed

    Ielsch, G; Cuney, M; Buscail, F; Rossi, F; Leon, A; Cushing, M E

    2017-01-01

    In France, natural radiation accounts for most of the population exposure to ionizing radiation. The Institute for Radiological Protection and Nuclear Safety (IRSN) carries out studies to evaluate the variability of natural radioactivity over the French territory. In this framework, the present study consisted of evaluating uranium concentrations in bedrock. The objective was to provide an estimate of the uranium content of each geological unit defined in the geological map of France (1:1,000,000). The methodology was based on the interpretation of existing geochemical data (results of whole-rock sample analysis) and on knowledge of the petrology and lithology of the geological units, which yielded a first estimate of the uranium content of the rocks. This first estimate was then refined with additional information. For example, certain local or regional sedimentary rocks that could present uranium contents higher than those generally observed for their lithologies were identified. Moreover, mining databases provided the locations of uranium and coal/lignite mines and thus indicated the locations of particular uranium-rich rocks. The geological units, defined from their boundaries extracted from the geological map of France (1:1,000,000), were finally classified into 5 categories based on their mean uranium content. The resulting map provides useful data for establishing the geogenic radon map of France, and also for mapping countrywide exposure to terrestrial radiation and for evaluating the background levels of natural radioactivity used in impact assessments of anthropogenic activities. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Soil amplification maps for estimating earthquake ground motions in the Central US

    USGS Publications Warehouse

    Bauer, R.A.; Kiefer, J.; Hester, N.

    2001-01-01

    The State Geologists of the Central United States Earthquake Consortium (CUSEC) are developing maps to assist State and local emergency managers and community officials in evaluating the earthquake hazards for the CUSEC region. The state geological surveys have worked together to produce a series of maps that show seismic shaking potential for eleven 1 X 2 degree (scale 1:250,000, or 1 in. ≈ 3.9 miles) quadrangles that cover the high-risk area of the New Madrid Seismic Zone in eight states. Shear wave velocity values for the surficial materials were gathered and used to classify the soils according to their potential to amplify earthquake ground motions. Geologic base maps of surficial materials or 3-D material maps, either existing or produced for this project, were used in conjunction with shear wave velocities to classify the soils for the upper 15-30 m. These maps are available in an electronic form suitable for inclusion in the Federal Emergency Management Agency's earthquake loss estimation program (HAZUS). © 2001 Elsevier Science B.V. All rights reserved.

  5. A three-step maximum a posteriori probability method for InSAR data inversion of coseismic rupture with application to the 14 April 2010 Mw 6.9 Yushu, China, earthquake

    NASA Astrophysics Data System (ADS)

    Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei

    2013-08-01

    We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the posterior probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the posterior PDF ("mode" in statistics) in the first step. The second-step inversion approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Then slip artifacts are eliminated from the slip models in the third step using the same procedure as the second step, with fixed fault geometry parameters. We first design a fault model with 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China, earthquake. Our preferred slip model is composed of three segments, with most of the slip occurring within 15 km depth; the maximum slip reaches 1.38 m at the surface. The seismic moment released is estimated to be 2.32 × 10^19 N·m, consistent with the seismic estimate of 2.50 × 10^19 N·m.
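
    The first step (a global stochastic search for the posterior mode) can be sketched with an off-the-shelf annealer. The toy negative log-posterior below is a placeholder for the InSAR elastic-deformation misfit plus priors, and scipy's dual_annealing stands in for the paper's adaptive simulated annealing.

    ```python
    # Step-1 sketch: global search for the posterior mode (MAP point).
    import numpy as np
    from scipy.optimize import dual_annealing

    def neg_log_posterior(m):
        """Toy misfit + prior; replace with -log p(d|m) - log p(m) for real data."""
        data_misfit = np.sum((m - np.array([1.5, -0.7, 3.0])) ** 2)
        prior_penalty = 0.1 * np.sum(np.abs(m))   # e.g., smoothness/positivity terms
        return data_misfit + prior_penalty

    bounds = [(-10.0, 10.0)] * 3                  # fault-parameter search ranges
    result = dual_annealing(neg_log_posterior, bounds, seed=0)
    print("posterior mode (MAP point):", result.x)
    ```

    Steps 2 and 3 would then refine this mode with constrained Monte Carlo sampling, which is not reproduced here.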

  6. Direct estimation of tracer-kinetic parameter maps from highly undersampled brain dynamic contrast enhanced MRI.

    PubMed

    Guo, Yi; Lingala, Sajan Goud; Zhu, Yinghua; Lebel, R Marc; Nayak, Krishna S

    2017-10-01

    The purpose of this work was to develop and evaluate a T1-weighted dynamic contrast enhanced (DCE) MRI methodology where tracer-kinetic (TK) parameter maps are directly estimated from undersampled (k,t)-space data. The proposed reconstruction involves solving a nonlinear least squares optimization problem that includes explicit use of a full forward model to convert parameter maps to (k,t)-space, utilizing the Patlak TK model. The proposed scheme is compared against an indirect method that creates intermediate images by parallel imaging and compressed sensing prior to TK modeling. Thirteen fully sampled brain tumor DCE-MRI scans with 5-second temporal resolution are retrospectively undersampled at rates R = 20, 40, 60, 80, and 100 for each dynamic frame. TK maps are quantitatively compared based on root mean-squared error (rMSE) and Bland-Altman analysis. The approach is also applied to four prospectively R = 30 undersampled whole-brain DCE-MRI data sets. In the retrospective study, the proposed method performed statistically better than the indirect method at R ≥ 80 for all 13 cases. The approach restored TK parameter values with fewer errors in tumor regions of interest, an improvement over a state-of-the-art indirect method. Applied prospectively, the proposed method provided whole-brain, high-resolution TK maps with good image quality. Model-based direct estimation of TK maps from (k,t)-space DCE-MRI data is feasible at up to 100-fold undersampling. Magn Reson Med 78:1566-1578, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
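
    For contrast with the direct (k,t)-space approach, the conventional per-voxel Patlak fit is linear in its two parameters, C_t(t) = K_trans · ∫ C_p dτ + v_p · C_p(t), and fits in a few lines. The arterial input and tissue curves below are synthetic assumptions, not study data.

    ```python
    # Per-voxel Patlak fit via linear least squares (indirect-method building block).
    import numpy as np

    def patlak_fit(t, cp, ct):
        """Fit C_t(t) = Ktrans * int_0^t C_p dtau + v_p * C_p(t)."""
        # trapezoidal running integral of the arterial input function
        cp_int = np.concatenate(([0.0],
                                 np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
        A = np.column_stack([cp_int, cp])          # design matrix
        (ktrans, vp), *_ = np.linalg.lstsq(A, ct, rcond=None)
        return ktrans, vp

    t = np.linspace(0, 300, 61)                    # seconds, 5 s temporal resolution
    cp = np.exp(-t / 120.0) * (t / 30.0)           # toy arterial input function
    ct = 0.002 * np.cumsum(cp) * (t[1] - t[0]) + 0.05 * cp   # toy tissue curve
    print(patlak_fit(t, cp, ct))                   # ~ (0.002, 0.05)
    ```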

  7. Monopole and dipole estimation for multi-frequency sky maps by linear regression

    NASA Astrophysics Data System (ADS)

    Wehus, I. K.; Fuskeland, U.; Eriksen, H. K.; Banday, A. J.; Dickinson, C.; Ghosh, T.; Górski, K. M.; Lawrence, C. R.; Leahy, J. P.; Maino, D.; Reich, P.; Reich, W.

    2017-01-01

    We describe a simple but efficient method for deriving a consistent set of monopole and dipole corrections for multi-frequency sky map data sets, allowing robust parametric component separation with the same data set. The computational core of this method is linear regression between pairs of frequency maps, often called T-T plots. Individual contributions from monopole and dipole terms are determined by performing the regression locally in patches on the sky, while the degeneracy between different frequencies is lifted whenever the dominant foreground component exhibits a significant spatial spectral index variation. Based on this method, we present two different, but each internally consistent, sets of monopole and dipole coefficients for the nine-year WMAP, Planck 2013, SFD 100 μm, Haslam 408 MHz and Reich & Reich 1420 MHz maps. The two sets have been derived with different analysis assumptions and data selection, and provide an estimate of residual systematic uncertainties. In general, our values are in good agreement with previously published results. Among the most notable results are a relative dipole between the WMAP and Planck experiments of 10-15μK (depending on frequency), an estimate of the 408 MHz map monopole of 8.9 ± 1.3 K, and a non-zero dipole in the 1420 MHz map of 0.15 ± 0.03 K pointing towards Galactic coordinates (l,b) = (308°,-36°) ± 14°. These values represent the sum of any instrumental and data processing offsets, as well as any Galactic or extra-Galactic component that is spectrally uniform over the full sky.
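
    The computational core named above, linear regression between pairs of frequency maps, reduces to an ordinary least-squares fit per sky patch. In the toy sketch below, simulated maps stand in for real frequency channels: the slope tracks the dominant foreground spectral scaling and the intercept the relative monopole-type offset.

    ```python
    # T-T-plot sketch: regress one frequency map against another within a patch.
    import numpy as np

    rng = np.random.default_rng(1)
    foreground = rng.gamma(2.0, 1.0, size=5000)     # common sky signal in a patch
    map_a = 1.0 * foreground + rng.normal(0, 0.05, 5000)
    map_b = 0.6 * foreground + 2.5 + rng.normal(0, 0.05, 5000)  # offset of 2.5

    slope, intercept = np.polyfit(map_a, map_b, 1)
    print(f"spectral scaling ~ {slope:.3f}, offset (monopole term) ~ {intercept:.3f}")
    ```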

  8. Forest Aboveground Biomass Mapping and Canopy Cover Estimation from Simulated ICESat-2 Data

    NASA Astrophysics Data System (ADS)

    Narine, L.; Popescu, S. C.; Neuenschwander, A. L.

    2017-12-01

    The assessment of forest aboveground biomass (AGB) can contribute to reducing uncertainties associated with the amount and distribution of terrestrial carbon. With a planned launch date of July 2018, the Ice, Cloud and Land Elevation Satellite-2 (ICESat-2) will provide data which will offer the possibility of mapping AGB at global scales. In this study, we develop approaches for utilizing vegetation data that will be delivered in ICESat-2's land-vegetation along track product (ATL08). The specific objectives are to: (1) simulate ICESat-2 photon-counting lidar (PCL) data using airborne lidar data, (2) utilize simulated PCL data to estimate forest canopy cover and AGB and, (3) upscale AGB predictions to create a wall-to-wall AGB map at 30-m spatial resolution. Using existing airborne lidar data for Sam Houston National Forest (SHNF) located in southeastern Texas and known ICESat-2 beam locations, PCL data are simulated from discrete return lidar points. We use multiple linear regression models to relate simulated PCL metrics for 100 m segments along the ICESat-2 ground tracks to AGB from a biomass map developed using airborne lidar data and canopy cover calculated from the same. Random Forest is then used to create an AGB map from predicted estimates and explanatory data consisting of spectral metrics derived from Landsat TM imagery and land cover data from the National Land Cover Database (NLCD). Findings from this study will demonstrate how data that will be acquired by ICESat-2 can be used to estimate forest structure and characterize the spatial distribution of AGB.
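
    The upscaling step described above can be sketched as a regression from track-segment AGB to wall-to-wall predictors. Feature values and dimensions below are invented placeholders; only the Random Forest pattern follows the abstract.

    ```python
    # Sketch: train on ICESat-2-like track segments, predict a wall-to-wall map.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X_track = rng.uniform(size=(500, 4))      # e.g., Landsat metrics + NLCD code
    agb_track = 120 * X_track[:, 0] + 30 * X_track[:, 1] + rng.normal(0, 5, 500)

    rf = RandomForestRegressor(n_estimators=300, random_state=0)
    rf.fit(X_track, agb_track)                # train on track-level AGB estimates

    X_grid = rng.uniform(size=(10000, 4))     # predictors at every 30-m pixel
    agb_map = rf.predict(X_grid)              # wall-to-wall AGB surface
    print(agb_map.mean())
    ```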

  9. The Effect of Sensor Failure on the Attitude and Rate Estimation of MAP Spacecraft

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    2003-01-01

    This work describes two algorithms for computing the angular rate and attitude in case of a gyro and a Star Tracker failure in the Microwave Anisotropy Probe (MAP) satellite, which was placed at the L2 Lagrange point, from where it collects data to determine the origin of the universe. The nature of the problem is described, two algorithms are suggested, an observability study is carried out, and real MAP data are used to determine the merit of the algorithms. It is shown that one of the algorithms yields a good estimate of the rates but not of the attitude, whereas the other algorithm yields a good estimate of the rate as well as two of the three attitude angles. The estimation of the third angle depends on the initial state estimate. There is a contradiction between this result and the outcome of the observability analysis; an explanation of this contradiction is given in the paper. Although this work treats a particular spacecraft, its conclusions are more general.

  10. A Posteriori Quantification of Rate-Controlling Effects from High-Intensity Turbulence-Flame Interactions Using 4D Measurements

    DTIC Science & Technology

    2016-11-22

    [Only fragments of this DTIC record survive extraction; report-form boilerplate removed. The recoverable text reads:] … compact at all conditions tested, as indicated by the overlap of OH and CH2O distributions. 5. We developed analytical techniques for pseudo-Lagrangian … condition in a constant-density flow requires that the flow divergence is zero, ∇ · u = 0. Three smoothing schemes were examined, a moving average (i.e. …

  11. Needlet estimation of cross-correlation between CMB lensing maps and LSS

    SciTech Connect

    Bianchini, Federico; Renzi, Alessandro; Marinucci, Domenico, E-mail: fbianchini@sissa.it, E-mail: renzi@mat.uniroma2.it, E-mail: marinucc@mat.uniroma2.it

    In this paper we develop a novel needlet-based estimator to investigate the cross-correlation between cosmic microwave background (CMB) lensing maps and large-scale structure (LSS) data. We compare this estimator with its harmonic counterpart and, in particular, we analyze the bias effects of different forms of masking. In order to address this bias, we also implement a MASTER-like technique in the needlet case. The resulting estimator turns out to have an extremely good signal-to-noise performance. Our analysis aims at expanding and optimizing the operating domains in CMB-LSS cross-correlation studies, similarly to CMB needlet data analysis. It is motivated especially by next-generation experiments (such as Euclid) which will allow us to derive much tighter constraints on cosmological and astrophysical parameters through cross-correlation measurements between CMB and LSS.

  12. Combining MODIS and Landsat imagery to estimate and map boreal forest cover loss

    USGS Publications Warehouse

    Potapov, P.; Hansen, Matthew C.; Stehman, S.V.; Loveland, Thomas R.; Pittman, K.

    2008-01-01

    Estimation of forest cover change is important for boreal forests, one of the most extensive forested biomes, due to its unique role in global timber stock, carbon sequestration and deposition, and high vulnerability to the effects of global climate change. We used time-series data from the MODerate Resolution Imaging Spectroradiometer (MODIS) to produce annual forest cover loss hotspot maps. These maps were used to assign all blocks (18.5 by 18.5 km) partitioning the boreal biome into strata of high, medium and low likelihood of forest cover loss. A stratified random sample of 118 blocks was interpreted for forest cover and forest cover loss using high spatial resolution Landsat imagery from 2000 and 2005. Area of forest cover gross loss from 2000 to 2005 within the boreal biome is estimated to be 1.63% (standard error 0.10%) of the total biome area, and represents a 4.02% reduction in year 2000 forest cover. The proportion of identified forest cover loss relative to regional forest area is much higher in North America than in Eurasia (5.63% to 3.00%). Of the total forest cover loss identified, 58.9% is attributable to wildfires. The MODIS pan-boreal change hotspot estimates reveal significant increases in forest cover loss due to wildfires in 2002 and 2003, with 2003 being the peak year of loss within the 5-year study period. Overall, the precision of the aggregate forest cover loss estimates derived from the Landsat data and the value of the MODIS-derived map displaying the spatial and temporal patterns of forest loss demonstrate the efficacy of this protocol for operational, cost-effective, and timely biome-wide monitoring of gross forest cover loss.

  13. Estimating the resolution limit of the map equation in community detection

    NASA Astrophysics Data System (ADS)

    Kawamoto, Tatsuro; Rosvall, Martin

    2015-01-01

    A community detection algorithm is considered to have a resolution limit if the scale of the smallest modules that can be resolved depends on the size of the analyzed subnetwork. The resolution limit is known to prevent some community detection algorithms from accurately identifying the modular structure of a network. In fact, any global objective function for measuring the quality of a two-level assignment of nodes into modules must have some sort of resolution limit or an external resolution parameter. However, it is yet unknown how the resolution limit affects the so-called map equation, which is known to be an efficient objective function for community detection. We derive an analytical estimate and conclude that the resolution limit of the map equation is set by the total number of links between modules instead of the total number of links in the full network as for modularity. This mechanism makes the resolution limit much less restrictive for the map equation than for modularity; in practice, it is orders of magnitudes smaller. Furthermore, we argue that the effect of the resolution limit often results from shoehorning multilevel modular structures into two-level descriptions. As we show, the hierarchical map equation effectively eliminates the resolution limit for networks with nested multilevel modular structures.
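
    For reference, the two-level map equation under discussion has the standard published form (Rosvall and Bergstrom's notation, restated here for orientation; it is not derived in the abstract above):

    ```latex
    % Two-level map equation: L(M) is the per-step description length of a
    % random walk on the network under a partition M into m modules.
    L(\mathsf{M}) = q_{\curvearrowright} H(\mathcal{Q})
                  + \sum_{i=1}^{m} p_{\circlearrowright}^{i} H(\mathcal{P}^{i})
    % q_{\curvearrowright} : total probability the walker switches modules per step
    % H(\mathcal{Q})       : entropy of the module-exit codebook
    % p_{\circlearrowright}^{i} : fraction of steps spent in module i (incl. exit)
    % H(\mathcal{P}^{i})   : entropy of module i's within-module codebook
    ```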

  14. Estimating the social value of geologic map information: A regulatory application

    USGS Publications Warehouse

    Bernknopf, R.L.; Brookshire, D.S.; McKee, M.; Soller, D.R.

    1997-01-01

    People frequently regard the landscape as part of a static system. The mountains and rivers that cross the landscape, and the bedrock that supports the surface, change little during the course of a lifetime. Society can alter the geologic history of an area and, in so doing, affect the occurrence and impact of environmental hazards. For example, changes in land use can induce changes in erosion, sedimentation, and ground-water supply. As the environmental system is changed by both natural processes and human activities, the system's capacity to respond to additional stresses also changes. Information such as geologic maps describes the physical world and is critical for identifying solutions to land use and environmental issues. In this paper, a method is developed for estimating the economic value of applying geologic map information to siting a waste disposal facility. An improvement in geologic map information is shown to have a net positive value to society. Such maps enable planners to make superior land management decisions.

  15. Development of a Greek solar map based on solar model estimations

    NASA Astrophysics Data System (ADS)

    Kambezidis, H. D.; Psiloglou, B. E.; Kavadias, K. A.; Paliatsos, A. G.; Bartzokas, A.

    2016-05-01

    The realization of Renewable Energy Sources (RES) for power generation as the only environmentally friendly solution has moved solar systems to the forefront of the energy market in the last decade. The capacity of solar power doubles almost every two years in many European countries, including Greece. This rise has brought the need for reliable predictions of meteorological data that can easily be utilized for proper RES-site allocation. The absence of solar measurements has therefore raised the demand for deploying a suitable model in order to create a solar map. The generation of a solar map for Greece could provide solid foundations for predicting the energy production of a solar power plant installed in the area, by providing an estimate of the solar energy acquired at each longitude and latitude of the map. In the present work, the well-known Meteorological Radiation Model (MRM), a broadband solar radiation model, is engaged. This model utilizes common meteorological data, such as air temperature, relative humidity, barometric pressure and sunshine duration, in order to calculate solar radiation for areas where solar measurements are not available. Hourly values of the above meteorological parameters are acquired from 39 meteorological stations evenly dispersed around Greece, and hourly values of solar radiation are calculated from MRM. Then, by using an integrated spatial interpolation method, a Greek solar energy map is generated, providing annual solar energy values all over Greece.

  16. Increasing the utility of regional water table maps: a new method for estimating groundwater recharge

    NASA Astrophysics Data System (ADS)

    Gilmore, T. E.; Zlotnik, V. A.; Johnson, M.

    2017-12-01

    Groundwater table elevations are one of the most fundamental measurements used to characterize unconfined aquifers, groundwater flow patterns, and aquifer sustainability over time. In this study, we developed an analytical model that relies on analysis of groundwater elevation contour (equipotential) shape, aquifer transmissivity, and streambed gradient between two parallel, perennial streams. Using two existing regional water table maps, created at different times using different methods, our analysis of groundwater elevation contours, transmissivity and streambed gradient produced groundwater recharge rates (42-218 mm yr-1) that were consistent with previous independent recharge estimates from different methods. The three regions we investigated overlie the High Plains Aquifer in Nebraska and include some areas where groundwater is used for irrigation. The three regions ranged from 1,500 to 3,300 km2, with either Sand Hills surficial geology, or Sand Hills transitioning to loess. Based on our results, the approach may be used to increase the value of existing water table maps, and may be useful as a diagnostic tool to evaluate the quality of groundwater table maps, identify areas in need of detailed aquifer characterization and expansion of groundwater monitoring networks, and/or as a first approximation before investing in more complex approaches to groundwater recharge estimation.

  17. Soybean Crop Area Estimation and Mapping in Mato Grosso State, Brazil

    NASA Astrophysics Data System (ADS)

    Gusso, A.; Ducati, J. R.

    2012-07-01

    Evaluation of the MODIS Crop Detection Algorithm (MCDA) procedure for estimating historical planted soybean crop areas was done on fields in Mato Grosso State, Brazil. MCDA is based on temporal profiles of EVI (Enhanced Vegetation Index) derived from satellite data of the MODIS (Moderate Resolution Imaging Spectroradiometer) imager, and was previously developed for soybean area estimation in Rio Grande do Sul State, Brazil. According to the MCDA approach, in Mato Grosso soybean area estimates can be provided in December (1st forecast), using images from the sowing period, and in February (2nd forecast), using images from the sowing and maximum crop development periods. The results obtained by the MCDA were compared with Brazilian Institute of Geography and Statistics (IBGE) official estimates of soybean area at the municipal level. Coefficients of determination were between 0.93 and 0.98, indicating good agreement and the suitability of MCDA for estimation in Mato Grosso State. On average, the MCDA results explained 96% of the variation of the data estimated by the IBGE. In this way, MCDA calibration was able to provide annual thematic soybean maps, forecasting the planted area in the State, with results comparable to the official agricultural statistics.

  18. A simple algorithm to estimate the effective regional atmospheric parameters for thermal-inertia mapping

    USGS Publications Warehouse

    Watson, K.; Hummer-Miller, S.

    1981-01-01

    A method based solely on remote sensing data has been developed to estimate those meteorological effects which are required for thermal-inertia mapping. It assumes that the atmospheric fluxes are spatially invariant and that the solar, sky, and sensible heat fluxes can be approximated by a simple mathematical form. Coefficients are determined by a least-squares fit of observational data to our thermal model. A comparison between field measurements and the model-derived flux shows the type of agreement which can be achieved. An analysis of the limitations of the method is also provided. © 1981.

  19. Convergent Cross Mapping: Basic concept, influence of estimation parameters and practical application.

    PubMed

    Schiecke, Karin; Pester, Britta; Feucht, Martha; Leistritz, Lutz; Witte, Herbert

    2015-01-01

    In neuroscience, data are typically generated from neural network activity. Complex interactions between the measured time series are involved, and little or nothing is known about the underlying dynamical system. Convergent Cross Mapping (CCM) provides the possibility to investigate nonlinear causal interactions between time series by using nonlinear state space reconstruction. The aim of this study is to investigate the general applicability of CCM and to show its potential and limitations. The influence of estimation parameters is demonstrated by means of simulated data, and an interval-based application of CCM to real data is adapted for the investigation of interactions between heart rate and specific EEG components of children with temporal lobe epilepsy.
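
    A bare-bones CCM sketch, for readers who want the mechanics: delay-embed one series, cross-map the other from nearest neighbours on the shadow manifold, and score with correlation. Embedding dimension, lag, and the coupled toy system are assumptions for illustration only.

    ```python
    # Minimal Convergent Cross Mapping: cross-map skill of estimating x from y.
    import numpy as np

    def embed(x, E, tau):
        """Time-delay embedding: rows are [x_t, x_{t-tau}, ..., x_{t-(E-1)tau}]."""
        n = len(x) - (E - 1) * tau
        return np.column_stack(
            [x[(E - 1 - j) * tau : (E - 1 - j) * tau + n] for j in range(E)])

    def ccm_skill(x, y, E=3, tau=1):
        """Correlation between y and its cross-map estimate from x's manifold."""
        Mx = embed(x, E, tau)
        y_al = y[(E - 1) * tau :]                 # align y with embedding rows
        d = np.linalg.norm(Mx[:, None, :] - Mx[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)               # exclude self-matches
        idx = np.argsort(d, axis=1)[:, : E + 1]   # E+1 nearest neighbours
        dn = np.take_along_axis(d, idx, axis=1)
        w = np.exp(-dn / np.maximum(dn[:, :1], 1e-12))   # Sugihara-style weights
        w /= w.sum(axis=1, keepdims=True)
        y_hat = (w * y_al[idx]).sum(axis=1)
        return np.corrcoef(y_al, y_hat)[0, 1]

    # Toy unidirectional coupling: x drives y, so y's manifold should map x well.
    rng = np.random.default_rng(2)
    x = np.sin(0.2 * np.arange(500)) + 0.1 * rng.normal(size=500)
    y = np.roll(x, 2) + 0.1 * rng.normal(size=500)
    print(ccm_skill(y, x))                        # estimate x from y's manifold
    ```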

  20. Comparison of a fully mapped plot design to three alternative designs for volume and area estimates using Maine inventory data

    Treesearch

    Stanford L. Arner

    1998-01-01

    A fully mapped plot design is compared to three alternative designs using data collected for the recent inventory of Maine's forest resources. Like the fully mapped design, one alternative eliminates the bias of previous procedures, and should be less costly and more consistent. There was little difference in volume and area estimates or in sampling errors among...

  1. MareyMap Online: A User-Friendly Web Application and Database Service for Estimating Recombination Rates Using Physical and Genetic Maps.

    PubMed

    Siberchicot, Aurélie; Bessy, Adrien; Guéguen, Laurent; Marais, Gabriel A B

    2017-10-01

    Given the importance of meiotic recombination in biology, there is a need to develop robust methods to estimate meiotic recombination rates. A popular approach, called the Marey map approach, relies on comparing genetic and physical maps of a chromosome to estimate local recombination rates. In the past, we have implemented this approach in an R package called MareyMap, which includes many functionalities useful to get reliable recombination rate estimates in a semi-automated way. MareyMap has been used repeatedly in studies looking at the effect of recombination on genome evolution. Here, we propose a simpler user-friendly web service version of MareyMap, called MareyMap Online, which allows a user to get recombination rates from her/his own data or from a publicly available database that we offer in a few clicks. When the analysis is done, the user is asked whether her/his curated data can be placed in the database and shared with other users, which we hope will make meta-analysis on recombination rates including many species easy in the future. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
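
    The Marey map idea itself fits in a few lines: smooth genetic position (cM) against physical position (Mb) for one chromosome and read the local recombination rate off the derivative (cM/Mb). The sketch below uses a simple polynomial fit on toy data; MareyMap provides more careful, configurable interpolation, so treat this only as the shape of the computation.

    ```python
    # Marey map sketch: recombination rate as d(genetic)/d(physical position).
    import numpy as np

    phys = np.linspace(0, 50, 40)                 # marker physical positions, Mb
    gen = 1.2 * phys + 5 * np.sin(phys / 8.0)     # toy genetic map, cM (monotone)

    coef = np.polyfit(phys, gen, deg=5)           # smoothed Marey curve
    rate = np.polyval(np.polyder(coef), phys)     # local recombination rate, cM/Mb
    print(rate.min(), rate.max())
    ```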

  2. A simple robust and accurate a posteriori sub-cell finite volume limiter for the discontinuous Galerkin method on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Dumbser, Michael; Loubère, Raphaël

    2016-08-01

    In this paper we propose a simple, robust and accurate nonlinear a posteriori stabilization of the Discontinuous Galerkin (DG) finite element method for the solution of nonlinear hyperbolic PDE systems on unstructured triangular and tetrahedral meshes in two and three space dimensions. This novel a posteriori limiter, which has been recently proposed for the simple Cartesian grid case in [62], is able to resolve discontinuities at a sub-grid scale and is substantially extended here to general unstructured simplex meshes in 2D and 3D. It can be summarized as follows: At the beginning of each time step, an approximation of the local minimum and maximum of the discrete solution is computed for each cell, taking into account also the vertex neighbors of an element. Then, an unlimited discontinuous Galerkin scheme of approximation degree N is run for one time step to produce a so-called candidate solution. Subsequently, an a posteriori detection step checks the unlimited candidate solution at time t^(n+1) for positivity, absence of floating point errors and whether the discrete solution has remained within or at least very close to the bounds given by the local minimum and maximum computed in the first step. Elements that do not satisfy all the previously mentioned detection criteria are flagged as troubled cells. For these troubled cells, the candidate solution is discarded as inappropriate and consequently needs to be recomputed. Within these troubled cells the old discrete solution at the previous time t^n is scattered onto small sub-cells (N_s = 2N + 1 sub-cells per element edge), in order to obtain a set of sub-cell averages at time t^n. Then, a more robust second order TVD finite volume scheme is applied to update the sub-cell averages within the troubled DG cells from time t^n to time t^(n+1). The new sub-grid data at time t^(n+1) are finally gathered back into a valid cell-centered DG polynomial of degree N by using a classical conservative and higher order
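
    The detection step lends itself to a compact sketch. Below, a 1D periodic toy mesh stands in for the unstructured simplex mesh and cell averages stand in for the DG polynomials; only the discrete-maximum-principle and floating-point checks from the abstract are reproduced.

    ```python
    # Flag "troubled cells" whose unlimited candidate leaves the local bounds.
    import numpy as np

    def troubled_cells(u_old, u_candidate, eps=1e-4):
        """Boolean mask of cells failing the discrete-maximum-principle or
        floating-point checks (1D periodic neighbourhood for simplicity)."""
        lo = np.minimum.reduce([np.roll(u_old, 1), u_old, np.roll(u_old, -1)]) - eps
        hi = np.maximum.reduce([np.roll(u_old, 1), u_old, np.roll(u_old, -1)]) + eps
        bad_dmp = (u_candidate < lo) | (u_candidate > hi)   # new extrema created
        bad_fp = ~np.isfinite(u_candidate)                   # NaN / Inf values
        return bad_dmp | bad_fp

    u_old = np.array([1.0, 1.0, 0.5, 0.0, 0.0])
    u_cand = np.array([1.0, 1.3, 0.4, 0.0, -0.2])   # overshoot and undershoot
    print(troubled_cells(u_old, u_cand))            # [False True False False True]
    ```

    Flagged cells would then be recomputed on the sub-cell grid with the TVD finite volume scheme, as the abstract describes.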

  3. Using gradient-based ray and candidate shadow maps for environmental illumination distribution estimation

    NASA Astrophysics Data System (ADS)

    Eem, Changkyoung; Kim, Iksu; Hong, Hyunki

    2015-07-01

    A method to estimate the environmental illumination distribution of a scene with gradient-based ray and candidate shadow maps is presented. In the shadow segmentation stage, we apply a Canny edge detector to the shadowed image by using a three-dimensional (3-D) augmented reality (AR) marker of a known size and shape. Then the hierarchical tree of the connected edge components representing the topological relation is constructed, and the connected components are merged, taking their hierarchical structures into consideration. A gradient-based ray that is perpendicular to the gradient of the edge pixel in the shadow image can be used to extract the shadow regions. In the light source detection stage, shadow regions with both a 3-D AR marker and the light sources are partitioned into candidate shadow maps. A simple logic operation between each candidate shadow map and the segmented shadow is used to efficiently compute the area ratio between them. The proposed method successively extracts the main light sources according to their relative contributions to the segmented shadows. The proposed method can reduce unwanted effects due to the sampling positions in the shadow region and the threshold values in the shadow edge detection.

  4. Estimation of visual maps with a robot network equipped with vision sensors.

    PubMed

    Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis

    2010-01-01

    In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional position of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the position of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.

  5. Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors

    PubMed Central

    Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis

    2010-01-01

    In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional position of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the position of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment. PMID:22399930

  6. Estimating cross-validatory predictive p-values with integrated importance sampling for disease mapping models.

    PubMed

    Li, Longhai; Feng, Cindy X; Qiu, Shi

    2017-06-30

    An important statistical task in disease mapping problems is to identify divergent regions with unusually high or low risk of disease. Leave-one-out cross-validatory (LOOCV) model assessment is the gold standard for estimating predictive p-values that can flag such divergent regions. However, actual LOOCV is time-consuming because one needs to rerun a Markov chain Monte Carlo analysis for each posterior distribution in which an observation is held out as a test case. This paper introduces a new method, called integrated importance sampling (iIS), for estimating LOOCV predictive p-values with only Markov chain samples drawn from the posterior based on the full data set. The key step in iIS is that we integrate away the latent variables associated with the test observation with respect to their conditional distribution without reference to the actual observation. Following the general theory of importance sampling, the formula used by iIS can be proved to be equivalent to the LOOCV predictive p-value. We compare iIS with three other existing methods in the literature on two disease mapping datasets. Our empirical results show that the predictive p-values estimated with iIS are almost identical to the predictive p-values estimated with actual LOOCV and outperform those given by the other three methods, namely posterior predictive checking, ordinary importance sampling, and the ghosting method of Marshall and Spiegelhalter (2003). Copyright © 2017 John Wiley & Sons, Ltd.

  7. Winter Crop Mapping for Improving Crop Production Estimates in Argentina Using Moderate Resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Humber, M. L.; Copati, E.; Sanchez, A.; Sahajpal, R.; Puricelli, E.; Becker-Reshef, I.

    2017-12-01

    Accurate crop production data are fundamental for reducing uncertainty and volatility in domestic and international agricultural markets. The Agricultural Estimates Department of the Buenos Aires Grain Exchange has worked since 2000 on the estimation of different crop production data. With this information, the Grain Exchange helps different actors in the agricultural chain, such as producers, traders, seed companies, market analysts, and policy makers, in their day-to-day decision making. Since the 2015/16 season, the Grain Exchange has worked on the development of a new earth-observation-based method to identify winter crop planted area at a regional scale, with the aim of improving crop production estimates. The objective of this new methodology is to create a reliable winter crop mask at moderate spatial resolution using Landsat-8 imagery by exploiting bi-temporal differences in the phenological stages of winter crops as compared to other land cover types. In collaboration with the University of Maryland, the map has been validated by photointerpretation of a stratified random sample of independent ground truth data in the four largest producing provinces of Argentina: Buenos Aires, Cordoba, La Pampa, and Santa Fe. In situ measurements were also used to further investigate conditions in the Buenos Aires province. Preliminary results indicate that, while there are some avenues for improvement, the overall classification accuracy of the cropland and non-cropland classes is sufficient to improve downstream production estimates. Continuing research will focus on improving the methodology for winter crop mapping on a yearly basis, as well as improving the sampling methodology to optimize collection of validation data in the future.

  8. Estimating 3D topographic map of optic nerve head from a single fundus image

    NASA Astrophysics Data System (ADS)

    Wang, Peipei; Sun, Jiuai

    2018-04-01

    The optic nerve head, also called the optic disc, is the distal portion of the optic nerve, located on and clinically visible at the retinal surface. It is a three-dimensional, elliptically shaped structure with a central depression called the optic cup. The shape of the ONH and the size of the depression can vary with different retinopathies or angiopathies; therefore, estimating the topography of the optic nerve head is important for assisting the diagnosis of retina-related complications. This work describes a computer-vision-based method, shape from shading (SFS), to recover and visualize the 3D topographic map of the optic nerve head from a single normal fundus image. The work is expected to be helpful for assessing complications associated with deformation of the optic nerve head, such as glaucoma and diabetes. The illumination is modelled as uniform over the area around the optic nerve head, and its direction is estimated from the available image. The Tsai discrete method has been employed to recover the 3D topographic map of the optic nerve head. Initial experimental results demonstrate that our approach works on most fundus images and provides a cheap but good alternative for rendering and visualizing the topographic information of the optic nerve head for potential clinical use.

  9. Use of plume mapping data to estimate chlorinated solvent mass loss

    USGS Publications Warehouse

    Barbaro, J.R.; Neupane, P.P.

    2006-01-01

    Results from a plume mapping study from November 2000 through February 2001 in the sand-and-gravel surficial aquifer at Dover Air Force Base, Delaware, were used to assess the occurrence and extent of chlorinated solvent mass loss by calculating mass fluxes across two transverse cross sections and by observing changes in concentration ratios and mole fractions along a longitudinal cross section through the core of the plume. The plume mapping investigation was conducted to determine the spatial distribution of chlorinated solvents migrating from former waste disposal sites. Vertical contaminant concentration profiles were obtained with a direct-push drill rig and multilevel piezometers. These samples were supplemented with additional ground water samples collected with a minipiezometer from the bed of a perennial stream downgradient of the source areas. Results from the field program show that the plume, consisting mainly of tetrachloroethylene (PCE), trichloroethene (TCE), and cis-1,2-dichloroethene (cis-1,2-DCE), was approximately 670 m in length and 120 m in width, extended across much of the 9- to 18-m thickness of the surficial aquifer, and discharged to the stream in some areas. The analyses of the plume mapping data show that losses of the parent compounds, PCE and TCE, were negligible downgradient of the source. In contrast, losses of cis-1,2-DCE, a daughter compound, were observed in this plume. These losses very likely resulted from biodegradation, but the specific reaction mechanism could not be identified. This study demonstrates that plume mapping data can be used to estimate the occurrence and extent of chlorinated solvent mass loss from biodegradation and assess the effectiveness of natural attenuation as a remedial measure.
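
    The transect mass-flux bookkeeping that underlies such comparisons is simple: integrate Darcy flux times concentration over the cells of a cross section, then compare totals between an upgradient and a downgradient transect. All values below are invented for illustration.

    ```python
    # Mass flux across one transect, discretized into cells.
    import numpy as np

    q = np.array([0.05, 0.06, 0.04, 0.05])   # Darcy flux, m/day, per cell
    c = np.array([120.0, 300.0, 80.0, 10.0]) # concentration, ug/L (= mg/m^3)
    area = np.array([4.0, 4.0, 4.0, 4.0])    # cell area, m^2

    mass_flux = np.sum(q * c * area)          # mg/day crossing the transect
    print(f"{mass_flux:.1f} mg/day")          # compare across transects for loss
    ```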

  10. Video attention deviation estimation using inter-frame visual saliency map analysis

    NASA Astrophysics Data System (ADS)

    Feng, Yunlong; Cheung, Gene; Le Callet, Patrick; Ji, Yusheng

    2012-01-01

    A viewer's visual attention during video playback is the matching of his eye gaze movement to the changing video content over time. If the gaze movement matches the video content (e.g., follows a rolling soccer ball), then the viewer keeps his visual attention. If the gaze location moves from one video object to another, then the viewer shifts his visual attention. A video that causes a viewer to shift his attention often is a "busy" video. Determining which video content is busy is an important practical problem: a busy video makes it difficult for an encoder to deploy region-of-interest (ROI)-based bit allocation, and hard for a content provider to insert additional overlays such as advertisements, which make the video even busier. One way to determine the busyness of video content is to conduct eye gaze experiments with a sizable group of test subjects, but this is time-consuming and not cost-effective. In this paper, we propose an alternative method to determine the busyness of video, formally called video attention deviation (VAD): analyzing the spatial visual saliency maps of the video frames across time. We first derive the transition probabilities of a Markov model for eye gaze using saliency maps of a number of consecutive frames. We then compute the steady-state probability of the saccade state in the model, which is our estimate of VAD. We demonstrate that the steady-state saccade probability computed using saliency map analysis matches that computed using actual gaze traces for a range of videos with different degrees of busyness. Further, our analysis can also be used to segment video into shorter clips of different degrees of busyness by computing the Kullback-Leibler divergence using consecutive motion-compensated saliency maps.
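
    The last step described above, turning a fixation/saccade transition matrix into a single VAD number, is a standard stationary-distribution computation. The transition probabilities here are assumed placeholders, not values derived from saliency maps.

    ```python
    # Steady-state probability of the saccade state in a 2-state Markov model.
    import numpy as np

    P = np.array([[0.9, 0.1],     # fixation -> {fixation, saccade}
                  [0.6, 0.4]])    # saccade  -> {fixation, saccade}

    evals, evecs = np.linalg.eig(P.T)                 # stationary dist: pi P = pi
    pi = np.real(evecs[:, np.argmax(np.real(evals))]) # eigenvector for eigenvalue 1
    pi /= pi.sum()                                    # normalize to a distribution
    print("VAD estimate (steady-state saccade probability):", pi[1])
    ```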

  11. Leptospirosis in American Samoa – Estimating and Mapping Risk Using Environmental Data

    PubMed Central

    Lau, Colleen L.; Clements, Archie C. A.; Skelly, Chris; Dobson, Annette J.; Smythe, Lee D.; Weinstein, Philip

    2012-01-01

    Background The recent emergence of leptospirosis has been linked to many environmental drivers of disease transmission. Accurate epidemiological data are lacking because of under-diagnosis, poor laboratory capacity, and inadequate surveillance. Predictive risk maps have been produced for many diseases to identify high-risk areas for infection and guide allocation of public health resources, and are particularly useful where disease surveillance is poor. To date, no predictive risk maps have been produced for leptospirosis. The objectives of this study were to estimate leptospirosis seroprevalence at geographic locations based on environmental factors, produce a predictive disease risk map for American Samoa, and assess the accuracy of the maps in predicting infection risk. Methodology and Principal Findings Data on seroprevalence and risk factors were obtained from a recent study of leptospirosis in American Samoa. Data on environmental variables were obtained from local sources, and included rainfall, altitude, vegetation, soil type, and location of backyard piggeries. Multivariable logistic regression was performed to investigate associations between seropositivity and risk factors. Using the multivariable models, seroprevalence at geographic locations was predicted based on environmental variables. Goodness of fit of models was measured using area under the curve of the receiver operating characteristic, and the percentage of cases correctly classified as seropositive. Environmental predictors of seroprevalence included living below median altitude of a village, in agricultural areas, on clay soil, and higher density of piggeries above the house. Models had acceptable goodness of fit, and correctly classified ∼84% of cases. Conclusions and Significance Environmental variables could be used to identify high-risk areas for leptospirosis. Environmental monitoring could potentially be a valuable strategy for leptospirosis control, and allow us to move from disease
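
    The mapping step described in the methodology reduces to fitting a logistic model of seropositivity on environmental covariates and predicting over a covariate grid. The covariates follow the abstract's list, but all values below are simulated placeholders.

    ```python
    # Logistic-regression risk mapping sketch with simulated covariates.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    X = rng.uniform(size=(800, 3))        # altitude, piggery density, clay soil
    logit = -1.0 - 2.0 * X[:, 0] + 2.5 * X[:, 1] + 1.0 * X[:, 2]
    y = rng.uniform(size=800) < 1.0 / (1.0 + np.exp(-logit))   # seropositivity

    model = LogisticRegression().fit(X, y)
    print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))

    grid = rng.uniform(size=(10000, 3))   # covariates at map locations
    risk_map = model.predict_proba(grid)[:, 1]   # predicted seroprevalence surface
    ```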

  12. Winter wheat mapping combining variations before and after estimated heading dates

    NASA Astrophysics Data System (ADS)

    Qiu, Bingwen; Luo, Yuhan; Tang, Zhenghong; Chen, Chongcheng; Lu, Difei; Huang, Hongyu; Chen, Yunzhi; Chen, Nan; Xu, Weiming

    2017-01-01

    Accurate and updated information on winter wheat distribution is vital for food security. The intra-class variability of the temporal profiles of vegetation indices presents substantial challenges to current time series-based approaches. This study developed a new method to identify winter wheat over large regions through a transformation and metric-based approach. First, the trend surfaces were established to identify key phenological parameters of winter wheat based on altitude and latitude with references to crop calendar data from the agro-meteorological stations. Second, two phenology-based indicators were developed based on the EVI2 differences between estimated heading and seedling/harvesting dates and the change amplitudes. These two phenology-based indicators revealed variations during the estimated early and late growth stages. Finally, winter wheat data were extracted based on these two metrics. The winter wheat mapping method was applied to China based on the 250 m 8-day composite Moderate Resolution Imaging Spectroradiometer (MODIS) 2-band Enhanced Vegetation Index (EVI2) time series datasets. Accuracy was validated with field survey data, agricultural census data, and Landsat-interpreted results in test regions. When evaluated with 653 field survey sites and Landsat image interpreted data, the overall accuracy of MODIS-derived images in 2012-2013 was 92.19% and 88.86%, respectively. The MODIS-derived winter wheat areas accounted for over 82% of the variability at the municipal level when compared with agricultural census data. The winter wheat mapping method developed in this study demonstrates great adaptability to intra-class variability of the vegetation temporal profiles and has great potential for further applications to broader regions and other types of agricultural crop mapping.

  13. A-Posteriori Error Estimates for Mixed Finite Element and Finite Volume Methods for Problems Coupled Through a Boundary with Non-Matching Grids

    DTIC Science & Technology

    2013-08-01

    both MFE and GFV, are often similar in size. As a gross measure of the effect of geometric projection and of the use of quadrature, we also report the ... interest MFE ∑(e,ψ) or GFV ∑(e,ψ). Tables 1 and 2 show this using coarse and fine forward solutions. Table 1: the forward problem with solution (4.1) is run ... adjoint data components ψu and ψp are constant everywhere and ψξ = 0. [Flattened table columns (adjoint grid pairs such as 20x20 : 32x32 with MFE and GFV estimator sums and ratios) omitted.]

  14. A system to geometrically rectify and map airborne scanner imagery and to estimate ground area. [by computer

    NASA Technical Reports Server (NTRS)

    Spencer, M. M.; Wolf, J. M.; Schall, M. A.

    1974-01-01

    A system of computer programs was developed that performs geometric rectification and line-by-line mapping of airborne multispectral scanner data to ground coordinates and estimates ground area. The system requires aircraft attitude and positional information furnished by ancillary aircraft equipment, as well as ground control points. The geometric correction and mapping procedure locates the scan lines, or the pixels on each line, in terms of map grid coordinates. The area estimation procedure gives ground area for each pixel or for a predesignated parcel specified in map grid coordinates. Exercising the system with simulated data produced displays of both the uncorrected video and the corrected imagery, and yielded area estimates accurate to better than 99.7%.

  15. A Modularized Efficient Framework for Non-Markov Time Series Estimation

    NASA Astrophysics Data System (ADS)

    Schamberg, Gabriel; Ba, Demba; Coleman, Todd P.

    2018-06-01

    We present a compartmentalized approach to finding the maximum a-posteriori (MAP) estimate of a latent time series that obeys a dynamic stochastic model and is observed through noisy measurements. We specifically consider modern signal processing problems with non-Markov signal dynamics (e.g. group sparsity) and/or non-Gaussian measurement models (e.g. point process observation models used in neuroscience). Through the use of auxiliary variables in the MAP estimation problem, we show that a consensus formulation of the alternating direction method of multipliers (ADMM) enables iteratively computing separate estimates based on the likelihood and prior and subsequently "averaging" them in an appropriate sense using a Kalman smoother. As such, the approach applies to a broad class of problem settings and requires only modular adjustments when interchanging various aspects of the statistical model. Under broad log-concavity assumptions, we show that the separate estimation problems are convex optimization problems and that the iterative algorithm converges to the MAP estimate. We provide example applications involving (i) group-sparsity priors, within the context of electrophysiologic spectrotemporal estimation, and (ii) non-Gaussian measurement models, within the context of dynamic analyses of learning with neural spiking and behavioral observations.
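
    A minimal sketch of the consensus ADMM splitting described above, assuming generic proximal operators for the likelihood and prior terms. In the paper the "averaging" step is carried out by a Kalman smoother; the plain mean below merely stands in for it.

      import numpy as np

      def consensus_admm_map(prox_lik, prox_prior, x0, rho=1.0, n_iter=100):
          # prox_lik / prox_prior: problem-specific proximal maps (assumed given)
          z = x0.copy()
          u1 = np.zeros_like(x0)
          u2 = np.zeros_like(x0)
          for _ in range(n_iter):
              x1 = prox_lik(z - u1, rho)     # separate estimate from the likelihood
              x2 = prox_prior(z - u2, rho)   # separate estimate from the prior
              z = 0.5 * ((x1 + u1) + (x2 + u2))  # consensus ("averaging") step
              u1 += x1 - z                   # dual updates
              u2 += x2 - z
          return z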

  16. A simulation of Earthquake Loss Estimation in Southeastern Korea using HAZUS and the local site classification Map

    NASA Astrophysics Data System (ADS)

    Kang, S.; Kim, K.

    2013-12-01

    Regionally varying seismic hazards can be estimated using an earthquake loss estimation system (e.g. HAZUS-MH). The estimations for actual earthquakes help federal and local authorities develop rapid, effective recovery measures. Estimates for scenario earthquakes help in designing a comprehensive earthquake hazard mitigation plan. Local site characteristics influence the ground motion. Although direct measurements are desirable to construct a site-amplification map, such data are expensive and time consuming to collect. Thus we derived a site classification map of the southern Korean Peninsula using geologic and geomorphologic data, which are readily available for the entire southern Korean Peninsula. Class B sites (mainly rock) are predominant in the area, although localized areas of softer soils are found along major rivers and seashores. The site classification map was compared with independent site classification studies to confirm that it effectively represents the local behavior of site amplification during an earthquake. We then estimated the losses due to a magnitude 6.7 scenario earthquake in Gyeongju, southeastern Korea, with and without the site classification map. Significant differences in loss estimates were observed. The loss estimated without the site classification map decreased smoothly with increasing epicentral distance, while the loss estimated with the site classification map varied from region to region, reflecting both epicentral distance and local site effects. The major cause of the large loss expected in Gyeongju is the short epicentral distance. Pohang Nam-Gu is located farther from the earthquake source region; nonetheless, its loss estimates are as large as those in Gyeongju, which is attributed to the site effect of the soft soil found widely in the area.

  17. Estimation of 3-D conduction velocity vector fields from cardiac mapping data.

    PubMed

    Barnette, A R; Bayly, P V; Zhang, S; Walcott, G P; Ideker, R E; Smith, W M

    2000-08-01

    A method to estimate three-dimensional (3-D) conduction velocity vector fields in cardiac tissue is presented. The speed and direction of propagation are found from polynomial "surfaces" fitted to space-time (x, y, z, t) coordinates of cardiac activity. The technique is applied to sinus rhythm and paced rhythm mapped with plunge needles at 396-466 sites in the canine myocardium. The method was validated on simulated 3-D plane and spherical waves. For simulated data, conduction velocities were estimated with an accuracy of 1%-2%. In experimental data, estimates of conduction speeds during paced rhythm were slower than those found during normal sinus rhythm. Vector directions were also found to differ between different types of beats. The technique was able to distinguish between premature ventricular contractions and sinus beats and between sinus and paced beats. The proposed approach to computing velocity vector fields provides an automated, physiological, and quantitative description of local electrical activity in 3-D tissue. This method may provide insight into abnormal conduction associated with fatal ventricular arrhythmias.
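
    A sketch of the core computation, under the standard assumption that activation time t(x, y, z) is fit by a low-order polynomial and that velocity is the gradient of t divided by its squared magnitude, so speed is the reciprocal of the slowness |grad t|. The quadratic basis is an illustrative choice, not necessarily the paper's exact surface.

      import numpy as np

      def fit_activation_surface(xyz, t):
          # least-squares fit of a quadratic polynomial t(x, y, z) to
          # activation times; xyz is (n, 3), t is (n,)
          x, y, z = xyz.T
          A = np.column_stack([np.ones_like(x), x, y, z,
                               x*y, x*z, y*z, x**2, y**2, z**2])
          coef, *_ = np.linalg.lstsq(A, t, rcond=None)
          return coef

      def velocity_at(coef, p):
          # conduction velocity v = grad(t) / |grad(t)|^2 at point p
          x, y, z = p
          g = np.array([
              coef[1] + coef[4]*y + coef[5]*z + 2*coef[7]*x,
              coef[2] + coef[4]*x + coef[6]*z + 2*coef[8]*y,
              coef[3] + coef[5]*x + coef[6]*y + 2*coef[9]*z,
          ])
          return g / np.dot(g, g)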

  18. Efficient dense blur map estimation for automatic 2D-to-3D conversion

    NASA Astrophysics Data System (ADS)

    Vosters, L. P. J.; de Haan, G.

    2012-03-01

    Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can be reliably estimated only on edges. Therefore, Bea et al. [1] first proposed an optimization-based approach to propagate focus to non-edge image portions, for single image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements for solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose a fast, efficient, low-latency, line-scanning-based focus propagation, which mitigates the need for complex multigrid or (multilevel) preconditioning techniques. In addition we propose facial blur compensation to correct for false shading edges that cause incorrect blur estimates in people's faces. In general, shading leads to incorrect focus estimates, which may lead to unnatural 3D and visual discomfort. Since visual attention tends mostly toward faces, our solution addresses the most distracting errors. A subjective assessment by paired comparison on a set of challenging low depth-of-field images shows that the proposed approach achieves 3D image quality equal to that of optimization-based approaches, and that facial blur compensation results in a significant improvement.

  19. Economic analysis of the first 20 years of universal hepatitis B vaccination program in Italy: an a posteriori evaluation and forecast of future benefits.

    PubMed

    Boccalini, Sara; Taddei, Cristina; Ceccherini, Vega; Bechini, Angela; Levi, Miriam; Bartolozzi, Dario; Bonanni, Paolo

    2013-05-01

    Italy was one of the first countries in the world to introduce a routine vaccination program against HBV for newborns and 12-y-old children. From a clinical point of view, such a strategy was clearly successful. The objective of our study was to verify whether, at 20 y from its implementation, hepatitis B universal vaccination had positive effects also from an economic point of view. An a posteriori analysis evaluated the impact that the hepatitis B immunization program had up to the present day. The implementation of vaccination brought an extensive reduction of the burden of hepatitis B-related diseases in the Italian population. As a consequence, the past and future savings due to clinical costs avoided are particularly high. We obtained a return on investment (ROI) nearly equal to 1 from the National Health Service (NHS) perspective, and a benefit-to-cost ratio (BCR) slightly less than 1 from the societal perspective, considering only the first 20 y from the start of the program. Over a longer time horizon, the ROI and BCR values were positive (2.78 and 2.46, respectively). The break-even point was already reached a few years ago for the NHS and for society, and since then more and more money has progressively been saved. The implementation of universal hepatitis B vaccination was very favorable during the first 20 y of adoption, and further benefits will become increasingly evident in the future. The hepatitis B vaccination program in Italy is a clear example of the great impact that universal immunization can provide in the medium-to-long term when health care authorities are wise enough to invest in prevention.

  20. Mapping Antarctic Crustal Thickness using Gravity Inversion and Comparison with Seismic Estimates

    NASA Astrophysics Data System (ADS)

    Kusznir, Nick; Ferraccioli, Fausto; Jordan, Tom

    2017-04-01

    Using gravity anomaly inversion, we produce comprehensive regional maps of crustal thickness and oceanic lithosphere distribution for Antarctica and the Southern Ocean. Crustal thicknesses derived from gravity inversion are consistent with seismic estimates. We determine Moho depth, crustal basement thickness, continental lithosphere thinning (1-1/β) and ocean-continent transition location using a 3D spectral domain gravity inversion method, which incorporates a lithosphere thermal gravity anomaly correction (Chappell & Kusznir 2008). The gravity anomaly contribution from ice thickness is included in the gravity inversion, as is the contribution from sediments which assumes a compaction controlled sediment density increase with depth. Data used in the gravity inversion are elevation and bathymetry, free-air gravity anomaly, the Bedmap 2 ice thickness and bedrock topography compilation south of 60 degrees south and relatively sparse constraints on sediment thickness. Ocean isochrons are used to define the cooling age of oceanic lithosphere. Crustal thicknesses from gravity inversion are compared with independent seismic estimates, which are still relatively sparse over Antarctica. Our gravity inversion study predicts thick crust (> 45 km) under interior East Antarctica, which is penetrated by narrow continental rifts featuring relatively thinner crust. The largest crustal thicknesses predicted from gravity inversion lie in the region of the Gamburtsev Subglacial Mountains, and are consistent with seismic estimates. The East Antarctic Rift System (EARS), a major Permian to Cretaceous age rift system, is imaged by our inversion and appears to extend from the continental margin at the Lambert Rift to the South Pole region, a distance of 2500 km. Offshore an extensive region of either thick oceanic crust or highly thinned continental crust lies adjacent to Oates Land and north Victoria Land, and also off West Antarctica around the Amundsen Ridges. Thin crust is

  1. Estimation of intrinsic and extrinsic capacitances of graphene self-switching diode using conformal mapping technique

    NASA Astrophysics Data System (ADS)

    Singh, Arun K.; Auton, Gregory; Hill, Ernie; Song, Aimin

    2018-07-01

    Due to a very high carrier concentration and low band gap, graphene-based self-switching diodes do not demonstrate a very high rectification ratio. Despite that, they take advantage of graphene’s high carrier mobility and have been shown to work at very high microwave frequencies. However, the AC behavior of these devices is hidden in their very linear current–voltage characteristics. Here, we extract and quantitatively study the device capacitance that determines the device nonlinearity by implementing a conformal mapping technique. The value of the nonlinear component, or curvature coefficient, estimated from DC results based on the Shichman–Hodges model predicts the rectified output voltage, in good agreement with the experimental RF results.
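
    As a hedged illustration of the quantity involved, the sketch below estimates a curvature coefficient gamma = I''/I' from measured DC current-voltage data by finite differences. This is the standard rectifier figure of merit, not necessarily the paper's exact Shichman-Hodges-based extraction, and the square-law relation in the comment is the textbook zero-bias approximation.

      import numpy as np

      def curvature_coefficient(v, i):
          # gamma = I''/I'; for small signals the rectified output is roughly
          # V_out ~ 0.25 * gamma * V_in**2 (zero-bias square-law approximation)
          di = np.gradient(i, v)     # first derivative dI/dV
          d2i = np.gradient(di, v)   # second derivative d2I/dV2
          return d2i / di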

  2. Multi-crop area estimation and mapping on a microprocessor/mainframe network

    NASA Technical Reports Server (NTRS)

    Sheffner, E.

    1985-01-01

    The data processing system is outlined for a 1985 test aimed at determining the performance characteristics of area estimation and mapping procedures connected with the California Cooperative Remote Sensing Project. The project is a joint effort of the USDA Statistical Reporting Service-Remote Sensing Branch, the California Department of Water Resources, NASA-Ames Research Center, and the University of California Remote Sensing Research Program. One objective of the program was to study performance when data processing is done on a microprocessor/mainframe network under operational conditions. The 1985 test covered the hardware, software, and network specifications and the integration of these three components. Plans for the year, including completion of the PEDITOR software, testing of the software on MIDAS, and data processing on the MIDAS-VAX-CRAY network, are discussed briefly.

  3. MAPS

    Atmospheric Science Data Center

    2014-07-03

    Measurement of Air Pollution from Satellites (MAPS) data were collected during Space Shuttle flights in 1981, 1984 and 1994. The main pollutant measured was carbon monoxide.

  4. Multiresolution MAP despeckling of SAR images based on locally adaptive generalized Gaussian pdf modeling.

    PubMed

    Argenti, Fabrizio; Bianchi, Tiziano; Alparone, Luciano

    2006-11-01

    In this paper, a new despeckling method based on undecimated wavelet decomposition and maximum a posteriori (MAP) estimation is proposed. Such a method relies on the assumption that the probability density function (pdf) of each wavelet coefficient is generalized Gaussian (GG). The major novelty of the proposed approach is that the parameters of the GG pdf are taken to be space-varying within each wavelet frame. Thus, they may be adjusted to spatial image context, not only to scale and orientation. Since the MAP equation to be solved is a function of the parameters of the assumed pdf model, the variance and shape factor of the GG function are derived from the theoretical moments, which depend on the moments and joint moments of the observed noisy signal and on the statistics of speckle. The solution of the MAP equation yields the MAP estimate of the wavelet coefficients of the noise-free image. The restored SAR image is synthesized from such coefficients. Experimental results, carried out on both synthetic speckled images and true SAR images, demonstrate that MAP filtering can be successfully applied to SAR images represented in the shift-invariant wavelet domain, without resorting to a logarithmic transformation.
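
    The sketch below conveys the flavor of a per-coefficient MAP estimate under a generalized Gaussian prior. It is a simplification of the paper's setting: additive Gaussian noise stands in for the multiplicative speckle model, the GG parameters are fixed rather than space-varying, and the MAP equation is solved by grid search rather than analytically.

      import numpy as np

      def gg_map_shrink(y, sigma_n, alpha, beta):
          # MAP estimate of a noise-free wavelet coefficient x from an observed
          # coefficient y, with additive Gaussian noise of std sigma_n and a
          # zero-mean GG prior p(x) ~ exp(-(|x|/alpha)**beta)
          grid = np.linspace(y - 5*sigma_n, y + 5*sigma_n, 2001)
          log_post = -((y - grid)**2) / (2*sigma_n**2) \
                     - (np.abs(grid)/alpha)**beta
          return grid[np.argmax(log_post)]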

  5. Stability estimate for the aligned magnetic field in a periodic quantum waveguide from Dirichlet-to-Neumann map

    SciTech Connect

    Mejri, Youssef, E-mail: josef-bizert@hotmail.fr; Dép. des Mathématiques, Faculté des Sciences de Bizerte, 7021 Jarzouna; Laboratoire de Modélisation Mathématique et Numérique dans les Sciences de l’Ingénieur, ENIT BP 37, Le Belvedere, 1002 Tunis

    In this article, we study the boundary inverse problem of determining the aligned magnetic field appearing in the magnetic Schrödinger equation in a periodic quantum cylindrical waveguide, by knowledge of the Dirichlet-to-Neumann map. We prove a Hölder stability estimate with respect to the Dirichlet-to-Neumann map, by means of the geometrical optics solutions of the magnetic Schrödinger equation.

  6. Interventional endocardial motion estimation from electroanatomical mapping data: application to scar characterization.

    PubMed

    Porras, Antonio R; Piella, Gemma; Berruezo, Antonio; Hoogendoorn, Corne; Andreu, David; Fernandez-Armenta, Juan; Sitges, Marta; Frangi, Alejandro F

    2013-05-01

    Scar presence and its characteristics play a fundamental role in several cardiac pathologies. Accurately defining the extent and location of the scar is essential for a successful ventricular tachycardia ablation procedure. Nowadays, a set of widely accepted voltage thresholds applied to recorded local electrograms is used intraoperatively to locate the scar. Information about cardiac mechanics could also be considered to characterize tissues with different viability properties. We propose a novel method to estimate endocardial motion from data obtained with an electroanatomical mapping system together with the endocardial geometry segmented from preoperative 3-D magnetic resonance images, using a statistical atlas constructed with bilinear models. The method was validated using synthetic data generated from ultrasound images of nine volunteers and was then applied to seven ventricular tachycardia patients. Maximum bipolar voltages, commonly used to intraoperatively locate scar tissue, were compared to endocardial wall displacement and strain for all the patients. The results show that the proposed method allows endocardial motion and strain estimation and that areas with low-voltage electrograms also present low strain values.

  7. Mapping Oil and Gas Development Potential in the US Intermountain West and Estimating Impacts to Species

    PubMed Central

    Copeland, Holly E.; Doherty, Kevin E.; Naugle, David E.; Pocewicz, Amy; Kiesecker, Joseph M.

    2009-01-01

    Background Many studies have quantified the indirect effect of hydrocarbon-based economies on climate change and biodiversity, concluding that a significant proportion of species will be threatened with extinction. However, few studies have measured the direct effect of new energy production infrastructure on species persistence. Methodology/Principal Findings We propose a systematic way to forecast patterns of future energy development and calculate impacts to species using spatially-explicit predictive modeling techniques to estimate oil and gas potential and create development build-out scenarios by seeding the landscape with oil and gas wells based on underlying potential. We illustrate our approach for the greater sage-grouse (Centrocercus urophasianus) in the western US and translate the build-out scenarios into estimated impacts on sage-grouse. We project that future oil and gas development will cause a 7–19 percent decline from 2007 sage-grouse lek population counts and impact 3.7 million ha of sagebrush shrublands and 1.1 million ha of grasslands in the study area. Conclusions/Significance Maps of where oil and gas development is anticipated in the US Intermountain West can be used by decision-makers intent on minimizing impacts to sage-grouse. This analysis also provides a general framework for using predictive models and build-out scenarios to anticipate impacts to species. These predictive models and build-out scenarios allow tradeoffs to be considered between species conservation and energy development prior to implementation. PMID:19826472
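
    A toy sketch of the seeding idea: wells are placed at random cells with probability proportional to the modeled oil and gas potential. The paper's build-out rules are more detailed; this is an assumed simplification for illustration only.

      import numpy as np

      def seed_wells(potential, n_wells, rng=None):
          # potential: 2-D array of modeled (nonnegative) oil/gas potential;
          # returns row/column indices of the seeded well locations
          rng = np.random.default_rng(rng)
          p = potential.ravel() / potential.sum()
          cells = rng.choice(p.size, size=n_wells, replace=False, p=p)
          return np.unravel_index(cells, potential.shape)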

  8. THREaD Mapper Studio: a novel, visual web server for the estimation of genetic linkage maps

    PubMed Central

    Cheema, Jitender; Ellis, T. H. Noel; Dicks, Jo

    2010-01-01

    The estimation of genetic linkage maps is a key component in plant and animal research, providing both an indication of the genetic structure of an organism and a mechanism for identifying candidate genes associated with traits of interest. Because of this importance, several computational solutions to genetic map estimation exist, mostly implemented as stand-alone software packages. However, the estimation process is often largely hidden from the user. Consequently, problems such as a program crashing may occur that leave a user baffled. THREaD Mapper Studio (http://cbr.jic.ac.uk/threadmapper) is a new web site that implements a novel, visual and interactive method for the estimation of genetic linkage maps from DNA markers. The rationale behind the web site is to make the estimation process as transparent and robust as possible, while also allowing users to use their expert knowledge during analysis. Indeed, the 3D visual nature of the tool allows users to spot features in a data set, such as outlying markers and potential structural rearrangements that could cause problems with the estimation procedure and to account for them in their analysis. Furthermore, THREaD Mapper Studio facilitates the visual comparison of genetic map solutions from third party software, aiding users in developing robust solutions for their data sets. PMID:20494977

  9. SOMKE: kernel density estimation over data streams by sequences of self-organizing maps.

    PubMed

    Cao, Yuan; He, Haibo; Man, Hong

    2012-08-01

    In this paper, we propose SOMKE, a novel method for kernel density estimation (KDE) over data streams based on sequences of self-organizing maps (SOMs). In many stream data mining applications, the traditional KDE methods are infeasible because of the high computational cost, processing time, and memory requirement. To reduce the time and space complexity, we propose a SOM structure in this paper to obtain well-defined data clusters to estimate the underlying probability distributions of incoming data streams. The main idea of this paper is to build a series of SOMs over the data streams via two operations, that is, creating and merging the SOM sequences. The creation phase produces the SOM sequence entries for windows of the data, which obtains clustering information of the incoming data streams. The size of the SOM sequences can be further reduced by combining the consecutive entries in the sequence based on the measure of Kullback-Leibler divergence. Finally, the probability density functions over arbitrary time periods along the data streams can be estimated using such SOM sequences. We compare SOMKE with two other KDE methods for data streams, the M-kernel approach and the cluster kernel approach, in terms of accuracy and processing time for various stationary data streams. Furthermore, we also investigate the use of SOMKE over nonstationary (evolving) data streams, including a synthetic nonstationary data stream, a real-world financial data stream and a group of network traffic data streams. The simulation results illustrate the effectiveness and efficiency of the proposed approach.
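
    A minimal sketch of the final estimation step, assuming the SOM sequence has already been built: each codebook vector acts as a kernel center weighted by the number of stream points mapped to it (1-D Gaussian kernels; the merging of sequence entries by Kullback-Leibler divergence is not shown).

      import numpy as np

      def som_kde(x, prototypes, counts, bandwidth):
          # evaluate the density estimate at x from SOM prototypes rather
          # than raw stream samples; counts are the per-prototype hit counts
          w = counts / counts.sum()
          u = (x - prototypes) / bandwidth
          kernels = np.exp(-0.5 * u**2) / (bandwidth * np.sqrt(2 * np.pi))
          return np.sum(w * kernels)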

  10. Kinematic state estimation and motion planning for stochastic nonholonomic systems using the exponential map.

    PubMed

    Park, Wooram; Liu, Yan; Zhou, Yu; Moses, Matthew; Chirikjian, Gregory S

    2008-04-11

    A nonholonomic system subjected to external noise from the environment, or internal noise in its own actuators, will evolve in a stochastic manner described by an ensemble of trajectories. This ensemble of trajectories is equivalent to the solution of a Fokker-Planck equation that typically evolves on a Lie group. If the most likely state of such a system is to be estimated, and plans for subsequent motions from the current state are to be made so as to move the system to a desired state with high probability, then modeling how the probability density of the system evolves is critical. Methods for solving Fokker-Planck equations that evolve on Lie groups then become important. Such equations can be solved using the operational properties of group Fourier transforms in which irreducible unitary representation (IUR) matrices play a critical role. Therefore, we develop a simple approach for the numerical approximation of all the IUR matrices for two of the groups of most interest in robotics: the rotation group in three-dimensional space, SO(3), and the Euclidean motion group of the plane, SE(2). This approach uses the exponential mapping from the Lie algebras of these groups, and takes advantage of the sparse nature of the Lie algebra representation matrices. Other techniques for density estimation on groups are also explored. The computed densities are applied in the context of probabilistic path planning for kinematic cart in the plane and flexible needle steering in three-dimensional space. In these examples the injection of artificial noise into the computational models (rather than noise in the actual physical systems) serves as a tool to search the configuration spaces and plan paths. Finally, we illustrate how density estimation problems arise in the characterization of physical noise in orientational sensors such as gyroscopes.
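
    For one of the two groups named above, SE(2), the exponential map from the Lie algebra has a well-known closed form, sketched below for a single algebra element (v1, v2, omega).

      import numpy as np

      def exp_se2(v1, v2, omega, eps=1e-9):
          # closed-form exponential map se(2) -> SE(2); (v1, v2) is the
          # translational part, omega the rotational part
          if abs(omega) < eps:        # near-zero rotation: pure translation
              t = np.array([v1, v2])
          else:
              V = (1.0 / omega) * np.array([[np.sin(omega), -(1 - np.cos(omega))],
                                            [1 - np.cos(omega), np.sin(omega)]])
              t = V @ np.array([v1, v2])
          c, s = np.cos(omega), np.sin(omega)
          return np.array([[c, -s, t[0]],
                           [s,  c, t[1]],
                           [0,  0, 1.0]])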

  11. Kinematic state estimation and motion planning for stochastic nonholonomic systems using the exponential map

    PubMed Central

    Park, Wooram; Liu, Yan; Zhou, Yu; Moses, Matthew; Chirikjian, Gregory S.

    2010-01-01

    SUMMARY A nonholonomic system subjected to external noise from the environment, or internal noise in its own actuators, will evolve in a stochastic manner described by an ensemble of trajectories. This ensemble of trajectories is equivalent to the solution of a Fokker–Planck equation that typically evolves on a Lie group. If the most likely state of such a system is to be estimated, and plans for subsequent motions from the current state are to be made so as to move the system to a desired state with high probability, then modeling how the probability density of the system evolves is critical. Methods for solving Fokker-Planck equations that evolve on Lie groups then become important. Such equations can be solved using the operational properties of group Fourier transforms in which irreducible unitary representation (IUR) matrices play a critical role. Therefore, we develop a simple approach for the numerical approximation of all the IUR matrices for two of the groups of most interest in robotics: the rotation group in three-dimensional space, SO(3), and the Euclidean motion group of the plane, SE(2). This approach uses the exponential mapping from the Lie algebras of these groups, and takes advantage of the sparse nature of the Lie algebra representation matrices. Other techniques for density estimation on groups are also explored. The computed densities are applied in the context of probabilistic path planning for kinematic cart in the plane and flexible needle steering in three-dimensional space. In these examples the injection of artificial noise into the computational models (rather than noise in the actual physical systems) serves as a tool to search the configuration spaces and plan paths. Finally, we illustrate how density estimation problems arise in the characterization of physical noise in orientational sensors such as gyroscopes. PMID:20454468

  12. National-scale crop type mapping and area estimation using multi-resolution remote sensing and field survey

    NASA Astrophysics Data System (ADS)

    Song, X. P.; Potapov, P.; Adusei, B.; King, L.; Khan, A.; Krylov, A.; Di Bella, C. M.; Pickens, A. H.; Stehman, S. V.; Hansen, M.

    2016-12-01

    Reliable and timely information on agricultural production is essential for ensuring world food security. Freely available medium-resolution satellite data (e.g. Landsat, Sentinel) offer the possibility of improved global agriculture monitoring. Here we develop and test a method for estimating in-season crop acreage using a probability sample of field visits and producing wall-to-wall crop type maps at national scales. The method is first illustrated for soybean cultivated area in the US for 2015. A stratified, two-stage cluster sampling design was used to collect field data to estimate national soybean area. The field-based estimate employed historical soybean extent maps from the U.S. Department of Agriculture (USDA) Cropland Data Layer to delineate and stratify U.S. soybean growing regions. The estimated 2015 U.S. soybean cultivated area based on the field sample was 341,000 km2 with a standard error of 23,000 km2. This result is 1.0% lower than USDA's 2015 June survey estimate and 1.9% higher than USDA's 2016 January estimate. Our area estimate was derived in early September, about 2 months ahead of harvest. To map soybean cover, the Landsat image archive for the year 2015 growing season was processed using an active learning approach. Overall accuracy of the soybean map was 84%. The field-based sample estimated area was then used to calibrate the map such that the soybean acreage of the map derived through pixel counting matched the sample-based area estimate. The strength of the sample-based area estimation lies in the stratified design that takes advantage of the spatially explicit cropland layers to construct the strata. The success of the mapping was built upon an automated system which transforms Landsat images into standardized time-series metrics. The developed method produces reliable and timely information on soybean area in a cost-effective way and could be implemented in an operational mode. The approach has also been applied for other crops in
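
    A sketch of one plausible reading of the calibration step: choose the probability threshold at which pixel counting reproduces the sample-based area estimate. The function, the 30 m Landsat pixel area, and the threshold rule are assumptions for illustration.

      import numpy as np

      def calibrate_threshold(prob_map, pixel_area_km2, target_area_km2):
          # pick the threshold whose pixel-counting area matches the
          # sample-based area estimate
          probs = np.sort(prob_map.ravel())[::-1]
          n_target = int(round(target_area_km2 / pixel_area_km2))
          n_target = min(max(n_target, 1), probs.size)
          return probs[n_target - 1]

      # e.g. 30 m Landsat pixels cover 0.0009 km2 each, so matching the
      # 341,000 km2 field-based estimate selects the top ~379 million pixels:
      # thr = calibrate_threshold(p, 0.0009, 341000.0); crop_map = p >= thr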

  13. Associations of Mediterranean Diet and a Posteriori Derived Dietary Patterns with Breast and Lung Cancer Risk: A Case-Control Study.

    PubMed

    Krusinska, Beata; Hawrysz, Iwona; Wadolowska, Lidia; Slowinska, Malgorzata Anna; Biernacki, Maciej; Czerwinska, Anna; Golota, Janusz Jacek

    2018-04-11

    Lung cancer in men and breast cancer in women are the most commonly diagnosed cancers in Poland and worldwide. Results of studies involving dietary patterns (DPs) and breast or lung cancer risk in European countries outside the Mediterranean Sea region are limited and inconclusive. This study aimed to develop a 'Polish-adapted Mediterranean Diet' ('Polish-aMED') score, and then study the associations of the 'Polish-aMED' score and a posteriori-derived dietary patterns with breast or lung cancer risk in adult Poles. This pooled analysis of two case-control studies involved 560 subjects (280 men, 280 women) aged 40-75 years from Northeastern Poland. Diagnoses of breast cancer in 140 women and lung cancer in 140 men were found. The food frequency consumption of 21 selected food groups was collected using a 62-item Food Frequency Questionnaire (FFQ)-6. The 'Polish-adapted Mediterranean Diet' score, which included eight items (vegetables, fruit, whole grain, fish, legumes, nuts and seeds, the ratio of vegetable oils to animal fat, and red and processed meat), was developed (range: 0-8 points). Three DPs were identified in a Principal Component Analysis: 'Prudent', 'Non-healthy', 'Dressings and sweetened-low-fat dairy'. In a multiple logistic regression analysis, two models were created: crude, and adjusted for age, sex, type of cancer, Body Mass Index (BMI), socioeconomic status (SES) index, overall physical activity, smoking status and alcohol abuse. The risk of breast or lung cancer was lower in the average (3-5 points) and high (6-8 points) levels of the 'Polish-aMED' score compared to the low (0-2 points) level by 51% (odds ratio (OR): 0.49; 95% confidence interval (CI): 0.30-0.80; p < 0.01; adjusted) and 63% (OR: 0.37; 95% CI: 0.21-0.64; p < 0.001; adjusted), respectively. In the middle and upper tertiles compared to the bottom tertile of the 'Prudent' DP, the risk of cancer was lower by 38-43% (crude) but was not significant after adjustment for

  14. A method to estimate the effect of deformable image registration uncertainties on daily dose mapping

    PubMed Central

    Murphy, Martin J.; Salguero, Francisco J.; Siebers, Jeffrey V.; Staub, David; Vaman, Constantin

    2012-01-01

    Purpose: To develop a statistical sampling procedure for spatially-correlated uncertainties in deformable image registration and then use it to demonstrate their effect on daily dose mapping. Methods: Sequential daily CT studies are acquired to map anatomical variations prior to fractionated external beam radiotherapy. The CTs are deformably registered to the planning CT to obtain displacement vector fields (DVFs). The DVFs are used to accumulate the dose delivered each day onto the planning CT. Each DVF has spatially-correlated uncertainties associated with it. Principal components analysis (PCA) is applied to measured DVF error maps to produce decorrelated principal component modes of the errors. The modes are sampled independently and reconstructed to produce synthetic registration error maps. The synthetic error maps are convolved with dose mapped via deformable registration to model the resulting uncertainty in the dose mapping. The results are compared to the dose mapping uncertainty that would result from uncorrelated DVF errors that vary randomly from voxel to voxel. Results: The error sampling method is shown to produce synthetic DVF error maps that are statistically indistinguishable from the observed error maps. Spatially-correlated DVF uncertainties modeled by our procedure produce patterns of dose mapping error that are different from that due to randomly distributed uncertainties. Conclusions: Deformable image registration uncertainties have complex spatial distributions. The authors have developed and tested a method to decorrelate the spatial uncertainties and make statistical samples of highly correlated error maps. The sample error maps can be used to investigate the effect of DVF uncertainties on daily dose mapping via deformable image registration. An initial demonstration of this methodology shows that dose mapping uncertainties can be sensitive to spatial patterns in the DVF uncertainties. PMID:22320766
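
    A compact sketch of the sampling procedure as described: PCA decorrelates the measured DVF error maps, the mode coefficients are sampled independently, and synthetic error maps are reconstructed in the original space. The array shapes and the Gaussian coefficient model are assumptions.

      import numpy as np

      def synthetic_error_maps(error_maps, n_samples, rng=None):
          # error_maps: (n_obs, n_voxels) array of flattened DVF error maps
          rng = np.random.default_rng(rng)
          mean = error_maps.mean(axis=0)
          X = error_maps - mean
          # principal modes via SVD; rows of Vt are the decorrelated modes
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          coeff_std = s / np.sqrt(max(X.shape[0] - 1, 1))
          # sample each mode coefficient independently, then reconstruct
          c = rng.standard_normal((n_samples, s.size)) * coeff_std
          return mean + c @ Vt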

  15. A neural-network based estimator to search for primordial non-Gaussianity in Planck CMB maps

    SciTech Connect

    Novaes, C.P.; Bernui, A.; Ferreira, I.S.

    2015-09-01

    We present an upgraded combined estimator, based on Minkowski Functionals and Neural Networks, with excellent performance in detecting primordial non-Gaussianity in simulated maps that also contain a weighted mixture of Galactic contaminations, besides real pixel noise from Planck cosmic microwave background radiation data. We rigorously test the efficiency of our estimator considering several plausible scenarios for residual non-Gaussianities in the foreground-cleaned Planck maps, with the intuition to optimize the training procedure of the Neural Network to discriminate between contaminations with primordial and secondary non-Gaussian signatures. We look for constraints of primordial local non-Gaussianity at large angular scales in the foreground-cleaned Planck maps. For the SMICA map we found f_NL = 33 ± 23, at 1σ confidence level, in excellent agreement with the WMAP-9yr and Planck results. In addition, for the other three Planck maps we obtain similar constraints, with values in the interval f_NL ∈ [33, 41], concomitant with the fact that these maps manifest distinct features in reported analyses, like having different pixel noise intensities.

  16. Effect of different tropospheric mapping functions on the TRF, CRF and position time-series estimated from VLBI

    NASA Astrophysics Data System (ADS)

    Tesmer, Volker; Boehm, Johannes; Heinkelmann, Robert; Schuh, Harald

    2007-06-01

    This paper compares estimated terrestrial reference frames (TRF) and celestial reference frames (CRF) as well as position time-series in terms of systematic differences, scale, annual signals and station position repeatabilities using four different tropospheric mapping functions (MF): The NMF (Niell Mapping Function) and the recently developed GMF (Global Mapping Function) consist of easy-to-handle stand-alone formulae, whereas the IMF (Isobaric Mapping Function) and the VMF1 (Vienna Mapping Function 1) are determined from numerical weather models. All computations were performed at the Deutsches Geodätisches Forschungsinstitut (DGFI) using the OCCAM 6.1 and DOGS-CS software packages for Very Long Baseline Interferometry (VLBI) data from 1984 until 2005. While it turned out that CRF estimates only slightly depend on the MF used, showing small systematic effects up to 0.025 mas, some station heights of the computed TRF change by up to 13 mm. The best agreement was achieved for the VMF1 and GMF results concerning the TRFs, and for the VMF1 and IMF results concerning scale variations and position time-series. The amplitudes of the annual periodic signals in the time-series of estimated heights differ by up to 5 mm. The best precision in terms of station height repeatability is found for the VMF1, which is 5-7% better than for the other MFs.

  17. Where Have All the Interactions Gone? Estimating the Coverage of Two-Hybrid Protein Interaction Maps

    PubMed Central

    Huang, Hailiang; Jedynak, Bruno M; Bader, Joel S

    2007-01-01

    Yeast two-hybrid screens are an important method for mapping pairwise physical interactions between proteins. The fraction of interactions detected in independent screens can be very small, and an outstanding challenge is to determine the reason for the low overlap. Low overlap can arise from either a high false-discovery rate (interaction sets have low overlap because each set is contaminated by a large number of stochastic false-positive interactions) or a high false-negative rate (interaction sets have low overlap because each misses many true interactions). We extend capture–recapture theory to provide the first unified model for false-positive and false-negative rates for two-hybrid screens. Analysis of yeast, worm, and fly data indicates that 25% to 45% of the reported interactions are likely false positives. Membrane proteins have higher false-discovery rates on average, and signal transduction proteins have lower rates. The overall false-negative rate ranges from 75% for worm to 90% for fly, which arises from a roughly 50% false-negative rate due to statistical undersampling and a 55% to 85% false-negative rate due to proteins that appear to be systematically lost from the assays. Finally, statistical model selection conclusively rejects the Erdös-Rényi network model in favor of the power law model for yeast and the truncated power law for worm and fly degree distributions. Much as genome sequencing coverage estimates were essential for planning the human genome sequencing project, the coverage estimates developed here will be valuable for guiding future proteomic screens. All software and datasets are available in Datasets S1 and S2, Figures S1–S5, and Tables S1−S6, and are also available from our Web site, http://www.baderzone.org. PMID:18039026
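
    The classic two-list capture-recapture (Lincoln-Petersen) estimate below conveys the underlying idea; the paper's unified model additionally accounts for false positives and systematically missed proteins, which this sketch ignores. The numbers in the usage comment are made up.

      def lincoln_petersen(n1, n2, overlap):
          # estimate the total number of true interactions from two screens
          # reporting n1 and n2 interactions with `overlap` found by both
          if overlap == 0:
              raise ValueError("no overlap: total is not identifiable")
          total = n1 * n2 / overlap
          coverage1 = n1 / total   # fraction of true interactions in screen 1
          coverage2 = n2 / total
          return total, coverage1, coverage2

      # e.g. two screens reporting 1,500 and 2,000 interactions with 300 shared
      # imply ~10,000 true interactions and 15-20% coverage per screen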

  18. A stochastic approach to estimate the uncertainty of dose mapping caused by uncertainties in b-spline registration

    SciTech Connect

    Hub, Martina; Thieke, Christian; Kessler, Marc L.

    2012-04-15

    Purpose: In fractionated radiation therapy, image guidance with daily tomographic imaging becomes more and more clinical routine. In principle, this allows for daily computation of the delivered dose and for accumulation of these daily dose distributions to determine the actually delivered total dose to the patient. However, uncertainties in the mapping of the images can translate into errors of the accumulated total dose, depending on the dose gradient. In this work, an approach to estimate the uncertainty of mapping between medical images is proposed that identifies areas bearing a significant risk of inaccurate dose accumulation. Methods: This method accounts for the geometric uncertainty of image registration and the heterogeneity of the dose distribution, which is to be mapped. Its performance is demonstrated in context of dose mapping based on b-spline registration. It is based on evaluation of the sensitivity of dose mapping to variations of the b-spline coefficients combined with evaluation of the sensitivity of the registration metric with respect to the variations of the coefficients. It was evaluated based on patient data that was deformed based on a breathing model, where the ground truth of the deformation, and hence the actual true dose mapping error, is known. Results: The proposed approach has the potential to distinguish areas of the image where dose mapping is likely to be accurate from other areas of the same image, where a larger uncertainty must be expected. Conclusions: An approach to identify areas where dose mapping is likely to be inaccurate was developed and implemented. This method was tested for dose mapping, but it may be applied in context of other mapping tasks as well.

  19. A stochastic approach to estimate the uncertainty of dose mapping caused by uncertainties in b-spline registration

    PubMed Central

    Hub, Martina; Thieke, Christian; Kessler, Marc L.; Karger, Christian P.

    2012-01-01

    Purpose: In fractionated radiation therapy, image guidance with daily tomographic imaging becomes more and more clinical routine. In principle, this allows for daily computation of the delivered dose and for accumulation of these daily dose distributions to determine the actually delivered total dose to the patient. However, uncertainties in the mapping of the images can translate into errors of the accumulated total dose, depending on the dose gradient. In this work, an approach to estimate the uncertainty of mapping between medical images is proposed that identifies areas bearing a significant risk of inaccurate dose accumulation. Methods: This method accounts for the geometric uncertainty of image registration and the heterogeneity of the dose distribution, which is to be mapped. Its performance is demonstrated in context of dose mapping based on b-spline registration. It is based on evaluation of the sensitivity of dose mapping to variations of the b-spline coefficients combined with evaluation of the sensitivity of the registration metric with respect to the variations of the coefficients. It was evaluated based on patient data that was deformed based on a breathing model, where the ground truth of the deformation, and hence the actual true dose mapping error, is known. Results: The proposed approach has the potential to distinguish areas of the image where dose mapping is likely to be accurate from other areas of the same image, where a larger uncertainty must be expected. Conclusions: An approach to identify areas where dose mapping is likely to be inaccurate was developed and implemented. This method was tested for dose mapping, but it may be applied in context of other mapping tasks as well. PMID:22482640

  20. Estimation and Mapping of Coastal Mangrove Biomass Using Both Passive and Active Remote Sensing Method

    NASA Astrophysics Data System (ADS)

    Yiqiong, L.; Lu, W.; Zhou, J.; Gan, W.; Cui, X.; Lin, G., Sr.

    2015-12-01

    Mangrove forests play an important role in the global carbon cycle, but carbon stocks in different mangrove forests are not easily measured at large scale. In this research, both active and passive remote sensing methods were used to estimate the aboveground biomass of dominant mangrove communities in Zhanjiang National Mangrove Nature Reserve in Guangdong, China. We set up a decision tree including spectral, texture, position and geometry indexes to achieve mangrove inter-species classification among five main species (Aegiceras corniculatum, Avicennia marina, Bruguiera gymnorrhiza, Kandelia candel, Sonneratia apetala) using 5.8 m multispectral ZY-3 images. In addition, Lidar data were collected and used to obtain the canopy height of different mangrove species. Then, regression equations between the field-measured aboveground biomass and the canopy height deduced from Lidar data were established for these mangrove species. By combining these results, we were able to establish a relatively accurate method for differentiating mangrove species and mapping their aboveground biomass distribution at the estuary scale, which could be applied to mangrove forests in other regions.

  1. Estimates of the Lightning NOx Profile in the Vicinity of the North Alabama Lightning Mapping Array

    NASA Technical Reports Server (NTRS)

    Koshak, William J.; Peterson, Harold

    2010-01-01

    The NASA Marshall Space Flight Center Lightning Nitrogen Oxides Model (LNOM) is applied to August 2006 North Alabama Lightning Mapping Array (LMA) data to estimate the raw (i.e., unmixed and otherwise environmentally unmodified) vertical profile of lightning nitrogen oxides, NOx = NO + NO2. This is part of a larger effort aimed at building a more realistic lightning NOx emissions inventory for use by the U.S. Environmental Protection Agency (EPA) Community Multiscale Air Quality (CMAQ) modeling system. Data from the National Lightning Detection Network (TM) (NLDN) is also employed. Overall, special attention is given to several important lightning variables including: the frequency and geographical distribution of lightning in the vicinity of the LMA network, lightning type (ground or cloud flash), lightning channel length, channel altitude, channel peak current, and the number of strokes per flash. Laboratory spark chamber results from the literature are used to convert 1-meter channel segments (that are located at a particular known altitude; i.e., air density) to NOx concentration. The resulting raw NOx profiles are discussed.
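
    A heavily simplified sketch of the final conversion step: counts of 1-meter channel segments per altitude bin are scaled by a laboratory-derived NOx yield at a reference density and by the local air density ratio. The exponential atmosphere with an 8.5 km scale height and the function interface are assumptions for illustration, not LNOM internals.

      import numpy as np

      def lnom_nox_profile(segment_counts, altitudes_km, yield_per_m_ref):
          # segment_counts: 1 m channel segments per altitude bin
          # yield_per_m_ref: lab-derived NOx yield per meter at reference density
          density_ratio = np.exp(-np.asarray(altitudes_km) / 8.5)
          return np.asarray(segment_counts) * yield_per_m_ref * density_ratio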

  2. Estimates of the Lightning NOx Profile in the Vicinity of the North Alabama Lightning Mapping Array

    NASA Technical Reports Server (NTRS)

    Koshak, William J.; Peterson, Harold S.; McCaul, Eugene W.; Blazar, Arastoo

    2010-01-01

    The NASA Marshall Space Flight Center Lightning Nitrogen Oxides Model (LNOM) is applied to August 2006 North Alabama Lightning Mapping Array (NALMA) data to estimate the (unmixed and otherwise environmentally unmodified) vertical source profile of lightning nitrogen oxides, NOx = NO + NO2. Data from the National Lightning Detection Network (Trademark) (NLDN) is also employed. This is part of a larger effort aimed at building a more realistic lightning NOx emissions inventory for use by the U.S. Environmental Protection Agency (EPA) Community Multiscale Air Quality (CMAQ) modeling system. Overall, special attention is given to several important lightning variables including: the frequency and geographical distribution of lightning in the vicinity of the NALMA network, lightning type (ground or cloud flash), lightning channel length, channel altitude, channel peak current, and the number of strokes per flash. Laboratory spark chamber results from the literature are used to convert 1-meter channel segments (that are located at a particular known altitude; i.e., air density) to NOx concentration. The resulting lightning NOx source profiles are discussed.

  3. Extended estimator approach for 2×2 games and its mapping to the Ising Hamiltonian

    NASA Astrophysics Data System (ADS)

    Ariosa, D.; Fort, H.

    2005-01-01

    We consider a system of adaptive self-interested agents interacting by playing an iterated pairwise prisoner’s dilemma (PD) game. Each player has two options: either cooperate (C) or defect (D). Agents have no (long term) memory to reciprocate nor identifying tags to distinguish C from D. We show how their 16 possible elementary Markovian (one-step memory) strategies can be cast in a simple general formalism in terms of an estimator of expected utilities Δ* . This formalism is helpful to map a subset of these strategies into an Ising Hamiltonian in a straightforward way. This connection in turn serves to shed light on the evolution of the iterated games played by agents, which can represent a broad variety of individuals from firms of a market to species coexisting in an ecosystem. Additionally, this magnetic description may be useful to introduce noise in a natural and simple way. The equilibrium states reached by the system depend strongly on whether the dynamics are synchronous or asynchronous and also on the system connectivity.

  4. Accurate estimation of short read mapping quality for next-generation genome sequencing

    PubMed Central

    Ruffalo, Matthew; Koyutürk, Mehmet; Ray, Soumya; LaFramboise, Thomas

    2012-01-01

    Motivation: Several software tools specialize in the alignment of short next-generation sequencing reads to a reference sequence. Some of these tools report a mapping quality score for each alignment—in principle, this quality score tells researchers the likelihood that the alignment is correct. However, the reported mapping quality often correlates weakly with actual accuracy and the qualities of many mappings are underestimated, encouraging the researchers to discard correct mappings. Further, these low-quality mappings tend to correlate with variations in the genome (both single nucleotide and structural), and such mappings are important in accurately identifying genomic variants. Approach: We develop a machine learning tool, LoQuM (LOgistic regression tool for calibrating the Quality of short read mappings), to assign reliable mapping quality scores to mappings of Illumina reads returned by any alignment tool. LoQuM uses statistics on the read (base quality scores reported by the sequencer) and the alignment (number of matches, mismatches and deletions, mapping quality score returned by the alignment tool, if available, and number of mappings) as features for classification and uses simulated reads to learn a logistic regression model that relates these features to actual mapping quality. Results: We test the predictions of LoQuM on an independent dataset generated by the ART short read simulation software and observe that LoQuM can ‘resurrect’ many mappings that are assigned zero quality scores by the alignment tools and are therefore likely to be discarded by researchers. We also observe that the recalibration of mapping quality scores greatly enhances the precision of called single nucleotide polymorphisms. Availability: LoQuM is available as open source at http://compbio.case.edu/loqum/. Contact: matthew.ruffalo@case.edu. PMID:22962451
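
    A minimal sketch of the recalibration idea using scikit-learn: fit a logistic regression on features of simulated reads, then convert the predicted probability of an incorrect mapping to a Phred-scaled quality. The file names and feature layout are hypothetical; this is not LoQuM's actual code.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # hypothetical feature matrices: base qualities, match/mismatch/deletion
      # counts, aligner-reported quality, number of mappings
      X_train = np.load("simulated_read_features.npy")
      y_train = np.load("simulated_read_is_correct.npy")   # 1 = correct mapping
      X_real = np.load("real_read_features.npy")

      model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

      # recalibrated mapping quality on the Phred scale
      p_wrong = 1.0 - model.predict_proba(X_real)[:, 1]
      mapq = -10.0 * np.log10(np.clip(p_wrong, 1e-10, 1.0))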

  5. Fusion of Kinect depth data with trifocal disparity estimation for near real-time high quality depth maps generation

    NASA Astrophysics Data System (ADS)

    Boisson, Guillaume; Kerbiriou, Paul; Drazic, Valter; Bureller, Olivier; Sabater, Neus; Schubert, Arno

    2014-03-01

    Generating depth maps along with video streams is valuable for Cinema and Television production. Thanks to the improvements of depth acquisition systems, the challenge of fusion between depth sensing and disparity estimation is widely investigated in computer vision. This paper presents a new framework for generating depth maps from a rig made of a professional camera with two satellite cameras and a Kinect device. A new disparity-based calibration method is proposed so that registered Kinect depth samples become perfectly consistent with disparities estimated between rectified views. Also, a new hierarchical fusion approach is proposed for combining, on the fly, depth sensing and disparity estimation in order to circumvent their respective weaknesses. Depth is determined by minimizing a global energy criterion that takes into account the matching reliability and the consistency with the Kinect input. The depth maps thus generated are reliable both in uniform and textured areas, without holes due to occlusions or structured-light shadows. Our GPU implementation reaches 20 fps for generating quarter-pel accurate HD720p depth maps along with the main view, which is close to real-time performance for video applications. The estimated depth is high quality and suitable for 3D reconstruction or virtual view synthesis.

  6. Fractal-Based Lightning Channel Length Estimation from Convex-Hull Flash Areas for DC3 Lightning Mapping Array Data

    NASA Technical Reports Server (NTRS)

    Bruning, Eric C.; Thomas, Ronald J.; Krehbiel, Paul R.; Rison, William; Carey, Larry D.; Koshak, William; Peterson, Harold; MacGorman, Donald R.

    2013-01-01

    We will use VHF Lightning Mapping Array data to estimate NOx per flash and per unit channel length, including the vertical distribution of channel length. What is the best way to find channel length from VHF sources? This paper presents the rationale for the fractal method, which is closely related to the box-covering method.
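
    A small sketch of the box-covering computation the fractal method rests on: count the boxes of edge s needed to cover the VHF sources, and read the fractal dimension off the slope of log N(s) versus log(1/s). The interface is illustrative; the paper's channel-length estimator adds further steps.

      import numpy as np

      def box_count(points, size):
          # number of cubes of edge `size` occupied by the VHF sources
          idx = np.floor(points / size).astype(int)
          return len({tuple(row) for row in idx})

      def fractal_dimension(points, sizes):
          # slope of log N(s) against log(1/s) over a range of box sizes
          counts = [box_count(points, s) for s in sizes]
          slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                                np.log(counts), 1)
          return slope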

  7. MAP Reconstruction for Fourier Rebinned TOF-PET Data

    PubMed Central

    Bai, Bing; Lin, Yanguang; Zhu, Wentao; Ren, Ran; Li, Quanzheng; Dahlbom, Magnus; DiFilippo, Frank; Leahy, Richard M.

    2014-01-01

    Time-of-flight (TOF) information improves signal to noise ratio in Positron Emission Tomography (PET). Computation cost in processing TOF-PET sinograms is substantially higher than for nonTOF data because the data in each line of response is divided among multiple time of flight bins. This additional cost has motivated research into methods for rebinning TOF data into lower dimensional representations that exploit redundancies inherent in TOF data. We have previously developed approximate Fourier methods that rebin TOF data into either 3D nonTOF or 2D nonTOF formats. We refer to these methods respectively as FORET-3D and FORET-2D. Here we describe maximum a posteriori (MAP) estimators for use with FORET rebinned data. We first derive approximate expressions for the variance of the rebinned data. We then use these results to rescale the data so that the variance and mean are approximately equal allowing us to use the Poisson likelihood model for MAP reconstruction. MAP reconstruction from these rebinned data uses a system matrix in which the detector response model accounts for the effects of rebinning. Using these methods we compare performance of FORET-2D and 3D with TOF and nonTOF reconstructions using phantom and clinical data. Our phantom results show a small loss in contrast recovery at matched noise levels using FORET compared to reconstruction from the original TOF data. Clinical examples show FORET images that are qualitatively similar to those obtained from the original TOF-PET data but a small increase in variance at matched resolution. Reconstruction time is reduced by a factor of 5 and 30 using FORET3D+MAP and FORET2D+MAP respectively compared to 3D TOF MAP, which makes these methods attractive for clinical applications. PMID:24504374
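
    The variance-matching step can be sketched as below, assuming approximate mean and variance estimates of the rebinned data are available (the paper derives the variance analytically from the rebinning): dividing by s = var/mean makes the scaled data's variance equal its mean, as a Poisson likelihood requires.

      import numpy as np

      def rescale_for_poisson(y, mean_est, var_est):
          # with s = var/mean, the scaled data y/s has variance
          # var/s**2 = mean/s, i.e. variance equals mean
          s = var_est / np.maximum(mean_est, 1e-12)
          return y / s, s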

  8. Evaluating rapid ground sampling and scaling estimated plant cover using UAV imagery up to Landsat for mapping arctic vegetation

    NASA Astrophysics Data System (ADS)

    Nelson, P.; Paradis, D. P.

    2017-12-01

    The small stature and spectral diversity of arctic plant taxa present challenges in mapping arctic vegetation. Mapping vegetation at the appropriate scale is needed to visualize effects of disturbance, directional vegetation change or mapping of specific plant groups for other applications (eg. habitat mapping). Fine spatial grain of remotely sensed data (ca. 10 cm pixels) is often necessary to resolve patches of many arctic plant groups, such as bryophytes and lichens. These groups are also spectrally different from mineral soil, litter and vascular plants. We sought to explore methods for generating high-resolution spatial and spectral data and thereby improve mapping approaches for arctic vegetation. We sampled ground vegetation at seven sites north or west of tree-line in Alaska, four north of Fairbanks and three northwest of Bethel, respectively. At each site, we estimated cover of plant functional types in 1m2 quadrats spaced approximately every 10 m along a 100 m long transect. Each quadrat was also scanned using a field spectroradiometer (PSR+ Spectral Evolution, 400-2500 nm range) and photographed from multiple perspectives. We then flew our small UAV with an RGB camera over the transect and at least 50 m on either side, collecting imagery of the plot that was used to generate an image mosaic and a digital surface model of the plot. We compare plant functional group cover estimated ocularly in situ to post-hoc estimates, either automated or made by a human observer, from the quadrat photos. We also compare lichen cover interpolated from UAV scenes to lichen cover estimated from statistical models built on Landsat data. Light and yellow lichens are discernible in the UAV imagery but certain lichens, especially dark-colored lichens or those with spectral signatures similar to graminoid litter, present challenges. Future efforts will focus on integrating UAV-upscaled ground cover estimates to hyperspectral sensors (eg. AVIRIS ng) for better combined

  9. MRI Estimates of Brain Iron Concentration in Normal Aging Using Quantitative Susceptibility Mapping

    PubMed Central

    Bilgic, Berkin; Pfefferbaum, Adolf; Rohlfing, Torsten; Sullivan, Edith V.; Adalsteinsson, Elfar

    2011-01-01

    Quantifying tissue iron concentration in vivo is instrumental for understanding the role of iron in physiology and in neurological diseases associated with abnormal iron distribution. Herein, we use recently developed Quantitative Susceptibility Mapping (QSM) methodology to estimate the tissue magnetic susceptibility based on MRI signal phase. To investigate the effect of different regularization choices, we implement and compare ℓ1 and ℓ2 norm regularized QSM algorithms. These regularized approaches solve for the underlying magnetic susceptibility distribution, a sensitive measure of the tissue iron concentration, that gives rise to the observed signal phase. Regularized QSM methodology also involves a pre-processing step that removes, by dipole fitting, unwanted background phase effects due to bulk susceptibility variations between air and tissue and requires data acquisition only at a single field strength. For validation, the performance of the two QSM methods was measured against published estimates of regional brain iron from postmortem and in vivo data. The in vivo comparison was based on data previously acquired using Field-Dependent Relaxation Rate Increase (FDRI), an estimate of MRI relaxivity enhancement due to increased main magnetic field strength, requiring data acquired at two different field strengths. The QSM analysis was based on susceptibility-weighted images acquired at 1.5T, whereas FDRI analysis used Multi-Shot Echo-Planar Spin Echo images collected at 1.5T and 3.0T. Both datasets were collected in the same healthy young and elderly adults. The in vivo estimates of regional iron concentration comported well with published postmortem measurements; both QSM approaches yielded the same rank ordering of iron concentration by brain structure, with the lowest in white matter and the highest in globus pallidus. Further validation was provided by comparison of the in vivo measurements, ℓ1-regularized QSM versus FDRI and ℓ2-regularized QSM versus
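
    As a rough, generic illustration of the ℓ2-regularized approach (a closed-form Tikhonov inversion in k-space; the ℓ1 variant requires an iterative solver and is not shown), the susceptibility map can be obtained from the background-removed phase as follows. The function names and the identity-regularizer choice are illustrative assumptions, not the authors' code:

        import numpy as np

        def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0)):
            # k-space dipole kernel D(k) = 1/3 - kz**2 / |k|**2, B0 along z
            ks = [np.fft.fftfreq(n, d=d) for n, d in zip(shape, voxel_size)]
            kx, ky, kz = np.meshgrid(*ks, indexing="ij")
            k2 = kx**2 + ky**2 + kz**2
            k2[0, 0, 0] = 1.0  # avoid division by zero at the DC term
            return 1.0 / 3.0 - kz**2 / k2

        def qsm_l2(phase, lam=1e-2, voxel_size=(1.0, 1.0, 1.0)):
            # Tikhonov inversion: chi_k = D* phi_k / (|D|**2 + lambda)
            D = dipole_kernel(phase.shape, voxel_size)
            chi_k = np.conj(D) * np.fft.fftn(phase) / (np.abs(D)**2 + lam)
            return np.real(np.fft.ifftn(chi_k))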

  10. Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation

    PubMed Central

    Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

    This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain, in contrast to most current models, which work in the frequency domain. Exact signal estimation is a computationally intractable problem. We derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using a minimum Kullback–Leibler (KL) divergence criterion. The frequency domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude. Correspondingly, the log-spectral domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, gain and noise spectrum adaptation are implemented using the expectation–maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, a lower word recognition error rate, and less spectral distortion. PMID:20428253

  11. Associations of Mediterranean Diet and a Posteriori Derived Dietary Patterns with Breast and Lung Cancer Risk: A Case-Control Study

    PubMed Central

    Krusinska, Beata; Hawrysz, Iwona; Wadolowska, Lidia; Slowinska, Malgorzata Anna; Biernacki, Maciej; Czerwinska, Anna; Golota, Janusz Jacek

    2018-01-01

    Lung cancer in men and breast cancer in women are the most commonly diagnosed cancers in Poland and worldwide. Results of studies involving dietary patterns (DPs) and breast or lung cancer risk in European countries outside the Mediterranean Sea region are limited and inconclusive. This study aimed to develop a ‘Polish-adapted Mediterranean Diet’ (‘Polish-aMED’) score and then study the associations between the ‘Polish-aMED’ score and a posteriori-derived dietary patterns with breast or lung cancer risk in adult Poles. This pooled analysis of two case-control studies involved 560 subjects (280 men, 280 women) aged 40–75 years from Northeastern Poland. Breast cancer was diagnosed in 140 women and lung cancer in 140 men. The frequency of consumption of 21 selected food groups was collected using a 62-item Food Frequency Questionnaire (FFQ)-6. The ‘Polish-adapted Mediterranean Diet’ score, which included eight items—vegetables, fruit, whole grain, fish, legumes, nuts and seeds, the ratio of vegetable oils to animal fat, and red and processed meat—was developed (range: 0–8 points). Three DPs were identified in a Principal Component Analysis: ‘Prudent’, ‘Non-healthy’, and ‘Dressings and sweetened-low-fat dairy’. In a multiple logistic regression analysis, two models were created: crude, and adjusted for age, sex, type of cancer, Body Mass Index (BMI), socioeconomic status (SES) index, overall physical activity, smoking status and alcohol abuse. The risk of breast or lung cancer was lower at the average (3–5 points) and high (6–8 points) levels of the ‘Polish-aMED’ score compared to the low (0–2 points) level by 51% (odds ratio (OR): 0.49; 95% confidence interval (CI): 0.30–0.80; p < 0.01; adjusted) and 63% (OR: 0.37; 95% CI: 0.21–0.64; p < 0.001; adjusted), respectively. In the middle and upper tertiles compared to the bottom tertile of the ‘Prudent’ DP, the risk of cancer was lower by 38–43

  12. A priori and a posteriori investigations for developing large eddy simulations of multi-species turbulent mixing under high-pressure conditions

    SciTech Connect

    Borghesi, Giulio; Bellan, Josette, E-mail: josette.bellan@jpl.nasa.gov; Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California 91109-8099

    2015-03-15

    work, and the filtered species mass fluxes. Improved models were developed based on a scale-similarity approach and were found to perform considerably better than the classical ones. These improved models were also assessed in an a posteriori study. Different combinations of the standard models and the improved ones were tested. At the relatively small Reynolds numbers achievable in DNS and at the relatively small filter widths used here, the standard models for the filtered pressure, the filtered heat flux, and the filtered species fluxes were found to yield accurate results for the morphology of the large-scale structures present in the flow. Analysis of the temporal evolution of several volume-averaged quantities representative of the mixing layer growth, and of the cross-stream variation of homogeneous-plane averages and second-order correlations, as well as of visualizations, indicated that the models performed equivalently for the conditions of the simulations. The expectation is that at the much larger Reynolds numbers and much larger filter widths used in practical applications, the improved models will perform considerably more accurately than the standard ones.

  13. Soil Organic Carbon Estimation and Mapping Using "on-the-go" VisNIR Spectroscopy

    NASA Astrophysics Data System (ADS)

    Brown, D. J.; Bricklemyer, R. S.; Christy, C.

    2007-12-01

    Soil organic carbon (SOC) and other soil properties related to carbon sequestration (e.g. soil clay content and mineralogy) vary spatially across landscapes. To cost-effectively capture this variability, new technologies, such as Visible and Near Infrared (VisNIR) spectroscopy, have been applied to soils for rapid, accurate, and inexpensive estimation of SOC and other soil properties. For this study, we evaluated an "on-the-go" VisNIR sensor developed by Veris Technologies, Inc. (Salina, KS) for mapping SOC, soil clay content, and mineralogy. The Veris spectrometer spanned 350 to 2224 nm with 8 nm spectral resolution, and 25 spectra were integrated every 2 seconds, resulting in 3-5 m scanning distances on the ground. The unit was mounted on a mobile sensor platform pulled by a tractor and scanned soils at an average depth of 10 cm through a quartz-sapphire window. We scanned eight 16.2 ha (40 ac) wheat fields in north central Montana (USA) with 15 m transect intervals. Using random sampling with spatial inhibition, 100 soil samples from 0-10 cm depths were extracted along scanned transects from each field and were analyzed for SOC. Neat, sieved (<2 mm) soil sample materials were also scanned in the lab using an Analytical Spectral Devices (ASD, Boulder, CO, USA) Fieldspec Pro FR spectroradiometer with a spectral range of 350-2500 nm and spectral resolution of 2-10 nm. The analyzed samples were used to calibrate and validate a number of partial least squares regression (PLSR) VisNIR models to compare on-the-go scanning vs. higher spectral resolution laboratory spectroscopy vs. standard SOC measurement methods.
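
    A minimal sketch of the PLSR calibration step using scikit-learn (file names and the number of latent components are hypothetical placeholders; the study's actual preprocessing is not reproduced):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        X = np.loadtxt("spectra.csv", delimiter=",")  # rows: samples, cols: wavelengths
        y = np.loadtxt("soc.csv", delimiter=",")      # measured SOC for each sample

        pls = PLSRegression(n_components=10)          # tune by cross-validation
        y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
        print("10-fold CV RMSE:", np.sqrt(np.mean((y - y_cv) ** 2)))
        pls.fit(X, y)                                 # final calibration model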

  14. Mapping Transient Hyperventilation Induced Alterations with Estimates of the Multi-Scale Dynamics of BOLD Signal.

    PubMed

    Kiviniemi, Vesa; Remes, Jukka; Starck, Tuomo; Nikkinen, Juha; Haapea, Marianne; Silven, Olli; Tervonen, Osmo

    2009-01-01

    Temporal blood oxygen level dependent (BOLD) contrast signals in functional MRI during rest may be characterized by power spectral distribution (PSD) trends of the form 1/f^α. Trends with 1/f characteristics comprise fractal properties with repeating oscillation patterns in multiple time scales. Estimates of the fractal properties enable the quantification of phenomena that may otherwise be difficult to measure, such as transient, non-linear changes. In this study it was hypothesized that the fractal metrics of 1/f BOLD signal trends can map changes related to dynamic, multi-scale alterations in cerebral blood flow (CBF) after a transient hyperventilation challenge. Twenty-three normal adults were imaged in a resting state before and after hyperventilation. Several variables characterizing the trends (the 1/f trend exponent α, fractal dimension D_f, and Hurst exponent H) were measured from the BOLD signals. The results show that fractal metrics of the BOLD signal follow the fractional Gaussian noise model, even during the dynamic CBF change that follows hyperventilation. The most dominant effect on the fractal metrics was detected in grey matter, in line with previous hyperventilation vaso-reactivity studies. α was also able to differentiate blood vessels from grey matter changes, while D_f was most sensitive to grey matter. H correlated with default mode network areas before hyperventilation, but this pattern vanished after hyperventilation due to a global increase in H. In the future, resting-state fMRI combined with fractal metrics of the BOLD signal may be used for analyzing multi-scale alterations of cerebral blood flow.
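
    A minimal sketch of estimating the 1/f^α exponent (and, under the fractional Gaussian noise assumption, the Hurst exponent) from a single BOLD time series; the sampling rate and window length are illustrative assumptions:

        import numpy as np
        from scipy.signal import welch

        def spectral_exponent(ts, fs=0.5):
            # Welch PSD, then a linear fit of log-power vs log-frequency;
            # fs = 0.5 Hz assumes a repetition time of 2 s.
            f, pxx = welch(ts, fs=fs, nperseg=min(128, len(ts)))
            keep = f > 0  # exclude the DC bin before taking logs
            slope, _ = np.polyfit(np.log(f[keep]), np.log(pxx[keep]), 1)
            return -slope  # alpha in the 1/f**alpha model

        # For fractional Gaussian noise, H = (alpha + 1) / 2.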

  15. Estimating missing hourly climatic data using artificial neural network for energy balance based ET mapping applications

    USDA-ARS?s Scientific Manuscript database

    Remote sensing based evapotranspiration (ET) mapping has become an important tool for water resources management at a regional scale. Accurate hourly climatic data and reference ET are crucial input for successfully implementing remote sensing based ET models such as Mapping ET with internal calibra...

  16. A genetic map and germplasm diversity estimation of Mangifera indica (mango) with SNPs

    USDA-ARS?s Scientific Manuscript database

    Mango (Mangifera indica) is often referred to as the “King of Fruits”. As the first steps in developing a mango genomics project, we genotyped 582 individuals comprising six mapping populations with 1054 SNP markers. The resulting consensus map had 20 linkage groups defined by 726 SNP markers with...

  17. COSMIC MICROWAVE BACKGROUND POLARIZATION AND TEMPERATURE POWER SPECTRA ESTIMATION USING LINEAR COMBINATION OF WMAP 5 YEAR MAPS

    SciTech Connect

    Samal, Pramoda Kumar; Jain, Pankaj; Saha, Rajib

    We estimate cosmic microwave background (CMB) polarization and temperature power spectra using Wilkinson Microwave Anisotropy Probe (WMAP) 5 year foreground contaminated maps. The power spectrum is estimated by using a model-independent method, which does not directly utilize the diffuse foreground templates nor the detector noise model. The method essentially consists of two steps: (1) removal of diffuse foreground contamination by making linear combinations of individual maps in harmonic space and (2) cross-correlation of foreground-cleaned maps to minimize detector noise bias. For the temperature power spectrum we also estimate and subtract residual unresolved point source contamination in the cross-power spectrum using the point source model provided by the WMAP science team. Our TT, TE, and EE power spectra are in good agreement with the published results of the WMAP science team. We perform detailed numerical simulations to test for bias in our procedure. We find that the bias is small in almost all cases. A negative bias at low l in the TT power spectrum has been pointed out in an earlier publication. We find that the bias-corrected quadrupole power (l(l+1)C_l/2π) is 532 μK², approximately 2.5 times the estimate (213.4 μK²) made by the WMAP team.
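
    The noise-debiasing idea behind step (2) can be sketched with healpy: cross-correlating two maps that share the same sky signal but carry independent noise leaves no noise bias term, because the expectation of the noise cross-term vanishes. The toy signal and noise levels below are arbitrary assumptions:

        import numpy as np
        import healpy as hp

        nside = 128
        sky = hp.synfast(np.ones(3 * nside), nside)        # toy common sky signal
        map_a = sky + np.random.normal(0, 1.0, sky.size)   # channel A noise
        map_b = sky + np.random.normal(0, 1.0, sky.size)   # independent channel B noise
        cl_cross = hp.anafast(map_a, map2=map_b)           # noise bias cancels
        cl_auto = hp.anafast(map_a)                        # biased high by noise power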

  18. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging, used to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images, introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurements was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator and demonstrated quantitative birefringence imaging [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and
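
    The table-lookup structure of such a MAP estimator can be sketched as follows; the array layout and variable names are hypothetical, and the PDF table is assumed to have been pre-computed by the Monte-Carlo simulation described above:

        import numpy as np

        def map_retardation(meas_idx, snr_idx, pdf_table, prior, ret_grid):
            # pdf_table[i, j, k]: pre-computed probability of observing
            # measurement bin j at SNR bin k when the true local
            # retardation is ret_grid[i].
            posterior = pdf_table[:, meas_idx, snr_idx] * prior  # Bayes, up to a constant
            return ret_grid[np.argmax(posterior)]                # MAP estimate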

  19. Five-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Bayesian Estimation of Cosmic Microwave Background Polarization Maps

    NASA Astrophysics Data System (ADS)

    Dunkley, J.; Spergel, D. N.; Komatsu, E.; Hinshaw, G.; Larson, D.; Nolta, M. R.; Odegard, N.; Page, L.; Bennett, C. L.; Gold, B.; Hill, R. S.; Jarosik, N.; Weiland, J. L.; Halpern, M.; Kogut, A.; Limon, M.; Meyer, S. S.; Tucker, G. S.; Wollack, E.; Wright, E. L.

    2009-08-01

    We describe a sampling method to estimate the polarized cosmic microwave background (CMB) signal from observed maps of the sky. We use a Metropolis-within-Gibbs algorithm to estimate the polarized CMB map, containing Q and U Stokes parameters at each pixel, and its covariance matrix. These can be used as inputs for cosmological analyses. The polarized sky signal is parameterized as the sum of three components: CMB, synchrotron emission, and thermal dust emission. The polarized Galactic components are modeled with spatially varying power-law spectral indices for the synchrotron, and a fixed power law for the dust, and their component maps are estimated as by-products. We apply the method to simulated low-resolution maps with pixels of side 7.2 deg, using diagonal and full noise realizations drawn from the WMAP noise matrices. The CMB maps are recovered with goodness of fit consistent with errors. Computing the likelihood of the E-mode power in the maps as a function of optical depth to reionization, τ, for fixed temperature anisotropy power, we recover τ = 0.091 ± 0.019 for a simulation with input τ = 0.1, and mean τ = 0.098 averaged over 10 simulations. A "null" simulation with no polarized CMB signal has maximum likelihood consistent with τ = 0. The method is applied to the five-year WMAP data, using the K, Ka, Q, and V channels. We find τ = 0.090 ± 0.019, compared to τ = 0.086 ± 0.016 from the template-cleaned maps used in the primary WMAP analysis. The synchrotron spectral index, β, averaged over high signal-to-noise pixels with standard deviation σ(β) < 0.25, but excluding ~6% of the sky masked in the Galactic plane, is -3.03 ± 0.04. This estimate does not vary significantly with Galactic latitude, although it includes an informative prior. WMAP is the result of a partnership between Princeton University and NASA's Goddard Space Flight Center. Scientific guidance is provided by the WMAP Science Team.

  20. G6PD Deficiency Prevalence and Estimates of Affected Populations in Malaria Endemic Countries: A Geostatistical Model-Based Map

    PubMed Central

    Howes, Rosalind E.; Piel, Frédéric B.; Patil, Anand P.; Nyangiri, Oscar A.; Gething, Peter W.; Dewi, Mewahyu; Hogg, Mariana M.; Battle, Katherine E.; Padilla, Carmencita D.; Baird, J. Kevin; Hay, Simon I.

    2012-01-01

    Background: Primaquine is a key drug for malaria elimination. In addition to being the only drug active against the dormant relapsing forms of Plasmodium vivax, primaquine is the sole effective treatment of infectious P. falciparum gametocytes, and may interrupt transmission and help contain the spread of artemisinin resistance. However, primaquine can trigger haemolysis in patients with a deficiency in glucose-6-phosphate dehydrogenase (G6PDd). Poor information is available about the distribution of individuals at risk of primaquine-induced haemolysis. We present a continuous evidence-based prevalence map of G6PDd and estimates of affected populations, together with a national index of relative haemolytic risk. Methods and Findings: Representative community surveys of phenotypic G6PDd prevalence were identified for 1,734 spatially unique sites. These surveys formed the evidence-base for a Bayesian geostatistical model adapted to the gene's X-linked inheritance, which predicted a G6PDd allele frequency map across malaria endemic countries (MECs) and generated population-weighted estimates of affected populations. Highest median prevalence (peaking at 32.5%) was predicted across sub-Saharan Africa and the Arabian Peninsula. Although G6PDd prevalence was generally lower across central and southeast Asia, rarely exceeding 20%, the majority of G6PDd individuals (67.5% median estimate) were from Asian countries. We estimated a G6PDd allele frequency of 8.0% (interquartile range: 7.4–8.8) across MECs, and 5.3% (4.4–6.7) within malaria-eliminating countries. The reliability of the map is contingent on the underlying data informing the model; population heterogeneity can only be represented by the available surveys, and important weaknesses exist in the map across data-sparse regions. Uncertainty metrics are used to quantify some aspects of these limitations in the map. Finally, we assembled a database of G6PDd variant occurrences to inform a national-level index of

  1. G6PD deficiency prevalence and estimates of affected populations in malaria endemic countries: a geostatistical model-based map.

    PubMed

    Howes, Rosalind E; Piel, Frédéric B; Patil, Anand P; Nyangiri, Oscar A; Gething, Peter W; Dewi, Mewahyu; Hogg, Mariana M; Battle, Katherine E; Padilla, Carmencita D; Baird, J Kevin; Hay, Simon I

    2012-01-01

    Primaquine is a key drug for malaria elimination. In addition to being the only drug active against the dormant relapsing forms of Plasmodium vivax, primaquine is the sole effective treatment of infectious P. falciparum gametocytes, and may interrupt transmission and help contain the spread of artemisinin resistance. However, primaquine can trigger haemolysis in patients with a deficiency in glucose-6-phosphate dehydrogenase (G6PDd). Poor information is available about the distribution of individuals at risk of primaquine-induced haemolysis. We present a continuous evidence-based prevalence map of G6PDd and estimates of affected populations, together with a national index of relative haemolytic risk. Representative community surveys of phenotypic G6PDd prevalence were identified for 1,734 spatially unique sites. These surveys formed the evidence-base for a Bayesian geostatistical model adapted to the gene's X-linked inheritance, which predicted a G6PDd allele frequency map across malaria endemic countries (MECs) and generated population-weighted estimates of affected populations. Highest median prevalence (peaking at 32.5%) was predicted across sub-Saharan Africa and the Arabian Peninsula. Although G6PDd prevalence was generally lower across central and southeast Asia, rarely exceeding 20%, the majority of G6PDd individuals (67.5% median estimate) were from Asian countries. We estimated a G6PDd allele frequency of 8.0% (interquartile range: 7.4-8.8) across MECs, and 5.3% (4.4-6.7) within malaria-eliminating countries. The reliability of the map is contingent on the underlying data informing the model; population heterogeneity can only be represented by the available surveys, and important weaknesses exist in the map across data-sparse regions. Uncertainty metrics are used to quantify some aspects of these limitations in the map. Finally, we assembled a database of G6PDd variant occurrences to inform a national-level index of relative G6PDd haemolytic risk. Asian

  2. Bayes filter modification for drivability map estimation with observations from stereo vision

    NASA Astrophysics Data System (ADS)

    Panchenko, Aleksei; Prun, Viktor; Turchenkov, Dmitri

    2017-02-01

    Reconstruction of a drivability map for a moving vehicle is a well-known research topic in applied robotics. Here we consider creating such a map for an autonomous truck on a generally planar surface containing separate obstacles. The source of measurements for the truck is a calibrated pair of cameras. The stereo system detects and reconstructs several types of objects, such as road borders, other vehicles, pedestrians, and general tall or highly saturated objects (e.g. road cones). To create a robust mapping module we use a modification of Bayes filtering that introduces some novel techniques for the occupancy map update step. Specifically, our modified version remains applicable in the presence of false-positive measurement errors, stereo shading, and obstacle occlusion. We implemented the technique and achieved real-time 15 FPS computation on an industrial shake-proof PC. Our real-world experiments show the positive effect of the filtering step.
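
    The standard Bayes-filter occupancy update, which the paper modifies, keeps a per-cell log-odds of occupancy; a minimal generic sketch (the sensor-model probabilities and clamping bounds are illustrative assumptions, not the paper's values):

        import numpy as np

        def logodds_update(grid, hit_mask, p_hit=0.7, p_miss=0.4):
            # grid: H x W log-odds of occupancy; hit_mask: True where the
            # stereo module reports an obstacle in the current frame.
            l_hit = np.log(p_hit / (1.0 - p_hit))
            l_miss = np.log(p_miss / (1.0 - p_miss))
            grid += np.where(hit_mask, l_hit, l_miss)
            # Clamping keeps the map responsive, which limits the damage
            # done by false positives and temporary occlusions.
            return np.clip(grid, -5.0, 5.0)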

  3. Estimating and mapping the incidence of dengue and chikungunya in Honduras during 2015 using Geographic Information Systems (GIS).

    PubMed

    Zambrano, Lysien I; Sierra, Manuel; Lara, Bredy; Rodríguez-Núñez, Iván; Medina, Marco T; Lozada-Riascos, Carlos O; Rodríguez-Morales, Alfonso J

    Geographical information systems (GIS) have been used extensively to develop epidemiological maps for dengue, but not for other emerging arboviral diseases, nor in Central America. Surveillance case data (2015) were used to estimate annual incidence rates of dengue and chikungunya (cases/100,000 pop) and to develop the first such maps for the departments and municipalities of Honduras. The GIS software used was Kosmo Desktop 3.0RC1®. Four thematic maps were developed according to department, municipality, and disease incidence rates. A total of 19,289 cases of dengue and 85,386 cases of chikungunya were reported (median, 726 cases/week for dengue and 1,460 for chikungunya). The highest peaks were observed at weeks 25 and 27, respectively. There was an association between progression by week (p<0.0001). The cumulative crude national rate was estimated at 224.9 cases/100,000 pop for dengue and 995.6 for chikungunya. The incidence rate ratio between chikungunya and dengue was 4.42 (ranging in municipalities from 0.0 up to 893.0 [San Vicente Centenario]). The burden of both arboviral diseases is concentrated in the capital Central District (>37% for both). Use of GIS-based epidemiological maps allows guiding decision making for the prevention and control of diseases that still represent significant issues in the region and the country, as well as in emerging conditions. Copyright © 2016 King Saud Bin Abdulaziz University for Health Sciences. Published by Elsevier Ltd. All rights reserved.

  4. A priori and a posteriori approaches for finding genes of evolutionary interest in non-model species: osmoregulatory genes in the kidney transcriptome of the desert rodent Dipodomys spectabilis (banner-tailed kangaroo rat).

    PubMed

    Marra, Nicholas J; Eo, Soo Hyung; Hale, Matthew C; Waser, Peter M; DeWoody, J Andrew

    2012-12-01

    One common goal in evolutionary biology is the identification of genes underlying adaptive traits of evolutionary interest. Recently next-generation sequencing techniques have greatly facilitated such evolutionary studies in species otherwise depauperate of genomic resources. Kangaroo rats (Dipodomys sp.) serve as exemplars of adaptation in that they inhabit extremely arid environments, yet require no drinking water because of ultra-efficient kidney function and osmoregulation. As a basis for identifying water conservation genes in kangaroo rats, we conducted a priori bioinformatics searches in model rodents (Mus musculus and Rattus norvegicus) to identify candidate genes with known or suspected osmoregulatory function. We then obtained 446,758 reads via 454 pyrosequencing to characterize genes expressed in the kidney of banner-tailed kangaroo rats (Dipodomys spectabilis). We also determined candidates a posteriori by identifying genes that were overexpressed in the kidney. The kangaroo rat sequences revealed nine different a priori candidate genes predicted from our Mus and Rattus searches, as well as 32 a posteriori candidate genes that were overexpressed in kidney. Mutations in two of these genes, Slc12a1 and Slc12a3, cause human renal diseases that result in the inability to concentrate urine. These genes are likely key determinants of physiological water conservation in desert rodents. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. Mapping land water and energy balance relations through conditional sampling of remote sensing estimates of atmospheric forcing and surface states

    NASA Astrophysics Data System (ADS)

    Farhadi, Leila; Entekhabi, Dara; Salvucci, Guido

    2016-04-01

    In this study, we develop and apply a mapping estimation capability for key unknown parameters that link the surface water and energy balance equations. The method is applied to the Gourma region in West Africa. The accuracy of the estimation method at point scale was previously examined using flux tower data. In this study, the capability is scaled to be applicable with remotely sensed data products and hence to allow mapping. Parameters of the system are estimated through a process that links atmospheric forcing (precipitation and incident radiation), surface states, and unknown parameters. Based on conditional averaging of land surface temperature and moisture states, respectively, a single objective function is posed that measures moisture- and temperature-dependent errors solely in terms of observed forcings and surface states. This objective function is minimized with respect to the parameters to identify evapotranspiration and drainage models and estimate water and energy balance flux components. The uncertainty of the estimated parameters (and associated statistical confidence limits) is obtained through the inverse of the Hessian of the objective function, which is an approximation of the covariance matrix. This calibration-free method is applied to the mesoscale region of Gourma in West Africa using multiplatform remote sensing data. The retrievals are verified against tower-flux field site data and physiographic characteristics of the region. The focus is to find the functional form of the evaporative fraction dependence on soil moisture, a key closure function for surface and subsurface heat and moisture dynamics, using remote sensing data.
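
    The covariance-from-inverse-Hessian step can be sketched on a toy least-squares problem (the model form, data, and parameter values below are fabricated for illustration and are unrelated to the paper's evapotranspiration model):

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        sm = rng.uniform(0.05, 0.4, 200)  # synthetic soil-moisture samples
        ef = 0.8 * (1 - np.exp(-8.0 * sm)) + rng.normal(0, 0.03, sm.size)

        def objective(p):  # sum-of-squares misfit in two parameters
            a, b = p
            return np.sum((ef - a * (1 - np.exp(-b * sm))) ** 2)

        res = minimize(objective, x0=[0.5, 5.0], method="BFGS")
        sigma2 = res.fun / (sm.size - 2)       # residual variance estimate
        cov = 2.0 * sigma2 * res.hess_inv      # covariance ~ inverse Hessian
        print(res.x, np.sqrt(np.diag(cov)))    # estimates and standard errors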

  6. Towards a publicly available, map-based regional software tool to estimate unregulated daily streamflow at ungauged rivers

    USGS Publications Warehouse

    Archfield, Stacey A.; Steeves, Peter A.; Guthrie, John D.; Ries, Kernell G.

    2013-01-01

    Streamflow information is critical for addressing any number of hydrologic problems. Often, streamflow information is needed at locations that are ungauged and, therefore, have no observations on which to base water management decisions. Furthermore, there has been increasing need for daily streamflow time series to manage rivers for both human and ecological functions. To facilitate negotiation between human and ecological demands for water, this paper presents the first publicly available, map-based, regional software tool to estimate historical, unregulated, daily streamflow time series (streamflow not affected by human alteration such as dams or water withdrawals) at any user-selected ungauged river location. The map interface allows users to locate and click on a river location, which then links to a spreadsheet-based program that computes estimates of daily streamflow for the river location selected. For a demonstration region in the northeast United States, daily streamflow was, in general, shown to be reliably estimated by the software tool. Estimating the highest and lowest streamflows that occurred in the demonstration region over the period from 1960 through 2004 also was accomplished but with more difficulty and limitations. The software tool provides a general framework that can be applied to other regions for which daily streamflow estimates are needed.

  7. Crop Frequency Mapping for Land Use Intensity Estimation During Three Decades

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael; Tindall, Dan

    2016-08-01

    Crop extent and frequency maps are an important input to inform the debate around land value and competitive land uses, food security, and the sustainability of agricultural practices. Such spatial datasets are likely to support decisions on natural resource management, planning and policy. The complete Landsat Time Series (LTS) archive for 23 Landsat footprints in western Queensland from 1987 to 2015 was used in a multi-temporal mapping approach. Spatial, spectral and temporal information were combined in multiple crop-modelling steps, supported by on-ground training data sampled across space and time for the classes Crop and No-Crop. Temporal information within the summer and winter growing seasons for each year was summarized and combined with various vegetation indices and band ratios computed from a mid-season spectral-composite image. All available temporal information was spatially aggregated to the scale of image segments in the mid-season composite for each growing season and used to train a random forest classifier for a Crop and No-Crop classification. Validation revealed that the predictive accuracy varied by growing season and region within κ = 0.88 to 0.97, and the maps are thus suitable for mapping current and historic cropping activity. Crop frequency maps were produced for all regions at different time intervals. The crop frequency maps were validated separately with a historic crop information time series. Different land use intensities and conversions, e.g. from agriculture to pasture, are apparent, and potential drivers of these conversions are discussed.
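
    A minimal sketch of the per-segment Crop/No-Crop classification step with a random forest (feature files and hyperparameters are hypothetical placeholders, not the study's configuration):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        X = np.load("segment_features.npy")  # seasonal summaries per image segment
        y = np.load("segment_labels.npy")    # Crop / No-Crop training labels

        rf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
        print(cross_val_score(rf, X, y, cv=5).mean())  # quick accuracy check
        rf.fit(X, y)  # final model for one growing season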

  8. The Efficacy of Consensus Tree Methods for Summarizing Phylogenetic Relationships from a Posterior Sample of Trees Estimated from Morphological Data.

    PubMed

    O'Reilly, Joseph E; Donoghue, Philip C J

    2018-03-01

    Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarize the posterior distribution. Conversely, majority-rule consensus (MRC) trees represent a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, in attempts to summarize the results of Bayesian phylogenetic analyses of morphological data.

  9. The Efficacy of Consensus Tree Methods for Summarizing Phylogenetic Relationships from a Posterior Sample of Trees Estimated from Morphological Data

    PubMed Central

    O’Reilly, Joseph E; Donoghue, Philip C J

    2018-01-01

    Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarize the posterior distribution. Conversely, majority-rule consensus (MRC) trees represent a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, in attempts to summarize the results of Bayesian phylogenetic analyses of morphological data. PMID:29106675

  10. METRIC model for the estimation and mapping of evapotranspiration in a super intensive olive orchard in Southern Portugal

    NASA Astrophysics Data System (ADS)

    Pôças, Isabel; Nogueira, António; Paço, Teresa A.; Sousa, Adélia; Valente, Fernanda; Silvestre, José; Andrade, José A.; Santos, Francisco L.; Pereira, Luís S.; Allen, Richard G.

    2013-04-01

    Satellite-based surface energy balance models have been successfully applied to estimate and map evapotranspiration (ET). The METRIC™ model, Mapping EvapoTranspiration at high Resolution using Internalized Calibration, is one such model. METRIC has been widely used over an extensive range of vegetation types and applications, mostly focusing on annual crops. In the current study, the single-layer-blended METRIC model was applied to Landsat5 TM and Landsat7 ETM+ images to produce estimates of evapotranspiration (ET) in a super intensive olive orchard in Southern Portugal. In sparse woody canopies such as olive orchards, some adjustments in the METRIC application related to the estimation of vegetation temperature and of momentum roughness length and sensible heat flux (H) for tall vegetation must be considered. To minimize biases in H estimates due to uncertainties in the definition of momentum roughness length, the Perrier function based on leaf area index and tree canopy architecture, associated with an adjusted estimation of crop height, was used to obtain momentum roughness length estimates. Additionally, to minimize the biases in surface temperature simulations due to soil and shadow effects, the computation of radiometric temperature considered a three-source condition, where Ts = fc·Tc + fshadow·Tshadow + fsunlit·Tsunlit. As such, the surface temperature (Ts), derived from the thermal band of the Landsat images, integrates the temperature of the canopy (Tc), the temperature of the shaded ground surface (Tshadow), and the temperature of the sunlit ground surface (Tsunlit), according to the relative fractions of vegetation (fc), shadow (fshadow), and sunlit (fsunlit) ground surface, respectively. As the sunlit canopies are the primary source of energy exchange, the effective temperature for the canopy was estimated by solving the three-source condition equation for Tc. To evaluate METRIC performance to estimate ET over the olive grove, several parameters derived from the

  11. Automated Land Cover Change Detection and Mapping from Hidden Parameter Estimates of Normalized Difference Vegetation Index (NDVI) Time-Series

    NASA Astrophysics Data System (ADS)

    Chakraborty, S.; Banerjee, A.; Gupta, S. K. S.; Christensen, P. R.; Papandreou-Suppappola, A.

    2017-12-01

    Multitemporal observations acquired frequently by satellites with short revisit periods, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), are an important source for modeling land cover. Due to the inherent seasonality of the land cover, harmonic modeling reveals hidden state parameters characteristic of it, which are used in classifying different land cover types and in detecting changes due to natural or anthropogenic factors. In this work, we use an eight-day MODIS composite to create a Normalized Difference Vegetation Index (NDVI) time-series of ten years. Improved hidden parameter estimates of the nonlinear harmonic NDVI model are obtained using the Particle Filter (PF), a sequential Monte Carlo estimator. The nonlinear estimation based on the PF is shown to improve parameter estimation for different land cover types compared to existing techniques based on the Extended Kalman Filter (EKF), which linearizes the harmonic model. As these parameters are representative of a given land cover, their applicability to near real-time detection of land cover change is also studied by formulating a metric that captures parameter deviation due to change. The detection methodology is evaluated by treating change as a rare class problem. This approach is shown to detect change with minimum delay. Additionally, the degree of change within the change perimeter is non-uniform. By clustering the deviation in parameters due to change, this spatial variation in change severity is effectively mapped and validated with high spatial resolution change maps of the given regions.
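
    A minimal bootstrap particle filter for a single-harmonic NDVI model gives the flavor of the approach (a simplified stand-in, not the paper's estimator; the noise levels, prior, and single harmonic are assumptions):

        import numpy as np

        def pf_harmonic_ndvi(t, ndvi, n=2000, q=0.002, r=0.02, period=46.0):
            # State [m, a, b] follows a random walk; observation model:
            # y = m + a*cos(2*pi*t/P) + b*sin(2*pi*t/P) + noise,
            # with P = 46 eight-day composites per year.
            rng = np.random.default_rng(0)
            parts = rng.normal([0.5, 0.1, 0.1], 0.2, size=(n, 3))
            track = np.empty((len(ndvi), 3))
            for k, (tk, yk) in enumerate(zip(t, ndvi)):
                parts += rng.normal(0.0, q, parts.shape)   # propagate particles
                pred = (parts[:, 0]
                        + parts[:, 1] * np.cos(2 * np.pi * tk / period)
                        + parts[:, 2] * np.sin(2 * np.pi * tk / period))
                logw = -0.5 * ((yk - pred) / r) ** 2       # Gaussian log-likelihood
                w = np.exp(logw - logw.max())
                w /= w.sum()
                track[k] = w @ parts                       # posterior-mean parameters
                parts = parts[rng.choice(n, size=n, p=w)]  # resample
            return track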

  12. A feasibility study on estimation of tissue mixture contributions in 3D arterial spin labeling sequence

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Pu, Huangsheng; Zhang, Xi; Li, Baojuan; Liang, Zhengrong; Lu, Hongbing

    2017-03-01

    Arterial spin labeling (ASL) provides a noninvasive measurement of cerebral blood flow (CBF). Due to its relatively low spatial resolution, the accuracy of CBF measurement is affected by the partial volume (PV) effect. To obtain accurate CBF estimates, the contribution of each tissue type in the mixture is desirable. In current ASL studies, this is generally obtained by registering the ASL image to a structural image. This approach yields the probability of each tissue type inside each voxel, but it also introduces errors, including registration error and acquisition errors in both the ASL and structural images. Therefore, estimating the tissue mixture directly from ASL data is greatly needed. Under the assumption that the ASL signal follows a Gaussian distribution and each tissue type is independent, a maximum a posteriori expectation-maximization (MAP-EM) approach was formulated to estimate the contribution of each tissue type to the observed perfusion signal at each voxel. Considering the sensitivity of MAP-EM to initialization, an approximately accurate initialization was obtained using a 3D fuzzy c-means method. Our preliminary results demonstrated that the grey matter (GM) and white matter (WM) patterns across the perfusion image can be sufficiently visualized by the voxel-wise tissue mixtures, which may be promising for the diagnosis of various brain diseases.
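
    The E/M alternation at the core of such an estimator can be sketched with a plain maximum-likelihood EM for a Gaussian mixture over perfusion values, whose per-voxel responsibilities act as soft tissue fractions; the MAP prior and fuzzy c-means initialization used in the paper are omitted here:

        import numpy as np

        def em_tissue_mixture(vals, k=3, n_iter=50):
            mu = np.quantile(vals, np.linspace(0.2, 0.8, k))  # crude init
            sig = np.full(k, vals.std())
            pi = np.full(k, 1.0 / k)
            for _ in range(n_iter):
                # E-step: posterior probability of each class per voxel
                d = vals[:, None] - mu[None, :]
                resp = pi * np.exp(-0.5 * (d / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
                resp /= resp.sum(axis=1, keepdims=True)
                # M-step: re-estimate weights, means, and spreads
                nk = resp.sum(axis=0)
                pi = nk / vals.size
                mu = (resp * vals[:, None]).sum(axis=0) / nk
                sig = np.sqrt((resp * (vals[:, None] - mu) ** 2).sum(axis=0) / nk)
            return resp, mu, sig  # resp: voxel-wise soft tissue fractions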

  13. Estimating missing hourly climatic data using artificial neural network for energy balance based ET mapping applications

    USDA-ARS?s Scientific Manuscript database

    Remote sensing based evapotranspiration (ET) mapping is an important improvement for water resources management. Hourly climatic data and reference ET are crucial for implementing remote sensing based ET models such as METRIC and SEBAL. In Turkey, data on all climatic variables may not be available ...

  14. A COMPARISON OF MAPPED ESTIMATES OF LONG-TERM RUNOFF IN THE NORTHEAST UNITED STATES

    EPA Science Inventory

    We evaluated the relative accuracy of four methods of producing maps of long-term runoff for part of the northeast United States: MAN, a manual procedure that incorporates expert opinion in contour placement; RPRIS, an automated procedure based on water balance considerations, Pn...

  15. Illicit Drug Users in the Tanzanian Hinterland: Population Size Estimation Through Key Informant-Driven Hot Spot Mapping.

    PubMed

    Ndayongeje, Joel; Msami, Amani; Laurent, Yovin Ivo; Mwankemwa, Syangu; Makumbuli, Moza; Ngonyani, Alois M; Tiberio, Jenny; Welty, Susie; Said, Christen; Morris, Meghan D; McFarland, Willi

    2018-02-12

    We mapped hot spots and estimated the numbers of people who use drugs (PWUD) and who inject drugs (PWID) in 12 regions of Tanzania. Primary (ie, current and past PWUD) and secondary (eg, police, service providers) key informants identified potential hot spots, which we visited to verify and count the number of PWUD and PWID present. Adjustments to counts and extrapolation to regional estimates were done by local experts through iterative rounds of discussion. Drug use, specifically cocaine and heroin, occurred in all regions. Tanga had the largest numbers of PWUD and PWID (5190 and 540, respectively), followed by Mwanza (3300 and 300, respectively). Findings highlight the need to strengthen awareness of drug use and develop prevention and harm reduction programs with broader reach in Tanzania. This exercise provides a foundation for understanding the extent and locations of drug use, a baseline for future size estimations, and a sampling frame for future research.

  16. New features added to EVALIDator: ratio estimation and county choropleth maps

    Treesearch

    Patrick D. Miles; Mark H. Hansen

    2012-01-01

    The EVALIDator Web application, developed in 2007, provides estimates and sampling errors for many user selected forest statistics from the Forest Inventory and Analysis Database (FIADB). Among the statistics estimated are forest area, number of trees, biomass, volume, growth, removals, and mortality. A new release of EVALIDator, developed in 2012, has an option to...

  17. Considerations in Forest Growth Estimation Between Two Measurements of Mapped Forest Inventory Plots

    Treesearch

    Michael T. Thompson

    2006-01-01

    Several aspects of the enhanced Forest Inventory and Analysis (FIA) program's national plot design complicate change estimation. The design incorporates up to three separate plot sizes (microplot, subplot, and macroplot) to sample trees of different sizes. Because multiple plot sizes are involved, change estimators designed for polyareal plot sampling, such as those...

  18. Using a remote sensing-based, percent tree cover map to enhance forest inventory estimation

    Treesearch

    Ronald E. McRoberts; Greg C. Liknes; Grant M. Domke

    2014-01-01

    For most national forest inventories, the variables of primary interest to users are forest area and growing stock volume. The precision of estimates of parameters related to these variables can be increased using remotely sensed auxiliary variables, often in combination with stratified estimators. However, acquisition and processing of large amounts of remotely sensed...

  19. Image informative maps for component-wise estimating parameters of signal-dependent noise

    NASA Astrophysics Data System (ADS)

    Uss, Mykhail L.; Vozel, Benoit; Lukin, Vladimir V.; Chehdi, Kacem

    2013-01-01

    We deal with the problem of blind parameter estimation of signal-dependent noise from mono-component image data. Multispectral or color images can be processed in a component-wise manner. The main results obtained rest on the assumption that the image texture and noise parameter estimation problems are interdependent. A two-dimensional fractal Brownian motion (fBm) model is used for locally describing image texture. A polynomial model is assumed for the purpose of describing the signal-dependent noise variance dependence on image intensity. Using the maximum likelihood approach, estimates of both fBm-model and noise parameters are obtained. It is demonstrated that Fisher information (FI) on noise parameters contained in an image is distributed nonuniformly over intensity coordinates (an image intensity range). It is also shown how to find the most informative intensities and the corresponding image areas for a given noisy image. The proposed estimator benefits from these detected areas to improve the estimation accuracy of signal-dependent noise parameters. Finally, the potential estimation accuracy (Cramér-Rao Lower Bound, or CRLB) of noise parameters is derived, providing confidence intervals of these estimates for a given image. In the experiment, the proposed and existing state-of-the-art noise variance estimators are compared for a large image database using CRLB-based statistical efficiency criteria.
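
    The polynomial noise model itself is easy to fit once per-patch statistics are available; a minimal sketch (the patch statistics are assumed to come from a separate texture/homogeneity analysis, such as the fBm-based one described above):

        import numpy as np

        def fit_noise_model(patch_means, patch_vars, deg=1):
            # Fit var(I) = c1*I + c0 (deg=1) to the per-patch scatter;
            # for Poisson-like noise one expects c0 ~ 0 and c1 > 0.
            return np.poly1d(np.polyfit(patch_means, patch_vars, deg))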

  20. Using satellite image-based maps and ground inventory data to estimate the area of the remaining Atlantic forest in the Brazilian state of Santa Catarina

    Treesearch

    Alexander C. Vibrans; Ronald E. McRoberts; Paolo Moser; Adilson L. Nicoletti

    2013-01-01

    Estimation of large area forest attributes, such as area of forest cover, from remote sensing-based maps is challenging because of image processing, logistical, and data acquisition constraints. In addition, techniques for estimating and compensating for misclassification and estimating uncertainty are often unfamiliar. Forest area for the state of Santa Catarina in...

  1. Decoding fMRI events in sensorimotor motor network using sparse paradigm free mapping and activation likelihood estimates.

    PubMed

    Tan, Francisca M; Caballero-Gaudes, César; Mullinger, Karen J; Cho, Siu-Yeung; Zhang, Yaping; Dryden, Ian L; Francis, Susan T; Gowland, Penny A

    2017-11-01

    Most functional MRI (fMRI) studies map task-driven brain activity using a block or event-related paradigm. Sparse paradigm free mapping (SPFM) can detect the onset and spatial distribution of BOLD events in the brain without prior timing information, but relating the detected events to brain function remains a challenge. In this study, we developed a decoding method for SPFM using a coordinate-based meta-analysis method of activation likelihood estimation (ALE). We defined meta-maps of statistically significant ALE values that correspond to types of events and calculated a summation overlap between the normalized meta-maps and SPFM maps. As a proof of concept, this framework was applied to relate SPFM-detected events in the sensorimotor network (SMN) to six motor functions (left/right fingers, left/right toes, swallowing, and eye blinks). We validated the framework using simultaneous electromyography (EMG)-fMRI experiments and motor tasks with short and long duration, and random interstimulus interval. The decoding scores were considerably lower for eye movements relative to other movement types tested. The average successful rate for short and long motor events were 77 ± 13% and 74 ± 16%, respectively, excluding eye movements. We found good agreement between the decoding results and EMG for most events and subjects, with a range in sensitivity between 55% and 100%, excluding eye movements. The proposed method was then used to classify the movement types of spontaneous single-trial events in the SMN during resting state, which produced an average successful rate of 22 ± 12%. Finally, this article discusses methodological implications and improvements to increase the decoding performance. Hum Brain Mapp 38:5778-5794, 2017. © 2017 Wiley Periodicals, Inc.

  2. Decoding fMRI events in Sensorimotor Motor Network using Sparse Paradigm Free Mapping and Activation Likelihood Estimates

    PubMed Central

    Tan, Francisca M.; Caballero-Gaudes, César; Mullinger, Karen J.; Cho, Siu-Yeung; Zhang, Yaping; Dryden, Ian L.; Francis, Susan T.; Gowland, Penny A.

    2017-01-01

    Most fMRI studies map task-driven brain activity using a block or event-related paradigm. Sparse Paradigm Free Mapping (SPFM) can detect the onset and spatial distribution of BOLD events in the brain without prior timing information; but relating the detected events to brain function remains a challenge. In this study, we developed a decoding method for SPFM using a coordinate-based meta-analysis method of Activation Likelihood Estimation (ALE). We defined meta-maps of statistically significant ALE values that correspond to types of events and calculated a summation overlap between the normalized meta-maps and SPFM maps. As a proof of concept, this framework was applied to relate SPFM-detected events in the Sensorimotor Network (SMN) to six motor functions (left/right fingers, left/right toes, swallowing and eye blinks). We validated the framework using simultaneous Electromyography-fMRI experiments and motor tasks with short and long duration, and random inter-stimulus interval. The decoding scores were considerably lower for eye movements relative to other movement types tested. The average successful rate for short and long motor events was 77 ± 13% and 74 ± 16% respectively, excluding eye movements. We found good agreement between the decoding results and EMG for most events and subjects, with a range in sensitivity between 55% and 100%, excluding eye movements. The proposed method was then used to classify the movement types of spontaneous single-trial events in the SMN during resting state, which produced an average successful rate of 22 ± 12%. Finally, this paper discusses methodological implications and improvements to increase the decoding performance. PMID:28815863

  3. Extending the Precipitation Map Offshore Using Daily and 3-Hourly Combined Precipitation Estimates

    NASA Technical Reports Server (NTRS)

    Huffman, George J.; Adler, Robert F.; Bolvin, David T.; Curtis, Scott; Einaudi, Franco (Technical Monitor)

    2001-01-01

    One of the difficulties in studying landfalling extratropical cyclones along the Pacific Coast is the lack of antecedent data over the ocean, including precipitation. Recent research on combining various satellite-based precipitation estimates opens the possibility of realistic precipitation estimates on a global 1 deg. x 1 deg. latitude-longitude grid at the daily or even 3-hourly interval. The goal in this work is to provide quantitative precipitation estimates that correctly represent the precipitation- related variables in the hydrological cycle: surface accumulations (fresh-water flux into oceans), frequency and duration statistics, net latent heating, etc.

  4. Estimate of the cosmological bispectrum from the MAXIMA-1 cosmic microwave background map.

    PubMed

    Santos, M G; Balbi, A; Borrill, J; Ferreira, P G; Hanany, S; Jaffe, A H; Lee, A T; Magueijo, J; Rabii, B; Richards, P L; Smoot, G F; Stompor, R; Winant, C D; Wu, J H P

    2002-06-17

    We use the measurement of the cosmic microwave background taken during the MAXIMA-1 flight to estimate the bispectrum of cosmological perturbations. We propose an estimator for the bispectrum that is appropriate in the flat sky approximation, apply it to the MAXIMA-1 data, and evaluate errors using bootstrap methods. We compare the estimated value with what would be expected if the sky signal were Gaussian and find that it is indeed consistent, with a χ² per degree of freedom of approximately unity. This measurement places constraints on models of inflation.

  5. Black-backed woodpecker habitat suitability mapping using conifer snag basal area estimated from airborne laser scanning

    NASA Astrophysics Data System (ADS)

    Casas Planes, Á.; Garcia, M.; Siegel, R.; Koltunov, A.; Ramirez, C.; Ustin, S.

    2015-12-01

    Occupancy and habitat suitability models for snag-dependent wildlife species are commonly defined as a function of snag basal area. Although critical for predicting or assessing habitat suitability, spatially distributed estimates of snag basal area are not generally available across landscapes at spatial scales relevant for conservation planning. This study evaluates the use of airborne laser scanning (ALS) to 1) identify individual conifer snags and map their basal area across a recently burned forest, and 2) map habitat suitability for a wildlife species known to be dependent on snag basal area, specifically the black-backed woodpecker (Picoides arcticus). This study focuses on the Rim Fire, a megafire that took place in 2013 in the Sierra Nevada Mountains of California, creating large patches of medium- and high-severity burned forest. We use forest inventory plots, single-tree ALS-derived metrics, and Gaussian process classification and regression to identify conifer snags and estimate their stem diameter and basal area. Then, we use the results to map habitat suitability for the black-backed woodpecker using thresholds for conifer basal area from a previously published habitat suitability model. Local maxima detection and watershed segmentation algorithms resulted in 75% detection of trees with stem diameter larger than 30 cm. Snags are identified with an overall accuracy of 91.8% and conifer snags are identified with an overall accuracy of 84.8%. Finally, Gaussian process regression reliably estimated stem diameter (R² = 0.8) using height and crown area. This work provides a fast and efficient methodology to characterize the extent of a burned forest at the tree level and a critical tool for early wildlife assessment in post-fire forest management and biodiversity conservation.
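
    A generic version of the detection step (local maxima on a smoothed canopy height model, then watershed segmentation) can be sketched with scipy/scikit-image; the smoothing, window size, and height threshold are illustrative assumptions, and the Gaussian-process regression of stem diameter is omitted:

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.segmentation import watershed

        def detect_trees(chm, min_height=2.0, window=5):
            # chm: 2-D canopy height model derived from ALS returns
            smooth = ndi.gaussian_filter(chm, sigma=1.0)
            peaks = (smooth == ndi.maximum_filter(smooth, size=window))
            peaks &= smooth > min_height           # drop low local maxima
            markers, n_trees = ndi.label(peaks)    # one marker per crown apex
            crowns = watershed(-smooth, markers, mask=chm > min_height)
            return crowns, n_trees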

  6. Estimated flood-inundation mapping for the Lower Blue River in Kansas City, Missouri, 2003-2005

    USGS Publications Warehouse

    Kelly, Brian P.; Rydlund, Jr., Paul H.

    2006-01-01

    The U.S. Geological Survey, in cooperation with the city of Kansas City, Missouri, began a study in 2003 of the lower Blue River in Kansas City, Missouri, from Gregory Boulevard to the mouth at the Missouri River to determine the estimated extent of flood inundation in the Blue River valley from flooding on the lower Blue River and from Missouri River backwater. Much of the lower Blue River flood plain is covered by industrial development. Rapid development in the upper end of the watershed has increased the volume of runoff, and thus the discharge of flood events for the Blue River. Modifications to the channel of the Blue River began in late 1983 in response to the need for flood control. By 2004, the channel had been widened and straightened from the mouth to immediately downstream from Blue Parkway to convey a 30-year flood. A two-dimensional depth-averaged flow model was used to simulate flooding within a 2-mile study reach of the Blue River between 63rd Street and Blue Parkway. Hydraulic simulation of the study reach provided information for the design and performance of proposed hydraulic structures and channel improvements and for the production of estimated flood-inundation maps and maps representing an areal distribution of water velocity, both magnitude and direction. Flood profiles of the Blue River were developed between Gregory Boulevard and 63rd Street from stage elevations calculated from high water marks from the flood of May 19, 2004; between 63rd Street and Blue Parkway from two-dimensional hydraulic modeling conducted for this study; and between Blue Parkway and the mouth from an existing one-dimensional hydraulic model by the U.S. Army Corps of Engineers. Twelve inundation maps were produced at 2-foot intervals for Blue Parkway stage elevations from 750 to 772 feet. Each map is associated with National Weather Service flood-peak forecast locations at 63rd Street, Blue Parkway, Stadium Drive, U.S. Highway 40, 12th Street, and the Missouri River

  7. Detection, mapping and estimation of rate of spread of grass fires from southern African ERTS-1 imagery

    NASA Technical Reports Server (NTRS)

    Wightman, J. M.

    1973-01-01

    Sequential band-6 imagery of the Zambesi Basin of southern Africa recorded substantial changes in burn patterns resulting from late dry season grass fires. One example from northern Botswana, indicates that a fire consumed approximately 70 square miles of grassland over a 24-hour period. Another example from western Zambia indicates increased fire activity over a 19-day period. Other examples clearly define the area of widespread grass fires in Angola, Botswana, Rhodesia and Zambia. From the fire patterns visible on the sequential portions of the imagery, and the time intervals involved, the rates of spread of the fires are estimated and compared with estimates derived from experimental burning plots in Zambia and Canada. It is concluded that sequential ERTS-1 imagery, of the quality studied, clearly provides the information needed to detect and map grass fires and to monitor their rates of spread in this region during the late dry season.
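
    A minimal sketch of the rate-of-spread arithmetic, using the rounded figures quoted above; the linear-spread conversion assumes a roughly circular burn and is purely illustrative, not the paper's method.

```python
import math

# Burned area between two sequential ERTS-1 scenes (Botswana example above).
area_burned_mi2 = 70.0   # square miles
interval_hr = 24.0       # hours between acquisitions

areal_rate = area_burned_mi2 / interval_hr
print(f"areal consumption rate ~ {areal_rate:.2f} mi^2/hr")

# Crude linear rate of spread, assuming the burn grew as a circle from a
# point during the interval.
radius_mi = math.sqrt(area_burned_mi2 / math.pi)
print(f"equivalent linear spread ~ {radius_mi / interval_hr:.2f} mi/hr")
```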

  8. Effects of shipping on marine acoustic habitats in Canadian Arctic estimated via probabilistic modeling and mapping.

    PubMed

    Aulanier, Florian; Simard, Yvan; Roy, Nathalie; Gervaise, Cédric; Bandet, Marion

    2017-12-15

    Canadian Arctic and Subarctic regions experience a rapid decrease of sea ice accompanied by increasing shipping traffic. The resulting time-space changes in shipping noise are studied for four key regions of this pristine environment, for 2013 traffic conditions and a hypothetical tenfold traffic increase. A probabilistic modeling and mapping framework, called Ramdam, which integrates the intrinsic variability and uncertainties of shipping noise and its effects on marine habitats, is developed and applied. A substantial transformation of soundscapes is observed in areas where shipping noise changes from an occasional, transient contributor at present to a dominant noise source. Examination of impacts on low-frequency mammals within ecologically and biologically significant areas reveals that shipping noise has the potential to trigger behavioral responses and masking in the future, although no risk of temporary or permanent hearing threshold shifts is noted. Such probabilistic modeling and mapping is strategic in marine spatial planning for these emerging noise issues.

  9. Data-based estimates of the ocean carbon sink variability - first results of the Surface Ocean pCO2 Mapping intercomparison (SOCOM)

    NASA Astrophysics Data System (ADS)

    Rödenbeck, C.; Bakker, D. C. E.; Gruber, N.; Iida, Y.; Jacobson, A. R.; Jones, S.; Landschützer, P.; Metzl, N.; Nakaoka, S.; Olsen, A.; Park, G.-H.; Peylin, P.; Rodgers, K. B.; Sasse, T. P.; Schuster, U.; Shutler, J. D.; Valsala, V.; Wanninkhof, R.; Zeng, J.

    2015-08-01

    Using measurements of the surface-ocean CO2 partial pressure (pCO2) and 14 different pCO2 mapping methods recently collated by the Surface Ocean pCO2 Mapping intercomparison (SOCOM) initiative, variations in regional and global sea-air CO2 fluxes have been investigated. Though the available mapping methods use widely different approaches, we find relatively consistent estimates of regional pCO2 seasonality, in line with previous estimates. In terms of interannual variability (IAV), all mapping methods estimate the largest variations to occur in the eastern equatorial Pacific. Despite considerable spread in the detailed variations, mapping methods with a closer match to the data also tend to be more consistent with each other. Encouragingly, this includes mapping methods belonging to complementary types, taking variability either directly from the pCO2 data or indirectly from driver data via regression. From a weighted ensemble average, we find an IAV amplitude of the global sea-air CO2 flux of 0.31 PgC yr⁻¹ (standard deviation over 1992-2009), which is larger than simulated by biogeochemical process models. From a decadal perspective, the global CO2 uptake is estimated to have gradually increased since about 2000, with little decadal change prior to 2000. The weighted mean total ocean CO2 sink estimated by the SOCOM ensemble is consistent within uncertainties with estimates from ocean-interior carbon data or atmospheric oxygen trends.
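
    To make the ensemble statistics concrete, here is a minimal numpy sketch of a weighted ensemble-average flux series and an IAV amplitude taken as its standard deviation over the years; the weights and flux series are invented placeholders, not SOCOM values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annual global sea-air CO2 fluxes (PgC/yr) from 14 mapping
# methods over 1992-2009: shape (methods, years).
fluxes = -1.5 + 0.3 * rng.standard_normal((14, 18))

# Hypothetical skill weights, e.g., derived from each method's data misfit.
weights = rng.uniform(0.5, 1.5, size=14)
weights /= weights.sum()

ensemble = np.average(fluxes, axis=0, weights=weights)
print(f"IAV amplitude ~ {ensemble.std(ddof=1):.2f} PgC/yr")
```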

  10. Performance of near real-time Global Satellite Mapping of Precipitation estimates during heavy precipitation events over northern China

    NASA Astrophysics Data System (ADS)

    Chen, Sheng; Hu, Junjun; Zhang, Asi; Min, Chao; Huang, Chaoying; Liang, Zhenqing

    2018-02-01

    This study assesses the performance of near real-time Global Satellite Mapping of Precipitation (GSMaP_NRT) estimates over northern China, including Beijing and its adjacent regions, during three heavy precipitation events from 21 July 2012 to 2 August 2012. Two additional near real-time satellite-based products, the Climate Prediction Center morphing method (CMORPH) and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS), were used for parallel comparison with GSMaP_NRT. Gridded gauge observations were used as the reference for a performance evaluation with respect to spatiotemporal variability, probability distribution of precipitation rate and volume, and contingency scores. Overall, GSMaP_NRT generally captures the spatiotemporal variability of precipitation and shows promising potential in near real-time mapping applications. GSMaP_NRT misplaced storm centers in all three storms. GSMaP_NRT demonstrated higher skill scores in the first high-impact storm event on 21 July 2012. GSMaP_NRT passive microwave-only precipitation can generally capture the pattern of heavy precipitation distributions over flat areas but failed to capture the intensive rain belt over complicated mountainous terrain. The results of this study can be useful to both algorithm developers and scientific end users, providing hydrologists who use satellite precipitation products with a better understanding of their strengths and weaknesses.
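
    For reference, a minimal sketch of the contingency scores used in this kind of evaluation: probability of detection (POD), false alarm ratio (FAR), and critical success index (CSI) from paired satellite and gauge series. The 0.1 mm/day rain threshold and the toy arrays are assumptions.

```python
import numpy as np

def contingency_scores(sat, gauge, threshold=0.1):
    """POD, FAR, and CSI for rain/no-rain detection at a mm/day threshold."""
    sat_rain, gauge_rain = sat >= threshold, gauge >= threshold
    hits = np.sum(sat_rain & gauge_rain)
    misses = np.sum(~sat_rain & gauge_rain)
    false_alarms = np.sum(sat_rain & ~gauge_rain)
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return pod, far, csi

sat = np.array([0.0, 2.1, 5.4, 0.0, 12.3, 0.4])    # satellite, mm/day
gauge = np.array([0.2, 1.8, 0.0, 0.0, 15.0, 0.6])  # gauges, mm/day
print(contingency_scores(sat, gauge))
```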

  11. Enhancing the applicability of Kohonen Self-Organizing Map (KSOM) estimator for gap-filling in hydrometeorological timeseries data

    NASA Astrophysics Data System (ADS)

    Nanda, Trushnamayee; Sahoo, Bhabagrahi; Chatterjee, Chandranath

    2017-06-01

    The Kohonen Self-Organizing Map (KSOM) estimator is prescribed as a useful tool for infilling missing data in hydrometeorology. However, in this study, when the performance of the KSOM estimator is tested for gap-filling in streamflow, rainfall, evapotranspiration (ET), and temperature timeseries data collected from 30 gauging stations in India under missing data situations, it is found that the KSOM modeling performance could be further improved. Consequently, this study addresses two research questions: does the length of record of the historical data and its variability have any effect on the performance of the KSOM, and does inclusion of the temporal distribution of the timeseries data and the nature of outliers in the KSOM framework enhance its performance further? Subsequently, it is established that the KSOM framework should include the coefficient of variation of the datasets in determining the number of map units, rather than treating that number as a function of the sample data size alone. This could help to upscale and generalize the applicability of KSOM for varied hydrometeorological data types.
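
    A common rule of thumb sizes a SOM at about 5*sqrt(N) map units for N samples. The sketch below illustrates the kind of adjustment the authors argue for by scaling that count with the coefficient of variation; the (1 + CV) scaling is an invented stand-in for the paper's CV-based rule, not its exact formula.

```python
import numpy as np

def som_map_units(data, base_factor=5.0):
    """Heuristic SOM size, inflated for high-variability data.

    Classic rule of thumb: ~5 * sqrt(N) units. Here the count is scaled
    by (1 + CV), an illustrative stand-in for CV-aware sizing.
    """
    cv = np.std(data, ddof=1) / np.mean(data)
    return int(round(base_factor * np.sqrt(len(data)) * (1.0 + cv)))

rainfall = np.random.default_rng(1).gamma(2.0, 5.0, size=900)
print(som_map_units(rainfall))  # more units than plain 5*sqrt(N) when CV > 0
```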

  12. Estimation and mapping of wet and dry mercury deposition across northeastern North America

    USGS Publications Warehouse

    Miller, E.K.; Vanarsdale, A.; Keeler, G.J.; Chalmers, A.; Poissant, L.; Kamman, N.C.; Brulotte, R.

    2005-01-01

    Whereas many ecosystem characteristics and processes influence mercury accumulation in higher trophic-level organisms, the mercury flux from the atmosphere to a lake and its watershed is a likely factor in potential risk to biota. Atmospheric deposition clearly affects mercury accumulation in soils and lake sediments. Thus, knowledge of spatial patterns in atmospheric deposition may provide information for assessing the relative risk for ecosystems to exhibit excessive biotic mercury contamination. Atmospheric mercury concentrations in aerosol, vapor, and liquid phases from four observation networks were used to estimate regional surface concentration fields. Statistical models were developed to relate sparsely measured mercury vapor and aerosol concentrations to the more commonly measured mercury concentration in precipitation. High spatial resolution deposition velocities for different phases (precipitation, cloud droplets, aerosols, and reactive gaseous mercury (RGM)) were computed using inferential models. An empirical model was developed to estimate gaseous elemental mercury (GEM) deposition. Spatial patterns of estimated total mercury deposition were complex. Generally, deposition was higher in the southwest and lower in the northeast. Elevation, land cover, and proximity to urban areas modified the general pattern. The estimated net GEM and RGM fluxes were each greater than or equal to wet deposition in many areas. Mercury assimilation by plant foliage may provide a substantial input of methyl-mercury (MeHg) to ecosystems.
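
    In its simplest form, the inferential-model step multiplies a surface-air concentration by a phase-specific deposition velocity; a toy sketch with invented values follows (real inferential models derive the deposition velocity from micrometeorology and land cover).

```python
# Toy dry-deposition estimate: flux = concentration x deposition velocity.
# All numbers are invented for illustration.
rgm_conc_ng_m3 = 0.012   # reactive gaseous mercury concentration, ng/m^3
rgm_vd_cm_s = 0.5        # deposition velocity, cm/s

seconds_per_year = 365.25 * 24 * 3600
flux_ng_m2_yr = rgm_conc_ng_m3 * (rgm_vd_cm_s / 100.0) * seconds_per_year
print(f"RGM dry deposition ~ {flux_ng_m2_yr / 1000:.1f} ug/m^2/yr")
```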

  13. Mapping the Origins of Time: Scalar Errors in Infant Time Estimation

    ERIC Educational Resources Information Center

    Addyman, Caspar; Rocha, Sinead; Mareschal, Denis

    2014-01-01

    Time is central to any understanding of the world. In adults, estimation errors grow linearly with the length of the interval, much faster than would be expected of a clock-like mechanism. Here we present the first direct demonstration that this is also true in human infants. Using an eye-tracking paradigm, we examined 4-, 6-, 10-, and…

  14. The Boston Methane Project: Mapping Surface Emissions to Inform Atmospheric Estimation of Urban Methane Flux

    NASA Astrophysics Data System (ADS)

    Phillips, N.; Crosson, E.; Down, A.; Hutyra, L.; Jackson, R. B.; McKain, K.; Rella, C.; Raciti, S. M.; Wofsy, S. C.

    2012-12-01

    Lost and unaccounted-for natural gas can amount to over 6% of Massachusetts' total annual greenhouse gas inventory (expressed as equivalent CO2 tonnage). An unknown portion of this loss is due to natural gas leaks in pipeline distribution systems. The objective of the Boston Methane Project is to estimate the overall leak rate from natural gas systems in metropolitan Boston, and to compare this flux with fluxes from the other primary methane emissions sources. Companion talks at this meeting describe the atmospheric measurement and modeling framework, and chemical and isotopic tracers that can partition total atmospheric methane flux into natural gas and non-natural gas components. This talk focuses on estimation of surface emissions that inform the atmospheric modeling and partitioning. These surface emissions include over 3,300 pipeline natural gas leaks in Boston. For the state of Massachusetts as a whole, the amount of natural gas reported as lost and unaccounted for by utility companies was greater than estimated landfill emissions by an order of magnitude. Moreover, these landfill emissions were overwhelmingly located outside of metro Boston, while gas leaks are concentrated in exactly the opposite pattern, increasing from suburban Boston toward the urban core. Work is in progress to estimate the spatial distribution of methane emissions from wetlands and sewer systems. We conclude with a description of how these spatial data sets will be combined and represented for application in atmospheric modeling.

  15. Using the Landsat Archive to Estimate and Map Changes in Agriculture, Forests, and other Land Cover Types in East Africa

    NASA Astrophysics Data System (ADS)

    Healey, S. P.; Oduor, P.; Cohen, W. B.; Yang, Z.; Ouko, E.; Gorelick, N.; Wilson, S.

    2017-12-01

    Every country's land is distributed among different cover types, such as: agriculture; forests; rangeland; urban areas; and barren lands. Changes in the distribution of these classes can inform us about many things, including: population pressure; effectiveness of preservation efforts; desertification; and stability of the food supply. Good assessment of these changes can also support wise planning, use, and preservation of natural resources. We are using the Landsat archive in two ways to provide needed information about land cover change since the year 2000 in seven East African countries (Ethiopia, Kenya, Malawi, Rwanda, Tanzania, Uganda, and Zambia). First, we are working with local experts to interpret historical land cover change from historical imagery at a probabilistic sample of 2000 locations in each country. This will provide a statistical estimate of land cover change since 2000. Second, we will use the same data to calibrate and validate annual land cover maps for each country. Because spatial context can be critical to development planning through the identification of hot spots, these maps will be a useful complement to the statistical, country-level estimates of change. The Landsat platform is an ideal tool for mapping land cover change because it combines a mix of appropriate spatial and spectral resolution with unparalleled length of service (Landsat 1 launched in 1972). Pilot tests have shown that time series analysis accessing the entire Landsat archive (i.e., many images per year) improves classification accuracy and stability. It is anticipated that this project will meet the civil needs of both governmental and non-governmental users across a range of disciplines.

  16. Developing Methods for Fraction Cover Estimation Toward Global Mapping of Ecosystem Composition

    NASA Astrophysics Data System (ADS)

    Roberts, D. A.; Thompson, D. R.; Dennison, P. E.; Green, R. O.; Kokaly, R. F.; Pavlick, R.; Schimel, D.; Stavros, E. N.

    2016-12-01

    Terrestrial vegetation seldom covers an entire pixel due to spatial mixing at many scales. Estimating the fractional contributions of photosynthetic green vegetation (GV), non-photosynthetic vegetation (NPV), and substrate (soil, rock, etc.) to mixed spectra can significantly improve quantitative remote measurement of terrestrial ecosystems. Traditional methods for estimating fractional vegetation cover rely on vegetation indices that are sensitive to variable substrate brightness, NPV and sun-sensor geometry. Spectral mixture analysis (SMA) is an alternate framework that provides estimates of fractional cover. However, simple SMA, in which the same set of endmembers is used for an entire image, fails to account for natural spectral variability within a cover class. Multiple Endmember Spectral Mixture Analysis (MESMA) is a variant of SMA that allows the number and types of pure spectra to vary on a per-pixel basis, thereby accounting for endmember variability and generating more accurate cover estimates, but at a higher computational cost. Routine generation and delivery of GV, NPV, and substrate (S) fractions using MESMA is currently in development for large, diverse datasets acquired by the Airborne Visible Infrared Imaging Spectrometer (AVIRIS). We present initial results, including our methodology for ensuring consistency and generalizability of fractional cover estimates across a wide range of regions, seasons, and biomes. We also assess uncertainty and provide a strategy for validation. GV, NPV, and S fractions are an important precursor for deriving consistent measurements of ecosystem parameters such as plant stress and mortality, functional trait assessment, disturbance susceptibility and recovery, and biomass and carbon stock assessment. We acknowledge support of the US Government, NASA, the Earth Science Division and Terrestrial Ecology program.
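
    To make the unmixing idea concrete, here is a minimal simple-SMA sketch that recovers GV/NPV/substrate fractions of one mixed spectrum by non-negative least squares with a softly enforced sum-to-one constraint. The six-band endmember spectra are invented placeholders, and this is plain SMA rather than MESMA's per-pixel endmember selection.

```python
import numpy as np
from scipy.optimize import nnls

# Invented 6-band endmember spectra (columns: GV, NPV, substrate).
E = np.array([
    [0.05, 0.22, 0.31],
    [0.08, 0.25, 0.33],
    [0.04, 0.30, 0.35],
    [0.45, 0.35, 0.38],
    [0.30, 0.45, 0.42],
    [0.20, 0.50, 0.45],
])
pixel = 0.5 * E[:, 0] + 0.3 * E[:, 1] + 0.2 * E[:, 2]  # synthetic mixture

# Append a heavily weighted row of ones so the fractions sum to ~1.
w = 100.0
A = np.vstack([E, w * np.ones((1, 3))])
b = np.concatenate([pixel, [w]])
fractions, _ = nnls(A, b)
print(fractions.round(3))  # ~ [0.5, 0.3, 0.2]
```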

  17. On Estimation of Contamination from Hydrogen Cyanide in Carbon Monoxide Line-intensity Mapping

    SciTech Connect

    Chung, Dongwoo T.; Li, Tony Y.; Viero, Marco P.

    Line-intensity mapping surveys probe large-scale structure through spatial variations in molecular line emission from a population of unresolved cosmological sources. Future such surveys of carbon monoxide line emission, specifically the CO(1-0) line, face potential contamination from a disjointed population of sources emitting in a hydrogen cyanide emission line, HCN(1-0). This paper explores the potential range of the strength of HCN emission and its effect on the CO auto power spectrum, using simulations with an empirical model of the CO/HCN–halo connection. We find that effects on the observed CO power spectrum depend on modeling assumptions but are very small for our fiducial model, which is based on current understanding of the galaxy–halo connection. Given the fiducial model, we expect the bias in overall CO detection significance due to HCN to be less than 1%.

  18. Nonprofit health care services marketing: persuasive messages based on multidimensional concept mapping and direct magnitude estimation.

    PubMed

    Hall, Michael L

    2009-01-01

    Persuasive messages for marketing healthcare services in general and coordinated care in particular are more important now for providers, hospitals, and third-party payers than ever before. The combination of measurement-based information and creativity may be among the most critical factors in reaching markets or expanding markets. The research presented here provides an approach to marketing coordinated care services which allows healthcare managers to plan persuasive messages given the market conditions they face. Using market respondents' thinking about product attributes combined with distance measurement between pairs of product attributes, a conceptual marketing map is presented and applied to advertising, message copy, and delivery. The data reported here are representative of the potential caregivers for which the messages are intended. Results are described with implications for application to coordinated care services. Theory building and marketing practice are discussed in the light of findings and methodology.

  19. [Precordial mapping and enzymatic analysis for estimating infarct size in man. A comparative study (author's transl)].

    PubMed

    Tommasini, G; Cobelli, F; Birolli, M; Oddone, A; Orlandi, M; Malusardi, R

    1976-01-01

    To investigate the relationships between electrocardiographic and enzymatic indexes of infarct size (I.S.), a group of 19 patients with anterior infarction was studied by serial precordial mapping and CPK curve analysis. The time course of ST and QRS changes was examined, and a sharp, spontaneous fall of ΣST was shown to occur within 10-12 hours after onset of symptoms, followed by a gradual rise. ΣST on admission appears to be a poor predictor of subsequent enzymatic I.S. (r=0.49). Good correlations with I.S. were observed for ΣST at 48-96 hours (r=0.82) and, especially, for the percent decrease of ΣR with respect to the initial values (ΔR%) (r=0.94).

  1. On Estimation of Contamination from Hydrogen Cyanide in Carbon Monoxide Line-intensity Mapping

    DOE PAGES

    Chung, Dongwoo T.; Li, Tony Y.; Viero, Marco P.; ...

    2017-08-31

    Here, line-intensity mapping surveys probe large-scale structure through spatial variations in molecular line emission from a population of unresolved cosmological sources. Future such surveys of carbon monoxide line emission, specifically the CO(1-0) line, face potential contamination from a disjointed population of sources emitting in a hydrogen cyanide emission line, HCN(1-0). This paper explores the potential range of the strength of HCN emission and its effect on the CO auto power spectrum, using simulations with an empirical model of the CO/HCN–halo connection. We find that effects on the observed CO power spectrum depend on modeling assumptions but are very small for our fiducial model, which is based on current understanding of the galaxy–halo connection. Given the fiducial model, we expect the bias in overall CO detection significance due to HCN to be less than 1%.

  2. On Estimation of Contamination from Hydrogen Cyanide in Carbon Monoxide Line-intensity Mapping

    NASA Astrophysics Data System (ADS)

    Chung, Dongwoo T.; Li, Tony Y.; Viero, Marco P.; Church, Sarah E.; Wechsler, Risa H.

    2017-09-01

    Line-intensity mapping surveys probe large-scale structure through spatial variations in molecular line emission from a population of unresolved cosmological sources. Future such surveys of carbon monoxide line emission, specifically the CO(1-0) line, face potential contamination from a disjointed population of sources emitting in a hydrogen cyanide emission line, HCN(1-0). This paper explores the potential range of the strength of HCN emission and its effect on the CO auto power spectrum, using simulations with an empirical model of the CO/HCN-halo connection. We find that effects on the observed CO power spectrum depend on modeling assumptions but are very small for our fiducial model, which is based on current understanding of the galaxy-halo connection. Given the fiducial model, we expect the bias in overall CO detection significance due to HCN to be less than 1%.

  3. Mapping anuran habitat suitability to estimate effects of grassland and wetland conservation programs

    USGS Publications Warehouse

    Mushet, David M.; Euliss, Ned H.; Stockwell, Craig A.

    2012-01-01

    The conversion of the Northern Great Plains of North America to a landscape favoring agricultural commodity production has negatively impacted wildlife habitats. To offset impacts, conservation programs have been implemented by the U.S. Department of Agriculture and other agencies to restore grassland and wetland habitat components. To evaluate effects of these efforts on anuran habitats, we used call survey data and environmental data in ecological niche factor analyses implemented through the program Biomapper to quantify habitat suitability for five anuran species within a 196 km² study area. Our amphibian call surveys identified Northern Leopard Frogs (Lithobates pipiens), Wood Frogs (Lithobates sylvaticus), Boreal Chorus Frogs (Pseudacris maculata), Great Plains Toads (Anaxyrus cognatus), and Woodhouse’s Toads (Anaxyrus woodhousii) occurring within the study area. Habitat suitability maps developed for each species revealed differing patterns of suitable habitat among species. The most significant findings of our mapping effort were 1) the influence of deep-water overwintering wetlands on suitable habitat for all species encountered except the Boreal Chorus Frog; 2) the lack of overlap between areas of core habitat for both the Northern Leopard Frog and Wood Frog compared to the core habitat for both toad species; and 3) the importance of conservation programs in providing grassland components of Northern Leopard Frog and Wood Frog habitat. The differences in habitats suitable for the five species we studied in the Northern Great Plains, i.e., their ecological niches, highlight the importance of utilizing an ecosystem based approach that considers the varying needs of multiple species in the development of amphibian conservation and management plans.

  4. A population-based tissue probability map-driven level set method for fully automated mammographic density estimations.

    PubMed

    Kim, Youngwoo; Hong, Byung Woo; Kim, Seung Ja; Kim, Jong Hyo

    2014-07-01

    A major challenge when distinguishing glandular tissues on mammograms, especially for area-based estimations, lies in determining a boundary in the hazy transition zone from adipose to glandular tissues. This stems from the nature of mammography, which is a projection of superimposed tissues consisting of different structures. In this paper, the authors present a novel segmentation scheme which incorporates the learned prior knowledge of experts into a level set framework for fully automated mammographic density estimations. The authors modeled the learned knowledge as a population-based tissue probability map (PTPM) designed to capture the classification behavior of experts' visual systems. The PTPM was constructed using an image database of a selected population consisting of 297 cases. Three mammogram experts extracted regions for dense and fatty tissues on digital mammograms; an independent subset was used to create a tissue probability map for each ROI based on its local statistics. This tissue class probability was taken as a prior in the Bayesian formulation and was incorporated into a level set framework as an additional term to control the evolution, following an energy surface designed to reflect the experts' knowledge as well as the regional statistics inside and outside of the evolving contour. A subset of 100 digital mammograms, which was not used in constructing the PTPM, was used to validate the performance. The energy was minimized when the initial contour reached the boundary of the dense and fatty tissues, as defined by experts. The correlation coefficient between mammographic density measurements made by experts and measurements by the proposed method was 0.93, while that with the conventional level set was 0.47. The proposed method showed a marked improvement over the conventional level set method in terms of accuracy and reliability. This result suggests that the proposed method successfully incorporated the learned knowledge of the experts.

  5. Filling the white space on maps of European runoff trends: estimates from a multi-model ensemble

    NASA Astrophysics Data System (ADS)

    Stahl, K.; Tallaksen, L. M.; Hannaford, J.; van Lanen, H. A. J.

    2012-02-01

    An overall appraisal of runoff changes at the European scale has been hindered by "white space" on maps of observed trends due to a paucity of readily available streamflow data. This study tested whether this white space can be filled using estimates of trends derived from model simulations of European runoff. The simulations stem from an ensemble of eight global hydrological models that were forced with the same climate input for the period 1963-2000. A validation of the derived trends for 293 grid cells across the European domain against observation-based trend estimates allowed an assessment of the uncertainty of the modelled trends. The models agreed on the predominant continental-scale patterns of trends, but disagreed on magnitudes and even on trend directions at the transition between regions with increasing and decreasing runoff trends, in complex terrain with high spatial variability, and in snow-dominated regimes. Model estimates appeared most reliable in reproducing trends in annual runoff, winter runoff, and 7-day high flow. Modelled trends in runoff during the summer months, spring (for snow-influenced regions) and autumn, and trends in summer low flow, were more variable and should be viewed with caution due to higher uncertainty. The ensemble mean overall provided the best representation of the trends in the observations. Maps of trends in annual runoff based on the ensemble mean demonstrated a pronounced continental dipole pattern of positive trends in western and northern Europe and negative trends in southern and parts of Eastern Europe, which has not previously been demonstrated and discussed in comparable detail.
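
    As a minimal illustration of deriving trends from such an ensemble, the sketch below fits a least-squares linear trend to each model's annual runoff series for one grid cell and reports the ensemble mean; all series are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1963, 2001)

# Synthetic annual runoff (mm/yr) for one cell from 8 models: a common
# weak trend plus model-specific noise.
runoff = 300 + 0.8 * (years - years[0]) + 15 * rng.standard_normal((8, years.size))

slopes = np.array([np.polyfit(years, series, 1)[0] for series in runoff])
print("per-model trends (mm/yr per yr):", slopes.round(2))
print("ensemble-mean trend:", slopes.mean().round(2))
```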

  6. Self-Organizing Map Neural Network-Based Nearest Neighbor Position Estimation Scheme for Continuous Crystal PET Detectors

    NASA Astrophysics Data System (ADS)

    Wang, Yonggang; Li, Deng; Lu, Xiaoming; Cheng, Xinyi; Wang, Liwei

    2014-10-01

    Continuous crystal-based positron emission tomography (PET) detectors could be an ideal alternative to current high-resolution pixelated PET detectors if the issues of high-performance γ interaction position estimation and its real-time implementation are solved. Unfortunately, existing position estimators are not very feasible for implementation on field-programmable gate arrays (FPGA). In this paper, we propose a new self-organizing map neural network-based nearest neighbor (SOM-NN) positioning scheme aiming not only at providing high performance, but also at being realistic for FPGA implementation. Benefiting from the SOM feature mapping mechanism, the large set of input reference events at each calibration position is approximated by a small set of prototypes, and the computation of the nearest neighbor search for unknown events is largely reduced. Using our experimental data, the scheme was evaluated, optimized, and compared with the smoothed k-NN method. The full-width-at-half-maximum (FWHM) spatial resolutions of the two methods, averaged over the center axis of the detector, were 1.87 ±0.17 mm and 1.92 ±0.09 mm, respectively. The test results show that the SOM-NN scheme has positioning performance equivalent to the smoothed k-NN method, but the amount of computation is only about one-tenth that of the smoothed k-NN method. In addition, the algorithm structure of the SOM-NN scheme is more feasible for implementation on FPGA. It has the potential to realize real-time position estimation on an FPGA with a high event-processing throughput.

  7. Empirical approach for estimating the ExB velocity from VTEC map

    NASA Astrophysics Data System (ADS)

    Ao, Xi

    The Earth's ionosphere is critical to the development of wireless communication. A Matlab program is designed to improve techniques for monitoring and forecasting the conditions of the Earth's ionosphere. The work in this thesis aims at modeling the dependency between the equatorial anomaly gap (EAP) in the Earth's ionosphere and a crucial driver of the ionosphere, the ExB velocity. In this thesis, we review the mathematics of the model in the eleventh generation of the International Geomagnetic Reference Field (IGRF) and an enhanced version of the Global Assimilative Ionospheric Model (GAIM), the GAIM++ model. We then use the IGRF model and a Vertical Total Electron Content (VTEC) map from the GAIM++ model to determine the EAP in the Earth's ionosphere. Then, by changing the main parameters, the 10.7 cm solar radio flux (F10.7) and the planetary geomagnetic activity index (AP), we compare the different values of the EAP and the ExB velocity of the Earth's ionosphere. Finally, we demonstrate that the program can be effective in determining the dependency between the EAP in the Earth's ionosphere and the ExB velocity of the Earth's ionosphere.

  8. Morphological estimators on Sunyaev-Zel'dovich maps of MUSIC clusters of galaxies

    NASA Astrophysics Data System (ADS)

    Cialone, Giammarco; De Petris, Marco; Sembolini, Federico; Yepes, Gustavo; Baldi, Anna Silvia; Rasia, Elena

    2018-06-01

    The determination of the morphology of galaxy clusters has important repercussions for cosmological and astrophysical studies of them. In this paper, we address the morphological characterization of synthetic maps of the Sunyaev-Zel'dovich (SZ) effect for a sample of 258 massive clusters (Mvir > 5 × 10¹⁴ h⁻¹ M⊙ at z = 0), extracted from the MUSIC hydrodynamical simulations. Specifically, we use five known morphological parameters (already used in X-ray studies) and two newly introduced ones, and we combine them in a single parameter. We analyse two sets of simulations obtained with different prescriptions of the gas physics (non-radiative, and with cooling, star formation and stellar feedback) at four redshifts between 0.43 and 0.82. For each parameter, we test its stability and efficiency in discriminating the true cluster dynamical state, measured by theoretical indicators. The combined parameter is more efficient at discriminating between relaxed and disturbed clusters. This parameter has a mild correlation with the hydrostatic mass (˜0.3) and a strong correlation (˜0.8) with the offset between the SZ centroid and the cluster centre of mass. The latter quantity is, thus, the most accessible and efficient indicator of the dynamical state for SZ studies.

  9. Computer code for estimating installed performance of aircraft gas turbine engines. Volume 3: Library of maps

    NASA Technical Reports Server (NTRS)

    Kowalski, E. J.

    1979-01-01

    A computerized method which utilizes the engine performance data and estimates the installed performance of aircraft gas turbine engines is presented. This installation includes: engine weight and dimensions, inlet and nozzle internal performance and drag, inlet and nacelle weight, and nacelle drag. The use of two data base files to represent the engine and the inlet/nozzle/aftbody performance characteristics is discussed. The existing library of performance characteristics for inlets and nozzle/aftbodies and an example of the 1000 series of engine data tables is presented.

  10. Joint estimation of high resolution images and depth maps from light field cameras

    NASA Astrophysics Data System (ADS)

    Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki

    2014-03-01

    Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is their limited resolution. This limitation comes from the structure, where a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution; the angular resolution and the positional resolution trade off against each other under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher resolution image from low resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible; this registration is equivalent to depth estimation. Therefore, we propose a method in which super-resolution and depth refinement are performed alternately. Most of our method is implemented by image processing operations. We present several experimental results using a Lytro camera, where we increased the resolution of a sub-aperture image by three times horizontally and vertically. Our method produces clearer images than the original sub-aperture images and than the case without depth refinement.

  11. A least squares approach to estimating the probability distribution of unobserved data in multiphoton microscopy

    NASA Astrophysics Data System (ADS)

    Salama, Paul

    2008-02-01

    Multi-photon microscopy has provided biologists with unprecedented opportunities for high resolution imaging deep into tissues. Unfortunately, deep tissue multi-photon microscopy images are in general noisy since they are acquired at low photon counts. To aid in the analysis and segmentation of such images, it is sometimes necessary to first enhance the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel comprising the image, which is achieved by finding a constrained least squares estimate of the unknown distribution. In arriving at the distribution it is assumed that the noise is Poisson distributed, that the true but unknown pixel values assume a probability mass function over a finite set of non-negative values, and, since the observed data also assume finite values because of low photon counts, that the sum of the probabilities of the observed pixel values (obtained from the histogram of the acquired pixel values) is less than one. Experimental results demonstrate that it is possible to closely estimate the unknown probability mass function under these assumptions.
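
    A minimal sketch of the constrained least-squares idea: treat the observed-count histogram as a Poisson mixture over a finite set of candidate true intensities and solve for nonnegative mixture weights. The candidate intensities and synthetic data are assumptions for illustration, and the simple renormalization at the end stands in for the paper's explicit sum constraint.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import poisson

rng = np.random.default_rng(3)

# Hidden ground truth: a pmf over a few candidate pixel intensities.
lambdas = np.array([1.0, 3.0, 6.0, 10.0])
true_pmf = np.array([0.4, 0.3, 0.2, 0.1])

# Simulate low-photon-count observations.
true_vals = rng.choice(lambdas, p=true_pmf, size=20000)
obs = rng.poisson(true_vals)

# Histogram of observed counts over a finite range.
kmax = 25
h = np.bincount(obs, minlength=kmax + 1)[: kmax + 1] / obs.size

# Column j holds Poisson(k; lambda_j) for k = 0..kmax.
k = np.arange(kmax + 1)
A = poisson.pmf(k[:, None], lambdas[None, :])

p_hat, _ = nnls(A, h)
print((p_hat / p_hat.sum()).round(3))  # ~ [0.4, 0.3, 0.2, 0.1]
```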

  12. Model-based decoding, information estimation, and change-point detection techniques for multineuron spike trains.

    PubMed

    Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam

    2011-01-01

    One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a Gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
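
    As a toy version of item (1), the sketch below computes a MAP stimulus estimate for a Poisson encoding model with a standard-normal prior by gradient ascent on the log-posterior; the filters, prior, and spikes are all invented, and a practical implementation would exploit the log-concavity with Newton's method rather than plain gradient steps.

```python
import numpy as np

rng = np.random.default_rng(7)
T, D = 50, 50  # time bins, stimulus dimensionality

# Toy encoding model: rate_t = exp(w_t . x); W holds one filter per bin.
W = 0.1 * rng.standard_normal((T, D))
x_true = rng.standard_normal(D)
spikes = rng.poisson(np.exp(W @ x_true))

def grad_log_posterior(x):
    # d/dx [ sum_t (y_t * w_t.x - exp(w_t.x)) - x.x/2 ]
    return W.T @ (spikes - np.exp(W @ x)) - x

x_map = np.zeros(D)
for _ in range(2000):          # plain gradient ascent
    x_map += 5e-3 * grad_log_posterior(x_map)

print("corr(MAP, true):", np.corrcoef(x_map, x_true)[0, 1].round(2))
```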

  13. Computation of probabilistic hazard maps and source parameter estimation for volcanic ash transport and dispersion

    SciTech Connect

    Madankan, R.; Pouget, S.; Singla, P., E-mail: psingla@buffalo.edu

    Volcanic ash advisory centers are charged with forecasting the movement of volcanic ash plumes, for aviation, health and safety preparation. Deterministic mathematical equations model the advection and dispersion of these plumes. However initial plume conditions – height, profile of particle location, volcanic vent parameters – are known only approximately at best, and other features of the governing system such as the windfield are stochastic. These uncertainties make forecasting plume motion difficult. As a result of these uncertainties, ash advisories based on a deterministic approach tend to be conservative, and many times over/under estimate the extent of a plume. This paper presents an end-to-end framework for generating a probabilistic approach to ash plume forecasting. This framework uses an ensemble of solutions, guided by the Conjugate Unscented Transform (CUT) method for evaluating expectation integrals. This ensemble is used to construct a polynomial chaos expansion that can be sampled cheaply, to provide a probabilistic model forecast. The CUT method is then combined with a minimum variance condition, to provide a full posterior pdf of the uncertain source parameters, based on observed satellite imagery. The April 2010 eruption of the Eyjafjallajökull volcano in Iceland is employed as a test example. The puff advection/dispersion model is used to hindcast the motion of the ash plume through time, concentrating on the period 14–16 April 2010. Variability in the height and particle loading of that eruption is introduced through a volcano column model called bent. Output uncertainty due to the assumed uncertain input parameter probability distributions, and a probabilistic spatial-temporal estimate of ash presence, are computed.

  14. Estimated Flood Discharges and Map of Flood-Inundated Areas for Omaha Creek, near Homer, Nebraska, 2005

    USGS Publications Warehouse

    Dietsch, Benjamin J.; Wilson, Richard C.; Strauch, Kellan R.

    2008-01-01

    Repeated flooding of Omaha Creek has caused damage in the Village of Homer. Long-term degradation and bridge scouring have changed substantially the channel characteristics of Omaha Creek. Flood-plain managers, planners, homeowners, and others rely on maps to identify areas at risk of being inundated. To identify areas at risk for inundation by a flood having a 1-percent annual probability, maps were created using topographic data and water-surface elevations resulting from hydrologic and hydraulic analyses. The hydrologic analysis for the Omaha Creek study area was performed using historical peak flows obtained from the U.S. Geological Survey streamflow gage (station number 06601000). Flood frequency and magnitude were estimated using the PEAKFQ Log-Pearson Type III analysis software. The U.S. Army Corps of Engineers' Hydrologic Engineering Center River Analysis System, version 3.1.3, software was used to simulate the water-surface elevation for flood events. The calibrated model was used to compute streamflow-gage stages and inundation elevations for the discharges corresponding to floods of selected probabilities. Results of the hydrologic and hydraulic analyses indicated that flood inundation elevations are substantially lower than from a previous study.
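
    For readers unfamiliar with the Log-Pearson Type III step, here is a minimal sketch that fits LP3 to annual peak flows by the method of moments on log10-transformed data and evaluates the 1-percent annual-probability (100-year) discharge. The peak-flow series is synthetic, not the Omaha Creek record, and PEAKFQ itself applies refinements (e.g., weighted skew) omitted here.

```python
import numpy as np
from scipy.stats import pearson3, skew

rng = np.random.default_rng(11)
peaks = rng.lognormal(mean=6.0, sigma=0.5, size=60)  # synthetic peaks, cfs

logq = np.log10(peaks)
g = skew(logq, bias=False)  # station skew of the log10 flows

# Pearson Type III on log10 flows; 0.99 non-exceedance = 100-year flood.
q100_log = pearson3.ppf(0.99, g, loc=logq.mean(), scale=logq.std(ddof=1))
print(f"1%-annual-chance peak ~ {10 ** q100_log:,.0f} cfs")
```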

  15. Airborne Laser Swath Mapping (ALSM) for Enhanced Riparian Water Use Estimates, Basin Sediment Budgets, and Terrain Characterization

    NASA Astrophysics Data System (ADS)

    Goodrich, D. C.; Farid, A.; Miller, S. N.; Semmens, D.; Williams, D. J.; Moran, S.; Unkrich, C. L.

    2003-12-01

    The uses of Airborne Laser Swath Mapping (ALSM) or LIDAR for earth science applications beyond topographic mapping are rapidly expanding. The USDA-ARS Southwest Watershed Research Center, in collaboration with the Geosensing Systems Engineering Group at the Univ. of Florida and a wide range of other investigators, designed and conducted a multi-purpose ALSM mission over southeastern Arizona. Research goals include: 1) differentiate young and old riparian cottonwood trees to improve riparian water use estimates; 2) assess the ability of LIDAR to define channel bank steepness and thus cross-channel trafficability; 3) assess the ability of LIDAR to define relatively small, isolated depressions where higher soil moisture may persist; and, 4) quantify changes in channel morphology and sediment movement between pre- and post-monsoon flights. The first flight mission was successfully completed in early June and a post-monsoon mission is scheduled for October. Research goals, mission planning, and initial results will be further developed in this presentation. Acknowledgements: The Upper San Pedro Partnership, DOD-Legacy Program, EPA-Landscape Ecology Branch, U.S. Army-TEC, and the Bureau of Land Management are gratefully acknowledged for supporting this effort. The second author is supported by SAHRA (Sustainability of semi-Arid Hydrology and Riparian Areas) under the STC Program of the National Science Foundation, Agreement No. EAR-9876800.

  16. Estimation of elasticity map of soft biological tissue mimicking phantom using laser speckle contrast analysis

    NASA Astrophysics Data System (ADS)

    Suheshkumar Singh, M.; Rajan, K.; Vasu, R. M.

    2011-05-01

    Scattering of coherent light from scattering particles causes a phase shift in the scattered light. The interference of unscattered and scattered light causes the formation of speckles. When the scattering particles vibrate under the influence of an ultrasound (US) pressure wave, the phase shift fluctuates, thereby causing fluctuation in speckle intensity. We use laser speckle contrast analysis (LSCA) to reconstruct a map of the elastic property (Young's modulus) of a soft-tissue-mimicking phantom. The displacement of the scatterers is inversely related to the Young's modulus of the medium. The elastic properties of soft biological tissues vary many fold with malignancy. The experimental results show that laser speckle contrast (LSC) is very sensitive to pathological changes in a soft tissue medium. The experiments are carried out on a phantom with two cylindrical inclusions 6 mm in diameter, separated by 8 mm. Three samples are made. One inclusion has a Young's modulus E of 40 kPa. The second inclusion has either a Young's modulus E of 20 kPa, a scattering coefficient of μs′ = 3.00 mm⁻¹, or an absorption coefficient of μa = 0.03 mm⁻¹. The optical absorption coefficient (μa), reduced scattering coefficient (μs′), and Young's modulus of the background are μa = 0.01 mm⁻¹, μs′ = 1.00 mm⁻¹, and 12 kPa, respectively. The experiments are carried out on all three phantoms. On a phantom with two inclusions of Young's modulus 20 and 40 kPa, the measured relative speckle image contrasts are 36.55% and 63.72%, respectively. Experiments are repeated on phantoms with inclusions of μa = 0.03 mm⁻¹, E = 40 kPa and μs′ = 3.00 mm⁻¹. The results show that it is possible to detect inclusions with contrasts in optical absorption, optical scattering, and Young's modulus. Studies of the variation of laser speckle contrast with ultrasound driving force for various values of μa, μs′, and Young's modulus of the tissue-mimicking medium are also carried out.
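
    Speckle contrast is conventionally defined as K = sigma/mean of intensity over a small sliding window; a minimal sketch of a local contrast map on a synthetic speckle image follows (the window size and the exponential-intensity image are arbitrary assumptions).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast_map(intensity, window=7):
    """Local speckle contrast K = std / mean over a sliding window."""
    mean = uniform_filter(intensity, window)
    mean_sq = uniform_filter(intensity ** 2, window)
    var = np.clip(mean_sq - mean ** 2, 0.0, None)
    return np.sqrt(var) / mean

# Fully developed static speckle has K ~ 1; motion blurring lowers K.
rng = np.random.default_rng(5)
speckle = rng.exponential(scale=1.0, size=(256, 256))
print(speckle_contrast_map(speckle).mean().round(2))  # close to 1.0
```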

  17. Mapping Multi-Cropped Land Use to Estimate Water Demand Using the California Pesticide Reporting Database

    NASA Astrophysics Data System (ADS)

    Henson, W.; Baillie, M. N.; Martin, D.

    2017-12-01

    Detailed and dynamic land-use data are among the biggest data deficiencies facing food and water security studies. Better land-use data yield improved integrated hydrologic models, which are needed to examine the feedback between land and water use, specifically for adequately representing changes and dynamics in rainfall-runoff, urban and agricultural water demands, and surface fluxes of water (e.g., evapotranspiration, runoff, and infiltration). Currently, land-use data are typically compiled from annual (e.g., CropScape) or multi-year composites, if mapped at all. While this approach provides information about interannual land-use practices, it does not capture the dynamic changes in highly developed agricultural lands prevalent in California agriculture, such as (1) dynamic land-use changes from high-frequency multi-crop rotations and (2) uncertainty in sub-annual crop distribution, planting times, and cropped areas. California has collected spatially distributed data for agricultural pesticide use since 1974 through the California Pesticide Information Portal (CalPIP). A method leveraging the CalPIP database has been developed to provide vital information about dynamic agricultural land use (e.g., crop distribution and planting times) and water demand issues in the Salinas Valley, California, along the central coast. This 7-billion-dollar-per-year agricultural area produces up to 50% of U.S. lettuce and broccoli. Therefore, effective and sustainable water resource development in the area must balance the needs of this essential industry, other beneficial uses, and the environment. This new tool provides a way to supply more dynamic crop data to hydrologic models. While the current application focuses on the Salinas Valley, the methods are extensible to all of California and other states with similar pesticide reporting. The improvements in representing variability in crop patterns and associated water demands increase our understanding of land-use change and

  18. Systematical estimation of GPM-based global satellite mapping of precipitation products over China

    NASA Astrophysics Data System (ADS)

    Zhao, Haigen; Yang, Bogang; Yang, Shengtian; Huang, Yingchun; Dong, Guotao; Bai, Juan; Wang, Zhiwei

    2018-03-01

    As the Global Precipitation Measurement (GPM) Core Observatory satellite continues its mission, new version 6 products for Global Satellite Mapping of Precipitation (GSMaP) have been released. However, few studies have systematically evaluated the GSMaP products over mainland China. This study quantitatively evaluated three GPM-based GSMaP version 6 precipitation products for China and eight subregions against the Chinese daily Precipitation Analysis Product (CPAP). The GSMaP products included near-real-time (GSMaP_NRT), microwave-infrared reanalyzed (GSMaP_MVK), and gauge-adjusted (GSMaP_Gau) data. Additionally, the gauge-adjusted Integrated Multi-Satellite Retrievals for Global Precipitation Measurement Mission (IMERG_Gau) was also assessed and compared with GSMaP_Gau. The analyses of the selected daily products were carried out at a spatial resolution of 1/4° for the period March 2014 to December 2015, in consideration of the resolution of CPAP and the consistency of the coverage periods of the satellite products. The results indicated that GSMaP_MVK and GSMaP_NRT performed comparably and underdetected light rainfall events (< 5 mm/day) in the northwest and northeast of China. All the statistical metrics of GSMaP_MVK were slightly improved compared with GSMaP_NRT in spring, autumn, and winter, whereas GSMaP_NRT demonstrated superior Pearson linear correlation coefficient (CC), fractional standard error (FSE), and root-mean-square error (RMSE) metrics during the summer. Compared with GSMaP_NRT and GSMaP_MVK, GSMaP_Gau possessed significantly improved metrics over mainland China and the eight subregions and performed better in terms of CC, RMSE, and FSE, but underestimated precipitation to a greater degree than IMERG_Gau. As a quantitative assessment of the GPM-era GSMaP products, these validation results will supply helpful references for both end users and algorithm developers. However, the study findings need to be confirmed over a longer future
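
    For reference, the sketch below computes the three skill metrics quoted here, correlation coefficient (CC), root-mean-square error (RMSE), and fractional standard error (FSE), for paired satellite and gauge series; defining FSE as RMSE normalized by the gauge mean is an assumption, as are the toy data.

```python
import numpy as np

def cc_rmse_fse(sat, ref):
    """CC, RMSE, and FSE (RMSE / reference mean) for paired samples."""
    cc = np.corrcoef(sat, ref)[0, 1]
    rmse = np.sqrt(np.mean((sat - ref) ** 2))
    return cc, rmse, rmse / ref.mean()

sat = np.array([1.2, 0.0, 4.8, 10.5, 2.2, 0.3])   # satellite, mm/day
ref = np.array([1.0, 0.2, 5.5, 12.0, 1.8, 0.0])   # gauge analysis, mm/day
cc, rmse, fse = cc_rmse_fse(sat, ref)
print(f"CC={cc:.2f} RMSE={rmse:.2f} FSE={fse:.2f}")
```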

  19. Evaluating the condition of a mangrove forest of the Mexican Pacific based on an estimated leaf area index mapping approach.

    PubMed

    Kovacs, J M; King, J M L; Flores de Santiago, F; Flores-Verdugo, F

    2009-10-01

    Given the alarming global rates of mangrove forest loss, it is important that resource managers have access to updated information regarding both the extent and condition of their mangrove forests. Mexican mangroves in particular have been identified as experiencing an exceptionally high annual rate of loss. However, conflicting studies, using remote sensing techniques, of the current state of many of these forests may be hindering efforts to conserve and manage what remains. Focusing on one such system, the Teacapán-Agua Brava-Las Haciendas estuarine-mangrove complex of the Mexican Pacific, an attempt was made to develop a rapid method of mapping the current condition of the mangroves based on estimated LAI. Specifically, using an AccuPAR LP-80 Ceptometer, 300 indirect in situ LAI measurements were taken at various sites within the black mangrove (Avicennia germinans) dominated forests of the northern section of this system. From this sample, 225 measurements were then used to develop linear regression models based on their relationship with corresponding values derived from QuickBird very high resolution optical satellite data. Specifically, regression analyses of the in situ LAI with both the normalized difference vegetation index (NDVI) and the simple ratio (SR) vegetation index revealed significant positive relationships [LAI versus NDVI (R² = 0.63); LAI versus SR (R² = 0.68)]. Moreover, using the remaining sample, further examination of standard errors and of an F test of the residual variances indicated little difference between the two models. Based on the NDVI model, a map of estimated mangrove LAI was then created. Excluding the dead mangrove areas (i.e. LAI = 0), which represented 40% of the total 30.4 km² of mangrove area identified in the scene, a mean estimated LAI value of 2.71 was recorded. By grouping the healthy fringe mangrove with the healthy riverine mangrove and by grouping the dwarf mangrove together with the poor condition
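
    A minimal sketch of the regression underlying such an LAI map: ordinary least squares of in situ LAI on NDVI, then application of the fitted line to new NDVI pixels. The calibration pairs are invented, not the study's 225-sample set.

```python
import numpy as np
from scipy.stats import linregress

# Invented calibration pairs: QuickBird NDVI vs. in situ ceptometer LAI.
ndvi = np.array([0.15, 0.32, 0.41, 0.55, 0.63, 0.71, 0.78])
lai = np.array([0.4, 1.1, 1.6, 2.4, 2.9, 3.5, 4.0])

fit = linregress(ndvi, lai)
print(f"LAI = {fit.slope:.2f} * NDVI + {fit.intercept:.2f} "
      f"(R^2 = {fit.rvalue ** 2:.2f})")

# Apply the fitted model to map LAI from new NDVI pixels.
print(fit.slope * np.array([0.2, 0.5, 0.75]) + fit.intercept)
```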

  20. Estimating population abundance and mapping distribution of wintering sea ducks in coastal waters of the mid-Atlantic

    USGS Publications Warehouse

    Koneff, M.D.; Royle, J. Andrew; Forsell, D.J.; Wortham, J.S.; Boomer, G.S.; Perry, M.C.

    2005-01-01

    Survey design for wintering scoters (Melanitta sp.) and other sea ducks that occur in offshore waters is challenging because these species have large ranges, are subject to distributional shifts among years and within a season, and can occur in aggregations. Interest in winter sea duck population abundance surveys has grown in recent years. This interest stems from concern over the population status of some sea ducks, limitations of extant breeding waterfowl survey programs in North America, logistical challenges and costs of conducting surveys in northern breeding regions, high winter area philopatry in some species and its potential conservation implications, and increasing concern over offshore development and other threats to sea duck wintering habitats. The efficiency and practicality of statistically rigorous monitoring strategies for mobile, aggregated wintering sea duck populations have not been sufficiently investigated. This study evaluated a 2-phase adaptive stratified strip-transect sampling plan to estimate the wintering population size of scoters, long-tailed ducks (Clangula hyemalis), and other sea ducks and to provide information on distribution. The sampling plan results in an optimal allocation of a fixed sampling effort among offshore strata in the U.S. mid-Atlantic coast region. Phase 1 transect selection probabilities were based on historic distribution and abundance data, while Phase 2 selection probabilities were based on observations made during Phase 1 flights. Distance sampling methods were used to estimate detection rates. Environmental variables thought to affect detection rates were recorded during the survey, and post-stratification and covariate modeling were investigated to reduce the effect of heterogeneity on detection estimation. We assessed cost-precision tradeoffs under a number of fixed-cost sampling scenarios using Monte Carlo simulation. We discuss advantages and limitations of this sampling design for estimating wintering sea duck

  1. Symmetric Epistasis Estimation (SEE) and its application to dissecting interaction map of Plasmodium falciparum.

    PubMed

    Huang, Yang; Siwo, Geoffrey; Wuchty, Stefan; Ferdig, Michael T; Przytycka, Teresa M

    2012-04-01

    It is being increasingly recognized that many important phenotypic traits, including various diseases, are governed by a combination of weak genetic effects and their interactions. While the detection of epistatic interactions that involve a non-additive effect of two loci on a quantitative trait is particularly challenging, this interaction type is fundamental for the understanding of genome organization and gene regulation. However, current methods that detect epistatic interactions typically rely on the existence of a strong primary effect, considerably limiting the sensitivity of the search. To fill this gap, we developed a new method, SEE (Symmetric Epistasis Estimation), allowing the genome-wide detection of epistatic interactions without the need for a strong primary effect. We applied our approach to progeny crosses of the human malaria parasite P. falciparum and S. cerevisiae. We found an abundance of epistatic interactions in the parasite and a much smaller number of such interactions in yeast. The genome of P. falciparum also harboured several epistatic interaction hotspots that putatively play a role in drug resistance mechanisms. The abundance of observed epistatic interactions might suggest a mechanism of compensation for the extremely limited repertoire of transcription factors. Interestingly, epistatic interaction hotspots were associated with elevated levels of linkage disequilibrium, an observation that suggests selection pressure acting on P. falciparum, potentially reflecting host-pathogen interactions or drug-induced selection.

  2. Total protein measurement in canine cerebrospinal fluid: agreement between a turbidimetric assay and 2 dye-binding methods and determination of reference intervals using an indirect a posteriori method.

    PubMed

    Riond, B; Steffen, F; Schmied, O; Hofmann-Lehmann, R; Lutz, H

    2014-03-01

    In veterinary clinical laboratories, qualitative tests for total protein measurement in canine cerebrospinal fluid (CSF) have been replaced by quantitative methods, which can be divided into dye-binding assays and turbidimetric methods. There is a lack of validation data and reference intervals (RIs) for these assays. The aim of the present study was to assess agreement between the turbidimetric benzethonium chloride method and 2 dye-binding methods (Pyrogallol Red-Molybdate method [PRM], Coomassie Brilliant Blue [CBB] technique) for measurement of total protein concentration in canine CSF. Furthermore, RIs were determined for all 3 methods using an indirect a posteriori method. For assay comparison, a total of 118 canine CSF specimens were analyzed. For RI calculation, clinical records of 401 canine patients with normal CSF analysis were studied and classified according to their final diagnosis into pathologic and nonpathologic values. The turbidimetric assay showed excellent agreement with the PRM assay (mean bias 0.003 g/L [-0.26 to 0.27]). The CBB method generally showed higher total protein values than the turbidimetric assay and the PRM assay (mean bias -0.14 g/L versus both the turbidimetric and PRM assays). From 90 of 401 canine patients, nonparametric reference intervals (2.5%, 97.5% quantile) were calculated (turbidimetric assay and PRM method: 0.08-0.35 g/L (90% CI: 0.07-0.08/0.33-0.39); CBB method: 0.17-0.55 g/L (90% CI: 0.16-0.18/0.52-0.61)). Total protein concentration in canine CSF specimens remained stable for up to 6 months of storage at -80°C. Due to variations among methods, RIs for total protein concentration in canine CSF have to be calculated for each method. The a posteriori method of RI calculation described here should encourage other veterinary laboratories to establish RIs that are laboratory-specific. ©2014 American Society for Veterinary Clinical Pathology and European Society for Veterinary Clinical Pathology.
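
    Two of the computations mentioned above, the between-assay mean bias and the nonparametric reference interval from the 2.5% and 97.5% quantiles, are simple to express. This sketch uses placeholder concentrations, not the study's 118 specimens or 90 reference patients:

    import numpy as np

    turbidimetric = np.array([0.12, 0.20, 0.31, 0.18, 0.25])  # g/L, hypothetical
    prm           = np.array([0.11, 0.21, 0.30, 0.19, 0.24])  # g/L, hypothetical

    diff = turbidimetric - prm
    bias = diff.mean()
    loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
    print(f"mean bias = {bias:.3f} g/L, 95% limits of agreement = [{loa[0]:.3f}, {loa[1]:.3f}]")

    # Nonparametric reference interval from a healthy reference sample
    rng = np.random.default_rng(7)
    reference_sample = rng.normal(0.20, 0.06, size=90)   # placeholder for the n=90 patients
    lo, hi = np.percentile(reference_sample, [2.5, 97.5])
    print(f"reference interval: {lo:.2f}-{hi:.2f} g/L")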

  3. Antarctic ice sheet mass loss estimates using Modified Antarctic Mapping Mission surface flow observations

    NASA Astrophysics Data System (ADS)

    Ren, Diandong; Leslie, Lance M.; Lynch, Mervyn J.

    2013-03-01

    The long residence time of ice and the relatively gentle slopes of the Antarctic Ice Sheet make basal sliding a unique positive feedback mechanism in enhancing ice discharge along preferred routes. The highly organized ice stream channels extending to the interior from the lower reaches of the outlets are a manifestation of the role of basal granular material in enhancing ice flow. In this study, constraining the model-simulated year 2000 ice flow fields with surface velocities obtained from InSAR measurements permits retrieval of the basal sliding parameters. Forward integrations of the ice model driven by atmospheric and oceanic parameters from coupled general circulation models under different emission scenarios provide a range of estimates of total ice mass loss during the 21st century. The total mass loss rate has a small intermodel and interscenario spread, rising from approximately -160 km³/yr at present to approximately -220 km³/yr by 2100. The accelerated mass loss rate of the Antarctic Ice Sheet in a warming climate is due primarily to a dynamic response in the form of an increase in ice flow speed. Ice shelves contribute to this feedback through a reduced buttressing effect due to more frequent systematic, tabular calving events. For example, by 2100 the Ross Ice Shelf is projected to shed 40 km³ during each systematic tabular calving. After the frontal section's attrition, the remaining shelf will rebound. Consequently, the submerged cross-sectional area will be reduced, as will the buttressing stress. Longitudinal differential warming of ocean temperature contributes to tabular calving. Because of the prevalence of fringe ice shelves, oceanic effects will likely play a very important role in the future mass balance of the Antarctic Ice Sheet under a possible future warming climate.

  4. Developing a 30-m grassland productivity estimation map for central Nebraska using 250-m MODIS and 30-m Landsat-8 observations

    USGS Publications Warehouse

    Gu, Yingxin; Wylie, Bruce K.

    2015-01-01

    Accurately estimating aboveground vegetation biomass productivity is essential for local ecosystem assessment and best land management practice. Satellite-derived growing season time-integrated Normalized Difference Vegetation Index (GSN) has been used as a proxy for vegetation biomass productivity. A 250-m grassland biomass productivity map for the Greater Platte River Basin had been developed based on the relationship between Moderate Resolution Imaging Spectroradiometer (MODIS) GSN and Soil Survey Geographic (SSURGO) annual grassland productivity. However, the 250-m MODIS grassland biomass productivity map does not capture detailed ecological features (or patterns) and may result in only generalized estimation of the regional total productivity. Developing a high or moderate spatial resolution (e.g., 30-m) productivity map to better understand the regional detailed vegetation condition and ecosystem services is preferred. The 30-m Landsat data provide spatial detail for characterizing human-scale processes and have been successfully used for land cover and land change studies. The main goal of this study is to develop a 30-m grassland biomass productivity estimation map for central Nebraska, leveraging 250-m MODIS GSN and 30-m Landsat data. A rule-based piecewise regression GSN model based on MODIS and Landsat (r = 0.91) was developed, and a 30-m MODIS equivalent GSN map was generated. Finally, a 30-m grassland biomass productivity estimation map, which provides spatially detailed ecological features and conditions for central Nebraska, was produced. The resulting 30-m grassland productivity map was generally supported by the SSURGO biomass production map and will be useful for regional ecosystem study and local land management practices.
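
    The rule-based piecewise regression linking Landsat predictors to MODIS GSN could plausibly be approximated by a regression tree, which also partitions the predictor space into rules with locally fitted responses. The sketch below uses scikit-learn's DecisionTreeRegressor as a stand-in under that assumption, with synthetic data rather than the paper's model:

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    landsat = rng.uniform(0, 1, size=(500, 3))                  # hypothetical 30-m predictors
    modis_gsn = 2.0 * landsat[:, 0] + rng.normal(0, 0.05, 500)  # synthetic 250-m GSN target

    # Piecewise model: each leaf of the tree is one "rule" with its own response
    model = DecisionTreeRegressor(max_depth=4, min_samples_leaf=20)
    model.fit(landsat, modis_gsn)

    # Predict a 30-m "MODIS-equivalent" GSN surface from Landsat pixels
    gsn_30m = model.predict(rng.uniform(0, 1, size=(10, 3)))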

  5. Estimating and mapping forest biomass using regression models and Spot-6 images (case study: Hyrcanian forests of north of Iran).

    PubMed

    Motlagh, Mohadeseh Ghanbari; Kafaky, Sasan Babaie; Mataji, Asadollah; Akhavan, Reza

    2018-05-21

    Hyrcanian forests of North of Iran are of great importance in terms of various economic and environmental aspects. In this study, Spot-6 satellite images and regression models were applied to estimate above-ground biomass in these forests. This research was carried out in six compartments in three climatic (semi-arid to humid) types and two altitude classes. In the first step, ground sampling methods at the compartment level were used to estimate aboveground biomass (Mg/ha). Then, by reviewing the results of other studies, the most appropriate vegetation indices were selected. In this study, three indices of NDVI, RVI, and TVI were calculated. We investigated the relationship between the vegetation indices and aboveground biomass measured at sample-plot level. Based on the results, the relationship between aboveground biomass values and vegetation indices was a linear regression with the highest level of significance for NDVI in all compartments. Since at the compartment level the correlation coefficient between NDVI and aboveground biomass was the highest, NDVI was used for mapping aboveground biomass. According to the results of this study, biomass values were highly different in various climatic and altitudinal classes with the highest biomass value observed in humid climate and high-altitude class.

  6. Remote sensing based crop type mapping and evapotranspiration estimates at the farm level in arid regions of the globe

    NASA Astrophysics Data System (ADS)

    Ozdogan, M.; Serrat-Capdevila, A.; Anderson, M. C.

    2017-12-01

    Despite increasing scarcity of freshwater resources, there is a dearth of spatially explicit information on irrigation water consumption through evapotranspiration, particularly in semi-arid and arid geographies. Remote sensing, either alone or in combination with ground surveys, is increasingly being used for irrigation water management by quantifying evaporative losses at the farm level. Increased availability of observations, sophisticated algorithms, and access to cloud-based computing are also helping this effort. This presentation will focus on crop-specific evapotranspiration estimates at the farm level derived from remote sensing in a number of water-scarce regions of the world. The work is part of a larger effort to quantify irrigation water use and improve use efficiencies associated with several World Bank projects. Examples will be drawn from India, where groundwater based irrigation withdrawals are monitored with the help of crop type mapping and evapotranspiration estimates from remote sensing. Another example will be provided from a northern irrigation district in Mexico, where remote sensing is used for detailed water accounting at the farm level. These locations exemplify success stories in remote-sensing-aided irrigation water management, with the hope that spatially disaggregated information on evapotranspiration can be used as input for various water management decisions as well as for better water allocation strategies in many other water-scarce regions.

  7. The October 2015 flash-floods in south eastern France: hydrological analyses, inundation mapping and impact estimations

    NASA Astrophysics Data System (ADS)

    Payrastre, Olivier; Bourgin, François; Lebouc, Laurent; Le Bihan, Guillaume; Gaume, Eric

    2017-04-01

    The October 2015 flash-floods in south eastern France caused more than twenty fatalities, high damages and large economic losses in high density urban areas of the Mediterranean coast, including the cities of Mandelieu-La Napoule, Cannes and Antibes. Following a post event survey and preliminary analyses conducted within the framework of the Hymex project, we set up an entire simulation chain at the regional scale to better understand this outstanding event. Rainfall-runoff simulations, inundation mapping and a first estimation of the impacts are conducted following the approach developed and successfully applied for two large flash-flood events in two different French regions (Gard in 2002 and Var in 2010) by Le Bihan (2016). A distributed rainfall-runoff model applied at high resolution for the whole area - including numerous small ungauged basins - is used to feed a semi-automatic hydraulic approach (Cartino method) applied along the river network - including small tributaries. Estimation of the impacts is then performed based on the delineation of the flooded areas and geographic databases identifying buildings and population at risk.

  8. Filling the white space on maps of European runoff trends: estimates from a multi-model ensemble

    NASA Astrophysics Data System (ADS)

    Stahl, K.; Tallaksen, L. M.; Hannaford, J.; van Lanen, H. A. J.

    2012-07-01

    An overall appraisal of runoff changes at the European scale has been hindered by "white space" on maps of observed trends due to a paucity of readily-available streamflow data. This study tested whether this white space can be filled using estimates of trends derived from model simulations of European runoff. The simulations stem from an ensemble of eight global hydrological models that were forced with the same climate input for the period 1963-2000. The derived trends were validated for 293 grid cells across the European domain with observation-based trend estimates. The ensemble mean overall provided the best representation of trends in the observations. Maps of trends in annual runoff based on the ensemble mean demonstrated a pronounced continental dipole pattern of positive trends in western and northern Europe and negative trends in southern and parts of eastern Europe, which has not previously been demonstrated and discussed in comparable detail. Overall, positive trends in annual streamflow appear to reflect the marked wetting trends of the winter months, whereas negative annual trends result primarily from a widespread decrease in streamflow in spring and summer months, consistent with a decrease in summer low flow in large parts of Europe. High flow appears to have increased in rain-dominated hydrological regimes, whereas an inconsistent or decreasing signal was found in snow-dominated regimes. The different models agreed on the predominant continental-scale pattern of trends, but in some areas disagreed on the magnitude and even the direction of trends, particularly in transition zones between regions with increasing and decreasing runoff trends, in complex terrain with a high spatial variability, and in snow-dominated regimes. Model estimates appeared most reliable in reproducing observed trends in annual runoff, winter runoff, and 7-day high flow. Modelled trends in runoff during the summer months, spring (for snow influenced regions) and autumn, and

  9. Fast estimation of diffusion tensors under Rician noise by the EM algorithm.

    PubMed

    Liu, Jia; Gasbarra, Dario; Railavo, Juha

    2016-01-15

    Diffusion tensor imaging (DTI) is widely used to characterize, in vivo, the white matter of the central nervous system (CNS). This biological tissue contains much anatomic, structural and orientational information about fibers in the human brain. Spectral data from the displacement distribution of water molecules located in the brain tissue are collected by a magnetic resonance scanner and acquired in the Fourier domain. After the Fourier inversion, the noise distribution is Gaussian in both real and imaginary parts and, as a consequence, the recorded magnitude data are corrupted by Rician noise. Statistical estimation of diffusion leads to a non-linear regression problem. In this paper, we present a fast computational method for maximum likelihood estimation (MLE) of diffusivities under the Rician noise model based on the expectation maximization (EM) algorithm. By using data augmentation, we are able to transform a non-linear regression problem into the generalized linear modeling framework, dramatically reducing the computational cost. The Fisher-scoring method is used to achieve fast convergence of the tensor parameter. The new method is implemented and applied using both synthetic and real data over a wide range of b-amplitudes up to 14,000 s/mm². Higher accuracy and precision of the Rician estimates are achieved compared with other log-normal based methods. In addition, we extend the maximum likelihood (ML) framework to maximum a posteriori (MAP) estimation in DTI under the aforementioned scheme by specifying priors. We describe how numerically close the estimators of model parameters obtained through MLE and MAP estimation are. Copyright © 2015 Elsevier B.V. All rights reserved.
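
    The EM idea described above, treating the unobserved signal phase as missing data under Rician noise, can be illustrated in a scalar setting. The sketch below is a minimal fixed-point EM iteration for a single amplitude with known noise level, not the paper's full generalized-linear-model scheme for tensor fitting:

    import numpy as np
    from scipy.special import i0e, i1e

    def rician_em_amplitude(m, sigma, n_iter=50):
        """EM-style fixed point for the true amplitude A from Rician magnitudes m."""
        A = m.mean()  # crude initialization (biased high for Rician data)
        for _ in range(n_iter):
            z = A * m / sigma ** 2
            ratio = i1e(z) / i0e(z)   # I1(z)/I0(z); scaled Bessels avoid overflow
            A = np.mean(m * ratio)    # M-step: posterior-phase-weighted mean
        return A

    rng = np.random.default_rng(1)
    A_true, sigma = 5.0, 1.0
    m = np.abs(A_true + rng.normal(0, sigma, 10_000)
               + 1j * rng.normal(0, sigma, 10_000))   # simulated Rician magnitudes
    print(rician_em_amplitude(m, sigma))              # close to 5.0, unlike m.mean()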

  10. Data-based estimates of the ocean carbon sink variability - results of the Surface Ocean pCO2 Mapping intercomparison (SOCOM)

    NASA Astrophysics Data System (ADS)

    Rödenbeck, Christian; Bakker, Dorothee; Gruber, Nicolas; Iida, Yosuke; Jacobson, Andy; Jones, Steve; Landschützer, Peter; Metzl, Nicolas; Nakaoka, Shin-ichiro; Olsen, Are; Park, Geun-Ha; Peylin, Philippe; Rodgers, Keith; Sasse, Tristan; Schuster, Ute; Shutler, James; Valsala, Vinu; Wanninkhof, Rik; Zeng, Jiye

    2016-04-01

    Using measurements of the surface-ocean CO2 partial pressure (pCO2) from the SOCAT and LDEO databases and 14 different pCO2 mapping methods recently collated by the Surface Ocean pCO2 Mapping intercomparison (SOCOM) initiative, variations in regional and global sea-air CO2 fluxes are investigated. Though the available mapping methods use widely different approaches, we find relatively consistent estimates of regional pCO2 seasonality, in line with previous estimates. In terms of interannual variability (IAV), all mapping methods estimate the largest variations to occur in the eastern equatorial Pacific. Despite considerable spread in the detailed variations, mapping methods that fit the data more closely also tend to agree more closely with each other in regional averages. Encouragingly, this includes mapping methods belonging to complementary types - taking variability either directly from the pCO2 data or indirectly from driver data via regression. From a weighted ensemble average, we find an IAV amplitude of the global sea-air CO2 flux of 0.31 PgC yr-1 (standard deviation over 1992-2009), which is larger than simulated by biogeochemical process models. From a decadal perspective, the global ocean CO2 uptake is estimated to have gradually increased since about 2000, with little decadal change prior to that. The weighted mean net global ocean CO2 sink estimated by the SOCOM ensemble is -1.75 PgC yr-1 (1992-2009), consistent within uncertainties with estimates from ocean-interior carbon data or atmospheric oxygen trends. Using data-based sea-air CO2 fluxes in atmospheric CO2 inversions also helps to better constrain land-atmosphere CO2 fluxes.

  11. Data-based estimates of the ocean carbon sink variability - first results of the Surface Ocean pCO2 Mapping intercomparison (SOCOM)

    NASA Astrophysics Data System (ADS)

    Rödenbeck, C.; Bakker, D. C. E.; Gruber, N.; Iida, Y.; Jacobson, A. R.; Jones, S.; Landschützer, P.; Metzl, N.; Nakaoka, S.; Olsen, A.; Park, G.-H.; Peylin, P.; Rodgers, K. B.; Sasse, T. P.; Schuster, U.; Shutler, J. D.; Valsala, V.; Wanninkhof, R.; Zeng, J.

    2015-12-01

    Using measurements of the surface-ocean CO2 partial pressure (pCO2) and 14 different pCO2 mapping methods recently collated by the Surface Ocean pCO2 Mapping intercomparison (SOCOM) initiative, variations in regional and global sea-air CO2 fluxes are investigated. Though the available mapping methods use widely different approaches, we find relatively consistent estimates of regional pCO2 seasonality, in line with previous estimates. In terms of interannual variability (IAV), all mapping methods estimate the largest variations to occur in the eastern equatorial Pacific. Despite considerable spread in the detailed variations, mapping methods that fit the data more closely also tend to agree more closely with each other in regional averages. Encouragingly, this includes mapping methods belonging to complementary types - taking variability either directly from the pCO2 data or indirectly from driver data via regression. From a weighted ensemble average, we find an IAV amplitude of the global sea-air CO2 flux of 0.31 PgC yr-1 (standard deviation over 1992-2009), which is larger than simulated by biogeochemical process models. From a decadal perspective, the global ocean CO2 uptake is estimated to have gradually increased since about 2000, with little decadal change prior to that. The weighted mean net global ocean CO2 sink estimated by the SOCOM ensemble is -1.75 PgC yr-1 (1992-2009), consistent within uncertainties with estimates from ocean-interior carbon data or atmospheric oxygen trends.
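
    The two ensemble summaries reported in both SOCOM records above, a weighted ensemble-mean sink and an IAV amplitude taken as the standard deviation of annual global fluxes, look like this in outline; the flux matrix and uniform weights are fabricated placeholders merely seeded with the reported magnitudes:

    import numpy as np

    rng = np.random.default_rng(0)
    annual_flux = rng.normal(-1.75, 0.31, size=(14, 18))  # methods x years, PgC/yr
    weights = np.ones(14) / 14.0                          # hypothetical equal weights

    ensemble_series = weights @ annual_flux               # weighted annual global flux
    mean_sink = ensemble_series.mean()                    # cf. -1.75 PgC/yr (1992-2009)
    iav_amplitude = ensemble_series.std(ddof=1)           # cf. 0.31 PgC/yr amplitude
    print(mean_sink, iav_amplitude)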

  12. Estimation of austral summer net community production in the Amundsen Sea: Self-organizing map analysis approach

    NASA Astrophysics Data System (ADS)

    Park, K.; Hahm, D.; Lee, D. G.; Rhee, T. S.; Kim, H. C.

    2014-12-01

    The Amundsen Sea, Antarctica, has been known as one of the regions most susceptible to current climate change, as seen in sea ice melting and sea surface temperature change. In the Southern Ocean, a predominant amount of primary production occurs in the continental shelf region. Phytoplankton blooms take place during the austral summer due to the limited sunlight and sea ice cover. Thus, quantifying the variation of summer season net community production (NCP) in the Amundsen Sea is essential to analyze the influence of climate change on the biogeochemical cycle of the Southern Ocean. During the austral summers of 2011, 2012 and 2014, we conducted underway observations of ΔO2/Ar and derived the NCP of the Amundsen Sea. Despite the importance of NCP for understanding the biological carbon cycle of the ocean, the observations are too sparse to resolve the spatio-temporal variation in the Amundsen Sea. Therefore, we applied self-organizing map (SOM) analysis to expand our observed data sets and estimate the NCP during the summer season. SOM analysis, a type of artificial neural network, has proven to be a useful method for extracting and classifying features in geoscience. In oceanography, SOM has been applied to the analysis of various properties of seawater such as sea surface temperature, chlorophyll concentration, pCO2, and NCP. It is especially useful for expanding the spatial coverage of direct measurements or for estimating properties whose satellite observations are technically or spatially limited. In this study, we estimate summer season NCP and find a variable set which optimally delineates the NCP variation in the Amundsen Sea as well. Moreover, we attempt to analyze the interannual variation of the Amundsen Sea NCP by taking climatological factors into account in the SOM analysis.
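
    A self-organizing map is a codebook of nodes pulled toward the data during training; unlabeled samples can then inherit the mean NCP of their best-matching node. The sketch below is a toy 1-D SOM on synthetic predictors, illustrating only the mechanism, not the study's configuration:

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.uniform(0, 1, size=(300, 4))            # e.g. SST, chlorophyll, wind, ice cover
    ncp = X[:, 0] * 10 + rng.normal(0, 0.5, 300)    # synthetic NCP "labels"

    n_nodes, lr, radius = 20, 0.5, 3.0
    W = rng.uniform(0, 1, size=(n_nodes, 4))        # codebook vectors, one per node

    for t, x in enumerate(X):
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))                # best-matching unit
        dist = np.abs(np.arange(n_nodes) - bmu)                    # 1-D grid distance
        h = lr * np.exp(-(dist / radius) ** 2) * np.exp(-t / 300)  # decaying neighborhood
        W += h[:, None] * (x - W)

    # Average NCP of the samples mapped to each node
    labels = np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in X])
    node_ncp = np.array([ncp[labels == k].mean() if (labels == k).any() else np.nan
                         for k in range(n_nodes)])

    # A new, unlabeled observation inherits its best-matching node's NCP
    x_new = rng.uniform(0, 1, 4)
    print(node_ncp[np.argmin(((W - x_new) ** 2).sum(axis=1))])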

  13. Vegetation Coverage Mapping and Soil Effect Correction in Estimating Vegetation Water Content and Dry Biomass from Satellites

    NASA Astrophysics Data System (ADS)

    Huang, J.; Chen, D.

    2005-12-01

    Vegetation water content (VWC) has attracted great research interest in hydrology in recent years. As an important parameter describing the horizontal expansion of vegetation, vegetation coverage is essential for implementing soil effect correction in partially vegetated fields so as to estimate VWC accurately. Ground measurements of corn and soybeans in SMEX02 resulted in an identical expolinear relationship between vegetation coverage and leaf area index (LAI), which is used for vegetation coverage mapping. Results illustrated two parts of LAI growth quantitatively: the horizontal expansion of leaf coverage and the vertical accumulation of leaf layers. It is believed that the former contributes significantly to LAI growth at the initial vegetation growth stage and the latter is more dominant after vegetation coverage reaches a certain level. The Normalized Difference Water Index (NDWI), using short-wave infrared bands, is preferred for its late saturation at high LAI values, in contrast to the Normalized Difference Vegetation Index (NDVI). NDWI is then utilized to estimate LAI, via another expolinear relationship, which is shown to be independent of vegetation species in the study of corn and soybeans at SMEX02 sites. It is believed that the surface reflectance measured at satellite spectral bands is the mixed result of signals reflected from vegetation and bare soil, especially in partially vegetated fields. A simple linear mixture model utilizing vegetation coverage information is proposed to correct the soil effect in such cases; see the sketch below. Surface reflectance fractions for "pure" vegetation are derived from the model. Comparing with ground measurements, empirical models using soil-effect-corrected vegetation indices to estimate VWC and dry biomass (DB) are generated. The study enhanced the in-depth understanding of the mechanisms by which vegetation growth affects satellite spectral reflectance with and without soil effect, which are particularly useful for modeling in
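
    The linear mixture correction referenced above has a one-line inversion: if observed reflectance is a coverage-weighted mix of vegetation and soil, the "pure" vegetation reflectance follows by solving for it. A sketch with hypothetical reflectances and coverage:

    import numpy as np

    def pure_vegetation_reflectance(r_obs, r_soil, f_cover):
        """Invert r_obs = f*r_veg + (1-f)*r_soil for r_veg (f_cover in (0, 1])."""
        return (r_obs - (1.0 - f_cover) * r_soil) / f_cover

    r_obs  = np.array([0.08, 0.35])   # hypothetical red, NIR of a mixed pixel
    r_soil = np.array([0.15, 0.20])   # hypothetical bare-soil reflectance
    f      = 0.6                      # vegetation coverage from the LAI relationship
    print(pure_vegetation_reflectance(r_obs, r_soil, f))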

  14. NASA/BLM Applications Pilot Test (APT), phase 2. Volume 1: Executive summary. [vegetation mapping and production estimation in northwestern Arizona

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Data from LANDSAT, low altitude color aerial photography, and ground visits were combined and used to produce vegetation cover maps and to estimate productivity of range, woodland, and forest resources in northwestern Arizona. A planning session, two workshops, and four status reviews were held to assist technology transfer from NASA. Computer aided digital classification of LANDSAT data was selected as a major source of input data. An overview is presented of the data processing, data collection, productivity estimation, and map verification techniques used. Cost analysis and LANDSAT digital products are also considered.

  15. Ecosystem services - from assessments of estimations to quantitative, validated, high-resolution, continental-scale mapping via airborne LIDAR

    NASA Astrophysics Data System (ADS)

    Zlinszky, András; Pfeifer, Norbert

    2016-04-01

    "Ecosystem services" defined vaguely as "nature's benefits to people" are a trending concept in ecology and conservation. Quantifying and mapping these services is a longtime demand of both ecosystems science and environmental policy. The current state of the art is to use existing maps of land cover, and assign certain average ecosystem service values to their unit areas. This approach has some major weaknesses: the concept of "ecosystem services", the input land cover maps and the value indicators. Such assessments often aim at valueing services in terms of human currency as a basis for decision-making, although this approach remains contested. Land cover maps used for ecosystem service assessments (typically the CORINE land cover product) are generated from continental-scale satellite imagery, with resolution in the range of hundreds of meters. In some rare cases, airborne sensors are used, with higher resolution but less covered area. Typically, general land cover classes are used instead of categories defined specifically for the purpose of ecosystem service assessment. The value indicators are developed for and tested on small study sites, but widely applied and adapted to other sites far away (a process called benefit transfer) where local information may not be available. Upscaling is always problematic since such measurements investigate areas much smaller than the output map unit. Nevertheless, remote sensing is still expected to play a major role in conceptualization and assessment of ecosystem services. We propose that an improvement of several orders of magnitude in resolution and accuracy is possible through the application of airborne LIDAR, a measurement technique now routinely used for collection of countrywide three-dimensional datasets with typically sub-meter resolution. However, this requires a clear definition of the concept of ecosystem services and the variables in focus: remote sensing can measure variables closely related to "ecosystem

  16. Drift-Free Indoor Navigation Using Simultaneous Localization and Mapping of the Ambient Heterogeneous Magnetic Field

    NASA Astrophysics Data System (ADS)

    Chow, J. C. K.

    2017-09-01

    In the absence of external reference position information (e.g. surveyed targets or Global Navigation Satellite Systems) Simultaneous Localization and Mapping (SLAM) has proven to be an effective method for indoor navigation. The positioning drift can be reduced with regular loop-closures and global relaxation as the backend, thus achieving a good balance between exploration and exploitation. Although vision-based systems like laser scanners are typically deployed for SLAM, these sensors are heavy, energy inefficient, and expensive, making them unattractive for wearables or smartphone applications. However, the concept of SLAM can be extended to non-optical systems such as magnetometers. Instead of matching features such as walls and furniture using some variation of the Iterative Closest Point algorithm, the local magnetic field can be matched to provide loop-closure and global trajectory updates in a Gaussian Process (GP) SLAM framework. With a MEMS-based inertial measurement unit providing a continuous trajectory, and the matching of locally distinct magnetic field maps, experimental results in this paper show that a drift-free navigation solution in an indoor environment with millimetre-level accuracy can be achieved. The GP-SLAM approach presented can be formulated as a maximum a posteriori estimation problem and it can naturally perform loop-detection, feature-to-feature distance minimization, global trajectory optimization, and magnetic field map estimation simultaneously. Spatially continuous features (i.e. smooth magnetic field signatures) are used instead of discrete feature correspondences (e.g. point-to-point) as in conventional vision-based SLAM. These position updates from the ambient magnetic field also provide enough information for calibrating the accelerometer bias and gyroscope bias in-use. The only restriction for this method is the need for magnetic disturbances (which is typically not an issue for indoor environments); however, no assumptions
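
    The map-building ingredient of the GP-SLAM framework described above is Gaussian-process regression of the magnetic field over position. The sketch below shows only that ingredient, a GP posterior mean with an RBF kernel on synthetic data, and omits the trajectory optimization and loop-closure machinery:

    import numpy as np

    def rbf(a, b, length=0.5, var=1.0):
        """Squared-exponential kernel between two sets of 2-D positions."""
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return var * np.exp(-0.5 * d2 / length ** 2)

    rng = np.random.default_rng(3)
    X_train = rng.uniform(0, 5, size=(200, 2))                     # visited positions (m)
    y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=200)   # toy field magnitudes

    noise_var = 0.1 ** 2
    K = rbf(X_train, X_train) + noise_var * np.eye(200)
    alpha = np.linalg.solve(K, y_train)

    X_query = np.array([[2.5, 1.0]])              # a position hypothesis to score
    field_pred = rbf(X_query, X_train) @ alpha    # GP posterior mean of the field there
    print(field_pred)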

  17. Illness Mapping: a time and cost effective method to estimate healthcare data needed to establish community-based health insurance.

    PubMed

    Binnendijk, Erika; Gautham, Meenakshi; Koren, Ruth; Dror, David M

    2012-10-09

    Most healthcare spending in developing countries is private out-of-pocket. One explanation for low penetration of health insurance is that poorer individuals doubt their ability to enforce insurance contracts. Community-based health insurance schemes (CBHI) are a solution, but launching CBHI requires obtaining accurate local data on morbidity, healthcare utilization and other details to inform package design and pricing. We developed the "Illness Mapping" method (IM) for data collection (faster and cheaper than household surveys). IM is a modification of two non-interactive consensus group methods (Delphi and Nominal Group Technique) to operate as interactive methods. We elicited estimates from "Experts" in the target community on morbidity and healthcare utilization. Interaction between facilitator and experts became essential to bridge literacy constraints and to reach consensus. The study was conducted in Gaya District, Bihar (India) during April-June 2010. The intervention included the IM and a household survey (HHS). IM included 18 women's and 17 men's groups. The HHS was conducted in 50 villages with 1,000 randomly selected households (6,656 individuals). We found good agreement between the two methods on overall prevalence of illness (IM: 25.9% ±3.6; HHS: 31.4%) and on prevalence of acute (IM: 76.9%; HHS: 69.2%) and chronic illnesses (IM: 20.1%; HHS: 16.6%). We also found good agreement on incidence of deliveries (IM: 3.9% ±0.4; HHS: 3.9%), and on hospital deliveries (IM: 61.0% ±5.4; HHS: 51.4%). For hospitalizations, we obtained a lower estimate from the IM (1.1%) than from the HHS (2.6%). The IM required less time and less person-power than a household survey, which translates into reduced costs. We have shown that our Illness Mapping method can be carried out at lower financial and human cost for sourcing essential local data, at acceptably accurate levels. In view of the good fit of results obtained, we assume that the method could work elsewhere

  18. A comparison of two estimates of standard error for a ratio-of-means estimator for a mapped-plot sample design in southeast Alaska.

    Treesearch

    Willem W.S. van Hees

    2002-01-01

    Comparisons of estimated standard error for a ratio-of-means (ROM) estimator are presented for forest resource inventories conducted in southeast Alaska between 1995 and 2000. Estimated standard errors for the ROM were generated by using a traditional variance estimator and also approximated by bootstrap methods. Estimates of standard error generated by both...
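
    The bootstrap approximation of the standard error of a ratio-of-means estimator, one of the two routes compared above, amounts to resampling plots with replacement and recomputing the ratio. A sketch with fabricated plot data:

    import numpy as np

    rng = np.random.default_rng(4)
    y = rng.gamma(5.0, 2.0, size=60)   # e.g. per-plot attribute of interest
    x = rng.gamma(4.0, 1.5, size=60)   # e.g. per-plot auxiliary variable

    rom = y.mean() / x.mean()          # ratio-of-means point estimate

    boots = []
    for _ in range(2000):
        idx = rng.integers(0, 60, size=60)          # resample plots with replacement
        boots.append(y[idx].mean() / x[idx].mean())
    se_boot = np.std(boots, ddof=1)
    print(f"ROM = {rom:.3f}, bootstrap SE = {se_boot:.3f}")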

  19. Ice Sheet Roughness Estimation Based on Impulse Responses Acquired in the Global Ice Sheet Mapping Orbiter Mission

    NASA Astrophysics Data System (ADS)

    Niamsuwan, N.; Johnson, J. T.; Jezek, K. C.; Gogineni, P.

    2008-12-01

    The Global Ice Sheet Mapping Orbiter (GISMO) mission was developed to address scientific needs to understand the polar ice subsurface structure. This NASA Instrument Incubator Program project is a collaboration between Ohio State University, the University of Kansas, Vexcel Corporation and NASA. The GISMO design utilizes an interferometric SAR (InSAR) strategy in which ice sheet reflected signals received by a dual-antenna system are used to produce an interference pattern. The resulting interferogram can be used to filter out surface clutter so as to reveal the signals scattered from the base of the ice sheet. These signals are further processed to produce 3D-images representing basal topography of the ice sheet. In the past three years, the GISMO airborne field campaigns that have been conducted provide a set of useful data for studying geophysical properties of the Greenland ice sheet. While topography information can be obtained using interferometric SAR processing techniques, ice sheet roughness statistics can also be derived by a relatively simple procedure that involves analyzing power levels and the shape of the radar impulse response waveforms. An electromagnetic scattering model describing GISMO impulse responses has previously been proposed and validated. This model suggested that rms-heights and correlation lengths of the upper surface profile can be determined from the peak power and the decay rate of the pulse return waveform, respectively. This presentation will demonstrate a procedure for estimating the roughness of ice surfaces by fitting the GISMO impulse response model to retrieved waveforms from selected GISMO flights. Furthermore, an extension of this procedure to estimate the scattering coefficient of the glacier bed will be addressed as well. Planned future applications involving the classification of glacier bed conditions based on the derived scattering coefficients will also be described.

  20. Bathymetric map, area/capacity table, and sediment volume estimate for Millwood Lake near Ashdown, Arkansas, 2013

    USGS Publications Warehouse

    Richards, Joseph M.; Green, W. Reed

    2013-01-01

    Millwood Lake, in southwestern Arkansas, was constructed and is operated by the U.S. Army Corps of Engineers (USACE) for flood-risk reduction, water supply, and recreation. The lake was completed in 1966 and it is likely that with time sedimentation has resulted in the reduction of storage capacity of the lake. The loss of storage capacity can cause less water to be available for water supply, and lessens the ability of the lake to mitigate flooding. Excessive sediment accumulation also can cause a reduction in aquatic habitat in some areas of the lake. Although many lakes operated by the USACE have periodic bathymetric and sediment surveys, none have been completed for Millwood Lake. In March 2013, the U.S. Geological Survey (USGS), in cooperation with the USACE, surveyed the bathymetry of Millwood Lake to prepare an updated bathymetric map and area/capacity table. The USGS also collected sediment thickness data in June 2013 to estimate the volume of sediment accumulated in the lake.

  1. Mapping grey matter reductions in schizophrenia: an anatomical likelihood estimation analysis of voxel-based morphometry studies.

    PubMed

    Fornito, A; Yücel, M; Patti, J; Wood, S J; Pantelis, C

    2009-03-01

    Voxel-based morphometry (VBM) is a popular tool for mapping neuroanatomical changes in schizophrenia patients. Several recent meta-analyses have identified the brain regions in which patients most consistently show grey matter reductions, although they have not examined whether such changes reflect differences in grey matter concentration (GMC) or grey matter volume (GMV). These measures assess different aspects of grey matter integrity, and may therefore reflect different pathological processes. In this study, we used the Anatomical Likelihood Estimation procedure to analyse significant differences reported in 37 VBM studies of schizophrenia patients, incorporating data from 1646 patients and 1690 controls, and compared the findings of studies using either GMC or GMV to index grey matter differences. Analysis of all studies combined indicated that grey matter reductions in a network of frontal, temporal, thalamic and striatal regions are among the most frequently reported in the literature. GMC reductions were generally larger and more consistent than GMV reductions, and were more frequent in the insula, medial prefrontal, medial temporal and striatal regions. GMV reductions were more frequent in dorso-medial frontal cortex, and lateral and orbital frontal areas. These findings support the primacy of frontal, limbic, and subcortical dysfunction in the pathophysiology of schizophrenia, and suggest that the grey matter changes observed with MRI may not necessarily result from a unitary pathological process.

  2. Estimating Integrated Water Vapor (IWV) regional map distribution using METEOSAT satellite data and GPS Zenith Wet Delay (ZWD)

    NASA Astrophysics Data System (ADS)

    Reuveni, Y.; Leontiev, A.

    2016-12-01

    Using GPS satellite signals, we can study atmospheric processes and coupling mechanisms, which can help us understand the physical conditions in the upper atmosphere that might lead to, or act as proxies for, severe weather events such as extreme storms and flooding. GPS signals received by geodetic stations on the ground are multi-purpose and can also provide estimates of tropospheric zenith delays, which can be converted into mm-accuracy Precipitable Water Vapor (PWV) using collocated pressure and temperature measurements on the ground. Here, we present the use of Israel's geodetic GPS receiver network for extracting tropospheric zenith path delays, combined with near Real Time (RT) METEOSAT-10 Water Vapor (WV) and surface temperature pixel intensity values (the 7.3 and 12.1 channels, respectively), in order to obtain absolute IWV (kg/m²) or PWV (mm) map distributions. The results show good agreement between the absolute values obtained from our triangulation strategy, based solely on GPS Zenith Total Delays (ZTD) and METEOSAT-10 surface temperature data, and available radiosonde IWV/PWV absolute values. The presented strategy can provide unprecedented temporal and spatial IWV/PWV distributions, which are needed as part of the accurate and comprehensive initial conditions provided by upper-air observation systems at temporal and spatial resolutions consistent with the models assimilating them.
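
    Converting a GPS zenith wet delay to precipitable water vapor is commonly done with a Bevis-style dimensionless factor; the refractivity constants and the Tm(Ts) regression below are widely cited textbook values, assumed here rather than taken from this abstract:

    def zwd_to_pwv_mm(zwd_mm, surface_temp_K):
        """Convert zenith wet delay (mm) to precipitable water vapor (mm)."""
        Tm = 70.2 + 0.72 * surface_temp_K   # mean temperature of the wet troposphere (K)
        k2p, k3 = 22.1, 3.739e5             # refractivity constants, K/hPa and K^2/hPa
        rho_w, Rv = 1000.0, 461.5           # water density kg/m^3, vapor gas constant J/(kg K)
        # the /100 converts K/hPa to K/Pa so the factor comes out dimensionless (~0.15)
        pi_factor = 1e6 / (rho_w * Rv * (k3 / Tm + k2p) / 100.0)
        return pi_factor * zwd_mm

    print(zwd_to_pwv_mm(zwd_mm=150.0, surface_temp_K=295.0))  # roughly 24 mm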

  3. Estimation for the Linear Model With Uncertain Covariance Matrices

    NASA Astrophysics Data System (ADS)

    Zachariah, Dave; Shariati, Nafiseh; Bengtsson, Mats; Jansson, Magnus; Chatterjee, Saikat

    2014-03-01

    We derive a maximum a posteriori estimator for the linear observation model, where the signal and noise covariance matrices are both uncertain. The uncertainties are treated probabilistically by modeling the covariance matrices with prior inverse-Wishart distributions. The nonconvex problem of jointly estimating the signal of interest and the covariance matrices is tackled by a computationally efficient fixed-point iteration as well as an approximate variational Bayes solution. The statistical performance of estimators is compared numerically to state-of-the-art estimators from the literature and shown to perform favorably.
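
    One plausible reading of the fixed-point iteration described above is coordinate ascent on the joint posterior: alternate the Gaussian MAP estimate of the signal with the conditional inverse-Wishart modes of the two covariances. The sketch below implements that variant under assumed hyperparameters; it is a minimal analogue, not a reproduction of the paper's derivation:

    import numpy as np

    rng = np.random.default_rng(5)
    d, m = 3, 8
    H = rng.normal(size=(m, d))
    y = H @ rng.normal(size=d) + 0.1 * rng.normal(size=m)

    # Inverse-Wishart priors IW(Psi, nu) on P (signal cov) and R (noise cov);
    # these hyperparameters are illustrative assumptions
    Psi_p, nu_p = np.eye(d), d + 2.0
    Psi_r, nu_r = 0.01 * np.eye(m), m + 2.0

    P, R = np.eye(d), np.eye(m)
    for _ in range(50):
        # Conditional MAP of x given P, R: the usual Gaussian posterior mean
        Ri = np.linalg.inv(R)
        x = np.linalg.solve(H.T @ Ri @ H + np.linalg.inv(P), H.T @ Ri @ y)
        # Conditional modes of the inverse-Wishart posteriors given x
        e = y - H @ x
        P = (Psi_p + np.outer(x, x)) / (nu_p + d + 2.0)
        R = (Psi_r + np.outer(e, e)) / (nu_r + m + 2.0)

    print(x)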

  4. Goal-oriented explicit residual-type error estimates in XFEM

    NASA Astrophysics Data System (ADS)

    Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin

    2013-08-01

    A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.

  5. Learning-based subject-specific estimation of dynamic maps of cortical morphology at missing time points in longitudinal infant studies.

    PubMed

    Meng, Yu; Li, Gang; Gao, Yaozong; Lin, Weili; Shen, Dinggang

    2016-11-01

    Longitudinal neuroimaging analysis of the dynamic brain development in infants has received increasing attention recently. Many studies expect a complete longitudinal dataset in order to accurately chart the brain developmental trajectories. However, in practice, a large portion of subjects in longitudinal studies often have missing data at certain time points, due to various reasons such as the absence of scan or poor image quality. To make better use of these incomplete longitudinal data, in this paper, we propose a novel machine learning-based method to estimate the subject-specific, vertex-wise cortical morphological attributes at the missing time points in longitudinal infant studies. Specifically, we develop a customized regression forest, named dynamically assembled regression forest (DARF), as the core regression tool. DARF ensures the spatial smoothness of the estimated maps for vertex-wise cortical morphological attributes and also greatly reduces the computational cost. By employing a pairwise estimation followed by a joint refinement, our method is able to fully exploit the available information from both subjects with complete scans and subjects with missing scans for estimation of the missing cortical attribute maps. The proposed method has been applied to estimating the dynamic cortical thickness maps at missing time points in an incomplete longitudinal infant dataset, which includes 31 healthy infant subjects, each having up to five time points in the first postnatal year. The experimental results indicate that our proposed framework can accurately estimate the subject-specific vertex-wise cortical thickness maps at missing time points, with the average error less than 0.23 mm. Hum Brain Mapp 37:4129-4147, 2016. © 2016 Wiley Periodicals, Inc.

  6. Quantitative evaluation of dual-flip-angle T1 mapping on DCE-MRI kinetic parameter estimation in head and neck

    PubMed Central

    Chow, Steven Kwok Keung; Yeung, David Ka Wai; Ahuja, Anil T; King, Ann D

    2012-01-01

    Purpose To quantitatively evaluate the kinetic parameter estimation for head and neck (HN) dynamic contrast-enhanced (DCE) MRI with dual-flip-angle (DFA) T1 mapping. Materials and methods Clinical DCE-MRI datasets of 23 patients with HN tumors were included in this study. T1 maps were generated based on the multiple-flip-angle (MFA) method and different DFA combinations. Tofts model parameter maps of kep, Ktrans and vp based on MFA and DFAs were calculated and compared. Fitted parameters by MFA and DFAs were quantitatively evaluated in primary tumor, salivary gland and muscle. Results T1 mapping deviations by DFAs produced remarkable kinetic parameter estimation deviations in head and neck tissues. In particular, the DFA of [2°, 7°] overestimated, while [7°, 12°] and [7°, 15°] underestimated Ktrans and vp, significantly (P<0.01). [2°, 15°] achieved the smallest but still statistically significant overestimation for Ktrans and vp in primary tumors, 32.1% and 16.2% respectively. kep fitting results by DFAs were relatively close to the MFA reference compared to Ktrans and vp. Conclusions T1 deviations induced by DFA could result in significant errors in kinetic parameter estimation, particularly Ktrans and vp, through Tofts model fitting. The MFA method should be more reliable and robust for accurate quantitative pharmacokinetic analysis in head and neck. PMID:23289084
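
    Dual-flip-angle T1 mapping rests on the linearized spoiled gradient-echo (SPGR) signal equation, from which E1 = exp(-TR/T1) follows as a two-point slope. A self-checking sketch with synthetic signals (all values illustrative):

    import numpy as np

    def t1_from_two_angles(s1, s2, a1_deg, a2_deg, tr_ms):
        a1, a2 = np.deg2rad(a1_deg), np.deg2rad(a2_deg)
        # Linearized SPGR: S/sin(a) = E1 * S/tan(a) + M0*(1 - E1)
        y1, y2 = s1 / np.sin(a1), s2 / np.sin(a2)
        x1, x2 = s1 / np.tan(a1), s2 / np.tan(a2)
        e1 = (y1 - y2) / (x1 - x2)
        return -tr_ms / np.log(e1)

    # Synthesize signals for a known T1, then recover it
    tr, t1_true, m0 = 5.0, 1000.0, 1.0
    e1 = np.exp(-tr / t1_true)
    spgr = lambda a: m0 * np.sin(np.deg2rad(a)) * (1 - e1) / (1 - e1 * np.cos(np.deg2rad(a)))
    print(t1_from_two_angles(spgr(2), spgr(15), 2, 15, tr))  # ~1000 ms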

  7. A data centred method to estimate and map changes in the full distribution of daily surface temperature

    NASA Astrophysics Data System (ADS)

    Chapman, Sandra; Stainforth, David; Watkins, Nicholas

    2016-04-01

    Characterizing how our climate is changing includes providing local information which can inform adaptation planning decisions. This requires quantifying the geographical patterns of change at specific quantiles or thresholds in distributions of variables such as daily surface temperature. Here we focus on these local changes and on a model-independent method to transform daily observations into patterns of local climate change. Our method [1] is a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of how fast different quantiles of the distributions are changing. This involves determining not only which quantiles and geographical locations show the greatest change but also those at which any change is highly uncertain. For temperature, changes in the distribution itself can yield robust results [2]. We demonstrate how the fundamental timescales of anthropogenic climate change limit the identification of societally relevant aspects of changes. We show that it is nevertheless possible to extract, solely from observations, some confident quantified assessments of change at certain thresholds and locations [3]. We demonstrate this approach using E-OBS gridded data [4] timeseries of local daily surface temperature from specific locations across Europe over the last 60 years. [1] Chapman, S. C., D. A. Stainforth, N. W. Watkins, On estimating long term local climate trends, Phil. Trans. Royal Soc. A, 371, 20120287 (2013) [2] Stainforth, D. A., S. C. Chapman, N. W. Watkins, Mapping climate change in European temperature distributions, ERL 8, 034031 (2013) [3] Chapman, S. C., Stainforth, D. A., Watkins, N. W., Limits to the quantification of local climate change, ERL 10, 094018 (2015) [4] Haylock M. R. et al., A European daily high-resolution gridded dataset of

  8. Quantitative estimation of Tropical Rainfall Mapping Mission precipitation radar signals from ground-based polarimetric radar observations

    NASA Astrophysics Data System (ADS)

    Bolen, Steven M.; Chandrasekar, V.

    2003-06-01

    The Tropical Rainfall Mapping Mission (TRMM) is the first mission dedicated to measuring rainfall from space using radar. The precipitation radar (PR) is one of several instruments aboard the TRMM satellite that is operating in a nearly circular orbit with nominal altitude of 350 km, inclination of 35°, and period of 91.5 min. The PR is a single-frequency Ku-band instrument that is designed to yield information about the vertical storm structure so as to gain insight into the intensity and distribution of rainfall. Attenuation effects on PR measurements, however, can be significant and as high as 10-15 dB. This can seriously impair the accuracy of rain rate retrieval algorithms derived from PR signal returns. Quantitative estimation of PR attenuation is made along the PR beam via ground-based polarimetric observations to validate attenuation correction procedures used by the PR. The reflectivity (Zh) at horizontal polarization and specific differential phase (Kdp) are found along the beam from S-band ground radar measurements, and theoretical modeling is used to determine the expected specific attenuation (k) along the space-Earth path at Ku-band frequency from these measurements. A theoretical k-Kdp relationship is determined for rain when Kdp ≥ 0.5°/km, and a power law relationship, k = a Zh^b, is determined for light rain and other types of hydrometeors encountered along the path. After alignment and resolution volume matching between ground and PR measurements, the two-way path-integrated attenuation (PIA) is calculated along the PR propagation path by integrating the specific attenuation along the path. The PR reflectivity derived after removing the PIA is also compared against ground radar observations.
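
    The attenuation bookkeeping described above, a power law k = a Zh^b for specific attenuation integrated twice along the path for the PIA, is compact to write down. The coefficients and reflectivity profile below are illustrative placeholders, not the paper's fitted relationships:

    import numpy as np

    a, b = 3.0e-5, 0.78                    # hypothetical power-law coefficients
    rng = np.random.default_rng(6)
    zh_dbz = rng.uniform(20, 45, size=60)  # reflectivity profile along the beam (dBZ)
    zh_lin = 10.0 ** (zh_dbz / 10.0)       # to linear units (mm^6/m^3)

    k = a * zh_lin ** b                    # specific attenuation (dB/km)
    ds = 0.25                              # range-gate spacing (km)
    pia_two_way = 2.0 * np.sum(k * ds)     # two-way path-integrated attenuation (dB)
    print(f"PIA = {pia_two_way:.1f} dB")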

  9. Multisensory processing of naturalistic objects in motion: a high-density electrical mapping and source estimation study.

    PubMed

    Senkowski, Daniel; Saint-Amour, Dave; Kelly, Simon P; Foxe, John J

    2007-07-01

    In everyday life, we continuously and effortlessly integrate the multiple sensory inputs from objects in motion. For instance, the sound and the visual percept of vehicles in traffic provide us with complementary information about the location and motion of vehicles. Here, we used high-density electrical mapping and local auto-regressive average (LAURA) source estimation to study the integration of multisensory objects in motion as reflected in event-related potentials (ERPs). A randomized stream of naturalistic multisensory-audiovisual (AV), unisensory-auditory (A), and unisensory-visual (V) "splash" clips (i.e., a drop falling and hitting a water surface) was presented among non-naturalistic abstract motion stimuli. The visual clip onset preceded the "splash" onset by 100 ms for multisensory stimuli. For naturalistic objects early multisensory integration effects beginning 120-140 ms after sound onset were observed over posterior scalp, with distributed sources localized to occipital cortex, temporal lobule, insular, and medial frontal gyrus (MFG). These effects, together with longer latency interactions (210-250 and 300-350 ms) found in a widespread network of occipital, temporal, and frontal areas, suggest that naturalistic objects in motion are processed at multiple stages of multisensory integration. The pattern of integration effects differed considerably for non-naturalistic stimuli. Unlike naturalistic objects, no early interactions were found for non-naturalistic objects. The earliest integration effects for non-naturalistic stimuli were observed 210-250 ms after sound onset including large portions of the inferior parietal cortex (IPC). As such, there were clear differences in the cortical networks activated by multisensory motion stimuli as a consequence of the semantic relatedness (or lack thereof) of the constituent sensory elements.

  10. Systematized water content calculation in cartilage using T1-mapping MR estimations: design and validation of a mathematical model.

    PubMed

    Shiguetomi-Medina, J M; Ramirez-Gl, J L; Stødkilde-Jørgensen, H; Møller-Madsen, B

    2017-09-01

    Up to 80 % of cartilage is water; the rest is collagen fibers and proteoglycans. Magnetic resonance (MR) T1-weighted measurements can be employed to calculate the water content of a tissue using T1 mapping. In this study, a method that translates T1 values into water content data was tested statistically. To develop a predictive equation, T1 values were obtained for tissue-mimicking gelatin samples. 1.5 T MRI was performed using inverse angle phase and an inverse sequence at 37 (±0.5) °C. Regions of interest were manually delineated and the mean T1 value was estimated in arbitrary units. Data were collected and modeled using linear regression. To validate the method, articular cartilage from six healthy pigs was used. The experiment was conducted in accordance with the Danish Animal Experiment Committee. Double measurements were performed for each animal. Ex vivo, all water in the tissue was extracted by lyophilization, thus allowing the volume of water to be measured. This was then compared with the predicted water content via Lin's concordance correlation coefficient at the 95 % confidence level. The mathematical model was highly significant when compared to a null model (p < 0.0001). 97.3 % of the variation in water content can be explained by absolute T1 values. Percentage water content could be predicted as (0.476 + T1 value × 0.000193) × 100 %. We found that there was 98 % concordance between the actual and predicted water contents. The results of this study demonstrate that MR data can be used to predict the percentage water content of cartilage samples. Level of evidence: 3 (case-control study).

  11. A data centred method to estimate and map how the local distribution of daily precipitation is changing

    NASA Astrophysics Data System (ADS)

    Chapman, Sandra; Stainforth, David; Watkins, Nick

    2014-05-01

    adaptation planning. [1] S. C. Chapman, D. A. Stainforth, N. W. Watkins, 2013, On Estimating Local Long Term Climate Trends, Phil. Trans. R. Soc. A, 371, 20120287; D. A. Stainforth, S. C. Chapman, N. W. Watkins, 2013, Mapping climate change in European temperature distributions, Environ. Res. Lett. 8, 034031 [2] Haylock, M.R., N. Hofstra, A.M.G. Klein Tank, E.J. Klok, P.D. Jones and M. New. 2008: A European daily high-resolution gridded dataset of surface temperature and precipitation. J. Geophys. Res. (Atmospheres), 113, D20119

  12. Illness Mapping: a time and cost effective method to estimate healthcare data needed to establish community-based health insurance

    PubMed Central

    2012-01-01

    Background Most healthcare spending in developing countries is private out-of-pocket. One explanation for low penetration of health insurance is that poorer individuals doubt their ability to enforce insurance contracts. Community-based health insurance schemes (CBHI) are a solution, but launching CBHI requires obtaining accurate local data on morbidity, healthcare utilization and other details to inform package design and pricing. We developed the “Illness Mapping” method (IM) for data collection (faster and cheaper than household surveys). Methods IM is a modification of two non-interactive consensus group methods (Delphi and Nominal Group Technique) to operate as interactive methods. We elicited estimates from “Experts” in the target community on morbidity and healthcare utilization. Interaction between facilitator and experts became essential to bridge literacy constraints and to reach consensus. The study was conducted in Gaya District, Bihar (India) during April-June 2010. The intervention included the IM and a household survey (HHS). IM included 18 women’s and 17 men’s groups. The HHS was conducted in 50 villages with 1,000 randomly selected households (6,656 individuals). Results We found good agreement between the two methods on overall prevalence of illness (IM: 25.9% ±3.6; HHS: 31.4%) and on prevalence of acute (IM: 76.9%; HHS: 69.2%) and chronic illnesses (IM: 20.1%; HHS: 16.6%). We also found good agreement on incidence of deliveries (IM: 3.9% ±0.4; HHS: 3.9%), and on hospital deliveries (IM: 61.0% ±5.4; HHS: 51.4%). For hospitalizations, we obtained a lower estimate from the IM (1.1%) than from the HHS (2.6%). The IM required less time and less person-power than a household survey, which translates into reduced costs. Conclusions We have shown that our Illness Mapping method can be carried out at lower financial and human cost for sourcing essential local data, at acceptably accurate levels. In view of the good fit of results

  13. Global Forest Canopy Height Maps Validation and Calibration for The Potential of Forest Biomass Estimation in The Southern United States

    NASA Astrophysics Data System (ADS)

    Ku, N. W.; Popescu, S. C.

    2015-12-01

    In the past few years, three global forest canopy height maps have been released. Lefsky (2010) first utilized the Geoscience Laser Altimeter System (GLAS) on the Ice, Cloud and land Elevation Satellite (ICESat) and Moderate Resolution Imaging Spectroradiometer (MODIS) data to generate a global forest canopy height map in 2010. Simard et al. (2011) integrated GLAS data and other ancillary variables, such as MODIS, Shuttle Radar Topography Mission (SRTM), and climatic data, to generate another global forest canopy height map in 2011. Los et al. (2012) also used GLAS data to create a vegetation height map in 2012. Several studies attempted to compare these global height maps to other sources of data. Bolton et al. (2013) concluded that Simard's forest canopy height map has strong agreement with airborne lidar derived heights. The Los et al. map is a coarse spatial resolution vegetation height map with a 0.5 decimal degree horizontal resolution, around 50 km in the US, which is not feasible for the purpose of our research. Thus, Simard's global forest canopy height map is the primary map for this research study. The main objectives of this research were to validate and calibrate Simard's map with airborne lidar data and other ancillary variables in the southern United States. The airborne lidar data were collected between 2010 and 2012 from: (1) NASA Goddard's LiDAR, Hyperspectral & Thermal Imager (G-LiHT) program; (2) the National Ecological Observatory Network's (NEON) prototype data sharing program; (3) the NSF Open Topography Facility; and (4) the Department of Ecosystem Science and Management at Texas A&M University. The airborne lidar study areas also cover a wide variety of vegetation types across the southern US. The airborne lidar data were post-processed to generate lidar-derived metrics and assigned to four classes of point cloud data: data with ground points, above 1 m, above 3 m, and above 5 m. The root mean square error (RMSE) and

  14. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP when the rate is a random variable with a probability density function of the form c x^k (1 - x)^m is considered, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  15. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
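
    The linearity result is the familiar beta-binomial conjugacy: with a beta prior on the rate and binomially distributed jump counts, the posterior mean (the MMSE estimate) is an affine function of the observed count. A minimal sketch under those assumptions; the parameter names are illustrative, not taken from the papers:

    ```python
    def mmse_rate_estimate(alpha: float, beta: float, jumps: int, trials: int) -> float:
        """MMSE (posterior-mean) estimate of a jump rate with a Beta(alpha, beta)
        prior and binomially distributed jump counts.

        The posterior is Beta(alpha + jumps, beta + trials - jumps), so the
        estimate is linear in the observed number of jumps.
        """
        return (alpha + jumps) / (alpha + beta + trials)

    # With a uniform Beta(1, 1) prior, 3 jumps in 10 steps give (1+3)/(2+10) = 1/3.
    print(mmse_rate_estimate(1.0, 1.0, jumps=3, trials=10))
    ```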

  16. Mapping to Estimate Health-State Utility from Non-Preference-Based Outcome Measures: An ISPOR Good Practices for Outcomes Research Task Force Report.

    PubMed

    Wailoo, Allan J; Hernandez-Alava, Monica; Manca, Andrea; Mejia, Aurelio; Ray, Joshua; Crawford, Bruce; Botteman, Marc; Busschbach, Jan

    2017-01-01

    Economic evaluation conducted in terms of cost per quality-adjusted life-year (QALY) provides information that decision makers find useful in many parts of the world. Ideally, clinical studies designed to assess the effectiveness of health technologies would include outcome measures that are directly linked to health utility to calculate QALYs. Often this does not happen, and even when it does, clinical studies may be insufficient for a cost-utility assessment. Mapping can solve this problem. It uses an additional data set to estimate the relationship between outcomes measured in clinical studies and health utility. This bridges the evidence gap between available evidence on the effect of a health technology in one metric and the requirement for decision makers to express it in a different one (QALYs). In 2014, ISPOR established a Good Practices for Outcome Research Task Force for mapping studies. This task force report provides recommendations to analysts undertaking mapping studies, those that use the results in cost-utility analysis, and those that need to critically review such studies. The recommendations cover all areas of mapping practice: the selection of data sets for the mapping estimation, model selection and performance assessment, reporting standards, and the use of results including the appropriate reflection of variability and uncertainty. This report is unique because it takes an international perspective, is comprehensive in its coverage of the aspects of mapping practice, and reflects the current state of the art.

  17. Generalized watermarking attack based on watermark estimation and perceptual remodulation

    NASA Astrophysics Data System (ADS)

    Voloshynovskiy, Sviatoslav V.; Pereira, Shelby; Herrigel, Alexander; Baumgartner, Nazanin; Pun, Thierry

    2000-05-01

    Digital image watermarking has become a popular technique for authentication and copyright protection. For verifying the security and robustness of watermarking algorithms, specific attacks have to be applied to test them. In contrast to the known Stirmark attack, which degrades the quality of the image while destroying the watermark, this paper presents a new approach which is based on the estimation of a watermark and the exploitation of the properties of the human visual system (HVS). The new attack satisfies two important requirements. First, image quality after the attack as perceived by the HVS is not worse than the quality of the stego image. Second, the attack uses all available prior information about the watermark and cover image statistics to perform the best watermark removal or damage. The proposed attack is based on a stochastic formulation of the watermark removal problem, considering the embedded watermark as additive noise with some probability distribution. The attack scheme consists of two main stages: (1) watermark estimation and partial removal by filtering based on a maximum a posteriori (MAP) approach; (2) watermark alteration and hiding through addition of noise to the filtered image, taking into account the statistics of the embedded watermark and exploiting HVS characteristics. Experiments on a number of real-world and computer-generated images show the high efficiency of the proposed attack against known academic and commercial methods: the watermark is completely destroyed in all tested images without altering the image quality. The approach can be used against watermark embedding schemes that operate either in the coordinate domain or in transform domains such as Fourier, DCT, or wavelet.
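
    The two-stage structure of the attack can be illustrated with a heavily reduced sketch: treat the watermark as additive noise, estimate it as the residual between the stego image and a denoised version (a crude stand-in for the MAP filter), partially remove it, then re-add noise shaped by a simple texture-masking map. The filter choice and strength parameters below are illustrative assumptions, not the authors' implementation:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def estimation_remodulation_attack(stego: np.ndarray,
                                       removal_strength: float = 1.0,
                                       noise_strength: float = 0.5,
                                       seed: int = 0) -> np.ndarray:
        """Sketch of a watermark-estimation/remodulation attack.

        Stage 1: estimate the watermark as the residual between the stego image
        and a smoothed version (a crude stand-in for MAP filtering under an
        additive-noise watermark model) and partially remove it.
        Stage 2: re-add noise shaped like the estimated watermark, scaled by a
        local-variance mask so more noise hides in textured regions.
        """
        smoothed = gaussian_filter(stego.astype(float), sigma=1.5)
        watermark_estimate = stego - smoothed          # residual = estimated watermark
        attacked = stego - removal_strength * watermark_estimate

        # Local variance as a crude HVS masking map.
        local_mean = gaussian_filter(attacked, sigma=2.0)
        local_var = gaussian_filter((attacked - local_mean) ** 2, sigma=2.0)
        mask = local_var / (local_var.max() + 1e-12)

        rng = np.random.default_rng(seed)
        noise = rng.standard_normal(stego.shape) * watermark_estimate.std()
        return attacked + noise_strength * mask * noise

    # Usage: attacked = estimation_remodulation_attack(stego_image_array)
    ```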

  18. B-spline goal-oriented error estimators for geometrically nonlinear rods

    DTIC Science & Technology

    2011-04-01

    respectively, for the output functionals q2–q4 (linear and nonlinear with the trigonometric functions sine and cosine) in all the tests considered... of the errors resulting from the linear, quadratic and nonlinear (with trigonometric functions sine and cosine) outputs and for p = 1, 2. ... References: [1] A.T. Adams, Sobolev Spaces, Academic Press, Boston, 1975. [2] M. Ainsworth and J.T. Oden, A posteriori error estimation in...

  19. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter
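
    For the underdetermined linear-Gaussian setting described here, the maximum a posteriori estimate has a standard closed form, x̂ = P Hᵀ (H P Hᵀ + R)⁻¹ y for a zero-mean Gaussian prior. A minimal sketch with illustrative dimensions; none of the matrices below come from the NASA tool:

    ```python
    import numpy as np

    def map_estimate(H: np.ndarray, R: np.ndarray, P: np.ndarray,
                     y: np.ndarray) -> np.ndarray:
        """MAP estimate of x in y = H x + v with prior x ~ N(0, P) and noise
        v ~ N(0, R); valid even when H has more columns (health parameters)
        than rows (sensors), i.e., the underdetermined case."""
        S = H @ P @ H.T + R
        return P @ H.T @ np.linalg.solve(S, y)

    # Toy underdetermined case: 2 sensors, 4 health parameters.
    rng = np.random.default_rng(1)
    H = rng.standard_normal((2, 4))
    P = np.eye(4)
    R = 0.01 * np.eye(2)
    x_true = rng.standard_normal(4)
    y = H @ x_true + 0.1 * rng.standard_normal(2)
    print(map_estimate(H, R, P, y))
    ```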

  20. Localised estimates and spatial mapping of poverty incidence in the state of Bihar in India—An application of small area estimation techniques

    PubMed Central

    Aditya, Kaustav; Sud, U. C.

    2018-01-01

    Poverty affects many people, but the ramifications and impacts affect all aspects of society. Information about the incidence of poverty is therefore an important parameter of the population for policy analysis and decision making. In order to provide specific, targeted solutions when addressing poverty disadvantage, small area statistics are needed. Surveys are typically designed and planned to produce reliable estimates of population characteristics of interest mainly at higher geographic levels, such as the national and state levels. Sample sizes are usually not large enough to provide reliable estimates for disaggregated analysis. In many instances estimates are required for areas of the population for which the survey providing the data was unplanned. Then, for areas with small sample sizes, direct survey estimation of population characteristics based only on the data available from the particular area tends to be unreliable. This paper describes an application of the small area estimation (SAE) approach to improve the precision of estimates of poverty incidence at district level in the State of Bihar in India by linking data from the Household Consumer Expenditure Survey 2011–12 of NSSO and the Population Census 2011. The results show that the district level estimates generated by the SAE method are more precise and representative. In contrast, the direct survey estimates based on survey data alone are less stable. PMID:29879202

  1. Localised estimates and spatial mapping of poverty incidence in the state of Bihar in India-An application of small area estimation techniques.

    PubMed

    Chandra, Hukum; Aditya, Kaustav; Sud, U C

    2018-01-01

    Poverty affects many people, but the ramifications and impacts affect all aspects of society. Information about the incidence of poverty is therefore an important parameter of the population for policy analysis and decision making. In order to provide specific, targeted solutions when addressing poverty disadvantage, small area statistics are needed. Surveys are typically designed and planned to produce reliable estimates of population characteristics of interest mainly at higher geographic levels, such as the national and state levels. Sample sizes are usually not large enough to provide reliable estimates for disaggregated analysis. In many instances estimates are required for areas of the population for which the survey providing the data was unplanned. Then, for areas with small sample sizes, direct survey estimation of population characteristics based only on the data available from the particular area tends to be unreliable. This paper describes an application of the small area estimation (SAE) approach to improve the precision of estimates of poverty incidence at district level in the State of Bihar in India by linking data from the Household Consumer Expenditure Survey 2011-12 of NSSO and the Population Census 2011. The results show that the district level estimates generated by the SAE method are more precise and representative. In contrast, the direct survey estimates based on survey data alone are less stable.
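
    The record does not name the specific SAE model, but the standard choice for linking direct survey estimates to census covariates at the area level is the Fay-Herriot model, whose EBLUP shrinks each direct estimate toward a regression-synthetic estimate. A minimal sketch under that assumption; all numbers are illustrative:

    ```python
    import numpy as np

    def fay_herriot_eblup(direct: np.ndarray, D: np.ndarray, X: np.ndarray,
                          sigma2_v: float) -> np.ndarray:
        """EBLUP under a Fay-Herriot area-level model.

        direct   : direct survey estimates per area
        D        : known sampling variances of the direct estimates
        X        : area-level covariates (e.g., census means), one row per area
        sigma2_v : model (area-effect) variance, assumed estimated beforehand

        Shrinks each direct estimate toward the regression-synthetic estimate;
        areas with noisier surveys (large D) are shrunk more.
        """
        w = 1.0 / (sigma2_v + D)                         # GLS weights
        beta = np.linalg.solve((X.T * w) @ X, (X.T * w) @ direct)
        gamma = sigma2_v / (sigma2_v + D)
        return gamma * direct + (1.0 - gamma) * (X @ beta)

    # Toy example: 5 districts, intercept + one census covariate.
    direct = np.array([0.32, 0.45, 0.28, 0.51, 0.38])
    D = np.array([0.004, 0.010, 0.003, 0.015, 0.006])
    X = np.column_stack([np.ones(5), [0.3, 0.5, 0.25, 0.55, 0.4]])
    print(fay_herriot_eblup(direct, D, X, sigma2_v=0.002))
    ```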

  2. Advances in 3D soil mapping and water content estimation using multi-channel ground-penetrating radar

    NASA Astrophysics Data System (ADS)

    Moysey, S. M.

    2011-12-01

    Multi-channel ground-penetrating radar systems have recently become widely available, thereby opening new possibilities for shallow imaging of the subsurface. One advantage of these systems is that they can significantly reduce survey times by simultaneously collecting multiple lines of GPR reflection data. As a result, it is becoming more practical to complete 3D surveys - particularly in situations where the subsurface undergoes rapid changes, e.g., when monitoring infiltration and redistribution of water in soils. While 3D and 4D surveys can provide a degree of clarity that significantly improves interpretation of the subsurface, an even more powerful feature of the new multi-channel systems for hydrologists is their ability to collect data using multiple antenna offsets. Common mid-point (CMP) surveys have been widely used to estimate radar wave velocities, which can be related to water contents, by sequentially increasing the distance, i.e., offset, between the source and receiver antennas. This process is highly labor-intensive using single-channel systems, and therefore such surveys are often performed at only a few locations at any given site. In contrast, with multi-channel GPR systems it is possible to physically arrange an array of antennas at different offsets, such that a CMP-style survey is performed at every point along a radar transect. It is then possible to process this data to obtain detailed maps of wave velocity with a horizontal resolution on the order of centimeters. In this talk I review concepts underlying multi-channel GPR imaging with an emphasis on multi-offset profiling for water content estimation. Numerical simulations are used to provide examples that illustrate situations where multi-offset GPR profiling is likely to be successful, with an emphasis on considering how issues like noise, soil heterogeneity, vertical variations in water content and weak reflection returns affect algorithms for automated analysis of the data. Overall
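
    The velocity estimation underlying CMP processing is a straight-line fit: the normal-moveout relation t²(x) = t0² + x²/v² is linear in x², so the slope of t² against x² gives 1/v². A minimal sketch that also converts velocity to volumetric water content via the dielectric constant and Topp's empirical equation (the use of Topp's equation here is an assumption; the talk does not specify a petrophysical relation):

    ```python
    import numpy as np

    C = 0.2998  # speed of light, m/ns

    def cmp_velocity(offsets_m: np.ndarray, traveltimes_ns: np.ndarray) -> float:
        """Estimate radar wave velocity (m/ns) from a CMP gather using the
        normal-moveout relation t^2(x) = t0^2 + x^2 / v^2: a straight-line fit
        of t^2 against x^2 has slope 1 / v^2."""
        slope, _ = np.polyfit(offsets_m ** 2, traveltimes_ns ** 2, 1)
        return 1.0 / np.sqrt(slope)

    def topp_water_content(v: float) -> float:
        """Volumetric water content from velocity via the dielectric constant
        (epsilon = (c/v)^2) and Topp's empirical equation."""
        eps = (C / v) ** 2
        return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps**2 + 4.3e-6 * eps**3

    # Synthetic gather: reflector at t0 = 20 ns, true velocity 0.1 m/ns.
    x = np.linspace(0.2, 2.0, 10)
    t = np.sqrt(20.0**2 + (x / 0.1) ** 2)
    v = cmp_velocity(x, t)
    print(v, topp_water_content(v))
    ```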

  3. Nonlinear Algorithms for Channel Equalization and Map Symbol Detection.

    NASA Astrophysics Data System (ADS)

    Giridhar, K.

    The transfer of information through a communication medium invariably results in various kinds of distortion to the transmitted signal. In this dissertation, a feed-forward neural network-based equalizer, and a family of maximum a posteriori (MAP) symbol detectors are proposed for signal recovery in the presence of intersymbol interference (ISI) and additive white Gaussian noise. The proposed neural network-based equalizer employs a novel bit-mapping strategy to handle multilevel data signals in an equivalent bipolar representation. It uses a training procedure to learn the channel characteristics, and at the end of training, the multilevel symbols are recovered from the corresponding inverse bit-mapping. When the channel characteristics are unknown and no training sequences are available, blind estimation of the channel (or its inverse) and simultaneous data recovery is required. Convergence properties of several existing Bussgang-type blind equalization algorithms are studied through computer simulations, and a unique gain independent approach is used to obtain a fair comparison of their rates of convergence. Although simple to implement, the slow convergence of these Bussgang-type blind equalizers makes them unsuitable for many high data-rate applications. Rapidly converging blind algorithms based on the principle of MAP symbol-by-symbol detection are proposed, which adaptively estimate the channel impulse response (CIR) and simultaneously decode the received data sequence. Assuming a linear and Gaussian measurement model, the near-optimal blind MAP symbol detector (MAPSD) consists of a parallel bank of conditional Kalman channel estimators, where the conditioning is done on each possible data subsequence that can convolve with the CIR. This algorithm is also extended to the recovery of convolutionally encoded waveforms in the presence of ISI. Since the complexity of the MAPSD algorithm increases exponentially with the length of the assumed CIR, a suboptimal
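
    The MAP symbol-by-symbol rule is easiest to see in the ISI-free case: pick the symbol that maximizes the prior-weighted Gaussian likelihood of the received sample. A minimal sketch for PAM in AWGN; the constellation, priors, and noise level are illustrative, and this omits the channel estimation that the dissertation's MAPSD performs:

    ```python
    import numpy as np

    def map_symbol_detect(r: np.ndarray, symbols: np.ndarray,
                          priors: np.ndarray, sigma: float) -> np.ndarray:
        """MAP symbol-by-symbol detection of PAM symbols in AWGN (no ISI).

        For each received sample r[n], choose the symbol maximizing
        log P(s) - (r - s)^2 / (2 sigma^2). With equal priors this reduces
        to minimum-distance (ML) detection."""
        log_post = (np.log(priors)[None, :]
                    - (r[:, None] - symbols[None, :]) ** 2 / (2 * sigma**2))
        return symbols[np.argmax(log_post, axis=1)]

    # 4-PAM example with a non-uniform prior.
    symbols = np.array([-3.0, -1.0, 1.0, 3.0])
    priors = np.array([0.1, 0.4, 0.4, 0.1])
    rng = np.random.default_rng(0)
    tx = rng.choice(symbols, size=8, p=priors)
    rx = tx + 0.5 * rng.standard_normal(8)
    print(tx, map_symbol_detect(rx, symbols, priors, sigma=0.5))
    ```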

  4. Method for estimating potential wetland extent by utilizing streamflow statistics and flood-inundation mapping techniques: Pilot study for land along the Wabash River near Terre Haute, Indiana

    USGS Publications Warehouse

    Kim, Moon H.; Ritz, Christian T.; Arvin, Donald V.

    2012-01-01

    Potential wetland extents were estimated for a 14-mile reach of the Wabash River near Terre Haute, Indiana. This pilot study was completed by the U.S. Geological Survey in cooperation with the U.S. Department of Agriculture, Natural Resources Conservation Service (NRCS). The study showed that potential wetland extents can be estimated by analyzing streamflow statistics with the available streamgage data, calculating the approximate water-surface elevation along the river, and generating maps by use of flood-inundation mapping techniques. Planning successful restorations for Wetland Reserve Program (WRP) easements requires a determination of areas that show evidence of being in a zone prone to sustained or frequent flooding. Zone determinations of this type are used by WRP planners to define the actively inundated area and make decisions on restoration-practice installation. According to WRP planning guidelines, a site needs to show evidence of being in an "inundation zone" that is prone to sustained or frequent flooding for a period of 7 consecutive days at least once every 2 years on average in order to meet the planning criteria for determining a wetland for a restoration in agricultural land. By calculating the annual highest 7-consecutive-day mean discharge with a 2-year recurrence interval (7MQ2) at a streamgage on the basis of available streamflow data, one can determine the water-surface elevation corresponding to the calculated flow that defines the estimated inundation zone along the river. By using the estimated water-surface elevation ("inundation elevation") along the river, an approximate extent of potential wetland for a restoration in agricultural land can be mapped. As part of the pilot study, a set of maps representing the estimated potential wetland extents was generated in a geographic information system (GIS) application by combining (1) a digital water-surface plane representing the surface of inundation elevation that sloped in the downstream
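
    The 7MQ2 statistic itself reduces to a rolling mean, an annual maximum, and a recurrence-interval calculation. A minimal sketch, assuming the common approximation that a 2-year recurrence corresponds to the median of the annual series (the report does not state which frequency-analysis method was used):

    ```python
    import numpy as np
    import pandas as pd

    def seven_mq2(daily_flow: pd.Series) -> float:
        """Estimate the 7MQ2: the annual highest 7-consecutive-day mean
        discharge with a 2-year recurrence interval.

        Computes each year's maximum 7-day moving-average flow, then takes the
        median of the annual series (a 2-year recurrence corresponds to the
        50 % annual exceedance probability)."""
        seven_day_mean = daily_flow.rolling(window=7).mean()
        annual_max = seven_day_mean.groupby(daily_flow.index.year).max()
        return float(annual_max.median())

    # Synthetic 20-year daily record (units arbitrary, e.g., ft^3/s).
    idx = pd.date_range("2000-01-01", "2019-12-31", freq="D")
    rng = np.random.default_rng(42)
    flow = pd.Series(rng.lognormal(mean=5.0, sigma=0.8, size=len(idx)), index=idx)
    print(seven_mq2(flow))
    ```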

  5. www.common-metrics.org: a web application to estimate scores from different patient-reported outcome measures on a common scale.

    PubMed

    Fischer, H Felix; Rose, Matthias

    2016-10-19

    Recently, a growing number of Item Response Theory (IRT) models have been published, which allow estimation of a common latent variable from data derived from different Patient Reported Outcomes (PROs). When using data from different PROs, direct estimation of the latent variable has some advantages over the use of sum score conversion tables. It requires substantial proficiency in the field of psychometrics to fit such models using contemporary IRT software. We developed a web application ( http://www.common-metrics.org ), which allows easier estimation of latent variable scores using IRT models that calibrate different measures on instrument-independent scales. Currently, the application allows estimation using six different IRT models for depression, anxiety, and physical function. Based on published item parameters, users of the application can directly obtain latent trait estimates using expected a posteriori (EAP) estimation for sum scores as well as for specific response patterns, Bayes modal (MAP), weighted likelihood estimation (WLE) and maximum likelihood (ML) methods, under three different prior distributions. The obtained estimates can be downloaded and analyzed using standard statistical software. This application enhances the usability of IRT modeling for researchers by allowing comparison of the latent trait estimates over different PROs, such as the Patient Health Questionnaire Depression (PHQ-9) and Anxiety (GAD-7) scales, the Center of Epidemiologic Studies Depression Scale (CES-D), the Beck Depression Inventory (BDI), PROMIS Anxiety and Depression Short Forms and others. Advantages of this approach include comparability of data derived with different measures and tolerance against missing values. The validity of the underlying models needs to be investigated in the future.
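
    The EAP estimator the application exposes is a one-dimensional quadrature: the posterior mean of theta under the item response model and a prior. A minimal sketch for a 0/1 response pattern under the two-parameter logistic model with a standard normal prior; the item parameters are illustrative, not from any published calibration:

    ```python
    import numpy as np

    def eap_2pl(responses: np.ndarray, a: np.ndarray, b: np.ndarray,
                grid: np.ndarray = None):
        """EAP estimate (and posterior SD) of theta for a 0/1 response pattern
        under the two-parameter logistic model with a standard normal prior,
        using simple grid quadrature."""
        if grid is None:
            grid = np.linspace(-4, 4, 161)
        # P(correct | theta) for each grid point x item
        p = 1.0 / (1.0 + np.exp(-a[None, :] * (grid[:, None] - b[None, :])))
        like = np.prod(np.where(responses[None, :] == 1, p, 1.0 - p), axis=1)
        post = like * np.exp(-0.5 * grid**2)
        post /= post.sum()
        eap = float(np.sum(grid * post))
        sd = float(np.sqrt(np.sum((grid - eap) ** 2 * post)))
        return eap, sd

    # Illustrative item parameters (not from any published calibration).
    a = np.array([1.2, 0.8, 1.5, 1.0])   # discriminations
    b = np.array([-1.0, 0.0, 0.5, 1.0])  # difficulties
    print(eap_2pl(np.array([1, 1, 0, 0]), a, b))
    ```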

  6. Difficulties with estimating city-wide urban forest cover change from national, remotely-sensed tree canopy maps

    Treesearch

    Jeffrey T. Walton

    2008-01-01

    Two datasets of percent urban tree canopy cover were compared. The first dataset was based on a 1991 AVHRR forest density map. The second was the US Geological Survey's National Land Cover Database (NLCD) 2001 sub-pixel tree canopy. A comparison of these two tree canopy layers was conducted in 36 census designated places of western New York State. Reference data...

  7. Mapping land cover and estimating forest structure using satellite imagery and coarse resolution lidar in the Virgin Islands

    Treesearch

    T.A. Kennaway; E.H. Helmer; M.A. Lefsky; T.A. Brandeis; K.R. Sherill

    2008-01-01

    Current information on land cover, forest type and forest structure for the Virgin Islands is critical to land managers and researchers for accurate forest inventory and ecological monitoring. In this study, we use cloud free image mosaics of panchromatic sharpened Landsat ETM+ images and decision tree classification software to map land cover and forest type for the...

  8. Mapping land cover and estimating forest structure using satellite imagery and coarse resolution lidar in the Virgin Islands

    Treesearch

    Todd Kennaway; Eileen Helmer; Michael Lefsky; Thomas Brandeis; Kirk Sherrill

    2009-01-01

    Current information on land cover, forest type and forest structure for the Virgin Islands is critical to land managers and researchers for accurate forest inventory and ecological monitoring. In this study, we use cloud free image mosaics of panchromatic sharpened Landsat ETM+ images and decision tree classification software to map land cover and forest type for the...

  9. Can diligent and extensive mapping of faults provide reliable estimates of the expected maximum earthquakes at these faults? No. (Invited)

    NASA Astrophysics Data System (ADS)

    Bird, P.

    2010-12-01

    The hope expressed in the title question above can be contradicted in 5 ways, listed below. To summarize, an earthquake rupture can be larger than anticipated either because the fault system has not been fully mapped, or because the rupture is not limited to the pre-existing fault network. 1. Geologic mapping of faults is always incomplete due to four limitations: (a) Map-scale limitation: Faults below a certain (scale-dependent) apparent offset are omitted; (b) Field-time limitation: The most obvious fault(s) get(s) the most attention; (c) Outcrop limitation: You can't map what you can't see; and (d) Lithologic-contrast limitation: Intra-formation faults can be tough to map, so they are often assumed to be minor and omitted. If mapping is incomplete, fault traces may be longer and/or better-connected than we realize. 2. Fault trace “lengths” are unreliable guides to maximum magnitude. Fault networks have multiply-branching, quasi-fractal shapes, so fault “length” may be meaningless. Naming conventions for main strands are unclear, and rarely reviewed. Gaps due to Quaternary alluvial cover may not reflect deeper seismogenic structure. Mapped kinks and other “segment boundary asperities” may be only shallow structures. Also, some recent earthquakes have jumped and linked “separate” faults (Landers, California 1992; Denali, Alaska, 2002) [Wesnousky, 2006; Black, 2008]. 3. Distributed faulting (“eventually occurring everywhere”) is predicted by several simple theories: (a) Viscoelastic stress redistribution in plate/microplate interiors concentrates deviatoric stress upward until they fail by faulting; (b) Unstable triple-junctions (e.g., between 3 strike-slip faults) in 2-D plate theory require new faults to form; and (c) Faults which appear to end (on a geologic map) imply distributed permanent deformation. This means that all fault networks evolve and that even a perfect fault map would be incomplete for future ruptures. 4. A recent attempt

  10. Estimation of Flow Duration Curve for Ungauged Catchments using Adaptive Neuro-Fuzzy Inference System and Map Correlation Method: A Case Study from Turkey

    NASA Astrophysics Data System (ADS)

    Kentel, E.; Dogulu, N.

    2015-12-01

    In Turkey the experience and data required for a hydrological model setup are limited and very often not available. Moreover, there are many ungauged catchments with planned projects aimed at utilizing water resources, including development of the existing hydropower potential. This makes runoff prediction at data-scarce and ungauged locations, where small hydropower plants, reservoirs, etc. are planned, an increasingly significant challenge in the country. Flow duration curves have many practical applications in hydrology and integrated water resources management. Estimation of the flow duration curve (FDC) at ungauged locations is essential, particularly for hydropower feasibility studies and selection of installed capacities. In this study, we test and compare the performances of two methods for estimating FDCs in the Western Black Sea catchment, Turkey: (i) an FDC based on Map Correlation Method (MCM) flow estimates; MCM is a recently proposed method (Archfield and Vogel, 2010) which uses geospatial information to estimate flow, and flow measurements of stream gauging stations near the ungauged location are its only data requirement, which makes MCM very attractive for flow estimation in Turkey; (ii) an Adaptive Neuro-Fuzzy Inference System (ANFIS), a data-driven method used to relate the FDC to a number of variables representing catchment and climate characteristics, whose ease of implementation makes it very useful for practical purposes. Both methods use easily collectable data and are computationally efficient. Comparison of the results is based on two measures: the root mean squared error (RMSE) and the Nash-Sutcliffe efficiency (NSE). Ref: Archfield, S. A., and R. M. Vogel (2010), Map correlation method: Selection of a reference streamgage to estimate daily streamflow at ungaged catchments, Water Resour. Res., 46, W10513, doi:10.1029/2009WR008481.
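
    The FDC itself is a simple empirical construct, and the two comparison measures are standard. A minimal sketch of an empirical FDC plus RMSE and NSE, with synthetic flows standing in for the Turkish data:

    ```python
    import numpy as np

    def flow_duration_curve(flows: np.ndarray):
        """Empirical flow duration curve: sorted flows against exceedance
        probability, using the Weibull plotting position i / (n + 1)."""
        q = np.sort(flows)[::-1]
        p = np.arange(1, len(q) + 1) / (len(q) + 1.0)
        return p, q

    def rmse(obs, sim):
        return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(sim)) ** 2)))

    def nse(obs, sim):
        obs, sim = np.asarray(obs), np.asarray(sim)
        return float(1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

    rng = np.random.default_rng(7)
    observed = rng.lognormal(2.0, 1.0, size=365)
    p, q = flow_duration_curve(observed)
    simulated = q * rng.normal(1.0, 0.05, size=q.size)  # a stand-in "estimate"
    print(rmse(q, simulated), nse(q, simulated))
    ```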

  11. Improving snow density estimation for mapping SWE with Lidar snow depth: assessment of uncertainty in modeled density and field sampling strategies in NASA SnowEx

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Smyth, E.; Small, E. E.

    2017-12-01

    The spatial distribution of snow water equivalent (SWE) is not sufficiently monitored with either remotely sensed or ground-based observations for water resources management. Recent applications of airborne Lidar have yielded basin-wide mapping of SWE when combined with a snow density model. However, in the absence of snow density observations, the uncertainty in these SWE maps is dominated by uncertainty in modeled snow density rather than in Lidar measurement of snow depth. Available observations tend to have a bias in physiographic regime (e.g., flat open areas) and are often insufficient in number to support testing of models across a range of conditions. Thus, there is a need for targeted sampling strategies and controlled model experiments to understand where and why different snow density models diverge. This will enable identification of robust model structures that represent dominant processes controlling snow densification, in support of basin-scale estimation of SWE with remotely-sensed snow depth datasets. The NASA SnowEx mission is a unique opportunity to evaluate sampling strategies of snow density and to quantify and reduce uncertainty in modeled snow density. In this presentation, we present initial field data analyses and modeling results over the Colorado SnowEx domain in the 2016-2017 winter campaign. We detail a framework for spatially mapping the uncertainty in snowpack density, as represented across multiple models. Leveraging the modular SUMMA model, we construct a series of physically-based models to assess systematically the importance of specific process representations to snow density estimates. We will show how models and snow pit observations characterize snow density variations with forest cover in the SnowEx domains. Finally, we will use the spatial maps of density uncertainty to evaluate the selected locations of snow pits, thereby assessing the adequacy of the sampling strategy for targeting uncertainty in modeled snow density.
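
    The leverage of density errors on SWE follows directly from SWE = h·ρ_snow/ρ_water and first-order error propagation. A minimal sketch with illustrative numbers, showing why a typical modeled-density error dominates a centimeter-level Lidar depth error:

    ```python
    import numpy as np

    RHO_WATER = 1000.0  # kg m^-3

    def swe_m(depth_m: float, density_kg_m3: float) -> float:
        """Snow water equivalent (m) from depth and bulk density."""
        return depth_m * density_kg_m3 / RHO_WATER

    def swe_sigma(depth_m, density, sigma_depth, sigma_density):
        """First-order propagation of independent depth and density errors:
        sigma_SWE^2 = (rho/rho_w)^2 sigma_h^2 + (h/rho_w)^2 sigma_rho^2."""
        return np.sqrt((density / RHO_WATER * sigma_depth) ** 2
                       + (depth_m / RHO_WATER * sigma_density) ** 2)

    # Lidar depth error ~2 cm versus modeled density error ~40 kg m^-3:
    print(swe_m(1.5, 300.0))                  # 0.45 m SWE
    print(swe_sigma(1.5, 300.0, 0.02, 40.0))  # density term dominates
    ```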

  12. Land Cover Mapping using GEOBIA to Estimate Loss of Salacca zalacca Trees in Landslide Area of Clapar, Madukara District of Banjarnegara

    NASA Astrophysics Data System (ADS)

    Permata, Anggi; Juniansah, Anwar; Nurcahyati, Eka; Dimas Afrizal, Mousafi; Adnan Shafry Untoro, Muhammad; Arifatha, Na'ima; Ramadhani Yudha Adiwijaya, Raden; Farda, Nur Mohammad

    2016-11-01

    Landslides are unpredictable natural disasters that commonly happen in high-slope areas. Small-format aerial photography is an acquisition method that can reach and survey such areas, obtaining high-spatial-resolution data faster than other methods and providing products such as orthomosaics and digital surface models (DSMs). The study area covered the landslide in Clapar, Madukara District of Banjarnegara. Aerial photographs of the landslide area had the advantage of good object visibility: object characteristics such as shape, size, and texture were clearly visible, so GEOBIA (Geographic Object-Based Image Analysis) was a suitable method for classifying land cover in the study area. Unlike the per-pixel analyst (PPA) method, which uses spectral information as the basis for object detection, GEOBIA can use spatial elements as the basis of classification, establishing a land cover map with better accuracy. The GEOBIA method used a classification hierarchy to divide post-disaster land cover into three main classes: vegetation, landslide/soil, and building. These three classes were required to obtain the more detailed information needed to estimate the loss caused by the landslide and to establish a land cover map of the landslide area. Loss estimation in the landslide area concerned damage to Salak (Salacca zalacca) plantations: the number of Salak trees swept away by the landslide was estimated under the assumption that every tree damaged by the landslide had the same age and production class as the trees that were not damaged. The loss was calculated by approximating the number of damaged trees in the landslide area from data on the trees around the area, acquired with the GEOBIA classification method.

  13. Spatio-temporal reconstruction of air temperature maps and their application to estimate rice growing season heat accumulation using multi-temporal MODIS data*

    PubMed Central

    Zhang, Li-wen; Huang, Jing-feng; Guo, Rui-fang; Li, Xin-xing; Sun, Wen-bo; Wang, Xiu-zhen

    2013-01-01

    The accumulation of thermal time usually represents the local heat resources to drive crop growth. Maps of temperature-based agro-meteorological indices are commonly generated by the spatial interpolation of data collected from meteorological stations with coarse geographic continuity. To solve the critical problems of estimating air temperature (Ta) and filling in missing pixels due to cloudy and low-quality images in growing degree days (GDDs) calculation from remotely sensed data, a novel spatio-temporal algorithm for Ta estimation from Terra and Aqua moderate resolution imaging spectroradiometer (MODIS) data was proposed. This is a preliminary study to calculate heat accumulation, expressed in accumulative growing degree days (AGDDs) above 10 °C, from reconstructed Ta based on MODIS land surface temperature (LST) data. The maximum Ta, minimum Ta, GDD, and AGDD derived from MODIS data were verified against meteorological calculations, all showing high correlations significant at the 0.01 level. Overall, MODIS-derived AGDD was slightly underestimated, with almost 10% relative error. However, the feasibility of employing AGDD anomaly maps to characterize the 2001–2010 spatio-temporal variability of heat accumulation and estimating the 2011 heat accumulation distribution using only MODIS data was finally demonstrated in the current paper. Our study may supply a novel way to calculate AGDD in heat-related studies concerning crop growth monitoring, agricultural climatic regionalization, and agro-meteorological disaster detection at the regional scale. PMID:23365013

  14. Spatio-temporal reconstruction of air temperature maps and their application to estimate rice growing season heat accumulation using multi-temporal MODIS data.

    PubMed

    Zhang, Li-wen; Huang, Jing-feng; Guo, Rui-fang; Li, Xin-xing; Sun, Wen-bo; Wang, Xiu-zhen

    2013-02-01

    The accumulation of thermal time usually represents the local heat resources to drive crop growth. Maps of temperature-based agro-meteorological indices are commonly generated by the spatial interpolation of data collected from meteorological stations with coarse geographic continuity. To solve the critical problems of estimating air temperature (Ta) and filling in missing pixels due to cloudy and low-quality images in growing degree days (GDDs) calculation from remotely sensed data, a novel spatio-temporal algorithm for Ta estimation from Terra and Aqua moderate resolution imaging spectroradiometer (MODIS) data was proposed. This is a preliminary study to calculate heat accumulation, expressed in accumulative growing degree days (AGDDs) above 10 °C, from reconstructed Ta based on MODIS land surface temperature (LST) data. The maximum Ta, minimum Ta, GDD, and AGDD derived from MODIS data were verified against meteorological calculations, all showing high correlations significant at the 0.01 level. Overall, MODIS-derived AGDD was slightly underestimated, with almost 10% relative error. However, the feasibility of employing AGDD anomaly maps to characterize the 2001-2010 spatio-temporal variability of heat accumulation and estimating the 2011 heat accumulation distribution using only MODIS data was finally demonstrated in the current paper. Our study may supply a novel way to calculate AGDD in heat-related studies concerning crop growth monitoring, agricultural climatic regionalization, and agro-meteorological disaster detection at the regional scale.
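
    Both indices reduce to simple arithmetic on daily temperatures: GDD is the daily mean above the 10 °C base (floored at zero) and AGDD is its running sum. A minimal sketch with illustrative temperatures; the paper derives its Ta from MODIS LST rather than from station data:

    ```python
    import numpy as np

    BASE_T = 10.0  # deg C, the base temperature used in the paper

    def gdd(tmax: np.ndarray, tmin: np.ndarray, base: float = BASE_T) -> np.ndarray:
        """Daily growing degree days: mean temperature above the base, floored at 0."""
        return np.maximum((tmax + tmin) / 2.0 - base, 0.0)

    def agdd(tmax: np.ndarray, tmin: np.ndarray, base: float = BASE_T) -> np.ndarray:
        """Accumulative growing degree days over the season."""
        return np.cumsum(gdd(tmax, tmin, base))

    # Ten days of illustrative Tmax/Tmin (deg C):
    tmax = np.array([24, 26, 25, 28, 30, 29, 27, 26, 31, 32], dtype=float)
    tmin = np.array([14, 15, 13, 16, 18, 17, 15, 14, 19, 20], dtype=float)
    print(agdd(tmax, tmin)[-1])  # season total AGDD
    ```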

  15. A fast estimator for the bispectrum and beyond - a practical method for measuring non-Gaussianity in 21-cm maps

    NASA Astrophysics Data System (ADS)

    Watkinson, Catherine A.; Majumdar, Suman; Pritchard, Jonathan R.; Mondal, Rajesh

    2017-12-01

    In this paper, we establish the accuracy and robustness of a fast estimator for the bispectrum - the 'FFT-bispectrum estimator'. The implementation of the estimator presented here offers speed and simplicity benefits over a direct-measurement approach. We also generalize the derivation so it may easily be applied to any order of polyspectra, such as the trispectrum, at the cost of only a handful of fast Fourier transforms (FFTs). All lower order statistics can also be calculated simultaneously for little extra cost. To test the estimator, we make use of a non-linear density field, and for a more strongly non-Gaussian test case, we use a toy model of reionization in which ionized bubbles at a given redshift are all of equal size and are randomly distributed. Our tests find that the FFT-estimator remains accurate over a wide range of k, and so should be extremely useful for analysis of 21-cm observations. The speed of the FFT-bispectrum estimator makes it suitable for sampling applications, such as Bayesian inference. The algorithm we describe should prove valuable in the analysis of simulations and observations, and while we apply it within the field of cosmology, this estimator is useful in any field that deals with non-Gaussian data.
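
    The core trick of an FFT-based bispectrum estimator is that the sum over closed triangles q1 + q2 + q3 = 0 factorizes into a real-space sum over products of shell-filtered fields. A minimal sketch of that idea (not the authors' code; the physical normalization, which depends on box volume and FFT conventions, is omitted):

    ```python
    import numpy as np

    def fft_bispectrum(delta: np.ndarray, k_mags, dk: float) -> float:
        """FFT-based bispectrum estimator for a real field on a cubic grid.

        For shells s_i of radius k_i (width dk, grid-frequency units), the sum
        over closed triangles q1+q2+q3=0 of delta(q1)delta(q2)delta(q3) equals
        a real-space sum over products of shell-filtered fields, costing only a
        few FFTs. Returns the triangle-averaged bispectrum in FFT (grid) units.
        """
        n = delta.shape[0]
        freqs = np.fft.fftfreq(n) * n            # integer wavenumbers per axis
        kx, ky, kz = np.meshgrid(freqs, freqs, freqs, indexing="ij")
        kmag = np.sqrt(kx**2 + ky**2 + kz**2)
        field_k = np.fft.fftn(delta)

        filtered, counts = [], []
        for k in k_mags:
            mask = (np.abs(kmag - k) <= dk / 2).astype(float)
            filtered.append(np.fft.ifftn(field_k * mask).real)
            counts.append(np.fft.ifftn(mask).real)

        num = np.sum(filtered[0] * filtered[1] * filtered[2])
        n_tri = np.sum(counts[0] * counts[1] * counts[2])
        return num / n_tri

    # Equilateral configuration on a small Gaussian test field:
    rng = np.random.default_rng(3)
    delta = rng.standard_normal((32, 32, 32))
    print(fft_bispectrum(delta, k_mags=(4.0, 4.0, 4.0), dk=2.0))
    ```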

  16. Estimating temporal and spatial variation of ocean surface pCO2 in the North Pacific using a Self Organizing Map neural network technique

    NASA Astrophysics Data System (ADS)

    Nakaoka, S.; Telszewski, M.; Nojiri, Y.; Yasunaka, S.; Miyazaki, C.; Mukai, H.; Usui, N.

    2013-03-01

    This study produced maps of the partial pressure of oceanic carbon dioxide (pCO2sea) in the North Pacific on a 0.25° latitude × 0.25° longitude grid from 2002 to 2008. The pCO2sea values were estimated by using a self-organizing map neural network technique to explain the non-linear relationships between observed pCO2sea data and four oceanic parameters: sea surface temperature (SST), mixed layer depth, chlorophyll a concentration, and sea surface salinity (SSS). The observed pCO2sea data was obtained from an extensive dataset generated by the volunteer observation ship program operated by the National Institute for Environmental Studies. The reconstructed pCO2sea values agreed rather well with the pCO2sea measurements, the root mean square error being 17.6 μatm. The pCO2sea estimates were improved by including SSS as one of the training parameters and by taking into account secular increases of pCO2sea that have tracked increases in atmospheric CO2. Estimated pCO2sea values accurately reproduced pCO2sea data at several stations in the North Pacific. The distributions of pCO2sea revealed by seven-year averaged monthly pCO2sea maps were similar to Lamont-Doherty Earth Observatory pCO2sea climatology and more precisely reflected oceanic conditions. The distributions of pCO2sea anomalies over the North Pacific during the winter clearly showed regional contrasts between El Niño and La Niña years related to changes of SST and vertical mixing.
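
    A self-organizing map is a small codebook of weight vectors trained by nearest-neighbor updates with a shrinking neighborhood; after training, each neuron can be labelled with the mean pCO2sea of the observations it wins. A schematic sketch of the training loop only (grid size, learning-rate schedule, and the toy data are illustrative assumptions, not the NIES configuration):

    ```python
    import numpy as np

    def train_som(data, grid=(8, 8), iters=2000, seed=0):
        """Train a tiny self-organizing map on rows of `data`
        (n_samples x n_features). Returns the codebook of weight vectors,
        shape (grid_y, grid_x, n_features)."""
        rng = np.random.default_rng(seed)
        gy, gx = grid
        w = rng.standard_normal((gy, gx, data.shape[1]))
        coords = np.stack(np.meshgrid(np.arange(gy), np.arange(gx),
                                      indexing="ij"), axis=-1).astype(float)
        for t in range(iters):
            lr = 0.5 * (1 - t / iters) + 0.01           # decaying learning rate
            radius = max(gy, gx) / 2 * (1 - t / iters) + 0.5
            x = data[rng.integers(len(data))]
            bmu = np.unravel_index(np.argmin(((w - x) ** 2).sum(-1)), (gy, gx))
            dist2 = ((coords - np.array(bmu, float)) ** 2).sum(-1)
            h = np.exp(-dist2 / (2 * radius**2))        # neighborhood function
            w += lr * h[..., None] * (x - w)
        return w

    # Toy use: 4 standardized "oceanic parameters" (stand-ins for SST, mixed
    # layer depth, chlorophyll a, and SSS); label neurons with mean observed
    # pCO2sea afterwards, then look up unlabelled grid cells.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 4))
    print(train_som(X).shape)
    ```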

  17. On the downscaling of actual evapotranspiration maps based on combination of MODIS and landsat-based actual evapotranspiration estimates

    USGS Publications Warehouse

    Singh, Ramesh K.; Senay, Gabriel B.; Velpuri, Naga Manohar; Bohms, Stefanie; Verdin, James P.

    2014-01-01

     Downscaling is one of the important ways of utilizing the combined benefits of the high temporal resolution of Moderate Resolution Imaging Spectroradiometer (MODIS) images and fine spatial resolution of Landsat images. We have evaluated the output regression with intercept method and developed the Linear with Zero Intercept (LinZI) method for downscaling MODIS-based monthly actual evapotranspiration (AET) maps to the Landsat-scale monthly AET maps for the Colorado River Basin for 2010. We used the 8-day MODIS land surface temperature product (MOD11A2) and 328 cloud-free Landsat images for computing AET maps and downscaling. The regression with intercept method does have limitations in downscaling if the slope and intercept are computed over a large area. A good agreement was obtained between downscaled monthly AET using the LinZI method and the eddy covariance measurements from seven flux sites within the Colorado River Basin. The mean bias ranged from −16 mm (underestimation) to 22 mm (overestimation) per month, and the coefficient of determination varied from 0.52 to 0.88. Some discrepancies between measured and downscaled monthly AET at two flux sites were found to be due to the prevailing flux footprint. A reasonable comparison was also obtained between downscaled monthly AET using LinZI method and the gridded FLUXNET dataset. The downscaled monthly AET nicely captured the temporal variation in sampled land cover classes. The proposed LinZI method can be used at finer temporal resolution (such as 8 days) with further evaluation. The proposed downscaling method will be very useful in advancing the application of remotely sensed images in water resources planning and management.
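
    The record does not spell out the full LinZI workflow, but the zero-intercept fit it is named for has a one-line least-squares solution, b = Σxy / Σx². A minimal sketch under that reading, with synthetic AET values standing in for the MODIS and Landsat maps:

    ```python
    import numpy as np

    def linzi_slope(coarse: np.ndarray, fine_aggregated: np.ndarray) -> float:
        """Least-squares slope of a zero-intercept linear model
        fine = b * coarse:  b = sum(x*y) / sum(x*x).

        `coarse` holds MODIS-scale AET values and `fine_aggregated` the
        Landsat-based AET averaged up to the same MODIS cells."""
        return float(np.sum(coarse * fine_aggregated) / np.sum(coarse ** 2))

    # Schematic use: fit the slope, then apply it at the finer scale.
    rng = np.random.default_rng(5)
    modis_aet = rng.uniform(20, 120, size=200)             # monthly AET, mm
    landsat_aet_at_modis = 0.9 * modis_aet + rng.normal(0, 5, 200)
    b = linzi_slope(modis_aet, landsat_aet_at_modis)
    downscaled = b * modis_aet   # stand-in for applying b across Landsat pixels
    print(b)
    ```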

  18. Separable concatenated codes with iterative map decoding for Rician fading channels

    NASA Technical Reports Server (NTRS)

    Lodge, J. H.; Young, R. J.

    1993-01-01

    Very efficient signalling in radio channels requires the design of very powerful codes having special structure suitable for practical decoding schemes. In this paper, powerful codes are obtained by combining comparatively simple convolutional codes to form multi-tiered 'separable' convolutional codes. The decoding of these codes, using separable symbol-by-symbol maximum a posteriori (MAP) 'filters', is described. It is known that this approach yields impressive results in non-fading additive white Gaussian noise channels. Interleaving is an inherent part of the code construction, and consequently, these codes are well suited for fading channel communications. Here, simulation results for communications over Rician fading channels are presented to support this claim.

  19. Estimated Flood-Inundation Mapping for the Upper Blue River, Indian Creek, and Dyke Branch in Kansas City, Missouri, 2006-08

    USGS Publications Warehouse

    Kelly, Brian P.; Huizinga, Richard J.

    2008-01-01

    In the interest of improved public safety during flooding, the U.S. Geological Survey, in cooperation with the city of Kansas City, Missouri, completed a flood-inundation study of the Blue River in Kansas City, Missouri, from the U.S. Geological Survey streamflow gage at Kenneth Road to 63rd Street, of Indian Creek from the Kansas-Missouri border to its mouth, and of Dyke Branch from the Kansas-Missouri border to its mouth, to determine the estimated extent of flood inundation at selected flood stages on the Blue River, Indian Creek, and Dyke Branch. The results of this study spatially interpolate information provided by U.S. Geological Survey gages, Kansas City Automated Local Evaluation in Real Time gages, and the National Weather Service flood-peak prediction service that comprise the Blue River flood-alert system and are a valuable tool for public officials and residents to minimize flood deaths and damage in Kansas City. To provide public access to the information presented in this report, a World Wide Web site (http://mo.water.usgs.gov/indep/kelly/blueriver) was created that displays the results of two-dimensional modeling between Hickman Mills Drive and 63rd Street, estimated flood-inundation maps for 13 flood stages, the latest gage heights, and National Weather Service stage forecasts for each forecast location within the study area. The results of a previous study of flood inundation on the Blue River from 63rd Street to the mouth also are available. In addition the full text of this report, all tables and maps are available for download (http://pubs.usgs.gov/sir/2008/5068). Thirteen flood-inundation maps were produced at 2-foot intervals for water-surface elevations from 763.8 to 787.8 feet referenced to the Blue River at the 63rd Street Automated Local Evaluation in Real Time stream gage operated by the city of Kansas City, Missouri. Each map is associated with gages at Kenneth Road, Blue Ridge Boulevard, Kansas City (at Bannister Road), U.S. Highway 71

  20. Planck intermediate results: XLVI. Reduction of large-scale systematic effects in HFI polarization maps and estimation of the reionization optical depth

    SciTech Connect

    Aghanim, N.; Ashdown, M.; Aumont, J.

    This study describes the identification, modelling, and removal of previously unexplained systematic effects in the polarization data of the Planck High Frequency Instrument (HFI) on large angular scales, including new mapmaking and calibration procedures, new and more complete end-to-end simulations, and a set of robust internal consistency checks on the resulting maps. These maps, at 100, 143, 217, and 353 GHz, are early versions of those that will be released in final form later in 2016. The improvements allow us to determine the cosmic reionization optical depth τ using, for the first time, the low-multipole EE data from HFI, reducing significantly the central value and uncertainty, and hence the upper limit. Two different likelihood procedures are used to constrain τ from two estimators of the CMB E- and B-mode angular power spectra at 100 and 143 GHz, after debiasing the spectra from a small remaining systematic contamination. These all give fully consistent results. A further consistency test is performed using cross-correlations derived from the Low Frequency Instrument maps of the Planck 2015 data release and the new HFI data. For this purpose, end-to-end analyses of systematic effects from the two instruments are used to demonstrate the near independence of their dominant systematic error residuals. The tightest result comes from the HFI-based τ posterior distribution using the maximum likelihood power spectrum estimator from EE data only, giving a value 0.055 ± 0.009. Finally, in a companion paper these results are discussed in the context of the best-fit Planck ΛCDM cosmological model and recent models of reionization.

  1. Planck intermediate results: XLVI. Reduction of large-scale systematic effects in HFI polarization maps and estimation of the reionization optical depth

    DOE PAGES

    Aghanim, N.; Ashdown, M.; Aumont, J.; ...

    2016-12-12

    This study describes the identification, modelling, and removal of previously unexplained systematic effects in the polarization data of the Planck High Frequency Instrument (HFI) on large angular scales, including new mapmaking and calibration procedures, new and more complete end-to-end simulations, and a set of robust internal consistency checks on the resulting maps. These maps, at 100, 143, 217, and 353 GHz, are early versions of those that will be released in final form later in 2016. The improvements allow us to determine the cosmic reionization optical depth τ using, for the first time, the low-multipole EE data from HFI, reducing significantly the central value and uncertainty, and hence the upper limit. Two different likelihood procedures are used to constrain τ from two estimators of the CMB E- and B-mode angular power spectra at 100 and 143 GHz, after debiasing the spectra from a small remaining systematic contamination. These all give fully consistent results. A further consistency test is performed using cross-correlations derived from the Low Frequency Instrument maps of the Planck 2015 data release and the new HFI data. For this purpose, end-to-end analyses of systematic effects from the two instruments are used to demonstrate the near independence of their dominant systematic error residuals. The tightest result comes from the HFI-based τ posterior distribution using the maximum likelihood power spectrum estimator from EE data only, giving a value 0.055 ± 0.009. Finally, in a companion paper these results are discussed in the context of the best-fit Planck ΛCDM cosmological model and recent models of reionization.

  2. Planck intermediate results. XLVI. Reduction of large-scale systematic effects in HFI polarization maps and estimation of the reionization optical depth

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Aghanim, N.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Ballardini, M.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Basak, S.; Battye, R.; Benabed, K.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Carron, J.; Challinor, A.; Chiang, H. C.; Colombo, L. P. L.; Combet, C.; Comis, B.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Di Valentino, E.; Dickinson, C.; Diego, J. M.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fantaye, Y.; Finelli, F.; Forastieri, F.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frolov, A.; Galeotta, S.; Galli, S.; Ganga, K.; Génova-Santos, R. T.; Gerbino, M.; Ghosh, T.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Helou, G.; Henrot-Versillé, S.; Herranz, D.; Hivon, E.; Huang, Z.; Ilić, S.; Jaffe, A. H.; Jones, W. C.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Knox, L.; Krachmalnicoff, N.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Langer, M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Le Jeune, M.; Leahy, J. P.; Levrier, F.; Liguori, M.; Lilje, P. B.; López-Caniego, M.; Ma, Y.-Z.; Macías-Pérez, J. F.; Maggio, G.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Matarrese, S.; Mauri, N.; McEwen, J. D.; Meinhold, P. R.; Melchiorri, A.; Mennella, A.; Migliaccio, M.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Moss, A.; Mottet, S.; Naselsky, P.; Natoli, P.; Oxborrow, C. A.; Pagano, L.; Paoletti, D.; Partridge, B.; Patanchon, G.; Patrizii, L.; Perdereau, O.; Perotto, L.; Pettorino, V.; Piacentini, F.; Plaszczynski, S.; Polastri, L.; Polenta, G.; Puget, J.-L.; Rachen, J. P.; Racine, B.; Reinecke, M.; Remazeilles, M.; Renzi, A.; Rocha, G.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Ruiz-Granados, B.; Salvati, L.; Sandri, M.; Savelainen, M.; Scott, D.; Sirri, G.; Sunyaev, R.; Suur-Uski, A.-S.; Tauber, J. A.; Tenti, M.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Trombetti, T.; Valiviita, J.; Van Tent, F.; Vibert, L.; Vielva, P.; Villa, F.; Vittorio, N.; Wandelt, B. D.; Watson, R.; Wehus, I. K.; White, M.; Zacchei, A.; Zonca, A.

    2016-12-01

    This paper describes the identification, modelling, and removal of previously unexplained systematic effects in the polarization data of the Planck High Frequency Instrument (HFI) on large angular scales, including new mapmaking and calibration procedures, new and more complete end-to-end simulations, and a set of robust internal consistency checks on the resulting maps. These maps, at 100, 143, 217, and 353 GHz, are early versions of those that will be released in final form later in 2016. The improvements allow us to determine the cosmic reionization optical depth τ using, for the first time, the low-multipole EE data from HFI, reducing significantly the central value and uncertainty, and hence the upper limit. Two different likelihood procedures are used to constrain τ from two estimators of the CMB E- and B-mode angular power spectra at 100 and 143 GHz, after debiasing the spectra from a small remaining systematic contamination. These all give fully consistent results. A further consistency test is performed using cross-correlations derived from the Low Frequency Instrument maps of the Planck 2015 data release and the new HFI data. For this purpose, end-to-end analyses of systematic effects from the two instruments are used to demonstrate the near independence of their dominant systematic error residuals. The tightest result comes from the HFI-based τ posterior distribution using the maximum likelihood power spectrum estimator from EE data only, giving a value 0.055 ± 0.009. In a companion paper these results are discussed in the context of the best-fit Planck ΛCDM cosmological model and recent models of reionization.

  3. Model Parameter Estimation Using Ensemble Data Assimilation: A Case with the Nonhydrostatic Icosahedral Atmospheric Model NICAM and the Global Satellite Mapping of Precipitation Data

    NASA Astrophysics Data System (ADS)

    Kotsuki, Shunji; Terasaki, Koji; Yashiro, Hasashi; Tomita, Hirofumi; Satoh, Masaki; Miyoshi, Takemasa

    2017-04-01

    This study aims to improve precipitation forecasts from numerical weather prediction (NWP) models through effective use of satellite-derived precipitation data. Kotsuki et al. (2016, JGR-A) successfully improved the precipitation forecasts by assimilating the Japan Aerospace eXploration Agency (JAXA)'s Global Satellite Mapping of Precipitation (GSMaP) data into the Nonhydrostatic Icosahedral Atmospheric Model (NICAM) at 112-km horizontal resolution. Kotsuki et al. mitigated the non-Gaussianity of the precipitation variables by the Gaussian transform method for observed and forecasted precipitation using the previous 30-day precipitation data. This study extends the previous study by Kotsuki et al. and explores an online estimation of model parameters using ensemble data assimilation. We choose two globally-uniform parameters, one is the cloud-to-rain auto-conversion parameter of the Berry's scheme for large scale condensation and the other is the relative humidity threshold of the Arakawa-Schubert cumulus parameterization scheme. We perform the online-estimation of the two model parameters with an ensemble transform Kalman filter by assimilating the GSMaP precipitation data. The estimated parameters improve the analyzed and forecasted mixing ratio in the lower troposphere. Therefore, the parameter estimation would be a useful technique to improve the NWP models and their forecasts. This presentation will include the most recent progress up to the time of the symposium.

  4. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    NASA Technical Reports Server (NTRS)

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

    The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional - global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100 m, 250 m, 500 m, and 1 km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data are most useful for estimating AGB when measurements from LiDAR are limited because they minimized

  5. An Evaluation of Population Density Mapping and Built up Area Estimates in Sri Lanka Using Multiple Methodologies

    NASA Astrophysics Data System (ADS)

    Engstrom, R.; Soundararajan, V.; Newhouse, D.

    2017-12-01

    In this study we examine how well multiple population density and built-up estimates that utilize satellite data compare in Sri Lanka. The population relationship is examined at the Gram Niladhari (GN) level, the lowest administrative unit in Sri Lanka, using the 2011 census. For this study we have two spatial domains: the whole country, and a 3,500 km2 sub-sample for which we have complete high spatial resolution imagery coverage. For both the entire country and the sub-sample, we examine how consistent the existing publicly available satellite-derived measures of population are at predicting population density. For the sub-sample alone, we examine how well a suite of values derived from high spatial resolution satellite imagery predicts population density, and how our built-up area estimate compares to other publicly available estimates. Population measures were obtained from the Sri Lankan census and were downloaded from Facebook, WorldPop, GPW, and Landscan. Percentage built-up area at the GN level was calculated from three sources: Facebook, Global Urban Footprint (GUF), and the Global Human Settlement Layer (GHSL). For the sub-sample we derived a variety of indicators from the high spatial resolution imagery using deep learning convolutional neural networks, an object-oriented approach, and a non-overlapping block spatial feature approach. Variables calculated include cars, shadows (a proxy for building height), built-up area, buildings, roof types, roads, type of agriculture, NDVI, Pantex, Histogram of Oriented Gradients (HOG) and others. Results indicate that population estimates are accurate at the higher, DS Division level but not necessarily at the GN level. Estimates from Facebook correlated well with census population (GN correlation of 0.91) but measures from GPW and WorldPop are more weakly correlated (0.64 and 0.34). Estimates of built-up area appear to be reliable. In the 32 DSD-subsample, Facebook's built-up area measure

  6. Mapping and estimating land change between 2001 and 2013 in a heterogeneous landscape in West Africa: Loss of forestlands and capacity building opportunities

    NASA Astrophysics Data System (ADS)

    Badjana, Hèou Maléki; Olofsson, Pontus; Woodcock, Curtis E.; Helmschrot, Joerg; Wala, Kpérkouma; Akpagana, Koffi

    2017-12-01

    In West Africa, accurate classification of land cover and land change remains a major challenge due to the patchy and heterogeneous nature of the landscape. Limited data availability, human resources and technical capacities further exacerbate the challenge. The result is a region that is among the more understudied areas in the world, which in turn has resulted in a lack of appropriate information required for sustainable natural resources management. The objective of this paper is to explore open source software and easy-to-implement approaches to mapping and estimation of land change that are transferable to local institutions to increase capacity in the region, and to provide updated information on the regional land surface dynamics. To achieve these objectives, stable land cover and land change between 2001 and 2013 in the Kara River Basin in Togo and Benin were mapped by direct multitemporal classification of Landsat data, through parameterization and evaluation of two machine-learning algorithms. Areas of land cover and change were estimated by application of an unbiased estimator to sample data following international guidelines. A prerequisite for all tools and methods was implementation in an open source environment, and adherence to international guidelines for reporting land surface activities. Findings include a recommendation of the Random Forests algorithm as implemented in Orfeo Toolbox, and a stratified estimation protocol - all executed in the QGIS graphical user interface. It was found that despite an estimated reforestation of 10,727 ± 3480 ha (95% confidence interval), the combined forest and savannah loss amounted to 56,271 ± 9405 ha (representing a 16% loss of the forestlands present in 2001), resulting in a rather sharp net loss of forestlands in the study area. These dynamics had not been estimated prior to this study, and the results will provide useful information for decision making pertaining to natural resources management, land
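
    The unbiased area estimator referred to above follows standard good-practice guidance for stratified samples. The sketch below is a generic illustration (not the authors' code; variable names and toy numbers are invented): per-stratum class proportions are weighted by the strata's map fractions, with an approximate standard error.

      import numpy as np

      def stratified_area(counts, strata_weights, total_area_ha):
          """counts[i, k]: sample units in stratum i with reference class k;
          strata_weights[i]: fraction W_i of the map in stratum i."""
          counts = np.asarray(counts, dtype=float)
          W = np.asarray(strata_weights, dtype=float)
          n_i = counts.sum(axis=1)
          p_ik = counts / n_i[:, None]
          p_k = (W[:, None] * p_ik).sum(axis=0)          # unbiased proportions
          var_k = (W[:, None]**2 * p_ik * (1 - p_ik) / (n_i[:, None] - 1)).sum(axis=0)
          return p_k * total_area_ha, np.sqrt(var_k) * total_area_ha

      # toy example: 2 strata, 3 classes (stable forest, loss, other)
      areas, se = stratified_area([[80, 15, 5], [10, 70, 20]], [0.9, 0.1], 1_000_000)
      print(areas, 1.96 * se)   # estimates with ~95% CI half-widths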

  7. Parallel implementation and evaluation of motion estimation system algorithms on a distributed memory multiprocessor using knowledge based mappings

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Several techniques for static and dynamic load balancing in vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when they are produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same or have similar computational characteristics. These techniques are evaluated by applying them to a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains from these data decomposition and load balancing techniques are significant, while the overhead of using them is minimal.

  8. Estimating temporal and spatial variation of ocean surface pCO2 in the North Pacific using a self-organizing map neural network technique

    NASA Astrophysics Data System (ADS)

    Nakaoka, S.; Telszewski, M.; Nojiri, Y.; Yasunaka, S.; Miyazaki, C.; Mukai, H.; Usui, N.

    2013-09-01

    This study uses a neural network technique to produce maps of the partial pressure of oceanic carbon dioxide (pCO2sea) in the North Pacific on a 0.25° latitude × 0.25° longitude grid from 2002 to 2008. The pCO2sea distribution was computed using a self-organizing map (SOM) originally utilized to map pCO2sea in the North Atlantic. Four proxy parameters - sea surface temperature (SST), mixed layer depth, chlorophyll a concentration, and sea surface salinity (SSS) - are used during the training phase to enable the network to resolve the nonlinear relationships between the pCO2sea distribution and the biogeochemistry of the basin. The observed pCO2sea data were obtained from an extensive dataset generated by the volunteer observation ship program operated by the National Institute for Environmental Studies (NIES). The reconstructed pCO2sea values agreed well with the pCO2sea measurements, with the root-mean-square error ranging from 17.6 μatm (for the NIES dataset used in the SOM) to 20.2 μatm (for an independent dataset). We confirmed that the pCO2sea estimates could be improved by including SSS as one of the training parameters and by taking into account secular increases of pCO2sea that have tracked increases in atmospheric CO2. Estimated pCO2sea values accurately reproduced pCO2sea data at several time series locations in the North Pacific. The distributions of pCO2sea revealed by the 7-year averaged monthly pCO2sea maps were similar to the Lamont-Doherty Earth Observatory pCO2sea climatology, while allowing for a more detailed analysis of biogeochemical conditions. The distributions of pCO2sea anomalies over the North Pacific during the winter clearly showed regional contrasts between El Niño and La Niña years related to changes in SST and vertical mixing.
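
    The SOM machinery itself is compact enough to sketch. The following is a minimal NumPy implementation for intuition only (grid size, decay schedules and the toy data are assumptions, not the configuration used in the study): each training step pulls the best-matching neuron and its neighbourhood toward a standardized [SST, MLD, chl-a, SSS] sample, and trained neurons are afterwards labelled with the mean observed pCO2sea of the samples they win.

      import numpy as np

      rng = np.random.default_rng(0)

      def train_som(X, grid=(10, 10), iters=5000, lr0=0.5, sigma0=3.0):
          gx, gy = grid
          W = rng.normal(size=(gx, gy, X.shape[1]))
          jj, kk = np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij")
          for t in range(iters):
              x = X[rng.integers(len(X))]
              bmu = np.unravel_index(np.argmin(((W - x)**2).sum(-1)), (gx, gy))
              lr = lr0 * np.exp(-t / iters)          # decaying learning rate
              sig = sigma0 * np.exp(-t / iters)      # shrinking neighbourhood
              h = np.exp(-((jj - bmu[0])**2 + (kk - bmu[1])**2) / (2 * sig**2))
              W += lr * h[..., None] * (x - W)       # pull neighbourhood toward x
          return W

      X = rng.normal(size=(500, 4))   # standardized [SST, MLD, chl-a, SSS]
      W = train_som(X, iters=1000)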

  9. PEPIS: A Pipeline for Estimating Epistatic Effects in Quantitative Trait Locus Mapping and Genome-Wide Association Studies.

    PubMed

    Zhang, Wenchao; Dai, Xinbin; Wang, Qishan; Xu, Shizhong; Zhao, Patrick X

    2016-05-01

    The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the 'missing heritability,' which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, which significantly increases computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS), for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculation and for main and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, significantly improving computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned results identical to those of the original prototype R code at each analysis step, but the computational time was reduced from more than one month to about five minutes. These advances will help overcome the bottleneck frequently encountered in genome-wide epistatic genetic effect analysis and enable accommodation of the high computational demand. The PEPIS is publicly available at http://bioinfo.noble.org/PolyGenic_QTL/.
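
    The abstract does not spell out PEPIS's internal formulas, but a common construction in polygenic epistasis models of this kind (offered here as an illustration, not necessarily the PEPIS implementation) derives the additive-by-additive epistatic kinship as the Hadamard square of the additive kinship:

      import numpy as np

      def kinship_matrices(Z):
          """Z: n x m matrix of marker genotype codes (individuals x markers)."""
          Zc = Z - Z.mean(axis=0)         # center marker codes
          K_a = Zc @ Zc.T / Zc.shape[1]   # additive (polygenic) kinship
          K_aa = K_a * K_a                # additive x additive epistatic kinship
          return K_a, K_aa

      # Variance components of y = mu + u_a + u_aa + e can then be estimated
      # with a linear mixed model using K_a and K_aa as covariance structures.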

  10. Estimation and analysis of the short-term variations of multi-GNSS receiver differential code biases using global ionosphere maps

    NASA Astrophysics Data System (ADS)

    Li, Min; Yuan, Yunbin; Wang, Ningbo; Liu, Teng; Chen, Yongchang

    2017-12-01

    Care should be taken to minimize the adverse impact of differential code biases (DCBs) on global navigation satellite system (GNSS)-derived ionospheric information. For the sake of convenience, satellite and receiver DCB products provided by the International GNSS Service (IGS) are treated as constants over a period of 24 h (Li et al. 2014). However, if DCB estimates show remarkable intra-day variability, DCBs estimated as constants over a 1-day period will partially absorb ionospheric modeling error; in this case DCBs need to be estimated over shorter time periods. It is therefore important to gain further insight into the short-term variation characteristics of receiver DCBs. In this contribution, the IGS combined global ionospheric maps and the German Aerospace Center (DLR)-provided satellite DCBs are used in an improved method to determine multi-GNSS receiver DCBs with an hourly time resolution. The intra-day stability of the receiver DCBs is thereby analyzed in detail. Based on 1 month of data collected within the multi-GNSS experiment of the IGS, a good agreement is found between the resulting receiver DCB estimates and the multi-GNSS DCB products from the DLR, at a level of 0.24 ns for GPS, 0.28 ns for GLONASS, 0.28 ns for BDS, and 0.30 ns for Galileo. Although most of the receiver DCBs are relatively stable over a 1-day period, large fluctuations (more than 9 ns between two consecutive hours) can be found. We also demonstrate the impact of significant short-term variations in receiver DCBs on the extraction of ionospheric total electron content (TEC), at a level of 12.96 TECu (TEC unit). Compared to daily receiver DCB estimates, the hourly estimates obtained in this study reflect short-term variations of the DCBs more faithfully. The main conclusion is that preliminary analysis of characteristics of receiver DCB variations over short
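
    Conceptually, an hourly receiver DCB drops out of the geometry-free pseudorange once the ionospheric term is fixed by a global ionospheric map and the satellite DCBs are held to external values. The sketch below is a simplified, hypothetical illustration (GPS L1/L2 only, a single-layer mapping function at 450 km, no weighting or outlier screening):

      import numpy as np

      C = 299_792_458.0                 # m/s
      F1, F2 = 1575.42e6, 1227.60e6     # GPS L1/L2 frequencies (Hz)
      ALPHA = 40.3e16 * (1/F2**2 - 1/F1**2)   # ~0.105 m per TECU on P2-P1

      def hourly_receiver_dcb(p1, p2, vtec_gim, elev_rad, sat_dcb_sec):
          """One hour of observations: P2 - P1 = ALPHA*STEC + c*(DCB_r + DCB_s).
          GIM VTEC (TECU) is mapped to slant TEC with a single-layer model;
          sat_dcb_sec are the fixed satellite DCBs in seconds."""
          Re, h = 6371e3, 450e3
          mf = 1.0 / np.sqrt(1 - (Re * np.cos(elev_rad) / (Re + h))**2)
          stec = vtec_gim * mf                                # TECU
          resid = (p2 - p1) - ALPHA * stec - C * sat_dcb_sec  # leaves c*DCB_r
          return np.mean(resid) / C * 1e9                     # receiver DCB (ns)

      # Averaging over all satellites and epochs of the hour yields one DCB_r
      # per hour, whose hour-to-hour scatter can then be analyzed.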

  11. Scale Estimation and Correction of the Monocular Simultaneous Localization and Mapping (SLAM) Based on Fusion of 1D Laser Range Finder and Vision Data.

    PubMed

    Zhang, Zhuang; Zhao, Rujin; Liu, Enhai; Yan, Kun; Ma, Yuebo

    2018-06-15

    This article presents a new sensor fusion method for visual simultaneous localization and mapping (SLAM) through integration of a monocular camera and a 1D laser range finder. Such a fusion method provides scale estimation and drift correction; it is not limited by volume, unlike a stereo camera, which is constrained by its baseline, and it overcomes the limited depth range associated with SLAM for RGBD cameras. We first present the analytical feasibility of estimating the absolute scale through the fusion of 1D distance information and image information. Next, the analytical derivation of the laser-vision fusion is described in detail, based on local dense reconstruction of the image sequences. We also correct the scale drift of the monocular SLAM using the laser distance information, which is independent of the drift error. Finally, application of this approach to both indoor and outdoor scenes is verified on the Technical University of Munich (TUM) RGBD dataset and self-collected data. We compare the effects of the scale estimation and drift correction of the proposed method against SLAM with a monocular camera alone and with an RGBD camera.
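
    The full derivation relies on local dense reconstruction, but the core scale-recovery idea can be stated very simply (a schematic sketch under the assumption that laser ranges have already been associated with the reconstructed depths along the laser direction; not the authors' algorithm):

      import numpy as np

      def estimate_scale(laser_ranges, slam_depths):
          """Metric scale of a monocular SLAM map from paired 1D laser
          ranges and up-to-scale reconstructed depths; the median makes
          the estimate robust to association outliers."""
          return np.median(np.asarray(laser_ranges) / np.asarray(slam_depths))

      scale = estimate_scale([2.10, 2.55, 3.02], [0.70, 0.86, 1.00])
      trajectory_metric = scale * np.array([[0.0, 0.1, 0.0], [0.2, 0.3, 0.1]])
      print(scale)   # ~3.0 metric units per SLAM unit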

  12. Clavulanic acid production estimation based on color and structural features of Streptomyces clavuligerus bacteria using self-organizing map and genetic algorithm.

    PubMed

    Nurmohamadi, Maryam; Pourghassem, Hossein

    2014-05-01

    The utilization of antibiotics produced with clavulanic acid (CA) is an increasing need in medicine and industry. Usually, CA is produced from the fermentation of Streptomyces clavuligerus (SC) bacteria. Analysis of visual and morphological features of SC bacteria is an appropriate measure to estimate the production of CA. In this paper, an automatic and fast CA production level estimation algorithm based on visual and structural features of SC bacteria, instead of statistical methods and experimental evaluation by microbiologists, is proposed. In this algorithm, structural features such as the number of newborn branches, hyphal thickness and bacterial density, as well as color features such as acceptance color levels, are extracted from the SC bacteria. Moreover, the pH and biomass of the medium provided by microbiologists are considered as specified features. The level of CA production is estimated using a new application of the Self-Organizing Map (SOM) and a hybrid model of a genetic algorithm with a back-propagation network (GA-BPN). The proposed algorithm is evaluated on four carbonic resources, including malt, starch, wheat flour and glycerol, that were used as different mediums of bacterial growth. The obtained results are then compared and evaluated against the observations of a specialist. Finally, the Relative Errors (RE) achieved for the SOM and GA-BPN are 14.97% and 16.63%, respectively. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  13. Cerebral Hyperperfusion Syndrome After Revascularization Surgery in Moyamoya Disease: Region-Symptom Mapping and Estimating a Critical Threshold.

    PubMed

    Kazumata, Ken; Uchino, Haruto; Tokairin, Kikutaro; Ito, Masaki; Shiga, Tohru; Osanai, Toshiya; Kawabori, Masahito

    2018-06-01

    Cerebral hyperperfusion complicates the postoperative course of patients with moyamoya disease after direct revascularization surgery. There is no clear distinction between cerebral hyperperfusion syndrome and a benign postoperative increase in regional cerebral blood flow (rCBF). The present study aimed to determine clinically relevant changes in rCBF, anatomical correlations, and factors associated with transient neurologic symptoms after revascularization surgery in moyamoya disease. Whole-brain voxel-based perfusion mapping was used to identify regions involved in cerebral hyperperfusion and quantify the changes in 105 hemispheric surgeries, using single-photon emission computed tomography acquired on postoperative day 7. The changes in rCBF were quantitatively analyzed, and associations with cerebral hyperperfusion syndrome were determined. Transient neurologic symptoms appeared with rCBF increase in 37.9% of adults. Speech impairments were associated with an increase in rCBF in the operculo-insula region. Cheiro-oral syndrome was associated with the posterior insula as well as the prefrontal region. A receiver operating characteristic curve analysis identified transient neurologic symptoms with maximum accuracy at a >15.5% increase from baseline. Age and preoperative rCBF were independently associated with transient neurologic symptoms (P < 0.001). Areas showing rCBF increase during the experience of transient neurologic symptoms were spatially compatible with the known functional anatomy of the brain. An increase of approximately 15% from baseline was found to be critical, a far lower threshold than previously reported. Increasing age was significantly associated with the occurrence of symptomatic hyperperfusion. Furthermore, patients with preserved rCBF also showed symptomatic hyperperfusion. Copyright © 2018 Elsevier Inc. All rights reserved.

  14. Estimating Forest Management Units from Road Network Maps in the Southeastern U.S.

    NASA Astrophysics Data System (ADS)

    Yang, D.; Hall, J.; Fu, C. S.; Binford, M. W.

    2015-12-01

    The most important factor affecting forest structure and function is the type of management undertaken in forest stands. Owners manage forests using appropriately sized areas to meet management objectives, which include economic return, sustainability, recreation, or esthetic enjoyment. Thus, the socio-environmental unit of study for forests should be the management unit. To study the ecological effects of different kinds of management activities, we must identify individual management units. Road networks, which provide access for human activities, are widely used in managing forests in the southeastern U.S. Coastal Plain and Piedmont (SEUS). Our research question in this study is: how can we identify individual forest management units in an entire region? To answer it, we hypothesize that the road network defines management units on the landscape. Road-caused canopy openings are not always captured by satellite sensors, so it is difficult to delineate ecologically relevant patches based only on remote sensing data. We used a reliable, accurate and freely available road network dataset, OpenStreetMap (OSM), together with the National Land Cover Database (NLCD), to delineate management units in a section of the SEUS defined by the Landsat Worldwide Reference System (WRS) II footprint at path 17, row 39. The spatial frequency distributions of forest management units indicate that while units < 0.5 Ha comprised 64% of the units, these small units covered only 0.98% of the total forest area. Management units ≥ 0.5 Ha ranged from 0.5 to 160,770 Ha (the Okefenokee National Wildlife Refuge). We compared the size-frequency distributions of management units across four independently derived management types: production, ecological, preservation, and passive management. Preservation and production management had the largest units, at 40.5 ± 2196.7 (s.d.) and 41.3 ± 273.5 Ha, respectively. Ecological and passive management units averaged about half as large, at 19.2 ± 91.5 and 22.4 ± 96.0 Ha, respectively.

  15. Motion corrected DWI with integrated T2-mapping for simultaneous estimation of ADC, T2-relaxation and perfusion in prostate cancer.

    PubMed

    Skorpil, M; Brynolfsson, P; Engström, M

    2017-06-01

    Multiparametric magnetic resonance imaging (MRI) and PI-RADS (Prostate Imaging - Reporting and Data System) have become the standard for determining a probability score for a lesion being a clinically significant prostate cancer. T2-weighted and diffusion-weighted imaging (DWI) are essential in PI-RADS, depending partly on visual assessment of signal intensity, while dynamic contrast-enhanced imaging is less important. To decrease inter-rater variability and further standardize image evaluation, complementary objective measures are needed. We here demonstrate a sequence enabling simultaneous quantification of the apparent diffusion coefficient (ADC) and T2 relaxation, as well as calculation of the perfusion fraction f from low b-value intravoxel incoherent motion data. Expandable wait pulses were added to a FOCUS DW SE-EPI sequence, allowing the effective echo time to change at run time. To calculate both ADC and f, b-values of 200 s/mm2 and 600 s/mm2 were chosen, and for T2 estimation, 6 echo times between 64.9 ms and 114.9 ms were used. Three patients with prostate cancer were examined; all had significantly decreased ADC and T2 values, while f was significantly increased in 2 of 3 tumors. T2 maps obtained in phantom measurements and in a healthy volunteer were compared to T2 maps from an SE sequence with consecutive scans, showing good agreement. In addition, a motion correction procedure was implemented to reduce the effects of prostate motion, which improved T2 estimation. This sequence could potentially enable more objective tumor grading and decrease inter-rater variability in PI-RADS classification. Copyright © 2017 Elsevier Inc. All rights reserved.
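
    With the acquisition described above, the voxel-wise quantities follow from two short fits. The sketch below illustrates a segmented IVIM calculation and a log-linear T2 fit (assuming a b=0 image is also available for the perfusion fraction; the numbers are illustrative, not patient data):

      import numpy as np

      def adc_and_f(s0, s200, s600):
          """Tissue diffusivity D from b = 200/600 s/mm2 (perfusion largely
          suppressed at these b-values); perfusion fraction f from the b=0
          intercept of the extrapolated mono-exponential."""
          D = np.log(s200 / s600) / (600.0 - 200.0)   # mm2/s
          f = 1.0 - s200 * np.exp(200.0 * D) / s0
          return D, f

      def t2_fit(signals, tes_ms):
          """Log-linear T2 fit over the echo times (e.g., 64.9-114.9 ms)."""
          slope, _ = np.polyfit(np.asarray(tes_ms), np.log(np.asarray(signals)), 1)
          return -1.0 / slope                          # T2 in ms

      print(adc_and_f(1000.0, 820.0, 610.0))           # (ADC, f)
      print(t2_fit([500, 450, 405, 364, 328, 295],
                   [64.9, 74.9, 84.9, 94.9, 104.9, 114.9]))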

  16. In situ Biological Dose Mapping Estimates the Radiation Burden Delivered to ‘Spared’ Tissue between Synchrotron X-Ray Microbeam Radiotherapy Tracks

    PubMed Central

    Rothkamm, Kai; Crosbie, Jeffrey C.; Daley, Frances; Bourne, Sarah; Barber, Paul R.; Vojnovic, Borivoj; Cann, Leonie; Rogers, Peter A. W.

    2012-01-01

    Microbeam radiation therapy (MRT) using high doses of synchrotron X-rays can destroy tumours in animal models whilst causing little damage to normal tissues. Determining the spatial distribution of radiation doses delivered during MRT at a microscopic scale is a major challenge. Film and semiconductor dosimetry as well as Monte Carlo methods struggle to provide accurate estimates of dose profiles and peak-to-valley dose ratios at the position of the targeted and traversed tissues whose biological responses determine treatment outcome. The purpose of this study was to utilise γ-H2AX immunostaining as a biodosimetric tool that enables in situ biological dose mapping within an irradiated tissue to provide direct biological evidence for the scale of the radiation burden to ‘spared’ tissue regions between MRT tracks. Γ-H2AX analysis allowed microbeams to be traced and DNA damage foci to be quantified in valleys between beams following MRT treatment of fibroblast cultures and murine skin where foci yields per unit dose were approximately five-fold lower than in fibroblast cultures. Foci levels in cells located in valleys were compared with calibration curves using known broadbeam synchrotron X-ray doses to generate spatial dose profiles and calculate peak-to-valley dose ratios of 30–40 for cell cultures and approximately 60 for murine skin, consistent with the range obtained with conventional dosimetry methods. This biological dose mapping approach could find several applications both in optimising MRT or other radiotherapeutic treatments and in estimating localised doses following accidental radiation exposure using skin punch biopsies. PMID:22238667

  17. Extension of the Optimized Virtual Fields Method to estimate viscoelastic material parameters from 3D dynamic displacement fields

    PubMed Central

    Connesson, N.; Clayton, E.H.; Bayly, P.V.; Pierron, F.

    2015-01-01

    In-vivo measurement of the mechanical properties of soft tissues is essential to provide necessary data in biomechanics and medicine (early cancer diagnosis, study of traumatic brain injuries, etc.). Imaging techniques such as Magnetic Resonance Elastography (MRE) can provide 3D displacement maps in the bulk and in vivo, from which, using inverse methods, it is possible to identify some mechanical parameters of the tissues (stiffness, damping, etc.). The main difficulties in these inverse identification procedures are dealing with the pressure waves contained in the data and with the experimental noise perturbing the spatial derivatives required during processing. The Optimized Virtual Fields Method (OVFM) [1], designed to be robust to noise, presents a natural and rigorous solution to these problems. The OVFM has been adapted to identify material parameter maps from Magnetic Resonance Elastography (MRE) data consisting of 3-dimensional displacement fields in harmonically loaded soft materials. In this work, the method has been developed to identify elastic and viscoelastic models. The OVFM sensitivity to spatial resolution and to noise has been studied by analyzing 3D analytically simulated displacement data. This study evaluates and describes the OVFM identification performance: different biases on the identified parameters are induced by the spatial resolution and experimental noise. The well-known identification problems in the case of quasi-incompressible materials also find a natural solution in the OVFM. Moreover, an a posteriori criterion to estimate the local identification quality is proposed. The identification results obtained on actual experiments are briefly presented. PMID:26146416

  18. Space-Time Error Representation and Estimation in Navier-Stokes Calculations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2006-01-01

    The mathematical framework for a-posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas is presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.
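
    The functional error representation at the heart of this framework can be summarized schematically by the standard dual-weighted-residual identity (written here in generic notation, not the paper's):

      % weak primal problem a(u)(phi) = F(phi); u_h its Galerkin solution;
      % z the adjoint solution associated with the target functional J.
      \[
        J(u) - J(u_h) \;=\; F(z - \psi_h) - a(u_h)(z - \psi_h) \;+\; R_h ,
      \]
      % valid for any discrete test function \psi_h; the remainder R_h is
      % higher order in the error. Localizing the weighted residual over
      % space-time elements yields the indicators that drive adaptivity.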

  19. From Complex B1 Mapping to Local SAR Estimation for Human Brain MR Imaging Using Multi-channel Transceiver Coil at 7T

    PubMed Central

    Zhang, Xiaotong; Schmitter, Sebastian; Van de Moortele, Pierre-François; Liu, Jiaen

    2014-01-01

    Elevated Specific Absorption Rate (SAR) associated with increased main magnetic field strength remains a major safety concern in ultra-high-field (UHF) Magnetic Resonance Imaging (MRI) applications. The calculation of local SAR requires knowledge of the electric field induced by the radiofrequency (RF) excitation and of the local electrical properties of tissues. Since the electric field distribution cannot be directly mapped in conventional MR measurements, SAR estimation is usually performed using numerical model-based electromagnetic simulations, which, however, are highly time consuming and cannot account for the specific anatomy and tissue properties of the subject undergoing a scan. In the present study, starting from the measurable RF magnetic fields (B1) in MRI, we conducted a series of mathematical deductions to estimate the local, voxel-wise and subject-specific SAR for each single coil element of a multi-channel transceiver array coil. We first evaluated the feasibility of this approach in numerical simulations including two different human head models. We further conducted an experimental study in a physical phantom and in two human subjects at 7T using a multi-channel transceiver head coil. Accuracy of the results is discussed in the context of predicting local SAR in the human brain at UHF MRI using multi-channel RF transmission. PMID:23508259

  20. Estimation of River Discharge at Ungauged Catchment using GIS Map Correlation Method as Applied in Sta. Lucia River in Mauban, Quezon, Philippines

    NASA Astrophysics Data System (ADS)

    Monjardin, Cris Edward F.; Uy, Francis Aldrine A.; Tan, Fibor J.

    2017-06-01

    This paper presents the use of the GIS Map Correlation (GMC) Method, a novel method for prediction in ungauged basins (PUB), to estimate river flow at an ungauged catchment. The PUB method used here is intended to reduce the time and cost of data gathering, since it relies on a reference calibrated watershed that has almost the same characteristics in terms of slope, curve number, land cover, climatic condition, and average basin elevation. Furthermore, it utilizes a set of modelling software fed with digital elevation models (DEM), rainfall and discharge data. The researchers estimated the river flow of the Sta. Lucia River in Quezon province, the ungauged catchment. The researchers assessed 11 gauged catchments and determined which basin could be correlated with Sta. Lucia. After finding the most correlated basin, the researchers used its data with adjusted parameters of the gauged catchment. In evaluating the accuracy of the method, the researchers simulated a rainfall event in the catchment and compared the actual discharge with the discharge generated by HEC-HMS. The researchers found that the method showed a good fit between the compared results, indicating that the GMC Method is effective for the calibration of ungauged catchments.
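
    The basin-matching step at the heart of the GMC Method can be illustrated with a simple similarity score. The snippet below is a hypothetical sketch (descriptor choice, scaling and the toy values are invented for illustration): each gauged basin is ranked by its standardized distance to the ungauged target.

      import numpy as np

      def most_similar_basin(target, candidates):
          """Rank gauged basins by similarity to an ungauged target using
          standardized catchment descriptors (slope, curve number, mean
          elevation, ...). Rows of `candidates` are gauged basins."""
          X = np.vstack([target, candidates]).astype(float)
          Xz = (X - X.mean(axis=0)) / X.std(axis=0)    # one common scale
          d = np.linalg.norm(Xz[1:] - Xz[0], axis=1)   # distance to target
          return int(np.argmin(d)), d

      # descriptors: [mean slope (%), curve number, mean elevation (m)]
      idx, dist = most_similar_basin([18.0, 77.0, 420.0],
                                     [[25.0, 70.0, 900.0],
                                      [17.5, 75.0, 410.0],
                                      [ 5.0, 85.0,  60.0]])
      print(idx)   # -> 1, the closest analogue basin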

  1. A Data Centred Method to Estimate and Map Changes in the Full Distribution of Daily Precipitation and Its Exceedances

    NASA Astrophysics Data System (ADS)

    Chapman, S. C.; Stainforth, D. A.; Watkins, N. W.

    2014-12-01

    Estimates of how our climate is changing are needed locally in order to inform adaptation planning decisions. This requires quantifying the geographical patterns in changes at specific quantiles or thresholds in distributions of variables such as daily temperature or precipitation. We develop a method[1] for analysing local climatic timeseries to assess which quantiles of the local climatic distribution show the greatest and most robust changes, specifically addressing the challenges presented by heavy-tailed distributed variables such as daily precipitation. We extract from the data quantities that characterize the changes in time of the likelihood of daily precipitation above a threshold and of the relative amount of precipitation on those extreme precipitation days. Our method is a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of how fast different quantiles of precipitation distributions are changing. This involves determining not only which quantiles and geographical locations show the greatest change, but also those at which any change is highly uncertain. We demonstrate this approach using E-OBS gridded data[2] timeseries of local daily precipitation from specific locations across Europe over the last 60 years. We treat geographical location and precipitation as independent variables and thus obtain as outputs the pattern of change at a given threshold of precipitation and with geographical location. This is model-independent, thus providing data of direct value in model calibration and assessment. Our results identify regionally consistent patterns which, dependent on location, show systematic increase in precipitation on the wettest days, shifts in precipitation patterns to less moderate days and more heavy days, and drying
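
    A minimal version of the quantile-change calculation can be sketched as follows (a generic illustration, not the authors' published method[1]; the bootstrap is one simple way to separate secular change from sampling variability in a heavy-tailed variable):

      import numpy as np

      rng = np.random.default_rng(1)

      def quantile_change(early, late, q, n_boot=2000):
          """Change in the q-th quantile of daily precipitation between
          two periods, with a bootstrap 95% interval."""
          delta = np.quantile(late, q) - np.quantile(early, q)
          boots = np.empty(n_boot)
          for b in range(n_boot):
              e = rng.choice(early, size=early.size, replace=True)
              l = rng.choice(late, size=late.size, replace=True)
              boots[b] = np.quantile(l, q) - np.quantile(e, q)
          lo, hi = np.percentile(boots, [2.5, 97.5])
          return delta, (lo, hi)   # change is robust if the CI excludes zero

      # e.g., wet-day amounts at one grid point, two 30-year periods (toy data)
      early = rng.gamma(0.5, 6.0, size=30 * 120)
      late = rng.gamma(0.5, 6.6, size=30 * 120)
      print(quantile_change(early, late, 0.95))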

  2. High resolution spatio-temporal mapping of NO2 pollution for estimating personal exposures of the Dutch population

    NASA Astrophysics Data System (ADS)

    Soenario, Ivan; Helbich, Marco; Schmitz, Oliver; Strak, Maciek; Hoek, Gerard; Karssenberg, Derek

    2017-04-01

    Air pollution has been associated with adverse health effects (e.g., cardiovascular and respiratory diseases) in urban environments. Therefore, the assessment of people's exposure to air pollution is central in epidemiological studies. The estimation of exposures at an individual level can be done by combining location information across space and over time with spatio-temporal data on air pollution concentrations. When detailed information on people's space-time paths (e.g. commuting patterns calculated by means of spatial routing algorithms or tracked through GPS) and people's major activity locations (e.g. home location, work location) is available, it is possible to calculate more precise personal exposure levels depending on people's individual space-time mobility patterns. This requires air pollution values not only at a high level of spatial accuracy and high temporal granularity; such data also need to be available at a nation-wide scale. As current data are seriously limited in this respect, we introduce a novel dataset of NO2 levels across the Netherlands. The provided NO2 concentrations are available at hourly timestamps on a 5 m grid cell resolution for weekdays and weekends, for each month of the year. We fitted a single Land Use Regression (LUR) model using a five-year average of NO2 data from the Dutch NO2 measurement network, consisting of N=46 sampling locations distributed over the country. Predictor variables for this model were selected in a data-driven manner, using an Elastic Net and a Best Subset Selection procedure, from 70 candidate predictors including traffic, industry, infrastructure and population-based variables. Subsequently, to model NO2 for each time scale (hour, week, month), the LUR coefficients were fitted using the NO2 data aggregated per time scale. Model validation was grounded in independent data collected in an ad hoc measurement campaign. Our results show a considerable difference in urban concentrations between
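
    For intuition, the Elastic Net stage of the predictor selection might look as follows (a hypothetical sketch with placeholder data; the best-subset step and the actual 70 predictors are not reproduced here):

      import numpy as np
      from sklearn.linear_model import ElasticNetCV

      rng = np.random.default_rng(2)
      X = rng.normal(size=(46, 70))       # 46 sites x 70 candidate predictors
      y = rng.normal(loc=25.0, size=46)   # five-year mean NO2 (ug/m3), placeholder

      enet = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9, 1.0], cv=5).fit(X, y)
      selected = np.flatnonzero(enet.coef_)   # predictors kept by the penalty
      print(len(selected), "predictors retained; alpha =", round(enet.alpha_, 4))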

  3. A cost effective and operational methodology for wall to wall Above Ground Biomass (AGB) and carbon stocks estimation and mapping: Nepal REDD+

    NASA Astrophysics Data System (ADS)

    Gilani, H., Sr.; Ganguly, S.; Zhang, G.; Koju, U. A.; Murthy, M. S. R.; Nemani, R. R.; Manandhar, U.; Thapa, G. J.

    2015-12-01

    Nepal is a landlocked country with 39% forest cover of the total land area (147,181 km2). Under the Forest Carbon Partnership Facility (FCPF), implemented by the World Bank (WB), Nepal was chosen as one of four countries best suited for a results-based payment system under the Reducing Emissions from Deforestation and Forest Degradation (REDD and REDD+) scheme. At the national level, Landsat-based analysis shows that from 1990 to 2000 the forest area declined by 2%, i.e. by 1467 km2, whereas from 2000 to 2010 it declined by only 0.12%, i.e. 176 km2. A cost-effective monitoring and evaluation system for REDD+ requires a balanced approach of remote sensing and ground measurements. This paper provides, for Nepal, a cost-effective and operational 30 m Above Ground Biomass (AGB) estimation and mapping methodology using freely available satellite data integrated with field inventory. Leaf Area Index (LAI) was generated following the methodology proposed by Ganguly et al. (2012) using cloud-free Landsat-8 OLI images. To generate a tree canopy height map, a density scatter graph between maximum heights estimated by the Geoscience Laser Altimeter System (GLAS) on the Ice, Cloud, and Land Elevation Satellite (ICESat) and the Landsat LAI nearest to the center coordinates of the GLAS shots showed a moderate but significant exponential correlation (Hmax = 31.211*LAI^0.4593, R2 = 0.33, RMSE = 13.25 m). In the field, 1124 well-distributed circular plots (750 m2 and 500 m2; a 0.001% representation of the forest cover) were measured and used to estimate AGB (ton/ha) with the equations proposed by Sharma et al. (1990) for all tree species of Nepal. A satisfactory linear relationship (AGB = 8.7018*Hmax - 101.24, R2 = 0.67, RMSE = 7.2 ton/ha) was achieved between maximum canopy height (Hmax) and AGB (ton/ha). This cost-effective and operational methodology is replicable over 5-10 years with minimum ground samples through the integration of satellite images. The developed AGB map was used to produce optimum fuel wood scenarios using population and road
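
    Chaining the two reported regressions gives the wall-to-wall estimator in a few lines (a sketch using only the fits quoted in this abstract; it ignores the uncertainty in both steps and should not be read as the full workflow):

      import numpy as np

      def canopy_height_from_lai(lai):
          """Hmax from Landsat LAI (Hmax = 31.211*LAI^0.4593, R2 = 0.33)."""
          return 31.211 * np.asarray(lai) ** 0.4593

      def agb_from_height(hmax):
          """AGB (ton/ha) from Hmax (AGB = 8.7018*Hmax - 101.24, R2 = 0.67)."""
          return 8.7018 * np.asarray(hmax) - 101.24

      lai = np.array([1.5, 2.5, 3.5])
      print(agb_from_height(canopy_height_from_lai(lai)))   # per-pixel chain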

  4. Using GIS mapping of the extent of nearshore rocky reefs to estimate the abundance and reproductive output of important fishery species.

    PubMed

    Claisse, Jeremy T; Pondella, Daniel J; Williams, Jonathan P; Sadd, James

    2012-01-01

    Kelp Bass (Paralabrax clathratus) and California Sheephead (Semicossyphus pulcher) are economically and ecologically valuable rocky reef fishes in southern California, making them likely indicator species for evaluating resource management actions. Multiple spatial datasets, aerial and satellite photography, underwater observations and expert judgment were used to produce a comprehensive map of nearshore natural rocky reef habitat for the Santa Monica Bay region (California, USA). It was then used to examine the relative contribution of individual reefs to a regional estimate of abundance and reproductive potential of the focal species. For the reefs surveyed for fishes (i.e. 18 out of the 22 in the region, comprising 82% of the natural rocky reef habitat <30 m depth, with a total area of 1850 ha), total abundance and annual egg production of California Sheephead were 451 thousand fish (95% CI: 369 to 533 thousand) and 203 billion eggs (95% CI: 135 to 272 billion). For Kelp Bass, estimates were 805 thousand fish (95% CI: 669 to 941 thousand) and 512 billion eggs (95% CI: 414 to 610 billion). Size structure and reef area were key factors in reef-specific contributions to the regional egg production. The size structures of both species illustrated impacts from fishing, and results demonstrate the potential that relatively small increases in the proportion of large females on larger reefs could have on regional egg production. For California Sheephead, a substantial proportion of the regional egg production estimate (>30%) was produced from a relatively small proportion of the regional reef area (c. 10%). Natural nearshore rocky reefs make up only 11% of the area in the newly designated MPAs in this region, but results provide some optimism that regional fisheries could benefit through an increase in overall reproductive output, if adequate increases in size structure of targeted species are realized.

  6. Insight From the Statistics of Nothing: Estimating Limits of Change Detection Using Inferred No-Change Areas in DEM Difference Maps and Application to Landslide Hazard Studies

    NASA Astrophysics Data System (ADS)

    Haneberg, W. C.

    2017-12-01

    Remote characterization of new landslides or areas of ongoing movement using differences in high resolution digital elevation models (DEMs) created through time, for example before and after major rains or earthquakes, is an attractive proposition. In the case of large catastrophic landslides, changes may be apparent enough that simple subtraction suffices. In other cases, statistical noise can obscure landslide signatures and place practical limits on detection. In ideal cases on land, GPS surveys of representative areas at the time of DEM creation can quantify the inherent errors. In less-than-ideal terrestrial cases and virtually all submarine cases, it may be impractical or impossible to independently estimate the DEM errors. Examining DEM difference statistics for areas reasonably inferred to have no change, however, can provide insight into the limits of detectability. Data from inferred no-change areas of airborne LiDAR DEM difference maps of the 2014 Oso, Washington landslide and landslide-prone colluvium slopes along the Ohio River valley in northern Kentucky show that DEM difference maps can have non-zero mean and slope-dependent error components, consistent with published studies of DEM errors. Statistical thresholds derived from DEM difference error and slope data can help to distinguish DEM differences that are likely real, and which may indicate landsliding, from those that are likely spurious or irrelevant. This presentation describes and compares two different approaches, one based upon a heuristic assumption about the proportion of the study area likely covered by new landslides and another based upon the amount of change necessary to ensure difference at a specified level of probability.
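
    A common way to operationalize such thresholds is a minimum level of detection built from the per-DEM errors, optionally letting the error grow with slope. The sketch below is illustrative (the coefficients a and b are invented; in practice they would be fitted to the inferred no-change areas):

      import numpy as np

      def dod_threshold(sigma_a, sigma_b, z=1.96):
          """Changes smaller than this are indistinguishable from noise at
          ~95% confidence (z = 1.96) for two DEMs with errors sigma_a/b."""
          return z * np.sqrt(sigma_a**2 + sigma_b**2)

      def detect_change(diff, slope_deg, a=0.05, b=0.25):
          """Flag probable real change with a slope-dependent per-DEM error
          model sigma = a + b*tan(slope)."""
          sigma = a + b * np.tan(np.radians(slope_deg))
          return np.abs(diff) > dod_threshold(sigma, sigma)

      print(detect_change(np.array([0.08, 0.60]), np.array([5.0, 30.0])))
      # -> [False  True]: only the steep-slope difference exceeds the threshold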

  7. On the need for a time- and location-dependent estimation of the NDSI threshold value for reducing existing uncertainties in snow cover maps at different scales

    NASA Astrophysics Data System (ADS)

    Härer, Stefan; Bernhardt, Matthias; Siebers, Matthias; Schulz, Karsten

    2018-05-01

    Knowledge of the current snow cover extent is essential for characterizing energy and moisture fluxes at the Earth's surface. The snow-covered area (SCA) is often estimated using optical satellite information in combination with the normalized-difference snow index (NDSI). The NDSI uses a threshold to define whether a satellite pixel is classified as snow covered or snow free. The spatiotemporal representativeness of the standard threshold of 0.4 is, however, questionable at the local scale. Here, we use local snow cover maps derived from ground-based photography to continuously calibrate the NDSI threshold values (NDSIthr) of Landsat satellite images at two European mountain sites over the period from 2010 to 2015. The Research Catchment Zugspitzplatt (RCZ, Germany) and the Vernagtferner area (VF, Austria) are both located within a single Landsat scene. Nevertheless, the long-term analysis demonstrated that the NDSIthr at these two sites are not correlated (r = 0.17) and differ from the standard threshold of 0.4. For further comparison, a dynamic and locally optimized NDSI threshold was used, as well as another locally optimized threshold value from the literature (0.7). It was shown that large uncertainties in the prediction of the SCA of up to 24.1% exist in satellite snow cover maps where the standard threshold of 0.4 is used, but a newly developed calibrated quadratic polynomial model which accounts for seasonal threshold dynamics can reduce this error. The model reduces the SCA uncertainties at the calibration site VF by 50% in the evaluation period and was also able to improve the results at RCZ significantly. Additionally, a scaling experiment shows that the positive effect of a locally adapted threshold diminishes at pixel sizes of 500 m or larger, underlining the general applicability of the standard threshold at larger scales.
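
    The classification rule being calibrated is itself one line. As a minimal sketch (toy reflectances; the exact band choice depends on the sensor):

      import numpy as np

      def ndsi_snow_map(green, swir, threshold=0.4):
          """Snow/no-snow from the normalized-difference snow index;
          `threshold` can be replaced by a locally calibrated, seasonally
          varying value as proposed in the study."""
          ndsi = (green - swir) / (green + swir)
          return ndsi > threshold

      green = np.array([0.55, 0.30, 0.60])   # green-band reflectance
      swir = np.array([0.10, 0.25, 0.35])    # shortwave-infrared reflectance
      print(ndsi_snow_map(green, swir))        # standard 0.4 threshold
      print(ndsi_snow_map(green, swir, 0.25))  # a locally calibrated threshold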

  8. Neuroanatomical substrates of action perception and understanding: an anatomic likelihood estimation meta-analysis of lesion-symptom mapping studies in brain injured patients

    PubMed Central

    Urgesi, Cosimo; Candidi, Matteo; Avenanti, Alessio

    2014-01-01

    Several neurophysiologic and neuroimaging studies have suggested that motor and perceptual systems are tightly linked along a continuum rather than providing segregated mechanisms supporting different functions. Using correlational approaches, these studies demonstrated that action observation activates not only visual but also motor brain regions. On the other hand, brain stimulation and brain lesion evidence allows tackling the critical question of whether our action representations are necessary to perceive and understand others' actions. In particular, recent neuropsychological studies have shown that patients with temporal, parietal, and frontal lesions exhibit a number of possible deficits in the visual perception and the understanding of others' actions. The specific anatomical substrates of such neuropsychological deficits, however, are still a matter of debate. Here we review the existing literature on this issue and perform an anatomic likelihood estimation meta-analysis of studies using lesion-symptom mapping methods on the causal relation between brain lesions and non-linguistic action perception and understanding deficits. The meta-analysis encompassed data from 361 patients tested in 11 studies and identified regions in the inferior frontal cortex, the inferior parietal cortex and the middle/superior temporal cortex whose damage is consistently associated with poor performance in action perception and understanding tasks across studies. Interestingly, these areas correspond to the three nodes of the action observation network that are strongly activated in response to visual action perception in neuroimaging research and that have been targeted in previous brain stimulation studies. Thus, brain lesion mapping research provides converging causal evidence that premotor, parietal and temporal regions play a crucial role in action recognition and understanding. PMID:24910603

  9. Three-dimensional feature extraction and geometric mappings for improved parameter estimation in forested terrain using airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Lee, Heezin

    Scanning laser ranging technology is well suited for measuring point-to-point distances because of its ability to generate small beam divergences. As a result, many of the laser pulses emitted from airborne light detection and ranging (LiDAR) systems are able to reach the ground underneath tree canopies through small (10 cm scale) gaps in the foliage. Using high pulse rate lasers and fast optical scanners, airborne LiDAR systems can provide both high spatial resolution and canopy penetration, and these data have become more widely available in recent years for use in environmental and forestry applications. The small-footprint, discrete-return Airborne Laser Swath Mapping (ALSM) system at the University of Florida (UF) is used to directly measure ground surface elevations and the three-dimensional (3D) distribution of the vegetative material above the soil surface. Field of view geometric mappings are explored to find optical gaps inside forests. First, a method is developed to detect walking trails in natural forests that are obscured from above by the canopy. Several features are derived from the ALSM data and used to constrain the search space and infer the location of trails. Second, a robust and simple procedure for estimating intercepted photosynthetically active radiation (IPAR), which is an important measure of forest timber productivity and of daylight visibility in forested terrain, is presented. Simple scope functions that isolate the relevant LiDAR reflections between observer locations and the sun are defined and shown to give good agreement between the LiDAR-derived estimates and values of IPAR measured in situ. A conical scope function with an angular divergence from the centerline of +/-7° provided the best agreement with the in situ measurements. This scope function yielded remarkably consistent IPAR estimates for different pine species and growing conditions. The developed idea could be extended, through potential future work, to characterize the
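
    The conical scope function reduces to a simple angular test. The snippet below is a schematic reimplementation of that idea (function and variable names are invented; the +/-7 degree divergence is the value reported above):

      import numpy as np

      def in_scope(points, observer, sun_dir, half_angle_deg=7.0):
          """Select LiDAR returns inside a cone opening from the observer
          toward the sun."""
          v = np.asarray(points, dtype=float) - np.asarray(observer, dtype=float)
          v /= np.linalg.norm(v, axis=1, keepdims=True)
          s = np.asarray(sun_dir, dtype=float)
          s /= np.linalg.norm(s)
          ang = np.degrees(np.arccos(np.clip(v @ s, -1.0, 1.0)))
          return ang <= half_angle_deg

      pts = np.array([[3.0, 0.2, 2.9], [5.0, 4.0, 1.0]])
      print(in_scope(pts, observer=[0, 0, 1], sun_dir=[1, 0, 0.7]))
      # IPAR is then estimated from the returns intercepted inside the cone.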

  10. Mapping patient pathways and estimating resource use for point of care versus standard testing and treatment of chlamydia and gonorrhoea in genitourinary medicine clinics in the UK.

    PubMed

    Adams, Elisabeth J; Ehrlich, Alice; Turner, Katherine M E; Shah, Kunj; Macleod, John; Goldenberg, Simon; Meray, Robin K; Pearce, Vikki; Horner, Patrick

    2014-07-23

    We aimed to explore patient pathways using a chlamydia/gonorrhoea point-of-care (POC) nucleic acid amplification test (NAAT), and estimate and compare the costs of the proposed POC pathways with the current pathways using standard laboratory-based NAAT testing. Workshops were conducted with healthcare professionals at four sexual health clinics representing diverse models of care in the UK. They mapped out current pathways that used chlamydia/gonorrhoea tests, and constructed new pathways using a POC NAAT. Healthcare professionals' time was assessed in each pathway. The proposed POC pathways were then priced using a model built in Microsoft Excel, and compared to previously published costs for pathways using standard NAAT-based testing in an off-site laboratory. Pathways using a POC NAAT for asymptomatic and symptomatic patients and chlamydia/gonorrhoea-only tests were shorter and less expensive than most of the current pathways. Notably, we estimate that POC testing as part of a sexual health screen for symptomatic patients, or as stand-alone chlamydia/gonorrhoea testing, could reduce costs per patient by as much as £16 or £6, respectively. In both cases, healthcare professionals' time would be reduced by approximately 10 min per patient. POC testing for chlamydia/gonorrhoea in a clinical setting may reduce costs and clinician time, and may lead to more appropriate and quicker care for patients. Further study is warranted on how to best implement POC testing in clinics, and on the broader clinical and cost implications of this technology. Published by the BMJ Publishing Group Limited.

  11. Mapping of road-salt-contaminated groundwater discharge and estimation of chloride load to a small stream in southern New Hampshire, USA

    USGS Publications Warehouse

    Harte, P.T.; Trowbridge, P.R.

    2010-01-01

    Concentrations of chloride in excess of State of New Hampshire water-quality standards (230 mg/l) have been measured in watersheds adjacent to an interstate highway (I-93) in southern New Hampshire. A proposed widening plan for I-93 has raised concerns over further increases in chloride. As part of this effort, road-salt-contaminated groundwater discharge was mapped with terrain electrical conductivity (EC) electromagnetic (EM) methods in the fall of 2006 to identify potential sources of chloride during base-flow conditions to a small stream, Policy Brook. Three different EM meters were used to measure at different depths below the streambed (ranging from 0 to 3 m). Results from the three meters showed similar patterns and identified several reaches where high-EC groundwater may have been discharging. Based on the delineation of high (up to 350 mmhos/m) apparent terrain EC, seven streambed piezometers were installed to sample shallow groundwater. Locations with high specific conductance in shallow groundwater (up to 2630 mmhos/m) generally matched locations with high streambed (shallow subsurface) terrain EC. A regression equation was used to convert the terrain EC of the streambed to an equivalent chloride concentration in shallow groundwater, unique to this site. Utilizing the regression equation and estimates of one-dimensional Darcian flow through the streambed, a maximum potential groundwater chloride load was estimated at 188 Mg of chloride per year. Changes in chloride concentration in stream water during streamflow recessions showed a linear response, indicating that the dominant process affecting chloride is advective flow of chloride-enriched groundwater discharge. Published in 2010 by John Wiley & Sons, Ltd.

  12. Estimation of the radon production rate in granite rocks and evaluation of the implications for geogenic radon potential maps: A case study in Central Portugal.

    PubMed

    Pereira, A; Lamas, R; Miranda, M; Domingos, F; Neves, L; Ferreira, N; Costa, L

    2017-01-01

    The goal of this study was to estimate the radon gas production rate in granitic rocks and identify the factors responsible for the observed variability. For this purpose, 180 samples were collected from pre-Hercynian and Hercynian rocks in north and central Portugal and analysed for a) 226Ra activity, b) radon (222Rn) activity per unit mass, and c) radon gas emanation coefficient. On a subset of representative samples from the same rock types, d) apparent porosity and e) apparent density were also measured. For each of these variables, the values ranged as follows: a) 15 to 587 Bq kg-1, b) 2 to 73 Bq kg-1, c) 0.01 to 0.80, d) 0.3 to 11.4% and e) 2530 to 2850 kg m-3. The radon production rate varied between 40 and 1386 Bq m-3 h-1. The observed variability was associated with geologically late low- and high-temperature processes which led to alteration of the granitic rock, with mobilization of U and an increase in 222Rn gas emanation. It is suggested that, when developing geogenic radon potential maps, data on uranium concentration in soils/altered rock should be used, rather than data obtained from unaltered rock. Copyright © 2016 Elsevier Ltd. All rights reserved.
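
    Numerically, the production rates reported above are consistent with the standard emanation formula, sketched here for orientation (mid-range values chosen for illustration; the paper's exact computation may differ):

      LAMBDA_RN = 0.00755   # 222Rn decay constant (1/h), half-life ~3.82 d

      def radon_production_rate(ra226_bq_per_kg, emanation, density_kg_m3):
          """Radon production rate (Bq m-3 h-1): the emanating fraction of
          the 226Ra activity per rock volume times the 222Rn decay constant."""
          return ra226_bq_per_kg * emanation * density_kg_m3 * LAMBDA_RN

      # mid-range values from the study's intervals: 200 Bq/kg, 0.25, 2700 kg/m3
      print(radon_production_rate(200.0, 0.25, 2700.0))   # ~1.0e3 Bq m-3 h-1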

  13. Place of work and residential exposure to ambient air pollution and birth outcomes in Scotland, using geographically fine pollution climate mapping estimates.

    PubMed

    Dibben, Chris; Clemens, Tom

    2015-07-01

    A relationship between ambient air pollution and adverse birth outcomes has been found in a large number of studies, most of which used a nearest-monitor methodology. Recent research has suggested that the effect size may have been underestimated in these studies. This paper examines associations between birth outcomes and ambient levels of residential and workplace sulphur dioxide, particulates and nitrogen dioxide estimated using an alternative method, pollution climate mapping. Risk of low birthweight and mean birthweight (for n=21,843 term births) and risk of preterm birth (for n=23,086 births) were modelled against small-area annual mean ambient air pollution concentrations at work and residence locations, adjusting for potential confounding factors, for singleton live births (1994-2008) across Scotland. Odds ratios of low birthweight were 1.02 (95% CI, 1.01-1.03) and 1.07 (95% CI, 1.01-1.12) per 1 µg/m3 increase in NO2 and PM10, respectively. Raised but insignificant risks of very preterm birth were found with PM10 (relative risk ratio=1.08; 95% CI, 1.00 to 1.17 per 1 µg/m3) and NO2 (relative risk ratio=1.01; 95% CI, 1.00 to 1.03 per 1 µg/m3). An inverse association was found between mean birthweight and mean annual NO2 (-1.24 g; 95% CI, -2.02 to -0.46 per 1 µg/m3) and PM10 (-5.67 g; 95% CI, -9.47 to -1.87 per 1 µg/m3). SO2 showed no significant associations. This study highlights the association between air pollution exposure and reduced newborn size at birth. Together with other recent work, it also suggests that exposure estimation based on the nearest-monitor method may have led to an underestimation of the effect size of pollutants on birth outcomes. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Mapping snow depth in complex alpine terrain with close range aerial imagery - estimating the spatial uncertainties of repeat autonomous aerial surveys over an active rock glacier

    NASA Astrophysics Data System (ADS)

    Goetz, Jason; Marcer, Marco; Bodin, Xavier; Brenning, Alexander

    2017-04-01

    Snow depth mapping in open areas using close range aerial imagery is just one of the many cases where developments in structure-from-motion and multi-view-stereo (SfM-MVS) 3D reconstruction techniques have been applied in the geosciences - and with good reason. Our ability to increase the spatial resolution and frequency of observations may allow us to improve our understanding of how snow depth distribution varies through space and time. However, to ensure accurate snow depth observations from close range sensing, we must adequately characterize the uncertainty related to our measurement techniques. In this study, we explore the spatial uncertainties of snow elevation models for estimation of snow depth in complex alpine terrain from close range aerial imagery. We accomplish this by conducting repeat autonomous aerial surveys over a snow-covered active rock glacier located in the French Alps. The imagery obtained from each flight of an unmanned aerial vehicle (UAV) is used to create an individual digital elevation model (DEM) of the snow surface. As a result, we obtain multiple DEMs of the snow surface for the same site. These DEMs are obtained by processing the imagery with the photogrammetry software Agisoft Photoscan. The elevation models are also georeferenced within Photoscan using the geotagged imagery from an onboard GNSS in combination with ground targets placed around the rock glacier, which were surveyed with highly accurate RTK-GNSS equipment. The random error associated with multi-temporal DEMs of the snow surface is estimated from the repeat aerial survey data. The multiple flights are designed to follow the same flight path and altitude above the ground to simulate the optimal conditions of a repeat survey of the site, and thus estimate the maximum precision associated with our snow-elevation measurement technique. The bias of the DEMs is assessed with RTK-GNSS survey observations of the snow surface elevation of the area on and surrounding
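
    A minimal sketch of how the random error of repeat snow-surface DEMs can be summarized, assuming a stack of co-registered DEMs from multiple flights; the array sizes and noise level are synthetic, and NMAD is offered as a robust alternative that the study does not necessarily use.

        import numpy as np

        # Stack of co-registered DEMs from n repeat flights (synthetic values).
        rng = np.random.default_rng(0)
        dems = rng.normal(loc=2500.0, scale=0.05, size=(4, 200, 200))

        per_cell_std = dems.std(axis=0, ddof=1)   # cell-wise repeatability
        diff = dems[1] - dems[0]                  # DEM of difference, one pair

        # NMAD is a robust alternative to the standard deviation for elevation
        # differences containing outliers (Hoehle & Hoehle, 2009).
        nmad = 1.4826 * np.median(np.abs(diff - np.median(diff)))
        print(f"median per-cell std: {np.median(per_cell_std):.3f} m, "
              f"NMAD: {nmad:.3f} m")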

  15. Three-dimensional (3D) coseismic deformation map produced by the 2014 South Napa Earthquake estimated and modeled by SAR and GPS data integration

    NASA Astrophysics Data System (ADS)

    Polcari, Marco; Albano, Matteo; Fernández, José; Palano, Mimmo; Samsonov, Sergey; Stramondo, Salvatore; Zerbini, Susanna

    2016-04-01

    In this work we present a 3D map of coseismic displacements due to the 2014 Mw 6.0 South Napa earthquake, California, obtained by integrating displacement information from SAR Interferometry (InSAR), Multiple Aperture Interferometry (MAI), Pixel Offset Tracking (POT) and GPS data acquired by both permanent stations and campaign sites. This seismic event produced significant surface deformation along all three components, causing damage to vineyards, roads and houses. The remote sensing results, i.e. InSAR, MAI and POT, were obtained from the pair of SAR images provided by the Sentinel-1 satellite, launched on April 3rd, 2014. They were acquired on August 7th and 31st along descending orbits with an incidence angle of about 23°. The GPS dataset includes measurements from 32 stations belonging to the Bay Area Regional Deformation Network (BARDN), 301 continuous stations available from the UNAVCO and CDDIS archives, and 13 additional campaign sites from Barnhart et al., 2014 [1]. These data constrain the horizontal and vertical displacement components, proving helpful for the adopted integration method. We exploit Bayesian theory to estimate the 3D coseismic displacement components. In particular, for each point we construct an energy function and solve for its global minimum. Experimental results are consistent with a strike-slip fault mechanism with an approximately NW-SE fault plane. Indeed, the 3D displacement map shows a strong North-South (NS) component, peaking at about 15 cm a few kilometers from the epicenter. The East-West (EW) displacement component reaches its maximum (~10 cm) south of the city of Napa, whereas the vertical (UP) component is smaller, although a subsidence on the order of 8 cm on the east side of the fault can be observed. Source modelling was performed by inverting the estimated displacement components. The best fitting model is given by a ~N330°E-oriented and ~70° dipping fault with a prevailing

  16. Use of high-resolution imagery acquired from an unmanned aircraft system for fluvial mapping and estimating water-surface velocity in rivers

    NASA Astrophysics Data System (ADS)

    Kinzel, P. J.; Bauer, M.; Feller, M.; Holmquist-Johnson, C.; Preston, T.

    2013-12-01

    The use of unmanned aircraft systems (UAS) for environmental monitoring in the United States is anticipated to increase in the coming years as the Federal Aviation Administration (FAA) further develops guidelines to permit their integration into the National Airspace System. The U.S. Geological Survey's (USGS) National Unmanned Aircraft Systems Project Office routinely obtains Certificates of Authorization from the FAA for utilizing UAS technology for a variety of natural resource applications for the U.S. Department of the Interior (DOI). We evaluated the use of a small UAS along two reaches of the Platte River near Overton, Nebraska, USA, to determine the accuracy of the system for mapping the extent and elevation of emergent sandbars and to test the ability of a hovering UAS to identify and track tracers to estimate water-surface velocity. The UAS used in our study is the Honeywell Tarantula Hawk RQ16 (T-Hawk), developed for the U.S. Army as a reconnaissance and surveillance platform. The T-Hawk has recently been modified by the USGS, and certified for airworthiness by the DOI Office of Aviation Services, to accommodate a higher-resolution imaging payload than was originally deployed with the system. The T-Hawk is currently outfitted with a Canon PowerShot SX230 HS with 12.1 megapixel resolution and an intervalometer to record images at a user-defined time step. To increase the accuracy of photogrammetric products (orthoimagery and DEMs generated with structure-from-motion (SfM) software), we utilized ground control points in the study reaches and acquired imagery along flight lines at various altitudes (200-400 feet above ground level) oriented both parallel and perpendicular to the river. Our results show that the mean error in the elevations derived from SfM in the upstream reach was 17 centimeters and the horizontal accuracy was 6 centimeters when compared to 4 randomly distributed targets surveyed on emergent sandbars. In addition to the targets, multiple transects were

  17. Mapping Land Cover Types in Amazon Basin Using 1km JERS-1 Mosaic

    NASA Technical Reports Server (NTRS)

    Saatchi, Sassan S.; Nelson, Bruce; Podest, Erika; Holt, John

    2000-01-01

    In this paper, the 100 meter JERS-1 Amazon mosaic image was used in a new classifier to generate a 1 km resolution land cover map. The inputs to the classifier were 1 km resolution mean backscatter and seven first-order texture measures derived from the 100 m data by using a 10 x 10 independent sampling window. The classification approach included two interdependent stages: 1) a supervised maximum a posteriori Bayesian approach to classify the mean backscatter image into five general land cover categories (forest, savannah, inundated, white sand, and anthropogenic vegetation), and 2) a texture-measure decision rule approach to further discriminate subcategory classes based on taxonomic information and biomass levels. Fourteen classes were successfully separated at the 1 km scale. The results were verified by examining the accuracy of the approach by comparison with the IBGE and the AVHRR 1 km resolution land cover maps.
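
    The first classification stage lends itself to a compact sketch: a supervised maximum a posteriori decision among Gaussian class-conditional likelihoods on mean backscatter. The class means, standard deviations, and priors below are invented placeholders, not values fitted to the JERS-1 mosaic.

        import numpy as np

        # MAP classification with Gaussian class-conditional likelihoods.
        classes = ["forest", "savannah", "inundated", "white sand",
                   "anthropogenic"]
        means = np.array([-6.5, -11.0, -3.5, -14.0, -9.0])  # mean sigma0 (dB)
        stds = np.array([1.0, 1.5, 1.2, 1.0, 1.3])
        priors = np.array([0.5, 0.2, 0.1, 0.05, 0.15])

        def map_classify(backscatter_db):
            # log posterior ~ log prior + log Gaussian likelihood
            log_post = (np.log(priors)
                        - np.log(stds)
                        - 0.5 * ((backscatter_db - means) / stds) ** 2)
            return classes[int(np.argmax(log_post))]

        print(map_classify(-6.0))   # -> "forest"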

  18. Spatial accessibility to healthcare services in Shenzhen, China: improving the multi-modal two-step floating catchment area method by estimating travel time via online map APIs.

    PubMed

    Tao, Zhuolin; Yao, Zaoxing; Kong, Hui; Duan, Fei; Li, Guicai

    2018-05-09

    Shenzhen has rapidly grown into a megacity in recent decades. It is a challenging task for the Shenzhen government to provide sufficient healthcare services. The spatial configuration of healthcare services influences how conveniently consumers can obtain them. Spatial accessibility has been widely adopted as a scientific measurement for evaluating the rationality of the spatial configuration of healthcare services. The multi-modal two-step floating catchment area (2SFCA) method is an important advance in the field of healthcare accessibility modelling, which enables the simultaneous assessment of spatial accessibility via multiple transport modes. This study further develops the multi-modal 2SFCA method by introducing online map APIs to improve the estimation of travel time by public transit and by car. As the results show, the distribution of healthcare accessibility by multi-modal 2SFCA shows significant spatial disparity. Moreover, by dividing the multi-modal accessibility into car-mode and transit-mode accessibility, this study discovers that the transit-mode subgroup is disadvantaged in the competition for healthcare services with the car-mode subgroup. The disparity in transit-mode accessibility is the main reason for the uneven pattern of healthcare accessibility in Shenzhen. The findings suggest improving public transit conditions for accessing healthcare services to reduce the disparity of healthcare accessibility. More healthcare services should be allocated in eastern and western Shenzhen, especially sub-districts in Dapeng District and western Bao'an District. Because these findings cannot be drawn by the traditional single-modal 2SFCA method, the advantage of the multi-modal 2SFCA method is significant to both healthcare studies and healthcare system planning.
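
    The 2SFCA computation itself is compact: step one computes a supply-to-demand ratio within each facility's catchment, and step two sums the distance-weighted ratios reachable from each demand location. The sketch below uses a Gaussian decay and invented travel times; in the multi-modal variant the travel times would come from online map APIs.

        import numpy as np

        travel_min = np.array([[10.0, 30.0],    # rows: demand locations i
                               [20.0, 15.0],    # cols: healthcare sites j
                               [40.0, 25.0]])
        population = np.array([5000.0, 8000.0, 3000.0])  # demand at each i
        supply = np.array([20.0, 35.0])                   # e.g. physicians at j

        def decay(t, t0=30.0):
            """Gaussian decay, zero beyond the catchment threshold t0."""
            w = np.exp(-0.5 * (t / t0) ** 2)
            return np.where(t <= t0, w, 0.0)

        w = decay(travel_min)
        # Step 1: supply-to-demand ratio R_j within each site's catchment.
        r = supply / (population @ w)
        # Step 2: accessibility A_i sums the reachable, distance-weighted ratios.
        accessibility = w @ r
        print(accessibility)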

  19. Mapping the “What” and “Where” Visual Cortices and Their Atrophy in Alzheimer's Disease: Combined Activation Likelihood Estimation with Voxel-Based Morphometry

    PubMed Central

    Deng, Yanjia; Shi, Lin; Lei, Yi; Liang, Peipeng; Li, Kuncheng; Chu, Winnie C. W.; Wang, Defeng

    2016-01-01

    The human cortical regions for processing high-level visual (HLV) functions of different categories remain ambiguous, especially in terms of their conjunctions and specifications. Moreover, the neurobiology of declined HLV functions in patients with Alzheimer's disease (AD) has not been fully investigated. This study provides a functionally sorted overview of HLV cortices for processing “what” and “where” visual perceptions, and it investigates their atrophy in AD and MCI patients. Based upon activation likelihood estimation (ALE), brain regions responsible for processing five categories of visual perceptions included in “what” and “where” visions (i.e., object, face, word, motion, and spatial visions) were analyzed, and subsequent contrast analyses were performed to show regions with conjunctive and specific activations for processing these visual functions. Next, based on the resulting ALE maps, the atrophy of HLV cortices in AD and MCI patients was evaluated using voxel-based morphometry. Our ALE results showed brain regions for processing visual perception across the five categories, as well as areas of conjunction and specification. Our comparisons of gray matter (GM) volume demonstrated atrophy of three “where” visual cortices in the late MCI group and extensive atrophy of HLV cortices (25 regions in both “what” and “where” visual cortices) in the AD group. In addition, the GM volume of atrophied visual cortices in AD and MCI subjects was found to be correlated with the deterioration of overall cognitive status and with cognitive performances related to memory, execution, and object recognition functions. In summary, these findings may add to our understanding of HLV network organization and of the evolution of visual perceptual dysfunction in AD as the disease progresses. PMID:27445770

  20. Mapping of past stand-level forest disturbances and estimation of time since disturbance using simulated spaceborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Sanchez Lopez, N.; Hudak, A. T.; Boschetti, L.

    2017-12-01

    Explicit information on the location, the size, or the time since disturbance (TSD) at the forest stand level complements field inventories, improves the monitoring of forest attributes, and improves the estimation of biomass and carbon stocks. Even-aged stands display homogeneous structural parameters that have often been used as a proxy of stand age. Consequently, performing object-oriented analysis on Light Detection and Ranging (LiDAR) data has potential to detect historical stand-replacing disturbances. Recent research has shown good results in the delineation of forest stands as well as in the prediction of disturbance occurrence and TSD using airborne LiDAR data. Nevertheless, the use of airborne LiDAR for systematic monitoring of forest stands is limited by the sporadic availability of data and its high cost compared to satellite instruments. NASA's forthcoming Global Ecosystem Dynamics Investigation (GEDI) mission will systematically provide data on the vertical structure of the vegetation, but its use presents some challenges compared to common discrete-return airborne LiDAR. GEDI will be a waveform instrument, hence the summary metrics will differ from those obtained with airborne LiDAR, and the sampling configuration could limit the utility of the data, especially over heterogeneous landscapes. The potential use of GEDI data for forest characterization at the stand level therefore depends on the predictive power of the GEDI footprint metrics and on the density of point samples relative to forest stand size (i.e. the number of observations/footprints per stand). In this study, we assess the performance of simulated GEDI-derived metrics for stand characterization and estimation of TSD, and the point density needed to adequately identify forest stands, which translates - due to the fixed sampling configuration - into the minimum temporal interval needed to collect a sufficient number of points. The study area was located in the Clear Creek, Selway River

  1. Estimating Earthquake Magnitude from the Kentucky Bend Scarp in the New Madrid Seismic Zone Using Field Geomorphic Mapping and High-Resolution LiDAR Topography

    NASA Astrophysics Data System (ADS)

    Kelson, K. I.; Kirkendall, W. G.

    2014-12-01

    Recent suggestions that the 1811-1812 earthquakes in the New Madrid Seismic Zone (NMSZ) ranged from M6.8-7.0 rather than M8.0 have implications for seismic hazard estimation in the central US. We more accurately identify the location of the NW-striking, NE-facing Kentucky Bend scarp along the northern Reelfoot fault, which is spatially associated with the Lake County uplift, contemporary seismicity, and changes in the Mississippi River from the February 1812 earthquake. We use 1 m resolution LiDAR hillshades and slope surfaces, aerial photography, soil surveys, and field geomorphic mapping to estimate the location, pattern, and amount of late Holocene coseismic surface deformation. We define eight late Holocene to historic fluvial deposits, and delineate younger alluvia that are progressively inset into older deposits on the upthrown, western side of the fault. Some younger, clayey deposits indicate past ponding against the scarp, perhaps following surface deformational events. The Reelfoot fault is represented by sinuous breaks-in-slope cutting across these fluvial deposits, locally coinciding with shallow faults identified via seismic reflection data (Woolery et al., 1999). The deformation pattern is consistent with NE-directed reverse faulting along single or multiple SW-dipping fault planes, and the complex pattern of fluvial deposition appears partially controlled by intermittent uplift. Six localities contain scarps across correlative deposits and allow evaluation of cumulative surface deformation from LiDAR-derived topographic profiles. Displacements range from 3.4±0.2 m to 2.2±0.2 m, 1.4±0.3 m, and 0.6±0.1 m across four progressively younger surfaces. The spatial distribution of the profiles argues against the differences being a result of along-strike uplift variability. We attribute the lesser displacements of progressively younger deposits to recurrent surface deformation, but do not yet interpret these initial data with respect to possible earthquake

  2. Comparisons Between Ground Measurements of Broadband UV Irradiance (300-380 nm) and TOMS UV Estimates at Moscow for 1979-2000

    NASA Technical Reports Server (NTRS)

    Yurova, Alla Y.; Krotkov, Nicholay A.; Herman, Jay R.; Bhartia, P. K. (Technical Monitor)

    2002-01-01

    We show comparisons of ground-based measurements of spectrally integrated (300 nm to 380 nm) ultraviolet (UV) irradiance with satellite estimates from the Total Ozone Mapping Spectrometer (TOMS) total ozone and reflectivity data for the whole period of TOMS measurements (1979-2000) over the Meteorological Observatory of Moscow State University (MO MSU), Moscow, Russia. Several aspects of the comparisons are analyzed, including effects of cloudiness, aerosol, and snow cover. Special emphasis is given to the effect of different spatial and temporal averaging of ground-based data when comparing with low-resolution satellite measurements (TOMS footprint area 50-200 sq km). The comparisons in cloudless scenes with different aerosol loading have revealed TOMS irradiance overestimates from +5% to +20%. A-posteriori correction of the TOMS data accounting for boundary layer aerosol absorption (single scattering albedo of 0.92) eliminates the bias for cloud-free conditions. The single scattering albedo was independently verified using CIMEL sun and sky-radiance measurements at MO MSU in September 2001. The mean relative difference between TOMS UV estimates and ground UV measurements mainly lies within ±10% for both snow-free and snow periods, with a tendency toward TOMS overestimation in the snow-free period, especially under overcast conditions when the positive bias reaches 15-17%. The analysis of interannual UV variations shows quite similar behavior for both TOMS and ground measurements (correlation coefficient r=0.8). No long-term trend in the annual mean bias was found for either clear-sky or all-sky conditions, with snow and without snow. Both TOMS and ground data show a positive trend in UV irradiance between 1979 and 2000. The UV trend is attributed to decreases in both cloudiness and aerosol optical thickness during the late 1990s over the Moscow region. However, if the analyzed period is extended to include the pre-TOMS era (1968-2000 period), no trend in ground UV irradiance is

  3. A Bayesian Approach to Estimating Coupling Between Neural Components: Evaluation of the Multiple Component, Event-Related Potential (mcERP) Algorithm

    NASA Technical Reports Server (NTRS)

    Shah, Ankoor S.; Knuth, Kevin H.; Truccolo, Wilson A.; Ding, Ming-Zhou; Bressler, Steven L.; Schroeder, Charles E.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Accurate measurement of single-trial responses is key to a definitive use of complex electromagnetic and hemodynamic measurements in the investigation of brain dynamics. We developed the multiple component, Event-Related Potential (mcERP) approach to single-trial response estimation to improve our resolution of dynamic interactions between neuronal ensembles located in different layers within a cortical region and/or in different cortical regions. The mcERP model asserts that multiple components, defined as stereotypic waveforms, comprise the stimulus-evoked response and that these components may vary in amplitude and latency from trial to trial. Maximum a posteriori (MAP) solutions for the model are obtained by iterating a set of equations derived from the posterior probability. Our first goal was to use the mcERP algorithm to analyze interactions (specifically latency and amplitude correlation) between responses in different layers within a cortical region. Thus, we evaluated the model by applying the algorithm to synthetic data containing two correlated local components and one independent far-field component. Three cases were considered: the local components were correlated by an interaction in their single-trial amplitudes, by an interaction in their single-trial latencies, or by an interaction in both amplitude and latency. We then analyzed the accuracy with which the algorithm estimated the component waveshapes and the single-trial parameters as a function of the linearity of each of these relationships. Extensions of these analyses to real data are discussed as well as ongoing work to incorporate more detailed prior information.

  4. Noniterative MAP reconstruction using sparse matrix representations.

    PubMed

    Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J

    2009-09-01

    We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, as compared to linear iterative reconstruction methods.

  5. Estimating Curie Point Depth and Heat Flow Map for Northern Red Sea Rift of Egypt and Its Surroundings, from Aeromagnetic Data

    NASA Astrophysics Data System (ADS)

    Saleh, Salah; Salk, Müjgan; Pamukçu, Oya

    2013-05-01

    In this study, we aim to map the Curie point depth surface for the northern Red Sea rift region and its surroundings based on the spectral analysis of aeromagnetic data. A spectral analysis technique was used to estimate the boundaries (top and bottom) of the magnetized crust. The Curie point depth (CPD) estimates of the Red Sea rift from 112 overlapping blocks vary from 5 to 20 km. The depths obtained for the bottom of the magnetized crust are assumed to correspond to Curie point depths where the magnetic layer loses its magnetization. Intermediate to deep Curie point depth anomalies (10-16 km) were observed in southern and central Sinai and the Gulf of Suez (intermediate heat flow) due to the uplifted basement rocks. The shallowest CPD of 5 km (associated with very high heat flow, ~235 mW m⁻²) is located at/around the axial trough of the Red Sea rift region, especially at Brothers Island and Conrad Deep, due to its association with both the concentration of rifting to the axial depression and the magmatic activity, whereas, beneath the Gulf of Aqaba, three Curie point depth anomalies belonging to three major basins vary from 10 km in the north to about 14 km in the south (with a mean heat flow of about 85 mW m⁻²). Moreover, low CPD anomalies (high heat flow) were also observed beneath some localities in the northern part of the Gulf of Suez at Hammam Fraun, at Esna city along the River Nile, at west Ras Gharib in the eastern desert and at Safaga along the western shore line of the Red Sea rift. These resulted from deviatoric tensional stresses developing in the lithosphere which contribute to its further extension and may be due to the opening of the Gulf of Suez and/or the Red Sea rift. Furthermore, low CPD (with a high heat flow anomaly) was observed at the eastern border of the study area, beneath northern Arabia, due to the quasi-vertical low-velocity anomaly which extends into the lower mantle and may be related to volcanism in northern Arabia. Dense microearthquakes
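
    A minimal sketch of the spectral (centroid) approach to CPD estimation, assuming the formulation of Tanaka et al. (1999): the top depth Zt comes from the high-wavenumber slope of ln √P versus k, the centroid depth Z0 from ln(√P/k) at low wavenumbers, and the base of the magnetized layer is Zb = 2Z0 − Zt. The spectrum below is synthetic, built from assumed depths so the recovery can be checked; the centroid relation holds only asymptotically as k → 0, so the recovered base depth is approximate.

        import numpy as np

        zt, zb = 2.0, 20.0                       # assumed top/bottom depths (km)
        k = np.linspace(0.005, 2.0, 400)         # wavenumber (rad/km)
        sqrt_p = np.exp(-k * zt) * (1.0 - np.exp(-k * (zb - zt)))

        def slope(x, y):
            return np.polyfit(x, y, 1)[0]

        hi = k > 0.8                             # high-k band -> top depth Zt
        lo = k < 0.04                            # low-k band  -> centroid Z0
        zt_est = -slope(k[hi], np.log(sqrt_p[hi]))
        z0_est = -slope(k[lo], np.log(sqrt_p[lo] / k[lo]))
        zb_est = 2.0 * z0_est - zt_est           # base of magnetic layer ~ CPD
        print(f"Zt ~ {zt_est:.1f} km, Z0 ~ {z0_est:.1f} km, "
              f"CPD ~ {zb_est:.1f} km (true 20)")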

  6. Tumor response estimation in radar-based microwave breast cancer detection.

    PubMed

    Kurrant, Douglas J; Fear, Elise C; Westwick, David T

    2008-12-01

    Radar-based microwave imaging techniques have been proposed for early stage breast cancer detection. A considerable challenge for the successful implementation of these techniques is the reduction of clutter, or components of the signal originating from objects other than the tumor. In particular, the reduction of clutter from the late-time scattered fields is required in order to detect small (subcentimeter diameter) tumors. In this paper, a method to estimate the tumor response contained in the late-time scattered fields is presented. The method uses a parametric function to model the tumor response. A maximum a posteriori estimation approach is used to evaluate the optimal values for the estimates of the parameters. A pattern classification technique is then used to validate the estimation. The ability of the algorithm to estimate a tumor response is demonstrated by using both experimental and simulated data obtained with a tissue sensing adaptive radar system.

  7. BenMAP Downloads

    EPA Pesticide Factsheets

    Download the current and legacy versions of the BenMAP program. Download configuration and aggregation/pooling/valuation files to estimate benefits. BenMAP-CE is free and open source software, and the source code is available upon request.

  8. Maps to estimate average streamflow and headwater limits for streams in U.S. Army Corps of Engineers, Mobile District, Alabama and adjacent states

    USGS Publications Warehouse

    Nelson, George H.

    1984-01-01

    U.S. Army Corps of Engineers permits are required for discharges of dredged or fill material downstream from the 'headwaters' of specified streams. The term 'headwaters' is defined as the point on a freshwater (non-tidal) stream above which the average flow is less than 5 cu ft/s. Maps of the Mobile District area showing (1) lines of equal average streamflow, and (2) lines of equal drainage areas required to produce an average flow of 5 cu ft/s are contained in this report. These maps are for use by the Corps of Engineers in their permitting program. (USGS)

  9. Top-down Estimates of Biomass Burning Emissions of Black Carbon in the Western United States

    NASA Astrophysics Data System (ADS)

    Mao, Y.; Li, Q.; Randerson, J. T.; Liou, K.

    2011-12-01

    We apply a Bayesian linear inversion to derive top-down estimates of biomass burning emissions of black carbon (BC) in the western United States (WUS) for May-November 2006 by inverting surface BC concentrations from the IMPROVE network using the GEOS-Chem chemical transport model. Model simulations are conducted at both 2°×2.5° (globally) and 0.5°×0.667° (nested over North America) horizontal resolutions. We first improve the spatial distributions and seasonal and interannual variations of the BC emissions from the Global Fire Emissions Database (GFEDv2) using MODIS 8-day active fire counts from 2005-2007. The GFEDv2 emissions in N. America are adjusted for three zones: boreal N. America, temperate N. America, and Mexico plus Central America. The resulting emissions are then used as a priori for the inversion. The a posteriori emissions are 2-5 times higher than the a priori in California and the Rockies. Model surface BC concentrations using the a posteriori estimate provide better agreement with IMPROVE observations (~20% increase in the Taylor skill score), including improved ability to capture the observed variability, especially during June-July. However, model surface BC concentrations are still biased low by ~30%. Comparisons with the Fire Locating and Modeling of Burning Emissions (FLAMBE) are included.
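
    The a posteriori solution of a Bayesian linear inversion with Gaussian prior and observation errors has a closed form, sketched below with invented sensitivities and covariances; the dimensions and error levels are placeholders, not those of the GEOS-Chem/IMPROVE system.

        import numpy as np

        # With prior x ~ N(x_a, S_a) and observations y = K x + e, e ~ N(0, S_o),
        # the a posteriori (MAP) estimate is
        #   x_hat = x_a + (K^T S_o^-1 K + S_a^-1)^-1 K^T S_o^-1 (y - K x_a).
        rng = np.random.default_rng(1)
        n_obs, n_src = 50, 3                        # receptors x emission regions
        K = rng.uniform(0.0, 1.0, (n_obs, n_src))   # source-receptor sensitivities
        x_true = np.array([3.0, 2.0, 4.0])          # "true" scaling of a priori
        y = K @ x_true + rng.normal(0.0, 0.1, n_obs)

        x_a = np.ones(n_src)                        # a priori scaling of 1
        S_a_inv = np.eye(n_src) / 0.5**2            # 50% prior uncertainty
        S_o_inv = np.eye(n_obs) / 0.1**2            # observation error covariance

        A = K.T @ S_o_inv @ K + S_a_inv
        x_hat = x_a + np.linalg.solve(A, K.T @ S_o_inv @ (y - K @ x_a))
        print(x_hat)        # a posteriori estimates, pulled toward the data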

  10. Top-down Estimates of Biomass Burning Emissions of Black Carbon in the Western United States

    NASA Astrophysics Data System (ADS)

    Mao, Y.; Li, Q.; Randerson, J. T.; CHEN, D.; Zhang, L.; Liou, K.

    2012-12-01

    We apply a Bayesian linear inversion to derive top-down estimates of biomass burning emissions of black carbon (BC) in the western United States (WUS) for May-November 2006 by inverting surface BC concentrations from the IMPROVE network using the GEOS-Chem chemical transport model. Model simulations are conducted at both 2°×2.5° (globally) and 0.5°×0.667° (nested over North America) horizontal resolutions. We first improve the spatial distributions and seasonal and interannual variations of the BC emissions from the Global Fire Emissions Database (GFEDv2) using MODIS 8-day active fire counts from 2005-2007. The GFEDv2 emissions in N. America are adjusted for three zones: boreal N. America, temperate N. America, and Mexico plus Central America. The resulting emissions are then used as a priori for the inversion. The a posteriori emissions are 2-5 times higher than the a priori in California and the Rockies. Model surface BC concentrations using the a posteriori estimate provide better agreement with IMPROVE observations (~50% increase in the Taylor skill score), including improved ability to capture the observed variability especially during June-September. However, model surface BC concentrations are still biased low by ~30%. Comparisons with the Fire Locating and Modeling of Burning Emissions (FLAMBE) are included.

  11. A Posteriori Error Bounds for the Empirical Interpolation Method

    DTIC Science & Technology

    2010-03-18

    For parameters (x̄1, x̄2) ≡ µ ∈ D^II ≡ [0.4, 0.6]^2 with α = 0.1 fixed, the results are similar to the single-parameter case (Fig. 2). Denote the set of all distinct multi-indices β of dimension P and length I by M_P^I; its cardinality is card(M_P^I) = C(P+I-1, I). The interpolation errors ‖F^(β)(·; τ) − F_M^(β)(·; τ)‖_{L∞(Ω)}, 0 < |β| < p−1, are computed for all τ ∈ Φ in O(n_Φ M N Σ_{j=0}^{p−1} card(M_P^j)) operations.

  12. Efficient Bayesian parameter estimation with implicit sampling and surrogate modeling for a vadose zone hydrological problem

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Pau, G. S. H.; Finsterle, S.

    2015-12-01

    Parameter inversion involves inferring model parameter values from sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that need to be run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with a linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with just approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), whose coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other is Gaussian process regression (GPR), for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed over the prior parameter space perform poorly. It is thus impractical to replace the hydrological model by a ROM directly in an MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure
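
    A minimal sketch of the implicit-sampling idea on a toy density standing in for the expensive TOUGH2 posterior: find the MAP point, propose from a Gaussian built on the Hessian there, and reweight so the samples target the true posterior. The density, Hessian, and sample count are all illustrative.

        import numpy as np
        from scipy import optimize

        def F(x):
            # Negative log posterior of a toy non-Gaussian target.
            x = np.atleast_2d(x)
            return 0.5 * np.sum(x**2, axis=-1) + 0.1 * np.sum(x**4, axis=-1)

        res = optimize.minimize(lambda v: F(v)[0], x0=np.full(2, 0.5))
        mu, f_mu = res.x, res.fun
        H = np.diag(1.0 + 1.2 * mu**2)     # analytic Hessian of F at the MAP

        rng = np.random.default_rng(2)
        z = rng.multivariate_normal(mu, np.linalg.inv(H), size=2000)
        dz = z - mu
        # log weight = -(F(x) - F(mu)) + log-density ratio of the proposal
        log_w = -(F(z) - f_mu) + 0.5 * np.einsum('ni,ij,nj->n', dz, H, dz)
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        print("posterior mean estimate:", w @ z)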

  13. Planetary maps

    USGS Publications Warehouse

    ,

    1992-01-01

    An important goal of the USGS planetary mapping program is to systematically map the geology of the Moon, Mars, Venus, and Mercury, and the satellites of the outer planets. These geologic maps are published in the USGS Miscellaneous Investigations (I) Series. Planetary maps on sale at the USGS include shaded-relief maps, topographic maps, geologic maps, and controlled photomosaics. Controlled photomosaics are assembled from two or more photographs or images using a network of points of known latitude and longitude. The images used for most of these planetary maps are electronic images, obtained from orbiting television cameras or various optical-mechanical systems. Photographic film was used only to map Earth's Moon.

  14. Estimation of Tree Position and STEM Diameter Using Simultaneous Localization and Mapping with Data from a Backpack-Mounted Laser Scanner

    NASA Astrophysics Data System (ADS)

    Holmgren, J.; Tulldahl, H. M.; Nordlöf, J.; Nyström, M.; Olofsson, K.; Rydell, J.; Willén, E.

    2017-10-01

    A system was developed for automatic estimation of tree positions and stem diameters. The sensor trajectory was first estimated using a positioning system consisting of a low-precision inertial measurement unit supported by image matching with data from a stereo-camera. The initial estimate of the sensor trajectory was then calibrated by adjusting the sensor pose using the laser scanner data. Special features suitable for forest environments were used to solve the correspondence and matching problems. Tree stem diameters were estimated for stem sections using laser data from individual scanner rotations and were then used for calibration of the sensor pose. A segmentation algorithm was used to associate stem sections with individual tree stems. The stem diameter estimates of all stem sections associated with the same tree stem were then combined for estimation of stem diameter at breast height (DBH). The system was validated on four 20 m radius circular plots, and manually measured trees were automatically linked to trees detected in the laser data. DBH could be estimated with an RMSE of 19 mm (6%) and a bias of 8 mm (3%). The calibrated sensor trajectory and the combined use of circle fits from individual scanner rotations made it possible to obtain reliable DBH estimates even with a low-precision positioning system.
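
    Stem-section diameters of this kind are often obtained by fitting a circle to a horizontal slice of stem points; the sketch below uses the algebraic (Kasa) least-squares circle fit on synthetic points from a partially visible stem. The fit method and all values are assumptions for illustration, not the authors' algorithm.

        import numpy as np

        # Kasa fit: solve x^2 + y^2 = a*x + b*y + c for (a, b, c), giving
        # center (a/2, b/2) and radius sqrt(c + cx^2 + cy^2).
        def fit_circle(x, y):
            A = np.column_stack([x, y, np.ones_like(x)])
            rhs = x**2 + y**2
            a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
            cx, cy = a / 2.0, b / 2.0
            return cx, cy, np.sqrt(c + cx**2 + cy**2)

        # Synthetic stem cross-section: radius 0.15 m plus ranging noise; the
        # scanner sees only about half of the stem circumference.
        rng = np.random.default_rng(3)
        theta = rng.uniform(0.0, np.pi, 80)
        x = 0.15 * np.cos(theta) + rng.normal(0, 0.005, 80)
        y = 0.15 * np.sin(theta) + rng.normal(0, 0.005, 80)
        cx, cy, r = fit_circle(x, y)
        print(f"estimated diameter: {2 * r * 100:.1f} cm")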

  15. Linkage mapping in tetraploid willows: segregation of molecular markers and estimation of linkage phases support an allotetraploid structure for Salix alba x Salix fragilis interspecific hybrids.

    PubMed

    Barcaccia, G; Meneghetti, S; Albertini, E; Triest, L; Lucchin, M

    2003-02-01

    The Salix alba-Salix fragilis complex includes closely related dioecious polyploid species, which are obligate outcrossers. Natural populations of these willows and their hybrids are represented by a mixture of highly heterozygous genotypes sharing a common gene pool. Since nothing is known about their genomic constitution, tetraploidy (2n=4x=76) in willow species makes basic and applied genetic studies difficult. We have used a two-way pseudotestcross strategy and single-dose markers (SDMs) to construct the first linkage maps for both pistillate and staminate willows. A total of 242 amplified fragment length polymorphism (AFLP) and 50 selective amplification of microsatellite polymorphic loci (SAMPL) markers, which showed 1:1 segregation in the F(1) mapping populations, were used in linkage analysis. In S. alba, 73 maternal and 48 paternal SDMs were mapped to 19 and 16 linkage groups covering 708 and 339 cM, respectively. In S. fragilis, 13 maternal and 33 paternal SDMs were mapped to six and 14 linkage groups covering 98 and 321 cM, respectively. For most cosegregation groups, a comparable number of markers linked in coupling and repulsion was identified. This finding suggests that most chromosomes pair preferentially, as occurs in allotetraploid species exhibiting disomic inheritance. The detection of 10 pairs of marker alleles from single parents showing codominant inheritance strengthens this hypothesis. The fact that, of the 1122 marker loci identified in the two male and female parents, the vast majority (77.5%) were polymorphic and as few as 22.5% were shared between parental species highlights that S. alba and S. fragilis genotypes are differentiated. The marked difference between S. alba- and S. fragilis-specific markers found in both parental combinations (on average, 65.3 vs 34.7%, respectively) supports the (phylogenetic) hypothesis that S. fragilis is derived from S. alba-like progenitors.

  16. Slope Estimation in Noisy Piecewise Linear Functions

    PubMed Central

    Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy

    2014-01-01

    This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori, though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real-world sources of piecewise linear data is used to model the transitions between slope values, and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify the choice of a reasonable number of quantization levels and also to analyze the mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure. PMID:25419020
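
    Since the abstract describes a dynamic-programming MAP estimator over discretized slopes, a Viterbi-style sketch conveys the idea; the likelihood, transition penalty, slope grid, and test signal below are illustrative stand-ins rather than the published MAPSlope algorithm.

        import numpy as np

        def map_slopes(y, dx, grid, sigma, log_p_change):
            d = np.diff(y)
            n, k = d.size, grid.size
            # Per-step log likelihood of each candidate slope.
            ll = -0.5 * ((d[:, None] - grid[None, :] * dx) / sigma) ** 2
            trans = np.full((k, k), log_p_change)
            np.fill_diagonal(trans, 0.0)          # staying on a slope is free
            score, back = ll[0].copy(), np.zeros((n, k), dtype=int)
            for t in range(1, n):
                cand = score[:, None] + trans     # cand[i, j]: slope i -> j
                back[t] = np.argmax(cand, axis=0)
                score = cand[back[t], np.arange(k)] + ll[t]
            path = [int(np.argmax(score))]
            for t in range(n - 1, 0, -1):
                path.append(back[t][path[-1]])
            return grid[np.array(path[::-1])]

        # Noisy two-segment ramp: slope +1, then slope -2.
        x = np.linspace(0.0, 2.0, 41)
        y = np.where(x < 1, x, 1 - 2 * (x - 1))
        y += np.random.default_rng(4).normal(0.0, 0.02, x.size)
        est = map_slopes(y, dx=x[1] - x[0], grid=np.linspace(-3, 3, 13),
                         sigma=0.03, log_p_change=-8.0)
        print(np.unique(est))                     # ideally close to [-2., 1.]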

  17. Estimating and Separating Noise from AIA Images

    NASA Astrophysics Data System (ADS)

    Kirk, Michael S.; Ireland, Jack; Young, C. Alex; Pesnell, W. Dean

    2016-10-01

    All digital images are corrupted by noise, and SDO AIA is no different. In most solar imaging, we have the luxury of high photon counts and low background contamination, which, when combined with careful calibration, minimize much of the impact noise has on the measurement. Outside high-intensity regions, such as in coronal holes, the noise component can become significant and complicate feature recognition and segmentation. We create a practical estimate of noise in the high-resolution AIA images across the detector CCD in all seven EUV wavelengths. A mixture of Poisson and Gaussian noise is well suited to the digital imaging environment due to the statistical distributions of photons and the characteristics of the CCD. Using state-of-the-art noise estimation techniques, the publicly available solar images, and coronal loop simulations, we construct a maximum-a-posteriori assessment of the error in these images. The estimation and mitigation of noise not only provides a clearer view of large-scale solar structure in the solar corona, but also provides physical constraints on fleeting EUV features observed with AIA.
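
    The mixed Poisson-Gaussian model mentioned above implies a variance that is affine in the signal, var = a·mean + b, so (a, b) can be recovered by fitting mean/variance pairs; the sketch below demonstrates this on synthetic data with invented gain and read-noise values, not AIA calibration numbers.

        import numpy as np

        rng = np.random.default_rng(5)
        a_true, b_true = 2.3, 15.0                     # gain and read-noise floor
        levels = np.linspace(10.0, 4000.0, 30)         # signal levels (DN)
        noise_sd = np.sqrt(a_true * levels + b_true)
        samples = levels[:, None] + rng.normal(0.0, noise_sd[:, None],
                                               (30, 5000))

        means = samples.mean(axis=1)
        variances = samples.var(axis=1, ddof=1)
        # Affine fit of variance against mean recovers (a, b).
        a_est, b_est = np.polyfit(means, variances, 1)
        print(f"a = {a_est:.2f} (true {a_true}), b = {b_est:.1f} (true {b_true})")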

  18. Slope Estimation in Noisy Piecewise Linear Functions.

    PubMed

    Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy

    2015-03-01

    This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori, though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real-world sources of piecewise linear data is used to model the transitions between slope values, and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify the choice of a reasonable number of quantization levels and also to analyze the mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure.

  19. Mapping Variables.

    ERIC Educational Resources Information Center

    Stone, Mark H.; Wright, Benjamin D.; Stenner, A. Jackson

    1999-01-01

    Describes mapping variables, the principal technique for planning and constructing a test or rating instrument. A variable map is also useful for interpreting results. Provides several maps to show the importance and value of mapping a variable by person and item data. (Author/SLD)

  20. Estimation and mapping of above-ground biomass of mangrove forests and their replacement land uses in the Philippines using Sentinel imagery

    NASA Astrophysics Data System (ADS)

    Castillo, Jose Alan A.; Apan, Armando A.; Maraseni, Tek N.; Salmo, Severino G.

    2017-12-01

    The recent launch of the Sentinel-1 (SAR) and Sentinel-2 (multispectral) missions offers a new opportunity for land-based biomass mapping and monitoring, especially in the tropics where deforestation is highest. Yet, unlike in agriculture and inland land uses, the use of Sentinel imagery has not been evaluated for biomass retrieval in mangrove forests and the non-forest land uses that replaced mangroves. In this study, we evaluated the ability of Sentinel imagery for the retrieval and predictive mapping of above-ground biomass of mangroves and their replacement land uses. We used Sentinel SAR and multispectral imagery to develop biomass prediction models through conventional linear regression and novel machine learning algorithms. We developed separate models from SAR raw polarisation backscatter data, multispectral bands, vegetation indices, and canopy biophysical variables. The results show that the model based on the biophysical variable Leaf Area Index (LAI) derived from Sentinel-2 was the most accurate in predicting overall above-ground biomass. In contrast, the model which utilised optical bands had the lowest accuracy. However, the SAR-based model was more accurate in predicting biomass in the typically sparse to low vegetation cover of non-forest replacement land uses such as abandoned aquaculture ponds, cleared mangroves and abandoned salt ponds. These models had a correlation/agreement between observed and predicted values of 0.82-0.83 and a root mean square error of 27.8-28.5 Mg ha⁻¹. Among the Sentinel-2 multispectral bands, the red and red-edge bands (bands 4, 5 and 7), combined with elevation data, were the best variable-set combination for biomass prediction. The red-edge-based Inverted Red-Edge Chlorophyll Index had the highest prediction accuracy among the vegetation indices. Overall, Sentinel-1 SAR and Sentinel-2 multispectral imagery can provide satisfactory results in the retrieval and predictive mapping of the above-ground biomass of mangroves and the replacement

  1. Contour Mapping

    NASA Technical Reports Server (NTRS)

    1995-01-01

    In the early 1990s, the Ohio State University Center for Mapping, a NASA Center for the Commercial Development of Space (CCDS), developed a system for mobile mapping called the GPSVan. While driving, users can map an area from the sophisticated mapping van, which is equipped with satellite signal receivers, video cameras, and computer systems for collecting and storing mapping data. George J. Igel and Company and the Ohio State University Center for Mapping advanced the technology for use in determining the contours of a construction site. The new system reduces the time required for mapping and staking, and can monitor the amount of soil moved.

  2. On estimating the phase of periodic waveform in additive Gaussian noise, part 2

    NASA Astrophysics Data System (ADS)

    Rauch, L. L.

    1984-11-01

    Motivated by advances in signal processing technology that support more complex algorithms, a new look is taken at the problem of estimating the phase and other parameters of a periodic waveform in additive Gaussian noise. The general problem was introduced and the maximum a posteriori probability criterion with signal space interpretation was used to obtain the structures of optimum and some suboptimum phase estimators for known constant frequency and unknown constant phase with an a priori distribution. Optimal algorithms are obtained for some cases where the frequency is a parameterized function of time with the unknown parameters and phase having a joint a priori distribution. In the last section, the intrinsic and extrinsic geometry of hypersurfaces is introduced to provide insight to the estimation problem for the small noise and large noise cases.

  3. On Estimating the Phase of Periodic Waveform in Additive Gaussian Noise, Part 2

    NASA Technical Reports Server (NTRS)

    Rauch, L. L.

    1984-01-01

    Motivated by advances in signal processing technology that support more complex algorithms, a new look is taken at the problem of estimating the phase and other parameters of a periodic waveform in additive Gaussian noise. The general problem was introduced and the maximum a posteriori probability criterion with signal space interpretation was used to obtain the structures of optimum and some suboptimum phase estimators for known constant frequency and unknown constant phase with an a priori distribution. Optimal algorithms are obtained for some cases where the frequency is a parameterized function of time with the unknown parameters and phase having a joint a priori distribution. In the last section, the intrinsic and extrinsic geometry of hypersurfaces is introduced to provide insight to the estimation problem for the small noise and large noise cases.

  4. Real-Time Radar-Based Tracking and State Estimation of Multiple Non-Conformant Aircraft

    NASA Technical Reports Server (NTRS)

    Cook, Brandon; Arnett, Timothy; Macmann, Owen; Kumar, Manish

    2017-01-01

    In this study, a novel solution for automated tracking of multiple unknown aircraft is proposed. Many current methods use transponders to self-report state information and augment track identification. While conformant aircraft typically report transponder information to alert surrounding aircraft of their state, vehicles may exist in the airspace that are non-compliant and need to be accurately tracked using alternative methods. In this study, a multi-agent tracking solution is presented that solely utilizes primary surveillance radar data to estimate aircraft state information. Main research challenges include state estimation, track management, data association, and establishing persistent track validity. In an effort to address these challenges, techniques such as maximum a posteriori estimation, Kalman filtering, degree-of-membership data association, and Nearest Neighbor Spanning Tree clustering are implemented for this application.
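
    One building block of such a pipeline, state estimation from noisy radar position returns, can be sketched with a linear Kalman filter and a constant-velocity motion model; the noise covariances and trajectory below are illustrative, and the study's own estimator (MAP estimation plus track management) is richer than this.

        import numpy as np

        dt = 1.0
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)   # constant-velocity model
        H = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0]], dtype=float)   # radar measures position
        Q = np.eye(4) * 0.05                        # process noise
        R = np.eye(2) * 50.0                        # measurement noise

        def kalman_step(x, P, z):
            x = F @ x                               # predict
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R                     # update
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(4) - K @ H) @ P
            return x, P

        x, P = np.zeros(4), np.eye(4) * 1e3
        rng = np.random.default_rng(6)
        for t in range(20):
            true_pos = np.array([100.0 * t, 50.0 * t])
            z = true_pos + rng.normal(0, np.sqrt(50.0), 2)
            x, P = kalman_step(x, P, z)
        print(x)   # estimated [x, y, vx, vy]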

  5. Myocardial T1 mapping at 3.0 tesla using an inversion recovery spoiled gradient echo readout and bloch equation simulation with slice profile correction (BLESSPC) T1 estimation algorithm.

    PubMed

    Shao, Jiaxin; Rapacchi, Stanislas; Nguyen, Kim-Lien; Hu, Peng

    2016-02-01

    To develop an accurate and precise myocardial T1 mapping technique using an inversion recovery spoiled gradient echo readout at 3.0 Tesla (T). The modified Look-Locker inversion-recovery (MOLLI) sequence was modified to use a fast low angle shot (FLASH) readout, incorporating a BLESSPC (Bloch Equation Simulation with Slice Profile Correction) T1 estimation algorithm, for accurate myocardial T1 mapping. The FLASH-MOLLI with BLESSPC fitting was compared with different approaches and sequences with regard to T1 estimation accuracy, precision and image artifacts based on simulation, phantom studies, and in vivo studies of 10 healthy volunteers and three patients at 3.0T. The FLASH-MOLLI with BLESSPC fitting yields accurate T1 estimation (average error = -5.4 ± 15.1 ms, percentage error = -0.5% ± 1.2%) for T1 values from 236 to 1852 ms and heart rates from 40 to 100 bpm in phantom studies. The FLASH-MOLLI sequence prevented off-resonance artifacts in all 10 healthy volunteers at 3.0T. In vivo, there was no significant difference between FLASH-MOLLI-derived myocardial T1 values and "ShMOLLI+IE"-derived values (1458.9 ± 20.9 ms versus 1464.1 ± 6.8 ms, P = 0.50); however, the average precision by FLASH-MOLLI was significantly better than that generated by "ShMOLLI+IE" (1.84 ± 0.36% variance versus 3.57 ± 0.94%, P < 0.001). The FLASH-MOLLI with BLESSPC fitting yields accurate and precise T1 estimation, and eliminates banding artifacts associated with bSSFP at 3.0T. © 2015 Wiley Periodicals, Inc.
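
    For context, the classical MOLLI-style fitting step (which BLESSPC replaces with Bloch-equation simulation and slice-profile correction) is a three-parameter magnitude fit followed by the Look-Locker correction; the sketch below implements that baseline with invented inversion times and noise, not the BLESSPC algorithm itself.

        import numpy as np
        from scipy.optimize import curve_fit

        # Three-parameter inversion-recovery model on magnitude data,
        # S(t) = |A - B*exp(-t/T1*)|, with Look-Locker correction
        # T1 = T1* * (B/A - 1).
        def model(t, A, B, t1_star):
            return np.abs(A - B * np.exp(-t / t1_star))

        t = np.array([100, 180, 1100, 1180, 2100, 2180, 3100, 4100], float)  # ms
        A_true, B_true, t1_true = 1.0, 1.9, 1450.0
        t1_star_true = t1_true / (B_true / A_true - 1.0)
        rng = np.random.default_rng(7)
        s = model(t, A_true, B_true, t1_star_true) + rng.normal(0, 0.005, t.size)

        (A, B, t1s), _ = curve_fit(model, t, s, p0=[1.0, 2.0, 1000.0])
        print(f"T1 ~ {t1s * (B / A - 1.0):.0f} ms")   # Look-Locker corrected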

  6. U.S. Geological Survey groundwater toolbox, a graphical and mapping interface for analysis of hydrologic data (version 1.0): user guide for estimation of base flow, runoff, and groundwater recharge from streamflow data

    USGS Publications Warehouse

    Barlow, Paul M.; Cunningham, William L.; Zhai, Tong; Gray, Mark

    2015-01-01

    This report is a user guide for the streamflow-hydrograph analysis methods provided with version 1.0 of the U.S. Geological Survey (USGS) Groundwater Toolbox computer program. These include six hydrograph-separation methods to determine the groundwater-discharge (base-flow) and surface-runoff components of streamflow—the Base-Flow Index (BFI; Standard and Modified), HYSEP (Fixed Interval, Sliding Interval, and Local Minimum), and PART methods—and the RORA recession-curve displacement method and associated RECESS program to estimate groundwater recharge from streamflow data. The Groundwater Toolbox is a customized interface built on the nonproprietary, open source MapWindow geographic information system software. The program provides graphing, mapping, and analysis capabilities in a Microsoft Windows computing environment. In addition to these hydrograph-analysis methods, the Groundwater Toolbox allows for the retrieval of hydrologic time-series data (streamflow, groundwater levels, and precipitation) from the USGS National Water Information System, downloading of a suite of preprocessed geographic information system coverages and meteorological data from the National Oceanic and Atmospheric Administration National Climatic Data Center, and analysis of data with several preprocessing and postprocessing utilities. With its data retrieval and analysis tools, the Groundwater Toolbox provides methods to estimate many of the components of the water budget for a hydrologic basin, including precipitation; streamflow; base flow; runoff; groundwater recharge; and total, groundwater, and near-surface evapotranspiration.

  7. Bayes Error Rate Estimation Using Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2003-01-01

    The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error generally yield rather weak results for small sample sizes, unless the problem has some simple characteristics such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information-theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that looks only at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Proben1 benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.

  8. Smarter Balanced Preliminary Performance Levels: Estimated MAP Scores Corresponding to the Preliminary Performance Levels of the Smarter Balanced Assessment Consortium (Smarter Balanced)

    ERIC Educational Resources Information Center

    Northwest Evaluation Association, 2015

    2015-01-01

    Recently, the Smarter Balanced Assessment Consortium (Smarter Balanced) released a document that established initial performance levels and the associated threshold scale scores for the Smarter Balanced assessment. The report included estimated percentages of students expected to perform at each of the four performance levels, reported by grade…

  9. Mapping soil-landscape elements and the wetland in dambos and estimating CH4 and CO2 emissions from a dambo-terminated catena

    NASA Astrophysics Data System (ADS)

    Sebadduka, Jerome

    elevation model (DEM) were used to classify dambo catenary units were accurate, but only slightly better than the method that made no use of gamma-ray data (e.g., conditional inference tree). It was concluded that dambo landscape elements can be mapped using these two data sources, although terrain data provides more information. Based upon a combination of hydrology and soil properties, dambo bottoms were the only element shown to constitute the dambo wetland. This zone is inundated for at least three-quarters of the main rainfall season and its soils are hydric. Using the landscape map created by Hansen et al. (2009), the wetland was found to constitute only ~15% of the dambo. This is smaller than what was mapped by FAO-Africover and the Department of Survey and Mapping, Uganda (DSM). The wetland was also found to be the main source of CH4 and sink of CO2, with additional strengths attributed to the neighboring floor. Given that these constitute less than 20% of the landscape, the dambo's net contribution to the regional CH4 budget is insignificant because 80% of the landscape is a sink. The worry, though, is the ongoing degradation, with the impact this has on the release of CO2.

  10. USGS maps

    USGS Publications Warehouse

    ,

    2005-01-01

    Discover a small sample of the millions of maps produced by the U.S. Geological Survey (USGS) in its mission to map the Nation and survey its resources. This booklet gives a brief overview of the types of maps sold and distributed by the USGS through its Earth Science Information Centers (ESIC) and also available from business partners located in most States. The USGS provides a wide variety of maps, from topographic maps showing the geographic relief and thematic maps displaying the geology and water resources of the United States, to special studies of the moon and planets.

  11. RICH MAPS

    EPA Science Inventory

    Michael Goodchild recently gave eight reasons why traditional maps are limited as communication devices, and how interactive internet mapping can overcome these limitations. In the past, many authorities in cartography, from Jenks to Bertin, have emphasized the importance of sim...

  12. Mapping Air Population

    NASA Astrophysics Data System (ADS)

    Peterson, Michael P.; Hunt, Paul; Weiß, Konrad

    2018-05-01

    "Air population" refers to the total number of people flying above the earth at any point in time. The total number of passengers can then be estimated by multiplying the number of seats for each aircraft by the current seat occupancy rate. Using this method, the estimated air population is determined by state for the airspace over the United States. In the interactive, real-time mapping system, maps are provided to show total air population, the density of air population (air population / area of state), and the ratio of air population to ground population.

  13. Estimation of Bridge Height over Water from Polarimetric SAR Image Data Using Mapping and Projection Algorithm and De-Orientation Theory

    NASA Astrophysics Data System (ADS)

    Wang, Haipeng; Xu, Feng; Jin, Ya-Qiu; Ouchi, Kazuo

    An inversion method for bridge height over water by polarimetric synthetic aperture radar (SAR) is developed. A geometric ray description illustrating the scattering mechanism of a bridge over a water surface is identified by polarimetric image analysis. Using the mapping and projection algorithm, a polarimetric SAR image of a bridge model is first simulated; it shows that scattering from a bridge over water can be identified by three strip lines corresponding to single-, double-, and triple-order scattering, respectively. A set of polarimetric parameters based on the de-orientation theory is applied to the analysis of the three types of scattering, and the thinning-clustering algorithm and Hough transform are then employed to locate the image positions of these strip lines. These lines are used to invert the bridge height. Fully polarimetric image data of airborne Pi-SAR at X-band are applied to inversion of the height and width of the Naruto Bridge in Japan. Based on the same principle, this approach is also applicable to spaceborne ALOS PALSAR single-polarization data of the Eastern Ocean Bridge in China. The results show that bridge height inversion is feasible.

  14. USGS Maps

    USGS Publications Warehouse

    ,

    1994-01-01

    Most USGS topographic maps use brown contours to show the shape and elevation of the terrain. Elevations are usually shown in feet, but on some maps they are in meters. Contour intervals vary, depending mainly on the scale of the map and the type of terrain.

  15. A Novel Four-Node Quadrilateral Smoothing Element for Stress Enhancement and Error Estimation

    NASA Technical Reports Server (NTRS)

    Tessler, A.; Riggs, H. R.; Dambach, M.

    1998-01-01

    A four-node, quadrilateral smoothing element is developed based upon a penalized-discrete-least-squares variational formulation. The smoothing methodology recovers C1-continuous stresses, thus enabling effective a posteriori error estimation and automatic adaptive mesh refinement. The element formulation originates from a five-node macro-element configuration consisting of four triangular anisoparametric smoothing elements in a cross-diagonal pattern. This element pattern enables a convenient closed-form solution for the degrees of freedom of the interior node, obtained by explicitly enforcing a set of natural edge-wise penalty constraints. The degree-of-freedom reduction scheme leads to a very efficient formulation of a four-node quadrilateral smoothing element without any compromise in robustness and accuracy of the smoothing analysis. The application examples include stress recovery and error estimation in adaptive mesh refinement solutions for an elasticity problem and an aerospace structural component.

  16. Stress Recovery and Error Estimation for 3-D Shell Structures

    NASA Technical Reports Server (NTRS)

    Riggs, H. R.

    2000-01-01

    The C-1-continuous (i.e., discontinuous) stress fields obtained from finite element analyses are in general lower-order accurate than are the corresponding displacement fields. Much effort has focused on increasing their accuracy and/or their continuity, both for improved stress prediction and especially for error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three-dimensional solids is straightforward. Attachment: Stress recovery and error estimation for shell structure (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).

  17. Maps for the nation: The current federal mapping establishment

    USGS Publications Warehouse

    North, G.W.

    1983-01-01

    The U.S. Government annually produces an estimated 53,000 new maps and charts and distributes about 160 million copies. A large number of these maps are produced under the national mapping program, a decentralized Federal/State cooperative approach to mapping the country at standard scales. Circular A-16, issued by the Office of Management and Budget in 1953 and revised in 1967, delegates the mapping responsibilities to various federal agencies. The U.S. Department of the Interior's Geological Survey is the principal federal agency responsible for implementing the national mapping program. Other major federal map producing agencies include the Departments of Agriculture, Commerce, Defense, Housing and Urban Development, and Transportation, and the Tennessee Valley Authority. To make maps and mapping information more readily available, the National Cartographic Information Center was established in 1974 and an expanded National Map Library Depository Program in 1981. The most recent of many technological advances made under the mapping program are in the areas of digital cartography and video disc and optical disc information storage systems. Future trends and changes in the federal mapping program will involve expanded information and customer service operations, further developments in the production and use of digital cartographic data, and consideration of a Federal Mapping Agency. © 1983.

  18. Topographic mapping

    USGS Publications Warehouse

    ,

    2008-01-01

    The U.S. Geological Survey (USGS) produced its first topographic map in 1879, the same year it was established. Today, more than 100 years and millions of map copies later, topographic mapping is still a central activity for the USGS. The topographic map remains an indispensable tool for government, science, industry, and leisure. Much has changed since early topographers traveled the unsettled West and carefully plotted the first USGS maps by hand. Advances in survey techniques, instrumentation, and design and printing technologies, as well as the use of aerial photography and satellite data, have dramatically improved mapping coverage, accuracy, and efficiency. Yet cartography, the art and science of mapping, may never before have undergone change more profound than today.

  19. A multi-temporal fusion-based approach for land cover mapping in support of nuclear incident response

    NASA Astrophysics Data System (ADS)

    Sah, Shagan

    affected area in the case of a nuclear event. This imagery served as a second source of data to augment results from the time series approach. The classifications from the two approaches were integrated using an a posteriori probability-based fusion approach. This was done by establishing a relationship between the classes obtained after classification of the two data sources. Despite the coarse spatial resolution of MODIS pixels, acceptable accuracies were obtained using time series features. The overall accuracies using the fusion-based approach were in the neighborhood of 80% when compared with GIS data sets from New York State. The fusion thus contributed to refining classification accuracy, with a few additional advantages, such as correction for cloud cover and robustness against point-in-time seasonal anomalies, owing to the inclusion of multi-temporal data. We concluded that this approach is capable of generating land cover maps of acceptable accuracy and rapid turnaround, which in turn can yield reliable estimates of crop acreage of a region. The final algorithm is part of an automated software tool, which can be used by emergency response personnel to generate a nuclear ingestion pathway information product within a few hours of data collection.
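
    The abstract does not spell out the exact fusion rule, but a standard a posteriori probability-based fusion of two classifiers is the Bayes product rule under conditional independence; the sketch below (all names illustrative) fuses per-pixel class posteriors from the time-series and high-resolution classifiers.

      import numpy as np

      def fuse_posteriors(p_timeseries, p_highres, prior):
          # p_*: (n_pixels, n_classes) class posteriors from each source;
          # prior: (n_classes,) class prior probabilities.
          fused = p_timeseries * p_highres / prior   # product rule
          fused /= fused.sum(axis=1, keepdims=True)  # renormalize
          return fused.argmax(axis=1)                # fused class labels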

  20. Natural and man-made hexavalent chromium, Cr(VI), in groundwater near a mapped plume, Hinkley, California—study progress as of May 2017, and a summative-scale approach to estimate background Cr(VI) concentrations

    USGS Publications Warehouse

    Izbicki, John A.; Groover, Krishangi D.

    2018-03-22

    This report describes (1) work done between January 2015 and May 2017 as part of the U.S. Geological Survey (USGS) hexavalent chromium, Cr(VI), background study and (2) the summative-scale approach to be used to estimate the extent of anthropogenic (man-made) Cr(VI) and background Cr(VI) concentrations near the Pacific Gas and Electric Company (PG&E) natural gas compressor station in Hinkley, California. Most of the field work for the study was completed by May 2017. The summative-scale approach and calculation of Cr(VI) background were not well-defined at the time the USGS proposal for the background Cr(VI) study was prepared but have since been refined as a result of data collected as part of this study. The proposed summative scale consists of multiple items, formulated as questions to be answered at each sampled well. Questions that compose the summative scale were developed to address geologic, hydrologic, and geochemical constraints on Cr(VI) within the study area. Each question requires a binary (yes or no) answer. A score of 1 will be assigned for an answer that represents data consistent with anthropogenic Cr(VI); a score of –1 will be assigned for an answer that represents data inconsistent with anthropogenic Cr(VI). The areal extent of anthropogenic Cr(VI) estimated from the summative-scale analyses will be compared with the areal extent of anthropogenic Cr(VI) estimated on the basis of numerical groundwater flow model results, along with particle-tracking analyses. On the basis of these combined results, background Cr(VI) values will be estimated for “Mojave-type” deposits, and other deposits, in different parts of the study area outside the summative-scale mapped extent of anthropogenic Cr(VI).
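
    Because each question is binary and scored +1 or -1, the summative-scale score for a well is a simple signed sum; a minimal sketch (answers are hypothetical):

      def summative_score(answers):
          # answers: booleans; True = data consistent with anthropogenic
          # Cr(VI), False = inconsistent, per the report's scoring rule.
          return sum(1 if a else -1 for a in answers)

      well_answers = [True, True, False, True, False]  # one sampled well
      print(summative_score(well_answers))             # -> 1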

  1. An analysis of carrier phase jitter in an MPSK receiver utilizing map estimation. Ph.D. Thesis Semiannual Status Report, Jul. 1993 - Jan. 1994

    NASA Technical Reports Server (NTRS)

    Osborne, William P.

    1994-01-01

    The use of 8 and 16 PSK TCM to support satellite communications in an effort to achieve more bandwidth efficiency in a power-limited channel has been proposed. This project addresses the problem of carrier phase jitter in an M-PSK receiver utilizing the high-SNR approximation to the maximum a posteriori estimation of carrier phase. In particular, numerical solutions for the 8 and 16 PSK self-noise and phase detector gain in the carrier tracking loop are presented. The effect of changing SNR on the loop noise bandwidth is also discussed. These data are then used to compute the variance of the phase error as a function of SNR. Simulation and hardware data are used to verify these calculations. The results show that there is a threshold in the variance-of-phase-error versus SNR curves that is a strong function of SNR and a weak function of loop bandwidth. The M-PSK variance thresholds occur at SNRs in the range of practical interest for the use of 8 and 16 PSK TCM. This suggests that phase error variance is an important consideration in the design of these systems.

  2. Mapping Van

    NASA Technical Reports Server (NTRS)

    1994-01-01

    A NASA Center for the Commercial Development of Space (CCDS)-developed system for satellite mapping has been commercialized for the first time. Global Visions, Inc. maps an area while driving along a road in a sophisticated mapping van equipped with satellite signal receivers, video cameras, and computer systems for collecting and storing mapping data. Data are fed into a computerized geographic information system (GIS). The resulting maps can be used for tax assessment purposes, by emergency dispatch services and fleet delivery companies, and in other applications.

  3. Estimating the Earthquake Source Time Function by Markov Chain Monte Carlo Sampling

    NASA Astrophysics Data System (ADS)

    Dȩbski, Wojciech

    2008-07-01

    Many aspects of earthquake source dynamics like dynamic stress drop, rupture velocity and directivity, etc. are currently inferred from the source time functions obtained by a deconvolution of the propagation and recording effects from seismograms. The question of the accuracy of obtained results remains open. In this paper we address this issue by considering two aspects of the source time function deconvolution. First, we propose a new pseudo-spectral parameterization of the sought function which explicitly takes into account the physical constraints imposed on the sought functions. Such parameterization automatically excludes non-physical solutions and so improves the stability and uniqueness of the deconvolution. Secondly, we demonstrate that the Bayesian approach to the inverse problem at hand, combined with an efficient Markov Chain Monte Carlo sampling technique, is a method which allows efficient estimation of the source time function uncertainties. The key point of the approach is the description of the solution of the inverse problem by the a posteriori probability density function constructed according to the Bayesian (probabilistic) theory. Next, the Markov Chain Monte Carlo sampling technique is used to sample this function so the statistical estimator of a posteriori errors can be easily obtained with minimal additional computational effort with respect to modern inversion (optimization) algorithms. The methodological considerations are illustrated by a case study of the mining-induced seismic event of the magnitude ML ≈ 3.1 that occurred at Rudna (Poland) copper mine. The seismic P-wave records were inverted for the source time functions, using the proposed algorithm and the empirical Green function technique to approximate Green functions. The obtained solutions seem to suggest some complexity of the rupture process with double pulses of energy release. However, the error analysis shows that the hypothesis of source complexity is not justified at
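
    A generic sketch of the sampling step (not the paper's specific parameterization): random-walk Metropolis sampling of the a posteriori density, from whose samples the posterior error estimates are read off.

      import numpy as np

      def metropolis(log_posterior, m0, n_steps=10000, step=0.1, seed=0):
          # log_posterior(m): log of the a posteriori density, up to a
          # constant; m0: starting model vector.
          rng = np.random.default_rng(seed)
          m = np.asarray(m0, dtype=float)
          lp = log_posterior(m)
          samples = []
          for _ in range(n_steps):
              m_new = m + step * rng.standard_normal(m.shape)
              lp_new = log_posterior(m_new)
              if np.log(rng.random()) < lp_new - lp:   # accept/reject
                  m, lp = m_new, lp_new
              samples.append(m.copy())
          return np.asarray(samples)  # e.g., samples.std(0) as error bars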

  4. Multiple-hit parameter estimation in monolithic detectors.

    PubMed

    Hunter, William C J; Barrett, Harrison H; Lewellen, Tom K; Miyaoka, Robert S

    2013-02-01

    We examine a maximum-a-posteriori method for estimating the primary interaction position of gamma rays with multiple interaction sites (hits) in a monolithic detector. In assessing the performance of a multiple-hit estimator over that of a conventional one-hit estimator, we consider a few different detector and readout configurations of a 50-mm-wide square cerium-doped lutetium oxyorthosilicate block. For this study, we use simulated data from SCOUT, a Monte-Carlo tool for photon tracking and modeling scintillation-camera output. With this tool, we determine estimate bias and variance for a multiple-hit estimator and compare these with similar metrics for a one-hit maximum-likelihood estimator, which assumes full energy deposition in one hit. We also examine the effect of event filtering on these metrics; for this purpose, we use a likelihood threshold to reject signals that are not likely to have been produced under the assumed likelihood model. Depending on detector design, we observe a 1%-12% improvement of intrinsic resolution for a 1-or-2-hit estimator as compared with a 1-hit estimator. We also observe improved differentiation of photopeak events using a 1-or-2-hit estimator as compared with the 1-hit estimator; more than 6% of photopeak events that were rejected by likelihood filtering for the 1-hit estimator were accurately identified as photopeak events and positioned without loss of resolution by a 1-or-2-hit estimator; for PET, this equates to at least a 12% improvement in coincidence-detection efficiency with likelihood filtering applied.
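
    A schematic of the estimation-plus-filtering idea (assuming Gaussian readout noise and a calibrated detector response; not the SCOUT-specific model): grid-search the maximum a posteriori interaction position, then threshold the resulting log-posterior to reject events that do not fit the assumed model.

      import numpy as np

      def map_estimate(signal, mean_fn, positions, log_prior):
          # signal: measured sensor outputs; mean_fn(p): expected outputs
          # for an interaction at p under the calibrated detector model.
          best_p, best_lp = None, -np.inf
          for p in positions:
              r = signal - mean_fn(p)
              lp = -0.5 * np.dot(r, r) + log_prior(p)
              if lp > best_lp:
                  best_p, best_lp = p, lp
          return best_p, best_lp  # threshold best_lp to filter events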

  5. Multiple-Hit Parameter Estimation in Monolithic Detectors

    PubMed Central

    Barrett, Harrison H.; Lewellen, Tom K.; Miyaoka, Robert S.

    2014-01-01

    We examine a maximum-a-posteriori method for estimating the primary interaction position of gamma rays with multiple interaction sites (hits) in a monolithic detector. In assessing the performance of a multiple-hit estimator over that of a conventional one-hit estimator, we consider a few different detector and readout configurations of a 50-mm-wide square cerium-doped lutetium oxyorthosilicate block. For this study, we use simulated data from SCOUT, a Monte-Carlo tool for photon tracking and modeling scintillation- camera output. With this tool, we determine estimate bias and variance for a multiple-hit estimator and compare these with similar metrics for a one-hit maximum-likelihood estimator, which assumes full energy deposition in one hit. We also examine the effect of event filtering on these metrics; for this purpose, we use a likelihood threshold to reject signals that are not likely to have been produced under the assumed likelihood model. Depending on detector design, we observe a 1%–12% improvement of intrinsic resolution for a 1-or-2-hit estimator as compared with a 1-hit estimator. We also observe improved differentiation of photopeak events using a 1-or-2-hit estimator as compared with the 1-hit estimator; more than 6% of photopeak events that were rejected by likelihood filtering for the 1-hit estimator were accurately identified as photopeak events and positioned without loss of resolution by a 1-or-2-hit estimator; for PET, this equates to at least a 12% improvement in coincidence-detection efficiency with likelihood filtering applied. PMID:23193231

  6. Genome mapping

    USDA-ARS?s Scientific Manuscript database

    Genome maps can be thought of much like road maps except that, instead of traversing across land, they traverse across the chromosomes of an organism. Genetic markers serve as landmarks along the chromosome and provide researchers information as to how close they may be to a gene or region of inter...

  7. Map Adventures.

    ERIC Educational Resources Information Center

    Geological Survey (Dept. of Interior), Reston, VA.

    This curriculum packet about maps, with seven accompanying lessons, is appropriate for students in grades K-3. Students learn basic concepts for visualizing objects from different perspectives and how to understand and use maps. Lessons in the packet center on a story about a little girl, Nikki, who rides in a hot-air balloon that gives her, and…

  8. Concept Mapping

    ERIC Educational Resources Information Center

    Technology & Learning, 2005

    2005-01-01

    Concept maps are graphical ways of working with ideas and presenting information. They reveal patterns and relationships and help students to clarify their thinking, and to process, organize and prioritize. Displaying information visually--in concept maps, word webs, or diagrams--stimulates creativity. Being able to think logically teaches…

  9. Incorporating Functional Annotations for Fine-Mapping Causal Variants in a Bayesian Framework Using Summary Statistics.

    PubMed

    Chen, Wenan; McDonnell, Shannon K; Thibodeau, Stephen N; Tillmans, Lori S; Schaid, Daniel J

    2016-11-01

    Functional annotations have been shown to improve both the discovery power and fine-mapping accuracy in genome-wide association studies. However, the optimal strategy to incorporate the large number of existing annotations is still not clear. In this study, we propose a Bayesian framework to incorporate functional annotations in a systematic manner. We compute the maximum a posteriori solution and use cross validation to find the optimal penalty parameters. By extending our previous fine-mapping method CAVIARBF into this framework, we require only summary statistics as input. We also derived an exact calculation of Bayes factors using summary statistics for quantitative traits, which is necessary when a large proportion of trait variance is explained by the variants of interest, such as in fine mapping expression quantitative trait loci (eQTL). We compared the proposed method with PAINTOR using different strategies to combine annotations. Simulation results show that the proposed method achieves the best accuracy in identifying causal variants among the different strategies and methods compared. We also find that for annotations with moderate effects from a large annotation pool, screening annotations individually and then combining the top annotations can produce overly optimistic results. We applied these methods on two real data sets: a meta-analysis result of lipid traits and a cis-eQTL study of normal prostate tissues. For the eQTL data, incorporating annotations significantly increased the number of potential causal variants with high probabilities. Copyright © 2016 by the Genetics Society of America.
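
    As a flavor of Bayes factors computed from summary statistics (this is the well-known Wakefield-style approximation, not CAVIARBF's exact computation), with a single-causal-variant posterior obtained by normalizing across the locus; the prior variance W is an assumed value:

      import numpy as np

      def abf(beta_hat, se, W=0.04):
          # beta_hat, se: per-variant effect estimate and standard error;
          # W: prior variance of the effect size (assumption).
          V = se ** 2
          z2 = (beta_hat / se) ** 2
          return np.sqrt(V / (V + W)) * np.exp(z2 * W / (2 * (V + W)))

      def posterior_causal(beta_hats, ses):
          # Posterior probability of causality, assuming one causal
          # variant per locus and equal priors across variants.
          bf = abf(np.asarray(beta_hats), np.asarray(ses))
          return bf / bf.sum()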

  10. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    PubMed

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
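
    The core of the method is a standard Viterbi pass over the chain of missing pixels; a compact sketch (transition and emission scores are assumed given in the log domain):

      import numpy as np

      def viterbi(log_trans, log_emit):
          # log_trans: (S, S) log transition probs between interpolation
          # functions (states); log_emit: (T, S) per-pixel log evidence.
          T, S = log_emit.shape
          delta = log_emit[0].copy()
          back = np.zeros((T, S), dtype=int)
          for t in range(1, T):
              scores = delta[:, None] + log_trans    # (from, to)
              back[t] = scores.argmax(axis=0)
              delta = scores.max(axis=0) + log_emit[t]
          path = [int(delta.argmax())]
          for t in range(T - 1, 0, -1):
              path.append(int(back[t, path[-1]]))
          return path[::-1]  # MAP sequence of interpolation functions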

  11. Application of mapped plots for single-owner forest surveys

    Treesearch

    Paul C. Van Deusen; Francis Roesch

    2009-01-01

    Mapped plots are used for the national forest inventory conducted by the U.S. Forest Service. Mapped plots are also useful for single-ownership inventories. Mapped plots can handle boundary overlap and can provide less variable estimates for specified forest conditions. Mapping is a good fit for fixed-plot inventories where the fixed-area plot is used for both mapping...

  12. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2015-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
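
    A toy version of the exhaustive search for the MAP-estimator case (the linear model and matrix names are illustrative, not the paper's engine model): for each candidate suite, form the posterior error covariance and keep the suite minimizing the summed squared estimation error.

      import numpy as np
      from itertools import combinations

      def best_suite(H, r_var, P0, k):
          # H: (m, n) sensor sensitivities (rows = candidate sensors);
          # r_var: (m,) sensor noise variances; P0: (n, n) prior covariance
          # of the health parameters. MAP error covariance:
          # P = (H_s^T R_s^-1 H_s + P0^-1)^-1 for the selected rows.
          P0_inv = np.linalg.inv(P0)
          best, best_cost = None, np.inf
          for idx in combinations(range(H.shape[0]), k):
              Hs = H[list(idx)]
              Rs_inv = np.diag(1.0 / r_var[list(idx)])
              P = np.linalg.inv(Hs.T @ Rs_inv @ Hs + P0_inv)
              cost = np.trace(P)         # sum of squared estimation errors
              if cost < best_cost:
                  best, best_cost = idx, cost
          return best, best_cost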

  13. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2016-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.

  14. Estimating clinical chemistry reference values based on an existing data set of unselected animals.

    PubMed

    Dimauro, Corrado; Bonelli, Piero; Nicolussi, Paola; Rassu, Salvatore P G; Cappio-Borlino, Aldo; Pulina, Giuseppe

    2008-11-01

    In an attempt to standardise the determination of biological reference values, the International Federation of Clinical Chemistry (IFCC) has published a series of recommendations on developing reference intervals. The IFCC recommends the use of an a priori sampling of at least 120 healthy individuals. However, such a high number of samples and laboratory analysis is expensive, time-consuming and not always feasible, especially in veterinary medicine. In this paper, an alternative (a posteriori) method is described and is used to determine reference intervals for biochemical parameters of farm animals using an existing laboratory data set. The method used was based on the detection and removal of outliers to obtain a large sample of animals likely to be healthy from the existing data set. This allowed the estimation of reliable reference intervals for biochemical parameters in Sarda dairy sheep. This method may also be useful for the determination of reference intervals for different species, ages and gender.
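
    One common way to realize such an a posteriori procedure (the paper's exact outlier rule may differ; Tukey fences are assumed here) is to iteratively strip outliers, approximating a sample of likely-healthy animals, then take the central 95% of what remains:

      import numpy as np

      def reference_interval(values, k=1.5, max_iter=10):
          x = np.asarray(values, dtype=float)
          for _ in range(max_iter):
              q1, q3 = np.percentile(x, [25, 75])
              lo, hi = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
              kept = x[(x >= lo) & (x <= hi)]   # drop Tukey outliers
              if kept.size == x.size:
                  break
              x = kept
          return np.percentile(x, [2.5, 97.5])  # 95% reference interval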

  15. Special Issue: Tenth International Conference on Finite Elements in Fluids, Tucson, Arizona

    Volume 31, Issue 1, Pages 1-406 (15 September 1999)

    Preface

    NASA Astrophysics Data System (ADS)

    Oden, J. T.; Prudhomme, S.

    1999-09-01

    We present a new approach to deliver reliable approximations of the norm of the residuals resulting from finite element solutions to the Stokes and Oseen equations. The method is based upon a global solve in a bubble space using iterative techniques. This provides an alternative to the classical equilibrated element residual methods, for which it is necessary to construct proper boundary conditions for each local problem. The method is first used to develop a global a posteriori error estimator. It is then applied in a strategy to control the numerical error in specific outputs or quantities of interest which are functions of the solutions to the Stokes and Oseen equations.

  16. Mapping Children--Mapping Space.

    ERIC Educational Resources Information Center

    Pick, Herbert L., Jr.

    Research is underway concerning the way the perception, conception, and representation of spatial layout develops. Three concepts are important here--space itself, frame of reference, and cognitive map. Cognitive map refers to a form of representation of the behavioral space, not paired associate or serial response learning. Other criteria…

  17. Mapping racism.

    PubMed

    Moss, Donald B

    2006-01-01

    The author uses the metaphor of mapping to illuminate a structural feature of racist thought, locating the degraded object along vertical and horizontal axes. These axes establish coordinates of hierarchy and of distance. With the coordinates in place, racist thought begins to seem grounded in natural processes. The other's identity becomes consolidated, and parochialism results. The use of this kind of mapping is illustrated via two patient vignettes. The author presents Freud's (1905, 1927) views in relation to such a "mapping" process, as well as Adorno's (1951) and Baldwin's (1965). Finally, the author conceptualizes the crucial status of primitivity in the workings of racist thought.

  18. Mapping Biodiversity.

    ERIC Educational Resources Information Center

    World Wildlife Fund, Washington, DC.

    This document features a lesson plan that examines how maps help scientists protect biodiversity and how plants and animals are adapted to specific ecoregions by comparing biome, ecoregion, and habitat. Samples of instruction and assessment are included. (KHR)

  19. Genetic Mapping

    MedlinePlus


  20. Verification of the WFAS Lightning Efficiency Map

    Treesearch

    Paul Sopko; Don Latham; Isaac Grenfell

    2007-01-01

    A Lightning Ignition Efficiency map was added to the suite of daily maps offered by the Wildland Fire Assessment System (WFAS) in 1999. This map computes a lightning probability of ignition (POI) based on the estimated fuel type, fuel depth, and 100-hour fuel moisture interpolated from the Remote Automated Weather Station (RAWS) network. An attempt to verify the...

  1. Confidence level estimation in multi-target classification problems

    NASA Astrophysics Data System (ADS)

    Chang, Shi; Isaacs, Jason; Fu, Bo; Shin, Jaejeong; Zhu, Pingping; Ferrari, Silvia

    2018-04-01

    This paper presents an approach for estimating the confidence level in automatic multi-target classification performed by an imaging sensor on an unmanned vehicle. An automatic target recognition algorithm comprised of a deep convolutional neural network in series with a support vector machine classifier detects and classifies targets based on the image matrix. The joint posterior probability mass function of target class, features, and classification estimates is learned from labeled data, and recursively updated as additional images become available. Based on the learned joint probability mass function, the approach presented in this paper predicts the expected confidence level of future target classifications, prior to obtaining new images. The proposed approach is tested with a set of simulated sonar image data. The numerical results show that the estimated confidence level provides a close approximation to the actual confidence level value determined a posteriori, i.e. after the new image is obtained by the on-board sensor. Therefore, the expected confidence level function presented in this paper can be used to adaptively plan the path of the unmanned vehicle so as to optimize the expected confidence levels and ensure that all targets are classified with satisfactory confidence after the path is executed.

  2. Vegetation mapping from high-resolution satellite images in the heterogeneous arid environments of Socotra Island (Yemen)

    NASA Astrophysics Data System (ADS)

    Malatesta, Luca; Attorre, Fabio; Altobelli, Alfredo; Adeeb, Ahmed; De Sanctis, Michele; Taleb, Nadim M.; Scholte, Paul T.; Vitale, Marcello

    2013-01-01

    Socotra Island (Yemen), a global biodiversity hotspot, is characterized by high geomorphological and biological diversity. In this study, we present a high-resolution vegetation map of the island based on combining vegetation analysis and classification with remote sensing. Two different image classification approaches were tested to assess the most accurate one in mapping the vegetation mosaic of Socotra. Spectral signatures of the vegetation classes were obtained through a Gaussian mixture distribution model, and a sequential maximum a posteriori (SMAP) classification was applied to account for the heterogeneity and the complex spatial pattern of the arid vegetation. This approach was compared to the traditional maximum likelihood (ML) classification. Satellite data were represented by a RapidEye image with 5 m pixel resolution and five spectral bands. Classified vegetation relevés were used to obtain the training and evaluation sets for the main plant communities. Postclassification sorting was performed to adjust the classification through various rule-based operations. Twenty-eight classes were mapped, and SMAP, with an accuracy of 87%, proved to be more effective than ML (accuracy: 66%). The resulting map will represent an important instrument for the elaboration of conservation strategies and the sustainable use of natural resources in the island.
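
    For contrast, the ML baseline that SMAP is compared against can be written compactly: assign each pixel to the class whose Gaussian class-conditional density (estimated from training relevés) is highest. The sketch below is that baseline only; SMAP additionally enforces multiscale spatial smoothness.

      import numpy as np

      def ml_classify(pixels, means, covs):
          # pixels: (n, b) spectra; means: (c, b); covs: (c, b, b).
          n, c = pixels.shape[0], means.shape[0]
          log_p = np.empty((n, c))
          for j in range(c):
              d = pixels - means[j]
              inv = np.linalg.inv(covs[j])
              _, logdet = np.linalg.slogdet(covs[j])
              quad = np.einsum('nb,bc,nc->n', d, inv, d)
              log_p[:, j] = -0.5 * (quad + logdet)
          return log_p.argmax(axis=1)   # ML class label per pixel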

  3. Map Separates

    USGS Publications Warehouse

    ,

    2001-01-01

    U.S. Geological Survey (USGS) topographic maps are printed using up to six colors (black, blue, green, red, brown, and purple). To prepare your own maps or artwork based on maps, you can order separate black-and-white film positives or negatives for any color printed on a USGS topographic map, or for one or more of the groups of related features printed in the same color on the map (such as drainage and drainage names from the blue plate). In this document, examples are shown with appropriate ink color to illustrate the various separates. When purchased, separates are black-and-white film negatives or positives. After you receive a film separate or composite from the USGS, you can crop, enlarge or reduce, and edit to add or remove details to suit your special needs. For example, you can adapt the separates for making regional and local planning maps or for doing many kinds of studies or promotions by using the features you select and then printing them in colors of your choice.

  4. Venus mapping

    NASA Technical Reports Server (NTRS)

    Batson, R. M.; Morgan, H. F.; Sucharski, Robert

    1991-01-01

    Semicontrolled image mosaics of Venus, based on Magellan data, are being compiled at 1:50,000,000, 1:10,000,000, 1:5,000,000, and 1:1,000,000 scales to support the Magellan Radar Investigator (RADIG) team. The mosaics are semicontrolled in the sense that data gaps were not filled and significant cosmetic inconsistencies exist. Contours are based on preliminary radar altimetry data that are subject to revision and improvement. Final maps to support geologic mapping and other scientific investigations, to be compiled as the dataset becomes complete, will be sponsored by the Planetary Geology and Geophysics Program and/or the Venus Data Analysis Program. All maps, both semicontrolled and final, will be published as I-maps by the United States Geological Survey. All of the mapping is based on existing knowledge of the spacecraft orbit; photogrammetric triangulation, a traditional basis for geodetic control on planets where framing cameras were used, is not feasible with the radar images of Venus, although an eventual shift of the coordinate system to a revised spin-axis location is anticipated. This shift is expected to be small enough that it will affect only large-scale maps.

  5. Analyzing thematic maps and mapping for accuracy

    USGS Publications Warehouse

    Rosenfield, G.H.

    1982-01-01

    Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors by commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by
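
    A small helper illustrating the quantities described (rows = interpretation, columns = verification; the matrix values are hypothetical):

      import numpy as np

      def accuracy_report(error_matrix):
          M = np.asarray(error_matrix, dtype=float)
          overall = np.trace(M) / M.sum()              # correct fraction
          commission = 1 - np.diag(M) / M.sum(axis=1)  # row-wise errors
          omission = 1 - np.diag(M) / M.sum(axis=0)    # column-wise errors
          return overall, commission, omission

      M = [[50, 3, 2], [4, 40, 6], [1, 5, 45]]
      print(accuracy_report(M))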

  6. Site Map | USDA Plant Hardiness Zone Map

    Science.gov Websites


  7. Estimation of Maximum Ground Motions in the Form of ShakeMaps and Assessment of Potential Human Fatalities from Scenario Earthquakes on the Chishan Active Fault in southern Taiwan

    NASA Astrophysics Data System (ADS)

    Liu, Kun Sung; Huang, Hsiang Chi; Shen, Jia Rong

    2017-04-01

    Historically, there were many damaging earthquakes in southern Taiwan during the last century. Some of these earthquakes resulted in heavy loss of human life. Accordingly, assessment of potential seismic hazards has become increasingly important in southern Taiwan, including the Kaohsiung, Tainan, and northern Pingtung areas, since the Central Geological Survey upgraded the Chishan active fault from suspected fault to Category I in 2010. In this study, we first estimate the maximum seismic ground motions in terms of PGA, PGV, and MMI by incorporating a site-effect term in attenuation relationships, aiming to show high seismic hazard areas in southern Taiwan. Furthermore, we assess potential death tolls due to large future earthquakes occurring on the Chishan active fault. As a result, from the maximum PGA ShakeMap for an Mw 7.2 scenario earthquake on the Chishan active fault in southern Taiwan, we can see that areas with high PGA, above 400 gals, are located in the northeastern, central, and northern parts of southwestern Kaohsiung as well as the southern part of central Tainan. In addition, a comparison shows that cities located in Tainan City at similar distances from the Chishan fault have relatively greater PGA and PGV than those in Kaohsiung City and Pingtung County. This is mainly due to large site response factors in Tainan. On the other hand, seismic hazards in terms of PGA and PGV are not particularly high in the areas near the Chishan fault, mainly because these areas are marked with low site response factors. Finally, the estimated fatalities in Kaohsiung City, at 5230, 4285, and 2786 for Mw 7.2, 7.0, and 6.8, respectively, are higher than those estimated for Tainan City and Pingtung County. The main reason is that population densities above 10,000 persons per km2 are present in Fongshan, Zuoying, Sanmin, Cianjin, Sinsing, Yancheng, and Lingya Districts, and between 5,000 and 10,000 persons per km2 in Nanzih and Gushan Districts in

  8. Mapping Potassium

    NASA Image and Video Library

    2015-04-16

    During the first year of NASA's MESSENGER orbital mission, the spacecraft's GRS instrument measured the elemental composition of Mercury's surface materials. Among the most important discoveries from the GRS was the observation of higher abundances of the moderately volatile elements potassium, sodium, and chlorine than expected from previous scientific models and theories. Particularly high concentrations of these elements were observed at high northern latitudes, as illustrated in this potassium abundance map, which provides a view of the surface centered at 60° N latitude and 120° E longitude. This map was the first elemental map ever made of Mercury's surface and is to date the only map to report absolute elemental concentrations, as opposed to element ratios. Prior to MESSENGER's arrival at Mercury, scientists expected that the planet would be depleted in moderately volatile elements, as is the case for our Moon. The unexpectedly high abundances observed with the GRS have forced a reevaluation of our understanding of the formation and evolution of Mercury. In addition, the K map provided the first evidence for distinct geochemical terranes on Mercury, as the high-potassium region was later found to also be distinct in its low Mg/Si, Ca/Si, S/Si, and high Na/Si and Cl/Si abundances. Instrument: Gamma-Ray Spectrometer (GRS) http://photojournal.jpl.nasa.gov/catalog/PIA19414

  9. Vision-based mapping with cooperative robots

    NASA Astrophysics Data System (ADS)

    Little, James J.; Jennings, Cullen; Murray, Don

    1998-10-01

    Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.
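
    A conservative occupancy grid is typically maintained with per-cell log-odds updates; a minimal sketch of that bookkeeping (the increments are hypothetical sensor-model values, not taken from the paper):

      import numpy as np

      L_OCC, L_FREE = 0.85, -0.4   # assumed log-odds increments

      def update_cell(logodds, observed_occupied):
          # Accumulate stereo evidence for one grid cell; multi-robot
          # fusion amounts to summing each robot's accumulated evidence.
          return logodds + (L_OCC if observed_occupied else L_FREE)

      def occupancy_probability(logodds):
          return 1.0 / (1.0 + np.exp(-logodds))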

  10. Bayesian estimation of multicomponent relaxation parameters in magnetic resonance fingerprinting.

    PubMed

    McGivney, Debra; Deshmane, Anagha; Jiang, Yun; Ma, Dan; Badve, Chaitra; Sloan, Andrew; Gulani, Vikas; Griswold, Mark

    2018-07-01

    To estimate multiple components within a single voxel in magnetic resonance fingerprinting when the number and types of tissues comprising the voxel are not known a priori. Multiple tissue components within a single voxel are potentially separable with magnetic resonance fingerprinting as a result of differences in signal evolutions of each component. The Bayesian framework for inverse problems provides a natural and flexible setting for solving this problem when the tissue composition per voxel is unknown. Assuming that only a few entries from the dictionary contribute to a mixed signal, sparsity-promoting priors can be placed upon the solution. An iterative algorithm is applied to compute the maximum a posteriori estimator of the posterior probability density to determine the magnetic resonance fingerprinting dictionary entries that contribute most significantly to mixed or pure voxels. Simulation results show that the algorithm is robust in finding the component tissues of mixed voxels. Preliminary in vivo data confirm this result, and show good agreement in voxels containing pure tissue. The Bayesian framework and algorithm shown provide accurate solutions for the partial-volume problem in magnetic resonance fingerprinting. The flexibility of the method will allow further study into different priors and hyperpriors that can be applied in the model. Magn Reson Med 80:159-170, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  11. Seismic hazard maps for Haiti

    USGS Publications Warehouse

    Frankel, Arthur; Harmsen, Stephen; Mueller, Charles; Calais, Eric; Haase, Jennifer

    2011-01-01

    We have produced probabilistic seismic hazard maps of Haiti for peak ground acceleration and response spectral accelerations that include the hazard from the major crustal faults, subduction zones, and background earthquakes. The hazard from the Enriquillo-Plantain Garden, Septentrional, and Matheux-Neiba fault zones was estimated using fault slip rates determined from GPS measurements. The hazard from the subduction zones along the northern and southeastern coasts of Hispaniola was calculated from slip rates derived from GPS data and the overall plate motion. Hazard maps were made for a firm-rock site condition and for a grid of shallow shear-wave velocities estimated from topographic slope. The maps show substantial hazard throughout Haiti, with the highest hazard in Haiti along the Enriquillo-Plantain Garden and Septentrional fault zones. The Matheux-Neiba Fault exhibits high hazard in the maps for 2% probability of exceedance in 50 years, although its slip rate is poorly constrained.

  12. Drawing Road Networks with Mental Maps.

    PubMed

    Lin, Shih-Syun; Lin, Chao-Hung; Hu, Yan-Jhang; Lee, Tong-Yee

    2014-09-01

    Tourist and destination maps are thematic maps designed to represent specific themes. The road network topologies in these maps are generally more important than the geometric accuracy of roads. A road network warping method is proposed to facilitate map generation and improve theme representation in maps. The basic idea is to deform a road network to match a user-specified mental map while an optimization process propagates the distortions originating from road network warping. To generate a map, the proposed method includes algorithms for estimating road significance and for deforming a road network according to various geometric and aesthetic constraints. The proposed method can produce an iconic mark of a theme from a road network and meet a user-specified mental map. Therefore, the resulting map can serve as a tourist or destination map that not only provides visual aids for route planning and navigation tasks, but also visually emphasizes the presentation of a theme for the purpose of advertising. In the experiments, the demonstrations of map generation show that our method enables map generation systems to generate deformed tourist and destination maps efficiently.

  13. Misclassification bias in areal estimates

    Treesearch

    Raymond L. Czaplewski

    1992-01-01

    In addition to thematic maps, remote sensing provides estimates of area in different thematic categories. Areal estimates are frequently used for resource inventories, management planning, and assessment analyses. Misclassification causes bias in these statistical areal estimates. For example, if a small percentage of a common cover type is misclassified as a rare...

  14. Mapping your competitive position.

    PubMed

    D'Aveni, Richard A

    2007-11-01

    A price-benefit positioning map helps you see, through your customers' eyes, how your product compares with all its competitors in a market. You can draw such a map quickly and objectively, without having to resort to costly, time-consuming consumer surveys or subjective estimates of the excellence of your product and the shortcomings of all the others. Creating a positioning map involves three steps: First, define your market to include everything your customers might consider to be your product's competitors or substitutes. Second, track the price your customers actually pay (wholesale or retail? bundled or unbundled?) and identify what your customers see as your offering's primary benefit. This is done through regression analysis, determining which of the product's attributes (as described objectively by rating services, government agencies, R&D departments, and the like) explains most of the variance in its price. Third, draw the map by plotting on a graph the position of every product in the market you've selected according to its price and its level of primary benefit, and draw a line that runs through the middle of the points. What you get is a picture of the competitive landscape of your market, where all the products above the line command a price premium owing to some secondary benefit customers value, and all those below the line are positioned to earn market share through lower prices and reduced secondary benefits. Using examples as varied as Harley-Davidson motorcycles, Motorola cell phones, and the New York restaurant market, Tuck professor D'Aveni demonstrates some of the many ways the maps can be used: to locate unoccupied or less-crowded spaces in highly competitive markets, for instance, or to identify opportunities created through changes in the relationship between the primary benefit and prices. The maps even allow companies to anticipate, and counter, rivals' strategies. Reprint R0711G
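
    A sketch of steps two and three with made-up data (attribute scores and prices are hypothetical): regress price on each attribute, keep the attribute explaining most of the price variance as the primary benefit, and plot products against the fitted line.

      import numpy as np

      X = np.array([[3.0, 1.2], [4.5, 0.8], [2.0, 2.5], [5.0, 2.0]])
      y = np.array([120.0, 150.0, 100.0, 180.0])  # prices actually paid

      def r_squared(x, y):
          # Fraction of price variance explained by a single attribute.
          A = np.column_stack([x, np.ones_like(x)])
          coef, *_ = np.linalg.lstsq(A, y, rcond=None)
          return 1 - (y - A @ coef).var() / y.var()

      # Primary benefit = attribute with the highest R^2 against price;
      # the map then plots each product at (benefit level, price).
      primary = max(range(X.shape[1]), key=lambda j: r_squared(X[:, j], y))
      print("primary benefit attribute:", primary)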

  15. Investigation of contrast-enhanced subtracted breast CT images with MAP-EM based on projection-based weighting imaging.

    PubMed

    Zhou, Zhengdong; Guan, Shaolin; Xin, Runchao; Li, Jianbo

    2018-06-01

    Contrast-enhanced subtracted breast computed tomography (CESBCT) images acquired using an energy-resolved photon counting detector can be helpful to enhance the visibility of breast tumors. In such technology, one challenge is the limited number of photons in each energy bin, possibly leading to high noise in the separate images from each energy bin, the projection-based weighted image, and the subtracted image. In conventional low-dose CT imaging, iterative image reconstruction provides a superior signal-to-noise ratio compared with the filtered back projection (FBP) algorithm. In this paper, maximum a posteriori expectation maximization (MAP-EM) based on projection-based weighting imaging is proposed for the reconstruction of CESBCT images acquired using an energy-resolving photon counting detector, and its performance is investigated in terms of contrast-to-noise ratio (CNR). The simulation study shows that MAP-EM based on projection-based weighting imaging can improve the CNR in CESBCT images by 117.7%-121.2% compared with the FBP-based projection-based weighting imaging method. When compared with energy-integrating imaging that uses the MAP-EM algorithm, projection-based weighting imaging that uses the MAP-EM algorithm can improve the CNR of CESBCT images by 10.5%-13.3%. In conclusion, MAP-EM based on projection-based weighting imaging shows significant improvement in the CNR of the CESBCT image compared with FBP based on projection-based weighting imaging, and MAP-EM based on projection-based weighting imaging outperforms MAP-EM based on energy-integrating imaging for CESBCT imaging.
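
    The reported figure of merit is straightforward to compute; a sketch (the region masks are assumed given, e.g., a tumor ROI and a background patch):

      import numpy as np

      def cnr(image, roi_mask, bg_mask):
          # Contrast between ROI and background, normalized by the
          # background noise level.
          roi, bg = image[roi_mask], image[bg_mask]
          return abs(roi.mean() - bg.mean()) / bg.std()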

  16. Map Downloads | USDA Plant Hardiness Zone Map

    Science.gov Websites

    National, regional, and state maps are available in several formats under the View Maps section. Print-quality maps (300 dpi TIF graphic, 222 MB US map; Adobe Photoshop PS, 25 MB) are very large files.

  17. Map projections

    USGS Publications Warehouse

    ,

    1993-01-01

    A map projection is used to portray all or part of the round Earth on a flat surface. This cannot be done without some distortion. Every projection has its own set of advantages and disadvantages. There is no "best" projection. The mapmaker must select the one best suited to the needs, reducing distortion of the most important features. Mapmakers and mathematicians have devised almost limitless ways to project the image of the globe onto paper. Scientists at the U. S. Geological Survey have designed projections for their specific needs—such as the Space Oblique Mercator, which allows mapping from satellites with little or no distortion. This document gives the key properties, characteristics, and preferred uses of many historically important projections and of those frequently used by mapmakers today.

  18. Estimated Radiation Dosage on Mars

    NASA Image and Video Library

    2002-03-01

    This global map of Mars, based on data from NASA Mars Odyssey, shows the estimated radiation dosages from cosmic rays reaching the surface, a serious health concern for any future human exploration of the planet.

  19. SAFIS Area Estimation Techniques

    Treesearch

    Gregory A. Reams

    2000-01-01

    The Southern Annual Forest Inventory System (SAFIS) is in various stages of implementation in 8 of the 13 southern states served by the Southern Research Station of the USDA Forest Service. Compared to periodic inventories, SAFIS requires more rapid generation of land use and land cover maps. The current photo system for phase one area estimation has changed little...

  1. Assessing the Importance of Prior Biospheric Fluxes on Inverse Model Estimates of CO2

    NASA Astrophysics Data System (ADS)

    Philip, S.; Johnson, M. S.; Potter, C. S.; Genovese, V. B.

    2017-12-01

    Atmospheric mixing ratios of carbon dioxide (CO2) are largely controlled by anthropogenic emissions and biospheric sources/sinks. The processes controlling terrestrial biosphere-atmosphere carbon exchange are not yet fully understood, so models differ significantly in their quantification of biospheric CO2 fluxes. Atmospheric chemical transport models (CTMs) and global climate models (GCMs) currently use multiple different biospheric CO2 flux models, resulting in large differences in simulations of the global carbon cycle. The Orbiting Carbon Observatory 2 (OCO-2) satellite mission was designed to improve understanding of the processes involved in the exchange of carbon between terrestrial ecosystems and the atmosphere, thereby allowing more accurate assessment of the seasonal/inter-annual variability of CO2. OCO-2 provides much-needed CO2 observations in data-limited regions, allowing for the evaluation of model simulations of greenhouse gases (GHG) and facilitating global/regional estimates of "top-down" CO2 fluxes. We conduct a 4-D variational (4D-Var) data assimilation with the GEOS-Chem (Goddard Earth Observing System-Chemistry) CTM using 1) OCO-2 land nadir and land glint retrievals and 2) global in situ surface flask observations to constrain biospheric CO2 fluxes. We apply different state-of-the-science, year-specific CO2 flux models (e.g., NASA-CASA (NASA-Carnegie Ames Stanford Approach), CASA-GFED (Global Fire Emissions Database), Simple Biosphere Model version 4 (SiB-4), and LPJ (Lund-Potsdam-Jena)) to assess the impact of "a priori" flux predictions on "a posteriori" estimates. We will present the "top-down" CO2 flux estimates for the year 2015 using OCO-2 and in situ observations, and a complete indirect evaluation of the a priori and a posteriori flux estimates using independent in situ observations. We will also present our assessment of the variability of "top-down" CO2 flux estimates when using different...
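
    To make the prior-sensitivity question concrete, here is a minimal linear-Gaussian analogue of the flux inversion (an analytic MAP solution with a toy observation operator, not GEOS-Chem's actual 4D-Var machinery; all matrices and flux values are illustrative). Swapping the prior changes the posterior most where the observations constrain the fluxes weakly:

```python
import numpy as np

rng = np.random.default_rng(1)

n_flux, n_obs = 5, 12
H = rng.uniform(0.0, 1.0, size=(n_obs, n_flux))   # toy observation operator
x_true = np.array([2.0, -1.0, 0.5, 3.0, -0.5])    # "true" fluxes (illustrative)
R = 0.1 * np.eye(n_obs)                           # observation error covariance
y = H @ x_true + rng.multivariate_normal(np.zeros(n_obs), R)

def posterior(x_prior, B):
    """Analytic MAP solution of the linear-Gaussian cost function
    J(x) = (x - x_prior)^T B^-1 (x - x_prior) + (Hx - y)^T R^-1 (Hx - y)."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # Kalman-gain form
    return x_prior + K @ (y - H @ x_prior)

# Two different "biospheric flux models" standing in as priors:
for x_prior in (np.zeros(n_flux), np.full(n_flux, 2.0)):
    print(np.round(posterior(x_prior, np.eye(n_flux)), 3))
```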

  2. Planck 2015 results. VIII. High Frequency Instrument data processing: Calibration and maps

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Adam, R.; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bertincourt, B.; Bielewicz, P.; Bock, J. J.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Le Jeune, M.; Leahy, J. P.; Lellouch, E.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Moreno, R.; Morgante, G.; Mortlock, D.; Moss, A.; Mottet, S.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rusholme, B.; Sandri, M.; Santos, D.; Sauvé, A.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vibert, L.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Watson, R.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-09-01

    This paper describes the processing applied to the cleaned, time-ordered information obtained from the Planck High Frequency Instrument (HFI) with the aim of producing photometrically calibrated maps in temperature and (for the first time) in polarization. The data from the entire 2.5-year HFI mission include almost five full-sky surveys. HFI observes the sky over a broad range of frequencies, from 100 to 857 GHz. To obtain the best accuracy on the calibration over such a large range, two different photometric calibration schemes have been used. The 545 and 857 GHz data are calibrated using models of planetary atmospheric emission. The lower frequencies (from 100 to 353 GHz) are calibrated using the time-variable cosmic microwave background (CMB) dipole, which we call the orbital dipole. This source of calibration depends only on the satellite velocity with respect to the solar system. Using a CMB temperature of TCMB = 2.7255 ± 0.0006 K, it permits an independent measurement of the amplitude of the CMB solar dipole (3364.3 ± 1.5 μK), which is approximately 1σ higher than the WMAP measurement, with a direction that is consistent between the two experiments. We describe the pipeline used to produce the maps of intensity and linear polarization from the HFI timelines, and the scheme used to set the zero level of the maps a posteriori. We also summarize the noise characteristics of the HFI maps in the 2015 Planck data release and present some null tests to assess their quality. Finally, we discuss the major systematic effects, in particular the leakage induced by flux mismatch between the detectors, which leads to a spurious polarization signal.
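
    As a hedged back-of-the-envelope check of the dipole numbers quoted above, the standard Doppler relation ΔT ≈ T_CMB·v/c (not code from the HFI pipeline; the 30 km/s orbital speed is an approximation) reproduces their order of magnitude:

```python
# Doppler dipole amplitude: dT = T_cmb * v / c.
T_CMB = 2.7255        # K, as quoted above
c = 299_792.458       # speed of light, km/s
v_orbital = 30.0      # km/s, approximate Earth orbital speed (illustrative)

dT_orbital = T_CMB * v_orbital / c
print(f"orbital dipole ~ {dT_orbital * 1e6:.0f} microkelvin")   # ~273 uK

# Conversely, the solar dipole quoted above (3364.3 uK) implies a velocity:
v_sun = 3364.3e-6 / T_CMB * c
print(f"solar-dipole velocity ~ {v_sun:.0f} km/s")              # ~370 km/s
```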

  3. Influence of erroneous patient records on population pharmacokinetic modeling and individual bayesian estimation.

    PubMed

    van der Meer, Aize Franciscus; Touw, Daniël J; Marcus, Marco A E; Neef, Cornelis; Proost, Johannes H

    2012-10-01

    Observational data sets can be used for population pharmacokinetic (PK) modeling. However, these data sets are generally less precisely recorded than experimental data sets. This article investigates the influence of erroneous records on population PK modeling and individual maximum a posteriori Bayesian (MAPB) estimation. A total of 1123 patient records of neonates who were administered vancomycin were used for population PK modeling by iterative two-stage Bayesian (ITSB) analysis. Cut-off values for weighted residuals were tested for exclusion of records from the analysis. A simulation study was performed to assess the influence of erroneous records on population modeling and individual MAPB estimation; the cut-off values for weighted residuals were also tested in the simulation study. Registration errors had limited influence on the outcomes of population PK modeling but can have detrimental effects on individual MAPB estimation. A population PK model created from a data set with many registration errors has little influence on subsequent MAPB estimates for precisely recorded data. A weighted residual value of 2 for concentration measurements has good discriminative power for identifying erroneous records. ITSB analysis and its individual estimates are hardly affected by most registration errors, and large registration errors can be detected by the weighted residuals of concentration measurements.
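
    A minimal sketch of the weighted-residual screen the abstract describes follows; the |WR| > 2 cut-off is taken from the abstract, while the observations, predictions, and error SDs are placeholders:

```python
import numpy as np

# Observed vancomycin concentrations vs. model predictions (mg/L) -- placeholders.
observed = np.array([12.1, 8.4, 55.0, 10.2, 7.9])
predicted = np.array([11.5, 9.0, 14.0, 10.8, 8.3])
sd = np.array([1.5, 1.2, 1.6, 1.4, 1.1])  # assumed residual-error SD per sample

weighted_residuals = (observed - predicted) / sd

# The abstract's cut-off: |WR| > 2 flags a likely registration error.
flagged = np.abs(weighted_residuals) > 2.0
for i, (wr, bad) in enumerate(zip(weighted_residuals, flagged)):
    print(f"record {i}: WR = {wr:+.2f}{'  <-- check registration' if bad else ''}")
```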

  4. Parameter Estimation and Model Selection in Computational Biology

    PubMed Central

    Lillacci, Gabriele; Khammash, Mustafa

    2010-01-01

    A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time-course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds in three steps. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess if it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection. PMID:20221262
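
    To illustrate the filtering step of the approach, here is a minimal sketch of extended Kalman filtering with state augmentation for parameter estimation (a one-dimensional decay model with an unknown rate constant; a generic textbook construction, not either model from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Model: x' = -k*x, observed with noise; k unknown. Augmented state z = [x, k].
k_true, dt, n_steps = 0.5, 0.1, 200
xs = 10.0 * np.exp(-k_true * dt * np.arange(n_steps))
ys = xs + rng.normal(0.0, 0.2, size=n_steps)          # noisy measurements

z = np.array([8.0, 0.1])                              # initial guesses for x and k
P = np.diag([4.0, 1.0])                               # initial covariance
Q = np.diag([1e-6, 1e-6])                             # small process noise keeps k adaptable
R = 0.2 ** 2                                          # measurement noise variance
H = np.array([[1.0, 0.0]])                            # we observe x only

for y in ys:
    # Predict: Euler step of the augmented dynamics [x' = -k*x, k' = 0].
    x, k = z
    z = np.array([x - k * x * dt, k])
    F = np.array([[1.0 - k * dt, -x * dt],            # Jacobian of the step
                  [0.0, 1.0]])
    P = F @ P @ F.T + Q
    # Update with the scalar measurement.
    S = (H @ P @ H.T)[0, 0] + R
    K = (P @ H.T / S).ravel()
    z = z + K * (y - z[0])
    P = P - np.outer(K, H @ P)

print(f"estimated k = {z[1]:.3f} (true value 0.5)")
```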

  5. Relationship mapping

    NASA Astrophysics Data System (ADS)

    Benachenhou, D.

    2009-04-01

    Information-technology departments in large enterprises spend 40% of their budgets on information integration: combining information from different data sources into a coherent form. IDC, a market-intelligence firm, estimates that the market for data integration and access software (which includes the key enabling technology for information integration) was about $2.5 billion in 2007 and is expected to grow to $3.8 billion in 2012. This is only the cost estimate for structured, or traditional, database information integration. Just imagine the market for transforming text into structured information and subsequently fusing it with traditional databases.

  6. Mapping flood hazards under uncertainty through probabilistic flood inundation maps

    NASA Astrophysics Data System (ADS)

    Stephens, T.; Bledsoe, B. P.; Miller, A. J.; Lee, G.

    2017-12-01

    Changing precipitation, rapid urbanization, and population growth interact to create unprecedented challenges for flood mitigation and management. Standard methods for estimating risk from flood inundation maps generally involve simulations of floodplain hydraulics for an established regulatory discharge of specified frequency. Hydraulic model results are then geospatially mapped and depicted as a discrete boundary of flood extents and a binary representation of the probability of inundation (in or out) that is assumed constant over a project's lifetime. Consequently, existing methods used to define flood hazards and assess risk management are hindered by deterministic approaches that assume stationarity in a nonstationary world, failing to account for the spatio-temporal variability of climate and land use as they translate to hydraulic models. This presentation outlines novel techniques for portraying flood hazards and presents results from multiple flood inundation maps spanning hydroclimatic regions. Flood inundation maps generated through modeling of floodplain hydraulics are probabilistic, reflecting uncertainty quantified through Monte Carlo analyses of model inputs and parameters under current and future scenarios. The likelihood of inundation and the range of variability in flood extents resulting from the Monte Carlo simulations are then compared with deterministic evaluations of flood hazards from current regulatory flood hazard maps. By facilitating alternative approaches to portraying flood hazards, the techniques described in this presentation can contribute to a shifting paradigm in flood management that acknowledges the inherent uncertainty in model estimates and the nonstationary behavior of land use and climate.
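
    The probabilistic idea can be sketched in a few lines: sample the uncertain inputs, run a (here drastically simplified) depth model per draw, and report a per-cell inundation probability instead of a binary boundary. The stage-discharge relation and parameter ranges below are purely illustrative stand-ins for a real hydraulic model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ground elevations (m) across a toy floodplain transect.
ground = np.array([5.0, 3.0, 1.5, 0.5, 0.0, 0.4, 1.2, 2.8, 4.5])

n_sims = 10_000
# Uncertain inputs: discharge Q (m^3/s) and Manning's roughness n.
Q = rng.lognormal(mean=np.log(150.0), sigma=0.3, size=n_sims)
n_mann = rng.uniform(0.03, 0.06, size=n_sims)

# Grossly simplified stage-discharge relation standing in for a hydraulic model:
# water surface elevation rises with (Q*n)^0.6 (illustrative only).
wse = 0.25 * (Q * n_mann) ** 0.6

# Probability that each cell is inundated across the ensemble.
prob = (wse[:, None] > ground[None, :]).mean(axis=0)
for elev, p in zip(ground, prob):
    print(f"cell at {elev:3.1f} m: P(inundated) = {p:.2f}")
```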

  7. Estimating the Geocenter from GNSS Observations

    NASA Astrophysics Data System (ADS)

    Dach, Rolf; Meindl, Michael; Beutler, Gerhard; Schaer, Stefan; Lutz, Simon; Jäggi, Adrian

    2014-05-01

    The satellites of the Global Navigation Satellite Systems (GNSS) orbit the Earth according to the laws of celestial mechanics. As a consequence, the satellites are sensitive to the coordinates of the center of mass of the Earth. The coordinates of the (ground) tracking stations refer to the center of figure as the conventional origin of the reference frame. The difference between the center of mass and the center of figure is the instantaneous geocenter. Following this definition, global GNSS solutions are sensitive to the geocenter. Several studies have demonstrated strong correlations of GNSS-derived geocenter coordinates with parameters intended to absorb radiation pressure effects acting on the GNSS satellites, and with GNSS satellite clock parameters. One should thus ask to what extent these satellite-related parameters absorb (or hide) the geocenter information. A clean simulation study has been performed to answer this question. The simulation environment makes it possible, in particular, to introduce user-defined shifts of the geocenter (systematic inconsistencies between the satellites' and stations' reference frames). These geocenter shifts may be recovered by the mentioned parameters, provided they were set up in the analysis. If the geocenter coordinates are not estimated, one may find out which other parameters absorb the user-defined shifts of the geocenter and to what extent. Furthermore, the simulation environment also allows extraction of the correlation matrix from the a posteriori covariance matrix to study the correlations between different parameter types of the GNSS analysis system. Our results show high degrees of correlation between geocenter coordinates, orbit-related parameters, and satellite clock parameters. These correlations are of the same order of magnitude as the correlations between station heights, troposphere, and receiver clock parameters in each regional or global GNSS network analysis. If such correlations...
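
    Extracting a correlation matrix from the a posteriori covariance matrix, as mentioned above, is a one-line operation; the sketch below uses a small synthetic covariance (not values from an actual GNSS adjustment) in which the off-diagonal entries mimic strongly correlated parameter types:

```python
import numpy as np

# Synthetic a posteriori covariance for three parameters, e.g.
# [geocenter_Z, radiation-pressure parameter, satellite clock] (illustrative).
cov = np.array([
    [4.0, 3.6, 3.4],
    [3.6, 4.1, 3.5],
    [3.4, 3.5, 4.3],
])

sigma = np.sqrt(np.diag(cov))
corr = cov / np.outer(sigma, sigma)   # corr_ij = cov_ij / (sigma_i * sigma_j)

print(np.round(corr, 2))  # off-diagonals near 1 signal strongly correlated parameters
```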

  8. Human Mind Maps

    ERIC Educational Resources Information Center

    Glass, Tom

    2016-01-01

    When students generate mind maps, or concept maps, the maps are usually on paper, computer screens, or a blackboard. Human Mind Maps require few resources and little preparation. The main requirements are space where students can move around and a little creativity and imagination. Mind maps can be used for a variety of purposes, and Human Mind…

  9. Topographic maps: Tools for planning

    USGS Publications Warehouse

    Kaufman, George A.

    1980-01-01

    Topographic maps are a detailed record of a land area, giving geographic positions and elevations for both natural and man-made features. They show the shape of the land (the mountains, valleys, and plains) by means of brown contour lines (lines of equal elevation above sea level). In steep mountainous areas, contours are closely spaced; in flatter areas, they are far apart. The elevation of any point on the map can be estimated by referring to the elevations of the contour lines above and below it.
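
    The point-elevation estimate described above is, in practice, a linear interpolation between the bounding contours; a minimal sketch with illustrative numbers:

```python
def estimate_elevation(lower_contour: float, upper_contour: float,
                       frac_between: float) -> float:
    """Linearly interpolate elevation for a point between two contour lines.

    frac_between: fractional distance from the lower contour toward the upper
    one, measured along the ground (0.0 = on lower contour, 1.0 = on upper)."""
    return lower_contour + frac_between * (upper_contour - lower_contour)

# A point 40% of the way from the 1200 ft contour to the 1240 ft contour:
print(estimate_elevation(1200.0, 1240.0, 0.4))  # 1216.0 ft
```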