Sample records for a priori error analysis

  1. Orbit/attitude estimation with LANDSAT Landmark data

    NASA Technical Reports Server (NTRS)

    Hall, D. L.; Waligora, S.

    1979-01-01

    The use of LANDSAT landmark data for orbit/attitude and camera bias estimation was studied. The preliminary results of these investigations are presented. The Goddard Trajectory Determination System (GTDS) error analysis capability was used to perform error analysis studies. A number of questions were addressed, including parameter observability and sensitivity, and the effects on the solve-for parameter errors of data span, density, distribution, and a priori covariance weighting. The use of the GTDS differential correction capability with actual landmark data was examined. The rms line and element observation residuals were studied as a function of the solve-for parameter set, a priori covariance weighting, force model, attitude model, and data characteristics. Sample results are presented. Finally, verification and preliminary system evaluation of the LANDSAT NAVPAK system for sequential (extended Kalman filter) estimation of orbit and camera bias parameters is given.

  2. Some Simultaneous Inference Procedures for A Priori Contrasts.

    ERIC Educational Resources Information Center

    Convey, John J.

    The testing of a priori contrasts, post hoc contrasts, and experimental error rates are discussed. Methods for controlling the experimental error rate for a set of a priori contrasts tested simultaneously have been developed by Dunnett, Dunn, Sidak, and Krishnaiah. Each of these methods is discussed and contrasted as to applicability, power, and…

  3. Asteroid orbital error analysis: Theory and application

    NASA Technical Reports Server (NTRS)

    Muinonen, K.; Bowell, Edward

    1992-01-01

    We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation, the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation gives the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
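
    A minimal numpy sketch of the covariance-propagation step described above; the element covariance and the Jacobian below are invented placeholders, not values from the paper:

    ```python
    import numpy as np

    # Hypothetical 6x6 covariance of the orbital elements at the fit epoch
    # (diagonal here only for illustration; a real fit yields a full matrix).
    cov_elements = np.diag([1e-10, 1e-10, 1e-12, 1e-12, 1e-12, 1e-12])

    # Hypothetical Jacobian d(position)/d(elements) at the prediction epoch,
    # e.g. obtained by numerical differentiation of an ephemeris routine.
    J = np.random.default_rng(0).normal(size=(3, 6))

    # Law of error propagation in the linearized, Gaussian case:
    cov_position = J @ cov_elements @ J.T

    # Semi-axes of the 1-sigma positional uncertainty ellipsoid.
    eigvals, eigvecs = np.linalg.eigh(cov_position)
    semi_axes = np.sqrt(eigvals)
    print("1-sigma ellipsoid semi-axes:", semi_axes)
    ```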

  4. Error analysis of finite element method for Poisson–Nernst–Planck equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yuzhou; Sun, Pengtao; Zheng, Bin

    A priori error estimates of the finite element method for time-dependent Poisson-Nernst-Planck equations are studied in this work. We obtain optimal error estimates in the L∞(H1) and L2(H1) norms and suboptimal error estimates in the L∞(L2) norm with linear elements, and optimal error estimates in the L∞(L2) norm with quadratic or higher-order elements, for both semi- and fully discrete finite element approximations. Numerical experiments are also given to validate the theoretical results.
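
    The optimal orders quoted above can be checked numerically by comparing errors on successively refined meshes. A small sketch of the standard observed-order computation, with error values made up purely to illustrate second-order convergence:

    ```python
    import math

    # Hypothetical errors measured on meshes of size h, h/2, h/4, ...
    # (values invented solely to illustrate a second-order L2 rate).
    h = [0.1, 0.05, 0.025, 0.0125]
    err = [2.1e-3, 5.3e-4, 1.33e-4, 3.34e-5]

    for i in range(1, len(h)):
        # Observed convergence order between two consecutive refinements.
        rate = math.log(err[i - 1] / err[i]) / math.log(h[i - 1] / h[i])
        print(f"h = {h[i]:.4f}: observed order ≈ {rate:.2f}")
    ```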

  5. Evaluation of the impact of observations on blended sea surface winds in a two-dimensional variational scheme using degrees of freedom

    NASA Astrophysics Data System (ADS)

    Wang, Ting; Xiang, Jie; Fei, Jianfang; Wang, Yi; Liu, Chunxia; Li, Yuanxiang

    2017-12-01

    This paper presents an evaluation of the observational impacts on blended sea surface winds from a two-dimensional variational data assimilation (2D-Var) scheme. We begin by briefly introducing the analysis sensitivity with respect to observations in variational data assimilation systems and its relationship with the degrees of freedom for signal (DFS), and then the DFS concept is applied to the 2D-Var sea surface wind blending scheme. Two methods, a priori and a posteriori, are used to estimate the DFS of the zonal (u) and meridional (v) components of winds in the 2D-Var blending scheme. The a posteriori method can obtain almost the same results as the a priori method. Because only by-products of the blending scheme are used for the a posteriori method, the computation time is reduced significantly. The magnitude of the DFS is critically related to the observational and background error statistics. Changing the observational and background error variances can affect the DFS value. Because the observation error variances are assumed to be uniform, the observational influence at each observational location is related to the background error variance, and the observations located at the place where there are larger background error variances have larger influences. The average observational influence of u and v with respect to the analysis is about 40%, implying that the background influence with respect to the analysis is about 60%.
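
    As a rough, generic illustration of the a priori route to the DFS (a linear-Gaussian sketch with placeholder matrices, not the authors' 2D-Var implementation), the DFS can be computed as tr(HK), with K the gain built from assumed background and observation error covariances:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n, m = 50, 20                               # state size, number of observations
    B = np.diag(rng.uniform(0.5, 2.0, n))       # assumed background error covariance
    R = 0.8 * np.eye(m)                         # assumed observation error covariance
    H = rng.normal(size=(m, n)) / np.sqrt(n)    # assumed linear observation operator

    # Gain matrix and degrees of freedom for signal, DFS = trace(H K).
    S = H @ B @ H.T + R
    K = B @ H.T @ np.linalg.inv(S)
    dfs = np.trace(H @ K)
    print(f"total DFS = {dfs:.2f} out of {m} observations")

    # Per-observation influence (diagonal of H K) sums to the total DFS.
    influence = np.diag(H @ K)
    ```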

  6. A New Expanded Mixed Element Method for Convection-Dominated Sobolev Equation

    PubMed Central

    Wang, Jinfeng; Li, Hong; Fang, Zhichao

    2014-01-01

    We propose and analyze a new expanded mixed element method, whose gradient belongs to the simple square integrable space instead of the classical H(div; Ω) space of Chen's expanded mixed element method. We study the new expanded mixed element method for the convection-dominated Sobolev equation, prove the existence and uniqueness of the finite element solution, and introduce a new expanded mixed projection. We derive the optimal a priori error estimates in the L²-norm for the scalar unknown u and a priori error estimates in the (L²)²-norm for its gradient λ and its flux σ. Moreover, we obtain the optimal a priori error estimates in the H¹-norm for the scalar unknown u. Finally, we obtain some numerical results to illustrate the efficiency of the new method. PMID:24701153

  7. FAMA: Fast Automatic MOOG Analysis

    NASA Astrophysics Data System (ADS)

    Magrini, Laura; Randich, Sofia; Friel, Eileen; Spina, Lorenzo; Jacobson, Heather; Cantat-Gaudin, Tristan; Donati, Paolo; Baglioni, Roberto; Maiorca, Enrico; Bragaglia, Angela; Sordo, Rosanna; Vallenari, Antonella

    2014-02-01

    FAMA (Fast Automatic MOOG Analysis), written in Perl, computes the atmospheric parameters and abundances of a large number of stars using measurements of equivalent widths (EWs) automatically and independently of any subjective approach. Based on the widely-used MOOG code, it simultaneously searches for three equilibria: excitation equilibrium, ionization balance, and the relationship between log n(Fe I) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and errors due to the uncertainties in the stellar parameters. Convergence criteria are not fixed "a priori" but instead are based on the quality of the spectra.

  8. Assessment of Mars Atmospheric Temperature Retrievals from the Thermal Emission Spectrometer Radiances

    NASA Technical Reports Server (NTRS)

    Hoffman, Matthew J.; Eluszkiewicz, Janusz; Weisenstein, Deborah; Uymin, Gennady; Moncet, Jean-Luc

    2012-01-01

    Motivated by the needs of Mars data assimilation, particularly quantification of measurement errors and generation of averaging kernels, we have evaluated atmospheric temperature retrievals from Mars Global Surveyor (MGS) Thermal Emission Spectrometer (TES) radiances. Multiple sets of retrievals have been considered in this study: (1) retrievals available from the Planetary Data System (PDS), (2) retrievals based on variants of the retrieval algorithm used to generate the PDS retrievals, and (3) retrievals produced using the Mars 1-Dimensional Retrieval (M1R) algorithm based on the Optimal Spectral Sampling (OSS) forward model. The retrieved temperature profiles are compared to the MGS Radio Science (RS) temperature profiles. For the samples tested, the M1R temperature profiles can be made to agree within 2 K with the RS temperature profiles, but only after tuning the prior and error statistics. Use of a global prior that does not take into account the seasonal dependence leads to errors of up to 6 K. In polar samples, errors relative to the RS temperature profiles are even larger. In these samples, the PDS temperature profiles also exhibit a poor fit with RS temperatures. This fit is worse than reported in previous studies, indicating that the lack of fit is due to a bias correction to TES radiances implemented after 2004. To explain the differences between the PDS and M1R temperatures, the algorithms are compared directly, with the OSS forward model inserted into the PDS algorithm. Factors such as the filtering parameter, the use of linear versus nonlinear constrained inversion, and the choice of the forward model are found to contribute heavily to the differences in the temperature profiles retrieved in the polar regions, resulting in uncertainties of up to 6 K. Even outside the poles, changes in the a priori statistics result in different profile shapes which all fit the radiances within the specified error. The importance of the a priori statistics prevents reliable global retrievals based on a single a priori and strongly implies that a robust science analysis must instead rely on retrievals employing localized a priori information, for example from an ensemble-based data assimilation system such as the Local Ensemble Transform Kalman Filter (LETKF).

  9. Error analysis applied to several inversion techniques used for the retrieval of middle atmospheric constituents from limb-scanning MM-wave spectroscopic measurements

    NASA Technical Reports Server (NTRS)

    Puliafito, E.; Bevilacqua, R.; Olivero, J.; Degenhardt, W.

    1992-01-01

    The formal retrieval error analysis of Rodgers (1990) allows the quantitative determination of such retrieval properties as measurement error sensitivity, resolution, and inversion bias. This technique was applied to five numerical inversion techniques and two nonlinear iterative techniques used for the retrieval of middle atmospheric constituent concentrations from limb-scanning millimeter-wave spectroscopic measurements. It is found that the iterative methods have better vertical resolution, but are slightly more sensitive to measurement error than constrained matrix methods. The iterative methods converge to the exact solution, whereas two of the matrix methods under consideration have an explicit constraint, the sensitivity of the solution to the a priori profile. Tradeoffs of these retrieval characteristics are presented.
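
    For reference, a compact sketch of the Rodgers-style characterization quantities referred to above, written in generic optimal-estimation form with placeholder matrices rather than the paper's specific inversion schemes:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    n, m = 30, 40                              # retrieval levels, spectral channels
    K = rng.normal(size=(m, n)) / np.sqrt(n)   # assumed weighting-function matrix
    Se = 0.05 * np.eye(m)                      # assumed measurement error covariance
    Sa = np.eye(n)                             # assumed a priori covariance

    # Gain (contribution function) matrix and averaging kernel A = G K.
    G = np.linalg.solve(K.T @ np.linalg.inv(Se) @ K + np.linalg.inv(Sa),
                        K.T @ np.linalg.inv(Se))
    A = G @ K

    # Error characterization: measurement-noise term and a priori (smoothing) term.
    S_noise = G @ Se @ G.T
    S_smoothing = (np.eye(n) - A) @ Sa @ (np.eye(n) - A).T

    # A crude resolution proxy: inverse of the averaging-kernel diagonal
    # (levels per independent piece of information).
    resolution_proxy = 1.0 / np.clip(np.diag(A), 1e-6, None)
    ```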

  10. Neuromuscular dose-response studies: determining sample size.

    PubMed

    Kopman, A F; Lien, C A; Naguib, M

    2011-02-01

    Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10% to ±20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly larger sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
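
    A sketch of the kind of a priori power calculation the abstract describes, using statsmodels as one possible tool (not necessarily the authors'); with a COV of 25% and an allowable error of ±15%, the standardized effect size is 0.15/0.25 = 0.6, which reproduces the reported requirement of about 24 subjects at 80% power:

    ```python
    import math
    from statsmodels.stats.power import TTestPower

    cov = 0.25              # assumed coefficient of variation of the ED50
    allowable_error = 0.15  # ±15% allowable error in the mean ED50

    # Standardized effect size for a one-sample, two-tailed t-test.
    effect_size = allowable_error / cov

    n = TTestPower().solve_power(effect_size=effect_size,
                                 alpha=0.05, power=0.80,
                                 alternative='two-sided')
    print(f"required sample size ≈ {math.ceil(n)}")   # ≈ 24
    ```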

  11. Covariance analysis of the airborne laser ranging system

    NASA Technical Reports Server (NTRS)

    Englar, T. S., Jr.; Hammond, C. L.; Gibbs, B. P.

    1981-01-01

    The requirements and limitations of employing an airborne laser ranging system for detecting crustal shifts of the Earth within centimeters over a region of approximately 200 by 400 km are presented. The system consists of an aircraft which flies over a grid of ground deployed retroreflectors, making six passes over the grid at two different altitudes. The retroreflector baseline errors are assumed to result from measurement noise, a priori errors on the aircraft and retroreflector positions, tropospheric refraction, and sensor biases.

  12. Using meta-information of a posteriori Bayesian solutions of the hypocentre location task for improving accuracy of location error estimation

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2015-06-01

    The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analysed in many branches of physics, including seismology and oceanology, to name a few. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. Although estimating the location of earthquake foci is relatively simple, a quantitative estimation of the location accuracy is a really challenging task, even if the probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling and a priori uncertainties. In this paper, we address this task when the statistics of observational and/or modelling errors are unknown. This common situation requires the introduction of a priori constraints on the likelihood (misfit) function which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we propose an approach based on an analysis of Shannon's entropy calculated for the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries some information on the uncertainties of the solution found.
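
    A minimal sketch of the entropy meta-characteristic discussed above: given a gridded a posteriori location density (a toy 2-D Gaussian here, not the mine data), the Shannon entropy is the probability-weighted negative log-probability, which can be compared across events as a relative measure of solution uncertainty:

    ```python
    import numpy as np

    # Toy a posteriori PDF on a 2-D spatial grid (stand-in for a real location posterior).
    x = np.linspace(-500.0, 500.0, 201)          # metres
    y = np.linspace(-500.0, 500.0, 201)
    X, Y = np.meshgrid(x, y)
    pdf = np.exp(-0.5 * ((X / 80.0) ** 2 + (Y / 120.0) ** 2))

    # Normalize to a discrete probability distribution over grid cells.
    p = pdf / pdf.sum()

    # Shannon entropy of the a posteriori distribution (in nats).
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    print(f"posterior entropy ≈ {entropy:.2f} nats")
    ```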

  13. Orbit determination of the Next-Generation Beidou satellites with Intersatellite link measurements and a priori orbit constraints

    NASA Astrophysics Data System (ADS)

    Ren, Xia; Yang, Yuanxi; Zhu, Jun; Xu, Tianhe

    2017-11-01

    Intersatellite Link (ISL) technology helps to realize the automatic update of broadcast ephemeris and clock error parameters for a Global Navigation Satellite System (GNSS). ISL constitutes an important approach with which to both improve the observation geometry and extend the tracking coverage of China's Beidou Navigation Satellite System (BDS). However, ISL-only orbit determination might lead to constellation drift and rotation, and even to divergence of the orbit determination. Fortunately, predicted orbits with good precision can be used as a priori information with which to constrain the estimated satellite orbit parameters. Therefore, the precision of satellite autonomous orbit determination can be improved by consideration of a priori orbit information, and vice versa. However, the errors of rotation and translation in the a priori orbit will remain in the ultimate result. This paper proposes a constrained precise orbit determination (POD) method for a sub-constellation of the new Beidou satellite constellation with only a few ISLs. The observation model of dual one-way measurements eliminating satellite clock errors is presented, and the orbit determination precision is analyzed with different data processing backgrounds. The conclusions are as follows. (1) With ISLs, the estimated parameters are strongly correlated, especially the positions and velocities of satellites. (2) The performance of the determined BDS orbits is improved by constraints with more precise a priori orbits. The POD precision is better than 45 m with an a priori orbit constraint of 100 m precision (e.g., predicted orbits by the telemetry, tracking, and control system), and is better than 6 m with a precise a priori orbit constraint of 10 m precision (e.g., predicted orbits by the international GNSS Monitoring and Assessment System (iGMAS)). (3) The POD precision is improved by additional ISLs. Constrained by a priori iGMAS orbits, the POD precision with two, three, and four ISLs is better than 6, 3, and 2 m, respectively. (4) The in-plane and out-of-plane links make different contributions to the observation configuration and system observability. POD with a weak observation configuration (e.g., one in-plane link and one out-of-plane link) should be tightly constrained with a priori orbits.
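
    A schematic of how an a priori orbit constraint can enter the normal equations in a constrained POD, written as generic weighted least squares with placeholder matrices rather than the authors' software; the prior term pins down the translation and rotation freedoms that ISLs alone cannot resolve:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    n_par = 12                                   # e.g. positions + velocities of 2 satellites
    n_obs = 8                                    # a few dual one-way ISL ranges

    A = rng.normal(size=(n_obs, n_par))          # assumed linearized ISL design matrix
    l = rng.normal(scale=0.1, size=n_obs)        # toy observed-minus-computed residuals
    P_obs = np.eye(n_obs) / 0.1**2               # ISL weight matrix (0.1 m noise, assumed)

    x_prior = np.zeros(n_par)                    # zero corrections when linearizing about the predicted orbit
    P_prior = np.eye(n_par) / 100.0**2           # 100 m a priori orbit precision (assumed)

    # Constrained normal equations: (A' P A + P_prior) dx = A' P l + P_prior x_prior
    N = A.T @ P_obs @ A + P_prior
    b = A.T @ P_obs @ l + P_prior @ x_prior
    dx = np.linalg.solve(N, b)
    cov_dx = np.linalg.inv(N)                    # formal covariance of the constrained solution
    ```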

  14. The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes

    NASA Astrophysics Data System (ADS)

    Bhatnagar, S.; Cornwell, T. J.

    2017-11-01

    This paper is concerned with algorithms for calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by the imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time for a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth-Elevation mount antennas, are known a priori. Others, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements, cannot be measured a priori. Thus, in addition to algorithms to correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm to solve for the antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems and its extension to an antenna Shape SelfCal algorithm for real-time tracking and correction of pointing offsets and changes in antenna shape.

  15. The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatnagar, S.; Cornwell, T. J., E-mail: sbhatnag@nrao.edu

    This paper is concerned with algorithms for calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by the imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time for a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth–Elevation mount antennas, are known a priori. Others, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements, cannot be measured a priori. Thus, in addition to algorithms to correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm to solve for the antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems and its extension to an antenna Shape SelfCal algorithm for real-time tracking and correction of pointing offsets and changes in antenna shape.

  16. A priori stability results for PFC

    NASA Astrophysics Data System (ADS)

    Rossiter, J. A.

    2017-02-01

    Despite its popularity in industry and obvious efficacy, predictive functional control has few rigorous a priori stability results in the literature. In many cases, common sense and intuition with some trial and error are the main design tools. This paper seeks to tackle that gap by providing some analysis of the control law and showing what forms of stability assurances can be given and how these depend on the user choices of coincidence horizon and desired closed-loop pole. The conditions are separated into necessary, but not sufficient conditions for stability and, conversely, sufficient but not necessary conditions. Numerical examples demonstrate the efficacy of these conditions and the ease of use.

  17. Fully anisotropic goal-oriented mesh adaptation for 3D steady Euler equations

    NASA Astrophysics Data System (ADS)

    Loseille, A.; Dervieux, A.; Alauzet, F.

    2010-04-01

    This paper studies the coupling between anisotropic mesh adaptation and goal-oriented error estimates. The former is very well suited to the control of the interpolation error. It is generally interpreted as a local geometric error estimate. On the contrary, the latter is preferred when studying approximation errors for PDEs. It generally involves non-local error contributions. Consequently, a full and strong coupling between the two is hard to achieve due to this apparent incompatibility. This paper shows how to achieve this coupling in three steps. First, a new a priori error estimate is proved in a formal framework adapted to goal-oriented mesh adaptation for output functionals. This estimate is based on a careful analysis of the contributions of the implicit error and of the interpolation error. Second, the error estimate is applied to the set of steady compressible Euler equations which are solved by a stabilized Galerkin finite element discretization. A goal-oriented error estimation is derived. It involves the interpolation error of the Euler fluxes weighted by the gradient of the adjoint state associated with the observed functional. Third, rewritten in the continuous mesh framework, the previous estimate is minimized on the set of continuous meshes thanks to a calculus of variations. The optimal continuous mesh is then derived analytically. Thus, it can be used as a metric tensor field to drive the mesh adaptation. From a numerical point of view, this method is completely automatic, intrinsically anisotropic, and does not depend on any a priori choice of variables to perform the adaptation. 3D examples of steady flows around supersonic and transonic jets are presented to validate the current approach and to demonstrate its efficiency.

  18. A priori error estimates for an hp-version of the discontinuous Galerkin method for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.; Oden, J. Tinsley

    1993-01-01

    A priori error estimates are derived for hp-versions of the finite element method for discontinuous Galerkin approximations of a model class of linear, scalar, first-order hyperbolic conservation laws. These estimates are derived in a mesh dependent norm in which the coefficients depend upon both the local mesh size h(sub K) and a number p(sub K) which can be identified with the spectral order of the local approximations over each element.

  19. An hp-adaptivity and error estimation for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1995-01-01

    This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.

  20. Error analysis for the ground-based microwave ozone measurements during STOIC

    NASA Technical Reports Server (NTRS)

    Connor, Brian J.; Parrish, Alan; Tsou, Jung-Jung; McCormick, M. Patrick

    1995-01-01

    We present a formal error analysis and characterization of the microwave measurements made during the Stratospheric Ozone Intercomparison Campaign (STOIC). The most important error sources are found to be determination of the tropospheric opacity, the pressure-broadening coefficient of the observed line, and systematic variations in instrument response as a function of frequency ('baseline'). Net precision is 4-6% between 55 and 0.2 mbar, while accuracy is 6-10%. Resolution is 8-10 km below 3 mbar and increases to 17 km at 0.2 mbar. We show the 'blind' microwave measurements from STOIC and make limited comparisons to other measurements. We use the averaging kernels of the microwave measurement to eliminate resolution and a priori effects from a comparison to SAGE II. The STOIC results and comparisons are broadly consistent with the formal analysis.
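
    A small sketch of the averaging-kernel convolution used for such comparisons (generic Rodgers-style smoothing with toy profiles and a stand-in kernel matrix, not the STOIC data):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    n = 40                                        # retrieval grid levels
    # Stand-in averaging-kernel matrix (tridiagonal smoother for illustration).
    A = (np.eye(n) * 0.6
         + 0.2 * np.diag(np.ones(n - 1), 1)
         + 0.2 * np.diag(np.ones(n - 1), -1))
    x_a = np.ones(n) * 5.0                        # a priori ozone profile (arbitrary units)
    x_sage = 5.0 + rng.normal(scale=0.5, size=n)  # toy high-resolution comparison profile

    # Smooth the high-resolution profile with the microwave averaging kernels so that
    # resolution and a priori effects cancel in the comparison:
    x_sage_smoothed = x_a + A @ (x_sage - x_a)
    ```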

  1. Explicit error bounds for the α-quasi-periodic Helmholtz problem.

    PubMed

    Lord, Natacha H; Mulholland, Anthony J

    2013-10-01

    This paper considers a finite element approach to modeling electromagnetic waves in a periodic diffraction grating. In particular, an a priori error estimate associated with the α-quasi-periodic transformation is derived. This involves the solution of the associated Helmholtz problem being written as a product of e^(iαx) and an unknown function called the α-quasi-periodic solution. To begin with, the well-posedness of the continuous problem is examined using a variational formulation. The problem is then discretized, and a rigorous a priori error estimate, which guarantees the uniqueness of this approximate solution, is derived. In previous studies, the continuity of the Dirichlet-to-Neumann map has simply been assumed and the dependency of the regularity constant on the system parameters, such as the wavenumber, has not been shown. To address this deficiency, in this paper an explicit dependence on the wavenumber and the degree of the polynomial basis in the a priori error estimate is obtained. Since the finite element method is well known for dealing with any geometries, comparison of numerical results obtained using the α-quasi-periodic transformation with a lattice sum technique is then presented.

  2. Hierarchical learning induces two simultaneous, but separable, prediction errors in human basal ganglia.

    PubMed

    Diuk, Carlos; Tsai, Karin; Wallis, Jonathan; Botvinick, Matthew; Niv, Yael

    2013-03-27

    Studies suggest that dopaminergic neurons report a unitary, global reward prediction error signal. However, learning in complex real-life tasks, in particular tasks that show hierarchical structure, requires multiple prediction errors that may coincide in time. We used functional neuroimaging to measure prediction error signals in humans performing such a hierarchical task involving simultaneous, uncorrelated prediction errors. Analysis of signals in a priori anatomical regions of interest in the ventral striatum and the ventral tegmental area indeed evidenced two simultaneous, but separable, prediction error signals corresponding to the two levels of hierarchy in the task. This result suggests that suitably designed tasks may reveal a more intricate pattern of firing in dopaminergic neurons. Moreover, the need for downstream separation of these signals implies possible limitations on the number of different task levels that we can learn about simultaneously.

  3. A weakly-constrained data assimilation approach to address rainfall-runoff model structural inadequacy in streamflow prediction

    NASA Astrophysics Data System (ADS)

    Lee, Haksu; Seo, Dong-Jun; Noh, Seong Jin

    2016-11-01

    This paper presents a simple yet effective weakly-constrained (WC) data assimilation (DA) approach for hydrologic models that accounts for model structural inadequacies associated with rainfall-runoff transformation processes. Compared to strongly-constrained (SC) DA, WC DA adjusts the control variables less while producing a similarly accurate or more accurate analysis. Hence the adjusted model states are dynamically more consistent with those of the base model. The inadequacy of a rainfall-runoff model was modeled as an additive error to runoff components prior to routing and penalized in the objective function. Two example modeling applications, distributed and lumped, were carried out to investigate the effects of the WC DA approach on DA results. For distributed modeling, the distributed Sacramento Soil Moisture Accounting (SAC-SMA) model was applied to the TIFM7 Basin in Missouri, USA. For lumped modeling, the lumped SAC-SMA model was applied to nineteen basins in Texas. In both cases, the variational DA (VAR) technique was used to assimilate discharge data at the basin outlet. For distributed SAC-SMA, spatially homogeneous error modeling yielded updated states that are spatially much more similar to the a priori states than those from spatially heterogeneous error modeling, by up to ∼10 times as quantified by the Earth Mover's Distance (EMD). DA experiments using both lumped and distributed SAC-SMA modeling indicated that assimilating outlet flow using the WC approach generally produces a smaller mean absolute difference as well as higher correlation between the a priori and the updated states than the SC approach, while producing a similar or smaller root mean square error of streamflow analysis and prediction. Large differences were found in both lumped and distributed modeling cases between the updated and the a priori lower zone tension and primary free water contents for both WC and SC approaches, indicating possible model structural deficiency in describing low flows or evapotranspiration processes for the catchments studied. Also presented are the findings from this study and key issues relevant to WC DA approaches using hydrologic models.

  4. Data-driven region-of-interest selection without inflating Type I error rate.

    PubMed

    Brooks, Joseph L; Zoumpoulaki, Alexia; Bowman, Howard

    2017-01-01

    In ERP and other large multidimensional neuroscience data sets, researchers often select regions of interest (ROIs) for analysis. The method of ROI selection can critically affect the conclusions of a study by causing the researcher to miss effects in the data or to detect spurious effects. In practice, to avoid inflating Type I error rate (i.e., false positives), ROIs are often based on a priori hypotheses or independent information. However, this can be insensitive to experiment-specific variations in effect location (e.g., latency shifts) reducing power to detect effects. Data-driven ROI selection, in contrast, is nonindependent and uses the data under analysis to determine ROI positions. Therefore, it has potential to select ROIs based on experiment-specific information and increase power for detecting effects. However, data-driven methods have been criticized because they can substantially inflate Type I error rate. Here, we demonstrate, using simulations of simple ERP experiments, that data-driven ROI selection can indeed be more powerful than a priori hypotheses or independent information. Furthermore, we show that data-driven ROI selection using the aggregate grand average from trials (AGAT), despite being based on the data at hand, can be safely used for ROI selection under many circumstances. However, when there is a noise difference between conditions, using the AGAT can inflate Type I error and should be avoided. We identify critical assumptions for use of the AGAT and provide a basis for researchers to use, and reviewers to assess, data-driven methods of ROI localization in ERP and other studies. © 2016 Society for Psychophysiological Research.

  5. Long-term orbit prediction for China's Tiangong-1 spacecraft based on mean atmosphere model

    NASA Astrophysics Data System (ADS)

    Tang, Jingshi; Liu, Lin; Miao, Manqian

    Tiangong-1 is China's test module for a future space station. It has gone through three successful rendezvous and dockings with Shenzhou spacecraft from 2011 to 2013. For long-term management and maintenance, the orbit sometimes needs to be predicted for a long period of time. As Tiangong-1 works in a low-Earth orbit with an altitude of about 300-400 km, the error in the a priori atmosphere model contributes significantly to the rapid increase of the predicted orbit error. When the orbit is predicted for 10-20 days, the error in the a priori atmosphere model, if not properly corrected, could induce semi-major axis errors of up to a few kilometers and overall position errors of up to several thousand kilometers. In this work, we use a mean atmosphere model averaged from NRLMSIS00. The a priori reference mean density can be corrected during precise orbit determination (POD). For applications in long-term orbit prediction, the observations are first accumulated. With a sufficiently long period of observations, we are able to obtain a series of diurnal mean densities. This series bears the recent variation of the atmosphere density and can be analyzed for various periods. After being properly fitted, the mean density can be predicted and then applied in the orbit prediction. We show that the densities predicted with this approach can serve to increase the accuracy of the predicted orbit. In several 20-day prediction tests, most predicted orbits show semi-major axis errors better than 700 m and overall position errors better than 600 km.
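
    A toy sketch of the fit-and-extrapolate idea described above: fit the recovered diurnal mean-density series with a low-order trend plus one periodic term, then extrapolate it over the prediction arc. The 27-day period and all numerical values are placeholders, not the paper's results:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy series of diurnal mean densities recovered from POD (kg/m^3), 60 days long.
    t = np.arange(60.0)                                   # days
    rho = (3.0e-12 + 1.0e-14 * t                          # slow trend
           + 3.0e-13 * np.sin(2 * np.pi * t / 27.0)       # ~27-day solar-rotation term (assumed)
           + rng.normal(scale=5e-14, size=t.size))        # noise

    # Least-squares fit of trend + single harmonic with the assumed 27-day period.
    G = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t / 27.0),
                         np.cos(2 * np.pi * t / 27.0)])
    coef, *_ = np.linalg.lstsq(G, rho, rcond=None)

    # Predict the mean density over the next 20 days for use in the propagator.
    t_pred = np.arange(60.0, 80.0)
    G_pred = np.column_stack([np.ones_like(t_pred), t_pred,
                              np.sin(2 * np.pi * t_pred / 27.0),
                              np.cos(2 * np.pi * t_pred / 27.0)])
    rho_pred = G_pred @ coef
    ```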

  6. Hierarchical Learning Induces Two Simultaneous, But Separable, Prediction Errors in Human Basal Ganglia

    PubMed Central

    Tsai, Karin; Wallis, Jonathan; Botvinick, Matthew

    2013-01-01

    Studies suggest that dopaminergic neurons report a unitary, global reward prediction error signal. However, learning in complex real-life tasks, in particular tasks that show hierarchical structure, requires multiple prediction errors that may coincide in time. We used functional neuroimaging to measure prediction error signals in humans performing such a hierarchical task involving simultaneous, uncorrelated prediction errors. Analysis of signals in a priori anatomical regions of interest in the ventral striatum and the ventral tegmental area indeed evidenced two simultaneous, but separable, prediction error signals corresponding to the two levels of hierarchy in the task. This result suggests that suitably designed tasks may reveal a more intricate pattern of firing in dopaminergic neurons. Moreover, the need for downstream separation of these signals implies possible limitations on the number of different task levels that we can learn about simultaneously. PMID:23536092

  7. State space truncation with quantified errors for accurate solutions to discrete Chemical Master Equation

    PubMed Central

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-01-01

    The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of 1) the birth and death model, 2) the single gene expression model, 3) the genetic toggle switch model, and 4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks. PMID:27105653

  8. State Space Truncation with Quantified Errors for Accurate Solutions to Discrete Chemical Master Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Youfang; Terebus, Anna; Liang, Jie

    The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady-state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of (1) the birth and death model, (2) the single gene expression model, (3) the genetic toggle switch model, and (4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady-state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks.

  9. State Space Truncation with Quantified Errors for Accurate Solutions to Discrete Chemical Master Equation

    DOE PAGES

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-04-22

    The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady-state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of (1) the birth and death model, (2) the single gene expression model, (3) the genetic toggle switch model, and (4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady-state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks.

  10. A priori discretization error metrics for distributed hydrologic modeling applications

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar

    2016-12-01

    Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.

  11. The application of Skylab altimetry to marine geoid determination

    NASA Technical Reports Server (NTRS)

    Mourad, A. G.; Gopalapillai, S.; Kuhner, M.; Fubara, D. M. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. The major results can be divided broadly into two groups. One group is concerned with the effects of errors inherent in the various input data, such as the orbit ephemeris, the a priori geoid, etc. The other consists of the results of the actual analysis of the data from the Skylab EREP passes 4, 6, 7, and 9. Results from the first group were obtained from the analysis of some preliminary data from EREP pass 9 mode 5. The second group of results consists of a set of recovered bias terms for each of the submodes of observations and a set of nine altimetry geoid profiles corresponding to the various passes and modes. Along with each of these profiles, the a priori geoid, gravity anomaly, and bathymetric data profiles are also presented for easy comparison.

  12. An a priori solar radiation pressure model for the QZSS Michibiki satellite

    NASA Astrophysics Data System (ADS)

    Zhao, Qile; Chen, Guo; Guo, Jing; Liu, Jingnan; Liu, Xianglin

    2018-02-01

    It has been noted that the satellite laser ranging (SLR) residuals of the Quasi-Zenith Satellite System (QZSS) Michibiki satellite orbits show very marked dependence on the elevation angle of the Sun above the orbital plane (i.e., the β angle). It is well recognized that this systematic error is caused by mismodeling of the solar radiation pressure (SRP). Although the error can be reduced by the updated ECOM SRP model, the orbit error is still very large when the satellite switches to orbit-normal (ON) orientation. In this study, an a priori SRP model was established for the QZSS Michibiki satellite to enhance the ECOM model. This model is expressed in ECOM's D, Y, and B axes (DYB) using seven parameters for the yaw-steering (YS) mode, and three additional parameters are used to compensate for the remaining modeling deficiencies, particularly the perturbations in the Y axis, based on a redefined DYB for the ON mode. With the proposed a priori model, QZSS Michibiki's precise orbits over 21 months were determined. SLR validation indicated that the systematic β-angle-dependent error was reduced when the satellite was in the YS mode, and better than an 8-cm root mean square (RMS) was achieved. More importantly, the orbit quality was also improved significantly when the satellite was in the ON mode. Relative to the ECOM and adjustable box-wing models, the proposed SRP model showed the best performance in the ON mode, and the RMS of the SLR residuals was better than 15 cm, a twofold improvement over ECOM without the a priori model, though still two times worse than in the YS mode.

  13. Maximizing the probability of satisfying the clinical goals in radiation therapy treatment planning under setup uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fredriksson, Albin, E-mail: albin.fredriksson@raysearchlabs.com; Hårdemark, Björn; Forsgren, Anders

    2015-07-15

    Purpose: This paper introduces a method that maximizes the probability of satisfying the clinical goals in intensity-modulated radiation therapy treatments subject to setup uncertainty. Methods: The authors perform robust optimization in which the clinical goals are constrained to be satisfied whenever the setup error falls within an uncertainty set. The shape of the uncertainty set is included as a variable in the optimization. The goal of the optimization is to modify the shape of the uncertainty set in order to maximize the probability that the setup error will fall within the modified set. Because the constraints enforce the clinical goals to be satisfied under all setup errors within the uncertainty set, this is equivalent to maximizing the probability of satisfying the clinical goals. This type of robust optimization is studied with respect to photon and proton therapy applied to a prostate case and compared to robust optimization using an a priori defined uncertainty set. Results: Slight reductions of the uncertainty sets resulted in plans that satisfied a larger number of clinical goals than optimization with respect to a priori defined uncertainty sets, both within the reduced uncertainty sets and within the a priori, nonreduced, uncertainty sets. For the prostate case, the plans taking reduced uncertainty sets into account satisfied 1.4 (photons) and 1.5 (protons) times as many clinical goals over the scenarios as the method taking a priori uncertainty sets into account. Conclusions: Reducing the uncertainty sets enabled the optimization to find better solutions with respect to the errors within the reduced as well as the nonreduced uncertainty sets and thereby achieve higher probability of satisfying the clinical goals. This shows that asking for a little less in the optimization sometimes leads to better overall plan quality.

  14. Adaptive optics system performance approximations for atmospheric turbulence correction

    NASA Astrophysics Data System (ADS)

    Tyson, Robert K.

    1990-10-01

    Analysis of adaptive optics system behavior often can be reduced to a few approximations and scaling laws. For atmospheric turbulence correction, the deformable mirror (DM) fitting error is most often used to determine a priori the interactuator spacing and the total number of correction zones required. This paper examines the mirror fitting error in terms of its most commonly used exponential form. The explicit constant in the error term is dependent on deformable mirror influence function shape and actuator geometry. The method of least squares fitting of discrete influence functions to the turbulent wavefront is compared to the linear spatial filtering approximation of system performance. It is found that the spatial filtering method overestimates the correctability of the adaptive optics system by a small amount. By evaluating the fitting error for a number of DM configurations, actuator geometries, and influence functions, the fitting error constants verify some earlier investigations.
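
    A minimal sketch of the fitting-error scaling law discussed above; the coefficient a_F depends on the influence-function shape and actuator geometry, and the value 0.28 used here is only a commonly quoted placeholder for continuous-facesheet mirrors, not this paper's result:

    ```python
    import numpy as np

    def dm_fitting_error(d, r0, a_F=0.28):
        """Residual wavefront variance (rad^2) from the classic scaling law
        sigma^2 = a_F * (d / r0)**(5/3).  a_F depends on the influence-function
        shape and actuator geometry; 0.28 is an assumed placeholder value."""
        return a_F * (d / r0) ** (5.0 / 3.0)

    # Example: 10 cm actuator pitch, r0 = 15 cm at the sensing wavelength (assumed numbers).
    sigma2 = dm_fitting_error(d=0.10, r0=0.15)
    strehl = np.exp(-sigma2)          # extended Marechal approximation
    print(f"fitting variance = {sigma2:.3f} rad^2, Strehl ≈ {strehl:.2f}")
    ```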

  15. Mesh refinement in finite element analysis by minimization of the stiffness matrix trace

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.

    1989-01-01

    Most finite element packages provide means to generate meshes automatically. However, the user is usually confronted with the problem of not knowing whether the mesh generated is appropriate for the problem at hand. Since the accuracy of the finite element results is mesh dependent, mesh selection forms a very important step in the analysis. Indeed, in accurate analyses, meshes need to be refined or rezoned until the solution converges to a value so that the error is below a predetermined tolerance. A-posteriori methods use error indicators, developed using interpolation and approximation theory, for mesh refinement. Some use other criteria, such as strain energy density variation and stress contours, to obtain near-optimal meshes. Although these methods are adaptive, they are expensive. Alternatively, the a-priori methods available until now use geometrical parameters, for example, element aspect ratio. Therefore, they are not adaptive by nature. Here, an adaptive a-priori method is developed. The criterion is that minimization of the trace of the stiffness matrix with respect to the nodal coordinates leads to a minimization of the potential energy and, as a consequence, provides a good starting mesh. In a few examples the method is shown to provide the optimal mesh. The method is also shown to be relatively simple and amenable to the development of computer algorithms. When the procedure is used in conjunction with a-posteriori methods of grid refinement, it is shown that fewer refinement iterations and fewer degrees of freedom are required for convergence than when the procedure is not used. The mesh obtained is shown to have a uniform distribution of stiffness among the nodes and elements which, as a consequence, leads to uniform error distribution. Thus the mesh obtained meets the optimality criterion of uniform error distribution.

  16. A general model for attitude determination error analysis

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Seidewitz, ED; Nicholson, Mark

    1988-01-01

    An overview is given of a comprehensive approach to filter and dynamics modeling for attitude determination error analysis. The models presented include both batch least-squares and sequential attitude estimation processes for both spin-stabilized and three-axis stabilized spacecraft. The discussion includes a brief description of a dynamics model of strapdown gyros, but it does not cover other sensor models. Model parameters can be chosen to be solve-for parameters, which are assumed to be estimated as part of the determination process, or consider parameters, which are assumed to have errors but not to be estimated. The only restriction on this choice is that the time evolution of the consider parameters must not depend on any of the solve-for parameters. The result of an error analysis is an indication of the contributions of the various error sources to the uncertainties in the determination of the spacecraft solve-for parameters. The model presented gives the uncertainty due to errors in the a priori estimates of the solve-for parameters, the uncertainty due to measurement noise, the uncertainty due to dynamic noise (also known as process noise or plant noise), the uncertainty due to the consider parameters, and the overall uncertainty due to all these sources of error.

  17. Geodetic positioning using a global positioning system of satellites

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1980-01-01

    Geodetic positioning using range, integrated Doppler, and interferometric observations from a constellation of twenty-four Global Positioning System satellites is analyzed. A summary of the proposals for geodetic positioning and baseline determination is given which includes a description of measurement techniques and comments on rank deficiency and error sources. An analysis-of-variance comparison of range, Doppler, and interferometric time delay to determine their relative geometric strength for baseline determination is included. An analytic examination of the effect of a priori constraints on positioning using simultaneous observations from two stations is presented. Dynamic point positioning and baseline determination using range and Doppler are examined in detail. Models for the error sources influencing dynamic positioning are developed. Included is a discussion of atomic clock stability, and range and Doppler observation error statistics based on random correlated atomic clock error are derived.

  18. An Empirical Point Error Model for TLS Derived Point Clouds

    NASA Astrophysics Data System (ADS)

    Ozendi, Mustafa; Akca, Devrim; Topan, Hüseyin

    2016-06-01

    The random error pattern of point clouds has a significant effect on the quality of the final 3D model. The magnitude and distribution of random errors should be modelled numerically. This work aims at developing such an anisotropic point error model, specifically for terrestrial laser scanner (TLS) acquired 3D point clouds. A priori precisions of the basic TLS observations, which are the range, horizontal angle and vertical angle, are determined by predefined and practical measurement configurations performed in real-world test environments. The a priori precisions of the horizontal (σθ) and vertical (σα) angles are constant for each point of a data set, and can be determined directly through repetitive scanning of the same environment. In our practical tests, the precisions of the horizontal and vertical angles were found to be σθ = ±36.6 cc and σα = ±17.8 cc, respectively. On the other hand, the a priori precision of the range observation (σρ) is assumed to be a function of the range, the incidence angle of the incoming laser ray, and the reflectivity of the object surface. Hence, it is a variable and is computed for each point individually using an empirically developed formula, varying as σρ = ±2-12 mm for a FARO Focus X330 laser scanner. This procedure was followed by the computation of the error ellipsoid of each point using the law of variance-covariance propagation. The direction and size of the error ellipsoids were computed by the principal components transformation. The usability and feasibility of the model were investigated in real-world scenarios. These investigations validated the suitability and practicality of the proposed method.
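
    The error-ellipsoid step described above can be sketched as follows: assumed a priori precisions of range and angles are propagated through the spherical-to-Cartesian mapping, and the principal axes of the resulting covariance give the ellipsoid. The angle convention, the cc-to-radian conversion, and the example point are assumptions, and the range precision is a single assumed value rather than the record's empirical formula.

```python
# Variance-covariance propagation from (range, horizontal angle, vertical angle)
# to Cartesian coordinates, followed by a principal-components transformation.
import numpy as np

CC_TO_RAD = (np.pi / 200.0) * 1e-4          # assumed: 1 cc = 1e-4 gon, in radians
sigma_theta = 36.6 * CC_TO_RAD              # horizontal-angle precision
sigma_alpha = 17.8 * CC_TO_RAD              # vertical-angle precision
sigma_rho   = 0.005                         # range precision [m], assumed mid-range value

rho, theta, alpha = 20.0, np.radians(40.0), np.radians(15.0)   # an example point

# Jacobian of (x, y, z) = (rho*cos(a)*cos(t), rho*cos(a)*sin(t), rho*sin(a))
ca, sa, ct, st = np.cos(alpha), np.sin(alpha), np.cos(theta), np.sin(theta)
J = np.array([[ca * ct, -rho * ca * st, -rho * sa * ct],
              [ca * st,  rho * ca * ct, -rho * sa * st],
              [sa,       0.0,            rho * ca     ]])

C_obs = np.diag([sigma_rho**2, sigma_theta**2, sigma_alpha**2])
C_xyz = J @ C_obs @ J.T                     # law of variance-covariance propagation

eigval, eigvec = np.linalg.eigh(C_xyz)      # principal-components transformation
print("ellipsoid semi-axes [m]:", np.sqrt(eigval))
print("axis directions (columns):\n", eigvec)
```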

  19. A New Linearized Crank-Nicolson Mixed Element Scheme for the Extended Fisher-Kolmogorov Equation

    PubMed Central

    Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei

    2013-01-01

    We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation into two second-order equations, then treat one second-order equation with the finite element method and handle the other using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space, taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution of the semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we obtain the optimal a priori error estimates in the L² and H¹ norms for both the scalar unknown u and the diffusion term w = −Δu, and a priori error estimates in the (L²)²-norm for its gradient χ = ∇u, for both the semidiscrete and fully discrete schemes. PMID:23864831

  20. A new linearized Crank-Nicolson mixed element scheme for the extended Fisher-Kolmogorov equation.

    PubMed

    Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei; Liu, Yang

    2013-01-01

    We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation into two second-order equations, then treat one second-order equation with the finite element method and handle the other using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space, taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution of the semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we obtain the optimal a priori error estimates in the L² and H¹ norms for both the scalar unknown u and the diffusion term w = -Δu, and a priori error estimates in the (L²)²-norm for its gradient χ = ∇u, for both the semidiscrete and fully discrete schemes.

  1. Geopositioning with a quadcopter: Extracted feature locations and predicted accuracy without a priori sensor attitude information

    NASA Astrophysics Data System (ADS)

    Dolloff, John; Hottel, Bryant; Edwards, David; Theiss, Henry; Braun, Aaron

    2017-05-01

    This paper presents an overview of the Full Motion Video-Geopositioning Test Bed (FMV-GTB) developed to investigate algorithm performance and issues related to the registration of motion imagery and the subsequent extraction of feature locations along with predicted accuracy. A case study is included corresponding to a video taken from a quadcopter. Registration of the corresponding video frames is performed without the benefit of a priori sensor attitude (pointing) information. In particular, tie points are automatically measured between adjacent frames using standard optical flow matching techniques from computer vision; an a priori estimate of sensor attitude is then computed based on the GPS sensor positions supplied in the video metadata and a photogrammetric, search-based structure-from-motion algorithm; and a Weighted Least Squares adjustment of all a priori metadata across the frames is then performed. Extraction of absolute 3D feature locations, including their predicted accuracy based on the principles of rigorous error propagation, is then performed using a subset of the registered frames. Results are compared to known locations (check points) over a test site. Throughout this entire process, no external control information (e.g. surveyed points) is used other than for evaluation of solution errors and corresponding accuracy.

  2. Optimal full motion video registration with rigorous error propagation

    NASA Astrophysics Data System (ADS)

    Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn

    2014-06-01

    Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.

  3. Phobos mass estimations from MEX and Viking 1 data: influence of different noise sources and estimation strategies

    NASA Astrophysics Data System (ADS)

    Kudryashova, M.; Rosenblatt, P.; Marty, J.-C.

    2015-08-01

    The mass of Phobos is an important parameter which, together with the second-order gravity field coefficients and the libration amplitude, constrains the internal structure and nature of the moon, and thus it needs to be known with high precision. Nevertheless, the Phobos mass (GM, more precisely) estimated by different authors, based on diverse data sets and methods, varies by more than its 1-sigma error. The most complete lists of GM values are presented in the works of R. Jacobson (2010) and M. Paetzold et al. (2014) and include estimations ranging from (5.39 ± 0.03)×10^5 (Smith et al., 1995) to (8.5 ± 0.7)×10^5 m^3/s^2 (Williams et al., 1988). Furthermore, even a comparison of estimations coming from the same estimation procedure applied to consecutive flybys of the same spacecraft (s/c) shows large variations in GM. This behavior is most pronounced in the GM estimations stemming from the Viking 1 flybys in February 1977 (and, with smaller amplitude, from the MEX flybys), and in this work we attempt to trace its origin. The errors of Phobos GM estimations depend on the precision of the model (e.g. the accuracy of the a priori Phobos ephemeris and its a priori GM value) as well as on the quality of the radio-tracking measurements (noise, coverage, flyby distance). In the present work we test the impact of the error sources mentioned above by means of simulations. We also consider the effect of uncertainties in the a priori Phobos positions on the GM estimations from real observations. The estimation strategy (i.e., how the real observations are split into data arcs, whether they stem from close approaches of Phobos by the spacecraft or from analysis of the s/c orbit evolution around Mars) also has an impact on the Phobos GM estimation.

  4. The error and bias of supplementing a short, arid climate, rainfall record with regional vs. global frequency analysis

    NASA Astrophysics Data System (ADS)

    Endreny, Theodore A.; Pashiardis, Stelios

    2007-02-01

    Robust and accurate estimates of rainfall frequencies are difficult to make with short, arid-climate rainfall records; in this study, new regional and global methods were used to supplement such a constrained 15-34 yr record in Cyprus. The impact of supplementing rainfall frequency analysis with the regional and global approaches was measured with relative bias and root mean square error (RMSE) values. The analysis considered 42 stations with 8 time intervals (5-360 min) in four regions delineated by proximity to the sea and elevation. Regional statistical algorithms found that the sites passed discordancy tests of coefficient of variation, skewness and kurtosis, while heterogeneity tests revealed the regions were homogeneous to mildly heterogeneous. Rainfall depths were simulated in the regional analysis method 500 times, and then goodness-of-fit tests identified the best candidate distribution as the general extreme value (GEV) Type II. In the regional analysis, the method of L-moments was used to estimate the location, shape, and scale parameters. In the global-based analysis, the distribution was a priori prescribed as GEV Type II, the shape parameter was a priori set to 0.15, and a time interval term was constructed to use one set of parameters for all time intervals. Relative RMSE values were approximately equal at 10% for the regional and global methods when regions were compared, but when time intervals were compared the global method RMSE had a parabolic-shaped time interval trend. Relative bias values were also approximately equal for both methods when regions were compared, but again a parabolic-shaped time interval trend was found for the global method. The global method relative RMSE and bias trended with time interval, which may be caused by fitting a single scale value for all time intervals.
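
    A hedged sketch of the method of L-moments mentioned above for GEV parameter estimation, based on sample probability-weighted moments and Hosking's closed-form approximation; the synthetic data and the particular approximation are illustrative assumptions, not the authors' implementation.

```python
# Estimate GEV location/scale/shape from the first three sample L-moments.
import numpy as np
from scipy.special import gamma

def sample_l_moments(x):
    """First two sample L-moments and the L-skewness, via probability-weighted moments."""
    x = np.sort(x)
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2                      # lambda1, lambda2, tau3

def gev_from_l_moments(l1, l2, t3):
    """GEV (location, scale, shape) via Hosking's approximation for the shape parameter."""
    c = 2.0 / (3.0 + t3) - np.log(2) / np.log(3)
    k = 7.8590 * c + 2.9554 * c**2
    scale = l2 * k / ((1 - 2.0**(-k)) * gamma(1 + k))
    loc = l1 - scale * (1 - gamma(1 + k)) / k
    return loc, scale, k

rng = np.random.default_rng(0)
annual_max_mm = rng.gumbel(loc=30.0, scale=8.0, size=25)       # a short, synthetic record
print(gev_from_l_moments(*sample_l_moments(annual_max_mm)))    # shape k near 0 for Gumbel-like data
```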

  5. Mass univariate analysis of event-related brain potentials/fields I: a critical tutorial review.

    PubMed

    Groppe, David M; Urbach, Thomas P; Kutas, Marta

    2011-12-01

    Event-related potentials (ERPs) and magnetic fields (ERFs) are typically analyzed via ANOVAs on mean activity in a priori windows. Advances in computing power and statistics have produced an alternative, mass univariate analyses consisting of thousands of statistical tests and powerful corrections for multiple comparisons. Such analyses are most useful when one has little a priori knowledge of effect locations or latencies, and for delineating effect boundaries. Mass univariate analyses complement and, at times, obviate traditional analyses. Here we review this approach as applied to ERP/ERF data and four methods for multiple comparison correction: strong control of the familywise error rate (FWER) via permutation tests, weak control of FWER via cluster-based permutation tests, false discovery rate control, and control of the generalized FWER. We end with recommendations for their use and introduce free MATLAB software for their implementation. Copyright © 2011 Society for Psychophysiological Research.
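
    As a rough illustration of the strong FWER control via permutation tests reviewed above, the sketch below runs a tmax sign-flip permutation test on a synthetic subjects-by-points array; the dimensions, the injected effect, and the permutation count are invented.

```python
# tmax permutation test: compare each observed t statistic to the permutation
# distribution of the maximum absolute t across all channels/timepoints.
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_points = 16, 200                       # subjects x (channels*timepoints)
diff = rng.normal(0.0, 1.0, (n_subj, n_points))  # subject-wise difference waves
diff[:, :10] += 1.2                              # a genuine effect at the first 10 points

def one_sample_t(x):
    return x.mean(axis=0) / (x.std(axis=0, ddof=1) / np.sqrt(x.shape[0]))

t_obs = one_sample_t(diff)

n_perm = 2000
max_t = np.empty(n_perm)
for p in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=(n_subj, 1))          # random sign flips
    max_t[p] = np.max(np.abs(one_sample_t(signs * diff)))

# FWER-corrected p-value: fraction of permutations whose max |t| exceeds the observed |t|
p_corrected = (np.abs(t_obs)[None, :] <= max_t[:, None]).mean(axis=0)
print("points significant at corrected alpha=0.05:", np.where(p_corrected < 0.05)[0])
```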

  6. Numerical Analysis of an H¹-Galerkin Mixed Finite Element Method for Time Fractional Telegraph Equation

    PubMed Central

    Wang, Jinfeng; Zhao, Meng; Zhang, Min; Liu, Yang; Li, Hong

    2014-01-01

    We discuss and analyze an H¹-Galerkin mixed finite element (H¹-GMFE) method to find the numerical solution of the time fractional telegraph equation. We introduce an auxiliary variable to reduce the original equation into lower-order coupled equations and then formulate an H¹-GMFE scheme with two important variables. We discretize the Caputo time fractional derivatives using finite difference methods and approximate the spatial direction by applying the H¹-GMFE method. Based on the discussion of the theoretical error analysis in the L²-norm for the scalar unknown and its gradient in the one-dimensional case, we obtain the optimal order of convergence in the space-time direction. Further, we also derive the optimal error results for the scalar unknown in the H¹-norm. Moreover, we derive and analyze the stability of the H¹-GMFE scheme and give a priori error estimates in two- and three-dimensional cases. In order to verify our theoretical analysis, we give some results of numerical calculations carried out in MATLAB. PMID:25184148

  7. Considerations about expected a posteriori estimation in adaptive testing: adaptive a priori, adaptive correction for bias, and adaptive integration interval.

    PubMed

    Raiche, Gilles; Blais, Jean-Guy

    2009-01-01

    In a computerized adaptive test, we would like to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Unfortunately, decreasing the number of items is accompanied by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. The authors suggest that it is possible to reduce the bias, and even the standard error of the estimate, by applying to each provisional estimation one or a combination of the following strategies: adaptive correction for bias proposed by Bock and Mislevy (1982), adaptive a priori estimate, and adaptive integration interval.
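
    A small sketch of expected a posteriori (EAP) proficiency estimation with an adaptive a priori distribution, one of the strategies suggested above: the normal prior is recentred on the previous provisional estimate after each item. The 2PL item parameters and responses are invented for illustration.

```python
# EAP estimation on a quadrature grid with an adaptively recentred prior.
import numpy as np

theta_grid = np.linspace(-4, 4, 161)                        # integration grid

def p_correct(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))           # 2PL item response function

def eap(responses, a, b, prior_mean=0.0, prior_sd=1.0):
    prior = np.exp(-0.5 * ((theta_grid - prior_mean) / prior_sd) ** 2)
    like = np.ones_like(theta_grid)
    for u, ai, bi in zip(responses, a, b):
        p = p_correct(theta_grid, ai, bi)
        like *= p**u * (1 - p)**(1 - u)
    post = like * prior
    return np.sum(theta_grid * post) / np.sum(post)

a = np.array([1.2, 0.9, 1.5, 1.1]); b = np.array([-0.5, 0.3, 0.8, 1.2])
responses = [1, 1, 0, 1]
est = 0.0
for k in range(1, len(responses) + 1):                      # provisional estimates
    est = eap(responses[:k], a[:k], b[:k], prior_mean=est)  # adaptive a priori: recentre the prior
    print(f"after item {k}: theta_hat = {est:.3f}")
```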

  8. An error analysis of least-squares finite element method of velocity-pressure-vorticity formulation for Stokes problem

    NASA Technical Reports Server (NTRS)

    Chang, Ching L.; Jiang, Bo-Nan

    1990-01-01

    A theoretical proof of the optimal rate of convergence for the least-squares method is developed for the Stokes problem based on the velocity-pressure-vorticity formulation. The 2D Stokes problem is analyzed to define the product space and its inner product, and the a priori estimates are derived to give the finite-element approximation. The least-squares method is found to converge at the optimal rate for equal-order interpolation.

  9. Error analysis for a spaceborne laser ranging system

    NASA Technical Reports Server (NTRS)

    Pavlis, E. C.

    1979-01-01

    The dependence (or independence) of baseline accuracies, obtained from a typical mission of a spaceborne ranging system, on several factors is investigated. The emphasis is placed on a priori station information, but factors such as the elevation cut-off angle, the geometry of the network, the mean orbital height, and to a limited extent geopotential modeling are also examined. The results are obtained through simulations, but some theoretical justification is also given. Guidelines for freeing the results from these dependencies are suggested for most of the factors.

  10. The Ohio State 1991 geopotential and sea surface topography harmonic coefficient models

    NASA Technical Reports Server (NTRS)

    Rapp, Richard H.; Wang, Yan Ming; Pavlis, Nikolaos K.

    1991-01-01

    The computation is described of a geopotential model to deg 360, a sea surface topography model to deg 10/15, and adjusted Geosat orbits for the first year of the exact repeat mission (ERM). This study started from the GEM-T2 potential coefficient model and its error covariance matrix, and from Geosat orbits (for 22 ERMs) computed by Haines et al. using the GEM-T2 model. The first step followed the general procedures which use a radial orbit error theory originally developed by English. The Geosat data were processed to find corrections to the a priori geopotential model, corrections to a radial orbit error model for 76 Geosat arcs, and coefficients of a harmonic representation of the sea surface topography. The second stage of the analysis combined the GEM-T2 coefficients with 30 deg gravity data derived from surface gravity data and anomalies obtained from altimeter data. The analysis has shown how a high degree spherical harmonic model can be determined by combining the best aspects of two different analysis techniques. The error analysis that led to the accuracy estimates for all the coefficients to deg 360 is described. Significant work is needed to improve the modeling effort.

  11. Quaternion normalization in spacecraft attitude determination

    NASA Technical Reports Server (NTRS)

    Deutschmann, J.; Markley, F. L.; Bar-Itzhack, Itzhack Y.

    1993-01-01

    Attitude determination of spacecraft usually utilizes vector measurements such as Sun, center of Earth, star, and magnetic field direction to update the quaternion which determines the spacecraft orientation with respect to some reference coordinates in three-dimensional space. These measurements are usually processed by an extended Kalman filter (EKF) which yields an estimate of the attitude quaternion. Two EKF versions for quaternion estimation have been presented in the literature; namely, the multiplicative EKF (MEKF) and the additive EKF (AEKF). In the multiplicative EKF, it is assumed that the error between the correct quaternion and its a priori estimate is, by itself, a quaternion that represents the rotation necessary to bring the attitude which corresponds to the a priori estimate of the quaternion into coincidence with the correct attitude. The EKF basically estimates this quotient quaternion, and then the updated quaternion estimate is obtained by the product of the a priori quaternion estimate and the estimate of the difference quaternion. In the additive EKF, it is assumed that the error between the a priori quaternion estimate and the correct one is an algebraic difference between two four-tuple elements, and thus the EKF is set to estimate this difference. The updated quaternion is then computed by adding the estimate of the difference to the a priori quaternion estimate. If the quaternion estimate converges to the correct quaternion, then, naturally, the quaternion estimate has unity norm. This fact was utilized in the past to obtain superior filter performance by applying normalization to the filter measurement update of the quaternion. It was observed for the AEKF that when the attitude changed very slowly between measurements, normalization merely resulted in a faster convergence; however, when the attitude changed considerably between measurements, without filter tuning or normalization, the quaternion estimate diverged. However, when the quaternion estimate was normalized, the estimate converged faster and to a lower error than with tuning only. In last year's symposium we presented three new AEKF normalization techniques and compared them to the brute-force method presented in the literature. The present paper addresses the issue of normalization of the MEKF and examines several MEKF normalization techniques.
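
    A minimal sketch of the additive (AEKF) quaternion measurement update followed by normalization, the brute-force technique referred to above; the Kalman gain, measurement residual, and quaternion convention are placeholder assumptions.

```python
# Additive quaternion measurement update followed by unit-norm enforcement.
import numpy as np

def aekf_quaternion_update(q_prior, K, residual):
    """Additive update of the 4-component quaternion, then normalization."""
    dq = K @ residual                  # additive correction to the four quaternion components
    q_post = q_prior + dq              # AEKF: algebraic difference, not a quaternion product
    return q_post / np.linalg.norm(q_post)   # normalization step discussed in the record

q_prior = np.array([0.0, 0.0, 0.0, 1.0])       # a priori estimate ((x, y, z, w) convention assumed)
K = 0.1 * np.ones((4, 3))                      # placeholder 4x3 Kalman gain
residual = np.array([0.02, -0.01, 0.005])      # placeholder vector-measurement residual
print(aekf_quaternion_update(q_prior, K, residual))
```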

  12. Real-Time Parameter Estimation in the Frequency Domain

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2000-01-01

    A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented aboard an aircraft in real time.
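
    A rough sketch of equation-error parameter estimation in the frequency domain with a recursively updated Fourier transform, in the spirit of the method summarized above; the first-order model, the analysis frequencies, and the signals are invented, and finite-record end effects are ignored, so the estimates are only approximate.

```python
# Equation error in the frequency domain: jw*X = a*X + b*U, with X and U
# accumulated sample by sample (a recursively updated Fourier transform).
import numpy as np

dt, n = 0.02, 2000
t = np.arange(n) * dt
a_true, b_true = -2.0, 3.0
u = np.sin(0.5 * t) + 0.5 * np.sin(1.3 * t)                  # input signal
x = np.zeros(n)
for k in range(n - 1):                                       # simulate xdot = a*x + b*u (Euler)
    x[k + 1] = x[k] + dt * (a_true * x[k] + b_true * u[k])

freqs = 2 * np.pi * np.linspace(0.05, 1.0, 20)               # analysis band [rad/s]
X = np.zeros(len(freqs), dtype=complex)
U = np.zeros(len(freqs), dtype=complex)
for k in range(n):                                           # running DFT, updated each sample
    X += x[k] * np.exp(-1j * freqs * k * dt) * dt
    U += u[k] * np.exp(-1j * freqs * k * dt) * dt

A = np.column_stack([X, U])                                  # regressors in the frequency domain
theta, *_ = np.linalg.lstsq(A, 1j * freqs * X, rcond=None)   # least-squares equation-error fit
print("estimated a, b:", theta.real)                         # roughly [-2, 3]
```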

  13. Monte Carlo Simulations: Number of Iterations and Accuracy

    DTIC Science & Technology

    2015-07-01

    [DTIC search-result snippet; only fragments of the report are shown.] The report compares the WM and the WSM for estimating the number of Monte Carlo (MC) iterations and recommends that the WM be used for a priori estimates of the required number of iterations, the alternative being more complex. Recovered section headings include "A Priori Estimate of Number of MC Iterations," "MC Result Accuracy," and "Using Percentage Error of the Mean to Estimate Number of MC Iterations."

  14. A theory for predicting composite laminate warpage resulting from fabrication

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1975-01-01

    Linear laminate theory is used in conjunction with the moment-curvature relationship to derive equations for predicting end deflections due to warpage without solving the coupled fourth-order partial differential equations of the plate. Using these equations, it is found that a 1 deg error in the orientation angle of one ply is sufficient to produce warpage end deflection equal to two laminate thicknesses in a 10 inch by 10 inch laminate made from 8-ply Mod-I/epoxy. From a sensitivity analysis on the governing parameters, it is found that a 3 deg fiber migration or a void volume ratio of three percent in some plies is sufficient to produce laminate warpage corner deflection equal to several laminate thicknesses. Tabular and graphical data are presented which can be used to identify possible errors contributing to laminate warpage and/or to obtain an a priori assessment when unavoidable errors during fabrication are anticipated.

  15. The importance of using dynamical a-priori profiles for infrared O3 retrievals : the case of IASI.

    NASA Astrophysics Data System (ADS)

    Peiro, H.; Emili, E.; Le Flochmoen, E.; Barret, B.; Cariolle, D.

    2016-12-01

    Tropospheric ozone (O3) is a trace gas involved in the global greenhouse effect. To quantify its contribution to global warming, an accurate determination of O3 profiles is necessary. The IASI (Infrared Atmospheric Sounding Interferometer) instrument, on board the MetOp-A satellite, is the most sensitive sensor for tropospheric O3 with a high spatio-temporal coverage. Satellite retrievals are often based on the inversion of the measured radiance data with a variational approach. This requires an a priori profile and the corresponding error covariance matrix (COV) as ancillary input. Previous studies have shown biases (~20%) in IASI retrievals of the tropospheric column in the Southern Hemisphere (SH). A possible source of error is the a priori profile. This study aims to (i) build dynamical a priori O3 profiles with a Chemistry Transport Model (CTM) and (ii) integrate them into IASI retrievals and demonstrate their benefit. Global O3 profiles are retrieved from IASI radiances with the SOFRID (Software for a fast Retrieval of IASI Data) algorithm, which is based on the RTTOV (Radiative Transfer for TOVS) code and a 1D-Var retrieval scheme. Until now, a constant a priori profile, named here CLIM PR, was based on a combination of MOZAIC, WOUDC-SHADOZ and Aura/MLS data. The global CTM MOCAGE (Modèle de Chimie Atmosphérique à Grande Echelle) has been used with a linear O3 chemistry scheme to assimilate Microwave Limb Sounder (MLS) data. A model resolution of 2°x2°, with 60 sigma-hybrid vertical levels covering the stratosphere, has been used. MLS level 2 products have been assimilated with a 4D-Var variational algorithm to constrain stratospheric O3 and obtain high-quality a priori O3 profiles above the tropopause. From this reanalysis, we built the a priori profiles at a 6 h frequency on a coarser 10°x20° resolution grid, named MOCAGE+MLS PR. Statistical comparisons between retrievals and ozonesondes have shown better correlations and smaller biases for MOCAGE+MLS PR than for CLIM PR. We found biases of 6% instead of 33% in the SH, showing that the a priori plays an important role in infrared O3 retrievals. Improvements of IASI retrievals have been obtained in the free troposphere and lower stratosphere by inserting dynamical a priori profiles from a CTM into SOFRID. A possible further advancement would be to insert a dynamical COV into SOFRID.

  16. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis. Revision 1.12

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

    1997-01-01

    We proposed a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and is required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has two important applications, which we term the assessment application and the objective analysis application. For the assessment application, our approach results in new objective measures of forecast skill which are more in line with subjective measures of forecast skill and which are useful in validating models and diagnosing their shortcomings. With regard to the objective analysis application, meteorological analysis schemes balance forecast error and observational error to obtain an optimal analysis. Presently, representations of the error covariance matrix used to measure the forecast error are severely limited. For the objective analysis application our approach will improve analyses by providing a more realistic measure of the forecast error. We expect, a priori, that our approach should greatly improve the utility of remotely sensed data which have relatively high horizontal resolution, but which are indirectly related to the conventional atmospheric variables. In this project, we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP) and 500 hPa geopotential height fields for forecasts of the short and medium range. Since the forecasts are generated by the GEOS (Goddard Earth Observing System) data assimilation system with and without ERS 1 scatterometer data, these preliminary studies serve several purposes. They (1) provide a testbed for the use of the distortion representation of forecast errors, (2) act as one means of validating the GEOS data assimilation system and (3) help to describe the impact of the ERS 1 scatterometer data.

  17. Using CO2:CO Correlations to Improve Inverse Analyses of Carbon Fluxes

    NASA Technical Reports Server (NTRS)

    Palmer, Paul I.; Suntharalingam, Parvadha; Jones, Dylan B. A.; Jacob, Daniel J.; Streets, David G.; Fu, Qingyan; Vay, Stephanie A.; Sachse, Glen W.

    2006-01-01

    Observed correlations between atmospheric concentrations of CO2 and CO represent potentially powerful information for improving CO2 surface flux estimates through coupled CO2-CO inverse analyses. We explore the value of these correlations in improving estimates of regional CO2 fluxes in east Asia by using aircraft observations of CO2 and CO from the TRACE-P campaign over the NW Pacific in March 2001. Our inverse model uses regional CO2 and CO surface fluxes as the state vector, separating biospheric and combustion contributions to CO2. CO2-CO error correlation coefficients are included in the inversion as off-diagonal entries in the a priori and observation error covariance matrices. We derive error correlations in a priori combustion source estimates of CO2 and CO by propagating error estimates of fuel consumption rates and emission factors. However, we find that these correlations are weak because CO source uncertainties are mostly determined by emission factors. Observed correlations between atmospheric CO2 and CO concentrations imply corresponding error correlations in the chemical transport model used as the forward model for the inversion. These error correlations in excess of 0.7, as derived from the TRACE-P data, enable a coupled CO2-CO inversion to achieve significant improvement over a CO2-only inversion for quantifying regional fluxes of CO2.
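
    A toy illustration of how a CO2:CO error correlation enters a coupled inversion as off-diagonal entries of the a priori covariance; the Jacobian, flux values, and covariances are invented, and the solution is the standard analytical Bayesian update rather than the authors' full transport-model inversion.

```python
# Coupled two-species flux inversion with a correlated a priori covariance.
import numpy as np

K = np.array([[1.0, 0.0],          # Jacobian mapping [CO2 flux, CO flux] to observations
              [0.0, 1.0],
              [0.6, 0.4]])
xa = np.array([100.0, 50.0])       # a priori combustion CO2 and CO fluxes (arbitrary units)
sig_a = np.array([30.0, 20.0])
rho_a = 0.7                        # a priori error correlation coefficient
Sa = np.array([[sig_a[0]**2, rho_a * sig_a[0] * sig_a[1]],
               [rho_a * sig_a[0] * sig_a[1], sig_a[1]**2]])
Se = np.diag([5.0**2, 4.0**2, 3.0**2])     # observation/transport error covariance

y = np.array([120.0, 55.0, 95.0])          # synthetic observations
G = np.linalg.solve(K.T @ np.linalg.solve(Se, K) + np.linalg.inv(Sa),
                    K.T @ np.linalg.solve(Se, np.eye(3)))   # gain matrix
x_hat = xa + G @ (y - K @ xa)                               # posterior flux estimate
print("posterior flux estimates:", x_hat)
```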

  18. Retrieval of ice cloud properties using an optimal estimation algorithm and MODIS infrared observations: 1. Forward model, error analysis, and information content

    NASA Astrophysics Data System (ADS)

    Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping

    2016-05-01

    An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (τ), effective radius (reff), and cloud top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary data sets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available.
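
    A compact sketch of one optimal-estimation (Gauss-Newton) step and of the degrees-of-freedom-for-signal diagnostic used in information content analyses like the one above; the Jacobian, covariances, and state values are invented placeholders rather than the actual MODIS forward model.

```python
# One OE iteration plus the averaging-kernel trace (degrees of freedom for signal).
import numpy as np

x_a = np.array([1.5, 25.0, 10.0])            # a priori: optical thickness, r_eff [um], h [km]
S_a = np.diag([1.0**2, 10.0**2, 2.0**2])     # a priori covariance
S_e = np.diag([0.3**2] * 4)                  # combined measurement + forward-model error, 4 IR bands

K = np.array([[0.8, 0.02, 0.10],             # placeholder Jacobian dF/dx for four IR bands
              [0.6, 0.05, 0.15],
              [0.4, 0.08, 0.30],
              [0.2, 0.01, 0.50]])
y = np.array([1.2, 0.9, 0.7, 0.6])           # synthetic observations
F_xa = np.array([1.0, 0.8, 0.5, 0.3])        # forward model evaluated at x_a

S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
x_hat = x_a + S_hat @ K.T @ np.linalg.inv(S_e) @ (y - F_xa)   # one Gauss-Newton update
A = S_hat @ K.T @ np.linalg.inv(S_e) @ K                      # averaging kernel
print("retrieved state:", x_hat)
print("degrees of freedom for signal:", np.trace(A))          # information content measure
```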

  19. Retrieval of ice cloud properties using an optimal estimation algorithm and MODIS infrared observations. Part I: Forward model, error analysis, and information content.

    PubMed

    Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping

    2016-05-27

    An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness ( τ ), effective radius ( r eff ), and cloud-top height ( h ). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary datasets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that, for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available.

  20. Description and Sensitivity Analysis of the SOLSE/LORE-2 and SAGE III Limb Scattering Ozone Retrieval Algorithms

    NASA Technical Reports Server (NTRS)

    Loughman, R.; Flittner, D.; Herman, B.; Bhartia, P.; Hilsenrath, E.; McPeters, R.; Rault, D.

    2002-01-01

    The SOLSE (Shuttle Ozone Limb Sounding Experiment) and LORE (Limb Ozone Retrieval Experiment) instruments are scheduled for reflight on Space Shuttle flight STS-107 in July 2002. In addition, the SAGE III (Stratospheric Aerosol and Gas Experiment) instrument will begin to make limb scattering measurements during Spring 2002. The optimal estimation technique is used to analyze visible and ultraviolet limb scattered radiances and produce a retrieved ozone profile. The algorithm used to analyze data from the initial flight of the SOLSE/LORE instruments (on Space Shuttle flight STS-87 in November 1997) forms the basis of the current algorithms, with expansion to take advantage of the increased multispectral information provided by SOLSE/LORE-2 and SAGE III. We also present detailed sensitivity analysis for these ozone retrieval algorithms. The primary source of ozone retrieval error is tangent height misregistration (i.e., instrument pointing error), which is relevant throughout the altitude range of interest, and can produce retrieval errors on the order of 10-20 percent due to a tangent height registration error of 0.5 km at the tangent point. Other significant sources of error are sensitivity to stratospheric aerosol and sensitivity to error in the a priori ozone estimate (given assumed instrument signal-to-noise = 200). These can produce errors up to 10 percent for the ozone retrieval at altitudes less than 20 km, but produce little error above that level.

  1. Liver segmentation from CT images using a sparse priori statistical shape model (SP-SSM).

    PubMed

    Wang, Xuehu; Zheng, Yongchang; Gan, Lan; Wang, Xuan; Sang, Xinting; Kong, Xiangfeng; Zhao, Jie

    2017-01-01

    This study proposes a new liver segmentation method based on a sparse a priori statistical shape model (SP-SSM). First, mark points are selected in the liver a priori model and the original image. Then, the a priori shape and its mark points are used to obtain a dictionary for the liver boundary information. Second, the sparse coefficient is calculated based on the correspondence between mark points in the original image and those in the a priori model, and then the sparse statistical model is established by combining the sparse coefficients and the dictionary. Finally, the intensity energy and boundary energy models are built based on the intensity information and the specific boundary information of the original image. Then, the sparse matching constraint model is established based on the sparse coding theory. These models jointly drive the iterative deformation of the sparse statistical model to approximate and accurately extract the liver boundaries. This method can solve the problems of deformation model initialization and a priori method accuracy using the sparse dictionary. The SP-SSM can achieve a mean overlap error of 4.8% and a mean volume difference of 1.8%, whereas the average symmetric surface distance and the root mean square symmetric surface distance can reach 0.8 mm and 1.4 mm, respectively.

  2. GPS (Global Positioning System) Error Budgets, Accuracy and Applications Considerations for Test and Training Ranges.

    DTIC Science & Technology

    1982-12-01

    [OCR fragments of figure captions from the report.] Relationship of PDOP and HDOP with a priori altitude uncertainty in 3-dimensional navigation, and relationship of HDOP with a priori altitude uncertainty in 2-dimensional navigation, each for several satellite azimuth/elevation (AZEL) configurations.

  3. Utilization of electrical impedance imaging for estimation of in-vivo tissue resistivities

    NASA Astrophysics Data System (ADS)

    Eyuboglu, B. Murat; Pilkington, Theo C.

    1993-08-01

    In order to determine the in vivo resistivity of tissues in the thorax, the possibility of combining electrical impedance imaging (EII) techniques with (1) anatomical data extracted from high resolution images, (2) a priori knowledge of tissue resistivities, and (3) a priori noise information was assessed in this study. A Least Square Error Estimator (LSEE) and a statistically constrained Minimum Mean Square Error Estimator (MiMSEE) were implemented to estimate regional electrical resistivities from potential measurements made on the body surface. A two-dimensional boundary element model of the human thorax, which consists of four different conductivity regions (the skeletal muscle, the heart, the right lung, and the left lung), was adopted to simulate the measured EII torso potentials. The calculated potentials were then perturbed by simulated instrumentation noise. The signal information used to form the statistical constraint for the MiMSEE was obtained from a priori knowledge of the physiological range of tissue resistivities. The noise constraint was determined from a priori knowledge of errors due to linearization of the forward problem and to the instrumentation noise.
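
    A toy comparison of the two linear estimators discussed above: an unconstrained least-squares estimate versus a minimum-mean-square-error estimate constrained by a priori signal and noise statistics. The linearized sensitivity matrix, resistivity values, and statistics are invented, not taken from the record.

```python
# LSEE (unconstrained least squares) versus a statistically constrained linear MMSE estimate.
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(32, 4))                 # linearized surface-potential sensitivity matrix
rho_true = np.array([2.0, 5.0, 12.0, 11.0])  # "true" regional resistivities (arbitrary units)
noise_sd = 0.5
y = A @ rho_true + rng.normal(0.0, noise_sd, 32)

# LSEE: no statistical constraint
rho_ls = np.linalg.lstsq(A, y, rcond=None)[0]

# MiMSEE-style estimate: constrain with a priori mean/covariance and noise covariance
rho_0 = np.array([3.0, 4.0, 10.0, 10.0])     # a priori physiological values
P = np.diag([2.0, 2.0, 4.0, 4.0]) ** 2       # a priori signal covariance
R = (noise_sd**2) * np.eye(32)               # a priori noise covariance
lhs = A.T @ np.linalg.solve(R, A) + np.linalg.inv(P)
rhs = A.T @ np.linalg.solve(R, y) + np.linalg.solve(P, rho_0)
rho_mmse = np.linalg.solve(lhs, rhs)

print("LSEE  :", rho_ls)
print("MiMSEE:", rho_mmse)
```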

  4. Quantitative fluorescence tomography using a trimodality system: in vivo validation

    PubMed Central

    Lin, Yuting; Barber, William C.; Iwanczyk, Jan S.; Roeck, Werner W.; Nalcioglu, Orhan; Gulsen, Gultekin

    2010-01-01

    A fully integrated trimodality fluorescence, diffuse optical, and x-ray computed tomography (FT/DOT/XCT) system for small animal imaging is reported in this work. The main purpose of this system is to obtain quantitatively accurate fluorescence concentration images using a multimodality approach. XCT offers anatomical information, while DOT provides the necessary background optical property map to improve FT image accuracy. The quantitative accuracy of this trimodality system is demonstrated in vivo. In particular, we show that a 2-mm-diam fluorescence inclusion located 8 mm deep in a nude mouse can only be localized when functional a priori information from DOT is available. However, the error in the recovered fluorophore concentration is nearly 87%. On the other hand, the fluorophore concentration can be accurately recovered within 2% error when both DOT functional and XCT structural a priori information are utilized together to guide and constrain the FT reconstruction algorithm. PMID:20799770

  5. An Improved Empirical Harmonic Model of the Celestial Intermediate Pole Offsets from a Global VLBI Solution

    NASA Astrophysics Data System (ADS)

    Belda, Santiago; Heinkelmann, Robert; Ferrándiz, José M.; Karbon, Maria; Nilsson, Tobias; Schuh, Harald

    2017-10-01

    Very Long Baseline Interferometry (VLBI) is the only space geodetic technique capable of measuring all the Earth orientation parameters (EOP) accurately and simultaneously. Modeling the Earth's rotational motion in space within the stringent consistency goals of the Global Geodetic Observing System (GGOS) makes VLBI observations essential for constraining the rotation theories. However, the inaccuracy of early VLBI data and the outdated products could cause non-compliance with these goals. In this paper, we perform a global VLBI analysis of sessions with different processing settings to determine a new set of empirical corrections to the precession offsets and rates, and to the amplitudes of a wide set of terms included in the IAU 2006/2000A precession-nutation theory. We discuss the results in terms of consistency, systematic errors, and physics of the Earth. We find that the largest improvements w.r.t. the values from IAU 2006/2000A precession-nutation theory are associated with the longest periods (e.g., 18.6-yr nutation). A statistical analysis of the residuals shows that the provided corrections attain an error reduction at the level of 15 μas. Additionally, including a Free Core Nutation (FCN) model into a priori Celestial Pole Offsets (CPOs) provides the lowest Weighted Root Mean Square (WRMS) of residuals. We show that the CPO estimates are quite insensitive to TRF choice, but slightly sensitive to the a priori EOP and the inclusion of different VLBI sessions. Finally, the remaining residuals reveal two apparent retrograde signals with periods of nearly 2069 and 1034 days.

  6. Derivation and Error Analysis of the Earth Magnetic Anomaly Grid at 2 arc min Resolution Version 3 (EMAG2v3)

    NASA Astrophysics Data System (ADS)

    Meyer, B.; Chulliat, A.; Saltus, R.

    2017-12-01

    The Earth Magnetic Anomaly Grid at 2 arc min resolution version 3, EMAG2v3, combines marine and airborne trackline observations, satellite data, and magnetic observatory data to map the location, intensity, and extent of lithospheric magnetic anomalies. EMAG2v3 includes over 50 million new data points added to NCEI's Geophysical Database System (GEODAS) in recent years. The new grid relies only on observed data, and does not utilize a priori geologic structure or ocean-age information. Comparing this grid to other global magnetic anomaly compilations (e.g., EMAG2 and WDMAM), we can see that the inclusion of a priori ocean-age patterns forces an artificial linear pattern to the grid; the data-only approach allows for greater complexity in representing the evolution along oceanic spreading ridges and continental margins. EMAG2v3 also makes use of the satellite-derived lithospheric field model MF7 in order to accurately represent anomalies with wavelengths greater than 300 km and to create smooth grid merging boundaries. The heterogeneous distribution of errors in the observations used in compiling the EMAG2v3 was explored, and is reported in the final distributed grid. This grid is delivered at both 4 km continuous altitude above WGS84, as well as at sea level for all oceanic and coastal regions.

  7. A prototype upper-atmospheric data assimilation scheme based on optimal interpolation: 2. Numerical experiments

    NASA Astrophysics Data System (ADS)

    Akmaev, R. a.

    1999-04-01

    In Part 1 of this work (Akmaev, 1999), an overview of the theory of optimal interpolation (OI) (Gandin, 1963) and related techniques of data assimilation based on linear optimal estimation (Liebelt, 1967; Catlin, 1989; Mendel, 1995) is presented. The approach implies the use in data analysis of additional statistical information in the form of statistical moments, e.g., the mean and covariance (correlation). The a priori statistical characteristics, if available, make it possible to constrain expected errors and obtain estimates of the true state, optimal in some sense, from a set of observations in a given domain in space and/or time. The primary objective of OI is to provide estimates away from the observations, i.e., to fill in data voids in the domain under consideration. Additionally, OI performs smoothing, suppressing the noise, i.e., the spectral components that are presumably not present in the true signal. Usually, the criterion of optimality is minimum variance of the expected errors, and the whole approach may be considered constrained least squares or least squares with a priori information. Obviously, data assimilation techniques capable of incorporating any additional information are potentially superior to techniques that have no access to such information, for example, the conventional least squares (e.g., Liebelt, 1967; Weisberg, 1985; Press et al., 1992; Mendel, 1995).
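
    A minimal sketch of the OI analysis step outlined above on a one-dimensional grid: observations are spread into data voids by weighting them with the background (a priori) and observation error covariances. The grid, the Gaussian correlation model, and the numbers are assumptions for illustration.

```python
# Optimal interpolation: x_a = x_b + B H^T (H B H^T + R)^-1 (y - H x_b)
import numpy as np

n = 20
grid = np.arange(n, dtype=float)
x_b = np.zeros(n)                                        # background (a priori) state

# background error covariance with an assumed Gaussian correlation length of 3 grid units
B = 1.0**2 * np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 3.0) ** 2)

obs_idx = [4, 10, 15]                                    # observation locations
H = np.zeros((len(obs_idx), n)); H[np.arange(len(obs_idx)), obs_idx] = 1.0
R = 0.2**2 * np.eye(len(obs_idx))                        # observation error covariance
y = np.array([1.0, -0.5, 0.8])                           # observed increments

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)             # OI gain
x_a = x_b + K @ (y - H @ x_b)                            # analysis: observations fill data voids
print(np.round(x_a, 2))
```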

  8. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  9. Comparative interpretations of renormalization inversion technique for reconstructing unknown emissions from measured atmospheric concentrations

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory

    2017-04-01

    The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for the reconstruction of unknown tracer emissions from their measured concentrations. The comparative interpretations are presented in relation to the other inversion techniques based on principle of regularization, Bayesian, minimum norm, maximum entropy on mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a similar manner to other techniques, with a practical choice of a priori information and error statistics, while eliminating the need of additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer a suitable deterministic choice to the background error and measurement covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made by using the real measurements from a continuous point release conducted in Fusion Field Trials, Dugway Proving Ground, Utah.

  10. Adaptive filtering in biological signal processing.

    PubMed

    Iyer, V K; Ploysongsang, Y; Ramamoorthy, P A

    1990-01-01

    The high dependence of conventional optimal filtering methods on a priori knowledge of the signal and noise statistics renders them ineffective in dealing with signals whose statistics cannot be predetermined accurately. Adaptive filtering methods offer a better alternative, since the a priori knowledge of statistics is less critical, real-time processing is possible, and the computations are less expensive for this approach. Adaptive filtering methods compute the filter coefficients "on-line", converging to the optimal values in the least-mean-square (LMS) error sense. Adaptive filtering is therefore apt for dealing with the "unknown" statistics situation and has been applied extensively in areas like communication, speech, radar, sonar, seismology, and biological signal processing and analysis for channel equalization, interference and echo canceling, line enhancement, signal detection, system identification, spectral analysis, beamforming, modeling, control, etc. In this review article, adaptive filtering in the context of biological signals is reviewed. An intuitive approach to the underlying theory of adaptive filters and its applicability are presented. Applications of the principles in biological signal processing are discussed in a manner that brings out the key ideas involved. Current and potential future directions in adaptive biological signal processing are also discussed.
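
    A short sketch of an LMS adaptive noise canceller of the kind described above: a reference noise channel is filtered adaptively and subtracted from the primary (signal plus noise) channel, with coefficients converging on-line without a priori noise statistics. The filter length, step size, and synthetic signals are assumptions.

```python
# LMS adaptive noise cancellation with a reference noise input.
import numpy as np

rng = np.random.default_rng(7)
n, order, mu = 5000, 8, 0.01
t = np.arange(n)
signal = np.sin(2 * np.pi * 0.01 * t)                    # e.g. a biological signal of interest
ref_noise = rng.normal(size=n)                           # reference noise pickup
channel = np.array([0.6, -0.3, 0.1])                     # unknown path from reference to primary
primary = signal + np.convolve(ref_noise, channel, mode="full")[:n]

w = np.zeros(order)                                      # adaptive filter coefficients
out = np.zeros(n)
for k in range(order, n):
    x = ref_noise[k - order:k][::-1]                     # most recent reference samples
    e = primary[k] - w @ x                               # error = cleaned signal estimate
    w += 2 * mu * e * x                                  # LMS coefficient update
    out[k] = e

print("residual noise power before/after:",
      np.var(primary - signal), np.var(out[order:] - signal[order:]))
```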

  11. Retrieval of ice cloud properties using an optimal estimation algorithm and MODIS infrared observations. Part I: Forward model, error analysis, and information content

    PubMed Central

    Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping

    2018-01-01

    An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (τ), effective radius (reff), and cloud-top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary datasets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that, for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available. PMID:29707470

  12. Retrieval of Ice Cloud Properties Using an Optimal Estimation Algorithm and MODIS Infrared Observations. Part I: Forward Model, Error Analysis, and Information Content

    NASA Technical Reports Server (NTRS)

    Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping

    2016-01-01

    An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (tau), effective radius (r(sub eff)), and cloud-top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary datasets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that, for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available.

  13. Retrieval of Ice Cloud Properties Using an Optimal Estimation Algorithm and MODIS Infrared Observations. Part I: Forward Model, Error Analysis, and Information Content

    NASA Technical Reports Server (NTRS)

    Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping

    2016-01-01

    An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (tau), effective radius (r(sub eff)), and cloud top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary data sets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available.

  14. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
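
    A small example of the a priori power analysis the review calls for, using statsmodels to compute the per-group sample size needed to detect a medium standardized effect with an independent-samples t test; the effect size, alpha, and target power are example choices.

```python
# A priori power analysis: required sample size per group for a two-sample t test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # Cohen's d (a "medium" effect)
                                    alpha=0.05,
                                    power=0.80,
                                    alternative="two-sided")
print(f"required sample size per group: {n_per_group:.1f}")   # about 64 per group
```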

  15. The gravity field model IGGT_R1 based on the second invariant of the GOCE gravitational gradient tensor

    NASA Astrophysics Data System (ADS)

    Lu, Biao; Luo, Zhicai; Zhong, Bo; Zhou, Hao; Flechtner, Frank; Förste, Christoph; Barthelmes, Franz; Zhou, Rui

    2017-11-01

    Based on tensor theory, three invariants of the gravitational gradient tensor (IGGT) are independent of the gradiometer reference frame (GRF). Compared to traditional methods for calculating gravity field models from Gravity field and steady-state Ocean Circulation Explorer (GOCE) data, which are affected by errors in the attitude indicator, using IGGT with a least squares method avoids the problem of inaccurate rotation matrices. The IGGT approach as studied in this paper is a quadratic function of the gravity field model's spherical harmonic coefficients. The linearized observation equations for the least squares method are obtained using a Taylor expansion, and the weighting equation is derived using the law of error propagation. We also investigate the linearization errors using existing gravity field models and find that this error can be ignored since the a-priori model EIGEN-5C used here is sufficiently accurate. One problem with this approach is that it needs all six independent gravitational gradients (GGs), but the components V_{xy} and V_{yz} of GOCE are less accurate because of the non-sensitive axes of the GOCE gradiometer. Therefore, synthetic GGs derived from the a-priori gravity field model EIGEN-5C are used for these two less accurate gradient components. Another problem is that the GOCE GGs are measured in a band-limited manner. Therefore, a forward and backward finite impulse response band-pass filter is applied to the data, which also eliminates the phase shift introduced by filtering. The spherical cap regularization approach (SCRA) and the Kaula rule are then applied to solve the polar gap problem caused by GOCE's inclination of 96.7°. With the techniques described above, a degree/order 240 gravity field model called IGGT_R1 is computed. Since the synthetic components of V_{xy} and V_{yz} are not band-pass filtered, the signals outside the measurement bandwidth are replaced by the a-priori model EIGEN-5C. Therefore, this model is practically a combined gravity field model which contains GOCE GG signals and long-wavelength signals from the a-priori model EIGEN-5C. Finally, IGGT_R1's accuracy is evaluated by comparison with other gravity field models in terms of difference degree amplitudes, the geostrophic velocity in the Agulhas current area, and gravity anomaly differences, as well as by comparison to GNSS/leveling data.
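
    For readers unfamiliar with the invariants of the gravitational gradient tensor, the sketch below (illustrative numbers only, not GOCE data) evaluates the three classical invariants of a symmetric, trace-free 3 x 3 tensor; the second invariant is the frame-independent quantity on which IGGT_R1 is based.

        import numpy as np

        # Illustrative symmetric gravitational gradient tensor in the GRF (arbitrary units); not real GOCE data.
        V = np.array([[ 1.0,  0.2,  0.1],
                      [ 0.2, -0.4,  0.3],
                      [ 0.1,  0.3, -0.6]])

        I1 = np.trace(V)                                   # first invariant (zero for a harmonic field)
        I2 = 0.5 * (np.trace(V) ** 2 - np.trace(V @ V))    # second invariant, independent of the GRF orientation
        I3 = np.linalg.det(V)                              # third invariant

        print(I1, I2, I3)   # identical values are obtained after any rotation R: R @ V @ R.T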

  16. The limits of direct satellite tracking with the Global Positioning System (GPS)

    NASA Technical Reports Server (NTRS)

    Bertiger, W. I.; Yunck, T. P.

    1988-01-01

    Recent advances in high precision differential Global Positioning System-based satellite tracking can be applied to the more conventional direct tracking of low earth satellites. To properly evaluate the limiting accuracy of direct GPS-based tracking, it is necessary to account for the correlations between the a-priori errors in GPS states, Y-bias, and solar pressure parameters. These can be obtained by careful analysis of the GPS orbit determination process. The analysis indicates that sub-meter accuracy can be readily achieved for a user above 1000 km altitude, even when the user solution is obtained with data taken 12 hours after the data used in the GPS orbit solutions.

  17. Thickness distribution of a cooling pyroclastic flow deposit on Augustine Volcano, Alaska: Optimization using InSAR, FEMs, and an adaptive mesh algorithm

    USGS Publications Warehouse

    Masterlark, Timothy; Lu, Zhong; Rykhus, Russell P.

    2006-01-01

    Interferometric synthetic aperture radar (InSAR) imagery documents the consistent subsidence, during the interval 1992–1999, of a pyroclastic flow deposit (PFD) emplaced during the 1986 eruption of Augustine Volcano, Alaska. We construct finite element models (FEMs) that simulate thermoelastic contraction of the PFD to account for the observed subsidence. Three-dimensional problem domains of the FEMs include a thermoelastic PFD embedded in an elastic substrate. The thickness of the PFD is initially determined from the difference between post- and pre-eruption digital elevation models (DEMs). The initial excess temperature of the PFD at the time of deposition, 640 °C, is estimated from FEM predictions and an InSAR image via standard least-squares inverse methods. Although the FEM predicts the major features of the observed transient deformation, systematic prediction errors (RMSE = 2.2 cm) are most likely associated with errors in the a priori PFD thickness distribution estimated from the DEM differences. We combine an InSAR image, FEMs, and an adaptive mesh algorithm to iteratively optimize the geometry of the PFD with respect to a minimized misfit between the predicted thermoelastic deformation and observed deformation. Prediction errors from an FEM, which includes an optimized PFD geometry and the initial excess PFD temperature estimated from the least-squares analysis, are sub-millimeter (RMSE = 0.3 mm). The average thickness (9.3 m), maximum thickness (126 m), and volume (2.1 × 10^7 m^3) of the PFD, estimated using the adaptive mesh algorithm, are about twice as large as the respective estimations for the a priori PFD geometry. Sensitivity analyses suggest unrealistic PFD thickness distributions are required for initial excess PFD temperatures outside of the range 500–800 °C.

  18. Catastrophic photometric redshift errors: Weak-lensing survey requirements

    DOE PAGES

    Bernstein, Gary; Huterer, Dragan

    2010-01-11

    We study the sensitivity of weak lensing surveys to the effects of catastrophic redshift errors - cases where the true redshift is misestimated by a significant amount. To compute the biases in cosmological parameters, we adopt an efficient linearized analysis where the redshift errors are directly related to shifts in the weak lensing convergence power spectra. We estimate the number N_spec of unbiased spectroscopic redshifts needed to determine the catastrophic error rate well enough that biases in cosmological parameters are below statistical errors of weak lensing tomography. While the straightforward estimate of N_spec is ~10^6, we find that using only the photometric redshifts with z ≤ 2.5 leads to a drastic reduction in N_spec to ~30,000 while negligibly increasing statistical errors in dark energy parameters. Therefore, the size of the spectroscopic survey needed to control catastrophic errors is similar to that previously deemed necessary to constrain the core of the z_s – z_p distribution. We also study the efficacy of the recent proposal to measure redshift errors by cross-correlation between the photo-z and spectroscopic samples. We find that this method requires ~10% a priori knowledge of the bias and stochasticity of the outlier population, and is also easily confounded by lensing magnification bias. In conclusion, the cross-correlation method is unlikely to supplant the need for a complete spectroscopic redshift survey of the source population.

  19. IGS preparations for the next reprocessing and ITRF

    NASA Astrophysics Data System (ADS)

    Griffiths, J.; Rebischung, P.; Garayt, B.; Ray, J.

    2012-04-01

    The International GNSS Service (IGS) is preparing for a second reanalysis of the full history of data collected by the global network using the latest models and methodologies. This effort is designed to obtain improved, consistent satellite orbits, station and satellite clocks, Earth orientation parameters (EOPs) and terrestrial frame products using the current IGS framework, IGS08/igs08.atx. It follows a successful first reprocessing campaign, which provided the IGS input to ITRF2008. Likewise, this second campaign (repro2) should provide the IGS contribution to the next ITRF. We will discuss the analysis standards adopted for repro2, including treatment of and mitigation against non-tidal loading effects, and improvements expected with respect to the first reprocessing campaign. The International Earth Rotation and Reference Systems Service (IERS) Conventions of 2010 are expected to be implemented; however, no improvements to the diurnal and semidiurnal EOP tide models will be made, so the associated errors will remain. Adoption of new orbital force models and consistent handling of satellite attitude changes are expected to improve IGS clock and orbit products. A priori Earth-reflected radiation pressure models should nearly eliminate the ~2.5 cm orbit radial bias previously observed using laser ranging methods. Also, a priori modeling of the radiation forces exerted by signal transmission should improve the orbit products, and the use of consistent satellite attitude models should help with satellite clock estimation during Earth and Moon eclipses. Improvements to the terrestrial frame products are expected from, for example, the inclusion of second-order ionospheric corrections and the a priori modeling of Earth-reflected radiation pressure. Because of remaining unmodeled orbital forces, however, systematic errors will likely continue to affect the origin of the repro2 frames and prevent a GNSS contribution to the origin of the next ITRF. On the other hand, the planned inclusion of satellite phase center offsets in the long-term stacking of the repro2 frames could help in defining the scale rate of the next ITRF.

  20. Survey of editors and reviewers of high-impact psychology journals: statistical and research design problems in submitted manuscripts.

    PubMed

    Harris, Alex; Reeder, Rachelle; Hyun, Jenny

    2011-01-01

    The authors surveyed 21 editors and reviewers from major psychology journals to identify and describe the statistical and design errors they encounter most often and to get their advice regarding prevention of these problems. Content analysis of the text responses revealed themes in 3 major areas: (a) problems with research design and reporting (e.g., lack of an a priori power analysis, lack of congruence between research questions and study design/analysis, failure to adequately describe statistical procedures); (b) inappropriate data analysis (e.g., improper use of analysis of variance, too many statistical tests without adjustments, inadequate strategy for addressing missing data); and (c) misinterpretation of results. If researchers attended to these common methodological and analytic issues, the scientific quality of manuscripts submitted to high-impact psychology journals might be significantly improved.

  1. Systematic feasibility analysis of a quantitative elasticity estimation for breast anatomy using supine/prone patient postures.

    PubMed

    Hasse, Katelyn; Neylon, John; Sheng, Ke; Santhanam, Anand P

    2016-03-01

    Breast elastography is a critical tool for improving the targeted radiotherapy treatment of breast tumors. Current breast radiotherapy imaging protocols only involve prone and supine CT scans. There is a lack of knowledge on the quantitative accuracy with which breast elasticity can be systematically measured using only prone and supine CT datasets. The purpose of this paper is to describe a quantitative elasticity estimation technique for breast anatomy using only these supine/prone patient postures. Using biomechanical, high-resolution breast geometry obtained from CT scans, a systematic assessment was performed in order to determine the feasibility of this methodology for clinically relevant elasticity distributions. A model-guided inverse analysis approach is presented in this paper. A graphics processing unit (GPU)-based linear elastic biomechanical model was employed as a forward model for the inverse analysis with the breast geometry in a prone position. The elasticity estimation was performed using a gradient-based iterative optimization scheme and a fast-simulated annealing (FSA) algorithm. Numerical studies were conducted to systematically analyze the feasibility of elasticity estimation. For simulating gravity-induced breast deformation, the breast geometry was anchored at its base, resembling the chest-wall/breast tissue interface. Ground-truth elasticity distributions were assigned to the model, representing tumor presence within breast tissue. Model geometry resolution was varied to estimate its influence on convergence of the system. A priori information was approximated and utilized in order to assess its effect on convergence time and accuracy. The role of the FSA process was also recorded. A novel error metric that combined elasticity and displacement error was used to quantify the systematic feasibility study. For the authors' purposes, convergence was defined as each voxel of tissue lying within 1 mm of the ground-truth deformation. The authors' analyses showed that a ∼97% model convergence was systematically observed with no a priori information. Varying the model geometry resolution showed no significant accuracy improvements. The GPU-based forward model enabled the inverse analysis to be completed within 10-70 min. Using a priori information about the underlying anatomy, the computation time decreased by as much as 50%, while accuracy improved from 96.81% to 98.26%. The use of FSA was observed to allow the iterative estimation methodology to converge more precisely. By utilizing a forward iterative approach to solve the inverse elasticity problem, this work indicates the feasibility and potential of the fast reconstruction of breast tissue elasticity using supine/prone patient postures.

  2. [Serious events: from statutory requirements to the implementation].

    PubMed

    Aullen, J-P; Lassale, B; Verdot, J-J

    2008-11-01

    Since 2005, the PACA region has operated a think-tank group on a priori risk in the transfusion chain. This work made it possible to identify each step of the elementary process and to evaluate the frequency, seriousness, and criticality of the errors. Blood sampling and sample conformity are the most critical points and depend on identity vigilance. In September 2007, the southern blood bank of France established 12 nonconformity levels for blood samples and now sends a listing of the nonconformities every month. This listing enables the executive staff to identify the errors and, therefore, to resolve them. The regional notifications from 2007 to 2008 confirm the analysis of the think-tank team and allowed the most serious cases to be listed. Public and private hospitals must notify serious events and will be required to evaluate their professional practices. These measures will be taken into account in the regional medical contract.

  3. Validating Affordances as an Instrument for Design and a Priori Analysis of Didactical Situations in Mathematics

    ERIC Educational Resources Information Center

    Sollervall, Håkan; Stadler, Erika

    2015-01-01

    The aim of the presented case study is to investigate how coherent analytical instruments may guide the a priori and a posteriori analyses of a didactical situation. In the a priori analysis we draw on the notion of affordances, as artefact-mediated opportunities for action, to construct hypothetical trajectories of goal-oriented actions that have…

  4. Deep-space navigation with differenced data types. Part 3: An expanded information content and sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Thurman, S. W.

    1992-01-01

    An approximate six-parameter analytic model for Earth-based differenced range measurements is presented and is used to derive a representative analytic approximation for differenced Doppler measurements. The analytic models are used to investigate the ability of these data types to estimate spacecraft geocentric angular motion, Deep Space Network station oscillator (clock/frequency) offsets, and signal-path calibration errors over a period of a few days, in the presence of systematic station location and transmission media calibration errors. Quantitative results indicate that a few differenced Doppler plus ranging passes yield angular position estimates with a precision on the order of 0.1 to 0.4 micro-rad, and angular rate precision on the order of 10 to 25 x 10(exp -12) rad/sec, assuming no a priori information on the coordinate parameters. Sensitivity analyses suggest that troposphere zenith delay calibration error is the dominant systematic error source in most of the tracking scenarios investigated; as expected, the differenced Doppler data were found to be much more sensitive to troposphere calibration errors than differenced range. By comparison, results computed using wideband and narrowband (delta) VLBI under similar circumstances yielded angular precisions of 0.07 to 0.4 micro-rad, and angular rate precisions of 0.5 to 1.0 x 10(exp -12) rad/sec.

  6. Analysis of the discontinuous Galerkin method applied to the European option pricing problem

    NASA Astrophysics Data System (ADS)

    Hozman, J.

    2013-12-01

    In this paper we deal with a numerical solution of a one-dimensional Black-Scholes partial differential equation, an important scalar nonstationary linear convection-diffusion-reaction equation describing the pricing of European vanilla options. We present a derivation of the numerical scheme based on the space semidiscretization of the model problem by the discontinuous Galerkin method with nonsymmetric stabilization of diffusion terms and with interior and boundary penalty terms. The main attention is paid to the investigation of a priori error estimates for the proposed scheme. The appended numerical experiments illustrate the theoretical results and demonstrate the potency of the method.
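
    For reference, the European-option Black-Scholes equation that such a scheme discretizes can be written, in the usual notation with volatility sigma and risk-free rate r, as

        \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + r S \frac{\partial V}{\partial S} - r V = 0

    which exhibits the convection (r S V_S), diffusion (S^2 V_SS), and reaction (r V) terms mentioned above.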

  7. Does raising type 1 error rate improve power to detect interactions in linear regression models? A simulation study.

    PubMed

    Durand, Casey P

    2013-01-01

    Statistical interactions are a common component of data analysis across a broad range of scientific disciplines. However, the statistical power to detect interactions is often undesirably low. One solution is to elevate the Type 1 error rate so that important interactions are not missed in a low power situation. To date, no study has quantified the effects of this practice on power in a linear regression model. A Monte Carlo simulation study was performed. A continuous dependent variable was specified, along with three types of interactions: continuous variable by continuous variable; continuous by dichotomous; and dichotomous by dichotomous. For each of the three scenarios, the interaction effect sizes, sample sizes, and Type 1 error rate were varied, resulting in a total of 240 unique simulations. In general, power to detect the interaction effect was either so low or so high at α = 0.05 that raising the Type 1 error rate only served to increase the probability of including a spurious interaction in the model. A small number of scenarios were identified in which an elevated Type 1 error rate may be justified. Routinely elevating Type 1 error rate when testing interaction effects is not an advisable practice. Researchers are best served by positing interaction effects a priori and accounting for them when conducting sample size calculations.
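
    A minimal sketch of the kind of Monte Carlo experiment described above, here for a continuous-by-dichotomous interaction; the coefficient values, sample size, and number of replications are arbitrary placeholders rather than the study's actual settings.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        def interaction_power(n=200, beta_int=0.2, alpha=0.05, reps=2000):
            """Fraction of replications in which the x*z interaction term is significant."""
            hits = 0
            for _ in range(reps):
                x = rng.normal(size=n)
                z = rng.integers(0, 2, size=n)            # dichotomous moderator
                y = 0.3 * x + 0.3 * z + beta_int * x * z + rng.normal(size=n)
                X = np.column_stack([np.ones(n), x, z, x * z])
                beta, *_ = np.linalg.lstsq(X, y, rcond=None)
                resid = y - X @ beta
                sigma2 = resid @ resid / (n - X.shape[1])
                se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[3, 3])
                t = beta[3] / se
                p = 2 * stats.t.sf(abs(t), df=n - X.shape[1])
                hits += p < alpha
            return hits / reps

        # Compare power at the conventional and at an elevated Type 1 error rate.
        print(interaction_power(alpha=0.05), interaction_power(alpha=0.10))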

  8. A theory for predicting composite laminate warpage resulting from fabrication

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1974-01-01

    Linear laminate theory is used with the moment-curvature relationship to derive equations for predicting end deflections due to warpage without solving the coupled fourth-order partial differential equations of the plate. Composite micro- and macromechanics are used with laminate theory to assess the contribution of factors such as ply misorientation, fiber migration, and fiber and/or void volume ratio nonuniformity on the laminate warpage. Using these equations, it was found that a 1 deg error in the orientation angle of one ply was sufficient to produce warpage end deflection equal to two laminate thicknesses in a 10 inch by 10 inch laminate made from 8 ply Mod-I/epoxy. Using a sensitivity analysis on the governing parameters, it was found that a 3 deg fiber migration or a void volume ratio of three percent in some plies is sufficient to produce laminate warpage corner deflection equal to several laminate thicknesses. Tabular and graphical data are presented which can be used to identify possible errors contributing to laminate warpage and/or to obtain an a priori assessment when unavoidable errors during fabrication are anticipated.

  9. TES/MLS Aura L2 Carbon Monoxide (CO) Nadir (TML2CO)

    Atmospheric Science Data Center

    2018-05-06

    Profile estimates of carbon monoxide and associated errors derived using TES & MLS spectral radiance measurements taken at nearest time and locations, with the corresponding a priori constraint vectors.

  11. Estimation of 3D reconstruction errors in a stereo-vision system

    NASA Astrophysics Data System (ADS)

    Belhaoua, A.; Kohler, S.; Hirsch, E.

    2009-06-01

    The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure of manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., CAD object model) in order to evaluate quantitatively the object. To ensure efficient quality control, the aim is to be able to state whether reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we particularly analyze the segmentation error due to localization errors of extracted edge points assumed to belong to the lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as quality measures to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, making it possible to evaluate their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables evaluation of the quality of the 3D reconstruction, as illustrated by the experimental results shown.

  12. Comparison of GOME tropospheric NO2 columns with NO2 profiles deduced from ground-based in situ measurements

    NASA Astrophysics Data System (ADS)

    Schaub, D.; Boersma, K. F.; Kaiser, J. W.; Weiss, A. K.; Folini, D.; Eskes, H. J.; Buchmann, B.

    2006-08-01

    Nitrogen dioxide (NO2) vertical tropospheric column densities (VTCs) retrieved from the Global Ozone Monitoring Experiment (GOME) are compared to coincident ground-based tropospheric NO2 columns. The ground-based columns are deduced from in situ measurements at different altitudes in the Alps for 1997 to June 2003, yielding a unique long-term comparison of GOME NO2 VTC data retrieved by a collaboration of KNMI (Royal Netherlands Meteorological Institute) and BIRA/IASB (Belgian Institute for Space Aeronomy) with independently derived tropospheric NO2 profiles. A first comparison relates the GOME retrieved tropospheric columns to the tropospheric columns obtained by integrating the ground-based NO2 measurements. For a second comparison, the tropospheric profiles constructed from the ground-based measurements are first multiplied with the averaging kernel (AK) of the GOME retrieval. The second approach makes the comparison independent from the a priori NO2 profile used in the GOME retrieval. This allows splitting the total difference between the column data sets into two contributions: one that is due to differences between the a priori and the ground-based NO2 profile shapes, and another that can be attributed to uncertainties in both the remaining retrieval parameters (such as, e.g., surface albedo or aerosol concentration) and the ground-based in situ NO2 profiles. For anticyclonic clear sky conditions the comparison indicates a good agreement between the columns (n=157, R=0.70/0.74 for the first/second comparison approach, respectively). The mean relative difference (with respect to the ground-based columns) is -7% with a standard deviation of 40% and GOME on average slightly underestimating the ground-based columns. Both data sets show a similar seasonal behaviour with a distinct maximum of spring NO2 VTCs. Further analysis indicates small GOME columns being systematically smaller than the ground-based ones. The influence of different shapes in the a priori and the ground-based NO2 profile is analysed by considering AK information. It is moderate and indicates similar shapes of the profiles for clear sky conditions. Only for large GOME columns, differences between the profile shapes explain the larger part of the relative difference. In contrast, the other error sources give rise to the larger relative differences found towards smaller columns. Further, for the clear sky cases, errors from different sources are found to compensate each other partially. The comparison for cloudy cases indicates a poorer agreement between the columns (n=60, R=0.61). The mean relative difference between the columns is 60% with a standard deviation of 118% and GOME on average overestimating the ground-based columns. The clear improvement after inclusion of AK information (n=60, R=0.87) suggests larger errors in the a priori NO2 profiles under cloudy conditions and demonstrates the importance of using accurate profile information for (partially) clouded scenes.
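
    The second comparison approach, multiplying the ground-based profile by the GOME averaging kernel (AK) before integrating, can be sketched as below; the profile values, kernel values, and number of layers are placeholders, not the study's data.

        import numpy as np

        # Hypothetical partial-column NO2 profile from the ground-based in situ data (molec cm^-2 per layer).
        ground_profile = np.array([4.0e15, 2.5e15, 1.0e15, 0.4e15, 0.1e15])

        # Hypothetical GOME averaging kernel for the same layers (dimensionless).
        ak = np.array([0.3, 0.6, 0.9, 1.1, 1.2])

        column_direct = ground_profile.sum()           # first approach: straight integration
        column_smoothed = (ak * ground_profile).sum()  # second approach: AK applied, removing the a priori profile shape from the comparison

        print(column_direct, column_smoothed)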

  13. Refined discrete and empirical horizontal gradients in VLBI analysis

    NASA Astrophysics Data System (ADS)

    Landskron, Daniel; Böhm, Johannes

    2018-02-01

    Missing or incorrect consideration of azimuthal asymmetry of troposphere delays is a considerable error source in space geodetic techniques such as Global Navigation Satellite Systems (GNSS) or Very Long Baseline Interferometry (VLBI). So-called horizontal troposphere gradients are generally utilized for modeling such azimuthal variations and are particularly required for observations at low elevation angles. Apart from estimating the gradients within the data analysis, which has become common practice in space geodetic techniques, there is also the possibility to determine the gradients beforehand from different data sources than the actual observations. Using ray-tracing through Numerical Weather Models (NWMs), we determined discrete gradient values referred to as GRAD for VLBI observations, based on the standard gradient model by Chen and Herring (J Geophys Res 102(B9):20489-20502, 1997. https://doi.org/10.1029/97JB01739) and also for new, higher-order gradient models. These gradients are produced on the same data basis as the Vienna Mapping Functions 3 (VMF3) (Landskron and Böhm in J Geod, 2017.https://doi.org/10.1007/s00190-017-1066-2), so they can also be regarded as the VMF3 gradients as they are fully consistent with each other. From VLBI analyses of the Vienna VLBI and Satellite Software (VieVS), it becomes evident that baseline length repeatabilities (BLRs) are improved on average by 5% when using a priori gradients GRAD instead of estimating the gradients. The reason for this improvement is that the gradient estimation yields poor results for VLBI sessions with a small number of observations, while the GRAD a priori gradients are unaffected from this. We also developed a new empirical gradient model applicable for any time and location on Earth, which is included in the Global Pressure and Temperature 3 (GPT3) model. Although being able to describe only the systematic component of azimuthal asymmetry and no short-term variations at all, even these empirical a priori gradients slightly reduce (improve) the BLRs with respect to the estimation of gradients. In general, this paper addresses that a priori horizontal gradients are actually more important for VLBI analysis than previously assumed, as particularly the discrete model GRAD as well as the empirical model GPT3 are indeed able to refine and improve the results.
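
    For orientation, the standard gradient model of Chen and Herring (1997) referred to above expresses the asymmetric delay at azimuth a and elevation e in terms of north and east gradient components G_N and G_E approximately as

        \Delta L(a, e) = \frac{G_N \cos a + G_E \sin a}{\sin e \tan e + C}

    with C a small constant (about 0.0032 in the original formulation for the total gradient); the higher-order GRAD models mentioned above extend this expansion with additional azimuthal terms.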

  14. Objective determination of image end-members in spectral mixture analysis of AVIRIS data

    NASA Technical Reports Server (NTRS)

    Tompkins, Stefanie; Mustard, John F.; Pieters, Carle M.; Forsyth, Donald W.

    1993-01-01

    Spectral mixture analysis has been shown to be a powerful, multifaceted tool for analysis of multi- and hyper-spectral data. Applications of AVIRIS data have ranged from mapping soils and bedrock to ecosystem studies. During the first phase of the approach, a set of end-members are selected from an image cube (image end-members) that best account for its spectral variance within a constrained, linear least squares mixing model. These image end-members are usually selected using a priori knowledge and successive trial and error solutions to refine the total number and physical location of the end-members. However, in many situations a more objective method of determining these essential components is desired. We approach the problem of image end-member determination objectively by using the inherent variance of the data. Unlike purely statistical methods such as factor analysis, this approach derives solutions that conform to a physically realistic model.
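
    The constrained linear mixing step underlying this approach can be sketched as follows; the end-member spectra and pixel spectrum are synthetic placeholders, and non-negative abundances that approximately sum to one are obtained by augmenting a non-negative least-squares system with a heavily weighted sum-to-one row.

        import numpy as np
        from scipy.optimize import nnls

        # Synthetic end-member spectra (columns) for a 6-band sensor and a mixed pixel spectrum.
        E = np.array([[0.10, 0.45, 0.30],
                      [0.12, 0.50, 0.28],
                      [0.15, 0.55, 0.25],
                      [0.20, 0.60, 0.22],
                      [0.30, 0.62, 0.20],
                      [0.40, 0.65, 0.18]])
        d = 0.5 * E[:, 0] + 0.3 * E[:, 1] + 0.2 * E[:, 2] + 0.002   # pixel = mixture plus a small offset

        # Enforce the sum-to-one constraint softly by appending a weighted constraint row.
        w = 100.0
        E_aug = np.vstack([E, w * np.ones(E.shape[1])])
        d_aug = np.append(d, w)

        fractions, residual = nnls(E_aug, d_aug)
        print(fractions, fractions.sum())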

  15. Improvements in GRACE Gravity Field Determination through Stochastic Observation Modeling

    NASA Astrophysics Data System (ADS)

    McCullough, C.; Bettadpur, S. V.

    2016-12-01

    Current unconstrained Release 05 GRACE gravity field solutions from the Center for Space Research (CSR RL05) assume random observation errors following an independent multivariate Gaussian distribution. This modeling of observations, a simplifying assumption, fails to account for long period, correlated errors arising from inadequacies in the background force models. Fully modeling the errors inherent in the observation equations, through the use of a full observation covariance (modeling colored noise), enables optimal combination of GPS and inter-satellite range-rate data and obviates the need for estimating kinematic empirical parameters during the solution process. Most importantly, fully modeling the observation errors drastically improves formal error estimates of the spherical harmonic coefficients, potentially enabling improved uncertainty quantification of scientific results derived from GRACE and optimizing combinations of GRACE with independent data sets and a priori constraints.
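
    Moving from an independent-Gaussian to a full (colored-noise) observation covariance amounts to replacing ordinary with generalized least squares; a toy sketch with a placeholder design matrix and an exponentially correlated error covariance (not GRACE partials or data) is given below.

        import numpy as np

        rng = np.random.default_rng(1)
        n, p = 200, 3
        A = rng.normal(size=(n, p))                       # toy design matrix
        x_true = np.array([1.0, -2.0, 0.5])

        # Exponentially correlated ("colored") observation noise covariance.
        t = np.arange(n)
        C = 0.1 * np.exp(-np.abs(t[:, None] - t[None, :]) / 20.0)
        y = A @ x_true + np.linalg.cholesky(C) @ rng.normal(size=n)

        Ci = np.linalg.inv(C)
        x_ols = np.linalg.lstsq(A, y, rcond=None)[0]                  # ignores the correlations
        x_gls = np.linalg.solve(A.T @ Ci @ A, A.T @ Ci @ y)           # fully modeled errors
        cov_gls = np.linalg.inv(A.T @ Ci @ A)                         # realistic formal errors under colored noise

        print(x_ols, x_gls, np.sqrt(np.diag(cov_gls)))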

  16. Sequential Least-Squares Using Orthogonal Transformations. [spacecraft communication/spacecraft tracking-data smoothing

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.

    1975-01-01

    Square root information estimation, starting from its beginnings in least-squares parameter estimation, is considered. Special attention is devoted to discussions of sensitivity and perturbation matrices, computed solutions and their formal statistics, consider-parameters and consider-covariances, and the effects of a priori statistics. The constant-parameter model is extended to include time-varying parameters and process noise, and the error analysis capabilities are generalized. Efficient and elegant smoothing results are obtained as easy consequences of the filter formulation. The value of the techniques is demonstrated by the navigation results that were obtained for the Mariner Venus-Mercury (Mariner 10) multiple-planetary space probe and for the Viking Mars space mission.

  17. Object aggregation using Neyman-Pearson analysis

    NASA Astrophysics Data System (ADS)

    Bai, Li; Hinman, Michael L.

    2003-04-01

    This paper presents a novel approach to: 1) distinguish military vehicle groups, and 2) identify names of military vehicle convoys in the level-2 fusion process. The data is generated from a generic Ground Moving Target Indication (GMTI) simulator that utilizes Matlab and Microsoft Access. This data is processed to identify the convoys and number of vehicles in the convoy, using the minimum timed distance variance (MTDV) measurement. Once the vehicle groups are formed, convoy association is done using hypothesis techniques based upon Neyman Pearson (NP) criterion. One characteristic of NP is the low error probability when a-priori information is unknown. The NP approach was demonstrated with this advantage over a Bayesian technique.

  18. An automated approach to the segmentation of HEp-2 cells for the indirect immunofluorescence ANA test.

    PubMed

    Tonti, Simone; Di Cataldo, Santa; Bottino, Andrea; Ficarra, Elisa

    2015-03-01

    The automatization of the analysis of Indirect Immunofluorescence (IIF) images is of paramount importance for the diagnosis of autoimmune diseases. This paper proposes a solution to one of the most challenging steps of this process, the segmentation of HEp-2 cells, through an adaptive marker-controlled watershed approach. Our algorithm automatically conforms the marker selection pipeline to the peculiar characteristics of the input image, hence it is able to cope with different fluorescent intensities and staining patterns without any a priori knowledge. Furthermore, it shows a reduced sensitivity to over-segmentation errors and uneven illumination, that are typical issues of IIF imaging. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Discriminating two nonorthogonal states against a noise channel by feed-forward control

    NASA Astrophysics Data System (ADS)

    Guo, Li-Sha; Xu, Bao-Ming; Zou, Jian; Wang, Chao-Quan; Li, Hai; Li, Jun-Gang; Shao, Bin

    2015-02-01

    We propose a scheme that uses feed-forward control (FFC) to improve the minimum-error (ME) discrimination of two nonorthogonal states after they pass through a noise channel. We show that applying our scheme substantially improves the discrimination compared with ME discrimination without FFC, for any pair of nonorthogonal states and any degree of amplitude damping. In particular, for equal a priori probabilities the optimal discrimination deterministically reaches the performance achievable for the two initial nonorthogonal pure states in the presence of the noise channel, and for unequal a priori probabilities it can even exceed that performance in a probabilistic way.
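
    For context, the minimum-error (Helstrom) probability of error for discriminating two pure states |psi_1>, |psi_2> prepared with a priori probabilities eta_1 and eta_2 is

        P_{err}^{min} = \frac{1}{2}\left(1 - \sqrt{1 - 4\,\eta_1\eta_2\,|\langle\psi_1|\psi_2\rangle|^2}\right)

    and the scheme above aims to keep the states as distinguishable as possible, in this sense, after the amplitude-damping channel.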

  20. Developing an A Priori Database for Passive Microwave Snow Water Retrievals Over Ocean

    NASA Astrophysics Data System (ADS)

    Yin, Mengtao; Liu, Guosheng

    2017-12-01

    A physically optimized a priori database is developed for Global Precipitation Measurement Microwave Imager (GMI) snow water retrievals over ocean. The initial snow water content profiles are derived from CloudSat Cloud Profiling Radar (CPR) measurements. A radiative transfer model in which the single-scattering properties of nonspherical snowflakes are based on discrete dipole approximation results is employed to simulate brightness temperatures and their gradients. Snow water content profiles are then optimized through a one-dimensional variational (1D-Var) method. The standard deviations of the difference between observed and simulated brightness temperatures are of a similar magnitude to the observation errors defined for the observation error covariance matrix after the 1D-Var optimization, indicating that this variational method is successful. This optimized database is applied in a Bayesian snow water retrieval algorithm. The retrieval results indicated that the 1D-Var approach has a positive impact on the GMI retrieved snow water content profiles by improving the physical consistency between snow water content profiles and observed brightness temperatures. The global distribution of snow water contents retrieved from the a priori database is compared with CloudSat CPR estimates. Results showed that the two estimates have a similar pattern of global distribution, and the difference between their global means is small. In addition, we investigate the impact of using physical parameters to subset the database on snow water retrievals. It is shown that using total precipitable water to subset the database with 1D-Var optimization is beneficial for snow water retrievals.
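
    The 1D-Var optimization referred to above minimizes the usual variational cost function, written here in standard notation with x_a the a priori snow water content profile, B and R the a priori and observation error covariance matrices, and H the radiative transfer operator:

        J(\mathbf{x}) = (\mathbf{x}-\mathbf{x}_a)^{T}\,\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_a) + \left(\mathbf{y}-H(\mathbf{x})\right)^{T}\mathbf{R}^{-1}\left(\mathbf{y}-H(\mathbf{x})\right)

    so the diagnostic quoted above compares the residual y - H(x) after optimization with the errors assumed in R.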

  1. A priori evaluation of two-stage cluster sampling for accuracy assessment of large-area land-cover maps

    USGS Publications Warehouse

    Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Wade, T.G.; Yang, L.

    2004-01-01

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, within-cluster correlation may reduce the precision of the accuracy estimates. The detailed population information to quantify a priori the effect of within-cluster correlation on precision is typically unavailable. Consequently, a convenient, practical approach to evaluate the likely performance of a two-stage cluster sample is needed. We describe such an a priori evaluation protocol focusing on the spatial distribution of the sample by land-cover class across different cluster sizes and costs of different sampling options, including options not imposing clustering. This protocol also assesses the two-stage design's adequacy for estimating the precision of accuracy estimates for rare land-cover classes. We illustrate the approach using two large-area, regional accuracy assessments from the National Land-Cover Data (NLCD), and describe how the a priori evaluation was used as a decision-making tool when implementing the NLCD design.
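
    The precision penalty from within-cluster correlation that motivates this a priori evaluation is often summarized by the usual design-effect rule of thumb (given here only as general background, with average cluster size m-bar and intracluster correlation rho of classification error, neither taken from the paper):

        \mathrm{deff} \approx 1 + (\bar{m} - 1)\,\rho , \qquad n_{\mathrm{eff}} = \frac{n}{\mathrm{deff}}

    so the effective sample size shrinks as clusters grow or as classification error becomes more spatially correlated.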

  2. A priori mesh grading for the numerical calculation of the head-related transfer functions

    PubMed Central

    Ziegelwanger, Harald; Kreuzer, Wolfgang; Majdak, Piotr

    2017-01-01

    Head-related transfer functions (HRTFs) describe the directional filtering of the incoming sound caused by the morphology of a listener’s head and pinnae. When an accurate model of a listener’s morphology exists, HRTFs can be calculated numerically with the boundary element method (BEM). However, the general recommendation to model the head and pinnae with at least six elements per wavelength renders the BEM as a time-consuming procedure when calculating HRTFs for the full audible frequency range. In this study, a mesh preprocessing algorithm is proposed, viz., a priori mesh grading, which reduces the computational costs in the HRTF calculation process significantly. The mesh grading algorithm deliberately violates the recommendation of at least six elements per wavelength in certain regions of the head and pinnae and varies the size of elements gradually according to an a priori defined grading function. The evaluation of the algorithm involved HRTFs calculated for various geometric objects including meshes of three human listeners and various grading functions. The numerical accuracy and the predicted sound-localization performance of calculated HRTFs were analyzed. A-priori mesh grading appeared to be suitable for the numerical calculation of HRTFs in the full audible frequency range and outperformed uniform meshes in terms of numerical errors, perception based predictions of sound-localization performance, and computational costs. PMID:28239186
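
    The "six elements per wavelength" rule that the grading algorithm deliberately relaxes translates into a maximum edge length that shrinks with frequency; a two-line illustration, assuming a speed of sound of 343 m/s and arbitrary example frequencies, is given below.

        c = 343.0                      # assumed speed of sound in air, m/s
        for f in (2e3, 8e3, 16e3):     # example frequencies in Hz
            print(f, c / (6 * f) * 1e3, "mm max edge length for ~6 elements per wavelength")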

  3. A hybrid method for synthetic aperture ladar phase-error compensation

    NASA Astrophysics Data System (ADS)

    Hua, Zhili; Li, Hongping; Gu, Yongjian

    2009-07-01

    As a high resolution imaging sensor, synthetic aperture ladar produces data containing phase errors whose sources include uncompensated platform motion and atmospheric turbulence distortion. Two previously devised methods, the rank one phase-error estimation (ROPE) algorithm and iterative blind deconvolution (IBD), are reexamined, and from them a hybrid method is built that can recover both the images and the PSFs without any a priori information on the PSF, speeding up the convergence rate through the choice of initialization. When integrated into a spotlight mode SAL imaging model, all three methods can effectively reduce the phase-error distortion. For each approach, the signal to noise ratio, root mean square error and CPU time are computed, from which we can see that the convergence rate of the hybrid method is improved because of a more efficient initialization of the blind deconvolution. Moreover, in a further discussion of the hybrid method, the weight distribution between ROPE and IBD is found to be an important factor that affects the final result of the whole compensation process.

  4. Proxy-equation paradigm: A strategy for massively parallel asynchronous computations

    NASA Astrophysics Data System (ADS)

    Mittal, Ankita; Girimaji, Sharath

    2017-09-01

    Massively parallel simulations of transport equation systems call for a paradigm change in algorithm development to achieve efficient scalability. Traditional approaches require time synchronization of processing elements (PEs), which severely restricts scalability. Relaxing the synchronization requirement introduces error and slows down convergence. In this paper, we propose and develop a novel "proxy equation" concept for a general transport equation that (i) tolerates asynchrony with minimal added error, (ii) preserves convergence order and thus (iii) is expected to scale efficiently on massively parallel machines. The central idea is to modify a priori the transport equation at the PE boundaries to offset asynchrony errors. Proof-of-concept computations are performed using a one-dimensional advection (convection) diffusion equation. The results demonstrate the promise and advantages of the present strategy.
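
    As a point of reference for the proof-of-concept problem, a standard synchronous explicit update for the one-dimensional advection (convection) diffusion equation u_t + c u_x = nu u_xx is sketched below; the grid size, velocity, and diffusivity are arbitrary, and the proxy-equation modification of the PE-boundary terms described in the paper is not reproduced here.

        import numpy as np

        nx, c, nu = 200, 1.0, 0.01
        dx = 1.0 / nx
        dt = 0.2 * min(dx / c, dx * dx / (2 * nu))        # simple stability-limited time step
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        u = np.exp(-200 * (x - 0.5) ** 2)                 # initial Gaussian pulse

        for _ in range(500):
            up, um = np.roll(u, -1), np.roll(u, 1)        # periodic neighbours (these would be PE-boundary values in parallel)
            u = u - c * dt / (2 * dx) * (up - um) + nu * dt / dx**2 * (up - 2 * u + um)

        print(u.max(), u.sum() * dx)                      # pulse decays by diffusion while total mass is conserved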

  5. Investigating industrial investigation: examining the impact of a priori knowledge and tunnel vision education.

    PubMed

    Maclean, Carla L; Brimacombe, C A Elizabeth; Lindsay, D Stephen

    2013-12-01

    The current study addressed tunnel vision in industrial incident investigation by experimentally testing how a priori information and a human bias (generated via the fundamental attribution error or correspondence bias) affected participants' investigative behavior as well as the effectiveness of a debiasing intervention. Undergraduates and professional investigators engaged in a simulated industrial investigation exercise. We found that participants' judgments were biased by knowledge about the safety history of either a worker or piece of equipment and that a human bias was evident in participants' decision making. However, bias was successfully reduced with "tunnel vision education." Professional investigators demonstrated a greater sophistication in their investigative decision making compared to undergraduates. The similarities and differences between these two populations are discussed. (c) 2013 APA, all rights reserved

  6. A Multiple Group Measurement Model of Children's Reports of Parental Socioeconomic Status. Discussion Papers No. 531-78.

    ERIC Educational Resources Information Center

    Mare, Robert D.; Mason, William M.

    An important class of applications of measurement error or constrained factor analytic models consists of comparing models for several populations. In such cases, it is appropriate to make explicit statistical tests of model similarity across groups and to constrain some parameters of the models to be equal across groups using a priori substantive…

  7. A new fictitious domain approach for Stokes equation

    NASA Astrophysics Data System (ADS)

    Yang, Min

    2017-10-01

    The purpose of this paper is to present a new fictitious domain approach for the Stokes equation based on Nitsche's method combined with a penalty method. This method allows for easy and flexible handling of the geometrical aspects. Stability and a priori error estimates are proved. Finally, a numerical experiment is provided to verify the theoretical findings.

  8. Simulation of water-table aquifers using specified saturated thickness

    USGS Publications Warehouse

    Sheets, Rodney A.; Hill, Mary C.; Haitjema, Henk M.; Provost, Alden M.; Masterson, John P.

    2014-01-01

    Simulating groundwater flow in a water-table (unconfined) aquifer can be difficult because the saturated thickness available for flow depends on model-calculated hydraulic heads. It is often possible to realize substantial time savings and still obtain accurate head and flow solutions by specifying an approximate saturated thickness a priori, thus linearizing this aspect of the model. This specified-thickness approximation often relies on the use of the “confined” option in numerical models, which has led to confusion and criticism of the method. This article reviews the theoretical basis for the specified-thickness approximation, derives an error analysis for relatively ideal problems, and illustrates the utility of the approximation with a complex test problem. In the transient version of our complex test problem, the specified-thickness approximation produced maximum errors in computed drawdown of about 4% of initial aquifer saturated thickness even when maximum drawdowns were nearly 20% of initial saturated thickness. In the final steady-state version, the approximation produced maximum errors in computed drawdown of about 20% of initial aquifer saturated thickness (mean errors of about 5%) when maximum drawdowns were about 35% of initial saturated thickness. In early phases of model development, such as during initial model calibration efforts, the specified-thickness approximation can be a very effective tool to facilitate convergence. The reduced execution time and increased stability obtained through the approximation can be especially useful when many model runs are required, such as during inverse model calibration, sensitivity and uncertainty analyses, multimodel analysis, and development of optimal resource management scenarios.

  9. Precision, Reliability, and Effect Size of Slope Variance in Latent Growth Curve Models: Implications for Statistical Power Analysis

    PubMed Central

    Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher

    2018-01-01

    Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377

  10. Guaranteed convergence of the Hough transform

    NASA Astrophysics Data System (ADS)

    Soffer, Menashe; Kiryati, Nahum

    1995-01-01

    The straight-line Hough Transform using normal parameterization with a continuous voting kernel is considered. It transforms the colinearity detection problem to a problem of finding the global maximum of a two dimensional function above a domain in the parameter space. The principle is similar to robust regression using fixed scale M-estimation. Unlike standard M-estimation procedures the Hough Transform does not rely on a good initial estimate of the line parameters: The global optimization problem is approached by exhaustive search on a grid that is usually as fine as computationally feasible. The global maximum of a general function above a bounded domain cannot be found by a finite number of function evaluations. Convergence to the global maximum can be guaranteed only if sufficient a-priori knowledge about the smoothness of the objective function is available. The extraction of a-priori information and its efficient use are the main challenges in real global optimization problems. The global optimization problem in the Hough Transform is essentially how fine the parameter space quantization should be in order not to miss the true maximum. More than thirty years after Hough patented the basic algorithm, the problem is still essentially open. In this paper an attempt is made to identify a-priori information on the smoothness of the objective (Hough) function and to introduce sufficient conditions for the convergence of the Hough Transform to the global maximum. An image model with several application dependent parameters is defined. Edge point location errors as well as background noise are accounted for. Minimal parameter space quantization intervals that guarantee convergence are obtained. Focusing policies for multi-resolution Hough algorithms are developed. Theoretical support for bottom-up processing is provided. Due to the randomness of errors and noise, convergence guarantees are probabilistic.
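
    A minimal version of the straight-line Hough Transform with the normal parameterization rho = x cos(theta) + y sin(theta) is sketched below, using synthetic noisy edge points and simple binned voting; the continuous voting kernel and the quantization bounds derived in the paper are not reproduced.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic edge points on the line rho = 0.3, theta = 30 deg, with location errors and background clutter.
        theta_true, rho_true = np.deg2rad(30), 0.3
        t = rng.uniform(-1, 1, 80)
        pts = np.column_stack([rho_true * np.cos(theta_true) - t * np.sin(theta_true),
                               rho_true * np.sin(theta_true) + t * np.cos(theta_true)])
        pts += rng.normal(scale=0.005, size=pts.shape)
        pts = np.vstack([pts, rng.uniform(-1, 1, (40, 2))])          # background noise points

        thetas = np.linspace(0, np.pi, 180, endpoint=False)
        rhos = np.linspace(-1.5, 1.5, 300)
        acc = np.zeros((len(thetas), len(rhos)), dtype=int)

        for x, y in pts:
            r = x * np.cos(thetas) + y * np.sin(thetas)              # rho for every candidate theta
            acc[np.arange(len(thetas)), np.searchsorted(rhos, r)] += 1

        i, j = np.unravel_index(acc.argmax(), acc.shape)
        print(np.rad2deg(thetas[i]), rhos[j])                        # peak expected near 30 deg, 0.3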

  11. In-Flight System Identification

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1998-01-01

    A method is proposed and studied whereby the system identification cycle consisting of experiment design and data analysis can be repeatedly implemented aboard a test aircraft in real time. This adaptive in-flight system identification scheme has many advantages, including increased flight test efficiency, adaptability to dynamic characteristics that are imperfectly known a priori, in-flight improvement of data quality through iterative input design, and immediate feedback of the quality of flight test results. The technique uses equation error in the frequency domain with a recursive Fourier transform for the real time data analysis, and simple design methods employing square wave input forms to design the test inputs in flight. Simulation examples are used to demonstrate that the technique produces increasingly accurate model parameter estimates resulting from sequentially designed and implemented flight test maneuvers. The method has reasonable computational requirements, and could be implemented aboard an aircraft in real time.
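
    The real-time data analysis mentioned above relies on updating a discrete Fourier transform sample by sample rather than recomputing it over the whole record; a bare-bones sketch of such a recursive update at a set of analysis frequencies is given below (a simplified illustration with made-up numbers, not the flight software).

        import numpy as np

        dt = 0.02                                  # sample interval, s
        freqs = np.array([0.2, 0.5, 1.0, 2.0])     # analysis frequencies, Hz
        X = np.zeros(len(freqs), dtype=complex)    # running Fourier sums

        def update(X, x_t, t):
            """Add one new time-domain sample to the running finite Fourier transform."""
            return X + x_t * np.exp(-2j * np.pi * freqs * t) * dt

        # Feed in a synthetic signal sample by sample, as would happen during a maneuver.
        for k in range(2500):
            t = k * dt
            x_t = np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.random.randn()
            X = update(X, x_t, t)

        print(np.abs(X))                           # energy concentrates at the 0.5 Hz analysis frequency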

  12. Assimilating data into open ocean tidal models

    NASA Astrophysics Data System (ADS)

    Kivman, Gennady A.

    Because every data set available in practice is incomplete and imperfect, the problem of deriving tidal fields from observations has an infinitely large number of allowable solutions that fit the data within measurement errors and hence can be treated as ill-posed. Therefore, interpolating the data always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that they all (basis function expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those (objective analysis and general inversion) originally formulated in stochastic terms, may be considered as utilizations of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, which is critical for inverse methods and nudging, is discussed.
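
    A familiar concrete instance of such a regularization, given here only as an illustration of the general idea rather than the formulation of any particular tidal scheme, is Tikhonov's method, which replaces the ill-posed fit to data y by the penalized problem

        \min_{x}\; \|H x - y\|^2 + \alpha \,\|L x\|^2

    where H maps the tidal field x to the observations, L encodes the a priori smoothness assumption, and alpha sets the trade-off between fitting the data and honouring the prior.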

  13. Frequency-difference MIT imaging of cerebral haemorrhage with a hemispherical coil array: numerical modelling.

    PubMed

    Zolgharni, M; Griffiths, H; Ledger, P D

    2010-08-01

    The feasibility of detecting a cerebral haemorrhage with a hemispherical MIT coil array consisting of 56 exciter/sensor coils of 10 mm radius and operating at 1 and 10 MHz was investigated. A finite difference method combined with an anatomically realistic head model comprising 12 tissue types was used to simulate the strokes. Frequency-difference images were reconstructed from the modelled data with different levels of added phase noise and two types of a priori boundary errors: a displacement of the head and a size scaling error. The results revealed that a phase noise level of 3 millidegrees (standard deviation) was adequate for obtaining good visualization of a peripheral stroke (volume approximately 49 ml). The simulations further showed that the displacement error had to be within 3-4 mm and the scaling error within 3-4% so as not to cause unacceptably large artefacts on the images.

  14. Numerical stability in problems of linear algebra.

    NASA Technical Reports Server (NTRS)

    Babuska, I.

    1972-01-01

    Mathematical problems are introduced as mappings from the space of input data to that of the desired output information. Then a numerical process is defined as a prescribed recurrence of elementary operations creating the mapping of the underlying mathematical problem. The ratio of the error committed by executing the operations of the numerical process (the roundoff errors) to the error introduced by perturbations of the input data (initial error) gives rise to the concept of lambda-stability. As examples, several processes are analyzed from this point of view, including, especially, old and new processes for solving systems of linear algebraic equations with tridiagonal matrices. In particular, it is shown how such a priori information can be utilized as, for instance, a knowledge of the row sums of the matrix. Information of this type is frequently available where the system arises in connection with the numerical solution of differential equations.
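
    For the tridiagonal systems mentioned above, the standard forward-elimination/back-substitution process (the Thomas algorithm) is sketched below; the example matrix is chosen only to echo the remark about known row sums, and analyzing how roundoff in these few operations compares with the effect of input perturbations is exactly the kind of lambda-stability question the paper poses.

        import numpy as np

        def thomas(a, b, c, d):
            """Solve a tridiagonal system: a = sub-diagonal, b = diagonal, c = super-diagonal, d = right-hand side."""
            n = len(b)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                       # forward elimination
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):              # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        # Example: the 1D Poisson-like matrix diag(-1, 2, -1), whose row sums are known a priori.
        n = 6
        a = -np.ones(n); a[0] = 0.0
        c = -np.ones(n); c[-1] = 0.0
        b = 2.0 * np.ones(n)
        print(thomas(a, b, c, np.ones(n)))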

  15. Bounding filter - A simple solution to lack of exact a priori statistics.

    NASA Technical Reports Server (NTRS)

    Nahi, N. E.; Weiss, I. M.

    1972-01-01

    Wiener and Kalman-Bucy estimation problems assume that models describing the signal and noise stochastic processes are exactly known. When this modeling information, i.e., the signal and noise spectral densities for Wiener filter and the signal and noise dynamic system and disturbing noise representations for Kalman-Bucy filtering, is inexactly known, then the filter's performance is suboptimal and may even exhibit apparent divergence. In this paper a system is designed whereby the actual estimation error covariance is bounded by the covariance calculated by the estimator. Therefore, the estimator obtains a bound on the actual error covariance which is not available, and also prevents its apparent divergence.

  16. Genetic Algorithm for Optimization: Preprocessing with n Dimensional Bisection and Error Estimation

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam Ali

    2006-01-01

    A knowledge of the appropriate values of the parameters of a genetic algorithm (GA) such as the population size, the shrunk search space containing the solution, crossover and mutation probabilities is not available a priori for a general optimization problem. Recommended here is a polynomial-time preprocessing scheme that includes an n-dimensional bisection and that determines the foregoing parameters before deciding upon an appropriate GA for all problems of similar nature and type. Such a preprocessing is not only fast but also enables us to get the global optimal solution and its reasonably narrow error bounds with a high degree of confidence.
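
    A rough sketch of how a bisection-style preprocessing step might shrink the search box handed to a GA is given below; the sampling rule, stopping criterion, and toy objective are assumptions made for illustration, not the authors' polynomial-time scheme.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sphere(x):                       # toy objective with its optimum at the origin
        return float(np.sum(x ** 2))

    def bisect_box(f, lo, hi, rounds=12, samples=64):
        """Repeatedly halve the box along its longest axis, keeping the half whose best
        random sample is lower (illustrative preprocessing sketch)."""
        lo, hi = np.array(lo, float), np.array(hi, float)
        for _ in range(rounds):
            axis = int(np.argmax(hi - lo))
            mid = 0.5 * (lo[axis] + hi[axis])
            lower_half = (lo.copy(), hi.copy()); lower_half[1][axis] = mid
            upper_half = (lo.copy(), hi.copy()); upper_half[0][axis] = mid
            scores = []
            for box_lo, box_hi in (lower_half, upper_half):
                pts = rng.uniform(box_lo, box_hi, size=(samples, lo.size))
                scores.append(min(f(p) for p in pts))
            lo, hi = (lower_half, upper_half)[int(np.argmin(scores))]
        return lo, hi

    lo, hi = bisect_box(sphere, [-10.0] * 4, [10.0] * 4)
    print("shrunk GA search box:", lo, hi)   # a much tighter box around the optimum
    ```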

  17. Characterization of the probabilistic traveling salesman problem.

    PubMed

    Bowler, Neill E; Fink, Thomas M A; Ball, Robin C

    2003-09-01

    We show that stochastic annealing can be successfully applied to gain new results on the probabilistic traveling salesman problem. The probabilistic "traveling salesman" must decide on an a priori order in which to visit n cities (randomly distributed over a unit square) before learning that some cities can be omitted. We find the optimized average length of the pruned tour follows E(L_pruned) = √(np) (0.872 - 0.105p) f(np), where p is the probability of a city needing to be visited, and f(np) → 1 as np → ∞. The average length of the a priori tour (before omitting any cities) is found to follow E(L_a priori) = √(n/p) β(p), where β(p) = 1/[1.25 - 0.82 ln(p)] is measured for 0.05 ≤ p ≤ 0.6. Scaling arguments and indirect measurements suggest that β(p) tends towards a constant for p < 0.03. Our stochastic annealing algorithm is based on limited sampling of the pruned tour lengths, exploiting the sampling error to provide the analog of thermal fluctuations in simulated (thermal) annealing. The method has general application to the optimization of functions whose cost to evaluate rises with the precision required.
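
    As a minimal illustration of the quantity being optimized, the sketch below estimates the expected pruned-tour length for a fixed a priori tour by Monte Carlo: each city is kept with probability p and the tour through the surviving cities is measured in the a priori order (a random tour stands in for the annealed one).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 200, 0.3
    cities = rng.random((n, 2))              # n cities uniform on the unit square
    tour = rng.permutation(n)                # an arbitrary a priori tour (not optimized)

    def tour_length(points):
        """Closed-tour length through `points` in the given order."""
        diffs = points - np.roll(points, 1, axis=0)
        return float(np.sum(np.hypot(diffs[:, 0], diffs[:, 1])))

    def expected_pruned_length(cities, tour, p, trials=2000):
        """Monte Carlo estimate of E[L_pruned]: each city must be visited with prob. p."""
        total = 0.0
        for _ in range(trials):
            keep = tour[rng.random(n) < p]   # cities that turn out to need a visit
            if len(keep) > 1:
                total += tour_length(cities[keep])
        return total / trials

    print("a priori tour length :", tour_length(cities[tour]))
    print("E[L_pruned] estimate :", expected_pruned_length(cities, tour, p))
    ```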

  18. The constitutive a priori and the distinction between mathematical and physical possibility

    NASA Astrophysics Data System (ADS)

    Everett, Jonathan

    2015-11-01

    This paper is concerned with Friedman's recent revival of the notion of the relativized a priori. It is particularly concerned with addressing the question as to how Friedman's understanding of the constitutive function of the a priori has changed since his defence of the idea in his Dynamics of Reason. Friedman's understanding of the a priori remains influenced by Reichenbach's initial defence of the idea; I argue that this notion of the a priori does not naturally lend itself to describing the historical development of space-time physics. Friedman's analysis of the role of the rotating frame thought experiment in the development of general relativity - which he suggests made the mathematical possibility of four-dimensional space-time a genuine physical possibility - has a central role in his argument. I analyse this thought experiment and argue that it is better understood by following Cassirer and placing emphasis on regulative principles. Furthermore, I argue that Cassirer's Kantian framework enables us to capture Friedman's key insights into the nature of the constitutive a priori.

  19. Ability of the current global observing network to constrain N2O sources and sinks

    NASA Astrophysics Data System (ADS)

    Millet, D. B.; Wells, K. C.; Chaliyakunnel, S.; Griffis, T. J.; Henze, D. K.; Bousserez, N.

    2014-12-01

    The global observing network for atmospheric N2O combines flask and in-situ measurements at ground stations with sustained and campaign-based aircraft observations. In this talk we apply a new global model of N2O (based on GEOS-Chem) and its adjoint to assess the strengths and weaknesses of this network for quantifying N2O emissions. We employ an ensemble of pseudo-observation analyses to evaluate the relative constraints provided by ground-based (surface, tall tower) and airborne (HIPPO, CARIBIC) observations, and the extent to which variability (e.g. associated with pulsing or seasonality of emissions) not captured by the a priori inventory can bias the inferred fluxes. We find that the ground-based and HIPPO datasets each provide a stronger constraint on the distribution of global emissions than does the CARIBIC dataset on its own. Given appropriate initial conditions, we find that our inferred surface fluxes are insensitive to model errors in the stratospheric loss rate of N2O over the timescale of our analysis (2 years); however, the same is not necessarily true for model errors in stratosphere-troposphere exchange. Finally, we examine the a posteriori error reduction distribution to identify priority locations for future N2O measurements.

  20. Functional MRI Preprocessing in Lesioned Brains: Manual Versus Automated Region of Interest Analysis

    PubMed Central

    Garrison, Kathleen A.; Rogalsky, Corianne; Sheng, Tong; Liu, Brent; Damasio, Hanna; Winstein, Carolee J.; Aziz-Zadeh, Lisa S.

    2015-01-01

    Functional magnetic resonance imaging (fMRI) has significant potential in the study and treatment of neurological disorders and stroke. Region of interest (ROI) analysis in such studies allows for testing of strong a priori clinical hypotheses with improved statistical power. A commonly used automated approach to ROI analysis is to spatially normalize each participant’s structural brain image to a template brain image and define ROIs using an atlas. However, in studies of individuals with structural brain lesions, such as stroke, the gold standard approach may be to manually hand-draw ROIs on each participant’s non-normalized structural brain image. Automated approaches to ROI analysis are faster and more standardized, yet are susceptible to preprocessing error (e.g., normalization error) that can be greater in lesioned brains. The manual approach to ROI analysis has high demand for time and expertise, but may provide a more accurate estimate of brain response. In this study, commonly used automated and manual approaches to ROI analysis were directly compared by reanalyzing data from a previously published hypothesis-driven cognitive fMRI study, involving individuals with stroke. The ROI evaluated is the pars opercularis of the inferior frontal gyrus. Significant differences were identified in task-related effect size and percent-activated voxels in this ROI between the automated and manual approaches to ROI analysis. Task interactions, however, were consistent across ROI analysis approaches. These findings support the use of automated approaches to ROI analysis in studies of lesioned brains, provided they employ a task interaction design. PMID:26441816

  1. Bandwidth efficient channel estimation method for airborne hyperspectral data transmission in sparse doubly selective communication channels

    NASA Astrophysics Data System (ADS)

    Vahidi, Vahid; Saberinia, Ebrahim; Regentova, Emma E.

    2017-10-01

    A channel estimation (CE) method based on compressed sensing (CS) is proposed to estimate the sparse and doubly selective (DS) channel for hyperspectral image transmission from unmanned aerial vehicles to ground stations. The proposed method contains three steps: (1) an a priori estimate of the channel by orthogonal matching pursuit (OMP), (2) calculation of the linear minimum mean square error (LMMSE) estimate of the received pilots given the estimated channel, and (3) estimation of the complex amplitudes and Doppler shifts of the channel by applying a second round of a CS algorithm to the enhanced received pilot data. The proposed method is named DS-LMMSE-OMP, and its performance is evaluated by simulating transmission of AVIRIS hyperspectral data via the communication channel and assessing their fidelity for automated analysis after demodulation. The performance of the DS-LMMSE-OMP approach is compared with that of two other state-of-the-art CE methods. The simulation results show up to an 8-dB improvement in bit error rate performance and a 50% improvement in hyperspectral image classification accuracy.
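
    Step (1), the a priori sparse channel estimate by orthogonal matching pursuit, can be sketched generically as below; the random dictionary, dimensions, and noise level are placeholders rather than the paper's pilot design.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def omp(A, y, sparsity):
        """Generic orthogonal matching pursuit: greedily pick the dictionary column that
        best correlates with the residual, re-fitting by least squares each time."""
        residual, support = y.copy(), []
        x = np.zeros(A.shape[1], dtype=A.dtype)
        for _ in range(sparsity):
            idx = int(np.argmax(np.abs(A.conj().T @ residual)))
            support.append(idx)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x[support] = coef
        return x

    m, n, k = 64, 256, 5                        # pilots, dictionary size, channel taps
    A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)
    h = np.zeros(n, dtype=complex)
    h[rng.choice(n, k, replace=False)] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
    y = A @ h + 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

    h_hat = omp(A, y, k)
    print("relative error:", np.linalg.norm(h_hat - h) / np.linalg.norm(h))
    ```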

  2. Ideas for a pattern-oriented approach towards a VERA analysis ensemble

    NASA Astrophysics Data System (ADS)

    Gorgas, T.; Dorninger, M.

    2010-09-01

    For many applications in meteorology, and especially for verification purposes, it is important to have some information about the uncertainties of observation and analysis data. A high quality of these "reference data" is an absolute necessity as the uncertainties are reflected in verification measures. The VERA (Vienna Enhanced Resolution Analysis) scheme includes a sophisticated quality control tool which accounts for the correction of observational data and provides an estimation of the observation uncertainty. It is crucial for meteorologically and physically reliable analysis fields. VERA is based on a variational principle and does not need any first guess fields. It is therefore NWP-model independent and can also be used as an unbiased reference for real-time model verification. For downscaling purposes VERA uses a priori knowledge of small-scale physical processes over complex terrain, the so-called "fingerprint technique", which transfers information from data-rich to data-sparse regions. The enhanced Joint D-PHASE and COPS data set forms the database for the analysis ensemble study. For the WWRP projects D-PHASE and COPS a joint activity has been started to collect GTS and non-GTS data from the national and regional meteorological services in Central Europe for 2007. Data from more than 11,000 stations are available for high-resolution analyses. The usage of random numbers as perturbations for ensemble experiments is a common approach in meteorology. In most implementations, such as NWP-model ensemble systems, the focus lies on error growth and propagation on the spatial and temporal scale. When defining errors in analysis fields we have to consider the fact that analyses are not time dependent and that no perturbation method aimed at temporal evolution is possible. Further, the method applied should respect two major sources of analysis errors: observation errors and analysis or interpolation errors. With the concept of an analysis ensemble we hope to get a more detailed view of both sources of analysis errors. For the computation of the VERA ensemble members a sample of Gaussian random perturbations is produced for each station and parameter. The standard deviation of the perturbations is based on the correction proposals by the VERA QC scheme, which provides some "natural" limits for the ensemble. In order to put more emphasis on the weather situation we aim to integrate the main synoptic field structures as weighting factors for the perturbations. Two well-established approaches are used for the definition of these main field structures: principal component analysis and a 2D discrete wavelet transform. The results of tests concerning the implementation of this pattern-supported analysis ensemble system and a comparison of the different approaches are given in the presentation.
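
    A heavily simplified sketch of the perturbation idea is given below: Gaussian noise per station, scaled by QC-based deviations and weighted by a leading pattern from a principal component analysis. All array shapes, scalings, and the synthetic archive are assumptions, not the VERA implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_stations, n_members = 500, 20

    obs = rng.normal(1000.0, 8.0, size=n_stations)             # e.g. surface pressure [hPa]
    qc_sigma = np.abs(rng.normal(0.0, 0.5, size=n_stations))   # QC correction proposals

    # Leading spatial pattern from a PCA of a (stations x cases) anomaly archive.
    archive = rng.normal(0.0, 1.0, size=(n_stations, 100))
    anom = archive - archive.mean(axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    pattern = np.abs(u[:, 0]) / np.abs(u[:, 0]).max()          # weight in [0, 1]

    # Ensemble members: observation + pattern-weighted, QC-scaled Gaussian perturbations.
    members = np.array([
        obs + pattern * qc_sigma * rng.standard_normal(n_stations)
        for _ in range(n_members)
    ])
    print(members.shape)        # (20, 500): one perturbed analysis input per member
    ```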

  3. Anomalous tidal loading signals in South-West England and Brittany

    NASA Astrophysics Data System (ADS)

    Keshin, M.; Penna, N. T.; Clarke, P. J.; Bos, M. S.; Baker, T. F.

    2010-05-01

    The tidal deformation of the Earth, including ocean tide loading (OTL), sheds light on the Earth's internal structure. Uncertainties in the knowledge of this deformation may be a source of both direct and propagated periodic errors in GPS geodesy. The increasing number of global GPS stations with long histories of observations, as well as recent developments in precise GPS geodesy such as the availability of reprocessed satellite orbits, enables further study of these geophysical and geodetic phenomena. There are more than 10 worldwide regions where OTL displacement amplitudes exceed 25mm. In our work we considered one such region covering South-West England and stretching southward along the coasts of France, Spain and Portugal. Estimates of three-dimensional harmonic site motion at each of the principal diurnal (K1, O1, P1, Q1) and semi-diurnal (K2, M2, N2, S2) frequencies were obtained for 40 European stations with at least 2 year observation span, using the GIPSY-OASIS II software package with reprocessed precise satellite orbits from JPL. All GPS data available from 2002.0 to 2010.0 were considered. 34 stations were situated close to the Atlantic coast; a further 6 inland stations at similar latitudes were processed as a check on solid Earth tide models. Inter-model OTL displacement differences are small, especially for the inland sites; the problematic Bristol Channel area of South-West England was excluded. We validated the quality of our GPS estimates by using and comparing three different analysis strategies: (1) Harmonic estimation of total tidal displacement in 24-hour Precise Point Positioning (PPP) batch solutions: harmonic displacements are estimated per coordinate component for each of the eight principal tidal constituents. OTL is not modelled a priori, and nodal corrections are applied in post-processing after combination of the daily results; (2) Harmonic estimation of residual tidal displacement in 24-hour PPP batch solutions: OTL is modelled a priori using the FES2004 model in the reference frame of the whole Earth system (CM); the residual harmonic displacements are estimated per component per principal tidal constituent. Minor tidal harmonics are removed a priori using the routine "hardisp" by D. Agnew. Because of this, post-processing nodal corrections are not applied; (3) Amplitude and phase from kinematic PPP processing: kinematic GPS processing with a priori OTL modelling using FES2004 and hardisp as in (2); amplitude spectra are later estimated from the entire coordinate time series using the Lomb-Scargle periodogram method. We typically obtain excellent (0.3-0.7mm except for the K1 and K2 constituents) phasor agreement between all three strategies, comparable to the inter-model agreement between computed OTL displacements and suggesting that the GPS analysis strategy robustly detects actual tidal displacements. For sites in inland Europe where computed OTL displacements are less than 10mm with inter-model differences of less than 0.2mm, residual harmonic amplitudes are also at the 0.3-0.7mm level, confirming that solid Earth tides are modelled to at least this accuracy. For GPS stations located in South-West England and Brittany, onshore of the continental shelf, anomalous residual tidal signals were detected of about 2-3mm magnitude for the vertical M2 OTL constituent (10% of the expected signal). In contrast, sites in the Iberian Peninsula, with similar expected OTL magnitudes, have residuals at the expected 0.3-0.7mm level. 
Sites near to the Bay of Biscay show transitional behaviour between these regimes. Therefore at these locations, the different modern ocean tide models that agree very well must all either be systematically in error, or the difference in behaviour may be caused by errors in the displacement Green's functions applicable to loads on the nearby continental shelf.
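
    Strategy (1), harmonic estimation of tidal displacement per coordinate component, amounts to a least-squares fit of cosine and sine terms at the constituent frequencies; the sketch below fits only M2 and O1 to a synthetic height series, so the constituent periods are standard values but everything else is illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    hours = np.arange(0.0, 24.0 * 30, 1.0)                 # 30 days of hourly positions
    periods = {"M2": 12.4206012, "O1": 25.8193417}         # constituent periods [h]

    # Synthetic vertical displacement: known amplitudes/phases plus noise.
    truth = {"M2": (12.0, 0.7), "O1": (4.0, 2.1)}          # amplitude [mm], phase [rad]
    z = sum(a * np.cos(2 * np.pi / periods[k] * hours - ph) for k, (a, ph) in truth.items())
    z += rng.normal(0.0, 1.0, size=hours.size)

    # Design matrix with a cos/sin pair per constituent; least squares gives the harmonics.
    cols = []
    for period in periods.values():
        w = 2 * np.pi / period
        cols += [np.cos(w * hours), np.sin(w * hours)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)

    for i, name in enumerate(periods):
        c, s = coef[2 * i], coef[2 * i + 1]
        print(name, "amplitude %.2f mm, phase %.2f rad" % (np.hypot(c, s), np.arctan2(s, c)))
    ```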

  4. The inverse problem of refraction travel times, part II: Quantifying refraction nonuniqueness using a three-layer model

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.

    2005-01-01

    This paper is the second of a set of two papers in which we study the inverse refraction problem. The first paper, "Types of Geophysical Nonuniqueness through Minimization," studies and classifies the types of nonuniqueness that exist when solving inverse problems depending on the participation of a priori information required to obtain reliable solutions of inverse geophysical problems. In view of the classification developed, in this paper we study the type of nonuniqueness associated with the inverse refraction problem. An approach for obtaining a realistic solution to the inverse refraction problem is offered in a third paper that is in preparation. The nonuniqueness of the inverse refraction problem is examined by using a simple three-layer model. Like many other inverse geophysical problems, the inverse refraction problem does not have a unique solution. Conventionally, nonuniqueness is considered to be a result of insufficient data and/or error in the data, for any fixed number of model parameters. This study illustrates that even for overdetermined and error free data, nonlinear inverse refraction problems exhibit exact-data nonuniqueness, which further complicates the problem of nonuniqueness. By evaluating the nonuniqueness of the inverse refraction problem, this paper targets the improvement of refraction inversion algorithms, and as a result, the achievement of more realistic solutions. The nonuniqueness of the inverse refraction problem is examined initially by using a simple three-layer model. The observations and conclusions of the three-layer model nonuniqueness study are used to evaluate the nonuniqueness of more complicated n-layer models and multi-parameter cell models such as in refraction tomography. For any fixed number of model parameters, the inverse refraction problem exhibits continuous ranges of exact-data nonuniqueness. Such an unfavorable type of nonuniqueness can be uniquely solved only by providing abundant a priori information. Insufficient a priori information during the inversion is the reason why refraction methods often may not produce desired results or even fail. This work also demonstrates that the application of the smoothing constraints, typical when solving ill-posed inverse problems, has a dual and contradictory role when applied to the ill-posed inverse problem of refraction travel times. This observation indicates that smoothing constraints may play such a two-fold role when applied to other inverse problems. Other factors that contribute to inverse-refraction-problem nonuniqueness are also considered, including indeterminacy, statistical data-error distribution, numerical error and instability, finite data, and model parameters. © Birkhäuser Verlag, Basel, 2005.

  5. Nonlinear adaptive control system design with asymptotically stable parameter estimation error

    NASA Astrophysics Data System (ADS)

    Mishkov, Rumen; Darmonski, Stanislav

    2018-01-01

    The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown parameter estimation without persistent excitation and capability to directly control the estimates transient response time. The method proposed modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concepts. The data accumulation principle is the main tool for achieving asymptotic unknown parameter estimation. It relies on the parametric identifiability system property introduced. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied in a nonlinear adaptive speed tracking vector control of a three-phase induction motor.
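
    The a priori constraints enter through a projection of the parameter estimates onto their known bounds at each update; the minimal box-projection sketch below uses a plain gradient-type step and invented bounds as placeholders for the paper's full adaptive law.

    ```python
    import numpy as np

    theta_min = np.array([0.5, -2.0, 0.0])     # a priori lower bounds on the parameters
    theta_max = np.array([5.0,  2.0, 1.0])     # a priori upper bounds
    theta_hat = np.array([1.0,  0.0, 0.5])     # current estimate

    def project(theta, lo, hi):
        """Project a parameter estimate onto the a priori box constraints."""
        return np.minimum(np.maximum(theta, lo), hi)

    def update(theta_hat, regressor, prediction_error, gain=0.1):
        """One gradient-type adaptation step followed by projection onto the bounds."""
        theta_new = theta_hat + gain * prediction_error * regressor
        return project(theta_new, theta_min, theta_max)

    theta_hat = update(theta_hat, regressor=np.array([2.0, -1.0, 4.0]), prediction_error=1.3)
    print(theta_hat)    # components that would leave the box are clipped to its faces
    ```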

  6. Altered neural encoding of prediction errors in assault-related posttraumatic stress disorder.

    PubMed

    Ross, Marisa C; Lenow, Jennifer K; Kilts, Clinton D; Cisler, Josh M

    2018-05-12

    Posttraumatic stress disorder (PTSD) is widely associated with deficits in extinguishing learned fear responses, which relies on mechanisms of reinforcement learning (e.g., updating expectations based on prediction errors). However, the degree to which PTSD is associated with impairments in general reinforcement learning (i.e., outside of the context of fear stimuli) remains poorly understood. Here, we investigate brain and behavioral differences in general reinforcement learning between adult women with and without a current diagnosis of PTSD. 29 adult females (15 PTSD with exposure to assaultive violence, 14 controls) underwent a neutral reinforcement-learning task (i.e., two-armed bandit task) during fMRI. We modeled participant behavior using different adaptations of the Rescorla-Wagner (RW) model and used Independent Component Analysis to identify timecourses for large-scale a priori brain networks. We found that an anticorrelated and risk-sensitive RW model best fit participant behavior, with no differences in computational parameters between groups. Women in the PTSD group demonstrated significantly less neural encoding of prediction errors in both a ventral striatum/mPFC and anterior insula network compared to healthy controls. Weakened encoding of prediction errors in the ventral striatum/mPFC and anterior insula during a general reinforcement learning task, outside of the context of fear stimuli, suggests the possibility of a broader conceptualization of learning differences in PTSD than is currently proposed in neurocircuitry models of PTSD. Copyright © 2018 Elsevier Ltd. All rights reserved.
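
    The Rescorla-Wagner family referred to above reduces to updating an expected value with a scaled prediction error; a minimal two-armed-bandit sketch follows, in which the learning rate and softmax temperature are arbitrary choices rather than the fitted parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    true_reward_prob = [0.7, 0.3]            # two-armed bandit, as in the task
    V = np.zeros(2)                          # learned values per arm
    alpha, beta = 0.2, 3.0                   # learning rate, softmax inverse temperature

    for trial in range(200):
        p_choose = np.exp(beta * V) / np.exp(beta * V).sum()   # softmax choice rule
        arm = rng.choice(2, p=p_choose)
        reward = float(rng.random() < true_reward_prob[arm])
        prediction_error = reward - V[arm]                     # the signal encoded neurally
        V[arm] += alpha * prediction_error                     # Rescorla-Wagner update

    print("learned values:", V)              # should approach the true reward probabilities
    ```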

  7. Adaptability and phenotypic stability of common bean genotypes through Bayesian inference.

    PubMed

    Corrêa, A M; Teodoro, P E; Gonçalves, M C; Barroso, L M A; Nascimento, M; Santos, A; Torres, F E

    2016-04-27

    This study used Bayesian inference to investigate the genotype x environment interaction in common bean grown in Mato Grosso do Sul State, and it also evaluated the efficiency of using informative and minimally informative a priori distributions. Six trials were conducted in randomized blocks, and the grain yield of 13 common bean genotypes was assessed. To represent the minimally informative a priori distributions, a probability distribution with high variance was used, and a meta-analysis concept was adopted to represent the informative a priori distributions. Bayes factors were used to conduct comparisons between the a priori distributions. The Bayesian inference was effective for the selection of upright common bean genotypes with high adaptability and phenotypic stability using the Eberhart and Russell method. Bayes factors indicated that the use of informative a priori distributions provided more accurate results than minimally informative a priori distributions. According to Bayesian inference, the EMGOPA-201, BAMBUÍ, CNF 4999, CNF 4129 A 54, and CNFv 8025 genotypes had specific adaptability to favorable environments, while the IAPAR 14 and IAC CARIOCA ETE genotypes had specific adaptability to unfavorable environments.

  8. Automated Analysis of dUT1 from IVS Intensive Sessions with VieVS

    NASA Astrophysics Data System (ADS)

    Uunila, M.; Haas, R.; Kareinen, N.; Lindfors, T.

    2012-12-01

    The Vienna VLBI Software (VieVS) version 1d is used in its batch mode to analyze IVS Intensive sessions automatically to derive the Earth rotation parameter dUT1. The automation process uses a shell script that is run daily by a cron process. The goal is to achieve dUT1 results as soon as the NGS file is fetched from the VieVS server. Three types of analysis strategies, called S-1, S-2 and S-3, are used in the process in order to compare different parameterizations and to improve the latency of deriving dUT1. The S-1 analysis strategy uses as a priori Earth orientation parameters the values provided by the EOP-file "finals2000A", uses as mapping function the Global Mapping Function (GMF), and does not apply atmospheric loading. The S-2 analysis strategy differs from the first analysis strategy by using the Vienna Mapping function (VM1) instead of the GMF and by applying atmospheric loading. The S-3 analysis strategy differs from the second approach by using the IERS C04 values as a priori Earth orientation parameters. All other parameters are treated identically for the three analysis strategies. The latency of the results for the first analysis strategy is 2-3 days from the end of a session and is dominated by the time that is necessary to correlate the observational data and to pre-process the data, i.e. to provide an NGS file where group delay ambiguities are resolved and the ionospheric effects are corrected. The latency of the results for the second strategy is slightly worse, about 3-4 days, mainly due to the time that it takes until VMF1 and atmospheric loading based on ECMWF analysis data are available. The latency of the results for the third strategy is even worse, about 30 days, and is dominated by the time that it takes until the IERS C04 data are available. The RMS values of the formal errors of the three strategies in the case of INT1 sessions are 21, 22, and 17 microseconds for strategies 1, 2 and 3, respectively. The formal error of S-3 is the best, but the latency is the worst. To enhance the latency of the S-1, we currently are working on including the necessary pre-processing steps, i.e. group delay ambiguity resolution and ionospheric correction, directly into VieVS. The results of the automated analysis are provided both as data files and in graphical form on the Metsähovi Web pages http://www.metsahovi.fi/vlbi/vievs/results GMF, .../results VM1, and .../results C04, respectively.

  9. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiktenko, Evgeniy O.; Trushechkin, Anton S.; Lim, Charles Ci Wen

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component in QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospects for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. Finally, the proposed technique is based on introducing symmetry in operations of parties, and the consideration of results of unsuccessful belief-propagation decodings.

  10. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    DOE PAGES

    Kiktenko, Evgeniy O.; Trushechkin, Anton S.; Lim, Charles Ci Wen; ...

    2017-10-27

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component in QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospects for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. Finally, the proposed technique is based on introducing symmetry in operations of parties, and the consideration of results of unsuccessful belief-propagation decodings.

  11. High-precision radiometric tracking for planetary approach and encounter in the inner solar system

    NASA Technical Reports Server (NTRS)

    Christensen, C. S.; Thurman, S. W.; Davidson, J. M.; Finger, M. H.; Folkner, W. M.

    1989-01-01

    The benefits of improved radiometric tracking data have been studied for planetary approach within the inner Solar System using the Mars Rover Sample Return trajectory as a model. It was found that the benefit of improved data to approach and encounter navigation was highly dependent on the a priori uncertainties assumed for several non-estimated parameters, including those for frame-tie, Earth orientation, troposphere delay, and station locations. With these errors at their current levels, navigational performance was found to be insensitive to enhancements in data accuracy. However, when expected improvements in these errors are modeled, performance with current-accuracy data significantly improves, with substantial further improvements possible with enhancements in data accuracy.

  12. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Kiktenko, E. O.; Trushechkin, A. S.; Lim, C. C. W.; Kurochkin, Y. V.; Fedorov, A. K.

    2017-10-01

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component in QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospects for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. The proposed technique is based on introducing symmetry in operations of parties, and the consideration of results of unsuccessful belief-propagation decodings.

  13. Resting State Network Estimation in Individual Subjects

    PubMed Central

    Hacker, Carl D.; Laumann, Timothy O.; Szrama, Nicholas P.; Baldassarre, Antonello; Snyder, Abraham Z.

    2014-01-01

    Resting-state functional magnetic resonance imaging (fMRI) has been used to study brain networks associated with both normal and pathological cognitive function. The objective of this work is to reliably compute resting state network (RSN) topography in single participants. We trained a supervised classifier (multi-layer perceptron; MLP) to associate blood oxygen level dependent (BOLD) correlation maps corresponding to pre-defined seeds with specific RSN identities. Hard classification of maps obtained from a priori seeds was highly reliable across new participants. Interestingly, continuous estimates of RSN membership retained substantial residual error. This result is consistent with the view that RSNs are hierarchically organized, and therefore not fully separable into spatially independent components. After training on a priori seed-based maps, we propagated voxel-wise correlation maps through the MLP to produce estimates of RSN membership throughout the brain. The MLP generated RSN topography estimates in individuals consistent with previous studies, even in brain regions not represented in the training data. This method could be used in future studies to relate RSN topography to other measures of functional brain organization (e.g., task-evoked responses, stimulation mapping, and deficits associated with lesions) in individuals. The multi-layer perceptron was directly compared to two alternative voxel classification procedures, specifically, dual regression and linear discriminant analysis; the perceptron generated more spatially specific RSN maps than either alternative. PMID:23735260

  14. Comparison of tropospheric NO2 from in situ aircraft measurements with near-real-time and standard product data from OMI

    NASA Astrophysics Data System (ADS)

    Bucsela, E. J.; Perring, A. E.; Cohen, R. C.; Boersma, K. F.; Celarier, E. A.; Gleason, J. F.; Wenig, M. O.; Bertram, T. H.; Wooldridge, P. J.; Dirksen, R.; Veefkind, J. P.

    2008-08-01

    We present an analysis of in situ NO2 measurements from aircraft experiments between summer 2004 and spring 2006. The data are from the INTEX-A, PAVE, and INTEX-B campaigns and constitute the most comprehensive set of tropospheric NO2 profiles to date. Profile shapes from INTEX-A and PAVE are found to be qualitatively similar to annual mean profiles from the GEOS-Chem model. Using profiles from the INTEX-B campaign, we perform error-weighted linear regressions to compare the Ozone Monitoring Instrument (OMI) tropospheric NO2 columns from the near-real-time product (NRT) and standard product (SP) with the integrated in situ columns. Results indicate that the OMI SP algorithm yields NO2 amounts lower than the in situ columns by a factor of 0.86 (±0.2) and that NO2 amounts from the NRT algorithm are higher than the in situ data by a factor of 1.68 (±0.6). The correlation between the satellite and in situ data is good (r = 0.83) for both algorithms. Using averaging kernels, the influence of the algorithm's a priori profiles on the satellite retrieval is explored. Results imply that air mass factors from the a priori profiles are on average slightly larger (˜10%) than those from the measured profiles, but the differences are not significant.
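
    The error-weighted linear regression of satellite against in situ columns can be sketched as a weighted least-squares fit with weights 1/σ²; the numbers below are synthetic stand-ins, not the INTEX-B data.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 40
    insitu = rng.uniform(0.5e15, 5e15, n)            # integrated in situ columns [molec/cm2]
    sigma = 0.3e15 + 0.1 * insitu                    # per-point uncertainties
    omi = 0.86 * insitu + rng.normal(0.0, sigma)     # synthetic "satellite" columns

    # Error-weighted linear regression (weights = 1/sigma^2), slope and intercept.
    w = 1.0 / sigma**2
    A = np.column_stack([insitu, np.ones(n)])
    Aw = A * w[:, None]
    slope, intercept = np.linalg.solve(A.T @ Aw, Aw.T @ omi)
    print("weighted slope %.2f, intercept %.2e" % (slope, intercept))
    ```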

  15. An algorithmic approach to crustal deformation analysis

    NASA Technical Reports Server (NTRS)

    Iz, Huseyin Baki

    1987-01-01

    In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.

  16. A Priori Calculations of Thermodynamic Functions

    DTIC Science & Technology

    1991-12-01

    Chapter Ten closes this work with a brief summary and offers suggestions for improving the model and for future research. ... Chapter Two: In this chapter, we ... must first define the theoretical model. The molecules studied in this work contain up to 10 non-hydrogen atoms and, in general, are not ... is given by equation (2-31) for two different geometries or two different theoretical models. Equation (2-31) shows the error in the force constant has ...

  17. The inverse problem of refraction travel times, part I: Types of Geophysical Nonuniqueness through Minimization

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.; Park, C.B.

    2005-01-01

    In a set of two papers we study the inverse problem of refraction travel times. The purpose of this work is to use the study as a basis for development of more sophisticated methods for finding more reliable solutions to the inverse problem of refraction travel times, which is known to be nonunique. The first paper, "Types of Geophysical Nonuniqueness through Minimization," emphasizes the existence of different forms of nonuniqueness in the realm of inverse geophysical problems. Each type of nonuniqueness requires a different type and amount of a priori information to acquire a reliable solution. Based on such coupling, a nonuniqueness classification is designed. Therefore, since most inverse geophysical problems are nonunique, each inverse problem must be studied to define what type of nonuniqueness it belongs to and thus determine what type of a priori information is necessary to find a realistic solution. The second paper, "Quantifying Refraction Nonuniqueness Using a Three-layer Model," serves as an example of such an approach. However, its main purpose is to provide a better understanding of the inverse refraction problem by studying the type of nonuniqueness it possesses. An approach for obtaining a realistic solution to the inverse refraction problem is planned to be offered in a third paper that is in preparation. The main goal of this paper is to redefine the existing generalized notion of nonuniqueness and a priori information by offering a classified, discriminate structure. Nonuniqueness is often encountered when trying to solve inverse problems. However, possible nonuniqueness diversity is typically neglected and nonuniqueness is regarded as a whole, as an unpleasant "black box" and is approached in the same manner by applying smoothing constraints, damping constraints with respect to the solution increment and, rarely, damping constraints with respect to some sparse reference information about the true parameters. In practice, when solving geophysical problems different types of nonuniqueness exist, and thus there are different ways to solve the problems. Nonuniqueness is usually regarded as due to data error, assuming the true geology is acceptably approximated by simple mathematical models. Compounding the nonlinear problems, geophysical applications routinely exhibit exact-data nonuniqueness even for models with very few parameters adding to the nonuniqueness due to data error. While nonuniqueness variations have been defined earlier, they have not been linked to specific use of a priori information necessary to resolve each case. Four types of nonuniqueness, typical for minimization problems, are defined with the corresponding methods for inclusion of a priori information to find a realistic solution without resorting to a non-discriminative approach. The above-developed stand-alone classification is expected to be helpful when solving any geophysical inverse problems. © Birkhäuser Verlag, Basel, 2005.

  18. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule

    NASA Astrophysics Data System (ADS)

    Jin, Qinian; Wang, Wei

    2018-03-01

    The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.

  19. A robust pseudo-inverse spectral filter applied to the Earth Radiation Budget Experiment (ERBE) scanning channels

    NASA Technical Reports Server (NTRS)

    Avis, L. M.; Green, R. N.; Suttles, J. T.; Gupta, S. K.

    1984-01-01

    Computer simulations of a least squares estimator operating on the ERBE scanning channels are discussed. The estimator is designed to minimize the errors produced by nonideal spectral response to spectrally varying and uncertain radiant input. The three ERBE scanning channels cover a shortwave band, a longwave band, and a "total" band, from which the pseudo-inverse spectral filter estimates the radiance components in the shortwave and longwave bands. The radiance estimator draws on instantaneous field of view (IFOV) scene type information supplied by another algorithm of the ERBE software, and on a priori probabilistic models of the responses of the scanning channels to the IFOV scene types for given Sun-scene-spacecraft geometry. It is found that the pseudo-inverse spectral filter is stable, tolerant of errors in scene identification and in channel response modeling, and, in the absence of such errors, yields minimum-variance and essentially unbiased radiance estimates.
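
    Conceptually, the filter maps the three measured channel signals onto the two spectral radiance components through a (pseudo-)inverse of the channel response matrix; in the toy sketch below the response matrix and radiances are invented numbers, not the ERBE characterization.

    ```python
    import numpy as np

    # Rows: shortwave, longwave, total channels; columns: response to SW and LW radiance.
    R = np.array([[0.95, 0.05],
                  [0.03, 0.90],
                  [0.98, 0.97]])
    true_radiance = np.array([180.0, 240.0])         # SW, LW components [W m-2 sr-1]
    measured = R @ true_radiance + np.random.default_rng(7).normal(0.0, 1.0, 3)

    # Pseudo-inverse filter: least-squares estimate of the two components from 3 channels.
    estimate = np.linalg.pinv(R) @ measured
    print("estimated SW, LW:", estimate)
    ```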

  20. A Novel Extreme Learning Control Framework of Unmanned Surface Vehicles.

    PubMed

    Wang, Ning; Sun, Jing-Chao; Er, Meng Joo; Liu, Yan-Cheng

    2016-05-01

    In this paper, an extreme learning control (ELC) framework using the single-hidden-layer feedforward network (SLFN) with random hidden nodes for tracking an unmanned surface vehicle suffering from unknown dynamics and external disturbances is proposed. By combining tracking errors with derivatives, an error surface and transformed states are defined to encapsulate unknown dynamics and disturbances into a lumped vector field of transformed states. The lumped nonlinearity is further identified accurately by an extreme-learning-machine-based SLFN approximator which does not require a priori system knowledge nor tuning input weights. Only output weights of the SLFN need to be updated by adaptive projection-based laws derived from the Lyapunov approach. Moreover, an error compensator is incorporated to suppress approximation residuals, and thereby contributing to the robustness and global asymptotic stability of the closed-loop ELC system. Simulation studies and comprehensive comparisons demonstrate that the ELC framework achieves high accuracy in both tracking and approximation.
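
    The extreme-learning-machine idea behind the SLFN approximator is that the hidden-layer weights are drawn at random and only the output weights are fitted; the generic regression sketch below uses batch regularized least squares in place of the paper's online adaptive projection laws, and the target function and sizes are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    x = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
    y = np.sin(2.0 * x).ravel() + 0.05 * rng.standard_normal(200)   # unknown nonlinearity

    n_hidden = 50
    W = rng.standard_normal((1, n_hidden))          # random input weights (never tuned)
    b = rng.standard_normal(n_hidden)               # random biases
    H = np.tanh(x @ W + b)                          # hidden-layer feature matrix

    # Only the output weights are estimated, here by regularized least squares.
    lam = 1e-3
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)
    print("training RMSE:", np.sqrt(np.mean((H @ beta - y) ** 2)))
    ```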

  1. Tissue resistivity estimation in the presence of positional and geometrical uncertainties.

    PubMed

    Baysal, U; Eyüboğlu, B M

    2000-08-01

    Geometrical uncertainties (organ boundary variation and electrode position uncertainties) are the biggest sources of error in estimating electrical resistivity of tissues from body surface measurements. In this study, in order to decrease estimation errors, the statistically constrained minimum mean squared error estimation algorithm (MiMSEE) is constrained with a priori knowledge of the geometrical uncertainties in addition to the constraints based on geometry, resistivity range, linearization and instrumentation errors. The MiMSEE calculates an optimum inverse matrix, which maps the surface measurements to the unknown resistivity distribution. The required data are obtained from four-electrode impedance measurements, similar to injected-current electrical impedance tomography (EIT). In this study, the surface measurements are simulated by using a numerical thorax model. The data are perturbed with additive instrumentation noise. Simulated surface measurements are then used to estimate the tissue resistivities by using the proposed algorithm. The results are compared with the results of conventional least squares error estimator (LSEE). Depending on the region, the MiMSEE yields an estimation error between 0.42% and 31.3% compared with 7.12% to 2010% for the LSEE. It is shown that the MiMSEE is quite robust even in the case of geometrical uncertainties.
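
    The statistically constrained estimator is, in essence, a linear minimum-mean-squared-error inverse that folds a priori resistivity statistics and noise statistics into a single optimal matrix; the sketch below uses a random sensitivity matrix and made-up covariances purely for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n_meas, n_res = 32, 8                      # surface measurements, tissue resistivities

    S = rng.standard_normal((n_meas, n_res))   # linearized sensitivity matrix (placeholder)
    x_mean = np.full(n_res, 5.0)               # a priori mean resistivities [ohm m]
    Cx = np.diag(np.full(n_res, 4.0))          # a priori resistivity covariance
    Cn = np.diag(np.full(n_meas, 0.1))         # instrumentation-noise covariance

    # Optimum (MMSE) inverse matrix mapping measurements to resistivity estimates.
    B = Cx @ S.T @ np.linalg.inv(S @ Cx @ S.T + Cn)

    x_true = x_mean + rng.multivariate_normal(np.zeros(n_res), Cx)
    y = S @ x_true + rng.multivariate_normal(np.zeros(n_meas), Cn)
    x_hat = x_mean + B @ (y - S @ x_mean)
    print("MMSE error:", np.linalg.norm(x_hat - x_true))
    print("LS error  :", np.linalg.norm(np.linalg.lstsq(S, y, rcond=None)[0] - x_true))
    ```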

  2. Highlights of TOMS Version 9 Total Ozone Algorithm

    NASA Technical Reports Server (NTRS)

    Bhartia, Pawan; Haffner, David

    2012-01-01

    The fundamental basis of the TOMS total ozone algorithm was developed some 45 years ago by Dave and Mateer. It was designed to estimate total ozone from satellite measurements of the backscattered UV radiances at a few discrete wavelengths in the Huggins ozone absorption band (310-340 nm). Over the years, as the need for higher accuracy in measuring total ozone from space has increased, several improvements to the basic algorithms have been made. They include better correction for the effects of aerosols and clouds, an improved method to account for the variation in shape of ozone profiles with season, latitude, and total ozone, and a multi-wavelength correction for remaining profile shape errors. These improvements have made it possible to retrieve total ozone with just 3 spectral channels of moderate spectral resolution (approx. 1 nm) with accuracy comparable to state-of-the-art spectral fitting algorithms like DOAS that require high spectral resolution measurements at a large number of wavelengths. One of the deficiencies of the TOMS algorithm has been that it doesn't provide an error estimate. This is a particular problem at high latitudes, where the profile shape errors become significant and vary with latitude, season, total ozone, and instrument viewing geometry. The primary objective of the TOMS V9 algorithm is to account for these effects in estimating the error bars. This is done by a straightforward implementation of the Rodgers optimum estimation method using a priori ozone profiles and their error covariance matrices constructed using Aura MLS and ozonesonde data. The algorithm produces a vertical ozone profile that contains 1-2.5 pieces of information (degrees of freedom of signal) depending upon solar zenith angle (SZA). The profile is integrated to obtain the total column. We provide information that shows the altitude range in which the profile is best determined by the measurements. One can use this information in data assimilation and analysis. A side benefit of this algorithm is that it is considerably simpler than the present algorithm, which uses a database of 1512 profiles to retrieve total ozone. These profiles are tedious to construct and modify. Though the TOMS V9 algorithm is conceptually similar to the SBUV V8 algorithm developed about a decade ago, the two differ in detail. The TOMS algorithm uses 3 wavelengths to retrieve the profile while the SBUV algorithm uses 6-9 wavelengths, so TOMS provides less profile information. However, both algorithms have comparable total ozone information, and TOMS V9 can be easily adapted to use additional wavelengths from instruments like GOME, OMI, and OMPS to provide better profile information at smaller SZAs. The other significant difference between the two algorithms is that while the SBUV algorithm has been optimized for deriving monthly zonal means by making an appropriate choice of the a priori error covariance matrix, the TOMS algorithm has been optimized for tracking short-term variability using month- and latitude-dependent covariance matrices.
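
    For a linear forward model, the optimum-estimation step reduces to the standard maximum a posteriori update that blends the measurement with the a priori profile according to their covariances; the dimensions, Jacobian, and covariances below are toy values, not the V9 tables.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    n_layers, n_wavelengths = 10, 3

    K = rng.standard_normal((n_wavelengths, n_layers)) * 0.1   # weighting functions (toy)
    x_a = np.full(n_layers, 30.0)                              # a priori ozone profile [DU]
    S_a = 25.0 * np.exp(-np.abs(np.subtract.outer(np.arange(n_layers),
                                                  np.arange(n_layers))) / 2.0)
    S_e = 0.01 * np.eye(n_wavelengths)                         # measurement noise covariance

    x_true = x_a + rng.multivariate_normal(np.zeros(n_layers), S_a)
    y = K @ x_true + rng.multivariate_normal(np.zeros(n_wavelengths), S_e)

    # Optimal estimation (linear case): posterior covariance and maximum a posteriori profile.
    S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
    x_hat = x_a + S_hat @ K.T @ np.linalg.inv(S_e) @ (y - K @ x_a)

    A = S_hat @ K.T @ np.linalg.inv(S_e) @ K          # averaging kernel matrix
    print("retrieved column [DU]:", x_hat.sum())
    print("degrees of freedom for signal:", np.trace(A))   # cf. the 1-2.5 quoted above
    ```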

  3. Improved human observer performance in digital reconstructed radiograph verification in head and neck cancer radiotherapy.

    PubMed

    Sturgeon, Jared D; Cox, John A; Mayo, Lauren L; Gunn, G Brandon; Zhang, Lifei; Balter, Peter A; Dong, Lei; Awan, Musaddiq; Kocak-Uzel, Esengul; Mohamed, Abdallah Sherif Radwan; Rosenthal, David I; Fuller, Clifton David

    2015-10-01

    Digitally reconstructed radiographs (DRRs) are routinely used as an a priori reference for setup correction in radiotherapy. The spatial resolution of DRRs may be improved to reduce setup error in fractionated radiotherapy treatment protocols. The influence of finer CT slice thickness reconstruction (STR) and resultant increased resolution DRRs on physician setup accuracy was prospectively evaluated. Four head and neck patient CT-simulation images were acquired and used to create DRR cohorts by varying STRs at 0.5, 1, 2, 2.5, and 3 mm. DRRs were displaced relative to a fixed isocenter using 0-5 mm random shifts in the three cardinal axes. Physician observers reviewed DRRs of varying STRs and displacements and then aligned reference and test DRRs replicating daily KV imaging workflow. A total of 1,064 images were reviewed by four blinded physicians. Observer errors were analyzed using nonparametric statistics (Friedman's test) to determine whether STR cohorts had detectably different displacement profiles. Post hoc bootstrap resampling was applied to evaluate potential generalizability. The observer-based trial revealed a statistically significant difference between cohort means for observer displacement vector error ([Formula: see text]) and for [Formula: see text]-axis [Formula: see text]. Bootstrap analysis suggests a 15% gain in isocenter translational setup error with reduction of STR from 3 mm to ≤2 mm, though interobserver variance was a larger feature than STR-associated measurement variance. Higher resolution DRRs generated using finer CT scan STR resulted in improved observer performance at shift detection and could decrease operator-dependent geometric error. Ideally, CT STRs of ≤2 mm should be utilized for DRR generation in the head and neck.

  4. An analysis of the convergence of Newton iterations for solving elliptic Kepler's equation

    NASA Astrophysics Data System (ADS)

    Elipe, A.; Montijano, J. I.; Rández, L.; Calvo, M.

    2017-12-01

    In this note a study of the convergence properties of some starters E_0 = E_0(e, M) in the eccentricity-mean anomaly variables for solving the elliptic Kepler's equation (KE) by Newton's method is presented. By using a theorem of Wang Xinghua (Math Comput 68(225):169-186, 1999) on best possible error bounds in the solution of nonlinear equations by Newton's method, we obtain for each starter E_0(e, M) a set of values (e, M) ∈ [0, 1) × [0, π] that lead to q-convergence in the sense that Newton's sequence (E_n)_{n ≥ 0} generated from E_0 = E_0(e, M) is well defined, converges to the exact solution E* = E*(e, M) of KE, and further |E_n - E*| ≤ q^(2^n - 1) |E_0 - E*| holds for all n ≥ 0. This study completes in some sense the results derived by Avendaño et al. (Celest Mech Dyn Astron 119:27-44, 2014) by using Smale's α-test with q = 1/2. Also, since in KE the convergence rate of Newton's method tends to zero as e → 0, we show that the error estimates given in Wang Xinghua's theorem for KE can also be used to determine sets of q-convergence with q = e^k q̃ for all e ∈ [0, 1) and a fixed q̃ ≤ 1. Some remarks on the use of this theorem to derive a priori estimates of the error |E_n - E*| after n Kepler iterations are given. Finally, a posteriori bounds of this error that can be used for a dynamical estimation of the error are also obtained.
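
    For reference, Newton's iteration for Kepler's equation E - e sin E = M is short enough to state in full; the starter E_0 = M + e sin M used below is one common choice and not necessarily among the starters analyzed in the paper.

    ```python
    import math

    def kepler_newton(M, e, tol=1e-14, max_iter=50):
        """Solve E - e*sin(E) = M for the eccentric anomaly E by Newton's method."""
        E = M + e * math.sin(M)                 # a common starter E0(e, M)
        for n in range(max_iter):
            f = E - e * math.sin(E) - M
            fp = 1.0 - e * math.cos(E)          # derivative, bounded below by 1 - e > 0
            dE = f / fp
            E -= dE
            if abs(dE) < tol:
                return E, n + 1                 # converged value and iteration count
        return E, max_iter

    E, iters = kepler_newton(M=1.2, e=0.3)
    print(E, iters, abs(E - 0.3 * math.sin(E) - 1.2))   # residual should be ~1e-16
    ```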

  5. Validation of Suomi NPP OMPS Limb Profiler Ozone Measurements

    NASA Astrophysics Data System (ADS)

    Buckner, S. N.; Flynn, L. E.; McCormick, M. P.; Anderson, J.

    2017-12-01

    The Ozone Mapping and Profiler Suite (OMPS) Limb Profiler onboard the Suomi National Polar-Orbiting Partnership satellite (SNPP) makes measurements of limb-scattered solar radiances over Ultraviolet and Visible wavelengths. These measurements are used in retrieval algorithms to create high vertical resolution ozone profiles, helping monitor the evolution of the atmospheric ozone layer. NOAA is in the process of implementing these algorithms to make near-real-time versions of these products. The main objective of this project is to generate estimates of the accuracy and precision of the OMPS Limb products by analysis of matchup comparisons with similar products from the Earth Observing System Microwave Limb Sounder (EOS Aura MLS). The studies investigated the sources of errors, and classified them with respect to height, geographic location, and atmospheric and observation conditions. In addition, this project included working with the algorithm developers in an attempt to develop corrections and adjustments. Collocation and zonal mean comparisons were made and statistics were gathered on both a daily and monthly basis encompassing the entire OMPS data record. This validation effort of the OMPS-LP data will be used to help validate data from the Stratosphere Aerosol and Gas Experiment III on the International Space Station (SAGE III ISS) and will also be used in conjunction with the NOAA Total Ozone from Assimilation of Stratosphere and Troposphere (TOAST) product to develop a new a-priori for the NOAA Unique Combined Atmosphere Processing System (NUCAPS) ozone product. The current NUCAPS ozone product uses a combination of Cross-track Infrared Sounder (CrIS) data for the troposphere and a tropopause based climatology derived from ozonesonde data for the stratosphere a-priori. The latest version of TOAST uses a combination of both CrIS and OMPS-LP data. We will further develop the newest version of TOAST and incorporate it into the NUCAPS system as a new a-priori, in hopes of creating a better global ozone product.

  6. Comparing mass balance and adjoint methods for inverse modeling of nitrogen dioxide columns for global nitrogen oxide emissions

    NASA Astrophysics Data System (ADS)

    Cooper, Matthew; Martin, Randall V.; Padmanabhan, Akhila; Henze, Daven K.

    2017-04-01

    Satellite observations offer information applicable to top-down constraints on emission inventories through inverse modeling. Here we compare two methods of inverse modeling for emissions of nitrogen oxides (NOx) from nitrogen dioxide (NO2) columns using the GEOS-Chem chemical transport model and its adjoint. We treat the adjoint-based 4D-Var modeling approach for estimating top-down emissions as a benchmark against which to evaluate variations on the mass balance method. We use synthetic NO2 columns generated from known NOx emissions to serve as "truth." We find that error in mass balance inversions can be reduced by up to a factor of 2 with an iterative process that uses finite difference calculations of the local sensitivity of NO2 columns to a change in emissions. In a simplified experiment to recover local emission perturbations, horizontal smearing effects due to NOx transport are better resolved by the adjoint approach than by mass balance. For more complex emission changes, or at finer resolution, the iterative finite difference mass balance and adjoint methods produce similar global top-down inventories when inverting hourly synthetic observations, both reducing the a priori error by factors of 3-4. Inversions of simulated satellite observations from low Earth and geostationary orbits also indicate that both the mass balance and adjoint inversions produce similar results, reducing a priori error by a factor of 3. As the iterative finite difference mass balance method provides similar accuracy as the adjoint method, it offers the prospect of accurately estimating top-down NOx emissions using models that do not have an adjoint.
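
    The iterative finite-difference mass balance idea can be sketched with a toy linear "transport" operator: each cell's emissions are corrected by the mismatch between observed and simulated columns divided by a finite-difference estimate of the local sensitivity. The smearing matrix and synthetic truth below are fabricated for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    n_cells = 30

    # Toy "transport": columns respond mostly to local emissions plus some smearing.
    T = 0.8 * np.eye(n_cells) + 0.2 * np.abs(rng.standard_normal((n_cells, n_cells))) / n_cells
    background = 0.5

    def simulate_columns(emissions):
        return background + T @ emissions

    E_true = rng.uniform(1.0, 5.0, n_cells)          # "true" emissions (the target)
    obs = simulate_columns(E_true)                   # synthetic observed NO2 columns
    E = np.full(n_cells, 2.0)                        # a priori emissions

    for iteration in range(10):
        base = simulate_columns(E)
        # Finite-difference estimate of each cell's column sensitivity to its own emissions.
        dE = 0.1 * E
        sens = (simulate_columns(E + dE) - base) / dE        # approx. diagonal sensitivity
        E = E + (obs - base) / sens                          # mass-balance style update

    print("a priori error :", np.linalg.norm(np.full(n_cells, 2.0) - E_true))
    print("a posteriori   :", np.linalg.norm(E - E_true))
    ```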

  7. Advantages of estimating rate corrections during dynamic propagation of spacecraft rates: Applications to real-time attitude determination of SAMPEX

    NASA Technical Reports Server (NTRS)

    Challa, M. S.; Natanson, G. A.; Baker, D. F.; Deutschmann, J. K.

    1994-01-01

    This paper describes real-time attitude determination results for the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX), a gyroless spacecraft, using a Kalman filter/Euler equation approach denoted the real-time sequential filter (RTSF). The RTSF is an extended Kalman filter whose state vector includes the attitude quaternion and corrections to the rates, which are modeled as Markov processes with small time constants. The rate corrections impart a significant robustness to the RTSF against errors in modeling the environmental and control torques, as well as errors in the initial attitude and rates, while maintaining a small state vector. SAMPEX flight data from various mission phases are used to demonstrate the robustness of the RTSF against a priori attitude and rate errors of up to 90 deg and 0.5 deg/sec, respectively, as well as a sensitivity of 0.0003 deg/sec in estimating rate corrections in torque computations. In contrast, it is shown that the RTSF attitude estimates without the rate corrections can degrade rapidly. RTSF advantages over single-frame attitude determination algorithms are also demonstrated through (1) substantial improvements in attitude solutions during sun-magnetic field coalignment and (2) magnetic-field-only attitude and rate estimation during the spacecraft's sun-acquisition mode. A robust magnetometer-only attitude-and-rate determination method is also developed to provide for the contingency when both sun data and a priori knowledge of the spacecraft state are unavailable. This method includes a deterministic algorithm used to initialize the RTSF with coarse estimates of the spacecraft attitude and rates. The combined algorithm has been found effective, yielding accuracies of 1.5 deg in attitude and 0.01 deg/sec in the rates and convergence times of as little as 400 sec.

  8. A model-independent comparison of the rates of uptake and short term retention of 47Ca and 85Sr by the skeleton.

    PubMed

    Reeve, J; Hesp, R

    1976-12-22

    1. A method has been devised for comparing the impulse response functions of the skeleton for two or more bone-seeking tracers, and for estimating the contribution made by measurement errors to the differences between any pair of impulse response functions. 2. Comparisons were made between the calculated impulse response functions for 47Ca and 85Sr obtained in simultaneous double tracer studies in sixteen subjects. Collectively the differences between the 47Ca and 85Sr functions could be accounted for entirely by measurement errors. 3. Because the calculation of an impulse response function requires fewer a priori assumptions than other forms of mathematical analysis, and automatically corrects for differences induced by recycling of tracer and non-identical rates of excretory plasma clearance of tracer, it is concluded that differences shown in previous in vivo studies between the fluxes of Ca and Sr into bone can be fully accounted for by undetermined oversimplifications in the various mathematical models used to analyse the results of those studies. 85Sr is therefore an adequate tracer for bone calcium in most in vivo studies.

  9. Locally Weighted Score Estimation for Quantile Classification in Binary Regression Models

    PubMed Central

    Rice, John D.; Taylor, Jeremy M. G.

    2016-01-01

    One common use of binary response regression methods is classification based on an arbitrary probability threshold dictated by the particular application. Since this is given to us a priori, it is sensible to incorporate the threshold into our estimation procedure. Specifically, for the linear logistic model, we solve a set of locally weighted score equations, using a kernel-like weight function centered at the threshold. The bandwidth for the weight function is selected by cross validation of a novel hybrid loss function that combines classification error and a continuous measure of divergence between observed and fitted values; other possible cross-validation functions based on more common binary classification metrics are also examined. This work has much in common with robust estimation, but differs from previous approaches in this area in its focus on prediction, specifically classification into high- and low-risk groups. Simulation results are given showing the reduction in error rates that can be obtained with this method when compared with maximum likelihood estimation, especially under certain forms of model misspecification. Analysis of a melanoma data set is presented to illustrate the use of the method in practice. PMID:28018492
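
    A rough sketch of the locally weighted score idea, assuming a Gaussian kernel in the fitted probability centred at the threshold; the paper's kernel choice, bandwidth selection and hybrid loss are not reproduced, and all names are illustrative.

      import numpy as np
      from scipy.optimize import root

      def lw_logistic(X, y, threshold, bandwidth):
          """Locally weighted score estimation for a logistic model (sketch).

          Solves sum_i w_i(beta) * x_i * (y_i - p_i(beta)) = 0, where the weight
          w_i is a Gaussian kernel in the fitted probability, centred at the
          classification threshold.
          """
          Xd = np.column_stack([np.ones(len(y)), X])      # add intercept

          def score(beta):
              p = 1.0 / (1.0 + np.exp(-Xd @ beta))
              w = np.exp(-0.5 * ((p - threshold) / bandwidth) ** 2)
              return Xd.T @ (w * (y - p))

          return root(score, x0=np.zeros(Xd.shape[1]), method="hybr").x

      # toy usage
      rng = np.random.default_rng(1)
      x = rng.normal(size=(500, 1))
      p_true = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x[:, 0])))
      y = rng.binomial(1, p_true)
      print(lw_logistic(x, y, threshold=0.3, bandwidth=0.2))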

  10. Image-derived input function with factor analysis and a-priori information.

    PubMed

    Simončič, Urban; Zanotti-Fregonara, Paolo

    2015-02-01

    Quantitative PET studies often require the cumbersome and invasive procedure of arterial cannulation to measure the input function. This study sought to minimize the number of necessary blood samples by developing a factor-analysis-based image-derived input function (IDIF) methodology for dynamic PET brain studies. IDIF estimation was performed as follows: (a) carotid and background regions were segmented manually on an early PET time frame; (b) blood-weighted and tissue-weighted time-activity curves (TACs) were extracted with factor analysis; (c) factor analysis results were denoised and scaled using the voxels with the highest blood signal; (d) using population data and one blood sample at 40 min, the whole-blood TAC was estimated from postprocessed factor analysis results; and (e) the parent concentration was finally estimated by correcting the whole-blood curve with measured radiometabolite concentrations. The methodology was tested using data from 10 healthy individuals imaged with [(11)C](R)-rolipram. The accuracy of IDIFs was assessed against full arterial sampling by comparing the area under the curve of the input functions and by calculating the total distribution volume (VT). The shape of the image-derived whole-blood TAC matched the reference arterial curves well, and the whole-blood areas under the curve were accurately estimated (mean error 1.0±4.3%). The relative Logan-V(T) error was -4.1±6.4%. Compartmental modeling and spectral analysis gave less accurate V(T) results compared with Logan. A factor-analysis-based IDIF for [(11)C](R)-rolipram brain PET studies that relies on a single blood sample and population data can be used for accurate quantification of Logan-V(T) values.

  11. Poor interoperability of the Adams-Harbertson method for analysis of anthocyanins: comparison with AOAC pH differential method.

    PubMed

    Brooks, Larry M; Kuhlman, Benjamin J; McKesson, Doug W; McCloskey, Leo

    2013-01-01

    The poor interoperability of anthocyanin glycoside measurements by two pH differential methods is documented. Adams-Harbertson, which was proposed for commercial winemaking, was compared to AOAC Official Method 2005.02 for wine. California bottled wines (Pinot Noir, Merlot, and Cabernet Sauvignon) were assayed in a collaborative study (n=105), which found the mean precision of Adams-Harbertson winery versus reference measurements to be 77 +/- 20%. Maximum error is expected to be 48% for Pinot Noir, 42% for Merlot, and 34% for Cabernet Sauvignon from reproducibility RSD. The actual range of measurements was 30 to 91% for Pinot Noir. An interoperability study (n=30) found that Adams-Harbertson produces measurements that are nominally 150% of the AOAC pH differential method. The main analytical chemistry differences are: the AOAC method uses the Beer-Lambert equation and measures absorbance at pH 1.0 and 4.5, proposed a priori by Fuleki and Francis; whereas Adams-Harbertson uses a "universal" standard curve and measures absorbance ad hoc at pH 1.8 and 4.9 to reduce the effects of so-called co-pigmentation. Errors relative to AOAC are produced by the use of the Adams-Harbertson standard curve instead of Beer-Lambert and of pH 1.8 instead of pH 1.0. The study recommends using AOAC Official Method 2005.02 for analysis of wine anthocyanin glycosides.

  12. Double absorbing boundaries for finite-difference time-domain electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LaGrone, John, E-mail: jlagrone@smu.edu; Hagstrom, Thomas, E-mail: thagstrom@smu.edu

    We describe the implementation of optimal local radiation boundary condition sequences for second order finite difference approximations to Maxwell's equations and the scalar wave equation using the double absorbing boundary formulation. Numerical experiments are presented which demonstrate that the design accuracy of the boundary conditions is achieved and, for comparable effort, exceeds that of a convolution perfectly matched layer with reasonably chosen parameters. An advantage of the proposed approach is that parameters can be chosen using an accurate a priori error bound.

  13. Matching of Ground-Based LiDAR and Aerial Image Data For Mobile Robot Localization in Densely Forested Environments

    DTIC Science & Technology

    2013-11-01

    for rovers operating in close proximity to points of interest. Techniques such as Simultaneous Localization and Mapping ( SLAM ) have been utilized...successfully to localize rovers in a variety of settings and scenarios [3,4]. SLAM focuses on building a local map of landmarks as observed by a rover...more landmarks are observed and errors filtered. SLAM therefore does not require a priori knowledge of the locations of landmarks or that of the rover

  14. Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation

    NASA Astrophysics Data System (ADS)

    Sekhar, S. Chandra; Sreenivas, T. V.

    2004-12-01

    We address the problem of estimating instantaneous frequency (IF) of a real-valued constant amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial, over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratio (SNR).
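
    The fixed-window core of the zero-crossing approach can be sketched as below: at the k-th zero-crossing the phase equals k*pi up to an offset, so a low-order polynomial phase is least-squares fitted to the crossing instants and differentiated to give the IF. The adaptive ICI window selection, which is the paper's main contribution, is omitted, and all names are illustrative.

      import numpy as np

      def if_from_zero_crossings(signal, fs, order=3):
          """Polynomial IF estimate from zero-crossings over one window (sketch)."""
          t = np.arange(len(signal)) / fs
          s = np.sign(signal)
          idx = np.where(np.diff(s) != 0)[0]
          # linear interpolation of the crossing instants
          t_zc = t[idx] - signal[idx] * (t[idx + 1] - t[idx]) / (signal[idx + 1] - signal[idx])
          phase = np.pi * np.arange(len(t_zc))         # phase advances by pi per crossing
          coeffs = np.polyfit(t_zc, phase, order)      # polynomial phase model
          dcoeffs = np.polyder(coeffs)                 # phase derivative
          return np.polyval(dcoeffs, t) / (2.0 * np.pi)    # IF in Hz

      # toy usage: linear chirp from 20 Hz to 60 Hz over 1 s
      fs = 1000.0
      t = np.arange(0, 1, 1 / fs)
      x = np.cos(2 * np.pi * (20 * t + 20 * t ** 2))
      print(if_from_zero_crossings(x, fs)[::250])      # IF ramps roughly linearly from ~20 Hz toward 60 Hz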

  15. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE PAGES

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

    2016-01-01

    Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.
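
    A generic sketch of the multilevel Monte Carlo estimator used for the a priori quantities of interest; the coupled sampler here is a stand-in for approximate solves of the nonlocal elliptic problem, and the sample-size allocation and the MLSMC posterior machinery are not reproduced.

      import numpy as np

      def mlmc_estimate(coupled_sampler, n_per_level):
          """Multilevel Monte Carlo estimator (generic sketch).

          coupled_sampler(level, n) must return two length-n arrays (Q_l, Q_{l-1})
          evaluated on the SAME random inputs at the fine and coarse discretisation
          levels (for level 0 the coarse array is ignored).  The estimator uses the
          telescoping identity E[Q_L] = E[Q_0] + sum_{l=1}^{L} E[Q_l - Q_{l-1}].
          """
          fine0, _ = coupled_sampler(0, n_per_level[0])
          est = np.mean(fine0)
          for level, n in enumerate(n_per_level[1:], start=1):
              fine, coarse = coupled_sampler(level, n)
              est += np.mean(fine - coarse)
          return est

      # toy usage: Q_l = E[X^2] approximated with a level-dependent bias 2^-l
      def toy_sampler(level, n, rng=np.random.default_rng(2)):
          x = rng.normal(size=n)
          q = lambda l: x ** 2 + 2.0 ** (-l)           # same x drives both levels
          return q(level), q(level - 1)

      print(mlmc_estimate(toy_sampler, [4000, 2000, 1000]))   # approx 1 + 2^-2 = 1.25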

  16. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

    Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.

  17. Joint detection of anatomical points on surface meshes and color images for visual registration of 3D dental models

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël; Albouy-Kissi, Benjamin; Treuillet, Sylvie; Lucas, Yves

    2015-04-01

    Computer-aided planning for orthodontic treatment requires knowing the occlusion of separately scanned dental casts. A visually guided registration is conducted, starting by extracting corresponding features in both photographs and 3D scans. To achieve this, the dental neck and occlusion surface are first extracted by image segmentation and 3D curvature analysis. Then, an iterative registration process is conducted during which feature positions are refined, guided by previously found anatomic edges. The occlusal edge image detection is improved by an original algorithm which follows Canny's poorly detected edges using a priori knowledge of tooth shapes. Finally, the influence of feature extraction and position optimization is evaluated in terms of the quality of the induced registration. The best combination of feature detection and optimization leads to an average positioning error of 1.10 mm and 2.03°.

  18. Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1997-01-01

    A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.

  19. Analysis of ICESat Data Using Kalman Filter and Kriging to Study Height Changes in East Antarctica

    NASA Technical Reports Server (NTRS)

    Herring, Thomas A.

    2005-01-01

    We analyze ICESat-derived heights collected between February 2003 and November 2004 using a kriging/Kalman filtering approach to investigate height changes in East Antarctica. The model's parameters are height change relative to an a priori static digital height model, a seasonal signal expressed as an amplitude Beta and phase Theta, and a height-change rate dh/dt for each (100 km)^2 block. From the Kalman filter results, dh/dt has a mean of -0.06 m/yr in the flat interior of East Antarctica. Spatially correlated pointing errors in the current data releases give uncertainties in the range of 0.06 m/yr, making height change detection unreliable at this time. Our test shows that when using all available data with pointing knowledge equivalent to that of Laser 2a, height change detection with an accuracy level of 0.02 m/yr can be achieved over flat terrains in East Antarctica.
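
    A simplified per-block stand-in for the estimation described above (ordinary least squares rather than the paper's Kalman filter/kriging machinery, with illustrative names): each block's height residuals are fitted with a bias, a secular rate dh/dt and an annual seasonal term whose amplitude and phase follow from the sine/cosine coefficients.

      import numpy as np

      def fit_block(t_years, dh):
          """Per-block model dh(t) ≈ h0 + rate*t + A*cos(2*pi*t) + B*sin(2*pi*t) (sketch)."""
          G = np.column_stack([np.ones_like(t_years), t_years,
                               np.cos(2 * np.pi * t_years), np.sin(2 * np.pi * t_years)])
          h0, rate, A, B = np.linalg.lstsq(G, dh, rcond=None)[0]
          return rate, np.hypot(A, B), np.arctan2(B, A)    # dh/dt, amplitude, phase

      # toy usage: synthetic block with a 0.05 m/yr rate and 0.3 m seasonal amplitude
      t = np.linspace(0.0, 1.75, 40)
      dh = (0.05 * t + 0.3 * np.cos(2 * np.pi * (t - 0.2))
            + 0.02 * np.random.default_rng(6).normal(size=t.size))
      print(fit_block(t, dh))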

  20. A new simplex chemometric approach to identify olive oil blends with potentially high traceability.

    PubMed

    Semmar, N; Laroussi-Mezghani, S; Grati-Kamoun, N; Hammami, M; Artaud, J

    2016-10-01

    Olive oil blends (OOBs) are complex matrices combining different cultivars at variable proportions. Although qualitative determinations of OOBs have been subjected to several chemometric works, quantitative evaluations of their contents remain poorly developed because of traceability difficulties concerning co-occurring cultivars. Around this question, we recently published an original simplex approach helping to develop predictive models of the proportions of co-occurring cultivars from chemical profiles of resulting blends (Semmar & Artaud, 2015). Beyond predictive model construction and validation, this paper presents an extension based on prediction errors' analysis to statistically define the blends with the highest predictability among all the possible ones that can be made by mixing cultivars at different proportions. This provides an interesting way to identify a priori labeled commercial products with potentially high traceability taking into account the natural chemical variability of different constitutive cultivars. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Determination of layer ordering using sliding-window Fourier transform of x-ray reflectivity data

    NASA Astrophysics Data System (ADS)

    Smigiel, E.; Knoll, A.; Broll, N.; Cornet, A.

    1998-01-01

    X-ray reflectometry allows the determination of the thickness, density and roughness of thin layers on a substrate, from a few Angstroms to a few hundred nanometres. The thickness is determined by simulation with trial-and-error methods after extracting initial values of the layer thicknesses from the result of a classical Fast Fourier Transform (FFT) of the reflectivity data. However, the order information of the layers is lost during a classical FFT. The order of the layers must then be known a priori. In this paper, it will be shown that the order of the layers can be obtained by a sliding-window Fourier transform, the so-called Gabor representation. This joint time-frequency analysis allows the direct determination of the order of the layers and, therefore, the use of a more appropriate starting model for refining simulations. A simulated and a measured example demonstrate the usefulness of this method.
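
    The sliding-window (Gabor) analysis can be sketched with a short-time Fourier transform of the reflectivity curve sampled in q. The q^4/Fresnel normalisation is assumed to have been applied already, and the thickness axis assumes Kiessig oscillations of the form cos(q*d), so a fringe frequency f in cycles per unit q maps to d ≈ 2*pi*f; names and parameters are illustrative.

      import numpy as np
      from scipy.signal import stft

      def gabor_map(reflectivity, dq, nperseg=1024):
          """Sliding-window Fourier transform of a normalised reflectivity curve (sketch).

          Returns the thickness axis, the window-centre positions (in q, relative to
          the start of the scan) and the magnitude map whose ridges indicate which
          fringe frequency (layer thickness) is present in which q-range, restoring
          the ordering information lost in a plain FFT.
          """
          f, q_centres, Z = stft(reflectivity, fs=1.0 / dq, nperseg=nperseg)
          return 2.0 * np.pi * f, q_centres, np.abs(Z)

      # toy usage: interference from two layers of roughly 120 and 350 Angstrom
      q = np.linspace(0.02, 0.5, 4096)                     # 1/Angstrom (illustrative)
      refl = np.cos(120.0 * q) + 0.5 * np.cos(350.0 * q)
      d_axis, q_centres, mag = gabor_map(refl, dq=q[1] - q[0])
      # d of the strongest fringe; matches the 120 Angstrom layer to within the window's resolution
      print(d_axis[np.argmax(mag.mean(axis=1))])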

  2. Photo-multiplier Tube Based Hybrid MRI and Frequency Domain Fluorescence Tomography System for Small Animal Imaging

    PubMed Central

    Lin, Y; Ghijsen, M T; Gao, H; Liu, N; Nalcioglu, O; Gulsen, G

    2014-01-01

    Fluorescence tomography (FT) is a promising molecular imaging technique that can spatially resolve both fluorophore concentration and lifetime parameters. However, recovered fluorophore parameters highly depend on the size and depth of the object due to the ill-posedness of the FT inverse problem. Structural a priori information from another high spatial resolution imaging modality has been demonstrated to significantly improve FT reconstruction accuracy. In this study, we have constructed a combined magnetic resonance imaging (MRI) and FT system for small animal imaging. A photo-multiplier tube (PMT) is used as the detector to acquire frequency domain FT measurements. This is the first MR-compatible time-resolved FT system that can reconstruct both fluorescence concentration and lifetime maps simultaneously. The performance of the hybrid system is evaluated with phantom studies. Two different fluorophores, Indocyanine Green (ICG) and 3-3′ Diethylthiatricarbocyanine Iodide (DTTCI), which have similar excitation and emission spectra but different lifetimes, are utilized. The fluorescence concentration and lifetime maps are both reconstructed with and without the structural a priori information obtained from MRI for comparison. We show that the hybrid system can accurately recover both fluorescence intensity and lifetime within 10% error for two 4.2 mm-diameter cylindrical objects embedded in a 38 mm-diameter cylindrical phantom when MRI structural a priori information is utilized. PMID:21753235

  3. Uncertainty analysis for fluorescence tomography with Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Reinbacher-Köstinger, Alice; Freiberger, Manuel; Scharfetter, Hermann

    2011-07-01

    Fluorescence tomography seeks to image an inaccessible fluorophore distribution inside an object like a small animal by injecting light at the boundary and measuring the light emitted by the fluorophore. Optical parameters (e.g. the conversion efficiency or the fluorescence life-time) of certain fluorophores depend on physiologically interesting quantities like the pH value or the oxygen concentration in the tissue, which allows functional rather than just anatomical imaging. To reconstruct the concentration and the life-time from the boundary measurements, a nonlinear inverse problem has to be solved. It is, however, difficult to estimate the uncertainty of the reconstructed parameters in the case of iterative algorithms and a large number of degrees of freedom. Uncertainties in fluorescence tomography applications arise from model inaccuracies, discretization errors, data noise and a priori errors. Thus, a Markov chain Monte Carlo method (MCMC) was used to consider all these uncertainty factors, exploiting a Bayesian formulation of conditional probabilities. A 2-D simulation experiment was carried out for a circular object with two inclusions. Both inclusions had a 2-D Gaussian distribution of the concentration and constant life-time within a representative area of the inclusion. Forward calculations were done with the diffusion approximation of Boltzmann's transport equation. The reconstruction results show that the percent estimation error of the lifetime parameter is lower than that of the concentration by a factor of approximately 10. This finding suggests that lifetime imaging may provide more accurate information than concentration imaging only. The results must be interpreted with caution, however, because the chosen simulation setup represents a special case and a more detailed analysis remains to be done in the future to clarify if the findings can be generalized.
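
    A generic random-walk Metropolis sampler of the kind underlying such an analysis is sketched below; the log-posterior callable stands in for the combination of the diffusion-approximation forward model, the noise model and the a priori terms, and the step size and chain length are arbitrary assumptions.

      import numpy as np

      def metropolis(log_post, x0, n_steps=20000, step=0.05, rng=None):
          """Random-walk Metropolis sampler (generic sketch).

          The returned chain can be used to form credible intervals for the
          reconstructed concentration and lifetime parameters.
          """
          rng = rng or np.random.default_rng(0)
          x = np.asarray(x0, dtype=float)
          lp = log_post(x)
          chain = np.empty((n_steps, x.size))
          for k in range(n_steps):
              prop = x + step * rng.normal(size=x.size)
              lp_prop = log_post(prop)
              if np.log(rng.uniform()) < lp_prop - lp:      # accept/reject
                  x, lp = prop, lp_prop
              chain[k] = x
          return chain

      # toy usage with a standard-normal "posterior"
      chain = metropolis(lambda x: -0.5 * np.sum(x ** 2), x0=[1.0, -1.0])
      print(chain.mean(axis=0), chain.std(axis=0))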

  4. Turbulent Output-Based Anisotropic Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Carlson, Jan-Renee

    2010-01-01

    Controlling discretization error is a remaining challenge for computational fluid dynamics simulation. Grid adaptation is applied to reduce estimated discretization error in drag or pressure integral output functions. To enable application to turbulent flows at high Reynolds numbers, O(10^7), a hybrid approach is utilized that freezes the near-wall boundary layer grids and adapts the grid away from the no-slip boundaries. The hybrid approach is not applicable to problems with under-resolved initial boundary layer grids, but is a powerful technique for problems with important off-body anisotropic features. Supersonic nozzle plume, turbulent flat plate, and shock-boundary layer interaction examples are presented with comparisons to experimental measurements of pressure and velocity. Adapted grids are produced that resolve off-body features in locations that are not known a priori.

  5. Global Simultaneous Estimation of Present-Day Surface Mass Trend and GIA Using Multi-Sensor Geodetic Data Combination

    NASA Astrophysics Data System (ADS)

    Wu, X.; Heflin, M. B.; Schotman, H.; Vermeersen, B. L.; Dong, D.; Gross, R. S.; Ivins, E. R.; Moore, A. W.; Owen, S. E.

    2009-12-01

    Separating geodetic signatures of present-day surface mass trend and Glacial Isostatic Adjustment (GIA) requires multiple data types with different physical characteristics. We take a kinematic approach to the global simultaneous estimation problem. Three sets of global spherical harmonic coefficients from degree 1 to 60 of the present-day surface mass trend, vertical and horizontal GIA-induced surface velocity fields, as well as rotation vectors of 15 major tectonic plates are solved for. The estimation is carried out using GRACE geoid trend, 3-dimensional velocities measured at 664 SLR/VLBI/GPS sites, and the data-assimilated JPL ECCO ocean model. The ICE-5G/IJ05 (VM2) predictions are used as the a priori GIA mean model. An a priori covariance matrix is constructed in the spherical harmonic domain for the GIA model by propagating the covariance matrices of random and geographically correlated ice thickness errors and upper/lower mantle viscosity errors so that the resulting magnitude and geographic pattern of the geoid uncertainties roughly reflect the difference between two recent GIA models. Unprecedented high-precision results are achieved. For example, geocenter velocities due to present-day surface mass trend and due to GIA are both determined to uncertainties of better than 0.1 mm/yr without using direct geodetic geocenter information. Information content of the data sets, future improvements, and benefits from new data will also be explored in the global inverse framework.

  6. Recovery of chemical Estimates by Field Inhomogeneity Neighborhood Error Detection (REFINED): Fat/Water Separation at 7T

    PubMed Central

    Narayan, Sreenath; Kalhan, Satish C.; Wilson, David L.

    2012-01-01

    Purpose: To reduce swaps in fat-water separation methods, a particular issue on 7T small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Materials and Methods: Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Results: Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Conclusion: Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. PMID:23023815

  7. Recovery of chemical estimates by field inhomogeneity neighborhood error detection (REFINED): fat/water separation at 7 tesla.

    PubMed

    Narayan, Sreenath; Kalhan, Satish C; Wilson, David L

    2013-05-01

    To reduce swaps in fat-water separation methods, a particular issue on 7 Tesla (T) small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. Copyright © 2012 Wiley Periodicals, Inc.

  8. Instrumental variables vs. grouping approach for reducing bias due to measurement error.

    PubMed

    Batistatou, Evridiki; McNamee, Roseanne

    2008-01-01

    Attenuation of the exposure-response relationship due to exposure measurement error is often encountered in epidemiology. Given that error cannot be totally eliminated, bias correction methods of analysis are needed. Many methods require more than one exposure measurement per person to be made, but the 'group mean OLS method', in which subjects are grouped into several a priori defined groups followed by ordinary least squares (OLS) regression on the group means, can be applied with one measurement. An alternative approach is to use an instrumental variable (IV) method in which both the single error-prone measure and an IV are used in IV analysis. In this paper we show that the 'group mean OLS' estimator is equal to an IV estimator with the group mean used as IV, but that the variance estimators for the two methods are different. We derive a simple expression for the bias in the common estimator which is a simple function of group size, reliability and contrast of exposure between groups, and show that the bias can be very small when group size is large. We compare this method with a new proposal (group mean ranking method), also applicable with a single exposure measurement, in which the IV is the rank of the group means. When there are two independent exposure measurements per subject, we propose a new IV method (EVROS IV) and compare it with Carroll and Stefanski's (CS IV) proposal in which the second measure is used as an IV; the new IV estimator combines aspects of the 'group mean' and 'CS' strategies. All methods are evaluated in terms of bias, precision and root mean square error via simulations and a dataset from occupational epidemiology. The 'group mean ranking method' does not offer much improvement over the 'group mean method'. Compared with the 'CS' method, the 'EVROS' method is less affected by low reliability of exposure. We conclude that the group IV methods we propose may provide a useful way to handle mismeasured exposures in epidemiology with or without replicate measurements. Our finding may also have implications for the use of aggregate variables in epidemiology to control for unmeasured confounding.
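
    The equivalence noted above (the group-mean OLS point estimate equals an IV estimate that uses each subject's group mean as the instrument) can be checked numerically with the sketch below; variance estimation, the ranking variant and the EVROS estimator are not reproduced, and the data-generating choices are illustrative.

      import numpy as np

      def group_mean_estimators(x, y, groups):
          """Group-mean OLS and IV-with-group-mean estimators (sketch)."""
          labels = np.unique(groups)
          xbar = np.array([x[groups == g].mean() for g in labels])
          ybar = np.array([y[groups == g].mean() for g in labels])
          n_g = np.array([(groups == g).sum() for g in labels])

          # group-size-weighted OLS on the group means
          wx, wy = np.average(xbar, weights=n_g), np.average(ybar, weights=n_g)
          beta_group = np.sum(n_g * (xbar - wx) * (ybar - wy)) / np.sum(n_g * (xbar - wx) ** 2)

          # IV estimator with each subject's group mean as the instrument
          z = xbar[np.searchsorted(labels, groups)]
          beta_iv = np.sum((z - z.mean()) * (y - y.mean())) / np.sum((z - z.mean()) * (x - x.mean()))
          return beta_group, beta_iv

      # toy usage: the grouped/IV estimates are far less attenuated than naive OLS
      rng = np.random.default_rng(3)
      g = np.repeat(np.arange(20), 50)                       # 20 a priori groups of 50
      true_x = rng.normal(g * 0.1, 1.0)                      # exposure contrast between groups
      y = 2.0 * true_x + rng.normal(size=true_x.size)
      x_obs = true_x + rng.normal(size=true_x.size)          # classical measurement error
      print(group_mean_estimators(x_obs, y, g))              # the two values coincide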

  9. Accuracy Investigation of Creating Orthophotomaps Based on Images Obtained by Applying Trimble-UX5 UAV

    NASA Astrophysics Data System (ADS)

    Hlotov, Volodymyr; Hunina, Alla; Siejka, Zbigniew

    2017-06-01

    The main purpose of this work is to confirm that large-scale orthophotomaps can be produced with the Trimble UX5 unmanned aerial vehicle (UAV). A planned altitude reference survey of the study area was carried out prior to the aerial survey. The study area was marked with distinctive reference points in the form of triangles (0.5 × 0.5 × 0.2 m); the checkpoints used to verify the accuracy of the orthophotomap were marked with similar triangles. The coordinates of the marked reference points and checkpoints were determined by GNSS measurements in real-time kinematic (RTK) mode. The aerial survey was planned with the Trimble Access Aerial Imaging software, which was also used to operate the UX5. The survey was flown with the Trimble UX5 UAV carrying a SONY NEX-5R digital camera at altitudes of 200 m and 300 m. The survey data were processed with the Pix4D photogrammetric software, which was used to produce the orthophotomap of the surveyed objects. To assess the accuracy of the results, checkpoint coordinates were read from the orthophotomap and the mean square error was computed against the coordinates obtained from the GNSS measurements. The a priori accuracy of the spatial coordinates derived from the aerial survey data was estimated as mx=0.11 m, my=0.15 m, mz=0.23 m in the village of Remeniv and mx=0.26 m, my=0.38 m, mz=0.43 m in the town of Vynnyky. The accuracy of the checkpoint coordinates determined from the UAV images was then compared with the mean square error of the reference points. A comparative analysis of the results shows that the mean square error of the produced orthophotomap does not exceed the a priori accuracy estimate. The study thus confirms the feasibility of using the Trimble UX5 UAV for producing large-scale orthophotomaps. UAV survey data can be used for monitoring objects that are potentially dangerous to people, for state border control and for checking settlement land plots, so it is important to control the accuracy of the results. Based on the analysis and experimental work, UAV surveying provides data more efficiently than ground surveying methods. As a result, the Trimble UX5 UAV makes it possible to survey built-up areas with the accuracy required for orthophotomaps at scales of 1:2000, 1:1000 and 1:500.

  10. Deep space target location with Hubble Space Telescope (HST) and Hipparcos data

    NASA Technical Reports Server (NTRS)

    Null, George W.

    1988-01-01

    Interplanetary spacecraft navigation requires accurate a priori knowledge of target positions. A concept is presented for attaining improved target ephemeris accuracy using two future Earth-orbiting optical observatories, the European Space Agency (ESA) Hipparcos observatory and the NASA Hubble Space Telescope (HST). Assuming nominal observatory performance, the Hipparcos data reduction will provide an accurate global star catalog, and HST will provide a capability for accurate angular measurements of stars and solar system bodies. The target location concept employs HST to observe solar system bodies relative to Hipparcos catalog stars and to determine the orientation (frame tie) of these stars to compact extragalactic radio sources. The target location process is described, the major error sources discussed, the potential target ephemeris error predicted, and mission applications identified. Preliminary results indicate that ephemeris accuracy comparable to the errors in individual Hipparcos catalog stars may be possible with a more extensive HST observing program. Possible future ground- and space-based replacements for Hipparcos and HST astrometric capabilities are also discussed.

  11. Calibration Errors in Interferometric Radio Polarimetry

    NASA Astrophysics Data System (ADS)

    Hales, Christopher A.

    2017-08-01

    Residual calibration errors are difficult to predict in interferometric radio polarimetry because they depend on the observational calibration strategy employed, encompassing the Stokes vector of the calibrator and parallactic angle coverage. This work presents analytic derivations and simulations that enable examination of residual on-axis instrumental leakage and position-angle errors for a suite of calibration strategies. The focus is on arrays comprising alt-azimuth antennas with common feeds over which parallactic angle is approximately uniform. The results indicate that calibration schemes requiring parallactic angle coverage in the linear feed basis (e.g., the Atacama Large Millimeter/submillimeter Array) need only observe over 30°, beyond which no significant improvements in calibration accuracy are obtained. In the circular feed basis (e.g., the Very Large Array above 1 GHz), 30° is also appropriate when the Stokes vector of the leakage calibrator is known a priori, but this rises to 90° when the Stokes vector is unknown. These findings illustrate and quantify concepts that were previously obscure rules of thumb.

  12. Adaptive Fuzzy Bounded Control for Consensus of Multiple Strict-Feedback Nonlinear Systems.

    PubMed

    Wang, Wei; Tong, Shaocheng

    2018-02-01

    This paper studies the adaptive fuzzy bounded control problem for leader-follower multiagent systems, where each follower is modeled as an uncertain nonlinear strict-feedback system. Combining the fuzzy approximation with the dynamic surface control, an adaptive fuzzy control scheme is developed to guarantee the output consensus of all agents under directed communication topologies. Different from the existing results, the bounds of the control inputs are known a priori, and they can be determined by the feedback control gains. To realize smooth and fast learning, a predictor is introduced to estimate each error surface, and the corresponding predictor error is employed to learn the optimal fuzzy parameter vector. It is proved that the developed adaptive fuzzy control scheme guarantees the uniform ultimate boundedness of the closed-loop systems, and the tracking error converges to a small neighborhood of the origin. The simulation results and comparisons are provided to show the validity of the control strategy presented in this paper.

  13. Comparing dietary patterns derived by two methods and their associations with obesity in Polish girls aged 13-21 years: the cross-sectional GEBaHealth study.

    PubMed

    Wadolowska, Lidia; Kowalkowska, Joanna; Czarnocinska, Jolanta; Jezewska-Zychowicz, Marzena; Babicz-Zielinska, Ewa

    2017-05-01

    To compare dietary patterns (DPs) derived by two methods and their assessment as a factor of obesity in girls aged 13-21 years. Data from a cross-sectional study conducted among the representative sample of Polish females ( n = 1,107) aged 13-21 years were used. Subjects were randomly selected. Dietary information was collected using three short-validated food frequency questionnaires (FFQs) regarding fibre intake, fat intake and overall food intake variety. DPs were identified by two methods: a priori approach (a priori DPs) and cluster analysis (data-driven DPs). The association between obesity and DPs and three single dietary characteristics was examined using multiple logistic regression analysis. Four data-driven DPs were obtained: 'Low-fat-Low-fibre-Low-varied' (21.2%), 'Low-fibre' (29.1%), 'Low-fat' (25.0%) and 'High-fat-Varied' (24.7%). Three a priori DPs were pre-defined: 'Non-healthy' (16.6%), 'Neither-pro-healthy-nor-non-healthy' (79.1%) and 'Pro-healthy' (4.3%). Girls with 'Low-fibre' DP were less likely to have central obesity (adjusted odds ratio (OR) = 0.36; 95% confidence interval (CI): 0.17, 0.75) than girls with 'Low-fat-Low-fibre-Low-varied' DP (reference group, OR = 1.00). No significant associations were found between a priori DPs and overweight including obesity or central obesity. The majority of girls with 'Non-healthy' DP were also classified as 'Low-fibre' DP in the total sample, in girls with overweight including obesity and in girls with central obesity (81.7%, 80.6% and 87.3%, respectively), while most girls with 'Pro-healthy' DP were classified as 'Low-fat' DP (67.8%, 87.6% and 52.1%, respectively). We found that the a priori approach as well as cluster analysis can be used to derive opposite health-oriented DPs in Polish females. Both methods have provided disappointing outcomes in explaining the association between obesity and DPs. The cluster analysis, in comparison with the a priori approach, was more useful for finding any relationship between DPs and central obesity. Our study highlighted the importance of method used to derive DPs in exploring associations between diet and obesity.

  14. Effects of measurement unobservability on neural extended Kalman filter tracking

    NASA Astrophysics Data System (ADS)

    Stubberud, Stephen C.; Kramer, Kathleen A.

    2009-05-01

    An important component of tracking fusion systems is the ability to fuse various sensors into a coherent picture of the scene. When multiple sensor systems are being used in an operational setting, the types of data vary. A significant but often overlooked concern of multiple sensors is the incorporation of measurements that are unobservable. An unobservable measurement is one that may provide information about the state, but cannot recreate a full target state. A line of bearing measurement, for example, cannot provide complete position information. Often, such measurements come from passive sensors such as a passive sonar array or an electronic surveillance measure (ESM) system. Unobservable measurements will, over time, cause the measurement uncertainty to grow without bound. While some tracking implementations have triggers to protect against the detrimental effects, many maneuver tracking algorithms avoid discussing this implementation issue. One maneuver tracking technique is the neural extended Kalman filter (NEKF). The NEKF is an adaptive estimation algorithm that estimates the target track as it trains a neural network on-line to reduce the error between the a priori target motion model and the actual target dynamics. The weights of the neural network are trained in a manner similar to the state estimation/parameter estimation Kalman filter techniques. The NEKF has been shown to improve target tracking accuracy through maneuvers and has been used to predict target behavior using the new model that consists of the a priori model and the neural network. The key to the on-line adaptation of the NEKF is the fact that the neural network is trained using the same residuals as the Kalman filter for the tracker. The neural network weights are treated as augmented states to the target track. Through the state-coupling function, the weights are coupled to the target states. Thus, if the measurements cause the states of the target track to be unobservable, then the weights of the neural network have unobservable modes as well. In recent analysis, the NEKF was shown to have a significantly larger growth in the eigenvalues of the error covariance matrix than the standard EKF tracker when the measurements were purely bearings-only. This caused detrimental effects to the ability of the NEKF to model the target dynamics. In this work, the analysis is expanded to determine the detrimental effects of bearings-only measurements of various uncertainties on the performance of the NEKF when these unobservable measurements are interlaced with completely observable measurements. This analysis provides the ability to put implementation limitations on the NEKF when bearings-only sensors are present.

  15. Quantum state discrimination bounds for finite sample size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audenaert, Koenraad M. R.; Mosonyi, Milan; Mathematical Institute, Budapest University of Technology and Economics, Egry Jozsef u 1., Budapest 1111

    2012-12-15

    In the problem of quantum state discrimination, one has to determine by measurements the state of a quantum system, based on the a priori side information that the true state is one of the two given and completely known states, ρ or σ. In general, it is not possible to decide the identity of the true state with certainty, and the optimal measurement strategy depends on whether the two possible errors (mistaking ρ for σ, or the other way around) are treated as of equal importance or not. Results on the quantum Chernoff and Hoeffding bounds and the quantum Stein's lemma show that, if several copies of the system are available, then the optimal error probabilities decay exponentially in the number of copies, and the decay rate is given by a certain statistical distance between ρ and σ (the Chernoff distance, the Hoeffding distances, and the relative entropy, respectively). While these results provide a complete solution to the asymptotic problem, they are not completely satisfying from a practical point of view. Indeed, in realistic scenarios one has access only to finitely many copies of a system, and therefore it is desirable to have bounds on the error probabilities for finite sample size. In this paper we provide finite-size bounds on the so-called Stein errors, the Chernoff errors, the Hoeffding errors, and the mixed error probabilities related to the Chernoff and the Hoeffding errors.
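
    For the asymptotic part of the story, the quantum Chernoff exponent can be evaluated directly from the two density matrices; the sketch below computes ξ = -log min_{0<=s<=1} Tr(ρ^s σ^(1-s)) for full-rank states and does not reproduce the paper's finite-sample bounds.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def mat_power(rho, s):
          """Fractional power of a positive semidefinite matrix via eigendecomposition."""
          w, v = np.linalg.eigh(rho)
          w = np.clip(w, 0.0, None)
          return (v * w ** s) @ v.conj().T

      def chernoff_distance(rho, sigma):
          """Quantum Chernoff distance, the exponent of the symmetric error decay (sketch)."""
          q = lambda s: np.real(np.trace(mat_power(rho, s) @ mat_power(sigma, 1.0 - s)))
          res = minimize_scalar(q, bounds=(0.0, 1.0), method="bounded")
          return -np.log(res.fun)

      # toy usage: two slightly different qubit states
      rho = np.array([[0.7, 0.1], [0.1, 0.3]])
      sigma = np.array([[0.6, 0.0], [0.0, 0.4]])
      print(chernoff_distance(rho, sigma))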

  16. Geomagnetic model investigations for 1980 - 1989: A model for strategic defense initiative particle beam experiments and a study in the effects of data types and observatory bias solutions

    NASA Technical Reports Server (NTRS)

    Langel, Robert A.; Sabaka, T. J.; Baldwin, R. T.

    1991-01-01

    Two suites of geomagnetic field models were generated at the request of Los Alamos National Laboratory concerning Strategic Defense Initiative (SDI) research. The first is a progression of five models incorporating MAGSAT data and data from a sequence of batches as a priori information. The batch sequence is: post 1979.5 observatory data, post 1980 land survey and selected aeromagnetic and marine survey data, a special White Sands (NM) area survey by Project Magnet with some additional post 1980 marine survey data, and finally DE-2 satellite data. These models are of degree and order 13 in their main field terms, and degree and order 10 in their first-derivative temporal terms. The second suite consists of four models based solely upon post 1983.5 observatory and survey data. They are of degree and order 10 in the main field and 8 in a first-degree Taylor series. A comprehensive error analysis was applied to both series, which accounted for error sources such as the truncated core and crustal fields, and the neglected Sq and low-degree crustal fields. Comparison of the power spectrum of the MGST (10/81) model with those of this series shows good agreement.

  17. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    PubMed Central

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
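
    A bare-bones sketch of the composite-kernel idea: average a set of candidate kernels and form the usual variance-component score statistic Q = r'Kr on the residual phenotype. Covariate adjustment and the perturbation procedure used to obtain p-values are omitted, and the candidate kernels shown are illustrative choices, not the paper's.

      import numpy as np

      def composite_kernel_score(G, y, kernels):
          """Kernel-machine score statistic with a composite kernel (sketch).

          G       : n x p genotype matrix for the SNP set
          y       : continuous phenotype, here simply centred (no covariates)
          kernels : list of callables, each building an n x n candidate kernel
          """
          K = sum(k(G) for k in kernels) / len(kernels)    # unweighted composite kernel
          r = y - y.mean()
          return float(r @ K @ r)

      candidate_kernels = [
          lambda G: G @ G.T,                 # linear kernel
          lambda G: (1.0 + G @ G.T) ** 2,    # quadratic kernel allowing interactions
      ]

      # toy usage
      rng = np.random.default_rng(4)
      G = rng.integers(0, 3, size=(200, 10)).astype(float)
      y = 0.5 * G[:, 0] + rng.normal(size=200)
      print(composite_kernel_score(G, y, candidate_kernels))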

  18. Assessment and Verification of SLS Block 1-B Exploration Upper Stage State and Stage Disposal Performance

    NASA Technical Reports Server (NTRS)

    Patrick, Sean; Oliver, Emerson

    2018-01-01

    One of the SLS Navigation System's key performance requirements is a constraint on the payload system's delta-v allocation to correct for insertion errors due to vehicle state uncertainty at payload separation. The SLS navigation team has developed a Delta-Delta-V analysis approach to assess the effect on trajectory correction maneuver (TCM) design needed to correct for navigation errors. This approach differs from traditional covariance analysis based methods and makes no assumptions with regard to the propagation of the state dynamics. This allows for consideration of non-linearity in the propagation of state uncertainties. The Delta-Delta-V analysis approach re-optimizes perturbed SLS mission trajectories by varying key mission states in accordance with an assumed state error. The state error is developed from detailed vehicle 6-DOF Monte Carlo analysis or generated using covariance analysis. These perturbed trajectories are compared to a nominal trajectory to determine necessary TCM design. To implement this analysis approach, a tool set was developed which combines the functionality of a 3-DOF trajectory optimization tool, Copernicus, and a detailed 6-DOF vehicle simulation tool, Marshall Aerospace Vehicle Representation in C (MAVERIC). In addition to delta-v allocation constraints on SLS navigation performance, SLS mission requirements dictate successful upper stage disposal. Due to engine and propellant constraints, the SLS Exploration Upper Stage (EUS) must dispose into heliocentric space by means of a lunar fly-by maneuver. As with payload delta-v allocation, upper stage disposal maneuvers must place the EUS on a trajectory that maximizes the probability of achieving a heliocentric orbit post lunar fly-by, considering all sources of vehicle state uncertainty prior to the maneuver. To ensure disposal, the SLS navigation team has developed an analysis approach to derive optimal disposal guidance targets. This approach maximizes the state error covariance prior to the maneuver to develop and re-optimize a nominal disposal maneuver (DM) target that, if achieved, would maximize the potential for successful upper stage disposal. For EUS disposal analysis, a set of two tools was developed. The first considers only the nominal pre-disposal maneuver state, vehicle constraints, and an a priori estimate of the state error covariance. In the analysis, the optimal nominal disposal target is determined. This is performed by re-formulating the trajectory optimization to consider constraints on the eigenvectors of the error ellipse applied to the nominal trajectory. A bisection search methodology is implemented in the tool to refine these dispersions resulting in the maximum dispersion feasible for successful disposal via lunar fly-by. Success is defined based on the probability that the vehicle will not impact the lunar surface and will achieve a characteristic energy (C3) relative to the Earth such that it is no longer in the Earth-Moon system. The second tool propagates post-disposal maneuver states to determine the success of disposal for provided trajectory achieved states. This is performed using the optimized nominal target within the 6-DOF vehicle simulation. This paper will discuss the application of the Delta-Delta-V analysis approach for performance evaluation as well as trajectory re-optimization so as to demonstrate the system's capability in meeting performance constraints. Additionally, further discussion of the implementation of the disposal analysis will be provided.

  19. An adaptive finite element method for the inequality-constrained Reynolds equation

    NASA Astrophysics Data System (ADS)

    Gustafsson, Tom; Rajagopal, Kumbakonam R.; Stenberg, Rolf; Videman, Juha

    2018-07-01

    We present a stabilized finite element method for the numerical solution of cavitation in lubrication, modeled as an inequality-constrained Reynolds equation. The cavitation model is written as a variable coefficient saddle-point problem and approximated by a residual-based stabilized method. Based on our recent results on the classical obstacle problem, we present optimal a priori estimates and derive novel a posteriori error estimators. The method is implemented as a Nitsche-type finite element technique and shown in numerical computations to be superior to the usually applied penalty methods.

  20. Nonparametric method for failures detection and localization in the actuating subsystem of aircraft control system

    NASA Astrophysics Data System (ADS)

    Karpenko, S. S.; Zybin, E. Yu; Kosyanchuk, V. V.

    2018-02-01

    In this paper we design a nonparametric method for failure detection and localization in the aircraft control system that uses the measurements of the control signals and the aircraft states only. It does not require a priori information about the aircraft model parameters, training or statistical calculations, and is based on algebraic solvability conditions for the aircraft model identification problem. This makes it possible to significantly increase the efficiency of the detection and localization solution by completely eliminating errors associated with aircraft model uncertainties.

  1. Monitoring of ground movement in open pit iron mines of Carajás Province (Amazon region) based on A-DInSAR techniques using TerraSAR-X data

    NASA Astrophysics Data System (ADS)

    Silva, Guilherme Gregório; Mura, José Claudio; Paradella, Waldir Renato; Gama, Fabio Furlan; Temporim, Filipe Altoé

    2017-04-01

    Persistent scatterer interferometry (PSI) analysis of a large area is always a challenging task regarding the removal of the atmospheric phase component. This work presents an investigation of ground movement measurements based on a combination of differential SAR interferometry time-series (DTS) and PSI techniques, applied to a large area containing open pit iron mines located in Carajás (Brazilian Amazon Region), aiming at detecting linear and nonlinear ground movement. These mines have presented a history of instability, and surface monitoring measurements over sectors of the mines (pit walls) have been carried out based on ground-based radar and total station (prisms). Using a priori information regarding the topographic phase error and a phase displacement model derived from DTS, temporal phase unwrapping in the PSI processing and the removal of the atmospheric phases can be performed more efficiently. A set of 33 TerraSAR-X (TSX-1) images, acquired during the period from March 2012 to April 2013, was used to perform this investigation. The DTS analysis was carried out on a stack of multilook unwrapped interferograms using an extension of SVD to obtain the least-squares solution. The height errors and deformation rates provided by the DTS approach were subtracted from the stack of interferograms to perform the PSI analysis. This procedure improved the capability of the PSI analysis to detect high rates of deformation and increased the point density of the final results. The proposed methodology showed good results for monitoring surface displacement in a large mining area, which is located in a rain forest environment, providing very useful information about the ground movement for planning and risk control.
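
    The DTS step described above amounts to a minimum-norm least-squares solve of a rank-deficient linear system linking interval phase velocities to the unwrapped interferometric phases; a generic SBAS-style sketch (illustrative names, no atmospheric or height-error terms) is given below.

      import numpy as np

      def dts_velocities(unwrapped_phase, master_idx, slave_idx, t_acq):
          """Minimum-norm least-squares (SVD) solution for a DInSAR time series (sketch).

          unwrapped_phase       : (n_ifg, n_pix) stack of multilooked, unwrapped phases
          master_idx, slave_idx : acquisition indices forming each interferogram
          t_acq                 : acquisition times (n_acq,)
          Each interferogram phase is modelled as the sum of the mean phase velocities
          over the acquisition intervals it spans; the rank-deficient system is solved
          with the SVD pseudo-inverse.
          """
          n_ifg, n_acq = len(master_idx), len(t_acq)
          B = np.zeros((n_ifg, n_acq - 1))
          dt = np.diff(t_acq)
          for k, (m, s) in enumerate(zip(master_idx, slave_idx)):
              lo, hi = min(m, s), max(m, s)
              sign = 1.0 if s > m else -1.0
              B[k, lo:hi] = sign * dt[lo:hi]            # intervals spanned by this pair
          return np.linalg.pinv(B) @ unwrapped_phase    # phase velocity per interval, per pixel

      # toy usage: 4 acquisitions, 3 consecutive-pair interferograms, 1 pixel
      t = np.array([0.0, 11.0, 22.0, 33.0])
      v_true = np.array([0.1, -0.05, 0.2])              # rad per day per interval
      phases = (np.eye(3) * np.diff(t)) @ v_true
      print(dts_velocities(phases[:, None], [0, 1, 2], [1, 2, 3], t).ravel())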

  2. Monitoring of surface movement in a large area of the open pit iron mines (Carajás, Brazil) based on A-DInSAR techniques using TerraSAR-X data

    NASA Astrophysics Data System (ADS)

    Mura, José C.; Paradella, Waldir R.; Gama, Fabio F.; Silva, Guilherme G.

    2016-10-01

    PSI (Persistent Scatterer Interferometry) analysis of large area is always a challenging task regarding the removal of the atmospheric phase component. This work presents an investigation of ground deformation measurements based on a combination of DInSAR Time-Series (DTS) and PSI techniques, applied in a large area of open pit iron mines located in Carajás (Brazilian Amazon Region), aiming at detect high rates of linear and nonlinear ground deformation. These mines have presented a historical of instability and surface monitoring measurements over sectors of the mines (pit walls) have been carried out based on ground based radar and total station (prisms). By using a priori information regarding the topographic phase error and phase displacement model derived from DTS, temporal phase unwrapping in the PSI processing and the removal of the atmospheric phases can be performed more efficiently. A set of 33 TerraSAR-X-1 images, acquired during the period from March 2012 to April 2013, was used to perform this investigation. The DTS analysis was carried out on a stack of multi-look unwrapped interferogram using an extension of SVD to obtain the Least-Square solution. The height errors and deformation rates provided by the DTS approach were subtracted from the stack of interferogram to perform the PSI analysis. This procedure improved the capability of the PSI analysis to detect high rates of deformation as well as increased the numbers of point density of the final results. The proposed methodology showed good results for monitoring surface displacement in a large mining area, which is located in a rain forest environment, providing very useful information about the ground movement for planning and risks control.

  3. Brain Cortical Thickness Differences in Adolescent Females with Substance Use Disorders.

    PubMed

    Boulos, Peter K; Dalwani, Manish S; Tanabe, Jody; Mikulich-Gilbertson, Susan K; Banich, Marie T; Crowley, Thomas J; Sakai, Joseph T

    2016-01-01

    We recruited right-handed female patients, 14-19 years of age, from a university-based treatment program for youths with substance use disorders and community controls similar for age, race and zip code of residence. We obtained 43 T1-weighted structural brain images (22 patients and 21 controls) to examine group differences in cortical thickness across the entire brain as well as six a priori regions-of-interest: 1) medial orbitofrontal cortex; 2) rostral anterior cingulate cortex; and 3) middle frontal cortex, in each hemisphere. Age and IQ were entered as nuisance factors for all analyses. A priori region-of-interest analyses yielded no significant differences. However, whole-brain group comparisons revealed that the left pregenual rostral anterior cingulate cortex extending into the left medial orbitofrontal region (355.84 mm2 in size), a subset of two of our a priori regions-of-interest, was significantly thinner in patients compared to controls (vertex-level threshold p = 0.005 and cluster-level family wise error corrected threshold p = 0.05). The whole-brain group differences did not survive after adjusting for depression or externalizing scores. Whole-brain within-patient analyses demonstrated a positive association between cortical thickness in the left precuneus and behavioral disinhibition scores (458.23 mm2 in size). Adolescent females with substance use disorders have significant differences in brain cortical thickness in regions engaged by the default mode network and that have been associated with problems of emotional dysregulation, inhibition, and behavioral control in past studies.

  4. A numerical fragment basis approach to SCF calculations.

    NASA Astrophysics Data System (ADS)

    Hinde, Robert J.

    1997-11-01

    The counterpoise method is often used to correct for basis set superposition error in calculations of the electronic structure of bimolecular systems. One drawback of this approach is the need to specify a "reference state" for the system; for reactive systems, the choice of an unambiguous reference state may be difficult. An example is the reaction F^- + HCl → HF + Cl^-. Two obvious reference states for this reaction are F^- + HCl and HF + Cl^-; however, different counterpoise-corrected interaction energies are obtained using these two reference states. We outline a method for performing SCF calculations which employs numerical basis functions; this method attempts to eliminate basis set superposition errors in an a priori fashion. We test the proposed method on two one-dimensional, three-center systems and discuss the possibility of extending our approach to include electron correlation effects.

  5. Atmospheric CO2 inversions on the mesoscale using data-driven prior uncertainties: quantification of the European terrestrial CO2 fluxes

    NASA Astrophysics Data System (ADS)

    Kountouris, Panagiotis; Gerbig, Christoph; Rödenbeck, Christian; Karstens, Ute; Koch, Thomas F.; Heimann, Martin

    2018-03-01

    Optimized biogenic carbon fluxes for Europe were estimated from high-resolution regional-scale inversions, utilizing atmospheric CO2 measurements at 16 stations for the year 2007. Additional sensitivity tests with different data-driven error structures were performed. As the atmospheric network is rather sparse and consequently contains large spatial gaps, we use a priori biospheric fluxes to further constrain the inversions. The biospheric fluxes were simulated by the Vegetation Photosynthesis and Respiration Model (VPRM) at a resolution of 0.1° and optimized against eddy covariance data. Overall we estimate an a priori uncertainty of 0.54 GtC yr-1 related to the poor spatial representation between the biospheric model and the ecosystem sites. The sink estimated from the atmospheric inversions for the area of Europe (as represented in the model domain) ranges between 0.23 and 0.38 GtC yr-1 (0.39 and 0.71 GtC yr-1 up-scaled to geographical Europe). This is within the range of posterior flux uncertainty estimates of previous studies using ground-based observations.

  6. An efficient and flexible Abel-inversion method for noisy data

    NASA Astrophysics Data System (ADS)

    Antokhin, Igor I.

    2016-12-01

    We propose an efficient and flexible method for solving the Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization in itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.
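
    As an illustration of the kind of regularized inversion discussed above, the following numpy sketch discretizes the Abel equation g(y) = 2 ∫ f(r) r / sqrt(r^2 - y^2) dr and solves it with Tikhonov regularization using a second-difference smoothing operator. The grid layout, the smoothing operator, and the regularization parameter are my own assumptions, not the paper's compact-set formulation.

```python
# Tikhonov-regularized Abel inversion on a simple discretization (illustrative sketch).
import numpy as np

def abel_matrix(n, R):
    dr = R / n
    r = (np.arange(n) + 0.5) * dr          # cell centres for the unknown f(r)
    y = np.arange(n) * dr                  # projection ordinates (cell lower edges)
    A = np.zeros((n, n))
    for i in range(n):
        j = np.arange(i, n)                # only r_j > y_i contributes
        A[i, j] = 2.0 * r[j] * dr / np.sqrt(r[j]**2 - y[i]**2)
    return A, r, y

def tikhonov_solve(A, g, lam):
    n = A.shape[1]
    L = np.diff(np.eye(n), 2, axis=0)      # second-difference operator: penalizes roughness
    return np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ g)

# Synthetic test: f(r) = exp(-r^2), noisy projections, recover f.
rng = np.random.default_rng(0)
n, R = 100, 3.0
A, r, y = abel_matrix(n, R)
f_true = np.exp(-r**2)
g = A @ f_true + 0.01 * rng.standard_normal(n)
f_rec = tikhonov_solve(A, g, lam=1e-3)
print("max abs reconstruction error:", np.abs(f_rec - f_true).max())
```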

  7. EIT Imaging of admittivities with a D-bar method and spatial prior: experimental results for absolute and difference imaging.

    PubMed

    Hamilton, S J

    2017-05-22

    Electrical impedance tomography (EIT) is an emerging imaging modality that uses harmless electrical measurements taken on electrodes at a body's surface to recover information about the internal electrical conductivity and/or permittivity. The image reconstruction task of EIT is a highly nonlinear inverse problem that is sensitive to noise and modeling errors, making the reconstruction challenging. D-bar methods solve the nonlinear problem directly, bypassing the need for detailed and time-intensive forward models, to provide absolute (static) as well as time-difference EIT images. Coupling the D-bar methodology with the inclusion of high-confidence a priori data results in a noise-robust regularized image reconstruction method. In this work, the a priori D-bar method for complex admittivities is demonstrated to be effective on experimental tank data for absolute imaging for the first time. Additionally, the method is adjusted for, and tested on, time-difference imaging scenarios. The ability of the method to be used for conductivity, permittivity, absolute as well as time-difference imaging provides the user with great flexibility without a high computational cost.

  8. Synthetic aperture imaging in ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Jayaranthe, Uditha L.; Chen, Elvis C. S.; Peters, Terry M.

    2014-03-01

    Ultrasound calibration allows for ultrasound images to be incorporated into a variety of interventional applications. Traditional Z-bar calibration procedures rely on wired phantoms with an a priori known geometry. The line fiducials produce small, localized echoes which are then segmented from an array of ultrasound images from different tracked probe positions. In conventional B-mode ultrasound, the wires at greater depths appear blurred and are difficult to segment accurately, limiting the accuracy of ultrasound calibration. This paper presents a novel ultrasound calibration procedure that takes advantage of synthetic aperture imaging to reconstruct high resolution ultrasound images at arbitrary depths. In these images, line fiducials are much more readily and accurately segmented, leading to decreased calibration error. The proposed calibration technique is compared to one based on B-mode ultrasound. The fiducial localization error was improved from 0.21 mm in conventional B-mode images to 0.15 mm in synthetic aperture images, corresponding to an improvement of 29%. This resulted in an overall reduction of calibration error from a target registration error of 2.00 mm to 1.78 mm, an improvement of 11%. Synthetic aperture images display greatly improved segmentation capabilities due to their improved resolution and interpretability, resulting in improved calibration.

  9. On the analysis of incoherent scatter radar data from non-thermal ionospheric plasma - Effects of measurement noise and an inexact theory

    NASA Astrophysics Data System (ADS)

    Suvanto, K.

    1990-07-01

    Statistical inversion theory is employed to estimate parameter uncertainties in incoherent scatter radar studies of non-Maxwellian ionospheric plasma. Measurement noise and the inexact nature of the plasma model are considered as potential sources of error. In most of the cases investigated here, it is not possible to determine electron density, line-of-sight ion and electron temperatures, ion composition, and two non-Maxwellian shape factors simultaneously. However, if the molecular ion velocity distribution is highly non-Maxwellian, all these quantities can sometimes be retrieved from the data. This theoretical result supports the validity of the only successful non-Maxwellian, mixed-species fit discussed in the literature. A priori information on one of the parameters, e.g., the electron density, often reduces the parameter uncertainties significantly and makes composition fits possible even if the six-parameter fit cannot be performed. However, small (less than 0.5) non-Maxwellian shape factors remain difficult to distinguish.

  10. Dynamic Response of a Planetary Gear System Using a Finite Element/Contact Mechanics Model

    NASA Technical Reports Server (NTRS)

    Parker, Robert G.; Agashe, Vinayak; Vijayakar, Sandeep M.

    2000-01-01

    The dynamic response of a helicopter planetary gear system is examined over a wide range of operating speeds and torques. The analysis tool is a unique, semianalytical finite element formulation that admits precise representation of the tooth geometry and contact forces that are crucial in gear dynamics. Importantly, no a priori specification of static transmission error excitation or mesh frequency variation is required; the dynamic contact forces are evaluated internally at each time step. The calculated response shows classical resonances when a harmonic of mesh frequency coincides with a natural frequency. However, peculiar behavior occurs where resonances expected to be excited at a given speed are absent. This absence of particular modes is explained by analytical relationships that depend on the planetary configuration and mesh frequency harmonic. The torque sensitivity of the dynamic response is examined and compared to static analyses. Rotation mode response is shown to be more sensitive to input torque than translational mode response.

  11. Low Boom Configuration Analysis with FUN3D Adjoint Simulation Framework

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2011-01-01

    Off-body pressure, forces, and moments for the Gulfstream Low Boom Model are computed with a Reynolds-averaged Navier-Stokes solver coupled with the Spalart-Allmaras (SA) turbulence model. This is the first application of viscous output-based adaptation to reduce estimated discretization errors in off-body pressure for a wing-body configuration. The output adaptation approach is compared to an a priori grid adaptation technique designed to resolve the signature on the centerline by stretching and aligning the grid to the freestream Mach angle. The output-based approach produced good predictions of centerline and off-centerline measurements. Eddy viscosity predicted by the SA turbulence model increased significantly with grid adaptation. Computed lift as a function of drag compares well with wind tunnel measurements for positive lift, but predicted lift, drag, and pitching moment as functions of angle of attack have significant differences from the measured data. The sensitivity of longitudinal forces and moment to grid refinement is much smaller than the differences between the computed and measured data.

  12. A general algorithm for peak-tracking in multi-dimensional NMR experiments.

    PubMed

    Ravel, P; Kister, G; Malliavin, T E; Delsuc, M A

    2007-04-01

    We present an algorithmic method allowing automatic tracking of NMR peaks in a series of spectra. It consists of a two-phase analysis. The first phase is a local modeling of the peak displacement between two consecutive experiments using distance matrices. Then, from the coefficients of these matrices, a value graph containing the a priori set of possible paths used by these peaks is generated. On this set, minimization of the target function under constraints by a heuristic approach provides a solution to the peak-tracking problem. This approach has been named GAPT, standing for General Algorithm for NMR Peak Tracking. It has been validated in numerous simulations resembling those encountered in NMR spectroscopy. We show the robustness and limits of the method for situations with many peak-picking errors and a high local density of peaks. It is then applied to a temperature study of the NMR spectrum of the Lipid Transfer Protein (LTP).
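
    The first, local phase described above (matching peaks of two consecutive spectra through their distance matrix) can be illustrated with a much simplified sketch: a single-pair optimal assignment computed with the Hungarian algorithm. This is my own simplification; GAPT itself builds a value graph over the whole series and minimizes the target function heuristically.

```python
# Simplified one-step peak matching between two consecutive spectra (illustrative only).
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

peaks_a = np.array([[8.10, 121.4], [7.85, 118.9], [8.42, 125.0]])   # (1H, 15N) ppm
peaks_b = np.array([[8.12, 121.6], [8.40, 124.8], [7.83, 119.1]])   # next temperature point

D = cdist(peaks_a, peaks_b)                  # distance matrix between the two peak lists
rows, cols = linear_sum_assignment(D)        # minimum-total-displacement matching
for i, j in zip(rows, cols):
    print(f"peak {i} in spectrum A -> peak {j} in spectrum B (displacement {D[i, j]:.3f})")
```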

  13. Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information.

    PubMed

    Zhang, Chi; Tong, Li; Zeng, Ying; Jiang, Jingfang; Bu, Haibing; Yan, Bin; Li, Jianxin

    2015-01-01

    Electroencephalogram (EEG) is susceptible to various nonneural physiological artifacts. Automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. The combination of discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, was utilized to separate artifact components. The artifact components were then automatically identified using a priori artifact information, which was acquired in advance. Subsequently, signal reconstruction without the artifact components was performed to obtain artifact-free signals. The results showed that, using this automatic online artifact removal method, there were statistically significant improvements in classification accuracy in both experiments, namely motor imagery and emotion recognition.

  14. Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information

    PubMed Central

    Zhang, Chi; Tong, Li; Zeng, Ying; Jiang, Jingfang; Bu, Haibing; Li, Jianxin

    2015-01-01

    Electroencephalogram (EEG) is susceptible to various nonneural physiological artifacts. Automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. The combination of discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, was utilized to separate artifact components. The artifact components were then automatically identified using a priori artifact information, which was acquired in advance. Subsequently, signal reconstruction without the artifact components was performed to obtain artifact-free signals. The results showed that, using this automatic online artifact removal method, there were statistically significant improvements in classification accuracy in both experiments, namely motor imagery and emotion recognition. PMID:26380294
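
    A minimal, simplified sketch of the pipeline described in the two records above is given below. The published method combines a discrete wavelet transform with ICA (wavelet-ICA); for brevity, this illustration applies ICA directly to a synthetic multichannel signal and flags components by correlation with an a priori artifact template. All signals, thresholds, and parameters here are assumptions for illustration only.

```python
# Simplified ICA-based artifact removal guided by an a priori artifact template.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_samples, n_channels = 2000, 8
t = np.arange(n_samples) / 250.0                        # 250 Hz sampling

brain = rng.standard_normal((n_samples, n_channels))    # stand-in for neural background
blink = np.exp(-((t - 4.0) ** 2) / 0.01)                # a priori artifact waveform (EOG-like)
eeg = brain + np.outer(blink, rng.uniform(0.5, 2.0, n_channels))

ica = FastICA(n_components=n_channels, random_state=0)
sources = ica.fit_transform(eeg)                        # (n_samples, n_components)

# Flag components strongly correlated with the a priori artifact template.
corr = np.array([abs(np.corrcoef(sources[:, k], blink)[0, 1]) for k in range(n_channels)])
artifact_idx = np.where(corr > 0.5)[0]

sources_clean = sources.copy()
sources_clean[:, artifact_idx] = 0.0                    # remove artifact components
eeg_clean = ica.inverse_transform(sources_clean)        # reconstruct artifact-free signals
print("components removed:", artifact_idx)
```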

  15. A priori analysis: an application to the estimate of the uncertainty in course grades

    NASA Astrophysics Data System (ADS)

    Lippi, G. L.

    2014-07-01

    A priori analysis (APA) is discussed as a tool to assess the reliability of grades in standard curricular courses. This unusual, but striking, application is presented when teaching the section on the data treatment of a laboratory course to illustrate the characteristics of the APA and its potential for widespread use, beyond the traditional physics curriculum. The conditions necessary for this kind of analysis are discussed, the general framework is set out and a specific example is given to illustrate its various aspects. Students are often struck by this unusual application and are more apt to remember the APA. Instructors may also benefit from some of the gathered information, as discussed in the paper.

  16. Recovery of intrinsic fluorescence from single-point interstitial measurements for quantification of doxorubicin concentration.

    PubMed

    Baran, Timothy M; Foster, Thomas H

    2013-10-01

    We developed a method for the recovery of intrinsic fluorescence from single-point measurements in highly scattering and absorbing samples without a priori knowledge of the sample optical properties. The goal of the study was to demonstrate accurate recovery of fluorophore concentration in samples with widely varying background optical properties, while simultaneously recovering the optical properties. Tissue-simulating phantoms containing doxorubicin, MnTPPS, and Intralipid-20% were created, and fluorescence measurements were performed using a single isotropic probe. The resulting spectra were analyzed using a forward-adjoint fluorescence model in order to recover the fluorophore concentration and background optical properties. We demonstrated recovery of doxorubicin concentration with a mean error of 11.8%. The concentration of the background absorber was recovered with an average error of 23.2% and the scattering spectrum was recovered with a mean error of 19.8%. This method will allow for the determination of local concentrations of fluorescent drugs, such as doxorubicin, from minimally invasive fluorescence measurements. This is particularly interesting in the context of transarterial chemoembolization (TACE) treatment of liver cancer. © 2013 Wiley Periodicals, Inc.

  17. A Novel A Posteriori Investigation of Scalar Flux Models for Passive Scalar Dispersion in Compressible Boundary Layer Flows

    NASA Astrophysics Data System (ADS)

    Braman, Kalen; Raman, Venkat

    2011-11-01

    A novel direct numerical simulation (DNS) based a posteriori technique has been developed to investigate scalar transport modeling error. The methodology is used to test Reynolds-averaged Navier-Stokes turbulent scalar flux models for compressible boundary layer flows. Time-averaged DNS velocity and turbulence fields provide the information necessary to evolve the time-averaged scalar transport equation without requiring the use of turbulence modeling. With this technique, passive dispersion of a scalar from a boundary layer surface in a supersonic flow is studied with scalar flux modeling error isolated from any flowfield modeling errors. Several different scalar flux models are used. It is seen that the simple gradient diffusion model overpredicts scalar dispersion, while anisotropic scalar flux models underpredict dispersion. Further, the use of more complex models does not necessarily guarantee an increase in predictive accuracy, indicating that key physics is missing from existing models. Using comparisons of both a priori and a posteriori scalar flux evaluations with DNS data, the main modeling shortcomings are identified. Results will be presented for different boundary layer conditions.

  18. Map based navigation for autonomous underwater vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuohy, S.T.; Leonard, J.J.; Bellingham, J.G.

    1995-12-31

    In this work, a map-based navigation algorithm is developed wherein measured geophysical properties are matched to a priori maps. The objective is a complete algorithm applicable to a small, power-limited AUV which performs in real time to a required resolution with bounded position error. Interval B-splines are introduced for the non-linear representation of two-dimensional geophysical parameters that have measurement uncertainty. Fine-scale position determination involves the solution of a system of nonlinear polynomial equations with interval coefficients. This system represents the complete set of possible vehicle locations and is formulated as the intersection of contours established on each map from the simultaneous measurement of associated geophysical parameters. A standard filter mechanism, based on a bounded interval error model, predicts the position of the vehicle and, therefore, screens extraneous solutions. When multiple solutions are found, a tracking mechanism is applied until a unique vehicle location is determined.

  19. QUADRATIC SERENDIPITY FINITE ELEMENTS ON POLYGONS USING GENERALIZED BARYCENTRIC COORDINATES.

    PubMed

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2014-01-01

    We introduce a finite element construction for use on the class of convex, planar polygons and show it obtains a quadratic error convergence estimate. On a convex n-gon, our construction produces 2n basis functions, associated in a Lagrange-like fashion to each vertex and each edge midpoint, by transforming and combining a set of n(n + 1)/2 basis functions known to obtain quadratic convergence. The technique broadens the scope of the so-called 'serendipity' elements, previously studied only for quadrilateral and regular hexahedral meshes, by employing the theory of generalized barycentric coordinates. Uniform a priori error estimates are established over the class of convex quadrilaterals with bounded aspect ratio as well as over the class of convex planar polygons satisfying additional shape regularity conditions to exclude large interior angles and short edges. Numerical evidence is provided on a trapezoidal quadrilateral mesh, previously not amenable to serendipity constructions, and applications to adaptive meshing are discussed.

  20. Maximum likelihood estimation of label imperfections and its use in the identification of mislabeled patterns

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of estimating label imperfections and the use of the estimates in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions are also given for the asymptotic variances of the probability of correct classification and of the proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of thresholds on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing the thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.

  1. Acousto-thermometric recovery of the deep temperature profile using heat conduction equations

    NASA Astrophysics Data System (ADS)

    Anosov, A. A.; Belyaev, R. V.; Vilkov, V. A.; Dvornikova, M. V.; Dvornikova, V. V.; Kazanskii, A. S.; Kuryatnikova, N. A.; Mansfel'd, A. D.

    2012-09-01

    In a model experiment using the acousto-thermographic method, deep temperature profiles varying in time are recovered. In the recovery algorithm, we used a priori information in the form of a requirement that the calculated temperature must satisfy the heat conduction equation. The problem is reduced to determining two parameters: the initial temperature and the temperature conductivity (thermal diffusivity) coefficient of the object under consideration (a plasticine band). During the experiment, independent verification was provided by electronic thermometers mounted inside the plasticine. The error in the temperature conductivity coefficient was about 17%, and the error in the initial temperature determination was less than one degree. Such recovery results allow this approach to be applied to a number of medical problems. It is also shown experimentally that acoustic irregularities influence the acousto-thermometric results; in the chosen experimental scheme (which corresponds to measurements of human muscle tissue), this influence can be neglected.
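
    The two-parameter estimation described above (initial temperature and thermal diffusivity constrained by the heat conduction equation) can be mimicked with a small synthetic example: a 1-D explicit finite-difference forward model of a cooling slab is fitted to noisy probe readings with scipy. The geometry, boundary conditions, and parameter values are illustrative assumptions, not the experimental setup.

```python
# Fitting (initial temperature, diffusivity) to synthetic probe data (illustrative sketch).
import numpy as np
from scipy.optimize import least_squares

def probe_temperature(T0, alpha, L=0.02, n=41, t_end=600.0, dt=0.05, T_env=20.0, probe=20):
    """Explicit finite-difference solution of the 1-D heat equation for a slab initially
    at T0 with both faces held at T_env; returns the probe-node temperature every 10 s."""
    dx = L / (n - 1)
    r = alpha * dt / dx**2                     # stability requires r <= 0.5
    T = np.full(n, T0, dtype=float)
    T[0] = T[-1] = T_env
    samples = []
    for k in range(int(t_end / dt)):
        T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        if k % 200 == 0:                       # sample every 10 s at dt = 0.05 s
            samples.append(T[probe])
    return np.array(samples)

# Synthetic "measurements" from assumed true parameters, plus noise.
rng = np.random.default_rng(0)
data = probe_temperature(35.0, 1.2e-7) + 0.1 * rng.standard_normal(60)

# Fit the two parameters so the heat-conduction model reproduces the observations.
fit = least_squares(lambda p: probe_temperature(p[0], p[1]) - data,
                    x0=[30.0, 1.0e-7],
                    bounds=([15.0, 1e-8], [60.0, 1e-6]),
                    x_scale=[10.0, 1e-7], diff_step=1e-3)
print("recovered T0 = %.2f C, alpha = %.2e m^2/s" % (fit.x[0], fit.x[1]))
```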

  2. Discovering Hidden Controlling Parameters using Data Analytics and Dimensional Analysis

    NASA Astrophysics Data System (ADS)

    Del Rosario, Zachary; Lee, Minyong; Iaccarino, Gianluca

    2017-11-01

    Dimensional Analysis is a powerful tool, one which takes a priori information and produces important simplifications. However, if this a priori information - the list of relevant parameters - is missing a relevant quantity, then the conclusions from Dimensional Analysis will be incorrect. In this work, we present novel conclusions in Dimensional Analysis, which provide a means to detect this failure mode of missing or hidden parameters. These results are based on a restated form of the Buckingham Pi theorem that reveals a ridge function structure underlying all dimensionless physical laws. We leverage this structure by constructing a hypothesis test based on sufficient dimension reduction, allowing for an experimental data-driven detection of hidden parameters. Both theory and examples will be presented, using classical turbulent pipe flow as the working example. Keywords: experimental techniques, dimensional analysis, lurking variables, hidden parameters, buckingham pi, data analysis. First author supported by the NSF GRFP under Grant Number DGE-114747.
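
    The Buckingham Pi construction underlying the discussion above can be made concrete with a few lines of sympy applied to the classical turbulent pipe flow example: the null space of the dimensional matrix yields the exponents of the independent dimensionless groups. The variable list and matrix below are my own illustration; the ridge-function result and the hidden-parameter hypothesis test themselves are not reproduced here.

```python
# Buckingham Pi via the null space of the dimensional matrix (illustrative example).
import sympy as sp

# Variables for fully developed pipe flow: pressure gradient dp/dx, diameter D,
# density rho, viscosity mu, bulk velocity U, wall roughness eps.
names = ["dp/dx", "D", "rho", "mu", "U", "eps"]
# Rows: exponents of M, L, T for each variable (columns in the order above).
Dmat = sp.Matrix([[ 1, 0,  1,  1,  0, 0],    # mass
                  [-2, 1, -3, -1,  1, 1],    # length
                  [-2, 0,  0, -1, -1, 0]])   # time

# Each null-space vector gives the exponents of one independent dimensionless group;
# 6 variables - rank 3 = 3 groups, consistent with the Buckingham Pi theorem.
for vec in Dmat.nullspace():
    group = " * ".join(f"{n}^({e})" for n, e in zip(names, vec) if e != 0)
    print(group)
```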

  3. Epinephrine Auto-Injector Versus Drawn Up Epinephrine for Anaphylaxis Management: A Scoping Review.

    PubMed

    Chime, Nnenna O; Riese, Victoria G; Scherzer, Daniel J; Perretta, Julianne S; McNamara, LeAnn; Rosen, Michael A; Hunt, Elizabeth A

    2017-08-01

    Anaphylaxis is a life-threatening event. Most clinical symptoms of anaphylaxis can be reversed by prompt intramuscular administration of epinephrine using an auto-injector or epinephrine drawn up in a syringe; delays and errors may be fatal. The aim of this scoping review is to identify and compare errors associated with use of epinephrine drawn up in a syringe versus epinephrine auto-injectors in order to assist hospitals as they choose which approach minimizes the risk of adverse events for their patients. PubMed, Embase, CINAHL, Web of Science, and the Cochrane Library were searched using terms agreed upon a priori. We reviewed human and simulation studies reporting errors associated with the use of epinephrine in anaphylaxis. There were multiple screening stages with evolving feedback. Each study was independently assessed by two reviewers for eligibility. Data were extracted using an instrument modeled on the Zaza et al instrument and grouped into themes. Three main themes were noted: 1) ergonomics, 2) dosing errors, and 3) errors due to route of administration. Significant knowledge gaps in the operation of epinephrine auto-injectors among healthcare providers, patients, and caregivers were identified. For epinephrine in a syringe, there were more frequent reports of incorrect dosing and erroneous IV administration with associated adverse cardiac events. For the epinephrine auto-injector, unintentional administration to the digit was an error reported on multiple occasions. This scoping review highlights knowledge gaps and a diverse set of errors regardless of the approach to epinephrine preparation during management of anaphylaxis. There are more potentially life-threatening errors reported for epinephrine drawn up in a syringe than with the auto-injectors. The impact of these knowledge gaps and potentially fatal errors on patient outcomes, cost, and quality of care is worthy of further investigation.

  4. Improved error estimates of a discharge algorithm for remotely sensed river measurements: Test cases on Sacramento and Garonne Rivers

    NASA Astrophysics Data System (ADS)

    Yoon, Yeosang; Garambois, Pierre-André; Paiva, Rodrigo C. D.; Durand, Michael; Roux, Hélène; Beighley, Edward

    2016-01-01

    We present an improvement to a previously presented algorithm that used a Bayesian Markov chain Monte Carlo method for estimating river discharge from remotely sensed observations of river height, width, and slope. We also present an error budget for discharge calculations from the algorithm. The algorithm may be utilized by the upcoming Surface Water and Ocean Topography (SWOT) mission. We present a detailed evaluation of the method using synthetic SWOT-like observations (i.e., SWOT and AirSWOT, an airborne version of SWOT). The algorithm is evaluated using simulated AirSWOT observations over the Sacramento and Garonne Rivers, which have differing hydraulic characteristics. The algorithm is also explored using SWOT observations over the Sacramento River. SWOT and AirSWOT height, width, and slope observations are simulated by corrupting the "true" hydraulic modeling results with instrument error. Algorithm discharge root mean square error (RMSE) was 9% for the Sacramento River and 15% for the Garonne River for the AirSWOT case using expected observation error. The discharge uncertainty calculated from Manning's equation was 16.2% and 17.1%, respectively. For the SWOT scenario, the RMSE and uncertainty of the discharge estimate for the Sacramento River were 15% and 16.2%, respectively. A method based on the Kalman filter to correct errors of discharge estimates was shown to improve algorithm performance. From the error budget, the primary source of uncertainty was the a priori uncertainty of bathymetry and roughness parameters. Sensitivity to measurement errors was found to be a function of river characteristics. For example, the steeper Garonne River is less sensitive to slope errors than the flatter Sacramento River.
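
    As a hedged worked example of the Manning-equation relation behind the uncertainty figures quoted above: for a wide rectangular channel (hydraulic radius ≈ depth), Manning's equation reduces to Q = (1/n) W H^(5/3) S^(1/2), and a first-order error budget follows directly from the exponents. The numbers below are illustrative, not values from the study.

```python
# Manning-equation discharge and first-order uncertainty propagation (illustrative).
import numpy as np

def manning_discharge(n, W, H, S):
    """Discharge of a wide rectangular channel: Q = (1/n) * W * H^(5/3) * sqrt(S)."""
    return (1.0 / n) * W * H ** (5.0 / 3.0) * np.sqrt(S)

def relative_discharge_error(rel_n, rel_W, rel_H, rel_S):
    """First-order propagation of independent relative errors through the power law."""
    return np.sqrt(rel_n**2 + rel_W**2 + (5.0 / 3.0 * rel_H)**2 + (0.5 * rel_S)**2)

Q = manning_discharge(n=0.03, W=150.0, H=4.0, S=1e-4)                     # m^3/s
err = relative_discharge_error(rel_n=0.10, rel_W=0.02, rel_H=0.05, rel_S=0.10)
print(f"Q = {Q:.0f} m^3/s, first-order relative uncertainty = {100 * err:.1f} %")
```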

  5. Lidar inversion of atmospheric backscatter and extinction-to-backscatter ratios by use of a Kalman filter.

    PubMed

    Rocadenbosch, F; Soriano, C; Comerón, A; Baldasano, J M

    1999-05-20

    A first inversion of the backscatter profile and extinction-to-backscatter ratio from pulsed elastic-backscatter lidar returns is treated by means of an extended Kalman filter (EKF). The EKF approach enables one to overcome the intrinsic limitations of standard straightforward nonmemory procedures such as the slope method, exponential curve fitting, and the backward inversion algorithm. Whereas those procedures are inherently not adaptable because independent inversions are performed for each return signal and neither the statistics of the signals nor a priori uncertainties (e.g., boundary calibrations) are taken into account, in the case of the Kalman filter the filter updates itself because it is weighted by the imbalance between the a priori estimates of the optical parameters (i.e., past inversions) and the new estimates based on a minimum-variance criterion, as long as there are different lidar returns. Calibration errors and initialization uncertainties can be assimilated also. The study begins with the formulation of the inversion problem and an appropriate atmospheric stochastic model. Based on extensive simulation and realistic conditions, it is shown that the EKF approach enables one to retrieve the optical parameters as time-range-dependent functions and hence to track the atmospheric evolution; the performance of this approach is limited only by the quality and availability of the a priori information and the accuracy of the atmospheric model used. The study ends with an encouraging practical inversion of a live scene measured at the Nd:YAG elastic-backscatter lidar station at our premises at the Polytechnic University of Catalonia, Barcelona.
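
    The predict/update structure of the extended Kalman filter described above can be summarized in a short, generic numpy function. The actual lidar state vector, the observation function h(x) (the elastic lidar equation), and the noise statistics are specific to the paper and are not reproduced here; the toy usage at the end is purely illustrative.

```python
# Generic extended-Kalman-filter cycle (illustrative, not the lidar-specific filter).
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One EKF predict/update cycle.

    x, P : prior state estimate and covariance
    z    : new measurement vector
    f, F : state transition function and its Jacobian
    h, H : observation function and its Jacobian
    Q, R : process and measurement noise covariances
    """
    # Predict: propagate the previous ("a priori") estimate forward.
    x_pred = f(x)
    P_pred = F(x) @ P @ F(x).T + Q
    # Update: blend prediction and new measurement with the minimum-variance gain.
    Hk = H(x_pred)
    S = Hk @ P_pred @ Hk.T + R
    K = P_pred @ Hk.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
    return x_new, P_new

# Toy usage: a scalar random-walk state observed directly.
f = lambda x: x
F = lambda x: np.array([[1.0]])
h = lambda x: x
H = lambda x: np.array([[1.0]])
x, P = np.array([0.0]), np.array([[1.0]])
for z in [0.2, 0.4, 0.35]:
    x, P = ekf_step(x, P, np.array([z]), f, F, h, H,
                    Q=np.array([[0.01]]), R=np.array([[0.1]]))
print("state:", x, "covariance:", P)
```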

  6. Undersampling strategies for compressed sensing accelerated MR spectroscopic imaging

    NASA Astrophysics Data System (ADS)

    Vidya Shankar, Rohini; Hu, Houchun Harry; Bikkamane Jayadev, Nutandev; Chang, John C.; Kodibagkar, Vikram D.

    2017-03-01

    Compressed sensing (CS) can accelerate magnetic resonance spectroscopic imaging (MRSI), facilitating its widespread clinical integration. The objective of this study was to assess the effect of different undersampling strategies on CS-MRSI reconstruction quality. Phantom data were acquired on a Philips 3 T Ingenia scanner. Four types of undersampling masks, corresponding to each strategy, namely low resolution, variable density, iterative design, and a priori, were simulated in Matlab and retrospectively applied to the test 1X MRSI data to generate undersampled datasets corresponding to 2X-5X and 7X accelerations for each type of mask. Reconstruction parameters were kept the same in each case (all masks and accelerations) to ensure that any resulting differences can be attributed to the type of mask being employed. The reconstructed datasets from each mask were statistically compared with the reference 1X data and assessed using metrics such as the root mean square error and metabolite ratios. Simulation results indicate that both the a priori and variable density undersampling masks maintain high fidelity with the 1X reference up to five-fold acceleration. The low-resolution mask reconstructions showed statistically significant differences from the 1X reference, with the reconstruction failing at 3X, while the iterative design reconstructions maintained fidelity with the 1X reference up to 4X acceleration. In summary, a pilot study was conducted to identify an optimal sampling mask for CS-MRSI. Simulation results demonstrate that the a priori and variable density masks can provide statistically similar results to the fully sampled reference. Future work would involve implementing these two masks prospectively on a clinical scanner.
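
    A small sketch of how a variable-density Cartesian undersampling mask of the kind compared above can be generated: phase-encode lines are retained with a probability that decays away from the k-space centre until the target acceleration is reached. The density law and parameters are illustrative assumptions, not the masks used in the study.

```python
# Variable-density Cartesian undersampling mask generator (illustrative sketch).
import numpy as np

def variable_density_mask(n_lines=64, accel=4, power=4.0, seed=0):
    rng = np.random.default_rng(seed)
    k = np.abs(np.arange(n_lines) - n_lines // 2) / (n_lines / 2)   # normalized |k|
    pdf = (1.0 - k) ** power                                        # denser near centre
    pdf *= (n_lines / accel) / pdf.sum()                            # target number of kept lines
    mask = rng.random(n_lines) < np.clip(pdf, 0.0, 1.0)
    mask[n_lines // 2 - 2: n_lines // 2 + 2] = True                 # always keep the centre
    return mask

m = variable_density_mask()
print(f"kept {m.sum()} of {m.size} lines (~{m.size / m.sum():.1f}x acceleration)")
```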

  7. A Study on Gröbner Basis with Inexact Input

    NASA Astrophysics Data System (ADS)

    Nagasaka, Kosaku

    The Gröbner basis is one of the most important tools in recent symbolic algebraic computation. However, computing a Gröbner basis for a given polynomial ideal is not easy, and it is not numerically stable if the polynomials have inexact coefficients. In this paper, we study what should be computed as a Gröbner basis when the coefficients are inexact, and we introduce a naive method that computes a Gröbner basis via the reduced row echelon form for the ideal generated by a polynomial set whose coefficients carry a priori errors.

  8. A Priori Error-Controlled Simulation of Electromagnetic Phenomena for HPC

    DTIC Science & Technology

    2013-06-12

    ...our project manager from the Army in July, coinciding with Prof. Hagstrom's visit to HyPerComp. We are presently aiming to integrate the CRBC module... 2P(P + 1) + P^2 = 3P^2 + 2P equations in 3(P + 1)^2 = 3P^2 + 6P + 3 variables. Thus 4P + 3 additional equations are required. We first incorporate incoming data... 4(P + 1) = 4P + 4 equations, one more than we can use. We remove an equation by only imposing the sum of (30) and (33) for j = k = 0.

  9. Remaining lifetime modeling using State-of-Health estimation

    NASA Astrophysics Data System (ADS)

    Beganovic, Nejra; Söffker, Dirk

    2017-08-01

    Technical systems and their components undergo gradual degradation over time. Continuous degradation is reflected in decreased system reliability and unavoidably leads to system failure. Therefore, continuous evaluation of State-of-Health (SoH) is indispensable to guarantee at least the lifetime predefined by the manufacturer or, better, to extend it. A precondition for lifetime extension is accurate estimation of SoH as well as estimation and prediction of the Remaining Useful Lifetime (RUL). For this purpose, lifetime models describing the relation between system/component degradation and consumed lifetime have to be established. In this contribution, the modeling and selection of suitable lifetime models from a database, based on current SoH conditions, are discussed. The main contribution of this paper is the development of new modeling strategies capable of describing complex relations between measurable system variables, the related system degradation, and RUL. Two approaches, with their accompanying advantages and disadvantages, are introduced and compared. Both are capable of modeling stochastic aging processes of a system by simultaneously adapting RUL models to the current SoH. The first approach requires a priori knowledge about aging processes in the system and an accurate estimation of SoH; here, SoH estimation is conditioned on tracking the damage actually accumulated in the system, so that particular model parameters are defined according to a priori assumptions about the system's aging. Prediction accuracy in this case depends strongly on accurate SoH estimation, and the model has a high number of degrees of freedom. The second approach does not require a priori knowledge about the system's aging, as the model parameters are defined by a multi-objective optimization procedure; its prediction accuracy does not depend strongly on the estimated SoH, and the model has fewer degrees of freedom. Both approaches rely on previously developed lifetime models, each corresponding to a predefined SoH. In the first approach, model selection is aided by a state-machine-based algorithm; in the second, model selection is conditioned on the exceedance of predefined thresholds. The approach is applied to data generated from tribological systems. By calculating the Root Squared Error (RSE), Mean Squared Error (MSE), and Absolute Error (ABE), the accuracy of the proposed models/approaches is discussed along with their related advantages and disadvantages. Verification is done using cross-fold validation, exchanging training and test data. The newly introduced approach based on data-driven parametric models can be easily established, providing detailed information about the remaining useful/consumed lifetime, valid for systems with constant load but stochastically occurring damage.

  10. Reconstruction of Atmospheric Tracer Releases with Optimal Resolution Features: Concentration Data Assimilation

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Turbelin, Gregory; Issartel, Jean-Pierre; Kumar, Pramod; Feiz, Amir Ali

    2015-04-01

    Fast-growing urbanization, industrialization, and military developments increase the risks to human populations and the environment. This has been demonstrated by several past incidents with casualties, for instance, the Chernobyl nuclear explosion (Ukraine), the Bhopal gas leak (India), and the Fukushima-Daiichi radionuclide release (Japan). To reduce the threat and exposure to hazardous contaminants, a fast, preliminary identification of unknown releases is required by the responsible authorities for emergency preparedness and air quality analysis. Often, early detection of such contaminants is pursued with a distributed sensor network. However, identifying the origin and strength of unknown releases from the sensor-reported concentrations is a challenging task. This requires an optimal strategy to integrate the measured concentrations with the predictions given by atmospheric dispersion models. This is an inverse problem. The measured concentrations are insufficient, and atmospheric dispersion models suffer from inaccuracy due to the lack of process understanding, turbulence uncertainties, etc. These lead to a loss of information in the reconstruction process and thus affect the resolution, stability, and uniqueness of the retrieved source. An additional well-known issue is the numerical artifact arising at the measurement locations due to the strong concentration gradients and the dissipative nature of the concentration field. Thus, assimilation techniques are desired which can lead to an optimal retrieval of the unknown releases. In general, this is facilitated within a Bayesian inference and optimization framework with a suitable choice of a priori information, regularization constraints, and measurement and background error statistics. An inversion technique is introduced here for an optimal reconstruction of unknown releases using limited concentration measurements. It is based on an adjoint representation of the source-receptor relationship and the use of a weight function which encodes a priori information about the unknown releases as apparent to the monitoring network. The properties of the weight function provide optimal data resolution and model resolution for the retrieved source estimates. The retrieved source estimates are proved theoretically to be stable against random measurement errors, and their reliability can be interpreted in terms of the distribution of the weight functions. Further, the same framework can be extended to the identification of point-type releases by utilizing the maximum of the retrieved source estimates. The inversion technique has been evaluated with several diffusion experiments, such as the Idaho low-wind diffusion experiment (1974), the IIT Delhi tracer experiment (1991), the European Tracer Experiment (1994), and the Fusion Field Trials (2007). In the point-release experiments, the source parameters are mostly retrieved close to the true values with minimal error. The proposed technique primarily overcomes two major difficulties encountered in source reconstruction: (i) the initialization of the source parameters required by optimization-based techniques, on which the converged solution depends; and (ii) the statistical knowledge about the measurement and background errors required by Bayesian-inference-based techniques, which must be assumed hypothetically when no prior knowledge is available.

  11. ECHO: A reference-free short-read error correction algorithm

    PubMed Central

    Kao, Wei-Chun; Chan, Andrew H.; Song, Yun S.

    2011-01-01

    Developing accurate, scalable algorithms to improve data quality is an important computational challenge associated with recent advances in high-throughput sequencing technology. In this study, a novel error-correction algorithm, called ECHO, is introduced for correcting base-call errors in short reads, without the need of a reference genome. Unlike most previous methods, ECHO does not require the user to specify parameters whose optimal values are typically unknown a priori. ECHO automatically sets the parameters in the assumed model and estimates error characteristics specific to each sequencing run, while maintaining a running time that is within the range of practical use. ECHO is based on a probabilistic model and is able to assign a quality score to each corrected base. Furthermore, it explicitly models heterozygosity in diploid genomes and provides a reference-free method for detecting bases that originated from heterozygous sites. On both real and simulated data, ECHO is able to improve the accuracy of previous error-correction methods by severalfold to an order of magnitude, depending on the sequence coverage depth and the position in the read. The improvement is most pronounced toward the end of the read, where previous methods become noticeably less effective. Using a whole-genome yeast data set, it is demonstrated here that ECHO is capable of coping with nonuniform coverage. Also, it is shown that using ECHO to perform error correction as a preprocessing step considerably facilitates de novo assembly, particularly in the case of low-to-moderate sequence coverage depth. PMID:21482625

  12. Measurement Properties of the Modified Spinal Function Sort (M-SFS): Is It Reliable and Valid in Workers with Chronic Musculoskeletal Pain?

    PubMed

    Trippolini, Maurizio Alen; Janssen, Svenja; Hilfiker, Roger; Oesch, Peter

    2018-06-01

    Purpose To analyze the reliability and validity of a picture-based questionnaire, the Modified Spinal Function Sort (M-SFS). Methods Sixty-two injured workers with chronic musculoskeletal disorders (MSD) were recruited from two work rehabilitation centers. Internal consistency was assessed by Cronbach's alpha. Construct validity was tested based on four a priori hypotheses. Structural validity was measured with principal component analysis (PCA). Test-retest reliability and agreement were evaluated using the intraclass correlation coefficient (ICC), and measurement error with the limits of agreement (LoA). Results The total score of the M-SFS was 54.4 (SD 16.4) and 56.1 (16.4) for test and retest, respectively. Item distribution showed no ceiling effects. Cronbach's alpha was 0.94 and 0.95 for test and retest, respectively. PCA showed the presence of four components explaining a total of 74% of the variance. Item communalities were >0.6 in 17 out of 20 items. The ICC was 0.90; the LoA was ±12.6/16.2 points. The correlations of the M-SFS were 0.89 with the original SFS, 0.49 with the Pain Disability Index, -0.37 and -0.33 with the Numeric Rating Scale for actual pain, -0.52 with self-reported disability due to chronic low back pain, and 0.50 and 0.56-0.59 with three distinct lifting tests. No a priori defined hypothesis for construct validity was rejected. Conclusions The M-SFS allows reliable and valid assessment of perceived self-efficacy for work-related tasks and can be recommended for use in patients with chronic MSD. Further research should investigate the proposed M-SFS score of <56 for its predictive validity for non-return to work.

  13. Trial-by-Trial Changes in a Priori Informational Value of External Cues and Subjective Expectancies in Human Auditory Attention

    PubMed Central

    Arjona, Antonio; Gómez, Carlos M.

    2011-01-01

    Background Preparatory activity based on a priori probabilities generated in previous trials and on subjective expectancies would produce an attentional bias. However, preparation can be correct (valid) or incorrect (invalid) depending on the actual target stimulus. The alternation effect refers to the subjective expectancy that a target will not be repeated in the same position, causing RTs to increase if the target location is repeated. The present experiment, using Posner's central cue paradigm, tries to demonstrate that not only the credibility of the cue but also the expectancy about the next position of the target are changed on a trial-by-trial basis. Sequences of trials were analyzed. Results The results indicated an increase in RT benefits for sequences of two and three valid trials. The analysis of errors indicated an increase in anticipatory behavior which grows as the number of valid trials increases. On the other hand, there was also an RT benefit when a trial was preceded by trials in which the position of the target changed with respect to the current trial (alternation effect). Sequences of two alternations or two repetitions were faster than sequences of trials in which a pattern of repetition or alternation is broken. Conclusions Taken together, these results suggest that in Posner's central cue paradigm, with regard to anticipatory activity, the credibility of the external cue and of the endogenously anticipated patterns of target location are constantly updated. The results suggest that Bayesian rules operate in the generation of anticipatory activity as a function of the previous trial's outcome, but also on biases or prior beliefs like the "gambler's fallacy". PMID:21698164

  14. Homogenized total ozone data records from the European sensors GOME/ERS-2, SCIAMACHY/Envisat, and GOME-2/MetOp-A

    NASA Astrophysics Data System (ADS)

    Lerot, C.; Van Roozendael, M.; Spurr, R.; Loyola, D.; Coldewey-Egbers, M.; Kochenova, S.; van Gent, J.; Koukouli, M.; Balis, D.; Lambert, J.-C.; Granville, J.; Zehner, C.

    2014-02-01

    Within the European Space Agency's Climate Change Initiative, total ozone column records from GOME (Global Ozone Monitoring Experiment), SCIAMACHY (SCanning Imaging Absorption SpectroMeter for Atmospheric CartograpHY), and GOME-2 have been reprocessed with GODFIT version 3 (GOME-type Direct FITting). This algorithm is based on the direct fitting of reflectances simulated in the Huggins bands to the observations. We report on new developments in the algorithm from the version implemented in the operational GOME Data Processor v5. The a priori ozone profile database TOMSv8 is now combined with a recently compiled OMI/MLS tropospheric ozone climatology to improve the representativeness of a priori information. The Ring procedure that corrects simulated radiances for the rotational Raman inelastic scattering signature has been improved using a revised semi-empirical expression. Correction factors are also applied to the simulated spectra to account for atmospheric polarization. In addition, the computational performance has been significantly enhanced through the implementation of new radiative transfer tools based on principal component analysis of the optical properties. Furthermore, a soft-calibration scheme for measured reflectances and based on selected Brewer measurements has been developed in order to reduce the impact of level-1 errors. This soft-calibration corrects not only for possible biases in backscattered reflectances, but also for artificial spectral features interfering with the ozone signature. Intersensor comparisons and ground-based validation indicate that these ozone data sets are of unprecedented quality, with stability better than 1% per decade, a precision of 1.7%, and systematic uncertainties less than 3.6% over a wide range of atmospheric states.

  15. Prophylactic Bracing Has No Effect on Lower Extremity Alignment or Functional Performance.

    PubMed

    Hueber, Garrett A; Hall, Emily A; Sage, Brad W; Docherty, Carrie L

    2017-07-01

    Prophylactic ankle bracing is commonly used during physical activity. Understanding how bracing affects body mechanics is critically important when discussing both injury prevention and sport performance. The purpose is to determine whether ankle bracing affects lower extremity mechanics during the Landing Error Scoring System (LESS) test and the Sage Sway Index (SSI). Thirty physically active participants volunteered for this study. Participants completed the LESS and SSI in both braced and unsupported conditions. Total errors were recorded for the LESS. Total errors and time (seconds) were recorded for the SSI. The Wilcoxon signed-rank test was utilized to evaluate any differences between the brace conditions for each dependent variable. The a priori alpha level was set at p<0.05. The Wilcoxon signed-rank test yielded no significant difference between the braced and unsupported conditions for the LESS (Z=-0.35, p=0.72), SSI time (Z=-0.36, p=0.72), or SSI errors (Z=-0.37, p=0.71). Ankle braces had no effect on subjective clinical assessments of lower extremity alignment or postural stability. Utilization of a prophylactic support at the ankle did not substantially alter the proximal components of the lower kinetic chain. © Georg Thieme Verlag KG Stuttgart · New York.

  16. Data science approaches to pharmacogenetics.

    PubMed

    Penrod, N M; Moore, J H

    2014-01-01

    Pharmacogenetic studies rely on applied statistics to evaluate genetic data describing natural variation in response to pharmacotherapeutics such as drugs and vaccines. In the beginning, these studies were based on candidate gene approaches that specifically focused on efficacy or adverse events correlated with variants of single genes. This hypothesis-driven method required the researcher to have a priori knowledge of which genes or gene sets to investigate. According to rational design, the focus of these studies has been on drug-metabolizing enzymes, drug transporters, and drug targets. As technology has progressed, these studies have transitioned to hypothesis-free explorations where markers across the entire genome can be measured in large-scale, population-based, genome-wide association studies (GWAS). This enables identification of novel genetic biomarkers and therapeutic targets, as well as analysis of gene-gene interactions, which may reveal molecular mechanisms of drug activities. Ultimately, the challenge is to utilize gene-drug associations to create dosing algorithms based on individual genotypes, which will guide physicians and ensure they prescribe the correct dose of the correct drug the first time, eliminating trial and error and adverse events. We review here basic concepts and applications of data science to the genetic analysis of pharmacologic outcomes.

  17. A comparison of zero-order, first-order, and monod biotransformation models

    USGS Publications Warehouse

    Bekins, B.A.; Warren, E.; Godsy, E.M.

    1998-01-01

    Under some conditions, a first-order kinetic model is a poor representation of biodegradation in contaminated aquifers. Although it is well known that the assumption of first-order kinetics is valid only when the substrate concentration, S, is much less than the half-saturation constant, Ks, this assumption is often made without verification of this condition. We present a formal error analysis showing that the relative error in the first-order approximation is S/Ks and in the zero-order approximation the error is Ks/S. We then examine the problems that arise when the first-order approximation is used outside the range for which it is valid. A series of numerical simulations comparing results of first- and zero-order rate approximations to Monod kinetics for a real data set illustrates that if concentrations observed in the field are higher than Ks, it may be better to model degradation using a zero-order rate expression. Compared with Monod kinetics, extrapolation of a first-order rate to lower concentrations under-predicts the biotransformation potential, while extrapolation to higher concentrations may grossly over-predict the transformation rate. A summary of solubilities and Monod parameters for aerobic benzene, toluene, and xylene (BTX) degradation shows that the a priori assumption of first-order degradation kinetics at sites contaminated with these compounds is not valid. In particular, out of six published values of Ks for toluene, only one is greater than 2 mg/L, indicating that when toluene is present in concentrations greater than about a part per million, the assumption of first-order kinetics may be invalid. Finally, we apply an existing analytical solution for steady-state one-dimensional advective transport with Monod degradation kinetics to a field data set.
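
    The error statements above (relative error S/Ks for the first-order approximation and Ks/S for the zero-order approximation) can be verified numerically in a few lines; the parameter values are illustrative only.

```python
# Numerical check of the first-order and zero-order errors relative to the Monod rate.
import numpy as np

k, Ks = 1.0, 2.0                                  # illustrative Monod parameters (mg/L scale)
S = np.array([0.1, 1.0, 2.0, 10.0, 50.0])         # substrate concentrations, mg/L

monod = k * S / (Ks + S)                          # Monod rate r = k S / (Ks + S)
first_order = (k / Ks) * S                        # first-order approximation
zero_order = np.full_like(S, k)                   # zero-order approximation

rel_err_first = (first_order - monod) / monod     # equals S/Ks
rel_err_zero = (zero_order - monod) / monod       # equals Ks/S
for s, e1, e0 in zip(S, rel_err_first, rel_err_zero):
    print(f"S = {s:5.1f}  first-order error = {e1:6.2f} (S/Ks = {s / Ks:.2f})  "
          f"zero-order error = {e0:6.2f} (Ks/S = {Ks / s:.2f})")
```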

  18. Scope of inextensible frame hypothesis in local action analysis of spherical reservoirs

    NASA Astrophysics Data System (ADS)

    Vinogradov, Yu. I.

    2017-05-01

    Spherical reservoirs, as weight-optimal structures, are used in spacecraft, where thin-walled elements are joined by frames into multifunction structures. The junctions are local, which gives rise to stress concentration regions and the corresponding stiffness problems. The thin-walled elements are reinforced by frames to decrease the stresses in them. To simplify the analysis of the mathematical model of the joint deformation of the shell (which is a mathematical idealization of the reservoir) and the frame, the assumption that the frame axial line is inextensible is widely used (in particular, in the handbook literature). Unjustified use of this assumption significantly distorts the picture of the stress-strain state. In this paper, the example of a lens-shaped structure formed by two spherical shell segments connected by a frame of square cross section is used to carry out a numerical comparative analysis of the solutions with and without the inextensible-frame hypothesis. The scope of the hypothesis is shown as a function of the structure's geometric parameters and the degree of load localization. The obtained results can be used to determine the stress-strain state of the thin-walled structure with an a priori prescribed error, for example, in the research and experimental design of aerospace systems.

  19. Modeling uncertainties for tropospheric nitrogen dioxide columns affecting satellite-based inverse modeling of nitrogen oxides emissions

    NASA Astrophysics Data System (ADS)

    Lin, J.-T.; Liu, Z.; Zhang, Q.; Liu, H.; Mao, J.; Zhuang, G.

    2012-12-01

    Errors in chemical transport models (CTMs) interpreting the relation between space-retrieved tropospheric column densities of nitrogen dioxide (NO2) and emissions of nitrogen oxides (NOx) have important consequences on the inverse modeling. They are however difficult to quantify due to lack of adequate in situ measurements, particularly over China and other developing countries. This study proposes an alternate approach for model evaluation over East China, by analyzing the sensitivity of modeled NO2 columns to errors in meteorological and chemical parameters/processes important to the nitrogen abundance. As a demonstration, it evaluates the nested version of GEOS-Chem driven by the GEOS-5 meteorology and the INTEX-B anthropogenic emissions and used with retrievals from the Ozone Monitoring Instrument (OMI) to constrain emissions of NOx. The CTM has been used extensively for such applications. Errors are examined for a comprehensive set of meteorological and chemical parameters using measurements and/or uncertainty analysis based on current knowledge. Results are exploited then for sensitivity simulations perturbing the respective parameters, as the basis of the following post-model linearized and localized first-order modification. It is found that the model meteorology likely contains errors of various magnitudes in cloud optical depth, air temperature, water vapor, boundary layer height and many other parameters. Model errors also exist in gaseous and heterogeneous reactions, aerosol optical properties and emissions of non-nitrogen species affecting the nitrogen chemistry. Modifications accounting for quantified errors in 10 selected parameters increase the NO2 columns in most areas with an average positive impact of 18% in July and 8% in January, the most important factor being modified uptake of the hydroperoxyl radical (HO2) on aerosols. This suggests a possible systematic model bias such that the top-down emissions will be overestimated by the same magnitude if the model is used for emission inversion without corrections. The modifications however cannot eliminate the large model underestimates in cities and other extremely polluted areas (particularly in the north) as compared to satellite retrievals, likely pointing to underestimates of the a priori emission inventory in these places with important implications for understanding of atmospheric chemistry and air quality. Note that these modifications are simplified and should be interpreted with caution for error apportionment.

  20. A set-theoretic model reference adaptive control architecture for disturbance rejection and uncertainty suppression with strict performance guarantees

    NASA Astrophysics Data System (ADS)

    Arabi, Ehsan; Gruenwald, Benjamin C.; Yucelen, Tansel; Nguyen, Nhan T.

    2018-05-01

    Research in adaptive control algorithms for safety-critical applications is primarily motivated by the fact that these algorithms have the capability to suppress the effects of adverse conditions resulting from exogenous disturbances, imperfect dynamical system modelling, degraded modes of operation, and changes in system dynamics. Although government and industry agree on the potential of these algorithms in providing safety and reducing vehicle development costs, a major issue is the inability to achieve a priori, user-defined performance guarantees with adaptive control algorithms. In this paper, a new model reference adaptive control architecture for uncertain dynamical systems is presented to address disturbance rejection and uncertainty suppression. The proposed framework is predicated on a set-theoretic adaptive controller construction using generalised restricted potential functions. The key feature of this framework is that it allows the system error bound between the state of an uncertain dynamical system and the state of a reference model, which captures a desired closed-loop system performance, to be less than an a priori, user-defined worst-case performance bound, and hence it has the capability to enforce strict performance guarantees. Examples are provided to demonstrate the efficacy of the proposed set-theoretic model reference adaptive control architecture.
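
    As a rough illustration of how a barrier-type (restricted potential) weighting can keep the model-following error inside a user-defined bound, the toy simulation below runs a scalar model reference adaptive controller whose adaptation is weighted by a term that grows as the error approaches a prescribed worst-case bound. The plant, gains, disturbance, and the specific barrier function are illustrative assumptions, not the generalized restricted potential construction of the paper.

      import numpy as np

      # Toy scalar MRAC with a barrier-type ("restricted potential") adaptation weight.
      dt, T = 1e-3, 20.0
      a_true = 0.5                 # unknown plant parameter (used only to simulate the plant)
      a_m, b_m = -2.0, 2.0         # reference model
      gamma = 50.0                 # base adaptation gain
      eps = 1.0                    # a priori, user-defined worst-case bound on |e| (assumed)

      t = np.arange(0.0, T, dt)
      r = np.sign(np.sin(0.5 * t))            # piecewise-constant reference command
      d = 0.1 * np.sin(3.0 * t)               # bounded exogenous disturbance

      x = xm = 0.0
      k_hat = 0.0
      e_hist = np.empty_like(t)
      for i in range(t.size):
          e = x - xm
          e_hist[i] = e
          # Barrier weight: grows as |e| approaches eps, boosting adaptation near the bound.
          phi = 1.0 / max(eps**2 - e**2, 0.05)
          u = -k_hat * x + b_m * r[i]          # adaptive feedback plus reference feedforward
          k_hat += dt * gamma * phi * e * x    # barrier-weighted gradient adaptive law
          x  += dt * (a_true * x + u + d[i])   # uncertain plant with disturbance (Euler step)
          xm += dt * (a_m * xm + b_m * r[i])   # reference model

      print(f"max |e| = {np.abs(e_hist).max():.3f}  (user-defined bound eps = {eps})")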

  1. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods.

    PubMed

    Vizcaíno, Iván P; Carrera, Enrique V; Muñoz-Romero, Sergio; Cumbal, Luis H; Rojo-Álvarez, José Luis

    2017-10-16

    Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniformly sampled spatio-temporal data structure characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatio-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information on the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatio-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer's kernel given by either the Mahalanobis spatio-temporal covariance matrix or the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently across all six water quality variables, which highlights the relevance of including a priori knowledge of the problem.
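
    A minimal sketch of the general approach, using scikit-learn's SVR with a precomputed kernel so that any Mercer kernel built from spatio-temporal coordinates (for example, one derived from an estimated autocorrelation or a Mahalanobis-type covariance) can be plugged in. The Gaussian-on-Mahalanobis kernel, length scales, and synthetic data below are stand-ins, not the kernels or measurements of the study.

      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)

      # Synthetic non-uniform spatio-temporal samples: columns are (distance_km, time_days).
      X = np.column_stack([rng.uniform(0, 10, 120), rng.uniform(0, 30, 120)])
      y = np.sin(0.6 * X[:, 0]) + 0.3 * np.cos(0.2 * X[:, 1]) + 0.1 * rng.standard_normal(120)

      # Mercer kernel from a hypothetical spatio-temporal covariance: a Gaussian kernel on the
      # Mahalanobis distance defined by assumed inverse length scales in space and time.
      M = np.diag([1.0 / 2.0**2, 1.0 / 7.0**2])        # 2 km and 7 day length scales (assumed)

      def st_kernel(A, B):
          diff = A[:, None, :] - B[None, :, :]
          d2 = np.einsum('ijk,kl,ijl->ij', diff, M, diff)
          return np.exp(-0.5 * d2)

      model = SVR(kernel="precomputed", C=10.0, epsilon=0.05)
      model.fit(st_kernel(X, X), y)                     # Gram matrix between training samples

      # Interpolate on a regular space-time grid for mapping.
      xs, ts = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 30, 50))
      grid = np.column_stack([xs.ravel(), ts.ravel()])
      y_map = model.predict(st_kernel(grid, X))         # kernel between grid and training points
      print("interpolated field shape:", y_map.reshape(50, 50).shape)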

  2. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods

    PubMed Central

    Vizcaíno, Iván P.; Muñoz-Romero, Sergio; Cumbal, Luis H.

    2017-01-01

    Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniformly sampled spatio-temporal data structure characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatio-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information on the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatio-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer's kernel given by either the Mahalanobis spatio-temporal covariance matrix or the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently across all six water quality variables, which highlights the relevance of including a priori knowledge of the problem. PMID:29035333

  3. Evaluation and Application of Satellite-Based Latent Heating Profile Estimation Methods

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Grecu, Mircea; Yang, Song; Tao, Wei-Kuo

    2004-01-01

    In recent years, methods for estimating atmospheric latent heating vertical structure from both passive and active microwave remote sensing have matured to the point where quantitative evaluation of these methods is the next logical step. Two approaches for heating algorithm evaluation are proposed: First, application of heating algorithms to synthetic data, based upon cloud-resolving model simulations, can be used to test the internal consistency of heating estimates in the absence of systematic errors in physical assumptions. Second, comparisons of satellite-retrieved vertical heating structures to independent ground-based estimates, such as rawinsonde-derived analyses of heating, provide an additional test. The two approaches are complementary, since systematic errors in heating indicated by the second approach may be confirmed by the first. A passive microwave and a combined passive/active microwave heating retrieval algorithm are evaluated using the described approaches. In general, the passive microwave algorithm heating profile estimates are subject to biases due to the limited vertical heating structure information contained in the passive microwave observations. These biases may be partly overcome by including more environment-specific a priori information in the algorithm's database of candidate solution profiles. The combined passive/active microwave algorithm utilizes the much higher-resolution vertical structure information provided by spaceborne radar data to produce less biased estimates; however, the global spatio-temporal sampling by spaceborne radar is limited. In the present study, the passive/active microwave algorithm is used to construct a more physically consistent and environment-specific set of candidate solution profiles for the passive microwave algorithm and to help evaluate errors in the passive algorithm's heating estimates. Although satellite estimates of latent heating are based upon instantaneous, footprint-scale data, suppression of random errors requires averaging to at least half-degree resolution. Analysis of mesoscale and larger space-time scale phenomena based upon passive and passive/active microwave heating estimates from TRMM, SSMI, and AMSR data will be presented at the conference.
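
    The database-driven passive retrieval step, in which candidate heating profiles are combined according to how well their simulated brightness temperatures match the observations, can be sketched as a Bayesian weighted average over the candidate database. The database, observation vector, and error covariance below are synthetic placeholders, not the TRMM/SSMI/AMSR configuration.

      import numpy as np

      rng = np.random.default_rng(1)

      n_cand, n_chan, n_lev = 200, 5, 20          # candidate profiles, channels, vertical levels
      heights = np.linspace(0.5, 15.0, n_lev)     # km

      # Hypothetical database: each candidate has a heating profile and simulated brightness temps.
      peaks = rng.uniform(2.0, 10.0, n_cand)
      Q_db = 5.0 * np.exp(-0.5 * ((heights[None, :] - peaks[:, None]) / 2.0) ** 2)   # K/day
      Tb_db = 250.0 + 0.4 * Q_db[:, ::n_lev // n_chan][:, :n_chan] \
              + rng.normal(0.0, 0.2, (n_cand, n_chan))

      Tb_obs = Tb_db[42] + rng.normal(0.0, 0.5, n_chan)   # pretend observation near candidate 42
      S_e = np.diag(np.full(n_chan, 0.5 ** 2))            # observation error covariance (assumed)

      # Bayesian weighting of database candidates by their fit to the observed brightness temps.
      resid = Tb_db - Tb_obs
      chi2 = np.einsum('ij,jk,ik->i', resid, np.linalg.inv(S_e), resid)
      w = np.exp(-0.5 * chi2)
      w /= w.sum()

      Q_retrieved = w @ Q_db                              # posterior-mean heating profile
      print("retrieved heating peak at %.1f km" % heights[np.argmax(Q_retrieved)])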

  4. An error covariance model for sea surface topography and velocity derived from TOPEX/POSEIDON altimetry

    NASA Technical Reports Server (NTRS)

    Tsaoussi, Lucia S.; Koblinsky, Chester J.

    1994-01-01

    To facilitate the use of satellite-derived sea surface topography and velocity in oceanographic models, a methodology is presented for deriving the total error covariance and its geographic distribution from TOPEX/POSEIDON measurements. The model is formulated using a parametric fit to the altimeter range observations. The topography and velocity are modeled with spherical harmonic expansions whose coefficients are found through optimal adjustment to the altimeter range residuals using Bayesian statistics. All other parameters, including the orbit, geoid, surface models, and range corrections, are provided as unadjusted parameters. The maximum likelihood estimates and errors are derived from the probability density function of the altimeter range residuals conditioned with a priori information. Estimates of model errors for the unadjusted parameters are obtained from the TOPEX/POSEIDON postlaunch verification results and the error covariances for the orbit and the geoid, except for the ocean tides. The error in the ocean tides is modeled, first, as the difference between two global tide models and, second, as the correction to the present tide model, the correction being derived from the TOPEX/POSEIDON data. A formal error covariance propagation scheme is used to derive the total error. Our global total error estimate for the TOPEX/POSEIDON topography relative to the geoid for one 10-day period is found to be 11 cm root mean square (RMS). When the error in the geoid is removed, thereby providing an estimate of the time-dependent error, the uncertainty in the topography is 3.5 cm RMS. This level of accuracy is consistent with direct comparisons of TOPEX/POSEIDON altimeter heights with tide gauge measurements at 28 stations. In addition, the error correlation length scales are derived globally in both east-west and north-south directions, which should prove useful for data assimilation. The largest error correlation length scales are found in the tropics. Errors in the velocity field are smallest in midlatitude regions. For both variables, the largest errors are caused by uncertainty in the geoid. More accurate representations of the geoid await a dedicated geopotential satellite mission. Substantial improvements in the accuracy of ocean tide models are expected in the very near future from research with TOPEX/POSEIDON data.
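
    The formal error covariance propagation used to combine adjusted and unadjusted parameter errors can be illustrated with the standard linear rule Sigma_y = J Sigma_x J^T, where J holds the partial derivatives of the derived quantity (here, a topography value at one location) with respect to the parameters. The Jacobian and covariances below are invented for illustration only.

      import numpy as np

      # Hypothetical parameter blocks: 3 adjusted harmonic coefficients and 2 unadjusted
      # parameters (orbit, tide correction), with their covariances (cm^2 where relevant).
      S_adj   = np.diag([4.0, 2.5, 1.0])       # posterior covariance of adjusted coefficients
      S_unadj = np.diag([9.0, 6.0])            # assumed error covariance of unadjusted parameters
      S_x = np.block([[S_adj, np.zeros((3, 2))],
                      [np.zeros((2, 3)), S_unadj]])

      # Jacobian of the sea surface topography at one location w.r.t. all five parameters.
      J = np.array([[0.8, -0.3, 0.5, 1.0, 0.4]])

      S_y = J @ S_x @ J.T                      # propagated topography error variance
      print(f"topography 1-sigma error: {np.sqrt(S_y[0, 0]):.2f} cm")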

  5. Characterizing the SWOT discharge error budget on the Sacramento River, CA

    NASA Astrophysics Data System (ADS)

    Yoon, Y.; Durand, M. T.; Minear, J. T.; Smith, L.; Merry, C. J.

    2013-12-01

    The Surface Water and Ocean Topography (SWOT) mission is an upcoming satellite mission (planned at the time for 2020) that will provide surface-water elevation and surface-water extent globally. One goal of SWOT is the estimation of river discharge directly from SWOT measurements. SWOT discharge uncertainty is due to two sources. First, SWOT cannot directly measure the channel bathymetry and roughness coefficient necessary for discharge calculations; these parameters must be estimated from the measurements or from a priori information. Second, SWOT measurement errors directly impact the discharge estimate accuracy. This study focuses on characterizing parameter and measurement uncertainties for SWOT river discharge estimation. A Bayesian Markov chain Monte Carlo scheme is used to calculate parameter estimates, given the measurements of river height, slope and width, and mass and momentum constraints. The algorithm is evaluated using both simulated SWOT and AirSWOT (the airborne version of SWOT) observations over seven reaches (about 40 km) of the Sacramento River. The SWOT and AirSWOT observations are simulated by corrupting the 'true' HEC-RAS hydraulic modeling results with the instrument error. This experiment shows how unknown bathymetry and roughness coefficients affect the accuracy of the river discharge algorithm. From the experiment, the discharge error budget is almost completely dominated by unknown bathymetry and roughness; 81% of the error variance is explained by uncertainties in bathymetry and roughness. Second, we show how errors in the water surface, slope, and width observations influence the accuracy of the discharge estimates. Indeed, there is a significant sensitivity to water surface, slope, and width errors due to the sensitivity of bathymetry and roughness to measurement errors. Increasing the water-surface error above 10 cm leads to a correspondingly sharper increase of the errors in bathymetry and roughness. Increasing the slope error above 1.5 cm/km leads to a significant degradation due to direct error in the discharge estimates. As the width error increases past 20%, the discharge error budget is dominated by the width error. The above experiments are performed for AirSWOT scenarios. In addition, we explore the sensitivity of the algorithm to the SWOT scenarios.
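
    The way parameter and measurement errors map into discharge error can be sketched with a Manning-type relation of the kind often used in SWOT discharge work, Q = (1/n) A^(5/3) W^(-2/3) S^(1/2), by perturbing each input and comparing the resulting relative discharge changes. The nominal reach values and perturbation magnitudes below are illustrative assumptions, not the Sacramento River configuration.

      import numpy as np

      def manning_discharge(A0, dA, W, S, n):
          """Manning-type discharge from baseline area A0, area anomaly dA, width W, slope S."""
          A = A0 + dA
          return (1.0 / n) * A ** (5.0 / 3.0) * W ** (-2.0 / 3.0) * np.sqrt(S)

      # Nominal (hypothetical) reach values.
      nominal = dict(A0=800.0, dA=50.0, W=120.0, S=2.0e-4, n=0.035)
      Q0 = manning_discharge(**nominal)

      # Perturbations representing 1-sigma errors in each input (assumed magnitudes).
      perturbations = {"A0": 80.0, "dA": 10.0, "W": 0.2 * 120.0, "S": 1.5e-5, "n": 0.005}

      print(f"nominal Q = {Q0:.1f} m^3/s")
      for name, dp in perturbations.items():
          p = dict(nominal)
          p[name] += dp
          dQ = manning_discharge(**p) - Q0
          print(f"{name:>2s} error of {dp:g} -> relative discharge error {dQ / Q0:+.1%}")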

  6. A Methodology to Separate and Analyze a Seismic Wide Angle Profile

    NASA Astrophysics Data System (ADS)

    Weinzierl, Wolfgang; Kopp, Heidrun

    2010-05-01

    General solutions of inverse problems can often be obtained through the introduction of probability distributions to sample the model space. We present a simple approach to defining an a priori space in a tomographic study and retrieve the velocity-depth posterior distribution by a Monte Carlo method. Utilizing a fitting routine designed for very low statistics to set up and analyze the obtained tomography results, it is possible to statistically separate the velocity-depth model space derived from the inversion of seismic refraction data. An example of a profile acquired in the Lesser Antilles subduction zone reveals the effectiveness of this approach. The resolution analysis of the structural heterogeneity includes a divergence analysis, which proves to be capable of dissecting long wide-angle profiles for deep crust and upper mantle studies. The complete information of any parameterised physical system is contained in the a posteriori distribution. Methods for analyzing and displaying key properties of the a posteriori distributions of highly nonlinear inverse problems are therefore essential in the scope of any interpretation. From this study we infer several conclusions concerning the interpretation of the tomographic approach. By calculating global as well as singular misfits of velocities, we are able to map different geological units along a profile. Comparing velocity distributions with the result of a tomographic inversion along the profile, we can mimic the subsurface structures in their extent and composition. The possibility of gaining a priori information for seismic refraction analysis by a simple solution to an inverse problem, and the subsequent resolution of structural heterogeneities through a divergence analysis, is a new and simple way of defining the a priori space and estimating the a posteriori mean and covariance in singular and general form. The major advantage of a Monte Carlo based approach in our case study is the obtained knowledge of velocity-depth distributions. Certainly, the decision of where to extract velocity information on the profile for setting up a Monte Carlo ensemble limits the a priori space. However, the general conclusion of analyzing the velocity field according to distinct reference distributions gives us the possibility to define the covariance according to any geological unit if we have a priori information on the velocity-depth distributions. Using the wide-angle data recorded across the Lesser Antilles arc, we are able to resolve a shallow feature like the backstop by a robust and simple divergence analysis. We demonstrate the effectiveness of the new methodology to extract some key features and properties from the inversion results by including information concerning the confidence level of the results.
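
    The general idea of defining an a priori model space by sampling and then weighting the samples by data misfit to obtain a posteriori statistics (mean and covariance of a velocity-depth profile) can be sketched as follows. The forward operator, prior ranges, and noise level are placeholders, not the Lesser Antilles setup.

      import numpy as np

      rng = np.random.default_rng(3)

      depths = np.linspace(0.0, 30.0, 16)                 # km
      n_samples = 5000

      # A priori space: linear velocity gradients with random surface velocity and gradient.
      v0 = rng.uniform(2.0, 5.0, n_samples)               # km/s at the surface
      g  = rng.uniform(0.05, 0.20, n_samples)             # km/s per km
      models = v0[:, None] + g[:, None] * depths[None, :]

      # Synthetic "observed" data from a hidden true model plus noise.
      true_v = 3.5 + 0.12 * depths
      def forward(v):                                     # toy forward operator (cumulative slowness)
          return np.cumsum(1.0 / v, axis=-1)
      d_obs = forward(true_v) + rng.normal(0.0, 0.05, depths.size)

      # Weight each prior sample by its data misfit (Gaussian likelihood, sigma assumed known).
      misfit = np.sum((forward(models) - d_obs) ** 2, axis=1) / (2 * 0.05 ** 2)
      w = np.exp(-(misfit - misfit.min()))                # stabilized weights
      w /= w.sum()

      post_mean = w @ models                              # a posteriori mean profile
      post_cov = (models - post_mean).T * w @ (models - post_mean)   # a posteriori covariance
      print("posterior surface velocity: %.2f km/s (true 3.50)" % post_mean[0])
      print("posterior std at surface:   %.2f km/s" % np.sqrt(post_cov[0, 0]))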

  7. Calibrated Multiple Event Relocations of the Central and Eastern United States

    NASA Astrophysics Data System (ADS)

    Yeck, W. L.; Benz, H.; McNamara, D. E.; Bergman, E.; Herrmann, R. B.; Myers, S. C.

    2015-12-01

    Earthquake locations are first-order observables that form the basis of a wide range of seismic analyses. Currently, the ANSS catalog primarily contains published single-event earthquake locations that rely on assumed 1D velocity models. Increasing the accuracy of cataloged earthquake hypocenter locations and origin times and constraining their associated errors can improve our understanding of Earth structure and have a fundamental impact on subsequent seismic studies. Multiple-event relocation algorithms often increase the precision of relative earthquake hypocenters but are hindered by their limited ability to provide realistic location uncertainties for individual earthquakes. Recently, a Bayesian approach to the multiple-event relocation problem has proven to have many benefits, including the ability to: (1) handle large data sets; (2) easily incorporate a priori hypocenter information; (3) model phase assignment errors; and (4) correct for errors in the assumed travel time model. In this study we employ Bayesloc [Myers et al., 2007, 2009] to relocate earthquakes in the Central and Eastern United States from 1964 to the present. We relocate ~11,000 earthquakes with a dataset of ~439,000 arrival-time observations. Our dataset includes arrival-time observations from the ANSS catalog supplemented with arrival-time data from the Reviewed ISC Bulletin (prior to 1981), targeted local studies, and arrival-time data from the TA Array. One significant benefit of the Bayesloc algorithm is its ability to incorporate a priori constraints on the probability distributions of specific earthquake location parameters. To constrain the inversion, we use high-quality calibrated earthquake locations from local studies, including studies from: Raton Basin, Colorado; Mineral, Virginia; Guy, Arkansas; Cheneville, Quebec; Oklahoma; and Mt. Carmel, Illinois. We also add depth constraints to 232 earthquakes from regional moment tensors. Finally, we add constraints from four historic (1964-1973) ground truth events from a verification database. We (1) evaluate our ability to improve our location estimates, (2) use the improved locations to evaluate Earth structure in seismically active regions, and (3) examine improvements to the estimated locations of historic large-magnitude earthquakes.

  8. Rapid multi-wavelength optical assessment of circulating blood volume without a priori data

    NASA Astrophysics Data System (ADS)

    Loginova, Ekaterina V.; Zhidkova, Tatyana V.; Proskurnin, Mikhail A.; Zharov, Vladimir P.

    2016-03-01

    The measurement of circulating blood volume (CBV) is crucial in various medical conditions including surgery, iatrogenic problems, rapid fluid administration, transfusion of red blood cells, or trauma with extensive blood loss, including battlefield injuries and other emergencies. Currently available commercial techniques are invasive and time-consuming for trauma situations. Recently, we have proposed high-speed multi-wavelength photoacoustic/photothermal (PA/PT) flow cytometry for in vivo CBV assessment with multiple dyes as PA contrast agents (labels). As the first step, we have characterized the capability of this technique to monitor the clearance of three dyes (indocyanine green, methylene blue, and trypan blue) in an animal model. However, there is strong demand for improvements in PA/PT flow cytometry. As additional verification of our proof of concept of this technique, we performed optical photometric CBV measurements in vitro. Three label dyes (methylene blue, crystal violet and, partially, brilliant green) were selected for simultaneous photometric determination of the components of their two-dye mixtures in circulating blood in vitro without any extra data (such as hemoglobin absorption) known a priori. The tests of single dyes and their mixtures in a flow system simulating a blood transfusion system showed a negligible difference between the sensitivities of the determination of these dyes under batch and flow conditions. For individual dyes, limits of detection of 3×10^-6 M in blood were achieved, which provided their continuous determination at a level of 10^-5 M for the CBV assessment without a priori data on the matrix. CBV assessment with errors no higher than 4% was obtained, and the possibility of applying the developed procedure to optical photometric flow cytometry with laser sources was shown.
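
    Simultaneous photometric determination of two dyes in a mixture rests on the additivity of absorbances (Beer-Lambert law): measuring at several wavelengths and solving the resulting overdetermined linear system in a least-squares sense yields both concentrations. The molar absorptivities, wavelengths, and path length below are invented for illustration, not the calibration values of the study.

      import numpy as np

      # Hypothetical molar absorptivities (L mol^-1 cm^-1) of two dyes at four wavelengths.
      wavelengths = np.array([590, 610, 630, 660])           # nm
      eps = np.array([[2.1e4, 0.4e4],                        # columns: dye 1, dye 2
                      [4.5e4, 1.1e4],
                      [3.0e4, 3.8e4],
                      [0.6e4, 7.2e4]])
      path_cm = 1.0

      # Simulate a measured absorbance spectrum of a mixture (c1, c2) with small noise.
      c_true = np.array([1.2e-5, 2.0e-5])                    # mol/L
      rng = np.random.default_rng(7)
      A_meas = path_cm * eps @ c_true + rng.normal(0.0, 2e-3, wavelengths.size)

      # Least-squares unmixing: A = l * E @ c  ->  solve for c.
      c_est, *_ = np.linalg.lstsq(path_cm * eps, A_meas, rcond=None)
      for i, (ct, ce) in enumerate(zip(c_true, c_est), start=1):
          print(f"dye {i}: true {ct:.2e} M, estimated {ce:.2e} M ({(ce - ct) / ct:+.1%})")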

  9. Quantization of liver tissue in dual kVp computed tomography using linear discriminant analysis

    NASA Astrophysics Data System (ADS)

    Tkaczyk, J. Eric; Langan, David; Wu, Xiaoye; Xu, Daniel; Benson, Thomas; Pack, Jed D.; Schmitz, Andrea; Hara, Amy; Palicek, William; Licato, Paul; Leverentz, Jaynne

    2009-02-01

    Linear discriminant analysis (LDA) is applied to dual kVp CT and used for tissue characterization. The potential to quantitatively model both malignant and benign, hypo-intense liver lesions is evaluated by analysis of portal-phase, intravenous CT scan data obtained on human patients. Masses with an a priori classification are mapped to a distribution of points in basis material space. The degree of localization of tissue types in the material basis space is related to both quantum noise and real compositional differences. The density maps are analyzed with LDA and studied with system simulations to differentiate these factors. The discriminant analysis is formulated so as to incorporate the known statistical properties of the data. Effective kVp separation and mAs relate to the precision of tissue localization. Bias in the material position is related to the degree of X-ray scatter and the partial-volume effect. Experimental data and simulations demonstrate that for single-energy (HU) imaging or image-based decomposition, pixel values of water-like tissues depend on proximity to other iodine-filled bodies. Beam-hardening errors cause a shift in image value on the scale of the difference sought between cancerous and cystic lesions. In contrast, projection-based decomposition, or its equivalent when implemented on a carefully calibrated system, can provide accurate data. On such a system, LDA may provide novel quantitative capabilities for tissue characterization in dual energy CT.
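
    The discriminant step itself can be sketched with scikit-learn's LinearDiscriminantAnalysis applied to synthetic two-dimensional basis-material densities (a water-like and an iodine-like component) for two lesion classes, with a shared covariance standing in for the quantum-noise scatter discussed above. The class means, covariance, and labels are fabricated for illustration.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(11)

      # Synthetic basis-material-space points (water density, iodine density) for two classes.
      n = 300
      cov = np.array([[0.020, 0.005],       # shared covariance: mimics quantum-noise scatter
                      [0.005, 0.010]])
      cyst  = rng.multivariate_normal([1.00, 0.00], cov, n)   # hypothetical "benign/cystic" class
      tumor = rng.multivariate_normal([1.03, 0.05], cov, n)   # hypothetical "malignant" class
      X = np.vstack([cyst, tumor])
      y = np.repeat([0, 1], n)

      lda = LinearDiscriminantAnalysis()                      # pooled-covariance linear boundary
      lda.fit(X, y)
      print("training accuracy:", round(lda.score(X, y), 3))
      print("discriminant direction (water, iodine):", lda.coef_.round(2))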

  10. Finite Volume Methods: Foundation and Analysis

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Ohlberger, Mario

    2003-01-01

    Finite volume methods are a class of discretization schemes that have proven highly successful in approximating the solution of a wide variety of conservation law systems. They are extensively used in fluid mechanics, porous media flow, meteorology, electromagnetics, models of biological processes, semi-conductor device simulation and many other engineering areas governed by conservative systems that can be written in integral control volume form. This article reviews elements of the foundation and analysis of modern finite volume methods. The primary advantages of these methods are numerical robustness through the obtention of discrete maximum (minimum) principles, applicability on very general unstructured meshes, and the intrinsic local conservation properties of the resulting schemes. Throughout this article, specific attention is given to scalar nonlinear hyperbolic conservation laws and the development of high order accurate schemes for discretizing them. A key tool in the design and analysis of finite volume schemes suitable for non-oscillatory discontinuity capturing is discrete maximum principle analysis. A number of building blocks used in the development of numerical schemes possessing local discrete maximum principles are reviewed in one and several space dimensions, e.g. monotone fluxes, E-fluxes, TVD discretization, non-oscillatory reconstruction, slope limiters, positive coefficient schemes, etc. When available, theoretical results concerning a priori and a posteriori error estimates are given. Further advanced topics are then considered such as high order time integration, discretization of diffusion terms and the extension to systems of nonlinear conservation laws.
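
    A minimal example of the ingredients discussed above: a first-order finite volume scheme for a scalar nonlinear conservation law (Burgers' equation) with a monotone numerical flux (local Lax-Friedrichs / Rusanov), which satisfies a discrete maximum principle under a CFL restriction. This is a textbook-style sketch, not code from the article.

      import numpy as np

      # First-order finite volume scheme for u_t + f(u)_x = 0 with f(u) = u^2/2 (Burgers' equation).
      def f(u):
          return 0.5 * u ** 2

      def rusanov_flux(uL, uR):
          """Monotone (local Lax-Friedrichs / Rusanov) numerical flux."""
          a = np.maximum(np.abs(uL), np.abs(uR))        # local bound on the wave speed |f'(u)|
          return 0.5 * (f(uL) + f(uR)) - 0.5 * a * (uR - uL)

      N, T, cfl = 200, 0.5, 0.45
      dx = 1.0 / N
      x = (np.arange(N) + 0.5) * dx                     # cell centers on [0, 1], periodic domain
      u = np.where(x < 0.5, 1.0, 0.0)                   # Riemann data: shock moving right at speed 0.5

      t = 0.0
      while t < T:
          dt = min(cfl * dx / max(np.abs(u).max(), 1e-12), T - t)
          F = rusanov_flux(u, np.roll(u, -1))           # flux at interface i+1/2
          u = u - dt / dx * (F - np.roll(F, 1))         # conservative update of the cell averages
          t += dt

      print("min/max of u (discrete maximum principle):", round(u.min(), 3), round(u.max(), 3))
      right = x > 0.6                                   # avoid the rarefaction from the periodic wrap
      print("shock position (exact: 0.75):",
            round(float(x[right][np.argmin(np.abs(u[right] - 0.5))]), 3))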

  11. Solving of the coefficient inverse problems for a nonlinear singularly perturbed reaction-diffusion-advection equation with the final time data

    NASA Astrophysics Data System (ADS)

    Lukyanenko, D. V.; Shishlenin, M. A.; Volkov, V. T.

    2018-01-01

    We propose a numerical method for solving the coefficient inverse problem for a nonlinear singularly perturbed reaction-diffusion-advection equation with final-time observation data, based on asymptotic analysis and the gradient method. Asymptotic analysis allows us to extract a priori information about the interior layer (moving front), which appears in the direct problem, and the boundary layers, which appear in the conjugate problem. We describe and implement a method for constructing a dynamically adapted mesh based on this a priori information. The dynamically adapted mesh significantly reduces the complexity of the numerical calculations and improves the numerical stability in comparison with the usual approaches. A numerical example shows the effectiveness of the proposed method.

  12. Gridded National Inventory of U.S. Methane Emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.

    Here we present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Finally, our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.

  13. Gridded National Inventory of U.S. Methane Emissions

    NASA Technical Reports Server (NTRS)

    Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; Turner, Alexander J.; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel

    2016-01-01

    We present a gridded inventory of US anthropogenic methane emissions with 0.1 deg x 0.1 deg spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.

  14. Gridded national inventory of U.S. methane emissions

    DOE PAGES

    Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; ...

    2016-11-16

    Here we present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Finally, our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.

  15. Gridded National Inventory of U.S. Methane Emissions.

    PubMed

    Maasakkers, Joannes D; Jacob, Daniel J; Sulprizio, Melissa P; Turner, Alexander J; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel; Hockstad, Leif; Bloom, Anthony A; Bowman, Kevin W; Jeong, Seongeun; Fischer, Marc L

    2016-12-06

    We present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
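
    The core disaggregation step, allocating a national total for one source type onto a grid in proportion to spatial proxy data and then attaching grid-dependent error statistics, can be sketched as follows. The proxy field, national total, and relative-error model are invented placeholders, not values from the GHGI or from the EDF/CALGEM comparisons.

      import numpy as np

      rng = np.random.default_rng(5)

      # National annual total for one source type (e.g. livestock), in Gg CH4 (hypothetical).
      national_total = 2500.0

      # Proxy field on a coarse lat-lon grid (e.g. animal counts per cell); zeros allowed.
      proxy = rng.gamma(shape=0.5, scale=1.0, size=(40, 60))
      proxy[proxy < 0.2] = 0.0

      # Allocate the national total in proportion to the proxy (grid cells sum to the total).
      weights = proxy / proxy.sum()
      emissions = national_total * weights                       # Gg CH4 per cell

      # Simple scale-dependent relative-error model: larger relative error in small-emission cells.
      rel_err = np.where(emissions > 0, 0.3 + 0.5 * np.exp(-5.0 * emissions / emissions.max()), 0.0)
      sigma = rel_err * emissions                                # 1-sigma gridded uncertainty

      print("gridded total matches national total:", np.isclose(emissions.sum(), national_total))
      print("median relative error in emitting cells: %.0f%%"
            % (100 * np.median(rel_err[emissions > 0])))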

  16. Recovery of intrinsic fluorescence from single-point interstitial measurements for quantification of doxorubicin concentration

    PubMed Central

    Baran, Timothy M.; Foster, Thomas H.

    2014-01-01

    Background and Objective: We developed a method for the recovery of intrinsic fluorescence from single-point measurements in highly scattering and absorbing samples without a priori knowledge of the sample optical properties. The goal of the study was to demonstrate accurate recovery of fluorophore concentration in samples with widely varying background optical properties, while simultaneously recovering the optical properties. Materials and Methods: Tissue-simulating phantoms containing doxorubicin, MnTPPS, and Intralipid-20% were created, and fluorescence measurements were performed using a single isotropic probe. The resulting spectra were analyzed using a forward-adjoint fluorescence model in order to recover the fluorophore concentration and background optical properties. Results: We demonstrated recovery of doxorubicin concentration with a mean error of 11.8%. The concentration of the background absorber was recovered with an average error of 23.2% and the scattering spectrum was recovered with a mean error of 19.8%. Conclusion: This method will allow for the determination of local concentrations of fluorescent drugs, such as doxorubicin, from minimally invasive fluorescence measurements. This is particularly interesting in the context of transarterial chemoembolization (TACE) treatment of liver cancer. PMID:24037853

  17. Design of analytical failure detection using secondary observers

    NASA Technical Reports Server (NTRS)

    Sisar, M.

    1982-01-01

    The problem of designing analytical failure-detection systems (FDS) for sensors and actuators, using observers, is addressed. The use of observers in FDS is related to the examination of the n-dimensional observer error vector, which carries the necessary information on possible failures. The problem is that in practical systems, in which only some of the components of the state vector are measured, one has access only to the m-dimensional observer-output error vector, with m less than or equal to n. In order to cope with these cases, a secondary observer is synthesized to reconstruct the entire observer error vector from the observer output error vector. This approach leads toward the design of highly sensitive and reliable FDS, with the possibility of obtaining a unique fingerprint for every possible failure. In order to keep the observer's (or Kalman filter's) false-alarm rate under a certain specified value, it is necessary to have an acceptable matching between the observer (or Kalman filter) model and the system parameters. A previously developed adaptive observer algorithm is used to maintain the desired system-observer model matching, despite initial mismatching or system parameter variations. Conditions for convergence of the adaptive process are obtained, leading to a simple adaptive law (algorithm) with the possibility of an a priori choice of fixed adaptive gains. Simulation results show good tracking performance with small observer output errors, while accurate and fast parameter identification, in both deterministic and stochastic cases, is obtained.
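
    The basic mechanism, reconstructing state information with an observer and monitoring the output error (residual) for signatures of a sensor failure, can be sketched with a discrete-time Luenberger observer; the secondary-observer and adaptive-gain layers of the paper are not reproduced here. The plant model, observer gain, threshold, and injected fault are illustrative assumptions.

      import numpy as np

      # Discrete-time plant x(k+1) = A x + B u, y = C x, with a Luenberger observer.
      A = np.array([[1.0, 0.1],
                    [-0.2, 0.95]])
      B = np.array([[0.0], [0.1]])
      C = np.array([[1.0, 0.0]])                    # only the first state is measured
      L = np.array([[0.6], [0.4]])                  # observer gain (assumed; error dynamics stable)

      steps, fault_at = 200, 120
      x = np.array([[0.0], [0.0]])
      x_hat = np.array([[0.5], [-0.5]])             # deliberately wrong initial estimate
      residual = np.zeros(steps)

      for k in range(steps):
          u = np.array([[np.sin(0.05 * k)]])
          y = C @ x
          if k >= fault_at:                         # additive sensor fault (bias) injected here
              y = y + 0.8
          r = y - C @ x_hat                         # observer output error = residual
          residual[k] = r.item()
          x_hat = A @ x_hat + B @ u + L @ r         # observer update driven by the residual
          x = A @ x + B @ u                         # true plant update

      threshold = 0.1                               # assumed detection threshold
      alarms = np.flatnonzero(np.abs(residual) > threshold)
      print("first alarm at step:", int(alarms[alarms > 50][0]),
            "(fault injected at step", fault_at, ")")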

  18. Open-ocean boundary conditions from interior data: Local and remote forcing of Massachusetts Bay

    USGS Publications Warehouse

    Bogden, P.S.; Malanotte-Rizzoli, P.; Signell, R.

    1996-01-01

    Massachusetts and Cape Cod Bays form a semienclosed coastal basin that opens onto the much larger Gulf of Maine. Subtidal circulation in the bay is driven by local winds and remotely driven flows from the gulf. The local-wind-forced flow is estimated with a regional shallow water model driven by wind measurements. The model uses a gravity wave radiation condition along the open-ocean boundary. Results compare reasonably well with observed currents near the coast. In some offshore regions, however, modeled flows are an order of magnitude less energetic than the data. Strong flows are observed even during periods of weak local wind forcing. Poor model-data comparisons are attributable, at least in part, to open-ocean boundary conditions that neglect the effects of remote forcing. Velocity measurements from within Massachusetts Bay are used to estimate the remotely forced component of the flow. The data are combined with shallow water dynamics in an inverse-model formulation that follows the theory of Bennett and McIntosh [1982], who considered tides. We extend their analysis to consider the subtidal response to transient forcing. The inverse model adjusts the a priori open-ocean boundary condition, thereby minimizing a combined measure of model-data misfit and boundary condition adjustment. A "consistency criterion" determines the optimal trade-off between the two. The criterion is based on a measure of plausibility for the inverse solution. The "consistent" inverse solution reproduces 56% of the average squared variation in the data. The local-wind-driven flow alone accounts for half of the model skill. The other half is attributable to remotely forced flows from the Gulf of Maine. The unexplained 44% comes from measurement errors and model errors that are not accounted for in the analysis.

  19. Snow Process Estimation Over the Extratropical Andes Using a Data Assimilation Framework Integrating MERRA Data and Landsat Imagery

    NASA Technical Reports Server (NTRS)

    Cortes, Gonzalo; Girotto, Manuela; Margulis, Steven

    2016-01-01

    A data assimilation framework was implemented with the objective of obtaining high resolution retrospective snow water equivalent (SWE) estimates over several Andean study basins. The framework integrates Landsat fractional snow covered area (fSCA) images, a land surface and snow depletion model, and the Modern Era Retrospective Analysis for Research and Applications (MERRA) reanalysis as a forcing data set. The outputs are SWE and fSCA fields (1985-2015) at a resolution of 90 m that are consistent with the observed depletion record. Verification using in-situ snow surveys showed significant improvements in the accuracy of the SWE estimates relative to forward model estimates, with increases in correlation (0.49-0.87) and reductions in root mean square error (0.316 m to 0.129 m) and mean error (-0.221 m to 0.009 m). A sensitivity analysis showed that the framework is robust to variations in physiography, fSCA data availability and a priori precipitation biases. Results from the application to the headwater basin of the Aconcagua River showed how the forward model versus the fSCA-conditioned estimate resulted in different quantifications of the relationship between runoff and SWE, and different correlation patterns between pixel-wise SWE and ENSO. The illustrative results confirm the influence that ENSO has on snow accumulation for Andean basins draining into the Pacific, with ENSO explaining approximately 25% of the variability in near-peak (1 September) SWE values. Our results show how the assimilation of fSCA data results in a significant improvement upon MERRA-forced modeled SWE estimates, further increasing the utility of the MERRA data for high-resolution snow modeling applications.

  20. Power of mental health nursing research: a statistical analysis of studies in the International Journal of Mental Health Nursing.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2013-02-01

    Having sufficient power to detect effect sizes of an expected magnitude is a core consideration when designing studies in which inferential statistics will be used. The main aim of this study was to investigate the statistical power in studies published in the International Journal of Mental Health Nursing. From volumes 19 (2010) and 20 (2011) of the journal, studies were analysed for their power to detect small, medium, and large effect sizes, according to Cohen's guidelines. The power of the 23 studies included in this review to detect small, medium, and large effects was 0.34, 0.79, and 0.94, respectively. In 90% of papers, no adjustments for experiment-wise error were reported. With a median of nine inferential tests per paper, the mean experiment-wise error rate was 0.51. A priori power analyses were only reported in 17% of studies. Although effect sizes for correlations and regressions were routinely reported, effect sizes for other tests (χ²-tests, t-tests, ANOVA/MANOVA) were largely absent from the papers. All types of effect sizes were infrequently interpreted. Researchers are strongly encouraged to conduct power analyses when designing studies, and to avoid scattergun approaches to data analysis (i.e. undertaking large numbers of tests in the hope of finding 'significant' results). Because reviewing effect sizes is essential for determining the clinical significance of study findings, researchers would better serve the field of mental health nursing if they reported and interpreted effect sizes. © 2012 The Authors. International Journal of Mental Health Nursing © 2012 Australian College of Mental Health Nurses Inc.
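
    An a priori power analysis of the kind recommended above can be carried out with the standard normal approximation for a two-sided, two-sample t-test: given an expected effect size (Cohen's d), significance level, and desired power, it returns the required group size, and the inverse calculation gives the power achieved by a planned sample. The numbers below are a generic worked example, not recalculations of the reviewed studies.

      from math import ceil
      from scipy.stats import norm

      def n_per_group(d, alpha=0.05, power=0.80):
          """Approximate per-group n for a two-sided, two-sample t-test (normal approximation)."""
          z_alpha = norm.ppf(1.0 - alpha / 2.0)
          z_beta = norm.ppf(power)
          return ceil(2.0 * ((z_alpha + z_beta) / d) ** 2)

      def achieved_power(d, n, alpha=0.05):
          """Approximate power for a given per-group n and effect size d."""
          z_alpha = norm.ppf(1.0 - alpha / 2.0)
          return float(norm.cdf(abs(d) * (n / 2.0) ** 0.5 - z_alpha))

      for d, label in [(0.2, "small"), (0.5, "medium"), (0.8, "large")]:
          print(f"{label:6s} effect (d={d}): n per group = {n_per_group(d)}, "
                f"power with n=30 per group = {achieved_power(d, 30):.2f}")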

  1. Markov Random Fields, Stochastic Quantization and Image Analysis

    DTIC Science & Technology

    1990-01-01

    Markov random fields based on the lattice Z2 have been extensively used in image analysis in a Bayesian framework as a priori models for the...of Image Analysis can be given some fundamental justification then there is a remarkable connection between Probabilistic Image Analysis, Statistical Mechanics and Lattice-based Euclidean Quantum Field Theory.

  2. Clustering of tethered satellite system simulation data by an adaptive neuro-fuzzy algorithm

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda; Pemmaraju, Surya

    1992-01-01

    Recent developments in neuro-fuzzy systems indicate that the concepts of adaptive pattern recognition, when used to identify appropriate control actions corresponding to clusters of patterns representing system states in dynamic nonlinear control systems, may result in innovative designs. A modular, unsupervised neural network architecture, in which fuzzy learning rules have been embedded, is used for on-line identification of similar states. The architecture and control rules involved in Adaptive Fuzzy Leader Clustering (AFLC) allow this system to be incorporated in control systems for identification of system states corresponding to specific control actions. We have used this algorithm to cluster the simulation data of the Tethered Satellite System (TSS) to estimate the range of delta voltages necessary to maintain the desired length rate of the tether. The AFLC algorithm is capable of on-line estimation of the appropriate control voltages from the corresponding length error and length rate error without a priori knowledge of their membership functions or familiarity with the behavior of the Tethered Satellite System.

  3. POCS-enhanced correction of motion artifacts in parallel MRI.

    PubMed

    Samsonov, Alexey A; Velikina, Julia; Jung, Youngkyoo; Kholmovski, Eugene G; Johnson, Chris R; Block, Walter F

    2010-04-01

    A new method for correction of MRI motion artifacts induced by corrupted k-space data, acquired by multiple receiver coils such as phased arrays, is presented. In our approach, a projections onto convex sets (POCS)-based method for reconstruction of sensitivity-encoded MRI data (POCSENSE) is employed to identify corrupted k-space samples. After the erroneous data are discarded from the dataset, the artifact-free images are restored from the remaining data using coil sensitivity profiles. The error detection and data restoration are based on the informational redundancy of phased-array data and may be applied to full and reduced datasets. An important advantage of the new POCS-based method is that, in addition to multicoil data redundancy, it can use a priori known properties of the imaged object for improved MR image artifact correction. The use of such information was shown to significantly improve k-space error detection and image artifact correction. The method was validated on data corrupted by simulated and real motion such as head motion and pulsatile flow.
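
    The projection-onto-convex-sets idea, alternately enforcing consistency with the retained (uncorrupted) k-space samples and a convex image-domain constraint such as known object support, can be sketched for a single coil; the multi-coil sensitivity-encoded (POCSENSE) machinery of the paper is not reproduced here. The image, support mask, and sampling pattern are synthetic.

      import numpy as np

      rng = np.random.default_rng(4)

      # Synthetic image with known support (a centered square), and its full k-space.
      N = 64
      img = np.zeros((N, N))
      img[16:48, 16:48] = 1.0 + 0.2 * rng.standard_normal((32, 32))
      support = np.zeros((N, N), dtype=bool)
      support[16:48, 16:48] = True
      kspace_full = np.fft.fft2(img)

      # Keep only a subset of k-space samples (the rest are treated as corrupted and discarded).
      keep = rng.random((N, N)) < 0.6
      k_data = kspace_full * keep

      # POCS iterations: alternate projection onto (1) the data-consistency set and (2) the support set.
      x = np.zeros((N, N))
      for _ in range(100):
          k = np.fft.fft2(x)
          k[keep] = k_data[keep]                 # project onto the data-consistency convex set
          x = np.fft.ifft2(k)
          x = np.where(support, x.real, 0.0)     # project onto the known-support convex set

      err = np.linalg.norm(x - img) / np.linalg.norm(img)
      print(f"relative reconstruction error after POCS: {err:.3f}")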

  4. QUADRATIC SERENDIPITY FINITE ELEMENTS ON POLYGONS USING GENERALIZED BARYCENTRIC COORDINATES

    PubMed Central

    RAND, ALEXANDER; GILLETTE, ANDREW; BAJAJ, CHANDRAJIT

    2013-01-01

    We introduce a finite element construction for use on the class of convex, planar polygons and show it obtains a quadratic error convergence estimate. On a convex n-gon, our construction produces 2n basis functions, associated in a Lagrange-like fashion to each vertex and each edge midpoint, by transforming and combining a set of n(n + 1)/2 basis functions known to obtain quadratic convergence. The technique broadens the scope of the so-called ‘serendipity’ elements, previously studied only for quadrilateral and regular hexahedral meshes, by employing the theory of generalized barycentric coordinates. Uniform a priori error estimates are established over the class of convex quadrilaterals with bounded aspect ratio as well as over the class of convex planar polygons satisfying additional shape regularity conditions to exclude large interior angles and short edges. Numerical evidence is provided on a trapezoidal quadrilateral mesh, previously not amenable to serendipity constructions, and applications to adaptive meshing are discussed. PMID:25301974

  5. GNSS Single Frequency, Single Epoch Reliable Attitude Determination Method with Baseline Vector Constraint.

    PubMed

    Gong, Ang; Zhao, Xiubin; Pang, Chunlei; Duan, Rong; Wang, Yong

    2015-12-02

    For Global Navigation Satellite System (GNSS) single-frequency, single-epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of the baseline length, heading, and pitch obtained from other navigation equipment or sensors is used to rigorously reconstruct the objective function. Then, the searching strategy is improved. It substitutes a gradually enlarged ellipsoidal search space for the non-ellipsoidal search space to ensure that the correct ambiguity candidates are within it, and it allows the searching process to be carried out directly by the least-squares ambiguity decorrelation adjustment (LAMBDA) method. Some of the vector candidates are further eliminated by a derived approximate inequality, which accelerates the searching process. Experimental results show that, compared to the traditional method with only a baseline length constraint, this new method can utilize a priori three-dimensional baseline knowledge to fix the ambiguity reliably and achieve a high success rate. Experimental tests also verify that it is not very sensitive to baseline vector error and can perform robustly when the angular error is not large.

  6. Imaginary-frequency polarizability and van der Waals force constants of two-electron atoms, with rigorous bounds

    NASA Technical Reports Server (NTRS)

    Glover, R. M.; Weinhold, F.

    1977-01-01

    Variational functionals of Braunn and Rebane (1972) for the imaginary-frequency polarizability (IFP) have been generalized by the method of Gramian inequalities to give rigorous upper and lower bounds, valid even when the true (but unknown) unperturbed wavefunction must be represented by a variational approximation. Using these formulas in conjunction with flexible variational trial functions, tight error bounds are computed for the IFP and the associated two- and three-body van der Waals interaction constants of the ground 1(1S) and metastable 2(1,3S) states of He and Li(+). These bounds generally establish the ground-state properties to within a fraction of a per cent and the metastable properties to within a few per cent, permitting a comparative assessment of competing theoretical methods at this level of accuracy. Unlike previous 'error bounds' for these properties, the present results have a completely a priori theoretical character, with no empirical input data.

  7. The GRAPE aerosol retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Thomas, G. E.; Poulsen, C. A.; Sayer, A. M.; Marsh, S. H.; Dean, S. M.; Carboni, E.; Siddans, R.; Grainger, R. G.; Lawrence, B. N.

    2009-11-01

    The aerosol component of the Oxford-Rutherford Aerosol and Cloud (ORAC) combined cloud and aerosol retrieval scheme is described and the theoretical performance of the algorithm is analysed. ORAC is an optimal estimation retrieval scheme for deriving cloud and aerosol properties from measurements made by imaging satellite radiometers and, when applied to cloud free radiances, provides estimates of aerosol optical depth at a wavelength of 550 nm, aerosol effective radius and surface reflectance at 550 nm. The aerosol retrieval component of ORAC has several incarnations - this paper addresses the version which operates in conjunction with the cloud retrieval component of ORAC (described by Watts et al., 1998), as applied in producing the Global Retrieval of ATSR Cloud Parameters and Evaluation (GRAPE) data-set. The algorithm is described in detail and its performance examined. This includes a discussion of errors resulting from the formulation of the forward model, sensitivity of the retrieval to the measurements and a priori constraints, and errors resulting from assumptions made about the atmospheric/surface state.
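
    Optimal estimation retrieval of the kind used by ORAC can be sketched in its linear Gaussian form: the state update weights the measurement and the a priori by their error covariances, and retrieval noise and a priori sensitivity can be diagnosed from the posterior covariance and averaging-kernel matrices. The forward-model Jacobian, covariances, a priori, and synthetic observation below are invented, not ORAC's.

      import numpy as np

      # State: (aerosol optical depth at 550 nm, effective radius, surface reflectance).
      x_a = np.array([0.15, 0.8, 0.05])                    # a priori state (assumed)
      S_a = np.diag([0.5**2, 0.5**2, 0.02**2])             # a priori covariance (loose on AOD, r_eff)

      # Hypothetical linearized forward model y = K x for 4 radiometer channels.
      K = np.array([[0.30, -0.05, 0.90],
                    [0.25, -0.10, 0.70],
                    [0.15, -0.20, 0.50],
                    [0.05, -0.30, 0.30]])
      S_e = np.diag([0.01**2] * 4)                         # measurement error covariance

      x_true = np.array([0.35, 1.2, 0.08])                 # "true" state used to synthesize the data
      y_obs = K @ x_true + np.random.default_rng(2).normal(0.0, 0.01, 4)
      y_a = K @ x_a                                        # forward model at the a priori

      # Optimal estimation (linear, Gaussian) update.
      S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
      x_hat = x_a + S_hat @ K.T @ np.linalg.inv(S_e) @ (y_obs - y_a)
      A = S_hat @ K.T @ np.linalg.inv(S_e) @ K             # averaging kernel: sensitivity to truth

      print("retrieved state:", x_hat.round(3))
      print("1-sigma retrieval errors:", np.sqrt(np.diag(S_hat)).round(3))
      print("degrees of freedom for signal: %.2f" % np.trace(A))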

  8. The GRAPE aerosol retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Thomas, G. E.; Poulsen, C. A.; Sayer, A. M.; Marsh, S. H.; Dean, S. M.; Carboni, E.; Siddans, R.; Grainger, R. G.; Lawrence, B. N.

    2009-04-01

    The aerosol component of the Oxford-Rutherford Aerosol and Cloud (ORAC) combined cloud and aerosol retrieval scheme is described and the theoretical performance of the algorithm is analysed. ORAC is an optimal estimation retrieval scheme for deriving cloud and aerosol properties from measurements made by imaging satellite radiometers and, when applied to cloud free radiances, provides estimates of aerosol optical depth at a wavelength of 550 nm, aerosol effective radius and surface reflectance at 550 nm. The aerosol retrieval component of ORAC has several incarnations - this paper addresses the version which operates in conjunction with the cloud retrieval component of ORAC (described by Watts et al., 1998), as applied in producing the Global Retrieval of ATSR Cloud Parameters and Evaluation (GRAPE) data-set. The algorithm is described in detail and its performance examined. This includes a discussion of errors resulting from the formulation of the forward model, sensitivity of the retrieval to the measurements and a priori constraints, and errors resulting from assumptions made about the atmospheric/surface state.

  9. Eulerian-Lagrangian Simulations of Transonic Flutter Instabilities

    NASA Technical Reports Server (NTRS)

    Bendiksen, Oddvar O.

    1994-01-01

    This paper presents an overview of recent applications of Eulerian-Lagrangian computational schemes in simulating transonic flutter instabilities. In this approach, the fluid-structure system is treated as a single continuum dynamics problem, by switching from an Eulerian to a Lagrangian formulation at the fluid-structure boundary. This computational approach effectively eliminates the phase integration errors associated with previous methods, in which the fluid and structure are integrated sequentially using different schemes. The formulation is based on Hamilton's Principle in mixed coordinates, and both finite volume and finite element discretization schemes are considered. Results from numerical simulations of transonic flutter instabilities are presented for isolated wings, thin panels, and turbomachinery blades. The results suggest that the method is capable of reproducing the energy exchange between the fluid and the structure with significantly less error than existing methods. Localized flutter modes and panel flutter modes involving traveling waves can also be simulated effectively with no a priori knowledge of the type of instability involved.

  10. The NonConforming Virtual Element Method for the Stokes Equations

    DOE PAGES

    Cangiani, Andrea; Gyrya, Vitaliy; Manzini, Gianmarco

    2016-01-01

    In this paper, we present the nonconforming virtual element method (VEM) for the numerical approximation of velocity and pressure in the steady Stokes problem. The pressure is approximated using discontinuous piecewise polynomials, while each component of the velocity is approximated using the nonconforming virtual element space. On each mesh element the local virtual space contains the space of polynomials of up to a given degree, plus suitable nonpolynomial functions. The virtual element functions are implicitly defined as the solution of local Poisson problems with polynomial Neumann boundary conditions. As typical in VEM approaches, the explicit evaluation of the non-polynomial functions is not required. This approach makes it possible to construct nonconforming (virtual) spaces for any polynomial degree regardless of the parity, for two- and three-dimensional problems, and for meshes with very general polygonal and polyhedral elements. We show that the nonconforming VEM is inf-sup stable and establish optimal a priori error estimates for the velocity and pressure approximations. Finally, numerical examples confirm the convergence analysis and the effectiveness of the method in providing high-order accurate approximations.

  11. Trajectory Reconstruction and Uncertainty Analysis Using Mars Science Laboratory Pre-Flight Scale Model Aeroballistic Testing

    NASA Technical Reports Server (NTRS)

    Lugo, Rafael A.; Tolson, Robert H.; Schoenenberger, Mark

    2013-01-01

    As part of the Mars Science Laboratory (MSL) trajectory reconstruction effort at NASA Langley Research Center, free-flight aeroballistic experiments with instrumented MSL scale models were conducted at Aberdeen Proving Ground in Maryland. The models carried an inertial measurement unit (IMU) and a flush air data system (FADS) similar to the MSL Entry Atmospheric Data System (MEADS) that provided data types similar to those from the MSL entry. Multiple sources of redundant data were available, including tracking radar and on-board magnetometers. These experimental data enabled the testing and validation of the various tools and methodologies that will be used for MSL trajectory reconstruction. The aerodynamic parameters Mach number, angle of attack, and sideslip angle were estimated using a minimum-variance-with-a-priori approach to combine the pressure data and pre-flight computational fluid dynamics (CFD) data. Both linear and non-linear pressure model terms were also estimated for each pressure transducer as a measure of the errors introduced by CFD and transducer calibration. Parameter uncertainties were estimated using a "consider parameters" approach.

  12. The NonConforming Virtual Element Method for the Stokes Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cangiani, Andrea; Gyrya, Vitaliy; Manzini, Gianmarco

    In this paper, we present the nonconforming virtual element method (VEM) for the numerical approximation of velocity and pressure in the steady Stokes problem. The pressure is approximated using discontinuous piecewise polynomials, while each component of the velocity is approximated using the nonconforming virtual element space. On each mesh element the local virtual space contains the space of polynomials of up to a given degree, plus suitable nonpolynomial functions. The virtual element functions are implicitly defined as the solution of local Poisson problems with polynomial Neumann boundary conditions. As typical in VEM approaches, the explicit evaluation of the non-polynomial functions is not required. This approach makes it possible to construct nonconforming (virtual) spaces for any polynomial degree regardless of the parity, for two- and three-dimensional problems, and for meshes with very general polygonal and polyhedral elements. We show that the nonconforming VEM is inf-sup stable and establish optimal a priori error estimates for the velocity and pressure approximations. Finally, numerical examples confirm the convergence analysis and the effectiveness of the method in providing high-order accurate approximations.

  13. A harmonic analysis of lunar gravity

    NASA Technical Reports Server (NTRS)

    Bills, B. G.; Ferrari, A. J.

    1980-01-01

    An improved model of lunar global gravity has been obtained by fitting a sixteenth-degree harmonic series to a combination of Doppler tracking data from Apollo missions 8, 12, 15, and 16, and Lunar Orbiters 1, 2, 3, 4, and 5, and laser ranging data to the lunar surface. To compensate for the irregular selenographic distribution of these data, the solution algorithm has also incorporated a semi-empirical a priori covariance function. Maps of the free-air gravity disturbance and its formal error are presented, as are free-air anomaly and Bouguer anomaly maps. The lunar gravitational variance spectrum has the form V(G; n) = O(n^-4), as do the corresponding terrestrial and martian spectra. The variance spectra of the Bouguer corrections (topography converted to equivalent gravity) for these bodies have the same basic form as the observed gravity; and, in fact, the spectral ratios are nearly constant throughout the observed spectral range for each body. Despite this spectral compatibility, the correlation between gravity and topography is generally quite poor on a global scale.

  14. A-priori testing of sub-grid models for chemically reacting nonpremixed turbulent shear flows

    NASA Technical Reports Server (NTRS)

    Jimenez, J.; Linan, A.; Rogers, M. M.; Higuera, F. J.

    1996-01-01

    The beta-assumed-pdf approximation of (Cook & Riley 1994) is tested as a subgrid model for the LES computation of nonpremixed turbulent reacting flows, in the limit of cold infinitely fast chemistry, for two plane turbulent mixing layers with different degrees of intermittency. Excellent results are obtained for the computation of integral properties such as product mass fraction, and the model is applied to other quantities such as powers of the temperature and the pdf of the scalar itself. Even in these cases the errors are small enough to be useful in practical applications. The analysis is extended to slightly out of equilibrium problems such as the generation of radicals, and formulated in terms of the pdf of the scalar gradients. It is shown that the conditional gradient distribution is universal in a wide range of cases whose limits are established. Within those limits, engineering approximations to the radical concentration are also possible. It is argued that the experiments in this paper are essentially in the limit of infinite Reynolds number.
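    The beta-assumed-pdf closure reconstructs the subgrid distribution of the conserved scalar from its filtered mean and variance, and filtered quantities such as the product mass fraction follow by integrating the state relation against that beta pdf. A minimal sketch under the infinitely-fast-chemistry (Burke-Schumann) assumption is given below; the stoichiometric mixture fraction value and the input moments are hypothetical, not values from the paper.

```python
import numpy as np
from scipy.stats import beta
from scipy.integrate import quad

def beta_params(z_mean, z_var):
    """Beta-pdf shape parameters from the filtered mean and subgrid variance of the scalar Z."""
    factor = z_mean * (1.0 - z_mean) / z_var - 1.0      # must be positive
    return z_mean * factor, (1.0 - z_mean) * factor

def product_mass_fraction(z, z_st=0.3):
    """Burke-Schumann product profile for infinitely fast chemistry; z_st is hypothetical."""
    return np.where(z < z_st, z / z_st, (1.0 - z) / (1.0 - z_st))

def filtered_product(z_mean, z_var, z_st=0.3):
    """Filtered product mass fraction: integral of the state relation against the beta pdf."""
    a, b = beta_params(z_mean, z_var)
    value, _ = quad(lambda z: product_mass_fraction(z, z_st) * beta.pdf(z, a, b), 0.0, 1.0)
    return value

print(filtered_product(z_mean=0.35, z_var=0.02))        # illustrative subgrid moments
```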

  15. How Can TOLNet Help to Better Understand Tropospheric Ozone? A Satellite Perspective

    NASA Technical Reports Server (NTRS)

    Johnson, Matthew S.

    2018-01-01

    Potential sources of a priori ozone (O3) profiles for use in Tropospheric Emissions: Monitoring of Pollution (TEMPO) satellite tropospheric O3 retrievals are evaluated with observations from multiple Tropospheric Ozone Lidar Network (TOLNet) systems in North America. An O3 profile climatology (tropopause-based O3 climatology (TB-Clim), currently proposed for use in the TEMPO O3 retrieval algorithm) derived from ozonesonde observations and O3 profiles from three separate models (operational Goddard Earth Observing System (GEOS-5) Forward Processing (FP) product, reanalysis product from Modern-Era Retrospective analysis for Research and Applications version 2 (MERRA2), and the GEOS-Chem chemical transport model (CTM)) were: 1) evaluated with TOLNet measurements on various temporal scales (seasonally, daily, hourly) and 2) implemented as a priori information in theoretical TEMPO tropospheric O3 retrievals in order to determine how each a priori impacts the accuracy of retrieved tropospheric (0-10 km) and lowermost tropospheric (LMT, 0-2 km) O3 columns. We found that all sources of a priori O3 profiles evaluated in this study generally reproduced the vertical structure of summer-averaged observations. However, larger differences between the a priori profiles and lidar observations were observed when evaluating inter-daily and diurnal variability of tropospheric O3. The TB-Clim O3 profile climatology was unable to replicate observed inter-daily and diurnal variability of O3 while model products, in particular GEOS-Chem simulations, displayed more skill in reproducing these features. Due to the ability of models, primarily the CTM used in this study, on average to capture the inter-daily and diurnal variability of tropospheric and LMT O3 columns, using a priori profiles from CTM simulations resulted in TEMPO retrievals with the best statistical comparison with lidar observations. Furthermore, important from an air quality perspective, when high LMT O3 values were observed, using CTM a priori profiles resulted in TEMPO LMT O3 retrievals with the least bias. The application of time-specific (non-climatological) hourly/daily model predictions as the a priori profile in TEMPO O3 retrievals will be best suited when applying this data to study air quality or event-based processes as the standard retrieval algorithm will still need to use a climatology product. Follow-on studies to this work are currently being conducted to investigate the application of different CTM-predicted O3 climatology products in the standard TEMPO retrieval algorithm. Finally, similar methods to those used in this study can be easily applied by TEMPO data users to recalculate tropospheric O3 profiles provided from the standard retrieval using a different source of a priori.

  16. Troposphere gradients from the ECMWF in VLBI analysis

    NASA Astrophysics Data System (ADS)

    Boehm, Johannes; Schuh, Harald

    2007-06-01

    Modeling path delays in the neutral atmosphere for the analysis of Very Long Baseline Interferometry (VLBI) observations has been improved significantly in recent years by the use of elevation-dependent mapping functions based on data from numerical weather models. In this paper, we present a fast way of extracting both hydrostatic and wet linear horizontal gradients for the troposphere from data of the European Centre for Medium-range Weather Forecasts (ECMWF) model, as it is realized at the Vienna University of Technology on a routine basis for all International GNSS (Global Navigation Satellite Systems) Service (IGS) and International VLBI Service for Geodesy and Astrometry (IVS) stations. This approach only uses information about the refractivity gradients at the site vertical, but no information from the line-of-sight. VLBI analysis of the CONT02 and CONT05 campaigns, as well as all IVS-R1 and IVS-R4 sessions in the first half of 2006, shows that fixing these a priori gradients improves the repeatability for 74% (40 out of 54) of the VLBI baseline lengths compared to fixing zero or constant a priori gradients, and improves the repeatability for the majority of baselines compared to estimating 24-h offsets for the gradients. Only if 6-h offsets are estimated do the baseline length repeatabilities improve significantly, no matter which a priori gradients are used.
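    The slant-delay contribution of linear horizontal gradients is commonly written as the product of a gradient mapping function and the projection of the north and east gradient components onto the observation azimuth (a Chen-and-Herring-type form). The sketch below assumes that form; the constant c and the example gradient values are typical but illustrative numbers, not values from the paper.

```python
import numpy as np

def gradient_mapping(elev_rad, c=0.0032):
    """Gradient mapping function 1 / (sin(e) tan(e) + c); c ~ 0.0032 (hydrostatic), ~ 0.0007 (wet)."""
    return 1.0 / (np.sin(elev_rad) * np.tan(elev_rad) + c)

def gradient_delay(elev_deg, az_deg, g_north_m, g_east_m, c=0.0032):
    """Slant-delay contribution (metres) of linear horizontal gradients G_N and G_E."""
    e, a = np.radians(elev_deg), np.radians(az_deg)
    return gradient_mapping(e, c) * (g_north_m * np.cos(a) + g_east_m * np.sin(a))

# Example: a 0.5 mm north gradient seen at 10 degrees elevation toward the north
print(gradient_delay(10.0, 0.0, 0.5e-3, 0.0))   # roughly 15 mm of extra slant delay
```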

  17. Constraining OCT with Knowledge of Device Design Enables High Accuracy Hemodynamic Assessment of Endovascular Implants.

    PubMed

    O'Brien, Caroline C; Kolandaivelu, Kumaran; Brown, Jonathan; Lopes, Augusto C; Kunio, Mie; Kolachalama, Vijaya B; Edelman, Elazer R

    2016-01-01

    Stacking cross-sectional intravascular images permits three-dimensional rendering of endovascular implants, yet introduces between-frame uncertainties that limit characterization of device placement and the hemodynamic microenvironment. In a porcine coronary stent model, we demonstrate enhanced OCT reconstruction with preservation of between-frame features through fusion with angiography and a priori knowledge of stent design. Strut positions were extracted from sequential OCT frames. Reconstruction with standard interpolation generated discontinuous stent structures. By computationally constraining interpolation to known stent skeletons fitted to 3D 'clouds' of OCT-Angio-derived struts, implant anatomy was resolved, accurately rendering features from implant diameter and curvature (n = 1 vessels, r2 = 0.91, 0.90, respectively) to individual strut-wall configurations (average displacement error ~15 μm). This framework facilitated hemodynamic simulation (n = 1 vessel), showing the critical importance of accurate anatomic rendering in characterizing both quantitative and basic qualitative flow patterns. Discontinuities with standard approaches systematically introduced noise and bias, poorly capturing regional flow effects. In contrast, the enhanced method preserved multi-scale (local strut to regional stent) flow interactions, demonstrating the impact of regional contexts in defining the hemodynamic consequence of local deployment errors. Fusion of planar angiography and knowledge of device design permits enhanced OCT image analysis of in situ tissue-device interactions. Given emerging interests in simulation-derived hemodynamic assessment as surrogate measures of biological risk, such fused modalities offer a new window into patient-specific implant environments.

  18. Approximate deconvolution model for the simulation of turbulent gas-solid flows: An a priori analysis

    NASA Astrophysics Data System (ADS)

    Schneiderbauer, Simon; Saeedipour, Mahdi

    2018-02-01

    Highly resolved two-fluid model (TFM) simulations of gas-solid flows in vertical periodic channels have been performed to study closures for the filtered drag force and the Reynolds-stress-like contribution stemming from the convective terms. An approximate deconvolution model (ADM) for the large-eddy simulation of turbulent gas-solid suspensions is detailed and subsequently used to reconstruct those unresolved contributions in an a priori manner. With such an approach, an approximation of the unfiltered solution is obtained by repeated filtering allowing the determination of the unclosed terms of the filtered equations directly. A priori filtering shows that predictions of the ADM model yield fairly good agreement with the fine grid TFM simulations for various filter sizes and different particle sizes. In particular, strong positive correlation (ρ > 0.98) is observed at intermediate filter sizes for all sub-grid terms. Additionally, our study reveals that the ADM results moderately depend on the choice of the filters, such as box and Gaussian filter, as well as the deconvolution order. The a priori test finally reveals that ADM is superior compared to isotropic functional closures proposed recently [S. Schneiderbauer, "A spatially-averaged two-fluid model for dense large-scale gas-solid flows," AIChE J. 63, 3544-3562 (2017)].
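    Approximate deconvolution reconstructs an approximation of the unfiltered field by repeatedly applying the filter (a truncated van Cittert series) and then evaluates the unclosed terms directly from the reconstructed field, which is what an a priori test compares against the fully resolved data. The 1-D sketch below uses a periodic box filter and a synthetic field; it illustrates the procedure only and is not the authors' two-fluid-model setup.

```python
import numpy as np

def box_filter(u, width=5):
    """Centred top-hat (box) filter with periodic boundaries."""
    return sum(np.roll(u, s) for s in range(-(width // 2), width // 2 + 1)) / width

def adm_reconstruct(u_bar, filt, order=5):
    """Approximate deconvolution: u* = sum_{i=0..N} (I - G)^i applied to the filtered field."""
    u_star = np.zeros_like(u_bar)
    term = u_bar.copy()
    for _ in range(order + 1):
        u_star += term
        term = term - filt(term)                 # apply (I - G) once more
    return u_star

# A priori test on a synthetic 1-D field: reconstruct, then model a sub-filter term
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(7 * x) + 0.2 * np.sin(15 * x)
u_bar = box_filter(u)
u_star = adm_reconstruct(u_bar, box_filter)

exact = box_filter(u * u) - u_bar * u_bar              # exact sub-filter contribution
model = box_filter(u_star * u_star) - u_bar * u_bar    # ADM-modelled contribution
print("correlation coefficient:", np.corrcoef(exact, model)[0, 1])
```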

  19. Joint inversion of apparent resistivity and seismic surface and body wave data

    NASA Astrophysics Data System (ADS)

    Garofalo, Flora; Sauvin, Guillaume; Valentina Socco, Laura; Lecomte, Isabelle

    2013-04-01

    A novel inversion algorithm has been implemented to jointly invert apparent resistivity curves from vertical electric soundings, surface wave dispersion curves, and P-wave travel times. The algorithm works in the case of laterally varying layered sites. Surface wave dispersion curves and P-wave travel times can be extracted from the same seismic dataset and apparent resistivity curves can be obtained from continuous vertical electric sounding acquisition. The inversion scheme is based on a series of local 1D layered models whose unknown parameters are thickness h, S-wave velocity Vs, P-wave velocity Vp, and Resistivity R of each layer. 1D models are linked to surface-wave dispersion curves and apparent resistivity curves through classical 1D forward modelling, while a 2D model is created by interpolating the 1D models and is linked to refracted P-wave hodograms. A priori information can be included in the inversion and a spatial regularization is introduced as a set of constraints between model parameters of adjacent models and layers. Both a priori information and regularization are weighted by covariance matrixes. We show the comparison of individual inversions and joint inversion for a synthetic dataset that presents smooth lateral variations. Performing individual inversions, the poor sensitivity to some model parameters leads to estimation errors up to 62.5 %, whereas for joint inversion the cooperation of different techniques reduces most of the model estimation errors below 5% with few exceptions up to 39 %, with an overall improvement. Even though the final model retrieved by joint inversion is internally consistent and more reliable, the analysis of the results evidences unacceptable values of Vp/Vs ratio for some layers, thus providing negative Poisson's ratio values. To further improve the inversion performances, an additional constraint is added imposing Poisson's ratio in the range 0-0.5. The final results are globally improved by the introduction of this constraint further reducing the maximum error to 30 %. The same test was performed on field data acquired in a landslide-prone area close by the town of Hvittingfoss, Norway. Seismic data were recorded on two 160-m long profiles in roll-along mode using a 5-kg sledgehammer as source and 24 4.5-Hz vertical geophones with 4-m separation. First-arrival travel times were picked at every shot locations and surface wave dispersion curves extracted at 8 locations for each profile. 2D resistivity measurements were carried out on the same profiles using Gradient and Dipole-Dipole arrays with 2-m electrode spacing. The apparent resistivity curves were extracted at the same location as for the dispersion curves. The data were subsequently jointly inverted and the resulting model compared to individual inversions. Although models from both, individual and joint inversions are consistent, the estimation error is smaller for joint inversion, and more especially for first-arrival travel times. The joint inversion exploits different sensitivities of the methods to model parameters and therefore mitigates solution nonuniqueness and the effects of intrinsic limitations of the different techniques. Moreover, it produces an internally consistent multi-parametric final model that can be profitably interpreted to provide a better understanding of subsurface properties.

  20. Global distribution of methane emissions, emission trends, and OH trends inferred from an inversion of GOSAT data for 2010-2015

    NASA Astrophysics Data System (ADS)

    Maasakkers, J. D.; Jacob, D.; Payer Sulprizio, M.; Hersher, M.; Scarpelli, T.; Turner, A. J.; Sheng, J.; Bloom, A. A.; Bowman, K. W.; Parker, R.

    2017-12-01

    We present a global inversion of methane sources and sinks using GOSAT satellite data from 2010 up to 2015. The inversion optimizes emissions and their trends at 4° × 5° resolution as well as the interannual variability of global OH concentrations. It uses an analytical approach that quantifies the information content from the GOSAT observations and provides full error characterization. We show how the analytical approach can be applied in log-space, ensuring the positivity of the posterior. The inversion starts from state-of-science a priori emission inventories including the Gridded EPA inventory for US anthropogenic emissions, detailed oil and gas emissions for Canada and Mexico, EDGAR v4.3.2 for anthropogenic emissions in other countries, the WetCHARTs product for wetlands, and our own estimates for geological seeps. Inversion results show lower emissions over Western Europe and China than predicted by EDGAR v4.3.2 but higher emissions over Japan. In contrast to previous inversions that used incorrect patterns in a priori emissions, we find that the EPA inventory does not underestimate US anthropogenic emissions. Results for trends show increasing emissions in the tropics combined with decreasing emissions in Europe, and a decline in OH concentrations contributing to the global methane trend.
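    An analytical Bayesian inversion with Gaussian a priori and observation errors has a closed-form posterior mean and covariance, from which the averaging-kernel matrix and degrees of freedom for signal quantify the information content of the observations. The sketch below shows that linear-Gaussian algebra on a toy problem (the paper applies the approach in log space to enforce positivity); dimensions and values are illustrative only.

```python
import numpy as np

def analytical_inversion(y, K, x_a, S_a, S_o):
    """Linear-Gaussian Bayesian inversion: posterior mean and covariance,
    averaging-kernel matrix, and degrees of freedom for signal (trace of A)."""
    S_o_inv = np.linalg.inv(S_o)
    S_hat = np.linalg.inv(K.T @ S_o_inv @ K + np.linalg.inv(S_a))   # posterior covariance
    x_hat = x_a + S_hat @ K.T @ S_o_inv @ (y - K @ x_a)             # posterior mean
    A = np.eye(len(x_a)) - S_hat @ np.linalg.inv(S_a)               # averaging kernel
    return x_hat, S_hat, A, np.trace(A)

# Toy problem (not GOSAT dimensions): 3 emission elements, 5 observations
rng = np.random.default_rng(2)
K = rng.uniform(0.1, 1.0, size=(5, 3))                  # placeholder Jacobian
x_true = np.array([1.2, 0.8, 1.0])
y = K @ x_true + rng.normal(scale=0.05, size=5)
x_hat, S_hat, A, dofs = analytical_inversion(
    y, K, x_a=np.ones(3), S_a=0.25 * np.eye(3), S_o=0.05**2 * np.eye(5))
print(x_hat, dofs)
```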

  1. Use of the TM tasseled cap transform for interpretation of spectral contrasts in an urban scene

    NASA Technical Reports Server (NTRS)

    Goward, S. N.; Wharton, S. W.

    1984-01-01

    Investigations are being conducted with the objective of developing automated numerical image analysis procedures. In this context, an examination is performed of physically-based multispectral data transforms as a means to incorporate a priori knowledge of land radiance properties in the analysis process. A physically-based transform of TM observations was developed. This transform extends the Landsat MSS Tasseled Cap transform reported by Kauth and Thomas (1976) to TM data observations. The present study aims to examine the utility of the TM Tasseled Cap transform as applied to TM data from an urban landscape. The analysis conducted is based on a 512 x 512 subset of the Washington, DC November 2, 1982 TM scene, centered on Springfield, VA. It appears that the TM tasseled cap transformation provides a good means to explain land physical attributes of the Washington scene. This result suggests a direction by which a priori knowledge of landscape spectral patterns may be incorporated into numerical image analysis.
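    The Tasseled Cap transform itself is a fixed linear rotation of the six reflective TM bands into interpretable axes (brightness, greenness, wetness). The sketch below shows only the mechanics of applying such a transform; the coefficient matrix is a placeholder and must be replaced with the published TM Tasseled Cap coefficients for real analyses.

```python
import numpy as np

# Placeholder 3 x 6 coefficient matrix (rows: brightness, greenness, wetness);
# substitute the published TM Tasseled Cap coefficients for real work.
TC = np.array([
    [ 0.3,  0.3,  0.5, 0.5,  0.4,  0.2],
    [-0.3, -0.2, -0.5, 0.7,  0.1, -0.2],
    [ 0.1,  0.2,  0.3, 0.3, -0.7, -0.5],
])

def tasseled_cap(tm_bands):
    """tm_bands: array of shape (..., 6) holding TM reflective bands 1-5 and 7.
    Returns shape (..., 3): brightness, greenness, wetness features."""
    return np.asarray(tm_bands) @ TC.T

pixel = np.array([0.08, 0.10, 0.12, 0.35, 0.20, 0.12])   # one made-up pixel spectrum
print(tasseled_cap(pixel))
```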

  2. Sensitivity and Uncertainty Analysis of Plutonium and Cesium Isotopes in Modeling of BR3 Reactor Spent Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conant, Andrew; Erickson, Anna; Robel, Martin

    Nuclear forensics has a broad task to characterize recovered nuclear or radiological material and interpret the results of investigation. One approach to isotopic characterization of nuclear material obtained from a reactor is to chemically separate and perform isotopic measurements on the sample and verify the results with modeling of the sample history, for example, operation of a nuclear reactor. The major actinide plutonium and fission product cesium are commonly measured signatures of the fuel history in a reactor core. This study investigates the uncertainty of the plutonium and cesium isotope ratios of a fuel rod discharged from a research pressurized water reactor when the location of the sample is not known a priori. A sensitivity analysis showed overpredicted values for the 240Pu/ 239Pu ratio toward the axial center of the rod and revealed a lower probability of the rod of interest (ROI) being on the periphery of the assembly. The uncertainty analysis found the relative errors due to only the rod position and boron concentration to be 17% to 36% and 7% to 15% for the 240Pu/ 239Pu and 137Cs/ 135Cs ratios, respectively. Lastly, this study provides a method for uncertainty quantification of isotope concentrations due to the location of the ROI. Similar analyses can be performed to verify future chemical and isotopic analyses.

  3. Short-term memory development: differences in serial position curves between age groups and latent classes.

    PubMed

    Koppenol-Gonzalez, Gabriela V; Bouwmeester, Samantha; Vermunt, Jeroen K

    2014-10-01

    In studies on the development of cognitive processes, children are often grouped based on their ages before analyzing the data. After the analysis, the differences between age groups are interpreted as developmental differences. We argue that this approach is problematic because the variance in cognitive performance within an age group is considered to be measurement error. However, if a part of this variance is systematic, it can provide very useful information about the cognitive processes used by some children of a certain age but not others. In the current study, we presented 210 children aged 5 to 12 years with serial order short-term memory tasks. First we analyze our data according to the approach using age groups, and then we apply latent class analysis to form latent classes of children based on their performance instead of their ages. We display the results of the age groups and the latent classes in terms of serial position curves, and we discuss the differences in results. Our findings show that there are considerable differences in performance between the age groups and the latent classes. We interpret our findings as indicating that the latent class analysis yielded a much more meaningful way of grouping children in terms of cognitive processes than the a priori grouping of children based on their ages. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Sensitivity and Uncertainty Analysis of Plutonium and Cesium Isotopes in Modeling of BR3 Reactor Spent Fuel

    DOE PAGES

    Conant, Andrew; Erickson, Anna; Robel, Martin; ...

    2017-02-03

    Nuclear forensics has a broad task to characterize recovered nuclear or radiological material and interpret the results of investigation. One approach to isotopic characterization of nuclear material obtained from a reactor is to chemically separate and perform isotopic measurements on the sample and verify the results with modeling of the sample history, for example, operation of a nuclear reactor. The major actinide plutonium and fission product cesium are commonly measured signatures of the fuel history in a reactor core. This study investigates the uncertainty of the plutonium and cesium isotope ratios of a fuel rod discharged from a research pressurized water reactor when the location of the sample is not known a priori. A sensitivity analysis showed overpredicted values for the 240Pu/ 239Pu ratio toward the axial center of the rod and revealed a lower probability of the rod of interest (ROI) being on the periphery of the assembly. The uncertainty analysis found the relative errors due to only the rod position and boron concentration to be 17% to 36% and 7% to 15% for the 240Pu/ 239Pu and 137Cs/ 135Cs ratios, respectively. Lastly, this study provides a method for uncertainty quantification of isotope concentrations due to the location of the ROI. Similar analyses can be performed to verify future chemical and isotopic analyses.

  5. Bayesian Orbit Computation Tools for Objects on Geocentric Orbits

    NASA Astrophysics Data System (ADS)

    Virtanen, J.; Granvik, M.; Muinonen, K.; Oszkiewicz, D.

    2013-08-01

    We consider the space-debris orbital inversion problem via the concept of Bayesian inference. The methodology has been put forward for the orbital analysis of solar system small bodies in the early 1990s [7] and results in a full solution of the statistical inverse problem given in terms of the a posteriori probability density function (PDF) for the orbital parameters. We demonstrate the applicability of our statistical orbital analysis software to Earth orbiting objects, using both well-established Monte Carlo (MC) techniques (for a review, see e.g. [13]) and recently developed Markov-chain MC (MCMC) techniques (e.g., [9]). In particular, we exploit the novel virtual observation MCMC method [8], which is based on the characterization of the phase-space volume of orbital solutions before the actual MCMC sampling. Our statistical methods and the resulting PDFs immediately enable probabilistic impact predictions to be carried out. Furthermore, this can be readily done also for very sparse data sets and data sets of poor quality - provided that some a priori information on the observational uncertainty is available. For asteroids, impact probabilities with the Earth from the discovery night onwards have been provided, e.g., by [11] and [10]; the latter study includes the sampling of the observational-error standard deviation as a random variable.

  6. Evaluation of linear regression techniques for atmospheric applications: the importance of appropriate weighting

    NASA Astrophysics Data System (ADS)

    Wu, Cheng; Zhen Yu, Jian

    2018-03-01

    Linear regression techniques are widely used in atmospheric science, but they are often improperly applied due to lack of consideration or inappropriate handling of measurement uncertainty. In this work, numerical experiments are performed to evaluate the performance of five linear regression techniques, significantly extending previous works by Chu and Saylor. The five techniques are ordinary least squares (OLS), Deming regression (DR), orthogonal distance regression (ODR), weighted ODR (WODR), and York regression (YR). We first introduce a new data generation scheme that employs the Mersenne twister (MT) pseudorandom number generator. The numerical simulations are also improved by (a) refining the parameterization of nonlinear measurement uncertainties, (b) inclusion of a linear measurement uncertainty, and (c) inclusion of WODR for comparison. Results show that DR, WODR and YR produce an accurate slope, but the intercept by WODR and YR is overestimated and the degree of bias is more pronounced with a low R2 XY dataset. The importance of a proper weighting parameter λ in DR is investigated by sensitivity tests, and it is found that an improper λ in DR can lead to a bias in both the slope and intercept estimation. Because the λ calculation depends on the actual form of the measurement error, it is essential to determine the exact form of measurement error in the XY data during the measurement stage. If a priori error in one of the variables is unknown, or the measurement error described cannot be trusted, DR, WODR and YR can provide the least biases in slope and intercept among all tested regression techniques. For these reasons, DR, WODR and YR are recommended for atmospheric studies when both X and Y data have measurement errors. An Igor Pro-based program (Scatter Plot) was developed to facilitate the implementation of error-in-variables regressions.
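    Of the error-in-variables methods compared above, weighted orthogonal distance regression is readily available in SciPy. A small sketch with synthetic data follows; the noise levels and the linear model are illustrative, not the paper's simulation setup.

```python
import numpy as np
from scipy import odr

rng = np.random.default_rng(3)
x_true = np.linspace(0.0, 10.0, 50)
sx, sy = 0.3, 0.5                              # assumed measurement errors in X and Y
x_obs = x_true + rng.normal(scale=sx, size=x_true.size)
y_obs = 2.0 * x_true + 1.0 + rng.normal(scale=sy, size=x_true.size)

# Weighted orthogonal distance regression: uncertainties in both variables act as weights
linear = odr.Model(lambda beta, x: beta[0] * x + beta[1])
data = odr.RealData(x_obs, y_obs,
                    sx=np.full(x_obs.size, sx), sy=np.full(y_obs.size, sy))
fit = odr.ODR(data, linear, beta0=[1.0, 0.0]).run()
print("slope, intercept:", fit.beta, " std. errors:", fit.sd_beta)
```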

  7. Joint inversion of regional and teleseismic earthquake waveforms

    NASA Astrophysics Data System (ADS)

    Baker, Mark R.; Doser, Diane I.

    1988-03-01

    A least squares joint inversion technique for regional and teleseismic waveforms is presented. The mean square error between seismograms and synthetics is minimized using true amplitudes. Matching true amplitudes in modeling requires meaningful estimates of modeling uncertainties and of seismogram signal-to-noise ratios. This also permits calculating linearized uncertainties on the solution based on accuracy and resolution. We use a priori estimates of earthquake parameters to stabilize unresolved parameters, and for comparison with a posteriori uncertainties. We verify the technique on synthetic data, and on the 1983 Borah Peak, Idaho (M = 7.3), earthquake. We demonstrate the inversion on the August 1954 Rainbow Mountain, Nevada (M = 6.8), earthquake and find parameters consistent with previous studies.

  8. [The heuristics of reaching a diagnosis].

    PubMed

    Wainstein, Eduardo

    2009-12-01

    Making a diagnosis in medicine is a complex process in which many cognitive and psychological issues are involved. After the first encounter with the patient, an unconscious process ensues to suspect the presence of a particular disease. Usually, complementary tests are requested to confirm the clinical suspicion. The interpretation of requested tests can be biased by the clinical diagnosis that was considered in the first encounter with the patient. The awareness of these sources of error is essential in the interpretation of the findings that will eventually lead to a final diagnosis. This article discusses some aspects of the heuristics involved in the assignment of prior probabilities and provides a brief review of current concepts of the reasoning process.

  9. Venus spherical harmonic gravity model to degree and order 60

    NASA Technical Reports Server (NTRS)

    Konopliv, Alex S.; Sjogren, William L.

    1994-01-01

    The Magellan and Pioneer Venus Orbiter radiometric tracking data sets have been combined to produce a 60th degree and order spherical harmonic gravity field. The Magellan data include the high-precision X-band gravity tracking from September 1992 to May 1993 and post-aerobraking data up to January 5, 1994. Gravity models are presented from the application of Kaula's power rule for Venus and an alternative a priori method using surface accelerations. Results are given as vertical gravity acceleration at the reference surface, geoid, vertical Bouguer, and vertical isostatic maps with errors for the vertical gravity and geoid maps included. Correlation of the gravity with topography for the different models is also discussed.
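    A Kaula-type power rule supplies an a priori standard deviation for degree-n coefficients, sigma(n) ~ K/n^2, which enters the least-squares normal equations as a diagonal prior covariance and stabilizes poorly observed coefficients. The sketch below shows that mechanism generically; the constant K, the design matrix, and the data are placeholders, not the Venus solution.

```python
import numpy as np

def kaula_prior_sigma(degree, K=1.0e-5):
    """Kaula-type a priori standard deviation for degree-n coefficients (K is a placeholder)."""
    return K / np.asarray(degree, dtype=float) ** 2

def solve_with_kaula_prior(A, y, sigma_obs, degrees, K=1.0e-5):
    """Least squares with a zero-mean Kaula-rule prior as a diagonal a priori covariance."""
    W = np.eye(len(y)) / sigma_obs**2                          # observation weights
    P0_inv = np.diag(1.0 / kaula_prior_sigma(degrees, K)**2)   # prior information matrix
    return np.linalg.solve(A.T @ W @ A + P0_inv, A.T @ W @ y)

# Illustrative: six coefficients of degrees 2-4 observed through a random design matrix
rng = np.random.default_rng(4)
degs = [2, 2, 3, 3, 4, 4]
coeffs_true = kaula_prior_sigma(degs) * rng.normal(size=6)
A = rng.normal(size=(20, 6))
y = A @ coeffs_true + rng.normal(scale=1e-6, size=20)
print(solve_with_kaula_prior(A, y, 1e-6, degs))
```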

  10. A Discontinuous Galerkin Method for Parabolic Problems with Modified hp-Finite Element Approximation Technique

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.

    2004-01-01

    A recent paper is generalized to a case where the spatial region is taken in R^3. The region is assumed to be a thin body, such as a panel on the wing or fuselage of an aerospace vehicle. The traditional h- as well as hp-finite element methods are applied to the surface defined in the x - y variables, while, through the thickness, the technique of the p-element is employed. A time and spatial discretization scheme, based upon an assumption of a certain weak singularity of ||u_t||_2, is used to derive an optimal a priori error estimate for the current method.

  11. Template based rotation: A method for functional connectivity analysis with a priori templates☆

    PubMed Central

    Schultz, Aaron P.; Chhatwal, Jasmeer P.; Huijbers, Willem; Hedden, Trey; van Dijk, Koene R.A.; McLaren, Donald G.; Ward, Andrew M.; Wigman, Sarah; Sperling, Reisa A.

    2014-01-01

    Functional connectivity magnetic resonance imaging (fcMRI) is a powerful tool for understanding the network level organization of the brain in research settings and is increasingly being used to study large-scale neuronal network degeneration in clinical trial settings. Presently, a variety of techniques, including seed-based correlation analysis and group independent components analysis (with either dual regression or back projection) are commonly employed to compute functional connectivity metrics. In the present report, we introduce template based rotation, a novel analytic approach optimized for use with a priori network parcellations, which may be particularly useful in clinical trial settings. Template based rotation was designed to leverage the stable spatial patterns of intrinsic connectivity derived from out-of-sample datasets by mapping data from novel sessions onto the previously defined a priori templates. We first demonstrate the feasibility of using previously defined a priori templates in connectivity analyses, and then compare the performance of template based rotation to seed based and dual regression methods by applying these analytic approaches to an fMRI dataset of normal young and elderly subjects. We observed that template based rotation and dual regression are approximately equivalent in detecting fcMRI differences between young and old subjects, demonstrating similar effect sizes for group differences and similar reliability metrics across 12 cortical networks. Both template based rotation and dual-regression demonstrated larger effect sizes and comparable reliabilities as compared to seed based correlation analysis, though all three methods yielded similar patterns of network differences. When performing inter-network and sub-network connectivity analyses, we observed that template based rotation offered greater flexibility, larger group differences, and more stable connectivity estimates as compared to dual regression and seed based analyses. This flexibility owes to the reduced spatial and temporal orthogonality constraints of template based rotation as compared to dual regression. These results suggest that template based rotation can provide a useful alternative to existing fcMRI analytic methods, particularly in clinical trial settings where predefined outcome measures and conserved network descriptions across groups are at a premium. PMID:25150630

  12. Gaussian copula as a likelihood function for environmental models

    NASA Astrophysics Data System (ADS)

    Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.

    2017-12-01

    Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function, because of their favourable analytical properties. Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data which are typically more uncertain in high flows than in periods with low flows. A problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in the "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, 2) we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an interesting departure from the usage of fully parametric distributions as likelihood functions - and they could help us to better capture the statistical properties of errors and make more reliable predictions.
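    A Gaussian copula error model can be built from past residuals by mapping each margin to Gaussian scores through its empirical CDF and estimating the correlation of those scores; the copula then couples arbitrary marginals with Gaussian dependence, capturing autocorrelation and heteroscedasticity without a Box-Cox transform. The sketch below illustrates that construction on synthetic errors and is not the authors' implementation.

```python
import numpy as np
from scipy import stats

def normal_scores(sample):
    """Map a 1-D sample to Gaussian scores through its empirical CDF (rank transform)."""
    u = stats.rankdata(sample) / (len(sample) + 1.0)   # strictly inside (0, 1)
    return stats.norm.ppf(u)

def fit_gaussian_copula(error_matrix):
    """error_matrix: (n_samples, n_steps) of past model errors. Returns the correlation
    matrix of the Gaussian scores, i.e. the Gaussian-copula dependence parameter."""
    z = np.column_stack([normal_scores(error_matrix[:, j])
                         for j in range(error_matrix.shape[1])])
    return np.corrcoef(z, rowvar=False)

# Illustrative autocorrelated, heteroscedastic synthetic errors
rng = np.random.default_rng(5)
errors = np.cumsum(rng.normal(size=(500, 4)), axis=1) * np.array([0.5, 1.0, 1.5, 2.0])
print(fit_gaussian_copula(errors).round(2))
```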

  13. Bayesian inversions of a dynamic vegetation model at four European grassland sites

    NASA Astrophysics Data System (ADS)

    Minet, J.; Laloy, E.; Tychon, B.; Francois, L.

    2015-05-01

    Eddy covariance data from four European grassland sites are used to probabilistically invert the CARAIB (CARbon Assimilation In the Biosphere) dynamic vegetation model (DVM) with 10 unknown parameters, using the DREAM(ZS) (DiffeRential Evolution Adaptive Metropolis) Markov chain Monte Carlo (MCMC) sampler. We focus on comparing model inversions, considering both homoscedastic and heteroscedastic eddy covariance residual errors, with variances either fixed a priori or jointly inferred together with the model parameters. Agreements between measured and simulated data during calibration are comparable with previous studies, with root mean square errors (RMSEs) of simulated daily gross primary productivity (GPP), ecosystem respiration (RECO) and evapotranspiration (ET) ranging from 1.73 to 2.19, 1.04 to 1.56 g C m-2 day-1 and 0.50 to 1.28 mm day-1, respectively. For the calibration period, using a homoscedastic eddy covariance residual error model resulted in a better agreement between measured and modelled data than using a heteroscedastic residual error model. However, a model validation experiment showed that CARAIB models calibrated considering heteroscedastic residual errors perform better. Posterior parameter distributions derived from using a heteroscedastic model of the residuals thus appear to be more robust. This is the case even though the classical linear heteroscedastic error model assumed herein did not fully remove heteroscedasticity of the GPP residuals. Despite the fact that the calibrated model is generally capable of fitting the data within measurement errors, systematic bias in the model simulations are observed. These are likely due to model inadequacies such as shortcomings in the photosynthesis modelling. Besides the residual error treatment, differences between model parameter posterior distributions among the four grassland sites are also investigated. It is shown that the marginal distributions of the specific leaf area and characteristic mortality time parameters can be explained by site-specific ecophysiological characteristics.

  14. Adaptive error detection for HDR/PDR brachytherapy: Guidance for decision making during real-time in vivo point dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk

    Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied on two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that the AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and the AEDA’s capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.
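    The decision logic described above can be summarized as: declare an error when measurements disagree with calculations for the reconstructed dosimeter position, downgrade it to a false error if some alternative dosimeter position explains the data, and confirm a true error otherwise. The sketch below is a schematic rendering of that logic only; the tolerance, inputs, and candidate positions are hypothetical.

```python
import numpy as np

def aeda_classify(measured, calc_original, calc_candidates, tol=0.10):
    """Schematic error classification in the spirit of the AEDA described above.
    measured        : measured dose rates per dwell position
    calc_original   : calculated dose rates for the reconstructed dosimeter position
    calc_candidates : dict {label: calculated dose rates} for alternative positions
    tol             : tolerated mean relative deviation (hypothetical threshold)"""
    def mean_rel_dev(calc):
        return float(np.mean(np.abs(measured - calc) / np.maximum(calc, 1e-9)))

    if mean_rel_dev(calc_original) <= tol:
        return "no error"
    best = min(calc_candidates, key=lambda k: mean_rel_dev(calc_candidates[k]))
    if mean_rel_dev(calc_candidates[best]) <= tol:
        return f"false error (dosimeter likely at '{best}')"
    return "true error (no viable dosimeter position explains the data)"

# Hypothetical dose rates (arbitrary units) for three dwell positions
measured = np.array([1.00, 0.80, 0.65])
calc_original = np.array([1.30, 0.95, 0.70])
candidates = {"dosimeter shifted +3 mm": np.array([1.02, 0.81, 0.66]),
              "dosimeter shifted -3 mm": np.array([0.70, 0.60, 0.50])}
print(aeda_classify(measured, calc_original, candidates))
```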

  15. Tomographic inversion of time-domain resistivity and chargeability data for the investigation of landfills using a priori information.

    PubMed

    De Donno, Giorgio; Cardarelli, Ettore

    2017-01-01

    In this paper, we present a new code for the modelling and inversion of resistivity and chargeability data using a priori information to improve the accuracy of the reconstructed model for landfill. When a priori information is available in the study area, we can insert them by means of inequality constraints on the whole model or on a single layer or assigning weighting factors for enhancing anomalies elongated in the horizontal or vertical directions. However, when we have to face a multilayered scenario with numerous resistive to conductive transitions (the case of controlled landfills), the effective thickness of the layers can be biased. The presented code includes a model-tuning scheme, which is applied after the inversion of field data, where the inversion of the synthetic data is performed based on an initial guess, and the absolute difference between the field and synthetic inverted models is minimized. The reliability of the proposed approach has been supported in two real-world examples; we were able to identify an unauthorized landfill and to reconstruct the geometrical and physical layout of an old waste dump. The combined analysis of the resistivity and chargeability (normalised) models help us to remove ambiguity due to the presence of the waste mass. Nevertheless, the presence of certain layers can remain hidden without using a priori information, as demonstrated by a comparison of the constrained inversion with a standard inversion. The robustness of the above-cited method (using a priori information in combination with model tuning) has been validated with the cross-section from the construction plans, where the reconstructed model is in agreement with the original design. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, J.E.

    Many robotic operations, e.g., mapping, scanning, feature following, etc., require accurate surface following of arbitrary targets. This paper presents a versatile surface following and mapping system designed to promote hardware, software and application independence, modular development, and upward expandability. These goals are met by: a full, a priori specification of the hardware and software interfaces; a modular system architecture; and a hierarchical surface-data analysis method, permitting application specific tuning at each conceptual level of topological abstraction. This surface following system was fully designed independently of any specific robotic host, then successfully integrated with and demonstrated on a completely a priori unknown, real-time robotic system. 7 refs.

  17. A priori discretization quality metrics for distributed hydrologic modeling applications

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Tolson, Bryan; Craig, James; Shafii, Mahyar; Basu, Nandita

    2016-04-01

    In distributed hydrologic modelling, a watershed is treated as a set of small homogeneous units that address the spatial heterogeneity of the watershed being simulated. The ability of models to reproduce observed spatial patterns first depends on the spatial discretization, which is the process of defining homogeneous units in the form of grid cells, subwatersheds, or hydrologic response units, etc. It is common for hydrologic modelling studies to simply adopt a nominal or default discretization strategy without formally assessing alternative discretization levels. This approach lacks formal justification and is thus problematic. More formalized discretization strategies are either a priori or a posteriori with respect to building and running a hydrologic simulation model. A posteriori approaches tend to be ad-hoc and compare model calibration and/or validation performance under various watershed discretizations. The construction and calibration of multiple versions of a distributed model can become a seriously limiting computational burden. Current a priori approaches are more formalized and compare overall heterogeneity statistics of dominant variables between candidate discretization schemes and input data or reference zones. While a priori approaches are efficient and do not require running a hydrologic model, they do not fully investigate the internal spatial pattern changes of variables of interest. Furthermore, the existing a priori approaches focus on landscape and soil data and do not assess impacts of discretization on stream channel definition even though its significance has been noted by numerous studies. The primary goals of this study are to (1) introduce new a priori discretization quality metrics considering the spatial pattern changes of model input data; (2) introduce a two-step discretization decision-making approach to compress extreme errors and meet user-specified discretization expectations through non-uniform discretization threshold modification. For the first time, the metrics quantify the routing-relevant information loss due to discretization according to the relationship between in-channel routing length and flow velocity. Moreover, they identify and count the spatial pattern changes of dominant hydrological variables by overlaying candidate discretization schemes upon input data and accumulating variable changes in an area-weighted way. The metrics are straightforward and applicable to any semi-distributed or fully distributed hydrological model whose grid scales are greater than the input data resolutions. The discretization metrics and decision-making approach are applied to the Grand River watershed located in southwestern Ontario, Canada, where discretization decisions are required for a semi-distributed modelling application. Results show that discretization-induced information loss increases monotonically as the discretization gets coarser. With regard to routing information loss in subbasin discretization, multiple points of interest rather than just the watershed outlet should be considered. Moreover, subbasin and HRU discretization decisions should not be considered independently, since subbasin input significantly influences the complexity of the HRU discretization result. Finally, results show that the common and convenient approach of making uniform discretization decisions across the watershed domain performs worse than a metric-informed non-uniform discretization approach, since the latter is able to conserve more watershed heterogeneity under the same model complexity (number of computational units).

  18. Bayesian dose-response analysis for epidemiological studies with complex uncertainty in dose estimation.

    PubMed

    Kwon, Deukwoo; Hoffman, F Owen; Moroz, Brian E; Simon, Steven L

    2016-02-10

    Most conventional risk analysis methods rely on a single best estimate of exposure per person, which does not allow for adjustment for exposure-related uncertainty. Here, we propose a Bayesian model averaging method to properly quantify the relationship between radiation dose and disease outcomes by accounting for shared and unshared uncertainty in estimated dose. Our Bayesian risk analysis method utilizes multiple realizations of sets (vectors) of doses generated by a two-dimensional Monte Carlo simulation method that properly separates shared and unshared errors in dose estimation. The exposure model used in this work is taken from a study of the risk of thyroid nodules among a cohort of 2376 subjects who were exposed to fallout from nuclear testing in Kazakhstan. We assessed the performance of our method through an extensive series of simulations and comparisons against conventional regression risk analysis methods. When the estimated doses contain relatively small amounts of uncertainty, the Bayesian method using multiple a priori plausible draws of dose vectors gave similar results to the conventional regression-based methods of dose-response analysis. However, when large and complex mixtures of shared and unshared uncertainties are present, the Bayesian method using multiple dose vectors had significantly lower relative bias than conventional regression-based risk analysis methods and better coverage, that is, a markedly increased capability to include the true risk coefficient within the 95% credible interval of the Bayesian-based risk estimate. An evaluation of the dose-response using our method is presented for an epidemiological study of thyroid disease following radiation exposure. Copyright © 2015 John Wiley & Sons, Ltd.

  19. Construct Validation of a Multidimensional Computerized Adaptive Test for Fatigue in Rheumatoid Arthritis

    PubMed Central

    Nikolaus, Stephanie; Bode, Christina; Taal, Erik; Vonkeman, Harald E.; Glas, Cees A. W.; van de Laar, Mart A. F. J.

    2015-01-01

    Objective Multidimensional computerized adaptive testing enables precise measurements of patient-reported outcomes at an individual level across different dimensions. This study examined the construct validity of a multidimensional computerized adaptive test (CAT) for fatigue in rheumatoid arthritis (RA). Methods The ‘CAT Fatigue RA’ was constructed based on a previously calibrated item bank. It contains 196 items and three dimensions: ‘severity’, ‘impact’ and ‘variability’ of fatigue. The CAT was administered to 166 patients with RA. They also completed a traditional, multidimensional fatigue questionnaire (BRAF-MDQ) and the SF-36 in order to examine the CAT’s construct validity. A priori criterion for construct validity was that 75% of the correlations between the CAT dimensions and the subscales of the other questionnaires were as expected. Furthermore, comprehensive use of the item bank, measurement precision and score distribution were investigated. Results The a priori criterion for construct validity was supported for two of the three CAT dimensions (severity and impact but not for variability). For severity and impact, 87% of the correlations with the subscales of the well-established questionnaires were as expected but for variability, 53% of the hypothesised relations were found. Eighty-nine percent of the items were selected between one and 137 times for CAT administrations. Measurement precision was excellent for the severity and impact dimensions, with more than 90% of the CAT administrations reaching a standard error below 0.32. The variability dimension showed good measurement precision with 90% of the CAT administrations reaching a standard error below 0.44. No floor- or ceiling-effects were found for the three dimensions. Conclusion The CAT Fatigue RA showed good construct validity and excellent measurement precision on the dimensions severity and impact. The dimension variability had less ideal measurement characteristics, pointing to the need to recalibrate the CAT item bank with a two-dimensional model, solely consisting of severity and impact. PMID:26710104

  20. A New Understanding for the Rain Rate retrieval of Attenuating Radars Measurement

    NASA Astrophysics Data System (ADS)

    Koner, P.; Battaglia, A.; Simmer, C.

    2009-04-01

    The retrieval of rain rate from attenuated radar (e.g. the Cloud Profiling Radar on board CloudSAT, in orbit since June 2006) is a challenging problem. L'Ecuyer and Stephens [1] underlined this difficulty (for rain rates larger than 1.5 mm/h) and suggested the need of additional information (like path-integrated attenuations (PIA) derived from surface reference techniques or precipitation water path estimated from co-located passive microwave radiometer) to constrain the retrieval. It is generally discussed based on optimal estimation theory that there are no solutions without constraining the problem in the case of visible attenuation because there is not enough information content to solve the problem. However, when the problem is constrained by the additional measurement of PIA, there is a reasonable solution. This raises the spontaneous question: Is all information enclosed in this additional measurement? This also seems to contradict information theory because one measurement can introduce only one degree of freedom in the retrieval. Why is one degree of freedom so important in the above problem? This question cannot be explained using the estimation and information theories of OEM. On the other hand, Koner and Drummond [2] argued that the OEM is basically a regularization method, where a-priori covariance is used as a stabilizer and the regularization strength is determined by the choices of the a-priori and error covariance matrices. The regularization is required for the reduction of the condition number of the Jacobian, which drives the noise injection from the measurement and inversion spaces to the state space in an ill-posed inversion. In this work, the above-mentioned question will be discussed based on regularization theory, error mitigation and eigenvalue mathematics. References: 1. L'Ecuyer TS and Stephens G. An estimation based precipitation retrieval algorithm for attenuating radar. J. Appl. Met., 2002, 41, 272-85. 2. Koner PK, Drummond JR. A comparison of regularization techniques for atmospheric trace gases retrievals. JQSRT 2008; 109:514-26.

  1. Hybrid inversions of CO2 fluxes at regional scale applied to network design

    NASA Astrophysics Data System (ADS)

    Kountouris, Panagiotis; Gerbig, Christoph; Koch, Frank-Thomas

    2013-04-01

    Long-term observations from atmospheric greenhouse gas measuring stations, located in representative regions over the continent, improve our understanding of greenhouse gas sources and sinks. These mixing ratio measurements can be linked to surface fluxes by atmospheric transport inversions. Within the upcoming years new stations are to be deployed, which requires decision-making tools with respect to the location and the density of the network. We are developing a method to assess potential greenhouse gas observing networks in terms of their ability to recover specific target quantities. As target quantities we use CO2 fluxes aggregated to specific spatial and temporal scales. We introduce a high resolution inverse modeling framework, which attempts to combine advantages from pixel based inversions with those of a carbon cycle data assimilation system (CCDAS). The hybrid inversion system consists of the Lagrangian transport model STILT, the diagnostic biosphere model VPRM and a Bayesian inversion scheme. We aim to retrieve the spatiotemporal distribution of net ecosystem exchange (NEE) at a high spatial resolution (10 km x 10 km) by inverting for spatially and temporally varying scaling factors for gross ecosystem exchange (GEE) and respiration (R) rather than solving for the fluxes themselves. Thus the state space includes parameters for controlling photosynthesis and respiration, but unlike in a CCDAS it allows for spatial and temporal variations, which can be expressed as NEE(x,y,t) = λ_G(x,y,t) GEE(x,y,t) + λ_R(x,y,t) R(x,y,t). We apply spatially and temporally correlated uncertainties by using error covariance matrices with non-zero off-diagonal elements. Synthetic experiments will test our system and select the optimal a priori error covariance by using different spatial and temporal correlation lengths on the error statistics of the a priori covariance and comparing the optimized fluxes against the 'known truth'. As 'known truth' we use independent fluxes generated from a different biosphere model (BIOME-BGC). Initially we perform single-station inversions for the Ochsenkopf tall tower located in Germany. Further expansion of the inversion framework to multiple stations and its application to network design will address the questions of how well a set of network stations can constrain a given target quantity, and whether there are objective criteria to select an optimal configuration for new stations that maximizes the uncertainty reduction.
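    The spatially correlated a priori error covariance referred to above is often built from a per-cell standard deviation and a correlation that decays with distance. The sketch below constructs such a matrix with an exponential decay; the grid, uncertainties, and correlation length are placeholders, not the values used in the study.

```python
import numpy as np

def exponential_covariance(coords_km, sigma, corr_length_km):
    """A priori error covariance with exponentially decaying spatial correlation.
    coords_km: (n, 2) grid-cell coordinates in km; sigma: per-cell prior standard deviation."""
    d = np.linalg.norm(coords_km[:, None, :] - coords_km[None, :, :], axis=-1)
    return np.outer(sigma, sigma) * np.exp(-d / corr_length_km)

# Illustrative 5 x 5 grid of 10 km cells with a 50 km correlation length
xx, yy = np.meshgrid(np.arange(5) * 10.0, np.arange(5) * 10.0)
coords = np.column_stack([xx.ravel(), yy.ravel()])
B = exponential_covariance(coords, sigma=np.full(25, 0.3), corr_length_km=50.0)
print(B.shape, B[0, :3].round(3))
```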

  2. WE-G-BRD-02: Characterizing Information Loss in a Sparse-Sampling-Based Dynamic MRI Sequence (k-T BLAST) for Lung Motion Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arai, T; Nofiele, J; Sawant, A

    2015-06-15

    Purpose: Rapid MRI is an attractive, non-ionizing tool for soft-tissue-based monitoring of respiratory motion in thoracic and abdominal radiotherapy. One big challenge is to achieve high temporal resolution while maintaining adequate spatial resolution. K-t BLAST, a sparse-sampling and reconstruction sequence based on a priori information, represents a potential solution. In this work, we investigated how much "true" motion information is lost as a priori information is progressively added for faster imaging. Methods: Lung tumor motions in the superior-inferior direction obtained from ten individuals were replayed into an in-house, MRI-compatible, programmable motion platform (50 Hz refresh and 100 micron precision). Six water-filled 1.5 ml tubes were placed on it as fiducial markers. Dynamic marker motion within a coronal slice (FOV: 32×32 cm², resolution: 0.67×0.67 mm², slice-thickness: 5 mm) was collected on a 3.0 T body scanner (Ingenia, Philips). Balanced-FFE (TE/TR: 1.3 ms/2.5 ms, flip-angle: 40 degrees) was used in conjunction with k-t BLAST. Each motion was repeated four times as four k-t acceleration factors 1, 2, 5, and 16 (corresponding frame rates were 2.5, 4.7, 9.8, and 19.1 Hz, respectively) were compared. For each image set, one average motion trajectory was computed from six marker displacements. Root mean square (RMS) error was used as a metric of spatial accuracy where measured trajectories were compared to original data. Results: Tumor motion was approximately 10 mm. The mean (standard deviation) of respiratory rates over ten patients was 0.28 (0.06) Hz. Cumulative distributions of tumor motion frequency spectra (0–25 Hz) obtained from the patients showed that 90% of motion fell at or below 3.88 Hz. Therefore, the frame rate must be at least double that for accurate monitoring. The RMS errors over patients for k-t factors of 1, 2, 5, and 16 were 0.10 (0.04), 0.17 (0.04), 0.21 (0.06) and 0.26 (0.06) mm, respectively. Conclusions: A k-t factor of 5 or higher can cover the high-frequency component of tumor respiratory motion, while the estimated error of spatial accuracy was approximately 0.2 mm.

  3. The Community Cloud retrieval for CLimate (CC4CL) - Part 2: The optimal estimation approach

    NASA Astrophysics Data System (ADS)

    McGarragh, Gregory R.; Poulsen, Caroline A.; Thomas, Gareth E.; Povey, Adam C.; Sus, Oliver; Stapelberg, Stefan; Schlundt, Cornelia; Proud, Simon; Christensen, Matthew W.; Stengel, Martin; Hollmann, Rainer; Grainger, Roy G.

    2018-06-01

    The Community Cloud retrieval for Climate (CC4CL) is a cloud property retrieval system for satellite-based multispectral imagers and is an important component of the Cloud Climate Change Initiative (Cloud_cci) project. In this paper we discuss the optimal estimation retrieval of cloud optical thickness, effective radius and cloud top pressure based on the Optimal Retrieval of Aerosol and Cloud (ORAC) algorithm. Key to this method is the forward model, which includes the clear-sky model, the liquid water and ice cloud models, the surface model including a bidirectional reflectance distribution function (BRDF), and the "fast" radiative transfer solution (which includes a multiple scattering treatment). All of these components and their assumptions and limitations will be discussed in detail. The forward model provides the accuracy appropriate for our retrieval method. The errors are comparable to the instrument noise for cloud optical thicknesses greater than 10. At optical thicknesses less than 10 modeling errors become more significant. The retrieval method is then presented describing optimal estimation in general, the nonlinear inversion method employed, measurement and a priori inputs, the propagation of input uncertainties and the calculation of subsidiary quantities that are derived from the retrieval results. An evaluation of the retrieval was performed using measurements simulated with noise levels appropriate for the MODIS instrument. Results show errors less than 10 % for cloud optical thicknesses greater than 10. Results for clouds of optical thicknesses less than 10 have errors up to 20 %.
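    The optimal estimation step underlying such a retrieval can be sketched generically as below. The toy forward model, Jacobian, covariances and two-element state are illustrative assumptions only; they are not the ORAC cloud forward model or its state vector.

```python
import numpy as np

# Generic Gauss-Newton optimal-estimation step (Rodgers-style), shown with a
# toy nonlinear forward model. The state vector, Jacobian and covariances are
# placeholders, not the ORAC cloud retrieval itself.

def forward(x):
    # Hypothetical 3-channel forward model of a 2-element state
    return np.array([x[0] + 0.5 * x[1],
                     0.2 * x[0] ** 2 + x[1],
                     x[0] * x[1] + 1.0])

def jacobian(x):
    return np.array([[1.0, 0.5],
                     [0.4 * x[0], 1.0],
                     [x[1], x[0]]])

x_a = np.array([1.0, 1.0])                    # a priori state
S_a = np.diag([1.0, 1.0])                     # a priori covariance
S_e = np.diag([0.01, 0.01, 0.01])             # measurement noise covariance
x_true = np.array([1.4, 0.7])
y = forward(x_true)                           # noise-free synthetic measurement

x = x_a.copy()
for _ in range(10):
    K = jacobian(x)
    S_e_inv, S_a_inv = np.linalg.inv(S_e), np.linalg.inv(S_a)
    # Gauss-Newton update of the maximum a posteriori state
    lhs = K.T @ S_e_inv @ K + S_a_inv
    rhs = K.T @ S_e_inv @ (y - forward(x) + K @ (x - x_a))
    x = x_a + np.linalg.solve(lhs, rhs)

K = jacobian(x)
S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
print("retrieved state:", x, "posterior std:", np.sqrt(np.diag(S_hat)))
```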

  4. Visual aftereffects and sensory nonlinearities from a single statistical framework

    PubMed Central

    Laparra, Valero; Malo, Jesús

    2015-01-01

    When adapted to a particular scenery our senses may fool us: colors are misinterpreted, certain spatial patterns seem to fade out, and static objects appear to move in reverse. A mere empirical description of the mechanisms tuned to color, texture, and motion may tell us where these visual illusions come from. However, such empirical models of gain control do not explain why these mechanisms work in this apparently dysfunctional manner. Current normative explanations of aftereffects based on scene statistics derive gain changes by (1) invoking decorrelation and linear manifold matching/equalization, or (2) using nonlinear divisive normalization obtained from parametric scene models. These principled approaches have different drawbacks: the first is not compatible with the known saturation nonlinearities in the sensors and it cannot fully accomplish information maximization due to its linear nature. In the second, gain change is almost determined a priori by the assumed parametric image model linked to divisive normalization. In this study we show that both the response changes that lead to aftereffects and the nonlinear behavior can be simultaneously derived from a single statistical framework: the Sequential Principal Curves Analysis (SPCA). As opposed to mechanistic models, SPCA is not intended to describe how physiological sensors work, but it is focused on explaining why they behave as they do. Nonparametric SPCA has two key advantages as a normative model of adaptation: (i) it is better than linear techniques as it is a flexible equalization that can be tuned for more sensible criteria other than plain decorrelation (either full information maximization or error minimization); and (ii) it makes no a priori functional assumption regarding the nonlinearity, so the saturations emerge directly from the scene data and the goal (and not from the assumed function). It turns out that the optimal responses derived from these more sensible criteria and SPCA are consistent with dysfunctional behaviors such as aftereffects. PMID:26528165

  5. SU-E-J-125: Classification of CBCT Noises in Terms of Their Contribution to Proton Range Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brousmiche, S; Orban de Xivry, J; Macq, B

    2014-06-01

    Purpose: This study assesses the potential use of CBCT images in adaptive protontherapy by estimating the contribution of the main sources of noise and calibration errors to the proton range uncertainty. Methods: Measurements intended to highlight each particular source have been achieved by adapting either the testbench configuration, e.g. use of filtration, fan-beam collimation, beam stop arrays, phantoms and detector reset light, or the sequence of correction algorithms including water precorrection. Additional Monte-Carlo simulations have been performed to complement these measurements, especially for the beam hardening and the scatter cases. Simulations of proton beam penetration through the resulting images have then been carried out to quantify the range change due to these effects. The particular case of a brain irradiation is considered mainly because of the multiple effects that the skull bones have on the internal soft tissues. Results: Foremost among the range error sources is the undercorrection of scatter. Its influence has been analyzed from a comparison of fan-beam and full axial FOV acquisitions. In this case, large range errors of about 12 mm can be reached if the assumption is made that the scatter has only a constant contribution over the projection images. Even the detector lag, which a priori induces a much smaller effect, has been shown to contribute up to 2 mm to the overall error if its correction only aims at reducing the skin artefact. This last result can partially be understood by the larger interface between tissues and bones inside the skull. Conclusion: This study has set the basis for a more systematic analysis of the effect of CBCT noise on range uncertainties based on a combination of measurements, simulations and theoretical results. With our method, even more subtle effects such as the cone-beam artifact or the detector lag can be assessed. SBR and JOR are financed by iMagX, a public-private partnership between the Walloon Region of Belgium and IBA under convention #1217662.

  6. Model Parameter Estimation Experiment (MOPEX): An overview of science strategy and major results from the second and third workshops

    USGS Publications Warehouse

    Duan, Q.; Schaake, J.; Andreassian, V.; Franks, S.; Goteti, G.; Gupta, H.V.; Gusev, Y.M.; Habets, F.; Hall, A.; Hay, L.; Hogue, T.; Huang, M.; Leavesley, G.; Liang, X.; Nasonova, O.N.; Noilhan, J.; Oudin, L.; Sorooshian, S.; Wagener, T.; Wood, E.F.

    2006-01-01

    The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrologic models and in land surface parameterization schemes of atmospheric models. The MOPEX science strategy involves three major steps: data preparation, a priori parameter estimation methodology development, and demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrologic basins in the United States (US) and in other countries. This database is being continuously expanded to include more basins in all parts of the world. A number of international MOPEX workshops have been convened to bring together interested hydrologists and land surface modelers from all over the world to exchange knowledge and experience in developing a priori parameter estimation techniques. This paper describes the results from the second and third MOPEX workshops. The specific objective of these workshops is to examine the state of a priori parameter estimation techniques and how they can be potentially improved with observations from well-monitored hydrologic basins. Participants of the second and third MOPEX workshops were provided with data from 12 basins in the southeastern US and were asked to carry out a series of numerical experiments using a priori parameters as well as calibrated parameters developed for their respective hydrologic models. Different modeling groups carried out all the required experiments independently using eight different models, and the results from these models have been assembled for analysis in this paper. This paper presents an overview of the MOPEX experiment and its design. The main experimental results are analyzed. A key finding is that existing a priori parameter estimation procedures are problematic and need improvement. Significant improvement of these procedures may be achieved through model calibration of well-monitored hydrologic basins. This paper concludes with a discussion of the lessons learned, and points out further work and future strategy. © 2005 Elsevier Ltd. All rights reserved.

  7. A New Self-Constrained Inversion Method of Potential Fields Based on Probability Tomography

    NASA Astrophysics Data System (ADS)

    Sun, S.; Chen, C.; WANG, H.; Wang, Q.

    2014-12-01

    The self-constrained inversion method of potential fields uses a priori information self-extracted from the potential field data. Differing from external a priori information, the self-extracted information is generally a set of parameters derived exclusively from the analysis of the gravity and magnetic data (Paoletti et al., 2013). Here we develop a new self-constrained inversion method based on probability tomography. Probability tomography does not need any a priori information or large inversion matrix operations. Moreover, its result can describe the sources entirely and clearly, especially those whose distribution is complex and irregular. Therefore, we attempt to use the a priori information extracted from the probability tomography results to constrain the inversion for physical properties. Magnetic anomaly data are taken as an example in this work. The probability tomography result of the magnetic total field anomaly (ΔΤ) shows a smoother distribution than the anomalous source and cannot display the source edges exactly. However, the gradients of ΔΤ have higher resolution than ΔΤ in their respective directions, and this characteristic is also present in their probability tomography results. We therefore use a set of rules to combine the probability tomography results of ∂ΔΤ⁄∂x, ∂ΔΤ⁄∂y and ∂ΔΤ⁄∂z into a new result used for extracting a priori information, and then incorporate that information into the model objective function as spatial weighting functions to invert for the final magnetic susceptibility. Synthetic magnetic examples were inverted with and without the a priori information extracted from the probability tomography results; the comparison shows that the constrained results are more concentrated and resolve the source body edges with higher resolution. The method is finally applied to an iron mine in China with field-measured ΔΤ data and performs well. References: Paoletti, V., Ialongo, S., Florio, G., Fedi, M. & Cella, F., 2013. Self-constrained inversion of potential fields, Geophys. J. Int. This research is supported by the Fundamental Research Funds for Institute for Geophysical and Geochemical Exploration, Chinese Academy of Geological Sciences (Grant Nos. WHS201210 and WHS201211).

  8. An A Priori Multiobjective Optimization Model of a Search and Rescue Network

    DTIC Science & Technology

    1992-03-01

    sequences. Classical sensitivity analysis and tolerance analysis were used to analyze the frequency assignments generated by the different weight...function for excess coverage of a frequency. Sensitivity analysis is used to investigate the robustness of the frequency assignments produced by the...interest. The linear program solution is used to produce classical sensitivity analysis for the weight ranges. 17 III. Model Formulation This chapter

  9. Location memory for dots in polygons versus cities in regions: evaluating the category adjustment model.

    PubMed

    Friedman, Alinda; Montello, Daniel R; Burte, Heather

    2012-09-01

    We conducted 3 experiments to examine the category adjustment model (Huttenlocher, Hedges, & Duncan, 1991) in circumstances in which the category boundaries were irregular schematized polygons made from outlines of maps. For the first time, accuracy was tested when only perceptual and/or existing long-term memory information about identical locations was cued. Participants from Alberta, Canada and California received 1 of 3 conditions: dots-only, in which a dot appeared within the polygon, and after a 4-s dynamic mask the empty polygon appeared and the participant indicated where the dot had been; dots-and-names, in which participants were told that the first polygon represented Alberta/California and that each dot was in the correct location for the city whose name appeared outside the polygon; and names-only, in which there was no first polygon, and participants clicked on the city locations from extant memory alone. Location recall in the dots-only and dots-and-names conditions did not differ from each other and had small but significant directional errors that pointed away from the centroids of the polygons. In contrast, the names-only condition had large and significant directional errors that pointed toward the centroids. Experiments 2 and 3 eliminated the distribution of stimuli and overall screen position as causal factors. The data suggest that in the "classic" category adjustment paradigm, it is difficult to determine a priori when Bayesian cue combination is applicable, making Bayesian analysis less useful as a theoretical approach to location estimation. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  10. Optimized star sensors laboratory calibration method using a regularization neural network.

    PubMed

    Zhang, Chengfen; Niu, Yanxiong; Zhang, Hao; Lu, Jiazhen

    2018-02-10

    High-precision ground calibration is essential to ensure the performance of star sensors. However, the complex distortion and multi-error coupling have brought great difficulties to traditional calibration methods, especially for large field of view (FOV) star sensors. Although increasing the complexity of models is an effective way to improve the calibration accuracy, it significantly increases the demand for calibration data. In order to achieve high-precision calibration of star sensors with large FOV, a novel laboratory calibration method based on a regularization neural network is proposed. A multi-layer neural network is designed to represent the mapping between the star vector and the corresponding star point coordinate directly. To ensure the generalization performance of the network, regularization strategies are incorporated into the network structure and the training algorithm. Simulation and experiment results demonstrate that the proposed method can achieve high precision with less calibration data and without any other a priori information. Compared with traditional methods, the calibration error of the star sensor decreased by about 30%. The proposed method can satisfy the precision requirement for large FOV star sensors.
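    A minimal stand-in for such a regularized network calibration is sketched below, assuming a hypothetical pinhole-plus-radial-distortion camera to generate the data and using an L2-penalized multi-layer perceptron; the network size, penalty weight and distortion model are illustrative choices, not the architecture or regularization strategy of the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative stand-in for a regularized neural-network calibration: learn the
# mapping from a (unit) star vector to its distorted focal-plane coordinates.
# The pinhole + radial-distortion model generating the data, the network size
# and the L2 penalty (alpha) are assumptions, not the paper's actual setup.

rng = np.random.default_rng(2)
n = 5000
focal = 50.0                                    # mm, assumed focal length

# Random star directions within a wide field of view
ang = rng.uniform(-0.3, 0.3, (n, 2))            # rad, off-axis angles
v = np.column_stack([np.tan(ang[:, 0]), np.tan(ang[:, 1]), np.ones(n)])
v /= np.linalg.norm(v, axis=1, keepdims=True)   # unit star vectors (inputs)

# Ideal projection plus a synthetic radial distortion and measurement noise
xy_ideal = focal * np.column_stack([v[:, 0] / v[:, 2], v[:, 1] / v[:, 2]])
r2 = (xy_ideal ** 2).sum(axis=1, keepdims=True)
xy = xy_ideal * (1 + 2e-5 * r2) + rng.normal(0, 0.01, (n, 2))   # targets (mm)

# Multi-layer network with an L2 regularization penalty to limit overfitting
net = MLPRegressor(hidden_layer_sizes=(32, 32), alpha=1e-3,
                   max_iter=5000, random_state=0)
net.fit(v[:4000], xy[:4000])

pred = net.predict(v[4000:])
rms = np.sqrt(np.mean((pred - xy[4000:]) ** 2))
print(f"hold-out calibration residual: {rms:.4f} mm")
```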

  11. A Closed-Form Error Model of Straight Lines for Improved Data Association and Sensor Fusing

    PubMed Central

    2018-01-01

    Linear regression is a basic tool in mobile robotics, since it enables accurate estimation of straight lines from range-bearing scans or in digital images, which is a prerequisite for reliable data association and sensor fusing in the context of feature-based SLAM. This paper discusses, extends and compares existing algorithms for line fitting that are applicable also in the case of strong covariances between the coordinates at each single data point, which must not be neglected if range-bearing sensors are used. In particular, the determination of the covariance matrix, which is required for stochastic modeling, is considered. The main contribution is a new closed-form error model of straight lines for quickly and reliably calculating the covariance matrix from just a few comprehensible and easily obtainable parameters. The model can be applied widely whenever a line is fitted from a number of distinct points, even without a priori knowledge of the specific measurement noise. By means of extensive simulations, the performance and robustness of the new model in comparison to existing approaches are shown. PMID:29673205
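    A sketch of the underlying line-fitting problem is given below: an orthogonal least-squares fit of a line in Hessian normal form to noisy 2-D points, with the parameter covariance approximated numerically by a bootstrap. The paper's closed-form covariance model is not reproduced here; the wall geometry and noise level are assumptions.

```python
import numpy as np

# Illustrative sketch of fitting a straight line in Hessian normal form
# (x*cos(alpha) + y*sin(alpha) = d) to noisy 2-D points and estimating the
# covariance of (alpha, d). The closed-form model of the paper is not
# reproduced; here the covariance is approximated by a simple bootstrap.

rng = np.random.default_rng(8)

def fit_line(points):
    """Orthogonal (total) least-squares line fit via the scatter matrix."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Normal of the line = eigenvector belonging to the smallest eigenvalue
    eigvals, eigvecs = np.linalg.eigh(centered.T @ centered)
    normal = eigvecs[:, 0]
    alpha = np.arctan2(normal[1], normal[0])
    d = float(normal @ centroid)
    if d < 0:                      # keep a unique parameterization (d >= 0)
        alpha, d = alpha + np.pi, -d
    return alpha, d

# Synthetic scan of a wall at alpha = 30 deg, d = 2 m, with 2 cm point noise
alpha_true, d_true = np.deg2rad(30), 2.0
t = np.linspace(-1, 1, 80)
line_pts = np.column_stack([d_true * np.cos(alpha_true) - t * np.sin(alpha_true),
                            d_true * np.sin(alpha_true) + t * np.cos(alpha_true)])
points = line_pts + rng.normal(0, 0.02, line_pts.shape)

alpha_hat, d_hat = fit_line(points)

# Bootstrap covariance of the line parameters (alpha, d)
samples = np.array([fit_line(points[rng.integers(0, len(points), len(points))])
                    for _ in range(500)])
cov = np.cov(samples.T)

print(f"alpha={np.degrees(alpha_hat):.2f} deg, d={d_hat:.3f} m")
print("bootstrap covariance of (alpha, d):\n", cov)
```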

  12. Multi-objective based spectral unmixing for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Xu, Xia; Shi, Zhenwei

    2017-02-01

    Sparse hyperspectral unmixing assumes that each observed pixel can be expressed as a linear combination of several pure spectra from an a priori library. Sparse unmixing is challenging, since it is usually transformed into an NP-hard l0-norm-based optimization problem. Existing methods usually utilize a relaxation of the original l0 norm. However, the relaxation may bring in sensitive weighting parameters and additional calculation error. In this paper, we propose a novel multi-objective based algorithm to solve the sparse unmixing problem without any relaxation. We transform sparse unmixing into a multi-objective optimization problem, which contains two competing objectives: minimizing the reconstruction error and controlling the endmember sparsity. To improve the efficiency of the multi-objective optimization, a population-based random flipping strategy is designed. Moreover, we theoretically prove that the proposed method is able to recover a guaranteed approximate solution from the spectral library within limited iterations. The proposed method can directly deal with the l0 norm via binary coding of the spectral signatures in the library. Experiments on both synthetic and real hyperspectral datasets demonstrate the effectiveness of the proposed method.
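    The two objectives and the flipping move can be sketched for a single pixel as follows; the random spectral library, the toy archive-based search and the plain least-squares abundance step are illustrative assumptions, not the published population-based optimizer.

```python
import numpy as np

# Minimal sketch of the two objectives in multi-objective sparse unmixing for a
# single pixel: reconstruction error vs. number of selected endmembers (l0).
# The spectral library A, the pixel y and the toy archive-based search are
# synthetic placeholders; the full population-based optimizer is not reproduced.

rng = np.random.default_rng(3)
bands, library_size = 100, 30
A = rng.random((bands, library_size))                 # spectral library
true_support = rng.choice(library_size, 3, replace=False)
true_abund = rng.dirichlet(np.ones(3))
y = A[:, true_support] @ true_abund                   # noise-free mixed pixel

def objectives(selection):
    """Return (reconstruction error, sparsity) for a binary selection vector."""
    idx = np.flatnonzero(selection)
    if idx.size == 0:
        return np.linalg.norm(y), 0
    # Non-negative least squares would be more faithful; plain LS keeps it short
    abund, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    return np.linalg.norm(A[:, idx] @ abund - y), idx.size

def flip(selection):
    """Randomly flip one library entry in or out (the mutation move)."""
    child = selection.copy()
    child[rng.integers(selection.size)] ^= 1
    return child

# Tiny archive-based search: keep only mutually non-dominated selections
empty = np.zeros(library_size, dtype=int)
archive = [(empty, objectives(empty))]
for _ in range(3000):
    parent, _ = archive[rng.integers(len(archive))]
    cand = flip(parent)
    f_cand = objectives(cand)
    if any(f[0] <= f_cand[0] and f[1] <= f_cand[1] for _, f in archive):
        continue                                      # candidate adds nothing new
    archive = [(s, f) for s, f in archive
               if not (f_cand[0] <= f[0] and f_cand[1] <= f[1])]
    archive.append((cand, f_cand))

for sel, (err, k) in sorted(archive, key=lambda item: item[1][1]):
    print(f"k={k:2d}  error={err:.4f}  support={np.flatnonzero(sel)}")
print("true support:", np.sort(true_support))
```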

  13. Detection of anomalies in radio tomography of asteroids: Source count and forward errors

    NASA Astrophysics Data System (ADS)

    Pursiainen, S.; Kaasalainen, M.

    2014-09-01

    The purpose of this study was to advance numerical methods for radio tomography in which an asteroid's internal electric permittivity distribution is to be recovered from radio frequency data gathered by an orbiter. The focus was on signal generation via multiple sources (transponders), providing one potential, or even essential, scenario to be implemented in a challenging in situ measurement environment and within tight payload limits. As a novel feature, the effects of forward errors including noise and a priori uncertainty of the forward (data) simulation were examined through a combination of the iterative alternating sequential (IAS) inverse algorithm and finite-difference time-domain (FDTD) simulation of time evolution data. Single and multiple source scenarios were compared in two-dimensional localization of permittivity anomalies. Three different anomaly strengths and four levels of total noise were tested. Results suggest, among other things, that multiple sources can be necessary to obtain appropriate results, for example, to distinguish three separate anomalies with permittivity less than or equal to half of the background value, which is relevant for the recovery of internal cavities.

  14. Uncertainty quantification of crustal scale thermo-chemical properties in Southeast Australia

    NASA Astrophysics Data System (ADS)

    Mather, B.; Moresi, L. N.; Rayner, P. J.

    2017-12-01

    The thermo-chemical properties of the crust are essential to understanding the mechanical and thermal state of the lithosphere. The uncertainties associated with these parameters are connected to the available geophysical observations and the a priori information used to constrain the objective function. Often, it is computationally efficient to reduce the parameter space by mapping large portions of the crust into lithologies that are assumed to be homogeneous. However, the boundaries of these lithologies are, in themselves, uncertain and should also be included in the inverse problem. We assimilate geological uncertainties from an a priori geological model of Southeast Australia with geophysical uncertainties from S-wave tomography and 174 heat flow observations within an adjoint inversion framework. This reduces the computational cost of inverting high-dimensional probability spaces, compared to probabilistic inversion techniques that operate in the 'forward' mode, but at the expense of uncertainty and covariance information. We overcome this restriction using a sensitivity analysis that perturbs our observations and a priori information within their probability distributions to estimate the posterior uncertainty of thermo-chemical parameters in the crust.

  15. The Job Dimensions Underlying the Job Elements of the Position Analysis Questionnaire (PAQ) (Form B). Report No. 4.

    ERIC Educational Resources Information Center

    Marquardt, Lloyd D.; McCormick, Ernest J.

    This study was concerned with the identification of the job dimension underlying the job elements of the Position Analysis Questionnaire (PAQ), Form B. The PAQ is a structured job analysis instrument consisting of 187 worker-oriented job elements which are divided into six a priori major divisions. The statistical procedure of principal components…

  16. SOCIODEMOGRAPHIC DOMAINS OF DEPRIVATION AND PRETERM BIRTH

    EPA Science Inventory

    Area-level deprivation is consistently associated with poor health outcomes. Using US census data (2000) and principal components analysis, a priori defined socio-demographic indices of poverty, housing, residential stability, occupation, employment and education were created fo...

  17. Transmission of trisomy decreases with maternal age in mouse models of Down syndrome, mirroring a phenomenon in human Down syndrome mothers.

    PubMed

    Stern, Shani; Biron, David; Moses, Elisha

    2016-07-11

    Down syndrome incidence in humans increases dramatically with maternal age. This is mainly the result of increased meiotic errors, but factors such as differences in abortion rate may play a role as well. Since the meiotic error rate increases almost exponentially after a certain age, its contribution to the overall incidence of aneuploidy may mask the contribution of other processes. To focus on such selection mechanisms we investigated transmission in trisomic females, using data from mouse models and from Down syndrome humans. In trisomic females the a priori probability of trisomy is independent of meiotic errors and thus approximately constant in the early embryo. Despite this, the rate of transmission of the extra chromosome decreases with age in females of the Ts65Dn and, as we show, the Tc1 mouse models of Down syndrome. Evaluating the progeny of 73 Tc1 births and 112 Ts65Dn births from females aged 130 to 250 days showed that both models exhibit a 3-fold reduction in the probability of transmitting the trisomy with increasing maternal age. This is concurrent with a 2-fold reduction of litter size with maternal ageing. Furthermore, analysis of 30 previously reported births to Down syndrome women shows a similar tendency, with an almost 3-fold reduction in the probability of having a Down syndrome child between the ages of 20 and 30 years. In the two mouse models of Down syndrome used in this study, and in human Down syndrome, older females have a significantly lower probability of transmitting the trisomy to their offspring. Our findings, taken together with previous reports of a decreased supportive environment in the older uterus, add support to the notion that an older uterus negatively selects the less fit trisomic embryos.

  18. The validation of the Yonsei CArbon Retrieval algorithm with improved aerosol information using GOSAT measurements

    NASA Astrophysics Data System (ADS)

    Jung, Yeonjin; Kim, Jhoon; Kim, Woogyung; Boesch, Hartmut; Goo, Tae-Young; Cho, Chunho

    2017-04-01

    Although several CO2 retrieval algorithms have been developed to improve our understanding of the carbon cycle, limitations in spatial coverage and uncertainties due to aerosols and thin cirrus clouds remain a problem for monitoring CO2 concentrations globally. Based on an optimal estimation method, the Yonsei CArbon Retrieval (YCAR) algorithm was developed to retrieve the column-averaged dry-air mole fraction of carbon dioxide (XCO2) using the Greenhouse Gases Observing SATellite (GOSAT) measurements with optimized a priori CO2 profiles and aerosol models over East Asia. In previous studies, aerosol optical properties (AOPs) were found to be among the most important factors in CO2 retrievals, since AOPs are assumed to be fixed parameters during the retrieval process, resulting in XCO2 retrieval errors of up to 2.5 ppm. In this study, to reduce the errors caused by inaccurate aerosol optical information, the YCAR algorithm was improved to take into account aerosol optical properties as well as the aerosol vertical distribution simultaneously. CO2 retrievals with the two different aerosol approaches have been analyzed using GOSAT spectra and evaluated through comparison with collocated ground-based observations at several Total Carbon Column Observing Network (TCCON) sites. The improved YCAR algorithm has biases of 0.59±0.48 ppm and 2.16±0.87 ppm at the Saga and Tsukuba sites, respectively, with smaller biases and higher correlation coefficients compared to the GOSAT operational algorithm. In addition, the XCO2 retrievals will be validated at other TCCON sites and an error analysis will be performed. These results reveal that considering better aerosol information can improve the accuracy of the CO2 retrieval algorithm and provide more useful XCO2 information with reduced uncertainties. This study is expected to provide useful information for estimating carbon sources and sinks.

  19. Publication Bias in Research Synthesis: Sensitivity Analysis Using A Priori Weight Functions

    ERIC Educational Resources Information Center

    Vevea, Jack L.; Woods, Carol M.

    2005-01-01

    Publication bias, sometimes known as the "file-drawer problem" or "funnel-plot asymmetry," is common in empirical research. The authors review the implications of publication bias for quantitative research synthesis (meta-analysis) and describe existing techniques for detecting and correcting it. A new approach is proposed that is suitable for…

  20. Bayesian inversions of a dynamic vegetation model in four European grassland sites

    NASA Astrophysics Data System (ADS)

    Minet, J.; Laloy, E.; Tychon, B.; François, L.

    2015-01-01

    Eddy covariance data from four European grassland sites are used to probabilistically invert the CARAIB dynamic vegetation model (DVM) with ten unknown parameters, using the DREAM(ZS) Markov chain Monte Carlo (MCMC) sampler. We compare model inversions considering both homoscedastic and heteroscedastic eddy covariance residual errors, with variances either fixed a priori or jointly inferred with the model parameters. Agreements between measured and simulated data during calibration are comparable with previous studies, with root-mean-square errors (RMSE) of simulated daily gross primary productivity (GPP), ecosystem respiration (RECO) and evapotranspiration (ET) ranging from 1.73 to 2.19 g C m-2 day-1, 1.04 to 1.56 g C m-2 day-1, and 0.50 to 1.28 mm day-1, respectively. In validation, mismatches between measured and simulated data are larger, but still with Nash-Sutcliffe efficiency scores above 0.5 for three out of the four sites. Although measurement errors associated with eddy covariance data are known to be heteroscedastic, we showed that assuming a classical linear heteroscedastic model of the residual errors in the inversion does not fully remove heteroscedasticity. Since the employed heteroscedastic error model allows for larger deviations between simulated and measured data as the magnitude of the measured data increases, this error model expectedly leads to poorer data fitting compared to inversions considering a constant variance of the residual errors. Furthermore, sampling the residual error variances along with the model parameters results in model parameter posterior distributions overall similar to those obtained by fixing these variances beforehand, while slightly improving model performance. Although the calibrated model is generally capable of fitting the data within measurement errors, systematic biases in the model simulations are observed. These are likely due to model inadequacies such as shortcomings in the photosynthesis modelling. Besides model behaviour, differences between model parameter posterior distributions among the four grassland sites are also investigated. It is shown that the marginal distributions of the specific leaf area and characteristic mortality time parameters can be explained by site-specific ecophysiological characteristics. Lastly, the possibility of finding a common set of parameters among the four experimental sites is discussed.
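    The contrast between the two residual-error models can be sketched as follows; the synthetic GPP series and the linear error-model coefficients are placeholders, not the eddy covariance records or the CARAIB output.

```python
import numpy as np

# Minimal sketch of the two residual-error models compared above: a
# homoscedastic Gaussian likelihood (constant variance) and a linear
# heteroscedastic one whose standard deviation grows with the magnitude of the
# measurement. The GPP "data" and error-model coefficients are synthetic
# placeholders, not the eddy covariance records or CARAIB output.

rng = np.random.default_rng(4)
gpp_sim = rng.uniform(0.5, 12.0, 365)                    # simulated daily GPP
gpp_obs = gpp_sim + rng.normal(0, 0.1 + 0.1 * gpp_sim)   # noise grows with GPP
resid = gpp_obs - gpp_sim

def loglik_homoscedastic(resid, sigma):
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * (resid / sigma)**2)

def loglik_heteroscedastic(resid, y, a, b):
    sigma = a + b * np.abs(y)                 # sigma grows with the measurement
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * (resid / sigma)**2)

print("homoscedastic   logL:", loglik_homoscedastic(resid, sigma=np.std(resid)))
print("heteroscedastic logL:", loglik_heteroscedastic(resid, gpp_obs, a=0.1, b=0.1))
```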

  1. Formation Flight System Extremum-Seeking-Control Using Blended Performance Parameters

    NASA Technical Reports Server (NTRS)

    Ryan, John J. (Inventor)

    2018-01-01

    An extremum-seeking control system for formation flight that uses blended performance parameters in a conglomerate performance function that better approximates drag reduction than performance functions formed from individual measurements. Generally, a variety of different measurements are taken and fed to a control system, the measurements are weighted, and are then subjected to a peak-seeking control algorithm. As measurements are continually taken, the aircraft will be guided to a relative position which optimizes the drag reduction of the formation. Two embodiments are discussed. Two approaches are shown for determining relative weightings: "a priori" by which they are qualitatively determined (by minimizing the error between the conglomerate function and the drag reduction function), and by periodically updating the weightings as the formation evolves.

  2. High-precision numerical integration of equations in dynamics

    NASA Astrophysics Data System (ADS)

    Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.

    2018-05-01

    An important requirement for the process of solving differential equations in Dynamics, such as the equations of the motion of celestial bodies and, in particular, the motion of cosmic robotic systems, is high accuracy over large time intervals. One of the effective tools for obtaining such solutions is the Taylor series method. In this connection, we note that it is very advantageous to reduce the given equations of Dynamics to systems with polynomial (in the unknowns) right-hand sides. This allows us to obtain effective algorithms for finding the Taylor coefficients, a priori error estimates at each step of integration, and an optimal choice of the order of the approximation used. In the paper, these questions are discussed and appropriate algorithms are considered.
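    A minimal sketch of the approach for a system with polynomial right-hand side is given below, using the harmonic oscillator x' = v, v' = -x; the recursion for the Taylor coefficients follows directly from these equations, while the fixed order and the crude error estimate from the last retained term are illustrative choices rather than the paper's algorithm.

```python
import numpy as np

# Taylor-series integration of a system with polynomial right-hand side:
# the harmonic oscillator x' = v, v' = -x. The coefficient recursion is exact;
# the fixed order, step size and last-term error estimate are illustrative.

def taylor_coefficients(x0, v0, order):
    """Recursive Taylor coefficients: x_{k+1} = v_k/(k+1), v_{k+1} = -x_k/(k+1)."""
    x = np.zeros(order + 1)
    v = np.zeros(order + 1)
    x[0], v[0] = x0, v0
    for k in range(order):
        x[k + 1] = v[k] / (k + 1)
        v[k + 1] = -x[k] / (k + 1)
    return x, v

def step(x0, v0, h, order=20):
    xc, vc = taylor_coefficients(x0, v0, order)
    powers = h ** np.arange(order + 1)
    # Magnitude of the last retained term serves as a crude a priori error estimate
    err_est = abs(xc[order] * powers[order]) + abs(vc[order] * powers[order])
    return float(xc @ powers), float(vc @ powers), err_est

x, v, t, h = 1.0, 0.0, 0.0, 0.5
while t < 100.0:
    x, v, err = step(x, v, h)
    t += h

print(f"t={t:.1f}  x={x:.12f}  exact={np.cos(t):.12f}  last error estimate={err:.2e}")
```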

  3. Use of the maximum entropy method to retrieve the vertical atmospheric ozone profile and predict atmospheric ozone content

    NASA Technical Reports Server (NTRS)

    Turner, B. Curtis

    1992-01-01

    A method is developed for the prediction of ozone levels in planetary atmospheres. The method is formulated in terms of error covariance matrices associated with the direct measurements, the a priori first-guess profiles, and a weighting function matrix. It is described by the following linearized equation: y = A x + η, where A is the weighting function matrix, x is the ozone profile, and η is noise. The problems with this approach are: (1) the A matrix is nearly singular; (2) the number of unknowns in the profile exceeds the number of data points, so the solution may not be unique; and (3) even if a unique solution exists, η may make the solution ill-conditioned.

  4. An automated approach to the design of decision tree classifiers

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Chin, R.; Beaudet, P.

    1982-01-01

    An automated technique is presented for designing effective decision tree classifiers predicated only on a priori class statistics. The procedure relies on linear feature extractions and Bayes table look-up decision rules. Associated error matrices are computed and utilized to provide an optimal design of the decision tree at each so-called 'node'. A by-product of this procedure is a simple algorithm for computing the global probability of correct classification assuming the statistical independence of the decision rules. Attention is given to a more precise definition of decision tree classification, the mathematical details on the technique for automated decision tree design, and an example of a simple application of the procedure using class statistics acquired from an actual Landsat scene.

  5. Specification-based software sizing: An empirical investigation of function metrics

    NASA Technical Reports Server (NTRS)

    Jeffery, Ross; Stathis, John

    1993-01-01

    For some time the software industry has espoused the need for improved specification-based software size metrics. This paper reports on a study of nineteen recently developed systems in a variety of application domains. The systems were developed by a single software services corporation using a variety of languages. The study investigated several metric characteristics. It shows that: earlier research into inter-item correlation within the overall function count is partially supported; a priori function counts, in themselves, do not explain the majority of the effort variation in software development in the organization studied; documentation quality is critical to accurate function identification; and rater error is substantial in manual function counting. The implications of these findings for organizations using function-based metrics are explored.

  6. Terrain following of arbitrary surfaces using a high intensity LED proximity sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, J.E.

    1992-01-01

    Many robotic operations, e.g., mapping, scanning, feature following, etc., require accurate surface following of arbitrary targets. This paper presents a versatile surface following and mapping system designed to promote hardware, software and application independence, modular development, and upward expandability. These goals are met by: a full, a priori specification of the hardware and software interfaces; a modular system architecture; and a hierarchical surface-data analysis method, permitting application-specific tuning at each conceptual level of topological abstraction. This surface following system was fully designed independently of any specific robotic host, then successfully integrated with and demonstrated on a completely a priori unknown, real-time robotic system. 7 refs.

  7. Downdating a time-varying square root information filter

    NASA Technical Reports Server (NTRS)

    Muellerschoen, Ronald J.

    1990-01-01

    A new method to efficiently downdate an estimate and covariance generated by a discrete time Square Root Information Filter (SRIF) is presented. The method combines the QR factor downdating algorithm of Gill and the decentralized SRIF algorithm of Bierman. Efficient removal of either measurements or a priori information is possible without loss of numerical integrity. Moreover, the method includes features for detecting potential numerical degradation. Performance on a 300 parameter system with 5800 data points shows that the method can be used in real time and hence is a promising tool for interactive data analysis. Additionally, updating a time-varying SRIF filter with either additional measurements or a priori information proceeds analogously.

  8. Foucauldian diagnostics: space, time, and the metaphysics of medicine.

    PubMed

    Bishop, Jeffrey P

    2009-08-01

    This essay places Foucault's work into a philosophical context, recognizing that Foucault is difficult to place, and demonstrates that Foucault remains in the Kantian tradition of philosophy, even if he sits at the margins of that tradition. For Kant, the forms of intuition, space and time, are the a priori conditions of the possibility of human experience and knowledge. For Foucault, the a priori conditions are political space and historical time. Foucault sees political space as central to understanding both the subject and the objects of medicine, psychiatry, and the social sciences. Through this analysis one can see that medicine's metaphysics is a metaphysics of efficient causation, where medicine's objects are subjected to mechanisms of efficient control.

  9. Behavior-based aggregation of land categories for temporal change analysis

    NASA Astrophysics Data System (ADS)

    Aldwaik, Safaa Zakaria; Onsted, Jeffrey A.; Pontius, Robert Gilmore, Jr.

    2015-03-01

    Comparison between two time points of the same categorical variable for the same study extent can reveal changes among categories over time, such as transitions among land categories. If many categories exist, then analysis can be difficult to interpret. Category aggregation is the procedure that combines two or more categories to create a single broader category. Aggregation can simplify interpretation, and can also influence the sizes and types of changes. Some classifications have an a priori hierarchy to facilitate aggregation, but an a priori aggregation might make researchers blind to important category dynamics. We created an algorithm to aggregate categories in a sequence of steps based on the categories' behaviors in terms of gross losses and gross gains. The behavior-based algorithm aggregates net gaining categories with net gaining categories and aggregates net losing categories with net losing categories, but never aggregates a net gaining category with a net losing category. The behavior-based algorithm at each step in the sequence maintains net change and maximizes swap change. We present a case study where data from 2001 and 2006 for 64 land categories indicate change on 17% of the study extent. The behavior-based algorithm produces a set of 10 categories that maintains nearly the original amount of change. In contrast, an a priori aggregation produces 10 categories while reducing the change to 9%. We offer a free computer program to perform the behavior-based aggregation.
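    The grouping rule at the heart of the algorithm can be sketched as follows; the two toy categorical maps and the simple merge scoring are illustrative assumptions, not the published implementation.

```python
import numpy as np
from itertools import combinations

# Minimal sketch of the behavior-based grouping rule described above: compute
# gross gain and gross loss per category from two maps, then only allow merging
# categories whose net change has the same sign. The two toy maps and the
# pairwise merge scoring are illustrative assumptions, not the published algorithm.

rng = np.random.default_rng(5)
t1 = rng.integers(0, 6, (100, 100))                  # categorical map at time 1
t2 = t1.copy()
change = rng.random((100, 100)) < 0.15               # ~15% of pixels change
t2[change] = rng.integers(0, 6, change.sum())        # map at time 2

categories = np.unique(np.concatenate([t1.ravel(), t2.ravel()]))
gains = {c: np.sum((t2 == c) & (t1 != c)) for c in categories}
losses = {c: np.sum((t1 == c) & (t2 != c)) for c in categories}
net = {c: int(gains[c]) - int(losses[c]) for c in categories}

gainers = [c for c in categories if net[c] > 0]
losers = [c for c in categories if net[c] < 0]
print("net gainers:", gainers, "net losers:", losers)

# Candidate merges: only gainer-with-gainer or loser-with-loser pairs
candidates = list(combinations(gainers, 2)) + list(combinations(losers, 2))
for a, b in candidates:
    # Merging a and b removes the change between them (it becomes internal)
    removed = np.sum((t1 == a) & (t2 == b)) + np.sum((t1 == b) & (t2 == a))
    print(f"merge {a}+{b}: removes {removed} changed pixels")
```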

  10. A Priori Analysis of a Compressible Flamelet Model using RANS Data for a Dual-Mode Scramjet Combustor

    NASA Technical Reports Server (NTRS)

    Quinlan, Jesse R.; Drozda, Tomasz G.; McDaniel, James C.; Lacaze, Guilhem; Oefelein, Joseph

    2015-01-01

    In an effort to make large eddy simulation of hydrocarbon-fueled scramjet combustors more computationally accessible using realistic chemical reaction mechanisms, a compressible flamelet/progress variable (FPV) model was proposed that extends current FPV model formulations to high-speed, compressible flows. Development of this model relied on observations garnered from an a priori analysis of the Reynolds-Averaged Navier-Stokes (RANS) data obtained for the Hypersonic International Flight Research and Experimentation (HI-FiRE) dual-mode scramjet combustor. The RANS data were obtained using a reduced chemical mechanism for the combustion of a JP-7 surrogate and were validated using available experimental data. These RANS data were then post-processed to obtain, in an a priori fashion, the scalar fields corresponding to an FPV-based modeling approach. In the current work, in addition to the proposed compressible flamelet model, a standard incompressible FPV model was also considered. Several candidate progress variables were investigated for their ability to recover static temperature and major and minor product species. The effects of pressure and temperature on the tabulated progress variable source term were characterized, and model coupling terms embedded in the Reynolds-averaged Navier-Stokes equations were studied. Finally, results for the novel compressible flamelet/progress variable model were presented to demonstrate the improvement attained by modeling the effects of pressure and flamelet boundary conditions on the combustion.

  11. Accurate macromolecular crystallographic refinement: incorporation of the linear scaling, semiempirical quantum-mechanics program DivCon into the PHENIX refinement package.

    PubMed

    Borbulevych, Oleg Y; Plumley, Joshua A; Martin, Roger I; Merz, Kenneth M; Westerhoff, Lance M

    2014-05-01

    Macromolecular crystallographic refinement relies on sometimes dubious stereochemical restraints and rudimentary energy functionals to ensure the correct geometry of the model of the macromolecule and any covalently bound ligand(s). The ligand stereochemical restraint file (CIF) requires a priori understanding of the ligand geometry within the active site, and creation of the CIF is often an error-prone process owing to the great variety of potential ligand chemistry and structure. Stereochemical restraints have been replaced with more robust functionals through the integration of the linear-scaling, semiempirical quantum-mechanics (SE-QM) program DivCon with the PHENIX X-ray refinement engine. The PHENIX/DivCon package has been thoroughly validated on a population of 50 protein-ligand Protein Data Bank (PDB) structures with a range of resolutions and chemistry. The PDB structures used for the validation were originally refined utilizing various refinement packages and were published within the past five years. PHENIX/DivCon does not utilize CIF(s), link restraints and other parameters for refinement and hence it does not make as many a priori assumptions about the model. Across the entire population, the method results in reasonable ligand geometries and low ligand strains, even when the original refinement exhibited difficulties, indicating that PHENIX/DivCon is applicable to both single-structure and high-throughput crystallography.

  12. An Inequality Constrained Least-Squares Approach as an Alternative Estimation Procedure for Atmospheric Parameters from VLBI Observations

    NASA Astrophysics Data System (ADS)

    Halsig, Sebastian; Artz, Thomas; Iddink, Andreas; Nothnagel, Axel

    2016-12-01

    On their way through the atmosphere, radio signals are delayed and affected by bending and attenuation effects relative to a theoretical path in vacuum. In particular, the neutral part of the atmosphere contributes considerably to the error budget of space-geodetic observations. At the same time, space-geodetic techniques become more and more important in the understanding of the Earth's atmosphere, because atmospheric parameters can be linked to the water vapor content in the atmosphere. The tropospheric delay is usually taken into account by applying an adequate model for the hydrostatic component and by additionally estimating zenith wet delays for the highly variable wet component. Sometimes, the Ordinary Least Squares (OLS) approach leads to negative estimates, which would be equivalent to negative water vapor in the atmosphere and, of course, do not reflect meteorological and physical conditions in a plausible way. To cope with this phenomenon, we introduce an Inequality Constrained Least Squares (ICLS) method from the field of convex optimization and use inequality constraints to force the tropospheric parameters to be non-negative, allowing for a more realistic tropospheric parameter estimation in a meteorological sense. Because deficiencies in the a priori hydrostatic modeling are almost fully compensated by the tropospheric estimates, the ICLS approach urgently requires suitable a priori hydrostatic delays. In this paper, we briefly describe the ICLS method and validate its impact with regard to station positions.
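    The effect of the inequality constraints can be sketched with a toy zenith-wet-delay adjustment; the design matrix, mapping-function values, simulated hydrostatic mis-modeling and noise level are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Minimal sketch of inequality-constrained least squares for tropospheric
# parameters: ordinary least squares can return negative zenith wet delays,
# while bounding the solution at zero keeps the estimate physically plausible.
# The design matrix, mapping function and bias values are synthetic placeholders.

rng = np.random.default_rng(6)
n_obs, n_zwd = 200, 4                        # observations and ZWD segments
A = np.zeros((n_obs, n_zwd))
seg = rng.integers(0, n_zwd, n_obs)          # time segment of each observation
A[np.arange(n_obs), seg] = 1.0 / np.sin(np.deg2rad(rng.uniform(5, 90, n_obs)))

zwd_true = np.array([0.002, 0.0, 0.001, 0.003])      # metres; one segment is dry

# A small deficiency in the a priori hydrostatic model (here -3 mm of zenith
# delay mapped into every observation) pushes unconstrained estimates negative
hydro_bias = -0.003 * A.sum(axis=1)
y = A @ zwd_true + hydro_bias + rng.normal(0, 0.001, n_obs)

# Ordinary least squares: may produce negative (non-physical) wet delays
zwd_ols, *_ = np.linalg.lstsq(A, y, rcond=None)

# Inequality-constrained least squares: enforce ZWD >= 0
zwd_icls = lsq_linear(A, y, bounds=(0.0, np.inf)).x

print("OLS :", np.round(zwd_ols, 4))
print("ICLS:", np.round(zwd_icls, 4))
```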

  13. The Diagnosis of Error in Histories of Science

    NASA Astrophysics Data System (ADS)

    Thomas, William

    Whether and how to diagnose error in the history of science is a contentious issue. For many scientists, diagnosis is appealing because it allows them to discuss how knowledge can progress most effectively. Many historians disagree. They consider diagnosis inappropriate because it may discard features of past actors' thought that are important to understanding it, and may have even been intellectually productive. Ironically, these historians are apt to diagnose flaws in scientists' histories as proceeding from a misguided desire to idealize scientific method, and from their attendant identification of deviations from the ideal as, ipso facto, a paramount source of error in historical science. While both views have some merit, they should be reconciled if a more harmonious and productive relationship between the disciplines is to prevail. In To Explain the World, Steven Weinberg narrates the slow but definite emergence of what we call science from long traditions of philosophical and mathematical thought. This narrative follows in a historiographical tradition charted by historians such as Alexandre Koyre and Rupert Hall about sixty years ago. It is essentially a history of the emergence of reliable (if fallible) scientific method from more error-prone thought. While some historians such as Steven Shapin view narratives of this type as fundamentally error-prone, I do not view such projects as a priori illegitimate. They are, however, perhaps more difficult than Weinberg supposes. In this presentation, I will focus on two of Weinberg's strong historical claims: that physics became detached from religion as early as the beginning of the eighteenth century, and that physics proved an effective model for placing other fields on scientific grounds. While I disagree with these claims, they represent at most an overestimation of vintage science's interest in discarding theological questions, and an overestimation of that science's ability to function at all reliably.

  14. Prediction of Breakthrough Curves for Conservative and Reactive Transport from the Structural Parameters of Highly Heterogeneous Media

    NASA Astrophysics Data System (ADS)

    Hansen, S. K.; Haslauer, C. P.; Cirpka, O. A.; Vesselinov, V. V.

    2016-12-01

    It is desirable to predict the shape of breakthrough curves downgradient of a solute source from subsurface structural parameters (as in the small-perturbation macrodispersion theory) both for realistically heterogeneous fields, and at early time, before any sort of Fickian model is applicable. Using a combination of a priori knowledge, large-scale Monte Carlo simulation, and regression techniques, we have developed closed-form predictive expressions for pre- and post-Fickian flux-weighted solute breakthrough curves as a function of distance from the source (in integral scales) and variance of the log hydraulic conductivity field. Using the ensemble of Monte Carlo realizations, we have simultaneously computed error envelopes for the estimated flux-weighted breakthrough, and for the divergence of point breakthrough curves from the flux-weighted average, as functions of the predictive parameters. We have also obtained implied late-time macrodispersion coefficients for highly heterogeneous environments from the breakthrough statistics. This analysis is relevant for the modelling of reactive as well as conservative transport, since for many kinetic sorption and decay reactions, Laplace-domain modification of the breakthrough curve for conservative solute produces the correct curve for the reactive system.

  15. Method of radiometric quality assessment of NIR images acquired with a custom sensor mounted on an unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    Wierzbicki, Damian; Fryskowska, Anna; Kedzierski, Michal; Wojtkowska, Michalina; Delis, Paulina

    2018-01-01

    Unmanned aerial vehicles are suited to various photogrammetry and remote sensing missions. Such platforms are equipped with various optoelectronic sensors imaging in the visible and infrared spectral ranges, as well as thermal sensors. Nowadays, near-infrared (NIR) images acquired from low altitudes are often used for producing orthophoto maps for precision agriculture, among other applications. One major problem results from the application of low-cost, custom, compact NIR cameras with wide-angle lenses that introduce vignetting. In numerous cases, such cameras acquire images of low radiometric quality, depending on the lighting conditions. The paper presents a method of radiometric quality assessment of low-altitude NIR imagery data from a custom sensor. The method utilizes statistical analysis of the NIR images. The data used for the analyses were acquired from various altitudes in various weather and lighting conditions. An objective NIR imagery quality index was determined as a result of the research. The results obtained using this index enabled the classification of images into three categories: good, medium, and low radiometric quality. The classification makes it possible to determine the a priori error of the acquired images and assess whether a rerun of the photogrammetric flight is necessary.
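    A hypothetical statistics-based quality index with a three-class thresholding is sketched below; the chosen statistics, weights and thresholds are assumptions for demonstration and are not the index defined in the paper.

```python
import numpy as np

# Illustrative sketch of a statistics-based radiometric quality index for an
# NIR image, thresholded into three classes. The statistics chosen (contrast,
# signal-to-noise proxy, entropy), the weights and the class thresholds are
# assumptions for demonstration, not the index defined in the paper.

def quality_index(img):
    img = img.astype(float)
    contrast = img.std() / (img.mean() + 1e-9)             # relative contrast
    # Noise proxy: spread of differences between neighbouring pixels
    noise = np.diff(img, axis=1).std() / np.sqrt(2)
    snr = img.mean() / (noise + 1e-9)
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))         # bits, max 8
    # Weighted combination; weights are arbitrary illustrative choices
    return 0.4 * np.tanh(contrast) + 0.4 * np.tanh(snr / 50.0) + 0.2 * entropy / 8.0

def classify(q):
    if q > 0.6:
        return "good"
    if q > 0.35:
        return "medium"
    return "low"

rng = np.random.default_rng(7)
ramp = np.tile(np.linspace(0, 255, 512), (512, 1))           # structured scene
good_img = np.clip(ramp + rng.normal(0, 3, (512, 512)), 0, 255)
flat_img = np.clip(rng.normal(120, 8, (512, 512))
                   + rng.normal(0, 6, (512, 512)), 0, 255)   # flat, noisy scene

for name, img in [("structured/low-noise", good_img), ("flat/noisy", flat_img)]:
    q = quality_index(img)
    print(f"{name}: index={q:.2f} -> {classify(q)}")
```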

  16. Inversion of ground-motion data from a seismometer array for rotation using a modification of Jaeger's method

    USGS Publications Warehouse

    Chi, Wu-Cheng; Lee, W.H.K.; Aston, J.A.D.; Lin, C.J.; Liu, C.-C.

    2011-01-01

    We develop a new way to invert 2D translational waveforms using Jaeger's (1969) formula to derive rotational ground motions about one axis and estimate the errors in them using techniques from statistical multivariate analysis. This procedure can be used to derive rotational ground motions and strains using arrayed translational data, thus providing an efficient way to calibrate the performance of rotational sensors. This approach does not require a priori information about the noise level of the translational data and elastic properties of the media. This new procedure also provides estimates of the standard deviations of the derived rotations and strains. In this study, we validated this code using synthetic translational waveforms from a seismic array. The results after the inversion of the synthetics for rotations were almost identical with the results derived using a well-tested inversion procedure by Spudich and Fletcher (2009). This new 2D procedure can be applied three times to obtain the full, three-component rotations. Additional modifications can be implemented to the code in the future to study different features of the rotational ground motions and strains induced by the passage of seismic waves.

  17. Adaptive extended-state observer-based fault tolerant attitude control for spacecraft with reaction wheels

    NASA Astrophysics Data System (ADS)

    Ran, Dechao; Chen, Xiaoqian; de Ruiter, Anton; Xiao, Bing

    2018-04-01

    This study presents an adaptive second-order sliding control scheme to solve the attitude fault tolerant control problem of spacecraft subject to system uncertainties, external disturbances and reaction wheel faults. A novel fast terminal sliding mode is preliminarily designed to guarantee that finite-time convergence of the attitude errors can be achieved globally. Based on this novel sliding mode, an adaptive second-order observer is then designed to reconstruct the system uncertainties and the actuator faults. One feature of the proposed observer is that its design does not necessitate any a priori information on the upper bounds of the system uncertainties and the actuator faults. In view of the reconstructed information supplied by the designed observer, a second-order sliding mode controller is developed to accomplish attitude maneuvers with great robustness and precise tracking accuracy. Theoretical stability analysis proves that the designed fault tolerant control scheme can achieve finite-time stability of the closed-loop system, even in the presence of reaction wheel faults and system uncertainties. Numerical simulations are also presented to demonstrate the effectiveness and superiority of the proposed control scheme over existing methodologies.

  18. Reporting and methodological quality of meta-analyses in urological literature.

    PubMed

    Xia, Leilei; Xu, Jing; Guzzo, Thomas J

    2017-01-01

    To assess the overall quality of published urological meta-analyses and identify predictive factors for high quality. We systematically searched PubMed to identify meta-analyses published from January 1st, 2011 to December 31st, 2015 in 10 predetermined major paper-based urology journals. The characteristics of the included meta-analyses were collected, and their reporting and methodological qualities were assessed by the PRISMA checklist (27 items) and the AMSTAR tool (11 items), respectively. Descriptive statistics were used for individual items as a measure of overall compliance, and PRISMA and AMSTAR scores were calculated as the sum of adequately reported domains. Logistic regression was used to identify predictive factors for high quality. A total of 183 meta-analyses were included. The mean PRISMA and AMSTAR scores were 22.74 ± 2.04 and 7.57 ± 1.41, respectively. PRISMA item 5 (protocol and registration), items 15 and 22 (risk of bias across studies), and items 16 and 23 (additional analysis) had less than 50% adherence. AMSTAR item 1 ("a priori" design), item 5 (list of studies) and item 10 (publication bias) had less than 50% adherence. Logistic regression analyses showed that funding support and an "a priori" design were associated with superior reporting quality, while following the PRISMA guideline and an "a priori" design were associated with superior methodological quality. The reporting and methodological qualities of recently published meta-analyses in major paper-based urology journals are generally good. Further improvement could potentially be achieved by strictly adhering to the PRISMA guideline and having an "a priori" protocol.

  19. Sterilization of tumor-positive lymph nodes of esophageal cancer by neo-adjuvant treatment is associated with worse survival compared to tumor-negative lymph nodes treated with surgery first.

    PubMed

    Mantziari, Styliani; Allemann, Pierre; Winiker, Michael; Sempoux, Christine; Demartines, Nicolas; Schäfer, Markus

    2017-09-01

    Lymph node (LN) involvement by esophageal cancer is associated with compromised long-term prognosis. This study assessed whether patients with LN downstaged by neoadjuvant treatment (NAT) might have a survival benefit compared to patients with a priori negative LN. Patients undergoing esophagectomy for cancer between 2005 and 2014 were screened for inclusion. Group 1 included cN0 patients confirmed as pN0 who were treated with surgery first, whereas group 2 included patients initially cN+ and downstaged to ypN0 after NAT. Survival analysis was performed with the Kaplan-Meier and Cox regression methods. Fifty-seven patients were included in our study, 24 in group 1 and 33 in group 2. Group 2 patients had more locally advanced lesions compared to a priori LN-negative patients, and despite complete LN sterilization by NAT they still had worse long-term survival. Overall 3-year survival was 86.8% for a priori LN-negative versus 63.3% for downstaged patients (P = 0.013), while disease-free survival was 79.6% and 57.9%, respectively (P = 0.021). Tumor recurrence was also earlier and more disseminated in the downstaged group. Downstaged LN, despite the systemic effect of NAT, still carry an increased risk of early tumor recurrence and worse long-term survival compared to a priori negative LN. © 2017 Wiley Periodicals, Inc.

  20. A switched systems approach to image-based estimation

    NASA Astrophysics Data System (ADS)

    Parikh, Anup

    With the advent of technological improvements in imaging systems and computational resources, as well as the development of image-based reconstruction techniques, it is necessary to understand algorithm performance when subject to real-world conditions. Specifically, this dissertation focuses on the stability and performance of a class of image-based observers in the presence of intermittent measurements, caused by, e.g., occlusions, limited FOV, feature tracking losses, communication losses, or finite frame rates. Observers or filters that are exponentially stable under persistent observability may have unbounded error growth during intermittent sensing, even while providing seemingly accurate state estimates. In Chapter 3, dwell time conditions are developed to guarantee state estimation error convergence to an ultimate bound for a class of observers while undergoing measurement loss. Bounds are developed on the unstable growth of the estimation errors during the periods when the object being tracked is not visible. A Lyapunov-based analysis for the switched system is performed to develop an inequality in terms of the duration of time the observer can view the moving object and the duration of time the object is out of the field of view. In Chapter 4, a motion model is used to predict the evolution of the states of the system while the object is not visible. This reduces the growth rate of the bounding function to an exponential and enables the use of traditional switched systems Lyapunov analysis techniques. The stability analysis results in an average dwell time condition to guarantee state error convergence with a known decay rate. In comparison with the results in Chapter 3, the estimation errors converge to zero rather than to a ball, with relaxed switching conditions, at the cost of requiring additional information about the motion of the feature. In some applications, a motion model of the object may not be available. Numerous adaptive techniques have been developed to compensate for unknown parameters or functions in system dynamics; however, persistent excitation (PE) conditions are typically required to ensure parameter convergence, i.e., learning. Since the motion model is needed in the predictor, model learning is desired; however, PE is difficult to ensure a priori and infeasible to check online for nonlinear systems. Concurrent learning (CL) techniques have been developed to use recorded data and a relaxed excitation condition to ensure convergence. In CL, excitation is only required for a finite period of time, and the recorded data can be checked to determine if it is sufficiently rich. However, traditional CL requires knowledge of state derivatives, which are typically not measured and require extensive filter design and tuning to develop satisfactory estimates. In Chapter 5 of this dissertation, a novel formulation of CL is developed in terms of an integral (ICL), removing the need to estimate state derivatives while preserving parameter convergence properties. Using ICL, an estimator is developed in Chapter 6 for simultaneously estimating the pose of an object as well as learning a model of its motion for use in a predictor when the object is not visible. A switched systems analysis is provided to demonstrate the stability of the estimation and prediction with learning scheme. Dwell time conditions as well as excitation conditions are developed to ensure estimation errors converge to an arbitrarily small bound.
Experimental results are provided to illustrate the performance of each of the developed estimation schemes. The dissertation concludes with a discussion of the contributions and limitations of the developed techniques, as well as avenues for future extensions.
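
    To make the integral concurrent learning (ICL) idea above concrete, the following is a minimal sketch assuming scalar, linearly parameterized dynamics x_dot = Y(x, t) theta with full state measurement; the dynamics, gains, window length and history-stack size are illustrative choices, not values taken from the dissertation.

```python
import numpy as np

# Illustrative plant: x_dot = Y(x, t) @ theta, with regressor Y = [x, sin(t)]
theta_true = np.array([-1.5, 2.0])
Y = lambda x, t: np.array([x, np.sin(t)])

dt, T = 1e-3, 10.0
x, t = 1.0, 0.0
delta = 0.5                      # integration window used by ICL
win = int(delta / dt)
Y_hist, x_hist = [], [x]

stack_Y, stack_U = [], []        # history stack of recorded integrals
max_stack = 20
theta_hat = np.zeros(2)
gamma = 5.0                      # adaptation gain

for k in range(int(T / dt)):
    Yk = Y(x, t)
    x += dt * float(Yk @ theta_true)      # simulate the plant (Euler)
    t += dt
    Y_hist.append(Yk)
    x_hist.append(x)

    # One ICL data point per full window:
    #   x(t) - x(t - delta) = ( integral of Y over the window ) @ theta
    if len(Y_hist) >= win and (k + 1) % win == 0 and len(stack_Y) < max_stack:
        stack_Y.append(dt * np.sum(Y_hist[-win:], axis=0))   # integrated regressor
        stack_U.append(x_hist[-1] - x_hist[-1 - win])        # state difference

    # Concurrent-learning update driven only by the recorded integrals
    # (no state derivative is ever needed).
    if stack_Y:
        err = np.array(stack_U) - np.array(stack_Y) @ theta_hat
        theta_hat += dt * gamma * (np.array(stack_Y).T @ err)

print("true theta:", theta_true, " estimate:", theta_hat)
```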

  1. Information content and sensitivity of the 3β + 2α lidar measurement system for aerosol microphysical retrievals

    NASA Astrophysics Data System (ADS)

    Burton, Sharon P.; Chemyakin, Eduard; Liu, Xu; Knobelspiesse, Kirk; Stamnes, Snorre; Sawamura, Patricia; Moore, Richard H.; Hostetler, Chris A.; Ferrare, Richard A.

    2016-11-01

    There is considerable interest in retrieving profiles of aerosol effective radius, total number concentration, and complex refractive index from lidar measurements of extinction and backscatter at several wavelengths. The combination of three backscatter channels plus two extinction channels (3β + 2α) is particularly important since it is believed to be the minimum configuration necessary for the retrieval of aerosol microphysical properties and because the technological readiness of lidar systems permits this configuration on both airborne and future spaceborne instruments. The second-generation NASA Langley airborne High Spectral Resolution Lidar (HSRL-2) has been making 3β + 2α measurements since 2012. The planned NASA Aerosol/Clouds/Ecosystems (ACE) satellite mission also recommends the 3β + 2α combination. Here we develop a deeper understanding of the information content and sensitivities of the 3β + 2α system in terms of aerosol microphysical parameters of interest. We use a retrieval-free methodology to determine the basic sensitivities of the measurements independent of retrieval assumptions and constraints. We calculate information content and uncertainty metrics using tools borrowed from the optimal estimation methodology based on Bayes' theorem, using a simplified forward model look-up table, with no explicit inversion. The forward model is simplified to represent spherical particles, monomodal log-normal size distributions, and wavelength-independent refractive indices. Since we only use the forward model with no retrieval, the given simplified aerosol scenario is applicable as a best case for all existing retrievals in the absence of additional constraints. Retrieval-dependent errors due to mismatch between retrieval assumptions and true atmospheric aerosols are not included in this sensitivity study, and neither are retrieval errors that may be introduced in the inversion process. The choice of a simplified model adds clarity to the understanding of the uncertainties in such retrievals, since it allows for separately assessing the sensitivities and uncertainties of the measurements alone that cannot be corrected by any potential or theoretical improvements to retrieval methodology but must instead be addressed by adding information content. The sensitivity metrics allow for identifying (1) information content of the measurements vs. a priori information; (2) error bars on the retrieved parameters; and (3) potential sources of cross-talk or "compensating" errors wherein different retrieval parameters are not independently captured by the measurements. The results suggest that the 3β + 2α measurement system is underdetermined with respect to the full suite of microphysical parameters considered in this study and that additional information is required, in the form of additional coincident measurements (e.g., sun-photometer or polarimeter) or a priori retrieval constraints. A specific recommendation is given for addressing cross-talk between effective radius and total number concentration.
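
    The retrieval-free metrics described above follow standard optimal-estimation (Bayesian) algebra; a compact sketch is given below, where the Jacobian, a priori covariance and measurement-noise covariance are arbitrary placeholders rather than entries of the paper's look-up table.

```python
import numpy as np

# Hypothetical Jacobian K (5 measurements x 3 state parameters), a priori
# covariance S_a and measurement-noise covariance S_e; values are placeholders.
rng = np.random.default_rng(0)
K = rng.normal(size=(5, 3))                  # d(measurement)/d(state)
S_a = np.diag([1.0, 0.5, 0.25]) ** 2         # a priori covariance
S_e = (0.05 ** 2) * np.eye(5)                # measurement-error covariance

# Posterior covariance and averaging kernel (optimal estimation / Bayes)
S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
A = S_hat @ K.T @ np.linalg.inv(S_e) @ K

dofs = np.trace(A)                       # degrees of freedom for signal
post_sigma = np.sqrt(np.diag(S_hat))     # error bars on retrieved parameters
# Off-diagonal elements of A indicate cross-talk / compensating errors.
print("DOFS:", dofs)
print("posterior sigmas:", post_sigma)
print("averaging kernel:\n", np.round(A, 2))
```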

  2. Practical interior tomography with radial Hilbert filtering and a priori knowledge in a small round area.

    PubMed

    Tang, Shaojie; Yang, Yi; Tang, Xiangyang

    2012-01-01

    The interior tomography problem can be solved using the so-called differentiated backprojection-projection onto convex sets (DBP-POCS) method, which requires a priori knowledge within a small area interior to the region of interest (ROI) to be imaged. In theory, the small area wherein the a priori knowledge is required can be in any shape, but most of the existing implementations carry out the Hilbert filtering either horizontally or vertically, leading to a vertical or horizontal strip that may extend across a large area of the object. In this work, we implement a practical DBP-POCS method with radial Hilbert filtering, and thus the small area with the a priori knowledge can be roughly round (e.g., a sinus or ventricles among other anatomic cavities in human or animal body). We also conduct an experimental evaluation to verify the performance of this practical implementation. We specifically re-derive the reconstruction formula in the DBP-POCS fashion with radial Hilbert filtering to assure that only a small round area with the a priori knowledge is needed (referred to as the radial DBP-POCS method henceforth). The performance of the practical DBP-POCS method with radial Hilbert filtering and a priori knowledge in a small round area is evaluated with projection data of the standard and modified Shepp-Logan phantoms simulated by computer, followed by a verification using real projection data acquired by a computed tomography (CT) scanner. The preliminary performance study shows that, if a priori knowledge in a small round area is available, the radial DBP-POCS method can solve the interior tomography problem in a more practical way at high accuracy. In comparison to implementations of the DBP-POCS method demanding the a priori knowledge in a horizontal or vertical strip, the radial DBP-POCS method requires the a priori knowledge within a small round area only. Such a relaxed requirement on the availability of a priori knowledge can be readily met in practice, because a variety of small round areas (e.g., air-filled sinuses or fluid-filled ventricles among other anatomic cavities) exist in the human or animal body. Therefore, the radial DBP-POCS method with a priori knowledge in a small round area is more feasible in clinical and preclinical practice.

  3. The Advantages of Using Planned Comparisons over Post Hoc Tests.

    ERIC Educational Resources Information Center

    Kuehne, Carolyn C.

    There are advantages to using a priori or planned comparisons rather than omnibus multivariate analysis of variance (MANOVA) tests followed by post hoc or a posteriori testing. A small heuristic data set is used to illustrate these advantages. An omnibus MANOVA test was performed on the data followed by a post hoc test (discriminant analysis). A…

  4. Benefits of Using Planned Comparisons Rather Than Post Hoc Tests: A Brief Review with Examples.

    ERIC Educational Resources Information Center

    DuRapau, Theresa M.

    The rationale behind analysis of variance (including analysis of covariance and multiple analyses of variance and covariance) methods is reviewed, and unplanned and planned methods of evaluating differences between means are briefly described. Two advantages of using planned or a priori tests over unplanned or post hoc tests are presented. In…

  5. Application of Parallel Adjoint-Based Error Estimation and Anisotropic Grid Adaptation for Three-Dimensional Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Park, M. A.; Jones, W. T.; Hammond, D. P.; Nielsen, E. J.

    2005-01-01

    This paper demonstrates the extension of error estimation and adaptation methods to parallel computations enabling larger, more realistic aerospace applications and the quantification of discretization errors for complex 3-D solutions. Results were shown for an inviscid sonic-boom prediction about a double-cone configuration and a wing/body segmented leading edge (SLE) configuration where the output function of the adjoint was pressure integrated over a part of the cylinder in the near field. After multiple cycles of error estimation and surface/field adaptation, a significant improvement in the inviscid solution for the sonic boom signature of the double cone was observed. Although the double-cone adaptation was initiated from a very coarse mesh, the near-field pressure signature from the final adapted mesh compared very well with the wind-tunnel data which illustrates that the adjoint-based error estimation and adaptation process requires no a priori refinement of the mesh. Similarly, the near-field pressure signature for the SLE wing/body sonic boom configuration showed a significant improvement from the initial coarse mesh to the final adapted mesh in comparison with the wind tunnel results. Error estimation and field adaptation results were also presented for the viscous transonic drag prediction of the DLR-F6 wing/body configuration, and results were compared to a series of globally refined meshes. Two of these globally refined meshes were used as a starting point for the error estimation and field-adaptation process where the output function for the adjoint was the total drag. The field-adapted results showed an improvement in the prediction of the drag in comparison with the finest globally refined mesh and a reduction in the estimate of the remaining drag error. The adjoint-based adaptation parameter showed a need for increased resolution in the surface of the wing/body as well as a need for wake resolution downstream of the fuselage and wing trailing edge in order to achieve the requested drag tolerance. Although further adaptation was required to meet the requested tolerance, no further cycles were computed in order to avoid large discrepancies between the surface mesh spacing and the refined field spacing.
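
    For reference, the adjoint-weighted residual relation that underlies such output-based error estimates can be written, up to sign convention and in generic notation (not the paper's exact discrete operators), as

```latex
\delta J \;=\; J(u_h) - J(u_H) \;\approx\; -\,\psi_h^{\mathsf{T}}\, R_h\!\left(u_H^{h}\right),
```

    where u_H^h is the coarse solution prolongated to the finer space, R_h the fine-space residual operator, and psi_h the discrete adjoint associated with the output J (the near-field pressure integral or the total drag in the cases above); the local contributions to this product drive the anisotropic adaptation indicator.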

  6. Self-organizing radial basis function networks for adaptive flight control and aircraft engine state estimation

    NASA Astrophysics Data System (ADS)

    Shankar, Praveen

    The performance of nonlinear control algorithms such as feedback linearization and dynamic inversion is heavily dependent on the fidelity of the dynamic model being inverted. Incomplete or incorrect knowledge of the dynamics results in reduced performance and may lead to instability. Augmenting the baseline controller with approximators which utilize a parametrization structure that is adapted online reduces the effect of this error between the design model and actual dynamics. However, currently existing parameterizations employ a fixed set of basis functions that do not guarantee arbitrary tracking error performance. To address this problem, we develop a self-organizing parametrization structure that is proven to be stable and can guarantee arbitrary tracking error performance. The training algorithm to grow the network and adapt the parameters is derived from Lyapunov theory. In addition to growing the network of basis functions, a pruning strategy is incorporated to keep the size of the network as small as possible. This algorithm is implemented on a high-performance flight vehicle such as the F-15 military aircraft. The baseline dynamic inversion controller is augmented with a Self-Organizing Radial Basis Function Network (SORBFN) to minimize the effect of the inversion error which may occur due to imperfect modeling, approximate inversion or sudden changes in aircraft dynamics. The dynamic inversion controller is simulated for different situations including control surface failures, modeling errors and external disturbances with and without the adaptive network. A performance measure of maximum tracking error is specified for both controllers a priori. Excellent tracking error minimization to a pre-specified level using the adaptive approximation based controller was achieved while the baseline dynamic inversion controller failed to meet this performance specification. The performance of the SORBFN based controller is also compared to a fixed RBF network based adaptive controller. While the fixed RBF network based controller, which is tuned to compensate for control surface failures, fails to achieve the same performance under modeling uncertainty and disturbances, the SORBFN is able to achieve good tracking convergence under all error conditions.

  7. Effect of Thin Cirrus Clouds on Dust Optical Depth Retrievals From MODIS Observations

    NASA Technical Reports Server (NTRS)

    Feng, Qian; Hsu, N. Christina; Yang, Ping; Tsay, Si-Chee

    2011-01-01

    The effect of thin cirrus clouds in retrieving the dust optical depth from MODIS observations is investigated by using a simplified aerosol retrieval algorithm based on the principles of the Deep Blue aerosol property retrieval method. Specifically, the errors of the retrieved dust optical depth due to thin cirrus contamination are quantified through the comparison of two retrievals by assuming dust-only atmospheres and the counterparts with overlapping mineral dust and thin cirrus clouds. To account for the effect of the polarization state of the radiation field on radiance simulation, a vector radiative transfer model is used to generate the lookup tables. In the forward radiative transfer simulations involved in generating the lookup tables, the Rayleigh scattering by atmospheric gaseous molecules and the reflection of the surface, assumed to be Lambertian, are fully taken into account. Additionally, the spheroid model is utilized to account for the nonsphericity of dust particles in computing their optical properties. For simplicity, the single-scattering albedo, scattering phase matrix, and optical depth are specified a priori for thin cirrus clouds assumed to consist of droxtal ice crystals. The present results indicate that the errors in the retrieved dust optical depths due to the contamination of thin cirrus clouds depend on the scattering angle, underlying surface reflectance, and dust optical depth. Under heavy dusty conditions, the absolute errors are comparable to the prescribed optical depths of thin cirrus clouds.

  8. Precipitation and Diabatic Heating Distributions from TRMM/GPM

    NASA Astrophysics Data System (ADS)

    Olson, W. S.; Grecu, M.; Wu, D.; Tao, W. K.; L'Ecuyer, T.; Jiang, X.

    2016-12-01

    The initial focus of our research effort was the development of a physically-based methodology for estimating 3D precipitation distributions from a combination of spaceborne radar and passive microwave radiometer observations. This estimation methodology was originally developed for applications to Global Precipitation Measurement (GPM) mission sensor data, but it has recently been adapted to Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar and Microwave Imager observations. Precipitation distributions derived from the TRMM sensors are interpreted using cloud-system resolving model simulations to infer atmospheric latent+eddy heating (Q1-QR) distributions in the tropics and subtropics. Further, the estimates of Q1-QR are combined with estimates of radiative heating (QR), derived from TRMM Microwave Imager and Visible and Infrared Scanner data as well as environmental properties from NCEP reanalyses, to yield estimates of the large-scale total diabatic heating (Q1). A thirteen-year database of precipitation and diabatic heating is constructed using TRMM observations from 1998-2010 as part of NASA's Energy and Water cycle Study program. State-dependent errors in precipitation and heating products are evaluated by propagating the potential errors of a priori modeling assumptions through the estimation method framework. Knowledge of these errors is critical for determining the "closure" of global water and energy budgets. Applications of the precipitation/heating products to climate studies will be presented at the conference.

  9. Lower-tropospheric CO2 from near-infrared ACOS-GOSAT observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulawik, Susan S.; O'Dell, Chris; Payne, Vivienne H.

    We present two new products from near-infrared Greenhouse Gases Observing Satellite (GOSAT) observations: lowermost tropospheric (LMT, from 0 to 2.5 km) and upper tropospheric–stratospheric (U, above 2.5 km) carbon dioxide partial column mixing ratios. We compare these new products to aircraft profiles and remote surface flask measurements and find that the seasonal and year-to-year variations in the new partial column mixing ratios significantly improve upon the Atmospheric CO2 Observations from Space (ACOS) and GOSAT (ACOS-GOSAT) initial guess and/or a priori, with distinct patterns in the LMT and U seasonal cycles that match validation data. For land monthly averages, we find errors of 1.9, 0.7, and 0.8 ppm for retrieved GOSAT LMT, U, and XCO2; for ocean monthly averages, we find errors of 0.7, 0.5, and 0.5 ppm for retrieved GOSAT LMT, U, and XCO2. In the southern hemispheric biomass burning season, the new partial columns show similar patterns to MODIS fire maps and MOPITT multispectral CO for both vertical levels, despite a flat ACOS-GOSAT prior, and a CO–CO2 emission factor comparable to published values. The difference of LMT and U, useful for evaluation of model transport error, has also been validated with a monthly average error of 0.8 (1.4) ppm for ocean (land). LMT is more locally influenced than U, meaning that local fluxes can now be better separated from CO2 transported from far away.

  10. Lower-tropospheric CO2 from near-infrared ACOS-GOSAT observations

    DOE PAGES

    Kulawik, Susan S.; O'Dell, Chris; Payne, Vivienne H.; ...

    2017-04-27

    We present two new products from near-infrared Greenhouse Gases Observing Satellite (GOSAT) observations: lowermost tropospheric (LMT, from 0 to 2.5 km) and upper tropospheric–stratospheric (U, above 2.5 km) carbon dioxide partial column mixing ratios. We compare these new products to aircraft profiles and remote surface flask measurements and find that the seasonal and year-to-year variations in the new partial column mixing ratios significantly improve upon the Atmospheric CO2 Observations from Space (ACOS) and GOSAT (ACOS-GOSAT) initial guess and/or a priori, with distinct patterns in the LMT and U seasonal cycles that match validation data. For land monthly averages, we find errors of 1.9, 0.7, and 0.8 ppm for retrieved GOSAT LMT, U, and XCO2; for ocean monthly averages, we find errors of 0.7, 0.5, and 0.5 ppm for retrieved GOSAT LMT, U, and XCO2. In the southern hemispheric biomass burning season, the new partial columns show similar patterns to MODIS fire maps and MOPITT multispectral CO for both vertical levels, despite a flat ACOS-GOSAT prior, and a CO–CO2 emission factor comparable to published values. The difference of LMT and U, useful for evaluation of model transport error, has also been validated with a monthly average error of 0.8 (1.4) ppm for ocean (land). LMT is more locally influenced than U, meaning that local fluxes can now be better separated from CO2 transported from far away.

  11. An operational retrieval algorithm for determining aerosol optical properties in the ultraviolet

    NASA Astrophysics Data System (ADS)

    Taylor, Thomas E.; L'Ecuyer, Tristan S.; Slusser, James R.; Stephens, Graeme L.; Goering, Christian D.

    2008-02-01

    This paper describes a number of practical considerations concerning the optimization and operational implementation of an algorithm used to characterize the optical properties of aerosols across part of the ultraviolet (UV) spectrum. The algorithm estimates values of aerosol optical depth (AOD) and aerosol single scattering albedo (SSA) at seven wavelengths in the UV, as well as total column ozone (TOC) and wavelength-independent asymmetry factor (g) using direct and diffuse irradiances measured with a UV multifilter rotating shadowband radiometer (UV-MFRSR). A novel method for cloud screening the irradiance data set is introduced, as well as several improvements and optimizations to the retrieval scheme which yield a more realistic physical model for the inversion and increase the efficiency of the algorithm. Introduction of a wavelength-dependent retrieval error budget generated from rigorous forward model analysis as well as broadened covariances on the a priori values of AOD, SSA and g and tightened covariances of TOC allows sufficient retrieval sensitivity and resolution to obtain unique solutions of aerosol optical properties as demonstrated by synthetic retrievals. Analysis of a cloud screened data set (May 2003) from Panther Junction, Texas, demonstrates that the algorithm produces realistic values of the optical properties that compare favorably with pseudo-independent methods for AOD, TOC and calculated Ångstrom exponents. Retrieval errors of all parameters (except TOC) are shown to be negatively correlated to AOD, while the Shannon information content is positively correlated, indicating that retrieval skill improves with increasing atmospheric turbidity. When implemented operationally on more than thirty instruments in the Ultraviolet Monitoring and Research Program's (UVMRP) network, this retrieval algorithm will provide a comprehensive and internally consistent climatology of ground-based aerosol properties in the UV spectral range that can be used for both validation of satellite measurements as well as regional aerosol and ultraviolet transmission studies.

  12. Evaluating a Priori Ozone Profile Information Used in TEMPO (Tropospheric Emissions: Monitoring of Pollution) Tropospheric Ozone Retrievals

    NASA Technical Reports Server (NTRS)

    Johnson, Matthew Stephen

    2017-01-01

    A primary objective for TOLNet is the evaluation and validation of space-based tropospheric O3 retrievals from future systems such as the Tropospheric Emissions: Monitoring of Pollution (TEMPO) satellite. This study is designed to evaluate the tropopause-based O3 climatology (TB-Clim) dataset which will be used as the a priori profile information in TEMPO O3 retrievals. This study also evaluates model simulated O3 profiles, which could potentially serve as a priori O3 profile information in TEMPO retrievals, from near-real-time (NRT) data assimilation model products (NASA Global Modeling and Assimilation Office (GMAO) Goddard Earth Observing System (GEOS-5) Forward Processing (FP) and Modern-Era Retrospective analysis for Research and Applications version 2 (MERRA2)) and full chemical transport model (CTM), GEOS-Chem, simulations. The TB-Clim dataset and model products are evaluated with surface (0-2 km) and tropospheric (0-10 km) TOLNet observations to demonstrate the accuracy of the suggested a priori dataset and information which could potentially be used in TEMPO O3 algorithms. This study also presents the impact of individual a priori profile sources on the accuracy of theoretical TEMPO O3 retrievals in the troposphere and at the surface. Preliminary results indicate that while the TB-Clim climatological dataset can replicate seasonally-averaged tropospheric O3 profiles observed by TOLNet, model-simulated profiles from a full CTM (GEOS-Chem is used as a proxy for CTM O3 predictions) resulted in more accurate tropospheric and surface-level O3 retrievals from TEMPO when compared to hourly (diurnal cycle evaluation) and daily-averaged (daily variability evaluation) TOLNet observations. Furthermore, it was determined that when large daily-averaged surface O3 mixing ratios are observed (65 ppb), which are important for air quality purposes, TEMPO retrieval values at the surface display higher correlations and less bias when applying CTM a priori profile information compared to all other data products. The primary reason for this is that CTM predictions better capture the spatio-temporal variability of the vertical profiles of observed tropospheric O3 compared to the TB-Clim dataset and other NRT data assimilation models evaluated during this study.

  13. Time-lapse Inversion of Electrical Resistivity Data

    NASA Astrophysics Data System (ADS)

    Nguyen, F.; Kemna, A.

    2005-12-01

    Time-lapse geophysical measurements (also known as monitoring, repeat or multi-frame surveys) now play a critical role for monitoring, non-destructively, changes induced by human activity, such as reservoir compaction, or for studying natural processes, such as flow and transport in porous media. To invert such data sets into time-varying subsurface properties, several strategies are found in different engineering or scientific fields (e.g., in biomedical, process tomography, or geophysical applications). Indeed, for time-lapse surveys, the data sets and the models at each time frame are closely related to their "neighbors", provided the process does not induce chaotic or very large variations. Therefore, the information contained in the different frames can be used for constraining the inversion in the others. A first strategy consists in imposing constraints on the model based on prior estimation, a priori spatiotemporal or temporal behavior (arbitrary or based on a law describing the monitored process), restriction of changes in certain areas, or data-change reproducibility. A second strategy aims to invert the model changes directly, where the objective function penalizes those models whose spatial, temporal, or spatiotemporal behavior differs from a prior assumption or from a computed a priori model. Clearly, the incorporation of time-lapse a priori information, determined from data sets or assumed, in the inversion process has been shown to significantly improve the resolving capability, mainly by removing artifacts. However, there is a lack of comparison of these methods. In this paper, we focus on Tikhonov-like inversion approaches for electrical tomography imaging to evaluate the capability of the different existing strategies, and to propose new ones. To evaluate the bias inevitably introduced by time-lapse regularization, we quantified the relative contribution of the different approaches to the resolving power of the method. Furthermore, we incorporated different noise levels and types (random and/or systematic) to determine the strategies' ability to cope with real data. Introducing additional regularization terms also yields more regularization parameters to compute. Since this is a difficult and computationally costly task, we propose that these parameters be set proportional to the velocity of the process. To achieve these objectives, we tested the different methods using synthetic models and experimental data, taking noise and error propagation into account. Our study shows that the choice of the inversion strategy highly depends on the nature and magnitude of the noise, whereas the choice of the regularization term strongly influences the resulting image according to the a priori assumption. This study was developed under the scope of the European project ALERT (GOCE-CT-2004-505329).
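
    In generic notation (ours, not the authors'), the two regularization strategies discussed above amount, for a linearized problem with data d_t and model m_t at frame t, to minimizing an objective of the form

```latex
\Phi(m_t) \;=\; \lVert W_d\,(d_t - G\,m_t)\rVert^{2}
\;+\; \lambda\,\lVert W_m\,(m_t - m_{\mathrm{ref}})\rVert^{2}
\;+\; \alpha\,\lVert m_t - m_{t-1}\rVert^{2},
```

    where the last term enforces temporal closeness between successive frames; the second strategy instead inverts the change Delta m_t = m_t - m_{t-1} directly against the data difference under analogous penalties, and the proposal in the text amounts to tying the temporal weight alpha (and the number of such parameters) to the velocity of the monitored process.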

  14. Bus lane with intermittent priority (BLIMP) concept simulation analysis final report : November 2009.

    DOT National Transportation Integrated Search

    2009-11-01

    The Lane Transit District, in cooperation with the National Bus Rapid Transit Institute (NBRTI) at the University of South Florida, completed a preliminary implementation study to determine the potential impacts of a new and innovative transit priori...

  15. A priori and a posteriori analysis of the flow around a rectangular cylinder

    NASA Astrophysics Data System (ADS)

    Cimarelli, A.; Leonforte, A.; Franciolini, M.; De Angelis, E.; Angeli, D.; Crivellini, A.

    2017-11-01

    The definition of a correct mesh resolution and modelling approach for the Large Eddy Simulation (LES) of the flow around a rectangular cylinder is recognized to be a rather elusive problem, as shown by the large scatter of LES results present in the literature. In the present work, we address this issue by performing an a priori analysis of Direct Numerical Simulation (DNS) data of the flow. This approach allows us to measure the ability of the LES field to reproduce the main flow features as a function of the resolution employed. Based on these results, we define a mesh resolution which balances the competing needs of reducing the computational cost and adequately resolving the flow dynamics. The effectiveness of the proposed resolution method is then verified by means of an a posteriori analysis of actual LES data obtained by means of the implicit LES approach given by the numerical properties of the Discontinuous Galerkin spatial discretization technique. The present work represents a first step towards a best practice for LES of separating and reattaching flows.

  16. A meta-analysis of country differences in the high-performance work system-business performance relationship: the roles of national culture and managerial discretion.

    PubMed

    Rabl, Tanja; Jayasinghe, Mevan; Gerhart, Barry; Kühlmann, Torsten M

    2014-11-01

    Our article develops a conceptual framework based primarily on national culture perspectives but also incorporating the role of managerial discretion (cultural tightness-looseness, institutional flexibility), which is aimed at achieving a better understanding of how the effectiveness of high-performance work systems (HPWSs) may vary across countries. Based on a meta-analysis of 156 HPWS-business performance effect sizes from 35,767 firms and establishments in 29 countries, we found that the mean HPWS-business performance effect size was positive overall (corrected r = .28) and positive in each country, regardless of its national culture or degree of institutional flexibility. In the case of national culture, the HPWS-business performance relationship was, on average, actually more strongly positive in countries where the degree of a priori hypothesized consistency or fit between an HPWS and national culture (according to national culture perspectives) was lower, except in the case of tight national cultures, where greater a priori fit of an HPWS with national culture was associated with a more positive HPWS-business performance effect size. However, in loose cultures (and in cultures that were neither tight nor loose), less a priori hypothesized consistency between an HPWS and national culture was associated with higher HPWS effectiveness. As such, our findings suggest the importance of not only national culture but also managerial discretion in understanding the HPWS-business performance relationship. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  17. Sparse reconstruction for quantitative bioluminescence tomography based on the incomplete variables truncated conjugate gradient method.

    PubMed

    He, Xiaowei; Liang, Jimin; Wang, Xiaorui; Yu, Jingjing; Qu, Xiaochao; Wang, Xiaodong; Hou, Yanbin; Chen, Duofang; Liu, Fang; Tian, Jie

    2010-11-22

    In this paper, we present an incomplete variables truncated conjugate gradient (IVTCG) method for bioluminescence tomography (BLT). Considering the sparse characteristic of the light source and the insufficient surface measurements in BLT scenarios, we combine a sparseness-inducing (ℓ1 norm) regularization term with a quadratic error term in the IVTCG-based framework for solving the inverse problem. By limiting the number of variables updated at each iteration and combining a variable splitting strategy to find the search direction more efficiently, the method obtains fast and stable source reconstruction, even without a priori information on the permissible source region and multispectral measurements. Numerical experiments on a mouse atlas validate the effectiveness of the method. In vivo mouse experimental results further indicate its potential for a practical BLT system.
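
    The role of the ℓ1 term can be illustrated on a generic sparse-recovery toy problem; the solver below is a plain iterative soft-thresholding (ISTA) loop used only as a stand-in for the IVTCG scheme, and all dimensions and parameters are arbitrary.

```python
import numpy as np

# Toy underdetermined system A x = b with a sparse source vector x_true
rng = np.random.default_rng(1)
m, n = 40, 200                        # few surface measurements, many source voxels
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=5, replace=False)] = rng.uniform(1.0, 2.0, size=5)
b = A @ x_true + 0.01 * rng.normal(size=m)

# l1-regularized least squares: min 0.5*||Ax - b||^2 + lam*||x||_1
lam = 0.05
L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):                  # iterative soft-thresholding (ISTA)
    grad = A.T @ (A @ x - b)
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print("recovered support:", np.flatnonzero(x > 0.1))
print("true support:     ", np.flatnonzero(x_true > 0))
```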

  18. Using Electroencephalography for Treatment Guidance in Major Depressive Disorder.

    PubMed

    Wade, Elizabeth C; Iosifescu, Dan V

    2016-09-01

    Given the high prevalence of treatment-resistant depression and the long delays in finding effective treatments via trial and error, valid biomarkers of treatment outcome with the ability to guide treatment selection represent one of the most important unmet needs in mood disorders. A large body of research has investigated, for this purpose, biomarkers derived from electroencephalography (EEG), using resting state EEG or evoked potentials. Most studies have focused on specific EEG features (or combinations thereof), whereas more recently machine-learning approaches have been used to define the EEG features with the best predictive abilities without a priori hypotheses. While reviewing these different approaches, we have focused on the predictor characteristics and the quality of the supporting evidence. Copyright © 2016 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  19. A Flexible and Efficient Method for Solving Ill-Posed Linear Integral Equations of the First Kind for Noisy Data

    NASA Astrophysics Data System (ADS)

    Antokhin, I. I.

    2017-06-01

    We propose an efficient and flexible method for solving Fredholm and Abel integral equations of the first kind, frequently appearing in astrophysics. These equations present an ill-posed problem. Our method is based on solving them on a so-called compact set of functions and/or using Tikhonov's regularization. Both approaches are non-parametric and do not require any theoretic model, apart from some very loose a priori constraints on the unknown function. The two approaches can be used independently or in a combination. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact one, as the errors of input data tend to zero. Simulated and astrophysical examples are presented.
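
    A small numerical illustration of the first-kind Fredholm setting with Tikhonov regularization follows (the paper's compact-set constraint is not reproduced); the kernel, grid and regularization parameter are arbitrary choices for demonstration.

```python
import numpy as np

# Discretize a first-kind Fredholm equation  g(s) = ∫ K(s, t) f(t) dt  on [0, 1]
n = 100
t = np.linspace(0.0, 1.0, n)
h = t[1] - t[0]
K = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * 0.05 ** 2)) * h   # smoothing kernel

f_true = np.exp(-((t - 0.4) / 0.1) ** 2) + 0.5 * np.exp(-((t - 0.75) / 0.05) ** 2)
g_noisy = K @ f_true + 0.01 * np.random.default_rng(2).normal(size=n)

# Naive inversion amplifies the noise (ill-posedness); Tikhonov damps it:
#   f_alpha = argmin ||K f - g||^2 + alpha * ||f||^2
alpha = 1e-3
f_naive = np.linalg.solve(K, g_noisy)
f_tik = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ g_noisy)

print("max |f_naive| :", np.abs(f_naive).max())   # blows up
print("max |f_tik|   :", np.abs(f_tik).max())     # stays near the true amplitude
```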

  20. Adaptive near-optimal neuro controller for continuous-time nonaffine nonlinear systems with constrained input.

    PubMed

    Esfandiari, Kasra; Abdollahi, Farzaneh; Talebi, Heidar Ali

    2017-09-01

    In this paper, an identifier-critic structure is introduced to find an online near-optimal controller for continuous-time nonaffine nonlinear systems having saturated control signal. By employing two Neural Networks (NNs), the solution of Hamilton-Jacobi-Bellman (HJB) equation associated with the cost function is derived without requiring a priori knowledge about system dynamics. Weights of the identifier and critic NNs are tuned online and simultaneously such that unknown terms are approximated accurately and the control signal is kept between the saturation bounds. The convergence of NNs' weights, identification error, and system states is guaranteed using Lyapunov's direct method. Finally, simulation results are performed on two nonlinear systems to confirm the effectiveness of the proposed control strategy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Twenty First Century Cyberbullying Defined: An Analysis of Intent, Repetition and Emotional Response

    ERIC Educational Resources Information Center

    Walker, Carol Marie

    2012-01-01

    The purpose of this study was to analyze the extent and impact that cyberbullying has on the undergraduate college student and provide a current definition for the event. A priori power analysis guided this research to provide an 80 percent probability of detecting a real effect with medium effect size. Adequate research power was essential to…
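
    An a priori power analysis of the kind mentioned can be reproduced in spirit as follows, assuming an independent-samples t test, a medium effect size (Cohen's d = 0.5) and alpha = 0.05; the study's actual design and test may differ.

```python
from statsmodels.stats.power import TTestIndPower

# A priori power analysis: required n per group for 80% power,
# medium effect size (Cohen's d = 0.5), two-sided alpha = 0.05.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05,
                                   alternative='two-sided')
print(f"required sample size per group: {n_per_group:.1f}")   # roughly 64
```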

  2. Wave front sensing for next generation earth observation telescope

    NASA Astrophysics Data System (ADS)

    Delvit, J.-M.; Thiebaut, C.; Latry, C.; Blanchet, G.

    2017-09-01

    High-resolution observation systems are highly dependent on optics quality and are usually designed to be nearly diffraction limited. Such performance allows setting the Nyquist frequency closer to the cut-off frequency, or equivalently minimizing the pupil diameter for a given ground sampling distance target. Up to now, defocus is the only aberration that is allowed to evolve slowly and that may be corrected in flight, using an open-loop correction based upon ground estimation and upload of a refocusing command. For instance, Pleiades satellite defocus is assessed from star acquisitions and refocusing is done with a thermal actuation of the M2 mirror. Next-generation systems under study at CNES should include active optics in order to handle evolving aberrations not limited to defocus, due for instance to variable in-orbit thermal conditions. Active optics relies on aberration estimation through an onboard Wave Front Sensor (WFS). One option is to use a Shack-Hartmann sensor. The Shack-Hartmann wave-front sensor can be used on extended scenes (unknown landscapes). A wave-front computation algorithm should then be implemented on board the satellite to provide the control-loop wave-front error measure. In the worst-case scenario, this measure should be computed before each image acquisition. A robust and fast algorithm for estimating shifts between Shack-Hartmann images is then needed to fulfill this last requirement. A fast gradient-based algorithm using optical flows with a Lucas-Kanade method has been studied and implemented on an electronic device developed by CNES. Measurement accuracy depends on the Wave Front Error (WFE), the landscape frequency content, the number of searched aberrations, the a priori knowledge of high-order aberrations and the characteristics of the sensor. CNES has carried out a full-scale sensitivity analysis over the whole parameter set with its internally developed algorithm.
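
    A bare-bones version of the gradient-based shift estimation mentioned above (a single Lucas-Kanade step over a whole subaperture image, pure NumPy) is sketched below; the synthetic scene and sub-pixel shift are invented for illustration, and the on-board CNES implementation is certainly more elaborate.

```python
import numpy as np

def lk_shift(ref, mov):
    """Estimate the (dy, dx) translation between two images by solving the
    optical-flow normal equations over the full frame (one Lucas-Kanade step)."""
    gy, gx = np.gradient(ref.astype(float))
    it = mov.astype(float) - ref.astype(float)
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = -np.array([np.sum(gx * it), np.sum(gy * it)])
    dx, dy = np.linalg.solve(A, b)
    return dy, dx

# Synthetic extended scene and a copy shifted by a small, known amount
y, x = np.mgrid[0:64, 0:64]
scene = np.sin(0.2 * x) + np.cos(0.15 * y) + 0.3 * np.sin(0.005 * x * y)
shift = (0.4, -0.7)                         # true (dy, dx) in pixels
yy, xx = y - shift[0], x - shift[1]
moved = np.sin(0.2 * xx) + np.cos(0.15 * yy) + 0.3 * np.sin(0.005 * xx * yy)

print("estimated (dy, dx):", lk_shift(scene, moved))   # approximately (0.4, -0.7)
```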

  3. FAMA: An automatic code for stellar parameter and abundance determination

    NASA Astrophysics Data System (ADS)

    Magrini, Laura; Randich, Sofia; Friel, Eileen; Spina, Lorenzo; Jacobson, Heather; Cantat-Gaudin, Tristan; Donati, Paolo; Baglioni, Roberto; Maiorca, Enrico; Bragaglia, Angela; Sordo, Rosanna; Vallenari, Antonella

    2013-10-01

    Context. The large amount of spectra obtained during the epoch of extensive spectroscopic surveys of Galactic stars needs the development of automatic procedures to derive their atmospheric parameters and individual element abundances. Aims: Starting from the widely-used code MOOG by C. Sneden, we have developed a new procedure to determine atmospheric parameters and abundances in a fully automatic way. The code FAMA (Fast Automatic MOOG Analysis) is presented describing its approach to derive atmospheric stellar parameters and element abundances. The code, freely distributed, is written in Perl and can be used on different platforms. Methods: The aim of FAMA is to render the computation of the atmospheric parameters and abundances of a large number of stars using measurements of equivalent widths (EWs) as automatic and as independent of any subjective approach as possible. It is based on the simultaneous search for three equilibria: excitation equilibrium, ionization balance, and the relationship between log n(Fe i) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and errors due to the uncertainties in the stellar parameters. The convergence criteria are not fixed "a priori" but are based on the quality of the spectra. Results: In this paper we present tests performed on the solar spectrum EWs that assess the method's dependency on the initial parameters and we analyze a sample of stars observed in Galactic open and globular clusters. The current version of FAMA is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/558/A38

  4. Modelling airborne gravity data by means of adapted Space-Wise approach

    NASA Astrophysics Data System (ADS)

    Sampietro, Daniele; Capponi, Martina; Hamdi Mansi, Ahmed; Gatti, Andrea

    2017-04-01

    Regional gravity field modelling by means of the remove-restore procedure is nowadays widely applied to predict grids of gravity anomalies (Bouguer, free-air, isostatic, etc.) in gravimetric geoid determination as well as in exploration geophysics. For this last application, given the required accuracy and resolution, airborne gravity observations are generally adopted. However, due to the relatively high acquisition velocity, atmospheric turbulence, aircraft vibration, instrumental drift, etc., airborne data are contaminated by a very high observation error. For this reason, a proper procedure to filter the raw observations in both the low and high frequencies should be applied to recover valuable information. In this work, a procedure to predict a grid or a set of filtered along-track gravity anomalies, by merging a GGM and an airborne dataset, is presented. The proposed algorithm, like the Space-Wise approach developed by Politecnico di Milano in the framework of GOCE data analysis, is based on a combination of an along-track Wiener filter and a Least Squares Collocation adjustment, and properly considers the different altitudes of the gravity observations. Among the main differences with respect to the satellite application of the Space-Wise approach is the fact that, while in processing GOCE data the stochastic characteristics of the observation error can be considered well known a priori, in airborne gravimetry, due to the complex environment in which the observations are acquired, these characteristics are unknown and should be retrieved from the dataset itself. Some innovative theoretical aspects, focusing in particular on covariance modelling, are presented too. In the end, the goodness of the procedure is evaluated by means of a test on real data, recovering the gravitational signal with a predicted accuracy of about 0.25 mGal.

  5. A high order cell-centered semi-Lagrangian scheme for multi-dimensional kinetic simulations of neutral gas flows

    NASA Astrophysics Data System (ADS)

    Güçlü, Y.; Hitchon, W. N. G.

    2012-04-01

    The term 'Convected Scheme' (CS) refers to a family of algorithms, most usually applied to the solution of Boltzmann's equation, which uses a method of characteristics in an integral form to project an initial cell forward to a group of final cells. As such the CS is a 'forward-trajectory' semi-Lagrangian scheme. For multi-dimensional simulations of neutral gas flows, the cell-centered version of this semi-Lagrangian (CCSL) scheme has advantages over other options due to its implementation simplicity, low memory requirements, and easier treatment of boundary conditions. The main drawback of the CCSL-CS to date has been its high numerical diffusion in physical space, because of the 2nd order remapping that takes place at the end of each time step. By means of a modified equation analysis, it is shown that a high order estimate of the remapping error can be obtained a priori, and a small correction to the final position of the cells can be applied upon remapping, in order to achieve full compensation of this error. The resulting scheme is 4th order accurate in space while retaining the desirable properties of the CS: it is conservative and positivity-preserving, and the overall algorithm complexity is not appreciably increased. Two monotone (i.e. non-oscillating) versions of the fourth order CCSL-CS are also presented: one uses a common flux-limiter approach; the other uses a non-polynomial reconstruction to evaluate the derivatives of the density function. The method is illustrated in simple one- and two-dimensional examples, and a fully 3D solution of the Boltzmann equation describing expansion of a gas into vacuum through a cylindrical tube.

  6. The Job Dimensions Underlying the Job Elements of the Position Analysis Questionnaire (PAQ) (Form B).

    DTIC Science & Technology

    The study was concerned with the identification of the job dimensions underlying the job elements of the Position Analysis Questionnaire (PAQ), Form B...The PAQ is a structured job analysis instrument consisting of 187 worker-oriented job elements which are divided into six a priori major divisions...The statistical procedure of principal components analysis was used to identify the job dimensions of the PAQ. Forty-five job dimensions were

  7. A model for medical decision making and problem solving.

    PubMed

    Werner, M

    1995-08-01

    Clinicians confront the classical problem of decision making under uncertainty, but a universal procedure by which they deal with this situation, both in diagnosis and therapy, can be defined. This consists in the choice of a specific course of action from available alternatives so as to reduce uncertainty. Formal analysis shows that the expected value of this process depends on the a priori probabilities confronted, the discriminatory power of the action chosen, and the values and costs associated with possible outcomes. Clinical problem-solving represents the construction of a systematic strategy from multiple decisional building blocks. Depending on the level of uncertainty the physicians attach to their working hypothesis, they can choose among at least four prototype strategies: pattern recognition, the hypothetico-deductive process, arborization, and exhaustion. However, the resolution of real-life problems can involve a combination of these game plans. Formal analysis of each strategy permits definition of its appropriate a priori probabilities, action characteristics, and cost implications.
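
    One common way to formalize the expected value referred to above (generic decision-theoretic notation, not the author's) is

```latex
\mathrm{EV}(a) \;=\; \sum_{i} P(s_i)\,\sum_{j} P(o_j \mid a, s_i)\,\bigl[\,V(o_j, s_i) - C(a)\,\bigr],
```

    where P(s_i) are the a priori probabilities of the candidate clinical states, P(o_j | a, s_i) reflects the discriminatory power of the chosen action a, and V and C are the values and costs attached to the possible outcomes.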

  8. Effects of two classification strategies on a Benthic Community Index for streams in the Northern Lakes and Forests Ecoregion

    USGS Publications Warehouse

    Butcher, Jason T.; Stewart, Paul M.; Simon, Thomas P.

    2003-01-01

    Ninety-four sites were used to analyze the effects of two different classification strategies on the Benthic Community Index (BCI). The first, a priori classification, reflected the wetland status of the streams; the second, a posteriori classification, used a bio-environmental analysis to select classification variables. Both classifications were examined by measuring classification strength and testing differences in metric values with respect to group membership. The a priori (wetland) classification strength (83.3%) was greater than the a posteriori (bio-environmental) classification strength (76.8%). Both classifications found one metric that had significant differences between groups. The original index was modified to reflect the wetland classification by re-calibrating the scoring criteria for percent Crustacea and Mollusca. A proposed refinement to the original Benthic Community Index is suggested. This study shows the importance of using hypothesis-driven classifications, as well as exploratory statistical analysis, to evaluate alternative ways to reveal environmental variability in biological assessment tools.

  9. Towards Improving Satellite Tropospheric NO2 Retrieval Products: Impacts of the spatial resolution and lighting NOx production from the a priori chemical transport model

    NASA Astrophysics Data System (ADS)

    Smeltzer, C. D.; Wang, Y.; Zhao, C.; Boersma, F.

    2009-12-01

    Polar orbiting satellite retrievals of tropospheric nitrogen dioxide (NO2) columns are important to a variety of scientific applications. These NO2 retrievals rely on a priori profiles from chemical transport models and on radiative transfer models to derive the vertical columns (VCs) from slant column measurements. In this work, we compare the retrieval results using a priori profiles from a global model (TM4) and a higher-resolution regional model (REAM) at the OMI overpass hour of 1330 local time, implementing the Dutch OMI NO2 (DOMINO) retrieval. We also compare the retrieval results using a priori profiles from REAM model simulations with and without lightning NOx (NO + NO2) production. A priori model resolution and lightning NOx production are both found to have a large impact on satellite retrievals, because they shift the NO2 vertical distribution seen by the radiative transfer model and thereby alter the satellite sensitivity to a particular observation. The retrieved tropospheric NO2 VCs may increase by 25-100% in urban regions and be reduced by 50% in rural regions if the a priori profiles from REAM simulations are used during the retrievals instead of the profiles from TM4 simulations. The a priori profiles with lightning NOx may result in a 25-50% reduction of the retrieved tropospheric NO2 VCs compared to the a priori profiles without lightning. As a first priority, a priori vertical NO2 profiles from a high-resolution chemical transport model, which can better simulate urban-rural NO2 gradients in the boundary layer and make use of observation-based parameterizations of lightning NOx production, should be implemented to obtain more accurate NO2 retrievals over the United States, where NOx source regions are spatially separated and lightning NOx production is significant. Given the variability of the a priori NO2 profiles caused by lightning and model resolution, geostationary daytime observations with sufficient spatial resolution would then be the next step towards a more complete NO2 data product. Both the corrected retrieval algorithm and the proposed next-generation geostationary satellite observations would thus improve emission inventories, better validate model simulations, and help optimize region-specific ozone control strategies.
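
    The dependence on the a priori profile enters through the air-mass factor used to convert slant to vertical columns; schematically, in the standard formulation (our notation),

```latex
V_{\mathrm{trop}} \;=\; \frac{S_{\mathrm{trop}}}{M_{\mathrm{trop}}},
\qquad
M_{\mathrm{trop}} \;=\; \frac{\sum_{l} m_l\, x_{a,l}}{\sum_{l} x_{a,l}},
```

    where S_trop is the tropospheric slant column, m_l the altitude-dependent (box) air-mass factors from the radiative transfer model, and x_a,l the a priori NO2 partial column in layer l; redistributing a priori NO2 between the boundary layer and the free troposphere therefore changes M_trop and hence the retrieved vertical column.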

  10. Inversion of atmospheric optical parameters from elastic-backscatter lidar returns using a Kalman filter

    NASA Astrophysics Data System (ADS)

    Rocadenbosch, Francesc; Comeron, Adolfo; Vazquez, Gregori; Rodriguez-Gomez, Alejandro; Soriano, Cecilia; Baldasano, Jose M.

    1998-12-01

    Up to now, retrieval of the atmospheric extinction and backscatter has mainly relied on standard straightforward non-memory procedures such as the slope method, exponential-curve fitting and Klett's method. Yet their performance is ultimately limited by an inherent lack of adaptability, since they work only with present returns and take into account neither past estimates, nor the statistics of the signals, nor a priori uncertainties. In this work, a first inversion of the backscatter and extinction-to-backscatter ratio from pulsed elastic-backscatter lidar returns is tackled by means of an extended Kalman filter (EKF), which overcomes these limitations. Thus, as successive return signals come in, the filter updates itself, weighting the imbalance between the a priori estimates of the optical parameters and the new ones according to a minimum-variance criterion. Calibration errors or initialization uncertainties can also be assimilated. The study begins with the formulation of the inversion problem and an appropriate stochastic model. Based on extensive simulation and realistic conditions, it is shown that the EKF approach makes it possible to retrieve the sought-after optical parameters as time- and range-dependent functions and hence to track the atmospheric evolution, its performance being limited only by the quality and availability of the a priori information and the accuracy of the assumed atmospheric model. The study ends with an encouraging practical inversion of a live scene measured with the Nd:YAG elastic-backscatter lidar station at our premises in Barcelona.
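
    A generic extended Kalman filter step of the kind referred to above is sketched below for an arbitrary state vector of optical parameters; the transition and measurement models are placeholders, not the paper's lidar forward model.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict/update cycle of an extended Kalman filter.
    x, P : prior state estimate and covariance
    z    : new measurement (e.g. a range-resolved lidar return)
    f, F : state transition function and its Jacobian
    h, H : measurement function and its Jacobian
    Q, R : process- and measurement-noise covariances
    """
    # Predict
    x_pred = f(x)
    P_pred = F(x) @ P @ F(x).T + Q
    # Update: new data are weighted against the a priori estimate
    S = H(x_pred) @ P_pred @ H(x_pred).T + R
    K = P_pred @ H(x_pred).T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H(x_pred)) @ P_pred
    return x_new, P_new

# Minimal demo with a 2-state random-walk model and identity measurement
n = 2
f = lambda x: x
F = lambda x: np.eye(n)
h = lambda x: x
H = lambda x: np.eye(n)
Q, R = 1e-4 * np.eye(n), 0.01 * np.eye(n)

x_est, P = np.array([0.1, 30.0]), np.eye(n)
for z in [np.array([0.12, 28.0]), np.array([0.11, 29.5])]:
    x_est, P = ekf_step(x_est, P, z, f, F, h, H, Q, R)
print("state after two updates:", x_est)
```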

  11. Accurate macromolecular crystallographic refinement: incorporation of the linear scaling, semiempirical quantum-mechanics program DivCon into the PHENIX refinement package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borbulevych, Oleg Y.; Plumley, Joshua A.; Martin, Roger I.

    2014-05-01

    Semiempirical quantum-chemical X-ray macromolecular refinement using the program DivCon integrated with PHENIX is described. Macromolecular crystallographic refinement relies on sometimes dubious stereochemical restraints and rudimentary energy functionals to ensure the correct geometry of the model of the macromolecule and any covalently bound ligand(s). The ligand stereochemical restraint file (CIF) requires a priori understanding of the ligand geometry within the active site, and creation of the CIF is often an error-prone process owing to the great variety of potential ligand chemistry and structure. Stereochemical restraints have been replaced with more robust functionals through the integration of the linear-scaling, semiempirical quantum-mechanics (SE-QM) program DivCon with the PHENIX X-ray refinement engine. The PHENIX/DivCon package has been thoroughly validated on a population of 50 protein–ligand Protein Data Bank (PDB) structures with a range of resolutions and chemistry. The PDB structures used for the validation were originally refined utilizing various refinement packages and were published within the past five years. PHENIX/DivCon does not utilize CIF(s), link restraints and other parameters for refinement and hence it does not make as many a priori assumptions about the model. Across the entire population, the method results in reasonable ligand geometries and low ligand strains, even when the original refinement exhibited difficulties, indicating that PHENIX/DivCon is applicable to both single-structure and high-throughput crystallography.

  12. Maximum likelihood identification and optimal input design for identifying aircraft stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Stepner, D. E.; Mehra, R. K.

    1973-01-01

    A new method of extracting aircraft stability and control derivatives from flight test data is developed based on the maximum likelihood criterion. It is shown that this new method is capable of processing data from both linear and nonlinear models, both with and without process noise, and includes output error and equation error methods as special cases. The first application of this method to flight test data is reported for lateral maneuvers of the HL-10 and M2/F3 lifting bodies, including the extraction of stability and control derivatives in the presence of wind gusts. All the problems encountered in this identification study are discussed. Several different methods (including a priori weighting, parameter fixing and constrained parameter values) for dealing with identifiability and uniqueness problems are introduced and the results given. The method for the design of optimal inputs for identifying the parameters of linear dynamic systems is also given. The criterion used for the optimization is the sensitivity of the system output to the unknown parameters. Several simple examples are first given and then the results of an extensive stability and control derivative identification simulation for a C-8 aircraft are detailed.

  13. The significance of the Skylab altimeter experiment results and potential applications. [measurement of sea surface topography

    NASA Technical Reports Server (NTRS)

    Mourad, A. G.; Gopalapillai, S.; Kuhner, M.

    1975-01-01

    The Skylab Altimeter Experiment has proven the capability of the altimeter for measurement of sea surface topography. The geometric determination of the geoid/mean sea level from satellite altimetry is a new approach having significant applications in many disciplines including geodesy and oceanography. A Generalized Least Squares Collocation Technique was developed for determination of the geoid from altimetry data. The technique solves for the altimetry geoid and determines one bias term for the combined effect of sea state, orbit, tides, geoid, and instrument error using sparse ground truth data. The influence of errors in orbit and a priori geoid values is discussed. Although the Skylab altimeter instrument accuracy is about ±1 m, significant results were obtained in identification of large geoidal features such as over the Puerto Rico trench. Comparison of the results of several passes shows that good agreement exists between the general slopes of the altimeter geoid and the ground truth, and that the altimeter appears to be capable of providing more details than are now available with the best known geoids. The altimetry geoidal profiles show excellent correlations with bathymetry and gravity. Potential applications of altimetry results to geodesy, oceanography, and geophysics are discussed.

  14. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  15. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    DOE PAGES

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-06-13

    An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  16. On decentralized adaptive full-order sliding mode control of multiple UAVs.

    PubMed

    Xiang, Xianbo; Liu, Chao; Su, Housheng; Zhang, Qin

    2017-11-01

    In this study, a novel decentralized adaptive full-order sliding mode control framework is proposed for the robust synchronized formation motion of multiple unmanned aerial vehicles (UAVs) subject to system uncertainty. First, a full-order sliding mode surface is designed in a decentralized manner to incorporate both the individual position tracking error and the synchronized formation error while the UAV group is engaged in building a certain desired geometric pattern in three-dimensional space. Second, a decentralized virtual plant controller is constructed which allows the embedded low-pass filter to attain the chattering-free property of the sliding mode controller. In addition, a robust adaptive technique is integrated into the decentralized chattering-free sliding control design to handle unknown bounded uncertainties, without requiring a priori knowledge of bounds on the system uncertainties as assumed in conventional chattering-free control methods. Subsequently, the robustness and stability of the decentralized full-order sliding mode control of multiple UAVs are synthesized. Numerical simulation results illustrate the effectiveness of the proposed control framework in achieving robust 3D formation flight of the multi-UAV system. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Metric Scale Calculation for Visual Mapping Algorithms

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Mitschke, A.; Boerner, R.; Van Opdenbosch, D.; Hoegner, L.; Brodie, D.; Stilla, U.

    2018-05-01

    Visual SLAM algorithms localize a camera by mapping its environment as a point cloud based on visual cues. To obtain the camera locations in a metric coordinate system, the metric scale of the point cloud has to be known. This contribution describes a method to calculate the metric scale for a point cloud of an indoor environment, like a parking garage, by fusing multiple individual scale values. The individual scale values are calculated from structures and objects with a priori known metric extent, which can be identified in the unscaled point cloud. Extents of building structures, like the driving lane or the room height, are derived from density peaks in the point distribution. The extent of objects, like traffic signs with a known metric size, is derived using projections of their detections in images onto the point cloud. The method is tested with synthetic image sequences of a drive with a front-looking mono camera through a virtual 3D model of a parking garage. It is shown that each individual scale value either improves the robustness of the fused scale value or reduces its error. The error of the fused scale is comparable to that of other recent works.
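
    The abstract does not give the fusion rule, so the following is only a minimal sketch assuming an inverse-variance weighted mean of the individual scale estimates; the example numbers are invented.

        import numpy as np

        def fuse_scales(scales, sigmas):
            """Fuse individual metric scale estimates (e.g. from lane geometry,
            room height, traffic signs) into a single value by inverse-variance
            weighting; returns the fused scale and its formal uncertainty."""
            scales = np.asarray(scales, dtype=float)
            weights = 1.0 / np.asarray(sigmas, dtype=float) ** 2
            fused = np.sum(weights * scales) / np.sum(weights)
            return fused, np.sqrt(1.0 / np.sum(weights))

        # hypothetical scale values with their standard deviations
        print(fuse_scales([0.48, 0.52, 0.50], [0.03, 0.04, 0.02]))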

  18. Ozone Climatological Profiles for Version 8 TOMS and SBUV Retrievals

    NASA Technical Reports Server (NTRS)

    McPeters, R. D.; Logan, J. A.; Labow, G. J.

    2003-01-01

    A new altitude-dependent ozone climatology has been produced for use with the latest Total Ozone Mapping Spectrometer (TOMS) and Solar Backscatter Ultraviolet (SBUV) retrieval algorithms. The climatology consists of monthly average profiles for ten-degree latitude zones covering 0 to 60 km. The climatology was formed by combining data from SAGE II (1988 to 2000) and MLS (1991-1999) with data from balloon sondes (1988-2002). Ozone below about 20 km is based on balloon sondes, while ozone above 30 km is based on satellite measurements. The profiles join smoothly between 20 and 30 km. The ozone climatology in the southern hemisphere and tropics has been greatly enhanced in recent years by the addition of balloon sonde stations under the SHADOZ (Southern Hemisphere Additional Ozonesondes) program. A major source of error in the TOMS and SBUV retrieval of total column ozone comes from their reduced sensitivity to ozone in the lower troposphere. An accurate climatology for the retrieval a priori is important for reducing this error on average. The new climatology follows the seasonal behavior of tropospheric ozone and reflects its hemispheric asymmetry. Comparisons of TOMS version 8 ozone with ground stations show an improvement due in part to the new climatology.
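
    The abstract states only that the sonde-based and satellite-based profiles join smoothly between 20 and 30 km; a minimal sketch of one way to perform such a merge, assuming a simple linear weight ramp over that layer, is:

        import numpy as np

        def blend_profiles(z_km, o3_sonde, o3_sat, z_lo=20.0, z_hi=30.0):
            """Merge a sonde-based profile (used below z_lo) with a satellite-based
            profile (used above z_hi) using a linear weight ramp in between, so the
            combined profile joins smoothly.  Illustrative only; the climatology's
            actual merging scheme is not specified in the abstract."""
            w_sat = np.clip((np.asarray(z_km, dtype=float) - z_lo) / (z_hi - z_lo), 0.0, 1.0)
            return (1.0 - w_sat) * np.asarray(o3_sonde) + w_sat * np.asarray(o3_sat)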

  19. Outlier analysis of functional genomic profiles enriches for oncology targets and enables precision medicine.

    PubMed

    Zhu, Zhou; Ihle, Nathan T; Rejto, Paul A; Zarrinkar, Patrick P

    2016-06-13

    Genome-scale functional genomic screens across large cell line panels provide a rich resource for discovering tumor vulnerabilities that can lead to the next generation of targeted therapies. Their data analysis has typically focused on identifying genes whose knockdown enhances response in various pre-defined genetic contexts, which are limited by biological complexities as well as the incompleteness of our knowledge. We thus introduce a complementary data mining strategy to identify genes with exceptional sensitivity in subsets, or outlier groups, of cell lines, allowing an unbiased analysis without any a priori assumption about the underlying biology of dependency. Genes with outlier features are strongly and specifically enriched with those known to be associated with cancer and relevant biological processes, despite no a priori knowledge being used to drive the analysis. Identification of exceptional responders (outliers) may lead not only to new candidates for therapeutic intervention, but also to tumor indications and response biomarkers for companion precision medicine strategies. Several tumor suppressors have an outlier sensitivity pattern, supporting and generalizing the notion that tumor suppressors can play context-dependent oncogenic roles. The novel application of outlier analysis described here demonstrates a systematic and data-driven analytical strategy to decipher large-scale functional genomic data for oncology target and precision medicine discoveries.

  20. A quantitative visual dashboard to explore exposures to ...

    EPA Pesticide Factsheets

    The Exposure Prioritization (Ex Priori) model features a simplified, quantitative visual dashboard to explore exposures across chemical space. Diverse data streams are integrated within the interface such that different exposure scenarios for “individual,” “population,” or “professional” time-use profiles can be interchanged to tailor exposure and quantitatively explore multi-chemical signatures of exposure, internalized dose (uptake), body burden, and elimination. Ex Priori will quantitatively extrapolate single-point estimates of both exposure and internal dose for multiple exposure scenarios, factors, products, and pathways. Currently, EPA is investigating its usefulness in life cycle analysis, insofar as its ability to enhance exposure factors used in calculating characterization factors for human health. Presented at 2016 Annual ISES Meeting held in Utrecht, The Netherlands, from 9-13 October 2016.

  1. SU-E-T-769: T-Test Based Prior Error Estimate and Stopping Criterion for Monte Carlo Dose Calculation in Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X; Gao, H; Schuemann, J

    2015-06-15

    Purpose: The Monte Carlo (MC) method is a gold standard for dose calculation in radiotherapy. However, it is not a priori clear how many particles need to be simulated to achieve a given dose accuracy. Prior error estimates and stopping criteria are not well established for MC. This work aims to fill this gap. Methods: Due to the statistical nature of MC, our approach is based on the one-sample t-test. We design the prior error estimate method based on the t-test, and then use this t-test based error estimate to develop a simulation stopping criterion. The three major components are as follows. First, the source particles are randomized in energy, space and angle, so that the dose deposition from a particle to the voxel is independent and identically distributed (i.i.d.). Second, a sample under consideration in the t-test is the mean value of the dose deposited in the voxel by a sufficiently large number of source particles; according to the central limit theorem, the sample, as the mean value of i.i.d. variables, is normally distributed with expectation equal to the true deposited dose. Third, the t-test is performed with the null hypothesis that the difference between the sample expectation (the same as the true deposited dose) and the on-the-fly calculated mean sample dose from MC is larger than a given error threshold; in addition, users have the freedom to specify the confidence probability and region of interest in the t-test based stopping criterion. Results: The method is validated for proton dose calculation. The difference between the MC result based on the t-test prior error estimate and the statistical result obtained by repeating numerous MC simulations is within 1%. Conclusion: The t-test based prior error estimate and stopping criterion are developed for MC and validated for proton dose calculation. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
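
    A minimal sketch of the kind of t-test based stopping check described above, assuming the tally for one voxel is summarized by i.i.d. batch means and that stopping is declared when the confidence-interval half-width falls below the user's error threshold; the function and argument names are illustrative, not from the paper.

        import numpy as np
        from scipy import stats

        def mc_should_stop(batch_means, err_threshold, confidence=0.95):
            """Stop the MC simulation for a voxel when the t-based confidence
            interval for the true mean deposited dose is narrower than err_threshold."""
            n = len(batch_means)
            if n < 2:
                return False
            s = np.std(batch_means, ddof=1)        # sample std of the batch means
            halfwidth = stats.t.ppf(0.5 + confidence / 2.0, n - 1) * s / np.sqrt(n)
            return halfwidth < err_threshold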

  2. Estimating Asian terrestrial carbon fluxes from CONTRAIL aircraft and surface CO2 observations for the period 2006-2010

    NASA Astrophysics Data System (ADS)

    Zhang, H. F.; Chen, B. Z.; Machida, T.; Matsueda, H.; Sawa, Y.; Fukuyama, Y.; Langenfelds, R.; van der Schoot, M.; Xu, G.; Yan, J. W.; Cheng, M. L.; Zhou, L. X.; Tans, P. P.; Peters, W.

    2014-06-01

    Current estimates of the terrestrial carbon fluxes in Asia show large uncertainties, particularly in the boreal and mid-latitudes and in China. In this paper, we present an updated carbon flux estimate for Asia ("Asia" refers to lands as far west as the Urals and is divided into boreal Eurasia, temperate Eurasia and tropical Asia based on TransCom regions) by introducing aircraft CO2 measurements from the CONTRAIL (Comprehensive Observation Network for Trace gases by Airline) program into an inversion modeling system based on the CarbonTracker framework. We estimated that the average annual total Asian terrestrial land CO2 sink was about -1.56 Pg C yr-1 over the period 2006-2010, which offsets about one-third of the fossil fuel emission from Asia (+4.15 Pg C yr-1). The uncertainty of the terrestrial uptake estimate was derived from a set of sensitivity tests and ranged from -1.07 to -1.80 Pg C yr-1, comparable to the formal Gaussian error of ±1.18 Pg C yr-1 (1-sigma). The largest sink was found in forests, predominantly in coniferous forests (-0.64 ± 0.70 Pg C yr-1) and mixed forests (-0.14 ± 0.27 Pg C yr-1); the second and third largest carbon sinks were found in grass/shrub lands and croplands, accounting for -0.44 ± 0.48 Pg C yr-1 and -0.20 ± 0.48 Pg C yr-1, respectively. The carbon fluxes per ecosystem type have large a priori Gaussian uncertainties, and the reduction of uncertainty based on assimilation of sparse observations over Asia is modest (8.7-25.5%) for most individual ecosystems. The ecosystem flux adjustments follow the detailed a priori spatial patterns by design, which further increases the reliance on the a priori biosphere exchange model. The peak-to-peak amplitude of the inter-annual variability (IAV) was 0.57 Pg C yr-1, ranging from -1.71 Pg C yr-1 to -2.28 Pg C yr-1. The IAV analysis reveals that the Asian CO2 sink was sensitive to climate variations, with the lowest uptake in 2010, concurrent with a summer flood and autumn drought, and the largest CO2 sink in 2009, owing to favorable temperature and plentiful precipitation conditions. We also found that the inclusion of the CONTRAIL data in the inversion modeling system reduced the uncertainty by 11% over the whole Asian region, with a large reduction in the southeast of boreal Eurasia, the southeast of temperate Eurasia and most tropical Asian areas.

  3. Demonstration of Orbit Determination for the Lunar Reconnaissance Orbiter Using One-Way Laser Ranging Data

    NASA Technical Reports Server (NTRS)

    Bauer, S.; Hussmann, H.; Oberst, J.; Dirkx, D.; Mao, D.; Neumann, G. A.; Mazarico, E.; Torrence, M. H.; McGarry, J. F.; Smith, D. E.; hide

    2016-01-01

    We used one-way laser ranging data from International Laser Ranging Service (ILRS) ground stations to NASA's Lunar Reconnaissance Orbiter (LRO) for a demonstration of orbit determination. In the one-way setup, the state of LRO and the parameters of the spacecraft clock and all involved ground station clocks must be estimated simultaneously. This setup introduces many correlated parameters that are resolved by using a priori constraints. Moreover, the observation data coverage and errors accumulating from the dynamical and clock modeling limit the maximum arc length. The objective of this paper is to investigate the effect of the arc length, the dynamical and clock modeling accuracy, and the observation data coverage on the accuracy of the results. We analyzed multiple arcs using lengths of 2 and 7 days during a one-week period in Science Mission phase 02 (SM02, November 2010) and compared the trajectories, the post-fit measurement residuals and the estimated clock parameters. We further incorporated simultaneous passes from multiple stations within the observation data to investigate the expected improvement in positioning. The estimated trajectories were compared to the nominal LRO trajectory, and the clock parameters (offset, rate and aging) to the results found in the literature. Arcs estimated with one-way ranging data had differences of 5-30 m compared to the nominal LRO trajectory. While the estimated LRO clock rates agreed closely with the a priori constraints, the aging parameters absorbed clock modeling errors with increasing clock arc length. Because of high correlations between the different ground station clocks and the limited clock modeling accuracy, their differences agreed with the literature only in order of magnitude. We found that the incorporation of simultaneous passes requires improved modeling, in particular to enable the expected improvement in positioning. We also found that gaps in the observation data coverage of over 12 h (approximately 6 successive LRO orbits) prevented the successful estimation of arcs with lengths shorter or longer than 2 or 7 days with our given modeling.
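
    The clock parameters mentioned (offset, rate and aging) are commonly combined in a quadratic clock model; a minimal sketch, with placeholder coefficients rather than the values estimated in the paper:

        def clock_offset(t, t0, offset, rate, aging):
            """Modeled time offset of a clock at epoch t, given its offset, rate and
            aging (drift of the rate) referred to the reference epoch t0 (seconds)."""
            dt = t - t0
            return offset + rate * dt + 0.5 * aging * dt ** 2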

  4. Validation of Diagnostic Measures Based on Latent Class Analysis: A Step Forward in Response Bias Research

    ERIC Educational Resources Information Center

    Thomas, Michael L.; Lanyon, Richard I.; Millsap, Roger E.

    2009-01-01

    The use of criterion group validation is hindered by the difficulty of classifying individuals on latent constructs. Latent class analysis (LCA) is a method that can be used for determining the validity of scales meant to assess latent constructs without such a priori classifications. The authors used this method to examine the ability of the L…

  5. Some New Mathematical Methods for Variational Objective Analysis

    NASA Technical Reports Server (NTRS)

    Wahba, G.; Johnson, D. R.

    1984-01-01

    New and/or improved variational methods for simultaneously combining forecast, heterogeneous observational data, a priori climatology, and physics to obtain improved estimates of the initial state of the atmosphere for the purpose of numerical weather prediction are developed. Cross validated spline methods are applied to atmospheric data for the purpose of improved description and analysis of atmospheric phenomena such as the tropopause and frontal boundary surfaces.

  6. Space-based surface wind vectors to aid understanding of air-sea interactions

    NASA Technical Reports Server (NTRS)

    Atlas, R.; Bloom, S. C.; Hoffman, R. N.; Ardizzone, J. V.; Brin, G.

    1991-01-01

    A novel and unique ocean-surface wind data-set has been derived by combining the Defense Meteorological Satellite Program Special Sensor Microwave Imager data with additional conventional data. The variational analysis used generates a gridded surface wind analysis that minimizes an objective function measuring the misfit of the analysis to the background, the data, and certain a priori constraints. In the present case, the European Center for Medium-Range Weather Forecasts surface-wind analysis is used as the background.
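
    The objective function described above has the generic variational form sketched below: a background misfit term, an observation misfit term, and optional a priori constraint terms. The matrices and the constraint function are placeholders, not the operational SSM/I analysis code.

        import numpy as np

        def variational_cost(x, xb, B_inv, y, H, R_inv, constraint=None, gamma=1.0):
            """Generic variational objective: misfit to the background xb, misfit to
            the observations y (through the linear observation operator H), plus an
            optional weighted a priori constraint term."""
            jb = 0.5 * (x - xb) @ B_inv @ (x - xb)   # background (first guess) term
            d = y - H @ x                            # observation departures
            jo = 0.5 * d @ R_inv @ d
            jc = gamma * constraint(x) if constraint is not None else 0.0
            return jb + jo + jc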

  7. Robust double gain unscented Kalman filter for small satellite attitude estimation

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Yang, Weiwei; Li, Hengnian; Zhang, Zhidong; Shi, Jianjun

    2017-08-01

    Because small satellite sensors have low precision, high-performance estimation theory remains a central research topic for attitude estimation. The Kalman filter (KF) and its extensions have been widely applied to satellite attitude estimation with considerable success. However, most existing methods use only the current time-step's a priori measurement residuals to complete the measurement update and state estimation, ignoring the extraction and utilization of the previous time-step's a posteriori measurement residuals. In addition, uncertain model errors are always present in the attitude dynamic system, which places higher performance demands on the classical KF in the attitude estimation problem. Therefore, a novel robust double gain unscented Kalman filter (RDG-UKF) is presented in this paper to satisfy these requirements for small satellite attitude estimation with low-precision sensors. It is assumed that the system state estimation errors are exhibited in the measurement residual; the new method therefore derives a second Kalman gain Kk2 to make full use of the previous time-step's measurement residual and improve the utilization efficiency of the measurement data. Moreover, the sequence orthogonal principle and the unscented transform (UT) strategy are introduced to enhance the robustness and performance of the new filter and to reduce the influence of the existing model errors. Numerical simulations show that the proposed RDG-UKF is more effective and robust in dealing with model errors and low-precision sensors for small satellite attitude estimation than the classical unscented Kalman filter (UKF).

  8. GOSAT CO2 retrieval results using TANSO-CAI aerosol information over East Asia

    NASA Astrophysics Data System (ADS)

    KIM, M.; Kim, W.; Jung, Y.; Lee, S.; Kim, J.; Lee, H.; Boesch, H.; Goo, T. Y.

    2015-12-01

    In the satellite remote sensing of CO2, incorrect aerosol information can induce large errors, as previous studies have suggested. Many factors, such as aerosol type, the wavelength dependence of AOD, and the aerosol polarization effect, are the main error sources. Because of these aerosol effects, a large fraction of retrieved data is screened out in quality control, and retrieval errors tend to increase if they are not screened out, especially in East Asia where aerosol concentrations are fairly high. To reduce these aerosol-induced errors, a CO2 retrieval algorithm using simultaneous TANSO-CAI aerosol information is developed. This algorithm adopts AOD and aerosol type information from the CAI aerosol retrieval algorithm as a priori information. The CO2 retrieval algorithm is based on the optimal estimation method and VLIDORT, a vector discrete ordinate radiative transfer model. The algorithm, developed with various state vectors to find accurate CO2 concentrations, shows reasonable results when compared with other datasets. This study concentrates on the validation of the retrieved results against ground-based TCCON measurements in East Asia and on the comparison with previous retrievals from ACOS, NIES, and UoL. Although the retrieved CO2 concentration is lower than previous results by a few ppm, it shows a similar trend and high correlation with them. Retrieved data and TCCON measurements are compared at three stations (Tsukuba, Saga, and Anmyeondo) in East Asia, with collocation criteria of ±2° in latitude/longitude and ±1 hour of the GOSAT passing time. The compared results also show similar trends with good correlation. Based on the TCCON comparison results, a bias correction equation is calculated and applied to the East Asia data.
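
    The collocation criteria quoted above (±2° in latitude/longitude, ±1 hour) amount to a simple filter; a minimal sketch, assuming each record is a dict carrying latitude, longitude and a datetime:

        def is_collocated(sounding, station, dlat=2.0, dlon=2.0, dt_hours=1.0):
            """Return True when a GOSAT sounding falls within the stated spatial and
            temporal windows of a TCCON station; both arguments are assumed to be
            dicts with 'lat', 'lon' and 'time' (datetime) entries."""
            return (abs(sounding['lat'] - station['lat']) <= dlat
                    and abs(sounding['lon'] - station['lon']) <= dlon
                    and abs((sounding['time'] - station['time']).total_seconds())
                        <= dt_hours * 3600.0)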

  9. Error analyses of JEM/SMILES standard products on L2 operational system

    NASA Astrophysics Data System (ADS)

    Mitsuda, C.; Takahashi, C.; Suzuki, M.; Hayashi, H.; Imai, K.; Sano, T.; Takayanagi, M.; Iwata, Y.; Taniguchi, H.

    2009-12-01

    SMILES (Superconducting Submillimeter-wave Limb-Emission Sounder), which has been developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), is planned to be launched in September 2009 and will be on board the Japanese Experiment Module (JEM) of the International Space Station (ISS). SMILES measures the atmospheric limb emission from stratospheric minor constituents in the 640 GHz band. Target species of the L2 operational system are O3, ClO, HCl, HNO3, HOCl, CH3CN, HO2, BrO, and O3 isotopes (18OOO, 17OOO and O17OO). SMILES carries 4 K cooled superconductor-insulator-superconductor mixers to carry out high-sensitivity observations. In the sub-millimeter band, water vapor absorption is an important factor in determining the tropospheric and stratospheric brightness temperature, and its uncertainty influences the accuracy of the retrieved molecular vertical profiles. Since the SMILES bands are narrow and far from H2O lines, it is a good approximation to treat this uncertainty as a linear function of frequency. We therefore include the 0th- and 1st-order coefficients of a 'baseline' function, rather than the water vapor profile itself, in the state vector and retrieve them to remove the influence of the water vapor uncertainty. We performed retrieval simulations using spectra computed by the L2 operational forward model for various H2O conditions (-/+5% and 10% differences between the true and a priori profiles in the stratosphere, and -/+10% and 20% in the troposphere). The results show that the incremental errors of the molecules are smaller than 10% of the measurement errors when the height correlations of the baseline coefficients and temperature are assumed to be 10 km. In conclusion, the retrieval of the baseline coefficients effectively suppresses profile errors due to a biased water vapor profile.

  10. Testing the hypothesis of neurodegeneracy in respiratory network function with a priori transected arterially perfused brain stem preparation of rat

    PubMed Central

    Jones, Sarah E.

    2016-01-01

    Degeneracy of respiratory network function would imply that anatomically discrete aspects of the brain stem are capable of producing respiratory rhythm. To test this theory we a priori transected brain stem preparations before reperfusion and reoxygenation at 4 rostrocaudal levels: 1.5 mm caudal to obex (n = 5), at obex (n = 5), and 1.5 (n = 7) and 3 mm (n = 6) rostral to obex. The respiratory activity of these preparations was assessed via recordings of phrenic and vagal nerves and lumbar spinal expiratory motor output. Preparations with a priori transection at level of the caudal brain stem did not produce stable rhythmic respiratory bursting, even when the arterial chemoreceptors were stimulated with sodium cyanide (NaCN). Reperfusion of brain stems that preserved the pre-Bötzinger complex (pre-BötC) showed spontaneous and sustained rhythmic respiratory bursting at low phrenic nerve activity (PNA) amplitude that occurred simultaneously in all respiratory motor outputs. We refer to this rhythm as the pre-BötC burstlet-type rhythm. Conserving circuitry up to the pontomedullary junction consistently produced robust high-amplitude PNA at lower burst rates, whereas sequential motor patterning across the respiratory motor outputs remained absent. Some of the rostrally transected preparations expressed both burstlet-type and regular PNA amplitude rhythms. Further analysis showed that the burstlet-type rhythm and high-amplitude PNA had 1:2 quantal relation, with burstlets appearing to trigger high-amplitude bursts. We conclude that no degenerate rhythmogenic circuits are located in the caudal medulla oblongata and confirm the pre-BötC as the primary rhythmogenic kernel. The absence of sequential motor patterning in a priori transected preparations suggests that pontine circuits govern respiratory pattern formation. PMID:26888109

  11. Testing the hypothesis of neurodegeneracy in respiratory network function with a priori transected arterially perfused brain stem preparation of rat.

    PubMed

    Jones, Sarah E; Dutschmann, Mathias

    2016-05-01

    Degeneracy of respiratory network function would imply that anatomically discrete aspects of the brain stem are capable of producing respiratory rhythm. To test this theory we a priori transected brain stem preparations before reperfusion and reoxygenation at 4 rostrocaudal levels: 1.5 mm caudal to obex (n = 5), at obex (n = 5), and 1.5 (n = 7) and 3 mm (n = 6) rostral to obex. The respiratory activity of these preparations was assessed via recordings of phrenic and vagal nerves and lumbar spinal expiratory motor output. Preparations with a priori transection at level of the caudal brain stem did not produce stable rhythmic respiratory bursting, even when the arterial chemoreceptors were stimulated with sodium cyanide (NaCN). Reperfusion of brain stems that preserved the pre-Bötzinger complex (pre-BötC) showed spontaneous and sustained rhythmic respiratory bursting at low phrenic nerve activity (PNA) amplitude that occurred simultaneously in all respiratory motor outputs. We refer to this rhythm as the pre-BötC burstlet-type rhythm. Conserving circuitry up to the pontomedullary junction consistently produced robust high-amplitude PNA at lower burst rates, whereas sequential motor patterning across the respiratory motor outputs remained absent. Some of the rostrally transected preparations expressed both burstlet-type and regular PNA amplitude rhythms. Further analysis showed that the burstlet-type rhythm and high-amplitude PNA had 1:2 quantal relation, with burstlets appearing to trigger high-amplitude bursts. We conclude that no degenerate rhythmogenic circuits are located in the caudal medulla oblongata and confirm the pre-BötC as the primary rhythmogenic kernel. The absence of sequential motor patterning in a priori transected preparations suggests that pontine circuits govern respiratory pattern formation. Copyright © 2016 the American Physiological Society.

  12. Mean bond-length variations in crystals for ions bonded to oxygen

    PubMed Central

    2017-01-01

    Variations in mean bond length are examined in oxide and oxysalt crystals for 55 cation configurations bonded to O2−. Stepwise multiple regression analysis shows that mean bond length is correlated with bond-length distortion in 42 ion configurations at the 95% confidence level, with a mean coefficient of determination (〈R2〉) of 0.35. Previously published correlations between mean bond length and the mean coordination number of the bonded anions are found not to be of general applicability to inorganic oxide and oxysalt structures. For two of the 11 ions tested at the 95% confidence level, mean bond lengths predicted using a fixed radius for O2− are significantly more accurate than those predicted using an O2− radius dependent on coordination number, and they are statistically identical otherwise. As a result, the currently accepted ionic radii for O2− in different coordinations are not justified by experimental data. The previously reported correlation between mean bond length and the mean electronegativity of the cations bonded to the oxygen atoms of the coordination polyhedron is shown to be statistically insignificant; similar results are obtained with regard to ionization energy. It is shown that a priori bond lengths calculated for many ion configurations in a single structure type lead to a high correlation between a priori and observed mean bond lengths, but a priori bond lengths calculated for a single ion configuration in many different structure types lead to negligible correlation between a priori and observed mean bond lengths. This indicates that structure type has a major effect on mean bond length, the magnitude of which goes beyond that of the other variables analyzed here.

  13. Conventional Principles in Science: On the foundations and development of the relativized a priori

    NASA Astrophysics Data System (ADS)

    Ivanova, Milena; Farr, Matt

    2015-11-01

    The present volume consists of a collection of papers originally presented at the conference Conventional Principles in Science, held at the University of Bristol, August 2011, which featured contributions on the history and contemporary development of the notion of 'relativized a priori' principles in science, from Henri Poincaré's conventionalism to Michael Friedman's contemporary defence of the relativized a priori. In Science and Hypothesis, Poincaré assessed the problematic epistemic status of Euclidean geometry and Newton's laws of motion, famously arguing that each has the status of 'convention' in that their justification is neither analytic nor empirical in nature. In The Theory of Relativity and A Priori Knowledge, Hans Reichenbach, in light of the general theory of relativity, proposed an updated notion of the Kantian synthetic a priori to account for the dynamic inter-theoretic status of geometry and other non-empirical physical principles. Reichenbach noted that one may reject the 'necessarily true' aspect of the synthetic a priori whilst preserving the feature of being constitutive of the object of knowledge. Such constitutive principles are theory-relative, as illustrated by the privileged role of non-Euclidean geometry in general relativity theory. This idea of relativized a priori principles in spacetime physics has been analysed and developed at great length in the modern literature in the work of Michael Friedman, in particular the roles played by the light postulate and the equivalence principle - in special and general relativity respectively - in defining the central terms of their respective theories and connecting the abstract mathematical formalism of the theories with their empirical content. The papers in this volume guide the reader through the historical development of conventional and constitutive principles in science, from the foundational work of Poincaré, Reichenbach and others, to contemporary issues and applications of the relativized a priori concerning the notion of measurement, physical possibility, and the interpretation of scientific theories.

  14. Semi-Supervised Clustering for High-Dimensional and Sparse Features

    ERIC Educational Resources Information Center

    Yan, Su

    2010-01-01

    Clustering is one of the most common data mining tasks, used frequently for data organization and analysis in various application domains. Traditional machine learning approaches to clustering are fully automated and unsupervised where class labels are unknown a priori. In real application domains, however, some "weak" form of side…

  15. A Factor Analytic Validation of Holland's Vocational Preference Inventory

    ERIC Educational Resources Information Center

    Di Scipio, William J.

    1974-01-01

    A principal components analysis was applied to a 135-item pool of the Holland Vocational Preference Inventory, Sixth Revision. The a priori clinical scales were partially upheld with differences attributed to the characteristics of the sample and sociopolitical time context during which the test was administered. (Author)

  16. Operator for object recognition and scene analysis by estimation of set occupancy with noisy and incomplete data sets

    NASA Astrophysics Data System (ADS)

    Rees, S. J.; Jones, Bryan F.

    1992-11-01

    Once feature extraction has occurred in a processed image, the recognition problem becomes one of defining a set of features which maps sufficiently well onto one of the defined shape/object models to permit a claimed recognition. This process is usually handled by aggregating features until a large enough weighting is obtained to claim membership, or an adequate number of located features are matched to the reference set. A requirement has existed for an operator or measure capable of a more direct assessment of membership/occupancy between feature sets, particularly where the feature sets may be defective representations. Such feature set errors may be caused by noise, by overlapping of objects, and by partial obscuration of features. These problems occur at the point of acquisition: repairing the data would then assume a priori knowledge of the solution. The technique described in this paper offers a set theoretical measure for partial occupancy defined in terms of the set of minimum additions to permit full occupancy and the set of locations of occupancy if such additions are made. As is shown, this technique permits recognition of partial feature sets with quantifiable degrees of uncertainty. A solution to the problems of obscuration and overlapping is therefore available.

  17. Estimators of The Magnitude-Squared Spectrum and Methods for Incorporating SNR Uncertainty

    PubMed Central

    Lu, Yang; Loizou, Philipos C.

    2011-01-01

    Statistical estimators of the magnitude-squared spectrum are derived based on the assumption that the magnitude-squared spectrum of the noisy speech signal can be computed as the sum of the (clean) signal and noise magnitude-squared spectra. Maximum a posteriori (MAP) and minimum mean square error (MMSE) estimators are derived based on a Gaussian statistical model. The gain function of the MAP estimator was found to be identical to the gain function used in the ideal binary mask (IdBM) that is widely used in computational auditory scene analysis (CASA). As such, it was binary and assumed the value of 1 if the local SNR exceeded 0 dB, and the value of 0 otherwise. By modeling the local instantaneous SNR as an F-distributed random variable, soft masking methods were derived that incorporate SNR uncertainty. The soft masking method that weighted the noisy magnitude-squared spectrum by the a priori probability that the local SNR exceeds 0 dB was shown to be identical to the Wiener gain function. Results indicated that the proposed estimators yielded significantly better speech quality than conventional MMSE spectral power estimators, in terms of lower residual noise and lower speech distortion. PMID:21886543
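
    The two gain functions discussed above reduce to short expressions; the sketch below shows the binary (0 dB threshold) mask and the Wiener-type soft gain applied to the noisy magnitude-squared spectrum, given an estimate of the local SNR on a linear scale.

        import numpy as np

        def binary_mask_gain(snr_linear):
            """Binary gain: 1 where the local SNR exceeds 0 dB (linear SNR > 1), else 0."""
            return (np.asarray(snr_linear, dtype=float) > 1.0).astype(float)

        def wiener_gain(snr_linear):
            """Wiener gain, to which the soft-masking weight in the abstract reduces."""
            snr_linear = np.asarray(snr_linear, dtype=float)
            return snr_linear / (1.0 + snr_linear)

        # clean magnitude-squared estimate = gain * noisy magnitude-squared spectrum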

  18. Presentation of growth velocities of rural Haitian children using smoothing spline techniques.

    PubMed

    Waternaux, C; Hebert, J R; Dawson, R; Berggren, G G

    1987-01-01

    The examination of monthly (or quarterly) increments in weight or length is important for assessing the nutritional and health status of children. Growth velocities are widely thought to be more important than actual weight or length measurements per se. However, there are no standards by which clinicians, researchers, or parents can gauge a child's growth. This paper describes a method for computing growth velocities (monthly increments) for physical growth measurements with substantial measurement error and irregular spacing over time. These features are characteristic of data collected in the field where conditions are less than ideal. The technique of smoothing by splines provides a powerful tool to deal with the variability and irregularity of the measurements. The technique consists of approximating the observed data by a smooth curve as a clinician might have drawn on the child's growth chart. Spline functions are particularly appropriate to describe bio-physical processes such as growth, for which no model can be postulated a priori. This paper describes how the technique was used for the analysis of a large data base collected on pre-school aged children in rural Haiti. The sex-specific length and weight velocities derived from the spline-smoothed data are presented as reference data for researchers and others interested in longitudinal growth of children in the Third World.
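
    A minimal sketch of spline-smoothed growth velocities in the spirit described above, using a generic smoothing spline and its derivative; the smoothing-parameter choice of the original analysis is not reproduced here.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        def growth_velocity(age_months, weight_kg, smoothing=None):
            """Fit a smoothing spline to noisy, irregularly spaced weight measurements
            (age_months must be strictly increasing) and return the velocity curve in
            kg/month evaluated on a monthly grid."""
            age_months = np.asarray(age_months, dtype=float)
            weight_kg = np.asarray(weight_kg, dtype=float)
            spline = UnivariateSpline(age_months, weight_kg, s=smoothing)
            grid = np.arange(np.ceil(age_months.min()), np.floor(age_months.max()) + 1.0)
            return grid, spline.derivative()(grid)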

  19. Highly correlated configuration interaction calculations on water with large orbital bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almora-Díaz, César X., E-mail: xalmora@fisica.unam.mx

    2014-05-14

    A priori selected configuration interaction (SCI) with truncation energy error [C. F. Bunge, J. Chem. Phys. 125, 014107 (2006)] and CI by parts [C. F. Bunge and R. Carbó-Dorca, J. Chem. Phys. 125, 014108 (2006)] are used to approximate the total nonrelativistic electronic ground state energy of water at fixed experimental geometry with CI up to sextuple excitations. Correlation-consistent polarized core-valence basis sets (cc-pCVnZ) up to sextuple zeta and augmented correlation-consistent polarized core-valence basis sets (aug-cc-pCVnZ) up to quintuple zeta quality are employed. Truncation energy errors range between less than 1 μhartree and 100 μhartree for the largest orbital set. Coupled cluster CCSD and CCSD(T) calculations are also obtained for comparison. Our best upper bound, −76.4343 hartree, obtained by SCI with up to sextuple excitations with a cc-pCV6Z basis, recovers more than 98.8% of the correlation energy of the system, and it is only about 3 kcal/mol above the “experimental” value. Although the present energy upper bounds are far below all previous ones, comparatively large dispersion errors in the determination of the energies extrapolated to the complete basis set do not allow a reliable estimate of the full CI energy with an accuracy better than 0.6 mhartree (0.4 kcal/mol).

  20. Beyond crisis resource management: new frontiers in human factors training for acute care medicine.

    PubMed

    Petrosoniak, Andrew; Hicks, Christopher M

    2013-12-01

    Error is ubiquitous in medicine, particularly during critical events and resuscitation. A significant proportion of adverse events can be attributed to inadequate team-based skills such as communication, leadership, situation awareness and resource utilization. Aviation-based crisis resource management (CRM) training using high-fidelity simulation has been proposed as a strategy to improve team behaviours. This review will address key considerations in CRM training and outline recommendations for the future of human factors education in healthcare. A critical examination of the current literature yields several important considerations to guide the development and implementation of effective simulation-based CRM training. These include defining a priori domain-specific objectives, creating an immersive environment that encourages deliberate practice and transfer-appropriate processing, and the importance of effective team debriefing. Building on research from high-risk industry, we suggest that traditional CRM training may be augmented with new training techniques that promote the development of shared mental models for team and task processes, address the effect of acute stress on team performance, and integrate strategies to improve clinical reasoning and the detection of cognitive errors. The evolution of CRM training involves a 'Triple Threat' approach that integrates mental model theory for team and task processes, training for stressful situations and metacognition and error theory towards a more comprehensive training paradigm, with roots in high-risk industry and cognitive psychology. Further research is required to evaluate the impact of this approach on patient-oriented outcomes.

  1. Multivariate matching pursuit in optimal Gabor dictionaries: theory and software with interface for EEG/MEG via Svarog

    PubMed Central

    2013-01-01

    Background: The matching pursuit algorithm (MP), especially with recent multivariate extensions, offers unique advantages in the analysis of EEG and MEG. Methods: We propose a novel construction of an optimal Gabor dictionary, based upon the metrics introduced in this paper. We implement this construction in a freely available software for MP decomposition of multivariate time series, with a user friendly interface via the Svarog package (Signal Viewer, Analyzer and Recorder On GPL, http://braintech.pl/svarog), and provide a hands-on introduction to its application to EEG. Finally, we describe numerical and mathematical optimizations used in this implementation. Results: Optimal Gabor dictionaries, based on the metric introduced in this paper, for the first time allowed for a priori assessment of the maximum one-step error of the MP algorithm. Variants of multivariate MP, implemented in the accompanying software, are organized according to the mathematical properties of the algorithms, relevant in the light of EEG/MEG analysis. Some of these variants have been successfully applied to both multichannel and multitrial EEG and MEG in previous studies, improving preprocessing for EEG/MEG inverse solutions and parameterization of evoked potentials in single trials; we mention also ongoing work and possible novel applications. Conclusions: Mathematical results presented in this paper improve our understanding of the basics of the MP algorithm. A simple introduction to its properties and advantages, together with the accompanying stable and user-friendly Open Source software package, paves the way for a widespread and reproducible analysis of multivariate EEG and MEG time series and novel applications, while retaining a high degree of compatibility with the traditional, visual analysis of EEG. PMID:24059247
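
    For readers new to MP, the textbook single-channel iteration is short; the sketch below assumes a generic matrix of unit-norm atoms and does not implement the optimal Gabor dictionary or the multivariate variants described in the paper.

        import numpy as np

        def matching_pursuit(signal, dictionary, n_atoms):
            """Greedy MP loop: repeatedly pick the unit-norm atom (column of
            `dictionary`) with the largest inner product with the residual and
            subtract its projection.  Returns (index, amplitude) pairs and the
            final residual."""
            residual = np.asarray(signal, dtype=float).copy()
            atoms = []
            for _ in range(n_atoms):
                products = dictionary.T @ residual
                k = int(np.argmax(np.abs(products)))
                amp = float(products[k])
                residual = residual - amp * dictionary[:, k]
                atoms.append((k, amp))
            return atoms, residual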

  2. Method For Determining And Modifying Protein/Peptide Solubility

    DOEpatents

    Waldo, Geoffrey S.

    2005-03-15

    A solubility reporter for measuring a protein's solubility in vivo or in vitro is described. The reporter, which can be used in a single living cell, gives a specific signal suitable for determining whether the cell bears a soluble version of the protein of interest. A pool of random mutants of an arbitrary protein, generated using error-prone in vitro recombination, may also be screened for more soluble versions using the reporter, and these versions may be recombined to yield variants having further-enhanced solubility. The method of the present invention includes "irrational" (random mutagenesis) methods, which do not require a priori knowledge of the three-dimensional structure of the protein of interest. Multiple sequences of mutation/genetic recombination and selection for improved solubility are demonstrated to yield versions of the protein which display enhanced solubility.

  3. A Simultaneous Equation Demand Model for Block Rates

    NASA Astrophysics Data System (ADS)

    Agthe, Donald E.; Billings, R. Bruce; Dobra, John L.; Raffiee, Kambiz

    1986-01-01

    This paper examines the problem of simultaneous-equations bias in estimation of the water demand function under an increasing block rate structure. The Hausman specification test is used to detect the presence of simultaneous-equations bias arising from correlation of the price measures with the regression error term in the results of a previously published study of water demand in Tucson, Arizona. An alternative simultaneous equation model is proposed for estimating the elasticity of demand in the presence of block rate pricing structures and availability of service charges. This model is used to reestimate the price and rate premium elasticities of demand in Tucson, Arizona for both the usual long-run static model and for a simple short-run demand model. The results from these simultaneous equation models are consistent with a priori expectations and are unbiased.
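
    The Hausman specification test referred to above compares a consistent estimator with one that is efficient under the null of no simultaneity; a generic sketch of the statistic, not the original study's computation, is:

        import numpy as np
        from scipy import stats

        def hausman_test(b_consistent, V_consistent, b_efficient, V_efficient):
            """Hausman statistic (b1-b0)' [V1-V0]^+ (b1-b0), chi-square distributed
            under the null of no simultaneous-equations bias; returns the statistic
            and its p-value."""
            db = np.asarray(b_consistent, dtype=float) - np.asarray(b_efficient, dtype=float)
            dV = np.asarray(V_consistent, dtype=float) - np.asarray(V_efficient, dtype=float)
            stat = float(db @ np.linalg.pinv(dV) @ db)
            dof = int(np.linalg.matrix_rank(dV))
            return stat, 1.0 - stats.chi2.cdf(stat, dof)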

  4. Novel multireceiver communication systems configurations based on optimal estimation theory

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra

    1992-01-01

    A novel multireceiver configuration for carrier arraying and/or signal arraying is presented. The proposed configuration is obtained by formulating the carrier and/or signal arraying problem as an optimal estimation problem, and it consists of two stages. The first stage optimally estimates the various phase processes received at the different receivers with coupled phase-locked loops, wherein the individual loops acquire and track their respective receivers' phase processes but are aided by each other in an optimal manner via LF error signals. The proposed configuration results in the minimization of the effective radio loss at the combiner output, and thus maximization of the energy per bit to noise power spectral density ratio is achieved. A novel adaptive algorithm for estimating the signal model parameters when these are not known a priori is also presented.

  5. Information Content of Bistatic Lidar Observations of Aerosols from Space

    NASA Technical Reports Server (NTRS)

    Alexandrov, Mikhail D.; Mishchenko, Michael I.

    2017-01-01

    We present, for the first time, a quantitative retrieval error-propagation study for a bistatic high spectral resolution lidar (HSRL) system intended for detailed quasi-global monitoring of aerosol properties from space. Our results demonstrate that supplementing a conventional monostatic HSRL with an additional receiver flown in formation at a scattering angle close to 165 degrees dramatically increases the information content of the measurements and allows for a sufficiently accurate characterization of tropospheric aerosols. We conclude that a bistatic HSRL system would far exceed the capabilities of currently flown or planned orbital instruments in monitoring global aerosol effects on the environment and on the Earth's climate. We also demonstrate how the commonly used a priori 'regularization' methodology can artificially reduce the propagated uncertainties and can thereby be misleading as to the real retrieval capabilities of a measurement system.

  6. A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1976-01-01

    The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, the accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.

  7. Constraints on Pacific plate kinematics and dynamics with global positioning system measurements

    NASA Technical Reports Server (NTRS)

    Dixon, T. H.; Golombek, M. P.; Thornton, C. L.

    1985-01-01

    A measurement program designed to investigate kinematic and dynamic aspects of plate tectonics in the Pacific region by means of satellite observations is proposed. Accuracy studies are summarized showing that for short baselines (less than 100 km), the measuring accuracy of global positioning system (GPS) receivers can be in the centimeter range. For longer baselines, uncertainty in the orbital ephemerides of the GPS satellites could be a major source of error. Simultaneous observations at widely (about 300 km) separated fiducial stations over the Pacific region should permit an accuracy in the centimeter range for baselines of up to several thousand kilometers. The optimum performance level is based on the assumption that fiducial baselines are known a priori to the centimeter range. An example fiducial network for a GPS study of the South Pacific region is described.

  8. Dual-modality imaging

    NASA Astrophysics Data System (ADS)

    Hasegawa, Bruce; Tang, H. Roger; Da Silva, Angela J.; Wong, Kenneth H.; Iwata, Koji; Wu, Max C.

    2001-09-01

    In comparison to conventional medical imaging techniques, dual-modality imaging offers the advantage of correlating anatomical information from X-ray computed tomography (CT) with functional measurements from single-photon emission computed tomography (SPECT) or with positron emission tomography (PET). The combined X-ray/radionuclide images from dual-modality imaging can help the clinician to differentiate disease from normal uptake of radiopharmaceuticals, and to improve diagnosis and staging of disease. In addition, phantom and animal studies have demonstrated that a priori structural information from CT can be used to improve quantification of tissue uptake and organ function by correcting the radionuclide data for errors due to photon attenuation, partial volume effects, scatter radiation, and other physical effects. Dual-modality imaging therefore is emerging as a method of improving the visual quality and the quantitative accuracy of radionuclide imaging for diagnosis of patients with cancer and heart disease.

  9. DAISY: a new software tool to test global identifiability of biological and physiological systems.

    PubMed

    Bellu, Giuseppina; Saccomani, Maria Pia; Audoly, Stefania; D'Angiò, Leontina

    2007-10-01

    A priori global identifiability is a structural property of biological and physiological models. It is considered a prerequisite for well-posed estimation, since it concerns the possibility of recovering uniquely the unknown model parameters from measured input-output data, under ideal conditions (noise-free observations and error-free model structure). Of course, determining if the parameters can be uniquely recovered from observed data is essential before investing resources, time and effort in performing actual biomedical experiments. Many interesting biological models are nonlinear, but identifiability analysis for nonlinear systems turns out to be a difficult mathematical problem. Different methods have been proposed in the literature to test identifiability of nonlinear models but, to the best of our knowledge, so far no software tools have been proposed for automatically checking identifiability of nonlinear models. In this paper, we describe a software tool implementing a differential algebra algorithm to perform parameter identifiability analysis for (linear and) nonlinear dynamic models described by polynomial or rational equations. Our goal is to provide the biological investigator a completely automatized software, requiring minimum prior knowledge of mathematical modelling and no in-depth understanding of the mathematical tools. The DAISY (Differential Algebra for Identifiability of SYstems) software will potentially be useful in biological modelling studies, especially in physiology and clinical medicine, where research experiments are particularly expensive and/or difficult to perform. Practical examples of use of the software tool DAISY are presented. DAISY is available at the web site http://www.dei.unipd.it/~pia/.

  10. Global Monitoring of Clouds and Aerosols Using a Network of Micro-Pulse Lidar Systems

    NASA Technical Reports Server (NTRS)

    Welton, Ellsworth J.; Campbell, James R.; Spinhirne, James D.; Scott, V. Stanley

    2000-01-01

    Long-term global radiation programs, such as AERONET and BSRN, have shown success in monitoring column averaged cloud and aerosol optical properties. Little attention has been focused on global measurements of vertically resolved optical properties. Lidar systems are the preferred instrument for such measurements. However, global usage of lidar systems has not been achieved because of limits imposed by older systems that were large, expensive, and logistically difficult to use in the field. Small, eye-safe, and autonomous lidar systems are now available and overcome problems associated with older systems. The first such lidar to be developed is the Micro-pulse lidar System (MPL). The MPL has proven to be useful in the field because it can be automated, runs continuously (day and night), is eye-safe, can easily be transported and set up, and has a small field-of-view which removes multiple scattering concerns. We have developed successful protocols to operate and calibrate MPL systems. We have also developed a data analysis algorithm that produces data products such as cloud and aerosol layer heights, optical depths, extinction profiles, and the extinction-backscatter ratio. The algorithm minimizes the use of a priori assumptions and also produces error bars for all data products. Here we present an overview of our MPL protocols and data analysis techniques. We also discuss the ongoing construction of a global MPL network in conjunction with the AERONET program. Finally, we present some early results from the MPL network.

  11. An accurate evaluation of the performance of asynchronous DS-CDMA systems with zero-correlation-zone coding in Rayleigh fading

    NASA Astrophysics Data System (ADS)

    Walker, Ernest; Chen, Xinjia; Cooper, Reginald L.

    2010-04-01

    An arbitrarily accurate approach is used to determine the bit-error rate (BER) performance of generalized asynchronous DS-CDMA systems in Gaussian noise with Rayleigh fading. In this paper and the sequel, new theoretical work is contributed which substantially enhances existing performance analysis formulations. Major contributions include: a substantial reduction in computational complexity, including a priori BER accuracy bounding; and an analytical approach that facilitates performance evaluation for systems with arbitrary spectral spreading distributions and non-uniform transmission delay distributions. Using prior results, augmented by these enhancements, a generalized DS-CDMA system model is constructed and used to evaluate the BER performance in a variety of scenarios. In this paper, the generalized system model was used to evaluate the performance of both Walsh-Hadamard (WH) and Walsh-Hadamard-seeded zero-correlation-zone (WH-ZCZ) coding. The selection of these codes was informed by the observation that WH codes contain N spectral spreading values (0 to N - 1), one for each code sequence, while WH-ZCZ codes contain only two spectral spreading values (N/2 - 1, N/2), where N is the sequence length in chips. Since these codes span the spectral spreading range for DS-CDMA coding, by invoking an induction argument, the generalization of the system model is sufficiently supported. The results in this paper and the sequel support the claim that an arbitrarily accurate performance analysis for DS-CDMA systems can be evaluated over the full range of binary coding, with minimal computational complexity.

  12. Geological hazard zonation in a marble exploitation area (Apuan Alps, Italy)

    NASA Astrophysics Data System (ADS)

    Francioni, M.; Salvini, R.; Riccucci, S.

    2011-12-01

    The present paper describes the hazard mapping of an exploitation area sited in the Apuan Alps marble district (Italy) carried out by the integration of various survey and analysis methodologies. The research, supported by the Massa and Carrara Local Sanitary Agency responsible for workplace health and safety activities, aimed to reduce the high degree of rock fall hazard caused by the presence of potentially unstable blocks located on slopes overhanging the marble quarries. The study of rocky fronts is based on knowledge of both the structural setting and the physical-mechanical properties of the intact material and its discontinuities. In this work the main difficulty in obtaining this information was the inaccessibility of the slope overhanging the area (up to 500 meters high). For this reason, the structural and geological-engineering surveys were integrated by outcomes from digital photogrammetry carried out through terrestrial stereoscopic photos acquired from an aerostatic balloon and a helicopter. In this way, it was possible to derive the geometrical characteristics of joints (such as discontinuity dip, dip direction, spacing and persistence), block volumes and slope morphology also in inaccessible areas. This information, combined with data coming from the geological-engineering survey, was used to perform the stability analysis of the slope. Subsequently, using the topographic map at the scale of 1:2,000, the Digital Terrain Model (DTM) of the slopes and several topographic profiles along it were produced. Assuming that there is a good correspondence between travelling paths and the maximum down slope angle, probable trajectories of rock fall along the slope were calculated on the DTM by means of a GIS procedure which utilizes the ArcHydro module of EsriTM ArcMap software. When performing such a 2D numerical modelling of rock falls, the lateral dispersion of trajectories is hampered by the "a priori" choice of the travelling path. Such a choice is largely subjective and can lead to errors. Thus, rock fall hazard zonation needs spatially distributed analyses, including a reliable modelling of lateral dispersion. In this research Conefall software, a freeware QuanterraTM code that estimates the potential run out areas by means of a "so-called" cone method, was used to compute the spatial distribution of rock fall frequency, velocities and kinetic energies. In this way, a modelling approach based on local morphologies was employed to assess the accuracy of the 2D analysis by profiles created "a priori" along the maximum down slope angle. Final results of the slope stability and run out analyses allowed the creation of a rock fall hazard map and the recommendation of the most suitable protection works to mitigate the hazard at the most risky sites.

  13. Optimal phase estimation with arbitrary a priori knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demkowicz-Dobrzanski, Rafal

    2011-06-15

    The optimal phase-estimation strategy is derived when partial a priori knowledge on the estimated phase is available. The solution is found with the help of one of the most famous results from entanglement theory: the positive partial transpose criterion. The structure of the optimal measurements, estimators, and the optimal probe states is analyzed. This Rapid Communication provides a unified framework bridging the gap in the literature on the subject, which until now dealt almost exclusively with two extreme cases: almost perfect knowledge (local approach based on Fisher information) and no a priori knowledge (global approach based on covariant measurements). Special attention is paid to a natural a priori probability distribution arising from a diffusion process.

  14. Sources of difficulty in assessment: example of PISA science items

    NASA Astrophysics Data System (ADS)

    Le Hebel, Florence; Montpied, Pascale; Tiberghien, Andrée; Fontanieu, Valérie

    2017-03-01

    The understanding of what makes a question difficult is a crucial concern in assessment. To study the difficulty of test questions, we focus on the case of PISA, which assesses to what degree 15-year-old students have acquired knowledge and skills essential for full participation in society. Our research question is to identify PISA science item characteristics that could influence the item's proficiency level. It is based on an a-priori item analysis and a statistical analysis. Results show that, of the different characteristics of PISA science items determined in our a-priori analysis, only cognitive complexity and format have explanatory power for an item's proficiency level. The proficiency level cannot be explained by the dependence/independence of the information provided in the unit and/or item introduction, nor by the competence. We conclude that in PISA it appears possible to anticipate a high proficiency level, that is, students' low scores, for items displaying a high cognitive complexity. In the case of an item with a middle or low cognitive complexity level, the cognitive complexity level is not sufficient to predict item difficulty; other characteristics play a crucial role in item difficulty. We discuss anticipating the difficulties in assessment in a broader perspective.

  15. Deformation Time-Series of the Lost-Hills Oil Field using a Multi-Baseline Interferometric SAR Inversion Algorithm with Finite Difference Smoothing Constraints

    NASA Astrophysics Data System (ADS)

    Werner, C. L.; Wegmüller, U.; Strozzi, T.

    2012-12-01

    The Lost-Hills oil field located in Kern County, California ranks sixth in total remaining reserves in California. Hundreds of densely packed wells characterize the field, with one well every 5000 to 20000 square meters. Subsidence due to oil extraction can be greater than 10 cm/year and is highly variable both in space and time. The RADARSAT-1 SAR satellite collected data over this area with a 24-day repeat during a 2-year period spanning 2002-2004. Relatively high interferometric correlation makes this an excellent region for the development and testing of deformation time-series inversion algorithms. Errors in deformation time series derived from a stack of differential interferograms are primarily due to errors in the digital terrain model, interferometric baselines, variability in tropospheric delay, thermal noise and phase unwrapping errors. Particularly challenging is the separation of non-linear deformation from variations in troposphere delay and phase unwrapping errors. In our algorithm a subset of interferometric pairs is selected from a set of N radar acquisitions based on criteria of connectivity, time interval, and perpendicular baseline. When possible, the subset consists of temporally connected interferograms; otherwise the different groups of interferograms are selected to overlap in time. The maximum time interval is constrained to be less than a threshold value to minimize phase gradients due to deformation as well as minimize temporal decorrelation. Large baselines are also avoided to minimize the consequence of DEM errors on the interferometric phase. Based on an extension of the SVD-based inversion described by Lee et al. (USGS Professional Paper 1769), Schmidt and Burgmann (JGR, 2003), and the earlier work of Berardino (TGRS, 2002), our algorithm combines estimation of the DEM height error with a set of finite difference smoothing constraints. A set of linear equations is formulated for each spatial point as a function of the deformation velocities during the time intervals spanned by the interferograms and a DEM height correction. The sensitivity of the phase to the height correction depends on the length of the perpendicular baseline of each interferogram. This design matrix is augmented with a set of additional weighted constraints on the acceleration that penalize rapid velocity variations. The weighting factor γ can be varied from 0 (no smoothing) to large values (> 10) that yield an essentially linear time-series solution. The factor can be tuned to take into account a priori knowledge of the deformation non-linearity. The difference between the time-series solution and the unconstrained time-series can be interpreted as due to a combination of tropospheric path delay and baseline error. Spatial smoothing of the residual phase leads to an improved atmospheric model that can be fed back into the model and iterated. Our analysis shows non-linear deformation related to changes in the oil extraction as well as local height corrections improving on the low-resolution 3 arc-sec SRTM DEM.
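
    The structure of the inversion described above can be illustrated with a minimal Python/NumPy sketch: a design matrix mapping interval velocities and a DEM height correction to interferometric line-of-sight observations, augmented with finite-difference constraints weighted by a factor gamma, and solved by least squares. All acquisition times, baselines, noise levels and the value of gamma below are hypothetical placeholders, not values from the Lost-Hills processing.

      import numpy as np

      # Acquisition times (years) and interferometric pairs selected from them (all values
      # hypothetical; a 24-day repeat is about 0.066 yr).
      t = np.array([0.0, 0.066, 0.132, 0.197, 0.263, 0.329])            # 6 acquisitions
      pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 2), (2, 4)]   # (early, late)
      bperp = np.array([120., -80., 60., 200., -150., 40., 90.])         # perpendicular baselines (m)
      rng_dist, inc = 850e3, np.deg2rad(34)                              # slant range (m), incidence

      nv = len(t) - 1                       # unknown interval velocities (m/yr, line of sight)
      A = np.zeros((len(pairs), nv + 1))    # last column: DEM height correction (m)
      for k, (i, j) in enumerate(pairs):
          A[k, i:j] = np.diff(t)[i:j]       # LOS change = sum of v_m * dt_m over spanned intervals
          A[k, -1] = bperp[k] / (rng_dist * np.sin(inc))   # topographic term, metres per metre

      # Finite-difference smoothing constraints penalizing velocity changes (weight gamma;
      # gamma = 0 means no smoothing, large values force a nearly linear solution)
      gamma = 0.05
      R = np.zeros((nv - 1, nv + 1))
      for m in range(nv - 1):
          R[m, m], R[m, m + 1] = -gamma, gamma

      # Synthetic "observations" in metres of LOS displacement, with a 20 m DEM error and noise
      rng = np.random.default_rng(0)
      x_true = np.r_[[-0.10, -0.12, -0.09, -0.11, -0.10], 20.0]
      d = A @ x_true + 0.0005 * rng.standard_normal(len(pairs))

      x_hat, *_ = np.linalg.lstsq(np.vstack([A, R]), np.r_[d, np.zeros(nv - 1)], rcond=None)
      print("interval velocities (m/yr):", np.round(x_hat[:-1], 3))
      print("DEM height correction (m) :", round(x_hat[-1], 1))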

  16. Collecting Kinematic Data on a Ski Track with Optoelectronic Stereophotogrammetry: A Methodological Study Assessing the Feasibility of Bringing the Biomechanics Lab to the Field.

    PubMed

    Spörri, Jörg; Schiefermüller, Christian; Müller, Erich

    2016-01-01

    In the laboratory, optoelectronic stereophotogrammetry is one of the most commonly used motion capture systems; particularly, when position- or orientation-related analyses of human movements are intended. However, for many applied research questions, field experiments are indispensable, and it is not a priori clear whether optoelectronic stereophotogrammetric systems can be expected to perform similarly to in-lab experiments. This study aimed to assess the instrumental errors of kinematic data collected on a ski track using optoelectronic stereophotogrammetry, and to investigate the magnitudes of additional skiing-specific errors and soft tissue/suit artifacts. During a field experiment, the kinematic data of different static and dynamic tasks were captured by the use of 24 infrared-cameras. The distances between three passive markers attached to a rigid bar were stereophotogrammetrically reconstructed and, subsequently, were compared to the manufacturer-specified exact values. While at rest or skiing at low speed, the optoelectronic stereophotogrammetric system's accuracy and precision for determining inter-marker distances were found to be comparable to those known for in-lab experiments (< 1 mm). However, when measuring a skier's kinematics under "typical" skiing conditions (i.e., high speeds, inclined/angulated postures and moderate snow spraying), additional errors were found to occur for distances between equipment-fixed markers (total measurement errors: 2.3 ± 2.2 mm). Moreover, for distances between skin-fixed markers, such as the anterior hip markers, additional artifacts were observed (total measurement errors: 8.3 ± 7.1 mm). In summary, these values can be considered sufficient for the detection of meaningful position- or orientation-related differences in alpine skiing. However, it must be emphasized that the use of optoelectronic stereophotogrammetry on a ski track is seriously constrained by limited practical usability, small-sized capture volumes and the occurrence of extensive snow spraying (which results in marker obscuration). The latter limitation possibly might be overcome by the use of more sophisticated cluster-based marker sets.

  17. Characteristics of subgrid-resolved-scale dynamics in anisotropic turbulence, with application to rough-wall boundary layers

    NASA Astrophysics Data System (ADS)

    Juneja, Anurag; Brasseur, James G.

    1999-10-01

    Large-eddy simulation (LES) of the atmospheric boundary layer (ABL) using eddy viscosity subgrid-scale (SGS) models is known to poorly predict mean shear at the first few grid cells near the ground, a rough surface with no viscous sublayer. It has recently been shown that convective motions carry this localized error vertically to infect the entire ABL, and that the error is more a consequence of the SGS model than grid resolution in the near-surface inertial layer. Our goal was to determine what first-order errors in the predicted SGS terms lead to spurious expectation values, and what basic dynamics in the filtered equation for resolved scale (RS) velocity must be captured by SGS models to correct the deficiencies. Our analysis is of general relevance to LES of rough-wall high Reynolds number boundary layers, where the essential difficulty in the closure is the importance of the SGS acceleration terms, a consequence of necessary under-resolution of relevant energy-containing motions at the first few grid levels, leading to potentially strong couplings between the anisotropies in resolved velocity and predicted SGS dynamics. We analyze these two issues (under-resolution and anisotropy) in the absence of a wall using two direct numerical simulation datasets of homogeneous turbulence with very different anisotropic structure characteristic of the near-surface ABL: shear- and buoyancy-generated turbulence. We uncover three important issues which should be addressed in the design of SGS closures near rough walls and we provide a priori tests for the SGS model. First, we identify a strong spurious coupling between the anisotropic structure of the resolved velocity field and predicted SGS dynamics which can create a feedback loop to incorrectly enhance certain components of the predicted velocity field. Second, we find that eddy viscosity and "similarity" SGS models do not contain enough degrees of freedom to capture, at a sufficient level of accuracy, both RS-SGS energy flux and SGS-RS dynamics. Third, to correctly capture pressure transport near a wall, closures must be made more flexible to accommodate proper partitioning between SGS stress divergence and SGS pressure gradient.

  18. Use of a priori statistics to minimize acquisition time for RFI immune spread spectrum systems

    NASA Technical Reports Server (NTRS)

    Holmes, J. K.; Woo, K. T.

    1978-01-01

    The optimum acquisition sweep strategy was determined for a PN code despreader when the a priori probability density function was not uniform. A pseudo-noise spread spectrum system was considered which could be utilized in the DSN to combat radio frequency interference. In a sample case, when the a priori probability density function was Gaussian, the acquisition time was reduced by about 41% compared to a uniform sweep approach.
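
    The benefit of exploiting a non-uniform a priori density can be illustrated with a toy serial-search calculation in Python/NumPy: if the code-phase cells are examined in order of decreasing prior probability instead of in a fixed left-to-right sweep, the expected number of dwells drops. This is only a simplified single-dwell illustration of the idea (the cell count, prior width and absence of false alarms are assumptions), not the DSN sweep strategy or the 41% figure of the record above.

      import numpy as np

      def expected_dwells(prior, order):
          """Expected number of cells examined before hitting the true code phase,
          when cells are searched in the given order and the true phase is distributed
          according to `prior` (one dwell per cell, detection assumed certain)."""
          prior = np.asarray(prior, dtype=float)
          return float(np.sum(prior[np.asarray(order)] * np.arange(1, len(prior) + 1)))

      # Hypothetical 1023-cell uncertainty with a Gaussian-shaped prior centred mid-code.
      n = 1023
      cells = np.arange(n)
      prior = np.exp(-0.5 * ((cells - n // 2) / 60.0) ** 2)
      prior /= prior.sum()

      uniform_sweep = expected_dwells(prior, cells)                 # fixed left-to-right sweep
      map_sweep = expected_dwells(prior, np.argsort(prior)[::-1])   # most probable cells first
      print(f"uniform sweep: {uniform_sweep:.0f} dwells, "
            f"prior-ordered sweep: {map_sweep:.0f} dwells "
            f"({100 * (1 - map_sweep / uniform_sweep):.0f}% fewer)")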

  19. Fuzzy logic of Aristotelian forms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perlovsky, L.I.

    1996-12-31

    Model-based approaches to pattern recognition and machine vision have been proposed to overcome the exorbitant training requirements of earlier computational paradigms. However, uncertainties in data were found to lead to a combinatorial explosion of the computational complexity. This issue is related here to the roles of a priori knowledge vs. adaptive learning. What is the a-priori knowledge representation that supports learning? I introduce Modeling Field Theory (MFT), a model-based neural network whose adaptive learning is based on a priori models. These models combine deterministic, fuzzy, and statistical aspects to account for a priori knowledge, its fuzzy nature, and data uncertainties. In the process of learning, a priori fuzzy concepts converge to crisp or probabilistic concepts. The MFT is a convergent dynamical system of only linear computational complexity. Fuzzy logic turns out to be essential for reducing the combinatorial complexity to a linear one. I will discuss the relationship of the new computational paradigm to two theories due to Aristotle: the theory of Forms and logic. While the theory of Forms argued that the mind cannot be based on ready-made a priori concepts, Aristotelian logic operated with just such concepts. I discuss an interpretation of MFT suggesting that its fuzzy logic, combining apriority and adaptivity, implements the Aristotelian theory of Forms (theory of mind). Thus, 2300 years after Aristotle, a logic is developed suitable for his theory of mind.

  20. Measuring Greenland Ice Mass Variation With Gravity Recovery and Climate Experiment Gravity and GPS

    NASA Technical Reports Server (NTRS)

    Wu, Xiao-Ping

    1999-01-01

    The response of the Greenland ice sheet to climate change could significantly alter sea level. The ice sheet was much thicker at the last glacial maximum. To gain insight into the global change process and the future trend, it is important to evaluate the ice mass variation as a function of time and space. The Gravity Recovery and Climate Experiment (GRACE) mission to fly in 2001 for 5 years will measure gravity changes associated with the current ice variation and the solid earth's response to past variations. Our objective is to assess the separability of different change sources, accuracy and resolution in the mass variation determination by the new gravity data and possible Global Positioning System (GPS) bedrock uplift measurements. We use a reference parameter state that follows a dynamic ice model for current mass variation and a variant of the Tushingham and Peltier ICE-3G deglaciation model for historical deglaciation. The current linear trend is also assumed to have started 5 kyr ago. The Earth model is fixed as preliminary reference Earth model (PREM) with four viscoelastic layers. A discrete Bayesian inverse algorithm is developed employing an isotropic Gaussian a priori covariance function over the ice sheet and time. We use data noise predicted by the University of Texas and JPL for major GRACE error sources. A 2 mm/yr uplift uncertainty is assumed for GPS occupation time of 5 years. We then carry out covariance analysis and inverse simulation using GRACE geoid coefficients up to degree 180 in conjunction with a number of GPS uplift rates. Present-day ice mass variation and historical deglaciation are solved simultaneously over 146 grids of roughly 110 km x 110 km and with 6 time increments of 3 kyr each, along with a common starting epoch of the current trend. For present-day ice thickness change, the covariance analysis using GRACE geoid data alone results in a root mean square (RMS) posterior root variance of 2.6 cm/yr, with fairly large a priori uncertainties in the parameters and a Gaussian correlation length of 350 km. Simulated inverse can successfully recover most features in the reference present-day change. The RMS difference between them over the grids is 2.8 cm/yr. The RMS difference becomes 1.1 cm/yr when both are averaged with a half Gaussian wavelength of 150 km. With a fixed Earth model, GRACE alone can separate the geoid signals due to past and current load fairly well. Shown are the reference geoid signatures of direct and elastic effects of the current trend, the viscoelastic effect of the same trend starting from 5 kyr ago, the Post Glacial Rebound (PGR), and the predicted GRACE geoid error. The difference between the reference and inverse modeled total viscoelastic signatures is also shown. Although past and current ice mass variations are allowed the same spatial scale, their geoid signals have different spatial patterns. GPS data can contribute to the ice mass determination as well. Additional information is contained in the original.
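
    The discrete Bayesian inverse step described above can be sketched in a few lines of Python/NumPy for a one-dimensional stand-in problem: a linear forward operator, an isotropic Gaussian a priori covariance with a 350 km correlation length, and a Gauss-Markov estimate written in gain-matrix form so the prior covariance is never inverted. The operator, grid, noise level and synthetic truth below are hypothetical and much smaller than the 146-grid GRACE/GPS problem.

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical setup: m "geoid-like" observations linearly related to n grid unknowns.
      n, m = 40, 60
      grid = np.linspace(0.0, 4000.0, n)                 # 1-D grid coordinates (km)
      A = rng.standard_normal((m, n)) / np.sqrt(n)       # stand-in linear forward operator

      # Isotropic Gaussian a priori covariance over the grid: sigma^2 * exp(-d^2 / L^2)
      sigma, L = 0.05, 350.0                             # prior std (m/yr), correlation length (km)
      dist = grid[:, None] - grid[None, :]
      P0 = sigma ** 2 * np.exp(-(dist / L) ** 2)

      R = (0.01 ** 2) * np.eye(m)                        # data-noise covariance

      # Smooth synthetic truth and noisy data (values are illustrative only)
      x_true = 0.04 * np.sin(2 * np.pi * grid / 1500.0)
      y = A @ x_true + 0.01 * rng.standard_normal(m)

      # Bayesian (Gauss-Markov) estimate in gain form, with the posterior covariance
      K = P0 @ A.T @ np.linalg.inv(A @ P0 @ A.T + R)
      x_hat = K @ y
      P_post = P0 - K @ A @ P0

      print(f"RMS prior std    : {np.sqrt(np.diag(P0).mean()):.3f}")
      print(f"RMS posterior std: {np.sqrt(np.diag(P_post).mean()):.3f}")
      print(f"RMS recovery err : {np.sqrt(np.mean((x_hat - x_true) ** 2)):.3f}")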

  1. The Space-Wise Global Gravity Model from GOCE Nominal Mission Data

    NASA Astrophysics Data System (ADS)

    Gatti, A.; Migliaccio, F.; Reguzzoni, M.; Sampietro, D.; Sanso, F.

    2011-12-01

    In the framework of the GOCE data analysis, the space-wise approach implements a multi-step collocation solution for the estimation of a global geopotential model in terms of spherical harmonic coefficients and their error covariance matrix. The main idea is to use the collocation technique to exploit the spatial correlation of the gravity field in the GOCE data reduction. In particular the method consists of an along-track Wiener filter, a collocation gridding at satellite altitude and a spherical harmonic analysis by integration. All these steps are iterated, also to account for the rotation between local orbital and gradiometer reference frame. Error covariances are computed by Montecarlo simulations. The first release of the space-wise approach was presented at the ESA Living Planet Symposium in July 2010. This model was based on only two months of GOCE data and partially contained a priori information coming from other existing gravity models, especially at low degrees and low orders. A second release was distributed after the 4th International GOCE User Workshop in May 2011. In this solution, based on eight months of GOCE data, all the dependencies from external gravity information were removed thus giving rise to a GOCE-only space-wise model. However this model showed an over-regularization at the highest degrees of the spherical harmonic expansion due to the combination technique of intermediate solutions (based on about two months of data). In this work a new space-wise solution is presented. It is based on all nominal mission data from November 2009 to mid April 2011, and its main novelty is that the intermediate solutions are now computed in such a way to avoid over-regularization in the final solution. Beyond the spherical harmonic coefficients of the global model and their error covariance matrix, the space-wise approach is able to deliver as by-products a set of spherical grids of potential and of its second derivatives at mean satellite altitude. These grids have an information content that is very similar to the original along-orbit data, but they are much easier to handle. In addition they are estimated by local least-squares collocation and therefore, although computed by a unique global covariance function, they could yield more information at local level than the spherical harmonic coefficients of the global model. For this reason these grids seem to be useful for local geophysical investigations. The estimated grids with their estimated errors are presented in this work together with proposals on possible future improvements. A test to compare the different information contents of the along-orbit data, the gridded data and the spherical harmonic coefficients is also shown.

  2. Planned Comparisons as Better Alternatives to ANOVA Omnibus Tests.

    ERIC Educational Resources Information Center

    Benton, Roberta L.

    Analyses of data are presented to illustrate the advantages of using a priori or planned comparisons rather than omnibus analysis of variance (ANOVA) tests followed by post hoc or posteriori testing. The two types of planned comparisons considered are planned orthogonal non-trend coding contrasts and orthogonal polynomial or trend contrast coding.…
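
    A minimal worked example of testing a single a priori contrast (Python with NumPy/SciPy, synthetic data): the contrast is specified before looking at the data, estimated from the group means, and tested with a t statistic built on the pooled mean square error, without any omnibus F test. Group sizes, means and the contrast weights below are illustrative assumptions only.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)

      # Synthetic data: three groups, n = 12 each (hypothetical means and common SD)
      groups = [rng.normal(mu, 2.0, 12) for mu in (10.0, 12.0, 15.0)]
      n = np.array([len(g) for g in groups])
      means = np.array([g.mean() for g in groups])

      # Pooled error variance (MSE) from the one-way layout
      df_err = int(np.sum(n - 1))
      mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / df_err

      # A priori contrast: group 3 vs. the average of groups 1 and 2 (weights sum to zero)
      c = np.array([-0.5, -0.5, 1.0])
      psi = c @ means                                   # contrast estimate
      se = np.sqrt(mse * np.sum(c ** 2 / n))            # its standard error
      t = psi / se
      p = 2 * stats.t.sf(abs(t), df_err)                # two-sided p-value
      print(f"psi = {psi:.2f}, t({df_err}) = {t:.2f}, p = {p:.4f}")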

  3. Development and Validation of the Sorokin Psychosocial Love Inventory for Divorced Individuals

    ERIC Educational Resources Information Center

    D'Ambrosio, Joseph G.; Faul, Anna C.

    2013-01-01

    Objective: This study describes the development and validation of the Sorokin Psychosocial Love Inventory (SPSLI) measuring love actions toward a former spouse. Method: Classical measurement theory and confirmatory factor analysis (CFA) were utilized with an a priori theory and factor model to validate the SPSLI. Results: A 15-item scale…

  4. Autonomous self-configuration of artificial neural networks for data classification or system control

    NASA Astrophysics Data System (ADS)

    Fink, Wolfgang

    2009-05-01

    Artificial neural networks (ANNs) are powerful methods for the classification of multi-dimensional data as well as for the control of dynamic systems. In general terms, ANNs consist of neurons that are, e.g., arranged in layers and interconnected by real-valued or binary neural couplings or weights. ANNs try mimicking the processing taking place in biological brains. The classification and generalization capabilities of ANNs are given by the interconnection architecture and the coupling strengths. To perform a certain classification or control task with a particular ANN architecture (i.e., number of neurons, number of layers, etc.), the inter-neuron couplings and their accordant coupling strengths must be determined (1) either by a priori design (i.e., manually) or (2) using training algorithms such as error back-propagation. The more complex the classification or control task, the less obvious it is how to determine an a priori design of an ANN, and, as a consequence, the architecture choice becomes somewhat arbitrary. Furthermore, rather than being able to determine for a given architecture directly the corresponding coupling strengths necessary to perform the classification or control task, these have to be obtained/learned through training of the ANN on test data. We report on the use of a Stochastic Optimization Framework (SOF; Fink, SPIE 2008) for the autonomous self-configuration of Artificial Neural Networks (i.e., the determination of number of hidden layers, number of neurons per hidden layer, interconnections between neurons, and respective coupling strengths) for performing classification or control tasks. This may provide an approach towards cognizant and self-adapting computing architectures and systems.

  5. Team interaction during surgery: a systematic review of communication coding schemes.

    PubMed

    Tiferes, Judith; Bisantz, Ann M; Guru, Khurshid A

    2015-05-15

    Communication problems have been systematically linked to human errors in surgery and a deep understanding of the underlying processes is essential. Although a number of tools exist to assess nontechnical skills, methods to study communication and other team-related processes are far from being standardized, making comparisons challenging. We conducted a systematic review to analyze methods used to study events in the operating room (OR) and to develop a synthesized coding scheme for OR team communication. Six electronic databases were accessed to search for articles that collected individual events during surgery and included detailed coding schemes. Additional articles were added based on cross-referencing. That collection was then classified based on type of events collected, environment type (real or simulated), number of procedures, type of surgical task, team characteristics, method of data collection, and coding scheme characteristics. All dimensions within each coding scheme were grouped based on emergent content similarity. Categories drawn from articles, which focused on communication events, were further analyzed and synthesized into one common coding scheme. A total of 34 of 949 articles met the inclusion criteria. The methodological characteristics and coding dimensions of the articles were summarized. A priori coding was used in nine studies. The synthesized coding scheme for OR communication included six dimensions as follows: information flow, period, statement type, topic, communication breakdown, and effects of communication breakdown. The coding scheme provides a standardized coding method for OR communication, which can be used to develop a priori codes for future studies especially in comparative effectiveness research. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Dietary patterns and bone mineral status in young adults: the Northern Ireland Young Hearts Project.

    PubMed

    Whittle, Claire R; Woodside, Jayne V; Cardwell, Chris R; McCourt, Hannah J; Young, Ian S; Murray, Liam J; Boreham, Colin A; Gallagher, Alison M; Neville, Charlotte E; McKinley, Michelle C

    2012-10-28

    Studies of individual nutrients or foods have revealed much about dietary influences on bone. Multiple food or nutrient approaches, such as dietary pattern analysis, could offer further insight but research is limited and largely confined to older adults. We examined the relationship between dietary patterns, obtained by a posteriori and a priori methods, and bone mineral status (BMS; collective term for bone mineral content (BMC) and bone mineral density (BMD)) in young adults (20-25 years; n 489). Diet was assessed by 7 d diet history and BMD and BMC were determined at the lumbar spine and femoral neck (FN). A posteriori dietary patterns were derived using principal component analysis (PCA) and three a priori dietary quality scores were applied (dietary diversity score (DDS), nutritional risk score and Mediterranean diet score). For the PCA-derived dietary patterns, women in the top compared to the bottom fifth of the 'Nuts and Meat' pattern had greater FN BMD by 0·074 g/cm(2) (P = 0·049) and FN BMC by 0·40 g (P = 0·034) after adjustment for confounders. Similarly, men in the top compared to the bottom fifth of the 'Refined' pattern had lower FN BMC by 0·41 g (P = 0·049). For the a priori DDS, women in the top compared to the bottom third had lower FN BMD by 0·05 g/cm(2) after adjustments (P = 0·052), but no other relationships with BMS were identified. In conclusion, adherence to a 'Nuts and Meat' dietary pattern may be associated with greater BMS in young women and a 'Refined' dietary pattern may be detrimental for bone health in young men.

  7. Structural brain correlates of executive engagement in working memory: children's inter-individual differences are reflected in the anterior insular cortex.

    PubMed

    Rossi, Sandrine; Lubin, Amélie; Simon, Grégory; Lanoë, Céline; Poirel, Nicolas; Cachia, Arnaud; Pineau, Arlette; Houdé, Olivier

    2013-06-01

    Although the development of executive functions has been extensively investigated at a neurofunctional level, studies of the structural relationships between executive functions and brain anatomy are still scarce. Based on our previous meta-analysis of functional neuroimaging studies examining executive functions in children (Houdé, Rossi, Lubin, and Joliot (2010), Developmental Science, 13, 876-885), we investigated six a priori regions of interest: the left anterior insular cortex (AIC), the left and the right supplementary motor areas, the right middle and superior frontal gyri, and the left precentral gyrus. Structural magnetic resonance imaging scans were acquired from 22 ten-year-old children. Local gray matter volumes, assessed automatically using a standard voxel-based morphometry approach, were correlated with executive and storage working memory capacities evaluated using backward and forward digit span tasks, respectively. We found an association between smaller gray matter volume--i.e., an index of neural maturation--in the left AIC and high backward memory span, while gray matter volumes in the a priori selected regions of interest were not linked with forward memory span. These results were corroborated by a whole-brain a priori free analysis that revealed a significant negative correlation in the frontal and prefrontal regions, including the left AIC, with the backward memory span, and in the right inferior parietal lobe, with the forward memory span. Taken together, these results suggest a distinct and specific association between regional gray matter volume and the executive component vs. the storage component of working memory. Moreover, they support a key role for the AIC in the executive network of children. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Application of ray-traced tropospheric slant delays to geodetic VLBI analysis

    NASA Astrophysics Data System (ADS)

    Hofmeister, Armin; Böhm, Johannes

    2017-08-01

    The correction of tropospheric influences via so-called path delays is critical for the analysis of observations from space geodetic techniques like the very long baseline interferometry (VLBI). In standard VLBI analysis, the a priori slant path delays are determined using the concept of zenith delays, mapping functions and gradients. The a priori use of ray-traced delays, i.e., tropospheric slant path delays determined with the technique of ray-tracing through the meteorological data of numerical weather models (NWM), serves as an alternative way of correcting the influences of the troposphere on the VLBI observations within the analysis. In the presented research, the application of ray-traced delays to the VLBI analysis of sessions in a time span of 16.5 years is investigated. Ray-traced delays have been determined with program RADIATE (see Hofmeister in Ph.D. thesis, Department of Geodesy and Geophysics, Faculty of Mathematics and Geoinformation, Technische Universität Wien. http://resolver.obvsg.at/urn:nbn:at:at-ubtuw:1-3444, 2016) utilizing meteorological data provided by NWM of the European Centre for Medium-Range Weather Forecasts (ECMWF). In comparison with a standard VLBI analysis, which includes the tropospheric gradient estimation, the application of the ray-traced delays to an analysis, which uses the same parameterization except for the a priori slant path delay handling and the used wet mapping factors for the zenith wet delay (ZWD) estimation, improves the baseline length repeatability (BLR) at 55.9% of the baselines at sub-mm level. If no tropospheric gradients are estimated within the compared analyses, 90.6% of all baselines benefit from the application of the ray-traced delays, which leads to an average improvement of the BLR of 1 mm. The effects of the ray-traced delays on the terrestrial reference frame are also investigated. A separate assessment of the RADIATE ray-traced delays is carried out by comparison to the ray-traced delays from the National Aeronautics and Space Administration Goddard Space Flight Center (NASA GSFC) (Eriksson and MacMillan in http://lacerta.gsfc.nasa.gov/tropodelays, 2016) with respect to the analysis performances in terms of BLR results. If tropospheric gradient estimation is included in the analysis, 51.3% of the baselines benefit from the RADIATE ray-traced delays at sub-mm difference level. If no tropospheric gradients are estimated within the analysis, the RADIATE ray-traced delays deliver a better BLR at 63% of the baselines compared to the NASA GSFC ray-traced delays.

  9. A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models

    NASA Astrophysics Data System (ADS)

    Keller, J. D.; Bach, L.; Hense, A.

    2012-12-01

    The estimation of fast growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades three methods (and variations of these methods) have evolved for global numerical weather prediction models: ensemble Kalman filter, singular vectors and breeding of growing modes (or now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in the mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter which does not necessarily focus on the large scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation for an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast growing error modes for mesoscale limited area models. The so-called self-breeding is a development based on the breeding of growing modes technique. Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates for the initial uncertainty structure (or local Lyapunov vectors) given a specific norm. To prevent all ensemble perturbations from converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the sub-space spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of errors (e.g. convection, turbulence). With case study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
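
    The breeding cycle itself is simple enough to sketch. The Python/NumPy toy below uses the Lorenz-63 system as a stand-in for the mesoscale model: perturbations are integrated forward over a short interval, orthogonalized in the ensemble subspace (the ensemble-transform flavour mentioned above), rescaled to a prescribed norm and re-added to the control state. The model, ensemble size, rescaling amplitude and cycle length are hypothetical choices, not those of the reanalysis system.

      import numpy as np

      def lorenz63_step(x, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
          """One forward-Euler step of the Lorenz-63 system (toy stand-in model)."""
          dx = np.array([s * (x[1] - x[0]),
                         x[0] * (r - x[2]) - x[1],
                         x[0] * x[1] - b * x[2]])
          return x + dt * dx

      def integrate(x, nsteps=50):
          for _ in range(nsteps):
              x = lorenz63_step(x)
          return x

      rng = np.random.default_rng(3)
      x0 = integrate(np.array([1.0, 1.0, 20.0]), 2000)   # spin up onto the attractor

      k, amp = 3, 1e-3                                   # ensemble size, rescaling amplitude
      P = amp * rng.standard_normal((3, k))              # initial perturbations (columns)

      for cycle in range(30):                            # rapid breeding cycles
          # integrate the control and the perturbed states over the short breeding interval
          xc = integrate(x0)
          Pg = np.column_stack([integrate(x0 + P[:, j]) - xc for j in range(k)])
          # orthogonalize the grown perturbations in ensemble space (ensemble-transform flavour)
          Q, _ = np.linalg.qr(Pg)
          # rescale each bred vector back to the prescribed amplitude and re-add next cycle
          P = amp * Q[:, :k]
          x0 = xc                                        # the control state moves on

      print("bred perturbation directions (columns):")
      print(np.round(P / amp, 3))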

  10. The Effects of Rainfall Inhomogeneity on Climate Variability of Rainfall Estimated from Passive Microwave Sensors

    NASA Technical Reports Server (NTRS)

    Kummerow, Christian; Poyner, Philip; Berg, Wesley; Thomas-Stahle, Jody

    2007-01-01

    Passive microwave rainfall estimates that exploit the emission signal of raindrops in the atmosphere are sensitive to the inhomogeneity of rainfall within the satellite field of view (FOV). In particular, the concave nature of the brightness temperature (T(sub b)) versus rainfall relations at frequencies capable of detecting the blackbody emission of raindrops causes retrieval algorithms to systematically underestimate precipitation unless the rainfall is homogeneous within a radiometer FOV, or the inhomogeneity is accounted for explicitly. This problem has a long history in the passive microwave community and has been termed the beam-filling error. While not a true error, correcting for it requires a priori knowledge about the actual distribution of the rainfall within the satellite FOV, or at least a statistical representation of this inhomogeneity. This study first examines the magnitude of this beam-filling correction when slant-path radiative transfer calculations are used to account for the oblique incidence of current radiometers. Because of the horizontal averaging that occurs away from the nadir direction, the beam-filling error is found to be only a fraction of what has been reported previously in the literature based upon plane-parallel calculations. For a FOV representative of the 19-GHz radiometer channel (18 km X 28 km) aboard the Tropical Rainfall Measuring Mission (TRMM), the mean beam-filling correction computed in this study for tropical atmospheres is 1.26 instead of 1.52 computed from plane-parallel techniques. The slant-path solution is also less sensitive to finescale rainfall inhomogeneity and is, thus, able to make use of 4-km radar data from the TRMM Precipitation Radar (PR) in order to map regional and seasonal distributions of observed rainfall inhomogeneity in the Tropics. The data are examined to assess the expected errors introduced into climate rainfall records by unresolved changes in rainfall inhomogeneity. Results show that global mean monthly errors introduced by not explicitly accounting for rainfall inhomogeneity do not exceed 0.5% if the beam-filling error is allowed to be a function of rainfall rate and freezing level and do not exceed 2% if a universal beam-filling correction is applied that depends only upon the freezing level. Monthly regional errors can be significantly larger. Over the Indian Ocean, errors as large as 8% were found if the beam-filling correction is allowed to vary with rainfall rate and freezing level while errors of 15% were found if a universal correction is used.
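
    A toy calculation makes the origin of the beam-filling correction concrete: because Tb(R) is concave, the brightness temperature of an inhomogeneously filled footprint corresponds to a lower rain rate than the true FOV average, and the ratio of the two is the correction factor. The schematic emission-type Tb(R) curve and the two-value rain distribution in the Python/NumPy sketch below are hypothetical and unrelated to the 1.26 and 1.52 values quoted above.

      import numpy as np

      # Schematic concave Tb(R) relation for an emission channel (all numbers hypothetical)
      Tb_clear, Tb_sat, k = 180.0, 280.0, 0.12           # K, K, (mm/h)^-1

      def tb(rain):
          return Tb_sat - (Tb_sat - Tb_clear) * np.exp(-k * np.asarray(rain, dtype=float))

      def rain_from_tb(t):
          return -np.log((Tb_sat - t) / (Tb_sat - Tb_clear)) / k   # inverse of tb()

      # Inhomogeneous FOV: half of the footprint at 10 mm/h, half rain-free
      rain = np.array([10.0, 0.0])
      true_mean_rain = rain.mean()                       # 5 mm/h
      fov_tb = tb(rain).mean()                           # the radiometer sees the mean Tb
      retrieved = rain_from_tb(fov_tb)                   # a homogeneous-FOV retrieval

      print(f"true FOV-mean rain : {true_mean_rain:.2f} mm/h")
      print(f"retrieved (no corr): {retrieved:.2f} mm/h")
      print(f"beam-filling factor: {true_mean_rain / retrieved:.2f}")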

  11. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E), applied to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large representation error, i.e. the dominance of the mesoscale eddies in the T/P signal, which are not part of the 21 by 1" GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simultaneous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible. However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem. In other words, there exists a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight. Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers.
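
    The core of a covariance matching step can be sketched independently of any ocean model: write the theoretical covariance of the model-data residuals as a linear combination of known structure matrices with unknown variance parameters, and fit those parameters to the sample covariance by least squares. The Python/NumPy example below does this for two hypothetical structure matrices (a smooth "model error" pattern and white "measurement noise"); it is a generic illustration, not the CMA implementation of the thesis.

      import numpy as np

      rng = np.random.default_rng(4)

      # Two known covariance "structure" matrices; their weights are what matching recovers.
      n, nsamples = 15, 4000
      x = np.linspace(0.0, 1.0, n)
      C1 = np.exp(-((x[:, None] - x[None, :]) / 0.2) ** 2)   # smooth, correlated pattern
      C2 = np.eye(n)                                         # white measurement noise
      theta_true = np.array([2.0, 0.5])

      # Simulate residuals with covariance theta1*C1 + theta2*C2 and form the sample covariance
      Ltrue = np.linalg.cholesky(theta_true[0] * C1 + theta_true[1] * C2)
      resid = Ltrue @ rng.standard_normal((n, nsamples))
      S = resid @ resid.T / nsamples

      # Covariance matching: vec(S) ~ theta1*vec(C1) + theta2*vec(C2), solved by least squares
      G = np.column_stack([C1.ravel(), C2.ravel()])
      theta_hat, *_ = np.linalg.lstsq(G, S.ravel(), rcond=None)
      print("true thetas   :", theta_true)
      print("matched thetas:", np.round(theta_hat, 3))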

  12. A system to use electromagnetic tracking for the quality assurance of brachytherapy catheter digitization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, Antonio L., E-mail: adamato@lroc.harvard.edu; Viswanathan, Akila N.; Don, Sarah M.

    2014-10-15

    Purpose: To investigate the use of a system using electromagnetic tracking (EMT), post-processing and an error-detection algorithm for detecting errors and resolving uncertainties in high-dose-rate brachytherapy catheter digitization for treatment planning. Methods: EMT was used to localize 15 catheters inserted into a phantom using a stepwise acquisition technique. Five distinct acquisition experiments were performed. Noise associated with the acquisition was calculated. The dwell location configuration was extracted from the EMT data. A CT scan of the phantom was performed, and five distinct catheter digitization sessions were performed. No a priori registration of the CT scan coordinate system with the EMT coordinate system was performed. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT), and rigid registration was performed between EMT and CT dwell positions. EMT registration error was characterized in terms of the mean and maximum distance between corresponding EMT and CT dwell positions per catheter. An algorithm for error detection and identification was presented. Three types of errors were systematically simulated: swap of two catheter numbers, partial swap of catheter number identification for parts of the catheters (mix), and catheter-tip shift. Error-detection sensitivity (number of simulated scenarios correctly identified as containing an error/number of simulated scenarios containing an error) and specificity (number of scenarios correctly identified as not containing errors/number of correct scenarios) were calculated. Catheter identification sensitivity (number of catheters correctly identified as erroneous across all scenarios/number of erroneous catheters across all scenarios) and specificity (number of catheters correctly identified as correct across all scenarios/number of correct catheters across all scenarios) were calculated. The mean detected and identified shift was calculated. Results: The maximum noise ±1 standard deviation associated with the EMT acquisitions was 1.0 ± 0.1 mm, and the mean noise was 0.6 ± 0.1 mm. Registration of all the EMT and CT dwell positions was associated with a mean catheter error of 0.6 ± 0.2 mm, a maximum catheter error of 0.9 ± 0.4 mm, a mean dwell error of 1.0 ± 0.3 mm, and a maximum dwell error of 1.3 ± 0.7 mm. Error detection and catheter identification sensitivity and specificity of 100% were observed for swap, mix and shift (≥2.6 mm for error detection; ≥2.7 mm for catheter identification) errors. A mean detected shift of 1.8 ± 0.4 mm and a mean identified shift of 1.9 ± 0.4 mm were observed. Conclusions: Registration of the EMT dwell positions to the CT dwell positions was possible with a residual mean error per catheter of 0.6 ± 0.2 mm and a maximum error for any dwell of 1.3 ± 0.7 mm. These low residual registration errors show that quality assurance of the general characteristics of the catheters and of possible errors affecting one specific dwell position is possible. The sensitivity and specificity of the catheter digitization verification algorithm was 100% for swap and mix errors and for shifts ≥2.6 mm. On average, shifts ≥1.8 mm were detected, and shifts ≥1.9 mm were detected and identified.
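
    The registration and per-catheter error bookkeeping underlying such a QA check can be sketched with a standard rigid point-set alignment (Kabsch/SVD). In the Python/NumPy example below the catheter geometry, the EMT-frame transformation and the noise level are invented for illustration; the error-detection thresholds and the swap/mix/shift logic of the paper are not reproduced.

      import numpy as np

      def rigid_register(P, Q):
          """Least-squares rigid transform (R, t) mapping points P onto Q (Kabsch/SVD)."""
          Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
          H = (P - Pc).T @ (Q - Qc)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
          R = Vt.T @ D @ U.T
          return R, Qc - R @ Pc

      rng = np.random.default_rng(5)

      # Hypothetical CT dwell positions: 3 catheters x 10 dwells, 5 mm spacing (mm units)
      ct = np.array([[i * 5.0, c * 10.0, 0.0] for c in range(3) for i in range(10)])

      # EMT positions: the same geometry expressed in another frame, plus acquisition noise
      angle = np.deg2rad(25.0)
      R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                         [np.sin(angle),  np.cos(angle), 0.0],
                         [0.0,            0.0,           1.0]])
      emt = ct @ R_true.T + np.array([40.0, -15.0, 8.0]) + 0.6 * rng.standard_normal(ct.shape)

      R, t = rigid_register(emt, ct)
      aligned = emt @ R.T + t
      err = np.linalg.norm(aligned - ct, axis=1)
      for c in range(3):
          e = err[c * 10:(c + 1) * 10]
          print(f"catheter {c + 1}: mean error {e.mean():.2f} mm, max {e.max():.2f} mm")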

  13. Estimating the impact of birth control on fertility rate in sub-Saharan Africa.

    PubMed

    Ijaiya, Gafar T; Raheem, Usman A; Olatinwo, Abdulwaheed O; Ijaiya, Munir-Deen A; Ijaiya, Mukaila A

    2009-12-01

    Using cross-country data drawn from 40 countries and a multiple regression analysis, this paper examines the impact of birth control devices on the rate of fertility in sub-Saharan Africa. Our a-priori expectation is that the more women use birth control devices, the lower the fertility rate in sub-Saharan Africa will be. The result obtained from the study indicates that, except for the withdrawal method, which fell contrary to our expectation, the other variables (methods), such as the use of pills, injection, intrauterine device (IUD), condom/diaphragm and cervical cap, female sterilization and periodic abstinence/rhythm, fulfilled our a-priori expectations. These results notwithstanding, the paper suggests measures such as a massive enlightenment campaign on the benefits of these birth control devices, frequent checking of the potency of the devices, and good governance in the delivery of the devices.

  14. Stochastic dynamics for idiotypic immune networks

    NASA Astrophysics Data System (ADS)

    Barra, Adriano; Agliari, Elena

    2010-12-01

    In this work we introduce and analyze the stochastic dynamics obeyed by a model of an immune network recently introduced by the authors. We develop Fokker-Planck equations for the single lymphocyte behavior and coarse-grained Langevin schemes for the averaged clone behavior. After showing agreement with real systems (such as a short-path Jerne cascade), we suggest, with both analytical and numerical arguments, explanations for the generation of (metastable) memory cells, the improvement of the secondary response (in both quality and quantity) and bell-shaped modulation against infections as a natural behavior. The whole emerges from the model without being postulated a priori, as often occurs in second-generation immune networks: so the aim of the work is to present some out-of-equilibrium features of this model and to highlight mechanisms which can replace a-priori assumptions in view of further detailed analysis in theoretical systemic immunology.

  15. Direct discontinuous Galerkin method and its variations for second order elliptic equations

    DOE PAGES

    Huang, Hongying; Chen, Zheng; Li, Jin; ...

    2016-08-23

    In this study, we investigate the direct discontinuous Galerkin (DDG) method (Liu and Yan in SIAM J Numer Anal 47(1):475–698, 2009) and its variations (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010; Vidden and Yan in J Comput Math 31(6):638–662, 2013; Yan in J Sci Comput 54(2–3):663–683, 2013) for 2nd order elliptic problems. A priori error estimates under the energy norm are established for all four methods. An optimal error estimate under the L2 norm is obtained for the DDG method with interface correction (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010) and the symmetric DDG method (Vidden and Yan in J Comput Math 31(6):638–662, 2013). A series of numerical examples are carried out to illustrate the accuracy and capability of the schemes. Numerically we obtain optimal (k+1)th order convergence for the DDG method with interface correction and the symmetric DDG method on nonuniform and unstructured triangular meshes. An interface problem with discontinuous diffusion coefficients is investigated and optimal (k+1)th order accuracy is obtained. Peak solutions with sharp transitions are captured well. Highly oscillatory wave solutions of the Helmholtz equation are well resolved.
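
    For orientation, the generic shape of such a priori estimates for a degree-k approximation u_h of a sufficiently smooth solution u on a mesh of size h is the standard textbook form below, with an unspecified constant C independent of h; the specific assumptions and constants of the cited papers are not reproduced:

      \| u - u_h \|_{E} \;\le\; C\, h^{k}\, |u|_{H^{k+1}(\Omega)},
      \qquad
      \| u - u_h \|_{L^{2}(\Omega)} \;\le\; C\, h^{k+1}\, |u|_{H^{k+1}(\Omega)}.

    The second bound, of optimal order k+1 in L2, is the rate reported above for the DDG method with interface correction and the symmetric DDG method.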

  16. The Impact of Gene-Environment Dependence and Misclassification in Genetic Association Studies Incorporating Gene-Environment Interactions

    PubMed Central

    Lindström, Sara; Yen, Yu-Chun; Spiegelman, Donna; Kraft, Peter

    2009-01-01

    The possibility of gene-environment interaction can be exploited to identify genetic variants associated with disease using a joint test of genetic main effect and gene-environment interaction. We consider how exposure misclassification and dependence between the true exposure E and the tested genetic variant G affect this joint test in absolute terms and relative to three other tests: the marginal test (G), the standard test for multiplicative gene-environment interaction (GE), and the case-only test for interaction (GE-CO). All tests can have an inflated Type I error rate when E and G are correlated in the underlying population. For the GE and G-GE tests this inflation is only noticeable when the gene-environment dependence is unusually strong; the inflation can be large for the GE-CO test even for modest correlation. The joint G-GE test has greater power than the GE test generally, and greater power than the G test when there is no genetic main effect and the measurement error is small to moderate. The joint G-GE test is an attractive test for assessing genetic association when there is limited knowledge about causal mechanisms a priori, even in the presence of misclassification in environmental exposure measurement and correlation between exposure and genetic variants. PMID:19521099

  17. Direct discontinuous Galerkin method and its variations for second order elliptic equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Hongying; Chen, Zheng; Li, Jin

    In this study, we investigate the direct discontinuous Galerkin (DDG) method (Liu and Yan in SIAM J Numer Anal 47(1):475–698, 2009) and its variations (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010; Vidden and Yan in J Comput Math 31(6):638–662, 2013; Yan in J Sci Comput 54(2–3):663–683, 2013) for 2nd order elliptic problems. A priori error estimates under the energy norm are established for all four methods. An optimal error estimate under the L2 norm is obtained for the DDG method with interface correction (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010) and the symmetric DDG method (Vidden and Yan in J Comput Math 31(6):638–662, 2013). A series of numerical examples are carried out to illustrate the accuracy and capability of the schemes. Numerically we obtain optimal (k+1)th order convergence for the DDG method with interface correction and the symmetric DDG method on nonuniform and unstructured triangular meshes. An interface problem with discontinuous diffusion coefficients is investigated and optimal (k+1)th order accuracy is obtained. Peak solutions with sharp transitions are captured well. Highly oscillatory wave solutions of the Helmholtz equation are well resolved.

  18. Mortality risk score prediction in an elderly population using machine learning.

    PubMed

    Rose, Sherri

    2013-03-01

    Standard practice for prediction often relies on parametric regression methods. Interesting new methods from the machine learning literature have been introduced in epidemiologic studies, such as random forest and neural networks. However, a priori, an investigator will not know which algorithm to select and may wish to try several. Here I apply the super learner, an ensembling machine learning approach that combines multiple algorithms into a single algorithm and returns a prediction function with the best cross-validated mean squared error. Super learning is a generalization of stacking methods. I used super learning in the Study of Physical Performance and Age-Related Changes in Sonomans (SPPARCS) to predict death among 2,066 residents of Sonoma, California, aged 54 years or more during the period 1993-1999. The super learner for predicting death (risk score) improved upon all single algorithms in the collection of algorithms, although its performance was similar to that of several algorithms. Super learner outperformed the worst algorithm (neural networks) by 44% with respect to estimated cross-validated mean squared error and had an R2 value of 0.201. The improvement of super learner over random forest with respect to R2 was approximately 2-fold. Alternatives for risk score prediction include the super learner, which can provide improved performance.
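
    A minimal stacking sketch conveys the mechanics: obtain cross-validated predictions from each algorithm in a small library, then learn non-negative weights that minimize the cross-validated squared error. The Python example below assumes scikit-learn and SciPy are available and uses synthetic regression data; the library, fold count and weighting scheme are illustrative choices and not the SPPARCS analysis.

      import numpy as np
      from scipy.optimize import nnls
      from sklearn.datasets import make_regression
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.linear_model import LinearRegression, Ridge
      from sklearn.model_selection import cross_val_predict

      X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

      # Candidate library: each algorithm gets honest (cross-validated) predictions
      library = {
          "ols": LinearRegression(),
          "ridge": Ridge(alpha=10.0),
          "rf": RandomForestRegressor(n_estimators=200, random_state=0),
      }
      Z = np.column_stack([cross_val_predict(m, X, y, cv=10) for m in library.values()])

      # Meta-learning step: non-negative weights minimizing cross-validated squared error
      w, _ = nnls(Z, y)
      w = w / w.sum()

      for j, name in enumerate(library):
          print(f"{name:6s} CV-MSE: {np.mean((Z[:, j] - y) ** 2):10.1f}   weight: {w[j]:.2f}")
      print(f"super learner CV-MSE: {np.mean((Z @ w - y) ** 2):.1f}")

    In a full super learner each library algorithm would then be refit on all of the data, with the learned weights applied to those refitted fits when predicting for new observations.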

  19. Least-squares deconvolution of evoked potentials and sequence optimization for multiple stimuli under low-jitter conditions.

    PubMed

    Bardy, Fabrice; Dillon, Harvey; Van Dun, Bram

    2014-04-01

    Rapid presentation of stimuli in an evoked response paradigm can lead to overlap of multiple responses and consequently to difficulties in interpreting waveform morphology. This paper presents a deconvolution method allowing overlapping multiple responses to be disentangled. The deconvolution technique uses a least-squares error approach. A methodology is proposed to optimize the stimulus sequence associated with the deconvolution technique under low-jitter conditions. It controls the condition number of the matrices involved in recovering the responses. Simulations were performed using the proposed deconvolution technique. Multiple overlapping responses can be recovered perfectly in noiseless conditions. In the presence of noise, the amount of error introduced by the technique can be controlled a priori by the condition number of the matrix associated with the stimulus sequence used. The simulation results indicate the need for a minimum amount of jitter, as well as a sufficient number of overlap combinations, to obtain optimum results. An aperiodic model is recommended to improve reconstruction. We propose a deconvolution technique allowing multiple overlapping responses to be extracted and a method of choosing the stimulus sequence optimal for response recovery. This technique may allow audiologists, psychologists, and electrophysiologists to optimize their experimental designs involving rapidly presented stimuli, and to recover evoked overlapping responses. Copyright © 2013 International Federation of Clinical Neurophysiology. All rights reserved.
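
    The least-squares machinery can be sketched directly: stack the stimulus onsets of each stimulus type into a sparse design matrix whose columns are shifted copies of the unknown response samples, check its condition number (the quantity the sequence optimization controls), and recover the overlapping responses with an ordinary least-squares solve. In the Python/NumPy example below the sampling rate, response templates, onset sequences and noise level are all hypothetical.

      import numpy as np

      rng = np.random.default_rng(6)
      fs, resp_len, n_samples = 250, 100, 5000      # Hz, samples per response, recording length

      # Two hypothetical overlapping responses (e.g., two stimulus types)
      t = np.arange(resp_len) / fs
      templates = [np.sin(2 * np.pi * 7 * t) * np.exp(-t / 0.12),
                   -0.7 * np.sin(2 * np.pi * 4 * t) * np.exp(-t / 0.20)]

      # Jittered rapid stimulus sequences, 60 presentations of each type
      onsets = [np.sort(rng.choice(np.arange(0, n_samples - resp_len), 60, replace=False)),
                np.sort(rng.choice(np.arange(0, n_samples - resp_len), 60, replace=False))]

      # Design matrix: column k of block s places response sample k at every onset of type s
      M = np.zeros((n_samples, 2 * resp_len))
      for s, ons in enumerate(onsets):
          for o in ons:
              for k in range(resp_len):
                  M[o + k, s * resp_len + k] += 1.0

      # The condition number of M controls how much noise the deconvolution amplifies
      print("condition number:", round(np.linalg.cond(M), 1))

      # Synthesize the overlapped EEG-like recording and deconvolve by least squares
      eeg = M @ np.concatenate(templates) + 0.5 * rng.standard_normal(n_samples)
      x_hat, *_ = np.linalg.lstsq(M, eeg, rcond=None)
      rec = [x_hat[:resp_len], x_hat[resp_len:]]
      for s in range(2):
          err = np.sqrt(np.mean((rec[s] - templates[s]) ** 2))
          print(f"response {s + 1}: RMS recovery error = {err:.3f}")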

  20. Two prospective Li-based half-Heusler alloys for spintronic applications based on structural stability and spin–orbit effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, R. L.; Damewood, L.; Zeng, Y. J.

    To search for half-metallic materials for spintronic applications, instead of using an expensive trial-and-error experimental scheme, it is more efficient to use first-principles calculations to design materials first, and then grow them. In particular, using a priori information on the structural stability and the effect of the spin–orbit interaction (SOI) enables experimentalists to focus on favorable properties that make growing half-metals easier. We suggest that using acoustic phonon spectra is the best way to address the stability of promising half-metallic materials. Additionally, by carrying out accurate first-principles calculations, we propose two criteria for neglecting the SOI so the half-metallicity persists. As a result, based on the mechanical stability and the negligible SOI, we identified two half-metals, β-LiCrAs and β-LiMnSi, as promising half-Heusler alloys worth growing.

  1. Adaptive Optics Image Restoration Based on Frame Selection and Multi-frame Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Tian, Yu; Rao, Chang-hui; Wei, Kai

    Restricted by observational conditions and hardware, adaptive optics can only partially correct optical images blurred by atmospheric turbulence. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed for the restoration of high-resolution adaptive optics images. By frame selection we mean that a subset of the degraded (blurred) images is first selected to participate in the iterative blind deconvolution calculation, which requires no a priori knowledge and uses only a positivity constraint. This method has been applied to the restoration of stellar images observed by the 61-element adaptive optics system installed on the Yunnan Observatory 1.2 m telescope. The experimental results indicate that this method can effectively compensate for the residual errors of the adaptive optics system in the image, and that the restored image can reach diffraction-limited quality.

  2. Joint penalized-likelihood reconstruction of time-activity curves and regions-of-interest from projection data in brain PET

    NASA Astrophysics Data System (ADS)

    Krestyannikov, E.; Tohka, J.; Ruotsalainen, U.

    2008-06-01

    This paper presents a novel statistical approach for joint estimation of regions-of-interest (ROIs) and the corresponding time-activity curves (TACs) from dynamic positron emission tomography (PET) brain projection data. It is based on optimizing the joint objective function that consists of a data log-likelihood term and two penalty terms reflecting the available a priori information about the human brain anatomy. The developed local optimization strategy iteratively updates both the ROI and TAC parameters and is guaranteed to monotonically increase the objective function. The quantitative evaluation of the algorithm is performed with numerically and Monte Carlo-simulated dynamic PET brain data of the 11C-Raclopride and 18F-FDG tracers. The results demonstrate that the method outperforms the existing sequential ROI quantification approaches in terms of accuracy, and can noticeably reduce the errors in TACs arising due to the finite spatial resolution and ROI delineation.

  3. Bidirectional composition on lie groups for gradient-based image alignment.

    PubMed

    Mégret, Rémi; Authesserre, Jean-Baptiste; Berthoumieu, Yannick

    2010-09-01

    In this paper, a new formulation based on bidirectional composition on Lie groups (BCL) for parametric gradient-based image alignment is presented. Contrary to conventional approaches, the BCL method takes advantage of the gradients of both the template and the current image without combining them a priori. Based on this bidirectional formulation, two methods are proposed and their relationship with state-of-the-art gradient-based approaches is fully discussed. The first one, the BCL method, relies on the compositional framework to provide the minimization of the compensated error with respect to an augmented parameter vector. The second one, the projected BCL (PBCL), corresponds to a close approximation of the BCL approach. A comparative study is carried out dealing with computational complexity, convergence rate and frequency of convergence. Numerical experiments using a conventional benchmark show the performance improvement, especially for asymmetric levels of noise, which is also discussed from a theoretical point of view.

  4. Real-time Bayesian anomaly detection in streaming environmental data

    NASA Astrophysics Data System (ADS)

    Hill, David J.; Minsker, Barbara S.; Amir, Eyal

    2009-04-01

    With large volumes of data arriving in near real time from environmental sensors, there is a need for automated detection of anomalous data caused by sensor or transmission errors or by infrequent system behaviors. This study develops and evaluates three automated anomaly detection methods using dynamic Bayesian networks (DBNs), which perform fast, incremental evaluation of data as they become available, scale to large quantities of data, and require no a priori information regarding process variables or types of anomalies that may be encountered. This study investigates these methods' abilities to identify anomalies in eight meteorological data streams from Corpus Christi, Texas. The results indicate that DBN-based detectors, using either robust Kalman filtering or Rao-Blackwellized particle filtering, outperform a DBN-based detector using Kalman filtering, with the former having false positive/negative rates of less than 2%. These methods were successful at identifying data anomalies caused by two real events: a sensor failure and a large storm.
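
    A minimal sketch of the Kalman-filter flavor of such a detector is given below: a random-walk state model tracks the stream, and observations whose innovations exceed a few standard deviations are flagged (and excluded from the update for robustness). The model, thresholds, and synthetic stream are illustrative assumptions, not the Corpus Christi configuration.

    ```python
    # Minimal Kalman-filter-based anomaly detector for a single streaming sensor.
    import numpy as np

    def detect_anomalies(stream, q=1e-3, r=0.05, threshold=3.0):
        """Flag observations whose innovation exceeds `threshold` sigma."""
        x, p = stream[0], 1.0          # state estimate and its variance
        flags = [False]
        for z in stream[1:]:
            # Predict step: random-walk state model with process variance q.
            p = p + q
            # Innovation and its variance under measurement noise variance r.
            innovation = z - x
            s = p + r
            anomalous = abs(innovation) > threshold * np.sqrt(s)
            flags.append(bool(anomalous))
            # Update step: skip the update for flagged points so a single bad
            # reading does not drag the state estimate away (robustness).
            if not anomalous:
                k = p / s
                x = x + k * innovation
                p = (1 - k) * p
        return np.array(flags)

    rng = np.random.default_rng(1)
    data = np.cumsum(rng.normal(0, 0.02, 500)) + 20.0   # synthetic temperature-like drift
    data[250] += 3.0                                    # injected sensor spike
    print(np.where(detect_anomalies(data))[0])
    ```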

  5. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. In general, MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case where the rate is a random variable with a probability density function of the form c x^K (1-x)^m, and it is shown that the MMSE estimates are linear in this case. This rich class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  6. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
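
    For intuition about why the estimates become linear, a short worked example under a beta prior is sketched below; the single-parameter binomial count model is an illustrative simplification of the DTJP observation model.

    ```latex
    % Illustrative parameterization: beta prior on the rate x, binomial jump counts.
    \[
      p(x) \propto x^{K}(1-x)^{m}, \qquad
      P(k \mid x) = \binom{n}{k}\, x^{k}(1-x)^{\,n-k}
    \]
    \[
      p(x \mid k) \propto x^{K+k}(1-x)^{\,m+n-k}
      \;\Longrightarrow\;
      \hat{x}_{\mathrm{MMSE}} = \mathbb{E}[x \mid k] = \frac{K+k+1}{K+m+n+2},
    \]
    % an affine (hence linear) function of the observed jump count k.
    ```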

  7. Active impulsive noise control using maximum correntropy with adaptive kernel size

    NASA Astrophysics Data System (ADS)

    Lu, Lu; Zhao, Haiquan

    2017-03-01

    Active noise control (ANC) based on the principle of superposition is an attractive method to attenuate noise signals. However, impulsive noise in ANC systems will degrade the performance of the controller. In this paper, a filtered-x recursive maximum correntropy (FxRMC) algorithm is proposed based on the maximum correntropy criterion (MCC) to reduce the effect of outliers. The proposed FxRMC algorithm does not require any a priori information about the noise characteristics and outperforms the filtered-x least mean square (FxLMS) algorithm for impulsive noise. Meanwhile, in order to adjust the kernel size of the FxRMC algorithm online, a recursive approach is proposed by taking into account the past estimates of error signals over a sliding window. Simulation and experimental results in the context of active impulsive noise control demonstrate that the proposed algorithms achieve much better performance than existing algorithms in various noise environments.
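
    The following sketch shows the core maximum-correntropy weight update in a simplified LMS-style form; the full FxRMC algorithm additionally filters the reference through the secondary-path model and adapts the kernel size recursively, so the signals, step size, and fixed kernel width here are only illustrative.

    ```python
    # Sketch of a maximum-correntropy adaptive filter update (simplified).
    import numpy as np

    def mcc_lms(x, d, order=8, mu=0.05, sigma=1.0):
        """Adapt FIR weights w by maximizing Gaussian correntropy of the error."""
        w = np.zeros(order)
        errors = np.zeros(len(x))
        for n in range(order, len(x)):
            u = x[n - order:n][::-1]          # current reference vector
            e = d[n] - w @ u                  # error signal
            # Gaussian kernel weight: large (impulsive) errors are down-weighted,
            # which is what makes the correntropy criterion robust to outliers.
            g = np.exp(-e**2 / (2 * sigma**2))
            w = w + mu * g * e * u
            errors[n] = e
        return w, errors

    rng = np.random.default_rng(0)
    x = rng.standard_normal(5000)                       # reference noise
    h = np.array([0.8, -0.4, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
    d = np.convolve(x, h)[: len(x)]                     # "primary path" output
    d += np.where(rng.random(len(x)) < 0.01, 10.0, 0.0) * rng.standard_normal(len(x))
    w_hat, e = mcc_lms(x, d)
    ```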

  8. An Indoor Slam Method Based on Kinect and Multi-Feature Extended Information Filter

    NASA Astrophysics Data System (ADS)

    Chang, M.; Kang, Z.

    2017-09-01

    Based on the framework of ORB-SLAM, in this paper the transformation parameters between adjacent Kinect image frames are computed using ORB keypoints, from which the a priori information matrix and information vector are calculated, and the motion update of the multi-feature extended information filter is then realized. Using the point cloud data formed from the depth image, the ICP algorithm is applied to extract point features of the scene and to build an observation model, while the a posteriori information matrix and information vector are calculated, weakening the influence of error accumulation in the positioning process. Furthermore, this paper applies the ORB-SLAM framework to realize autonomous positioning in real time in an unknown indoor environment. Finally, Lidar data of the scene were used to evaluate the positioning accuracy of the method put forward in this paper.

  9. Control of a flexible link by shaping the closed loop frequency response function through optimised feedback filters

    NASA Astrophysics Data System (ADS)

    Del Vescovo, D.; D'Ambrogio, W.

    1995-01-01

    A frequency domain method is presented to design a closed-loop control for vibration reduction in flexible mechanisms. The procedure is developed on a single-link flexible arm, driven by a one rotary degree of freedom servomotor, although the same technique may be applied to similar systems such as supports for aerospace antennae or solar panels. The method uses the structural frequency response functions (FRFs), thus avoiding system identification, which produces modeling uncertainties. Two closed loops are implemented: the inner loop uses acceleration feedback with the aim of making the FRF similar to that of an equivalent rigid link; the outer loop feeds back displacements to achieve a fast positioning response and zero steady-state error. In both cases, the controller type is established a priori, while the actual characteristics are defined by an optimisation procedure in which the relevant FRF is constrained within prescribed bounds and stability is taken into account.

  10. Reliable multicast protocol specifications flow control and NACK policy

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.; Whetten, Brian

    1995-01-01

    This appendix presents the flow and congestion control schemes recommended for RMP and a NACK policy based on the whiteboard tool. Because RMP uses a primarily NACK-based error detection scheme, there is no direct feedback path through which receivers can signal losses due to low buffer space or congestion. Reliable multicast protocols also suffer from the fact that throughput for a multicast group must be divided among the members of the group. This division is usually very dynamic in nature and therefore does not lend itself well to a priori determination. These facts have led the flow and congestion control schemes of RMP to be made completely orthogonal to the protocol specification. This allows several differing schemes to be used in different environments to produce the best results. As a default, a modified sliding window scheme based on previous algorithms is suggested and described below.

  11. Generic distortion model for metrology under optical microscopes

    NASA Astrophysics Data System (ADS)

    Liu, Xingjian; Li, Zhongwei; Zhong, Kai; Chao, YuhJin; Miraldo, Pedro; Shi, Yusheng

    2018-04-01

    For metrology under optical microscopes, lens distortion is the dominant source of error. Previous distortion models and correction methods mostly rely on parametric distortion models, which require a priori knowledge of the microscope's lens system. However, because of the numerous optical elements in a microscope, distortions can hardly be represented by a simple parametric model. In this paper, a generic distortion model considering both symmetric and asymmetric distortions is developed. Such a model is obtained by using radial basis functions (RBFs) to interpolate the radius and distortion values of symmetric distortions (image coordinates and distortion rays for asymmetric distortions). An accurate and easy-to-implement distortion correction method is presented. With the proposed approach, quantitative measurement with better accuracy can be achieved, such as in Digital Image Correlation for deformation measurement under an optical microscope. The proposed technique is verified by both synthetic and real data experiments.
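
    As a hedged illustration of the non-parametric idea, the sketch below fits a SciPy RBFInterpolator from distorted calibration-grid positions to their ideal locations and applies it to undistort measured coordinates; the synthetic distortion field and parameters are placeholders for a real microscope calibration.

    ```python
    # Sketch of a non-parametric (RBF-based) distortion correction.
    # SciPy >= 1.7; the synthetic radial distortion is only a stand-in.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Ideal calibration-target grid (e.g., a dot grid), in pixels.
    xx, yy = np.meshgrid(np.linspace(-500, 500, 21), np.linspace(-400, 400, 17))
    ideal = np.column_stack([xx.ravel(), yy.ravel()])

    # Observed (distorted) positions: a symmetric radial term plus a small
    # asymmetric term, mimicking the combined distortion discussed above.
    r2 = (ideal**2).sum(axis=1, keepdims=True)
    observed = ideal * (1 + 1e-7 * r2) + np.array([2e-5, -1e-5]) * r2

    # RBF interpolation of the correction field: distorted -> ideal position.
    correct = RBFInterpolator(observed, ideal, kernel="thin_plate_spline", smoothing=1e-6)

    # Undistort arbitrary measured image coordinates before metrology (e.g., DIC).
    measured = np.array([[123.4, -87.2], [310.0, 240.5]])
    print(correct(measured))
    ```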

  12. Data-Rate Estimation for Autonomous Receiver Operation

    NASA Technical Reports Server (NTRS)

    Tkacenko, A.; Simon, M. K.

    2005-01-01

    In this article, we present a series of algorithms for estimating the data rate of a signal whose admissible data rates are integer base, integer powered multiples of a known basic data rate. These algorithms can be applied to the Electra radio currently used in the Deep Space Network (DSN), which employs data rates having the above relationship. The estimation is carried out in an autonomous setting in which very little a priori information is assumed. It is done by exploiting an elegant property of the split symbol moments estimator (SSME), which is traditionally used to estimate the signal-to-noise ratio (SNR) of the received signal. By quantizing the assumed symbol-timing error or jitter, we present an all-digital implementation of the SSME which can be used to jointly estimate the data rate, SNR, and jitter. Simulation results presented show that these joint estimation algorithms perform well, even in the low SNR regions typically encountered in the DSN.

  13. An adaptive tracking observer for failure-detection systems

    NASA Technical Reports Server (NTRS)

    Sidar, M.

    1982-01-01

    The design problem of adaptive observers applied to linear multi-input, multi-output systems with constant or time-varying parameters is considered. It is shown that, in order to keep the observer's (or Kalman filter's) false-alarm rate (FAR) under a certain specified value, an acceptably close match between the observer (or KF) model and the system parameters is necessary. An adaptive observer algorithm is introduced in order to maintain the desired system-observer model matching, despite initial mismatching and/or system parameter variations. Only a properly designed adaptive observer is able to detect abrupt changes in the system (actuator or sensor failures, etc.) with adequate reliability and FAR. Conditions for convergence of the adaptive process were obtained, leading to a simple adaptive law (algorithm) with the possibility of an a priori choice of fixed adaptive gains. Simulation results show good tracking performance with small observer output errors and accurate and fast parameter identification, in both deterministic and stochastic cases.

  14. Expanding the Nucleotide and Sugar 1-Phosphate Promiscuity of Nucleotidyltransferase RmlA via Directed Evolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moretti, Rocco; Chang, Aram; Peltier-Pain, Pauline

    2012-03-15

    Directed evolution is a valuable technique to improve enzyme activity in the absence of a priori structural knowledge, and it can typically be enhanced via structure-guided strategies. In this study, a combination of whole-gene error-prone polymerase chain reaction and site-saturation mutagenesis enabled the rapid identification of mutations that improved RmlA activity toward non-native substrates. These mutations have been shown to improve activities over 10-fold for several targeted substrates, including non-native pyrimidine- and purine-based NTPs as well as non-native D- and L-sugars (both α- and β-isomers). This study highlights the first broadly applicable high-throughput sugar-1-phosphate nucleotidyltransferase screen and the first proof of concept for the directed evolution of this enzyme class toward the identification of uniquely permissive RmlA variants.

  15. Adaptive fuzzy prescribed performance control for MIMO nonlinear systems with unknown control direction and unknown dead-zone inputs.

    PubMed

    Shi, Wuxi; Luo, Rui; Li, Baoquan

    2017-01-01

    In this study, an adaptive fuzzy prescribed performance control approach is developed for a class of uncertain multi-input and multi-output (MIMO) nonlinear systems with unknown control direction and unknown dead-zone inputs. The properties of the symmetric matrix are exploited to design the adaptive fuzzy prescribed performance controller, and a Nussbaum-type function is incorporated in the controller to estimate the unknown control direction. This method has two prominent advantages: it does not require a priori knowledge of the control direction, and only three parameters need to be updated online for this MIMO system. It is proved that all the signals in the resulting closed-loop system are bounded and that the tracking errors converge to a small residual set within the prescribed performance bounds. The effectiveness of the proposed approach is validated by simulation results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Two prospective Li-based half-Heusler alloys for spintronic applications based on structural stability and spin–orbit effect

    DOE PAGES

    Zhang, R. L.; Damewood, L.; Zeng, Y. J.; ...

    2017-07-07

    To search for half-metallic materials for spintronic applications, instead of using an expensive trial-and-error experimental scheme, it is more efficient to use first-principles calculations to design materials first, and then grow them. In particular, using a priori information of the structural stability and the effect of the spin–orbit interaction (SOI) enables experimentalists to focus on favorable properties that make growing half-metals easier. We suggest that using acoustic phonon spectra is the best way to address the stability of promising half-metallic materials. Additionally, by carrying out accurate first-principles calculations, we propose two criteria for neglecting the SOI so the half-metallicity persists. As a result, based on the mechanical stability and the negligible SOI, we identified two half-metals, β-LiCrAs and β-LiMnSi, as promising half-Heusler alloys worth growing.

  17. Spectroscopic approach for dynamic bioanalyte tracking with minimal concentration information

    NASA Astrophysics Data System (ADS)

    Spegazzini, Nicolas; Barman, Ishan; Dingari, Narahara Chari; Pandey, Rishikesh; Soares, Jaqueline S.; Ozaki, Yukihiro; Dasari, Ramachandra Rao

    2014-11-01

    Vibrational spectroscopy has emerged as a promising tool for non-invasive, multiplexed measurement of blood constituents - an outstanding problem in biophotonics. Here, we propose a novel analytical framework that enables spectroscopy-based longitudinal tracking of chemical concentration without necessitating extensive a priori concentration information. The principal idea is to employ a concentration space transformation acquired from the spectral information, in which these spectral estimates are used together with the concentration profiles generated from the system kinetic model. Using blood glucose monitoring by Raman spectroscopy as an illustrative example, we demonstrate the efficacy of the proposed approach as compared to conventional calibration methods. Specifically, our approach exhibits a 35% reduction in error over partial least squares regression when applied to a dataset acquired from human subjects undergoing glucose tolerance tests. This method offers a new route to screening gestational diabetes and opens doors for continuous process monitoring without sample perturbation at intermediate time points.

  18. Optimization of Decision-Making for Spatial Sampling in the North China Plain, Based on Remote-Sensing a Priori Knowledge

    NASA Astrophysics Data System (ADS)

    Feng, J.; Bai, L.; Liu, S.; Su, X.; Hu, H.

    2012-07-01

    In this paper, MODIS remote sensing data, featured by low cost, high timeliness and moderate/low spatial resolution, over the North China Plain (NCP) study region were first used in a mixed-pixel spectral decomposition to extract a useful regionalized indicator parameter (RIP) from the initially selected indicators, namely the fraction/percentage of winter wheat planting area in each pixel, serving as the regionalized indicator variable (RIV) for spatial sampling. The RIV values were then analyzed spatially, and the spatial structure characteristics (i.e., spatial correlation and variation) of the NCP were obtained and further processed to derive scale-fitting, valid a priori knowledge for spatial sampling. Subsequently, founded upon the idea of rationally integrating probability-based and model-based sampling techniques and effectively utilizing the obtained a priori knowledge, spatial sampling models and design schemes were developed, together with their optimization and optimal selection, providing a scientific basis for improving and optimizing existing spatial sampling schemes for large-scale cropland remote sensing monitoring. Additionally, through an adaptive analysis and decision strategy, the optimal local spatial prediction and the gridded system of extrapolation results implement an adaptive reporting pattern of spatial sampling in accordance with report-covering units, in order to satisfy the actual needs of sampling surveys.

  19. What "Exactly" Do You Want Me to Do? Analysis of a Criterion Referenced Assessment Project

    ERIC Educational Resources Information Center

    Jewels, Tony; Ford, Marilyn; Jones, Wendy

    2007-01-01

    In tertiary institutions in Australia, and no doubt elsewhere, there is increasing pressure for accountability. No longer are academics assumed "a priori" to be responsible and capable of self management in teaching and assessing the subjects they run. Procedures are being dictated more from the "top down". Although academics…

  20. Analysis of Reaction Products and Conversion Time in the Pyrolisis of Cellulose and Wood Particles

    NASA Technical Reports Server (NTRS)

    Miller, R. S.; Bellan, J.

    1996-01-01

    A detailed mathematical model is presented for the temporally and spatially accurate modeling of solid-fluid reactions in porous particles for which volumetric reaction rate data are known a priori and for which both the porosity and the permeability of the particle are large enough to allow continuous gas flow.

  1. The Latent Curve ARMA (P, Q) Panel Model: Longitudinal Data Analysis in Educational Research and Evaluation

    ERIC Educational Resources Information Center

    Sivo, Stephen; Fan, Xitao

    2008-01-01

    Autocorrelated residuals in longitudinal data are widely reported as common to longitudinal data. Yet few, if any, researchers modeling growth processes evaluate a priori whether their data have this feature. Sivo, Fan, and Witta (2005) found that not modeling autocorrelated residuals present in longitudinal data severely biases latent curve…

  2. A Cross-National Study of the Relationship between Elderly Suicide Rates and Urbanization

    ERIC Educational Resources Information Center

    Shah, Ajit

    2008-01-01

    There is mixed evidence of a relationship between suicide rates in the general population and urbanization, and a paucity of studies examining this relationship in the elderly. A cross-national study with curve estimation regression model analysis, was undertaken to examine the a priori hypothesis that the relationship between elderly suicide…

  3. ACCURATE CHEMICAL MASTER EQUATION SOLUTION USING MULTI-FINITE BUFFERS

    PubMed Central

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-01-01

    The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks. PMID:27761104
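
    A toy version of the truncated-state-space idea is sketched below for a single birth-death reaction: the generator matrix is built on a finite buffer, the steady state is obtained from its null vector, and the probability mass at the boundary serves as a simple proxy for the truncation error. This is only a one-species illustration, not the ACME multi-buffer algorithm.

    ```python
    # Toy truncated chemical master equation: one birth-death reaction
    # (synthesis rate k_on, degradation rate k_off*n) on a buffer of size N.
    import numpy as np

    k_on, k_off, N = 20.0, 1.0, 80          # rates and buffer (max copy number)

    # Generator matrix A for states n = 0..N: dP/dt = A P.
    A = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        if n < N:
            A[n + 1, n] += k_on             # birth n -> n+1
            A[n, n] -= k_on
        if n > 0:
            A[n - 1, n] += k_off * n        # death n -> n-1
            A[n, n] -= k_off * n

    # Steady state: null vector of A, normalized to a probability distribution.
    w, v = np.linalg.eig(A)
    p = np.real(v[:, np.argmin(np.abs(w))])
    p = np.abs(p) / np.abs(p).sum()

    # Probability mass at the buffer boundary is a simple proxy for the
    # steady-state truncation error; enlarge N until it meets the tolerance.
    print("mean copy number:", (np.arange(N + 1) * p).sum())
    print("boundary mass (truncation-error proxy):", p[-1])
    ```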

  4. Implementation of dynamic bias for neutron-photon pulse shape discrimination by using neural network classifiers

    NASA Astrophysics Data System (ADS)

    Cao, Zhong; Miller, L. F.; Buckner, M.

    In order to accurately determine dose equivalent in radiation fields that include both neutrons and photons, it is necessary to measure the relative number of neutrons to photons and to characterize the energy dependence of the neutrons. The relationship between dose and dose equivalent begins to increase rapidly at about 100 keV; thus, it is necessary to separate neutrons from photons for neutron energies as low as about 100 keV in order to measure dose equivalent in a mixed radiation field that includes both neutrons and photons. Perceptron and back-propagation neural networks that use pulse amplitude and pulse rise time information obtain separation of neutrons and photons with about 5% error for neutrons with energies as low as 100 keV, and this is accomplished for neutrons with energies that range from 100 keV to several MeV. If the ratio of neutrons to photons is changed by a factor of 10, the classification error increases to about 15% for the neural networks tested. A technique that uses the output from the perceptron as a priori information for a Bayesian classifier is more robust to changes in the relative number of neutrons to photons, and it obtains a 5% classification error when this ratio is changed by a factor of ten. Results from this research demonstrate that it is feasible to use commercially available instrumentation in combination with artificial intelligence techniques to develop a practical detector that will accurately measure dose equivalent in mixed neutron-photon radiation fields.

  5. Quantum annealing correction with minor embedding

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Albash, Tameem; Paz-Silva, Gerardo; Hen, Itay; Lidar, Daniel A.

    2015-10-01

    Quantum annealing provides a promising route for the development of quantum optimization devices, but the usefulness of such devices will be limited in part by the range of implementable problems as dictated by hardware constraints. To overcome constraints imposed by restricted connectivity between qubits, a larger set of interactions can be approximated using minor embedding techniques whereby several physical qubits are used to represent a single logical qubit. However, minor embedding introduces new types of errors due to its approximate nature. We introduce and study quantum annealing correction schemes designed to improve the performance of quantum annealers in conjunction with minor embedding, thus leading to a hybrid scheme defined over an encoded graph. We argue that this scheme can be efficiently decoded using an energy minimization technique provided the density of errors does not exceed the per-site percolation threshold of the encoded graph. We test the hybrid scheme using a D-Wave Two processor on problems for which the encoded graph is a two-level grid and the Ising model is known to be NP-hard. The problems we consider are frustrated Ising model problem instances with "planted" (a priori known) solutions. Applied in conjunction with optimized energy penalties and decoding techniques, we find that this approach enables the quantum annealer to solve minor embedded instances with significantly higher success probability than it would without error correction. Our work demonstrates that quantum annealing correction can and should be used to improve the robustness of quantum annealing not only for natively embeddable problems but also when minor embedding is used to extend the connectivity of physical devices.
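
    For readers unfamiliar with minor embedding, the sketch below decodes logical spins from an annealing sample by majority vote over each physical-qubit chain; the paper argues for energy-minimization decoding, so this is only the simplest baseline, and the embedding and sample are hypothetical.

    ```python
    # Sketch of decoding logical spins from a minor-embedded annealing sample:
    # each logical qubit is a chain of physical qubits; a broken chain is
    # repaired here by majority vote (the simplest baseline decoder).
    from collections import Counter

    def decode_sample(sample, embedding):
        """sample: {physical_qubit: +1/-1}; embedding: {logical: [physical, ...]}."""
        logical = {}
        for q_logical, chain in embedding.items():
            votes = Counter(sample[q] for q in chain)
            # Majority value; ties broken deterministically toward +1.
            logical[q_logical] = +1 if votes[+1] >= votes[-1] else -1
        return logical

    # Hypothetical embedding: logical qubit 0 -> physical {0,1,2}, qubit 1 -> {3,4}.
    embedding = {0: [0, 1, 2], 1: [3, 4]}
    sample = {0: +1, 1: -1, 2: +1, 3: -1, 4: -1}    # chain of qubit 0 is "broken"
    print(decode_sample(sample, embedding))          # {0: +1, 1: -1}
    ```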

  6. Adapting the McMaster-Ottawa scale and developing behavioral anchors for assessing performance in an interprofessional Team Observed Structured Clinical Encounter.

    PubMed

    Lie, Désirée; May, Win; Richter-Lagha, Regina; Forest, Christopher; Banzali, Yvonne; Lohenry, Kevin

    2015-01-01

    Current scales for interprofessional team performance do not provide adequate behavioral anchors for performance evaluation. The Team Observed Structured Clinical Encounter (TOSCE) provides an opportunity to adapt and develop an existing scale for this purpose. We aimed to test the feasibility of using a retooled scale to rate performance in a standardized patient encounter and to assess faculty ability to accurately rate both individual students and teams. The 9-point McMaster-Ottawa Scale developed for a TOSCE was converted to a 3-point scale with behavioral anchors. Students from four professions were trained a priori to perform in teams of four at three different levels as individuals and teams. Blinded faculty raters were trained to use the scale to evaluate individual and team performances. G-theory was used to analyze ability of faculty to accurately rate individual students and teams using the retooled scale. Sixteen faculty, in groups of four, rated four student teams, each participating in the same TOSCE station. Faculty expressed comfort rating up to four students in a team within a 35-min timeframe. Accuracy of faculty raters varied (38-81% individuals, 50-100% teams), with errors in the direction of over-rating individual, but not team performance. There was no consistent pattern of error for raters. The TOSCE can be administered as an evaluation method for interprofessional teams. However, faculty demonstrate a 'leniency error' in rating students, even with prior training using behavioral anchors. To improve consistency, we recommend two trained faculty raters per station.

  7. Optimization of planar PIV-based pressure estimates in laminar and turbulent wakes

    NASA Astrophysics Data System (ADS)

    McClure, Jeffrey; Yarusevych, Serhiy

    2017-05-01

    The performance of four pressure estimation techniques using Eulerian material acceleration estimates from planar, two-component Particle Image Velocimetry (PIV) data was evaluated in a bluff body wake. To allow for a ground truth comparison of the pressure estimates, direct numerical simulations of flow over a circular cylinder were used to obtain synthetic velocity fields. Direct numerical simulations were performed for Re_D = 100, 300, and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A parametric study encompassing a range of temporal and spatial resolutions was performed for each Re_D. The effect of random noise typical of experimental velocity measurements was also evaluated. The results identified optimal temporal and spatial resolutions that minimize the propagation of random and truncation errors to the pressure field estimates. A model derived from linear error propagation through the material acceleration central difference estimators was developed to predict these optima, and showed good agreement with the results from common pressure estimation techniques. The results of the model are also shown to provide acceptable first-order approximations for sampling parameters that reduce error propagation when Lagrangian estimations of material acceleration are employed. For pressure integration based on planar PIV, the effect of flow three-dimensionality was also quantified, and shown to be most pronounced at higher Reynolds numbers downstream of the vortex formation region, where dominant vortices undergo substantial three-dimensional deformations. The results of the present study provide a priori recommendations for the use of pressure estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.

  8. Spacecraft-spacecraft radio-metric tracking: Signal acquisition requirements and application to Mars approach navigation

    NASA Technical Reports Server (NTRS)

    Kahn, R. D.; Thurman, S.; Edwards, C.

    1994-01-01

    Doppler and ranging measurements between spacecraft can be obtained only when the ratio of the total received signal power to noise power density (P_t/N_0) at the receiving spacecraft is sufficiently large that reliable signal detection can be achieved within a reasonable time period. In this article, the requirement on P_t/N_0 for reliable carrier signal detection is calculated as a function of various system parameters, including characteristics of the spacecraft computing hardware and a priori uncertainty in the spacecraft-spacecraft relative velocity and acceleration. Also calculated is the P_t/N_0 requirement for reliable detection of a ranging signal, consisting of a carrier with pseudonoise (PN) phase modulation. Once the P_t/N_0 requirement is determined, then for a given set of assumed spacecraft telecommunication characteristics (transmitted signal power, antenna gains, and receiver noise temperatures) it is possible to calculate the maximum range at which a carrier signal or ranging signal may be acquired. For example, if a Mars lander and a spacecraft approaching Mars are each equipped with 1-m-diameter antennas, the transmitted power is 5 W, and the receiver noise temperatures are 350 K, then S-band carrier signal acquisition can be achieved at ranges exceeding 10 million km. An error covariance analysis illustrates the utility of in situ Doppler and ranging measurements for Mars approach navigation. Covariance analysis results indicate that navigation accuracies of a few km can be achieved with either data type. The analysis also illustrates the dependency of the achievable accuracy on the approach trajectory velocity.
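
    A back-of-the-envelope version of the link calculation is sketched below using the free-space link equation with the transmit power, antenna sizes, and noise temperature quoted above; the S-band carrier frequency and aperture efficiency are assumed values, and detection thresholds are not modeled.

    ```python
    # Back-of-the-envelope received P_t/N_0 for a spacecraft-spacecraft S-band link.
    import numpy as np

    k_B = 1.380649e-23          # Boltzmann constant, J/K
    c = 2.998e8                 # speed of light, m/s

    f = 2.1e9                   # assumed S-band carrier frequency, Hz
    lam = c / f
    P_tx = 5.0                  # transmitted power, W
    D, eta = 1.0, 0.55          # antenna diameter (m) and assumed efficiency
    T_sys = 350.0               # receiver system noise temperature, K

    G = eta * (np.pi * D / lam) ** 2        # dish gain (linear)

    def pt_over_n0_dbhz(R):
        """Received total-power-to-noise-density ratio, in dB-Hz, at range R (m)."""
        P_rx = P_tx * G * G * (lam / (4 * np.pi * R)) ** 2   # free-space link equation
        return 10 * np.log10(P_rx / (k_B * T_sys))

    for R_km in (1e6, 1e7, 3e7):
        print(f"range {R_km:.0e} km: Pt/N0 = {pt_over_n0_dbhz(R_km * 1e3):.1f} dB-Hz")
    ```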

  9. Study protocol for a framework analysis using video review to identify latent safety threats: trauma resuscitation using in situ simulation team training (TRUST)

    PubMed Central

    Petrosoniak, Andrew; Pinkney, Sonia; Hicks, Christopher; White, Kari; Almeida, Ana Paula Siquiera Silva; Campbell, Douglas; McGowan, Melissa; Gray, Alice; Trbovich, Patricia

    2016-01-01

    Introduction Errors in trauma resuscitation are common and have been attributed to breakdowns in the coordination of system elements (eg, tools/technology, physical environment and layout, individual skills/knowledge, team interaction). These breakdowns are triggered by unique circumstances and may go unrecognised by trauma team members or hospital administrators; they can be described as latent safety threats (LSTs). Retrospective approaches to identifying LSTs (ie, after they occur) are likely to be incomplete and prone to bias. To date, prospective studies have not used video review as the primary mechanism to identify any and all LSTs in trauma resuscitation. Methods and analysis A series of 12 unannounced in situ simulations (ISS) will be conducted to prospectively identify LSTs at a level 1 Canadian trauma centre (over 800 dedicated trauma team activations annually). 4 scenarios have already been designed as part of this protocol based on 5 recurring themes found in the hospital's mortality and morbidity process. The actual trauma team will be activated to participate in the study. Each simulation will be audio/video recorded from 4 different camera angles and transcribed to conduct a framework analysis. Video reviewers will code the videos deductively based on a priori themes of LSTs identified from the literature, and/or inductively based on the events occurring in the simulation. LSTs will be prioritised to target interventions in future work. Ethics and dissemination Institutional research ethics approval has been acquired (SMH REB #15-046). Results will be published in peer-reviewed journals and presented at relevant conferences. Findings will also be presented to key institutional stakeholders to inform mitigation strategies for improved patient safety. PMID:27821600

  10. Developing and establishing the validity and reliability of the perceptions toward Aviation Safety Action Program (ASAP) and Line Operations Safety Audit (LOSA) questionnaires

    NASA Astrophysics Data System (ADS)

    Steckel, Richard J.

    Aviation Safety Action Program (ASAP) and Line Operations Safety Audits (LOSA) are voluntary safety reporting programs developed by the Federal Aviation Administration (FAA) to assist air carriers in discovering and fixing threats, errors and undesired aircraft states during normal flights that could result in a serious or fatal accident. These programs depend on voluntary participation of and reporting by air carrier pilots to be successful. The purpose of the study was to develop and validate a measurement scale to measure U.S. air carrier pilots' perceived benefits and/or barriers to participating in ASAP and LOSA programs. Data from these surveys could be used to make changes to these programs or to correct pilot misperceptions of them, in order to improve participation and the flow of data. ASAP and LOSA a priori models were developed based on previous research in aviation and healthcare. Sixty thousand ASAP and LOSA paper surveys were sent to 60,000 current U.S. air carrier pilots selected at random from an FAA database of pilot certificates. Two thousand usable ASAP and 1,970 usable LOSA surveys were returned and analyzed using confirmatory factor analysis. Analysis of the data using confirmatory factor analysis and model generation resulted in a five-factor ASAP model (Ease of Use, Value, Improve, Trust and Risk) and a five-factor LOSA model (Value, Improve, Program Trust, Risk and Management Trust). ASAP and LOSA data were not normally distributed, so bootstrapping was used. While both final models exhibited acceptable fit with approximate fit indices, the exact fit hypothesis and the Bollen-Stine p value indicated possible model mis-specification for both the ASAP and LOSA models.

  11. Mediterranean Diet and Cardiovascular Disease: A Critical Evaluation of A Priori Dietary Indexes

    PubMed Central

    D’Alessandro, Annunziata; De Pergola, Giovanni

    2015-01-01

    The aim of this paper is to analyze the a priori dietary indexes used in the studies that have evaluated the role of the Mediterranean Diet in influencing the risk of developing cardiovascular disease. All the studies show that this dietary pattern protects against cardiovascular disease, but studies show quite different effects on specific conditions such as coronary heart disease or cerebrovascular disease. A priori dietary indexes used to measure dietary exposure imply quantitative and/or qualitative divergences from the traditional Mediterranean Diet of the early 1960s, and, therefore, it is very difficult to compare the results of different studies. Based on real cultural heritage and traditions, we believe that the a priori indexes used to evaluate adherence to the Mediterranean Diet should consider classifying whole grains and refined grains, olive oil and monounsaturated fats, and wine and alcohol differently. PMID:26389950

  12. Data Prediction for Public Events in Professional Domains Based on Improved RNN- LSTM

    NASA Astrophysics Data System (ADS)

    Song, Bonan; Fan, Chunxiao; Wu, Yuexin; Sun, Juanjuan

    2018-02-01

    Traditional data services for predicting emergency or non-periodic events usually cannot generate satisfying results or fulfill the intended prediction purpose. However, these events are influenced by external causes, which means that certain a priori information about them can generally be collected through the Internet. This paper studies these problems and proposes an improved model, an LSTM (Long Short-Term Memory) dynamic prediction and a priori information sequence generation model, built by combining RNN-LSTM with a priori information about public events. In prediction tasks, the model reliably determines trends, and its accuracy is validated. The model yields better performance and prediction results than the previous one. Using a priori information increases prediction accuracy; LSTM better adapts to changes in the time sequence; and LSTM can be widely applied to the same type of prediction tasks as well as to other prediction tasks involving time sequences.
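
    A schematic of one way such a model could be assembled is shown below: an LSTM summarizes the historical series, and its final hidden state is concatenated with a vector of a priori event features before the prediction head. The fusion-by-concatenation design and all dimensions are assumptions for illustration, not the paper's exact architecture.

    ```python
    # Schematic of an LSTM predictor that also consumes a priori event features
    # gathered from the Internet (PyTorch); dimensions are illustrative.
    import torch
    import torch.nn as nn

    class PriorAwareLSTM(nn.Module):
        def __init__(self, series_dim=1, prior_dim=8, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(series_dim, hidden, batch_first=True)
            self.head = nn.Sequential(
                nn.Linear(hidden + prior_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, series, prior):
            # series: (batch, time, series_dim); prior: (batch, prior_dim)
            _, (h_n, _) = self.lstm(series)
            fused = torch.cat([h_n[-1], prior], dim=1)   # fuse history with a priori info
            return self.head(fused).squeeze(-1)

    model = PriorAwareLSTM()
    series = torch.randn(32, 30, 1)     # 32 sequences of 30 past observations
    prior = torch.randn(32, 8)          # 32 a priori event-feature vectors
    prediction = model(series, prior)   # next-step prediction for each sequence
    ```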

  13. Inferring river properties with SWOT like data

    NASA Astrophysics Data System (ADS)

    Garambois, Pierre-André; Monnier, Jérôme; Roux, Hélène

    2014-05-01

    Inverse problems in hydraulics, such as the estimation of river discharge, are still open questions. Remotely sensed measurements of hydrosystems can provide valuable information, but adequate methods are still required to exploit it. The future Surface Water and Ocean Topography (SWOT) mission would provide new cartographic measurements of inland water surfaces. The highlight of SWOT will be its almost global coverage and temporal revisits on the order of 1 to 4 times per 22-day repeat cycle [1]. Many studies have shown the possibility of retrieving discharge given the river bathymetry or roughness and/or in situ time series. The new challenge is to use SWOT-type data to invert the triplet formed by the roughness, the bathymetry and the discharge. The method presented here is composed of two steps: following an inverse formulation from [2], the first step consists in retrieving an equivalent bathymetry profile of a river given one in situ depth measurement and SWOT-like data of the water surface, that is to say water elevation, free surface slope and width. From this equivalent bathymetry, the second step consists in solving the mass and Manning equations in the least-squares sense [3]. Nevertheless, for cases where no in situ measurement of water depth is available, it is still possible to solve a system formed by the mass and Manning equations in the least-squares sense (or with other methods such as Bayesian ones, see e.g. [4]). We show that good a priori knowledge of bathymetry and roughness is compulsory for such methods. Depending on this a priori knowledge, the inversion of the triplet (roughness, bathymetry, discharge) in the SWOT context was evaluated on the Garonne River [5, 6]. The results are presented for 80 km of the Garonne River downstream of Toulouse in France [7]. An equivalent bathymetry is retrieved with less than 10% relative error from SWOT-like observations. After that, encouraging results are obtained with less than 10% relative error on the identified discharge. References [1] E. Rodriguez, SWOT science requirements document, JPL document, JPL, 2012. [2] A. Gessese, K. Wa, and M. Sellier, Bathymetry reconstruction based on the zero-inertia shallow water approximation, Theoretical and Computational Fluid Dynamics, vol. 27, no. 5, pp. 721-732, 2013. [3] P. A. Garambois and J. Monnier, Inference of river properties from remotely sensed observations of water surface, under final redaction for HESS, 2014. [4] M. Durand, Sacramento river airswot discharge estimation scenario. http://swotdawg.wordpress.com/2013/04/18/sacramento-river-airswot-discharge-estimation-scenario/, 2013. [5] P. A. Garambois and H. Roux, Garonne River discharge estimation. http://swotdawg.wordpress.com/2013/07/01/garonne-river-discharge-estimation/, 2013. [6] P. A. Garambois and H. Roux, Sensitivity of discharge uncertainty to measurement errors, case of the Garonne River. http://swotdawg.wordpress.com/2013/07/01/sensitivity-of-discharge-uncertainty-to-measurement-errors-case-of-the-garonne-river/, 2013. [7] H. Roux and P. A. Garambois, Tests of reach averaging and manning equation on the Garonne River. http://swotdawg.wordpress.com/2013/07/01/tests-of-reach-averaging-and-manning-equation-on-the-garonne-river/, 2013.
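
    To illustrate the second step in a hedged way, the sketch below fits a reach-averaged Manning's equation to SWOT-like observables in the least-squares sense, regularized toward an a priori roughness; the wide-rectangular geometry and all numbers are synthetic, not the Garonne test case.

    ```python
    # Sketch: fit discharge and roughness to reach-averaged SWOT-like observables
    # with Manning's equation, regularized toward an a priori roughness value.
    import numpy as np
    from scipy.optimize import least_squares

    # Per-reach observables: width W (m), slope S (-), and flow depth h (m)
    # (depth from the retrieved equivalent bathymetry plus water elevation).
    W = np.array([140.0, 155.0, 150.0, 160.0])
    S = np.array([4.0e-4, 3.5e-4, 3.8e-4, 3.2e-4])
    h = np.array([3.1, 2.9, 3.0, 2.8])

    n_prior, sigma_n = 0.030, 0.005      # a priori Manning roughness and its spread

    def residuals(params):
        Q, n = params
        # Manning's equation for a wide rectangular channel: Q = (1/n) W h^(5/3) S^(1/2)
        q_model = (1.0 / n) * W * h ** (5.0 / 3.0) * np.sqrt(S)
        # Discharge misfit (roughly 10% relative weighting) plus the roughness prior term.
        return np.concatenate([(q_model - Q) / (0.1 * Q + 1.0), [(n - n_prior) / sigma_n]])

    fit = least_squares(residuals, x0=[300.0, n_prior], bounds=([1.0, 0.01], [5000.0, 0.1]))
    Q_hat, n_hat = fit.x
    print(f"discharge ~ {Q_hat:.0f} m3/s, roughness ~ {n_hat:.3f}")
    ```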

  14. Adaptive Photothermal Emission Analysis Techniques for Robust Thermal Property Measurements of Thermal Barrier Coatings

    NASA Astrophysics Data System (ADS)

    Valdes, Raymond

    The characterization of thermal barrier coating (TBC) systems is increasingly important because they enable gas turbine engines to operate at high temperatures and efficiency. Phase of photothermal emission analysis (PopTea) has been developed to analyze the thermal behavior of the ceramic top-coat of TBCs, as a nondestructive and noncontact method for measuring thermal diffusivity and thermal conductivity. Most TBC applications are on actively cooled, high-temperature turbine blades, which makes it difficult to precisely model heat transfer in the metallic subsystem. This reduces the ability of rote thermal modeling to reflect the actual physical conditions of the system and can lead to higher uncertainty in measured thermal properties. This dissertation investigates fundamental issues underpinning robust thermal property measurements that are adaptive to non-specific, complex, and evolving system characteristics using the PopTea method. A generic and adaptive subsystem PopTea thermal model was developed to account for complex geometry beyond a well-defined coating and substrate system. Without a priori knowledge of the subsystem characteristics, two different measurement techniques were implemented using the subsystem model. In the first technique, the properties of the subsystem were resolved as part of the PopTea parameter estimation algorithm, and the second technique independently resolved the subsystem properties using a differential "bare" subsystem. The confidence in thermal properties measured using the generic subsystem model is similar to that from a standard PopTea measurement on a "well-defined" TBC system. Non-systematic bias-error on experimental observations in PopTea measurements due to generic thermal model discrepancies was also mitigated using a regression-based sensitivity analysis. The sensitivity analysis reported measurement uncertainty and was developed into a data reduction method to filter out these "erroneous" observations. It was found that the adverse impact of bias-error can be greatly reduced, leaving measurement observations with only random Gaussian noise in PopTea thermal property measurements. Quantifying the influence of the coating-substrate interface in PopTea measurements is important to resolving the thermal conductivity of the coating. However, the reduced significance of this interface in thicker coating systems can give rise to large uncertainties in thermal conductivity measurements. A first step towards improving PopTea measurements for such circumstances has been taken by implementing absolute temperature measurements using harmonically-sustained two-color pyrometry. Although promising, even small uncertainties in thermal emission observations were found to lead to significant noise in temperature measurements. However, PopTea analysis of bulk graphite samples was able to resolve their thermal conductivity to the expected literature values.

  15. Evaluating A Priori Ozone Profile Information Used in TEMPO Tropospheric Ozone Retrievals

    NASA Technical Reports Server (NTRS)

    Johnson, Matthew S.; Sullivan, John T.; Liu, Xiong; Newchurch, Mike; Kuang, Shi; McGee, Thomas J.; Langford, Andrew O'Neil; Senff, Christoph J.; Leblanc, Thierry; Berkoff, Timothy

    2016-01-01

    Ozone (O3) is a greenhouse gas and toxic pollutant which plays a major role in air quality. Typically, monitoring of surface air quality and O3 mixing ratios is primarily conducted using in situ measurement networks. This is partially due to high-quality information related to air quality being limited from space-borne platforms due to coarse spatial resolution, limited temporal frequency, and minimal sensitivity to lower tropospheric and surface-level O3. The Tropospheric Emissions: Monitoring of Pollution (TEMPO) satellite is designed to address these limitations of current space-based platforms and to improve our ability to monitor North American air quality. TEMPO will provide hourly data of total column and vertical profiles of O3 with high spatial resolution to be used as a near-real-time air quality product. TEMPO O3 retrievals will apply the Smithsonian Astrophysical Observatory profile algorithm developed based on work from GOME, GOME-2, and OMI. This algorithm uses a priori O3 profile information from a climatological data-base developed from long-term ozone-sonde measurements (tropopause-based (TB) O3 climatology). It has been shown that satellite O3 retrievals are sensitive to a priori O3 profiles and covariance matrices. During this work we investigate the climatological data to be used in TEMPO algorithms (TB O3) and simulated data from the NASA GMAO Goddard Earth Observing System (GEOS-5) Forward Processing (FP) near-real-time (NRT) model products. These two data products will be evaluated with ground-based lidar data from the Tropospheric Ozone Lidar Network (TOLNet) at various locations of the US. This study evaluates the TB climatology, GEOS-5 climatology, and 3-hourly GEOS-5 data compared to lower tropospheric observations to demonstrate the accuracy of a priori information to potentially be used in TEMPO O3 algorithms. Here we present our initial analysis and the theoretical impact on TEMPO retrievals in the lower troposphere.

  16. Evaluating a Priori Ozone Profile Information Used in TEMPO Tropospheric Ozone Retrievals

    NASA Technical Reports Server (NTRS)

    Johnson, Matthew S.; Sullivan, John; Liu, Xiong; Newchurch, Mike; Kuang, Shi; McGee, Thomas; Langford, Andrew; Senff, Chris; Leblanc, Thierry; Berkoff, Timothy

    2016-01-01

    Ozone (O3) is a greenhouse gas and toxic pollutant which plays a major role in air quality. Typically, monitoring of surface air quality and O3 mixing ratios is primarily conducted using in situ measurement networks. This is partially due to high-quality information related to air quality being limited from space-borne platforms due to coarse spatial resolution, limited temporal frequency, and minimal sensitivity to lower tropospheric and surface-level O3. The Tropospheric Emissions: Monitoring of Pollution (TEMPO) satellite is designed to address these limitations of current space-based platforms and to improve our ability to monitor North American air quality. TEMPO will provide hourly data of total column and vertical profiles of O3 with high spatial resolution to be used as a near-real-time air quality product.TEMPO O3 retrievals will apply the Smithsonian Astrophysical Observatory profile algorithm developed based on work from GOME, GOME-2, and OMI. This algorithm uses a priori O3 profile information from a climatological data-base developed from long-term ozone-sonde measurements (tropopause-based (TB) O3 climatology). It has been shown that satellite O3 retrievals are sensitive to a priori O3 profiles and covariance matrices. During this work we investigate the climatological data to be used in TEMPO algorithms (TB O3) and simulated data from the NASA GMAO Goddard Earth Observing System (GEOS-5) Forward Processing (FP) near-real-time (NRT) model products. These two data products will be evaluated with ground-based lidar data from the Tropospheric Ozone Lidar Network (TOLNet) at various locations of the US. This study evaluates the TB climatology, GEOS-5 climatology, and 3-hourly GEOS-5 data compared to lower tropospheric observations to demonstrate the accuracy of a priori information to potentially be used in TEMPO O3 algorithms. Here we present our initial analysis and the theoretical impact on TEMPO retrievals in the lower troposphere.

  17. Evaluating A Priori Ozone Profile Information Used in TEMPO Tropospheric Ozone Retrievals

    NASA Astrophysics Data System (ADS)

    Johnson, M. S.; Sullivan, J. T.; Liu, X.; Newchurch, M.; Kuang, S.; McGee, T. J.; Langford, A. O.; Senff, C. J.; Leblanc, T.; Berkoff, T.; Gronoff, G.; Chen, G.; Strawbridge, K. B.

    2016-12-01

    Ozone (O3) is a greenhouse gas and toxic pollutant which plays a major role in air quality. Typically, monitoring of surface air quality and O3 mixing ratios is primarily conducted using in situ measurement networks. This is partially due to high-quality information related to air quality being limited from space-borne platforms due to coarse spatial resolution, limited temporal frequency, and minimal sensitivity to lower tropospheric and surface-level O3. The Tropospheric Emissions: Monitoring of Pollution (TEMPO) satellite is designed to address these limitations of current space-based platforms and to improve our ability to monitor North American air quality. TEMPO will provide hourly data of total column and vertical profiles of O3 with high spatial resolution to be used as a near-real-time air quality product. TEMPO O3 retrievals will apply the Smithsonian Astrophysical Observatory profile algorithm developed based on work from GOME, GOME-2, and OMI. This algorithm uses a priori O3 profile information from a climatological data-base developed from long-term ozone-sonde measurements (tropopause-based (TB) O3 climatology). It has been shown that satellite O3 retrievals are sensitive to a priori O3 profiles and covariance matrices. During this work we investigate the climatological data to be used in TEMPO algorithms (TB O3) and simulated data from the NASA GMAO Goddard Earth Observing System (GEOS-5) Forward Processing (FP) near-real-time (NRT) model products. These two data products will be evaluated with ground-based lidar data from the Tropospheric Ozone Lidar Network (TOLNet) at various locations of the US. This study evaluates the TB climatology, GEOS-5 climatology, and 3-hourly GEOS-5 data compared to lower tropospheric observations to demonstrate the accuracy of a priori information to potentially be used in TEMPO O3 algorithms. Here we present our initial analysis and the theoretical impact on TEMPO retrievals in the lower troposphere.

  18. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
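
    A minimal sketch of the balanced one-factor random-effects computation is given below, with patients as the random factor and synthetic per-fraction displacements standing in for measured setup errors.

    ```python
    # ANOVA-based variance component estimation for setup errors under a
    # balanced one-factor random-effects model (patients as the random factor).
    import numpy as np

    rng = np.random.default_rng(0)
    n_patients, n_fractions = 20, 5
    sigma_sys, sigma_rand, mu = 1.5, 2.0, 0.5          # "true" values used to simulate
    patient_means = rng.normal(mu, sigma_sys, n_patients)
    data = patient_means[:, None] + rng.normal(0, sigma_rand, (n_patients, n_fractions))

    grand_mean = data.mean()
    ms_between = n_fractions * ((data.mean(axis=1) - grand_mean) ** 2).sum() / (n_patients - 1)
    ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n_patients * (n_fractions - 1))

    sigma_rand_hat = np.sqrt(ms_within)                                      # random component
    sigma_sys_hat = np.sqrt(max(ms_between - ms_within, 0.0) / n_fractions)  # systematic component
    print(f"mean = {grand_mean:.2f} mm, Sigma = {sigma_sys_hat:.2f} mm, sigma = {sigma_rand_hat:.2f} mm")
    ```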

  19. SCGICAR: Spatial concatenation based group ICA with reference for fMRI data analysis.

    PubMed

    Shi, Yuhu; Zeng, Weiming; Wang, Nizhuan

    2017-09-01

    With the rapid development of big data, multi-subject functional magnetic resonance imaging (fMRI) data analysis is becoming more and more important. As a kind of blind source separation technique, group independent component analysis (GICA) has been widely applied to multi-subject fMRI data analysis. However, spatially concatenated GICA is rarely used compared with temporally concatenated GICA because of its disadvantages. In this paper, in order to overcome these issues, and considering that the ability of GICA for fMRI data analysis can be improved by adding a priori information, we propose a novel spatial concatenation based GICA with reference (SCGICAR) method that takes advantage of the a priori information extracted from the group subjects; a multi-objective optimization strategy is then used to implement this method. Finally, the post-processing means of principal component analysis and anti-reconstruction are used to obtain the group spatial component and the individual temporal component in the group, respectively. The experimental results show that the proposed SCGICAR method achieves better performance on both single-subject and multi-subject fMRI data analysis compared with classical methods. It not only detects more accurate spatial and temporal components for each subject of the group, but also obtains a better group component in both the temporal and spatial domains. These results demonstrate that the proposed SCGICAR method has its own advantages in comparison with classical methods, and it can better reflect the commonness of subjects in the group. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. North American CO2 fluxes for 2007-2015 from NOAA's CarbonTracker-Lagrange Regional Inverse Modeling Framework

    NASA Astrophysics Data System (ADS)

    Andrews, A. E.; Hu, L.; Thoning, K. W.; Nehrkorn, T.; Mountain, M. E.; Jacobson, A. R.; Michalak, A.; Dlugokencky, E. J.; Sweeney, C.; Worthy, D. E. J.; Miller, J. B.; Fischer, M. L.; Biraud, S.; van der Velde, I. R.; Basu, S.; Tans, P. P.

    2017-12-01

    CarbonTracker-Lagrange (CT-L) is a new high-resolution regional inverse modeling system for improved estimation of North American CO2 fluxes. CT-L uses footprints from the Stochastic Time-Inverted Lagrangian Transport (STILT) model driven by high-resolution (10 to 30 km) meteorological fields from the Weather Research and Forecasting (WRF) model. We performed a suite of synthetic-data experiments to evaluate a variety of inversion configurations, including (1) solving for scaling factors to an a priori flux versus additive corrections, (2) solving for fluxes at 3-hourly resolution versus coarser temporal resolution, and (3) solving for fluxes at 1° × 1° resolution versus large eco-regional scales. Our framework explicitly and objectively solves for the optimal solution with a full error covariance matrix using maximum likelihood estimation, thereby enabling rigorous uncertainty estimates for the derived fluxes. In the synthetic-data inversions, we find that solving for weekly scaling factors of a priori Net Ecosystem Exchange (NEE) at 1° × 1° resolution with optimization of the diurnal cycles of CO2 fluxes retrieves the specified "true" fluxes as faithfully as solving at 3-hourly resolution. In contrast, a scheme that does not allow for optimization of the diurnal cycles of CO2 fluxes suffers from larger aggregation errors. We then applied the optimal inversion setup to estimate North American fluxes for 2007-2015 using real atmospheric CO2 observations, multiple prior estimates of NEE, and multiple boundary values estimated from NOAA's global Eulerian CarbonTracker and from an empirical approach. Our derived North American land CO2 fluxes show a larger seasonal amplitude than those estimated by CarbonTracker, removing seasonal biases in CarbonTracker's simulated CO2 mole fractions. Independent evaluations using in situ CO2 eddy covariance flux measurements and independent aircraft profiles also suggest improved estimation of North American CO2 fluxes from CT-L. Furthermore, our derived CO2 flux anomalies over North America corresponding to the 2012 North American drought and the 2015 El Niño are larger than those derived by CarbonTracker, and they indicate different responses of ecosystems to those anomalous climatic events.
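
    For readers unfamiliar with the maximum likelihood estimation step mentioned above, the following Python sketch solves a generic linear Gaussian inversion with full prior and observation error covariances; it is not the CT-L code, and the footprint matrix, covariances and scaling factors are synthetic stand-ins.

```python
import numpy as np

# Generic Bayesian/maximum-likelihood linear inversion sketch:
#   y = H s + e,  prior s ~ N(s_prior, Q),  observation error e ~ N(0, R).
rng = np.random.default_rng(2)
n_obs, n_flux = 50, 10
H = rng.standard_normal((n_obs, n_flux))         # hypothetical footprint/Jacobian matrix
s_true = rng.normal(1.0, 0.3, n_flux)            # "true" scaling factors on a prior NEE
R = 0.25 * np.eye(n_obs)                         # observation error covariance
Q = 0.09 * np.eye(n_flux)                        # prior flux error covariance
y = H @ s_true + rng.multivariate_normal(np.zeros(n_obs), R)

s_prior = np.ones(n_flux)
Rinv, Qinv = np.linalg.inv(R), np.linalg.inv(Q)
posterior_cov = np.linalg.inv(H.T @ Rinv @ H + Qinv)            # full posterior covariance
s_hat = s_prior + posterior_cov @ H.T @ Rinv @ (y - H @ s_prior)

print("posterior scaling factors:", np.round(s_hat, 2))
print("1-sigma uncertainties:    ", np.round(np.sqrt(np.diag(posterior_cov)), 2))
```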

  1. Functional Connectivity Between Superior Parietal Lobule and Primary Visual Cortex "at Rest" Predicts Visual Search Efficiency.

    PubMed

    Bueichekú, Elisenda; Ventura-Campos, Noelia; Palomar-García, María-Ángeles; Miró-Padilla, Anna; Parcet, María-Antonia; Ávila, César

    2015-10-01

    Spatiotemporal activity that emerges spontaneously "at rest" has been proposed to reflect individual a priori biases in cognitive processing. This research focused on testing neurocognitive models of visual attention by studying the functional connectivity (FC) of the superior parietal lobule (SPL), given its central role in establishing priority maps during visual search tasks. Twenty-three human participants completed a functional magnetic resonance imaging session that featured a resting-state scan, followed by a visual search task based on the alphanumeric category effect. As expected, the behavioral results showed longer reaction times and more errors for the within-category search (i.e., searching for a target letter among letters) than for the between-category search (i.e., searching for a target letter among numbers). The within-category condition was related to greater activation of the superior and inferior parietal lobules, occipital cortex, inferior frontal cortex, dorsal anterior cingulate cortex, and the superior colliculus than the between-category search. The resting-state FC analysis of the SPL revealed a broad network that included connections with the inferotemporal cortex, dorsolateral prefrontal cortex, and dorsal frontal areas such as the supplementary motor area and frontal eye field. Notably, the regression analysis revealed that participants who were more efficient in the visual search showed stronger FC between the SPL and areas of primary visual cortex (V1) related to the search task. We shed some light on how the SPL establishes a priority map of the environment during visual attention tasks and on how FC is a valuable tool for assessing individual differences in the performance of cognitive tasks.

  2. DAISY: a new software tool to test global identifiability of biological and physiological systems

    PubMed Central

    Bellu, Giuseppina; Saccomani, Maria Pia; Audoly, Stefania; D’Angiò, Leontina

    2009-01-01

    A priori global identifiability is a structural property of biological and physiological models. It is considered a prerequisite for well-posed estimation, since it concerns the possibility of uniquely recovering the unknown model parameters from measured input-output data under ideal conditions (noise-free observations and an error-free model structure). Determining whether the parameters can be uniquely recovered from observed data is essential before investing resources, time and effort in performing actual biomedical experiments. Many interesting biological models are nonlinear, but identifiability analysis for nonlinear systems turns out to be a difficult mathematical problem. Different methods have been proposed in the literature to test the identifiability of nonlinear models but, to the best of our knowledge, no software tools have so far been proposed for automatically checking it. In this paper, we describe a software tool implementing a differential algebra algorithm to perform parameter identifiability analysis for (linear and) nonlinear dynamic models described by polynomial or rational equations. Our goal is to provide the biological investigator with a completely automated software tool requiring minimal prior knowledge of mathematical modelling and no in-depth understanding of the underlying mathematical machinery. The DAISY (Differential Algebra for Identifiability of SYstems) software will potentially be useful in biological modelling studies, especially in physiology and clinical medicine, where research experiments are particularly expensive and/or difficult to perform. Practical examples of use of the software tool DAISY are presented. DAISY is available at the web site http://www.dei.unipd.it/~pia/. PMID:17707944

  3. Psychometric properties of the World Health Organization Disability Assessment Schedule used in the European Study of the Epidemiology of Mental Disorders.

    PubMed

    Buist-Bouwman, M A; Ormel, J; De Graaf, R; Vilagut, G; Alonso, J; Van Sonderen, E; Vollebergh, W A M

    2008-01-01

    This study assessed the factor structure, internal consistency, and discriminatory validity of the World Health Organization Disability Assessment Schedule (WHODAS) version used in the European Study of the Epidemiology of Mental Disorders (ESEMeD). In total, 8796 adults were assessed using the ESEMeD WHODAS (22 severity and 8 frequency items). An exploratory factor analysis (EFA) with promax rotation was performed on a random 50% of the sample. The other half was used for confirmatory factor analysis (CFA), comparing models (a) suggested by the EFA, (b) hypothesized a priori, and (c) reduced by four items. A CFA model with covariates was conducted in the whole sample to assess invariance across Mediterranean (Spain, France and Italy) and non-Mediterranean (Belgium, Germany and the Netherlands) countries. Cronbach's alphas and discriminatory validity were also examined. The EFA identified seven factors (explained variance: 80%). The reduced model (six factors, four frequency items excluded) presented the best fit [Comparative Fit Index (CFI) = 0.992, Tucker-Lewis Index (TLI) = 0.996, Root Mean Square Error of Approximation (RMSEA) = 0.024]. The second-order factor structure also fitted well (CFI = 0.987, TLI = 0.991, RMSEA = 0.036). Measurement non-invariance was found for Embarrassment. Cronbach's alphas ranged from 0.84 for Participation to 0.93 for Mobility. Preliminary data suggest acceptable discriminatory validity. Thus, the ESEMeD WHODAS may well be a valuable shortened version of the WHODAS-II, but future users should reconsider the filter questions. Copyright (c) 2008 John Wiley & Sons, Ltd.

  4. Added-value joint source modelling of seismic and geodetic data

    NASA Astrophysics Data System (ADS)

    Sudhaus, Henriette; Heimann, Sebastian; Walter, Thomas R.; Krueger, Frank

    2013-04-01

    In tectonically active regions, earthquake source studies strongly support the analysis of current faulting processes, as they reveal the location and geometry of active faults, the average slip released, and more. For source modelling of shallow, moderate to large earthquakes, a combination of geodetic (GPS, InSAR) and seismic data is often used. A truly joint use of these data, however, usually takes place only at a higher modelling level, where some of the first-order characteristics (time, centroid location, fault orientation, moment) have already been fixed. These required basic model parameters have to be given, assumed or inferred in a previous, separate and highly non-linear modelling step using one of these data sets alone. We present a new earthquake rupture model implementation that realizes a fully combined data integration of surface displacement measurements and seismic data in a non-linear optimization of simple but extended planar ruptures. The implementation allows fast forward calculations of full seismograms and surface deformation and therefore enables us to use Monte Carlo global search algorithms. Furthermore, we benefit from the complementary character of seismic and geodetic data, e.g., the high precision of the source location from geodetic data and the sensitivity of the seismic data to moment release at greater depth. These increased constraints from the combined dataset make optimizations efficient, even for larger model parameter spaces and with very few a priori assumptions about the source. A vital part of our approach is rigorous data weighting based on empirically estimated data errors. We construct full data error variance-covariance matrices for the geodetic data to account for correlated data noise and also weight the seismic data based on their signal-to-noise ratio. The estimation of the data errors and the fast forward modelling open the door for Bayesian inferences of the source model parameters. The resulting source model then features parameter uncertainty estimates and reveals parameter trade-offs that arise from imperfect data coverage and data errors. We applied our new source modelling approach to the 2010 Haiti earthquake, for which a number of apparently different seismic, geodetic and joint source models have already been reported, mostly without formal estimates of the model parameter uncertainties. We show that the variability of all these source models seems to arise from inherent model parameter trade-offs and mostly has little statistical significance; e.g., even using a large dataset comprising seismic and geodetic data, the confidence interval of the fault dip remains as wide as about 20 degrees.
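
    A minimal Python sketch of the covariance-based data weighting described above is given below: geodetic residuals are weighted with a full variance-covariance matrix that allows for correlated noise, seismic residuals with variances scaled by signal-to-noise ratio, and the two chi-square terms are summed into a joint misfit. All residuals, covariance values and SNRs are invented; this is a generic illustration rather than the authors' implementation.

```python
import numpy as np

# Generic sketch of a covariance-weighted joint misfit for geodetic + seismic data.
rng = np.random.default_rng(3)

# Geodetic residuals with correlated noise: full variance-covariance matrix (metres^2).
n_geo = 30
C_geo = 0.005 ** 2 * (0.3 * np.ones((n_geo, n_geo)) + 0.7 * np.eye(n_geo))
r_geo = rng.multivariate_normal(np.zeros(n_geo), C_geo)

# Seismic residuals weighted by signal-to-noise ratio (diagonal covariance).
n_seis = 40
snr = rng.uniform(2.0, 10.0, n_seis)
sigma_seis = 0.1 / snr                       # hypothetical noise level per trace
r_seis = rng.normal(0.0, sigma_seis)
C_seis = np.diag(sigma_seis ** 2)

chi2_geo = r_geo @ np.linalg.solve(C_geo, r_geo)
chi2_seis = r_seis @ np.linalg.solve(C_seis, r_seis)
joint_misfit = chi2_geo + chi2_seis          # objective a Monte Carlo search would minimize
print(f"geodetic chi2: {chi2_geo:.1f}, seismic chi2: {chi2_seis:.1f}, joint: {joint_misfit:.1f}")
```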

  5. The Influence of Observation Errors on Analysis Error and Forecast Skill Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, R. M.; Tai, K.-S.

    2013-01-01

    The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller errors than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations, with magnitudes of applied observation error varying from zero to twice the estimated realistic error, are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120-hour forecast, increased observation error yields only a slight decline in forecast skill in the extratropics and no discernible degradation of forecast skill in the tropics.

  6. Light emitting diode excitation emission matrix fluorescence spectroscopy.

    PubMed

    Hart, Sean J; JiJi, Renée D

    2002-12-01

    An excitation emission matrix (EEM) fluorescence instrument has been developed using a linear array of light emitting diodes (LEDs). The wavelengths covered extend from the upper UV through the visible spectrum: 370-640 nm. Using an LED array to excite fluorescence emission at multiple excitation wavelengths is a low-cost alternative to an expensive high-power lamp and imaging spectrograph. The LED-EEM system is a departure from other EEM spectroscopy systems in that LEDs often have broad excitation ranges which may overlap with neighboring channels. The LED array can be considered a hybrid between a spectroscopic and a sensor system, as the broad LED excitation range produces a partially selective optical measurement. The instrument has been tested and characterized using fluorescent dyes: limits of detection (LOD) for 9,10-bis(phenylethynyl)-anthracene and rhodamine B were in the mid parts-per-trillion range; detection limits for the other compounds were in the low parts-per-billion range (< 5 ppb). The LED-EEMs were analyzed using parallel factor analysis (PARAFAC), which allowed the mathematical resolution of the individual contributions of the mono- and dianion fluorescein tautomers a priori. Correct identification and quantitation of six fluorescent dyes in two- to six-component mixtures (concentrations between 12.5 and 500 ppb) were achieved, with root mean squared errors of prediction (RMSEP) of less than 4.0 ppb for all components.
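
    As a schematic of the trilinear decomposition underlying PARAFAC (not the authors' processing chain), the Python sketch below builds a synthetic sample × excitation × emission cube and factors it; it assumes the tensorly library is available, and the dye spectra, concentrations and noise level are invented.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Hedged PARAFAC sketch on a synthetic EEM cube (non-negativity constraints omitted
# for brevity); assumes tensorly is installed.
rng = np.random.default_rng(4)
n_samples, n_excitation, n_emission, n_dyes = 12, 8, 50, 3

conc = rng.uniform(0.0, 1.0, (n_samples, n_dyes))            # hypothetical concentrations
ex_profiles = rng.uniform(0.0, 1.0, (n_excitation, n_dyes))  # broad LED excitation profiles
em_profiles = rng.uniform(0.0, 1.0, (n_emission, n_dyes))    # emission spectra

# Trilinear EEM cube: samples x excitation x emission, plus measurement noise.
eem = np.einsum('ik,jk,lk->ijl', conc, ex_profiles, em_profiles)
eem += rng.normal(0.0, 0.01, eem.shape)

weights, factors = parafac(tl.tensor(eem), rank=n_dyes, normalize_factors=True)
scores, ex_est, em_est = factors
print("recovered sample-mode scores shape:", scores.shape)   # (n_samples, n_dyes)
```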

  7. Ex Priori: Exposure-based Prioritization across Chemical Space

    EPA Science Inventory

    EPA's Exposure Prioritization (Ex Priori) is a simplified, quantitative visual dashboard that makes use of data from various inputs to provide a rank-ordered internalized dose metric. This complements other high throughput screening by viewing exposures within all chemical space si...

  8. Reporting and methodological quality of sample size calculations in cluster randomized trials could be improved: a review.

    PubMed

    Rutterford, Clare; Taljaard, Monica; Dixon, Stephanie; Copas, Andrew; Eldridge, Sandra

    2015-06-01

    To assess the quality of reporting and accuracy of a priori estimates used in sample size calculations for cluster randomized trials (CRTs). We reviewed 300 CRTs published between 2000 and 2008. The prevalence of reporting sample size elements from the 2004 CONSORT recommendations was evaluated and a priori estimates compared with those observed in the trial. Of the 300 trials, 166 (55%) reported a sample size calculation. Only 36 of 166 (22%) reported all recommended descriptive elements. Elements specific to CRTs were the worst reported: a measure of within-cluster correlation was specified in only 58 of 166 (35%). Only 18 of 166 articles (11%) reported both a priori and observed within-cluster correlation values. Except in two cases, observed within-cluster correlation values were either close to or less than a priori values. Even with the CONSORT extension for cluster randomization, the reporting of sample size elements specific to these trials remains below that necessary for transparent reporting. Journal editors and peer reviewers should implement stricter requirements for authors to follow CONSORT recommendations. Authors should report observed and a priori within-cluster correlation values to enable comparisons between these over a wider range of trials. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
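
    To make the role of the a priori within-cluster correlation concrete, the short Python sketch below applies the standard design-effect inflation 1 + (m − 1) × ICC to an individually randomized sample size; the numbers are hypothetical and not taken from the reviewed trials.

```python
from math import ceil

def crt_sample_size(n_individual: int, cluster_size: int, icc: float) -> int:
    """Inflate an individually randomized sample size by the design effect
    1 + (m - 1) * ICC, where m is the average cluster size."""
    design_effect = 1.0 + (cluster_size - 1) * icc
    return ceil(n_individual * design_effect)

n_individual = 400     # hypothetical sample size ignoring clustering
cluster_size = 20
icc = 0.05             # a priori within-cluster correlation
n_total = crt_sample_size(n_individual, cluster_size, icc)
print(f"design effect inflates {n_individual} to {n_total} participants "
      f"({ceil(n_total / cluster_size)} clusters given the assumed cluster size)")
```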

  9. Performance and quality assessment of the recent updated CMEMS global ocean monitoring and forecasting real-time system

    NASA Astrophysics Data System (ADS)

    Le Galloudec, Olivier; Lellouche, Jean-Michel; Greiner, Eric; Garric, Gilles; Régnier, Charly; Drévillon, Marie; Drillet, Yann

    2017-04-01

    In May 2015, Mercator Ocean opened the Copernicus Marine Environment Monitoring Service (CMEMS) and has since been in charge of the global eddy-resolving ocean analyses and forecasts. In this context, Mercator Ocean currently delivers daily real-time services (weekly analyses and daily forecasts) with a global 1/12° high-resolution system. The model component is the NEMO platform, driven at the surface by the IFS ECMWF atmospheric analyses and forecasts. Observations are assimilated by means of a reduced-order Kalman filter with a 3D multivariate modal decomposition of the forecast error. It includes an adaptive error estimate and a localization algorithm. Along-track altimeter data, satellite Sea Surface Temperature and in situ temperature and salinity vertical profiles are jointly assimilated to estimate the initial conditions for numerical ocean forecasting. A 3D-Var scheme provides a correction for the slowly evolving large-scale biases in temperature and salinity. R&D activities have been conducted at Mercator Ocean in recent years to improve the real-time 1/12° global system for the updated CMEMS version in 2016. The ocean/sea-ice model and the assimilation scheme benefited from the following improvements: large-scale and objective correction of atmospheric quantities with satellite data, a new Mean Dynamic Topography taking into account the latest version of the GOCE geoid, new adaptive tuning of some observational errors, a new quality control on the assimilated temperature and salinity vertical profiles based on dynamic height criteria, assimilation of satellite sea-ice concentration, new freshwater runoff from ice sheet melting, … This presentation will show the impact of some updates separately, with a particular focus on adaptive tuning experiments for the satellite Sea Level Anomaly (SLA) and Sea Surface Temperature (SST) observation errors. For the SLA, the a priori prescribed observation error is greatly reduced globally. The median value of the error changed from 5 cm to 2.5 cm in a few assimilation cycles. For the SST, we chose to maintain the median value of the error at 0.4°C. The spatial distribution of the SST error follows the model physics and atmospheric variability. For both SLA and SST, the adaptive tuning improves the performance of the system. The overall behavior of the system integrating all updates will also be discussed, reporting on product quality improvements and highlighting the level of performance and the reliability of the new system.

  10. Measurement Invariance of Big-Five Factors over the Life Span: ESEM Tests of Gender, Age, Plasticity, Maturity, and La Dolce Vita Effects

    ERIC Educational Resources Information Center

    Marsh, Herbert W.; Nagengast, Benjamin; Morin, Alexandre J. S.

    2013-01-01

    This substantive-methodological synergy applies evolving approaches to factor analysis to substantively important developmental issues of how five-factor-approach (FFA) personality measures vary with gender, age, and their interaction. Confirmatory factor analyses (CFAs) conducted at the item level often do not support a priori FFA structures, due…

  11. Adult Judgments and Fine-Grained Analysis of Infant Facial Expressions: Testing the Validity of A Priori Coding Formulas.

    ERIC Educational Resources Information Center

    Oster, Harriet; And Others

    1992-01-01

    Compared subjects' judgments about emotions expressed by the faces of infants pictured in slides to predictions made by the Max system of measuring emotional expression. Judgments did not coincide with Max predictions for fear, anger, sadness, and disgust. Results indicated that expressions of negative affect by infants are not fully…

  12. An Information Analysis of 2-, 3-, and 4-Word Verbal Discrimination Learning.

    ERIC Educational Resources Information Center

    Arima, James K.; Gray, Francis D.

    Information theory was used to quantify the difficulty of verbal discrimination (VD) learning tasks and to measure VD performance. Words for VD items were selected with high background frequency and equal a priori probabilities of being selected as a first response. Three VD lists containing only 2-, 3-, or 4-word items were created and equated for…

  13. Prospective Elementary Teachers' Perceptions of the Processes of Modeling: A Case Study

    ERIC Educational Resources Information Center

    Fazio, Claudio; Di Paola, Benedetto; Guastella, Ivan

    2012-01-01

    In this paper we discuss a study on the approaches to modeling of students of the 4-year elementary school teacher program at the University of Palermo, Italy. The answers to a specially designed questionnaire are analyzed on the basis of an "a priori" analysis made using a general scheme of reference on the epistemology of mathematics…

  14. Dynamic competitive probabilistic principal components analysis.

    PubMed

    López-Rubio, Ezequiel; Ortiz-DE-Lazcano-Lobato, Juan Miguel

    2009-04-01

    We present a new neural model which extends the classical competitive learning (CL) by performing a Probabilistic Principal Components Analysis (PPCA) at each neuron. The model also has the ability to learn the number of basis vectors required to represent the principal directions of each cluster, so it overcomes a drawback of most local PCA models, where the dimensionality of a cluster must be fixed a priori. Experimental results are presented to show the performance of the network with multispectral image data.

  15. Space-Wise approach for airborne gravity data modelling

    NASA Astrophysics Data System (ADS)

    Sampietro, D.; Capponi, M.; Mansi, A. H.; Gatti, A.; Marchetti, P.; Sansò, F.

    2017-05-01

    Regional gravity field modelling by means of the remove-compute-restore procedure is nowadays widely applied in different contexts: it is the most widely used technique for regional gravimetric geoid determination, and it is also used in exploration geophysics to predict grids of gravity anomalies (Bouguer, free-air, isostatic, etc.), which are useful for understanding and mapping geological structures in a specific region. For this last application, given the required accuracy and resolution, airborne gravity observations are usually adopted. However, due to the relatively high acquisition velocity, the presence of atmospheric turbulence, aircraft vibration, instrumental drift, etc., airborne data are usually contaminated by a very high observation error. For this reason, a proper procedure to filter the raw observations in both the low and high frequencies should be applied to recover valuable information. In this work, software to filter and grid raw airborne observations is presented: the proposed solution consists of a combination of an along-track Wiener filter and a classical Least Squares Collocation technique. Basically, the proposed procedure is an adaptation to airborne gravimetry of the Space-Wise approach, developed by Politecnico di Milano to process data from the ESA satellite mission GOCE. Among the main differences with respect to the satellite application of this approach is the fact that, while in processing GOCE data the stochastic characteristics of the observation error can be considered well known a priori, in airborne gravimetry, due to the complex environment in which the observations are acquired, these characteristics are unknown and must be retrieved from the dataset itself. The presented solution is suited to airborne data analysis, allowing gravity observations to be filtered and gridded quickly and easily. Some innovative theoretical aspects, focusing in particular on covariance modelling, are presented as well. Finally, the quality of the procedure is evaluated in a test on real data, retrieving the gravitational signal with a predicted accuracy of about 0.4 mGal.
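
    The following Python sketch illustrates the Least Squares Collocation step of such a filter-and-grid workflow in its simplest form, predicting a gridded signal from noisy along-track observations as C_go (C_oo + C_nn)^-1 y; the Gaussian covariance model, track geometry and noise variance are assumptions made for the example, not the Space-Wise software.

```python
import numpy as np

# Minimal least-squares collocation (LSC) gridding sketch with an assumed
# isotropic Gaussian signal covariance and white observation noise.
rng = np.random.default_rng(5)

def gaussian_cov(x1, x2, variance=25.0, corr_len=5.0):
    """Gaussian covariance of the gravity signal (variance in mGal^2, lengths in km)."""
    d = np.linalg.norm(x1[:, None, :] - x2[None, :, :], axis=-1)
    return variance * np.exp(-(d / corr_len) ** 2)

obs_pts = rng.uniform(0.0, 50.0, (200, 2))          # hypothetical observation locations (km)
grid_pts = np.stack(np.meshgrid(np.linspace(0, 50, 20),
                                np.linspace(0, 50, 20)), axis=-1).reshape(-1, 2)

C_oo = gaussian_cov(obs_pts, obs_pts)               # signal covariance at observations
C_go = gaussian_cov(grid_pts, obs_pts)              # grid-to-observation covariance
noise_var = 4.0                                     # assumed observation error variance (mGal^2)

obs = rng.multivariate_normal(np.zeros(len(obs_pts)),
                              C_oo + noise_var * np.eye(len(obs_pts)))

# LSC prediction: signal_grid = C_go (C_oo + C_nn)^-1 * observations
pred = C_go @ np.linalg.solve(C_oo + noise_var * np.eye(len(obs_pts)), obs)
print("gridded gravity anomaly field shape:", pred.reshape(20, 20).shape)
```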

  16. MR-based keyhole SPECT for small animal imaging

    PubMed Central

    Lee, Keum Sil; Roeck, Werner W; Gullberg, Grant T; Nalcioglu, Orhan

    2011-01-01

    The rationale for multi-modality imaging is to integrate the strengths of different imaging technologies while reducing the shortcomings of an individual modality. The work presented here proposes a limited-field-of-view (LFOV) SPECT reconstruction technique that can be implemented on a multi-modality MR/SPECT system used to obtain simultaneous MRI and SPECT images for small animal imaging. The reason for using a combined MR/SPECT system in this work is to eliminate any possible misregistration between the two sets of images when MR images are used as a priori information for SPECT. In nuclear imaging the target area is usually smaller than the entire object; thus, focusing the detector on the LFOV offers various advantages, including the use of a smaller nuclear detector (lower cost), a smaller reconstruction region (faster reconstruction) and higher spatial resolution when used in conjunction with pinhole collimators with magnification. The MR/SPECT system can be used to choose a region of interest (ROI) for SPECT. A priori information obtained from the full field-of-view (FOV) MRI, combined with the preliminary SPECT image, can be used to reduce the dimensions of the SPECT reconstruction by limiting the computation to the smaller FOV while reducing artifacts resulting from the truncated data. Since the technique is based on SPECT imaging within the LFOV, it is called the keyhole SPECT (K-SPECT) method. First, MRI images of the entire object are obtained using a larger FOV to determine the location of the ROI covering the target organ. Once the ROI is determined, the animal is moved inside the radiofrequency (rf) coil to bring the target area inside the LFOV, and then simultaneous MRI and SPECT are performed. The spatial resolution of the SPECT image is improved by employing a pinhole collimator with magnification >1 and carefully calculated acceptance angles for each pinhole to avoid multiplexing. In our design all the pinholes are focused on the center of the LFOV. K-SPECT reconstruction is accomplished by generating an adaptive weighting matrix using a priori information obtained from the simultaneously acquired MR images and the radioactivity distribution obtained from the ROI region of the SPECT image reconstructed without any a priori input. Preliminary results using simulations with numerical phantoms show that the resolution of the SPECT image within the LFOV is improved while artifacts arising from parts of the object outside the LFOV are minimized, owing to the chosen magnification and the new reconstruction technique. The root-mean-square error (RMSE) in the out-of-field artifacts was reduced by 60% for spherical phantoms using the K-SPECT reconstruction technique and by 48.5–52.6% for the heart in the case of the MOBY phantom. The K-SPECT reconstruction technique significantly improved the spatial resolution and quantification while reducing artifacts from contributions outside the LFOV as well as reducing the dimension of the reconstruction matrix. PMID:21220840

  17. Gray Matter Alterations in Adults with Attention-Deficit/Hyperactivity Disorder Identified by Voxel Based Morphometry

    PubMed Central

    Seidman, Larry J.; Biederman, Joseph; Liang, Lichen; Valera, Eve M.; Monuteaux, Michael C.; Brown, Ariel; Kaiser, Jonathan; Spencer, Thomas; Faraone, Stephen V.; Makris, Nikos

    2014-01-01

    Background: Gray and white matter volume deficits have been reported in many structural magnetic resonance imaging (MRI) studies of children with attention-deficit/hyperactivity disorder (ADHD); however, there is a paucity of structural MRI studies of adults with ADHD. This study used voxel based morphometry and applied an a priori region-of-interest approach based on our previous work as well as on well-developed neuroanatomical theories of ADHD. Methods: Seventy-four adults with DSM-IV ADHD and 54 healthy control subjects comparable on age, sex, race, handedness, IQ, reading achievement, frequency of learning disabilities, and whole brain volume had an MRI on a 1.5T Siemens scanner. A priori region-of-interest hypotheses focused on reduced volumes in ADHD in the dorsolateral prefrontal cortex, anterior cingulate cortex, caudate, putamen, inferior parietal lobule, and cerebellum. Analyses were carried out with FSL-VBM 1.1. Results: Relative to control subjects, ADHD adults had significantly smaller gray matter volumes in parts of six of these regions at p ≤ .01, whereas parts of the dorsolateral prefrontal cortex and inferior parietal lobule were significantly larger in ADHD at this threshold. However, a number of other regions were either smaller or larger in ADHD (especially fronto-orbital cortex) at this threshold. Only the caudate remained significantly smaller after family-wise error correction. Conclusions: Adults with ADHD have subtle volume reductions in the caudate and possibly other brain regions involved in attention and executive control, supporting frontostriatal models of ADHD. The modest group brain volume differences are discussed in the context of the nature of the samples studied and of voxel based morphometry methodology. PMID:21183160

  18. Evaluation of Space-Based Constraints on Global Nitrogen Oxide Emissions with Regional Aircraft Measurements over and Downwind of Eastern North America

    NASA Technical Reports Server (NTRS)

    Martin, Randall V.; Sioris, Christopher E.; Chance, Kelly; Ryerson, Thomas B.; Flocke, Frank M.; Bertram, Timothy H.; Wooldridge, Paul J.; Cohen, Ronald C.; Neuman, J. Andy; Swanson, Aaron

    2006-01-01

    We retrieve tropospheric nitrogen dioxide (NO2) columns for May 2004 to April 2005 from the SCIAMACHY satellite instrument to derive top-down emissions of nitrogen oxides (NOx = NO + NO2) via inverse modeling with a global chemical transport model (GEOS-Chem). Simulated NO2 vertical profiles used in the retrieval are evaluated with airborne measurements over and downwind of North America (ICARTT); a northern midlatitude lightning source of 1.6 Tg N/yr minimizes bias in the retrieval. Retrieved NO2 columns are validated (r2 = 0.60, slope = 0.82) with coincident airborne in situ measurements. The top-down emissions are combined with a priori information from a bottom-up emission inventory, with error weighting, to achieve an improved a posteriori estimate of the global distribution of surface NOx emissions. Our a posteriori inventory of land surface NOx emissions (46.1 Tg N/yr) is 22% larger than the GEIA-based a priori bottom-up inventory for 1998, a difference that reflects rising anthropogenic emissions, especially from East Asia. A posteriori NOx emissions for East Asia (9.8 Tg N/yr) exceed those from other continents. The a posteriori inventory improves the GEOS-Chem simulation of NOx, peroxyacetylnitrate, and nitric acid with respect to airborne in situ measurements over and downwind of New York City. The a posteriori inventory is 7% larger than the EDGAR 3.2FT2000 global inventory, 3% larger than the NEI99 inventory for the United States, and 68% larger than a regional inventory for 2000 for eastern Asia. SCIAMACHY NO2 columns over the North Atlantic show a weak plume from lightning NOx.
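
    The error weighting mentioned above can be illustrated with a simple inverse-variance combination of a bottom-up (a priori) and a top-down estimate; the Python sketch below uses invented regional totals and uncertainties rather than the study's gridded inventories.

```python
import numpy as np

# Hedged illustration: inverse-variance weighting of an a priori (bottom-up)
# and a top-down emission estimate into an a posteriori estimate.
E_apriori, sigma_apriori = 6.0, 2.0     # hypothetical regional NOx emissions, Tg N/yr
E_topdown, sigma_topdown = 9.0, 1.5     # hypothetical satellite-derived top-down estimate

w_a = 1.0 / sigma_apriori ** 2
w_t = 1.0 / sigma_topdown ** 2
E_aposteriori = (w_a * E_apriori + w_t * E_topdown) / (w_a + w_t)
sigma_aposteriori = np.sqrt(1.0 / (w_a + w_t))

print(f"a posteriori estimate: {E_aposteriori:.1f} +/- {sigma_aposteriori:.1f} Tg N/yr")
```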

  19. Prototypic Development and Evaluation of a Medium Format Metric Camera

    NASA Astrophysics Data System (ADS)

    Hastedt, H.; Rofallski, R.; Luhmann, T.; Rosenbauer, R.; Ochsner, D.; Rieke-Zapp, D.

    2018-05-01

    Engineering applications require high-precision 3D measurement techniques for object sizes that vary between small volumes (2-3 m in each direction) and large volumes (around 20 x 20 x 1-10 m). The requested precision in object space (1σ RMS) is defined to be within 0.1-0.2 mm for large volumes and less than 0.01 mm for small volumes. In particular, focusing on large-volume applications, the availability of a metric camera would offer several advantages: 1) high-quality optical components and stabilisation allow for a stable interior geometry of the camera itself, 2) a stable geometry leads to a stable interior orientation that enables a priori camera calibration, and 3) a higher resulting precision can be expected. In this article, the development and accuracy evaluation of a new metric camera, the ALPA 12 FPS add|metric, are presented. Its general accuracy potential is tested against calibrated lengths in a small-volume test environment based on the German Guideline VDI/VDE 2634.1 (2002). Maximum length measurement errors of less than 0.025 mm are achieved across the different scenarios tested. The accuracy potential for large volumes is estimated within a feasibility study on the application of photogrammetric measurements for deformation estimation on a large wooden shipwreck in the German Maritime Museum. An accuracy of 0.2-0.4 mm is reached for a length of 28 m (given by a distance from a laser tracker network measurement). All analyses have shown a high stability of the interior orientation of the camera and indicate the applicability of a priori camera calibration for subsequent 3D measurements.

  20. The performance of Yonsei CArbon Retrieval (YCAR) algorithm with improved aerosol information using GOSAT measurements over East Asia

    NASA Astrophysics Data System (ADS)

    Jung, Y.; Kim, J.; Kim, W.; Boesch, H.; Yoshida, Y.; Cho, C.; Lee, H.; Goo, T. Y.

    2016-12-01

    The Greenhouse Gases Observing SATellite (GOSAT) is the first satellite dedicated to measuring atmospheric CO2 concentrations from space, with the potential to improve our knowledge of the carbon cycle. Several studies have been performed to develop CO2 retrieval algorithms using GOSAT measurements, but limited spatial coverage and uncertainties due to aerosols and thin cirrus clouds remain problems for monitoring CO2 concentrations globally. In this study, we develop the Yonsei CArbon Retrieval (YCAR) algorithm, based on the optimal estimation method, to retrieve the column-averaged dry-air mole fraction of carbon dioxide (XCO2) with optimized a priori CO2 profiles and aerosol models over East Asia. In previous studies, the aerosol optical properties (AOPs) and the aerosol top height were found to cause significant errors in retrieved XCO2 of up to 2.5 ppm. Since this bias arises from rough assumptions about aerosols in the forward model used in the CO2 retrieval process, the YCAR algorithm improves the treatment of AOPs as well as the aerosol vertical distribution: the total AOD and the fine-mode fraction (FMF) are obtained from closely located ground-based measurements, and other parameters are obtained from a priori information. Compared with ground-based XCO2 measurements, the YCAR XCO2 product has biases of 0.59±0.48 ppm and 2.16±0.87 ppm at the Saga and Tsukuba sites, respectively, showing lower biases and higher correlations than the GOSAT standard products. These results reveal that better aerosol information can improve the accuracy of the CO2 retrieval algorithm and provide more useful XCO2 information with reduced uncertainties.

  1. Self-Regulation and Executive Functioning as Related to Survival in Motor Neuron Disease: Preliminary Findings.

    PubMed

    Garcia-Willingham, Natasha E; Roach, Abbey R; Kasarskis, Edward J; Segerstrom, Suzanne C

    2018-05-16

    Disease progression varies widely among patients with motor neuron disease (MND). Patients with MND and coexisting dementia have shorter survival. However, the implications of mild cognitive and behavioral difficulties are unclear. The present study examined the relative contribution of executive functioning and self-regulation difficulties to survival over a 6-year period among patients with MND who scored largely within normal limits on cognitive and behavioral indices. Patients with MND (N=37, age=59.97±11.57, 46% female) completed the Wisconsin Card Sorting Task (WCST) as an executive functioning perseveration index. The Behavior Rating Inventory of Executive Functions (BRIEF-A) was used as a behavioral measure of self-regulation in two subdomains: self-regulatory behavior (Behavioral Regulation) and self-regulatory problem-solving (Metacognition). Cox proportional hazard regression analyses were used. In total, 23 patients died during follow-up. In Cox proportional hazard regressions adjusted for a priori covariates, each 10-point T-score increment in patient-reported BRIEF-A self-regulatory behavior and problem-solving difficulties increased mortality risk by 94% and 103%, respectively (adjusted HR=1.94, 95% CI [1.07, 3.52]; adjusted HR=2.03, 95% CI [1.19, 3.48]). In sensitivity analyses, patient-reported self-regulatory problem-solving remained a significant predictor independent of disease severity and a priori covariates (adjusted HR=1.68, 95% CI [1.01, 2.78]), though the predictive value of self-regulatory behavior was attenuated in adjusted models (HR=1.67, 95% CI [0.85, 3.27]). Caregiver-reported BRIEF-A ratings of patients and WCST perseverative errors did not significantly predict survival. Preliminary evidence suggests that patient-reported self-regulatory problem-solving difficulties indicate a poorer prognosis in MND. Further research is needed to uncover the mechanisms that negatively affect patient survival.
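
    A hedged Python sketch of a covariate-adjusted Cox proportional hazards model of this kind is shown below; it assumes the lifelines package, and the simulated T-scores, ages and survival times are purely illustrative, not the study's data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hedged sketch: Cox regression of survival on a BRIEF-A-like T-score, adjusted
# for an a priori covariate (age). All data are simulated for illustration.
rng = np.random.default_rng(8)
n = 80
brief_t = rng.normal(55.0, 10.0, n)                 # hypothetical Metacognition T-scores
age = rng.normal(60.0, 10.0, n)                     # a priori covariate

# Survival times whose hazard increases with the T-score; censor at 72 months.
baseline = rng.exponential(60.0, n)
time = baseline * np.exp(-0.03 * (brief_t - 55.0))
event = (time < 72.0).astype(int)
time = np.minimum(time, 72.0)

df = pd.DataFrame({"survival_months": time, "event": event,
                   "brief_metacog_t": brief_t, "age": age})

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_months", event_col="event")
cph.print_summary()

# Hazard ratio per 10-point T-score increment, analogous to the HRs quoted above.
hr_per_10 = np.exp(10.0 * cph.params_["brief_metacog_t"])
print(f"HR per 10-point increment: {hr_per_10:.2f}")
```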

  2. Re-assessing Present Day Global Mass Transport and Glacial Isostatic Adjustment From a Data Driven Approach

    NASA Astrophysics Data System (ADS)

    Wu, X.; Jiang, Y.; Simonsen, S.; van den Broeke, M. R.; Ligtenberg, S.; Kuipers Munneke, P.; van der Wal, W.; Vermeersen, B. L. A.

    2017-12-01

    Determining present-day mass transport (PDMT) is complicated by the fact that most observations contain signals from both present-day ice melting and Glacial Isostatic Adjustment (GIA). Despite decades of progress in geodynamic modeling and new observations, significant uncertainties remain in both. The key to separating present-day ice mass change from GIA signals is to include data with different physical characteristics. We designed an approach to separate PDMT and GIA signatures by estimating them simultaneously using globally distributed interdisciplinary data with distinct physical information and a dynamically constructed a priori GIA model. We conducted a high-resolution global reappraisal of the present-day ice mass balance, with a focus on Earth's polar regions and their contribution to global sea-level rise, using a combination of ICESat data, GRACE gravity, surface geodetic velocity data, and an ocean bottom pressure model. Adding ice altimetry supplies critically needed dual data types over the interiors of ice-covered regions to enhance the separation of PDMT and GIA signatures, and it is expected to yield accuracies half an order of magnitude higher for GIA and, consequently, for ice mass balance estimates. The global data based approach can adequately address issues of PDMT- and GIA-induced geocenter motion and long-wavelength signatures important for large areas such as Antarctica and for global mean sea level. In conjunction with the dense altimetry data, we solved for PDMT coefficients up to degree and order 180 by using a higher-resolution GRACE data set and a high-resolution a priori PDMT model that includes detailed geographic boundaries. The high-resolution approach solves the problem of multiple resolutions in various data types, greatly reduces aliasing errors from a low-degree truncation, and at the same time enhances the separation of signatures from adjacent regions such as Greenland and the Canadian Arctic territories.

  3. Terrain clutter simulation using physics-based scattering model and digital terrain profile data

    NASA Astrophysics Data System (ADS)

    Park, James; Johnson, Joel T.; Ding, Kung-Hau; Kim, Kristopher; Tenbarge, Joseph

    2015-05-01

    Localization of a wireless capsule endoscope finds many clinical applications, from diagnostics to therapy. There are two potential approaches to electromagnetic-wave-based localization: a) signal-propagation-model-based localization using a priori information about the person's dielectric channels, and b) recently developed microwave-imaging-based localization that uses no a priori information about the person's dielectric channels. In this paper, we study the second approach in terms of localization accuracy for a variety of frequencies and signal-to-noise ratios. To this end, we select a 2-D anatomically realistic numerical phantom for microwave imaging at different frequencies. The selected frequencies are 13.56 MHz, 431.5 MHz, 920 MHz, and 2380 MHz, which are typically considered for medical applications. Microwave imaging of a phantom provides an electromagnetic model with the electrical properties (relative permittivity and conductivity) of the internal parts of the body and can be useful as a foundation for localization of an in-body RF source. Low-frequency imaging at 13.56 MHz provides a low-resolution image with high contrast in the dielectric properties. However, at high frequencies, the imaging algorithm is able to image only the outer boundaries of the tissues due to the low penetration depth, as higher frequency means higher attenuation. Furthermore, a recently developed localization method based on microwave imaging is used to estimate the localization accuracy at different frequencies and signal-to-noise ratios. Statistical evaluation of the localization error is performed using the cumulative distribution function (CDF). Based on our results, we conclude that the localization accuracy is minimally affected by the frequency or the noise. However, the choice of frequency becomes critical if the purpose of the method is to image the internal parts of the body for tumor and/or cancer detection.

  4. High-degree Gravity Models from GRAIL Primary Mission Data

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Goossens, Sander J.; Sabaka, Terence J.; Nicholas, Joseph B.; Mazarico, Erwan; Rowlands, David D.; Loomis, Bryant D.; Chinn, Douglas S.; Caprette, Douglas S.; Neumann, Gregory A.; hide

    2013-01-01

    We have analyzed Ka-band range rate (KBRR) and Deep Space Network (DSN) data from the Gravity Recovery and Interior Laboratory (GRAIL) primary mission (1 March to 29 May 2012) to derive gravity models of the Moon to degree 420, 540, and 660 in spherical harmonics. For these models, GRGM420A, GRGM540A, and GRGM660PRIM, a Kaula constraint was applied only beyond degree 330. Variance-component estimation (VCE) was used to adjust the a priori weights and obtain a calibrated error covariance. The global root-mean-square error in the gravity anomalies computed from the error covariance to 320×320 is 0.77 mGal, compared to 29.0 mGal with the pre-GRAIL model derived with the SELENE mission data, SGM150J, only to 140×140. The global correlations with the Lunar Orbiter Laser Altimeter-derived topography are larger than 0.985 between l = 120 and 330. The free-air gravity anomalies, especially over the lunar farside, display a dramatic increase in detail compared to the pre-GRAIL models (SGM150J and LP150Q) and, through degree 320, are free of the orbit-track-related artifacts present in the earlier models. For GRAIL, we obtain an a posteriori fit to the S-band DSN data of 0.13 mm/s. The a posteriori fits to the KBRR data range from 0.08 to 1.5 micrometers/s for GRGM420A and from 0.03 to 0.06 micrometers/s for GRGM660PRIM. Using the GRAIL data, we obtain solutions for the degree 2 Love numbers, k20 = 0.024615 ± 0.0000914, k21 = 0.023915 ± 0.0000132, and k22 = 0.024852 ± 0.0000167, and a preliminary solution for the k30 Love number of k30 = 0.00734 ± 0.0015, where the Love number error sigmas are those obtained with VCE.

  5. Collecting Kinematic Data on a Ski Track with Optoelectronic Stereophotogrammetry: A Methodological Study Assessing the Feasibility of Bringing the Biomechanics Lab to the Field

    PubMed Central

    Müller, Erich

    2016-01-01

    In the laboratory, optoelectronic stereophotogrammetry is one of the most commonly used motion capture systems, particularly when position- or orientation-related analyses of human movements are intended. However, for many applied research questions, field experiments are indispensable, and it is not a priori clear whether optoelectronic stereophotogrammetric systems can be expected to perform as they do in in-lab experiments. This study aimed to assess the instrumental errors of kinematic data collected on a ski track using optoelectronic stereophotogrammetry, and to investigate the magnitudes of additional skiing-specific errors and soft tissue/suit artifacts. During a field experiment, the kinematic data of different static and dynamic tasks were captured using 24 infrared cameras. The distances between three passive markers attached to a rigid bar were stereophotogrammetrically reconstructed and subsequently compared to the manufacturer-specified exact values. While at rest or skiing at low speed, the optoelectronic stereophotogrammetric system's accuracy and precision for determining inter-marker distances were found to be comparable to those known for in-lab experiments (< 1 mm). However, when measuring a skier's kinematics under "typical" skiing conditions (i.e., high speeds, inclined/angulated postures and moderate snow spraying), additional errors were found to occur for distances between equipment-fixed markers (total measurement errors: 2.3 ± 2.2 mm). Moreover, for distances between skin-fixed markers, such as the anterior hip markers, additional artifacts were observed (total measurement errors: 8.3 ± 7.1 mm). In summary, these values can be considered sufficient for the detection of meaningful position- or orientation-related differences in alpine skiing. However, it must be emphasized that the use of optoelectronic stereophotogrammetry on a ski track is seriously constrained by limited practical usability, small capture volumes and the occurrence of extensive snow spraying (which results in marker obscuration). The latter limitation might be overcome by the use of more sophisticated cluster-based marker sets. PMID:27560498

  6. Discharge prediction in the Upper Senegal River using remote sensing data

    NASA Astrophysics Data System (ADS)

    Ceccarini, Iacopo; Raso, Luciano; Steele-Dunne, Susan; Hrachowitz, Markus; Nijzink, Remko; Bodian, Ansoumana; Claps, Pierluigi

    2017-04-01

    The Upper Senegal River, West Africa, is a poorly gauged basin. Nevertheless, discharge predictions are required in this river for the optimal operation of the downstream Manantali reservoir, flood forecasting, development plans for the entire basin and studies on adaptation to climate change. Despite the need for reliable discharge predictions, currently available rainfall-runoff models for this basin provide only poor performance, particularly during extreme regimes, both low flow and high flow. In this research we develop a rainfall-runoff model that combines remote-sensing input data and a priori knowledge of catchment physical characteristics. This semi-distributed model is based on conceptual numerical descriptions of hydrological processes at the catchment scale. Because of the lack of reliable input data from ground observations, we use Tropical Rainfall Measuring Mission (TRMM) remote-sensing data for precipitation and the Global Land Evaporation Amsterdam Model (GLEAM) for the terrestrial potential evaporation. The model parameters are selected by a combination of calibration, matching the observed output and a large set of hydrological signatures, and a priori knowledge of the catchment. The Generalized Likelihood Uncertainty Estimation (GLUE) method was used to choose the most likely range to which the parameter sets belong. Analysis of different experiments enhances our understanding of the added value of distributed remote-sensing data and a priori information in rainfall-runoff modelling. Results of this research will be used for decision making at different scales, contributing to a rational use of water resources in this river.
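
    A minimal GLUE-style sketch is given below to show how "behavioural" parameter sets can be selected: parameter sets are sampled, each simulation is scored with a likelihood measure (here the Nash–Sutcliffe efficiency), and sets above a threshold are retained. The toy linear-reservoir model, the rainfall series and the 0.7 threshold are stand-ins for the actual semi-distributed model and data.

```python
import numpy as np

# GLUE-style acceptance of behavioural parameter sets on a toy rainfall-runoff model.
rng = np.random.default_rng(6)
rain = rng.gamma(2.0, 3.0, 365)                                   # hypothetical daily rainfall
q_obs = np.convolve(0.4 * rain, np.exp(-np.arange(10) / 3.0), mode="same")  # synthetic "observed" flow

def toy_model(rain, runoff_coeff, recession):
    """Very simple linear-reservoir stand-in for the catchment model."""
    return np.convolve(runoff_coeff * rain, np.exp(-np.arange(10) / recession), mode="same")

def nse(sim, obs):
    """Nash-Sutcliffe efficiency used as the informal likelihood measure."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

samples = [(rng.uniform(0.1, 0.8), rng.uniform(1.0, 6.0)) for _ in range(2000)]
scores = np.array([nse(toy_model(rain, c, k), q_obs) for c, k in samples])

behavioural = [p for p, s in zip(samples, scores) if s > 0.7]     # GLUE acceptance threshold
print(f"{len(behavioural)} behavioural parameter sets out of {len(samples)}")
```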

  7. Mapping the Moho with seismic surface waves: Sensitivity, resolution, and recommended inversion strategies

    NASA Astrophysics Data System (ADS)

    Lebedev, Sergei; Adam, Joanne; Meier, Thomas

    2013-04-01

    Seismic surface waves have been used to study the Earth's crust since the early days of modern seismology. In the last decade, surface-wave crustal imaging has been rejuvenated by the emergence of new array techniques (ambient-noise and teleseismic interferometry). The strong sensitivity of both Rayleigh and Love waves to the Moho is evident from a mere visual inspection of their dispersion curves or waveforms. Yet, strong trade-offs between the Moho depth and crustal and mantle structure in surface-wave inversions have prompted doubts regarding their capacity to resolve the Moho. Although the Moho depth has been an inversion parameter in numerous surface-wave studies, the resolution of Moho properties yielded by a surface-wave inversion is still somewhat uncertain and controversial. We use model-space mapping in order to elucidate surface waves' sensitivity to the Moho depth and the resolution of their inversion for it. If seismic wavespeeds within the crust and upper mantle are known, then Moho-depth variations of a few kilometres produce large (over 1 per cent) perturbations in phase velocities. However, in inversions of surface-wave data with no a priori information (wavespeeds not known), strong Moho-depth/shear-speed trade-offs will mask about 90 per cent of the Moho-depth signal, with remaining phase-velocity perturbations of only 0.1-0.2 per cent. In order to resolve the Moho with surface waves alone, errors in the data must thus be small (up to 0.2 per cent for resolving the continental Moho). If the errors are larger, Moho-depth resolution is not warranted and depends on the error distribution with period, with errors that persist over broad period ranges being particularly damaging. An effective strategy for the inversion of surface-wave data alone for the Moho depth is to first constrain the crustal and upper-mantle structure by inversion in a broad period range, and then determine the Moho depth by inversion in a narrow period range most sensitive to it, with the first-step results used as reference. We illustrate this strategy with an application to data from the Kaapvaal Craton. Prior information on crustal and mantle structure reduces the trade-offs and thus enables resolving the Moho depth with noisier data; such information should be sought and used whenever available (as has been done, explicitly or implicitly, in many previous studies). Joint analysis or inversion of surface-wave and other data (receiver functions, topography, gravity) can reduce uncertainties further and facilitate Moho mapping. Alone or as part of multi-disciplinary datasets, surface-wave data offer unique sensitivity to the crustal and upper-mantle structure and are becoming increasingly important in the seismic imaging of the crust and the Moho. Reference: Lebedev, S., Adam, J., Meier, T. Mapping the Moho with seismic surface waves: A review, resolution analysis, and recommended inversion strategies. Tectonophysics, "Moho" special issue, doi:10.1016/j.tecto.2012.12.030, 2013.

  8. Self-blame-Selective Hyperconnectivity Between Anterior Temporal and Subgenual Cortices and Prediction of Recurrent Depressive Episodes.

    PubMed

    Lythe, Karen E; Moll, Jorge; Gethin, Jennifer A; Workman, Clifford I; Green, Sophie; Lambon Ralph, Matthew A; Deakin, John F W; Zahn, Roland

    2015-11-01

    Patients with remitted major depressive disorder (MDD) were previously found to display abnormal functional magnetic resonance imaging (fMRI) connectivity between the right superior anterior temporal lobe (RSATL) and the subgenual cingulate cortex and adjacent septal region (SCSR) when experiencing self-blaming emotions relative to emotions related to blaming others (e.g., "indignation or anger toward others"). This finding provided the first neural signature of biases toward overgeneralized self-blaming emotions (e.g., "feeling guilty for everything"), known to have a key role in cognitive vulnerability to MDD. It is unknown whether this neural signature predicts risk of recurrence, a crucial step in establishing its potential as a prognostic biomarker, which is urgently needed for stratification into pathophysiologically more homogeneous subgroups and for novel treatments. The objective was to use fMRI in remitted MDD at baseline to test the hypothesis that RSATL-SCSR connectivity for self-blaming relative to other-blaming emotions predicts subsequent recurrence of depressive episodes. A prospective cohort study from June 16, 2011, to October 10, 2014, in a clinical research facility was completed by 75 psychotropic-medication-free patients with remitted MDD and no relevant comorbidity. In total, 31 remained in stable remission, and 25 developed a recurring episode over the 14 months of clinical follow-up and were included in the primary analysis. Thirty-nine control participants with no personal or family history of MDD were recruited for further comparison. The main outcome was the between-group difference (recurring vs stable MDD) in RSATL connectivity, with an a priori SCSR region of interest, for self-blaming vs other-blaming emotions. We corroborated our hypothesis that during the experience of self-blaming vs other-blaming emotions, RSATL-SCSR connectivity predicted the risk of subsequent recurrence. The recurring MDD group showed higher connectivity than the stable MDD group (familywise error-corrected P < .05 over the a priori SCSR region of interest) and the control group. In addition, the recurring MDD group also exhibited RSATL hyperconnectivity with the right ventral putamen and claustrum and the temporoparietal junction. Together, these regions predicted recurrence with 75% accuracy. To our knowledge, this study is the first to provide a robust demonstration of an fMRI signature of recurrence risk in remitted MDD. Additional studies are needed for its further optimization and validation as a prognostic biomarker.

  9. Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students

    NASA Astrophysics Data System (ADS)

    Priyani, H. A.; Ekawati, R.

    2018-01-01

    Indonesian students' competence in solving mathematical problems is still considered weak, as pointed out by the results of international assessments such as TIMSS. This might be caused by the various types of errors students make. Hence, this study aimed at identifying students' errors in solving mathematical problems from TIMSS in the topic of numbers, which is considered a fundamental concept in mathematics. The study applied descriptive qualitative analysis. The subjects were the three students who made the most errors on the test indicators, selected from 34 eighth-grade students. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that in solving the Applying-level problem, the type of error that students made was operational errors. In addition, for the Reasoning-level problem, three types of errors were made: conceptual errors, operational errors and principle errors. Meanwhile, analysis of the causes of students' errors showed that students did not comprehend the mathematical problems given.

  10. Error Propagation Analysis in the SAE Architecture Analysis and Design Language (AADL) and the EDICT Tool Framework

    NASA Technical Reports Server (NTRS)

    LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.

    2011-01-01

    This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.

  11. Multivariate analysis of volatile compounds detected by headspace solid-phase microextraction/gas chromatography: A tool for sensory classification of cork stoppers.

    PubMed

    Prat, Chantal; Besalú, Emili; Bañeras, Lluís; Anticó, Enriqueta

    2011-06-15

    The volatile fraction of aqueous cork macerates of tainted and non-tainted agglomerate cork stoppers was analysed by headspace solid-phase microextraction (HS-SPME)/gas chromatography. Twenty compounds, comprising terpenoids, aliphatic alcohols, lignin-related compounds and others, were selected and analysed in individual corks. Cork stoppers had previously been classified into six different classes according to sensory descriptions, including 2,4,6-trichloroanisole (TCA) taint and other frequent, non-characteristic odours found in cork. A multivariate analysis of the chromatographic data for the 20 selected compounds using linear discriminant analysis models helped differentiate the a priori defined groups. The discriminant model selected five compounds as the best combination, entering the model in the following order: 2,4,6-TCA, fenchyl alcohol, 1-octen-3-ol, benzyl alcohol and benzothiazole. Not all six a priori differentiated sensory classes were clearly discriminated by the model, probably indicating that no measurable differences exist in the chromatographic data for some categories. Predictive analysis of a refined model in which two sensory classes were merged resulted in good classification: prediction rates for the control (non-tainted), TCA, musty-earthy-vegetative, vegetative and chemical descriptions were 100%, 100%, 85%, 67.3% and 100%, respectively, when the modified model was used. The multivariate analysis of chromatographic data will help in the classification of stoppers and provides a valuable complement to sensory analyses.
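
    As a rough illustration of the classification approach described above, the sketch below fits a linear discriminant analysis model to peak-area data and reports cross-validated prediction rates. The data, class labels, and scikit-learn usage are illustrative assumptions, not the published data set or workflow.

```python
# Illustrative sketch: classifying cork stoppers from GC peak areas with
# linear discriminant analysis (hypothetical data; scikit-learn assumed).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical peak areas for the five compounds retained by the model:
# 2,4,6-TCA, fenchyl alcohol, 1-octen-3-ol, benzyl alcohol, benzothiazole.
n_per_class, n_compounds = 30, 5
classes = ["control", "TCA", "musty-earthy", "vegetative", "chemical"]
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(n_per_class, n_compounds))
               for i in range(len(classes))])
y = np.repeat(classes, n_per_class)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)   # cross-validated prediction rate
print("mean prediction rate:", scores.mean())
```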

  12. Ecoregions and ecodistricts: Ecological regionalizations for the Netherlands' environmental policy

    NASA Astrophysics Data System (ADS)

    Klijn, Frans; de Waal, Rein W.; Oude Voshaar, Jan H.

    1995-11-01

    For communicating data on the state of the environment to policy makers, various integrative frameworks are used, including regional integration. For this kind of integration we have developed two related ecological regionalizations, ecoregions and ecodistricts, which are two levels in a series of classifications for hierarchically nested ecosystems at different spatial scale levels. We explain the compilation of the maps from existing geographical data, demonstrating the relatively holistic, a priori integrated approach. The resulting maps are submitted to discriminant analysis to test the consistency of the use of mapping characteristics, using data on individual abiotic ecosystem components from a national database on a 1-km² grid. This reveals that the spatial patterns of soil, groundwater, and geomorphology correspond with the ecoregion and ecodistrict maps. Differences between the original maps and maps formed by automatically reclassifying 1-km² cells with these discriminant components are found to be few. These differences are discussed against the background of the principal dilemma between deductive, a priori integrated, and inductive, a posteriori, classification.

  13. Effect of filter type on the statistics of energy transfer between resolved and subfilter scales from a-priori analysis of direct numerical simulations of isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Buzzicotti, M.; Linkmann, M.; Aluie, H.; Biferale, L.; Brasseur, J.; Meneveau, C.

    2018-02-01

    The effects of different filtering strategies on the statistical properties of the resolved-to-subfilter scale (SFS) energy transfer are analysed in forced homogeneous and isotropic turbulence. We carry out a-priori analyses of the statistical characteristics of SFS energy transfer by filtering data obtained from direct numerical simulations with up to 2048³ grid points, as a function of the filter cutoff scale. In order to quantify the dependence of extreme events and anomalous scaling on the filter, we compare a sharp Fourier Galerkin projector, a Gaussian filter and a novel class of Galerkin projectors with non-sharp spectral filter profiles. The role of Galilean invariance is of particular interest. We confirm that local SFS energy transfer displays intermittency scaling in both skewness and flatness as a function of the cutoff scale, and we quantify the robustness of this scaling as a function of the filter type.
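
    The two main filter classes compared in the study can be illustrated in one dimension: a sharp Fourier (Galerkin) projector keeps modes below a cutoff wavenumber unchanged, while a Gaussian filter attenuates them smoothly. The sketch below applies both in spectral space to a synthetic periodic signal; the signal, cutoff, and transfer-function form are assumptions for illustration only.

```python
# Minimal 1-D sketch: sharp spectral (Galerkin) projector vs Gaussian filter,
# both applied in Fourier space to a periodic signal (illustrative only).
import numpy as np

N = 1024
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
u = np.sin(3 * x) + 0.5 * np.sin(25 * x) + 0.1 * np.random.randn(N)

k = np.fft.fftfreq(N, d=2.0 * np.pi / N) * 2.0 * np.pi   # angular wavenumbers
u_hat = np.fft.fft(u)

k_c = 10.0                                               # filter cutoff scale
sharp = np.abs(k) <= k_c                                 # Galerkin projector
gauss = np.exp(-(k / k_c) ** 2 / 2.0)                    # Gaussian transfer fn

u_sharp = np.fft.ifft(u_hat * sharp).real
u_gauss = np.fft.ifft(u_hat * gauss).real
print("resolved variance (sharp, Gaussian):", u_sharp.var(), u_gauss.var())
```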

  14. A Regional CO2 Observing System Simulation Experiment Using ASCENDS Observations and WRF-STILT Footprints

    NASA Technical Reports Server (NTRS)

    Wang, James S.; Kawa, S. Randolph; Eluszkiewicz, Janusz; Collatz, G. J.; Mountain, Marikate; Henderson, John; Nehrkorn, Thomas; Aschbrenner, Ryan; Zaccheo, T. Scott

    2012-01-01

    Knowledge of the spatiotemporal variations in emissions and uptake of CO2 is hampered by sparse measurements. The recent advent of satellite measurements of CO2 concentrations is increasing the density of measurements, and the future mission ASCENDS (Active Sensing of CO2 Emissions over Nights, Days and Seasons) will provide even greater coverage and precision. Lagrangian atmospheric transport models run backward in time can quantify surface influences ("footprints") of diverse measurement platforms and are particularly well suited for inverse estimation of regional surface CO2 fluxes at high resolution based on satellite observations. We utilize the STILT Lagrangian particle dispersion model, driven by WRF meteorological fields at 40-km resolution, in a Bayesian synthesis inversion approach to quantify the ability of ASCENDS column CO2 observations to constrain fluxes at high resolution. This study focuses on land-based biospheric fluxes, whose uncertainties are especially large, in a domain encompassing North America. We present results based on realistic input fields for 2007. Pseudo-observation random errors are estimated from backscatter and optical depth measured by the CALIPSO satellite. We estimate a priori flux uncertainties based on output from the CASA-GFED (v.3) biosphere model and make simple assumptions about spatial and temporal error correlations. WRF-STILT footprints are convolved with candidate vertical weighting functions for ASCENDS. We find that at a horizontal flux resolution of 1 degree x 1 degree, ASCENDS observations are potentially able to reduce average weekly flux uncertainties by 0-8% in July, and 0-0.5% in January (assuming an error of 0.5 ppm at the Railroad Valley reference site). Aggregated to coarser resolutions, e.g. 5 degrees x 5 degrees, the uncertainty reductions are larger and more similar to those estimated in previous satellite data observing system simulation experiments.
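
    The core of the Bayesian synthesis inversion described above is a linear-Gaussian update of the flux covariance: given a Jacobian of the observations with respect to the fluxes (the footprints convolved with a vertical weighting function), the posterior covariance determines the uncertainty reduction. The sketch below shows that step with hypothetical matrices standing in for the WRF-STILT sensitivities and error statistics.

```python
# Hedged sketch of the Bayesian synthesis inversion step. All matrices are
# hypothetical placeholders, not the actual footprints or covariances.
import numpy as np

rng = np.random.default_rng(1)
n_flux, n_obs = 50, 200

H = rng.normal(scale=0.05, size=(n_obs, n_flux))   # footprint sensitivities
S_prior = np.diag(np.full(n_flux, 1.0))            # a priori flux covariance
S_obs = np.diag(np.full(n_obs, 0.5 ** 2))          # 0.5 ppm observation error

# Posterior covariance: (H^T S_obs^-1 H + S_prior^-1)^-1
S_post = np.linalg.inv(H.T @ np.linalg.inv(S_obs) @ H + np.linalg.inv(S_prior))

# Fractional reduction of the flux error standard deviation
reduction = 1.0 - np.sqrt(np.diag(S_post)) / np.sqrt(np.diag(S_prior))
print("mean uncertainty reduction: %.1f%%" % (100 * reduction.mean()))
```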

  15. Improved Stratospheric Temperature Retrievals for Climate Reanalysis

    NASA Technical Reports Server (NTRS)

    Rokke, L.; Joiner, J.

    1999-01-01

    The Data Assimilation Office (DAO) is embarking on plans to generate a twenty-year reanalysis data set of climatic atmospheric variables. One of the focus points will be the evaluation of the dynamics of the stratosphere. The Stratospheric Sounding Unit (SSU), flown as part of the TIROS Operational Vertical Sounder (TOVS), is one of the primary stratospheric temperature sensors flown consistently throughout the reanalysis period. Seven unique sensors made the measurements over time, with individual instrument characteristics that need to be addressed. The stratospheric temperatures being assimilated across satellite platforms will profoundly impact the reanalysis dynamical fields. To quantify aspects of instrument and retrieval bias, we are carefully collecting and analyzing all available information on the sensors, their instrument anomalies, forward model errors, and retrieval biases. For the retrieval of stratospheric temperatures, we adapted the minimum variance approach of Jazwinski (1970) and Rodgers (1976) and applied it to the SSU soundings. In our algorithm, the state vector contains an initial guess of temperature from a model six-hour forecast provided by the Goddard EOS Data Assimilation System (GEOS/DAS). This is combined with an a priori covariance matrix, a forward model parameterization, and specifications of instrument noise characteristics. A quasi-Newtonian iteration is used to obtain convergence of the retrieved state to the measurement vector. This algorithm also enables us to analyze and address the systematic errors associated with the unique characteristics of the cell pressures on the individual SSU instruments and the resolving power of the instruments to vertical gradients in the stratosphere. The preliminary results of the improved retrievals and their assimilation, as well as baseline calculations of bias and rms error between the NESDIS operational product and co-located ground measurements, will be presented.
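
    For readers unfamiliar with the minimum-variance form of such retrievals, the sketch below shows a single linear update in which a forecast profile acts as the a priori and a linear forward model maps temperature to a few SSU-like channel radiances. All values and the linearity assumption are illustrative; the operational retrieval iterates (quasi-Newton) on a nonlinear forward model.

```python
# Minimal sketch of a minimum-variance (optimal estimation) temperature update.
# Hypothetical Jacobian, covariances, and profile values.
import numpy as np

rng = np.random.default_rng(2)
n_levels, n_channels = 40, 3

K = rng.uniform(0.0, 1.0, size=(n_channels, n_levels))   # forward-model Jacobian
K /= K.sum(axis=1, keepdims=True)                        # channel weighting fns

x_a = 220.0 + 5.0 * rng.standard_normal(n_levels)        # forecast (a priori), K
S_a = 4.0 * np.eye(n_levels)                             # a priori covariance, K^2
S_e = 0.25 * np.eye(n_channels)                          # instrument noise

x_true = x_a + 3.0 * rng.standard_normal(n_levels)
y = K @ x_true + 0.5 * rng.standard_normal(n_channels)   # simulated radiances

# Single linear update: x_hat = x_a + S_a K^T (K S_a K^T + S_e)^-1 (y - K x_a)
G = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_e)
x_hat = x_a + G @ (y - K @ x_a)
print("rms change from prior:", np.sqrt(np.mean((x_hat - x_a) ** 2)))
```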

  16. Improved algorithm of ray tracing in ICF cryogenic targets

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Yang, Yongying; Ling, Tong; Jiang, Jiabin

    2016-10-01

    High-precision ray tracing inside inertial confinement fusion (ICF) cryogenic targets plays an important role in the reconstruction of the three-dimensional density distribution by the algebraic reconstruction technique (ART) algorithm. The traditional Runge-Kutta method, which is restricted by the precision of the grid division and the step size of the ray tracing, cannot compute accurately where the refractive index changes abruptly. In this paper, we propose an improved ray-tracing algorithm based on the Runge-Kutta method and Snell's law of refraction to achieve high tracing precision. On refractive-index boundaries, we apply Snell's law of refraction and a contact-point search algorithm to ensure accuracy of the simulation. Inside the cryogenic target, the Runge-Kutta method is combined with a self-adaptive step algorithm. The original refractive-index data used to mesh the target can be obtained by experimental measurement or from an a priori refractive-index distribution function. A finite-difference method is used to calculate the refractive-index gradient at the mesh nodes, and distance-weighted average interpolation is used to obtain the refractive index and its gradient at each point in space. In the simulation, we take an ideal ICF target, a Luneburg lens and a graded-index rod as models and calculate the spot diagram and wavefront map. Comparison of the simulation results with Zemax shows that the improved algorithm, based on the fourth-order Runge-Kutta method and Snell's law of refraction, exhibits high accuracy. The relative error of the spot diagram is 0.2%, and the peak-to-valley (PV) and root-mean-square (RMS) errors of the wavefront map are less than λ/35 and λ/100, respectively.
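
    The gradient-index part of such a tracer integrates the ray equation d/ds (n dr/ds) = ∇n, conveniently rewritten as dr/ds = v/n, dv/ds = ∇n, with a fourth-order Runge-Kutta scheme. The sketch below does exactly that for a hypothetical smooth radial index profile; the boundary refraction (Snell's law), contact-point search, and adaptive stepping described in the abstract are omitted.

```python
# Illustrative RK4 integration of the gradient-index ray equation.
# The index field is a hypothetical profile, not cryogenic-target data.
import numpy as np

def n_field(r):
    """Hypothetical radial refractive-index profile."""
    return 1.0 + 0.1 * np.exp(-np.dot(r, r))

def grad_n(r, h=1e-6):
    """Central-difference gradient of the index field."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (n_field(r + e) - n_field(r - e)) / (2.0 * h)
    return g

def rhs(state):
    r, v = state[:3], state[3:]
    return np.concatenate([v / n_field(r), grad_n(r)])

def rk4_step(state, ds):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * ds * k1)
    k3 = rhs(state + 0.5 * ds * k2)
    k4 = rhs(state + ds * k3)
    return state + ds / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Launch a ray along +z, offset from the axis, and trace it through the field.
r0 = np.array([0.3, 0.0, -3.0])
v0 = n_field(r0) * np.array([0.0, 0.0, 1.0])     # v = n * unit direction
state = np.concatenate([r0, v0])
for _ in range(600):
    state = rk4_step(state, ds=0.01)
print("exit position:", state[:3])
```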

  17. Accurate chemical master equation solution using multi-finite buffers

    DOE PAGES

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-06-29

    Here, the discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multiscale nature of many networks where reaction rates have a large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multifinite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be precomputed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multiscale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks.
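
    The idea of a finite buffer with a quantifiable truncation error can be seen on the simplest possible network. The sketch below builds the truncated generator of a birth-death process (birth rate k, degradation rate g·n), solves for its steady state, and compares against the exact Poisson distribution; it only illustrates finite-buffer truncation, not the ACME algorithm itself, and the rates and buffer size are arbitrary choices.

```python
# Hedged sketch: steady state of a truncated dCME for a birth-death process.
import numpy as np
from scipy.stats import poisson

k, g, N = 10.0, 1.0, 40          # rates and buffer size (states 0..N)

# Generator matrix A with dP/dt = A @ P over the truncated state space.
A = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        A[n + 1, n] += k         # birth n -> n+1
        A[n, n] -= k
    if n > 0:
        A[n - 1, n] += g * n     # death n -> n-1
        A[n, n] -= g * n

# Steady state: null vector of A, normalized to a probability distribution.
w, V = np.linalg.eig(A)
p = np.real(V[:, np.argmin(np.abs(w))])
p /= p.sum()

exact = poisson.pmf(np.arange(N + 1), k / g)
truncation_error = 1.0 - poisson.cdf(N, mu=k / g)   # exact mass outside buffer
print("max |p - Poisson|:", np.max(np.abs(p - exact)))
print("a priori truncation error:", truncation_error)
```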

  18. A characteristic based volume penalization method for general evolution problems applied to compressible viscous flows

    NASA Astrophysics Data System (ADS)

    Brown-Dymkoski, Eric; Kasimov, Nurlybek; Vasilyev, Oleg V.

    2014-04-01

    In order to introduce solid obstacles into flows, several different methods are used, including volume penalization methods which prescribe appropriate boundary conditions by applying local forcing to the constitutive equations. One well known method is Brinkman penalization, which models solid obstacles as porous media. While it has been adapted for compressible, incompressible, viscous and inviscid flows, it is limited in the types of boundary conditions that it imposes, as are most volume penalization methods. Typically, approaches are limited to Dirichlet boundary conditions. In this paper, Brinkman penalization is extended for generalized Neumann and Robin boundary conditions by introducing hyperbolic penalization terms with characteristics pointing inward on solid obstacles. This Characteristic-Based Volume Penalization (CBVP) method is a comprehensive approach to conditions on immersed boundaries, providing for homogeneous and inhomogeneous Dirichlet, Neumann, and Robin boundary conditions on hyperbolic and parabolic equations. This CBVP method can be used to impose boundary conditions for both integrated and non-integrated variables in a systematic manner that parallels the prescription of exact boundary conditions. Furthermore, the method does not depend upon a physical model, as with porous media approach for Brinkman penalization, and is therefore flexible for various physical regimes and general evolutionary equations. Here, the method is applied to scalar diffusion and to direct numerical simulation of compressible, viscous flows. With the Navier-Stokes equations, both homogeneous and inhomogeneous Neumann boundary conditions are demonstrated through external flow around an adiabatic and heated cylinder. Theoretical and numerical examination shows that the error from penalized Neumann and Robin boundary conditions can be rigorously controlled through an a priori penalization parameter η. The error on a transient boundary is found to converge as O(η), which is more favorable than the error convergence of the already established Dirichlet boundary condition.
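
    For orientation, the classical Brinkman (Dirichlet) penalization that the characteristic-based method generalizes can be written in one line: inside the masked obstacle a term -(χ/η)(u - u_wall) is added to the governing equation, and the boundary-condition error scales like O(η). The sketch below applies this to the 1-D heat equation; the grid, mask, and η value are illustrative choices, and the Neumann/Robin characteristic terms of the CBVP method are not implemented here.

```python
# Simplified 1-D sketch of classical Brinkman (Dirichlet) volume penalization
# on the heat equation u_t = u_xx with a periodic grid and a masked obstacle.
import numpy as np

N, L = 200, 1.0
dx = L / N
x = np.linspace(0.0, L, N, endpoint=False)
u = np.sin(2.0 * np.pi * x) + 1.0                  # initial condition
chi = ((x > 0.4) & (x < 0.6)).astype(float)        # obstacle mask
eta, u_wall = 1e-4, 0.0                            # penalization parameter

dt = 0.2 * dx ** 2                                 # explicit diffusion limit
dt = min(dt, 0.5 * eta)                            # resolve the stiff term
for _ in range(20000):
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2
    u = u + dt * (lap - chi / eta * (u - u_wall))

print("max |u - u_wall| inside obstacle:", np.abs(u[chi > 0] - u_wall).max())
```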

  19. A Physiologically Based Pharmacokinetic Model for Pregnant Women to Predict the Pharmacokinetics of Drugs Metabolized Via Several Enzymatic Pathways.

    PubMed

    Dallmann, André; Ince, Ibrahim; Coboeken, Katrin; Eissing, Thomas; Hempel, Georg

    2017-09-18

    Physiologically based pharmacokinetic modeling is considered a valuable tool for predicting pharmacokinetic changes in pregnancy to subsequently guide in-vivo pharmacokinetic trials in pregnant women. The objective of this study was to extend and verify a previously developed physiologically based pharmacokinetic model for pregnant women for the prediction of pharmacokinetics of drugs metabolized via several cytochrome P450 enzymes. Quantitative information on gestation-specific changes in enzyme activity available in the literature was incorporated in a pregnancy physiologically based pharmacokinetic model and the pharmacokinetics of eight drugs metabolized via one or multiple cytochrome P450 enzymes was predicted. The tested drugs were caffeine, midazolam, nifedipine, metoprolol, ondansetron, granisetron, diazepam, and metronidazole. Pharmacokinetic predictions were evaluated by comparison with in-vivo pharmacokinetic data obtained from the literature. The pregnancy physiologically based pharmacokinetic model successfully predicted the pharmacokinetics of all tested drugs. The observed pregnancy-induced pharmacokinetic changes were qualitatively and quantitatively reasonably well predicted for all drugs. Ninety-seven percent of the mean plasma concentrations predicted in pregnant women fell within a twofold error range and 63% within a 1.25-fold error range. For all drugs, the predicted area under the concentration-time curve was within a 1.25-fold error range. The presented pregnancy physiologically based pharmacokinetic model can quantitatively predict the pharmacokinetics of drugs that are metabolized via one or multiple cytochrome P450 enzymes by integrating prior knowledge of the pregnancy-related effect on these enzymes. This pregnancy physiologically based pharmacokinetic model may thus be used to identify potential exposure changes in pregnant women a priori and to eventually support informed decision making when clinical trials are designed in this special population.
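
    The mechanism the model relies on, scaling enzyme-mediated clearance by a gestation-specific activity factor, can be shown with a deliberately simple one-compartment example. The sketch below is a toy illustration only, not the published PBPK model; the dose, volume, clearance, and activity factor are hypothetical.

```python
# Toy illustration: one-compartment IV-bolus kinetics with a CYP-mediated
# clearance scaled by a hypothetical pregnancy-induced activity factor.
import numpy as np

dose, V = 100.0, 50.0            # mg, L
CL_nonpregnant = 10.0            # L/h, CYP-mediated clearance
cyp_activity_factor = 1.6        # hypothetical pregnancy-induced increase

def auc_iv_bolus(dose, CL):
    """AUC of an IV bolus with first-order elimination: dose / CL."""
    return dose / CL

for label, CL in [("non-pregnant", CL_nonpregnant),
                  ("pregnant", CL_nonpregnant * cyp_activity_factor)]:
    t = np.linspace(0.0, 24.0, 200)
    conc = dose / V * np.exp(-CL / V * t)            # C(t) = (D/V) e^{-(CL/V) t}
    print(f"{label}: AUC = {auc_iv_bolus(dose, CL):.1f} mg*h/L, "
          f"C(1 h) = {conc[np.argmin(np.abs(t - 1.0))]:.2f} mg/L")
```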

  20. Information Content and Sensitivity of the 3β+2α Lidar Measurement System for Microphysical Retrievals

    NASA Astrophysics Data System (ADS)

    Burton, S. P.; Liu, X.; Chemyakin, E.; Hostetler, C. A.; Stamnes, S.; Moore, R.; Sawamura, P.; Ferrare, R. A.; Knobelspiesse, K. D.

    2015-12-01

    There is considerable interest in retrieving aerosol effective radius, number concentration and refractive index from lidar measurements of extinction and backscatter at several wavelengths. The 3 backscatter + 2 extinction (3β+2α) combination is particularly important since the planned NASA Aerosol-Clouds-Ecosystem (ACE) mission recommends this combination of measurements. The 2nd-generation NASA Langley airborne High Spectral Resolution Lidar (HSRL-2) has been making 3β+2α measurements since 2012. Here we develop a deeper understanding of the information content and sensitivities of the 3β+2α system in terms of the aerosol microphysical parameters of interest. We determine best-case results using a retrieval-free methodology: we calculate information content and uncertainty metrics from Optimal Estimation techniques using only a simplified forward model look-up table, with no explicit inversion. Simplifications include spherical particles, mono-modal log-normal size distributions, and wavelength-independent refractive indices. Since we only use the forward model with no retrieval, our results are applicable as a best case for all existing retrievals. Retrieval-dependent errors due to mismatch between the assumptions and true atmospheric aerosols are not included. The sensitivity metrics allow for identifying (1) the information content of the measurements versus a priori information; (2) best-case error bars on the retrieved parameters; and (3) potential sources of cross-talk or "compensating" errors wherein different retrieval parameters are not independently captured by the measurements. These results suggest that even in the best case, this retrieval system is underdetermined. Recommendations are given for addressing cross-talk between effective radius and number concentration. A potential solution to the under-determination problem is a combined active (lidar) and passive (polarimeter) retrieval, which is the subject of a newly funded NASA project by our team.
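
    The Optimal Estimation metrics referred to above follow directly from a forward-model Jacobian and the prior and measurement covariances: the averaging kernel A = (KᵀSe⁻¹K + Sa⁻¹)⁻¹ KᵀSe⁻¹K, its trace (degrees of freedom for signal), and the posterior covariance. The sketch below evaluates them for a hypothetical 5-measurement, 3-parameter system; K, Sa and Se are placeholders, not the study's look-up table.

```python
# Hedged sketch of retrieval-free information-content metrics for a 3β+2α
# system with three microphysical parameters (hypothetical K, S_a, S_e).
import numpy as np

K = np.array([[0.8, 0.3, 0.5],       # d(measurement)/d(parameter), normalized
              [0.7, 0.4, 0.6],
              [0.6, 0.5, 0.7],
              [0.9, 0.2, 0.4],
              [0.8, 0.3, 0.5]])
S_a = np.diag([1.0, 1.0, 1.0])       # a priori covariance (normalized units)
S_e = np.diag([0.05 ** 2] * 5)       # measurement error covariance

S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
A = S_hat @ K.T @ np.linalg.inv(S_e) @ K       # averaging kernel matrix

dofs = np.trace(A)                             # degrees of freedom for signal
print("best-case posterior std devs:", np.sqrt(np.diag(S_hat)))
print("DOFS: %.2f of %d parameters" % (dofs, K.shape[1]))
print("off-diagonal averaging-kernel terms indicate cross-talk:\n", A)
```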
