Sample records for vector error diffusion

  1. Modulated error diffusion CGHs for neural nets

    NASA Astrophysics Data System (ADS)

    Vermeulen, Pieter J. E.; Casasent, David P.

    1990-05-01

    New modulated error diffusion CGHs (computer generated holograms) for optical computing are considered. Specific attention is given to their use in optical matrix-vector, associative processor, neural net and optical interconnection architectures. We consider lensless CGH systems (many CGHs use an external Fourier transform (FT) lens), the Fresnel sampling requirements, the effects of finite CGH apertures (sample and hold inputs), dot size correction (for laser recorders), and new applications for this novel encoding method (that devotes attention to quantization noise effects).

  2. Reduction of numerical diffusion in three-dimensional vortical flows using a coupled Eulerian/Lagrangian solution procedure

    NASA Technical Reports Server (NTRS)

    Felici, Helene M.; Drela, Mark

    1993-01-01

    A new approach based on the coupling of an Eulerian and a Lagrangian solver, aimed at reducing the numerical diffusion errors of standard Eulerian time-marching finite-volume solvers, is presented. The approach is applied to the computation of the secondary flow in two bent pipes and the flow around a 3D wing. Using convective point markers, the Lagrangian approach provides a correction of the basic Eulerian solution, while the Eulerian flow is used in turn to integrate the Lagrangian state vector in time. A comparison of coarse and fine grid Eulerian solutions makes it possible to identify numerical diffusion. It is shown that the Eulerian/Lagrangian approach is an effective method for reducing numerical diffusion errors.

  3. Optimized universal color palette design for error diffusion

    NASA Astrophysics Data System (ADS)

    Kolpatzik, Bernd W.; Bouman, Charles A.

    1995-04-01

    Currently, many low-cost computers can only simultaneously display a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
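    Below is a minimal, hedged sketch of the kind of palette-constrained (vector) error diffusion this record optimizes: quantization to the nearest palette color with Floyd-Steinberg weights. The fixed 8-color palette and RGB nearest-neighbor quantizer are illustrative stand-ins for the SSQ-designed palette and opponent-color error weighting described above.

```python
import numpy as np

def palette_error_diffusion(img, palette):
    """Floyd-Steinberg vector error diffusion against a fixed palette.

    img: float array (H, W, 3) in [0, 1]; palette: (K, 3) float array.
    A toy stand-in for the SSQ-designed palette in the record above.
    """
    h, w, _ = img.shape
    work = img.astype(np.float64).copy()
    out = np.zeros((h, w), dtype=np.int64)          # palette indices
    for y in range(h):
        for x in range(w):
            old = work[y, x]
            k = int(np.argmin(((palette - old) ** 2).sum(axis=1)))
            out[y, x] = k
            err = old - palette[k]                  # vector quantization error
            # diffuse the error to unprocessed neighbors (Floyd-Steinberg)
            if x + 1 < w:
                work[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    work[y + 1, x - 1] += err * 3 / 16
                work[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    work[y + 1, x + 1] += err * 1 / 16
    return out

# usage: hypothetical 8-color cube-corner palette, not an optimized SSQ one
palette = np.array([[r, g, b] for r in (0, 1) for g in (0, 1) for b in (0, 1)],
                   dtype=float)
indices = palette_error_diffusion(np.random.rand(64, 64, 3), palette)
```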

  4. The solar vector error within the SNPP Common GEO code, the correction, and the effects on the VIIRS SDR RSB calibration

    NASA Astrophysics Data System (ADS)

    Fulbright, Jon; Anderson, Samuel; Lei, Ning; Efremova, Boryana; Wang, Zhipeng; McIntire, Jeffrey; Chiang, Kwofu; Xiong, Xiaoxiong

    2014-11-01

    Due to a software error, the solar and lunar vectors reported in the on-board calibrator intermediate product (OBC-IP) files for SNPP VIIRS are incorrect. The magnitude of the error is about 0.2 degree, and it is increasing by about 0.01 degree per year. This error, although small, has an effect on the radiometric calibration of the reflective solar bands (RSB) because accurate solar angles are required for calculating the screen transmission functions and for calculating the illumination of the Solar Diffuser panel. In this paper, we describe the error in the Common GEO code and how it may be fixed. We present evidence for the error from within the OBC-IP data. We also describe the effects of the solar vector error on the RSB calibration and the Sensor Data Record (SDR). In order to perform this evaluation, we have reanalyzed the yaw-maneuver data to compute the vignetting functions required for the on-orbit SD RSB radiometric calibration. After the reanalysis, we find an effect of up to 0.5% on the shortwave infrared (SWIR) RSB calibration.

  5. A Fast Vector Radiative Transfer Model for Atmospheric and Oceanic Remote Sensing

    NASA Astrophysics Data System (ADS)

    Ding, J.; Yang, P.; King, M. D.; Platnick, S. E.; Meyer, K.

    2017-12-01

    A fast vector radiative transfer model is developed in support of atmospheric and oceanic remote sensing. This model is capable of simulating the Stokes vector observed at the top of the atmosphere (TOA) and the terrestrial surface by considering absorption, scattering, and emission. The gas absorption is parameterized in terms of atmospheric gas concentrations, temperature, and pressure. The parameterization scheme combines a regression method and the correlated-k distribution method, and can be easily integrated with multiple scattering computations. The approach is more than four orders of magnitude faster than a line-by-line radiative transfer model, with errors less than 0.5% in terms of transmissivity. A two-component approach is utilized to solve the vector radiative transfer equation (VRTE). The VRTE solver separates the phase matrices of aerosol and cloud into forward and diffuse parts, and thus the solution is also separated. The forward solution can be expressed by a semi-analytical equation based on the small-angle approximation, and serves as the source of the diffuse part. The diffuse part is solved by the adding-doubling method. The adding-doubling implementation is computationally efficient because the diffuse component needs far fewer spherical function expansion terms. The simulated Stokes vectors at both the TOA and the surface have accuracy comparable to counterparts computed with numerically rigorous methods.

  6. On the angular error of intensity vector based direction of arrival estimation in reverberant sound fields.

    PubMed

    Levin, Dovid; Habets, Emanuël A P; Gannot, Sharon

    2010-10-01

    An acoustic vector sensor provides measurements of both the pressure and particle velocity of a sound field in which it is placed. These measurements are vectorial in nature and can be used for the purpose of source localization. A straightforward approach towards determining the direction of arrival (DOA) utilizes the acoustic intensity vector, which is the product of pressure and particle velocity. The accuracy of an intensity vector based DOA estimator in the presence of noise has been analyzed previously. In this paper, the effects of reverberation upon the accuracy of such a DOA estimator are examined. It is shown that particular realizations of reverberation differ from an ideal isotropically diffuse field, and induce an estimation bias which is dependent upon the room impulse responses (RIRs). The limited knowledge available pertaining to the RIRs is expressed statistically by employing the diffuse qualities of reverberation to extend Polack's statistical RIR model. Expressions for evaluating the typical bias magnitude as well as its probability distribution are derived.
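    As a concrete illustration of the estimator analyzed in this record, the sketch below forms the time-averaged acoustic intensity vector as the product of pressure and particle velocity and reads the DOA from its direction; the plane-wave signal model and noise level are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical plane wave arriving from azimuth 40 degrees
theta = np.deg2rad(40.0)
t = np.arange(4096)
p = np.sin(0.1 * t)                                   # pressure channel
v = np.stack([np.cos(theta) * p, np.sin(theta) * p])  # particle velocity (vx, vy)
v += 0.1 * rng.standard_normal(v.shape)               # stand-in for a diffuse field

# time-averaged active intensity vector I = <p v>; its angle is the DOA estimate
intensity = (p * v).mean(axis=1)
doa = np.rad2deg(np.arctan2(intensity[1], intensity[0]))
print(f"estimated DOA: {doa:.1f} deg")  # near 40; reverberation biases this,
                                        # as analyzed in the record above
```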

  7. Dependence of surface tension on curvature obtained from a diffuse-interface approach

    NASA Astrophysics Data System (ADS)

    Badillo, Arnoldo; Lafferty, Nathan; Matar, Omar K.

    2017-11-01

    From a sharp-interface viewpoint, the surface tension force is f = σκδ (x -xi) n , where σ is the surface tension, κ the local interface curvature, δ the delta function, and n the unit normal vector. The numerical implementation of this force on discrete domains poses challenges that arise from the calculation of the curvature. The continuous surface tension force model, proposed by Brackbill et al. (1992), is an alternative, used commonly in two-phase computational models. In this model, δ is replaced by the gradient of a phase indicator field, whose integral across a diffuse-interface equals unity. An alternative to the Brackbill model are Phase-Field models, which do not require an explicit calculation of the curvature. However, and just as in Brackbill's approach, there are numerical errors that depend on the thickness of the diffuse interface, the grid spacing, and the curvature. We use differential geometry to calculate the leading errors in this force when obtained from a diffuse-interface approach, and outline possible routes to eliminate them. Our results also provide a simple geometrical explanation to the dependence of surface tension on curvature, and to the problem of line tension.

  8. A noninvasive method for measuring the velocity of diffuse hydrothermal flow by tracking moving refractive index anomalies

    NASA Astrophysics Data System (ADS)

    Mittelstaedt, Eric; Davaille, Anne; van Keken, Peter E.; Gracias, Nuno; Escartin, Javier

    2010-10-01

    Diffuse flow velocimetry (DFV) is introduced as a new, noninvasive, optical technique for measuring the velocity of diffuse hydrothermal flow. The technique uses images of a motionless, random medium (e.g., rocks) obtained through the lens of a moving refraction index anomaly (e.g., a hot upwelling). The method works in two stages. First, the changes in apparent background deformation are calculated using particle image velocimetry (PIV). The deformation vectors are determined by a cross correlation of pixel intensities across consecutive images. Second, the 2-D velocity field is calculated by cross correlating the deformation vectors between consecutive PIV calculations. The accuracy of the method is tested with laboratory and numerical experiments of a laminar, axisymmetric plume in fluids with both constant and temperature-dependent viscosity. Results show that average RMS errors are ˜5%-7% and are most accurate in regions of pervasive apparent background deformation which is commonly encountered in regions of diffuse hydrothermal flow. The method is applied to a 25 s video sequence of diffuse flow from a small fracture captured during the Bathyluck'09 cruise to the Lucky Strike hydrothermal field (September 2009). The velocities of the ˜10°C-15°C effluent reach ˜5.5 cm/s, in strong agreement with previous measurements of diffuse flow. DFV is found to be most accurate for approximately 2-D flows where background objects have a small spatial scale, such as sand or gravel.

  9. A novel measure of reliability in Diffusion Tensor Imaging after data rejections due to subject motion.

    PubMed

    Sairanen, V; Kuusela, L; Sipilä, O; Savolainen, S; Vanhatalo, S

    2017-02-15

    Diffusion Tensor Imaging (DTI) is commonly challenged by subject motion during data acquisition, which often leads to corrupted image data. The currently used procedure in DTI analysis is to correct or completely reject such data before tensor estimation; however, assessing the reliability and accuracy of the estimated tensor in such situations has evaded previous studies. This work aims to define the loss of data accuracy with increasing image rejections, and to define a robust method for assessing the reliability of the result at the voxel level. We carried out simulations of every possible sub-scheme (N=1,073,567,387) of the Jones30 gradient scheme, followed by confirming the idea with MRI data from four newborn and three adult subjects. We assessed the relative error of the tensor estimates most commonly used in DTI and tractography studies: fractional anisotropy (FA) and the major orientation vector (V1), respectively. The error was estimated using two measures, the widely used electric potential (EP) criterion as well as the rotationally variant condition number (CN). Our results show that CN and EP are comparable in situations with very few rejections, but CN becomes clearly more sensitive in depicting errors when more gradient vectors and images are rejected. The error in FA and V1 was also found to depend on the actual FA level in the given voxel; low actual FA levels were related to high relative errors in the FA and V1 estimates. Finally, the results were confirmed with clinical MRI data. This showed that the errors after rejections are, indeed, inhomogeneous across brain regions. The FA and V1 errors become progressively larger when moving from the thick white matter bundles towards more superficial subcortical structures. Our findings suggest that (i) CN is a useful estimator of data reliability at the voxel level, and (ii) DTI preprocessing with data rejections leads to major challenges when assessing brain tissue with lower FA levels, such as the newborn brain as well as the adult superficial, subcortical areas commonly traced in precise connectivity analyses between cortical regions. Copyright © 2016 Elsevier Inc. All rights reserved.
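    The condition-number criterion the authors favor can be computed directly from the surviving gradient directions. The sketch below builds the standard diffusion-tensor least-squares design matrix and tracks how its condition number grows as volumes are rejected; the 30 random directions are illustrative, not the actual Jones30 table.

```python
import numpy as np

def design_matrix(dirs):
    """Rows [gx^2, gy^2, gz^2, 2gxgy, 2gxgz, 2gygz] of the tensor fit."""
    g = np.asarray(dirs, dtype=float)
    return np.column_stack([g[:, 0] ** 2, g[:, 1] ** 2, g[:, 2] ** 2,
                            2 * g[:, 0] * g[:, 1],
                            2 * g[:, 0] * g[:, 2],
                            2 * g[:, 1] * g[:, 2]])

def condition_number(dirs):
    s = np.linalg.svd(design_matrix(dirs), compute_uv=False)
    return s[0] / s[-1]

# illustrative 30-direction scheme (random; not the actual Jones30 set)
rng = np.random.default_rng(1)
dirs = rng.standard_normal((30, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

print("CN, full scheme    :", condition_number(dirs))
keep = np.ones(30, dtype=bool)
keep[:8] = False                    # reject 8 motion-corrupted volumes
print("CN, after rejection:", condition_number(dirs[keep]))
```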

  10. A Demons algorithm for image registration with locally adaptive regularization.

    PubMed

    Cahill, Nathan D; Noble, J Alison; Hawkes, David J

    2009-01-01

    Thirion's Demons is a popular algorithm for nonrigid image registration because of its linear computational complexity and ease of implementation. It approximately solves the diffusion registration problem by successively estimating force vectors that drive the deformation toward alignment and smoothing the force vectors by Gaussian convolution. In this article, we show how the Demons algorithm can be generalized to allow image-driven locally adaptive regularization in a manner that preserves both the linear complexity and ease of implementation of the original Demons algorithm. We show that the proposed algorithm exhibits lower target registration error and requires less computational effort than the original Demons algorithm on the registration of serial chest CT scans of patients with lung nodules.
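    For orientation, here is a single-iteration sketch of the classical Thirion update that the article generalizes: demon forces from the intensity mismatch and fixed-image gradient, followed by Gaussian smoothing of the displacement field. The locally adaptive variant proposed above would replace the fixed `sigma` with an image-driven one.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_step(fixed, moving, disp, sigma=2.0, eps=1e-6):
    """One 2-D Demons iteration: force estimation + Gaussian regularization."""
    yy, xx = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
    warped = map_coordinates(moving, [yy + disp[0], xx + disp[1]], order=1)
    gy, gx = np.gradient(fixed)
    diff = warped - fixed
    denom = gx ** 2 + gy ** 2 + diff ** 2 + eps   # Thirion's stabilized denominator
    disp[0] = gaussian_filter(disp[0] - diff * gy / denom, sigma)
    disp[1] = gaussian_filter(disp[1] - diff * gx / denom, sigma)
    return disp

# usage: recover a small vertical shift between two smooth images
fixed = gaussian_filter(np.random.rand(64, 64), 4.0)
moving = np.roll(fixed, 2, axis=0)
disp = np.zeros((2, 64, 64))
for _ in range(50):
    disp = demons_step(fixed, moving, disp)
```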

  11. Generating a Simulated Fluid Flow Over an Aircraft Surface Using Anisotropic Diffusion

    NASA Technical Reports Server (NTRS)

    Rodriguez, David L. (Inventor); Sturdza, Peter (Inventor)

    2013-01-01

    A fluid-flow simulation over a computer-generated aircraft surface is generated using a diffusion technique. The surface is comprised of a surface mesh of polygons. A boundary-layer fluid property is obtained for a subset of the polygons of the surface mesh. A pressure-gradient vector is determined for a selected polygon, the selected polygon belonging to the surface mesh but not one of the subset of polygons. A maximum and minimum diffusion rate is determined along directions determined using a pressure gradient vector corresponding to the selected polygon. A diffusion-path vector is defined between a point in the selected polygon and a neighboring point in a neighboring polygon. An updated fluid property is determined for the selected polygon using a variable diffusion rate, the variable diffusion rate based on the minimum diffusion rate, maximum diffusion rate, and angular difference between the diffusion-path vector and the pressure-gradient vector.
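    A sketch of the variable-rate step is below. The patent specifies only that the rate depends on the minimum rate, the maximum rate, and the angle between the diffusion-path and pressure-gradient vectors; the elliptical blend used here is one plausible reading, not the claimed formula.

```python
import numpy as np

def variable_diffusion_rate(path_vec, grad_vec, d_min, d_max):
    """Diffusion rate along a path vector, shaped by the pressure gradient.

    Assumed interpolation: d_min along the gradient direction, d_max across
    it, blended by the squared cosine of the angle between the vectors.
    """
    p = path_vec / np.linalg.norm(path_vec)
    g = grad_vec / np.linalg.norm(grad_vec)
    cos2 = float(np.dot(p, g)) ** 2      # 1 when aligned with the gradient
    return d_min * cos2 + d_max * (1.0 - cos2)

# a path nearly perpendicular to the gradient diffuses at close to d_max
print(variable_diffusion_rate(np.array([0.1, 1.0]), np.array([1.0, 0.0]),
                              d_min=0.01, d_max=1.0))
```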

  12. Balancing aggregation and smoothing errors in inverse models

    DOE PAGES

    Turner, A. J.; Jacob, D. J.

    2015-06-30

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
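    As a toy illustration of the first reduction compared above (grid coarsening), the sketch below merges adjacent elements of a 1-D native-resolution state vector; the imposed prior relationship is that merged elements move together, which is exactly the source of aggregation error.

```python
import numpy as np

def coarsen(state, block):
    """Reduce a 1-D state vector by merging `block` adjacent elements.

    Returns the reduced state and the aggregation matrix G, with
    reduced = G @ state (G block-averages the native elements).
    """
    n = state.size
    m = n // block
    G = np.zeros((m, n))
    for i in range(m):
        G[i, i * block:(i + 1) * block] = 1.0 / block
    return G @ state, G

# a native-resolution state with one sharp local feature
native = np.zeros(64)
native[10] = 5.0
reduced, G = coarsen(native, 4)
print(reduced[:5])   # the feature survives but is smeared over its merged cell
```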

  15. An integrated fiber-optic probe combined with support vector regression for fast estimation of optical properties of turbid media.

    PubMed

    Zhou, Yang; Fu, Xiaping; Ying, Yibin; Fang, Zhenhuan

    2015-06-23

    A fiber-optic probe system was developed to estimate the optical properties of turbid media based on spatially resolved diffuse reflectance. Because of the limitations of numerical calculation of the radiative transfer equation (RTE), diffusion approximation (DA) and Monte Carlo simulations (MC), support vector regression (SVR) was introduced to model the relationship between diffuse reflectance values and optical properties. The SVR models of four collection fibers were trained on phantoms in a calibration set with a wide range of optical properties representing products of different applications; the optical properties of phantoms in the prediction set were then predicted after an optimal search on the SVR models. The results indicated that the SVR model was capable of describing the relationship with little deviation in forward validation. The correlation coefficients (R) of the reduced scattering coefficient μs′ and the absorption coefficient μa in the prediction set were 0.9907 and 0.9980, respectively. The root mean square errors of prediction (RMSEP) of μs′ and μa in inverse validation were 0.411 cm⁻¹ and 0.338 cm⁻¹, respectively. The results indicated that the integrated fiber-optic probe system combined with the SVR model was suitable for fast and accurate estimation of the optical properties of turbid media based on spatially resolved diffuse reflectance. Copyright © 2015 Elsevier B.V. All rights reserved.
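    A minimal sketch of the SVR inverse mapping is below: train on reflectance/optical-property pairs and predict μs′ and μa for held-out samples. The four-distance diffusion-like forward model and all parameter values are synthetic stand-ins for the calibration phantoms, not the paper's setup.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(2)

# synthetic stand-in for phantoms: [mus', mua] in cm^-1 -> reflectance at
# four source-detector separations (a toy forward model, not DA or MC)
props = rng.uniform([2.0, 0.02], [22.0, 1.2], size=(200, 2))
seps = np.array([0.3, 0.6, 0.9, 1.2])                          # cm
mu_eff = np.sqrt(3.0 * props[:, 1:] * (props[:, :1] + props[:, 1:]))
refl = np.exp(-mu_eff * seps) / (props[:, :1] * seps ** 2)

model = MultiOutputRegressor(SVR(C=100.0, epsilon=1e-3))
model.fit(np.log(refl[:150]), props[:150])                     # calibration set
pred = model.predict(np.log(refl[150:]))                       # prediction set
rmsep = np.sqrt(((pred - props[150:]) ** 2).mean(axis=0))
print(f"RMSEP: mus' = {rmsep[0]:.3f}, mua = {rmsep[1]:.3f} cm^-1")
```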

  16. High-Order Polynomial Expansions (HOPE) for flux-vector splitting

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Chris J., Jr.

    1991-01-01

    The Van Leer flux splitting is known to produce excessive numerical dissipation for Navier-Stokes calculations. Researchers attempt to remedy this deficiency by introducing a higher order polynomial expansion (HOPE) for the mass flux. In addition to Van Leer's splitting, a term is introduced so that the mass diffusion error vanishes at M = 0. Several splittings for pressure are proposed and examined. The effectiveness of the HOPE scheme is illustrated for 1-D hypersonic conical viscous flow and 2-D supersonic shock-wave boundary layer interactions.

  17. Generating a Simulated Fluid Flow over a Surface Using Anisotropic Diffusion

    NASA Technical Reports Server (NTRS)

    Rodriguez, David L. (Inventor); Sturdza, Peter (Inventor)

    2016-01-01

    A fluid-flow simulation over a computer-generated surface is generated using a diffusion technique. The surface is comprised of a surface mesh of polygons. A boundary-layer fluid property is obtained for a subset of the polygons of the surface mesh. A gradient vector is determined for a selected polygon, the selected polygon belonging to the surface mesh but not one of the subset of polygons. A maximum and minimum diffusion rate is determined along directions determined using the gradient vector corresponding to the selected polygon. A diffusion-path vector is defined between a point in the selected polygon and a neighboring point in a neighboring polygon. An updated fluid property is determined for the selected polygon using a variable diffusion rate, the variable diffusion rate based on the minimum diffusion rate, maximum diffusion rate, and the gradient vector.

  18. Wheel speed management control system for spacecraft

    NASA Technical Reports Server (NTRS)

    Goodzeit, Neil E. (Inventor); Linder, David M. (Inventor)

    1991-01-01

    A spacecraft attitude control system uses at least four reaction wheels. In order to minimize reaction wheel speed and therefore power, a wheel speed management system is provided. The management system monitors the wheel speeds and generates a wheel speed error vector. The error vector is integrated, and the error vector and its integral are combined to form a correction vector. The correction vector is summed with the attitude control torque command signals for driving the reaction wheels.
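    The control law described above amounts to a PI-style loop around the wheel speeds. Below is a hedged sketch; the gains, time step, and four-wheel target speeds are illustrative assumptions, not the patent's values.

```python
import numpy as np

class WheelSpeedManager:
    """Wheel-speed management per the record: form a wheel-speed error
    vector, integrate it, combine both into a correction vector, and sum
    that with the attitude control torque commands."""

    def __init__(self, target, k_p=0.02, k_i=0.001, dt=0.1):
        self.target, self.k_p, self.k_i, self.dt = target, k_p, k_i, dt
        self.integral = np.zeros_like(target)

    def correction(self, speeds):
        err = speeds - self.target        # wheel speed error vector (4 wheels)
        self.integral += err * self.dt    # integral of the error vector
        return -(self.k_p * err + self.k_i * self.integral)

mgr = WheelSpeedManager(target=np.zeros(4))
torque_cmd = np.array([0.10, -0.05, 0.02, 0.0])   # attitude control torques
wheel_speeds = np.array([50.0, -30.0, 10.0, 5.0])
total_cmd = torque_cmd + mgr.correction(wheel_speeds)
```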

  19. Electron Beam Propagation Through a Magnetic Wiggler with Random Field Errors

    DTIC Science & Technology

    1989-08-21

    Another quantity of interest is the vector potential δA_w(z) associated with the field error δB_w(z). Defining the normalized vector potential δa = e δA_w/(mc²), it follows that the correlation of the normalized vector-potential errors, ⟨δa_x(z₁) δa_x(z₂)⟩, is given by a double integral over z′ and z″ of the field-error correlation ⟨δB_w(z′) δB_w(z″)⟩; terms of higher order are neglected throughout. A similar expression holds for the y-component of the normalized vector-potential errors.

  20. The parallel-antiparallel signal difference in double-wave-vector diffusion-weighted MR at short mixing times: A phase evolution perspective

    NASA Astrophysics Data System (ADS)

    Finsterbusch, Jürgen

    2011-01-01

    Experiments with two diffusion weightings applied in direct succession in a single acquisition, so-called double- or two-wave-vector diffusion-weighting (DWV) experiments at short mixing times, have been shown to be a promising tool to estimate cell or compartment sizes, e.g. in living tissue. The basic theory for such experiments predicts that the signal decays for parallel and antiparallel wave vector orientations differ by a factor of three for small wave vectors. This seems surprising because in standard, single-wave-vector experiments the polarity of the diffusion weighting has no influence on the signal attenuation. Thus, the question of how this difference can be understood more pictorially is often raised. In this rather educational manuscript, the phase evolution during a DWV experiment for simple geometries, e.g. diffusion between parallel, impermeable planes oriented perpendicular to the wave vectors, is considered step by step, demonstrating how the signal difference develops. Considering the populations of the phase distributions obtained, the factor of three between the signal decays predicted by the theory can be reproduced. Furthermore, the intermediate signal decay for orthogonal wave vector orientations can be derived when investigating diffusion in a box. Thus, the presented "phase gymnastics" approach may help to understand the signal modulation observed in DWV experiments at short mixing times.
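    For readers who want the factor of three at a glance, here is a compact version of the narrow-pulse, long-diffusion-time argument for fully restricted diffusion at zero mixing time (my condensation of the standard theory, not the manuscript's own notation):

```latex
% Spin positions r_1..r_4 at the four gradient pulses; zero mixing time
% means r_3 = r_2. With wave vectors q_2 = +q (parallel) or -q (antiparallel):
\phi_{\parallel} = \mathbf{q}\cdot(\mathbf{r}_2-\mathbf{r}_1)
                 + \mathbf{q}\cdot(\mathbf{r}_4-\mathbf{r}_2)
                 = \mathbf{q}\cdot(\mathbf{r}_4-\mathbf{r}_1),
\qquad
\phi_{\mathrm{anti}} = \mathbf{q}\cdot(2\mathbf{r}_2-\mathbf{r}_1-\mathbf{r}_4).
% For long diffusion times the positions are independent and uniform in the
% pore; with \langle x^2\rangle the centered variance along q:
\langle\phi_{\parallel}^2\rangle = 2\,q^2\langle x^2\rangle,
\qquad
\langle\phi_{\mathrm{anti}}^2\rangle = (4+1+1)\,q^2\langle x^2\rangle
                                     = 6\,q^2\langle x^2\rangle,
% so E \simeq 1 - \tfrac{1}{2}\langle\phi^2\rangle decays as
% q^2\langle x^2\rangle versus 3\,q^2\langle x^2\rangle: the factor of three.
% Orthogonal wave vectors give 4\,q^2\langle x^2\rangle/2, i.e. the
% intermediate decay noted in the record.
```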

  1. Frontotemporal correlates of impulsivity and machine learning in retired professional athletes with a history of multiple concussions.

    PubMed

    Goswami, R; Dufort, P; Tartaglia, M C; Green, R E; Crawley, A; Tator, C H; Wennberg, R; Mikulis, D J; Keightley, M; Davis, Karen D

    2016-05-01

    The frontotemporal cortical network is associated with behaviours such as impulsivity and aggression. The health of the uncinate fasciculus (UF) that connects the orbitofrontal cortex (OFC) with the anterior temporal lobe (ATL) may be a crucial determinant of behavioural regulation. Behavioural changes can emerge after repeated concussion, and thus we used MRI to examine the UF and connected gray matter as they relate to impulsivity and aggression in retired professional football players who had sustained multiple concussions. Behaviourally, athletes had faster reaction times and an increased error rate on a go/no-go task, and increased aggression and mania compared to controls. MRI revealed that the athletes had (1) cortical thinning of the ATL, (2) negative correlations of OFC thickness with aggression and task errors, indicative of impulsivity, (3) negative correlations of UF axial diffusivity with error rates and aggression, and (4) elevated resting-state functional connectivity between the ATL and OFC. Using machine learning, we found that UF diffusion imaging differentiates athletes from healthy controls, with significant classifiers based on UF mean and radial diffusivity showing 79-84% sensitivity and specificity and areas under the ROC curves of 0.8. The spatial pattern of classifier weights revealed hot spots at the orbitofrontal and temporal ends of the UF. These data implicate the UF system in the pathological outcomes of repeated concussion as they relate to impulsive behaviour. Furthermore, a support vector machine has potential utility in the general assessment and diagnosis of brain abnormalities following concussion.

  2. Quantitative analysis of binary polymorphs mixtures of fusidic acid by diffuse reflectance FTIR spectroscopy, diffuse reflectance FT-NIR spectroscopy, Raman spectroscopy and multivariate calibration.

    PubMed

    Guo, Canyong; Luo, Xuefang; Zhou, Xiaohua; Shi, Beijia; Wang, Juanjuan; Zhao, Jinqi; Zhang, Xiaoxia

    2017-06-05

    Vibrational spectroscopic techniques such as infrared, near-infrared and Raman spectroscopy have become popular for detecting and quantifying polymorphism in pharmaceutics since they are fast and non-destructive. This study assessed the ability of three vibrational spectroscopic techniques combined with multivariate analysis to quantify a low-content undesired polymorph within a binary polymorphic mixture. Partial least squares (PLS) regression and support vector machine (SVM) regression were employed to build quantitative models. Fusidic acid, a steroidal antibiotic, was used as the model compound. It was found that PLS regression performed slightly better than SVM regression for all three spectroscopic techniques. Root mean square errors of prediction (RMSEP) ranged from 0.48% to 1.17% for diffuse reflectance FTIR spectroscopy, 1.60-1.93% for diffuse reflectance FT-NIR spectroscopy, and 1.62-2.31% for Raman spectroscopy. The results indicate that diffuse reflectance FTIR spectroscopy offers significant advantages in providing accurate measurement of polymorphic content in the fusidic acid binary mixtures, while Raman spectroscopy is the least accurate technique for quantitative analysis of polymorphs. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. High-Order Polynomial Expansions (HOPE) for flux-vector splitting

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Chris J., Jr.

    1991-01-01

    The Van Leer flux splitting is known to produce excessive numerical dissipation for Navier-Stokes calculations. Researchers attempt to remedy this deficiency by introducing a higher order polynomial expansion (HOPE) for the mass flux. In addition to Van Leer's splitting, a term is introduced so that the mass diffusion error vanishes at M = 0. Several splittings for pressure are proposed and examined. The effectiveness of the HOPE scheme is illustrated for 1-D hypersonic conical viscous flow and 2-D supersonic shock-wave boundary layer interactions. The authors also discuss the weaknesses of the scheme and suggest areas for further investigation.

  4. Fast temporal neural learning using teacher forcing

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad (Inventor); Barhen, Jacob (Inventor)

    1992-01-01

    A neural network is trained to output a time-dependent target vector defined over a predetermined time interval in response to a time-dependent input vector defined over the same time interval by applying corresponding elements of the error vector, or difference between the target vector and the actual neuron output vector, to the inputs of corresponding output neurons of the network as corrective feedback. This feedback decreases the error and quickens the learning process, so that a much smaller number of training cycles is required to complete the learning process. A conventional gradient descent algorithm is employed to update the neural network parameters at the end of the predetermined time interval. The foregoing process is repeated in repetitive cycles until the actual output vector corresponds to the target vector. In the preferred embodiment, as the overall error of the neural network output decreases during successive training cycles, the portion of the error fed back to the output neurons is decreased accordingly, allowing the network to learn with greater freedom from teacher forcing as the network parameters converge to their optimum values. The invention may also be used to train a neural network with stationary training and target vectors.

  6. Background Error Correlation Modeling with Diffusion Operators

    DTIC Science & Technology

    2013-01-01

    Book chapter, 2013. Chapter 8: "Background error correlation modeling with diffusion operators" (Max Yaremchuk). Excerpt: "... field, then a structure like this simulates enhanced diffusive transport of model errors in the regions of strong currents on the background of ..."

  7. A selective-update affine projection algorithm with selective input vectors

    NASA Astrophysics Data System (ADS)

    Kong, NamWoong; Shin, JaeWook; Park, PooGyeon

    2011-10-01

    This paper proposes an affine projection algorithm (APA) with selective input vectors, based on the concept of selective update, in order to reduce estimation errors and computations. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking with the mean square error (MSE) whether the input vectors have enough information for an update. The state-decision procedure determines the current state of the adaptive filter by using the state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors. On the other hand, as soon as the adaptive filter reaches the steady state, the update procedure is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity and low update complexity for colored input signals.
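    Both this record and the grouping variant further below build on the same core affine projection update, sketched here; the step size, regularization, and projection order are illustrative, and the selective-update logic itself (MSE check and state decision) is omitted.

```python
import numpy as np

def apa_update(w, U, d, mu=0.5, delta=1e-4):
    """One affine projection update: w <- w + mu * U (U^T U + delta I)^-1 e.

    U: (L, P) matrix whose columns are the last P input vectors;
    d: (P,) desired outputs. The selective/grouping variants choose
    which columns of U to keep before applying this step.
    """
    e = d - U.T @ w                                   # a priori error vector
    g = np.linalg.solve(U.T @ U + delta * np.eye(U.shape[1]), e)
    return w + mu * (U @ g)

# usage: identify a 16-tap filter from a colored (AR(1)) input signal
rng = np.random.default_rng(3)
true_w = rng.standard_normal(16)
w = np.zeros(16)
x = np.zeros(2000)
for n in range(1, 2000):
    x[n] = 0.9 * x[n - 1] + 0.1 * rng.standard_normal()
for n in range(20, 2000):
    U = np.column_stack([x[n - k - 16:n - k][::-1] for k in range(4)])  # P = 4
    w = apa_update(w, U, U.T @ true_w)
print("misalignment:", np.linalg.norm(w - true_w))
```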

  8. Background-Error Correlation Model Based on the Implicit Solution of a Diffusion Equation

    DTIC Science & Technology

    2010-01-01

    By Matthew J. Carrier and Hans Ngodock. Excerpt: "... (2001), which sought to model error correlations based on the explicit solution of a generalized diffusion equation. The implicit solution is ..."

  9. Fast higher-order MR image reconstruction using singular-vector separation.

    PubMed

    Wilm, Bertram J; Barmet, Christoph; Pruessmann, Klaas P

    2012-07-01

    Magnetic resonance imaging (MRI) conventionally relies on spatially linear gradient fields for image encoding. However, in practice various sources of nonlinear fields can perturb the encoding process and give rise to artifacts unless they are suitably addressed at the reconstruction level. Accounting for field perturbations that are neither linear in space nor constant over time, i.e., dynamic higher-order fields, is particularly challenging. It was previously shown to be feasible with conjugate-gradient iteration. However, so far this approach has been relatively slow due to the need to carry out explicit matrix-vector multiplications in each cycle. In this work, it is proposed to accelerate higher-order reconstruction by expanding the encoding matrix such that fast Fourier transform can be employed for more efficient matrix-vector computation. The underlying principle is to represent the perturbing terms as sums of separable functions of space and time. Compact representations with this property are found by singular-vector analysis of the perturbing matrix. Guidelines for balancing the accuracy and speed of the resulting algorithm are derived by error propagation analysis. The proposed technique is demonstrated for the case of higher-order field perturbations due to eddy currents caused by diffusion weighting. In this example, image reconstruction was accelerated by two orders of magnitude.
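    The separation step itself is ordinary linear algebra. The toy sketch below factors a synthetic space-time perturbation by SVD and shows how few separable terms are needed; in the reconstruction described above, each retained term costs roughly one FFT pass.

```python
import numpy as np

# synthetic space-time field perturbation (illustrative, low-rank by design)
space = np.linspace(-1.0, 1.0, 256)       # spatial coordinate
time = np.linspace(0.0, 1.0, 128)         # readout time
P = np.outer(space ** 2, np.exp(-3.0 * time)) + 0.1 * np.outer(space ** 3, time)

# separate P into a short sum of (function of space) x (function of time)
U, s, Vt = np.linalg.svd(P, full_matrices=False)
for r in (1, 2, 3):
    P_r = (U[:, :r] * s[:r]) @ Vt[:r]     # rank-r separable approximation
    print(f"rank {r}: max separation error {np.abs(P - P_r).max():.2e}")
```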

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    The systems resilience research community has developed methods to manually insert additional source-program-level assertions to trap errors, and has also devised tools to conduct fault injection studies for scalar program codes. In this work, we contribute the first vector-oriented LLVM-level fault injector, VULFI, to help study the effects of faults in vector architectures that are of growing importance, especially for vectorizing loops. Using VULFI, we conduct a resiliency study of nine real-world vector benchmarks using Intel's AVX and SSE extensions as the target vector instruction sets, and offer the first reported understanding of how faults affect vector instruction sets. We take this work further toward automating the insertion of resilience assertions during compilation. This is based on our observation that during intermediate (e.g., LLVM-level) code generation to handle full and partial vectorization, modern compilers exploit (and explicate in their code documentation) critical invariants. These invariants are turned into error-checking code. We confirm the efficacy of these automatically inserted low-overhead error detectors for vectorized for-loops.

  11. Image Halftoning Using Optimized Dot Diffusion

    DTIC Science & Technology

    1998-01-01

    The dot diffusion method for digital halftoning has the advantage of parallelism, unlike the error diffusion method. Known approaches to digital halftoning include ordered dither [1], error diffusion [2], neural-net based methods [8], and more recently direct binary search (DBS) [7]. Ordered dithering suffers from periodic patterns; error-diffused halftones, on the other hand, do not suffer from periodicity and offer a blue-noise characteristic [3].

  12. Effects of OCR Errors on Ranking and Feedback Using the Vector Space Model.

    ERIC Educational Resources Information Center

    Taghva, Kazem; And Others

    1996-01-01

    Reports on the performance of the vector space model in the presence of OCR (optical character recognition) errors in information retrieval. Highlights include precision and recall, a full-text test collection, smart vector representation, impact of weighting parameters, ranking variability, and the effect of relevance feedback. (Author/LRW)

  13. Estimation of attitude sensor timetag biases

    NASA Technical Reports Server (NTRS)

    Sedlak, J.

    1995-01-01

    This paper presents an extended Kalman filter for estimating attitude sensor timing errors. Spacecraft attitude is determined by finding the mean rotation from a set of reference vectors in inertial space to the corresponding observed vectors in the body frame. Any timing errors in the observations can lead to attitude errors if either the spacecraft is rotating or the reference vectors themselves vary with time. The state vector here consists of the attitude quaternion, timetag biases, and, optionally, gyro drift rate biases. The filter models the timetags as random walk processes: their expectation values propagate as constants and white noise contributes to their covariance. Thus, this filter is applicable to cases where the true timing errors are constant or slowly varying. The observability of the state vector is studied first through an examination of the algebraic observability condition and then through several examples with simulated star tracker timing errors. The examples use both simulated and actual flight data from the Extreme Ultraviolet Explorer (EUVE). The flight data come from times when EUVE had a constant rotation rate, while the simulated data feature large angle attitude maneuvers. The tests include cases with timetag errors on one or two sensors, both constant and time-varying, and with and without gyro bias errors. Due to EUVE's sensor geometry, the observability of the state vector is severely limited when the spacecraft rotation rate is constant. In the absence of attitude maneuvers, the state elements are highly correlated, and the state estimate is unreliable. The estimates are particularly sensitive to filter mistuning in this case. The EUVE geometry, though, is a degenerate case having coplanar sensors and rotation vector. Observability is much improved and the filter performs well when the rate is either varying or noncoplanar with the sensors, as during a slew. Even with bad geometry and constant rates, if gyro biases are independently known, the timetag error for a single sensor can be accurately estimated as long as its boresight is not too close to the spacecraft rotation axis.

  14. An Analysis of a Finite Element Method for Convection-Diffusion Problems. Part II. A Posteriori Error Estimates and Adaptivity.

    DTIC Science & Technology

    1983-03-01

    An Analysis of a Finite Element Method for Convection-Diffusion Problems. Part II: A Posteriori Error Estimates and Adaptivity. By W. G. Szymczak and I. Babuška (final report covering the life of the contract).

  15. [Identification of special quality eggs with NIR spectroscopy technology based on symbol entropy feature extraction method].

    PubMed

    Zhao, Yong; Hong, Wen-Xue

    2011-11-01

    Fast, nondestructive and accurate identification of special quality eggs is an urgent problem. The present paper proposes a new feature extraction method based on symbol entropy to identify near-infrared spectra of special quality eggs. The authors selected normal eggs, free-range eggs, selenium-enriched eggs and zinc-enriched eggs as research objects and measured the near-infrared diffuse reflectance spectra in the range of 12 000-4 000 cm⁻¹. Raw spectra were symbolically represented with an aggregation approximation algorithm and symbolic entropy was extracted as the feature vector. An error-correcting output codes multiclass support vector machine classifier was designed to identify the spectra. The symbolic entropy feature is robust to parameter changes, and the highest recognition rate reaches 100%. The results show that the identification of special quality eggs using near-infrared spectroscopy is feasible and that symbol entropy can be used as a new feature extraction method for near-infrared spectra.
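    One plausible reading of the symbolic-entropy feature (an assumption on my part, not the paper's exact algorithm) is sketched below: z-normalize each spectrum, quantize it into a small alphabet with SAX-style Gaussian breakpoints, and take the Shannon entropy of the symbol histogram as the feature value.

```python
import numpy as np
from scipy.stats import norm

def symbol_entropy(spectrum, n_symbols=8):
    """Shannon entropy of a SAX-style symbolization of a spectrum.

    A hedged interpretation of the record's feature: equal-probability
    Gaussian bins on the z-normalized signal, then histogram entropy.
    """
    z = (spectrum - spectrum.mean()) / spectrum.std()
    edges = norm.ppf(np.linspace(0.0, 1.0, n_symbols + 1)[1:-1])
    symbols = np.digitize(z, edges)
    p = np.bincount(symbols, minlength=n_symbols) / symbols.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# usage on a synthetic "spectrum"
rng = np.random.default_rng(5)
print(symbol_entropy(np.cumsum(rng.standard_normal(2000))))
```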

  16. Numerical simulations of short-mixing-time double-wave-vector diffusion-weighting experiments with multiple concatenations on whole-body MR systems

    NASA Astrophysics Data System (ADS)

    Finsterbusch, Jürgen

    2010-12-01

    Double- or two-wave-vector diffusion-weighting experiments with short mixing times, in which two diffusion-weighting periods are applied in direct succession, are a promising tool to estimate cell sizes in living tissue. However, the underlying effect, a signal difference between parallel and antiparallel wave vector orientations, is considerably reduced for the long gradient pulses required on whole-body MR systems. Recently, it has been shown that multiple concatenations of the two wave vectors in a single acquisition can double the modulation amplitude if short gradient pulses are used. In this study, numerical simulations of such experiments were performed with parameters achievable on whole-body MR systems. It is shown that the theoretical model yields a good approximation of the signal behavior if an additional term describing free diffusion is included. More importantly, it is demonstrated that the shorter gradient pulses sufficient to achieve the desired diffusion weighting with multiple concatenations increase the signal modulation considerably, e.g. by a factor of about five for five concatenations. Even at identical echo times, achieved by a shortened diffusion time, a moderate number of concatenations significantly improves the signal modulation. Thus, experiments on whole-body MR systems may benefit from multiple concatenations.

  17. Artificial Vector Calibration Method for Differencing Magnetic Gradient Tensor Systems

    PubMed Central

    Li, Zhining; Zhang, Yingtang; Yin, Gang

    2018-01-01

    The measurement error of the differencing (i.e., using two homogenous field sensors at a known baseline distance) magnetic gradient tensor system includes the biases, scale factors, nonorthogonality of the single magnetic sensor, and the misalignment error between the sensor arrays, all of which can severely affect the measurement accuracy. In this paper, we propose a low-cost artificial vector calibration method for the tensor system. Firstly, the error parameter linear equations are constructed based on the single-sensor’s system error model to obtain the artificial ideal vector output of the platform, with the total magnetic intensity (TMI) scalar as a reference by two nonlinear conversions, without any mathematical simplification. Secondly, the Levenberg–Marquardt algorithm is used to compute the integrated model of the 12 error parameters by nonlinear least-squares fitting method with the artificial vector output as a reference, and a total of 48 parameters of the system is estimated simultaneously. The calibrated system outputs along the reference platform-orthogonal coordinate system. The analysis results show that the artificial vector calibrated output can track the orientation fluctuations of TMI accurately, effectively avoiding the “overcalibration” problem. The accuracy of the error parameters’ estimation in the simulation is close to 100%. The experimental root-mean-square error (RMSE) of the TMI and tensor components is less than 3 nT and 20 nT/m, respectively, and the estimation of the parameters is highly robust. PMID:29373544

  18. [Transposition errors during learning to reproduce a sequence by the right- and the left-hand movements: simulation of positional and movement coding].

    PubMed

    Liakhovetskiĭ, V A; Bobrova, E V; Skopin, G N

    2012-01-01

    Transposition errors during the reproduction of a hand movement sequence make it possible to obtain important information on the internal representation of this sequence in motor working memory. Analysis of such errors showed that learning to reproduce sequences of left-hand movements improves the system of positional coding (coding of positions), while learning of right-hand movements improves the system of vector coding (coding of movements). Learning of right-hand movements after left-hand performance involved the system of positional coding "imposed" by the left hand. Learning of left-hand movements after right-hand performance activated the system of vector coding. Transposition errors during learning to reproduce movement sequences can be explained by a neural network using either vector coding or both vector and positional coding.

  19. Analysis of phase error effects in multishot diffusion-prepared turbo spin echo imaging

    PubMed Central

    Cervantes, Barbara; Kooijman, Hendrik; Karampinos, Dimitrios C.

    2017-01-01

    Background To characterize the effect of phase errors on the magnitude and the phase of the diffusion-weighted (DW) signal acquired with diffusion-prepared turbo spin echo (dprep-TSE) sequences. Methods Motion and eddy currents were identified as the main sources of phase errors. An analytical expression for the effect of phase errors on the acquired signal was derived and verified using Bloch simulations, phantom, and in vivo experiments. Results Simulations and experiments showed that phase errors during the diffusion preparation cause both magnitude and phase modulation on the acquired data. When motion-induced phase error (MiPe) is accounted for (e.g., with motion-compensated diffusion encoding), the signal magnitude modulation due to the leftover eddy-current-induced phase error cannot be eliminated by the conventional phase cycling and sum-of-squares (SOS) method. By employing magnitude stabilizers, the phase-error-induced magnitude modulation, regardless of its cause, was removed but the phase modulation remained. The in vivo comparison between pulsed gradient and flow-compensated diffusion preparations showed that MiPe needed to be addressed in multi-shot dprep-TSE acquisitions employing magnitude stabilizers. Conclusions A comprehensive analysis of phase errors in dprep-TSE sequences showed that magnitude stabilizers are mandatory in removing the phase error induced magnitude modulation. Additionally, when multi-shot dprep-TSE is employed the inconsistent signal phase modulation across shots has to be resolved before shot-combination is performed. PMID:28516049

  20. An affine projection algorithm using grouping selection of input vectors

    NASA Astrophysics Data System (ADS)

    Shin, JaeWook; Kong, NamWoong; Park, PooGyeon

    2011-10-01

    This paper presents an affine projection algorithm (APA) using grouping selection of input vectors. To improve the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, in the selection procedure, the few input vectors that have enough information for the coefficient update are selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm achieves small steady-state estimation errors compared with existing algorithms.

  1. A New Unified Analysis of Estimate Errors by Model-Matching Phase-Estimation Methods for Sensorless Drive of Permanent-Magnet Synchronous Motors and New Trajectory-Oriented Vector Control, Part II

    NASA Astrophysics Data System (ADS)

    Shinnaka, Shinji

    This paper presents a new unified analysis of estimate errors by model-matching extended-back-EMF estimation methods for sensorless drive of permanent-magnet synchronous motors. The analytical solutions for the estimate errors, whose validity is confirmed by numerical experiments, are rich in universality and applicability. As an example of this universality and applicability, a new trajectory-oriented vector control method is proposed, which can directly realize a quasi-optimal strategy minimizing total losses, with no additional computational load, by simply orienting one of the vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is analytically derived, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems using model-matching extended-back-EMF estimation methods.

  2. Image Halftoning and Inverse Halftoning for Optimized Dot Diffusion

    DTIC Science & Technology

    1998-01-01

    The dot diffusion method for digital halftoning has the advantage of parallelism, unlike the error diffusion method. Known approaches to digital halftoning include ordered dither [3], error diffusion [4], neural-net based methods [2], and more recently direct binary search (DBS) [10]. Ordered dithering produces periodic patterns; error-diffused halftones, on the other hand, do not suffer from periodicity and offer a blue-noise characteristic [11].

  3. A median filter approach for correcting errors in a vector field

    NASA Technical Reports Server (NTRS)

    Schultz, H.

    1985-01-01

    Techniques are presented for detecting and correcting errors in a vector field. These methods employ median filters which are frequently used in image processing to enhance edges and remove noise. A detailed example is given for wind field maps produced by a spaceborne scatterometer. The error detection and replacement algorithm was tested with simulation data from the NASA Scatterometer (NSCAT) project.
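    A minimal sketch of this screening applied componentwise to a 2-D wind field is below; the window size and rejection threshold are illustrative choices, not values from the NSCAT work.

```python
import numpy as np
from scipy.ndimage import median_filter

def clean_vector_field(u, v, size=3, thresh=2.0):
    """Detect and replace outlier vectors using local componentwise medians,
    in the spirit of median-filter screening of scatterometer wind fields."""
    u_med = median_filter(u, size=size)
    v_med = median_filter(v, size=size)
    bad = np.hypot(u - u_med, v - v_med) > thresh   # deviation from local median
    return np.where(bad, u_med, u), np.where(bad, v_med, v), bad

# usage: a smooth field with two spurious vectors
u = np.ones((10, 10))
v = np.zeros((10, 10))
u[4, 4], v[7, 2] = 9.0, -8.0
u_c, v_c, flagged = clean_vector_field(u, v)
print("replaced", int(flagged.sum()), "vectors")
```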

  4. Machine Learning-based Classification of Diffuse Large B-cell Lymphoma Patients by Their Protein Expression Profiles.

    PubMed

    Deeb, Sally J; Tyanova, Stefka; Hummel, Michael; Schmidt-Supprian, Marc; Cox, Juergen; Mann, Matthias

    2015-11-01

    Characterization of tumors at the molecular level has improved our knowledge of cancer causation and progression. Proteomic analysis of their signaling pathways promises to enhance our understanding of cancer aberrations at the functional level, but this requires accurate and robust tools. Here, we develop a state of the art quantitative mass spectrometric pipeline to characterize formalin-fixed paraffin-embedded tissues of patients with closely related subtypes of diffuse large B-cell lymphoma. We combined a super-SILAC approach with label-free quantification (hybrid LFQ) to address situations where the protein is absent in the super-SILAC standard but present in the patient samples. Shotgun proteomic analysis on a quadrupole Orbitrap quantified almost 9,000 tumor proteins in 20 patients. The quantitative accuracy of our approach allowed the segregation of diffuse large B-cell lymphoma patients according to their cell of origin using both their global protein expression patterns and the 55-protein signature obtained previously from patient-derived cell lines (Deeb, S. J., D'Souza, R. C., Cox, J., Schmidt-Supprian, M., and Mann, M. (2012) Mol. Cell. Proteomics 11, 77-89). Expression levels of individual segregation-driving proteins as well as categories such as extracellular matrix proteins behaved consistently with known trends between the subtypes. We used machine learning (support vector machines) to extract candidate proteins with the highest segregating power. A panel of four proteins (PALD1, MME, TNFAIP8, and TBC1D4) is predicted to classify patients with low error rates. Highly ranked proteins from the support vector analysis revealed differential expression of core signaling molecules between the subtypes, elucidating aspects of their pathobiology. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  5. Noise-induced drift in two-dimensional anisotropic systems

    NASA Astrophysics Data System (ADS)

    Farago, Oded

    2017-10-01

    We study the isothermal Brownian dynamics of a particle in a system with spatially varying diffusivity. Due to the heterogeneity of the system, the particle's mean displacement does not vanish even if it does not experience any physical force. This phenomenon has been termed "noise-induced drift," and has been extensively studied for one-dimensional systems. Here, we examine the noise-induced drift in a two-dimensional anisotropic system, characterized by a symmetric diffusion tensor with unequal diagonal elements. A general expression for the mean displacement vector is derived and presented as a sum of two vectors, depicting two distinct drifting effects. The first vector describes the tendency of the particle to drift toward the high diffusivity side in each orthogonal principal diffusion direction. This is a generalization of the well-known expression for the noise-induced drift in one-dimensional systems. The second vector represents a novel drifting effect, not found in one-dimensional systems, originating from the spatial rotation in the directions of the principal axes. The validity of the derived expressions is verified by using Langevin dynamics simulations. As a specific example, we consider the relative diffusion of two transmembrane proteins, and demonstrate that the average distance between them increases at a surprisingly fast rate of several tens of micrometers per second.
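    In compact form, and assuming the convention in which the one-dimensional result reads ⟨Δx⟩ = D′(x)Δt (an assumption about the paper's convention), the decomposition described above can be written as:

```latex
% Short-time mean displacement for a symmetric, position-dependent
% diffusion tensor D(r) in 2-D (convention assumed as stated above):
\langle \Delta \mathbf{r} \rangle = (\nabla\cdot\mathsf{D})\,\Delta t,
\qquad
(\nabla\cdot\mathsf{D})_i = \sum_j \partial_j D_{ij}.
% In the local principal-axis frame with eigenvalues D_1(r), D_2(r), this
% splits into (i) gradients of D_1 and D_2 along their own axes -- the
% generalized one-dimensional drift toward higher diffusivity -- and
% (ii) terms generated by the spatial rotation of the principal axes,
% the second, genuinely two-dimensional effect described in the record.
```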

  6. Halftoning Algorithms and Systems.

    DTIC Science & Technology

    1996-08-01

    Keywords: halftoning algorithms; error diffusion; color printing; topographic maps. ... graylevels for each screen level. In the case of error diffusion algorithms, the calibration procedure using the new centering concept manifests itself as a ... Novel Centering Concept for Overlapping Correction, Paper/Transparency (patent applied 5/94). Applications: to error diffusion; to dithering (IS&T ...

  7. Improved Dot Diffusion For Image Halftoning

    DTIC Science & Technology

    1999-01-01

    The dot diffusion method for digital halftoning has the advantage of parallelism, unlike the error diffusion method. The method was recently improved ... by optimization of the so-called class matrix so that the resulting halftones are comparable to error-diffused halftones. In this paper we will ... first review the dot diffusion method. Previously, 8 x 8 class matrices were used for the dot diffusion method. A problem with this size of class matrix is

  8. SU-E-J-45: The Correlation Between CBCT Flat Panel Misalignment and 3D Image Guidance Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kenton, O; Valdes, G; Yin, L

    Purpose: To simulate the impact of CBCT flat panel misalignment on image quality and on the calculated correction vectors in 3D image-guided proton therapy, and to determine if these calibration errors can be caught in our QA process. Methods: The X-ray source and detector geometrical calibration (flexmap) file of the CBCT system in the AdaPTinsight software (IBA proton therapy) was edited to induce known changes in the rotational and translational calibrations of the imaging panel. Translations of up to ±10 mm in the x, y and z directions (see supplemental) and rotational errors of up to ±3° were induced. The calibration files were then used to reconstruct the CBCT image of a pancreatic patient and a CatPhan phantom. Correction vectors were calculated for the patient using the software's auto-match system and compared to baseline values. The CatPhan CBCT images were used for quantitative evaluation of image quality for each type of induced error. Results: Translations of 1 to 3 mm in the x and y calibration resulted in corresponding correction vector errors of equal magnitude. Similar 10 mm shifts were seen in the y-direction; however, in the x-direction, the image quality was too degraded for a match. These translational errors can be identified through differences in isocenter from orthogonal kV images taken during routine QA. Errors in the z-direction had no effect on the correction vector and image quality. Rotations of the imaging panel calibration resulted in corresponding correction vector rotations of the patient images. These rotations also resulted in degraded image quality, which can be identified through quantitative image quality metrics. Conclusion: Misalignment of CBCT geometry can lead to incorrect translational and rotational patient correction vectors. These errors can be identified through QA of the imaging isocenter as compared to orthogonal images, combined with monitoring of CBCT image quality.

  9. Application of Bred Vectors To Data Assimilation

    NASA Astrophysics Data System (ADS)

    Corazza, M.; Kalnay, E.; Patil, Dj

    We introduced a statistic, the BV-dimension, to measure the effective local finite-time dimensionality of the atmosphere. We show that this dimension is often quite low, and suggest that this finding has important implications for data assimilation and the accuracy of weather forecasting (Patil et al, 2001). The original database for this study was the forecasts of the NCEP global ensemble forecasting system. The initial differences between the control forecast and the perturbed forecasts are called bred vectors. The control and perturbed initial conditions valid at time t = nΔt are evolved using the forecast model until time t = (n+1)Δt. The differences between the perturbed and the control forecasts are scaled down to their initial amplitude, and constitute the bred vectors valid at (n+1)Δt. Their growth rate is typically about 1.5/day. The bred vectors are similar by construction to leading Lyapunov vectors except that they have small but finite amplitude, and they are valid at finite times. The original NCEP ensemble data set has 5 independent bred vectors. We define a local bred vector at each grid point by choosing the 5 by 5 grid points centered at the grid point (a region of about 1100 km by 1100 km), and using the north-south and east-west velocity components at the 500 mb pressure level to form a 50-dimensional column vector. Since we have k=5 global bred vectors, we also have k local bred vectors at each grid point. We estimate the effective dimensionality of the subspace spanned by the local bred vectors by performing a singular value decomposition (EOF analysis). The k local bred vector columns form a 50 x k matrix M. The singular values s(i) of M measure the extent to which the k column unit vectors making up the matrix M point in the direction of v(i). We define the bred vector dimension as BVDIM = {Sum[s(i)]}^2 / Sum[s(i)^2]. For example, if 4 out of the 5 vectors lie along v(1), and one lies along v(2), the BV-dimension would be BVDIM[sqrt(4), 1, 0, 0, 0] = 1.8, less than 2 because one direction is more dominant than the other in representing the original data. The results (Patil et al, 2001) show that there are large regions where the bred vectors span a subspace of substantially lower dimension than that of the full space. These low dimensionality regions are dominant in the baroclinic extratropics, typically have a lifetime of 3-7 days, have a well-defined horizontal and vertical structure that spans most of the atmosphere, and tend to move eastward. New results with a large number of ensemble members confirm these results and indicate that the low dimensionality regions are quite robust, and depend only on the verification time (i.e., the underlying flow). Corazza et al (2001) have performed experiments with a data assimilation system based on a quasi-geostrophic model and simulated observations (Morss, 1999, Hamill et al, 2000). A 3D-variational data assimilation scheme for a quasi-geostrophic channel model is used to study the structure of the background error and its relationship to the corresponding bred vectors. The "true" evolution of the model atmosphere is defined by an integration of the model and "rawinsonde observations" are simulated by randomly perturbing the true state at fixed locations. It is found that after 3-5 days the bred vectors develop well organized structures which are very similar for the two different norms considered in this paper (potential vorticity norm and streamfunction norm).
The results show that the bred vectors do indeed represent well the characteristics of the data assimilation forecast errors, and that the subspace of bred vectors contains most of the forecast error, except in areas where the forecast errors are small. For example, the angle between the 6 hr forecast error and the subspace spanned by 10 bred vectors is less than 10° over 90% of the domain, indicating a pattern correlation of more than 98.5% between the forecast error and its projection onto the bred vector subspace. The presence of low-dimensional regions in the perturbations of the basic flow has important implications for data assimilation. At any given time, there is a difference between the true atmospheric state and the model forecast. Assuming that model errors are not the dominant source of errors, in a region of low BV-dimensionality the difference between the true state and the forecast should lie substantially in the low dimensional unstable subspace of the few bred vectors that contribute most strongly to the low BV-dimension. This information should yield a substantial improvement in the forecast: the data assimilation algorithm should correct the model state by moving it closer to the observations along the unstable subspace, since this is where the true state most likely lies. Preliminary experiments have been conducted with the quasi-geostrophic data assimilation system testing whether it is possible to add "errors of the day" based on bred vectors to the standard (constant) 3D-Var background error covariance in order to capture these important errors. The results are extremely encouraging, indicating a significant reduction (about 40%) in the analysis errors at a very low computational cost. References: Corazza, M., E. Kalnay, D. J. Patil, R. Morss, M. Cai, I. Szunyogh, B. R. Hunt, E. Ott and J. A. Yorke, 2001: Use of the breeding technique to estimate the structure of the analysis "errors of the day". Submitted to Nonlinear Processes in Geophysics. Hamill, T. M., Snyder, C., and Morss, R. E., 2000: A Comparison of Probabilistic Forecasts from Bred, Singular-Vector and Perturbed Observation Ensembles. Mon. Wea. Rev., 128, 1835-1851. Kalnay, E., and Z. Toth, 1994: Removing growing errors in the analysis cycle. Preprints of the Tenth Conference on Numerical Weather Prediction, Amer. Meteor. Soc., 1994, 212-215. Morss, R. E., 1999: Adaptive observations: Idealized sampling strategies for improving numerical weather prediction. PhD thesis, Massachusetts Institute of Technology, 225pp. Patil, D. J. S., B. R. Hunt, E. Kalnay, J. A. Yorke, and E. Ott, 2001: Local Low Dimensionality of Atmospheric Dynamics. Phys. Rev. Lett., 86, 5878.
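
    The BV-dimension defined above is a one-liner in practice. A minimal sketch, assuming numpy, using the abstract's own five-singular-value example as a check:

    ```python
    import numpy as np

    def bv_dimension(bred_vectors):
        """BV-dimension of Patil et al. (2001): bred_vectors is an (n x k)
        matrix whose k columns are the local bred vectors (here n = 50,
        i.e. u and v at 25 grid points)."""
        s = np.linalg.svd(bred_vectors, compute_uv=False)  # singular values
        return s.sum() ** 2 / (s ** 2).sum()

    # The example from the abstract: singular values (sqrt(4), 1, 0, 0, 0).
    print(bv_dimension(np.diag([2.0, 1.0, 0.0, 0.0, 0.0])))  # -> 1.8
    ```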

  10. Statistical error in simulations of Poisson processes: Example of diffusion in solids

    NASA Astrophysics Data System (ADS)

    Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.

    2016-08-01

    Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any particular computational method, but is valid in the context of simulation of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
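
    The generic scaling behind such error estimates can be shown directly: a Poisson-distributed event count N has Var(N) = <N>, so any quantity proportional to the count (such as a conductivity estimated from the number of successful hops) carries a relative statistical error of roughly 1/sqrt(<N>). This sketch verifies the scaling by sampling; it illustrates the general principle, not the paper's specific expression.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hop counts from many independent simulations of the same Poisson process.
    rate, t_sim, n_runs = 50.0, 1.0, 10000
    counts = rng.poisson(rate * t_sim, size=n_runs)

    rel_err = counts.std() / counts.mean()
    print(rel_err, 1.0 / np.sqrt(rate * t_sim))   # both ~ 0.141
    ```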

  11. An Adaptive Method of Lines with Error Control for Parabolic Equations of the Reaction-Diffusion Type.

    DTIC Science & Technology

    1984-06-01

    ... space discretization error. 1. INTRODUCTION: Reaction-diffusion processes occur in many branches of biology and physical chemistry. Examples ... to model reaction-diffusion phenomena. The primary goal of this adaptive method is to keep a particular norm of the space discretization error less ...

  12. Design of thrust vectoring exhaust nozzles for real-time applications using neural networks

    NASA Technical Reports Server (NTRS)

    Prasanth, Ravi K.; Markin, Robert E.; Whitaker, Kevin W.

    1991-01-01

    Thrust vectoring continues to be an important issue in military aircraft system designs. A recently developed concept of vectoring aircraft thrust makes use of flexible exhaust nozzles. Subtle modifications in the nozzle wall contours produce a non-uniform flow field containing a complex pattern of shock and expansion waves. The end result, due to the asymmetric velocity and pressure distributions, is vectored thrust. Specification of the nozzle contours required for a desired thrust vector angle (an inverse design problem) has been achieved with genetic algorithms. This approach is computationally intensive and prevents the nozzles from being designed in real time, which is necessary for an operational aircraft system. An investigation was conducted into using genetic algorithms to train a neural network in an attempt to obtain, in real time, two-dimensional nozzle contours. Results show that genetic-algorithm-trained neural networks provide a viable, real-time alternative for designing thrust vectoring nozzle contours. Thrust vector angles up to 20 deg were obtained within an average error of 0.0914 deg. The error surfaces encountered were highly degenerate and thus the robustness of genetic algorithms was well suited for minimizing global errors.
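
    A toy version of the approach, for illustration only: a plain generational genetic algorithm evolves the weights of a tiny neural network mapping a desired thrust-vector angle to two hypothetical contour parameters. The "truth" function, network size, and GA settings are invented stand-ins for the paper's aerodynamic inverse problem.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def nn(params, x):
        """Tiny 1-8-2 MLP: angle -> two (hypothetical) contour parameters."""
        w1, b1 = params[:8], params[8:16]
        w2, b2 = params[16:32].reshape(8, 2), params[32:34]
        h = np.tanh(np.outer(x, w1) + b1)
        return h @ w2 + b2

    # Synthetic "truth" standing in for the aerodynamic inverse problem.
    angles = np.linspace(0.0, 20.0, 41)
    target = np.c_[0.05 * angles, 0.002 * angles ** 2]

    def fitness(params):
        return -np.mean((nn(params, angles) - target) ** 2)

    pop = rng.normal(scale=0.5, size=(100, 34))
    for gen in range(300):
        scores = np.array([fitness(p) for p in pop])
        elite = pop[np.argsort(scores)[-20:]]                 # selection
        parents = elite[rng.integers(0, 20, size=(100, 2))]
        mask = rng.random((100, 34)) < 0.5                    # uniform crossover
        pop = np.where(mask, parents[:, 0], parents[:, 1])
        pop += rng.normal(scale=0.05, size=pop.shape)         # mutation
        pop[0] = elite[-1]                                    # elitism
    print("final MSE:", -fitness(pop[0]))
    ```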

  13. Gravity Compensation Using EGM2008 for High-Precision Long-Term Inertial Navigation Systems

    PubMed Central

    Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Liu, Tianyi; Hu, Peida; Li, Haixia

    2016-01-01

    The gravity disturbance vector is one of the major error sources in high-precision and long-term inertial navigation applications. Specific to the inertial navigation systems (INSs) with high-order horizontal damping networks, analyses of the error propagation show that the gravity-induced errors exist almost exclusively in the horizontal channels and are mostly caused by deflections of the vertical (DOV). Low-frequency components of the DOV propagate into the latitude and longitude errors at a ratio of 1:1 and time-varying fluctuations in the DOV excite Schuler oscillation. This paper presents two gravity compensation methods using the Earth Gravitational Model 2008 (EGM2008), namely, interpolation from the off-line database and computing gravity vectors directly using the spherical harmonic model. Particular attention is given to the error contribution of the gravity update interval and computing time delay. It is recommended for marine navigation that a gravity vector should be calculated within 1 s and updated every 100 s at most. To meet this demand, the time duration of calculating the current gravity vector using EGM2008 has been reduced to less than 1 s by optimizing the calculation procedure. A few off-line experiments were conducted using the data of a shipborne INS collected during an actual sea test. With the aid of EGM2008, most of the low-frequency components of the position errors caused by the gravity disturbance vector have been removed and the Schuler oscillation has been attenuated effectively. In the rugged terrain, the horizontal position error could be reduced by up to 48.85% of its regional maximum. The experimental results match the theoretical analysis and indicate that EGM2008 is suitable for gravity compensation of high-precision and long-term INSs. PMID:27999351

  14. Gravity Compensation Using EGM2008 for High-Precision Long-Term Inertial Navigation Systems.

    PubMed

    Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Liu, Tianyi; Hu, Peida; Li, Haixia

    2016-12-18

    The gravity disturbance vector is one of the major error sources in high-precision and long-term inertial navigation applications. Specific to the inertial navigation systems (INSs) with high-order horizontal damping networks, analyses of the error propagation show that the gravity-induced errors exist almost exclusively in the horizontal channels and are mostly caused by deflections of the vertical (DOV). Low-frequency components of the DOV propagate into the latitude and longitude errors at a ratio of 1:1 and time-varying fluctuations in the DOV excite Schuler oscillation. This paper presents two gravity compensation methods using the Earth Gravitational Model 2008 (EGM2008), namely, interpolation from the off-line database and computing gravity vectors directly using the spherical harmonic model. Particular attention is given to the error contribution of the gravity update interval and computing time delay. It is recommended for marine navigation that a gravity vector should be calculated within 1 s and updated every 100 s at most. To meet this demand, the time duration of calculating the current gravity vector using EGM2008 has been reduced to less than 1 s by optimizing the calculation procedure. A few off-line experiments were conducted using the data of a shipborne INS collected during an actual sea test. With the aid of EGM2008, most of the low-frequency components of the position errors caused by the gravity disturbance vector have been removed and the Schuler oscillation has been attenuated effectively. In the rugged terrain, the horizontal position error could be reduced by up to 48.85% of its regional maximum. The experimental results match the theoretical analysis and indicate that EGM2008 is suitable for gravity compensation of high-precision and long-term INSs.
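
    Of the two compensation methods, interpolation from an off-line database is straightforward to sketch. The snippet below bilinearly interpolates gridded DOV components and converts them to a horizontal gravity disturbance. The grids are placeholders (a real system would precompute them from EGM2008), and the sign convention used is one common choice, not necessarily the paper's.

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Hypothetical off-line DOV grids on a regular lat/lon raster
    # (values faked as zeros here; precompute from EGM2008 in practice).
    lat = np.linspace(-90.0, 90.0, 181)
    lon = np.linspace(0.0, 360.0, 361)
    xi_grid = np.zeros((181, 361))    # north-south DOV component, arcsec
    eta_grid = np.zeros((181, 361))   # east-west DOV component, arcsec

    xi_interp = RegularGridInterpolator((lat, lon), xi_grid)
    eta_interp = RegularGridInterpolator((lat, lon), eta_grid)

    def gravity_disturbance(lat_deg, lon_deg, g=9.80665):
        """Horizontal gravity disturbance (m/s^2) from interpolated DOV,
        re-evaluated e.g. every 100 s as the abstract recommends."""
        arcsec = np.pi / (180.0 * 3600.0)
        xi = xi_interp([[lat_deg, lon_deg]])[0] * arcsec
        eta = eta_interp([[lat_deg, lon_deg]])[0] * arcsec
        # Small-angle approximation: the DOV tilts the plumb line, giving
        # horizontal accelerations ~ g*xi (north) and g*eta (east).
        return np.array([-g * xi, -g * eta])
    ```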

  15. Numerical Study of Buoyancy and Different Diffusion Effects on the Structure and Dynamics of Triple Flames

    NASA Technical Reports Server (NTRS)

    Chen, Jyh-Yuan; Echekki, Tarek

    2001-01-01

    Numerical simulations of 2-D triple flames under gravity have been performed to identify the effects of gravity on triple flame structure and propagation properties and to understand the mechanisms of instabilities resulting from both heat release and buoyancy effects. A wide range of gravity conditions, heat release, and mixing widths for a scalar mixing layer are computed for downward-propagating (in the same direction as the gravity vector) and upward-propagating (in the opposite direction of the gravity vector) triple flames. Results of numerical simulations show that gravity strongly affects the triple flame speed through its contribution to the overall flow field. A simple analytical model for the triple flame speed, which accounts for both buoyancy and heat release, is developed. Comparisons of the proposed model with the numerical results for a wide range of gravity, heat release, and mixing width conditions yield very good agreement. The analysis shows that under neutral diffusion, downward propagation reduces the triple flame speed, while upward propagation enhances it. For the former condition, a critical Froude number may be evaluated, which corresponds to a vanishing triple flame speed. Downward-propagating triple flames with relatively strong gravity effects have exhibited instabilities. These instabilities are generated without any artificial forcing of the flow. Instead, disturbances are initiated by minute round-off errors in the numerical simulations, and subsequently amplified by instabilities. A linear stability analysis on mean profiles of stable triple flame configurations has been performed to identify the most amplified frequency in spatially developing flows. The eigenfunction equations obtained from the linearized disturbance equations are solved using the shooting method. The linear stability analysis yields reasonably good agreement with the observed frequencies of the unstable triple flames. The frequencies and amplitudes of disturbances increase with the magnitude of the gravity vector. Moreover, disturbances appear to be most amplified just downstream of the premixed branches. The effects of mixing width and differential diffusion are investigated and their roles in flame stability are studied.

  16. Optical Oversampled Analog-to-Digital Conversion

    DTIC Science & Technology

    1992-06-29

    ... hologram weights and interconnects in the digital image halftoning configuration. First, no temporal error diffusion occurs in the digital image ... halftoning error diffusion architecture as demonstrated by Equation (6.1). Equation (6.2) ensures that the hologram weights sum to one so that the exact ... optimum halftone image should be faster. Similarly, decreased convergence time suggests that an error diffusion filter with larger spatial dimensions ...

  17. A New Unified Analysis of Estimate Errors by Model-Matching Phase-Estimation Methods for Sensorless Drive of Permanent-Magnet Synchronous Motors and New Trajectory-Oriented Vector Control, Part I

    NASA Astrophysics Data System (ADS)

    Shinnaka, Shinji; Sano, Kousuke

    This paper presents a new unified analysis of estimate errors by model-matching phase-estimation methods, such as rotor-flux state observers, back-EMF state observers, and back-EMF disturbance observers, for sensorless drive of permanent-magnet synchronous motors. The analytical solutions for the estimate errors, whose validity is confirmed by numerical experiments, are broadly universal and applicable. As an example of this universality and applicability, a new trajectory-oriented vector control method is proposed, which can directly realize a quasi-optimal strategy minimizing total losses with no additional computational load, by simply orienting one of the vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is derived analytically, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems using one of the model-matching phase-estimation methods.

  18. Color digital halftoning taking colorimetric color reproduction into account

    NASA Astrophysics Data System (ADS)

    Haneishi, Hideaki; Suzuki, Toshiaki; Shimoyama, Nobukatsu; Miyake, Yoichi

    1996-01-01

    Taking colorimetric color reproduction into account, the conventional error diffusion method is modified for color digital halftoning. Assuming that the input to a bilevel color printer is given in CIE-XYZ tristimulus values or CIE-LAB values instead of the more conventional RGB or YMC values, two modified versions based on vector operations in (1) the XYZ color space and (2) the LAB color space were tested. Experimental results show that the modified methods, especially the one using the LAB color space, achieve better color reproduction than the conventional methods. Spatial artifacts that appear in the modified methods are presented and analyzed. It is also shown that modified method (2) with a thresholding technique achieves good spatial image quality.
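
    In vector error diffusion of this kind, the quantization error is the full color-difference vector, diffused component-wise to unprocessed neighbors. A minimal sketch, assuming a Floyd-Steinberg kernel and Euclidean nearest-palette-color selection in the working space (the paper's exact kernel and distance measure are not specified in the abstract):

    ```python
    import numpy as np

    # Floyd-Steinberg weights as (dy, dx, weight); an assumed kernel.
    FS_KERNEL = [(0, 1, 7 / 16), (1, -1, 3 / 16), (1, 0, 5 / 16), (1, 1, 1 / 16)]

    def vector_error_diffusion(img, palette):
        """Halftone img (H x W x 3, e.g. CIE-LAB values) onto palette
        (N x 3, same space), diffusing the full color error vector."""
        work = img.astype(float).copy()
        h, w, _ = work.shape
        out = np.zeros((h, w), dtype=int)
        for y in range(h):
            for x in range(w):
                pixel = work[y, x]
                idx = int(np.argmin(((palette - pixel) ** 2).sum(axis=1)))
                out[y, x] = idx                      # nearest palette color
                err = pixel - palette[idx]           # vector-valued error
                for dy, dx, wgt in FS_KERNEL:
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        work[yy, xx] += wgt * err    # spread each component
        return out
    ```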

  19. Hyperspherical von Mises-Fisher mixture (HvMF) modelling of high angular resolution diffusion MRI.

    PubMed

    Bhalerao, Abhir; Westin, Carl-Fredrik

    2007-01-01

    A mapping of unit vectors onto a 5D hypersphere is used to model and partition ODFs from HARDI data. This mapping has a number of useful and interesting properties and we make a link to interpretation of the second order spherical harmonic decompositions of HARDI data. The paper presents the working theory and experiments of using a von Mises-Fisher mixture model for directional samples. The MLE of the second moment of the HvMF pdf can also be related to fractional anisotropy. We perform error analysis of the estimation scheme in single and multi-fibre regions and then show how a penalised-likelihood model selection method can be employed to differentiate single and multiple fibre regions.
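
    For reference, the von Mises-Fisher density on the unit (p-1)-sphere is f(x; mu, kappa) = C_p(kappa) exp(kappa mu·x). A small sketch of its log-density, using the exponentially scaled Bessel function for numerical stability; this is generic vMF machinery, not the paper's 5D hypersphere mapping itself.

    ```python
    import numpy as np
    from scipy.special import ive

    def vmf_log_pdf(x, mu, kappa):
        """Log-density of a von Mises-Fisher distribution on the unit
        (p-1)-sphere: f(x) = C_p(kappa) * exp(kappa * mu . x)."""
        p = mu.size
        # log C_p(kappa); ive is the exponentially scaled Bessel function,
        # so log I_v(kappa) = log(ive(v, kappa)) + kappa (stable for large kappa).
        log_c = ((p / 2 - 1) * np.log(kappa)
                 - (p / 2) * np.log(2 * np.pi)
                 - (np.log(ive(p / 2 - 1, kappa)) + kappa))
        return log_c + kappa * float(np.dot(mu, x))

    mu = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # mean direction in 5D
    print(vmf_log_pdf(mu, mu, kappa=10.0))      # density peaks at x = mu
    ```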

  20. Constrained motion estimation-based error resilient coding for HEVC

    NASA Astrophysics Data System (ADS)

    Guo, Weihan; Zhang, Yongfei; Li, Bo

    2018-04-01

    Unreliable communication channels may introduce packet losses and bit errors into the videos transmitted through them, causing severe video quality degradation. This is even worse for HEVC, since more advanced and powerful motion estimation methods are introduced to further remove the inter-frame dependency and thus improve the coding efficiency. Once a motion vector (MV) is lost or corrupted, it will cause distortion in the decoded frame. More importantly, due to motion compensation, the error will propagate along the motion prediction path, accumulate over time, and significantly degrade the overall video presentation quality. To address this problem, we study encoder-side error resilient coding for HEVC and propose a constrained motion estimation scheme to mitigate error propagation to subsequent frames. The approach is achieved by cutting off MV dependencies and limiting the block regions which are predicted by temporal motion vectors. The experimental results show that the proposed method can effectively suppress the error propagation caused by bit errors in motion vectors and can improve the robustness of the stream over bit-error channels. When the bit error probability is 10^-5, an increase in decoded video quality (PSNR) of up to 1.310 dB, and on average 0.762 dB, can be achieved compared to the reference HEVC.

  1. An Error Analysis for the Finite Element Method Applied to Convection Diffusion Problems.

    DTIC Science & Technology

    1981-03-01

    Technical Note BN-962: An Error Analysis for the Finite Element Method Applied to Convection Diffusion Problems, by I. Babuška and W. G. Szymczak, March 1981. Institute for Physical Science and Technology, University of Maryland, College Park.

  2. Sparse Data Representation: The Role of Redundancy in Data Processing

    DTIC Science & Technology

    2005-09-13

    The Error Diffusion Halftoning Algorithm: Some Recent Stability Results and Applications Beyond Halftoning. Dr. Chai Wu, Thomas J. Watson Research ... digital and analog printers use some form of halftoning; just look at any picture in a newspaper or magazine under a magnifying glass. Error diffusion is ... a popular technique for high-quality digital halftoning. The purpose of this talk is to illustrate the versatility of error diffusion with ...

  3. IMPROVEMENT OF SMVGEAR II ON VECTOR AND SCALAR MACHINES THROUGH ABSOLUTE ERROR TOLERANCE CONTROL (R823186)

    EPA Science Inventory

    The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...

  4. Potential for wind extraction from 4D-Var assimilation of aerosols and moisture

    NASA Astrophysics Data System (ADS)

    Zaplotnik, Žiga; Žagar, Nedjeljka

    2017-04-01

    We discuss the potential of four-dimensional variational data assimilation (4D-Var) to retrieve the unobserved wind field from observations of atmospheric tracers and the mass field, through internal model dynamics and the multivariate relationships in the background-error term for 4D-Var. The presence of non-linear moist dynamics makes the wind retrieval from tracers very difficult. On the other hand, it has been shown that moisture observations strongly influence both tropical and mid-latitude wind fields in 4D-Var. We present an intermediate-complexity model that describes nonlinear interactions between the wind, temperature, aerosols and moisture, including their sinks and sources, in the framework of the so-called first baroclinic mode atmosphere envisaged by A. Gill. The aerosol physical processes included in the model are nonlinear advection, diffusion, and sources and sinks in the form of dry and wet deposition. Precipitation is parametrized according to the Betts-Miller scheme. The control vector for 4D-Var includes aerosols, moisture and the three dynamical variables. The former is analysed univariately, whereas the wind field and mass field are analysed in a multivariate fashion taking into account quasi-geostrophic and unbalanced dynamics. OSSE-type studies are performed for the tropical region to assess the ability of 4D-Var to extract wind-field information from the time series of observations of tracers as a function of the flow nonlinearity, the observation density and the length of the assimilation window (12 hours and 24 hours), in dry and moist environments. Results show that the 4D-Var assimilation of aerosol and temperature data is beneficial for the wind analysis, with analysis errors strongly dependent on the moist processes and on reliable background-error covariances.
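
    For context, the strong-constraint 4D-Var cost function being minimized has the standard form below (standard notation, not taken from the abstract): a background term weighted by B, observation terms weighted by R_i, with the model M propagating the control vector through the assimilation window.

    ```latex
    J(\mathbf{x}_0)
      = \tfrac{1}{2}\,(\mathbf{x}_0-\mathbf{x}_b)^{\mathrm T}\mathbf{B}^{-1}(\mathbf{x}_0-\mathbf{x}_b)
      + \tfrac{1}{2}\sum_{i=0}^{N}\bigl[H_i(\mathbf{x}_i)-\mathbf{y}_i\bigr]^{\mathrm T}
        \mathbf{R}_i^{-1}\bigl[H_i(\mathbf{x}_i)-\mathbf{y}_i\bigr],
      \qquad \mathbf{x}_i = M_{0\to i}(\mathbf{x}_0)
    ```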

  5. Design of analytical failure detection using secondary observers

    NASA Technical Reports Server (NTRS)

    Sisar, M.

    1982-01-01

    The problem of designing analytical failure-detection systems (FDS) for sensors and actuators, using observers, is addressed. The use of observers in FDS is related to the examination of the n-dimensional observer error vector, which carries the necessary information on possible failures. The problem is that in practical systems, in which only some of the components of the state vector are measured, one has access only to the m-dimensional observer-output error vector, with m ≤ n. In order to cope with these cases, a secondary observer is synthesized to reconstruct the entire observer error vector from the observer-output error vector. This approach leads toward the design of highly sensitive and reliable FDS, with the possibility of obtaining a unique fingerprint for every possible failure. In order to keep the observer's (or Kalman filter's) false-alarm rate under a certain specified value, it is necessary to have an acceptable match between the observer (or Kalman filter) models and the system parameters. A previously developed adaptive observer algorithm is used to maintain the desired system-observer model matching, despite initial mismatching or system parameter variations. Conditions for convergence of the adaptive process are obtained, leading to a simple adaptive law (algorithm) with the possibility of an a priori choice of fixed adaptive gains. Simulation results show good tracking performance with small observer output errors, while accurate and fast parameter identification, in both deterministic and stochastic cases, is obtained.
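
    A minimal sketch of the primary building block only: a discrete-time Luenberger observer whose output residual r = y - C x_hat serves as the fault indicator. The matrices and the fault scenario are illustrative, not from the paper; the paper's secondary observer, which reconstructs the full n-dimensional error vector from this m-dimensional residual, would be cascaded on top.

    ```python
    import numpy as np

    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.005], [0.1]])
    C = np.array([[1.0, 0.0]])       # only one of n = 2 states is measured
    L = np.array([[0.5], [1.0]])     # gain chosen so that A - L C is stable

    x, x_hat = np.array([0.0, 0.0]), np.array([0.2, -0.1])
    residuals = []
    for k in range(200):
        u = np.array([1.0])
        y = C @ x + (0.5 if k > 100 else 0.0)   # sensor bias fault at k = 101
        r = y - C @ x_hat                        # observer-output error
        residuals.append(abs(float(r[0])))
        x = A @ x + (B @ u).ravel()
        x_hat = A @ x_hat + (B @ u).ravel() + (L @ r).ravel()

    print("max |r| before fault:", max(residuals[50:100]))
    print("max |r| after fault: ", max(residuals[101:130]))  # clear spike
    ```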

  6. Feedback controlled optics with wavefront compensation

    NASA Technical Reports Server (NTRS)

    Breckenridge, William G. (Inventor); Redding, David C. (Inventor)

    1993-01-01

    The sensitivity model of a complex optical system obtained by linear ray tracing is used to compute a control gain matrix by imposing the mathematical condition for minimizing the total wavefront error at the optical system's exit pupil. The most recent deformations or error states of the controlled segments or optical surfaces of the system are then assembled as an error vector, and the error vector is transformed by the control gain matrix to produce the exact control variables which will minimize the total wavefront error at the exit pupil of the optical system. These exact control variables are then applied to the actuators controlling the various optical surfaces in the system causing the immediate reduction in total wavefront error observed at the exit pupil of the optical system.
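
    The gain-matrix construction described here is, in essence, a least-squares inverse of the linear sensitivity model. A sketch under that reading, with illustrative dimensions and random matrices: if the exit-pupil wavefront error responds linearly to actuator commands, w = S u + e, the commands minimizing ||w||^2 are u = -pinv(S) e, so G = -pinv(S) plays the role of the control gain matrix.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    S = rng.normal(size=(300, 12))   # 300 wavefront samples, 12 actuators
    G = -np.linalg.pinv(S)           # control gain matrix (least squares)

    e = rng.normal(size=300)         # assembled error-state contribution
    u = G @ e                        # commands minimizing residual wavefront
    print("rms before:", np.sqrt(np.mean(e ** 2)),
          "after:", np.sqrt(np.mean((S @ u + e) ** 2)))
    ```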

  7. Robust vector quantization for noisy channels

    NASA Technical Reports Server (NTRS)

    Demarca, J. R. B.; Farvardin, N.; Jayant, N. S.; Shoham, Y.

    1988-01-01

    The paper briefly discusses techniques for making vector quantizers more tolerant to transmission errors. Two algorithms are presented for obtaining an efficient binary word assignment to the vector quantizer codewords without increasing the transmission rate. It is shown that about 4.5 dB gain over random assignment can be achieved with these algorithms. It is also proposed to reduce the effects of error propagation in vector-predictive quantizers by appropriately constraining the response of the predictive loop. The constrained system is shown to have about 4 dB of SNR gain over an unconstrained system in a noisy channel, with a small loss of clean-channel performance.

  8. A Reaction-Diffusion Model of Vector-Borne Disease with Periodic Delays

    NASA Astrophysics Data System (ADS)

    Wu, Ruiwen; Zhao, Xiao-Qiang

    2018-06-01

    A vector-borne disease is caused by a range of pathogens and transmitted to hosts through vectors. To investigate the multiple effects of the spatial heterogeneity, the temperature sensitivity of the extrinsic and intrinsic incubation periods, and the seasonality on disease transmission, we propose a nonlocal reaction-diffusion model of vector-borne disease with periodic delays. We introduce the basic reproduction number R_0 for this model and then establish a threshold-type result on its global dynamics in terms of R_0. In the case where all the coefficients are constants, we also prove the global attractivity of the positive constant steady state when R_0 > 1. Numerically, we study the malaria transmission in Maputo Province, Mozambique.

  9. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1994-01-01

    The unequal error protection capabilities of convolutional and trellis codes are studied. In certain environments, a discrepancy in the amount of error protection placed on different information bits is desirable. Examples of environments which have data of varying importance are a number of speech coding algorithms, packet switched networks, multi-user systems, embedded coding systems, and high definition television. Encoders which provide more than one level of error protection to information bits are called unequal error protection (UEP) codes. In this work, the effective free distance vector, d, is defined as an alternative to the free distance as a primary performance parameter for UEP convolutional and trellis encoders. For a given (n, k) convolutional encoder, G, the effective free distance vector is defined as the k-dimensional vector d = (d_0, d_1, ..., d_{k-1}), where d_j, the j-th effective free distance, is the lowest Hamming weight among all code sequences that are generated by input sequences with at least one '1' in the j-th position. It is shown that, although the free distance for a code is unique to the code and independent of the encoder realization, the effective free distance vector is dependent on the encoder realization.

  10. A new method for distortion magnetic field compensation of a geomagnetic vector measurement system

    NASA Astrophysics Data System (ADS)

    Liu, Zhongyan; Pan, Mengchun; Tang, Ying; Zhang, Qi; Geng, Yunling; Wan, Chengbiao; Chen, Dixiang; Tian, Wugang

    2016-12-01

    The geomagnetic vector measurement system mainly consists of a three-axis magnetometer and an INS (inertial navigation system), which have many ferromagnetic parts on them. The magnetometer is always distorted by ferromagnetic parts and other electric equipment, such as the INS and the power circuit module within the system, which can lead to geomagnetic vector measurement errors of thousands of nT. Thus, the geomagnetic vector measurement system has to be compensated in order to guarantee the measurement accuracy. In this paper, a new distortion magnetic field compensation method is proposed, in which a permanent magnet at different relative positions is used to change the ambient magnetic field so as to construct equations for the error model parameters, and the parameters can be accurately estimated by solving linear equations. In order to verify the effectiveness of the proposed method, an experiment is conducted, and the results demonstrate that, after compensation, the component errors of the measured geomagnetic field are reduced significantly. This demonstrates that the proposed method can effectively improve the accuracy of the geomagnetic vector measurement system.
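
    A sketch of the underlying idea with a generic linear distortion model (soft-iron matrix plus hard-iron offset); the paper's specific construction of the equations via permanent-magnet placements is replaced here by synthetic calibration pairs, solved in one ordinary least-squares step.

    ```python
    import numpy as np

    # Distortion model: B_meas = M @ B_true + b (soft-iron M, hard-iron b).
    rng = np.random.default_rng(5)
    M_true = np.eye(3) + 0.05 * rng.normal(size=(3, 3))
    b_true = np.array([120.0, -80.0, 40.0])     # nT-scale offsets

    # K = 40 synthetic calibration placements with known true fields.
    B_true = rng.normal(scale=30000.0, size=(40, 3))
    B_meas = B_true @ M_true.T + b_true + rng.normal(scale=1.0, size=(40, 3))

    # Solve the linear equations for all 12 parameters at once.
    A = np.hstack([B_true, np.ones((40, 1))])   # [B_true | 1] design matrix
    params, *_ = np.linalg.lstsq(A, B_meas, rcond=None)
    M_est, b_est = params[:3].T, params[3]

    # Compensation: invert the fitted model on the measurements.
    B_comp = (B_meas - b_est) @ np.linalg.inv(M_est).T
    print("rms error after compensation:",
          np.sqrt(np.mean((B_comp - B_true) ** 2)))
    ```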

  11. Modeling Morphogenesis with Reaction-Diffusion Equations Using Galerkin Spectral Methods

    DTIC Science & Technology

    2002-05-06

    ... reaction-diffusion equation is a difficult problem in analysis that will not be addressed here. Errors will also arise from numerically approximating solutions to ... the ODEs. When comparing the approximate solution to actual reaction-diffusion systems found in nature, we must also take into account errors that ...

  12. The mean-square error optimal linear discriminant function and its application to incomplete data vectors

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1979-01-01

    In many pattern recognition problems, data vectors are classified although one or more of the data vector elements are missing. This problem occurs in remote sensing when the ground is obscured by clouds. Optimal linear discrimination procedures for classifying incomplete data vectors are discussed.

  13. Peak-locking error reduction by birefringent optical diffusers

    NASA Astrophysics Data System (ADS)

    Kislaya, Ankur; Sciacchitano, Andrea

    2018-02-01

    The use of optical diffusers for the reduction of peak-locking errors in particle image velocimetry is investigated. The working principle of the optical diffusers is based on the concept of birefringence, where the incoming rays are subject to different deflections depending on the light direction and polarization. The performance of the diffusers is assessed via wind tunnel measurements in uniform flow and wall-bounded turbulence. Comparison with best-practice image defocusing is also conducted. It is found that the optical diffusers yield an increase of the particle image diameter of up to 10 µm in the sensor plane. Comparison with reference measurements showed a reduction of both random and systematic errors by a factor of 3, even at low imaging signal-to-noise ratio.

  14. [Discrimination of types of polyacrylamide based on near infrared spectroscopy coupled with least square support vector machine].

    PubMed

    Zhang, Hong-Guang; Yang, Qin-Min; Lu, Jian-Gang

    2014-04-01

    In this paper, a novel discrimination methodology based on near infrared spectroscopic analysis and least square support vector machines is proposed for rapid and nondestructive discrimination of different types of polyacrylamide. The diffuse reflectance spectra of samples of non-ionic, anionic and cationic polyacrylamide were measured. Principal component analysis was then applied to reduce the dimension of the spectral data and extract the principal components. The first three principal components were used for cluster analysis of the three different types of polyacrylamide. Those principal components were also used as inputs of the least square support vector machine model. The optimization of the parameters and of the number of principal components used as inputs of the least square support vector machine model was performed through cross validation based on grid search. 60 samples of each type of polyacrylamide were collected, for a total of 180 samples. 135 samples, 45 for each type of polyacrylamide, were randomly split into a training set to build the calibration model, and the remaining 45 samples were used as a test set to evaluate the performance of the developed model. In addition, 5 cationic polyacrylamide samples and 5 anionic polyacrylamide samples adulterated with different proportions of non-ionic polyacrylamide were also prepared to show the feasibility of the proposed method for discriminating adulterated polyacrylamide samples. The prediction error threshold for each type of polyacrylamide was determined by an F-test of statistical significance based on the prediction error of the training set of the corresponding type of polyacrylamide in cross validation. The discrimination accuracy of the built model was 100% on the test set. The predictions of the model for the 10 mixed samples are also presented, and all mixed samples were accurately discriminated as adulterated samples. The overall results demonstrate that the discrimination method proposed in the present paper can rapidly and nondestructively discriminate the different types of polyacrylamide and the adulterated polyacrylamide samples, and offers a new approach to discriminating the types of polyacrylamide.
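
    The pipeline described (PCA for dimension reduction, an SVM classifier, and grid-searched cross validation) is easy to reproduce in outline. The sketch below uses synthetic spectra and scikit-learn's standard SVC in place of a least square support vector machine, so it mirrors the procedure rather than replicates it.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.svm import SVC

    # Synthetic stand-in for NIR diffuse-reflectance spectra:
    # 180 samples (60 per polyacrylamide type), 700 spectral points.
    rng = np.random.default_rng(6)
    X = np.vstack([rng.normal(loc=m, size=(60, 700)) for m in (0.0, 0.3, 0.6)])
    y = np.repeat([0, 1, 2], 60)    # non-ionic / anionic / cationic labels

    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=135, stratify=y, random_state=0)

    # PCA + SVM tuned together by cross-validated grid search.
    pipe = Pipeline([("pca", PCA()), ("svm", SVC(kernel="rbf"))])
    grid = GridSearchCV(pipe, {"pca__n_components": [3, 5, 10],
                               "svm__C": [1, 10, 100],
                               "svm__gamma": ["scale", 0.01, 0.1]}, cv=5)
    grid.fit(X_tr, y_tr)
    print("test accuracy:", grid.score(X_te, y_te))
    ```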

  15. Self-focusing therapeutic gene delivery with intelligent gene vector swarms: intra-swarm signalling through receptor transgene expression in targeted cells.

    PubMed

    Tolmachov, Oleg E

    2015-01-01

    Gene delivery in vivo that is tightly focused on the intended target cells is essential to maximize the benefits of gene therapy and to reduce unwanted side-effects. Cell surface markers are immediately available for probing by therapeutic gene vectors and are often used to direct gene transfer with these vectors to specific target cell populations. However, it is not unusual for the choice of available extra-cellular markers to be too scarce to provide a reliable definition of the desired therapeutically relevant set of target cells. Therefore, interrogation of intra-cellular determinants of cell-specificity, such as tissue-specific transcription factors, can be vital in order to provide detailed cell-guiding information to gene vector particles. An important improvement in cell-specific gene delivery can be achieved through auto-buildup in vector homing efficiency using intelligent 'self-focusing' of swarms of vector particles on target cells. Vector self-focusing was previously suggested to rely on the release of diffusible chemo-attractants after a successful target-specific hit by 'scout' vector particles. I hypothesize that intelligent self-focusing behaviour of swarms of cell-targeted therapeutic gene vectors can be accomplished without the employment of difficult-to-use diffusible chemo-attractants, instead relying on the intra-swarm signalling through cells expressing a non-diffusible extra-cellular receptor for the gene vectors. In the proposed model, cell-guiding information is gathered by the 'scout' gene vector particles, which: (1) attach to a variety of cells via a weakly binding (low affinity) receptor; (2) successfully facilitate gene transfer into these cells; (3) query intra-cellular determinants of cell-specificity with their transgene expression control elements and (4) direct the cell-specific biosynthesis of a vector-encoded strongly binding (high affinity) cell-surface receptor. Free members of the vector swarm loaded with therapeutic cargo are then attracted to and internalized into the intended target cells via the expressed cognate strongly binding extra-cellular receptor, causing escalation of gene transfer into these cells and increasing the copy number of the therapeutic gene expression modules. Such self-focusing swarms of gene vectors can be either homogeneous, with 'scout' and 'therapeutic' members of the swarm being structurally identical, or, alternatively, heterogeneous (split), with 'scout' and 'therapeutic' members of the swarm being structurally specialized. It is hoped that the proposed self-focusing cell-targeted gene vector swarms with receptor-mediated intra-swarm signalling could be particularly effective in 'top-up' gene delivery scenarios, achieving high-level and sustained expression of therapeutic transgenes that are prone to shut-down through degradation and silencing. Crucially, in contrast to low-precision 'general location' vector guidance by diffusible chemo-attractants, ear-marking non-diffusible receptors can provide high-accuracy targeting of therapeutic vector particles to the specific cell, which has undergone a 'successful cell-specific hit' by a 'scout' vector particle. Opportunities for cell targeting could be expanded, since in the proposed model of self-focusing it could be possible to probe a broad selection of intra-cellular determinants of cell-specificity and not just to rely exclusively on extra-cellular markers of cell-specificity. 
By employing such self-focusing gene vectors to improve cell-targeted delivery of therapeutic genes, e.g., in cancer therapy or gene addition therapy of recessive genetic diseases, it could be possible to gain leeway for reducing the vector load and, consequently, to minimize undesired vector cytotoxicity, immune reactions, and the risk of inadvertent genetic modification of germline cells in genetic treatment in vivo. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Numerical stability of the error diffusion concept

    NASA Astrophysics Data System (ADS)

    Weissbach, Severin; Wyrowski, Frank

    1992-10-01

    The error diffusion algorithm is an easily implementable means of handling nonlinearities in signal processing, e.g. in picture binarization and in the coding of diffractive elements. The numerical stability of the algorithm depends on the choice of the diffusion weights. A criterion for the stability of the algorithm is presented and evaluated for some examples.

  17. Prediction of stream volatilization coefficients

    USGS Publications Warehouse

    Rathbun, Ronald E.

    1990-01-01

    Equations are developed for predicting the liquid-film and gas-film reference-substance parameters for quantifying volatilization of organic solutes from streams. Molecular weight and molecular-diffusion coefficients of the solute are used as correlating parameters. Equations for predicting molecular-diffusion coefficients of organic solutes in water and air are developed, with molecular weight and molal volume as parameters. Mean absolute errors of prediction for diffusion coefficients in water are 9.97% for the molecular-weight equation, 6.45% for the molal-volume equation. The mean absolute error for the diffusion coefficient in air is 5.79% for the molal-volume equation. Molecular weight is not a satisfactory correlating parameter for diffusion in air because two equations are necessary to describe the values in the data set. The best predictive equation for the liquid-film reference-substance parameter has a mean absolute error of 5.74%, with molal volume as the correlating parameter. The best equation for the gas-film parameter has a mean absolute error of 7.80%, with molecular weight as the correlating parameter.

  18. Error diffusion concept for multi-level quantization

    NASA Astrophysics Data System (ADS)

    Broja, Manfred; Michalowski, Kristina; Bryngdahl, Olof

    1990-11-01

    The error diffusion binarization procedure is adapted to multi-level quantization. The threshold parameters then available have a noticeable influence on the process. Characteristic features of the technique are shown together with experimental results.
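
    A minimal 1-D version of multi-level error diffusion, with the quantizer thresholds implicitly midway between adjacent output levels (the threshold placement whose influence the abstract discusses); the kernel here simply carries each sample's error to the next sample.

    ```python
    import numpy as np

    def error_diffusion_multilevel(x, levels):
        """1-D error diffusion onto an arbitrary set of output levels.
        Each sample's quantization error is diffused to the next sample;
        thresholds sit implicitly midway between adjacent levels."""
        levels = np.asarray(levels, dtype=float)
        y = np.empty(len(x))
        carry = 0.0
        for i, xi in enumerate(x):
            val = xi + carry
            y[i] = levels[np.argmin(np.abs(levels - val))]  # nearest level
            carry = val - y[i]                              # error forward
        return y

    ramp = np.linspace(0.0, 1.0, 32)
    print(error_diffusion_multilevel(ramp, [0.0, 0.5, 1.0]))
    ```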

  19. An improved procedure for determining grain boundary diffusion coefficients from averaged concentration profiles

    NASA Astrophysics Data System (ADS)

    Gryaznov, D.; Fleig, J.; Maier, J.

    2008-03-01

    Whipple's solution of the problem of grain boundary diffusion and Le Claire's relation, which is often used to determine grain boundary diffusion coefficients, are examined for a broad range of ratios of grain boundary to bulk diffusivities Δ and diffusion times t. Different sources of error in determining the grain boundary diffusivity (D_GB) when using Le Claire's relation are discussed. It is shown that nonlinearities of the diffusion profiles in ln(C_av) versus y^{6/5} plots and deviations from "Le Claire's constant" (-0.78) are the major error sources (C_av = averaged concentration, y = coordinate in the diffusion direction). An improved relation (replacing Le Claire's constant) is suggested for analyzing diffusion profiles, particularly suited to small diffusion lengths (short times), as often required in diffusion experiments on nanocrystalline materials.
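
    For reference, Le Claire's relation as commonly written in the grain-boundary diffusion literature is shown below (a standard form, not quoted from this paper); the constant -0.78 mentioned in the abstract is the dimensionless slope from Whipple's solution that fixes the numerical prefactor. Here s is the segregation factor, δ the boundary width, D the bulk diffusivity, and t the diffusion time.

    ```latex
    s\,\delta\,D_{\mathrm{GB}}
      = 1.322\,\sqrt{\frac{D}{t}}\,
        \left(-\,\frac{\partial \ln \bar{C}}{\partial y^{6/5}}\right)^{-5/3}
    ```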

  20. Are Bred Vectors The Same As Lyapunov Vectors?

    NASA Astrophysics Data System (ADS)

    Kalnay, E.; Corazza, M.; Cai, M.

    Regional loss of predictability is an indication of the instability of the underlying flow, where small errors in the initial conditions (or imperfections in the model) grow to large amplitudes in finite times. The stability properties of evolving flows have been studied using Lyapunov vectors (e.g., Alligood et al, 1996, Ott, 1993, Kalnay, 2002), singular vectors (e.g., Lorenz, 1965, Farrell, 1988, Molteni and Palmer, 1993), and, more recently, with bred vectors (e.g., Szunyogh et al, 1997, Cai et al, 2001). Bred vectors (BVs) are, by construction, closely related to Lyapunov vectors (LVs). In fact, after an infinitely long breeding time, and with the use of infinitesimal amplitudes, bred vectors are identical to leading Lyapunov vectors. In practical applications, however, bred vectors are different from Lyapunov vectors in two important ways: a) bred vectors are never globally orthogonalized and are intrinsically local in space and time, and b) they are finite-amplitude, finite-time vectors. These two differences are very significant in a dynamical system whose size is very large. For example, the atmosphere is large enough to have "room" for several synoptic scale instabilities (e.g., storms) to develop independently in different regions (say, North America and Australia), and it is complex enough to have several different possible types of instabilities (such as barotropic, baroclinic, convective, and even Brownian motion). Bred vectors share some of their properties with leading LVs (Corazza et al, 2001a, 2001b, Toth and Kalnay, 1993, 1997, Cai et al, 2001). For example, 1) Bred vectors are independent of the norm used to define the size of the perturbation. Corazza et al. (2001) showed that bred vectors obtained using a potential enstrophy norm were indistinguishable from bred vectors obtained using a streamfunction squared norm, in contrast with singular vectors. 2) Bred vectors are independent of the length of the rescaling period as long as the perturbations remain approximately linear (for example, for atmospheric models the interval for rescaling could be varied between a single time step and 1 day without affecting qualitatively the characteristics of the bred vectors). However, the finite amplitude, finite time, and lack of orthogonalization of the BVs introduce important differences with LVs: 1) In regions that undergo strong instabilities, the bred vectors tend to be locally dominated by simple, low-dimensional structures. Patil et al (2001) showed that the BV-dim (appendix) gives a good estimate of the number of dominant directions (shapes) of the local k bred vectors. For example, if half of them are aligned in one direction, and half in a different direction, the BV-dim is about two. If the majority of the bred vectors are aligned predominantly in one direction and only a few are aligned in a second direction, then the BV-dim is between 1 and 2. Patil et al. (2001) showed that the regions with low dimensionality cover about 20% of the atmosphere. They also found that these low-dimensionality regions have a very well defined vertical structure, and a typical lifetime of 3-7 days. The low dimensionality identifies regions where the instability of the basic flow has manifested itself in a low number of preferred directions of perturbation growth.
2) Using a quasi-geostrophic simulation system of data assimilation developed by Morss (1999), Corazza et al (2001a, b) found that bred vectors have structures that closely resemble the background (short forecasts used as first guess) errors, which in turn dominate the local analysis errors. This is especially true in regions of low dimensionality, which is not surprising if these are unstable regions where errors grow in preferred shapes. 3) The number of bred vectors needed to represent the unstable subspace in the QG system is small (about 6-10). This was shown by computing the local BV-dim as a function of the number of independent bred vectors. Convergence in the local dimension starts to occur at about 6 BVs, and is essentially complete when the number of vectors is about 10-15 (Corazza et al, 2001a). This should be contrasted with the results of Snyder and Joly (1998) and Palmer et al (1998), who showed that hundreds of Lyapunov vectors with positive Lyapunov exponents are needed to represent the attractor of the system in quasi-geostrophic models. 4) Since only a few bred vectors are needed, and background errors project strongly on the subspace of bred vectors, Corazza et al (2001b) were able to develop cost-efficient methods to improve the 3D-Var data assimilation by adding to the background error covariance terms proportional to the outer product of the bred vectors, thus representing the "errors of the day". This approach led to a reduction of analysis error variance of about 40% at very low cost. 5) The fact that BVs have finite amplitude provides a natural way to filter out instabilities present in the system that have fast growth, but saturate nonlinearly at such small amplitudes that they are irrelevant for ensemble perturbations. As shown by Lorenz (1996), Lyapunov vectors (and singular vectors) of models including these physical phenomena would be dominated by the fast but small amplitude instabilities, unless they are explicitly excluded from the linearized models. Bred vectors, on the other hand, through the choice of an appropriate size for the perturbation, provide a natural filter based on nonlinear saturation of fast but irrelevant instabilities. 6) Every bred vector is qualitatively similar to the *leading* LV. LVs beyond the leading LV are obtained by orthogonalization after each time step with respect to the previous LVs' subspace. The orthogonalization requires the introduction of a norm. With an enstrophy norm, the successive LVs have larger and larger horizontal scales, and a choice of a streamfunction norm would lead to successively smaller scales in the LVs. Beyond the first few LVs, there is little qualitative similarity between the background errors and the LVs. In summary, in a system like the atmosphere with enough physical space for several independent local instabilities, BVs and LVs share some properties but they also have significant differences. BVs are finite-amplitude, finite-time, and because they are not globally orthogonalized, they have local properties in space. Bred vectors are akin to the leading LV, but bred vectors derived from different arbitrary initial perturbations remain distinct from each other, instead of collapsing into a single leading vector, presumably because the nonlinear terms and physical parameterizations introduce sufficient stochastic forcing to avoid such convergence.
As a result, there is no need for global orthogonalization, and the number of bred vectors required to describe the natural instabilities in an atmospheric system (from a local point of view) is much smaller than the number of Lyapunov vectors with positive Lyapunov exponents. The BVs are independent of the norm, whereas the LVs beyond the first one do depend on the choice of norm: for example, they become larger in scale with a vorticity norm, and smaller with a streamfunction norm. These properties of BVs result in significant advantages for data assimilation and ensemble forecasting for the atmosphere. Errors in the analysis have structures very similar to bred vectors, and it is found that they project very strongly on the subspace of a few bred vectors. This is not true for either Lyapunov vectors beyond the leading LVs, or for singular vectors unless they are constructed with a norm based on the analysis error covariance matrix (or a bred vector covariance). The similarity between bred vectors and analysis errors leads to the ability to include "errors of the day" in the background error covariance and a significant improvement of the analysis beyond 3D-Var at a very low cost (Corazza, 2001b). References: Alligood, K. T., T. D. Sauer and J. A. Yorke, 1996: Chaos: An Introduction to Dynamical Systems. Springer-Verlag, New York. Buizza, R., J. Tribbia, F. Molteni and T. Palmer, 1993: Computation of optimal unstable structures for numerical weather prediction models. Tellus, 45A, 388-407. Cai, M., E. Kalnay and Z. Toth, 2001: Potential impact of bred vectors on ensemble forecasting and data assimilation in the Zebiak-Cane model. Submitted to J. of Climate. Corazza, M., E. Kalnay, D. J. Patil, R. Morss, M. Cai, I. Szunyogh, B. R. Hunt, E. Ott and J. Yorke, 2001: Use of the breeding technique to determine the structure of the "errors of the day". Submitted to Nonlinear Processes in Geophysics. Corazza, M., E. Kalnay, D. J. Patil, E. Ott, J. Yorke, I. Szunyogh and M. Cai, 2001: Use of the breeding technique in the estimation of the background error covariance matrix for a quasigeostrophic model. AMS Symposium on Observations, Data Assimilation and Predictability, Preprints volume, Orlando, FL, 14-17 January 2002. Farrell, B., 1988: Small error dynamics and the predictability of atmospheric flow. J. Atmos. Sciences, 45, 163-172. Kalnay, E., 2002: Atmospheric Modeling, Data Assimilation and Predictability. Chapter 6. Cambridge University Press, UK. In press. Kalnay, E. and Z. Toth, 1994: Removing growing errors in the analysis. Preprints, Tenth Conference on Numerical Weather Prediction, pp 212-215. Amer. Meteor. Soc., July 18-22, 1994. Lorenz, E. N., 1965: A study of the predictability of a 28-variable atmospheric model. Tellus, 21, 289-307. Lorenz, E. N., 1996: Predictability - a problem partly solved. Proceedings of the ECMWF Seminar on Predictability, Reading, England, Vol. 1, 1-18. Molteni, F. and T. N. Palmer, 1993: Predictability and finite-time instability of the northern winter circulation. Q. J. Roy. Meteorol. Soc., 119, 269-298. Morss, R. E., 1999: Adaptive observations: Idealized sampling strategies for improving numerical weather prediction. Ph.D. Thesis, Massachusetts Institute of Technology, 225pp. Ott, E., 1993: Chaos in Dynamical Systems. Cambridge University Press, New York. Palmer, T. N., R. Gelaro, J. Barkmeijer and R. Buizza, 1998: Singular vectors, metrics and adaptive observations. J. Atmos. Sciences, 55, 633-653. Patil, D. J., B. R. Hunt, E. Kalnay, J. A. Yorke, and E.
Ott, 2001: Local low dimensionality of atmospheric dynamics. Phys. Rev. Lett., 86, 5878. Patil, DJ, I. Szunyogh, BR Hunt, E Kalnay, E Ott, and J. Yorke, 2001: Using large 4 member ensembles to isolate local low dimensionality of atmospheric dynamics. AMS Symposium on Observations, Data Assimilation and Predictability, Preprints volume, Orlando, FA, 14-17 January 2002. Snyder, C. and A. Joly, 1998: Development of perturbations within growing baroclinic waves. Q. J. Roy. Meteor. Soc., 124, pp 1961. Szunyogh, I, E. Kalnay and Z. Toth, 1997: A comparison of Lyapunov and Singular vectors in a low resolution GCM. Tellus, 49A, 200-227. Toth, Z and E Kalnay 1993: Ensemble forecasting at NMC - the generation of pertur- bations. Bull. Amer. Meteorol. Soc., 74, 2317-2330. Toth, Z and E Kalnay 1997: Ensemble forecasting at NCEP and the breeding method. Mon Wea Rev, 125, 3297-3319. * Corresponding author address: Eugenia Kalnay, Meteorology Depart- ment, University of Maryland, College Park, MD 20742-2425, USA; email: ekalnay@atmos.umd.edu Appendix: BV-dimension Patil et al., (2001) defined local bred vectors around a point in the 3-dimensional grid of the model by taking the 24 closest horizontal neighbors. If there are k bred vectors available, and N model variables for each grid point, the k local bred vectors form the columns of a 25Nxk matrix B. The kxk covariance matrix is C=B^T B. Its eigen- values are positive, and its eigenvectors v(i) are the singular vectors of the local bred vector subspace. The Bred Vector dimension (BV-dim) measures the local effective dimension: BV-dim[s,s,...,s(k)]={SUM[s(i)]}^2/SUM[s(i)]^2 where s(i) are the square roots of the eigenvalues of the covariance matrix. 5
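As a concrete reading of the appendix formula, here is a minimal numpy sketch (a hypothetical illustration, not code from the paper) that computes the BV-dimension from the singular values of the local bred-vector matrix B:

```python
import numpy as np

def bv_dimension(B):
    """BV-dimension of local bred vectors stored as the columns of B.

    B : (25*N, k) array; column j holds bred vector j restricted to a grid
        point and its 24 closest horizontal neighbors (N variables each).
    Returns {SUM s(i)}^2 / SUM s(i)^2, where the s(i) are the singular
    values of B, i.e. the square roots of the eigenvalues of C = B^T B.
    """
    s = np.linalg.svd(B, compute_uv=False)
    return s.sum() ** 2 / np.sum(s ** 2)

rng = np.random.default_rng(0)
v = rng.standard_normal((25, 1))
print(bv_dimension(np.hstack([v, v, v])))          # identical columns -> ~1
print(bv_dimension(rng.standard_normal((25, 3))))  # independent columns -> ~3
```

The two test cases bracket the behavior described in the text: redundant bred vectors give a local dimension near one, while truly independent ones give a dimension near k.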

  1. Microscopic diffusion and hydrodynamic interactions of hemoglobin in red blood cells.

    PubMed

    Doster, Wolfgang; Longeville, Stéphane

    2007-08-15

    The cytoplasm of red blood cells is congested with the oxygen storage protein hemoglobin, which occupies a quarter of the cell volume. The high protein concentration leads to a reduced mobility; the self-diffusion coefficient of hemoglobin in blood cells is six times lower than in dilute solution. This effect is generally assigned to excluded volume effects in crowded media. However, the collective or gradient diffusion coefficient of hemoglobin is only weakly dependent on concentration, suggesting the compensation of osmotic and friction forces. This would exclude hydrodynamic interactions, which are of dynamic origin and do not contribute to the osmotic pressure. Hydrodynamic coupling between protein molecules is dominant at short time and length scales, before direct interactions are fully established. Employing neutron spin-echo spectroscopy, we study hemoglobin diffusion on a nanosecond timescale and protein displacements on the scale of a few nanometers. A time- and wave-vector-dependent diffusion coefficient is found, suggesting the crossover of self- and collective diffusion. Moreover, a wave-vector-dependent friction function is derived, which is a characteristic feature of hydrodynamic interactions. The wave-vector and concentration dependence of the long-time self-diffusion coefficient of hemoglobin agree qualitatively with theoretical results on hydrodynamics in hard-sphere suspensions. Quantitative agreement requires us to adjust the volume fraction by including part of the hydration shell: proteins exhibit a larger surface/volume ratio compared with standard colloids of much larger size. It is concluded that hydrodynamic, and not direct, interactions dominate long-range molecular transport at high concentration.

  2. Estimation of chaotic coupled map lattices using symbolic vector dynamics

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Pei, Wenjiang; Cheung, Yiu-ming; Shen, Yi; He, Zhenya

    2010-01-01

    In [K. Wang, W.J. Pei, Z.Y. He, Y.M. Cheung, Phys. Lett. A 367 (2007) 316], an original method based on symbolic vector dynamics was proposed for initial condition estimation in an additive white Gaussian noise environment. The precision of that method is determined by the symbolic errors of the symbolic vector sequence obtained by symbolizing the received signal. This Letter further develops the symbolic vector dynamical estimation method: symbolic errors are corrected using the backward vector and the values estimated under different symbols, so that the estimation precision can be improved. Both theoretical and experimental results show that this algorithm enables us to recover the initial condition of a coupled map lattice exactly in both noisy and noise-free cases. We thereby provide novel analytical techniques for understanding turbulence in coupled map lattices.
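As background for the system being estimated, the following sketch simulates a diffusively coupled logistic map lattice and applies a crude threshold symbolization to a noisy observation; the lattice size, coupling strength, map parameter, noise level, and threshold are illustrative assumptions, and the Letter's actual correction algorithm is not reproduced:

```python
import numpy as np

def cml_step(x, eps=0.3, a=3.9):
    """One step of a diffusively coupled logistic map lattice (periodic)."""
    f = a * x * (1 - x)                 # local logistic dynamics
    return (1 - eps) * f + 0.5 * eps * (np.roll(f, 1) + np.roll(f, -1))

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 0.9, size=32)      # the initial condition to be estimated
traj = np.empty((100, 32))
for n in range(100):
    x = cml_step(x)
    traj[n] = x

observed = traj + 0.01 * rng.standard_normal(traj.shape)  # AWGN channel
symbols = (observed > 0.5).astype(int)  # crude binary symbolization
print(symbols.shape)                    # (time steps, lattice sites)
```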

  3. Worldwide Ocean Optics Database (WOOD)

    DTIC Science & Technology

    2002-09-30

    The database contains optical properties, including diffuse attenuation, beam attenuation, and scattering. Values computed from empirical algorithms (e.g., beam attenuation estimated from diffuse attenuation and backscatter data) are also available, and error estimates are provided for the computed results. Data from ONR-funded bio-optical cruises will be given priority for loading.

  4. Worldwide Ocean Optics Database (WOOD)

    DTIC Science & Technology

    2001-09-30

    The user can obtain values computed from empirical algorithms (e.g., beam attenuation estimated from diffuse attenuation and backscatter data), with error estimates also provided for the computed results. The database covers optical properties, including diffuse attenuation, beam attenuation, and scattering, and shall be easy to use, Internet accessible, and frequently updated.

  5. Derivation of formulas for root-mean-square errors in location, orientation, and shape in triangulation solution of an elongated object in space

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1974-01-01

    Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.

  6. Errors induced by the neglect of polarization in radiance calculations for Rayleigh-scattering atmospheres

    NASA Technical Reports Server (NTRS)

    Mishchenko, M. I.; Lacis, A. A.; Travis, L. D.

    1994-01-01

    Although neglecting polarization and replacing the rigorous vector radiative transfer equation by its approximate scalar counterpart has no physical background, it is a widely used simplification when the incident light is unpolarized and only the intensity of the reflected light is to be computed. We employ accurate vector and scalar multiple-scattering calculations to perform a systematic study of the errors induced by the neglect of polarization in radiance calculations for a homogeneous, plane-parallel Rayleigh-scattering atmosphere (with and without depolarization) above a Lambertian surface. Specifically, we calculate percent errors in the reflected intensity for various directions of light incidence and reflection, optical thicknesses of the atmosphere, single-scattering albedos, depolarization factors, and surface albedos. The numerical data displayed can be used to decide whether or not the scalar approximation may be employed, depending on the parameters of the problem. We show that the errors decrease with increasing depolarization factor and/or increasing surface albedo. For conservative or nearly conservative scattering and small surface albedos, the errors are maximum at optical thicknesses of about 1. The calculated errors may be too large for some practical applications, and, therefore, rigorous vector calculations should be employed whenever possible. However, if approximate scalar calculations are used, we recommend avoiding geometries involving phase angles equal or close to 0 deg and 90 deg, where the errors are especially significant. We propose a theoretical explanation of the large vector/scalar differences in the case of Rayleigh scattering. According to this explanation, the differences are caused by the particular structure of the Rayleigh scattering matrix and come from lower-order (except first-order) light scattering paths involving right scattering angles and right-angle rotations of the scattering plane.

  7. Numerical flux formulas for the Euler and Navier-Stokes equations. 2: Progress in flux-vector splitting

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Vanleer, Bram

    1991-01-01

    The accuracy of various numerical flux functions for the inviscid fluxes when used for Navier-Stokes computations is studied. The flux functions are benchmarked for solutions of the viscous, hypersonic flow past a 10 degree cone at zero angle of attack using first order, upwind spatial differencing. The Harten-Lax/Roe flux is found to give a good boundary layer representation, although its robustness is an issue. Some hybrid flux formulas, where the concepts of flux-vector and flux-difference splitting are combined, are shown to give unsatisfactory pressure distributions; there is still room for improvement. Investigations of low diffusion, pure flux-vector splittings indicate that a pure flux-vector splitting can be developed that eliminates spurious diffusion across the boundary layer. The resulting first-order scheme is marginally stable and not monotone.

  8. Test of Understanding of Vectors: A Reliable Multiple-Choice Vector Concept Test

    ERIC Educational Resources Information Center

    Barniol, Pablo; Zavala, Genaro

    2014-01-01

    In this article we discuss the findings of our research on students' understanding of vector concepts in problems without physical context. First, we develop a complete taxonomy of the most frequent errors made by university students when learning vector concepts. This study is based on the results of several test administrations of open-ended…

  9. Wind data mining by Kohonen Neural Networks.

    PubMed

    Fayos, José; Fayos, Carolina

    2007-02-14

    Time series of Circulation Weather Type (CWT), including daily averaged wind direction and vorticity, are self-classified by similarity using Kohonen Neural Networks (KNN). It is shown that KNN is able to map by similarity all 7300 five-day CWT sequences during the period 1975-94 in London, United Kingdom. It gives, as a first result, the most probable wind sequences preceding each one of the 27 CWT Lamb classes in that period. Inversely, as a second result, the observed diffuse correlation between the five-day CWT sequences and the CWT of the 6th day in the long 20-year period can be generalized to predict the latter from the previous CWT sequence in a different test period, like 1995, as both time series are similar. Although the average prediction error is comparable to that obtained by standard forecasting methods, the KNN approach gives complementary results, as they depend only on an objective classification of observed CWT data, without any model assumption. The 27 CWT of the Lamb Catalogue were coded with binary three-dimensional vectors, pointing to the faces, edges and vertices of a "wind-cube," so that similar CWT vectors were close.
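For orientation, here is a from-scratch sketch of the Kohonen self-organizing map at the core of such a study; the grid size and the learning-rate and neighborhood schedules are illustrative assumptions, and the CWT coding (three-dimensional binary vectors per day, concatenated over a five-day window) is only mimicked by random data:

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Kohonen self-organizing map trained on the row vectors of data."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    w = rng.standard_normal((gx, gy, data.shape[1]))
    ii, jj = np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij")
    n_steps, t = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1 - t / n_steps)              # decaying learning rate
            sigma = 0.5 + sigma0 * (1 - t / n_steps)  # shrinking neighborhood
            d = np.linalg.norm(w - x, axis=2)         # distance to every unit
            bi, bj = np.unravel_index(np.argmin(d), d.shape)  # best match
            h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
            w += lr * h[..., None] * (x - w)          # pull neighborhood toward x
            t += 1
    return w

# Stand-in for five-day CWT sequences: 15-dimensional binary vectors.
rng = np.random.default_rng(1)
sequences = rng.integers(0, 2, size=(500, 15)).astype(float)
weights = train_som(sequences)
print(weights.shape)   # (6, 6, 15): one prototype sequence per map unit
```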

  10. Currency crisis indication by using ensembles of support vector machine classifiers

    NASA Astrophysics Data System (ADS)

    Ramli, Nor Azuana; Ismail, Mohd Tahir; Wooi, Hooy Chee

    2014-07-01

    There are many methods that have been experimented with in the analysis of currency crises. However, not all methods provide accurate indications. This paper introduces an ensemble of classifiers based on the Support Vector Machine, which had not previously been applied to currency crisis analysis, with the aim of increasing the indication accuracy. The proposed ensemble classifiers' performances are measured using percentage of accuracy, root mean squared error (RMSE), area under the Receiver Operating Characteristic (ROC) curve, and Type II error. The performance of the ensemble of Support Vector Machine classifiers is compared with that of a single Support Vector Machine classifier, and both classifiers are tested on a data set from 27 countries with 12 macroeconomic indicators for each country. From our analyses, the results show that the ensemble of Support Vector Machine classifiers outperforms the single Support Vector Machine classifier on the problem of indicating a currency crisis, in terms of a range of standard measures for comparing the performance of classifiers.
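A minimal scikit-learn sketch of the single-versus-ensemble comparison; synthetic data stands in for the 27-country panel of 12 macroeconomic indicators, and the kernel, ensemble size, and class imbalance are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the crisis data set (12 indicators, rare crises).
X, y = make_classification(n_samples=600, n_features=12, weights=[0.85],
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

single = SVC(kernel="rbf", probability=True, random_state=0).fit(Xtr, ytr)
ensemble = BaggingClassifier(SVC(kernel="rbf", probability=True),
                             n_estimators=25, random_state=0).fit(Xtr, ytr)

for name, clf in [("single SVM", single), ("SVM ensemble", ensemble)]:
    auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
    print(f"{name}: accuracy={clf.score(Xte, yte):.3f}, AUC={auc:.3f}")
```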

  11. A manufacturing error measurement methodology for a rotary vector reducer cycloidal gear based on a gear measuring center

    NASA Astrophysics Data System (ADS)

    Li, Tianxing; Zhou, Junxiang; Deng, Xiaozhong; Li, Jubo; Xing, Chunrong; Su, Jianxin; Wang, Huiliang

    2018-07-01

    A manufacturing error of a cycloidal gear is the key factor affecting the transmission accuracy of a robot rotary vector (RV) reducer. A methodology is proposed to realize the digitized measurement and data processing of the cycloidal gear manufacturing error based on the gear measuring center, which can quickly and accurately measure and evaluate the manufacturing error of the cycloidal gear by using both the whole tooth profile measurement and a single tooth profile measurement. By analyzing the particularity of the cycloidal profile and its effect on the actual meshing characteristics of the RV transmission, the cycloid profile measurement strategy is planned, and the theoretical profile model and error measurement model of cycloid-pin gear transmission are established. Through the digital processing technology, the theoretical trajectory of the probe and the normal vector of the measured point are calculated. By means of precision measurement principle and error compensation theory, a mathematical model for the accurate calculation and data processing of manufacturing error is constructed, and the actual manufacturing error of the cycloidal gear is obtained by the optimization iterative solution. Finally, the measurement experiment of the cycloidal gear tooth profile is carried out on the gear measuring center and the HEXAGON coordinate measuring machine, respectively. The measurement results verify the correctness and validity of the measurement theory and method. This methodology will provide the basis for the accurate evaluation and the effective control of manufacturing precision of the cycloidal gear in a robot RV reducer.

  12. Computational model of a vector-mediated epidemic

    NASA Astrophysics Data System (ADS)

    Dickman, Adriana Gomes; Dickman, Ronald

    2015-05-01

    We discuss a lattice model of vector-mediated transmission of a disease to illustrate how simulations can be applied in epidemiology. The population consists of two species, human hosts and vectors, which contract the disease from one another. Hosts are sedentary, while vectors (mosquitoes) diffuse in space. Examples of such diseases are malaria, dengue fever, and Pierce's disease in vineyards. The model exhibits a phase transition between an absorbing (infection free) phase and an active one as parameters such as infection rates and vector density are varied.
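A minimal sketch of such a host/vector lattice model; the lattice size, vector density, and the transmission and recovery probabilities are illustrative assumptions rather than the authors' parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
L, n_vec, steps = 50, 600, 400
p_hv, p_vh, p_rec = 0.5, 0.5, 0.05     # infection/recovery probabilities

hosts = np.zeros((L, L), dtype=bool)            # sedentary host infection map
hosts[L // 2, L // 2] = True                    # one initially infected host
vx = rng.integers(0, L, size=(n_vec, 2))        # vector (mosquito) positions
v_inf = np.zeros(n_vec, dtype=bool)             # vector infection states

for _ in range(steps):
    # vectors diffuse: random unit hop per axis, periodic boundaries
    vx = (vx + rng.choice([-1, 0, 1], size=vx.shape)) % L
    on_infected = hosts[vx[:, 0], vx[:, 1]]
    # host -> vector transmission, then vector -> host transmission
    v_inf |= on_infected & (rng.random(n_vec) < p_hv)
    hit = v_inf & (rng.random(n_vec) < p_vh)
    hosts[vx[hit, 0], vx[hit, 1]] = True
    # recovery; if all infection dies out, the absorbing phase is reached
    hosts &= rng.random((L, L)) >= p_rec
    v_inf &= rng.random(n_vec) >= p_rec

print("infected hosts:", hosts.sum(), "infected vectors:", v_inf.sum())
```

Sweeping p_hv, p_vh, or the vector density and recording whether the infection survives locates the phase transition between the absorbing and active phases described in the abstract.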

  13. Predicting areas of sustainable error growth in quasigeostrophic flows using perturbation alignment properties

    NASA Astrophysics Data System (ADS)

    Rivière, G.; Hua, B. L.

    2004-10-01

    A new perturbation initialization method is used to quantify error growth due to inaccuracies of the forecast model initial conditions in a quasigeostrophic box ocean model describing a wind-driven double gyre circulation. This method is based on recent analytical results on the Lagrangian alignment dynamics of the perturbation velocity vector in quasigeostrophic flows. More specifically, it consists in initializing a unique perturbation from the sole knowledge of the control flow properties at the initial time of the forecast, with a velocity vector orientation that satisfies a Lagrangian equilibrium criterion. This Alignment-based Initialization method is hereafter denoted the AI method.

    In terms of the spatial distribution of the errors, the AI error forecast compares favorably with the mean error obtained with a Monte-Carlo ensemble prediction. It is shown that the AI forecast is on average as efficient as the error forecast initialized with the leading singular vector for the palinstrophy norm, and significantly more efficient than those for the total energy and enstrophy norms. Furthermore, a more precise examination shows that the AI forecast is systematically relevant for all control flows, whereas the palinstrophy singular vector forecast sometimes leads to very good scores and sometimes to very bad ones.

    A principal component analysis at the final time of the forecast shows that the AI mode spatial structure is comparable to that of the first eigenvector of the error covariance matrix for a "bred mode" ensemble. Furthermore, the kinetic energy of the AI mode grows at the same constant rate as that of the "bred modes" from the initial time to the final time of the forecast and is therefore characterized by a sustained phase of error growth. In this sense, the AI mode based on the Lagrangian dynamics of the perturbation velocity orientation provides a rationale for the "bred mode" behavior.

  14. New Syndrome Decoding Techniques for the (n, K) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3,1) CC.

  15. Simplified Syndrome Decoding of (n, 1) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC), different from and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck, is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsystematic (2,1) CC.

  16. Improvement of gray-scale representation of horizontally scanning holographic display using error diffusion.

    PubMed

    Matsumoto, Yuji; Takaki, Yasuhiro

    2014-06-15

    Horizontally scanning holography can enlarge both screen size and viewing zone angle. A microelectromechanical-system spatial light modulator, which can generate only binary images, is used to generate hologram patterns. Thus, techniques to improve gray-scale representation in reconstructed images should be developed. In this study, the error diffusion technique was used for the binarization of holograms. When the Floyd-Steinberg error diffusion coefficients were used, gray-scale representation was improved. However, the linearity in the gray-scale representation was not satisfactory. We proposed the use of a correction table and showed that the linearity was greatly improved.
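For reference, a minimal sketch of Floyd-Steinberg error diffusion, the binarization step named in the abstract (the gray-scale linearity correction table proposed by the authors is not reproduced):

```python
import numpy as np

def floyd_steinberg(img):
    """Binarize a grayscale image in [0, 1] by Floyd-Steinberg error diffusion."""
    f = img.astype(float).copy()
    h, w = f.shape
    out = np.zeros_like(f)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if f[y, x] >= 0.5 else 0.0
            err = f[y, x] - out[y, x]        # quantization error to diffuse
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
    return out

gray = np.tile(np.linspace(0, 1, 64), (64, 1))   # horizontal gray ramp
binary = floyd_steinberg(gray)
print(gray.mean(), binary.mean())   # mean levels should match closely
```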

  17. The combined geodetic network adjusted on the reference ellipsoid - a comparison of three functional models for GNSS observations

    NASA Astrophysics Data System (ADS)

    Kadaj, Roman

    2016-12-01

    The adjustment problem of the so-called combined (hybrid, integrated) network created with GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. The network adjustment has been considered in various mathematical spaces: in the Cartesian geocentric system, on a reference ellipsoid, and on a mapping plane. For practical reasons, a geodetic coordinate system associated with the reference ellipsoid is often adopted. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations is the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector before its projection onto the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters, is presented. The analysis of the adjustment of a combined network on the ellipsoid shows that the optimal functional approach for the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as functions of the geodetic coordinates (in numerical applications, linearized forms of the observational equations with explicitly specified coefficients are used). While retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example to the vector of geodesic parameters. The problem is theoretically developed and numerically tested. An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system is considered for the preferred functional model of the GNSS observations.

  18. Corrigendum to “Thermophysical properties of U3Si2 to 1773 K”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Joshua Taylor; Nelson, Andrew Thomas; Dunwoody, John Tyler

    2016-12-01

    An error was discovered by the authors in the calculation of thermal diffusivity in “Thermophysical properties of U3Si2 to 1773 K”. The error was caused by an operator error in the entry of parameters used to fit the temperature-rise-versus-time model needed to calculate the thermal diffusivity. This error propagated to the calculation of thermal conductivity, leading to values that were 18%-28% too large, along with correspondingly affected calculated Lorenz values.

  19. Volume illustration of muscle from diffusion tensor images.

    PubMed

    Chen, Wei; Yan, Zhicheng; Zhang, Song; Crow, John Allen; Ebert, David S; McLaughlin, Ronald M; Mullins, Katie B; Cooper, Robert; Ding, Zi'ang; Liao, Jun

    2009-01-01

    Medical illustration has demonstrated its effectiveness in depicting salient anatomical features while hiding irrelevant details. Current solutions are ineffective for visualizing fibrous structures such as muscle, because typical datasets (CT or MRI) do not contain directional details. In this paper, we introduce a new muscle illustration approach that leverages diffusion tensor imaging (DTI) data and example-based texture synthesis techniques. Beginning with a volumetric diffusion tensor image, we reformulate it into a scalar field and an auxiliary guidance vector field to represent the structure and orientation of a muscle bundle. A muscle mask derived from the input diffusion tensor image is used to classify the muscle structure. The guidance vector field is further refined to remove noise and clarify structure. To simulate the internal appearance of the muscle, we propose a new two-dimensional example-based solid texture synthesis algorithm that builds a solid texture constrained by the guidance vector field. Illustrating the constructed scalar field and solid texture efficiently highlights the global appearance of the muscle as well as the local shape and structure of the muscle fibers in an illustrative fashion. We have applied the proposed approach to five example datasets (four pig hearts and a pig leg), demonstrating plausible illustration and expressiveness.

  20. Combating speckle in SAR images - Vector filtering and sequential classification based on a multiplicative noise model

    NASA Technical Reports Server (NTRS)

    Lin, Qian; Allebach, Jan P.

    1990-01-01

    An adaptive vector linear minimum mean-squared error (LMMSE) filter for multichannel images with multiplicative noise is presented. It is shown theoretically that the mean-squared error in the filter output is reduced by making use of the correlation between image bands. The vector and conventional scalar LMMSE filters are applied to a three-band SIR-B SAR image, and their performance is compared. Based on a multiplicative noise model, the per-pel maximum likelihood classifier is derived, and the authors extend it to the design of sequential and robust classifiers. These classifiers are also applied to the three-band SIR-B SAR image.

  1. Optimal source coding, removable noise elimination, and natural coordinate system construction for general vector sources using replicator neural networks

    NASA Astrophysics Data System (ADS)

    Hecht-Nielsen, Robert

    1997-04-01

    A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data-vector portions of the noise-corrupted source vectors. Considerations regarding selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
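A replicator network is, in modern terms, close to a bottlenecked autoencoder trained to minimize mean squared reconstruction error. The Keras sketch below illustrates only that idea; the architecture, activations (the paper's special middle-layer activation is replaced by a plain tanh), data manifold, and training schedule are all illustrative assumptions:

```python
import numpy as np
from tensorflow import keras

# Noisy low-dimensional manifold embedded in 3-D, as a stand-in source.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, size=(5000, 1))
X = np.hstack([np.cos(t), np.sin(t), np.cos(2 * t)])
X += 0.01 * rng.standard_normal(X.shape)

# Replicator-style network: three hidden layers; the narrow middle layer's
# activations play the role of approximate natural coordinates.
model = keras.Sequential([
    keras.layers.Input(shape=(3,)),
    keras.layers.Dense(32, activation="tanh"),
    keras.layers.Dense(2, activation="tanh", name="natural_coords"),
    keras.layers.Dense(32, activation="tanh"),
    keras.layers.Dense(3),                      # reconstruct the input
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, X, epochs=20, batch_size=64, verbose=0)
print("reconstruction MSE:", model.evaluate(X, X, verbose=0))
```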

  2. A NEW METHOD TO QUANTIFY AND REDUCE THE NET PROJECTION ERROR IN WHOLE-SOLAR-ACTIVE-REGION PARAMETERS MEASURED FROM VECTOR MAGNETOGRAMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falconer, David A.; Tiwari, Sanjiv K.; Moore, Ronald L.

    Projection errors limit the use of vector magnetograms of active regions (ARs) far from the disk center. In this Letter, for ARs observed up to 60° from the disk center, we demonstrate a method for measuring and reducing the projection error in the magnitude of any whole-AR parameter that is derived from a vector magnetogram that has been deprojected to the disk center. The method assumes that the center-to-limb curve of the average of the parameter’s absolute values, measured from the disk passage of a large number of ARs and normalized to each AR’s absolute value of the parameter at central meridian, gives the average fractional projection error at each radial distance from the disk center. To demonstrate the method, we use a large set of large-flux ARs and apply the method to a whole-AR parameter that is among the simplest to measure: whole-AR magnetic flux. We measure 30,845 SDO/Helioseismic and Magnetic Imager vector magnetograms covering the disk passage of 272 large-flux ARs, each having whole-AR flux >10^22 Mx. We obtain the center-to-limb radial-distance run of the average projection error in measured whole-AR flux from a Chebyshev fit to the radial-distance plot of the 30,845 normalized measured values. The average projection error in the measured whole-AR flux of an AR at a given radial distance is removed by multiplying the measured flux by the correction factor given by the fit. The correction is important both for the study of the evolution of ARs and for improving the accuracy of forecasts of an AR’s major flare/coronal mass ejection productivity.

  3. Local error estimates for adaptive simulation of the Reaction–Diffusion Master Equation via operator splitting

    PubMed Central

    Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda

    2015-01-01

    The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity. PMID:26865735

  4. Local error estimates for adaptive simulation of the Reaction-Diffusion Master Equation via operator splitting.

    PubMed

    Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda

    2014-06-01

    The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity.
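The RDME itself is stochastic, but the splitting-plus-local-error-control idea can be illustrated on a deterministic reaction-diffusion system. The sketch below uses first-order Lie splitting with a step-doubling local error estimate to adapt the timestep; all parameters are illustrative, and this is not the authors' DFSP method:

```python
import numpy as np

def lie_split_step(u, dt, D, dx, react):
    """One first-order Lie splitting step: diffusion substep, then reaction."""
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx ** 2
    u = u + dt * D * lap              # explicit diffusion substep (periodic)
    return u + dt * react(u)          # reaction substep

def adaptive_integrate(u, t_end, D, dx, react, tol=1e-4, dt=1e-3):
    t = 0.0
    while t < t_end:
        dt = min(dt, t_end - t)
        big = lie_split_step(u, dt, D, dx, react)
        half = lie_split_step(lie_split_step(u, dt / 2, D, dx, react),
                              dt / 2, D, dx, react)
        err = np.max(np.abs(big - half))          # step-doubling error estimate
        if err <= tol:
            u, t = half, t + dt                   # accept the finer solution
        # grow/shrink dt; exponent 1/2 matches the O(dt^2) local splitting error
        dt *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-15)) ** 0.5))
    return u

x = np.linspace(0, 1, 100, endpoint=False)
u0 = np.exp(-100 * (x - 0.5) ** 2)
u = adaptive_integrate(u0, 0.05, D=1e-3, dx=x[1] - x[0],
                       react=lambda u: u * (1 - u))   # logistic reaction term
print(u.min(), u.max())
```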

  5. Motion-induced phase error estimation and correction in 3D diffusion tensor imaging.

    PubMed

    Van, Anh T; Hernando, Diego; Sutton, Bradley P

    2011-11-01

    A multishot data acquisition strategy is one way to mitigate B0 distortion and T2∗ blurring for high-resolution diffusion-weighted magnetic resonance imaging experiments. However, different object motions that take place during different shots cause phase inconsistencies in the data, leading to significant image artifacts. This work proposes a maximum likelihood estimation and k-space correction of motion-induced phase errors in 3D multishot diffusion tensor imaging. The proposed error estimation is robust, unbiased, and approaches the Cramer-Rao lower bound. For rigid body motion, the proposed correction effectively removes motion-induced phase errors regardless of the k-space trajectory used and gives comparable performance to the more computationally expensive 3D iterative nonlinear phase error correction method. The method has been extended to handle multichannel data collected using phased-array coils. Simulation and in vivo data are shown to demonstrate the performance of the method.

  6. The role of model errors represented by nonlinear forcing singular vector tendency error in causing the "spring predictability barrier" within ENSO predictions

    NASA Astrophysics Data System (ADS)

    Duan, Wansuo; Zhao, Peng

    2017-04-01

    Within the Zebiak-Cane model, the nonlinear forcing singular vector (NFSV) approach is used to investigate the role of model errors in the "Spring Predictability Barrier" (SPB) phenomenon within ENSO predictions. NFSV-related errors have the largest negative effect on the uncertainties of El Niño predictions. NFSV errors can be classified into two types: the first is characterized by a zonal dipolar pattern of SST anomalies (SSTA), with the western poles centered in the equatorial central-western Pacific exhibiting positive anomalies and the eastern poles in the equatorial eastern Pacific exhibiting negative anomalies; the second is characterized by a pattern almost opposite to the first. The first type of error tends to have the worst effects on El Niño growth-phase predictions, whereas the latter often yields the largest negative effects on decaying-phase predictions. The evolution of prediction errors caused by NFSV-related errors exhibits prominent seasonality, with the fastest error growth in the spring and/or summer seasons; hence, these errors result in a significant SPB related to El Niño events. The linear counterpart of NFSVs, the (linear) forcing singular vector (FSV), induces a less significant SPB because it contains smaller prediction errors. Random errors cannot generate an SPB for El Niño events. These results show that the occurrence of an SPB is related to the spatial patterns of tendency errors: the NFSV tendency errors cause the most significant SPB for El Niño events. In addition, NFSVs often concentrate their large-value errors in a few areas within the equatorial eastern and central-western Pacific, which likely represent the areas sensitive to El Niño predictions associated with model errors. Meanwhile, these areas are also exactly consistent with the sensitive areas related to initial errors determined by previous studies. This implies that additional observations in the sensitive areas would not only improve the accuracy of the initial field but also promote the reduction of model errors, thereby greatly improving ENSO forecasts.

  7. Assessment of Metronidazole Susceptibility in Helicobacter pylori: Statistical Validation and Error Rate Analysis of Breakpoints Determined by the Disk Diffusion Test

    PubMed Central

    Chaves, Sandra; Gadanho, Mário; Tenreiro, Rogério; Cabrita, José

    1999-01-01

    Metronidazole susceptibility of 100 Helicobacter pylori strains was assessed by determining the inhibition zone diameters by disk diffusion test and the MICs by agar dilution and PDM Epsilometer test (E test). Linear regression analysis was performed, allowing the definition of significant linear relations, and revealed correlations of disk diffusion results with both E-test and agar dilution results (r2 = 0.88 and 0.81, respectively). No significant differences (P = 0.84) were found between MICs defined by E test and those defined by agar dilution, taken as a standard. Reproducibility comparison between E-test and disk diffusion tests showed that they are equivalent and with good precision. Two interpretative susceptibility schemes (with or without an intermediate class) were compared by an interpretative error rate analysis method. The susceptibility classification scheme that included the intermediate category was retained, and breakpoints were assessed for diffusion assay with 5-μg metronidazole disks. Strains with inhibition zone diameters less than 16 mm were defined as resistant (MIC > 8 μg/ml), those with zone diameters equal to or greater than 16 mm but less than 21 mm were considered intermediate (4 μg/ml < MIC ≤ 8 μg/ml), and those with zone diameters of 21 mm or greater were regarded as susceptible (MIC ≤ 4 μg/ml). Error rate analysis applied to this classification scheme showed occurrence frequencies of 1% for major errors and 7% for minor errors, when the results were compared to those obtained by agar dilution. No very major errors were detected, suggesting that disk diffusion might be a good alternative for determining the metronidazole sensitivity of H. pylori strains. PMID:10203543
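The interpretative scheme defined above translates directly into code; a minimal sketch using the reported breakpoints for 5-μg metronidazole disks:

```python
def metronidazole_category(zone_mm):
    """Interpretative scheme from the abstract (5-ug metronidazole disks)."""
    if zone_mm < 16:
        return "resistant"       # MIC > 8 ug/ml
    if zone_mm < 21:
        return "intermediate"    # 4 ug/ml < MIC <= 8 ug/ml
    return "susceptible"         # MIC <= 4 ug/ml

print([metronidazole_category(z) for z in (12, 18, 25)])
# ['resistant', 'intermediate', 'susceptible']
```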

  8. A Radiation Chemistry Code Based on the Green's Functions of the Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Wu, Honglu

    2014-01-01

    Ionizing radiation produces several radiolytic species, such as ·OH, e-aq, and H·, when interacting with biological matter. Following their creation, radiolytic species diffuse and chemically react with biological molecules such as DNA. Despite years of research, many questions on DNA damage by ionizing radiation remain, notably on the indirect effect, i.e., the damage resulting from the reactions of the radiolytic species with DNA. To simulate DNA damage by ionizing radiation, we are developing a step-by-step radiation chemistry code based on the Green's functions of the diffusion equation (GFDE), which is able to follow the trajectories of all particles and their reactions in time. In recent years, simulations based on the GFDE have been used extensively in biochemistry, notably to simulate biochemical networks in time and space, and are often used as the "gold standard" to validate diffusion-reaction theories. The exact GFDE for partially diffusion-controlled reactions is difficult to use because of its complex form; therefore, the radial Green's function, which is much simpler, is often used. Hence, much effort has been devoted to the sampling of the radial Green's functions, for which we have developed a sampling algorithm. This algorithm yields only the length of the inter-particle distance vector after a time step; the sampling of the deviation angle of the inter-particle vector is not taken into consideration. In this work, we show that the radial distribution is predicted by the exact radial Green's function. We also use a technique developed by Clifford et al. to generate the inter-particle vector deviation angles, knowing the inter-particle vector length before and after a time step. The results are compared with those predicted by the exact GFDE and by the analytical angular functions for free diffusion. This first step in the creation of the radiation chemistry code should help the understanding of the contribution of the indirect effect to the formation of DNA damage and double-strand breaks.

  9. Experiments With Magnetic Vector Potential

    ERIC Educational Resources Information Center

    Skinner, J. W.

    1975-01-01

    Describes the experimental apparatus and method for the study of magnetic vector potential (MVP). Includes a discussion of inherent errors in the calculations involved, precision of the results, and further applications of MVP. (GS)

  10. New syndrome decoding techniques for the (n, k) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3, 1) CC. Previously announced in STAR as N83-34964.

  11. Green-noise halftoning with dot diffusion

    NASA Astrophysics Data System (ADS)

    Lippens, Stefaan; Philips, Wilfried

    2007-02-01

    Dot diffusion is a halftoning technique that is based on the traditional error diffusion concept but offers a high degree of parallel processing through its block-based approach. Traditional dot diffusion, however, suffers from periodicity artifacts. To limit the visibility of these artifacts, we propose grid diffusion, which applies different class matrices to different blocks. Furthermore, in this paper we discuss two approaches within the dot diffusion framework for generating green-noise halftone patterns. The first approach is based on output-dependent feedback (hysteresis), analogous to the standard green-noise error diffusion techniques. We observe that the resulting halftones are rather coarse and highly dependent on the dot diffusion class matrices used. In the second approach we do not limit the diffusion to the nearest neighbors. This leads to less coarse halftones compared with the first approach. The drawback is that it can only cope with rather limited cluster sizes. We can reduce these drawbacks by combining the two approaches.

  12. 4 × 20 Gbit/s mode division multiplexing over free space using vector modes and a q-plate mode (de)multiplexer

    NASA Astrophysics Data System (ADS)

    Milione, Giovanni; Lavery, Martin P. J.; Huang, Hao; Ren, Yongxiong; Xie, Guodong; Nguyen, Thien An; Karimi, Ebrahim; Marrucci, Lorenzo; Nolan, Daniel A.; Alfano, Robert R.; Willner, Alan E.

    2015-05-01

    Vector modes are spatial modes that have spatially inhomogeneous states of polarization, such as radial and azimuthal polarization. They can produce smaller spot sizes and stronger longitudinal polarization components upon focusing. As a result, they are used for many applications, including optical trapping and nanoscale imaging. In this work, vector modes are used to increase the information capacity of free space optical communication via the method of optical communication referred to as mode division multiplexing. A mode (de)multiplexer for vector modes based on a liquid crystal technology referred to as a q-plate is introduced. As a proof of principle, using the mode (de)multiplexer, four vector modes, each carrying a 20 Gbit/s quadrature phase shift keying signal on a single wavelength channel (~1550 nm), comprising an aggregate 80 Gbit/s, were transmitted ~1 m over the lab table with <-16.4 dB (<2%) mode crosstalk. Bit error rates for all vector modes were measured at the forward error correction threshold with power penalties <3.41 dB.

  13. Evaluation of the SPAR thermal analyzer on the CYBER-203 computer

    NASA Technical Reports Server (NTRS)

    Robinson, J. C.; Riley, K. M.; Haftka, R. T.

    1982-01-01

    The use of the CYBER 203 vector computer for thermal analysis is investigated. Strengths of the CYBER 203 include the ability to perform, in vector mode using a 64 bit word, 50 million floating point operations per second (MFLOPS) for addition and subtraction, 25 MFLOPS for multiplication and 12.5 MFLOPS for division. The speed of scalar operation is comparable to that of a CDC 7600 and is some 2 to 3 times faster than Langley's CYBER 175s. The CYBER 203 has 1,048,576 64-bit words of real memory with an 80 nanosecond (nsec) access time. Memory is bit addressable and provides single error correction, double error detection (SECDED) capability. The virtual memory capability handles data in either 512 or 65,536 word pages. The machine has 256 registers with a 40 nsec access time. The weaknesses of the CYBER 203 include the amount of vector operation overhead and some data storage limitations. In vector operations there is a considerable amount of time before a single result is produced so that vector calculation speed is slower than scalar operation for short vectors.

  14. New syndrome decoder for (n, 1) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    The letter presents a new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like, algorithm is developed to find the minimum weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.

  15. Vector velocity volume flow estimation: Sources of error and corrections applied for arteriovenous fistulas.

    PubMed

    Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt

    2016-08-01

    A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors in estimating volumetric flow using a commercial ultrasound scanner under the common assumptions made in the literature. The theoretical model shows, e.g., that volume flow is underestimated by 15% when the scan plane is off-axis from the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied to correct the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of the fistulas. When fitting an ellipse to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis of an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis gave a significant (p = 0.008) reduction in error, from 31.2% to 24.3%. The error is relative to the ultrasound dilution technique, which is considered the gold standard for volume flow estimation in dialysis patients. The study shows the importance of correcting for volume flow errors, which are often made in clinical practice.
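A back-of-the-envelope sketch of the elliptical-area part of the correction, using the average axes reported above (the mean velocity is an illustrative assumption, and the paper's off-axis beam correction is not reproduced):

```python
import numpy as np

v_mean = 0.5                 # time-averaged velocity [m/s] (illustrative)
a = 10.2e-3 / 2              # semi-major axis from the reported 10.2 mm
b = a / 1.086                # semi-minor axis: major axis is 8.6% larger

q_circle = v_mean * np.pi * a ** 2      # circular cross-section assumption
q_ellipse = v_mean * np.pi * a * b      # elliptical cross-section
print(f"overestimate from circular assumption: "
      f"{100 * (q_circle / q_ellipse - 1):.1f}%")   # ~8.6%
```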

  16. A unified development of several techniques for the representation of random vectors and data sets

    NASA Technical Reports Server (NTRS)

    Bundick, W. T.

    1973-01-01

    Linear vector space theory is used to develop a general representation of a set of data vectors or random vectors by linear combinations of orthonormal vectors such that the mean squared error of the representation is minimized. The orthonormal vectors are shown to be the eigenvectors of an operator. The general representation is applied to several specific problems involving the use of the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions; and the common properties of these representations are developed.
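A minimal numpy sketch of the shared construction behind these representations: the orthonormal eigenvectors of the sample covariance operator give the minimum mean-squared-error linear representation (the rank-2 synthetic data are an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2)) @ rng.standard_normal((2, 10))  # rank-2 data

Xc = X - X.mean(axis=0)                  # center the data vectors
C = Xc.T @ Xc / len(Xc)                  # sample covariance operator
evals, evecs = np.linalg.eigh(C)         # orthonormal eigenvectors
order = np.argsort(evals)[::-1]
U = evecs[:, order[:2]]                  # keep the 2 leading eigenvectors

X_hat = X.mean(axis=0) + (Xc @ U) @ U.T  # minimum-MSE rank-2 reconstruction
print("MSE:", np.mean((X - X_hat) ** 2)) # ~0, since the data are rank 2
```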

  17. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    NASA Astrophysics Data System (ADS)

    Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

    2012-02-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real-time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.

  18. Spatial Resolution, Grayscale, and Error Diffusion Trade-offs: Impact on Display System Design

    NASA Technical Reports Server (NTRS)

    Gille, Jennifer L. (Principal Investigator)

    1996-01-01

    We examine technology trade-offs related to grayscale resolution, spatial resolution, and error diffusion for tessellated display systems. We present new empirical results from our psychophysical study of these trade-offs and compare them to the predictions of a model of human vision.

  19. Cloud tracing: Visualization of the mixing of fluid elements in convection-diffusion systems

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Smith, Philip J.

    1993-01-01

    This paper describes a highly interactive method for computer visualization of the basic physical process of dispersion and mixing of fluid elements in convection-diffusion systems. It is based on transforming the vector field from a traditionally Eulerian reference frame into a Lagrangian reference frame. Fluid elements are traced through the vector field for the mean path as well as the statistical dispersion of the fluid elements about the mean position by using added scalar information about the root mean square value of the vector field and its Lagrangian time scale. In this way, clouds of fluid elements are traced and are not just mean paths. We have used this method to visualize the simulation of an industrial incinerator to help identify mechanisms for poor mixing.

  20. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.

  1. Orion Exploration Flight Test-1 Contingency Drogue Deploy Velocity Trigger

    NASA Technical Reports Server (NTRS)

    Gay, Robert S.; Stochowiak, Susan; Smith, Kelly

    2013-01-01

    As a backup to the GPS-aided Kalman filter and the barometric altimeter, an "adjusted" velocity trigger is used during entry to trigger the chain of events that leads to drogue chute deploy for the Orion Multi-Purpose Crew Vehicle (MPCV) Exploration Flight Test-1 (EFT-1). Even though this scenario is multiple failures deep, the Orion Guidance, Navigation, and Control (GN&C) software makes use of a clever technique that was taken from the Mars Science Laboratory (MSL) program, which recently successfully landed the Curiosity rover on Mars. MSL used this technique to jettison the heat shield at the proper time during descent. Originally, Orion used the unadjusted navigated velocity, but the removal of the Star Tracker to save costs for EFT-1 increased attitude errors, which increased inertial propagation errors to the point where the unadjusted velocity caused altitude dispersions at drogue deploy to be too large. Thus, to reduce dispersions, the velocity vector is projected onto a "reference" vector that represents the nominal "truth" vector at the desired point in the trajectory. Because the navigation errors are largely perpendicular to the truth vector, this projection significantly reduces dispersions in the velocity magnitude. This paper will detail the evolution of this trigger method for the Orion project and cover the various methods tested to determine the reference "truth" vector, and at what point in the trajectory it should be computed.

  2. Analysis and correction of gradient nonlinearity bias in apparent diffusion coefficient measurements.

    PubMed

    Malyarenko, Dariya I; Ross, Brian D; Chenevert, Thomas L

    2014-03-01

    Gradient nonlinearity of MRI systems leads to spatially dependent b-values and consequently high non-uniformity errors (10-20%) in apparent diffusion coefficient (ADC) measurements over clinically relevant fields of view. This work seeks a practical correction procedure that effectively reduces the observed ADC bias for media of arbitrary anisotropy in the fewest measurements. The all-inclusive bias analysis considers spatial and time-domain cross-terms for diffusion and imaging gradients. The proposed correction is based on rotation of the gradient nonlinearity tensor into the diffusion gradient frame, where the spatial bias of the b-matrix can be approximated by its Euclidean norm. The correction efficiency of the proposed procedure is numerically evaluated for a range of model diffusion tensor anisotropies and orientations. Spatial dependence of the nonlinearity correction terms accounts for the bulk (75-95%) of the ADC bias for FA = 0.3-0.9. Residual ADC non-uniformity errors are amplified for anisotropic diffusion. This approximation obviates the need for full diffusion tensor measurement and diagonalization to derive a corrected ADC. Practical scenarios are outlined for implementation of the correction on clinical MRI systems. The proposed simplified correction algorithm appears sufficient to control ADC non-uniformity errors in clinical studies using three orthogonal diffusion measurements. The most efficient reduction of ADC bias for anisotropic media is achieved with non-lab-based diffusion gradients.
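A toy sketch of the underlying bias mechanism and its scalar correction under a mono-exponential signal model; the b-values are illustrative assumptions, and the paper's tensor rotation and b-matrix norm approximation are not reproduced:

```python
import numpy as np

def adc(signal, s0, b):
    """Mono-exponential ADC estimate from one diffusion-weighted signal."""
    return np.log(s0 / signal) / b

b_nominal = 1000.0      # s/mm^2, prescribed b-value
b_actual = 1100.0       # spatially varying value implied by gradient
                        # nonlinearity at some voxel (illustrative)
true_adc = 1.0e-3       # mm^2/s
s0 = 1.0
signal = s0 * np.exp(-b_actual * true_adc)    # what the scanner measures

biased = adc(signal, s0, b_nominal)           # ~10% overestimate here
corrected = biased * b_nominal / b_actual     # rescale by the actual b-value
print(biased, corrected)                      # 1.1e-3 -> 1.0e-3
```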

  3. Detection of blob objects in microscopic zebrafish images based on gradient vector diffusion.

    PubMed

    Li, Gang; Liu, Tianming; Nie, Jingxin; Guo, Lei; Malicki, Jarema; Mara, Andrew; Holley, Scott A; Xia, Weiming; Wong, Stephen T C

    2007-10-01

    The zebrafish has become an important vertebrate animal model for the study of developmental biology, functional genomics, and disease mechanisms. It is also being used for drug discovery. Computerized detection of blob objects has been one of the important tasks in quantitative phenotyping of zebrafish. We present a new automated method that is able to detect blob objects, such as nuclei or cells in microscopic zebrafish images. This method is composed of three key steps. The first step is to produce a diffused gradient vector field by a physical elastic deformable model. In the second step, the flux image is computed on the diffused gradient vector field. The third step performs thresholding and nonmaximum suppression based on the flux image. We report the validation and experimental results of this method using zebrafish image datasets from three independent research labs. Both sensitivity and specificity of this method are over 90%. This method is able to differentiate closely juxtaposed or connected blob objects, with high sensitivity and specificity in different situations. It is characterized by a good, consistent performance in blob object detection.
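
    The three steps can be mimicked with standard tools; the sketch below uses plain isotropic vector-field smoothing in place of the authors' elastic deformable model, computes the inward flux (negative divergence) of the normalized field, and applies a simple threshold (non-maximum suppression omitted):

      import numpy as np
      from scipy import ndimage

      def blob_flux_mask(image, n_iter=50, mu=0.2, thresh=0.5):
          """Toy blob detector: diffuse the gradient field, threshold its inward flux."""
          gy, gx = np.gradient(ndimage.gaussian_filter(image.astype(float), 1.0))
          for _ in range(n_iter):                   # crude isotropic vector-field diffusion
              gx = gx + mu * ndimage.laplace(gx)
              gy = gy + mu * ndimage.laplace(gy)
          norm = np.hypot(gx, gy) + 1e-12
          ux, uy = gx / norm, gy / norm             # normalized diffused field
          flux = -(np.gradient(ux, axis=1) + np.gradient(uy, axis=0))   # inward flux
          return flux > thresh * flux.max()

      # Synthetic test: one bright Gaussian "nucleus" on a dark background
      yy, xx = np.mgrid[0:64, 0:64]
      img = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / 40.0)
      print(blob_flux_mask(img).sum() > 0)          # True: the blob center is flagged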

  4. Vectorization of transport and diffusion computations on the CDC Cyber 205

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abu-Shumays, I.K.

    1986-01-01

    The development and testing of alternative numerical methods and computational algorithms specifically designed for the vectorization of transport and diffusion computations on a Control Data Corporation (CDC) Cyber 205 vector computer are described. Two solution methods for the discrete ordinates approximation to the transport equation are summarized and compared. Factors of 4 to 7 reduction in run times for certain large transport problems were achieved on a Cyber 205 as compared with run times on a CDC-7600. The solution of tridiagonal systems of linear equations, central to several efficient numerical methods for multidimensional diffusion computations and essential for fluid flow and other physics and engineering problems, is also dealt with. Among the methods tested, a combined odd-even cyclic reduction and modified Cholesky factorization algorithm for solving linear symmetric positive definite tridiagonal systems is found to be the most effective for these systems on a Cyber 205. For large tridiagonal systems, computation with this algorithm is an order of magnitude faster on a Cyber 205 than computation with the best algorithm for tridiagonal systems on a CDC-7600.
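
    To see why cyclic reduction vectorizes well, the sketch below implements pure odd-even cyclic reduction for a tridiagonal solve in NumPy (omitting the modified Cholesky stage mentioned above); every elimination level is a whole-slice array operation rather than the sequential sweep of the classic Thomas algorithm. It assumes n = 2^k - 1 and a well-conditioned system:

      import numpy as np

      def cyclic_reduction_solve(a, b, c, d):
          """Solve a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i], with a[0] = c[-1] = 0.

          Requires len(b) == 2**k - 1. Each level eliminates the odd-indexed
          unknowns using whole-slice (vectorizable) arithmetic."""
          n = len(b)
          if n == 1:
              return d / b
          i = np.arange(1, n, 2)
          al, ga = a[i] / b[i - 1], c[i] / b[i + 1]
          a2 = -al * a[i - 1]
          b2 = b[i] - al * c[i - 1] - ga * a[i + 1]
          c2 = -ga * c[i + 1]
          d2 = d[i] - al * d[i - 1] - ga * d[i + 1]
          x = np.zeros(n)
          x[1::2] = cyclic_reduction_solve(a2, b2, c2, d2)   # reduced half-size system
          odd = x[1::2]
          left = np.concatenate(([0.0], odd))        # x[j-1] for even rows j
          right = np.concatenate((odd, [0.0]))       # x[j+1] for even rows j
          j = np.arange(0, n, 2)
          x[j] = (d[j] - a[j] * left - c[j] * right) / b[j]  # vectorized back-substitution
          return x

      # Symmetric positive definite test system: the (-1, 2, -1) Laplacian
      n = 2**7 - 1
      a = np.full(n, -1.0); b = np.full(n, 2.0); c = np.full(n, -1.0)
      a[0] = c[-1] = 0.0
      x_true = np.random.default_rng(0).normal(size=n)
      d = (b * x_true
           + a * np.concatenate(([0.0], x_true[:-1]))
           + c * np.concatenate((x_true[1:], [0.0])))
      print(np.allclose(cyclic_reduction_solve(a, b, c, d), x_true))   # True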

  5. Video Vectorization via Tetrahedral Remeshing.

    PubMed

    Wang, Chuan; Zhu, Jie; Guo, Yanwen; Wang, Wenping

    2017-02-09

    We present a video vectorization method that generates a video in vector representation from an input video in raster representation. A vector-based video representation offers the benefits of vector graphics, such as compactness and scalability. The vector video we generate is represented by a simplified tetrahedral control mesh over the spatial-temporal video volume, with color attributes defined at the mesh vertices. We present novel techniques for simplification and subdivision of a tetrahedral mesh that achieve a high simplification ratio while preserving features and ensuring color fidelity. From an input raster video, our method is capable of generating a compact video in vector representation that allows a faithful reconstruction with low reconstruction errors.

  6. Calibration Errors in Interferometric Radio Polarimetry

    NASA Astrophysics Data System (ADS)

    Hales, Christopher A.

    2017-08-01

    Residual calibration errors are difficult to predict in interferometric radio polarimetry because they depend on the observational calibration strategy employed, encompassing the Stokes vector of the calibrator and parallactic angle coverage. This work presents analytic derivations and simulations that enable examination of residual on-axis instrumental leakage and position-angle errors for a suite of calibration strategies. The focus is on arrays comprising alt-azimuth antennas with common feeds over which parallactic angle is approximately uniform. The results indicate that calibration schemes requiring parallactic angle coverage in the linear feed basis (e.g., the Atacama Large Millimeter/submillimeter Array) need only observe over 30°, beyond which no significant improvements in calibration accuracy are obtained. In the circular feed basis (e.g., the Very Large Array above 1 GHz), 30° is also appropriate when the Stokes vector of the leakage calibrator is known a priori, but this rises to 90° when the Stokes vector is unknown. These findings illustrate and quantify concepts that were previously obscure rules of thumb.

  7. Signal location using generalized linear constraints

    NASA Astrophysics Data System (ADS)

    Griffiths, Lloyd J.; Feldman, D. D.

    1992-01-01

    This report presents a two-part method for estimating the directions of arrival of uncorrelated narrowband sources when there are arbitrary phase errors and angle-independent gain errors. The signal steering vectors are estimated in the first part of the method; in the second part, the arrival directions are estimated. It should be noted that the second part of the method can be tailored to incorporate additional information about the nature of the phase errors. For example, if the phase errors are known to be caused solely by element misplacement, the element locations can be estimated concurrently with the DOAs by trying to match the theoretical steering vectors to the estimated ones. Simulation results suggest that, for general perturbations, the method can resolve closely spaced sources under conditions for which a standard high-resolution DOA method such as MUSIC fails.

  8. AveBoost2: Boosting for Noisy Data

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2004-01-01

    AdaBoost is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence. The idea is to make the next base model's errors uncorrelated with those of the previous model. In previous work, we developed an algorithm, AveBoost, that constructed distributions orthogonal to the mistake vectors of all the previous models, and then averaged them to create the next base model's distribution. Our experiments demonstrated the superior accuracy of our approach. In this paper, we slightly revise our algorithm to allow us to obtain non-trivial theoretical results: bounds on the training error and generalization error (the difference between training and test error). Our averaging process has a regularizing effect which, as expected, leads to a worse training error bound for our algorithm than for AdaBoost but a superior generalization error bound. For this paper, we experimented with the same data sets as in our previous work, both as originally supplied and with added label noise (a small fraction of the data has its original label changed). Noisy data are notoriously difficult for AdaBoost to learn. Our algorithm's performance improvement over AdaBoost is even greater on the noisy data than on the original data.

  9. Comparison of Moderate- to High-Astigmatism Corrections Using WaveFront-Guided Laser In Situ Keratomileusis and Small-Incision Lenticule Extraction.

    PubMed

    Zhang, Jiamei; Wang, Yan; Chen, Xiaoqin

    2016-04-01

    To evaluate and compare refractive outcomes of moderate- and high-astigmatism correction after wavefront-guided laser in situ keratomileusis (LASIK) and small-incision lenticule extraction (SMILE). This comparative study enrolled a total of 64 eyes that had undergone SMILE (42 eyes) or wavefront-guided LASIK (22 eyes). Preoperative cylinder magnitude was ≤2.25 D in the moderate-astigmatism subgroup and >2.25 D in the high-astigmatism subgroup. The refractive results were analyzed with the Alpins vector method, which includes target-induced astigmatism, surgically induced astigmatism, difference vector, correction index, index of success, magnitude of error, angle of error, and flattening index. All subjects completed the 3-month follow-up. No significant differences were found in target-induced astigmatism, surgically induced astigmatism, or difference vector between SMILE and wavefront-guided LASIK. However, the average angle of error was -1.00 ± 3.16 after wavefront-guided LASIK and 1.22 ± 3.85 after SMILE, a statistically significant difference (P < 0.05). The absolute angle of error was statistically correlated with the difference vector and the index of success after both procedures. In the moderate-astigmatism group, the correction index was 1.04 ± 0.15 after wavefront-guided LASIK and 0.88 ± 0.15 after SMILE (P < 0.05). In the high-astigmatism group, however, the correction index was 0.87 ± 0.13 after wavefront-guided LASIK and 0.88 ± 0.12 after SMILE (P = 0.889). Both procedures showed favorable outcomes in the correction of moderate and high astigmatism; however, high astigmatism was undercorrected by both procedures. Axial error of astigmatic correction may be one of the potential factors for the undercorrection.

  10. Estimation of diffusion coefficients from voltammetric signals by support vector and gaussian process regression

    PubMed Central

    2014-01-01

    Background: Support vector regression (SVR) and Gaussian process regression (GPR) were used for the analysis of electroanalytical experimental data to estimate diffusion coefficients. Results: For simulated cyclic voltammograms based on the EC, Eqr, and EqrC mechanisms, these regression algorithms in combination with nonlinear kernel/covariance functions yielded diffusion coefficients with higher accuracy than the standard approach of calculating diffusion coefficients from the Nicholson-Shain equation. The level of accuracy achieved by SVR and GPR is virtually independent of the rate constants governing the respective reaction steps. Further, reducing the high-dimensional voltammetric signals by manual selection of typical voltammetric peak features decreased the performance of both regression algorithms compared to reduction by downsampling or principal component analysis. After training on simulated data sets, diffusion coefficients were estimated by the regression algorithms for experimental data comprising voltammetric signals for three organometallic complexes. Conclusions: The estimated diffusion coefficients closely matched the values determined by the parameter fitting method, but reduced the required computational time considerably for one of the reaction mechanisms. The automated processing of voltammograms by the regression algorithms yields better results than the conventional analysis of peak-related data. PMID:24987463
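
    A minimal sketch of the regression setup (the synthetic "voltammograms" and hyperparameters below are illustrative stand-ins, not the paper's simulations): train SVR and GPR on downsampled traces labeled with known diffusion coefficients, then predict for new traces:

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(0)

      def fake_voltammogram(log_d, n=64):
          """Toy stand-in for a simulated voltammogram; peak height grows with sqrt(D)."""
          t = np.linspace(-1.0, 1.0, n)
          amp = np.sqrt(10.0 ** (log_d + 6.0))      # rescaled so features are O(1)
          return amp * np.exp(-((t - 0.1) ** 2) / 0.05) + 0.01 * rng.normal(size=n)

      log_d_train = rng.uniform(-6.0, -4.5, 200)    # targets: log10 of D
      X_train = np.stack([fake_voltammogram(v) for v in log_d_train])[:, ::4]  # downsample
      log_d_test = rng.uniform(-6.0, -4.5, 20)
      X_test = np.stack([fake_voltammogram(v) for v in log_d_test])[:, ::4]

      svr = SVR(kernel="rbf", C=10.0).fit(X_train, log_d_train)
      gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X_train, log_d_train)
      print(np.abs(svr.predict(X_test) - log_d_test).mean())   # MAE in log10 D, SVR
      print(np.abs(gpr.predict(X_test) - log_d_test).mean())   # MAE in log10 D, GPR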

  11. Addendum: New approach to the resummation of logarithms in Higgs-boson decays to a vector quarkonium plus a photon [Phys. Rev. D 95, 054018 (2017)]

    DOE PAGES

    Bodwin, Geoffrey T.; Chung, Hee Sok; Ee, June-Haak; ...

    2017-12-20

    In this addendum to Phys. Rev. D 95, 054018 (2017), we recompute the rates for the decays of the Higgs boson to a vector quarkonium plus a photon, where the vector quarkonium is J/psi, Upsilon(1S), or Upsilon(2S). We correct an error in the Abel-Padé summation formula that was used to carry out the evolution of the quarkonium light-cone distribution amplitude in Phys. Rev. D 95, 054018 (2017). We also correct an error in the scale of the quarkonium wave function at the origin in Phys. Rev. D 95, 054018 (2017) and introduce several additional refinements in the calculation.

  13. Diffusible signal factor-repressed extracellular traits enable attachment of Xylella fastidiosa to insect vectors and transmission.

    PubMed

    Baccari, Clelia; Killiny, Nabil; Ionescu, Michael; Almeida, Rodrigo P P; Lindow, Steven E

    2014-01-01

    The hypothesis that a wild-type strain of Xylella fastidiosa would restore the ability of rpfF mutants, blocked in diffusible signal factor production, to be transmitted to new grape plants by the sharpshooter vector Graphocephala atropunctata was tested. While the rpfF mutant was very poorly transmitted by vectors irrespective of whether they had also fed on plants infected with the wild-type strain, wild-type strains were not efficiently transmitted if vectors had fed on plants infected with the rpfF mutant. About 100-fold fewer cells of a wild-type strain attached to the wings of a vector when suspended in xylem sap from plants infected with an rpfF mutant than in sap from uninfected grapes. The frequency of transmission of cells suspended in sap from plants that were infected by the rpfF mutant was also reduced more than threefold. Wild-type cells suspended in a culture supernatant of an rpfF mutant also exhibited 10-fold less adherence to wings than when suspended in uninoculated culture media. A factor released into the xylem by rpfF mutants, and to a lesser extent by the wild-type strain, thus inhibits their attachment to, and thus transmission by, sharpshooter vectors, and may also enable them to move more readily through host plants.

  14. Reducing On-Board Computer Propagation Errors Due to Omitted Geopotential Terms by Judicious Selection of Uploaded State Vector

    NASA Technical Reports Server (NTRS)

    Greatorex, Scott (Editor); Beckman, Mark

    1996-01-01

    Several future, and some current missions, use an on-board computer (OBC) force model that is very limited. The OBC geopotential force model typically includes only the J(2), J(3), J(4), C(2,2) and S(2,2) terms to model non-spherical Earth gravitational effects. The Tropical Rainfall Measuring Mission (TRMM), Wide-field Infrared Explorer (WIRE), Transition Region and Coronal Explorer (TRACE), Submillimeter Wave Astronomy Satellite (SWAS), and X-ray Timing Explorer (XTE) all plan to use this geopotential force model on-board. The Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) is already flying this geopotential force model. Past analysis has shown that one of the leading sources of error in the OBC propagated ephemeris is the omission of the higher order geopotential terms. However, these same analyses have shown a wide range of accuracies for the OBC ephemerides. Analysis was performed using EUVE state vectors that showed the EUVE four-day OBC propagated ephemerides varied in accuracy from 200 m to 45 km depending on the initial vector used to start the propagation. The vectors used in the study were from a single EUVE orbit at one-minute intervals in the ephemeris. Since each vector propagated practically the same path as the others, the differences seen had to be due to differences in the initial state vector only. An algorithm was developed that will optimize the epoch of the uploaded state vector. Proper selection can reduce the previous errors of anywhere from 200 m to 45 km to generally less than one km over four days of propagation. This would enable flight projects to minimize state vector uploads to the spacecraft. Additionally, this method is superior to other methods in that no additional orbit estimates need be done. The definitive ephemeris generated on the ground can be used as long as the proper epoch is chosen. This algorithm can be easily coded in software that would pick the epoch within a specified time range that would minimize the OBC propagation error. This technique should greatly improve the accuracy of the OBC propagation on-board future spacecraft such as TRMM, WIRE, SWAS, and XTE without increasing complexity in the ground processing.

  15. Demonstration of Nonlinearity Bias in the Measurement of the Apparent Diffusion Coefficient in Multicenter Trials

    PubMed Central

    Malyarenko, Dariya; Newitt, David; Wilmes, Lisa; Tudorica, Alina; Helmer, Karl G.; Arlinghaus, Lori R.; Jacobs, Michael A.; Jajamovich, Guido; Taouli, Bachir; Yankeelov, Thomas E.; Huang, Wei; Chenevert, Thomas L.

    2015-01-01

    Purpose: Characterize system-specific bias across common magnetic resonance imaging (MRI) platforms for quantitative diffusion measurements in multicenter trials. Methods: Diffusion weighted imaging (DWI) was performed on an ice-water phantom along the superior-inferior (SI) and right-left (RL) orientations spanning ±150 mm. The same scanning protocol was implemented on 14 MRI systems at seven imaging centers. The bias was estimated as a deviation of measured from known apparent diffusion coefficient (ADC) along individual DWI directions. The relative contributions of gradient nonlinearity, shim errors, imaging gradients and eddy currents were assessed independently. The observed bias errors were compared to numerical models. Results: The measured systematic ADC errors scaled quadratically with offset from isocenter, and ranged between −55% (SI) and 25% (RL). Nonlinearity bias was dependent on system design and diffusion gradient direction. Consistent with numerical models, minor ADC errors (±5%) due to shim, imaging and eddy currents were mitigated by double echo DWI and image co-registration of individual gradient directions. Conclusion: The analysis confirms gradient nonlinearity as a major source of spatial DW bias and variability in off-center ADC measurements across MRI platforms, with minor contributions from shim, imaging gradients and eddy currents. The developed protocol enables empiric description of systematic bias in multicenter quantitative DWI studies. PMID:25940607

  17. High angular resolution diffusion imaging with stimulated echoes: compensation and correction in experiment design and analysis.

    PubMed

    Lundell, Henrik; Alexander, Daniel C; Dyrby, Tim B

    2014-08-01

    Stimulated echo acquisition mode (STEAM) diffusion MRI can be advantageous over pulsed-gradient spin-echo (PGSE) for diffusion times that are long compared with T2. It therefore has potential for biomedical diffusion imaging applications at 7T and above where T2 is short. However, gradient pulses other than the diffusion gradients in the STEAM sequence contribute much greater diffusion weighting than in PGSE and lead to a disrupted experimental design. Here, we introduce a simple compensation to the STEAM acquisition that avoids the orientational bias and disrupted experiment design that these gradient pulses can otherwise produce. The compensation is simple to implement by adjusting the gradient vectors in the diffusion pulses of the STEAM sequence, so that the net effective gradient vector including contributions from diffusion and other gradient pulses is as the experiment intends. High angular resolution diffusion imaging (HARDI) data were acquired with and without the proposed compensation. The data were processed to derive standard diffusion tensor imaging (DTI) maps, which highlight the need for the compensation. Ignoring the other gradient pulses, a bias in DTI parameters from STEAM acquisition is found, due both to confounds in the analysis and the experiment design. Retrospectively correcting the analysis with a calculation of the full B matrix can partly correct for these confounds, but an acquisition that is compensated as proposed is needed to remove the effect entirely. © 2014 The Authors. NMR in Biomedicine published by John Wiley & Sons, Ltd.

  18. Model-based error diffusion for high fidelity lenticular screening.

    PubMed

    Lau, Daniel; Smith, Trebor

    2006-04-17

    Digital halftoning is the process of converting a continuous-tone image into an arrangement of black and white dots for binary display devices such as digital ink-jet and electrophotographic printers. As printers achieve print resolutions exceeding 1,200 dots per inch, it is becoming increasingly important for halftoning algorithms to consider the variations and interactions in the size and shape of printed dots between neighboring pixels. In the case of lenticular screening, where statistically independent images are spatially multiplexed together, ignoring these variations and interactions, such as dot overlap, will result in poor lenticular image quality. To this end, we describe our use of model-based error diffusion for the lenticular screening problem, in which statistical independence between component images is achieved by restricting the diffusion of error to only those pixels of the same component image; to avoid instabilities, the proposed approach incorporates a novel error-clipping procedure.
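
    The restriction is easy to picture: if the component images are interleaved column-wise, halftoning each component's column slice independently confines the diffusion of error to that component. Below is a minimal Floyd-Steinberg-style sketch with a simple symmetric error clip; the clip bound and interleaving scheme are illustrative, and the printer-dot model of the actual model-based method is omitted:

      import numpy as np

      def fs_halftone(img, clip=0.5):
          """Floyd-Steinberg error diffusion with clipped error (img in [0, 1])."""
          f = img.astype(float).copy()
          out = np.zeros_like(f)
          h, w = f.shape
          for y in range(h):
              for x in range(w):
                  out[y, x] = 1.0 if f[y, x] >= 0.5 else 0.0
                  e = np.clip(f[y, x] - out[y, x], -clip, clip)   # error clipping
                  if x + 1 < w:
                      f[y, x + 1] += e * 7 / 16
                  if y + 1 < h:
                      if x > 0:
                          f[y + 1, x - 1] += e * 3 / 16
                      f[y + 1, x] += e * 5 / 16
                      if x + 1 < w:
                          f[y + 1, x + 1] += e * 1 / 16
          return out

      def lenticular_halftone(components):
          """Interleave K component images column-wise; error diffuses only within
          each component because each column slice is halftoned independently."""
          K = len(components)
          h, w = components[0].shape
          out = np.zeros((h, w * K))
          for k, comp in enumerate(components):
              out[:, k::K] = fs_halftone(comp)
          return out

      imgs = [np.full((32, 32), g) for g in (0.25, 0.5, 0.75)]   # flat test patches
      print(lenticular_halftone(imgs).shape)                     # (32, 96)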

  19. Dynamic scaling for the growth of non-equilibrium fluctuations during thermophoretic diffusion in microgravity

    DOE PAGES

    Cerbino, Roberto; Sun, Yifei; Donev, Aleksandar; ...

    2015-09-30

    Diffusion processes are widespread in biological and chemical systems, where they play a fundamental role in the exchange of substances at the cellular level and in determining the rate of chemical reactions. Recently, the classical picture that portrays diffusion as random uncorrelated motion of molecules has been revised, when it was shown that giant non-equilibrium fluctuations develop during diffusion processes. Under microgravity conditions and at steady-state, non-equilibrium fluctuations exhibit scale invariance and their size is only limited by the boundaries of the system. In this work, we investigate the onset of non-equilibrium concentration fluctuations induced by thermophoretic diffusion in microgravity, a regime not accessible to analytical calculations but of great relevance for the understanding of several natural and technological processes. A combination of state of the art simulations and experiments allows us to attain a fully quantitative description of the development of fluctuations during transient diffusion in microgravity. Both experiments and simulations show that during the onset the fluctuations exhibit scale invariance at large wave vectors. In a broader range of wave vectors simulations predict a spinodal-like growth of fluctuations, where the amplitude and length-scale of the dominant mode are determined by the thickness of the diffuse layer.

  2. Extending color primary set in spectral vector error diffusion by multilevel halftoning

    NASA Astrophysics Data System (ADS)

    Norberg, Ole; Nyström, Daniel

    2013-02-01

    Ever since its origin in the late 19th century, color reproduction technology has relied on a trichromatic approach. This has been a very successful method and fundamental for the development of color reproduction devices. Trichromatic color reproduction is sufficient to approximate the range of colors perceived by the human visual system. However, trichromatic systems can only match colors when the viewing illumination for the reproduction matches that of the original. Furthermore, the advancement of digital printing technology has introduced printing systems with additional color channels. These additional color channels are used to extend the tonal range capabilities in light and dark regions and to increase the color gamut. In an alternative approach, the additional color channels can also be used to reproduce the spectral information of the original color. A spectral match will always correspond to the original, independent of the lighting situation. On the other hand, spectral color reproduction also introduces more complex color processing through spectral color transfer functions and spectral gamut mapping algorithms. In that perspective, spectral vector error diffusion (sVED) looks like a tempting approach, with a simple workflow in which the inverse color transfer function and halftoning are performed simultaneously in one single operation. Essential for the sVED method are the available color primaries, created by mixing process colors. An increased number of color primaries, as well as optimal spectral characteristics, is expected to significantly improve the color accuracy of the spectral reproduction. In this study, sVED in combination with multilevel halftoning has been applied to a ten-channel inkjet system. The print resolution has been reduced, and the underlying physical high resolution of the printer has been used to mix additional primaries. With ten ink channels and halftone cells built up from 2x2 microdots, where each microdot can be a combination of all ten inks, the number of possible ink combinations becomes enormous. Therefore, this initial study has focused on adding lighter colors to the intrinsic primary set. Results from this study show that this approach increases the color reproduction accuracy significantly. The RMS spectral difference to the target color for multilevel halftoning is less than 1/6 of the difference achieved by binary halftoning.
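
    The core of spectral vector error diffusion fits in a few lines (the palette spectra below are random stand-ins): at each pixel, select the primary whose reflectance spectrum is closest to the target spectrum plus the accumulated error, then diffuse the spectral error vector to unprocessed neighbors:

      import numpy as np

      def sved(target, primaries):
          """Spectral vector error diffusion.

          target    : (H, W, N) desired reflectance spectra (N wavelength bands)
          primaries : (P, N) reflectance spectra of the printable primaries
          Returns an (H, W) map of chosen primary indices."""
          f = target.astype(float).copy()
          H, W, _ = f.shape
          choice = np.zeros((H, W), dtype=int)
          for y in range(H):
              for x in range(W):
                  d = np.sum((primaries - f[y, x]) ** 2, axis=1)   # spectral distances
                  k = int(np.argmin(d))                            # nearest primary
                  choice[y, x] = k
                  e = f[y, x] - primaries[k]                       # spectral error vector
                  if x + 1 < W:                                    # Floyd-Steinberg weights
                      f[y, x + 1] += e * 7 / 16
                  if y + 1 < H:
                      if x > 0:
                          f[y + 1, x - 1] += e * 3 / 16
                      f[y + 1, x] += e * 5 / 16
                      if x + 1 < W:
                          f[y + 1, x + 1] += e * 1 / 16
          return choice

      rng = np.random.default_rng(0)
      primaries = rng.uniform(0, 1, size=(10, 8))          # 10 hypothetical primaries, 8 bands
      target_spec = primaries.mean(axis=0)                 # lies inside the primaries' hull
      flat = np.tile(target_spec, (32, 32, 1))             # flat target patch
      idx = sved(flat, primaries)
      print(np.abs(primaries[idx].mean(axis=(0, 1)) - target_spec).max())  # small residual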

  3. Test functions for three-dimensional control-volume mixed finite-element methods on irregular grids

    USGS Publications Warehouse

    Naff, R.L.; Russell, T.F.; Wilson, J.D.; ,; ,; ,; ,; ,

    2000-01-01

    Numerical methods based on unstructured grids, with irregular cells, usually require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element methods, vector shape functions are used to approximate the distribution of velocities across cells and vector test functions are used to minimize the error associated with the numerical approximation scheme. For a logically cubic mesh, the lowest-order shape functions are chosen in a natural way to conserve intercell fluxes that vary linearly in logical space. Vector test functions, while somewhat restricted by the mapping into the logical reference cube, admit a wider class of possibilities. Ideally, an error minimization procedure to select the test function from an acceptable class of candidates would be the best procedure. Lacking such a procedure, we first investigate the effect of possible test functions on the pressure distribution over the control volume; specifically, we look for test functions that allow for the elimination of intermediate pressures on cell faces. From these results, we select three forms for the test function for use in a control-volume mixed method code and subject them to an error analysis for different forms of grid irregularity; errors are reported in terms of the discrete L2 norm of the velocity error. Of these three forms, one appears to produce optimal results for most forms of grid irregularity.

  4. Selection vector filter framework

    NASA Astrophysics Data System (ADS)

    Lukac, Rastislav; Plataniotis, Konstantinos N.; Smolka, Bogdan; Venetsanopoulos, Anastasios N.

    2003-10-01

    We provide a unified framework of nonlinear vector techniques outputting the lowest-ranked vector. The proposed framework constitutes a generalized filter class for multichannel signal processing. A new class of nonlinear selection filters is based on robust order-statistic theory and the minimization of the weighted distance function to the other input samples. The proposed method can be designed to perform a variety of filtering operations, including previously developed techniques such as the vector median, basic vector directional filter, directional distance filter, weighted vector median filters, and weighted directional filters. A wide range of filtering operations is guaranteed by the filter structure with two independent weight vectors for the angular and distance domains of the vector space. In order to adapt the filter parameters to varying signal and noise statistics, we also provide generalized optimization algorithms that take advantage of weighted median filters and the relationship between the standard median filter and the vector median filter. Thus, we can deal with both statistical and deterministic aspects of the filter design process. It is shown that the proposed method has the required properties, such as the capability of modelling the underlying system in the application at hand, robustness with respect to errors in the model of the underlying system, the availability of a training procedure and, finally, simplicity of filter representation, analysis, design and implementation. Simulation studies also indicate that the new filters are computationally attractive and have excellent performance in environments corrupted by bit errors and impulsive noise.
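
    The simplest member of this selection class is the vector median filter: within a window, output the sample that minimizes the summed distance to all the other samples. A compact sketch (window shape and test data are illustrative):

      import numpy as np

      def vector_median(window):
          """Return the sample minimizing the sum of Euclidean distances
          to all other samples; window is an (n, c) array of vectors."""
          dists = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=-1)
          return window[np.argmin(dists.sum(axis=1))]

      def vmf_filter(img, radius=1):
          """Apply the vector median filter to a color image (H, W, C); borders kept."""
          h, w, c = img.shape
          out = img.copy()
          for y in range(radius, h - radius):
              for x in range(radius, w - radius):
                  win = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
                  out[y, x] = vector_median(win.reshape(-1, c))
          return out

      rng = np.random.default_rng(0)
      img = np.full((16, 16, 3), 0.5)
      img[rng.uniform(size=(16, 16)) < 0.1] = rng.uniform(size=3)   # impulsive noise
      print(np.abs(vmf_filter(img) - 0.5).mean() < np.abs(img - 0.5).mean())  # True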

  5. Evaluation and statistical inference for human connectomes.

    PubMed

    Pestilli, Franco; Yeatman, Jason D; Rokem, Ariel; Kay, Kendrick N; Wandell, Brian A

    2014-10-01

    Diffusion-weighted imaging coupled with tractography is currently the only method for in vivo mapping of human white-matter fascicles. Tractography takes diffusion measurements as input and produces the connectome, a large collection of white-matter fascicles, as output. We introduce a method to evaluate the evidence supporting connectomes. Linear fascicle evaluation (LiFE) takes any connectome as input and predicts diffusion measurements as output, using the difference between the measured and predicted diffusion signals to quantify the prediction error. We use the prediction error to evaluate the evidence that supports the properties of the connectome, to compare tractography algorithms and to test hypotheses about tracts and connections.

  6. Optimal preview control for a linear continuous-time stochastic control system in finite-time horizon

    NASA Astrophysics Data System (ADS)

    Wu, Jiang; Liao, Fucheng; Tomizuka, Masayoshi

    2017-01-01

    This paper discusses the design of the optimal preview controller for a linear continuous-time stochastic control system in a finite-time horizon, using the method of the augmented error system. First, an assistant system is introduced for state shifting. Then, in order to overcome the difficulty that the state equation of the stochastic control system cannot be differentiated because of Brownian motion, an integrator is introduced. Thus, the augmented error system, which contains the integrator vector, control input, reference signal, error vector and state of the system, is constructed. This transforms the tracking problem of optimal preview control of the linear stochastic control system into an optimal output tracking problem for the augmented error system. Using the method of dynamic programming from the theory of stochastic control, the optimal controller of the augmented error system, which incorporates previewable signals and is equivalent to the controller of the original system, is obtained. Finally, numerical simulations show the effectiveness of the controller.

  7. Real-time optical laboratory solution of parabolic differential equations

    NASA Technical Reports Server (NTRS)

    Casasent, David; Jackson, James

    1988-01-01

    An optical laboratory matrix-vector processor is used to solve parabolic differential equations (the transient diffusion equation with two space variables and time) by an explicit algorithm. This includes optical matrix-vector nonbase-2 encoded laboratory data, the combination of nonbase-2 and frequency-multiplexed data on such processors, a high-accuracy optical laboratory solution of a partial differential equation, new data partitioning techniques, and a discussion of a multiprocessor optical matrix-vector architecture.

  8. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    NASA Technical Reports Server (NTRS)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location in each channel. The error incurred in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm were further compressed by a lossless technique called difference-mapped shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.041 bpp) with an RMS error of 15.8 pixels, and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced, with the help of dedicated image processing boards, to an 80386 PC-compatible computer. Modules were developed for the tasks of image compression and image analysis. Supporting software to perform image processing for visual display and interpretation of the compressed/classified images was also developed.
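
    Here a "vector" is the stack of co-located pixel values across the channels; a codebook can be sketched with ordinary k-means clustering (codebook size and data below are illustrative):

      import numpy as np
      from scipy.cluster.vq import kmeans, vq

      rng = np.random.default_rng(0)
      H, W, C = 64, 64, 7                        # 7-channel multispectral "image"
      cube = rng.normal(size=(H, W, C))          # stand-in for scanner data
      vectors = cube.reshape(-1, C)              # one vector per pixel location

      codebook, _ = kmeans(vectors, 64)          # train a 64-entry codebook
      codes, _ = vq(vectors, codebook)           # encode: index of nearest codeword
      recon = codebook[codes].reshape(H, W, C)   # decode: table lookup

      rms = np.sqrt(np.mean((cube - recon) ** 2))
      print(rms)   # 64 codes: 6 bits per pixel replace C channel values per pixel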

  9. A hybrid frame concealment algorithm for H.264/AVC.

    PubMed

    Yan, Bo; Gharavi, Hamid

    2010-01-01

    In packet-based video transmission, packet loss due to channel errors may result in the loss of a whole video frame. Recently, many error concealment algorithms have been proposed in order to combat channel errors; however, most of the existing algorithms can only deal with the loss of macroblocks and are not able to conceal whole missing frames. In order to resolve this problem, in this paper we propose a new hybrid motion vector extrapolation (HMVE) algorithm to recover whole missing frames; it is able to provide more accurate estimation of the motion vectors of the missing frame than other conventional methods. Simulation results show that it is highly effective and significantly outperforms other existing frame recovery methods.

  10. Error vector magnitude based parameter estimation for digital filter back-propagation mitigating SOA distortions in 16-QAM.

    PubMed

    Amiralizadeh, Siamak; Nguyen, An T; Rusch, Leslie A

    2013-08-26

    We investigate the performance of digital filter back-propagation (DFBP) using coarse parameter estimation for mitigating SOA nonlinearity in coherent communication systems. We introduce a simple, low-overhead method for DFBP parameter estimation based on error vector magnitude (EVM) as a figure of merit. The bit error rate (BER) penalty achieved with this method is negligible compared to DFBP with fine parameter estimation. We examine different bias currents for two commercial SOAs used as booster amplifiers in our experiments to find optimum operating points and experimentally validate our method. The coarse-parameter DFBP efficiently compensates SOA-induced nonlinearity for both SOA types in 80 km propagation of a 16-QAM signal at 22 Gbaud.
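
    EVM itself is a one-line figure of merit: the RMS of the error vector between received and reference constellation points, normalized by the reference RMS power. A generic sketch (not the authors' DFBP routine):

      import numpy as np

      def evm_percent(received, reference):
          """Error vector magnitude in percent (RMS definition)."""
          err = received - reference
          return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(reference) ** 2))

      # 16-QAM example with additive noise standing in for SOA distortion
      rng = np.random.default_rng(0)
      levels = np.array([-3.0, -1.0, 1.0, 3.0])
      symbols = rng.choice(levels, 1000) + 1j * rng.choice(levels, 1000)
      noisy = symbols + 0.1 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))
      print(evm_percent(noisy, symbols))   # lower EVM -> better parameter choice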

  11. Characterization of a Dynamic String Method for the Construction of Transition Pathways in Molecular Reactions

    PubMed Central

    Johnson, Margaret E.; Hummer, Gerhard

    2012-01-01

    We explore the theoretical foundation of different string methods used to find dominant reaction pathways in high-dimensional configuration spaces. Pathways are assessed by the amount of reactive flux they carry and by their orientation relative to the committor function. By examining the effects of transforming between different collective coordinates that span the same underlying space, we unmask artificial coordinate dependences in strings optimized to follow the free energy gradient. In contrast, strings optimized to follow the drift vector produce reaction pathways that are significantly less sensitive to reparameterizations of the collective coordinates. The differences in these paths arise because the drift vector depends on both the free energy gradient and the diffusion tensor of the coarse collective variables. Anisotropy and position dependence of diffusion tensors arise commonly in spaces of coarse variables, whose generally slow dynamics are obtained by nonlinear projections of the strongly coupled atomic motions. We show here that transition paths constructed to account for dynamics by following the drift vector will (to a close approximation) carry the maximum reactive flux both in systems with isotropic position dependent diffusion, and in systems with constant but anisotropic diffusion. We derive a simple method for calculating the committor function along paths that follow the reactive flux. Lastly, we provide guidance for the practical implementation of the dynamic string method. PMID:22616575

  12. Video data compression using artificial neural network differential vector quantization

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.

    1991-01-01

    An artificial neural network vector quantizer is developed for use in data compression applications such as Digital Video. Differential Vector Quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in superior performance against channel bit errors compared with methods that use variable-length codes.

  13. Proprioception Is Robust under External Forces

    PubMed Central

    Kuling, Irene A.; Brenner, Eli; Smeets, Jeroen B. J.

    2013-01-01

    Information from cutaneous, muscle and joint receptors is combined with efferent information to create a reliable percept of the configuration of our body (proprioception). We exposed the hand to several horizontal force fields to examine whether external forces influence this percept. In an end-point task subjects reached visually presented positions with their unseen hand. In a vector reproduction task, subjects had to judge a distance and direction visually and reproduce the corresponding vector by moving the unseen hand. We found systematic individual errors in the reproduction of the end-points and vectors, but these errors did not vary systematically with the force fields. This suggests that human proprioception accounts for external forces applied to the hand when sensing the position of the hand in the horizontal plane. PMID:24019959

  14. Higher order reconstruction for MRI in the presence of spatiotemporal field perturbations.

    PubMed

    Wilm, Bertram J; Barmet, Christoph; Pavan, Matteo; Pruessmann, Klaas P

    2011-06-01

    Despite continuous hardware advances, MRI is frequently subject to field perturbations that are of higher than first order in space and thus violate the traditional k-space picture of spatial encoding. Sources of higher order perturbations include eddy currents, concomitant fields, thermal drifts, and imperfections of higher order shim systems. In conventional MRI with Fourier reconstruction, they give rise to geometric distortions, blurring, artifacts, and error in quantitative data. This work describes an alternative approach in which the entire field evolution, including higher order effects, is accounted for by viewing image reconstruction as a generic inverse problem. The relevant field evolutions are measured with a third-order NMR field camera. Algebraic reconstruction is then formulated such as to jointly minimize artifacts and noise in the resulting image. It is solved by an iterative conjugate-gradient algorithm that uses explicit matrix-vector multiplication to accommodate arbitrary net encoding. The feasibility and benefits of this approach are demonstrated by examples of diffusion imaging. In a phantom study, it is shown that higher order reconstruction largely overcomes variable image distortions that diffusion gradients induce in EPI data. In vivo experiments then demonstrate that the resulting geometric consistency permits straightforward tensor analysis without coregistration. Copyright © 2011 Wiley-Liss, Inc.
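
    The inverse-problem formulation can be prototyped with an iterative conjugate-gradient solver whose operator is applied by explicit matrix-vector products; the sketch below uses a random dense encoding matrix as a stand-in for an operator built from measured field evolutions:

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, cg

      rng = np.random.default_rng(0)
      n_pix, n_samp = 256, 512                           # image unknowns, acquired samples
      E = rng.normal(size=(n_samp, n_pix)) + 1j * rng.normal(size=(n_samp, n_pix))
      x_true = rng.normal(size=n_pix) + 0j
      y = E @ x_true + 0.01 * rng.normal(size=n_samp)    # noisy "measurements"

      # Normal equations E^H E x = E^H y; CG needs only matrix-vector products,
      # so E could equally be an operator encoding arbitrary net spatial encoding.
      A = LinearOperator((n_pix, n_pix), matvec=lambda v: E.conj().T @ (E @ v),
                         dtype=complex)
      x_rec, info = cg(A, E.conj().T @ y)
      print(info, np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))  # 0, small error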

  15. Comparison of Agar Dilution, Disk Diffusion, MicroScan, and Vitek Antimicrobial Susceptibility Testing Methods to Broth Microdilution for Detection of Fluoroquinolone-Resistant Isolates of the Family Enterobacteriaceae

    PubMed Central

    Steward, Christine D.; Stocker, Sheila A.; Swenson, Jana M.; O’Hara, Caroline M.; Edwards, Jonathan R.; Gaynes, Robert P.; McGowan, John E.; Tenover, Fred C.

    1999-01-01

    Fluoroquinolone resistance appears to be increasing in many species of bacteria, particularly in those causing nosocomial infections. However, the accuracy of some antimicrobial susceptibility testing methods for detecting fluoroquinolone resistance remains uncertain. Therefore, we compared the accuracy of the results of agar dilution, disk diffusion, MicroScan Walk Away Neg Combo 15 conventional panels, and Vitek GNS-F7 cards to the accuracy of the results of the broth microdilution reference method for detection of ciprofloxacin and ofloxacin resistance in 195 clinical isolates of the family Enterobacteriaceae collected from six U.S. hospitals for a national surveillance project (Project ICARE [Intensive Care Antimicrobial Resistance Epidemiology]). For ciprofloxacin, very major error rates were 0% (disk diffusion and MicroScan), 0.9% (agar dilution), and 2.7% (Vitek), while major error rates ranged from 0% (agar dilution) to 3.7% (MicroScan and Vitek). Minor error rates ranged from 12.3% (agar dilution) to 20.5% (MicroScan). For ofloxacin, no very major errors were observed, and major errors were noted only with MicroScan (3.7% major error rate). Minor error rates ranged from 8.2% (agar dilution) to 18.5% (Vitek). Minor errors for all methods were substantially reduced when results with MICs within ±1 dilution of the broth microdilution reference MIC were excluded from analysis. However, the high number of minor errors by all test systems remains a concern. PMID:9986809

  16. Quantitative metrics for evaluating parallel acquisition techniques in diffusion tensor imaging at 3 Tesla.

    PubMed

    Ardekani, Siamak; Selva, Luis; Sayre, James; Sinha, Usha

    2006-11-01

    Single-shot echo-planar based diffusion tensor imaging is prone to geometric and intensity distortions. Parallel imaging is a means of reducing these distortions while preserving spatial resolution. A quantitative comparison at 3 T of parallel imaging for diffusion tensor images (DTI) using k-space (generalized auto-calibrating partially parallel acquisitions; GRAPPA) and image domain (sensitivity encoding; SENSE) reconstructions at different acceleration factors, R, is reported here. Images were evaluated using 8 human subjects with repeated scans for 2 subjects to estimate reproducibility. Mutual information (MI) was used to assess the global changes in geometric distortions. The effects of parallel imaging techniques on random noise and reconstruction artifacts were evaluated by placing 26 regions of interest and computing the standard deviation of apparent diffusion coefficient and fractional anisotropy along with the error of fitting the data to the diffusion model (residual error). The larger positive values in mutual information index with increasing R values confirmed the anticipated decrease in distortions. Further, the MI index of GRAPPA sequences for a given R factor was larger than the corresponding mSENSE images. The residual error was lowest in the images acquired without parallel imaging and among the parallel reconstruction methods, the R = 2 acquisitions had the least error. The standard deviation, accuracy, and reproducibility of the apparent diffusion coefficient and fractional anisotropy in homogenous tissue regions showed that GRAPPA acquired with R = 2 had the least amount of systematic and random noise and of these, significant differences with mSENSE, R = 2 were found only for the fractional anisotropy index. Evaluation of the current implementation of parallel reconstruction algorithms identified GRAPPA acquired with R = 2 as optimal for diffusion tensor imaging.

  17. Determination of the optical properties of semi-infinite turbid media from frequency-domain reflectance close to the source.

    PubMed

    Kienle, A; Patterson, M S

    1997-09-01

    We investigate theoretically the errors in determining the reduced scattering and absorption coefficients of semi-infinite turbid media from frequency-domain reflectance measurements made at small distances between the source and the detector(s). The errors are due to the uncertainties in the measurement of the phase, the modulation and the steady-state reflectance as well as to the diffusion approximation which is used as a theoretical model to describe light propagation in tissue. Configurations using one and two detectors are examined for the measurement of the phase and the modulation and for the measurement of the phase and the steady-state reflectance. Three solutions of the diffusion equation are investigated. We show that measurements of the phase and the steady-state reflectance at two different distances are best suited for the determination of the optical properties close to the source. For this arrangement the errors in the absorption coefficient due to typical uncertainties in the measurement are greater than those resulting from the application of the diffusion approximation at a modulation frequency of 200 MHz. A Monte Carlo approach is also examined; this avoids the errors due to the diffusion approximation.

  18. Automatic deformable diffusion tensor registration for fiber population analysis.

    PubMed

    Irfanoglu, M O; Machiraju, R; Sammet, S; Pierpaoli, C; Knopp, M V

    2008-01-01

    In this work, we propose a novel method for deformable tensor-to-tensor registration of diffusion tensor images. Our registration method models the distances between tensors with Geodesic-Loxodromes and employs a version of the multidimensional scaling (MDS) algorithm to unfold the manifold described by this metric. Encoding the same shape properties as the tensors, the vector images obtained through MDS are fed into a multi-step vector-image registration scheme, and the resulting deformation fields are used to reorient the tensor fields. Results on brain DTI indicate that the proposed method is well suited for deformable fiber-to-fiber correspondence and DTI atlas construction.

  19. GNSS Single Frequency, Single Epoch Reliable Attitude Determination Method with Baseline Vector Constraint.

    PubMed

    Gong, Ang; Zhao, Xiubin; Pang, Chunlei; Duan, Rong; Wang, Yong

    2015-12-02

    For Global Navigation Satellite System (GNSS) single-frequency, single-epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of baseline length, heading, and pitch obtained from other navigation equipment or sensors is used to rigorously reconstruct the objective function. Then, the searching strategy is improved: a gradually enlarged ellipsoidal search space is substituted for the non-ellipsoidal search space, ensuring that the correct ambiguity candidates lie within it and allowing the search to be carried out directly by the least-squares ambiguity decorrelation adjustment (LAMBDA) method. Some of the vector candidates are further eliminated by a derived approximate inequality, which accelerates the search. Experimental results show that, compared to the traditional method with only a baseline length constraint, this new method can utilize a priori three-dimensional baseline knowledge to fix ambiguities reliably and achieve a high success rate. Experimental tests also verify that it is not very sensitive to baseline vector error and performs robustly when the angular error is not large.

  20. Simultaneous generation of 40, 80 and 120 GHz optical millimeter-wave from one Mach-Zehnder modulator and demonstration of millimeter-wave transmission and down-conversion

    NASA Astrophysics Data System (ADS)

    Zhou, Wen; Qin, Chaoyi

    2017-09-01

    We demonstrate multi-frequency QPSK millimeter-wave (mm-wave) vector signal generation enabled by MZM-based optical carrier suppression (OCS) modulation and in-phase/quadrature (I/Q) modulation. We numerically simulate the generation of 40-, 80- and 120-GHz vector signals, where the three signals carry the same QPSK modulation information. We also experimentally realize 11-Gbaud QPSK vector signal transmission over 20 km of fiber and the generation of vector signals at 40 GHz, 80 GHz and 120 GHz. The experimental results show that the bit error rate (BER) for all three signals can reach the forward error correction (FEC) threshold of 3.8×10⁻³. The advantage of the proposed system is that it provides high-speed, high-bandwidth and high-capacity seamless access for TDM and wireless networks. These features indicate important application prospects in wireless access networks for WiMax, Wi-Fi and 5G/LTE.

  1. Kalman Filter for Spinning Spacecraft Attitude Estimation

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Sedlak, Joseph E.

    2008-01-01

    This paper presents a Kalman filter using a seven-component attitude state vector comprising the angular momentum components in an inertial reference frame, the angular momentum components in the body frame, and a rotation angle. The relatively slow variation of these parameters makes this parameterization advantageous for spinning spacecraft attitude estimation. The filter accounts for the constraint that the magnitude of the angular momentum vector is the same in the inertial and body frames by employing a reduced six-component error state. Four variants of the filter, defined by different choices for the reduced error state, are tested against a quaternion-based filter using simulated data for the THEMIS mission. Three of these variants choose three of the components of the error state to be the infinitesimal attitude error angles, facilitating the computation of measurement sensitivity matrices and causing the usual 3x3 attitude covariance matrix to be a submatrix of the 6x6 covariance of the error state. These variants differ in their choice for the other three components of the error state. The variant employing the infinitesimal attitude error angles and the angular momentum components in an inertial reference frame as the error state shows the best combination of robustness and efficiency in the simulations. Attitude estimation results using THEMIS flight data are also presented.

  2. Predictive classification of pediatric bipolar disorder using atlas-based diffusion weighted imaging and support vector machines.

    PubMed

    Mwangi, Benson; Wu, Mon-Ju; Bauer, Isabelle E; Modi, Haina; Zeni, Cristian P; Zunta-Soares, Giovana B; Hasan, Khader M; Soares, Jair C

    2015-11-30

    Previous studies have reported abnormalities of white-matter diffusivity in pediatric bipolar disorder. However, it has not been established whether these abnormalities are able to distinguish individual subjects with pediatric bipolar disorder from healthy controls with a high specificity and sensitivity. Diffusion-weighted imaging scans were acquired from 16 youths diagnosed with DSM-IV bipolar disorder and 16 demographically matched healthy controls. Regional white matter tissue microstructural measurements such as fractional anisotropy, axial diffusivity and radial diffusivity were computed using an atlas-based approach. These measurements were used to 'train' a support vector machine (SVM) algorithm to predict new or 'unseen' subjects' diagnostic labels. The SVM algorithm predicted individual subjects with specificity=87.5%, sensitivity=68.75%, accuracy=78.12%, positive predictive value=84.62%, negative predictive value=73.68%, area under receiver operating characteristic curve (AUROC)=0.7812 and chi-square p-value=0.0012. A pattern of reduced regional white matter fractional anisotropy was observed in pediatric bipolar disorder patients. These results suggest that atlas-based diffusion weighted imaging measurements can distinguish individual pediatric bipolar disorder patients from healthy controls. Notably, from a clinical perspective these findings will contribute to the pathophysiological understanding of pediatric bipolar disorder. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
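
    For readers who want the shape of such a pipeline, the following is a hedged sketch using scikit-learn with synthetic stand-in features; the study's actual atlas regions, preprocessing, and data are not reproduced here.

    ```python
    # Illustrative SVM classification of atlas-based DWI features
    # (fractional anisotropy, axial/radial diffusivity per region).
    import numpy as np
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(32, 48))        # 32 subjects x 48 regional DTI metrics
    y = np.repeat([0, 1], 16)            # 16 controls, 16 patients

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())
    sens = (pred[y == 1] == 1).mean()
    spec = (pred[y == 0] == 0).mean()
    print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
    ```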

  3. A vorticity transport model to restore spatial gaps in velocity data

    NASA Astrophysics Data System (ADS)

    Ameli, Siavash; Shadden, Shawn

    2017-11-01

    Often measurements of velocity data do not have full spatial coverage in the probed domain or near boundaries. These gaps can be due to missing measurements or masked regions of corrupted data. They confound interpretation and are problematic when the data are used to compute Lagrangian or trajectory-based analyses. Various techniques have been proposed to overcome coverage limitations in velocity data, such as unweighted least-squares fitting, empirical orthogonal function analysis, variational interpolation, and boundary modal analysis. In this talk, we present a vorticity transport PDE to reconstruct regions of missing velocity vectors. The transport model involves both nonlinear anisotropic diffusion and advection. This approach is shown to preserve the main features of the flow even in cases of large gaps, and the reconstructed regions are continuous up to second order. We illustrate results for high-frequency radar (HFR) measurements of ocean surface currents, as this is a common application with limited coverage. We demonstrate that the error of the method is of the same order as the error of the original velocity data. In addition, we have developed a web-based gateway for data restoration, and we will demonstrate a practical application using available data. This work is supported by NSF Grant No. 1520825.
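
    A minimal sketch of the gap-filling idea, reduced to isotropic diffusion only (the model described above also includes advection and anisotropic diffusion): iterate a Laplacian update on the masked cells while holding known data fixed.

    ```python
    # PDE-style inpainting of a 2-D velocity component by diffusion.
    import numpy as np

    def fill_gaps(u, mask, n_iter=2000, dt=0.2):
        """u: 2-D field with NaNs in gaps; mask: True where data are missing.
        Known values act as Dirichlet boundary conditions for the gaps."""
        v = np.where(mask, np.nanmean(u), u)   # crude initial guess in gaps
        for _ in range(n_iter):
            lap = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                   np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
            v = np.where(mask, v + dt * lap, u)  # update only the gap cells
        return v
    ```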

  4. An Adaptive Supervisory Sliding Fuzzy Cerebellar Model Articulation Controller for Sensorless Vector-Controlled Induction Motor Drive Systems

    PubMed Central

    Wang, Shun-Yuan; Tseng, Chwan-Lu; Lin, Shou-Chuang; Chiu, Chun-Jung; Chou, Jen-Hsiang

    2015-01-01

    This paper presents the implementation of an adaptive supervisory sliding fuzzy cerebellar model articulation controller (FCMAC) in the speed sensorless vector control of an induction motor (IM) drive system. The proposed adaptive supervisory sliding FCMAC comprised a supervisory controller, integral sliding surface, and an adaptive FCMAC. The integral sliding surface was employed to eliminate steady-state errors and enhance the responsiveness of the system. The adaptive FCMAC incorporated an FCMAC with a compensating controller to perform a desired control action. The proposed controller was derived using the Lyapunov approach, which guarantees learning-error convergence. The implementation of three intelligent control schemes—the adaptive supervisory sliding FCMAC, adaptive sliding FCMAC, and adaptive sliding CMAC—were experimentally investigated under various conditions in a realistic sensorless vector-controlled IM drive system. The root mean square error (RMSE) was used as a performance index to evaluate the experimental results of each control scheme. The analysis results indicated that the proposed adaptive supervisory sliding FCMAC substantially improved the system performance compared with the other control schemes. PMID:25815450

  5. An adaptive supervisory sliding fuzzy cerebellar model articulation controller for sensorless vector-controlled induction motor drive systems.

    PubMed

    Wang, Shun-Yuan; Tseng, Chwan-Lu; Lin, Shou-Chuang; Chiu, Chun-Jung; Chou, Jen-Hsiang

    2015-03-25

    This paper presents the implementation of an adaptive supervisory sliding fuzzy cerebellar model articulation controller (FCMAC) in the speed sensorless vector control of an induction motor (IM) drive system. The proposed adaptive supervisory sliding FCMAC comprised a supervisory controller, integral sliding surface, and an adaptive FCMAC. The integral sliding surface was employed to eliminate steady-state errors and enhance the responsiveness of the system. The adaptive FCMAC incorporated an FCMAC with a compensating controller to perform a desired control action. The proposed controller was derived using the Lyapunov approach, which guarantees learning-error convergence. The implementation of three intelligent control schemes--the adaptive supervisory sliding FCMAC, adaptive sliding FCMAC, and adaptive sliding CMAC--were experimentally investigated under various conditions in a realistic sensorless vector-controlled IM drive system. The root mean square error (RMSE) was used as a performance index to evaluate the experimental results of each control scheme. The analysis results indicated that the proposed adaptive supervisory sliding FCMAC substantially improved the system performance compared with the other control schemes.

  6. Analysis and correction of gradient nonlinearity bias in ADC measurements

    PubMed Central

    Malyarenko, Dariya I.; Ross, Brian D.; Chenevert, Thomas L.

    2013-01-01

    Purpose Gradient nonlinearity of MRI systems leads to spatially dependent b-values and consequently high non-uniformity errors (10–20%) in ADC measurements over clinically relevant fields of view. This work seeks a practical correction procedure that effectively reduces the observed ADC bias for media of arbitrary anisotropy in the fewest measurements. Methods An all-inclusive bias analysis considers spatial and time-domain cross-terms for diffusion and imaging gradients. The proposed correction is based on rotation of the gradient nonlinearity tensor into the diffusion gradient frame, where the spatial bias of the b-matrix can be approximated by its Euclidean norm. Correction efficiency of the proposed procedure is numerically evaluated for a range of model diffusion tensor anisotropies and orientations. Results Spatial dependence of the nonlinearity correction terms accounts for the bulk (75–95%) of the ADC bias for FA = 0.3–0.9. Residual ADC non-uniformity errors are amplified for anisotropic diffusion. The approximation obviates the need for full diffusion tensor measurement and diagonalization to derive a corrected ADC. Practical scenarios are outlined for implementation of the correction on clinical MRI systems. Conclusions The proposed simplified correction algorithm appears sufficient to control ADC non-uniformity errors in clinical studies using three orthogonal diffusion measurements. The most efficient reduction of ADC bias for anisotropic media is achieved with non-lab-based diffusion gradients. PMID:23794533
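
    A heavily simplified sketch of the norm-based idea, assuming the spatial bias reduces to a scalar rescaling of the nominal b-value by the squared Euclidean norm of the nonlinearity tensor applied to the gradient direction; the tensor L and direction g below are illustrative, not the paper's implementation.

    ```python
    # Per-voxel ADC rescaling under a gradient-nonlinearity tensor L(r).
    import numpy as np

    def corrected_adc(adc_meas, L, g):
        """adc_meas: ADC computed assuming the nominal b-value.
        L: 3x3 gradient nonlinearity tensor at this voxel (identity = no bias).
        g: unit diffusion-gradient direction."""
        scale = np.linalg.norm(L @ g) ** 2   # b_eff / b_nominal
        return adc_meas / scale              # undo spatially dependent bias

    L = np.eye(3) * 1.05                     # e.g. 5% gradient over-scaling
    print(corrected_adc(1.0e-3, L, np.array([1.0, 0.0, 0.0])))
    ```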

  7. [Orthogonal Vector Projection Algorithm for Spectral Unmixing].

    PubMed

    Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li

    2015-12-01

    Spectral unmixing is an important part of hyperspectral technology and is essential for material abundance analysis in hyperspectral imagery. Most linear unmixing algorithms require matrix multiplication together with matrix inversion or determinant computation, which are awkward to program and especially hard to realize in hardware; moreover, their computational cost grows significantly with the number of endmembers. Here, building on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed based on the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion: for each endmember spectrum it first computes the final orthogonal vector via the Gram-Schmidt process, and these orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance is obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected length to the orthogonal vector length. Compared with the Orthogonal Subspace Projection and Least Squares Error algorithms, this method needs no matrix inversion, which is computationally costly and hard to implement in hardware; it completes the orthogonalization with repeated vector operations, making it well suited to both parallel computation and hardware implementation. The soundness of the algorithm is supported by its relationship to the Orthogonal Subspace Projection and Least Squares Error algorithms, and its computational complexity, the lowest of the three, is compared with theirs. Finally, experimental results on synthetic and real images provide further evidence of the method's effectiveness.
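
    A minimal sketch of the projection step as described (variable names are ours; the QR factorization plays the role of the Gram-Schmidt process):

    ```python
    # Orthogonal-vector-projection style abundance estimation.
    import numpy as np

    def ovp_abundances(E, x):
        """E: (bands, p) endmember matrix; x: (bands,) pixel signature.
        Returns unconstrained abundance estimates, one per endmember."""
        bands, p = E.shape
        a = np.empty(p)
        for k in range(p):
            others = np.delete(E, k, axis=1)
            Q, _ = np.linalg.qr(others)        # Gram-Schmidt basis of the rest
            u = E[:, k] - Q @ (Q.T @ E[:, k])  # residual orthogonal to them
            a[k] = (x @ u) / (E[:, k] @ u)     # ratio of projections onto u
        return a

    E = np.array([[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]])  # 3 bands, 2 endmembers
    x = E @ np.array([0.7, 0.3])
    print(ovp_abundances(E, x))                          # ~ [0.7, 0.3]
    ```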

  8. b matrix errors in echo planar diffusion tensor imaging

    PubMed Central

    Boujraf, Saïd; Luypaert, Robert; Osteaux, Michel

    2001-01-01

    Diffusion-weighted magnetic resonance imaging (DW-MRI) is a recognized tool for early detection of infarction of the human brain. DW-MRI uses the signal loss associated with the random thermal motion of water molecules in the presence of magnetic field gradients to derive parameters that reflect the translational mobility of the water molecules in tissues. If diffusion-weighted images with different values of the b matrix are acquired during one individual investigation, it is possible to calculate apparent diffusion coefficient maps that are the elements of the diffusion tensor. The diffusion tensor elements represent the apparent diffusion coefficient of protons of water molecules in each pixel in the corresponding sample. The relation between signal intensity in the diffusion-weighted images, diffusion tensor, and b matrix is derived from the Bloch equations. Our goal is to establish the magnitude of the error made in the calculation of the elements of the diffusion tensor when the imaging gradients are ignored. PACS number(s): 87.57.-s, 87.61.-c PMID:11602015

  9. Ares I Static Tests Design

    NASA Technical Reports Server (NTRS)

    Carson, William; Lindemuth, Kathleen; Mich, John; White, K. Preston; Parker, Peter A.

    2009-01-01

    Probabilistic engineering design enhances safety and reduces costs by incorporating risk assessment directly into the design process. In this paper, we assess the format of the quantitative metrics for the vehicle which will replace the Space Shuttle, the Ares I rocket. Specifically, we address the metrics for in-flight measurement error in the vector position of the motor nozzle, dictated by limits on guidance, navigation, and control systems. Analyses include the propagation of error from measured to derived parameters, the time-series of dwell points for the duty cycle during static tests, and commanded versus achieved yaw angle during tests. Based on these analyses, we recommend a probabilistic template for specifying the maximum error in angular displacement and radial offset for the nozzle-position vector. Criteria for evaluating individual tests and risky decisions also are developed.

  10. Combined group ECC protection and subgroup parity protection

    DOEpatents

    Gara, Alan G.; Chen, Dong; Heidelberger, Philip; Ohmacht, Martin

    2013-06-18

    A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m bit wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.
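
    An illustrative sketch of the row construction described above: it enumerates odd-weight (at least 3) m-bit vectors and forms the m redundant bits, but omits the patent's additional requirement that two columns of P provide the subgroup parities. Parameters are illustrative.

    ```python
    # Build a P matrix from odd-weight rows and compute the redundant bits.
    import itertools
    import numpy as np

    def build_P(n, m):
        rows = [v for v in itertools.product([0, 1], repeat=m)
                if sum(v) >= 3 and sum(v) % 2 == 1]   # odd weight, >= 3 ones
        assert len(rows) >= n, "m too small for n data bits"
        return np.array(rows[:n])

    P = build_P(n=8, m=5)
    data = np.random.default_rng(1).integers(0, 2, size=8)
    ecc_bits = data @ P % 2                            # m redundant ECC bits
    print(P, ecc_bits, sep="\n")
    ```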

  11. Nonlinear calibration for petroleum water content measurement using PSO

    NASA Astrophysics Data System (ADS)

    Li, Mingbao; Zhang, Jiawei

    2008-10-01

    A new algorithm for strapdown inertial navigation system (SINS) state estimation based on neural networks is introduced. In the training strategy, the error vector and its delayed values are used as inputs; this error vector consists of the position and velocity differences between the system estimates and the GPS outputs. The system states are estimated through state prediction and state update. After off-line training the network can track the state transitions of the SINS, and on-line training further improves the estimation precision by reducing the network output errors. The convergence of the network is then discussed. Finally, several simulations with different noise levels are presented. The results show that the neural-network state estimator has lower noise sensitivity and better noise immunity than a Kalman filter.

  12. Vectorization of optically sectioned brain microvasculature: learning aids completion of vascular graphs by connecting gaps and deleting open-ended segments.

    PubMed

    Kaufhold, John P; Tsai, Philbert S; Blinder, Pablo; Kleinfeld, David

    2012-08-01

    A graph of tissue vasculature is an essential requirement to model the exchange of gasses and nutriments between the blood and cells in the brain. Such a graph is derived from a vectorized representation of anatomical data, provides a map of all vessels as vertices and segments, and may include the location of nonvascular components, such as neuronal and glial somata. Yet vectorized data sets typically contain erroneous gaps, spurious endpoints, and spuriously merged strands. Current methods to correct such defects only address the issue of connecting gaps and further require manual tuning of parameters in a high-dimensional algorithm. To address these shortcomings, we introduce a supervised machine learning method that (1) connects vessel gaps by "learned threshold relaxation"; (2) removes spurious segments by "learning to eliminate deletion candidate strands"; and (3) enforces consistency in the joint space of learned vascular graph corrections through "consistency learning." Human operators are only required to label individual objects they recognize in a training set and are not burdened with tuning parameters. The supervised learning procedure examines the geometry and topology of features in the neighborhood of each vessel segment under consideration. We demonstrate the effectiveness of these methods on four sets of microvascular data, each with >800³ voxels, obtained with all-optical histology of mouse tissue and vectorization by state-of-the-art techniques in image segmentation. Through statistically validated sampling and analysis in terms of precision-recall curves, we find that learning with bagged boosted decision trees reduces equal-error rates for threshold relaxation by 5-21% and for strand elimination by 18-57%. We benchmark generalization performance across datasets; while improvements vary between data sets, learning always leads to a useful reduction in error rates. Overall, learning is shown to more than halve the total error rate, and therefore, the human time spent manually correcting such vectorizations. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Vectorization of optically sectioned brain microvasculature: Learning aids completion of vascular graphs by connecting gaps and deleting open-ended segments

    PubMed Central

    Kaufhold, John P.; Tsai, Philbert S.; Blinder, Pablo; Kleinfeld, David

    2012-01-01

    A graph of tissue vasculature is an essential requirement to model the exchange of gasses and nutriments between the blood and cells in the brain. Such a graph is derived from a vectorized representation of anatomical data, provides a map of all vessels as vertices and segments, and may include the location of nonvascular components, such as neuronal and glial somata. Yet vectorized data sets typically contain erroneous gaps, spurious endpoints, and spuriously merged strands. Current methods to correct such defects only address the issue of connecting gaps and further require manual tuning of parameters in a high-dimensional algorithm. To address these shortcomings, we introduce a supervised machine learning method that (1) connects vessel gaps by “learned threshold relaxation”; (2) removes spurious segments by “learning to eliminate deletion candidate strands”; and (3) enforces consistency in the joint space of learned vascular graph corrections through “consistency learning.” Human operators are only required to label individual objects they recognize in a training set and are not burdened with tuning parameters. The supervised learning procedure examines the geometry and topology of features in the neighborhood of each vessel segment under consideration. We demonstrate the effectiveness of these methods on four sets of microvascular data, each with > 800³ voxels, obtained with all-optical histology of mouse tissue and vectorization by state-of-the-art techniques in image segmentation. Through statistically validated sampling and analysis in terms of precision-recall curves, we find that learning with bagged boosted decision trees reduces equal-error rates for threshold relaxation by 5 to 21% and for strand elimination by 18 to 57%. We benchmark generalization performance across datasets; while improvements vary between data sets, learning always leads to a useful reduction in error rates. Overall, learning is shown to more than halve the total error rate, and therefore, the human time spent manually correcting such vectorizations. PMID:22854035

  14. Vector Addition: Effect of the Context and Position of the Vectors

    NASA Astrophysics Data System (ADS)

    Barniol, Pablo; Zavala, Genaro

    2010-10-01

    In this article we investigate the effects of (1) the context and (2) the position of the vectors on 2D vector addition tasks. We administered a test to 512 students completing introductory physics courses at a private Mexican university. In the first part, we analyze students' responses to three isomorphic problems: displacements, forces, and no physical context. Students were asked to draw two vectors and their vector sum. We analyzed students' procedures, identified the difficulties they encountered when drawing the vector addition, and showed that the context matters, not only relative to the context-free case but also between contexts. In the second part, we analyze students' responses for three different arrangements of the two vectors being added: tail-to-tail, head-to-tail, and separated. We compared the error frequencies across the three positions to infer students' conceptions of vector addition.

  15. Optical vector network analyzer with improved accuracy based on polarization modulation and polarization pulling.

    PubMed

    Li, Wei; Liu, Jian Guo; Zhu, Ning Hua

    2015-04-15

    We report a novel optical vector network analyzer (OVNA) with improved accuracy based on polarization modulation and stimulated Brillouin scattering (SBS) assisted polarization pulling. The beating between adjacent higher-order optical sidebands, which are generated by the nonlinearity of the electro-optic modulator (EOM), introduces considerable error into the OVNA. In our scheme, the measurement error is significantly reduced by removing the even-order optical sidebands using polarization discrimination. The proposed approach is theoretically analyzed and experimentally verified. The experimental results show that the accuracy of the OVNA is greatly improved compared with a conventional OVNA.

  16. Estimation of precipitable water vapor of atmosphere using artificial neural network, support vector machine and multiple linear regression algorithm and their comparative study

    NASA Astrophysics Data System (ADS)

    Shastri, Niket; Pathak, Kamlesh

    2018-05-01

    The water vapor content of the atmosphere plays a very important role in climate. This paper discusses the application of GPS signals in meteorology, a useful technique for estimating the precipitable water vapor of the atmosphere. Various algorithms, namely artificial neural network, support vector machine, and multiple linear regression, are used to predict precipitable water vapor, and their performance is compared in terms of root mean square error and mean absolute error.
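
    An illustrative comparison of the three regressors on synthetic stand-in data (the study's GPS-derived features are not reproduced here), reporting the same RMSE/MAE metrics:

    ```python
    # Compare MLR, SVM, and ANN regressors on placeholder predictors.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error, mean_squared_error
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))                  # e.g. delay, pressure, temp
    y = X @ [2.0, -1.0, 0.5, 0.0] + rng.normal(scale=0.3, size=500)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    for name, model in [("MLR", LinearRegression()),
                        ("SVM", SVR(kernel="rbf", C=10.0)),
                        ("ANN", MLPRegressor(hidden_layer_sizes=(32,),
                                             max_iter=2000, random_state=0))]:
        pred = model.fit(Xtr, ytr).predict(Xte)
        rmse = mean_squared_error(yte, pred) ** 0.5
        mae = mean_absolute_error(yte, pred)
        print(f"{name}: RMSE={rmse:.3f}, MAE={mae:.3f}")
    ```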

  17. Vector space methods of photometric analysis - Applications to O stars and interstellar reddening

    NASA Technical Reports Server (NTRS)

    Massa, D.; Lillie, C. F.

    1978-01-01

    A multivariate vector-space formulation of photometry is developed which accounts for error propagation. An analysis of uvby and H-beta photometry of O stars is presented, with attention given to observational errors, reddening, general uvby photometry, early stars, and models of O stars. The number of observable parameters in O-star continua is investigated, the way these quantities compare with model-atmosphere predictions is considered, and an interstellar reddening law is derived. It is suggested that photospheric expansion affects the formation of the continuum in at least some O stars.

  18. The epoch state navigation filter. [for maximum likelihood estimates of position and velocity vectors

    NASA Technical Reports Server (NTRS)

    Battin, R. H.; Croopnick, S. R.; Edwards, J. A.

    1977-01-01

    The formulation of a recursive maximum likelihood navigation system employing reference position and velocity vectors as state variables is presented. Convenient forms of the required variational equations of motion are developed together with an explicit form of the associated state transition matrix needed to refer measurement data from the measurement time to the epoch time. Computational advantages accrue from this design in that the usual forward extrapolation of the covariance matrix of estimation errors can be avoided without incurring unacceptable system errors. Simulation data for earth orbiting satellites are provided to substantiate this assertion.

  19. A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models

    NASA Astrophysics Data System (ADS)

    Keller, J. D.; Bach, L.; Hense, A.

    2012-12-01

    The estimation of fast growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades three methods (and variations of these methods) have evolved for global numerical weather prediction models: ensemble Kalman filter, singular vectors and breeding of growing modes (or now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in the mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter which does not necessarily focus on the large scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation of an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast growing error modes for mesoscale limited area models. The so-called self-breeding is a development based on the breeding of growing modes technique. Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates for the initial uncertainty structure (or local Lyapunov vectors) given a specific norm. To prevent all ensemble perturbations from converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the sub-space spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of errors (e.g. convection, turbulence). With case-study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
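
    A toy sketch of such a self-breeding cycle on a Lorenz-63 model (all parameters illustrative, not the reanalysis configuration): integrate perturbed states forward over a short interval, take the grown differences, orthogonalize them in the ensemble subspace, rescale, and repeat.

    ```python
    # Self-breeding cycle with orthogonalized perturbations on Lorenz-63.
    import numpy as np

    def lorenz_step(x, dt=0.01, s=10.0, r=28.0, b=8/3):
        dx = np.array([s*(x[1]-x[0]), x[0]*(r-x[2])-x[1], x[0]*x[1]-b*x[2]])
        return x + dt*dx                  # forward Euler, fine for a sketch

    rng = np.random.default_rng(0)
    x = np.array([1.0, 1.0, 20.0])
    P = 1e-3 * rng.normal(size=(3, 2))    # two bred perturbations (columns)
    AMP = 1e-3                            # rescaling amplitude

    for cycle in range(200):
        xp = x[:, None] + P               # perturbed ensemble members
        for _ in range(20):               # short breeding interval
            x = lorenz_step(x)
            xp = np.apply_along_axis(lorenz_step, 0, xp)
        P = xp - x[:, None]               # grown perturbations
        Q, _ = np.linalg.qr(P)            # orthogonalize in ensemble subspace
        P = AMP * Q                       # rescale to the initial amplitude

    # columns of P now approximate leading local Lyapunov directions at x
    print(P)
    ```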

  20. Weight Vector Fluctuations in Adaptive Antenna Arrays Tuned Using the Least-Mean-Square Error Algorithm with Quadratic Constraint

    NASA Astrophysics Data System (ADS)

    Zimina, S. V.

    2015-06-01

    We present the results of a statistical analysis of an adaptive antenna array tuned using the least-mean-square error algorithm with a quadratic constraint on the useful-signal amplification, taking the weight-coefficient fluctuations into account. Using perturbation theory, expressions for the correlation function and power of the output signal of the adaptive antenna array, as well as a formula for the weight-vector covariance matrix, are obtained in the first approximation. The fluctuations are shown to lead to signal distortions at the antenna-array output. The weight-coefficient fluctuations result in the appearance of additional terms in the statistical characteristics of the antenna array. It is also shown that the weight-vector fluctuations are isotropic, i.e., identical in all directions of the weight-coefficient space.

  1. Influence of diffuse reflectance measurement accuracy on the scattering coefficient in determination of optical properties with integrating sphere optics (a secondary publication).

    PubMed

    Horibe, Takuro; Ishii, Katsunori; Fukutomi, Daichi; Awazu, Kunio

    2015-12-30

    An estimation error of the scattering coefficient of hemoglobin in the high-absorption wavelength range has been observed in optical property calculations of blood-rich tissues. In this study, the relationship between the accuracy of diffuse reflectance measurement in the integrating sphere and the calculated scattering coefficient was evaluated with a system that calculates optical properties by combining an integrating sphere setup with an inverse Monte Carlo simulation. Diffuse reflectance was measured with the integrating sphere using a small incident port diameter, and the optical properties were calculated. As a result, the estimation error of the scattering coefficient was reduced by accurate measurement of diffuse reflectance. In the high-absorption wavelength range, the accuracy of the diffuse reflectance measurement has a direct effect on the calculated scattering coefficient.

  2. Random walk, diffusion and mixing in simulations of scalar transport in fluid flows

    NASA Astrophysics Data System (ADS)

    Klimenko, A. Y.

    2008-12-01

    Physical similarity and mathematical equivalence of continuous diffusion and particle random walk form one of the cornerstones of modern physics and the theory of stochastic processes. In many applied models used in simulation of turbulent transport and turbulent combustion, mixing between particles is used to reflect the influence of the continuous diffusion terms in the transport equations. We show that the continuous scalar transport and diffusion can be accurately specified by means of mixing between randomly walking Lagrangian particles with scalar properties and assess errors associated with this scheme. This gives an alternative formulation for the stochastic process which is selected to represent the continuous diffusion. This paper focuses on statistical errors and deals with relatively simple cases, where one-particle distributions are sufficient for a complete description of the problem.
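
    A minimal sketch of the particle picture for pure diffusion, a toy stand-in for the particle/mixing schemes discussed above: randomly walking particles whose ensemble variance should reproduce the continuous-diffusion result 2Dt.

    ```python
    # Random-walk particles versus the analytic diffusion variance.
    import numpy as np

    D, dt, nsteps, npart = 0.5, 1e-3, 500, 20_000
    rng = np.random.default_rng(0)
    x = np.zeros(npart)                   # all particles start at the origin
    for _ in range(nsteps):
        x += np.sqrt(2 * D * dt) * rng.normal(size=npart)   # walk step

    t = nsteps * dt
    print("sample variance:", x.var())    # should approach 2*D*t
    print("analytic 2Dt   :", 2 * D * t)
    ```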

  3. Errors in Bibliographic Citations: A Continuing Problem.

    ERIC Educational Resources Information Center

    Sweetland, James H.

    1989-01-01

    Summarizes studies examining citation errors and illustrates errors resulting from a lack of standardization, misunderstanding of foreign languages, failure to examine the document cited, and general lack of training in citation norms. It is argued that the failure to detect and correct citation errors is due to diffusion of responsibility in the…

  4. Local Observability Analysis of Star Sensor Installation Errors in a SINS/CNS Integration System for Near-Earth Flight Vehicles.

    PubMed

    Yang, Yanqiang; Zhang, Chunxi; Lu, Jiazhen

    2017-01-16

    Strapdown inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a fully autonomous and high-precision method, which has been widely used to improve the hitting accuracy and quick-reaction capability of near-Earth flight vehicles. The installation errors between the SINS and the star sensors have been one of the main factors that restrict the actual accuracy of SINS/CNS. In this paper, an integration algorithm based on the star vector observations is derived considering the star sensor installation error. Then, the star sensor installation error is accurately estimated based on Kalman Filtering (KF). Meanwhile, a local observability analysis is performed on the rank of the observability matrix obtained from the linearized observation equation, and the observability conditions are presented and validated. The number of star vectors should be at least two, and the number of attitude adjustments should also be at least two. Simulations indicate that the star sensor installation error is readily observable under the maneuvering condition; moreover, the attitude errors of the SINS are less than 7 arc-seconds. This analysis method and conclusion are useful in the ballistic trajectory design of near-Earth flight vehicles.

  5. An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine.

    PubMed

    Liu, Zhiyuan; Wang, Changhui

    2015-10-23

    In this paper, a new method is developed for mass air flow (MAF) sensor error compensation and online updating of the error map (or lookup table), accounting for installation effects and aging in a diesel engine. Since the MAF sensor error is dependent on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs, fuel mass injection quantity and engine speed. Meanwhile, the 2D map representing the MAF sensor error is described as a piecewise bilinear interpolation model, which can be written as a dot product between the regression vector and the parameter vector using a membership function. With the combination of the 2D map regression model and the diesel engine air path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under conditions of persistent excitation and given inequalities. The observer is validated against simulation data from the engine software enDYNA provided by Tesis. The results demonstrate that the operating-point-dependent error of the MAF sensor can be approximated acceptably by the 2D map obtained from the proposed method.
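
    A sketch of the 2D map written as a dot product between a membership (regression) vector and a parameter vector, as described above; the grid break points and indexing are ours, purely illustrative. Because the interpolation is linear in the parameters for a fixed grid, the map can be updated by a standard adaptive law.

    ```python
    # Bilinear 2-D map as phi(speed, fuel) @ theta.
    import numpy as np

    speed_grid = np.array([800.0, 1600.0, 2400.0])   # engine-speed breakpoints
    fuel_grid = np.array([5.0, 15.0, 25.0])          # injection-qty breakpoints
    theta = np.zeros(9)                              # map values, row-major 3x3

    def membership(speed, fuel):
        """Regression vector phi with bilinear weights; error = phi @ theta."""
        i = np.clip(np.searchsorted(speed_grid, speed) - 1, 0, 1)
        j = np.clip(np.searchsorted(fuel_grid, fuel) - 1, 0, 1)
        a = (speed - speed_grid[i]) / (speed_grid[i+1] - speed_grid[i])
        b = (fuel - fuel_grid[j]) / (fuel_grid[j+1] - fuel_grid[j])
        phi = np.zeros(9)
        phi[3*i + j] = (1-a)*(1-b)
        phi[3*i + j + 1] = (1-a)*b
        phi[3*(i+1) + j] = a*(1-b)
        phi[3*(i+1) + j + 1] = a*b
        return phi

    err = membership(1200.0, 10.0) @ theta           # interpolated sensor error
    ```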

  6. Adaptive error correction codes for face identification

    NASA Astrophysics Data System (ADS)

    Hussein, Wafaa R.; Sellahewa, Harin; Jassim, Sabah A.

    2012-06-01

    Face recognition in uncontrolled environments is greatly affected by fuzziness of face feature vectors as a result of extreme variation in recording conditions (e.g. illumination, poses or expressions) in different sessions. Many techniques have been developed to deal with these variations, resulting in improved performance. This paper aims to model template fuzziness as errors and investigate the use of error detection/correction techniques for face recognition in uncontrolled environments. Error correction codes (ECC) have recently been used for biometric key generation but not on biometric templates. We have investigated error patterns in binary face feature vectors extracted from different image windows of differing sizes and for different recording conditions. By estimating statistical parameters for the intra-class and inter-class distributions of Hamming distances in each window, we encode with appropriate ECCs. The proposed approach is tested for binarised wavelet templates using two face databases: Extended Yale-B and Yale. We demonstrate that using different combinations of BCH-based ECCs for different blocks and different recording conditions leads to different accuracy rates, and that using ECCs yields significantly improved recognition results.

  7. Adaptive mesh refinement for time-domain electromagnetics using vector finite elements: a feasibility study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, C. David; Kotulski, Joseph Daniel; Pasik, Michael Francis

    This report investigates the feasibility of applying Adaptive Mesh Refinement (AMR) techniques to a vector finite element formulation for the wave equation in three dimensions. Possible error estimators are considered first. Next, approaches for refining tetrahedral elements are reviewed. AMR capabilities within the Nevada framework are then evaluated. We summarize our conclusions on the feasibility of AMR for time-domain vector finite elements and identify a path forward.

  8. Vectorized and multitasked solution of the few-group neutron diffusion equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zee, S.K.; Turinsky, P.J.; Shayer, Z.

    1989-03-01

    A numerical algorithm with parallelism was used to solve the two-group, multidimensional neutron diffusion equations on computers characterized by shared memory, vector pipeline, and multi-CPU architecture features. Specifically, solutions were obtained on the Cray X/MP-48, the IBM-3090 with vector facilities, and the FPS-164. The material-centered mesh finite difference method approximation and outer-inner iteration method were employed. Parallelism was introduced in the inner iterations using the cyclic line successive overrelaxation iterative method and solving in parallel across lines. The outer iterations were completed using the Chebyshev semi-iterative method that allows parallelism to be introduced in both space and energy groups. For the three-dimensional model, power, soluble boron, and transient fission product feedbacks were included. Concentrating on the pressurized water reactor (PWR), the thermal-hydraulic calculation of moderator density assumed single-phase flow and a closed flow channel, allowing parallelism to be introduced in the solution across the radial plane. Using a pinwise detail, quarter-core model of a typical PWR in cycle 1, for the two-dimensional model without feedback the measured million floating point operations per second (MFLOPS)/vector speedups were 83/11.7, 18/2.2, and 2.4/5.6 on the Cray, IBM, and FPS without multitasking, respectively. Lower performance was observed with a coarser mesh, i.e., shorter vector length, due to vector pipeline start-up. For an 18 x 18 x 30 (x-y-z) three-dimensional model with feedback of the same core, MFLOPS/vector speedups of about 61/6.7 and an execution time of 0.8 CPU seconds on the Cray without multitasking were measured. Finally, using two CPUs and the vector pipelines of the Cray, a multitasking efficiency of 81% was noted for the three-dimensional model.

  9. HMI Measured Doppler Velocity Contamination from the SDO Orbit Velocity

    NASA Astrophysics Data System (ADS)

    Scherrer, Phil; HMI Team

    2016-10-01

    The Problem: The SDO satellite is in an inclined geosynchronous orbit, which allows uninterrupted views of the Sun nearly 98% of the time. This orbit has a velocity of about 3,500 m/s, with the solar line-of-sight component varying with time of day and time of year. Due to remaining calibration errors in the wavelength filters, the orbit velocity leaks into the line-of-sight solar velocity and magnetic field measurements. Since the same model of the filter is used in the Milne-Eddington inversions used to generate the vector magnetic field data, the orbit velocity also contaminates the vector magnetic products. These errors contribute 12-hour and 24-hour variations to most HMI data products and are known as the 24-hour problem. Early in the mission we made a patch to the calibration that corrected the disk mean velocity. The resulting LOS velocity has been used for helioseismology with no apparent problems. The velocity signal has about a 1% scale error that varies with time of day and with velocity, i.e. it is non-linear for large velocities. This causes leaks into the LOS field (which is simply the difference between the velocities measured in LCP and RCP, rescaled for the Zeeman splitting). This poster reviews the measurement process, shows examples of the problem, and describes recent work toward resolving the issues. Since the errors are in the filter characterization, it makes most sense to work first on the LOS data products, since they, unlike the vector products, are directly and simply related to the filter profile without assumptions on the solar atmosphere, filling factors, etc. Therefore this poster is strictly limited to better understanding the filter profiles as they vary across the field, with time of day, and with time of year, which result in velocity errors of up to a percent and LOS field estimates with errors up to a few percent (of the standard LOS magnetograph method based on measuring the differences in wavelength of the line centroids in LCP and RCP light). We expect that when better filter profiles are available it will be possible to generate improved vector field data products as well.

  10. Benchmarking the pseudopotential and fixed-node approximations in diffusion Monte Carlo calculations of molecules and solids

    DOE PAGES

    Nazarov, Roman; Shulenburger, Luke; Morales, Miguel A.; ...

    2016-03-28

    We performed diffusion Monte Carlo (DMC) calculations of the spectroscopic properties of a large set of molecules, assessing the effect of different approximations. In systems containing elements with large atomic numbers, we show that the errors associated with the use of nonlocal mean-field-based pseudopotentials in DMC calculations can be significant and may surpass the fixed-node error. In conclusion, we suggest practical guidelines for reducing these pseudopotential errors, which allow us to obtain DMC-computed spectroscopic parameters of molecules and equation of state properties of solids in excellent agreement with experiment.

  11. Benchmarking the pseudopotential and fixed-node approximations in diffusion Monte Carlo calculations of molecules and solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazarov, Roman; Shulenburger, Luke; Morales, Miguel A.

    We performed diffusion Monte Carlo (DMC) calculations of the spectroscopic properties of a large set of molecules, assessing the effect of different approximations. In systems containing elements with large atomic numbers, we show that the errors associated with the use of nonlocal mean-field-based pseudopotentials in DMC calculations can be significant and may surpass the fixed-node error. In conclusion, we suggest practical guidelines for reducing these pseudopotential errors, which allow us to obtain DMC-computed spectroscopic parameters of molecules and equation of state properties of solids in excellent agreement with experiment.

  12. Using Redundancy To Reduce Errors in Magnetometer Readings

    NASA Technical Reports Server (NTRS)

    Kulikov, Igor; Zak, Michail

    2004-01-01

    A method of reducing errors in noisy magnetic-field measurements involves exploitation of redundancy in the readings of multiple magnetometers in a cluster. By "redundancy" is meant that the readings are not entirely independent of each other, because the relationships among the magnetic-field components that one seeks to measure are governed by the fundamental laws of electromagnetism as expressed by Maxwell's equations. Assuming that the magnetometers are located outside a magnetic material, that the magnetic field is steady or quasi-steady, and that there are no electric currents flowing in or near the magnetometers, the applicable Maxwell's equations are ∇ × B = 0 and ∇ · B = 0, where B is the magnetic-flux-density vector. By suitable algebraic manipulation, these equations can be shown to impose three independent constraints on the values of the components of B at the various magnetometer positions. In general, the problem of reducing the errors in noisy measurements is one of finding a set of corrected values that minimize an error function. In the present method, the error function is formulated as (1) the sum of squares of the differences between the corrected and noisy measurement values plus (2) a sum of three terms, each comprising the product of a Lagrange multiplier and one of the three constraints. The partial derivatives of the error function with respect to the corrected magnetic-field component values and the Lagrange multipliers are set equal to zero, leading to a set of equations that can be put into matrix-vector form. The matrix can be inverted to solve for a vector that comprises the corrected magnetic-field component values and the Lagrange multipliers.
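
    The minimization described above is an equality-constrained least-squares problem; a minimal sketch of the resulting KKT (matrix-vector) system follows, with a placeholder constraint matrix rather than the actual Maxwell-derived constraints for a particular magnetometer geometry.

    ```python
    # Corrected readings from min ||x - x_noisy||^2 subject to C x = 0.
    import numpy as np

    def correct_readings(x_noisy, C):
        """Solve the KKT system [2I, C^T; C, 0][x; lam] = [2 x_noisy; 0]."""
        n, m = x_noisy.size, C.shape[0]
        K = np.block([[2*np.eye(n), C.T],
                      [C, np.zeros((m, m))]])
        rhs = np.concatenate([2*x_noisy, np.zeros(m)])
        sol = np.linalg.solve(K, rhs)
        return sol[:n]                    # corrected field components

    # example: 9 readings (3 magnetometers x 3 components), 3 constraints
    rng = np.random.default_rng(0)
    C = rng.normal(size=(3, 9))           # placeholder constraint rows
    x_corr = correct_readings(rng.normal(size=9), C)
    assert np.allclose(C @ x_corr, 0, atol=1e-10)
    ```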

  13. An Enhanced MEMS Error Modeling Approach Based on Nu-Support Vector Regression

    PubMed Central

    Bhatt, Deepak; Aggarwal, Priyanka; Bhattacharya, Prabir; Devabhaktuni, Vijay

    2012-01-01

    Micro Electro Mechanical System (MEMS)-based inertial sensors have made possible the development of civilian land vehicle navigation systems by offering a low-cost solution. However, accurate modeling of the MEMS sensor errors is one of the most challenging tasks in the design of low-cost navigation systems. These sensors exhibit significant errors, such as biases, drift, and noise, which are negligible for higher-grade units. Different conventional techniques utilizing the Gauss-Markov model and neural network methods have previously been utilized to model the errors. However, the Gauss-Markov model works unsatisfactorily in the case of MEMS units due to the presence of high inherent sensor errors. On the other hand, modeling the random drift utilizing a Neural Network (NN) is time consuming, thereby affecting its real-time implementation. We overcome these existing drawbacks by developing an enhanced Support Vector Machine (SVM)-based error model. Unlike NNs, SVMs do not suffer from local minima or over-fitting problems and deliver a reliable global solution. Experimental results proved that the proposed SVM approach reduced the noise standard deviation by 10–35% for gyroscopes and 61–76% for accelerometers. Further, positional error drifts under static conditions improved by 41% and 80% in comparison to the NN and GM approaches. PMID:23012552
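
    A hedged illustration of Nu-SVR applied to a synthetic drift-plus-noise signal standing in for a MEMS gyroscope record; the paper's actual features and training data are not reproduced here.

    ```python
    # Nu-SVR fit to a slowly varying drift, then de-drift the readings.
    import numpy as np
    from sklearn.svm import NuSVR

    rng = np.random.default_rng(0)
    t = np.linspace(0, 100, 2000)                    # time, seconds
    drift = 0.02*t + 0.5*np.sin(0.1*t)               # slowly varying bias/drift
    reading = drift + rng.normal(scale=0.2, size=t.size)

    model = NuSVR(nu=0.5, C=10.0, kernel="rbf").fit(t[:, None], reading)
    residual = reading - model.predict(t[:, None])   # de-drifted sensor noise
    print("std before/after de-drift:",
          reading.std().round(3), residual.std().round(3))
    ```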

  14. Analytical approach to impurity transport studies: Charge state dynamics in tokamak plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shurygin, V. A.

    2006-08-15

    Ionization and recombination of plasma impurities govern their charge state kinetics, which is imposed upon the dynamics of ions that implies a superposition of the appropriate probabilities and causes an impurity charge state dynamics. The latter is considered in terms of a vector field of conditional probabilities and presented by a vector charge state distribution function with coupled equations of the Kolmogorov type. Analytical solutions of a diffusion problem are derived with the basic spatial and temporal dimensionless parameters. Analysis shows that the empirical scaling D_A ∝ n_e⁻¹ [K. Krieger, G. Fussmann, and the ASDEX Upgrade Team, Nucl. Fusion 30, 2392 (1990)] can be explained by the ratio of the diffusive and kinetic terms, D_A/(n_e a²), being used instead of the diffusivity, D_A. The derived time scales of charge state dynamics are given by a sum of the diffusive and kinetic times. Detailed simulations of charge state dynamics are performed for argon impurity and compared with the reference modeling.

  15. Comparison of bootstrap approaches for estimation of uncertainties of DTI parameters.

    PubMed

    Chung, SungWon; Lu, Ying; Henry, Roland G

    2006-11-01

    Bootstrap is an empirical non-parametric statistical technique based on data resampling that has been used to quantify uncertainties of diffusion tensor MRI (DTI) parameters, useful in tractography and in assessing DTI methods. The current bootstrap method (repetition bootstrap) used for DTI analysis performs resampling within the data sharing common diffusion gradients, requiring multiple acquisitions for each diffusion gradient. Recently, wild bootstrap was proposed that can be applied without multiple acquisitions. In this paper, two new approaches are introduced called residual bootstrap and repetition bootknife. We show that repetition bootknife corrects for the large bias present in the repetition bootstrap method and, therefore, better estimates the standard errors. Like wild bootstrap, residual bootstrap is applicable to single acquisition scheme, and both are based on regression residuals (called model-based resampling). Residual bootstrap is based on the assumption that non-constant variance of measured diffusion-attenuated signals can be modeled, which is actually the assumption behind the widely used weighted least squares solution of diffusion tensor. The performances of these bootstrap approaches were compared in terms of bias, variance, and overall error of bootstrap-estimated standard error by Monte Carlo simulation. We demonstrate that residual bootstrap has smaller biases and overall errors, which enables estimation of uncertainties with higher accuracy. Understanding the properties of these bootstrap procedures will help us to choose the optimal approach for estimating uncertainties that can benefit hypothesis testing based on DTI parameters, probabilistic fiber tracking, and optimizing DTI methods.
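
    The mechanics of residual bootstrap are easiest to see on a generic linear model; a minimal sketch (not the DTI-specific implementation, which fits a log-linear tensor model) follows: fit, resample the regression residuals, refit, and read off the spread of the refitted parameters.

    ```python
    # Residual bootstrap standard errors for a toy linear regression.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(30), np.linspace(0, 1, 30)])
    beta_true = np.array([1.0, 2.0])
    y = X @ beta_true + rng.normal(scale=0.1, size=30)

    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat

    boot = []
    for _ in range(1000):
        y_star = X @ beta_hat + rng.choice(resid, size=resid.size, replace=True)
        b_star, *_ = np.linalg.lstsq(X, y_star, rcond=None)
        boot.append(b_star)
    se = np.std(boot, axis=0)             # bootstrap standard errors
    print("bootstrap SE of slope:", se[1])
    ```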

  16. High-Resolution Multi-Shot Spiral Diffusion Tensor Imaging with Inherent Correction of Motion-Induced Phase Errors

    PubMed Central

    Truong, Trong-Kha; Guidon, Arnaud

    2014-01-01

    Purpose To develop and compare three novel reconstruction methods designed to inherently correct for motion-induced phase errors in multi-shot spiral diffusion tensor imaging (DTI) without requiring a variable-density spiral trajectory or a navigator echo. Theory and Methods The first method simply averages magnitude images reconstructed with sensitivity encoding (SENSE) from each shot, whereas the second and third methods rely on SENSE to estimate the motion-induced phase error for each shot, and subsequently use either a direct phase subtraction or an iterative conjugate gradient (CG) algorithm, respectively, to correct for the resulting artifacts. Numerical simulations and in vivo experiments on healthy volunteers were performed to assess the performance of these methods. Results The first two methods suffer from a low signal-to-noise ratio (SNR) or from residual artifacts in the reconstructed diffusion-weighted images and fractional anisotropy maps. In contrast, the third method provides high-quality, high-resolution DTI results, revealing fine anatomical details such as a radial diffusion anisotropy in cortical gray matter. Conclusion The proposed SENSE+CG method can inherently and effectively correct for phase errors, signal loss, and aliasing artifacts caused by both rigid and nonrigid motion in multi-shot spiral DTI, without increasing the scan time or reducing the SNR. PMID:23450457

  17. Research on bearing fault diagnosis of large machinery based on mathematical morphology

    NASA Astrophysics Data System (ADS)

    Wang, Yu

    2018-04-01

    To study the automatic diagnosis of faults in large machinery, a support vector machine (SVM) is used to classify and identify four common fault types. The extracted feature vectors are fed into the classifier, which is trained and evaluated with a multi-class classification method. The optimal SVM parameters are found by trial and error combined with cross-validation. The SVM is then compared with a BP neural network. The results show that the SVM trains quickly and achieves high classification accuracy, making it well suited to fault diagnosis in large machinery. It can therefore be concluded that the training speed of support vector machines is fast and their performance is good.
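
    A hedged sketch of the cross-validated parameter search mentioned above, on placeholder features and four fault classes (the study's vibration features are not reproduced here):

    ```python
    # Grid search over SVM hyperparameters with 5-fold cross-validation.
    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 12))        # 12 vibration features per sample
    y = rng.integers(0, 4, size=200)      # four fault classes

    grid = GridSearchCV(SVC(kernel="rbf"),
                        {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
                        cv=5)
    grid.fit(X, y)
    print(grid.best_params_, grid.best_score_)
    ```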

  18. A high-accuracy two-position alignment inertial navigation system for lunar rovers aided by a star sensor with a calibration and positioning function

    NASA Astrophysics Data System (ADS)

    Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang; Liu, Ming

    2016-12-01

    An integrated inertial/celestial navigation system (INS/CNS) has wide applicability in lunar rovers as it provides accurate and autonomous navigational information. Initialization is particularly vital for an INS. This paper proposes a two-position initialization method based on a standard Kalman filter, with the difference between the computed star vector and the measured star vector used as the measurement. With the aid of a star sensor and the two positions, the attitudinal and positional errors can be greatly reduced, and the biases of the three gyros and accelerometers can also be estimated. The semi-physical simulation results show that the attitudinal and positional errors converge to within 0.07″ and 0.1 m, respectively, when the given initial positional error is 1 km and the attitudinal error is 10°. These good results show that the proposed method can accomplish alignment, positioning and calibration functions simultaneously. Thus the proposed two-position initialization method has potential for application in lunar rover navigation.

  19. Control method and system for hydraulic machines employing a dynamic joint motion model

    DOEpatents

    Danko, George [Reno, NV

    2011-11-22

    A control method and system for controlling a hydraulically actuated mechanical arm to perform a task, the mechanical arm optionally being a hydraulically actuated excavator arm. The method can include determining a dynamic model of the motion of the hydraulic arm for each hydraulic arm link by relating the input signal vector for each respective link to the output signal vector for the same link. Also the method can include determining an error signal for each link as the weighted sum of the differences between a measured position and a reference position and between the time derivatives of the measured position and the time derivatives of the reference position for each respective link. The weights used in the determination of the error signal can be determined from the constant coefficients of the dynamic model. The error signal can be applied in a closed negative feedback control loop to diminish or eliminate the error signal for each respective link.
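
    In symbols (ours, not the patent's), a hedged reading of the per-link error signal is the weighted sum below, with the weights α_i and β_i taken from the constant coefficients of the identified dynamic model:

    ```latex
    e_i(t) = \alpha_i \bigl[ x_i(t) - x_i^{\mathrm{ref}}(t) \bigr]
           + \beta_i \bigl[ \dot{x}_i(t) - \dot{x}_i^{\mathrm{ref}}(t) \bigr]
    ```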

  20. Generalized Fourier analyses of the advection-diffusion equation - Part I: one-dimensional domains

    NASA Astrophysics Data System (ADS)

    Christon, Mark A.; Martinez, Mario J.; Voth, Thomas E.

    2004-07-01

    This paper presents a detailed multi-methods comparison of the spatial errors associated with finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. The errors are reported in terms of non-dimensional phase and group speed, discrete diffusivity, artificial diffusivity, and grid-induced anisotropy. It is demonstrated that Fourier analysis provides an automatic process for separating the discrete advective operator into its symmetric and skew-symmetric components and characterizing the spectral behaviour of each operator. For each of the numerical methods considered, asymptotic truncation error and resolution estimates are presented for the limiting cases of pure advection and pure diffusion. It is demonstrated that streamline upwind Petrov-Galerkin and its control-volume finite element analogue, the streamline upwind control-volume method, produce both an artificial diffusivity and a concomitant phase speed adjustment in addition to the usual semi-discrete artifacts observed in the phase speed, group speed and diffusivity. The Galerkin finite element method and its streamline upwind derivatives are shown to exhibit super-convergent behaviour in terms of phase and group speed when a consistent mass matrix is used in the formulation. In contrast, the CVFEM method and its streamline upwind derivatives yield strictly second-order behaviour. In Part II of this paper, we consider two-dimensional semi-discretizations of the advection-diffusion equation and also assess the effects of grid-induced anisotropy observed in the non-dimensional phase speed, and the discrete and artificial diffusivities. Although this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common analysis framework. Published in 2004 by John Wiley & Sons, Ltd.
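
    As a concrete instance of the kind of analysis summarized above, consider the 1-D scalar advection-diffusion equation and the classical Fourier symbols for second-order central differences (a standard textbook result, not taken from the paper itself):

    ```latex
    \partial_t \phi + c\,\partial_x \phi = \nu\,\partial_{xx}\phi ,
    \qquad
    \frac{c^{*}}{c} = \frac{\sin(kh)}{kh},
    \qquad
    \frac{\nu^{*}}{\nu} = \frac{2\bigl(1 - \cos(kh)\bigr)}{(kh)^{2}},
    ```

    where h is the grid spacing and k the wavenumber; both the numerical phase speed c* and the discrete diffusivity ν* fall below their continuum values as kh grows toward the grid resolution limit.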

  1. Combined group ECC protection and subgroup parity protection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gara, Alan; Chen, Dong; Heidelberger, Philip

    A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m bit wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.

  2. High accuracy diffuse horizontal irradiance measurements without a shadowband

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlemmer, J.A; Michalsky, J.J.

    1995-12-31

    The standard method for measuring diffuse horizontal irradiance uses a fixed shadowband to block direct solar radiation. This method requires a correction for the excess skylight blocked by the band, and this correction varies with sky conditions. Alternately, diffuse horizontal irradiance may be calculated from total horizontal and direct normal irradiance. This method is in error because of angular (cosine) response of the total horizontal pyranometer to direct beam irradiance. This paper describes an improved calculation of diffuse horizontal irradiance from total horizontal and direct normal irradiance using a predetermination of the angular response of the total horizontal pyranometer. We compare these diffuse horizontal irradiance calculations with measurements made with a shading-disk pyranometer that shields direct irradiance using a tracking disk. Results indicate significant improvement in most cases. Remaining disagreement most likely arises from undetected tracking errors and instrument leveling.

  3. High accuracy diffuse horizontal irradiance measurements without a shadowband

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlemmer, J.A.; Michalsky, J.J.

    1995-10-01

    The standard method for measuring diffuse horizontal irradiance uses a fixed shadowband to block direct solar radiation. This method requires a correction for the excess skylight blocked by the band, and this correction varies with sky conditions. Alternately, diffuse horizontal irradiance may be calculated from the total horizontal and direct normal irradiance. This method is in error because of the angular (often referred to as cosine) response of the total horizontal pyranometer to direct beam irradiance. This paper describes an improved calculation of diffuse horizontal irradiance from total horizontal and direct normal irradiance using a predetermination of the angular response of the total horizontal pyranometer. The authors compare these diffuse horizontal irradiance calculations with measurements made with a shading-disk pyranometer that shields direct irradiance using a tracking disk. The results indicate significant improvement in most cases. The remaining disagreement most likely arises from undetected tracking errors and instrument leveling.

  4. Pulse Vector-Excitation Speech Encoder

    NASA Technical Reports Server (NTRS)

    Davidson, Grant; Gersho, Allen

    1989-01-01

    Proposed pulse vector-excitation speech encoder (PVXC) encodes analog speech signals into digital representation for transmission or storage at rates below 5 kilobits per second. Produces high-quality reconstructed speech, but with less computation than comparable speech-encoding systems require. Has some characteristics of multipulse linear predictive coding (MPLPC) and of code-excited linear prediction (CELP). System uses mathematical model of vocal tract in conjunction with set of excitation vectors and perceptually-based error criterion to synthesize natural-sounding speech.

  5. Backus Effect and Perpendicular Errors in Harmonic Models of Real vs. Synthetic Data

    NASA Technical Reports Server (NTRS)

    Voorhies, C. V.; Santana, J.; Sabaka, T.

    1999-01-01

    Measurements of geomagnetic scalar intensity on a thin spherical shell alone are not enough to separate internal from external source fields; moreover, such scalar data are not enough for accurate modeling of the vector field from internal sources because of unmodeled fields and small data errors. Spherical harmonic models of the geomagnetic potential fitted to scalar data alone therefore suffer from the well-understood Backus effect and perpendicular errors. Curiously, errors in some models of simulated 'data' are very much less than those in models of real data. We analyze select Magsat vector and scalar measurements separately to illustrate the Backus effect and perpendicular errors in models of real scalar data. By using a model to synthesize 'data' at the observation points, and by adding various types of 'noise', we illustrate such errors in models of synthetic 'data'. Perpendicular errors prove quite sensitive to the maximum degree in the spherical harmonic expansion of the potential field model fitted to the scalar data. Small errors in models of synthetic 'data' are found to be an artifact of matched truncation levels. For example, consider scalar synthetic 'data' computed from a degree 14 model. A degree 14 model fitted to such synthetic 'data' yields negligible error, but amplifies 4 nT (rmss) added noise into a 60 nT error (rmss); however, a degree 12 model fitted to the noisy 'data' suffers a 492 nT error (rmss through degree 12). Geomagnetic measurements remain unaware of model truncation, so the small errors indicated by some simulations cannot be realized in practice. Errors in models fitted to scalar data alone approach 1000 nT (rmss) and several thousand nT (maximum).

  6. Derivation of simple rules for complex flow vector fields on the lower part of the human face for robot face design.

    PubMed

    Ishihara, Hisashi; Ota, Nobuyuki; Asada, Minoru

    2017-11-27

    It is quite difficult for android robots to replicate the numerous and various types of human facial expressions owing to limitations in terms of space, mechanisms, and materials. This situation could be improved with greater knowledge regarding these expressions and their deformation rules, i.e. by using the biomimetic approach. In a previous study, we investigated 16 facial deformation patterns and found that each facial point moves almost only in its own principal direction and different deformation patterns are created with different combinations of moving lengths. However, the replication errors caused by moving each control point of a face in only their principal direction were not evaluated for each deformation pattern at that time. Therefore, we calculated the replication errors in this study using the second principal component scores of the 16 sets of flow vectors at each point on the face. More than 60% of the errors were within 1 mm, and approximately 90% of them were within 3 mm. The average error was 1.1 mm. These results indicate that robots can replicate the 16 investigated facial expressions with errors within 3 mm and 1 mm for about 90% and 60% of the vectors, respectively, even if each point on the robot face moves in only its own principal direction. This finding seems promising for the development of robots capable of showing various facial expressions because significantly fewer types of movements than previously predicted are necessary.
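
    The per-point analysis lends itself to a short sketch (toy data below; the actual study used measured flow vectors from 16 expressions): the principal direction at a control point is the leading singular vector of its centered flow vectors, and the replication error of each expression is the component orthogonal to that direction, i.e. the second principal component score.

```python
import numpy as np

rng = np.random.default_rng(0)
flows = rng.normal(size=(16, 2)) * [3.0, 0.5]   # toy flow vectors (mm), 16 expressions
flows -= flows.mean(axis=0)

# Principal direction = leading right-singular vector of the centered vectors
_, _, vt = np.linalg.svd(flows, full_matrices=False)
principal, secondary = vt[0], vt[1]

errors = np.abs(flows @ secondary)              # per-expression replication error (mm)
print("within 1 mm:", np.mean(errors <= 1.0), "within 3 mm:", np.mean(errors <= 3.0))
```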

  7. Multipath Measurements

    DTIC Science & Technology

    1974-08-01

    Where the surface irregularities are large in comparison to the wavelength, the fields may be approximated on the surface S. Once the sum vector (Σ) and the difference vector (Δ) at the radar have been determined, the rough boresight error is computed from the magnitudes of Σ and Δ.

  8. Wind estimates from cloud motions: Phase 1 of an in situ aircraft verification experiment

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.; Shenk, W. E.; Skillman, W.

    1974-01-01

    An initial experiment was conducted to verify geostationary satellite derived cloud motion wind estimates with in situ aircraft wind velocity measurements. Case histories of one-half hour to two hours were obtained for 3-10km diameter cumulus cloud systems on 6 days. Also, one cirrus cloud case was obtained. In most cases the clouds were discrete enough that both the cloud motion and the ambient wind could be measured with the same aircraft Inertial Navigation System (INS). Since the INS drift error is the same for both the cloud motion and wind measurements, the drift error subtracts out of the relative motion determinations. The magnitude of the vector difference between the cloud motion and the ambient wind at the cloud base averaged 1.2 m/sec. The wind vector at higher levels in the cloud layer differed by about 3 m/sec to 5 m/sec from the cloud motion vector.

  9. Attitude estimation from magnetometer and earth-albedo-corrected coarse sun sensor measurements

    NASA Astrophysics Data System (ADS)

    Appel, Pontus

    2005-01-01

    For full three-axis attitude determination the magnetic field vector and the Sun vector can be used. A Coarse Sun Sensor consisting of six solar cells, one placed on each of the six outer surfaces of the satellite, is used for Sun vector determination. This robust and low-cost setup is sensitive to surrounding light sources, as it sees the whole sky. To compensate for the largest error source, the Earth, an albedo model is developed. The total albedo light vector has contributions from the parts of the Earth's surface that are illuminated by the Sun and visible from the satellite. The albedo light changes depending on the reflectivity of the Earth's surface, the satellite's position, and the Sun's position. This cannot be calculated analytically and hence a numerical model is developed. For on-board computer use, the Earth albedo model, consisting of data tables, is transferred into polynomial functions in order to save memory space. For an absolute worst case the attitude determination error can be held below 2°. In a nominal case it is better than 1°.
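
    A simplified sketch (not the thesis implementation) of Sun-vector determination from six body-mounted cells: each cell current is roughly proportional to the cosine of its Sun angle, so opposite-face differences give the body-frame Sun vector after a modeled albedo contribution is subtracted. The current and albedo values below are hypothetical.

```python
import numpy as np

def sun_vector(currents, albedo_estimate):
    """currents/albedo_estimate: dicts keyed by face '+x','-x','+y','-y','+z','-z'."""
    c = {f: currents.get(f, 0.0) - albedo_estimate.get(f, 0.0)
         for f in ('+x', '-x', '+y', '-y', '+z', '-z')}
    s = np.array([c['+x'] - c['-x'], c['+y'] - c['-y'], c['+z'] - c['-z']])
    return s / np.linalg.norm(s)                 # unit Sun vector in body frame

currents = {'+x': 0.70, '+y': 0.30, '+z': 0.65}  # normalized cell currents
albedo   = {'+z': 0.05}                          # hypothetical Earth-albedo term
print(sun_vector(currents, albedo))
```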

  10. Levofloxacin susceptibility testing against Helicobacter pylori: evaluation of a modified disk diffusion method compared to E test.

    PubMed

    Boyanova, Lyudmila; Ilieva, Juliana; Gergova, Galina; Mitov, Ivan

    2016-01-01

    We compared levofloxacin (1 μg/disk) disk diffusion method to E test against 212 Helicobacter pylori strains. Using diameter breakpoints for susceptibility (≥15 mm) and resistance (≤9 mm), very major error, major error rate, and categoric agreement were 0.0%, 0.6%, and 93.9%, respectively. The method may be useful in low-resource laboratories. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Color extended visual cryptography using error diffusion.

    PubMed

    Kang, InKoo; Arce, Gonzalo R; Lee, Heung-Kyu

    2011-01-01

    Color visual cryptography (VC) encrypts a color secret message into n color halftone image shares. Previous methods in the literature show good results for black-and-white or gray-scale VC schemes; however, they are not sufficient to be applied directly to color shares due to different color structures. Some methods for color visual cryptography are not satisfactory in terms of producing either meaningless shares or meaningful shares with low visual quality, leading to suspicion of encryption. This paper introduces the concept of visual information pixel (VIP) synchronization and error diffusion to attain a color visual cryptography encryption method that produces meaningful color shares with high visual quality. VIP synchronization retains the positions of pixels carrying visual information of original images throughout the color channels, and error diffusion generates shares pleasant to human eyes. Comparisons with previous approaches show the superior performance of the new method.

  12. Two-stage color palettization for error diffusion

    NASA Astrophysics Data System (ADS)

    Mitra, Niloy J.; Gupta, Maya R.

    2002-06-01

    Image-adaptive color palettization chooses a reduced number of colors to represent an image. Palettization is one way to decrease storage and memory requirements for low-end displays. Palettization is generally approached as a clustering problem, where one attempts to find the k palette colors that minimize the average distortion for all the colors in an image. This would be the optimal approach if the image were to be displayed with each pixel quantized to the closest palette color. However, to improve the image quality the palettization may be followed by error diffusion. In this work, we propose a two-stage palettization where the first stage finds some m << k clusters, and the second stage chooses palette points that cover the spread of each of the m clusters. After error diffusion, this method leads to better image quality at less computational cost and with faster display speed than full k-means palettization.
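
    A rough sketch of the two-stage idea followed by standard Floyd-Steinberg error diffusion (the allocation of the k palette entries among the m coarse clusters is assumed uniform here; the paper's allocation rule may differ, and each coarse cluster is assumed non-empty):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def two_stage_palette(pixels, m, k):
    """Stage 1: m coarse clusters; stage 2: ~k/m palette points per cluster."""
    np.random.seed(0)
    centers, labels = kmeans2(pixels, m, minit='++')
    palette = [kmeans2(pixels[labels == c], max(k // m, 1), minit='++')[0]
               for c in range(m)]
    return np.vstack(palette)

def error_diffuse(img, palette):
    """Floyd-Steinberg: quantize each pixel to the nearest palette color and
    push the residual error onto the unprocessed neighbors."""
    out = img.astype(float).copy()
    h, w, _ = out.shape
    for y in range(h):
        for x in range(w):
            idx = np.argmin(((palette - out[y, x]) ** 2).sum(axis=1))
            err = out[y, x] - palette[idx]
            out[y, x] = palette[idx]
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (64, 64, 3))
palette = two_stage_palette(img.reshape(-1, 3), m=8, k=64)
dithered = error_diffuse(img, palette)
```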

  13. Pointing error analysis of Risley-prism-based beam steering system.

    PubMed

    Zhou, Yuan; Lu, Yafei; Hei, Mo; Liu, Guangcan; Fan, Dapeng

    2014-09-01

    Based on the vector form of Snell's law, ray tracing is performed to quantify the pointing errors of Risley-prism-based beam steering systems induced by component errors, prism orientation errors, and assembly errors. Case examples are given to elucidate the pointing error distributions in the field of regard and to evaluate the allowances of the error sources for a given pointing accuracy. It is found that the assembly errors of the second prism result in more remarkable pointing errors than those of the first one. The pointing errors induced by prism tilt depend on the tilt direction. The allowances of bearing tilt and prism tilt are almost identical if the same pointing accuracy is planned. All conclusions can provide a theoretical foundation for practical work.
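
    For reference, the vector form of Snell's law that underlies such a ray trace is compact enough to state directly (general refraction formula; the paper's specific error models for tilt and assembly are not reproduced here):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit ray d at a surface with unit normal n (oriented against d)."""
    eta = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                          # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

d = np.array([0.0, 0.0, 1.0])                # incoming ray along +z
tilt = np.radians(10.0)                      # prism face tilted 10 degrees
n = np.array([0.0, np.sin(tilt), -np.cos(tilt)])
print(refract(d, n, 1.0, 1.517))             # deviation at the tilted prism face
```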

  14. Measurement of diffusion coefficients from solution rates of bubbles

    NASA Technical Reports Server (NTRS)

    Krieger, I. M.

    1979-01-01

    The rate of solution of a stationary bubble is limited by the diffusion of dissolved gas molecules away from the bubble surface. Diffusion coefficients computed from measured rates of solution give mean values higher than accepted literature values, with standard errors as high as 10% for a single observation. Better accuracy is achieved with sparingly soluble gases, small bubbles, and highly viscous liquids. Accuracy correlates with the Grashof number, indicating that free convection is the major source of error. Accuracy should, therefore, be greatly increased in a gravity-free environment. The fact that the bubble will need no support is an additional important advantage of Spacelab for this measurement.

  15. Multi-Contrast Multi-Atlas Parcellation of Diffusion Tensor Imaging of the Human Brain

    PubMed Central

    Tang, Xiaoying; Yoshida, Shoko; Hsu, John; Huisman, Thierry A. G. M.; Faria, Andreia V.; Oishi, Kenichi; Kutten, Kwame; Poretti, Andrea; Li, Yue; Miller, Michael I.; Mori, Susumu

    2014-01-01

    In this paper, we propose a novel method for parcellating the human brain into 193 anatomical structures based on diffusion tensor images (DTIs). This was accomplished in the setting of multi-contrast diffeomorphic likelihood fusion using multiple DTI atlases. DTI images are modeled as high-dimensional fields, with each voxel exhibiting a vector-valued feature comprising mean diffusivity (MD), fractional anisotropy (FA), and fiber angle. For each structure, the probability distribution of each element in the feature vector is modeled as a mixture of Gaussians, the parameters of which are estimated from the labeled atlases. The structure-specific feature vector is then used to parcellate the test image. For each atlas, a likelihood is iteratively computed based on the structure-specific vector feature. The likelihoods from multiple atlases are then fused. The updating and fusing of the likelihoods is achieved based on the expectation-maximization (EM) algorithm for maximum a posteriori (MAP) estimation problems. We first demonstrate the performance of the algorithm by examining the parcellation accuracy of 18 structures from 25 subjects with a varying degree of structural abnormality. Dice values ranging from 0.8 to 0.9 were obtained. In addition, strong correlation was found between the volumes of the automated and the manual parcellations. Then, we present scan-rescan reproducibility based on another dataset of 16 DTI images – an average of 3.73%, 1.91%, and 1.79% for volume, mean FA, and mean MD, respectively. Finally, the range of anatomical variability in the normal population was quantified for each structure. PMID:24809486

  16. Orientationally invariant metrics of apparent compartment eccentricity from double pulsed field gradient diffusion experiments.

    PubMed

    Jespersen, Sune Nørhøj; Lundell, Henrik; Sønderby, Casper Kaae; Dyrby, Tim B

    2013-12-01

    Pulsed field gradient diffusion sequences (PFG) with multiple diffusion encoding blocks have been indicated to offer new microstructural tissue information, such as the ability to detect nonspherical compartment shapes in macroscopically isotropic samples, i.e. samples with negligible directional signal dependence on diffusion gradients in standard diffusion experiments. However, current acquisition schemes are not rotationally invariant in the sense that the derived metrics depend on the orientation of the sample, and are affected by the interplay of sampling directions and compartment orientation dispersion when applied to macroscopically anisotropic systems. Here we propose a new framework, the d-PFG 5-design, to enable rotationally invariant estimation of double wave vector diffusion metrics (d-PFG). The method is based on the idea that an appropriate orientational average of the signal emulates the signal from a powder preparation of the same sample, where macroscopic anisotropy is absent by construction. Our approach exploits the theory of exact numerical integration (quadrature) of polynomials on the rotation group, and we exemplify the general procedure with a set consisting of 60 pairs of diffusion wave vectors (the d-PFG 5-design) facilitating a theoretically exact determination of the fourth order Taylor or cumulant expansion of the orientationally averaged signal. The d-PFG 5-design is evaluated with numerical simulations and ex vivo high field diffusion MRI experiments in a nonhuman primate brain. Specifically, we demonstrate rotational invariance when estimating compartment eccentricity, which we show offers new microstructural information, complementary to that of fractional anisotropy (FA) from diffusion tensor imaging (DTI). The imaging observations are supported by a new theoretical result, directly relating compartment eccentricity to FA of individual pores. Copyright © 2013 John Wiley & Sons, Ltd.

  17. An alternative clinical routine for subjective refraction based on power vectors with trial frames.

    PubMed

    María Revert, Antonia; Conversa, Maria Amparo; Albarrán Diego, César; Micó, Vicente

    2017-01-01

    Subjective refraction determines the final point of refractive error assessment in most clinical environments, and its foundations have remained unchanged for decades. The purpose of this paper is to compare the results obtained when monocular subjective refraction is assessed in trial frames by a new clinical procedure based on a pure power vector interpretation with those of conventional clinical refraction procedures. An alternative clinical routine is described that uses the power vector interpretation with implementation in trial frames. Refractive error is determined in terms of: (i) the spherical equivalent (M component), and (ii) a pair of Jackson Crossed Cylinder lenses oriented at 0°/90° (J0 component) and 45°/135° (J45 component) for determination of astigmatism. This vector subjective refraction result (VR) is compared separately for the right and left eyes of 25 subjects (mean age, 35 ± 4 years) against conventional sphero-cylindrical subjective refraction (RX) using a phoropter. The VR procedure was applied with both conventional tumbling E optotypes (VR1) and modified optotypes with oblique orientation (VR2). Bland-Altman plots and the intra-class correlation coefficient showed good agreement between VR and RX (with coefficient values above 0.82), and ANOVA showed no significant differences in any of the power vector components between RX and VR. VR1 and VR2 procedure results were similar (p ≥ 0.77). The proposed routine determines the three components of refractive error in power vector notation [M, J0, J45], with a refraction time similar to that of conventional subjective procedures. The proposed routine could be helpful for inexperienced clinicians, and for experienced clinicians in those cases where it is difficult to get a valid starting point for conventional RX (irregular corneas, media opacities, etc.) and in refractive situations/places with inadequate refractive facilities/equipment. © 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.
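
    The underlying conversion from a sphero-cylindrical prescription [S, C, axis] to the power vector [M, J0, J45] is the standard one (Thibos notation):

```python
import numpy as np

def to_power_vector(S, C, axis_deg):
    """Sphere S, cylinder C (diopters), cylinder axis in degrees."""
    a = np.radians(axis_deg)
    M   = S + C / 2.0                  # spherical equivalent
    J0  = -(C / 2.0) * np.cos(2 * a)   # Jackson Crossed Cylinder at 0/90 deg
    J45 = -(C / 2.0) * np.sin(2 * a)   # Jackson Crossed Cylinder at 45/135 deg
    return M, J0, J45

print(to_power_vector(S=-2.00, C=-0.75, axis_deg=180))
```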

  18. Sensitivity study on durability variables of marine concrete structures

    NASA Astrophysics Data System (ADS)

    Zhou, Xin'gang; Li, Kefei

    2013-06-01

    In order to study the influence of parameters on the durability of marine concrete structures, a parameter sensitivity analysis was carried out in this paper. Using Fick's second law of diffusion and the deterministic sensitivity analysis (DSA) method, the sensitivity factors of the apparent surface chloride content, the apparent chloride diffusion coefficient, and its time-dependent attenuation factor were analyzed. The analysis shows that the design variables affect concrete durability to different degrees: the sensitivity factors of the chloride diffusion coefficient and its time-dependent attenuation factor were higher than the others, so a relatively small error in either induces a larger error in concrete durability design and service-life prediction. Using probabilistic sensitivity analysis (PSA), the influence of the mean value and variance of the durability design variables on the durability failure probability was studied. The results provide quantitative measures of the importance of concrete durability design and life-prediction variables. It was concluded that the chloride diffusion coefficient and its time-dependent attenuation factor have the greatest influence on the reliability of marine concrete structural durability; in durability design and life prediction of marine concrete structures, it is therefore very important to reduce the measurement and statistical errors of the durability design variables.
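
    A minimal sketch of the durability model such sensitivity studies are typically built on (the parameter values below are illustrative, not the paper's): the error-function solution of Fick's second law with an apparent chloride diffusion coefficient that attenuates in time.

```python
import numpy as np
from scipy.special import erf

def chloride_content(x_mm, t_years, Cs, D_ref, t_ref=28 / 365.0, m=0.4):
    """C(x,t) = Cs * (1 - erf(x / (2*sqrt(Da*t)))), with Da = D_ref*(t_ref/t)**m.
    t_ref is a 28-day reference age in years; D in mm^2/year."""
    Da = D_ref * (t_ref / t_years) ** m          # time-attenuated diffusivity
    return Cs * (1.0 - erf(x_mm / (2.0 * np.sqrt(Da * t_years))))

# e.g. 50 mm cover depth after 50 years, surface content 0.5% by concrete mass
print(chloride_content(x_mm=50.0, t_years=50.0, Cs=0.5, D_ref=300.0))
```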

  19. Model-based VQ for image data archival, retrieval and distribution

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1995-01-01

    An ideal image compression technique for image data archival, retrieval and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using a mean-removed error model and a Human Visual System (HVS) model. The error model assumed is a Laplacian distribution with mean lambda, computed from a sample of the input image. A Laplacian distribution with mean lambda is generated with a uniform random number generator. These random numbers are grouped into vectors. These vectors are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector. The DCT coefficients are filtered by multiplying by a weight matrix that is found to be optimal for human perception. The inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in the generation of the codebook is the mean, lambda, which is included in the coded file so that the codebook generation process can be repeated for decoding.
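
    A sketch of the codebook generation as the abstract describes it, with assumed details (the vector dimension, codebook size, and perceptual weight vector are placeholders, not the paper's values):

```python
import numpy as np
from scipy.fft import dct, idct

def laplacian(lam, size, rng):
    """Inverse-CDF sampling of a zero-mean Laplacian with scale lam."""
    u = rng.uniform(-0.5, 0.5, size)
    return -lam * np.sign(u) * np.log1p(-2.0 * np.abs(u))

def mvq_codebook(lam, n_vectors=256, dim=16, rng=None):
    rng = rng or np.random.default_rng(0)
    vecs = laplacian(lam, (n_vectors, dim), rng)      # grouped into vectors
    hvs_weight = 1.0 / (1.0 + np.arange(dim))         # hypothetical low-pass HVS weights
    coeffs = dct(vecs, norm='ortho', axis=1) * hvs_weight
    return idct(coeffs, norm='ortho', axis=1)         # perceptually conditioned codebook

codebook = mvq_codebook(lam=7.5)                      # lambda estimated from the image
print(codebook.shape)
```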

  20. Implementation and Assessment of Advanced Analog Vector-Matrix Processor

    NASA Technical Reports Server (NTRS)

    Gary, Charles K.; Bualat, Maria G.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    This paper discusses the design and implementation of an analog optical vector-matrix coprocessor with a throughput of 128 Mops for a personal computer. Vector-matrix calculations are inherently parallel, providing a promising domain for the use of optical calculators. However, to date, digital optical systems have proven too cumbersome to replace electronics, and analog processors have not demonstrated sufficient accuracy in large-scale systems. The goal of the work described in this paper is to demonstrate a viable optical coprocessor for linear operations. The analog optical processor presented has been integrated with a personal computer to provide full functionality and is the first demonstration of an optical linear algebra processor with a throughput greater than 100 Mops. The optical vector-matrix processor consists of a laser diode source, an acousto-optical modulator array to input the vector information, a liquid crystal spatial light modulator to input the matrix information, an avalanche photodiode array to read out the result vector of the vector-matrix multiplication, as well as transport optics and the electronics necessary to drive the optical modulators and interface to the computer. The intent of this research is to provide a low-cost, highly energy-efficient coprocessor for linear operations. Measurements of the analog accuracy of the processor performing 128 Mops are presented along with an assessment of the implications for future systems. A range of noise sources, including cross-talk, source amplitude fluctuations, shot noise at the detector, and non-linearities of the optoelectronic components, are measured and compared to determine the most significant source of error, and the possibilities for reducing these error sources are discussed. Also, the total error is compared with that expected from a statistical analysis of the individual components and their relation to the vector-matrix operation. The measured accuracy of the processor is compared with that required for a range of typical problems. Calculations resolving alloy concentrations from spectral plume data of rocket engines are implemented on the optical processor, demonstrating its sufficiency for this problem. We also show how this technology can be easily extended to a 100 x 100 10 MHz (200 Cops) processor.

  1. Quantum angular momentum diffusion of rigid bodies

    NASA Astrophysics Data System (ADS)

    Papendell, Birthe; Stickler, Benjamin A.; Hornberger, Klaus

    2017-12-01

    We show how to describe the diffusion of the quantized angular momentum vector of an arbitrarily shaped rigid rotor as induced by its collisional interaction with an environment. We present the general form of the Lindblad-type master equation and relate it to the orientational decoherence of an asymmetric nanoparticle in the limit of small anisotropies. The corresponding diffusion coefficients are derived for gas particles scattering off large molecules and for ambient photons scattering off dielectric particles, using the elastic scattering amplitudes.

  2. An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine

    PubMed Central

    Liu, Zhiyuan; Wang, Changhui

    2015-01-01

    In this paper, a new method is developed for mass air flow (MAF) sensor error compensation and online updating of the error map (or lookup table), accounting for installation and aging effects in a diesel engine. Since the MAF sensor error is dependent on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs, fuel mass injection quantity and engine speed. Meanwhile, the 2D map representing the MAF sensor error is described as a piecewise bilinear interpolation model, which can be written as a dot product between the regression vector and the parameter vector using a membership function. With the combination of the 2D map regression model and the diesel engine air path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under the conditions of persistent excitation and given inequalities. The observer is validated against simulation data from the engine software enDYNA provided by Tesis. The results demonstrate that the operating-point-dependent error of the MAF sensor can be approximated acceptably by the 2D map from the proposed method. PMID:26512675
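
    A small sketch of the map parametrization the abstract relies on: piecewise bilinear interpolation over a grid written as the dot product phi(x)^T theta, where phi holds the four nonzero membership weights and theta stacks the grid-node values. The grid axes and values below are illustrative, not the paper's calibration.

```python
import numpy as np

def regressor(x, y, xg, yg):
    """Membership vector phi for bilinear interpolation on the grid xg x yg."""
    i = np.clip(np.searchsorted(xg, x) - 1, 0, len(xg) - 2)
    j = np.clip(np.searchsorted(yg, y) - 1, 0, len(yg) - 2)
    tx = (x - xg[i]) / (xg[i + 1] - xg[i])
    ty = (y - yg[j]) / (yg[j + 1] - yg[j])
    phi = np.zeros((len(xg), len(yg)))
    phi[i, j] = (1 - tx) * (1 - ty)
    phi[i + 1, j] = tx * (1 - ty)
    phi[i, j + 1] = (1 - tx) * ty
    phi[i + 1, j + 1] = tx * ty
    return phi.ravel()                 # map output = phi(x)^T theta

xg = np.linspace(0, 60, 7)             # fuel injection quantity axis (mg/stroke)
yg = np.linspace(800, 4000, 9)         # engine speed axis (rpm)
theta = np.zeros(len(xg) * len(yg))    # map parameters, updated by the observer
print(regressor(25.0, 2100.0, xg, yg) @ theta)
```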

  3. Local Observability Analysis of Star Sensor Installation Errors in a SINS/CNS Integration System for Near-Earth Flight Vehicles

    PubMed Central

    Yang, Yanqiang; Zhang, Chunxi; Lu, Jiazhen

    2017-01-01

    Strapdown inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a fully autonomous, high-precision method that has been widely used to improve the hitting accuracy and quick-reaction capability of near-Earth flight vehicles. The installation errors between the SINS and the star sensors are one of the main factors restricting the actual accuracy of SINS/CNS. In this paper, an integration algorithm based on star vector observations is derived that takes the star sensor installation error into account. The installation error is then accurately estimated by Kalman filtering (KF). Meanwhile, a local observability analysis is performed on the rank of the observability matrix obtained via the linearized observation equation, and the observable conditions are presented and validated: the number of star vectors should be greater than or equal to 2, and the number of posture adjustments should also be greater than or equal to 2. Simulations indicate that the star sensor installation error is readily observable under the maneuvering condition; moreover, the attitude errors of the SINS are less than 7 arc-seconds. This analysis method and its conclusions are useful in the ballistic trajectory design of near-Earth flight vehicles. PMID:28275211

  4. Application of neuroanatomical features to tractography clustering.

    PubMed

    Wang, Qian; Yap, Pew-Thian; Wu, Guorong; Shen, Dinggang

    2013-09-01

    Diffusion tensor imaging allows unprecedented insight into brain neural connectivity in vivo by allowing reconstruction of neuronal tracts via captured patterns of water diffusion in white matter microstructures. However, tractography algorithms often output hundreds of thousands of fibers, rendering subsequent data analysis intractable. As a remedy, fiber clustering techniques are able to group fibers into dozens of bundles and thus facilitate analyses. Most existing fiber clustering methods rely on geometrical information of fibers, by viewing them as curves in 3D Euclidean space. The important neuroanatomical aspect of fibers, however, is ignored. In this article, the neuroanatomical information of each fiber is encapsulated in the associativity vector, which functions as the unique "fingerprint" of the fiber. Specifically, each entry in the associativity vector describes the relationship between the fiber and a certain anatomical ROI in a fuzzy manner. The value of the entry approaches 1 if the fiber is spatially related to the ROI at high confidence; on the contrary, the value drops closer to 0. The confidence of the ROI is calculated by diffusing the ROI according to the underlying fibers from tractography. In particular, we have adopted the fast marching method for simulation of ROI diffusion. Using the associativity vectors of fibers, we further model fibers as observations sampled from multivariate Gaussian mixtures in the feature space. To group all fibers into relevant major bundles, an expectation-maximization clustering approach is employed. Experimental results indicate that our method results in anatomically meaningful bundles that are highly consistent across subjects. Copyright © 2012 Wiley Periodicals, Inc., a Wiley company.

  5. Residual sweeping errors in turbulent particle pair diffusion in a Lagrangian diffusion model.

    PubMed

    Malik, Nadeem A

    2017-01-01

    Thomson, D. J. & Devenish, B. J. [J. Fluid Mech. 526, 277 (2005)] and others have suggested that sweeping effects make Lagrangian properties in Kinematic Simulations (KS) [Fung J. C. H., Hunt J. C. R., Malik N. A. & Perkins R. J., J. Fluid Mech. 236, 281 (1992)] unreliable. However, such a conclusion can only be drawn under the assumption of locality. The major aim here is to quantify the sweeping errors in KS without assuming locality. Through a novel analysis based upon analysing pairs of particle trajectories in a frame of reference moving with the large energy-containing scales of motion, it is shown that the normalized integrated error [Formula: see text] in the turbulent pair diffusivity (K) due to the sweeping effect decreases with increasing pair separation (σl), such that [Formula: see text] as σl/η → ∞; and [Formula: see text] as σl/η → 0. η is the Kolmogorov turbulence microscale. There is an intermediate range of separations 1 < σl/η < ∞ in which the error [Formula: see text] remains negligible. Simulations using KS show that in the swept frame of reference this intermediate range is large, covering almost the entire inertial subrange simulated, 1 < σl/η < 10^5, implying that the deviation from locality observed in KS cannot be attributed to sweeping errors. This is important for pair diffusion theory and modeling. PACS numbers: 47.27.E?, 47.27.Gs, 47.27.jv, 47.27.Ak, 47.27.tb, 47.27.eb, 47.11.-j.

  6. An investigation of reports of Controlled Flight Toward Terrain (CFTT)

    NASA Technical Reports Server (NTRS)

    Porter, R. F.; Loomis, J. P.

    1981-01-01

    Some 258 reports from more than 23,000 documents in the files of the Aviation Safety Reporting System (ASRS) were found to relate to the hazard of flight into terrain with no prior awareness by the crew of impending disaster. Examination of the reports indicates that human error was a causal factor in 64% of the incidents in which some threat of terrain conflict was experienced. Approximately two-thirds of the human errors were attributed to controllers, the most common discrepancy being a radar vector below the Minimum Vector Altitude (MVA). Errors by pilots were of a much more diverse nature and included a few instances of gross deviations from their assigned altitudes. The ground proximity warning system and the minimum safe altitude warning equipment were the initial recovery factor in some 18 serious incidents and were apparently the sole warning in six reported instances which otherwise would most probably have ended in disaster.

  7. Modeling and simulation for fewer-axis grinding of complex surface

    NASA Astrophysics Data System (ADS)

    Li, Zhengjian; Peng, Xiaoqiang; Song, Ci

    2017-10-01

    As the basis of fewer-axis grinding of complex surfaces, the grinding mathematical model is of great importance. A mathematical model of the grinding wheel was established, from which the coordinates and normal vector of the wheel profile could be calculated. Through normal vector matching at the cutter contact point and coordinate system transformation, the grinding mathematical model was established to work out the coordinates of the cutter location point. Based on the model, interference analysis was simulated to find the right position and posture of the workpiece for grinding. Then the positioning errors of the workpiece, including the translational positioning error and the rotational positioning error, were analyzed, and the main locating datum was obtained. According to the analysis results, the grinding tool path was planned and generated to grind the complex surface, and good form accuracy was obtained. The grinding mathematical model is simple, feasible and can be widely applied.

  8. Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines

    PubMed Central

    del Val, Lara; Izquierdo-Fuente, Alberto; Villacorta, Juan J.; Raboso, Mariano

    2015-01-01

    Drawing on the results of an acoustic biometric system based on an MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters and finally classifies them based on a Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering; segmentation, based on a Gaussian Mixture Model (GMM), to separate the person from the background; masking, to reduce the dimensions of the images; and binarization, to reduce the size of each image. An analysis of the classification error and a study of the sensitivity of the error versus the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the benefits required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden and the storage requirements. PMID:26091392

  9. Design of an optimal preview controller for linear discrete-time descriptor systems with state delay

    NASA Astrophysics Data System (ADS)

    Cao, Mengjuan; Liao, Fucheng

    2015-04-01

    In this paper, the linear discrete-time descriptor system with state delay is studied, and a design method for an optimal preview controller is proposed. First, by using the discrete lifting technique, the original system is transformed into a general descriptor system without state delay in form. Then, taking advantage of the first-order forward difference operator, we construct a descriptor augmented error system, including the state vectors of the lifted system, error vectors, and desired target signals. Rigorous mathematical proofs are given for the regularity, stabilisability, causal controllability, and causal observability of the descriptor augmented error system. Based on these, the optimal preview controller with preview feedforward compensation for the original system is obtained by using the standard optimal regulator theory of the descriptor system. The effectiveness of the proposed method is shown by numerical simulation.

  10. Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines.

    PubMed

    del Val, Lara; Izquierdo-Fuente, Alberto; Villacorta, Juan J; Raboso, Mariano

    2015-06-17

    Drawing on the results of an acoustic biometric system based on an MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters and finally classifies them based on a Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering; segmentation, based on a Gaussian Mixture Model (GMM), to separate the person from the background; masking, to reduce the dimensions of the images; and binarization, to reduce the size of each image. An analysis of the classification error and a study of the sensitivity of the error versus the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the benefits required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden and the storage requirements.

  11. Early Error Detection: An Action-Research Experience Teaching Vector Calculus

    ERIC Educational Resources Information Center

    Añino, María Magdalena; Merino, Gabriela; Miyara, Alberto; Perassi, Marisol; Ravera, Emiliano; Pita, Gustavo; Waigandt, Diana

    2014-01-01

    This paper describes an action-research experience carried out with second year students at the School of Engineering of the National University of Entre Ríos, Argentina. Vector calculus students played an active role in their own learning process. They were required to present weekly reports, in both oral and written forms, on the topics studied,…

  12. Error assessment of local tie vectors in space geodesy

    NASA Astrophysics Data System (ADS)

    Falkenberg, Jana; Heinkelmann, Robert; Schuh, Harald

    2014-05-01

    For the computation of the ITRF, the data of the geometric space-geodetic techniques at co-location sites are combined. The combination increases the redundancy and offers the possibility to utilize the strengths of each technique while mitigating their weaknesses. To enable the combination of co-located techniques, each technique needs to have a well-defined geometric reference point. Linking the geometric reference points enables the combination of the technique-specific coordinates into a multi-technique site coordinate. The vectors between these reference points are called "local ties". Local ties are usually realized by local surveys of the distances and/or angles between the reference points. Identified temporal variations of the reference points are considered in the local tie determination only indirectly, by assuming a mean position. Finally, the local ties measured in the local surveying network are to be transformed into the ITRF, the global geocentric equatorial coordinate system of the space-geodetic techniques. The current IERS procedure for the combination of the space-geodetic techniques includes the local tie vectors with an error floor of three millimeters plus a distance-dependent component. This error floor, however, significantly underestimates the real uncertainty of local tie determination. To fulfill the GGOS goals of 1 mm position and 0.1 mm/yr velocity accuracy, local ties will need to be accurate at the sub-mm level, which is currently not achievable. To assess the local tie effects on ITRF computations, the error sources will be investigated so that they can be realistically assessed and considered. Hence, a reasonable estimate of all the errors included in the various local ties is needed. An appropriate estimate could also improve the separation of local tie errors from technique-specific error contributions to the uncertainties and thus help assess the accuracy of space-geodetic techniques. Our investigations concern the simulation of the error contribution of each component of the local tie definition and determination. A closer look into the models of reference point definition, accessibility, measurement, and transformation is necessary to properly model the error of the local tie. The effect of temporal variations on the local ties will be studied as well. The transformation of the local survey into the ITRF can be assumed to be the largest error contributor, in particular the orientation of the local surveying network to the ITRF.

  13. Diffuse-flow conceptualization and simulation of the Edwards aquifer, San Antonio region, Texas

    USGS Publications Warehouse

    Lindgren, R.J.

    2006-01-01

    A numerical ground-water-flow model (hereinafter, the conduit-flow Edwards aquifer model) of the karstic Edwards aquifer in south-central Texas was developed for a previous study on the basis of a conceptualization emphasizing conduit development and conduit flow, and included simulating conduits as one-cell-wide, continuously connected features. Uncertainties regarding the degree to which conduits pervade the Edwards aquifer and influence ground-water flow, as well as other uncertainties inherent in simulating conduits, raised the question of whether a model based on the conduit-flow conceptualization was the optimum model for the Edwards aquifer. Accordingly, a model with an alternative hydraulic conductivity distribution without conduits was developed in a study conducted during 2004-05 by the U.S. Geological Survey, in cooperation with the San Antonio Water System. The hydraulic conductivity distribution for the modified Edwards aquifer model (hereinafter, the diffuse-flow Edwards aquifer model), based primarily on a conceptualization in which flow in the aquifer predominantly is through a network of numerous small fractures and openings, includes 38 zones, with hydraulic conductivities ranging from 3 to 50,000 feet per day. Revision of model input data for the diffuse-flow Edwards aquifer model was limited to changes in the simulated hydraulic conductivity distribution. The root-mean-square error for 144 target wells for the calibrated steady-state simulation for the diffuse-flow Edwards aquifer model is 20.9 feet. This error represents about 3 percent of the total head difference across the model area. The simulated springflows for Comal and San Marcos Springs for the calibrated steady-state simulation were within 2.4 and 15 percent of the median springflows for the two springs, respectively. The transient calibration period for the diffuse-flow Edwards aquifer model was 1947-2000, with 648 monthly stress periods, the same as for the conduit-flow Edwards aquifer model. The root-mean-square error for a period of drought (May-November 1956) for the calibrated transient simulation for 171 target wells is 33.4 feet, which represents about 5 percent of the total head difference across the model area. The root-mean-square error for a period of above-normal rainfall (November 1974-July 1975) for the calibrated transient simulation for 169 target wells is 25.8 feet, which represents about 4 percent of the total head difference across the model area. The root-mean-square error ranged from 6.3 to 30.4 feet in 12 target wells with long-term water-level measurements for varying periods during 1947-2000 for the calibrated transient simulation for the diffuse-flow Edwards aquifer model, and these errors represent 5.0 to 31.3 percent of the range in water-level fluctuations of each of those wells. The root-mean-square errors for the five major springs in the San Antonio segment of the aquifer for the calibrated transient simulation, as a percentage of the range of discharge fluctuations measured at the springs, varied from 7.2 percent for San Marcos Springs and 8.1 percent for Comal Springs to 28.8 percent for Leona Springs. The root-mean-square errors for hydraulic heads for the conduit-flow Edwards aquifer model are 27, 76, and 30 percent greater than those for the diffuse-flow Edwards aquifer model for the steady-state, drought, and above-normal rainfall synoptic time periods, respectively. 
The goodness-of-fit between measured and simulated springflows is similar for Comal, San Marcos, and Leona Springs for the diffuse-flow Edwards aquifer model and the conduit-flow Edwards aquifer model. The root-mean-square errors for Comal and Leona Springs were 15.6 and 21.3 percent less, respectively, whereas the root-mean-square error for San Marcos Springs was 3.3 percent greater for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. The root-mean-square errors for San Antonio and San Pedro Springs were appreciably greater, 80.2 and 51.0 percent, respectively, for the diffuse-flow Edwards aquifer model. The simulated water budgets for the diffuse-flow Edwards aquifer model are similar to those for the conduit-flow Edwards aquifer model. Differences in percentage of total sources or discharges for a budget component are 2.0 percent or less for all budget components for the steady-state and transient simulations. The largest difference in terms of the magnitude of water budget components for the transient simulation for 1956 was a decrease of about 10,730 acre-feet per year (about 2 per-cent) in springflow for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. This decrease in springflow (a water budget discharge) was largely offset by the decreased net loss of water from storage (a water budget source) of about 10,500 acre-feet per year.

  14. Simplified method for the screening of technological maturity of red grape and total phenolic compounds of red grape skin: application of the characteristic vector method to near-infrared spectra.

    PubMed

    Nogales-Bueno, Julio; Ayala, Fernando; Hernández-Hierro, José Miguel; Rodríguez-Pulido, Francisco José; Echávarri, José Federico; Heredia, Francisco José

    2015-05-06

    Characteristic vector analysis has been applied to near-infrared spectra to extract the main spectral information from hyperspectral images. For this purpose, 3, 6, 9, and 12 characteristic vectors have been used to reconstruct the spectra, and root-mean-square errors (RMSEs) have been calculated to measure the differences between the characteristic vector reconstructed spectra (CVRS) and the hyperspectral imaging spectra (HIS). The RMSE values obtained for spectra allocated to the validation set were 0.0049, 0.0018, 0.0012, and 0.0012 [log(1/R) units] for 3, 6, 9, and 12 characteristic vectors, respectively. After that, calibration models have been developed and validated using the different groups of CVRS to predict skin total phenolic concentration, sugar concentration, titratable acidity, and pH by modified partial least-squares (MPLS) regression. The obtained results have been compared to those previously obtained from HIS. The models developed from the CVRS reconstructed from 12 characteristic vectors present values of the coefficient of determination (RSQ) and standard error of prediction (SEP) similar to those of the models developed from HIS. RSQ and SEP were 0.84 and 1.13 mg g(-1) of grape skin (expressed as gallic acid equivalents), 0.93 and 2.26 °Brix, 0.97 and 3.87 g L(-1) (expressed as tartaric acid equivalents), and 0.91 and 0.14 for skin total phenolic concentration, sugar concentration, titratable acidity, and pH, respectively, for the models developed from the CVRS reconstructed from 12 characteristic vectors.

  15. A Double-difference Earthquake location algorithm: Method and application to the Northern Hayward Fault, California

    USGS Publications Warehouse

    Waldhauser, F.; Ellsworth, W.L.

    2000-01-01

    We have developed an efficient method to determine high-resolution hypocenter locations over large distances. The location method incorporates ordinary absolute travel-time measurements and/or cross-correlation P- and S-wave differential travel-time measurements. Residuals between observed and theoretical travel-time differences (or double-differences) are minimized for pairs of earthquakes at each station while linking together all observed event-station pairs. A least-squares solution is found by iteratively adjusting the vector difference between hypocentral pairs. The double-difference algorithm minimizes errors due to unmodeled velocity structure without the use of station corrections. Because catalog and cross-correlation data are combined into one system of equations, interevent distances within multiplets are determined to the accuracy of the cross-correlation data, while the relative locations between multiplets and uncorrelated events are simultaneously determined to the accuracy of the absolute travel-time data. Statistical resampling methods are used to estimate data accuracy and location errors. Uncertainties in double-difference locations are improved by more than an order of magnitude compared to catalog locations. The algorithm is tested, and its performance is demonstrated, on two clusters of earthquakes located on the northern Hayward fault, California. There it collapses the diffuse catalog locations into sharp images of seismicity and reveals horizontal lineations of hypocenters that define the narrow regions of the fault where stress is released by brittle failure.

  16. The effects of cracks on the quantification of the cancellous bone fabric tensor in fossil and archaeological specimens: a simulation study.

    PubMed

    Bishop, Peter J; Clemente, Christofer J; Hocknull, Scott A; Barrett, Rod S; Lloyd, David G

    2017-03-01

    Cancellous bone is very sensitive to its prevailing mechanical environment, and study of its architecture has previously aided interpretations of locomotor biomechanics in extinct animals or archaeological populations. However, quantification of architectural features may be compromised by poor preservation in fossil and archaeological specimens, such as post mortem cracking or fracturing. In this study, the effects of post mortem cracks on the quantification of cancellous bone fabric were investigated through the simulation of cracks in otherwise undamaged modern bone samples. The effect on both scalar (degree of fabric anisotropy, fabric elongation index) and vector (principal fabric directions) variables was assessed through comparing the results of architectural analyses of cracked vs. non-cracked samples. Error was found to decrease as the relative size of the crack decreased, and as the orientation of the crack approached the orientation of the primary fabric direction. However, even in the best-case scenario simulated, error remained substantial, with at least 18% of simulations showing a > 10% error when scalar variables were considered, and at least 6.7% of simulations showing a > 10° error when vector variables were considered. As a 10% (scalar) or 10° (vector) difference is probably too large for reliable interpretation of a fossil or archaeological specimen, these results suggest that cracks should be avoided if possible when analysing cancellous bone architecture in such specimens. © 2016 Anatomical Society.

  17. Diffusion theory of decision making in continuous report.

    PubMed

    Smith, Philip L

    2016-07-01

    I present a diffusion model for decision making in continuous report tasks, in which a continuous, circularly distributed, stimulus attribute in working memory is matched to a representation of the attribute in the stimulus display. Memory retrieval is modeled as a 2-dimensional diffusion process with vector-valued drift on a disk, whose bounding circle represents the decision criterion. The direction and magnitude of the drift vector describe the identity of the stimulus and the quality of its representation in memory, respectively. The point at which the diffusion exits the disk determines the reported value of the attribute and the time to exit the disk determines the decision time. Expressions for the joint distribution of decision times and report outcomes are obtained by means of the Girsanov change-of-measure theorem, which allows the properties of the nonzero-drift diffusion process to be characterized as a function of a Euclidian-distance Bessel process. Predicted report precision is equal to the product of the decision criterion and the drift magnitude and follows a von Mises distribution, in agreement with the treatment of precision in the working memory literature. Trial-to-trial variability in criterion and drift rate leads, respectively, to direct and inverse relationships between report accuracy and decision times, in agreement with, and generalizing, the standard diffusion model of 2-choice decisions. The 2-dimensional model provides a process account of working memory precision and its relationship with the diffusion model, and a new way to investigate the properties of working memory, via the distributions of decision times. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
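
    A Monte Carlo sketch of the model as described (a unit diffusion coefficient is assumed; the paper derives the joint distributions analytically via the Girsanov theorem rather than by simulation): a 2-D diffusion with drift vector v starts at the center of the disk, the exit point on the bounding circle gives the reported attribute value, and the exit time gives the decision time.

```python
import numpy as np

def simulate_trial(v, radius=1.0, dt=1e-3, rng=None):
    """Euler-Maruyama walk on the disk; returns (reported angle, decision time)."""
    rng = rng or np.random.default_rng()
    x = np.zeros(2)
    t = 0.0
    while np.linalg.norm(x) < radius:
        x += v * dt + np.sqrt(dt) * rng.standard_normal(2)
        t += dt
    return np.arctan2(x[1], x[0]), t

v = 2.5 * np.array([np.cos(0.3), np.sin(0.3)])   # drift toward true value 0.3 rad
angles, times = zip(*(simulate_trial(v) for _ in range(500)))
print(np.mean(angles), np.mean(times))           # reports cluster near 0.3 rad
```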

  18. An efficient system for reliably transmitting image and video data over low bit rate noisy channels

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.

    1994-01-01

    This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.

  19. SPECIAL ISSUE ON OPTICAL PROCESSING OF INFORMATION: Analysis of the precision parameters of an optoelectronic vector-matrix processor of digital information

    NASA Astrophysics Data System (ADS)

    Odinokov, S. B.; Petrov, A. V.

    1995-10-01

    Mathematical models of the components of a vector-matrix optoelectronic multiplier are considered. Perturbing factors influencing a real optoelectronic system — noise and errors of radiation sources and detectors, nonlinearity of an analogue-to-digital converter, nonideal optical systems — are taken into account. Analytic expressions are obtained relating the precision of such a multiplier to the probability of an error amounting to one bit, to the parameters describing the quality of the multiplier components, and to the quality of the optical system of the processor. Various methods of increasing the dynamic range of the multiplier are considered at the technical-systems level.

  20. Study on the precision of the guide control system of independent wheel

    NASA Astrophysics Data System (ADS)

    ji, Y.; Ren, L.; Li, R.; Sun, W.

    2016-09-01

    The torque ripple of a permanent magnet synchronous motor under active vector control is studied in this paper. The ripple arises from imperfections in position detection and current detection, errors generated in the inverter, and the influence of the motor itself (flux-linkage harmonics, the cogging effect, and so on). A simulation dynamic model of a bogie with a permanent magnet synchronous motor vector control system is then established in MATLAB/Simulink. The stability of the bogie under steering control is studied, as is the relationship between the error of the motor and the precision of the control system. The results show that the existing motor does not meet the requirements of the control system.

  1. Early error detection: an action-research experience teaching vector calculus

    NASA Astrophysics Data System (ADS)

    Magdalena Añino, María; Merino, Gabriela; Miyara, Alberto; Perassi, Marisol; Ravera, Emiliano; Pita, Gustavo; Waigandt, Diana

    2014-04-01

    This paper describes an action-research experience carried out with second-year students at the School of Engineering of the National University of Entre Ríos, Argentina. Vector calculus students played an active role in their own learning process. They were required to present weekly reports, in both oral and written forms, on the topics studied, instead of merely sitting and watching as the teacher solved problems on the blackboard. The students were also asked to perform computer assignments, and their learning process was continuously monitored. Among many benefits, this methodology has allowed students and teachers to identify errors and misconceptions that might have gone unnoticed under a more passive approach.

  2. A fingerprint key binding algorithm based on vector quantization and error correction

    NASA Astrophysics Data System (ADS)

    Li, Liang; Wang, Qian; Lv, Ke; He, Ning

    2012-04-01

    In recent years, research on seamlessly combining cryptosystems with biometric technologies, e.g. fingerprint recognition, has been conducted by many researchers. In this paper, we propose an algorithm for binding a fingerprint template to a cryptographic key, so that the key is protected and can be accessed only through fingerprint verification. To accommodate the intrinsic fuzziness of fingerprint samples, vector quantization and error correction techniques are introduced to transform the fingerprint template, after fingerprint registration and extraction of the global ridge pattern, before it is bound with the key. The key itself is secure because only its hash value is stored, and the key is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our ideas.
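
    The binding step described above follows the general pattern of a fuzzy-commitment construction: mask an error-correction codeword of the key with the quantized template, and store only the mask plus a hash of the key. The Python sketch below is an illustrative reconstruction under that assumption (a toy repetition code stands in for a real error-correcting code; this is not the authors' algorithm):

        import hashlib
        import numpy as np

        R = 5  # repetition factor of the toy error-correcting code

        def ecc_encode(key_bits):
            return np.repeat(key_bits, R)            # repeat each bit R times

        def ecc_decode(bits):
            return (bits.reshape(-1, R).sum(axis=1) > R // 2).astype(np.uint8)

        def bind(template_bits, key_bits):
            """Store only (helper, hash); neither reveals key or template alone."""
            helper = ecc_encode(key_bits) ^ template_bits
            return helper, hashlib.sha256(key_bits.tobytes()).hexdigest()

        def release(helper, key_hash, query_bits):
            """Recover the key iff the query is close enough to the template."""
            key = ecc_decode(helper ^ query_bits)
            ok = hashlib.sha256(key.tobytes()).hexdigest() == key_hash
            return key if ok else None

        rng = np.random.default_rng(0)
        key = rng.integers(0, 2, 32, dtype=np.uint8)
        template = rng.integers(0, 2, 32 * R, dtype=np.uint8)  # quantized features
        helper, tag = bind(template, key)
        query = template.copy()
        query[::40] ^= 1                             # a slightly noisy sample
        assert release(helper, tag, query) is not None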

  3. A SEASAT SASS simulation experiment to quantify the errors related to a + or - 3 hour intermittent assimilation technique

    NASA Technical Reports Server (NTRS)

    Sylvester, W. B.

    1984-01-01

    A series of SEASAT repeat orbits over a sequence of best low center positions is simulated by using the Seatrak satellite calculator. These low centers are, upon appropriate interpolation to hourly positions, located at various times during the + or - 3 hour assimilation cycle. Error analysis for a sample of best cyclone center positions taken from the Atlantic and Pacific oceans reveals a minimum average error of 1.1 deg of longitude and a standard deviation of 0.9 deg of longitude. The magnitude of the average error suggests that by utilizing the + or - 3 hour window in the assimilation cycle, the quality of the SASS data is degraded to the level of the background. A further consequence of this assimilation scheme is the effect of blending two or more juxtaposed vector winds, generally possessing different properties (vector quantity and time). The outcome of this is to reduce gradients in the wind field and to deform isobaric and frontal patterns of the initial field.

  4. Accurate optical vector network analyzer based on optical single-sideband modulation and balanced photodetection.

    PubMed

    Xue, Min; Pan, Shilong; Zhao, Yongjiu

    2015-02-15

    A novel optical vector network analyzer (OVNA) based on optical single-sideband (OSSB) modulation and balanced photodetection is proposed and experimentally demonstrated, which can eliminate the measurement error induced by the high-order sidebands in the OSSB signal. According to the analytical model of the conventional OSSB-based OVNA, if the optical carrier in the OSSB signal is fully suppressed, the measurement result is exactly the high-order-sideband-induced measurement error. By splitting the OSSB signal after the optical device-under-test (ODUT) into two paths, removing the optical carrier in one path, and then detecting the two signals in the two paths using a balanced photodetector (BPD), high-order-sideband-induced measurement error can be ideally eliminated. As a result, accurate responses of the ODUT can be achieved without complex post-signal processing. A proof-of-concept experiment is carried out. The magnitude and phase responses of a fiber Bragg grating (FBG) measured by the proposed OVNA with different modulation indices are superimposed, showing that the high-order-sideband-induced measurement error is effectively removed.

  5. MO-C-17A-04: Forecasting Longitudinal Changes in Oropharyngeal Tumor Morphology Throughout the Course of Head and Neck Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yock, A; UT Graduate School of Biomedical Sciences, Houston, TX; Rao, A

    2014-06-15

    Purpose: To generate, evaluate, and compare models that predict longitudinal changes in tumor morphology throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe the size, shape, and position of 35 oropharyngeal GTVs at each treatment fraction during intensity-modulated radiation therapy. The feature vectors comprised the coordinates of the GTV centroids and one of two shape descriptors. One shape descriptor was based on radial distances between the GTV centroid and 614 GTV surface landmarks. The other was based on a spherical harmonic decomposition of these distances. Feature vectors over the course of therapy were described using static, linear, and mean models. The error of these models in forecasting GTV morphology was evaluated with leave-one-out cross-validation, and their accuracy was compared using Wilcoxon signed-rank tests. The effect of adjusting model parameters at 1, 2, 3, or 5 time points (adjustment points) was also evaluated. Results: The addition of a single adjustment point to the static model decreased the median error in forecasting the position of GTV surface landmarks by 1.2 mm (p<0.001). Additional adjustment points further decreased forecast error by about 0.4 mm each. The linear model decreased forecast error compared to the static model for feature vectors based on both shape descriptors (0.2 mm), while the mean model did so only for those based on the inter-landmark distances (0.2 mm). The decrease in forecast error due to adding adjustment points was greater than that due to model selection. Both effects diminished with subsequent adjustment points. Conclusion: Models of tumor morphology that include information from prior patients and/or prior treatment fractions are able to predict the tumor surface at each treatment fraction during radiation therapy. The predicted tumor morphology can be compared with patient anatomy or dose distributions, opening the possibility of anticipatory re-planning. American Legion Auxiliary Fellowship; The University of Texas Graduate School of Biomedical Sciences at Houston.

  6. Fringe localization requirements for three-dimensional flow visualization of shock waves in diffuse-illumination double-pulse holographic interferometry

    NASA Technical Reports Server (NTRS)

    Decker, A. J.

    1982-01-01

    A theory of fringe localization in rapid-double-exposure, diffuse-illumination holographic interferometry was developed. The theory was then applied to compare holographic measurements with laser anemometer measurements of shock locations in a transonic axial-flow compressor rotor. The computed fringe localization error was found to agree well with the measured localization error. It is shown how the view orientation and the curvature and positional variation of the strength of a shock wave are used to determine the localization error and to minimize it. In particular, it is suggested that the view direction not deviate from tangency at the shock surface by more than 30 degrees.

  7. Bluetongue virus spread in Europe is a consequence of climatic, landscape and vertebrate host factors as revealed by phylogeographic inference

    PubMed Central

    Palmarini, Massimo; Mertens, Peter

    2017-01-01

    Spatio-temporal patterns of the spread of infectious diseases are commonly driven by environmental and ecological factors. This is particularly true for vector-borne diseases because vector populations can be strongly affected by host distribution as well as by climatic and landscape variables. Here, we aim to identify environmental drivers for bluetongue virus (BTV), the causative agent of a major vector-borne disease of ruminants that has emerged multiple times in Europe in recent decades. In order to determine the importance of climatic, landscape and host-related factors affecting BTV diffusion across Europe, we fitted different phylogeographic models to a dataset of 113 time-stamped and geo-referenced BTV genomes, representing multiple strains and serotypes. Diffusion models using continuous space revealed that terrestrial habitat below 300 m altitude, wind direction and higher livestock densities were associated with faster BTV movement. Results of discrete phylogeographic analysis involving generalized linear models broadly supported these findings, but varied considerably with the level of spatial partitioning. Contrary to common perception, we found no evidence for average temperature having a positive effect on BTV diffusion, though both methodological and biological reasons could be responsible for this result. Our study provides important insights into the drivers of BTV transmission at the landscape scale that could inform predictive models of viral spread and have implications for designing control strategies. PMID:29021180

  8. Ion diffusion may introduce spurious current sources in current-source density (CSD) analysis.

    PubMed

    Halnes, Geir; Mäki-Marttunen, Tuomo; Pettersen, Klas H; Andreassen, Ole A; Einevoll, Gaute T

    2017-07-01

    Current-source density (CSD) analysis is a well-established method for analyzing recorded local field potentials (LFPs), that is, the low-frequency part of extracellular potentials. Standard CSD theory is based on the assumption that all extracellular currents are purely ohmic, and thus neglects the possible impact from ionic diffusion on recorded potentials. However, it has previously been shown that in physiological conditions with large ion-concentration gradients, diffusive currents can evoke slow shifts in extracellular potentials. Using computer simulations, we here show that diffusion-evoked potential shifts can introduce errors in standard CSD analysis, and can lead to prediction of spurious current sources. Further, we here show that the diffusion-evoked prediction errors can be removed by using an improved CSD estimator which accounts for concentration-dependent effects. NEW & NOTEWORTHY Standard CSD analysis does not account for ionic diffusion. Using biophysically realistic computer simulations, we show that unaccounted-for diffusive currents can lead to the prediction of spurious current sources. This finding may be of strong interest for in vivo electrophysiologists doing extracellular recordings in general, and CSD analysis in particular. Copyright © 2017 the American Physiological Society.
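
    For reference, the standard estimator discussed above recovers sources as the sign-inverted second spatial derivative of the potential along the probe, under the purely ohmic assumption. A minimal Python sketch of that baseline (generic textbook form, with assumed spacing and conductivity values; this is not the improved concentration-aware estimator of the paper):

        import numpy as np

        def standard_csd(lfp, h=1e-4, sigma=0.3):
            """Second-spatial-derivative CSD estimate along a laminar probe.

            lfp   : array (n_channels, n_times) of potentials [V]
            h     : electrode spacing [m]
            sigma : extracellular conductivity [S/m], assumed purely ohmic
            """
            # CSD_i = -sigma * (phi_{i+1} - 2*phi_i + phi_{i-1}) / h**2
            return -sigma * (lfp[2:] - 2 * lfp[1:-1] + lfp[:-2]) / h**2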

  9. Diffusion tensor tracking of neuronal fiber pathways in the living human brain

    NASA Astrophysics Data System (ADS)

    Lori, Nicolas Francisco

    2001-11-01

    The technique of diffusion tensor tracking (DTT) is described, in which diffusion tensor magnetic resonance imaging (DT-MRI) data are processed to allow the visualization of white matter (WM) tracts in a living human brain. To illustrate the methods, a detailed description is given of the physics of DT-MRI, the structure of the DT-MRI experiment, the computer tools that were developed to visualize WM tracts, the anatomical consistency of the obtained WM tracts, and the accuracy and precision of DTT using computer simulations. When presenting the physics of DT-MRI, a completely quantum-mechanical view of DT-MRI is given where some of the results are new. Examples of anatomical tracts viewed using DTT are presented, including the genu and the splenium of the corpus callosum, the ventral pathway with its amygdala connection highlighted, the geniculo-calcarine tract separated into anterior and posterior parts, the geniculo-calcarine tract defined using functional magnetic resonance imaging (fMRI), and U-fibers. In the simulation, synthetic DT-MRI data were constructed that would be obtained for a cylindrical WM tract with a helical trajectory surrounded by gray matter. Noise was then added to the synthetic DT-MRI data, and DTT trajectories were calculated using the noisy data (realistic tracks). Simulated DTT errors were calculated as the vector distance between the realistic tracks and the ideal trajectory. The simulation tested the effects of a comprehensive set of experimental conditions, including voxel size, data sampling, data averaging, type of tract tissue, tract diameter and type of tract trajectory. Simulated DTT accuracy and precision were typically below the voxel dimension, and precision was compatible with the experimental results.
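
    At its core, DTT propagates a streamline through the field of principal diffusion eigenvectors. The sketch below shows that integration loop on a synthetic eigenvector field (illustrative only, with nearest-neighbor lookup and a fixed step size; real trackers add interpolation and anisotropy-based stopping rules):

        import numpy as np

        def track(seed, eigvec_field, step=0.5, n_steps=200):
            """Follow the principal eigenvector field from a seed point.

            eigvec_field : array (nx, ny, nz, 3) of unit principal eigenvectors.
            The sign ambiguity of eigenvectors is resolved by keeping a
            consistent heading from step to step.
            """
            pos = np.asarray(seed, float)
            path, heading = [pos.copy()], None
            for _ in range(n_steps):
                idx = tuple(np.clip(np.round(pos).astype(int), 0,
                                    np.array(eigvec_field.shape[:3]) - 1))
                v = eigvec_field[idx]
                if heading is not None and np.dot(v, heading) < 0:
                    v = -v                  # eigenvectors have no intrinsic sign
                pos = pos + step * v
                heading = v
                path.append(pos.copy())
            return np.array(path)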

  10. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Tyrus, E-mail: thb11@psu.edu; Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, 503 Walker Building, University Park, PA 16802-5013

    2016-03-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.

  11. Low-Angle Radar Tracking

    DTIC Science & Technology

    1976-02-01

    [Fragmentary text extracted from the report. Surviving items include a table-of-contents entry, "Transition from Specular Reflection to Diffuse Scattering", and a figure caption, "Composition of the electric-field vector as seen at the radar, R, in Fig. 2". A surviving passage notes that the electric field at the radar, E, is the sum..., that at wavelengths in the VHF and UHF ranges even subsurface characteristics can be important, and that in a field experiment one must therefore be careful to measure...]

  12. Attitude determination using vector observations: A fast optimal matrix algorithm

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1993-01-01

    The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate. The method also provides an estimate of the attitude error covariance matrix. Analysis of the special case of two vector observations identifies those cases for which the TRIAD or algebraic method minimizes Wahba's loss function.
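
    For context, a standard way to compute the attitude matrix minimizing Wahba's loss is via the singular value decomposition. The snippet below is a generic textbook sketch of that SVD solution, not necessarily the fast method presented in the paper:

        import numpy as np

        def wahba_svd(body_vecs, ref_vecs, weights):
            """Attitude matrix A minimizing Wahba's loss
            L(A) = 0.5 * sum_i w_i * ||b_i - A r_i||^2  (SVD method)."""
            B = sum(w * np.outer(b, r)
                    for w, b, r in zip(weights, body_vecs, ref_vecs))
            U, _, Vt = np.linalg.svd(B)
            # Enforce det(A) = +1 so that A is a proper rotation
            M = np.diag([1.0, 1.0, np.linalg.det(U) * np.linalg.det(Vt)])
            return U @ M @ Vt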

  13. Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Cooley, R.L.; Christensen, S.

    2006-01-01

    Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Yβ*, where Y is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Yβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Yβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Yβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.

  14. Predictive control strategies for wind turbine system based on permanent magnet synchronous generator.

    PubMed

    Maaoui-Ben Hassine, Ikram; Naouar, Mohamed Wissem; Mrabet-Bellaaj, Najiba

    2016-05-01

    In this paper, Model Predictive Control (MPC) and dead-beat predictive control strategies are proposed for the control of a PMSG based wind energy system. The proposed MPC considers the model of the converter-based system to forecast the possible future behavior of the controlled variables. It allows selecting the voltage vector to be applied that leads to a minimum error by minimizing a predefined cost function. The main features of the MPC are low current THD and robustness against parameter variations. The dead-beat predictive control is based on the system model to compute the optimum voltage vector that ensures zero steady-state error. The optimum voltage vector is then applied through the Space Vector Modulation (SVM) technique. The main advantages of the dead-beat predictive control are low current THD and constant switching frequency. The proposed control techniques are presented and detailed for the control of the back-to-back converter in a wind turbine system based on a PMSG. Simulation results (obtained in the Matlab-Simulink environment) and experimental results (obtained on a developed prototyping platform) are presented in order to show the performances of the considered control strategies. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
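
    The MPC selection step described above can be illustrated with a generic finite-control-set toy example: enumerate the discrete voltage vectors of a two-level converter, predict the next-step current from a simple discretized model, and apply the vector minimizing a current-tracking cost. All parameters below are made up, and the plant model is deliberately simplified (back-EMF neglected); this is not the authors' controller:

        import numpy as np

        R, L, Ts = 0.5, 1e-3, 1e-4      # resistance [ohm], inductance [H], sample time [s]
        Vdc = 300.0                     # DC-link voltage [V]

        # The 8 switch states of a two-level converter and their alpha-beta voltages
        switch_states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

        def to_alpha_beta(s):
            va, vb, vc = (Vdc * x for x in s)
            return np.array([(2 * va - vb - vc) / 3.0, (vb - vc) / np.sqrt(3.0)])

        V = [to_alpha_beta(s) for s in switch_states]

        def predict(i_now, v):
            """One-step Euler prediction of the current (R-L model, no back-EMF)."""
            return i_now + (Ts / L) * (v - R * i_now)

        def best_vector(i_now, i_ref):
            """Pick the voltage vector whose predicted current minimizes the cost."""
            costs = [np.sum((i_ref - predict(i_now, v))**2) for v in V]
            return int(np.argmin(costs))

        idx = best_vector(np.array([1.0, 0.0]), np.array([2.0, 1.0]))
        print("apply switch state:", switch_states[idx])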

  15. Convergence of lateral dynamic measurements in the plasma membrane of live cells from single particle tracking and STED-FCS

    NASA Astrophysics Data System (ADS)

    Lagerholm, B. Christoffer; Andrade, Débora M.; Clausen, Mathias P.; Eggeling, Christian

    2017-02-01

    Fluorescence correlation spectroscopy (FCS) in combination with the super-resolution imaging method STED (STED-FCS), and single-particle tracking (SPT) are able to directly probe the lateral dynamics of lipids and proteins in the plasma membrane of live cells at spatial scales much below the diffraction limit of conventional microscopy. However, a major disparity in interpretation of data from SPT and STED-FCS remains, namely the proposed existence of a very fast (unhindered) lateral diffusion coefficient, ⩾5 µm2 s-1, in the plasma membrane of live cells at very short length scales, ≈⩽ 100 nm, and time scales, ≈1-10 ms. This fast diffusion coefficient has been advocated in several high-speed SPT studies, for lipids and membrane proteins alike, but the equivalent has not been detected in STED-FCS measurements. Resolving this ambiguity is important because the assessment of membrane dynamics currently relies heavily on SPT for the determination of heterogeneous diffusion. A possible systematic error in this approach would thus have vast implications in this field. To address this, we have re-visited the analysis procedure for SPT data with an emphasis on the measurement errors and the effect that these errors have on the measurement outputs. We subsequently demonstrate that STED-FCS and SPT data, following careful consideration of the experimental errors of the SPT data, converge to a common interpretation which for the case of a diffusing phospholipid analogue in the plasma membrane of live mouse embryo fibroblasts results in an unhindered, intra-compartment, diffusion coefficient of  ≈0.7-1.0 µm2 s-1, and a compartment size of about 100-150 nm.

  16. Convergence of lateral dynamic measurements in the plasma membrane of live cells from single particle tracking and STED-FCS

    PubMed Central

    Lagerholm, B Christoffer; Andrade, Débora M; Clausen, Mathias P; Eggeling, Christian

    2017-01-01

    Abstract Fluorescence correlation spectroscopy (FCS) in combination with the super-resolution imaging method STED (STED-FCS), and single-particle tracking (SPT) are able to directly probe the lateral dynamics of lipids and proteins in the plasma membrane of live cells at spatial scales much below the diffraction limit of conventional microscopy. However, a major disparity in interpretation of data from SPT and STED-FCS remains, namely the proposed existence of a very fast (unhindered) lateral diffusion coefficient, ⩾5 µm2 s−1, in the plasma membrane of live cells at very short length scales, ≈⩽ 100 nm, and time scales, ≈1–10 ms. This fast diffusion coefficient has been advocated in several high-speed SPT studies, for lipids and membrane proteins alike, but the equivalent has not been detected in STED-FCS measurements. Resolving this ambiguity is important because the assessment of membrane dynamics currently relies heavily on SPT for the determination of heterogeneous diffusion. A possible systematic error in this approach would thus have vast implications in this field. To address this, we have re-visited the analysis procedure for SPT data with an emphasis on the measurement errors and the effect that these errors have on the measurement outputs. We subsequently demonstrate that STED-FCS and SPT data, following careful consideration of the experimental errors of the SPT data, converge to a common interpretation which for the case of a diffusing phospholipid analogue in the plasma membrane of live mouse embryo fibroblasts results in an unhindered, intra-compartment, diffusion coefficient of  ≈0.7–1.0 µm2 s−1, and a compartment size of about 100–150 nm. PMID:28458397

  17. Vectorization of Nucleic Acids for Therapeutic Approach: Tutorial Review.

    PubMed

    Geinguenaud, Frederic; Guenin, Erwann; Lalatonne, Yoann; Motte, Laurence

    2016-05-20

    Oligonucleotides present a high therapeutic potential for a wide variety of diseases. However, their clinical development is limited by their degradation by nucleases and their poor blood circulation time. Depending on the administration mode and the cellular target, these macromolecules will have to cross the vascular endothelium, to diffuse through the extracellular matrix, to be transported through the cell membrane, and finally to reach the cytoplasm. To overcome these physiological barriers, many strategies have been developed. Here, we review different methods of DNA vectorization, discuss limitations and advantages of the various vectors, and provide new perspectives for future development.

  18. Emergency Department Visit Forecasting and Dynamic Nursing Staff Allocation Using Machine Learning Techniques With Readily Available Open-Source Software.

    PubMed

    Zlotnik, Alexander; Gallardo-Antolín, Ascensión; Cuchí Alfaro, Miguel; Pérez Pérez, María Carmen; Montero Martínez, Juan Manuel

    2015-08-01

    Although emergency department visit forecasting can be of use for nurse staff planning, previous research has focused on models that lacked sufficient resolution and realistic error metrics for these predictions to be applied in practice. Using data from a 1100-bed specialized care hospital with 553,000 patients assigned to its healthcare area, forecasts with different prediction horizons, from 2 to 24 weeks ahead, with an 8-hour granularity, using support vector regression, M5P, and stratified average time-series models were generated with an open-source software package. As overstaffing and understaffing errors have different implications, error metrics and potential personnel monetary savings were calculated with a custom validation scheme, which simulated subsequent generation of predictions during a 4-year period. Results were then compared with a generalized estimating equation regression. Support vector regression and M5P models were found to be superior to the stratified average model with a 95% confidence interval. Our findings suggest that medium and severe understaffing situations could be reduced by more than an order of magnitude and average yearly savings of up to €683,500 could be achieved if dynamic nursing staff allocation were performed with support vector regression instead of the static staffing levels currently in use.
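
    A minimal sketch of support vector regression applied to visit-count forecasting at 8-hour granularity is shown below (synthetic data and an illustrative feature design; the paper's models, features, and custom asymmetric validation scheme are more elaborate):

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        # Toy visit counts at 8-hour granularity (3 bins per day)
        rng = np.random.default_rng(0)
        n = 3 * 365
        visits = 100 + 20 * np.sin(2 * np.pi * np.arange(n) / 3) + rng.normal(0, 5, n)

        def make_features(series, horizon):
            """Lagged counts plus a bin-of-day indicator; deliberately simple."""
            lags, X, y = 21, [], []
            for t in range(lags, len(series) - horizon):
                X.append(np.r_[series[t - lags:t], t % 3])
                y.append(series[t + horizon])
            return np.array(X), np.array(y)

        X, y = make_features(visits, horizon=3 * 7 * 2)   # two weeks ahead
        split = int(0.8 * len(X))
        model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=1.0))
        model.fit(X[:split], y[:split])
        pred = model.predict(X[split:])
        print("MAE:", np.mean(np.abs(pred - y[split:])))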

  19. Calculation method for steady-state pollutant concentration in mixing zones considering variable lateral diffusion coefficient.

    PubMed

    Wu, Wen; Wu, Zhouhu; Song, Zhiwen

    2017-07-01

    Prediction of the pollutant mixing zone (PMZ) near the discharge outfall in Huangshaxi shows large error when using methods based on the constant lateral diffusion assumption. The discrepancy is due to the neglect of the variation of the diffusion coefficient. The variable lateral diffusion coefficient is proposed to be a function of the longitudinal distance from the outfall. An analytical solution of the two-dimensional advection-diffusion equation for a pollutant is derived and discussed. Formulas characterizing the geometry of the PMZ are derived from this solution, and a standard curve describing the boundary of the PMZ is obtained by proper choices of the normalization scales. The change of PMZ topology due to the variable diffusion coefficient is then discussed using these formulas. A criterion is found for when the lateral diffusion coefficient may be assumed constant without large error in the PMZ geometry. It is also demonstrated how to use these analytical formulas in inverse problems, including estimating the lateral diffusion coefficient in rivers from convenient measurements and determining the maximum allowable discharge load from limitations on the geometrical scales of the PMZ. Finally, applications of the obtained formulas to onsite PMZ measurements in Huangshaxi show excellent agreement.
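
    In generic notation (symbols ours, not necessarily the paper's; longitudinal dispersion neglected, as is standard in mixing-zone analysis), the steady-state balance with a laterally varying diffusion coefficient reads

        u \frac{\partial C}{\partial x} = \frac{\partial}{\partial y}\left( E_y(x)\, \frac{\partial C}{\partial y} \right),

    with u the longitudinal velocity, C the pollutant concentration, and E_y(x) the lateral diffusion coefficient as a function of distance x from the outfall; setting E_y constant recovers the conventional model whose PMZ predictions showed large error.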

  20. Derivation of a closed form analytical expression for fluorescence recovery after photo bleaching in the case of continuous bleaching during read out

    NASA Astrophysics Data System (ADS)

    Endress, E.; Weigelt, S.; Reents, G.; Bayerl, T. M.

    2005-01-01

    Measurements of very slow diffusive processes in membranes, like the diffusion of integral membrane proteins, by fluorescence recovery after photobleaching (FRAP) are hampered by bleaching of the probe during the read-out of the fluorescence recovery. In the limit of long observation time (very slow diffusion, as in the case of large membrane proteins), this bleaching may cause errors in the recovery function and thus yields error-prone diffusion coefficients. In this work we present a new approach to a two-dimensional closed-form analytical solution of the reaction-diffusion equation, based on the addition of a dissipative term to the conventional diffusion equation. The calculation was done assuming (i) a Gaussian laser beam profile for bleaching the spot and (ii) that the fluorescence intensity profile emerging from the spot can be approximated by a two-dimensional Gaussian. The detection scheme derived from the analytical solution allows for diffusion measurements without the constraint of observation bleaching. Recovery curves of experimental FRAP data obtained under non-negligible read-out bleaching for native membranes (rabbit endoplasmic reticulum) on a planar solid support showed excellent agreement with the analytical solution and allowed the calculation of the lipid diffusion coefficient.
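
    The modification described above amounts to adding a dissipative (bleaching) term to the diffusion equation. In generic notation (symbols ours), with a bleach rate proportional to the Gaussian read-out beam intensity,

        \frac{\partial c}{\partial t} = D\, \nabla^2 c \; - \; k(\mathbf{r})\, c, \qquad k(\mathbf{r}) \propto I(\mathbf{r}) = I_0\, e^{-2 r^2 / w^2},

    where c is the unbleached fluorophore concentration, D the diffusion coefficient, and w the beam radius; the closed-form recovery function of the paper follows from solving this equation in two dimensions.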

  1. Chamber measurement of surface-atmosphere trace gas exchange: Numerical evaluation of dependence on soil, interfacial layer, and source/sink properties

    NASA Astrophysics Data System (ADS)

    Hutchinson, G. L.; Livingston, G. P.; Healy, R. W.; Striegl, R. G.

    2000-04-01

    We employed a three-dimensional finite difference gas diffusion model to simulate the performance of chambers used to measure surface-atmosphere trace gas exchange. We found that systematic errors often result from conventional chamber design and deployment protocols, as well as key assumptions behind the estimation of trace gas exchange rates from observed concentration data. Specifically, our simulations showed that (1) when a chamber significantly alters atmospheric mixing processes operating near the soil surface, it also nearly instantaneously enhances or suppresses the postdeployment gas exchange rate, (2) any change resulting in greater soil gas diffusivity, or greater partitioning of the diffusing gas to solid or liquid soil fractions, increases the potential for chamber-induced measurement error, and (3) all such errors are independent of the magnitude, kinetics, and/or distribution of trace gas sources, but greater for trace gas sinks with the same initial absolute flux. Finally, and most importantly, we found that our results apply to steady state as well as non-steady-state chambers, because the slow rate of gas diffusion in soil inhibits recovery of the former from their initial non-steady-state condition. Over a range of representative conditions, the error in steady state chamber estimates of the trace gas flux varied from -30 to +32%, while estimates computed by linear regression from non-steady-state chamber concentrations were 2 to 31% too small. Although such errors are relatively small in comparison to the temporal and spatial variability characteristic of trace gas exchange, they bias the summary statistics for each experiment as well as larger scale trace gas flux estimates based on them.

  2. Chamber measurement of surface-atmosphere trace gas exchange--Numerical evaluation of dependence on soil, interfacial layer, and source/sink properties

    USGS Publications Warehouse

    Hutchinson, G.L.; Livingston, G.P.; Healy, R.W.; Striegl, Robert G.

    2000-01-01

    We employed a three-dimensional finite difference gas diffusion model to simulate the performance of chambers used to measure surface-atmosphere trace gas exchange. We found that systematic errors often result from conventional chamber design and deployment protocols, as well as key assumptions behind the estimation of trace gas exchange rates from observed concentration data. Specifically, our simulations showed that (1) when a chamber significantly alters atmospheric mixing processes operating near the soil surface, it also nearly instantaneously enhances or suppresses the postdeployment gas exchange rate, (2) any change resulting in greater soil gas diffusivity, or greater partitioning of the diffusing gas to solid or liquid soil fractions, increases the potential for chamber-induced measurement error, and (3) all such errors are independent of the magnitude, kinetics, and/or distribution of trace gas sources, but greater for trace gas sinks with the same initial absolute flux. Finally, and most importantly, we found that our results apply to steady state as well as non-steady-state chambers, because the slow rate of gas diffusion in soil inhibits recovery of the former from their initial non-steady-state condition. Over a range of representative conditions, the error in steady state chamber estimates of the trace gas flux varied from -30 to +32%, while estimates computed by linear regression from non-steady-state chamber concentrations were 2 to 31% too small. Although such errors are relatively small in comparison to the temporal and spatial variability characteristic of trace gas exchange, they bias the summary statistics for each experiment as well as larger scale trace gas flux estimates based on them.

  3. Dielectrophoresis enhances the whitening effect of carbamide peroxide on enamel.

    PubMed

    Ivanoff, Chris S; Hottel, Timothy L; Garcia-Godoy, Franklin; Riga, Alan T

    2011-10-01

    To compare the enamel whitening effect of a 20-minute dielectrophoresis-enhanced electrochemical delivery to a 20-minute diffusion treatment. Forty freshly extracted human teeth without detectable caries or restoration were stored in distilled water at 4 degrees C and used within 1 month of extraction. Two different bleaching gels (Plus White 5 Minute Speed Whitening Gel and 35% Opalescence PF gel) were tested. The study had two parts: Part 1--Quantitative comparison of hydrogen peroxide (H2O2, HP) absorption--following application of an over-the-counter 35% HP whitening gel (Plus White 5 Minute Speed Whitening Gel) to 30 (n = 30) extracted human teeth by conventional diffusion or dielectrophoresis. The amount of H2O2 that diffused from the dentin was measured by a colorimetric oxidation-reduction reaction kit. HP concentration was measured by UV-Vis spectroscopy at 550 nm. Part 2--HP diffusion in stained teeth--35% carbamide peroxide whitening gel (35% Opalescence PF gel) was applied to 10 extracted human teeth (n = 10) stained by immersion in a black tea solution for 48 hours. The teeth were randomly assigned to the 20-minute dielectrophoresis or diffusion treatment group; whitening was evaluated by a dental spectrophotometer and macro-photography. Part 1: The analysis found significant differences between both groups with relative percent errors of 3% or less (a single outlier had an RPE of 12%). The average absorbance for the dielectrophoresis group in round 1 was 79% greater than that of the diffusion group. The average absorbance for the dielectrophoresis group in round 2 was 130% greater than that of the diffusion group. A single-factor ANOVA found a statistically significant difference between the diffusion and dielectrophoresis groups (P = 0.01). Part 2: The average change in Shade Guide Units (SGU) was 0.6 for the diffusion group, well under the error of measurement of 0.82 SGU. The average change in SGU for the dielectrophoresis group was 9, significantly above the error of measurement and 14 times or 1,400% greater than the diffusion group average. A single-factor ANOVA found a statistically significant difference between the diffusion and dielectrophoresis treatment groups (P < 0.001).

  4. Chromosomal locus tracking with proper accounting of static and dynamic errors

    PubMed Central

    Backlund, Mikael P.; Joyner, Ryan; Moerner, W. E.

    2015-01-01

    The mean-squared displacement (MSD) and velocity autocorrelation (VAC) of tracked single particles or molecules are ubiquitous metrics for extracting parameters that describe the object’s motion, but they are both corrupted by experimental errors that hinder the quantitative extraction of underlying parameters. For the simple case of pure Brownian motion, the effects of localization error due to photon statistics (“static error”) and motion blur due to finite exposure time (“dynamic error”) on the MSD and VAC are already routinely treated. However, particles moving through complex environments such as cells, nuclei, or polymers often exhibit anomalous diffusion, for which the effects of these errors are less often sufficiently treated. We present data from tracked chromosomal loci in yeast that demonstrate the necessity of properly accounting for both static and dynamic error in the context of an anomalous diffusion that is consistent with a fractional Brownian motion (FBM). We compare these data to analytical forms of the expected values of the MSD and VAC for a general FBM in the presence of these errors. PMID:26172745
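
    For the pure Brownian case mentioned above, the routinely applied corrections take a simple closed form; the one-dimensional version commonly quoted is

        \mathrm{MSD}(t) = 2 D \left( t - \frac{t_E}{3} \right) + 2 \sigma^2,

    where σ is the static localization error per coordinate and t_E the exposure time responsible for motion blur (dynamic error). The point of the paper is that the analogous corrections for fractional Brownian motion are less standard and must be applied with equal care.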

  5. Efficient boundary hunting via vector quantization

    NASA Astrophysics Data System (ADS)

    Diamantini, Claudia; Panti, Maurizio

    2001-03-01

    A great amount of information about a classification problem is contained in those instances falling near the decision boundary. This intuition dates back to the earliest studies in pattern recognition, and to the more recent adaptive approaches to so-called boundary hunting, such as the work of Aha et al. on Instance Based Learning and the work of Vapnik et al. on Support Vector Machines. The last work is of particular interest, since theoretical and experimental results ensure the accuracy of boundary reconstruction. However, its optimization approach has heavy computational and memory requirements, which limits its application to huge amounts of data. In the paper we describe an alternative approach to boundary hunting based on adaptive labeled quantization architectures. The adaptation is performed by a stochastic gradient algorithm for the minimization of the error probability. Error probability minimization guarantees the accurate approximation of the optimal decision boundary, while the use of a stochastic gradient algorithm provides an efficient method to reach such an approximation. In the paper, comparisons to Support Vector Machines are considered.
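
    The adaptive labeled quantization described above is in the spirit of learning vector quantization. The sketch below shows a minimal LVQ1-style stochastic-gradient loop, as an illustration of this family of methods rather than the authors' exact error-probability update:

        import numpy as np

        def lvq1(X, y, prototypes, proto_labels, lr=0.05, epochs=20, seed=0):
            """Move the nearest prototype toward same-class samples, away otherwise.

            The surviving prototypes concentrate near the decision boundary,
            which is where most class-discriminative information lives.
            """
            rng = np.random.default_rng(seed)
            P = prototypes.copy()
            for _ in range(epochs):
                for i in rng.permutation(len(X)):
                    j = np.argmin(np.sum((P - X[i])**2, axis=1))  # nearest prototype
                    sign = 1.0 if proto_labels[j] == y[i] else -1.0
                    P[j] += sign * lr * (X[i] - P[j])
            return P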

  6. Development of a two-dimensional dual pendulum thrust stand for Hall thrusters.

    PubMed

    Nagao, N; Yokota, S; Komurasaki, K; Arakawa, Y

    2007-11-01

    A two-dimensional dual pendulum thrust stand was developed to measure thrust vectors [axial and horizontal (transverse) direction thrusts] of a Hall thruster. A thruster with a steering mechanism is mounted on the inner pendulum, and thrust is measured from the displacement between inner and outer pendulums, by which a thermal drift effect is canceled out. Two crossover knife-edges support each pendulum arm: one is set on the other at a right angle. They enable the pendulums to swing in two directions. Thrust calibration using a pulley and weight system showed that the measurement errors were less than 0.25 mN (1.4%) in the main thrust direction and 0.09 mN (1.4%) in its transverse direction. The thrust angle of the thrust vector was measured with the stand using the thruster. Consequently, a vector deviation from the main thrust direction of +/-2.3 degrees was measured with the error of +/-0.2 degrees under the typical operating conditions for the thruster.

  7. Spaceflight Ka-Band High-Rate Radiation-Hard Modulator

    NASA Technical Reports Server (NTRS)

    Jaso, Jeffery M.

    2011-01-01

    A document discusses the creation of a Ka-band modulator developed specifically for the NASA/GSFC Solar Dynamics Observatory (SDO). This flight design consists of a high-bandwidth, Quadriphase Shift Keying (QPSK) vector modulator with radiation-hardened, high-rate driver circuitry that receives I and Q channel data. The radiation-hard design enables SDO's Ka-band communications downlink system to transmit 130 Mbps (300 Msps after data encoding) of science instrument data to the ground system continuously throughout the mission's minimum life of five years. The low error vector magnitude (EVM) of the modulator lowers the implementation loss of the transmitter in which it is used, thereby increasing the overall communication system link margin. The modulator comprises a component within the SDO transmitter, and meets the following specifications over a 0 to 40 °C operational temperature range: QPSK/OQPSK modulator, 300-Msps symbol rate, 26.5-GHz center frequency, error vector magnitude less than or equal to 10 percent rms, and compliance with the NTIA (National Telecommunications and Information Administration) spectral mask.

  8. Measurement of Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter

    NASA Astrophysics Data System (ADS)

    Imig, Astrid; Stephenson, Edward

    2009-10-01

    The Storage Ring EDM Collaboration used the Cooler Synchrotron (COSY) and the EDDA detector at the Forschungszentrum Jülich to explore systematic errors in very sensitive storage-ring polarization measurements. Polarized deuterons of 235 MeV were used. The analyzer target was a block of 17 mm thick carbon placed close to the beam so that white noise applied to upstream electrostatic plates increases the vertical phase space of the beam, allowing deuterons to strike the front face of the block. For a detector acceptance that covers laboratory angles larger than 9°, the efficiency for particles to scatter into the polarimeter detectors was about 0.1% (all directions) and the vector analyzing power was about 0.2. Measurements were made of the sensitivity of the polarization measurement to beam position and angle. Both vector and tensor asymmetries were measured using beams with both vector and tensor polarization. Effects were seen that depend upon both the beam geometry and the data rate in the detectors.

  9. Force estimation from OCT volumes using 3D CNNs.

    PubMed

    Gessert, Nils; Beringhoff, Jens; Otte, Christoph; Schlaefer, Alexander

    2018-07-01

    Estimating the interaction forces of instruments and tissue is of interest, particularly to provide haptic feedback during robot-assisted minimally invasive interventions. Different approaches based on external and integrated force sensors have been proposed. These are hampered by friction, sensor size, and sterilizability. We investigate a novel approach to estimate the force vector directly from optical coherence tomography image volumes. We introduce a novel Siamese 3D CNN architecture. The network takes an undeformed reference volume and a deformed sample volume as an input and outputs the three components of the force vector. We employ a deep residual architecture with bottlenecks for increased efficiency. We compare the Siamese approach to methods using difference volumes and two-dimensional projections. Data were generated using a robotic setup to obtain ground-truth force vectors for silicon tissue phantoms as well as porcine tissue. Our method achieves a mean average error of [Formula: see text] when estimating the force vector. Our novel Siamese 3D CNN architecture outperforms single-path methods that achieve a mean average error of [Formula: see text]. Moreover, the use of volume data leads to significantly higher performance compared to processing only surface information, which achieves a mean average error of [Formula: see text]. Based on the tissue dataset, our method shows good generalization between different subjects. We propose a novel image-based force estimation method using optical coherence tomography. We illustrate that capturing the deformation of subsurface structures substantially improves force estimation. Our approach can provide accurate force estimates in surgical setups when using intraoperative optical coherence tomography.
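
    A minimal PyTorch sketch of the Siamese idea described above, two weight-shared 3D convolutional branches whose features are merged to regress the force vector, is given below. Layer sizes and volume shape are illustrative, and the residual bottlenecks of the paper are omitted; this is not the authors' architecture:

        import torch
        import torch.nn as nn

        class Siamese3DForceNet(nn.Module):
            """Two weight-shared 3D-conv branches feeding a regression head."""
            def __init__(self):
                super().__init__()
                self.branch = nn.Sequential(
                    nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                )
                self.head = nn.Sequential(
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 3),          # (Fx, Fy, Fz)
                )

            def forward(self, reference, deformed):
                f_ref = self.branch(reference)  # weights shared: same module
                f_def = self.branch(deformed)
                return self.head(torch.cat([f_ref, f_def], dim=1))

        net = Siamese3DForceNet()
        ref = torch.randn(2, 1, 32, 32, 32)             # batch of OCT-like volumes
        out = net(ref, torch.randn(2, 1, 32, 32, 32))   # -> shape (2, 3)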

  10. An exploration of diffusion tensor eigenvector variability within human calf muscles.

    PubMed

    Rockel, Conrad; Noseworthy, Michael D

    2016-01-01

    To explore the effect of diffusion tensor imaging (DTI) acquisition parameters on principal and minor eigenvector stability within human lower leg skeletal muscles. Lower leg muscles were evaluated in seven healthy subjects at 3T using an 8-channel transmit/receive coil. Diffusion-encoding was performed with nine signal averages (NSA) using 6, 15, and 25 directions (NDD). Individual DTI volumes were combined into aggregate volumes of 3, 2, and 1 NSA according to number of directions. Tensor eigenvalues (λ1 , λ2 , λ3 ), eigenvectors (ε1 , ε2 , ε3 ), and DTI metrics (fractional anisotropy [FA] and mean diffusivity [MD]) were calculated for each combination of NSA and NDD. Spatial maps of signal-to-noise ratio (SNR), λ3 :λ2 ratio, and zenith angle were also calculated for region of interest (ROI) analysis of vector orientation consistency. ε1 variability was only moderately related to ε2 variability (r = 0.4045). Variation of ε1 was affected by NDD, not NSA (P < 0.0002), while variation of ε2 was affected by NSA, not NDD (P < 0.0003). In terms of tensor shape, vector variability was weakly related to FA (ε1 :r = -0.1854, ε2 : ns), but had a stronger relation to the λ3 :λ2 ratio (ε1 :r = -0.5221, ε2 :r = -0.1771). Vector variability was also weakly related to SNR (ε1 :r = -0.2873, ε2 :r = -0.3483). Zenith angle was found to be strongly associated with variability of ε1 (r = 0.8048) but only weakly with that of ε2 (r = 0.2135). The second eigenvector (ε2 ) displayed higher directional variability relative to ε1 , and was only marginally affected by experimental conditions that impacted ε1 variability. © 2015 Wiley Periodicals, Inc.

  11. Applying integrals of motion to the numerical solution of differential equations

    NASA Technical Reports Server (NTRS)

    Jezewski, D. J.

    1980-01-01

    A method is developed for using the integrals of systems of nonlinear, ordinary differential equations in a numerical integration process to control the local errors in these integrals and reduce the global errors of the solution. The method is general and can be applied to either scalar or vector integrals. A number of example problems, with accompanying numerical results, are used to verify the analysis and support the conjecture of global error reduction.

  12. Difference-based ridge-type estimator of parameters in restricted partial linear model with correlated errors.

    PubMed

    Wu, Jibo

    2016-01-01

    In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold on the whole parameter space. The estimator's mean-squared error matrix is compared with that of the generalized restricted difference-based estimator. Finally, the performance of the new estimator is illustrated by a simulation study and a numerical example.

  13. Applying integrals of motion to the numerical solution of differential equations

    NASA Technical Reports Server (NTRS)

    Jezewski, D. J.

    1979-01-01

    A method is developed for using the integrals of systems of nonlinear, ordinary differential equations in a numerical integration process to control the local errors in these integrals and reduce the global errors of the solution. The method is general and can be applied to either scalar or vector integrals. A number of example problems, with accompanying numerical results, are used to verify the analysis and support the conjecture of global error reduction.

  14. Using Perturbation Theory to Compute the Morphological Similarity of Diffusion Tensors

    PubMed Central

    Bansal, Ravi; Staib, Lawrence H.; Xu, Dongrong; Laine, Andrew F.; Royal, Jason; Peterson, Bradley S.

    2008-01-01

    Computing the morphological similarity of Diffusion Tensors (DTs) at neighboring voxels within a DT image, or at corresponding locations across different DT images, is a fundamental and ubiquitous operation in the post-processing of DT images. The morphological similarity of DTs typically has been computed using either the Principal Directions (PDs) of DTs (i.e., the direction along which water molecules diffuse preferentially) or their tensor elements. Although comparing PDs allows the similarity of one morphological feature of DTs to be visualized directly in eigenspace, this method takes into account only a single eigenvector, and it is therefore sensitive to the presence of noise in the images that can introduce error into the estimation of that vector. Although comparing tensor elements, rather than PDs, is comparatively more robust to the effects of noise, the individual elements of a given tensor do not directly reflect the diffusion properties of water molecules. We propose a measure for computing the morphological similarity of DTs that uses both their eigenvalues and eigenvectors, and that also accounts for the noise levels present in DT images. Our measure presupposes that DTs in a homogeneous region within or across DT images are random perturbations of one another in the presence of noise. The similarity values that are computed using our method are smooth (in the sense that small changes in eigenvalues and eigenvectors cause only small changes in similarity), and they are symmetric when differences in eigenvalues and eigenvectors are also symmetric. In addition, our method does not presuppose that the corresponding eigenvectors across two DTs have been identified accurately, an assumption that is problematic in the presence of noise. Because we compute the similarity between DTs using their eigenspace components, our similarity measure relates directly to both the magnitude and the direction of the diffusion of water molecules. The favorable performance characteristics of our measure offer the prospect of substantially improving additional post-processing operations that are commonly performed on DTI datasets, such as image segmentation, fiber tracking, noise filtering, and spatial normalization. PMID:18450533

  15. Parameter Variability and Distributional Assumptions in the Diffusion Model

    ERIC Educational Resources Information Center

    Ratcliff, Roger

    2013-01-01

    If the diffusion model (Ratcliff & McKoon, 2008) is to account for the relative speeds of correct responses and errors, it is necessary that the components of processing identified by the model vary across the trials of a task. In standard applications, the rate at which information is accumulated by the diffusion process is assumed to be normally…

  16. High-throughput ab-initio dilute solute diffusion database.

    PubMed

    Wu, Henry; Mayeshiba, Tam; Morgan, Dane

    2016-07-19

    We demonstrate automated generation of diffusion databases from high-throughput density functional theory (DFT) calculations. A total of more than 230 dilute solute diffusion systems in Mg, Al, Cu, Ni, Pd, and Pt host lattices have been determined using multi-frequency diffusion models. We apply a correction method for solute diffusion in alloys using experimental and simulated values of host self-diffusivity. We find good agreement with experimental solute diffusion data, obtaining a weighted activation barrier RMS error of 0.176 eV when excluding magnetic solutes in non-magnetic alloys. The compiled database is the largest collection of consistently calculated ab-initio solute diffusion data in the world.
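
    For reference, dilute solute diffusivities of the kind tabulated in the database are conventionally summarized in Arrhenius form (a standard convention, not specific to this paper):

        D(T) = D_0 \exp\left( -\frac{Q}{k_B T} \right),

    where D_0 is the prefactor and Q the activation barrier; the 0.176 eV RMS error quoted above refers to the barrier Q.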

  17. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    PubMed

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust in regards to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real life cases.
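
    For context, the traditional LS-SVM that the proposed method modifies reduces training to a single linear system. A compact numpy sketch of that baseline (generic textbook formulation, not the robust variant proposed in the paper):

        import numpy as np

        def lssvm_fit(X, y, gamma=1.0, bw=1.0):
            """Solve the LS-SVM dual system
            [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
            sq = np.sum((X[:, None, :] - X[None, :, :])**2, axis=2)
            K = np.exp(-sq / (2 * bw**2))                  # RBF kernel
            n = len(y)
            A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                          [np.ones((n, 1)), K + np.eye(n) / gamma]])
            sol = np.linalg.solve(A, np.r_[0.0, y])
            return sol[0], sol[1:]                         # bias b, dual weights

        def lssvm_predict(Xtr, b, alpha, Xte, bw=1.0):
            sq = np.sum((Xte[:, None, :] - Xtr[None, :, :])**2, axis=2)
            return np.exp(-sq / (2 * bw**2)) @ alpha + b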

  18. Traveling wave solutions to a reaction-diffusion equation

    NASA Astrophysics Data System (ADS)

    Feng, Zhaosheng; Zheng, Shenzhou; Gao, David Y.

    2009-07-01

    In this paper, we restrict our attention to traveling wave solutions of a reaction-diffusion equation. Firstly we apply the Divisor Theorem for two variables in the complex domain, which is based on the ring theory of commutative algebra, to find a quasi-polynomial first integral of an explicit form to an equivalent autonomous system. Then through this first integral, we reduce the reaction-diffusion equation to a first-order integrable ordinary differential equation, and a class of traveling wave solutions is obtained accordingly. Comparisons with the existing results in the literature are also provided, which indicates that some analytical results in the literature contain errors. We clarify the errors and instead give a refined result in a simple and straightforward manner.
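
    In generic form (symbols ours), the traveling wave reduction used above proceeds by substituting a wave ansatz into the reaction-diffusion equation:

        u_t = D\, u_{xx} + f(u), \qquad u(x,t) = U(\xi), \quad \xi = x - ct \quad \Longrightarrow \quad D\, U'' + c\, U' + f(U) = 0,

    an autonomous ODE; once a first integral of the equivalent system is known, this reduces to a first-order ODE whose solutions give the traveling waves.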

  19. Efficient gradient calibration based on diffusion MRI.

    PubMed

    Teh, Irvin; Maguire, Mahon L; Schneider, Jürgen E

    2017-01-01

    To propose a method for calibrating gradient systems and correcting gradient nonlinearities based on diffusion MRI measurements. The gradient scaling in x, y, and z were first offset by up to 5% from precalibrated values to simulate a poorly calibrated system. Diffusion MRI data were acquired in a phantom filled with cyclooctane, and corrections for gradient scaling errors and nonlinearity were determined. The calibration was assessed with diffusion tensor imaging and independently validated with high resolution anatomical MRI of a second structured phantom. The errors in apparent diffusion coefficients along orthogonal axes ranged from -9.2% ± 0.4% to + 8.8% ± 0.7% before calibration and -0.5% ± 0.4% to + 0.8% ± 0.3% after calibration. Concurrently, fractional anisotropy decreased from 0.14 ± 0.03 to 0.03 ± 0.01. Errors in geometric measurements in x, y and z ranged from -5.5% to + 4.5% precalibration and were likewise reduced to -0.97% to + 0.23% postcalibration. Image distortions from gradient nonlinearity were markedly reduced. Periodic gradient calibration is an integral part of quality assurance in MRI. The proposed approach is both accurate and efficient, can be setup with readily available materials, and improves accuracy in both anatomical and diffusion MRI to within ±1%. Magn Reson Med 77:170-179, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. © 2016 Wiley Periodicals, Inc.
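
    A back-of-the-envelope check of the numbers quoted above (our reading of the mechanism, consistent with the reported figures): the diffusion weighting satisfies b ∝ |g|², so a fractional gradient-scaling error ε biases the measured diffusivity along that axis as

        D_{\mathrm{meas}} = (1 + \varepsilon)^2\, D_{\mathrm{true}} \approx (1 + 2\varepsilon)\, D_{\mathrm{true}},

    which is why scaling offsets of up to ±5% produce apparent diffusion coefficient errors of roughly ±9-10%, and why measured diffusivities in a phantom of known diffusivity can be inverted to recover the scaling correction.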

  20. Efficient gradient calibration based on diffusion MRI

    PubMed Central

    Teh, Irvin; Maguire, Mahon L.

    2016-01-01

    Purpose To propose a method for calibrating gradient systems and correcting gradient nonlinearities based on diffusion MRI measurements. Methods The gradient scaling in x, y, and z was first offset by up to 5% from precalibrated values to simulate a poorly calibrated system. Diffusion MRI data were acquired in a phantom filled with cyclooctane, and corrections for gradient scaling errors and nonlinearity were determined. The calibration was assessed with diffusion tensor imaging and independently validated with high resolution anatomical MRI of a second structured phantom. Results The errors in apparent diffusion coefficients along orthogonal axes ranged from −9.2% ± 0.4% to +8.8% ± 0.7% before calibration and −0.5% ± 0.4% to +0.8% ± 0.3% after calibration. Concurrently, fractional anisotropy decreased from 0.14 ± 0.03 to 0.03 ± 0.01. Errors in geometric measurements in x, y and z ranged from −5.5% to +4.5% precalibration and were likewise reduced to −0.97% to +0.23% postcalibration. Image distortions from gradient nonlinearity were markedly reduced. Conclusion Periodic gradient calibration is an integral part of quality assurance in MRI. The proposed approach is both accurate and efficient, can be set up with readily available materials, and improves accuracy in both anatomical and diffusion MRI to within ±1%. Magn Reson Med 77:170–179, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. PMID:26749277

  1. Can different quantum state vectors correspond to the same physical state? An experimental test

    NASA Astrophysics Data System (ADS)

    Nigg, Daniel; Monz, Thomas; Schindler, Philipp; Martinez, Esteban A.; Hennrich, Markus; Blatt, Rainer; Pusey, Matthew F.; Rudolph, Terry; Barrett, Jonathan

    2016-01-01

    A century after the development of quantum theory, the interpretation of a quantum state is still discussed. If a physicist claims to have produced a system with a particular quantum state vector, does this represent directly a physical property of the system, or is the state vector merely a summary of the physicist’s information about the system? Assume that a state vector corresponds to a probability distribution over possible values of an unknown physical or ‘ontic’ state. Then, a recent no-go theorem shows that distinct state vectors with overlapping distributions lead to predictions different from quantum theory. We report an experimental test of these predictions using trapped ions. Within experimental error, the results confirm quantum theory. We analyse which kinds of models are ruled out.

  2. Exploring the effect of diffuse reflection on indoor localization systems based on RSSI-VLC.

    PubMed

    Mohammed, Nazmi A; Elkarim, Mohammed Abd

    2015-08-10

    This work explores and evaluates the effect of diffuse light reflection on the accuracy of indoor localization systems based on visible light communication (VLC) in a high reflectivity environment using a received signal strength indication (RSSI) technique. The effect of the essential receiver (Rx) and transmitter (Tx) parameters on the localization error with different transmitted LED power and wall reflectivity factors is investigated at the worst Rx coordinates for a directed/overall link. Since this work assumes harsh operating conditions (i.e., a multipath model, high reflectivity surfaces, worst Rx position), an error of ≥ 1.46 m is found. To achieve a localization error in the range of 30 cm under these conditions with moderate LED power (i.e., P = 0.45 W), low reflectivity walls (i.e., ρ = 0.1) should be used, which would enable a localization error of approximately 7 mm at the room's center.

  3. Fluctuations in diffusion processes in microgravity.

    PubMed

    Mazzoni, Stefano; Cerbino, Roberto; Vailati, Alberto; Giglio, Marzio

    2006-09-01

    It has been shown recently that diffusion processes exhibit giant nonequilibrium fluctuations (NEFs). That is, the diffusing fronts display corrugations whose length scale ranges from the molecular to the macroscopic one. The amplitude of the NEFs diverges following a power-law behavior proportional to q⁻⁴, where q is the wave vector. However, fluctuations of wave number smaller than a critical "rolloff" wave vector are quenched by the presence of gravity. It is therefore expected that in microgravity conditions, the amplitude of the NEFs should be boosted by the absence of the buoyancy-driven restoring force. This may affect any diffusion process performed in microgravity, such as the crystallization of a protein solution induced by the diffusion of a salt buffer. The aim of GRADFLEX (GRAdient-Driven FLuctuation EXperiment), a joint project of ESA and NASA, is to investigate the presence of NEFs arising in a diffusion process under microgravity conditions. The project consists of two experiments. One is carried out by UNIMI (University of Milan) and INFM (Istituto Nazionale per la Fisica della Materia) and is focused on NEFs in a concentration diffusion process. The other experiment is performed by UCSB (University of California at Santa Barbara) and concerns temperature NEFs in a simple fluid. In the UNIMI part of the GRADFLEX experimental setup, NEFs are induced in a binary mixture by means of the Soret effect. The diagnostic method is an all-optical quantitative shadowgraph technique. The power spectrum of the induced NEFs is obtained by processing the shadowgraph images. A detailed description of the experimental apparatus, as well as the ground-based experimental results, is presented here for the UNIMI-INFM experiment. The GRADFLEX payload is scheduled to fly on the FOTON M3 capsule in April 2007.

  4. The Near Wake of Bluff Bodies in Stratified Fluids and the Emergence of Late Wake Characteristics

    DTIC Science & Technology

    2010-10-29

    ... represents the orthonormal coordinate vectors in a Cartesian coordinate system, u = u_i ê_i is the velocity vector field, P is pressure, ρ is the density, and ... different characteristics depending upon the Reynolds number, the Froude number, and possibly the diffusivity [22] of the flow. In turn, the

  5. Curriculum Assessment Using Artificial Neural Network and Support Vector Machine Modeling Approaches: A Case Study. IR Applications. Volume 29

    ERIC Educational Resources Information Center

    Chen, Chau-Kuang

    2010-01-01

    Artificial Neural Network (ANN) and Support Vector Machine (SVM) approaches have been on the cutting edge of science and technology for pattern recognition and data classification. In the ANN model, classification accuracy can be achieved by using the feed-forward of inputs, back-propagation of errors, and the adjustment of connection weights. In…

  6. JPRS Report, Science & Technology, China

    DTIC Science & Technology

    1991-10-22

    ZHONGGUO KEXUE BAO, 30 Aug 91 ... Shanghai Scientist Develops State-of-the-Art Liquid-Crystal Light Valve ... the angle of attack will gradually decrease under the action of aerodynamic moments ... the direction of the final velocity vector of the satellite ... impulse and the direction of the thrust vector of the retro-rocket engine, errors in the ... The recovery system is located inside the sealed reentry

  7. The Alignment of the Mean Wind and Stress Vectors in the Unstable Surface Layer

    NASA Astrophysics Data System (ADS)

    Bernardes, M.; Dias, N. L.

    2010-01-01

    A significant non-alignment between the mean horizontal wind vector and the stress vector was observed for turbulence measurements both above the water surface of a large lake and over a land surface (soybean crop). Possible causes for this discrepancy, such as flow distortion, averaging times, and the procedure used for extracting the turbulent fluctuations (low-pass filtering, filter widths, etc.), were dismissed after a detailed analysis. Minimum averaging times, always less than 30 min, were established by calculating ogives, and error bounds for the turbulent stresses were derived with three different approaches, based on integral time scales (first-crossing and lag-window estimates) and on a bootstrap technique. It was found that the mean absolute value of the angle between the mean wind and stress vectors is highly related to atmospheric stability, with the non-alignment increasing distinctly with increasing instability. Given a coordinate rotation that aligns the mean wind with the x direction, this behaviour can be explained by the growth of the relative error of the u-w component with instability. As a result, under more unstable conditions the u-w and the v-w components become of the same order of magnitude, and the local stress vector gives the impression of being non-aligned with the mean wind vector. The relative error of the v-w component is large enough to make it indistinguishable from zero throughout the range of stabilities. Therefore, the standard assumptions of Monin-Obukhov similarity theory hold: it is fair to assume that the v-w stress component is actually zero, and that the non-alignment is a purely statistical effect. An analysis of the dimensionless budgets of the u-w and the v-w components confirms this interpretation, with both shear and buoyant production of u-w decreasing with increasing instability. In the v-w budget, shear production is zero by definition, while buoyancy displays very low-intensity fluctuations around zero. As local free convection is approached, the turbulence becomes effectively axisymmetric, and a practical limit seems to exist beyond which it is not possible to measure the u-w component accurately.

  8. A method for optimizing the cosine response of solar UV diffusers

    NASA Astrophysics Data System (ADS)

    Pulli, Tomi; Kärhä, Petri; Ikonen, Erkki

    2013-07-01

    Instruments measuring global solar ultraviolet (UV) irradiance at the surface of the Earth need to collect radiation from the entire hemisphere. Entrance optics with angular response as close as possible to the ideal cosine response are necessary to perform these measurements accurately. Typically, the cosine response is obtained using a transmitting diffuser. We have developed an efficient method based on a Monte Carlo algorithm to simulate radiation transport in the solar UV diffuser assembly. The algorithm takes into account propagation, absorption, and scattering of the radiation inside the diffuser material. The effects of the inner sidewalls of the diffuser housing, the shadow ring, and the protective weather dome are also accounted for. The software implementation of the algorithm is highly optimized: a simulation of 10⁹ photons takes approximately 10 to 15 min to complete on a typical high-end PC. The results of the simulations agree well with the measured angular responses, indicating that the algorithm can be used to guide the diffuser design process. Cost savings can be obtained when simulations are carried out before diffuser fabrication as compared to a purely trial-and-error-based diffuser optimization. The algorithm was used to optimize two types of detectors, one with a planar diffuser and the other with a spherically shaped diffuser. The integrated cosine errors—which indicate the relative measurement error caused by the nonideal angular response under isotropic sky radiance—of these two detectors were calculated to be f₂ = 1.4% and 0.66%, respectively.

  9. Multilayer perceptron, fuzzy sets, and classification

    NASA Technical Reports Server (NTRS)

    Pal, Sankar K.; Mitra, Sushmita

    1992-01-01

    A fuzzy neural network model based on the multilayer perceptron, using the back-propagation algorithm, and capable of fuzzy classification of patterns is described. The input vector consists of membership values to linguistic properties while the output vector is defined in terms of fuzzy class membership values. This allows efficient modeling of fuzzy or uncertain patterns with appropriate weights being assigned to the backpropagated errors depending upon the membership values at the corresponding outputs. During training, the learning rate is gradually decreased in discrete steps until the network converges to a minimum error solution. The effectiveness of the algorithm is demonstrated on a speech recognition problem. The results are compared with those of the conventional MLP, the Bayes classifier, and the other related models.

  10. Spatial diffusion of raccoon rabies in Pennsylvania, USA.

    PubMed

    Moore, D A

    1999-05-14

    Identification of the geographic pattern of diffusion of a wildlife disease could lead to information regarding its control. The objective of this study was to model raccoon-rabies diffusion in Pennsylvania to identify geographic constraints on the diffusion pattern for potential use in bait-vaccination strategies. A trend-surface analysis (TSA) was used as a spatial filter for month to first report by county location. A cubic polynomial model was fitted (R² = 0.80). Velocity vectors were calculated from the partial derivatives of the model and mapped to demonstrate the instantaneous speed of diffusion at each location. A main corridor of diffusion through the ridge and valley section of the state was evident early in the outbreak. Once the disease reached the northern counties, it moved west toward Ohio. I believe that TSA was useful in identifying the pattern of raccoon-rabies diffusion across the state from the inherent noise of disease-reporting data.
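
    The velocity-vector step can be sketched as follows: if T(x, y) is the fitted trend surface giving month of first report at location (x, y), the front advances along ∇T with speed inversely proportional to |∇T|. A minimal sketch (the polynomial coefficients and grid point below are illustrative, not the fitted Pennsylvania model):

        import numpy as np

        # Cubic trend surface T(x, y): month of first rabies report.
        # coef maps exponent pairs (i, j) -> coefficient of x**i * y**j.
        coef = {(0, 0): 3.0, (1, 0): 0.8, (0, 1): -0.5, (2, 0): 0.02,
                (1, 1): -0.03, (0, 2): 0.05, (3, 0): 1e-3, (2, 1): -2e-3,
                (1, 2): 1e-3, (0, 3): -1e-3}

        def grad_T(x, y):
            """Partial derivatives of the trend surface at (x, y)."""
            dx = sum(i * c * x**(i - 1) * y**j
                     for (i, j), c in coef.items() if i > 0)
            dy = sum(j * c * x**i * y**(j - 1)
                     for (i, j), c in coef.items() if j > 0)
            return np.array([dx, dy])

        def velocity(x, y):
            """Front velocity: direction of grad T, speed 1/|grad T|."""
            g = grad_T(x, y)
            return g / np.dot(g, g)       # (g/|g|) * (1/|g|)

        print(velocity(10.0, 20.0))       # km/month at an example point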

  11. NMR-based diffusion pore imaging by double wave vector measurements.

    PubMed

    Kuder, Tristan Anselm; Laun, Frederik Bernd

    2013-09-01

    One main interest of nuclear magnetic resonance (NMR) diffusion experiments is the investigation of boundaries, such as cell membranes, hindering the diffusion process. NMR diffusion measurements allow collecting the signal from the whole sample. This largely eliminates the problem of vanishing signal at increasing resolution. It has been a longstanding question whether, in principle, the exact shape of closed pores can be determined by NMR diffusion measurements. In this work, we present a method using short diffusion gradient pulses only, which is able to reveal the shape of arbitrary closed pores without relying on a priori knowledge. In comparison to former approaches, the method has reduced demands on relaxation times due to faster convergence to the diffusion long-time limit and allows for a more flexible NMR sequence design because, e.g., stimulated echoes can be used. Copyright © 2012 Wiley Periodicals, Inc.

  12. Note: Focus error detection device for thermal expansion-recovery microscopy (ThERM).

    PubMed

    Domené, E A; Martínez, O E

    2013-01-01

    An innovative focus error detection method is presented that is sensitive only to surface curvature variations, canceling both thermoreflectance and photodeflection effects. The detection scheme consists of an astigmatic probe laser and a four-quadrant detector. Nonlinear curve fitting of the defocusing signal allows the retrieval of a cutoff frequency, which depends only on the thermal diffusivity of the sample and the pump beam size. Therefore, a straightforward retrieval of the thermal diffusivity of the sample is possible with microscopic lateral resolution and high axial resolution (~100 pm).

  13. Coherent Doppler Lidar for Boundary Layer Studies and Wind Energy

    NASA Astrophysics Data System (ADS)

    Choukulkar, Aditya

    This thesis outlines the development of a vector retrieval technique, based on data assimilation, for a coherent Doppler LIDAR (Light Detection and Ranging). A detailed analysis of the Optimal Interpolation (OI) technique for vector retrieval is presented. Through several modifications to the OI technique, it is shown that the modified technique results in a significant improvement in velocity retrieval accuracy. These modifications include changes to innovation covariance partitioning, covariance binning, and analysis increment calculation. It is observed that the modified technique is able to make retrievals with better accuracy, preserves local information better, and compares well with tower measurements. In order to study the error of representativeness and the vector retrieval error, a lidar simulator was constructed. Using the lidar simulator, a thorough sensitivity analysis of the lidar measurement process and vector retrieval was carried out. The error of representativeness as a function of scales of motion and the sensitivity of vector retrieval to look angle were quantified. Using the modified OI technique, a study of nocturnal flow in Owens Valley, CA was carried out to identify and understand uncharacteristic events on the night of March 27, 2006. Observations from 1030 UTC to 1230 UTC (0230 to 0430 local time) on March 27, 2006 are presented. Lidar observations show complex and uncharacteristic flows, such as sudden bursts of westerly cross-valley wind mixing with the dominant up-valley wind. Model results from the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS RTM) and other in situ instrumentation are used to corroborate and complement these observations. The modified OI technique is used to identify uncharacteristic and extreme flow events at a wind development site. Estimates of turbulence and shear from this technique are compared to tower measurements. A formulation for equivalent wind speed in the presence of variations in wind speed and direction, combined with shear, is developed and used to determine wind energy content in the presence of turbulence.
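
    A minimal numpy sketch of the optimal-interpolation analysis step that underlies such retrievals (generic OI, not the thesis's modified scheme; the look angles, covariances, and observed radial velocities are illustrative placeholders):

        import numpy as np

        def oi_analysis(xb, B, H, y, R):
            """xa = xb + K (y - H xb), with K = B H^T (H B H^T + R)^-1."""
            S = H @ B @ H.T + R                    # innovation covariance
            K = B @ H.T @ np.linalg.inv(S)         # gain matrix
            xa = xb + K @ (y - H @ xb)             # analysis state
            Pa = (np.eye(len(xb)) - K @ H) @ B     # analysis error covariance
            return xa, Pa

        # A lidar radial velocity relates to the wind vector (u, w) through
        # the beam's look angle phi: v_r = u cos(phi) + w sin(phi).
        phi = np.deg2rad([10.0, 45.0, 80.0])
        H = np.stack([np.cos(phi), np.sin(phi)], axis=1)
        xb = np.array([5.0, 0.0])                  # background wind guess
        B = np.diag([4.0, 1.0])                    # background error cov.
        R = 0.25 * np.eye(3)                       # radial-velocity error cov.
        y = np.array([5.1, 3.9, 1.2])              # observed radial velocities
        print(oi_analysis(xb, B, H, y, R)[0])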

  14. Carbapenem Susceptibility Testing Errors Using Three Automated Systems, Disk Diffusion, Etest, and Broth Microdilution and Carbapenem Resistance Genes in Isolates of Acinetobacter baumannii-calcoaceticus Complex

    DTIC Science & Technology

    2011-10-01

    Phoenix, and Vitek 2 systems). Discordant results were categorized as very major errors (VME), major errors (ME), and minor errors (mE). DNA sequences ... FDA standards required for device approval (11). The Vitek 2 method was the only automated susceptibility method in our study that satisfied FDA

  15. Observations on Polar Coding with CRC-Aided List Decoding

    DTIC Science & Technology

    2016-09-01

    Polar codes are a new type of forward error correction (FEC) codes, introduced by Arikan in [1], in which he ... error correction (FEC) currently used and planned for use in Navy wireless communication systems. The project's results from FY14 and FY15 are ... good error-correction performance. We used the Tal/Vardy method of [5]. The polar encoder uses a row vector u of length N. Let u_A be the subvector
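
    For context, the basic polar encoding map mentioned in the snippet can be sketched in a few lines (the standard Arikan transform x = u F^{⊗n} with F = [[1,0],[1,1]], bit-reversal permutation omitted; this is a generic illustration, not the report's implementation, and the frozen-bit indices are made up):

        def polar_encode(u):
            """Apply the polar transform over GF(2) to a 0/1 list u of
            length N = 2**n, via in-place radix-2 butterflies."""
            x = list(u)
            n, step = len(x), 1
            while step < n:
                for i in range(0, n, 2 * step):
                    for j in range(i, i + step):
                        x[j] ^= x[j + step]   # (a, b) -> (a xor b, b)
                step *= 2
            return x

        # Information bits occupy the reliable subvector u_A; the rest
        # (frozen bits) are fixed to zero. Indices here are illustrative.
        u = [0, 0, 0, 1, 0, 1, 1, 0]
        print(polar_encode(u))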

  16. Star tracker error analysis: Roll-to-pitch nonorthogonality

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1979-01-01

    An error analysis is described on an anomaly isolated in the star tracker software line of sight (LOS) rate test. The LOS rate cosine was found to be greater than one in certain cases which implied that either one or both of the star tracker measured end point unit vectors used to compute the LOS rate cosine had lengths greater than unity. The roll/pitch nonorthogonality matrix in the TNB CL module of the IMU software is examined as the source of error.

  17. A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2009-01-01

    A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.

  18. A novel rotational matrix and translation vector algorithm: geometric accuracy for augmented reality in oral and maxillofacial surgeries.

    PubMed

    Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C

    2018-06-01

    Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.

  19. Tuning support vector machines for minimax and Neyman-Pearson classification.

    PubMed

    Davenport, Mark A; Baraniuk, Richard G; Scott, Clayton D

    2010-10-01

    This paper studies the training of support vector machine (SVM) classifiers with respect to the minimax and Neyman-Pearson criteria. In principle, these criteria can be optimized in a straightforward way using a cost-sensitive SVM. In practice, however, because these criteria require especially accurate error estimation, standard techniques for tuning SVM parameters, such as cross-validation, can lead to poor classifier performance. To address this issue, we first prove that the usual cost-sensitive SVM, here called the 2C-SVM, is equivalent to another formulation called the 2ν-SVM. We then exploit a characterization of the 2ν-SVM parameter space to develop a simple yet powerful approach to error estimation based on smoothing. In an extensive experimental study, we demonstrate that smoothing significantly improves the accuracy of cross-validation error estimates, leading to dramatic performance gains. Furthermore, we propose coordinate descent strategies that offer significant gains in computational efficiency, with little to no loss in performance.
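
    As a hedged illustration of the cost-sensitive starting point (a plain class-weighted C-SVM via scikit-learn, not the 2ν-SVM reparameterization or the smoothing procedure studied in the paper; the cost values and kernel parameter are illustrative):

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_predict

        def np_svm_rates(X, y, c_pos, c_neg, gamma=0.5, cv=5):
            """Cross-validated false-positive / false-negative rates of a
            cost-sensitive RBF SVM with class weights (c_pos, c_neg).
            Labels in y are +1 / -1."""
            clf = SVC(C=1.0, gamma=gamma,
                      class_weight={1: c_pos, -1: c_neg})
            pred = cross_val_predict(clf, X, y, cv=cv)
            fp = np.mean(pred[y == -1] == 1)    # false-alarm rate
            fn = np.mean(pred[y == 1] == -1)    # miss rate
            return fp, fn

        # Neyman-Pearson-style tuning: sweep the cost ratio and keep the
        # classifier whose estimated false-alarm rate stays below alpha.

    As the abstract notes, the weakness of this naive recipe is that cross-validated error estimates are noisy, which is precisely what the paper's smoothing approach is designed to fix.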

  20. Group iterative methods for the solution of two-dimensional time-fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Balasim, Alla Tareq; Ali, Norhashidah Hj. Mohd.

    2016-06-01

    Variety of problems in science and engineering may be described by fractional partial differential equations (FPDE) in relation to space and/or time fractional derivatives. The difference between time fractional diffusion equations and standard diffusion equations lies primarily in the time derivative. Over the last few years, iterative schemes derived from the rotated finite difference approximation have been proven to work well in solving standard diffusion equations. However, its application on time fractional diffusion counterpart is still yet to be investigated. In this paper, we will present a preliminary study on the formulation and analysis of new explicit group iterative methods in solving a two-dimensional time fractional diffusion equation. These methods were derived from the standard and rotated Crank-Nicolson difference approximation formula. Several numerical experiments were conducted to show the efficiency of the developed schemes in terms of CPU time and iteration number. At the request of all authors of the paper an updated version of this article was published on 7 July 2016. The original version supplied to AIP Publishing contained an error in Table 1 and References 15 and 16 were incomplete. These errors have been corrected in the updated and republished article.

  1. Ultrametric distribution of culture vectors in an extended Axelrod model of cultural dissemination.

    PubMed

    Stivala, Alex; Robins, Garry; Kashima, Yoshihisa; Kirley, Michael

    2014-05-02

    The Axelrod model of cultural diffusion is an apparently simple model that is capable of complex behaviour. A recent work used a real-world dataset of opinions as initial conditions, demonstrating the effects of the ultrametric distribution of empirical opinion vectors in promoting cultural diversity in the model. Here we quantify the degree of ultrametricity of the initial culture vectors and investigate the effect of varying degrees of ultrametricity on the absorbing state of both a simple and extended model. Unlike the simple model, ultrametricity alone is not sufficient to sustain long-term diversity in the extended Axelrod model; rather, the initial conditions must also have sufficiently large variance in intervector distances. Further, we find that a scheme for evolving synthetic opinion vectors from cultural "prototypes" shows the same behaviour as real opinion data in maintaining cultural diversity in the extended model; whereas neutral evolution of cultural vectors does not.

  2. Ultrametric distribution of culture vectors in an extended Axelrod model of cultural dissemination

    NASA Astrophysics Data System (ADS)

    Stivala, Alex; Robins, Garry; Kashima, Yoshihisa; Kirley, Michael

    2014-05-01

    The Axelrod model of cultural diffusion is an apparently simple model that is capable of complex behaviour. A recent work used a real-world dataset of opinions as initial conditions, demonstrating the effects of the ultrametric distribution of empirical opinion vectors in promoting cultural diversity in the model. Here we quantify the degree of ultrametricity of the initial culture vectors and investigate the effect of varying degrees of ultrametricity on the absorbing state of both a simple and extended model. Unlike the simple model, ultrametricity alone is not sufficient to sustain long-term diversity in the extended Axelrod model; rather, the initial conditions must also have sufficiently large variance in intervector distances. Further, we find that a scheme for evolving synthetic opinion vectors from cultural "prototypes" shows the same behaviour as real opinion data in maintaining cultural diversity in the extended model; whereas neutral evolution of cultural vectors does not.

  3. Extracting Diffusion Constants from Echo-Time-Dependent PFG NMR Data Using Relaxation-Time Information

    NASA Astrophysics Data System (ADS)

    van Dusschoten, D.; de Jager, P. A.; Van As, H.

    Heterogeneous (bio)systems are often characterized by several water-containing compartments that differ in relaxation time values and diffusion constants. Because of the relatively small differences among these diffusion constants, nonoptimal measuring conditions easily lead to the conclusion that a single diffusion constant suffices to describe the water mobility in a heterogeneous (bio)system. This paper demonstrates that the combination of a T2 measurement and diffusion measurements at various echo times (TE), based on the PFG MSE sequence, enables the accurate determination of diffusion constants which are less than a factor of 2 apart. This new method gives errors of the diffusion constant below 10% when two fractions are present, while the standard approach of a biexponential fit to the diffusion data in identical circumstances gives larger (>25%) errors. On application of this approach to water in apple parenchyma tissue, the diffusion constant of water in the vacuole of the cells (D = 1.7 × 10⁻⁹ m²/s) can be distinguished from that of the cytoplasm (D = 1.0 × 10⁻⁹ m²/s). Also, for mung bean seedlings, the cell size determined by PFG MSE measurements increased from 65 to 100 μm when the echo time increased from 150 to 900 ms, demonstrating that the interpretation of PFG SE data used to investigate cell sizes is strongly dependent on the T2 values of the fractions within the sample. Because relaxation times are used to discriminate the diffusion constants, we propose to name this approach diffusion analysis by relaxation-time-separated (DARTS) PFG NMR.

  4. Test of understanding of vectors: A reliable multiple-choice vector concept test

    NASA Astrophysics Data System (ADS)

    Barniol, Pablo; Zavala, Genaro

    2014-06-01

    In this article we discuss the findings of our research on students' understanding of vector concepts in problems without physical context. First, we develop a complete taxonomy of the most frequent errors made by university students when learning vector concepts. This study is based on the results of several test administrations of open-ended problems in which a total of 2067 students participated. Using this taxonomy, we then designed a 20-item multiple-choice test [Test of understanding of vectors (TUV)] and administered it in English to 423 students who were completing the required sequence of introductory physics courses at a large private Mexican university. We evaluated the test's content validity, reliability, and discriminatory power. The results indicate that the TUV is a reliable assessment tool. We also conducted a detailed analysis of the students' understanding of the vector concepts evaluated in the test. The TUV is included in the Supplemental Material as a resource for other researchers studying vector learning, as well as instructors teaching the material.

  5. A generalized nonlocal vector calculus

    NASA Astrophysics Data System (ADS)

    Alali, Bacim; Liu, Kuo; Gunzburger, Max

    2015-10-01

    A nonlocal vector calculus was introduced in Du et al. (Math Model Meth Appl Sci 23:493-540, 2013) that has proved useful for the analysis of the peridynamics model of nonlocal mechanics and nonlocal diffusion models. A formulation is developed that provides a more general setting for the nonlocal vector calculus that is independent of particular nonlocal models. It is shown that general nonlocal calculus operators are integral operators with specific integral kernels. General nonlocal calculus properties are developed, including nonlocal integration by parts formula and Green's identities. The nonlocal vector calculus introduced in Du et al. (Math Model Meth Appl Sci 23:493-540, 2013) is shown to be recoverable from the general formulation as a special example. This special nonlocal vector calculus is used to reformulate the peridynamics equation of motion in terms of the nonlocal gradient operator and its adjoint. A new example of nonlocal vector calculus operators is introduced, which shows the potential use of the general formulation for general nonlocal models.
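
    To give a flavor of the operators involved, the basic nonlocal calculus of Du et al. (2013) can be sketched as follows (a schematic restatement assuming an antisymmetric kernel α(x,y) = −α(y,x), not the paper's generalized formulation):

        \mathcal{D}(\boldsymbol{\nu})(x) := \int_{\Omega} \big( \boldsymbol{\nu}(x,y) + \boldsymbol{\nu}(y,x) \big) \cdot \boldsymbol{\alpha}(x,y) \, dy,
        \qquad
        \mathcal{D}^{*}(u)(x,y) := -\big( u(y) - u(x) \big) \, \boldsymbol{\alpha}(x,y),

    with the nonlocal integration-by-parts relation \int_{\Omega} u \, \mathcal{D}(\boldsymbol{\nu}) \, dx = \int_{\Omega}\int_{\Omega} \boldsymbol{\nu} \cdot \mathcal{D}^{*}(u) \, dy \, dx, the nonlocal analogue of \int u \, \nabla\cdot\boldsymbol{v} \, dx = -\int \boldsymbol{v} \cdot \nabla u \, dx for fields vanishing on the boundary.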

  6. Attitude control with realization of linear error dynamics

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.; Bach, Ralph E.

    1993-01-01

    An attitude control law is derived to realize linear unforced error dynamics with the attitude error defined in terms of rotation group algebra (rather than vector algebra). Euler parameters are used in the rotational dynamics model because they are globally nonsingular, but only the minimal three Euler parameters are used in the error dynamics model because they have no nonlinear mathematical constraints to prevent the realization of linear error dynamics. The control law is singular only when the attitude error angle is exactly pi rad about any eigenaxis, and a simple intuitive modification at the singularity allows the control law to be used globally. The forced error dynamics are nonlinear but stable. Numerical simulation tests show that the control law performs robustly for both initial attitude acquisition and attitude control.

  7. High-throughput ab-initio dilute solute diffusion database

    PubMed Central

    Wu, Henry; Mayeshiba, Tam; Morgan, Dane

    2016-01-01

    We demonstrate automated generation of diffusion databases from high-throughput density functional theory (DFT) calculations. A total of more than 230 dilute solute diffusion systems in Mg, Al, Cu, Ni, Pd, and Pt host lattices have been determined using multi-frequency diffusion models. We apply a correction method for solute diffusion in alloys using experimental and simulated values of host self-diffusivity. We find good agreement with experimental solute diffusion data, obtaining a weighted activation barrier RMS error of 0.176 eV when excluding magnetic solutes in non-magnetic alloys. The compiled database is the largest collection of consistently calculated ab-initio solute diffusion data in the world. PMID:27434308

  8. Surface photovoltage method extended to silicon solar cell junction

    NASA Technical Reports Server (NTRS)

    Wang, E. Y.; Baraona, C. R.; Brandhorst, H. W., Jr.

    1974-01-01

    The conventional surface photovoltage (SPV) method is extended to the measurement of the minority carrier diffusion length in diffused semiconductor junctions of the type used in a silicon solar cell. The minority carrier diffusion lengths obtained by the SPV method agree well with those obtained by the X-ray method. Agreement within experimental error is also obtained between the minority carrier diffusion lengths in solar cell diffused junctions and in the same materials with the n-regions removed by etching, when the SPV method is used in the measurements.

  9. Study of diffusion coefficient of anhydrous trehalose glasses by using PFG-NMR spectroscopy

    NASA Astrophysics Data System (ADS)

    Kwon, Hyun-Joung; Takekawa, Reiji; Kawamura, Junichi; Tokuyama, Michio

    2013-02-01

    We investigated the temperature-dependent long-time self-diffusion coefficient of anhydrous trehalose supercooled liquids by using pulsed field gradient nuclear magnetic resonance (PFG-NMR) spectroscopy. Over the same temperature range, the diffusion coefficients obtained from the α-relaxation time via the Einstein-Smoluchowski relation, measured using dielectric loss spectroscopy, agree with the PFG-NMR diffusion coefficients within experimental error. The temperature-dependent diffusion coefficients obtained from the different methods, when normalized by the fictive temperature, collapse well onto the single master curve proposed by Tokuyama.

  10. A radio-aware routing algorithm for reliable directed diffusion in lossy wireless sensor networks.

    PubMed

    Kim, Yong-Pyo; Jung, Euihyun; Park, Yong-Jin

    2009-01-01

    In Wireless Sensor Networks (WSNs), transmission errors occur frequently due to node failure, battery discharge, contention, or interference by objects. Although Directed Diffusion has been considered a prominent data-centric routing algorithm, it has some weaknesses due to unexpected network errors. In order to address these problems, we propose a radio-aware routing algorithm to improve the reliability of Directed Diffusion in lossy WSNs. The proposed algorithm is aware of the network status based on radio information from the MAC and PHY layers using a cross-layer design. The cross-layer design can be used to get detailed information about the current status of the wireless network, such as the link quality or transmission errors of communication links. The radio information, indicating varying network conditions and link quality, was used to determine an alternative route that provides reliable data transmission under lossy WSNs. According to the simulation results, the radio-aware reliable routing algorithm showed better performance in both grid and random topologies with various error rates. The proposed solution suggests the possibility of providing a reliable transmission method for QoS requests in lossy WSNs based on radio-awareness. The energy and mobility issues will be addressed in future work.

  11. Suomi-NPP VIIRS Solar Diffuser Stability Monitor Performance

    NASA Technical Reports Server (NTRS)

    Fulbright, Jon; Lei, Ning; Efremova, Boryana; Xiong, Xiaoxiong

    2015-01-01

    When illuminated by the Sun, the onboard solar diffuser (SD) panel provides a known spectral radiance source to calibrate the reflective solar bands of the Visible Infrared Imaging Radiometer Suite on the Suomi-NPP satellite. The SD bidirectional reflectance distribution function (BRDF) degrades over time due to solar exposure, and this degradation is measured using the SD stability monitor (SDSM). The SDSM acts as a ratioing radiometer, comparing solar irradiance measurements off the SD panel to those from a direct Sun view. We discuss the design and operations of the SDSM, the SDSM data analysis, including improvements incorporated since launch, and present the results through 1000 days after launch. After 1000 days, the band-dependent H-factors, a quantity describing the relative degradation of the BRDF of the SD panel since launch, range from 0.716 at 412 nanometers to 0.989 at 926 nanometers. The random uncertainty of these H-factors is about 0.1 percent, which is confirmed by the similar standard deviation values computed from the residuals of quadratic exponential fits to the H-factor time trends. The SDSM detector gains have temperature sensitivity of up to about 0.36 percent per kelvin, but this does not affect the derived H-factors. An initial error in the solar vector caused a seasonal bias to the H-factors of up to 0.5 percent. The total exposure of the SD panel to UV light after 1000 orbits is equivalent to about 100 hours of direct sunlight illumination perpendicular to the SD panel surface.

  12. Impacts of the Angular Dependence of the Solar Diffuser BRDF Degradation Factor on the SNPP VIIRS Reflective Solar Band On-Orbit Radiometric Calibration

    NASA Technical Reports Server (NTRS)

    Lei, Ning; Xiong, Xiaoxiong

    2016-01-01

    Using an onboard sunlit solar diffuser (SD) as the primary radiance source, the visible infrared imaging radiometer suite (VIIRS) on the Suomi National Polar-orbiting Partnership satellite regularly performs radiometric calibration of its reflective solar bands (RSBs). The SD bidirectional reflectance distribution function (BRDF) value decreases over time. A numerical degradation factor is used to quantify the degradation and is determined by an onboard SD stability monitor (SDSM), which observes the sun and the sunlit SD at almost the same time. We had shown previously that the BRDF degradation factor is angle-dependent. Consequently, because the SDSM and the RSBs view the SD at very different angles relative to both the solar and the SD surface normal vectors, directly applying the BRDF degradation factor determined by the SDSM to the VIIRS RSB calibration can result in large systematic errors. We develop a phenomenological model to calculate the BRDF degradation factor for the RSB SD view from the degradation factor for the SDSM SD view. Using the yearly undulations observed in the VIIRS detector gains for the M1-M4 bands calculated with the SD BRDF degradation factor for the SDSM SD view, and the difference between the VIIRS detector gains calculated from the SD and the lunar observations, we obtain the model parameter values and thus establish the relation between the BRDF degradation factors for the RSB and the SDSM SD view directions.

  13. Comparison of Holographic Photopolymer Materials by Use of Analytic Nonlocal Diffusion Models: Errata

    NASA Astrophysics Data System (ADS)

    O'Neill, Feidhlim T.; Lawrence, Justin R.; Sheridan, John T.

    2003-06-01

    Two typographic errors have been identified by the authors in Equation (5) of Ref. 1. These errors affect neither the physical interpretation of the situation nor the quantitative results presented in the paper.

  14. “Conjugate Channeling” Effect in Dislocation Core Diffusion: Carbon Transport in Dislocated BCC Iron

    PubMed Central

    Ishii, Akio; Li, Ju; Ogata, Shigenobu

    2013-01-01

    Dislocation pipe diffusion seems to be a well-established phenomenon. Here we demonstrate an unexpected effect, that the migration of interstitials such as carbon in iron may be accelerated not in the dislocation line direction ξ, but in a conjugate diffusion direction. This accelerated random walk arises from a simple crystallographic channeling effect. c is a function of the Burgers vector b, but not ξ, thus a dislocation loop possesses the same c everywhere. Using molecular dynamics and accelerated dynamics simulations, we further show that such dislocation-core-coupled carbon diffusion in iron has temperature-dependent activation enthalpy like a fragile glass. The 71° mixed dislocation is the only case in which we see straightforward pipe diffusion that does not depend on dislocation mobility. PMID:23593255

  15. "Conjugate channeling" effect in dislocation core diffusion: carbon transport in dislocated BCC iron.

    PubMed

    Ishii, Akio; Li, Ju; Ogata, Shigenobu

    2013-01-01

    Dislocation pipe diffusion seems to be a well-established phenomenon. Here we demonstrate an unexpected effect, that the migration of interstitials such as carbon in iron may be accelerated not in the dislocation line direction ξ, but in a conjugate diffusion direction. This accelerated random walk arises from a simple crystallographic channeling effect. c is a function of the Burgers vector b, but not ξ, thus a dislocation loop possesses the same c everywhere. Using molecular dynamics and accelerated dynamics simulations, we further show that such dislocation-core-coupled carbon diffusion in iron has temperature-dependent activation enthalpy like a fragile glass. The 71° mixed dislocation is the only case in which we see straightforward pipe diffusion that does not depend on dislocation mobility.

  16. The exit-time problem for a Markov jump process

    NASA Astrophysics Data System (ADS)

    Burch, N.; D'Elia, M.; Lehoucq, R. B.

    2014-12-01

    The purpose of this paper is to consider the exit-time problem for a finite-range Markov jump process, i.e., the distance the particle can jump is bounded independent of its location. Such jump diffusions are expedient models for anomalous transport exhibiting super-diffusion or nonstandard normal diffusion. We refer to the associated deterministic equation as a volume-constrained nonlocal diffusion equation. The volume constraint is the nonlocal analogue of a boundary condition necessary to demonstrate that the nonlocal diffusion equation is well-posed and is consistent with the jump process. A critical aspect of the analysis is a variational formulation and a recently developed nonlocal vector calculus. This calculus allows us to pose nonlocal backward and forward Kolmogorov equations, the former equation granting the various moments of the exit-time distribution.
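
    Schematically, the mean exit time u(x) solves a volume-constrained nonlocal problem of the following form (a generic sketch consistent with the description above, not the paper's exact statement; γ denotes the jump-rate kernel with finite range δ):

        -\int_{\mathbb{R}^n} \big( u(y) - u(x) \big) \, \gamma(x,y) \, dy = 1, \quad x \in \Omega,
        \qquad
        u(x) = 0, \quad x \in \Omega_{\mathcal{I}},

    where the volume constraint u = 0 is imposed on an interaction layer Ω_I of thickness δ surrounding Ω, the nonlocal analogue of a homogeneous Dirichlet boundary condition.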

  17. The bee's map of the e-vector pattern in the sky.

    PubMed

    Rossel, S; Wehner, R

    1982-07-01

    It has long been known that bees can use the pattern of polarized light in the sky as a compass cue even if they can see only a small part of the whole pattern. How they solve this problem has remained enigmatic. Here we show that the bees rely on a generalized celestial map that is used invariably throughout the day. We reconstruct this map by analyzing the navigation errors made by bees to which single e-vectors are displayed. In addition, we demonstrate how the bee's celestial map can be derived from the e-vector patterns in the sky.

  18. A Fast Hyperspectral Vector Radiative Transfer Model in UV to IR spectral bands

    NASA Astrophysics Data System (ADS)

    Ding, J.; Yang, P.; Sun, B.; Kattawar, G. W.; Platnick, S. E.; Meyer, K.; Wang, C.

    2016-12-01

    We develop a fast hyperspectral vector radiative transfer model covering the UV to IR spectral range at 5 nm resolution. This model can simulate top-of-the-atmosphere (TOA) diffuse radiance and polarized reflectance by considering gas absorption, Rayleigh scattering, and aerosol and cloud scattering. The absorption component considers several major atmospheric absorbers, such as water vapor, CO2, O3, and O2, including both line and continuum absorption. A regression-based method is used to parameterize the layer effective optical thickness for each gas, which substantially increases the computational efficiency for absorption while maintaining high accuracy. This method is over 500 times faster than the existing line-by-line method. The scattering component uses the successive order of scattering (SOS) method. For Rayleigh scattering, convergence is fast due to the small optical thickness of atmospheric gases. For cloud and aerosol layers, a small-angle approximation method is used in the SOS calculations. The scattering process is divided into two parts, a forward part and a diffuse part. Scattering within the small-angle range in the forward direction is approximated as forward scattering. A cloud or aerosol layer is divided into thin layers. As the ray propagates through each thin layer, a portion diverges as diffuse radiation, while the remainder continues propagating in the forward direction. The computed diffuse radiance is the sum of all the diffuse parts. The small-angle approximation makes the SOS calculation converge rapidly even in a thick cloud layer.

  19. Data-driven probability concentration and sampling on manifold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu

    2016-09-15

    A new methodology is proposed for generating realizations of a random vector with values in a finite-dimensional Euclidean space that are statistically consistent with a dataset of observations of this vector. The probability distribution of this random vector, while a priori not known, is presumed to be concentrated on an unknown subset of the Euclidean space. A random matrix is introduced whose columns are independent copies of the random vector and for which the number of columns is the number of data points in the dataset. The approach is based on the use of (i) the multidimensional kernel-density estimation method for estimating the probability distribution of the random matrix, (ii) a MCMC method for generating realizations for the random matrix, (iii) the diffusion-maps approach for discovering and characterizing the geometry and the structure of the dataset, and (iv) a reduced-order representation of the random matrix, which is constructed using the diffusion-maps vectors associated with the first eigenvalues of the transition matrix relative to the given dataset. The convergence aspects of the proposed methodology are analyzed and a numerical validation is explored through three applications of increasing complexity. The proposed method is found to be robust to noise levels and data complexity as well as to the intrinsic dimension of data and the size of experimental datasets. Both the methodology and the underlying mathematical framework presented in this paper contribute new capabilities and perspectives at the interface of uncertainty quantification, statistical data analysis, stochastic modeling and associated statistical inverse problems.
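
    A minimal numpy sketch of the diffusion-maps ingredient (steps iii and iv) on a generic dataset; the bandwidth and embedding dimension are illustrative choices, and this omits the kernel-density and MCMC stages:

        import numpy as np

        def diffusion_maps(X, eps=1.0, m=3):
            """Return the first m nontrivial diffusion-map coordinates of
            the rows of X, from eigenvectors of the transition matrix."""
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            K = np.exp(-d2 / eps)                   # Gaussian affinity
            P = K / K.sum(axis=1, keepdims=True)    # row-stochastic matrix
            vals, vecs = np.linalg.eig(P)
            order = np.argsort(-vals.real)
            vals, vecs = vals.real[order], vecs.real[:, order]
            # Skip the trivial constant eigenvector (eigenvalue 1); scale
            # the remaining eigenvectors by their eigenvalues.
            return vecs[:, 1:m + 1] * vals[1:m + 1]

    Projecting the data matrix onto these leading diffusion-map vectors gives the reduced-order representation on which the sampling is concentrated.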

  20. Estimating diffusivity from the mixed layer heat and salt balances in the North Pacific

    NASA Astrophysics Data System (ADS)

    Cronin, M. F.; Pelland, N.; Emerson, S. R.; Crawford, W. R.

    2015-12-01

    Data from two National Oceanographic and Atmospheric Administration (NOAA) surface moorings in the North Pacific, in combination with data from satellite, Argo floats and glider (when available), are used to evaluate the residual diffusive flux of heat across the base of the mixed layer from the surface mixed layer heat budget. The diffusion coefficient (i.e., diffusivity) is then computed by dividing the diffusive flux by the temperature gradient in the 20-m transition layer just below the base of the mixed layer. At Station Papa in the NE Pacific subpolar gyre, this diffusivity is 1 × 10⁻⁴ m²/s during summer, increasing to ~3 × 10⁻⁴ m²/s during fall. During late winter and early spring, diffusivity has large errors. At other times, diffusivities computed from the mixed layer salt budget at Papa correlate with those from the heat budget, giving confidence that the results are robust for all seasons except late winter-early spring and can be used for other tracers. In comparison, at the Kuroshio Extension Observatory (KEO) in the NW Pacific subtropical recirculation gyre, somewhat larger diffusivities are found based upon the mixed layer heat budget: ~3 × 10⁻⁴ m²/s during the warm season and more than an order of magnitude larger during the winter, although again, wintertime errors are large. These larger values at KEO appear to be due to the increased turbulence associated with the summertime typhoons, and weaker wintertime stratification.

  1. Estimating diffusivity from the mixed layer heat and salt balances in the North Pacific

    NASA Astrophysics Data System (ADS)

    Cronin, Meghan F.; Pelland, Noel A.; Emerson, Steven R.; Crawford, William R.

    2015-11-01

    Data from two National Oceanographic and Atmospheric Administration (NOAA) surface moorings in the North Pacific, in combination with data from satellite, Argo floats and glider (when available), are used to evaluate the residual diffusive flux of heat across the base of the mixed layer from the surface mixed layer heat budget. The diffusion coefficient (i.e., diffusivity) is then computed by dividing the diffusive flux by the temperature gradient in the 20 m transition layer just below the base of the mixed layer. At Station Papa in the NE Pacific subpolar gyre, this diffusivity is 1 × 10⁻⁴ m²/s during summer, increasing to ~3 × 10⁻⁴ m²/s during fall. During late winter and early spring, diffusivity has large errors. At other times, diffusivities computed from the mixed layer salt budget at Papa correlate with those from the heat budget, giving confidence that the results are robust for all seasons except late winter-early spring and can be used for other tracers. In comparison, at the Kuroshio Extension Observatory (KEO) in the NW Pacific subtropical recirculation gyre, somewhat larger diffusivities are found based upon the mixed layer heat budget: ~3 × 10⁻⁴ m²/s during the warm season and more than an order of magnitude larger during the winter, although again, wintertime errors are large. These larger values at KEO appear to be due to the increased turbulence associated with the summertime typhoons, and weaker wintertime stratification.

  2. The role of model dynamics in ensemble Kalman filter performance for chaotic systems

    USGS Publications Warehouse

    Ng, G.-H.C.; McLaughlin, D.; Entekhabi, D.; Ahanin, A.

    2011-01-01

    The ensemble Kalman filter (EnKF) is susceptible to losing track of observations, or 'diverging', when applied to large chaotic systems such as atmospheric and ocean models. Past studies have demonstrated the adverse impact of sampling error during the filter's update step. We examine how system dynamics affect EnKF performance, and whether the absence of certain dynamic features in the ensemble may lead to divergence. The EnKF is applied to a simple chaotic model, and ensembles are checked against singular vectors of the tangent linear model, corresponding to short-term growth and Lyapunov vectors, corresponding to long-term growth. Results show that the ensemble strongly aligns itself with the subspace spanned by unstable Lyapunov vectors. Furthermore, the filter avoids divergence only if the full linearized long-term unstable subspace is spanned. However, short-term dynamics also become important as non-linearity in the system increases. Non-linear movement prevents errors in the long-term stable subspace from decaying indefinitely. If these errors then undergo linear intermittent growth, a small ensemble may fail to properly represent all important modes, causing filter divergence. A combination of long and short-term growth dynamics are thus critical to EnKF performance. These findings can help in developing practical robust filters based on model dynamics. © 2011 The Authors. Tellus A © 2011 John Wiley & Sons A/S.
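
    A compact sketch of the stochastic EnKF update examined in such studies (the generic perturbed-observation form; the state and observation dimensions, covariances, and values below are illustrative):

        import numpy as np

        def enkf_update(E, y, H, R, rng):
            """Perturbed-observation EnKF analysis.
            E: (n, N) state ensemble; y: (p,) observation;
            H: (p, n) observation operator; R: (p, p) obs-error cov."""
            n, N = E.shape
            X = E - E.mean(axis=1, keepdims=True)          # anomalies
            Y = H @ X                                      # obs-space anomalies
            S = (Y @ Y.T) / (N - 1) + R                    # innovation cov.
            K = (X @ Y.T) / (N - 1) @ np.linalg.inv(S)     # Kalman gain
            Yp = y[:, None] + rng.multivariate_normal(
                np.zeros(len(y)), R, N).T                  # perturbed obs
            return E + K @ (Yp - H @ E)                    # analysis ensemble

        rng = np.random.default_rng(0)
        E = rng.normal(size=(3, 20))                # 3 state vars, 20 members
        H = np.array([[1.0, 0.0, 0.0]])             # observe first variable
        Ea = enkf_update(E, np.array([0.5]), H, 0.1 * np.eye(1), rng)

    Divergence of the kind studied here can then be probed by checking whether the ensemble anomalies X span the model's unstable Lyapunov / singular-vector subspace.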

  3. Performance Enhancement for a GPS Vector-Tracking Loop Utilizing an Adaptive Iterated Extended Kalman Filter

    PubMed Central

    Chen, Xiyuan; Wang, Xiying; Xu, Yuan

    2014-01-01

    This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system with model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error, and based on this, a modified iterated extended Kalman filter (IEKF) named the adaptive iterated Kalman filter (AIEKF) is proposed. A vector-tracking GPS receiver utilizing the AIEKF is implemented to evaluate the performance of the proposed method. Through road tests, it is shown that the proposed method has a clear accuracy advantage over the IEKF and the adaptive extended Kalman filter (AEKF) in position determination. The results show that the proposed method is effective in reducing the root-mean-square error (RMSE) of position (including longitude, latitude, and altitude). Compared with the EKF, the position RMSE values of the AIEKF are reduced by about 45.1%, 40.9%, and 54.6% in the east, north, and up directions, respectively. Compared with the IEKF, the position RMSE values of the AIEKF are reduced by about 25.7%, 19.3%, and 35.7% in the east, north, and up directions, respectively. Compared with the AEKF, the position RMSE values of the AIEKF are reduced by about 21.6%, 15.5%, and 30.7% in the east, north, and up directions, respectively. PMID:25502124

  4. Performance enhancement for a GPS vector-tracking loop utilizing an adaptive iterated extended Kalman filter.

    PubMed

    Chen, Xiyuan; Wang, Xiying; Xu, Yuan

    2014-12-09

    This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system with model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error, and based on this, a modified iterated extended Kalman filter (IEKF) named the adaptive iterated Kalman filter (AIEKF) is proposed. A vector-tracking GPS receiver utilizing the AIEKF is implemented to evaluate the performance of the proposed method. Through road tests, it is shown that the proposed method has a clear accuracy advantage over the IEKF and the adaptive extended Kalman filter (AEKF) in position determination. The results show that the proposed method is effective in reducing the root-mean-square error (RMSE) of position (including longitude, latitude, and altitude). Compared with the EKF, the position RMSE values of the AIEKF are reduced by about 45.1%, 40.9%, and 54.6% in the east, north, and up directions, respectively. Compared with the IEKF, the position RMSE values of the AIEKF are reduced by about 25.7%, 19.3%, and 35.7% in the east, north, and up directions, respectively. Compared with the AEKF, the position RMSE values of the AIEKF are reduced by about 21.6%, 15.5%, and 30.7% in the east, north, and up directions, respectively.

  5. Bayesian statistics applied to the location of the source of explosions at Stromboli Volcano, Italy

    USGS Publications Warehouse

    Saccorotti, G.; Chouet, B.; Martini, M.; Scarpa, R.

    1998-01-01

    We present a method for determining the location and spatial extent of the source of explosions at Stromboli Volcano, Italy, based on a Bayesian inversion of the slowness vector derived from frequency-slowness analyses of array data. The method searches for source locations that minimize the error between the expected and observed slowness vectors. For a given set of model parameters, the conditional probability density function of slowness vectors is approximated by a Gaussian distribution of expected errors. The method is tested with synthetics using a five-layer velocity model derived for the north flank of Stromboli and a smoothed velocity model derived from a power-law approximation of the layered structure. Application to data from Stromboli allows for a detailed examination of uncertainties in source location due to experimental errors and incomplete knowledge of the Earth model. Although the solutions are not constrained in the radial direction, excellent resolution is achieved in both transverse and depth directions. Under the assumption that the horizontal extent of the source does not exceed the crater dimension, the 90% confidence region in the estimate of the explosive source location corresponds to a small volume extending from a depth of about 100 m to a maximum depth of about 300 m beneath the active vents, with a maximum likelihood source region located in the 120- to 180-m-depth interval.

  6. PREDICTION OF SOLAR FLARE SIZE AND TIME-TO-FLARE USING SUPPORT VECTOR MACHINE REGRESSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boucheron, Laura E.; Al-Ghraibah, Amani; McAteer, R. T. James

    We study the prediction of solar flare size and time-to-flare using 38 features describing magnetic complexity of the photospheric magnetic field. This work uses support vector regression to formulate a mapping from the 38-dimensional feature space to a continuous-valued label vector representing flare size or time-to-flare. When we consider flaring regions only, we find an average error in estimating flare size of approximately half a geostationary operational environmental satellite (GOES) class. When we additionally consider non-flaring regions, we find an increased average error of approximately three-fourths a GOES class. We also consider thresholding the regressed flare size for the experiment containing both flaring and non-flaring regions and find a true positive rate of 0.69 and a true negative rate of 0.86 for flare prediction. The results for both of these size regression experiments are consistent across a wide range of predictive time windows, indicating that the magnetic complexity features may be persistent in appearance long before flare activity. This is supported by our larger error rates of some 40 hr in the time-to-flare regression problem. The 38 magnetic complexity features considered here appear to have discriminative potential for flare size, but their persistence in time makes them less discriminative for the time-to-flare problem.
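
    A minimal sketch of support vector regression from a 38-dimensional feature space to a continuous flare-size label, with thresholding for flare/no-flare prediction (synthetic data; scikit-learn assumed; not the authors' pipeline):

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 38))      # 38 magnetic-complexity features (synthetic)
        y = 0.5 * X[:, 0] + rng.normal(scale=0.2, size=500)  # flare-size label (synthetic)

        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
        model.fit(X[:400], y[:400])
        y_hat = model.predict(X[400:])
        print("mean abs error:", np.abs(y_hat - y[400:]).mean())

        # Thresholding the regressed size yields a binary flare/no-flare prediction:
        flare_pred = y_hat > 0.0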

  7. Diffusing-wave polarimetry for tissue diagnostics

    NASA Astrophysics Data System (ADS)

    Macdonald, Callum; Doronin, Alexander; Peña, Adrian F.; Eccles, Michael; Meglinski, Igor

    2014-03-01

    We exploit the directional awareness of circularly and/or elliptically polarized light propagating within media which exhibit high numbers of scattering events. By tracking the Stokes vector of the detected light on the Poincaré sphere, we demonstrate its applicability for characterization of anisotropy of scattering. A phenomenological model is shown to have an excellent agreement with the experimental data and with the results obtained by the polarization-tracking Monte Carlo model developed in-house. By analogy to diffusing-wave spectroscopy we call this approach diffusing-wave polarimetry, and illustrate its utility in probing cancerous and non-cancerous tissue samples in vitro for diagnostic purposes.

  8. Clinical outcomes of Transepithelial photorefractive keratectomy to treat low to moderate myopic astigmatism.

    PubMed

    Xi, Lei; Zhang, Chen; He, Yanling

    2018-05-09

    To evaluate the refractive and visual outcomes of transepithelial photorefractive keratectomy (TransPRK) in the treatment of low to moderate myopic astigmatism. This retrospective study enrolled a total of 47 eyes that had undergone TransPRK. Preoperative cylinder diopters ranged from -0.75 D to -2.25 D (mean -1.11 ± 0.40 D), and the sphere was between -1.50 D and -5.75 D. Visual outcomes and vector analysis of astigmatism, including the error ratio (ER), correction ratio (CR), error of magnitude (EM) and error of angle (EA), were evaluated. At 6 months after TransPRK, all eyes had an uncorrected distance visual acuity of 20/20 or better, no eyes lost ≥2 lines of corrected distance visual acuity (CDVA), and 93.6% had residual refractive cylinder within ±0.50 D of the intended correction. On vector analysis, the mean correction ratio for refractive cylinder was 1.03 ± 0.30. The mean error of magnitude was -0.04 ± 0.36. The mean error of angle was 0.44° ± 7.42°, and 80.9% of eyes had an axis shift within ±10°. The absolute astigmatic error of magnitude was statistically significantly correlated with the intended cylinder correction (r = 0.48, P < 0.01). TransPRK showed safe, effective and predictable results in the correction of low to moderate astigmatism and myopia.
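
    The vector-analysis quantities named above can be computed from the intended and achieved cylinder corrections; a rough sketch using one common (Alpins-style) set of definitions, which may differ in detail from the paper's conventions:

        def astig_analysis(tia, tia_axis, sia, sia_axis):
            # tia, sia: intended and achieved cylinder corrections (D);
            # axes in degrees (defined modulo 180).
            CR = sia / tia                                # correction ratio
            EM = sia - tia                                # error of magnitude
            EA = ((sia_axis - tia_axis + 90) % 180) - 90  # error of angle, in (-90, 90]
            ER = abs(EM) / tia                            # error ratio (one common definition)
            return CR, EM, EA, ER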

  9. Development of advanced methods for analysis of experimental data in diffusion

    NASA Astrophysics Data System (ADS)

    Jaques, Alonso V.

    There are numerous experimental configurations and data analysis techniques for the characterization of diffusion phenomena. However, the mathematical methods for estimating diffusivities traditionally do not take into account the effects of experimental errors in the data, and often require smooth, noiseless data sets to perform the necessary analysis steps. The current methods used for data smoothing require strong assumptions which can introduce numerical "artifacts" into the data, affecting confidence in the estimated parameters. The Boltzmann-Matano method is used extensively in the determination of concentration-dependent diffusivities, D(C), in alloys. In the course of analyzing experimental data, numerical integrations and differentiations of the concentration profile are performed. These methods require smoothing of the data prior to analysis. We present here an approach to the Boltzmann-Matano method that is based on a regularization method to estimate a differentiation operation on the data, i.e., to estimate the concentration gradient term, which is important in the analysis process for determining the diffusivity. This approach, therefore, has the potential to be less subjective, and in numerical simulations shows an increased accuracy in the estimated diffusion coefficients. We present a regression approach to estimate linear multicomponent diffusion coefficients that eliminates the need to pre-treat or pre-condition the concentration profile. This approach fits the data to a functional form of the mathematical expression for the concentration profile, and allows us to determine the diffusivity matrix directly from the fitted parameters. Reformulation of the equation for the analytical solution is done in order to reduce the size of the problem and accelerate the convergence. The objective function for the regression can incorporate point estimations for error in the concentration, improving the statistical confidence in the estimated diffusivity matrix. Case studies are presented to demonstrate the reliability and the stability of the method. To the best of our knowledge there is no published analysis of the effects of experimental errors on the reliability of the estimates for the diffusivities. For the case of linear multicomponent diffusion, we analyze the effects of the instrument analytical spot size, positioning uncertainty, and concentration uncertainty on the resulting values of the diffusivities. These effects are studied using a Monte Carlo method on simulated experimental data. Several useful scaling relationships were identified which allow more rigorous and quantitative estimates of the errors in the measured data, and are valuable for experimental design. Finally, to analyze anomalous diffusion processes, where traditional diffusional transport equations do not hold, we explore the use of fractional calculus to represent these processes analytically. We use the fractional calculus approach for anomalous diffusion processes occurring through a finite plane sheet with one face held at a fixed concentration, the other held at zero, and the initial concentration within the sheet equal to zero. This problem is related to cases in nature where diffusion is enhanced relative to the classical process, and the governing equation is not necessarily a second-order differential equation; rather, differentiation is of fractional order alpha, where 1 ≤ alpha < 2. For alpha = 2, the presented solutions reduce to the classical second-order diffusion solution for the conditions studied. The solution obtained allows the analysis of permeation experiments. Frequently, hydrogen diffusion is analyzed with electrochemical permeation methods interpreted using the traditional, Fickian-based theory. Experimental evidence shows the latter analytical approach is not always appropriate, because reported data show qualitative (and quantitative) deviations from its theoretical scaling predictions. Preliminary analysis of data shows better agreement with fractional diffusion analysis than with the traditional square-root scaling. Although there is a large amount of work on the estimation of the diffusivity from experimental data, reported studies typically present only the analytical description for the diffusivity, without scatter. However, because these studies do not consider effects produced by instrument analysis, their direct applicability is limited. We propose alternatives to address these issues and evaluate their influence on the final resulting diffusivity values.
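
    A minimal numerical sketch of the Boltzmann-Matano construction discussed above (a generic illustration, not the thesis's method; the plain np.gradient call stands in for the regularized derivative estimate the work advocates, and the endpoints, where dC/dx vanishes, are not meaningful):

        import numpy as np

        def boltzmann_matano(x, C, t):
            # x: positions (m); C: smoothed, monotonic concentration profile
            # measured at anneal time t (s). Returns D evaluated at each C[i].
            x_M = np.trapz(x, C) / (C[-1] - C[0])   # Matano plane: int (x - x_M) dC = 0
            dCdx = np.gradient(C, x)                # stand-in for a regularized derivative
            D = np.empty_like(C)
            for i in range(len(C)):
                integral = np.trapz(x[:i + 1] - x_M, C[:i + 1])   # int (x - x_M) dC
                D[i] = -integral / (2.0 * t * dCdx[i])
            return D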

  10. Comparison of results of fluconazole disk diffusion testing for Candida species with results from a central reference laboratory in the ARTEMIS global antifungal surveillance program.

    PubMed

    Pfaller, M A; Hazen, K C; Messer, S A; Boyken, L; Tendolkar, S; Hollis, R J; Diekema, D J

    2004-08-01

    The accuracy of antifungal susceptibility tests is important for accurate resistance surveillance and for the clinical management of patients with serious infections. Our main objective was to compare the results of fluconazole disk diffusion testing of Candida spp. performed by ARTEMIS participating centers with disk diffusion and MIC results obtained by the central reference laboratory. A total of 2,949 isolates of Candida spp. were tested by NCCLS disk diffusion and reference broth microdilution methods in the central reference laboratory. These results were compared to the results of disk diffusion testing performed in the 54 participating centers. All tests were performed and interpreted following NCCLS recommendations. Overall categorical agreement between participant disk diffusion test results and reference laboratory MIC results was 87.4%, with 0.2% very major errors (VME) and 3.3% major errors (ME). The categorical agreement between the disk diffusion test results obtained in the reference laboratory with the MIC test results was similar: 92.8%. Likewise, good agreement was observed between participant disk diffusion test results and reference laboratory disk diffusion test results: 90.4%, 0.4% VME, and 3.4% ME. The disk diffusion test was especially reliable in detecting those isolates of Candida spp. that were characterized as resistant by reference MIC testing. External quality assurance data obtained by surveillance programs such as the ARTEMIS Global Antifungal Surveillance Program ensure the generation of useful surveillance data and result in the continued improvement of antifungal susceptibility testing practices.

  11. Dispersion of Vapor from LNG Spills -- Simulation in a Meteorological Wind Tunnel of Spills at China Lake Naval Weapons Center, California.

    DTIC Science & Technology

    1979-03-01

    [Garbled OCR excerpt of report front matter and table of contents. Recoverable content: Fluid Dynamics and Diffusion Laboratory, Department of Civil Engineering, Colorado State University, Fort Collins, Colorado 80523; contract DOT-CG-75279-A; contents entries for the film aspirating probe, errors in concentration measurement, and test program results; definitions of dimensionless groups including the Prandtl number, Pr = (viscous diffusivity)/(thermal diffusivity), and the Eckert number.]

  12. Effect of Static Strains on Diffusion

    NASA Technical Reports Server (NTRS)

    Girifalco, L. A.; Grimes, H. H.

    1961-01-01

    A theory is developed that gives the diffusion coefficient in strained systems as an exponential function of the strain. This theory starts with the statistical theory of the atomic jump frequency as developed by Vineyard. The parameter determining the effect of strain on diffusion is related to the changes in the inter-atomic forces with strain. Comparison of the theory with published experimental results for the effect of pressure on diffusion shows that the experiments agree with the form of the theoretical equation in all cases within experimental error.

  13. Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements

    NASA Technical Reports Server (NTRS)

    Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.

    2014-01-01

    This presentation discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 2x4 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.
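
    EVM itself is straightforward to compute from measured and ideal constellation symbols; a minimal sketch using the RMS-reference normalization (conventions vary, e.g. peak versus RMS normalization):

        import numpy as np

        def evm_percent(measured, reference):
            err = measured - reference
            return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2)
                                   / np.mean(np.abs(reference) ** 2))

        # Noisy QPSK example:
        rng = np.random.default_rng(1)
        sym = rng.integers(0, 4, 1000)
        ref = np.exp(1j * (np.pi / 4 + np.pi / 2 * sym))    # ideal QPSK points
        meas = ref + 0.05 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))
        print(f"EVM = {evm_percent(meas, ref):.2f}%")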

  14. Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements

    NASA Technical Reports Server (NTRS)

    Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.

    2014-01-01

    This paper discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 4x2 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.

  15. An algorithm for targeting finite burn maneuvers

    NASA Technical Reports Server (NTRS)

    Barbieri, R. W.; Wyatt, G. H.

    1972-01-01

    An algorithm was developed to solve the following problem: given the characteristics of the engine to be used to make a finite burn maneuver and given the desired orbit, when must the engine be ignited and what must be the orientation of the thrust vector so as to obtain the desired orbit? The desired orbit is characterized by classical elements and functions of these elements whereas the control parameters are characterized by the time to initiate the maneuver and three direction cosines which locate the thrust vector. The algorithm was built with a Monte Carlo capability whereby samples are taken from the distribution of errors associated with the estimate of the state and from the distribution of errors associated with the engine to be used to make the maneuver.

  16. [Gene therapy for the treatment of inborn errors of metabolism].

    PubMed

    Pérez-López, Jordi

    2014-06-16

    Due to the enzymatic defect in inborn errors of metabolism, there is a blockage in the metabolic pathways and an accumulation of toxic metabolites. Currently available therapies include dietary restriction, enhancement of alternative metabolic pathways, and the replacement of the deficient enzyme by cell transplantation, liver transplantation or administration of the purified enzyme. Gene therapy, in which a vector transfers a correct copy of the altered gene into the body, is emerging as a promising treatment. However, the difficulty the vectors currently used have in crossing the blood-brain barrier, the immune response, cellular toxicity and potential oncogenesis are limitations that could greatly restrict its clinical application in human beings. Copyright © 2013 Elsevier España, S.L. All rights reserved.

  17. Impact of Orbit Position Errors on Future Satellite Gravity Models

    NASA Astrophysics Data System (ADS)

    Encarnacao, J.; Ditmar, P.; Klees, R.

    2015-12-01

    We present the results of a study of the impact of orbit positioning noise (OPN) caused by incomplete knowledge of the Earth's gravity field on gravity models estimated from satellite gravity data. The OPN is simulated as the difference between two sets of orbits integrated on the basis of different static gravity field models. The OPN is propagated into ll-SST data, here computed as averaged inter-satellite accelerations projected onto the Line of Sight (LoS) vector between the two satellites. We consider the cartwheel formation (CF), pendulum formation (PF), and trailing formation (TF) as they produce a different dominant orientation of the LoS vector. Given the polar orbits of the formations, the LoS vector is mainly aligned with the North-South direction in the TF, with the East-West direction in the PF (i.e. no along-track offset), and contains a radial component in the CF. An analytical analysis predicts that the CF suffers from a very high sensitivity to the OPN. This is a fundamental characteristic of this formation, which results from the amplification of this noise by diagonal components of the gravity gradient tensor (defined in the local frame) during the propagation into satellite gravity data. In contrast, the OPN in the data from PF and TF is only scaled by off-diagonal gravity gradient components, which are much smaller than the diagonal tensor components. A numerical analysis shows that the effect of the OPN is similar in the data collected by the TF and the PF. The amplification of the OPN errors for the CF leads to errors in the gravity model that are three orders of magnitude larger than those in case of the PF. This means that any implementation of the CF will most likely produce data with relatively low quality since this error dominates the error budget, especially at low frequencies. This is particularly critical for future gravimetric missions that will be equipped with highly accurate ranging sensors.

  18. Comparison of nanoparticle diffusion using fluorescence correlation spectroscopy and differential dynamic microscopy within concentrated polymer solutions

    NASA Astrophysics Data System (ADS)

    Shokeen, Namita; Issa, Christopher; Mukhopadhyay, Ashis

    2017-12-01

    We studied the diffusion of nanoparticles (NPs) within aqueous entangled solutions of polyethylene oxide (PEO) by using two different optical techniques. Fluorescence correlation spectroscopy, a method widely used to investigate nanoparticle dynamics in polymer solution, was used to measure the long-time diffusion coefficient (D) of 25 nm radius particles within high molecular weight, Mw = 600 kg/mol PEO in water solutions. Differential dynamic microscopy (DDM) was used to determine the wave-vector dependent dynamics of NPs within the same polymer solutions. Our results showed good agreement between the two methods, including demonstration of normal diffusion and almost identical diffusion coefficients obtained by both techniques. The research extends the scope of DDM to study the dynamics and rheological properties of soft matter at a nanoscale. The measured diffusion coefficients followed a scaling theory, which can be explained by the coupling between polymer dynamics and NP motion.

  19. Measuring a diffusion coefficient by single-particle tracking: statistical analysis of experimental mean squared displacement curves.

    PubMed

    Ernst, Dominique; Köhler, Jürgen

    2013-01-21

    We provide experimental results on the accuracy of diffusion coefficients obtained by a mean squared displacement (MSD) analysis of single-particle trajectories. We have recorded very long trajectories comprising more than 1.5 × 10^5 data points and decomposed these long trajectories into shorter segments, providing us with ensembles of trajectories of variable lengths. This enabled a statistical analysis of the resulting MSD curves as a function of the lengths of the segments. We find that the relative error of the diffusion coefficient can be minimized by taking an optimum number of points into account for fitting the MSD curves, and that this optimum does not depend on the segment length. Yet the magnitude of the relative error for the diffusion coefficient does, and achieving an accuracy on the order of 10% requires the recording of trajectories with about 1000 data points. Finally, we compare our results with theoretical predictions and find very good qualitative and quantitative agreement between experiment and theory.
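
    A minimal sketch of the analysis described (hypothetical names): compute the time-averaged MSD of a trajectory and estimate D from a linear fit to its first few points, the number of which is the "optimum number of points" studied above.

        import numpy as np

        def msd(traj, max_lag):
            # Time-averaged MSD of an (N, 2) trajectory for lags 1..max_lag.
            return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                             for lag in range(1, max_lag + 1)])

        def fit_D(traj, dt, n_fit=4, dim=2):
            # Linear fit of the first n_fit MSD points; MSD = 2 * dim * D * t
            # for free diffusion.
            lags = np.arange(1, n_fit + 1) * dt
            slope = np.polyfit(lags, msd(traj, n_fit), 1)[0]
            return slope / (2 * dim)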

  20. Prevention of medication errors: detection and audit.

    PubMed

    Montesi, Germana; Lechi, Alessandro

    2009-06-01

    1. Medication errors have important implications for patient safety, and their identification is a main target in improving clinical practice errors, in order to prevent adverse events. 2. Error detection is the first crucial step. Approaches to this are likely to be different in research and routine care, and the most suitable must be chosen according to the setting. 3. The major methods for detecting medication errors and associated adverse drug-related events are chart review, computerized monitoring, administrative databases, and claims data, using direct observation, incident reporting, and patient monitoring. All of these methods have both advantages and limitations. 4. Reporting discloses medication errors, can trigger warnings, and encourages the diffusion of a culture of safe practice. Combining and comparing data from various sources increases the reliability of the system. 5. Error prevention can be planned by means of retroactive and proactive tools, such as audit and Failure Mode, Effect, and Criticality Analysis (FMECA). Audit is also an educational activity, which promotes high-quality care; it should be carried out regularly. In an audit cycle we can compare what is actually done against reference standards and put in place corrective actions to improve the performance of individuals and systems. 6. Patient safety must be the first aim in every setting, in order to build safer systems, learning from errors and reducing the human and fiscal costs.

  1. Isotropic resolution diffusion tensor imaging of lumbosacral and sciatic nerves using a phase‐corrected diffusion‐prepared 3D turbo spin echo

    PubMed Central

    Van, Anh T.; Weidlich, Dominik; Kooijman, Hendrick; Hock, Andreas; Rummeny, Ernst J.; Gersing, Alexandra; Kirschke, Jan S.; Karampinos, Dimitrios C.

    2018-01-01

    Purpose To perform in vivo isotropic-resolution diffusion tensor imaging (DTI) of lumbosacral and sciatic nerves with a phase-navigated diffusion-prepared (DP) 3D turbo spin echo (TSE) acquisition and modified reconstruction incorporating intershot phase-error correction, and to investigate the improvement in image quality and diffusion quantification with the proposed phase correction. Methods Phase-navigated DP 3D TSE included magnitude stabilizers to minimize motion and eddy-current effects on the signal magnitude. Phase navigation of motion-induced phase errors was introduced before readout in 3D TSE. DTI of lower back nerves was performed in vivo using 3D TSE and single-shot echo planar imaging (ss-EPI) in 13 subjects. Diffusion data were phase-corrected per k_z plane with respect to T2-weighted data. The effects of motion-induced phase errors on DTI quantification were assessed for 3D TSE and compared with ss-EPI. Results Non-phase-corrected 3D TSE resulted in artifacts in diffusion-weighted images and overestimated DTI parameters in the sciatic nerve (mean diffusivity [MD] = 2.06 ± 0.45). Phase correction of 3D TSE DTI data resulted in reductions in all DTI parameters (MD = 1.73 ± 0.26) of statistical significance (P ≤ 0.001) and in closer agreement with ss-EPI DTI parameters (MD = 1.62 ± 0.21). Conclusion DP 3D TSE with phase correction allows distortion-free isotropic diffusion imaging of lower back nerves with robustness to motion-induced artifacts and DTI quantification errors. Magn Reson Med 80:609-618, 2018. © 2018 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of the International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes. PMID:29380414

  2. Development of a two-dimensional dual pendulum thrust stand for Hall thrusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagao, N.; Yokota, S.; Komurasaki, K.

    A two-dimensional dual pendulum thrust stand was developed to measure thrust vectors (axial and horizontal (transverse) direction thrusts) of a Hall thruster. A thruster with a steering mechanism is mounted on the inner pendulum, and thrust is measured from the displacement between inner and outer pendulums, by which a thermal drift effect is canceled out. Two crossover knife-edges support each pendulum arm: one is set on the other at a right angle. They enable the pendulums to swing in two directions. Thrust calibration using a pulley and weight system showed that the measurement errors were less than 0.25 mN (1.4%) in the main thrust direction and 0.09 mN (1.4%) in its transverse direction. The thrust angle of the thrust vector was measured with the stand using the thruster. Consequently, a vector deviation from the main thrust direction of ±2.3 deg. was measured with an error of ±0.2 deg. under the typical operating conditions for the thruster.

  3. Adaptive h-refinement for reduced-order models

    DOE PAGES

    Carlberg, Kevin T.

    2014-11-05

    Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.

  4. A network application for modeling a centrifugal compressor performance map

    NASA Astrophysics Data System (ADS)

    Nikiforov, A.; Popova, D.; Soldatova, K.

    2017-08-01

    The approximation of the aerodynamic performance of a centrifugal compressor stage and vaneless diffuser by neural networks is presented. Advantages, difficulties and specific features of the method are described. An example of a neural network and its structure is shown. The performance in terms of efficiency, pressure ratio and work coefficient of 39 model stages within the range of flow coefficient from 0.01 to 0.08 was modeled with a mean squared error of 1.5%. In addition, the loss and friction coefficients of vaneless diffusers of relative widths 0.014-0.10 were modeled with a mean squared error of 2.45%.

  5. Model of bidirectional reflectance distribution function for metallic materials

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Zhu, Jing-Ping; Liu, Hong; Hou, Xun

    2016-09-01

    Based on the three-component assumption that the reflection is divided into specular reflection, directional diffuse reflection, and ideal diffuse reflection, a bidirectional reflectance distribution function (BRDF) model of metallic materials is presented. Compared with the two-component assumption that the reflection is composed of specular reflection and diffuse reflection, the three-component assumption divides the diffuse reflection into directional diffuse and ideal diffuse reflection. This model effectively resolves the problem that constant diffuse reflection leads to considerable error for metallic materials. Simulation and measurement results validate that this three-component BRDF model can improve the modeling accuracy significantly and describe the reflection properties in the hemisphere space precisely for the metallic materials.

  6. A terrestrial lidar-based workflow for determining three-dimensional slip vectors and associated uncertainties

    USGS Publications Warehouse

    Gold, Peter O.; Cowgill, Eric; Kreylos, Oliver; Gold, Ryan D.

    2012-01-01

    Three-dimensional (3D) slip vectors recorded by displaced landforms are difficult to constrain across complex fault zones, and the uncertainties associated with such measurements become increasingly challenging to assess as landforms degrade over time. We approach this problem from a remote sensing perspective by using terrestrial laser scanning (TLS) and 3D structural analysis. We have developed an integrated TLS data collection and point-based analysis workflow that incorporates accurate assessments of aleatoric and epistemic uncertainties using experimental surveys, Monte Carlo simulations, and iterative site reconstructions. Our scanning workflow and equipment requirements are optimized for single-operator surveying, and our data analysis process is largely completed using new point-based computing tools in an immersive 3D virtual reality environment. In a case study, we measured slip vector orientations at two sites along the rupture trace of the 1954 Dixie Valley earthquake (central Nevada, United States), yielding measurements that are the first direct constraints on the 3D slip vector for this event. These observations are consistent with a previous approximation of net extension direction for this event. We find that errors introduced by variables in our survey method result in <2.5 cm of variability in components of displacement, and are eclipsed by the 10–60 cm epistemic errors introduced by reconstructing the field sites to their pre-erosion geometries. Although the higher resolution TLS data sets enabled visualization and data interactivity critical for reconstructing the 3D slip vector and for assessing uncertainties, dense topographic constraints alone were not sufficient to significantly narrow the wide (<26°) range of allowable slip vector orientations that resulted from accounting for epistemic uncertainties.

  7. Sea ice motion from low-resolution satellite sensors: An alternative method and its validation in the Arctic

    NASA Astrophysics Data System (ADS)

    Lavergne, T.; Eastwood, S.; Teffah, Z.; Schyberg, H.; Breivik, L.-A.

    2010-10-01

    The retrieval of sea ice motion with the Maximum Cross-Correlation (MCC) method from low-resolution (10-15 km) spaceborne imaging sensors is challenged by a dominating quantization noise as the time span of displacement vectors is shortened. To allow investigating shorter displacements from these instruments, we introduce an alternative sea ice motion tracking algorithm that builds on the MCC method but relies on a continuous optimization step for computing the motion vector. The prime effect of this method is to effectively dampen the quantization noise, an artifact of the MCC. It allows for retrieving spatially smooth 48 h sea ice motion vector fields in the Arctic. Strategies to detect and correct erroneous vectors as well as to optimally merge several polarization channels of a given instrument are also described. A test processing chain is implemented and run with several active and passive microwave imagers (Advanced Microwave Scanning Radiometer-EOS (AMSR-E), Special Sensor Microwave Imager, and Advanced Scatterometer) during three Arctic autumn, winter, and spring seasons. Ice motion vectors are collocated to and compared with GPS positions of in situ drifters. Error statistics are shown to range from 2.5 to 4.5 km (standard deviation for components of the vectors) depending on the sensor, without significant bias. We discuss the relative contribution of measurement and representativeness errors by analyzing monthly validation statistics. The 37 GHz channels of the AMSR-E instrument allow for the best validation statistics. The operational low-resolution sea ice drift product of the EUMETSAT OSI SAF (European Organisation for the Exploitation of Meteorological Satellites Ocean and Sea Ice Satellite Application Facility) is based on the algorithms presented in this paper.
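
    A minimal sketch of an MCC displacement estimate with a continuous (parabolic) refinement of the correlation peak, which plays the role of the quantization-noise damping described above (the paper's continuous optimization differs in detail, and the sign convention should be verified against a known shift):

        import numpy as np

        def mcc_displacement(patch0, patch1):
            a = (patch0 - patch0.mean()) / patch0.std()
            b = (patch1 - patch1.mean()) / patch1.std()
            # Circular cross-correlation via FFT (adequate for a small sketch).
            cc = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
            iy, ix = np.unravel_index(np.argmax(cc), cc.shape)

            def parabolic(cm, c0, cp):
                # Vertex of the parabola through three correlation samples.
                return 0.5 * (cm - cp) / (cm - 2.0 * c0 + cp)

            ny, nx = cc.shape
            dy = parabolic(cc[iy - 1, ix], cc[iy, ix], cc[(iy + 1) % ny, ix])
            dx = parabolic(cc[iy, ix - 1], cc[iy, ix], cc[iy, (ix + 1) % nx])
            sy = iy - ny if iy > ny // 2 else iy    # FFT index -> signed shift
            sx = ix - nx if ix > nx // 2 else ix
            return sy + dy, sx + dx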

  8. The frequency and accuracy of replication past a thymine-thymine cyclobutane dimer are very different in Saccharomyces cerevisiae and Escherichia coli.

    PubMed

    Gibbs, P E; Kilbey, B J; Banerjee, S K; Lawrence, C W

    1993-05-01

    We have compared the mutagenic properties of a T-T cyclobutane dimer in baker's yeast, Saccharomyces cerevisiae, with those in Escherichia coli by transforming each of these species with the same single-stranded shuttle vector carrying either the cis-syn or the trans-syn isomer of this UV photoproduct at a unique site. The mutagenic properties investigated were the frequency of replicational bypass of the photoproduct, the error rate of bypass, and the mutation spectrum. In SOS-induced E. coli, the cis-syn dimer was bypassed in approximately 16% of the vector molecules, and 7.6% of the bypass products had targeted mutations. In S. cerevisiae, however, bypass occurred in about 80% of these molecules, and the bypass was at least 19-fold more accurate (approximately 0.4% targeted mutations). Each of these yeast mutations was a single unique event, and none were like those in E. coli, suggesting that in fact the difference in error rate is much greater. Bypass of the trans-syn dimer occurred in about 17% of the vector molecules in both species, but with this isomer the error rate was higher in S. cerevisiae (21 to 36% targeted mutations) than in E. coli (13%). However, the spectra of mutations induced by the latter photoproduct were virtually identical in the two organisms. We conclude that bypass and error frequencies are determined both by the structure of the photoproduct-containing template and by the particular replication proteins concerned but that the types of mutations induced depend predominantly on the structure of the template. Unlike E. coli, bypass in S. cerevisiae did not require UV-induced functions.

  9. Support of Mark III Optical Interferometer

    DTIC Science & Technology

    1988-11-01

    [Garbled OCR excerpt. Recoverable fragments: measurements of pointing error and the low-visibility pedestal against the surface of a Zerodur sphere attached to the mirror are not entirely consistent (Fig. 7); the report concerns stellar interferometers at the Mt. Wilson Observatory; light from the two siderostats is directed toward the central building by fixed mirrors, which are necessary to keep the polarization vectors aligned.]

  10. Defense Mapping Agency (DMA) Raster-to-Vector Analysis

    DTIC Science & Technology

    1984-11-30

    [Garbled OCR excerpt. Recoverable fragments: a model is used to pinpoint critical deficiencies and understand trade-offs between alternative solutions, exemplified by the allocation of human resources; the manual process is prone to errors (human operator eye/motor control limitations) and time consuming (as a function of data density); correction is achieved through computer interactive graphics, with each error or anomaly individually identified and corrected by a human operator.]

  11. A robust interpolation procedure for producing tidal current ellipse inputs for regional and coastal ocean numerical models

    NASA Astrophysics Data System (ADS)

    Byun, Do-Seong; Hart, Deirdre E.

    2017-04-01

    Regional and/or coastal ocean models can use tidal current harmonic forcing, together with tidal harmonic forcing along open boundaries, in order to successfully simulate tides and tidal currents. These inputs can be freely generated using online open-access data, but the data produced are not always at the resolution required for regional or coastal models. Subsequent interpolation procedures can produce tidal current forcing data errors for parts of the world's coastal ocean where tidal ellipse inclinations and phases move across the invisible mathematical "boundaries" between 359° and 0° (or 179° and 0°). In nature, such "boundaries" are in fact smooth transitions, but if these mathematical "boundaries" are not treated correctly during interpolation, they can produce inaccurate input data and hamper the accurate simulation of tidal currents in regional and coastal ocean models. These avoidable errors arise due to procedural shortcomings involving vector embodiment problems (i.e., how a vector is represented mathematically, for example as velocities or as coordinates). Automated solutions for producing correct tidal ellipse parameter input data are possible if a series of steps are followed correctly, including the use of Cartesian coordinates during interpolation. This note comprises the first published description of scenarios where tidal ellipse parameter interpolation errors can arise, and of a procedure to successfully avoid these errors when generating tidal inputs for regional and/or coastal ocean numerical models. We explain how a straightforward sequence of data production, format conversion, interpolation, and format reconversion steps may be used to check for the potential occurrence and avoidance of tidal ellipse interpolation and phase errors. This sequence is demonstrated via a case study of the M2 tidal constituent in the seas around Korea but is designed to be universally applicable. We also recommend employing tidal ellipse parameter calculation methods that avoid the use of Foreman's (1978) "northern semi-major axis convention" since, as revealed in our analysis, this commonly used conversion can result in inclination interpolation errors even when Cartesian coordinate-based "vector embodiment" solutions are employed.
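
    A minimal sketch of the wraparound-safe interpolation step described above: map the angular quantity to Cartesian components, interpolate the components, and convert back (for ellipse inclination, defined modulo 180°, set period=180; for a full vector quantity one would interpolate amplitude-weighted components):

        import numpy as np

        def interp_angle_safe(x, xp, angle_deg, period=360.0):
            a = np.deg2rad(np.asarray(angle_deg) * (360.0 / period))
            u = np.interp(x, xp, np.cos(a))       # interpolate Cartesian components
            v = np.interp(x, xp, np.sin(a))
            out = np.rad2deg(np.arctan2(v, u)) % 360.0
            return out * (period / 360.0)

        print(np.interp(0.5, [0, 1], [358.0, 2.0]))          # 180.0 -- the wraparound error
        print(interp_angle_safe(0.5, [0, 1], [358.0, 2.0]))  # ~0.0  -- correct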

  12. A hybrid continuous-discrete method for stochastic reaction-diffusion processes.

    PubMed

    Lo, Wing-Cheong; Zheng, Likun; Nie, Qing

    2016-09-01

    Stochastic fluctuations in reaction-diffusion processes often have substantial effect on spatial and temporal dynamics of signal transductions in complex biological systems. One popular approach for simulating these processes is to divide the system into small spatial compartments assuming that molecules react only within the same compartment and jump between adjacent compartments driven by the diffusion. While the approach is convenient in terms of its implementation, its computational cost may become prohibitive when diffusive jumps occur significantly more frequently than reactions, as in the case of rapid diffusion. Here, we present a hybrid continuous-discrete method in which diffusion is simulated using continuous approximation while reactions are based on the Gillespie algorithm. Specifically, the diffusive jumps are approximated as continuous Gaussian random vectors with time-dependent means and covariances, allowing use of a large time step, even for rapid diffusion. By considering the correlation among diffusive jumps, the approximation is accurate for the second moment of the diffusion process. In addition, a criterion is obtained for identifying the region in which such diffusion approximation is required to enable adaptive calculations for better accuracy. Applications to a linear diffusion system and two nonlinear systems of morphogens demonstrate the effectiveness and benefits of the new hybrid method.
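
    A much-simplified 1-D sketch of the hybrid idea (the paper uses correlated Gaussian random vectors with time-dependent means and covariances and a Gillespie treatment of reactions; here the jump counts are independent Gaussians matching the Poisson mean and variance, and the reaction is a simple first-order degradation):

        import numpy as np

        rng = np.random.default_rng(2)

        def hybrid_step(n, d, k_deg, dt):
            # Gaussian approximation of the left/right diffusive jump counts.
            mean = n * d * dt
            sd = np.sqrt(np.maximum(mean, 0.0))
            jr = np.clip(np.round(rng.normal(mean, sd)), 0, n)
            jl = np.clip(np.round(rng.normal(mean, sd)), 0, n - jr)
            jr[-1] = 0.0; jl[0] = 0.0               # reflecting walls
            n = n - jr - jl
            n[1:] += jr[:-1]
            n[:-1] += jl[1:]
            # Discrete first-order degradation A -> 0 (per-molecule survival).
            return rng.binomial(n.astype(int), np.exp(-k_deg * dt)).astype(float)

        n = np.zeros(50); n[25] = 1000.0
        for _ in range(100):
            n = hybrid_step(n, d=5.0, k_deg=0.01, dt=0.05)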

  13. Comparison of the Radiative Two-Flux and Diffusion Approximations

    NASA Technical Reports Server (NTRS)

    Spuckler, Charles M.

    2006-01-01

    Approximate solutions are sometimes used to determine the heat transfer and temperatures in a semitransparent material in which conduction and thermal radiation are acting. A comparison of the Milne-Eddington two-flux approximation and the diffusion approximation for combined conduction and radiation heat transfer in a ceramic material was performed to determine the accuracy of the diffusion solution. A plane gray semitransparent layer without a substrate and a non-gray semitransparent plane layer on an opaque substrate were considered. For the plane gray layer the material is semitransparent for all wavelengths and the scattering and absorption coefficients do not vary with wavelength. For the non-gray plane layer the material is semitransparent with constant absorption and scattering coefficients up to a specified wavelength. At higher wavelengths the non-gray plane layer is assumed to be opaque. The layers are heated on one side and cooled on the other by diffuse radiation and convection. The scattering and absorption coefficients were varied. The error in the diffusion approximation compared to the Milne-Eddington two-flux approximation was obtained as a function of scattering coefficient and absorption coefficient. The percent differences in interface temperatures and heat flux through the layer obtained using the Milne-Eddington two-flux and diffusion approximations are presented as functions of scattering coefficient and absorption coefficient. The largest errors occur for high scattering and low absorption, except for the back surface temperature of the plane gray layer, where the error is also larger at low scattering and low absorption. It is shown that the accuracy of the diffusion approximation can be improved for some scattering and absorption conditions if a reflectance obtained from a Kubelka-Munk type two-flux theory is used instead of a reflection obtained from the Fresnel equation. The Kubelka-Munk reflectance accounts for surface reflection and radiation scattered back by internal scattering sites, while the Fresnel reflection only accounts for surface reflections.

  14. Brain tumor classification using the diffusion tensor image segmentation (D-SEG) technique.

    PubMed

    Jones, Timothy L; Byrnes, Tiernan J; Yang, Guang; Howe, Franklyn A; Bell, B Anthony; Barrick, Thomas R

    2015-03-01

    There is an increasing demand for noninvasive brain tumor biomarkers to guide surgery and subsequent oncotherapy. We present a novel whole-brain diffusion tensor imaging (DTI) segmentation (D-SEG) to delineate tumor volumes of interest (VOIs) for subsequent classification of tumor type. D-SEG uses isotropic (p) and anisotropic (q) components of the diffusion tensor to segment regions with similar diffusion characteristics. DTI scans were acquired from 95 patients with low- and high-grade glioma, metastases, and meningioma and from 29 healthy subjects. D-SEG uses k-means clustering of the 2D (p,q) space to generate segments with different isotropic and anisotropic diffusion characteristics. Our results are visualized using a novel RGB color scheme incorporating p, q and T2-weighted information within each segment. The volumetric contribution of each segment to gray matter, white matter, and cerebrospinal fluid spaces was used to generate healthy tissue D-SEG spectra. Tumor VOIs were extracted using a semiautomated flood-filling technique and D-SEG spectra were computed within the VOI. Classification of tumor type using D-SEG spectra was performed using support vector machines. D-SEG was computationally fast and stable and delineated regions of healthy tissue from tumor and edema. D-SEG spectra were consistent for each tumor type, with constituent diffusion characteristics potentially reflecting regional differences in tissue microstructure. Support vector machines classified tumor type with an overall accuracy of 94.7%, providing better classification than previously reported. D-SEG presents a user-friendly, semiautomated biomarker that may provide a valuable adjunct in noninvasive brain tumor diagnosis and treatment planning. © The Author(s) 2014. Published by Oxford University Press on behalf of the Society for Neuro-Oncology.
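
    A minimal sketch of the D-SEG pipeline on synthetic data (scikit-learn assumed; p and q stand for the isotropic and anisotropic tensor components): k-means segmentation of the 2-D (p,q) space, a per-region "spectrum" of segment occupancies, and SVM classification of the spectra.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)
        pq = rng.normal(size=(10000, 2))       # synthetic (p, q) per voxel

        km = KMeans(n_clusters=16, n_init=10, random_state=0).fit(pq)

        def dseg_spectrum(segment_labels, k=16):
            # Fraction of a region's voxels falling in each (p, q) segment.
            return np.bincount(segment_labels, minlength=k) / len(segment_labels)

        # Synthetic tumor VOIs and labels, classified from their spectra:
        spectra = np.array([dseg_spectrum(km.labels_[rng.integers(0, 10000, 500)])
                            for _ in range(60)])
        tumor_type = rng.integers(0, 3, 60)
        clf = SVC(kernel="rbf").fit(spectra[:40], tumor_type[:40])
        pred = clf.predict(spectra[40:])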

  15. CORRELATED AND ZONAL ERRORS OF GLOBAL ASTROMETRIC MISSIONS: A SPHERICAL HARMONIC SOLUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makarov, V. V.; Dorland, B. N.; Gaume, R. A.

    We propose a computer-efficient and accurate method of estimating spatially correlated errors in astrometric positions, parallaxes, and proper motions obtained by space- and ground-based astrometry missions. In our method, the simulated observational equations are set up and solved for the coefficients of scalar and vector spherical harmonics representing the output errors rather than for individual objects in the output catalog. Both accidental and systematic correlated errors of astrometric parameters can be accurately estimated. The method is demonstrated on the example of the JMAPS mission, but can be used for other projects in space astrometry, such as SIM or JASMINE.

  16. Masking of errors in transmission of VAPC-coded speech

    NASA Technical Reports Server (NTRS)

    Cox, Neil B.; Froese, Edwin L.

    1990-01-01

    A subjective evaluation is provided of the bit error sensitivity of the message elements of a Vector Adaptive Predictive Coding (VAPC) speech coder, along with an indication of the amenability of these elements to a popular error masking strategy (cross-frame hold-over). As expected, a wide range of bit error sensitivity was observed. The most sensitive message components were the short term spectral information and the most significant bits of the pitch and gain indices. The cross-frame hold-over strategy was found to be useful for pitch and gain information, but it was not beneficial for the spectral information unless severe corruption had occurred.

  17. Correlated and Zonal Errors of Global Astrometric Missions: A Spherical Harmonic Solution

    NASA Astrophysics Data System (ADS)

    Makarov, V. V.; Dorland, B. N.; Gaume, R. A.; Hennessy, G. S.; Berghea, C. T.; Dudik, R. P.; Schmitt, H. R.

    2012-07-01

    We propose a computer-efficient and accurate method of estimating spatially correlated errors in astrometric positions, parallaxes, and proper motions obtained by space- and ground-based astrometry missions. In our method, the simulated observational equations are set up and solved for the coefficients of scalar and vector spherical harmonics representing the output errors rather than for individual objects in the output catalog. Both accidental and systematic correlated errors of astrometric parameters can be accurately estimated. The method is demonstrated on the example of the JMAPS mission, but can be used for other projects in space astrometry, such as SIM or JASMINE.
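
    A minimal sketch of the estimation idea for the scalar-harmonic part (the method also uses vector spherical harmonics for proper motions; names here are hypothetical): build a design matrix of real spherical harmonics at the catalog positions and solve the simulated observational equations for the coefficients rather than for individual objects.

        import numpy as np
        from scipy.special import sph_harm

        def design_matrix(lon, colat, lmax):
            # Real-valued scalar spherical harmonics at the catalog positions
            # (scipy convention: sph_harm(m, l, azimuth, colatitude)).
            cols = []
            for l in range(lmax + 1):
                for m in range(-l, l + 1):
                    Y = sph_harm(abs(m), l, lon, colat)
                    cols.append(np.sqrt(2) * Y.imag if m < 0
                                else Y.real if m == 0
                                else np.sqrt(2) * Y.real)
            return np.column_stack(cols)

        rng = np.random.default_rng(4)
        N = 2000
        lon = rng.uniform(0.0, 2.0 * np.pi, N)
        colat = np.arccos(rng.uniform(-1.0, 1.0, N))
        err = rng.normal(size=N)                    # simulated catalog errors

        A = design_matrix(lon, colat, lmax=5)
        coef, *_ = np.linalg.lstsq(A, err, rcond=None)   # solve for coefficients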

  18. Optical laboratory solution and error model simulation of a linear time-varying finite element equation

    NASA Technical Reports Server (NTRS)

    Taylor, B. K.; Casasent, D. P.

    1989-01-01

    The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.

  19. Network Adjustment of Orbit Errors in SAR Interferometry

    NASA Astrophysics Data System (ADS)

    Bahr, Hermann; Hanssen, Ramon

    2010-03-01

    Orbit errors can induce significant long-wavelength error signals in synthetic aperture radar (SAR) interferograms and thus bias estimates of wide-scale deformation phenomena. The presented approach aims to correct orbit errors in a preprocessing step to deformation analysis by modifying state vectors. Whereas absolute errors in the orbital trajectory are negligible, the influence of relative errors (baseline errors) is parametrised by their parallel and perpendicular components as linear functions of time. As the sensitivity of the interferometric phase is only significant with respect to the perpendicular baseline and the rate of change of the parallel baseline, the algorithm focuses on estimating updates to these two parameters. This is achieved by a least squares approach, where the unwrapped residual interferometric phase is observed and atmospheric contributions are considered to be stochastic with constant mean. To enhance reliability, baseline errors are adjusted in an overdetermined network of interferograms, yielding individual orbit corrections per acquisition.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naughton, M.J.; Bourke, W.; Browning, G.L.

    The convergence of spectral model numerical solutions of the global shallow-water equations is examined as a function of the time step and the spectral truncation. The contributions to the errors due to the spatial and temporal discretizations are separately identified and compared. Numerical convergence experiments are performed with the inviscid equations from smooth (Rossby-Haurwitz wave) and observed (R45 atmospheric analysis) initial conditions, and also with the diffusive shallow-water equations. Results are compared with the forced inviscid shallow-water equations case studied by Browning et al. Reduction of the time discretization error by the removal of fast waves from the solution using initialization is shown. The effects of forcing and diffusion on the convergence are discussed. Time truncation errors are found to dominate when a feature is large scale and well resolved; spatial truncation errors dominate for small-scale features and also for large scales after the small scales have affected them. Possible implications of these results for global atmospheric modeling are discussed. 31 refs., 14 figs., 4 tabs.

  1. Vectorization of copper complexes via biocompatible and biodegradable PLGA nanoparticles.

    PubMed

    Courant, T; Roullin, V G; Cadiou, C; Delavoie, F; Molinari, M; Andry, M C; Gafa, V; Chuburu, F

    2010-04-23

    A double emulsion-solvent diffusion approach with fully biocompatible materials was used to encapsulate copper complexes within biodegradable nanoparticles, for which the release kinetics profiles have highlighted their potential use for a prolonged circulating administration.

  2. Polarization radiation in the planetary atmosphere delimited by a heterogeneous diffusely reflecting surface

    NASA Technical Reports Server (NTRS)

    Strelkov, S. A.; Sushkevich, T. A.

    1983-01-01

    Spatial frequency characteristics (SFC) and the scattering functions were studied in the two cases of a uniform horizontal layer with an absolutely black bottom, and an isolated layer. The mathematical model for these examples describes the horizontal heterogeneities in a light field with regard to radiation polarization in a three-dimensional planar atmosphere, delimited by a heterogeneous surface with diffuse reflection. The perturbation method was used to obtain vector transfer equations which correspond to the linear and nonlinear systems of polarization radiation transfer. The SFCs of the nonlinear system satisfy a parametric set of one-dimensional boundary value problems for the vector transfer equation and are expressed through the SFCs of the linear approximation. As a consequence of the developed theory, formulas were obtained for the analytical calculation of albedo in solving the problem of the propagation of polarized radiation in the planetary atmosphere with a uniform Lambert bottom.

  3. Vectorization, threading, and cache-blocking considerations for hydrocodes on emerging architectures

    DOE PAGES

    Fung, J.; Aulwes, R. T.; Bement, M. T.; ...

    2015-07-14

    This work reports on considerations for improving computational performance in preparation for current and expected changes to computer architecture. The algorithms studied include increasingly complex prototypes for radiation hydrodynamics codes, such as gradient routines and diffusion matrix assembly (e.g., in [1-6]). The meshes considered for the algorithms are structured or unstructured meshes. The considerations applied for performance improvements are meant to be general in terms of architecture (not specifically graphical processing units (GPUs) or multi-core machines, for example) and include techniques for vectorization, threading, tiling, and cache blocking. Out of a survey of optimization techniques on applications such as diffusion and hydrodynamics, we make general recommendations with a view toward making these techniques conceptually accessible to the applications code developer. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.

  4. Adaptive Identification of Fluid-Dynamic Systems

    DTIC Science & Technology

    2001-06-14

    [Garbled excerpt. Figure 1 (recoverable caption: "Modeling of a SISO system using an adaptive filter") shows an unknown system in parallel with an adaptive filter driven by input u, with filter output y, desired output d, and error e. Recoverable equations: the cost function J = E[e^2(n)] (Eq. 12), where E[.] is the expectation operator and e(n) = d(n) - y(n) is the error between the desired output and the filter output; the input vector is U(n) = [u(n), u(n-1), ..., u(n-N+1)]^T.]

  5. Correction of spin diffusion during iterative automated NOE assignment

    NASA Astrophysics Data System (ADS)

    Linge, Jens P.; Habeck, Michael; Rieping, Wolfgang; Nilges, Michael

    2004-04-01

    Indirect magnetization transfer increases the observed nuclear Overhauser enhancement (NOE) between two protons in many cases, leading to an underestimation of target distances. Wider distance bounds are necessary to account for this error. However, this leads to a loss of information and may reduce the quality of the structures generated from the inter-proton distances. Although several methods for spin diffusion correction have been published, they are often not employed to derive distance restraints. This prompted us to write a user-friendly and CPU-efficient method to correct for spin diffusion that is fully integrated in our program ambiguous restraints for iterative assignment (ARIA). ARIA thus allows automated iterative NOE assignment and structure calculation with spin diffusion corrected distances. The method relies on numerical integration of the coupled differential equations which govern relaxation by matrix squaring and sparse matrix techniques. We derive a correction factor for the distance restraints from calculated NOE volumes and inter-proton distances. To evaluate the impact of our spin diffusion correction, we tested the new calibration process extensively with data from the Pleckstrin homology (PH) domain of Mus musculus β-spectrin. By comparing structures refined with and without spin diffusion correction, we show that spin diffusion corrected distance restraints give rise to structures of higher quality (notably fewer NOE violations and a more regular Ramachandran map). Furthermore, spin diffusion correction permits the use of tighter error bounds which improves the distinction between signal and noise in an automated NOE assignment scheme.
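
    A minimal sketch of the matrix-squaring propagation and the resulting restraint correction factor (a generic full-relaxation-matrix treatment; ARIA's actual calibration differs in detail):

        import numpy as np

        def noe_volumes(R, tau_m, n_square=20):
            # exp(-R * tau_m) via matrix squaring: one short first-order
            # step, then square the propagator n_square times.
            A = np.eye(R.shape[0]) - R * (tau_m / 2.0 ** n_square)
            for _ in range(n_square):
                A = A @ A
            return A

        def restraint_correction(R, i, j, tau_m):
            # Ratio of the full-matrix cross-peak volume to the isolated
            # two-spin prediction; used to rescale the distance restraint.
            V_full = noe_volumes(R, tau_m)[i, j]
            V_iso = noe_volumes(R[np.ix_([i, j], [i, j])], tau_m)[0, 1]
            return V_full / V_iso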

  6. Improving receiver performance of diffusive molecular communication with enzymes.

    PubMed

    Noel, Adam; Cheung, Karen C; Schober, Robert

    2014-03-01

    This paper studies the mitigation of intersymbol interference in a diffusive molecular communication system using enzymes that freely diffuse in the propagation environment. The enzymes form reaction intermediates with information molecules and then degrade them so that they cannot interfere with future transmissions. A lower bound expression on the expected number of molecules measured at the receiver is derived. A simple binary receiver detection scheme is proposed where the number of observed molecules is sampled at the time when the maximum number of molecules is expected. Insight is also provided into the selection of an appropriate bit interval. The expected bit error probability is derived as a function of the current and all previously transmitted bits. Simulation results show the accuracy of the bit error probability expression and the improvement in communication performance by having active enzymes present.

  7. A linear programming manual

    NASA Technical Reports Server (NTRS)

    Tuey, R. C.

    1972-01-01

    Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
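
    As a small illustration of the kind of computation such a manual covers (using SciPy's linprog as a modern stand-in for the era's solvers; the numbers are an arbitrary textbook-style example, and the dual-value attribute assumes a recent SciPy with the HiGHS backend), a primal LP and its dual values can be obtained as follows:

```python
from scipy.optimize import linprog

# minimize c @ x  subject to  A_ub @ x <= b_ub  and  x >= 0
c = [-3, -5]                        # i.e. maximize 3*x1 + 5*x2
A_ub = [[1, 0], [0, 2], [3, 2]]
b_ub = [4, 12, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
print(res.x)                        # primal solution: [2, 6]
print(res.ineqlin.marginals)        # dual values (shadow prices) of the constraints
```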

  8. Orientation diffusions.

    PubMed

    Perona, P

    1998-01-01

    Diffusions are useful for image processing and computer vision because they provide a convenient way of smoothing noisy data, analyzing images at multiple scales, and enhancing discontinuities. A number of diffusions of image brightness have been defined and studied so far; they may be applied to scalar and vector-valued quantities that are naturally associated with intervals of either the real line, or other flat manifolds. Some quantities of interest in computer vision, and other areas of engineering that deal with images, are defined on curved manifolds; typical examples are orientation and hue that are defined on the circle. Generalizing brightness diffusions to orientation is not straightforward, especially in the case where a discrete implementation is sought. An example of what may go wrong is presented. A method is proposed to define diffusions of orientation-like quantities. First a definition in the continuum is discussed, then a discrete orientation diffusion is proposed. The behavior of such diffusions is explored both analytically and experimentally. It is shown how such orientation diffusions contain a nonlinearity that is reminiscent of edge-process and anisotropic diffusion. A number of open questions are proposed at the end.
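
    A minimal sketch of the idea (a plausible discrete scheme in the spirit of the paper, not necessarily its exact update rule): coupling neighbouring angles through the sine of their differences respects the circular topology where naive linear averaging fails at the 2π wraparound.

```python
import numpy as np

def orientation_diffusion_1d(theta, steps=100, lam=0.1):
    """Diffuse an angle-valued 1-D periodic signal on the circle.

    Neighbours are coupled through sin(theta_j - theta_i), which respects
    the circular topology where naive linear averaging fails at the
    2*pi wraparound.
    """
    theta = theta.copy()
    for _ in range(steps):
        left, right = np.roll(theta, 1), np.roll(theta, -1)
        theta = theta + lam * (np.sin(left - theta) + np.sin(right - theta))
    return np.mod(theta, 2 * np.pi)

# A noisy orientation step: values near 0 and near 2*pi are the same
# orientation, and the sinusoidal coupling smooths across the wraparound.
noisy = np.mod(np.r_[np.full(50, 0.1), np.full(50, 2 * np.pi - 0.1)]
               + 0.2 * np.random.default_rng(0).standard_normal(100), 2 * np.pi)
print(orientation_diffusion_1d(noisy)[:5])
```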

  9. The exit-time problem for a Markov jump process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burch, N.; D'Elia, Marta; Lehoucq, Richard B.

    2014-12-15

    The purpose of our paper is to consider the exit-time problem for a finite-range Markov jump process, i.e., one where the distance the particle can jump is bounded independent of its location. Such jump diffusions are expedient models for anomalous transport exhibiting super-diffusion or nonstandard normal diffusion. We refer to the associated deterministic equation as a volume-constrained nonlocal diffusion equation. The volume constraint is the nonlocal analogue of a boundary condition necessary to demonstrate that the nonlocal diffusion equation is well-posed and is consistent with the jump process. A critical aspect of the analysis is a variational formulation and a recently developed nonlocal vector calculus. Furthermore, this calculus allows us to pose nonlocal backward and forward Kolmogorov equations, the former equation granting the various moments of the exit-time distribution.

  10. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H. Lee; Ganti, Anand; Resnick, David R

    2013-10-22

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  11. Design, decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-06-17

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  12. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-11-18

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
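
    A much-simplified sketch of the populate-and-filter idea over GF(2) (the patents above cover general GF(q) and a more elaborate iterative filter; restricting candidates to odd-weight columns is the classic Hsiao-style sufficient condition that keeps any three columns linearly independent, giving a distance-4 SECDED code):

```python
import itertools
import numpy as np

def populate_check_matrix(r, n):
    """Populate an r x n check matrix of a distance-4 (SECDED) code over GF(2).

    Filter step: keep only odd-weight candidate columns, so any three
    columns are linearly independent (the sum of two odd-weight vectors
    has even weight, hence cannot equal a third odd-weight column).
    Selection step: prefer low-weight columns, which reduces the XOR
    logic needed to implement the check matrix in circuitry.
    """
    candidates = [v for v in itertools.product((0, 1), repeat=r)
                  if sum(v) % 2 == 1]          # filter operation
    if len(candidates) < n:
        raise ValueError("not enough vectors left to populate the columns")
    candidates.sort(key=sum)                   # cheapest columns first
    return np.array(candidates[:n]).T

H = populate_check_matrix(r=6, n=22)           # e.g. 16 data bits + 6 check bits
print(H.shape, H.sum(axis=0))                  # all column weights are odd
```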

  13. Simulation of an epidemic model with vector transmission

    NASA Astrophysics Data System (ADS)

    Dickman, Adriana G.; Dickman, Ronald

    2015-03-01

    We study a lattice model for vector-mediated transmission of a disease in a population consisting of two species, A and B, which contract the disease from one another. Individuals of species A are sedentary, while those of species B (the vector) diffuse in space. Examples of such diseases are malaria, dengue fever, and Pierce's disease in vineyards. The model exhibits a phase transition between an absorbing (infection free) phase and an active one as parameters such as infection rates and vector density are varied. We study the static and dynamic critical behavior of the model using initial spreading, initial decay, and quasistationary simulations. Simulations are checked against mean-field analysis. Although phase transitions to an absorbing state fall generically in the directed percolation universality class, this appears not to be the case for the present model.

  14. Changes in the electric dipole vector of human serum albumin due to complexing with fatty acids.

    PubMed Central

    Scheider, W; Dintzis, H M; Oncley, J L

    1976-01-01

    The magnitude of the electric dipole vector of human serum albumin, as measured by the dielectric increment of the isoionic solution, is found to be a sensitive, monotonic indicator of the number of moles (up to at least 5) of long chain fatty acid complexed. The sensitivity is about three times as great as it is in bovine albumin. New methods of analysis of the frequency dispersion of the dielectric constant were developed to ascertain if molecular shape changes also accompany the complexing with fatty acid. Direct two-component rotary diffusion constant analysis is found to be too strongly affected by cross modulation between small systematic errors and physically significant data components to be a reliable measure of structural modification. Multicomponent relaxation profiles are more useful as recognition patterns for structural comparisons, but the equations involved are ill-conditioned and solutions based on standard least-squares regression contain mathematical artifacts which mask the physically significant spectrum. By constraining the solution to non-negative coefficients, the magnitude of the artifacts is reduced to well below the magnitudes of the spectral components. Profiles calculated in this way show no evidence of significant dipole direction or molecular shape change as the albumin is complexed with 1 mol of fatty acid. In these experiments albumin was defatted by incubation with adipose tissue at physiological pH, which avoids passing the protein through the pH of the N-F transition usually required in defatting. Addition of fatty acid from solution in small amounts of ethanol appears to form a complex indistinguishable from the "native" complex. PMID:6087

  15. Modelling and validation of diffuse reflectance of the adult human head for fNIRS: scalp sub-layers definition

    NASA Astrophysics Data System (ADS)

    Herrera-Vega, Javier; Montero-Hernández, Samuel; Tachtsidis, Ilias; Treviño-Palacios, Carlos G.; Orihuela-Espina, Felipe

    2017-11-01

    Accurate estimation of brain haemodynamics parameters such as cerebral blood flow and volume, as well as oxygen consumption, i.e. the metabolic rate of oxygen, with functional near infrared spectroscopy (fNIRS) requires precise characterization of light propagation through head tissues. An anatomically realistic forward model of the human adult head with unprecedented detailed specification of the 5 scalp sublayers to account for blood irrigation in the connective tissue layer is introduced. The full model consists of 9 layers, accounts for optical properties ranging from 750 nm to 950 nm and has a voxel size of 0.5 mm. The whole model is validated by comparing the predicted remitted spectra, using Monte Carlo simulations of radiation propagation with 10^8 photons, against continuous wave (CW) broadband fNIRS experimental data. As the true oxy- and deoxy-hemoglobin concentrations during acquisition are unknown, a genetic algorithm searched for the vector of parameters that generates a modelled spectrum that optimally fits the experimental spectrum. Differences between experimental and model-predicted spectra were quantified using the root mean square error (RMSE). RMSE was 0.071 +/- 0.004, 0.108 +/- 0.018 and 0.235 +/- 0.015 at 1, 2 and 3 cm interoptode distance respectively. The parameter vector of absolute concentrations of haemoglobin species in scalp and cortex retrieved with the genetic algorithm was within histologically plausible ranges. The new model's capability to estimate the contribution of the scalp blood flow shall permit incorporating this information into the regularization of the inverse problem for a cleaner reconstruction of brain hemodynamics.

  16. Robust support vector regression networks for function approximation with outliers.

    PubMed

    Chuang, Chen-Chia; Su, Shun-Feng; Jeng, Jin-Tsong; Hsiao, Chih-Ching

    2002-01-01

    Support vector regression (SVR) employs the support vector machine (SVM) to tackle problems of function approximation and regression estimation. SVR has been shown to have good robustness against noise. When the parameters used in SVR are improperly selected, however, overfitting phenomena may still occur, and the selection of the various parameters is not straightforward. Besides, in SVR, outliers may also possibly be taken as support vectors, and such an inclusion of outliers in the support vectors may lead to serious overfitting. In this paper, a novel regression approach, termed the robust support vector regression (RSVR) network, is proposed to enhance the robustness of SVR. In the approach, traditional robust learning approaches are employed to improve the learning performance for any selected parameters. The simulation results show that RSVR improves the performance of the learned systems in all cases. Moreover, even when training lasted for a long period, the testing errors did not go up; in other words, the overfitting phenomenon is indeed suppressed.

  17. Pre-coding assisted generation of a frequency quadrupled optical vector D-band millimeter wave with one Mach-Zehnder modulator.

    PubMed

    Zhou, Wen; Li, Xinying; Yu, Jianjun

    2017-10-30

    We propose QPSK millimeter-wave (mm-wave) vector signal generation for D-band based on balanced precoding-assisted photonic frequency quadrupling technology employing a single intensity modulator without an optical filter. The intensity MZM is driven by a balanced pre-coded 37-GHz QPSK RF signal. The modulated optical subcarriers are sent directly into a single-ended photodiode to generate a 148-GHz QPSK vector signal. We experimentally demonstrate 1-Gbaud 148-GHz QPSK mm-wave vector signal generation and investigate the bit-error-rate (BER) performance of the vector signals at 148 GHz. The experimental results show that a BER as low as 1.448 × 10^-3 can be achieved when the optical power into the photodiode is 8.8 dBm. To the best of our knowledge, this is the first realization of frequency-quadrupled vector mm-wave signal generation at D-band based on only one MZM without an optical filter.

  18. Mathematical Modeling of Herpes Simplex Virus Distribution in Solid Tumors: Implications for Cancer Gene Therapy

    PubMed Central

    Mok, Wilson; Stylianopoulos, Triantafyllos; Boucher, Yves; Jain, Rakesh K.

    2010-01-01

    Purpose Although oncolytic viral vectors show promise for the treatment of various cancers, ineffective initial distribution and propagation throughout the tumor mass often limit the therapeutic response. A mathematical model is developed to describe the spread of herpes simplex virus from the initial injection site. Experimental Design The tumor is modeled as a sphere of radius R. The model incorporates reversible binding, interstitial diffusion, viral degradation, internalization, and physiologic parameters. Three species are considered as follows: free interstitial virus, virus bound to cell surfaces, and internalized virus. Results This analysis reveals that both rapid binding and internalization as well as hindered diffusion contain the virus to the initial injection volume, with negligible spread to the surrounding tissue. Unfortunately, increasing the dose to saturate receptors and promote diffusion throughout the tumor is not a viable option: the concentration necessary would likely compromise safety. However, targeted modifications to the virus that decrease the binding affinity have the potential to increase the number of infected cells by 1.5-fold or more. An increase in the effective diffusion coefficient can result in similar gains. Conclusions This analysis suggests criteria by which the potential response of a tumor to oncolytic herpes simplex virus therapy can be assessed. Furthermore, it reveals the potential of modifications to the vector delivery method, physicochemical properties of the virus, and tumor extracellular matrix composition to enhance efficacy. PMID:19318482
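
    Dropping the spatial diffusion term, the three-species kinetics reduce to a small ODE system; the sketch below uses hypothetical rate constants purely for illustration (the paper's full model additionally includes interstitial diffusion in the spherical tumor and fitted physiologic parameters):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical rate constants, chosen only to make the dynamics visible.
k_on, k_off = 1e-2, 1e-3      # binding / unbinding to cell-surface receptors
k_int, k_deg = 5e-3, 1e-4     # internalization / free-virus degradation
R_tot = 1.0                   # total receptor density (scaled)

def rhs(t, y):
    F, B, I = y               # free, bound, internalized virus
    bind = k_on * F * (R_tot - B) - k_off * B
    return [-bind - k_deg * F,     # free interstitial virus
            bind - k_int * B,      # virus bound to cell surfaces
            k_int * B]             # internalized virus

sol = solve_ivp(rhs, (0.0, 3600.0), [1.0, 0.0, 0.0])
print(sol.y[:, -1])           # rapid binding/internalization depletes free virus
```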

  19. Improved backward ray tracing with stochastic sampling

    NASA Astrophysics Data System (ADS)

    Ryu, Seung Taek; Yoon, Kyung-Hyun

    1999-03-01

    This paper presents a new technique that enhances diffuse interreflection within the framework of backward ray tracing. In this research, we model the diffuse rays under the following conditions. First, since reflection from diffuse surfaces occurs in all directions, it is impossible to trace all of the reflected rays; we therefore confine the diffuse rays by sampling a spherical angle around the normal vector out of the reflected rays. Second, the distance traveled by reflected energy from a diffuse surface depends on the object's properties and is comparatively short. Considering that rays created on diffuse surfaces affect a relatively small area, it is very inefficient to trace all of the sampled diffuse rays; we therefore set a fixed critical distance and ignore all rays beyond it. As the improved backward ray tracing can model illumination effects such as color bleeding, it can replace the radiosity algorithm in limited environments.
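
    A minimal sketch of the confined diffuse-ray sampling described above (directions drawn uniformly in solid angle within a cone about the surface normal; the cone half-angle, ray count, and critical distance are arbitrary illustrative choices, not the paper's values):

```python
import numpy as np

CRITICAL_DISTANCE = 2.0   # scene-dependent cutoff beyond which rays are dropped

def sample_diffuse_rays(normal, n_rays=32, max_angle=np.pi / 3, seed=0):
    """Sample diffuse reflection directions within a cone about the normal.

    Directions are uniform in solid angle inside the cone of half-angle
    max_angle; contributions are later ignored past CRITICAL_DISTANCE.
    """
    rng = np.random.default_rng(seed)
    # Orthonormal basis (t1, t2, normal) for the local frame.
    a = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(normal, a); t1 /= np.linalg.norm(t1)
    t2 = np.cross(normal, t1)
    cos_t = 1.0 - rng.random(n_rays) * (1.0 - np.cos(max_angle))  # uniform in solid angle
    sin_t = np.sqrt(1.0 - cos_t**2)
    phi = 2.0 * np.pi * rng.random(n_rays)
    return (np.outer(cos_t, normal)
            + np.outer(sin_t * np.cos(phi), t1)
            + np.outer(sin_t * np.sin(phi), t2))

rays = sample_diffuse_rays(np.array([0.0, 0.0, 1.0]))
print(rays.shape)   # (32, 3) unit direction vectors
```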

  20. Diffusive dynamics of nanoparticles in ultra-confined media

    DOE PAGES

    Jacob, Jack Deodato; Conrad, Jacinta; Krishnamoorti, Ramanan; ...

    2015-08-10

    Differential dynamic microscopy (DDM) was used to investigate the diffusive dynamics of nanoparticles of diameter 200-400 nm that were strongly confined in a periodic square array of cylindrical nanoposts. The minimum distance between posts was 1.3-5 times the diameter of the nanoparticles. The image structure functions obtained from the DDM analysis were isotropic and could be fit by a stretched exponential function. The relaxation time scaled diffusively across the range of wave vectors studied, and the corresponding scalar diffusivities decreased monotonically with increased confinement. The decrease in diffusivity could be described by models for hindered diffusion that accounted for steric restrictions and hydrodynamic interactions. The stretching exponent decreased linearly as the nanoparticles were increasingly confined by the posts. Altogether, these results are consistent with a picture in which strongly confined nanoparticles experience a heterogeneous spatial environment arising from hydrodynamics and volume exclusion on time scales comparable to cage escape, leading to multiple relaxation processes and Fickian but non-Gaussian diffusive dynamics.
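
    In DDM analyses of this kind, the image structure function at each wave vector is commonly fit with a stretched-exponential relaxation; below is a minimal sketch with synthetic data standing in for a measurement (the functional form is the generic DDM one, not necessarily the paper's exact parameterization):

```python
import numpy as np
from scipy.optimize import curve_fit

def isf_model(dt, A, tau, beta, B):
    """Image structure function with a stretched-exponential relaxation."""
    return A * (1.0 - np.exp(-(dt / tau) ** beta)) + B

# Synthetic data standing in for the measured structure function at one q.
dt = np.logspace(-2, 2, 50)
rng = np.random.default_rng(1)
D_q = isf_model(dt, 1.0, 2.0, 0.8, 0.05) + 0.01 * rng.standard_normal(dt.size)

(A, tau, beta, B), _ = curve_fit(isf_model, dt, D_q, p0=[1.0, 1.0, 1.0, 0.0])
print(tau, beta)   # tau(q) ~ 1/(D q^2) signals diffusive scaling;
                   # beta < 1 quantifies heterogeneity of the environment
```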

  1. Nucleon form factors from quenched lattice QCD with domain wall fermions

    NASA Astrophysics Data System (ADS)

    Sasaki, Shoichi; Yamazaki, Takeshi

    2008-07-01

    We present a quenched lattice calculation of the weak nucleon form factors: vector [F_V(q²)], induced tensor [F_T(q²)], axial vector [F_A(q²)] and induced pseudoscalar [F_P(q²)] form factors. Our simulations are performed on three different lattice sizes L³×T = 24³×32, 16³×32, and 12³×32 with a lattice cutoff of a⁻¹ ≈ 1.3 GeV and light quark masses down to about 1/4 the strange quark mass (m_π ≈ 390 MeV) using a combination of the DBW2 gauge action and domain wall fermions. The physical volume of our largest lattice is about (3.6 fm)³, where the finite volume effects on form factors become negligible and the lower momentum transfers (q² ≈ 0.1 GeV²) are accessible. The q² dependences of form factors in the low q² region are examined. It is found that the vector, induced tensor, and axial-vector form factors are well described by the dipole form, while the induced pseudoscalar form factor is consistent with pion-pole dominance. We obtain the ratio of axial to vector coupling g_A/g_V = F_A(0)/F_V(0) = 1.219(38) and the pseudoscalar coupling g_P = m_μ F_P(0.88 m_μ²) = 8.15(54), where the errors are statistical only. These values agree with experimental values from neutron β decay and muon capture on the proton. However, the root mean-squared radii of the vector, induced tensor, and axial vector underestimate the known experimental values by about 20%. We also calculate the pseudoscalar nucleon matrix element in order to verify the axial Ward-Takahashi identity in terms of the nucleon matrix elements, which may be called the generalized Goldberger-Treiman relation.

  2. Optimal four-impulse rendezvous between coplanar elliptical orbits

    NASA Astrophysics Data System (ADS)

    Wang, JianXia; Baoyin, HeXi; Li, JunFeng; Sun, FuChun

    2011-04-01

    Rendezvous in circular or near-circular orbits has been investigated in great detail, while rendezvous in elliptical orbits of arbitrary eccentricity is not sufficiently explored. Among the various optimization methods proposed for fuel-optimal orbital rendezvous, Lawden's primer vector theory is favored by many researchers for its clear physical concept and simplicity of solution. Prussing applied the primer vector optimization theory to minimum-fuel, multiple-impulse, time-fixed orbital rendezvous in a near-circular orbit with great success. Extending Prussing's work, this paper employs the primer vector theory to study trajectory optimization problems of elliptical-orbit rendezvous with arbitrary eccentricity. Based on linearized equations of relative motion on an elliptical reference orbit (referred to as T-H equations), the primer vector theory is used to deal with time-fixed multiple-impulse optimal rendezvous between two coplanar, coaxial elliptical orbits with arbitrarily large eccentricity. A parameter adjustment method is developed for the primer vector to satisfy Lawden's necessary condition for the optimal solution. Finally, the optimal multiple-impulse rendezvous solution, including the times, directions and magnitudes of the impulses, is obtained by solving the two-point boundary value problem. The rendezvous error of the linearized equations is also analyzed. The simulations confirm that the rendezvous error is small for small eccentricities and large for higher eccentricities. For better rendezvous accuracy in high-eccentricity orbits, a combined method of a multiplier penalty function with the simplex search method is used for local optimization. The simplex search method is sensitive to the initial values of the optimization variables, but the simulation results show that, with initial values from the primer vector theory, the local optimization algorithm improves the rendezvous accuracy effectively and converges quickly, because the optimal results obtained by the primer vector theory are already very close to the actual optimal solution. If the initial values are taken randomly, it is difficult to converge to the optimal solution.

  3. Frequency-domain optical absorption spectroscopy of finite tissue volumes using diffusion theory.

    PubMed

    Pogue, B W; Patterson, M S

    1994-07-01

    The goal of frequency-domain optical absorption spectroscopy is the non-invasive determination of the absorption coefficient of a specific tissue volume. Since this allows the concentration of endogenous and exogenous chromophores to be calculated, there is considerable potential for clinical application. The technique relies on the measurement of the phase and modulation of light, which is diffusely reflected or transmitted by the tissue when it is illuminated by an intensity-modulated source. A model of light propagation must then be used to deduce the absorption coefficient. For simplicity, it is usual to assume the tissue is either infinite in extent (for transmission measurements) or semi-infinite (for reflectance measurements). The goal of this paper is to examine the errors introduced by these assumptions when measurements are actually performed on finite volumes. Diffusion-theory calculations and experimental measurements were performed for slabs, cylinders and spheres with optical properties characteristic of soft tissues in the near infrared. The error in absorption coefficient is presented as a function of object size as a guideline to when the simple models may be used. For transmission measurements, the error is almost independent of the true absorption coefficient, which allows absolute changes in absorption to be measured accurately. The implications of these errors in absorption coefficient for two clinical problems--quantitation of an exogenous photosensitizer and measurement of haemoglobin oxygenation--are presented and discussed.

  4. Characterization of a 300-GHz Transmission System for Digital Communications

    NASA Astrophysics Data System (ADS)

    Hudlička, Martin; Salhi, Mohammed; Kleine-Ostmann, Thomas; Schrader, Thorsten

    2017-08-01

    The paper presents the characterization of a 300-GHz transmission system for modern digital communications. The quality of the modulated signal at the output of the system (error vector magnitude, EVM) is measured using a vector signal analyzer. A method using a digital real-time oscilloscope and consecutive mathematical processing in a computer is shown for the analysis of signals with bandwidths exceeding those of state-of-the-art vector signal analyzers. The uncertainty of the EVM measured using the real-time oscilloscope is also analyzed. The behaviour of the 300-GHz transmission system is studied with respect to various modulation schemes and different signal symbol rates.
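
    The EVM figure of merit itself is simple to compute once the symbols are demodulated; a minimal sketch follows (RMS-reference normalization is assumed here, though instrument conventions vary, and the noisy QPSK stream is synthetic, standing in for an oscilloscope capture):

```python
import numpy as np

def evm_percent(measured, reference):
    """RMS error vector magnitude, normalized by RMS reference power."""
    err = measured - reference
    return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(reference) ** 2))

# QPSK example: additive noise standing in for the captured waveform.
rng = np.random.default_rng(0)
ideal = (rng.choice([-1, 1], 1000) + 1j * rng.choice([-1, 1], 1000)) / np.sqrt(2)
rx = ideal + 0.05 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
print(f"EVM = {evm_percent(rx, ideal):.2f} %")   # about 7 % for this noise level
```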

  5. More About Vector Adaptive/Predictive Coding Of Speech

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas C.; Gersho, Allen

    1992-01-01

    Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.

  6. Extrapolation methods for vector sequences

    NASA Technical Reports Server (NTRS)

    Smith, David A.; Ford, William F.; Sidi, Avram

    1987-01-01

    This paper derives, describes, and compares five extrapolation methods for accelerating convergence of vector sequences or transforming divergent vector sequences to convergent ones. These methods are the scalar epsilon algorithm (SEA), vector epsilon algorithm (VEA), topological epsilon algorithm (TEA), minimal polynomial extrapolation (MPE), and reduced rank extrapolation (RRE). MPE and RRE are first derived and proven to give the exact solution for the right 'essential degree' k. Then, Brezinski's (1975) generalization of the Shanks-Schmidt transform is presented; the generalized form leads from systems of equations to TEA. The necessary connections are then made with SEA and VEA. The algorithms are extended to the nonlinear case by cycling, the error analysis for MPE and VEA is sketched, and the theoretical support for quadratic convergence is discussed. Strategies for practical implementation of the methods are considered.
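
    Of the five methods, MPE is perhaps the easiest to sketch: form the first differences of the iterates, solve a small least-squares problem for the combination coefficients, and take the corresponding weighted average of the iterates. A minimal illustration on a linear fixed-point iteration (where MPE with enough iterates recovers the limit essentially exactly; the test matrix and dimensions are arbitrary):

```python
import numpy as np

def mpe(X):
    """Minimal polynomial extrapolation of a vector sequence.

    X has shape (k+2, n), holding iterates x_0 ... x_{k+1}.
    """
    U = np.diff(X, axis=0).T                       # differences u_j = x_{j+1} - x_j
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    c = np.append(c, 1.0)                          # set c_k = 1 by convention
    gamma = c / c.sum()                            # normalized combination weights
    return gamma @ X[:-1]                          # extrapolated limit estimate

# Linear fixed-point iteration x <- A x + b with limit (I - A)^{-1} b.
rng = np.random.default_rng(2)
A = 0.4 * rng.random((5, 5)) / 5
b = rng.random(5)
x = np.zeros(5); X = [x]
for _ in range(6):
    x = A @ x + b; X.append(x)
exact = np.linalg.solve(np.eye(5) - A, b)
print(np.linalg.norm(mpe(np.array(X)) - exact))    # near machine precision
```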

  7. Quantization of high dimensional Gaussian vector using permutation modulation with application to information reconciliation in continuous variable QKD

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar

    This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The difficulty is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal-to-Noise Ratio (SNR) and exacerbating the problem. Quantization over higher dimensions is advantageous since it allows for fractional bit-per-sample accuracy, which may be needed at very low SNR conditions whereby the achievable secret key rate is significantly less than one bit per sample. In this paper, we propose to use Permutation Modulation (PM) for quantization of Gaussian vectors potentially containing thousands of samples. PM is applied to the magnitudes of the Gaussian samples, and we explore the dependence of the sign error probability on the magnitude of the samples. At very low SNR, we may transmit the entire label of the PM code from Bob to Alice in Reverse Reconciliation (RR) over a public channel. The side information extracted from this label can then be used by Alice to characterize the sign error probability of her individual samples. Forward Error Correction (FEC) coding can be used by Bob on each subset of samples with similar sign error probability to aid Alice in error correction. This can be done for different subsets of samples with similar sign error probabilities, leading to an Unequal Error Protection (UEP) coding paradigm.

  8. Error Analysis of Deep Sequencing of Phage Libraries: Peptides Censored in Sequencing

    PubMed Central

    Matochko, Wadim L.; Derda, Ratmir

    2013-01-01

    Next-generation sequencing techniques empower selection of ligands from phage-display libraries because they can detect low-abundance clones and quantify changes in the copy numbers of clones without excessive selection rounds. Identification of errors in deep sequencing data is the most critical step in this process because these techniques have error rates >1%. Mechanisms that yield errors in Illumina and other techniques have been proposed, but no reports to date describe error analysis in phage libraries. Our paper focuses on error analysis of 7-mer peptide libraries sequenced by the Illumina method. The low theoretical complexity of this phage library, as compared to the complexity of long genetic reads and genomes, allowed us to describe this library using a convenient linear vector and operator framework. We describe a phage library as an N × 1 frequency vector n = ||n_i||, where n_i is the copy number of the i-th sequence and N is the theoretical diversity, that is, the total number of all possible sequences. Any manipulation of the library is an operator acting on n. Selection, amplification, or sequencing can be described as a product of an N × N matrix and a stochastic sampling operator (S_a). The latter is a random diagonal matrix that describes sampling of a library. In this paper, we focus on the properties of S_a and use them to define the sequencing operator (Seq). Sequencing without any bias and errors is Seq = S_a I_N, where I_N is the N × N identity matrix. Any bias in sequencing changes I_N to a non-identity matrix. We identified a diagonal censorship matrix (CEN), which describes elimination, or statistically significant downsampling, of specific reads during the sequencing process. PMID:24416071

  9. Biodiversity can help prevent malaria outbreaks in tropical forests.

    PubMed

    Laporta, Gabriel Zorello; Lopez de Prado, Paulo Inácio Knegt; Kraenkel, Roberto André; Coutinho, Renato Mendes; Sallum, Maria Anice Mureb

    2013-01-01

    Plasmodium vivax is a widely distributed, neglected parasite that can cause malaria and death in tropical areas. It is associated with an estimated 80-300 million cases of malaria worldwide. Brazilian tropical rain forests encompass host- and vector-rich communities, in which two hypothetical mechanisms could play a role in the dynamics of malaria transmission. The first mechanism is the dilution effect caused by presence of wild warm-blooded animals, which can act as dead-end hosts to Plasmodium parasites. The second is diffuse mosquito vector competition, in which vector and non-vector mosquito species compete for blood feeding upon a defensive host. Considering that the World Health Organization Malaria Eradication Research Agenda calls for novel strategies to eliminate malaria transmission locally, we used mathematical modeling to assess those two mechanisms in a pristine tropical rain forest, where the primary vector is present but malaria is absent. The Ross-Macdonald model and a biodiversity-oriented model were parameterized using newly collected data and data from the literature. The basic reproduction number (R0) estimated employing the Ross-Macdonald model indicated that malaria cases occur in the study location. However, no malaria cases have been reported since 1980. In contrast, the biodiversity-oriented model corroborated the absence of malaria transmission. In addition, the diffuse competition mechanism was negatively correlated with the risk of malaria transmission, which suggests a protective effect provided by the forest ecosystem. There is a non-linear, unimodal correlation between the mechanism of dead-end transmission of parasites and the risk of malaria transmission, suggesting a protective effect only under certain circumstances (e.g., a high abundance of wild warm-blooded animals). To achieve biological conservation and to eliminate Plasmodium parasites in human populations, the World Health Organization Malaria Eradication Research Agenda should take biodiversity issues into consideration.

  10. Effect of volume-scattering function on the errors induced when polarization is neglected in radiance calculations in an atmosphere-ocean system.

    PubMed

    Adams, C N; Kattawar, G W

    1993-08-20

    We have developed a Monte Carlo program that is capable of calculating both the scalar and the Stokes vector radiances in an atmosphere-ocean system in a single computer run. The correlated sampling technique is used to compute radiance distributions for both the scalar and the Stokes vector formulations simultaneously, thus permitting a direct comparison of the errors induced. We show the effect of the volume-scattering phase function on the errors in radiance calculations when one neglects polarization effects. The model used in this study assumes a conservative Rayleigh-scattering atmosphere above a flat ocean. Within the ocean, the volume-scattering function (the first element in the Mueller matrix) is varied according to both a Henyey-Greenstein phase function, with asymmetry factors G = 0.0, 0.5, and 0.9, and also to a Rayleigh-scattering phase function. The remainder of the reduced Mueller matrix for the ocean is taken to be that for Rayleigh scattering, which is consistent with ocean water measurement.

  11. Multivariate Time Series Forecasting of Crude Palm Oil Price Using Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Kanchymalay, Kasturi; Salim, N.; Sukprasert, Anupong; Krishnan, Ramesh; Raba'ah Hashim, Ummi

    2017-08-01

    The aim of this paper was to study the correlation between the crude palm oil (CPO) price, selected vegetable oil prices (such as soybean oil, coconut oil, olive oil, rapeseed oil and sunflower oil), the crude oil price and the monthly exchange rate. Comparative analysis was then performed on CPO price forecasting results using machine learning techniques. Monthly CPO prices, selected vegetable oil prices, crude oil prices and monthly exchange rate data from January 1987 to February 2017 were utilized. Preliminary analysis showed a positive and high correlation between the CPO price and the soybean oil price, and also between the CPO price and the crude oil price. Experiments were conducted using multi-layer perceptron, support vector regression and Holt-Winters exponential smoothing techniques. The results were assessed using the criteria of root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and directional accuracy (DA). Among these three techniques, support vector regression (SVR) with the sequential minimal optimization (SMO) algorithm showed relatively better results compared to the multi-layer perceptron and Holt-Winters exponential smoothing methods.
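
    A rough sketch of the SVR forecasting step (using scikit-learn's libsvm-based SVR rather than the SMO implementation the paper evaluated; the synthetic placeholder data stand in for the oil-price series, and the hyperparameters are arbitrary):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_percentage_error

# Placeholder synthetic data: 6 monthly predictors (vegetable oil prices,
# crude oil price, exchange rate) and a CPO-price-like target.
rng = np.random.default_rng(0)
X = rng.random((360, 6))
y = 3.0 + 2.0 * X[:, 0] + 1.5 * X[:, 3] + 0.1 * rng.standard_normal(360)

split = 300   # preserve temporal order: train on the past, test on the future
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("MAPE:", mean_absolute_percentage_error(y[split:], pred))
```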

  12. Evaluation of the navigation performance of shipboard-VTOL-landing guidance systems

    NASA Technical Reports Server (NTRS)

    Mcgee, L. A.; Paulk, C. H., Jr.; Steck, S. A.; Schmidt, S. F.; Merz, A. W.

    1979-01-01

    The objective of this study was to explore the performance of a VTOL aircraft landing approach navigation system that receives data (1) from either a microwave scanning beam (MSB) or a radar-transponder (R-T) landing guidance system, and (2) information data-linked from an aviation facility ship. State-of-the-art low-cost-aided inertial techniques and variable gain filters were used in the assumed navigation system. Compensation for ship motion was accomplished by a landing pad deviation vector concept that is a measure of the landing pad's deviation from its calm sea location. The results show that the landing guidance concepts were successful in meeting all of the current Navy navigation error specifications, provided that vector magnitude of the allowable error, rather than the error in each axis, is a permissible interpretation of acceptable performance. The success of these concepts, however, is strongly dependent on the distance measuring equipment bias. In addition, the 'best possible' closed-loop tracking performance achievable with the assumed point-mass VTOL aircraft guidance concept is demonstrated.

  13. Five-way smoking status classification using text hot-spot identification and error-correcting output codes.

    PubMed

    Cohen, Aaron M

    2008-01-01

    We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.

  14. Heavy and Light Quarks with Lattice Chiral Fermions

    NASA Astrophysics Data System (ADS)

    Liu, K. F.; Dong, S. J.

    The feasibility of using lattice chiral fermions, which are free of O(a) errors, for both the heavy and light quarks is examined. The fact that the effective quark propagators in these fermions have the same form as that in the continuum, with the quark mass being only an additive parameter to a chirally symmetric anti-Hermitian Dirac operator, is highlighted. This implies that there is no distinction between the heavy and light quarks and no mass-dependent tuning of the action or operators as long as the discretization error O(m²a²) is negligible. Using the overlap fermion, we find that the O(m²a²) (and O(ma²)) errors in the dispersion relations of the pseudoscalar and vector mesons and in the renormalization of the axial-vector current and scalar density are small. This suggests that the applicable range of ma may be extended to ~0.56 with only 5% error, which is a factor of ~2.4 larger than the corresponding range of the improved Wilson action. We show that the generalized Gell-Mann-Oakes-Renner relation with unequal masses can be utilized to determine the finite ma corrections in the renormalization of the matrix elements for the heavy-light decay constants and semileptonic decay constants of the B/D meson.

  15. Conical Probe Calibration and Wind Tunnel Data Analysis of the Channeled Centerbody Inlet Experiment

    NASA Technical Reports Server (NTRS)

    Truong, Samson Siu

    2011-01-01

    For a multi-hole test probe undergoing wind tunnel tests, the resulting data need to be analyzed for any significant trends. These trends include relating the pressure distributions, the geometric orientation, and the local velocity vector to one another. However, experimental runs always involve some sort of error; in this case, it is the misalignment bias angles resulting from the distortion associated with the angularity of the test probe or the local velocity vector. A calibration procedure is therefore required to compensate for this error. Through a series of calibration steps presented here, the angular biases are determined and removed from the data sets. By removing the misalignment, smoother pressure distributions contribute to more accurate experimental results, which in turn can be compared to theoretical and actual in-flight results to identify any similarities. Error analyses are also performed to verify the accuracy of the calibration error reduction. The resulting calibrated data will be implemented into an in-flight RTF script that will output critical flight parameters during future CCIE experimental test runs. All of these tasks are associated with, and contribute to, NASA Dryden Flight Research Center's F-15B Research Testbed's Small Business Innovation Research of the Channeled Centerbody Inlet Experiment.

  16. Mutation-adapted U1 snRNA corrects a splicing error of the dopa decarboxylase gene.

    PubMed

    Lee, Ni-Chung; Lee, Yu-May; Chen, Pin-Wen; Byrne, Barry J; Hwu, Wuh-Liang

    2016-12-01

    Aromatic l-amino acid decarboxylase (AADC) deficiency is an inborn error of monoamine neurotransmitter synthesis, which results in dopamine, serotonin, epinephrine and norepinephrine deficiencies. The DDC gene founder mutation IVS6 + 4A > T is highly prevalent in Chinese patients with AADC deficiency. In this study, we designed several U1 snRNA vectors to adapt U1 snRNA binding sequences of the mutated DDC gene. We found that only the modified U1 snRNA (IVS-AAA) that completely matched both the intronic and exonic U1 binding sequences of the mutated DDC gene could correct splicing errors of either the mutated human DDC minigene or the mouse artificial splicing construct in vitro. We further injected an adeno-associated viral (AAV) vector to express IVS-AAA in the brain of a knock-in mouse model. This treatment was well tolerated and improved both the survival and brain dopamine and serotonin levels of mice with AADC deficiency. Therefore, mutation-adapted U1 snRNA gene therapy can be a promising method to treat genetic diseases caused by splicing errors, but the efficiency of such a treatment still needs improvements.

  17. Misalignment calibration of geomagnetic vector measurement system using parallelepiped frame rotation method

    NASA Astrophysics Data System (ADS)

    Pang, Hongfeng; Zhu, XueJun; Pan, Mengchun; Zhang, Qi; Wan, Chengbiao; Luo, Shitu; Chen, Dixiang; Chen, Jinfei; Li, Ji; Lv, Yunxiao

    2016-12-01

    Misalignment error is a key factor influencing the measurement accuracy of a geomagnetic vector measurement system; its calibration is difficult because the sensors measure different physical quantities and their coordinate frames are not directly observable. A new misalignment calibration method based on rotating a parallelepiped frame is proposed. Simulation and experiment results show the effectiveness of the calibration method. The experimental system mainly contains a DM-050 three-axis fluxgate magnetometer, an INS (inertial navigation system), an aluminium parallelepiped frame, and an aluminium plane base. Misalignment angles are calculated from the data measured by the magnetometer and INS after rotating the aluminium parallelepiped frame on the aluminium plane base. After calibration, the RMS errors of the geomagnetic north, vertical and east components are reduced from 349.441 nT, 392.530 nT and 562.316 nT to 40.130 nT, 91.586 nT and 141.989 nT respectively.

  18. Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model

    NASA Astrophysics Data System (ADS)

    Yu, Lean; Wang, Shouyang; Lai, K. K.

    Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling and more weight should be given to classes with more importance. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weight is given to the least squares classification errors of important classes than to those of unimportant classes, while keeping the regularized terms in their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.

  19. Direct discretization of planar div-curl problems

    NASA Technical Reports Server (NTRS)

    Nicolaides, R. A.

    1989-01-01

    A control volume method is proposed for planar div-curl systems. The method is independent of potential and least squares formulations, and works directly with the div-curl system. The novelty of the technique lies in its use of a single local vector field component and two control volumes rather than the other way around. A discrete vector field theory comes quite naturally from this idea and is developed. Error estimates are proved for the method, and other ramifications investigated.

  20. Memorization of Sequences of Movements of the Right or the Left Hand by Right- and Left-Handers: Vector Coding.

    PubMed

    Bobrova, E V; Bogacheva, I N; Lyakhovetskii, V A; Fabinskaja, A A; Fomina, E V

    2017-01-01

    In order to test the hypothesis of hemisphere specialization for different types of information coding (the right hemisphere, for positional coding; the left one, for vector coding), we analyzed the errors of right- and left-handers during a task involving the memorization of sequences of movements by the left or the right hand, which activates vector coding by changing the order of movements in memorized sequences. The task was first performed by the right or the left hand, then by the opposite hand. It was found that both right- and left-handers use the information about the previous movements of the dominant hand, but not of the non-dominant one. After changing the hand, right-handers use the information about previous movements of the second hand, while left-handers do not. We compared our results with the data of previous experiments, in which positional coding was activated, and concluded that both right- and left-handers use vector coding for memorizing the sequences of their dominant hands and positional coding for memorizing the sequences of the non-dominant hand. No similar patterns of errors were found between right- and left-handers after changing the hand, which suggests that in right- and left-handers the skills are transferred in different ways depending on the type of coding.

  1. Magnetometer-only attitude and angular velocity filtering estimation for attitude changing spacecraft

    NASA Astrophysics Data System (ADS)

    Ma, Hongliang; Xu, Shijie

    2014-09-01

    This paper presents an improved real-time sequential filter (IRTSF) for magnetometer-only attitude and angular velocity estimation of a spacecraft during attitude changes (including fast and large angular attitude maneuvers, rapid spinning, or uncontrolled tumbling). In this new magnetometer-only attitude determination technique, both the attitude dynamics equation and the first time derivative of the measured magnetic field vector are introduced directly into the filtering equations, building on the traditional gyroless single-vector attitude determination method and the real-time sequential filter (RTSF) for magnetometer-only attitude estimation. The process noise model of the IRTSF includes the attitude kinematics and dynamics equations, and its measurement model consists of the magnetic field vector and its first time derivative. The observability of the IRTSF for spacecraft with small or large angular velocity changes is evaluated by an improved Lie differentiation, and the degrees of observability of the IRTSF for different initial estimation errors are analyzed by the condition number and a solved covariance matrix. Numerical simulation results indicate that: (1) the attitude and angular velocity of the spacecraft can be estimated with sufficient accuracy using the IRTSF from magnetometer-only data; (2) compared with those of the RTSF, the estimation accuracies and observability degrees of attitude and angular velocity using the IRTSF from magnetometer-only data are both improved; and (3) universality: the IRTSF for magnetometer-only attitude and angular velocity estimation is observable for any initial state estimation error vector.

  2. Angular motion estimation using dynamic models in a gyro-free inertial measurement unit.

    PubMed

    Edwan, Ezzaldeen; Knedlik, Stefan; Loffeld, Otmar

    2012-01-01

    In this paper, we summarize the results of using dynamic models borrowed from tracking theory in describing the time evolution of the state vector to have an estimate of the angular motion in a gyro-free inertial measurement unit (GF-IMU). The GF-IMU is a special type of inertial measurement unit (IMU) that uses only a set of accelerometers in inferring the angular motion. Using distributed accelerometers, we get an angular information vector (AIV) composed of angular acceleration and quadratic angular velocity terms. We use a Kalman filter approach to estimate the angular velocity vector since it is not expressed explicitly within the AIV. The bias parameters inherent in the accelerometers' measurements produce a biased AIV, and hence the AIV bias parameters are estimated within an augmented state vector. Using dynamic models, the appended bias parameters of the AIV become observable and hence we can have an unbiased angular motion estimate. Moreover, a good model is required to extract the maximum amount of information from the observation. Observability analysis is done to determine the conditions for having an observable state space model. For higher grades of accelerometers and under relatively higher sampling frequency, the error of accelerometer measurements is dominated by the noise error. Consequently, simulations are conducted on two models, one with bias parameters appended in the state space model and the other a reduced model without bias parameters.

  3. Angular Motion Estimation Using Dynamic Models in a Gyro-Free Inertial Measurement Unit

    PubMed Central

    Edwan, Ezzaldeen; Knedlik, Stefan; Loffeld, Otmar

    2012-01-01

    In this paper, we summarize the results of using dynamic models borrowed from tracking theory in describing the time evolution of the state vector to have an estimate of the angular motion in a gyro-free inertial measurement unit (GF-IMU). The GF-IMU is a special type of inertial measurement unit (IMU) that uses only a set of accelerometers in inferring the angular motion. Using distributed accelerometers, we get an angular information vector (AIV) composed of angular acceleration and quadratic angular velocity terms. We use a Kalman filter approach to estimate the angular velocity vector since it is not expressed explicitly within the AIV. The bias parameters inherent in the accelerometers' measurements produce a biased AIV, and hence the AIV bias parameters are estimated within an augmented state vector. Using dynamic models, the appended bias parameters of the AIV become observable and hence we can have an unbiased angular motion estimate. Moreover, a good model is required to extract the maximum amount of information from the observation. Observability analysis is done to determine the conditions for having an observable state space model. For higher grades of accelerometers and under relatively higher sampling frequency, the error of accelerometer measurements is dominated by the noise error. Consequently, simulations are conducted on two models, one with bias parameters appended in the state space model and the other a reduced model without bias parameters. PMID:22778586

  4. Electron-Beam-Induced Deposition as a Technique for Analysis of Precursor Molecule Diffusion Barriers and Prefactors.

    PubMed

    Cullen, Jared; Lobo, Charlene J; Ford, Michael J; Toth, Milos

    2015-09-30

    Electron-beam-induced deposition (EBID) is a direct-write chemical vapor deposition technique in which an electron beam is used for precursor dissociation. Here we show that Arrhenius analysis of the deposition rates of nanostructures grown by EBID can be used to deduce the diffusion energies and corresponding preexponential factors of EBID precursor molecules. We explain the limitations of this approach, define growth conditions needed to minimize errors, and explain why the errors increase systematically as EBID parameters diverge from ideal growth conditions. Under suitable deposition conditions, EBID can be used as a localized technique for analysis of adsorption barriers and prefactors.
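
    The Arrhenius analysis itself is a short fit: plotting the log of the growth rate against inverse temperature yields the diffusion energy from the slope and the preexponential factor from the intercept. A minimal sketch with synthetic rates (the barrier value, prefactor, and temperature range are illustrative only, not values from the paper):

```python
import numpy as np

K_B = 8.617333262e-5   # Boltzmann constant in eV/K

def arrhenius_fit(T, rate):
    """Fit ln(rate) = ln(A) - E / (kB * T); return (E in eV, prefactor A)."""
    slope, intercept = np.polyfit(1.0 / T, np.log(rate), 1)
    return -slope * K_B, np.exp(intercept)

# Synthetic growth rates for a 0.5 eV barrier and 1e12 prefactor.
T = np.linspace(280.0, 400.0, 8)
rate = 1e12 * np.exp(-0.5 / (K_B * T))
E, A = arrhenius_fit(T, rate)
print(E, A)   # recovers ~0.5 eV and ~1e12
```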

  5. Binarization of apodizers by adapted one-dimensional error diffusion method

    NASA Astrophysics Data System (ADS)

    Kowalczyk, Marek; Cichocki, Tomasz; Martinez-Corral, Manuel; Andres, Pedro

    1994-10-01

    Two novel algorithms for the binarization of continuous, rotationally symmetric, real positive pupil filters are presented. Both algorithms are based on the 1-D error diffusion concept. The original gray-tone apodizer is replaced by a set of transparent and opaque concentric annular zones. Depending on the algorithm, the resulting binary mask consists of either equal-width or equal-area zones. The diffractive behavior of the binary filters is evaluated. It is shown that the pupils with equal-width zones give a Fraunhofer diffraction pattern more similar to that of the original continuous-tone pupil than those with equal-area zones, assuming in both cases the same resolution limit of the printing device.
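
    A sketch of a 1-D error diffusion binarization adapted to annular zones (one plausible reading of the adapted algorithm, not the paper's exact weighting: here the quantization error is weighted by zone area before diffusion, so transmitted energy rather than bare transmittance is conserved):

```python
import numpy as np

def binarize_radial_profile(t, dr=1.0):
    """1-D error diffusion of a rotationally symmetric pupil transmittance.

    t[i] is the continuous transmittance of the i-th equal-width annular
    zone. The quantization error is weighted by zone area before being
    diffused to the next zone, so transmitted energy is conserved.
    """
    r = (np.arange(len(t)) + 0.5) * dr          # mid-radius of each zone
    area = 2.0 * np.pi * r * dr                 # annulus area
    out = np.zeros_like(t)
    err = 0.0
    for i in range(len(t)):
        val = t[i] + err / area[i]              # fold the diffused error back in
        out[i] = 1.0 if val >= 0.5 else 0.0     # transparent or opaque zone
        err = (val - out[i]) * area[i]          # area-weighted residual error
    return out

mask = binarize_radial_profile(np.cos(np.linspace(0.0, np.pi / 2, 64)) ** 2)
print(mask.astype(int))
```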

  6. Improved estimation of anomalous diffusion exponents in single-particle tracking experiments

    NASA Astrophysics Data System (ADS)

    Kepten, Eldad; Bronshtein, Irena; Garini, Yuval

    2013-05-01

    The mean square displacement is a central tool in the analysis of single-particle tracking experiments, shedding light on various biophysical phenomena. Frequently, parameters are extracted by performing time averages on single-particle trajectories followed by ensemble averaging. This procedure, however, suffers from two systematic errors when applied to particles that perform anomalous diffusion. The first is significant at short-time lags and is induced by measurement errors. The second arises from the natural heterogeneity in biophysical systems. We show how to estimate and correct these two errors and improve the estimation of the anomalous parameters for the whole particle distribution. As a consequence, we manage to characterize ensembles of heterogeneous particles even for rather short and noisy measurements where regular time-averaged mean square displacement analysis fails. We apply this method to both simulations and in vivo measurements of telomere diffusion in 3T3 mouse embryonic fibroblast cells. The motion of telomeres is found to be subdiffusive with an average exponent constant in time. Individual telomere exponents are normally distributed around the average exponent. The proposed methodology has the potential to improve experimental accuracy while maintaining lower experimental costs and complexity.
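
    The short-lag correction amounts to subtracting the static localization-error offset from the time- and ensemble-averaged MSD before fitting the exponent; a minimal sketch follows (the ensemble-heterogeneity correction of the paper is not reproduced here, and the noise variance sigma2 is assumed known):

```python
import numpy as np

def anomalous_exponent(tracks, dt, sigma2=0.0):
    """Estimate the anomalous-diffusion exponent from particle tracks.

    tracks: array (n_particles, n_steps, dims). The localization-noise
    variance sigma2 (per coordinate) is subtracted from the time- and
    ensemble-averaged MSD before the log-log fit.
    """
    dims = tracks.shape[-1]
    lags = np.arange(1, tracks.shape[1] // 4)
    msd = np.array([np.mean(np.sum((tracks[:, lag:] - tracks[:, :-lag]) ** 2, axis=-1))
                    for lag in lags])
    msd = msd - 2.0 * dims * sigma2            # remove the static noise offset
    alpha, _ = np.polyfit(np.log(lags * dt), np.log(msd), 1)
    return alpha

# Ordinary Brownian tracks should give alpha close to 1.
rng = np.random.default_rng(0)
steps = rng.standard_normal((200, 400, 2)) * 0.1
print(anomalous_exponent(np.cumsum(steps, axis=1), dt=0.1))
```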

  7. Regression-assisted deconvolution.

    PubMed

    McIntyre, Julie; Stefanski, Leonard A

    2011-06-30

    We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length.

  8. Structure and dynamics of solvated polyethylenimine chains

    NASA Astrophysics Data System (ADS)

    Beu, Titus A.; Farcaş, Alexandra

    2017-12-01

    Polymeric gene-delivery carriers have attracted great interest in recent years, owing to their applicability in gene therapy. In particular, cationic polymers represent the most promising delivery vectors for nucleic acids into the cells. This study presents extensive atomistic molecular dynamics simulations of linear polyethylenimine chains. The simulations show that the variation of the chain size and protonation fraction causes a substantial change of the diffusion coefficient. Examination of the solvated chains suggests the possibility of controlling the polymer diffusion mobility in solution.

  9. Two-Photon Laser-Induced Fluorescence of O and N Atoms for the Study of Heterogeneous Catalysis in a Diffusion Reactor

    NASA Technical Reports Server (NTRS)

    Pallix, Joan B.; Copeland, Richard A.; Arnold, James O. (Technical Monitor)

    1995-01-01

    Advanced laser-based diagnostics have been developed to examine catalytic effects and atom/surface interactions on thermal protection materials. This study establishes the feasibility of using laser-induced fluorescence for detection of O and N atom loss in a diffusion tube to measure surface catalytic activity. The experimental apparatus is versatile in that it allows fluorescence detection to be used for measuring species selective recombination coefficients as well as diffusion tube and microwave discharge diagnostics. Many of the potential sources of error in measuring atom recombination coefficients by this method have been identified and taken into account. These include scattered light, detector saturation, sample surface cleanliness, reactor design, gas pressure and composition, and selectivity of the laser probe. Recombination coefficients and their associated errors are reported for N and O atoms on a quartz surface at room temperature.

  10. Asynchronous discrete event schemes for PDEs

    NASA Astrophysics Data System (ADS)

    Stone, D.; Geiger, S.; Lord, G. J.

    2017-08-01

    A new class of asynchronous discrete-event simulation schemes for advection-diffusion-reaction equations is introduced, based on the principle of allowing quanta of mass to pass through faces of a (regular, structured) Cartesian finite volume grid. The timescales of these events are linked to the flux on the face. The resulting schemes are self-adaptive, and local in both time and space. Experiments are performed on realistic physical systems related to porous media flow applications, including a large 3D advection diffusion equation and advection diffusion reaction systems. The results are compared to highly accurate reference solutions where the temporal evolution is computed with exponential integrator schemes using the same finite volume discretisation. This allows a reliable estimation of the solution error. Our results indicate a first order convergence of the error as a control parameter is decreased, and we outline a framework for analysis.
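    The event-driven idea can be sketched on a 1-D grid: each face carries a pending event whose waiting time is one mass quantum divided by the instantaneous Fickian face flux. The event rules below (no rescheduling of neighbouring faces, simple downhill transfer) are simplifying assumptions, not the authors' scheme.

    ```python
    import heapq
    import numpy as np

    def async_diffusion(u, dx, D, quantum, t_end):
        """Event-driven diffusion sketch: each interior face fires after a
        waiting time quantum/flux, with flux the instantaneous Fickian face
        flux. Stale events are simply re-evaluated at firing time; a production
        scheme would also reschedule neighbouring faces."""
        u = u.astype(float).copy()
        events = []

        def schedule(face, now):
            flux = D * abs(u[face] - u[face + 1]) / dx
            if flux > 0.0:
                heapq.heappush(events, (now + quantum / flux, face))

        for f in range(len(u) - 1):
            schedule(f, 0.0)
        while events:
            t, f = heapq.heappop(events)
            if t > t_end:
                break
            if u[f] > u[f + 1]:                 # move one quantum downhill
                u[f] -= quantum; u[f + 1] += quantum
            elif u[f + 1] > u[f]:
                u[f + 1] -= quantum; u[f] += quantum
            schedule(f, t)                      # local-in-time self-adaptivity
        return u

    u0 = np.zeros(50); u0[25] = 1.0
    print(async_diffusion(u0, dx=1.0, D=1.0, quantum=0.01, t_end=5.0))
    ```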

  11. Error analysis of 3D-PTV through unsteady interfaces

    NASA Astrophysics Data System (ADS)

    Akutina, Yulia; Mydlarski, Laurent; Gaskin, Susan; Eiff, Olivier

    2018-03-01

    The feasibility of stereoscopic flow measurements through an unsteady optical interface is investigated. Position errors produced by a wavy optical surface are determined analytically, as are the optimal viewing angles of the cameras to minimize such errors. Two methods of measuring the resulting velocity errors are proposed. These methods are applied to 3D particle tracking velocimetry (3D-PTV) data obtained through the free surface of a water flow within a cavity adjacent to a shallow channel. The experiments were performed using two sets of conditions, one having no strong surface perturbations, and the other exhibiting surface gravity waves. In the latter case, the amplitude of the gravity waves was 6% of the water depth, resulting in water surface inclinations of about 0.2°. (The water depth is used herein as a relevant length scale, because the measurements are performed in the entire water column. In a more general case, the relevant scale is the maximum distance from the interface to the measurement plane, H, which here is the same as the water depth.) It was found that the contribution of the waves to the overall measurement error is low. The absolute position errors of the system were moderate (1.2% of H). However, given that the velocity is calculated from the relative displacement of a particle between two frames, the errors in the measured water velocities were reasonably small, because the error in the velocity is the relative position error over the average displacement distance. The relative position error was measured to be 0.04% of H, resulting in small velocity errors of 0.3% of the free-stream velocity (equivalent to 1.1% of the average velocity in the domain). It is concluded that even though the absolute positions to which the velocity vectors are assigned are distorted by the unsteady interface, the magnitude of the velocity vectors themselves remains accurate as long as the waves are slowly varying (have low curvature). The stronger the disturbances on the interface are (high amplitude, short wavelength), the smaller the distance from the interface at which the measurements can be performed.

  12. Estimation of daily interfractional larynx residual setup error after isocentric alignment for head and neck radiotherapy: Quality-assurance implications for target volume and organ-at-risk margination using daily CT-on-rails imaging

    PubMed Central

    Baron, Charles A.; Awan, Musaddiq J.; Mohamed, Abdallah S. R.; Akel, Imad; Rosenthal, David I.; Gunn, G. Brandon; Garden, Adam S.; Dyer, Brandon A.; Court, Laurence; Sevak, Parag R; Kocak-Uzel, Esengul; Fuller, Clifton D.

    2016-01-01

    Larynx may alternatively serve as a target or organ-at-risk (OAR) in head and neck cancer (HNC) image-guided radiotherapy (IGRT). The objective of this study was to estimate IGRT parameters required for larynx positional error independent of isocentric alignment and suggest population-based compensatory margins. Ten HNC patients receiving radiotherapy (RT) with daily CT-on-rails imaging were assessed. Seven landmark points were placed on each daily scan. Taking the most superior anterior point of the C5 vertebra as a reference isocenter for each scan, residual displacement vectors to the other 6 points were calculated post-isocentric alignment. Subsequently, using the first scan as a reference, the magnitudes of the vector differences for all 6 points over the course of treatment were calculated. Residual systematic and random error, and the necessary compensatory CTV-to-PTV and OAR-to-PRV margins, were calculated using both observational cohort data and a bootstrap-resampled population estimator. The grand mean displacement for all anatomical points was 5.07 mm, with mean systematic error of 1.1 mm and mean random setup error of 2.63 mm, while the bootstrapped POI grand mean displacement was 5.09 mm, with mean systematic error of 1.23 mm and mean random setup error of 2.61 mm. The required margin for CTV-to-PTV expansion was 4.6 mm for all cohort points, while the bootstrap estimate of the equivalent margin was 4.9 mm. The calculated OAR-to-PRV expansion for the observed residual setup error was 2.7 mm, with a bootstrap-estimated expansion of 2.9 mm. We conclude that interfractional larynx setup error is a significant source of RT setup/delivery error in HNC, whether the larynx is considered a CTV or an OAR. We estimate the need for a uniform expansion of 5 mm to compensate for setup error if the larynx is a target, or 3 mm if the larynx is an OAR, when using a non-laryngeal bony isocenter. PMID:25679151
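    The reported margins are consistent with the widely used van Herk CTV-to-PTV recipe (2.5Σ + 0.7σ) and the McKenzie OAR-to-PRV recipe (1.3Σ + 0.5σ). The abstract does not name its margin formulas, so treating them as these recipes is an assumption; the arithmetic below merely checks that they reproduce the reported values.

    ```python
    def ctv_to_ptv(Sigma, sigma):
        """van Herk population margin: 2.5*Sigma + 0.7*sigma (assumed recipe)."""
        return 2.5 * Sigma + 0.7 * sigma

    def oar_to_prv(Sigma, sigma):
        """McKenzie PRV margin: 1.3*Sigma + 0.5*sigma (assumed recipe)."""
        return 1.3 * Sigma + 0.5 * sigma

    print(ctv_to_ptv(1.10, 2.63))  # 4.59 mm ~ reported 4.6 mm (cohort)
    print(ctv_to_ptv(1.23, 2.61))  # 4.90 mm ~ reported 4.9 mm (bootstrap)
    print(oar_to_prv(1.10, 2.63))  # 2.75 mm ~ reported 2.7 mm (cohort)
    print(oar_to_prv(1.23, 2.61))  # 2.90 mm ~ reported 2.9 mm (bootstrap)
    ```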

  13. Estimation of daily interfractional larynx residual setup error after isocentric alignment for head and neck radiotherapy: quality assurance implications for target volume and organs‐at‐risk margination using daily CT on‐rails imaging

    PubMed Central

    Baron, Charles A.; Awan, Musaddiq J.; Mohamed, Abdallah S.R.; Akel, Imad; Rosenthal, David I.; Gunn, G. Brandon; Garden, Adam S.; Dyer, Brandon A.; Court, Laurence; Sevak, Parag R.; Kocak‐Uzel, Esengul

    2014-01-01

    Larynx may alternatively serve as a target or an organ at risk (OAR) in head and neck cancer (HNC) image‐guided radiotherapy (IGRT). The objective of this study was to estimate IGRT parameters required for larynx positional error independent of isocentric alignment and suggest population‐based compensatory margins. Ten HNC patients receiving radiotherapy (RT) with daily CT on‐rails imaging were assessed. Seven landmark points were placed on each daily scan. Taking the most superior‐anterior point of the C5 vertebra as a reference isocenter for each scan, residual displacement vectors to the other six points were calculated postisocentric alignment. Subsequently, using the first scan as a reference, the magnitude of vector differences for all six points for all scans over the course of treatment was calculated. Residual systematic and random error and the necessary compensatory CTV‐to‐PTV and OAR‐to‐PRV margins were calculated, using both observational cohort data and a bootstrap‐resampled population estimator. The grand mean displacement for all anatomical points was 5.07 mm, with mean systematic error of 1.1 mm and mean random setup error of 2.63 mm, while bootstrapped POIs grand mean displacement was 5.09 mm, with mean systematic error of 1.23 mm and mean random setup error of 2.61 mm. Required margin for CTV‐PTV expansion was 4.6 mm for all cohort points, while the bootstrap estimator of the equivalent margin was 4.9 mm. The calculated OAR‐to‐PRV expansion for the observed residual setup error was 2.7 mm and bootstrap estimated expansion of 2.9 mm. We conclude that the interfractional larynx setup error is a significant source of RT setup/delivery error in HNC, both when the larynx is considered as a CTV or OAR. We estimate the need for a uniform expansion of 5 mm to compensate for setup error if the larynx is a target, or 3 mm if the larynx is an OAR, when using a nonlaryngeal bony isocenter. PACS numbers: 87.55.D‐, 87.55.Qr

  14. Unified Heat Kernel Regression for Diffusion, Kernel Smoothing and Wavelets on Manifolds and Its Application to Mandible Growth Modeling in CT Images

    PubMed Central

    Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.

    2014-01-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435
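    A toy version of the construction, with a graph Laplacian of a ring standing in for the Laplace-Beltrami operator: expand the signal in Laplacian eigenfunctions and damp each mode by exp(-λt), which is exactly the analytic heat-kernel weighting described above. The ring graph and noise model are illustrative assumptions.

    ```python
    import numpy as np

    def heat_kernel_smooth(L, f, t, k=None):
        """Heat-kernel smoothing of a signal f on a mesh/graph with Laplacian L:
        project f onto the Laplacian eigenfunctions and damp mode i by
        exp(-lambda_i * t), i.e. apply expm(-t*L) analytically."""
        lam, psi = np.linalg.eigh(L)              # eigenpairs of the Laplacian
        if k is not None:                         # optional truncated expansion
            lam, psi = lam[:k], psi[:, :k]
        coeffs = psi.T @ f                        # analysis step
        return psi @ (np.exp(-lam * t) * coeffs)  # synthesis with kernel weights

    # ring graph as a toy closed 1-D manifold
    n = 200
    L = 2 * np.eye(n) - np.roll(np.eye(n), 1, 0) - np.roll(np.eye(n), -1, 0)
    f = np.sin(np.linspace(0, 4 * np.pi, n)) + 0.3 * np.random.randn(n)
    f_smooth = heat_kernel_smooth(L, f, t=5.0)
    ```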

  15. EMMA: An Extensible Mammalian Modular Assembly Toolkit for the Rapid Design and Production of Diverse Expression Vectors.

    PubMed

    Martella, Andrea; Matjusaitis, Mantas; Auxillos, Jamie; Pollard, Steven M; Cai, Yizhi

    2017-07-21

    Mammalian plasmid expression vectors are critical reagents underpinning many facets of research across biology, biomedical research, and the biotechnology industry. Traditional cloning methods often require laborious manual design and assembly of plasmids using tailored sequential cloning steps. This process can be protracted, complicated, expensive, and error-prone. New tools and strategies that facilitate the efficient design and production of bespoke vectors would help relieve a current bottleneck for researchers. To address this, we have developed an extensible mammalian modular assembly kit (EMMA). This enables rapid and efficient modular assembly of mammalian expression vectors in a one-tube, one-step Golden Gate cloning reaction, using a standardized library of compatible genetic parts. The high modularity, flexibility, and extensibility of EMMA provide a simple method for the production of functionally diverse mammalian expression vectors. We demonstrate the value of this toolkit by constructing and validating a range of representative vectors, such as transient and stable expression vectors (transposon based vectors), targeting vectors, inducible systems, polycistronic expression cassettes, fusion proteins, and fluorescent reporters. The method also supports simple assembly of combinatorial libraries and hierarchical assembly for the production of larger multigenetic cargos. In summary, EMMA is compatible with automated production, and novel genetic parts can be easily incorporated, providing new opportunities for mammalian synthetic biology.

  16. A Worksheet to Enhance Students’ Conceptual Understanding in Vector Components

    NASA Astrophysics Data System (ADS)

    Wutchana, Umporn; Emarat, Narumon

    2017-09-01

    With and without physical context, we explored 59 undergraduate students' conceptual and procedural understanding of vector components, using both open-ended problems and multiple-choice items designed from research instruments used in physics education research. The results showed that a number of students produced errors and revealed alternative conceptions, especially when asked to draw the graphical form of vector components. This indicated that most of them had not developed a strong foundation of understanding of vector components and could not apply those concepts to problems with physical context. Based on the findings, we designed a worksheet to enhance the students' conceptual understanding of vector components. The worksheet is composed of three parts, which help students construct their own understanding of the definition, graphical form, and magnitude of vector components. To validate the worksheet, focus group discussions with 3 and 10 graduate students (in-service science teachers) were conducted. The modified worksheet was then distributed to 41 grade 9 students in a science class. The students spent approximately 50 minutes completing the worksheet. They sketched and measured vectors and their components and compared them with trigonometric ratios to consolidate the concepts of vector components. After they completed the worksheet, their conceptual models were assessed: 83% of them constructed the correct model of vector components.
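    For reference, the relation the worksheet has students verify by measurement is a direct application of trigonometric ratios (a minimal illustration, not part of the study's materials):

    ```python
    import numpy as np

    def components(magnitude, angle_deg):
        """Resolve a 2-D vector into x- and y-components from its magnitude
        and the angle it makes with the x-axis."""
        theta = np.radians(angle_deg)
        return magnitude * np.cos(theta), magnitude * np.sin(theta)

    fx, fy = components(10.0, 30.0)   # -> (8.66, 5.00)
    ```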

  17. A hybrid continuous-discrete method for stochastic reaction–diffusion processes

    PubMed Central

    Zheng, Likun; Nie, Qing

    2016-01-01

    Stochastic fluctuations in reaction–diffusion processes often have substantial effect on spatial and temporal dynamics of signal transductions in complex biological systems. One popular approach for simulating these processes is to divide the system into small spatial compartments assuming that molecules react only within the same compartment and jump between adjacent compartments driven by the diffusion. While the approach is convenient in terms of its implementation, its computational cost may become prohibitive when diffusive jumps occur significantly more frequently than reactions, as in the case of rapid diffusion. Here, we present a hybrid continuous-discrete method in which diffusion is simulated using continuous approximation while reactions are based on the Gillespie algorithm. Specifically, the diffusive jumps are approximated as continuous Gaussian random vectors with time-dependent means and covariances, allowing use of a large time step, even for rapid diffusion. By considering the correlation among diffusive jumps, the approximation is accurate for the second moment of the diffusion process. In addition, a criterion is obtained for identifying the region in which such diffusion approximation is required to enable adaptive calculations for better accuracy. Applications to a linear diffusion system and two nonlinear systems of morphogens demonstrate the effectiveness and benefits of the new hybrid method. PMID:27703710
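    A toy rendition of the hybrid idea on a ring of compartments: diffusion advances with a Gaussian approximation of the jump process, while a first-order decay A → 0 fires exactly via the Gillespie algorithm within the same interval. Unlike the paper's method, the Gaussian increments here are per-compartment and uncorrelated, and the decay reaction is an assumed stand-in.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def hybrid_step(n, d_jump, k_deg, dt):
        """One hybrid step: continuous diffusion update, then exact reactions."""
        # continuous diffusion: mean drift plus Gaussian jump-count noise
        inflow = np.roll(n, 1) + np.roll(n, -1)
        mean = n + d_jump * dt * (inflow - 2.0 * n)
        var = d_jump * dt * (inflow + 2.0 * n)       # variance of the net jumps
        n = np.maximum(mean + rng.normal(0.0, np.sqrt(var)), 0.0)
        # discrete reactions within [0, dt) via the Gillespie algorithm
        t = 0.0
        while n.sum() > 0.0:
            a_total = k_deg * n.sum()                # total decay propensity
            t += rng.exponential(1.0 / a_total)
            if t >= dt:
                break
            i = rng.choice(n.size, p=n / n.sum())    # compartment of the event
            n[i] = max(n[i] - 1.0, 0.0)              # fire A -> 0
        return n

    n = np.full(20, 50.0)
    for _ in range(100):
        n = hybrid_step(n, d_jump=5.0, k_deg=0.01, dt=0.05)
    ```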

  18. Total elimination of sampling errors in polarization imagery obtained with integrated microgrid polarimeters.

    PubMed

    Tyo, J Scott; LaCasse, Charles F; Ratliff, Bradley M

    2009-10-15

    Microgrid polarimeters operate by integrating a focal plane array with an array of micropolarizers. The Stokes parameters are estimated by comparing polarization measurements from pixels in a neighborhood around the point of interest. The main drawback is that the measurements used to estimate the Stokes vector are made at different locations, leading to a false polarization signature owing to instantaneous field-of-view (IFOV) errors. We demonstrate for the first time, to our knowledge, that spatially band limited polarization images can be ideally reconstructed with no IFOV error by using a linear system framework.
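    The neighbourhood comparison that causes the IFOV error can be made concrete with an assumed (but common) 2×2 super-pixel layout of 0/45/90/135-degree micropolarizers; band-limited interpolation of each channel to a common grid is what removes that error in the paper's framework.

    ```python
    import numpy as np

    def stokes_from_microgrid(img):
        """Linear Stokes images from a 2x2 micropolarizer super-pixel layout
        (0/45/90/135 degrees, an assumed arrangement). Differencing the four
        neighbouring pixels is exactly the step that introduces IFOV error,
        since each pixel samples a slightly different scene point."""
        i0   = img[0::2, 0::2].astype(float)
        i45  = img[0::2, 1::2].astype(float)
        i135 = img[1::2, 0::2].astype(float)
        i90  = img[1::2, 1::2].astype(float)
        s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
        s1 = i0 - i90                        # 0/90 difference
        s2 = i45 - i135                      # 45/135 difference
        return s0, s1, s2
    ```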

  19. Statistical error model for a solar electric propulsion thrust subsystem

    NASA Technical Reports Server (NTRS)

    Bantell, M. H.

    1973-01-01

    The solar electric propulsion thrust subsystem statistical error model was developed as a tool for investigating the effects of thrust subsystem parameter uncertainties on navigation accuracy. The model is currently being used to evaluate the impact of electric engine parameter uncertainties on navigation system performance for a baseline mission to Encke's Comet in the 1980s. The data given represent the next generation in statistical error modeling for low-thrust applications. Principal improvements include the representation of thrust uncertainties and random process modeling in terms of random parametric variations in the thrust vector process for a multi-engine configuration.

  20. Quantitative estimation of localization errors of 3d transition metal pseudopotentials in diffusion Monte Carlo

    DOE PAGES

    Dzubak, Allison L.; Krogel, Jaron T.; Reboredo, Fernando A.

    2017-07-10

    The necessarily approximate evaluation of non-local pseudopotentials in diffusion Monte Carlo (DMC) introduces localization errors. In this paper, we estimate these errors for two families of non-local pseudopotentials for the first-row transition metal atoms Sc–Zn using an extrapolation scheme and multideterminant wavefunctions. Sensitivities of the error in the DMC energies to the Jastrow factor are used to estimate the quality of two sets of pseudopotentials with respect to locality error reduction. The locality approximation and T-moves scheme are also compared for accuracy of total energies. After estimating the removal of the locality and T-moves errors, we present the range of fixed-node energies between a single determinant description and a full valence multideterminant complete active space expansion. The results for these pseudopotentials agree with previous findings that the locality approximation is less sensitive to changes in the Jastrow than T-moves, yielding more accurate total energies, though not necessarily more accurate energy differences. For both the locality approximation and T-moves, we find decreasing Jastrow sensitivity moving left to right across the series Sc–Zn. The recently generated pseudopotentials of Krogel et al. reduce the magnitude of the locality error compared with the pseudopotentials of Burkatzki et al. by an average estimated 40% using the locality approximation. The estimated locality error is equivalent for both sets of pseudopotentials when T-moves is used. Finally, for the Sc–Zn atomic series with these pseudopotentials, and using up to three-body Jastrow factors, our results suggest that the fixed-node error is dominant over the locality error when a single determinant is used.

  1. Term Cancellations in Computing Floating-Point Gröbner Bases

    NASA Astrophysics Data System (ADS)

    Sasaki, Tateaki; Kako, Fujio

    We discuss the term cancellation that makes floating-point Gröbner basis computation unstable, and show that error accumulation is never negligible in our previous method. We then present a new method, which removes accumulated errors as far as possible by reducing matrices constructed from coefficient vectors via Gaussian elimination. The method reveals the amount of term cancellation caused by the existence of approximately linearly dependent relations among the input polynomials.
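    One hedged reading of the matrix step: run Gaussian elimination with partial pivoting on rows of coefficient vectors and flag rows that collapse to (numerically) zero, since those reveal the approximate linear dependencies responsible for large term cancellation. The tolerance and the flagging rule are illustrative choices, not the authors' exact procedure.

    ```python
    import numpy as np

    def near_dependencies(coeff_rows, tol=1e-10):
        """Gaussian elimination with partial pivoting on a matrix whose rows
        are polynomial coefficient vectors; rows reduced below tol indicate an
        approximate linear dependence among the input polynomials."""
        A = np.array(coeff_rows, dtype=float)
        m, n = A.shape
        row = 0
        for col in range(n):
            if row == m:
                break
            p = row + int(np.argmax(np.abs(A[row:, col])))
            if abs(A[p, col]) < tol:
                continue                      # no usable pivot in this column
            A[[row, p]] = A[[p, row]]
            A[row + 1:] -= np.outer(A[row + 1:, col] / A[row, col], A[row])
            row += 1
        return [r for r in range(m) if np.all(np.abs(A[r]) < tol)]

    # third row is nearly the sum of the first two -> flagged as a cancellation
    rows = [[1.0, 2.0, 0.0], [0.0, 1.0, 3.0], [1.0, 3.0, 3.0 + 1e-12]]
    print(near_dependencies(rows))            # -> [2]
    ```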

  2. On extreme points of the diffusion polytope

    DOE PAGES

    Hay, M. J.; Schiff, J.; Fisch, N. J.

    2017-01-04

    Here, we consider a class of diffusion problems defined on simple graphs in which the populations at any two vertices may be averaged if they are connected by an edge. The diffusion polytope is the convex hull of the set of population vectors attainable using finite sequences of these operations. A number of physical problems have linear programming solutions taking the diffusion polytope as the feasible region, e.g. the free energy that can be removed from plasma using waves, so there is a need to describe and enumerate its extreme points. We also review known results for the case of the complete graph Kn, and study a variety of problems for the path graph Pn and the cyclic graph Cn. Finally, we describe the different kinds of extreme points that arise, and identify the diffusion polytope in a number of simple cases. In the case of increasing initial populations on Pn the diffusion polytope is topologically an n-dimensional hypercube.
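    The averaging move itself is easy to state in code; enumerating the vectors reachable by short move sequences (whose convex hull is the diffusion polytope) for the path graph P3 looks roughly like this, with the initial populations chosen arbitrarily:

    ```python
    import numpy as np

    def average(x, i, j):
        """One diffusion move: replace the populations at adjacent vertices
        i and j by their common average."""
        y = x.copy()
        y[i] = y[j] = 0.5 * (x[i] + x[j])
        return y

    x0 = np.array([3.0, 2.0, 1.0])            # initial populations on P3
    edges = [(0, 1), (1, 2)]                  # edges of the path graph
    attained = {tuple(x0)}
    frontier = [x0]
    for _ in range(4):                        # all move sequences of length <= 4
        frontier = [average(x, i, j) for x in frontier for i, j in edges]
        attained.update(tuple(np.round(x, 6)) for x in frontier)
    print(sorted(attained))
    ```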

  3. Efficient parallel reconstruction for high resolution multishot spiral diffusion data with low rank constraint.

    PubMed

    Liao, Congyu; Chen, Ying; Cao, Xiaozhi; Chen, Song; He, Hongjian; Mani, Merry; Jacob, Mathews; Magnotta, Vincent; Zhong, Jianhui

    2017-03-01

    To propose a novel reconstruction method using parallel imaging with low rank constraint to accelerate high resolution multishot spiral diffusion imaging. The undersampled high resolution diffusion data were reconstructed based on a low rank (LR) constraint using similarities between the data of different interleaves from a multishot spiral acquisition. The self-navigated phase compensation using the low resolution phase data in the center of k-space was applied to correct shot-to-shot phase variations induced by motion artifacts. The low rank reconstruction was combined with sensitivity encoding (SENSE) for further acceleration. The efficiency of the proposed joint reconstruction framework, dubbed LR-SENSE, was evaluated through error quantifications and compared with the ℓ1 regularized compressed sensing method and the conventional iterative SENSE method using the same datasets. It was shown that with the same acceleration factor, the proposed LR-SENSE method had the smallest normalized sum-of-squares errors among all the compared methods in all diffusion weighted images and DTI-derived index maps, when evaluated with different acceleration factors (R = 2, 3, 4) and for all the acquired diffusion directions. Robust high resolution diffusion weighted images can be efficiently reconstructed from highly undersampled multishot spiral data with the proposed LR-SENSE method. Magn Reson Med 77:1359-1366, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
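    A single-coil toy of the low-rank ingredient, with the SENSE model and phase navigation omitted: stack the interleaves as columns of a Casorati matrix and alternate a hard rank truncation with k-space data consistency. This projection-onto-sets loop is an assumed simplification, not the authors' LR-SENSE algorithm.

    ```python
    import numpy as np

    def low_rank_pocs(kspace, mask, rank, n_iter=50):
        """Toy low-rank reconstruction: kspace has shape (ny, nx, n_shots),
        mask is a boolean sampling pattern of the same shape."""
        ny, nx, n_shots = kspace.shape
        x = kspace * mask
        for _ in range(n_iter):
            # rank truncation of the (pixels x shots) Casorati matrix
            u, s, vt = np.linalg.svd(x.reshape(-1, n_shots), full_matrices=False)
            s[rank:] = 0.0
            x = ((u * s) @ vt).reshape(ny, nx, n_shots)
            # re-impose the acquired k-space samples
            x[mask] = kspace[mask]
        return x
    ```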

  4. A SVM framework for fault detection of the braking system in a high speed train

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Li, Yan-Fu; Zio, Enrico

    2017-03-01

    In April 2015, the number of operating High Speed Trains (HSTs) in the world had reached 3603. An efficient, effective and very reliable braking system is evidently critical for trains running at speeds around 300 km/h. Failure of a highly reliable braking system is a rare event and, consequently, informative recorded data on fault conditions are scarce. This renders fault detection a classification problem with highly unbalanced data. In this paper, a Support Vector Machine (SVM) framework, including feature selection, feature vector selection, model construction and decision boundary optimization, is proposed for tackling this problem. Feature vector selection can largely reduce the data size and, thus, the computational burden. The constructed model is a modified version of the least squares SVM, in which a higher cost is assigned to the misclassification of faulty conditions than to the misclassification of normal conditions. The proposed framework is successfully validated on a number of public unbalanced datasets. Then, it is applied to the fault detection of braking systems in HSTs: in comparison with several SVM approaches for unbalanced datasets, the proposed framework gives better results.
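    The cost-asymmetry idea can be imitated with an off-the-shelf SVM by weighting the rare class more heavily; sklearn's SVC is used here as a stand-in for the paper's modified least-squares SVM, and the synthetic data and weights are assumptions.

    ```python
    from sklearn.svm import SVC
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    # highly unbalanced two-class problem standing in for rare braking faults
    X, y = make_classification(n_samples=5000, weights=[0.98, 0.02], random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

    # assign a higher misclassification cost to the rare faulty class
    clf = SVC(kernel="rbf", class_weight={0: 1, 1: 20}).fit(Xtr, ytr)
    print(classification_report(yte, clf.predict(Xte)))
    ```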

  5. A support vector regression-firefly algorithm-based model for limiting velocity prediction in sewer pipes.

    PubMed

    Ebtehaj, Isa; Bonakdari, Hossein

    2016-01-01

    Sediment transport without deposition is an essential consideration in the optimum design of sewer pipes. In this study, a novel method based on a combination of support vector regression (SVR) and the firefly algorithm (FFA) is proposed to predict the minimum velocity required to avoid sediment settling in pipe channels, expressed as the densimetric Froude number (Fr). The efficiency of support vector machine (SVM) models depends on the suitable selection of SVM parameters; here, FFA is used to determine these parameters. The parameters that affect the Fr calculation are identified by dimensional analysis, and the different dimensionless variables along with the corresponding models are introduced. The best performance is attributed to the model that employs the sediment volumetric concentration (C(V)), ratio of relative median diameter of particles to hydraulic radius (d/R), dimensionless particle number (D(gr)) and overall sediment friction factor (λ(s)) parameters to estimate Fr. The performance of the SVR-FFA model is compared with genetic programming, artificial neural network and existing regression-based equations. The results indicate the superior performance of SVR-FFA (mean absolute percentage error = 2.123%; root mean square error = 0.116) compared with the other methods.
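    A rough sketch of the pipeline, with grid search standing in for the firefly algorithm as the hyper-parameter tuner; the feature matrix (columns C_V, d/R, D_gr, λ_s) and its synthetic relation to Fr are placeholders, not the study's data.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import GridSearchCV

    # hypothetical features: columns C_V, d/R, D_gr, lambda_s; target Fr
    X = np.random.rand(200, 4)
    Fr = 4 + X @ np.array([1.5, -2.0, 0.5, 1.0]) + 0.1 * np.random.randn(200)

    # grid search stands in for FFA tuning of C, epsilon and gamma
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
    grid = GridSearchCV(model, {"svr__C": [1, 10, 100],
                                "svr__epsilon": [0.01, 0.1],
                                "svr__gamma": ["scale", 0.1, 1.0]}, cv=5)
    grid.fit(X, Fr)
    print(grid.best_params_)
    ```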

  6. A Temperature Compensation Method for Piezo-Resistive Pressure Sensor Utilizing Chaotic Ions Motion Algorithm Optimized Hybrid Kernel LSSVM.

    PubMed

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2016-10-14

    A piezo-resistive pressure sensor is made of silicon, the nature of which is considerably influenced by ambient temperature. The effect of temperature should be eliminated during the working period if linear output is expected. To deal with this issue, an approach consisting of a hybrid-kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local Radial Basis Function (RBF) kernel and a global polynomial kernel, is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the LSSVM. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as maximum absolute relative error, minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.

  7. Comparison of Disk Diffusion, VITEK 2, and Broth Microdilution Antimicrobial Susceptibility Test Results for Unusual Species of Enterobacteriaceae

    PubMed Central

    Stone, Nimalie D.; O'Hara, Caroline M.; Williams, Portia P.; McGowan, John E.; Tenover, Fred C.

    2007-01-01

    We compared the antimicrobial susceptibility testing results generated by disk diffusion and the VITEK 2 automated system with the results of the Clinical and Laboratory Standards Institute (CLSI) broth microdilution (BMD) reference method for 61 isolates of unusual species of Enterobacteriaceae. The isolates represented 15 genera and 26 different species, including Buttiauxella, Cedecea, Kluyvera, Leminorella, and Yokenella. Antimicrobial agents included aminoglycosides, carbapenems, cephalosporins, fluoroquinolones, penicillins, and trimethoprim-sulfamethoxazole. CLSI interpretative criteria for Enterobacteriaceae were used. Of the 12 drugs tested by BMD and disk diffusion, 10 showed >95% categorical agreement (CA). CA was lower for ampicillin (80.3%) and cefazolin (77.0%). There were 3 very major errors (all with cefazolin), 1 major error (also with cefazolin), and 26 minor errors. Of the 40 isolates (representing 12 species) that could be identified with the VITEK 2 database, 36 were identified correctly to species level, 1 was identified to genus level only, and 3 were reported as unidentified. VITEK 2 generated MIC results for 42 (68.8%) of 61 isolates, but categorical interpretations (susceptible, intermediate, and resistant) were provided for only 22. For the 17 drugs tested by both BMD and VITEK 2, essential agreement ranged from 80.9 to 100% and CA ranged from 68.2% (ampicillin) to 100%; thirteen drugs exhibited 100% CA. In summary, disk diffusion provides a reliable alternative to BMD for testing of unusual Enterobacteriaceae, some of which cannot be tested, or produce incorrect results, by automated methods. PMID:17135429

  8. Diffusion of Zonal Variables Using Node-Centered Diffusion Solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, T B

    2007-08-06

    Tom Kaiser [1] has done some preliminary work to use the node-centered diffusion solver (originally developed by T. Palmer [2]) in Kull for diffusion of zonal variables such as electron temperature. To avoid numerical diffusion, Tom used a scheme developed by Shestakov et al. [3] and found their scheme could, in the vicinity of steep gradients, decouple nearest-neighbor zonal sub-meshes, leading to 'alternating-zone' (red-black mode) errors. Tom extended their scheme to couple the sub-meshes with appropriately chosen artificial diffusion and thereby solved the 'alternating-zone' problem. Because the choice of the artificial diffusion coefficient can be very delicate, it is desirable to use a scheme that does not require the artificial diffusion but is still able to avoid both numerical diffusion and the 'alternating-zone' problem. In this document we present such a scheme.

  9. Variations in Static Force Control and Motor Unit Behavior with Error Amplification Feedback in the Elderly.

    PubMed

    Chen, Yi-Ching; Lin, Linda L; Lin, Yen-Ting; Hu, Chia-Ling; Hwang, Ing-Shiou

    2017-01-01

    Error amplification (EA) feedback is a promising approach to advance visuomotor skill. As error detection and visuomotor processing at short time scales decline with age, this study examined whether older adults could benefit from EA feedback that included higher-frequency information to guide a force-tracking task. Fourteen young and 14 older adults performed low-level static isometric force-tracking with visual guidance of typical visual feedback and EA feedback containing augmented high-frequency errors. Stabilogram diffusion analysis was used to characterize force fluctuation dynamics. Also, the discharge behaviors of motor units and pooled motor unit coherence were assessed following the decomposition of multi-channel surface electromyography (EMG). EA produced different behavioral and neurophysiological impacts on young and older adults. Older adults exhibited inferior task accuracy with EA feedback than with typical visual feedback, but not young adults. Although stabilogram diffusion analysis revealed that EA led to a significant decrease in critical time points for both groups, EA potentiated the critical point of force fluctuations ⟨ΔFc²⟩, short-term effective diffusion coefficients (Ds), and short-term exponent scaling only for the older adults. Moreover, in older adults, EA added to the size of discharge variability of motor units and discharge regularity of cumulative discharge rate, but suppressed the pooled motor unit coherence in the 13-35 Hz band. Virtual EA alters the strategic balance between open-loop and closed-loop controls for force-tracking. Contrary to expectations, the prevailing use of closed-loop control with EA that contained high-frequency error information enhanced the motor unit discharge variability and undermined the force steadiness in the older group, concerning declines in physiological complexity in the neurobehavioral system and the common drive to the motoneuronal pool against force destabilization.

  10. Variations in Static Force Control and Motor Unit Behavior with Error Amplification Feedback in the Elderly

    PubMed Central

    Chen, Yi-Ching; Lin, Linda L.; Lin, Yen-Ting; Hu, Chia-Ling; Hwang, Ing-Shiou

    2017-01-01

    Error amplification (EA) feedback is a promising approach to advance visuomotor skill. As error detection and visuomotor processing at short time scales decline with age, this study examined whether older adults could benefit from EA feedback that included higher-frequency information to guide a force-tracking task. Fourteen young and 14 older adults performed low-level static isometric force-tracking with visual guidance of typical visual feedback and EA feedback containing augmented high-frequency errors. Stabilogram diffusion analysis was used to characterize force fluctuation dynamics. Also, the discharge behaviors of motor units and pooled motor unit coherence were assessed following the decomposition of multi-channel surface electromyography (EMG). EA produced different behavioral and neurophysiological impacts on young and older adults. Older adults exhibited inferior task accuracy with EA feedback than with typical visual feedback, but not young adults. Although stabilogram diffusion analysis revealed that EA led to a significant decrease in critical time points for both groups, EA potentiated the critical point of force fluctuations ⟨ΔFc²⟩, short-term effective diffusion coefficients (Ds), and short-term exponent scaling only for the older adults. Moreover, in older adults, EA added to the size of discharge variability of motor units and discharge regularity of cumulative discharge rate, but suppressed the pooled motor unit coherence in the 13–35 Hz band. Virtual EA alters the strategic balance between open-loop and closed-loop controls for force-tracking. Contrary to expectations, the prevailing use of closed-loop control with EA that contained high-frequency error information enhanced the motor unit discharge variability and undermined the force steadiness in the older group, concerning declines in physiological complexity in the neurobehavioral system and the common drive to the motoneuronal pool against force destabilization. PMID:29167637

  11. Higher-order ionospheric error at Arecibo, Millstone, and Jicamarca

    NASA Astrophysics Data System (ADS)

    Matteo, N. A.; Morton, Y. T.

    2010-12-01

    The ionosphere is a dominant source of Global Positioning System receiver range measurement error. Although dual-frequency receivers can eliminate the first-order ionospheric error, most second- and third-order errors remain in the range measurements. Higher-order ionospheric error is a function of both electron density distribution and the magnetic field vector along the GPS signal propagation path. This paper expands previous efforts by combining incoherent scatter radar (ISR) electron density measurements, the International Reference Ionosphere model, exponential decay extensions of electron densities, the International Geomagnetic Reference Field, and total electron content maps to compute higher-order error at ISRs in Arecibo, Puerto Rico; Jicamarca, Peru; and Millstone Hill, Massachusetts. Diurnal patterns, dependency on signal direction, seasonal variation, and geomagnetic activity dependency are analyzed. Higher-order error is largest at Arecibo with code phase maxima circa 7 cm for low-elevation southern signals. The maximum variation of the error over all angles of arrival is circa 8 cm.

  12. Density-based penalty parameter optimization on C-SVM.

    PubMed

    Liu, Yun; Lian, Jie; Bartolacci, Michael R; Zeng, Qing-An

    2014-01-01

    The support vector machine (SVM) is one of the most widely used approaches for data classification and regression. SVM achieves the largest distance between the positive and negative support vectors, which neglects the remote instances away from the SVM interface. In order to avoid a position change of the SVM interface as the result of a system outlier, C-SVM was implemented to decrease the influence of the system's outliers. Traditional C-SVM holds a uniform parameter C for both positive and negative instances; however, according to the different number proportions and the data distribution, positive and negative instances should be set with different weights for the penalty parameter of the error terms. Therefore, in this paper, we propose density-based penalty parameter optimization of C-SVM. The experimental results indicated that our proposed algorithm has outstanding performance with respect to both precision and recall.

  13. Supplier Short Term Load Forecasting Using Support Vector Regression and Exogenous Input

    NASA Astrophysics Data System (ADS)

    Matijaš, Marin; Vukićević, Milan; Krajcar, Slavko

    2011-09-01

    In power systems, the task of load forecasting is important for keeping equilibrium between production and consumption. With the liberalization of electricity markets, the task of load forecasting has changed because each market participant has to forecast its own load. Consumption of end-consumers is stochastic in nature. Due to competition, suppliers are not in a position to transfer their costs to end-consumers; therefore it is essential to keep the forecasting error as low as possible. Numerous papers investigate load forecasting from the perspective of the grid or production planning. We research forecasting models from the perspective of a supplier. In this paper, we investigate different combinations of exogenous input on the simulated supplier loads and show that using points of delivery as a feature for Support Vector Regression leads to lower forecasting error, while adding the customer number in different datasets does the opposite.

  14. Postlaunch calibration of spacecraft attitude instruments

    NASA Technical Reports Server (NTRS)

    Davis, W.; Hashmall, J.; Garrick, J.; Harman, R.

    1993-01-01

    The accuracy of both onboard and ground attitude determination can be significantly enhanced by calibrating spacecraft attitude instruments (sensors) after launch. Although attitude sensors are accurately calibrated before launch, the stresses of launch and the space environment inevitably cause changes in sensor parameters. During the mission, these parameters may continue to drift requiring repeated on-orbit calibrations. The goal of attitude sensor calibration is to reduce the systematic errors in the measurement models. There are two stages at which systematic errors may enter. The first occurs in the conversion of sensor output into an observation vector in the sensor frame. The second occurs in the transformation of the vector from the sensor frame to the spacecraft attitude reference frame. This paper presents postlaunch alignment and transfer function calibration of the attitude sensors for the Compton Gamma Ray Observatory (GRO), the Upper Atmosphere Research Satellite (UARS), and the Extreme Ultraviolet Explorer (EUVE).

  15. Wavefront sensing with a thin diffuser

    NASA Astrophysics Data System (ADS)

    Berto, Pascal; Rigneault, Hervé; Guillon, Marc

    2017-12-01

    We propose and implement a broadband, compact, and low-cost wavefront sensing scheme by simply placing a thin diffuser in the close vicinity of a camera. The local wavefront gradient is determined from the local translation of the speckle pattern. The translation vector map is computed thanks to a fast diffeomorphic image registration algorithm and integrated to reconstruct the wavefront profile. The simple translation of speckle grains under local wavefront tip/tilt is ensured by the so-called "memory effect" of the diffuser. Quantitative wavefront measurements are experimentally demonstrated both for the first few Zernike polynomials and for phase-imaging applications requiring high resolution. We finally provide a theoretical description of the resolution limit, which is supported experimentally.
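    The gradient-extraction step can be sketched as tile-wise cross-correlation between a reference speckle image and a distorted one; the tile size is arbitrary, and the final integration of the gradient field (e.g. by least squares) and the shift-to-angle scaling, which depends on the diffuser-camera distance, are left out.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def local_shift(a, b):
        """Shift of patch b relative to patch a via the cross-correlation peak."""
        c = fftconvolve(b, a[::-1, ::-1], mode="same")
        dy, dx = np.unravel_index(np.argmax(c), c.shape)
        return dy - a.shape[0] // 2, dx - a.shape[1] // 2

    def wavefront_gradients(ref, img, win=32):
        """Tile the two speckle images and read the per-tile translation as the
        local wavefront tip/tilt (valid within the memory-effect regime)."""
        gy, gx = [], []
        for y in range(0, ref.shape[0] - win, win):
            row_y, row_x = [], []
            for x in range(0, ref.shape[1] - win, win):
                dy, dx = local_shift(ref[y:y+win, x:x+win], img[y:y+win, x:x+win])
                row_y.append(dy); row_x.append(dx)
            gy.append(row_y); gx.append(row_x)
        return np.array(gy), np.array(gx)
    ```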

  16. Ensemble Data Assimilation Without Ensembles: Methodology and Application to Ocean Data Assimilation

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume

    2013-01-01

    Two methods to estimate background error covariances for data assimilation are introduced. While both share properties with the ensemble Kalman filter (EnKF), they differ from it in that they do not require the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The first method is referred to as SAFE (Space Adaptive Forecast error Estimation) because it estimates error covariances from the spatial distribution of model variables within a single state vector. It can thus be thought of as sampling an ensemble in space. The second method, named FAST (Flow Adaptive error Statistics from a Time series), constructs an ensemble sampled from a moving window along a model trajectory. The underlying assumption in these methods is that forecast errors in data assimilation are primarily phase errors in space and/or time.
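    A minimal FAST-flavoured sketch: treat the states inside a moving window along a single trajectory as an ensemble and take their sample covariance. The window length and the toy trajectory are assumptions; SAFE would instead sample across space within one state vector.

    ```python
    import numpy as np

    def fast_covariance(trajectory, window):
        """FAST-like background error covariance: the most recent `window`
        states of a single trajectory play the role of an ensemble."""
        states = trajectory[-window:]              # shape (window, n)
        anomalies = states - states.mean(axis=0)
        return anomalies.T @ anomalies / (window - 1)

    traj = np.cumsum(np.random.randn(500, 3), axis=0)   # toy 3-variable model run
    B = fast_covariance(traj, window=50)
    ```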

  17. Word-level recognition of multifont Arabic text using a feature vector matching approach

    NASA Astrophysics Data System (ADS)

    Erlandson, Erik J.; Trenkle, John M.; Vogt, Robert C., III

    1996-03-01

    Many text recognition systems recognize text imagery at the character level and assemble words from the recognized characters. An alternative approach is to recognize text imagery at the word level, without analyzing individual characters. This approach avoids the problem of individual character segmentation, and can overcome local errors in character recognition. A word-level recognition system for machine-printed Arabic text has been implemented. Arabic is a script language, and is therefore difficult to segment at the character level. Character segmentation has been avoided by recognizing text imagery of complete words. The Arabic recognition system computes a vector of image-morphological features on a query word image. This vector is matched against a precomputed database of vectors from a lexicon of Arabic words. Vectors from the database with the highest match score are returned as hypotheses for the unknown image. Several feature vectors may be stored for each word in the database. Database feature vectors generated using multiple fonts and noise models allow the system to be tuned to its input stream. Used in conjunction with database pruning techniques, this Arabic recognition system has obtained promising word recognition rates on low-quality multifont text imagery.

  18. Radiologic-Pathologic Analysis of Contrast-enhanced and Diffusion-weighted MR Imaging in Patients with HCC after TACE: Diagnostic Accuracy of 3D Quantitative Image Analysis

    PubMed Central

    Chapiro, Julius; Wood, Laura D.; Lin, MingDe; Duran, Rafael; Cornish, Toby; Lesage, David; Charu, Vivek; Schernthaner, Rüdiger; Wang, Zhijun; Tacher, Vania; Savic, Lynn Jeanette; Kamel, Ihab R.

    2014-01-01

    Purpose To evaluate the diagnostic performance of three-dimensional (3D) quantitative enhancement-based and diffusion-weighted volumetric magnetic resonance (MR) imaging assessment of hepatocellular carcinoma (HCC) lesions in determining the extent of pathologic tumor necrosis after transarterial chemoembolization (TACE). Materials and Methods This institutional review board–approved retrospective study included 17 patients with HCC who underwent TACE before surgery. Semiautomatic 3D volumetric segmentation of target lesions was performed at the last MR examination before orthotopic liver transplantation or surgical resection. The amount of necrotic tumor tissue on contrast material–enhanced arterial phase MR images and the amount of diffusion-restricted tumor tissue on apparent diffusion coefficient (ADC) maps were expressed as a percentage of the total tumor volume. Visual assessment of the extent of tumor necrosis and tumor response according to European Association for the Study of the Liver (EASL) criteria was performed. Pathologic tumor necrosis was quantified by using slide-by-slide segmentation. Correlation analysis was performed to evaluate the predictive values of the radiologic techniques. Results At histopathologic examination, the mean percentage of tumor necrosis was 70% (range, 10%–100%). Both 3D quantitative techniques demonstrated a strong correlation with tumor necrosis at pathologic examination (R² = 0.9657 and R² = 0.9662 for quantitative EASL and quantitative ADC, respectively) and a strong intermethod agreement (R² = 0.9585). Both methods showed a significantly lower discrepancy with pathologically measured necrosis (residual standard error [RSE] = 6.38 and 6.33 for quantitative EASL and quantitative ADC, respectively) when compared with non-3D techniques (RSE = 12.18 for visual assessment). Conclusion This radiologic-pathologic correlation study demonstrates the diagnostic accuracy of 3D quantitative MR imaging techniques in identifying pathologically measured tumor necrosis in HCC lesions treated with TACE. © RSNA, 2014 Online supplemental material is available for this article. PMID:25028783

  19. On the implementation of an accurate and efficient solver for convection-diffusion equations

    NASA Astrophysics Data System (ADS)

    Wu, Chin-Tien

    In this dissertation, we examine several different aspects of computing the numerical solution of the convection-diffusion equation. The solution of this equation often exhibits sharp gradients due to Dirichlet outflow boundaries or discontinuities in boundary conditions. Because of the singularly perturbed nature of the equation, numerical solutions often have severe oscillations when grid sizes are not small enough to resolve sharp gradients. To overcome such difficulties, the streamline diffusion discretization method can be used to obtain an accurate approximate solution in regions where the solution is smooth. To increase accuracy of the solution in the regions containing layers, adaptive mesh refinement and mesh movement based on a posteriori error estimations can be employed. An error-adapted mesh refinement strategy based on a posteriori error estimations is also proposed to resolve layers. For solving the sparse linear systems that arise from discretization, geometric multigrid (MG) and algebraic multigrid (AMG) are compared. In addition, both methods are also used as preconditioners for Krylov subspace methods. We derive some convergence results for MG with line Gauss-Seidel smoothers and bilinear interpolation. Finally, while considering adaptive mesh refinement as an integral part of the solution process, it is natural to set a stopping tolerance for the iterative linear solvers on each mesh stage so that the difference between the approximate solution obtained from iterative methods and the finite element solution is bounded by an a posteriori error bound. Here, we present two stopping criteria. The first is based on a residual-type a posteriori error estimator developed by Verfurth. The second is based on an a posteriori error estimator, using local solutions, developed by Kay and Silvester. Our numerical results show that the refined mesh obtained from the iterative solution satisfying the second criterion is similar to the refined mesh obtained from the finite element solution.

  20. The NEUF-DIX space project - Non-EquilibriUm Fluctuations during DIffusion in compleX liquids.

    PubMed

    Baaske, Philipp; Bataller, Henri; Braibanti, Marco; Carpineti, Marina; Cerbino, Roberto; Croccolo, Fabrizio; Donev, Aleksandar; Köhler, Werner; Ortiz de Zárate, José M; Vailati, Alberto

    2016-12-01

    Diffusion and thermal diffusion processes in a liquid mixture are accompanied by long-range non-equilibrium fluctuations, whose amplitude is orders of magnitude larger than that of equilibrium fluctuations. The mean-square amplitude of the non-equilibrium fluctuations presents a scale-free power-law behavior q⁻⁴ as a function of the wave vector q, but the divergence of the amplitude of the fluctuations at small wave vectors is prevented by the presence of gravity. In microgravity conditions the non-equilibrium fluctuations are fully developed and span all the available length scales, up to the macroscopic size of the system in the direction parallel to the applied gradient. Available theoretical models are based on linearized hydrodynamics and provide an adequate description of the statics and dynamics of the fluctuations in the presence of small temperature/concentration gradients and under stationary or quasi-stationary conditions. We describe a project aimed at the investigation of Non-EquilibriUm Fluctuations during DIffusion in compleX liquids (NEUF-DIX). The focus of the project is the investigation, in microgravity conditions, of non-equilibrium fluctuations in complex liquids, tackling several challenging problems that have emerged in recent years: the theoretical prediction of Casimir-like forces induced by non-equilibrium fluctuations; the understanding of non-equilibrium fluctuations in multi-component mixtures including a polymer, both in relation to the transport coefficients and to their behavior close to a glass transition; the understanding of non-equilibrium fluctuations in concentrated colloidal suspensions, a problem closely related to the detection of Casimir forces; and the investigation of the development of fluctuations during transient diffusion. We envision paralleling these experiments with state-of-the-art multi-scale simulations.
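    The gravitational quenching is commonly summarized by a rolloff of the q⁻⁴ spectrum below a cutoff wave vector; the saturating form below is the standard linearized-hydrodynamics result, stated here as an assumption since the abstract quotes no formula.

    ```python
    import numpy as np

    def ne_fluctuation_amplitude(q, q_ro, s0=1.0):
        """Static mean-square amplitude of non-equilibrium fluctuations:
        ~ q**-4 at large q, saturating below the gravitational rolloff wave
        vector q_ro (assumed standard form)."""
        return s0 / (q**4 + q_ro**4)

    q = np.logspace(-1, 2, 200)       # wave vectors, arbitrary inverse-length units
    S = ne_fluctuation_amplitude(q, q_ro=1.0)
    ```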

  1. Magnitude of pseudopotential localization errors in fixed node diffusion quantum Monte Carlo

    DOE PAGES

    Kent, Paul R.; Krogel, Jaron T.

    2017-06-22

    Growth in computational resources has led to the application of real space diffusion quantum Monte Carlo to increasingly heavy elements. Although generally assumed to be small, we find that when using standard techniques, the pseudopotential localization error can be large, on the order of an electron volt for an isolated cerium atom. We formally show that the localization error can be reduced to zero with improvements to the Jastrow factor alone, and we define a metric of Jastrow sensitivity that may be useful in the design of pseudopotentials. We employ an extrapolation scheme to extract the bare fixed node energy and estimate the localization error in both the locality approximation and the T-moves schemes for the Ce atom in charge states 3+/4+. The locality approximation exhibits the lowest Jastrow sensitivity and generally smaller localization errors than T-moves, although the locality approximation energy approaches the localization free limit from above/below for the 3+/4+ charge state. We find that energy minimized Jastrow factors including three-body electron-electron-ion terms are the most effective at reducing the localization error for both the locality approximation and T-moves for the case of the Ce atom. Less complex or variance minimized Jastrows are generally less effective. Finally, our results suggest that further improvements to Jastrow factors and trial wavefunction forms may be needed to reduce localization errors to chemical accuracy when medium core pseudopotentials are applied to heavy elements such as Ce.

  2. A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations.

    PubMed

    Qin, Fangjun; Chang, Lubin; Jiang, Sai; Zha, Feng

    2018-05-03

    In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference from Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference from the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms.

  3. Systemic errors in quantitative polymerase chain reaction titration of self-complementary adeno-associated viral vectors and improved alternative methods.

    PubMed

    Fagone, Paolo; Wright, J Fraser; Nathwani, Amit C; Nienhuis, Arthur W; Davidoff, Andrew M; Gray, John T

    2012-02-01

    Self-complementary AAV (scAAV) vector genomes contain a covalently closed hairpin derived from a mutated inverted terminal repeat that connects the two monomer single-stranded genomes into a head-to-head or tail-to-tail dimer. We found that during quantitative PCR (qPCR) this structure inhibits the amplification of proximal amplicons and causes the systemic underreporting of copy number by as much as 10-fold. We show that cleavage of scAAV vector genomes with restriction endonuclease to liberate amplicons from the covalently closed terminal hairpin restores quantitative amplification, and we implement this procedure in a simple, modified qPCR titration method for scAAV vectors. In addition, we developed and present an AAV genome titration procedure based on gel electrophoresis that requires minimal sample processing and has low interassay variability, and as such is well suited for the rigorous quality control demands of clinical vector production facilities.

  4. A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations

    PubMed Central

    Qin, Fangjun; Jiang, Sai; Zha, Feng

    2018-01-01

    In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference from Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference from the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms. PMID:29751538

  5. Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors

    PubMed Central

    Kassabian, Nazelie; Presti, Letizia Lo; Rispoli, Francesco

    2014-01-01

    Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lowering railway track equipment and maintenance costs, a priority to sustain the investments for modernizing local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that, for sufficiently large ratios of correlation distance to Reference Station (RS) separation distance, the LMMSE brings a considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold. PMID:24922454
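
    A small numerical sketch of the sensitivity experiment, with invented station geometry and noise levels: true DCs are drawn from a Gauss-Markov (exponential) spatial covariance, and the LMMSE estimator is run with both a matched and a badly mismatched correlation distance.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 200, 40)                    # reference-station positions (km)
    d = np.abs(x[:, None] - x[None, :])            # pairwise distances

    def gm_cov(dist, sigma2, d_corr):
        return sigma2 * np.exp(-dist / d_corr)     # Gauss-Markov (exponential) covariance

    C_true = gm_cov(d, sigma2=1.0, d_corr=50.0)    # true spatial correlation of the DCs
    dc = rng.multivariate_normal(np.zeros(len(x)), C_true)   # true DC field
    y = dc + rng.normal(0, 0.5, len(x))            # noisy pseudorange DC measurements

    for d_model in (50.0, 10.0):                   # matched vs. mismatched model
        C = gm_cov(d, 1.0, d_model)
        dc_hat = C @ np.linalg.solve(C + 0.25 * np.eye(len(x)), y)   # LMMSE estimate
        rmse = np.sqrt(np.mean((dc_hat - dc) ** 2))
        print(f"model d_corr={d_model:5.1f} km  RMSE={rmse:.3f}")
    ```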

  6. Surface-roughness considerations for atmospheric correction of ocean color sensors. I: The Rayleigh-scattering component.

    PubMed

    Gordon, H R; Wang, M

    1992-07-20

    The first step in the coastal zone color scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering contribution, L_r, to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm L_r is computed by assuming that the ocean surface is flat. Computations of the radiance leaving a Rayleigh-scattering atmosphere overlying a rough Fresnel-reflecting ocean are presented to assess the radiance error caused by the flat-ocean assumption. The surface-roughness model is described in detail for both scalar and vector (including polarization) radiative transfer theory. The computations utilizing the vector theory show that the magnitude of the error significantly depends on the assumptions made in regard to the shadowing of one wave by another. In the case of the coastal zone color scanner bands, we show that for moderate solar zenith angles the error is generally below the 1 digital count level, except near the edge of the scan for high wind speeds. For larger solar zenith angles, the error is generally larger and can exceed 1 digital count at some wavelengths over the entire scan, even for light winds. The error in L_r caused by ignoring surface roughness is shown to be the same order of magnitude as that caused by uncertainties of ±15 mb in the surface atmospheric pressure or of ±50 Dobson units in the ozone concentration. For future sensors, which will have greater radiometric sensitivity, the error caused by the flat-ocean assumption in the computation of L_r could be as much as an order of magnitude larger than the noise-equivalent spectral radiance in certain situations.

  7. Autonomous frequency domain identification: Theory and experiment

    NASA Technical Reports Server (NTRS)

    Yam, Yeung; Bayard, D. S.; Hadaegh, F. Y.; Mettler, E.; Milman, M. H.; Scheid, R. E.

    1989-01-01

    The analysis, design, and on-orbit tuning of robust controllers require more information about the plant than simply a nominal estimate of the plant transfer function. Information is also required concerning the uncertainty in the nominal estimate, or more generally, the identification of a model set within which the true plant is known to lie. The identification methodology that was developed and experimentally demonstrated makes use of a simple but useful characterization of the model uncertainty based on the output error. This is a characterization of the additive uncertainty in the plant model, which has found considerable use in many robust control analysis and synthesis techniques. The identification process is initiated by a stochastic input u which is applied to the plant p, giving rise to the output y. The spectral estimate ĥ = P_uy/P_uu is used as an estimate of p, and the model order is estimated using the product moment matrix (PMM) method. A parametric model p̂ is then determined by curve fitting the spectral estimate to a rational transfer function. The additive uncertainty δ_m = p − p̂ is then estimated by the cross-spectral estimate δ̂ = P_ue/P_uu, where e = y − ŷ is the output error and ŷ = p̂u is the computed output of the parametric model subjected to the actual input u. The experimental results demonstrate that the curve-fitting algorithm produces the reduced-order plant model which minimizes the additive uncertainty. The nominal transfer function estimate p̂ and the estimate δ̂ of the additive uncertainty δ_m are subsequently available to be used for optimization of robust controller performance and stability.
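
    The cross-spectral estimates named above translate directly into code. The sketch below assumes an invented discrete-time "true" plant and a deliberately low-order nominal model p̂, and estimates the additive uncertainty from the output error via Welch-type spectral estimators.

    ```python
    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(1)
    fs, n = 100.0, 200_000
    u = rng.normal(size=n)                                   # stochastic input
    plant = signal.dlti([0.05, 0.04], [1, -1.6, 0.7], dt=1/fs)   # 'true' plant p
    _, y = signal.dlsim(plant, u)
    y = y.ravel()

    model = signal.dlti([0.05], [1, -0.8], dt=1/fs)          # low-order nominal model p_hat
    _, y_hat = signal.dlsim(model, u)
    e = y - y_hat.ravel()                                    # output error e = y - p_hat*u

    f, Puu = signal.welch(u, fs=fs, nperseg=4096)
    _, Puy = signal.csd(u, y, fs=fs, nperseg=4096)
    _, Pue = signal.csd(u, e, fs=fs, nperseg=4096)
    h_hat = Puy / Puu                                        # spectral estimate of the plant
    delta = Pue / Puu                                        # additive-uncertainty estimate
    print(np.abs(delta[:5]))                                 # |delta| vs frequency
    ```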

  8. VERTICAL DIFFUSION IN SMALL STRATIFIED LAKES: DATA AND ERROR ANALYSIS

    EPA Science Inventory

    Water temperature profiles were measured at 2-min intervals in a stratified temperate lake with a surface area of 0.06 km2 and a maximum depth of 10 m from May 7 to August 9, 1989. The data were used to calculate the vertical eddy diffusion coefficient Kz in the hypolimnion. The dep...

  9. New explicit equations for the accurate calculation of the growth and evaporation of hydrometeors by the diffusion of water vapor

    NASA Technical Reports Server (NTRS)

    Srivastava, R. C.; Coen, J. L.

    1992-01-01

    The traditional explicit growth equation has been widely used to calculate the growth and evaporation of hydrometeors by the diffusion of water vapor. This paper reexamines the assumptions underlying the traditional equation and shows that large errors (10-30 percent in some cases) result if it is used carelessly. More accurate explicit equations are derived by approximating the saturation vapor-density difference as a quadratic rather than a linear function of the temperature difference between the particle and ambient air. These new equations, which reduce the error to less than a few percent, merit inclusion in a broad range of atmospheric models.
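
    A hedged numerical illustration of the central point, assuming Bolton's formula for saturation vapor pressure rather than the paper's exact equations: the error of a linear approximation to the saturation vapor-density difference grows quickly with the particle-air temperature difference, while a quadratic approximation stays within a few percent.

    ```python
    import numpy as np

    Rv = 461.5                                     # J kg^-1 K^-1, water vapor gas constant

    def rho_s(T):                                  # saturation vapor density (kg m^-3)
        es = 611.2 * np.exp(17.67 * (T - 273.15) / (T - 29.65))   # Bolton (1980), Pa
        return es / (Rv * T)

    Ta = 283.15                                    # ambient temperature (K)
    h = 0.01                                       # step for numerical derivatives
    d1 = (rho_s(Ta + h) - rho_s(Ta - h)) / (2 * h)
    d2 = (rho_s(Ta + h) - 2 * rho_s(Ta) + rho_s(Ta - h)) / h**2

    for dT in (0.5, 2.0, 5.0):                     # particle-air temperature difference
        true = rho_s(Ta + dT) - rho_s(Ta)
        lin, quad = d1 * dT, d1 * dT + 0.5 * d2 * dT**2
        print(f"dT={dT:4.1f} K  linear err={100*(lin/true-1):6.2f}%  "
              f"quadratic err={100*(quad/true-1):6.2f}%")
    ```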

  10. Disk Diffusion Testing Using Candida sp. Colonies Taken Directly from CHROMagar Candida Medium May Decrease Time Required To Obtain Results

    PubMed Central

    Klevay, Michael; Ebinger, Alex; Diekema, Daniel; Messer, Shawn; Hollis, Richard; Pfaller, Michael

    2005-01-01

    We compared results of disk diffusion antifungal susceptibility testing from Candida sp. strains passaged on CHROMagar and on potato dextrose agar. The overall categorical agreements for fluconazole and voriconazole disk testing were 95% and 98% with 0% and 0.5% very major errors, respectively. Disk diffusion testing by the CLSI (formerly NCCLS) M44-A method can be performed accurately by taking inocula directly from CHROMagar. PMID:16000489

  11. A ratioing radiometer for use with a solar diffuser. [to monitor in-flight calibration of satellite sensors

    NASA Technical Reports Server (NTRS)

    Palmer, James M.; Slater, Philip N.

    1991-01-01

    The use of an on-board solar diffuser has been proposed to monitor the in-flight calibration of satellite sensors. This paper presents the preliminary specifications and design for a ratioing radiometer, to be used to determine the change in radiance of the solar diffuser. The issues involved in spectral channel selection are discussed and the effects of stray light are presented. An error analysis showing the benefit of the ratioing radiometer is included.

  12. Corruption of genomic databases with anomalous sequence.

    PubMed

    Lamperti, E D; Kittelberger, J M; Smith, T F; Villa-Komaroff, L

    1992-06-11

    We describe evidence that DNA sequences from vectors used for cloning and sequencing have been incorporated accidentally into eukaryotic entries in the GenBank database. These incorporations were not restricted to one type of vector or to a single mechanism. Many minor instances may have been the result of simple editing errors, but some entries contained large blocks of vector sequence that had been incorporated by contamination or other accidents during cloning. Some cases involved unusual rearrangements and areas of vector distant from the normal insertion sites. Matches to vector were found in 0.23% of 20,000 sequences analyzed in GenBank Release 63. Although the possibility of anomalous sequence incorporation has been recognized since the inception of GenBank and should be easy to avoid, recent evidence suggests that this problem is increasing more quickly than the database itself. The presence of anomalous sequence may have serious consequences for the interpretation and use of database entries, and will have an impact on issues of database management. The incorporated vector fragments described here may also be useful for a crude estimate of the fidelity of sequence information in the database. In alignments with well-defined ends, the matching sequences showed 96.8% identity to vector; when poorer matches with arbitrary limits were included, the aggregate identity to vector sequence was 94.8%.

  13. Direct model-based predictive control scheme without cost function for voltage source inverters with reduced common-mode voltage

    NASA Astrophysics Data System (ADS)

    Kim, Jae-Chang; Moon, Sung-Ki; Kwak, Sangshin

    2018-04-01

    This paper presents a direct model-based predictive control scheme for voltage source inverters (VSIs) with reduced common-mode voltages (CMVs). The developed method directly finds optimal vectors without repetitive calculation of a cost function. To adjust the output currents while keeping the CMVs in the range of -Vdc/6 to +Vdc/6, the developed method uses only the non-zero voltage vectors as its finite control set, excluding the zero voltage vectors, which produce CMVs of ±Vdc/2 in the VSI. In a model-based predictive control (MPC), not using zero voltage vectors increases the output current ripples and the current errors. To alleviate these problems, the developed method uses two non-zero voltage vectors in one sampling step. In addition, the voltage vectors scheduled to be used are directly selected at every sampling step once the developed method calculates the future reference voltage vector, saving the effort of repeatedly evaluating a cost function. The two non-zero voltage vectors are optimally allocated to make the output current approach the reference current as closely as possible. Thus, low CMV, rapid current-following capability and sufficient output current ripple performance are attained by the developed method. The results of a simulation and an experiment verify the effectiveness of the developed method.
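
    For contrast with the paper's direct, cost-function-free selection, the sketch below shows the conventional enumeration-style finite-control-set MPC step with the zero vectors simply excluded, using an invented RL load model in the stationary alpha-beta frame; all parameter values are illustrative.

    ```python
    import numpy as np

    Vdc, R, L, Ts = 400.0, 1.0, 10e-3, 50e-6
    # six active voltage vectors of a two-level VSI in the complex alpha-beta plane
    V = (2 / 3) * Vdc * np.exp(1j * np.pi / 3 * np.arange(6))

    def best_active_vector(i_k, i_ref, e_back):
        """Predict i(k+1) for each active vector; return the index minimizing |error|."""
        i_next = (1 - R * Ts / L) * i_k + (Ts / L) * (V - e_back)   # Euler-discretized RL model
        return int(np.argmin(np.abs(i_ref - i_next)))

    idx = best_active_vector(i_k=5 + 0j, i_ref=6 + 2j, e_back=100 + 0j)
    print("apply active vector", idx + 1)
    ```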

  14. Computerized tongue image segmentation via the double geo-vector flow

    PubMed Central

    2014-01-01

    Background: Visual inspection for tongue analysis is a diagnostic method in traditional Chinese medicine (TCM). Owing to the variations in tongue features, such as color, texture, coating, and shape, it is difficult to precisely extract the tongue region in images. This study aims to quantitatively evaluate tongue diagnosis via automatic tongue segmentation. Methods: Experiments were conducted using a clinical image dataset provided by the Laboratory of Traditional Medical Syndromes, Shanghai University of TCM. First, a clinical tongue image was refined by a saliency window. Second, we initialized the tongue area as the upper binary part and lower level set matrix. Third, a double geo-vector flow (DGF) was proposed to detect the tongue edge and segment the tongue region in the image, such that the geodesic flow was evaluated in the lower part, and the geo-gradient vector flow was evaluated in the upper part. Results: The performance of the DGF was evaluated using 100 images. The DGF exhibited better results compared with other representative studies, with its true-positive volume fraction reaching 98.5%, its false-positive volume fraction being 1.51%, and its false-negative volume fraction being 1.42%. The errors between the proposed automatic segmentation results and manual contours were 0.29 and 1.43% in terms of the standard boundary error metrics of Hausdorff distance and mean distance, respectively. Conclusions: By analyzing the time complexity of the DGF and evaluating its performance via standard boundary and area error metrics, we have shown both efficiency and effectiveness of the DGF for automatic tongue image segmentation. PMID:24507094

  15. Computerized tongue image segmentation via the double geo-vector flow.

    PubMed

    Shi, Miao-Jing; Li, Guo-Zheng; Li, Fu-Feng; Xu, Chao

    2014-02-08

    Visual inspection for tongue analysis is a diagnostic method in traditional Chinese medicine (TCM). Owing to the variations in tongue features, such as color, texture, coating, and shape, it is difficult to precisely extract the tongue region in images. This study aims to quantitatively evaluate tongue diagnosis via automatic tongue segmentation. Experiments were conducted using a clinical image dataset provided by the Laboratory of Traditional Medical Syndromes, Shanghai University of TCM. First, a clinical tongue image was refined by a saliency window. Second, we initialized the tongue area as the upper binary part and lower level set matrix. Third, a double geo-vector flow (DGF) was proposed to detect the tongue edge and segment the tongue region in the image, such that the geodesic flow was evaluated in the lower part, and the geo-gradient vector flow was evaluated in the upper part. The performance of the DGF was evaluated using 100 images. The DGF exhibited better results compared with other representative studies, with its true-positive volume fraction reaching 98.5%, its false-positive volume fraction being 1.51%, and its false-negative volume fraction being 1.42%. The errors between the proposed automatic segmentation results and manual contours were 0.29 and 1.43% in terms of the standard boundary error metrics of Hausdorff distance and mean distance, respectively. By analyzing the time complexity of the DGF and evaluating its performance via standard boundary and area error metrics, we have shown both efficiency and effectiveness of the DGF for automatic tongue image segmentation.

  16. CFD application to subsonic inlet airframe integration. [computational fluid dynamics (CFD)

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.

    1988-01-01

    The fluid dynamics of curved diffuser duct flows in military aircraft is discussed. Three-dimensional parabolized Navier-Stokes analysis and experimental techniques are reviewed. Flow measurements and pressure distributions are shown. Velocity vectors and the effects of vortex generators are considered.

  17. Long time, large scale properties of the noisy driven-diffusion equation

    NASA Astrophysics Data System (ADS)

    Prakash, J. Ravi; Bouchaud, J. P.; Edwards, S. F.

    1994-07-01

    We study the driven-diffusion equation, describing the dynamics of density fluctuations δρ(x, t) in powders or traffic flows. We have performed quite detailed numerical simulations of this equation in one dimension, focusing in particular on the scaling behavior of the correlation function ⟨δρ(x, t) δρ(0, 0)⟩. One of our motivations was to assess the validity of various theoretical approaches, such as the Renormalization Group and different self-consistent truncation schemes, for these nonlinear dynamical equations. Although all of them are seen to predict the scaling exponents correctly, only one of them (in which the non-exponential nature of the relaxation is taken into account) is able to reproduce satisfactorily the value of the numerical prefactors. Several other interesting issues, such as the noise spectrum of the output current, or the statistics of the distance between jams (showing a transition between a 'laminar' regime for small noise and a 'jammed' regime for higher noise), are also investigated.

  18. Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

    2012-01-01

    An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.

  19. Role of color memory in successive color constancy.

    PubMed

    Ling, Yazhu; Hurlbert, Anya

    2008-06-01

    We investigate color constancy for real 2D paper samples using a successive matching paradigm in which the observer memorizes a reference surface color under neutral illumination and after a temporal interval selects a matching test surface under the same or different illumination. We find significant effects of the illumination, reference surface, and their interaction on the matching error. We characterize the matching error in the absence of illumination change as the "pure color memory shift" and introduce a new index for successive color constancy that compares this shift against the matching error under changing illumination. The index also incorporates the vector direction of the matching errors in chromaticity space, unlike the traditional constancy index. With this index, we find that color constancy is nearly perfect.

  20. New-Sum: A Novel Online ABFT Scheme For General Iterative Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao, Dingwen; Song, Shuaiwen; Krishnamoorthy, Sriram

    Emerging high-performance computing platforms, with large component counts and lower power margins, are anticipated to be more susceptible to soft errors in both logic circuits and memory subsystems. We present an online algorithm-based fault tolerance (ABFT) approach to efficiently detect and recover soft errors for general iterative methods. We design a novel checksum-based encoding scheme for matrix-vector multiplication that is resilient to both arithmetic and memory errors. Our design decouples the checksum updating process from the actual computation, and allows adaptive checksum overhead control. Building on this new encoding mechanism, we propose two online ABFT designs that can effectively recover from errors when combined with a checkpoint/rollback scheme.
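
    A minimal sketch of the classic checksum encoding that underlies ABFT for matrix-vector multiplication (the paper's scheme additionally decouples the checksum update and adapts its overhead): appending a column-checksum row to A lets a corrupted result be detected with one extra comparison.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.normal(size=(6, 6))
    x = rng.normal(size=6)

    A_enc = np.vstack([A, A.sum(axis=0)])    # append checksum row: ones^T A
    y_enc = A_enc @ x                        # one extra dot product per matvec
    y, check = y_enc[:-1], y_enc[-1]

    y[3] += 1e-3                             # inject a soft error into the result
    detected = not np.isclose(y.sum(), check, rtol=1e-10, atol=1e-12)
    print("soft error detected:", detected)
    ```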

  1. Study of compressible flow through a rectangular-to-semiannular transition duct

    NASA Technical Reports Server (NTRS)

    Foster, Jeffry; Okiishi, Theodore H.; Wendt, Bruce J.; Reichert, Bruce A.

    1995-01-01

    Detailed flow field measurements are presented for compressible flow through a diffusing rectangular-to-semiannular transition duct. Comparisons are made with published computational results for flow through the duct. Three-dimensional velocity vectors and total pressures were measured at the exit plane of the diffuser model. The inlet flow was also measured. These measurements were made using calibrated five-hole probes. Surface oil flow visualization and surface static pressure data were also taken. The study was conducted with an inlet Mach number of 0.786. The diffuser Reynolds number, based on the inlet centerline velocity and the exit diameter of the diffuser, was 3,200,000. Comparisons of the measured data with previously published computational results are made. Data demonstrating the ability of vortex generators to reduce flow separation and circumferential distortion are also presented.

  2. Robust Vision-Based Pose Estimation Algorithm for AN Uav with Known Gravity Vector

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2016-06-01

    Accurate estimation of camera external orientation with respect to a known object is one of the central problems in photogrammetry and computer vision. In recent years this problem has been gaining increasing attention in the field of UAV autonomous flight. Such applications require real-time performance and robustness of the external orientation estimation algorithm. The accuracy of the solution is strongly dependent on the number of reference points visible in the given image. The problem only has an analytical solution if 3 or more reference points are visible. However, in limited visibility conditions it is often necessary to perform external orientation with only 2 visible reference points. In such cases a solution can be found if the gravity vector direction in the camera coordinate system is known. A number of algorithms for external orientation estimation for the case of 2 known reference points and a gravity vector have been developed to date. Most of these algorithms provide an analytical solution in the form of a polynomial equation that is subject to large errors in the case of complex reference point configurations. This paper is focused on the development of a new computationally effective and robust algorithm for external orientation based on the positions of 2 known reference points and a gravity vector. The algorithm's implementation for guidance of a Parrot AR.Drone 2.0 micro-UAV is discussed. The experimental evaluation of the algorithm proved its computational efficiency and robustness against errors in reference point positions and complex configurations.

  3. A map overlay error model based on boundary geometry

    USGS Publications Warehouse

    Gaeuman, D.; Symanzik, J.; Schmidt, J.C.

    2005-01-01

    An error model for quantifying the magnitudes and variability of errors generated in the areas of polygons during spatial overlay of vector geographic information system layers is presented. Numerical simulation of polygon boundary displacements was used to propagate coordinate errors to spatial overlays. The model departs from most previous error models in that it incorporates spatial dependence of coordinate errors at the scale of the boundary segment. It can be readily adapted to match the scale of error-boundary interactions responsible for error generation on a given overlay. The area of error generated by overlay depends on the sinuosity of polygon boundaries, as well as the magnitude of the coordinate errors on the input layers. Asymmetry in boundary shape has relatively little effect on error generation. Overlay errors are affected by real differences in boundary positions on the input layers, as well as errors in the boundary positions. Real differences between input layers tend to compensate for much of the error generated by coordinate errors. Thus, the area of change measured on an overlay layer produced by the XOR overlay operation will be more accurate if the area of real change depicted on the overlay is large. The model presented here considers these interactions, making it especially useful for estimating errors in studies of landscape change over time. © 2005 The Ohio State University.
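
    A crude Monte Carlo sketch in the spirit of the model, using shapely, with all parameters invented: the vertices of two nominally identical polygon layers are perturbed with spatially smoothed noise, and the XOR overlay area, which should be zero, measures the error generated.

    ```python
    import numpy as np
    from shapely.geometry import Polygon

    rng = np.random.default_rng(9)
    theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
    base = np.column_stack([np.cos(theta), np.sin(theta)])     # smooth circular boundary

    def perturbed(scale=0.01, corr=5):
        noise = rng.normal(0, scale, size=base.shape)
        kern = np.ones(corr) / corr                # moving average ~ spatial correlation
        noise = np.apply_along_axis(lambda v: np.convolve(v, kern, mode='same'), 0, noise)
        return Polygon(base + noise)

    # XOR overlay of two independently digitized copies of the same boundary
    areas = [perturbed().symmetric_difference(perturbed()).area for _ in range(200)]
    print(f"mean XOR error area: {np.mean(areas):.4f} (true change area: 0)")
    ```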

  4. Forecasting longitudinal changes in oropharyngeal tumor morphology throughout the course of head and neck radiation therapy

    PubMed Central

    Yock, Adam D.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Kudchadker, Rajat J.; Court, Laurence E.

    2014-01-01

    Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design. PMID:25086518
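
    A toy sketch of the three model forms on a single invented feature component (real GTV descriptors would replace the toy series, and the population slope is assumed known): the static model carries the last adjustment forward, the mean model adds a population-average trend, and the linear model extrapolates a patient-specific fit.

    ```python
    import numpy as np

    t = np.arange(33)                               # treatment time points
    truth = 10.0 - 0.12 * t + 0.3 * np.sin(t / 4)   # one toy feature component
    adjust_at = 10                                  # a single mid-course adjustment point

    static = np.where(t < adjust_at, truth[0], truth[adjust_at])
    pop_slope = -0.10                               # population-mean change (assumed known)
    mean_model = np.where(t < adjust_at, truth[0] + pop_slope * t,
                          truth[adjust_at] + pop_slope * (t - adjust_at))
    coef = np.polyfit(t[:adjust_at], truth[:adjust_at], 1)   # patient-specific trend
    linear = np.where(t < adjust_at, truth[0], np.polyval(coef, t))

    for name, pred in [("static", static), ("mean", mean_model), ("linear", linear)]:
        print(f"{name:6s} median |error| = {np.median(np.abs(pred - truth)):.3f}")
    ```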

  5. Forecasting longitudinal changes in oropharyngeal tumor morphology throughout the course of head and neck radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind

    2014-08-15

    Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design.

  6. TrackArt: the user friendly interface for single molecule tracking data analysis and simulation applied to complex diffusion in mica supported lipid bilayers.

    PubMed

    Matysik, Artur; Kraut, Rachel S

    2014-05-01

    Single molecule tracking (SMT) analysis of fluorescently tagged lipid and protein probes is an attractive alternative to ensemble averaged methods such as fluorescence correlation spectroscopy (FCS) or fluorescence recovery after photobleaching (FRAP) for measuring diffusion in artificial and plasma membranes. The meaningful estimation of diffusion coefficients and their errors is however not straightforward, and is heavily dependent on sample type, acquisition method, and equipment used. Many approaches require advanced computing and programming skills for their implementation. Here we present TrackArt software, an accessible graphic interface for simulation and complex analysis of multiple particle paths. Imported trajectories can be filtered to eliminate spurious or corrupted tracks, and are then analyzed using several previously described methodologies, to yield single or multiple diffusion coefficients, their population fractions, and estimated errors. We use TrackArt to analyze the single-molecule diffusion behavior of a sphingolipid analog SM-Atto647N, in mica supported DOPC (1,2-dioleoyl-sn-glycero-3-phosphocholine) bilayers. Fitting with a two-component diffusion model confirms the existence of two separate populations of diffusing particles in these bilayers on mica. As a demonstration of the TrackArt workflow, we characterize and discuss the effective activation energies required to increase the diffusion rates of these populations, obtained from Arrhenius plots of temperature-dependent diffusion. Finally, TrackArt provides a simulation module, allowing the user to generate models with multiple particle trajectories, diffusing with different characteristics. Maps of domains, acting as impermeable or permeable obstacles for particles diffusing with given rate constants and diffusion coefficients, can be simulated or imported from an image. Importantly, this allows one to use simulated data with a known diffusion behavior as a comparison for results acquired using particular algorithms on actual, "natural" samples whose diffusion behavior is to be extracted. It can also serve as a tool for demonstrating diffusion principles. TrackArt is an open source, platform-independent, Matlab-based graphical user interface, and is easy to use even for those unfamiliar with the Matlab programming environment. TrackArt can be used for accurate simulation and analysis of complex diffusion data, such as diffusion in lipid bilayers, providing publication-quality formatted results.
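
    The core calculation behind such SMT analysis, sketched here on a simulated 2D Brownian track rather than with TrackArt itself: the time-averaged mean squared displacement is linear in lag time, MSD(t) = 4Dt in two dimensions, so a linear fit recovers the diffusion coefficient.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    D_true, dt, n = 0.5, 0.02, 2000            # um^2/s, s, number of steps
    steps = rng.normal(0, np.sqrt(2 * D_true * dt), size=(n, 2))
    track = np.cumsum(steps, axis=0)           # simulated single-molecule trajectory

    lags = np.arange(1, 20)
    msd = np.array([np.mean(np.sum((track[l:] - track[:-l]) ** 2, axis=1)) for l in lags])
    D_fit = np.polyfit(lags * dt, msd, 1)[0] / 4.0   # slope / 4 for 2D diffusion
    print(f"D_true={D_true}  D_fit={D_fit:.3f} um^2/s")
    ```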

  7. Quadrature mixture LO suppression via DSW DAC noise dither

    DOEpatents

    Dubbert, Dale F [Cedar Crest, NM; Dudley, Peter A [Albuquerque, NM

    2007-08-21

    A Quadrature Error Corrected Digital Waveform Synthesizer (QECDWS) employs frequency dependent phase error corrections to, in effect, pre-distort the phase characteristic of the chirp to compensate for the frequency dependent phase nonlinearity of the RF and microwave subsystem. In addition, the QECDWS can employ frequency dependent correction vectors to the quadrature amplitude and phase of the synthesized output. The quadrature corrections cancel the radars' quadrature upconverter (mixer) errors to null the unwanted spectral image. A result is the direct generation of an RF waveform, which has a theoretical chirp bandwidth equal to the QECDWS clock frequency (1 to 1.2 GHz) with the high Spurious Free Dynamic Range (SFDR) necessary for high dynamic range radar systems such as SAR. To correct for the problematic upconverter local oscillator (LO) leakage, precision DC offsets can be applied over the chirped pulse using a pseudo-random noise dither. The present dither technique can effectively produce a quadrature DC bias which has the precision required to adequately suppress the LO leakage. A calibration technique can be employed to calculate both the quadrature correction vectors and the LO-nulling DC offsets using the radar built-in test capability.
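
    A toy demonstration of why quadrature correction nulls the unwanted spectral image, with invented gain and phase errors rather than the patent's calibrated correction vectors: an I/Q imbalance mixes the tone with its mirror image, and inverting the 2x2 imbalance model restores the single-sideband spectrum. LO-leakage correction would similarly subtract a calibrated complex DC offset.

    ```python
    import numpy as np

    n = 4096
    t = np.arange(n)
    k0 = 410
    tone = np.exp(2j * np.pi * k0 * t / n)                # ideal single-sideband tone
    g, phi = 1.05, np.deg2rad(3.0)                        # quadrature gain/phase errors

    I = tone.real
    Q = g * (tone.imag * np.cos(phi) + tone.real * np.sin(phi))   # imbalanced Q channel
    corr = np.array([[1.0, 0.0], [-np.tan(phi), 1.0 / (g * np.cos(phi))]])
    Ic, Qc = corr @ np.vstack([I, Q])                     # apply correction matrix

    def image_rejection_db(i, q):
        s = np.fft.fft(i + 1j * q)
        pk = int(np.abs(s).argmax())
        return 20 * np.log10(np.abs(s[pk]) / np.abs(s[(-pk) % n]))

    print(f"before: {image_rejection_db(I, Q):.1f} dB, after: {image_rejection_db(Ic, Qc):.1f} dB")
    ```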

  8. Validation of a vector version of the 6S radiative transfer code for atmospheric correction of satellite data. Part II. Homogeneous Lambertian and anisotropic surfaces.

    PubMed

    Kotchenova, Svetlana Y; Vermote, Eric F

    2007-07-10

    This is the second part of the validation effort of the recently developed vector version of the 6S (Second Simulation of a Satellite Signal in the Solar Spectrum) radiative transfer code (6SV1), primarily used for the calculation of look-up tables in the Moderate Resolution Imaging Spectroradiometer (MODIS) atmospheric correction algorithm. The 6SV1 code was tested against a Monte Carlo code and Coulson's tabulated values for molecular and aerosol atmospheres bounded by different Lambertian and anisotropic surfaces. The code was also tested in scalar mode against the scalar code SHARM to resolve the previous 6S accuracy issues in the case of an anisotropic surface. All test cases were characterized by good agreement between the 6SV1 and the other codes: The overall relative error did not exceed 0.8%. The study also showed that ignoring the effects of radiation polarization in the atmosphere led to large errors in the simulated top-of-atmosphere reflectances: The maximum observed error was approximately 7.2% for both Lambertian and anisotropic surfaces.

  9. Validation of a vector version of the 6S radiative transfer code for atmospheric correction of satellite data. Part II. Homogeneous Lambertian and anisotropic surfaces

    NASA Astrophysics Data System (ADS)

    Kotchenova, Svetlana Y.; Vermote, Eric F.

    2007-07-01

    This is the second part of the validation effort of the recently developed vector version of the 6S (Second Simulation of a Satellite Signal in the Solar Spectrum) radiative transfer code (6SV1), primarily used for the calculation of look-up tables in the Moderate Resolution Imaging Spectroradiometer (MODIS) atmospheric correction algorithm. The 6SV1 code was tested against a Monte Carlo code and Coulson's tabulated values for molecular and aerosol atmospheres bounded by different Lambertian and anisotropic surfaces. The code was also tested in scalar mode against the scalar code SHARM to resolve the previous 6S accuracy issues in the case of an anisotropic surface. All test cases were characterized by good agreement between the 6SV1 and the other codes: The overall relative error did not exceed 0.8%. The study also showed that ignoring the effects of radiation polarization in the atmosphere led to large errors in the simulated top-of-atmosphere reflectances: The maximum observed error was approximately 7.2% for both Lambertian and anisotropic surfaces.

  10. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.

  11. Efficient and precise calculation of the b-matrix elements in diffusion-weighted imaging pulse sequences.

    PubMed

    Zubkov, Mikhail; Stait-Gardner, Timothy; Price, William S

    2014-06-01

    Precise NMR diffusion measurements require detailed knowledge of the cumulative dephasing effect caused by the numerous gradient pulses present in most NMR pulse sequences. This effect, which ultimately manifests itself as the diffusion-related NMR signal attenuation, is usually described by the b-value or the b-matrix in the case of multidirectional diffusion weighting, the latter being common in diffusion-weighted NMR imaging. Neglecting some of the gradient pulses introduces an error in the calculated diffusion coefficient reaching in some cases 100% of the expected value. Therefore, ensuring the b-matrix calculation includes all the known gradient pulses leads to significant error reduction. Calculation of the b-matrix for simple gradient waveforms is rather straightforward, yet it grows cumbersome when complexly shaped and/or numerous gradient pulses are introduced. Making three broad assumptions about the gradient pulse arrangement in a sequence results in an efficient framework for calculation of b-matrices, as well as providing some insight into optimal gradient pulse placement. The framework allows accounting for the diffusion-sensitizing effect of complexly shaped gradient waveforms with modest computational time and power. This is achieved by using the b-matrix elements of the simple unmodified pulse sequence and minimizing the integration of the complexly shaped gradient waveform in the modified sequence. Such re-evaluation of the b-matrix elements retains all the analytical relevance of the straightforward approach, yet at least halves the amount of symbolic integration required. The application of the framework is demonstrated with the evaluation of the expression describing the diffusion-sensitizing effect caused by different bipolar gradient pulse modules. Copyright © 2014 Elsevier Inc. All rights reserved.
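
    For the simplest case, the cumulative dephasing integral can be checked numerically against the closed form. The sketch below computes b = γ² ∫ F(t)² dt for an idealized rectangular pulsed-gradient spin-echo pair, with the refocusing pulse handled by negating the effective gradient, and compares it with the Stejskal-Tanner result γ²g²δ²(Δ − δ/3).

    ```python
    import numpy as np

    gamma = 2.6752e8                            # rad s^-1 T^-1, 1H gyromagnetic ratio
    g, delta, Delta = 30e-3, 5e-3, 20e-3        # gradient (T/m), pulse width (s), spacing (s)

    t = np.linspace(0, Delta + delta, 200001)
    dt = t[1] - t[0]
    # effective gradient: second lobe sign-flipped by the 180-degree refocusing pulse
    g_eff = np.where(t < delta, g, 0.0) + np.where((t >= Delta) & (t < Delta + delta), -g, 0.0)
    F = np.cumsum(g_eff) * dt                   # F(t) = integral of the effective gradient
    b_num = gamma**2 * np.sum(F**2) * dt
    b_ana = gamma**2 * g**2 * delta**2 * (Delta - delta / 3)
    print(f"numerical b = {b_num:.4e}  analytic b = {b_ana:.4e}  s/m^2")
    ```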

  12. Comparison of Single-Shot Echo-Planar and Line Scan Protocols for Diffusion Tensor Imaging1

    PubMed Central

    Kubicki, Marek; Maier, Stephan E.; Westin, Carl-Frederik; Mamata, Hatsuho; Ersner-Hershfield, Hal; Estepar, Raul; Kikinis, Ron; Jolesz, Ferenc A.

    2009-01-01

    Rationale and Objectives: Both single-shot diffusion-weighted echo-planar imaging (EPI) and line scan diffusion imaging (LSDI) can be used to obtain magnetic resonance diffusion tensor data and to calculate directionally invariant diffusion anisotropy indices, i.e., indirect measures of the organization and coherence of white matter fibers in the brain. To date, there has been no comparison of EPI and LSDI. Because EPI is the most commonly used technique for acquiring diffusion tensor data, it is important to understand the limitations and advantages of LSDI relative to EPI. Materials and Methods: Five healthy volunteers underwent EPI and LSDI diffusion on a 1.5 Tesla magnet (General Electric Medical Systems, Milwaukee, WI). Four-mm thick coronal sections, covering the entire brain, were obtained. In addition, one subject was tested with both sequences over four sessions. For each image voxel, eigenvectors and eigenvalues of the diffusion tensor were calculated, and fractional anisotropy (FA) was derived. Several regions of interest were delineated, and for each, mean FA and estimated mean standard deviation were calculated and compared. Results: Results showed no significant differences between EPI and LSDI for mean FA for the five subjects. When inter-session reproducibility for one subject was evaluated, there was a significant difference between EPI and LSDI in FA for the corpus callosum and the right uncinate fasciculus. Moreover, errors associated with each FA measure were larger for EPI than for LSDI. Conclusion: Results indicate that both EPI- and LSDI-derived FA measures are sufficiently robust. However, when higher accuracy is needed, LSDI provides smaller error and smaller inter-subject and inter-session variability than EPI. PMID:14974598
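
    The directionally invariant index compared above is computed from the tensor eigenvalues; a short sketch with a synthetic tensor:

    ```python
    import numpy as np

    D = np.array([[1.7, 0.1, 0.0],
                  [0.1, 0.4, 0.0],
                  [0.0, 0.0, 0.3]]) * 1e-3      # mm^2/s, synthetic diffusion tensor

    lam = np.linalg.eigvalsh(D)                 # eigenvalues of the symmetric tensor
    md = lam.mean()                             # mean diffusivity
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    print(f"MD = {md:.2e} mm^2/s, FA = {fa:.3f}")
    ```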

  13. Diffusion tensor optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Marks, Daniel L.; Blackmon, Richard L.; Oldenburg, Amy L.

    2018-01-01

    In situ measurements of diffusive particle transport provide insight into tissue architecture, drug delivery, and cellular function. Analogous to diffusion-tensor magnetic resonance imaging (DT-MRI), where the anisotropic diffusion of water molecules is mapped on the millimeter scale to elucidate the fibrous structure of tissue, here we propose diffusion-tensor optical coherence tomography (DT-OCT) for measuring directional diffusivity and flow of optically scattering particles within tissue. Because DT-OCT is sensitive to the sub-resolution motion of Brownian particles as they are constrained by tissue macromolecules, it has the potential to quantify nanoporous anisotropic tissue structure at micrometer resolution as relevant to extracellular matrices, neurons, and capillaries. Here we derive the principles of DT-OCT, relating the detected optical signal from a minimum of six probe beams with the six unique diffusion tensor and three flow vector components. The optimal geometry of the probe beams is determined given a finite numerical aperture, and a high-speed hardware implementation is proposed. Finally, Monte Carlo simulations are employed to assess the ability of the proposed DT-OCT system to quantify anisotropic diffusion of nanoparticles in a collagen matrix, an extracellular constituent that is known to become highly aligned during tumor development.
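
    A sketch of the tensor-recovery step implied above, with arbitrary rather than optimized beam directions: each probe beam senses the projected diffusivity d_i = u_iᵀ D u_i, so six directions give a linear system for the six unique tensor components.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    D_true = np.array([[2.0, 0.3, 0.1],
                       [0.3, 1.0, 0.2],
                       [0.1, 0.2, 0.5]])        # anisotropic diffusion tensor (synthetic)

    U = rng.normal(size=(6, 3))
    U /= np.linalg.norm(U, axis=1, keepdims=True)          # six unit beam directions
    d = np.einsum('ij,jk,ik->i', U, D_true, U)             # measured projections u^T D u

    # design matrix for [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]
    M = np.column_stack([U[:, 0]**2, U[:, 1]**2, U[:, 2]**2,
                         2*U[:, 0]*U[:, 1], 2*U[:, 0]*U[:, 2], 2*U[:, 1]*U[:, 2]])
    c = np.linalg.solve(M, d)
    D_rec = np.array([[c[0], c[3], c[4]], [c[3], c[1], c[5]], [c[4], c[5], c[2]]])
    print(np.allclose(D_rec, D_true))           # True: tensor recovered exactly
    ```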

  14. Integrating Models of Diffusion and Behavior to Predict Innovation Adoption, Maintenance, and Social Diffusion.

    PubMed

    Smith, Rachel A; Kim, Youllee; Zhu, Xun; Doudou, Dimi Théodore; Sternberg, Eleanore D; Thomas, Matthew B

    2018-01-01

    This study documents an investigation into the adoption and diffusion of eave tubes, a novel mosquito vector control, during a large-scale scientific field trial in West Africa. The diffusion of innovations (DOI) and the integrated model of behavior (IMB) were integrated (i.e., innovation attributes with attitudes and social pressures with norms) to predict participants' (N = 329) diffusion intentions. The findings showed that positive attitudes about the innovation's attributes were a consistent positive predictor of diffusion intentions: adopting it, maintaining it, and talking with others about it. As expected by the DOI and the IMB, the social pressure created by a descriptive norm positively predicted intentions to adopt and maintain the innovation. Drawing upon sharing research, we argued that the descriptive norm may dampen future talk about the innovation, because it may no longer be seen as a novel, useful topic to discuss. As predicted, the results showed that as the descriptive norm increased, the intention to talk about the innovation decreased. These results provide broad support for integrating the DOI and the IMB to predict diffusion and for efforts to draw on other research to understand motivations for social diffusion.

  15. Orientational Order on Surfaces: The Coupling of Topology, Geometry, and Dynamics

    NASA Astrophysics Data System (ADS)

    Nestler, M.; Nitschke, I.; Praetorius, S.; Voigt, A.

    2018-02-01

    We consider the numerical investigation of surface bound orientational order using unit tangential vector fields by means of a gradient flow equation of a weak surface Frank-Oseen energy. The energy is composed of intrinsic and extrinsic contributions, as well as a penalization term to enforce the unity of the vector field. Four different numerical discretizations, namely a discrete exterior calculus approach, a method based on vector spherical harmonics, a surface finite element method, and an approach utilizing an implicit surface description, the diffuse interface method, are described and compared with each other for surfaces with Euler characteristic 2. We demonstrate the influence of geometric properties on realizations of the Poincaré-Hopf theorem and show examples where the energy is decreased by introducing additional orientational defects.

  16. Automatic tool alignment in a backscatter X-ray scanning system

    DOEpatents

    Garretson, Justin; Hobart, Clinton G.; Gladwell, Thomas S.; Monda, Mark J.

    2015-11-17

    Technologies pertaining to backscatter x-ray scanning systems are described herein. The backscatter x-ray scanning system includes an x-ray source, which directs collimated x-rays along a plurality of output vectors towards a target. A detector detects diffusely reflected x-rays subsequent to respective collimated x-rays impacting the target, and outputs signals indicative of parameters of the detected x-rays. An image processing system generates an x-ray image based upon parameters of the detected x-rays, wherein each pixel in the image corresponds to a respective output vector. A user selects a particular portion of the image, and a medical device is positioned such that its directional axis is coincident with the output vector corresponding to at least one pixel in the portion of the image.

  17. Automatic tool alignment in a backscatter x-ray scanning system

    DOEpatents

    Garretson, Justin; Hobart, Clinton G.; Gladwell, Thomas S.; Monda, Mark J.

    2015-06-16

    Technologies pertaining to backscatter x-ray scanning systems are described herein. The backscatter x-ray scanning system includes an x-ray source, which directs collimated x-rays along a plurality of output vectors towards a target. A detector detects diffusely reflected x-rays subsequent to respective collimated x-rays impacting the target, and outputs signals indicative of parameters of the detected x-rays. An image processing system generates an x-ray image based upon parameters of the detected x-rays, wherein each pixel in the image corresponds to a respective output vector. A user selects a particular portion of the image, and a tool is positioned such that its directional axis is coincident with the output vector corresponding to at least one pixel in the portion of the image.

  18. A diffuse plate boundary model for Indian Ocean tectonics

    NASA Technical Reports Server (NTRS)

    Wiens, D. A.; Demets, C.; Gordon, R. G.; Stein, S.; Argus, D.

    1985-01-01

    It is suggested that motion along the virtually aseismic Owen fracture zone is negligible, so that Arabia and India are contained within a single Indo-Arabian plate divided from the Australian plate by a diffuse boundary. The boundary is a zone of concentrated seismicity and deformation commonly characterized as 'intraplate'. The rotation vector of Australia relative to Indo-Arabia is consistent with the seismologically observed 2 cm/yr of left-lateral strike-slip along the Ninetyeast Ridge, north-south compression in the Central Indian Ocean, and the north-south extension near Chagos.

  19. Spintronics: spin accumulation in mesoscopic systems.

    PubMed

    Johnson, Mark

    2002-04-25

    In spintronics, in which use is made of the spin degree of freedom of the electron, issues concerning electrical spin injection and detection of electron spin diffusion are fundamentally important. Jedema et al. describe a magneto-resistance study in which they claim to have observed spin accumulation in a mesoscopic copper wire, but their one-dimensional model ignores two-dimensional spin-diffusion effects, which casts doubt on their analysis. A two-dimensional vector formalism of spin transport is called for to model spin-injection experiments, and the identification of spurious background resistance effects is crucial.

  20. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.
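
    A dense toy version of the correction-vector calculation (the paper's contribution is performing this solve inside DMRG via a Krylov decomposition, not the direct solve used here): for each frequency, solve (ω + iη − H)|c⟩ = |v⟩ and read the spectral weight off the imaginary part of ⟨v|c⟩.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    H = rng.normal(size=(60, 60))
    H = (H + H.T) / 2                            # toy Hermitian 'Hamiltonian'
    v = rng.normal(size=60)
    v /= np.linalg.norm(v)                       # excitation applied to the ground state
    eta = 0.1                                    # Lorentzian broadening

    for omega in np.linspace(-3, 3, 7):
        c = np.linalg.solve((omega + 1j * eta) * np.eye(60) - H, v)   # correction vector
        A = -np.imag(v @ c) / np.pi                                   # spectral function
        print(f"omega={omega:5.2f}  A(omega)={A:.4f}")
    ```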

  1. Polarization-analyzing circuit on InP for integrated Stokes vector receiver.

    PubMed

    Ghosh, Samir; Kawabata, Yuto; Tanemura, Takuo; Nakano, Yoshiaki

    2017-05-29

    Stokes vector modulation and direct detection (SVM/DD) has immense potential to reduce the cost burden for next-generation short-reach optical communication networks. In this paper, we propose and demonstrate an InGaAsP/InP waveguide-based polarization-analyzing circuit for an integrated Stokes vector (SV) receiver. By transforming the input state-of-polarization (SOP) and projecting its SV onto three different vectors on the Poincaré sphere, we show that the actual SOP can be retrieved by simple calculation. We also reveal that this projection matrix has flexibility and that its deviation due to device imperfections can be calibrated to a certain degree, so that the proposed device is fundamentally robust against fabrication errors. A proof-of-concept photonic integrated circuit (PIC) is fabricated on InP using half-ridge waveguides to successfully demonstrate detection of different SOPs scattered over the Poincaré sphere.
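
    A toy version of the projection-and-retrieval calculation, with illustrative analyzer vectors rather than the fabricated PIC's calibrated matrix: three projection measurements determine the Stokes vector by inverting a 3x3 system.

    ```python
    import numpy as np

    P = np.array([[ 1.0,  0.0,   0.0 ],         # rows: analyzer vectors on the
                  [-0.5,  0.866, 0.0 ],         # Poincare sphere (assumed calibrated)
                  [-0.5, -0.433, 0.75]])
    s_in = np.array([0.6, -0.3, 0.742])         # unknown input SOP (unit Stokes vector)

    currents = 0.5 * (1 + P @ s_in)             # ideal projection measurements
    s_est = np.linalg.solve(P, 2 * currents - 1)
    print(np.allclose(s_est, s_in))             # True: SOP retrieved by simple calculation
    ```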

  2. Boosting with Averaged Weight Vectors

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    AdaBoost is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence. The idea is to make the next base model's errors uncorrelated with those of the previous model. Some researchers have pointed out the intuition that it is probably better to construct a distribution that is orthogonal to the mistake vectors of all the previous base models, but that this is not always possible. We present an algorithm that attempts to come as close as possible to this goal in an efficient manner. We present experimental results demonstrating significant improvement over AdaBoost and the Totally Corrective boosting algorithm, which also attempts to satisfy this goal.
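
    A quick numerical check of the orthogonality property discussed above (a sketch of the idea, not Oza's exact algorithm): after the AdaBoost reweighting, the previous model's mistakes carry exactly half the weight under the new distribution, and an averaged weight vector is formed in the last line.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 100
    margins = rng.choice([+1, -1], size=n, p=[0.7, 0.3])   # +1 correct, -1 mistake
    d = np.full(n, 1 / n)                                  # current distribution

    err = d[margins == -1].sum()                           # weighted error < 1/2
    alpha = 0.5 * np.log((1 - err) / err)
    d_next = d * np.exp(-alpha * margins)                  # AdaBoost reweighting
    d_next /= d_next.sum()

    print(d_next[margins == -1].sum())                     # 0.5: 'orthogonal' to mistakes
    d_avg = (d + d_next) / 2                               # averaged weight vector idea
    ```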

  3. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    DOE PAGES

    None, None

    2016-11-21

    Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.

  4. New Research on MEMS Acoustic Vector Sensors Used in Pipeline Ground Markers

    PubMed Central

    Song, Xiaopeng; Jian, Zeming; Zhang, Guojun; Liu, Mengran; Guo, Nan; Zhang, Wendong

    2015-01-01

    According to the demands of current pipeline detection systems, above-ground marker (AGM) systems based on the sound detection principle have become a major development trend in pipeline technology. A novel MEMS acoustic vector sensor for AGM systems, which has the advantages of high sensitivity, high signal-to-noise ratio (SNR), and good low-frequency performance, is put forward. Firstly, it is shown that the frequency of the detected sound signal is concentrated in a lower frequency range, and that sound attenuation in soil is relatively low. Secondly, the MEMS acoustic vector sensor structure and basic principles are introduced. Finally, experimental tests are conducted and the results show that in the range of 0°–90°, when r = 5 m, the proposed MEMS acoustic vector sensor can effectively detect sound signals in soil. The measurement errors of all angles are less than 5°. PMID:25609046

  5. A Model of Gravity Vector Measurement Noise for Estimating Accelerometer Bias in Gravity Disturbance Compensation.

    PubMed

    Tie, Junbo; Cao, Juliang; Chang, Lubing; Cai, Shaokun; Wu, Meiping; Lian, Junxiang

    2018-03-16

    Compensation of gravity disturbance can improve the precision of inertial navigation, but the effect of compensation will decrease due to the accelerometer bias, and estimation of the accelerometer bias is a crucial issue in gravity disturbance compensation. This paper first investigates the effect of accelerometer bias on gravity disturbance compensation, and the situation in which the accelerometer bias should be estimated is established. The accelerometer bias is estimated from the gravity vector measurement, and a model of measurement noise in gravity vector measurement is built. Based on this model, accelerometer bias is separated from the gravity vector measurement error by the method of least squares. Horizontal gravity disturbances are calculated through EGM2008 spherical harmonic model to build the simulation scene, and the simulation results indicate that precise estimations of the accelerometer bias can be obtained with the proposed method.
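
    A toy version of the separation step, with synthetic values throughout: given reference gravity vectors (e.g., from a global model such as EGM2008), the constant accelerometer bias is the least-squares fit to the measurement residuals, while the zero-mean measurement noise averages out.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n = 500
    g_ref = np.array([0.0, 0.0, -9.81]) + 1e-4 * rng.normal(size=(n, 3))  # reference vectors
    bias_true = np.array([3e-4, -2e-4, 5e-4])                             # m/s^2
    g_meas = g_ref + bias_true + 2e-4 * rng.normal(size=(n, 3))           # noisy measurements

    # least squares with an identity design matrix reduces to the residual mean
    bias_hat = (g_meas - g_ref).mean(axis=0)
    print(bias_true, bias_hat.round(6))
    ```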

  6. A VLSI chip set for real time vector quantization of image sequences

    NASA Technical Reports Server (NTRS)

    Baker, Richard L.

    1989-01-01

    The architecture and implementation of a VLSI chip set that vector quantizes (VQ) image sequences in real time is described. The chip set forms a programmable Single-Instruction, Multiple-Data (SIMD) machine which can implement various vector quantization encoding structures. Its VQ codebook may contain an unlimited number of codevectors, N, each having dimension up to K = 64. Under a weighted least-squared-error criterion, the engine locates at video rates the best codevector in fully searched or large tree-searched VQ codebooks. The ability to manipulate tree-structured codebooks, coupled with parallelism and pipelining, permits searches in as few as O(log N) cycles. A full codebook search results in O(N) performance, compared to O(KN) for a Single-Instruction, Single-Data (SISD) machine. With this VLSI chip set, an entire video codec can be built on a single board that permits real-time experimentation with very large codebooks.
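
    The O(log N) tree search itself fits in a few lines. In the sketch below, the nested-dict codebook layout and the optional weight vector are illustrative, not the chip set's actual data format.

      import numpy as np

      def tsvq_encode(x, tree, w=None):
          # Tree-searched VQ: descend a binary codebook tree, comparing the
          # input against two candidate codevectors per level under a
          # weighted squared-error criterion -- O(log N) vs O(N) full search.
          w = np.ones_like(x) if w is None else w
          node = tree
          while 'index' not in node:               # interior node: pick a branch
              dl = np.sum(w * (x - node['left']['codeword']) ** 2)
              dr = np.sum(w * (x - node['right']['codeword']) ** 2)
              node = node['left'] if dl <= dr else node['right']
          return node['index'], node['codeword']   # leaf: final codevector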

  7. A Model of Gravity Vector Measurement Noise for Estimating Accelerometer Bias in Gravity Disturbance Compensation

    PubMed Central

    Cao, Juliang; Cai, Shaokun; Wu, Meiping; Lian, Junxiang

    2018-01-01

    Compensation of gravity disturbance can improve the precision of inertial navigation, but the effect of compensation will decrease due to the accelerometer bias, so estimation of the accelerometer bias is a crucial issue in gravity disturbance compensation. This paper first investigates the effect of accelerometer bias on gravity disturbance compensation and establishes the situation in which the accelerometer bias should be estimated. The accelerometer bias is estimated from the gravity vector measurement, and a model of the measurement noise in gravity vector measurement is built. Based on this model, the accelerometer bias is separated from the gravity vector measurement error by the method of least squares. Horizontal gravity disturbances are calculated using the EGM2008 spherical harmonic model to build the simulation scene, and the simulation results indicate that precise estimates of the accelerometer bias can be obtained with the proposed method. PMID:29547552

  8. Coherent vector meson photoproduction from deuterium at intermediate energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, T.C.; Strikman, M.I.; Sargsian, M.M.

    2006-04-15

    We analyze the cross section for vector meson photoproduction off a deuteron for the intermediate range of photon energies starting at a few giga-electron-volts above the threshold and higher. We reproduce the steps in the derivation of the conventional nonrelativistic Glauber expression based on an effective diagrammatic method while making corrections for Fermi motion and intermediate-energy kinematic effects. We show that, for intermediate-energy vector meson production, the usual Glauber factorization breaks down, and we derive corrections to the usual Glauber method to linear order in longitudinal nucleon momentum. The purpose of our analysis is to establish methods for probing interesting physics in the production mechanism for φ mesons and heavier vector mesons. We demonstrate how neglecting the breakdown of Glauber factorization can lead to errors in measurements of basic cross sections extracted from nuclear data.

  9. Nonperturbative interpretation of the Bloch vector's path beyond the rotating-wave approximation

    NASA Astrophysics Data System (ADS)

    Benenti, Giuliano; Siccardi, Stefano; Strini, Giuliano

    2013-09-01

    The Bloch vector's path for a two-level system exposed to a monochromatic field exhibits, in the regime of strong coupling, complex corkscrew trajectories. By considering the infinitesimal evolution of the two-level system when the field is treated as a classical object, we show that the Bloch vector's rotation speed oscillates between zero and twice the rotation speed predicted by the rotating-wave approximation. Cusps appear where the rotation speed vanishes, and we prove analytically that at these cusps the curvature of the Bloch vector's path diverges. On the other hand, numerical data show that the curvature is very large even for a quantum field in the deep quantum regime, with a mean photon number n̄ ≲ 1. We finally compute numerically the typical error size in a quantum gate when the terms beyond the rotating-wave approximation are neglected.
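
    The classical-field picture is straightforward to reproduce numerically. The sketch below (illustrative parameters, not the paper's) integrates the Bloch equation dr/dt = b(t) x r for a drive H(t) = (w0/2)·sigma_z + lam·cos(w·t)·sigma_x and records the instantaneous rotation speed |dr/dt|, which in the strong-coupling regime dips toward zero at the cusps and peaks near twice the rotating-wave value.

      import numpy as np
      from scipy.integrate import solve_ivp

      w0, w, lam = 1.0, 1.0, 0.5                  # resonance, drive, coupling (assumed)

      def field(t):
          # Effective field for H(t) = (w0/2)*sz + lam*cos(w*t)*sx
          return np.array([2.0 * lam * np.cos(w * t), 0.0, w0])

      def bloch_rhs(t, r):
          return np.cross(field(t), r)            # dr/dt = b(t) x r

      sol = solve_ivp(bloch_rhs, (0.0, 50.0), [0.0, 0.0, 1.0], max_step=0.01)
      speed = np.array([np.linalg.norm(bloch_rhs(t, r))
                        for t, r in zip(sol.t, sol.y.T)])
      # 'speed' dips toward zero at the cusps of the corkscrew trajectory
      # and peaks near twice the RWA rotation speed.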

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatia, Harsh

    This dissertation presents research on addressing some of the contemporary challenges in the analysis of vector fields—an important type of scientific data useful for representing a multitude of physical phenomena, such as wind flow and ocean currents. In particular, new theories and computational frameworks to enable consistent feature extraction from vector fields are presented. One of the most fundamental challenges in the analysis of vector fields is that their features are defined with respect to reference frames. Unfortunately, there is no single “correct” reference frame for analysis, and an unsuitable frame may cause features of interest to remain undetected, thus creating serious physical consequences. This work develops new reference frames that enable extraction of localized features that other techniques and frames fail to detect. As a result, these reference frames objectify the notion of “correctness” of features for certain goals by revealing the phenomena of importance from the underlying data. An important consequence of using these local frames is that the analysis of unsteady (time-varying) vector fields can be reduced to the analysis of sequences of steady (time-independent) vector fields, which can be performed using simpler and scalable techniques that allow better data management by accessing the data on a per-time-step basis. Nevertheless, the state-of-the-art analysis of steady vector fields is not robust, as most techniques are numerical in nature. The residing numerical errors can violate consistency with the underlying theory by breaching important fundamental laws, which may lead to serious physical consequences. This dissertation considers consistency as the most fundamental characteristic of computational analysis that must always be preserved, and presents a new discrete theory that uses combinatorial representations and algorithms to provide consistency guarantees during vector field analysis along with the uncertainty visualization of unavoidable discretization errors. Together, the two main contributions of this dissertation address two important concerns regarding feature extraction from scientific data: correctness and precision. The work presented here also opens new avenues for further research by exploring more-general reference frames and more-sophisticated domain discretizations.

  11. Real-time catheter tracking for high-dose-rate prostate brachytherapy using an electromagnetic 3D-guidance device: A preliminary performance study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Jun; Sebastian, Evelyn; Mangona, Victor

    2013-02-15

    Purpose: In order to increase the accuracy and speed of catheter reconstruction in a high-dose-rate (HDR) prostate implant procedure, an automatic tracking system has been developed using an electromagnetic (EM) device (trakSTAR, Ascension Technology, VT). The performance of the system, including the accuracy and noise level under various tracking parameters and conditions, was investigated. Methods: A direct current (dc) EM transmitter (midrange model) and a sensor with a diameter of 1.3 mm (Model 130) were used in the trakSTAR system for tracking catheter position during HDR prostate brachytherapy. Localization accuracy was assessed under both static and dynamic conditions. For the static analysis, a calibration phantom was used to investigate error dependency on operating room (OR) table height (bottom vs midposition vs top), sensor position (distal tip of catheter vs connector end of catheter), direction [left-right (LR) vs anterior-posterior (AP) vs superior-inferior (SI)], sampling frequency (40 vs 80 vs 160 Hz), and interference from OR equipment (present vs absent). The mean and standard deviation of the localization offset in each direction and the corresponding error vectors were calculated. For the dynamic analysis, the paths of five straight catheters were tracked to study the effects of direction, sampling frequency, and interference of the EM field. Statistical analysis was conducted to compare the results in different configurations. Results: When interference was present in the static analysis, the error vectors were significantly higher at the top table position (3.3 ± 1.3 vs 1.8 ± 0.9 mm at bottom and 1.7 ± 1.0 mm at middle, p < 0.001), at the catheter end position (3.1 ± 1.1 vs 1.4 ± 0.7 mm at the tip position, p < 0.001), and at 40 Hz sampling frequency (2.6 ± 1.1 vs 2.4 ± 1.5 mm at 80 Hz and 1.8 ± 1.1 mm at 160 Hz, p < 0.001), as were the mean offset errors in the LR direction (-1.7 ± 1.4 vs 0.4 ± 0.5 mm in the AP and 0.8 ± 0.8 mm in the SI directions, p < 0.001). The error vectors were significantly higher with surrounding interference (2.2 ± 1.3 mm) than without (1.0 ± 0.7 mm, p < 0.001). An accuracy of 1.6 ± 0.2 mm can be reached when using the optimum configuration (160 Hz at the middle table position). When interference was present in the dynamic tracking, the mean tracking error in the LR direction (1.4 ± 0.5 mm) was significantly higher than that in the AP direction (0.3 ± 0.2 mm, p < 0.001), as were the mean vector errors at 40 Hz (2.1 ± 0.2 mm vs 1.3 ± 0.2 mm at 80 Hz and 0.9 ± 0.2 mm at 160 Hz, p < 0.05). However, when interference was absent, results were comparable in both directions and at all sampling frequencies. An accuracy of 0.9 ± 0.2 mm was obtained for dynamic tracking when using the optimum configuration. Conclusions: The performance of an EM tracking system depends highly on the system configuration and the surrounding environment. The accuracy of EM tracking for catheter reconstruction in a prostate HDR brachytherapy procedure can be improved by reducing interference from surrounding equipment, decreasing the distance from the transmitter to the tracking area, and choosing an appropriate sampling frequency. A calibration scheme is needed to further reduce the tracking error when interference is high.

  12. Edge-based nonlinear diffusion for finite element approximations of convection-diffusion equations and its relation to algebraic flux-correction schemes.

    PubMed

    Barrenechea, Gabriel R; Burman, Erik; Karakatsani, Fotini

    2017-01-01

    For the approximation of convection-diffusion equations using piecewise affine continuous finite elements, a new edge-based nonlinear diffusion operator is proposed that makes the scheme satisfy a discrete maximum principle. The diffusion operator is shown to be Lipschitz continuous and linearity preserving. Using these properties, we provide a full stability and error analysis which, in the diffusion-dominated regime, shows existence, uniqueness, and optimal convergence. The algebraic flux correction method is then recalled, and we show that the present method can be interpreted as an algebraic flux correction method for a particular definition of the flux limiters. The performance of the method is illustrated on some numerical test cases in two space dimensions.

  13. Linear single-step image reconstruction in the presence of nonscattering regions.

    PubMed

    Dehghani, H; Delpy, D T

    2002-06-01

    There is growing interest in the use of near-infrared spectroscopy for the noninvasive determination of the oxygenation level within biological tissue. Stemming from this application, there has been further research in using this technique for obtaining tomographic images of the neonatal head, with the view of determining the level of oxygenated and deoxygenated blood within the brain. Because of computational complexity, methods used for numerical modeling of photon transfer within tissue have usually been limited to the diffusion approximation of the Boltzmann transport equation. The diffusion approximation, however, is not valid in regions of low scatter, such as the cerebrospinal fluid. Methods have been proposed for dealing with nonscattering regions within diffusing materials through the use of a radiosity-diffusion model. Currently, this new model assumes prior knowledge of the void region; therefore it is instructive to examine the errors introduced in applying a simple diffusion-based reconstruction scheme in cases where a nonscattering region exists. We present reconstructed images, using linear algorithms, of models that contain a nonscattering region within a diffusing material. The forward data are calculated by using the radiosity-diffusion model, and the inverse problem is solved by using either the radiosity-diffusion model or the diffusion-only model. When using data from a model containing a clear layer and reconstructing with the correct model, one can reconstruct the anomaly, but the qualitative accuracy and the position of the reconstructed anomaly depend on the size and the position of the clear regions. If the inverse model has no information about the clear regions (i.e., it is a purely diffusing model), an anomaly can be reconstructed, but the resulting image has very poor qualitative accuracy and poor localization of the anomaly. The errors in quantitative and localization accuracies depend on the size and location of the clear regions.

  14. Linear single-step image reconstruction in the presence of nonscattering regions

    NASA Astrophysics Data System (ADS)

    Dehghani, H.; Delpy, D. T.

    2002-06-01

    There is growing interest in the use of near-infrared spectroscopy for the noninvasive determination of the oxygenation level within biological tissue. Stemming from this application, there has been further research in using this technique for obtaining tomographic images of the neonatal head, with the view of determining the level of oxygenated and deoxygenated blood within the brain. Because of computational complexity, methods used for numerical modeling of photon transfer within tissue have usually been limited to the diffusion approximation of the Boltzmann transport equation. The diffusion approximation, however, is not valid in regions of low scatter, such as the cerebrospinal fluid. Methods have been proposed for dealing with nonscattering regions within diffusing materials through the use of a radiosity-diffusion model. Currently, this new model assumes prior knowledge of the void region; therefore it is instructive to examine the errors introduced in applying a simple diffusion-based reconstruction scheme in cases where a nonscattering region exists. We present reconstructed images, using linear algorithms, of models that contain a nonscattering region within a diffusing material. The forward data are calculated by using the radiosity-diffusion model, and the inverse problem is solved by using either the radiosity-diffusion model or the diffusion-only model. When using data from a model containing a clear layer and reconstructing with the correct model, one can reconstruct the anomaly, but the qualitative accuracy and the position of the reconstructed anomaly depend on the size and the position of the clear regions. If the inverse model has no information about the clear regions (i.e., it is a purely diffusing model), an anomaly can be reconstructed, but the resulting image has very poor qualitative accuracy and poor localization of the anomaly. The errors in quantitative and localization accuracies depend on the size and location of the clear regions.

  15. Stochastic estimates of gradient from laser measurements for an autonomous Martian roving vehicle

    NASA Technical Reports Server (NTRS)

    Burger, P. A.

    1973-01-01

    The general problem of estimating the state vector x from the state equation h = Ax, where h, A, and x are all stochastic, is presented. Specifically, the problem is for an autonomous Martian roving vehicle to utilize laser measurements in estimating the gradient of the terrain. Error arises from two factors: surface roughness and instrument measurement noise. The errors in slope depend on the standard deviations of these noise factors. Numerically, the error in gradient is expressed as a function of instrumental inaccuracies. Certain guidelines for the accuracy of the permissible gradient must be set. It is found that present technology can meet these guidelines.
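
    In the same spirit, though not the paper's exact stochastic formulation, a least-squares plane fit conveys the flavor of the estimate: the laser returns noisy 3-D surface points, and the slope coefficients of the fitted plane estimate the terrain gradient.

      import numpy as np

      def estimate_gradient(points):
          # Fit z = a*x + b*y + c to noisy (x, y, z) terrain points and
          # return (a, b), the estimated surface gradient.
          pts = np.asarray(points, dtype=float)
          A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
          coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
          return coef[:2]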

  16. Cloud field classification based upon high spatial resolution textural features. II - Simplified vector approaches

    NASA Technical Reports Server (NTRS)

    Chen, D. W.; Sengupta, S. K.; Welch, R. M.

    1989-01-01

    This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Cooccurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
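
    A minimal sketch of the SADH computation (after Unser's sum-and-difference histograms; names are illustrative): for one pixel displacement, the sum and difference images are histogrammed separately, and simple moments of those two 1-D histograms replace features of the full 2-D co-occurrence matrix.

      import numpy as np

      def sadh_features(img, dx=1, dy=0, levels=256):
          # Sum-and-difference histograms for displacement (dx, dy), dx, dy >= 0;
          # img holds integer gray levels in [0, levels - 1].
          h, w = img.shape
          a = img[0:h - dy, 0:w - dx].astype(int)
          b = img[dy:h, dx:w].astype(int)
          hs = np.bincount((a + b).ravel(), minlength=2 * levels - 1)
          hd = np.bincount((a - b).ravel() + levels - 1, minlength=2 * levels - 1)
          hs = hs / hs.sum()
          hd = hd / hd.sum()
          i = np.arange(2 * levels - 1)
          mean = float((i * hs).sum()) / 2.0                        # texture mean
          contrast = float((((i - (levels - 1)) ** 2) * hd).sum())  # GLCM-style contrast
          ent = -float((hs[hs > 0] * np.log2(hs[hs > 0])).sum()
                       + (hd[hd > 0] * np.log2(hd[hd > 0])).sum())  # entropy bound
          return mean, contrast, ent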

  17. Applying Intelligent Algorithms to Automate the Identification of Error Factors.

    PubMed

    Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han

    2018-05-03

    Medical errors are the manifestation of defects occurring in medical processes, and extracting and identifying these defects as medical error factors is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task that requires an analyst with a professional medical background, so a method is needed to extract medical error factors while reducing the extraction difficulty. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, the extraction of the error factors, and the identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted and then related to 12 error factors. The relational model between the error-related items and the error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Compared to plain BPNN, partial least squares regression, and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy and could promptly identify the error factors from the error-related items. The combination of "error-related items, their different levels, and the GA-BPNN model" was proposed as an error-factor identification technology that can automatically identify medical error factors.

  18. A new model integrating short- and long-term aging of copper added to soils

    PubMed Central

    Zeng, Saiqi; Li, Jumei; Wei, Dongpu

    2017-01-01

    Aging refers to the processes by which the bioavailability/toxicity, isotopic exchangeability, and extractability of metals added to soils decline over time. We studied the characteristics of the aging process of copper (Cu) added to soils and the factors that affect this process, and we developed a semi-mechanistic model that predicts the lability of Cu during aging by describing the diffusion process with the complementary error function. In previous studies, two semi-mechanistic models were developed to separately predict short-term and long-term aging of Cu added to soils, each with its own description of the diffusion process: in the short-term model the diffusion term was linear in the square root of incubation time (t^(1/2)), and in the long-term model it was linear in the natural logarithm of incubation time (ln t). Each model could predict its own regime, but neither could describe the short- and long-term aging processes together. By analyzing and combining the two models, we found that both the short- and long-term behavior of the diffusion process can be described adequately using the complementary error function. The effect of temperature on the diffusion process is also included in the model. The model can predict the aging process continuously from four factors: soil pH, incubation time, soil organic matter content, and temperature. PMID:28820888
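
    The key modeling point, that a single complementary-error-function term can bridge the regimes previously covered by separate t^(1/2) and ln(t) models, can be sketched as follows; the functional form and the constant are illustrative placeholders, not the paper's fitted model.

      import numpy as np
      from scipy.special import erfc

      def diffusion_term(t, D=1e-3):
          # Illustrative erfc-shaped diffusion contribution to aging:
          # rises from ~0 as t -> 0 and saturates toward 1 at long times,
          # so one bounded term can stand in for the two regimes that
          # previously required separate short- and long-term models.
          t = np.asarray(t, dtype=float)
          return erfc(1.0 / (2.0 * np.sqrt(D * t)))

      labile = 1.0 - diffusion_term(np.array([1.0, 10.0, 100.0, 1000.0]))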

  19. MO-FG-CAMPUS-JeP3-01: A Statistical Model for Analyzing the Rotational Error of Single Iso-Center Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, J; Dept of Radiation Oncology, New York Weill Cornell Medical Ctr, New York, NY

    Purpose: To develop a generalized statistical model that incorporates the treatment uncertainty from the rotational error of the single iso-center technique, and to calculate the additional PTV (planning target volume) margin required to compensate for this error. Methods: The random vectors for setup and additional rotational errors in the three-dimensional (3D) patient coordinate system were assumed to follow the 3D independent normal distribution with zero mean, and standard deviations σx, σy, σz for setup error and a uniform σR for rotational error. Both random vectors were summed, normalized, and transformed to spherical coordinates to derive the chi distribution with 3 degrees of freedom for the radial distance ρ. The PTV margin was determined using the critical value of this distribution at the 0.05 significance level, so that 95% of the time the treatment target would be covered within ρ. The additional PTV margin required to compensate for the rotational error was calculated as a function of σx, σy, σz, and σR. Results: The effect of the rotational error is more pronounced for treatments that require high accuracy/precision, like stereotactic radiosurgery (SRS) or stereotactic body radiotherapy (SBRT). With a uniform 2 mm PTV margin (or σx = σy = σz = 0.7 mm), a σR = 0.32 mm will decrease the PTV coverage from 95% to 90% of the time; that is, an additional 0.2 mm PTV margin is needed to prevent this loss of coverage. If we choose 0.2 mm as the threshold, any σR > 0.3 mm will lead to an additional PTV margin that cannot be ignored, and the maximal σR that can be ignored is 0.0064 rad (or 0.37°) for an iso-to-target distance of 5 cm, or 0.0032 rad (or 0.18°) for an iso-to-target distance of 10 cm. Conclusions: The rotational error cannot be ignored for high-accuracy/-precision treatments like SRS/SBRT, particularly when the distance between the iso-center and the target is large.
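
    The margin calculation is easy to reproduce by Monte Carlo. In this sketch the rotational error is expressed, as in the abstract's model, as an additional isotropic normal displacement at the target; draw the combined displacement, take the radial distance ρ, and read off its 95th percentile.

      import numpy as np

      def ptv_margin(sx, sy, sz, s_rot, cover=0.95, n=200_000, seed=0):
          # Radial miss distance for independent 3-D normal setup error
          # (sx, sy, sz) plus isotropic rotational error s_rot, all in mm.
          rng = np.random.default_rng(seed)
          setup = rng.normal(0.0, [sx, sy, sz], size=(n, 3))
          rot = rng.normal(0.0, s_rot, size=(n, 3))
          rho = np.linalg.norm(setup + rot, axis=1)
          return float(np.quantile(rho, cover))

      # Abstract's example: sigma = 0.7 mm per axis (~2 mm margin) plus
      # s_rot = 0.32 mm costs roughly an extra 0.2 mm of margin.
      extra = ptv_margin(0.7, 0.7, 0.7, 0.32) - ptv_margin(0.7, 0.7, 0.7, 0.0)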

  20. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    PubMed

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision, and dynamic measurement error prediction is an important part of such correction; support vector machines (SVM) are often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were set manually, which cannot guarantee the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to escape local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO algorithm (NAPSO), and glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data. The root mean squared error and the mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has higher prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.
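
    A sketch of the SVM half of the method, with the NAPSO search replaced by a plain random search over the SVR hyperparameters; the data series, window length, and search ranges are all stand-ins.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      errors = rng.standard_normal(500).cumsum() * 0.01   # stand-in error series

      lag = 8                                     # predict next error from a window
      X = np.lib.stride_tricks.sliding_window_view(errors[:-1], lag)
      y = errors[lag:]

      best_params, best_score = None, -np.inf
      for _ in range(30):                         # crude surrogate for the NAPSO search
          p = dict(C=10 ** rng.uniform(-1, 3),
                   gamma=10 ** rng.uniform(-3, 1),
                   epsilon=10 ** rng.uniform(-4, -1))
          score = cross_val_score(SVR(**p), X, y, cv=3).mean()
          if score > best_score:
              best_params, best_score = p, score
      model = SVR(**best_params).fit(X, y)        # final dynamic-error predictor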
