Sample records for matrix correction method

  1. Optically buffered Jones-matrix-based multifunctional optical coherence tomography with polarization mode dispersion correction

    PubMed Central

    Hong, Young-Joo; Makita, Shuichi; Sugiyama, Satoshi; Yasuno, Yoshiaki

    2014-01-01

    Polarization mode dispersion (PMD) degrades the performance of Jones-matrix-based polarization-sensitive multifunctional optical coherence tomography (JM-OCT). The problem is especially acute for optically buffered JM-OCT, because the long fiber in the optical buffering module induces a large amount of PMD. This paper presents a method to correct the effect of PMD in JM-OCT. We first mathematically model the PMD in JM-OCT and then derive a method to correct it. The method combines a simple hardware modification with subsequent software correction. The hardware modification is the introduction of two polarizers, which transforms the PMD into a global complex modulation of the Jones matrix; the software correction then demodulates this global modulation. The method is validated with an experimentally obtained point spread function of a mirror sample, as well as by in vivo measurement of a human retina. PMID:25657888

  2. Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant

    DOEpatents

    Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa

    2013-09-17

    System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
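    The predict/constrain/correct cycle described above can be sketched for a hypothetical one-state linear plant. The dynamics, noise levels, and constraint below are illustrative assumptions, not taken from the patent:

```python
# Hypothetical one-state linear plant (illustrative, not from the patent):
# x_{k+1} = a*x_k + b*u_k, measurement y_k = x_k + noise.
a, b = 0.9, 0.1
Q, R = 1e-3, 1e-2          # process / measurement noise variances

def ekf_step(x, P, u, y, x_min=0.0):
    # Predict with the dynamic model.
    x_pred = a * x + b * u
    P_pred = a * P * a + Q
    # Preemptively constrain the state estimate (e.g. a flow cannot be negative).
    x_pred = max(x_pred, x_min)
    # Measurement correction: Kalman gain, then update state and covariance.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (y - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.5, 1.0
for _ in range(50):
    x, P = ekf_step(x, P, u=1.0, y=1.0)   # constant input and measurement
```

    With a constant input and measurement the estimate settles at the consistent fixed point x = 1, and the covariance shrinks as the filter gains confidence; the patent's full system applies the same cycle with a multivariate nonlinear plant model.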

  3. Continuous improvement of medical test reliability using reference methods and matrix-corrected target values in proficiency testing schemes: application to glucose assay.

    PubMed

    Delatour, Vincent; Lalere, Beatrice; Saint-Albin, Karène; Peignaux, Maryline; Hattchouel, Jean-Marc; Dumont, Gilles; De Graeve, Jacques; Vaslin-Reimann, Sophie; Gillery, Philippe

    2012-11-20

    The reliability of biological tests is a major patient-care and public-health issue with high economic stakes. Reference methods, as well as regular external quality assessment schemes (EQAS), are needed to monitor the analytical performance of field methods. However, control material commutability is a major concern when assessing method accuracy. To overcome material non-commutability, we investigated the possibility of using lyophilized serum samples together with a limited number of frozen serum samples to assign matrix-corrected target values, taking glucose assays as an example. Trueness of the current glucose assays was first measured against a primary reference method by using human frozen sera. Methods using hexokinase and glucose oxidase with spectroreflectometric detection proved very accurate, with bias ranging between -2.2% and +2.3%. Bias of methods using glucose oxidase with spectrophotometric detection was +4.5%. Matrix-related bias of the lyophilized materials was then determined and ranged from +2.5% to -14.4%. Matrix-corrected target values were assigned and used to assess the trueness of 22 sub-peer groups. We demonstrated that matrix-corrected target values can be a valuable tool for assessing field method accuracy in large-scale surveys where commutable materials are not available in sufficient amounts at acceptable cost. Copyright © 2012 Elsevier B.V. All rights reserved.
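    The matrix-corrected target-value idea reduces to simple arithmetic: the matrix-related bias of the non-commutable material is first measured with the reference method, and the target value is shifted by that bias before field-method trueness is assessed. All numbers below are invented for illustration:

```python
# All values are hypothetical, for illustration only.
reference_value = 5.00       # mmol/L, glucose by the primary reference method (frozen serum)
observed_lyophilized = 5.35  # same reference method applied to the lyophilized EQAS material

# Matrix-related bias of the lyophilized material:
matrix_bias = (observed_lyophilized - reference_value) / reference_value  # +7.0%

# A field method is judged against the matrix-corrected target, not the raw reference value:
corrected_target = reference_value * (1.0 + matrix_bias)   # 5.35 mmol/L
field_reading = 5.50
trueness_bias = (field_reading - corrected_target) / corrected_target
print(round(100 * matrix_bias, 1), round(100 * trueness_bias, 1))  # 7.0 2.8
```

    Without the matrix correction, the field method would be charged with the full +10% deviation from the raw reference value, conflating matrix effects with genuine method bias.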

  4. Correction of photoresponse nonuniformity for matrix detectors based on prior compensation for their nonlinear behavior.

    PubMed

    Ferrero, Alejandro; Campos, Joaquin; Pons, Alicia

    2006-04-10

    What we believe to be a novel procedure to correct the nonuniformity that is inherent in all matrix detectors has been developed and experimentally validated. This correction method, unlike other nonuniformity-correction algorithms, consists of two steps that separate two of the usual problems that affect characterization of matrix detectors, i.e., nonlinearity and the relative variation of the pixels' responsivity across the array. The correction of the nonlinear behavior remains valid for any illumination wavelength employed, as long as the nonlinearity is not due to power dependence of the internal quantum efficiency. This method of correction of nonuniformity permits the immediate calculation of the correction factor for any given power level and for any illuminant that has a known spectral content once the nonuniform behavior has been characterized for a sufficient number of wavelengths. This procedure has a significant advantage compared with other traditional calibration-based methods, which require that a full characterization be carried out for each spectral distribution pattern of the incident optical radiation. The experimental application of this novel method has achieved a 20-fold increase in the uniformity of a CCD array for response levels close to saturation.
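    A minimal sketch of the two-step separation, under the simplifying assumption that the nonlinearity is shared by all pixels while only the gain varies across the array (the functional form and numbers are invented, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 4, 4

# Hypothetical detector model: reading = g_ij * f(E), with a nonlinearity
# f(E) = E / (1 + E/E_sat) shared by all pixels and a per-pixel relative
# responsivity (gain) map g_ij.
E_sat = 10.0
gain = 1.0 + 0.1 * rng.standard_normal((H, W))

def detector(E):
    return gain * (E / (1.0 + E / E_sat))

# Step 1: invert the separately characterized nonlinearity.
def linearize(r):
    return r / (1.0 - r / E_sat)   # algebraic inverse of f

# Step 2: flat-field with the gain map measured under uniform illumination.
def correct(reading):
    return linearize(reading / gain)

E_true = 3.0
raw = detector(np.full((H, W), E_true))   # nonuniform readings
flat = correct(raw)                        # uniform, linearized exposure
```

    Because the two effects are characterized separately, the gain map can be reused at any power level once the nonlinearity has been inverted, which is the advantage the abstract claims over full per-illuminant recalibration.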

  5. Apparatus and method for quantitative assay of samples of transuranic waste contained in barrels in the presence of matrix material

    DOEpatents

    Caldwell, J.T.; Herrera, G.C.; Hastings, R.D.; Shunk, E.R.; Kunz, W.E.

    1987-08-28

    Apparatus and method for performing corrections for matrix material effects on the neutron measurements generated from analysis of transuranic waste drums using the differential-dieaway technique. By measuring the absorption index and the moderator index for a particular drum, correction factors can be determined for the effects of matrix materials on the "observed" quantity of fissile and fertile material present therein in order to determine the actual assays thereof. A barrel flux monitor is introduced into the measurement chamber to accomplish these measurements as a new contribution to the differential-dieaway technology. 9 figs.

  6. A review of the matrix-exponential formalism in radiative transfer

    NASA Astrophysics Data System (ADS)

    Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian

    2017-07-01

    This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
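    The two computational routes mentioned above, eigendecomposition and a Taylor series suited to optically thin layers, can be compared on a small symmetric matrix. The matrix below is an illustrative stand-in, not a real radiative transfer operator:

```python
import numpy as np

# Illustrative symmetric "layer" matrix (a stand-in, not a real RT operator).
A = np.array([[-0.5,  0.2],
              [ 0.2, -0.8]])

# Eigendecomposition route: expm(A) = V diag(exp(lambda)) V^T for symmetric A.
lam, V = np.linalg.eigh(A)
expm_eig = V @ np.diag(np.exp(lam)) @ V.T

# Taylor-series route, appropriate when ||A|| is small (optically thin layer):
def expm_taylor(A, terms=25):
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k      # accumulate A^k / k!
        out = out + term
    return out

expm_tay = expm_taylor(A)
```

    For thin layers the truncated series converges quickly; for thick layers the eigendecomposition (or the asymptotic theory the paper derives) is the appropriate route.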

  7. Variational second order density matrix study of F3-: importance of subspace constraints for size-consistency.

    PubMed

    van Aggelen, Helen; Verstichel, Brecht; Bultinck, Patrick; Van Neck, Dimitri; Ayers, Paul W; Cooper, David L

    2011-02-07

    Variational second order density matrix theory under "two-positivity" constraints tends to dissociate molecules into unphysical fractionally charged products with too low energies. We aim to construct a qualitatively correct potential energy surface for F(3)(-) by applying subspace energy constraints on mono- and diatomic subspaces of the molecular basis space. Monoatomic subspace constraints do not guarantee correct dissociation: the constraints are thus geometry dependent. Furthermore, the number of subspace constraints needed for correct dissociation does not grow linearly with the number of atoms. The subspace constraints do impose correct chemical properties in the dissociation limit and size-consistency, but the structure of the resulting second order density matrix method does not exactly correspond to a system of noninteracting units.

  8. First Industrial Tests of a Drum Monitor Matrix Correction for the Fissile Mass Measurement in Large Volume Historic Metallic Residues with the Differential Die-away Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antoni, R.; Passard, C.; Perot, B.

    2015-07-01

    The fissile mass in radioactive waste drums filled with compacted metallic residues (spent fuel hulls and nozzles) produced at the AREVA La Hague reprocessing plant is measured by neutron interrogation with the Differential Die-away measurement Technique (DDT). In the coming years, old hulls and nozzles mixed with ion-exchange resins will be measured. The ion-exchange resins increase neutron moderation in the matrix, compared to the waste measured in the current process. In this context, the Nuclear Measurement Laboratory (NML) of CEA Cadarache has studied a matrix-effect correction method based on a drum monitor (a ³He proportional counter inside the measurement cavity). A previous study performed with the NML R&D measurement cell PROMETHEE 6 showed the feasibility of the method and the capability of MCNP simulations to correctly reproduce experimental data and to assess the performance of the proposed correction. The next step of the study focused on assessing the performance of the method on the industrial station using numerical simulation. A correlation between the prompt calibration coefficient of the ²³⁹Pu signal and the drum monitor signal was established using the MCNPX computer code and a fractional factorial experimental design composed of matrix parameters representative of the variation range of historical waste. Calculations showed that the method allows the assay of the fissile mass with an uncertainty within a factor of 2, while the matrix effect without correction ranges over two decades. In this paper, we present and discuss the first experimental tests on the industrial ACC measurement system. A calculation-vs-experiment benchmark has been achieved by performing dedicated calibration measurements with a representative drum and ²³⁵U samples. The preliminary comparison between calculation and experiment shows satisfactory agreement for the drum monitor. The final objective of this work is to confirm the reliability of the modeling approach and the industrial feasibility of the method, which will be implemented on the industrial station for the measurement of historical wastes. (authors)

  9. Noise-immune complex correlation for vasculature imaging based on standard and Jones-matrix optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Makita, Shuichi; Kurokawa, Kazuhiro; Hong, Young-Joo; Li, En; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    A new optical coherence angiography (OCA) method, called correlation mapping OCA (cmOCA), is presented, based on an SNR-corrected complex correlation. An SNR-correction theory for the complex correlation calculation is presented, and the method also integrates removal of the decorrelation artifacts induced by sample motion. The theory is further extended to compute a more reliable correlation by using multi-channel OCT systems, such as Jones-matrix OCT. High-contrast vasculature imaging of the in vivo human posterior eye has been obtained. Composite imaging of cmOCA and the degree of polarization uniformity indicates abnormalities of vasculature and pigmented tissues simultaneously.

  10. Perturbation theory corrections to the two-particle reduced density matrix variational method.

    PubMed

    Juhasz, Tamas; Mazziotti, David A

    2004-07-15

    In the variational 2-particle-reduced-density-matrix (2-RDM) method, the ground-state energy is minimized with respect to the 2-particle reduced density matrix, constrained by N-representability conditions. Consider the N-electron Hamiltonian H(lambda) as a function of the parameter lambda where we recover the Fock Hamiltonian at lambda=0 and we recover the fully correlated Hamiltonian at lambda=1. We explore using the accuracy of perturbation theory at small lambda to correct the 2-RDM variational energies at lambda=1 where the Hamiltonian represents correlated atoms and molecules. A key assumption in the correction is that the 2-RDM method will capture a fairly constant percentage of the correlation energy for lambda in (0,1] because the nonperturbative 2-RDM approach depends more significantly upon the nature rather than the strength of the two-body Hamiltonian interaction. For a variety of molecules we observe that this correction improves the 2-RDM energies in the equilibrium bonding region, while the 2-RDM energies at stretched or nearly dissociated geometries, already highly accurate, are not significantly changed. At equilibrium geometries the corrected 2-RDM energies are similar in accuracy to those from coupled-cluster singles and doubles (CCSD), but at nonequilibrium geometries the 2-RDM energies are often dramatically more accurate as shown in the bond stretching and dissociation data for water and nitrogen. (c) 2004 American Institute of Physics.
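    The correction strategy can be illustrated with a hedged numerical sketch (all energies below are invented): the fraction of the correlation energy the 2-RDM method captures is estimated at small lambda, where perturbation theory is reliable, and assumed constant up to lambda = 1.

```python
# Illustrative, invented numbers (not from the paper).
lam_small = 0.1

# At small lambda, perturbation theory gives a reliable correlation energy,
# against which the variational 2-RDM result is compared:
Ec_pt_small  = -0.0100   # hartree; "exact" correlation energy at lam_small
Ec_rdm_small = -0.0108   # 2-RDM result at lam_small (overbinds: too low)

# Fraction of the correlation energy the 2-RDM method captures:
f = Ec_rdm_small / Ec_pt_small        # 1.08

# Key assumption from the paper: f is roughly lambda-independent, so the
# fully correlated (lambda = 1) energy can be rescaled by the same factor:
Ec_rdm_full = -0.2160
Ec_corrected = Ec_rdm_full / f        # approximately -0.2000
```

    The rescaling corrects the systematic overbinding of the two-positivity conditions without re-solving the variational problem at lambda = 1.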

  11. SU-E-T-644: Evaluation of Angular Dependence Correction for 2D Array Detector Using for Quality Assurance of Volumetric Modulated Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karthikeyan, N; Ganesh, K M; Vikraman, S

    2014-06-15

    Purpose: To evaluate the angular dependence correction for the Matrix Evolution 2D array detector in quality assurance of volumetric modulated arc therapy (VMAT). Methods: Ten patients covering different treatment sites were planned for VMAT and included in the study. Each plan was delivered to the Matrix Evolution 2D array detector with Omnipro IMRT software, using 6 MV photon beams from an Elekta Synergy linear accelerator, in three different ways. In the first method, the VMAT plan was delivered to the gantry-mounted detector in its dedicated holder with 2.3 cm of build-up. In the second, the plan was delivered with static gantry angles to a table-mounted setup. In the third, the plan was delivered with the actual gantry angles to the detector fixed in the MultiCube phantom with a gantry angle sensor, and the angular dependence correction was applied to quantify plan quality. For all three methods, the corresponding QA plans were generated in the TPS, and dose verification was performed for both point dose and 2D fluence analysis with pass criteria of 3% dose difference and 3 mm distance to agreement. Results: For the first method, the measured point dose deviation from the TPS calculation was 1.58±0.6% (mean±SD). For the second and third methods, the mean±SD was 1.67±0.7% and 1.85±0.8%, respectively. The 2D fluence analysis of measured versus TPS-calculated dose yielded mean±SD pass rates of 97.9±1.1%, 97.88±1.2%, and 97.55±1.3% for the first, second, and third methods, respectively. Two-tailed P values of 0.9316 (point dose) and 0.9015 (2D fluence) indicate no significant difference among the QA methods. Conclusion: The evaluation of the angular dependence correction for the Matrix Evolution 2D array detector demonstrates its suitability for accurate quality assurance of composite VMAT dose distributions.

  12. Radiative-Transfer Modeling of Spectra of Densely Packed Particulate Media

    NASA Astrophysics Data System (ADS)

    Ito, G.; Mishchenko, M. I.; Glotch, T. D.

    2017-12-01

    Remote sensing measurements over a wide range of wavelengths from both ground- and space-based platforms have provided a wealth of data regarding the surfaces and atmospheres of various solar system bodies. With proper interpretation, important properties, such as composition and particle size, can be inferred. However, proper interpretation of such datasets can often be difficult, especially for densely packed particulate media with particle sizes on the order of the wavelength of light used for remote sensing. Radiative transfer theory has often been applied, with difficulty, to the study of densely packed particulate media like planetary regoliths and snow, and here we continue to investigate radiative transfer modeling of spectra of densely packed media. We use the superposition T-matrix method to compute scattering properties of clusters of particles and capture the near-field effects important for dense packing. The scattering parameters from the T-matrix computations are then modified with the static structure factor correction, accounting for the dense packing of the clusters themselves. Using these corrected scattering parameters, reflectance (or emissivity via Kirchhoff's law) is computed with the invariant imbedding solution to the radiative transfer equation. For this work we modeled the emissivity spectrum of the 3.3 µm particle size fraction of enstatite, representing common mineralogical and particle size components of regoliths, at mid-infrared wavelengths (5-50 µm). The spectrum modeled by the T-matrix method with static structure factor correction at moderate packing densities (filling factors of 0.1-0.2) fit the corresponding laboratory measurement better than the spectrum modeled by the equivalent method without the correction.
    Future work will test the combined superposition T-matrix and static structure factor correction method for larger particle sizes and polydispersed clusters, in search of the most effective modeling of spectra of densely packed particulate media.

  13. Optical factors determined by the T-matrix method in turbidity measurement of absolute coagulation rate constants.

    PubMed

    Xu, Shenghua; Liu, Jie; Sun, Zhiwei

    2006-12-01

    Turbidity measurement for the absolute coagulation rate constants of suspensions has been extensively adopted because of its simplicity and easy implementation. A key factor in deriving the rate constant from experimental data is how to theoretically evaluate the so-called optical factor involved in calculating the extinction cross section of doublets formed during aggregation. In a previous paper, we have shown that compared with other theoretical approaches, the T-matrix method provides a robust solution to this problem and is effective in extending the applicability range of the turbidity methodology, as well as increasing measurement accuracy. This paper will provide a more comprehensive discussion of the physical insight for using the T-matrix method in turbidity measurement and associated technical details. In particular, the importance of ensuring the correct value for the refractive indices for colloidal particles and the surrounding medium used in the calculation is addressed, because the indices generally vary with the wavelength of the incident light. The comparison of calculated results with experiments shows that the T-matrix method can correctly calculate optical factors even for large particles, whereas other existing theories cannot. In addition, the data of the optical factor calculated by the T-matrix method for a range of particle radii and incident light wavelengths are listed.

  14. Novel Calibration Algorithm for a Three-Axis Strapdown Magnetometer

    PubMed Central

    Liu, Yan Xia; Li, Xi Sheng; Zhang, Xiao Juan; Feng, Yi Bo

    2014-01-01

    A complete error calibration model with 12 independent parameters is established by analyzing the three-axis magnetometer error mechanism. This model conforms to an ellipsoid constraint; the parameters of the ellipsoid equation are estimated, and the ellipsoid coefficient matrix is derived. However, the calibration matrix cannot be determined completely, as there are fewer ellipsoid parameters than calibration model parameters. Mathematically, the calibration matrix derived from the ellipsoid coefficient matrix is not unique across matrix decomposition methods, and an unknown rotation matrix R relates the alternatives. This paper puts forward a constant intersection angle method (the angle between the geomagnetic field and the gravitational field is fixed) to estimate R. The Tikhonov method is adopted to prevent rounding and other errors from seriously affecting the computation of R when the condition number of the matrix is very large. The geomagnetic field vector and heading error are further corrected by R. The constant intersection angle method is convenient and practical, as it is free from any additional calibration procedure or coordinate transformation. In addition, a simulation experiment at a signal-to-noise ratio of 50 dB indicates that the heading error declines from ±1° with classical ellipsoid fitting to ±0.2° with the constant intersection angle method. A physical experiment shows the heading error further corrected from ±0.8° with classical ellipsoid fitting to ±0.3° with the constant intersection angle method. PMID:24831110
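    Applying a magnetometer calibration of this general form can be sketched in a few lines. The soft-iron matrix, hard-iron offset, and the choice to take the residual rotation R as the identity are illustrative assumptions; the paper's constant intersection angle method is what pins down that remaining rotation:

```python
import numpy as np

# Hypothetical sensor model: h_raw = M @ h_true + b, with a soft-iron
# matrix M and hard-iron offset b (values invented for illustration).
M = np.array([[1.10, 0.02, 0.00],
              [0.01, 0.95, 0.03],
              [0.00, 0.02, 1.05]])
b = np.array([0.20, -0.10, 0.05])

# Calibration matrix recovered (here exactly; in practice only up to an
# unknown rotation R, which the constant intersection angle method estimates).
C = np.linalg.inv(M)

def calibrate(h_raw):
    return C @ (h_raw - b)

h_true = np.array([0.3, -0.2, 0.5])
h_raw = M @ h_true + b          # simulated distorted reading
h_cal = calibrate(h_raw)        # recovers h_true
```

    Ellipsoid fitting alone can only recover C up to R, because any rotation of the corrected readings still lies on a sphere; that ambiguity is exactly what the fixed geomagnetic/gravity intersection angle resolves.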

  15. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.

  16. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    DOE PAGES

    None, None

    2016-11-21

    Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.
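    In a dense toy model the correction vector reduces to a single complex linear solve, which a Krylov-space method would approximate iteratively in the DMRG setting. The Hamiltonian and operator below are random stand-ins, not any of the models studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Dense stand-in for the DMRG setting: correction vector
#   |c(w)> = (w + E0 - H + i*eta)^{-1} B |psi0>,
# here obtained by a direct linear solve.
n = 40
H = rng.standard_normal((n, n))
H = (H + H.T) / 2.0                     # Hermitian toy "Hamiltonian"
E, Vecs = np.linalg.eigh(H)
psi0, E0 = Vecs[:, 0], E[0]             # ground state and its energy

B = rng.standard_normal((n, n))         # some operator of interest
eta, w = 0.1, 1.0                       # broadening and frequency

rhs = B @ psi0
M = (w + E0 + 1j * eta) * np.eye(n) - H
c = np.linalg.solve(M, rhs)             # the correction vector

# Spectral function at this frequency (non-negative by construction):
A_w = -np.imag(psi0 @ B.T @ c) / np.pi
```

    The Krylov-space alternative builds c from a tridiagonal Lanczos representation of H instead of solving the full linear system, which is what makes it attractive inside DMRG where H is never formed densely.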

  17. Statistical Refinement of the Q-Matrix in Cognitive Diagnosis

    ERIC Educational Resources Information Center

    Chiu, Chia-Yi

    2013-01-01

    Most methods for fitting cognitive diagnosis models to educational test data and assigning examinees to proficiency classes require the Q-matrix that associates each item in a test with the cognitive skills (attributes) needed to answer it correctly. In most cases, the Q-matrix is not known but is constructed from the (fallible) judgments of…

  18. Removing flicker based on sparse color correspondences in old film restoration

    NASA Astrophysics Data System (ADS)

    Huang, Xi; Ding, Youdong; Yu, Bing; Xia, Tianran

    2018-04-01

    Archived film is an indispensable part of the long history of human civilization, and digitally repairing damaged film is now a mainstream approach. In this paper, we propose a sparse-color-correspondence-based technique to remove fading flicker from old films. Our method combines multiple frames to establish a simple correction model and includes three key steps. First, we recover sparse color correspondences in the input frames to build a matrix with many missing entries. Second, we present a low-rank matrix factorization approach to estimate the unknown parameters of this model. Finally, we adopt a two-step strategy that divides the estimated parameters into reference-frame parameters for color recovery correction and other-frame parameters for color consistency correction to remove flicker. By combining multiple frames, our method takes the continuity of the input sequence into account, and the experimental results show that it removes fading flicker efficiently.
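    A simplified sketch of the flicker model: a per-frame gain and offset fitted to sparse correspondences against a reference frame. Unlike the paper's low-rank factorization, this toy version assumes the correspondence matrix has no missing entries, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: true intensities of 30 sparse correspondences, observed in
# 4 frames under per-frame gain/offset flicker (frame 0 is the reference).
true_vals = rng.uniform(50, 200, size=30)
gains   = np.array([1.0, 0.8, 1.2, 0.9])
offsets = np.array([0.0, 10.0, -5.0, 3.0])
observed = gains[:, None] * true_vals[None, :] + offsets[:, None]

# Fit each frame's (gain, offset) against the reference frame by least
# squares on the correspondences, then invert the flicker.
ref = observed[0]
corrected = np.empty_like(observed)
for t in range(observed.shape[0]):
    A = np.column_stack([observed[t], np.ones_like(ref)])
    (a, c), *_ = np.linalg.lstsq(A, ref, rcond=None)  # ref ~ a*obs + c
    corrected[t] = a * observed[t] + c
```

    The low-rank factorization in the paper plays the same role as this least-squares fit, but handles the many missing correspondences that arise between real frames.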

  19. A two-dimensional matrix correction for off-axis portal dose prediction errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, Daniel W.; Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263; Kumaraswamy, Lalith

    2013-05-15

    Purpose: This study presents a follow-up to a modified calibration procedure for portal dosimetry published by Bailey et al. ['An effective correction algorithm for off-axis portal dosimetry errors,' Med. Phys. 36, 4089-4094 (2009)]. A commercial portal dose prediction system exhibits disagreement of up to 15% (calibrated units) between measured and predicted images as off-axis distance increases. The previous modified calibration procedure accounts for these off-axis effects in most regions of the detecting surface, but is limited by the simplistic assumption of radial symmetry. Methods: We find that a two-dimensional (2D) matrix correction, applied to each calibrated image, accounts for off-axis prediction errors in all regions of the detecting surface, including those still problematic after the radial correction is performed. The correction matrix is calculated by quantitative comparison of predicted and measured images that span the entire detecting surface. The correction matrix was verified for dose-linearity, and its effectiveness was verified on a number of test fields. The 2D correction was employed to retrospectively examine 22 off-axis, asymmetric electronic-compensation breast fields, five intensity-modulated brain fields (moderate-high modulation) manipulated for far off-axis delivery, and 29 intensity-modulated clinical fields of varying complexity in the central portion of the detecting surface. Results: Employing the matrix correction to the off-axis test fields and clinical fields, predicted vs measured portal dose agreement improves by up to 15%, producing up to 10% better agreement than the radial correction in some areas of the detecting surface. Gamma evaluation analyses (3 mm, 3% global, 10% dose threshold) of predicted vs measured portal dose images demonstrate pass rate improvement of up to 75% with the matrix correction, producing pass rates that are up to 30% higher than those resulting from the radial correction technique alone. As in the 1D correction case, the 2D algorithm leaves the portal dosimetry process virtually unchanged in the central portion of the detector, and thus these correction algorithms are not needed for centrally located fields of moderate size (at least in the case of 6 MV beam energy). Conclusion: The 2D correction improves the portal dosimetry results for those fields for which the 1D correction proves insufficient, especially in the in-plane, off-axis regions of the detector. This 2D correction neglects the relatively smaller discrepancies that may be caused by backscatter from nonuniform machine components downstream from the detecting layer.
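    The core of a 2D matrix correction can be sketched in a few lines. The smooth radial disagreement below is a made-up stand-in for the measured off-axis errors; in practice the matrix would be built from calibration images spanning the whole detecting surface:

```python
import numpy as np

# Hypothetical predicted and "measured" calibration images (64 x 64 pixels).
y, x = np.mgrid[-0.5:0.5:64j, -0.5:0.5:64j]
predicted_cal = np.ones((64, 64))
measured_cal = predicted_cal * (1.0 + 0.15 * (x**2 + y**2))  # grows off-axis

# Element-wise 2D correction matrix from the calibration pair:
correction = predicted_cal / measured_cal

# Applied to a subsequent measured image exhibiting the same off-axis behavior:
measured_new = 0.8 * (1.0 + 0.15 * (x**2 + y**2))
corrected = measured_new * correction     # flat 0.8 everywhere
```

    Because the correction is a full 2D map rather than a radial profile, it also absorbs azimuthally asymmetric errors, which is the limitation of the earlier 1D approach that this study addresses.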

  20. Maximum entropy formalism for the analytic continuation of matrix-valued Green's functions

    NASA Astrophysics Data System (ADS)

    Kraberger, Gernot J.; Triebl, Robert; Zingl, Manuel; Aichhorn, Markus

    2017-10-01

    We present a generalization of the maximum entropy method to the analytic continuation of matrix-valued Green's functions. To treat off-diagonal elements correctly based on Bayesian probability theory, the entropy term has to be extended for spectral functions that are possibly negative in some frequency ranges. In that way, all matrix elements of the Green's function matrix can be analytically continued; we introduce a computationally cheap element-wise method for this purpose. However, this method cannot ensure important constraints on the mathematical properties of the resulting spectral functions, namely positive semidefiniteness and Hermiticity. To improve on this, we present a full matrix formalism, where all matrix elements are treated simultaneously. We show the capabilities of these methods using insulating and metallic dynamical mean-field theory (DMFT) Green's functions as test cases. Finally, we apply the methods to realistic material calculations for LaTiO3, where off-diagonal matrix elements in the Green's function appear due to the distorted crystal structure.

  1. The free and forced vibrations of structures using the finite dynamic element method. Ph.D. Thesis, Aug. 1991 Final Report

    NASA Technical Reports Server (NTRS)

    Fergusson, Neil J.

    1992-01-01

    In addition to an extensive review of the literature on exact and corrective displacement based methods of vibration analysis, a few theorems are proven concerning the various structural matrices involved in such analyses. In particular, the consistent mass matrix and the quasi-static mass matrix are shown to be equivalent, in the sense that the terms in their respective Taylor expansions are proportional to one another, and that they both lead to the same dynamic stiffness matrix when used with the appropriate stiffness matrix.

  2. Information matrix estimation procedures for cognitive diagnostic models.

    PubMed

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
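    The sandwich structure the abstract refers to, an A⁻¹BA⁻¹ combination of a "bread" information matrix and a "meat" score cross-product matrix, can be illustrated outside the CDM context with a generic sketch: ordinary least squares with heteroskedastic noise, where the robust and naive covariance estimates differ. This is an analogy, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated regression with heteroskedastic noise (illustrative only).
n = 2000
x = rng.uniform(-1, 1, n)
X = np.column_stack([np.ones(n), x])
sigma = 0.5 + 0.5 * np.abs(x)                  # noise grows with |x|
y = 1.0 + 2.0 * x + sigma * rng.standard_normal(n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

A = X.T @ X                                    # "bread": information-like matrix
B = (X * resid[:, None] ** 2).T @ X            # "meat": score cross-products
A_inv = np.linalg.inv(A)

cov_naive = A_inv * resid.var()                # assumes correct homoskedastic model
cov_sandwich = A_inv @ B @ A_inv               # robust to this misspecification
```

    Under correct specification the bread and meat agree and the two estimates coincide asymptotically; under misspecification only the sandwich form remains consistent, mirroring the simulation findings for CDMs quoted above.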

  3. Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners

    DOE PAGES

    Li, Ruipeng; Saad, Yousef

    2017-08-01

This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman--Morrison--Woodbury formula, which yields two variants of the preconditioning methods. The low-rank expansion is computed by the Lanczos procedure with reorthogonalizations. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.
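
    The Sherman--Morrison--Woodbury identity at the heart of the low-rank correction can be sketched as follows. This is a dense NumPy illustration of the algebra only, not the paper's distributed implementation; the splitting `A = B + U C V` and all names are illustrative.

```python
import numpy as np

def smw_apply(B_inv_apply, U, V, C, b):
    """Apply (B + U C V)^{-1} to b via the Sherman-Morrison-Woodbury formula:
    (B + U C V)^{-1} = B^{-1} - B^{-1} U (C^{-1} + V B^{-1} U)^{-1} V B^{-1}.
    B_inv_apply: callable applying B^{-1} (e.g., decoupled per-subdomain solves)."""
    Binv_b = B_inv_apply(b)
    Binv_U = B_inv_apply(U)
    small = np.linalg.inv(C) + V @ Binv_U      # small k x k "capacitance" matrix
    y = np.linalg.solve(small, V @ Binv_b)
    return Binv_b - Binv_U @ y

# toy check: A = B + U C V with a diagonal B standing in for the decoupled DD part
rng = np.random.default_rng(0)
n, k = 8, 2
B = np.diag(rng.uniform(1.0, 2.0, n))
U = rng.standard_normal((n, k))
V = rng.standard_normal((k, n))
C = np.eye(k)
A = B + U @ C @ V
b = rng.standard_normal(n)
x = smw_apply(lambda z: np.linalg.solve(B, z), U, V, C, b)   # x solves A x = b
```

    In the actual preconditioner, `B_inv_apply` would be approximate subdomain solves and the low-rank factors would come from the Lanczos procedure mentioned in the abstract.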

  5. Preprocessing with image denoising and histogram equalization for endoscopy image analysis using texture analysis.

    PubMed

    Hiroyasu, Tomoyuki; Hayashinuma, Katsutoshi; Ichikawa, Hiroshi; Yagi, Nobuaki

    2015-08-01

    A preprocessing method for endoscopy image analysis using texture analysis is proposed. In a previous study, we proposed a feature value that combines a co-occurrence matrix and a run-length matrix to analyze the extent of early gastric cancer from images taken with narrow-band imaging endoscopy. However, the obtained feature value does not identify lesion zones correctly due to the influence of noise and halation. Therefore, we propose a new preprocessing method with a non-local means filter for de-noising and contrast limited adaptive histogram equalization. We have confirmed that the pattern of gastric mucosa in images can be improved by the proposed method. Furthermore, the lesion zone is shown more correctly by the obtained color map.

  6. Correction of spin diffusion during iterative automated NOE assignment

    NASA Astrophysics Data System (ADS)

    Linge, Jens P.; Habeck, Michael; Rieping, Wolfgang; Nilges, Michael

    2004-04-01

Indirect magnetization transfer increases the observed nuclear Overhauser enhancement (NOE) between two protons in many cases, leading to an underestimation of target distances. Wider distance bounds are necessary to account for this error. However, this leads to a loss of information and may reduce the quality of the structures generated from the inter-proton distances. Although several methods for spin diffusion correction have been published, they are often not employed to derive distance restraints. This prompted us to write a user-friendly and CPU-efficient method to correct for spin diffusion that is fully integrated in our program ambiguous restraints for iterative assignment (ARIA). ARIA thus allows automated iterative NOE assignment and structure calculation with spin-diffusion-corrected distances. The method relies on numerical integration, by matrix squaring and sparse matrix techniques, of the coupled differential equations that govern relaxation. We derive a correction factor for the distance restraints from calculated NOE volumes and inter-proton distances. To evaluate the impact of our spin diffusion correction, we tested the new calibration process extensively with data from the Pleckstrin homology (PH) domain of Mus musculus β-spectrin. By comparing structures refined with and without spin diffusion correction, we show that spin-diffusion-corrected distance restraints give rise to structures of higher quality (notably fewer NOE violations and a more regular Ramachandran map). Furthermore, spin diffusion correction permits the use of tighter error bounds, which improves the distinction between signal and noise in an automated NOE assignment scheme.
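
    The matrix-squaring integration of the coupled relaxation (Solomon) equations can be sketched as below. The toy relaxation matrix and the isolated-pair correction factor are illustrative assumptions, not ARIA's actual calibration.

```python
import numpy as np

def expm_neg_symmetric(R, t):
    """Reference exp(-R t) for symmetric R via eigendecomposition."""
    w, Q = np.linalg.eigh(R)
    return (Q * np.exp(-w * t)) @ Q.T

def expm_neg_squaring(R, t, k=20):
    """exp(-R t) ~ (I - R t / 2^k)^(2^k): repeated matrix squaring, the cheap
    way to integrate the coupled relaxation equations."""
    M = np.eye(R.shape[0]) - R * (t / 2**k)
    for _ in range(k):
        M = M @ M
    return M

# toy symmetric relaxation matrix for 3 spins (off-diagonals: cross-relaxation)
R = np.array([[ 1.0,  -0.2,  -0.05],
              [-0.2,   1.2,  -0.3 ],
              [-0.05, -0.3,   0.9 ]])
tau = 0.3                                    # mixing time
V_full = expm_neg_squaring(R, tau)           # NOE volumes including spin diffusion
# isolated two-spin approximation for the pair (0, 1): keep only that pair's rates
R_pair = R[np.ix_([0, 1], [0, 1])]
V_pair = expm_neg_squaring(R_pair, tau)
corr_01 = V_full[0, 1] / V_pair[0, 1]        # correction factor for that restraint
```

    The ratio `corr_01` plays the role of the abstract's restraint correction factor: it compares the full-matrix NOE volume with the isolated two-spin volume for the same pair.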

  7. ANALYSES OF FISH TISSUE BY VACUUM DISTILLATION/GAS CHROMATOGRAPHY/MASS SPECTROMETRY

    EPA Science Inventory

    The analyses of fish tissue using VD/GC/MS with surrogate-based matrix corrections is described. Techniques for equilibrating surrogate and analyte spikes with a tissue matrix are presented, and equilibrated spiked samples are used to document method performance. The removal of a...

  8. Active Correction of Aperture Discontinuities-Optimized Stroke Minimization. I. A New Adaptive Interaction Matrix Algorithm

    NASA Astrophysics Data System (ADS)

    Mazoyer, J.; Pueyo, L.; N'Diaye, M.; Fogarty, K.; Zimmerman, N.; Leboulleux, L.; St. Laurent, K. E.; Soummer, R.; Shaklan, S.; Norman, C.

    2018-01-01

Future searches for biomarkers on habitable exoplanets will rely on telescope instruments that achieve extremely high contrast at small planet-to-star angular separations. Coronagraphy is a promising starlight suppression technique, providing excellent contrast and throughput for off-axis sources on clear apertures. However, the complexity of space- and ground-based telescope apertures continues to increase over time, owing to the combination of primary mirror segmentation, the secondary mirror, and its support structures. These discontinuities in the telescope aperture limit coronagraph performance. In this paper, we present ACAD-OSM, a novel active method to correct for the diffractive effects of aperture discontinuities in the final image plane of a coronagraph. Active methods use one or several deformable mirrors that are controlled with an interaction matrix to correct for the aberrations in the pupil. However, they are often limited by the amount of aberrations introduced by aperture discontinuities. This algorithm relies on the recalibration of the interaction matrix during the correction process to overcome this limitation. We first describe the ACAD-OSM technique and compare it to previous active methods for the correction of aperture discontinuities. We then show its performance in terms of contrast and off-axis throughput for static aperture discontinuities (segmentation, struts) and for some aberrations evolving over the life of the instrument (residual phase aberrations, artifacts in the aperture, misalignments in the coronagraph design). This technique can now reach the Earth-like planet detection threshold of 10^10 contrast on any given aperture over at least a 10% spectral bandwidth, with several coronagraph designs.

  9. Simultaneous quaternion estimation (QUEST) and bias determination

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1989-01-01

    Tests of a new method for the simultaneous estimation of spacecraft attitude and sensor biases, based on a quaternion estimation algorithm minimizing Wahba's loss function are presented. The new method is compared with a conventional batch least-squares differential correction algorithm. The estimates are based on data from strapdown gyros and star trackers, simulated with varying levels of Gaussian noise for both inertially-fixed and Earth-pointing reference attitudes. Both algorithms solve for the spacecraft attitude and the gyro drift rate biases. They converge to the same estimates at the same rate for inertially-fixed attitude, but the new algorithm converges more slowly than the differential correction for Earth-pointing attitude. The slower convergence of the new method for non-zero attitude rates is believed to be due to the use of an inadequate approximation for a partial derivative matrix. The new method requires about twice the computational effort of the differential correction. Improving the approximation for the partial derivative matrix in the new method is expected to improve its convergence at the cost of increased computational effort.
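
    Wahba's loss is commonly minimized via Davenport's eigenvalue problem, which QUEST solves efficiently. Below is a minimal dense sketch, assuming a scalar-last quaternion convention and using a full `eigh` in place of QUEST's characteristic-polynomial shortcut; the test vectors are synthetic.

```python
import numpy as np

def quat_to_dcm(q):
    """Direction cosine matrix from quaternion q = [qx, qy, qz, qs] (scalar last)."""
    v, s = q[:3], q[3]
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return (s**2 - v @ v) * np.eye(3) + 2.0 * np.outer(v, v) - 2.0 * s * vx

def q_method(v_body, v_ref, w):
    """Davenport's q-method: the quaternion minimizing Wahba's loss is the
    eigenvector of K for its largest eigenvalue (QUEST solves this same
    eigenproblem without a full diagonalization)."""
    B = sum(wi * np.outer(vb, vr) for wi, vb, vr in zip(w, v_body, v_ref))
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = B + B.T - np.trace(B) * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = np.trace(B)
    vals, vecs = np.linalg.eigh(K)
    return vecs[:, -1]              # eigenvector of the largest eigenvalue

# toy check: recover a known attitude from noise-free star-tracker directions
q_true = np.array([0.1, -0.2, 0.3, 0.927])
q_true /= np.linalg.norm(q_true)
A_true = quat_to_dcm(q_true)
rng = np.random.default_rng(1)
v_ref = [v / np.linalg.norm(v) for v in rng.standard_normal((5, 3))]
v_body = [A_true @ v for v in v_ref]
q_est = q_method(v_body, v_ref, np.ones(5))
```

    Since `q` and `-q` represent the same attitude, the recovered quaternion is checked through its direction cosine matrix.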

  10. Mitigation of Faraday rotation in ALOS-2/PALSAR-2 full polarimetric SAR imageries

    NASA Astrophysics Data System (ADS)

    Mohanty, Shradha; Singh, Gulab

    2016-05-01

    The ionosphere, which extends from 50-450 kms in earth's atmosphere, is a particularly important region with regards electromagnetic wave propagation and radio communications in the L-band and lower frequencies. These ions interact with the traversing electromagnetic wave and cause rotation of polarization of the radar signal. In this paper, a potentially computable method for quantifying Faraday rotation (FR), is discussed with the knowledge of full polarimetric ALOS/PALSAR data and ALOS-2/PALSAR-2 data. For a well calibrated monostatic, full-pol ALOS-2/PALSAR-2 data, the reciprocal symmetry of the received scattering matrix is violated due to FR. Apart from FR, other system parameters like residual system noise, channel amplitude, phase imbalance and cross-talk, also account for the non-symmetry. To correct for the FR effect, firstly the noise correction was performed. PALSAR/PALSAR-2 data was converted into 4×4 covariance matrix to calculate the coherence between cross-polarized elements. Covariance matrix was modified by the coherence factor. For FR corrections, the covariance matrix was converted into 4×4 coherency matrix. The elements of coherency matrix were used to estimate FR angle and correct for FR. Higher mean FR values during ALOS-PALSAR measurements can be seen in regions nearer to the equator and the values gradually decrease with increase in latitude. Moreover, temporal variations in FR can also be noticed over different years (2006-2010), with varying sunspot activities for the Niigata, Japan test site. With increasing sunspot activities expected during ALOS-2/PALSAR-2 observations, more striping effects were observed over Mumbai, India. This data has also been FR corrected, with mean FR values of about 8°, using the above mentioned technique.
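
    The FR angle can be estimated directly from the asymmetry that Faraday rotation induces in the scattering matrix. The sketch below uses a real-valued 2×2 toy model (actual PolSAR data are complex, and the paper works through the 4×4 coherency matrix); the scattering matrix values are illustrative.

```python
import numpy as np

def apply_fr(S, omega):
    """One-way Faraday rotation on both transmit and receive:
    M = R(omega) @ S @ R(omega), for a reciprocal (symmetric) S."""
    c, s = np.cos(omega), np.sin(omega)
    R = np.array([[c, s], [-s, c]])
    return R @ S @ R

def estimate_fr(M):
    """For real symmetric S one finds M12 - M21 = sin(2w)(S11 + S22) and
    M11 + M22 = cos(2w)(S11 + S22), hence w = 0.5*atan2(M12 - M21, M11 + M22)."""
    return 0.5 * np.arctan2(M[0, 1] - M[1, 0], M[0, 0] + M[1, 1])

S = np.array([[1.0, 0.2], [0.2, 0.6]])   # reciprocal scattering matrix (Shv = Svh)
omega = np.deg2rad(8.0)                  # e.g. the ~8 deg mean FR reported over Mumbai
M = apply_fr(S, omega)                   # FR breaks the symmetry of M
omega_hat = estimate_fr(M)
S_corr = apply_fr(M, -omega_hat)         # de-rotation restores the symmetric S
```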

11. Higgs boson mass corrections in the μνSSM with effective potential methods

    NASA Astrophysics Data System (ADS)

    Zhang, Hai-Bin; Feng, Tai-Fu; Yang, Xiu-Yi; Zhao, Shu-Min; Ning, Guo-Zhu

    2017-04-01

To solve the μ problem of the MSSM, the "μ from ν" supersymmetric standard model (μνSSM) introduces three singlet right-handed neutrino superfields ν̂_i^c, which lead to mixing of the neutral components of the Higgs doublets with the sneutrinos, producing a relatively large CP-even neutral scalar mass matrix. In this work, we analytically diagonalize the CP-even neutral scalar mass matrix and analyze in detail how the mixing impacts the lightest Higgs boson mass. We also give an approximate expression for the lightest Higgs boson mass. Simultaneously, we consider the radiative corrections to the Higgs boson masses with effective potential methods.

  12. Matrix effect and correction by standard addition in quantitative liquid chromatographic-mass spectrometric analysis of diarrhetic shellfish poisoning toxins.

    PubMed

    Ito, Shinya; Tsukada, Katsuo

    2002-01-11

An evaluation of the feasibility of liquid chromatography-mass spectrometry (LC-MS) with atmospheric pressure ionization was made for the quantitation of four diarrhetic shellfish poisoning toxins, okadaic acid, dinophysistoxin-1, pectenotoxin-6 and yessotoxin, in scallops. When LC-MS was applied to the analysis of scallop extracts, large signal suppressions were observed due to coeluting substances from the column. To compensate for this matrix signal suppression, the standard addition method was applied: the sample is first analyzed as is, and then analyzed again after the addition of calibration standards. Although this method requires two LC-MS runs per analysis, it effectively corrected the quantitative errors.
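
    A minimal sketch of standard-addition quantitation, with several addition levels and synthetic numbers (the paper itself uses two LC-MS runs): because the matrix suppresses the standard and the analyte equally, the unknown concentration is recovered from the x-intercept of response versus added concentration.

```python
import numpy as np

def standard_addition(added_conc, responses):
    """Fit response vs. added concentration; the unknown concentration equals
    the magnitude of the x-intercept of the fitted line."""
    slope, intercept = np.polyfit(added_conc, responses, 1)  # highest degree first
    return intercept / slope

# toy example: true concentration 2.0; matrix suppresses sensitivity to 0.4/unit
true_conc, sens = 2.0, 0.4
added = np.array([0.0, 1.0, 2.0, 4.0])
signal = sens * (true_conc + added)     # suppression scales all points equally
conc_hat = standard_addition(added, signal)
```

    An external calibration line (slope measured without the matrix) would over- or under-estimate here, which is exactly the error the standard addition method avoids.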

  13. On the Prediction of Mechanical Behavior of Particulate Composites Using an Improved Mori-Tanaka Method

    DTIC Science & Technology

    1997-01-01

perturbed strain, [L/L] ... constrained strain, [L/L] ... eigenstrain, [L/L] ... corrected eigenstrain of phase-r material, [L/L] ... uncorrected eigenstrain of phase-r material, [L/L] ... correction matrix of phase-r material ... eigenstrains, where S_ijkl is known as the Eshelby tensor. The tensor is a function of the matrix Poisson ratio and the shape of the inclusion

  14. A loop-counting method for covariate-corrected low-rank biclustering of gene-expression and genome-wide association study data.

    PubMed

    Rangan, Aaditya V; McGrouther, Caroline C; Kelsoe, John; Schork, Nicholas; Stahl, Eli; Zhu, Qian; Krishnan, Arjun; Yao, Vicky; Troyanskaya, Olga; Bilaloglu, Seda; Raghavan, Preeti; Bergen, Sarah; Jureus, Anders; Landen, Mikael

    2018-05-14

    A common goal in data-analysis is to sift through a large data-matrix and detect any significant submatrices (i.e., biclusters) that have a low numerical rank. We present a simple algorithm for tackling this biclustering problem. Our algorithm accumulates information about 2-by-2 submatrices (i.e., 'loops') within the data-matrix, and focuses on rows and columns of the data-matrix that participate in an abundance of low-rank loops. We demonstrate, through analysis and numerical-experiments, that this loop-counting method performs well in a variety of scenarios, outperforming simple spectral methods in many situations of interest. Another important feature of our method is that it can easily be modified to account for aspects of experimental design which commonly arise in practice. For example, our algorithm can be modified to correct for controls, categorical- and continuous-covariates, as well as sparsity within the data. We demonstrate these practical features with two examples; the first drawn from gene-expression analysis and the second drawn from a much larger genome-wide-association-study (GWAS).
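
    The loop-counting idea can be illustrated with a brute-force sketch: score each row by how many of its 2-by-2 submatrices ('loops') have near-zero determinant, as expected inside a low-rank bicluster. The planted-block test data, threshold, and O(m²n²) enumeration are illustrative only, not the authors' algorithm.

```python
import numpy as np

def loop_scores(X, eps=1e-8):
    """Score each row by the number of 2x2 loops (row pair x column pair)
    whose determinant is ~0, i.e., loops consistent with a rank-1 submatrix.
    Brute force for illustration; the paper accumulates this far more cheaply."""
    m, n = X.shape
    scores = np.zeros(m)
    for i in range(m):
        for j in range(i + 1, m):
            for k in range(n):
                for l in range(k + 1, n):
                    det = X[i, k] * X[j, l] - X[i, l] * X[j, k]
                    if abs(det) < eps:
                        scores[i] += 1
                        scores[j] += 1
    return scores

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 20))
u, v = rng.standard_normal(6), rng.standard_normal(6)
X[:6, :6] = np.outer(u, v)      # plant an exactly rank-1 6x6 bicluster
s = loop_scores(X)              # planted rows accumulate many zero-det loops
```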

  15. On using smoothing spline and residual correction to fuse rain gauge observations and remote sensing data

    NASA Astrophysics Data System (ADS)

    Huang, Chengcheng; Zheng, Xiaogu; Tait, Andrew; Dai, Yongjiu; Yang, Chi; Chen, Zhuoqi; Li, Tao; Wang, Zhonglei

    2014-01-01

A partial thin-plate smoothing spline model is used to construct the trend surface. Correction of the spline-estimated trend surface is often necessary in practice. The Cressman weight is modified and applied in the residual correction, and the modified Cressman weight performs better than the original. A method for estimating the error covariance matrix of the gridded field is also provided.
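
    The residual-correction step with Cressman-type weights might look like the following 1-D sketch. The classical Cressman weight is used here, since the paper's modification is not given in the abstract; all numbers are illustrative.

```python
import numpy as np

def cressman_weights(d, R):
    """Classical Cressman weight: w = (R^2 - d^2)/(R^2 + d^2), clipped to 0
    beyond the influence radius R. (The paper uses a modified form.)"""
    w = (R**2 - d**2) / (R**2 + d**2)
    return np.maximum(w, 0.0)

def residual_correction(x_grid, x_obs, resid, R):
    """Add a distance-weighted average of gauge residuals (observation minus
    spline trend) back onto the gridded trend surface."""
    corr = np.zeros(len(x_grid))
    for g, xg in enumerate(x_grid):
        w = cressman_weights(np.abs(x_obs - xg), R)
        if w.sum() > 0:
            corr[g] = (w * resid).sum() / w.sum()
    return corr

# 1-D toy: the trend surface underestimates by 1.0 at the gauge at x = 0.5
x_obs = np.array([0.2, 0.5, 0.8])
resid = np.array([0.0, 1.0, 0.0])        # gauge observation minus trend surface
x_grid = np.linspace(0, 1, 11)
corr = residual_correction(x_grid, x_obs, resid, R=0.3)
```

    The corrected field (trend plus `corr`) then matches each gauge closely while decaying smoothly away from it.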

  16. A-TEEMTM, a new molecular fingerprinting technique: simultaneous absorbance-transmission and fluorescence excitation-emission matrix method

    NASA Astrophysics Data System (ADS)

    Quatela, Alessia; Gilmore, Adam M.; Steege Gall, Karen E.; Sandros, Marinella; Csatorday, Karoly; Siemiarczuk, Alex; (Ben Yang, Boqian; Camenen, Loïc

    2018-04-01

    We investigate the new simultaneous absorbance-transmission and fluorescence excitation-emission matrix method for rapid and effective characterization of the varying components from a mixture. The absorbance-transmission and fluorescence excitation-emission matrix method uniquely facilitates correction of fluorescence inner-filter effects to yield quantitative fluorescence spectral information that is largely independent of component concentration. This is significant because it allows one to effectively monitor quantitative component changes using multivariate methods and to generate and evaluate spectral libraries. We present the use of this novel instrument in different fields: i.e. tracking changes in complex mixtures including natural water, wine as well as monitoring stability and aggregation of hormones for biotherapeutics.

  17. Trust in Leadership DEOCS 4.1 Construct Validity Summary

    DTIC Science & Technology

    2017-08-01

Item statistics (Corrected Item-Total Correlation, Cronbach's Alpha if Item Deleted) are reported for four-point scale items such as "I can depend on my immediate supervisor to meet..." ... (1974) were used to assess the fit between the data and the factor. The BTS hypothesizes that the correlation matrix is an identity matrix. ... to reject the null hypothesis that the correlation matrix is an identity, and to conclude that factor analysis is an appropriate method to

  18. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    PubMed

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitudes smaller than the number of random effect levels, the computational requirements for our method should be reduced.

  19. Second-order standard addition for deconvolution and quantification of fatty acids of fish oil using GC-MS.

    PubMed

    Vosough, Maryam; Salemi, Amir

    2007-08-15

    In the present work two second-order calibration methods, generalized rank annihilation method (GRAM) and multivariate curve resolution-alternating least square (MCR-ALS) have been applied on standard addition data matrices obtained by gas chromatography-mass spectrometry (GC-MS) to characterize and quantify four unsaturated fatty acids cis-9-hexadecenoic acid (C16:1omega7c), cis-9-octadecenoic acid (C18:1omega9c), cis-11-eicosenoic acid (C20:1omega9) and cis-13-docosenoic acid (C22:1omega9) in fish oil considering matrix interferences. With these methods, the area does not need to be directly measured and predictions are more accurate. Because of non-trilinear conditions of GC-MS data matrices, at first MCR-ALS and GRAM have been used on uncorrected data matrices. In comparison to MCR-ALS, biased and imprecise concentrations (%R.S.D.=27.3) were obtained using GRAM without correcting the retention time-shift. As trilinearity is the essential requirement for implementing GRAM, the data need to be corrected. Multivariate rank alignment objectively corrects the run-to-run retention time variations between sample GC-MS data matrix and a standard addition GC-MS data matrix. Then, two second-order algorithms have been compared with each other. The above algorithms provided similar mean predictions, pure concentrations and spectral profiles. The results validated using standard mass spectra of target compounds. In addition, some of the quantification results were compared with the concentration values obtained using the selected mass chromatograms. As in the case of strong peak-overlap and the matrix effect, the classical univariate method of determination of the area of the peaks of the analytes will fail, the "second-order advantage" has solved this problem successfully.

  20. Multireference configuration interaction theory using cumulant reconstruction with internal contraction of density matrix renormalization group wave function.

    PubMed

    Saitow, Masaaki; Kurashige, Yuki; Yanai, Takeshi

    2013-07-28

    We report development of the multireference configuration interaction (MRCI) method that can use active space scalable to much larger size references than has previously been possible. The recent development of the density matrix renormalization group (DMRG) method in multireference quantum chemistry offers the ability to describe static correlation in a large active space. The present MRCI method provides a critical correction to the DMRG reference by including high-level dynamic correlation through the CI treatment. When the DMRG and MRCI theories are combined (DMRG-MRCI), the full internal contraction of the reference in the MRCI ansatz, including contraction of semi-internal states, plays a central role. However, it is thought to involve formidable complexity because of the presence of the five-particle rank reduced-density matrix (RDM) in the Hamiltonian matrix elements. To address this complexity, we express the Hamiltonian matrix using commutators, which allows the five-particle rank RDM to be canceled out without any approximation. Then we introduce an approximation to the four-particle rank RDM by using a cumulant reconstruction from lower-particle rank RDMs. A computer-aided approach is employed to derive the exceedingly complex equations of the MRCI in tensor-contracted form and to implement them into an efficient parallel computer code. This approach extends to the size-consistency-corrected variants of MRCI, such as the MRCI+Q, MR-ACPF, and MR-AQCC methods. We demonstrate the capability of the DMRG-MRCI method in several benchmark applications, including the evaluation of single-triplet gap of free-base porphyrin using 24 active orbitals.

  1. Development of a three-dimensional correction method for optical distortion of flow field inside a liquid droplet.

    PubMed

    Gim, Yeonghyeon; Ko, Han Seo

    2016-04-15

In this Letter, a three-dimensional (3D) optical correction method, verified by simulation, was developed to reconstruct droplet-based flow fields. In the simulation, a synthetic phantom was reconstructed using a simultaneous multiplicative algebraic reconstruction technique with three detectors positioned around the synthetic object (represented by the phantom) at offset angles of 30° relative to each other. Additionally, a projection matrix was developed using the ray tracing method. If the phantom is in liquid, the image of the phantom can be distorted because the light passes through a convex liquid-vapor interface. Because of this optical distortion, the projection matrix used to reconstruct the 3D field should be built from the revision ray instead of the original projection ray. The revision ray can be obtained from the refraction occurring at the surface of the liquid. As a result, the error in the reconstructed field of the phantom could be reduced using the developed optical correction method. In addition, the developed optical method was applied to a Taylor cone caused by the high voltage between the droplet and the substrate.
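
    The refraction at the liquid-vapor interface that produces the revision ray can be computed with the standard vector form of Snell's law; the droplet geometry and refractive indices below are illustrative assumptions.

```python
import numpy as np

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n (pointing
    toward the incident side); eta = n1/n2. Returns None at total internal
    reflection. Standard vector form of Snell's law."""
    c1 = -np.dot(n, d)                      # cos(theta_incident)
    k = 1.0 - eta**2 * (1.0 - c1**2)
    if k < 0.0:
        return None                         # total internal reflection
    return eta * d + (eta * c1 - np.sqrt(k)) * n

# ray leaving a water-like droplet (n1 = 1.33) into air (n2 = 1.0)
theta_i = np.deg2rad(20.0)
d = np.array([np.sin(theta_i), -np.cos(theta_i), 0.0])   # unit incident ray
n = np.array([0.0, 1.0, 0.0])                            # surface normal
t = refract(d, n, 1.33 / 1.0)               # refracted (revision) ray direction
```

    Tracing such refracted rays instead of straight projection rays is what corrects the projection matrix for the curved interface.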

  2. Level repulsion and band sorting in phononic crystals

    NASA Astrophysics Data System (ADS)

    Lu, Yan; Srivastava, Ankit

    2018-02-01

    In this paper we consider the problem of avoided crossings (level repulsion) in phononic crystals and suggest a computationally efficient strategy to distinguish them from normal cross points. This process is essential for the correct sorting of the phononic bands and, subsequently, for the accurate determination of mode continuation, group velocities, and emergent properties which depend on them such as thermal conductivity. Through explicit phononic calculations using generalized Rayleigh quotient, we identify exact locations of exceptional points in the complex wavenumber domain which results in level repulsion in the real domain. We show that in the vicinity of the exceptional point the relevant phononic eigenvalue surfaces resemble the surfaces of a 2 by 2 parameter-dependent matrix. Along a closed loop encircling the exceptional point we show that the phononic eigenvalues are exchanged, just as they are for the 2 by 2 matrix case. However, the behavior of the associated eigenvectors is shown to be more complex in the phononic case. Along a closed loop around an exceptional point, we show that the eigenvectors can flip signs multiple times unlike a 2 by 2 matrix where the flip of sign occurs only once. Finally, we exploit these eigenvector sign flips around exceptional points to propose a simple and efficient method of distinguishing them from normal crosses and of correctly sorting the band-structure. Our proposed method is roughly an order-of-magnitude faster than the zoom-in method and correctly identifies > 96% of the cases considered. Both its speed and accuracy can be further improved and we suggest some ways of achieving this. Our method is general and, as such, would be directly applicable to other eigenvalue problems where the eigenspectrum needs to be correctly sorted.
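
    A generic eigenvector-continuity sort, which correctly follows bands through a true crossing, can be sketched as below; the paper's exceptional-point criterion refines exactly this kind of sorting for avoided crossings. The 2-by-2 test Hamiltonian is illustrative.

```python
import numpy as np

def sort_bands(H_of_k, ks):
    """Sort eigenvalues along a parameter path by matching eigenvectors to
    those of the previous step (greedy max-overlap assignment; adequate here
    because the overlaps are unambiguous)."""
    vecs_prev = None
    bands = []
    for k in ks:
        w, V = np.linalg.eigh(H_of_k(k))
        if vecs_prev is not None:
            overlap = np.abs(vecs_prev.conj().T @ V)   # |<v_prev_i | v_j>|
            order = np.argmax(overlap, axis=1)
            w, V = w[order], V[:, order]
        bands.append(w)
        vecs_prev = V
    return np.array(bands)                  # shape (len(ks), n_bands)

# a true crossing: ascending order kinks at k = 0, continuity sorting does not
H = lambda k: np.array([[k, 0.0], [0.0, -k]])
ks = np.linspace(-1, 1, 20)                 # grid avoiding the degeneracy at k = 0
bands = sort_bands(H, ks)                   # bands stay linear: k and -k
```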

  3. Joint release rate estimation and measurement-by-measurement model correction for atmospheric radionuclide emission in nuclear accidents: An application to wind tunnel experiments.

    PubMed

    Li, Xinpeng; Li, Hong; Liu, Yun; Xiong, Wei; Fang, Sheng

    2018-03-05

The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases in a heterogeneous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements and the extendibility of the method are also investigated. The results demonstrate that this method effectively corrects the model biases, and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Single Transducer Ultrasonic Imaging Method that Eliminates the Effect of Plate Thickness Variation in the Image

    NASA Technical Reports Server (NTRS)

    Roth, Don J.

    1996-01-01

    This article describes a single transducer ultrasonic imaging method that eliminates the effect of plate thickness variation in the image. The method thus isolates ultrasonic variations due to material microstructure. The use of this method can result in significant cost savings because the ultrasonic image can be interpreted correctly without the need for machining to achieve precise thickness uniformity during nondestructive evaluations of material development. The method is based on measurement of ultrasonic velocity. Images obtained using the thickness-independent methodology are compared with conventional velocity and c-scan echo peak amplitude images for monolithic ceramic (silicon nitride), metal matrix composite and polymer matrix composite materials. It was found that the thickness-independent ultrasonic images reveal and quantify correctly areas of global microstructural (pore and fiber volume fraction) variation due to the elimination of thickness effects. The thickness-independent ultrasonic imaging method described in this article is currently being commercialized under a cooperative agreement between NASA Lewis Research Center and Sonix, Inc.

  5. Recognition and defect detection of dot-matrix text via variation-model based learning

    NASA Astrophysics Data System (ADS)

    Ohyama, Wataru; Suzuki, Koushi; Wakabayashi, Tetsushi

    2017-03-01

An algorithm for recognition and defect detection of dot-matrix text printed on products is proposed. Extraction and recognition of dot-matrix text involves several difficulties not present in standard camera-based OCR: the appearance of dot-matrix characters is corrupted and broken by illumination, complex background texture, and other standard characters printed on product packages. We propose a dot-matrix text extraction and recognition method that does not require any user interaction. The method employs detected corner-point locations and classification scores. The results of an evaluation experiment using 250 images show that recall and precision of extraction are 78.60% and 76.03%, respectively. Recognition accuracy for correctly extracted characters is 94.43%. Detecting printing defects in dot-matrix text is also important at the production stage to avoid defective products. We also propose a detection method for printing defects in dot-matrix characters. The method constructs a feature vector whose elements are the classification scores of each character class and employs a support vector machine to classify four types of printing defect. The detection accuracy of the proposed method is 96.68%.

  6. 3-D Magnetotelluric Forward Modeling And Inversion Incorporating Topography By Using Vector Finite-Element Method Combined With Divergence Corrections Based On The Magnetic Field (VFEH++)

    NASA Astrophysics Data System (ADS)

    Shi, X.; Utada, H.; Jiaying, W.

    2009-12-01

The vector finite-element method combined with divergence corrections based on the magnetic field H, referred to as the VFEH++ method, is developed to simulate the magnetotelluric (MT) responses of 3-D conductivity models. The advantages of the new VFEH++ method are the use of edge elements to eliminate vector parasites and divergence corrections to explicitly guarantee the divergence-free conditions in the whole modeling domain. 3-D MT topographic responses are modeled using the new VFEH++ method and compared with those calculated by other numerical methods. The results show that MT responses can be modeled with high accuracy using the VFEH++ method. The VFEH++ algorithm is also employed for 3-D MT data inversion incorporating topography. The 3-D MT inverse problem is formulated as a minimization problem of the regularized misfit function. To avoid the huge memory requirement and very long time for computing the Jacobian sensitivity matrix in the Gauss-Newton method, we employ the conjugate gradient (CG) approach to solve the inversion equation. In each iteration of the CG algorithm, the costly computation is the product of the Jacobian sensitivity matrix with a model vector x, or of its transpose with a data vector y, which can be transformed into two pseudo-forward modeling runs. This avoids explicitly calculating and storing the full Jacobian matrix, which leads to considerable savings in the memory required by the inversion program on a PC. The performance of the CG algorithm is illustrated by several typical 3-D models with horizontal and topographic earth surfaces. The results show that the VFEH++ and CG algorithms can be effectively employed for 3-D MT field data inversion.
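
    The matrix-free CG idea, never forming the Jacobian but only applying its action, can be sketched as below. The dense regularized normal equations are a stand-in for the VFEH++ inversion system; in the real code, `apply_A` would invoke two pseudo-forward modeling runs rather than matrix products.

```python
import numpy as np

def cg(apply_A, b, tol=1e-10, max_iter=500):
    """Conjugate gradients with A available only as a matvec callback."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# SPD toy system standing in for the regularized normal equations
rng = np.random.default_rng(0)
J = rng.standard_normal((30, 10))           # never explicitly inverted or stored as J^T J
y = rng.standard_normal(30)
lam = 0.1
apply_A = lambda v: J.T @ (J @ v) + lam * v  # (J^T J + lam I) v, matrix-free
x = cg(apply_A, J.T @ y)
```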

  7. Stray light correction on array spectroradiometers for optical radiation risk assessment in the workplace.

    PubMed

    Barlier-Salsi, A

    2014-12-01

    The European directive 2006/25/EC requires the employer to assess and, if necessary, measure the levels of exposure to optical radiation in the workplace. Array spectroradiometers can measure optical radiation from various types of sources; however, poor stray light rejection affects their accuracy. A stray light correction matrix, using a tunable laser, was developed at the National Institute of Standards and Technology (NIST). As tunable lasers are very expensive, the purpose of this study was to implement this method using only nine low-power lasers, with the other elements of the correction matrix completed by interpolation and extrapolation. The correction efficiency was evaluated by comparing CCD spectroradiometers with and without correction against a scanning double-monochromator device as a reference. Consistent with the findings recorded by NIST, these experiments show that it is possible to reduce the spectral stray light by one to two orders of magnitude. In terms of workplace risk assessment, this spectral stray light correction method helps determine exposure levels, with an acceptable degree of uncertainty, for the majority of workplace situations. The level of uncertainty depends upon the model of spectroradiometer used; the best results are obtained with CCD detectors having an enhanced spectral sensitivity in the UV range. Corrected spectroradiometers therefore require validation against a scanning double-monochromator spectroradiometer before they are used for risk assessment in the workplace.
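    The core of such a correction can be illustrated with a toy sketch, assuming (as in the NIST formulation) that the measured spectrum equals the in-band spectrum plus a linear stray-light contribution; the matrix D and all numbers here are invented for illustration.

```python
import numpy as np

n = 8                                   # number of detector pixels (toy size)
rng = np.random.default_rng(1)

# D holds the stray-light distribution: column j gives the fraction of
# light at wavelength j leaking into the other pixels (zero diagonal).
D = 1e-3 * rng.random((n, n))
np.fill_diagonal(D, 0.0)
A = np.eye(n) + D                       # measured = A @ true

y_true = rng.random(n)                  # "in-band" spectrum
y_meas = A @ y_true                     # spectrum contaminated by stray light

C = np.linalg.inv(A)                    # stray-light correction matrix
y_corr = C @ y_meas                     # corrected spectrum
```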

  8. Multi-spectrometer calibration transfer based on independent component analysis.

    PubMed

    Liu, Yan; Xu, Hao; Xia, Zhenzhen; Gong, Zhiyong

    2018-02-26

    Calibration transfer is indispensable for practical applications of near-infrared (NIR) spectroscopy because of the need for precise and consistent measurements across different spectrometers. In this work, a method for multi-spectrometer calibration transfer based on independent component analysis (ICA) is described. A spectral matrix is first obtained by aligning the spectra measured on different spectrometers. Then, using independent component analysis, the aligned spectral matrix is decomposed into the mixing matrix and the independent components of the different spectrometers. The differences between spectrometers can then be standardized by correcting the coefficients within the independent components. Two NIR datasets, of corn and edible oil samples measured with three and four spectrometers, respectively, were used to test the reliability of the method. The results on both datasets show that spectra measured on different spectrometers can be transferred simultaneously, and that partial least squares (PLS) models built with measurements from one spectrometer can correctly predict samples whose spectra were transferred from another.
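    The standardization idea, correcting coefficients so that different spectrometers share one response space, can be sketched as below. This assumes the mixing matrices have already been estimated (e.g., by ICA); all matrices here are synthetic stand-ins, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(2)
k, n_wl, n_samp = 3, 50, 20

S = rng.random((k, n_samp))               # shared "chemical" sources per sample
P = rng.standard_normal((n_wl, k))        # common spectral profiles
A_master = P                              # master instrument response (mixing)
A_slave = P + 0.1 * rng.standard_normal((n_wl, k))   # slave with drift

X_master = A_master @ S                   # spectra measured on the master
X_slave = A_slave @ S                     # same samples on the slave

# Transfer: map slave spectra into the master's response space by
# swapping the mixing matrices (assumed here to come from ICA).
T = A_master @ np.linalg.pinv(A_slave)
X_transferred = T @ X_slave
```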

  9. Cellular reflectarray antenna and method of making same

    NASA Technical Reports Server (NTRS)

    Romanofsky, Robert R (Inventor)

    2011-01-01

    A method of manufacturing a cellular reflectarray antenna arranged in an m by n matrix of radiating elements for communication with a satellite includes steps of determining a delay φ_m,n for each of said m by n matrix of elements of said cellular reflectarray antenna using sub-steps of: determining the longitude and latitude of operation; determining elevation and azimuth angles of the reflectarray with respect to the satellite and converting to θ₀ and φ₀; determining Δβ_m,n, the pointing vector correction, for a given inter-element spacing and wavelength; determining Δφ_m,n, the spherical wave front correction factor, for a given radius from the central element and/or from measured data from the feed horn; and determining the delay φ_m,n for each of said m by n matrix of elements as a function of Δβ_m,n and Δφ_m,n.
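    A hedged numerical sketch of the two correction terms follows; the geometry, feed model, and all parameter names are illustrative assumptions, not the patent's specification.

```python
import numpy as np

# Hypothetical geometry (all values invented for illustration).
wavelength = 0.03            # metres (10 GHz)
k0 = 2 * np.pi / wavelength
d = 0.015                    # inter-element spacing (half a wavelength)
M = N = 16
theta0, phi0 = np.radians(30.0), np.radians(0.0)   # desired beam direction
f = 0.3                      # assumed feed distance above the central element

m = np.arange(M) - (M - 1) / 2
n = np.arange(N) - (N - 1) / 2
mm, nn = np.meshgrid(m, n, indexing="ij")
x, y = mm * d, nn * d

# Pointing-vector correction: progressive phase steering the beam to (theta0, phi0).
dbeta = -k0 * np.sin(theta0) * (x * np.cos(phi0) + y * np.sin(phi0))

# Spherical-wavefront correction: path difference from the feed to each element.
r = np.sqrt(x**2 + y**2 + f**2)
dphi = k0 * (r - f)

# Required phase delay per element, wrapped to [0, 2*pi).
delay = np.mod(dbeta + dphi, 2 * np.pi)
```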

  10. Cellular reflectarray antenna and method of making same

    NASA Technical Reports Server (NTRS)

    Romanofsky, Robert R (Inventor)

    2010-01-01

    A method of manufacturing a cellular reflectarray antenna arranged in an m by n matrix of radiating elements for communication with a satellite includes steps of determining a delay φ_m,n for each of said m by n matrix of elements of said cellular reflectarray antenna using sub-steps of: determining the longitude and latitude of operation; determining elevation and azimuth angles of the reflectarray with respect to the satellite and converting to θ₀ and φ₀; determining Δβ_m,n, the pointing vector correction, for a given inter-element spacing and wavelength; determining Δφ_m,n, the spherical wave front correction factor, for a given radius from the central element and/or from measured data from the feed horn; and determining the delay φ_m,n for each of said m by n matrix of elements as a function of Δβ_m,n and Δφ_m,n.

  11. A 3D correction method for predicting the readings of a PinPoint chamber on the CyberKnife® M6™ machine

    NASA Astrophysics Data System (ADS)

    Zhang, Yongqian; Brandner, Edward; Ozhasoglu, Cihat; Lalonde, Ron; Heron, Dwight E.; Saiful Huq, M.

    2018-02-01

    The use of small fields in radiation therapy has increased substantially, in particular in stereotactic radiosurgery (SRS) and stereotactic body radiation therapy (SBRT). However, as the field size shrinks, the response of the detector changes more rapidly with field size, and the effects of measurement uncertainties become increasingly significant owing to the lack of lateral charged particle equilibrium, spectral changes as a function of field size, detector choice, and the resulting perturbations of the charged particle fluence. This work presents a novel 3D dose volume-to-point correction method to predict the readings of a 0.015 cc PinPoint chamber (PTW 31014) for both small static fields and composite-field dosimetry formed by the fixed cones on the CyberKnife® M6™ machine. A 3D correction matrix is introduced to link the 3D dose distribution to the response of the PinPoint chamber in water. The parameters of the correction matrix are determined by modeling its 3D dose response in circular fields created using the 12 fixed cones (5 mm-60 mm) on a CyberKnife® M6™ machine. A penalized least-squares optimization problem is defined by fitting the calculated detector reading to the experimental measurement data to generate the optimal correction matrix; the simulated annealing algorithm is used to solve this inverse optimization problem. All experimental measurements are acquired at 2 mm chamber shifts in the horizontal plane for each field size. The 3D dose distributions for the measurements are calculated by Monte Carlo with the MultiPlan® treatment planning system (Accuray Inc., Sunnyvale, CA, USA). The performance of the 3D correction matrix is evaluated by comparing the predicted output factors (OFs), off-axis ratios (OARs) and percentage depth dose (PDD) data to the experimental measurement data. The discrepancy between measurement and prediction for composite fields is also evaluated for clinical SRS plans. The optimization algorithm used for generating the optimal correction factors is stable, and the resulting correction factors are smooth in the spatial domain. The measured and predicted OFs agree closely, with percentage differences of less than 1.9% for all 12 cones. The discrepancies between the predicted and measured PDD readings at 50 mm and 80 mm depth are 1.7% and 1.9%, respectively. The percentage differences of OARs between measurement and prediction are less than 2% in the low dose gradient region, and 2%/1 mm discrepancies are observed within the high dose gradient regions. The differences between measurement and prediction for all the CyberKnife-based SRS plans are less than 1%. These results demonstrate the validity and efficiency of the novel 3D correction method for small field dosimetry. The 3D correction matrix links the 3D dose distribution to the reading of the PinPoint chamber. The comparison between predicted readings and measurement data for static small fields (OFs, OARs and PDDs) yields discrepancies within 2% for low dose gradient regions and 2%/1 mm for high dose gradient regions; the discrepancies between predicted and measured data are less than 1% for all the SRS plans. The 3D correction method provides a means to evaluate clinical measurement data and can be applied to point dose verification of non-standard composite fields in intensity-modulated radiation therapy.

  12. Improvement of non-destructive fissile mass assays in α low-level waste drums: A matrix correction method based on neutron capture gamma-rays and a neutron generator

    NASA Astrophysics Data System (ADS)

    Jallu, F.; Loche, F.

    2008-08-01

    Within the framework of radioactive waste control, non-destructive assay (NDA) methods may be employed. The active neutron interrogation (ANI) method is now well known and effective in quantifying low α-activity fissile masses (mainly 235U, 239Pu, 241Pu) at low densities, i.e. less than about 0.4, in radioactive waste drums of volumes up to 200 l. The PROMpt Epithermal and THErmal interrogation Experiment (PROMETHEE [F. Jallu, A. Mariani, C. Passard, A.-C. Raoux, H. Toubon, Alpha low level waste control: improvement of the PROMETHEE 6 assay system performances. Nucl. Technol. 153 (January) (2006); C. Passard, A. Mariani, F. Jallu, J. Romeyer-Dherber, H. Recroix, M. Rodriguez, J. Loridon, C. Denis, PROMETHEE: an alpha low level waste assay system using passive and active neutron measurement methods. Nucl. Technol. 140 (December) (2002) 303-314]), based on ANI, has been under development since 1996 to meet the incineration criterion for α low level waste (LLW) of about 50 Bq[α] per gram of crude waste (≈50 μg Pu) in 118 l drums on the date the drums are conditioned. Difficulties arise when dealing with matrices containing neutron energy moderators such as H and neutron absorbers such as Cl. These components may have a great influence on the fissile mass deduced from the neutron signal measured by ANI. For example, the calibration coefficient measured in a 118 l drum containing a cellulose matrix (density d = 0.144 g cm⁻³) may be 50 times higher than that obtained in a poly-vinyl-chloride matrix (d = 0.253 g cm⁻³). Without any information on the matrix, the fissile mass is often overestimated, because safety procedures require using the most disadvantageous calibration coefficient, corresponding to the most absorbing and moderating calibration matrix. The work discussed in this paper was performed at the CEA Nuclear Measurement Laboratory in France. It concerns the development of a matrix effect correction method, which consists of identifying and quantifying the matrix components by using the prompt gamma-rays that follow neutron capture. The method aims to refine the value of the calibration coefficient used for the ANI analysis. This paper presents the final results obtained for 118 l waste drums with low α-activity and low density, discusses the experimental and modelling studies, and describes the development of correction charts (abacuses) based on gamma-ray spectrometry signals.

  13. Machine-learned cluster identification in high-dimensional data.

    PubMed

    Ultsch, Alfred; Lötsch, Jörn

    2017-02-01

    High-dimensional biomedical data are frequently clustered to identify subgroup structures pointing at distinct disease subtypes. It is crucial that the cluster algorithm used works correctly. However, by imposing a predefined shape on the clusters, classical algorithms occasionally suggest a cluster structure in homogeneously distributed data or assign data points to incorrect clusters. We analyzed whether this can be avoided by using emergent self-organizing feature maps (ESOM). Data sets with different degrees of complexity were submitted to ESOM analysis with large numbers of neurons, using an interactive R-based bioinformatics tool. On top of the trained ESOM, the distance structure in the high-dimensional feature space was visualized in the form of a so-called U-matrix. Clustering results were compared with those provided by classical cluster algorithms including single linkage, Ward and k-means. Ward clustering imposed cluster structures on cluster-less "golf ball", "cuboid" and "S-shaped" data sets that contained no structure at all (random data). Ward clustering also imposed structures on permuted real-world data sets. By contrast, the ESOM/U-matrix approach correctly found that these data contain no cluster structure. At the same time, ESOM/U-matrix correctly identified clusters in biomedical data truly containing subgroups, and was always correct in cluster structure identification for further canonical artificial data. Using intentionally simple data sets, it is shown that popular clustering algorithms typically used for biomedical data sets may fail to cluster data correctly, suggesting that they are also likely to perform erroneously on high-dimensional biomedical data. The present analyses emphasize that generally established classical hierarchical clustering algorithms carry a considerable tendency to produce erroneous results.
By contrast, unsupervised machine-learned analysis of cluster structures, applied using the ESOM/U-matrix method, is a viable, unbiased method to identify true clusters in the high-dimensional space of complex data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
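    The point that shape-imposing algorithms report clusters in structureless data can be reproduced with a minimal Lloyd's-algorithm sketch (illustrative only, not the paper's R tool): k-means happily partitions a homogeneous cube of random points into k "clusters".

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's algorithm (illustrative, not a library implementation)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

rng = np.random.default_rng(42)
X = rng.uniform(size=(300, 3))      # homogeneous cube: no cluster structure
labels = kmeans(X, 3)
# k-means reports 3 "clusters" even though none exist in the data.
```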

  14. Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Cooley, R.L.; Christensen, S.

    2006-01-01

    Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Yβ*, where Y is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Yβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Yβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Yβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.

  15. Direct structural parameter identification by modal test results

    NASA Technical Reports Server (NTRS)

    Chen, J.-C.; Kuo, C.-P.; Garba, J. A.

    1983-01-01

    A direct identification procedure is proposed to obtain the mass and stiffness matrices from test-measured eigenvalues and eigenvectors. The method is based on the theory of matrix perturbation, in which the correct mass and stiffness matrices are expanded as the analytical values plus a modification matrix. The simplicity of the procedure enables real-time operation during structural testing.
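    A minimal sketch of the underlying modal identity follows, taking the mass matrix as the identity for simplicity. It illustrates the idea of correcting an analytical matrix with a modification matrix; it is not the paper's exact perturbation procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5

# "Correct" stiffness matrix the identification should recover (symmetric PD).
A = rng.standard_normal((n, n))
K_true = A @ A.T + n * np.eye(n)

# Pretend these came from a modal test: eigenvalues and mass-normalised
# mode shapes (mass matrix taken as the identity for simplicity).
lam, phi = np.linalg.eigh(K_true)

# Analytical model with an error; the modification matrix dK corrects it.
K_analytical = K_true + 0.1 * np.eye(n)
K_identified = phi @ np.diag(lam) @ phi.T     # K = M Phi Lam Phi^T M, M = I
dK = K_identified - K_analytical              # the modification matrix
```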

  16. Standard addition with internal standardisation as an alternative to using stable isotope labelled internal standards to correct for matrix effects-Comparison and validation using liquid chromatography-​tandem mass spectrometric assay of vitamin D.

    PubMed

    Hewavitharana, Amitha K; Abu Kassim, Nur Sofiah; Shaw, Paul Nicholas

    2018-06-08

    With mass spectrometric detection in liquid chromatography, co-eluting impurities affect the analyte response through ion suppression/enhancement. The internal standard calibration method, using a co-eluting stable isotope labelled analogue of each analyte as the internal standard, is the most appropriate technique available to correct for these matrix effects. However, this technique is not without drawbacks: it is expensive, because a separate internal standard is required for each analyte, and the labelled compounds are costly or require synthesis. Traditionally, the standard addition method has been used to overcome matrix effects in atomic spectroscopy, where it is well established. This paper proposes the same for mass spectrometric detection and demonstrates, for the assay of vitamin D, that the results are comparable to those of the internal standard method using labelled analogues. As the conventional standard addition procedure does not address procedural errors, we propose the inclusion of an additional (non-co-eluting) internal standard. Recoveries determined on human serum samples show that the proposed method of standard addition yields more accurate results than internal standardisation using stable isotope labelled analogues, and its precision is superior to that of the conventional standard addition method. Copyright © 2018 Elsevier B.V. All rights reserved.
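    A toy numerical sketch of the standard addition calculation follows; the sensitivity, recovery, and concentration values are invented for illustration, and the internal-standardisation step is simplified to a single scalar recovery factor.

```python
import numpy as np

# Signal is linear in (unknown + spiked) amount, scaled by an unknown,
# matrix-dependent sensitivity (this is what standard addition tolerates).
spikes = np.array([0.0, 10.0, 20.0, 30.0])   # added standard (ng/mL)
sensitivity = 0.8                            # suppressed by the matrix
c_unknown = 25.0                             # concentration to be recovered

# A non-co-eluting internal standard corrects procedural losses first;
# here a 5% recovery loss cancels on normalisation.
recovery = 0.95
signal = recovery * sensitivity * (c_unknown + spikes)
signal = signal / recovery                   # internal standardisation step

# Fit signal = a*spike + b; minus the x-intercept, b/a, is the concentration.
a, b = np.polyfit(spikes, signal, 1)
c_est = b / a
```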

  17. Simple Pixel Structure Using Video Data Correction Method for Nonuniform Electrical Characteristics of Polycrystalline Silicon Thin-Film Transistors and Differential Aging Phenomenon of Organic Light-Emitting Diodes

    NASA Astrophysics Data System (ADS)

    In, Hai-Jung; Kwon, Oh-Kyong

    2010-03-01

    A simple pixel structure using a video data correction method is proposed to compensate for electrical characteristic variations of driving thin-film transistors (TFTs) and the degradation of organic light-emitting diodes (OLEDs) in active-matrix OLED (AMOLED) displays. The proposed method senses the electrical characteristic variations of TFTs and OLEDs and stores them in external memory. The nonuniform emission current of TFTs and the aging of OLEDs are corrected by modulating video data using the stored data. Experimental results show that the emission current error due to electrical characteristic variation of driving TFTs is in the range from -63.1 to 61.4% without compensation, but is decreased to the range from -1.9 to 1.9% with the proposed correction method. The luminance error due to the degradation of an OLED is less than 1.8% when the proposed correction method is used for a 50% degraded OLED.

  18. Application of Quantitative Analytical Electron Microscopy to the Mineral Content of Insect Cuticle

    NASA Astrophysics Data System (ADS)

    Rasch, Ron; Cribb, Bronwen W.; Barry, John; Palmer, Christopher M.

    2003-04-01

    Quantification of calcium in the cuticle of the fly larva Exeretonevra angustifrons was undertaken at the micron scale using wavelength-dispersive X-ray microanalysis, analytical standards, and a full matrix correction. Calcium and phosphorus were found to be present in the exoskeleton in a ratio that indicates amorphous calcium phosphate, which was confirmed by electron diffraction of the calcium-containing tissue. Owing to the practical difficulties of measuring light elements, it is not uncommon in the field of entomology to neglect matrix corrections when performing microanalysis of bulk insect specimens. To determine, first, whether such a strategy affects the outcome and, second, which matrix correction is preferable, the phi-rho-z (φ(ρz)) and ZAF matrix corrections were compared with each other and with no matrix correction. The best estimate of the mineral phase was given by the φ(ρz) correction. When no correction was made, the ratio of Ca to P fell outside the range for amorphous calcium phosphate, possibly leading to a flawed interpretation of the mineral form if used on its own.

  19. Compensation of matrix effects in gas chromatography-mass spectrometry analysis of pesticides using a combination of matrix matching and multiple isotopically labeled internal standards.

    PubMed

    Tsuchiyama, Tomoyuki; Katsuhara, Miki; Nakajima, Masahiro

    2017-11-17

    In multi-residue analysis of pesticides using GC-MS, quantitative results are adversely affected by a phenomenon known as the matrix effect. Although the use of matrix-matched standards is considered one of the most practical solutions to this problem, complete removal of the matrix effect is difficult in complex food matrices owing to their inconsistency. As a result, residual matrix effects can introduce analytical errors. To compensate for residual matrix effects, we have developed a novel method that employs multiple isotopically labeled internal standards (ILIS). The matrix effects of the ILIS and the pesticides were evaluated in spiked matrix extracts of various agricultural commodities, and the obtained data were subjected to simple statistical analysis. Based on the similarities between their patterns of variation in analytical response, a total of 32 isotopically labeled compounds were assigned as internal standards to 338 pesticides. By utilizing multiple ILIS, residual matrix effects could be effectively compensated for. The developed method exhibited superior quantitative performance compared with the common single-internal-standard method, is more feasible for regulatory purposes than approaches using only predetermined correction factors, and is considered promising for practical applications. Copyright © 2017 Elsevier B.V. All rights reserved.
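    The assignment step, pairing each pesticide with the ILIS whose matrix-effect pattern varies most similarly across commodities, might be sketched as below with synthetic data; the paper's statistical treatment may differ.

```python
import numpy as np

rng = np.random.default_rng(4)
n_matrices, n_ilis, n_pest = 12, 4, 30

# Matrix-effect profiles (response ratio vs. solvent) across commodities.
ilis_effects = rng.normal(1.0, 0.2, size=(n_ilis, n_matrices))
assignment_true = rng.integers(0, n_ilis, size=n_pest)
pest_effects = (ilis_effects[assignment_true]
                + rng.normal(0, 0.01, (n_pest, n_matrices)))

# Assign each pesticide the ILIS with the most similar variation pattern.
R = np.corrcoef(np.vstack([pest_effects, ilis_effects]))
sim = R[:n_pest, n_pest:]                 # pesticide x ILIS correlations
assignment = sim.argmax(axis=1)

# Compensation would then divide each pesticide response by the response
# of its assigned ILIS in the same injection.
```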

  20. An efficient gridding reconstruction method for multishot non-Cartesian imaging with correction of off-resonance artifacts.

    PubMed

    Meng, Yuguang; Lei, Hao

    2010-06-01

    An efficient iterative gridding reconstruction method with correction of off-resonance artifacts was developed, tailored especially for multiple-shot non-Cartesian imaging. The novelty of the method lies in constructing the transformation matrix for gridding (T) as the convolution of two sparse matrices: the former determined by the sampling interval and the spatial distribution of the off-resonance frequencies, and the latter by the sampling trajectory and the target grid in Cartesian space. The resulting T matrix is also sparse and can be solved efficiently with the iterative conjugate gradient algorithm. It was shown that, with the proposed method, the reconstruction speed in multiple-shot non-Cartesian imaging can be improved significantly while retaining high reconstruction fidelity. More importantly, the proposed method allows a tradeoff between the accuracy and the computation time of reconstruction, making it possible to customize the method for different applications. The performance of the proposed method was demonstrated by numerical simulation and by multiple-shot spiral imaging of rat brain at 4.7 T. © 2010 Wiley-Liss, Inc.

  1. Apparatus And Method For Reconstructing Data Using Cross-Parity Stripes On Storage Media

    DOEpatents

    Hughes, James Prescott

    2003-06-17

    An apparatus and method for reconstructing missing data using cross-parity stripes on a storage medium is provided. The apparatus and method may operate on data symbols having sizes greater than a data bit, and make use of a plurality of parity stripes for reconstructing missing data stripes. The parity symbol values in the parity stripes are used as a basis for determining the value of a missing data symbol in a data stripe. A correction matrix is shifted along the data stripes, correcting missing data symbols as it goes. The correction is performed from the outside data stripes towards the inner data stripes, so that previously reconstructed data symbols can be used to reconstruct other missing data symbols.
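    The parity-based reconstruction of a missing symbol can be illustrated with a minimal XOR sketch; this shows a single parity stripe only, whereas the patented scheme uses multiple cross-parity stripes and a shifted correction matrix.

```python
from functools import reduce

def xor_symbols(a, b):
    """XOR two multi-byte data symbols of equal length."""
    return bytes(x ^ y for x, y in zip(a, b))

# Three data stripes of 2-byte symbols, plus one parity stripe (XOR of all).
stripes = [b"\x12\x34", b"\xab\xcd", b"\x0f\xf0"]
parity = reduce(xor_symbols, stripes)

# Lose one stripe; rebuild it from the parity and the survivors.
lost_index = 1
survivors = [s for i, s in enumerate(stripes) if i != lost_index]
rebuilt = reduce(xor_symbols, survivors, parity)
```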

  2. Charge Resolution of the Silicon Matrix of the ATIC Experiment

    NASA Technical Reports Server (NTRS)

    Zatsepin, V. I.; Adams, J. H., Jr.; Ahn, H. S.; Bashindzhagyan, G. L.; Batkov, K. E.; Case, G.; Christl, M.; Ganel, O.; Fazely, A. R.

    2002-01-01

    ATIC (Advanced Thin Ionization Calorimeter) is a balloon-borne experiment designed to measure the cosmic ray composition for elements from hydrogen to iron, and their energy spectra from approx. 50 GeV to near 100 TeV. It consists of a Si-matrix detector to determine the charge of a cosmic ray particle, a scintillator hodoscope for tracking, carbon interaction targets and a fully active BGO calorimeter. ATIC had its first flight from McMurdo, Antarctica, from 28/12/2000 to 13/01/2001, and collected approximately 25 million events. The silicon matrix of the ATIC spectrometer is designed to resolve individual elements from protons to iron. To provide this resolution, careful calibration of each pixel of the silicon matrix is required. First, for each electronic channel of the matrix, the pedestal value was subtracted, taking into account its drift during the flight. The muon calibration made before the flight was then used to convert electric signals (in ADC channel number) to energy deposits in each pixel. However, the preflight muon calibration was not accurate enough for this purpose because of the lack of statistics in each pixel. To improve the charge resolution, a correction was made for the position of the helium peak in each pixel during the flight. An alternative way to bring the electric signals of the Si-matrix channels onto one scale was to correct for the channel gains accurately measured in the laboratory. In these measurements it was found that small, channel-dependent nonlinearities are present in the region of charge Z > 20. The correction for these nonlinearities has not yet been made. In the linear approximation, the method provides practically the same resolution as the muon calibration plus the He-peak correction. To find the pixel carrying the signal of the primary particle, an indication from the cascade in the calorimeter was used: a trajectory was reconstructed using the weighted centers of the energy deposits in the BGO layers. The point of intersection of this trajectory with the Si-matrix, and its RMS, were determined. The pixel with the maximal signal within the 3σ region was taken as the sought pixel, and its signal was corrected for the trajectory zenith angle. Preliminary results on the charge resolution of the Si-matrix in the range from protons to iron are presented.

  3. Absolutely and uniformly convergent iterative approach to inverse scattering with an infinite radius of convergence

    DOEpatents

    Kouri, Donald J [Houston, TX; Vijay, Amrendra [Houston, TX; Zhang, Haiyan [Houston, TX; Zhang, Jingfeng [Houston, TX; Hoffman, David K [Ames, IA

    2007-05-01

    A method and system for solving the inverse acoustic scattering problem using an iterative approach that takes account of half-off-shell transition matrix (near-field) information. The Volterra inverse series correctly predicts the first two moments of the interaction, whereas the Fredholm inverse series is correct only for the first moment; the Volterra approach therefore provides a method for exactly obtaining interactions that can be written as a sum of delta functions.

  4. Single image non-uniformity correction using compressive sensing

    NASA Astrophysics Data System (ADS)

    Jian, Xian-zhong; Lu, Rui-zhi; Guo, Qiang; Wang, Gui-pu

    2016-05-01

    A non-uniformity correction (NUC) method for an infrared focal plane array imaging system is proposed. The algorithm, based on compressive sensing (CS) of a single image, overcomes the "ghost artifact" and heavy computational cost drawbacks of traditional NUC algorithms. A point-sampling matrix was designed to validate the CS measurements in the time domain. The measurements were corrected using the midway infrared equalization algorithm, and the missing pixels were recovered with the regularized orthogonal matching pursuit algorithm. Experimental results show that the proposed method can reconstruct the entire image from only 25% of the pixels. Only a small difference was found between correction results using 100% of the pixels and reconstruction results using 40% of the pixels. Evaluation of the proposed method by root-mean-square error, peak signal-to-noise ratio, and roughness index (ρ) shows it to be robust and highly applicable.
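    The sparse-recovery step can be illustrated with a minimal orthogonal matching pursuit sketch on synthetic data; this is plain OMP rather than the regularized variant used in the paper.

```python
import numpy as np

def omp(A, y, k):
    """Minimal orthogonal matching pursuit: recover a k-sparse x from y = A x."""
    r = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(5)
m, n, k = 60, 128, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x_true = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x_true[idx] = rng.uniform(1.0, 2.0, k)         # k-sparse signal
x_hat = omp(A, A @ x_true, k)
```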

  5. A high speed model-based approach for wavefront sensorless adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Lianghua, Wen; Yang, Ping; Shuai, Wang; Wenjing, Liu; Shanqiu, Chen; Xu, Bing

    2018-02-01

    To improve the temporal-frequency properties of wavefront sensorless adaptive optics (AO) systems, a fast general model-based aberration correction algorithm is presented. It is based on the approximately linear relation between the mean square of the aberration gradients and the second moment of the far-field intensity distribution. The presented model-based method can effectively correct a modal aberration by applying only one disturbance to the deformable mirror (one correction per disturbance); the modes are reconstructed by singular value decomposition of the correlation matrix of the Zernike functions' gradients. Numerical simulations of AO correction under various random and dynamic aberrations were implemented. The results indicate that the equivalent control bandwidth is 2-3 times that of the previous method, which requires N disturbances of the deformable mirror for each aberration correction.

  6. Invisible data matrix detection with smart phone using geometric correction and Hough transform

    NASA Astrophysics Data System (ADS)

    Sun, Halit; Uysalturk, Mahir C.; Karakaya, Mahmut

    2016-04-01

    Two-dimensional data matrices are used in many different areas to provide quick and automatic data entry to computer systems. Their most common usage is to automatically read and recognize labeled products (books, medicines, food, etc.). In Turkey, alcoholic beverages and tobacco products are labeled and tracked with invisible data matrices for public safety and tax purposes. In this application, since the data matrices are printed on special paper with a pigmented ink, they cannot be seen under daylight. When red LEDs are used for illumination and the reflected light is filtered, the invisible data matrices become visible and can be decoded by special barcode readers. Owing to the physical dimensions and price of such readers, and the special training required to use them, cheap, small, easily carried mobile invisible data matrix reader systems are needed for every inspector in the law enforcement units. In this paper, we first developed an apparatus attached to a smartphone, comprising a red LED light and a high-pass filter. We then developed an algorithm to process the images captured by the smartphone and decode the information stored in the invisible data matrix images. The proposed algorithm involves four main stages. In the first step, the data matrix code is processed with the Hough transform to find the "L"-shaped finder pattern. In the second step, the borders of the data matrix are found using convex hull and corner detection methods. The distortion of the invisible data matrix is then corrected by a geometric correction technique, and the size of every module is normalized to a rectangular shape. Finally, the invisible data matrix is scanned line by line along the horizontal axis to decode it. Based on results obtained from real test images of invisible data matrices captured with a smartphone, the proposed algorithm shows high accuracy and a low error rate.
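    The line-finding stage can be illustrated with a minimal Hough transform sketch on a synthetic "L" pattern; this is a toy accumulator, not the paper's implementation.

```python
import numpy as np

def hough_lines(img, n_theta=180):
    """Minimal Hough transform for straight lines in a binary image."""
    h, w = img.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(img)
    for theta_i, t in enumerate(thetas):
        # Each edge pixel votes for the (rho, theta) lines through it.
        rho = np.round(xs * np.cos(t) + ys * np.sin(t)).astype(int) + diag
        np.add.at(acc, (rho, theta_i), 1)
    return acc, thetas, diag

img = np.zeros((50, 50), dtype=bool)
img[10, 5:45] = True          # horizontal arm of an "L" pattern (y = 10)
img[10:45, 5] = True          # vertical arm (x = 5)
acc, thetas, diag = hough_lines(img)

# The strongest accumulator peak is the longer, horizontal arm.
rho_i, theta_i = np.unravel_index(acc.argmax(), acc.shape)
```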

  7. Correction method for stripe nonuniformity.

    PubMed

    Qian, Weixian; Chen, Qian; Gu, Guohua; Guan, Zhiqiang

    2010-04-01

    Stripe nonuniformity is very common in linear infrared focal plane arrays (IR-FPAs) and uncooled staring IR-FPAs. In this paper, the mechanism of stripe nonuniformity is analyzed, and gray-scale co-occurrence matrix theory and optimization theory are studied. Through these efforts, the stripe nonuniformity correction problem is translated into an optimization problem whose goal is to find the minimal energy of the image's line gradient. After solving the constrained nonlinear optimization equation, the parameters of the stripe nonuniformity correction are obtained and the correction is achieved. The experiments indicate that this algorithm is effective and efficient.
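
    A minimal sketch of the optimization idea, under the simplifying assumption that each stripe is a pure per-column offset (the paper's model and constraints are more general): minimizing the energy of the horizontal line gradient then has a closed-form solution via a cumulative sum of column-mean differences.

```python
import numpy as np

# Simplified stripe model: corrected[i,j] = I[i,j] + o[j]. Setting the
# derivative of the line-gradient energy
#   E = sum_ij (I[i,j] + o[j] - I[i,j-1] - o[j-1])^2
# to zero gives o[j] - o[j-1] = -mean_i(I[i,j] - I[i,j-1]), so the
# offsets follow from a cumulative sum of column-mean differences.

def destripe(img):
    col_diff = np.diff(img, axis=1).mean(axis=0)   # mean gradient per column pair
    offsets = np.concatenate([[0.0], -np.cumsum(col_diff)])
    corrected = img + offsets                      # broadcast over rows
    return corrected - corrected.mean() + img.mean()   # preserve mean level

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 48)[:, None], (1, 64))  # vertical ramp scene
stripes = rng.normal(0, 0.5, 64)                  # fixed-pattern column noise
noisy = clean + stripes
restored = destripe(noisy)
```

    Note that any true horizontal gradient in the scene would be flattened by this naive criterion, which is why the demonstration uses a scene that varies only vertically; the paper's constrained formulation addresses exactly this kind of ambiguity.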

  8. Quantitative mass spectrometry methods for pharmaceutical analysis

    PubMed Central

    Loos, Glenn; Van Schepdael, Ann

    2016-01-01

    Quantitative pharmaceutical analysis is nowadays frequently executed using mass spectrometry. Electrospray ionization coupled to a (hybrid) triple quadrupole mass spectrometer is generally used in combination with solid-phase extraction and liquid chromatography. Furthermore, isotopically labelled standards are often used to correct for ion suppression. The challenges in producing sensitive but reliable quantitative data depend on the instrumentation, sample preparation and hyphenated techniques. In this contribution, different approaches to enhance ionization efficiency using modified source geometries and improved ion guidance are presented. Furthermore, possibilities to minimize, assess and correct for matrix interferences caused by co-eluting substances are described. With a focus on pharmaceuticals in the environment and bioanalysis, different separation techniques, trends in liquid chromatography and sample preparation methods to minimize matrix effects and increase sensitivity are discussed. Although highly sensitive methods are generally pursued to provide automated multi-residue analysis, (less sensitive) miniaturized set-ups have great potential owing to their suitability for in-field use. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644982

  9. Correction of electrode modelling errors in multi-frequency EIT imaging.

    PubMed

    Jehl, Markus; Holder, David

    2016-06-01

    The differentiation of haemorrhagic from ischaemic stroke using electrical impedance tomography (EIT) requires measurements at multiple frequencies, since the general lack of healthy measurements on the same patient excludes time-difference imaging methods. It has previously been shown that the inaccurate modelling of electrodes constitutes one of the largest sources of image artefacts in non-linear multi-frequency EIT applications. To address this issue, we augmented the conductivity Jacobian matrix with a Jacobian matrix with respect to electrode movement. Using this new algorithm, simulated ischaemic and haemorrhagic strokes in a realistic head model were reconstructed for varying degrees of electrode position errors. The simultaneous recovery of conductivity spectra and electrode positions removed most artefacts caused by inaccurately modelled electrodes. Reconstructions were stable for electrode position errors of up to 1.5 mm standard deviation along both surface dimensions. We conclude that this method can be used for electrode model correction in multi-frequency EIT.

  10. Interlaboratory Comparison of Sample Preparation Methods, Database Expansions, and Cutoff Values for Identification of Yeasts by Matrix-Assisted Laser Desorption Ionization–Time of Flight Mass Spectrometry Using a Yeast Test Panel

    PubMed Central

    Vlek, Anneloes; Kolecka, Anna; Khayhan, Kantarawee; Theelen, Bart; Groenewald, Marizeth; Boel, Edwin

    2014-01-01

    An interlaboratory study using matrix-assisted laser desorption ionization–time of flight mass spectrometry (MALDI-TOF MS) to determine the identification of clinically important yeasts (n = 35) was performed at 11 clinical centers, one company, and one reference center using the Bruker Daltonics MALDI Biotyper system. The optimal cutoff for the MALDI-TOF MS score was investigated using receiver operating characteristic (ROC) curve analyses. The percentages of correct identifications were compared for different sample preparation methods and different databases. Logistic regression analysis was performed to analyze the association between the number of spectra in the database and the percentage of strains that were correctly identified. A total of 5,460 MALDI-TOF MS results were obtained. Using all results, the area under the ROC curve was 0.95 (95% confidence interval [CI], 0.94 to 0.96). With a sensitivity of 0.84 and a specificity of 0.97, a cutoff value of 1.7 was considered optimal. The overall percentage of correct identifications (formic acid-ethanol extraction method, score ≥ 1.7) was 61.5% when the commercial Bruker Daltonics database (BDAL) was used, and it increased to 86.8% by using an extended BDAL supplemented with a Centraalbureau voor Schimmelcultures (CBS)-KNAW Fungal Biodiversity Centre in-house database (BDAL+CBS in-house). A greater number of main spectra (MSP) in the database was associated with a higher percentage of correct identifications (odds ratio [OR], 1.10; 95% CI, 1.05 to 1.15; P < 0.01). The results from the direct transfer method ranged from 0% to 82.9% correct identifications, with the results of the top four centers ranging from 71.4% to 82.9% correct identifications. This study supports the use of a cutoff value of 1.7 for the identification of yeasts using MALDI-TOF MS. The inclusion of enough isolates of the same species in the database can enhance the proportion of correctly identified strains. 
Further optimization of the preparation methods, especially of the direct transfer method, may contribute to improved diagnosis of yeast-related infections. PMID:24920782

  11. ANALYSIS OF A CLASSIFICATION ERROR MATRIX USING CATEGORICAL DATA TECHNIQUES.

    USGS Publications Warehouse

    Rosenfield, George H.; Fitzpatrick-Lins, Katherine

    1984-01-01

    Summary form only given. A classification error matrix typically contains the tabulated results of an accuracy evaluation of a thematic classification, such as that of a land use and land cover map. The diagonal elements of the matrix represent the correctly classified counts, and the usual designation of classification accuracy has been the total percent correct. The nondiagonal elements of the matrix have usually been neglected. The classification error matrix is known in statistical terms as a contingency table of categorical data. As an example, an application of these methodologies to a problem of remotely sensed data concerning two photointerpreters and four categories of classification indicated that there is no significant difference in interpretation between the two photointerpreters, but that there are significant differences among the interpreted category classifications. However, two categories, oak and cottonwood, are not separable in classification in this experiment at the 0.51 percent probability level. A coefficient of agreement is determined for the interpreted map as a whole, and individually for each of the interpreted categories. A conditional coefficient of agreement for the individual categories is compared to other methods for expressing category accuracy which have already been presented in the remote sensing literature.
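
    The coefficient of agreement computed from a whole classification error matrix is commonly Cohen's kappa; a short sketch follows (the table values are illustrative, not the paper's data).

```python
import numpy as np

# Cohen's kappa computed directly from a contingency table
# (rows: interpreter 1, columns: interpreter 2 or reference data).

def cohens_kappa(matrix):
    m = np.asarray(matrix, dtype=float)
    n = m.sum()
    p_o = np.trace(m) / n                               # observed agreement
    p_e = (m.sum(axis=0) * m.sum(axis=1)).sum() / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Toy 4-category matrix, e.g. two photointerpreters.
table = [[30,  2,  1,  0],
         [ 3, 25,  2,  1],
         [ 1,  2, 20,  4],
         [ 0,  1,  3, 15]]
kappa = cohens_kappa(table)
```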

  12. A Study of Influencing Factors on the Tensile Response of a Titanium Matrix Composite With Weak Interfacial Bonding

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Arnold, Steven M.

    2000-01-01

    The generalized method of cells micromechanics model is utilized to analyze the tensile stress-strain response of a representative titanium matrix composite with weak interfacial bonding. The fiber/matrix interface is modeled through application of a displacement discontinuity between the fiber and matrix once a critical debonding stress has been exceeded. Unidirectional composites with loading parallel and perpendicular to the fibers are examined, as well as a cross-ply laminate. For each of the laminates studied, analytically obtained results are compared to experimental data. The application of residual stresses through a cool-down process was found to have a significant effect on the tensile response. For the unidirectional laminate with loading applied perpendicular to the fibers, fiber packing and fiber shape were shown to have a significant effect on the predicted tensile response. Furthermore, the interface was characterized through the use of semi-empirical parameters including an interfacial compliance and a "debond stress," defined as the stress level across the interface which activates fiber/matrix debonding. The results in this paper demonstrate that if architectural factors are correctly accounted for and the interface is appropriately characterized, the macro-level composite behavior can be correctly predicted without modifying any of the fiber or matrix constituent properties.

  13. [Research on partial least squares for determination of impurities in the presence of high concentration of matrix by ICP-AES].

    PubMed

    Wang, Yan-peng; Gong, Qi; Yu, Sheng-rong; Liu, You-yan

    2012-04-01

    A method for detecting trace impurities in a high-concentration matrix by ICP-AES based on partial least squares (PLS) was established. The research showed that PLS could effectively correct the interference caused by high matrix concentrations and by matrix concentration errors, and could withstand higher concentrations of matrix than multicomponent spectral fitting (MSF). When the mass ratios of matrix to impurities were from 1 000 : 1 to 20 000 : 1, the standard-addition recoveries obtained with PLS were between 95% and 105%. For systems in which the interference effect is nonlinearly correlated with the matrix concentration, the prediction accuracy of the normal PLS method was poor, but it could be improved greatly by using LIN-PPLS, which is based on a matrix transformation of the sample concentrations. The contents of Co, Pb and Ga in stream sediment (GBW07312) were determined by MSF, PLS and LIN-PPLS, respectively. The results showed that the prediction accuracy of LIN-PPLS was better than that of PLS, and that of PLS was better than that of MSF.

  14. Correction of motion artifacts in endoscopic optical coherence tomography and autofluorescence images based on azimuthal en face image registration.

    PubMed

    Abouei, Elham; Lee, Anthony M D; Pahlevaninezhad, Hamid; Hohert, Geoffrey; Cua, Michelle; Lane, Pierre; Lam, Stephen; MacAulay, Calum

    2018-01-01

    We present a method for the correction of motion artifacts present in two- and three-dimensional in vivo endoscopic images produced by rotary-pullback catheters. This method can correct for cardiac/breathing-based motion artifacts and catheter-based motion artifacts such as nonuniform rotational distortion (NURD). This method assumes that en face tissue imaging contains slowly varying structures that are roughly parallel to the pullback axis. The method reduces motion artifacts using a dynamic time warping solution through a cost matrix that measures similarities between adjacent frames in en face images. We optimize and demonstrate the suitability of this method using a real and simulated NURD phantom and in vivo endoscopic pulmonary optical coherence tomography and autofluorescence images. Qualitative and quantitative evaluations of the method show an enhancement of the image quality. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
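
    The core of such an approach, dynamic time warping through a cost matrix that measures similarity between adjacent frames, can be sketched generically (this toy version aligns two 1-D profiles and is not the authors' full pipeline):

```python
import numpy as np

# Generic dynamic time warping: a local cost matrix of squared
# differences between two profiles (e.g. corresponding lines of
# adjacent en face frames), an accumulated-cost matrix, and a
# backtracked minimal-cost warping path that undoes NURD-like
# stretching along the frame.

def dtw_path(a, b):
    n, m = len(a), len(b)
    cost = np.subtract.outer(a, b) ** 2             # local cost matrix
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i-1, j-1] + min(acc[i-1, j], acc[i, j-1], acc[i-1, j-1])
    # Backtrack the minimal-cost alignment path.
    path, i, j = [], n, m
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i-1, j-1], acc[i-1, j], acc[i, j-1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return acc[n, m], path[::-1]

t = np.linspace(0, 2 * np.pi, 40)
frame1 = np.sin(t)
frame2 = np.sin(t ** 1.1 / (2 * np.pi) ** 0.1)      # nonuniformly stretched copy
dist, path = dtw_path(frame1, frame2)
```

    The warping path maps indices of one frame onto the other; in the motion-correction setting it defines the resampling applied to each frame.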

  15. Accuracy of the energy-corrected sudden (ECS) scaling procedure for rotational excitation of CO by collisions with Ar

    NASA Technical Reports Server (NTRS)

    Green, S.; Cochrane, D. L.; Truhlar, D. G.

    1986-01-01

    The utility of the energy-corrected sudden (ECS) scaling method is evaluated on the basis of how accurately it predicts the entire matrix of state-to-state rate constants when the fundamental rate constants are independently known. It is shown for the case of Ar-CO collisions at 500 K that, when the critical impact parameter is about 1.75-2.0 A, the ECS method yields excellent excited-state rates on average and has an rms error of less than 20 percent.

  16. Analyte quantification with comprehensive two-dimensional gas chromatography: assessment of methods for baseline correction, peak delineation, and matrix effect elimination for real samples.

    PubMed

    Samanipour, Saer; Dimitriou-Christidis, Petros; Gros, Jonas; Grange, Aureline; Samuel Arey, J

    2015-01-02

    Comprehensive two-dimensional gas chromatography (GC×GC) is used widely to separate and measure organic chemicals in complex mixtures. However, approaches to quantify analytes in real, complex samples have not been critically assessed. We quantified 7 PAHs in a certified diesel fuel using GC×GC coupled to flame ionization detector (FID), and we quantified 11 target chlorinated hydrocarbons in a lake water extract using GC×GC with electron capture detector (μECD), further confirmed qualitatively by GC×GC with electron capture negative chemical ionization time-of-flight mass spectrometer (ENCI-TOFMS). Target analyte peak volumes were determined using several existing baseline correction algorithms and peak delineation algorithms. Analyte quantifications were conducted using external standards and also using standard additions, enabling us to diagnose matrix effects. We then applied several chemometric tests to these data. We find that the choice of baseline correction algorithm and peak delineation algorithm strongly influence the reproducibility of analyte signal, error of the calibration offset, proportionality of integrated signal response, and accuracy of quantifications. Additionally, the choice of baseline correction and the peak delineation algorithm are essential for correctly discriminating analyte signal from unresolved complex mixture signal, and this is the chief consideration for controlling matrix effects during quantification. The diagnostic approaches presented here provide guidance for analyte quantification using GC×GC. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  17. An Evaluation of Unit and ½ Mass Correction Approaches as a ...

    EPA Pesticide Factsheets

    Rare earth elements (REE) and certain alkaline earths can produce M+2 interferences in ICP-MS because they have sufficiently low second ionization energies. Four REEs (150Sm, 150Nd, 156Gd and 156Dy) produce false positives on 75As and 78Se and 132Ba can produce a false positive on 66Zn. Currently, US EPA Method 200.8 does not address these as sources of false positives. Additionally, these M+2 false positives are typically enhanced if collision cell technology is utilized to reduce polyatomic interferences associated with ICP-MS detection. A preliminary evaluation indicates that instrumental tuning conditions can impact the observed M+2/M+1 ratio and in turn the false positives generated on Zn, As and Se. Both unit and ½ mass approaches will be evaluated to correct for these false positives relative to the benchmark concentrations estimates from a triple quadrupole ICP-MS using standard solutions. The impact of matrix on these M+2 corrections will be evaluated over multiple analysis days with a focus on evaluating internal standards that mirror the matrix induced shifts in the M+2 ion transmission. The goal of this evaluation is to move away from fixed M+2 corrective approaches and move towards sample specific approaches that mimic the sample matrix induced variability while attempting to address intra-day variability of the M+2 correction factors through the use of internal standards. Oral Presentation via webinar for EPA Laboratory Technical Informati

  18. Kinetic-energy matrix elements for atomic Hylleraas-CI wave functions.

    PubMed

    Harris, Frank E

    2016-05-28

    Hylleraas-CI is a superposition-of-configurations method in which each configuration is constructed from a Slater-type orbital (STO) product to which is appended (linearly) at most one interelectron distance rij. Computations of the kinetic energy for atoms by this method have been difficult due to the lack of formulas expressing these matrix elements for general angular momentum in terms of overlap and potential-energy integrals. It is shown here that a strategic application of angular-momentum theory, including the use of vector spherical harmonics, enables the reduction of all atomic kinetic-energy integrals to overlap and potential-energy matrix elements. The new formulas are validated by showing that they yield correct results for a large number of integrals published by other investigators.

  19. Application of Temperature Sensitivities During Iterative Strain-Gage Balance Calibration Analysis

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2011-01-01

    A new method is discussed that may be used to correct wind tunnel strain-gage balance load predictions for the influence of residual temperature effects at the location of the strain gages. The method was designed for the iterative analysis technique that is used in the aerospace testing community to predict balance loads from strain-gage outputs during a wind tunnel test. The new method implicitly applies temperature corrections to the gage outputs during the load iteration process; therefore, it can use uncorrected gage outputs directly as input for the load calculations. The new method is applied in several steps. First, balance calibration data is analyzed in the usual manner, assuming that the balance temperature was kept constant during the calibration. Then, the temperature difference relative to the calibration temperature is introduced as a new independent variable for each strain-gage output; sensors must therefore exist near the strain gages so that the required temperature differences can be measured during the wind tunnel test. In addition, the format of the regression coefficient matrix needs to be extended so that it can support the new independent variables. In the next step, the extended regression coefficient matrix of the original calibration data is modified by using the manufacturer-specified temperature sensitivity of each strain gage as the regression coefficient of the corresponding temperature difference variable. Finally, the modified regression coefficient matrix is converted to a data reduction matrix that the iterative analysis technique needs for the calculation of balance loads. Original calibration data and modified check load data of NASA's MC60D balance are used to illustrate the new method.
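
    A single-gage toy version of the idea, with illustrative numbers (not MC60D data): calibrate at the reference temperature, then treat the temperature difference dT as an extra regressor whose coefficient is the manufacturer-specified sensitivity, so uncorrected gage outputs can be used directly in the load calculation.

```python
import numpy as np

# Model (illustrative): output = b0 + b1 * load + s_temp * dT,
# where s_temp is the manufacturer-specified temperature sensitivity.

b0_true, b1_true = 0.05, 2.0      # gage offset and load sensitivity
s_temp = -0.003                   # known output drift per degree of dT

# Calibration at the reference temperature (dT = 0).
loads = np.linspace(0, 100, 21)
outputs = b0_true + b1_true * loads
X = np.column_stack([np.ones_like(loads), loads])
b0, b1 = np.linalg.lstsq(X, outputs, rcond=None)[0]

def predict_load(raw_output, dT):
    """Use the uncorrected gage output directly; the dT term applies
    the temperature correction inside the load calculation."""
    return (raw_output - b0 - s_temp * dT) / b1

# Wind-tunnel point: true load 42.0 at dT = 15 degrees.
raw = b0_true + b1_true * 42.0 + s_temp * 15.0
load_hat = predict_load(raw, 15.0)
```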

  20. A novel baseline-correction method for standard addition based derivative spectra and its application to quantitative analysis of benzo(a)pyrene in vegetable oil samples.

    PubMed

    Li, Na; Li, Xiu-Ying; Zou, Zhe-Xiang; Lin, Li-Rong; Li, Yao-Qun

    2011-07-07

    In the present work, a baseline-correction method based on peak-to-derivative baseline measurement was proposed for the elimination of complex matrix interference, mainly caused by unknown components and/or background, in the analysis of derivative spectra. This novel method is applicable particularly when the matrix interfering components show a broad spectral band, which is common in practical analysis. The derivative baseline is established by connecting two crossing points of the spectral curves obtained with a standard addition method (SAM). The applicability and reliability of the proposed method were demonstrated through both theoretical simulation and practical application. First, Gaussian bands were used to simulate 'interfering' and 'analyte' bands to investigate the effect of different parameters of the interfering band on the derivative baseline. This simulation analysis verified that the accuracy of the proposed method is remarkably better than that of other conventional methods such as peak-to-zero, tangent, and peak-to-peak measurements. The proposed baseline-correction method was then applied to the determination of benzo(a)pyrene (BaP) in vegetable oil samples by second-derivative synchronous fluorescence spectroscopy. Satisfactory results were obtained by using this new method to analyze a certified reference material (coconut oil, BCR(®)-458), with a relative error of -3.2% from the certified BaP concentration. Potentially, the proposed method can be applied to various types of derivative spectra in different fields such as UV-visible absorption spectroscopy, fluorescence spectroscopy and infrared spectroscopy.

  1. Determination of 18 kinds of trace impurities in the vanadium battery grade vanadyl sulfate by ICP-OES

    NASA Astrophysics Data System (ADS)

    Yong, Cheng

    2018-03-01

    A method for the direct determination of 18 trace impurities in vanadium battery grade vanadyl sulfate by inductively coupled plasma optical emission spectrometry (ICP-OES) was established; the detection ranges are 0.001% ∼ 0.100% for Fe, Cr, Ni, Cu, Mn, Mo, Pb, As, Co, P, Ti and Zn, and 0.005% ∼ 0.100% for K, Na, Ca, Mg, Si and Al. The influence of matrix effects, spectral interferences and continuous background superposition in the coexisting system of high concentrations of vanadium ions and sulfate was studied, leading to the following conclusions: sulfate at this concentration has no effect on the determination, but the matrix effects and continuous background superposition generated by the high concentration of vanadium ions interfere negatively with the determination of potassium and sodium and positively with the determination of iron and the other impurity elements, so the impact of the high-vanadium matrix was eliminated by matrix matching combined with synchronous background correction. Through spectral interference tests, the spectral interferences of the vanadium matrix and between the impurity elements were classified and summarized, and the analytical lines, background correction regions and working parameters of the spectrometer were optimized. The technical performance of the method is as follows: background equivalent concentrations range from -0.0003% (Na) to 0.0004% (Cu); the detection limits of the elements are 0.0001% ∼ 0.0003%; RSD < 10% when the element content is in the range 0.001% to 0.007%, and RSD < 20% even when the element content is in the range 0.0001% to 0.001%, which is below the method's detection range; and recoveries are 91.0% ∼ 110.0%.

  2. Fiber-based polarization-sensitive OCT of the human retina with correction of system polarization distortions

    PubMed Central

    Braaf, Boy; Vermeer, Koenraad A.; de Groot, Mattijs; Vienola, Kari V.; de Boer, Johannes F.

    2014-01-01

    In polarization-sensitive optical coherence tomography (PS-OCT) the use of single-mode fibers causes unpredictable polarization distortions which can result in increased noise levels and erroneous changes in calculated polarization parameters. In the current paper this problem is addressed by a new Jones matrix analysis method that measures and corrects system polarization distortions as a function of wavenumber by spectral analysis of the sample surface polarization state and deeper located birefringent tissue structures. This method was implemented on a passive-component depth-multiplexed swept-source PS-OCT system at 1040 nm which was theoretically modeled using Jones matrix calculus. High-resolution B-scan images are presented of the double-pass phase retardation, diattenuation, and relative optic axis orientation to show the benefits of the new analysis method for in vivo imaging of the human retina. The correction of system polarization distortions yielded reduced phase retardation noise, and better estimates of the diattenuation and the relative optic axis orientation in weakly birefringent tissues. The clinical potential of the system is shown by en face visualization of the phase retardation and optic axis orientation of the retinal nerve fiber layer in a healthy volunteer and a glaucoma patient with nerve fiber loss. PMID:25136498

  3. Symbolic algebra approach to the calculation of intraocular lens power following cataract surgery

    NASA Astrophysics Data System (ADS)

    Hjelmstad, David P.; Sayegh, Samir I.

    2013-03-01

    We present a symbolic approach based on matrix methods that allows for the analysis and computation of intraocular lens power following cataract surgery. We extend the basic matrix approach corresponding to paraxial optics to include astigmatism and other aberrations. The symbolic approach allows for a refined analysis of the potential sources of errors ("refractive surprises"). We demonstrate the computation of lens powers including toric lenses that correct for both defocus (myopia, hyperopia) and astigmatism. A specific implementation in Mathematica allows an elegant and powerful method for the design and analysis of these intraocular lenses.
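
    The paraxial part of such a matrix computation can be sketched as follows (geometry and powers are illustrative, not clinical values): the system matrix of the cornea, the IOL and the intervening translations is built by multiplication, and the IOL power is chosen so that the matrix's A element vanishes, i.e. parallel rays focus on the retina.

```python
import numpy as np

# Paraxial (first-order) sketch in the reduced-angle matrix convention:
# refraction L(P) = [[1, 0], [-P, 1]] and translation
# T = [[1, d/n], [0, 1]] for geometric distance d and refractive index n.

def refraction(P):            # P in diopters
    return np.array([[1.0, 0.0], [-P, 1.0]])

def translation(d, n):        # d in meters
    return np.array([[1.0, d / n], [0.0, 1.0]])

P_cornea = 43.0               # corneal power (D), illustrative
d_acd    = 0.0045             # cornea-to-IOL distance (m), illustrative
d_vit    = 0.0185             # IOL-to-retina distance (m), illustrative
n_eye    = 1.336

def system_A(P_iol):
    """A element of the eye's system matrix; A = 0 means parallel
    rays focus on the retina (emmetropia)."""
    M = translation(d_vit, n_eye) @ refraction(P_iol) \
        @ translation(d_acd, n_eye) @ refraction(P_cornea)
    return M[0, 0]

# A(P) is linear in P_iol, so two evaluations give the root exactly.
a0, a1 = system_A(0.0), system_A(1.0)
P_required = -a0 / (a1 - a0)
```

    Extending the 2x2 matrices to the 4x4 (astigmatic) case follows the same pattern, with the toric IOL power becoming a 2x2 dioptric power block.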

  4. Formic Acid-Based Direct, On-Plate Testing of Yeast and Corynebacterium Species by Bruker Biotyper Matrix-Assisted Laser Desorption Ionization–Time of Flight Mass Spectrometry

    PubMed Central

    Theel, Elitza S.; Schmitt, Bryan H.; Hall, Leslie; Cunningham, Scott A.; Walchak, Robert C.; Patel, Robin

    2012-01-01

    An on-plate testing method using formic acid was evaluated on the Bruker Biotyper matrix-assisted laser desorption ionization–time of flight (MALDI-TOF) mass spectrometry system using 90 yeast and 78 Corynebacterium species isolates, and 95.6 and 81.1% of yeast and 96.1 and 92.3% of Corynebacterium isolates were correctly identified to the genus and species levels, respectively. The on-plate method using formic acid yielded identification percentages similar to those for the conventional but more laborious tube-based extraction. PMID:22760034

  5. Formic acid-based direct, on-plate testing of yeast and Corynebacterium species by Bruker Biotyper matrix-assisted laser desorption ionization-time of flight mass spectrometry.

    PubMed

    Theel, Elitza S; Schmitt, Bryan H; Hall, Leslie; Cunningham, Scott A; Walchak, Robert C; Patel, Robin; Wengenack, Nancy L

    2012-09-01

    An on-plate testing method using formic acid was evaluated on the Bruker Biotyper matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry system using 90 yeast and 78 Corynebacterium species isolates, and 95.6 and 81.1% of yeast and 96.1 and 92.3% of Corynebacterium isolates were correctly identified to the genus and species levels, respectively. The on-plate method using formic acid yielded identification percentages similar to those for the conventional but more laborious tube-based extraction.

  6. A rapid and high-precision method for sulfur isotope δ(34)S determination with a multiple-collector inductively coupled plasma mass spectrometer: matrix effect correction and applications for water samples without chemical purification.

    PubMed

    Lin, An-Jun; Yang, Tao; Jiang, Shao-Yong

    2014-04-15

    Previous studies have indicated that prior chemical purification of samples, although complex and time-consuming, is essential in obtaining precise and accurate results for sulfur isotope ratios using multiple-collector inductively coupled plasma mass spectrometry (MC-ICP-MS). In this study, we introduce a new, rapid and precise MC-ICP-MS method for sulfur isotope determination from water samples without chemical purification. The analytical work was performed on an MC-ICP-MS instrument with medium mass resolution (m/Δm ~ 3000). Standard-sample bracketing (SSB) was used to correct samples throughout the analytical sessions. Reference materials included an Alfa-S (ammonium sulfate) standard solution, ammonium sulfate provided by the lab of the authors and fresh seawater from the South China Sea. A range of matrix-matched Alfa-S standard solutions and ammonium sulfate solutions was used to investigate the matrix (salinity) effect (matrix was added in the form of NaCl). A seawater sample was used to confirm the reliability of the method. Using matrix-matched (salinity-matched) Alfa-S as the working standard, the measured δ(34)S value of AS (-6.73 ± 0.09‰) was consistent with the reference value (-6.78 ± 0.07‰) within the uncertainty, suggesting that this method could be recommended for the measurement of water samples without prior chemical purification. The δ(34)S value determination for the unpurified seawater also yielded excellent results (21.03 ± 0.18‰) that are consistent with the reference value (20.99‰), thus confirming the feasibility of the technique. The data and the results indicate that it is feasible to use MC-ICP-MS and matrix-matched working standards to measure the sulfur isotopic compositions of water samples directly without chemical purification. 
In comparison with the existing MC-ICP-MS techniques, the new method is better for directly measuring δ(34)S values in water samples with complex matrices; therefore, it can significantly accelerate analytical turnover. Copyright © 2014 John Wiley & Sons, Ltd.
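
    The standard-sample bracketing (SSB) correction itself is generic and can be sketched as follows (the ratios are synthetic, the function name is illustrative, and the V-CDT ratio is the commonly cited value):

```python
import numpy as np

# Standard-sample bracketing: instrumental drift is corrected by
# interpolating the 34S/32S ratio of the bracketing standard measured
# immediately before and after each sample.

R_VCDT = 0.0441626        # commonly cited 34S/32S of the V-CDT scale

def delta34S(r_sample, r_std_before, r_std_after, delta_std=0.0):
    """delta(34)S of the sample (per mil) after SSB drift correction
    against a working standard of known delta_std."""
    r_std = 0.5 * (r_std_before + r_std_after)   # linear interpolation
    return ((r_sample / r_std) * (1 + delta_std / 1000.0) - 1.0) * 1000.0

# Synthetic session: standard (delta = 0) drifts by +0.2 per mil;
# the sample, measured midway, is +21.0 per mil vs the standard.
r_std_t0 = R_VCDT * 1.0000
r_std_t1 = R_VCDT * 1.0002
r_sample = R_VCDT * 1.0001 * (1 + 21.0 / 1000.0)
d34S = delta34S(r_sample, r_std_t0, r_std_t1)
```

    With a matrix-matched (salinity-matched) working standard, the same bracketing arithmetic absorbs the matrix-induced sensitivity shift as well as the temporal drift.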

  7. Iteration of ultrasound aberration correction methods

    NASA Astrophysics Data System (ADS)

    Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond

    2004-05-01

    Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimation of the TDA filter, and performing correction on transmit and receive, has proven difficult, and it has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal. The other uses eigenvalue decomposition of the receive cross-spectrum matrix, based upon a receive energy-maximizing criterion. Simulations of iterated aberration correction with a TDA filter were investigated to study its convergence properties. Weak and strong human body-wall models, both emulating the human abdominal wall, generated the aberration. Results after iteration improve aberration correction substantially, and both estimation methods converge, even for the case of strong aberration.
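
    The eigenvalue-decomposition estimator can be illustrated in a toy narrowband simulation (all parameters illustrative): the receive cross-spectral matrix of signals from a common random source is ideally rank one, and the phase of its energy-maximizing eigenvector recovers each element's relative time delay.

```python
import numpy as np

# At one frequency, R = E[x x^H] = |s|^2 a a^H + noise, where
# a_i = exp(-j 2 pi f tau_i) carries the per-element aberration delays.
# The dominant (energy-maximizing) eigenvector of R is proportional to
# a, so its phases give the delays relative to a reference element.

rng = np.random.default_rng(1)
n_elem, n_real, freq = 8, 400, 3e6             # elements, realizations, Hz
true_delay = rng.uniform(-5e-8, 5e-8, n_elem)  # per-element delays (s)
a = np.exp(-2j * np.pi * freq * true_delay)    # aberration steering vector

# Receive signals: common random-scatterer signal per realization,
# phase-shifted per element, plus additive noise.
s = rng.normal(size=n_real) + 1j * rng.normal(size=n_real)
x = np.outer(a, s) + 0.05 * (rng.normal(size=(n_elem, n_real))
                             + 1j * rng.normal(size=(n_elem, n_real)))

R = x @ x.conj().T / n_real                    # receive cross-spectrum matrix
w, V = np.linalg.eigh(R)
v = V[:, -1]                                   # energy-maximizing eigenvector
v = v / v[0]                                   # reference to element 0
est_delay = -np.angle(v) / (2 * np.pi * freq)
rel_true = true_delay - true_delay[0]
```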

  8. Color correction with blind image restoration based on multiple images using a low-rank model

    NASA Astrophysics Data System (ADS)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

    We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally, both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Because the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks, including image denoising, image deblurring, and gray-scale image colorization, can be performed simultaneously. Experiments have verified that our method can achieve consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.

  9. Using artificial neural networks (ANN) for open-loop tomography

    NASA Astrophysics Data System (ADS)

    Osborn, James; De Cos Juez, Francisco Javier; Guzman, Dani; Butterley, Timothy; Myers, Richard; Guesalaga, Andres; Laine, Jesus

    2011-09-01

    The next generation of adaptive optics (AO) systems requires tomographic techniques in order to correct for atmospheric turbulence along lines of sight separated from the guide stars. Multi-object adaptive optics (MOAO) is one such technique. Here, we present a method which uses an artificial neural network (ANN) to reconstruct the target phase given off-axis reference sources. This method does not require any input of the turbulence profile and is therefore less susceptible to changing conditions than some existing methods. We compare our ANN method with a standard least-squares matrix-vector multiplication (MVM) method in simulation and find that its tomographic error is similar to that of the MVM method. In changing conditions the tomographic error increases for the MVM method but remains constant with the ANN model, and no large matrix inversions are required.
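
For intuition, the least-squares MVM baseline can be sketched on a toy linear model. The matrix G, mode counts, and noise level below are our assumptions, not the paper's simulation setup: a linear reconstructor is fitted that maps off-axis measurements to on-axis target modes.

```python
import numpy as np

rng = np.random.default_rng(2)
n_modes, n_meas, n_train = 5, 12, 500
G = rng.standard_normal((n_meas, n_modes))       # target modes -> off-axis slopes (assumed)
targets = rng.standard_normal((n_train, n_modes))
slopes = targets @ G.T + 0.01 * rng.standard_normal((n_train, n_meas))

# Fit the reconstructor R (slopes -> target modes) by least squares
R, *_ = np.linalg.lstsq(slopes, targets, rcond=None)

# Apply to a new, unseen turbulence realization
t_true = rng.standard_normal(n_modes)
t_est = (G @ t_true) @ R
```

The ANN approach replaces the fixed linear map R with a trained nonlinear network, which is what lets it cope with a changing turbulence profile without refitting.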

  10. METHOD 8261: USING SURROGATES TO MEASURE MATRIX EFFECTS AND CORRECT ANALYTICAL RESULTS

    EPA Science Inventory

    Vacuum distillation uses a specialized apparatus. This apparatus has been developed and patented by
    the EPA. Through the Federal Technology Transfer Act this invention has been made available for commercialization. Available vendors for this instrumentation are being evaluat...

  11. Combined group ECC protection and subgroup parity protection

    DOEpatents

    Gara, Alan G.; Chen, Dong; Heidelberger, Philip; Ohmacht, Martin

    2013-06-18

    A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m bit wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.
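
The row-construction rule in the abstract can be sketched directly. Helper names are ours and the patent's actual row assignment may differ; the sketch only shows that rows of P are m-bit vectors with an odd number (at least three) of ones, and that ECC bits are a GF(2) product.

```python
from itertools import combinations

def odd_weight_rows(m, min_ones=3):
    """All m-bit vectors whose weight is odd and >= min_ones."""
    rows = []
    for k in range(min_ones, m + 1, 2):          # odd weights: 3, 5, ...
        for ones in combinations(range(m), k):
            v = [0] * m
            for i in ones:
                v[i] = 1
            rows.append(v)
    return rows

def ecc_bits(data, P):
    """GF(2) product data * P: the m redundant ECC bits for a data word."""
    m = len(P[0])
    return [sum(d & row[j] for d, row in zip(data, P)) % 2 for j in range(m)]

P = odd_weight_rows(5)     # for m = 5, supports up to len(P) = 11 data bits
```

Because every row has at least three ones, a single flipped data bit disturbs at least three ECC bits, which is what gives the code its error-detection strength.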

  12. Kinetic-energy matrix elements for atomic Hylleraas-CI wave functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, Frank E., E-mail: harris@qtp.ufl.edu

    Hylleraas-CI is a superposition-of-configurations method in which each configuration is constructed from a Slater-type orbital (STO) product to which is appended (linearly) at most one interelectron distance r_ij. Computations of the kinetic energy for atoms by this method have been difficult due to the lack of formulas expressing these matrix elements for general angular momentum in terms of overlap and potential-energy integrals. It is shown here that a strategic application of angular-momentum theory, including the use of vector spherical harmonics, enables the reduction of all atomic kinetic-energy integrals to overlap and potential-energy matrix elements. The new formulas are validated by showing that they yield correct results for a large number of integrals published by other investigators.

  13. Direct Iterative Nonlinear Inversion by Multi-frequency T-matrix Completion

    NASA Astrophysics Data System (ADS)

    Jakobsen, M.; Wu, R. S.

    2016-12-01

    Researchers in the mathematical physics community have recently proposed a conceptually new method for solving nonlinear inverse scattering problems (like FWI) which is inspired by the theory of nonlocality of physical interactions. This method, which may be referred to as the T-matrix completion method, is interesting because it is not based on linearization at any stage; there are also no gradient vectors or (inverse) Hessian matrices to calculate. However, the convergence radius of this promising T-matrix completion method is seriously restricted by its use of single-frequency scattering data only. In this study, we have developed a modified version of the T-matrix completion method which we believe is more suitable for applications to nonlinear inverse scattering problems in (exploration) seismology, because it makes use of multi-frequency data. Essentially, we have simplified the single-frequency T-matrix completion method of Levinson and Markel and combined it with the standard sequential frequency inversion (multi-scale regularization) method. For each frequency, we first estimate the experimental T-matrix by using the Moore-Penrose pseudo-inverse. This experimental T-matrix is then used to initiate an iterative procedure for successive estimation of the scattering potential and the T-matrix, using the Lippmann-Schwinger equation for the nonlinear relation between these two quantities. The main physical requirements in the basic iterative cycle are that the T-matrix should be data-compatible and that the scattering potential operator should be dominantly local, although a non-local scattering potential operator is allowed in the intermediate iterations. In our simplified T-matrix completion strategy, we ensure that the T-matrix updates are always data-compatible simply by adding a suitable correction term in the real-space coordinate representation.
Singular-value decomposition representations are not required in our formulation, since we have developed an efficient domain decomposition method. The results of several numerical experiments for the SEG/EAGE salt model illustrate the importance of using multi-frequency data when performing frequency-domain full waveform inversion in strongly scattering media via the new concept of T-matrix completion.

  14. Galaxy two-point covariance matrix estimation for next generation surveys

    NASA Astrophysics Data System (ADS)

    Howlett, Cullan; Percival, Will J.

    2017-12-01

    We perform a detailed analysis of the covariance matrix of the spherically averaged galaxy power spectrum and present a new, practical method for estimating this within an arbitrary survey without the need for running mock galaxy simulations that cover the full survey volume. The method uses theoretical arguments to modify the covariance matrix measured from a set of small-volume cubic galaxy simulations, which are computationally cheap to produce compared to larger simulations and match the measured small-scale galaxy clustering more accurately than is possible using theoretical modelling. We include prescriptions to analytically account for the window function of the survey, which convolves the measured covariance matrix in a non-trivial way. We also present a new method to include the effects of super-sample covariance and modes outside the small simulation volume which requires no additional simulations and still allows us to scale the covariance matrix. As validation, we compare the covariance matrix estimated using our new method to that from a brute-force calculation using 500 simulations originally created for analysis of the Sloan Digital Sky Survey Main Galaxy Sample. We find excellent agreement on all scales of interest for large-scale structure analysis, including those dominated by the effects of the survey window, and on scales where theoretical models of the clustering normally break down, but the new method produces a covariance matrix with significantly better signal-to-noise ratio. Although only formally correct in real space, we also discuss how our method can be extended to incorporate the effects of redshift space distortions.
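
The volume-scaling idea at the core of the approach can be caricatured as follows. This is a deliberately simplified toy with assumed numbers: the real method also treats the survey window function and super-sample covariance, which this sketch omits. The point shown is that a covariance estimated from many cheap small-volume mocks can be rescaled, since the Gaussian part of the power spectrum covariance scales inversely with volume.

```python
import numpy as np

rng = np.random.default_rng(3)
n_mocks, n_k = 2000, 4
true_cov = np.diag([1.0, 0.5, 0.3, 0.2])       # small-box P(k) covariance, assumed
mock_pk = rng.multivariate_normal(np.zeros(n_k), true_cov, size=n_mocks)

cov_small = np.cov(mock_pk, rowvar=False)      # measured from the mocks
volume_ratio = 8.0                             # survey volume / mock volume, assumed
cov_survey = cov_small / volume_ratio          # scaled to the survey volume
```

Using many small boxes drives down the noise in the covariance estimate itself, which is why the method achieves a better signal-to-noise ratio than a brute-force calculation with a limited number of full-volume mocks.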

  15. Complex Langevin simulation of a random matrix model at nonzero chemical potential

    NASA Astrophysics Data System (ADS)

    Bloch, J.; Glesaaen, J.; Verbaarschot, J. J. M.; Zafeiropoulos, S.

    2018-03-01

    In this paper we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.

  16. Testing the criterion for correct convergence in the complex Langevin method

    NASA Astrophysics Data System (ADS)

    Nagata, Keitaro; Nishimura, Jun; Shimasaki, Shinji

    2018-05-01

    Recently the complex Langevin method (CLM) has been attracting attention as a solution to the sign problem, which occurs in Monte Carlo calculations when the effective Boltzmann weight is not real positive. An undesirable feature of the method, however, was that in some parameter regions it yields wrong results even though the Langevin process reaches equilibrium without any problem. In our previous work, we proposed a practical criterion for correct convergence based on the probability distribution of the drift term that appears in the complex Langevin equation. Here we demonstrate the usefulness of this criterion in two solvable theories with many dynamical degrees of freedom, i.e., two-dimensional Yang-Mills theory with a complex coupling constant and the chiral Random Matrix Theory for finite density QCD, which were studied by the CLM before. Our criterion can indeed identify the parameter regions in which the CLM gives correct results.
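
A solvable Gaussian toy model conveys both the complex Langevin update and the drift-magnitude distribution on which the criterion is based. The model, step size, and run lengths below are illustrative assumptions, not the theories studied in the paper: for S(x) = sigma*x^2/2 with complex sigma, the exact answer is <x^2> = 1/sigma, and a rapidly falling drift distribution signals correct convergence.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma = 1.0 + 0.5j                         # complex coupling, assumed
eps, n_steps, n_therm = 0.01, 200_000, 5_000

x = 0.0 + 0.0j
samples, drift_mags = [], []
for step in range(n_steps):
    drift = -sigma * x                     # complexified drift -dS/dx
    x = x + eps * drift + np.sqrt(2 * eps) * rng.standard_normal()
    if step >= n_therm:
        samples.append(x)
        drift_mags.append(abs(drift))

x2_est = np.mean(np.square(samples))       # should approach 1/sigma = 0.8 - 0.4i
drift_mags = np.array(drift_mags)
# Criterion flavor: large drifts should be exponentially rare
tail_frac = np.mean(drift_mags > 5 * drift_mags.mean())
```

In this Gaussian model the drift distribution falls off fast and the CLM converges to the correct result; pathological cases show a heavy-tailed drift distribution instead.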

  17. Approximate string matching algorithms for limited-vocabulary OCR output correction

    NASA Astrophysics Data System (ADS)

    Lasko, Thomas A.; Hauser, Susan E.

    2000-12-01

    Five methods for matching words misrecognized by optical character recognition to their most likely match in a reference dictionary were tested on data from the archives of the National Library of Medicine. The methods, including an adaptation of the cross-correlation algorithm, the generic edit distance algorithm, the edit distance algorithm with a probabilistic substitution matrix, Bayesian analysis, and Bayesian analysis on an actively thinned reference dictionary, were implemented and their accuracy rates compared. Of the five, the Bayesian algorithm produced the most correct matches (87%) and had the advantage of producing scores with a useful and practical interpretation.
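
The edit distance with a substitution matrix, one of the five methods compared, can be sketched as follows. The confusion costs below are illustrative values, not the probabilistic values derived from the NLM data.

```python
def weighted_edit_distance(a, b, sub_cost, default=1.0, indel=1.0):
    """Levenshtein distance where substitution cost depends on the character pair."""
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel
    for j in range(1, n + 1):
        d[0][j] = j * indel
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            c = 0.0 if a[i-1] == b[j-1] else sub_cost.get((a[i-1], b[j-1]), default)
            d[i][j] = min(d[i-1][j] + indel,       # deletion
                          d[i][j-1] + indel,       # insertion
                          d[i-1][j-1] + c)         # match / substitution
    return d[m][n]

# Common OCR confusions get cheap substitutions (illustrative costs)
ocr_costs = {('1', 'l'): 0.1, ('l', '1'): 0.1, ('0', 'O'): 0.1, ('O', '0'): 0.1}
```

Under these costs, "ce11" is distance 0.2 from "cell" but distance 2 under uniform costs, so the dictionary ranking prefers the intended word.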

  18. Inductively Coupled Plasma Optical Emission Spectrometry for Rare Earth Elements Analysis

    NASA Astrophysics Data System (ADS)

    He, Man; Hu, Bin; Chen, Beibei; Jiang, Zucheng

    2017-01-01

    Inductively coupled plasma optical emission spectrometry (ICP-OES) offers multielement capability, high sensitivity, good reproducibility, low matrix effects and a wide dynamic linear range for rare earth element (REE) analysis. However, spectral interference in trace REE analysis by ICP-OES is a serious problem due to the complicated emission spectra of the REEs, and it demands correction techniques such as the interference factor method, derivative spectrometry, the Kalman filtering algorithm and the partial least-squares (PLS) method. Matrix-matching calibration, internal standards, correction factors and sample dilution are usually employed to overcome or reduce the matrix effect. Coupled with various sample introduction techniques, the analytical performance of ICP-OES for REE analysis can be further improved. Compared with conventional pneumatic nebulization (PN), acid and matrix effects are decreased to some extent in flow-injection ICP-OES, with a higher tolerable matrix concentration and better reproducibility. By using electrothermal vaporization as the sample introduction system, direct analysis of solid samples by ICP-OES is achieved, and the vaporization behavior of refractory REEs with high boiling points, which easily form involatile carbides in the graphite tube, can be improved by using a chemical modifier such as polytetrafluoroethylene or 1-phenyl-3-methyl-4-benzoyl-5-pyrazolone. Laser ablation ICP-OES is suitable for the analysis of both conductive and nonconductive solid samples, with absolute detection limits at the ng-pg level and extremely low sample consumption (0.2% of that in conventional PN introduction). ICP-OES has been extensively employed for trace REE analysis in high-purity materials and in environmental and biological samples.

  19. GafChromic EBT film dosimetry with flatbed CCD scanner: a novel background correction method and full dose uncertainty analysis.

    PubMed

    Saur, Sigrun; Frengen, Jomar

    2008-07-01

    Film dosimetry using radiochromic EBT film in combination with a flatbed charge coupled device scanner is a useful method both for two-dimensional verification of intensity-modulated radiation treatment plans and for general quality assurance of treatment planning systems and linear accelerators. Unfortunately, the response over the scanner area is nonuniform, and when not corrected for, this results in a systematic error in the measured dose which is both dose and position dependent. In this study a novel method for background correction is presented. The method is based on the subtraction of a correction matrix, a matrix that is based on scans of films that are irradiated to nine dose levels in the range 0.08-2.93 Gy. Because the response of the film is dependent on the film's orientation with respect to the scanner, correction matrices for both landscape oriented and portrait oriented scans were made. In addition to the background correction method, a full dose uncertainty analysis of the film dosimetry procedure was performed. This analysis takes into account the fit uncertainty of the calibration curve, the variation in response for different film sheets, the nonuniformity after background correction, and the noise in the scanned films. The film analysis was performed for film pieces of size 16 x 16 cm, all with the same lot number, and all irradiations were done perpendicular onto the films. The results show that the 2-sigma dose uncertainty at 2 Gy is about 5% and 3.5% for landscape and portrait scans, respectively. The uncertainty gradually increases as the dose decreases, but at 1 Gy the 2-sigma dose uncertainty is still as good as 6% and 4% for landscape and portrait scans, respectively. The study shows that film dosimetry using GafChromic EBT film, an Epson Expression 1680 Professional scanner and a dedicated background correction technique gives precise and accurate results. 
For the purpose of dosimetric verification, the calculated dose distribution can be compared with the film-measured dose distribution using a dose constraint of 4% (relative to the measured dose) for doses between 1 and 3 Gy. At lower doses, the dose constraint must be relaxed.

  20. Sum-rule corrections: a route to error cancellations in correlation matrix renormalisation theory

    NASA Astrophysics Data System (ADS)

    Liu, C.; Liu, J.; Yao, Y. X.; Wang, C. Z.; Ho, K. M.

    2017-03-01

    We recently proposed the correlation matrix renormalisation (CMR) theory to efficiently and accurately calculate ground state total energy of molecular systems, based on the Gutzwiller variational wavefunction (GWF) to treat the electronic correlation effects. To help reduce numerical complications and better adapt the CMR to infinite lattice systems, we need to further refine the way to minimise the error originated from the approximations in the theory. This conference proceeding reports our recent progress on this key issue, namely, we obtained a simple analytical functional form for the one-electron renormalisation factors, and introduced a novel sum-rule correction for a more accurate description of the intersite electron correlations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.

  1. Validation of an isotope dilution, ICP-MS method based on internal mass bias correction for the determination of trace concentrations of Hg in sediment cores.

    PubMed

    Ciceri, E; Recchia, S; Dossi, C; Yang, L; Sturgeon, R E

    2008-01-15

    The development and validation of a method for the determination of mercury in sediments using a sector field inductively coupled plasma mass spectrometer (SF-ICP-MS) for detection is described. The utilization of isotope dilution (ID) calibration is shown to solve analytical problems related to matrix composition. Mass bias is corrected using an internal mass bias correction technique, validated against the traditional standard bracketing method. The overall analytical protocol is validated against NRCC PACS-2 marine sediment CRM. The estimated limit of detection is 12 ng/g. The proposed procedure was applied to the analysis of a real sediment core sampled to a depth of 160 m in Lake Como, where Hg concentrations ranged from 66 to 750 ng/g.

  2. ARMA Cholesky Factor Models for the Covariance Matrix of Linear Models.

    PubMed

    Lee, Keunbaik; Baek, Changryong; Daniels, Michael J

    2017-11-01

    In longitudinal studies, serial dependence of repeated outcomes must be taken into account to make correct inferences on covariate effects. As such, care must be taken in modeling the covariance matrix. However, estimation of the covariance matrix is challenging because the matrix contains many parameters and the estimate must be positive definite. To overcome these limitations, two Cholesky decomposition approaches have been proposed: a modified Cholesky decomposition for autoregressive (AR) structure and a moving-average Cholesky decomposition for moving-average (MA) structure. However, the correlations of repeated outcomes are often not captured parsimoniously by either approach alone. In this paper, we propose a class of flexible, nonstationary, heteroscedastic models, denoted ARMACD, that exploits the structure allowed by combining AR and MA modeling of the covariance matrix. We analyze a recent lung cancer study to illustrate the power of the proposed methods.
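
The modified (AR) Cholesky decomposition that the ARMACD construction builds on can be sketched as follows; the AR(1) example covariance is our choice. The decomposition writes T @ cov @ T.T = D with T unit lower-triangular, where row t of T holds the negated coefficients of regressing outcome t on its predecessors and D holds the innovation variances.

```python
import numpy as np

def modified_cholesky(cov):
    """Return (T, D) with T @ cov @ T.T = D, T unit lower-triangular."""
    p = cov.shape[0]
    T = np.eye(p)
    D = np.zeros(p)
    D[0] = cov[0, 0]
    for t in range(1, p):
        phi = np.linalg.solve(cov[:t, :t], cov[:t, t])  # autoregressive coefficients
        T[t, :t] = -phi
        D[t] = cov[t, t] - cov[:t, t] @ phi             # innovation variance
    return T, np.diag(D)

rho = 0.6
idx = np.arange(4)
ar1_cov = rho ** np.abs(np.subtract.outer(idx, idx))    # AR(1) correlation matrix
T, D = modified_cholesky(ar1_cov)
```

For an AR(1) covariance only the immediate predecessor receives a nonzero coefficient, which is exactly the parsimony the decomposition exposes; positive definiteness is guaranteed as long as the innovation variances in D are positive.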

  3. Partial volume correction of PET-imaged tumor heterogeneity using expectation maximization with a spatially varying point spread function

    PubMed Central

    Barbee, David L; Flynn, Ryan T; Holden, James E; Nickles, Robert J; Jeraj, Robert

    2010-01-01

    Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects, which may affect treatment prognosis, assessment, or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner's center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method's correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom, and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. 
Comparing SV-PVC to SINV-PVC demonstrated that similar results could be reached using both methods, but large differences result for the arbitrary selection of SINV-PVC parameters. The presented SV-PVC method was performed without user intervention, requiring only a tumor mask as input. Research involving PET-imaged tumor heterogeneity should include correcting for partial volume effects to improve the quantitative accuracy of results. PMID:20009194
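
A one-dimensional Richardson-Lucy (EM) sketch with a correction-factor stopping rule conveys the idea. This toy uses a spatially invariant PSF and an illustrative threshold, unlike the spatially varying kernels and 35% criterion of the paper: each iteration multiplies the estimate by a correction factor, and iteration stops once those corrections settle near unity on the data-supported region.

```python
import numpy as np

def gaussian_psf(width, sigma):
    x = np.arange(width) - width // 2
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def em_deconvolve(observed, psf, tol=0.05, max_iter=200):
    """Richardson-Lucy (EM) deconvolution; stops when the multiplicative
    corrections on the data-supported region are within tol of unity."""
    est = np.full_like(observed, observed.mean())
    support = observed > 1e-6 * observed.max()
    for _ in range(max_iter):
        blurred = np.convolve(est, psf, mode='same')
        ratio = observed / np.maximum(blurred, 1e-12)
        correction = np.convolve(ratio, psf[::-1], mode='same')
        est = est * correction
        if np.max(np.abs(correction[support] - 1.0)) < tol:
            break
    return est

psf = gaussian_psf(11, 2.0)
truth = np.zeros(64)
truth[32] = 10.0                       # a point-like hot spot
observed = np.convolve(truth, psf, mode='same')
restored = em_deconvolve(observed, psf)
```

The restored profile sharpens the blurred hot spot while approximately conserving total activity, which is the quantitative property partial-volume correction is after.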

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saur, Sigrun; Frengen, Jomar; Department of Oncology and Radiotherapy, St. Olavs University Hospital, N-7006 Trondheim

    Film dosimetry using radiochromic EBT film in combination with a flatbed charge coupled device scanner is a useful method both for two-dimensional verification of intensity-modulated radiation treatment plans and for general quality assurance of treatment planning systems and linear accelerators. Unfortunately, the response over the scanner area is nonuniform, and when not corrected for, this results in a systematic error in the measured dose which is both dose and position dependent. In this study a novel method for background correction is presented. The method is based on the subtraction of a correction matrix, a matrix that is based on scans of films that are irradiated to nine dose levels in the range 0.08-2.93 Gy. Because the response of the film is dependent on the film's orientation with respect to the scanner, correction matrices for both landscape oriented and portrait oriented scans were made. In addition to the background correction method, a full dose uncertainty analysis of the film dosimetry procedure was performed. This analysis takes into account the fit uncertainty of the calibration curve, the variation in response for different film sheets, the nonuniformity after background correction, and the noise in the scanned films. The film analysis was performed for film pieces of size 16 x 16 cm, all with the same lot number, and all irradiations were done perpendicular onto the films. The results show that the 2-sigma dose uncertainty at 2 Gy is about 5% and 3.5% for landscape and portrait scans, respectively. The uncertainty gradually increases as the dose decreases, but at 1 Gy the 2-sigma dose uncertainty is still as good as 6% and 4% for landscape and portrait scans, respectively. The study shows that film dosimetry using GafChromic EBT film, an Epson Expression 1680 Professional scanner and a dedicated background correction technique gives precise and accurate results. 
For the purpose of dosimetric verification, the calculated dose distribution can be compared with the film-measured dose distribution using a dose constraint of 4% (relative to the measured dose) for doses between 1 and 3 Gy. At lower doses, the dose constraint must be relaxed.

  5. VOLATILE ORGANIC COMPOUND DETERMINATIONS USING SURROGATE-BASED CORRECTION FOR METHOD AND MATRIX EFFECTS

    EPA Science Inventory

    The principal properties related to analyte recovery in a vacuum distillate are boiling point and relative volatility. The basis for selecting compounds to measure the relationship between these properties and recovery for a vacuum distillation is presented. Surrogates are incorp...

  6. Multi-site Field Verification of Laboratory Derived FDOM Sensor Corrections: The Good, the Bad and the Ugly

    NASA Astrophysics Data System (ADS)

    Saraceno, J.; Shanley, J. B.; Aulenbach, B. T.

    2014-12-01

    Fluorescent dissolved organic matter (FDOM) is an excellent proxy for dissolved organic carbon (DOC) in natural waters. Through this relationship, in situ FDOM can be utilized to capture both high-frequency time series and long-term fluxes of DOC in small streams. However, in order to calculate accurate DOC fluxes for comparison across sites, in situ FDOM data must be compensated for matrix effects. Key matrix effects include temperature, turbidity and the inner filter effect due to color; these interferences must be compensated for to develop a reasonable relationship between FDOM and DOC. In this study, we applied laboratory-derived correction factors to real-time data from the five USGS WEBB headwater streams in order to gauge their effectiveness across a range of matrix effects. The good news is that laboratory-derived correction factors improved the predictive relationship (higher r2) between DOC and FDOM when compared to uncorrected data. The relative importance of each matrix effect (e.g., temperature) varied by site and by time, implying that each and every matrix effect should be compensated for when possible. In general, temperature effects were more important on longer time scales, while corrections for turbidity and DOC inner filter effects were most prevalent during hydrologic events, when the highest instantaneous flux of DOC occurred. Unfortunately, even when corrected for matrix effects, in situ FDOM is a weaker predictor of DOC than A254, a common surrogate for DOC, implying that DOC fluoresces to varying degrees (though this should average out over time), that some matrix effects (e.g. pH) are unaccounted for, or that laboratory-derived correction factors do not encompass the site variability of particles and organics. The least impressive finding is that the inherent dependence on three variables in the FDOM correction algorithm increases the likelihood of gaps in the data record, which increases the uncertainty in calculated DOC flux values.
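
As one concrete example, a linear temperature compensation of the kind commonly applied to FDOM sensors looks like this. The slope rho is instrument-specific and the value below is illustrative, and the study also corrects for turbidity and inner-filter effects, which this sketch omits.

```python
def fdom_temperature_correct(fdom_measured, temp_c, temp_ref=25.0, rho=-0.01):
    """FDOM referenced to temp_ref, assuming a linear response slope rho (1/degC).

    Fluorescence is typically quenched at warmer temperatures (rho < 0),
    so readings taken above temp_ref are compensated upward.
    """
    return fdom_measured / (1.0 + rho * (temp_c - temp_ref))

# A reading taken at 30 degC, warmer than the 25 degC reference
reading = fdom_temperature_correct(40.0, temp_c=30.0)
```

Because the full correction needs temperature, turbidity, and absorbance simultaneously, a dropout in any one sensor creates the data-record gaps the study identifies as the main practical drawback.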

  7. Automatic face naming by learning discriminative affinity matrices from weakly labeled images.

    PubMed

    Xiao, Shijie; Xu, Dong; Wu, Jianxin

    2015-10-01

    Given a collection of images, where each image contains several faces and is associated with a few names in the corresponding caption, the goal of face naming is to infer the correct name for each face. In this paper, we propose two new methods to effectively solve this problem by learning two discriminative affinity matrices from these weakly labeled images. We first propose a new method called regularized low-rank representation by effectively utilizing weakly supervised information to learn a low-rank reconstruction coefficient matrix while exploring multiple subspace structures of the data. Specifically, by introducing a specially designed regularizer to the low-rank representation method, we penalize the corresponding reconstruction coefficients related to the situations where a face is reconstructed by using face images from other subjects or by using itself. With the inferred reconstruction coefficient matrix, a discriminative affinity matrix can be obtained. Moreover, we also develop a new distance metric learning method called ambiguously supervised structural metric learning by using weakly supervised information to seek a discriminative distance metric. Hence, another discriminative affinity matrix can be obtained using the similarity matrix (i.e., the kernel matrix) based on the Mahalanobis distances of the data. Observing that these two affinity matrices contain complementary information, we further combine them to obtain a fused affinity matrix, based on which we develop a new iterative scheme to infer the name of each face. Comprehensive experiments demonstrate the effectiveness of our approach.

  8. Corrigendum: New Form of Kane's Equations of Motion for Constrained Systems

    NASA Technical Reports Server (NTRS)

    Roithmayr, Carlos M.; Bajodah, Abdulrahman H.; Hodges, Dewey H.; Chen, Ye-Hwa

    2007-01-01

    A correction to the previously published article "New Form of Kane's Equations of Motion for Constrained Systems" is presented. Misuse of the transformation matrix between time rates of change of the generalized coordinates and generalized speeds (sometimes called motion variables) resulted in a false conclusion concerning the symmetry of the generalized inertia matrix. The generalized inertia matrix (sometimes referred to as the mass matrix) is in fact symmetric and usually positive definite when one forms nonminimal Kane's equations for holonomic or simple nonholonomic systems, systems subject to nonlinear nonholonomic constraints, and holonomic or simple nonholonomic systems subject to impulsive constraints according to Refs. 1, 2, and 3, respectively. The mass matrix is of course symmetric when one forms minimal equations for holonomic or simple nonholonomic systems using Kane's method as set forth in Ref. 4.

  9. Beta value coupled wave theory for nonslanted reflection gratings.

    PubMed

    Neipp, Cristian; Francés, Jorge; Gallego, Sergi; Bleda, Sergio; Martínez, Francisco Javier; Pascual, Inmaculada; Beléndez, Augusto

    2014-01-01

    We present a modified coupled wave theory to describe the properties of nonslanted reflection volume diffraction gratings. The method is based on the beta value coupled wave theory, corrected by using appropriate boundary conditions. This correction allows prediction of the efficiency of the reflected order for nonslanted reflection gratings embedded in two media with different refractive indices. The results obtained by using this method will be compared to those obtained using a matrix method, which gives exact solutions in terms of Mathieu functions, and also to Kogelnik's coupled wave theory. As will be demonstrated, the technique presented in this paper represents a significant improvement over Kogelnik's coupled wave theory.

  10. Beta Value Coupled Wave Theory for Nonslanted Reflection Gratings

    PubMed Central

    Neipp, Cristian; Francés, Jorge; Gallego, Sergi; Bleda, Sergio; Martínez, Francisco Javier; Pascual, Inmaculada; Beléndez, Augusto

    2014-01-01

    We present a modified coupled wave theory to describe the properties of nonslanted reflection volume diffraction gratings. The method is based on the beta value coupled wave theory, corrected by using appropriate boundary conditions. This correction allows prediction of the efficiency of the reflected order for nonslanted reflection gratings embedded in two media with different refractive indices. The results obtained by using this method will be compared to those obtained using a matrix method, which gives exact solutions in terms of Mathieu functions, and also to Kogelnik's coupled wave theory. As will be demonstrated, the technique presented in this paper represents a significant improvement over Kogelnik's coupled wave theory. PMID:24723811

  11. Texture operator for snow particle classification into snowflake and graupel

    NASA Astrophysics Data System (ADS)

    Nurzyńska, Karolina; Kubo, Mamoru; Muramoto, Ken-ichiro

    2012-11-01

    In order to improve the estimation of precipitation, the coefficients of the Z-R relation should be determined for each snow type; it is therefore necessary to identify the type of falling snow. Consequently, this research addresses the problem of automatically classifying snow particles into snowflake and graupel (the most common types in the study region). Having correctly classified precipitation events, it is believed that the related parameters can be estimated accurately. The automatic classification system presented here describes the images with texture operators. Some are well known from the literature: first-order features, the co-occurrence matrix, the grey-tone difference matrix, the run length matrix, and the local binary pattern; in addition, a novel approach to designing simple local statistic operators is introduced. In this work the following texture operators are defined: mean histogram, min-max histogram, and mean-variance histogram. Moreover, building a feature vector based on the intermediate structures created in many of the aforementioned algorithms is also suggested. For classification, the k-nearest-neighbour classifier was applied. The results showed that correct classification accuracy above 80% is achievable with most of the techniques. The best result, 86.06%, was achieved for an operator built from the intermediate structure of the co-occurrence matrix calculation. Next, it was noticed that describing an image with two texture operators does not improve the classification results considerably; in the best case the correct classification efficiency was 87.89%, for a pair of texture operators created from the local binary pattern and the intermediate structure of the grey-tone difference matrix calculation. This also suggests that the information gathered by each texture operator is redundant. 
Therefore, the principal component analysis was applied in order to remove the unnecessary information and additionally reduce the length of the feature vectors. The improvement of the correct classification efficiency for up to 100% is possible for methods: min-max histogram, texture operator built from structure achieved in a middle stage of co-occurrence matrix calculation, texture operator built from a structure achieved in a middle stage of grey-tone difference matrix creation, and texture operator based on a histogram, when the feature vector stores 99% of initial information.
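
    The PCA-plus-nearest-neighbour pipeline described above can be sketched end to end. The feature vectors below are synthetic stand-ins (the real ones come from the texture operators), and the 99%-variance retention criterion mirrors the abstract; none of the numbers are from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for texture feature vectors of two snow-particle
# classes (snowflake vs. graupel); real features would come from the
# co-occurrence / grey-tone difference matrix structures.
n_per_class, n_features = 50, 20
snowflake = rng.normal(0.0, 1.0, (n_per_class, n_features))
graupel = rng.normal(2.0, 1.0, (n_per_class, n_features))
X = np.vstack([snowflake, graupel])
y = np.array([0] * n_per_class + [1] * n_per_class)

# PCA: keep enough components to retain 99% of the variance,
# mirroring the "99% of initial information" criterion.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_ratio = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(var_ratio, 0.99) + 1)
Z = Xc @ Vt[:k].T

# Leave-one-out 1-nearest-neighbour classification on the reduced vectors.
correct = 0
for i in range(len(Z)):
    d = np.linalg.norm(Z - Z[i], axis=1)
    d[i] = np.inf                      # exclude the sample itself
    correct += y[np.argmin(d)] == y[i]
accuracy = correct / len(Z)
```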

  12. Matrix effects on organic pollutants analysis in marine sediment

    NASA Astrophysics Data System (ADS)

    Azis, M. Y.; Asia, L.; Piram, A.; Buchari, B.; Doumenq, P.; Setiyanto, H.

    2018-05-01

    Interference from the sample matrix can compromise the accuracy of an analytical method. Accelerated solvent extraction followed by purification was applied to separate organic micropollutants from marine sediment, which served as the matrix for the evaluation of organic pollutants in the marine environment. Polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs) are two examples of environmental organic pollutants that are carcinogenic and mutagenic. Marine sediments are important matrices of information regarding human activities in coastal areas as well as the fate and behavior of organic pollutants, which persist in the long term. This research aimed to evaluate the matrix effect and the recovery from marine sediment spiked with several standard solutions and deuterated analogues of the target organic pollutant molecules, using a sediment sample from an unpolluted location. The methods were evaluated with standard calibration curves (linearity better than 0.999, LOQs ranging from 0.5 to 1000 pg μL-1, and LODs below the LOQs). Relative recovery (YE) and relative matrix effect (ME), corrected with deuterated standards, were used to evaluate the matrix interference. Interference effects were higher for OCPs than for PCBs in marine sediment.

  13. Combined group ECC protection and subgroup parity protection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gara, Alan; Cheng, Dong; Heidelberger, Philip

    A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m-bit-wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.
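
    The row-construction rule quoted above (m-bit-wide vectors with an odd number, at least three, of ones) can be enumerated directly. The abstract does not specify how vectors are assigned to particular rows or how the two parity columns are chosen, so this sketch stops at candidate generation:

```python
from itertools import combinations

def candidate_rows(m):
    """Enumerate m-bit-wide vectors with an odd number (>= 3) of ones,
    the row pool described in the abstract. Vectors are returned as
    integers whose binary digits give a row of P."""
    rows = []
    for weight in range(3, m + 1, 2):            # odd weights: 3, 5, 7, ...
        for ones in combinations(range(m), weight):
            v = 0
            for bit in ones:
                v |= 1 << bit
            rows.append(v)
    return rows

rows = candidate_rows(5)     # C(5,3) + C(5,5) = 11 candidate rows
```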

  14. M-estimator for the 3D symmetric Helmert coordinate transformation

    NASA Astrophysics Data System (ADS)

    Chang, Guobin; Xu, Tianhe; Wang, Qianxin

    2018-01-01

    The M-estimator for the 3D symmetric Helmert coordinate transformation problem is developed. The small-angle rotation assumption is abandoned. The direction cosine matrix or the quaternion is used to represent the rotation. The 3 × 1 multiplicative error vector is defined to represent the rotation estimation error. An analytical solution can be employed to provide the initial approximation for the iteration if the outliers are not large. The iteration is carried out using the iteratively reweighted least-squares scheme. In each iteration after the first one, the measurement equation is linearized using the available parameter estimates, the reweighting matrix is constructed using the residuals obtained in the previous iteration, and then the parameter estimates with their variance-covariance matrix are calculated. The influence functions of a single pseudo-measurement on the least-squares estimator and on the M-estimator are derived to show the robustness theoretically. In the solution process, the parameter is rescaled in order to improve the numerical stability. Monte Carlo experiments are conducted to check the developed method. Different cases to investigate whether the assumed stochastic model is correct are considered. The results with simulated data slightly deviating from the true model are used to show the developed method's statistical efficiency under the assumed stochastic model, its robustness against deviations from the assumed stochastic model, and the validity of the estimated variance-covariance matrix whether or not the assumed stochastic model is correct.
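
    The iteratively reweighted least-squares scheme the abstract relies on can be illustrated on a plain linear regression with Huber weights. This is a generic sketch only: the paper's estimator additionally re-linearises the (nonlinear) Helmert measurement equation at each step, and its weight function and scale handling may differ:

```python
import numpy as np

def irls_huber(A, b, c=1.345, iters=20):
    """Iteratively reweighted least squares with Huber weights: start from
    ordinary LS, then repeatedly downweight observations with large
    residuals and re-solve the weighted problem."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # plain LS start value
    for _ in range(iters):
        r = b - A @ x
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)            # Huber weight function
        sw = np.sqrt(w)
        x = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]
    return x

rng = np.random.default_rng(1)
A = np.column_stack([np.ones(100), rng.uniform(0, 10, 100)])
x_true = np.array([2.0, 0.5])
b = A @ x_true + rng.normal(0, 0.1, 100)
b[:5] += 50.0                                       # five gross outliers
x_hat = irls_huber(A, b)
```

The outliers barely perturb the robust fit, which is the behaviour the influence-function analysis in the abstract formalises.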

  15. Application of coordinate transform on ball plate calibration

    NASA Astrophysics Data System (ADS)

    Wei, Hengzheng; Wang, Weinong; Ren, Guoying; Pei, Limei

    2015-02-01

    For the ball plate calibration method using a coordinate measuring machine (CMM) equipped with a laser interferometer, it is essential to adjust the ball plate parallel to the direction of the laser beam, which is very time-consuming. To solve this problem, a method based on coordinate transformation between the machine system and the object system is presented. With the coordinates of fixed points on the ball plate measured in both the object system and the machine system, the transformation matrix between the two coordinate systems is calculated. The laser interferometer measurement error due to the placement of the ball plate can then be corrected with this transformation matrix. Experimental results indicate that this method is consistent with the manual adjustment method while avoiding the complexity of ball plate adjustment. It can also be applied to ball beam calibration.
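
    Estimating a transformation between two coordinate systems from matched fixed points is commonly done with the SVD-based (Kabsch) rigid-transform solution; the sketch below uses that standard method as an illustrative stand-in, with made-up point data, since the abstract does not give its exact algorithm:

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares R, t with dst_i ≈ R @ src_i + t (Kabsch/SVD method),
    computed from matched points measured in two coordinate systems."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

rng = np.random.default_rng(2)
ang = np.deg2rad(30)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
t_true = np.array([5.0, -3.0, 1.0])
src = rng.uniform(-1.0, 1.0, (10, 3))           # fixed points, object system
dst = src @ R_true.T + t_true                   # same points, machine system
R, t = fit_rigid_transform(src, dst)
```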

  16. Convex Banding of the Covariance Matrix

    PubMed Central

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings. PMID:28042189
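
    For contrast with the convex estimator, the classical operation it improves upon is easy to state in code. The sketch below applies plain hard banding with a fixed bandwidth; the paper's estimator instead learns a data-adaptive Toeplitz taper by solving a convex program, which is not reproduced here:

```python
import numpy as np

def band_covariance(S, k):
    """Hard banding: zero every entry of S with |i - j| > k. This is the
    classical non-adaptive banding operation, shown only to illustrate
    the idea of exploiting a known variable ordering."""
    idx = np.arange(S.shape[0])
    return S * (np.abs(idx[:, None] - idx[None, :]) <= k)

rng = np.random.default_rng(3)
p, n = 20, 100
# AR(1)-style truth: covariance decays geometrically away from the diagonal.
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = np.cov(X, rowvar=False)
S_banded = band_covariance(S, k=4)
err_raw = np.linalg.norm(S - Sigma)
err_banded = np.linalg.norm(S_banded - Sigma)
```

Zeroing the noisy far-off-diagonal entries reduces the Frobenius error relative to the raw sample covariance when the true covariance is (approximately) banded.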

  17. Convex Banding of the Covariance Matrix.

    PubMed

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings.

  18. Characterization for imperfect polarizers under imperfect conditions.

    PubMed

    Nee, S M; Yoo, C; Cole, T; Burge, D

    1998-01-01

    The principles for measuring the extinction ratio and transmittance of a polarizer are formulated by use of the principal Mueller matrix, which includes both polarization and depolarization. The extinction ratio is about half of the depolarization, and the contrast is the inverse of the extinction ratio. Errors in the extinction ratio caused by partially polarized incident light and the misalignment of polarizers can be corrected by the devised zone average method and the null method. Used with a laser source, the null method can measure contrasts for very good polarizers. Correct algorithms are established to deduce the depolarization for three comparable polarizers calibrated mutually. These methods are tested with wire-grid polarizers used in the 3-5-microm wavelength region with a laser source and also a lamp source. The contrasts obtained from both methods agree.

  19. A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    NASA Astrophysics Data System (ADS)

    Yang, Liusha; Couillet, Romain; McKay, Matthew R.

    2015-12-01

    We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
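
    One common way to combine Tyler's M-estimator with identity shrinkage is the regularised fixed-point iteration sketched below. This is an illustration of the general hybrid idea only: the shrinkage intensity rho is held fixed here, whereas the paper tunes it online to minimise the estimated realized risk:

```python
import numpy as np

def regularized_tyler(X, rho=0.1, iters=50):
    """Fixed-point iteration for a shrinkage-regularised Tyler scatter
    estimator: robust to heavy tails, shrunk toward the identity."""
    n, p = X.shape
    Sigma = np.eye(p)
    for _ in range(iters):
        inv = np.linalg.inv(Sigma)
        q = np.einsum('ij,jk,ik->i', X, inv, X)   # x_i^T Sigma^{-1} x_i
        S = (p / n) * (X.T / q) @ X               # Tyler's weighted scatter
        Sigma = (1 - rho) * S + rho * np.eye(p)   # shrink toward identity
        Sigma = p * Sigma / np.trace(Sigma)       # fix the trace (shape only)
    return Sigma

rng = np.random.default_rng(4)
n, p = 200, 5
Sigma_true = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
# Heavy-tailed samples: multivariate t with 3 dof and scatter Sigma_true.
z = rng.normal(size=(n, p)) @ np.linalg.cholesky(Sigma_true).T
w = rng.chisquare(3, size=n) / 3.0
X = z / np.sqrt(w)[:, None]
Sigma_hat = regularized_tyler(X)
err_est = np.linalg.norm(Sigma_hat - Sigma_true)
err_eye = np.linalg.norm(np.eye(p) - Sigma_true)
```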

  20. A comparison of visual outcomes in three different types of monofocal intraocular lenses

    PubMed Central

    Shetty, Vijay; Haldipurkar, Suhas S; Gore, Rujuta; Dhamankar, Rita; Paik, Anirban; Setia, Maninder Singh

    2015-01-01

    AIM To compare the visual outcomes (distance and near) in patients opting for three different types of monofocal intraocular lens (IOL) (Matrix Aurium, AcrySof single piece, and AcrySof IQ). METHODS The present study is a cross-sectional analysis of secondary clinical data collected from 153 eyes (52 eyes in the Matrix Aurium, 48 in the AcrySof single piece, and 53 in the AcrySof IQ group) undergoing cataract surgery (2011-2012). We compared near vision, distance vision, and distance corrected near vision for these three types of lenses on day 15 (±3) post-surgery. RESULTS About 69% of the eyes in the Matrix Aurium group had good uncorrected distance vision post-surgery; the proportion was 48% and 57% in the AcrySof single piece and AcrySof IQ groups (P=0.09). The proportion of eyes with good distance corrected near vision was 38%, 33%, and 15% in the Matrix Aurium, AcrySof single piece, and AcrySof IQ groups respectively (P=0.02). Similarly, the proportion with good “both near and distance vision” was 38%, 33%, and 15% in the Matrix Aurium, AcrySof single piece, and AcrySof IQ groups respectively (P=0.02). Only the Matrix Aurium group had significantly better combined “distance and near vision” compared with the AcrySof IQ group (odds ratio: 5.87, 95% confidence intervals: 1.68 to 20.56). CONCLUSION Matrix Aurium monofocal lenses may be a good option for patients who desire good near as well as distance vision post-surgery. PMID:26682168

  1. Sum-rule corrections: A route to error cancellations in correlation matrix renormalisation theory

    DOE PAGES

    Liu, C.; Liu, J.; Yao, Y. X.; ...

    2017-01-16

    Here, we recently proposed the correlation matrix renormalisation (CMR) theory to efficiently and accurately calculate the ground state total energy of molecular systems, based on the Gutzwiller variational wavefunction (GWF) to treat electronic correlation effects. To help reduce numerical complications and better adapt the CMR to infinite lattice systems, we need to further refine the way the error originating from the approximations in the theory is minimised. This conference proceeding reports our recent progress on this key issue: we obtained a simple analytical functional form for the one-electron renormalisation factors, and introduced a novel sum-rule correction for a more accurate description of the intersite electron correlations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.

  2. Sum-rule corrections: A route to error cancellations in correlation matrix renormalisation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, C.; Liu, J.; Yao, Y. X.

    Here, we recently proposed the correlation matrix renormalisation (CMR) theory to efficiently and accurately calculate the ground state total energy of molecular systems, based on the Gutzwiller variational wavefunction (GWF) to treat electronic correlation effects. To help reduce numerical complications and better adapt the CMR to infinite lattice systems, we need to further refine the way the error originating from the approximations in the theory is minimised. This conference proceeding reports our recent progress on this key issue: we obtained a simple analytical functional form for the one-electron renormalisation factors, and introduced a novel sum-rule correction for a more accurate description of the intersite electron correlations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.

  3. DISSOLVED ORGANIC FLUOROPHORES IN SOUTHEASTERN US COASTAL WATERS: CORRECTION METHOD FOR ELIMINATING RAYLEIGH AND RAMAN SCATTERING PEAKS IN EXCITATION-EMISSION MATRICES

    EPA Science Inventory

    Fluorescence-based observations provide useful, sensitive information concerning the nature and distribution of colored dissolved organic matter (CDOM) in coastal and freshwater environments. The excitation-emission matrix (EEM) technique has become widely used for evaluating sou...

  4. SU-F-R-32: Evaluation of MRI Acquisition Parameter Variations On Texture Feature Extraction Using ACR Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Y; Wang, J; Wang, C

    Purpose: To investigate the sensitivity of classic texture features to variations of MRI acquisition parameters. Methods: This study was performed on the American College of Radiology (ACR) MRI Accreditation Program phantom. MR imaging was acquired on a GE 750 3T scanner with an XRM gradient, employing T1-weighted images (TR/TE = 500/20 ms) with the following parameters as the reference standard: number of signal averages (NEX) = 1, matrix size = 256×256, flip angle = 90°, slice thickness = 5 mm. The effect of the acquisition parameters on texture features with and without non-uniformity correction was investigated, while all the other parameters were kept at the reference standard. Protocol parameters were set as follows: (a) NEX = 0.5, 2 and 4; (b) phase encoding steps = 128, 160 and 192; (c) matrix size = 128×128, 192×192 and 512×512. 32 classic texture features were generated using the classic gray level run length matrix (GLRLM) and gray level co-occurrence matrix (GLCOM) from each image data set. The normalized range ((maximum-minimum)/mean) was calculated to determine variation among the scans with different protocol parameters. Results: For different NEX, 31 out of 32 texture features' ranges are within 10%. For different phase encoding steps, 31 out of 32 texture features' ranges are within 10%. For different acquisition matrix sizes without non-uniformity correction, 14 out of 32 texture features' ranges are within 10%; with non-uniformity correction, 16 out of 32 texture features' ranges are within 10%. Conclusion: Initial results indicated that the texture features whose ranges stay within 10% are less sensitive to variations in T1-weighted MRI acquisition parameters. This suggests that certain texture features might be more reliable candidates for potential biomarkers in MR quantitative image analysis.
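
    The stability metric used above, the normalized range (maximum-minimum)/mean, is simple to compute; the feature values below are hypothetical, not data from the study:

```python
import numpy as np

def normalized_range(values):
    """(max - min) / mean: the metric used to flag texture features
    that vary by less than 10% across protocol settings."""
    v = np.asarray(values, dtype=float)
    return (v.max() - v.min()) / v.mean()

# Hypothetical values of one texture feature across three scans:
vals = [10.2, 10.5, 10.9]
nr = normalized_range(vals)
stable = nr < 0.10          # the "within 10%" criterion
```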

  5. Application of side-oblique image-motion blur correction to Kuaizhou-1 agile optical images.

    PubMed

    Sun, Tao; Long, Hui; Liu, Bao-Cheng; Li, Ying

    2016-03-21

    Given the recent development of agile optical satellites for rapid-response land observation, side-oblique image-motion (SOIM) detection and blur correction have become increasingly essential for improving the radiometric quality of side-oblique images. The Chinese small-scale agile mapping satellite Kuaizhou-1 (KZ-1) was developed by the Harbin Institute of Technology and launched for multiple emergency applications. Like other agile satellites, KZ-1 suffers from SOIM blur, particularly in captured images with large side-oblique angles. SOIM detection and blur correction are critical for improving the image radiometric accuracy. This study proposes a SOIM restoration method based on segmental point spread function detection. The segment region width is determined by satellite parameters such as speed, height, integration time, and side-oblique angle. The corresponding algorithms and a matrix form are proposed for SOIM blur correction. Radiometric objective evaluation indices are used to assess the restoration quality. Beijing regional images from KZ-1 are used as experimental data. The radiometric quality is found to increase greatly after SOIM correction. Thus, the proposed method effectively corrects image motion for KZ-1 agile optical satellites.

  6. Orbit-product representation and correction of Gaussian belief propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jason K; Chertkov, Michael; Chernyak, Vladimir

    We present a new interpretation of Gaussian belief propagation (GaBP) based on the 'zeta function' representation of the determinant as a product over orbits of a graph. We show that GaBP captures back-tracking orbits of the graph and consider how to correct this estimate by accounting for non-backtracking orbits. We show that the product over non-backtracking orbits may be interpreted as the determinant of the non-backtracking adjacency matrix of the graph with edge weights based on the solution of GaBP. An efficient method is proposed to compute a truncated correction factor including all non-backtracking orbits up to a specified length.

  7. Convergence Results on Iteration Algorithms to Linear Systems

    PubMed Central

    Wang, Zhuande; Yang, Chuansheng; Yuan, Yubo

    2014-01-01

    In order to solve large-scale linear systems, backward and Jacobi iteration algorithms are employed. Convergence is the most important issue. In this paper, a unified backward iterative matrix is proposed, and it is shown that some well-known iterative algorithms can be deduced from it. The most important result is that the convergence results have been proved. Firstly, the spectral radius of the Jacobi iterative matrix is positive and that of the backward iterative matrix is strongly positive (larger than a positive constant). Secondly, the two iterations have the same convergence behaviour (they converge or diverge simultaneously). Finally, some numerical experiments show that the proposed algorithms are correct and have the merits of backward methods. PMID:24991640
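
    The role of the spectral radius in these convergence results can be seen concretely with the standard Jacobi iteration on a small made-up system (this is textbook Jacobi, not the paper's unified backward iteration):

```python
import numpy as np

def jacobi(A, b, iters=100):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - R x_k), where A = D + R
    and D is the diagonal of A."""
    D = np.diag(A)
    R = A - np.diag(D)
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

# Strictly diagonally dominant system, so Jacobi converges: the spectral
# radius of the iteration matrix M = I - D^{-1} A is below one.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi(A, b)
M = np.eye(3) - A / np.diag(A)[:, None]
rho = max(abs(np.linalg.eigvals(M)))
```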

  8. More efficient parameter estimates for factor analysis of ordinal variables by ridge generalized least squares.

    PubMed

    Yuan, Ke-Hai; Jiang, Ge; Cheng, Ying

    2017-11-01

    Data in psychology are often collected using Likert-type scales, and it has been shown that factor analysis of Likert-type data is better performed on the polychoric correlation matrix than on the product-moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies showed that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, and thus LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To mend such a gap between statistical theory and empirical work, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than existing methods (LS, DWLS, AGLS). A real-data example indicates that estimates by ridge GLS are 9-20% more efficient than those by existing methods. Rescaled and adjusted test statistics as well as sandwich-type standard errors following the ridge GLS methods also perform reasonably well. © 2017 The British Psychological Society.

  9. Complex Langevin simulation of a random matrix model at nonzero chemical potential

    DOE PAGES

    Bloch, Jacques; Glesaaen, Jonas; Verbaarschot, Jacobus J. M.; ...

    2018-03-06

    In this study we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.

  10. Complex Langevin simulation of a random matrix model at nonzero chemical potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bloch, Jacques; Glesaaen, Jonas; Verbaarschot, Jacobus J. M.

    In this study we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.

  11. Adapting Covariance Propagation to Account for the Presence of Modeled and Unmodeled Maneuvers

    NASA Technical Reports Server (NTRS)

    Schiff, Conrad

    2006-01-01

    This paper explores techniques that can be used to adapt the standard linearized propagation of an orbital covariance matrix to the case where there is a maneuver and an associated execution uncertainty. A Monte Carlo technique is used to construct a final orbital covariance matrix for a 'prop-burn-prop' process that takes into account initial state uncertainty and execution uncertainties in the maneuver magnitude. This final orbital covariance matrix is regarded as 'truth' and comparisons are made with three methods using modified linearized covariance propagation. The first method accounts for the maneuver by modeling its nominal effect within the state transition matrix but excludes the execution uncertainty by omitting a process noise matrix from the computation. The second method does not model the maneuver but includes a process noise matrix to account for the uncertainty in its magnitude. The third method, which is essentially a hybrid of the first two, includes the nominal portion of the maneuver via the state transition matrix and uses a process noise matrix to account for the magnitude uncertainty. The first method is unable to produce the final orbit covariance except in the case of zero maneuver uncertainty. The second method yields good accuracy for the final covariance matrix but fails to model the final orbital state accurately. Agreement between the simulated covariance data produced by this method and the Monte Carlo truth data fell within 0.5-2.5 percent over a range of maneuver sizes that span two orders of magnitude (0.1-20 m/s). The third method, which yields a combination of good accuracy in the computation of the final covariance matrix and correct accounting for the presence of the maneuver in the nominal orbit, is the best method for applications involving the computation of times of closest approach and the corresponding probability of collision, PC. However, applications for the two other methods exist and are briefly discussed. 
Although the process model ("prop-burn-prop") that was studied is very simple - point-mass gravitational effects due to the Earth combined with an impulsive delta-V in the velocity direction for the maneuver - generalizations to more complex scenarios, including high fidelity force models, finite duration maneuvers, and maneuver pointing errors, are straightforward and are discussed in the conclusion.
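
    The core update in all three methods is linearised covariance propagation. The toy sketch below implements the third ("hybrid") approach on a two-state coast-burn-coast sequence: the nominal maneuver effect would live in the state transition matrix, while the burn execution uncertainty enters through a process-noise matrix. All numerical values are illustrative assumptions, not data from the paper:

```python
import numpy as np

def propagate_covariance(P, Phi, Q=None):
    """Linearised covariance propagation: P' = Phi P Phi^T + Q."""
    P_new = Phi @ P @ Phi.T
    return P_new if Q is None else P_new + Q

# Toy 1D problem: state = [position, velocity] with constant-velocity coasts.
dt = 10.0
Phi = np.array([[1.0, dt],
                [0.0, 1.0]])              # coast state transition matrix
P0 = np.diag([100.0, 0.01])              # assumed initial covariance
sigma_dv = 0.05                          # assumed 1-sigma burn magnitude error
Q_burn = np.diag([0.0, sigma_dv**2])     # execution uncertainty as process noise

P = propagate_covariance(P0, Phi)                # coast to the burn
P = propagate_covariance(P, np.eye(2), Q_burn)   # impulsive burn: uncertainty
                                                 # injected into velocity
P2 = propagate_covariance(P, Phi)                # coast after the burn
```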

  12. A precise and accurate acupoint location obtained on the face using consistency matrix pointwise fusion method.

    PubMed

    Yanq, Xuming; Ye, Yijun; Xia, Yong; Wei, Xuanzhong; Wang, Zheyu; Ni, Hongmei; Zhu, Ying; Xu, Lingyu

    2015-02-01

    To develop a more precise and accurate acupoint location method, and to identify a procedure for measuring whether an acupoint has been correctly located. On the face, we took acupoint locations from different acupuncture experts and obtained the most precise and accurate acupoint location values with a consistency information fusion algorithm, through a virtual simulation of the facial orientation coordinate system. Because of inconsistencies in each acupuncture expert's original data, systematic error affected the general weight calculation. We therefore first corrected each expert's own systematic acupoint location error, obtaining a rational quantification of the consistency support degree of each expert's acupoint locations and pointwise variable-precision fusion results, so that the fusion error of every expert's acupoint location was reduced to pointwise variable precision. We then made more effective use of the measured characteristics of the different experts' acupoint locations, improving the utilization efficiency of the measurement information and the precision and accuracy of acupoint location. By applying the consistency matrix pointwise fusion method to the acupuncture experts' acupoint location values, each expert's acupoint location information could be calculated, and the most precise and accurate values of each expert's acupoint location could be obtained.

  13. Development of a direct procedure for the measurement of sulfur isotope variability in beers by MC-ICP-MS.

    PubMed

    Giner Martínez-Sierra, J; Santamaria-Fernandez, R; Hearn, R; Marchante Gayón, J M; García Alonso, J I

    2010-04-14

    In this work, a multi-collector inductively coupled plasma mass spectrometer (MC-ICP-MS) was evaluated for the direct measurement of sulfur stable isotope ratios in beers as a first step toward a general study of the natural isotope variability of sulfur in foods and beverages. Sample preparation consisted of a simple dilution of the beers with 1% (v/v) HNO(3). It was observed that different sulfur isotope ratios were obtained for different dilutions of the same sample, indicating that matrix effects differently affected the transmission of the sulfur ions at masses 32, 33, and 34 in the mass spectrometer. Correction for mass-bias-related matrix effects was evaluated using silicon internal standardization. For that purpose, silicon isotopes at masses 29 and 30 were included in the sulfur cup configuration and the natural silicon content in beers was used for internal mass bias correction. It was observed that matrix effects on differential ion transmission could be corrected adequately using silicon internal standardization. The natural isotope variability of sulfur was evaluated by measuring 26 different beer brands. Measured delta(34)S values ranged from -0.2 to 13.8 per thousand. Typical combined standard uncertainties of the measured delta(34)S values were < or = 2 per thousand. The method therefore has great potential to study sulfur isotope variability in foods and beverages.
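
    A common way to apply such an internal-standard mass-bias correction is the exponential (Russell) law: the bias factor is solved from the internal-standard isotope pair and then applied to the analyte ratio. The sketch below follows that generic recipe; all ratio values are illustrative placeholders, not data from the paper, and the paper's exact correction law is not specified in the abstract:

```python
import numpy as np

# Isotope masses in unified atomic mass units.
m32, m34 = 31.9720707, 33.9678668    # 32S, 34S
m29, m30 = 28.9764947, 29.9737702    # 29Si, 30Si

r_si_ref = 0.6599     # assumed "true" 30Si/29Si reference ratio
r_si_meas = 0.6700    # hypothetical measured 30Si/29Si in the same run

# Mass-bias factor f from the silicon internal standard (exponential law):
#   r_meas = r_true * (m_heavy / m_light)^f
f = np.log(r_si_ref / r_si_meas) / np.log(m30 / m29)

# Apply the same factor to the measured 34S/32S ratio:
r_s_meas = 0.0460     # hypothetical measured 34S/32S
r_s_corr = r_s_meas * (m34 / m32) ** f
```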

  14. Coherent-Anomaly Method in Critical Phenomena. III. Mean-Field Transfer-Matrix Method in the 2D Ising Model

    NASA Astrophysics Data System (ADS)

    Hu, Xiao; Katori, Makoto; Suzuki, Masuo

    1987-11-01

    Two kinds of systematic mean-field transfer-matrix methods are formulated for the 2-dimensional Ising spin system, by introducing Weiss-like and Bethe-like approximations. All the critical exponents as well as the true critical point can be estimated with these methods following the CAM procedure. The numerical results for the above system are Tc* ≃ 2.271 (J/kB), γ = γ' ≃ 1.749, β ≃ 0.131 and δ ≃ 15.1. The specific heat is confirmed to be continuous and to have a logarithmic divergence at the true critical point, i.e., α = α' = 0. Thus, the finite-degree-of-approximation scaling ansatz is shown to be correct and very powerful in practical estimations of the critical exponents as well as the true critical point.

  15. Automated artifact detection and removal for improved tensor estimation in motion-corrupted DTI data sets using the combination of local binary patterns and 2D partial least squares.

    PubMed

    Zhou, Zhenyu; Liu, Wei; Cui, Jiali; Wang, Xunheng; Arias, Diana; Wen, Ying; Bansal, Ravi; Hao, Xuejun; Wang, Zhishun; Peterson, Bradley S; Xu, Dongrong

    2011-02-01

    Signal variation in diffusion-weighted images (DWIs) is influenced both by thermal noise and by spatially and temporally varying artifacts, such as rigid-body motion and cardiac pulsation. Motion artifacts are particularly prevalent when scanning difficult patient populations, such as human infants. Although some motion during data acquisition can be corrected using image coregistration procedures, individual DWIs are frequently corrupted beyond repair by sudden, large-amplitude motion either within or outside of the imaging plane. We propose a novel approach that automatically identifies and rejects outlier images using local binary patterns (LBP) and 2D partial least squares (2D-PLS), so that diffusion tensors can be estimated robustly. This method uses an enhanced LBP algorithm to extract local texture features from the image matrices of the DWI data. Because the images have been transformed to local texture matrices, we are able to extract discriminating information that identifies outliers in the data set by extending the traditional one-dimensional PLS algorithm to a two-dimensional operator. The class-membership matrix in this 2D-PLS algorithm is adapted to process samples that are image matrices, and the membership matrix thus represents varying degrees of importance of local information within the images. We also derive the analytic form of the generalized inverse of the class-membership matrix. We show that this method can effectively extract local features from brain images obtained from a large sample of human infants to identify images that are outliers in their textural features, permitting their exclusion from further processing when estimating tensors from the DWIs. This technique is shown to be superior in performance when compared with visual inspection and other common methods to address motion-related artifacts in DWI data. It is also applicable to correcting motion artifacts in other magnetic resonance imaging (MRI) techniques (e.g., bootstrapping estimation) that use univariate or multivariate regression methods to fit MRI data to a pre-specified model. Copyright © 2011 Elsevier Inc. All rights reserved.
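    A minimal sketch of the classic 8-neighbour LBP operator that serves as the texture descriptor here (the paper's "enhanced" LBP variant and the 2D-PLS outlier classifier are not reproduced):

    ```python
    import numpy as np

    # Sketch of the basic 8-neighbour local binary pattern: each interior
    # pixel gets an 8-bit code, one bit per neighbour that is >= the centre.

    def lbp8(img):
        """LBP code for each interior pixel of a 2D array."""
        c = img[1:-1, 1:-1]
        # 8 neighbours, enumerated clockwise from the top-left corner.
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        code = np.zeros_like(c, dtype=np.uint8)
        for bit, (dy, dx) in enumerate(offsets):
            n = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
            code |= ((n >= c).astype(np.uint8) << bit)
        return code

    demo = np.array([[5, 5, 5],
                     [5, 1, 5],
                     [5, 5, 5]], dtype=float)
    # The centre pixel is smaller than every neighbour, so all 8 bits are set.
    codes = lbp8(demo)
    ```

    Outlier detection then operates on these texture codes rather than on raw intensities, which is what makes the method robust to global signal changes.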

  16. Calibration of remotely sensed proportion or area estimates for misclassification error

    Treesearch

    Raymond L. Czaplewski; Glenn P. Catts

    1992-01-01

    Classifications of remotely sensed data contain misclassification errors that bias areal estimates. Monte Carlo techniques were used to compare two statistical methods that correct or calibrate remotely sensed areal estimates for misclassification bias using reference data from an error matrix. The inverse calibration estimator was consistently superior to the...

  17. Noise stochastic corrected maximum a posteriori estimator for birefringence imaging using polarization-sensitive optical coherence tomography

    PubMed Central

    Kasaragod, Deepa; Makita, Shuichi; Hong, Young-Joo; Yasuno, Yoshiaki

    2017-01-01

    This paper presents a noise-stochastic corrected maximum a posteriori estimator for birefringence imaging using Jones matrix optical coherence tomography. The estimator is based on the relationship between the probability distribution functions of the measured birefringence and the effective signal-to-noise ratio (ESNR), as well as the true birefringence and the true ESNR. The Monte Carlo method is used to describe this relationship numerically, and adaptive 2D kernel density estimation provides the likelihood for a posteriori estimation of the true birefringence. The new estimator, which incorporates a stochastic model of the ESNR, is shown to improve on the old estimator; both are based on the Jones matrix noise model. A comparison with the mean estimator is also performed. Numerical simulation validates the superiority of the new estimator, as does in vivo measurement of the optic nerve head. PMID:28270974

  18. Exact first order scattering correction for vector radiative transfer in coupled atmosphere and ocean systems

    NASA Astrophysics Data System (ADS)

    Zhai, Peng-Wang; Hu, Yongxiang; Josset, Damien B.; Trepte, Charles R.; Lucker, Patricia L.; Lin, Bing

    2012-06-01

    We have developed a Vector Radiative Transfer (VRT) code for coupled atmosphere and ocean systems based on the successive order of scattering (SOS) method. In order to achieve efficiency and maintain accuracy, the scattering matrix is expanded in terms of the Wigner d functions, and the delta-fit or delta-M technique is used to truncate the commonly present large forward-scattering peak. To further improve the accuracy of the SOS code, we have implemented an analytical first-order scattering treatment using the exact scattering matrix of the medium. The expansion and truncation techniques are kept for higher-order scattering. The exact first-order scattering correction was originally published by Nakajima and Tanaka [1]. A new contribution of this work is to account for the exact secondary light scattering caused by the light reflected by and transmitted through the rough air-sea interface.

  19. Novel, improved sample preparation for rapid, direct identification from positive blood cultures using matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry.

    PubMed

    Schubert, Sören; Weinert, Kirsten; Wagner, Chris; Gunzl, Beatrix; Wieser, Andreas; Maier, Thomas; Kostrzewa, Markus

    2011-11-01

    Matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) is widely used for rapid and reliable identification of bacteria and yeast grown on agar plates. Moreover, MALDI-TOF MS also holds promise for bacterial identification from blood culture (BC) broths in hospital laboratories. The most important technical step for the identification of bacteria from positive BCs by MALDI-TOF MS is sample preparation to remove blood cells and host proteins. We present a novel, rapid sample preparation method using differential lysis of blood cells. We demonstrate the efficacy and ease of use of this sample preparation and subsequent MALDI-TOF MS identification, applying it to a total of 500 aerobic and anaerobic BCs reported to be positive by a Bactec 9240 system. In 86.5% of all BCs, the microorganism species was correctly identified. Moreover, in 18/27 mixed cultures at least one isolate was correctly identified. A novel method that adjusts the score value for MALDI-TOF MS results is proposed, further improving the proportion of correctly identified samples. The results of the present study show that the MALDI-TOF MS-based method allows rapid (<20 minutes) bacterial identification directly from positive BCs with high accuracy. Copyright © 2011 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  20. Accelerating EPI distortion correction by utilizing a modern GPU-based parallel computation.

    PubMed

    Yang, Yao-Hao; Huang, Teng-Yi; Wang, Fu-Nien; Chuang, Tzu-Chao; Chen, Nan-Kuei

    2013-04-01

    The combination of phase demodulation and field mapping is a practical method to correct echo planar imaging (EPI) geometric distortion. However, since phase dispersion accumulates with each phase-encoding step, the calculation complexity of phase demodulation is Ny-fold higher than that of conventional image reconstruction. Thus, correcting EPI images via phase demodulation is generally a time-consuming task. Parallel computing employing general-purpose calculations on graphics processing units (GPU) can accelerate scientific computing if the algorithm is parallelized. This study proposes a method that incorporates the GPU-based technique into the phase demodulation calculation to reduce computation time. The proposed parallel algorithm was applied to a PROPELLER-EPI diffusion tensor data set. The GPU-based phase demodulation method correctly reduced the EPI distortion and accelerated the computation. The total reconstruction time of the 16-slice PROPELLER-EPI diffusion tensor images with a matrix size of 128 × 128 was reduced from 1,754 seconds to 101 seconds by the parallelized 4-GPU program. GPU computing is a promising method to accelerate EPI geometric correction. The resulting reduction in the computation time of phase demodulation should speed up postprocessing for studies performed with EPI, and should make the PROPELLER-EPI technique practical for clinical use. Copyright © 2011 by the American Society of Neuroimaging.
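    A CPU-side sketch of the field-map-based phase demodulation being accelerated (illustrative only, not the paper's GPU implementation; the array sizes, field map, and echo spacing are toy values):

    ```python
    import numpy as np

    # Sketch of conjugate-phase EPI reconstruction: each phase-encode line ky
    # is demodulated by the phase the off-resonance field db_map (Hz) accrues
    # up to its acquisition time t = ky * esp, then the lines are summed.

    def conjugate_phase_recon(ksp, db_map, esp):
        """ksp: (Ny, Nx) data, Fourier-encoded along axis 0 (phase-encode).
        esp: echo spacing in seconds. Returns the corrected image."""
        ny, nx = ksp.shape
        y = np.arange(ny)
        out = np.zeros((ny, nx), dtype=complex)
        for ky in range(ny):                  # the Ny-fold loop that makes
            t = ky * esp                      # this correction expensive
            kernel = (np.exp(2j * np.pi * ky * y[:, None] / ny)
                      * np.exp(-2j * np.pi * db_map * t))
            out += ksp[ky][None, :] * kernel
        return out / ny

    # With a zero field map the routine reduces to a plain inverse DFT along
    # the phase-encode axis and returns the original image.
    rng = np.random.default_rng(1)
    img = rng.normal(size=(8, 6)) + 1j * rng.normal(size=(8, 6))
    ksp = np.fft.fft(img, axis=0)
    recon = conjugate_phase_recon(ksp, np.zeros((8, 6)), esp=5e-4)
    ```

    The per-`ky` kernel computations are independent across pixels, which is why this loop maps naturally onto a GPU.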

  1. LA-ICP-MS depth profile analysis of apatite: Protocol and implications for (U-Th)/He thermochronometry

    NASA Astrophysics Data System (ADS)

    Johnstone, Samuel; Hourigan, Jeremy; Gallagher, Christopher

    2013-05-01

    Heterogeneous concentrations of α-producing nuclides in apatite have been recognized through a variety of methods. The presence of zonation in apatite complicates both traditional α-ejection corrections and diffusive models, both of which operate under the assumption of homogeneous concentrations. In this work we develop a method for measuring radial concentration profiles of 238U and 232Th in apatite by laser ablation ICP-MS depth profiling. We then focus on one application of this method: removing the bias introduced by applying inappropriate α-ejection corrections. Formal treatment of laser ablation ICP-MS depth profile calibration for apatite includes construction and calibration of matrix-matched standards and quantification of rates of elemental fractionation. From this we conclude that matrix-matched standards provide more robust monitors of fractionation rate and concentration than doped silicate glass standards. We apply laser ablation ICP-MS depth profiling to apatites from three unknown populations and to small, intact crystals of Durango fluorapatite. Accurate and reproducible Durango apatite dates suggest that prolonged exposure to laser drilling does not impact cooling ages. Intracrystalline concentrations vary by at least a factor of 2 in the majority of the samples analyzed, but concentration variation exceeds 5x in only 5 grains and 10x in only 1 of the 63 grains analyzed. Modeling of synthetic concentration profiles suggests that, for concentration variations of 2x and 10x, homogeneous versus zonation-dependent α-ejection corrections could lead to age biases of >5% and >20%, respectively. However, models based on measured concentration profiles generated biases exceeding 5% in only 13 of the 63 cases modeled. Application of zonation-dependent α-ejection corrections did not significantly reduce the age dispersion present in any of the populations studied. This suggests that factors beyond inappropriate homogeneous α-ejection corrections are the dominant source of overdispersion in apatite (U-Th)/He cooling ages.
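    For context, a sketch of the standard homogeneous α-ejection (FT) correction that zonation can bias, using the spherical-grain approximation after Farley et al. (1996); the stopping distance and grain sizes are illustrative assumptions:

    ```python
    # Hedged sketch of the homogeneous alpha-ejection correction for a
    # spherical grain: FT = 1 - (3/4)q + q**3/16 with q = S/R, where S is
    # the alpha stopping distance and R the grain radius. Values are
    # illustrative, not from the paper.

    def ft_sphere(radius_um, stopping_um=20.0):
        """Alpha-retention factor FT for a homogeneous sphere."""
        q = stopping_um / radius_um
        return 1.0 - 0.75 * q + (q ** 3) / 16.0

    def corrected_age(raw_age_ma, radius_um):
        """Raw (U-Th)/He age divided by FT; valid only if the grain is
        homogeneous, which is exactly the assumption the paper tests."""
        return raw_age_ma / ft_sphere(radius_um)

    # Example: a 60-um grain retains roughly three quarters of its alphas,
    # so its raw age is scaled up accordingly.
    age_60um = corrected_age(40.0, 60.0)
    ```

    When U-Th is concentrated near the rim or the core, the true retention differs from this homogeneous FT, which is the source of the >5% and >20% biases modeled above.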

  2. First Industrial Tests of a Matrix Monitor Correction for the Differential Die-away Technique of Historical Waste Drums

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antoni, Rodolphe; Passard, Christian; Perot, Bertrand

    2015-07-01

    The fissile mass in radioactive waste drums filled with compacted metallic residues (spent fuel hulls and nozzles) produced at the AREVA NC La Hague reprocessing plant is measured by neutron interrogation with the Differential Die-away measurement Technique (DDT). In the next years, old hulls and nozzles mixed with ion-exchange resins will be measured. The ion-exchange resins increase neutron moderation in the matrix, compared to the waste measured in the current process. In this context, the Nuclear Measurement Laboratory (LMN) of CEA Cadarache has studied a matrix effect correction method, based on a drum monitor, namely a 3He proportional counter located inside the measurement cavity. After feasibility studies performed with LMN's PROMETHEE 6 laboratory measurement cell and with MCNPX simulations, this paper presents the first experimental tests performed on the industrial ACC (hulls and nozzles compaction facility) measurement system. A calculation vs. experiment benchmark has been carried out by performing dedicated calibration measurements with a representative drum and 235U samples. The comparison between calculation and experiment shows a satisfactory agreement for the drum monitor. The final objective of this work is to confirm the reliability of the modeling approach and the industrial feasibility of the method, which will be implemented on the industrial station for the measurement of historical wastes. (authors)

  3. A square-wave wavelength modulation system for automatic background correction in carbon furnace atomic emission spectrometry

    NASA Astrophysics Data System (ADS)

    Bezur, L.; Marshall, J.; Ottaway, J. M.

    A square-wave wavelength modulation system, based on a rotating quartz chopper with four quadrants of different thicknesses, has been developed and evaluated as a method for automatic background correction in carbon furnace atomic emission spectrometry. Accurate background correction is achieved for the residual black body radiation (Rayleigh scatter) from the tube wall and Mie scatter from particles generated by a sample matrix and formed by condensation of atoms in the optical path. Intensity modulation caused by overlap at the edges of the quartz plates and by the divergence of the optical beam at the position of the modulation chopper has been investigated and is likely to be small.

  4. The influence of pairing correlations on the isospin symmetry breaking corrections of superallowed Fermi beta decays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Çalık, A. E., E-mail: engincalik@yahoo.com; Gerceklioglu, M.; Selam, C.

    2013-05-15

    Within the framework of the quasi-particle random phase approximation, the isospin-breaking correction of superallowed 0+ → 0+ beta decay and the unitarity of the Cabibbo-Kobayashi-Maskawa mixing matrix have been investigated. The broken isotopic symmetry of the nuclear part of the Hamiltonian has been restored by Pyatov's method. The isospin symmetry breaking correction with pairing correlations has been compared with the previous results without pairing. The effect of pairing interactions has been examined for nine superallowed Fermi beta decays; their parent nuclei are 26Al, 34Cl, 38K, 42Sc, 46V, 50Mn, 54Co, 62Ga, and 74Rb.

  5. Ice Cores Dating With a New Inverse Method Taking Account of the Flow Modeling Errors

    NASA Astrophysics Data System (ADS)

    Lemieux-Dudon, B.; Parrenin, F.; Blayo, E.

    2007-12-01

    Deep ice cores extracted from Antarctica or Greenland record a wide range of past climatic events. In order to contribute to the understanding of the Quaternary climate system, the calculation of an accurate depth-age relationship is a crucial point. Up to now, ice chronologies for deep ice cores estimated with inverse approaches have been based on quite simplified ice-flow models that fail to reproduce flow irregularities and consequently to respect all available sets of age markers. We describe in this paper a new inverse method that takes the model uncertainty into account in order to circumvent the restrictions linked to the use of simplified flow models. This method uses first guesses of two physical flow entities, the ice thinning function and the accumulation rate, and then identifies correction functions for both flow entities. We highlight two major benefits brought by this new method: first, the ability to respect a large set of observations and, as a consequence, the feasibility of estimating a synchronized common ice chronology for several cores at the same time. This inverse approach relies on a Bayesian framework. To respect the positivity constraint on the searched correction functions, we assume lognormal probability distributions both for the background errors and for one particular set of the observation errors. We test this new inversion method on three cores simultaneously (the two EPICA cores, DC and DML, and the Vostok core) and we assimilate more than 150 observations (e.g., age markers, stratigraphic links, ...). We analyze the sensitivity of the solution with respect to the background information, especially the prior error covariance matrix. The confidence intervals, based on the calculation of the posterior covariance matrix, are estimated for the correction functions and, for the first time, for the overall output chronologies.

  6. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set-Effect of Pasteurization.

    PubMed

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-02-26

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.

  7. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set—Effect of Pasteurization

    PubMed Central

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-01-01

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified. PMID:26927169

  8. A covariance correction that accounts for correlation estimation to improve finite-sample inference with generalized estimating equations: A study on its applicability with structured correlation matrices

    PubMed Central

    Westgate, Philip M.

    2016-01-01

    When generalized estimating equations (GEE) incorporate an unstructured working correlation matrix, the variances of regression parameter estimates can inflate due to the estimation of the correlation parameters. In previous work, an approximation for this inflation that results in a corrected version of the sandwich formula for the covariance matrix of regression parameter estimates was derived. Use of this correction for correlation structure selection also reduces the over-selection of the unstructured working correlation matrix. In this manuscript, we conduct a simulation study to demonstrate that an increase in variances of regression parameter estimates can occur when GEE incorporates structured working correlation matrices as well. Correspondingly, we show the ability of the corrected version of the sandwich formula to improve the validity of inference and correlation structure selection. We also study the relative influences of two popular corrections to a different source of bias in the empirical sandwich covariance estimator. PMID:27818539

  9. A covariance correction that accounts for correlation estimation to improve finite-sample inference with generalized estimating equations: A study on its applicability with structured correlation matrices.

    PubMed

    Westgate, Philip M

    2016-01-01

    When generalized estimating equations (GEE) incorporate an unstructured working correlation matrix, the variances of regression parameter estimates can inflate due to the estimation of the correlation parameters. In previous work, an approximation for this inflation that results in a corrected version of the sandwich formula for the covariance matrix of regression parameter estimates was derived. Use of this correction for correlation structure selection also reduces the over-selection of the unstructured working correlation matrix. In this manuscript, we conduct a simulation study to demonstrate that an increase in variances of regression parameter estimates can occur when GEE incorporates structured working correlation matrices as well. Correspondingly, we show the ability of the corrected version of the sandwich formula to improve the validity of inference and correlation structure selection. We also study the relative influences of two popular corrections to a different source of bias in the empirical sandwich covariance estimator.
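    A minimal sketch of the uncorrected empirical ("sandwich") covariance estimator whose finite-sample behavior the correction above addresses, for the simplest GEE setting (identity link, independence working correlation, simulated clustered data; not the corrected formula itself):

    ```python
    import numpy as np

    # Illustrative sketch: GEE with identity link and an independence working
    # correlation reduces to OLS, and the robust ("sandwich") covariance is
    # bread * meat * bread with per-cluster score contributions. Data and
    # dimensions are simulated, not from the manuscript.

    rng = np.random.default_rng(0)
    n_clusters, n_per, p = 50, 4, 2
    beta_true = np.array([1.0, -0.5])

    X = rng.normal(size=(n_clusters, n_per, p))
    y = X @ beta_true + rng.normal(size=(n_clusters, n_per))

    Xf, yf = X.reshape(-1, p), y.reshape(-1)
    beta_hat = np.linalg.solve(Xf.T @ Xf, Xf.T @ yf)

    bread = np.linalg.inv(Xf.T @ Xf)
    meat = np.zeros((p, p))
    for i in range(n_clusters):        # sum of per-cluster score outer products
        resid = y[i] - X[i] @ beta_hat
        score = X[i].T @ resid
        meat += np.outer(score, score)

    sandwich = bread @ meat @ bread    # robust covariance of beta_hat
    se = np.sqrt(np.diag(sandwich))
    ```

    The correction discussed in the abstract inflates this `sandwich` matrix to account for the extra variability introduced by estimating the working correlation parameters; that adjustment is not sketched here.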

  10. Isotope Inversion Experiment evaluating the suitability of calibration in surrogate matrix for quantification via LC-MS/MS-Exemplary application for a steroid multi-method.

    PubMed

    Suhr, Anna Catharina; Vogeser, Michael; Grimm, Stefanie H

    2016-05-30

    For reliable quantitative analysis of endogenous analytes in complex biological samples by isotope dilution LC-MS/MS, the creation of appropriate calibrators is a challenge, since analyte-free authentic material is in general not available. Thus, surrogate matrices are often used to prepare calibrators and controls. However, currently employed validation protocols do not include specific experiments to verify the suitability of a surrogate matrix calibration for the quantification of authentic matrix samples. The aim of the study was the development of a novel validation experiment to test whether surrogate-matrix-based calibrators enable correct quantification of authentic matrix samples. The key element of the novel validation experiment is the inversion of the nonlabelled analytes and their stable isotope labelled (SIL) counterparts with respect to their functions, i.e. the SIL compound is the analyte and the nonlabelled substance is employed as the internal standard. As a consequence, both the surrogate and the authentic matrix are analyte-free with regard to the SIL analytes, which allows a comparison of both matrices. We called this approach the Isotope Inversion Experiment. As a figure of merit we defined the accuracy of inverse quality controls in authentic matrix quantified by means of a surrogate matrix calibration curve. As a proof-of-concept application, an LC-MS/MS assay addressing six corticosteroids (cortisol, cortisone, corticosterone, 11-deoxycortisol, 11-deoxycorticosterone, and 17-OH-progesterone) was chosen. The integration of the Isotope Inversion Experiment into the validation protocol for the steroid assay was successfully realized. The accuracy results of the inverse quality controls were highly satisfactory. As a consequence, the suitability of a surrogate matrix calibration for the quantification of the targeted steroids in human serum as the authentic matrix could be successfully demonstrated. The Isotope Inversion Experiment fills a gap in the validation process for LC-MS/MS assays quantifying endogenous analytes. We consider it a valuable and convenient tool to evaluate the correct quantification of authentic matrix samples based on a calibration curve in surrogate matrix. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Generalized non-equilibrium vertex correction method in coherent medium theory for quantum transport simulation of disordered nanoelectronics

    NASA Astrophysics Data System (ADS)

    Yan, Jiawei; Ke, Youqi

    In realistic nanoelectronics, disordered impurities/defects are inevitable and play important roles in electron transport. However, due to the lack of an effective quantum transport method, the important effects of disorder remain poorly understood. Here, we report a generalized non-equilibrium vertex correction (NVC) method with the coherent potential approximation to treat disorder effects in quantum transport simulation. With this generalized NVC method, any averaged product of two single-particle Green's functions can be obtained by solving a set of simple linear equations. As a result, the averaged non-equilibrium density matrix and various important transport properties, including the averaged current, the disorder-induced current fluctuation and the averaged shot noise, can all be efficiently computed in a unified scheme. Moreover, a generalized form of the conditionally averaged non-equilibrium Green's function is derived for incorporation with density functional theory to enable first-principles simulation. We prove that the non-equilibrium coherent potential equals the non-equilibrium vertex correction. Our approach provides a unified, efficient and self-consistent method for simulating non-equilibrium quantum transport through disordered nanoelectronics. (Funding: ShanghaiTech start-up fund.)

  12. Broad-band Lg Attenuation Tomography in Eastern Eurasia: Resolution, Uncertainty and Data Prediction

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Xu, X.

    2017-12-01

    The broad-band Lg 1/Q tomographic models in eastern Eurasia are inverted from source- and site-corrected path 1/Q data. The path 1/Q values are measured between stations (or events) by the two-station (TS), reverse two-station (RTS) and reverse two-event (RTE) methods, respectively. Because path 1/Q values are computed using the logarithm of the product of observed spectral ratios and a simplified 1D geometrical spreading correction, they are subject to "modeling errors" dominated by uncompensated 3D structural effects. We found in Chen and Xie [2017] that these errors closely follow a normal distribution after the long-tailed outliers are screened out (similar to teleseismic travel-time residuals). We thus rigorously analyze the statistics of these errors, collected from repeated samplings of station (and event) pairs from 1.0 to 10.0 Hz, and reject about 15% of the data as outliers at each frequency band. The resultant variance of the path 1/Q data decreases with frequency as 1/f². The 1/Q tomography using screened data is then a stochastic inverse problem whose solutions approximate the means of Gaussian random variables, and the model covariance matrix is that of Gaussian variables with well-known statistical behavior. We adopt a new SVD-based tomographic method to solve for the 2D Q image together with its resolution and covariance matrices. The RTS and RTE methods yield the most reliable 1/Q data, free of source and site effects, but their path coverage is rather sparse due to the very strict recording geometry required. The TS method absorbs the effects of non-unit site response ratios into the 1/Q data. The RTS method also yields site responses, which can then be corrected from the path 1/Q of the TS method to make those data also free of site effects. The site-corrected TS data substantially improve path coverage, allowing us to solve for the 1/Q tomography up to 6.0 Hz. The model resolution and uncertainty are, for the first time, quantitatively assessed by spread functions (from the resolution matrix) and the covariance matrix. The reliably retrieved Q models correlate well with the distinct tectonic blocks shaped by the most recent major deformations, and vary with frequency. With the 1/Q tomographic model and its covariance matrix, we can formally estimate the uncertainty of any path-specific Lg 1/Q prediction. This new capability significantly benefits source estimation, for which a reliable uncertainty estimate is especially important.

  13. A comparison of five partial volume correction methods for Tau and Amyloid PET imaging with [18F]THK5351 and [11C]PIB.

    PubMed

    Shidahara, Miho; Thomas, Benjamin A; Okamura, Nobuyuki; Ibaraki, Masanobu; Matsubara, Keisuke; Oyama, Senri; Ishikawa, Yoichi; Watanuki, Shoichi; Iwata, Ren; Furumoto, Shozo; Tashiro, Manabu; Yanai, Kazuhiko; Gonda, Kohsuke; Watabe, Hiroshi

    2017-08-01

    Many algorithms have been proposed to suppress the partial volume effect (PVE) in brain PET. However, each method has different properties owing to its assumptions and algorithms. The aim of this study was to investigate the differences among partial volume correction (PVC) methods for tau and amyloid PET studies. We investigated two of the most commonly used PVC methods, Müller-Gärtner (MG) and the geometric transfer matrix (GTM), as well as three other methods, for clinical tau and amyloid PET imaging. PET studies of one healthy control (HC) and one Alzheimer's disease (AD) patient were performed with both [18F]THK5351 and [11C]PIB using an Eminence STARGATE scanner (Shimadzu Inc., Kyoto, Japan). All PET images were corrected for PVE by the MG, GTM, Labbé (LABBE), regional voxel-based (RBV), and iterative Yang (IY) methods, with segmented or parcellated anatomical information processed by FreeSurfer and derived from individual MR images. The PVC results of the five algorithms were compared with the uncorrected data. In regions of high uptake of [18F]THK5351 and [11C]PIB, the different PVCs yielded different SUVRs. The degree of difference between PVE-uncorrected and corrected data depends not only on the PVC algorithm but also on the type of tracer and the subject's condition. The presented PVC methods are straightforward to implement, but the corrected images require careful interpretation, as different methods result in different levels of recovery.
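    A toy one-dimensional sketch of the GTM idea compared here (illustrative only; real GTM uses a 3D scanner PSF and anatomical ROIs): observed ROI means are a known PSF mixing of the true ROI values, so the true values follow from a small linear solve.

    ```python
    import numpy as np

    # Toy 1D geometric transfer matrix (GTM) sketch. The PSF, ROI layout,
    # and tracer values are all illustrative assumptions.

    def smooth(x, sigma=3.0):
        """1D Gaussian smoothing standing in for the PET point-spread
        function."""
        r = int(4 * sigma)
        k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
        k /= k.sum()
        return np.convolve(x, k, mode="same")

    n = 120
    masks = [np.zeros(n) for _ in range(3)]
    masks[0][10:40], masks[1][40:80], masks[2][80:110] = 1, 1, 1
    true_vals = np.array([2.0, 0.5, 1.5])

    # Simulated PVE-blurred image and its observed ROI means.
    img = smooth(sum(v * m for v, m in zip(true_vals, masks)))
    observed = np.array([img[m > 0].mean() for m in masks])

    # GTM: W[i, j] = mean of smoothed mask j over ROI i; solve W x = observed.
    W = np.array([[smooth(mj)[mi > 0].mean() for mj in masks] for mi in masks])
    recovered = np.linalg.solve(W, observed)
    ```

    Because smoothing is linear, the mixing matrix `W` describes the simulated blur exactly here; with real data, segmentation and PSF errors make the recovery approximate, which is one reason the different PVC methods disagree.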

  14. Numerical stability analysis of two-dimensional solute transport along a discrete fracture in a porous rock matrix

    NASA Astrophysics Data System (ADS)

    Watanabe, Norihiro; Kolditz, Olaf

    2015-07-01

    This work reports numerical stability conditions for two-dimensional solute transport simulations that include discrete fractures surrounded by an impermeable rock matrix. We use an advective-dispersive problem described in Tang et al. (1981) and examine the stability of the Crank-Nicolson Galerkin finite element method (CN-GFEM). The stability conditions are analyzed in terms of the spatial discretization length perpendicular to the fracture, the flow velocity, the diffusion coefficient, the matrix porosity, the fracture aperture, and the fracture longitudinal dispersivity. In addition, we verify the applicability of the recently developed finite element method-flux-corrected transport (FEM-FCT) method of Kuzmin to suppress oscillations in the hybrid system, with a comparison to the commonly utilized Streamline Upwinding/Petrov-Galerkin (SUPG) method. The major findings of this study are (1) the mesh von Neumann number (Fo) ≥ 0.373 must be satisfied to avoid undershooting in the matrix, (2) in addition to an upper bound, the Courant number also has a lower bound in the fracture in cases of low dispersivity, and (3) the FEM-FCT method can effectively suppress the oscillations in both the fracture and the matrix. The results imply that, in cases of low dispersivity, pre-refinement of a numerical mesh is not sufficient to avoid the instability in the hybrid system if a problem involves evolving flow fields and dynamic material parameters. Applying the FEM-FCT method to such problems is recommended if negative concentrations cannot be tolerated and computing time is not a strong issue.
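    The two dimensionless numbers the stability conditions are phrased in are simple to compute; a sketch with illustrative parameter values (only the 0.373 bound comes from the abstract):

    ```python
    # Helper sketch for the two dimensionless numbers in the stability
    # analysis: the mesh von Neumann (Fourier) number for diffusion into the
    # matrix and the Courant number for advection along the fracture.
    # Parameter values below are illustrative assumptions.

    def von_neumann_number(D, dt, dx):
        """Fo = D * dt / dx**2 (diffusion perpendicular to the fracture)."""
        return D * dt / dx ** 2

    def courant_number(v, dt, dx):
        """Cr = v * dt / dx (advection along the fracture)."""
        return v * dt / dx

    # Example: effective diffusion 1e-10 m^2/s, a 1-day time step, and 5-mm
    # matrix cells next to the fracture.
    Fo = von_neumann_number(1e-10, 86400.0, 5e-3)

    # Per the reported bound, Fo < 0.373 risks undershooting in the matrix,
    # so this discretization would need a coarser dx or a larger dt.
    undershoot_risk = Fo < 0.373
    ```

    Note the unusual direction of the bound: it is a lower bound on Fo, so refining the matrix mesh without shrinking the time step does not remove the risk, consistent with the abstract's conclusion about pre-refinement.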

  15. Comparison of matrix method and ray tracing in the study of complex optical systems

    NASA Astrophysics Data System (ADS)

    Anterrieu, Eric; Perez, Jose-Philippe

    2000-06-01

    In the context of the classical study of optical systems within the geometrical Gauss approximation, the cardinal elements are efficiently obtained with the aid of the transfer matrix between the input and output planes of the system. In order to take geometrical aberrations into account, a ray-tracing approach using the Snell-Descartes laws has been implemented in interactive software. Both methods are applied to determine the correction required for a human eye suffering from ametropia. This software may be used by optometrists and ophthalmologists for solving the problems encountered when considering this pathology. The ray-tracing approach gives a significant improvement and could be very helpful for a better understanding of a possible surgical procedure.
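The transfer-matrix side of the comparison can be sketched with standard 2×2 paraxial ray-transfer (ABCD) matrices, from which cardinal elements such as the effective focal length follow. The two-lens system below is a made-up example, not the paper's eye model:

```python
import numpy as np

# Paraxial (Gauss) transfer-matrix sketch: a centred system reduces to a
# 2x2 ABCD matrix; the effective focal length is -1/C.

def thin_lens(f):
    """Ray-transfer matrix of a thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def gap(d):
    """Ray-transfer matrix of free propagation over distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

# Two hypothetical thin lenses (f1 = 100 mm, f2 = 50 mm) separated by 20 mm.
f1, f2, d = 100.0, 50.0, 20.0
M = thin_lens(f2) @ gap(d) @ thin_lens(f1)   # input-to-output plane matrix

efl = -1.0 / M[1, 0]                          # effective focal length
# Closed-form check: 1/f = 1/f1 + 1/f2 - d/(f1*f2)
efl_formula = 1.0 / (1.0 / f1 + 1.0 / f2 - d / (f1 * f2))
```

The matrix product is written right-to-left, in the order the ray traverses the elements; the determinant of M stays 1 when object and image spaces share the same refractive index.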

  16. Density matrix renormalization group for a highly degenerate quantum system: Sliding environment block approach

    NASA Astrophysics Data System (ADS)

    Schmitteckert, Peter

    2018-04-01

    We present an infinite lattice density matrix renormalization group sweeping procedure which can be used as a replacement for the standard infinite lattice blocking schemes. Although the scheme is generally applicable to any system, its main advantages are the correct representation of commensurability issues and the treatment of degenerate systems. As an example we apply the method to a spin chain featuring a highly degenerate ground-state space where the new sweeping scheme provides an increase in performance as well as accuracy by many orders of magnitude compared to a recently published work.

  17. Role of vertex corrections in the matrix formulation of the random phase approximation for the multiorbital Hubbard model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altmeyer, Michaela; Guterding, Daniel; Hirschfeld, P. J.

    2016-12-21

    In the framework of a multiorbital Hubbard model description of superconductivity, a matrix formulation of the superconducting pairing interaction that has been widely used is designed to treat spin, charge, and orbital fluctuations within a random phase approximation (RPA). In terms of Feynman diagrams, this takes into account particle-hole ladder and bubble contributions as expected. It turns out, however, that this matrix formulation also generates additional terms which have the diagrammatic structure of vertex corrections. Furthermore we examine these terms and discuss the relationship between the matrix-RPA superconducting pairing interaction and the Feynman diagrams that it sums.

  18. Tracking Multiple Video Targets with an Improved GM-PHD Tracker

    PubMed Central

    Zhou, Xiaolong; Yu, Hui; Liu, Honghai; Li, Youfu

    2015-01-01

    Tracking multiple moving targets from a video plays an important role in many vision-based robotic applications. In this paper, we propose an improved Gaussian mixture probability hypothesis density (GM-PHD) tracker with weight penalization to effectively and accurately track multiple moving targets from a video. First, an entropy-based birth intensity estimation method is incorporated to eliminate the false positives caused by noisy video data. Then, a weight-penalized method with multi-feature fusion is proposed to accurately track targets in close movement. For targets without occlusion, a weight matrix that contains all updated weights between the predicted target states and the measurements is constructed, and a simple but effective method based on the total weight and the predicted target state is proposed to search for ambiguous weights in the weight matrix. The ambiguous weights are then penalized according to the fused target features, which include spatial-colour appearance, histogram of oriented gradients and target area, and are further re-normalized to form a new weight matrix. With this new weight matrix, the tracker can correctly track targets in close movement without occlusion. For targets with occlusion, a robust game-theoretical method is used. Finally, experiments conducted on various video scenarios validate the effectiveness of the proposed penalization method and show the superior performance of our tracker over the state of the art. PMID:26633422
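The weight-penalization step can be caricatured with a tiny weight matrix. All numbers, the 0.2 threshold, and the similarity scores below are invented stand-ins for the paper's fused features (spatial-colour appearance, HOG, target area):

```python
import numpy as np

# Rows are predicted targets, columns are measurements. When one
# measurement is strongly claimed by more than one target (targets "in
# close movement"), the ambiguous weights are scaled by a hypothetical
# feature-similarity score and re-normalized.

weights = np.array([
    [0.48, 0.50, 0.01],
    [0.50, 0.48, 0.02],   # targets 0 and 1 both claim measurements 0 and 1
    [0.02, 0.02, 0.97],
])

similarity = np.array([   # hypothetical fused-feature similarity scores
    [0.9, 0.2, 0.1],
    [0.3, 0.8, 0.1],
    [0.1, 0.1, 0.9],
])

ambiguous = (weights > 0.2).sum(axis=0) > 1   # columns claimed by >1 target
penalized = weights.copy()
penalized[:, ambiguous] *= similarity[:, ambiguous]
penalized /= penalized.sum(axis=0, keepdims=True)  # re-normalize columns

best_before = weights.argmax(axis=0)   # raw weights swap targets 0 and 1
best_after = penalized.argmax(axis=0)  # features resolve the ambiguity
```

In this toy case the raw weights assign the two close targets to each other's measurements; the feature-weighted penalization flips the assignment back while the unambiguous third column is untouched.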

  19. Nonlinear Adjustment with or without Constraints, Applicable to Geodetic Models

    DTIC Science & Technology

    1989-03-01

    corrections are neglected, resulting in the familiar (linearized) observation equations. In matrix notation, the latter are expressed by V = AX + L ... where A is the design matrix, X = Xa - X0 is the column-vector of parametric corrections, V = La - Lb is the column-vector of residuals, and L = L0 - Lb is the ... X0 corresponds to the set ua of model-surface coordinates describing the initial point P0. The final set of parametric corrections, X, then
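The observation equations excerpted above (V = AX + L in the usual Gauss-Markov notation) lead, for equal weights, to the normal equations (AᵀA)X = -AᵀL. A minimal sketch with a hypothetical design matrix and misclosure vector:

```python
import numpy as np

# Linearized adjustment sketch: residuals V = A X + L, minimize V^T V.
# The design matrix and misclosure vector are invented numbers.

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])                  # 4 observations, 2 parameters
L = np.array([-0.1, -1.1, -1.9, -3.1])      # misclosure vector

# Normal equations: (A^T A) X = -A^T L
X = np.linalg.solve(A.T @ A, -A.T @ L)      # parametric corrections
V = A @ X + L                               # residuals
```

The residuals satisfy AᵀV = 0, the defining orthogonality property of the least-squares solution; weighted and constrained variants replace AᵀA with AᵀPA and append condition equations.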

  20. Identification of anaerobic bacteria by Bruker Biotyper matrix-assisted laser desorption ionization-time of flight mass spectrometry with on-plate formic acid preparation.

    PubMed

    Schmitt, Bryan H; Cunningham, Scott A; Dailey, Aaron L; Gustafson, Daniel R; Patel, Robin

    2013-03-01

    Identification of anaerobic bacteria using phenotypic methods is often time-consuming; methods such as 16S rRNA gene sequencing are costly and may not be readily available. We evaluated 253 clinical isolates of anaerobic bacteria using the Bruker MALDI Biotyper (Bruker Daltonics, Billerica, MA) matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) system with a user-supplemented database and an on-plate formic acid-based preparation method and compared results to those of conventional identification using biochemical testing or 16S rRNA gene sequencing. A total of 179 (70.8%) and 232 (91.7%) isolates were correctly identified to the species and genus levels, respectively, using manufacturer-recommended score cutoffs. MALDI-TOF MS offers a rapid, inexpensive method for identification of anaerobic bacteria.

  1. Bovine Acellular Dermal Matrix for Levator Lengthening in Thyroid-Related Upper-Eyelid Retraction.

    PubMed

    Sun, Jing; Liu, Xingtong; Zhang, Yidan; Huang, Yazhuo; Zhong, Sisi; Fang, Sijie; Zhuang, Ai; Li, Yinwei; Zhou, Huifang; Fan, Xianqun

    2018-05-02

    BACKGROUND Eyelid retraction is the most common and often the first sign of thyroid eye disease (TED). Upper-eyelid retraction causes both functional and cosmetic problems. In order to correct the position of the upper eyelid, surgery is required. Many procedures have demonstrated good outcomes in mild and moderate cases; however, unpredictable results have been obtained in severe cases. Dryden introduced an upper-eyelid-lengthening procedure, which used scleral grafts, but outcomes were unsatisfactory. A new technique is introduced in this study as a reasonable alternative for TED-related severe upper-eyelid retraction correction. MATERIAL AND METHODS An innovative technique for levator lengthening using bovine acellular dermal matrix as a spacer graft is introduced for severe upper-eyelid retraction secondary to TED. Additionally, 2 modifications were introduced: the fibrous cords scattered on the surface of the levator aponeurosis were excised and the orbital fat pad anterior to the aponeurosis was dissected and sutured into the skin closure in a "skin-tarsus-fat-skin" fashion. RESULTS The modified levator-lengthening surgery was performed on 32 eyelids in 26 patients consisting of 21 women and 5 men (mean age, 37.8 years; age range, 19-67 years). After corrective surgery, the average upper margin reflex distance was lowered from 7.7±0.85 mm to 3.3±0.43 mm. Eighteen cases (69%) had perfect results, while 6 cases (23%) had acceptable results. CONCLUSIONS A modified levator-lengthening procedure using bovine acellular dermal matrix as a spacer graft ameliorated both the symptoms and signs of severe upper-eyelid retraction secondary to TED. This procedure is a reasonable alternative for correction of TED-related severe upper-eyelid retraction.

  2. Bovine Acellular Dermal Matrix for Levator Lengthening in Thyroid-Related Upper-Eyelid Retraction

    PubMed Central

    Sun, Jing; Liu, Xingtong; Zhang, Yidan; Huang, Yazhuo; Zhong, Sisi; Fang, Sijie; Zhuang, Ai; Li, Yinwei; Zhou, Huifang

    2018-01-01

    Background Eyelid retraction is the most common and often the first sign of thyroid eye disease (TED). Upper-eyelid retraction causes both functional and cosmetic problems. In order to correct the position of the upper eyelid, surgery is required. Many procedures have demonstrated good outcomes in mild and moderate cases; however, unpredictable results have been obtained in severe cases. Dryden introduced an upper-eyelid-lengthening procedure, which used scleral grafts, but outcomes were unsatisfactory. A new technique is introduced in this study as a reasonable alternative for TED-related severe upper-eyelid retraction correction. Material/Methods An innovative technique for levator lengthening using bovine acellular dermal matrix as a spacer graft is introduced for severe upper-eyelid retraction secondary to TED. Additionally, 2 modifications were introduced: the fibrous cords scattered on the surface of the levator aponeurosis were excised and the orbital fat pad anterior to the aponeurosis was dissected and sutured into the skin closure in a “skin-tarsus-fat-skin” fashion. Results The modified levator-lengthening surgery was performed on 32 eyelids in 26 patients consisting of 21 women and 5 men (mean age, 37.8 years; age range, 19–67 years). After corrective surgery, the average upper margin reflex distance was lowered from 7.7±0.85 mm to 3.3±0.43 mm. Eighteen cases (69%) had perfect results, while 6 cases (23%) had acceptable results. Conclusions A modified levator-lengthening procedure using bovine acellular dermal matrix as a spacer graft ameliorated both the symptoms and signs of severe upper-eyelid retraction secondary to TED. This procedure is a reasonable alternative for correction of TED-related severe upper-eyelid retraction. PMID:29718902

  3. Biomechanically based simulation of brain deformations for intraoperative image correction: coupling of elastic and fluid models

    NASA Astrophysics Data System (ADS)

    Hagemann, Alexander; Rohr, Karl; Stiehl, H. Siegfried

    2000-06-01

    In order to improve the accuracy of image-guided neurosurgery, different biomechanical models have been developed to correct preoperative images with respect to intraoperative changes such as brain shift or tumor resection. All existing biomechanical models simulate different anatomical structures by using either appropriate boundary conditions or spatially varying material parameter values, while assuming the same physical model for all anatomical structures. In general, this leads to physically implausible results, especially in the case of adjacent elastic and fluid structures. Therefore, we propose a new approach that allows different physical models to be coupled. In our case, we simulate rigid, elastic, and fluid regions by using the appropriate physical description for each material, namely either the Navier equation or the Stokes equation. To solve the resulting differential equations, we derive a linear matrix system for each region by applying the finite element method (FEM). Thereafter, the linear matrix systems are linked together, yielding one overall linear matrix system. Our approach has been tested on synthetic as well as tomographic images. The experiments show that the integrated treatment of rigid, elastic, and fluid regions significantly improves the prediction results in comparison to a purely linear elastic model.

  4. A positional misalignment correction method for Fourier ptychographic microscopy based on simulated annealing

    NASA Astrophysics Data System (ADS)

    Sun, Jiasong; Zhang, Yuzhen; Chen, Qian; Zuo, Chao

    2017-02-01

    Fourier ptychographic microscopy (FPM) is a newly developed super-resolution technique that employs angularly varying illuminations and a phase retrieval algorithm to surpass the diffraction limit of a low-numerical-aperture (NA) objective lens. In current FPM imaging platforms, accurate knowledge of the LED matrix's position is critical to achieving good recovery quality. Furthermore, given the wide field-of-view (FOV) in FPM, different regions in the FOV have different sensitivities to LED positional misalignment. In this work, we introduce an iterative method to correct position errors based on the simulated annealing (SA) algorithm. To improve the efficiency of the correction process, a large number of iterations for several images with low illumination NAs are first performed to estimate the initial values of the global positional misalignment model through non-linear regression. Simulation and experimental results are presented to evaluate the performance of the proposed method, demonstrating that it can both improve the quality of the recovered object image and relax the position-accuracy requirement for the LED elements when aligning FPM imaging platforms.
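The simulated-annealing search itself is generic and easy to sketch: perturb the current misalignment estimate, always keep improvements, and keep deteriorations with a temperature-dependent probability. The quadratic "recovery error" below is a stand-in for the FPM image-quality metric actually optimized, and all numbers are hypothetical:

```python
import math
import random

# Toy SA search for a (dx, dy) LED-array misalignment.

def recovery_error(pos, true_pos=(0.31, -0.18)):
    """Stand-in cost: distance^2 to a hypothetical true misalignment."""
    return (pos[0] - true_pos[0])**2 + (pos[1] - true_pos[1])**2

def anneal(cost, start, t0=1.0, cooling=0.95, steps=2000, seed=7):
    rng = random.Random(seed)
    pos, e, t = start, cost(start), t0
    for _ in range(steps):
        cand = (pos[0] + rng.gauss(0, 0.05), pos[1] + rng.gauss(0, 0.05))
        e_cand = cost(cand)
        # Accept improvements always, deteriorations with Boltzmann probability.
        if e_cand < e or rng.random() < math.exp(-(e_cand - e) / t):
            pos, e = cand, e_cand
        t *= cooling          # geometric cooling schedule
    return pos, e

best, err = anneal(recovery_error, start=(0.0, 0.0))
```

The early high-temperature phase corresponds loosely to the coarse global estimate described in the abstract; once the temperature is low, the search behaves like a local hill-climber around the misalignment model's initial values.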

  5. Downscaling RCP8.5 daily temperatures and precipitation in Ontario using localized ensemble optimal interpolation (EnOI) and bias correction

    NASA Astrophysics Data System (ADS)

    Deng, Ziwang; Liu, Jinliang; Qiu, Xin; Zhou, Xiaolan; Zhu, Huaiping

    2017-10-01

    A novel method for daily temperature and precipitation downscaling is proposed in this study, combining ensemble optimal interpolation (EnOI) and bias correction techniques. For downscaling temperature, the day-to-day seasonal cycle of the high-resolution temperature of the NCEP Climate Forecast System Reanalysis (CFSR) is used as the background state. An enlarged ensemble of daily temperature anomalies relative to this seasonal cycle, together with information from global climate models (GCMs), is used to construct a gain matrix for each calendar day, so the relationship between large-scale and local-scale processes represented by the gain matrix changes accordingly. The gain matrix contains information on the realistic spatial correlation of temperature between different CFSR grid points, between CFSR and GCM grid points, and between different GCM grid points. This downscaling method therefore maintains spatial consistency and reflects the interaction between local geographic and atmospheric conditions. Maximum and minimum temperatures are downscaled with the same method. For precipitation, because of non-Gaussianity, a logarithmic transformation is applied to daily total precipitation prior to downscaling. Cross-validation and independent-data validation are used to evaluate the algorithm. Finally, data from a 29-member ensemble of phase 5 of the Coupled Model Intercomparison Project (CMIP5) GCMs are downscaled to CFSR grid points in Ontario for the period 1981 to 2100. The results show that this method is capable of generating high-resolution details without changing large-scale characteristics, and it yields much lower absolute errors in local-scale details at most grid points than simple spatial downscaling methods. Biases in the downscaled data inherited from the GCMs are corrected with a linear method for temperatures and with distribution mapping for precipitation.
The downscaled ensemble projects significant warming in Ontario, with amplitudes of 3.9 and 6.5 °C for the 2050s and 2080s relative to the 1990s, respectively. Cooling degree days and hot days will significantly increase over southern Ontario, while heating degree days and cold days will significantly decrease in northern Ontario. Annual total precipitation will increase over Ontario, and heavy precipitation events will increase as well. These results are consistent with the conclusions of many other studies in the literature.
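The EnOI update at the heart of this scheme can be sketched compactly: a static ensemble of fine-grid anomalies supplies the background covariance B, and the gain K = BHᵀ(HBHᵀ + R)⁻¹ spreads coarse (GCM-like) increments onto the fine grid while preserving the ensemble's spatial correlations. Grid sizes, the ensemble model, H, and R below are all toy choices, not the study's configuration:

```python
import numpy as np

# Minimal EnOI gain-matrix sketch on a 12-point fine grid observed
# through 3 coarse cells.

rng = np.random.default_rng(0)
n_fine, n_coarse, n_ens = 12, 3, 40

x = np.linspace(0.0, 1.0, n_fine)
ens = np.array([
    rng.normal(1.0, 0.3) * np.sin(2 * np.pi * (x + rng.uniform()))
    + 0.5 * rng.normal(1.0, 0.3) * np.sin(4 * np.pi * (x + rng.uniform()))
    for _ in range(n_ens)
]).T                                          # (n_fine, n_ens) anomalies
ens -= ens.mean(axis=1, keepdims=True)

B = ens @ ens.T / (n_ens - 1)                 # static background covariance
H = np.zeros((n_coarse, n_fine))              # coarse cell = mean of 4 fine cells
for i in range(n_coarse):
    H[i, 4 * i:4 * (i + 1)] = 0.25

R = 0.01 * np.eye(n_coarse)                   # observation-error covariance
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain matrix

background = np.zeros(n_fine)
obs = np.array([0.5, -0.2, 0.1])              # hypothetical coarse increments
analysis = background + K @ (obs - H @ background)
```

Because B is built from spatially correlated anomalies, each coarse increment is spread smoothly over its neighbourhood rather than painted uniformly over the cell, which is the "high-resolution detail" property described in the abstract.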

  6. Comparison of phenotypic methods and matrix-assisted laser desorption ionisation time-of-flight mass spectrometry for the identification of aero-tolerant Actinomyces spp. isolated from soft-tissue infections.

    PubMed

    Ng, L S Y; Sim, J H C; Eng, L C; Menon, S; Tan, T Y

    2012-08-01

    Aero-tolerant Actinomyces spp. are an under-recognised cause of cutaneous infections, in part because identification using conventional phenotypic methods is difficult and may be inaccurate. Matrix-assisted laser desorption ionisation time-of-flight mass spectrometry (MALDI-TOF MS) is a promising new technique for bacterial identification, but with limited data on the identification of aero-tolerant Actinomyces spp. This study evaluated the accuracy of a phenotypic biochemical kit, MALDI-TOF MS and genotypic identification methods for the identification of this problematic group of organisms. Thirty aero-tolerant Actinomyces spp. were isolated from soft-tissue infections over a 2-year period. Species identification was performed by 16S rRNA sequencing, and genotypic results were compared with results obtained by API Coryne and MALDI-TOF MS. There was poor agreement between API Coryne and genotypic identification, with only 33% of isolates correctly identified to the species level. MALDI-TOF MS correctly identified 97% of isolates to the species level, with 33% of identifications achieved with high confidence scores. MALDI-TOF MS is a promising new tool for the identification of aero-tolerant Actinomyces spp., but improvement of the database is required in order to increase the confidence level of identification.

  7. Evaluation of VITEK mass spectrometry (MS), a matrix-assisted laser desorption ionization time-of-flight MS system for identification of anaerobic bacteria.

    PubMed

    Lee, Wonmok; Kim, Myungsook; Yong, Dongeun; Jeong, Seok Hoon; Lee, Kyungwon; Chong, Yunsop

    2015-01-01

    By conventional methods, the identification of anaerobic bacteria is more time consuming and requires more expertise than the identification of aerobic bacteria. Although the matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) systems are relatively less studied, they have been reported to be a promising method for the identification of anaerobes. We evaluated the performance of the VITEK MS in vitro diagnostic (IVD; 1.1 database; bioMérieux, France) in the identification of anaerobes. We used 274 anaerobic bacteria isolated from various clinical specimens. The results for the identification of the bacteria by VITEK MS were compared to those obtained by phenotypic methods and 16S rRNA gene sequencing. Among the 249 isolates included in the IVD database, the VITEK MS correctly identified 209 (83.9%) isolates to the species level and an additional 18 (7.2%) at the genus level. In particular, the VITEK MS correctly identified clinically relevant and frequently isolated anaerobic bacteria to the species level. The remaining 22 isolates (8.8%) were either not identified or misidentified. The VITEK MS could not identify the 25 isolates absent from the IVD database to the species level. The VITEK MS showed reliable identifications for clinically relevant anaerobic bacteria.

  8. A note on variance estimation in random effects meta-regression.

    PubMed

    Sidik, Kurex; Jonkman, Jeffrey N

    2005-01-01

    For random effects meta-regression inference, variance estimation for the parameter estimates is discussed. Because estimated weights are used for meta-regression analysis in practice, the assumed or estimated covariance matrix used in meta-regression is not strictly correct, due to possible errors in estimating the weights. Therefore, this note investigates the use of a robust variance estimation approach for obtaining variances of the parameter estimates in random effects meta-regression inference. This method treats the assumed covariance matrix of the effect measure variables as a working covariance matrix. Using an example of meta-analysis data from clinical trials of a vaccine, the robust variance estimation approach is illustrated in comparison with two other methods of variance estimation. A simulation study is presented, comparing the three methods of variance estimation in terms of bias and coverage probability. We find that, despite the seeming suitability of the robust estimator for random effects meta-regression, the improved variance estimator of Knapp and Hartung (2003) yields the best performance among the three estimators, and thus may provide the best protection against errors in the estimated weights.
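The "working covariance" robust variance described above can be sketched as a standard sandwich estimator around a weighted meta-regression fit. The effect sizes, covariate, and within-study variances below are invented, and this simplified estimator stands in for the variants compared in the note:

```python
import numpy as np

# Weighted meta-regression with possibly misspecified estimated weights,
# followed by a robust sandwich variance that treats the assumed
# covariance matrix only as a working covariance.

y = np.array([0.30, 0.45, 0.20, 0.60, 0.52, 0.15])    # study effect estimates
x = np.array([1.0, 2.0, 0.5, 3.0, 2.5, 0.2])           # study-level covariate
v = np.array([0.02, 0.05, 0.01, 0.08, 0.04, 0.015])    # estimated variances

X = np.column_stack([np.ones_like(x), x])
W = np.diag(1.0 / v)                                   # working (assumed) weights

bread = np.linalg.inv(X.T @ W @ X)
beta = bread @ X.T @ W @ y                             # WLS estimate
r = y - X @ beta                                       # residuals

var_model = bread                                      # trusts the weights
meat = X.T @ W @ np.diag(r**2) @ W @ X
var_robust = bread @ meat @ bread                      # sandwich variance

se_model = np.sqrt(np.diag(var_model))
se_robust = np.sqrt(np.diag(var_robust))
```

The point of the sandwich form is that `var_robust` remains consistent even when W is wrong; the note's finding is that with few studies the Knapp-Hartung adjustment nonetheless outperforms it.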

  9. An analytical X-ray CdTe detector response matrix for incomplete charge collection correction for photon energies up to 300 keV

    NASA Astrophysics Data System (ADS)

    Kurková, Dana; Judas, Libor

    2018-05-01

    Gamma and X-ray energy spectra measured with semiconductor detectors suffer from various distortions, one of them being so-called "tailing" caused by an incomplete charge collection. Using the Hecht equation, a response matrix of size 321 × 321 was constructed which was used to correct the effect of incomplete charge collection. The correction matrix was constructed analytically for an arbitrary energy bin and the size of the energy bin thus defines the width of the spectral window. The correction matrix can be applied separately from other possible spectral corrections or it can be incorporated into an already existing response matrix of the detector. The correction was tested and its adjustable parameters were optimized on the line spectra of 57Co measured with a cadmium telluride (CdTe) detector in a spectral range from 0 up to 160 keV. The best results were obtained when the values of the free path of holes were spread over a range from 0.4 to 1.0 cm and weighted by a Gauss function. The model with the optimized parameter values was then used to correct the line spectra of 152Eu in a spectral range from 0 up to 530 keV. An improvement in the energy resolution at full width at half maximum from 2.40 % ± 0.28 % to 0.96 % ± 0.28 % was achieved at 344.27 keV. Spectra of "narrow spectrum series" beams, N120, N150, N200, N250 and N300, generated with tube voltages of 120 kV, 150 kV, 200 kV, 250 kV and 300 kV respectively, and measured with the CdTe detector, were corrected in the spectral range from 0 to 160 keV (N120 and N150) and from 0 to 530 keV (N200, N250, N300). All the measured spectra correspond both qualitatively and quantitatively to the available reference data after the correction. To obtain better correspondence between N150, N200, N250 and N300 spectra and the reference data, lower values of the free paths of holes (range from 0.16 to 0.65 cm) were used for X-ray spectra correction, which suggests energy dependence of the phenomenon.
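How the Hecht equation generates a "tailing" response column can be sketched directly: an interaction at depth z in a planar detector is registered at a fraction η(z) of the true energy, so a monoenergetic line smears into a low-energy tail. The detector thickness and carrier drift lengths below are hypothetical, not the optimized parameters of the paper:

```python
import math

# Build one column of an incomplete-charge-collection response matrix
# from the Hecht charge-collection efficiency.

L = 0.1            # cm, detector thickness (hypothetical)
lam_e = 2.0        # cm, electron mean free path (hypothetical)
lam_h = 0.5        # cm, hole mean free path (hypothetical)

def hecht_cce(z):
    """Charge collection efficiency for an interaction at depth z
    (z = 0 at the cathode, electrons drift toward the anode at z = L)."""
    e_term = (lam_e / L) * (1.0 - math.exp(-(L - z) / lam_e))
    h_term = (lam_h / L) * (1.0 - math.exp(-z / lam_h))
    return e_term + h_term

# Response column for a 122 keV line: histogram E0*eta(z) over uniformly
# distributed interaction depths into 1 keV bins.
E0 = 122.0
n_depth, n_bins = 2000, 160
column = [0.0] * n_bins
for i in range(n_depth):
    z = (i + 0.5) * L / n_depth
    e_meas = E0 * hecht_cce(z)
    column[min(int(e_meas), n_bins - 1)] += 1.0 / n_depth
```

Stacking such columns for every energy bin gives a lower-triangular-like response matrix; correction then amounts to inverting (or deconvolving) it, and spreading the hole free path over a weighted range, as the paper does, smears each column further.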

  10. Injection molding lens metrology using software configurable optical test system

    NASA Astrophysics Data System (ADS)

    Zhan, Cheng; Cheng, Dewen; Wang, Shanshan; Wang, Yongtian

    2016-10-01

    Optical plastic lenses produced by injection molding machines possess numerous advantages: light weight, impact resistance, low cost, etc. The measurement methods used in the optical shop are mainly interferometry and profilometry; however, these instruments are not only expensive but also difficult to align. The software configurable optical test system (SCOTS) is based on the geometry of fringe reflection and the phase measuring deflectometry (PMD) method, and can be used to measure large-diameter mirrors and aspheric and freeform surfaces rapidly, robustly, and accurately. In addition to the conventional phase-shifting method, we propose another data collection method called dots matrix projection. We also use Zernike polynomials to correct the camera distortion. This polynomial-fitting distortion-mapping method is not only simple to operate but also offers high conversion precision. We simulate this test system measuring a concave surface using CODE V and MATLAB. The simulation results show that the dots matrix projection method has high accuracy and that SCOTS has important significance for on-line inspection in the optical shop.

  11. [The uncertainty evaluation of analytical results of 27 elements in geological samples by X-ray fluorescence spectrometry].

    PubMed

    Wang, Yi-Ya; Zhan, Xiu-Chun

    2014-04-01

    The uncertainty of analytical results for 165 geological samples measured by polarized energy-dispersive X-ray fluorescence spectrometry (P-EDXRF) was evaluated according to internationally accepted guidelines. One hundred sixty-five pressed pellets of geological samples with similar matrices and reliable reference values were analyzed by P-EDXRF. The samples were divided into several concentration sections within the concentration range of each component, and the relative uncertainties caused by precision and by accuracy were evaluated for 27 components. For one element, the relative uncertainty caused by precision was calculated as the average relative standard deviation over the concentration levels within one concentration section, with n = 6 results per concentration level. The relative uncertainty caused by accuracy in one concentration section was evaluated as the relative standard deviation of the relative deviations at the different concentration levels in that section. Following error propagation theory, the precision and accuracy uncertainties were combined into a global uncertainty, which serves as the method uncertainty. This model of evaluating uncertainty resolves a series of difficulties in the uncertainty evaluation process, such as uncertainties caused by the complex matrix of geological samples, the calibration procedure, standard samples, unknown samples, matrix correction, overlap correction, sample preparation, instrument condition, and the mathematical model. The uncertainty obtained for this method can serve as the uncertainty of results for unknown samples of similar matrix in the same concentration section. This evaluation model is a basic statistical method of practical value, which can provide a strong basis for building subsequent uncertainty evaluation functions.
However, this model requires a large number of samples and cannot simply be applied to sample types with different matrices. We will use this study as a basis to establish a reasonable mathematical-statistical function model applicable to different types of samples.
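The combination rule described above is a simple quadrature sum and can be sketched numerically; the replicate RSDs and relative deviations below are invented, not data from the study:

```python
import math
import statistics

# Within one concentration section: the precision component is the mean
# relative standard deviation (RSD) of replicate measurements, the
# accuracy component is the RSD of the relative deviations from reference
# values, and the two are combined in quadrature (error propagation).

# Hypothetical replicate RSDs (n = 6 each) for samples in one section, %:
replicate_rsds = [1.2, 0.9, 1.5, 1.1]
u_precision = statistics.mean(replicate_rsds)

# Hypothetical relative deviations (measured vs. reference), %:
rel_dev = [0.8, -1.0, 1.9, -0.3, 0.6]
u_accuracy = statistics.stdev(rel_dev)

# Global (method) uncertainty for this concentration section, %:
u_method = math.sqrt(u_precision**2 + u_accuracy**2)
```

The quadrature sum assumes the precision and accuracy components are independent; a coverage factor (typically k = 2) would be applied afterwards to report an expanded uncertainty.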

  12. Associative Flow Rule Used to Include Hydrostatic Stress Effects in Analysis of Strain-Rate-Dependent Deformation of Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Roberts, Gary D.

    2004-01-01

    designing reliable composite engine cases that are lighter than the metal cases in current use. The types of polymer matrix composites that are likely to be used in such an application have a deformation response that is nonlinear and that varies with strain rate. The nonlinearity and the strain-rate dependence of the composite response are due primarily to the matrix constituent. Therefore, in developing material models to be used in the design of impact-resistant composite engine cases, the deformation of the polymer matrix must be correctly analyzed. However, unlike in metals, the nonlinear response of polymers depends on the hydrostatic stresses, which must be accounted for within an analytical model. By applying micromechanics techniques along with given fiber properties, one can also determine the effects of the hydrostatic stresses in the polymer on the overall composite deformation response. First efforts to account for the hydrostatic stress effects in the composite deformation applied purely empirical methods that relied on composite-level data. In later efforts, to allow polymer properties to be characterized solely on the basis of polymer data, researchers at the NASA Glenn Research Center developed equations to model the polymers that were based on a non-associative flow rule, and efforts to use these equations to simulate the deformation of representative polymer materials were reasonably successful. However, these equations were found to have difficulty in correctly analyzing the multiaxial stress states found in the polymer matrix constituent of a composite material. To correct these difficulties, and to allow for the accurate simulation of the nonlinear strain-rate-dependent deformation analysis of polymer matrix composites, in the efforts reported here Glenn researchers reformulated the polymer constitutive equations from basic principles using the concept of an associative flow rule. 
These revised equations were characterized and validated in an experimental program carried out through a university grant with the Ohio State University, wherein tensile and shear deformation data were obtained for a representative polymer at strain rates ranging from quasi-static to high rates of several hundred per second. Tensile deformation data were also obtained over a variety of strain rates and fiber orientation angles for a representative composite made with this polymer.

  13. A recurrence matrix solution for the dynamic response of aircraft in gusts

    NASA Technical Reports Server (NTRS)

    Houbolt, John C

    1951-01-01

    A systematic procedure developed for the calculation of the structural response of aircraft flying through a gust by use of difference equations in the solution of dynamic problems is first illustrated by means of a simple-damped-oscillator example. A detailed analysis is then given which leads to a recurrence matrix equation for the determination of the response of an airplane in a gust. The method takes into account wing bending and twisting deformations, fuselage deflection, vertical and pitching motion of the airplane, and some tail forces. The method is based on aerodynamic strip theory, but compressibility and three-dimensional aerodynamic effects can be taken into account approximately by means of over-all corrections. Either a sharp-edge gust or a gust of arbitrary shape in the spanwise or flight directions may be treated. In order to aid in the application of the method to any specific case, a suggested computational procedure is included. The possibilities of applying the method to a variety of transient aircraft problems, such as landing, are brought out. A brief review of matrix algebra, covering the extent to which it is used in the analysis, is also included. (author)
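The recurrence idea is easiest to see on the report's own introductory example, a damped oscillator m·x″ + c·x′ + k·x = F(t). The backward-difference formulas below are the standard Houbolt expressions; the parameters and the suddenly applied load are made-up numbers:

```python
# Houbolt backward differences:
#   x''_{n+1} ~ (2x_{n+1} - 5x_n + 4x_{n-1} - x_{n-2}) / dt^2
#   x'_{n+1}  ~ (11x_{n+1} - 18x_n + 9x_{n-1} - 2x_{n-2}) / (6 dt)
# Substituting into the ODE gives an explicit recurrence for x_{n+1}.

m, c, k = 1.0, 0.4, 25.0
dt = 0.005
F = 1.0                                   # suddenly applied ("sharp-edge") load

def step(xn, xnm1, xnm2):
    a = 2.0 * m / dt**2 + 11.0 * c / (6.0 * dt) + k
    b = (F + m * (5.0 * xn - 4.0 * xnm1 + xnm2) / dt**2
           + c * (18.0 * xn - 9.0 * xnm1 + 2.0 * xnm2) / (6.0 * dt))
    return b / a

# Starting values seeded with zeros: the system is initially at rest.
x = [0.0, 0.0, 0.0]
for _ in range(4000):
    x.append(step(x[-1], x[-2], x[-3]))

static = F / k          # long-time (static) deflection the response decays to
```

For the multi-degree-of-freedom aircraft problem, the scalars m, c, k become mass, damping, and stiffness matrices and each step solves a linear matrix system, which is the "recurrence matrix equation" of the title.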

  14. Efficiency of methods for Karl Fischer determination of water in oils based on oven evaporation and azeotropic distillation.

    PubMed

    Larsson, William; Jalbert, Jocelyn; Gilbert, Roland; Cedergren, Anders

    2003-03-15

    The efficiency of azeotropic distillation and oven evaporation techniques for trace determination of water in oils has recently been questioned by the National Institute of Standards and Technology (NIST), on the basis of measurements of the residual water found after the extraction step. The results were obtained by volumetric Karl Fischer (KF) titration in a medium containing a large excess of chloroform (≥65%), a proposed prerequisite to ensure complete release of water from the oil matrix. In this work, the extent of this residual water was studied by means of a direct zero-current potentiometric technique using a KF medium containing more than 80% chloroform, which is well above the concentration recommended by NIST. A procedure is described that makes it possible to correct the results for dilution errors as well as for chemical interference effects caused by the oil matrix. The corrected values were found to be in the range of 0.6-1.5 ppm, which should be compared with the 12-34 ppm (uncorrected values) reported by NIST for the same oils. From this, it is concluded that the volumetric KF method used by NIST gives results that are much too high.

  15. Expansion of the Scope of AOAC First Action Method 2012.25--Single-Laboratory Validation of Triphenylmethane Dye and Leuco Metabolite Analysis in Shrimp, Tilapia, Catfish, and Salmon by LC-MS/MS.

    PubMed

    Andersen, Wendy C; Casey, Christine R; Schneider, Marilyn J; Turnipseed, Sherri B

    2015-01-01

    Prior to conducting a collaborative study of AOAC First Action 2012.25 LC-MS/MS analytical method for the determination of residues of three triphenylmethane dyes (malachite green, crystal violet, and brilliant green) and their metabolites (leucomalachite green and leucocrystal violet) in seafood, a single-laboratory validation of method 2012.25 was performed to expand the scope of the method to other seafood matrixes including salmon, catfish, tilapia, and shrimp. The validation included the analysis of fortified and incurred residues over multiple weeks to assess analyte stability in matrix at -80°C, a comparison of calibration methods over the range 0.25 to 4 μg/kg, study of matrix effects for analyte quantification, and qualitative identification of targeted analytes. Method accuracy ranged from 88 to 112% with 13% RSD or less for samples fortified at 0.5, 1.0, and 2.0 μg/kg. Analyte identification and determination limits were determined by procedures recommended both by the U.S. Food and Drug Administration and the European Commission. Method detection limits and decision limits ranged from 0.05 to 0.24 μg/kg and 0.08 to 0.54 μg/kg, respectively. AOAC First Action Method 2012.25 with an extracted matrix calibration curve and internal standard correction is suitable for the determination of triphenylmethane dyes and leuco metabolites in salmon, catfish, tilapia, and shrimp by LC-MS/MS at a residue determination level of 0.5 μg/kg or below.

  16. Forecasts of non-Gaussian parameter spaces using Box-Cox transformations

    NASA Astrophysics Data System (ADS)

    Joachimi, B.; Taylor, A. N.

    2011-09-01

    Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, and the Fisher matrix calculation is performed on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, with marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between the matter density parameter and the normalization of matter density fluctuations is reproduced for several cases, and the capability of breaking this degeneracy with weak-lensing three-point statistics is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
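The Gaussianization step can be illustrated in one dimension: a skewed sample is made approximately normal by a maximum-likelihood Box-Cox transform, after which a Gaussian (Fisher-like) summary becomes appropriate. This is a hedged sketch with synthetic lognormal "posterior" samples, not the paper's cosmological application.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
samples = rng.lognormal(mean=0.0, sigma=0.5, size=20000)  # skewed "posterior"

# scipy.stats.boxcox picks the transform parameter lambda by maximum
# likelihood and returns the Gaussianized data.
transformed, lam = stats.boxcox(samples)

# Skewness should drop substantially after the transform.
skew_before = stats.skew(samples)
skew_after = stats.skew(transformed)
```

In the transformed space the mean and variance (or, in many dimensions, the Fisher matrix) capture the distribution well; mapping contours back through the inverse transform recovers the non-Gaussian posterior shape.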

  17. Species identification of clinical isolates of anaerobic bacteria: a comparison of two matrix-assisted laser desorption ionization-time of flight mass spectrometry systems.

    PubMed

    Justesen, Ulrik Stenz; Holm, Anette; Knudsen, Elisa; Andersen, Line Bisgaard; Jensen, Thøger Gorm; Kemp, Michael; Skov, Marianne Nielsine; Gahrn-Hansen, Bente; Møller, Jens Kjølseth

    2011-12-01

    We compared two matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) systems (Shimadzu/SARAMIS and Bruker) on a collection of consecutive clinically important anaerobic bacteria (n = 290). The Bruker system had more correct identifications to the species level (67.2% versus 49.0%), but also more incorrect identifications (7.9% versus 1.4%). The system databases need to be optimized to increase identification levels. However, MALDI-TOF MS in its present version seems to be a fast and inexpensive method for identification of most clinically important anaerobic bacteria.

  18. Covariance Matrix Adaptation Evolutionary Strategy for Drift Correction of Electronic Nose Data

    NASA Astrophysics Data System (ADS)

    Di Carlo, S.; Falasconi, M.; Sanchez, E.; Sberveglieri, G.; Scionti, A.; Squillero, G.; Tonda, A.

    2011-09-01

    Electronic Noses (ENs) might represent a simple, fast, high-sample-throughput and economical alternative to conventional analytical instruments [1]. However, gas sensor drift still limits EN adoption in real industrial setups due to high recalibration effort and cost [2]. In fact, pattern recognition (PaRC) models built in the training phase become useless after a period of time, in some cases a few weeks. Although algorithms to mitigate the drift date back to the early 1990s, this is still a challenging issue for the chemical sensor community [3]. Among other approaches, adaptive drift correction methods adjust the PaRC model in parallel with data acquisition without the need for periodic calibration. Self-Organizing Maps (SOMs) [4] and Adaptive Resonance Theory (ART) networks [5] have already been tested in the past with fair success. This paper presents and discusses an original methodology based on a Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [6], suited for stochastic optimization of complex problems.
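The adaptive-correction idea can be illustrated with a deliberately simplified (mu, lambda) evolution strategy with isotropic steps and a fixed step-size decay; this is a toy stand-in for, not an implementation of, the paper's covariance-adapting CMA-ES. The "drift vector" and loss are invented for illustration.

```python
import numpy as np

def es_minimize(loss, x0, sigma=1.0, mu=5, lam=20, iters=80, seed=0):
    """Minimal (mu, lambda) evolution strategy: sample lam candidates around
    the mean, keep the mu best, recombine, and shrink the step size."""
    rng = np.random.default_rng(seed)
    mean = np.asarray(x0, dtype=float)
    for _ in range(iters):
        pop = mean + sigma * rng.standard_normal((lam, mean.size))
        order = np.argsort([loss(p) for p in pop])
        mean = pop[order[:mu]].mean(axis=0)
        sigma *= 0.96  # crude fixed decay; CMA-ES adapts a full covariance
    return mean

# Hypothetical scenario: re-estimate a sensor-drift offset so the PaRC
# model keeps matching incoming data.
drift = np.array([2.0, -1.0, 0.5])
loss = lambda x: float(np.sum((x - drift) ** 2))
estimate = es_minimize(loss, x0=np.zeros(3))
```

CMA-ES improves on this sketch by learning a full covariance matrix of the search distribution, which is what makes it effective on ill-conditioned, correlated drift directions.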

  19. Modal density of rectangular structures in a wide frequency range

    NASA Astrophysics Data System (ADS)

    Parrinello, A.; Ghiringhelli, G. L.

    2018-04-01

    A novel approach to investigate the modal density of a rectangular structure in a wide frequency range is presented. First, the modal density is derived, in the whole frequency range of interest, on the basis of sound transmission through the infinite counterpart of the structure; then, it is corrected by means of the low-frequency modal behavior of the structure, taking into account actual size and boundary conditions. A statistical analysis reveals the connection between the modal density of the structure and the transmission of sound through its thickness. A transfer matrix approach is used to compute the required acoustic parameters, making it possible to deal with structures having arbitrary stratifications of different layers. A finite element method is applied on coarse grids to derive the first few eigenfrequencies required to correct the modal density. Both the transfer matrix approach and the coarse grids involved in the finite element analysis grant high efficiency. Comparison with alternative formulations demonstrates the effectiveness of the proposed methodology.

  20. Research on criticality analysis method of CNC machine tools components under fault rate correlation

    NASA Astrophysics Data System (ADS)

    Gui-xiang, Shen; Xian-zhuo, Zhao; Zhang, Ying-zhi; Chen-yu, Han

    2018-02-01

    In order to determine the key components of CNC machine tools under fault rate correlation, a system component criticality analysis method is proposed. Based on fault mechanism analysis, the component fault relation is determined, and the adjacency matrix is introduced to describe it. Then, the fault structure relation is organized hierarchically using the interpretive structural model (ISM). Assuming that the propagation of a fault obeys a Markov process, the fault association matrix is described and transformed, and the PageRank algorithm is used to determine the relative influence values; combining these with the component fault rate under time correlation yields a comprehensive fault rate. Based on the fault mode frequency and fault influence, the criticality of the components under fault rate correlation is determined, and the key components are identified to provide a correct basis for formulating reliability assurance measures. Finally, taking machining centers as an example, the effectiveness of the method is verified.
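The PageRank step can be sketched as a power iteration on a fault-propagation adjacency matrix. The matrix below is hypothetical (A[i, j] = 1 meaning a fault in component i can induce a fault in component j); the paper's actual association matrix and its transformation are not reproduced here.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10):
    """Power iteration for PageRank on a directed adjacency matrix."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # Dangling components (no outgoing edges) spread influence uniformly.
    P = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_next = (1 - damping) / n + damping * r @ P
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

# Toy 4-component system; component 2 receives faults from 0, 1, and 3.
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
influence = pagerank(adj)
```

Components with the largest influence values are the candidates for "key component" status once fault rates and fault mode frequencies are folded in.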

  1. Using Redundancy To Reduce Errors in Magnetometer Readings

    NASA Technical Reports Server (NTRS)

    Kulikov, Igor; Zak, Michail

    2004-01-01

    A method of reducing errors in noisy magnetic-field measurements involves exploitation of redundancy in the readings of multiple magnetometers in a cluster. By "redundancy" is meant that the readings are not entirely independent of each other, because the relationships among the magnetic-field components that one seeks to measure are governed by the fundamental laws of electromagnetism as expressed by Maxwell's equations. Assuming that the magnetometers are located outside a magnetic material, that the magnetic field is steady or quasi-steady, and that there are no electric currents flowing in or near the magnetometers, the applicable Maxwell's equations are ∇ × B = 0 and ∇ · B = 0, where B is the magnetic-flux-density vector. By suitable algebraic manipulation, these equations can be shown to impose three independent constraints on the values of the components of B at the various magnetometer positions. In general, the problem of reducing the errors in noisy measurements is one of finding a set of corrected values that minimize an error function. In the present method, the error function is formulated as (1) the sum of squares of the differences between the corrected and noisy measurement values plus (2) a sum of three terms, each comprising the product of a Lagrange multiplier and one of the three constraints. The partial derivatives of the error function with respect to the corrected magnetic-field component values and the Lagrange multipliers are set equal to zero, leading to a set of equations that can be put into matrix-vector form. The matrix can be inverted to solve for a vector that comprises the corrected magnetic-field component values and the Lagrange multipliers.
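The Lagrange-multiplier construction is a standard linearly constrained least-squares solve. A minimal sketch: find corrected values x closest to the noisy readings y while exactly satisfying linear constraints C @ x = 0. The constraint matrix below is a toy stand-in for the three Maxwell-derived constraints.

```python
import numpy as np

def constrained_correction(y, C):
    """Minimize ||x - y||^2 subject to C @ x = 0 via the KKT system."""
    n, m = y.size, C.shape[0]
    # Stationarity: 2(x - y) + C.T @ lam = 0, together with C @ x = 0.
    K = np.block([[2.0 * np.eye(n), C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([2.0 * y, np.zeros(m)])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # corrected values; sol[n:] are the multipliers

y = np.array([1.1, 2.0, 2.9])        # noisy "field components"
C = np.array([[1.0, -1.0, 0.0]])     # illustrative constraint: x0 == x1
x = constrained_correction(y, C)
```

Here the corrected values split the disagreement between the two constrained readings evenly (both become 1.55) and leave the unconstrained one untouched, which is exactly the error-spreading behavior the redundancy method relies on.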

  2. Rapid detection of meticillin-resistant Staphylococcus aureus bacteraemia using combined three-hour short-incubation matrix-assisted laser desorption/ionization time-of-flight MS identification and Alere Culture Colony PBP2a detection test.

    PubMed

    Delport, Johannes Andries; Mohorovic, Ivor; Burn, Sandi; McCormick, John Kenneth; Schaus, David; Lannigan, Robert; John, Michael

    2016-07-01

    Meticillin-resistant Staphylococcus aureus (MRSA) bloodstream infection is responsible for significant morbidity, with mortality rates as high as 60 % if not treated appropriately. We describe a rapid method to detect MRSA in blood cultures using a combined three-hour short-incubation BRUKER matrix-assisted laser desorption/ionization time-of-flight MS BioTyper protocol and a qualitative immunochromatographic assay, the Alere Culture Colony Test PBP2a detection test. We compared this combined method with a molecular method detecting the nuc and mecA genes currently performed in our laboratory. One hundred and seventeen S. aureus blood cultures were tested of which 35 were MRSA and 82 were meticillin-sensitive S. aureus (MSSA). The rapid combined test correctly identified 100 % (82/82) of the MSSA and 85.7 % (30/35) of the MRSA after 3 h. There were five false negative results where the isolates were correctly identified as S. aureus, but PBP2a was not detected by the Culture Colony Test. The combined method has a sensitivity of 87.5 %, specificity of 100 %, a positive predictive value of 100 % and a negative predictive value of 94.3 % with the prevalence of MRSA in our S. aureus blood cultures. The combined rapid method offers a significant benefit to early detection of MRSA in positive blood cultures.
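The performance figures follow from the standard confusion-matrix formulas. A quick arithmetic sketch using the counts reported above (30/35 MRSA flagged, 82/82 MSSA correctly negative):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard diagnostic test metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp) if tp + fp else float("nan"),
        "npv": tn / (tn + fn),
    }

# Counts from the abstract: TP = 30, FN = 5 (MRSA), TN = 82, FP = 0 (MSSA).
m = diagnostic_metrics(tp=30, fn=5, tn=82, fp=0)
# With these counts: sensitivity = 30/35, specificity = 1.0,
# PPV = 1.0, NPV = 82/87.
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the MRSA prevalence in the tested population, which is why the abstract qualifies them.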

  3. A Newton-Raphson Method Approach to Adjusting Multi-Source Solar Simulators

    NASA Technical Reports Server (NTRS)

    Snyder, David B.; Wolford, David S.

    2012-01-01

    NASA Glenn Research Center has been using an in-house designed X25-based multi-source solar simulator since 2003. The simulator is set up for triple-junction solar cells prior to measurements by adjusting the three sources to produce the correct short-circuit current, Isc, in each of three AM0-calibrated sub-cells. The past practice has been to adjust one source on one sub-cell at a time, iterating until all the sub-cells have the calibrated Isc. The new approach is to create a matrix of measured Isc for small source changes on each sub-cell. A matrix, A, is produced. This is normalized to unit changes in the sources so that A·Δs = ΔIsc. This matrix can now be inverted and used with the known Isc differences from the AM0-calibrated values to indicate changes in the source settings, Δs = A⁻¹·ΔIsc. This approach is still an iterative one, but all sources are changed during each iteration step. It typically takes four to six steps to converge on the calibrated Isc values. Even though the source lamps may degrade over time, the initial matrix evaluation is not performed each time, since the measurement matrix needs to be only approximate. Because an iterative approach is used, the method will continue to be valid. This method may become more important as state-of-the-art solar cell junction responses overlap the sources of the simulator. Also, as the number of cell junctions and sources increases, this method should remain applicable.
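The adjustment loop can be sketched numerically: estimate the sensitivity matrix A once by finite differences, then iterate the source settings with Δs = A⁻¹·ΔIsc. The "hardware" below is an invented linear-plus-offset stand-in for the real simulator, so the numbers are illustrative only.

```python
import numpy as np

# Hypothetical simulator response: Isc of 3 sub-cells vs 3 lamp settings.
A_true = np.array([[1.0, 0.2, 0.1],
                   [0.3, 1.1, 0.2],
                   [0.1, 0.3, 0.9]])
offset = np.array([0.5, 0.3, 0.2])
measure_isc = lambda s: A_true @ s + offset

target = np.array([5.0, 4.0, 3.0])   # AM0-calibrated Isc values

# One-time sensitivity estimate by unit source changes (approximate is fine).
s = np.zeros(3)
base = measure_isc(s)
A = np.column_stack([measure_isc(s + np.eye(3)[j]) - base for j in range(3)])

# Newton-like iteration: all sources updated every step.
for _ in range(6):  # the abstract reports 4-6 steps in practice
    delta_isc = target - measure_isc(s)
    s = s + np.linalg.solve(A, delta_isc)
```

Because only the direction of the update matters for convergence, a stale or approximate A still drives the iteration to the calibrated Isc values, which is why the matrix need not be re-measured as the lamps age.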

  4. LANDSAT-D investigations in snow hydrology

    NASA Technical Reports Server (NTRS)

    Dozier, J.

    1983-01-01

    Progress on the registration of TM data to digital topographic data; on comparison of TM, MSS and NOAA meteorological satellite data for snowcover mapping; and on radiative transfer models for atmospheric correction is reported. Some methods for analyzing spatial contiguity of snow within the snow covered area were selected. The methods are based on a two-channel version of the grey-level co-occurrence matrix, combined with edge detection derived from an algorithm for computing slopes and exposures from digital terrain data.

  5. Revised error propagation of 40Ar/39Ar data, including covariances

    NASA Astrophysics Data System (ADS)

    Vermeesch, Pieter

    2015-12-01

    The main advantage of the 40Ar/39Ar method over conventional K-Ar dating is that it does not depend on any absolute abundance or concentration measurements, but only uses the relative ratios between five isotopes of the same element, argon, which can be measured with great precision on a noble gas mass spectrometer. The relative abundances of the argon isotopes are subject to a constant sum constraint, which imposes a covariant structure on the data: the relative amount of any of the five isotopes can always be obtained from that of the other four. Thus, the 40Ar/39Ar method is a classic example of a 'compositional data problem'. In addition to the constant sum constraint, covariances are introduced by a host of other processes, including data acquisition, blank correction, detector calibration, mass fractionation, decay correction, interference correction, atmospheric argon correction, interpolation of the irradiation parameter, and age calculation. The myriad of correlated errors arising during the data reduction are best handled by casting the 40Ar/39Ar data reduction protocol in a matrix form. The completely revised workflow presented in this paper is implemented in a new software platform, Ar-Ar_Redux, which takes raw mass spectrometer data as input and generates accurate 40Ar/39Ar ages and their (co-)variances as output. Ar-Ar_Redux accounts for all sources of analytical uncertainty, including those associated with decay constants and the air ratio. Knowing the covariance matrix of the ages removes the need to consider 'internal' and 'external' uncertainties separately when calculating (weighted) mean ages. Ar-Ar_Redux is built on the same principles as its sibling program in the U-Pb community (U-Pb_Redux), thus improving the intercomparability of the two methods with tangible benefits to the accuracy of the geologic time scale. The program can be downloaded free of charge from http://redux.london-geochron.com.
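Why the covariances matter can be shown with the standard first-order (Jacobian) propagation rule for a ratio, var(R) ≈ J Σ Jᵀ: ignoring the off-diagonal term of the covariance matrix changes the answer. The numbers below are illustrative, not argon data.

```python
import numpy as np

# Two correlated measured quantities (x, y) and their covariance matrix.
x, y = 100.0, 50.0
Sigma = np.array([[4.0, 1.5],
                  [1.5, 1.0]])

R = x / y
J = np.array([1.0 / y, -x / y**2])   # partial derivatives dR/dx, dR/dy

var_full = J @ Sigma @ J             # full propagation, covariance included
var_naive = J**2 @ np.diag(Sigma)    # diagonal (uncorrelated) terms only
# Positive x-y correlation partially cancels in the ratio, so the naive
# variance overstates the uncertainty here.
```

The matrix form scales directly: for a whole data-reduction chain, J becomes the Jacobian of the full workflow and Σ the covariance matrix of all raw inputs, which is exactly the bookkeeping a matrix-based protocol automates.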

  6. H4: A challenging system for natural orbital functional approximations

    NASA Astrophysics Data System (ADS)

    Ramos-Cordoba, Eloy; Lopez, Xabier; Piris, Mario; Matito, Eduard

    2015-10-01

    The correct description of nondynamic correlation by electronic structure methods not belonging to the multireference family is a challenging issue. The transition of D2h to D4h symmetry in H4 molecule is among the most simple archetypal examples to illustrate the consequences of missing nondynamic correlation effects. The resurgence of interest in density matrix functional methods has brought several new methods including the family of Piris Natural Orbital Functionals (PNOF). In this work, we compare PNOF5 and PNOF6, which include nondynamic electron correlation effects to some extent, with other standard ab initio methods in the H4 D4h/D2h potential energy surface (PES). Thus far, the wrongful behavior of single-reference methods at the D2h-D4h transition of H4 has been attributed to wrong account of nondynamic correlation effects, whereas in geminal-based approaches, it has been assigned to a wrong coupling of spins and the localized nature of the orbitals. We will show that actually interpair nondynamic correlation is the key to a cusp-free qualitatively correct description of H4 PES. By introducing interpair nondynamic correlation, PNOF6 is shown to avoid cusps and provide the correct smooth PES features at distances close to the equilibrium, total and local spin properties along with the correct electron delocalization, as reflected by natural orbitals and multicenter delocalization indices.

  7. Measurement of macrocyclic trichothecene in floor dust of water-damaged buildings using gas chromatography/tandem mass spectrometry—dust matrix effects

    PubMed Central

    Saito, Rena; Park, Ju-Hyeong; LeBouf, Ryan; Green, Brett J.; Park, Yeonmi

    2017-01-01

    Gas chromatography-tandem mass spectrometry (GC-MS/MS) was used to detect fungal secondary metabolites. Detection of verrucarol, the hydrolysis product of Stachybotrys chartarum macrocyclic trichothecene (MCT), was confounded by matrix effects associated with heterogeneous indoor environmental samples. In this study, we examined the role of dust matrix effects associated with GC-MS/MS to better quantify verrucarol in dust as a measure of total MCT. The efficiency of the internal standard (ISTD, 1,12-dodecanediol), and application of a matrix-matched standard correction method in measuring MCT in floor dust of water-damaged buildings was additionally examined. Compared to verrucarol, ISTD had substantially higher matrix effects in the dust extracts. The results of the ISTD evaluation showed that without ISTD adjustment, there was a 280% ion enhancement in the dust extracts compared to neat solvent. The recovery of verrucarol was 94% when the matrix-matched standard curve without the ISTD was used. Using traditional calibration curves with ISTD adjustment, none of the 21 dust samples collected from water-damaged buildings were detectable. In contrast, when the matrix-matched calibration curves without ISTD adjustment were used, 38% of samples were detectable. The study results suggest that floor dust of water-damaged buildings may contain MCT. However, the measured levels of MCT in dust using the GC-MS/MS method could be significantly under- or overestimated, depending on the matrix effects, the inappropriate ISTD, or a combination of the two. Our study further shows that the routine application of matrix-matched calibration may prove useful in obtaining accurate measurements of MCT in dust derived from damp indoor environments when no isotopically labeled verrucarol is available. PMID:26853932
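The matrix-matched calibration principle can be sketched numerically: build the calibration curve from standards spiked into blank matrix extract so that ion enhancement or suppression affects standards and samples alike. All concentrations and response factors below are invented for illustration.

```python
import numpy as np

conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0])   # spiked standard levels
resp_matrix = 1.8 * conc + 0.5                # responses in dust extract
                                              # (enhanced by the matrix)
resp_solvent = 1.0 * conc + 0.1               # responses in neat solvent

# Calibrate against the matrix-matched standards.
slope, intercept = np.polyfit(conc, resp_matrix, 1)

def quantify(signal, slope, intercept):
    return (signal - intercept) / slope

# A sample signal produced in the same matrix back-calculates correctly
# with the matrix-matched curve, but is overestimated by the solvent curve.
signal = 1.8 * 3.0 + 0.5
c_matched = quantify(signal, slope, intercept)
c_wrong = quantify(signal, 1.0, 0.1)
```

This is the same logic as the study's finding: with a ~280% enhancement, a solvent-based curve (or a poorly matched internal standard) systematically biases the back-calculated concentrations.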


  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yaping; Williams, Brent J.; Goldstein, Allen H.

    Here, we present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography–mass spectrometry methods. We specifically apply this new analysis method to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014). Here we extend the method development for the specific purpose of determining aerosol samples' sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. Two-dimensional PMF can then effectively perform three-dimensional factorization on the three-dimensional TAG mass spectra data. The retention time shift of the chromatogram is corrected by applying the median values of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing of the data than conventional single compound identification and integration. In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method does have the ability to achieve molecular-level resolution on other bulk aerosol components commonly observed by the AMS.
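The binning step described above can be sketched as follows: collapse each sample's chromatogram (a retention-time by m/z intensity matrix) into fixed retention-time bins by summing the mass spectra within each bin, then flatten bins × m/z into one row of the PMF input matrix. The array shapes are illustrative, not TAG instrument dimensions.

```python
import numpy as np

def bin_chromatogram(chrom, n_bins):
    """chrom: (n_times, n_mz) array -> flattened (n_bins * n_mz,) row."""
    n_times, n_mz = chrom.shape
    edges = np.linspace(0, n_times, n_bins + 1).astype(int)
    # Sum the mass spectra falling inside each retention-time bin.
    binned = np.vstack([chrom[edges[i]:edges[i + 1]].sum(axis=0)
                        for i in range(n_bins)])
    return binned.ravel()

rng = np.random.default_rng(0)
samples = [rng.random((120, 40)) for _ in range(8)]   # 8 synthetic samples
X = np.vstack([bin_chromatogram(c, n_bins=12) for c in samples])
# X (samples x (bins * mz)) is the 2-D matrix handed to ordinary 2-D PMF,
# which thereby factorizes the originally 3-D data.
```

Binning conserves total signal per sample, so source contributions retrieved by PMF on X remain quantitatively comparable to compound-integration results.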

  10. Radiance and polarization of multiple scattered light from haze and clouds.

    PubMed

    Kattawar, G W; Plass, G N

    1968-08-01

    The radiance and polarization of multiple scattered light is calculated from the Stokes' vectors by a Monte Carlo method. The exact scattering matrix for a typical haze and for a cloud whose spherical drops have an average radius of 12 μm is calculated from the Mie theory. The Stokes' vector is transformed in a collision by this scattering matrix and the rotation matrix. The two angles that define the photon direction after scattering are chosen by a random process that correctly simulates the actual distribution functions for both angles. The Monte Carlo results for Rayleigh scattering compare favorably with well known tabulated results. Curves are given of the reflected and transmitted radiances and polarizations for both the haze and cloud models and for several solar angles, optical thicknesses, and surface albedos. The dependence on these various parameters is discussed.

  11. Nonlinear Penalized Estimation of True Q-Matrix in Cognitive Diagnostic Models

    ERIC Educational Resources Information Center

    Xiang, Rui

    2013-01-01

    A key issue of cognitive diagnostic models (CDMs) is the correct identification of Q-matrix which indicates the relationship between attributes and test items. Previous CDMs typically assumed a known Q-matrix provided by domain experts such as those who developed the questions. However, misspecifications of Q-matrix had been discovered in the past…

  12. Assessing Fit of Item Response Models Using the Information Matrix Test

    ERIC Educational Resources Information Center

    Ranger, Jochen; Kuhn, Jorg-Tobias

    2012-01-01

    The information matrix can equivalently be determined via the expectation of the Hessian matrix or the expectation of the outer product of the score vector. The identity of these two matrices, however, is only valid in case of a correctly specified model. Therefore, differences between the two versions of the observed information matrix indicate…

  13. Distributed Relaxation Multigrid and Defect Correction Applied to the Compressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Thomas, J. L.; Diskin, B.; Brandt, A.

    1999-01-01

    The distributed-relaxation multigrid and defect-correction methods are applied to the two-dimensional compressible Navier-Stokes equations. The formulation is intended for high Reynolds number applications and several applications are made at a laminar Reynolds number of 10,000. A staggered-grid arrangement of variables is used; the coupled pressure and internal energy equations are solved together with multigrid, requiring a block 2x2 matrix solution. Textbook multigrid efficiencies are attained for incompressible and slightly compressible simulations of the boundary layer on a flat plate. Textbook efficiencies are obtained for compressible simulations up to Mach numbers of 0.7 for a viscous wake simulation.

  14. Improving Precision, Maintaining Accuracy, and Reducing Acquisition Time for Trace Elements in EPMA

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Singer, J.; Armstrong, J. T.

    2016-12-01

    Trace element precision in electron probe microanalysis (EPMA) is limited by intrinsic random variation in the x-ray continuum. Traditionally we characterize background intensity by measuring on either side of the emission line and interpolating the intensity underneath the peak to obtain the net intensity. Alternatively, we can measure the background intensity at the on-peak spectrometer position using a number of standard materials that do not contain the element of interest. This so-called mean atomic number (MAN) background calibration (Donovan et al., 2016) uses a set of standard measurements, covering an appropriate range of average atomic number, to iteratively estimate the continuum intensity for the unknown composition (and hence average atomic number). We will demonstrate that, at least for materials with a relatively simple matrix such as SiO2, TiO2, ZrSiO4, etc., where one may obtain a matrix-matched standard for use in the so-called "blank correction", we can obtain trace element accuracy comparable to traditional off-peak methods, and with improved precision, in about half the time. Donovan, Singer and Armstrong, "A New EPMA Method for Fast Trace Element Analysis in Simple Matrices", American Mineralogist, v101, p1839-1853, 2016. Figure 1. Uranium concentration line profiles from quantitative x-ray maps (20 keV, 100 nA, 5 μm beam size and 4000 msec per pixel), for both off-peak and MAN background methods without (a), and with (b), the blank correction applied. We see precision significantly improved compared with traditional off-peak measurements while, in this case, the blank correction provides a small but discernible improvement in accuracy.
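The MAN idea can be sketched numerically: calibrate continuum (background) counts as a function of mean atomic number Z-bar on element-free standards, then predict the on-peak background for an unknown from its Z-bar instead of measuring off-peak. All count values below are invented; over a modest Z-bar range the continuum is treated as roughly linear, which is an assumption of this sketch.

```python
import numpy as np

# Hypothetical calibration data: mean atomic number of standards that do
# not contain the element of interest, and their measured on-peak counts
# (pure continuum, since the element is absent).
zbar_std = np.array([10.8, 12.0, 14.1, 16.9, 20.7])
bkg_std = np.array([21.0, 24.1, 29.3, 36.0, 45.5])

# Fit continuum counts vs Z-bar (linear over this narrow range).
coef = np.polyfit(zbar_std, bkg_std, 1)

def net_intensity(peak_counts, zbar_unknown):
    """Net counts = measured on-peak counts minus MAN-predicted background."""
    background = np.polyval(coef, zbar_unknown)
    return peak_counts - background

net = net_intensity(peak_counts=40.0, zbar_unknown=15.0)
```

Skipping the two off-peak measurements is where the factor-of-two time saving comes from; the blank correction then removes any small systematic offset using a matrix-matched standard known to contain none of the analyte.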

  15. Evaluation of the Vitek MS Matrix-Assisted Laser Desorption Ionization-Time of Flight Mass Spectrometry System for Identification of Clinically Relevant Filamentous Fungi.

    PubMed

    McMullen, Allison R; Wallace, Meghan A; Pincus, David H; Wilkey, Kathy; Burnham, C A

    2016-08-01

    Invasive fungal infections have a high rate of morbidity and mortality, and accurate identification is necessary to guide appropriate antifungal therapy. With the increasing incidence of invasive disease attributed to filamentous fungi, rapid and accurate species-level identification of these pathogens is necessary. Traditional methods for identification of filamentous fungi can be slow and may lack resolution. Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) has emerged as a rapid and accurate method for identification of bacteria and yeasts, but a paucity of data exists on the performance characteristics of this method for identification of filamentous fungi. The objective of our study was to evaluate the accuracy of the Vitek MS for mold identification. A total of 319 mold isolates representing 43 genera recovered from clinical specimens were evaluated. Of these isolates, 213 (66.8%) were correctly identified using the Vitek MS Knowledge Base, version 3.0 database. When a modified SARAMIS (Spectral Archive and Microbial Identification System) database was used to augment the version 3.0 Knowledge Base, 245 (76.8%) isolates were correctly identified. Unidentified isolates were subcultured for repeat testing; 71/319 (22.3%) remained unidentified. Of the unidentified isolates, 69 were not in the database. Only 3 (0.9%) isolates were misidentified by MALDI-TOF MS (including Aspergillus amoenus [n = 2] and Aspergillus calidoustus [n = 1]) although 10 (3.1%) of the original phenotypic identifications were not correct. In addition, this methodology was able to accurately identify 133/144 (93.6%) Aspergillus sp. isolates to the species level. MALDI-TOF MS has the potential to expedite mold identification, and misidentifications are rare. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  16. A simple method to determine evaporation and compensate for liquid losses in small-scale cell culture systems.

    PubMed

    Wiegmann, Vincent; Martinez, Cristina Bernal; Baganz, Frank

    2018-04-24

    Establish a method to indirectly measure evaporation in microwell-based cell culture systems and show that the proposed method allows compensating for liquid losses in fed-batch processes. A correlation between evaporation and the concentration of Na⁺ was found (R² = 0.95) when using the 24-well-based miniature bioreactor system (micro-Matrix) for a batch culture with GS-CHO. Based on these results, a method was developed to counteract evaporation with periodic water additions based on measurements of the Na⁺ concentration. Implementation of this method resulted in a reduction of the relative liquid loss after 15 days of a fed-batch cultivation from 36.7 ± 6.7% without volume corrections to 6.9 ± 6.5% with volume corrections. A procedure was established to indirectly measure evaporation through a correlation with the level of Na⁺ ions in solution and deriving a simple formula to account for liquid losses.
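The underlying mass balance is simple: sodium is not lost by evaporation, so as water leaves, the measured Na⁺ concentration rises in proportion, and the current volume and required top-up follow directly. This is a hedged sketch of that relation, not the paper's fitted correlation; the volumes and concentrations are invented.

```python
def water_addition(v_nominal_ml, na_initial_mm, na_measured_mm):
    """Water volume to add to restore the nominal working volume.

    Mass balance: Na amount is conserved, so
        v_current * na_measured = v_nominal * na_initial
    """
    v_current = v_nominal_ml * na_initial_mm / na_measured_mm
    return v_nominal_ml - v_current

# Example: a 5 mL well whose Na+ reading rose from 110 mM to 130 mM
# has lost about 0.77 mL of water.
top_up = water_addition(v_nominal_ml=5.0, na_initial_mm=110.0,
                        na_measured_mm=130.0)
```

Applied periodically, this keeps the working volume (and hence all concentration-based measurements) comparable across a multi-week fed-batch run.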

  17. Density-functional expansion methods: Grand challenges.

    PubMed

    Giese, Timothy J; York, Darrin M

    2012-03-01

    We discuss the sources of error in semiempirical density-functional expansion (VE) methods. In particular, we show that VE methods are capable of reproducing their standard Kohn-Sham density-functional counterparts well, but suffer large errors when one or more of the following approximations is invoked: the limited size of the atomic-orbital basis, the Slater monopole auxiliary-basis description of the response density, and the one- and two-body treatment of the core-Hamiltonian matrix elements. In the process of discussing these approximations and highlighting their symptoms, we introduce a new model that supplements the second-order density-functional tight-binding model with a self-consistent charge-dependent chemical potential equalization correction; we review our recently reported method for generalizing the auxiliary-basis description of the atomic-orbital response density; and we decompose the first-order potential into a summation of additive atomic components and many-body corrections. From this examination, we provide new insights and preliminary results that motivate and inspire new approximate treatments of the core Hamiltonian.

  18. Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry for rapid identification of fungal rhinosinusitis pathogens.

    PubMed

    Huang, Yanfei; Wang, Jinglin; Zhang, Mingxin; Zhu, Min; Wang, Mei; Sun, Yufeng; Gu, Haitong; Cao, Jingjing; Li, Xue; Zhang, Shaoya; Lu, Xinxin

    2017-03-01

    Filamentous fungi are among the most important pathogens causing fungal rhinosinusitis (FRS). Current laboratory diagnosis of FRS pathogens mainly relies on phenotypic identification by culture and microscopic examination, which is time consuming and expertise dependent. Although matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) MS has been employed to identify various fungi, its efficacy in the identification of FRS fungi is less clear. A total of 153 FRS isolates obtained from patients were analysed at the Clinical Laboratory at the Beijing Tongren Hospital affiliated to the Capital Medical University, between January 2014 and December 2015. They were identified by traditional phenotypic methods and Bruker MALDI-TOF MS (Bruker, Biotyper version 3.1), respectively. Discrepancies between the two methods were further validated by sequencing. Among the 153 isolates, 151 had correct species identification using MALDI-TOF MS (Bruker Biotyper 3.1, score ≥2.0 or 2.3). MALDI-TOF MS enabled identification of some very closely related species that were indistinguishable by conventional phenotypic methods, including 1/10 Aspergillus versicolor, 3/20 Aspergillus flavus, 2/30 Aspergillus fumigatus and 1/20 Aspergillus terreus, which were misidentified by conventional phenotypic methods as Aspergillus nidulans, Aspergillus oryzae, Aspergillus japonicus and Aspergillus nidulans, respectively. In addition, 2/2 Rhizopus oryzae and 1/1 Rhizopus stolonifer that were identified only to the genus level by the phenotypic method were correctly identified by MALDI-TOF MS. MALDI-TOF MS is a rapid and accurate technique, and could replace the conventional phenotypic method for routine identification of FRS fungi in clinical microbiology laboratories.

  19. Many-body expansion of the Fock matrix in the fragment molecular orbital method

    NASA Astrophysics Data System (ADS)

    Fedorov, Dmitri G.; Kitaura, Kazuo

    2017-09-01

    A many-body expansion of the Fock matrix in the fragment molecular orbital method is derived up to three-body terms for restricted Hartree-Fock and density functional theory in the atomic orbital basis and compared to the expansion in the basis of fragment molecular orbitals (MOs). The physical nature of many-body corrections is revealed in terms of charge transfer terms. An improvement of the fragment MO expansion is proposed by adding exchange to the embedding. The accuracy of all developed methods is demonstrated in comparison to unfragmented results for polyalanines, a water cluster, Trp-cage (PDB: 1L2Y) and crambin (PDB: 1CRN) proteins, a zeolite cluster, a Si nano-wire, and a boron nitride ribbon. The physical nature of metallicity is discussed, and it is shown what kinds of metallic systems can be treated by fragment-based methods. The density of states is calculated for a fully closed and a partially open nano-ring of boron nitride with a diameter of 105 nm.
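
    The many-body expansion underlying fragment methods such as FMO is easiest to illustrate for a scalar quantity like the energy; the abstract's Fock-matrix expansion is the analogous matrix-valued series, extended there to three-body terms. A minimal pair (two-body) sketch, with illustrative names:

    ```python
    from itertools import combinations

    def many_body_energy(monomer_e, dimer_e):
        """Two-body (pair) expansion used in fragment-based methods:

            E ~ sum_I E_I + sum_{I<J} (E_IJ - E_I - E_J)

        monomer_e maps a fragment label to its energy; dimer_e maps a
        frozenset {I, J} to the dimer energy. This sketch stops at pairs
        for brevity; three-body terms correct the pair sum analogously.
        """
        total = sum(monomer_e.values())
        for i, j in combinations(sorted(monomer_e), 2):
            total += dimer_e[frozenset((i, j))] - monomer_e[i] - monomer_e[j]
        return total
    ```

    If the dimer energies are exactly additive (no interaction), every pair correction vanishes and the expansion reduces to the monomer sum.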

  20. A comparative study of methods for describing non-adiabatic coupling: diabatic representation of the ¹Σ⁺/¹Π HOH and HHO conical intersections

    NASA Astrophysics Data System (ADS)

    Dobbyn, Abigail J.; Knowles, Peter J.

    A number of established techniques for obtaining diabatic electronic states in small molecules are critically compared for the example of the X and B states in the water molecule, which contribute to the two lowest-energy conical intersections. Integration of the coupling matrix elements and analysis of configuration mixing coefficients both produce reliable diabatic states globally. Methods relying on diagonalization of dipole moment and angular momentum operators are shown to fail in large regions of coordinate space. However, the use of transition angular momentum matrix elements involving the A state, which is degenerate with B at the conical intersections, is successful globally, provided that an appropriate choice of coordinates is made. Long range damping of non-adiabatic coupling to give correct asymptotic mixing angles also is investigated.

  1. Nondestructive determination of activity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chabalier, B.

    1996-08-01

    Characterization and appraisal tests include the measurement of activity in raw waste and waste packages. After conditioning, variations in density, matrix composition, and geometry make it nearly impossible to evaluate, with low uncertainty, the radionuclide activity in a package destined for storage without direct measurement. Various nondestructive measuring techniques that use ionizing radiation are employed to characterize waste packages and raw waste. Gamma spectrometry is the most widely used technique because of its simple operation and low cost. This technique is used to quantify the beta-gamma and alpha activity of gamma-emitting radionuclides as well as to check the radioactive homogeneity of the waste packages. Numerous systems for directly measuring waste packages have been developed. Two types of methods may be distinguished, depending on whether the results of the measurements are weighted by an experimentally determined corrective term or by calculation. Through the MARCO and CARACO measuring systems, a method is described that allows one to quantify the activity of the beta-gamma and alpha radionuclides contained in either a waste package or raw waste whose geometries and material compositions are more or less accurately known. This method is based on (a) measurement by gamma spectrometry of the beta-gamma and alpha activity of the gamma-emitting radionuclides contained in the waste package and (b) the application of calculated corrections; thus, the limitations imposed by reference package geometry and matrix are avoided.
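
    In its simplest form, the gamma-spectrometric quantification described above converts a net peak count into an activity using a detection efficiency, an emission probability, and a calculated matrix correction. The sketch below is a generic illustration of that conversion, not the MARCO/CARACO implementation; all names are assumptions:

    ```python
    def activity_bq(net_counts, live_time_s, efficiency, emission_prob,
                    matrix_correction=1.0):
        """Activity in Bq inferred from a gamma line.

        matrix_correction is the calculated corrective term (e.g. from a
        transport calculation) accounting for attenuation in the package
        matrix and geometry; with no correction it defaults to 1.
        """
        count_rate = net_counts / live_time_s
        return count_rate / (efficiency * emission_prob) * matrix_correction
    ```

    A denser matrix attenuates more of the emitted photons, so its calculated correction factor is larger and scales the apparent activity up accordingly.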

  2. Effects of motion and b-matrix correction for high resolution DTI with short-axis PROPELLER-EPI

    PubMed Central

    Aksoy, Murat; Skare, Stefan; Holdsworth, Samantha; Bammer, Roland

    2010-01-01

    Short-axis PROPELLER-EPI (SAP-EPI) has been proven to be very effective in providing high-resolution diffusion-weighted and diffusion tensor data. The self-navigation capabilities of SAP-EPI allow one to correct for motion, phase errors, and geometric distortion. However, in the presence of patient motion, the change in the effective diffusion-encoding direction (i.e. the b-matrix) between successive PROPELLER ‘blades’ can decrease the accuracy of the estimated diffusion tensors, which might result in erroneous reconstruction of white matter tracts in the brain. In this study, we investigate the effects of alterations in the b-matrix as a result of patient motion on the example of SAP-EPI DTI and eliminate these effects by incorporating our novel single-step non-linear diffusion tensor estimation scheme into the SAP-EPI post-processing procedure. Our simulations and in-vivo studies showed that, in the presence of patient motion, correcting the b-matrix is necessary in order to get more accurate diffusion tensor and white matter pathway reconstructions. PMID:20222149
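
    The b-matrix change under rigid-body rotation discussed in the abstract is the conjugation B' = R B Rᵀ. The numpy sketch below shows that reorientation step in isolation (a generic illustration of b-matrix correction, not the paper's full single-step non-linear tensor estimation):

    ```python
    import numpy as np

    def rotate_b_matrix(b_matrix, rotation):
        """Re-express a 3x3 b-matrix in the rotated patient frame: B' = R B R^T.

        `rotation` is the 3x3 rotation extracted from a per-blade motion
        estimate; ignoring this step biases the fitted diffusion tensor.
        """
        return rotation @ b_matrix @ rotation.T
    ```

    For example, rotating the b-matrix of an x-oriented diffusion gradient by 90° about z moves the diffusion weighting entirely onto the y axis.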

  3. A bias correction for covariance estimators to improve inference with generalized estimating equations that use an unstructured correlation matrix.

    PubMed

    Westgate, Philip M

    2013-07-20

    Generalized estimating equations (GEEs) are routinely used for the marginal analysis of correlated data. The efficiency of GEE depends on how closely the working covariance structure resembles the true structure, and therefore accurate modeling of the working correlation of the data is important. A popular approach is the use of an unstructured working correlation matrix, as it is not as restrictive as simpler structures such as exchangeable and AR-1 and thus can theoretically improve efficiency. However, because of the potential for having to estimate a large number of correlation parameters, variances of regression parameter estimates can be larger than theoretically expected when utilizing the unstructured working correlation matrix. Therefore, standard error estimates can be negatively biased. To account for this additional finite-sample variability, we derive a bias correction that can be applied to typical estimators of the covariance matrix of parameter estimates. Via simulation and in application to a longitudinal study, we show that our proposed correction improves standard error estimation and statistical inference. Copyright © 2012 John Wiley & Sons, Ltd.
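
    For context, the covariance estimator being corrected is the usual GEE sandwich, cov(β̂) ≈ A⁻¹BA⁻¹, sketched below with illustrative names. The paper's finite-sample bias correction, which inflates this estimator to offset the extra variability from estimating an unstructured working correlation, is not reproduced here:

    ```python
    import numpy as np

    def sandwich_covariance(X_clusters, resid_clusters, inv_work_cov):
        """Robust ("sandwich") covariance of GEE regression estimates.

        cov = A^{-1} B A^{-1}, with bread A = sum_i X_i' V_i^{-1} X_i and
        meat  B = sum_i X_i' V_i^{-1} e_i e_i' V_i^{-1} X_i, summed over
        independent clusters i (X_i design, e_i residuals, V_i working
        covariance).
        """
        p = X_clusters[0].shape[1]
        A = np.zeros((p, p))
        B = np.zeros((p, p))
        for X, e, Vinv in zip(X_clusters, resid_clusters, inv_work_cov):
            XtVinv = X.T @ Vinv
            A += XtVinv @ X
            u = XtVinv @ e
            B += np.outer(u, u)
        Ainv = np.linalg.inv(A)
        return Ainv @ B @ Ainv
    ```

    The empirical meat term B is what makes the estimator robust to misspecification of the working correlation, but it is also the source of the finite-sample downward bias the paper addresses.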

  4. Radiative Transfer Model for Operational Retrieval of Cloud Parameters from DSCOVR-EPIC Measurements

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Molina Garcia, V.; Doicu, A.; Loyola, D. G.

    2016-12-01

    The Earth Polychromatic Imaging Camera (EPIC) onboard the Deep Space Climate Observatory (DSCOVR) measures the radiance in the backscattering region. To ensure that all details in the backward glory are covered, a large number of streams is required by a standard radiative transfer model based on the discrete ordinates method; even the delta-M scaling and the TMS correction do not substantially reduce the number of streams. The aim of this work is to analyze the capability of a fast radiative transfer model to retrieve cloud parameters operationally from EPIC measurements. The radiative transfer model combines the discrete ordinates method with the matrix exponential for the computation of radiances and the matrix operator method for the calculation of the reflection and transmission matrices. Standard acceleration techniques, such as the use of the normalized right and left eigenvectors, the telescoping technique, the Padé approximation, and the successive-order-of-scattering approximation, are implemented. In addition, the model may compute the reflection matrix of the cloud by means of asymptotic theory, and may use the equivalent Lambertian cloud model. The various approximations are analyzed from the point of view of efficiency and accuracy.

  5. Color reproduction software for a digital still camera

    NASA Astrophysics Data System (ADS)

    Lee, Bong S.; Park, Du-Sik; Nam, Byung D.

    1998-04-01

    We have developed color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor and color-matching the two devices. The reproduction was performed at three levels: level processing, gamma correction, and color transformation. The image contrast was increased by the level processing, which adjusts the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. A method for obtaining the unknown monitor gamma was also proposed. As a result, the level-processed values were adjusted by a look-up table created from the camera and monitor gamma corrections. For the color transformation of the camera, a 3-by-3 or 3-by-4 matrix was used, calculated by regression between the gamma-corrected values and the measured tristimulus values of each test color sample. The various reproduced images, generated according to four illuminations for the camera and three color temperatures for the monitor, were displayed in a dialogue box implemented in our software. A user can easily choose the best reproduced image by comparing them.
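
    The regression step for the color transformation matrix can be sketched as an ordinary least-squares fit from gamma-corrected camera values to measured tristimulus values (illustrative names; the paper does not specify its exact solver):

    ```python
    import numpy as np

    def fit_color_matrix(rgb_linear, xyz_measured):
        """Least-squares 3x3 colour transformation M with xyz ~ M @ rgb.

        rgb_linear:   (N, 3) gamma-corrected camera values for N patches
        xyz_measured: (N, 3) measured tristimulus values of the same patches
        """
        # Solve min ||rgb_linear @ X - xyz_measured|| for X, then M = X.T
        M, *_ = np.linalg.lstsq(rgb_linear, xyz_measured, rcond=None)
        return M.T
    ```

    A 3-by-4 variant, as mentioned in the abstract, would append a column of ones to `rgb_linear` so the fit also absorbs an offset term.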

  6. Corrected score estimation in the proportional hazards model with misclassified discrete covariates

    PubMed Central

    Zucker, David M.; Spiegelman, Donna

    2013-01-01

    SUMMARY We consider Cox proportional hazards regression when the covariate vector includes error-prone discrete covariates along with error-free covariates, which may be discrete or continuous. The misclassification in the discrete error-prone covariates is allowed to be of any specified form. Building on the work of Nakamura and his colleagues, we present a corrected score method for this setting. The method can handle all three major study designs (internal validation design, external validation design, and replicate measures design), both functional and structural error models, and time-dependent covariates satisfying a certain ‘localized error’ condition. We derive the asymptotic properties of the method and indicate how to adjust the covariance matrix of the regression coefficient estimates to account for estimation of the misclassification matrix. We present the results of a finite-sample simulation study under Weibull survival with a single binary covariate having known misclassification rates. The performance of the method described here was similar to that of related methods we have examined in previous works. Specifically, our new estimator performed as well as or, in a few cases, better than the full Weibull maximum likelihood estimator. We also present simulation results for our method for the case where the misclassification probabilities are estimated from an external replicate measures study. Our method generally performed well in these simulations. The new estimator has a broader range of applicability than many other estimators proposed in the literature, including those described in our own earlier work, in that it can handle time-dependent covariates with an arbitrary misclassification structure. We illustrate the method on data from a study of the relationship between dietary calcium intake and distal colon cancer. PMID:18219700

  7. Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Giesy, Daniel P.

    1998-01-01

    An efficient and robust computational scheme is given for the calculation of the frequency response function of a large-order flexible system implemented with a linear, time-invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant, based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new scheme is a linear function of system size, a significant improvement over traditional full-matrix techniques, whose computational times per frequency point range from quadratic to cubic functions of system size. This permits practical frequency-domain analysis of systems of much larger order than is possible with traditional full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies but generally deteriorated at higher frequencies, with worst-case errors many orders of magnitude larger than the correct values.
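
    The speedup comes from the modal structure: in normal mode coordinates the resolvent decouples mode by mode, so the frequency response can be accumulated as a sum of rank-one modal contributions in O(n) per frequency point instead of a dense O(n³) solve of (jωI − A). A hedged sketch of that idea (the lightly damped second-order modal form and all names are assumptions, not the paper's exact open/closed-loop formulation):

    ```python
    import numpy as np

    def modal_frf(omega, omegas_n, zetas, b_modal, c_modal):
        """Frequency response of a flexible structure in modal form.

        H(jw) = sum_i c_i b_i^T / (w_i^2 - w^2 + 2j zeta_i w_i w),
        with natural frequencies omegas_n (n,), damping ratios zetas (n,),
        modal input matrix b_modal (n, n_in) and output matrix c_modal
        (n_out, n). Cost is O(n) per frequency point.
        """
        denom = omegas_n**2 - omega**2 + 2j * zetas * omegas_n * omega
        # weight each mode's rank-one contribution by 1/denom and sum
        return (c_modal / denom) @ b_modal
    ```

    At ω = 0 this reduces to the static gain Σᵢ cᵢbᵢᵀ/ωᵢ²; at a resonance the corresponding mode's term dominates through its small damped denominator.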

  8. Quantitative methods for compensation of matrix effects and self-absorption in Laser Induced Breakdown Spectroscopy signals of solids

    NASA Astrophysics Data System (ADS)

    Takahashi, Tomoko; Thornton, Blair

    2017-12-01

    This paper reviews methods to compensate for matrix effects and self-absorption during quantitative analysis of compositions of solids measured using Laser Induced Breakdown Spectroscopy (LIBS) and their applications to in-situ analysis. Methods to reduce matrix and self-absorption effects on calibration curves are first introduced. The conditions where calibration curves are applicable to quantification of compositions of solid samples and their limitations are discussed. While calibration-free LIBS (CF-LIBS), which corrects matrix effects theoretically based on the Boltzmann distribution law and Saha equation, has been applied in a number of studies, requirements need to be satisfied for the calculation of chemical compositions to be valid. Also, peaks of all elements contained in the target need to be detected, which is a bottleneck for in-situ analysis of unknown materials. Multivariate analysis techniques are gaining momentum in LIBS analysis. Among the available techniques, principal component regression (PCR) analysis and partial least squares (PLS) regression analysis, which can extract related information to compositions from all spectral data, are widely established methods and have been applied to various fields including in-situ applications in air and for planetary explorations. Artificial neural networks (ANNs), where non-linear effects can be modelled, have also been investigated as a quantitative method and their applications are introduced. The ability to make quantitative estimates based on LIBS signals is seen as a key element for the technique to gain wider acceptance as an analytical method, especially in in-situ applications. In order to accelerate this process, it is recommended that the accuracy should be described using common figures of merit which express the overall normalised accuracy, such as the normalised root mean square errors (NRMSEs), when comparing the accuracy obtained from different setups and analytical methods.
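
    A range-normalised RMSE of the kind recommended above can be computed as follows (one common convention; other normalisations, e.g. by the mean of the reference values, are also in use):

    ```python
    import numpy as np

    def nrmse(predicted, reference):
        """Normalised root mean square error: RMSE divided by the range
        of the reference values, a scale-free figure of merit for
        comparing quantification accuracy across setups."""
        predicted = np.asarray(predicted, dtype=float)
        reference = np.asarray(reference, dtype=float)
        rmse = np.sqrt(np.mean((predicted - reference) ** 2))
        return rmse / (reference.max() - reference.min())
    ```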

  9. Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry: a new possibility for the identification and typing of anaerobic bacteria.

    PubMed

    Nagy, Elizabeth

    2014-01-01

    Anaerobic bacteria predominate in the normal flora of humans and are important, often life-threatening pathogens in mixed infections originating from the indigenous microbiota. The isolation and identification of anaerobes by phenotypic and DNA-based molecular methods at a species level is time-consuming and laborious. Following the successful adaptation of the matrix-assisted laser desorption/ionization time-of-flight mass spectrometry for the routine laboratory identification of bacteria, the extensive development of a database has been initiated to use this method for the identification of anaerobic bacteria. Not only frequently isolated anaerobic species, but also newly recognized and taxonomically rearranged genera and species can be identified using direct smear samples or whole-cell protein extraction, and even phylogenetically closely related species can be identified correctly by means of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. Typing of anaerobic bacteria on a subspecies level, determination of antibiotic resistance and direct identification of blood culture isolates will revolutionize anaerobe bacteriology in the near future.

  10. Solving matrix effects exploiting the second-order advantage in the resolution and determination of eight tetracycline antibiotics in effluent wastewater by modelling liquid chromatography data with multivariate curve resolution-alternating least squares and unfolded-partial least squares followed by residual bilinearization algorithms II. Prediction and figures of merit.

    PubMed

    García, M D Gil; Culzoni, M J; De Zan, M M; Valverde, R Santiago; Galera, M Martínez; Goicoechea, H C

    2008-02-01

    A new powerful algorithm (unfolded partial least squares followed by residual bilinearization (U-PLS/RBL)) was applied for the first time to second-order liquid chromatography with diode array detection (LC-DAD) data and compared with a well-established method (multivariate curve resolution-alternating least squares (MCR-ALS)) for the simultaneous determination of eight tetracyclines (tetracycline, oxytetracycline, meclocycline, minocycline, metacycline, chlortetracycline, demeclocycline and doxycycline) in wastewaters. Tetracyclines were pre-concentrated using Oasis Max C18 cartridges and then separated on a Thermo Aquasil C18 (150 mm x 4.6 mm, 5 μm) column. The whole method was validated using Milli-Q water samples, and both univariate and multivariate analytical figures of merit were obtained. Additionally, two data pre-treatments were applied (baseline correction and piecewise direct standardization), which corrected the effect of breakthrough and reduced the total interferences retained after pre-concentration of wastewaters. The results showed that the eight tetracycline antibiotics can be successfully determined in wastewaters, the drawbacks due to matrix interferences being adequately handled and overcome by using U-PLS/RBL.

  11. Refraction traveltime tomography based on damped wave equation for irregular topographic model

    NASA Astrophysics Data System (ADS)

    Park, Yunhui; Pyun, Sukjoon

    2018-03-01

    Land seismic data generally have time-static issues due to irregular topography and weathered layers at shallow depths. Unless the time statics are handled appropriately, interpretation of the subsurface structures can easily be distorted. Therefore, static corrections are commonly applied to land seismic data. The near-surface velocity, which is required for static corrections, can be inferred from first-arrival traveltime tomography, which must consider the irregular topography, as land seismic data are generally acquired over irregular topography. This paper proposes a refraction traveltime tomography technique that is applicable to an irregular topographic model. The technique uses unstructured meshes to express irregular topography, and traveltimes are calculated from frequency-domain damped wavefields using the finite element method. The diagonal elements of the approximate Hessian matrix were adopted for preconditioning, and the principle of reciprocity was introduced to efficiently calculate the Fréchet derivative. We also included regularization to resolve the ill-posed inverse problem, and used the nonlinear conjugate gradient method to solve the inverse problem. Because damped wavefields were used, there were no issues associated with artificial reflections caused by unstructured meshes. In addition, the shadow zone problem could be circumvented because the method is based on the exact wave equation, which does not require a high-frequency assumption. Furthermore, the proposed method was both robust to the initial velocity model and efficient compared to full wavefield inversions. Through synthetic and field data examples, our method was shown to successfully reconstruct shallow velocity structures. To verify our method, static corrections were roughly applied to the field data using the estimated near-surface velocity. By comparing common shot gathers and stack sections with and without static corrections, we confirmed that the proposed tomography algorithm can be used to correct the statics of land seismic data.

  12. An Uncertainty Structure Matrix for Models and Simulations

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Blattnig, Steve R.; Hemsch, Michael J.; Luckring, James M.; Tripathi, Ram K.

    2008-01-01

    Software that is used for aerospace flight control and to display information to pilots and crew is expected to be correct and credible at all times. This type of software is typically developed under strict management processes, which are intended to reduce defects in the software product. However, modeling and simulation (M&S) software may exhibit varying degrees of correctness and credibility, depending on a large and complex set of factors. These factors include its intended use, the known physics and numerical approximations within the M&S, and the referent data set against which the M&S correctness is compared. The correctness and credibility of an M&S effort is closely correlated to the uncertainty management (UM) practices that are applied to the M&S effort. This paper describes an uncertainty structure matrix for M&S, which provides a set of objective descriptions for the possible states of UM practices within a given M&S effort. The columns in the uncertainty structure matrix contain UM elements or practices that are common across most M&S efforts, and the rows describe the potential levels of achievement in each of the elements. A practitioner can quickly look at the matrix to determine where an M&S effort falls based on a common set of UM practices that are described in absolute terms that can be applied to virtually any M&S effort. The matrix can also be used to plan those steps and resources that would be needed to improve the UM practices for a given M&S effort.

  13. Absorption and scattering of light by nonspherical particles. [in atmosphere

    NASA Technical Reports Server (NTRS)

    Bohren, C. F.

    1986-01-01

    Using the example of the polarization of scattered light, it is shown that the scattering matrices for identical, randomly oriented nonspherical particles and for spherical particles are unequal. The spherical assumptions of Mie theory are therefore inconsistent with the random shapes and sizes of atmospheric particulates. The implications for corrections made to extinction measurements of forward-scattered light are discussed. Several analytical methods are examined as potential bases for developing more accurate models, including Rayleigh theory, Fraunhofer diffraction theory, anomalous diffraction theory, Rayleigh-Gans theory, the separation-of-variables technique, the Purcell-Pennypacker method, the T-matrix method, and finite-difference calculations.

  14. Markov model of the loan portfolio dynamics considering influence of management and external economic factors

    NASA Astrophysics Data System (ADS)

    Bozhalkina, Yana; Timofeeva, Galina

    2016-12-01

    A mathematical model of a loan portfolio in the form of a controlled Markov chain with discrete time is considered. It is assumed that the coefficients of the migration matrix depend on corrective actions and external factors. Corrective actions include the processing of incoming applications and interaction with existing solvent and insolvent clients. External factors are macroeconomic indicators such as inflation and unemployment rates, exchange rates, consumer price indices, etc. Changes in corrective actions adjust the transition intensities in the migration matrix. A mathematical model for forecasting the credit portfolio structure that takes into account the cumulative impact of internal and external changes is obtained.
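
    With the migration matrix held fixed, the forecast reduces to iterating x_{t+1} = Pᵀ x_t on the vector of state shares. A minimal sketch follows; in the cited model P would be re-estimated each period from the corrective actions and macroeconomic factors, which is not reproduced here:

    ```python
    import numpy as np

    def forecast_portfolio(shares, migration, steps=1):
        """Forecast the loan-portfolio structure with a discrete-time
        Markov chain: x_{t+1} = P^T x_t, where row i of the (row-stochastic)
        migration matrix P holds the transition probabilities out of
        credit state i."""
        x = np.asarray(shares, dtype=float)
        for _ in range(steps):
            x = migration.T @ x
        return x
    ```

    Because each row of P sums to one, total portfolio mass is conserved at every step; only its distribution across credit states evolves.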

  15. Testing the Perey effect

    DOE PAGES

    Titus, L. J.; Nunes, Filomena M.

    2014-03-12

    Here, the effects of non-local potentials have historically been included approximately by applying a correction factor to the solution of the corresponding equation for the local equivalent interaction. This is usually referred to as the Perey correction factor. In this work we investigate the validity of the Perey correction factor for single-channel bound and scattering states, as well as in transfer (p,d) cross sections. Method: We solve the scattering and bound-state equations for non-local interactions of the Perey-Buck type through an iterative method. Using the distorted wave Born approximation, we construct the T-matrix for (p,d) on 17O, 41Ca, 49Ca, 127Sn, 133Sn, and 209Pb at 20 and 50 MeV. As a result, we found that for bound states, the Perey-corrected wave function resulting from the local equation agreed well with that from the non-local equation in the interior region, but discrepancies were found in the surface and peripheral regions. Overall, the Perey correction factor was adequate for scattering states, with the exception of a few partial waves corresponding to the grazing impact parameters. These differences proved to be important for transfer reactions. In conclusion, the Perey correction factor does offer an improvement over taking a direct local equivalent solution. However, if the desired accuracy is to be better than 10%, the exact solution of the non-local equation should be pursued.
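
    For reference, the Perey correction factor for a Perey-Buck potential with nonlocality range β has the standard closed form F(r) = [1 − μβ²U_LE(r)/(2ħ²)]^(−1/2), applied multiplicatively to the local-equivalent wave function. The sketch below assumes this textbook form and illustrative units (MeV, fm); it is not the iterative non-local solver of the paper:

    ```python
    def perey_factor(u_local_mev, mu_mev, beta_fm, hbarc=197.327):
        """Perey correction factor F(r) = [1 - mu beta^2 U(r)/(2 hbar^2)]^{-1/2}.

        u_local_mev: local-equivalent potential U_LE(r) in MeV
        mu_mev:      reduced mass in MeV/c^2
        beta_fm:     Perey-Buck nonlocality range in fm
        hbarc:       hbar*c in MeV fm, so hbar^2 enters as (hbarc)^2

        The non-local wave function is approximated as F(r) * psi_local(r).
        """
        return (1.0 - mu_mev * beta_fm**2 * u_local_mev / (2.0 * hbarc**2)) ** -0.5
    ```

    For an attractive (negative) potential the factor is below one, reproducing the interior damping of the wave function that the Perey effect describes; it tends to one where the potential vanishes.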

  16. Adaptive optics vision simulation and perceptual learning system based on a 35-element bimorph deformable mirror.

    PubMed

    Dai, Yun; Zhao, Lina; Xiao, Fei; Zhao, Haoxin; Bao, Hua; Zhou, Hong; Zhou, Yifeng; Zhang, Yudong

    2015-02-10

    An adaptive optics visual simulation system combined with perceptual learning (PL), based on a 35-element bimorph deformable mirror (DM), was established. The larger stroke and smaller size of the bimorph DM give the system greater aberration-correction and aberration-superposition ability in a more compact package. By simply modifying the control matrix or the reference matrix, selective correction or superposition of aberrations is realized in real time, similar to a conventional adaptive optics closed-loop correction. A PL function was integrated with conventional adaptive optics visual simulation for the first time. PL training undertaken with high-order aberration correction markedly improved the visual function of adults with anisometropic amblyopia. The preliminary application of high-order aberration correction with PL training to amblyopia treatment is being validated in a large-scale population, and might have great potential for amblyopia treatment and the maintenance of visual performance.

  17. First Year Wilkinson Microwave Anisotropy Probe(WMAP) Observations: Data Processing Methods and Systematic Errors Limits

    NASA Technical Reports Server (NTRS)

    Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.

    2003-01-01

    We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approx. 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.

  18. On corrected formula for irradiated graphene quantum conductivity

    NASA Astrophysics Data System (ADS)

    Firsova, N. E.

    2017-09-01

    A graphene membrane irradiated by a weak periodic electric field in the terahertz range is considered, and a corrected formula for the graphene quantum conductivity is found. The formula gives complex-conjugate results when the radiation polarization direction is clockwise or counterclockwise. It shows that the graphene membrane behaves as an oscillating circuit whose eigenfrequency coincides with a singularity of the conductivity and depends on the electron concentration. The graphene membrane could therefore be used as an antenna or a transistor, and its eigenfrequency could be tuned by doping over a large terahertz-infrared frequency range. The formula also allows calculation of the membrane's quantum inductance and capacitance, and the derived dependence on electron concentration is consistent with experiment. The proof is based on a study of the time-dependent density matrix: the exact solution of the von Neumann equation for the density matrix is found for our case in linear approximation in the external field. On this basis the induced current is studied, and the quantum conductivity is obtained as a function of external-field frequency and temperature. The method of proof suggested in this paper could be used to study other problems. The corrected conductivity formula can be used to refine the SPP dispersion relation and to describe radiation processes, and it would be useful to take these results into account when constructing devices containing a graphene-membrane nanoantenna. Such devices could enable wireless communication among nanosystems, a promising research area for energy-harvesting applications.

  19. Color Correction Parameter Estimation on the Smartphone and Its Application to Automatic Tongue Diagnosis.

    PubMed

    Hu, Min-Chun; Cheng, Ming-Hsun; Lan, Kun-Chan

    2016-01-01

    An automatic tongue diagnosis framework is proposed to analyze tongue images taken by smartphones. Unlike the inputs to conventional tongue diagnosis systems, our input tongue images are usually of low resolution and taken under unknown lighting conditions, so existing tongue diagnosis methods cannot be applied directly with accurate results. We use an SVM (support vector machine) to predict the lighting condition and the corresponding color correction matrix according to the color difference between images taken with and without flash. We also modify the state-of-the-art method for fur and fissure detection in tongue images by taking hue information into consideration and adding a denoising step. Our method is able to correct the color of tongue images under different lighting conditions (e.g., fluorescent, incandescent, and halogen illuminants) and provides better accuracy in tongue feature detection, with less processing complexity, than the prior work. Unlike prior work that only operates in a controlled environment, our system adapts to different lighting conditions by employing a novel color correction parameter estimation scheme.
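A minimal sketch of the correction step once the lighting condition is known: a per-illuminant 3x3 matrix is applied to each RGB pixel. The matrices and pixel values below are invented for illustration; the paper's SVM-predicted matrices are not reproduced here.

```python
import numpy as np

# Hypothetical per-illuminant 3x3 color correction matrices; in the paper an
# SVM predicts the lighting condition, here we simply look the matrix up.
CORRECTION_MATRICES = {
    "fluorescent": np.array([[1.10, -0.05, -0.05],
                             [-0.03, 1.05, -0.02],
                             [-0.02, -0.04, 1.06]]),
    "incandescent": np.array([[0.95, 0.03, 0.02],
                              [0.02, 1.00, -0.02],
                              [0.04, -0.01, 0.97]]),
}

def correct_colors(rgb_pixels, lighting):
    """Apply the illuminant-specific 3x3 matrix to an (N, 3) RGB pixel array."""
    m = CORRECTION_MATRICES[lighting]
    return np.clip(rgb_pixels @ m.T, 0.0, 1.0)   # keep values in [0, 1]

pixels = np.array([[0.5, 0.4, 0.3],
                   [0.9, 0.9, 0.9]])
corrected = correct_colors(pixels, "fluorescent")
```

Rows of each matrix are close to the identity, so a neutral gray pixel is left nearly unchanged while color casts are shifted.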

  20. The Influence of Non-spectral Matrix Effects on the Accuracy of Isotope Ratio Measurement by MC-ICP-MS

    NASA Astrophysics Data System (ADS)

    Barling, J.; Shiel, A.; Weis, D.

    2006-12-01

    Non-spectral interferences in ICP-MS are caused by matrix elements affecting the ionisation and transmission of analyte elements. They are difficult to identify in MC-ICP-MS isotopic data because affected analyses still exhibit normal mass-dependent isotope fractionation. We have therefore investigated a wide range of matrix elements for both stable and radiogenic isotope systems using a Nu Plasma MC-ICP-MS. Matrix elements commonly enhance analyte sensitivity and change the instrumental mass bias experienced by analyte elements. These responses vary from element to element and therefore have important ramifications for the correction of data for instrumental mass bias by use of an external element (e.g. Pb and many non-traditional stable isotope systems). For Pb isotope measurements (with Tl as the mass-bias element), Mg, Al, Ca, and Fe were investigated as matrix elements. All produced signal enhancement in Pb and Tl. Signal enhancement varied from session to session, but for Ca and Al the enhancement in Pb was less than that in Tl, while for Mg and Fe the enhancement levels for Pb and Tl were similar. After correction for instrumental mass fractionation using Tl, Mg-affected Pb isotope ratios were heavy (e.g. 208Pb/204Pb(matrix) > 208Pb/204Pb(true)) at both moderate and high [Mg], while Ca-affected Pb showed little change at moderate [Ca] but was light at high [Ca]. The difference 208Pb/204Pb(matrix) - 208Pb/204Pb(true) for all elements ranged from +0.0122 to -0.0177. Isotopic shifts of similar magnitude are observed between Pb analyses of samples that have seen either one or two passes through chemistry (Nobre Silva et al., 2005); the double-pass purified aliquots always show better reproducibility. These studies show that the presence of matrix can have a significant effect on the accuracy and reproducibility of replicate Pb isotope analyses. For non-traditional stable isotope systems (e.g. Mo(Zr), Cd(Ag)), the different responses of analyte and mass-bias elements to the presence of matrix can result in δ/amu values for measured and mass-bias-corrected data that disagree outside of error. Either or both values can be incorrect. For samples, unlike experiments, the correct δ/amu is not known in advance. Therefore, for sample analyses to be considered accurate, both the measured and the exponentially corrected δ/amu should agree.
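The external-element correction discussed above is conventionally applied with the exponential law: a bias factor beta derived from a reference pair (e.g. Tl for Pb) is applied to the analyte ratio. The sketch below uses nominal isotope masses and an invented bias factor and ratios; it illustrates exactly the correction that becomes inaccurate when matrix elements decouple the mass bias of Pb and Tl.

```python
import math

# Nominal isotope masses (u); the Tl ratio is treated as a known reference value.
M = {"203Tl": 202.9723, "205Tl": 204.9744, "206Pb": 205.9745, "204Pb": 203.9730}
R_TL_TRUE = 2.3871        # assumed certified 205Tl/203Tl of the Tl spike

def beta_from_reference(r_meas, r_true, m_num, m_den):
    """Mass-bias factor from a pair with known true ratio:
    r_meas = r_true * (m_num / m_den) ** beta  (exponential law)."""
    return math.log(r_meas / r_true) / math.log(m_num / m_den)

def mass_bias_correct(r_meas, m_num, m_den, beta):
    """Correct another ratio measured in the same session with that beta."""
    return r_meas / (m_num / m_den) ** beta

# Synthetic session: one hypothetical bias factor fractionates both elements.
beta_true = 1.8
r_tl_meas = R_TL_TRUE * (M["205Tl"] / M["203Tl"]) ** beta_true
beta = beta_from_reference(r_tl_meas, R_TL_TRUE, M["205Tl"], M["203Tl"])
r_pb_meas = 17.5 * (M["206Pb"] / M["204Pb"]) ** beta_true
r_pb_corr = mass_bias_correct(r_pb_meas, M["206Pb"], M["204Pb"], beta)
```

The correction is exact only when Pb and Tl experience the same beta; the abstract's point is that matrix elements such as Mg or Ca can break this assumption.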

  1. A Coulomb-Like Off-Shell T-Matrix with the Correct Coulomb Phase Shift

    NASA Astrophysics Data System (ADS)

    Oryu, Shinsho; Watanabe, Takashi; Hiratsuka, Yasuhisa; Togawa, Yoshio

    2017-03-01

    We examine the reliability of the well-known Coulomb renormalization method (CRM) and find that the CRM is valid only for a very-long-range screened Coulomb potential (SCP). However, such an SCP calculation in momentum space is considerably difficult because of the cancellation of significant digits. In contrast to the CRM, we propose a new method using an on-shell-equivalent SCP and a rest term. A two-potential theory in r-space is introduced, which fully defines the off-shell Coulomb amplitude.

  2. Spectral statistics of the uni-modular ensemble

    NASA Astrophysics Data System (ADS)

    Joyner, Christopher H.; Smilansky, Uzy; Weidenmüller, Hans A.

    2017-09-01

    We investigate the spectral statistics of Hermitian matrices in which the elements are chosen uniformly from U(1), called the uni-modular ensemble (UME), in the limit of large matrix size. Using three complementary methods (a supersymmetric integration method, a combinatorial graph-theoretical analysis, and a Brownian-motion approach), we derive expressions for the 1/N corrections to the mean spectral moments and also analyse the fluctuations about this mean. By addressing the same ensemble from three different points of view, we can critically compare the relative advantages of the methods and derive some new results.
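The mean spectral moments studied analytically above can be probed numerically by direct sampling. The sketch below draws UME-like matrices (off-diagonal entries uniform on U(1); the diagonal, which must be real, is taken as +/-1, an assumption about the ensemble's convention) and estimates the normalized second and fourth moments.

```python
import numpy as np

rng = np.random.default_rng(1)

def ume_sample(n):
    """Hermitian matrix with independent off-diagonal entries uniform on U(1).
    Diagonal entries must be real, so we take them from {+1, -1}, the real
    unimodular numbers (an assumption about the ensemble's convention)."""
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(n, n))
    upper = np.triu(np.exp(1j * theta), k=1)
    h = upper + upper.conj().T
    h[np.diag_indices(n)] = rng.choice([-1.0, 1.0], size=n)
    return h

def mean_moment(k, n, trials=50):
    """Monte-Carlo estimate of the normalized moment E[tr((H/sqrt(n))^k)]/n."""
    acc = 0.0
    for _ in range(trials):
        h = ume_sample(n) / np.sqrt(n)
        acc += np.trace(np.linalg.matrix_power(h, k)).real / n
    return acc / trials

m2 = mean_moment(2, 40)   # exactly 1: every entry has unit modulus
m4 = mean_moment(4, 40)   # near 2 (the Catalan number) plus 1/N corrections
```

Because every entry has unit modulus, tr(H^2) = n^2 in every sample, so the second moment is 1 without fluctuation; the fourth moment approaches the semicircle value 2 with the 1/N corrections the paper computes.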

  3. Rapid identification of pathogens directly from blood culture bottles by Bruker matrix-assisted laser desorption ionization-time of flight mass spectrometry versus routine methods.

    PubMed

    Jamal, Wafaa; Saleem, Rola; Rotimi, Vincent O

    2013-08-01

    The use of matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) for identification of microorganisms directly from blood culture offers an exciting new dimension to microbiologists. We evaluated the performance of the Bruker SepsiTyper kit™ (STK) for direct identification of bacteria from positive blood cultures, in parallel with conventional methods. Nonrepetitive positive blood cultures from 160 consecutive patients were prospectively evaluated by both methods. Of the 160 positive blood cultures, the STK identified 114 (75.6%) isolates and the routine conventional method 150 (93%). Thirty-six isolates were misidentified or not identified by the kit; of these, 5 had a score of >2.000 and 31 had an unreliable low score of <1.7. Four of 8 yeasts were identified correctly. The average turnaround time was 35 min with the STK, including extraction steps, and 30:12 to 36:12 h with the routine method. The STK holds promise for timely management of bacteremic patients. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Determination of trace rare earth elements in gadolinium aluminate by inductively coupled plasma time of flight mass spectrometry

    NASA Astrophysics Data System (ADS)

    Saha, Abhijit; Deb, S. B.; Nagar, B. K.; Saxena, M. K.

    An analytical methodology was developed for the precise quantification of ten trace rare earth elements (REEs), namely La, Ce, Pr, Nd, Sm, Eu, Tb, Dy, Ho, and Tm, in gadolinium aluminate (GdAlO3) employing ultrasonic nebulizer (USN)-desolvation-based inductively coupled plasma mass spectrometry (ICP-MS). A microwave digestion procedure was optimized for digesting 100 mg of the refractory oxide using a mixture of sulphuric acid (H2SO4), phosphoric acid (H3PO4) and water (H2O) with 1400 W power, a 10 min ramp and a 60 min hold time. The USN-desolvating sample introduction system was employed to enhance analyte sensitivities by minimizing oxide ion formation in the plasma. Studies on the effect of various matrix concentrations on the analyte intensities revealed that precise quantification of the analytes was possible at a matrix level of 250 mg L-1. The possibility of using indium as an internal standard was explored and applied to correct for matrix effects and variation in analyte sensitivity under the plasma operating conditions. Individual oxide ion formation yields were determined in matrix-matched solution and employed to correct the polyatomic interferences of light REE (LREE) oxide ions on the intensities of the middle and heavy rare earth elements (MREEs and HREEs). Recoveries of ≥90% were achieved for the analytes employing the standard addition technique. Three real samples were analyzed for traces of REEs by the proposed method and cross-validated for Eu and Nd by isotope dilution mass spectrometry (IDMS); the results show no significant difference in the values at the 95% confidence level. The expanded uncertainty (coverage factor 1σ) in the determination of trace REEs in the samples was found to be between 3 and 8%. The instrument detection limits (IDLs) and the method detection limits (MDLs) for the ten REEs lie in the ranges 1-5 ng L-1 and 7-64 μg kg-1, respectively.

  5. The vector radiative transfer numerical model of coupled ocean-atmosphere system using the matrix-operator method

    NASA Astrophysics Data System (ADS)

    Xianqiang, He; Delu, Pan; Yan, Bai; Qiankun, Zhu

    2005-10-01

    A numerical model of the vector radiative transfer of the coupled ocean-atmosphere system, named PCOART, is developed based on the matrix-operator method. In PCOART, using Fourier analysis, the vector radiative transfer equation (VRTE) splits into a set of independent equations with zenith angle as the only angular coordinate. Using Gaussian quadrature, the VRTE is then reduced to a matrix equation, which is solved with the adding-doubling method. According to the reflective and refractive properties of the ocean-atmosphere interface, the ocean and atmosphere radiative transfer models are coupled in PCOART. Comparison with the exact Rayleigh-scattering look-up table of MODIS (Moderate-Resolution Imaging Spectroradiometer) shows that PCOART is numerically exact and that its treatment of multiple scattering and polarization is correct. Validation against standard problems of radiative transfer in water shows that PCOART can also be used for underwater radiative transfer calculations. PCOART is therefore a useful tool for exact calculation of the vector radiative transfer of the coupled ocean-atmosphere system, applicable to studying the polarization properties of radiance throughout the ocean-atmosphere system and to remote sensing of the atmosphere and ocean.
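The core of the adding-doubling method can be illustrated in scalar (single-stream) form: two layers with known reflectance and transmittance combine through the geometric series of inter-layer bounces. PCOART applies the same algebra with matrices over quadrature angles and Stokes components; the numbers below are a toy illustration, not the PCOART code.

```python
# One "adding" step for two plane-parallel layers illuminated from above,
# assuming symmetric layers (same transmittance upward and downward).

def add_layers(r1, t1, r2, t2):
    bounce = 1.0 / (1.0 - r1 * r2)      # geometric series 1 + r1*r2 + (r1*r2)**2 + ...
    r12 = r1 + t1 * r2 * bounce * t1    # down through 1, off 2, rattle, back out
    t12 = t2 * bounce * t1              # down through 1, rattle, through 2
    return r12, t12

# Doubling: a thick layer is built by repeatedly adding a layer to itself.
r, t = 0.2, 0.7                          # hypothetical thin-layer properties
for _ in range(3):                       # 2, 4, then 8 identical thin layers
    r, t = add_layers(r, t, r, t)
```

In the full vector model the scalars become matrices and the division becomes a matrix inverse, r12 = r1 + t1 r2 (I - r1 r2)^(-1) t1, but the bookkeeping is identical.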

  6. Updated User's Guide for Sammy: Multilevel R-Matrix Fits to Neutron Data Using Bayes' Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Nancy M

    2008-10-01

    In 1980 the multilevel multichannel R-matrix code SAMMY was released for use in analysis of neutron-induced cross section data at the Oak Ridge Electron Linear Accelerator. Since that time, SAMMY has evolved to the point where it is now in use around the world for analysis of many different types of data. SAMMY is not limited to incident neutrons but can also be used for incident protons, alpha particles, or other charged particles; likewise, Coulomb exit channels can be included. Corrections for a wide variety of experimental conditions are available in the code: Doppler and resolution broadening, multiple-scattering corrections for capture or reaction yields, and normalizations and backgrounds, to name but a few. The fitting procedure is Bayes' method, and data and parameter covariance matrices are properly treated within the code. Pre- and post-processing capabilities are also available, including (but not limited to) connections with the Evaluated Nuclear Data Files. Though originally designed for use in the resolved resonance region, SAMMY also includes a treatment for data analysis in the unresolved resonance region.

  7. Calcium Isotope Analysis with "Peak Cut" Method on Column Chemistry

    NASA Astrophysics Data System (ADS)

    Zhu, H.; Zhang, Z.; Liu, F.; Li, X.

    2017-12-01

    To eliminate isobaric interferences from elemental and molecular isobars (e.g., 40K+, 48Ti+, 88Sr2+, 24Mg16O+, 27Al16O+) on Ca isotopes during mass determination, samples should be purified through ion-exchange column chemistry before analysis. However, large Ca isotopic fractionation has been observed during column chemistry (Russell and Papanastassiou, 1978; Zhu et al., 2016). Full recovery during column chemistry is therefore greatly needed, as poor recovery would otherwise introduce uncertainties (Zhu et al., 2016). On the other hand, matrix effects can be enhanced by full recovery, because other elements may overlap with the Ca cut during column chemistry. Matrix effects and full recovery are difficult to balance, and both need to be considered for high-precision analysis of stable Ca isotopes. Here, we investigate the influence of poor recovery on δ44/40Ca using TIMS with the double spike technique. The δ44/40Ca values of IAPSO seawater, ML3B-G and BHVO-2 in different Ca subcuts (e.g., 0-20, 20-40, 40-60, 60-80, 80-100%) with 20% Ca recovery on column chemistry display limited variation after correction by the 42Ca-43Ca double spike technique with the exponential law. Notably, the δ44/40Ca of each Ca subcut is consistent, within error, with the δ44/40Ca of a Ca cut with full recovery. Our results indicate that the 42Ca-43Ca double spike technique can simultaneously correct the Ca isotopic fractionation occurring during column chemistry and that occurring during thermal ionization mass spectrometry (TIMS) determination, because both fractionations follow the exponential law well. We therefore propose the "peak cut" method for Ca column chemistry on samples with complex matrix effects: for samples with low Ca contents, the double spike can be added before column chemistry, and only the middle of the Ca eluate is collected, abandoning both tails of the eluate that might overlap with other elements (e.g., K, Sr). This method would eliminate matrix effects and improve the efficiency of the column chemistry.
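The key observation, that one double-spike inversion can correct both the column and the instrumental fractionation, rests on the fact that two successive exponential-law fractionations compose exactly into a single exponential fractionation with summed exponents. A toy numeric demonstration (nominal masses, invented ratios and betas, no full double-spike inversion):

```python
# Two successive exponential-law fractionations (column, then instrument)
# compose into one exponential fractionation with summed exponents.

M44, M40 = 43.9555, 39.9626      # nominal atomic masses of 44Ca and 40Ca (u)

def fractionate(ratio, beta, m_num=M44, m_den=M40):
    """Exponential mass-fractionation law: r' = r * (m_num/m_den)**beta."""
    return ratio * (m_num / m_den) ** beta

r_true = 0.021                                   # hypothetical true 44Ca/40Ca-like ratio
r_after_column = fractionate(r_true, 0.7)        # fractionation on the column
r_measured = fractionate(r_after_column, -1.9)   # fractionation during TIMS
r_single_step = fractionate(r_true, 0.7 - 1.9)   # one combined fractionation
```

Since (m44/m40)^0.7 * (m44/m40)^(-1.9) = (m44/m40)^(-1.2), the measured ratio is indistinguishable from a single exponential fractionation, which a double-spike inversion assuming the exponential law corrects in one step.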

  8. An empirical model for polarized and cross-polarized scattering from a vegetation layer

    NASA Technical Reports Server (NTRS)

    Liu, H. L.; Fung, A. K.

    1988-01-01

    An empirical model for scattering from a vegetation layer above an irregular ground surface is developed in terms of the first-order solution for like-polarized scattering and the second-order solution for cross-polarized scattering. The effects of multiple scattering within the layer and at the surface-volume boundary are compensated for by a correction factor based on the matrix doubling method. The major feature of this model is that all of its parameters are physical parameters of the vegetation medium; there are no regression parameters. Comparisons of the empirical model with the theoretical matrix-doubling method and with radar measurements indicate good agreement in polarization and angular trends for ka up to 4, where k is the wavenumber and a is the disk radius. The computational time is shortened by a factor of 8 relative to the theoretical model calculation.

  9. A spectrally tunable LED sphere source enables accurate calibration of tristimulus colorimeters

    NASA Astrophysics Data System (ADS)

    Fryc, I.; Brown, S. W.; Ohno, Y.

    2006-02-01

    The Four-Color Matrix method (FCM) was developed to improve the accuracy of chromaticity measurements of various display colors. The method is valid within each display type having similar spectra. To develop the Four-Color correction matrix, spectral measurements of the primary red, green, blue, and white colors of a display are needed; consequently, a calibration facility would have to be equipped with a number of different displays, which is inconvenient and expensive. A spectrally tunable light source (STS) that can mimic different display spectral distributions would eliminate the need to maintain a wide variety of displays and would enable a colorimeter to be calibrated for a number of different displays using the same setup. Simulations show that an STS that can create red, green, blue and white distributions close to the real spectral power distribution (SPD) of a display works well with the FCM for the calibration of colorimeters.
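The correction-matrix derivation can be sketched as a small least-squares problem: find the 3x3 matrix that best maps the colorimeter's raw tristimulus readings of the red, green, blue and white primaries onto reference-instrument values. All numbers below are invented for illustration; the FCM as published may differ in detail.

```python
import numpy as np

# Raw colorimeter readings and reference values for the four colors;
# columns are R, G, B, W, rows are the three tristimulus channels.
S = np.array([[20.1, 12.5, 5.3, 38.0],
              [8.2, 30.4, 6.1, 44.2],
              [1.1, 4.9, 25.7, 32.0]])
T = np.array([[21.0, 12.0, 5.0, 38.5],
              [8.0, 31.0, 6.0, 44.5],
              [1.0, 5.0, 26.0, 31.5]])

# Least-squares solution of M S ~= T; pinv gives M = T S^T (S S^T)^(-1).
M = T @ np.linalg.pinv(S)
corrected = M @ S          # corrected readings for the four training colors
```

With four colors and nine matrix entries the fit is overdetermined, so the residual is generally nonzero but smaller than leaving the readings uncorrected.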

  10. Determination of plutonium in nitric acid solutions using energy dispersive L X-ray fluorescence with a low power X-ray generator

    NASA Astrophysics Data System (ADS)

    Py, J.; Groetz, J.-E.; Hubinois, J.-C.; Cardona, D.

    2015-04-01

    This work presents the development of an in-line energy-dispersive L X-ray fluorescence spectrometer, with a low-power X-ray generator and a secondary target, for the determination of plutonium concentration in nitric acid solutions. The intensity of the L X-rays arising from internal conversion and of the gamma rays emitted by the daughter nuclei of plutonium is minimized and corrected for, in order to eliminate their interference with the L X-ray fluorescence spectrum. Matrix effects are then corrected by the Compton peak method. A calibration plot for plutonium solutions within the range 0.1-20 g L-1 is given.

  11. Biotransformation and adsorption of pharmaceutical and personal care products by activated sludge after correcting matrix effects.

    PubMed

    Deng, Yu; Li, Bing; Yu, Ke; Zhang, Tong

    2016-02-15

    This study reported significant suppressive matrix effects in analyses of six pharmaceutical and personal care products (PPCPs) in activated sludge, sterilized activated sludge and untreated sewage by ultra-performance liquid chromatography-tandem mass spectrometry. This quantitative matrix evaluation of selected PPCPs supplements the limited quantification data on matrix effects in mass spectrometric determination of PPCPs in complex environmental samples. The observed matrix effects were chemical-specific and matrix-dependent, with the most pronounced average effect (-55%) found for sulfadiazine in sterilized activated sludge. After correcting for matrix effects by post-spiking known amounts of PPCPs, the removal mechanisms and biotransformation kinetics of the selected PPCPs in the activated sludge system were revealed by batch experiments. The experimental data showed that removal of the target PPCPs in the activated sludge process was mainly by biotransformation, while the contributions of adsorption, hydrolysis and volatilization were negligible. A high biotransformation efficiency (52%) was observed for diclofenac, while three other compounds (sulfadiazine, sulfamethoxazole and roxithromycin) were partially biotransformed, by ~40%. The remaining two compounds, trimethoprim and carbamazepine, were recalcitrant to biotransformation by the activated sludge. Copyright © 2015 Elsevier B.V. All rights reserved.
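The post-spiking comparison behind figures like the -55% above is commonly quantified as a simple ratio of peak areas; the numbers below are illustrative, not the study's data.

```python
# Matrix effect (ME) from a post-extraction spiking experiment:
# ME% = (A_matrix / A_solvent - 1) * 100, where A_matrix is the analyte
# peak area spiked into blank matrix extract and A_solvent the same amount
# in neat solvent. Negative values indicate ion suppression.

def matrix_effect_pct(area_in_matrix, area_in_solvent):
    return (area_in_matrix / area_in_solvent - 1.0) * 100.0

me = matrix_effect_pct(4500.0, 10000.0)   # a -55% suppressive matrix effect
```

A value of 0% means no matrix effect; values above 0% indicate enhancement rather than suppression.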

  12. A generic standard additions based method to determine endogenous analyte concentrations by immunoassays to overcome complex biological matrix interference.

    PubMed

    Pang, Susan; Cowen, Simon

    2017-12-13

    We describe a novel generic method to derive the unknown endogenous concentration of an analyte within a complex biological matrix (e.g., serum or plasma), based upon the relationship between the immunoassay signal response of a biological test sample spiked with known analyte concentrations and the log-transformed estimated total concentration. If the estimated total analyte concentration is correct, a portion of the sigmoid on a log-log plot is very close to linear, allowing the unknown endogenous concentration to be estimated using a numerical method. This approach obviates conventional relative quantification against an internal standard curve and the need for calibrant diluent, and it accounts for the individual matrix interference on the immunoassay by spiking the test sample itself. The technique is based on the method of standard additions used for chemical analytes. Unknown endogenous analyte concentrations in human plasma diluted as little as 2-fold may be determined reliably using as few as four reaction wells.
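For reference, the classical standard-additions estimate that this method generalizes can be sketched in a few lines: spike the sample itself with known amounts, fit signal against added concentration, and extrapolate; for a linear response the endogenous concentration is intercept/slope. The sensitivity and concentration below are synthetic, and this linear case stands in for the paper's log-log numerical approach.

```python
import numpy as np

added = np.array([0.0, 5.0, 10.0, 20.0])   # spiked concentrations (four wells)
k, c0 = 3.2, 7.5                           # hidden assay sensitivity, true endogenous conc.
signal = k * (c0 + added)                  # ideal linear immunoassay response

# Fit signal = slope * added + intercept; extrapolating to signal = 0
# gives -c0, so the endogenous concentration is intercept / slope.
slope, intercept = np.polyfit(added, signal, 1)
endogenous = intercept / slope
```

Because the spikes go into the test sample itself, any proportional matrix suppression scales both slope and intercept equally and cancels in the ratio, which is the core advantage the abstract describes.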

  13. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    NASA Astrophysics Data System (ADS)

    Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded by even small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered-subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique were studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies and demonstrated marked improvement. The deconvolution technique presented here thus appears promising as a valid alternative to existing motion correction methods for PET. It has the potential to deblur an image from any modality if the causative motion is known and its effect can be represented in a system matrix.
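The principle, that a known blur can be iteratively deconvolved, can be shown with the textbook Richardson-Lucy update on a synthetic 1-D signal. This is not the paper's ordered-subset implementation; the kernel here merely stands in for a known motion blur.

```python
import numpy as np

def richardson_lucy(blurred, psf, iters=200):
    """Textbook multiplicative Richardson-Lucy deconvolution in 1-D."""
    est = np.full_like(blurred, blurred.mean())      # flat nonnegative start
    psf_mirror = psf[::-1]
    for _ in range(iters):
        conv = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)    # guard against divide-by-zero
        est = est * np.convolve(ratio, psf_mirror, mode="same")
    return est

truth = np.zeros(64)
truth[20], truth[40] = 1.0, 0.5                      # two point sources
psf = np.array([0.2, 0.6, 0.2])                      # known blur kernel
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

An ordered-subset variant, as in the paper, applies the same multiplicative update using only a subset of the data per sub-iteration to accelerate convergence.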

  14. IA and PA network-based computation of coordinating combat behaviors in the military MAS

    NASA Astrophysics Data System (ADS)

    Xia, Zuxun; Fang, Huijia

    2004-09-01

    In a military multi-agent system, every agent needs to analyze the dependency and temporal relations among its tasks or combat behaviors in order to work out its plans and obtain correct behavior sequences; this guarantees good coordination, avoids unexpected damage, and guards against squandering the chance of winning a battle through incorrect scheduling and conflicts. In this paper, an IA- and PA-network-based computation of coordinating combat behaviors is put forward, with particular emphasis on using a 5x5 matrix to represent and compute temporal binary relations (between two interval events, between two point events, or between an interval event and a point event); this matrix method makes the coordination computation more convenient than before.

  15. The importance of reference materials in doping-control analysis.

    PubMed

    Mackay, Lindsey G; Kazlauskas, Rymantas

    2011-08-01

    Currently a large range of pure-substance reference materials are available for calibration of doping-control methods. These materials enable traceability to the International System of Units (SI) for the results generated by World Anti-Doping Agency (WADA)-accredited laboratories. Only a small number of prohibited substances have threshold limits for which quantification is highly important; for these analytes only the highest-quality reference materials available should be used. Many prohibited substances have no threshold limits, and reference materials provide essential identity confirmation; for these reference materials the correct identity is critical, and the methods used to assess identity in these cases should be critically evaluated. There is still a lack of certified matrix reference materials to support many aspects of doping analysis. However, in key areas a range of urine matrix materials have been produced for substances with threshold limits, for example 19-norandrosterone and the testosterone/epitestosterone (T/E) ratio. These certified matrix reference materials (CRMs) are an excellent independent means of checking method recovery and bias and will typically be used in method validation and then regularly as quality-control checks. They can be particularly important in the analysis of samples close to threshold limits, where measurement accuracy becomes critical. Some reference materials for isotope ratio mass spectrometry (IRMS) analysis are available, and a matrix material certified for steroid delta values is currently in production. In other new areas, for example the Athlete Biological Passport, peptide hormone testing, designer steroids, and gene doping, reference-material needs still have to be thoroughly assessed and prioritised.

  16. Identification of Francisella tularensis by whole-cell matrix-assisted laser desorption ionization-time of flight mass spectrometry: fast, reliable, robust, and cost-effective differentiation on species and subspecies levels.

    PubMed

    Seibold, E; Maier, T; Kostrzewa, M; Zeman, E; Splettstoesser, W

    2010-04-01

    Francisella tularensis, the causative agent of tularemia, is a potential agent of bioterrorism. The phenotypic discrimination of closely related, but differently virulent, Francisella tularensis subspecies with phenotyping methods is difficult and time-consuming, often producing ambiguous results. As a fast and simple alternative, matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) was applied to 50 different strains of the genus Francisella to assess its ability to identify and discriminate between strains according to their designated species and subspecies. Reference spectra from five representative strains of Francisella philomiragia, Francisella tularensis subsp. tularensis, Francisella tularensis subsp. holarctica, Francisella tularensis subsp. mediasiatica, and Francisella tularensis subsp. novicida were established and evaluated for their capability to correctly identify Francisella species and subspecies by matching a collection of spectra from 45 blind-coded Francisella strains against a database containing the five reference spectra and 3,287 spectra from other microorganisms. As a reference method for identification of strains from the genus Francisella, 23S rRNA gene sequencing was used. All strains were correctly identified, with both methods showing perfect agreement at the species level as well as at the subspecies level. The identification of Francisella strains by MALDI-TOF MS and subsequent database matching was reproducible using biological replicates, different culture media, different cultivation times, different serial in vitro passages of the same strain, different preparation protocols, and different mass spectrometers.

  17. Development of a Postcolumn Infused-Internal Standard Liquid Chromatography Mass Spectrometry Method for Quantitative Metabolomics Studies.

    PubMed

    Liao, Hsiao-Wei; Chen, Guan-Yuan; Wu, Ming-Shiang; Liao, Wei-Chih; Lin, Ching-Hung; Kuo, Ching-Hua

    2017-02-03

    Quantitative metabolomics has become much more important in clinical research in recent years. Individual differences in matrix effects (MEs) and the injection-order effect are two major factors that reduce quantification accuracy in liquid chromatography-electrospray ionization-mass spectrometry (LC-ESI-MS)-based metabolomics studies. This study proposed a postcolumn infused-internal standard (PCI-IS) combined with a matrix normalization factor (MNF) strategy to improve the analytical accuracy of quantitative metabolomics. The PCI-IS combined with the MNF method was applied to a targeted metabolomics study of amino acids (AAs). D8-phenylalanine was used as the PCI-IS and was postcolumn-infused into the ESI interface for calibration purposes. The MNF was used to bridge the AA response in a standard solution with that in the plasma samples. The signal changes caused by MEs were corrected by dividing the AA signal intensities by the PCI-IS intensities after adjustment with the MNF. After method validation, we evaluated the applicability of the method for breast cancer research using 100 plasma samples. The quantification results revealed that the 11 tested AAs exhibit accuracies between 88.2% and 110.7%. The principal component analysis score plot revealed that the injection-order effect was successfully removed, and the within-group variation of most of the tested AAs decreased after the PCI-IS correction. Finally, the targeted metabolomics study of the AAs showed that tryptophan levels were higher in malignant patients than in the benign group. We anticipate that a similar approach can be applied to other endogenous metabolites to facilitate quantitative metabolomics studies.

  18. Human cell structure-driven model construction for predicting protein subcellular location from biological images.

    PubMed

    Shao, Wei; Liu, Mingxia; Zhang, Daoqiang

    2016-01-01

    The systematic study of subcellular location patterns is very important for fully characterizing the human proteome. With the great advances in automated microscopic imaging, accurate bioimage-based classification methods to predict protein subcellular locations are highly desired. All existing models were constructed on the independent-parallel hypothesis, in which the cellular component classes are positioned independently in a multi-class classification engine; the important structural information of cellular compartments is thus missed. To address this problem and develop more accurate models, we propose a novel cell structure-driven classifier construction approach (SC-PSorter) that employs prior biological structural information in the learning model. Specifically, the structural relationships among the cellular components are reflected by a new codeword matrix under the error-correcting output coding framework. We then construct multiple SC-PSorter-based classifiers corresponding to the columns of the error-correcting output coding codeword matrix using a multi-kernel support vector machine classification approach. Finally, we perform classifier ensembling by combining these multiple SC-PSorter-based classifiers via majority voting. We evaluate our method on a collection of 1636 immunohistochemistry images from the Human Protein Atlas database. The experimental results show that our method achieves an overall accuracy of 89.0%, which is 6.4% higher than the state-of-the-art method. The dataset and code can be downloaded from https://github.com/shaoweinuaa/. Contact: dqzhang@nuaa.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
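The error-correcting output coding framework mentioned above can be illustrated in miniature: each class is assigned a binary codeword (a row of the codeword matrix), each column defines one binary subtask, and a predicted bit string is decoded to the class with the nearest codeword in Hamming distance. The matrix below is invented; the paper instead derives its codeword matrix from cell-structure relations.

```python
import numpy as np

CODEWORDS = np.array([[0, 0, 1, 1, 0],
                      [0, 1, 0, 1, 1],
                      [1, 0, 0, 0, 1],
                      [1, 1, 1, 0, 0]])   # 4 classes x 5 binary classifiers

def decode(bits):
    """Return the class whose codeword is nearest in Hamming distance."""
    dists = np.abs(CODEWORDS - np.asarray(bits)).sum(axis=1)
    return int(np.argmin(dists))

# One flipped bit is tolerated when codewords are spaced far enough apart:
pred = decode([0, 1, 0, 1, 0])            # class 1's codeword, last bit flipped
```

The error-correcting property comes from the spacing between codewords: a few binary classifiers can err and the decoded class remains correct, which is why the ensemble tolerates imperfect per-column SVMs.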

  19. Mathematical investigation of one-way transform matrix options.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooper, James Arlin

    2006-01-01

    One-way transforms have been used in weapon systems processors since the mid- to late-1970s to help recognize insertion of correct pre-arm information while maintaining abnormal-environment safety. Level-One, Level-Two, and Level-Three transforms have been designed. The Level-One and Level-Two transforms have been implemented in weapon systems, and both are equivalent to matrix multiplication applied to the inserted information. The Level-Two transform, utilizing a 6 x 6 matrix, provided the basis for the "System 2" interface definition for Unique-Signal digital communication between aircraft and attached weapons. The investigation described in this report was carried out to determine whether other matrix sizes are equivalent to the 6 x 6 Level-Two matrix and, if so, to derive implementation options. Another important reason was to more fully explore the potential for inadvertent inversion. The results were that additional implementation methods were discovered, but no inversion weaknesses were revealed.
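
    A transform of this kind amounts to a binary matrix-vector product, and "inadvertent inversion" concerns whether the matrix is invertible over GF(2). A minimal sketch, with a made-up 6 x 6 binary matrix standing in for the (non-public) Level-Two matrix:

```python
import numpy as np

def gf2_matvec(M, v):
    """Apply a binary transform: matrix-vector product over GF(2)."""
    return (M @ v) % 2

def gf2_rank(M):
    """Rank over GF(2) by Gaussian elimination; a square matrix is
    invertible iff its rank equals its dimension."""
    A = M.copy() % 2
    rank = 0
    rows, cols = A.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]      # bring pivot row up
        for r in range(rows):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]                  # XOR-eliminate the column
        rank += 1
    return rank

# Hypothetical 6x6 binary matrix (illustration only, not the actual transform):
M = np.array([
    [1, 0, 1, 0, 0, 1],
    [0, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [1, 0, 0, 0, 1, 1],
], dtype=int)

v = np.array([1, 0, 1, 1, 0, 0])         # inserted information (bits)
print(gf2_matvec(M, v))                  # transformed word
print("invertible:", gf2_rank(M) == 6)   # full rank would permit recovery of the input
```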

  20. Apparatus and method for identification of matrix materials in which transuranic elements are embedded using thermal neutron capture gamma-ray emission

    DOEpatents

    Close, D.A.; Franks, L.A.; Kocimski, S.M.

    1984-08-16

    An invention is described that enables quantitative simultaneous identification of the matrix materials in which fertile and fissile nuclides are embedded, along with quantitative assay of the fertile and fissile materials. The invention also enables corrections for any absorption of neutrons by the matrix materials and by the measurement apparatus, through measurement of the prompt and delayed neutron flux emerging from a sample after it is interrogated by simultaneously applied neutrons and gamma radiation. High-energy electrons are directed at a first target to produce gamma radiation. A second target receives the resulting pulsed gamma radiation and produces neutrons from the interaction with the gamma radiation. These neutrons are slowed by a moderator surrounding the sample and bathe the sample uniformly, generating second gamma radiation in the interaction. The gamma radiation is then resolved and quantitatively detected, providing a spectroscopic signature of the constituent elements contained in the matrix and in the materials within the vicinity of the sample.

  1. Improvement of structural models using covariance analysis and nonlinear generalized least squares

    NASA Technical Reports Server (NTRS)

    Glaser, R. J.; Kuo, C. P.; Wada, B. K.

    1992-01-01

    The next generation of large, flexible space structures will be too light to support their own weight, requiring a system of structural supports for ground testing. The authors have proposed multiple boundary-condition testing (MBCT), using more than one support condition to reduce uncertainties associated with the supports. MBCT would revise the mass and stiffness matrix, analytically qualifying the structure for operation in space. The same procedure is applicable to other common test conditions, such as empty/loaded tanks and subsystem/system level tests. This paper examines three techniques for constructing the covariance matrix required by nonlinear generalized least squares (NGLS) to update structural models based on modal test data. The methods range from a complicated approach used to generate the simulation data (i.e., the correct answer) to a diagonal matrix based on only two constants. The results show that NGLS is very insensitive to assumptions about the covariance matrix, suggesting that a workable NGLS procedure is possible. The examples also indicate that the multiple boundary condition procedure more accurately reduces errors than individual boundary condition tests alone.
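
    The covariance-weighted update at the heart of NGLS can be sketched as a generalized least squares step, which also illustrates the paper's point that the covariance choice matters little; the model matrix, noise covariance, and all numbers below are hypothetical stand-ins, not the paper's structural models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linearized model: modal measurements y depend on structural
# parameters beta through a sensitivity matrix X (stand-in for one NGLS step).
X = rng.normal(size=(20, 3))
beta_true = np.array([1.0, -2.0, 0.5])
Sigma = 0.01 * (np.eye(20) + 0.5 * np.ones((20, 20)) / 20)  # mildly correlated noise
y = X @ beta_true + rng.multivariate_normal(np.zeros(20), Sigma)

def gls(X, y, Sigma):
    """Generalized least squares: weight residuals by the inverse covariance."""
    W = np.linalg.inv(Sigma)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

beta_full = gls(X, y, Sigma)                    # "correct" covariance
beta_diag = gls(X, y, np.diag(np.diag(Sigma)))  # crude diagonal approximation
print(beta_full)
print(beta_diag)  # the two estimates come out close, echoing the paper's
                  # finding that NGLS is insensitive to the covariance choice
```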

  2. Anatomical-based partial volume correction for low-dose dedicated cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Chan, Chung; Grobshtein, Yariv; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Stacy, Mitchel R.; Sinusas, Albert J.; Liu, Chi

    2015-09-01

    Due to limited spatial resolution, the partial volume effect has been a major factor degrading quantitative accuracy in emission tomography systems. This study investigates the performance of several anatomical-based partial volume correction (PVC) methods for a dedicated cardiac SPECT/CT system (GE Discovery NM/CT 570c) with a focused field-of-view, over a clinically relevant range of high and low count levels, for two different radiotracer distributions. These PVC methods include perturbation geometry transfer matrix (pGTM); pGTM followed by multi-target correction (MTC); pGTM with known concentration in the blood pool, with and without subsequent MTC; and our newly proposed methods, which apply MTC iteratively, estimating the mean values in all regions and updating them from the MTC-corrected images at each iteration. The NCAT phantom was simulated for cardiovascular imaging with 99mTc-tetrofosmin, a myocardial perfusion agent, and 99mTc-red blood cell (RBC), a pure intravascular imaging agent. Images were acquired at six different count levels to investigate the performance of the PVC methods at both high and low count levels for low-dose applications. We performed two large-animal in vivo cardiac imaging experiments following injection of 99mTc-RBC for evaluation of intramyocardial blood volume (IMBV). The simulation results showed that our proposed iterative methods provide superior performance to existing PVC methods in terms of image quality, quantitative accuracy, and reproducibility (standard deviation), particularly for low-count data. The iterative approaches are robust for both 99mTc-tetrofosmin perfusion imaging and 99mTc-RBC imaging of IMBV and blood pool activity, even at low count levels. The animal study results indicated the effectiveness of PVC in correcting the overestimation of IMBV due to blood pool contamination.
In conclusion, the iterative PVC methods can achieve more accurate quantification, particularly for low count cardiac SPECT studies, typically obtained from low-dose protocols, gated studies, and dynamic applications.
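
    The geometry transfer matrix idea underlying these PVC methods is that each observed regional mean is a known mixture of the true regional activities, which can then be unmixed. A minimal sketch with made-up spill-over fractions and activities:

```python
import numpy as np

# GTM model: observed_i = sum_j GTM[i, j] * true_j, where GTM[i, j] is the
# fraction of region j's signal spilling into region i due to limited
# resolution. All values below are hypothetical illustrations.
GTM = np.array([
    [0.80, 0.15, 0.05],   # myocardium
    [0.10, 0.85, 0.05],   # blood pool
    [0.05, 0.10, 0.85],   # background
])
true_activity = np.array([100.0, 40.0, 5.0])

observed = GTM @ true_activity               # partial-volume-degraded regional means
recovered = np.linalg.solve(GTM, observed)   # PVC: invert the spill-over mixing
print(observed)
print(recovered)  # matches true_activity up to round-off
```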

  3. The density of apical cells of dark-grown protonemata of the moss Ceratodon purpureus

    NASA Technical Reports Server (NTRS)

    Schwuchow, J. M.; Kern, V. D.; Wagner, T.; Sack, F. D.

    2000-01-01

    Determinations of plant or algal cell density (cell mass divided by volume) have rarely accounted for the extracellular matrix or shrinkage during isolation. Three techniques were used to indirectly estimate the density of intact apical cells from protonemata of the moss Ceratodon purpureus. First, the volume fraction of each cell component was determined by stereology, and published values for component density were used to extrapolate to the entire cell. Second, protonemal tips were immersed in bovine serum albumin solutions of different densities, and then the equilibrium density was corrected for the mass of the cell wall. Third, apical cell protoplasts were centrifuged in low-osmolarity gradients, and values were corrected for shrinkage during protoplast isolation. Values from centrifugation (1.004 to 1.015 g/cm3) were considerably lower than from other methods (1.046 to 1.085 g/cm3). This work appears to provide the first corrected estimates of the density of any plant cell. It also documents a method for the isolation of protoplasts specifically from apical cells of protonemal filaments.

  4. Long-range correction for tight-binding TD-DFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humeniuk, Alexander; Mitrić, Roland, E-mail: roland.mitric@uni-wuerzburg.de

    2015-10-07

    We present two improvements to the tight-binding approximation of time-dependent density functional theory (TD-DFTB): First, we add an exact Hartree-Fock exchange term, which is switched on at large distances, to the ground state Hamiltonian and similarly to the coupling matrix that enters the linear response equations for the calculation of excited electronic states. We show that the excitation energies of charge transfer states are improved relative to the standard approach without the long-range correction by testing the method on a set of molecules from the database in Peach et al. [J. Chem. Phys. 128, 044118 (2008)], which are known to exhibit problematic charge transfer states. The degree of spatial overlap between occupied and virtual orbitals indicates where TD-DFTB and long-range corrected TD-DFTB (lc-TD-DFTB) can be expected to produce large errors. Second, we improve the calculation of oscillator strengths. The transition dipoles are obtained from Slater-Koster files for the dipole matrix elements between valence orbitals. In particular, excitations localized on a single atom, which appear dark when using Mulliken transition charges, acquire a more realistic oscillator strength in this way. These extensions pave the way for using lc-TD-DFTB to describe the electronic structure of large chromophoric polymers, where uncorrected TD-DFTB fails to describe the high degree of conjugation and produces spurious low-lying charge transfer states.
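
    A common way to write the range separation behind such long-range corrections (the specific switching function used in lc-TD-DFTB may differ; the error-function split with range parameter \mu shown here is the standard textbook form):

```latex
\frac{1}{r}
  \;=\;
  \underbrace{\frac{1 - \operatorname{erf}(\mu r)}{r}}_{\text{short range: semilocal DFT exchange}}
  \;+\;
  \underbrace{\frac{\operatorname{erf}(\mu r)}{r}}_{\text{long range: exact HF exchange}}
```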

  5. Synthesis of linear regression coefficients by recovering the within-study covariance matrix from summary statistics.

    PubMed

    Yoneoka, Daisuke; Henmi, Masayuki

    2017-06-01

    Recently, the number of regression models has dramatically increased in several academic fields. However, within the context of meta-analysis, synthesis methods for such models have not developed at a commensurate pace. One of the difficulties hindering their development is the disparity in the sets of covariates among the models in the literature. If the sets of covariates differ across models, the interpretation of the coefficients differs as well, making it difficult to synthesize them. Moreover, previous synthesis methods for regression models, such as multivariate meta-analysis, often encounter problems because the covariance matrix of the coefficients (i.e. within-study correlations) or individual patient data are not necessarily available. This study therefore proposes a method to synthesize linear regression models with different covariate sets by using a generalized least squares method involving bias correction terms. In particular, we also propose an approach to recover (at most) three correlations of covariates, which are required for the calculation of the bias term without individual patient data. Copyright © 2016 John Wiley & Sons, Ltd.

  6. Accurate mass measurement by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry. I. Measurement of positive radical ions using porphyrin standard reference materials.

    PubMed

    Griffiths, Nia W; Wyatt, Mark F; Kean, Suzanna D; Graham, Andrew E; Stein, Bridget K; Brenton, A Gareth

    2010-06-15

    A method for the accurate mass measurement of positive radical ions by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOFMS) is described. Initial use of a conjugated oligomeric calibration material was rejected in favour of a series of meso-tetraalkyl/tetraalkylaryl-functionalised porphyrins, from which the two calibrants required for a particular accurate mass measurement were chosen. While all measurements of monoisotopic species were within +/-5 ppm, and the method was rigorously validated using chemometrics, mean values of five measurements were used for extra confidence in the generation of potential elemental formulae. Potential difficulties encountered when measuring compounds containing multi-isotopic elements, where the monoisotopic peak is no longer the lowest-mass peak, are discussed, and a simple mass-correction solution is presented. The method requires no significant expertise to implement, but care and attention are required to obtain valid measurements. It is operationally simple and will prove useful to the analytical chemistry community. Copyright (c) 2010 John Wiley & Sons, Ltd.
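
    Choosing two calibrants that bracket the analyte amounts to a two-point linear recalibration of the mass axis. A minimal sketch; all masses below are invented for illustration and are not the porphyrin reference values:

```python
def two_point_calibration(measured, true):
    """Linear m/z correction m_true = a*m + b fitted through two calibrant
    peaks -- the pair of reference masses bracketing the analyte."""
    (m1, m2), (t1, t2) = measured, true
    a = (t2 - t1) / (m2 - m1)
    b = t1 - a * m1
    return lambda m: a * m + b

# Hypothetical measured vs. reference calibrant masses (Da):
calibrate = two_point_calibration((614.2500, 702.3700), (614.2471, 702.3682))
print(calibrate(658.3100))  # corrected analyte mass, between the two calibrants
```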

  7. Limitations on near-surface correction for multicomponent offset VSP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macbeth, C.; Li, X.Y.; Horne, S.

    1994-12-31

    Multicomponent data are degraded by near-surface scattering and non-ideal or unexpected source behavior. These effects cannot be neglected when interpreting relative wavefield attributes derived from compressional and shear waves: they confound analyses based on standard scalar procedures and a prima facie interpretation of the vector wavefield properties. Here, the authors highlight two unique polar matrix decompositions for near-surface correction in offset VSPs, and consider their inherent mathematical constraints and how these impact subsurface interpretation. The first method is applied to a four-component subset of six-component field data from a configuration of three concentric rings and walkaway source positions forming offset VSPs in the Cymric field, California. The correction appears successful in automatically converting the wavefield into its ideal form, and the qS1 polarizations scatter around N15°E, in agreement with the layer stripping of Winterstein and Meadows (1991).
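
    A polar decomposition factors a measured transfer matrix into an orthogonal (rotation-like) part times a symmetric distortion, which is the general idea behind such corrections. A minimal SVD-based sketch (the paper's two specific decompositions and constraints are not reproduced here; the 2x2 matrix is made up):

```python
import numpy as np

def polar_decompose(M):
    """Polar decomposition M = Q @ P: Q orthogonal (pure rotation/reflection),
    P symmetric positive semi-definite, both computed from the SVD of M."""
    U, s, Vt = np.linalg.svd(M)
    Q = U @ Vt
    P = Vt.T @ np.diag(s) @ Vt
    return Q, P

# Hypothetical 2x2 two-component transfer matrix distorted by near-surface effects:
M = np.array([[0.9, 0.3],
              [-0.2, 1.1]])
Q, P = polar_decompose(M)
print(np.allclose(Q @ P, M))             # True: the factors reproduce M
print(np.allclose(Q @ Q.T, np.eye(2)))   # True: Q is orthogonal
```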

  8. Analysis and correction for measurement error of edge sensors caused by deformation of guide flexure applied in the Thirty Meter Telescope SSA.

    PubMed

    Cao, Haifeng; Zhang, Jingxu; Yang, Fei; An, Qichang; Zhao, Hongchao; Guo, Peng

    2018-05-01

    The Thirty Meter Telescope (TMT) project will design and build a 30-m-diameter telescope for astronomical research at visible and infrared wavelengths. The primary mirror of TMT is made up of 492 hexagonal mirror segments under active control. The highly segmented primary mirror will utilize edge sensors to align and stabilize the relative piston, tip, and tilt degrees of freedom of the segments. The support system assembly (SSA) of the segmented mirror utilizes a guide flexure to decouple the axial and lateral supports, but deformation of this flexure causes measurement error in the edge sensors. We have analyzed the theoretical relationship between segment movement and the measurement value of the edge sensor, and we propose a matrix-based error correction method. The correction process and the simulation results for the edge sensor are described in this paper.

  9. Higgs boson decay into b-quarks at NNLO accuracy

    NASA Astrophysics Data System (ADS)

    Del Duca, Vittorio; Duhr, Claude; Somogyi, Gábor; Tramontano, Francesco; Trócsányi, Zoltán

    2015-04-01

    We compute the fully differential decay rate of the Standard Model Higgs boson into b-quarks at next-to-next-to-leading order (NNLO) accuracy in αs. We employ a general subtraction scheme developed for the calculation of higher order perturbative corrections to QCD jet cross sections, which is based on the universal infrared factorization properties of QCD squared matrix elements. We show that the subtractions render the various contributions to the NNLO correction finite. In particular, we demonstrate analytically that the sum of integrated subtraction terms correctly reproduces the infrared poles of the two-loop double virtual contribution to this process. We present illustrative differential distributions obtained by implementing the method in a parton level Monte Carlo program. The basic ingredients of our subtraction scheme, used here for the first time to compute a physical observable, are universal and can be employed for the computation of more involved processes.

  10. Reduced circuit implementation of encoder and syndrome generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trager, Barry M; Winograd, Shmuel

    An error correction method and system includes an encoder and syndrome generator that operate in parallel to reduce the amount of circuitry used to compute check symbols and syndromes for error-correcting codes. The system and method compute the contributions to the syndromes and check symbols 1 bit at a time instead of 1 symbol at a time. As a result, the even syndromes can be computed as powers of the odd syndromes. Further, the system assigns symbol addresses so that there are, for an example GF(2^8) code with 72 symbols, three blocks of addresses which differ by a cube root of unity, allowing the data symbols to be combined to reduce the size and complexity of the odd-syndrome circuits. Further, the circuit for generating check symbols is derived from the syndrome circuit using the inverse of the part of the syndrome matrix for the check locations.
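
    The even-from-odd property comes from the Frobenius map in characteristic 2: for bitwise contributions, S_{2j} = S_j^2, so even syndromes need no separate evaluation circuit. A sketch over GF(2^8); the field polynomial 0x11d is the common Reed-Solomon choice and the patent's actual parameters may differ:

```python
# For a BINARY received word r, the syndrome S_j = sum_i r_i * alpha^(i*j)
# satisfies S_(2j) = S_j^2, because squaring is linear in characteristic 2.

def gf_mul(a, b, poly=0x11d):
    """Multiply in GF(2^8), reducing modulo the given primitive polynomial."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= poly
    return result

def gf_pow(a, n):
    result = 1
    for _ in range(n):
        result = gf_mul(result, a)
    return result

ALPHA = 0x02  # primitive element of the field

def syndrome(bits, j):
    """S_j = sum_i bits[i] * alpha^(i*j), for bits in {0, 1}."""
    s = 0
    for i, bit in enumerate(bits):
        if bit:
            s ^= gf_pow(ALPHA, i * j)
    return s

bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]          # arbitrary received bit pattern
s1, s2 = syndrome(bits, 1), syndrome(bits, 2)
print(s2 == gf_mul(s1, s1))  # True: the even syndrome is the square of the odd one
```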

  11. Gauge invariance of excitonic linear and nonlinear optical response

    NASA Astrophysics Data System (ADS)

    Taghizadeh, Alireza; Pedersen, T. G.

    2018-05-01

    We study the equivalence of four different approaches to calculate the excitonic linear and nonlinear optical response of multiband semiconductors. These four methods derive from two choices of gauge, i.e., length and velocity gauges, and two ways of computing the current density, i.e., direct evaluation and evaluation via the time-derivative of the polarization density. The linear and quadratic response functions are obtained for all methods by employing a perturbative density-matrix approach within the mean-field approximation. The equivalence of all four methods is shown rigorously, when a correct interaction Hamiltonian is employed for the velocity gauge approaches. The correct interaction is written as a series of commutators containing the unperturbed Hamiltonian and position operators, which becomes equivalent to the conventional velocity gauge interaction in the limit of infinite Coulomb screening and infinitely many bands. As a case study, the theory is applied to hexagonal boron nitride monolayers, and the linear and nonlinear optical response found in different approaches are compared.

  12. Design Specification Issues in Time-Series Intervention Models.

    ERIC Educational Resources Information Center

    Huitema, Bradley E.; McKean, Joseph W.

    2000-01-01

    Presents examples of egregious errors of interpretation in time-series intervention models and makes recommendations regarding the correct specification of the design matrix. Discusses the profound effects of variants of the slope change variable in the design matrix. (SLD)
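
    One common way to lay out such a design matrix, with separate level-change and slope-change columns, can be sketched as follows (a hypothetical two-phase series; the specifications the authors critique vary precisely in how these columns are coded):

```python
import numpy as np

def intervention_design(n_pre, n_post):
    """Design matrix for a two-phase interrupted time series:
    columns = [intercept, time, level change, slope change].
    The slope-change column counts time SINCE the intervention -- one
    common (and easily mis-specified) coding choice in this literature."""
    n = n_pre + n_post
    t = np.arange(1, n + 1)
    level = (t > n_pre).astype(float)            # 0 before intervention, 1 after
    slope = np.where(t > n_pre, t - n_pre, 0.0)  # 0,...,0, 1, 2, ...
    return np.column_stack([np.ones(n), t, level, slope])

X = intervention_design(n_pre=5, n_post=5)
print(X[4:7])  # rows straddling the intervention point
```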

  13. A technique for rapid source apportionment applied to ambient organic aerosol measurements from a thermal desorption aerosol gas chromatograph (TAG)

    DOE PAGES

    Zhang, Yaping; Williams, Brent J.; Goldstein, Allen H.; ...

    2016-11-25

    We present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography-mass spectrometry methods. Here, we specifically apply this new analysis method to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014). Here we extend the method development for the specific purpose of determining aerosol samples' sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. Two-dimensional PMF can then effectively perform three-dimensional factorization on the three-dimensional TAG mass spectra data. The retention time shift of the chromatogram is corrected by applying the median values of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing of the data than conventional single-compound identification and integration. In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method does have the ability to achieve molecular level resolution on other bulk aerosol components commonly observed by the AMS.
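
    The binning-and-unfolding step that builds the PMF input matrix can be sketched as follows; the array sizes are hypothetical, not those of the TAG data set:

```python
import numpy as np

def bin_chromatograms(data, n_bins):
    """data: array (n_samples, n_times, n_mz) of mass spectra along retention
    time. Sum the spectra within evenly spaced retention-time bins, then
    unfold bins x m/z into one row per sample -- the PMF input matrix."""
    n_samples, n_times, n_mz = data.shape
    edges = np.linspace(0, n_times, n_bins + 1, dtype=int)
    binned = np.stack(
        [data[:, edges[b]:edges[b + 1], :].sum(axis=1) for b in range(n_bins)],
        axis=1,
    )                                        # (n_samples, n_bins, n_mz)
    return binned.reshape(n_samples, n_bins * n_mz)

# Hypothetical sizes: 8 samples, 600 retention-time points, 40 m/z channels.
rng = np.random.default_rng(1)
data = rng.poisson(5.0, size=(8, 600, 40)).astype(float)
pmf_input = bin_chromatograms(data, n_bins=60)
print(pmf_input.shape)  # (8, 2400): rows = samples, columns = bin x m/z
```

    The unfolded matrix can then be handed to an ordinary two-dimensional PMF solver, which is what lets 2-D PMF effectively factorize the 3-D data.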

  14. A technique for rapid source apportionment applied to ambient organic aerosol measurements from a thermal desorption aerosol gas chromatograph (TAG)

    NASA Astrophysics Data System (ADS)

    Zhang, Yaping; Williams, Brent J.; Goldstein, Allen H.; Docherty, Kenneth S.; Jimenez, Jose L.

    2016-11-01

    We present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography-mass spectrometry methods. Here, we specifically apply this new analysis method to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014). Here we extend the method development for the specific purpose of determining aerosol samples' sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. Then two-dimensional PMF can effectively do three-dimensional factorization on the three-dimensional TAG mass spectra data. The retention time shift of the chromatogram is corrected by applying the median values of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing of the data than conventional single compound identification and integration. 
In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method does have the ability to achieve molecular level resolution on other bulk aerosol components commonly observed by the AMS.

  15. Comparative evaluation of matrix-assisted laser desorption ionisation-time of flight mass spectrometry and conventional phenotypic-based methods for identification of clinically important yeasts in a UK-based medical microbiology laboratory.

    PubMed

    Fatania, Nita; Fraser, Mark; Savage, Mike; Hart, Jason; Abdolrasouli, Alireza

    2015-12-01

    The performance of matrix-assisted laser desorption ionisation-time of flight mass spectrometry (MALDI-TOF MS) was compared in a side-by-side analysis with the conventional phenotypic methods currently in use in our laboratory for identification of yeasts in a routine diagnostic setting. A diverse collection of 200 clinically important yeasts (19 species, five genera) was identified by both methods using standard protocols. Discordant or unreliable identifications were resolved by sequencing of the internal transcribed spacer region of the rRNA gene. MALDI-TOF and conventional methods were in agreement for 182 isolates (91%), with correct identification to species level. The eighteen discordant results (9%) were due to rarely encountered species, hence the difficulty in their identification using traditional phenotypic methods. MALDI-TOF MS enabled rapid, reliable and accurate identification of clinically important yeasts in a routine diagnostic microbiology laboratory. Isolates with rare, unusual or low-probability identifications should be confirmed using robust molecular methods. Published by the BMJ Publishing Group Limited.

  16. Rapid method for the determination of 226Ra in hydraulic fracturing wastewater samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maxwell, Sherrod L.; Culligan, Brian K.; Warren, Richard A.

    A new method that rapidly preconcentrates and measures 226Ra from hydraulic fracturing wastewater samples was developed in the Savannah River Environmental Laboratory. The method improves the quality of 226Ra measurements using gamma spectrometry by providing up to 100x preconcentration of 226Ra from this difficult sample matrix, which contains very high levels of calcium, barium, strontium, magnesium and sodium. The high chemical yield, typically 80-90%, facilitates a low detection limit, important for lower level samples, and indicates method ruggedness. Ba-133 tracer is used to determine chemical yield and correct for geometry-related counting issues. The 226Ra sample preparation takes < 2 hours.
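
    The tracer-based yield correction is simple arithmetic: the measured recovery of the Ba-133 spike corrects the Ra-226 result for losses during separation. All numbers below are hypothetical illustrations:

```python
# A known amount of Ba-133 tracer is added before separation; its measured
# recovery gives the chemical yield, which corrects the Ra-226 result.
ba133_added_bq = 50.0       # tracer activity spiked into the sample
ba133_measured_bq = 42.5    # tracer recovered after preconcentration

chemical_yield = ba133_measured_bq / ba133_added_bq  # 0.85, in the 80-90% range

ra226_measured_bq_per_l = 12.3  # apparent activity from gamma counting
ra226_corrected = ra226_measured_bq_per_l / chemical_yield
print(round(chemical_yield, 2), round(ra226_corrected, 2))  # 0.85 14.47
```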

  17. Rapid method for the determination of 226Ra in hydraulic fracturing wastewater samples

    DOE PAGES

    Maxwell, Sherrod L.; Culligan, Brian K.; Warren, Richard A.; ...

    2016-03-24

    A new method that rapidly preconcentrates and measures 226Ra from hydraulic fracturing wastewater samples was developed in the Savannah River Environmental Laboratory. The method improves the quality of 226Ra measurements using gamma spectrometry by providing up to 100x preconcentration of 226Ra from this difficult sample matrix, which contains very high levels of calcium, barium, strontium, magnesium and sodium. The high chemical yield, typically 80-90%, facilitates a low detection limit, important for lower level samples, and indicates method ruggedness. Ba-133 tracer is used to determine chemical yield and correct for geometry-related counting issues. The 226Ra sample preparation takes < 2 hours.

  18. Reducing time to identification of aerobic bacteria and fastidious micro-organisms in positive blood cultures.

    PubMed

    Intra, J; Sala, M R; Falbo, R; Cappellini, F; Brambilla, P

    2016-12-01

    Rapid and early identification of micro-organisms in blood plays a key role in the diagnosis of a febrile patient, in particular in guiding the clinician to the correct antibiotic therapy. This study presents a simple and very fast method with high performance for identifying bacteria by matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) after only 4 h of incubation. We used early bacterial growth on PolyViteX chocolate agar plates inoculated with five drops of blood-broth medium deposited at the same point and spread with a sterile loop, followed by direct transfer onto MALDI-TOF MS target slides without additional modification. Ninety-nine percent of aerobic bacteria were correctly identified from 600 monomicrobial-positive blood cultures. This procedure yielded correct identification of fastidious pathogens, such as Streptococcus pneumoniae, Neisseria meningitidis and Haemophilus influenzae, which have complex nutritional and environmental requirements for growth. Compared to traditional pathogen identification from blood cultures, which takes over 24 h, the reliability, rapidity and suitability of this protocol allowed more rapid administration of optimal antimicrobial treatment. Bloodstream infections are serious conditions with high mortality and morbidity rates, and rapid identification of pathogens and appropriate antimicrobial therapy are key to successful patient outcomes. In this work, we developed a rapid, simplified, accurate and efficient method, reaching 99% identification of aerobic bacteria from monomicrobial-positive blood cultures by using early growth on enriched medium, direct transfer to the target plate without additional procedures, MALDI-TOF MS and the SARAMIS database. The application of this protocol makes it possible to begin appropriate antibiotic therapy sooner. © 2016 The Society for Applied Microbiology.

  19. Gauge invariance of phenomenological models of the interaction of quantum dissipative systems with electromagnetic fields

    NASA Astrophysics Data System (ADS)

    Tokman, M. D.

    2009-05-01

    We discuss specific features of the electrodynamic characteristics of quantum systems within the framework of models that include a phenomenological description of the relaxation processes. As shown by W. E. Lamb, Jr., R. R. Schlicher, and M. O. Scully [Phys. Rev. A 36, 2763 (1987)], the use of phenomenological relaxation operators, which adequately describe the attenuation of eigenvibrations of a quantum system, may lead to incorrect solutions in the presence of external electromagnetic fields determined by the vector potential for different resonance processes. This incorrectness can be eliminated by giving a gauge-invariant form to the relaxation operator. Lamb, Jr., proposed the corresponding gauge-invariant modification for the Weisskopf-Wigner relaxation operator, which is introduced directly into the Schrödinger equation within the framework of the two-level approximation. In the present paper, this problem is studied for the von Neumann equation supplemented by a relaxation operator. First, we show that the solution of the equation for the density matrix with the relaxation operator correctly obtained "from first principles" has properties that ensure gauge invariance of the observables. Second, we propose a general recipe for transforming a phenomenological relaxation operator into the correct (gauge-invariant) form in the density-matrix equations for a multilevel system. We also discuss methods for eliminating other inaccuracies (not related to the gauge-invariance problem) which arise if the electrodynamic response of a dissipative quantum system is calculated within simplified relaxation models (first of all, the model corresponding to constant relaxation rates of coherences in quantum transitions). Examples are given that illustrate the correctness of results obtained with the proposed methods, in contrast to the inaccuracy of standard calculation techniques.

  20. Error measure comparison of currently employed dose-modulation schemes for e-beam proximity effect control

    NASA Astrophysics Data System (ADS)

    Peckerar, Martin C.; Marrian, Christie R.

    1995-05-01

    Standard matrix inversion methods of e-beam proximity correction are compared with a variety of pseudoinverse approaches based on gradient descent. It is shown that the gradient descent methods can be modified using 'regularizers' (terms added to the cost function minimized during gradient descent). This modification solves the 'negative dose' problem in a mathematically sound way. Different techniques are contrasted using a weighted error measure approach. It is shown that the regularization approach leads to the highest quality images. In some cases, ignoring negative doses yields results which are worse than employing an uncorrected dose file.
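
    The regularization idea can be sketched as a Tikhonov-penalized least squares solve with the dose projected onto non-negative values at each gradient step, so no negative doses ever appear. The 1D kernel and all parameters below are made-up illustrations, not the paper's setup:

```python
import numpy as np

def regularized_dose(K, target, lam=0.1, step=0.05, iters=2000):
    """Minimize ||K d - target||^2 + lam * ||d||^2 by projected gradient
    descent, clipping to d >= 0 each step so the dose stays physical.
    K is a (normalized) proximity blur matrix."""
    d = np.zeros_like(target)
    for _ in range(iters):
        grad = K.T @ (K @ d - target) + lam * d
        d = np.clip(d - step * grad, 0.0, None)  # projection: no negative doses
    return d

# Toy 1D proximity kernel: each exposed spot leaks into its neighbours.
n = 12
K = np.eye(n) + 0.3 * (np.eye(n, k=1) + np.eye(n, k=-1))
target = np.zeros(n)
target[3:9] = 1.0                     # desired exposure pattern

d = regularized_dose(K, target)
print(np.all(d >= 0))                 # True: the regularized solution is physical
print(np.round(K @ d, 2))             # achieved exposure approximates the target
```

    A plain unconstrained inverse of K can demand negative doses at feature edges; clipping them to zero after the fact degrades the image, which is the "negative dose" problem the regularizer addresses in a principled way.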

  1. A Note on Multigrid Theory for Non-nested Grids and/or Quadrature

    NASA Technical Reports Server (NTRS)

    Douglas, C. C.; Douglas, J., Jr.; Fyfe, D. E.

    1996-01-01

    We provide a unified theory for multilevel and multigrid methods when the usual assumptions are not present. For example, we do not assume that the solution spaces or the grids are nested. Further, we do not assume that there is an algebraic relationship between the linear algebra problems on different levels. What we provide is a computationally useful theory for adaptively changing levels. Theory is provided for multilevel correction schemes, nested iteration schemes, and one way (i.e., coarse to fine grid with no correction iterations) schemes. We include examples showing the applicability of this theory: finite element examples using quadrature in the matrix assembly and finite volume examples with non-nested grids. Our theory applies directly to other discretizations as well.

  2. Correction of patient motion in cone-beam CT using 3D-2D registration

    NASA Astrophysics Data System (ADS)

    Ouadah, S.; Jacobson, M.; Stayman, J. W.; Ehtiati, T.; Weiss, C.; Siewerdsen, J. H.

    2017-12-01

    Cone-beam CT (CBCT) is increasingly common in guidance of interventional procedures, but can be subject to artifacts arising from patient motion during fairly long (~5-60 s) scan times. We present a fiducial-free method to mitigate motion artifacts using 3D-2D image registration that simultaneously corrects residual errors in the intrinsic and extrinsic parameters of geometric calibration. The 3D-2D registration process registers each projection to a prior 3D image by maximizing gradient orientation using the covariance matrix adaptation-evolution strategy optimizer. The resulting rigid transforms are applied to the system projection matrices, and a 3D image is reconstructed via model-based iterative reconstruction. Phantom experiments were conducted using a Zeego robotic C-arm to image a head phantom undergoing 5-15 cm translations and 5-15° rotations. To further test the algorithm, clinical images were acquired with a CBCT head scanner in which long scan times were susceptible to significant patient motion. CBCT images were reconstructed using a penalized likelihood objective function. For phantom studies the structural similarity (SSIM) between motion-free and motion-corrected images was  >0.995, with significant improvement (p  <  0.001) compared to the SSIM values of uncorrected images. Additionally, motion-corrected images exhibited a point-spread function with full-width at half maximum comparable to that of the motion-free reference image. Qualitative comparison of the motion-corrupted and motion-corrected clinical images demonstrated a significant improvement in image quality after motion correction. This indicates that the 3D-2D registration method could provide a useful approach to motion artifact correction under assumptions of local rigidity, as in the head, pelvis, and extremities. The method is highly parallelizable, and the automatic correction of residual geometric calibration errors provides added benefit that could be valuable in routine use.
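
    The step in which the estimated rigid motion is folded into the system geometry can be illustrated directly: a per-view 4×4 homogeneous rigid transform `T` is absorbed into a 3×4 projection matrix `P` as `P @ T`. This is a minimal sketch with a toy projection matrix; the paper's matrices come from geometric calibration and its transforms from the 3D-2D registration:

    ```python
    import numpy as np

    def rigid_transform(rot_deg_z, t):
        """4x4 homogeneous rigid transform: rotation about z, then translation.
        (Illustrative; the paper estimates full 6-DoF transforms per view.)"""
        a = np.deg2rad(rot_deg_z)
        T = np.eye(4)
        T[:3, :3] = [[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]]
        T[:3, 3] = t
        return T

    def correct_projection_matrix(P, T):
        """Fold an estimated per-view rigid motion into the projection matrix."""
        return P @ T

    # Projecting with P @ T is equivalent to projecting the moved point with P:
    P = np.hstack([np.eye(3), np.array([[0.0], [0.0], [10.0]])])  # toy 3x4
    T = rigid_transform(5.0, [1.0, -2.0, 0.5])
    X = np.array([0.3, -0.7, 4.0, 1.0])                           # homogeneous point
    lhs = correct_projection_matrix(P, T) @ X
    rhs = P @ (T @ X)
    assert np.allclose(lhs, rhs)
    ```

    The equivalence is just associativity, but it is the reason motion can be compensated purely by modifying the projection matrices before reconstruction.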

  3. Color correction optimization with hue regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Heng; Liu, Huaping; Quan, Shuxue

    2011-01-01

    Previous work has suggested that observers are capable of judging the quality of an image without any knowledge of the original scene. When no reference is available, observers can extract the apparent objects in an image and compare them with the typical colors of similar objects recalled from memory. It is generally agreed that although perfect colorimetric rendering is not conspicuous and color errors are well tolerated, the appropriate rendition of certain memory colors such as skin, grass, and sky is an important factor in the overall perceived image quality. These colors are appreciated in a fairly consistent manner and are memorized with slightly different hues and higher color saturation. The aim of color correction in a digital color pipeline is to transform image data from a device-dependent color space to a target color space, usually through a color correction matrix that, in its most basic form, is optimized by linear regression between the two sets of data in the two color spaces so as to minimize the Euclidean color error. Unfortunately, this method can result in objectionable distortions if the color error biases certain colors undesirably. In this paper, we propose a color correction optimization method, with preferred color reproduction in mind, based on hue regularization, and present experimental results.
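
    The baseline fitting step, a least-squares 3×3 color correction matrix between device and target patch data, can be sketched as follows. The up-weighting of selected "memory color" patches is a hypothetical, much simplified stand-in for the paper's hue regularization term, and all patch data here are synthetic:

    ```python
    import numpy as np

    def fit_ccm(device_rgb, target_rgb, hue_weight=0.0, memory_idx=()):
        """Least-squares 3x3 color correction matrix with optional extra weight
        on 'memory color' patches (a crude stand-in for hue regularization).

        device_rgb, target_rgb : (N, 3) patch measurements in the two spaces.
        memory_idx             : indices of patches (skin, grass, sky, ...)
                                 whose errors are up-weighted by hue_weight.
        """
        w = np.ones(len(device_rgb))
        w[list(memory_idx)] += hue_weight
        W = np.sqrt(w)[:, None]
        # Weighted linear least squares: minimize ||W (device @ M - target)||^2
        M, *_ = np.linalg.lstsq(W * device_rgb, W * target_rgb, rcond=None)
        return M

    rng = np.random.default_rng(0)
    device = rng.uniform(0.0, 1.0, size=(24, 3))     # e.g. a 24-patch chart
    M_true = np.array([[ 1.5, -0.3, -0.2],
                       [-0.2,  1.4, -0.2],
                       [-0.1, -0.4,  1.5]])
    target = device @ M_true
    M = fit_ccm(device, target, hue_weight=4.0, memory_idx=(0, 5, 18))
    assert np.allclose(M, M_true)    # exact linear data -> exact recovery
    ```

    With real (noisy, nonlinear) patch data the weights shift residual error away from the memory colors, which is the trade-off the hue-regularized optimization is after.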

  4. Surgical Outcomes of Deep Superior Sulcus Augmentation Using Acellular Human Dermal Matrix in Anophthalmic or Phthisis Socket.

    PubMed

    Cho, Won-Kyung; Jung, Su-Kyung; Paik, Ji-Sun; Yang, Suk-Woo

    2016-07-01

    Patients with anophthalmic or phthisis socket suffer from cosmetic problems. To resolve those problems, the authors present the surgical outcomes of deep superior sulcus (DSS) augmentation using acellular dermal matrix in patients with anophthalmic or phthisis socket. The authors retrospectively reviewed anophthalmic or phthisis patients who underwent surgery for DSS augmentation using acellular dermal matrix. To evaluate surgical outcomes, the authors focused on 3 aspects: the possibility of wearing contact prosthesis, the degree of correction of the DSS, and any surgical complications. The degree of correction of DSS was classified as excellent: restoration of superior sulcus enough to remove sunken sulcus shadow; fair: gain of correction effect but sunken shadow remained; or fail: no effect of correction at all. Ten eyes of 10 patients were included. There was a mean 21.3 ± 37.1-month period from evisceration or enucleation to the operation for DSS augmentation. All patients could wear contact prosthesis after the operation (100%). The degree of correction was excellent in 8 patients (80%) and fair in 2. Three of 10 (30%) showed complications: eyelid entropion, upper eyelid multiple creases, and spontaneous wound dehiscence followed by inflammation after stitch removal. Uneven skin surface and paresthesia in the forehead area of the affected eye may be observed after surgery. The overall surgical outcomes were favorable, showing an excellent degree of correction of DSS and low surgical complication rates. This procedure is effective for patients who have DSS in the absence or atrophy of the eyeball.

  5. Excitation energies of dissociating H2: A problematic case for the adiabatic approximation of time-dependent density functional theory

    NASA Astrophysics Data System (ADS)

    Gritsenko, O. V.; van Gisbergen, S. J. A.; Görling, A.; Baerends, E. J.

    2000-11-01

    Time-dependent density functional theory (TDDFT) is applied for calculation of the excitation energies of the dissociating H2 molecule. The standard TDDFT method of adiabatic local density approximation (ALDA) totally fails to reproduce the potential curve for the lowest excited singlet 1Σu+ state of H2. Analysis of the eigenvalue problem for the excitation energies as well as direct derivation of the exchange-correlation (xc) kernel f_xc(r,r',ω) shows that ALDA fails due to breakdown of its simple spatially local approximation for the kernel. The analysis indicates a complex structure of the function f_xc(r,r',ω), which is revealed in a different behavior of the various matrix elements K^xc_{1c,1c} (between the highest occupied Kohn-Sham molecular orbital ψ1 and virtual MOs ψc) as a function of the bond distance R(H-H). The effect of nonlocality of f_xc(r,r') is modeled by using different expressions for the corresponding matrix elements of different orbitals. Asymptotically corrected ALDA (ALDA-AC) expressions for the matrix elements K^xc_{12,12}(στ) are proposed, while for other matrix elements the standard ALDA expressions are retained. This approach provides substantial improvement over the standard ALDA. In particular, the ALDA-AC curve for the lowest singlet excitation qualitatively reproduces the shape of the exact curve. It displays a minimum and approaches a relatively large positive energy at large R(H-H). ALDA-AC also produces a substantial improvement for the calculated lowest triplet excitation, which is known to suffer from the triplet instability problem of the restricted KS ground state. Failure of the ALDA for the excitation energies is related to the failure of the local density as well as generalized gradient approximations to reproduce correctly the polarizability of dissociating H2. The expression for the response function χ is derived to show the origin of the field-counteracting term in the xc potential, which is lacking in the local density and generalized gradient approximations and which is required to obtain a correct polarizability.

  6. Efficient Brownian Dynamics of rigid colloids in linear flow fields based on the grand mobility matrix

    NASA Astrophysics Data System (ADS)

    Palanisamy, Duraivelan; den Otter, Wouter K.

    2018-05-01

    We present an efficient general method to simulate in the Stokesian limit the coupled translational and rotational dynamics of arbitrarily shaped colloids subject to external potential forces and torques, linear flow fields, and Brownian motion. The colloid's surface is represented by a collection of spherical primary particles. The hydrodynamic interactions between these particles, here approximated at the Rotne-Prager-Yamakawa level, are evaluated only once to generate the body's (11 × 11) grand mobility matrix. The constancy of this matrix in the body frame, combined with the convenient properties of quaternions in rotational Brownian Dynamics, enables an efficient simulation of the body's motion. Simulations in quiescent fluids yield correct translational and rotational diffusion behaviour and sample Boltzmann's equilibrium distribution. Simulations of ellipsoids and spherical caps under shear, in the absence of thermal fluctuations, yield periodic orbits in excellent agreement with the theories by Jeffery and Dorrepaal. The time-varying stress tensors provide the Einstein coefficient and viscosity of dilute suspensions of these bodies.
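
    The key efficiency argument, that the mobility matrix is constant in the body frame and only needs rotating into the lab frame at each step via the orientation quaternion, can be sketched for the 6×6 translational/rotational block. The paper's grand mobility matrix is 11×11 and also couples to the rate of strain; the toy matrix below is an assumption:

    ```python
    import numpy as np

    def quat_to_rot(q):
        """Rotation matrix from a unit quaternion q = (w, x, y, z)."""
        w, x, y, z = q
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])

    def mobility_lab_frame(mu_body, q):
        """Rotate a 6x6 translational/rotational mobility matrix, computed
        once in the body frame, into the lab frame: mu_lab = B mu_body B^T
        with B = blockdiag(R, R)."""
        R = quat_to_rot(q)
        B = np.zeros((6, 6))
        B[:3, :3] = R
        B[3:, 3:] = R
        return B @ mu_body @ B.T

    # A symmetric positive-definite toy body-frame mobility:
    rng = np.random.default_rng(1)
    A = rng.standard_normal((6, 6))
    mu_body = A @ A.T + 6.0 * np.eye(6)
    q = np.array([np.cos(0.3), 0.0, 0.0, np.sin(0.3)])  # rotation about z
    mu_lab = mobility_lab_frame(mu_body, q)
    # Rotation preserves symmetry and the eigenvalues of the mobility:
    assert np.allclose(mu_lab, mu_lab.T)
    assert np.allclose(np.linalg.eigvalsh(mu_lab), np.linalg.eigvalsh(mu_body))
    ```

    Because only the cheap frame rotation is repeated, the expensive Rotne-Prager-Yamakawa assembly over all primary particles is paid once per body, not once per time step.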

  7. An Enhanced Butyrylcholinesterase Method to Measure Organophosphorus Nerve Agent Exposure in Humans

    PubMed Central

    Pantazides, Brooke G.; Watson, Caroline M.; Carter, Melissa D.; Crow, Brian S.; Perez, Jonas W.; Blake, Thomas A.; Thomas, Jerry D.; Johnson, Rudolph C.

    2016-01-01

    Organophosphorus nerve agent (OPNA) adducts to butyrylcholinesterase (BChE) can be used to confirm exposure in humans. A highly accurate method to detect G-series and V-series OPNA adducts to BChE in 75 μL of filtered blood, serum, or plasma has been developed using immunomagnetic separation (IMS) coupled with liquid chromatography tandem mass spectrometry (LC-MS/MS). The reported IMS method captures > 88% of the BChE in a specimen and corrects for matrix effects on peptide calibrators. The optimized method has been used to quantify baseline BChE levels (unadducted and OPNA-adducted) in a matched set of serum, plasma and whole blood (later processed in-house for plasma content) from 192 unexposed individuals to determine the interchangeability of the tested matrices. The results of these measurements demonstrate the ability to accurately measure BChE regardless of the format of the blood specimen received. Criteria for accepting or denying specimens were established through a series of sample stability and processing experiments. The results of these efforts are an optimized and rugged method that is transferrable to other laboratories and an increased understanding of the BChE biomarker in matrix. PMID:24604326

  8. Analysis and correction of gradient nonlinearity bias in ADC measurements

    PubMed Central

    Malyarenko, Dariya I.; Ross, Brian D.; Chenevert, Thomas L.

    2013-01-01

    Purpose Gradient nonlinearity of MRI systems leads to spatially-dependent b-values and consequently high non-uniformity errors (10–20%) in ADC measurements over clinically relevant fields-of-view. This work seeks a practical correction procedure that effectively reduces the observed ADC bias for media of arbitrary anisotropy in the fewest measurements. Methods The all-inclusive bias analysis considers spatial and time-domain cross-terms for diffusion and imaging gradients. The proposed correction is based on rotation of the gradient nonlinearity tensor into the diffusion gradient frame, where the spatial bias of the b-matrix can be approximated by its Euclidean norm. The correction efficiency of the proposed procedure is numerically evaluated for a range of model diffusion tensor anisotropies and orientations. Results Spatial dependence of the nonlinearity correction terms accounts for the bulk (75–95%) of the ADC bias for FA = 0.3–0.9. Residual ADC non-uniformity errors are amplified for anisotropic diffusion. This approximation obviates the need for full diffusion tensor measurement and diagonalization to derive a corrected ADC. Practical scenarios are outlined for implementation of the correction on clinical MRI systems. Conclusions The proposed simplified correction algorithm appears sufficient to control ADC non-uniformity errors in clinical studies using three orthogonal diffusion measurements. The most efficient reduction of ADC bias for anisotropic media is achieved with non-lab-based diffusion gradients. PMID:23794533
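
    A deliberately simplified, hypothetical version of the correction idea: if the nonlinearity tensor L at a voxel distorts a nominal unit diffusion gradient ĝ to L ĝ, the actual b-value is scaled by |L ĝ|², so an ADC computed with the nominal b is biased by the same factor and the bias can be divided out. The tensor values below are invented for illustration:

    ```python
    import numpy as np

    def corrected_adc(adc_measured, L, g_hat):
        """First-order gradient-nonlinearity correction of a single ADC value.

        L      : 3x3 gradient nonlinearity tensor at the voxel (hypothetical).
        g_hat  : nominal unit diffusion-gradient direction.
        The actual b-value is the nominal one times |L g_hat|^2, so the ADC
        computed with the nominal b carries that factor as bias.
        """
        b_scale = float(np.dot(L @ g_hat, L @ g_hat))  # actual-b / nominal-b
        return adc_measured / b_scale

    # At a location where the gradient is 5% too strong along x:
    L = np.diag([1.05, 1.0, 1.0])
    g = np.array([1.0, 0.0, 0.0])
    adc_meas = 1.10e-3              # mm^2/s, biased high by the stronger gradient
    adc_true = corrected_adc(adc_meas, L, g)
    assert adc_true < adc_meas      # correction removes the positive bias
    ```

    Approximating the spatial bias of the full b-matrix by a single scalar per direction is what lets the paper correct three orthogonal ADC measurements without a full tensor acquisition.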

  9. Non-collinear magnetism with analytic Bond-Order Potentials

    NASA Astrophysics Data System (ADS)

    Ford, Michael E.; Pettifor, D. G.; Drautz, Ralf

    2015-03-01

    The theory of analytic Bond-Order Potentials as applied to non-collinear magnetic structures of transition metals is extended to take into account explicit rotations of Hamiltonian and local moment matrix elements between locally and globally defined spin-coordinate systems. Expressions for the gradients of the energy with respect to the Hamiltonian matrix elements, the interatomic forces and the magnetic torques are derived. The method is applied to simulations of the rotation of magnetic moments in α iron, as well as α and β manganese, based on d-valent orthogonal tight-binding parametrizations of the electronic structure. A new weighted-average terminator is introduced to improve the convergence of the Bond-Order Potential energies and torques with respect to tight-binding reference values, although the general behavior is qualitatively correct for low-moment expansions.

  10. RSAT matrix-clustering: dynamic exploration and redundancy reduction of transcription factor binding motif collections

    PubMed Central

    Jaeger, Sébastien; Thieffry, Denis

    2017-01-01

    Transcription factor (TF) databases contain multitudes of binding motifs (TFBMs) from various sources, from which non-redundant collections are derived by manual curation. The advent of high-throughput methods stimulated the production of novel collections with increasing numbers of motifs. Meta-databases, built by merging these collections, contain redundant versions, because available tools are not suited to automatically identify and explore biologically relevant clusters among thousands of motifs. Motif discovery from genome-scale data sets (e.g. ChIP-seq) also produces redundant motifs, hampering the interpretation of results. We present matrix-clustering, a versatile tool that clusters similar TFBMs into multiple trees, and automatically creates non-redundant TFBM collections. A feature unique to matrix-clustering is its dynamic visualisation of aligned TFBMs, and its capability to simultaneously treat multiple collections from various sources. We demonstrate that matrix-clustering considerably simplifies the interpretation of combined results from multiple motif discovery tools, and highlights biologically relevant variations of similar motifs. In a large-scale application clustering ∼11 000 motifs from 24 entire databases, matrix-clustering correctly grouped motifs belonging to the same TF families and drastically reduced motif redundancy. matrix-clustering is integrated within the RSAT suite (http://rsat.eu/), accessible through a user-friendly web interface or command-line for its integration in pipelines. PMID:28591841

  11. Coil-to-coil physiological noise correlations and their impact on fMRI time-series SNR

    PubMed Central

    Triantafyllou, C.; Polimeni, J. R.; Keil, B.; Wald, L. L.

    2017-01-01

    Purpose Physiological nuisance fluctuations (“physiological noise”) are a major contribution to the time-series Signal to Noise Ratio (tSNR) of functional imaging. While thermal noise correlations between array coil elements have a well-characterized effect on the image Signal to Noise Ratio (SNR0), the element-to-element covariance matrix of the time-series fluctuations has not yet been analyzed. We examine this effect with a goal of ultimately improving the combination of multichannel array data. Theory and Methods We extend the theoretical relationship between tSNR and SNR0 to include a time-series noise covariance matrix Ψt, distinct from the thermal noise covariance matrix Ψ0, and compare its structure to Ψ0 and the signal coupling matrix SSH formed from the signal intensity vectors S. Results Inclusion of the measured time-series noise covariance matrix into the model relating tSNR and SNR0 improves the fit of experimental multichannel data and is shown to be distinct from Ψ0 or SSH. Conclusion Time-series noise covariances in array coils are found to differ from Ψ0 and more surprisingly, from the signal coupling matrix SSH. Correct characterization of the time-series noise has implications for the analysis of time-series data and for improving the coil element combination process. PMID:26756964
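
    The role of the covariance matrix in array combination can be sketched with the standard matched-filter SNR expression sqrt(sᴴ Ψ⁻¹ s), evaluated with either the thermal covariance Ψ0 or the time-series covariance Ψt. The coil vector and covariances below are synthetic, not measured data:

    ```python
    import numpy as np

    def array_snr(s, Psi):
        """Optimal covariance-weighted array-combined SNR: sqrt(s^H Psi^-1 s).
        s is the complex coil signal vector; Psi the noise covariance matrix
        (Psi0 for thermal noise, Psi_t for time-series noise)."""
        return float(np.real(np.conj(s) @ np.linalg.solve(Psi, s))) ** 0.5

    rng = np.random.default_rng(2)
    n_coils = 8
    s = rng.standard_normal(n_coils) + 1j * rng.standard_normal(n_coils)
    # Correlated noise: identity plus a shared (physiological-like) component.
    u = rng.standard_normal(n_coils)
    Psi = np.eye(n_coils) + 0.5 * np.outer(u, u)
    snr_opt = array_snr(s, Psi)              # uses the full covariance
    snr_naive = array_snr(s, np.eye(n_coils))  # pretends noise is uncorrelated

    # Whitening with the Cholesky factor of Psi gives the same combined SNR:
    Lc = np.linalg.cholesky(Psi)
    assert np.isclose(snr_opt, np.linalg.norm(np.linalg.solve(Lc, s)))
    ```

    The abstract's point is that substituting Ψ0 (or the signal coupling SSᴴ) for Ψt in such expressions mischaracterizes time-series data, since the three matrices are found to differ.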

  12. Line Interference Effects Using a Refined Robert-Bonamy Formalism: the Test Case of the Isotropic Raman Spectra of Autoperturbed N2

    NASA Technical Reports Server (NTRS)

    Boulet, Christian; Ma, Qiancheng; Thibault, Franck

    2014-01-01

    A symmetrized version of the recently developed refined Robert-Bonamy formalism [Q. Ma, C. Boulet, and R. H. Tipping, J. Chem. Phys. 139, 034305 (2013)] is proposed. This model takes into account line coupling effects and hence allows the calculation of the off-diagonal elements of the relaxation matrix, without neglecting the rotational structure of the perturbing molecule. The formalism is applied to the isotropic Raman spectra of autoperturbed N2 for which a benchmark quantum relaxation matrix has recently been proposed. The consequences of the classical path approximation are carefully analyzed. Methods correcting for effects of inelasticity are considered. While in the right direction, these corrections appear to be too crude to provide off diagonal elements which would yield, via the sum rule, diagonal elements in good agreement with the quantum results. In order to overcome this difficulty, a re-normalization procedure is applied, which ensures that the off-diagonal elements do lead to the exact quantum diagonal elements. The agreement between the (re-normalized) semi-classical and quantum relaxation matrices is excellent, at least for the Raman spectra of N2, opening the way to the analysis of more complex molecular systems.
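
    A much simplified, population-free sketch of such a re-normalization: scale the off-diagonal elements of the semiclassical relaxation matrix, column by column, so that a sum rule reproduces prescribed "quantum" diagonal elements. The sum rule used here, W_jj = −Σ_{k≠j} W_kj, and all numbers are illustrative assumptions, not the paper's actual population-weighted rule:

    ```python
    import numpy as np

    def renormalize_offdiagonal(W_semi, diag_quantum):
        """Scale off-diagonal elements column by column so that the sum rule
        W_jj = -sum_{k != j} W_kj reproduces given diagonal elements.
        (A deliberately simplified stand-in for the paper's procedure.)"""
        W = W_semi.astype(float).copy()
        for j in range(W.shape[0]):
            off = np.delete(W[:, j], j)
            W[:, j] *= -diag_quantum[j] / off.sum()   # rescale the column
            W[j, j] = diag_quantum[j]                 # impose the exact diagonal
        return W

    W_semi = np.array([[-1.0, 0.2, 0.3],
                       [ 0.4, -0.9, 0.2],
                       [ 0.5, 0.6, -0.7]])            # toy semiclassical matrix
    diag_q = np.array([-0.8, -1.0, -0.6])             # "exact quantum" diagonal
    W = renormalize_offdiagonal(W_semi, diag_q)
    # The diagonal now matches the quantum values and the sum rule holds:
    assert np.allclose(np.diag(W), diag_q)
    assert np.allclose(W.sum(axis=0), 0.0)
    ```

    The attraction of such a constraint is exactly what the abstract describes: the semiclassical off-diagonal couplings are retained in shape but forced to be consistent with the exact quantum diagonal elements.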

  13. Lorentz symmetry violation with higher-order operators and renormalization

    NASA Astrophysics Data System (ADS)

    Nascimento, J. R.; Petrov, A. Yu; Reyes, C. M.

    2018-01-01

    Effective field theory has proven to be a powerful method for searching for quantum gravity effects, and in particular for CPT and Lorentz symmetry violation. In this work we study an effective field theory with higher-order Lorentz violation; specifically, we consider a modified model with scalars and modified fermions interacting via a Yukawa coupling. We study its renormalization properties, that is, its radiative corrections and renormalization conditions, in the light of the requirements of having a finite and unitary S-matrix.

  14. Location of laccase in ordered mesoporous materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayoral, Álvaro; Gascón, Victoria; Blanco, Rosa M.

    2014-11-01

    Functionalization with amine groups was performed on SBA-15, and its effect on laccase immobilization was compared with that of a Periodic Mesoporous Aminosilica. A method to encapsulate the laccase in situ has now been developed. In this work, spherical aberration (Cs)-corrected scanning transmission electron microscopy, combined with a high-angle annular dark-field detector and electron energy-loss spectroscopy, was applied to identify the exact location of the enzyme in the matrix formed by the ordered mesoporous solids.

  15. Location of laccase in ordered mesoporous materials

    NASA Astrophysics Data System (ADS)

    Mayoral, Álvaro; Gascón, Victoria; Blanco, Rosa M.; Márquez-Álvarez, Carlos; Díaz, Isabel

    2014-11-01

    Functionalization with amine groups was performed on SBA-15, and its effect on laccase immobilization was compared with that of a Periodic Mesoporous Aminosilica. A method to encapsulate the laccase in situ has now been developed. In this work, spherical aberration (Cs)-corrected scanning transmission electron microscopy, combined with a high-angle annular dark-field detector and electron energy-loss spectroscopy, was applied to identify the exact location of the enzyme in the matrix formed by the ordered mesoporous solids.

  16. Modulated error diffusion CGHs for neural nets

    NASA Astrophysics Data System (ADS)

    Vermeulen, Pieter J. E.; Casasent, David P.

    1990-05-01

    New modulated error diffusion CGHs (computer-generated holograms) for optical computing are considered. Specific attention is given to their use in optical matrix-vector, associative processor, neural net, and optical interconnection architectures. We consider lensless CGH systems (many CGHs require an external Fourier transform (FT) lens), the Fresnel sampling requirements, the effects of finite CGH apertures (sample-and-hold inputs), dot size correction (for laser recorders), and new applications for this novel encoding method, with particular attention to quantization noise effects.

  17. Morphometric classification of Spanish thoroughbred stallion sperm heads.

    PubMed

    Hidalgo, Manuel; Rodríguez, Inmaculada; Dorado, Jesús; Soler, Carles

    2008-01-30

    Semen samples were collected from 12 stallions and assessed for sperm morphometry with the Sperm Class Analyzer (SCA) computer-assisted system. A discriminant analysis was performed on the morphometric data to obtain a classification matrix for sperm head shape. Thereafter, we defined six types of sperm head shape. Classification of sperm heads by this method achieved an overall correct assignment of 90.1%. Moreover, significant differences (p<0.05) were found between animals for all the sperm head morphometric parameters assessed.

  18. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones

    PubMed Central

    Yoon, Donghwan; Kee, Changdon; Seo, Jiwon; Park, Byungwoon

    2016-01-01

    The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angle provided in the position-related output of Android’s LocationManager, and this is transformed to Earth-centered, Earth-fixed coordinates for use. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%–60%, thereby reducing the existing error of 3–4 m to just 1 m. The proposed algorithm enables the position error to be directly corrected via software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the resulting performance improvement are expected to be highly beneficial in terms of portability and cost savings. PMID:27322284
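
    The two steps described above, forming line-of-sight unit vectors from elevation/azimuth and assembling them into an observation matrix that projects per-satellite corrections into the position domain, can be sketched in a single-constellation, local east-north-up toy. The geometry and corrections below are invented; the paper works in Earth-centered, Earth-fixed coordinates and handles multiple constellations:

    ```python
    import numpy as np

    def los_enu(el_deg, az_deg):
        """Line-of-sight unit vector (East, North, Up) from elevation/azimuth."""
        el, az = np.deg2rad(el_deg), np.deg2rad(az_deg)
        return np.array([np.cos(el) * np.sin(az),
                         np.cos(el) * np.cos(az),
                         np.sin(el)])

    def position_domain_correction(els, azs, prc):
        """Project per-satellite pseudorange corrections (prc) into a position
        correction, the core idea of DGNSS-CP.  Observation matrix rows are
        [-los, 1], the trailing 1 absorbing the common clock correction.
        (Simplified single-constellation, ENU-frame sketch.)"""
        G = np.array([np.r_[-los_enu(e, a), 1.0] for e, a in zip(els, azs)])
        dx, *_ = np.linalg.lstsq(G, np.asarray(prc, float), rcond=None)
        return dx[:3], dx[3]        # ENU position correction, clock correction

    # Five satellites; corrections consistent with a pure 1.0 m clock offset
    # should yield a zero position correction:
    els = [60, 45, 30, 70, 20]
    azs = [0, 90, 180, 270, 135]
    denu, dclk = position_domain_correction(els, azs, [1.0] * 5)
    assert np.isclose(dclk, 1.0)
    assert np.allclose(denu, 0.0, atol=1e-9)
    ```

    Applying the correction in the position domain is what makes the scheme feasible on smartphones, where the raw range-domain measurements are inaccessible.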

  19. Identification of bacteria isolated from veterinary clinical specimens using MALDI-TOF MS.

    PubMed

    Pavlovic, Melanie; Wudy, Corinna; Zeller-Peronnet, Veronique; Maggipinto, Marzena; Zimmermann, Pia; Straubinger, Alix; Iwobi, Azuka; Märtlbauer, Erwin; Busch, Ulrich; Huber, Ingrid

    2015-01-01

    Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) has recently emerged as a rapid and accurate identification method for bacterial species. Although it has been successfully applied for the identification of human pathogens, it has so far not been well evaluated for routine identification of veterinary bacterial isolates. This study was performed to compare and evaluate the performance of MALDI-TOF MS based identification of veterinary bacterial isolates with commercially available conventional test systems. Discrepancies of both methods were resolved by sequencing 16S rDNA and, if necessary, the infB gene for Actinobacillus isolates. A total of 375 consecutively isolated veterinary samples were collected. Among the 357 isolates (95.2%) correctly identified at the genus level by MALDI-TOF MS, 338 of them (90.1% of the total isolates) were also correctly identified at the species level. Conventional methods offered correct species identification for 319 isolates (85.1%). MALDI-TOF identification therefore offered more accurate identification of veterinary bacterial isolates. An update of the in-house mass spectra database with additional reference spectra clearly improved the identification results. In conclusion, the presented data suggest that MALDI-TOF MS is an appropriate platform for classification and identification of veterinary bacterial isolates.

  20. Reconstruction of 2D PET data with Monte Carlo generated system matrix for generalized natural pixels

    NASA Astrophysics Data System (ADS)

    Vandenberghe, Stefaan; Staelens, Steven; Byrne, Charles L.; Soares, Edward J.; Lemahieu, Ignace; Glick, Stephen J.

    2006-06-01

    In discrete detector PET, natural pixels are image basis functions calculated from the response of detector pairs. By using reconstruction with natural pixel basis functions, the discretization of the object into a predefined grid can be avoided. Here, we propose to use generalized natural pixel reconstruction. Using this approach, the basis functions are not the detector sensitivity functions as in the natural pixel case but uniform parallel strips. The backprojection of the strip coefficients results in the reconstructed image. This paper proposes an easy and efficient way to generate the system matrix M directly by Monte Carlo simulation. Elements of the generalized natural pixel system matrix are formed by calculating the intersection of a parallel strip with the detector sensitivity function. These generalized natural pixels are easier to use than conventional natural pixels because the final step from solution to a square pixel representation is done by simple backprojection. Due to rotational symmetry in the PET scanner, the matrix M is block circulant and only the first block row needs to be stored. Data were generated using a fast Monte Carlo simulator using ray tracing. The proposed method was compared to a list-mode MLEM algorithm, which used ray tracing for forward and backprojection. Comparison of the algorithms with different phantoms showed that an improved resolution can be obtained using generalized natural pixel reconstruction with accurate system modelling. In addition, it was noted that for the same resolution a lower noise level is present in this reconstruction. A numerical observer study showed the proposed method exhibited increased performance as compared to a standard list-mode EM algorithm. In another study, more realistic data were generated using the GATE Monte Carlo simulator. For these data, a more uniform contrast recovery and a better contrast-to-noise performance were observed. It was observed that major improvements in contrast recovery were obtained with MLEM when the correct system matrix was used instead of simple ray tracing. The correct modelling was the major cause of improved contrast for the same background noise. Less important factors were the choice of the algorithm (MLEM performed better than ART) and the basis functions (generalized natural pixels gave better results than pixels).
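
    Once the system matrix M is available, the MLEM reconstruction referred to above reduces to the standard multiplicative update. This is a generic MLEM sketch on a small dense toy matrix, not the paper's block-circulant, Monte Carlo-generated implementation:

    ```python
    import numpy as np

    def mlem(M, y, n_iter=2000, eps=1e-12):
        """Multiplicative ML-EM update with an explicit system matrix M
        (measurements x basis coefficients):
            x <- x * M^T(y / (M x)) / M^T 1
        The paper stores only the first block-row of the block-circulant M;
        here M is simply a small dense matrix.
        """
        x = np.ones(M.shape[1])
        sens = M.sum(axis=0)                        # sensitivity image, M^T 1
        for _ in range(n_iter):
            ratio = y / np.maximum(M @ x, eps)      # measured / predicted
            x *= (M.T @ ratio) / np.maximum(sens, eps)
        return x

    # Noiseless toy data: MLEM should reproduce the strip coefficients.
    rng = np.random.default_rng(3)
    M = rng.uniform(0.0, 1.0, size=(40, 10))
    x_true = rng.uniform(0.5, 2.0, size=10)
    y = M @ x_true
    x = mlem(M, y)
    assert x.min() >= 0.0                           # non-negativity preserved
    ```

    The abstract's conclusion maps onto this sketch directly: the quality of the reconstruction hinges far more on how faithfully M models the scanner than on which iterative algorithm consumes it.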

  1. Fast, rugged and sensitive ultra high pressure liquid chromatography tandem mass spectrometry method for analysis of cyanotoxins in raw water and drinking water--First findings of anatoxins, cylindrospermopsins and microcystin variants in Swedish source waters and infiltration ponds.

    PubMed

    Pekar, Heidi; Westerberg, Erik; Bruno, Oscar; Lääne, Ants; Persson, Kenneth M; Sundström, L Fredrik; Thim, Anna-Maria

    2016-01-15

    Freshwater blooms of cyanobacteria (blue-green algae) in source waters are generally composed of several different strains with the capability to produce a variety of toxins. The major exposure routes for humans are direct contact with recreational waters and ingestion of drinking water not efficiently treated. The ultra high pressure liquid chromatography tandem mass spectrometry based analytical method presented here allows simultaneous analysis of 22 cyanotoxins from different toxin groups, including anatoxins, cylindrospermopsins, nodularin and microcystins, in raw water and drinking water. The use of reference standards enables correct identification of the toxins as well as precise quantification; owing to matrix effects, recovery correction is required. The multi-toxin group method presented here does not compromise sensitivity, despite the large number of analytes. The limit of quantification was set to 0.1 μg/L for 75% of the cyanotoxins in drinking water and 0.5 μg/L for all cyanotoxins in raw water, which is compliant with the WHO guidance value for microcystin-LR. The matrix effects experienced during analysis were reasonable for most analytes, considering the large volume injected into the mass spectrometer. The time of analysis, including lysing of cell-bound toxins, is less than three hours. Furthermore, the method was tested in Swedish source waters and infiltration ponds, resulting in evidence of the presence of anatoxin, homo-anatoxin, cylindrospermopsin and several variants of microcystins for the first time in Sweden, proving its usefulness. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  2. Simultaneous LC-MS/MS determination of 40 legal and illegal psychoactive drugs in breast and bovine milk.

    PubMed

    López-García, Ester; Mastroianni, Nicola; Postigo, Cristina; Valcárcel, Yolanda; González-Alonso, Silvia; Barceló, Damia; López de Alda, Miren

    2018-04-15

    This work presents a fast, sensitive and reliable multi-residue methodology based on fat and protein precipitation and liquid chromatography-tandem mass spectrometry for the determination of common legal and illegal psychoactive drugs, and major metabolites, in breast milk. One-fourth of the 40 target analytes is investigated for the first time in this biological matrix. The method was validated in breast milk and also in various types of bovine milk, as tranquilizers are occasionally administered to food-producing animals. Absolute recoveries were satisfactory for 75% of the target analytes. The use of isotopically labeled compounds assisted in correcting analyte losses due to ionization suppression matrix effects (higher in whole milk than in the other investigated milk matrices) and ensured the reliability of the results. Average method limits of quantification ranged between 0.4 and 6.8 ng/mL. Application of the developed method showed the presence of caffeine in breast milk samples (12-179 ng/mL). Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. The use of Gram stain and matrix-assisted laser desorption ionization time-of-flight mass spectrometry on positive blood culture: synergy between new and old technology.

    PubMed

    Fuglsang-Damgaard, David; Nielsen, Camilla Houlberg; Mandrup, Elisabeth; Fuursted, Kurt

    2011-10-01

    Matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF-MS) is promising as an alternative to more costly and cumbersome methods for direct identification in blood cultures. We wanted to evaluate a simplified pre-treatment method for using MALDI-TOF-MS directly on positive blood cultures from the BacT/Alert blood culture system, and to test an algorithm combining the result of the initial microscopy with the identification suggested by MALDI-TOF-MS. Using the recommended cut-off score of 1.7, the best results were obtained among Gram-negative rods, with correct identifications in 91% of Enterobacteriaceae and 83% of aerobic/non-fermentative Gram-negative rods, whereas results were more modest among Gram-positive cocci, with correct identifications in 52% of staphylococci, 54% of enterococci and only 20% of streptococci. Combining the results of the Gram stain with the top reports by MALDI-TOF-MS increased the sensitivity from 91% to 93% in the score range from 1.5 to 1.7 and from 48% to 85% in the score range from 1.3 to 1.5. Thus, using this strategy and accepting a cut-off of 1.3 instead of the suggested 1.7, overall sensitivity could be increased from 88.1% to 96.3%. MALDI-TOF-MS is an efficient method for direct routine identification of bacterial isolates in blood culture, especially when combined with the result of the Gram stain. © 2011 The Authors. APMIS © 2011 APMIS.
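
    The combined decision rule reported above can be sketched as follows (a simplification; the 1.3/1.7 thresholds are from the abstract, while the function shape and names are assumptions):

```python
def accept_identification(score, maldi_gram, microscopy_gram):
    """Accept a MALDI-TOF-MS identification outright at score >= 1.7, or
    down to score 1.3 when it agrees with the initial Gram stain."""
    if score >= 1.7:
        return True
    return score >= 1.3 and maldi_gram == microscopy_gram
```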

  4. Perturbative triples correction for local pair natural orbital based explicitly correlated CCSD(F12*) using Laplace transformation techniques.

    PubMed

    Schmitz, Gunnar; Hättig, Christof

    2016-12-21

    We present an implementation of pair natural orbital coupled cluster singles and doubles with perturbative triples, PNO-CCSD(T), which avoids the quasi-canonical triples approximation (T0) where couplings due to off-diagonal Fock matrix elements are neglected. A numerical Laplace transformation of the canonical expression for the perturbative (T) triples correction is used to avoid an I/O and storage bottleneck for the triples amplitudes. Results for a test set of reaction energies show that only very few Laplace grid points are needed to obtain converged energy differences and that PNO-CCSD(T) is a more robust approximation than PNO-CCSD(T0) with a reduced mean absolute deviation from canonical CCSD(T) results. We combine the PNO-based (T) triples correction with the explicitly correlated PNO-CCSD(F12*) method and investigate the use of specialized F12-PNOs in the conventional triples correction. We find that no significant additional errors are introduced and that PNO-CCSD(F12*)(T) can be applied in a black box manner.
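
    The identity behind the Laplace-transformed triples correction is 1/x = ∫₀^∞ exp(−x t) dt for x > 0, which lets an orbital-energy denominator be replaced by a short sum of exponentials. A brute-force numerical check of the identity (the production scheme uses only a few optimized grid points, not this naive trapezoidal rule):

```python
import math

def laplace_denominator(x, t_max=60.0, n=6000):
    """Approximate 1/x via the Laplace identity 1/x = int_0^inf exp(-x t) dt,
    truncated at t_max and evaluated with the trapezoidal rule."""
    h = t_max / n
    total = 0.5 * (1.0 + math.exp(-x * t_max))  # endpoint terms
    for k in range(1, n):
        total += math.exp(-x * k * h)
    return total * h
```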

  5. Half-lives of α-decaying nuclei in the medium-mass region within the transfer matrix method

    NASA Astrophysics Data System (ADS)

    Wu, Shuangxiang; Qian, Yibin; Ren, Zhongzhou

    2018-05-01

    The α-decay half-lives of even-even nuclei from Sm to Th are systematically studied based on the transfer matrix method. For the nuclear potential, a cosh-parametrized form is applied to calculate the penetration probability. Through a least-squares fit to experimental half-lives, we optimize the parameters in the potential and the α preformation factor P0. During this process, P0 is treated as a constant for each parent nucleus. Eventually, the calculated half-lives are found to agree well with the experimental data, which verifies the accuracy of the present approach. Furthermore, in recent studies, P0 is regulated by the shell and pairing effects plus the nuclear deformation. To this end, P0 is here associated with a structural quantity, namely the microscopic correction of the nuclear mass (Emic). In this way, the agreement between theory and experiment is improved by more than 20%, validating this treatment of P0 in the Emic scheme.
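
    The transfer-matrix idea can be illustrated on a toy one-dimensional piecewise-constant barrier (a generic quantum-scattering sketch in units ħ = m = 1, not the cosh-parametrized nuclear potential used in the paper):

```python
import cmath

def transmission(E, slabs):
    """Transmission probability through piecewise-constant slabs
    [(V, width), ...], with V = 0 on both sides of the barrier."""
    xs = [0.0]                      # interface positions
    for _, width in slabs:
        xs.append(xs[-1] + width)
    k_free = cmath.sqrt(2.0 * E)
    ks = [k_free] + [cmath.sqrt(2.0 * (E - V)) for V, _ in slabs] + [k_free]
    A, B = 1.0 + 0j, 0.0 + 0j       # pure transmitted wave on the far right
    for i in range(len(slabs), -1, -1):   # sweep interfaces right to left
        kL, kR, x = ks[i], ks[i + 1], xs[i]
        aR = A * cmath.exp(1j * kR * x)
        bR = B * cmath.exp(-1j * kR * x)
        # match psi and psi' across the interface at x
        aL = 0.5 * ((1 + kR / kL) * aR + (1 - kR / kL) * bR)
        bL = 0.5 * ((1 - kR / kL) * aR + (1 + kR / kL) * bR)
        A = aL * cmath.exp(-1j * kL * x)
        B = bL * cmath.exp(1j * kL * x)
    return 1.0 / abs(A) ** 2        # |t|^2, since k is equal on both sides
```

    For a single rectangular barrier this reproduces the textbook result T = [1 + V² sinh²(κa) / (4E(V − E))]⁻¹.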

  6. Algorithms and applications of aberration correction and American standard-based digital evaluation in surface defects evaluating system

    NASA Astrophysics Data System (ADS)

    Wu, Fan; Cao, Pin; Yang, Yongying; Li, Chen; Chai, Huiting; Zhang, Yihui; Xiong, Haoliang; Xu, Wenlin; Yan, Kai; Zhou, Lin; Liu, Dong; Bai, Jian; Shen, Yibing

    2016-11-01

    The inspection of surface defects is a significant part of optical surface quality evaluation. Based on microscopic scattering dark-field imaging together with sub-aperture scanning and stitching, the Surface Defects Evaluating System (SDES) can acquire a full-aperture image of defects on an optical element's surface and then extract geometric size and position information of the defects with image processing such as feature recognition. However, optical distortion in the SDES badly affects the inspection precision of surface defects. In this paper, a distortion correction algorithm based on a standard lattice pattern is proposed. Feature extraction, polynomial fitting and bilinear interpolation, in combination with adjacent sub-aperture stitching, are employed to correct the optical distortion of the SDES automatically and with high accuracy. Subsequently, in order to digitally evaluate surface defects against the American military standard MIL-PRF-13830B using the surface defect information obtained from the SDES, an American-standard-based digital evaluation algorithm is proposed, whose core is a judgment method for surface defect concentration. The judgment method establishes a weight region for each defect and calculates defect concentration from the overlap of weight regions. This algorithm takes full advantage of the convenience of matrix operations and has the merits of low complexity and fast execution, which make it well suited to high-efficiency inspection of surface defects. Finally, various experiments are conducted and the correctness of these algorithms is verified. At present, these algorithms are in use in the SDES.
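
    The bilinear-interpolation step used when resampling a distortion-corrected image can be sketched as follows (a standalone illustration; the SDES implementation itself is not described in this abstract):

```python
def bilinear_sample(img, x, y):
    """Sample a 2D grid img[row][col] at fractional coordinates
    (x = column, y = row) by bilinear interpolation."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return (img[y0][x0] * (1 - dx) * (1 - dy)
            + img[y0][x0 + 1] * dx * (1 - dy)
            + img[y0 + 1][x0] * (1 - dx) * dy
            + img[y0 + 1][x0 + 1] * dx * dy)
```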

  7. Color correction pipeline optimization for digital cameras

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Bruna, Arcangelo R.; Naccari, Filippo; Schettini, Raimondo

    2013-04-01

    The processing pipeline of a digital camera converts the RAW image acquired by the sensor into a representation of the original scene that should be as faithful as possible. Two modules are mainly responsible for the color-rendering accuracy of a digital camera: the illuminant estimation and correction module, and the color matrix transformation aimed at adapting the color response of the sensor to a standard color space. Together, these two modules form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module, in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talk between the modules of the pipeline can lead to higher color-rendition accuracy.
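
    The two modules can be sketched as a diagonal (von Kries-style) illuminant correction followed by a 3×3 color-matrix transform (illustrative values only; the optimized, content-adaptive matrices of the paper are not reproduced here):

```python
def correct_illuminant(rgb, gains):
    """Per-channel illuminant correction (diagonal von Kries model)."""
    return tuple(channel * gain for channel, gain in zip(rgb, gains))

def apply_color_matrix(rgb, m):
    """3x3 matrix mapping sensor RGB into a standard color space."""
    return tuple(sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3))
```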

  8. A (72, 36; 15) box code

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1993-01-01

    A (72,36;15) box code is constructed as a 9 x 8 matrix whose columns add to form an extended BCH-Hamming (8,4;4) code and whose rows sum to odd or even parity. The newly constructed code, due to its matrix form, is easily decodable for all seven-error and many eight-error patterns. The code comes from a slight modification in the parity (eighth) dimension of the Reed-Solomon (8,4;5) code over GF(512). Error correction uses the row sum parity information to detect errors, which then become erasures in a Reed-Solomon correction algorithm.
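
    The erasure-marking step can be sketched as a row-parity screen (even parity assumed for every row, purely for illustration; as stated above, the actual code assigns odd or even parity row by row):

```python
def erasure_rows(matrix_bits):
    """Return indices of rows failing an even-parity check; these rows
    become erasure positions for the Reed-Solomon correction step."""
    return [i for i, row in enumerate(matrix_bits) if sum(row) % 2 == 1]
```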

  9. Integrated Circuit For Simulation Of Neural Network

    NASA Technical Reports Server (NTRS)

    Thakoor, Anilkumar P.; Moopenn, Alexander W.; Khanna, Satish K.

    1988-01-01

    Ballast resistors deposited on top of circuit structure. Cascadable, programmable binary connection matrix fabricated in VLSI form as basic building block for assembly of like units into content-addressable electronic memory matrices operating somewhat like networks of neurons. Connections formed during storage of data, and data recalled from memory by prompting matrix with approximate or partly erroneous signals. Redundancy in pattern of connections causes matrix to respond with correct stored data.
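
    The recall behaviour described above resembles a Hopfield-style associative memory; a minimal software sketch (Hebbian ±1 weights, an assumption standing in for the actual VLSI binary connection scheme):

```python
def hebbian_weights(patterns):
    """Outer-product (Hebbian) weights for +/-1 patterns, zero diagonal."""
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall_step(w, state):
    """One synchronous update: each unit takes the sign of its weighted input."""
    return [1 if sum(wij * s for wij, s in zip(row, state)) >= 0 else -1
            for row in w]
```

    Prompting the network with a partly erroneous pattern drives it back toward the stored one.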

  10. Correction of partial volume effect in (18)F-FDG PET brain studies using coregistered MR volumes: voxel based analysis of tracer uptake in the white matter.

    PubMed

    Coello, Christopher; Willoch, Frode; Selnes, Per; Gjerstad, Leif; Fladby, Tormod; Skretting, Arne

    2013-05-15

    A voxel-based algorithm to correct for partial volume effects in PET brain volumes is presented. This method (named LoReAn) is based on MRI-based segmentation of anatomical regions and accurate measurement of the effective point spread function of the PET imaging process. The objective is to correct for the spill-out of activity from high-uptake anatomical structures (e.g. grey matter) into low-uptake anatomical structures (e.g. white matter) in order to quantify physiological uptake in the white matter. The new algorithm is presented and validated against the state-of-the-art region-based geometric transfer matrix (GTM) method with synthetic and clinical data. Using synthetic data, both bias and coefficient of variation in the white matter region were improved using LoReAn compared to GTM. An increased number of anatomical regions does not affect the bias (<5%), and misregistration affects the LoReAn and GTM algorithms equally. The LoReAn algorithm appears to be a simple and promising voxel-based algorithm for studying metabolism in white matter regions. Copyright © 2013 Elsevier Inc. All rights reserved.
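
    The GTM baseline that LoReAn is compared against solves a small linear system: the observed regional means equal the geometric transfer matrix applied to the true activities. A two-region (grey/white matter) sketch with made-up spill-over fractions:

```python
def gtm_correct_2x2(t, observed):
    """Solve t @ true = observed for two regions by Cramer's rule, where
    t[i][j] is the fraction of region j's activity observed in region i."""
    (a, b), (c, d) = t
    det = a * d - b * c
    o1, o2 = observed
    return ((d * o1 - b * o2) / det, (a * o2 - c * o1) / det)
```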

  11. Tensor voting for image correction by global and local intensity alignment.

    PubMed

    Jia, Jiaya; Tang, Chi-Keung

    2005-01-01

    This paper presents a voting method to perform image correction by global and local intensity alignment. The key to our modeless approach is the estimation of global and local replacement functions by reducing the complex estimation problem to the robust 2D tensor voting in the corresponding voting spaces. No complicated model for replacement function (curve) is assumed. Subject to the monotonic constraint only, we vote for an optimal replacement function by propagating the curve smoothness constraint using a dense tensor field. Our method effectively infers missing curve segments and rejects image outliers. Applications using our tensor voting approach are proposed and described. The first application consists of image mosaicking of static scenes, where the voted replacement functions are used in our iterative registration algorithm for computing the best warping matrix. In the presence of occlusion, our replacement function can be employed to construct a visually acceptable mosaic by detecting occlusion which has large and piecewise constant color. Furthermore, by the simultaneous consideration of color matches and spatial constraints in the voting space, we perform image intensity compensation and high contrast image correction using our voting framework, when only two defective input images are given.

  12. Full-degrees-of-freedom frequency based substructuring

    NASA Astrophysics Data System (ADS)

    Drozg, Armin; Čepon, Gregor; Boltežar, Miha

    2018-01-01

    Dividing a whole system into multiple subsystems that are analysed dynamically in separation is common practice in the field of structural dynamics. The substructuring process improves computational efficiency and enables an effective realization of local optimization, model updating and sensitivity analyses. This paper focuses on frequency-based substructuring methods using experimentally obtained data. An efficient substructuring process has already been demonstrated using numerically obtained frequency-response functions (FRFs). However, the experimental process suffers from several difficulties, many of which are related to the rotational degrees of freedom. Thus, several attempts have been made to measure, expand or combine numerical correction methods in order to obtain a complete response model. The proposed methods have numerous limitations and are not yet generally applicable. Therefore, in this paper an alternative approach based solely on experimentally obtained data is proposed. The force-excited part of the FRF matrix is measured with piezoelectric translational and rotational direct accelerometers. The incomplete moment-excited part of the FRF matrix is expanded based on the modal model. The proposed procedure is integrated in a Lagrange Multiplier Frequency Based Substructuring method and demonstrated on a simple beam structure, where the connection coordinates are mainly associated with the rotational degrees of freedom.

  13. Development and validation of a high throughput assay for the quantification of multiple green tea-derived catechins in human plasma.

    PubMed

    Mawson, Deborah H; Jeffrey, Keon L; Teale, Philip; Grace, Philip B

    2018-06-19

    A rapid, accurate and robust method for the determination of catechin (C), epicatechin (EC), gallocatechin (GC), epigallocatechin (EGC), catechin gallate (Cg), epicatechin gallate (ECg), gallocatechin gallate (GCg) and epigallocatechin gallate (EGCg) concentrations in human plasma has been developed. The method utilises protein precipitation following enzyme hydrolysis, with chromatographic separation and detection using reversed-phase liquid chromatography-tandem mass spectrometry (LC-MS/MS). Traditional issues such as lengthy chromatographic run times, sample and extract stability, and the lack of suitable internal standards have been addressed. The method has been evaluated using a comprehensive validation procedure, confirming linearity over appropriate concentration ranges, and inter- and intra-batch precision and accuracy within suitable thresholds (precision within 13.8% and accuracy within 12.4%). Recoveries of analytes were found to be consistent between different matrix samples, compensated for using suitable internal markers and within the performance of the instrumentation used. Similarly, chromatographic interferences have been corrected using the internal markers selected. Stability of all analytes in matrix is demonstrated over 32 days and throughout the extraction conditions. This method is suitable for high-throughput sample analysis studies. This article is protected by copyright. All rights reserved.

  14. Conservative supra-characteristics method for splitting the hyperbolic systems of gasdynamics for real and perfect gases

    NASA Technical Reports Server (NTRS)

    Lombard, C. K.

    1982-01-01

    A conservative flux-difference splitting is presented for the hyperbolic systems of gasdynamics. The stable, robust method is suitable for wide application in a variety of schemes, explicit or implicit, iterative or direct, for marching in either time or space. The splitting is modeled on the local quasi-one-dimensional characteristics system for multi-dimensional flow, similar to Chakravarthy's nonconservative split-coefficient-matrix method; but, as a result of maintaining global conservation, the method is able to capture sharp shocks correctly. The embedded characteristics formulation is cast in a primitive variable, the volumetric internal energy (rather than the pressure), that is effective for treating real as well as perfect gases. Finally, the relationship of the splitting to characteristics boundary conditions is discussed, and the associated conservative matrix formulation for a computed blown-wall boundary condition is developed as an example. The theoretical development employs and extends Roe's notion of constructing stable upwind difference formulae by sending split, simple, one-sided flux-difference pieces to appropriate mesh sites. The developments are also believed to have the potential for aiding in the analysis of both existing and new conservative difference schemes.

  15. Production integrated nondestructive testing of composite materials and material compounds - an overview

    NASA Astrophysics Data System (ADS)

    Straß, B.; Conrad, C.; Wolter, B.

    2017-03-01

    Composite materials and material compounds are of increasing importance because of the steadily rising relevance of resource-saving lightweight constructions. Quality assurance with appropriate Nondestructive Testing (NDT) methods is a key aspect of reliable and efficient production. Quality changes must be detected already in the manufacturing flow so that adequate corrective actions can be taken. For these materials and compounds the classical NDT methods for defectoscopy, such as X-ray and ultrasound (US), are still predominant. Meanwhile, however, fast, contactless NDT methods such as air-borne ultrasound, dynamic thermography and special eddy-current techniques are available to detect cracks, voids, pores and delaminations, but also to characterize fiber content, distribution and alignment. In metal-matrix composites, US back-scattering can be used for this purpose, and US run-time measurements allow the detection of thermal stresses at the metal-matrix interface. Another important area is the need for NDT in joining. To achieve optimum material utilization and product safety as well as the best possible production efficiency, NDT methods are needed for in-line inspection of joint quality during joining or immediately afterwards. For this purpose, the EMAT (Electromagnetic Acoustic Transducer) technique or Acoustic Emission testing can be used.

  16. EvolQG - An R package for evolutionary quantitative genetics

    PubMed Central

    Melo, Diogo; Garcia, Guilherme; Hubbe, Alex; Assis, Ana Paula; Marroig, Gabriel

    2016-01-01

    We present an open source package for performing evolutionary quantitative genetics analyses in the R environment for statistical computing. Evolutionary theory shows that evolution depends critically on the available variation in a given population. When dealing with many quantitative traits this variation is expressed in the form of a covariance matrix, particularly the additive genetic covariance matrix or sometimes the phenotypic matrix, when the genetic matrix is unavailable and there is evidence the phenotypic matrix is sufficiently similar to the genetic matrix. Given this mathematical representation of available variation, the EvolQG package provides functions for calculation of relevant evolutionary statistics; estimation of sampling error; corrections for this error; matrix comparison via correlations, distances and matrix decomposition; analysis of modularity patterns; and functions for testing evolutionary hypotheses on taxa diversification. PMID:27785352

  17. Features analysis for identification of date and party hubs in protein interaction network of Saccharomyces Cerevisiae.

    PubMed

    Mirzarezaee, Mitra; Araabi, Babak N; Sadeghi, Mehdi

    2010-12-19

    It has been understood that biological networks have modular organizations, which are the source of their observed complexity. Analysis of networks and motifs has shown that two types of hubs, party hubs and date hubs, are responsible for this complexity. Party hubs are local coordinators because of their high co-expression with their partners, whereas date hubs display low co-expression and are assumed to be global connectors. However, there is no general agreement on these concepts in the literature, with different studies reporting results on different data sets. We investigated whether there is a relation between the biological features of Saccharomyces cerevisiae's proteins and their roles as non-hubs, intermediately connected nodes, party hubs, and date hubs. We propose a classifier that separates these four classes. We extracted different biological characteristics, including amino acid sequences, domain contents, repeated domains, functional categories, biological processes, cellular compartments, disordered regions, and position-specific scoring matrices, from various sources. Several classifiers were examined, and the best feature-sets were selected based on average correct classification rate and the correlation coefficients of the results. We show that fusion of five feature-sets, including domains, Position Specific Scoring Matrix-400, cellular compartments level one, and composition pairs with two and one gaps, provides the best discrimination, with an average correct classification rate of 77%. We study a variety of known biological feature-sets of the proteins and show that there is a relation between the domains, Position Specific Scoring Matrix-400, cellular compartments level one, and composition pairs with two and one gaps of Saccharomyces cerevisiae's proteins and their roles in the protein interaction network as non-hubs, intermediately connected nodes, party hubs and date hubs. This study also confirms the possibility of predicting non-hubs, party hubs and date hubs from their biological features with acceptable accuracy. If this hypothesis holds for other species as well, similar methods can be applied to predict the roles of proteins in those species.

  18. Improving sub-grid scale accuracy of boundary features in regional finite-difference models

    USGS Publications Warehouse

    Panday, Sorab; Langevin, Christian D.

    2012-01-01

    As an alternative to grid refinement, the concept of a ghost node, which was developed for nested grid applications, has been extended towards improving sub-grid scale accuracy of flow to conduits, wells, rivers or other boundary features that interact with a finite-difference groundwater flow model. The formulation is presented for correcting the regular finite-difference groundwater flow equations for confined and unconfined cases, with or without Newton Raphson linearization of the nonlinearities, to include the Ghost Node Correction (GNC) for location displacement. The correction may be applied on the right-hand side vector for a symmetric finite-difference Picard implementation, or on the left-hand side matrix for an implicit but asymmetric implementation. The finite-difference matrix connectivity structure may be maintained for an implicit implementation by only selecting contributing nodes that are a part of the finite-difference connectivity. Proof of concept example problems are provided to demonstrate the improved accuracy that may be achieved through sub-grid scale corrections using the GNC schemes.

  19. Evaluation of a 3D local multiresolution algorithm for the correction of partial volume effects in positron emission tomography

    PubMed Central

    Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E.; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris

    2011-01-01

    Purpose: Partial volume effects (PVE) are consequences of the limited spatial resolution in emission tomography, leading to underestimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multi-resolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high-resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low-resolution PET image using wavelet decompositions. Although this method allows creating PVE-corrected images, it is based on a 2D global correlation model, which may introduce artefacts in regions where no significant correlation exists between anatomical and functional details. Methods: A new model was designed to overcome these two issues (2D-only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Results: Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present, the new model outperformed the 2D global approach, avoiding artefacts and significantly improving the quality of the corrected images and their quantitative accuracy. Conclusions: A new 3D local model was proposed for voxel-wise PVE correction based on the original mutual multi-resolution analysis approach. Its evaluation demonstrated improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information. PMID:21978037

  20. The Application of Social Characteristic and L1 Optimization in the Error Correction for Network Coding in Wireless Sensor Networks

    PubMed Central

    Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue

    2018-01-01

    One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently, given the energy limitations of sensor nodes. Network coding can increase the throughput of a WSN dramatically owing to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Because of this error-propagation property, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding in WSN, a new error-correcting mechanism to confront the propagated errors is urgently needed. Based on the social network characteristics inherent in WSN and L1 optimization, we propose a novel scheme that successfully corrects more than C/2 corrupted errors. Moreover, even if errors occur on all the links of the network, our scheme can still correct them. By introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which typically pollute essentially 100% of the received messages. Taking advantage of the social characteristics inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Drawing on social network theory, the informative relay nodes are selected and marked with high trust values. The two methods, L1 optimization and the use of social characteristics, work in concert and can correct propagated errors in a network-coded WSN even when the error fraction reaches 100%. The effectiveness of the error correction scheme is validated through simulation experiments. PMID:29401668

  2. Reduction of interferences in graphite furnace atomic absorption spectrometry by multiple linear regression modelling

    NASA Astrophysics Data System (ADS)

    Grotti, Marco; Abelmoschi, Maria Luisa; Soggia, Francesco; Tiberiade, Christian; Frache, Roberto

    2000-12-01

    The multivariate effects of Na, K, Mg and Ca as nitrates on the electrothermal atomisation of manganese, cadmium and iron were studied by multiple linear regression modelling. Since the models proved to predict the effects of the considered matrix elements efficiently over a wide range of concentrations, they were applied to correct the interferences occurring in the determination of trace elements in seawater after pre-concentration of the analytes. In order to obtain a statistically significant number of samples, a large volume of the certified seawater reference materials CASS-3 and NASS-3 was treated with Chelex-100 resin; the chelating resin was then separated from the solution and divided into several sub-samples, each of which was eluted with nitric acid and analysed by electrothermal atomic absorption spectrometry (for trace element determinations) and inductively coupled plasma optical emission spectrometry (for matrix element determinations). To minimise any systematic error other than that due to matrix effects, the accuracy of the pre-concentration step and the contamination levels of the procedure were checked by inductively coupled plasma mass spectrometric measurements. Analytical results obtained by applying the multiple linear regression models were compared with those obtained with other calibration methods, such as external calibration using acid-based standards, external calibration using matrix-matched standards, and the analyte addition technique. The empirical models proved to reduce the interferences occurring in the analysis of real samples efficiently, achieving better accuracy than the other calibration methods.
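
    A single-predictor sketch of the regression-based correction (the paper fits multiple linear regression over Na, K, Mg and Ca; the one-element linear model and the numbers below are assumptions for illustration):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = b0 + b1 * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return my - b1 * mx, b1

def matrix_corrected(signal, matrix_conc, b0, b1):
    """Divide out the modelled relative sensitivity at this matrix level."""
    return signal / (b0 + b1 * matrix_conc)
```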

  3. Multi-population Genomic Relationships for Estimating Current Genetic Variances Within and Genetic Correlations Between Populations.

    PubMed

    Wientjes, Yvonne C J; Bijma, Piter; Vandenplas, Jérémie; Calus, Mario P L

    2017-10-01

    Different methods are available to calculate multi-population genomic relationship matrices. Since those matrices differ in base population, it is anticipated that the method used to calculate genomic relationships affects the estimate of genetic variances, covariances, and correlations. The aim of this article is to define the multi-population genomic relationship matrix to estimate current genetic variances within and genetic correlations between populations. The genomic relationship matrix containing two populations consists of four blocks, one block for population 1, one block for population 2, and two blocks for relationships between the populations. It is known from the literature that by using current allele frequencies to calculate genomic relationships within a population, current genetic variances are estimated. In this article, we theoretically derived the properties of the genomic relationship matrix to estimate genetic correlations between populations and validated it using simulations. When the scaling factor of across-population genomic relationships is equal to the product of the square roots of the scaling factors for within-population genomic relationships, the genetic correlation is estimated unbiasedly even though estimated genetic variances do not necessarily refer to the current population. When this property is not met, the correlation based on estimated variances should be multiplied by a correction factor based on the scaling factors. In this study, we present a genomic relationship matrix which directly estimates current genetic variances as well as genetic correlations between populations. Copyright © 2017 by the Genetics Society of America.
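    The four-block structure described above can be sketched with VanRaden-style centring, choosing the across-population scaling factor as the product of the square roots of the within-population factors (the property under which, per the abstract, the genetic correlation is estimated without bias). Genotypes here are randomly generated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical genotype matrices (individuals x markers, coded 0/1/2)
# for two populations; sizes and values are illustrative.
M1 = rng.integers(0, 3, size=(5, 20)).astype(float)
M2 = rng.integers(0, 3, size=(4, 20)).astype(float)

def centred(M):
    p = M.mean(axis=0) / 2.0          # current allele frequencies of this population
    return M - 2.0 * p, p

Z1, p1 = centred(M1)
Z2, p2 = centred(M2)

# Within-population scaling factors, and the across-population scaling
# factor set to the product of their square roots.
s1 = 2.0 * np.sum(p1 * (1.0 - p1))
s2 = 2.0 * np.sum(p2 * (1.0 - p2))
s12 = np.sqrt(s1) * np.sqrt(s2)

# The two diagonal blocks and the two (transposed) across-population blocks.
G = np.block([[Z1 @ Z1.T / s1,  Z1 @ Z2.T / s12],
              [Z2 @ Z1.T / s12, Z2 @ Z2.T / s2]])
print(G.shape)  # → (9, 9)
```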

  4. Higher-Order Corrections to Timelike Jets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giele, W.T. (Fermilab); Kosower, D.A.

    2011-02-01

    We present a simple formalism for the evolution of timelike jets in which tree-level matrix element corrections can be systematically incorporated, up to arbitrary parton multiplicities and over all of phase space, in a way that exponentiates the matching corrections. The scheme is cast as a shower Markov chain which generates one single unweighted event sample, which can be passed to standard hadronization models. Remaining perturbative uncertainties are estimated by providing several alternative weight sets for the same events, at a relatively modest additional overhead. As an explicit example, we consider Z → qq̄ evolution with unpolarized, massless quarks and include several formally subleading improvements as well as matching to tree-level matrix elements through α_s^4. The resulting algorithm is implemented in the publicly available VINCIA plugin to the PYTHIA8 event generator.

  5. Combination of Sharing Matrix and Image Encryption for Lossless (k,n)-Secret Image Sharing.

    PubMed

    Bao, Long; Yi, Shuang; Zhou, Yicong

    2017-12-01

    This paper first introduces a (k,n)-sharing matrix S(k,n) and its generation algorithm. Mathematical analysis is provided to show its potential for secret image sharing. Combining the sharing matrix with image encryption, we further propose a lossless (k,n)-secret image sharing scheme (SMIE-SIS). Only when no fewer than k shares are available can all the ciphertext information and the security key be reconstructed, resulting in lossless recovery of the original information; this is proved by the correctness and security analysis. Performance evaluation and security analysis demonstrate that the proposed SMIE-SIS with arbitrary settings of k and n has at least five advantages: 1) it is able to fully recover the original image without any distortion; 2) it has much lower pixel expansion than many existing methods; 3) its computation cost is much lower than that of the polynomial-based secret image sharing methods; 4) it is able to verify and detect a fake share; and 5) even using the same original image with the same initial settings of parameters, every execution of SMIE-SIS is able to generate completely different secret shares that are unpredictable and non-repetitive. This property offers SMIE-SIS a high level of security to withstand many different attacks.
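    As a much simplified illustration of lossless sharing (not the paper's SMIE-SIS, which is a full (k,n) scheme built on a sharing matrix plus image encryption), a minimal (n,n) XOR construction shows how a secret can be split into shares and recovered without any distortion:

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def share(secret: bytes, n: int):
    """Split into n shares; ALL n are needed (an (n,n) scheme, not (k,n))."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))  # last share closes the XOR
    return shares

def recover(shares):
    """XOR of all shares cancels the random pads and restores the secret."""
    return reduce(xor_bytes, shares)

s = share(b"pixel data", 4)
print(recover(s))  # → b'pixel data'
```

Because the first n-1 shares are fresh random pads, every execution yields different shares for the same secret, the property highlighted as advantage 5) above; supporting arbitrary k < n is what requires the paper's sharing-matrix machinery.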

  6. A simple and low cost dual-wavelength β-correction spectrophotometric determination and speciation of mercury(II) in water using chromogenic reagent 4-(2-thiazolylazo) resorcinol

    NASA Astrophysics Data System (ADS)

    Al-Bagawi, A. H.; Ahmad, W.; Saigl, Z. M.; Alwael, H.; Al-Harbi, E. A.; El-Shahawi, M. S.

    2017-12-01

    The most common problems in spectrophotometric determination of various complex species originate from background spectral interference. Thus, the present study aimed to overcome the spectral matrix interference for the precise analysis and speciation of mercury(II) in water by dual-wavelength β-correction spectrophotometry using 4-(2-thiazolylazo) resorcinol (TAR) as chromogenic reagent. The principle was based on measuring the correct absorbance for the formed complex of mercury(II) ions with TAR at 547 nm (λmax). Under optimized conditions, a linear dynamic range of 0.1-2.0 μg mL⁻¹ with a correlation coefficient (R²) of 0.997 was obtained, with a limit of detection (LOD) of 0.024 μg mL⁻¹ and a limit of quantification (LOQ) of 0.081 μg mL⁻¹. The values of RSD and relative error (RE) obtained for the β-correction method and single-wavelength spectrophotometry were 1.3, 1.32% and 4.7, 5.9%, respectively. The method was validated in tap and sea water against data obtained from inductively coupled plasma-optical emission spectrometry (ICP-OES) using Student's t and F tests. The developed methodology satisfactorily overcomes the spectral interference in trace determination and speciation of mercury(II) ions in water.

  7. RSAT matrix-clustering: dynamic exploration and redundancy reduction of transcription factor binding motif collections.

    PubMed

    Castro-Mondragon, Jaime Abraham; Jaeger, Sébastien; Thieffry, Denis; Thomas-Chollier, Morgane; van Helden, Jacques

    2017-07-27

    Transcription factor (TF) databases contain multitudes of binding motifs (TFBMs) from various sources, from which non-redundant collections are derived by manual curation. The advent of high-throughput methods stimulated the production of novel collections with increasing numbers of motifs. Meta-databases, built by merging these collections, contain redundant versions, because available tools are not suited to automatically identify and explore biologically relevant clusters among thousands of motifs. Motif discovery from genome-scale data sets (e.g. ChIP-seq) also produces redundant motifs, hampering the interpretation of results. We present matrix-clustering, a versatile tool that clusters similar TFBMs into multiple trees, and automatically creates non-redundant TFBM collections. A feature unique to matrix-clustering is its dynamic visualisation of aligned TFBMs, and its capability to simultaneously treat multiple collections from various sources. We demonstrate that matrix-clustering considerably simplifies the interpretation of combined results from multiple motif discovery tools, and highlights biologically relevant variations of similar motifs. We also ran a large-scale application to cluster ∼11 000 motifs from 24 entire databases, showing that matrix-clustering correctly groups motifs belonging to the same TF families and drastically reduces motif redundancy. matrix-clustering is integrated within the RSAT suite (http://rsat.eu/), accessible through a user-friendly web interface or command-line for its integration in pipelines. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. Mitochondrial Protein Synthesis, Import, and Assembly

    PubMed Central

    Fox, Thomas D.

    2012-01-01

    The mitochondrion is arguably the most complex organelle in the budding yeast cell cytoplasm. It is essential for viability as well as respiratory growth. Its innermost aqueous compartment, the matrix, is bounded by the highly structured inner membrane, which in turn is bounded by the intermembrane space and the outer membrane. Approximately 1000 proteins are present in these organelles, of which eight major constituents are encoded and synthesized in the matrix. The import of mitochondrial proteins synthesized in the cytoplasm, and their direction to the correct soluble compartments, correct membranes, and correct membrane surfaces/topologies, involves multiple pathways and macromolecular machines. The targeting of some, but not all, cytoplasmically synthesized mitochondrial proteins begins with translation of messenger RNAs localized to the organelle. Most proteins then pass through the translocase of the outer membrane to the intermembrane space, where divergent pathways sort them to the outer membrane, inner membrane, and matrix or trap them in the intermembrane space. Roughly 25% of mitochondrial proteins participate in maintenance or expression of the organellar genome at the inner surface of the inner membrane, providing 7 membrane proteins whose synthesis nucleates the assembly of three respiratory complexes. PMID:23212899

  9. Validation of the Thermo Scientific SureTect Escherichia coli O157:H7 Real-Time PCR Assay for Raw Beef and Produce Matrixes.

    PubMed

    Cloke, Jonathan; Crowley, Erin; Bird, Patrick; Bastin, Ben; Flannery, Jonathan; Agin, James; Goins, David; Clark, Dorn; Radcliff, Roy; Wickstrand, Nina; Kauppinen, Mikko

    2015-01-01

    The Thermo Scientific™ SureTect™ Escherichia coli O157:H7 Assay is a new real-time PCR assay which has been validated through the AOAC Research Institute (RI) Performance Tested Methods(SM) program for raw beef and produce matrixes. This validation study specifically validated the assay with 375 g 1:4 and 1:5 ratios of raw ground beef and raw beef trim in comparison to the U.S. Department of Agriculture, Food Safety and Inspection Service, Microbiology Laboratory Guidebook (USDA-FSIS/MLG) reference method, and with 25 g bagged spinach and fresh apple juice at a ratio of 1:10 in comparison to the International Organization for Standardization (ISO) 16654:2001 reference method. For raw beef matrixes, the validation of both 1:4 and 1:5 ratios allows user flexibility with the enrichment protocol, although the choice between these two ratios should be based on the laboratory's specific test requirements. All matrixes were analyzed by Thermo Fisher Scientific, Microbiology Division, Vantaa, Finland, and Q Laboratories Inc, Cincinnati, Ohio, in the method developer study. Two of the matrixes (raw ground beef at both 1:4 and 1:5 ratios) and bagged spinach were additionally analyzed in the AOAC-RI controlled independent laboratory study, which was conducted by Marshfield Food Safety, Marshfield, Wisconsin. Using probability of detection statistical analysis, no significant difference was demonstrated by the SureTect kit in comparison to the USDA-FSIS reference method for raw beef matrixes, or with the ISO reference method for bagged spinach and apple juice. Inclusivity and exclusivity testing was conducted with 58 E. coli O157:H7 and 54 non-E. coli O157:H7 isolates, respectively, which demonstrated that the SureTect assay was able to detect all isolates of E. coli O157:H7 analyzed. In addition, all but one of the nontarget isolates were correctly interpreted as negative by the SureTect Software; the single isolate giving a positive result was an E. coli O157:NM isolate. Nonmotile isolates of E. coli O157 have been demonstrated to still contain the H7 gene; therefore, this result is not unexpected. Robustness testing was conducted to evaluate the performance of the SureTect assay with specific deviations to the assay protocol which were outside the recommended parameters and which are open to variation. This study demonstrated that the SureTect assay gave reliable performance. A final study to verify the shelf life of the product under accelerated conditions was also conducted.

  10. Matrix Effect Compensation in Small-Molecule Profiling for an LC-TOF Platform Using Multicomponent Postcolumn Infusion.

    PubMed

    González, Oskar; van Vliet, Michael; Damen, Carola W N; van der Kloet, Frans M; Vreeken, Rob J; Hankemeier, Thomas

    2015-06-16

    The possible presence of matrix effect is one of the main concerns in liquid chromatography-mass spectrometry (LC-MS)-driven bioanalysis due to its impact on the reliability of the obtained quantitative results. Here we propose an approach to correct for the matrix effect in LC-MS with electrospray ionization using postcolumn infusion of eight internal standards (PCI-IS). We applied this approach to a generic ultraperformance liquid chromatography-time-of-flight (UHPLC-TOF) platform developed for small-molecule profiling with a main focus on drugs. Different urine samples were spiked with 19 drugs with different physicochemical properties and analyzed in order to study the matrix effect (in absolute and relative terms). Furthermore, calibration curves for each analyte were constructed and quality control (QC) samples at different concentration levels were analyzed to check the applicability of this approach in quantitative analysis. The matrix effect profiles of the PCI-ISs were different: this confirms that the matrix effect is compound-dependent, and therefore the most suitable PCI-IS has to be chosen for each analyte. Chromatograms were reconstructed using analyte and PCI-IS responses, which were used to develop an optimized method that compensates for variation in ionization efficiency. The approach presented here dramatically improved the results in terms of matrix effect. Furthermore, calibration curves of higher quality are obtained, the dynamic range is enhanced, and the accuracy and precision of QC samples are increased. The use of PCI-ISs is a very promising step toward an analytical platform free of matrix effect, which can make LC-MS analysis even more successful, adding higher reliability in quantification to its intrinsic high sensitivity and selectivity.
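    The core idea of postcolumn-infusion correction can be sketched in a few lines: because the internal standard is infused at a constant rate, dips in its signal trace ionization suppression, and dividing the analyte response by the relative PCI-IS response at the analyte's retention time compensates for it. All response values below are illustrative.

```python
# Minimal sketch of postcolumn-infusion matrix-effect compensation.
# pci_is_reference is the PCI-IS response in a suppression-free region
# (or clean solvent); all numbers are made up for illustration.

def pci_corrected(analyte_response, pci_is_response, pci_is_reference):
    """Scale the analyte response by the suppression seen on the PCI-IS."""
    suppression = pci_is_response / pci_is_reference
    return analyte_response / suppression

# The same analyte amount measured in clean solvent and in a urine matrix
# that suppresses ionization by 50% (seen identically on the PCI-IS):
clean = pci_corrected(1200.0, 600.0, 600.0)
urine = pci_corrected(600.0, 300.0, 600.0)
print(clean == urine)  # → True
```

In practice the abstract's point is that the suppression profile is compound-dependent, so the PCI-IS whose profile best tracks each analyte must be chosen before applying such a ratio.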

  11. Matrix metalloproteinase-7 expression in gastric carcinoma.

    PubMed Central

    Honda, M; Mori, M; Ueo, H; Sugimachi, K; Akiyoshi, T

    1996-01-01

    BACKGROUND/AIMS: Matrix metalloproteinase-7 (MMP-7) belongs to the family of matrix-degrading metalloproteinases (MMPs) that may play an important part in cancer cell invasion and metastasis. This study reports on the MMP-7 mRNA expression level both in human gastric carcinomas and in the normal gastric mucosa. METHODS: From fresh specimens of 47 surgical pairs of primary gastric carcinomas and corresponding normal tissue specimens, cDNA was obtained by reverse transcription (RT) and MMP-7 mRNAs were then detected by means of the polymerase chain reaction. The tumour/normal (T/N) ratio of MMP-7 expression was calculated after correcting for glyceraldehyde-3-phosphate dehydrogenase (GAPDH) as an internal control. RESULTS: The corrected expression level of MMP-7 mRNA in the tumour was greater than that in the normal mucosa in 41 of 47 cases (87%). The 13 cases whose T/N ratio was more than 2.1 showed a deeper invasion of the gastric wall, and more frequent lymphatic or vascular permeations, than the 34 cases whose T/N ratio was less than 2.0. An immunohistochemical study showed that MMP-7 was predominantly expressed in the cancer cells, weakly expressed in normal epithelial cells, and not expressed in the surrounding stromal cells. CONCLUSIONS: These findings suggest that the overexpression of MMP-7 may play an important part in tumour invasion in gastric carcinomas and that MMP-7 may also prove to be a useful marker for determining the biological aggressiveness of gastric carcinoma. PMID:8949652
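    The internal-control-corrected tumour/normal ratio used above is a simple computation; a sketch with made-up expression values:

```python
def tn_ratio(tumour_mmp7, tumour_gapdh, normal_mmp7, normal_gapdh):
    """MMP-7 expression corrected against the GAPDH internal control,
    expressed as a tumour/normal (T/N) ratio. Inputs are illustrative
    signal intensities, not data from the study."""
    return (tumour_mmp7 / tumour_gapdh) / (normal_mmp7 / normal_gapdh)

# Tumour signal 8 with control 2, normal signal 1 with control 1:
print(tn_ratio(8.0, 2.0, 1.0, 1.0))  # → 4.0
```

Normalising both tissues by the same housekeeping gene cancels differences in RNA input and RT efficiency, which is why the ratio rather than the raw signal is compared against the 2.0-2.1 cut-off.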

  12. Aggregation of carbon dioxide sequestration storage assessment units

    USGS Publications Warehouse

    Blondes, Madalyn S.; Schuenemeyer, John H.; Olea, Ricardo A.; Drew, Lawrence J.

    2013-01-01

    The U.S. Geological Survey is currently conducting a national assessment of carbon dioxide (CO2) storage resources, mandated by the Energy Independence and Security Act of 2007. Pre-emission capture and storage of CO2 in subsurface saline formations is one potential method to reduce greenhouse gas emissions and the negative impact of global climate change. Like many large-scale resource assessments, the area under investigation is split into smaller, more manageable storage assessment units (SAUs), which must be aggregated with correctly propagated uncertainty to the basin, regional, and national scales. The aggregation methodology requires two types of data: marginal probability distributions of storage resource for each SAU, and a correlation matrix obtained by expert elicitation describing interdependencies between pairs of SAUs. Dependencies arise because geologic analogs, assessment methods, and assessors often overlap. The correlation matrix is used to induce rank correlation, using a Cholesky decomposition, among the empirical marginal distributions representing individually assessed SAUs. This manuscript presents a probabilistic aggregation method tailored to the correlations and dependencies inherent to a CO2 storage assessment. Aggregation results must be presented at the basin, regional, and national scales. A single stage approach, in which one large correlation matrix is defined and subsets are used for different scales, is compared to a multiple stage approach, in which new correlation matrices are created to aggregate intermediate results. Although the single-stage approach requires determination of significantly more correlation coefficients, it captures geologic dependencies among similar units in different basins and it is less sensitive to fluctuations in low correlation coefficients than the multiple stage approach. Thus, subsets of one single-stage correlation matrix are used to aggregate to basin, regional, and national scales.
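    The rank-correlation induction step described above can be sketched with NumPy: draw correlated normal scores through the Cholesky factor of the expert-elicited correlation matrix, then reorder each empirical marginal sample so its ranks match the score ranks (an Iman-Conover-style construction). The marginal distributions and correlation values below are illustrative, not the assessment's.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10000

# Hypothetical marginal storage-resource distributions for three SAUs
# (lognormal, purely illustrative).
marginals = [rng.lognormal(mean=m, sigma=0.5, size=n) for m in (1.0, 2.0, 1.5)]

# Expert-elicited correlation matrix between the SAUs (illustrative values).
R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.4],
              [0.3, 0.4, 1.0]])

# Induce rank correlation: correlated normal scores via the Cholesky factor,
# then reorder each sorted marginal sample to follow the score ranks.
L = np.linalg.cholesky(R)
scores = rng.standard_normal((n, 3)) @ L.T
samples = np.column_stack([
    np.sort(m)[np.argsort(np.argsort(scores[:, j]))]
    for j, m in enumerate(marginals)
])

# Aggregated (e.g. basin-scale) distribution with propagated dependency.
total = samples.sum(axis=1)
```

Summing independent draws instead would understate the spread of `total`; inducing the elicited correlations widens the aggregated uncertainty, which is the point of the propagation step.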

  13. Investigation of dynamic properties of a polymer matrix composite with different angles of fiber orientations

    NASA Astrophysics Data System (ADS)

    Kadioglu, F.; Coskun, T.; Elfarra, M.

    2018-05-01

    Regarding the dynamic properties of fiber-reinforced polymer matrix composite materials, the elastic modulus and damping values are emphasized, and both are desired to be as high as possible, since the former is related to load-bearing capacity while the latter provides the capability of energy absorption. In composites, while fibers are usually utilized for reinforcement, providing high elastic modulus and thus high strength, the matrix introduces a medium for high damping. Correct measurement of damping values is a critical step in designing composite materials. The aim of the current study is to measure the dynamic values of a glass fiber-reinforced polymer matrix composite, Hexply 913/33%/UD280, produced by Hexcel, using a vibrating beam technique. Specimens with different angles of fiber orientation (0°, ±10°, ±20°, ±35°, ±45°, ±55°, ±70°, ±80° and 90°) were manufactured from the composite prepreg and subjected to clamped-free boundary conditions. Two different methods, the half-power bandwidth and the logarithmic free decay, were used to measure the damping values so that the results could be compared. It was revealed that the dynamic values are affected by the fiber orientations; for high flexural modulus, specimens with small angles of orientation should be preferred, but for high damping, those with large angles of orientation. In general, the results are comparable, and the free decay method gave smaller values than the bandwidth method, with few exceptions. It is suggested that the results obtained from the test can be used reliably for modal analysis.
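    The two damping measures named above follow standard textbook formulas: the half-power bandwidth gives ζ ≈ (f₂ − f₁)/(2fₙ), and the logarithmic decrement δ over n cycles of free decay gives ζ = δ/√(4π² + δ²). A sketch with illustrative numbers, not the paper's data:

```python
import math

def zeta_half_power(f_n, f1, f2):
    """Damping ratio from the half-power (-3 dB) bandwidth of a resonance."""
    return (f2 - f1) / (2.0 * f_n)

def zeta_log_decrement(x0, xn, n):
    """Damping ratio from the logarithmic decrement over n successive peaks."""
    delta = math.log(x0 / xn) / n
    return delta / math.sqrt(4.0 * math.pi**2 + delta**2)

# Illustrative numbers: a resonance at 100 Hz whose half-power points lie at
# 99 Hz and 101 Hz, and a free decay whose peak amplitude falls from 1.0 to
# 0.53 over 5 cycles.
print(round(zeta_half_power(100.0, 99.0, 101.0), 3))   # → 0.01
print(round(zeta_log_decrement(1.0, 0.53, 5), 3))      # → 0.02
```

Comparing the two estimates for the same specimen, as the study does, is a useful consistency check, since the bandwidth method works in the frequency domain and the decrement method in the time domain.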

  14. A robust variant of block Jacobi-Davidson for extracting a large number of eigenpairs: Application to grid-based real-space density functional theory

    NASA Astrophysics Data System (ADS)

    Lee, M.; Leiter, K.; Eisner, C.; Breuer, A.; Wang, X.

    2017-09-01

    In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in the computational effort against most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev-filtered subspace iteration (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems with up to 10× reduced matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold) where the wanted spectrum is quite narrow compared to the full spectrum, we observe 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.

  15. A robust variant of block Jacobi-Davidson for extracting a large number of eigenpairs: Application to grid-based real-space density functional theory.

    PubMed

    Lee, M; Leiter, K; Eisner, C; Breuer, A; Wang, X

    2017-09-21

    In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in the computational effort against most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev-filtered subspace iteration (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems with up to 10× reduced matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold) where the wanted spectrum is quite narrow compared to the full spectrum, we observe 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.

  16. The Use of EST Expression Matrixes for the Quality Control of Gene Expression Data

    PubMed Central

    Milnthorpe, Andrew T.; Soloviev, Mikhail

    2012-01-01

    EST expression profiling provides an attractive tool for studying differential gene expression, but cDNA libraries' origins and EST data quality are not always known or reported. Libraries may originate from pooled or mixed tissues; EST clustering, EST counts, library annotations and analysis algorithms may contain errors. Traditional data analysis methods, including research into tissue-specific gene expression, assume EST counts to be correct and libraries to be correctly annotated, which is not always the case. Therefore, a method capable of assessing the quality of expression data based on that data alone would be invaluable for assessing the quality of EST data and determining their suitability for mRNA expression analysis. Here we report an approach to the selection of a small generic subset of 244 UniGene clusters suitable for identification of the tissue of origin for EST libraries and quality control of the expression data using EST expression information alone. We created a small expression matrix of UniGene IDs using two rounds of selection followed by two rounds of optimisation. Our selection procedures differ from traditional approaches to finding “tissue-specific” genes, and our matrix yields consistently high positive correlation values for libraries with confirmed tissues of origin and can be applied for tissue typing and quality control of libraries as small as just a few hundred total ESTs. Furthermore, we can pick up tissue correlations between related tissues, e.g. brain and peripheral nervous tissue, or heart and muscle tissues, and identify tissue origins for a few libraries of uncharacterised tissue identity. It was possible to confirm tissue identity for some libraries which had been derived from cancer tissues or had been normalised. Tissue matching is affected strongly by cancer progression or library normalisation, and our approach may potentially be applied for elucidating the stage of normalisation in normalised libraries or for cancer staging. PMID:22412959
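    The correlation-based tissue typing described above can be sketched as follows, with a toy three-tissue expression matrix standing in for the 244-cluster subset (all counts are invented for illustration):

```python
import numpy as np

# Hypothetical EST-count profiles over a small set of UniGene clusters
# (one reference profile per tissue; purely illustrative numbers).
reference = {
    "brain":  np.array([50, 5, 2, 40, 3], dtype=float),
    "liver":  np.array([2, 60, 30, 1, 7], dtype=float),
    "muscle": np.array([5, 3, 4, 8, 80], dtype=float),
}

def tissue_of_origin(library_counts):
    """Assign the library to the reference tissue with the highest
    Pearson correlation between expression profiles."""
    lib = np.asarray(library_counts, dtype=float)
    corrs = {t: np.corrcoef(lib, ref)[0, 1] for t, ref in reference.items()}
    return max(corrs, key=corrs.get), corrs

# A small library whose counts resemble the brain profile:
tissue, corrs = tissue_of_origin([45, 6, 1, 38, 5])
print(tissue)  # → brain
```

A library whose best correlation is weak against every reference would be flagged rather than typed, which is how the same matrix doubles as a quality-control check.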

  17. Optimization of digital droplet polymerase chain reaction for quantification of genetically modified organisms.

    PubMed

    Gerdes, Lars; Iwobi, Azuka; Busch, Ulrich; Pecoraro, Sven

    2016-03-01

    Digital PCR in droplets (ddPCR) is an emerging method for more and more applications in DNA (and RNA) analysis. Special requirements when establishing ddPCR for analysis of genetically modified organisms (GMO) in a laboratory include the choice between validated official qPCR methods and the optimization of these assays for a ddPCR format. Differentiation between droplets with a positive reaction and negative droplets, that is, setting an appropriate threshold, can be crucial for a correct measurement. This holds true in particular when independent transgene and plant-specific reference gene copy numbers have to be combined to determine the content of GM material in a sample. Droplets whose fluorescence falls between that of clearly positive and clearly negative droplets are called 'rain'. Signals of such droplets can hinder analysis and the correct setting of a threshold. In this manuscript, a computer-based algorithm has been carefully designed to evaluate assay performance and provide objective criteria for assay optimization. Optimized assays in return minimize the impact of rain on ddPCR analysis. We developed an Excel-based 'experience matrix' that reflects the assay parameters of GMO ddPCR tests performed in our laboratory. Parameters considered include singleplex/duplex ddPCR, assay volume, thermal cycler, probe manufacturer, oligonucleotide concentration, annealing/elongation temperature, and a droplet separation evaluation. We additionally propose an objective droplet separation value which is based on both the absolute fluorescence signal distance of the positive and negative droplet populations and the variation within these droplet populations. The proposed performance classification in the experience matrix can be used for a rating of different assays for the same GMO target, thus enabling employment of the best-suited assay parameters. The main optimization parameters include annealing/extension temperature and oligonucleotide concentrations.
The droplet separation value allows for easy and reproducible assay performance evaluation. The combination of separation value with the experience matrix simplifies the choice of adequate assay parameters for a given GMO event.
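    A separation score of the kind described, combining the distance between the positive and negative population means with the variation within each population, might look like the following sketch (the paper's exact definition is not given here and may differ):

```python
import statistics

def separation_value(pos, neg):
    """One plausible form of a droplet separation score: the distance between
    the positive and negative population means relative to their spreads.
    Illustrative only; not the manuscript's exact formula."""
    distance = statistics.mean(pos) - statistics.mean(neg)
    spread = statistics.stdev(pos) + statistics.stdev(neg)
    return distance / spread

# Well-separated droplet populations (fluorescence units, made up) score
# much higher than overlapping, rain-affected ones.
good = separation_value([9000, 9100, 8900, 9050], [1000, 1100, 950, 1020])
rainy = separation_value([5000, 4000, 6000, 4500], [1500, 2500, 3500, 2000])
print(good > rainy)  # → True
```

A score of this shape is threshold-independent, which is what makes it usable as an objective entry in the experience matrix when comparing assay parameter sets.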

  18. Blending Velocities In Task Space In Computing Robot Motions

    NASA Technical Reports Server (NTRS)

    Volpe, Richard A.

    1995-01-01

    Blending of linear and angular velocities between sequential specified points in task space constitutes theoretical basis of improved method of computing trajectories followed by robotic manipulators. In method, generalized velocity-vector-blending technique provides relatively simple, common conceptual framework for blending linear, angular, and other parametric velocities. Velocity vectors originate from straight-line segments connecting specified task-space points, called "via frames", which represent specified robot poses. Linear-velocity-blending functions chosen from among first-order, third-order-polynomial, and cycloidal options. Angular velocities blended by use of first-order approximation of previous orientation-matrix-blending formulation. Angular-velocity approximation yields small residual error, which is quantified and corrected. Method offers both relative simplicity and speed needed for generation of robot-manipulator trajectories in real time.
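    The blend-function options named above can be sketched as scalar profiles applied over a normalized blend window; the cycloidal profile is one standard choice with zero slope at both ends (an illustrative sketch, not the paper's exact formulation):

```python
import math

# Scalar blend profiles over a normalized window t in [0, 1].
def blend_linear(t):
    """First-order option: constant-rate transition."""
    return t

def blend_cycloidal(t):
    """Cycloidal option: zero slope at both endpoints for smooth onset."""
    return t - math.sin(2.0 * math.pi * t) / (2.0 * math.pi)

def blended_velocity(v_prev, v_next, t, blend=blend_cycloidal):
    """Blend from the previous segment's velocity to the next segment's."""
    s = blend(t)
    return (1.0 - s) * v_prev + s * v_next

# Both profiles start at the previous velocity and end at the next one:
print(round(blended_velocity(1.0, 3.0, 0.0), 6))  # → 1.0
print(round(blended_velocity(1.0, 3.0, 1.0), 6))  # → 3.0
```

In the paper's framework the same scalar profile is applied componentwise to linear, angular, and other parametric velocity vectors, which is what makes the treatment uniform across quantity types.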

  19. Wavefront control in adaptive microscopy using Shack-Hartmann sensors with arbitrarily shaped pupils.

    PubMed

    Dong, Bing; Booth, Martin J

    2018-01-22

    In adaptive optical microscopy of thick biological tissue, strong scattering and aberrations can change the effective pupil shape by rendering some Shack-Hartmann spots unusable. The change of pupil shape leads to a change of wavefront reconstruction or control matrix that should be updated accordingly. Modified slope and modal wavefront control methods based on measurements of a Shack-Hartmann wavefront sensor are proposed to accommodate an arbitrarily shaped pupil. Furthermore, we present partial wavefront control methods that remove specific aberration modes like tip, tilt and defocus from the control loop. The proposed control methods were investigated and compared by simulation using experimentally obtained aberration data. The performance was then tested experimentally through closed-loop aberration corrections using an obscured pupil.
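    The pupil-masked, partial modal control described above amounts to a least-squares fit that uses only the slope rows of usable spots and only the columns of the modes kept in the loop. A sketch with a random influence matrix (all dimensions, mode ordering, and masked indices below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical influence matrix: rows are Shack-Hartmann slope measurements,
# columns are aberration modes (say tip, tilt, defocus, astig., coma).
n_slopes, n_modes = 40, 5
A = rng.standard_normal((n_slopes, n_modes))

# Some spots are unusable in thick tissue: mask out their slope rows and
# rebuild the reconstruction from the remaining rows only.
usable = np.ones(n_slopes, dtype=bool)
usable[[3, 7, 21, 30]] = False
A_eff = A[usable]

# Partial control: remove tip, tilt and defocus (assumed to be columns 0-2)
# from the loop by fitting only the remaining modes.
A_part = A_eff[:, 3:]

true_modes = np.array([0.0, 0.0, 0.0, 0.4, -0.2])
slopes = (A @ true_modes)[usable]

# Least-squares modal reconstruction via the pseudoinverse of the
# effective (masked, partial) influence matrix.
est = np.linalg.pinv(A_part) @ slopes
print(est.round(2))  # → [ 0.4 -0.2]
```

Recomputing the pseudoinverse whenever the usable-spot mask changes is the "updated control matrix" step the abstract calls for when scattering alters the effective pupil.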

  20. Sequencing RNA by a combination of exonuclease digestion and uridine specific chemical cleavage using MALDI-TOF.

    PubMed Central

    Tolson, D A; Nicholson, N H

    1998-01-01

    The determination of DNA sequences by partial exonuclease digestion followed by Matrix-Assisted Laser Desorption/Ionization Time-of-Flight Mass Spectrometry (MALDI-TOF) is a well established method. When the same procedure is applied to RNA, difficulties arise due to the small (1 Da) mass difference between the nucleotides U and C, which makes unambiguous assignment difficult using a MALDI-TOF instrument. Here we report our experiences with sequence-specific endonucleases and chemical methods followed by MALDI-TOF to resolve these sequence ambiguities. We have found chemical methods superior to endonucleases both in terms of correct specificity and extent of sequence coverage. This methodology can be used in combination with exonuclease digestion to rapidly assign RNA sequences. PMID:9421498

  1. Graphic matching based on shape contexts and reweighted random walks

    NASA Astrophysics Data System (ADS)

    Zhang, Mingxuan; Niu, Dongmei; Zhao, Xiuyang; Liu, Mingjun

    2018-04-01

Graphic matching is a critical issue in many areas of computer vision. In this paper, a new graphic matching algorithm combining shape contexts and reweighted random walks is proposed. Building on the shape-context local descriptor, the reweighted random walks algorithm is modified to achieve stronger robustness and correctness in the final result. The main idea is to use the shape-context descriptors to control the random-walk probability matrix during the iteration: a bias matrix computed from the descriptors is used in each iteration to improve the accuracy of the random walks and random jumps, and the one-to-one registration result is finally obtained by discretizing the resulting matrix. The algorithm not only preserves the noise robustness of reweighted random walks but also inherits the rotation, translation, and scale invariance of shape contexts. Extensive experiments on real images and random synthetic point sets, together with comparisons with other algorithms, confirm that the new method produces excellent results in graphic matching.
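The described iteration can be sketched roughly as follows (a toy interpretation, not the authors' code; the bias construction, reweighting constants, and greedy discretization are assumptions made for illustration): a descriptor-derived bias vector steers the random jumps, the walk distribution is reweighted each iteration, and a greedy discretization extracts a one-to-one matching.

```python
import numpy as np

def biased_rrwm(W, bias, alpha=0.8, beta=20.0, iters=100):
    """Toy reweighted random walk matching with a descriptor bias.

    W    : (n*m, n*m) pairwise affinity between candidate matches
    bias : (n*m,) unary similarity from local descriptors such as
           shape contexts (hypothetical input for this sketch)
    """
    b = bias / bias.sum()
    x = np.full(W.shape[0], 1.0 / W.shape[0])
    for _ in range(iters):
        y = W @ x
        # reweighting: exponentiate and renormalize (soft jump weights)
        y = np.exp(beta * y / y.max())
        y /= y.sum()
        x = alpha * y + (1 - alpha) * b    # random walk with biased jumps
    return x

def discretize(x, n, m):
    """Greedy one-to-one assignment from the stationary distribution."""
    X = x.reshape(n, m).copy()
    match = {}
    for _ in range(min(n, m)):
        i, j = np.unravel_index(np.argmax(X), X.shape)
        match[i] = j
        X[i, :] = -np.inf
        X[:, j] = -np.inf
    return match
```

In the real algorithm the discretization is usually done with the Hungarian method rather than greedily; the greedy version above is just the shortest way to show the matrix-to-matching step.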

  2. Quadratic canonical transformation theory and higher order density matrices.

    PubMed

    Neuscamman, Eric; Yanai, Takeshi; Chan, Garnet Kin-Lic

    2009-03-28

    Canonical transformation (CT) theory provides a rigorously size-extensive description of dynamic correlation in multireference systems, with an accuracy superior to and cost scaling lower than complete active space second order perturbation theory. Here we expand our previous theory by investigating (i) a commutator approximation that is applied at quadratic, as opposed to linear, order in the effective Hamiltonian, and (ii) incorporation of the three-body reduced density matrix in the operator and density matrix decompositions. The quadratic commutator approximation improves CT's accuracy when used with a single-determinant reference, repairing the previous formal disadvantage of the single-reference linear CT theory relative to singles and doubles coupled cluster theory. Calculations on the BH and HF binding curves confirm this improvement. In multireference systems, the three-body reduced density matrix increases the overall accuracy of the CT theory. Tests on the H(2)O and N(2) binding curves yield results highly competitive with expensive state-of-the-art multireference methods, such as the multireference Davidson-corrected configuration interaction (MRCI+Q), averaged coupled pair functional, and averaged quadratic coupled cluster theories.

  3. Enhancing multi-step quantum state tomography by PhaseLift

    NASA Astrophysics Data System (ADS)

    Lu, Yiping; Zhao, Qing

    2017-09-01

Multi-photon systems have been studied by many groups; however, the biggest challenge is that the number of available copies of an unknown state is limited, far too few for detecting quantum entanglement. The difficulty of preparing copies of the state is even more serious for quantum state tomography. One possible way to address this problem is adaptive quantum state tomography, in which a preliminary density matrix is obtained in a first step and revised in a second step. To improve the performance of adaptive quantum state tomography, we develop a new sample-distribution scheme and extend the procedure to three steps, i.e., the density matrix obtained by traditional adaptive quantum state tomography is corrected once more. Our numerical results show that the mean square error of the reconstructed density matrix is improved from the level of 10⁻⁴ to 10⁻⁹ for several tested states. In addition, PhaseLift is applied to reduce the required storage space of the measurement operators.

  4. Multi-centre evaluation of mass spectrometric identification of anaerobic bacteria using the VITEK® MS system.

    PubMed

    Garner, O; Mochon, A; Branda, J; Burnham, C-A; Bythrow, M; Ferraro, M; Ginocchio, C; Jennemann, R; Manji, R; Procop, G W; Richter, S; Rychert, J; Sercia, L; Westblade, L; Lewinski, M

    2014-04-01

Accurate and timely identification of anaerobic bacteria is critical to successful treatment. Classic phenotypic methods for identification require long turnaround times and can exhibit poor species level identification. Matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) is an identification method that can provide rapid identification of anaerobes. We present a multi-centre study assessing the clinical performance of the VITEK® MS in the identification of anaerobic bacteria. Five different test sites analysed a collection of 651 unique anaerobic isolates comprising 11 different genera. Multiple species were included for several of the genera. Briefly, anaerobic isolates were applied directly to a well of a target plate. Matrix solution (α-cyano-4-hydroxycinnamic acid) was added and allowed to dry. Mass spectra results were generated with the VITEK® MS, and the comparative spectral analysis and organism identification were determined using the VITEK® MS database 2.0. Results were confirmed by 16S rRNA gene sequencing. Of the 651 isolates analysed, 91.2% (594/651) exhibited the correct species identification. An additional eight isolates were correctly identified to genus level, raising the rate of identification to 92.5%. Genus-level identification consisted of Actinomyces, Bacteroides and Prevotella species. Fusobacterium nucleatum, Actinomyces neuii and Bacteroides uniformis were notable for an increased percentage of no-identification results compared with the other anaerobes tested. VITEK® MS identification of clinically relevant anaerobes is highly accurate and represents a dramatic improvement over other phenotypic methods in accuracy and turnaround time. © 2013 The Authors Clinical Microbiology and Infection © 2013 European Society of Clinical Microbiology and Infectious Diseases.

  5. ASCS online fault detection and isolation based on an improved MPCA

    NASA Astrophysics Data System (ADS)

    Peng, Jianxin; Liu, Haiou; Hu, Yuhui; Xi, Junqiang; Chen, Huiyan

    2014-09-01

Multi-way principal component analysis (MPCA) has received considerable attention and been widely used in process monitoring. A traditional MPCA algorithm unfolds multiple batches of historical data into a two-dimensional matrix and cuts the matrix along the time axis to form subspaces. However, low subspace efficiency and difficult fault isolation are common disadvantages of the principal component model. This paper presents a new subspace construction method based on a kernel density estimation function that can effectively reduce the storage required for the subspace information. The MPCA model and the knowledge base are built on the new subspace. Fault detection and isolation with the squared prediction error (SPE) statistic and the Hotelling (T²) statistic are then realized in process monitoring. When a fault occurs, fault isolation based on the SPE statistic is achieved by residual contribution analysis of the different variables. For fault isolation based on the T² statistic, the relationship between the statistic indicator and the state variables is constructed, and constraint conditions are presented to check the validity of the fault isolation. To improve the robustness of fault isolation against unexpected disturbances, a statistical method is adopted to relate single and multiple subspaces and increase the correct rate of fault isolation. Finally, fault detection and isolation based on the improved MPCA is used to monitor the automatic shift control system (ASCS) to verify the correctness and effectiveness of the algorithm. The research proposes a new subspace construction method that reduces the required storage capacity and improves the robustness of the principal component model, and establishes the relationship between the state variables and the fault detection indicators for fault isolation.

  6. Direct computational approach to lattice supersymmetric quantum mechanics

    NASA Astrophysics Data System (ADS)

    Kadoh, Daisuke; Nakayama, Katsumasa

    2018-07-01

We study lattice supersymmetric models numerically using the transfer matrix approach. This method consists only of deterministic processes and has no statistical uncertainties. We improve it by performing a scale transformation of the variables such that the Witten index is correctly reproduced from the lattice model, and the other prescriptions are shown in detail. Compared to previous Monte Carlo results, we can estimate the effective masses, the SUSY Ward identity, and the cut-off dependence of the results with high precision. This kind of information is useful for improving lattice formulations of supersymmetric models.

  7. A new treatment for parrot beak deformity of the toe.

    PubMed

    Kurokawa, M; Isshiki, N; Inoue, K

    1994-03-01

    Two cases of congenital parrot beak deformity of the toe were treated by pushing back the nail plate, nailbed, matrix, and proximal skin fold in one piece as a flap. The proximal skin portion of this flap was deepithelialized to facilitate this shift, and thus no dog-ear deformity was produced. The distal skin defect of the pulp was covered by a palmar advancement flap. This method does not require augmentation of the fingertip or toetip and is very useful for correcting parrot beak deformities of the toes.

  8. A method for nonlinear exponential regression analysis

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1971-01-01

    A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
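The described procedure maps directly onto a Gauss-Newton iteration: linearize by Taylor expansion, solve for a correction, repeat. A minimal sketch for the model y = a·exp(b·t) (illustrative only; the report's actual model form and stopping criterion may differ):

```python
import numpy as np

def fit_exponential(t, y, iters=20):
    """Gauss-Newton fit of y = a*exp(b*t) to decay-type data.

    A log-linear least-squares fit supplies the nominal initial
    estimates; each iteration applies the correction (J^T J)^-1 J^T r
    to produce an improved set of model parameters.
    """
    # initial nominal estimates from the linearized model log y = log a + b t
    A = np.column_stack([np.ones_like(t), t])
    loga, b = np.linalg.lstsq(A, np.log(y), rcond=None)[0]
    a = np.exp(loga)
    for _ in range(iters):
        f = a * np.exp(b * t)
        r = y - f                             # residuals
        J = np.column_stack([f / a, t * f])   # d f/d a, d f/d b
        delta = np.linalg.solve(J.T @ J, J.T @ r)  # correction step
        a, b = a + delta[0], b + delta[1]
        if np.linalg.norm(delta) < 1e-12:     # predetermined criterion
            break
    return a, b

t = np.linspace(0.0, 5.0, 50)
y = 2.5 * np.exp(-0.7 * t)
a, b = fit_exponential(t, y)
```

On noisy data the log-linear starting fit is biased (it down-weights large y values), which is exactly why the nonlinear correction loop is needed.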

  9. Fluorescence errors in integrating sphere measurements of remote phosphor type LED light sources

    NASA Astrophysics Data System (ADS)

    Keppens, A.; Zong, Y.; Podobedov, V. B.; Nadal, M. E.; Hanselaer, P.; Ohno, Y.

    2011-05-01

    The relative spectral radiant flux error caused by phosphor fluorescence during integrating sphere measurements is investigated both theoretically and experimentally. Integrating sphere and goniophotometer measurements are compared and used for model validation, while a case study provides additional clarification. Criteria for reducing fluorescence errors to a degree of negligibility as well as a fluorescence error correction method based on simple matrix algebra are presented. Only remote phosphor type LED light sources are studied because of their large phosphor surfaces and high application potential in general lighting.
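One plausible form of such a matrix-algebra correction (an assumed model for illustration, not the paper's exact formulation) treats the measured spectrum as the true spectrum plus a linear fluorescence redistribution across wavelength bins, and inverts that linear system:

```python
import numpy as np

def correct_fluorescence(S_meas, F):
    """Remove an assumed linear fluorescence contribution.

    Illustrative model: S_meas = (I + F) @ S_true, where F[i, j] is the
    fraction of flux in wavelength bin j re-emitted by the phosphor into
    bin i. The correction solves the linear system for S_true.
    """
    n = len(S_meas)
    return np.linalg.solve(np.eye(n) + F, S_meas)
```

In practice F would have to be characterized for the specific sphere coating and phosphor; the sketch only shows why the correction reduces to simple matrix algebra once F is known.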

  10. Optics measurement and correction for the Relativistic Heavy Ion Collider

    NASA Astrophysics Data System (ADS)

    Shen, Xiaozhe

The quality of beam optics is of great importance for the performance of a high-energy accelerator like the Relativistic Heavy Ion Collider (RHIC). Turn-by-turn (TBT) beam position monitor (BPM) data can be used to derive the beam optics. However, the accuracy of the derived optics is often limited by the performance and imperfections of the instruments as well as by the measurement methods and conditions. Therefore, a robust and model-independent data analysis method is highly desirable for extracting noise-free information from TBT BPM data. As a robust signal-processing technique, an independent component analysis (ICA) algorithm called second-order blind identification (SOBI) has proven particularly efficient at extracting physical beam signals from TBT BPM data even in the presence of instrument noise and errors. We applied the SOBI ICA algorithm to RHIC during the 2013 polarized proton operation to extract accurate linear optics from TBT BPM data of AC-dipole-driven coherent beam oscillation. From the same data, a first systematic estimate of RHIC BPM noise performance was also obtained with the SOBI ICA algorithm and showed good agreement with the RHIC BPM configurations. Based on the accurate linear optics measurement, a beta-beat response matrix correction method and a scheme using horizontal closed-orbit bumps at sextupoles for arc beta-beat correction were successfully applied to reach a record-low beam optics error at RHIC. This thesis presents the principles of the SOBI ICA algorithm and the theory as well as experimental results of optics measurement and correction at RHIC.
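A response-matrix correction of this kind is typically a truncated-SVD least-squares solve: the measured beta-beat vector is inverted through the response matrix to obtain corrector strengths. A minimal sketch (illustrative; RHIC's actual correction software and response-matrix construction are not reproduced here):

```python
import numpy as np

def quad_corrections(R, beat, n_sv=None):
    """Solve R @ dk = -beat for quadrupole strength changes.

    R    : (n_bpm, n_quad) response of beta-beat to a unit strength change
    beat : measured beta-beat at the BPMs
    n_sv : number of singular values kept; truncating the SVD
           regularizes the inversion against BPM noise
    """
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    if n_sv is not None:
        U, s, Vt = U[:, :n_sv], s[:n_sv], Vt[:n_sv]
    # pseudoinverse applied to the negated beat: dk = -V diag(1/s) U^T beat
    return -(Vt.T * (1.0 / s)) @ (U.T @ beat)
```

The number of retained singular values trades correction strength against sensitivity to measurement noise, which is why accurate, low-noise optics measurement (the SOBI ICA step above) matters for the correction quality.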

  11. Line interference effects using a refined Robert-Bonamy formalism: The test case of the isotropic Raman spectra of autoperturbed N{sub 2}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boulet, Christian, E-mail: Christian.boulet@u-psud.fr; Ma, Qiancheng; Thibault, Franck

A symmetrized version of the recently developed refined Robert-Bonamy formalism [Q. Ma, C. Boulet, and R. H. Tipping, J. Chem. Phys. 139, 034305 (2013)] is proposed. This model takes into account line coupling effects and hence allows the calculation of the off-diagonal elements of the relaxation matrix, without neglecting the rotational structure of the perturbing molecule. The formalism is applied to the isotropic Raman spectra of autoperturbed N{sub 2}, for which a benchmark quantum relaxation matrix has recently been proposed. The consequences of the classical path approximation are carefully analyzed. Methods correcting for effects of inelasticity are considered. While in the right direction, these corrections appear to be too crude to provide off-diagonal elements which would yield, via the sum rule, diagonal elements in good agreement with the quantum results. In order to overcome this difficulty, a re-normalization procedure is applied, which ensures that the off-diagonal elements do lead to the exact quantum diagonal elements. The agreement between the (re-normalized) semi-classical and quantum relaxation matrices is excellent, at least for the Raman spectra of N{sub 2}, opening the way to the analysis of more complex molecular systems.

  12. A beam hardening and dispersion correction for x-ray dark-field radiography.

    PubMed

    Pelzer, Georg; Anton, Gisela; Horn, Florian; Rieger, Jens; Ritter, André; Wandner, Johannes; Weber, Thomas; Michel, Thilo

    2016-06-01

X-ray dark-field imaging promises information on small-angle scattering properties even of large samples. However, the dark-field image is correlated with the object's attenuation and phase shift if a polychromatic x-ray spectrum is used. A method to remove part of these correlations is proposed. The experimental setup for image acquisition was modeled in a wave-field simulation to quantify the dark-field signals originating solely from a material's attenuation and phase shift. A calibration matrix was simulated for ICRU46 breast tissue. Using the simulated data, a dark-field image of a human mastectomy sample was corrected for the fingerprint of the attenuation and phase images. Comparing the simulated, attenuation-based dark-field values to a phantom measurement, good agreement was found. Applying the proposed method to mammographic dark-field data, a reduction of the dark-field background and of anatomical noise was achieved, and the contrast between microcalcifications and their surrounding background was increased. The authors show that the influence of beam hardening and dispersion can be quantified by simulation and, thus, measured image data can be corrected. The simulation makes it possible to determine the corresponding dark-field artifacts for a wide range of setup parameters, such as tube voltage and filtration. The application of the proposed method to mammographic dark-field data shows an increase in contrast compared to the original image, which might simplify further image-based diagnosis.

  13. Large Electroweak Corrections to Vector-Boson Scattering at the Large Hadron Collider.

    PubMed

    Biedermann, Benedikt; Denner, Ansgar; Pellen, Mathieu

    2017-06-30

    For the first time full next-to-leading-order electroweak corrections to off-shell vector-boson scattering are presented. The computation features the complete matrix elements, including all nonresonant and off-shell contributions, to the electroweak process pp→μ^{+}ν_{μ}e^{+}ν_{e}jj and is fully differential. We find surprisingly large corrections, reaching -16% for the fiducial cross section, as an intrinsic feature of the vector-boson-scattering processes. We elucidate the origin of these large electroweak corrections upon using the double-pole approximation and the effective vector-boson approximation along with leading-logarithmic corrections.

  14. Simple Approach to Renormalize the Cabibbo-Kobayashi-Maskawa Matrix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kniehl, Bernd A.; Sirlin, Alberto

    2006-12-01

    We present an on-shell scheme to renormalize the Cabibbo-Kobayashi-Maskawa (CKM) matrix. It is based on a novel procedure to separate the external-leg mixing corrections into gauge-independent self-mass and gauge-dependent wave function renormalization contributions, and to implement the on-shell renormalization of the former with nondiagonal mass counterterm matrices. Diagonalization of the complete mass matrix leads to an explicit CKM counterterm matrix, which automatically satisfies all the following important properties: it is gauge independent, preserves unitarity, and leads to renormalized amplitudes that are nonsingular in the limit in which any two fermions become mass degenerate.

  15. Modelling Errors in Automatic Speech Recognition for Dysarthric Speakers

    NASA Astrophysics Data System (ADS)

    Caballero Morales, Santiago Omar; Cox, Stephen J.

    2009-12-01

    Dysarthria is a motor speech disorder characterized by weakness, paralysis, or poor coordination of the muscles responsible for speech. Although automatic speech recognition (ASR) systems have been developed for disordered speech, factors such as low intelligibility and limited phonemic repertoire decrease speech recognition accuracy, making conventional speaker adaptation algorithms perform poorly on dysarthric speakers. In this work, rather than adapting the acoustic models, we model the errors made by the speaker and attempt to correct them. For this task, two techniques have been developed: (1) a set of "metamodels" that incorporate a model of the speaker's phonetic confusion matrix into the ASR process; (2) a cascade of weighted finite-state transducers at the confusion matrix, word, and language levels. Both techniques attempt to correct the errors made at the phonetic level and make use of a language model to find the best estimate of the correct word sequence. Our experiments show that both techniques outperform standard adaptation techniques.
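The confusion-matrix idea can be illustrated with a toy noisy-channel corrector (a drastic simplification of the metamodel and WFST-cascade techniques; the lexicon, confusion probabilities, and unigram prior below are hypothetical): given a recognized phone string, choose the lexicon word that best explains it under the speaker's confusion matrix and a language-model prior.

```python
import numpy as np

def correct_word(recognized, lexicon, confusion, prior):
    """Pick the lexicon word best explaining a recognized phone string.

    confusion[i][r] = P(recognized phone r | intended phone i),
    estimated from the speaker's phonetic confusion matrix;
    prior[word] is a unigram language-model probability.
    Same-length words only, for brevity (no insertions/deletions).
    """
    best, best_score = None, -np.inf
    for word, phones in lexicon.items():
        if len(phones) != len(recognized):
            continue
        score = np.log(prior[word])
        for intended, r in zip(phones, recognized):
            # small floor probability for unseen confusions
            score += np.log(confusion[intended].get(r, 1e-6))
        if score > best_score:
            best, best_score = word, score
    return best
```

A speaker who systematically voices /p/ as /b/ can thus still be decoded correctly, because the confusion model makes "p recognized as b" cheap while the language model arbitrates among candidates.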

  16. Numerical implementation, verification and validation of two-phase flow four-equation drift flux model with Jacobian-free Newton–Krylov method

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-08-24

This study presents a numerical investigation of using the Jacobian-free Newton–Krylov (JFNK) method to solve the two-phase flow four-equation drift flux model with realistic constitutive correlations ('closure models'). The drift flux model is based on the work of Ishii and his collaborators. Additional constitutive correlations for vertical channel flow, such as two-phase flow pressure drop, flow regime map, wall boiling and interfacial heat transfer models, were taken from the RELAP5-3D Code Manual and included to complete the model. The staggered-grid finite volume method and the fully implicit backward Euler method were used for spatial discretization and time integration, respectively. The Jacobian-free Newton–Krylov method shows no difficulty in solving the two-phase flow drift flux model with a discrete flow regime map. In addition to the Jacobian-free approach, the preconditioning matrix is obtained using the default finite-differencing method provided in the PETSc package, so the labor-intensive implementation of a complex analytical Jacobian matrix is avoided. Extensive and successful numerical verification and validation have been performed to prove the correct implementation of the models and methods. Code-to-code comparison with RELAP5-3D has further demonstrated the successful implementation of the drift flux model.
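The core of a JFNK solver, the matrix-free Jacobian-vector product inside a Krylov linear solve, can be sketched as follows (a bare-bones illustration on a toy nonlinear system, not the study's PETSc-based implementation): the Jacobian is never formed, since J @ v is approximated by a finite difference of the residual function.

```python
import numpy as np

def gmres(Av, b, m):
    """Minimal full-dimension GMRES via the Arnoldi process."""
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        w = Av(Q[:, j])
        for i in range(j + 1):           # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] > 1e-14:
            Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y = np.linalg.lstsq(H, e1, rcond=None)[0]
    return Q[:, :m] @ y

def jfnk(F, u0, tol=1e-10, eps=1e-7, max_newton=30):
    """Jacobian-free Newton-Krylov sketch for F(u) = 0.

    J @ v is approximated by (F(u + eps*v) - F(u)) / eps, so only
    residual evaluations are needed; no Jacobian matrix is assembled.
    """
    u = u0.astype(float).copy()
    for _ in range(max_newton):
        Fu = F(u)
        if np.linalg.norm(Fu) < tol:
            break
        Jv = lambda v: (F(u + eps * v) - Fu) / eps   # matrix-free J @ v
        u += gmres(Jv, -Fu, m=len(u))                # Newton update
    return u
```

A production solver adds preconditioning (as the study does via PETSc's finite-difference preconditioner) and restarts GMRES, but the finite-difference matvec above is the defining trick.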

  17. Using the sensors of shaft position for simulation of misalignments of shafting supports of turbounits

    NASA Astrophysics Data System (ADS)

    Kumenko, A. I.; Kostyukov, V. N.; Kuz'minykh, N. Yu.; Timin, A. V.; Boichenko, S. N.

    2017-09-01

Examples of applying the method developed for the earlier proposed concept of a turbounit condition-monitoring system are presented. Methods for solving the inverse problem—calculating the misalignments of supports from measured positions of the rotor pins in the bearing borings during turbounit operation—are demonstrated. The results of determining the static responses of the supports under operational misalignments are presented. Examples of simulating and calculating support misalignments are given for the three-bearing high-pressure rotor–middle-pressure rotor (HPR-MPR) system of a 250 MW turbounit and for the 14-support shafting of a 1000 MW turbounit. The calculated coefficients of the shafting stiffness matrix and the testing of the inverse-problem solution methods by modeling are presented. The solution of the inverse problem by inverting the shafting stiffness matrix, used to determine the corrective centerings of the rotors of a multi-support shafting, is shown to be highly accurate. The stiffness matrix can be recommended for analyzing the influence of displacements of one or several supports on the support responses of the turbounit shafting during adjustment after assembly or repair. It is proposed to use the considered misalignment evaluation methods in systems monitoring changes in the mutual position of supports and the centering of rotors by the half-couplings of turbounits, especially for seismically dangerous regions and regions with increased foundation settlement due to soil watering.

  18. Optimizing identification of clinically relevant Gram-positive organisms by use of the Bruker Biotyper matrix-assisted laser desorption ionization-time of flight mass spectrometry system.

    PubMed

    McElvania Tekippe, Erin; Shuey, Sunni; Winkler, David W; Butler, Meghan A; Burnham, Carey-Ann D

    2013-05-01

    Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) can be used as a method for the rapid identification of microorganisms. This study evaluated the Bruker Biotyper (MALDI-TOF MS) system for the identification of clinically relevant Gram-positive organisms. We tested 239 aerobic Gram-positive organisms isolated from clinical specimens. We evaluated 4 direct-smear methods, including "heavy" (H) and "light" (L) smears, with and without a 1-μl direct formic acid (FA) overlay. The quality measure assigned to a MALDI-TOF MS identification is a numerical value or "score." We found that a heavy smear with a formic acid overlay (H+FA) produced optimal MALDI-TOF MS identification scores and the highest percentage of correctly identified organisms. Using a score of ≥2.0, we identified 183 of the 239 isolates (76.6%) to the genus level, and of the 181 isolates resolved to the species level, 141 isolates (77.9%) were correctly identified. To maximize the number of correct identifications while minimizing misidentifications, the data were analyzed using a score of ≥1.7 for genus- and species-level identification. Using this score, 220 of the 239 isolates (92.1%) were identified to the genus level, and of the 181 isolates resolved to the species level, 167 isolates (92.2%) could be assigned an accurate species identification. We also evaluated a subset of isolates for preanalytic factors that might influence MALDI-TOF MS identification. Frequent subcultures increased the number of unidentified isolates. Incubation temperatures and subcultures of the media did not alter the rate of identification. These data define the ideal bacterial preparation, identification score, and medium conditions for optimal identification of Gram-positive bacteria by use of MALDI-TOF MS.

  19. Compact Polarimetry in a Low Frequency Spaceborne Context

    NASA Technical Reports Server (NTRS)

    Truong-Loi, M-L.; Freeman, A.; Dubois-Fernandez, P.; Pottier, E.

    2011-01-01

Compact polarimetry (CP) has been shown to be an interesting alternative to full polarimetry when global coverage and revisit time are key issues. It consists of transmitting a single polarization while receiving two. Several critical points have been identified, one being the Faraday rotation (FR) correction and the other the calibration. When a low-frequency electromagnetic wave travels through the ionosphere, it undergoes a rotation of the polarization plane about the radar line of sight if it is linearly polarized, and a simple phase shift if it is circularly polarized. For a low-frequency radar, the only possible choice for the transmit polarization is therefore circular, to guarantee that the scattering element on the ground is illuminated with a constant polarization independently of the state of the ionosphere. This allows meaningful time-series analysis and interferometry, as long as the Faraday rotation effect is corrected for the return path. In full-polarimetric (FP) mode, two techniques allow the FR to be estimated: the Freeman method using linearly polarized data, and the Bickel and Bates theory based on transforming the measured scattering matrix to a circular basis. In CP mode, an alternative procedure is presented which relies on the scattering properties of bare surfaces. These bare surfaces are selected by the conformity coefficient, which is invariant with FR. This coefficient is compared to other published classifications to show its potential in distinguishing three different scattering types: surface, double-bounce and volume. The performance of the bare-surface selection and FR estimation is evaluated on PALSAR and airborne data. Once the bare surfaces are selected and the Faraday angle estimated over them, the correction can be applied over the whole scene. The algorithm is compared with both FP techniques. In the last part of the paper, the calibration of a CP system from the point of view of classical matrix-transformation methods in polarimetry is proposed.

  20. Optimizing Identification of Clinically Relevant Gram-Positive Organisms by Use of the Bruker Biotyper Matrix-Assisted Laser Desorption Ionization–Time of Flight Mass Spectrometry System

    PubMed Central

    McElvania TeKippe, Erin; Shuey, Sunni; Winkler, David W.; Butler, Meghan A.

    2013-01-01

    Matrix-assisted laser desorption ionization–time of flight mass spectrometry (MALDI-TOF MS) can be used as a method for the rapid identification of microorganisms. This study evaluated the Bruker Biotyper (MALDI-TOF MS) system for the identification of clinically relevant Gram-positive organisms. We tested 239 aerobic Gram-positive organisms isolated from clinical specimens. We evaluated 4 direct-smear methods, including “heavy” (H) and “light” (L) smears, with and without a 1-μl direct formic acid (FA) overlay. The quality measure assigned to a MALDI-TOF MS identification is a numerical value or “score.” We found that a heavy smear with a formic acid overlay (H+FA) produced optimal MALDI-TOF MS identification scores and the highest percentage of correctly identified organisms. Using a score of ≥2.0, we identified 183 of the 239 isolates (76.6%) to the genus level, and of the 181 isolates resolved to the species level, 141 isolates (77.9%) were correctly identified. To maximize the number of correct identifications while minimizing misidentifications, the data were analyzed using a score of ≥1.7 for genus- and species-level identification. Using this score, 220 of the 239 isolates (92.1%) were identified to the genus level, and of the 181 isolates resolved to the species level, 167 isolates (92.2%) could be assigned an accurate species identification. We also evaluated a subset of isolates for preanalytic factors that might influence MALDI-TOF MS identification. Frequent subcultures increased the number of unidentified isolates. Incubation temperatures and subcultures of the media did not alter the rate of identification. These data define the ideal bacterial preparation, identification score, and medium conditions for optimal identification of Gram-positive bacteria by use of MALDI-TOF MS. PMID:23426925

  1. Rapid Identification of Bacteria in Positive Blood Culture Broths by Matrix-Assisted Laser Desorption Ionization-Time of Flight Mass Spectrometry▿

    PubMed Central

    Stevenson, Lindsay G.; Drake, Steven K.; Murray, Patrick R.

    2010-01-01

    Matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry is a rapid, accurate method for identifying bacteria and fungi recovered on agar culture media. We report herein a method for the direct identification of bacteria in positive blood culture broths by MALDI-TOF mass spectrometry. A total of 212 positive cultures were examined, representing 32 genera and 60 species or groups. The identification of bacterial isolates by MALDI-TOF mass spectrometry was compared with biochemical testing, and discrepancies were resolved by gene sequencing. No identification (spectral score of <1.7) was obtained for 42 (19.8%) of the isolates, due most commonly to insufficient numbers of bacteria in the blood culture broth. Of the bacteria with a spectral score of ≥1.7, 162 (95.3%) of 170 isolates were correctly identified. All 8 isolates of Streptococcus mitis were misidentified as being Streptococcus pneumoniae isolates. This method provides a rapid, accurate, definitive identification of bacteria within 1 h of detection in positive blood cultures with the caveat that the identification of S. pneumoniae would have to be confirmed by an alternative test. PMID:19955282

  2. Rapid identification of bacteria in positive blood culture broths by matrix-assisted laser desorption ionization-time of flight mass spectrometry.

    PubMed

    Stevenson, Lindsay G; Drake, Steven K; Murray, Patrick R

    2010-02-01

Matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry is a rapid, accurate method for identifying bacteria and fungi recovered on agar culture media. We report herein a method for the direct identification of bacteria in positive blood culture broths by MALDI-TOF mass spectrometry. A total of 212 positive cultures were examined, representing 32 genera and 60 species or groups. The identification of bacterial isolates by MALDI-TOF mass spectrometry was compared with biochemical testing, and discrepancies were resolved by gene sequencing. No identification (spectral score of <1.7) was obtained for 42 (19.8%) of the isolates, due most commonly to insufficient numbers of bacteria in the blood culture broth. Of the bacteria with a spectral score of ≥1.7, 162 (95.3%) of 170 isolates were correctly identified. All 8 isolates of Streptococcus mitis were misidentified as being Streptococcus pneumoniae isolates. This method provides a rapid, accurate, definitive identification of bacteria within 1 h of detection in positive blood cultures with the caveat that the identification of S. pneumoniae would have to be confirmed by an alternative test.

  3. Selective Matrix (Hyaluronan) Interaction with CD44 and RhoGTPase Signaling Promotes Keratinocyte Functions and Overcomes Age-related Epidermal Dysfunction

    PubMed Central

    Bourguignon, Lilly Y.W.; Wong, Gabriel; Xia, Weiliang; Man, Mao-Qiang; Holleran, Walter M.; Elias, Peter M.

    2013-01-01

    Background Mouse epidermal chronologic aging is closely associated with aberrant matrix (hyaluronan, HA)-size distribution/production and impaired keratinocyte proliferation/differentiation, leading to a marked thinning of the epidermis, with the functional consequence of slower recovery of permeability barrier function. Objective The goal of this study is to demonstrate mechanism-based, corrective therapeutic strategies using topical applications of small HA (HAS) and/or large HA (HAL) [or a sequential small HA (HAS) and large HA (HAL) (HAS→HAL) treatment] as well as RhoGTPase signaling perturbation agents to regulate HA/CD44-mediated signaling, thereby restoring normal epidermal function and permeability barrier homeostasis in aged mouse skin. Methods A number of biochemical, cell biological/molecular, pharmacological and physiological approaches were used to investigate matrix HA-CD44-mediated RhoGTPase signaling in regulating epidermal functions and skin aging. Results In this study we demonstrated that topical application of small HA (HAS) promotes keratinocyte proliferation and increases skin thickness, while it fails to upregulate keratinocyte differentiation or permeability barrier repair in aged mouse skin. In contrast, large HA (HAL) induces only minimal changes in keratinocyte proliferation and skin thickness, but restores keratinocyte differentiation and improves permeability barrier function in aged epidermis. Since neither HAS nor HAL corrects these epidermal defects in aged CD44 knock-out mice, CD44 likely mediates HA-associated epidermal functions in aged mouse skin. Finally, blockade of Rho-kinase activity with Y27632 or protein kinase-Nγ activity with Ro31-8220 significantly decreased the HA (HAS or HAL)-mediated changes in epidermal function in aged mouse skin. Conclusion The results of our study show, first, that application of HA of different sizes regulates epidermal proliferation, differentiation and barrier function in aged mouse skin. Second, manipulation of matrix (HA) interaction with CD44 and RhoGTPase signaling could provide further novel therapeutic approaches that could be targeted for the treatment of various aging-related skin disorders. PMID:23790635

  4. Normal response function method for mass and stiffness matrix updating using complex FRFs

    NASA Astrophysics Data System (ADS)

    Pradhan, S.; Modak, S. V.

    2012-10-01

    Quite often a structural dynamic finite element model is required to be updated so as to accurately predict dynamic characteristics like the natural frequencies and the mode shapes. Since in many situations undamped natural frequencies and mode shapes need to be predicted, it has generally been the practice in these situations to seek updating of only the mass and stiffness matrices so as to obtain a reliable prediction model. Updating using frequency response functions (FRFs) has been one of the widely used approaches, including for updating of mass and stiffness matrices. However, the problem with FRF-based methods for updating mass and stiffness matrices is that these methods are based on the use of complex FRFs. Using complex FRFs to update mass and stiffness matrices is not theoretically correct, as complex FRFs are affected not only by these two matrices but also by the damping matrix. Therefore, in situations where updating of only mass and stiffness matrices using FRFs is required, a formulation based on complex FRFs is not fully justified and would lead to inaccurate updated models. This paper addresses this difficulty and proposes an improved FRF-based finite element model updating procedure using the concept of normal FRFs. The proposed method is a modified version of the existing response function method, which is based on complex FRFs. The effectiveness of the proposed method is validated through a numerical study of a simple but representative beam structure. The effects of coordinate incompleteness and of noise on the robustness of the method are investigated. The results of updating obtained by the improved method are compared with those of the existing response function method. The performance of the two approaches is compared for cases of lightly, moderately and heavily damped structures. It is found that the proposed improved method is effective in updating the mass and stiffness matrices in all cases of complete and incomplete data and with all levels and types of damping.
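
    The distinction the abstract draws between complex and normal FRFs can be seen numerically. The sketch below uses an invented 2-DOF system (M, K, and C are hypothetical, not from the paper): the complex receptance FRF depends on the damping matrix C, while the normal FRF of the associated undamped system depends on M and K only, which is why the latter is the theoretically clean input for mass/stiffness updating.

```python
import numpy as np

# Hypothetical 2-DOF system (M, K, C are illustrative, not from the paper).
M = np.diag([1.0, 1.5])                       # mass matrix
K = np.array([[400.0, -200.0],
              [-200.0, 200.0]])               # stiffness matrix
C = 0.01 * K                                  # proportional damping

def complex_frf(w):
    """Receptance FRF of the damped system: depends on M, C and K."""
    return np.linalg.inv(K - w**2 * M + 1j * w * C)

def normal_frf(w):
    """'Normal' FRF of the associated undamped system: depends on M and K only."""
    return np.linalg.inv(K - w**2 * M)

w = 10.0
H_c = complex_frf(w)
H_n = normal_frf(w)
# The complex FRF carries an imaginary part introduced purely by damping,
# which is why updating M and K directly from it is not theoretically clean.
print(np.max(np.abs(H_c.imag)) > 0.0)   # True: damping contaminates the complex FRF
print(np.allclose(H_n.imag, 0.0))       # True: the normal FRF is real-valued
```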

  5. Retargeted Least Squares Regression Algorithm.

    PubMed

    Zhang, Xu-Yao; Wang, Lingfeng; Xiang, Shiming; Liu, Cheng-Lin

    2015-09-01

    This brief presents a framework of retargeted least squares regression (ReLSR) for multicategory classification. The core idea is to learn the regression targets directly from the data rather than using the traditional zero-one matrix as regression targets. The learned target matrix can guarantee a large-margin constraint enforcing correct classification of each data point. Compared with traditional least squares regression (LSR) and a recently proposed discriminative LSR model, ReLSR is much more accurate in measuring the classification error of the regression model. Furthermore, ReLSR is a single, compact model, so there is no need to train multiple independent two-class (binary) machines. The convex optimization problem of ReLSR is solved elegantly and efficiently with an alternating procedure comprising regression and retargeting substeps. Experimental evaluation over a range of databases demonstrates the validity of our method.
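
    For context, here is a minimal sketch of the conventional LSR baseline that ReLSR improves on: regression onto a fixed one-hot (zero-one) target matrix with a closed-form ridge solution, on invented toy data. The retargeting step itself (learning the target matrix under the margin constraint) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data (illustrative only, not from the paper).
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Conventional LSR: regress onto a zero-one (one-hot) target matrix T.
T = np.eye(2)[y]                              # the fixed 0/1 targets that ReLSR would learn instead
Xb = np.hstack([X, np.ones((100, 1))])        # append a bias column
lam = 1e-3                                    # small ridge regularizer
W = np.linalg.solve(Xb.T @ Xb + lam * np.eye(3), Xb.T @ T)

pred = np.argmax(Xb @ W, axis=1)              # classify by largest regression output
acc = np.mean(pred == y)
print(acc)
```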

  6. Line Coupling Effects in the Isotropic Raman Spectra of N2: A Quantum Calculation at Room Temperature

    NASA Technical Reports Server (NTRS)

    Thibault, Franck; Boulet, Christian; Ma, Qiancheng

    2014-01-01

    We present quantum calculations of the relaxation matrix for the Q branch of N2 at room temperature using a recently proposed N2-N2 rigid rotor potential. Close coupling calculations were complemented by coupled states studies at high energies and provide about 10,200 two-body state-to-state cross sections, from which the needed one-body cross sections may be obtained. For such temperatures, convergence has to be thoroughly analyzed, since these conditions are close to the limit of current computational feasibility. This has been done using complementary calculations based on the energy corrected sudden formalism. Agreement of these quantum predictions with experimental data is good, but the main goal of this work is to provide a benchmark relaxation matrix for testing more approximate methods which remain of great utility for complex molecular systems at room (and higher) temperatures.

  7. Asymptotic Linear Spectral Statistics for Spiked Hermitian Random Matrices

    NASA Astrophysics Data System (ADS)

    Passemier, Damien; McKay, Matthew R.; Chen, Yang

    2015-07-01

    Using the Coulomb Fluid method, this paper derives central limit theorems (CLTs) for linear spectral statistics of three "spiked" Hermitian random matrix ensembles. These include Johnstone's spiked model (i.e., central Wishart with spiked correlation), non-central Wishart with rank-one non-centrality, and a related class of non-central matrices. For a generic linear statistic, we derive simple and explicit CLT expressions as the matrix dimensions grow large. For all three ensembles under consideration, we find that the primary effect of the spike is to introduce a correction term to the asymptotic mean of the linear spectral statistic, which we characterize with simple formulas. The utility of our proposed framework is demonstrated through application to three different linear statistics problems: the classical likelihood ratio test for a population covariance, the capacity analysis of multi-antenna wireless communication systems with a line-of-sight transmission path, and a classical multiple sample significance testing problem.
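
    The central observation, that a spike shifts the mean of a linear spectral statistic, can be checked with a quick Monte Carlo sketch. The dimensions, spike strength, and the choice f(λ) = log(1+λ) below are all illustrative placeholders, not values from the paper, and a real (rather than complex Hermitian) Wishart ensemble is used for simplicity.

```python
import numpy as np

rng = np.random.default_rng(1)

n, p, trials = 200, 40, 300
spike = 5.0                                    # illustrative spike strength

def linear_statistic(cov_pop):
    """Monte Carlo mean of the linear spectral statistic sum_i f(lambda_i), f = log1p."""
    L = np.linalg.cholesky(cov_pop)
    vals = []
    for _ in range(trials):
        X = rng.standard_normal((n, p)) @ L.T
        lam = np.linalg.eigvalsh(X.T @ X / n)  # sample-covariance eigenvalues
        vals.append(np.sum(np.log1p(lam)))
    return np.mean(vals)

null_mean = linear_statistic(np.eye(p))        # unspiked (identity covariance)
Sigma = np.eye(p)
Sigma[0, 0] += spike                           # rank-one spiked covariance
spiked_mean = linear_statistic(Sigma)
# The spike shifts the mean of the statistic upward, as the paper quantifies.
print(spiked_mean > null_mean)                 # True
```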

  8. Correction to: Expression of matrix metalloproteinase 12 is highly specific for non-proliferating invasive trophoblasts in the first trimester and temporally regulated by oxygen-dependent mechanisms including HIF-1A.

    PubMed

    Hiden, Ursula; Eyth, Christian P; Majali-Martinez, Alejandro; Desoye, Gernot; Tam-Amersdorfer, Carmen; Huppertz, Berthold; Ghaffari Tabrizi-Wizsy, Nassim

    2018-01-01

    In the original publication, the contribution of Dr. Christian Eyth as equal first author was not indicated. This has been corrected, confirming that U. Hiden and C. Eyth contributed equally to this work.

  9. Incorporation of Pentraxin 3 into Hyaluronan Matrices Is Tightly Regulated and Promotes Matrix Cross-linking

    PubMed Central

    Baranova, Natalia S.; Inforzato, Antonio; Briggs, David C.; Tilakaratna, Viranga; Enghild, Jan J.; Thakar, Dhruv; Milner, Caroline M.; Day, Anthony J.; Richter, Ralf P.

    2014-01-01

    Mammalian oocytes are surrounded by a highly hydrated hyaluronan (HA)-rich extracellular matrix with embedded cumulus cells, forming the cumulus cell·oocyte complex (COC) matrix. The correct assembly, stability, and mechanical properties of this matrix, which are crucial for successful ovulation, transport of the COC to the oviduct, and its fertilization, depend on the interaction between HA and specific HA-organizing proteins. Although the proteins inter-α-inhibitor (IαI), pentraxin 3 (PTX3), and TNF-stimulated gene-6 (TSG-6) have been identified as being critical for COC matrix formation, its supramolecular organization and the molecular mechanism of COC matrix stabilization remain unknown. Here we used films of end-grafted HA as a model system to investigate the molecular interactions involved in the formation and stabilization of HA matrices containing TSG-6, IαI, and PTX3. We found that PTX3 binds neither to HA alone nor to HA films containing TSG-6. This long pentraxin also failed to bind to products of the interaction between IαI, TSG-6, and HA, among which are the covalent heavy chain (HC)·HA and HC·TSG-6 complexes, despite the fact that both IαI and TSG-6 are ligands of PTX3. Interestingly, prior encounter with IαI was required for effective incorporation of PTX3 into TSG-6-loaded HA films. Moreover, we demonstrated that this ternary protein mixture made of IαI, PTX3, and TSG-6 is sufficient to promote formation of a stable (i.e. cross-linked) yet highly hydrated HA matrix. We propose that this mechanism is essential for correct assembly of the COC matrix and may also have general implications in other inflammatory processes that are associated with HA cross-linking. PMID:25190808

  10. Direct identification of bacteria from positive BacT/ALERT blood culture bottles using matrix-assisted laser desorption ionization-time-of-flight mass spectrometry.

    PubMed

    Mestas, Javier; Felsenstein, Susanna; Bard, Jennifer Dien

    2014-11-01

    Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry is a fast and robust method for the identification of bacteria. In this study, we evaluate the performance of a laboratory-developed lysis method (LDT) for the rapid identification of bacteria from positive BacT/ALERT blood culture bottles. Of the 168 positive bottles tested, 159 were monomicrobial, the majority of which were Gram-positive organisms (61.0% versus 39.0%). Using a cut-off score of ≥1.7, 80.4% of the organisms were correctly identified to the species level, and the identification rate of Gram-negative organisms (90.3%) was found to be significantly greater than that of Gram-positive organisms (78.4%). The simplicity and cost-effectiveness of the LDT enable it to be fully integrated into the routine workflow of the clinical microbiology laboratory, allowing for rapid identification of Gram-positive and Gram-negative bacteria within an hour of blood culture positivity. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Rapid Identification of Mycobacterial Whole Cells in Solid and Liquid Culture Media by Matrix-Assisted Laser Desorption Ionization-Time of Flight Mass Spectrometry

    PubMed Central

    Lotz, Aurélie; Ferroni, Agnès; Beretti, Jean-Luc; Dauphin, Brunhilde; Carbonnelle, Etienne; Guet-Revillet, Hélène; Veziris, Nicolas; Heym, Béate; Jarlier, Vincent; Gaillard, Jean-Louis; Pierre-Audigier, Catherine; Frapy, Eric; Berche, Patrick; Nassif, Xavier; Bille, Emmanuelle

    2010-01-01

    Mycobacterial identification is based on several methods: conventional biochemical tests that require several weeks for accurate identification, and molecular tools that are now routinely used. However, these techniques are expensive and time-consuming. In this study, an alternative method was developed using matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS). This approach allows a characteristic mass spectral fingerprint to be obtained from whole inactivated mycobacterial cells. We engineered a strategy based on specific profiles in order to identify the most clinically relevant species of mycobacteria. To validate the mycobacterial database, a total of 311 strains belonging to 31 distinct species and 4 species complexes grown in Löwenstein-Jensen (LJ) and liquid (mycobacterium growth indicator tube [MGIT]) media were analyzed. No extraction step was required. Correct identifications were obtained for 97% of strains from LJ and 77% from MGIT media. No misidentification was noted. Our results, based on a very simple protocol, suggest that this system may represent a serious alternative for clinical laboratories to identify mycobacterial species. PMID:20943874

  12. Study of the versatility of a graphite furnace atomic absorption spectrometric method for the determination of cadmium in the environmental field.

    PubMed

    Rucandio, M Isabel; Petit-Domínguez, M Dolores

    2002-01-01

    Cadmium is a representative example of trace elements that are insidious and widespread health hazards. In contemporary environmental analysis, there is a clear trend toward its determination over a wide range of concentrations in complex matrixes. This paper describes a versatile method for the determination of Cd at various levels (0.1-500 μg/g) in several sample types, such as soils, sediments, coals, ashes, sewage sludges, animal tissues, and plants, by graphite furnace atomic absorption spectrometry with Zeeman background correction. The effect of the individual presence of about 50 elements, with an interference/analyte concentration ratio of up to 10^5, was tested; recoveries of Cd ranged from 93 to 106%. The influence of different media, such as HNO3, HCl, HF, H2SO4, HClO4, acetic acid, hydroxylammonium chloride, and ammonium acetate, in several concentrations, was also tested. From these studies it can be concluded that the analytical procedure is scarcely matrix dependent, and the results obtained for a wide diversity of reference materials are in good agreement with the certified values.
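
    As a small illustration of the interference test reported above, a spike-recovery check reduces to a ratio against the spiked level, flagged against the 93-106% window from the abstract. The interferent names and measured values below are invented.

```python
# Hypothetical spike-recovery check in the style of the interference study:
# Cd is spiked at a known level, measured with each interferent present, and
# the recovery must fall in the 93-106% window reported in the abstract.
expected = 5.0                                   # ug/g Cd spiked (made-up value)
measured = {"Fe": 4.80, "Zn": 5.20, "Ca": 4.65}  # Cd found per interferent (made up)
for elem, found in measured.items():
    recovery = 100.0 * found / expected          # recovery in percent
    print(elem, round(recovery, 1), 93 <= recovery <= 106)
```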

  13. Modeling Fatigue Damage Onset and Progression in Composites Using an Element-Based Virtual Crack Closure Technique Combined With the Floating Node Method

    NASA Technical Reports Server (NTRS)

    De Carvalho, Nelson V.; Krueger, Ronald

    2016-01-01

    A new methodology is proposed to model the onset and propagation of matrix cracks and delaminations in carbon-epoxy composites subject to fatigue loading. An extended interface element, based on the Floating Node Method, is developed to represent delaminations and matrix cracks explicitly in a mesh-independent fashion. Crack propagation is determined using an element-based Virtual Crack Closure Technique approach to determine mixed-mode energy release rates, and the Paris-law relationship to obtain the crack growth rate. Crack onset is determined using a stress-based onset criterion coupled with a stress vs. cycle curve and the Palmgren-Miner rule to account for fatigue damage accumulation. The approach is implemented in Abaqus/Standard® via the user subroutine functionality. Verification exercises are performed to assess the accuracy and correct implementation of the approach. Finally, it was demonstrated that this approach captured the differences in failure morphology in fatigue for two laminates of identical stiffness, but with layups containing ?deg plies that were either stacked in a single group or distributed through the laminate thickness.
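
    The fatigue-propagation ingredient described above, a Paris-law growth rate driven by the energy release rate, amounts to the cycle-count integral N = ∫ da / (da/dN). The sketch below integrates it numerically; the parameter values, the toughness used for normalization, and the linear ΔG(a) are all invented placeholders, not the paper's data.

```python
import numpy as np

# Illustrative Paris-law parameters (not the paper's values).
C, m = 1e-10, 3.5            # da/dN = C * (Delta G / G_c)^m
G_c = 0.5                    # toughness used only to normalize Delta G

def cycles_to_grow(a0, af, delta_G, C=C, m=m, steps=10000):
    """Count cycles to grow a crack from a0 to af: N = integral of da / (da/dN)."""
    a = np.linspace(a0, af, steps)
    dadN = C * (delta_G(a) / G_c) ** m         # Paris-law growth rate at each crack length
    integrand = 1.0 / dadN
    # trapezoidal rule, written out for portability across numpy versions
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(a))

# Assume, for the sketch only, that Delta G grows linearly with crack length.
N = cycles_to_grow(1e-3, 1e-2, lambda a: 50.0 * a)
print(N > 0)                                   # True: a finite, positive cycle count
```

    A larger Paris coefficient C (faster growth per cycle) gives proportionally fewer cycles, which is a quick sanity check on the integration.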

  14. A genetic fuzzy analytical hierarchy process based projection pursuit method for selecting schemes of water transportation projects

    NASA Astrophysics Data System (ADS)

    Jin, Juliang; Li, Lei; Wang, Wensheng; Zhang, Ming

    2006-10-01

    The optimal selection of schemes of water transportation projects is the process of choosing a relatively optimal scheme from a number of schemes of water transportation programming and management projects, which is of importance in both theory and practice in water resource systems engineering. In order to achieve consistency and eliminate the dimensions of fuzzy qualitative and fuzzy quantitative evaluation indexes, to determine the weights of the indexes objectively, and to increase the differences among the comprehensive evaluation index values of water transportation project schemes, a projection pursuit method, named FPRM-PP for short, was developed in this work for selecting the optimal water transportation project scheme based on the fuzzy preference relation matrix. The research results show that FPRM-PP is intuitive and practical, the correction it applies to the fuzzy preference relation matrix A is relatively small, and the result obtained is both stable and accurate; FPRM-PP can therefore be widely used in the optimal selection of different multi-factor decision-making schemes.

  15. Application of the dual-kinetic-balance sets in the relativistic many-body problem of atomic structure

    NASA Astrophysics Data System (ADS)

    Beloy, Kyle; Derevianko, Andrei

    2008-09-01

    The dual-kinetic-balance (DKB) finite basis set method for solving the Dirac equation for hydrogen-like ions [V.M. Shabaev et al., Phys. Rev. Lett. 93 (2004) 130405] is extended to problems with a non-local spherically-symmetric Dirac-Hartree-Fock potential. We implement the DKB method using B-spline basis sets and compare its performance with the widely-employed approach of the Notre Dame (ND) group [W.R. Johnson, S.A. Blundell, J. Sapirstein, Phys. Rev. A 37 (1988) 307-315]. We compare the performance of the ND and DKB methods by computing various properties of the Cs atom: energies, hyperfine integrals, the parity-non-conserving amplitude of the 6s-7s transition, and the second-order many-body correction to the removal energy of the valence electrons. We find that for a comparable size of the basis set the accuracy of both methods is similar for matrix elements accumulated far from the nuclear region. However, for atomic properties determined by small distances, the DKB method outperforms the ND approach. In addition, we present a strategy for optimizing the size of the basis sets by choosing progressively smaller numbers of basis functions for increasingly higher partial waves. This strategy exploits the suppression of contributions of high partial waves to typical many-body correlation corrections.

  16. Long-range analysis of density fitting in extended systems

    NASA Astrophysics Data System (ADS)

    Varga, Štefan

    The density fitting scheme is analyzed for the Coulomb problem in extended systems from the point of view of correct long-range behavior. We show that for the correct cancellation of divergent long-range Coulomb terms it is crucial for the density fitting scheme to reproduce the overlap matrix exactly. It is demonstrated that, of all possible fitting metric choices, the Coulomb metric is the only one which inherently preserves the overlap matrix for infinite systems with translational periodicity. Moreover, we show that with a small additional effort any non-Coulomb-metric fit can be made overlap-preserving as well. The problem is analyzed for both ordinary and Poisson basis set choices.

  17. A comparison of visual outcomes in three different types of monofocal intraocular lenses.

    PubMed

    Shetty, Vijay; Haldipurkar, Suhas S; Gore, Rujuta; Dhamankar, Rita; Paik, Anirban; Setia, Maninder Singh

    2015-01-01

    To compare the visual outcomes (distance and near) in patients opting for three different types of monofocal intraocular lens (IOL) (Matrix Aurium, AcrySof single piece, and AcrySof IQ lens). The present study is a cross-sectional analysis of secondary clinical data collected from 153 eyes (52 eyes in the Matrix Aurium, 48 in the AcrySof single piece, and 53 in the AcrySof IQ group) undergoing cataract surgery (2011-2012). We compared near vision, distance vision, and distance corrected near vision for these three types of lenses on day 15 (±3) post-surgery. About 69% of the eyes in the Matrix Aurium group had good uncorrected distance vision post-surgery; the proportion was 48% and 57% in the AcrySof single piece and AcrySof IQ groups (P=0.09). The proportion of eyes with good distance corrected near vision was 38%, 33%, and 15% in the Matrix Aurium, AcrySof single piece, and AcrySof IQ groups respectively (P=0.02). Similarly, the proportion with good "both near and distance vision" was 38%, 33%, and 15% in the Matrix Aurium, AcrySof single piece, and AcrySof IQ groups respectively (P=0.02). Only the Matrix Aurium group had significantly better "distance and near vision" compared with the AcrySof IQ group (odds ratio: 5.87, 95% confidence intervals: 1.68 to 20.56). Matrix Aurium monofocal lenses may be a good option for patients who desire good near as well as distance vision post-surgery.

  18. Quantum and electromagnetic propagation with the conjugate symmetric Lanczos method.

    PubMed

    Acevedo, Ramiro; Lombardini, Richard; Turner, Matthew A; Kinsey, James L; Johnson, Bruce R

    2008-02-14

    The conjugate symmetric Lanczos (CSL) method is introduced for the solution of the time-dependent Schrödinger equation. This remarkably simple and efficient time-domain algorithm is a low-order polynomial expansion of the quantum propagator for time-independent Hamiltonians and derives from the time-reversal symmetry of the Schrödinger equation. The CSL algorithm gives forward solutions by simply complex conjugating backward polynomial expansion coefficients. Interestingly, the expansion coefficients are the same for each uniform time step, a fact that is only spoiled by basis incompleteness and finite precision. This is true for the Krylov basis and, with further investigation, is also found to be true for the Lanczos basis, important for efficient orthogonal projection-based algorithms. The CSL method errors roughly track those of the short iterative Lanczos method while requiring fewer matrix-vector products than the Chebyshev method. With the CSL method, only a few vectors need to be stored at a time, there is no need to estimate the Hamiltonian spectral range, and only matrix-vector and vector-vector products are required. Applications using localized wavelet bases are made to harmonic oscillator and anharmonic Morse oscillator systems as well as electrodynamic pulse propagation using the Hamiltonian form of Maxwell's equations. For gold with a Drude dielectric function, the latter is non-Hermitian, requiring consideration of corrections to the CSL algorithm.
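
    For orientation, the short iterative Lanczos step that the CSL method is compared against can be sketched as follows. The toy Hamiltonian, step size, and Krylov dimension are invented; this is not the CSL algorithm itself, which additionally exploits time-reversal symmetry to reuse complex-conjugated expansion coefficients.

```python
import numpy as np

def lanczos_step(H, psi, dt, k=12):
    """One short-iterative-Lanczos step: psi(t+dt) ≈ ||psi|| * Q exp(-i T dt) e1."""
    n = len(psi)
    Q = np.zeros((n, k), dtype=complex)        # orthonormal Krylov basis
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    Q[:, 0] = psi / np.linalg.norm(psi)
    for j in range(k):                         # three-term Lanczos recursion
        w = H @ Q[:, j]
        alpha[j] = np.real(np.vdot(Q[:, j], w))
        w = w - alpha[j] * Q[:, j]
        if j > 0:
            w = w - beta[j - 1] * Q[:, j - 1]
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)   # tridiagonal projection
    evals, evecs = np.linalg.eigh(T)
    small = evecs @ (np.exp(-1j * evals * dt) * evecs[0, :])    # exp(-i T dt) e1
    return np.linalg.norm(psi) * (Q @ small)

# Toy Hermitian Hamiltonian: harmonic-oscillator-like diagonal with weak coupling.
n = 64
H = np.diag(np.arange(n, dtype=float)) \
    + 0.1 * (np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0
psi1 = lanczos_step(H, psi0, dt=0.05)
print(abs(np.linalg.norm(psi1) - 1.0) < 1e-8)  # True: unitary step preserves the norm
```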

  19. Determination of a metabolite of nifursol in foodstuffs of animal origin by liquid-liquid extraction and liquid chromatography with tandem mass spectrometry.

    PubMed

    Wang, Chuanxian; Qu, Li; Liu, Xia; Zhao, Chaomin; Zhao, Fengjuan; Huang, Fuzhen; Zhu, Zhenou; Han, Chao

    2017-02-01

    An analytical method has been developed for the detection of a metabolite of nifursol, 3,5-dinitrosalicylic acid hydrazide, in foodstuffs of animal origin (chicken liver, pork liver, lobster, shrimp, eel, sausage, and honey). The method combines liquid chromatography and tandem mass spectrometry with liquid-liquid extraction. Samples were hydrolyzed with hydrochloric acid and derivatized with 2-nitrobenzaldehyde at 37°C for 16 h. The solutions of derivatives were adjusted to pH 7.0-7.5, and the metabolite was extracted with ethyl acetate. 3,5-Dinitrosalicylic acid hydrazide determination was performed in negative electrospray ionization mode. Both isotope-labeled internal standard and matrix-matched calibration solutions were used to correct for matrix effects. Limits of quantification were 0.5 μg/kg for all samples. The average recoveries, measured at three concentration levels (0.5, 2.0, and 10 μg/kg), were in the range of 75.8-108.4%, with relative standard deviations below 9.8%. The developed method exhibits high sensitivity and selectivity for the routine determination and confirmation of the presence of a metabolite of nifursol in foodstuffs of animal origin. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
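
    The combined internal-standard and matrix-matched calibration described above amounts to regressing the analyte/internal-standard area ratio against the spiked concentration and inverting that fit for unknowns. A sketch with invented peak areas (only the 0.5/2.0/10 μg/kg levels are taken from the abstract):

```python
import numpy as np

# Hypothetical matrix-matched calibration data (concentrations in ug/kg).
conc = np.array([0.5, 2.0, 10.0])                    # spiked calibration levels
analyte_area = np.array([1200.0, 4700.0, 23600.0])   # analyte peak areas (made up)
istd_area = np.array([10100.0, 9900.0, 10050.0])     # isotope-labeled IS areas (made up)

# Internal-standard correction: regress the area ratio against concentration,
# which cancels matrix-dependent ionization suppression/enhancement.
ratio = analyte_area / istd_area
slope, intercept = np.polyfit(conc, ratio, 1)        # linear calibration curve

def quantify(sample_analyte, sample_istd):
    """Back-calculate the concentration of an unknown from its IS-corrected ratio."""
    return (sample_analyte / sample_istd - intercept) / slope

c = quantify(9000.0, 9950.0)                         # an invented unknown sample
print(round(c, 2))
```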

  20. Evaluation of a Semiquantitative Matrix-Assisted Laser Desorption Ionization-Time of Flight Mass Spectrometry Method for Rapid Antimicrobial Susceptibility Testing of Positive Blood Cultures.

    PubMed

    Jung, Jette S; Hamacher, Christina; Gross, Birgit; Sparbier, Katrin; Lange, Christoph; Kostrzewa, Markus; Schubert, Sören

    2016-11-01

    With the increasing prevalence of multidrug-resistant Gram-negative bacteria, rapid identification of the pathogen and its individual antibiotic resistance is crucial to ensure adequate anti-infective treatment at the earliest time point. Matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry for the identification of bacteria directly from the blood culture bottle has been widely established; however, there is still an urgent need for new methods that permit rapid resistance testing. Recently, a semiquantitative MALDI-TOF mass spectrometry-based method for the prediction of antibiotic resistance was described. We evaluated this method for detecting nonsusceptibility against two β-lactam and two non-β-lactam antibiotics. A collection of 30 spiked blood cultures was tested for nonsusceptibility against gentamicin and ciprofloxacin. Furthermore, 99 patient-derived blood cultures were tested for nonsusceptibility against cefotaxime, piperacillin-tazobactam, and ciprofloxacin in parallel with MALDI-TOF mass spectrometry identification from the blood culture fluid. The assay correctly classified all isolates tested for nonsusceptibility against gentamicin and cefotaxime. One misclassification for ciprofloxacin nonsusceptibility and five misclassifications for piperacillin-tazobactam nonsusceptibility occurred. Identification of the bacterium and prediction of nonsusceptibility were possible within approximately 4 h. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  1. Multicenter Evaluation of the Vitek MS Matrix-Assisted Laser Desorption Ionization–Time of Flight Mass Spectrometry System for Identification of Gram-Positive Aerobic Bacteria

    PubMed Central

    Burnham, Carey-Ann D.; Bythrow, Maureen; Garner, Omai B.; Ginocchio, Christine C.; Jennemann, Rebecca; Lewinski, Michael A.; Manji, Ryhana; Mochon, A. Brian; Procop, Gary W.; Richter, Sandra S.; Sercia, Linda; Westblade, Lars F.; Ferraro, Mary Jane; Branda, John A.

    2013-01-01

    Matrix-assisted laser desorption ionization–time of flight mass spectrometry (MALDI-TOF) is gaining momentum as a tool for bacterial identification in the clinical microbiology laboratory. Compared with conventional methods, this technology can more readily and conveniently identify a wide range of organisms. Here, we report the findings from a multicenter study to evaluate the Vitek MS v2.0 system (bioMérieux, Inc.) for the identification of aerobic Gram-positive bacteria. A total of 1,146 unique isolates, representing 13 genera and 42 species, were analyzed, and results were compared to those obtained by nucleic acid sequence-based identification as the reference method. For 1,063 of 1,146 isolates (92.8%), the Vitek MS provided a single identification that was accurate to the species level. For an additional 31 isolates (2.7%), multiple possible identifications were provided, all correct at the genus level. Mixed-genus or single-choice incorrect identifications were provided for 18 isolates (1.6%). Although no identification was obtained for 33 isolates (2.9%), there was no specific bacterial species for which the Vitek MS consistently failed to provide identification. In a subset of 463 isolates representing commonly encountered important pathogens, 95% were accurately identified to the species level and there were no misidentifications. Also, in all but one instance, the Vitek MS correctly differentiated Streptococcus pneumoniae from other viridans group streptococci. The findings demonstrate that the Vitek MS system is highly accurate for the identification of Gram-positive aerobic bacteria in the clinical laboratory setting. PMID:23658261

  2. Multicenter evaluation of the Vitek MS matrix-assisted laser desorption ionization-time of flight mass spectrometry system for identification of Gram-positive aerobic bacteria.

    PubMed

    Rychert, Jenna; Burnham, Carey-Ann D; Bythrow, Maureen; Garner, Omai B; Ginocchio, Christine C; Jennemann, Rebecca; Lewinski, Michael A; Manji, Ryhana; Mochon, A Brian; Procop, Gary W; Richter, Sandra S; Sercia, Linda; Westblade, Lars F; Ferraro, Mary Jane; Branda, John A

    2013-07-01

    Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF) is gaining momentum as a tool for bacterial identification in the clinical microbiology laboratory. Compared with conventional methods, this technology can more readily and conveniently identify a wide range of organisms. Here, we report the findings from a multicenter study to evaluate the Vitek MS v2.0 system (bioMérieux, Inc.) for the identification of aerobic Gram-positive bacteria. A total of 1,146 unique isolates, representing 13 genera and 42 species, were analyzed, and results were compared to those obtained by nucleic acid sequence-based identification as the reference method. For 1,063 of 1,146 isolates (92.8%), the Vitek MS provided a single identification that was accurate to the species level. For an additional 31 isolates (2.7%), multiple possible identifications were provided, all correct at the genus level. Mixed-genus or single-choice incorrect identifications were provided for 18 isolates (1.6%). Although no identification was obtained for 33 isolates (2.9%), there was no specific bacterial species for which the Vitek MS consistently failed to provide identification. In a subset of 463 isolates representing commonly encountered important pathogens, 95% were accurately identified to the species level and there were no misidentifications. Also, in all but one instance, the Vitek MS correctly differentiated Streptococcus pneumoniae from other viridans group streptococci. The findings demonstrate that the Vitek MS system is highly accurate for the identification of Gram-positive aerobic bacteria in the clinical laboratory setting.

  3. Efficient and accurate treatment of electron correlations with correlation matrix renormalization theory

    DOE PAGES

    Yao, Y. X.; Liu, J.; Liu, C.; ...

    2015-08-28

    We present an efficient method for calculating the electronic structure and total energy of strongly correlated electron systems. The method extends the traditional Gutzwiller approximation for one-particle operators to the evaluation of the expectation values of two-particle operators in the many-electron Hamiltonian. The method is free of adjustable Coulomb parameters, has no double-counting issues in the calculation of the total energy, and has the correct atomic limit. We demonstrate that the method describes well the bonding and dissociation behaviors of hydrogen and nitrogen clusters, as well as of ammonia, which is composed of hydrogen and nitrogen atoms. We also show that the method can satisfactorily tackle challenging problems faced by density functional theory that have recently been discussed in the literature. The computational workload of our method is similar to that of the Hartree-Fock approach, while the results are comparable to high-level quantum chemistry calculations.

  4. Solution of the Skyrme-Hartree-Fock-Bogolyubov equations in the Cartesian deformed harmonic-oscillator basis. (VII) HFODD (v2.49t): A new version of the program

    NASA Astrophysics Data System (ADS)

    Schunck, N.; Dobaczewski, J.; McDonnell, J.; Satuła, W.; Sheikh, J. A.; Staszczak, A.; Stoitsov, M.; Toivanen, P.

    2012-01-01

    We describe the new version (v2.49t) of the code HFODD which solves the nuclear Skyrme-Hartree-Fock (HF) or Skyrme-Hartree-Fock-Bogolyubov (HFB) problem by using the Cartesian deformed harmonic-oscillator basis. In the new version, we have implemented the following physics features: (i) the isospin mixing and projection, (ii) the finite-temperature formalism for the HFB and HF + BCS methods, (iii) the Lipkin translational energy correction method, (iv) the calculation of the shell correction. A number of specific numerical methods have also been implemented in order to deal with large-scale multi-constraint calculations and hardware limitations: (i) the two-basis method for the HFB method, (ii) the Augmented Lagrangian Method (ALM) for multi-constraint calculations, (iii) the linear constraint method based on the approximation of the RPA matrix for multi-constraint calculations, (iv) an interface with the axial and parity-conserving Skyrme-HFB code HFBTHO, (v) the mixing of the HF or HFB matrix elements instead of the HF fields. Special care has been taken to enable use of the code on massively parallel leadership-class computers. For this purpose, the following features are now available with this version: (i) the Message Passing Interface (MPI) framework, (ii) scalable input data routines, (iii) multi-threading via OpenMP pragmas, (iv) parallel diagonalization of the HFB matrix in the simplex-breaking case using the ScaLAPACK library. Finally, several minor errors of the previously published version were corrected. New version program summary: Program title: HFODD (v2.49t) Catalogue identifier: ADFL_v3_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADFL_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence v3 No. of lines in distributed program, including test data, etc.: 190 614 No. 
of bytes in distributed program, including test data, etc.: 985 898 Distribution format: tar.gz Programming language: FORTRAN-90 Computer: Intel Pentium-III, Intel Xeon, AMD-Athlon, AMD-Opteron, Cray XT4, Cray XT5 Operating system: UNIX, LINUX, Windows XP Has the code been vectorized or parallelized?: Yes, parallelized using MPI RAM: 10 Mwords Word size: The code is written in single-precision for use on a 64-bit processor. The compiler option -r8 or +autodblpad (or equivalent) has to be used to promote all real and complex single-precision floating-point items to double precision when the code is used on a 32-bit machine. Classification: 17.22 Catalogue identifier of previous version: ADFL_v2_2 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2361 External routines: The user must have access to the NAGLIB subroutine f02axe, or LAPACK subroutines zhpev, zhpevx, zheevr, or zheevd, which diagonalize complex hermitian matrices, the LAPACK subroutines dgetri and dgetrf which invert arbitrary real matrices, the LAPACK subroutines dsyevd, dsytrf and dsytri which compute eigenvalues and eigenfunctions of real symmetric matrices, the LINPACK subroutines zgedi and zgeco, which invert arbitrary complex matrices and calculate determinants, the BLAS routines dcopy, dscal, dgemm and dgemv for double-precision linear algebra and zcopy, zdscal, zgemm and zgemv for complex linear algebra, or provide another set of subroutines that can perform such tasks. The BLAS and LAPACK subroutines can be obtained from the Netlib Repository at the University of Tennessee, Knoxville: http://netlib2.cs.utk.edu/. Does the new version supersede the previous version?: Yes Nature of problem: The nuclear mean field and an analysis of its symmetries in realistic cases are the main ingredients of a description of nuclear states. 
Within the Local Density Approximation, or for a zero-range velocity-dependent Skyrme interaction, the nuclear mean field is local and velocity dependent. The locality allows for an effective and fast solution of the self-consistent Hartree-Fock equations, even for heavy nuclei, and for various nucleonic ( n-particle- n-hole) configurations, deformations, excitation energies, or angular momenta. Similarly, Local Density Approximation in the particle-particle channel, which is equivalent to using a zero-range interaction, allows for a simple implementation of pairing effects within the Hartree-Fock-Bogolyubov method. Solution method: The program uses the Cartesian harmonic oscillator basis to expand single-particle or single-quasiparticle wave functions of neutrons and protons interacting by means of the Skyrme effective interaction and zero-range pairing interaction. The expansion coefficients are determined by the iterative diagonalization of the mean-field Hamiltonians or Routhians which depend non-linearly on the local neutron and proton densities. Suitable constraints are used to obtain states corresponding to a given configuration, deformation or angular momentum. The method of solution has been presented in: [J. Dobaczewski, J. Dudek, Comput. Phys. Commun. 102 (1997) 166]. Reasons for new version: Version 2.49s of HFODD provides a number of new options such as the isospin mixing and projection of the Skyrme functional, the finite-temperature HF and HFB formalism and optimized methods to perform multi-constrained calculations. It is also the first version of HFODD to contain threading and parallel capabilities. Summary of revisions: Isospin mixing and projection of the HF states has been implemented. The finite-temperature formalism for the HFB equations has been implemented. The Lipkin translational energy correction method has been implemented. Calculation of the shell correction has been implemented. 
The two-basis method for the solution to the HFB equations has been implemented. The Augmented Lagrangian Method (ALM) for calculations with multiple constraints has been implemented. The linear constraint method based on the cranking approximation of the RPA matrix has been implemented. An interface between HFODD and the axially-symmetric and parity-conserving code HFBTHO has been implemented. The mixing of the matrix elements of the HF or HFB matrix has been implemented. A parallel interface using the MPI library has been implemented. A scalable model for reading input data has been implemented. OpenMP pragmas have been implemented in three subroutines. The diagonalization of the HFB matrix in the simplex-breaking case has been parallelized using the ScaLAPACK library. Several minor errors of the previously published version were corrected. Running time: In serial mode, running 6 HFB iterations for 152Dy for conserved parity and signature symmetries in a full spherical basis of N=14 shells takes approximately 8 min on an AMD Opteron processor at 2.6 GHz, assuming standard BLAS and LAPACK libraries. As a rule of thumb, runtime for HFB calculations for parity and signature conserved symmetries roughly increases as N, where N is the number of full HO shells. Using custom-built optimized BLAS and LAPACK libraries (such as in the ATLAS implementation) can bring down the execution time by 60%. Using the threaded version of the code with 12 threads and threaded BLAS libraries can bring an additional factor-of-2 speed-up, so that the same 6 HFB iterations now take on the order of 2 min 30 s.

  5. Robust Averaging of Covariances for EEG Recordings Classification in Motor Imagery Brain-Computer Interfaces.

    PubMed

    Uehara, Takashi; Sartori, Matteo; Tanaka, Toshihisa; Fiori, Simone

    2017-06-01

    The estimation of covariance matrices is of prime importance to analyze the distribution of multivariate signals. In motor imagery-based brain-computer interfaces (MI-BCI), covariance matrices play a central role in the extraction of features from recorded electroencephalograms (EEGs); therefore, correctly estimating covariance is crucial for EEG classification. This letter discusses algorithms to average sample covariance matrices (SCMs) for the selection of the reference matrix in tangent space mapping (TSM)-based MI-BCI. Tangent space mapping is a powerful method of feature extraction and strongly depends on the selection of a reference covariance matrix. In general, the observed signals may include outliers; therefore, taking the geometric mean of SCMs as the reference matrix may not be the best choice. In order to deal with the effects of outliers, robust estimators have to be used. In particular, we discuss and test the use of geometric medians and trimmed averages (defined on the basis of several metrics) as robust estimators. The main idea behind trimmed averages is to eliminate data that exhibit the largest distance from the average covariance calculated on the basis of all available data. The results of the experiments show that while the geometric medians show little difference from conventional methods in terms of classification accuracy on electroencephalographic recordings, the trimmed averages show significant improvement for all subjects.
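
    The trimmed-average idea can be sketched in a few lines: discard the sample covariance matrices farthest from the average and re-average the rest. The sketch below uses the plain Euclidean (Frobenius) metric for simplicity; the letter also considers other metrics, and the function name and data here are illustrative only.

```python
import numpy as np

def trimmed_average_scm(scms, trim=1):
    """Trimmed (Frobenius-metric) average of sample covariance matrices:
    drop the `trim` matrices farthest from the plain average, then
    re-average the remaining ones."""
    scms = np.asarray(scms, dtype=float)
    mean = scms.mean(axis=0)
    d = np.linalg.norm(scms - mean, axis=(1, 2))   # Frobenius distances
    keep = np.argsort(d)[: len(scms) - trim]       # discard the worst outliers
    return scms[keep].mean(axis=0)

# Three well-behaved SCMs plus one gross outlier
eye = np.eye(2)
scms = [eye, 1.1 * eye, 0.9 * eye, 50.0 * eye]
ref = trimmed_average_scm(scms, trim=1)            # outlier is rejected
```

With `trim=1` the 50-times-scaled outlier is dropped and the reference matrix stays close to the identity, whereas a plain average would be dominated by it.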

  6. Coulomb matrix elements in multi-orbital Hubbard models.

    PubMed

    Bünemann, Jörg; Gebhard, Florian

    2017-04-26

    Coulomb matrix elements are needed in all studies in solid-state theory that are based on Hubbard-type multi-orbital models. Due to symmetries, the matrix elements are not independent. We determine a set of independent Coulomb parameters for a d-shell and an f-shell and all point groups with up to 16 elements (Oh, O, Td, Th, D6h, and D4h). Furthermore, we express all other matrix elements as functions of the independent Coulomb parameters. Apart from the solution of the general point-group problem, we investigate in detail the spherical approximation and first-order corrections to the spherical approximation.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Dan, E-mail: danzhou@is.mpg.de; Sigle, Wilfried; Wang, Yi

    We studied ZrO2 − La2/3Sr1/3MnO3 pillar–matrix thin films, which were found to show anomalous magnetic and electron-transport properties. With the application of an aberration-corrected transmission electron microscope, the interfacial chemistry and atomic arrangement of the system, especially of the pillar–matrix interface, were revealed at atomic resolution. Minor amounts of Zr were found to occupy Mn positions within the matrix. The Zr concentration reaches a minimum near the pillar–matrix interface, accompanied by oxygen vacancies. La and Mn diffusion into the pillar was revealed at atomic resolution, and a concomitant change of the Mn valence state was observed.

  8. Dynamic sequence analysis of a decision making task of multielement target tracking and its usage as a learning method

    NASA Astrophysics Data System (ADS)

    Kang, Ziho

    This dissertation is divided into four parts: 1) development of effective methods for comparing visual scanning paths (or scanpaths) for a dynamic task involving multiple moving targets, 2) application of the methods to compare the scanpaths of experts and novices for a conflict detection task involving multiple aircraft on a radar screen, 3) a post-hoc analysis of other eye movement characteristics of experts and novices, and 4) determining whether the scanpaths of experts can be used to teach novices. In order to compare experts' and novices' scanpaths, two methods are developed. The first proposed method is matrix comparison using the Mantel test. The second proposed method is maximum transition-based agglomerative hierarchical clustering (MTAHC), in which comparisons of multi-level visual groupings are carried out. The matrix comparison method was useful for a small number of targets during the preliminary experiment but turned out to be inapplicable to a realistic case with tens of aircraft on screen; MTAHC, however, remained effective with a large number of aircraft on screen. The experiments with experts and novices on the aircraft conflict detection task showed that their scanpaths differ. The MTAHC results explicitly showed that experts visually grouped multiple aircraft based on similar altitudes, while novices tended to group them based on convergence. The MTAHC results also showed that novices paid much attention to converging aircraft groups even when they were safely separated by altitude; therefore, less attention was given to the actual conflicting pairs, resulting in low correct conflict detection rates. Since the analysis showed scanpath differences, experts' scanpaths were shown to novices to assess their effectiveness as a teaching aid. The scanpath treatment group showed indications of changing their visual movements from trajectory-based to altitude-based movements. 
Between the treatment and the non-treatment group, there were no significant differences in the number of correct detections; however, the treatment group made significantly fewer false alarms.
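
    The matrix-comparison step relies on the Mantel test, which correlates the off-diagonal entries of two distance matrices and assesses significance by permuting the rows and columns of one matrix. A minimal sketch, assuming symmetric distance matrices with zero diagonal (function and variable names are illustrative):

```python
import numpy as np

def mantel(a, b, n_perm=500, seed=0):
    """Mantel test: Pearson correlation between the upper-triangular
    entries of two symmetric distance matrices, with a permutation
    p-value from relabeling the rows/columns of the first matrix."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(a, k=1)
    r_obs = np.corrcoef(a[iu], b[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(a.shape[0])
        r = np.corrcoef(a[np.ix_(p, p)][iu], b[iu])[0, 1]
        if abs(r) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Sanity check: a distance matrix is perfectly correlated with itself
pts = np.random.default_rng(1).random((6, 2))
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
r, p = mantel(d, d.copy())
```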

  9. Correction of pathlength amplification in the filter-pad technique for measurements of particulate absorption coefficient in the visible spectral region.

    PubMed

    Stramski, Dariusz; Reynolds, Rick A; Kaczmarek, Sławomir; Uitz, Julia; Zheng, Guangming

    2015-08-01

    Spectrophotometric measurement of particulate matter retained on filters is the most common and practical method for routine determination of the spectral light absorption coefficient of aquatic particles, ap(λ), at high spectral resolution over a broad spectral range. The use of differing geometrical measurement configurations and large variations in the reported correction for pathlength amplification induced by the particle/filter matrix have hindered adoption of an established measurement protocol. We describe results of dedicated laboratory experiments with a diversity of particulate sample types to examine variation in the pathlength amplification factor for three filter measurement geometries: the filter in the transmittance configuration (T); the filter in the transmittance-reflectance configuration (T-R); and the filter placed inside an integrating sphere (IS). Relationships between optical density measured on suspensions (ODs) and filters (ODf) within the visible portion of the spectrum were evaluated for the formulation of pathlength amplification correction, with power functions providing the best functional representation of the relationship for all three geometries. Whereas the largest uncertainties occur in the T method, the IS method provided the least sample-to-sample variability and the smallest uncertainties in the relationship between ODs and ODf. For six different samples measured with 1 nm resolution within the light wavelength range from 400 to 700 nm, a median error of 7.1% is observed for predicted values of ODs using the IS method. The relationships established for the three filter-pad methods are applicable to historical and ongoing measurements; for future work, the use of the IS method is recommended whenever feasible.
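
    Fitting the power-function relationship between suspension and filter optical densities, ODs = A·ODf^B, amounts to ordinary least squares in log-log space. A minimal sketch; the coefficients in the synthetic example below are made up for illustration and are not the paper's reported values:

```python
import numpy as np

def fit_power(od_f, od_s):
    """Fit ODs = A * ODf**B by linear least squares in log-log space,
    the functional form reported to best represent the
    suspension-vs-filter relationship."""
    B, logA = np.polyfit(np.log(od_f), np.log(od_s), 1)
    return np.exp(logA), B

# Synthetic data following a known power law (coefficients are made up)
od_f = np.linspace(0.05, 0.5, 20)
od_s = 0.323 * od_f ** 1.0867
A, B = fit_power(od_f, od_s)
```

Once fitted, applying the correction to a filter-pad measurement is just `A * od_f**B`.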

  10. Mathematical foundations of hybrid data assimilation from a synchronization perspective

    NASA Astrophysics Data System (ADS)

    Penny, Stephen G.

    2017-12-01

    The state-of-the-art data assimilation methods used today in operational weather prediction centers around the world can be classified as generalized one-way coupled impulsive synchronization. This classification permits the investigation of hybrid data assimilation methods, which combine dynamic error estimates of the system state with long time-averaged (climatological) error estimates, from a synchronization perspective. Illustrative results show how dynamically informed formulations of the coupling matrix (via an Ensemble Kalman Filter, EnKF) can lead to synchronization when observing networks are sparse and how hybrid methods can lead to synchronization when those dynamic formulations are inadequate (due to small ensemble sizes). A large-scale application with a global ocean general circulation model is also presented. Results indicate that the hybrid methods also have useful applications in generalized synchronization, in particular, for correcting systematic model errors.
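
    The hybrid idea, blending a flow-dependent ensemble covariance with a static climatological one, can be sketched as B = α·B_clim + (1 − α)·B_ens. A minimal illustration (generic, not the paper's operational configuration):

```python
import numpy as np

def hybrid_covariance(ensemble, b_clim, alpha=0.5):
    """Blend a flow-dependent ensemble covariance with a static
    (climatological) covariance: B = alpha*B_clim + (1-alpha)*B_ens.
    `ensemble` has shape (members, state_dim)."""
    b_ens = np.cov(np.asarray(ensemble, dtype=float), rowvar=False)
    return alpha * np.asarray(b_clim, dtype=float) + (1.0 - alpha) * b_ens

# Four-member toy ensemble in a 2-D state space, identity climatology
ens = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
B = hybrid_covariance(ens, np.eye(2), alpha=0.5)
```

With small ensembles B_ens is rank-deficient and noisy; the climatological term keeps the blended matrix full rank, which is the practical motivation for hybrid schemes.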

  12. Coherent-Anomaly Method in Critical Phenomena. III.

    NASA Astrophysics Data System (ADS)

    Hu, Xiao; Katori, Makoto; Suzuki, Masuo

    Two kinds of systematic mean-field transfer-matrix methods are formulated in the 2-dimensional Ising spin system, by introducing Weiss-like and Bethe-like approximations. All the critical exponents as well as the true critical point can be estimated in these methods following the CAM procedure. The numerical results of the above system are Tc* = 2.271 (J/kB), γ=γ' ≃ 1.749, β≃0.131 and δ ≃ 15.1. The specific heat is confirmed to be continuous and to have a logarithmic divergence at the true critical point, i.e., α=α'=0. Thus, the finite-degree-of-approximation scaling ansatz is shown to be correct and very powerful in practical estimations of the critical exponents as well as the true critical point.

  13. FPGA-based real time controller for high order correction in EDIFISE

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ramos, L. F.; Chulani, H.; Martín, Y.; Dorta, T.; Alonso, A.; Fuensalida, J. J.

    2012-07-01

    EDIFISE is a technology demonstrator instrument developed at the Institute of Astrophysics of the Canary Islands (IAC), intended to explore the feasibility of combining adaptive optics with attenuated optical fibers in order to obtain high-spatial-resolution spectra in the surroundings of a star, as an alternative to coronagraphy. A simplified version with only tip-tilt correction has been tested at the OGS telescope in the Observatorio del Teide (Canary Islands, Spain), and a complete version is intended to be tested at the OGS and at the WHT telescope in the Observatorio del Roque de los Muchachos (Canary Islands, Spain). This paper describes the FPGA-based real-time control of the High Order unit, responsible for computing the actuation values of a 97-actuator (11x11) deformable mirror from the information provided by a configurable wavefront sensor of up to 16x16 subpupils at 500 Hz (128x128 pixels). The reconfigurable logic hardware will allow both zonal and modal control approaches, with full access to select which mode loops should be closed, and with a number of utilities for influence-matrix and open-loop response measurements. The system has been designed in a modular way to allow easy upgrades to faster frame rates (1500 Hz) and bigger wavefront sensors (240x240 pixels), accepting also several interfaces from the WFS and towards the mirror driver. The FPGA-based (Field Programmable Gate Array) real-time controller provides bias and flat-fielding corrections, subpupil-slopes-to-modes matrix computation for up to 97 modes, independent servo-loop controllers for each mode with user control for independent loop opening or closing, mode-to-actuator matrix computation, and non-common-path aberration correction capability. It also provides full housekeeping control via UDP/IP for matrix reloading and full system data logging.
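
    The modal control path described above, slopes to modes, per-mode servo gains with individually open or closed loops, then modes to actuator commands, reduces to a pair of matrix-vector products. A schematic sketch with placeholder matrices (not EDIFISE calibration data):

```python
import numpy as np

def reconstruct(slopes, s2m, m2a, gains, closed):
    """One modal AO control step: project subaperture slopes onto modes
    (s2m), apply a per-mode servo gain with individual loops open (0) or
    closed (1), then map modes to actuator commands (m2a). All matrices
    here are illustrative placeholders."""
    modes = s2m @ slopes            # slopes-to-modes projection
    modes = modes * gains * closed  # per-mode gain; open loops are zeroed
    return m2a @ modes              # mode-to-actuator mapping

# Toy 2-mode system: identity projections, gain 0.5, second loop open
cmd = reconstruct(np.array([1.0, 2.0]), np.eye(2), np.eye(2),
                  np.array([0.5, 0.5]), np.array([1.0, 0.0]))
```

In a real system the s2m and m2a matrices come from calibration (influence-matrix measurements), which is why the controller exposes matrix reloading at runtime.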

  14. The Effect of Teachers' Written Corrective Feedback (WCF) Types on Intermediate EFL Learners' Writing Performance

    ERIC Educational Resources Information Center

    Aghajanloo, Khadijeh; Mobini, Fariba; Khosravi, Robab

    2016-01-01

    Written Corrective Feedback (WCF) is a controversial topic among theorists and researchers in L2 studies. Ellis, Sheen, Murakami, and Takashima (2008) identify two dominant dichotomies in this regard, that is, focused vs. unfocused WCF and direct vs. indirect WCF. This study considered both dichotomies in a matrix format, which resulted in the…

  15. Improved prediction of MHC class I and class II epitopes using a novel Gibbs sampling approach.

    PubMed

    Nielsen, Morten; Lundegaard, Claus; Worning, Peder; Hvid, Christina Sylvester; Lamberth, Kasper; Buus, Søren; Brunak, Søren; Lund, Ole

    2004-06-12

    Prediction of which peptides will bind a specific major histocompatibility complex (MHC) constitutes an important step in identifying potential T-cell epitopes suitable as vaccine candidates. MHC class II binding peptides have a broad length distribution, complicating such predictions. Thus, identifying the correct alignment is a crucial part of identifying the core of an MHC class II binding motif. In this context, we describe a novel Gibbs motif sampler method ideally suited for recognizing such weak sequence motifs. The method is based on the Gibbs sampling method, and it incorporates novel features optimized for the task of recognizing the binding motif of MHC classes I and II. The method locates the binding motif in a set of sequences and characterizes the motif in terms of a weight matrix. Subsequently, the weight matrix can be applied to effectively identifying potential MHC binding peptides and to guiding the process of rational vaccine design. We apply the motif sampler method to the complex problem of MHC class II binding. The input to the method is amino acid peptide sequences extracted from the public databases SYFPEITHI and MHCPEP and known to bind to the MHC class II complex HLA-DR4(B1*0401). Prior identification of information-rich (anchor) positions in the binding motif is shown to improve the predictive performance of the Gibbs sampler. Similarly, a consensus solution obtained from an ensemble average over suboptimal solutions is shown to outperform the use of a single optimal solution. In a large-scale benchmark calculation, the performance is quantified using receiver operating characteristic (ROC) plots, and we make a detailed comparison of the performance with that of both the TEPITOPE method and a weight matrix derived using the conventional alignment algorithm of ClustalW. 
The calculation demonstrates that the predictive performance of the Gibbs sampler is higher than that of ClustalW and in most cases also higher than that of the TEPITOPE method.
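
    The core Gibbs sampling loop, hold one sequence out, build a count matrix from the rest, and resample the held-out motif start in proportion to its score, can be sketched as follows. This omits the paper's anchor-position weighting and ensemble averaging; the sequences and parameters are toy examples, not MHC data:

```python
import random

def gibbs_motif(seqs, w, iters=300, seed=0):
    """Minimal Gibbs motif sampler for a fixed motif width w: repeatedly
    hold one sequence out, build a pseudocounted position-count matrix
    from the rest, and resample the held-out start position in
    proportion to its score under that matrix."""
    rng = random.Random(seed)
    alphabet = sorted(set("".join(seqs)))
    starts = [rng.randrange(len(s) - w + 1) for s in seqs]

    def column_counts(exclude):
        c = [{a: 1.0 for a in alphabet} for _ in range(w)]  # pseudocounts
        for i, s in enumerate(seqs):
            if i != exclude:
                for j in range(w):
                    c[j][s[starts[i] + j]] += 1.0
        return c

    for _ in range(iters):
        i = rng.randrange(len(seqs))          # hold one sequence out
        c = column_counts(i)
        s = seqs[i]
        weights = []
        for p in range(len(s) - w + 1):       # score every start position
            score = 1.0
            for j in range(w):
                score *= c[j][s[p + j]]
            weights.append(score)
        starts[i] = rng.choices(range(len(weights)), weights=weights)[0]
    return starts

# Toy sequences, each containing the planted motif ACGTG
seqs = ["TTTACGTGTT", "ACGTGTTTTT", "TTACGTGTTT", "TTTTTACGTG"]
starts = gibbs_motif(seqs, w=5)
```

In the full method the count matrix is converted to a log-odds weight matrix against background frequencies, which is then used to score new candidate peptides.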

  16. Direct Analysis of Low-Volatile Molecular Marker Extract from Airborne Particulate Matter Using Sensitivity Correction Method

    PubMed Central

    Irei, Satoshi

    2016-01-01

    Molecular marker analysis of environmental samples often requires time-consuming preseparation steps. Here, analysis of low-volatile nonpolar molecular markers (5-6 ring polycyclic aromatic hydrocarbons or PAHs, hopanoids, and n-alkanes) without the preseparation procedure is presented. Analysis of artificial sample extracts was conducted directly by gas chromatography-mass spectrometry (GC-MS). After every sample injection, a standard mixture was also analyzed to correct for the variation in instrumental sensitivity caused by the unfavorable matrix contained in the extract. The method was further validated for the PAHs using the NIST standard reference materials (SRMs) and then applied to airborne particulate matter samples. Tests with the SRMs showed that overall our methodology was validated with an uncertainty of ~30%. The measurement results of airborne particulate matter (PM) filter samples showed a strong correlation between the PAHs, implying contributions from the same emission source. Analysis of size-segregated PM filter samples showed that the markers were concentrated in PM smaller than 0.4 μm aerodynamic diameter. These observations were consistent with our expectation of their possible sources. Thus, the method was found to be useful for molecular marker studies. PMID:27127511
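
    The bracketing-standard correction, running a standard mixture after every sample injection and scaling sample responses by the surrounding standard responses, reduces to a simple response-factor calculation. A minimal sketch with illustrative variable names, not the paper's exact formulation:

```python
def sensitivity_corrected(sample_area, std_area_before, std_area_after, std_amount):
    """Correct a sample peak area for instrumental sensitivity drift by
    bracketing it with standard injections: the response factor is the
    mean standard response (area per unit amount) around the sample run."""
    rf = 0.5 * (std_area_before + std_area_after) / std_amount
    return sample_area / rf  # corrected amount in the sample

# Standard drifts from 100 to 80 area units per unit amount;
# a sample area of 45 then corresponds to 0.5 units of analyte.
amount = sensitivity_corrected(45.0, 100.0, 80.0, 1.0)
```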

  17. High resolution through-the-wall radar image based on beamspace eigenstructure subspace methods

    NASA Astrophysics Data System (ADS)

    Yoon, Yeo-Sun; Amin, Moeness G.

    2008-04-01

    Through-the-wall imaging (TWI) is a challenging problem, even if the wall parameters and characteristics are known to the system operator. Proper target classification and correct imaging interpretation require the application of high-resolution techniques using limited array size. In inverse synthetic aperture radar (ISAR), signal subspace methods such as Multiple Signal Classification (MUSIC) are used to obtain high-resolution imaging. In this paper, we adopt signal subspace methods and apply them to the 2-D spectrum obtained from the delay-and-sum beamforming image. This is in contrast to ISAR, where raw data, in frequency and angle, is directly used to form the estimate of the covariance matrix and array response vector. Using beams rather than raw data has two main advantages, namely, it improves the signal-to-noise ratio (SNR) and can correctly image typical indoor extended targets, such as tables and cabinets, as well as point targets. The paper presents both simulated and experimental results using synthesized and real data. It compares the performance of beamspace MUSIC and the Capon beamformer. The experimental data were collected at the test facility in the Radar Imaging Laboratory, Villanova University.
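
    The subspace step is standard MUSIC: eigendecompose the covariance matrix, take the eigenvectors of the smallest eigenvalues as the noise subspace, and invert the projection of candidate steering vectors onto it. A minimal 1-D direction-finding sketch on a uniform linear array (not the paper's beamspace TWI geometry):

```python
import numpy as np

def music_spectrum(R, A, n_sources):
    """MUSIC pseudospectrum: peaks where candidate steering vectors
    (columns of A) are orthogonal to the noise subspace of R."""
    _, V = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = V[:, : R.shape[0] - n_sources]    # noise-subspace eigenvectors
    return 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)

m = np.arange(8)                           # 8-element half-wavelength ULA
grid_deg = np.linspace(-90.0, 90.0, 181)
A = np.exp(-1j * np.pi * np.outer(m, np.sin(np.deg2rad(grid_deg))))
a0 = A[:, 110]                             # single source at +20 degrees
R = np.outer(a0, a0.conj()) + 0.01 * np.eye(8)  # ideal covariance + loading
est = grid_deg[np.argmax(music_spectrum(R, A, n_sources=1))]
```

In the beamspace variant discussed in the paper, the covariance estimate is formed from delay-and-sum beam outputs rather than the raw frequency-angle data, but the eigenstructure step is the same.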

  18. Identification of Enterobacteriaceae by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry using the VITEK MS system.

    PubMed

    Richter, S S; Sercia, L; Branda, J A; Burnham, C-A D; Bythrow, M; Ferraro, M J; Garner, O B; Ginocchio, C C; Jennemann, R; Lewinski, M A; Manji, R; Mochon, A B; Rychert, J A; Westblade, L F; Procop, G W

    2013-12-01

    This multicenter study evaluated the accuracy of matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry identifications from the VITEK MS system (bioMérieux, Marcy l'Etoile, France) for Enterobacteriaceae typically encountered in the clinical laboratory. Enterobacteriaceae isolates (n = 965) representing 17 genera and 40 species were analyzed on the VITEK MS system (database v2.0), in accordance with the manufacturer's instructions. Colony growth (≤72 h) was applied directly to the target slide. Matrix solution (α-cyano-4-hydroxycinnamic acid) was added and allowed to dry before mass spectrometry analysis. On the basis of the confidence level, the VITEK MS system provided a species, genus only, or no identification for each isolate. The accuracy of the mass spectrometric identification was compared to 16S rRNA gene sequencing performed at MIDI Labs (Newark, DE). Supplemental phenotypic testing was performed at bioMérieux when necessary. The VITEK MS result agreed with the reference method identification for 96.7% of the 965 isolates tested, with 83.8% correct to the species level and 12.8% limited to a genus-level identification. There was no identification for 1.7% of the isolates. The VITEK MS system misidentified 7 isolates (0.7%) as different genera. Three Pantoea agglomerans isolates were misidentified as Enterobacter spp. and single isolates of Enterobacter cancerogenus, Escherichia hermannii, Hafnia alvei, and Raoultella ornithinolytica were misidentified as Klebsiella oxytoca, Citrobacter koseri, Obesumbacterium proteus, and Enterobacter aerogenes, respectively. Eight isolates (0.8%) were misidentified as a different species in the correct genus. The VITEK MS system provides reliable mass spectrometric identifications for Enterobacteriaceae.

  19. Finite element analysis of stress transfer mechanism from matrix to the fiber in SWCN reinforced nanocomposites

    NASA Astrophysics Data System (ADS)

    Günay, E.

    2017-02-01

    This study presents a micromechanical finite element (FE) approach examining the stress transfer mechanism in single-walled carbon nanotube (SWCN) reinforced composites. In the modeling, the 3D unit-cell method was used. Carbon nanotube reinforced composites were modeled as three layers comprising the CNT, the interface, and the matrix material. First, the matrix, fiber, and interfacial materials were considered together as a three-layered cylindrical nanocomposite. Second, the cylindrical matrix material was assumed to be isotropic and was treated as a continuous medium. The fiber material was then represented by zigzag-type SWCNs. Finally, the SWCN was coupled to the elastic medium by springs with different constants. In the FE model of the SWCN-reinforced composite, the springs were modeled using the ANSYS spring-damper element COMBIN14. The interfacial van der Waals interactions between the continuous matrix layer and the carbon nanotube fiber layer were simulated by applying these various spring stiffness values. In this study, the layered composite cylindrical FE model yields the equivalent mechanical properties of SWCN structures in terms of Young's modulus. The results obtained and literature values are presented and discussed. Figures 16, 17, and 18 of the original article PDF file, as supplied to AIP Publishing, were affected by a PDF-processing error: a solid diamond symbol appeared instead of a Greek tau in the y-axis labels of these three figures. This article was updated on 17 March 2017 to correct the PDF-processing error, with the scientific content remaining unchanged.

  20. Novel measures of linkage disequilibrium that correct the bias due to population structure and relatedness.

    PubMed

    Mangin, B; Siberchicot, A; Nicolas, S; Doligez, A; This, P; Cierco-Ayrolles, C

    2012-03-01

    Among the several linkage disequilibrium measures known to capture different features of the non-independence between alleles at different loci, the most commonly used for diallelic loci is the r^2 measure. In the present study, we tackled the problem of the bias of the r^2 estimate, which results from sample structure and/or relatedness between genotyped individuals. We derived two novel linkage disequilibrium measures for diallelic loci that are both extensions of the usual r^2 measure. The first one, r^2_S, uses the population structure matrix, which consists of information about the origins of each individual and the admixture proportions of each individual genome. The second one, r^2_V, includes the kinship matrix in the calculation. These two corrections can be applied together in order to correct for both biases, and both measures are defined on either phased or unphased genotypes. We proved that these novel measures are linked to the power of association tests under the mixed linear model including structure and kinship corrections. We validated them on simulated data and applied them to real data sets collected on Vitis vinifera plants. Our results clearly showed the usefulness of the two corrected r^2 measures, which actually captured 'true' linkage disequilibrium, unlike the usual r^2 measure.
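
    For reference, the uncorrected baseline that both new measures extend is the squared Pearson correlation between 0/1 allele indicators at two loci. The corrected r^2_S and r^2_V replace this with correlations estimated under a mixed model with structure or kinship matrices, which is not reproduced here; only the plain r^2 is sketched:

```python
import numpy as np

def r2(x, y):
    """Classical r^2 between two diallelic loci coded 0/1 on phased
    haplotypes: the squared Pearson correlation of allele indicators."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.corrcoef(x, y)[0, 1] ** 2

perfect = r2([0, 0, 1, 1], [0, 0, 1, 1])  # complete LD
none = r2([0, 0, 1, 1], [0, 1, 0, 1])     # no LD
```

In a structured or related sample this statistic is inflated, which is precisely the bias the r^2_S and r^2_V measures are designed to remove.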

  1. Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods

    NASA Astrophysics Data System (ADS)

    Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.

    2011-12-01

    Geophysical and hydrogeological inverse problems often involve a large number of unknown parameters, ranging from hundreds to millions depending on the parameterization and the problem at hand. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential to mitigate the curse of dimensionality by reducing the total number of unknowns while still describing complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performance of three different but closely related model reduction approaches: (1) PCA with geometric sampling (referred to as 'Method 1'), (2) PCA with MCMC sampling (referred to as 'Method 2'), and (3) PCA with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing its inversion results with the corresponding analytical solutions. We generated synthetic data with added noise and inverted them under two situations: (1) the noisy data and the covariance matrix used for the PCA are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values, and Method 1 is computationally more efficient. In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to the inclusion of random effects. In the biased case, however, only Method 3 correctly estimates all the unknown parameters, while Methods 1 and 2 both give wrong values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix used for the PCA is inconsistent with the true models, PCA with either geometric or MCMC sampling will provide incorrect estimates.
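The core PCA reduction step can be sketched for a generic linear inverse problem as follows. This is a minimal illustration, not the paper's convolution setup: the forward operator, prior covariance, and dimensions are all assumed, and the geometric/MCMC sampling is replaced by a plain least-squares solve in the reduced space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward model d = G m with many unknowns m.
n_param, n_data, k = 50, 30, 5
G = rng.normal(size=(n_data, n_param))

# Prior covariance with smooth exponential correlation between parameters.
x = np.arange(n_param)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)

# Model reduction: keep only the k leading principal components of C.
eigval, eigvec = np.linalg.eigh(C)                      # ascending order
P = eigvec[:, ::-1][:, :k] * np.sqrt(eigval[::-1][:k])  # scaled PC basis

# True model drawn from the prior; noisy synthetic data.
m_true = np.linalg.cholesky(C + 1e-10 * np.eye(n_param)) @ rng.normal(size=n_param)
d = G @ m_true + 0.01 * rng.normal(size=n_data)

# Invert for the k PC coefficients instead of all n_param unknowns.
alpha, *_ = np.linalg.lstsq(G @ P, d, rcond=None)
m_est = P @ alpha
print(np.corrcoef(m_true, m_est)[0, 1])  # correlation of truth vs reduced estimate
```

The key point is that the search space shrinks from 50 unknowns to 5 coefficients; if the covariance C used to build P is inconsistent with the truth, the reduced basis cannot represent it, which is the biased case the abstract describes.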

  2. Simple on-shell renormalization framework for the Cabibbo-Kobayashi-Maskawa matrix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kniehl, Bernd A.; Sirlin, Alberto

    2006-12-01

    We present an explicit on-shell framework to renormalize the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix at the one-loop level. It is based on a novel procedure to separate the external-leg mixing corrections into gauge-independent self-mass (sm) and gauge-dependent wave-function renormalization contributions, and to adjust nondiagonal mass counterterm matrices to cancel all the divergent sm contributions, and also their finite parts subject to constraints imposed by the Hermiticity of the mass matrices. It is also shown that the proof of gauge independence and finiteness of the remaining one-loop corrections to W → q_i + q_j reduces to that in the unmixed, single-generation case. Diagonalization of the complete mass matrices then leads to an explicit expression for the CKM counterterm matrix, which is gauge independent, preserves unitarity, and leads to renormalized amplitudes that are nonsingular in the limit in which any two fermions become mass degenerate.

  3. Pharmaceutical analysis in solids using front face fluorescence spectroscopy and multivariate calibration with matrix correction by piecewise direct standardization

    NASA Astrophysics Data System (ADS)

    Alves, Julio Cesar L.; Poppi, Ronei J.

    2013-02-01

    This paper reports the application of piecewise direct standardization (PDS) for matrix correction in front face fluorescence spectroscopy of solids when different excipients are used in a pharmaceutical preparation based on a mixture of acetylsalicylic acid (ASA), paracetamol (acetaminophen) and caffeine. As verified in earlier studies, the use of different excipients and their ratio can cause a displacement, a change in fluorescence intensity, or a change in band profile. To overcome this important drawback, a standardization strategy was adopted to convert all the excitation-emission fluorescence spectra into those used for model development. An excitation-emission matrix (EEM) with excitation and emission wavelengths ranging from 265 to 405 nm and from 300 to 480 nm, respectively, was used. Excellent results were obtained using unfolded partial least squares (U-PLS), with RMSEP values of 8.2 mg/g, 10.9 mg/g and 2.7 mg/g for ASA, paracetamol and caffeine, respectively, and with relative errors less than 5% for the three analytes.
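The PDS transfer step can be sketched as below: each reference ("master") wavelength is regressed on a small window of the spectra measured under the new condition ("slave"), giving a banded transfer matrix. This is a minimal illustration with a simulated wavelength-dependent gain rather than real excipient effects; all data are synthetic.

```python
import numpy as np

def pds_transform(master, slave, window=2):
    """Piecewise direct standardization (sketch).

    master, slave: (n_samples, n_wavelengths) spectra of the same standards
    under the reference and the new measurement condition.
    Returns a banded matrix F such that slave @ F approximates master.
    """
    n_wl = master.shape[1]
    F = np.zeros((n_wl, n_wl))
    for i in range(n_wl):
        lo, hi = max(0, i - window), min(n_wl, i + window + 1)
        X = slave[:, lo:hi]                              # local slave window
        b, *_ = np.linalg.lstsq(X, master[:, i], rcond=None)
        F[lo:hi, i] = b                                  # column i of the transform
    return F

rng = np.random.default_rng(1)
master = rng.random((10, 40))
gain = 0.8 + 0.3 * np.linspace(0.0, 1.0, 40)  # simulated matrix effect
slave = master * gain
F = pds_transform(master, slave)
print(np.abs(slave @ F - master).max())  # near zero for this pure gain change
```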

  4. Long-range corrected density functional through the density matrix expansion based semilocal exchange hole.

    PubMed

    Patra, Bikash; Jana, Subrata; Samal, Prasanjit

    2018-03-28

    The exchange hole, which is one of the principal constituents of the density functional formalism, can be used to design accurate range-separated hybrid functionals in association with appropriate correlation. In this regard, the exchange hole derived from the density matrix expansion has gained attention due to its fulfillment of some of the desired exact constraints. The new long-range corrected density functional proposed here therefore combines a meta generalized gradient approximation level exchange functional, designed from the density matrix expansion based exchange hole, with ab initio Hartree-Fock exchange through range separation of the Coulomb interaction operator using the standard error function technique. In association with the Lee-Yang-Parr correlation functional, the assessment and benchmarking of this newly constructed range-separated functional on various well-known test sets shows its reasonable performance for a broad range of molecular properties, such as thermochemistry, non-covalent interactions and barrier heights of chemical reactions.
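The error-function range separation referred to above is conventionally written as the following partition of the Coulomb operator (the standard long-range-corrected scheme, not a formula specific to this paper):

```latex
\frac{1}{r_{12}}
  = \underbrace{\frac{\operatorname{erfc}(\mu r_{12})}{r_{12}}}_{\text{short range: semilocal DFT exchange}}
  + \underbrace{\frac{\operatorname{erf}(\mu r_{12})}{r_{12}}}_{\text{long range: Hartree--Fock exchange}}
```

Here μ is the range-separation parameter that sets where the crossover between the semilocal and Hartree-Fock treatments occurs.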

  5. Bicylindrical model of Herschel-Quincke tube-duct system: theory and comparison with experiment and finite element method.

    PubMed

    Poirier, B; Ville, J M; Maury, C; Kateb, D

    2009-09-01

    An analytical three-dimensional bicylindrical model is developed in order to take into account the effects of the saddle-shaped interface area between an n-Herschel-Quincke tube system and the main duct. Results for the scattering matrix of this system deduced from the model are compared, in the plane-wave frequency domain, with experimental and numerical data and with a one-dimensional model with and without tube length correction. The comparisons are performed for a two-Herschel-Quincke-tube configuration having the same diameter as the main duct. In spite of strong assumptions on the acoustic continuity conditions at the interfaces, the model is shown to better capture the nonperiodic amplitude variations and the frequency localization of the minima of the transmission and reflection coefficients than the one-dimensional model with length correction and a three-dimensional model.

  6. Color standardization in whole slide imaging using a color calibration slide

    PubMed Central

    Bautista, Pinky A.; Hashimoto, Noriaki; Yagi, Yukako

    2014-01-01

    Background: Color consistency in histology images is still an issue in digital pathology: different imaging systems reproduce the colors of a histological slide differently. Materials and Methods: Color correction was implemented using the color information of the nine color patches of a color calibration slide. The inherent spectral colors of these patches along with their scanned colors were used to derive a color correction matrix whose coefficients were used to convert the pixels’ colors to their target colors. Results: There was a significant reduction in the CIELAB color difference between images of the same H & E histological slide produced by two different whole slide scanners, by 3.42 units (P < 0.001 at the 95% confidence level). Conclusion: Color variations in histological images brought about by whole slide scanning can be effectively normalized with the use of the color calibration slide. PMID:24672739
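Deriving such a correction matrix can be sketched as a least-squares fit, assuming a simple linear 3x3 model mapping scanned RGB values onto the target patch colors (simulated values below, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)
target = rng.random((9, 3))                 # reference colors of the 9 patches
M_true = np.array([[0.90, 0.10, 0.00],      # simulated scanner distortion
                   [0.05, 0.85, 0.10],
                   [0.00, 0.10, 0.90]])
scanned = target @ M_true.T                 # what the scanner reports

# Least-squares fit of the correction matrix M: scanned @ M ~= target.
M, *_ = np.linalg.lstsq(scanned, target, rcond=None)

corrected = scanned @ M                     # apply correction to each pixel color
print(np.abs(corrected - target).max())     # near zero in this noiseless toy
```

In practice the fit would use the measured patch colors of each scanner, and the same matrix is then applied to every pixel of the whole-slide image.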

  7. Different partial volume correction methods lead to different conclusions: An (18)F-FDG-PET study of aging.

    PubMed

    Greve, Douglas N; Salat, David H; Bowen, Spencer L; Izquierdo-Garcia, David; Schultz, Aaron P; Catana, Ciprian; Becker, J Alex; Svarer, Claus; Knudsen, Gitte M; Sperling, Reisa A; Johnson, Keith A

    2016-05-15

    A cross-sectional group study of the effects of aging on brain metabolism as measured with (18)F-FDG-PET was performed using several different partial volume correction (PVC) methods: no correction (NoPVC), Meltzer (MZ), Müller-Gärtner (MG), and the symmetric geometric transfer matrix (SGTM) using 99 subjects aged 65-87 years from the Harvard Aging Brain study. Sensitivity to parameter selection was tested for MZ and MG. The various methods and parameter settings resulted in an extremely wide range of conclusions as to the effects of age on metabolism, from almost no changes to virtually all of cortical regions showing a decrease with age. Simulations showed that NoPVC had significant bias that made the age effect on metabolism appear to be much larger and more significant than it is. MZ was found to be the same as NoPVC for liberal brain masks; for conservative brain masks, MZ showed few areas correlated with age. MG and SGTM were found to be similar; however, MG was sensitive to a thresholding parameter that can result in data loss. CSF uptake was surprisingly high at about 15% of that in gray matter. The exclusion of CSF from SGTM and MG models, which is almost universally done, caused a substantial loss in the power to detect age-related changes. This diversity of results reflects the literature on the metabolism of aging and suggests that extreme care should be taken when applying PVC or interpreting results that have been corrected for partial volume effects. Using the SGTM, significant age-related changes of about 7% per decade were found in frontal and cingulate cortices as well as primary visual and insular cortices. Copyright © 2016 Elsevier Inc. All rights reserved.
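The geometric-transfer-matrix idea behind SGTM can be illustrated with a 1D toy model: regional means of the PSF-blurred image are a linear mix of the true regional activities, and the GTM inverts that mixing. Everything below (Gaussian PSF, region layout, uptake values) is an illustrative assumption, not the study's data.

```python
import numpy as np

n = 300
x = np.arange(-30, 31)
kernel = np.exp(-x**2 / (2 * 8.0**2))
kernel /= kernel.sum()
psf = lambda v: np.convolve(v, kernel, mode="same")   # linear blur operator

regions = np.zeros((3, n))
regions[0, 50:150] = 1      # "gray matter"
regions[1, 150:250] = 1     # "white matter"
regions[2, 250:290] = 1     # "CSF", the compartment often excluded
uptake = np.array([4.0, 1.0, 0.6])

image = psf(uptake @ regions)                         # PSF-blurred observation

# GTM element (i, j): mean over region i of the blurred region-j mask.
G = np.array([[psf(regions[j])[regions[i] > 0].mean() for j in range(3)]
              for i in range(3)])
observed = np.array([image[r > 0].mean() for r in regions])

corrected = np.linalg.solve(G, observed)              # undo the spill-over
print(corrected)  # recovers [4.0, 1.0, 0.6] in this noiseless linear toy
```

Dropping the CSF row and column from G, as the abstract notes is commonly done, forces its spilled-in signal to be absorbed by the remaining regions.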

  8. Different Partial Volume Correction Methods Lead to Different Conclusions: an 18F-FDG PET Study of Aging

    PubMed Central

    Greve, Douglas N.; Salat, David H.; Bowen, Spencer L.; Izquierdo-Garcia, David; Schultz, Aaron P.; Catana, Ciprian; Becker, J. Alex; Svarer, Claus; Knudsen, Gitte; Sperling, Reisa A.; Johnson, Keith A.

    2016-01-01

    A cross-sectional group study of the effects of aging on brain metabolism as measured with 18F-FDG PET was performed using several different partial volume correction (PVC) methods: no correction (NoPVC), Meltzer (MZ), Müller-Gärtner (MG), and the symmetric geometric transfer matrix (SGTM) using 99 subjects aged 65-87 from the Harvard Aging Brain study. Sensitivity to parameter selection was tested for MZ and MG. The various methods and parameter settings resulted in an extremely wide range of conclusions as to the effects of age on metabolism, from almost no changes to virtually all of cortical regions showing a decrease with age. Simulations showed that NoPVC had significant bias that made the age effect on metabolism appear to be much larger and more significant than it is. MZ was found to be the same as NoPVC for liberal brain masks; for conservative brain masks, MZ showed few areas correlated with age. MG and SGTM were found to be similar; however, MG was sensitive to a thresholding parameter that can result in data loss. CSF uptake was surprisingly high at about 15% of that in gray matter. Exclusion of CSF from SGTM and MG models, which is almost universally done, caused a substantial loss in the power to detect age-related changes. This diversity of results reflects the literature on the metabolism of aging and suggests that extreme care should be taken when applying PVC or interpreting results that have been corrected for partial volume effects. Using the SGTM, significant age-related changes of about 7% per decade were found in frontal and cingulate cortices as well as primary visual and insular cortices. PMID:26915497

  9. Autonomous identification of matrices in the APNea system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hensley, D.

    1995-12-31

    The APNea System is a passive and active neutron assay device which features imaging to correct for nonuniform distributions of source material. Since the imaging procedure requires a detailed knowledge of both the detection efficiency and the thermal neutron flux for (sub)volumes of the drum of interest, it is necessary to identify which mocked-up matrix, to be used for detailed characterization studies, best matches the matrix of interest. A methodology referred to as the external matrix probe (EMP) has been established which links external measures of a drum matrix to those of mocked-up matrices. These measures by themselves are sufficient to identify the appropriate mock matrix, from which the necessary characterization data are obtained. This independent matrix identification leads to an autonomous determination of the required system response parameters for the assay analysis.

  11. A novel artificial neural network method for biomedical prediction based on matrix pseudo-inversion.

    PubMed

    Cai, Binghuang; Jiang, Xia

    2014-04-01

    Biomedical prediction based on clinical and genome-wide data has become increasingly important in disease diagnosis and classification. To solve the prediction problem in an effective manner for the improvement of clinical care, we develop a novel Artificial Neural Network (ANN) method based on Matrix Pseudo-Inversion (MPI) for use in biomedical applications. The MPI-ANN is constructed as a three-layer (i.e., input, hidden, and output layers) feed-forward neural network, and the weights connecting the hidden and output layers are directly determined based on MPI without a lengthy learning iteration. The LASSO (Least Absolute Shrinkage and Selection Operator) method is also presented for comparative purposes. Single Nucleotide Polymorphism (SNP) simulated data and real breast cancer data are employed to validate the performance of the MPI-ANN method via 5-fold cross validation. Experimental results demonstrate the efficacy of the developed MPI-ANN for disease classification and prediction, in view of the significantly superior accuracy (i.e., the rate of correct predictions), as compared with LASSO. The results based on the real breast cancer data also show that the MPI-ANN has better performance than other machine learning methods (including support vector machine (SVM), logistic regression (LR), and an iterative ANN). In addition, experiments demonstrate that our MPI-ANN could be used for bio-marker selection as well. Copyright © 2013 Elsevier Inc. All rights reserved.
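The pseudo-inversion step can be sketched as below. This is a minimal illustration, assuming random input-to-hidden weights and synthetic labels (the paper's exact construction may differ): the hidden-to-output weights are obtained in one shot from the Moore-Penrose pseudoinverse rather than by iterative training.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: 200 samples, 10 features, synthetic binary labels.
n, d, h = 200, 10, 40
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(float)

# Three-layer feed-forward net: fixed random input->hidden weights,
# tanh hidden activations.
W_in = rng.normal(size=(d, h))
b = rng.normal(size=h)
H = np.tanh(X @ W_in + b)

# Hidden->output weights determined directly by matrix pseudo-inversion,
# with no lengthy learning iteration.
W_out = np.linalg.pinv(H) @ y

pred = (H @ W_out > 0.5).astype(float)
print((pred == y).mean())  # training accuracy
```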

  12. A fixed full-matrix method for determining ice sheet height change from satellite altimeter: an ENVISAT case study in East Antarctica with backscatter analysis

    NASA Astrophysics Data System (ADS)

    Yang, Yuande; Hwang, Cheinway; E, Dongchen

    2014-09-01

    A new method, called the fixed full-matrix method (FFM), is used to compute height changes at crossovers of satellite altimeter ground tracks. Using ENVISAT data in East Antarctica, FFM yields 1.9 and 79 times more crossovers of altimeter heights than the fixed half method (FHM) and the one-row method (ORM), respectively. The mean standard error of height changes is about 14 cm for ORM, which is reduced to 7 cm by FHM and to 3 cm by FFM. Unlike FHM, FFM leads to uniform errors in the first-half and second-half height-change time series, and it improves the accuracy of the height and backscattered-power changes over ORM and FHM. Assisted by the ICESat-derived height changes, we determine the optimal threshold correlation coefficient (TCC) for the best correction of the backscatter effect on ENVISAT height changes. A TCC value of 0.92 yields the optimal result for FFM. With this value, FFM yields ENVISAT-derived height-change rates in East Antarctica mostly falling between and 3 cm/year, and matching the ICESat result to 0.94 cm/year. The ENVISAT result will provide a constraint on current mass balance results along the Chinese expedition route CHINARE.

  13. Matrix theory interpretation of discrete light cone quantization string worldsheets

    PubMed

    Grignani; Orland; Paniak; Semenoff

    2000-10-16

    We study the null compactification of type-IIA string perturbation theory at finite temperature. We prove a theorem about Riemann surfaces establishing that the moduli spaces of infinite-momentum-frame superstring worldsheets are identical to those of branched-cover instantons in the matrix-string model conjectured to describe M theory. This means that the identification of string degrees of freedom in the matrix model proposed by Dijkgraaf, Verlinde, and Verlinde is correct and that its natural generalization produces the moduli space of Riemann surfaces at all orders in the genus expansion.

  14. Uncertainty of relative sensitivity factors in glow discharge mass spectrometry

    NASA Astrophysics Data System (ADS)

    Meija, Juris; Methven, Brad; Sturgeon, Ralph E.

    2017-10-01

    The concept of the relative sensitivity factors required for the correction of the measured ion beam ratios in pin-cell glow discharge mass spectrometry is examined in detail. We propose a data-driven model for predicting the relative response factors, which relies on a non-linear least squares adjustment and analyte/matrix interchangeability phenomena. The model provides a self-consistent set of response factors for any analyte/matrix combination of any element that appears as either an analyte or matrix in at least one known response factor.
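The self-consistency idea can be illustrated with a simplified stand-in for the paper's non-linear least-squares adjustment: if the response factors factorize as RSF(analyte|matrix) = f_analyte / f_matrix (a multiplicative model consistent with analyte/matrix interchangeability; the model form, elements, and all numbers below are assumptions), then log RSF is a difference of per-element terms and a handful of measured factors determines every combination.

```python
import numpy as np

elements = ["Fe", "Cu", "Ni", "Zn"]
idx = {e: i for i, e in enumerate(elements)}

# Illustrative measured factors: (analyte, matrix, RSF value).
measured = [("Cu", "Fe", 1.8), ("Ni", "Fe", 1.2), ("Zn", "Cu", 0.7)]

# Linear system in x = log f: each measurement gives x_a - x_m = log RSF.
A = np.zeros((len(measured) + 1, len(elements)))
b = np.zeros(len(measured) + 1)
for row, (a, m, value) in enumerate(measured):
    A[row, idx[a]], A[row, idx[m]] = 1.0, -1.0
    b[row] = np.log(value)
A[-1, idx["Fe"]] = 1.0          # gauge fixing: set log f_Fe = 0
x, *_ = np.linalg.lstsq(A, b, rcond=None)

def rsf(analyte, matrix):
    """Self-consistent RSF for any analyte/matrix pair in the fit."""
    return np.exp(x[idx[analyte]] - x[idx[matrix]])

print(rsf("Zn", "Fe"))  # predicted via the chain Zn->Cu->Fe: 0.7 * 1.8 = 1.26
```

With redundant (cyclic) measurements the least-squares solve averages the inconsistencies, which is where the uncertainty propagation discussed in the abstract enters.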

  15. The Rack-Gear Tool Generation Modelling. Non-Analytical Method Developed in CATIA, Using the Relative Generating Trajectories Method

    NASA Astrophysics Data System (ADS)

    Teodor, V. G.; Baroiu, N.; Susac, F.; Oancea, N.

    2016-11-01

    When the profile of a rack-gear's teeth is known by direct measurement, as a coordinate matrix, modelling the family of surfaces associated with a pair of rolling centrodes aims to determine the generating quality for an imposed kinematics of the relative motion of the tool with respect to the blank. In this way, it is possible to determine the generating geometrical error, as a component of the total error. The generation modelling allows highlighting the potential errors of the generating tool, so that its profile can be corrected before the tool is used in the machining process. A method developed in CATIA is proposed, based on a new method, namely the method of “relative generating trajectories”. The analytical foundations are presented, together with applications to known models of rack-gear type tools used on Maag teething machines.

  16. Entanglement properties of the antiferromagnetic-singlet transition in the Hubbard model on bilayer square lattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Chia-Chen; Singh, Rajiv R. P.; Scalettar, Richard T.

    Here, we calculate the bipartite Rényi entanglement entropy of an L x L x 2 bilayer Hubbard model using a determinantal quantum Monte Carlo method recently proposed by Grover [Phys. Rev. Lett. 111, 130402 (2013)]. Two types of bipartition are studied: (i) one that divides the lattice into two L x L planes, and (ii) one that divides the lattice into two equal-size (L x L/2 x 2) bilayers. Furthermore, we compare our calculations with those for the tight-binding model studied by the correlation matrix method. As expected, the entropy for bipartition (i) scales as L^2, while the latter scales with L with possible logarithmic corrections. The onset of the antiferromagnet-to-singlet transition shows up as a saturation of the former to a maximal value and of the latter to a small value in the singlet phase. We also comment on the large uncertainties in the numerical results with increasing U, which would have to be overcome before the critical behavior and logarithmic corrections can be quantified.

  17. Entanglement properties of the antiferromagnetic-singlet transition in the Hubbard model on bilayer square lattices

    DOE PAGES

    Chang, Chia-Chen; Singh, Rajiv R. P.; Scalettar, Richard T.

    2014-10-10

    Here, we calculate the bipartite Rényi entanglement entropy of an L x L x 2 bilayer Hubbard model using a determinantal quantum Monte Carlo method recently proposed by Grover [Phys. Rev. Lett. 111, 130402 (2013)]. Two types of bipartition are studied: (i) one that divides the lattice into two L x L planes, and (ii) one that divides the lattice into two equal-size (L x L/2 x 2) bilayers. Furthermore, we compare our calculations with those for the tight-binding model studied by the correlation matrix method. As expected, the entropy for bipartition (i) scales as L^2, while the latter scales with L with possible logarithmic corrections. The onset of the antiferromagnet-to-singlet transition shows up as a saturation of the former to a maximal value and of the latter to a small value in the singlet phase. We also comment on the large uncertainties in the numerical results with increasing U, which would have to be overcome before the critical behavior and logarithmic corrections can be quantified.

  18. Rapid identification of bacteria from bioMérieux BacT/ALERT blood culture bottles by MALDI-TOF MS.

    PubMed

    Haigh, J D; Green, I M; Ball, D; Eydmann, M; Millar, M; Wilks, M

    2013-01-01

    Several studies have reported poor results when trying to identify microorganisms directly from the bioMérieux BacT/ALERT blood culture system using matrix-assisted laser desorption/ionisation-time of flight (MALDI-TOF) mass spectrometry. The aim of this study was to evaluate two new methods, Sepsityper and an enrichment method, for direct identification of microorganisms from this system. For both methods the samples were processed on the Bruker Microflex LT mass spectrometer (Biotyper) using the Microflex Control software to obtain spectra. The results from direct analysis were compared with those obtained by subculture and subsequent identification. A total of 350 positive blood cultures were processed simultaneously by the two methods. Fifty-three cultures were polymicrobial or failed to grow any organism on subculture; these results were not included because there was either no subculture result or, for polymicrobial cultures, it was known that the Biotyper would not be able to distinguish the constituent organisms correctly. Overall, the results showed that, contrary to previous reports, it is possible to identify bacteria directly from bioMérieux blood culture bottles: 219/297 (74%) correct identifications were obtained using the Bruker Sepsityper method and 228/297 (77%) using the enrichment method when only one organism was present. Although the enrichment method was simpler, the reagent costs for the Sepsityper method were approximately £4.00 per sample compared with £0.50. An even simpler and cheaper method, which was less labour-intensive and did not require further reagents, was also investigated: seventy-seven specimens from positive-signalled blood cultures were analysed by inoculating prewarmed blood agar plates and analysing any growth after 1-, 2- and 4-h periods of incubation at 37 °C, by either direct transfer or alcohol extraction. This method gave the highest number of correct identifications, 66/77 (86%), and was cheaper and less labour-intensive than either of the two methods above.

  19. Collisional Line Mixing in Parallel and Perpendicular Bands of Linear Molecules by a Non-Markovian Approach

    NASA Astrophysics Data System (ADS)

    Buldyreva, Jeanna

    2013-06-01

    Reliable modeling of radiative transfer in planetary atmospheres requires accounting for collisional line mixing effects in the regions of closely spaced vibrotational lines as well as in the spectral wings. Because of the excessive CPU cost of calculations from ab initio potential energy surfaces (if available), the relaxation matrix describing the influence of collisions is usually built from dynamical scaling laws, such as the Energy-Corrected Sudden (ECS) law. Theoretical approaches currently used for calculating absorption near the band center are based on the impact approximation (Markovian collisions without memory effects), and wings are modeled by introducing empirical parameters [1,2]. Operating with the traditional non-symmetric metric in the Liouville space, these approaches need corrections of the ECS-modeled relaxation matrix elements ("relaxation times" and a "renormalization procedure") in order to ensure the fundamental relations of detailed balance and the sum rules. We present an extension to the infrared absorption case of the non-Markovian ECS-type approach previously developed [3] for rototranslational Raman scattering spectra of linear molecules. Owing to a specific choice of symmetrized metric in the Liouville space, the relaxation matrix is corrected for initial bath-molecule correlations and satisfies the non-Markovian sum rules and detailed balance. A few standard ECS parameters, determined by fitting to experimental linewidths of the isotropic Q-branch, enable i) retrieval of the isolated-line parameters for other spectroscopies (IR absorption and anisotropic Raman scattering); ii) reproduction of the experimental intensities of these spectra. Besides including vibrational angular momenta in the IR bending shapes, Coriolis effects are also accounted for. The efficiency of the method is demonstrated on OCS-He and CO_2-CO_2 spectra up to 300 and 60 atm, respectively. [1] F. Niro, C. Boulet, and J.-M. Hartmann, J. Quant. Spectrosc. Radiat. Transf. 88, 483 (2004). [2] H. Tran, C. Boulet, S. Stefani, M. Snels, and G. Piccioni, J. Quant. Spectrosc. Radiat. Transf. 112, 925 (2011). [3] J. Buldyreva and L. Bonamy, Phys. Rev. A 60, 370-376 (1999).

  20. A new compound control method for sine-on-random mixed vibration test

    NASA Astrophysics Data System (ADS)

    Zhang, Buyun; Wang, Ruochen; Zeng, Falin

    2017-09-01

    Vibration environmental testing (VET) is one of the important and effective methods of supporting the strength design, reliability, and durability testing of mechanical products. A new separation control strategy is proposed for multiple-input multiple-output (MIMO) sine-on-random (SOR) mixed-mode vibration tests, an advanced and intensive type of VET. As the key element of the strategy, a correlation integral method is applied to separate the mixed signals into their random and sinusoidal components. The feedback control formula of the MIMO linear random vibration system is systematically derived in the frequency domain, and a Jacobi control algorithm is proposed based on elements of the power spectral density (PSD) matrix such as the auto-spectra, coherence, and phase. Because sine vibration tests tend to over-correct the excitation, a compression factor is introduced to reduce the excitation correction and avoid damage to the vibration table or other devices. The two methods are combined and applied in a MIMO SOR vibration test system. Finally, a verification test system with the vibration of a cantilever beam as the control object was established to verify the reliability and effectiveness of the proposed methods. The test results show that the exceedances can be accurately controlled within the tolerance range of the references, and the method can provide theoretical and practical support for mechanical engineering.
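The role of a compression factor in tempering the excitation correction can be sketched with a generic drive-update rule (an illustrative formula, not the paper's exact control law): the full reference/measured correction is raised to the power 1/c with c >= 1, so the drive moves toward the target gradually instead of over-correcting.

```python
import numpy as np

def update_drive(drive_psd, measured_psd, reference_psd, c=2.0):
    """Tempered drive-PSD correction with compression factor c (sketch).

    c = 1 applies the full correction; larger c applies a fraction 1/c
    of the correction in dB per control loop, protecting the shaker.
    """
    return drive_psd * (reference_psd / measured_psd) ** (1.0 / c)

drive = np.full(8, 1.0)          # current drive PSD (arbitrary units)
reference = np.full(8, 4.0)      # target response PSD
measured = np.full(8, 1.0)       # response is 6 dB below the reference
new_drive = update_drive(drive, measured, reference)
print(new_drive)  # closes half the 6 dB gap: factor sqrt(4) = 2.0
```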

  1. Toward holographic reconstruction of bulk geometry from lattice simulations

    NASA Astrophysics Data System (ADS)

    Rinaldi, Enrico; Berkowitz, Evan; Hanada, Masanori; Maltz, Jonathan; Vranas, Pavlos

    2018-02-01

    A black hole described in SU(N) gauge theory consists of N D-branes. By separating one of the D-branes from the others and studying the interaction between them, the black hole geometry can be probed. In order to obtain quantitative results, we employ lattice Monte Carlo simulation. As a proof of concept, we perform an explicit calculation in the matrix model dual to the black zero-brane in type IIA string theory. We demonstrate that this method works in the high-temperature region, where the stringy correction is large, and we discuss possible dual gravity interpretations.

  2. Internal standards in fluorescent X-ray spectroscopy (publication authorized by the Director, U.S. Geological Survey).

    USGS Publications Warehouse

    Adler, I.; Axelrod, J.M.

    1955-01-01

    The use of internal standards in the analysis of ores and minerals of widely-varying matrix by means of fluorescent X-ray spectroscopy is frequently the most practical approach. Internal standards correct for absorption and enhancement effects except when an absorption edge falls between the comparison lines or a very strong emission line falls between the absorption edges responsible for the comparison lines. Particle size variations may introduce substantial errors; one method of coping with the particle size problem is grinding the sample with an added abrasive. © 1955.
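The internal-standard principle can be reduced to a one-line calculation (illustrative, made-up numbers): because matrix absorption and enhancement scale the analyte line and the internal-standard line similarly, their intensity ratio tracks the analyte concentration.

```python
# Internal-standard quantification sketch (hypothetical values):
# concentration is proportional to the analyte/internal-standard
# intensity ratio, with a factor k from calibration standards.
k = 2.0                                  # assumed calibration factor
I_analyte, I_internal = 1500.0, 1200.0   # measured line intensities
concentration = k * I_analyte / I_internal
print(concentration)  # 2.5 (same units as the calibration standards)
```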

  3. Toward holographic reconstruction of bulk geometry from lattice simulations

    DOE PAGES

    Rinaldi, Enrico; Berkowitz, Evan; Hanada, Masanori; ...

    2018-02-07

    A black hole described in SU(N) gauge theory consists of N D-branes. By separating one of the D-branes from the others and studying the interaction between them, the black hole geometry can be probed. In order to obtain quantitative results, we employ lattice Monte Carlo simulation. As a proof of concept, we perform an explicit calculation in the matrix model dual to the black zero-brane in type IIA string theory. We demonstrate that this method works in the high-temperature region, where the stringy correction is large, and we discuss possible dual gravity interpretations.

  4. [Effect of hemosorption on the ultrastructure of hepatocytes in toxic liver damage].

    PubMed

    Kasymov, A Kh; Kasymov, Sh Z; Vorozheĭkin, V M; Kirichenko, I P

    1985-03-01

    Extracorporeal perfusion of toxic blood via carbonic sorbents is an effective method for correcting severe disturbances of hemostasis. Ultrastructural alterations in hepatic cells were studied in experimental toxic liver injury before and after hemosorption. It was established that after hemosorption the processes of intracellular regeneration were significantly activated in the liver parenchyma. The number of cristae in the mitochondria increased, as did the electron density of the matrix. At the same time, the number of lysosomes rose as well. However, in persistent unresolved cholestasis, destructive alterations in the hepatic tissue progressed despite hemosorption.

  5. Overcoming Sequence Misalignments with Weighted Structural Superposition

    PubMed Central

    Khazanov, Nickolay A.; Damm-Ganamet, Kelly L.; Quang, Daniel X.; Carlson, Heather A.

    2012-01-01

    An appropriate structural superposition identifies similarities and differences between homologous proteins that are not evident from sequence alignments alone. We have coupled our Gaussian-weighted RMSD (wRMSD) tool with a sequence aligner and seed extension (SE) algorithm to create a robust technique for overlaying structures and aligning sequences of homologous proteins (HwRMSD). HwRMSD overcomes errors in the initial sequence alignment that would normally propagate into a standard RMSD overlay. SE can generate a corrected sequence alignment from the improved structural superposition obtained by wRMSD. HwRMSD’s robust performance and its superiority over standard RMSD are demonstrated over a range of homologous proteins. Its better overlay results in corrected sequence alignments with good agreement to HOMSTRAD. Finally, HwRMSD is compared to established structural alignment methods: FATCAT, SSM, CE, and Dalilite. Most methods are comparable at placing residue pairs within 2 Å, but HwRMSD places many more residue pairs within 1 Å, providing a clear advantage. Such high accuracy is essential in drug design, where small distances can have a large impact on computational predictions. This level of accuracy is also needed to correct sequence alignments in an automated fashion, especially for omics-scale analysis. HwRMSD can align homologs with low sequence identity and large conformational differences, cases where both sequence-based and structural-based methods may fail. The HwRMSD pipeline overcomes the dependency of structural overlays on initial sequence pairing and removes the need to determine the best sequence-alignment method, substitution matrix, and gap parameters for each unique pair of homologs. PMID:22733542
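
    As an illustration of the weighting idea (not the authors' code), here is a minimal 2D sketch of Gaussian-weighted iterative superposition; the function names, the weight scale c, and the closed-form 2D rotation are simplifications of the 3D wRMSD fit described above.

```python
import math

def superpose_weighted(P, Q, w):
    """Weighted least-squares superposition of 2D point set Q onto P.

    In 2D the optimal rotation angle has a closed form, which keeps this
    sketch dependency-free; the real method uses a 3D Kabsch-style fit.
    """
    W = sum(w)
    pc = [sum(wi * p[k] for wi, p in zip(w, P)) / W for k in (0, 1)]
    qc = [sum(wi * q[k] for wi, q in zip(w, Q)) / W for k in (0, 1)]
    num = den = 0.0
    for wi, p, q in zip(w, P, Q):
        px, py = p[0] - pc[0], p[1] - pc[1]
        qx, qy = q[0] - qc[0], q[1] - qc[1]
        num += wi * (qx * py - qy * px)
        den += wi * (qx * px + qy * py)
    th = math.atan2(num, den)
    c, s = math.cos(th), math.sin(th)
    return [(c * (q[0] - qc[0]) - s * (q[1] - qc[1]) + pc[0],
             s * (q[0] - qc[0]) + c * (q[1] - qc[1]) + pc[1]) for q in Q]

def wrmsd_align(P, Q, c=4.0, iters=20):
    """Gaussian-weighted iterative superposition: residue pairs that fit
    badly are down-weighted, so they cannot dominate the overlay."""
    w = [1.0] * len(P)
    for _ in range(iters):
        Q = superpose_weighted(P, Q, w)
        w = [math.exp(-((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) / c)
             for p, q in zip(P, Q)]
    return Q, w
```

    On an exact rigid-body copy the loop converges in one pass; on real homologs the Gaussian weights progressively ignore poorly matching residue pairs, which is what makes the overlay robust to initial misalignments.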

  6. Evaluation of a 3D local multiresolution algorithm for the correction of partial volume effects in positron emission tomography.

    PubMed

    Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris

    2011-09-01

    Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography leading to underestimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multiresolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low-resolution PET image using wavelet decompositions. Although this method allows creating PVE corrected images, it is based on a 2D global correlation model, which may introduce artifacts in regions where no significant correlation exists between anatomical and functional details. A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present, the new model outperformed the 2D global approach, avoiding artifacts and significantly improving quality of the corrected images and their quantitative accuracy. A new 3D local model was proposed for a voxel-wise PVE correction based on the original mutual multiresolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information.
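
    The geometric transfer matrix (GTM) correction used above as a comparison method reduces, in a toy two-region case, to solving a small linear system; the region fractions below are invented for illustration.

```python
def gtm_correct(observed, G):
    """Invert a 2x2 geometric transfer matrix: observed_i = sum_j G[i][j] * true_j.

    G[i][j] is the fraction of region j's true activity that the scanner
    attributes to region i (overlap of the regional spread functions).
    """
    (a, b), (c, d) = G
    det = a * d - b * c
    t1, t2 = observed
    return [(d * t1 - b * t2) / det, (a * t2 - c * t1) / det]
```

    Unlike the voxel-wise multiresolution approach evaluated in this paper, GTM only recovers mean regional activities, which is why the two are compared on quantitative accuracy.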

  7. Angle-domain inverse scattering migration/inversion in isotropic media

    NASA Astrophysics Data System (ADS)

    Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan

    2018-07-01

    The classical seismic asymptotic inversion can be transformed into a problem of inversion of the generalized Radon transform (GRT). In such methods, the combined parameters are linearly attached to the scattered wave-field by the Born approximation and recovered by applying an inverse GRT operator to the scattered wave-field data. A typical GRT-style true-amplitude inversion procedure contains an amplitude compensation process after the weighted migration, via division by an illumination-associated matrix whose elements are integrals over scattering angles. It is to some extent intuitive to perform the generalized linear inversion and the inversion of the GRT together through this process for direct inversion. However, such an operation is imprecise when the illumination at the image point is limited, which easily leads to inaccuracy and instability of the matrix. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally removes the external integral term related to the illumination in the conventional case. We solve the linearized integral equation for combined parameters at different fixed scattering-angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop, which provide correct amplitude-versus-angle (AVA) behavior and a reasonable illumination range for subsurface image points. Then we deal with the over-determined problem to solve for each parameter in the combination by a standard optimization operation. The angle-domain GRT inversion method avoids calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method can obtain more accurate amplitude information and a wider amplitude-preserved range. Several model tests demonstrate the effectiveness and practicability of the method.

  8. Markov state models from short non-equilibrium simulations—Analysis and correction of estimation bias

    NASA Astrophysics Data System (ADS)

    Nüske, Feliks; Wu, Hao; Prinz, Jan-Hendrik; Wehmeyer, Christoph; Clementi, Cecilia; Noé, Frank

    2017-03-01

    Many state-of-the-art methods for the thermodynamic and kinetic characterization of large and complex biomolecular systems by simulation rely on ensemble approaches, where data from large numbers of relatively short trajectories are integrated. In this context, Markov state models (MSMs) are extremely popular because they can be used to compute stationary quantities and long-time kinetics from ensembles of short simulations, provided that these short simulations are in "local equilibrium" within the MSM states. However, over the last 15 years since the inception of MSMs, it has been controversially discussed, and not yet answered, how deviations from local equilibrium can be detected, whether these deviations induce a practical bias in MSM estimation, and how to correct for them. In this paper, we address these issues: We systematically analyze the estimation of MSMs from short non-equilibrium simulations, and we provide an expression for the error between unbiased transition probabilities and the expected estimate from many short simulations. We show that the unbiased MSM estimate can be obtained even from relatively short non-equilibrium simulations in the limit of long lag times and good discretization. Further, we exploit observable operator model (OOM) theory to derive an unbiased estimator for the MSM transition matrix that corrects for the effect of starting out of equilibrium, even when short lag times are used. Finally, we show how the OOM framework can be used to estimate the exact eigenvalues or relaxation time scales of the system without estimating an MSM transition matrix, which allows us to practically assess the discretization quality of the MSM. Applications to model systems and molecular dynamics simulation data of alanine dipeptide are included for illustration. The improved MSM estimator is implemented in PyEMMA as of version 2.3.
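
    The basic MSM estimation step whose bias the paper analyzes (before any OOM correction) can be sketched for two states; the count and lag values below are illustrative only.

```python
import math

def msm_from_counts(C, lag):
    """Naive maximum-likelihood 2-state MSM: row-normalize the transition
    count matrix, then read off the implied relaxation timescale.

    For a 2x2 stochastic matrix the nontrivial eigenvalue is
    T[0][0] + T[1][1] - 1, and the implied timescale is -lag / ln(lambda).
    """
    T = [[c / sum(row) for c in row] for row in C]
    lam2 = T[0][0] + T[1][1] - 1.0
    return T, -lag / math.log(lam2)
```

    It is exactly this estimator that is biased when the short trajectories are not started from local equilibrium; the paper's OOM-based correction replaces the plain row normalization.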

  9. Use of Matrix-Assisted Laser Desorption Ionization–Time of Flight Mass Spectrometry for Identification of Molds of the Fusarium Genus

    PubMed Central

    Stubbe, Dirk; De Cremer, Koen; Piérard, Denis; Normand, Anne-Cécile; Piarroux, Renaud; Detandt, Monique; Hendrickx, Marijke

    2014-01-01

    The rates of infection with Fusarium molds are increasing, and a diverse number of Fusarium spp. belonging to different species complexes can cause infection. Conventional species identification in the clinical laboratory is time-consuming and prone to errors. We therefore evaluated whether matrix-assisted laser desorption ionization–time of flight mass spectrometry (MALDI-TOF MS) is a useful alternative. The 289 Fusarium strains from the Belgian Coordinated Collections of Microorganisms (BCCM)/Institute of Hygiene and Epidemiology Mycology (IHEM) culture collection with validated sequence-based identities and comprising 40 species were used in this study. An identification strategy was developed, applying a standardized MALDI-TOF MS assay and an in-house reference spectrum database. In vitro antifungal testing was performed to assess important differences in susceptibility between clinically relevant species/species complexes. We observed that no incorrect species complex identifications were made by MALDI-TOF MS, and 82.8% of the identifications were correct to the species level. This success rate was increased to 91% by lowering the cutoff for identification. Although the identification of the correct species complex member was not always guaranteed, antifungal susceptibility testing showed that discriminating between Fusarium species complexes can be important for treatment but is not necessarily required between members of a species complex. With this perspective, some Fusarium species complexes with closely related members can be considered as a whole, increasing the success rate of correct identifications to 97%. The application of our user-friendly MALDI-TOF MS identification approach resulted in a dramatic improvement in both time and accuracy compared to identification with the conventional method. A proof of principle of our MALDI-TOF MS approach in the clinical setting using recently isolated Fusarium strains demonstrated its validity. PMID:25411180

  10. Solving matrix-effects exploiting the second order advantage in the resolution and determination of eight tetracycline antibiotics in effluent wastewater by modelling liquid chromatography data with multivariate curve resolution-alternating least squares and unfolded-partial least squares followed by residual bilinearization algorithms I. Effect of signal pre-treatment.

    PubMed

    De Zan, M M; Gil García, M D; Culzoni, M J; Siano, R G; Goicoechea, H C; Martínez Galera, M

    2008-02-01

    The effect of piecewise direct standardization (PDS) and baseline correction approaches was evaluated in the performance of multivariate curve resolution (MCR-ALS) algorithm for the resolution of three-way data sets from liquid chromatography with diode-array detection (LC-DAD). First, eight tetracyclines (tetracycline, oxytetracycline, chlorotetracycline, demeclocycline, methacycline, doxycycline, meclocycline and minocycline) were isolated from 250 mL effluent wastewater samples by solid-phase extraction (SPE) with Oasis MAX 500 mg/6 mL cartridges and then separated on an Aquasil C18 150 mm × 4.6 mm (5 μm particle size) column by LC and detected by DAD. Previous experiments, carried out with Milli-Q water samples, showed a considerable loss of the most polar analytes (minocycline, oxytetracycline and tetracycline) due to breakthrough. PDS was applied to overcome this important drawback. Conversion of chromatograms obtained from standards prepared in solvent was performed obtaining a high correlation with those corresponding to the real situation (r² = 0.98). Although the enrichment and clean-up steps were carefully optimized, the sample matrix caused a large baseline drift, and also additive interferences were present at the retention times of the analytes. These problems were solved with the baseline correction method proposed by Eilers. MCR-ALS was applied to the corrected and uncorrected three-way data sets to obtain spectral and chromatographic profiles of each tetracycline, as well as those corresponding to the co-eluting interferences. The complexity of the calibration model built from uncorrected data sets was higher, as expected, and the quality of the spectral and chromatographic profiles was worse.
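
    The Eilers-style baseline correction mentioned above is iteratively reweighted smoothing: points above the current baseline (peaks) get a small weight, points below get a large one. This dense pure-Python sketch is illustrative only; the parameter values lam and p are typical defaults, not the paper's.

```python
def solve(A, b):
    """Naive Gaussian elimination with partial pivoting (small systems only)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def asls_baseline(y, lam=1e4, p=0.01, iters=10):
    """Asymmetric least squares baseline in the spirit of Eilers.

    Minimizes sum_i w_i * (y_i - z_i)^2 + lam * sum_i (second diff of z)_i^2,
    where points above the current baseline get weight p and points
    below get 1 - p.
    """
    n = len(y)
    A0 = [[0.0] * n for _ in range(n)]  # lam * D^T D, D = second differences
    for i in range(n - 2):
        d = [0.0] * n
        d[i], d[i + 1], d[i + 2] = 1.0, -2.0, 1.0
        for r in (i, i + 1, i + 2):
            for c in (i, i + 1, i + 2):
                A0[r][c] += lam * d[r] * d[c]
    w = [1.0] * n
    for _ in range(iters):
        A = [[A0[r][c] + (w[r] if r == c else 0.0) for c in range(n)]
             for r in range(n)]
        z = solve(A, [w[i] * y[i] for i in range(n)])
        w = [p if y[i] > z[i] else 1.0 - p for i in range(n)]
    return z
```

    Production implementations use sparse banded solvers; the dense version here is only meant to show why a drifting baseline is tracked while peaks are ignored.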

  11. High-Throughput Identification of Bacteria and Yeast by Matrix-Assisted Laser Desorption Ionization-Time of Flight Mass Spectrometry in Conventional Medical Microbiology Laboratories

    PubMed Central

    van Veen, S. Q.; Claas, E. C. J.; Kuijper, Ed J.

    2010-01-01

    Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) is suitable for high-throughput and rapid diagnostics at low costs and can be considered an alternative for conventional biochemical and molecular identification systems in a conventional microbiological laboratory. First, we evaluated MALDI-TOF MS using 327 clinical isolates previously cultured from patient materials and identified by conventional techniques (Vitek-II, API, and biochemical tests). Discrepancies were analyzed by molecular analysis of the 16S genes. Of 327 isolates, 95.1% were identified correctly to genus level, and 85.6% were identified to species level by MALDI-TOF MS. Second, we performed a prospective validation study, including 980 clinical isolates of bacteria and yeasts. Overall performance of MALDI-TOF MS was significantly better than conventional biochemical systems for correct species identification (92.2% and 83.1%, respectively) and produced fewer incorrect genus identifications (0.1% and 1.6%, respectively). Correct species identification by MALDI-TOF MS was observed in 97.7% of Enterobacteriaceae, 92% of nonfermentative Gram-negative bacteria, 94.3% of staphylococci, 84.8% of streptococci, 84% of a miscellaneous group (mainly Haemophilus, Actinobacillus, Cardiobacterium, Eikenella, and Kingella [HACEK]), and 85.2% of yeasts. MALDI-TOF MS had significantly better performance than conventional methods for species identification of staphylococci and genus identification of bacteria belonging to HACEK group. Misidentifications by MALDI-TOF MS were clearly associated with an absence of sufficient spectra from suitable reference strains in the MALDI-TOF MS database. We conclude that MALDI-TOF MS can be implemented easily for routine identification of bacteria (except for pneumococci and viridans streptococci) and yeasts in a medical microbiological laboratory. PMID:20053859

  13. Method for making 2-electron response reduced density matrices approximately N-representable

    NASA Astrophysics Data System (ADS)

    Lanssens, Caitlin; Ayers, Paul W.; Van Neck, Dimitri; De Baerdemacker, Stijn; Gunst, Klaas; Bultinck, Patrick

    2018-02-01

    In methods like geminal-based approaches or coupled cluster that are solved using the projected Schrödinger equation, direct computation of the 2-electron reduced density matrix (2-RDM) is impractical and one falls back to a 2-RDM based on response theory. However, the 2-RDMs from response theory are not N-representable. That is, the response 2-RDM does not correspond to an actual physical N-electron wave function. We present a new algorithm for making these non-N-representable 2-RDMs approximately N-representable, i.e., it has the right symmetry and normalization and it fulfills the P-, Q-, and G-conditions. In addition to an algorithm that can be applied to any 2-RDM, we have also developed a 2-RDM optimization procedure specifically for seniority-zero 2-RDMs. We aim to find the 2-RDM with the right properties which is the closest (in the sense of the Frobenius norm) to the non-N-representable 2-RDM by minimizing the square norm of the difference between this initial response 2-RDM and the targeted 2-RDM under the constraint that the trace is normalized and the 2-RDM, Q-matrix, and G-matrix are positive semidefinite, i.e., their eigenvalues are non-negative. Our method is suitable for fixing non-N-representable 2-RDMs which are close to being N-representable. Through the N-representability optimization algorithm we add a small correction to the initial 2-RDM such that it fulfills the most important N-representability conditions.
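
    The core positive-semidefiniteness repair (diagonalize, clip negative eigenvalues at zero, which yields the Frobenius-closest PSD matrix) can be shown for a symmetric 2x2 matrix; the paper's full algorithm additionally enforces trace normalization and the P-, Q-, and G-conditions simultaneously.

```python
import math

def nearest_psd_2x2(M):
    """Closest (Frobenius-norm) positive-semidefinite matrix to a
    symmetric 2x2 matrix M: eigendecompose and clip negative eigenvalues."""
    a, b, d = M[0][0], M[0][1], M[1][1]
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc
    # eigenvector for the larger eigenvalue l1
    if abs(b) > 1e-15:
        v1 = (l1 - d, b)
    else:
        v1 = (1.0, 0.0) if a >= d else (0.0, 1.0)
    n1 = math.hypot(*v1)
    u = (v1[0] / n1, v1[1] / n1)
    v = (-u[1], u[0])  # orthogonal eigenvector for l2
    l1, l2 = max(l1, 0.0), max(l2, 0.0)  # the actual clipping step
    return [[l1 * u[0] * u[0] + l2 * v[0] * v[0],
             l1 * u[0] * u[1] + l2 * v[0] * v[1]],
            [l1 * u[1] * u[0] + l2 * v[1] * v[0],
             l1 * u[1] * u[1] + l2 * v[1] * v[1]]]
```

    For real 2-RDMs the same clipping is done with a general symmetric eigensolver, inside a constrained minimization of the squared Frobenius distance.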

  14. Comparison of two matrix-assisted laser desorption ionization-time of flight mass spectrometry methods with conventional phenotypic identification for routine identification of bacteria to the species level.

    PubMed

    Cherkaoui, Abdessalam; Hibbs, Jonathan; Emonet, Stéphane; Tangomo, Manuela; Girard, Myriam; Francois, Patrice; Schrenzel, Jacques

    2010-04-01

    Bacterial identification relies primarily on culture-based methodologies requiring 24 h for isolation and an additional 24 to 48 h for species identification. Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) is an emerging technology newly applied to the problem of bacterial species identification. We evaluated two MALDI-TOF MS systems with 720 consecutively isolated bacterial colonies under routine clinical laboratory conditions. Isolates were analyzed in parallel on both devices, using the manufacturers' default recommendations. We compared MS with conventional biochemical test system identifications. Discordant results were resolved with "gold standard" 16S rRNA gene sequencing. The first MS system (Bruker) gave high-confidence identifications for 680 isolates, of which 674 (99.1%) were correct; the second MS system (Shimadzu) gave high-confidence identifications for 639 isolates, of which 635 (99.4%) were correct. Had MS been used for initial testing and biochemical identification used only in the absence of high-confidence MS identifications, the laboratory would have saved approximately US$5 per isolate in marginal costs and reduced average turnaround time by more than an 8-h shift, with no loss in accuracy. Our data suggest that implementation of MS as a first test strategy for one-step species identification would improve timeliness and reduce isolate identification costs in clinical bacteriology laboratories now.

  15. A Devil in the Details: Matrix-Dependent ⁴⁰Ca⁴²Ca⁺⁺/⁴²Ca⁺ and Its Effects on Estimates of the Initial ⁴¹Ca/⁴⁰Ca in the Solar System

    NASA Astrophysics Data System (ADS)

    McKeegan, K. D.; Liu, M.-C.

    2015-07-01

    Ian Hutcheon established that the molecular ion interference ⁴⁰Ca⁴²Ca⁺⁺/⁴²Ca⁺ on ⁴¹K⁺ is strongly dependent on the mineral analyzed. Correction for this "matrix effect" led to a downward revision of the initial ⁴¹Ca/⁴⁰Ca of the solar system.
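
    The correction itself is one subtraction per analysis spot; the substance of the result is that the doubly-charged production ratio entering it is matrix-dependent and must be calibrated per mineral phase. A sketch with invented signal values:

```python
def correct_k41(measured_41, ca42_signal, dc_ratio):
    """Remove the 40Ca42Ca++ molecular-ion contribution at m/z 41.

    dc_ratio is the matrix-dependent (40Ca42Ca++)/(42Ca+) production
    ratio, which must be measured separately for each mineral analyzed;
    ca42_signal is the monitored 42Ca+ count rate.
    """
    return measured_41 - dc_ratio * ca42_signal
```

    Using a single ratio for all minerals over- or under-corrects the m/z 41 signal, which is exactly the bias that propagated into earlier estimates of the initial ⁴¹Ca/⁴⁰Ca.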

  16. Calabi-Yau structures on categories of matrix factorizations

    NASA Astrophysics Data System (ADS)

    Shklyarov, Dmytro

    2017-09-01

    Using tools of complex geometry, we construct explicit proper Calabi-Yau structures, that is, non-degenerate cyclic cocycles on differential graded categories of matrix factorizations of regular functions with isolated critical points. The formulas involve the Kapustin-Li trace and its higher corrections. From the physics perspective, our result yields explicit 'off-shell' models for categories of topological D-branes in B-twisted Landau-Ginzburg models.

  17. Rapid Identification of Cryptococcus neoformans and Cryptococcus gattii by Matrix-Assisted Laser Desorption Ionization–Time of Flight Mass Spectrometry

    PubMed Central

    McTaggart, Lisa R.; Lei, Eric; Richardson, Susan E.; Hoang, Linda; Fothergill, Annette; Zhang, Sean X.

    2011-01-01

    Compared to DNA sequence analysis, matrix-assisted laser desorption ionization–time of flight mass spectrometry (MALDI-TOF MS) correctly identified 100% of Cryptococcus species, distinguishing the notable pathogens Cryptococcus neoformans and C. gattii. Identification was greatly enhanced by supplementing a commercial spectral library with additional entries to account for subspecies variability. PMID:21653762

  18. Mixed quantum/classical theory of rotationally and vibrationally inelastic scattering in space-fixed and body-fixed reference frames

    NASA Astrophysics Data System (ADS)

    Semenov, Alexander; Babikov, Dmitri

    2013-11-01

    We formulated the mixed quantum/classical theory for rotationally and vibrationally inelastic scattering processes in the diatomic molecule + atom system. Two versions of the theory are presented, the first in the space-fixed and the second in the body-fixed reference frame. The first version is easy to derive and the resultant equations of motion are transparent, but the state-to-state transition matrix is complex-valued and dense. Such calculations may be computationally demanding for heavier molecules and/or higher temperatures, when the number of accessible channels becomes large. In contrast, the second version of the theory requires some tedious derivations and the final equations of motion are rather complicated (not particularly intuitive). However, the state-to-state transitions are driven by real-valued sparse matrices of much smaller size. Thus, this formulation is the method of choice from the computational point of view, while the space-fixed formulation can serve as a test of the body-fixed equations of motion, and of the code. Rigorous numerical tests were carried out for a model system to ensure that all equations, matrices, and computer codes in both formulations are correct.

  19. How electronic dynamics with Pauli exclusion produces Fermi-Dirac statistics.

    PubMed

    Nguyen, Triet S; Nanguneri, Ravindra; Parkhill, John

    2015-04-07

    It is important that any dynamics method approaches the correct population distribution at long times. In this paper, we derive a one-body reduced density matrix dynamics for electrons in energetic contact with a bath. We obtain a remarkable equation of motion which shows that in order to reach equilibrium properly, rates of electron transitions depend on the density matrix. Even though the bath drives the electrons towards a Boltzmann distribution, hole blocking factors in our equation of motion cause the electronic populations to relax to a Fermi-Dirac distribution. These factors are an old concept, but we show how they can be derived with a combination of time-dependent perturbation theory and the extended normal ordering of Mukherjee and Kutzelnigg for a general electronic state. The resulting non-equilibrium kinetic equations generalize the usual Redfield theory to many-electron systems, while ensuring that the orbital occupations remain between zero and one. In numerical applications of our equations, we show that relaxation rates of molecules are not constant because of the blocking effect. Other applications to model atomic chains are also presented which highlight the importance of treating both dephasing and relaxation. Finally, we show how the bath localizes the electron density matrix.
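
    The role of the hole-blocking factors can be demonstrated numerically. The following sketch (not the paper's full Redfield-type theory; level energies and half filling are invented) integrates a Pauli master equation whose bath rates satisfy Boltzmann detailed balance, yet whose stationary populations come out Fermi-Dirac because of the (1 - p) factors.

```python
import math

def relax_to_fermi_dirac(energies, beta, dt=0.01, steps=20000):
    """Pauli master equation with hole-blocking factors (1 - p).

    Bath-induced rates obey Boltzmann detailed balance, but the blocking
    factors drive the level populations to a Fermi-Dirac stationary state.
    """
    def w(i, j):  # bath-induced transition rate j -> i
        return math.exp(-beta * max(energies[i] - energies[j], 0.0))

    n = len(energies)
    p = [0.5] * n  # half filling; total particle number is conserved
    for _ in range(steps):
        dp = [0.0] * n
        for i in range(n):
            for j in range(n):
                if i != j:
                    dp[i] += w(i, j) * p[j] * (1.0 - p[i])  # gain, blocked by 1-p_i
                    dp[i] -= w(j, i) * p[i] * (1.0 - p[j])  # loss, blocked by 1-p_j
        p = [pi + dt * dpi for pi, dpi in zip(p, dp)]
    return p

def fermi(e, beta, mu=0.0):
    """Fermi-Dirac occupation for comparison."""
    return 1.0 / (1.0 + math.exp(beta * (e - mu)))
```

    Dropping the (1 - p) factors from the same equations would instead relax the populations to a Boltzmann distribution, which is the contrast the abstract describes.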

  20. Simultaneous quantitative analysis of nine vitamin D compounds in human blood using LC-MS/MS.

    PubMed

    Abu Kassim, Nur Sofiah; Gomes, Fabio P; Shaw, Paul Nicholas; Hewavitharana, Amitha K

    2016-01-01

    It has been suggested that each member of the family of vitamin D compounds may have different function(s). Therefore, selective quantification of each compound is important in clinical research. The development and validation of a method for the simultaneous determination of 12 vitamin D compounds in human blood using precolumn derivatization followed by LC-MS/MS are described. Internal standard calibration with 12 stable isotope labeled analogs was used to correct for matrix effects in the MS detector. Nine vitamin D compounds were quantifiable in blood samples, with detection limits at femtomole levels. Serum (compared with plasma) was found to be a more suitable sample type, and protein precipitation (compared with saponification) a more effective extraction method for vitamin D assay.

  1. [Application of cryogenic stimulation in treatment of chronic wounds].

    PubMed

    Vinnik, Iu S; Karapetian, G E; Iakimov, S V; Sychev, A G

    2008-01-01

    The authors studied alterations occurring both in the ultrastructure of the cell matrix and in the microcirculatory bed of the chronic wound after local exposure to a cryoagent. Up-to-date, effective methods, including laser Doppler flowmetry, were used, followed by correct statistical processing of the data obtained. Cryogenic stimulation of the wound was shown to result in considerably improved perfusion of the microcirculatory bed, epithelization, and remodeling of the scar. It allowed transformation of a chronic process into an acute one and thus led to a considerably accelerated process of regeneration. The developed method of cryogenic treatment of the chronic wound was used in 35 patients; it allowed quicker healing of the chronic wounds and made ambulatory treatment of the patients 3 weeks shorter.

  2. Stable isotope dilution analysis of hydrologic samples by inductively coupled plasma mass spectrometry

    USGS Publications Warehouse

    Garbarino, John R.; Taylor, Howard E.

    1987-01-01

    Inductively coupled plasma mass spectrometry is employed in the determination of Ni, Cu, Sr, Cd, Ba, Ti, and Pb in nonsaline, natural water samples by stable isotope dilution analysis. Hydrologic samples were directly analyzed without any unusual pretreatment. Interference effects related to overlapping isobars, formation of metal oxide and multiply charged ions, and matrix composition were identified and suitable methods of correction evaluated. A comparability study showed that single-element isotope dilution analysis was only marginally better than sequential multielement isotope dilution analysis. Accuracy and precision of the single-element method were determined on the basis of results obtained for standard reference materials. The instrumental technique was shown to be ideally suited for programs associated with certification of standard reference materials.
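
    The single-element isotope dilution calculation reduces to one closed-form expression relating the measured blend ratio to the amount of analyte; the abundances and amounts below are invented for illustration.

```python
def isotope_dilution(n_spike, ab_spike, ab_sample, r_measured):
    """Amount of analyte from a single measured isotope-amount ratio.

    ab_* = (abundance of reference isotope A, abundance of spike isotope B)
    for the enriched spike and the natural-abundance sample; r_measured is
    the A/B ratio observed in the spiked blend. Derived from
    r = (n_x*hA_x + n_sp*hA_sp) / (n_x*hB_x + n_sp*hB_sp), solved for n_x.
    """
    ha_sp, hb_sp = ab_spike
    ha_x, hb_x = ab_sample
    return n_spike * (ha_sp - r_measured * hb_sp) / (r_measured * hb_x - ha_x)
```

    Because only a ratio has to be measured, the result is insensitive to signal drift and partial analyte loss after spike equilibration, which is what makes the technique attractive for certifying reference materials.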

  3. Projector-Camera Systems for Immersive Training

    DTIC Science & Technology

    2006-01-01

    average to a sequence of 100 captured distortion corrected images. The OpenCV library [ OpenCV ] was used for camera calibration. To correct for...rendering application [Treskunov, Pair, and Swartout, 2004]. It was transposed to take into account different matrix conventions between OpenCV and...Screen Imperfections. Proc. Workshop on Projector-Camera Systems (PROCAMS), Nice, France, IEEE. OpenCV : Open Source Computer Vision. [Available

  4. Position Error Covariance Matrix Validation and Correction

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe, Jr.

    2016-01-01

    In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
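
    One common way to validate a position error covariance under the Gaussian assumption discussed here is to check that the squared Mahalanobis distances of observed residuals average to the number of degrees of freedom. A 2D sketch (the operational problem is 3D, and the names below are illustrative):

```python
def mahalanobis2_2d(residual, cov):
    """Squared Mahalanobis distance of a 2D residual under covariance cov.

    If cov correctly describes Gaussian position errors, these values are
    chi-square distributed with 2 degrees of freedom (mean 2).
    """
    (a, b), (c, d) = cov
    det = a * d - b * c
    x, y = residual
    return (d * x * x - (b + c) * x * y + a * y * y) / det

def covariance_scale_factor(residuals, cov):
    """Mean of m^2 / dof over a residual sample: about 1 if cov is
    realistic, above 1 if cov is optimistic (too small), below 1 if
    pessimistic. The square root gives a covariance scaling correction."""
    m2 = [mahalanobis2_2d(r, cov) for r in residuals]
    return sum(m2) / (2.0 * len(m2))
```

    A scale factor far from 1 indicates the predicted covariance should be inflated or deflated before it is used in a collision-probability calculation.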

  5. Generalized nonequilibrium vertex correction method in coherent medium theory for quantum transport simulation of disordered nanoelectronics

    NASA Astrophysics Data System (ADS)

    Yan, Jiawei; Ke, Youqi

    2016-07-01

    Electron transport properties of nanoelectronics can be significantly influenced by the inevitable and randomly distributed impurities/defects. For theoretical simulation of disordered nanoscale electronics, one is interested in both the configurationally averaged transport property and its statistical fluctuation that tells device-to-device variability induced by disorder. However, due to the lack of an effective method to do disorder averaging under the nonequilibrium condition, the important effects of disorders on electron transport remain largely unexplored or poorly understood. In this work, we report a general formalism of Green's function based nonequilibrium effective medium theory to calculate the disordered nanoelectronics. In this method, based on a generalized coherent potential approximation for the Keldysh nonequilibrium Green's function, we developed a generalized nonequilibrium vertex correction method to calculate the average of a two-Keldysh-Green's-function correlator. We obtain nine nonequilibrium vertex correction terms, as a complete family, to express the average of any two-Green's-function correlator and find they can be solved by a set of linear equations. As an important result, the averaged nonequilibrium density matrix, averaged current, disorder-induced current fluctuation, and averaged shot noise, which involve different two-Green's-function correlators, can all be derived and computed in an effective and unified way. To test the general applicability of this method, we applied it to compute the transmission coefficient and its fluctuation with a square-lattice tight-binding model and compared with the exact results and other previously proposed approximations. Our results show very good agreement with the exact results for a wide range of disorder concentrations and energies. 
    In addition, to integrate with density functional theory and realize first-principles quantum transport simulation, we have also derived a general form of the conditionally averaged nonequilibrium Green's function for multicomponent disorder.
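
    The equilibrium single-site coherent potential approximation (CPA) that the generalized method builds on can be illustrated with a short self-consistency loop. The sketch below is a minimal equilibrium CPA for a binary alloy on a semielliptic band, not the authors' nonequilibrium vertex correction scheme; the on-site energies, concentrations, and bandwidth are illustrative values.

```python
import cmath

def cpa_self_energy(z, eps=(-0.5, 0.5), conc=(0.5, 0.5), W=1.0,
                    tol=1e-10, maxit=500):
    """Single-site CPA self-energy for a binary alloy on a semielliptic band.

    z    : complex energy (Im z > 0)
    eps  : on-site energies of the two species (illustrative)
    conc : their concentrations (sum to 1)
    W    : half-bandwidth of the semielliptic density of states
    """
    # start from the virtual-crystal (concentration-averaged) potential
    sigma = sum(c * e for c, e in zip(conc, eps))
    g = 0j
    for _ in range(maxit):
        zeta = z - sigma
        sq = cmath.sqrt(zeta * zeta - W * W)
        if sq.imag * zeta.imag < 0:      # retarded branch: Im g < 0 for Im z > 0
            sq = -sq
        # local Green's function of the effective medium (semielliptic band)
        g = 2.0 * (zeta - sq) / (W * W)
        # CPA condition: concentration-averaged single-site T-matrix vanishes
        t_avg = sum(c * (e - sigma) / (1.0 - (e - sigma) * g)
                    for c, e in zip(conc, eps))
        sigma_new = sigma + t_avg / (1.0 + g * t_avg)
        if abs(sigma_new - sigma) < tol:
            return sigma_new, g
        sigma = sigma_new
    return sigma, g
```

    The same fixed-point structure, guessing a self-energy, computing the medium Green's function, and zeroing the averaged single-site T-matrix, underlies the nonequilibrium generalization described in the abstract.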

  6. Melamine detection by mid- and near-infrared (MIR/NIR) spectroscopy: a quick and sensitive method for dairy products analysis including liquid milk, infant formula, and milk powder.

    PubMed

    Balabin, Roman M; Smirnov, Sergey V

    2011-07-15

    Melamine (2,4,6-triamino-1,3,5-triazine) is a nitrogen-rich chemical implicated in the pet and human food recalls and in the global food safety scares involving milk products. Due to the serious health concerns associated with melamine consumption and the extensive scope of affected products, rapid and sensitive methods to detect melamine's presence are essential. We propose the use of spectroscopy data, produced by near-infrared (near-IR/NIR) and mid-infrared (mid-IR/MIR) spectroscopies in particular, for melamine detection in complex dairy matrices. None of the IR-based methods for melamine detection reported to date has unambiguously shown wide applicability to different dairy products together with a limit of detection (LOD) below 1 ppm on an independent sample set. It was found that infrared spectroscopy is an effective tool to detect melamine in dairy products, such as infant formula, milk powder, or liquid milk. A LOD below 1 ppm (0.76±0.11 ppm) can be reached if a correct spectrum preprocessing (pretreatment) technique and a correct multivariate data analysis (MDA) algorithm, such as partial least squares regression (PLS), polynomial PLS (Poly-PLS), artificial neural network (ANN), support vector regression (SVR), or least squares support vector machine (LS-SVM), are used for spectrum analysis. The relationship between the MIR/NIR spectrum of milk products and melamine content is nonlinear. Thus, nonlinear regression methods are needed to correctly predict the triazine-derivative content of milk products. It can be concluded that mid- and near-infrared spectroscopy can be regarded as a quick, sensitive, robust, and low-cost method for liquid milk, infant formula, and milk powder analysis. Copyright © 2011 Elsevier B.V. All rights reserved.
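
    As a rough illustration of the multivariate calibration step, the following is a minimal PLS1 (NIPALS) regression of the kind used to relate spectra to analyte concentration. The data in the usage below are synthetic and the function name is our own; this is not the paper's software.

```python
import numpy as np

def pls1_fit(X, y, n_components=2):
    """Minimal PLS1 (NIPALS) for a single response.
    X: centered spectra (n_samples, n_channels); y: centered response.
    Returns regression coefficients B so that predictions are X @ B."""
    Xr, yr = X.astype(float).copy(), y.astype(float).copy()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xr.T @ yr                    # weight: covariance direction
        nw = np.linalg.norm(w)
        if nw < 1e-12:                   # response fully explained; stop early
            break
        w /= nw
        t = Xr @ w                       # scores
        tt = float(t @ t)
        p = Xr.T @ t / tt                # X loadings
        qa = float(yr @ t) / tt          # y loading
        Xr -= np.outer(t, p)             # deflate X
        yr = yr - qa * t                 # deflate y
        W.append(w); P.append(p); q.append(qa)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)
```

    With enough components on noise-free linear data this reproduces the least-squares fit; in practice the number of latent variables is chosen by cross-validation, and nonlinear variants (Poly-PLS, ANN, SVR) are used when, as here, the spectrum-concentration relationship is nonlinear.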

  7. A new paradigm for use of ultrafast lasers in ophthalmology for enhancement of corneal mechanical properties and permanent correction of refractive errors

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Fomovsky, Mikhail; Hall, Jamie R.; Paik, David C.; Trokel, Stephen L.; Vukelic, Sinisa

    2017-02-01

    A new paradigm for strengthening corneal tissue as well as permanently correcting refractive errors has been proposed. Ultrafast laser irradiation is confined to levels below optical breakdown such that tissue damage is avoided while creating an ionization field responsible for subsequent photochemical modification of the stroma. The concept was assessed using a newly developed platform for precise application of near-IR femtosecond laser irradiation to the cornea in in vitro experiments. Targeted irradiation with tightly focused ultrafast laser pulses allows spatially resolved crosslinking in the interior of the porcine cornea in the absence of photosensitizers. The formation of intra- or interstromal covalent bonds in the collagen matrix locally increases lamellar density. Owing to its high resolution, the treatment is spatially resolved and can therefore be tailored either to enhance the structure of the corneal stroma or to adjust corneal curvature toward correcting refractive errors. As the induced modification is primarily driven by nonlinear absorption, the treatment is essentially wavelength independent and, as such, potentially less harmful than the current method of choice, UVA irradiation in conjunction with riboflavin. The potential applicability of a near-IR femtosecond laser for biomechanical stabilization of the cornea and non-invasive refractive corrections is discussed.

  8. Landsat D Thematic Mapper image dimensionality reduction and geometric correction accuracy

    NASA Technical Reports Server (NTRS)

    Ford, G. E.

    1986-01-01

    To characterize and quantify the performance of the Landsat thematic mapper (TM), techniques for dimensionality reduction by linear transformation have been studied and evaluated and the accuracy of the correction of geometric errors in TM images analyzed. Theoretical evaluations and comparisons for existing methods for the design of linear transformation for dimensionality reduction are presented. These methods include the discrete Karhunen-Loève (KL) expansion, Multiple Discriminant Analysis (MDA), Thematic Mapper (TM)-Tasseled Cap Linear Transformation and Singular Value Decomposition (SVD). A unified approach to these design problems is presented in which each method involves optimizing an objective function with respect to the linear transformation matrix. From these studies, four modified methods are proposed. They are referred to as the Space Variant Linear Transformation, the KL Transform-MDA hybrid method, and the First and Second Version of the Weighted MDA method. The modifications involve the assignment of weights to classes to achieve improvements in the class conditional probability of error for classes with high weights. Experimental evaluations of the existing and proposed methods have been performed using the six reflective bands of the TM data. It is shown that in terms of probability of classification error and the percentage of the cumulative eigenvalues, the six reflective bands of the TM data require only a three dimensional feature space. It is shown experimentally as well that for the proposed methods, the classes with high weights have improvements in class conditional probability of error estimates as expected.
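
    The discrete KL expansion evaluated here amounts to projecting the n-band pixel vectors onto the leading eigenvectors of the band covariance matrix. A minimal sketch (synthetic data in the test, illustrative function name):

```python
import numpy as np

def kl_transform(X, k):
    """Discrete Karhunen-Loeve transform: project n-band pixel vectors onto
    the k eigenvectors of the band covariance matrix with largest eigenvalues.
    Returns the reduced features and the cumulative eigenvalue fraction."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)           # band covariance matrix
    vals, vecs = np.linalg.eigh(cov)         # ascending eigenvalues
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    explained = vals[:k].sum() / vals.sum()  # percentage of cumulative eigenvalues
    return Xc @ vecs[:, :k], explained
```

    On six-band data whose variation lives in three dimensions, the first three eigenvalues carry essentially the whole cumulative eigenvalue fraction, mirroring the finding that the six reflective TM bands require only a three-dimensional feature space.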

  9. The attitude inversion method of geostationary satellites based on unscented particle filter

    NASA Astrophysics Data System (ADS)

    Du, Xiaoping; Wang, Yang; Hu, Heng; Gou, Ruixin; Liu, Hao

    2018-04-01

    The attitude information of geostationary satellites is difficult to obtain, since in space object surveillance they appear as non-resolved images on ground-based observation equipment. In this paper, an attitude inversion method for geostationary satellites based on the Unscented Particle Filter (UPF) and ground photometric data is presented. The UPF-based inversion algorithm is proposed to address the strongly nonlinear character of inverting photometric data for satellite attitude, and it combines the advantages of the Unscented Kalman Filter (UKF) and the Particle Filter (PF). The update method improves particle selection by using the UKF to redesign the importance density function. Moreover, it uses the RMS-UKF to partially correct the prediction covariance matrix, which improves the applicability of UKF-based attitude inversion and mitigates the particle degradation and dilution of PF-based attitude inversion. The paper describes the main principles and steps of the algorithm in detail; the correctness, accuracy, stability, and applicability of the method are verified by simulation and scaling experiments. The results show that the proposed method effectively solves the problem of particle degradation and depletion in PF-based attitude inversion, as well as the unsuitability of the UKF for strongly nonlinear attitude inversion. Moreover, the inversion accuracy is clearly superior to that of the UKF and PF; even in inversions with a large attitude error, the method can invert the attitude with few particles and high precision.
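
    The UKF ingredient of the UPF can be sketched via the unscented transform: deterministic sigma points are propagated through the nonlinearity and reweighted to recover a mean and covariance. This is a generic textbook sketch with illustrative parameter choices, not the paper's RMS-UKF variant.

```python
import numpy as np

def unscented_points(mean, cov, alpha=1.0, beta=2.0, kappa=1.0):
    """Sigma points and weights of the unscented transform."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)   # matrix square root; use columns
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha ** 2 + beta)
    return np.array(pts), wm, wc

def unscented_transform(f, mean, cov):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f."""
    pts, wm, wc = unscented_points(mean, cov)
    Y = np.array([f(p) for p in pts])
    m = wm @ Y
    d = Y - m
    P = (wc[:, None] * d).T @ d
    return m, P
```

    In a UPF, this transform drives the proposal (importance density) for each particle; the transform is exact for linear dynamics and second-order accurate for nonlinear ones.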

  10. Practical implementation of tetrahedral mesh reconstruction in emission tomography

    PubMed Central

    Boutchko, R.; Sitek, A.; Gullberg, G. T.

    2014-01-01

    This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel beam geometry. Tetrahedral mesh built on a point cloud is a convenient image representation method, intrinsically three-dimensional and with a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For the given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two stage process. First, a noisy image is reconstructed on a finely-spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination. Nodes are removed in constant intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames. This system matrix is then used to reconstruct the individual time frame projections. Performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study. 
Tetrahedral mesh-based images are compared to the standard voxel-based reconstruction for both high and low signal-to-noise ratio projection datasets. The results demonstrate that the reconstructed images represented as tetrahedral meshes based on point clouds offer image quality comparable to that achievable using a standard voxel grid while allowing substantial reduction in the number of unknown intensities to be reconstructed and reducing the noise. PMID:23588373
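
    The ML-EM iteration used for the node intensities has the standard multiplicative form; below is a minimal dense-matrix sketch. The paper's system matrix is computed analytically for the tetrahedral mesh (with attenuation and detector response folded in), which is not reproduced here.

```python
import numpy as np

def mlem(A, proj, n_iter=50):
    """ML-EM reconstruction: A is the system matrix mapping node/voxel
    intensities to projection bins, proj the measured counts."""
    x = np.ones(A.shape[1])              # uniform nonnegative initial image
    sens = A.sum(axis=0)                 # sensitivity image (column sums)
    for _ in range(n_iter):
        fwd = A @ x                      # forward projection
        ratio = np.where(fwd > 0, proj / fwd, 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

    The multiplicative update preserves nonnegativity and, for consistent data, drives the forward projection toward the measured projections; on the mesh, x holds node intensities rather than voxel values.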

  11. Practical implementation of tetrahedral mesh reconstruction in emission tomography

    NASA Astrophysics Data System (ADS)

    Boutchko, R.; Sitek, A.; Gullberg, G. T.

    2013-05-01

    This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel beam geometry. Tetrahedral mesh built on a point cloud is a convenient image representation method, intrinsically three-dimensional and with a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For the given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two stage process. First, a noisy image is reconstructed on a finely-spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination. Nodes are removed in constant intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames. This system matrix is then used to reconstruct the individual time frame projections. Performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study. 
Tetrahedral mesh-based images are compared to the standard voxel-based reconstruction for both high and low signal-to-noise ratio projection datasets. The results demonstrate that the reconstructed images represented as tetrahedral meshes based on point clouds offer image quality comparable to that achievable using a standard voxel grid while allowing substantial reduction in the number of unknown intensities to be reconstructed and reducing the noise.

  12. The contribution of lot-to-lot variation to the measurement uncertainty of an LC-MS-based multi-mycotoxin assay.

    PubMed

    Stadler, David; Sulyok, Michael; Schuhmacher, Rainer; Berthiller, Franz; Krska, Rudolf

    2018-05-01

    Multi-mycotoxin determination by LC-MS is commonly based on external solvent-based or matrix-matched calibration and, if necessary, correction for the method bias. In everyday practice, the method bias (expressed as the apparent recovery R_A), which may be caused by losses during the recovery process and/or signal suppression/enhancement, is evaluated by replicate analysis of a single spiked lot of a matrix. However, R_A may vary between different lots of the same matrix, i.e., lot-to-lot variation, which can result in a higher relative expanded measurement uncertainty (U_r). We applied a straightforward procedure for the calculation of U_r from the within-laboratory reproducibility, also called intermediate precision, and the uncertainty of R_A (u_r,RA). To estimate the contribution of the lot-to-lot variation to U_r, the measurement results of one replicate of seven different lots of figs and maize and of seven replicates of a single lot of these matrices, respectively, were used to calculate U_r. The lot-to-lot variation contributed to u_r,RA, and thus to U_r, for the majority of the 66 evaluated analytes in both figs and maize. The major contributions of the lot-to-lot variation to u_r,RA were differences in analyte recovery in figs and relative matrix effects in maize. U_r estimated from long-term participation in proficiency test schemes was 58%. Provided proper validation, a fit-for-purpose U_r of 50% was proposed for measurement results obtained by an LC-MS-based multi-mycotoxin assay, independent of the concentration of the analytes.
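
    The combination step can be sketched as a top-down uncertainty budget: the relative intermediate precision and the relative uncertainty of the apparent recovery are combined in quadrature and expanded with a coverage factor k = 2. This is a simplified reading of such a procedure, with invented numbers in the test; it is not the paper's exact calculation.

```python
import math
import statistics

def expanded_uncertainty(replicates_single_lot, recoveries_across_lots, k=2):
    """Relative expanded uncertainty U_r combining intermediate precision
    (replicates of one spiked lot) with the uncertainty of the apparent
    recovery R_A estimated from several lots (lot-to-lot variation).
    All inputs are on the measurement scale; the result is relative."""
    mean_rep = statistics.mean(replicates_single_lot)
    u_ip = statistics.stdev(replicates_single_lot) / mean_rep   # rel. intermediate precision
    mean_ra = statistics.mean(recoveries_across_lots)
    u_ra = statistics.stdev(recoveries_across_lots) / mean_ra   # rel. uncertainty of R_A
    return k * math.sqrt(u_ip ** 2 + u_ra ** 2)                 # U_r = k * u_c
```

    With this structure, a large spread of recoveries across lots inflates u_r,RA and hence U_r, which is exactly the lot-to-lot contribution the study quantifies.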

  13. Optimal matrix rigidity for stress fiber polarization in stem cells

    PubMed Central

    Rehfeldt, F.; Brown, A. E. X.; Discher, D. E.; Safran, S. A.

    2010-01-01

    The shape and differentiation of human mesenchymal stem cells are especially sensitive to the rigidity of their environment; the physical mechanisms involved are unknown. A theoretical model and experiments demonstrate here that the polarization/alignment of stress-fibers within stem cells is a non-monotonic function of matrix rigidity. We treat the cell as an active elastic inclusion in a surrounding matrix whose polarizability, unlike dead matter, depends on the feedback of cellular forces that develop in response to matrix stresses. The theory correctly predicts the monotonic increase of the cellular forces with the matrix rigidity and the alignment of stress-fibers parallel to the long axis of cells. We show that the anisotropy of this alignment depends non-monotonically on matrix rigidity and demonstrate it experimentally by quantifying the orientational distribution of stress-fibers in stem cells. These findings offer a first physical insight into the dependence of stem cell differentiation on tissue elasticity. PMID:20563235

  14. Hydraulic tomography of discrete networks of conduits and fractures in a karstic aquifer by using a deterministic inversion algorithm

    NASA Astrophysics Data System (ADS)

    Fischer, P.; Jardani, A.; Lecoq, N.

    2018-02-01

    In this paper, we present a novel inverse modeling method called Discrete Network Deterministic Inversion (DNDI) for mapping the geometry and properties of the discrete network of conduits and fractures in karstified aquifers. The DNDI algorithm is based on a coupled discrete-continuum concept to simulate water flows numerically in a model, and on a deterministic optimization algorithm to invert a set of piezometric data recorded during multiple pumping tests. In this method, the model is partitioned into subspaces piloted by a set of parameters (matrix transmissivity, and geometry and equivalent transmissivity of the conduits) that are considered unknown. In this way, the deterministic optimization process can iteratively correct the geometry of the network and the values of the properties until it converges to a global network geometry in a solution model able to reproduce the set of data. An uncertainty analysis of this result can be performed from the maps of posterior uncertainties on the network geometry or on the property values. The method has been successfully tested on three theoretical, simplified study cases with hydraulic response data generated from hypothetical karstic models of increasing complexity in network geometry and matrix heterogeneity.
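
    The deterministic optimization loop at the core of such an inversion can be sketched as a Gauss-Newton iteration on the misfit between observed and simulated heads. This is a generic sketch with a finite-difference Jacobian, not the DNDI implementation (which additionally updates the network geometry):

```python
import numpy as np

def gauss_newton(residual, p0, n_iter=50, h=1e-6):
    """Deterministic least-squares inversion: iteratively correct the
    parameter vector p so that residual(p) (observed minus simulated
    piezometric heads) approaches zero. Jacobian by finite differences."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = np.empty((len(r), len(p)))
        for j in range(len(p)):          # one forward solve per parameter
            dp = p.copy()
            dp[j] += h
            J[:, j] = (residual(dp) - r) / h
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + step
        if np.linalg.norm(step) < 1e-12:
            break
    return p
```

    In a real application, residual(p) would wrap the coupled discrete-continuum flow simulator, and regularization or trust-region control would stabilize the steps.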

  15. A proposed standard method for polarimetric calibration and calibration verification

    NASA Astrophysics Data System (ADS)

    Persons, Christopher M.; Jones, Michael W.; Farlow, Craig A.; Morell, L. Denise; Gulley, Michael G.; Spradley, Kevin D.

    2007-09-01

    Accurate calibration of polarimetric sensors is critical to reducing and analyzing phenomenology data, producing uniform polarimetric imagery for deployable sensors, and ensuring predictable performance of polarimetric algorithms. It is desirable to develop a standard calibration method, including verification reporting, in order to increase credibility with customers and foster communication and understanding within the polarimetric community. This paper seeks to facilitate discussions within the community on arriving at such standards. Both the calibration and verification methods presented here are performed easily with common polarimetric equipment, and are applicable to visible and infrared systems with either partial Stokes or full Stokes sensitivity. The calibration procedure has been used on infrared and visible polarimetric imagers over a six year period, and resulting imagery has been presented previously at conferences and workshops. The proposed calibration method involves the familiar calculation of the polarimetric data reduction matrix by measuring the polarimeter's response to a set of input Stokes vectors. With this method, however, linear combinations of Stokes vectors are used to generate highly accurate input states. This allows the direct measurement of all system effects, in contrast with fitting modeled calibration parameters to measured data. This direct measurement of the data reduction matrix allows higher order effects that are difficult to model to be discovered and corrected for in calibration. This paper begins with a detailed tutorial on the proposed calibration and verification reporting methods. Example results are then presented for a LWIR rotating half-wave retarder polarimeter.
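
    The central calculation, estimating the polarimeter's measurement matrix from its responses to known input Stokes vectors and inverting it into a data reduction matrix, can be sketched as two pseudoinverses. This is an idealized noise-free example; the analyzer states in the test are illustrative, not the paper's LWIR instrument.

```python
import numpy as np

def calibrate_drm(S_in, M_meas):
    """Least-squares polarimetric calibration.
    S_in  : (4, k) known input Stokes vectors as columns, k >= 4, rank 4
    M_meas: (m, k) measured intensities for each input state
    Returns the data reduction matrix D (4, m) with s ~= D @ measurements."""
    W = M_meas @ np.linalg.pinv(S_in)   # estimated measurement matrix (m, 4)
    return np.linalg.pinv(W)            # data reduction matrix
```

    Because W is estimated directly from measured responses, any higher-order system effect that acts linearly on the Stokes vector is absorbed into it, which is the advantage over fitting a modeled parameterization.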

  16. Direct identification of bacteria causing urinary tract infections by combining matrix-assisted laser desorption ionization-time of flight mass spectrometry with UF-1000i urine flow cytometry.

    PubMed

    Wang, X-H; Zhang, G; Fan, Y-Y; Yang, X; Sui, W-J; Lu, X-X

    2013-03-01

    Rapid identification of bacterial pathogens from clinical specimens is essential to establish an adequate empirical antibiotic therapy to treat urinary tract infections (UTIs). We used matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) combined with UF-1000i urine flow cytometry of urine specimens to quickly and accurately identify bacteria causing UTIs. We divided each urine sample into three aliquots for conventional identification, UF-1000i, and MALDI-TOF MS, respectively. We compared the results of the conventional method with those of MALDI-TOF MS combined with UF-1000i, and discrepancies were resolved by 16S rRNA gene sequencing. We analyzed 1456 urine samples from patients with UTI symptoms, and 932 (64.0%) were negative using each of the three testing methods. The combined method used UF-1000i to eliminate negative specimens and then MALDI-TOF MS to identify the remaining positive samples. The combined method was consistent with the conventional method in 1373 of 1456 cases (94.3%), and gave the correct result in 1381 of 1456 cases (94.8%). Therefore, the combined method described here can directly provide a rapid, accurate, definitive bacterial identification for the vast majority of urine samples, though the MALDI-TOF MS software analysis capabilities should be improved with regard to mixed bacterial infections. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Accelerating scientific computations with mixed precision algorithms

    NASA Astrophysics Data System (ADS)

    Baboulin, Marc; Buttari, Alfredo; Dongarra, Jack; Kurzak, Jakub; Langou, Julie; Langou, Julien; Luszczek, Piotr; Tomov, Stanimire

    2009-12-01

    On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGA), Graphical Processing Units (GPU), and the STI Cell BE processor. Results on modern processor architectures and the STI Cell BE are presented.
    Program summary
    Program title: ITER-REF
    Catalogue identifier: AECO_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECO_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 7211
    No. of bytes in distributed program, including test data, etc.: 41 862
    Distribution format: tar.gz
    Programming language: FORTRAN 77
    Computer: desktop, server
    Operating system: Unix/Linux
    RAM: 512 Mbytes
    Classification: 4.8
    External routines: BLAS (optional)
    Nature of problem: On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution.
    Solution method: Mixed precision algorithms stem from the observation that, in many cases, a single precision solution of a problem can be refined to the point where double precision accuracy is achieved.
    A common approach to the solution of linear systems, either dense or sparse, is to perform the LU factorization of the coefficient matrix using Gaussian elimination. First, the coefficient matrix A is factored into the product of a lower triangular matrix L and an upper triangular matrix U. Partial row pivoting is generally used to improve numerical stability, resulting in a factorization PA=LU, where P is a permutation matrix. The solution for the system is achieved by first solving Ly=Pb (forward substitution) and then solving Ux=y (backward substitution). Due to round-off errors, the computed solution x carries a numerical error magnified by the condition number of the coefficient matrix A. In order to improve the computed solution, an iterative process can be applied which produces a correction to the computed solution at each iteration; this is commonly known as the iterative refinement algorithm. Provided that the system is not too ill-conditioned, the algorithm produces a solution correct to the working precision. Running time: seconds/minutes
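
    The mixed precision iterative refinement loop described above can be sketched as follows: the solve is carried out in single precision, while residuals and corrections are accumulated in double precision. (A real implementation such as ITER-REF reuses the single-precision LU factors across refinement steps; for brevity this sketch refactors on each solve.)

```python
import numpy as np

def mixed_precision_solve(A, b, n_refine=5):
    """Solve Ax=b with a float32 solver, refining the result in float64."""
    A32 = A.astype(np.float32)
    # initial solution from the cheap single-precision solve
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(n_refine):
        r = b - A @ x                                   # residual in double precision
        d = np.linalg.solve(A32, r.astype(np.float32))  # correction in single precision
        x += d.astype(np.float64)
    return x
```

    For well-conditioned systems, a few refinement steps recover full double-precision accuracy while the dominant O(n^3) work runs at single-precision speed.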

  18. Improved Convergence and Robustness of USM3D Solutions on Mixed-Element Grids

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.

    2016-01-01

    Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Method, has been developed and implemented. The Hierarchical Adaptive Nonlinear Iteration Method provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier-Stokes equations and a nonlinear control of the solution update. Two variants of the Hierarchical Adaptive Nonlinear Iteration Method are assessed on four benchmark cases, namely, a zero-pressure-gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the preconditioner-alone method representing the baseline solver technology.
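
    The defect-correction idea underlying such schemes, driving the update with a residual from the exact operator while inverting only a simple approximate operator, can be sketched in linear-algebra form. This is a generic sketch, not the USM3D implementation; here the approximate operator P plays the role of the simple preconditioner.

```python
import numpy as np

def defect_correction(A, P, b, n_iter=100, tol=1e-12):
    """Defect-correction iteration: the defect (residual) is evaluated with
    the exact operator A, but the correction comes from the cheaper
    approximate operator P. Converges when rho(I - P^-1 A) < 1."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(n_iter):
        d = b - A @ x                  # defect with the exact operator
        if np.linalg.norm(d) <= tol * np.linalg.norm(b):
            break
        x += np.linalg.solve(P, d)     # correction from the approximate operator
    return x
```

    With P taken as the diagonal of A, this reduces to the Jacobi iteration; the paper's hierarchies wrap an analogous loop with a matrix-free linear solver for the exact linearization and nonlinear control of the update.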

  19. Improved Convergence and Robustness of USM3D Solutions on Mixed-Element Grids

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.

    2016-01-01

    Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Method, has been developed and implemented. The Hierarchical Adaptive Nonlinear Iteration Method provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier-Stokes equations and a nonlinear control of the solution update. Two variants of the Hierarchical Adaptive Nonlinear Iteration Method are assessed on four benchmark cases, namely, a zero-pressure-gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the preconditioner-alone method representing the baseline solver technology.

  20. [Differentiation by geometric morphometrics among 11 Anopheles (Nyssorhynchus) in Colombia].

    PubMed

    Calle, David Alonso; Quiñones, Martha Lucía; Erazo, Holmes Francisco; Jaramillo, Nicolás

    2008-09-01

    The correct identification of Anopheles species of the subgenus Nyssorhynchus is important because this subgenus includes the main malaria vectors in Colombia, and this information is necessary for focusing a malaria control program. Geometric morphometrics were used to evaluate the morphometric variation of 11 species of the subgenus Nyssorhynchus present in Colombia and to distinguish the females of each species. The specimens were obtained from series and family broods from females collected with protected human hosts as attractants. The field-collected specimens and their progeny were identified at each of the associated stages by conventional keys; for some species, wild females were used. Landmarks were selected on wings in digital pictures of 336 individuals and digitized as coordinates. The coordinate matrix was processed by generalized Procrustes analysis, which generated size and shape variables free of non-biological variation. Size and shape variables were analyzed by univariate and multivariate statistics. The subdivision of the subgenus Nyssorhynchus into sections is not correlated with wing shape. Discriminant analyses correctly classified 97% of females in the section Albimanus and 86% in the section Argyritarsis. In addition, these methodologies allowed the correct identification of three sympatric species from Putumayo which have been difficult to identify in the adult female stage. Geometric morphometrics proved to be a very useful adjunct to the taxonomy of females, and the use of this method is recommended in studies of the subgenus Nyssorhynchus in Colombia.
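
    After superimposition, shape differences between wings can be summarized by the full Procrustes distance between two landmark configurations. A minimal two-configuration sketch (illustrative landmarks in the test; the study itself uses generalized Procrustes analysis over all specimens):

```python
import numpy as np

def procrustes_distance(X, Y):
    """Full Procrustes distance between two 2-D landmark configurations
    (k x 2 arrays): remove translation and centroid size, then measure the
    residual after the optimal rotation, d = sqrt(1 - (sum of singular
    values)^2). Reflections are allowed for simplicity."""
    Xc = X - X.mean(axis=0)            # remove translation
    Yc = Y - Y.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)       # remove centroid size
    Yc = Yc / np.linalg.norm(Yc)
    s = np.linalg.svd(Yc.T @ Xc, compute_uv=False)
    return float(np.sqrt(max(0.0, 1.0 - s.sum() ** 2)))
```

    Two configurations differing only in position, scale, and orientation have distance zero; the residual shape variables feed the discriminant analyses used to classify females.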

  1. High Precision Seawater Sr/Ca Measurements in the Florida Keys by Inductively Coupled Plasma Atomic Emission Spectrometry: Analytical Method and Implications for Coral Paleothermometry

    NASA Astrophysics Data System (ADS)

    Khare, A.; Kilbourne, K. H.; Schijf, J.

    2017-12-01

    Standard methods of reconstructing past sea surface temperatures (SSTs) with coral skeletal Sr/Ca ratios assume the seawater Sr/Ca ratio is constant. However, there is little data to support this assumption, in part because analytical techniques capable of determining seawater Sr/Ca with sufficient accuracy and precision are expensive and time consuming. We demonstrate a method to measure seawater Sr/Ca using inductively coupled plasma atomic emission spectrometry where we employ an intensity ratio calibration routine that reduces the self-matrix effects of calcium and cancels out the matrix effects that are common to both calcium and strontium. A seawater standard solution cross-calibrated with multiple instruments is used to correct for long-term instrument drift and any remnant matrix effects. The resulting method produces accurate seawater Sr/Ca determinations rapidly, inexpensively, and with a precision better than 0.2%. This method will make it easier for coral paleoclimatologists to quantify potentially problematic fluctuations in seawater Sr/Ca at their study locations. We apply our method to test for variability in surface seawater Sr/Ca along the Florida Keys Reef Tract. We are collecting winter and summer samples for two years in a grid with eleven nearshore to offshore transects across the reef, as well as continuous samples collected by osmotic pumps at four locations adjacent to our grid. Our initial analysis of the grid samples indicates a trend of decreasing Sr/Ca values offshore potentially due to a decreasing groundwater influence. The values differ by as much as 0.05 mmol/mol which could lead to an error of 1°C in mean SST reconstructions. Future work involves continued sampling in the Florida Keys to test for seasonal and interannual variability in seawater Sr/Ca, as well as collecting data from small reefs in the Virgin Islands to test the stability of seawater Sr/Ca under different geologic, hydrologic and hydrographic environments.
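
    The standard-based correction described above reduces to scaling each sample's raw Sr/Ca intensity ratio by the offset of the cross-calibrated seawater standard measured in the same run, so that matrix effects and drift common to both elements cancel. A schematic sketch with invented intensities and an assumed consensus value for the standard:

```python
def drift_corrected_ratio(sample_sr, sample_ca, std_sr, std_ca, std_true_ratio):
    """Intensity-ratio calibration: scale the sample's raw Sr/Ca intensity
    ratio by the offset of a consensus seawater standard run alongside it,
    cancelling shared matrix effects and long-term instrument drift.
    All arguments except std_true_ratio are raw emission intensities."""
    raw = sample_sr / sample_ca
    std_raw = std_sr / std_ca
    return raw * (std_true_ratio / std_raw)
```

    Because any multiplicative change in channel sensitivity hits the sample and the standard identically, it divides out; only effects that differ between sample and standard survive as residual error.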

  2. Direct identification of bacteria in positive blood culture bottles by matrix-assisted laser desorption ionisation time-of-flight mass spectrometry.

    PubMed

    La Scola, Bernard; Raoult, Didier

    2009-11-25

    With long delays observed between sampling and availability of results, the usefulness of blood cultures in the context of emergency infectious diseases has recently been questioned. Among methods that allow quicker bacterial identification from growing colonies, matrix-assisted laser desorption ionisation time-of-flight (MALDI-TOF) mass spectrometry was demonstrated to accurately identify bacteria routinely isolated in a clinical biology laboratory. In order to speed up the identification process, in the present work we attempted bacterial identification directly from blood culture bottles flagged positive by the automated incubation system. We prospectively analysed routine MALDI-TOF identification of bacteria detected in blood culture by two different protocols involving successive centrifugations and then lysis by trifluoroacetic acid or formic acid. Of the 562 blood culture broths flagged positive by the automated system and containing one bacterial species, 370 (66%) were correctly identified. Changing the protocol from trifluoroacetic acid to formic acid improved identification of Staphylococci, and overall correct identification increased from 59% to 76%. Lack of identification was observed mostly with viridans streptococci, and only one false positive was observed. In the 22 positive blood culture broths that contained two or more different species, only one of the species was identified in 18 samples, no species were identified in two samples and false species identifications were obtained in two cases. The positive predictive value of bacterial identification using this procedure was 99.2%. MALDI-TOF MS is an efficient method for direct routine identification of bacterial isolates in blood culture, with the exception of polymicrobial samples and viridans streptococci. It may replace routine identification performed on colonies, provided improvement for the specificity of blood culture broths growing viridans streptococci is obtained in the near future.

  3. Matrix-assisted laser desorption/ionization-time of flight mass spectrometry identification of large colony beta-hemolytic streptococci containing Lancefield groups A, C, and G.

    PubMed

    Jensen, Christian Salgård; Dam-Nielsen, Casper; Arpi, Magnus

    2015-08-01

    The aim of this study was to investigate whether large colony beta-hemolytic streptococci containing Lancefield groups A, C, and G can be adequately identified using matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-ToF). Previous studies show varying results, with identification rates from below 50% to 100%. Large colony beta-hemolytic streptococci containing Lancefield groups A, C, and G isolated from blood cultures between January 1, 2007 and May 1, 2012 were included in the study. Isolates were identified to the species level using a combination of phenotypic characteristics and 16S rRNA sequencing. The isolates were subjected to MALDI-ToF analysis. We used a two-stage approach starting with the direct method. If no valid result was obtained, we proceeded to an extraction protocol. Scores above 2 were considered valid identification at the species level. A total of 97 Streptococcus pyogenes, 133 Streptococcus dysgalactiae, and 2 Streptococcus canis isolates were tested; 94%, 66%, and 100% of S. pyogenes, S. dysgalactiae, and S. canis, respectively, were correctly identified by MALDI-ToF. In most instances when isolates were not identified by MALDI-ToF, this was because MALDI-ToF was unable to differentiate between S. pyogenes and S. dysgalactiae. By removing two S. pyogenes reference spectra from the MALDI-ToF database, the proportion of correctly identified isolates increased to 96% overall. MALDI-ToF is a promising method for discriminating between S. dysgalactiae, S. canis, and S. equi, although more strains need to be tested to clarify this.

  4. [Quantitative surface analysis of Pt-Co, Cu-Au and Cu-Ag alloy films by XPS and AES].

    PubMed

    Li, Lian-Zhong; Zhuo, Shang-Jun; Shen, Ru-Xiang; Qian, Rong; Gao, Jie

    2013-11-01

    In order to improve the accuracy of AES quantitative analysis, we combined XPS with AES and studied a method to reduce the error of AES quantification. Pt-Co, Cu-Au and Cu-Ag binary alloy thin films were selected as samples, and XPS was used to correct the AES quantitative results by adjusting the Auger relative sensitivity factors so that the two techniques gave consistent compositions. We then verified the accuracy of AES quantification with the revised sensitivity factors on other samples of different composition ratios; the results showed that the corrected relative sensitivity factors reduce the error of AES quantitative analysis to less than 10%. In integral-spectrum AES analysis, peak definition is difficult because the choice of starting and ending points for the characteristic Auger peak area introduces great uncertainty. To simplify the analysis, we also processed the data in differential-spectrum form, performed quantitative analysis on the basis of peak-to-peak height instead of peak area, corrected the relative sensitivity factors, and again verified the accuracy on samples of different composition ratios. The analytical error of AES quantification was thereby reduced to less than 9%. These results show that the accuracy of AES quantitative analysis can be greatly improved by using XPS to correct the Auger sensitivity factors, since matrix effects are then taken into account. The good consistency obtained demonstrates the feasibility of the method.
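
    As a rough sketch of the arithmetic underlying this kind of correction, the standard relative-sensitivity-factor quantification formula C_i = (I_i/S_i) / Σ_j (I_j/S_j) can be applied with revised factors; the intensities and factor values below are hypothetical, not taken from the paper.

```python
def atomic_fractions(intensities, sensitivity_factors):
    """Relative-sensitivity-factor quantification:
    C_i = (I_i / S_i) / sum_j (I_j / S_j)."""
    ratios = [i / s for i, s in zip(intensities, sensitivity_factors)]
    total = sum(ratios)
    return [r / total for r in ratios]

# Hypothetical binary alloy: equal peak intensities, unequal sensitivities,
# so the less sensitive element dominates the computed composition.
fractions = atomic_fractions([1000.0, 1000.0], [1.0, 0.25])
print(fractions)  # [0.2, 0.8]
```

Changing a sensitivity factor shifts all computed fractions at once, which is why a single XPS-based correction can pull the whole AES composition into agreement.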

  5. Dependent scattering and absorption by densely packed discrete spherical particles: Effects of complex refractive index

    NASA Astrophysics Data System (ADS)

    Ma, L. X.; Tan, J. Y.; Zhao, J. M.; Wang, F. Q.; Wang, C. A.; Wang, Y. Y.

    2017-07-01

    Due to dependent scattering and absorption effects, the radiative transfer equation (RTE) may not be suitable for dealing with radiative transfer in dense discrete random media. This paper continues previous research on multiple and dependent scattering in densely packed discrete particle systems, and puts emphasis on the effects of the particle complex refractive index. The Mueller matrix elements of the scattering system with different complex refractive indices are obtained by both an electromagnetic method and a radiative transfer method. The Maxwell equations are directly solved based on the superposition T-matrix method, while the RTE is solved by the Monte Carlo method combined with the hard sphere model in the Percus-Yevick approximation (HSPYA) to account for dependent scattering effects. The results show that for densely packed discrete random media composed of particles of medium size parameter (equal to 6.964 in this study), the demarcation line between independent and dependent scattering is closely connected with the particle complex refractive index. As the particle volume fraction increases beyond a certain value, densely packed discrete particles with higher refractive index contrast between the particles and the host medium and higher particle absorption indices are more likely to show stronger dependent characteristics. Due to the failure of the extended Rayleigh-Debye scattering condition, the HSPYA has little effect on the dependent scattering correction at large phase shift parameters.

  6. Statistical classification techniques for engineering and climatic data samples

    NASA Technical Reports Server (NTRS)

    Temple, E. C.; Shipman, J. R.

    1981-01-01

    Fisher's sample linear discriminant function is modified through an appropriate alteration of the common sample variance-covariance matrix. The alteration consists of adding nonnegative values to the eigenvalues of the sample variance-covariance matrix. The desired result of this modification is to increase the number of correct classifications by the new linear discriminant function over Fisher's function. This study is limited to the two-group discriminant problem.
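
    The covariance alteration can be sketched in a few lines of NumPy. The study allows a different nonnegative value per eigenvalue; a single constant is used here for illustration, in which case the alteration reduces to adding that constant times the identity to the covariance matrix.

```python
import numpy as np

def shrunken_lda_weights(S_pooled, mean1, mean2, ridge):
    """Fisher discriminant weights after altering the pooled covariance:
    a nonnegative constant is added to each of its eigenvalues."""
    vals, vecs = np.linalg.eigh(S_pooled)          # eigendecomposition of S
    S_mod = vecs @ np.diag(vals + ridge) @ vecs.T  # inflate every eigenvalue
    return np.linalg.solve(S_mod, mean1 - mean2)   # w = S_mod^{-1} (m1 - m2)
```

A sample is then assigned to group 1 when w·(x - (m1+m2)/2) > 0, exactly as with Fisher's unmodified function.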

  7. Discriminating Majorana neutrino textures in light of the baryon asymmetry

    NASA Astrophysics Data System (ADS)

    Borah, Manikanta; Borah, Debasish; Das, Mrinal Kumar

    2015-06-01

    We study all possible texture zeros in the Majorana neutrino mass matrix that are allowed by neutrino oscillation as well as cosmology data when the charged lepton mass matrix is assumed to take the diagonal form. In the case of one-zero textures, we write down the Majorana phases, which are assumed to be equal, and the lightest neutrino mass as functions of the Dirac CP phase. In the case of two-zero textures, we numerically evaluate all three CP phases and the lightest neutrino mass by solving four real constraint equations. We then constrain texture zero mass matrices from the requirement of producing the correct baryon asymmetry through the mechanism of leptogenesis, assuming the Dirac neutrino mass matrix to be diagonal. Adopting a type I seesaw framework, we consider the CP-violating out-of-equilibrium decay of the lightest right-handed neutrino as the source of lepton asymmetry. Apart from discriminating between the texture zero mass matrices and light neutrino mass hierarchies, we also constrain the Dirac and Majorana CP phases so that the observed baryon asymmetry can be produced. For two-zero textures, we further constrain the diagonal form of the Dirac neutrino mass matrix from the requirement of producing the correct baryon asymmetry.

  8. SU-F-I-59: Quality Assurance Phantom for PET/CT Alignment and Attenuation Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, T; Hamacher, K

    2016-06-15

    Purpose: This study utilizes a commercial PET/CT phantom to investigate two specific properties of a PET/CT system: the alignment accuracy of PET images with those from CT used for attenuation correction, and the accuracy of this correction in PET images. Methods: A commercial PET/CT phantom consisting of three aluminum rods, two long central cylinders containing uniform activity, and attenuating materials such as air, water, bone and iodine contrast was scanned using a standard PET/CT protocol. Images reconstructed with 2 mm slice thickness and a 512 by 512 matrix were obtained. The center of each aluminum rod in the PET and CT images was compared to evaluate alignment accuracy. ROIs were drawn on transaxial images of the central rods at each section of attenuating material to determine the corrected activity (in BQML). BQML values were graphed as a function of slice number to provide a visual representation of the attenuation correction throughout the whole phantom. Results: Alignment accuracy is high between the PET and CT images. The maximum deviation between the two in the axial plane is less than 1.5 mm, which is less than the width of a single pixel. BQML values measured along different sections of the large central rods are similar among the different attenuating materials except iodine contrast. Deviation of BQML values in the air and bone sections from the water section is less than 1%. Conclusion: Accurate alignment of PET and CT images is critical to ensure proper calculation and application of CT-based attenuation correction. This study presents a simple and quick method to evaluate the two with a single acquisition. As the phantom also includes spheres of increasing diameter, it could serve as a straightforward means to annually evaluate the status of a modern PET/CT system.

  9. Nuclear physics from Lattice QCD

    NASA Astrophysics Data System (ADS)

    Shanahan, Phiala

    2017-09-01

    I will discuss the current state and future scope of numerical Lattice Quantum Chromodynamics (LQCD) calculations of nuclear matrix elements. The goal of the program is to provide direct QCD calculations of nuclear observables relevant to experimental programs, including double-beta decay matrix elements, nuclear corrections to axial matrix elements relevant to long-baseline neutrino experiments and nuclear sigma terms needed for theory predictions of dark matter cross-sections at underground detectors. I will discuss the progress and challenges on these fronts, and also address recent work constraining a gluonic analogue of the EMC effect, which will be measurable at a future electron-ion collider.

  10. Change in the frequency and intensity of the spectral lines of a hydrogen-like atom in the field of a point charge

    NASA Astrophysics Data System (ADS)

    Ovsyannikov, V. D.; Kamenskii, A. A.

    2002-03-01

    The changes in the wave functions and the energies of a hydrogen-like atom in the static field of a structureless charged particle are calculated in the asymptotic approximation. The corrections to the energies of states, as well as to the dipole matrix elements of radiative transitions caused by the long-range interaction of the atom with the point charge, are calculated using perturbation theory and the Sturm series for the reduced Coulomb Green's function in parabolic coordinates. Analytical expressions are derived, and tables of numerical values of the coefficients of the asymptotic series that determine the corrections to the matrix elements and the intensities of transitions of the Lyman and Balmer series are presented.

  11. High precision computing with charge domain devices and a pseudo-spectral method therefor

    NASA Technical Reports Server (NTRS)

    Barhen, Jacob (Inventor); Toomarian, Nikzad (Inventor); Fijany, Amir (Inventor); Zak, Michail (Inventor)

    1997-01-01

    The present invention enhances the bit resolution of a CCD/CID MVM processor by storing each bit of each matrix element as a separate CCD charge packet. The bits of each input vector are separately multiplied by each bit of each matrix element in massive parallelism and the resulting products are combined appropriately to synthesize the correct product. In another aspect of the invention, such arrays are employed in a pseudo-spectral method of the invention, in which partial differential equations are solved by expressing each derivative analytically as matrices, and the state function is updated at each computation cycle by multiplying it by the matrices. The matrices are treated as synaptic arrays of a neural network and the state function vector elements are treated as neurons. In a further aspect of the invention, moving target detection is performed by driving the soliton equation with a vector of detector outputs. The neural architecture consists of two synaptic arrays corresponding to the two differential terms of the soliton-equation and an adder connected to the output thereof and to the output of the detector array to drive the soliton equation.
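
    The bit-decomposed multiply-and-recombine idea can be illustrated in software: each bit plane of the matrix is multiplied by each bit of the input vector, and the partial products are weighted by powers of two before summation. This is only a numerical sketch of the recombination arithmetic for nonnegative integers, not a model of the CCD/CID hardware itself.

```python
import numpy as np

def bitplane_matvec(A, x, nbits=8):
    """Matrix-vector product synthesized from one-bit partial products,
    mimicking per-bit charge-packet arithmetic (nonnegative integers)."""
    y = np.zeros(A.shape[0], dtype=np.int64)
    for i in range(nbits):                        # bits of the matrix elements
        A_bit = (A >> i) & 1
        for j in range(nbits):                    # bits of the input vector
            x_bit = (x >> j) & 1
            y += (A_bit @ x_bit) << (i + j)       # weight the partial product
    return y
```

Because every partial product is a sum of 0/1 terms, each one fits in low dynamic range, which is the property the charge-domain processor exploits.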

  12. Quasi-stationary states of an electron with linearly dependent effective mass in an open nanostructure within transmission coefficient and S-matrix methods

    NASA Astrophysics Data System (ADS)

    Seti, Julia; Tkach, Mykola; Voitsekhivska, Oxana

    2018-03-01

    The exact solutions of the Schrödinger equation for a double-barrier open semiconductor plane nanostructure are obtained by using two different approaches, within the model of the rectangular potential profile and the continuous position-dependent effective mass of the electron. The transmission coefficient and scattering matrix are calculated for the double-barrier nanostructure. The resonance energies and resonance widths of the electron quasi-stationary states are analyzed as a function of the size of the near-interface region between wells and barriers, where the effective mass linearly depends on the coordinate. It is established that, in both methods, the increasing size affects in a qualitatively similar way the spectral characteristics of the states, shifting the resonance energies into the low- or high-energy region and increasing the resonance widths. It is shown that the relative difference of resonance energies and widths of a certain state, obtained in the model of position-dependent effective mass and in the widespread abrupt model in physically correct range of near-interface sizes, does not exceed 0.5% and 5%, respectively, independently of the other geometrical characteristics of the structure.

  13. Study of the retardance of a birefringent waveplate at tilt incidence by Mueller matrix ellipsometer

    NASA Astrophysics Data System (ADS)

    Gu, Honggang; Chen, Xiuguo; Zhang, Chuanwei; Jiang, Hao; Liu, Shiyuan

    2018-01-01

    Birefringent waveplates are indispensable optical elements for polarization state modification in various optical systems. The retardance of a birefringent waveplate will change significantly when the incident angle of the light varies. Therefore, it is of great importance to study such field-of-view errors on the polarization properties, especially the retardance of a birefringent waveplate, for the performance improvement of the system. In this paper, we propose a generalized retardance formula at arbitrary incidence and azimuth for a general plane-parallel composite waveplate consisting of multiple aligned single waveplates. An efficient method and corresponding experimental set-up have been developed to characterize the retardance versus the field-of-view angle based on a constructed spectroscopic Mueller matrix ellipsometer. Both simulations and experiments on an MgF2 biplate over an incident angle of 0°-8° and an azimuthal angle of 0°-360° are presented as an example, and the dominant experimental errors are discussed and corrected. The experimental results strongly agree with the simulations with a maximum difference of 0.15° over the entire field of view, which indicates the validity and great potential of the presented method for birefringent waveplate characterization at tilt incidence.

  14. Corrigendum to "Pharmaceutical analysis in solids using front face fluorescence spectroscopy and multivariate calibration with matrix correction by piecewise direct standardization" [Spectrochim. Acta Part A: Mol. Biomol. Spectrosc. 103 (2013) 311-318]

    NASA Astrophysics Data System (ADS)

    Alves, Julio Cesar L.; Poppi, Ronei J.

    2014-03-01

    The authors regret that the tick labels of the ternary diagram axes in Fig. 1 were shown as 0% to 1.0% instead of 0% to 100%. The correct values of 0% to 100% are shown in the corrected Fig. 1 (see below). The contents of the active ingredients in the sample sets shown in the diagram are now in agreement with those stated throughout the paper.

  15. Refining mortality estimates in shark demographic analyses: a Bayesian inverse matrix approach.

    PubMed

    Smart, Jonathan J; Punt, André E; White, William T; Simpfendorfer, Colin A

    2018-01-18

    Leslie matrix models are an important analysis tool in conservation biology and are applied to a diversity of taxa. The standard approach estimates the finite rate of population growth (λ) from a set of vital rates. In some instances an estimate of λ is available but the vital rates are poorly understood; these can then be solved for using an inverse matrix approach. However, such approaches are rarely attempted because they require information on the structure of age or stage classes. This study addressed this issue by using a combination of Monte Carlo simulations and the sample-importance-resampling (SIR) algorithm to solve the inverse matrix problem without data on population structure. This approach was applied to the grey reef shark (Carcharhinus amblyrhynchos) from the Great Barrier Reef (GBR) in Australia to determine the demography of this population. Additionally, these outputs were applied to another heavily fished population from Papua New Guinea (PNG) that requires estimates of λ for fisheries management. The SIR analysis determined that natural mortality (M) and total mortality (Z) based on indirect methods have previously been overestimated for C. amblyrhynchos, leading to an underestimated λ. The updated Z distributions determined using SIR provided λ estimates that matched an empirical λ for the GBR population and corrected obvious error in the demographic parameters for the PNG population. This approach provides an opportunity for the inverse matrix approach to be applied more broadly to situations where information on population structure is lacking. © 2018 by the Ecological Society of America.
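
    A minimal sketch of the SIR idea, assuming a toy three-age-class Leslie matrix, a flat prior on a single survival rate, and Gaussian importance weights around the observed λ; the paper's demographic model is far richer, and every number below is illustrative.

```python
import numpy as np

def leslie_lambda(fecundity, survival):
    """Finite rate of increase: dominant eigenvalue of a Leslie matrix."""
    n = len(fecundity)
    L = np.zeros((n, n))
    L[0, :] = fecundity                                 # top row: fecundities
    L[np.arange(1, n), np.arange(n - 1)] = survival     # subdiagonal: survival
    return np.max(np.real(np.linalg.eigvals(L)))        # Perron root

def sir_survival(lam_obs, sd, fecundity, n_draws=5000, seed=1):
    """Sample-importance-resample a flat prior on adult survival so the
    implied lambda matches the observed one (illustrative only)."""
    rng = np.random.default_rng(seed)
    draws = rng.uniform(0.3, 0.99, n_draws)             # prior on survival
    lams = np.array([leslie_lambda(fecundity, [s, s]) for s in draws])
    w = np.exp(-0.5 * ((lams - lam_obs) / sd) ** 2)     # importance weights
    w /= w.sum()
    return rng.choice(draws, size=n_draws, p=w)         # resampling step
```

The resampled draws approximate the posterior of the unknown vital rate given only λ, which is the essence of solving the inverse matrix problem without population-structure data.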

  16. Construction of self-dual codes in the Rosenbloom-Tsfasman metric

    NASA Astrophysics Data System (ADS)

    Krisnawati, Vira Hari; Nisa, Anzi Lina Ukhtin

    2017-12-01

    Linear codes are among the most basic and useful objects in coding theory. Generally, a linear code is a code over a finite field in the Hamming metric. Among the most interesting families of codes, the family of self-dual codes is particularly important, because it contains some of the best-known error-correcting codes. The Hamming metric generalizes to the Rosenbloom-Tsfasman metric (RT-metric). The inner product in the RT-metric differs from the Euclidean inner product used to define duality in the Hamming metric, and most codes that are self-dual in the Hamming metric are not self-dual in the RT-metric. The generator matrix is central to constructing a code because it contains a basis of the code. Therefore, in this paper we give some theorems and methods to construct self-dual codes in the RT-metric by considering properties of the inner product and the generator matrix. We also illustrate examples for each kind of construction.
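
    For the baseline Hamming-metric case, self-duality under the Euclidean inner product is easy to test from a generator matrix: the code is self-dual when every pair of generator rows is orthogonal mod p and the dimension is half the length. The RT-metric pairing treated in the paper is different; this sketch shows only the standard check, with generator rows assumed linearly independent.

```python
import numpy as np

def is_self_dual_euclidean(G, p=2):
    """Check C = C-perp under the Euclidean inner product: dim = n/2 and
    G G^T = 0 (mod p). Rows of G are assumed linearly independent."""
    G = np.asarray(G) % p
    k, n = G.shape
    if 2 * k != n:
        return False
    return not np.any((G @ G.T) % p)   # every pair of rows orthogonal mod p

# The extended [8,4] Hamming code is a classic binary self-dual code.
G8 = [[1, 0, 0, 0, 0, 1, 1, 1],
      [0, 1, 0, 0, 1, 0, 1, 1],
      [0, 0, 1, 0, 1, 1, 0, 1],
      [0, 0, 0, 1, 1, 1, 1, 0]]
```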

  17. AMA- and RWE- Based Adaptive Kalman Filter for Denoising Fiber Optic Gyroscope Drift Signal

    PubMed Central

    Yang, Gongliu; Liu, Yuanyuan; Li, Ming; Song, Shunguang

    2015-01-01

    An improved double-factor adaptive Kalman filter called AMA-RWE-DFAKF is proposed to denoise the fiber optic gyroscope (FOG) drift signal in both static and dynamic conditions. The first factor is the Kalman gain, updated by random weighting estimation (RWE) of the covariance matrix of the innovation sequence at every step to ensure the lowest output noise level, although the inertia of the KF response increases in dynamic conditions. To decrease this inertia, the second factor is the covariance matrix of the predicted state vector, adjusted by RWE only when discontinuities are detected by an adaptive moving average (AMA). The AMA-RWE-DFAKF is applied to denoising FOG static and dynamic signals, and its performance is compared with the conventional KF (CKF), the RWE-based adaptive KF with gain correction (RWE-AKFG), and the AMA- and RWE-based dual mode adaptive KF (AMA-RWE-DMAKF). Results of Allan variance on the static signal and root mean square error (RMSE) on the dynamic signal show that the proposed algorithm outperforms all the considered methods in denoising the FOG signal. PMID:26512665

  18. AMA- and RWE- Based Adaptive Kalman Filter for Denoising Fiber Optic Gyroscope Drift Signal.

    PubMed

    Yang, Gongliu; Liu, Yuanyuan; Li, Ming; Song, Shunguang

    2015-10-23

    An improved double-factor adaptive Kalman filter called AMA-RWE-DFAKF is proposed to denoise the fiber optic gyroscope (FOG) drift signal in both static and dynamic conditions. The first factor is the Kalman gain, updated by random weighting estimation (RWE) of the covariance matrix of the innovation sequence at every step to ensure the lowest output noise level, although the inertia of the KF response increases in dynamic conditions. To decrease this inertia, the second factor is the covariance matrix of the predicted state vector, adjusted by RWE only when discontinuities are detected by an adaptive moving average (AMA). The AMA-RWE-DFAKF is applied to denoising FOG static and dynamic signals, and its performance is compared with the conventional KF (CKF), the RWE-based adaptive KF with gain correction (RWE-AKFG), and the AMA- and RWE-based dual mode adaptive KF (AMA-RWE-DMAKF). Results of Allan variance on the static signal and root mean square error (RMSE) on the dynamic signal show that the proposed algorithm outperforms all the considered methods in denoising the FOG signal.
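
    The innovation-based adaptation at the heart of such filters can be sketched with a much-simplified 1-D random-walk filter that re-estimates the measurement-noise covariance from a moving window of innovations. This is an illustration of the principle only, not the AMA-RWE-DFAKF algorithm; the window length and noise values are arbitrary.

```python
import numpy as np

def adaptive_kf(z, q=1e-4, r0=1.0, window=30):
    """1-D Kalman filter whose measurement-noise estimate R is refreshed
    from the sample variance of a moving window of innovations."""
    x, p, r = z[0], 1.0, r0
    innov, out = [], []
    for zk in z:
        p += q                          # predict (random-walk state model)
        nu = zk - x                     # innovation
        innov.append(nu)
        if len(innov) >= window:        # adapt R from innovation statistics
            r = max(np.var(innov[-window:]) - p, 1e-8)
        k = p / (p + r)                 # gain
        x += k * nu                     # update state
        p *= 1 - k                      # update covariance
        out.append(x)
    return np.array(out)
```

When the innovations grow (a discontinuity), the estimated R rises more slowly than p, so the gain reacts, which is the behavior the paper's AMA detector formalizes.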

  19. Determination of Cd in urine by cloud point extraction-tungsten coil atomic absorption spectrometry.

    PubMed

    Donati, George L; Pharr, Kathryn E; Calloway, Clifton P; Nóbrega, Joaquim A; Jones, Bradley T

    2008-09-15

    Cadmium concentrations in human urine are typically at or below the 1 μg L(-1) level, so only a handful of techniques may be appropriate for this application. These include sophisticated methods such as graphite furnace atomic absorption spectrometry and inductively coupled plasma mass spectrometry. While tungsten coil atomic absorption spectrometry is a simpler and less expensive technique, its practical detection limits often prohibit the detection of Cd in normal urine samples. In addition, the nature of the urine matrix often necessitates accurate background correction techniques, which would add expense and complexity to the tungsten coil instrument. This manuscript describes a cloud point extraction method that reduces matrix interference while preconcentrating Cd by a factor of 15. Ammonium pyrrolidinedithiocarbamate and Triton X-114 are used as complexing agent and surfactant, respectively, in the extraction procedure. Triton X-114 forms an extractant coacervate surfactant-rich phase that is denser than water, so the aqueous supernatant is easily removed, leaving the metal-containing surfactant layer intact. A 25 μL aliquot of this preconcentrated sample is placed directly onto the tungsten coil for analysis. The cloud point extraction procedure allows for simple background correction based either on the measurement of absorption at a nearby wavelength, or on measurement of absorption at a time in the atomization step immediately prior to the onset of the Cd signal. Seven human urine samples are analyzed by this technique and the results are compared to those found by inductively coupled plasma mass spectrometry analysis of the same samples performed at a different institution. The limit of detection for Cd in urine is 5 ng L(-1) for cloud point extraction tungsten coil atomic absorption spectrometry. The accuracy of the method is determined with a standard reference material (toxic metals in freeze-dried urine) and the determined values agree with the reported levels at the 95% confidence level.

  20. Three-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements, direct solvers and data space Gauss-Newton, parallelized on SMP computers

    NASA Astrophysics Data System (ADS)

    Kordy, M. A.; Wannamaker, P. E.; Maris, V.; Cherkaev, E.; Hill, G. J.

    2014-12-01

    We have developed an algorithm for 3D simulation and inversion of magnetotelluric (MT) responses using deformable hexahedral finite elements that permits incorporation of topography. Direct solvers parallelized on symmetric multiprocessor (SMP), single-chassis workstations with large RAM are used for the forward solution, parameter Jacobians, and model update. The forward simulator, Jacobian calculations, and synthetic and real data inversions are presented. We use first-order edge elements to represent the secondary electric field (E), yielding accuracy O(h) for E and its curl (magnetic field). For very low frequency or small material admittivity, the E-field requires divergence correction. Using Hodge decomposition, the correction may be applied after the forward solution is calculated; it allows accurate E-field solutions in dielectric air. The system matrix factorization is computed using the MUMPS library, which shows moderately good scalability through 12 processor cores but limited gains beyond that. The factored matrix is used to calculate the forward response as well as the Jacobians of the field and MT responses using the reciprocity theorem. Comparison with other codes demonstrates the accuracy of our forward calculations. We consider a popular conductive/resistive double brick structure and several topographic models. In particular, the ability of finite elements to represent smooth topographic slopes permits accurate simulation of refraction of electromagnetic waves normal to the slopes at high frequencies. Run time tests indicate that for meshes as large as 150x150x60 elements, the MT forward response and Jacobians can be calculated in ~2.5 hours per frequency. For inversion, we implemented a data-space Gauss-Newton method, which offers a reduction in memory requirements and a significant speedup of the parameter step versus the model-space approach. For dense matrix operations we use the tiling approach of the PLASMA library, which shows very good scalability. In synthetic inversions we examine the importance of including topography in the inversion and test different regularization schemes using a weighted second norm of the model gradient, as well as inverting for a static distortion matrix following the Miensopust/Avdeeva approach. We also apply our algorithm to invert MT data collected at Mt St Helens.

  1. PET/CT detectability and classification of simulated pulmonary lesions using an SUV correction scheme

    NASA Astrophysics Data System (ADS)

    Morrow, Andrew N.; Matthews, Kenneth L., II; Bujenovic, Steven

    2008-03-01

    Positron emission tomography (PET) and computed tomography (CT) together are a powerful diagnostic tool, but imperfect image quality allows false positive and false negative diagnoses to be made by any observer despite experience and training. This work investigates the effects of PET acquisition mode, reconstruction method, and a standard uptake value (SUV) correction scheme on the classification of lesions as benign or malignant in PET/CT images of an anthropomorphic phantom. The scheme accounts for the partial volume effect (PVE) and PET resolution. The observer draws a region of interest (ROI) around the lesion using the CT dataset. A simulated homogeneous PET lesion of the same shape as the drawn ROI is blurred with the point spread function (PSF) of the PET scanner to estimate the PVE, providing a scaling factor to produce a corrected SUV. Computer simulations showed that the accuracy of the corrected PET values depends on variations in the CT-drawn boundary and the position of the lesion with respect to the PET image matrix, especially for smaller lesions. Correction accuracy was affected slightly by mismatch between the simulation PSF and the actual scanner PSF. The receiver operating characteristic (ROC) study resulted in several observations. Using observer-drawn ROIs, scaled tumor-background ratios (TBRs) represented actual TBRs more accurately than unscaled TBRs. For the PET images, 3D OSEM outperformed 2D OSEM, 3D OSEM outperformed 3D FBP, and 2D OSEM outperformed 2D FBP. The correction scheme significantly increased sensitivity and slightly increased accuracy for all acquisition and reconstruction modes, at the cost of a small decrease in specificity.
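
    The scaling-factor construction can be sketched in 1-D: blur a unit-uptake lesion of the ROI's shape with the scanner PSF, and take the mean of the blurred profile inside the ROI as the recovery coefficient by which the measured SUV is divided. The lesion size, PSF width, and SUV value below are hypothetical.

```python
import numpy as np

def recovery_coefficient(roi_mask, fwhm_px):
    """Fraction of signal retained inside the ROI after blurring a
    unit-uptake lesion with a Gaussian PSF (1-D sketch)."""
    sigma = fwhm_px / 2.3548                 # FWHM -> standard deviation
    radius = int(4 * sigma) + 1
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum()                   # normalized Gaussian PSF
    blurred = np.convolve(roi_mask.astype(float), kernel, mode="same")
    return blurred[roi_mask > 0].mean()      # mean retained inside the ROI

# Hypothetical 10-pixel lesion imaged with a 6-pixel-FWHM PSF.
mask = np.zeros(100)
mask[45:55] = 1
rc = recovery_coefficient(mask, 6.0)
suv_corrected = 1.2 / rc   # measured SUV divided by the recovery coefficient
```

For lesions much wider than the PSF the coefficient approaches 1 and the correction vanishes, matching the observation that partial volume errors matter most for small lesions.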

  2. Diagnosing and Correcting Mass Accuracy and Signal Intensity Error Due to Initial Ion Position Variations in a MALDI TOFMS

    NASA Astrophysics Data System (ADS)

    Malys, Brian J.; Piotrowski, Michelle L.; Owens, Kevin G.

    2018-02-01

    Frustrated by worse than expected error for both peak area and time-of-flight (TOF) in matrix assisted laser desorption ionization (MALDI) experiments using samples prepared by electrospray deposition, it was finally determined that there was a correlation between sample location on the target plate and the measured TOF/peak area. Variations in both TOF and peak area were found to be due to small differences in the initial position of ions formed in the source region of the TOF mass spectrometer. These differences arise largely from misalignment of the instrument sample stage, with a smaller contribution arising from the non-ideal shape of the target plates used. By physically measuring the target plates used and comparing TOF data collected from three different instruments, an estimate of the magnitude and direction of the sample stage misalignment was determined for each of the instruments. A correction method was developed to correct the TOFs and peak areas obtained for a given combination of target plate and instrument. Two correction factors are determined, one by initially collecting spectra from each sample position used and another by using spectra from a single position for each set of samples on a target plate. For TOF and mass values, use of the correction factor reduced the error by a factor of 4, with the relative standard deviation (RSD) of the corrected masses being reduced to 12-24 ppm. For the peak areas, the RSD was reduced from 28% to 16% for samples deposited twice onto two target plates over two days.

  3. Diagnosing and Correcting Mass Accuracy and Signal Intensity Error Due to Initial Ion Position Variations in a MALDI TOFMS

    NASA Astrophysics Data System (ADS)

    Malys, Brian J.; Piotrowski, Michelle L.; Owens, Kevin G.

    2017-12-01

    Frustrated by worse than expected error for both peak area and time-of-flight (TOF) in matrix assisted laser desorption ionization (MALDI) experiments using samples prepared by electrospray deposition, it was finally determined that there was a correlation between sample location on the target plate and the measured TOF/peak area. Variations in both TOF and peak area were found to be due to small differences in the initial position of ions formed in the source region of the TOF mass spectrometer. These differences arise largely from misalignment of the instrument sample stage, with a smaller contribution arising from the non-ideal shape of the target plates used. By physically measuring the target plates used and comparing TOF data collected from three different instruments, an estimate of the magnitude and direction of the sample stage misalignment was determined for each of the instruments. A correction method was developed to correct the TOFs and peak areas obtained for a given combination of target plate and instrument. Two correction factors are determined, one by initially collecting spectra from each sample position used and another by using spectra from a single position for each set of samples on a target plate. For TOF and mass values, use of the correction factor reduced the error by a factor of 4, with the relative standard deviation (RSD) of the corrected masses being reduced to 12-24 ppm. For the peak areas, the RSD was reduced from 28% to 16% for samples deposited twice onto two target plates over two days.
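
    The two-factor multiplicative correction can be sketched as follows. The paper derives its factors from calibration spectra; mapping each position's calibration TOF onto the plate-wide mean, as done here, is an illustrative assumption rather than the authors' exact procedure.

```python
def position_factors(calibration_tofs):
    """Per-position correction factors that map each sample position's
    calibration TOF onto the plate-wide mean (illustrative form)."""
    mean_tof = sum(calibration_tofs) / len(calibration_tofs)
    return [mean_tof / t for t in calibration_tofs]

def corrected_tof(tof, pos_factor, plate_factor):
    """Apply a per-position and a per-plate multiplicative correction
    to a raw time-of-flight."""
    return tof * pos_factor * plate_factor
```

Applying the per-position factors to the calibration TOFs themselves collapses the position-dependent spread, which is the behavior the paper reports as a fourfold error reduction.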

  4. Optimized statistical parametric mapping for partial-volume-corrected amyloid positron emission tomography in patients with Alzheimer's disease and Lewy body dementia

    NASA Astrophysics Data System (ADS)

    Oh, Jungsu S.; Kim, Jae Seung; Chae, Sun Young; Oh, Minyoung; Oh, Seung Jun; Cha, Seung Nam; Chang, Ho-Jong; Lee, Chong Sik; Lee, Jae Hong

    2017-03-01

    We present an optimized voxelwise statistical parametric mapping (SPM) of partial-volume (PV)-corrected positron emission tomography (PET) of 11C Pittsburgh Compound B (PiB), incorporating the anatomical precision of magnetic resonance imaging (MRI) and the amyloid-β (Aβ) burden-specificity of PiB PET. First, we applied region-based partial-volume correction (PVC), termed the geometric transfer matrix (GTM) method, to PiB PET, creating MRI-based lobar parcels filled with mean PiB uptakes. Then, we conducted a voxelwise PVC by multiplying the original PET by the ratio of a GTM-based PV-corrected PET to a 6-mm-smoothed PV-corrected PET. Finally, we conducted spatial normalizations of the PV-corrected PETs onto the study-specific template. As such, we increased the accuracy of the SPM normalization and the tissue specificity of SPM results. Moreover, lobar smoothing (instead of whole-brain smoothing) was applied to increase the signal-to-noise ratio in the image without degrading the tissue specificity. Thereby, we could optimize a voxelwise group comparison between subjects with high and normal Aβ burdens (from 10 patients with Alzheimer's disease, 30 patients with Lewy body dementia, and 9 normal controls). Our SPM framework outperformed the conventional one in terms of the accuracy of the spatial normalization (85% of maximum likelihood tissue classification volume) and the tissue specificity (larger gray matter and smaller cerebrospinal fluid volume fractions in the SPM results). Our SPM framework optimized the SPM of a PV-corrected Aβ PET in terms of anatomical precision, normalization accuracy, and tissue specificity, resulting in better detection and localization of Aβ burdens in patients with Alzheimer's disease and Lewy body dementia.
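
The ratio step described above (original PET multiplied by GTM-corrected over smoothed GTM-corrected) can be sketched in one dimension; a boxcar filter stands in for the paper's 6-mm Gaussian smoothing, and all images are synthetic:

```python
import numpy as np

# 1-D sketch of the voxelwise PVC ratio step; toy data, boxcar smoothing.
def smooth(img, width=5):
    kernel = np.ones(width) / width
    return np.convolve(img, kernel, mode="same")

def voxelwise_pvc(pet, gtm_pet):
    """Original PET scaled by GTM-corrected image over its smoothed copy."""
    s = smooth(gtm_pet)
    ratio = np.divide(gtm_pet, s, out=np.ones_like(s), where=s > 0)
    return pet * ratio

# A "gray matter" plateau; the blurred scan shows spill-out at the edges.
gtm = np.array([0, 0, 2, 2, 2, 2, 2, 0, 0], dtype=float)  # parcel-mean image
pet = smooth(gtm)                                          # blurred "scan"
corrected = voxelwise_pvc(pet, gtm)
print(corrected.shape == pet.shape)
```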

  5. The self-absorption correction factors for 210Pb concentration in mining waste and influence on environmental radiation risk assessment.

    PubMed

    Bonczyk, Michal; Michalik, Boguslaw; Chmielewska, Izabela

    2017-03-01

    The radioactive lead isotope 210Pb occurs in waste originating from the metal smelting and refining industry, gas and oil extraction, and sometimes from underground coal mines; such waste is very often deposited in the natural environment. Radiation risk assessment requires accurate knowledge of the concentration of 210Pb in these materials. Laboratory measurement seems to be the only reliable method applicable in environmental 210Pb monitoring. One such method is gamma-ray spectrometry, a fast and cost-effective way to determine the 210Pb concentration. On the other hand, the self-attenuation of the 46.5 keV gamma ray from 210Pb in a sample is significant, as it depends not only on sample density but also on sample chemical composition (sample matrix). This phenomenon is often responsible for underestimation of the 210Pb activity concentration when gamma spectrometry is applied without the relevant corrections, so the corresponding radiation risk can also be improperly evaluated. Sixty samples of coal mining solid tailings (sediments created from underground mining water) were analysed. A transmission method, slightly modified and adapted to the existing laboratory conditions, was applied for the accurate measurement of the 210Pb concentration. The observed concentrations of 210Pb range from 42.2 to 11,700 Bq·kg-1 of dry mass. Experimentally obtained correction factors related to sample density and elemental composition range between 1.11 and 6.97. Neglecting this factor can cause significant errors or underestimation in radiological risk assessment. The obtained results have been used for an environmental radiation risk assessment performed using the ERICA tool, assuming exposure conditions typical for the final destination of this kind of waste.
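
A transmission-style self-absorption factor is commonly written as ln(T)/(T - 1), where T is the measured transmission of the 46.5 keV line through the sample (a Cutshall-type formula; whether the paper uses exactly this form is an assumption, and the transmission values below are invented):

```python
import math

def self_absorption_factor(transmission):
    """Cutshall-style correction for low-energy gammas (e.g. 46.5 keV of 210Pb).
    transmission: I_sample / I_reference for a beam passed through the sample.
    Returns the factor by which the apparent activity must be multiplied."""
    t = transmission
    if not 0.0 < t < 1.0:
        return 1.0  # fully transparent (or invalid) sample: no correction
    return math.log(t) / (t - 1.0)

# Increasingly absorbing matrices need increasingly large corrections.
for t in (0.9, 0.5, 0.1):
    print(round(self_absorption_factor(t), 3))  # 1.054, 1.386, 2.558
```

Factors of this size are consistent with the 1.11-6.97 range reported in the abstract.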

  6. Exact simulation of polarized light reflectance by particle deposits

    NASA Astrophysics Data System (ADS)

    Ramezan Pour, B.; Mackowski, D. W.

    2015-12-01

    The use of polarimetric light reflection measurements as a means of identifying the physical and chemical characteristics of particulate materials relies on an accurate model of the effects of particle size, shape, concentration, and refractive index on polarized reflection. This research examines two methods for predicting reflection from plane-parallel layers of wavelength-sized particles. The first method is based on an exact superposition solution to Maxwell's time-harmonic wave equations for a deposit of spherical particles exposed to a plane incident wave. We use a FORTRAN-90 implementation of this solution (the Multiple Sphere T Matrix (MSTM) code), coupled with parallel computational platforms, to directly simulate the reflection from particle layers. The second method is based on the vector radiative transport equation (RTE). Mie theory is used in our RTE model to predict the extinction coefficient, albedo, and scattering phase function of the particles, and the solution of the RTE is obtained from the adding-doubling method applied to a plane-parallel configuration. Our results show that the MSTM and RTE predictions of the Mueller matrix elements converge when the particle volume fraction in the layer decreases below around five percent. At higher volume fractions the RTE can yield results that, depending on the particle size and refractive index, depart significantly from the exact predictions. The particle regimes which lead to dependent scattering effects, and methods to correct the vector RTE for particle interaction, will be discussed.

  7. Infrared spectral imaging as a novel approach for histopathological recognition in colon cancer diagnosis

    NASA Astrophysics Data System (ADS)

    Nallala, Jayakrupakar; Gobinet, Cyril; Diebold, Marie-Danièle; Untereiner, Valérie; Bouché, Olivier; Manfait, Michel; Sockalingum, Ganesh Dhruvananda; Piot, Olivier

    2012-11-01

    Innovative diagnostic methods that could complement conventional histopathology are urgently needed for cancer diagnosis. In this perspective, we propose a new concept based on spectral histopathology, using IR spectral micro-imaging applied directly to paraffinized colon tissue arrays stabilized in an agarose matrix without any chemical pre-treatment. In order to correct spectral interferences from paraffin and agarose, a mathematical procedure is implemented. The corrected spectral images are then processed by a multivariate clustering method to automatically recover, on the basis of their intrinsic molecular composition, the main histological classes of normal and tumoral colon tissue. The spectral signatures from different histological classes of the colonic tissues are analyzed using statistical methods (Kruskal-Wallis test and principal component analysis) to identify the most discriminant IR features. These features characterize some of the biomolecular alterations associated with malignancy. Thus, via a single analysis, in a label-free and nondestructive manner, the main changes in nucleotide, carbohydrate, and collagen features can be identified simultaneously between normal and cancerous tissues. The present study demonstrates the potential of IR spectral imaging as a modern tool, complementary to conventional histopathology, for objective cancer diagnosis directly from paraffin-embedded tissue arrays.

  8. Comparison of two matrix-assisted laser desorption ionization-time of flight mass spectrometry systems for the identification of clinical filamentous fungi.

    PubMed

    Huang, Yanfei; Zhang, Mingxin; Zhu, Min; Wang, Mei; Sun, Yufeng; Gu, Haitong; Cao, Jingjing; Li, Xue; Zhang, Shaoya; Wang, Jinglin; Lu, Xinxin

    2017-07-01

    Infections caused by filamentous fungi have become a health concern and require rapid and accurate identification of the pathogens for effective treatment. The aim was to compare the performance of two MALDI-TOF MS systems (Bruker Microflex LT and Xiamen Microtyper) in the identification of filamentous fungal species. A total of 374 clinical filamentous fungal isolates sequentially collected in the Clinical Laboratory at the Beijing Tongren Hospital between January 2014 and December 2015 were identified by traditional phenotypic methods, Bruker Microflex LT, and Xiamen Microtyper MALDI-TOF MS, respectively. Discrepancies between these methods were resolved by sequencing for definitive identification. Bruker Microflex LT and Xiamen Microtyper had similar rates of correct species ID (98.9 vs. 99.2%), genus ID (99.7 vs. 100%), mis-ID (0.3 vs. 0%) and no ID (0 vs. 0%). The rate of correct species identification by both MALDI-TOF MS systems (98.9 and 99.2%, respectively) was much higher than that of the phenotypic approach (91.9%). Both MALDI-TOF MS systems provide accurate identification of clinical filamentous fungi compared with the conventional phenotypic method, show similar performance, and have the potential to replace phenotypic methods for routine identification of these fungi in clinical mycology laboratories.

  9. An evaluation of three processing methods and the effect of reduced culture times for faster direct identification of pathogens from BacT/ALERT blood cultures by MALDI-TOF MS.

    PubMed

    Loonen, A J M; Jansz, A R; Stalpers, J; Wolffs, P F G; van den Brule, A J C

    2012-07-01

    Matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOF MS) is a fast and reliable method for the identification of bacteria from agar media. Direct identification from positive blood cultures should decrease the time to result. In this study, three different processing methods for the rapid direct identification of bacteria from positive blood culture bottles were compared. In total, 101 positive aerobic BacT/ALERT bottles were included in this study. Aliquots from all bottles were used for three bacterial processing methods, i.e. the commercially available Bruker MALDI Sepsityper kit, the commercially available Molzym MolYsis Basic5 kit and a centrifugation/washing method. In addition, the best method was used to evaluate the possibility of MALDI application after a reduced incubation time of 7 h for Staphylococcus aureus- and Escherichia coli-spiked (1,000, 100 and 10 colony-forming units [CFU]) aerobic BacT/ALERT blood cultures. Sixty-six (65%), 51 (50.5%) and 79 (78%) bottles were identified correctly at the species level when the centrifugation/washing method, MolYsis Basic5 and Sepsityper were used, respectively. Incorrect identification was obtained in 35 (35%), 50 (49.5%) and 22 (22%) bottles, respectively. Gram-positive cocci were correctly identified in 33/52 (64%) of the cases, whereas Gram-negative rods were correctly identified in 45/47 (96%) of all bottles when the Sepsityper kit was used. Seven hours of pre-incubation of S. aureus- and E. coli-spiked aerobic BacT/ALERT blood cultures never resulted in reliable identification with MALDI-TOF MS. Sepsityper is superior for the direct identification of microorganisms from aerobic BacT/ALERT bottles. Gram-negative pathogens show better results than Gram-positive bacteria. Reduced incubation followed by MALDI-TOF MS did not result in faster reliable identification.

  10. Extended Hellmann-Feynman theorem for degenerate eigenstates

    NASA Astrophysics Data System (ADS)

    Zhang, G. P.; George, Thomas F.

    2004-04-01

    In a previous paper, we reported a failure of the traditional Hellmann-Feynman theorem (HFT) for degenerate eigenstates. This has generated enormous interest among different groups. In four independent papers by Fernandez, by Balawender, Hola, and March, by Vatsya, and by Alon and Cederbaum, an elegant method to solve the problem was devised. The main idea is that one has to construct and diagonalize the force matrix for the degenerate case, and only the eigenforces are well defined. We believe this is an important extension to HFT. Using our previous example for an energy level of fivefold degeneracy, we find that those eigenforces correctly reflect the symmetry of the molecule.
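
The construction described above can be illustrated with a toy symmetric force matrix: within a degenerate subspace only the eigenvalues of this matrix, the eigenforces, are well defined. The matrix below is invented for illustration:

```python
import numpy as np

# Toy force matrix F_ij within a 2-fold degenerate subspace (invented values).
# Diagonalizing it yields the "eigenforces"; individual matrix elements have
# no basis-independent meaning, but the eigenvalues do.
F = np.array([[0.2, 0.1],
              [0.1, 0.2]])
eigenforces = np.linalg.eigvalsh(F)  # ascending: 0.1 and 0.3
print(eigenforces)
```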

  11. Iterative universal state selective correction for the Brillouin-Wigner multireference coupled-cluster theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banik, Subrata; Ravichandran, Lalitha; Brabec, Jiri

    2015-03-21

    As a further development of the previously introduced a posteriori Universal State-Selective (USS) corrections [K. Kowalski, J. Chem. Phys. 134, 194107 (2011); Brabec et al., J. Chem. Phys. 136, 124102 (2012)], we suggest an iterative form of the USS correction by means of correcting effective Hamiltonian matrix elements. We also formulate USS corrections via the left Bloch equations. The convergence of the USS corrections with excitation level towards the FCI limit is also investigated. Various forms of the USS and simplified diagonal USSD corrections at the SD and SD(T) levels are numerically assessed on several model systems and on the ozone and tetramethyleneethane molecules. It is shown that the iterative USS correction can successfully replace the previously developed a posteriori BWCC size-extensivity correction, while it is not sensitive to intruder states and also performs well in other cases where the a posteriori correction fails, e.g., for the asymmetric vibration mode of ozone.

  12. Landsat real-time processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, E.L.

    A novel method for performing real-time acquisition and processing of Landsat/EROS data covers all aspects including radiometric and geometric corrections of multispectral scanner or return-beam vidicon inputs, image enhancement, statistical analysis, feature extraction, and classification. Radiometric transformations include bias/gain adjustment, noise suppression, calibration, scan angle compensation, and illumination compensation, including topography and atmospheric effects. Correction or compensation for geometric distortion includes sensor-related distortions, such as centering, skew, size, scan nonlinearity, radial symmetry, and tangential symmetry. Also included are object image-related distortions such as aspect angle (altitude), scale distortion (altitude), terrain relief, and earth curvature. Ephemeral corrections are also applied to compensate for satellite forward movement, earth rotation, altitude variations, satellite vibration, and mirror scan velocity. Image enhancement includes high-pass, low-pass, and Laplacian mask filtering and data restoration for intermittent losses. Resource classification is provided by statistical analysis including histograms, correlational analysis, matrix manipulations, and determination of spectral responses. Feature extraction includes spatial frequency analysis, which is used in parallel discriminant functions in each array processor for rapid determination. The technique uses integrated parallel array processors that decimate the tasks concurrently under supervision of a control processor. The operator-machine interface is optimized for programming ease and graphics image windowing.

  13. Performance analysis of landslide early warning systems at regional scale: the EDuMaP method

    NASA Astrophysics Data System (ADS)

    Piciullo, Luca; Calvello, Michele

    2016-04-01

    Landslide early warning systems (LEWSs) reduce landslide risk by disseminating timely and meaningful warnings when the level of risk is judged intolerably high. Two categories of LEWSs can be defined on the basis of their scale of analysis: "local" systems and "regional" systems. LEWSs at regional scale (ReLEWSs) are used to assess the probability of occurrence of landslides over appropriately-defined homogeneous warning zones of relevant extension, typically through the prediction and monitoring of meteorological variables, in order to give generalized warnings to the public. Despite many studies on ReLEWSs, no standard requirements exist for assessing their performance. Empirical evaluations are often carried out by simply analysing the time frames during which significant high-consequence landslides occurred in the test area. Alternatively, the performance evaluation is based on 2x2 contingency tables computed for the joint frequency distribution of landslides and alerts, both considered as dichotomous variables. In all these cases, model performance is assessed neglecting some important aspects which are peculiar to ReLEWSs, among which: the possible occurrence of multiple landslides in the warning zone; the duration of the warnings in relation to the time of occurrence of the landslides; the level of the warning issued in relation to the landslide spatial density in the warning zone; and the relative importance system managers attribute to different types of errors. An original approach, called the EDuMaP method, is proposed to assess the performance of landslide early warning models operating at regional scale. The method is composed of three main phases: events analysis, duration matrix, and performance analysis. The events analysis phase focuses on the definition of landslide events (LEs) and warning events (WEs), which are derived from available landslide and warning databases according to their spatial and temporal characteristics by means of ten input parameters.
The evaluation of the time associated with the occurrence of landslide events (LEs) in relation to the occurrence of warning events (WEs) in their respective classes is a fundamental step in determining the duration matrix elements. The classification of LEs and WEs, in turn, establishes the structure of the duration matrix: the number of rows and columns of the matrix is equal to the number of classes defined for the warning and landslide events, respectively. Thus the matrix is not a 2x2 contingency table, and LEs and WEs are not treated as dichotomous variables. The final phase of the method is the evaluation of the duration matrix based on a set of performance criteria assigning a performance meaning to the elements of the matrix. To this aim, different criteria can be defined, for instance employing an alert classification scheme derived from 2x2 contingency tables or assigning a colour code to the elements of the matrix in relation to their grade of correctness. Finally, performance indicators can be derived from the performance criteria to quantify successes and errors of the early warning models. EDuMaP has already been applied to different real case studies, highlighting the adaptability of the method to analysing the performance of structurally different ReLEWSs.
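
The duration-matrix idea can be sketched as follows; the class counts, intervals, and the false-alarm criterion are illustrative, not the published EDuMaP parameters:

```python
import numpy as np

# Minimal duration-matrix sketch in the spirit of EDuMaP: rows are warning-event
# classes, columns landslide-event classes, entries the total time (h) during
# which that (warning, landslide) combination held. Data are hypothetical.
n_warning_classes, n_landslide_classes = 4, 3
duration = np.zeros((n_warning_classes, n_landslide_classes))

# (warning_class, landslide_class, duration_h) from hypothetical event logs
intervals = [(0, 0, 120.0), (1, 0, 30.0), (2, 0, 4.0),
             (2, 1, 6.0), (3, 2, 2.0), (0, 1, 1.0)]
for w, l, hours in intervals:
    duration[w, l] += hours

# One possible performance criterion: time spent in high warning classes
# (rows >= 2) with no landslides (column 0) counts as false-alarm time.
false_alarm_time = duration[2:, 0].sum()
print(duration.sum(), false_alarm_time)
```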

  14. The great importance of normalization of LC-MS data for highly-accurate non-targeted metabolomics.

    PubMed

    Mizuno, Hajime; Ueda, Kazuki; Kobayashi, Yuta; Tsuyama, Naohiro; Todoroki, Kenichiro; Min, Jun Zhe; Toyo'oka, Toshimasa

    2017-01-01

    Non-targeted metabolomics analysis of biological samples is very important for understanding biological functions and diseases. LC combined with electrospray ionization-based MS has been a powerful tool widely used for metabolomic analyses. However, the ionization efficiency of electrospray ionization fluctuates for various reasons, such as matrix effects and intraday variations in instrument performance. To remove these fluctuations, normalization methods have been developed, alongside related techniques for increasing sensitivity and separating co-eluting components. Normalization allows the ionization efficiencies of the detected metabolite peaks to be corrected simultaneously, enabling quantitative non-targeted metabolomics. In this review paper, we focus on these normalization methods for non-targeted metabolomics by LC-MS. Copyright © 2016 John Wiley & Sons, Ltd.
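
One common normalization strategy, internal-standard scaling, can be sketched with synthetic data (the specific scheme is illustrative; the review covers several such methods):

```python
import numpy as np

# Sketch: divide each metabolite peak by an internal-standard peak measured in
# the same run, so run-to-run drifts in electrospray ionization efficiency
# cancel. All numbers are synthetic.
runs = 5
true_signal = np.array([100.0, 40.0, 10.0])   # three metabolites
drift = np.linspace(0.7, 1.3, runs)           # per-run ionization drift
peaks = np.outer(drift, true_signal)          # observed areas, runs x metabolites
istd = drift * 50.0                           # internal standard tracks the drift
normalized = peaks / istd[:, None] * 50.0     # rescale to nominal istd area

print(normalized.std(axis=0))  # per-metabolite scatter collapses after normalization
```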

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Igarashi, Noriyuki, E-mail: noriyuki.igarashi@kek.jp; Nitani, Hiroaki; Takeichi, Yasuo

    BL-15A is a new x-ray undulator beamline at the Photon Factory. It will be dedicated to two independent research activities: simultaneous XAFS/XRF/XRD experiments, and SAXS/WAXS/GI-SAXS studies. In order to supply a choice of micro-focus, low-divergence and collimated beams, a double-surface bimorph mirror was recently developed. To achieve further mirror surface optimization, the pencil beam scanning method was applied for “in-situ” beam inspection and the inverse matrix method was used for determination of the optimal voltages on the piezoelectric actuators. The corrected beam profiles at every focal spot agreed well with the theoretical values, and the resultant beam performance is promising for both techniques. Quick and stable switching between highly focused and intense collimated beams was established using this new mirror with simple motorized stages.
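
The inverse-matrix step can be sketched as a least-squares solve of a measured actuator response matrix; the response matrix below is synthetic, and np.linalg.lstsq (SVD-based) stands in for the pseudoinverse:

```python
import numpy as np

# Sketch of the inverse-matrix idea for a bimorph mirror: the slope-error
# response of each piezo actuator is characterized once (e.g. by a pencil-beam
# scan) and stacked into a response matrix R; the voltage increments that
# cancel a measured residual error solve R @ dV = -error in the least-squares
# sense. R and the errors here are synthetic.
rng = np.random.default_rng(0)
n_points, n_actuators = 40, 8
R = rng.normal(size=(n_points, n_actuators))     # measured responses
true_dv = rng.normal(size=n_actuators)
error = R @ true_dv                              # measured residual slope error

dv, *_ = np.linalg.lstsq(R, -error, rcond=None)  # SVD-based pseudoinverse solve
residual = error + R @ dv
print(np.abs(residual).max())                    # near machine precision
```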

  16. Stoichiometry determination of (Pb,La)(Zr,Ti)O3-type nano-crystalline ferroelectric ceramics by wavelength-dispersive X-ray fluorescence spectrometry.

    PubMed

    Sitko, Rafał; Zawisza, Beata; Kita, Andrzej; Płońska, Małgorzata

    2006-07-01

    Analysis of small samples of lanthanum-doped lead zirconate titanate (PLZT) by wavelength-dispersive X-ray fluorescence spectrometry (WDXRF) is presented. Approximately 30 mg of the powdered material was suspended in water and collected on a membrane filter. Pure oxide standards (PbO, La2O3, ZrO2 and TiO2) were used for calibration. The matrix effects were corrected using a theoretical influence coefficients algorithm for intermediate-thickness specimens. The results from the XRF method were compared with those from inductively coupled plasma optical emission spectrometry (ICP-OES). Agreement between the XRF and ICP-OES analyses was satisfactory and indicates the usefulness of the XRF method for stoichiometry determination of PLZT.
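
A classical influence-coefficient correction is the Lachance-Traill fixed-point scheme; whether the paper's theoretical-coefficient algorithm for intermediate-thickness specimens takes exactly this form is an assumption, and the coefficients below are invented:

```python
import numpy as np

def lachance_traill(relative_intensities, alpha, iterations=50):
    """Solve C_i = R_i * (1 + sum_j alpha_ij * C_j) by fixed-point iteration.
    relative_intensities: measured intensities relative to pure-element standards.
    alpha: influence-coefficient matrix (illustrative values here)."""
    r = np.asarray(relative_intensities, dtype=float)
    c = r.copy()                      # start from uncorrected concentrations
    for _ in range(iterations):
        c = r * (1.0 + alpha @ c)
    return c

r = np.array([0.55, 0.20, 0.15])           # measured relative intensities
alpha = np.array([[0.00, 0.10, -0.05],
                  [0.05, 0.00,  0.02],
                  [-0.02, 0.03, 0.00]])    # invented influence coefficients
c_corr = lachance_traill(r, alpha)
print(c_corr)
```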

  17. Human motion planning based on recursive dynamics and optimal control techniques

    NASA Technical Reports Server (NTRS)

    Lo, Janzen; Huang, Gang; Metaxas, Dimitris

    2002-01-01

    This paper presents an efficient optimal control and recursive dynamics-based computer animation system for simulating and controlling the motion of articulated figures. A quasi-Newton nonlinear programming technique (with super-linear convergence) is implemented to solve minimum torque-based human motion-planning problems. The explicit analytical gradients needed in the dynamics are derived using a matrix exponential formulation and Lie algebra. Cubic spline functions are used to make the search space for an optimal solution finite. Based on our formulations, our method is well conditioned and robust, in addition to being computationally efficient. To better illustrate the efficiency of our method, we present results of natural-looking and physically correct human motions for a variety of human motion tasks involving open and closed loop kinematic chains.

  18. Noniterative Multireference Coupled Cluster Methods on Heterogeneous CPU-GPU Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhaskaran-Nair, Kiran; Ma, Wenjing; Krishnamoorthy, Sriram

    2013-04-09

    A novel parallel algorithm for non-iterative multireference coupled cluster (MRCC) theories, which merges the recently introduced reference-level parallelism (RLP) [K. Bhaskaran-Nair, J. Brabec, E. Aprà, H.J.J. van Dam, J. Pittner, K. Kowalski, J. Chem. Phys. 137, 094112 (2012)] with the possibility of accelerating numerical calculations using graphics processing units (GPUs), is presented. We discuss the performance of this algorithm for the MRCCSD(T) method (iterative singles and doubles and perturbative triples), where the corrections due to triples are added to the diagonal elements of the MRCCSD (iterative singles and doubles) effective Hamiltonian matrix. The performance of the combined RLP/GPU algorithm is illustrated on the example of the Brillouin-Wigner (BW) and Mukherjee (Mk) state-specific MRCCSD(T) formulations.

  19. Multi-residue determination of pesticides in tropical fruits using liquid chromatography/tandem mass spectrometry.

    PubMed

    Botero-Coy, A M; Marín, J M; Ibáñez, M; Sancho, J V; Hernández, F

    2012-03-01

    Monitoring pesticide residues in tropical fruits is of great interest for many countries, e.g., in South America, that base an important part of their economy on the exportation of these products. In this work, an LC-MS/MS multi-residue method using a triple quadrupole analyzer has been developed for around 30 pesticides in seven Colombian tropical fruits of high commercial value for domestic and international markets (uchuva, tamarillo, granadilla, gulupa, maracuya, papaya, and pithaya). After sample extraction with acetonitrile, an aliquot of the extract was diluted with water and directly injected into the HPLC-MS/MS system (electrospray interface) without any cleanup step. The formation of sodium adducts, which fragment poorly, was minimized using 0.1% formic acid in the mobile phase, which favored the formation of the protonated molecule. In some particular cases, the addition of ammonium acetate promoted the formation of ammonium adducts, avoiding the presence of the sodium adducts. The highest sensitivity was observed in positive electrospray ionization for the wide majority of pesticides, with a few exceptions for acidic compounds that gave a better response in the negative mode (e.g., 2,4-D, fluazinam). Thus, simultaneous acquisition in positive/negative mode was applied. Two MS/MS transitions were acquired for each compound to ensure reliable quantification and identification of the compounds detected in samples, although for malathion a third transition was acquired due to the presence of interfering isobaric compounds in the sample extracts. A detailed study of matrix effects was made by comparing standards in solvent and in matrix. Both ionization suppression and ionization enhancement were observed depending on the analyte/matrix combination tested. Matrix effects were corrected by applying calibration in matrix.
Three matrices were selected (uchuva, maracuya, gulupa) to perform matrix calibration in the analysis of all seven fruit varieties studied. The method was validated by recovery experiments in samples spiked at two levels (0.05 and 0.5 mg/kg). The data were satisfactory for the wide majority of analyte/matrix combinations, with most recoveries between 70% and 110% and the RSD below 15%. Several samples collected from the market were finally analyzed. Positive findings were confirmed by evaluating the experimental Q/q ratios and retention times, and comparing them with those of reference standards.
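
Matrix-matched calibration can be sketched as fitting the calibration line in blank matrix extract so that ionization suppression affects standards and samples equally (all numbers are synthetic):

```python
import numpy as np

# Sketch: standards prepared in blank matrix extract experience the same
# ionization suppression as the sample, so the fitted line quantifies
# correctly despite the suppressed response. Values are invented.
conc = np.array([0.0, 0.05, 0.1, 0.2, 0.5])   # spiking levels, mg/kg
suppression = 0.6                              # 40% ionization suppression
area_matrix = 1000.0 * suppression * conc      # standards in matrix extract
slope, intercept = np.polyfit(conc, area_matrix, 1)

sample_area = 1000.0 * suppression * 0.12      # sample, same suppression
estimated = (sample_area - intercept) / slope
print(estimated)                               # recovers ~0.12 mg/kg
```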

  20. Unimodular Gravity and General Relativity UV divergent contributions to the scattering of massive scalar particles

    NASA Astrophysics Data System (ADS)

    Gonzalez-Martin, S.; Martin, C. P.

    2018-01-01

    We work out the one-loop and order κ²mφ² UV divergent contributions, coming from Unimodular Gravity and General Relativity, to the S matrix element of the scattering process φ + φ → φ + φ in a λφ⁴ theory with mass mφ. We show that both Unimodular Gravity and General Relativity give rise to the same UV divergent contributions in Dimensional Regularization. This seems to be at odds with the known result that in a multiplicative MS dimensional regularization scheme the General Relativity corrections, in the de Donder gauge, to the beta function, βλ, of the λ coupling do not vanish, whereas the Unimodular Gravity corrections, in a certain gauge, do vanish. Actually, by comparing the UV divergent contributions calculated in this paper with those which give rise to the non-vanishing gravitational corrections to βλ, one readily concludes that the UV divergent contributions that yield the just mentioned non-vanishing gravitational corrections to βλ do not contribute to the UV divergent behaviour of the S matrix element of φ + φ → φ + φ. This shows that any physical consequence, such as the existence of asymptotic freedom due to gravitational interactions, drawn from the value of βλ is not physically meaningful.

  1. Statistical Correction of Air Temperature Forecasts for City and Road Weather Applications

    NASA Astrophysics Data System (ADS)

    Mahura, Alexander; Petersen, Claus; Sass, Bent; Gilet, Nicolas

    2014-05-01

    A method for statistical correction of air/road surface temperature forecasts was developed based on analysis of long-term time series of meteorological observations and forecasts (from the HIgh Resolution Limited Area Model and the Road Conditions Model; 3 km horizontal resolution). It was tested for May-Aug 2012 and Oct 2012 - Mar 2013, respectively. The developed method is based mostly on forecasted meteorological parameters, with minimal inclusion of observations (covering only a pre-history period). Although the first-iteration correction takes relevant temperature observations into account, the further adjustment of air and road temperature forecasts is based purely on forecasted meteorological parameters. The method is model-independent, i.e. it can be applied for temperature correction with other types of models having different horizontal resolutions. It is relatively fast due to application of the singular value decomposition method to solve the matrix system for the coefficients. Moreover, there is always a possibility for additional improvement through extra tuning of the temperature forecasts for some locations (stations), in particular those where the MAEs are generally higher than elsewhere (see Gilet et al., 2014). For city weather applications, a new operational procedure for statistical correction of the air temperature forecasts has been elaborated and implemented for the HIRLAM-SKA model runs at 00, 06, 12, and 18 UTC, covering forecast lengths up to 48 hours. The procedure includes segments for extraction of observations and forecast data, assigning these to forecast lengths, statistical correction of temperature, one- and multi-day statistical evaluation of model performance, decision-making on using corrections by stations, interpolation, visualisation and storage/backup. Pre-operational air temperature correction runs have been performed for mainland Denmark since mid-April 2013 and have shown good results.
Tests also showed that the CPU time required for the operational procedure is relatively short (less than 15 minutes, a large part of which is spent on interpolation). They also showed that starting the correction of forecasts does not require long-term pre-historical data (containing forecasts and observations); a couple of weeks is sufficient when a new observational station is included and added to the forecast point. For the road weather application, the statistical correction of road surface temperature forecasts (for the RWM system's daily hourly runs covering forecast lengths up to 5 hours ahead) was also operationalized for the Danish road network (about 400 road stations) and has been running in a test mode since Sep 2013. The method can also be applied for correction of the dew point temperature and wind speed (as part of observations/forecasts at synoptic stations), since both of these meteorological parameters are part of the proposed system of equations. The evaluation of the method's performance for improvement of wind speed forecasts is planned as well, along with considering possibilities for wind direction improvements (which is more complex due to the multi-modal distribution of such data). The method worked for the entire domain of mainland Denmark (tested for 60 synoptic and 395 road stations), and hence it can be applied for any geographical point within this domain, e.g. through interpolation to about 100 cities' locations (for Danish national byvejr forecasts). Moreover, we can assume that the same method can be used in other geographical areas; evaluation for other domains (with a focus on Greenland and the Nordic countries) is planned. In addition, a similar approach might be tested for statistical correction of concentrations of chemical species, but this will require additional elaboration and evaluation.
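
The coefficient-fitting step can be sketched as an SVD-based least-squares regression of observations on forecast predictors over a short pre-history window (np.linalg.lstsq uses SVD internally; the predictors and data below are invented):

```python
import numpy as np

# Sketch: regress observed temperature on forecast predictors over roughly
# two weeks of pre-history and solve for correction coefficients via SVD.
rng = np.random.default_rng(1)
days = 14                                       # ~two weeks of pre-history
forecast_t2m = rng.normal(10.0, 5.0, days)      # forecast 2-m temperature
forecast_wind = rng.normal(5.0, 2.0, days)      # forecast wind speed
X = np.column_stack([np.ones(days), forecast_t2m, forecast_wind])

# Synthetic "observations": a fixed linear relation to the predictors.
observed = 1.5 + 0.9 * forecast_t2m - 0.1 * forecast_wind

coef, *_ = np.linalg.lstsq(X, observed, rcond=None)  # SVD-based solve
corrected = X @ coef
print(np.abs(corrected - observed).max())            # fit recovers observations
```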

  2. Pre-correction of distorted Bessel-Gauss beams without wavefront detection

    NASA Astrophysics Data System (ADS)

    Fu, Shiyao; Wang, Tonglu; Zhang, Zheyuan; Zhai, Yanwang; Gao, Chunqing

    2017-12-01

    By exploiting the rapid phase retrieval of the Gerchberg-Saxton algorithm, we experimentally demonstrate a scheme that corrects, with good performance, Bessel-Gauss beams distorted by inhomogeneous media such as a weakly turbulent atmosphere. A probe Gaussian beam is employed and propagates coaxially with the Bessel-Gauss modes through the turbulence. No wavefront sensor is used; instead, a matrix detector captures the probe Gaussian beam, and the correction phase mask is then computed by feeding the captured probe beam into the Gerchberg-Saxton algorithm. The experimental results indicate that both single and multiplexed Bessel-Gauss beams can be corrected well, in terms of improved mode purity and mitigated interchannel cross talk.
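    The phase retrieval at the core of this record can be illustrated with a minimal Gerchberg-Saxton sketch. This is not the authors' implementation; it assumes, for simplicity, that the two planes are related by a plain discrete Fourier transform rather than the actual optical propagation.

    ```python
    import numpy as np

    def gerchberg_saxton(source_amp, target_amp, n_iter=50):
        """Retrieve a phase mask such that a field with amplitude source_amp
        and this phase has (approximately) amplitude target_amp in the
        Fourier plane. Classic alternating-projection iteration."""
        phase = np.zeros_like(source_amp)
        for _ in range(n_iter):
            field = source_amp * np.exp(1j * phase)        # impose source amplitude
            far = np.fft.fft2(field)
            far = target_amp * np.exp(1j * np.angle(far))  # impose target amplitude
            phase = np.angle(np.fft.ifft2(far))            # keep retrieved phase
        return phase
    ```

    In the paper's setting the measured probe-beam intensity plays the role of one of the amplitude constraints; the iteration's low cost per step is what makes the "rapid solution" property attractive for real-time correction.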

  3. Errors in quantitative backscattered electron analysis of bone standardized by energy-dispersive x-ray spectrometry.

    PubMed

    Vajda, E G; Skedros, J G; Bloebaum, R D

    1998-10-01

    Backscattered electron (BSE) imaging has proven to be a useful method for analyzing the mineral distribution in microscopic regions of bone. However, an accepted method of standardization has not been developed, limiting the utility of BSE imaging for truly quantitative analysis. Previous work has suggested that BSE images can be standardized by energy-dispersive x-ray spectrometry (EDX). Unfortunately, EDX-standardized BSE images tend to underestimate the mineral content of bone when compared with traditional ash measurements. The goal of this study is to investigate the nature of the deficit between EDX-standardized BSE images and ash measurements. A series of analytical standards, ashed bone specimens, and unembedded bone specimens were investigated to determine the source of the deficit previously reported. The primary source of error was found to be inaccurate ZAF corrections to account for the organic phase of the bone matrix. Conductive coatings, methylmethacrylate embedding media, and minor elemental constituents in bone mineral introduced negligible errors. It is suggested that the errors would remain constant and an empirical correction could be used to account for the deficit. However, extensive preliminary testing of the analysis equipment is essential.

  4. New insights on ion track morphology in pyrochlores by aberration corrected scanning transmission electron microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sachan, Ritesh; Zhang, Yanwen; Ou, Xin

    Here we demonstrate the enhanced imaging capabilities of an aberration-corrected scanning transmission electron microscope to advance the understanding of ion track structure in pyrochlore-structured materials (i.e., Gd2Ti2O7 and Gd2TiZrO7). Track formation occurs due to the inelastic transfer of energy from incident ions to electrons, and atomic-level details of track morphology as a function of energy loss are revealed in the present work. A comparison of imaging details obtained by varying the collection angles of the detectors is discussed. A quantitative analysis of phase identification using high-angle annular dark field imaging is performed on the ion tracks. Finally, a novel 3-dimensional track reconstruction method is provided that is based on depth-dependent imaging of the ion tracks. The technique is used to extract atomic-level details of nanoscale features, such as disordered ion tracks, that are embedded in a relatively thick matrix. A further application of the method is demonstrated by measuring the tilt of the ion tracks relative to the electron beam incidence, which helps to determine the structure and geometry of the ion tracks quantitatively.

  5. New insights on ion track morphology in pyrochlores by aberration corrected scanning transmission electron microscopy

    DOE PAGES

    Sachan, Ritesh; Zhang, Yanwen; Ou, Xin; ...

    2016-12-13

    Here we demonstrate the enhanced imaging capabilities of an aberration-corrected scanning transmission electron microscope to advance the understanding of ion track structure in pyrochlore-structured materials (i.e., Gd2Ti2O7 and Gd2TiZrO7). Track formation occurs due to the inelastic transfer of energy from incident ions to electrons, and atomic-level details of track morphology as a function of energy loss are revealed in the present work. A comparison of imaging details obtained by varying the collection angles of the detectors is discussed. A quantitative analysis of phase identification using high-angle annular dark field imaging is performed on the ion tracks. Finally, a novel 3-dimensional track reconstruction method is provided that is based on depth-dependent imaging of the ion tracks. The technique is used to extract atomic-level details of nanoscale features, such as disordered ion tracks, that are embedded in a relatively thick matrix. A further application of the method is demonstrated by measuring the tilt of the ion tracks relative to the electron beam incidence, which helps to determine the structure and geometry of the ion tracks quantitatively.

  6. Influence of stress interaction on the behavior of off-axis unidirectional composites

    NASA Technical Reports Server (NTRS)

    Pindera, M. J.; Herakovich, C. T.

    1980-01-01

    The yield function for plane stress of a transversely isotropic composite lamina consisting of stiff, linearly elastic fibers and a von Mises matrix material is formulated in terms of Hill's elastic stress concentration factors and a single plastic constraint parameter. The above are subsequently evaluated on the basis of observed average lamina and constituent response for the Avco 5505 boron epoxy system. It is shown that inclusion of residual stresses in the yield function together with the incorporation of Dubey and Hillier's concept of generalized yield stress for anisotropic media in the constitutive equation correctly predicts the trends observed in experiments. The incorporation of the strong axial stress interaction necessary to predict the correct trends in the shear response is directly traced to the high residual axial stresses in the matrix induced during fabrication of the composite.

  7. Predictions for the Dirac CP-violating phase from sum rules

    NASA Astrophysics Data System (ADS)

    Delgadillo, Luis A.; Everett, Lisa L.; Ramos, Raymundo; Stuart, Alexander J.

    2018-05-01

    We explore the implications of recent results relating the Dirac CP-violating phase to predicted and measured leptonic mixing angles within a standard set of theoretical scenarios in which charged lepton corrections are responsible for generating a nonzero value of the reactor mixing angle. We employ a full set of leptonic sum rules as required by the unitarity of the lepton mixing matrix, which can be reduced to predictions for the observable mixing angles and the Dirac CP-violating phase in terms of model parameters. These sum rules are investigated within a given set of theoretical scenarios for the neutrino sector diagonalization matrix for several known classes of charged lepton corrections. The results provide explicit maps of the allowed model parameter space within each given scenario and assumed form of charged lepton perturbations.

  8. Matrix-Product-State Algorithm for Finite Fractional Quantum Hall Systems

    NASA Astrophysics Data System (ADS)

    Liu, Zhao; Bhatt, R. N.

    2015-09-01

    Exact diagonalization is a powerful tool to study fractional quantum Hall (FQH) systems. However, its capability is limited by the exponentially increasing computational cost. In order to overcome this difficulty, density-matrix-renormalization-group (DMRG) algorithms were developed for much larger system sizes. Very recently, it was realized that some model FQH states have an exact matrix-product-state (MPS) representation. Motivated by this, here we report an MPS code, which is closely related to, but different from, the traditional DMRG language, for finite FQH systems on the cylinder geometry. By representing the many-body Hamiltonian as a matrix-product-operator (MPO) and using single-site updates and density matrix correction, we show that our code can efficiently search for the ground state of various FQH systems. We also compare the performance of our code with traditional DMRG. The possible generalization of our code to infinite FQH systems and other physical systems is also discussed.
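    The basic object in this record, the matrix-product-state representation itself, can be illustrated with a short generic sketch (unrelated to the authors' code): any state vector of a chain of two-level systems can be decomposed exactly into MPS tensors by sequential SVDs, and contracted back.

    ```python
    import numpy as np

    def to_mps(psi, n_sites, d=2):
        """Decompose a state vector of n_sites d-level systems into MPS
        tensors A[i] of shape (chi_left, d, chi_right) by sequential SVD."""
        tensors = []
        chi = 1
        rest = psi.reshape(chi * d, -1)
        for _ in range(n_sites - 1):
            U, s, Vt = np.linalg.svd(rest, full_matrices=False)
            chi_new = len(s)
            tensors.append(U.reshape(chi, d, chi_new))
            rest = (s[:, None] * Vt).reshape(chi_new * d, -1)
            chi = chi_new
        tensors.append(rest.reshape(chi, d, 1))
        return tensors

    def from_mps(tensors):
        """Contract the MPS tensors back into a full state vector."""
        out = tensors[0]
        for A in tensors[1:]:
            out = np.tensordot(out, A, axes=([-1], [0]))
        return out.reshape(-1)
    ```

    Practical MPS/DMRG codes additionally truncate the singular values to cap the bond dimension; the exact (untruncated) decomposition above is only feasible for small chains.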

  9. Determination of human-use pharmaceuticals in filtered water by direct aqueous injection: high-performance liquid chromatography/tandem mass spectrometry

    USGS Publications Warehouse

    Furlong, Edward T.; Noriega, Mary C.; Kanagy, Christopher J.; Kanagy, Leslie K.; Coffey, Laura J.; Burkhardt, Mark R.

    2014-01-01

    This report describes a method for the determination of 110 human-use pharmaceuticals using a 100-microliter aliquot of a filtered water sample directly injected into a high-performance liquid chromatograph coupled to a triple-quadrupole tandem mass spectrometer using an electrospray ionization source operated in the positive ion mode. The pharmaceuticals were separated by using a reversed-phase gradient of formic acid/ammonium formate-modified water and methanol. Multiple reaction monitoring of two fragmentations of the protonated molecular ion of each pharmaceutical to two unique product ions was used to identify each pharmaceutical qualitatively. The primary multiple reaction monitoring precursor-product ion transition was quantified for each pharmaceutical relative to the primary multiple reaction monitoring precursor-product transition of one of 19 isotope-dilution standard pharmaceuticals or the pesticide atrazine, using an exact stable isotope analogue where possible. Each isotope-dilution standard was selected, when possible, for its chemical similarity to the unlabeled pharmaceutical of interest, and added to the sample after filtration but prior to analysis. Method performance for each pharmaceutical was determined for reagent water, groundwater, treated drinking water, surface water, treated wastewater effluent, and wastewater influent sample matrixes that this method will likely be applied to. Each matrix was evaluated in order of increasing complexity to demonstrate (1) the sensitivity of the method in different water matrixes and (2) the effect of sample matrix, particularly matrix enhancement or suppression of the precursor ion signal, on the quantitative determination of pharmaceutical concentrations. Recovery of water samples spiked (fortified) with the suite of pharmaceuticals determined by this method typically was greater than 90 percent in reagent water, groundwater, drinking water, and surface water. 
Correction for ambient environmental concentrations of pharmaceuticals hampered the determination of absolute recoveries and method sensitivity of some compounds in some water types, particularly for wastewater effluent and influent samples. The method detection limit of each pharmaceutical was determined from analysis of pharmaceuticals fortified at multiple concentrations in reagent water. The calibration range for each compound typically spanned three orders of magnitude of concentration. Absolute sensitivity for some compounds, using isotope-dilution quantitation, ranged from 0.45 to 94.1 nanograms per liter, primarily as a result of the inherent ionization efficiency of each pharmaceutical in the electrospray ionization process. Holding-time studies indicate that acceptable recoveries of pharmaceuticals can be obtained from filtered water samples held at 4 °C for as long as 9 days after sample collection. Freezing samples to provide for storage for longer periods currently (2014) is under evaluation by the National Water Quality Laboratory.

  10. Nonlinear earthquake analysis of reinforced concrete frames with fiber and Bernoulli-Euler beam-column element.

    PubMed

    Karaton, Muhammet

    2014-01-01

    A beam-column element based on the Euler-Bernoulli beam theory is investigated for nonlinear dynamic analysis of reinforced concrete (RC) structural elements. The stiffness matrix of this element is obtained using the rigidity method. A solution technique that includes a nonlinear dynamic substructure procedure is developed for dynamic analyses of RC frames. A predictor-corrector form of the Bossak-α method is applied as the dynamic integration scheme. Experimental data for an RC column element are compared with numerical results obtained from the proposed solution technique to verify the numerical solutions. Furthermore, nonlinear cyclic analysis results for a portal reinforced concrete frame are obtained to compare the proposed solution technique with a fibre element based on the flexibility method. Finally, seismic damage analyses of an 8-story RC frame structure with a soft story are investigated for cases of lumped/distributed mass and load. Damage regions, propagation, and intensities according to both approaches are examined.

  11. Classification of 'Chemlali' accessions according to the geographical area using chemometric methods of phenolic profiles analysed by HPLC-ESI-TOF-MS.

    PubMed

    Taamalli, Amani; Arráez Román, David; Zarrouk, Mokhtar; Segura-Carretero, Antonio; Fernández-Gutiérrez, Alberto

    2012-05-01

    The present work describes a classification method for Tunisian 'Chemlali' olive oils based on their phenolic composition and geographical area. For this purpose, data obtained by HPLC-ESI-TOF-MS from 13 samples of extra virgin olive oil from different production areas throughout the country were used, focusing on 23 detected phenolic compounds. The quantitative results showed significant variability among the analysed oil samples. Factor analysis using principal components was applied to the data in order to reduce the number of factors that explain the variability of the selected compounds. The resulting data matrix was subjected to a canonical discriminant analysis (CDA) in order to classify the oil samples. These results showed that 100% of cross-validated original group cases were correctly classified, which proves the usefulness of the selected variables. Copyright © 2011 Elsevier Ltd. All rights reserved.
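    The chemometric pipeline can be sketched in simplified form. The snippet below is an illustration only: it uses PCA via SVD for the factor-reduction step, and a nearest-centroid rule as a simple stand-in for the canonical discriminant analysis used in the paper.

    ```python
    import numpy as np

    def pca_scores(X, n_components):
        """Project mean-centered data onto its first principal components."""
        Xc = X - X.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:n_components].T

    def nearest_centroid(scores, labels, query):
        """Assign each query row to the class with the closest centroid."""
        classes = np.unique(labels)
        centroids = np.array([scores[labels == c].mean(axis=0) for c in classes])
        d = np.linalg.norm(query[:, None, :] - centroids[None, :, :], axis=2)
        return classes[d.argmin(axis=1)]
    ```

    With well-separated geographical groups in the reduced factor space, even this crude classifier achieves the kind of perfect cross-validated classification reported in the abstract.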

  12. Exact solution of corner-modified banded block-Toeplitz eigensystems

    NASA Astrophysics Data System (ADS)

    Cobanera, Emilio; Alase, Abhijeet; Ortiz, Gerardo; Viola, Lorenza

    2017-05-01

    Motivated by the challenge of seeking a rigorous foundation for the bulk-boundary correspondence for free fermions, we introduce an algorithm for determining exactly the spectrum and a generalized-eigenvector basis of a class of banded block quasi-Toeplitz matrices that we call corner-modified. Corner modifications of otherwise arbitrary banded block-Toeplitz matrices capture the effect of boundary conditions and the associated breakdown of translational invariance. Our algorithm leverages the interplay between a non-standard, projector-based method of kernel determination (physically, a bulk-boundary separation) and families of linear representations of the algebra of matrix Laurent polynomials. Thanks to the fact that these representations act on infinite-dimensional carrier spaces in which translation symmetry is restored, it becomes possible to determine the eigensystem of an auxiliary projected block-Laurent matrix. This results in an analytic eigenvector Ansatz, independent of the system size, which we prove is guaranteed to contain the full solution of the original finite-dimensional problem. The actual solution is then obtained by imposing compatibility with a boundary matrix, whose shape is also independent of system size. As an application, we show analytically that eigenvectors of short-ranged fermionic tight-binding models may display power-law corrections to exponential behavior, and demonstrate the phenomenon for the paradigmatic Majorana chain of Kitaev.

  13. Phonons in two-dimensional soft colloidal crystals.

    PubMed

    Chen, Ke; Still, Tim; Schoenholz, Samuel; Aptowicz, Kevin B; Schindler, Michael; Maggs, A C; Liu, Andrea J; Yodh, A G

    2013-08-01

    The vibrational modes of pristine and polycrystalline monolayer colloidal crystals composed of thermosensitive microgel particles are measured using video microscopy and covariance matrix analysis. At low frequencies, the Debye relation for two-dimensional harmonic crystals is observed in both crystal types; at higher frequencies, evidence for van Hove singularities in the phonon density of states is significantly smeared out by experimental noise and measurement statistics. The effects of these errors are analyzed using numerical simulations. We introduce methods to correct for these limitations, which can be applied to disordered systems as well as crystalline ones, and we show that application of the error correction procedure to the experimental data leads to more pronounced van Hove singularities in the pristine crystal. Finally, quasilocalized low-frequency modes in polycrystalline two-dimensional colloidal crystals are identified and demonstrated to correlate with structural defects such as dislocations, suggesting that quasilocalized low-frequency phonon modes may be used to identify local regions vulnerable to rearrangements in crystalline as well as amorphous solids.
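    The covariance-matrix route from particle trajectories to vibrational modes can be sketched generically; this is not the authors' analysis code, and it assumes the harmonic approximation, under which equipartition links the displacement covariance to the stiffness matrix, giving mode frequencies omega_i = sqrt(kT / (m * lambda_i)) from the covariance eigenvalues lambda_i.

    ```python
    import numpy as np

    def phonon_modes(positions, kT=1.0, mass=1.0):
        """Estimate harmonic mode frequencies from particle trajectories.

        positions: (n_frames, n_particles, dim) array of tracked positions.
        In the harmonic approximation <u u^T> = kT * K^{-1}, so the mode
        frequencies follow from the eigenvalues of the displacement
        covariance matrix.
        """
        n_frames, n, dim = positions.shape
        u = positions - positions.mean(axis=0)   # displacements from mean positions
        u = u.reshape(n_frames, n * dim)
        cov = u.T @ u / n_frames                 # displacement covariance matrix
        lam, vecs = np.linalg.eigh(cov)          # ascending eigenvalues
        omega = np.sqrt(kT / (mass * lam[::-1])) # ascending frequencies
        return omega, vecs[:, ::-1]
    ```

    The error-correction methods in the paper address exactly the weak point of this estimator: finite measurement statistics and tracking noise bias the small covariance eigenvalues, i.e. the high-frequency end of the spectrum.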

  14. Wavefront Measurement in Ophthalmology

    NASA Astrophysics Data System (ADS)

    Molebny, Vasyl

    Wavefront sensing or aberration measurement in the eye is a key problem in refractive surgery and vision correction with laser. The accuracy of these measurements is critical for the outcome of the surgery. Practically all clinical methods use laser as a source of light. To better understand the background, we analyze the pre-laser techniques developed over centuries. They allowed new discoveries of the nature of the optical system of the eye, and many served as prototypes for laser-based wavefront sensing technologies. Hartmann's test was strengthened by Platt's lenslet matrix and the CCD two-dimensional photodetector acquired a new life as a Hartmann-Shack sensor in Heidelberg. Tscherning's aberroscope, invented in France, was transformed into a laser device known as a Dresden aberrometer, having seen its reincarnation in Germany with Seiler's help. The clinical ray tracing technique was brought to life by Molebny in Ukraine, and skiascopy was created by Fujieda in Japan. With the maturation of these technologies, new demands now arise for their wider implementation in optometry and vision correction with customized contact and intraocular lenses.

  15. A fast signal subspace approach for the determination of absolute levels from phased microphone array measurements

    NASA Astrophysics Data System (ADS)

    Sarradj, Ennes

    2010-04-01

    Phased microphone arrays are used in a variety of applications for the estimation of acoustic source location and spectra. The popular conventional delay-and-sum beamforming methods used with such arrays suffer from inaccurate estimation of absolute source levels and, in some cases, from low resolution. Deconvolution approaches such as DAMAS have better performance but require high computational effort. A fast beamforming method is proposed that can be used in conjunction with a phased microphone array in applications focused on the correct quantitative estimation of acoustic source spectra. This method is based on an eigenvalue decomposition of the cross spectral matrix of microphone signals and uses the eigenvalues from the signal subspace to estimate absolute source levels. The theoretical basis of the method is discussed together with an assessment of the quality of the estimation. Experimental tests using a loudspeaker setup and an airfoil trailing-edge noise setup in an aeroacoustic wind tunnel show that the proposed method is robust and leads to reliable quantitative results.
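    The core estimation step can be sketched as follows. This is an illustration under simplifying assumptions (a single source with a unit-modulus steering vector), not the paper's implementation: eigendecompose the cross spectral matrix and read the received power off the signal-subspace eigenvalues.

    ```python
    import numpy as np

    def cross_spectral_matrix(snapshots):
        """Average outer products of per-frequency snapshot vectors.
        snapshots: (n_snapshots, n_mics) complex array."""
        return snapshots.conj().T @ snapshots / len(snapshots)

    def signal_subspace_power(csm, n_sources=1):
        """Split the cross spectral matrix into signal and noise subspaces
        via eigenvalue decomposition and return the summed signal-subspace
        eigenvalues as an estimate of the total received source power."""
        lam = np.linalg.eigvalsh(csm)   # eigenvalues in ascending order
        return lam[-n_sources:].sum()
    ```

    For a single source with unit-modulus steering vector over n_mics microphones, the largest eigenvalue is approximately n_mics times the per-microphone source auto-power, so dividing by the array size recovers an absolute level estimate.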

  16. Computer-aided surgical planner for a new bone deformity correction device using axis-angle representation.

    PubMed

    Wu, Ying Ying; Plakseychuk, Anton; Shimada, Kenji

    2014-11-01

    Current external fixators for distraction osteogenesis (DO) are unable to correct all types of deformities in the lower limb and are difficult to use because of the lack of a pre-surgical planning system. We propose a DO system that consists of a surgical planner and a new, easy-to-set-up unilateral fixator that not only corrects all lower limb deformities, but also generates the contralateral/predefined bone shape. Conventionally, bulky constructs with six or more joints (six degrees of freedom, 6DOF) are needed to correct a 3D deformity. By applying the axis-angle representation, we can achieve that with a compact construct with only two joints (2DOF). The proposed system makes use of computer-aided design software and computational methods to plan and simulate the planned procedure. Results of our stress analysis suggest that the stiffness of our proposed fixator is comparable to that of the Orthofix unilateral external fixator. We tested the surgical system on a model of an adult deformed tibia, and the resulting bone trajectory deviates from the target bone trajectory by 1.8 mm, which is below our defined threshold error of 2 mm. We also extracted the transformation matrix that defines the deformity from the bone model and simulated the planned procedure. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
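    The axis-angle representation at the heart of the planner can be illustrated with Rodrigues' rotation formula, which builds the rotation matrix for a given axis and angle; the sketch below is generic and not tied to the authors' software.

    ```python
    import numpy as np

    def axis_angle_to_matrix(axis, angle):
        """Rodrigues' formula: rotation matrix for a rotation of `angle`
        radians about the unit vector `axis`."""
        axis = np.asarray(axis, dtype=float)
        axis = axis / np.linalg.norm(axis)
        K = np.array([[0.0, -axis[2], axis[1]],
                      [axis[2], 0.0, -axis[0]],
                      [-axis[1], axis[0], 0.0]])  # cross-product matrix
        return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    ```

    The appeal for the fixator design is that any 3D rotational deformity reduces to a single axis and angle, so one rotational joint aligned with that axis (plus one translational joint) can replace a six-joint construct.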

  17. Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew

    2006-01-01

    The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods have been shown to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs. 
The third class of methods deals with transitions between states described by rate constants. These problems are isomorphic with chemical kinetics problems. Recently, several efficient techniques for this purpose have been developed based on the approach originally proposed by Gillespie. Although the utility of the techniques mentioned above for Bayesian problems has not yet been determined, further research along these lines is warranted.
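    The parallel tempering technique described above can be sketched in a few lines. This is a generic illustration on a one-dimensional double-well target, not code from the work discussed: one Metropolis walker runs per temperature, and neighboring replicas occasionally exchange states with the standard swap acceptance rule.

    ```python
    import numpy as np

    def parallel_tempering(log_pdf, temps, n_steps, step=0.5, seed=0):
        """Minimal parallel tempering for a 1-D target: one Metropolis
        walker per temperature, with periodic replica-exchange swaps.
        Returns the samples of the coldest (temps[0] = 1) chain."""
        rng = np.random.default_rng(seed)
        x = np.zeros(len(temps))
        samples = []
        for _ in range(n_steps):
            for i, T in enumerate(temps):        # Metropolis update per replica
                prop = x[i] + step * T * rng.normal()
                if np.log(rng.uniform()) < (log_pdf(prop) - log_pdf(x[i])) / T:
                    x[i] = prop
            i = rng.integers(len(temps) - 1)     # attempt one neighbor swap
            d = (1 / temps[i] - 1 / temps[i + 1]) * (log_pdf(x[i + 1]) - log_pdf(x[i]))
            if np.log(rng.uniform()) < d:
                x[i], x[i + 1] = x[i + 1], x[i]
            samples.append(x[0])                 # the T=1 chain samples the target
        return np.array(samples)
    ```

    The hot replicas cross barriers between the well-separated modes easily, and swaps propagate those crossings down to the cold chain, which is what restores ergodic sampling of multimodal pdfs.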

  18. Retrievals of atmospheric columnar carbon dioxide and methane from GOSAT observations with photon path-length probability density function (PPDF) method

    NASA Astrophysics Data System (ADS)

    Bril, A.; Oshchepkov, S.; Yokota, T.; Yoshida, Y.; Morino, I.; Uchino, O.; Belikov, D. A.; Maksyutov, S. S.

    2014-12-01

    We retrieved the column-averaged dry air mole fractions of atmospheric carbon dioxide (XCO2) and methane (XCH4) from the radiance spectra measured by the Greenhouse gases Observing SATellite (GOSAT) for 48 months of satellite operation from June 2009. A recent version of the photon path-length probability density function (PPDF)-based algorithm was used to estimate XCO2 and optical path modifications in terms of PPDF parameters. We also present results of numerical simulations for over-land observations and "sharp edge" tests for sun-glint mode to discuss the algorithm's accuracy under conditions of strong optical path modification. For the methane abundance retrieved from the 1.67-µm absorption band, we applied an optical path correction based on PPDF parameters from the 1.6-µm carbon dioxide (CO2) absorption band. Similarly to the CO2-proxy technique, this correction assumes identical light path modifications in the 1.67-µm and 1.6-µm bands. However, the proxy approach needs pre-defined XCO2 values to compute XCH4, whilst the PPDF-based approach does not use prior assumptions on CO2 concentrations. Post-processing data correction for XCO2 and XCH4 over-land observations was performed using a regression matrix based on multivariate analysis of variance (MANOVA). The MANOVA statistics were applied to the GOSAT retrievals using reference collocated measurements of the Total Carbon Column Observing Network (TCCON). The regression matrix was constructed using the parameters that were found to correlate with GOSAT-TCCON discrepancies: the PPDF parameters α and ρ, which are mainly responsible for shortening and lengthening of the optical path due to atmospheric light scattering; solar and satellite zenith angles; surface pressure; and surface albedo in three GOSAT short wave infrared (SWIR) bands. 
Application of the post-correction generally improves the statistical characteristics of the GOSAT-TCCON correlation diagrams for individual stations as well as for aggregated data. In addition to the analysis of the observations over 12 TCCON stations, we estimated temporal and spatial trends (interannual XCO2 and XCH4 variations, seasonal cycles, latitudinal gradients) and compared them with modeled results as well as with similar estimates from other GOSAT retrievals.

  19. An analysis of hypercritical states in elastic and inelastic systems

    NASA Astrophysics Data System (ADS)

    Kowalczk, Maciej

    The author raises a wide range of problems whose common characteristic is an analysis of hypercritical states in elastic and inelastic systems. The article consists of two basic parts. The first part primarily discusses problems of modelling hypercritical states, while the second analyzes numerical methods (so-called continuation methods) used to solve non-linear problems. The original approaches for modelling hypercritical states found in this article include the combination of plasticity theory and an energy condition for cracking, accounting for the variability and cyclical nature of the forms of fracture of a brittle material under a die, and the combination of plasticity theory and a simplified description of the phenomenon of localization along a discontinuity line. The author presents analytical solutions of three non-linear problems for systems made of elastic/brittle/plastic and elastic/ideally plastic materials. The author proceeds to discuss the analytical basics of continuation methods, analyzes the significance of the parameterization of non-linear problems, provides a method for selecting control parameters based on an analysis of the rank of a rectangular matrix of a uniform system of increment equations, and provides a new method for selecting an equilibrium path originating from a bifurcation point. The author gives a general outline of continuation methods based on an analysis of the rank of a matrix of a corrective system of equations, and supplements the theoretical solutions with numerical solutions of non-linear problems for rod systems and problems of the plastic disintegration of a notched rectangular plastic plate.

  20. On functions of quasi-Toeplitz matrices

    NASA Astrophysics Data System (ADS)

    Bini, D. A.; Massei, S.; Meini, B.

    2017-11-01

    Let $a(z)=\sum_{i\in\mathbb{Z}} a_i z^i$ be a complex-valued function, defined for $|z|=1$, such that $\sum_{i=-\infty}^{+\infty} |i a_i| < \infty$. Consider the semi-infinite Toeplitz matrix $T(a)=(t_{i,j})_{i,j\in\mathbb{Z}^+}$ associated with the symbol $a(z)$, such that $t_{i,j}=a_{j-i}$. A quasi-Toeplitz matrix associated with the symbol $a(z)$ is a matrix of the form $A=T(a)+E$ where $E=(e_{i,j})$ satisfies $\sum_{i,j\in\mathbb{Z}^+}|e_{i,j}|<\infty$, and is called a QT-matrix. Given a function $f(x)$ and a QT-matrix $M$, we provide conditions under which $f(M)$ is well defined and is a QT-matrix. Moreover, we introduce a parametrization of QT-matrices and algorithms for the computation of $f(M)$. We treat the case where $f(x)$ is given in terms of a power series and the case where $f(x)$ is defined in terms of a Cauchy integral. This analysis is also applied to finite matrices which can be written as the sum of a Toeplitz matrix and a low rank correction. Bibliography: 27 titles.
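    The finite-matrix case mentioned at the end (a Toeplitz matrix plus a low-rank correction, with $f$ given as a power series) can be illustrated with a small sketch. The construction below is generic, not the authors' algorithm, and the truncated series is for illustration only.

    ```python
    import numpy as np

    def toeplitz_from_symbol(coeffs, offsets, n):
        """Finite n x n section of T(a), where a(z) = sum_k coeffs[k] * z**offsets[k];
        the (i, j) entry equals a_{j-i}."""
        T = np.zeros((n, n))
        for c, k in zip(coeffs, offsets):
            T += c * np.eye(n, k=k)   # place coefficient c on the k-th diagonal
        return T

    def matrix_power_series(coeffs, M):
        """Evaluate f(M) = sum_k coeffs[k] * M**k by Horner's rule."""
        F = coeffs[-1] * np.eye(len(M))
        for c in coeffs[-2::-1]:
            F = F @ M + c * np.eye(len(M))
        return F
    ```

    For example, the symbol $a(z) = z^{-1} - 2 + z$ gives the tridiagonal discrete Laplacian, and modifying a corner entry yields a corner-modified quasi-Toeplitz matrix of the kind the paper analyzes.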
