Sample records for polynomial-based probe-specific correction

  1. Correction factors for on-line microprobe analysis of multielement alloy systems

    NASA Technical Reports Server (NTRS)

    Unnam, J.; Tenney, D. R.; Brewer, W. D.

    1977-01-01

    An on-line correction technique was developed for the conversion of electron probe X-ray intensities into concentrations of emitting elements. This technique consisted of off-line calculation and representation of binary interaction data which were read into an on-line minicomputer to calculate variable correction coefficients. These coefficients were used to correct the X-ray data without significantly increasing computer core requirements. The binary interaction data were obtained by running Colby's MAGIC 4 program in the reverse mode. The data for each binary interaction were represented by polynomial coefficients obtained by least-squares fitting a third-order polynomial. Polynomial coefficients were generated for most of the common binary interactions at different accelerating potentials and are included. Results are presented for the analyses of several alloy standards to demonstrate the applicability of this correction procedure.
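
    As a minimal sketch of the representation step described above (in Python, with hypothetical k-ratio/concentration pairs; the MAGIC 4 binary-interaction data are not reproduced here), a third-order polynomial can be fitted and applied as follows:

      import numpy as np

      # Hypothetical binary-interaction data: measured k-ratios vs. true
      # concentrations for one element pair at one accelerating potential.
      k_ratio = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.85])
      conc = np.array([0.12, 0.29, 0.45, 0.59, 0.73, 0.86])

      # Least-squares fit of a third-order polynomial, as in the abstract;
      # the four coefficients are what an on-line minicomputer would store.
      coeffs = np.polyfit(k_ratio, conc, deg=3)

      # On-line use: convert a new measured intensity ratio to concentration.
      print(np.polyval(coeffs, 0.50))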

  2. Specification Search for Identifying the Correct Mean Trajectory in Polynomial Latent Growth Models

    ERIC Educational Resources Information Center

    Kim, Minjung; Kwok, Oi-Man; Yoon, Myeongsun; Willson, Victor; Lai, Mark H. C.

    2016-01-01

    This study investigated the optimal strategy for model specification search under the latent growth modeling (LGM) framework, specifically on searching for the correct polynomial mean or average growth model when there is no a priori hypothesized model in the absence of theory. In this simulation study, the effectiveness of different starting…

  3. Volumetric calibration of a plenoptic camera.

    PubMed

    Hall, Elise Munz; Fahringer, Timothy W; Guildenbecher, Daniel R; Thurow, Brian S

    2018-02-01

    The volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods are examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
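
    The mapping step lends itself to a short sketch. The code below (Python; the quadratic basis, synthetic dot-card data, and coefficients are illustrative assumptions, not the authors' implementation) fits a low-order 3D polynomial mapping from object space to sensor coordinates by linear least squares:

      import numpy as np

      # Hypothetical calibration data: known dot positions in object space
      # (x, y, z) and their observed sensor-plane locations (u, v).
      rng = np.random.default_rng(0)
      xyz = rng.uniform(-1, 1, size=(200, 3))
      uv = np.column_stack([0.9 * xyz[:, 0] + 0.05 * xyz[:, 2] ** 2,
                            0.9 * xyz[:, 1] - 0.04 * xyz[:, 0] * xyz[:, 2]])

      def features(p):
          # Second-order 3D polynomial basis, a low-order stand-in for the
          # mapping function described in the abstract.
          x, y, z = p.T
          return np.column_stack([np.ones_like(x), x, y, z, x * y, x * z,
                                  y * z, x ** 2, y ** 2, z ** 2])

      # Solve for the mapping coefficients by linear least squares.
      coef, *_ = np.linalg.lstsq(features(xyz), uv, rcond=None)

      # Relate a new object-space point directly to a sensor location.
      print(features(np.array([[0.1, -0.2, 0.3]])) @ coef)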

  4. Volumetric calibration of a plenoptic camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert

    Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods are examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.

  5. Volumetric calibration of a plenoptic camera

    DOE PAGES

    Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert; ...

    2018-02-01

    Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods are examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.

  6. Three-dimensional ophthalmic optical coherence tomography with a refraction correction algorithm

    NASA Astrophysics Data System (ADS)

    Zawadzki, Robert J.; Leisser, Christoph; Leitgeb, Rainer; Pircher, Michael; Fercher, Adolf F.

    2003-10-01

    We built an optical coherence tomography (OCT) system with a rapid scanning optical delay (RSOD) line, which allows probing of the full axial eye length. The system produces three-dimensional (3D) data sets that are used to generate 3D tomograms of the model eye. The raw tomographic data were processed by an algorithm based on Snell's law to correct the interface positions. The Zernike polynomial representation of the interfaces allows quantitative wave-aberration measurements. 3D images of our results are presented to illustrate the capabilities of the system and the algorithm performance. The system allows us to measure intra-ocular distances.

  7. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

    Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment, and this background significantly influences analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting, and model-free methods, but few have applied them to LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. In a background correction simulation, the spline interpolation method achieved the largest signal-to-background ratio (SBR) compared with polynomial fitting, Lorentz fitting, and the model-free method, and all of these methods yielded larger SBR values than those obtained before background correction (the SBR before correction is 10.0992, whereas the SBR values after correction by spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still achieved a large SBR value, whereas polynomial fitting and the model-free method obtained low SBR values. All of the background correction methods improved the quantitative results for Cu relative to those obtained before correction (the linear correlation coefficient before correction is 0.9776, whereas the values after correction by spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting, and the model-free method. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
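
    A minimal sketch of spline-based background estimation (Python/SciPy, on a synthetic spectrum; anchor-point selection via local minima is an assumption, since the paper's exact knot-selection rule is not given here):

      import numpy as np
      from scipy.interpolate import CubicSpline
      from scipy.signal import argrelmin

      # Synthetic LIBS-like spectrum: peaks on a smooth continuous background.
      x = np.linspace(0.0, 1.0, 1000)
      background = 2.0 + 1.5 * np.exp(-3.0 * x)
      peaks = sum(a * np.exp(-0.5 * ((x - c) / 0.004) ** 2)
                  for a, c in [(8.0, 0.2), (5.0, 0.5), (6.0, 0.8)])
      spectrum = background + peaks

      # Take local minima as background anchor points and spline through them.
      idx = np.concatenate(([0], argrelmin(spectrum, order=25)[0], [x.size - 1]))
      baseline = CubicSpline(x[idx], spectrum[idx])(x)

      corrected = spectrum - baseline
      print(corrected.max() / baseline.mean())  # crude signal-to-background figure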

  8. Adaptable gene-specific dye bias correction for two-channel DNA microarrays.

    PubMed

    Margaritis, Thanasis; Lijnzaad, Philip; van Leenen, Dik; Bouwmeester, Diane; Kemmeren, Patrick; van Hooff, Sander R; Holstege, Frank C P

    2009-01-01

    DNA microarray technology is a powerful tool for monitoring gene expression or for finding the location of DNA-bound proteins. DNA microarrays can suffer from gene-specific dye bias (GSDB), causing some probes to be affected more by the dye than by the sample. This results in large measurement errors, which vary considerably for different probes and also across different hybridizations. GSDB is not corrected by conventional normalization and has been difficult to address systematically because of its variance. We show that GSDB is influenced by label incorporation efficiency, explaining the variation of GSDB across different hybridizations. A correction method (Gene- And Slide-Specific Correction, GASSCO) is presented, whereby sequence-specific corrections are modulated by the overall bias of individual hybridizations. GASSCO outperforms earlier methods and works well on a variety of publicly available datasets covering a range of platforms, organisms and applications, including ChIP on chip. A sequence-based model is also presented, which predicts which probes will suffer most from GSDB, useful for microarray probe design and correction of individual hybridizations. Software implementing the method is publicly available.

  9. Adaptable gene-specific dye bias correction for two-channel DNA microarrays

    PubMed Central

    Margaritis, Thanasis; Lijnzaad, Philip; van Leenen, Dik; Bouwmeester, Diane; Kemmeren, Patrick; van Hooff, Sander R; Holstege, Frank CP

    2009-01-01

    DNA microarray technology is a powerful tool for monitoring gene expression or for finding the location of DNA-bound proteins. DNA microarrays can suffer from gene-specific dye bias (GSDB), causing some probes to be affected more by the dye than by the sample. This results in large measurement errors, which vary considerably for different probes and also across different hybridizations. GSDB is not corrected by conventional normalization and has been difficult to address systematically because of its variance. We show that GSDB is influenced by label incorporation efficiency, explaining the variation of GSDB across different hybridizations. A correction method (Gene- And Slide-Specific Correction, GASSCO) is presented, whereby sequence-specific corrections are modulated by the overall bias of individual hybridizations. GASSCO outperforms earlier methods and works well on a variety of publicly available datasets covering a range of platforms, organisms and applications, including ChIP on chip. A sequence-based model is also presented, which predicts which probes will suffer most from GSDB, useful for microarray probe design and correction of individual hybridizations. Software implementing the method is publicly available. PMID:19401678

  10. Hydrodynamics-based functional forms of activity metabolism: a case for the power-law polynomial function in animal swimming energetics.

    PubMed

    Papadopoulos, Anthony

    2009-01-01

    The first-degree power-law polynomial function is frequently used to describe activity metabolism for steady swimming animals. This function has been used in hydrodynamics-based metabolic studies to evaluate important parameters of energetic costs, such as the standard metabolic rate and the drag power indices. In theory, however, the power-law polynomial function of any degree greater than one can be used to describe activity metabolism for steady swimming animals. In fact, activity metabolism has been described by the conventional exponential function and the cubic polynomial function, although only the power-law polynomial function models drag power since it conforms to hydrodynamic laws. Consequently, the first-degree power-law polynomial function yields incorrect parameter values of energetic costs if activity metabolism is governed by the power-law polynomial function of any degree greater than one. This issue is important in bioenergetics because correct comparisons of energetic costs among different steady swimming animals cannot be made unless the degree of the power-law polynomial function derives from activity metabolism. In other words, a hydrodynamics-based functional form of activity metabolism is a power-law polynomial function of any degree greater than or equal to one. Therefore, the degree of the power-law polynomial function should be treated as a parameter, not as a constant. This new treatment not only conforms to hydrodynamic laws, but also ensures correct comparisons of energetic costs among different steady swimming animals. Furthermore, the exponential power-law function, which is a new hydrodynamics-based functional form of activity metabolism, is a special case of the power-law polynomial function. Hence, the link between the hydrodynamics of steady swimming and the exponential-based metabolic model is defined.
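
    The abstract's central point, that the degree of the power-law polynomial should be estimated rather than fixed at one, can be illustrated with a nonlinear fit (Python/SciPy; the respirometry numbers are invented for the sketch):

      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical swimming-respirometry data: metabolic rate vs. speed.
      v = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
      mr = np.array([12.1, 14.0, 17.6, 23.3, 31.4, 42.2])

      # First-degree power-law polynomial generalized so the degree d is a
      # free parameter: MR(v) = SMR + b * v**d.
      def model(v, smr, b, d):
          return smr + b * v ** d

      (smr, b, d), _ = curve_fit(model, v, mr, p0=(10.0, 1.0, 1.5))
      print(f"SMR={smr:.2f}, drag-power coefficient={b:.2f}, degree={d:.2f}")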

  11. Reed-Solomon Codes and the Deep Hole Problem

    NASA Astrophysics Data System (ADS)

    Keti, Matt

    In many types of modern communication, a message is transmitted over a noisy medium. When this is done, there is a chance that the message will be corrupted. An error-correcting code adds redundant information to the message which allows the receiver to detect and correct errors accrued during the transmission. We will study the famous Reed-Solomon code (found in QR codes, compact discs, deep space probes, ...) and investigate the limits of its error-correcting capacity. It can be shown that understanding this is related to understanding the "deep hole" problem, which is a question of determining when a received message has, in a sense, incurred the worst possible corruption. We partially resolve this in its traditional context, when the code is based on the finite field F_q or its multiplicative group F_q^*, as well as in new contexts, when it is based on a subgroup of F_q^* or the image of a Dickson polynomial. This is a new and important problem that could give insight on the true error-correcting potential of the Reed-Solomon code.

  12. Analysis on the misalignment errors between Hartmann-Shack sensor and 45-element deformable mirror

    NASA Astrophysics Data System (ADS)

    Liu, Lihui; Zhang, Yi; Tao, Jianjun; Cao, Fen; Long, Yin; Tian, Pingchuan; Chen, Shangwu

    2017-02-01

    For a 45-element adaptive optics system, a model of the 45-element deformable mirror is built in COMSOL Multiphysics, and each actuator's influence function is obtained by the finite element method. The correction of optical aberrations by this system is simulated, and, taking the Strehl ratio of the corrected diffraction spot as the figure of merit, the system's ability to correct Zernike polynomial wave aberrations (modes 3-20) is analyzed in the presence of different translation and rotation errors between the Hartmann-Shack sensor and the deformable mirror. The computed results show that the correction ability for Zernike modes 3-9 is higher than that for modes 10-20, and that this ordering of correction ability across modes 3-20 does not change as the misalignment error changes. As the rotation error between the Hartmann-Shack sensor and the deformable mirror increases, the correction ability for modes 3-20 gradually decreases; as the translation error increases, the correction ability for modes 3-9 gradually decreases, while that for modes 10-20 fluctuates.

  13. Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna R.; Wickert, Mark A.

    2017-05-01

    A known problem with infrared imaging devices is their non-uniformity, which is the result of dark current and amplifier mismatch as well as the individual photoresponse of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear or piecewise-linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise-linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher order polynomial NUC algorithms feasible. This study comprehensively tests higher order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods, including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating point results show 30% less non-uniformity in post-corrected data when using a third order polynomial correction algorithm rather than a second order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing across multiple data sets provides next-generation model validation and performance benchmarks for higher order polynomial NUC methods.
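
    A per-pixel third-order correction of the kind tested here is easy to sketch (Python; the tiny array sizes, synthetic gain pattern, and flat-field levels are assumptions for illustration):

      import numpy as np

      # Hypothetical calibration stack: K flat-field exposures at known
      # radiance levels L[k], each an HxW frame from a SWIR imager.
      H, W, K = 4, 4, 6
      L = np.linspace(0.1, 1.0, K)
      rng = np.random.default_rng(1)
      gain = rng.normal(1.0, 0.1, (H, W))
      frames = gain[None] * L[:, None, None] ** 1.1 \
          + rng.normal(0.0, 0.01, (K, H, W))

      # Fit a third-order polynomial per pixel mapping response -> radiance.
      coeffs = np.empty((4, H, W))
      for i in range(H):
          for j in range(W):
              coeffs[:, i, j] = np.polyfit(frames[:, i, j], L, deg=3)

      # Correct a new frame pixel by pixel and check residual non-uniformity.
      new = gain * 0.55 ** 1.1
      corrected = sum(coeffs[m] * new ** (3 - m) for m in range(4))
      print(np.std(corrected))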

  14. Perceptually informed synthesis of bandlimited classical waveforms using integrated polynomial interpolation.

    PubMed

    Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan

    2012-01-01

    Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce signals similar in sound to analog music synthesizers but free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best trade-off between computational cost and sound quality. The superior method among those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
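
    For flavor, here is the simplest member of this family of techniques: the widely used two-sample "polyBLEP" correction, which adds a second-degree polynomial residual around each sawtooth discontinuity. It is a simpler relative of the integrated B-spline corrections studied in the paper, not the authors' method (Python):

      import numpy as np

      def poly_blep(t, dt):
          # Two-sample polynomial correction around a phase wrap.
          if t < dt:                  # just after the discontinuity
              t /= dt
              return t + t - t * t - 1.0
          if t > 1.0 - dt:            # just before the discontinuity
              t = (t - 1.0) / dt
              return t * t + t + t + 1.0
          return 0.0

      def sawtooth(f0, fs, n):
          # Naive sawtooth with the correction subtracted at each wrap.
          dt, phase, out = f0 / fs, 0.0, np.empty(n)
          for i in range(n):
              out[i] = 2.0 * phase - 1.0 - poly_blep(phase, dt)
              phase = (phase + dt) % 1.0
          return out

      print(sawtooth(1000.0, 44100.0, 512)[:4])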

  15. A Small and Slim Coaxial Probe for Single Rice Grain Moisture Sensing

    PubMed Central

    You, Kok Yeow; Mun, Hou Kit; You, Li Ling; Salleh, Jamaliah; Abbas, Zulkifly

    2013-01-01

    A moisture detection method for single rice grains using a slim and small open-ended coaxial probe is presented. The coaxial probe is suitable for nondestructive measurement of moisture values in rice grains ranging from 9.5% to 26%. Empirical polynomial models are developed to predict the gravimetric moisture content of rice based on reflection coefficients measured with a vector network analyzer. The relationship between the reflection coefficient and relative permittivity was also established using a regression method and expressed as a polynomial model, whose coefficients were obtained by fitting data from finite-element-based simulation. In addition, the designed single-grain sample holder and experimental setup are described. The measurement of single rice grains in this study is more precise than conventional measurement of bulk rice grains, as the random air gaps present in bulk grains are excluded. PMID:23493127

  16. Using polynomials to simplify fixed pattern noise and photometric correction of logarithmic CMOS image sensors.

    PubMed

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-10-16

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.

  17. Aberration corrections for free-space optical communications in atmosphere turbulence using orbital angular momentum states.

    PubMed

    Zhao, S M; Leach, J; Gong, L Y; Ding, J; Zheng, B Y

    2012-01-02

    The effect of atmosphere turbulence on light's spatial structure compromises the information capacity of photons carrying orbital angular momentum (OAM) in free-space optical (FSO) communications. In this paper, we study two aberration correction methods to mitigate this effect. The first is the Shack-Hartmann wavefront correction method, which is based on the Zernike polynomials, and the second is a phase correction method specific to OAM states. Our numerical results show that the phase correction method for OAM states outperforms the Shack-Hartmann wavefront correction method, although both methods significantly improve the purity of a single OAM state and the channel capacity of the FSO communication link. At the same time, our experimental results show that the values of the participation functions decrease under the phase correction method for OAM states, i.e., the correction method effectively mitigates the adverse effect of atmosphere turbulence.

  18. Using Polynomials to Simplify Fixed Pattern Noise and Photometric Correction of Logarithmic CMOS Image Sensors

    PubMed Central

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-01-01

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient. PMID:26501287

  19. A simplified procedure for correcting both errors and erasures of a Reed-Solomon code using the Euclidean algorithm

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.

    1987-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
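
    The Euclidean step itself fits in a few lines. The sketch below runs the algorithm on x^(2t) and a made-up syndrome polynomial over a prime field GF(p) (deployed RS decoders use GF(2^m); the prime modulus, t, and syndromes here are assumptions chosen to keep the arithmetic readable):

      p = 929  # prime modulus; polynomial coefficients are stored low-to-high

      def trim(a):
          a = list(a)
          while len(a) > 1 and a[-1] % p == 0:
              a.pop()
          return a

      def polysub(a, b):
          n = max(len(a), len(b))
          a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
          return trim([(x - y) % p for x, y in zip(a, b)])

      def polymul(a, b):
          out = [0] * (len(a) + len(b) - 1)
          for i, x in enumerate(a):
              for j, y in enumerate(b):
                  out[i + j] = (out[i + j] + x * y) % p
          return trim(out)

      def polydivmod(a, b):
          a, q = list(a), [0] * max(1, len(a) - len(b) + 1)
          inv = pow(b[-1], -1, p)          # inverse of leading coefficient
          while len(a) >= len(b) and any(a):
              d, c = len(a) - len(b), (a[-1] * inv) % p
              q[d] = c
              for i, y in enumerate(b):    # cancel the top term of a
                  a[i + d] = (a[i + d] - c * y) % p
              a = trim(a)
          return trim(q), a

      # Key equation Lambda(x)S(x) = Omega(x) mod x^(2t): run Euclid on
      # x^(2t) and S(x) until deg(remainder) < t; the running multiplier
      # is the locator, the final remainder the evaluator (up to a scalar).
      t, S = 2, [3, 7, 4, 1]               # hypothetical syndromes
      r0, r1 = [0] * (2 * t) + [1], trim(S)
      u0, u1 = [0], [1]
      while len(r1) - 1 >= t:
          q, r = polydivmod(r0, r1)
          r0, r1, u0, u1 = r1, r, u1, polysub(u0, polymul(q, u1))
      print("locator:", u1, "evaluator:", r1)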

  20. "Hook"-calibration of GeneChip-microarrays: theory and algorithm.

    PubMed

    Binder, Hans; Preibisch, Stephan

    2008-08-29

    The improvement of microarray calibration methods is an essential prerequisite for quantitative expression analysis. This issue requires the formulation of an appropriate model describing the basic relationship between the probe intensity and the specific transcript concentration in a complex environment of competing interactions, the estimation of the magnitude of these effects and their correction using the intensity information of a given chip, and, finally, the development of practicable algorithms which judge the quality of a particular hybridization and estimate the expression degree from the intensity values. We present the so-called hook-calibration method, which co-processes the log-difference (delta) and -sum (sigma) of the perfect match (PM) and mismatch (MM) probe intensities. The MM probes are utilized as an internal reference which is subjected to the same hybridization law as the PM, however with modified characteristics. After sequence-specific affinity correction, the method fits the Langmuir adsorption model to the smoothed delta-versus-sigma plot. The geometrical dimensions of this so-called hook curve characterize the particular hybridization in terms of simple geometric parameters which provide information about the mean non-specific background intensity, the saturation value, the mean PM/MM sensitivity gain and the fraction of absent probes. This graphical summary spans a metrics system for expression estimates in natural units such as the mean binding constants and the occupancy of the probe spots. The method is single-chip based, i.e., it separately uses the intensities of each selected chip. The hook method corrects the raw intensities for non-specific background hybridization in a sequence-specific manner, for the potential saturation of the probe spots with bound transcripts, and for the sequence-specific binding of specific transcripts. The obtained chip characteristics, in combination with the sensitivity-corrected probe-intensity values, provide expression estimates scaled in natural units, given by the binding constants of the particular hybridization.

  1. Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna Roberts

    Infrared imaging is an expansive field with many applications. Advances in infrared technology have led to greater demand from both commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity, which stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors, such as exposure time, temperature, and amplifier choice, affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration-based techniques commonly use a linear model to approximate the nonlinear response, which often leaves unacceptable levels of residual non-uniformity, and calibration often has to be repeated during use to continually correct the image. In this dissertation, alternatives to linear NUC algorithms are investigated. The goal is to determine and compare nonlinear non-uniformity correction algorithms, ideally providing better NUC performance, less residual non-uniformity, and a reduced need for recalibration. New approaches to nonlinear NUC, such as higher-order polynomials and exponentials, are considered; more specifically, a new gain equalization algorithm has been developed. The various nonlinear non-uniformity correction algorithms are compared with common linear non-uniformity correction algorithms, with performance evaluated on the basis of RMS errors, residual non-uniformity, and the impact quantization has on correction. Performance is improved by identifying and replacing bad pixels prior to correction; two bad-pixel identification and replacement techniques are investigated and compared. Performance is presented in the form of simulation results as well as before-and-after images taken with short wave infrared cameras. The initial results show, using a third order polynomial with 16-bit precision, significant improvement over the one- and two-point correction algorithms. All algorithms have been implemented in software with satisfactory results, and the third order gain equalization non-uniformity correction algorithm has been implemented in hardware.

  2. The instrument for investigating magnetic fields of isochronous cyclotrons

    NASA Astrophysics Data System (ADS)

    Avreline, N. V.

    2017-12-01

    A new instrument was designed and implemented to increase the measurement accuracy of magnetic field maps for isochronous cyclotrons manufactured by Advanced Cyclotron Systems Inc. The instrument uses a Hall probe (HP) from the New Zealand manufacturer Group3; the specific probe used is the MPT-141 HP, which can measure magnetic fields in the range from 2 G to 21 kG. Use of a fast NI9239 ADC module and error-reduction algorithms based on a polynomial regression method reduced the noise to 0.2 G. The design of this instrument allows high-gradient magnetic fields to be measured, as the resolution of the HP arm angle is within 0.0005° and the radial position resolution is within 25 μm. A set of National Instruments interfaces connected to a desktop computer through a network is used as the base control and data acquisition system.

  3. Correcting bias in the rational polynomial coefficients of satellite imagery using thin-plate smoothing splines

    NASA Astrophysics Data System (ADS)

    Shen, Xiang; Liu, Bin; Li, Qing-Quan

    2017-03-01

    The Rational Function Model (RFM) has proven to be a viable alternative to the rigorous sensor models used for geo-processing of high-resolution satellite imagery. Because of various errors in the satellite ephemeris and instrument calibration, the Rational Polynomial Coefficients (RPCs) supplied by image vendors are often not sufficiently accurate, and there is therefore a clear need to correct the systematic biases in order to meet the requirements of high-precision topographic mapping. In this paper, we propose a new RPC bias-correction method using the thin-plate spline modeling technique. Benefiting from its excellent performance and high flexibility in data fitting, the thin-plate spline model has the potential to remove complex distortions in vendor-provided RPCs, such as the errors caused by short-period orbital perturbations. The performance of the new method was evaluated by using Ziyuan-3 satellite images and was compared against the recently developed least-squares collocation approach, as well as the classical affine-transformation and quadratic-polynomial based methods. The results show that the accuracies of the thin-plate spline and the least-squares collocation approaches were better than the other two methods, which indicates that strong non-rigid deformations exist in the test data because they cannot be adequately modeled by simple polynomial-based methods. The performance of the thin-plate spline method was close to that of the least-squares collocation approach when only a few Ground Control Points (GCPs) were used, and it improved more rapidly with an increase in the number of redundant observations. In the test scenario using 21 GCPs (some of them located at the four corners of the scene), the correction residuals of the thin-plate spline method were about 36%, 37%, and 19% smaller than those of the affine transformation method, the quadratic polynomial method, and the least-squares collocation algorithm, respectively, which demonstrates that the new method can be more effective at removing systematic biases in vendor-supplied RPCs.
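
    A thin-plate spline correction of this sort can be sketched directly with SciPy (the GCP coordinates and bias surface below are synthetic, and RBFInterpolator is used as a convenient stand-in for the paper's formulation):

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      # Hypothetical bias at 21 ground control points: the RPC-projected
      # image coordinates minus the coordinates actually measured.
      rng = np.random.default_rng(2)
      gcp = rng.uniform(0.0, 1.0, (21, 2))        # normalized line/sample
      bias = np.column_stack([0.3 * np.sin(3.0 * gcp[:, 0]),
                              0.2 * gcp[:, 1] ** 2])

      # Thin-plate spline model of the systematic RPC bias, fitted at GCPs.
      tps = RBFInterpolator(gcp, bias, kernel='thin_plate_spline')

      # Correct an arbitrary RPC-projected point by removing predicted bias.
      pt = np.array([[0.5, 0.5]])
      print(pt - tps(pt))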

  4. Field curvature correction method for ultrashort throw ratio projection optics design using an odd polynomial mirror surface.

    PubMed

    Zhuang, Zhenfeng; Chen, Yanting; Yu, Feihong; Sun, Xiaowei

    2014-08-01

    This paper presents a field curvature correction method for designing an ultrashort throw ratio (TR) projection lens for an imaging system. The projection lens is composed of several refractive optical elements and an odd polynomial mirror surface. A curved image is formed, in a direction away from the odd polynomial mirror surface, by the refractive optical elements from the image on the digital micromirror device (DMD) panel, and this curved image is a virtual image. The odd polynomial mirror surface then enlarges the curved image, and a plane image is formed on the screen. Based on the relationship between the chief ray from the exit pupil of each field of view (FOV) and the corresponding prescribed position on the screen, the initial profile of the freeform mirror surface is calculated using hyperbolic segments according to the laws of reflection. For further optimization, a high-order odd polynomial surface is used to express the freeform mirror surface through a least-squares fitting method. As an example, an ultrashort TR projection lens that realizes projection onto a large 50 in. screen at a distance of only 510 mm is presented. The optical performance of the designed projection lens is analyzed by the ray-tracing method. Results show that an ultrashort TR projection lens with a modulation transfer function of over 60% at 0.5 cycles/mm for all optimization fields is achievable with an f-number of 2.0, a 126° full FOV, <1% distortion, and a 0.46 TR. Moreover, comparing the proposed projection lens' optical specifications with those of traditional projection lenses, aspheric mirror projection lenses, and conventional short TR projection lenses indicates that this projection lens has the advantages of ultrashort TR, low f-number, wide full FOV, and small distortion.

  5. Mobile image based color correction using deblurring

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.

    2015-03-01

    Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed by utilizing food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e. a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique by combining image de-blurring and color correction. The contribution consists of introducing an automatic camera shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space.
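
    A least-squares polynomial color correction of the kind mentioned above is compact to sketch (Python; the patch values are invented, the expansion is second-order, and plain RGB is used here even though the paper works in the LMS color space):

      import numpy as np

      # Hypothetical checkerboard data: measured patch RGBs from the photo
      # and the patches' known reference values (6 patches for brevity;
      # a real fiducial marker would provide more).
      measured = np.array([[0.20, 0.10, 0.05], [0.70, 0.60, 0.50],
                           [0.30, 0.40, 0.20], [0.90, 0.80, 0.75],
                           [0.15, 0.25, 0.35], [0.55, 0.45, 0.65]])
      reference = measured ** 0.9 @ np.array([[1.05, 0.02, 0.00],
                                              [0.03, 0.98, 0.01],
                                              [0.00, 0.02, 1.02]])

      def poly_terms(rgb):
          # Second-order polynomial expansion of each RGB triple.
          r, g, b = rgb.T
          return np.column_stack([r, g, b, r * g, r * b, g * b,
                                  r ** 2, g ** 2, b ** 2, np.ones_like(r)])

      # Correction matrix by least squares; apply it to any image pixels.
      M, *_ = np.linalg.lstsq(poly_terms(measured), reference, rcond=None)
      print(poly_terms(measured) @ M - reference)   # fitting residuals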

  6. Fabrication and correction of freeform surface based on Zernike polynomials by slow tool servo

    NASA Astrophysics Data System (ADS)

    Cheng, Yuan-Chieh; Hsu, Ming-Ying; Peng, Wei-Jei; Hsu, Wei-Yao

    2017-10-01

    Recently, freeform surfaces have been widely used in optical systems because of the design freedom they provide and the improvements in optical imaging performance they make possible. Freeform optics fabrication integrates freeform optical design, precision freeform manufacturing, freeform metrology, and a freeform compensation method that corrects the form deviation of the surface arising in the production process, providing more flexibility and better performance. This paper focuses on the fabrication and correction of the freeform surface. In this study, multi-axis ultra-precision manufacturing is used to improve the quality of the freeform surface, on a machine equipped with a positioning C-axis and the CXZ machining function, also called slow tool servo (STS). The Zernike polynomial compensation method is successfully verified to correct the form deviation of the freeform surface. Finally, the freeform surfaces are measured experimentally with an Ultrahigh Accurate 3D Profilometer (UA3P), and the form error is compensated with Zernike polynomial fitting to improve the form accuracy of the freeform surface.
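
    The compensation step, fitting the measured form deviation with Zernike terms so that its negation can be fed back to the tool path, can be sketched as follows (Python; only a handful of unnormalized Zernike terms and synthetic measurement points are used):

      import numpy as np

      def zernike_basis(rho, theta):
          # A few low-order terms: piston, defocus, astigmatism, coma, spherical.
          return np.column_stack([
              np.ones_like(rho),
              2.0 * rho ** 2 - 1.0,
              rho ** 2 * np.cos(2.0 * theta),
              (3.0 * rho ** 3 - 2.0 * rho) * np.cos(theta),
              6.0 * rho ** 4 - 6.0 * rho ** 2 + 1.0,
          ])

      # Hypothetical measured form error at sampled surface points (meters).
      rng = np.random.default_rng(3)
      rho = np.sqrt(rng.uniform(0.0, 1.0, 500))
      theta = rng.uniform(0.0, 2.0 * np.pi, 500)
      Z = zernike_basis(rho, theta)
      form_error = Z @ np.array([0.0, 50e-9, 20e-9, 10e-9, 30e-9])

      # Least-squares Zernike coefficients of the deviation; their negation
      # is the correction applied in the next slow-tool-servo pass.
      coef, *_ = np.linalg.lstsq(Z, form_error, rcond=None)
      print(coef)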

  7. Influence of surface error on electromagnetic performance of reflectors based on Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Li, Tuanjie; Shi, Jiachen; Tang, Yaqiong

    2018-04-01

    This paper investigates the influence of surface error distribution on the electromagnetic performance of antennas. The normalized Zernike polynomials are used to describe a smooth and continuous deformation surface. Based on geometrical optics and a piecewise linear fitting method, the electrical performance of a reflector described by the Zernike polynomials is derived to reveal the relationship between surface error distribution and electromagnetic performance. A database relating surface figure to electrical performance is then built for ideal and deformed surfaces to enable rapid calculation of far-field electrical performance. Simulation analysis of the influence of the Zernike polynomials on the electrical properties of an axisymmetric reflector fed by an axial-mode helical antenna is further conducted to verify the correctness of the proposed method. Finally, the influence of surface error distribution on electromagnetic performance is summarized. The simulation results show that some terms of the Zernike polynomials may decrease the amplitude of the main lobe of the antenna pattern, and some may reduce the pointing accuracy. This work suggests a new approach to reflector shape adjustment in the manufacturing process.

  8. Calibration free beam hardening correction for cardiac CT perfusion imaging

    NASA Astrophysics Data System (ADS)

    Levi, Jacob; Fahmi, Rachid; Eck, Brendan L.; Fares, Anas; Wu, Hao; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.

    2016-03-01

    Myocardial perfusion imaging using CT (MPI-CT) and coronary CTA have the potential to make CT an ideal noninvasive gatekeeper for invasive coronary angiography. However, beam hardening artifacts (BHA) prevent accurate blood flow calculation in MPI-CT. Beam hardening correction (BHC) methods require either energy-sensitive CT, which is not widely available, or, typically, a calibration-based method. We developed a calibration-free, automatic BHC (ABHC) method suitable for MPI-CT. The algorithm works with any BHC method and iteratively determines model parameters using a proposed BHA-specific cost function. In this work, we use the polynomial BHC extended to three materials. The image is segmented into soft tissue, bone, and iodine images based on mean HU and temporal enhancement. Forward projections of the bone and iodine images are obtained, and in each iteration a polynomial correction is applied. Corrections are then back-projected and combined to obtain the current iteration's BHC image. This process is iterated until the cost is minimized. We evaluate the algorithm on simulated and physical phantom images and on preclinical MPI-CT data. The scans were obtained on a prototype spectral detector CT (SDCT) scanner (Philips Healthcare), with mono-energetic reconstructed images used as the reference. In the simulated phantom, BH streak artifacts were reduced from 12 +/- 2 HU to 1 +/- 1 HU and cupping was reduced by 81%. Similarly, in the physical phantom, BH streak artifacts were reduced from 48 +/- 6 HU to 1 +/- 5 HU and cupping was reduced by 86%. In preclinical MPI-CT images, BHA was reduced from 28 +/- 6 HU to less than 4 +/- 4 HU at peak enhancement. Results suggest that the algorithm can be used to reduce BHA in conventional CT and improve MPI-CT accuracy.

  9. In-Process Atomic-Force Microscopy (AFM) Based Inspection

    PubMed Central

    Mekid, Samir

    2017-01-01

    A new in-process atomic-force microscopy (AFM) based inspection is presented for nanolithography to compensate for deviations such as instantaneous degradation of the lithography probe tip. In the traditional practice, the AFM probe used for the lithography work is retracted to inspect the obtained feature, but this degrades the probe tip shape and hence affects measurement quality. This paper suggests a second, dedicated lithography probe positioned back-to-back with the AFM probe under two synchronized controllers to correct any deviation of the process from specifications. The method yields improved nanomachining quality, reduced in-process probe tip wear, and a better understanding of nanomachining. The system is hosted in a recently developed nanomanipulator for educational and research purposes. PMID:28561747

  10. A broad range assay for rapid detection and etiologic characterization of bacterial meningitis: performance testing in samples from sub-Sahara.

    PubMed

    Won, Helen; Yang, Samuel; Gaydos, Charlotte; Hardick, Justin; Ramachandran, Padmini; Hsieh, Yu-Hsiang; Kecojevic, Alexander; Njanpop-Lafourcade, Berthe-Marie; Mueller, Judith E; Tameklo, Tsidi Agbeko; Badziklou, Kossi; Gessner, Bradford D; Rothman, Richard E

    2012-09-01

    This study aimed to conduct a pilot evaluation of broad-based multiprobe polymerase chain reaction (PCR) in clinical cerebrospinal fluid (CSF) samples compared to local conventional PCR/culture methods used for bacterial meningitis surveillance. A previously described PCR, consisting of initial broad-based detection of Eubacteriales by a universal probe followed by Gram-typing and pathogen-specific probes, was designed targeting variable regions of the 16S rRNA gene. The diagnostic performance of the 16S rRNA assay was evaluated in 127 CSF samples from patients in Togo, Africa, by comparison to conventional PCR/culture methods. Our probes detected Neisseria meningitidis, Streptococcus pneumoniae, and Haemophilus influenzae. Uniprobe sensitivity and specificity versus conventional PCR were 100% and 54.6%, respectively. Sensitivity and specificity of the uniprobe versus culture methods were 96.5% and 52.5%, respectively. Gram-typing probes correctly typed 98.8% (82/83) and pathogen-specific probes identified 96.4% (80/83) of the positives. This broad-based PCR algorithm successfully detected and provided species-level information for multiple bacterial meningitis agents in clinical samples. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. A broad range assay for rapid detection and etiologic characterization of bacterial meningitis: performance testing in samples from sub-Sahara

    PubMed Central

    Won, Helen; Yang, Samuel; Gaydos, Charlotte; Hardick, Justin; Ramachandran, Padmini; Hsieh, Yu-Hsiang; Kecojevic, Alexander; Njanpop-Lafourcade, Berthe-Marie; Mueller, Judith E.; Tameklo, Tsidi Agbeko; Badziklou, Kossi; Gessner, Bradford D.; Rothman, Richard E.

    2012-01-01

    This study aimed to conduct a pilot evaluation of broad-based multiprobe polymerase chain reaction (PCR) in clinical cerebrospinal fluid (CSF) samples compared to local conventional PCR/culture methods used for bacterial meningitis surveillance. A previously described PCR, consisting of initial broad-based detection of Eubacteriales by a universal probe followed by Gram-typing and pathogen-specific probes, was designed targeting variable regions of the 16S rRNA gene. The diagnostic performance of the 16S rRNA assay was evaluated in 127 CSF samples from patients in Togo, Africa, by comparison to conventional PCR/culture methods. Our probes detected Neisseria meningitidis, Streptococcus pneumoniae, and Haemophilus influenzae. Uniprobe sensitivity and specificity versus conventional PCR were 100% and 54.6%, respectively. Sensitivity and specificity of the uniprobe versus culture methods were 96.5% and 52.5%, respectively. Gram-typing probes correctly typed 98.8% (82/83) and pathogen-specific probes identified 96.4% (80/83) of the positives. This broad-based PCR algorithm successfully detected and provided species-level information for multiple bacterial meningitis agents in clinical samples. PMID:22809694

  12. ArrayInitiative - a tool that simplifies creating custom Affymetrix CDFs

    PubMed Central

    2011-01-01

    Background: Probes on a microarray represent a frozen view of a genome and are quickly outdated when new sequencing studies extend our knowledge, resulting in significant measurement error when analyzing any microarray experiment. There are several bioinformatics approaches to improve probe assignments, but without in-house programming expertise, standardizing these custom array specifications as a usable file (e.g. as Affymetrix CDFs) is difficult, owing mostly to the complexity of the specification file format. However, without correctly standardized files there is a significant barrier for testing competing analysis approaches since this file is one of the required inputs for many commonly used algorithms. The need to test combinations of probe assignments and analysis algorithms led us to develop ArrayInitiative, a tool for creating and managing custom array specifications. Results: ArrayInitiative is a standalone, cross-platform, rich client desktop application for creating correctly formatted, custom versions of manufacturer-provided (default) array specifications, requiring only minimal knowledge of the array specification rules and file formats. Users can import default array specifications, import probe sequences for a default array specification, design and import a custom array specification, export any array specification to multiple output formats, export the probe sequences for any array specification and browse high-level information about the microarray, such as version and number of probes. The initial release of ArrayInitiative supports the Affymetrix 3' IVT expression arrays we currently analyze, but as an open source application, we hope that others will contribute modules for other platforms. Conclusions: ArrayInitiative allows researchers to create new array specifications, in a standard format, based upon their own requirements. This makes it easier to test competing design and analysis strategies that depend on probe definitions. Since the custom array specifications are easily exported to the manufacturer's standard format, researchers can analyze these customized microarray experiments using established software tools, such as those available in Bioconductor. PMID:21548938

  13. Mobile Image Based Color Correction Using Deblurring

    PubMed Central

    Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.

    2016-01-01

    Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed by utilizing food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e. a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique by combining image de-blurring and color correction. The contribution consists of introducing an automatic camera shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space. PMID:28572697

  14. A Formally-Verified Decision Procedure for Univariate Polynomial Computation Based on Sturm's Theorem

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony J.; Munoz, Cesar A.

    2014-01-01

    Sturm's Theorem is a well-known result in real algebraic geometry that provides a function that computes the number of roots of a univariate polynomial in a semiopen interval. This paper presents a formalization of this theorem in the PVS theorem prover, as well as a decision procedure that checks whether a polynomial is always positive, nonnegative, nonzero, negative, or nonpositive on any input interval. The soundness and completeness of the decision procedure is proven in PVS. The procedure and its correctness properties enable the implementation of a PVS strategy for automatically proving existential and universal univariate polynomial inequalities. Since the decision procedure is formally verified in PVS, the soundness of the strategy depends solely on the internal logic of PVS rather than on an external oracle. The procedure itself uses a combination of Sturm's Theorem, an interval bisection procedure, and the fact that a polynomial with exactly one root in a bounded interval is always nonnegative on that interval if and only if it is nonnegative at both endpoints.
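
    Outside PVS, the counting function at the heart of the procedure is small enough to sketch numerically (Python; floating-point remainders stand in for the exact rational arithmetic a formal development would use):

      import numpy as np

      def sturm_chain(p):
          # p0 = p, p1 = p', then p_{k+1} = -rem(p_{k-1}, p_k).
          chain = [np.array(p, float), np.polyder(np.array(p, float))]
          while chain[-1].size > 1:
              rem = np.trim_zeros(np.polydiv(chain[-2], chain[-1])[1], 'f')
              if rem.size == 0:        # exact division: repeated roots
                  break
              chain.append(-rem)
          return chain

      def sign_changes(chain, x):
          vals = [np.polyval(c, x) for c in chain]
          signs = [v for v in vals if abs(v) > 1e-12]
          return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

      def roots_in(p, a, b):
          # Number of distinct real roots of p in the half-open interval (a, b].
          ch = sturm_chain(p)
          return sign_changes(ch, a) - sign_changes(ch, b)

      # x^3 - 3x + 1 has three real roots, all inside (-2, 2].
      print(roots_in([1, 0, -3, 1], -2.0, 2.0))   # -> 3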

  15. [Application of an Adaptive Inertia Weight Particle Swarm Algorithm in the Magnetic Resonance Bias Field Correction].

    PubMed

    Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao

    2016-06-01

    An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local-optimum problem of traditional particle swarm optimization when estimating the bias field of magnetic resonance (MR) images. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm, and the inertia weight was adjusted adaptively based on this indicator so that the particle swarm is optimized globally rather than falling into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and the bias field was then estimated and corrected. Compared with the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate. The corrected image was then segmented, and the segmentation accuracy was 10% higher than that obtained with the improved entropy minimum algorithm. This algorithm can be applied to the correction of MR image bias fields.
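
    The Legendre parameterization of the bias field is easy to make concrete (Python/NumPy; the coefficients below are fixed by hand purely to show the model and the division step, whereas the paper finds them with the adaptive-inertia particle swarm):

      import numpy as np
      from numpy.polynomial import legendre

      # A smooth multiplicative bias field on [-1,1]^2 built from 2-D
      # Legendre polynomials with hypothetical coefficients.
      n = 64
      y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
      coef = np.array([[1.0, 0.1], [0.2, 0.05]])
      field = legendre.legval2d(x, y, coef)

      # Observed image = true intensity * bias; dividing by the estimated
      # field (here, the true one) restores a flat image.
      image = 100.0 * field
      corrected = image / np.maximum(field, 1e-6)
      print(corrected.min(), corrected.max())     # both ~100 when estimated well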

  16. [Design and Implementation of Image Interpolation and Color Correction for Ultra-thin Electronic Endoscope on FPGA].

    PubMed

    Luo, Qiang; Yan, Zhuangzhi; Gu, Dongxing; Cao, Lei

    This paper proposes an image interpolation algorithm based on bilinear interpolation and a color correction algorithm based on polynomial regression, implemented on an FPGA, addressing the limited number of imaging pixels and the color distortion of the ultra-thin electronic endoscope. Simulation results showed that the proposed design realizes real-time display of 1280 x 720 @ 60 Hz HD video and, using the X-rite color checker as the standard, reduces the average color difference by about 30% compared with that before color correction.

  17. A comparison of VLSI architectures for time and transform domain decoding of Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Truong, T. K.; Deutsch, L. J.; Satorius, E. H.; Reed, I. S.

    1988-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial needed to decode a Reed-Solomon (RS) code. It is shown that this algorithm can be used for both time and transform domain decoding by replacing its initial conditions with the Forney syndromes and the erasure locator polynomial. By this means, both the errata locator polynomial and the errata evaluator polynomial can be obtained with the Euclidean algorithm. With these ideas, both time and transform domain Reed-Solomon decoders for correcting errors and erasures are simplified and compared. As a consequence, the architectures of Reed-Solomon decoders for correcting both errors and erasures can be made more modular, regular, simple, and naturally suitable for VLSI implementation.

  18. The Ponzano-Regge Model and Parametric Representation

    NASA Astrophysics Data System (ADS)

    Li, Dan

    2014-04-01

    We give a parametric representation of the effective noncommutative field theory derived from a κ-deformation of the Ponzano-Regge model and define a generalized Kirchhoff polynomial with κ-correction terms, obtained in a κ-linear approximation. We then consider the corresponding graph hypersurfaces and the question of how the presence of the correction term affects their motivic nature. We look in particular at the tetrahedron graph, which is the basic case of relevance to quantum gravity. With the help of computer calculations, we verify that the number of points over finite fields of the corresponding hypersurface does not fit polynomials with integer coefficients, hence the hypersurface of the tetrahedron is not polynomially countable. This shows that the correction term can significantly change the motivic properties of the hypersurfaces with respect to the classical case.

  19. An efficient higher order family of root finders

    NASA Astrophysics Data System (ADS)

    Petkovic, Ljiljana D.; Rancic, Lidija; Petkovic, Miodrag S.

    2008-06-01

    A one-parameter family of iterative methods for the simultaneous approximation of simple complex zeros of a polynomial, based on the cubically convergent Hansen-Patrick family, is studied. We show that the convergence of the basic fourth-order family can be increased to five and six using Newton's and Halley's corrections, respectively. Since these corrections use already calculated values, the computational efficiency of the accelerated methods is significantly increased. Further acceleration is achieved by applying the Gauss-Seidel approach (single-step mode). One of the most important problems in solving nonlinear equations, the construction of initial conditions that provide both guaranteed and fast convergence, is considered for the proposed accelerated family. These conditions are computationally verifiable; they depend only on the polynomial coefficients, its degree and the initial approximations, which is of practical importance. Some modifications of the considered family, providing the computation of multiple zeros of polynomials and simple zeros of a wide class of analytic functions, are also studied. Numerical examples demonstrate the convergence properties of the presented family of root-finding methods.
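
    The accelerated Hansen-Patrick family itself is too long for a short sketch, but the flavor of simultaneous approximation with Gauss-Seidel (single-step) updates is captured by the classical Weierstrass (Durand-Kerner) iteration below; this is our illustration, not the paper's method:

      import numpy as np

      def durand_kerner(coeffs, iters=60):
          # coeffs: highest degree first, e.g. x^3 - 2x - 1 -> [1, 0, -2, -1]
          p = np.poly1d(coeffs)
          n = len(coeffs) - 1
          z = (0.4 + 0.9j) ** np.arange(1, n + 1)   # standard starting points
          for _ in range(iters):
              for k in range(n):                    # in-place: single-step mode
                  w = np.prod(z[k] - np.delete(z, k))
                  z[k] -= p(z[k]) / (coeffs[0] * w)
          return z

      print(np.sort_complex(durand_kerner([1, 0, -2, -1])))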

  20. Exploring the limits of cryospectroscopy: Least-squares based approaches for analyzing the self-association of HCl

    NASA Astrophysics Data System (ADS)

    De Beuckeleer, Liene I.; Herrebout, Wouter A.

    2016-02-01

    To rationalize the concentration dependent behavior observed for a large spectral data set of HCl recorded in liquid argon, least-squares based numerical methods are developed and validated. In these methods, for each wavenumber a polynomial is used to mimic the relation between monomer concentrations and measured absorbances. Least-squares fitting of higher degree polynomials tends to overfit and thus leads to compensation effects where a contribution due to one species is compensated for by a negative contribution of another. The compensation effects are corrected for by carefully analyzing, using AIC and BIC information criteria, the differences observed between consecutive fittings when the degree of the polynomial model is systematically increased, and by introducing constraints prohibiting negative absorbances to occur for the monomer or for one of the oligomers. The method developed should allow other, more complicated self-associating systems to be analyzed with a much higher accuracy than before.
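
    The degree-selection logic is easy to illustrate: fit polynomials of increasing degree to one wavenumber channel and track AIC/BIC. The synthetic data below stand in for the measured absorbance-versus-monomer-concentration relation:

      import numpy as np

      rng = np.random.default_rng(1)
      c = np.linspace(0.0, 1.0, 40)                        # monomer concentration
      A = 0.8*c + 0.3*c**2 + rng.normal(0, 0.01, c.size)   # absorbance channel

      n = c.size
      for d in range(1, 6):
          resid = A - np.polyval(np.polyfit(c, A, d), c)
          rss = np.sum(resid**2)
          k = d + 1                                   # number of fitted parameters
          aic = n*np.log(rss/n) + 2*k
          bic = n*np.log(rss/n) + k*np.log(n)
          print(f"degree {d}: AIC = {aic:.1f}, BIC = {bic:.1f}")
      # keep the lowest degree at which AIC/BIC stop improving, and constrain
      # fitted contributions to be non-negative to avoid compensation effects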

  1. Rate-distortion optimized tree-structured compression algorithms for piecewise polynomial images.

    PubMed

    Shukla, Rahul; Dragotti, Pier Luigi; Do, Minh N; Vetterli, Martin

    2005-03-01

    This paper presents novel coding algorithms based on tree-structured segmentation, which achieve the correct asymptotic rate-distortion (R-D) behavior for a simple class of signals, known as piecewise polynomials, by using an R-D based prune and join scheme. For the one-dimensional case, our scheme is based on binary-tree segmentation of the signal. This scheme approximates the signal segments using polynomial models and utilizes an R-D optimal bit allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly to achieve the correct exponentially decaying R-D behavior (D(R) ~ c0·2^(-c1·R)), thus improving over classic wavelet schemes. We also prove that the computational complexity of the scheme is of O(N log N). We then show the extension of this scheme to the two-dimensional case using a quadtree. This quadtree-coding scheme also achieves an exponentially decaying R-D behavior, for the polygonal image model composed of a white polygon-shaped object against a uniform black background, with low computational cost of O(N log N). Again, the key is an R-D optimized prune and join strategy. Finally, we conclude with numerical results, which show that the proposed quadtree-coding scheme outperforms JPEG2000 by about 1 dB for real images, like cameraman, at low rates of around 0.15 bpp.

  2. Application of overlay modeling and control with Zernike polynomials in an HVM environment

    NASA Astrophysics Data System (ADS)

    Ju, JaeWuk; Kim, MinGyu; Lee, JuHan; Nabeth, Jeremy; Jeon, Sanghuck; Heo, Hoyoung; Robinson, John C.; Pierson, Bill

    2016-03-01

    Shrinking technology nodes and smaller process margins require improved photolithography overlay control. Generally, overlay measurement results are modeled with Cartesian polynomial functions for both intra-field and inter-field models, and the model coefficients are sent to an advanced process control (APC) system operating in an XY Cartesian basis. Dampened overlay corrections, typically via an exponentially or linearly weighted moving average in time, are then retrieved from the APC system and applied on the scanner in XY Cartesian form for subsequent lot exposure. The goal of the above method is to process lots with corrections that target the least possible overlay misregistration in steady state as well as in change-point situations. In this study, we model overlay errors on product using Zernike polynomials with the same fitting capability as the process of reference (POR) to represent the wafer-level terms, and use the standard Cartesian polynomials to represent the field-level terms. APC calculations for wafer-level correction are performed in the Zernike basis while field-level calculations use the standard XY Cartesian basis. Finally, weighted wafer-level correction terms are converted to XY Cartesian space in order to be applied on the scanner, along with field-level corrections, for future wafer exposures. Since Zernike polynomials are orthogonal on the unit disk, we are able to reduce the amount of collinearity between terms and improve overlay stability. Our real-time Zernike modeling and feedback evaluation was performed on a 20-lot dataset in a high volume manufacturing (HVM) environment. The measured on-product results were compared to POR and showed a 7% reduction in overlay variation, including a 22% reduction in term variation. This led to an on-product raw overlay Mean + 3Sigma X&Y improvement of 5% and resulted in a 0.1% yield improvement.
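
    The collinearity argument is easy to check numerically: on points spread over the unit disk (wafer coordinates assumed normalized), a low-order Zernike design matrix is noticeably better conditioned than the corresponding Cartesian monomial matrix:

      import numpy as np

      rng = np.random.default_rng(0)
      r, th = np.sqrt(rng.random(2000)), 2*np.pi*rng.random(2000)
      x, y = r*np.cos(th), r*np.sin(th)        # uniform samples on the unit disk

      cart = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
      zern = np.column_stack([np.ones_like(x), x, y,
                              2*(x**2 + y**2) - 1,   # defocus
                              x**2 - y**2,           # astigmatism 0/90
                              2*x*y])                # astigmatism 45
      print(np.linalg.cond(cart), np.linalg.cond(zern))
      # the better-conditioned Zernike fit gives more stable term estimates
      # lot to lot, which is the stability benefit cited above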

  3. A comparative study on preprocessing techniques in diabetic retinopathy retinal images: illumination correction and contrast enhancement.

    PubMed

    Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza

    2015-01-01

    To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method using the median filter to estimate background illumination showed the lowest coefficient of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, also presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same accuracy, and it has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using the median filter to estimate background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation.
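
    Two of the compared steps in a hedged OpenCV sketch (the file name and parameter values are placeholders): illumination correction by dividing by a median-filtered background estimate, followed by CLAHE:

      import cv2
      import numpy as np

      img = cv2.imread("fundus.png")          # color retinal image (placeholder)
      green = img[:, :, 1]                    # green component

      # illumination correction: divide by a coarse background estimate
      bg = cv2.medianBlur(green, 51).astype(np.float32) + 1.0
      corrected = np.clip(green / bg * 128.0, 0, 255).astype(np.uint8)

      # local contrast enhancement
      clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
      enhanced = clahe.apply(corrected)
      cv2.imwrite("fundus_enhanced.png", enhanced)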

  4. Ligand Electron Density Shape Recognition Using 3D Zernike Descriptors

    NASA Astrophysics Data System (ADS)

    Gunasekaran, Prasad; Grandison, Scott; Cowtan, Kevin; Mak, Lora; Lawson, David M.; Morris, Richard J.

    We present a novel approach to crystallographic ligand density interpretation based on Zernike shape descriptors. Electron density for a bound ligand is expanded in an orthogonal polynomial series (3D Zernike polynomials) and the coefficients from this expansion are employed to construct rotation-invariant descriptors. These descriptors can be compared highly efficiently against large databases of descriptors computed from other molecules. In this manuscript we describe this process and show initial results from an electron density interpretation study on a dataset containing over a hundred OMIT maps. We identified the correct ligand as the first hit in about 30% of the cases and within the top five in a further 30% of the cases, with an 80% probability of finding the correct ligand within the top ten matches. In all but a few examples, the top hit was highly similar to the correct ligand in both shape and chemistry. Further extensions and intrinsic limitations of the method are discussed.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, M.; Al-Dayeh, L.; Patel, P.

    It is well known that even small movements of the head can lead to artifacts in fMRI. Corrections for these movements are usually made by a registration algorithm which accounts for translational and rotational motion of the head under a rigid body assumption. The brain, however, is not entirely rigid and images are prone to local deformations due to CSF motion, susceptibility effects, local changes in blood flow and inhomogeneities in the magnetic and gradient fields. Since nonrigid body motion is not adequately corrected by approaches relying on simple rotational and translational corrections, we have investigated a general approach where an nth-order polynomial is used to map all images onto a common reference image. The coefficients of the polynomial transformation were determined through minimization of the ratio of the variance to the mean of each pixel. Simulation studies were conducted to validate the technique. Results of experimental studies using polynomial transformation for 2D and 3D registration show lower variance to mean ratio compared to simple rotational and translational corrections.
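
    A schematic of the mapping and criterion (the optimizer that adjusts the coefficients, e.g. scipy.optimize.minimize, is omitted, and the image stack is a placeholder):

      import numpy as np
      from scipy.ndimage import map_coordinates

      def warp(img, c):
          # 2nd-order polynomial coordinate mapping; c holds 12 coefficients
          ny, nx = img.shape
          y, x = np.mgrid[0:ny, 0:nx].astype(float)
          terms = np.stack([np.ones_like(x), x, y, x*y, x**2, y**2])
          xs = np.tensordot(c[:6], terms, axes=1)   # source x per target pixel
          ys = np.tensordot(c[6:], terms, axes=1)   # source y per target pixel
          return map_coordinates(img, [ys, xs], order=1, mode='nearest')

      def criterion(stack):
          # ratio of temporal variance to mean, averaged over pixels
          m = stack.mean(axis=0) + 1e-6
          return float((stack.var(axis=0) / m).mean())

      identity = np.array([0, 1, 0, 0, 0, 0,  0, 0, 1, 0, 0, 0], float)
      frames = np.random.rand(5, 64, 64)            # placeholder image series
      aligned = np.stack([warp(f, identity) for f in frames])
      print(criterion(aligned))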

  6. A universal TaqMan-based RT-PCR protocol for cost-efficient detection of small noncoding RNA.

    PubMed

    Jung, Ulrike; Jiang, Xiaoou; Kaufmann, Stefan H E; Patzel, Volker

    2013-12-01

    Several methods for the detection of RNA have been developed over time. For small RNA detection, a stem-loop reverse primer-based protocol relying on TaqMan RT-PCR has been described. This protocol requires an individual specific TaqMan probe for each target RNA and, hence, is highly cost-intensive for experiments with small sample sizes or large numbers of different samples. We describe a universal TaqMan-based probe protocol that can be used to detect any target sequence and demonstrate its applicability for the detection of endogenous as well as artificial eukaryotic and bacterial small RNAs. While the specific and the universal probe-based protocols showed the same sensitivity, the absolute sensitivity of detection was found to be more than 100-fold lower for both than previously reported. In subsequent experiments, we found previously unknown limitations intrinsic to the method that affect its feasibility for determining mature template RISC incorporation as well as for multiplexing. Both protocols were equally specific in discriminating between correct and incorrect small RNA targets or between mature miRNA and its unprocessed RNA precursor, indicating that the stem-loop RT primer, not the TaqMan probe, confers target specificity. The presented universal TaqMan-based RT-PCR protocol represents a cost-efficient method for the detection of small RNAs.

  7. Critical Analysis of Dual-Probe Heat-Pulse Technique Applied to Measuring Thermal Diffusivity

    NASA Astrophysics Data System (ADS)

    Bovesecchi, G.; Coppa, P.; Corasaniti, S.; Potenza, M.

    2018-07-01

    The paper presents an analysis of the experimental parameters involved in applying the dual-probe heat-pulse technique, followed by a critical review of methods for processing thermal response data (e.g., maximum detection and nonlinear least-squares regression) and the uncertainty obtainable with each. Glycerol was selected as the test liquid, and its thermal diffusivity was evaluated over the temperature range from -20 °C to 60 °C. In addition, Monte Carlo simulation was used to assess the uncertainty propagation for maximum detection. It was concluded that the maximum detection approach to processing thermal response data gives results closest to the reference data, inasmuch as the nonlinear regression results are affected by larger uncertainties due to partial correlation between the fitted parameters. Moreover, interpolating the temperature data with a polynomial to find the maximum leads to a systematic difference between measured and reference data, as shown by the Monte Carlo simulations; once corrected, this systematic error is reduced to a negligible value of about 0.8%.
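
    The polynomial-maximum bias and its Monte Carlo assessment are straightforward to reproduce in outline; the pulse shape, noise level and polynomial degree below are illustrative choices, not the paper's:

      import numpy as np

      rng = np.random.default_rng(3)
      t = np.linspace(0.1, 60.0, 300)              # time, s
      t_max = 15.0
      signal = np.exp(-np.log(t / t_max)**2)       # synthetic temperature rise

      est = []
      for _ in range(2000):
          noisy = signal + rng.normal(0, 0.01, t.size)
          c = np.polyfit(t, noisy, 4)              # polynomial interpolation
          crit = np.roots(np.polyder(c))           # stationary points
          crit = crit[np.isreal(crit)].real
          crit = crit[(crit > t[0]) & (crit < t[-1])]
          est.append(crit[np.argmax(np.polyval(c, crit))])

      print(f"systematic bias in t_max: {np.mean(est) - t_max:.3f} s "
            f"(spread {np.std(est):.3f} s)")       # the offset to correct for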

  8. Open-target sparse sensing of biological agents using DNA microarray

    PubMed Central

    2011-01-01

    Background Current biosensors are designed to target and react to specific nucleic acid sequences or structural epitopes. These 'target-specific' platforms require creation of new physical capture reagents when new organisms are targeted. An 'open-target' approach to DNA microarray biosensing is proposed and substantiated using laboratory generated data. The microarray consisted of 12,900 25 bp oligonucleotide capture probes derived from a statistical model trained on randomly selected genomic segments of pathogenic prokaryotic organisms. Open-target detection of organisms was accomplished using a reference library of hybridization patterns for three test organisms whose DNA sequences were not included in the design of the microarray probes. Results A multivariate mathematical model based on the partial least squares regression (PLSR) was developed to detect the presence of three test organisms in mixed samples. When all 12,900 probes were used, the model correctly detected the signature of three test organisms in all mixed samples (mean(R²) = 0.76, CI = 0.95), with a 6% false positive rate. A sampling algorithm was then developed to sparsely sample the probe space for a minimal number of probes required to capture the hybridization imprints of the test organisms. The PLSR detection model was capable of correctly identifying the presence of the three test organisms in all mixed samples using only 47 probes (mean(R²) = 0.77, CI = 0.95) with nearly 100% specificity. Conclusions We conceived an 'open-target' approach to biosensing, and hypothesized that a relatively small, non-specifically designed, DNA microarray is capable of identifying the presence of multiple organisms in mixed samples. Coupled with a mathematical model applied to laboratory generated data, and sparse sampling of capture probes, the prototype microarray platform was able to capture the signature of each organism in all mixed samples with high sensitivity and specificity. It was demonstrated that this new approach to biosensing closely follows the principles of sparse sensing. PMID:21801424
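
    In outline, the detection model is a standard multivariate PLSR followed by thresholding; the sketch below uses scikit-learn with synthetic placeholder data and a reduced probe count for brevity. Sparse sampling then amounts to rescoring the model on progressively smaller probe subsets.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(7)
      X = rng.random((60, 500))       # probe intensities per sample (placeholder)
      Y = rng.integers(0, 2, (60, 3)).astype(float)   # presence of 3 organisms

      pls = PLSRegression(n_components=5).fit(X, Y)
      calls = (pls.predict(X) > 0.5).astype(int)      # simple presence threshold
      print((calls == Y).mean())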

  9. Mismatch and G-Stack Modulated Probe Signals on SNP Microarrays

    PubMed Central

    Binder, Hans; Fasold, Mario; Glomb, Torsten

    2009-01-01

    Background Single nucleotide polymorphism (SNP) arrays are important tools widely used for genotyping and copy number estimation. This technology utilizes the specific affinity of fragmented DNA for binding to surface-attached oligonucleotide DNA probes. We analyze the variability of the probe signals of Affymetrix GeneChip SNP arrays as a function of the probe sequence to identify relevant sequence motifs which potentially cause systematic biases of genotyping and copy number estimates. Methodology/Principal Findings The probe design of GeneChip SNP arrays enables us to disentangle different sources of intensity modulations such as the number of mismatches per duplex, matched and mismatched base pairings including nearest and next-nearest neighbors and their position along the probe sequence. The effect of probe sequence was estimated in terms of triple-motifs with central matches and mismatches which include all 256 combinations of possible base pairings. The probe/target interactions on the chip can be decomposed into nearest neighbor contributions which correlate well with free energy terms of DNA/DNA-interactions in solution. The effect of mismatches is about twice as large as that of canonical pairings. Runs of guanines (G) and the particular type of mismatched pairings formed in cross-allelic probe/target duplexes constitute sources of systematic biases of the probe signals with consequences for genotyping and copy number estimates. The poly-G effect seems to be related to the crowded arrangement of probes which facilitates complex formation of neighboring probes with at minimum three adjacent G's in their sequence. Conclusions The applied method of “triple-averaging” represents a model-free approach to estimate the mean intensity contributions of different sequence motifs which can be applied in calibration algorithms to correct signal values for sequence effects. Rules for appropriate sequence corrections are suggested. PMID:19924253

  10. Exploring the limits of cryospectroscopy: Least-squares based approaches for analyzing the self-association of HCl.

    PubMed

    De Beuckeleer, Liene I; Herrebout, Wouter A

    2016-02-05

    To rationalize the concentration dependent behavior observed for a large spectral data set of HCl recorded in liquid argon, least-squares based numerical methods are developed and validated. In these methods, for each wavenumber a polynomial is used to mimic the relation between monomer concentrations and measured absorbances. Least-squares fitting of higher degree polynomials tends to overfit and thus leads to compensation effects where a contribution due to one species is compensated for by a negative contribution of another. The compensation effects are corrected for by carefully analyzing, using AIC and BIC information criteria, the differences observed between consecutive fittings when the degree of the polynomial model is systematically increased, and by introducing constraints prohibiting negative absorbances to occur for the monomer or for one of the oligomers. The method developed should allow other, more complicated self-associating systems to be analyzed with a much higher accuracy than before.

  11. On the robustness of bucket brigade quantum RAM

    NASA Astrophysics Data System (ADS)

    Arunachalam, Srinivasan; Gheorghiu, Vlad; Jochym-O'Connor, Tomas; Mosca, Michele; Varshinee Srinivasan, Priyaa

    2015-12-01

    We study the robustness of the bucket brigade quantum random access memory model introduced by Giovannetti et al (2008 Phys. Rev. Lett. 100 160501). Due to a result of Regev and Schiff (ICALP ’08 733), we show that for a class of error models the error rate per gate in the bucket brigade quantum memory has to be of order o(2^(-n/2)) (where N = 2^n is the size of the memory) whenever the memory is used as an oracle for the quantum searching problem. We conjecture that this is the case for any realistic error model that will be encountered in practice, and that for algorithms with super-polynomially many oracle queries the error rate must be super-polynomially small, which further motivates the need for quantum error correction. By contrast, for algorithms such as matrix inversion Harrow et al (2009 Phys. Rev. Lett. 103 150502) or quantum machine learning Rebentrost et al (2014 Phys. Rev. Lett. 113 130503) that only require a polynomial number of queries, the error rate only needs to be polynomially small and quantum error correction may not be required. We introduce a circuit model for the quantum bucket brigade architecture and argue that quantum error correction for the circuit causes the quantum bucket brigade architecture to lose its primary advantage of a small number of ‘active’ gates, since all components have to be actively error corrected.

  12. A Comparative Study on Preprocessing Techniques in Diabetic Retinopathy Retinal Images: Illumination Correction and Contrast Enhancement

    PubMed Central

    Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza

    2015-01-01

    To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method using the median filter to estimate background illumination showed the lowest coefficient of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, also presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same accuracy, and it has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using the median filter to estimate background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation. PMID:25709940

  13. Modeling corneal surfaces with rational functions for high-speed videokeratoscopy data compression.

    PubMed

    Schneider, Martin; Iskander, D Robert; Collins, Michael J

    2009-02-01

    High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have been traditionally used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error as well as the point spread function cross-correlation. The parameters of approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.

  14. Wavefront propagation from one plane to another with the use of Zernike polynomials and Taylor monomials.

    PubMed

    Dai, Guang-ming; Campbell, Charles E; Chen, Li; Zhao, Huawei; Chernyak, Dimitri

    2009-01-20

    In wavefront-driven vision correction, ocular aberrations are often measured on the pupil plane and the correction is applied on a different plane. The problem with this practice is that any changes undergone by the wavefront as it propagates between planes are not currently included in devising customized vision correction. With some valid approximations, we have developed an analytical foundation based on geometric optics in which Zernike polynomials are used to characterize the propagation of the wavefront from one plane to another. Both the boundary and the magnitude of the wavefront change after the propagation. Taylor monomials were used to realize the propagation because of their simple form for this purpose. The method we developed to identify changes in low-order aberrations was verified with the classical vertex correction formula. The method we developed to identify changes in high-order aberrations was verified with ZEMAX ray-tracing software. Although the method may not be valid for highly irregular wavefronts and it was only proven for wavefronts with low-order or high-order aberrations, our analysis showed that changes in the propagating wavefront are significant and should, therefore, be included in calculating vision correction. This new approach could be of major significance in calculating wavefront-driven vision correction whether by refractive surgery, contact lenses, intraocular lenses, or spectacles.
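
    The classical vertex correction used for the low-order verification is compact enough to state directly: a sphere of power P (in diopters) propagated a distance d (in meters) toward the eye becomes P / (1 - d·P).

      def propagate_power(P, d):
          # classical vertex correction: sphere P (D) propagated a distance d (m)
          return P / (1.0 - d * P)

      # -5.00 D at the spectacle plane, 12 mm from the cornea, is about
      # -4.72 D referred to the corneal plane
      print(round(propagate_power(-5.0, 0.012), 2))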

  15. Characterizing baseline shift with 4th-order polynomial function for portable biomedical near-infrared spectroscopy device

    NASA Astrophysics Data System (ADS)

    Zhao, Ke; Ji, Yaoyao; Pan, Boan; Li, Ting

    2018-02-01

    Continuous-wave near-infrared spectroscopy (NIRS) devices have attracted attention for clinical and health-care applications in noninvasive hemodynamic measurement. Baseline shift in such measurements has drawn particular interest because of its clinical importance, yet currently published correction methods have low reliability or high variability. In this study, we identified a well-suited polynomial fitting function for baseline removal in NIRS. Unlike previous studies on baseline correction for near-infrared spectroscopic evaluation of non-hemodynamic particles, we focused on baseline fitting and the corresponding correction method for NIRS, and found that a 4th-order polynomial fits the baseline better than the 2nd-order polynomial reported in previous research. Through experimental tests of hemodynamic parameters on a solid phantom, we compared the fitting performance of the 4th- and 2nd-order polynomials by recording and analyzing the R values and the SSE (sum of squares due to error) values. The R values of the 4th-order polynomial fits are all higher than 0.99, significantly higher than the corresponding 2nd-order values, while the SSE values of the 4th order are significantly smaller than those of the 2nd order. Using this high-reliability, low-variability 4th-order polynomial fitting function, the baseline can be removed online to obtain more accurate NIRS measurements.
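
    The 2nd- versus 4th-order comparison reduces to a few lines of numpy; the synthetic slow drift below stands in for the phantom baseline recordings.

      import numpy as np

      rng = np.random.default_rng(5)
      t = np.linspace(0, 600, 1200)                 # time, s
      base = 0.5*np.sin(t/150) + 0.001*t + rng.normal(0, 0.01, t.size)

      for order in (2, 4):
          fit = np.polyval(np.polyfit(t, base, order), t)
          sse = np.sum((base - fit)**2)
          r2 = 1 - sse / np.sum((base - base.mean())**2)
          print(f"order {order}: R^2 = {r2:.4f}, SSE = {sse:.4f}")
      # the 4th-order fit tracks the slow drift more closely (higher R, lower
      # SSE), mirroring the comparison reported above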

  16. Sandia Higher Order Elements (SHOE) v 0.5 alpha

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2013-09-24

    SHOE is research code for characterizing and visualizing higher-order finite elements; it contains a framework for defining classes of interpolation techniques and element shapes; methods for interpolating triangular, quadrilateral, tetrahedral, and hexahedral cells using Lagrange and Legendre polynomial bases of arbitrary order; methods to decompose each element into domains of constant gradient flow (using a polynomial solver to identify critical points); and an isocontouring technique that uses this decomposition to guarantee topological correctness. Please note that this is an alpha release of research software and that some time has passed since it was actively developed; build- and run-time issues likely exist.

  17. Classical verification of quantum circuits containing few basis changes

    NASA Astrophysics Data System (ADS)

    Demarie, Tommaso F.; Ouyang, Yingkai; Fitzsimons, Joseph F.

    2018-04-01

    We consider the task of verifying the correctness of quantum computation for a restricted class of circuits which contain at most two basis changes. This contains circuits giving rise to the second level of the Fourier hierarchy, the lowest level for which there is an established quantum advantage. We show that when the circuit has an outcome with probability at least the inverse of some polynomial in the circuit size, the outcome can be checked in polynomial time with bounded error by a completely classical verifier. This verification procedure is based on random sampling of computational paths and is only possible given knowledge of the likely outcome.

  18. Improved multivariate polynomial factoring algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, P.S.

    1978-10-01

    A new algorithm for factoring multivariate polynomials over the integers based on an algorithm by Wang and Rothschild is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included.

  19. Automatic Registration of GF4 Pms: a High Resolution Multi-Spectral Sensor on Board a Satellite on Geostationary Orbit

    NASA Astrophysics Data System (ADS)

    Gao, M.; Li, J.

    2018-04-01

    Geometric correction is an important preprocessing step in the application of GF4 PMS imagery. Geometric correction based on manual selection of control points is time-consuming and laborious; the more common approach, based on a reference image, is automatic image registration. This method involves several steps and parameters, so for the multi-spectral sensor GF4 PMS it is necessary to identify the best combination of steps and parameters. This study mainly focuses on the following issues: the necessity of Rational Polynomial Coefficient (RPC) correction before automatic registration, the choice of base band for automatic registration, and the configuration of GF4 PMS spatial resolution.

  20. Community Analysis of Biofilters Using Fluorescence In Situ Hybridization Including a New Probe for the Xanthomonas Branch of the Class Proteobacteria

    PubMed Central

    Friedrich, Udo; Naismith, Michèle M.; Altendorf, Karlheinz; Lipski, André

    1999-01-01

    Domain-, class-, and subclass-specific rRNA-targeted probes were applied to investigate the microbial communities of three industrial and three laboratory-scale biofilters. The set of probes also included a new probe (named XAN818) specific for the Xanthomonas branch of the class Proteobacteria; this probe is described in this study. The members of the Xanthomonas branch do not hybridize with previously developed rRNA-targeted oligonucleotide probes for the α-, β-, and γ-Proteobacteria. Bacteria of the Xanthomonas branch accounted for up to 4.5% of total direct counts obtained with 4′,6-diamidino-2-phenylindole. In biofilter samples, the relative abundance of these bacteria was similar to that of the γ-Proteobacteria. Actinobacteria (gram-positive bacteria with a high G+C DNA content) and α-Proteobacteria were the most dominant groups. Detection rates obtained with probe EUB338 varied between about 40 and 70%. For samples with high contents of gram-positive bacteria, these percentages were substantially improved when the calculations were corrected for the reduced permeability of gram-positive bacteria when formaldehyde was used as a fixative. The set of applied bacterial class- and subclass-specific probes yielded, on average, 58.5% (± a standard deviation of 23.0%) of the corrected eubacterial detection rates, thus indicating the necessity of additional probes for studies of biofilter communities. The Xanthomonas-specific probe presented here may serve as an efficient tool for identifying potential phytopathogens. In situ hybridization proved to be a practical tool for microbiological studies of biofiltration systems. PMID:10427047

  1. Statistics of Data Fitting: Flaws and Fixes of Polynomial Analysis of Channeled Spectra

    NASA Astrophysics Data System (ADS)

    Karstens, William; Smith, David

    2013-03-01

    Starting from general statistical principles, we have critically examined Baumeister's procedure* for determining the refractive index of thin films from channeled spectra. Briefly, the method assumes that the index and interference fringe order may be approximated by polynomials quadratic and cubic in photon energy, respectively. The coefficients of the polynomials are related by differentiation, which is equivalent to comparing energy differences between fringes. However, we find that when the fringe order is calculated from the published IR index for silicon* and then analyzed with Baumeister's procedure, the results do not reproduce the original index. This problem has been traced to (1) the use of unphysical powers in the polynomials (e.g., time-reversal invariance requires that the index be an even function of photon energy) and (2) the use of insufficient terms of the correct parity. Exclusion of unphysical terms and addition of quartic and quintic terms to the index and order polynomials yields significantly better fits with fewer parameters. This represents a specific example of using statistics to determine whether the assumed fitting model adequately captures the physics contained in experimental data. The use of analysis of variance (ANOVA) and the Durbin-Watson statistic to test criteria for the validity of least-squares fitting will be discussed. *D.F. Edwards and E. Ochoa, Appl. Opt. 19, 4130 (1980). Supported in part by the US Department of Energy, Office of Nuclear Physics under contract DE-AC02-06CH11357.
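
    The parity constraint is simple to act on: build the least-squares design matrix from even powers of photon energy only, rather than fitting a generic polynomial. A small synthetic check (index values loosely modeled on silicon's IR index):

      import numpy as np

      rng = np.random.default_rng(2)
      E = np.linspace(0.5, 1.2, 80)                # photon energy, eV
      n_true = 3.42 + 0.05*E**2 + 0.01*E**4        # an even function of energy
      n_meas = n_true + rng.normal(0, 1e-3, E.size)

      A = np.column_stack([E**0, E**2, E**4])      # even powers only
      coef, *_ = np.linalg.lstsq(A, n_meas, rcond=None)
      print(coef)        # recovers [3.42, 0.05, 0.01] to within the noise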

  2. TARGETED PRINCIPAL COMPONENT ANALYSIS: A NEW MOTION ARTIFACT CORRECTION APPROACH FOR NEAR-INFRARED SPECTROSCOPY

    PubMed Central

    YÜCEL, MERYEM A.; SELB, JULIETTE; COOPER, ROBERT J.; BOAS, DAVID A.

    2014-01-01

    As near-infrared spectroscopy (NIRS) broadens its application area to different age and disease groups, motion artifacts in the NIRS signal due to subject movement are becoming an important challenge. Motion artifacts generally produce signal fluctuations that are larger than physiological NIRS signals, so it is crucial to correct for them before estimating stimulus-evoked hemodynamic responses. There are various methods for correction, such as principal component analysis (PCA), wavelet-based filtering and spline interpolation. Here, we introduce a new approach to motion artifact correction, targeted principal component analysis (tPCA), which applies a PCA filter only to the segments of data identified as motion artifacts. It is expected that this will overcome the problem of filtering out desired signals that plagues standard PCA filtering of entire data sets. We compared the new approach with the most effective motion artifact correction algorithms on a set of data acquired simultaneously with a collodion-fixed probe (low motion artifact content) and a standard Velcro probe (high motion artifact content). Our results show that tPCA gives statistically better results in recovering the hemodynamic response function (HRF) than wavelet-based filtering and spline interpolation for the Velcro probe: it yields a significant reduction in mean-squared error (MSE) and a significant enhancement in Pearson's correlation coefficient to the true HRF. The collodion-fixed fiber probe with no motion correction performed better than the motion-corrected Velcro probe in terms of MSE and Pearson's correlation coefficient. Thus, if the experimental study permits, the use of a collodion-fixed fiber probe may be desirable. If a collodion-fixed probe is not feasible, then we suggest the use of tPCA in the processing of motion artifact contaminated data. PMID:25360181

  3. Correction: Reactions of metallodrugs with proteins: selective binding of phosphane-based platinum(ii) dichlorides to horse heart cytochrome c probed by ESI MS coupled to enzymatic cleavage.

    PubMed

    Mügge, Carolin; Michelucci, Elena; Boscaro, Francesca; Gabbiani, Chiara; Messori, Luigi; Weigand, Wolfgang

    2018-05-23

    Correction for 'Reactions of metallodrugs with proteins: selective binding of phosphane-based platinum(ii) dichlorides to horse heart cytochrome c probed by ESI MS coupled to enzymatic cleavage' by Carolin Mügge et al., Metallomics, 2011, 3, 987-990.

  4. Fuzzy logic controller for the LOLA AO tip-tilt corrector system

    NASA Astrophysics Data System (ADS)

    Sotelo, Pablo D.; Flores, Ruben; Garfias, Fernando; Cuevas, Salvador

    1998-09-01

    At the INSTITUTO DE ASTRONOMIA we developed an adaptive optics system for the correction of the first two orders of the Zernike polynomials by measuring the image centroid. Here we discuss the two system modalities, based on two different control strategies, and we present simulations comparing the systems. For the classic control system we present telescope results.

  5. Bayesian B-spline mapping for dynamic quantitative traits.

    PubMed

    Xing, Jun; Li, Jiahan; Yang, Runqing; Zhou, Xiaojing; Xu, Shizhong

    2012-04-01

    Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expressions in the RR framework, B-splines have been proved successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms the interval mapping based on the maximum likelihood; (2) for the simulated dataset with complicated growth curve simulated by B-splines, Legendre polynomial-based Bayesian mapping is not capable of identifying the designed QTLs accurately, even when higher-order Legendre polynomials are considered and (3) for the simulated dataset using Legendre polynomials, the Bayesian B-spline mapping can find the same QTLs as those identified by Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-spline in Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, where QTLs controlling the growth trajectory of stem diameters in Populus are located.

  6. Moisture content measurements of moss (Sphagnum spp.) using commercial sensors

    USGS Publications Warehouse

    Yoshikawa, K.; Overduin, P.P.; Harden, J.W.

    2004-01-01

    Sphagnum (spp.) is widely distributed in permafrost regions around the arctic and subarctic. The moisture content of the moss layer affects the thermal insulative capacity and preservation of permafrost. It also controls the growth and collapse history of palsas and other peat mounds, and is relevant, in general terms, to permafrost thaw (thermokarst). In this study, we test and calibrate seven different soil moisture sensors for measuring the moisture content of Sphagnum moss under laboratory conditions. The soil volume to which each probe is sensitive is one of the important parameters influencing moisture measurement, particularly in a heterogeneous medium such as moss. Each sensor has a unique response to changing moisture content levels, solution salinity, moss bulk density and to the orientation (structure) of the Sphagnum relative to the sensor. All of the probes examined here require unique polynomial calibration equations to obtain moisture content from probe output. We provide polynomial equations for dead and live Sphagnum moss (R² > 0.99).

  7. Error reduction in three-dimensional metrology combining optical and touch probe data

    NASA Astrophysics Data System (ADS)

    Gerde, Janice R.; Christens-Barry, William A.

    2010-08-01

    Analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS) is partly based on identifying the boundary ("parting line") between the "external surface area upper" (ESAU) and the sample's sole. Often, that boundary is obscured. We establish the parting line as the curved intersection between the sample outer surface and its insole surface. The outer surface is determined by discrete point cloud coordinates obtained using a laser scanner. The insole surface is defined by point cloud data obtained using a touch probe device, a coordinate measuring machine (CMM). Because these point cloud data sets do not overlap spatially, a polynomial surface is fitted to the insole data and extended to intersect a mesh fitted to the outer surface point cloud. This line of intersection defines the ESAU boundary, permitting further fractional area calculations to proceed. The defined parting line location is sensitive to the polynomial used to fit the experimental data, and extrapolation to the intersection with the ESAU can heighten this sensitivity. We discuss a methodology for transforming these data into a common reference frame. Three error sources are considered: measurement error in the point cloud coordinates, error from fitting a polynomial surface to a point cloud and extrapolating beyond the data set, and error from the reference frame transformation. These error sources can influence calculated surface areas. We describe experiments to assess error magnitude, the sensitivity of calculated results to these errors, and ways to minimize error impact on calculated quantities. Ultimately, we must ensure that statistical error from these procedures is minimized and within acceptance criteria.

  8. Probing baryogenesis through the Higgs boson self-coupling

    NASA Astrophysics Data System (ADS)

    Reichert, M.; Eichhorn, A.; Gies, H.; Pawlowski, J. M.; Plehn, T.; Scherer, M. M.

    2018-04-01

    The link between a modified Higgs self-coupling and the strong first-order phase transition necessary for baryogenesis is well explored for polynomial extensions of the Higgs potential. We broaden this argument beyond leading polynomial expansions of the Higgs potential to higher polynomial terms and to nonpolynomial Higgs potentials. For our quantitative analysis we resort to the functional renormalization group, which allows us to evolve the full Higgs potential to higher scales and finite temperature. In all cases we find that a strong first-order phase transition manifests itself in an enhancement of the Higgs self-coupling by at least 50%, implying that such modified Higgs potentials should be accessible at the LHC.

  9. An automated baseline correction protocol for infrared spectra of atmospheric aerosols collected on polytetrafluoroethylene (Teflon) filters

    NASA Astrophysics Data System (ADS)

    Kuzmiakova, Adele; Dillner, Ann M.; Takahama, Satoshi

    2016-06-01

    A growing body of research on statistical applications for characterization of atmospheric aerosol Fourier transform infrared (FT-IR) samples collected on polytetrafluoroethylene (PTFE) filters (e.g., Russell et al., 2011; Ruthenburg et al., 2014) and a rising interest in analyzing FT-IR samples collected by air quality monitoring networks call for an automated PTFE baseline correction solution. The existing polynomial technique (Takahama et al., 2013) is not scalable to a project with a large number of aerosol samples because it contains many parameters and requires expert intervention. Therefore, the question of how to develop an automated method for baseline correcting hundreds to thousands of ambient aerosol spectra, given the variability in both environmental mixture composition and PTFE baselines, remains open. This study approaches the question by detailing a statistical protocol that allows for the precise definition of analyte and background subregions, applies nonparametric smoothing splines to reproduce sample-specific PTFE variations, and integrates performance metrics from atmospheric aerosol and blank samples alike in the smoothing parameter selection. Referencing 794 atmospheric aerosol samples from seven Interagency Monitoring of PROtected Visual Environments (IMPROVE) sites collected during 2011, we start by identifying key FT-IR signal characteristics, such as non-negative absorbance or analyte segment transformation, to capture sample-specific transitions between background and analyte. While referring to qualitative properties of the PTFE background, the goal of smoothing-spline interpolation is to learn the baseline structure in the background region in order to predict the baseline structure in the analyte region. We then validate the model by comparing smoothing-spline baseline-corrected spectra with uncorrected and polynomial baseline (PB)-corrected equivalents via three statistical applications: (1) clustering analysis, (2) functional group quantification, and (3) thermal optical reflectance (TOR) organic carbon (OC) and elemental carbon (EC) predictions. The discrepancy rate for a four-cluster solution is 10 %. For all functional groups but carboxylic COH the discrepancy is ≤ 10 %. Performance metrics obtained from TOR OC and EC predictions (R² ≥ 0.94, bias ≤ 0.01 µg m⁻³, and error ≤ 0.04 µg m⁻³) are on a par with those obtained from uncorrected and PB-corrected spectra. The proposed protocol leads to visually and analytically similar estimates as those generated by the polynomial method. More importantly, the automated solution allows us and future users to evaluate its analytical reproducibility while minimizing reducible user bias. We anticipate the protocol will enable FT-IR researchers and data analysts to quickly and reliably analyze large amounts of data and connect them to a variety of available statistical learning methods to be applied to analyte absorbances isolated in atmospheric aerosol samples.
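
    Stripped of the performance-metric machinery, the core step is a smoothing spline fitted on background (analyte-free) subregions and evaluated everywhere; the subregion boundaries, smoothing parameter and synthetic spectrum below are illustrative.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      rng = np.random.default_rng(4)
      wn = np.linspace(1000, 4000, 3000)            # wavenumber, cm^-1
      ptfe = 0.02*np.sin(wn/300) + 1e-5*(wn - 2500) # PTFE-like baseline (synthetic)
      peak = 0.3*np.exp(-((wn - 2900)/60)**2)       # analyte band
      spectrum = ptfe + peak + rng.normal(0, 1e-3, wn.size)

      analyte = (wn > 2700) & (wn < 3100)           # analyte subregion
      bg = ~analyte                                 # background subregions

      spl = UnivariateSpline(wn[bg], spectrum[bg], s=bg.sum()*1e-6)
      corrected = spectrum - spl(wn)                # baseline-corrected spectrum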

  10. Analytical and numerical construction of vertical periodic orbits about triangular libration points based on polynomial expansion relations among directions

    NASA Astrophysics Data System (ADS)

    Qian, Ying-Jing; Yang, Xiao-Dong; Zhai, Guan-Qiao; Zhang, Wei

    2017-08-01

    Inspired by the nonlinear modes concept in vibrational dynamics, the vertical periodic orbits around the triangular libration points are revisited for the Circular Restricted Three-Body Problem. The ζ-component motion is treated as the dominant motion and the ξ- and η-component motions are treated as the slave motions. The slave motions are related to the dominant motion through approximate nonlinear polynomial expansions with respect to the ζ-position and ζ-velocity over one period of the orbital motion. By employing the relations among the three directions, the three-dimensional system can be reduced to a one-dimensional problem. Then the approximate three-dimensional vertical periodic solution can be obtained analytically by solving the dominant motion in the ζ-direction only. To demonstrate the effectiveness of the proposed method, an accuracy study was carried out to validate the polynomial expansion (PE) method. As one of the applications, the invariant nonlinear relations in polynomial expansion form are used as constraints to obtain numerical solutions by differential correction. The nonlinear relations among the directions provide an alternative point of view for exploring the overall dynamics of periodic orbits around libration points with general rules.

  11. Timing Calibration in PET Using a Time Alignment Probe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moses, William W.; Thompson, Christopher J.

    2006-05-05

    We evaluate the Scanwell Time Alignment Probe for performing the timing calibration for the LBNL Prostate-Specific PET Camera. We calibrate the time delay correction factors for each detector module in the camera using two methods: the Time Alignment Probe (which measures the time difference between the probe and each detector module) and the conventional method (which measures the timing difference between all module-module combinations in the camera). These correction factors, which are quantized in 2 ns steps, are compared on a module-by-module basis. The values are in excellent agreement: of the 80 correction factors, 62 agree exactly, 17 differ by 1 step, and 1 differs by 2 steps. We also measure on-time and off-time counting rates when the two sets of calibration factors are loaded into the camera and find that they agree within statistical error. We conclude that the performance of the Time Alignment Probe method is equivalent to that of the conventional method.

  12. Multiplex Detection of Toxigenic Penicillium Species.

    PubMed

    Rodríguez, Alicia; Córdoba, Juan J; Rodríguez, Mar; Andrade, María J

    2017-01-01

    Multiplex PCR-based methods for simultaneous detection and quantification of different mycotoxin-producing Penicillia are useful tools in food safety programs. These rapid and sensitive techniques make it possible to take corrective actions during food processing or storage to avoid the accumulation of mycotoxins. In this chapter, three multiplex PCR-based methods to detect at least patulin- and ochratoxin A-producing Penicillia are detailed. Two of them are multiplex real-time PCR assays suitable for monitoring and quantifying toxigenic Penicillium, using either the nonspecific dye SYBR Green or specific hydrolysis probes (TaqMan). All of them successfully use the same target genes, involved in the biosynthesis of these mycotoxins, for designing primers and/or probes.

  13. Assessment of Hybrid High-Order methods on curved meshes and comparison with discontinuous Galerkin methods

    NASA Astrophysics Data System (ADS)

    Botti, Lorenzo; Di Pietro, Daniele A.

    2018-10-01

    We propose and validate a novel extension of Hybrid High-Order (HHO) methods to meshes featuring curved elements. HHO methods are based on discrete unknowns that are broken polynomials on the mesh and its skeleton. We propose here the use of physical frame polynomials over mesh elements and reference frame polynomials over mesh faces. With this choice, the degree of face unknowns must be suitably selected in order to recover on curved meshes the same convergence rates as on straight meshes. We provide an estimate of the optimal face polynomial degree depending on the element polynomial degree and on the so-called effective mapping order. The estimate is numerically validated through specifically crafted numerical tests. All test cases are conducted considering two- and three-dimensional pure diffusion problems, and include comparisons with discontinuous Galerkin discretizations. The extension to agglomerated meshes with curved boundaries is also considered.

  14. Tensor calculus in polar coordinates using Jacobi polynomials

    NASA Astrophysics Data System (ADS)

    Vasil, Geoffrey M.; Burns, Keaton J.; Lecoanet, Daniel; Olver, Sheehan; Brown, Benjamin P.; Oishi, Jeffrey S.

    2016-11-01

    Spectral methods are an efficient way to solve partial differential equations on domains possessing certain symmetries. The utility of a method depends strongly on the choice of spectral basis. In this paper we describe a set of bases built out of Jacobi polynomials, and associated operators for solving scalar, vector, and tensor partial differential equations in polar coordinates on a unit disk. By construction, the bases satisfy regularity conditions at r = 0 for any tensorial field. The coordinate singularity in a disk is a prototypical case for many coordinate singularities. The work presented here extends to other geometries. The operators represent covariant derivatives, multiplication by azimuthally symmetric functions, and the tensorial relationship between fields. These arise naturally from relations between classical orthogonal polynomials, and form a Heisenberg algebra. Other past work uses more specific polynomial bases for solving equations in polar coordinates. The main innovation in this paper is to use a larger set of possible bases to achieve maximum bandedness of linear operations. We provide a series of applications of the methods, illustrating their ease-of-use and accuracy.

  15. Polynomial equations for science orbits around Europa

    NASA Astrophysics Data System (ADS)

    Cinelli, Marco; Circi, Christian; Ortore, Emiliano

    2015-07-01

    In this paper, the design of science orbits for the observation of a celestial body has been carried out using polynomial equations. The effects related to the main zonal harmonics of the celestial body and the perturbation deriving from the presence of a third celestial body have been taken into account. The third body describes a circular and equatorial orbit with respect to the primary body and, for its disturbing potential, an expansion in Legendre polynomials up to the second order has been considered. These polynomial equations allow the determination of science orbits around Jupiter's satellite Europa, where the third body gravitational attraction represents one of the main forces influencing the motion of an orbiting probe. Thus, the retrieved relationships have been applied to this moon and periodic sun-synchronous and multi-sun-synchronous orbits have been determined. Finally, numerical simulations have been carried out to validate the analytical results.

  16. Derivatives of random matrix characteristic polynomials with applications to elliptic curves

    NASA Astrophysics Data System (ADS)

    Snaith, N. C.

    2005-12-01

    The value distribution of derivatives of characteristic polynomials of matrices from SO(N) is calculated at the point 1, the symmetry point on the unit circle of the eigenvalues of these matrices. We consider subsets of matrices from SO(N) that are constrained to have at least n eigenvalues equal to 1 and investigate the first non-zero derivative of the characteristic polynomial at that point. The connection between the values of random matrix characteristic polynomials and values of L-functions in families has been well established. The motivation for this work is the expectation that through this connection with L-functions derived from families of elliptic curves, and using the Birch and Swinnerton-Dyer conjecture to relate values of the L-functions to the rank of elliptic curves, random matrix theory will be useful in probing important questions concerning these ranks.

  17. Optical design of a novel instrument that uses the Hartmann-Shack sensor and Zernike polynomials to measure and simulate customized refraction correction surgery outcomes and patient satisfaction

    NASA Astrophysics Data System (ADS)

    Yasuoka, Fatima M. M.; Matos, Luciana; Cremasco, Antonio; Numajiri, Mirian; Marcato, Rafael; Oliveira, Otavio G.; Sabino, Luis G.; Castro N., Jarbas C.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.

    2016-03-01

    An optical system that conjugates the patient's pupil to the plane of a Hartmann-Shack (HS) wavefront sensor has been simulated using optical design software, and an optical bench prototype has been mounted using a mechanical eye device, a beam splitter, an illumination system, lenses, mirrors, a mirrored prism, a movable mirror, a wavefront sensor, and a CCD camera. The mechanical eye device is used to simulate aberrations of the eye. Rays emitted from this device travel through the beam splitter to the optical system; some fall on the CCD camera, while others pass through the optical system and finally reach the sensor. Eye models based on typical in vivo eye aberrations are constructed using the optical design software Zemax. The computer-aided HS images for each case are acquired, and these images are processed using customized techniques. The simulated and real images for low-order aberrations are compared using centroid coordinates to verify that the optical bench is constructed precisely enough to match the simulated system. Afterwards, a simulated version of retinal images is constructed to show how these typical eyes would perceive an optotype positioned 20 ft away. Eye doctors can apply personalized corrections based on different Zernike polynomial values, and the optical images are rendered with the new parameters. Optical images of how that eye would see with or without correction of certain aberrations are generated, making it possible to determine which aberrations can be corrected and to what degree. The patient can then "personalize" the correction to their own satisfaction. This new approach to wavefront sensing is a promising change of paradigm towards the betterment of the patient-physician relationship.
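
    As a hedged aside (a minimal sketch with an assumed Noll-normalized parametrization, not the instrument's software), removing a chosen Zernike term from a wavefront is a one-line operation:

      import numpy as np

      def zernike_wavefront(rho, theta, c_defocus=0.0, c_astig=0.0):
          # Wavefront from defocus Z(2,0) and oblique astigmatism Z(2,-2),
          # Noll-normalized (an assumption made for this sketch).
          Z20 = np.sqrt(3.0) * (2.0 * rho**2 - 1.0)
          Z2m2 = np.sqrt(6.0) * rho**2 * np.sin(2.0 * theta)
          return c_defocus * Z20 + c_astig * Z2m2

      rho, theta = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 2 * np.pi, 64))
      W = zernike_wavefront(rho, theta, c_defocus=0.5, c_astig=-0.2)   # microns
      W_corrected = W - zernike_wavefront(rho, theta, c_defocus=0.5)   # defocus removed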

  18. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope.

    PubMed

    Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai

    2016-04-01

    We present a novel and high-precision microscopic vision modeling method, which can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. This method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model and a residual compensation model. First, the method of image distortion correction is proposed. The image data required by image distortion correction come from stereo images of a calibration sample. The geometric features of image distortions can be predicted through the shape deformation of lines constructed by grid points in the stereo images. Linear and polynomial fitting methods are applied to correct image distortions. Second, shape deformation features of the disparity distribution are discussed, and the method of disparity distortion correction is proposed; a polynomial fitting method is applied to correct disparity distortion. Third, a microscopic vision model is derived, which consists of two parts, i.e., an initial vision model and a residual compensation model. We derive the initial vision model by analyzing the direct mapping relationship between object and image points. The residual compensation model is derived based on the residual analysis of the initial vision model. The results show that with a maximum reconstruction distance of 4.1 mm in the X direction, 2.9 mm in the Y direction and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison of our model with the traditional pinhole camera model shows that the two models have a similar reconstruction precision for X coordinates; however, the traditional pinhole camera model has a lower precision for Y and Z coordinates than our model. The method proposed in this paper is very helpful for micro-gripping systems based on SLM microscopic vision.
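
    A much-simplified sketch of the polynomial fitting step (hypothetical 1D data; the paper fits image and disparity distortions in 2D): fit a polynomial to the deviation between measured and ideal grid positions, then use it to undo the distortion.

      import numpy as np

      ideal = np.linspace(0.0, 4.0, 9)                    # ideal grid positions (mm)
      measured = ideal + 0.02 * ideal**2 - 0.01 * ideal   # distorted measurements

      coeffs = np.polyfit(measured, ideal - measured, deg=3)   # distortion model
      corrected = measured + np.polyval(coeffs, measured)      # apply the correction
      print(np.max(np.abs(corrected - ideal)))                 # residual distortion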

  19. Quantitative Electron Probe Microanalysis: State of the Art

    NASA Technical Reports Server (NTRS)

    Carpenter, P. K.

    2005-01-01

    Quantitative electron-probe microanalysis (EPMA) has improved due to better instrument design and X-ray correction methods. Design improvements to the electron column and X-ray spectrometer have resulted in measurement precision that exceeds analytical accuracy. Wavelength-dispersive spectrometers (WDS) have layered-dispersive diffraction crystals with improved light-element sensitivity. Newer energy-dispersive spectrometers (EDS) have Si-drift detector elements, thin-window designs, and digital processing electronics with X-ray throughput approaching that of WDS systems. Using these systems, digital X-ray mapping coupled with spectrum imaging is a powerful compositional mapping tool. Improvements in analytical accuracy are due to better X-ray correction algorithms, mass absorption coefficient data sets, and analysis methods for complex geometries. ZAF algorithms have been superseded by Phi(pz) algorithms that better model the depth distribution of primary X-ray production. Complex thin-film and particle geometries are treated using Phi(pz) algorithms, and results agree well with Monte Carlo simulations. For geological materials, X-ray absorption dominates the corrections and depends on the accuracy of mass absorption coefficient (MAC) data sets. However, few MACs have been experimentally measured, and the use of fitted coefficients continues due to the general success of the analytical technique. A polynomial formulation of the Bence-Albee alpha-factor technique, calibrated using Phi(pz) algorithms, is used to critically evaluate accuracy issues; analytical accuracy approaches 2% relative and is limited by measurement precision for ideal cases, but for many elements the analytical accuracy is unproven. The EPMA technique has improved to the point where it is frequently used instead of the petrographic microscope for reconnaissance work. Examples of stagnant research areas are WDS detector design, characterization of calibration standards, and the need for a more complete treatment of the continuum X-ray fluorescence correction.

  20. A minor groove binder probe real-time PCR assay for discrimination between type 2-based vaccines and field strains of canine parvovirus.

    PubMed

    Decaro, Nicola; Elia, Gabriella; Desario, Costantina; Roperto, Sante; Martella, Vito; Campolo, Marco; Lorusso, Alessio; Cavalli, Alessandra; Buonavoglia, Canio

    2006-09-01

    A minor groove binder (MGB) probe assay was developed to discriminate between type 2-based vaccines and field strains of canine parvovirus (CPV). Considering that most CPV vaccines contain the old type 2, which no longer circulates in the canine population, two MGB probes, specific for CPV-2 and for the antigenic variants (types 2a, 2b and 2c), respectively, were labeled with different fluorophores. The MGB probe assay was able to discriminate correctly between the old type and the variants, with a detection limit of 10^1 DNA copies and good reproducibility. Quantitation of the viral DNA loads was accurate, as demonstrated by comparing the CPV DNA titres to those calculated by means of the TaqMan assay recognising all CPV types. This assay will ensure resolution of most diagnostic problems in dogs showing CPV disease shortly after CPV vaccination, although it does not discriminate between field strains and type 2b-based vaccines, recently licensed for the market in some countries.

  1. Histogram-driven cupping correction (HDCC) in CT

    NASA Astrophysics Data System (ADS)

    Kyriakou, Y.; Meyer, M.; Lapp, R.; Kalender, W. A.

    2010-04-01

    Typical cupping correction methods are pre-processing methods which require either pre-calibration measurements or simulations of standard objects to approximate and correct for beam hardening and scatter. Some of them require knowledge of spectra, detector characteristics, etc. The aim of this work was to develop a practical histogram-driven cupping correction (HDCC) method to post-process the reconstructed images. We use a polynomial representation of the raw data generated by forward projection of the reconstructed images; forward and backprojection are performed on graphics processing units (GPU). The coefficients of the polynomial are optimized using a simplex minimization of the joint entropy of the CT image and its gradient. The algorithm was evaluated using simulations and measurements of homogeneous and inhomogeneous phantoms. For the measurements, a C-arm flat-detector CT (FD-CT) system with a 30×40 cm2 detector, a kilovoltage on-board imager (radiation therapy simulator) and a micro-CT system were used. The algorithm reduced cupping artifacts both in simulations and measurements using a fourth-order polynomial and was in good agreement with the reference. The minimization algorithm required fewer than 70 iterations to adjust the coefficients, performing only a linear combination of basis images and thus executing without time-consuming operations. HDCC reduced cupping artifacts without the necessity of pre-calibration or other scan information, enabling retrospective improvement of CT image homogeneity. The method can also be combined with other cupping correction algorithms or used in a calibration manner.
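
    A toy 1D sketch of the HDCC principle as described in the abstract (my own simplification; the real method operates on forward-projected raw data with GPU reconstruction): simplex (Nelder-Mead) minimization of the joint entropy of the corrected image and its gradient over polynomial coefficients.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      x = np.linspace(-1.0, 1.0, 256)
      ideal = np.where(np.abs(x) < 0.8, 1.0, 0.0)           # homogeneous "phantom"
      cupped = ideal * (1.0 - 0.3 * x**2) + 0.01 * rng.standard_normal(x.size)

      def joint_entropy(img):
          h, _, _ = np.histogram2d(img, np.gradient(img), bins=32)
          p = h / h.sum()
          p = p[p > 0]
          return -np.sum(p * np.log(p))

      def cost(c):
          # linear combination of basis images: img, img*x^2, img*x^4
          return joint_entropy(cupped * (1.0 + c[0] * x**2 + c[1] * x**4))

      res = minimize(cost, x0=[0.0, 0.0], method="Nelder-Mead")
      print(res.x)   # coefficients that approximately flatten the cupping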

  2. Phase Error Correction in Time-Averaged 3D Phase Contrast Magnetic Resonance Imaging of the Cerebral Vasculature

    PubMed Central

    MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard

    2016-01-01

    Purpose Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600

  3. Results of the Compensated Earth-Moon-Earth Retroreflector Laser Link (CEMERLL) Experiment

    NASA Technical Reports Server (NTRS)

    Wilson, K. E.; Leatherman, P. R.; Cleis, R.; Spinhirne, J.; Fugate, R. Q.

    1997-01-01

    Adaptive optics techniques can be used to realize a robust low bit-error-rate link by mitigating the atmosphere-induced signal fades in optical communications links between ground-based transmitters and deep-space probes. Phase I of the Compensated Earth-Moon-Earth Retroreflector Laser Link (CEMERLL) experiment demonstrated the first propagation of an atmosphere-compensated laser beam to the lunar retroreflectors. A 1.06-micron Nd:YAG laser beam was propagated through the full aperture of the 1.5-m telescope at the Starfire Optical Range (SOR), Kirtland Air Force Base, New Mexico, to the Apollo 15 retroreflector array at Hadley Rille. Laser guide-star adaptive optics were used to compensate turbulence-induced aberrations across the transmitter's 1.5-m aperture. A 3.5-m telescope, also located at the SOR, was used as a receiver for detecting the return signals. JPL-supplied Chebyshev polynomials of the retroreflector locations were used to develop tracking algorithms for the telescopes. At times we observed in excess of 100 photons returned from a single pulse when the outgoing beam from the 1.5-m telescope was corrected by the adaptive optics system. No returns were detected when the outgoing beam was uncompensated. The experiment was conducted from March through September 1994, during the first or last quarter of the Moon.

  4. Enhancing sparsity of Hermite polynomial expansions by iterative rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Baker, Nathan A.

    2016-02-01

    Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for random variables through linear mappings such that the representation of the quantity of interest is more sparse with new basis functions associated with the new random variables. This sparsity increases both the efficiency and accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation-based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.

  5. Application of Zernike polynomials towards accelerated adaptive focusing of transcranial high intensity focused ultrasound.

    PubMed

    Kaye, Elena A; Hertzberg, Yoni; Marx, Michael; Werner, Beat; Navon, Gil; Levoy, Marc; Pauly, Kim Butts

    2012-10-01

    To study the phase aberrations produced by human skulls during transcranial magnetic resonance imaging guided focused ultrasound surgery (MRgFUS), to demonstrate the potential of Zernike polynomials (ZPs) to accelerate the adaptive focusing process, and to investigate the benefits of using phase corrections obtained in previous studies to provide the initial guess for correction of a new data set. The five phase aberration data sets, analyzed here, were calculated based on preoperative computerized tomography (CT) images of the head obtained during previous transcranial MRgFUS treatments performed using a clinical prototype hemispherical transducer. The noniterative adaptive focusing algorithm [Larrat et al., "MR-guided adaptive focusing of ultrasound," IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57(8), 1734-1747 (2010)] was modified by replacing Hadamard encoding with Zernike encoding. The algorithm was tested in simulations to correct the patients' phase aberrations. MR acoustic radiation force imaging (MR-ARFI) was used to visualize the effect of the phase aberration correction on the focusing of a hemispherical transducer. In addition, two methods for constructing initial phase correction estimate based on previous patient's data were investigated. The benefits of the initial estimates in the Zernike-based algorithm were analyzed by measuring their effect on the ultrasound intensity at the focus and on the number of ZP modes necessary to achieve 90% of the intensity of the nonaberrated case. Covariance of the pairs of the phase aberrations data sets showed high correlation between aberration data of several patients and suggested that subgroups can be based on level of correlation. Simulation of the Zernike-based algorithm demonstrated the overall greater correction effectiveness of the low modes of ZPs. The focal intensity achieves 90% of nonaberrated intensity using fewer than 170 modes of ZPs. The initial estimates based on using the average of the phase aberration data from the individual subgroups of subjects was shown to increase the intensity at the focal spot for the five subjects. The application of ZPs to phase aberration correction was shown to be beneficial for adaptive focusing of transcranial ultrasound. The skull-based phase aberrations were found to be well approximated by the number of ZP modes representing only a fraction of the number of elements in the hemispherical transducer. Implementing the initial phase aberration estimate together with Zernike-based algorithm can be used to improve the robustness and can potentially greatly increase the viability of MR-ARFI-based focusing for a clinical transcranial MRgFUS therapy.

  7. Astigmatism corrected common path probe for optical coherence tomography.

    PubMed

    Singh, Kanwarpal; Yamada, Daisuke; Tearney, Guillermo

    2017-03-01

    Optical coherence tomography (OCT) catheters for intraluminal imaging are subject to various artifacts due to reference-sample arm dispersion imbalances and sample arm beam astigmatism. The goal of this work was to develop a probe that minimizes such artifacts. Our probe was fabricated using a single mode fiber, at the tip of which a glass spacer and a graded-index objective lens were spliced to achieve the desired focal distance. The signal was reflected using a curved reflector to correct for astigmatism caused by the thin, protective, transparent sheath that surrounds the optics. The probe design was optimized using Zemax, a commercially available optical design software package. Common path interferometric operation was achieved using Fresnel reflection from the tip of the focusing graded-index objective lens. The performance of the probe was tested using a custom designed spectrometer-based OCT system. The probe achieved an axial resolution of 15.6 μm in air, a lateral resolution of 33 μm, and a sensitivity of 103 dB. A scattering tissue phantom was imaged to test the performance of the probe for astigmatism correction. Images of the phantom confirmed that this common-path, astigmatism-corrected OCT imaging probe had minimal artifacts in the axial and lateral dimensions. In this work, we developed an astigmatism-corrected, common path probe that minimizes artifacts associated with standard OCT probes. This design may be useful for OCT applications that require high axial and lateral resolutions. Lasers Surg. Med. 49:312-318, 2017. © 2016 Wiley Periodicals, Inc.

  8. Development of species-specific hybridization probes for marine luminous bacteria by using in vitro DNA amplification.

    PubMed Central

    Wimpee, C F; Nadeau, T L; Nealson, K H

    1991-01-01

    By using two highly conserved regions of the luxA gene as primers, polymerase chain reaction amplification methods were used to prepare species-specific probes against the luciferase gene from four major groups of marine luminous bacteria. Laboratory studies with test strains indicated that three of the four probes cross-reacted with themselves and with one or more of the other species at low stringencies but were specific for members of their own species at high stringencies. The fourth probe, generated from Vibrio harveyi DNA, cross-reacted with DNAs from two closely related species, V. orientalis and V. vulnificus. When nonluminous cultures were tested with the species-specific probes, no false-positive results were observed, even at low stringencies. Two field isolates were correctly identified as Photobacterium phosphoreum by using the species-specific hybridization probes at high stringency. A mixed probe (four different hybridization probes) used at low stringency gave positive results with all of the luminous bacteria tested, including the terrestrial species Xenorhabdus luminescens and the taxonomically distinct marine bacterial species Shewanella hanedai; minimal cross-hybridization with these species was seen at higher stringencies. PMID:1854194

  9. A new digitized reverse correction method for hypoid gears based on a one-dimensional probe

    NASA Astrophysics Data System (ADS)

    Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo

    2017-12-01

    In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometrical deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles and the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretical designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction method of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of the method of numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed by the organic combination of numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.

  10. New Class of Quantum Error-Correcting Codes for a Bosonic Mode

    NASA Astrophysics Data System (ADS)

    Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.

    2016-07-01

    We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
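
    As a hedged illustration (the standard smallest binomial code from the literature, not code from the paper), one can verify in a few lines that the two code words, W+ = (|0> + |4>)/sqrt(2) and W- = |2>, have equal mean boson number, a Knill-Laflamme condition needed to correct a single loss event:

      import numpy as np

      dim = 6
      w_plus, w_minus = np.zeros(dim), np.zeros(dim)
      w_plus[0] = w_plus[4] = 1.0 / np.sqrt(2.0)
      w_minus[2] = 1.0

      n_op = np.diag(np.arange(dim))       # number operator in the Fock basis
      print(w_plus @ n_op @ w_plus)        # 2.0
      print(w_minus @ n_op @ w_minus)      # 2.0 (equal, as required)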

  11. Online Removal of Baseline Shift with a Polynomial Function for Hemodynamic Monitoring Using Near-Infrared Spectroscopy.

    PubMed

    Zhao, Ke; Ji, Yaoyao; Li, Yan; Li, Ting

    2018-01-21

    Near-infrared spectroscopy (NIRS) has become widely accepted as a valuable tool for noninvasively monitoring hemodynamics for clinical and diagnostic purposes. Baseline shift has attracted great attention in the field, but there has been little quantitative study on baseline removal. Here, we aimed to study the baseline characteristics of an in-house-built portable medical NIRS device over a long time (>3.5 h). We found that the baselines measured in phantom tests mimicking human bodies were all well described by polynomial functions, consistent with recent NIRS studies. More importantly, our study shows that the fourth-order polynomial gave the best performance, providing stable, low-computation-burden fitting calibration (R-square >0.99 for all probes) among the second- to sixth-order polynomials, as evaluated by R-square, the sum of squares due to error, and the residual. This study provides a straightforward, efficient, and quantitatively evaluated solution for online baseline removal for hemodynamic monitoring using NIRS devices.
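
    A minimal sketch of the removal step the abstract points to (synthetic data of my own; the device's actual calibration is not reproduced here): fit a fourth-order polynomial to the slow drift and subtract it, leaving the hemodynamic oscillation.

      import numpy as np

      t = np.linspace(0.0, 3.5, 2000)                 # recording time (hours)
      rng = np.random.default_rng(2)
      baseline = 0.05 * t**4 - 0.3 * t**3 + 0.4 * t**2 + 0.1 * t
      hemo = 0.05 * np.sin(2 * np.pi * t * 60.0)      # ~one cycle per minute
      signal = baseline + hemo + 0.01 * rng.standard_normal(t.size)

      coeffs = np.polyfit(t, signal, deg=4)           # fourth-order baseline model
      detrended = signal - np.polyval(coeffs, t)      # the hemodynamics survive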

  12. Locked Nucleic Acid Probe-Based Real-Time PCR Assay for the Rapid Detection of Rifampin-Resistant Mycobacterium tuberculosis

    PubMed Central

    Sun, Chongyun; Li, Chao; Wang, Xiaochen; Liu, Haican; Zhang, Pingping; Zhao, Xiuqin; Wang, Xinrui; Jiang, Yi; Yang, Ruifu; Wan, Kanglin; Zhou, Lei

    2015-01-01

    Drug-resistant Mycobacterium tuberculosis can be rapidly diagnosed through nucleic acid amplification techniques by analyzing the variations in the associated gene sequences. In the present study, a locked nucleic acid (LNA) probe-based real-time PCR assay was developed to identify the mutations in the rpoB gene associated with rifampin (RFP) resistance in M. tuberculosis. Six LNA probes with the discrimination capability of one-base mismatch were designed to monitor the 23 most frequent rpoB mutations. The target mutations were identified using the probes in a “probe dropout” manner (quantification cycle = 0); thus, the proposed technique exhibited superiority in mutation detection. The LNA probe-based real-time PCR assay was developed in a two-tube format with three LNA probes and one internal amplification control probe in each tube. The assay showed excellent specificity to M. tuberculosis with or without RFP resistance by evaluating 12 strains of common non-tuberculosis mycobacteria. The limit of detection of M. tuberculosis was 10 genomic equivalents (GE)/reaction by further introducing a nested PCR method. In a blind validation of 154 clinical mycobacterium isolates, 142/142 (100%) were correctly detected through the assay. Of these isolates, 88/88 (100%) were determined as RFP susceptible and 52/54 (96.3%) were characterized as RFP resistant. Two unrecognized RFP-resistant strains were sequenced and were found to contain mutations outside the range of the 23 mutation targets. In conclusion, this study established a sensitive, accurate, and low-cost LNA probe-based assay suitable for a four-multiplexing real-time PCR instrument. The proposed method can be used to diagnose RFP-resistant tuberculosis in clinical laboratories. PMID:26599667

  13. X-ray fluorescence microscopy artefacts in elemental maps of topologically complex samples: Analytical observations, simulation and a map correction method

    NASA Astrophysics Data System (ADS)

    Billè, Fulvio; Kourousias, George; Luchinat, Enrico; Kiskinova, Maya; Gianoncelli, Alessandra

    2016-08-01

    XRF spectroscopy is among the most widely used non-destructive techniques for elemental analysis. Despite the known angular dependence of X-ray fluorescence (XRF), topological artefacts remain an unresolved issue when using X-ray micro- or nano-probes. In this work we investigate the origin of the artefacts in XRF imaging of topologically complex samples, a problem that is particularly acute in studies of organic matter because of the limited travel distances of low-energy XRF emission from light elements. In particular, we mapped Human Embryonic Kidney (HEK293T) cells. The exemplary results with biological samples, obtained with a soft X-ray scanning microscope installed at a synchrotron facility, were used for testing a mathematical model based on detector response simulations and for proposing an artefact correction method based on directional derivatives. Despite the peculiar and specific application, the methodology can be easily extended to hard X-rays and to set-ups with multi-array detector systems when the dimensions of surface reliefs are of the order of the probing beam size.

  14. Model-based estimates of long-term persistence of induced HPV antibodies: a flexible subject-specific approach.

    PubMed

    Aregay, Mehreteab; Shkedy, Ziv; Molenberghs, Geert; David, Marie-Pierre; Tibaldi, Fabián

    2013-01-01

    In infectious diseases, it is important to predict the long-term persistence of vaccine-induced antibodies and to estimate the time points at which individual titers fall below the threshold value for protection. This article focuses on HPV-16/18 and uses a so-called fractional-polynomial model to this effect, derived in a data-driven fashion. Initially, model selection was performed among the second- and first-order fractional polynomials on the one hand and the linear mixed model on the other. According to a functional selection procedure, the first-order fractional polynomial was selected. Apart from the fractional polynomial model, we also fitted a power-law model, which is a special case of the fractional polynomial model. Both models were compared using Akaike's information criterion. Over the observation period, the fractional polynomials fitted the data better than the power-law model; this, of course, does not imply that they fit best over the long run, and hence caution ought to be used when prediction is of interest. Therefore, we point out that the persistence of the anti-HPV responses induced by these vaccines can only be ascertained empirically by long-term follow-up analysis.
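
    A hedged sketch of a first-order fractional polynomial (FP1) fit (synthetic titers of my own; the power set is the conventional FP1 candidate set, with p = 0 meaning log(t) by convention):

      import numpy as np

      t = np.array([1.0, 6.0, 12.0, 24.0, 48.0])       # months since vaccination
      y = np.log([420.0, 180.0, 140.0, 115.0, 100.0])  # log antibody titers

      powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]         # conventional FP1 powers
      best = None
      for p in powers:
          g = np.log(t) if p == 0 else t**p
          X = np.column_stack([np.ones_like(t), g])
          beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
          rss = float(rss[0]) if rss.size else float(np.sum((y - X @ beta)**2))
          if best is None or rss < best[0]:
              best = (rss, p, beta)
      print("best FP1 power:", best[1], "coefficients:", best[2])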

  15. Sum-of-squares-based fuzzy controller design using quantum-inspired evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Gwo-Ruey; Huang, Yu-Chia; Cheng, Chih-Yung

    2016-07-01

    In the field of fuzzy control, control gains are obtained by solving stabilisation conditions in linear-matrix-inequality-based Takagi-Sugeno fuzzy control method and sum-of-squares-based polynomial fuzzy control method. However, the optimal performance requirements are not considered under those stabilisation conditions. In order to handle specific performance problems, this paper proposes a novel design procedure with regard to polynomial fuzzy controllers using quantum-inspired evolutionary algorithms. The first contribution of this paper is a combination of polynomial fuzzy control and quantum-inspired evolutionary algorithms to undertake an optimal performance controller design. The second contribution is the proposed stability condition derived from the polynomial Lyapunov function. The proposed design approach is dissimilar to the traditional approach, in which control gains are obtained by solving the stabilisation conditions. The first step of the controller design uses the quantum-inspired evolutionary algorithms to determine the control gains with the best performance. Then, the stability of the closed-loop system is analysed under the proposed stability conditions. To illustrate effectiveness and validity, the problem of balancing and the up-swing of an inverted pendulum on a cart is used.

  16. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1985-01-01

    A concatenated coding scheme for error control in data communications is analyzed. The inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error of the above error control scheme is derived and upper bounded. Two specific examples are analyzed. In the first example, the inner code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^6+X+1) = X^7+X^6+X^2+1 and the outer code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^15+X^14+X^13+X^12+X^4+X^3+X^2+X+1) = X^16+X^12+X^5+1, which is the X.25 standard for packet-switched data networks. This example is proposed for error control on NASA telecommand links. In the second example, the inner code is the same as that in the first example but the outer code is a shortened Reed-Solomon code with symbols from GF(2^8) and generator polynomial (X+1)(X+alpha), where alpha is a primitive element in GF(2^8).
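
    The generator-polynomial products quoted above are easy to verify with carry-less multiplication over GF(2); in this sketch polynomials are encoded as integer bit masks (bit i holds the coefficient of X^i):

      def gf2_mul(a: int, b: int) -> int:
          # Carry-less product of two GF(2)[X] polynomials.
          result = 0
          while b:
              if b & 1:
                  result ^= a
              a <<= 1
              b >>= 1
          return result

      x_plus_1 = 0b11                                # X + 1
      inner = gf2_mul(x_plus_1, 0b1000011)           # (X+1)(X^6 + X + 1)
      assert inner == 0b11000101                     # X^7 + X^6 + X^2 + 1

      outer = gf2_mul(x_plus_1, 0b1111000000011111)  # (X+1)(X^15+...+X+1)
      assert outer == 0b10001000000100001            # X^16 + X^12 + X^5 + 1 (X.25)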

  17. Accurate wavelength measurements of a putative standard for near-infrared diffuse reflection spectrometry.

    PubMed

    Isaksson, Tomas; Yang, Husheng; Kemeny, Gabor J; Jackson, Richard S; Wang, Qian; Alam, M Kathleen; Griffiths, Peter R

    2003-02-01

    The diffuse reflection (DR) spectrum of a sample consisting of a mixture of rare earth oxides and talc was measured at 2 cm-1 resolution, using five different accessories installed on five different Fourier transform near-infrared (FT-NIR) spectrometers from four manufacturers. Peak positions for 37 peaks were determined using two peak-picking algorithms: center-of-mass and polynomial fitting. The wavenumber of the band center reported by either of these techniques was sensitive to the slope of the baseline, and so the baseline of the spectra was corrected using either a polynomial fit or conversion to the second derivative. Significantly different results were obtained with one combination of spectrometer and accessory than the others. Apparently, the beam path through the interferometer and DR accessory was different for this accessory than for any of the other measurements, causing a severe degradation of the resolution. Spectra measured on this instrument were removed as outliers. For measurements made on FT-NIR spectrometers, it is shown that it is important to check the resolution at which the spectrum has been measured using lines in the vibration-rotation spectrum of atmospheric water vapor and to specify the peak-picking and baseline-correction algorithms that are used to process the measured spectra. The variance between the results given by the four different methods of peak-picking and baseline correction was substantially larger than the variance between the remaining five measurements. Certain bands were found to be more suitable than others for use as wavelength standards. A band at 5943.13 cm-1 (1682.62 nm) was found to be the most stable band between the four methods and the six measurements. A band at 5177.04 cm-1 (1931.61 nm) has the highest precision between different measurements when polynomial baseline correction and polynomial peak-picking algorithms are used.
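
    A rough sketch of the two peak-picking algorithms being compared (synthetic band; the window and threshold choices here are mine, not the authors'):

      import numpy as np

      nu = np.linspace(5930.0, 5956.0, 131)          # wavenumber axis (cm-1)
      band = (np.exp(-((nu - 5943.13) / 4.0)**2)
              + 0.001 * np.random.default_rng(3).standard_normal(nu.size))

      # 1) center of mass over the top of the band
      top = band > 0.5 * band.max()
      com = np.sum(nu[top] * band[top]) / np.sum(band[top])

      # 2) quadratic polynomial fit near the maximum; the vertex is the peak
      i = band.argmax()
      sel = slice(i - 5, i + 6)
      c = np.polyfit(nu[sel], band[sel], deg=2)
      vertex = -c[1] / (2.0 * c[0])
      print(com, vertex)                             # both near 5943.13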

  18. Comparing student performance on paper- and computer-based math curriculum-based measures.

    PubMed

    Hensley, Kiersten; Rankin, Angelica; Hosp, John

    2017-01-01

    As the number of computerized curriculum-based measurement (CBM) tools increases, it is necessary to examine whether or not student performance can generalize across a variety of test administration modes (i.e., paper or computer). The purpose of this study is to compare math fact fluency on paper versus computer for 197 upper elementary students. Students completed identical sets of probes on paper and on the computer, which were then scored for digits correct, problems correct, and accuracy. Results showed a significant difference in performance between the two sets of probes, with higher fluency rates on the paper probes. Because decisions about levels of student support and interventions often rely on measures such as these, more research in this area is needed to examine the potential differences in student performance between paper-based and computer-based CBMs.

  19. Genetic Algorithm Phase Retrieval for the Systematic Image-Based Optical Alignment Testbed

    NASA Technical Reports Server (NTRS)

    Taylor, Jaime; Rakoczy, John; Steincamp, James

    2003-01-01

    Phase retrieval requires calculation of the real-valued phase of the pupil function from the image intensity distribution and characteristics of an optical system. Genetic algorithms (GAs) were used to solve two one-dimensional phase retrieval problems. A GA successfully estimated the coefficients of a polynomial expansion of the phase when the number of coefficients was correctly specified. A GA also successfully estimated the multiple phases of a segmented optical system analogous to the seven-mirror Systematic Image-Based Optical Alignment (SIBOA) testbed located at NASA's Marshall Space Flight Center. The SIBOA testbed was developed to investigate phase retrieval techniques. Tip/tilt and piston motions of the mirrors accomplish phase corrections. A constant phase over each mirror can be achieved by an independent tip/tilt correction; the phase correction term can then be factored out of the Discrete Fourier Transform (DFT), greatly reducing computations.

  20. Development of a multiplex DNA-based traceability tool for crop plant materials.

    PubMed

    Voorhuijzen, Marleen M; van Dijk, Jeroen P; Prins, Theo W; Van Hoef, A M Angeline; Seyfarth, Ralf; Kok, Esther J

    2012-01-01

    The authenticity of food is of increasing importance for producers, retailers and consumers. All groups benefit from the correct labelling of the contents of food products. Producers and retailers want to guarantee the origin of their products and check for adulteration with cheaper or inferior ingredients. Consumers are also more demanding about the origin of their food for various socioeconomic reasons. In contrast to this increasing demand, correct labelling has become much more complex because of global transportation networks of raw materials and processed food products. Within the European integrated research project 'Tracing the origin of food' (TRACE), a DNA-based multiplex detection tool was developed: the padlock probe ligation and microarray detection (PPLMD) tool. In this paper, this method is extended to a 15-plex traceability tool with a focus on products of commercial importance such as the emmer wheat Farro della Garfagnana (FdG) and Basmati rice. The specificity of 14 plant-related padlock probes was determined and initially validated in mixtures comprising seven or nine plant species/varieties. One nucleotide difference in target sequence was sufficient for the distinction between the presence or absence of a specific target. At least 5% FdG or Basmati rice was detected in mixtures with cheaper bread wheat or non-fragrant rice, respectively. The results suggested that even lower levels of (un-)intentional adulteration could be detected. PPLMD has been shown to be a useful tool for the detection of fraudulent/intentional admixtures in premium foods and is ready for the monitoring of correct labelling of premium foods worldwide.

  1. An experimental comparison of ETM+ image geometric correction methods in the mountainous areas of Yunnan Province, China

    NASA Astrophysics Data System (ADS)

    Wang, Jinliang; Wu, Xuejiao

    2010-11-01

    Geometric correction of imagery is a basic application of remote sensing technology, and its precision directly impacts the accuracy and reliability of subsequent applications. The accuracy of geometric correction depends on many factors, including the correction model, the accuracy of the reference map, the number of ground control points (GCPs) and their spatial distribution, and the resampling method. An ETM+ image of the Kunming Dianchi Lake Basin and 1:50000 geographical maps were used to compare different correction methods. The results showed that: (1) When the polynomial model was used, the correction errors were more than one pixel, and some were several pixels; the correction accuracy was not stable when the Delaunay model was used; the correction errors were less than one pixel when the collinearity equation was used. (2) When 6, 9, 25 and 35 GCPs were selected randomly for geometric correction using the polynomial model, the best result was obtained with 25 GCPs. (3) Among the resampling methods, nearest neighbor gave the best image contrast and the fastest resampling, but the continuity of pixel gray values was not very good; the cubic convolution method gave the worst contrast and the longest computation time. According to the above results, bilinear resampling gave the best overall result.

  2. Tutte polynomial in functional magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    García-Castillón, Marlly V.

    2015-09-01

    Methods of graph theory are applied to the processing of functional magnetic resonance images; specifically, the Tutte polynomial is used to analyze such images. Functional magnetic resonance imaging provides connectivity networks in the brain, which are represented by graphs, and the Tutte polynomial is applied to them. The problem of computing the Tutte polynomial for a given graph is #P-hard even for planar graphs. For a practical application, the Maple packages "GraphTheory" and "SpecialGraphs" are used. We consider a diagram depicting functional connectivity, specifically between frontal and posterior areas, in autism during an inferential text comprehension task. The Tutte polynomial for the resulting neural networks is computed and some numerical invariants for such networks are obtained. Our results show that the Tutte polynomial is a powerful tool to analyze and characterize the networks obtained from functional magnetic resonance imaging.
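
    The paper works in Maple; as a hedged aside, recent NetworkX releases also ship a Tutte polynomial routine, so small graphs can be checked in Python too (the four-cycle below is a toy stand-in, not the paper's connectivity network):

      import networkx as nx

      G = nx.cycle_graph(4)            # toy "connectivity network"
      T = nx.tutte_polynomial(G)       # returns a sympy expression in x, y
      print(T)                         # x**3 + x**2 + x + y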

  3. Tutorial on Reed-Solomon error correction coding

    NASA Technical Reports Server (NTRS)

    Geisel, William A.

    1990-01-01

    This tutorial attempts to provide a frank, step-by-step approach to Reed-Solomon (RS) error correction coding. RS encoding and RS decoding both with and without erasing code symbols are emphasized. There is no need to present rigorous proofs and extreme mathematical detail. Rather, the simple concepts of groups and fields, specifically Galois fields, are presented with a minimum of complexity. Before RS codes are presented, other block codes are presented as a technical introduction into coding. A primitive (15, 9) RS coding example is then completely developed from start to finish, demonstrating the encoding and decoding calculations and a derivation of the famous error-locator polynomial. The objective is to present practical information about Reed-Solomon coding in a manner such that it can be easily understood.

  4. A point-value enhanced finite volume method based on approximate delta functions

    NASA Astrophysics Data System (ADS)

    Xuan, Li-Jun; Majdalani, Joseph

    2018-02-01

    We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements saves the number of degrees of freedom compared to other compact methods at the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.
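
    A hedged numerical sketch of the ADF property as stated (my own construction via moment matching, not the paper's formulation): a degree-n polynomial q on [-1, 1] whose integral against any polynomial p of degree <= n returns p(x0).

      import numpy as np

      n, x0 = 4, 0.3
      # moment matrix M[k, j] = integral of x^(k+j) over [-1, 1]
      k = np.arange(n + 1)
      M = (1.0 - (-1.0)**(k[:, None] + k[None, :] + 1)) / (k[:, None] + k[None, :] + 1)
      q = np.linalg.solve(M, x0**k)    # q's coefficients in the monomial basis

      # verify: integrating q against a random degree-n polynomial gives p(x0)
      p = np.random.default_rng(4).standard_normal(n + 1)
      integral = q @ M @ p             # equals the integral of q(x) p(x) by bilinearity
      assert np.isclose(integral, p @ x0**k)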

  5. Computational adaptive optics for broadband optical interferometric tomography of biological tissue.

    PubMed

    Adie, Steven G; Graf, Benedikt W; Ahmad, Adeel; Carney, P Scott; Boppart, Stephen A

    2012-05-08

    Aberrations in optical microscopy reduce image resolution and contrast, and can limit imaging depth when focusing into biological samples. Static correction of aberrations may be achieved through appropriate lens design, but this approach does not offer the flexibility of simultaneously correcting aberrations for all imaging depths, nor the adaptability to correct for sample-specific aberrations for high-quality tomographic optical imaging. Incorporation of adaptive optics (AO) methods have demonstrated considerable improvement in optical image contrast and resolution in noninterferometric microscopy techniques, as well as in optical coherence tomography. Here we present a method to correct aberrations in a tomogram rather than the beam of a broadband optical interferometry system. Based on Fourier optics principles, we correct aberrations of a virtual pupil using Zernike polynomials. When used in conjunction with the computed imaging method interferometric synthetic aperture microscopy, this computational AO enables object reconstruction (within the single scattering limit) with ideal focal-plane resolution at all depths. Tomographic reconstructions of tissue phantoms containing subresolution titanium-dioxide particles and of ex vivo rat lung tissue demonstrate aberration correction in datasets acquired with a highly astigmatic illumination beam. These results also demonstrate that imaging with an aberrated astigmatic beam provides the advantage of a more uniform depth-dependent signal compared to imaging with a standard gaussian beam. With further work, computational AO could enable the replacement of complicated and expensive optical hardware components with algorithms implemented on a standard desktop computer, making high-resolution 3D interferometric tomography accessible to a wider group of users and nonspecialists.

  6. Evaluate More General Integrals Involving Universal Associated Legendre Polynomials via Taylor’s Theorem

    NASA Astrophysics Data System (ADS)

    Yañez-Navarro, G.; Sun, Guo-Hua; Sun, Dong-Sheng; Chen, Chang-Yuan; Dong, Shi-Hai

    2017-08-01

    A few important integrals involving the product of two universal associated Legendre polynomials P_{l'}^{m'}(x), P_{k'}^{n'}(x) and x^{2a}(1 - x^2)^{-p-1}, x^b(1 ± x)^{-p-1} and x^c(1 - x^2)^{-p-1}(1 ± x) are evaluated using the operator form of Taylor's theorem and an integral over a single universal associated Legendre polynomial. These integrals are more general since the quantum numbers are unequal, i.e. l' ≠ k' and m' ≠ n'. Their selection rules are also given. We also verify the correctness of those integral formulas numerically. Supported by 20170938-SIP-IPN, Mexico

  7. Temperature and pressure effects on capacitance probe cryogenic liquid level measurement accuracy

    NASA Technical Reports Server (NTRS)

    Edwards, Lawrence G.; Haberbusch, Mark

    1993-01-01

    The inaccuracies of liquid nitrogen and liquid hydrogen level measurements by use of a coaxial capacitance probe were investigated as a function of fluid temperatures and pressures. Significant liquid level measurement errors were found to occur due to the changes in the fluids dielectric constants which develop over the operating temperature and pressure ranges of the cryogenic storage tanks. The level measurement inaccuracies can be reduced by using fluid dielectric correction factors based on measured fluid temperatures and pressures. The errors in the corrected liquid level measurements were estimated based on the reported calibration errors of the temperature and pressure measurement systems. Experimental liquid nitrogen (LN2) and liquid hydrogen (LH2) level measurements were obtained using the calibrated capacitance probe equations and also by the dielectric constant correction factor method. The liquid levels obtained by the capacitance probe for the two methods were compared with the liquid level estimated from the fluid temperature profiles. Results show that the dielectric constant corrected liquid levels agreed within 0.5 percent of the temperature profile estimated liquid level. The uncorrected dielectric constant capacitance liquid level measurements deviated from the temperature profile level by more than 5 percent. This paper identifies the magnitude of liquid level measurement error that can occur for LN2 and LH2 fluids due to temperature and pressure effects on the dielectric constants over the tank storage conditions from 5 to 40 psia. A method of reducing the level measurement errors by using dielectric constant correction factors based on fluid temperature and pressure measurements is derived. The improved accuracy by use of the correction factors is experimentally verified by comparing liquid levels derived from fluid temperature profiles.

  8. Extending a Property of Cubic Polynomials to Higher-Degree Polynomials

    ERIC Educational Resources Information Center

    Miller, David A.; Moseley, James

    2012-01-01

    In this paper, the authors examine a property that holds for all cubic polynomials given two zeros. This property is discovered after reviewing a variety of ways to determine the equation of a cubic polynomial given specific conditions through algebra and calculus. At the end of the article, they will connect the property to a very famous method…

  9. Local collective motion analysis for multi-probe dynamic imaging and microrheology

    NASA Astrophysics Data System (ADS)

    Khan, Manas; Mason, Thomas G.

    2016-08-01

    Dynamical artifacts, such as mechanical drift, advection, and hydrodynamic flow, can adversely affect multi-probe dynamic imaging and passive particle-tracking microrheology experiments. Alternatively, active driving by molecular motors can cause interesting non-Brownian motion of probes in local regions. Existing drift-correction techniques, which require large ensembles of probes or fast temporal sampling, are inadequate for handling complex spatio-temporal drifts and non-Brownian motion of localized domains containing relatively few probes. Here, we report an analytical method based on local collective motion (LCM) analysis of as few as two probes for detecting the presence of non-Brownian motion and for accurately eliminating it to reveal the underlying Brownian motion. By calculating an ensemble-average, time-dependent, LCM mean square displacement (MSD) of two or more localized probes and comparing this MSD to constituent single-probe MSDs, we can identify temporal regimes during which either thermal or athermal motion dominates. Single-probe motion, when referenced relative to the moving frame attached to the multi-probe LCM trajectory, provides a true Brownian MSD after scaling by an appropriate correction factor that depends on the number of probes used in LCM analysis. We show that LCM analysis can be used to correct many different dynamical artifacts, including spatially varying drifts, gradient flows, cell motion, time-dependent drift, and temporally varying oscillatory advection, thereby offering a significant improvement over existing approaches.
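
    A hedged sketch of the LCM correction for the simplest case (my own simulation of N identical, independent probes plus a shared drift; the paper's analysis is more general): track probes relative to their mean trajectory, then rescale the MSD by N/(N-1), the factor appropriate for this idealized case.

      import numpy as np

      def msd(traj):
          # Time-averaged MSD of one 2D trajectory of shape (T, 2).
          T = traj.shape[0]
          return np.array([np.mean(np.sum((traj[dt:] - traj[:-dt])**2, axis=1))
                           for dt in range(1, T // 4)])

      rng = np.random.default_rng(5)
      N, T = 4, 4000
      brownian = np.cumsum(rng.standard_normal((N, T, 2)), axis=1)   # probes
      drift = np.cumsum(0.5 * np.ones((1, T, 2)), axis=1)            # shared drift
      observed = brownian + drift

      lcm = observed.mean(axis=0, keepdims=True)     # local collective motion
      relative = observed - lcm                      # drift-free frame
      corrected = np.mean([msd(r) for r in relative], axis=0) * N / (N - 1)
      raw = np.mean([msd(o) for o in observed], axis=0)
      print(raw[50], corrected[50])   # raw is drift-inflated; corrected ~ 2*dt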

  10. T1 mapping with the variable flip angle technique: A simple correction for insufficient spoiling of transverse magnetization.

    PubMed

    Baudrexel, Simon; Nöth, Ulrike; Schüre, Jan-Rüdiger; Deichmann, Ralf

    2018-06-01

    The variable flip angle method derives T1 maps from radiofrequency-spoiled gradient-echo data sets, acquired with different flip angles α. Because the method assumes validity of the Ernst equation, insufficient spoiling of transverse magnetization yields errors in T1 estimation, depending on the chosen radiofrequency-spoiling phase increment (Δϕ). This paper presents a versatile correction method that uses modified flip angles α' to restore the validity of the Ernst equation. Spoiled gradient-echo signals were simulated for three commonly used phase increments Δϕ (50°/117°/150°), different values of α, repetition time (TR), T1, and a T2 of 85 ms. For each parameter combination, α' (for which the Ernst equation yielded the same signal) and a correction factor C_Δϕ(α, TR, T1) = α'/α were determined. C_Δϕ was found to be independent of T1 and was fitted as a polynomial C_Δϕ(α, TR), allowing α' to be calculated for any protocol using this Δϕ. The accuracy of the correction method for T2 values deviating from 85 ms was also determined. The method was tested in vitro and in vivo for variable flip angle scans with different acquisition parameters. The technique considerably improved the accuracy of variable flip angle-based T1 maps in vitro and in vivo. The proposed method allows for a simple correction of insufficient spoiling in gradient-echo data. The required polynomial parameters are supplied for three common Δϕ. Magn Reson Med 79:3082-3092, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
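
    A hedged sketch of where such a correction plugs in (the standard DESPOT1 two-point linearization of the Ernst equation; the correction factors C below are placeholders, NOT the paper's fitted polynomial, which depends on Δϕ, α, and TR):

      import numpy as np

      TR = 0.015                                  # repetition time (s)
      alphas = np.deg2rad([4.0, 18.0])            # nominal flip angles
      C = np.array([1.0, 1.0])                    # placeholder correction factors
      alphas_eff = C * alphas                     # alpha' = C(alpha, TR) * alpha

      S = np.array([120.0, 95.0])                 # measured SPGR signals (a.u.)
      # linearized Ernst equation: S/sin(a) = E1 * S/tan(a) + M0*(1 - E1)
      y = S / np.sin(alphas_eff)
      x = S / np.tan(alphas_eff)
      E1 = (y[1] - y[0]) / (x[1] - x[0])
      T1 = -TR / np.log(E1)                       # ~2 s for these sample values
      print(T1)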

  11. Skin image illumination modeling and chromophore identification for melanoma diagnosis

    NASA Astrophysics Data System (ADS)

    Liu, Zhao; Zerubia, Josiane

    2015-05-01

    The presence of illumination variation in dermatological images has a negative impact on the automatic detection and analysis of cutaneous lesions. This paper proposes a new illumination modeling and chromophore identification method to correct lighting variation in skin lesion images, as well as to extract melanin and hemoglobin concentrations of human skin, based on an adaptive bilateral decomposition and a weighted polynomial curve fitting, with the knowledge of a multi-layered skin model. Unlike state-of-the-art approaches based on the Lambert law, the proposed method, considering both specular reflection and diffuse reflection of the skin, enables us to address the highlight and strong shading effects usually present in skin color images captured in an uncontrolled environment. The derived melanin and hemoglobin indices, directly relating to the pathological tissue conditions, tend to be less influenced by external imaging factors and are more efficient in describing pigmentation distributions. Experiments show that the proposed method gave better visual results and superior lesion segmentation when compared to two other illumination correction algorithms, both designed specifically for dermatological images. For computer-aided diagnosis of melanoma, sensitivity reaches 85.52% when using our chromophore descriptors, which is 8-20% higher than that obtained with other color descriptors. This demonstrates the benefit of the proposed method for automatic skin disease analysis.

  12. Accurate polynomial expressions for the density and specific volume of seawater using the TEOS-10 standard

    NASA Astrophysics Data System (ADS)

    Roquet, F.; Madec, G.; McDougall, Trevor J.; Barker, Paul M.

    2015-06-01

    A new set of approximations to the standard TEOS-10 equation of state is presented. These follow a polynomial form, making them computationally efficient for use in numerical ocean models. Two versions are provided, the first being a fit of density for Boussinesq ocean models, and the second fitting specific volume, which is more suitable for compressible models. Both versions are given as the sum of a vertical reference profile (6th-order polynomial) and an anomaly (52-term polynomial, cubic in pressure), with relative errors of ∼0.1% on the thermal expansion coefficients. A 75-term polynomial expression is also presented for computing specific volume, with better accuracy than the existing TEOS-10 48-term rational approximation, especially regarding the sound speed, and it is suggested that this expression represents a valuable approximation of the TEOS-10 equation of state for hydrographic data analysis. In the last section, practical aspects of the implementation of TEOS-10 in ocean models are discussed.
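
    The structure described, a vertical reference profile plus a compact anomaly polynomial, can be sketched as follows; the coefficients here are illustrative placeholders, not the published 52- or 75-term fits:

```python
import numpy as np

def rho(s, t, p, ref_coeffs, anom_coeffs):
    """Evaluate a TEOS-10-style polynomial fit: a vertical reference profile
    plus an anomaly. ref_coeffs holds the 7 coefficients of the 6th-order
    profile in pressure (highest power first); anom_coeffs[(i, j, k)]
    multiplies s**i * t**j * p**k with k <= 3 (cubic in pressure). Both are
    placeholders for fitted values, not the published coefficients.
    """
    r0 = np.polyval(ref_coeffs, p)                   # 6th-order reference profile
    dr = sum(c * s**i * t**j * p**k for (i, j, k), c in anom_coeffs.items())
    return r0 + dr

# Placeholder coefficients (illustrative only)
ref = [0.0, 0.0, 0.0, 0.0, 0.0, 4.5e-3, 1027.0]
anom = {(1, 0, 0): 0.78, (0, 1, 0): -0.21, (0, 2, 0): -4.5e-3, (1, 0, 1): 2e-6}
print(rho(s=35.0, t=10.0, p=100.0, ref_coeffs=ref, anom_coeffs=anom))
```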

  13. Fast conjugate phase image reconstruction based on a Chebyshev approximation to correct for B0 field inhomogeneity and concomitant gradients.

    PubMed

    Chen, Weitian; Sica, Christopher T; Meyer, Craig H

    2008-11-01

    Off-resonance effects can cause image blurring in spiral scanning and various forms of image degradation in other MRI methods. Off-resonance effects can be caused by both B0 inhomogeneity and concomitant gradient fields. Previously developed off-resonance correction methods focus on the correction of a single source of off-resonance. This work introduces a computationally efficient method of correcting for B0 inhomogeneity and concomitant gradients simultaneously. The method is a fast alternative to conjugate phase reconstruction, with the off-resonance phase term approximated by Chebyshev polynomials. The proposed algorithm is well suited for semiautomatic off-resonance correction, which works well even with an inaccurate or low-resolution field map. The proposed algorithm is demonstrated using phantom and in vivo data sets acquired by spiral scanning. Semiautomatic off-resonance correction alone is shown to provide a moderate amount of correction for concomitant gradient field effects, in addition to B0 inhomogeneity effects. However, better correction is provided by the proposed combined method. The best results were produced using the semiautomatic version of the proposed combined method.
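
    The key computational trick can be illustrated in a few lines: at a given readout time, the off-resonance phase factor exp(iωt) is smooth in the off-resonance frequency ω, so it admits a low-order Chebyshev expansion, and the image becomes a short weighted sum of base reconstructions instead of one conjugate-phase sum per frequency bin. The frequency range and readout time below are illustrative:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f_max = 200.0                       # Hz, assumed field-map extent
t = 2e-3                            # s, one sample readout time
u = np.linspace(-1.0, 1.0, 512)     # normalized frequency axis
phase = 2 * np.pi * f_max * u * t   # actual phase w*t across the map

order = 10
cr = C.chebfit(u, np.cos(phase), order)   # real part of exp(i*w*t)
ci = C.chebfit(u, np.sin(phase), order)   # imaginary part
approx = C.chebval(u, cr) + 1j * C.chebval(u, ci)
print(np.abs(approx - np.exp(1j * phase)).max())   # tiny: ~11 terms suffice
```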

  14. Astigmatism-corrected echelle spectrometer using an off-the-shelf cylindrical lens.

    PubMed

    Fu, Xiao; Duan, Fajie; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Lv, Changrong

    2017-10-01

    As a special kind of spectrometer with the Czerny-Turner structure, the echelle spectrometer features two-dimensional dispersion, which leads to a complex astigmatic condition. In this work, we propose an optical design of an astigmatism-corrected echelle spectrometer using an off-the-shelf cylindrical lens. A mathematical model considering the astigmatism introduced by the off-axis mirrors, the echelle grating, and the prism is established. Our solution features simplified calculation and low-cost construction, and is capable of overall compensation of the astigmatism in a wide spectral range (200-600 nm). An optical simulation utilizing ZEMAX software, an astigmatism assessment based on Zernike polynomials, and an instrument experiment are implemented to validate the effect of astigmatism correction. The results demonstrated that the astigmatism of the echelle spectrometer was corrected to a large extent, and a high spectral resolution, better than 0.1 nm, was achieved.

  15. Genotyping by alkaline dehybridization using graphically encoded particles.

    PubMed

    Zhang, Huaibin; DeConinck, Adam J; Slimmer, Scott C; Doyle, Patrick S; Lewis, Jennifer A; Nuzzo, Ralph G

    2011-03-01

    This work describes a nonenzymatic, isothermal genotyping method based on the kinetic differences exhibited in the dehybridization of perfectly matched (PM) and single-base mismatched (MM) DNA duplexes in an alkaline solution. Multifunctional encoded hydrogel particles incorporating allele-specific oligonucleotide (ASO) probes in two distinct regions were fabricated by using microfluidic-based stop-flow lithography. Each particle contained two distinct ASO probe sequences differing at a single base position, and thus each particle was capable of simultaneously probing two distinct target alleles. Fluorescently labeled target alleles were annealed to both probe regions of a particle, and the rate of duplex dehybridization was monitored by using fluorescence microscopy. Duplex dehybridization was achieved through an alkaline stimulus using either a pH step function or a temporal pH gradient. When a single target probe sequence was used, the rate of mismatch duplex dehybridization could be discriminated from the rate of perfect match duplex dehybridization. In a more demanding application in which two distinct probe sequences were used, we found that the rate profiles provided a means to discriminate the dehybridizations of both mismatched duplexes as well as to distinguish with high certainty the dehybridization of the two perfectly matched duplexes. These results demonstrate the ability of alkaline dehybridization to correctly discriminate the rank hierarchy of thermodynamic stability among four sets of perfect match and single-base mismatch duplexes. We further demonstrate that these rate profiles are strongly temperature dependent and illustrate how this sensitivity can be beneficially compensated by the use of an actuating pH gradient field. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Conformal Galilei algebras, symmetric polynomials and singular vectors

    NASA Astrophysics Data System (ADS)

    Křižka, Libor; Somberg, Petr

    2018-01-01

    We classify and explicitly describe homomorphisms of Verma modules for conformal Galilei algebras cga_ℓ(d, ℂ) with d = 1 for any integer value ℓ ∈ ℕ. The homomorphisms are uniquely determined by singular vectors as solutions of certain differential operators of flag type and identified with specific polynomials arising as coefficients in the expansion of a parametric family of symmetric polynomials into power sum symmetric polynomials.

  17. Rapid Identification of Staphylococcus aureus Directly from Blood Cultures by Fluorescence In Situ Hybridization with Peptide Nucleic Acid Probes

    PubMed Central

    Oliveira, Kenneth; Procop, Gary W.; Wilson, Deborah; Coull, James; Stender, Henrik

    2002-01-01

    A new fluorescence in situ hybridization (FISH) method with peptide nucleic acid (PNA) probes for identification of Staphylococcus aureus directly from positive blood culture bottles that contain gram-positive cocci in clusters (GPCC) is described. The test (the S. aureus PNA FISH assay) is based on a fluorescein-labeled PNA probe that targets a species-specific sequence of the 16S rRNA of S. aureus. Evaluations with 17 reference strains and 48 clinical isolates, including methicillin-resistant and methicillin-susceptible S. aureus strains, coagulase-negative Staphylococcus species, and other clinically relevant and phylogenetically related bacteria and yeast species, showed that the assay had 100% sensitivity and 96% specificity. Clinical trials with 87 blood cultures positive for GPCC correctly identified 36 of 37 (97%) of the S. aureus-positive cultures identified by standard microbiological methods. The positive and negative predictive values were 100 and 98%, respectively. It is concluded that this rapid method (2.5 h) for identification of S. aureus directly from blood culture bottles that contain GPCC offers important information for optimal antibiotic therapy. PMID:11773123

  18. DETECTION OF THERMAL EMISSION FROM A SUPER-EARTH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demory, Brice-Olivier; Seager, Sara; Benneke, Bjoern

    2012-06-01

    We report on the detection of infrared light from the super-Earth 55 Cnc e, based on four occultations obtained with Warm Spitzer at 4.5 µm. Our data analysis consists of a two-part process. In a first step, we perform individual analyses of each data set and compare several baseline models to optimally account for the systematics affecting each light curve. We apply independent photometric correction techniques, including polynomial detrending and pixel mapping, that yield consistent results at the 1σ level. In a second step, we perform a global Markov Chain Monte Carlo analysis, including all four data sets, that yields an occultation depth of 131 ± 28 ppm, translating to a brightness temperature of 2360 ± 300 K in the IRAC 4.5 µm channel. This occultation depth suggests a low Bond albedo coupled to an inefficient heat transport from the planetary day side to the night side, or else possibly that the 4.5 µm observations probe atmospheric layers that are hotter than the maximum equilibrium temperature (i.e., a thermal inversion layer or a deep hot layer). The measured occultation phase and duration are consistent with a circular orbit and improve the 3σ upper limit on 55 Cnc e's orbital eccentricity from 0.25 to 0.06.

  19. A fully automated and scalable timing probe-based method for time alignment of the LabPET II scanners

    NASA Astrophysics Data System (ADS)

    Samson, Arnaud; Thibaudeau, Christian; Bouchard, Jonathan; Gaudin, Émilie; Paulin, Caroline; Lecomte, Roger; Fontaine, Réjean

    2018-05-01

    A fully automated time alignment method based on a positron timing probe was developed to correct the channel-to-channel coincidence time dispersion of the LabPET II avalanche photodiode-based positron emission tomography (PET) scanners. The timing probe was designed to directly detect positrons and generate an absolute time reference. The probe-to-channel coincidences are recorded and processed using firmware embedded in the scanner hardware to compute the time differences between detector channels. The time corrections are then applied in real-time to each event in every channel during PET data acquisition to align all coincidence time spectra, thus enhancing the scanner time resolution. When applied to the mouse version of the LabPET II scanner, the calibration of 6144 channels was performed in less than 15 min and showed a 47% improvement in the overall time resolution of the scanner, decreasing from 7 ns to 3.7 ns full width at half maximum (FWHM).
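
    A toy sketch of the offset-estimation step, assuming the firmware records per-channel time differences against the probe reference; the median of each channel's distribution serves as its alignment offset:

```python
import numpy as np

def channel_offsets(coinc):
    """Per-channel time-offset calibration from probe coincidences.

    coinc : dict mapping channel id -> array of (t_channel - t_probe)
    time differences (ns) recorded against the positron timing probe.
    Returns offsets to subtract from each channel's event timestamps so
    that all coincidence time spectra line up on a common zero.
    """
    return {ch: np.median(dt) for ch, dt in coinc.items()}

# Toy example: two channels with 1.2 ns and -0.8 ns skews, 0.3 ns jitter
rng = np.random.default_rng(0)
coinc = {0: 1.2 + 0.3 * rng.standard_normal(1000),
         1: -0.8 + 0.3 * rng.standard_normal(1000)}
off = channel_offsets(coinc)
print({ch: round(v, 2) for ch, v in off.items()})   # ~{0: 1.2, 1: -0.8}
```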

  20. Geometric analysis and restitution of digital multispectral scanner data arrays

    NASA Technical Reports Server (NTRS)

    Baker, J. R.; Mikhail, E. M.

    1975-01-01

    An investigation was conducted to define causes of geometric defects within digital multispectral scanner (MSS) data arrays, to analyze the resulting geometric errors, and to investigate restitution methods to correct or reduce these errors. Geometric transformation relationships for scanned data, from which collinearity equations may be derived, served as the basis of parametric methods of analysis and restitution of MSS digital data arrays. The linearization of these collinearity equations is presented. Algorithms considered for use in analysis and restitution included the MSS collinearity equations, piecewise polynomials based on linearized collinearity equations, and nonparametric algorithms. A proposed system for geometric analysis and restitution of MSS digital data arrays was used to evaluate these algorithms, utilizing actual MSS data arrays. It was shown that collinearity equations and nonparametric algorithms both yield acceptable results, but nonparametric algorithms possess definite advantages in computational efficiency. Piecewise polynomials were found to yield inferior results.

  1. Polynomial meta-models with canonical low-rank approximations: Numerical insights and comparison to sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konakli, Katerina, E-mail: konakli@ibk.baug.ethz.ch; Sudret, Bruno

    2016-09-15

    The growing need for uncertainty analysis of complex computational models has led to an expanding use of meta-models across engineering and sciences. The efficiency of meta-modeling techniques relies on their ability to provide statistically-equivalent analytical representations based on relatively few evaluations of the original model. Polynomial chaos expansions (PCE) have proven a powerful tool for developing meta-models in a wide range of applications; the key idea thereof is to expand the model response onto a basis made of multivariate polynomials obtained as tensor products of appropriate univariate polynomials. The classical PCE approach nevertheless faces the “curse of dimensionality”, namely the exponential increase of the basis size with increasing input dimension. To address this limitation, the sparse PCE technique has been proposed, in which the expansion is carried out on only a few relevant basis terms that are automatically selected by a suitable algorithm. An alternative for developing meta-models with polynomial functions in high-dimensional problems is offered by the newly emerged low-rank approximations (LRA) approach. By exploiting the tensor-product structure of the multivariate basis, LRA can provide polynomial representations in highly compressed formats. Through extensive numerical investigations, we herein first shed light on issues relating to the construction of canonical LRA with a particular greedy algorithm involving a sequential updating of the polynomial coefficients along separate dimensions. Specifically, we examine the selection of optimal rank, stopping criteria in the updating of the polynomial coefficients and error estimation. In the sequel, we confront canonical LRA to sparse PCE in structural-mechanics and heat-conduction applications based on finite-element solutions. Canonical LRA exhibit smaller errors than sparse PCE in cases when the number of available model evaluations is small with respect to the input dimension, a situation that is often encountered in real-life problems. By introducing the conditional generalization error, we further demonstrate that canonical LRA tend to outperform sparse PCE in the prediction of extreme model responses, which is critical in reliability analysis.
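
    Evaluating a canonical LRA meta-model is simple to sketch: a rank-R sum of products of univariate polynomials, one factor per input dimension. The structure below is illustrative and omits the greedy coefficient-updating construction examined in the paper:

```python
import numpy as np

def lra_predict(x, weights, coeffs):
    """Evaluate a rank-R canonical low-rank approximation (LRA) meta-model.

    x       : (d,) input point.
    weights : (R,) rank weights b_r.
    coeffs  : list over ranks of lists over dimensions of 1-D polynomial
              coefficient arrays (highest power first), i.e. the univariate
              factors v_{r,i}. Illustrative structure only.
    """
    return sum(b * np.prod([np.polyval(c, xi) for c, xi in zip(rank, x)])
               for b, rank in zip(weights, coeffs))

# Rank-2 toy model in d = 2: y = 1.0*(x0)*(x1^2) + 0.5*(x0^2 + 1)*(x1)
coeffs = [[[1.0, 0.0], [1.0, 0.0, 0.0]],
          [[1.0, 0.0, 1.0], [1.0, 0.0]]]
print(lra_predict(np.array([0.5, 2.0]), [1.0, 0.5], coeffs))   # 3.25
```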

  2. Developing a statistically powerful measure for quartet tree inference using phylogenetic identities and Markov invariants.

    PubMed

    Sumner, Jeremy G; Taylor, Amelia; Holland, Barbara R; Jarvis, Peter D

    2017-12-01

    Recently there has been renewed interest in phylogenetic inference methods based on phylogenetic invariants, alongside the related Markov invariants. Broadly speaking, both these approaches give rise to polynomial functions of sequence site patterns that, in expectation value, either vanish for particular evolutionary trees (in the case of phylogenetic invariants) or have well understood transformation properties (in the case of Markov invariants). While both approaches have been valued for their intrinsic mathematical interest, it is not clear how they relate to each other, and to what extent they can be used as practical tools for inference of phylogenetic trees. In this paper, by focusing on the special case of binary sequence data and quartets of taxa, we are able to view these two different polynomial-based approaches within a common framework. To motivate the discussion, we present three desirable statistical properties that we argue any invariant-based phylogenetic method should satisfy: (1) sensible behaviour under reordering of input sequences; (2) stability as the taxa evolve independently according to a Markov process; and (3) explicit dependence on the assumption of a continuous-time process. Motivated by these statistical properties, we develop and explore several new phylogenetic inference methods. In particular, we develop a statistically bias-corrected version of the Markov invariants approach which satisfies all three properties. We also extend previous work by showing that the phylogenetic invariants can be implemented in such a way as to satisfy property (3). A simulation study shows that, in comparison to other methods, our new proposed approach based on bias-corrected Markov invariants is extremely powerful for phylogenetic inference. The binary case is of particular theoretical interest as (in this case only) the Markov invariants can be expressed as linear combinations of the phylogenetic invariants. A wider implication of this is that, for models with more than two states (for example, DNA sequence alignments with four-state models) we find that methods which rely on phylogenetic invariants are incapable of satisfying all three of the stated statistical properties. This is because in these cases the relevant Markov invariants belong to a class of polynomials independent from the phylogenetic invariants.

  3. Close-range photogrammetry with video cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1985-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.
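
    A minimal sketch of polynomial distortion correction of the kind described, assuming a calibration grid of measured (distorted) and true target coordinates:

```python
import numpy as np

def fit_distortion_map(measured, true_pts, order=3):
    """Least-squares polynomial mapping from distorted sensor coordinates to
    corrected coordinates, in the spirit of the electronic-distortion
    correction described above (a grid-target calibration is assumed).
    Returns coefficient vectors (cx, cy) over the monomials x^i y^j, i+j<=order.
    """
    x, y = measured[:, 0], measured[:, 1]
    terms = [x**i * y**j for i in range(order + 1)
             for j in range(order + 1 - i)]
    A = np.column_stack(terms)
    cx, *_ = np.linalg.lstsq(A, true_pts[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, true_pts[:, 1], rcond=None)
    return cx, cy
```

    Corrected coordinates for new measurements are then obtained by evaluating the same monomials at the measured points and applying (cx, cy).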

  4. Close-Range Photogrammetry with Video Cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1983-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

  5. Application of an oligonucleotide microarray-based nano-amplification technique for the detection of fungal pathogens.

    PubMed

    Lu, Weiping; Gu, Dayong; Chen, Xingyun; Xiong, Renping; Liu, Ping; Yang, Nan; Zhou, Yuanguo

    2010-10-01

    The traditional techniques for diagnosis of invasive fungal infections in the clinical microbiology laboratory need improvement. These techniques tend to delay results due to their time-consuming processes, or misidentify the fungus due to low sensitivity or low specificity. The aim of this study was to develop a method for the rapid detection and identification of fungal pathogens. The internal transcribed spacer 2 (ITS2) fragments of fungal ribosomal DNA were amplified using a polymerase chain reaction for all samples. Next, the products were hybridized with the probes immobilized on the surface of a microarray. These species-specific probes were designed to detect nine different clinical pathogenic fungi including Candida albicans, Candida tropicalis, Candida glabrata, Candida parapsilosis, Candida krusei, Candida lusitaniae, Candida guilliermondii, Candida kefyr, and Cryptococcus neoformans. The hybridizing signals were enhanced with gold nanoparticles and silver deposition, and detected using a flatbed scanner or visually. Fifty-nine strains of fungal pathogens, including standard and clinically isolated strains, were correctly identified by this method. The sensitivity of the assay for Candida albicans was 10 cells/mL. Ten cultures from clinical specimens and 12 clinical samples spiked with fungi were also identified correctly. This technique offers a reliable alternative to conventional methods for the detection and identification of fungal pathogens. It has higher efficiency, specificity and sensitivity compared with other methods commonly used in the clinical laboratory.

  6. Fatigue Crack Growth Rate and Stress-Intensity Factor Corrections for Out-of-Plane Crack Growth

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Herman, Dave J.; James, Mark A.

    2003-01-01

    Fatigue crack growth rate testing is performed by automated data collection systems that assume straight crack growth in the plane of symmetry and use standard polynomial solutions to compute crack length and stress-intensity factors from compliance or potential drop measurements. Visual measurements used to correct the collected data typically include only the horizontal crack length, which for cracks that propagate out-of-plane, under-estimates the crack growth rates and over-estimates the stress-intensity factors. The authors have devised an approach for correcting both the crack growth rates and stress-intensity factors based on two-dimensional mixed mode-I/II finite element analysis (FEA). The approach is used to correct out-of-plane data for 7050-T7451 and 2025-T6 aluminum alloys. Results indicate the correction process works well for high ΔK levels but fails to capture the mixed-mode effects at ΔK levels approaching threshold (da/dN ≈ 10⁻¹⁰ m/cycle).

  7. Rapid Identification and Differentiation of Trichophyton Species, Based on Sequence Polymorphisms of the Ribosomal Internal Transcribed Spacer Regions, by Rolling-Circle Amplification

    PubMed Central

    Kong, Fanrong; Tong, Zhongsheng; Chen, Xiaoyou; Sorrell, Tania; Wang, Bin; Wu, Qixuan; Ellis, David; Chen, Sharon

    2008-01-01

    DNA sequencing analyses have demonstrated relatively limited polymorphisms within the fungal internal transcribed spacer (ITS) regions among Trichophyton spp. We sequenced the ITS region (ITS1, 5.8S, and ITS2) for 42 dermatophytes belonging to seven species (Trichophyton rubrum, T. mentagrophytes, T. soudanense, T. tonsurans, Epidermophyton floccosum, Microsporum canis, and M. gypseum) and developed a novel padlock probe and rolling-circle amplification (RCA)-based method for identification of single nucleotide polymorphisms (SNPs) that could be exploited to differentiate between Trichophyton spp. Sequencing results demonstrated intraspecies genetic variation for T. tonsurans, T. mentagrophytes, and T. soudanense but not T. rubrum. Signature sets of SNPs between T. rubrum and T. soudanense (4-bp difference) and T. violaceum and T. soudanense (3-bp difference) were identified. The RCA assay correctly identified five Trichophyton species. Although the use of two “group-specific” probes targeting both the ITS1 and the ITS2 regions were required to identify T. soudanense, the other species were identified by single ITS1- or ITS2-targeted species-specific probes. There was good agreement between ITS sequencing and the RCA assay. Despite limited genetic variation between Trichophyton spp., the sensitive, specific RCA-based SNP detection assay showed potential as a simple, reproducible method for the rapid (2-h) identification of Trichophyton spp. PMID:18234865

  8. Applications of polynomial optimization in financial risk investment

    NASA Astrophysics Data System (ADS)

    Zeng, Meilan; Fu, Hongwei

    2017-09-01

    Recently, polynomial optimization has found many important applications in optimization, financial economics, tensor eigenvalue problems, and other areas. This paper studies the applications of polynomial optimization in financial risk investment. We consider the standard mean-variance risk measurement model and the mean-variance risk measurement model with transaction costs. We use Lasserre's hierarchy of semidefinite programming (SDP) relaxations to solve specific cases. The results show that polynomial optimization is effective for some financial optimization problems.

  9. An analysis of the nucleon spectrum from lattice partially-quenched QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    W. Armour; Allton, C. R.; Leinweber, Derek B.

    2010-09-01

    The chiral extrapolation of the nucleon mass, Mn, is investigated using data coming from 2-flavour partially-quenched lattice simulations. The leading one-loop corrections to the nucleon mass are derived for partially-quenched QCD. A large sample of lattice results from the CP-PACS Collaboration is analysed, with explicit corrections for finite lattice spacing artifacts. The extrapolation is studied using finite range regularised chiral perturbation theory. The analysis also provides a quantitative estimate of the leading finite volume corrections. It is found that the discretisation, finite-volume and partial quenching effects can all be very well described in this framework, producing an extrapolated value of Mn in agreement with experiment. This procedure is also compared with extrapolations based on polynomial forms, where the results are less encouraging.

  10. Adaptive compensation of aberrations in ultrafast 3D microscopy using a deformable mirror

    NASA Astrophysics Data System (ADS)

    Sherman, Leah R.; Albert, O.; Schmidt, Christoph F.; Vdovin, Gleb V.; Mourou, Gerard A.; Norris, Theodore B.

    2000-05-01

    3D imaging using a multiphoton scanning confocal microscope is ultimately limited by aberrations of the system. We describe a system that adaptively compensates the aberrations with a deformable mirror. We have increased the transverse scanning range of the microscope by a factor of three with compensation of off-axis aberrations. We have also significantly increased the longitudinal scanning depth with compensation of the spherical aberrations arising from penetration into the sample. Our correction is based on a genetic algorithm that uses the second harmonic or two-photon fluorescence signal excited in the sample by femtosecond pulses as the enhancement parameter. This allows us to globally optimize the wavefront without a wavefront measurement. To improve the speed of the optimization we use Zernike polynomials as the basis for correction. Corrections can be stored in a database for look-up with future samples.
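
    A compact sketch of the wavefront-sensor-free optimization loop, with a made-up Gaussian signal model standing in for the measured second-harmonic or two-photon signal, and a plain truncation-selection genetic algorithm in place of the authors' specific implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
target = rng.uniform(-1, 1, 6)          # unknown optimal Zernike coefficients

def signal(c):
    """Stand-in for the measured SHG / two-photon signal: peaks when the
    mirror's Zernike coefficients c cancel the system aberrations."""
    return np.exp(-np.sum((c - target) ** 2))

def ga_optimize(fitness, dim=6, pop=40, gens=60, sigma=0.3):
    """Minimal genetic algorithm: truncation selection + Gaussian mutation.
    Only the scalar signal is evaluated; no wavefront measurement is needed."""
    P = rng.uniform(-1, 1, (pop, dim))
    for _ in range(gens):
        f = np.array([fitness(p) for p in P])
        elite = P[np.argsort(f)[-pop // 4:]]             # keep the best quarter
        children = elite[rng.integers(len(elite), size=pop - len(elite))]
        children = children + sigma * rng.standard_normal(children.shape)
        P = np.vstack([elite, children])
        sigma *= 0.95                                     # anneal the mutation step
    return P[np.argmax([fitness(p) for p in P])]

best = ga_optimize(signal)
print(np.round(best - target, 2))   # near zero: aberrations compensated
```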

  11. Overcoming the errors of in-house PCR used in the clinical laboratory for the diagnosis of extrapulmonary tuberculosis.

    PubMed

    Kunakorn, M; Raksakai, K; Pracharktam, R; Sattaudom, C

    1999-03-01

    Our experiences from 1993 to 1997 in the development and use of IS6110-based PCR for the diagnosis of extrapulmonary tuberculosis in a routine clinical setting revealed that error-correcting processes can improve existing diagnostic methodology. The reamplification method initially used had a sensitivity of 90.91% and a specificity of 93.75%. Concern focused on the false positive results of this method caused by product-carryover contamination. The method was changed to single-round PCR with carryover prevention by uracil DNA glycosylase (UDG), resulting in 100% specificity but only 63% sensitivity. Dot blot hybridization was added after the single-round PCR, increasing the sensitivity to 87.50%. However, false positivity resulted from nonspecific dot blot hybridization signals, reducing the specificity to 89.47%. The hybridization step was changed to a Southern blot with a new oligonucleotide probe, giving a sensitivity of 85.71% and raising the specificity to 99.52%. We conclude that a PCR protocol for routine clinical use should include UDG for carryover prevention and hybridization with specific probes to optimize diagnostic sensitivity and specificity in extrapulmonary tuberculosis testing.

  12. Synthesized tissue-equivalent dielectric phantoms using salt and polyvinylpyrrolidone solutions.

    PubMed

    Ianniello, Carlotta; de Zwart, Jacco A; Duan, Qi; Deniz, Cem M; Alon, Leeor; Lee, Jae-Seung; Lattanzi, Riccardo; Brown, Ryan

    2018-07-01

    To explore the use of polyvinylpyrrolidone (PVP) for simulated materials with tissue-equivalent dielectric properties. PVP and salt were used to control, respectively, the relative permittivity and electrical conductivity in a collection of 63 samples with a range of solute concentrations. Their dielectric properties were measured with a commercial probe and fitted to a 3D polynomial in order to establish an empirical recipe. The material's thermal properties and MR spectra were measured. The empirical polynomial recipe (available at https://www.amri.ninds.nih.gov/cgi-bin/phantomrecipe) provides the PVP and salt concentrations required for dielectric materials with permittivity and electrical conductivity values between approximately 45 and 78, and 0.1 to 2 siemens per meter, respectively, from 50 MHz to 4.5 GHz. The second- (solute concentrations) and seventh- (frequency) order polynomial recipe provided less than 2.5% relative error between the measured and target properties. PVP side peaks in the spectra were minor and unaffected by temperature changes. PVP-based phantoms are easy to prepare and nontoxic, and their semitransparency makes air bubbles easy to identify. The polymer can be used to create simulated material with a range of dielectric properties, negligible spectral side peaks, and long T2 relaxation time, which are favorable in many MR applications. Magn Reson Med 80:413-419, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
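
    Once property surfaces have been fitted, producing a recipe amounts to inverting two scalar functions for the two solute concentrations. The sketch below uses made-up fitted surfaces at a single frequency, not the published recipe:

```python
import numpy as np
from scipy.optimize import fsolve

# Placeholder fitted surfaces (illustrative only): permittivity falls with
# PVP concentration; conductivity rises mainly with salt concentration.
eps = lambda pvp, salt: 78.0 - 0.75 * pvp + 0.002 * pvp**2 - 0.1 * salt
sig = lambda pvp, salt: 0.05 + 0.19 * salt + 0.002 * pvp

def recipe(eps_target, sig_target):
    """Invert the two fitted property surfaces for the (PVP %, NaCl %)
    concentrations that hit the target permittivity and conductivity."""
    f = lambda c: [eps(*c) - eps_target, sig(*c) - sig_target]
    return fsolve(f, x0=[20.0, 1.0])

pvp, salt = recipe(52.0, 0.6)   # example muscle-like target properties
print(round(pvp, 1), round(salt, 2))
```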

  13. Eye aberration analysis with Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Molebny, Vasyl V.; Chyzh, Igor H.; Sokurenko, Vyacheslav M.; Pallikaris, Ioannis G.; Naoumidis, Leonidas P.

    1998-06-01

    New horizons for accurate photorefractive sight correction, afforded by novel flying-spot technologies, require adequate measurement of the photorefractive properties of the eye. Proposed eye refraction mapping techniques present measurement results for a finite number of points across the eye aperture, requiring these data to be approximated by a 3D surface. A technique for wavefront approximation with Zernike polynomials is described, using optimization of the number of polynomial coefficients. The optimization criterion is the closest proximity of the resulting continuous surface to the values calculated for the given discrete points. The methodology includes statistical evaluation of the minimal root mean square deviation (RMSD) of the transverse aberrations; in particular, the maximal coefficient indices of the Zernike polynomials are varied consecutively, the coefficients are recalculated, and the RMSD is recomputed. The optimization finishes at the minimal value of RMSD. Formulas are given for computing ametropia and the size of the spot of light on the retina caused by spherical aberration, coma, and astigmatism. Results are illustrated by experimental data, which could be of interest for other applications where detailed evaluation of eye parameters is needed.
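
    The coefficient-count optimization can be sketched as a model-order selection loop: fit an increasing number of Zernike terms and keep the count that minimizes a held-out RMSD. The paper's criterion uses the RMSD of transverse aberrations; a simple even/odd sample split is used here instead:

```python
import numpy as np

def zernike_basis(r, th, n_terms):
    """First few Zernike terms (piston, tilts, defocus, astigmatism, coma)
    on unit-disk samples (r, theta); enough for the model-order demo below."""
    Z = [np.ones_like(r), 2*r*np.cos(th), 2*r*np.sin(th),
         np.sqrt(3)*(2*r**2 - 1),
         np.sqrt(6)*r**2*np.cos(2*th), np.sqrt(6)*r**2*np.sin(2*th),
         np.sqrt(8)*(3*r**3 - 2*r)*np.cos(th), np.sqrt(8)*(3*r**3 - 2*r)*np.sin(th)]
    return np.column_stack(Z[:n_terms])

def best_term_count(r, th, w, max_terms=8):
    """Pick the number of Zernike coefficients minimizing held-out RMSD."""
    fit, val = slice(0, None, 2), slice(1, None, 2)
    rmsds = []
    for n in range(1, max_terms + 1):
        A = zernike_basis(r[fit], th[fit], n)
        c, *_ = np.linalg.lstsq(A, w[fit], rcond=None)
        resid = w[val] - zernike_basis(r[val], th[val], n) @ c
        rmsds.append(np.sqrt(np.mean(resid**2)))
    return int(np.argmin(rmsds)) + 1

# Example: noisy defocus + astigmatism wavefront sampled on the unit disk
rng = np.random.default_rng(3)
r, th = np.sqrt(rng.uniform(0, 1, 400)), rng.uniform(0, 2*np.pi, 400)
w = 0.6*np.sqrt(3)*(2*r**2 - 1) + 0.3*np.sqrt(6)*r**2*np.cos(2*th) \
    + 0.02*rng.standard_normal(400)
print(best_term_count(r, th, w))   # typically 5-6 terms
```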

  14. Novel, Miniature Multi-Hole Probes and High-Accuracy Calibration Algorithms for their use in Compressible Flowfields

    NASA Technical Reports Server (NTRS)

    Rediniotis, Othon K.

    1999-01-01

    Two new calibration algorithms were developed for the calibration of non-nulling multi-hole probes in compressible, subsonic flowfields. The reduction algorithms are robust and able to reduce data from any multi-hole probe inserted into any subsonic flowfield, generating very accurate predictions of the velocity vector, flow direction, total pressure, and static pressure. One of the algorithms, PROBENET, is based on the theory of neural networks, while the other is of a more conventional nature (a polynomial approximation technique) and introduces a novel idea of local least-squares fits. Both algorithms have been developed into complete, user-friendly software packages. New technology was developed for the fabrication of miniature multi-hole probes, with probe tip diameters down to 0.035". Several miniature 5- and 7-hole probes, with different probe tip geometries (hemispherical, conical, faceted) and different overall shapes (straight, cobra, elbow probes), were fabricated, calibrated, and tested. Emphasis was placed on the development of four stainless-steel conical 7-hole probes, 1/16" in diameter, calibrated at NASA Langley for the entire subsonic regime. The developed calibration algorithms were extensively tested with these probes, demonstrating excellent prediction capabilities. The probes were used in the "trap wing" wind tunnel tests in the 14'x22' wind tunnel at NASA Langley, providing valuable information on the flowfield over the wing. This report is organized in the following fashion: it consists of a "Technical Achievements" section that summarizes the major achievements, followed by an assembly of journal articles produced from this project, and ends with two manuals for the two probe calibration algorithms developed.
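
    The local least-squares idea can be sketched independently of probe geometry: for each query point in pressure-coefficient space, fit a low-order polynomial to only the k nearest calibration points. The names and calibration layout below are illustrative, not the report's implementation:

```python
import numpy as np

def local_lsq_predict(query, cal_in, cal_out, k=30, order=2):
    """Local least-squares calibration: given a probe reading (e.g. a pair of
    non-dimensional pressure coefficients), fit a low-order 2-D polynomial to
    the k nearest calibration points only, then evaluate it at the query.
    cal_in : (n, 2) calibration inputs; cal_out : (n, m) calibrated outputs
    (flow angles, total and static pressure, ...).
    """
    d = np.linalg.norm(cal_in - query, axis=1)
    idx = np.argsort(d)[:k]                       # local neighborhood
    def terms(x, y):
        return np.column_stack([x**i * y**j for i in range(order + 1)
                                for j in range(order + 1 - i)])
    c, *_ = np.linalg.lstsq(terms(cal_in[idx, 0], cal_in[idx, 1]),
                            cal_out[idx], rcond=None)
    return terms(np.array([query[0]]), np.array([query[1]])) @ c
```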

  15. The use of WaveLight® Contoura to create a uniform cornea: the LYRA Protocol. Part 3: the results of 50 treated eyes.

    PubMed

    Motwani, Manoj

    2017-01-01

    To demonstrate how using the WaveLight® Contoura measured astigmatism and axis eliminates corneal astigmatism and creates uniformly shaped corneas. A retrospective analysis was conducted of the first 50 eyes to undergo bilateral full WaveLight® Contoura LASIK correction of measured astigmatism and axis (versus conventional manifest refraction), using the Layer Yolked Reduction of Astigmatism (LYRA) Protocol in all cases. All patients had astigmatism corrected and had at least 1 week of follow-up. Accuracy relative to the desired refractive goal was assessed by postoperative refraction, aberration reduction was assessed via calculation of polynomials, and postoperative visions were analyzed as a secondary goal. The average difference of astigmatic power from manifest to measured was 0.5462D (with a range of 0-1.69D), and the average difference of axis was 14.94° (with a range of 0°-89°). Forty-seven of 50 eyes had a goal of plano, 3 had a monovision goal. Astigmatism was fully eliminated in all but 2 eyes, and 1 eye had regression with astigmatism. Of the eyes with plano as the goal, 80.85% were 20/15 or better, and 100% were 20/20 or better. Polynomial analysis postoperatively showed that at 6.5 mm, the average C3 was reduced by 86.5% and the average C5 by 85.14%. Using WaveLight® Contoura measured astigmatism and axis removes higher order aberrations and allows for the creation of a more uniform cornea, with accurate removal of astigmatism and reduction of aberration polynomials. WaveLight® Contoura successfully links the refractive correction layer and aberration repair layer using the LYRA Protocol to demonstrate how aberration removal can affect refractive correction.

  16. Echocardiographic anatomy of the mitral valve: a critical appraisal of 2-dimensional imaging protocols with a 3-dimensional perspective.

    PubMed

    Mahmood, Feroze; Hess, Philip E; Matyal, Robina; Mackensen, G Burkhard; Wang, Angela; Qazi, Aisha; Panzica, Peter J; Lerner, Adam B; Maslow, Andrew

    2012-10-01

    To highlight the limitations of traditional 2-dimensional (2D) echocardiographic mitral valve (MV) examination methodologies, which do not account for patient-specific transesophageal echocardiographic (TEE) probe adjustments made during an actual clinical perioperative TEE examination. Institutional quality-improvement project. Tertiary care hospital. Attending anesthesiologists certified by the National Board of Echocardiography. Using the technique of multiplanar reformatting with 3-dimensional (3D) data, ambiguous 2D images of the MV were generated, which resembled standard midesophageal 2D views. Based on the 3D image, the MV scallops visualized in each 2D image were recognized exactly by the position of the scan plane. Twenty-three such 2D MV images were created in a presentation from the 3D datasets. Anesthesia staff members (n = 13) were invited to view the presentation based on the 2D images only and asked to identify the MV scallops. Their responses were scored as correct or incorrect based on the 3D image. The overall accuracy was 30.4% in identifying the MV scallops. The transcommissural view was identified correctly >90% of the time. The accuracy of the identification of A1, A3, P1, and P3 scallops was <50%. The accuracy of the identification of A2P2 scallops was ≥50%. In the absence of information on TEE probe adjustments performed to acquire a specific MV image, it is possible to misidentify the scallops. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. Functional Form of the Radiometric Equation for the SNPP VIIRS Reflective Solar Bands: An Initial Study

    NASA Technical Reports Server (NTRS)

    Lei, Ning; Xiong, Xiaoxiong

    2016-01-01

    The Visible Infrared Imaging Radiometer Suite (VIIRS) aboard the Suomi National Polar-orbiting Partnership (SNPP) satellite is a passive scanning radiometer and an imager, observing radiative energy from the Earth in 22 spectral bands from 0.41 to 12 microns, 14 of which are reflective solar bands (RSBs). Extending the formula used by the Moderate Resolution Imaging Spectroradiometer instruments, the VIIRS currently determines the sensor aperture spectral radiance through a quadratic polynomial of its detector digital count. It is known that for the RSBs the quadratic polynomial is not adequate over the design-specified spectral radiance range, and that using a quadratic polynomial can drastically increase the errors in the polynomial coefficients, leading to possibly large errors in the determined aperture spectral radiance. In addition, it is very desirable to be able to extend the radiance calculation formula to correctly retrieve aperture spectral radiances beyond the design-specified range. In order to determine the aperture spectral radiance more accurately from the observed digital count, we examine a few polynomials of the detector digital count for calculating the sensor aperture spectral radiance.
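
    The order-selection question is easy to demonstrate numerically: when the count-to-radiance response contains nonlinearity beyond quadratic (the synthetic curve below is a made-up stand-in, not VIIRS data), a quadratic fit leaves large residuals that a higher-order polynomial removes:

```python
import numpy as np

dn = np.linspace(0, 4000, 60)                        # detector digital counts
L_true = 0.02 * dn + 3e-7 * dn**2 - 6e-11 * dn**3    # synthetic "truth" response

for order in (2, 3, 4):
    c = np.polyfit(dn, L_true, order)                # candidate calibration fit
    err = np.abs(np.polyval(c, dn) - L_true).max()
    print(order, f"max fit error = {err:.2e}")       # order >= 3 collapses the error
```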

  18. Lattice Boltzmann method for bosons and fermions and the fourth-order Hermite polynomial expansion.

    PubMed

    Coelho, Rodrigo C V; Ilha, Anderson; Doria, Mauro M; Pereira, R M; Aibe, Valter Yoshihiko

    2014-04-01

    The Boltzmann equation with the Bhatnagar-Gross-Krook collision operator is considered for the Bose-Einstein and Fermi-Dirac equilibrium distribution functions. We show that the expansion of the microscopic velocity in terms of Hermite polynomials must be carried to the fourth order to correctly describe the energy equation. The viscosity and thermal coefficients, previously obtained by Yang et al. [Shi and Yang, J. Comput. Phys. 227, 9389 (2008); Yang and Hung, Phys. Rev. E 79, 056708 (2009)] through the Uehling-Uhlenbeck approach, are also derived here. Thus the construction of a lattice Boltzmann method for the quantum fluid is possible provided that the Bose-Einstein and Fermi-Dirac equilibrium distribution functions are expanded to fourth order in the Hermite polynomials.

  19. Orthonormal vector general polynomials derived from the Cartesian gradient of the orthonormal Zernike-based polynomials.

    PubMed

    Mafusire, Cosmas; Krüger, Tjaart P J

    2018-06-01

    The concept of orthonormal vector circle polynomials is revisited by deriving a set from the Cartesian gradient of Zernike polynomials in a unit circle using a matrix-based approach. The heart of this model is a closed-form matrix equation of the gradient of Zernike circle polynomials expressed as a linear combination of lower-order Zernike circle polynomials related through a gradient matrix. This is a sparse matrix whose elements are two-dimensional standard basis transverse Euclidean vectors. Using the outer product form of the Cholesky decomposition, the gradient matrix is used to calculate a new matrix, which we used to express the Cartesian gradient of the Zernike circle polynomials as a linear combination of orthonormal vector circle polynomials. Since this new matrix is singular, the orthonormal vector polynomials are recovered by reducing the matrix to its row echelon form using the Gauss-Jordan elimination method. We extend the model to derive orthonormal vector general polynomials, which are orthonormal in a general pupil by performing a similarity transformation on the gradient matrix to give its equivalent in the general pupil. The outer form of the Gram-Schmidt procedure and the Gauss-Jordan elimination method are then applied to the general pupil to generate the orthonormal vector general polynomials from the gradient of the orthonormal Zernike-based polynomials. The performance of the model is demonstrated with a simulated wavefront in a square pupil inscribed in a unit circle.

  20. Thermodynamic correction of particle concentrations measured by underwing probes on fast flying aircraft

    NASA Astrophysics Data System (ADS)

    Weigel, R.; Spichtinger, P.; Mahnke, C.; Klingebiel, M.; Afchine, A.; Petzold, A.; Krämer, M.; Costa, A.; Molleker, S.; Jurkat, T.; Minikin, A.; Borrmann, S.

    2015-12-01

    Particle concentration measurements with underwing probes on aircraft are impacted by air compression upstream of the instrument body as a function of flight velocity. In particular for fast-flying aircraft the necessity arises to account for compression of the air sample volume. Hence, a correction procedure that is commonly applicable to different instruments is needed to invert measured particle number concentrations to ambient conditions and obtain comparable results. In the compression region where the detection of particles occurs (i.e. under factual measurement conditions), the pressure and temperature of the air sample are increased compared to the ambient (undisturbed) conditions at some distance from the aircraft. Conventional procedures for scaling the measured number densities to ambient conditions presume that the particle penetration speed through the instruments' detection area equals the aircraft speed (True Air Speed, TAS). However, particle imaging instruments equipped with pitot-tubes measuring the Probe Air Speed (PAS) of each underwing probe reveal PAS values systematically below those of the TAS. We conclude that the deviation between PAS and TAS is mainly caused by the compression of the probed air sample. From measurements during two missions in 2014 with the German Gulfstream G-550 (HALO - High Altitude LOng range) research aircraft we develop a procedure to correct the measured particle concentration to ambient conditions using a thermodynamic approach. With the provided equation the corresponding concentration correction factor ξ is applicable to the high frequency measurements of each underwing probe which is equipped with its own air speed sensor (e.g. a pitot-tube). ξ-values of 1 to 0.85 are calculated for air speeds (i.e. TAS) between 60 and 260 m s-1. From HALO data it is found that ξ does not significantly vary between the different deployed instruments. Thus, for the current HALO underwing probe configuration a parameterisation of ξ as a function of TAS is provided for instances in which PAS measurements are lacking. The ξ-correction yields higher ambient particle concentrations by about 15-25% compared to conventional procedures, an improvement that can be considered significant for many research applications. The calculated ξ-values are specifically related to the considered HALO underwing probe arrangement and may differ for other aircraft or instrument geometries. Moreover, the ξ-correction may not cover all impacts originating from high flight velocities and from interferences between the instruments and, e.g., the aircraft wings and/or fuselage. Consequently, it is important that PAS (as a function of TAS) is individually measured by each probe deployed underneath the wings of a fast-flying aircraft.
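
    A sketch of such a correction factor, assuming adiabatic deceleration of the sampled air from TAS to PAS followed by isentropic compression; the paper's actual derivation may differ in detail, but this simple model reproduces the quoted ξ range of roughly 1 to 0.85:

```python
def xi_correction(tas, pas, t_amb=220.0, gamma=1.4, cp=1004.0):
    """Concentration correction factor for air compression ahead of an
    underwing probe. Assumes adiabatic deceleration of the sampled air from
    TAS to PAS (kinetic energy converted to heat) and isentropic compression.
    tas, pas in m/s; t_amb in K. Returns xi = rho_ambient / rho_sample, which
    scales a concentration measured in the compressed sample volume back to
    ambient conditions.
    """
    t_sample = t_amb + (tas**2 - pas**2) / (2.0 * cp)
    return (t_amb / t_sample) ** (1.0 / (gamma - 1.0))

for tas, pas in [(60.0, 55.0), (260.0, 200.0)]:
    print(tas, round(xi_correction(tas, pas), 2))   # ~1.0 ... ~0.86
```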

  1. SU-E-T-614: Derivation of Equations to Define Inflection Points and Its Analysis in Flattening Filter Free Photon Beams Based On the Principle of Polynomial function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muralidhar, K Raja; Komanduri, K

    2014-06-01

    Purpose: The objective of this work is to present a mechanism for calculating inflection points on profiles at various depths and field sizes, together with a study of the percentage of dose at the inflection points for various field sizes and depths for 6XFFF and 10XFFF energy profiles. Methods: The percentage of dose was plotted against the inflection point position. Using a polynomial function, the authors formulated equations for calculating the exact inflection point on the profiles for 6XFFF and 10XFFF energies for all field sizes and at various depths. Results: In a flattening-filter-free beam, unlike in flattened beams, the dose at the inflection point of the profile decreases as field size increases for 10XFFF, whereas for 6XFFF the dose at the inflection point initially increases up to 10x10 cm2 and then decreases. The polynomial function was fitted for both FFF beams for all field sizes and depths. For small fields of less than 5x5 cm2 the inflection point and FWHM are almost the same, and hence the analysis can be done just as in FF beams. A change of 10% in dose can change the field width by 1 mm. Conclusion: In the present study, the derivation of equations based on a polynomial function to define the inflection point is a precise and accurate way to obtain the inflection point dose on any FFF beam profile at any depth, with better than 1% accuracy. Corrections can be made in future studies based on data from multiple machines. A brief study was also performed to evaluate the inflection point positions with respect to dose in FFF energies for various field sizes and depths for 6XFFF and 10XFFF energy profiles.
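
    A minimal sketch of the polynomial-based inflection-point calculation: fit the penumbra region with a polynomial and locate the zero of its second derivative. The sigmoid profile below is synthetic:

```python
import numpy as np

def inflection_point(x, dose, order=5):
    """Fit the penumbra with a polynomial and return the inflection point,
    i.e. the in-range zero of the second derivative where the gradient is
    steepest (the penumbra centre)."""
    p = np.polyfit(x, dose, order)
    roots = np.roots(np.polyder(p, 2))
    roots = roots[np.isreal(roots)].real
    roots = roots[(roots > x.min()) & (roots < x.max())]
    d1 = np.polyder(p, 1)
    return roots[np.argmax(np.abs(np.polyval(d1, roots)))]

# Toy penumbra: sigmoid field edge at x = 5 cm
x = np.linspace(3.0, 7.0, 81)
dose = 100.0 / (1.0 + np.exp(4.0 * (x - 5.0)))
print(round(inflection_point(x, dose), 2))   # ~5.0
```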

  2. Data Processing Algorithm for Diagnostics of Combustion Using Diode Laser Absorption Spectrometry.

    PubMed

    Mironenko, Vladimir R; Kuritsyn, Yuril A; Liger, Vladimir V; Bolshov, Mikhail A

    2018-02-01

    A new algorithm is proposed for evaluating the integral line intensity, and hence inferring the correct temperature of a hot zone, in the diagnostics of combustion by absorption spectroscopy with diode lasers. The algorithm is based not on fitting the baseline (BL) but on expanding the experimental and simulated spectra in a series of orthogonal polynomials, subtracting the first three components of the expansion from both the experimental and simulated spectra, and fitting the spectra thus modified. The algorithm is tested in a numerical experiment by simulating absorption spectra from a spectroscopic database and adding white noise and a parabolic BL. The spectra so constructed are treated as experimental data in further calculations. The theoretical absorption spectra were simulated with parameters (temperature, total pressure, concentration of water vapor) close to those used for simulating the experimental data. Then both spectra were expanded in a series of orthogonal polynomials and the first components were subtracted from each. The correct integral line intensities, and hence a correct temperature evaluation, were obtained by fitting the modified experimental and simulated spectra. The dependence of the mean and standard deviation of the integral line intensity estimate on the linewidth and on the number of subtracted components (first two or three) was examined. The proposed algorithm provides a correct estimation of temperature with a standard deviation better than 60 K (for T = 1000 K) for line half-widths up to 0.6 cm⁻¹. The proposed algorithm allows the parameters of a hot zone to be obtained without fitting the usually unknown BL.
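
    The essence of the algorithm can be sketched with Legendre polynomials (one convenient orthogonal family; the paper does not prescribe this specific basis). Because the expansion is linear, an up-to-parabolic baseline lives entirely in the first three components and is removed from both spectra without ever being fitted:

```python
import numpy as np
from numpy.polynomial import legendre as L

def strip_low_orders(x, y, n_drop=3, n_fit=40):
    """Expand a spectrum in orthogonal (Legendre) polynomials and zero the
    first n_drop components; this absorbs an up-to-parabolic baseline."""
    c = L.legfit(x, y, n_fit)
    c[:n_drop] = 0.0
    return L.legval(x, c)

# Absorption line + unknown parabolic baseline vs. a clean simulated line
x = np.linspace(-1, 1, 400)
line = np.exp(-0.5 * (x / 0.08) ** 2)                  # "true" absorption feature
exp_spec = 0.9 * line + 0.3 + 0.2 * x - 0.4 * x**2     # measured, with baseline
sim_spec = line                                        # simulated, no baseline
a = strip_low_orders(x, exp_spec)
b = strip_low_orders(x, sim_spec)
scale = (a @ b) / (b @ b)        # fit the modified spectra against each other
print(round(scale, 3))           # ~0.9: line strength recovered, baseline-free
```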

  3. FY 2016 Status Report: CIRFT Testing Data Analyses and Updated Curvature Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jy-An John; Wang, Hong

    This report provides a detailed description of FY15 test result corrections/analysis based on the FY16 Cyclic Integrated Reversible-Bending Fatigue Tester (CIRFT) test program methodology update used to evaluate the vibration integrity of spent nuclear fuel (SNF) under normal transportation conditions. The CIRFT consists of a U-frame testing setup and a real-time curvature measurement method. The three-component U-frame setup of the CIRFT has two rigid arms and linkages to a universal testing machine. The curvature of rod bending is obtained through a three-point deflection measurement method. Three linear variable differential transformers (LVDTs) are used and clamped to the side connecting plates of the U-frame to capture the deformation of the rod. The contact-based measurement, or three-LVDT-based curvature measurement system, on SNF rods has been proven to be quite reliable in CIRFT testing. However, how the LVDT head contacts the SNF rod may have a significant effect on the curvature measurement, depending on the magnitude and direction of rod curvature. It has been demonstrated that the contact/curvature issues can be corrected by using a correction on the sensor spacing. The sensor spacing defines the separation of the three LVDT probes and is a critical quantity in calculating the rod curvature once the deflections are obtained. The sensor spacing correction can be determined by using chisel-type probes. The method has been critically examined this year and has been shown to be difficult to implement in a hot cell environment, and thus cannot be implemented effectively. A correction based on the proposed equivalent gauge-length has the required flexibility and accuracy and can be appropriately used as a correction factor. The correction method based on the equivalent gauge length has been successfully demonstrated in CIRFT data analysis for the dynamic tests conducted on Limerick (LMK) (17 tests), North Anna (NA) (6 tests), and Catawba mixed oxide (MOX) (10 tests) SNF samples. These CIRFT tests were completed in FY14 and FY15. Specifically, the data sets obtained from measurement and monitoring were processed and analyzed. The fatigue life of rods has been characterized in terms of moment, curvature, and equivalent stress and strain.
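
    The three-LVDT curvature calculation reduces to a second-difference formula in which the sensor spacing enters squared, which is why the spacing (or equivalent gauge-length) correction matters so much; a small-deflection sketch:

```python
def curvature_from_lvdt(d1, d2, d3, h):
    """Rod curvature from a three-point deflection measurement: two outer
    LVDTs at spacing h on either side of the centre one. Small-deflection
    second-difference formula; h is the (possibly corrected) sensor
    half-spacing, which is where the gauge-length correction enters.
    """
    return (d1 - 2.0 * d2 + d3) / h**2

# Example: a rod bent to radius R = 2000 mm sampled at x = -12, 0, +12 mm
R, h = 2000.0, 12.0
sag = lambda x: R - (R**2 - x**2) ** 0.5     # circular-arc deflection
kappa = curvature_from_lvdt(sag(-h), sag(0.0), sag(h), h)
print(round(1.0 / kappa, 1))                 # ~2000.0 mm radius recovered
```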

  4. Comparison and Analysis of Geometric Correction Models of Spaceborne SAR

    PubMed Central

    Jiang, Weihao; Yu, Anxi; Dong, Zhen; Wang, Qingsong

    2016-01-01

    Following the development of synthetic aperture radar (SAR), SAR images have become increasingly common. Many researchers have conducted extensive studies on geolocation models, but little work has been conducted on the available models for the geometric correction of SAR images of different terrain. To address the terrain issue, four different models were compared and are described in this paper: a rigorous range-Doppler (RD) model, a rational polynomial coefficients (RPC) model, a revised polynomial (PM) model and an elevation derivation (EDM) model. The results of comparisons of the geolocation capabilities of the models show that a proper model for a SAR image of a specific terrain can be determined. A solution table was obtained to recommend a suitable model for users. Three TerraSAR-X images, two ALOS-PALSAR images and one Envisat-ASAR image were used for the experiment, including flat terrain and mountain terrain SAR images as well as two large area images. Geolocation accuracies of the models for different terrain SAR images were computed and analyzed. The comparisons of the models show that the RD model was accurate but was the least efficient; therefore, it is not the ideal model for real-time implementations. The RPC model is sufficiently accurate and efficient for the geometric correction of SAR images of flat terrain, whose precision is below 0.001 pixels. The EDM model is suitable for the geolocation of SAR images of mountainous terrain, and its precision can reach 0.007 pixels. Although the PM model does not produce results as precise as the other models, its efficiency is excellent and its potential should not be underestimated. With respect to the geometric correction of SAR images over large areas, the EDM model has higher accuracy under one pixel, whereas the RPC model consumes one third of the time of the EDM model. PMID:27347973
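
    For reference, evaluating an RPC model is a small computation: each image coordinate is the ratio of two 20-term cubic polynomials in normalized ground coordinates. The term ordering below follows the common RPC00B convention, and the example coefficients are toy placeholders:

```python
import numpy as np

def rpc_project(lat, lon, h, num, den, offs, scales):
    """Rational polynomial coefficients (RPC) geolocation: one normalized
    image coordinate as the ratio of two cubic polynomials in normalized
    (lat, lon, height). offs/scales are the normalization constants that
    ship with a product (placeholders in the example below)."""
    P, L, H = [(v - o) / s for v, o, s in zip((lat, lon, h), offs, scales)]
    terms = np.array([1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
                      P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3, P*H*H,
                      L*L*H, P*P*H, H**3])
    return (num @ terms) / (den @ terms)

num = np.zeros(20); num[1] = 1.0      # toy: coordinate tracks normalized longitude
den = np.zeros(20); den[0] = 1.0
print(rpc_project(34.05, -118.25, 100.0, num, den,
                  offs=(34.0, -118.3, 0.0), scales=(0.1, 0.1, 500.0)))   # 0.5
```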

  5. Atmospheric Dynamics on Venus, Jupiter, and Saturn: An Observational and Analytical Study

    NASA Technical Reports Server (NTRS)

    Bridger, Alison; Magalhaes, Julio A.; Young, Richard E.

    2000-01-01

    Determining the static stability of Jupiter's atmosphere below the visible cloud levels is important for understanding the dynamical modes by which energy and momentum are transported through Jupiter's deep troposphere. The Galileo Probe Atmospheric Structure Investigation (ASI) employed pressure and temperature sensors to directly measure these state variables during the parachute-descent phase, which started at a pressure (p) of 0.4 bars and ended at p = 22 bars. The internal temperature of the probe underwent large fluctuations that significantly exceeded design specifications. Corrections for these anomalous interior temperatures have been evaluated based on laboratory data acquired after the mission using the flight spare hardware. The corrections to the pressure sensor readings were particularly large, and the uncertainties in the atmospheric pressures derived from the p sensor measurements may still be significant. We have sought to estimate the formal uncertainties in the static stability derived from the p and T sensor measurements directly and to devise means of assessing the static stability of Jupiter's atmosphere which do not rely on the p sensor data.

  6. Fast conjugate phase image reconstruction based on a Chebyshev approximation to correct for B0 field inhomogeneity and concomitant gradients

    PubMed Central

    Chen, Weitian; Sica, Christopher T.; Meyer, Craig H.

    2008-01-01

    Off-resonance effects can cause image blurring in spiral scanning and various forms of image degradation in other MRI methods. Off-resonance effects can be caused by both B0 inhomogeneity and concomitant gradient fields. Previously developed off-resonance correction methods focus on the correction of a single source of off-resonance. This work introduces a computationally efficient method of correcting for B0 inhomogeneity and concomitant gradients simultaneously. The method is a fast alternative to conjugate phase reconstruction, with the off-resonance phase term approximated by Chebyshev polynomials. The proposed algorithm is well suited for semiautomatic off-resonance correction, which works well even with an inaccurate or low-resolution field map. The proposed algorithm is demonstrated using phantom and in vivo data sets acquired by spiral scanning. Semiautomatic off-resonance correction alone is shown to provide a moderate amount of correction for concomitant gradient field effects, in addition to B0 inhomogeneity effects. However, better correction is provided by the proposed combined method. The best results were produced using the semiautomatic version of the proposed combined method. PMID:18956462

  7. A method for automatically extracting infectious disease-related primers and probes from the literature

    PubMed Central

    2010-01-01

    Background Primer and probe sequences are the main components of nucleic acid-based detection systems. Biologists use primers and probes for different tasks, some related to the diagnosis and prescription of infectious diseases. The biological literature is the main information source for empirically validated primer and probe sequences. Therefore, it is becoming increasingly important for researchers to navigate this information. In this paper, we present a four-phase method for extracting and annotating primer/probe sequences from the literature. These phases are: (1) convert each document into a tree of paper sections, (2) detect the candidate sequences using a set of finite state machine-based recognizers, (3) refine problem sequences using a rule-based expert system, and (4) annotate the extracted sequences with their related organism/gene information. Results We tested our approach using a test set composed of 297 manuscripts. The extracted sequences and their organism/gene annotations were manually evaluated by a panel of molecular biologists. The results of the evaluation show that our approach is suitable for automatically extracting DNA sequences, achieving precision/recall rates of 97.98% and 95.77%, respectively. In addition, 76.66% of the detected sequences were correctly annotated with their organism name. The system also provided correct gene-related information for 46.18% of the sequences assigned a correct organism name. Conclusions We believe that the proposed method can facilitate routine tasks for biomedical researchers using molecular methods to diagnose different infectious diseases and prescribe treatment. In addition, the proposed method can be expanded to detect and extract other biological sequences from the literature. The extracted information can also be used to readily update available primer/probe databases or to create new databases from scratch. PMID:20682041
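
    Phase 2 of the pipeline (candidate detection) can be approximated with a simple recognizer: a run of IUPAC nucleotide codes of primer-like length, tolerant of the whitespace and 5'/3' decorations that typesetting introduces. This is a hedged stand-in for the paper's finite-state machines; the pattern and the 15-40 nt length bounds are assumptions, and false positives are expected to be filtered by the later refinement phase.

        import re

        # IUPAC nucleotide codes; primers/probes are typically 15-40 nt.
        PATTERN = re.compile(
            r"(?:5'\s*-?\s*)?((?:[ACGTURYSWKMBDHVN]\s*){15,40})(?:-?\s*3')?")

        def candidate_sequences(text):
            """Return cleaned candidate primer/probe sequences found in text."""
            hits = []
            for m in PATTERN.finditer(text.upper()):
                seq = re.sub(r"\s+", "", m.group(1))   # strip typesetting gaps
                hits.append(seq)
            return hits

        print(candidate_sequences("forward primer 5'-ACGGT TGACC TGGAC GTACT-3'"))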

  8. RPM-WEBBSYS: A web-based computer system to apply the rational polynomial method for estimating static formation temperatures of petroleum and geothermal wells

    NASA Astrophysics Data System (ADS)

    Wong-Loya, J. A.; Santoyo, E.; Andaverde, J. A.; Quiroz-Ruiz, A.

    2015-12-01

    A Web-Based Computer System (RPM-WEBBSYS) has been developed for the application of the Rational Polynomial Method (RPM) to estimate static formation temperatures (SFT) of geothermal and petroleum wells. The system is also capable of reproducing the full thermal recovery process that occurs during well completion. RPM-WEBBSYS was programmed using current information technology to compute SFT more efficiently. RPM-WEBBSYS can be executed easily and rapidly from any computing device (e.g., personal computers and portable devices such as tablets or smartphones) with Internet access and a web browser. The computer system was validated using bottomhole temperature (BHT) measurements logged in a synthetic heat transfer experiment, where a good match between predicted and true SFT was achieved. RPM-WEBBSYS was finally applied to BHT logs collected from well drilling and shut-in operations, where the typical under- and over-estimation problems of the SFT (exhibited by most existing analytical methods) were effectively corrected.
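
    The extrapolation idea behind a rational polynomial method can be sketched as a linearized rational fit of bottomhole temperature versus shut-in time, taking the t -> infinity limit of the fitted function as the SFT estimate. The (2,2) rational form and the synthetic build-up data below are illustrative assumptions, not the paper's exact formulation; real BHT logs would also carry noise.

        import numpy as np

        def rpm_sft(t, T):
            """Fit T(t) ~ (a0 + a1 t + a2 t^2) / (1 + b1 t + b2 t^2) by
            linear least squares and return the t -> infinity limit a2/b2."""
            A = np.column_stack([np.ones_like(t), t, t**2, -t*T, -(t**2)*T])
            coef, *_ = np.linalg.lstsq(A, T, rcond=None)
            a0, a1, a2, b1, b2 = coef
            return a2 / b2

        # Synthetic shut-in build-up with a known 120 degC asymptote.
        t = np.array([2., 4., 6., 8., 12., 18., 24., 36.])    # hours
        T = 120.0 - 45.0 / (1.0 + 0.5*t + 0.05*t**2)          # degC
        print(rpm_sft(t, T))   # recovers ~120 (the true SFT)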

  9. Contact angle measurement with a smartphone

    NASA Astrophysics Data System (ADS)

    Chen, H.; Muros-Cobos, Jesus L.; Amirfazli, A.

    2018-03-01

    In this study, a smartphone-based contact angle measurement instrument was developed. Compared with traditional measurement instruments, this instrument has the advantages of simplicity, compact size, and portability. An automatic contact point detection algorithm was developed to allow the instrument to correctly detect the drop contact points. Two different contact angle calculation methods, the Young-Laplace and polynomial fitting methods, were implemented in this instrument. The performance of this instrument was tested first with ideal synthetic drop profiles. It was shown that the accuracy of the new system with ideal synthetic drop profiles can reach 0.01% with both the Young-Laplace and polynomial fitting methods. Conducting experiments to measure both static and dynamic (advancing and receding) contact angles with the developed instrument, we found that the smartphone-based instrument provides measurement results as accurate and practical as those of traditional commercial instruments. The successful demonstration of the use of a smartphone (mobile phone) to conduct contact angle measurements is a significant advancement in the field, as it breaks the dominant mold of a computer and a bench-bound setup for such systems since their appearance in the 1980s.
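
    The polynomial-fitting route to the contact angle can be sketched in a few lines: fit the drop profile near the detected contact point and convert the slope there to an angle. The cubic degree and the synthetic circular-cap profile below are assumptions for illustration, not the instrument's implementation.

        import numpy as np

        def contact_angle_poly(x, y, x_contact, deg=3):
            """Contact angle (degrees) from the slope of a local polynomial fit."""
            p = np.polyfit(x, y, deg)
            slope = np.polyval(np.polyder(p), x_contact)
            return np.degrees(np.arctan(np.abs(slope)))

        # Synthetic 60-degree drop: unit circle centered at (0, -cos 60deg),
        # left contact point at x = -sin 60deg on the substrate y = 0.
        x = np.linspace(-0.86, -0.60, 30)
        y = np.sqrt(1.0 - x**2) - 0.5
        print(contact_angle_poly(x, y, x_contact=-np.sin(np.radians(60))))
        # ~60, within a degree for this cubic fit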

  10. Contact angle measurement with a smartphone.

    PubMed

    Chen, H; Muros-Cobos, Jesus L; Amirfazli, A

    2018-03-01

    In this study, a smartphone-based contact angle measurement instrument was developed. Compared with traditional measurement instruments, this instrument has the advantages of simplicity, compact size, and portability. An automatic contact point detection algorithm was developed to allow the instrument to correctly detect the drop contact points. Two different contact angle calculation methods, the Young-Laplace and polynomial fitting methods, were implemented in this instrument. The performance of this instrument was tested first with ideal synthetic drop profiles. It was shown that the accuracy of the new system with ideal synthetic drop profiles can reach 0.01% with both the Young-Laplace and polynomial fitting methods. Conducting experiments to measure both static and dynamic (advancing and receding) contact angles with the developed instrument, we found that the smartphone-based instrument provides measurement results as accurate and practical as those of traditional commercial instruments. The successful demonstration of the use of a smartphone (mobile phone) to conduct contact angle measurements is a significant advancement in the field, as it breaks the dominant mold of a computer and a bench-bound setup for such systems since their appearance in the 1980s.

  11. Solutions of interval type-2 fuzzy polynomials using a new ranking method

    NASA Astrophysics Data System (ADS)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim; Ghani, Ahmad Termimi Ab.; Ahmad, Noor'Ani

    2015-10-01

    A few years ago, a ranking method was introduced for fuzzy polynomial equations. The concept of the ranking method is to find the actual roots of fuzzy polynomials (if they exist). Fuzzy polynomials are transformed into systems of crisp polynomials using a ranking method based on three parameters, namely Value, Ambiguity and Fuzziness. However, it was found that solutions based on these three parameters are quite inefficient at producing answers. Therefore, in this study a new ranking method was developed with the aim of overcoming this inherent weakness. The new ranking method, which has four parameters, is then applied to interval type-2 fuzzy polynomials, covering interval type-2 fuzzy polynomial equations, dual fuzzy polynomial equations and systems of fuzzy polynomials. The efficiency of the new ranking method is then assessed numerically for triangular fuzzy numbers and trapezoidal fuzzy numbers. Finally, the approximate solutions produced in the numerical examples indicate that the new ranking method successfully produces actual roots for interval type-2 fuzzy polynomials.

  12. Glowing locked nucleic acids: brightly fluorescent probes for detection of nucleic acids in cells.

    PubMed

    Østergaard, Michael E; Cheguru, Pallavi; Papasani, Madhusudhan R; Hill, Rodney A; Hrdlicka, Patrick J

    2010-10-13

    Fluorophore-modified oligonucleotides have found widespread use in genomics and enable detection of single-nucleotide polymorphisms, real-time monitoring of PCR, and imaging of mRNA in living cells. Hybridization probes modified with polarity-sensitive fluorophores and molecular beacons (MBs) are among the most popular approaches to produce hybridization-induced increases in fluorescence intensity for nucleic acid detection. In the present study, we demonstrate that the 2'-N-(pyren-1-yl)carbonyl-2'-amino locked nucleic acid (LNA) monomer X is a highly versatile building block for generation of efficient hybridization probes and quencher-free MBs. The hybridization and fluorescence properties of these Glowing LNA probes are efficiently modulated and optimized by changes in probe backbone chemistry and architecture. Correctly designed probes are shown to exhibit (a) high affinity toward RNA targets, (b) excellent mismatch discrimination, (c) high biostability, and (d) pronounced hybridization-induced increases in fluorescence intensity leading to formation of brightly fluorescent duplexes with unprecedented emission quantum yields (Φ(F) = 0.45-0.89) among pyrene-labeled oligonucleotides. Finally, specific binding between messenger RNA and multilabeled quencher-free MBs based on Glowing LNA monomers is demonstrated (a) using in vitro transcription assays and (b) by quantitative fluorometric assays and direct microscopic observation of probes bound to mRNA in its native form. These features render Glowing LNA as promising diagnostic probes for biomedical applications.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oloff, L.-P., E-mail: oloff@physik.uni-kiel.de; Hanff, K.; Stange, A.

    With the advent of ultrashort-pulsed extreme ultraviolet sources, such as free-electron lasers or high-harmonic-generation (HHG) sources, a new research field for photoelectron spectroscopy has opened up in terms of femtosecond time-resolved pump-probe experiments. The impact of the high peak brilliance of these novel sources on photoemission spectra, so-called vacuum space-charge effects caused by the Coulomb interaction among the photoemitted probe electrons, has been studied extensively. However, possible distortions of the energy and momentum distributions of the probe photoelectrons caused by the low photon energy pump pulse due to the nonlinear emission of electrons have not been studied in detail yet. Here, we systematically investigate these pump laser-induced space-charge effects in a HHG-based experiment for the test case of highly oriented pyrolytic graphite. Specifically, we determine how the key parameters of the pump pulse—the excitation density, wavelength, spot size, and emitted electron energy distribution—affect the measured time-dependent energy and momentum distributions of the probe photoelectrons. The results are well reproduced by a simple mean-field model, which could open a path for the correction of pump laser-induced space-charge effects and thus toward probing ultrafast electron dynamics in strongly excited materials.

  14. Strategies for Determining Correct Cytochrome P450 Contributions in Hepatic Clearance Predictions: In Vitro-In Vivo Extrapolation as Modelling Approach and Tramadol as Proof-of-Concept Compound.

    PubMed

    T'jollyn, Huybrecht; Snoeys, Jan; Van Bocxlaer, Jan; De Bock, Lies; Annaert, Pieter; Van Peer, Achiel; Allegaert, Karel; Mannens, Geert; Vermeulen, An; Boussery, Koen

    2017-06-01

    Although the measurement of cytochrome P450 (CYP) contributions in metabolism assays is straightforward, determination of the actual in vivo contributions might be challenging. How representative are in vitro contributions of those in vivo? This article proposes an improved strategy for the determination of in vivo CYP enzyme-specific metabolic contributions, based on in vitro data, using an in vitro-in vivo extrapolation (IVIVE) approach. The approaches are exemplified using tramadol as model compound, and CYP2D6 and CYP3A4 as the involved enzymes. Metabolism data for tramadol and for the probe substrates midazolam (CYP3A4) and dextromethorphan (CYP2D6) were gathered in human liver microsomes (HLM) and recombinant human enzyme systems (rhCYP). From these probe substrates, an activity-adjustment factor (AAF) was calculated per CYP enzyme, for the determination of correct hepatic clearance contributions. As a reference, tramadol CYP contributions were scaled back from in vivo data (retrograde approach) and were compared with the ones derived in vitro. In this view, the AAF is an enzyme-specific factor, calculated from reference probe activity measurements in vitro and in vivo, that allows appropriate scaling of a test drug's in vitro activity to the 'healthy volunteer' population level. Calculation of an AAF thus accounts for any 'experimental' or 'batch-specific' activity difference between in vitro HLM and in vivo derived activity. In this specific HLM batch, for CYP3A4 and CYP2D6, an AAF of 0.91 and 1.97 was calculated, respectively. This implies that, in this batch, the in vitro CYP3A4 activity is 1.10-fold higher and the CYP2D6 activity 1.97-fold lower, compared to in vivo derived CYP activities. This study shows that, in cases where the HLM pool does not represent the typical mean population CYP activities, AAF correction of in vitro metabolism data optimizes CYP contributions in the prediction of hepatic clearance. Therefore, in vitro parameters for any test compound, obtained in a particular batch, should be corrected with the AAF for the respective enzymes. In the current study, especially the CYP2D6 contribution was found to better reflect the average in vivo situation. It is recommended that this novel approach is further evaluated using a broader range of compounds.
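
    The AAF bookkeeping described above reduces to a ratio and a rescaling. The sketch below uses the AAF values quoted in the abstract but hypothetical intrinsic-clearance numbers, so it shows the arithmetic rather than the study's data.

        def activity_adjustment_factor(clint_probe_invivo, clint_probe_invitro):
            """AAF: ratio of in vivo derived probe activity to the activity
            measured in this specific HLM batch."""
            return clint_probe_invivo / clint_probe_invitro

        def corrected_contributions(clint_invitro, aaf):
            """Scale each CYP's in vitro CLint by its AAF and renormalize."""
            adj = {cyp: cl * aaf[cyp] for cyp, cl in clint_invitro.items()}
            total = sum(adj.values())
            return {cyp: v / total for cyp, v in adj.items()}

        aaf = {'CYP3A4': 0.91, 'CYP2D6': 1.97}      # values from the abstract
        clint = {'CYP3A4': 12.0, 'CYP2D6': 3.0}     # hypothetical uL/min/mg
        print(corrected_contributions(clint, aaf))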

  15. Selection of turning-on fluorogenic probe as protein-specific detector obtained via the 10BASEd-T

    NASA Astrophysics Data System (ADS)

    Uematsu, Shuta; Midorikawa, Taiki; Ito, Yuji; Taki, Masumi

    2017-01-01

    In order to obtain a molecular probe for specific protein detection, we have synthesized a fluorogenic probe library of vast diversity on bacteriophage T7 via the gp10 based-thioetherification (10BASEd-T). A remarkable turning-on probe, which is excitable by widely applicable visible light, was selected from the library.

  16. Super-resolution pupil filtering for visual performance enhancement using adaptive optics

    NASA Astrophysics Data System (ADS)

    Zhao, Lina; Dai, Yun; Zhao, Junlei; Zhou, Xiaojun

    2018-05-01

    Ocular aberration correction can significantly improve visual function of the human eye. However, even under ideal aberration correction conditions, pupil diffraction restricts the resolution of retinal images. Pupil filtering is a simple super-resolution (SR) method that can overcome this diffraction barrier. In this study, a 145-element piezoelectric deformable mirror was used as a pupil phase filter because of its programmability and high fitting accuracy. Continuous phase-only filters were designed based on Zernike polynomial series and fitted through closed-loop adaptive optics. SR results were validated using double-pass point spread function images. Contrast sensitivity was further assessed to verify the SR effect on visual function. An F-test was conducted for nested models to statistically compare different CSFs. These results indicated that the CSFs for the proposed SR filter were significantly higher than those for diffraction-limited correction (p < 0.05). As such, the proposed filter design could provide useful guidance for supernormal vision optical correction of the human eye.

  17. Simple Proof of Jury Test for Complex Polynomials

    NASA Astrophysics Data System (ADS)

    Choo, Younseok; Kim, Dongmin

    Recently, some attempts have been made in the literature to give simple proofs of the Jury test for real polynomials. This letter presents a similar result for complex polynomials. A simple proof of the Jury test for complex polynomials is provided, based on Rouché's theorem and a single-parameter characterization of the Schur stability property for complex polynomials.
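
    The letter's single-parameter proof is not reproduced here, but the classical Schur-Cohn reduction that underlies Jury-type tests for complex polynomials is easy to sketch: p(z) is Schur stable iff |a_0| < |a_n| and the degree-reduced polynomial (conj(a_n) p(z) - a_0 p*(z))/z is Schur stable, where p*(z) = z^n conj(p(1/conj(z))) is the conjugate-reciprocal polynomial. The implementation below is a sketch under this standard recursion, verified on two small examples.

        import numpy as np

        def schur_stable(a, tol=1e-12):
            """True if all roots of p(z) = a[0] + a[1] z + ... + a[n] z^n
            lie strictly inside the unit circle (Schur-Cohn recursion)."""
            a = np.trim_zeros(np.asarray(a, dtype=complex), 'b')
            while len(a) > 1:
                if abs(a[0]) >= abs(a[-1]) - tol:   # necessary condition fails
                    return False
                arev = np.conj(a[::-1])             # conjugate-reciprocal p*
                b = np.conj(a[-1]) * a - a[0] * arev
                a = np.trim_zeros(b[1:], 'b')       # divide by z, drop zeros
                if len(a) == 0:                     # degenerate/self-inversive:
                    return False                    # inconclusive, not verified
            return True

        print(schur_stable([0.2, -2.1, 1]))   # roots 0.1 and 2  -> False
        print(schur_stable([-0.25, 0, 1]))    # roots +/- 0.5    -> True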

  18. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    NASA Astrophysics Data System (ADS)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-07-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
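
    A minimal Chebyshev smoother of the kind compared above, written for an SPD matrix with supplied eigenvalue bounds; the three-term recurrence follows standard Chebyshev acceleration, and the [lmax/10, lmax] smoothing window is a common multigrid heuristic rather than this paper's specific choice.

        import numpy as np

        def chebyshev_smoother(A, b, x, lmin, lmax, nsweeps=3):
            """Damp error components with eigenvalues in [lmin, lmax]."""
            theta = 0.5 * (lmax + lmin)
            delta = 0.5 * (lmax - lmin)
            sigma = theta / delta
            rho_old = 1.0 / sigma
            r = b - A @ x
            d = r / theta
            for _ in range(nsweeps):
                x = x + d
                r = b - A @ x
                rho = 1.0 / (2.0 * sigma - rho_old)
                d = rho * rho_old * d + (2.0 * rho / delta) * r
                rho_old = rho
            return x

        # 1D Poisson stencil; smooth only the upper part of the spectrum.
        n = 64
        A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        lmax = 4.0   # upper eigenvalue bound for this stencil
        x = chebyshev_smoother(A, np.ones(n), np.zeros(n), lmax/10, lmax, 4)
        # Smooth (low-frequency) error persists, as a smoother intends:
        print(np.linalg.norm(np.ones(n) - A @ x))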

  19. Comparative assessment of orthogonal polynomials for wavefront reconstruction over the square aperture.

    PubMed

    Ye, Jingfei; Gao, Zhishan; Wang, Shuai; Cheng, Jinlong; Wang, Wei; Sun, Wenqing

    2014-10-01

    Four orthogonal polynomials for reconstructing a wavefront over a square aperture based on the modal method are currently available, namely, the 2D Chebyshev polynomials, 2D Legendre polynomials, Zernike square polynomials and Numerical polynomials. They are all orthogonal over the full unit square domain. 2D Chebyshev polynomials are defined by the product of Chebyshev polynomials in x and y variables, as are 2D Legendre polynomials. Zernike square polynomials are derived by the Gram-Schmidt orthogonalization process, where the integration region across the full unit square is circumscribed outside the unit circle. Numerical polynomials are obtained by numerical calculation. The presented study is to compare these four orthogonal polynomials by theoretical analysis and numerical experiments from the aspects of reconstruction accuracy, remaining errors, and robustness. Results show that the Numerical orthogonal polynomial is superior to the other three polynomials because of its high accuracy and robustness even in the case of a wavefront with incomplete data.
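
    For concreteness, fitting a wavefront over the square aperture with products of 1D Legendre polynomials (one of the four bases compared above) is a short least-squares problem. The total-degree truncation and the synthetic aberration below are assumptions for illustration.

        import numpy as np
        from numpy.polynomial import legendre as L

        def legendre2d_design(x, y, deg):
            """Design matrix of products P_i(x) P_j(y), i + j <= deg."""
            cols = []
            for i in range(deg + 1):
                for j in range(deg + 1 - i):
                    ci = np.zeros(i + 1); ci[-1] = 1.0
                    cj = np.zeros(j + 1); cj[-1] = 1.0
                    cols.append(L.legval(x, ci) * L.legval(y, cj))
            return np.column_stack(cols)

        rng = np.random.default_rng(0)
        x, y = rng.uniform(-1, 1, (2, 2000))        # samples on [-1,1]^2
        w = 0.3*x**2*y + 0.1*(3*y**2 - 1)/2         # synthetic aberration
        A = legendre2d_design(x, y, deg=4)
        coef, *_ = np.linalg.lstsq(A, w, rcond=None)
        # Residual is ~machine precision: w lies in the fitted span.
        print("rms residual:", np.sqrt(np.mean((A @ coef - w)**2)))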

  20. Viability of atomistic potentials for thermodynamic properties of carbon dioxide at low temperatures.

    PubMed

    Kuznetsova, Tatyana; Kvamme, Bjørn

    2001-11-30

    An investigation into the volumetric and energetic properties of several atomistic models mimicking the geometry and quadrupole moment of carbon dioxide covered the liquid-vapor coexistence curve. Thermodynamic integration over a polynomial and an exponential-polynomial path was used to calculate the free energy. Computational results showed that the model using GROMOS Lennard-Jones parameters was unsuitable for bulk CO(2) simulations. On the other hand, the model with a potential fitted to reproduce only the correct density-pressure relationship in the supercritical region proved to yield the correct enthalpy of vaporization and free energy of liquid CO(2) in the low-temperature region. Except for the molar volume at the upper part of the vapor-liquid equilibrium line, the bulk properties of the exp-6-1 parametrization of the ab initio CO(2) potential were in close agreement with the experimental results. Copyright 2001 John Wiley & Sons, Inc. J Comput Chem 22: 1772-1781, 2001

  1. Symbolic discrete event system specification

    NASA Technical Reports Server (NTRS)

    Zeigler, Bernard P.; Chi, Sungdo

    1992-01-01

    Extending discrete event modeling formalisms to facilitate greater symbol manipulation capabilities is important to further their use in intelligent control and design of high autonomy systems. An extension to the DEVS formalism that facilitates symbolic expression of event times by extending the time base from the real numbers to the field of linear polynomials over the reals is defined. A simulation algorithm is developed to generate the branching trajectories resulting from the underlying nondeterminism. To efficiently manage symbolic constraints, a consistency checking algorithm for linear polynomial constraints based on feasibility checking algorithms borrowed from linear programming has been developed. The extended formalism offers a convenient means to conduct multiple, simultaneous explorations of model behaviors. Examples of application are given, with a focus on fault model analysis.
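
    The feasibility check that such a consistency algorithm borrows from linear programming can be sketched directly: a set of linear constraints over symbolic event times is consistent iff a phase-one LP with a zero objective finds a feasible point. The scipy call, the nonnegativity of the time variables, and the toy constraints are assumptions; the paper's own algorithm is specialized beyond this.

        import numpy as np
        from scipy.optimize import linprog

        def consistent(A_ub, b_ub):
            """Feasibility of A_ub @ t <= b_ub with t >= 0 (zero objective)."""
            n = A_ub.shape[1]
            res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=b_ub,
                          bounds=[(0, None)] * n, method="highs")
            return res.status == 0    # 0 = feasible optimum found

        # t1 - t2 <= -1 and t2 - t1 <= -1 are jointly contradictory.
        A = np.array([[1.0, -1.0], [-1.0, 1.0]])
        print(consistent(A, np.array([-1.0, -1.0])))   # False
        print(consistent(A[:1], np.array([-1.0])))     # True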

  2. Bohm-criterion approximation versus optimal matched solution for a cylindrical probe in radial-motion theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Din, Alif

    2016-08-15

    The theory of positive-ion collection by a probe immersed in a low-pressure plasma was reviewed and extended by Allen et al. [Proc. Phys. Soc. 70, 297 (1957)]. The numerical computations for cylindrical and spherical probes in a sheath region were presented by F. F. Chen [J. Nucl. Energy C 7, 41 (1965)]. Here, in this paper, the sheath and presheath solutions for a cylindrical probe are matched through a numerical matching procedure to yield a “matched” potential profile, or “M solution.” The solution based on the Bohm criterion approach, the “B solution,” is discussed for this particular problem. The cylindrical probe characteristics obtained from the correct potential profile (M solution) and from the approximated Bohm-criterion approach differ. This raises questions about the correctness of cylindrical probe theories relying only on the Bohm-criterion approach. A comparison between theoretical and experimental ion current characteristics also shows that in an argon plasma the motion of the ions towards the probe is almost radial.

  3. Robust stability of fractional order polynomials with complicated uncertainty structure

    PubMed Central

    Şenol, Bilal; Pekař, Libor

    2017-01-01

    The main aim of this article is to present a graphical approach to robust stability analysis for families of fractional order (quasi-)polynomials with complicated uncertainty structure. More specifically, the work emphasizes the multilinear, polynomial and general structures of uncertainty and, moreover, the retarded quasi-polynomials with parametric uncertainty are studied. Since the families with these complex uncertainty structures suffer from the lack of analytical tools, their robust stability is investigated by numerical calculation and depiction of the value sets and subsequent application of the zero exclusion condition. PMID:28662173

  4. Computational Performance of a Parallelized Three-Dimensional High-Order Spectral Element Toolbox

    NASA Astrophysics Data System (ADS)

    Bosshard, Christoph; Bouffanais, Roland; Clémençon, Christian; Deville, Michel O.; Fiétier, Nicolas; Gruber, Ralf; Kehtari, Sohrab; Keller, Vincent; Latt, Jonas

    In this paper, a comprehensive performance review of an MPI-based high-order three-dimensional spectral element method C++ toolbox is presented. The focus is put on the performance evaluation of several aspects, with a particular emphasis on parallel efficiency. The performance is analyzed with the help of a time prediction model based on a parameterization of the application and the hardware resources. A tailor-made CFD computation benchmark case is introduced and used to carry out this review, with particular attention to clusters with up to 8192 cores. Some problems in the parallel implementation have been detected and corrected. The theoretical complexities with respect to the number of elements, to the polynomial degree, and to communication needs are correctly reproduced. It is concluded that this type of code has a nearly perfect speed-up on machines with thousands of cores, and is ready to make the step to next-generation petaflop machines.

  5. Structure design and characteristic analysis of micro-nano probe based on six dimensional micro-force measuring principle

    NASA Astrophysics Data System (ADS)

    Yang, Hong-tao; Cai, Chun-mei; Fang, Chuan-zhi; Wu, Tian-feng

    2013-10-01

    In order to develop a micro-nano probe with an error self-correcting function and a rigid structure, a new micro-nano probe system was developed based on a six-dimensional micro-force measuring principle. The structure and working principle of the probe are introduced in detail. A static nonlinear decoupling method was established with a BP neural network to remove the dimensional coupling present in the force measurements in each direction. The optimal parameters of the BP neural network were selected and decoupling simulation experiments were performed. The maximum probe coupling rate after decoupling is 0.039% in the X direction, 0.025% in the Y direction and 0.027% in the Z direction. The static measurement sensitivity of the probe reaches 10.76 με/mN in the Z direction and 14.55 με/mN in the X and Y directions. Modal analysis and harmonic response analysis of the probe under three-dimensional harmonic loads were performed using the finite element method. The natural frequencies under different vibration modes were obtained and the working frequency of the probe was determined, which is above 10000 Hz. Transient response analysis of the probe indicates that its response time can reach 0.4 ms. These results show that the developed micro-nano probe meets the triggering requirements of micro-nano measurement. The three-dimensional measuring force can be measured precisely by the developed probe, which can be used to predict and correct the force deformation error and the touch error of the measuring ball and the measuring rod.

  6. Injection molding lens metrology using software configurable optical test system

    NASA Astrophysics Data System (ADS)

    Zhan, Cheng; Cheng, Dewen; Wang, Shanshan; Wang, Yongtian

    2016-10-01

    Optical plastic lenses produced by injection molding machines possess numerous advantages: light weight, impact resistance, low cost, etc. The measuring methods in the optical shop are mainly interferometry and profilometry. However, these instruments are not only expensive but also difficult to align. The software configurable optical test system (SCOTS) is based on the geometry of fringe reflection and the phase measuring deflectometry (PMD) method, which can be used to measure large-diameter mirrors and aspheric and freeform surfaces rapidly, robustly, and accurately. In addition to the conventional phase-shifting method, we propose another data collection method called dot-matrix projection. We also use Zernike polynomials to correct the camera distortion. This polynomial-fitting distortion-mapping method is not only simple to operate but also offers high conversion precision. We simulate this test system measuring a concave surface using CODE V and MATLAB. The simulation results show that the dot-matrix projection method has high accuracy and that SCOTS holds promise for on-line detection in the optical shop.

  7. Investigation of magneto-hemodynamic flow in a semi-porous channel using orthonormal Bernstein polynomials

    NASA Astrophysics Data System (ADS)

    Hosseini, E.; Loghmani, G. B.; Heydari, M.; Rashidi, M. M.

    2017-07-01

    In this paper, the problem of the magneto-hemodynamic laminar viscous flow of a conducting physiological fluid in a semi-porous channel under a transverse magnetic field is investigated numerically. Using Berman's similarity transformation, the two-dimensional momentum conservation partial differential equations can be written as a system of nonlinear ordinary differential equations incorporating Lorentzian magneto-hydrodynamic body force terms. A new computational method based on the operational matrix of derivatives of orthonormal Bernstein polynomials for solving the resulting differential systems is introduced. Moreover, by using the residual correction process, two types of error estimates are provided and reported to show the strength of the proposed method. Graphical and tabular results are presented to investigate the influence of the Hartmann number (Ha) and the transpiration Reynolds number (Re) on velocity profiles in the channel. The results are compared with those obtained in previous works to confirm the accuracy and efficiency of the proposed scheme.

  8. Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network

    PubMed Central

    Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N.

    2015-01-01

    Wireless sensor networks monitor and control the physical world via a large number of small, low-priced sensor nodes. Existing methods for wireless sensor networks (WSNs) communicate sensed data through continuous data collection, resulting in high delay and energy consumption. To address the routing issue and reduce the energy drain rate, the Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes node energy distribution initially distributes the sensor nodes that detect an object of a similar event (i.e., temperature, pressure, flow) into specific regions with the application of the Bayes rule. The detection of objects with similar events is accomplished based on the Bayes probabilities and is sent to the sink node, resulting in minimized energy consumption. Next, a polynomial regression function is applied to combine the target objects of similar events observed by different sensors. These are based on the minimum and maximum values of the object events and are transferred to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. An energy-efficient routing path for each sensor node is created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimal communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users, reducing the communication overhead. PMID:26426701

  9. Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network.

    PubMed

    Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N

    2015-01-01

    Wireless sensor networks monitor and control the physical world via a large number of small, low-priced sensor nodes. Existing methods for wireless sensor networks (WSNs) communicate sensed data through continuous data collection, resulting in high delay and energy consumption. To address the routing issue and reduce the energy drain rate, the Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes node energy distribution initially distributes the sensor nodes that detect an object of a similar event (i.e., temperature, pressure, flow) into specific regions with the application of the Bayes rule. The detection of objects with similar events is accomplished based on the Bayes probabilities and is sent to the sink node, resulting in minimized energy consumption. Next, a polynomial regression function is applied to combine the target objects of similar events observed by different sensors. These are based on the minimum and maximum values of the object events and are transferred to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. An energy-efficient routing path for each sensor node is created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimal communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users, reducing the communication overhead.

  10. Local polynomial chaos expansion for linear differential equations with high dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yi; Jakeman, John; Gittelson, Claude

    2015-01-08

    In this paper we present a localized polynomial chaos expansion for partial differential equations (PDE) with random inputs. In particular, we focus on time-independent linear stochastic problems with high-dimensional random inputs, where the traditional polynomial chaos methods, and most of the existing methods, incur prohibitively high simulation cost. The local polynomial chaos method employs a domain decomposition technique to approximate the stochastic solution locally. In each subdomain, a subdomain problem is solved independently and, more importantly, in a much lower dimensional random space. In a postprocessing stage, accurate samples of the original stochastic problems are obtained from the samples of the local solutions by enforcing the correct stochastic structure of the random inputs and the coupling conditions at the interfaces of the subdomains. Overall, the method is able to solve stochastic PDEs in very large dimensions by solving a collection of low-dimensional local problems and can be highly efficient. In our paper we present the general mathematical framework of the methodology and use numerical examples to demonstrate the properties of the method.
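
    Stripped of the localization machinery that is the paper's contribution, a polynomial chaos expansion in one standard normal variable is just a projection onto probabilists' Hermite polynomials. The sketch below recovers the classical coefficients of exp(xi), namely sqrt(e)/k!, by Gauss-Hermite quadrature; the quadrature order is an assumption.

        import numpy as np
        from math import factorial, sqrt, pi
        from numpy.polynomial import hermite_e as H

        def pce_coefficients(u, order, nquad=40):
            """c_k = E[u(xi) He_k(xi)] / E[He_k(xi)^2] for xi ~ N(0, 1)."""
            x, w = H.hermegauss(nquad)        # nodes/weights for exp(-x^2/2)
            c = []
            for k in range(order + 1):
                ck = np.zeros(k + 1); ck[-1] = 1.0
                num = np.sum(w * u(x) * H.hermeval(x, ck))
                c.append(num / (sqrt(2.0 * pi) * factorial(k)))
            return np.array(c)

        c = pce_coefficients(np.exp, order=6)
        print(c)   # should match the classical identity sqrt(e)/k!
        print(np.sqrt(np.e) / np.array([factorial(k) for k in range(7)]))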

  11. On computation of Gröbner bases for linear difference systems

    NASA Astrophysics Data System (ADS)

    Gerdt, Vladimir P.

    2006-04-01

    In this paper, we present an algorithm for computing Gröbner bases of linear ideals in a difference polynomial ring over a ground difference field. The input difference polynomials generating the ideal are also assumed to be linear. The algorithm is an adaptation to difference ideals of our polynomial algorithm based on Janet-like reductions.

  12. Securing Color Fidelity in 3D Architectural Heritage Scenarios.

    PubMed

    Gaiani, Marco; Apollonio, Fabrizio Ivan; Ballabeni, Andrea; Remondino, Fabio

    2017-10-25

    Ensuring color fidelity in image-based 3D modeling of heritage scenarios is still an open research matter. Image colors are important during data processing as they affect algorithm outcomes; therefore their correct treatment, reduction and enhancement are fundamental. In this contribution, we present an automated solution developed to improve the radiometric quality of image datasets and the performance of two main steps of the photogrammetric pipeline (camera orientation and dense image matching). The suggested solution aims to achieve a robust automatic color balance and exposure equalization, stability of the RGB-to-gray image conversion and faithful color appearance of a digitized artifact. The innovative aspects of the article are: complete automation, better color target detection, a MATLAB implementation of the ACR scripts created by Fraser and the use of a specific weighted polynomial regression. A series of tests are presented to demonstrate the efficiency of the developed methodology and to evaluate color accuracy ('color characterization').

  13. Securing Color Fidelity in 3D Architectural Heritage Scenarios

    PubMed Central

    Apollonio, Fabrizio Ivan; Ballabeni, Andrea; Remondino, Fabio

    2017-01-01

    Ensuring color fidelity in image-based 3D modeling of heritage scenarios is still an open research matter. Image colors are important during data processing as they affect algorithm outcomes; therefore their correct treatment, reduction and enhancement are fundamental. In this contribution, we present an automated solution developed to improve the radiometric quality of image datasets and the performance of two main steps of the photogrammetric pipeline (camera orientation and dense image matching). The suggested solution aims to achieve a robust automatic color balance and exposure equalization, stability of the RGB-to-gray image conversion and faithful color appearance of a digitized artifact. The innovative aspects of the article are: complete automation, better color target detection, a MATLAB implementation of the ACR scripts created by Fraser and the use of a specific weighted polynomial regression. A series of tests are presented to demonstrate the efficiency of the developed methodology and to evaluate color accuracy (‘color characterization’). PMID:29068359

  14. Deep Belief Networks Learn Context Dependent Behavior

    PubMed Central

    Raudies, Florian; Zilli, Eric A.; Hasselmo, Michael E.

    2014-01-01

    With the goal of understanding behavioral mechanisms of generalization, we analyzed the ability of neural networks to generalize across context. We modeled a behavioral task where the correct responses to a set of specific sensory stimuli varied systematically across different contexts. The correct response depended on the stimulus (A,B,C,D) and context quadrant (1,2,3,4). The possible 16 stimulus-context combinations were associated with one of two responses (X,Y), one of which was correct for half of the combinations. The correct responses varied symmetrically across contexts. This allowed responses to previously unseen stimuli (probe stimuli) to be generalized from stimuli that had been presented previously. By testing the simulation on two or more stimuli that the network had never seen in a particular context, we could test whether the correct response on the novel stimuli could be generated based on knowledge of the correct responses in other contexts. We tested this generalization capability with a Deep Belief Network (DBN), Multi-Layer Perceptron (MLP) network, and the combination of a DBN with a linear perceptron (LP). Overall, the combination of the DBN and LP had the highest success rate for generalization. PMID:24671178

  15. A new version of Stochastic-parallel-gradient-descent algorithm (SPGD) for phase correction of a distorted orbital angular momentum (OAM) beam

    NASA Astrophysics Data System (ADS)

    Jiao Ling, LIn; Xiaoli, Yin; Huan, Chang; Xiaozhou, Cui; Yi-Lin, Guo; Huan-Yu, Liao; Chun-YU, Gao; Guohua, Wu; Guang-Yao, Liu; Jin-KUn, Jiang; Qing-Hua, Tian

    2018-02-01

    Atmospheric turbulence limits the performance of orbital angular momentum-based free-space optical communication (FSO-OAM) systems. In order to compensate for the phase distortion induced by atmospheric turbulence, wavefront sensorless adaptive optics (WSAO) has been proposed and studied in recent years. In this paper a new version of SPGD called MZ-SPGD, which combines the Z-SPGD based on the deformable mirror influence function and the M-SPGD based on Zernike polynomials, is proposed. Numerical simulations show that the hybrid method decreases convergence times markedly while achieving the same compensation effect as Z-SPGD and M-SPGD.
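
    The basic SPGD update that both variants share is compact enough to sketch: dither all control channels at once, difference the metric, and step along the correlated perturbation. The gain, dither amplitude, and the toy sharpness metric below are assumptions; the hybrid MZ basis switching is not modeled here.

        import numpy as np

        def spgd(u0, metric, gain=0.3, sigma=0.1, iters=500, seed=0):
            """Basic SPGD loop maximizing metric(u) via two-sided dithers."""
            rng = np.random.default_rng(seed)
            u = u0.copy()
            for _ in range(iters):
                du = sigma * rng.choice([-1.0, 1.0], size=u.shape)  # dither
                dJ = metric(u + du) - metric(u - du)
                u += gain * dJ * du      # correlate metric change with dither
            return u

        # Toy sharpness metric peaked at a hypothetical optimal phase vector.
        target = np.linspace(-1, 1, 32)
        J = lambda u: -np.sum((u - target)**2)
        u = spgd(np.zeros(32), J)
        print(np.abs(u - target).max())   # small residual after convergence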

  16. Probe measurements of the electron velocity distribution function in beams: Low-voltage beam discharge in helium

    NASA Astrophysics Data System (ADS)

    Sukhomlinov, V.; Mustafaev, A.; Timofeev, N.

    2018-04-01

    Previously developed methods based on the single-sided probe technique are altered and applied to measure the anisotropic angular spread and narrow energy distribution functions of charged particle (electron and ion) beams. The conventional method is not suitable for some configurations, such as low-voltage beam discharges, electron beams accelerated in near-wall and near-electrode layers, and vacuum electron beam sources. To determine the range of applicability of the proposed method, simple algebraic relationships between the charged particle energies and their angular distribution are obtained. The method is verified for the case of the collisionless mode of a low-voltage He beam discharge, where the traditional method for finding the electron distribution function with the help of a Legendre polynomial expansion is not applicable. This leads to the development of a physical model of the formation of the electron distribution function in a collisionless low-voltage He beam discharge. The results of a numerical calculation based on Monte Carlo simulations are in good agreement with the experimental data obtained using the new method.

  17. Recursive approach to the moment-based phase unwrapping method.

    PubMed

    Langley, Jason A; Brice, Robert G; Zhao, Qun

    2010-06-01

    The moment-based phase unwrapping algorithm approximates the phase map as a product of Gegenbauer polynomials, but the weight function for the Gegenbauer polynomials generates artificial singularities along the edge of the phase map. A method is presented to remove the singularities inherent to the moment-based phase unwrapping algorithm by approximating the phase map as a product of two one-dimensional Legendre polynomials and applying a recursive property of derivatives of Legendre polynomials. The proposed phase unwrapping algorithm is tested on simulated and experimental data sets. The results are then compared to those of PRELUDE 2D, a widely used phase unwrapping algorithm, and a Chebyshev-polynomial-based phase unwrapping algorithm. It was found that the proposed phase unwrapping algorithm provides results that are comparable to those obtained by using PRELUDE 2D and the Chebyshev phase unwrapping algorithm.

  18. Adaptive optics with a magnetic deformable mirror: applications in the human eye

    NASA Astrophysics Data System (ADS)

    Fernandez, Enrique J.; Vabre, Laurent; Hermann, Boris; Unterhuber, Angelika; Povazay, Boris; Drexler, Wolfgang

    2006-10-01

    A novel deformable mirror using 52 independent magnetic actuators (MIRAO 52, Imagine Eyes) is presented and characterized for ophthalmic applications. The capabilities of the device to reproduce different surfaces, in particular Zernike polynomials up to the fifth order, are investigated in detail. The study of the influence functions of the deformable mirror reveals a significant linear response with the applied voltage. The correcting device also presents a high fidelity in the generation of surfaces. The ranges of production of Zernike polynomials fully cover those typically found in the human eye, even for the cases of highly aberrated eyes. Data from keratoconic eyes are confronted with the obtained ranges, showing that the deformable mirror is able to compensate for these strong aberrations. Ocular aberration correction with polychromatic light, using a near Gaussian spectrum of 130 nm full width at half maximum centered at 800 nm, in five subjects is accomplished by simultaneously using the deformable mirror and an achromatizing lens, in order to compensate for the monochromatic and chromatic aberrations, respectively. Results from living eyes, including one exhibiting 4.66 D of myopia and a near pathologic cornea with notable high order aberrations, show a practically perfect aberration correction. Benefits and applications of simultaneous monochromatic and chromatic aberration correction are finally discussed in the context of retinal imaging and vision.

  19. Finite element and analytical solutions for van der Pauw and four-point probe correction factors when multiple non-ideal measurement conditions coexist

    NASA Astrophysics Data System (ADS)

    Reveil, Mardochee; Sorg, Victoria C.; Cheng, Emily R.; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O.

    2017-09-01

    This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.
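
    As a baseline for the correction-factor library described above, the textbook finite-thickness factor for a collinear four-point probe on a laterally infinite slab is easy to state; the lateral-size and probe-placement factors that the paper tabulates would multiply this result. The formula follows the standard treatment of the thickness correction (a sketch, not the paper's code).

        import numpy as np

        def thickness_correction(t_over_s):
            """Standard finite-thickness factor for a collinear probe on a slab."""
            x = t_over_s
            return x / (2.0 * np.log(np.sinh(x) / np.sinh(x / 2.0)))

        def resistivity(V, I, s, t):
            """Bulk resistivity (ohm*m) from V/I, probe spacing s, thickness t.
            Assumes a laterally infinite slab with centered, symmetric probes."""
            return 2.0 * np.pi * s * (V / I) * thickness_correction(t / s)

        # Thin-film limit check: for t << s this approaches (pi*t/ln 2)*(V/I).
        V, I, s, t = 1e-3, 1e-3, 1e-3, 50e-9
        print(resistivity(V, I, s, t), np.pi * t / np.log(2) * (V / I))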

  20. Finite element and analytical solutions for van der Pauw and four-point probe correction factors when multiple non-ideal measurement conditions coexist.

    PubMed

    Reveil, Mardochee; Sorg, Victoria C; Cheng, Emily R; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O

    2017-09-01

    This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.

  1. The hairpin resonator: A plasma density measuring technique revisited

    NASA Astrophysics Data System (ADS)

    Piejak, R. B.; Godyak, V. A.; Garner, R.; Alexandrovich, B. M.; Sternberg, N.

    2004-04-01

    A microwave resonator probe is a resonant structure from which the relative permittivity of the surrounding medium can be determined. Two types of microwave resonator probes (referred to here as hairpin probes) have been designed and built to determine the electron density in a low-pressure gas discharge. One type, a transmission probe, is a functional equivalent of the original microwave resonator probe introduced by R. L. Stenzel [Rev. Sci. Instrum. 47, 603 (1976)], modified to increase coupling to the hairpin structure and to minimize plasma perturbation. The second type, a reflection probe, differs from the transmission probe in that it requires only one coaxial feeder cable. A sheath correction, based on the fluid equations for collisionless ions in a cylindrical electron-free sheath, is presented here to account for the sheath that naturally forms about the hairpin structure immersed in plasma. The sheath correction extends the range of electron density that can be accurately measured with a particular wire separation of the hairpin structure. Experimental measurements using the hairpin probe appear to be highly reproducible. Comparisons with Langmuir probes show that the Langmuir probe determines an electron density that is 20-30% lower than the hairpin. Further comparisons, with both an interferometer and a Langmuir probe, show hairpin measurements to be in good agreement with the interferometer while Langmuir probe measurements again result in a lower electron density.
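
    Ignoring the sheath correction that is the paper's refinement, the basic hairpin relation converts the vacuum and in-plasma resonance frequencies to an electron density via f_r^2 = f_0^2 + f_p^2, with f_p the plasma frequency. The example frequencies below are illustrative.

        import numpy as np
        from scipy.constants import e, m_e, epsilon_0

        def hairpin_density(f0_ghz, fr_ghz):
            """Electron density (m^-3) from vacuum (f0) and in-plasma (fr)
            hairpin resonances; no sheath correction applied."""
            fp2 = (fr_ghz**2 - f0_ghz**2) * 1e18        # plasma freq^2, Hz^2
            return fp2 * (2*np.pi)**2 * epsilon_0 * m_e / e**2

        # Example: a 2.5 GHz hairpin shifting to 3.1 GHz in plasma.
        print(hairpin_density(2.5, 3.1))   # ~4e16 m^-3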

  2. Quasi-kernel polynomials and convergence results for quasi-minimal residual iterations

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1992-01-01

    Recently, Freund and Nachtigal have proposed a novel polynomial-based iteration, the quasi-minimal residual algorithm (QMR), for solving general nonsingular non-Hermitian linear systems. Motivated by the QMR method, we have introduced the general concept of quasi-kernel polynomials, and we have shown that the QMR algorithm is based on a particular instance of quasi-kernel polynomials. In this paper, we continue our study of quasi-kernel polynomials. In particular, we derive bounds for the norms of quasi-kernel polynomials. These results are then applied to obtain convergence theorems both for the QMR method and for a transpose-free variant of QMR, the TFQMR algorithm.

  3. Zernike Basis to Cartesian Transformations

    NASA Astrophysics Data System (ADS)

    Mathar, R. J.

    2009-12-01

    The radial polynomials of the 2D (circular) and 3D (spherical) Zernike functions are tabulated as powers of the radial distance. The reciprocal tabulation of powers of the radial distance in series of radial polynomials is also given, based on projections that take advantage of the orthogonality of the polynomials over the unit interval. They play a role in the expansion of products of the polynomials into sums, which is demonstrated by some examples. Multiplication of the polynomials by the angular bases (azimuth, polar angle) defines the Zernike functions, for which we derive transformations to and from the Cartesian coordinate system centered at the middle of the circle or sphere.
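
    The tabulation of radial polynomials as powers of the radial distance follows the classical closed form for the circular (2D) case, sketched below with R_4^0 as a check; the 3D spherical case and the basis transformations themselves are beyond this sketch.

        from math import factorial

        def zernike_radial_coeffs(n, m):
            """Power-series coefficients of the circular Zernike radial
            polynomial R_n^m(r), returned as {power: coefficient}."""
            m = abs(m)
            assert (n - m) % 2 == 0, "R_n^m vanishes unless n - m is even"
            coeffs = {}
            for k in range((n - m) // 2 + 1):
                c = (-1)**k * factorial(n - k) // (
                    factorial(k) * factorial((n + m)//2 - k)
                    * factorial((n - m)//2 - k))
                coeffs[n - 2*k] = c
            return coeffs

        print(zernike_radial_coeffs(4, 0))   # {4: 6, 2: -6, 0: 1}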

  4. Multicolor microRNA FISH effectively differentiates tumor types

    PubMed Central

    Renwick, Neil; Cekan, Pavol; Masry, Paul A.; McGeary, Sean E.; Miller, Jason B.; Hafner, Markus; Li, Zhen; Mihailovic, Aleksandra; Morozov, Pavel; Brown, Miguel; Gogakos, Tasos; Mobin, Mehrpouya B.; Snorrason, Einar L.; Feilotter, Harriet E.; Zhang, Xiao; Perlis, Clifford S.; Wu, Hong; Suárez-Fariñas, Mayte; Feng, Huichen; Shuda, Masahiro; Moore, Patrick S.; Tron, Victor A.; Chang, Yuan; Tuschl, Thomas

    2013-01-01

    MicroRNAs (miRNAs) are excellent tumor biomarkers because of their cell-type specificity and abundance. However, many miRNA detection methods, such as real-time PCR, obliterate valuable visuospatial information in tissue samples. To enable miRNA visualization in formalin-fixed paraffin-embedded (FFPE) tissues, we developed multicolor miRNA FISH. As a proof of concept, we used this method to differentiate two skin tumors, basal cell carcinoma (BCC) and Merkel cell carcinoma (MCC), with overlapping histologic features but distinct cellular origins. Using sequencing-based miRNA profiling and discriminant analysis, we identified the tumor-specific miRNAs miR-205 and miR-375 in BCC and MCC, respectively. We addressed three major shortcomings in miRNA FISH, identifying optimal conditions for miRNA fixation and ribosomal RNA (rRNA) retention using model compounds and high-pressure liquid chromatography (HPLC) analyses, enhancing signal amplification and detection by increasing probe-hapten linker lengths, and improving probe specificity using shortened probes with minimal rRNA sequence complementarity. We validated our method on 4 BCC and 12 MCC tumors. Amplified miR-205 and miR-375 signals were normalized against directly detectable reference rRNA signals. Tumors were classified using predefined cutoff values, and all were correctly identified in blinded analysis. Our study establishes a reliable miRNA FISH technique for parallel visualization of differentially expressed miRNAs in FFPE tumor tissues. PMID:23728175

  5. Interbasis expansions in the Zernike system

    NASA Astrophysics Data System (ADS)

    Atakishiyev, Natig M.; Pogosyan, George S.; Wolf, Kurt Bernardo; Yakhno, Alexander

    2017-10-01

    The differential equation with free boundary conditions on the unit disk that was proposed by Frits Zernike in 1934 to find Jacobi polynomial solutions (indicated as I) serves to define a classical system and a quantum system which have been found to be superintegrable. We have determined two new orthogonal polynomial solutions (indicated as II and III) that are separable and involve Legendre and Gegenbauer polynomials. Here we report on their three interbasis expansion coefficients: between the I-II and I-III bases, they are given by ₃F₂(⋯|1) polynomials that are also special su(2) Clebsch-Gordan coefficients and Hahn polynomials. Between the II-III bases, we find an expansion expressed by ₄F₃(⋯|1)'s and Racah polynomials that are related to the Wigner 6j coefficients.

  6. Method for obtaining electron energy-density functions from Langmuir-probe data using a card-programmable calculator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Longhurst, G.R.

    This paper presents a method for obtaining electron energy density functions from Langmuir probe data taken in cool, dense plasmas where thin-sheath criteria apply and where magnetic effects are not severe. Noise is filtered out by using regression of orthogonal polynomials. The method requires only a programmable calculator (TI-59 or equivalent) to implement and can be used for the most general, nonequilibrium electron energy distribution plasmas. Data from a mercury ion source analyzed using this method are presented and compared with results for the same data using standard numerical techniques.

  7. SHAPE Selection (SHAPES) enrich for RNA structure signal in SHAPE sequencing-based probing data

    PubMed Central

    Poulsen, Line Dahl; Kielpinski, Lukasz Jan; Salama, Sofie R.; Krogh, Anders; Vinther, Jeppe

    2015-01-01

    Selective 2′ Hydroxyl Acylation analyzed by Primer Extension (SHAPE) is an accurate method for probing of RNA secondary structure. In existing SHAPE methods, the SHAPE probing signal is normalized to a no-reagent control to correct for the background caused by premature termination of the reverse transcriptase. Here, we introduce a SHAPE Selection (SHAPES) reagent, N-propanone isatoic anhydride (NPIA), which retains the ability of SHAPE reagents to accurately probe RNA structure, but also allows covalent coupling between the SHAPES reagent and a biotin molecule. We demonstrate that SHAPES-based selection of cDNA–RNA hybrids on streptavidin beads effectively removes the large majority of background signal present in SHAPE probing data and that sequencing-based SHAPES data contain the same amount of RNA structure data as regular sequencing-based SHAPE data obtained through normalization to a no-reagent control. Moreover, the selection efficiently enriches for probed RNAs, suggesting that the SHAPES strategy will be useful for applications with high-background and low-probing signal such as in vivo RNA structure probing. PMID:25805860

  8. Thermodynamic correction of particle concentrations measured by underwing probes on fast-flying aircraft

    NASA Astrophysics Data System (ADS)

    Weigel, Ralf; Spichtinger, Peter; Mahnke, Christoph; Klingebiel, Marcus; Afchine, Armin; Petzold, Andreas; Krämer, Martina; Costa, Anja; Molleker, Sergej; Reutter, Philipp; Szakáll, Miklós; Port, Max; Grulich, Lucas; Jurkat, Tina; Minikin, Andreas; Borrmann, Stephan

    2016-10-01

    Particle concentration measurements with underwing probes on aircraft are impacted by air compression upstream of the instrument body as a function of flight velocity. In particular, for fast-flying aircraft the necessity arises to account for compression of the air sample volume. Hence, a correction procedure is needed to invert measured particle number concentrations to ambient conditions that is commonly applicable to different instruments to gain comparable results. In the compression region where the detection of particles occurs (i.e. under factual measurement conditions), pressure and temperature of the air sample are increased compared to ambient (undisturbed) conditions at a certain distance away from the aircraft. Conventional procedures for scaling the measured number densities to ambient conditions presume that the air volume probed per time interval is determined by the aircraft speed (true air speed, TAS). However, particle imaging instruments equipped with pitot tubes measuring the probe air speed (PAS) of each underwing probe reveal PAS values systematically below those of the TAS. We conclude that the deviation between PAS and TAS is mainly caused by the compression of the probed air sample. From measurements during two missions in 2014 with the German Gulfstream G-550 (HALO - High Altitude LOng range) research aircraft we develop a procedure to correct the measured particle concentration to ambient conditions using a thermodynamic approach. With the provided equation, the corresponding concentration correction factor ξ is applicable to the high-frequency measurements of the underwing probes, each of which is equipped with its own air speed sensor (e.g. a pitot tube). ξ values of 1 to 0.85 are calculated for air speeds (i.e. TAS) between 60 and 250 m s⁻¹. For different instruments at individual wing positions the calculated ξ values exhibit strong consistency, which allows for a parameterisation of ξ as a function of TAS for the current HALO underwing probe configuration. The ability of cloud particles to adopt changes of air speed between ambient and measurement conditions depends on the cloud particles' inertia as a function of particle size (diameter Dp). The suggested inertia correction factor μ (Dp) for liquid cloud drops ranges between 1 (for Dp < 70 µm) and 0.8 (for 100 µm < Dp < 225 µm) but it needs to be applied carefully with respect to the particles' phase and nature. The correction of measured concentration by both factors, ξ and μ (Dp), yields higher ambient particle concentration by about 10-25 % compared to conventional procedures - an improvement which can be considered as significant for many research applications. The calculated ξ values are specifically related to the considered HALO underwing probe arrangement and may differ for other aircraft. Moreover, suggested corrections may not cover all impacts originating from high flight velocities and from interferences between the instruments and e.g. the aircraft wings and/or fuselage. Consequently, it is important that PAS (as a function of TAS) is individually measured by each probe deployed underneath the wings of a fast-flying aircraft.
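
    One plausible reading of the thermodynamic approach, assuming adiabatic compression of dry air between ambient conditions and the probe's detection region, with both PAS and TAS measured, is sketched below. The exact formulation in the paper may differ, and the speeds and ambient temperature are illustrative only.

        def xi_correction(tas, pas, T_amb, gamma=1.4, cp=1005.0):
            """xi = n_ambient / n_measured for adiabatic compression of dry
            air ahead of an underwing probe, from TAS and PAS in m/s."""
            T_probe = T_amb + (tas**2 - pas**2) / (2.0 * cp)   # energy balance
            return (T_amb / T_probe) ** (1.0 / (gamma - 1.0))  # density ratio

        # Illustrative: ambient 220 K at cruise, PAS measured below TAS.
        for tas, pas in [(60.0, 55.0), (250.0, 205.0)]:
            print(tas, round(xi_correction(tas, pas, 220.0), 3))
        # yields ~0.997 and ~0.89, consistent with the 1 to 0.85 range above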

  9. Automatic Generation of High Quality DSM Based on IRS-P5 Cartosat-1 Stereo Data

    NASA Astrophysics Data System (ADS)

    d'Angelo, Pablo; Uttenthaler, Andreas; Carl, Sebastian; Barner, Frithjof; Reinartz, Peter

    2010-12-01

    IRS-P5 Cartosat-1 high resolution stereo satellite imagery is well suited for the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on IRS-P5 Cartosat-1 imagery is presented, with an emphasis on automated processing and product quality. The proposed system processes IRS-P5 level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The described method uses an RPC correction based on DSM alignment instead of using reference images with a lower lateral accuracy, which results in improved geolocation of the DSMs and orthoimages. Following RPC correction, highly detailed DSMs with 5 m grid spacing are derived using Semiglobal Matching. The proposed method is part of an operational Cartosat-1 processor for the generation of a high resolution DSM. Evaluation of 18 scenes against independent ground truth measurements indicates a mean lateral error (CE90) of 6.7 meters and a mean vertical accuracy (LE90) of 5.1 meters.
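
    As background for the sensor model named above, the RPC model expresses normalized image coordinates as ratios of two 20-term cubic polynomials in normalized ground coordinates; a scene-wise affine correction then refines the georeferencing. The sketch below is a generic illustration of that structure (coefficient ordering follows the common RPC00B convention; the helper names and placeholder affine form are ours):

      import numpy as np

      # Generic RPC evaluation: a normalized image line (or sample) is the
      # ratio of two cubic polynomials in normalized lat (P), lon (L),
      # height (H).
      def cubic_terms(P, L, H):
          return np.array([1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
                           P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3,
                           P*H*H, L*L*H, P*P*H, H**3])

      def rpc_project(num_coef, den_coef, P, L, H):
          t = cubic_terms(P, L, H)
          return np.dot(num_coef, t) / np.dot(den_coef, t)

      # A 6-parameter affine RPC correction (a0..a5 estimated per scene,
      # e.g. from DSM alignment) then refines the projected coordinates:
      def affine_correct(line, sample, a):
          return (a[0] + a[1] * line + a[2] * sample,
                  a[3] + a[4] * line + a[5] * sample)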

  10. Stable, non-dissipative, and conservative flux-reconstruction schemes in split forms

    NASA Astrophysics Data System (ADS)

    Abe, Yoshiaki; Morinaka, Issei; Haga, Takanori; Nonomura, Taku; Shibata, Hisaichi; Miyaji, Koji

    2018-01-01

    A stable, non-dissipative, and conservative flux-reconstruction (FR) scheme is constructed and demonstrated for the compressible Euler and Navier-Stokes equations. The proposed FR framework adopts a split form (also known as the skew-symmetric form) for convective terms. Sufficient conditions to satisfy both the primary conservation (PC) and kinetic energy preservation (KEP) properties are rigorously derived by polynomial-based analysis for a general FR framework. It is found that the split form needs to be expressed in the PC split form or KEP split form to satisfy each property in a discrete sense. The PC split form is retrieved from existing general forms (Kennedy and Gruber [33]); in contrast, we have newly introduced the KEP split form as a comprehensive form constituting a KEP scheme in the FR framework. Furthermore, Gauss-Lobatto (GL) solution points and the g2 correction function are required to satisfy the KEP property, while any correction function is admissible for the PC property. The split-form FR framework satisfying the KEP property is, eventually, similar to the split-form DGSEM-GL method proposed by Gassner [23], but is here derived solely by polynomial-based analysis without explicitly using the diagonal-norm SBP property. Based on a series of numerical tests (e.g., the Sod shock tube), both the PC and KEP properties have been verified. We have also demonstrated that, using a non-dissipative KEP flux, a sixteenth-order (p15) simulation of the viscous Taylor-Green vortex (Re = 1,600) is stable and its results are free of unphysical oscillations on a relatively coarse mesh (the total number of degrees of freedom (DoFs) is 128³).
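
    For orientation, one familiar member of the family of split forms referred to above (a Kennedy-Gruber-type split of a convective term, quoted here in a common form; the paper's PC and KEP split forms are more general) rewrites the flux derivative of a triple product ρuφ as

      $$ \frac{\partial(\rho u \phi)}{\partial x} = \frac{1}{4}\left[ \frac{\partial(\rho u \phi)}{\partial x} + u\,\frac{\partial(\rho \phi)}{\partial x} + \phi\,\frac{\partial(\rho u)}{\partial x} + \rho\,\frac{\partial(u \phi)}{\partial x} + \rho u\,\frac{\partial \phi}{\partial x} + \rho \phi\,\frac{\partial u}{\partial x} + u \phi\,\frac{\partial \rho}{\partial x} \right], $$

    which is analytically identical to the divergence form but yields different, and better-behaved, discrete operators.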

  11. Design of thermocouple probes for measurement of rocket exhaust plume temperatures

    NASA Astrophysics Data System (ADS)

    Warren, R. C.

    1994-06-01

    This paper summarizes a literature survey on high temperature measurement and describes the design of probes used in plume measurements. There were no cases reported of measurements in extreme environments such as exist in solid rocket exhausts, but there were a number of thermocouple designs which had been used under less extreme conditions and which could be further developed. Tungsten-rhenium (W-Re) thermocouples had the combined properties of strength at high temperatures, high thermoelectric emf, and resistance to chemical attack. A shielded probe was required, both to protect the thermocouple junction and to minimise radiative heat losses. After some experimentation, a twin-shielded design made from molybdenum gave acceptable results. Corrections for thermal conduction losses were made based on a method obtained from the literature. Radiation losses were minimized with this probe design, and corrections for these losses were too complex and unreliable to be included.
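
    The abstract does not reproduce the conduction-loss correction itself; the standard fin-model result of the kind typically drawn from the literature (quoted here for orientation, in our notation, not taken from the paper) recovers the gas temperature T_g from the junction reading T_j and the mount temperature T_b as

      $$ T_g = T_j + \frac{T_j - T_b}{\cosh(mL) - 1}, \qquad m = \sqrt{\frac{hP}{kA}}, $$

    where L is the exposed wire length, h the convective coefficient, P and A the wire perimeter and cross-section, and k its thermal conductivity.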

  12. Unlabeled oligonucleotides as internal temperature controls for genotyping by amplicon melting.

    PubMed

    Seipp, Michael T; Durtschi, Jacob D; Liew, Michael A; Williams, Jamie; Damjanovich, Kristy; Pont-Kingdon, Genevieve; Lyon, Elaine; Voelkerding, Karl V; Wittwer, Carl T

    2007-07-01

    Amplicon melting is a closed-tube method for genotyping that does not require probes, real-time analysis, or allele-specific polymerase chain reaction. However, correct differentiation of homozygous mutant and wild-type samples by melting temperature (Tm) requires high-resolution melting and closely controlled reaction conditions. When three different DNA extraction methods were used to isolate DNA from whole blood, amplicon Tm differences of 0.03 to 0.39 degrees C attributable to the extractions were observed. To correct for solution chemistry differences between samples, complementary unlabeled oligonucleotides were included as internal temperature controls to shift and scale the temperature axis of derivative melting plots. This adjustment was applied to a duplex amplicon melting assay for the methylenetetrahydrofolate reductase variants 1298A>C and 677C>T. High- and low-temperature controls bracketing the amplicon melting region decreased the Tm SD within homozygous genotypes by 47 to 82%. The amplicon melting assay was 100% concordant to an adjacent hybridization probe (HybProbe) melting assay when temperature controls were included, whereas a 3% error rate was observed without temperature correction. In conclusion, internal temperature controls increase the accuracy of genotyping by high-resolution amplicon melting and should also improve results on lower resolution instruments.
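
    The shift-and-scale adjustment described above amounts to a linear remapping of the temperature axis fixed by the two internal controls. A minimal sketch of that idea (our construction, not the authors' software) is:

      import numpy as np

      # Minimal sketch: use the observed melting temperatures of two internal
      # control duplexes to shift and scale the temperature axis of a melting
      # curve so that the control peaks land on their reference values.

      def temperature_correction(t_obs, ctrl_obs, ctrl_ref):
          """t_obs: temperature axis; ctrl_obs/ctrl_ref: (low, high) control Tms."""
          (lo_o, hi_o), (lo_r, hi_r) = ctrl_obs, ctrl_ref
          scale = (hi_r - lo_r) / (hi_o - lo_o)
          return lo_r + (np.asarray(t_obs) - lo_o) * scale

      # Example: observed controls at 58.9 and 84.6 C, reference 59.0 and 84.5 C
      t = np.linspace(55, 90, 701)
      t_corr = temperature_correction(t, ctrl_obs=(58.9, 84.6),
                                      ctrl_ref=(59.0, 84.5))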

  13. Automatic transperineal ultrasound probe positioning based on CT scan for image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Camps, S. M.; Verhaegen, F.; Paiva Fonesca, G.; de With, P. H. N.; Fontanarosa, D.

    2017-03-01

    Image interpretation is crucial during ultrasound image acquisition. A skilled operator is typically needed to verify that the correct anatomical structures are all visualized and with sufficient quality. The need for this operator is one of the major reasons why ultrasound is presently not widely used in radiotherapy workflows. To solve this issue, we introduce an algorithm that uses anatomical information derived from a CT scan to automatically provide the operator with a patient-specific ultrasound probe setup. The first application we investigated, for its relevance to radiotherapy, is 4D transperineal ultrasound image acquisition for prostate cancer patients. As an initial test, the algorithm was applied to a CIRS multi-modality pelvic phantom. Probe setups were calculated in order to allow visualization of the prostate and the adjacent edges of bladder and rectum, as clinically required. Five of the proposed setups were reproduced using a precision robotic arm and ultrasound volumes were acquired. A gel-filled probe cover was used to ensure proper acoustic coupling, while taking into account possible tilted positions of the probe with respect to the flat phantom surface. Visual inspection of the acquired volumes revealed that the clinical requirements were fulfilled. A preliminary quantitative evaluation was also performed: the mean absolute distance (MAD) was calculated between actual anatomical structure positions and the positions predicted by the CT-based algorithm. This resulted in a MAD of (2.8±0.4) mm for the prostate, (2.5±0.6) mm for the bladder and (2.8±0.6) mm for the rectum. These results show that no significant systematic errors due to e.g. probe misplacement were introduced.

  14. Development of Thinopyrum ponticum-specific molecular markers and FISH probes based on SLAF-seq technology.

    PubMed

    Liu, Liqin; Luo, Qiaoling; Teng, Wan; Li, Bin; Li, Hongwei; Li, Yiwen; Li, Zhensheng; Zheng, Qi

    2018-05-01

    Based on SLAF-seq, 67 Thinopyrum ponticum-specific markers and eight Th. ponticum-specific FISH probes were developed; these markers and probes can be used for detection of alien chromatin in a wheat background. Decaploid Thinopyrum ponticum (2n = 10x = 70) is a valuable gene reservoir for wheat improvement. Identification of Th. ponticum introgression would facilitate its transfer into diverse wheat genetic backgrounds and its practical utilization in wheat improvement. Based on specific-locus-amplified fragment sequencing (SLAF-seq) technology, 67 new Th. ponticum-specific molecular markers and eight Th. ponticum-specific fluorescence in situ hybridization (FISH) probes have been developed from a tiny wheat-Th. ponticum translocation line. These newly developed molecular markers allowed the specific and stable detection of Th. ponticum DNA in a variety of materials at high throughput. According to the hybridization signal pattern, the eight Th. ponticum-specific probes could be divided into two groups. The first group, including five dispersed repetitive sequence probes, could identify Th. ponticum chromatin more sensitively and accurately than genomic in situ hybridization (GISH), whereas the second group, comprising three tandem repetitive sequence probes, enabled the discrimination of Th. ponticum chromosomes together with another clone, pAs1, in the wheat-Th. ponticum partial amphiploid Xiaoyan 68.

  15. New Laboratory Technique to Determine Thermal Conductivity of Complex Regolith Simulants Under High Vacuum

    NASA Astrophysics Data System (ADS)

    Ryan, A. J.; Christensen, P. R.

    2016-12-01

    Laboratory measurements have been necessary to interpret thermal data of planetary surfaces for decades. We present a novel radiometric laboratory method to determine temperature-dependent thermal conductivity of complex regolith simulants under high vacuum and across a wide range of temperatures. Here, we present our laboratory method, strategy, and initial results. This method relies on radiometric temperature measurements instead of contact measurements, eliminating the need to disturb the sample with thermal probes. We intend to determine the conductivity of grains that are up to 2 cm in diameter and to parameterize the effects of angularity, sorting, layering, composition, and cementation. These results will support the efforts of the OSIRIS-REx team in selecting a site on asteroid Bennu that is safe and meets grain size requirements for sampling. Our system consists of a cryostat vacuum chamber with an internal liquid nitrogen dewar. A granular sample is contained in a cylindrical cup that is 4 cm in diameter and 1 to 6 cm deep. The surface of the sample is exposed to vacuum and is surrounded by a black liquid nitrogen cold shroud. Once the system has equilibrated at 80 K, the base of the sample cup is rapidly heated to 450 K. An infrared camera observes the sample from above to monitor its temperature change over time. We have built a time-dependent finite element model of the experiment in COMSOL Multiphysics. Boundary temperature conditions and all known material properties (including surface emissivities) are included to replicate the experiment as closely as possible. The Optimization module in COMSOL is specifically designed for parameter estimation. Sample thermal conductivity is assumed to be a quadratic or cubic polynomial function of temperature. We thus use gradient-based optimization methods in COMSOL to vary the polynomial coefficients in an effort to reduce the least squares error between the measured and modeled sample surface temperature.
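
    The parameter estimation step described above can be pictured with a drastically simplified stand-in for the finite element run: fit the coefficients of k(T) = a + bT + cT² by least squares against a measured surface temperature history. Everything below, including the toy forward model, is a hypothetical illustration of the workflow rather than the authors' COMSOL setup:

      import numpy as np
      from scipy.optimize import least_squares

      def k_of_T(T, coeffs):
          a, b, c = coeffs
          return a + b * T + c * T**2

      def forward_model(coeffs, t):
          # Hypothetical placeholder: a real implementation would solve the
          # transient heat equation through the sample using k_of_T.
          tau = 1.0 / max(k_of_T(300.0, coeffs), 1e-9)
          return 80.0 + 370.0 * (1.0 - np.exp(-t / (100.0 * tau)))

      t = np.linspace(0, 600, 120)
      T_measured = (forward_model((0.02, 1e-5, 2e-8), t)
                    + np.random.normal(0, 0.5, t.size))  # synthetic "data"

      # Vary the polynomial coefficients to minimize the least squares error
      # between measured and modeled surface temperature, as in the paper.
      fit = least_squares(lambda c: forward_model(c, t) - T_measured,
                          x0=(0.01, 1e-5, 1e-8))
      print(fit.x)  # recovered polynomial coefficients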

  16. The use of WaveLight® Contoura to create a uniform cornea: the LYRA Protocol. Part 3: the results of 50 treated eyes

    PubMed Central

    Motwani, Manoj

    2017-01-01

    Purpose To demonstrate how using the WaveLight® Contoura measured astigmatism and axis eliminates corneal astigmatism and creates uniformly shaped corneas. Patients and methods A retrospective analysis was conducted of the first 50 eyes to have bilateral full WaveLight® Contoura LASIK correction of measured astigmatism and axis (vs conventional manifest refraction), using the Layer Yolked Reduction of Astigmatism (LYRA) Protocol in all cases. All patients had astigmatism corrected and had at least 1 week of follow-up. Accuracy relative to the desired refractive goal was assessed by postoperative refraction and by aberration reduction via calculation of polynomials; postoperative visual acuities were analyzed as a secondary goal. Results The average difference in astigmatic power from manifest to measured was 0.5462 D (range 0-1.69 D), and the average difference in axis was 14.94° (range 0°-89°). Forty-seven of 50 eyes had a goal of plano; 3 had a monovision goal. Astigmatism was fully eliminated from all but 2 eyes, and 1 eye had regression with astigmatism. Of the eyes with plano as the goal, 80.85% were 20/15 or better, and 100% were 20/20 or better. Polynomial analysis postoperatively showed that at 6.5 mm, the average C3 was reduced by 86.5% and the average C5 by 85.14%. Conclusions Using WaveLight® Contoura measured astigmatism and axis removes higher order aberrations and allows for the creation of a more uniform cornea with accurate removal of astigmatism and reduction of aberration polynomials. WaveLight® Contoura successfully links the refractive correction layer and aberration repair layer using the LYRA Protocol to demonstrate how aberration removal can affect refractive correction. PMID:28553071

  17. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    PubMed

    Liu, Chengjun

    2004-05-01

    This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
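
    A minimal sketch of kernel PCA with a fractional power polynomial model follows. The kernel definition k(x, y) = sign(x·y)|x·y|^p is our reading of the idea and may differ from the paper's exact formulation; as the abstract notes, such a Gram matrix need not be positive semidefinite, so only eigenvectors with positive eigenvalues are retained:

      import numpy as np

      def fractional_poly_gram(X, p=0.8):
          # Fractional power polynomial "kernel" matrix (possibly indefinite).
          G = X @ X.T
          return np.sign(G) * np.abs(G) ** p

      def kernel_pca(X, p=0.8, n_components=10):
          K = fractional_poly_gram(X, p)
          n = K.shape[0]
          J = np.eye(n) - np.ones((n, n)) / n
          Kc = J @ K @ J                      # center the Gram matrix
          w, V = np.linalg.eigh(Kc)
          keep = w > 1e-10                    # discard non-positive eigenvalues
          w, V = w[keep][::-1], V[:, keep][:, ::-1]
          alphas = V[:, :n_components] / np.sqrt(w[:n_components])
          return Kc @ alphas                  # projected training features

      features = kernel_pca(np.random.randn(200, 64), p=0.8, n_components=10)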

  18. Establishing a direct connection between detrended fluctuation analysis and Fourier analysis

    NASA Astrophysics Data System (ADS)

    Kiyono, Ken

    2015-10-01

    To understand methodological features of the detrended fluctuation analysis (DFA) using a higher-order polynomial fitting, we establish the direct connection between DFA and Fourier analysis. Based on an exact calculation of the single-frequency response of the DFA, the following facts are shown analytically: (1) in the analysis of stochastic processes exhibiting a power-law scaling of the power spectral density (PSD), S(f) ~ f^(-β), a higher-order detrending in the DFA has no adverse effect on the estimation of the DFA scaling exponent α, which satisfies the scaling relation α = (β + 1)/2; (2) the upper limit of the scaling exponents detectable by the DFA depends on the order of the polynomial fit used in the DFA, and is bounded by m + 1, where m is the order of the polynomial fit; (3) the relation between the time scale in the DFA and the corresponding frequency in the PSD is distorted depending on both the order of the DFA and the frequency dependence of the PSD. We can improve the scale distortion by introducing a corrected time scale in the DFA corresponding to the inverse of the frequency scale in the PSD. In addition, our analytical approach makes it possible to characterize variants of the DFA using different types of detrending. As an application, properties of the detrending moving average algorithm are discussed.
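
    For readers unfamiliar with the method, a textbook formulation of DFA with m-th order polynomial detrending (not the paper's code) looks as follows; the slope of log F(s) versus log s estimates α, which for S(f) ~ f^(-β) should approach (β + 1)/2:

      import numpy as np

      def dfa(x, scales, m=1):
          y = np.cumsum(x - np.mean(x))          # integrated profile
          F = []
          for s in scales:
              rms = []
              for i in range(len(y) // s):
                  seg = y[i*s:(i+1)*s]
                  t = np.arange(s)
                  trend = np.polyval(np.polyfit(t, seg, m), t)
                  rms.append(np.mean((seg - trend) ** 2))
              F.append(np.sqrt(np.mean(rms)))
          return np.array(F)

      x = np.random.randn(10000)                 # white noise: expect alpha ~ 0.5
      scales = np.array([16, 32, 64, 128, 256])
      alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
      print(round(alpha, 2))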

  19. Dsm Based Orientation of Large Stereo Satellite Image Blocks

    NASA Astrophysics Data System (ADS)

    d'Angelo, P.; Reinartz, P.

    2012-07-01

    High resolution stereo satellite imagery is well suited for the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on CARTOSAT-1 imagery is presented, with emphasis on fully automated georeferencing. The proposed system processes level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The RPC are derived from orbit and attitude information and have a much lower accuracy than the ground resolution of approximately 2.5 m. In order to use the images for orthorectification or DSM generation, an affine RPC correction is required. In this paper, GCP are automatically derived from lower resolution reference datasets (Landsat ETM+ Geocover and SRTM DSM). The traditional method of collecting the lateral position from a reference image and interpolating the corresponding height from the DEM ignores the higher lateral accuracy of the SRTM dataset. Our method avoids this drawback by using an RPC correction based on DSM alignment, resulting in improved geolocation of both DSM and ortho images. A scene-based method and a bundle block adjustment based correction are developed and evaluated for a test site covering the northern part of Italy, for which 405 Cartosat-1 stereo pairs are available. Both methods are tested against independent ground truth. Checks against this ground truth indicate a lateral error of 10 meters.

  20. Optical alignment procedure utilizing neural networks combined with Shack-Hartmann wavefront sensor

    NASA Astrophysics Data System (ADS)

    Adil, Fatime Zehra; Konukseven, Erhan İlhan; Balkan, Tuna; Adil, Ömer Faruk

    2017-05-01

    In the design of pilot helmets with night vision capability, a transparent visor is used so as not to limit or block the pilot's sight. The image reflected from the coated part of the visor must coincide with the physical human sight image seen through the nonreflecting regions of the visor. This makes the alignment of the visor halves critical. In essence, this is an alignment problem of two optical parts that are assembled together during the manufacturing process. The Shack-Hartmann wavefront sensor is commonly used for the determination of the misalignments through wavefront measurements, which are quantified in terms of the Zernike polynomials. Although the Zernike polynomials provide very useful feedback about the misalignments, the corrective actions are basically ad hoc. This stems from the fact that there exists no easy inverse relation between the misalignment measurements and the physical causes of the misalignments. This study aims to construct this inverse relation by making use of the expressive power of neural networks in such complex relations. For this purpose, a neural network is designed and trained in MATLAB® regarding which types of misalignments result in which wavefront measurements, quantitatively given by Zernike polynomials. This way, manual and iterative alignment processes relying on trial and error are replaced by the trained guesses of a neural network, so the alignment process is reduced to applying the counter actions based on the misalignment causes. Such training requires data containing misalignment and measurement sets in fine detail, which is hard to obtain manually on a physical setup. For that reason, the optical setup was completely modeled in Zemax® software, and Zernike polynomials were generated for misalignments applied in small steps. The performance of the neural network was tested on the actual physical setup and found promising.
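
    The paper trains its network in MATLAB®; a Python sketch of the same supervised inverse mapping, with random placeholders standing in for the Zemax®-generated training pairs, could look like this (the layer sizes and the linear toy forward model are our assumptions):

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Train a small multilayer perceptron to predict misalignment parameters
      # (e.g. tilts and decenters of the visor halves) from measured Zernike
      # coefficients. In the paper, the training pairs come from ray-tracing
      # simulations; here they are random placeholders.
      n_samples, n_zernike, n_misalign = 5000, 15, 4
      misalignments = np.random.uniform(-1, 1, (n_samples, n_misalign))
      W = np.random.randn(n_misalign, n_zernike)   # toy forward model
      zernikes = misalignments @ W + 0.01 * np.random.randn(n_samples, n_zernike)

      model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
      model.fit(zernikes, misalignments)           # learn measurement -> cause

      corrective_action = -model.predict(zernikes[:1])  # counteract the estimate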

  1. Coherent orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Celeghini, E., E-mail: celeghini@fi.infn.it; Olmo, M.A. del, E-mail: olmo@fta.uva.es

    2013-08-15

    We discuss a fundamental characteristic of orthogonal polynomials, like the existence of a Lie algebra behind them, which can be added to their other relevant aspects. At the basis of the complete framework for orthogonal polynomials we thus include - in addition to differential equations, recurrence relations, Hilbert spaces and square integrable functions - Lie algebra theory. We start here from the square integrable functions on the open connected subset of the real line whose bases are related to orthogonal polynomials. All these one-dimensional continuous spaces allow, besides the standard uncountable basis (|x〉), for an alternative countable basis (|n〉). The matrix elements that relate these two bases are essentially the orthogonal polynomials: Hermite polynomials for the line and Laguerre and Legendre polynomials for the half-line and the line interval, respectively. Differential recurrence relations of orthogonal polynomials allow us to realize that they determine an infinite-dimensional irreducible representation of a non-compact Lie algebra, whose second order Casimir C gives rise to the second order differential equation that defines the corresponding family of orthogonal polynomials. Thus, the Weyl-Heisenberg algebra h(1) with C = 0 for Hermite polynomials and su(1,1) with C = -1/4 for Laguerre and Legendre polynomials are obtained. Starting from the orthogonal polynomials the Lie algebra is extended both to the whole space of the L² functions and to the corresponding Universal Enveloping Algebra and transformation group. Generalized coherent states from each vector in the space L² and, in particular, generalized coherent polynomials are thus obtained. Highlights: • Fundamental characteristic of orthogonal polynomials (OP): existence of a Lie algebra. • Differential recurrence relations of OP determine a unitary representation of a non-compact Lie group. • 2nd order Casimir originates a 2nd order differential equation that defines the corresponding OP family. • Generalized coherent polynomials are obtained from OP.

  2. Animal Viruses Probe dataset (AVPDS) for microarray-based diagnosis and identification of viruses.

    PubMed

    Yadav, Brijesh S; Pokhriyal, Mayank; Vasishtha, Dinesh P; Sharma, Bhaskar

    2014-03-01

    AVPDS (Animal Viruses Probe dataset) is a dataset of virus-specific and conserved oligonucleotides for identification and diagnosis of viruses infecting animals. The current dataset contains 20,619 virus-specific probes for 833 viruses and their subtypes and 3,988 conserved probes for 146 viral genera. The virus-specific probe dataset has two fields, virus name and probe sequence. Similarly, the table of conserved probes for virus genera has fields for genus, subgroup within genus, and probe sequence. The subgroups within a genus are artificial divisions with no taxonomic significance; each contains probes that identify viruses in that specific subgroup of the genus. Using this dataset we have successfully diagnosed the first case of Newcastle disease virus in sheep and reported a mixed infection of Bovine viral diarrhea and Bovine herpesvirus in cattle. The dataset also contains probes which cross-react across species experimentally even though computationally they meet specifications. These probes have been marked. We hope that this dataset will be useful in microarray-based detection of viruses. The dataset can be accessed through the link https://dl.dropboxusercontent.com/u/94060831/avpds/HOME.html.

  3. Ground temperature measurement by PRT-5 for maps experiment

    NASA Technical Reports Server (NTRS)

    Gupta, S. K.; Tiwari, S. N.

    1978-01-01

    A simple algorithm and computer program were developed for determining the actual surface temperature from the effective brightness temperature as measured remotely by a radiation thermometer called PRT-5. This procedure allows the computation of atmospheric correction to the effective brightness temperature without performing detailed radiative transfer calculations. Model radiative transfer calculations were performed to compute atmospheric corrections for several values of the surface and atmospheric parameters individually and in combination. Polynomial regressions were performed between the magnitudes or deviations of these parameters and the corresponding computed corrections to establish simple analytical relations between them. Analytical relations were also developed to represent combined correction for simultaneous variation of parameters in terms of their individual corrections.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campos, Rafael G.; Tututi, Eduardo S.

    We study the Schwinger model on a lattice constructed from zeros of the Hermite polynomials that incorporates a lattice derivative and a discrete Fourier transform with many properties. Such a lattice produces a Klein-Gordon equation for the boson field and the correct value of the mass in the asymptotic limit.

  5. Characterizing the Lyman-alpha forest flux probability distribution function using Legendre polynomials

    NASA Astrophysics Data System (ADS)

    Cieplak, Agnieszka; Slosar, Anze

    2017-01-01

    The Lyman-alpha forest has become a powerful cosmological probe of the underlying matter distribution at high redshift. It is a highly non-linear field with much information present beyond the two-point statistics of the power spectrum. The flux probability distribution function (PDF) in particular has been used as a successful probe of small-scale physics. In addition to the cosmological evolution, however, it is also sensitive to pixel noise, spectrum resolution, and continuum fitting, all of which lead to possibly biased estimators. Here we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over the binned PDF, as is commonly done. Since the n-th coefficient can be expressed as a linear combination of the first n moments of the field, this allows for the coefficients to be measured in the presence of noise and allows for a clear route towards marginalization over the mean flux. In addition, we use hydrodynamic cosmological simulations to demonstrate that in the presence of noise, a finite number of these coefficients are well measured, with a very sharp transition into noise dominance. This compresses the information into a finite small number of well-measured quantities.

  6. Characterizing the Lyman-alpha forest flux probability distribution function using Legendre polynomials

    NASA Astrophysics Data System (ADS)

    Cieplak, Agnieszka; Slosar, Anze

    2018-01-01

    The Lyman-alpha forest has become a powerful cosmological probe at intermediate redshift. It is a highly non-linear field with much information present beyond the power spectrum. The flux probability distribution function (PDF) in particular has been a successful probe of small-scale physics. However, it is also sensitive to pixel noise, spectrum resolution, and continuum fitting, all of which lead to possibly biased estimators. Here we argue that measuring the coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values, as is commonly done. Since the n-th Legendre coefficient can be expressed as a linear combination of the first n moments of the field, this allows for the coefficients to be measured in the presence of noise and allows for a clear route towards marginalization over the mean flux. Additionally, in the presence of noise, a finite number of these coefficients are well measured, with a very sharp transition into noise dominance. This compresses the information into a small number of well-measured quantities. Finally, we find that measuring fewer quasars with high signal-to-noise produces a higher amount of recoverable information.
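
    The central computational idea - that the n-th Legendre coefficient of the PDF is fixed by the first n moments of the rescaled flux - can be sketched as follows; the mapping of flux to [-1, 1] and the toy flux distribution are our illustration choices:

      import numpy as np
      from numpy.polynomial import legendre

      # Expand a flux PDF supported on [-1, 1] in Legendre polynomials. The
      # n-th coefficient is an integral of P_n against the PDF, i.e. an
      # expectation value estimable directly from noisy samples.
      def legendre_coefficients(samples, n_max):
          coeffs = []
          for n in range(n_max + 1):
              Pn = legendre.Legendre.basis(n)
              # (2n+1)/2 is the normalization of P_n on [-1, 1]
              coeffs.append((2 * n + 1) / 2.0 * np.mean(Pn(samples)))
          return np.array(coeffs)

      flux = np.clip(np.random.beta(5, 2, 100000), 0, 1)   # toy "flux" field
      coeffs = legendre_coefficients(2 * flux - 1, n_max=8)
      # The PDF can then be reconstructed as sum_n coeffs[n] * P_n(x).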

  7. Vehicle Sprung Mass Estimation for Rough Terrain

    DTIC Science & Technology

    2011-03-01

    distributions are greater than zero. The multivariate polynomials are functions of the Legendre polynomials (Poularikas (1999...developed methods based on polynomial chaos theory and on the maximum likelihood approach to estimate the most likely value of the vehicle sprung...mass. The polynomial chaos estimator is compared to benchmark algorithms including recursive least squares, recursive total least squares, extended

  8. Goldindec: A Novel Algorithm for Raman Spectrum Baseline Correction

    PubMed Central

    Liu, Juntao; Sun, Jianyang; Huang, Xiuzhen; Li, Guojun; Liu, Binqiang

    2016-01-01

    Raman spectra have been widely used in biology, physics, and chemistry and have become an essential tool for the studies of macromolecules. Nevertheless, the raw Raman signal is often obscured by a broad background curve (or baseline) due to the intrinsic fluorescence of the organic molecules, which leads to unpredictable negative effects in quantitative analysis of Raman spectra. Therefore, it is essential to correct this baseline before analyzing raw Raman spectra. Polynomial fitting has proven to be the most convenient and simplest method and has high accuracy. In polynomial fitting, the cost function used and its parameters are crucial. This article proposes a novel iterative algorithm named Goldindec, freely available for noncommercial use as noted in text, with a new cost function that not only overcomes the influence of large peaks but also solves the problem of low correction accuracy when the peak number is high. Goldindec automatically generates parameters from the raw data rather than by empirical choice, as in previous methods. Comparisons with other algorithms on benchmark data show that Goldindec has higher accuracy and computational efficiency, and is hardly affected by large peaks, peak number, and wavenumber range. PMID:26037638
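
    Goldindec's cost function and automatic parameter selection are its contributions and are not reproduced here, but the basic iterative polynomial baseline idea it builds on can be sketched generically (a ModPoly-style clipping loop, shown only for orientation):

      import numpy as np

      def iterative_baseline(wavenumbers, intensities, degree=5, n_iter=50):
          y = np.asarray(intensities, dtype=float).copy()
          for _ in range(n_iter):
              coeffs = np.polyfit(wavenumbers, y, degree)
              baseline = np.polyval(coeffs, wavenumbers)
              # Clip the spectrum to the current fit so that Raman peaks,
              # which lie above the baseline, stop pulling the fit upward.
              y = np.minimum(y, baseline)
          return baseline

      # corrected = raw_spectrum - iterative_baseline(wn, raw_spectrum)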

  9. Simple, fast, and low-cost camera-based water content measurement with colorimetric fluorescent indicator

    NASA Astrophysics Data System (ADS)

    Song, Seok-Jeong; Kim, Tae-Il; Kim, Youngmi; Nam, Hyoungsik

    2018-05-01

    Recently, a simple, sensitive, and low-cost fluorescent indicator has been proposed to determine water contents in organic solvents, drugs, and foodstuffs. A change of water content leads to a change of the indicator's fluorescence color under ultraviolet (UV) light. Whereas the water content values could be estimated from the spectrum obtained by a bulky and expensive spectrometer in the previous research, this paper demonstrates a simple and low-cost camera-based water content measurement scheme with the same fluorescent water indicator. Water content is calculated over the range of 0-30% by quadratic polynomial regression models with color information extracted from the captured images of samples. In particular, several color spaces such as RGB, xyY, L*a*b*, u'v', HSV, and YCBCR have been investigated to establish the optimal color information features over both linear and nonlinear RGB data given by a camera before and after gamma correction. In the end, a 2nd-order polynomial regression model along with HSV in a linear domain achieves the minimum mean square error of 1.06% for a 3-fold cross-validation method. Additionally, the resultant water content estimation model is implemented and evaluated on an off-the-shelf Android-based smartphone.
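
    The regression step reduces to an ordinary 2nd-order polynomial fit from a color feature to water content. A minimal sketch with invented numbers follows (the real model uses HSV features extracted from calibrated sample images):

      import numpy as np

      hue = np.array([0.05, 0.10, 0.18, 0.25, 0.33, 0.41])      # toy features
      water_pct = np.array([0.0, 5.0, 10.0, 15.0, 22.0, 30.0])  # reference values

      coeffs = np.polyfit(hue, water_pct, 2)      # [a, b, c] for a*h^2 + b*h + c

      def predict_water_content(h):
          return np.polyval(coeffs, h)

      print(predict_water_content(0.2))  # estimated water content in percent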

  10. Effects of Error Correction during Assessment Probes on the Acquisition of Sight Words for Students with Moderate Intellectual Disabilities

    ERIC Educational Resources Information Center

    Waugh, Rebecca E.

    2010-01-01

    Simultaneous prompting is an errorless learning strategy designed to reduce the number of errors students make; however, research has shown a disparity in the number of errors students make during instructional versus probe trials. This study directly examined the effects of error correction versus no error correction during probe trials on the…

  11. Effects of Error Correction during Assessment Probes on the Acquisition of Sight Words for Students with Moderate Intellectual Disabilities

    ERIC Educational Resources Information Center

    Waugh, Rebecca E.; Alberto, Paul A.; Fredrick, Laura D.

    2011-01-01

    Simultaneous prompting is an errorless learning strategy designed to reduce the number of errors students make; however, research has shown a disparity in the number of errors students make during instructional versus probe trials. This study directly examined the effects of error correction versus no error correction during probe trials on the…

  12. Application of polynomial su(1, 1) algebra to Pöschl-Teller potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hong-Biao, E-mail: zhanghb017@nenu.edu.cn; Lu, Lu

    2013-12-15

    Two novel polynomial su(1, 1) algebras for the physical systems with the first and second Pöschl-Teller (PT) potentials are constructed, and their specific representations are presented. Meanwhile, these polynomial su(1, 1) algebras are used as an algebraic technique to solve eigenvalues and eigenfunctions of the Hamiltonians associated with the first and second PT potentials. The algebraic approach explores an appropriate new pair of raising and lowering operators K̂± of the polynomial su(1, 1) algebra as a pair of shift operators of our Hamiltonians. In addition, two usual su(1, 1) algebras associated with the first and second PT potentials are derived naturally from the polynomial su(1, 1) algebras built by us.

  13. Spline function approximation techniques for image geometric distortion representation. [for registration of multitemporal remote sensor imagery

    NASA Technical Reports Server (NTRS)

    Anuta, P. E.

    1975-01-01

    Least squares approximation techniques were developed for use in computer aided correction of spatial image distortions for registration of multitemporal remote sensor imagery. Polynomials were first used to define image distortion over the entire two dimensional image space. Spline functions were then investigated to determine if the combination of lower order polynomials could approximate a higher order distortion with less computational difficulty. Algorithms for generating approximating functions were developed and applied to the description of image distortion in aircraft multispectral scanner imagery. Other applications of the techniques were suggested for earth resources data processing areas other than geometric distortion representation.

  14. Polynomial time blackbox identity testers for depth-3 circuits : the field doesn't matter.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seshadhri, Comandur; Saxena, Nitin

    Let C be a depth-3 circuit with n variables, degree d and top fanin k (called ΣΠΣ(k, d, n) circuits) over base field F. It is a major open problem to design a deterministic polynomial time blackbox algorithm that tests if C is identically zero. Klivans & Spielman (STOC 2001) observed that the problem is open even when k is a constant. This case has been subjected to a serious study over the past few years, starting from the work of Dvir & Shpilka (STOC 2005). We give the first polynomial time blackbox algorithm for this problem. Our algorithm runs in time poly(n)·d^k, regardless of the base field. The only field for which polynomial time algorithms were previously known is F = Q (Kayal & Saraf, FOCS 2009, and Saxena & Seshadhri, FOCS 2010). This is the first blackbox algorithm for depth-3 circuits that does not use the rank based approaches of Karnin & Shpilka (CCC 2008). We prove an important tool for the study of depth-3 identities. We design a blackbox polynomial time transformation that reduces the number of variables in a ΣΠΣ(k, d, n) circuit to k variables, but preserves the identity structure. Polynomial identity testing (PIT) is a major open problem in theoretical computer science. The input is an arithmetic circuit that computes a polynomial p(x1, x2, ..., xn) over a base field F. We wish to check if p is the zero polynomial, or in other words, is identically zero. We may be provided with an explicit circuit, or may only have blackbox access. In the latter case, we can only evaluate the polynomial p at various domain points. The main goal is to devise a deterministic blackbox polynomial time algorithm for PIT.

  15. Generalized clustering conditions of Jack polynomials at negative Jack parameter {alpha}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernevig, B. Andrei; Department of Physics, Princeton University, Princeton, New Jersey 08544; Haldane, F. D. M.

    We present several conjectures on the behavior and clustering properties of Jack polynomials at a negative parameter α = -(k+1)/(r-1), with partitions that violate the (k, r, N)-admissibility rule of Feigin et al. [Int. Math. Res. Notices 23, 1223 (2002)]. We find that the "highest weight" Jack polynomials of specific partitions represent the minimum degree polynomials in N variables that vanish when s distinct clusters of k+1 particles are formed, where s and k are positive integers. Explicit counting formulas are conjectured. The generalized clustering conditions are useful in a forthcoming description of fractional quantum Hall quasiparticles.

  16. Evaluating carotenoid changes in tomatoes during postharvest ripening using Raman chemical imaging

    NASA Astrophysics Data System (ADS)

    Qin, Jianwei; Chao, Kuanglin; Kim, Moon S.

    2011-06-01

    Lycopene is a major carotenoid in tomatoes and its content varies considerably during postharvest ripening. Hence evaluating lycopene changes can be used to monitor the ripening of tomatoes. Raman chemical imaging technique is promising for mapping constituents of interest in complex food matrices. In this study, a benchtop point-scanning Raman chemical imaging system was developed to evaluate lycopene content in tomatoes at different maturity stages. The system consists of a 785 nm laser, a fiber optic probe, a dispersive imaging spectrometer, a spectroscopic CCD camera, and a two-axis positioning table. Tomato samples at different ripeness stages (i.e., green, breaker, turning, pink, light red, and red) were selected and cut before imaging. Hyperspectral Raman images were acquired from cross sections of the fruits in the wavenumber range of 200 to 2500 cm-1 with a spatial resolution of 1 mm. The Raman spectrum of pure lycopene was measured as reference for spectral matching. A polynomial curve-fitting method was used to correct for the underlying fluorescence background in the Raman spectra of the tomatoes. A hyperspectral image classification method was developed based on spectral information divergence to identify lycopene in the tomatoes. Raman chemical images were created to visualize quantity and spatial distribution of the lycopene at different ripeness stages. The lycopene patterns revealed the mechanism of lycopene generation during the postharvest development of the tomatoes. The method and findings of this study form a basis for the future development of a Raman-based nondestructive approach for monitoring internal maturity of the tomatoes.

  17. A polynomial based model for cell fate prediction in human diseases.

    PubMed

    Ma, Lichun; Zheng, Jie

    2017-12-21

    Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decision sheds light on key regulators, facilitates understanding the mechanisms, and suggests novel strategies to treat human diseases that are related to abnormal cell development. In this study, we proposed a polynomial based model to predict cell fate. This model was derived from the Taylor series. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e. correlation based and apoptosis pathway based. Then polynomials of different degrees were used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resultant cell fate prediction model by evaluating the ranges of the parameters, as well as assessing the variances of the predicted values at randomly selected points. Results show that, with both of the considered gene selection methods, the prediction accuracies of polynomials of different degrees show little difference. Interestingly, the linear polynomial (degree 1 polynomial) is more stable than the others. When comparing the linear polynomials based on the two gene selection methods, it turns out that although the accuracy of the linear polynomial that uses the correlation analysis outcomes is a little higher (86.62%), the one based on genes of the apoptosis pathway is much more stable. Considering both the prediction accuracy and the stability of the polynomial models of different degrees, the linear model is a preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cells, which may be important for basic research as well as clinical study of cell development related diseases.

  18. Image distortion analysis using polynomial series expansion.

    PubMed

    Baggenstoss, Paul M

    2004-11-01

    In this paper, we derive a technique for analysis of local distortions which affect data in real-world applications. In the paper, we focus on image data, specifically handwritten characters. Given a reference image and a distorted copy of it, the method is able to efficiently determine the rotations, translations, scaling, and any other distortions that have been applied. Because the method is robust, it is also able to estimate distortions for two unrelated images, thus determining the distortions that would be required to cause the two images to resemble each other. The approach is based on a polynomial series expansion using matrix powers of linear transformation matrices. The technique has applications in pattern recognition in the presence of distortions.

  19. Gradient nonlinearity calibration and correction for a compact, asymmetric magnetic resonance imaging gradient system.

    PubMed

    Tao, S; Trzasko, J D; Gunter, J L; Weavers, P T; Shu, Y; Huston, J; Lee, S K; Tan, E T; Bernstein, M A

    2017-01-21

    Due to engineering limitations, the spatial encoding gradient fields in conventional magnetic resonance imaging cannot be perfectly linear and always contain higher-order, nonlinear components. If ignored during image reconstruction, gradient nonlinearity (GNL) manifests as image geometric distortion. Given an estimate of the GNL field, this distortion can be corrected to a degree proportional to the accuracy of the field estimate. The GNL of a gradient system is typically characterized using a spherical harmonic polynomial model with model coefficients obtained from electromagnetic simulation. Conventional whole-body gradient systems are symmetric in design; typically, only odd-order terms up to the 5th order are required for GNL modeling. Recently, a high-performance, asymmetric gradient system was developed, which exhibits more complex GNL that requires higher-order terms, including both odd and even orders, for accurate modeling. This work characterizes the GNL of this system using an iterative calibration method and a fiducial phantom used in ADNI (Alzheimer's Disease Neuroimaging Initiative). The phantom was scanned at different locations inside the 26 cm diameter spherical volume of this gradient, and the positions of fiducials in the phantom were estimated. An iterative calibration procedure was utilized to identify the model coefficients that minimize the mean-squared error between the true fiducial positions and the positions estimated from images corrected using these coefficients. To examine the effect of higher-order and even-order terms, this calibration was performed using spherical harmonic polynomials of different orders up to the 10th order, including even- and odd-order terms, or odd-order terms only. The results showed that the model coefficients of this gradient can be successfully estimated. The residual root-mean-squared error after correction using up to the 10th-order coefficients was reduced to 0.36 mm, yielding spatial accuracy comparable to conventional whole-body gradients. The even-order terms were necessary for accurate GNL modeling. In addition, the calibrated coefficients improved image geometric accuracy compared with the simulation-based coefficients.
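
    The iterative calibration loop can be illustrated in a drastically simplified form: synthesize distorted fiducial positions, then solve for the distortion-model coefficients that minimize the post-correction position error. The two-term polynomial stand-in below replaces the full spherical harmonic expansion and is purely hypothetical:

      import numpy as np
      from scipy.optimize import least_squares

      def distort(positions, coeffs):
          # Hypothetical low-order stand-in with one even- and one odd-order
          # term in z; the real model is a spherical harmonic expansion.
          z = positions[:, 2:3]
          return positions * (1.0 + coeffs[0] * z**2 + coeffs[1] * z**3)

      true_pos = np.random.uniform(-0.13, 0.13, (200, 3))   # 26 cm DSV, in m
      observed = distort(true_pos, np.array([0.8, -0.5]))    # "scanner" output

      def residual(coeffs):
          # Correct the observed positions with the candidate model, then
          # compare against the known fiducial positions.
          z = observed[:, 2:3]
          corrected = observed / (1.0 + coeffs[0] * z**2 + coeffs[1] * z**3)
          return (corrected - true_pos).ravel()

      fit = least_squares(residual, x0=np.zeros(2))
      print(fit.x)  # estimated model coefficients (approximate)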

  20. Computing border bases using mutant strategies

    NASA Astrophysics Data System (ADS)

    Ullah, E.; Abbas Khan, S.

    2014-01-01

    Border bases, a generalization of Gröbner bases, have been actively studied during recent years due to their applicability to industrial problems. In cryptography and coding theory a useful application of border bases is to solve zero-dimensional systems of polynomial equations over finite fields, which motivates the development of optimizations of the algorithms that compute border bases. In 2006, Kehrein and Kreuzer formulated the Border Basis Algorithm (BBA), an algorithm which allows the computation of border bases that relate to a degree compatible term ordering. In 2007, J. Ding et al. introduced mutant strategies based on finding special lower degree polynomials in the ideal. The mutant strategies aim to distinguish special lower degree polynomials (mutants) from the other polynomials and give them priority in the process of generating new polynomials in the ideal. In this paper we develop hybrid algorithms that use the ideas of J. Ding et al. involving the concept of mutants to optimize the Border Basis Algorithm for solving systems of polynomial equations over finite fields. In particular, we recall a version of the Border Basis Algorithm called the Improved Border Basis Algorithm and propose two hybrid algorithms, called MBBA and IMBBA. The new mutant variants provide both space and time efficiency. The efficiency of these newly developed hybrid algorithms is discussed using standard cryptographic examples.

  1. Wavefront reconstruction from non-modulated pyramid wavefront sensor data using a singular value type expansion

    NASA Astrophysics Data System (ADS)

    Hutterer, Victoria; Ramlau, Ronny

    2018-03-01

    The new generation of extremely large telescopes includes adaptive optics systems to correct for atmospheric blurring. In this paper, we present a new method of wavefront reconstruction from non-modulated pyramid wavefront sensor data. The approach is based on a simplified sensor model represented as the finite Hilbert transform of the incoming phase. Due to the non-compactness of the finite Hilbert transform operator the classical theory for singular systems is not applicable. Nevertheless, we can express the Moore-Penrose inverse as a singular value type expansion with weighted Chebychev polynomials.
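
    For reference, the finite Hilbert transform underlying the simplified sensor model acts, in its usual normalization (the paper's sign and scaling conventions may differ), on a phase φ supported on (-1, 1) as the principal value integral

      $$ (H\varphi)(x) = \frac{1}{\pi}\,\mathrm{p.v.}\!\int_{-1}^{1} \frac{\varphi(t)}{t - x}\,dt, \qquad x \in (-1, 1), $$

    whose non-compactness is what rules out the classical singular value decomposition and motivates the singular value type expansion in weighted Chebychev polynomials.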

  2. N-glycosylation of plant recombinant pharmaceuticals.

    PubMed

    Bardor, Muriel; Cabrera, Gleysin; Stadlmann, Johannes; Lerouge, Patrice; Cremata, José A; Gomord, Véronique; Fitchette, Anne-Catherine

    2009-01-01

    N-glycosylation is a maturation event necessary for the correct function, efficiency, and stability of a large number of biopharmaceuticals. The chapter presented here proposes various methods to determine whether, how, and where a plant pharmaceutical is N-glycosylated. These methods rely on blot detection with glycan-specific probes, specific deglycosylation of glycoproteins followed by mass spectrometry, N-glycan profile analysis, and glycopeptide identification by LC-MS.

  3. Computer simulation of photorefractive keratectomy for the correction of myopia and hyperopia

    NASA Astrophysics Data System (ADS)

    Pinault, Pascal; L'Huillier, J. P.

    1996-01-01

    Photorefractive keratectomy (PRK) performed by means of the 193 nm excimer laser has stimulated considerable interest in the ophthalmic community because this new procedure has the potential to correct myopia, hyperopia, and astigmatism. The use of a laser beam to remove a controlled amount of tissue from the cornea implies that both the energy density of the laser beam and the target removal rate are accurately known. In addition, the tissue ablation profile required to achieve the refractive correction must be predicted by optical calculations. This paper investigates: (1) optical computations based on a ray-tracing model to determine what anterior corneal profile is needed postoperatively for ametropia; (2) the maximal depth of the removed corneal tissue as a function of the treated ablation zone; and (3) the thickness of the ablated corneal lenticule at any distance from the optical axis. Relationships between these data are well fitted by polynomial regression curves in order to be useful as an algorithm in the computer-controlled delivery of the ArF laser beam.
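
    Although the abstract does not quote it, the classical starting point for such ablation-profile calculations is Munnerlyn's formula; in its common approximate form (given here for orientation only, not taken from the paper), the maximal ablation depth t0 for a myopic correction of D dioptres over an optical zone of diameter S is

      $$ t_0\,[\mu\mathrm{m}] \approx \frac{S^2\,[\mathrm{mm}^2]\;\, D\,[\mathrm{dioptres}]}{3}. $$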

  4. The Fixed-Links Model in Combination with the Polynomial Function as a Tool for Investigating Choice Reaction Time Data

    ERIC Educational Resources Information Center

    Schweizer, Karl

    2006-01-01

    A model with fixed relations between manifest and latent variables is presented for investigating choice reaction time data. The numbers for fixation originate from the polynomial function. Two options are considered: the component-based (1 latent variable for each component of the polynomial function) and composite-based options (1 latent…

  5. A general U-block model-based design procedure for nonlinear polynomial control systems

    NASA Astrophysics Data System (ADS)

    Zhu, Q. M.; Zhao, D. Y.; Zhang, Jianhua

    2016-10-01

    The proposition of the U-model concept (in terms of 'providing concise and applicable solutions for complex problems') and a corresponding basic U-control design algorithm originated in the first author's PhD thesis. The term U-model appeared (not rigorously defined) for the first time in the first author's other journal paper, which established a framework for using linear polynomial control system design approaches to design nonlinear polynomial control systems (in brief, linear polynomial approaches → nonlinear polynomial plants). This paper represents the next milestone work - using linear state-space approaches to design nonlinear polynomial control systems (in brief, linear state-space approaches → nonlinear polynomial plants). The overall aim of the study is to establish a framework, defined as the U-block model, which provides a generic prototype for using linear state-space-based approaches to design control systems with smooth nonlinear plants/processes described by polynomial models. For analysing feasibility and effectiveness, the sliding mode control design approach is selected as an exemplary case study. Numerical simulation studies provide a user-friendly step-by-step procedure for readers/users with interest in their ad hoc applications. In formality, this is the first paper to present the U-model-oriented control system design in a formal way and to study the associated properties and theorems. The previous publications, in the main, have been algorithm-based studies and simulation demonstrations. In some sense, this paper can be treated as a landmark for U-model-based research, marking its progress from the intuitive/heuristic stage to rigorous, formal, and comprehensive studies.

  6. High-throughput microarray technology in diagnostics of enterobacteria based on genome-wide probe selection and regression analysis.

    PubMed

    Friedrich, Torben; Rahmann, Sven; Weigel, Wilfried; Rabsch, Wolfgang; Fruth, Angelika; Ron, Eliora; Gunzer, Florian; Dandekar, Thomas; Hacker, Jörg; Müller, Tobias; Dobrindt, Ulrich

    2010-10-21

    The Enterobacteriaceae comprise a large number of clinically relevant species with several individual subspecies. Overlapping virulence-associated gene pools and the high overall genome plasticity often interfere with correct enterobacterial strain typing and risk assessment. Array technology offers a fast, reproducible and standardisable means for bacterial typing and thus provides many advantages for bacterial diagnostics, risk assessment and surveillance. The development of highly discriminative broad-range microbial diagnostic microarrays remains a challenge because of the marked genome plasticity of many bacterial pathogens. We developed a DNA microarray for strain typing and detection of major antimicrobial resistance genes of clinically relevant enterobacteria. For this purpose, we applied a global genome-wide probe selection strategy on 32 available complete enterobacterial genomes combined with a regression model for pathogen classification. The discriminative power of the probe set was further tested in silico on 15 additional complete enterobacterial genome sequences. DNA microarrays based on the selected probes were used to type 92 clinical enterobacterial isolates. Phenotypic tests confirmed the array-based typing results and corroborate that the selected probes allowed correct typing and prediction of major antibiotic resistances of clinically relevant Enterobacteriaceae, including at the subspecies level, e.g. the reliable distinction of different E. coli pathotypes. Our results demonstrate that the global probe selection approach based on longest common factor statistics as well as the design of a DNA microarray with a restricted set of discriminative probes enables robust discrimination of different enterobacterial variants and represents a proof of concept that can be adopted for diagnostics of a wide range of microbial pathogens. Our approach circumvents misclassifications arising from the application of virulence markers, which are highly affected by horizontal gene transfer. Moreover, a broad range of pathogens has been covered by an efficient probe set size, enabling the design of high-throughput diagnostics.

  7. Temperature and SAR measurements in deep-body hyperthermia with thermocouple thermometry.

    PubMed

    De Leeuw, A A; Crezee, J; Lagendijk, J J

    1993-01-01

    Multisensor (7-14) thermocouple thermometry is used at our department for temperature measurement with our 'Coaxial TEM' regional hyperthermia system. A special design of the thermometry system with high resolution (0.005 degrees C) and fast data acquisition (all channels within 320 ms) together with a pulsed power technique allows assessment of specific absorption rate (SAR) information in patients along catheter tracks. A disadvantage of thermocouple thermometry, EM interference, is almost entirely eliminated by application of absorbing ferrite beads around the probe leads. We investigated the effect of the remaining disturbance on the temperature decay after power-off, both experimentally in phantoms and in the clinic, and with numerical simulations. Probe and tissue characteristics influence the response time τ_dist of the decay of the disturbance. In our clinical practice a normal pulse sequence is 50 s power-on, 10 s power-off: a response time longer than the power-off time results in a deflection of the temperature course at the start. Based on analysis of the temperature decays, correction of the temperature is possible. A double-pulse technique is introduced to provide an initial correction of temperature and fast information about accuracy. Sometimes disturbance with a relatively long response time occurs, probably due to bad contact between probe, catheter and/or tissue. Thermocouple thermometry proved suitable to measure the SAR along a catheter track. This is used to optimize the SAR distribution by patient positioning before treatment. A clinical example illustrates this.
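
    The SAR assessment from pulsed-power temperature data rests on the standard relation (a textbook formula, not specific to this paper) between the local SAR and the initial rate of temperature rise at power-on,

      $$ \mathrm{SAR} = c\,\left.\frac{dT}{dt}\right|_{t=0^{+}} \;[\mathrm{W\,kg^{-1}}], $$

    where c is the specific heat capacity of tissue and the slope is taken before conduction and perfusion redistribute the deposited heat - which is why the disturbance decay and its response time matter for accuracy.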

  8. APPLICATION OF NEURAL NETWORK ALGORITHMS FOR BPM LINEARIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Musson, John C.; Seaton, Chad; Spata, Mike F.

    2012-11-01

    Stripline BPM sensors contain inherent non-linearities, as a result of field distortions from the pickup elements. Many methods have been devised to facilitate corrections, often employing polynomial fitting. The cost of computation makes real-time correction difficult, particularly when integer math is utilized. The application of neural-network technology, particularly the multi-layer perceptron algorithm, is proposed as an efficient alternative for electrode linearization. A process of supervised learning is initially used to determine the weighting coefficients, which are subsequently applied to the incoming electrode data. A non-linear layer, known as an activation layer, is responsible for the removal of saturation effects. Implementation of a perceptron in an FPGA-based software-defined radio (SDR) is presented, along with performance comparisons. In addition, efficient calculation of the sigmoidal activation function via the CORDIC algorithm is presented.
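
    As a rough illustration of the perceptron structure described here (not the authors' FPGA implementation), the sketch below maps four stripline electrode amplitudes to a linearized beam position through a sigmoidal activation layer; the weights are random placeholders standing in for the supervised-learning result.

```python
import numpy as np

def sigmoid(x):
    # activation layer; in the FPGA this is approximated via CORDIC
    return 1.0 / (1.0 + np.exp(-x))

def mlp_position(electrodes, W1, b1, W2, b2):
    """Map 4 raw electrode amplitudes to a linearized (x, y) position."""
    h = sigmoid(W1 @ electrodes + b1)  # hidden layer removes saturation effects
    return W2 @ h + b2                 # linear output layer: (x, y)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)  # placeholder trained weights
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)
print(mlp_position(np.array([0.9, 1.1, 1.0, 1.05]), W1, b1, W2, b2))
```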

  9. An algorithmic approach to solving polynomial equations associated with quantum circuits

    NASA Astrophysics Data System (ADS)

    Gerdt, V. P.; Zinin, M. V.

    2009-12-01

    In this paper we present two algorithms for reducing systems of multivariate polynomial equations over the finite field F_2 to the canonical triangular form called the lexicographical Gröbner basis. This triangular form is the most appropriate for finding solutions of the system. On the other hand, a system of polynomials over F_2 whose variables also take values in F_2 (Boolean polynomials) completely describes the unitary matrix generated by a quantum circuit. In particular, the matrix itself can be computed by counting the number of solutions (roots) of the associated polynomial system. Thereby, efficient construction of the lexicographical Gröbner bases over F_2 associated with quantum circuits gives a method for computing their circuit matrices that is alternative to the direct numerical method based on linear algebra. We compare our implementation of both algorithms with some other software packages available for computing Gröbner bases over F_2.
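
    The flavor of the computation can be reproduced with SymPy, although the paper benchmarks its own algorithms; a sketch assuming Boolean behaviour is imposed by adding the field equations v**2 + v = 0 for every variable.

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')
system = [x*y + z, y*z + x + 1]            # toy Boolean polynomial system
field_eqs = [v**2 + v for v in (x, y, z)]  # force all values into F_2
G = groebner(system + field_eqs, x, y, z, order='lex', modulus=2)
print(G)  # lexicographical Groebner basis: the triangular form over F_2
```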

  10. High quality adaptive optics zoom with adaptive lenses

    NASA Astrophysics Data System (ADS)

    Quintavalla, M.; Santiago, F.; Bonora, S.; Restaino, S.

    2018-02-01

    We present the combined use of a large-aperture adaptive lens with large optical power modulation and a multi-actuator adaptive lens. The Multi-actuator Adaptive Lens (M-AL) can correct up to the 4th radial order of Zernike polynomials, without any obstructions (electrodes and actuators) placed inside its clear aperture. We demonstrated that the use of both lenses together can lead to better image quality and to the correction of aberrations in adaptive optics optical systems.

  11. Southern Great Plains 1997 hydrology experiment: The spatial and temporal distribution of soil moisture within a quarter section pasture field

    NASA Technical Reports Server (NTRS)

    Tsegaye, T.; Coleman, T.; Tadesse, W.; Rajbhandari, N.; Senwo, Z.; Crosson, W.; Surrency, J.

    1998-01-01

    Understanding the spatial and temporal distribution of soil moisture near the soil surface is important to relate ground truth data to remotely sensed data using an electronically scanned thinned array radiometer (ESTAR). The research was conducted at the A-ARM EF site in the Little Washita Watershed in Chickasha, Oklahoma. Soil moisture was measured on a 100 x 100-m grid on a quarter-section (0.8 km by 0.8 km) field where the DOE A-ARM SWATS is located. This site has several drainage channels and small ponds. The site is under four different land use practices, namely active pastureland, non-grazed pastureland covered with thick grass, forest area covered with trees, and a single residential area. Soil moisture was measured with a Time Domain Reflectometry (TDR) Delta-T 6-cm theta-probe and by the gravimetric soil moisture (GSM) technique for the top 6 cm of the soil depth. A fourth-order polynomial equation was fitted to each probe calibration curve. The correlation between the TDR and GSM measurement techniques ranged from 0.81 to 0.91. Comparison of the spatial and temporal distribution of soil moisture measured by the TDR and GSM techniques showed very strong similarities. Such TDR probes can be used successfully to replace the GSM technique and measure soil moisture content rapidly and accurately with site-specific calibration.
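
    A sketch of the site-specific calibration step, with hypothetical paired probe/gravimetric samples standing in for the field data; a fourth-order polynomial is fitted exactly as the abstract describes.

```python
import numpy as np

probe = np.array([0.08, 0.12, 0.18, 0.24, 0.30, 0.36])  # probe output (m3/m3)
gsm   = np.array([0.07, 0.11, 0.19, 0.25, 0.29, 0.37])  # oven-dry reference

coeffs = np.polyfit(probe, gsm, 4)       # site-specific 4th-order calibration
calibrated = np.polyval(coeffs, probe)
r = np.corrcoef(calibrated, gsm)[0, 1]
print(f"calibration r = {r:.3f}")
```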

  12. Optimal Sharpening of Compensated Comb Decimation Filters: Analysis and Design

    PubMed Central

    Troncoso Romero, David Ernesto

    2014-01-01

    Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature. PMID:24578674
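
    For orientation, a sketch of the classic Kaiser-Hamming sharpening polynomial sh(H) = 3H^2 - 2H^3 applied to a comb magnitude response; the compensator and the optimization-based sharpening polynomials of the paper are not reproduced here.

```python
import numpy as np

M = 16                                  # decimation factor
w = np.linspace(1e-6, np.pi, 2048)
H = np.abs(np.sin(M * w / 2) / (M * np.sin(w / 2)))  # comb magnitude response
H_sharp = 3 * H**2 - 2 * H**3           # Kaiser-Hamming sharpening polynomial

i = 20  # a frequency bin near the passband edge
print("comb droop (dB):     ", 20 * np.log10(H[i]))
print("sharpened droop (dB):", 20 * np.log10(H_sharp[i]))  # flatter passband
```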

  13. Optimal sharpening of compensated comb decimation filters: analysis and design.

    PubMed

    Troncoso Romero, David Ernesto; Laddomada, Massimiliano; Jovanovic Dolecek, Gordana

    2014-01-01

    Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature.

  14. Polynomial reduction and evaluation of tree- and loop-level CHY amplitudes

    DOE PAGES

    Zlotnikov, Michael

    2016-08-24

    We develop a polynomial reduction procedure that transforms any gauge fixed CHY amplitude integrand for n scattering particles into a σ-moduli multivariate polynomial of what we call the standard form. We show that a standard form polynomial must have a specific ladder type monomial structure, which has finite size at any n, with highest multivariate degree given by (n – 3)(n – 4)/2. This set of monomials spans a complete basis for polynomials with rational coefficients in kinematic data on the support of scattering equations. Subsequently, at tree and one-loop level, we employ the global residue theorem to derive a prescription that evaluates any CHY amplitude by means of collecting simple residues at infinity only. Furthermore, the prescription is then applied explicitly to some tree and one-loop amplitude examples.

  15. Applicability of the polynomial chaos expansion method for personalization of a cardiovascular pulse wave propagation model.

    PubMed

    Huberts, W; Donders, W P; Delhaas, T; van de Vosse, F N

    2014-12-01

    Patient-specific modeling requires model personalization, which can be achieved in an efficient manner by parameter fixing and parameter prioritization. An efficient variance-based method uses generalized polynomial chaos expansion (gPCE), but it has not been applied in the context of model personalization, nor has it ever been compared with standard variance-based methods for models with many parameters. In this work, we apply the gPCE method to a previously reported pulse wave propagation model and compare the conclusions for model personalization with those of a reference analysis performed with Saltelli's efficient Monte Carlo method. We furthermore differentiate two approaches for obtaining the expansion coefficients: one based on spectral projection (gPCE-P) and one based on least squares regression (gPCE-R). It was found that in general the gPCE yields similar conclusions as the reference analysis but at much lower cost, as long as the polynomial metamodel does not contain unnecessary high order terms. Furthermore, the gPCE-R approach generally yielded better results than gPCE-P. The weak performance of the gPCE-P can be attributed to the assessment of the expansion coefficients using the Smolyak algorithm, which might be hampered by the high number of model parameters and/or by possible non-smoothness in the output space. Copyright © 2014 John Wiley & Sons, Ltd.
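
    A minimal sketch of the regression route (gPCE-R) for a single uniform input, with a toy function standing in for the pulse wave propagation model; Legendre polynomials are the matching chaos basis for uniform inputs.

```python
import numpy as np
from numpy.polynomial import legendre

def model(x):                    # placeholder for the expensive model
    return np.sin(np.pi * x) + 0.3 * x**2

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)      # uniformly distributed input samples
y = model(x)

P = 6                            # expansion order
V = legendre.legvander(x, P)     # design matrix of Legendre polynomials
c, *_ = np.linalg.lstsq(V, y, rcond=None)

# variance carried by each mode: E[P_k^2] = 1/(2k+1) on U(-1, 1)
var_k = c[1:]**2 / (2 * np.arange(1, P + 1) + 1)
print("normalized variance shares:", (var_k / var_k.sum()).round(3))
```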

  16. Wavefront correction and high-resolution in vivo OCT imaging with an objective integrated multi-actuator adaptive lens.

    PubMed

    Bonora, Stefano; Jian, Yifan; Zhang, Pengfei; Zam, Azhar; Pugh, Edward N; Zawadzki, Robert J; Sarunic, Marinko V

    2015-08-24

    Adaptive optics is rapidly transforming microscopy and high-resolution ophthalmic imaging. The adaptive elements commonly used to control optical wavefronts are liquid crystal spatial light modulators and deformable mirrors. We introduce a novel Multi-actuator Adaptive Lens that can correct aberrations to high order, and which has the potential to increase the spread of adaptive optics to many new applications by simplifying its integration with existing systems. Our method combines an adaptive lens with an imaged-based optimization control that allows the correction of images to the diffraction limit, and provides a reduction of hardware complexity with respect to existing state-of-the-art adaptive optics systems. The Multi-actuator Adaptive Lens design that we present can correct wavefront aberrations up to the 4th order of the Zernike polynomial characterization. The performance of the Multi-actuator Adaptive Lens is demonstrated in a wide field microscope, using a Shack-Hartmann wavefront sensor for closed loop control. The Multi-actuator Adaptive Lens and image-based wavefront-sensorless control were also integrated into the objective of a Fourier Domain Optical Coherence Tomography system for in vivo imaging of mouse retinal structures. The experimental results demonstrate that the insertion of the Multi-actuator Objective Lens can generate arbitrary wavefronts to correct aberrations down to the diffraction limit, and can be easily integrated into optical systems to improve the quality of aberrated images.

  17. Why High-Order Polynomials Should Not Be Used in Regression Discontinuity Designs. NBER Working Paper No. 20405

    ERIC Educational Resources Information Center

    Gelman, Andrew; Imbens, Guido

    2014-01-01

    It is common in regression discontinuity analysis to control for high order (third, fourth, or higher) polynomials of the forcing variable. We argue that estimators for causal effects based on such methods can be misleading, and we recommend researchers do not use them, and instead use estimators based on local linear or quadratic polynomials or…
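
    A sketch of the comparison on synthetic data with a known jump of 0.5 at the cutoff; both the global quartic and the local linear estimator are evaluated at the threshold.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 500)
y = 0.5 * (x > 0) + np.sin(2 * x) + rng.normal(0, 0.2, x.size)  # true jump 0.5

def jump(x, y, deg, bw=None):
    """Estimated discontinuity at 0 from separate fits on each side."""
    if bw is not None:                    # local fit: keep only |x| < bw
        keep = np.abs(x) < bw
        x, y = x[keep], y[keep]
    left, right = x < 0, x >= 0
    fl = np.polyval(np.polyfit(x[left], y[left], deg), 0.0)
    fr = np.polyval(np.polyfit(x[right], y[right], deg), 0.0)
    return fr - fl

print("global quartic      :", round(jump(x, y, 4), 3))
print("local linear, bw=0.2:", round(jump(x, y, 1, bw=0.2), 3))
```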

  18. Towards standardized assessment of endoscope optical performance: geometric distortion

    NASA Astrophysics Data System (ADS)

    Wang, Quanzeng; Desai, Viraj N.; Ngo, Ying Z.; Cheng, Wei-Chung; Pfefer, Joshua

    2013-12-01

    Technological advances in endoscopes, such as capsule, ultrathin and disposable devices, promise significant improvements in safety, clinical effectiveness and patient acceptance. Unfortunately, the industry lacks test methods for preclinical evaluation of key optical performance characteristics (OPCs) of endoscopic devices that are quantitative, objective and well-validated. As a result, it is difficult for researchers and developers to compare image quality and evaluate equivalence to, or improvement upon, prior technologies. While endoscope OPCs include resolution, field of view, and depth of field, among others, our focus in this paper is geometric image distortion. We reviewed specific test methods for distortion and then developed an objective, quantitative test method based on well-defined experimental and data processing steps to evaluate radial distortion in the full field of view of an endoscopic imaging system. Our measurements and analyses showed that a second-degree polynomial equation could well describe the radial distortion curve of a traditional endoscope. The distortion evaluation method was effective for correcting the image and can be used to explain other widely accepted evaluation methods such as picture height distortion. Development of consensus standards based on promising test methods for image quality assessment, such as the method studied here, will facilitate clinical implementation of innovative endoscopic devices.
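
    A sketch of the fitting step with hypothetical grid-target radii; the second-degree polynomial plays the role of the radial distortion curve the paper reports.

```python
import numpy as np

r_true = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])       # target radii (mm)
r_dist = np.array([0.0, 0.98, 1.92, 2.80, 3.62, 4.38])  # measured radii (mm)

coeffs = np.polyfit(r_true, r_dist, 2)   # r_d = a*r_t^2 + b*r_t + c
denom = np.where(r_true == 0, 1.0, r_true)
local_distortion = 100 * (r_dist - r_true) / denom       # percent, per point
print("fit coefficients:", coeffs.round(4))
print("local radial distortion (%):", local_distortion.round(1))
```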

  19. Measurement of EUV lithography pupil amplitude and phase variation via image-based methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levinson, Zachary; Verduijn, Erik; Wood, Obert R.

    2016-04-01

    Here, an approach to image-based EUV aberration metrology using binary mask targets and iterative model-based solutions to extract both the amplitude and phase components of the aberrated pupil function is presented. The approach is enabled through previously developed modeling, fitting, and extraction algorithms. We seek to examine the behavior of pupil amplitude variation in real-optical systems. Optimized target images were captured under several conditions to fit the resulting pupil responses. Both the amplitude and phase components of the pupil function were extracted from a zone-plate-based EUV mask microscope. The pupil amplitude variation was expanded in three different bases: Zernike polynomials, Legendre polynomials, and Hermite polynomials. It was found that the Zernike polynomials describe pupil amplitude variation most effectively of the three.

  20. On Polynomial Solutions of Linear Differential Equations with Polynomial Coefficients

    ERIC Educational Resources Information Center

    Si, Do Tan

    1977-01-01

    Demonstrates a method for solving linear differential equations with polynomial coefficients based on the fact that the operators z and D + d/dz are known to be Hermitian conjugates with respect to the Bargman and Louck-Galbraith scalar products. (MLH)

  1. Design Method for Numerical Function Generators Based on Polynomial Approximation for FPGA Implementation

    DTIC Science & Technology

    2007-08-01

    with a Design Specification described by Scilab [26], a MATLAB-like software application, and ends up with HDL code. The Design Specification...Conf. on Field Programmable Logic and Applications (FPL'05), Tampere, Finland, pp. 118–123, Aug. 2005. [26] Scilab 3.0, INRIA-ENPC, France, http

  2. Polynomial fuzzy observer designs: a sum-of-squares approach.

    PubMed

    Tanaka, Kazuo; Ohtake, Hiroshi; Seo, Toshiaki; Tanaka, Motoyasu; Wang, Hua O

    2012-10-01

    This paper presents a sum-of-squares (SOS) approach to polynomial fuzzy observer designs for three classes of polynomial fuzzy systems. The proposed SOS-based framework provides a number of innovations and improvements over the existing linear matrix inequality (LMI)-based approaches to Takagi-Sugeno (T-S) fuzzy controller and observer designs. First, we briefly summarize previous results with respect to a polynomial fuzzy system that is a more general representation of the well-known T-S fuzzy system. Next, we propose polynomial fuzzy observers to estimate states in three classes of polynomial fuzzy systems and derive SOS conditions to design polynomial fuzzy controllers and observers. A remarkable feature of the SOS design conditions for the first two classes (Classes I and II) is that they realize the so-called separation principle, i.e., the polynomial fuzzy controller and observer for each class can be designed separately without compromising the stability guarantee for the overall control system, in addition to convergence of the state-estimation error (via the observer) to zero. Although the separation principle does not hold for the last class (Class III), we propose an algorithm to design a polynomial fuzzy controller and observer that satisfy the stability of the overall control system in addition to convergence of the state-estimation error (via the observer) to zero. All the design conditions in the proposed approach can be represented in terms of SOS and are symbolically and numerically solved via the recently developed SOSTOOLS and a semidefinite-program solver, respectively. To illustrate the validity and applicability of the proposed approach, three design examples are provided. The examples demonstrate the advantages of the SOS-based approach over the existing LMI approaches to T-S fuzzy observer designs.

  3. Unlabeled Oligonucleotides as Internal Temperature Controls for Genotyping by Amplicon Melting

    PubMed Central

    Seipp, Michael T.; Durtschi, Jacob D.; Liew, Michael A.; Williams, Jamie; Damjanovich, Kristy; Pont-Kingdon, Genevieve; Lyon, Elaine; Voelkerding, Karl V.; Wittwer, Carl T.

    2007-01-01

    Amplicon melting is a closed-tube method for genotyping that does not require probes, real-time analysis, or allele-specific polymerase chain reaction. However, correct differentiation of homozygous mutant and wild-type samples by melting temperature (Tm) requires high-resolution melting and closely controlled reaction conditions. When three different DNA extraction methods were used to isolate DNA from whole blood, amplicon Tm differences of 0.03 to 0.39°C attributable to the extractions were observed. To correct for solution chemistry differences between samples, complementary unlabeled oligonucleotides were included as internal temperature controls to shift and scale the temperature axis of derivative melting plots. This adjustment was applied to a duplex amplicon melting assay for the methylenetetrahydrofolate reductase variants 1298A>C and 677C>T. High- and low-temperature controls bracketing the amplicon melting region decreased the Tm SD within homozygous genotypes by 47 to 82%. The amplicon melting assay was 100% concordant to an adjacent hybridization probe (HybProbe) melting assay when temperature controls were included, whereas a 3% error rate was observed without temperature correction. In conclusion, internal temperature controls increase the accuracy of genotyping by high-resolution amplicon melting and should also improve results on lower resolution instruments. PMID:17591926

  4. Total Water Content Measurements with an Isokinetic Sampling Probe

    NASA Technical Reports Server (NTRS)

    Reehorst, Andrew L.; Miller, Dean R.; Bidwell, Colin S.

    2010-01-01

    The NASA Glenn Research Center has developed a Total Water Content (TWC) Isokinetic Sampling Probe. Since it is sensitive to neither cloud water particle phase nor size, it is particularly attractive for supporting super-cooled large droplet and high ice water content aircraft icing studies. The instrument is comprised of the Sampling Probe, Sample Flow Control, and Water Vapor Measurement subsystems. Analysis and testing have been conducted on the subsystems to ensure their proper function and accuracy. End-to-end bench testing has also been conducted to ensure the reliability of the entire instrument system. A Stokes Number based collection efficiency correction was developed to correct for probe thickness effects. The authors further discuss the need to ensure that no condensation occurs within the instrument plumbing. Instrument measurements compared to facility calibrations from testing in the NASA Glenn Icing Research Tunnel are presented and discussed. There appear to be liquid water content and droplet size effects in the differences between the two measurement techniques.

  5. A new interpretation and validation of variance based importance measures for models with correlated inputs

    NASA Astrophysics Data System (ADS)

    Hao, Wenrui; Lu, Zhenzhou; Li, Luyi

    2013-05-01

    In order to explore the contributions of correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for a model with correlated inputs, which includes the indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the contributions of the correlated inputs to the variance of the output, and they can be viewed as the complement and correction of the interpretation of the contributions of correlated inputs presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both of them contain the independent contribution by an individual input. Taking the general form of a quadratic polynomial as an illustration, the total correlated contribution and the independent contribution by an individual input are derived analytically, from which the components and origins of both contributions of a correlated input can be clarified without ambiguity. In the special case that no square term is included in the quadratic polynomial model, the total correlated contribution by the input can be further decomposed into the variance contribution related to the correlation of the input with other inputs and the independent contribution by the input itself, and the total uncorrelated contribution can be further decomposed into the independent part from interaction between the input and others and the independent part from the input itself. Numerical examples are employed and their results demonstrate that the derived analytical expressions of the variance-based importance measure are correct, and that the clarification of the correlated input contribution to the model output by the analytical derivation is important for extending the theory and solutions for uncorrelated inputs to correlated ones.

  6. Restoring the lattice of Si-based atom probe reconstructions for enhanced information on dopant positioning.

    PubMed

    Breen, Andrew J; Moody, Michael P; Ceguerra, Anna V; Gault, Baptiste; Araullo-Peters, Vicente J; Ringer, Simon P

    2015-12-01

    The following manuscript presents a novel approach for creating lattice-based models of Sb-doped Si directly from atom probe reconstructions, for the purposes of improving information on dopant positioning and directly informing quantum-mechanics-based materials modeling approaches. Sophisticated crystallographic analysis techniques are used to detect latent crystal structure within the atom probe reconstructions with unprecedented accuracy. A distortion correction algorithm is then developed to precisely calibrate the detected crystal structure to the theoretically known diamond cubic lattice. The reconstructed atoms are then positioned on their most likely lattice positions. Simulations are used to determine the accuracy of this approach and show that improvements to short-range order measurements are possible for noise levels and detector efficiencies comparable with experimentally collected atom probe data. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. A new class of homogeneous nucleic acid probes based on specific displacement hybridization

    PubMed Central

    Li, Qingge; Luan, Guoyan; Guo, Qiuping; Liang, Jixuan

    2002-01-01

    We have developed a new class of probes for homogeneous nucleic acid detection based on the proposed displacement hybridization. Our probes consist of two complementary oligodeoxyribonucleotides of different length labeled with a fluorophore and a quencher in close proximity in the duplex. The probes on their own are quenched, but they become fluorescent upon displacement hybridization with the target. These probes display complete discrimination between a perfectly matched target and single nucleotide mismatch targets. A comparison of double-stranded probes with corresponding linear probes confirms that the presence of the complementary strand significantly enhances their specificity. Using four such probes labeled with different color fluorophores, each designed to recognize a different target, we have demonstrated that multiple targets can be distinguished in the same solution, even if they differ from one another by as little as a single nucleotide. Double-stranded probes were used in real-time nucleic acid amplifications as either probes or as primers. In addition to its extreme specificity and flexibility, the new class of probes is simple to design and synthesize, has low cost and high sensitivity and is accessible to a wide range of labels. This class of probes should find applications in a variety of areas wherever high specificity of nucleic acid hybridization is relevant. PMID:11788731

  8. Application specific serial arithmetic arrays

    NASA Technical Reports Server (NTRS)

    Winters, K.; Mathews, D.; Thompson, T.

    1990-01-01

    High-performance systolic arrays of serial-parallel multiplier elements may be rapidly constructed for specific applications by applying hardware description language techniques to a library of full-custom CMOS building blocks. Single-clock precharged circuits have been implemented for these arrays at clock rates in excess of 100 MHz using economical 2-micron (minimum feature size) CMOS processes, and they may be quickly configured for a variety of applications. A number of application-specific arrays are presented, including a 2-D convolver for image processing, an integer polynomial solver, and a finite-field polynomial solver.

  9. Analysis of nodal aberration properties in off-axis freeform system design.

    PubMed

    Shi, Haodong; Jiang, Huilin; Zhang, Xin; Wang, Chao; Liu, Tao

    2016-08-20

    Freeform surfaces have the advantage of balancing off-axis aberration. In this paper, based on the framework of nodal aberration theory (NAT) applied to the coaxial system, the third-order astigmatism and coma wave aberration expressions of an off-axis system with Zernike polynomial surfaces are derived. The relationship between the off-axis and surface shape acting on the nodal distributions is revealed. The nodal aberration properties of the off-axis freeform system are analyzed and validated by using full-field displays (FFDs). It has been demonstrated that adding Zernike terms, up to nine, to the off-axis system modifies the nodal locations, but the field dependence of the third-order aberration does not change. On this basis, an off-axis two-mirror freeform system with 500 mm effective focal length (EFL) and 300 mm entrance pupil diameter (EPD) working in long-wave infrared is designed. The field constant aberrations induced by surface tilting are corrected by selecting specific Zernike terms. The design results show that the nodes of third-order astigmatism and coma move back into the field of view (FOV). The modulation transfer function (MTF) curves are above 0.4 at 20 line pairs per millimeter (lp/mm) which meets the infrared reconnaissance requirement. This work provides essential insight and guidance for aberration correction in off-axis freeform system design.

  10. Algorithms and applications of aberration correction and American standard-based digital evaluation in surface defects evaluating system

    NASA Astrophysics Data System (ADS)

    Wu, Fan; Cao, Pin; Yang, Yongying; Li, Chen; Chai, Huiting; Zhang, Yihui; Xiong, Haoliang; Xu, Wenlin; Yan, Kai; Zhou, Lin; Liu, Dong; Bai, Jian; Shen, Yibing

    2016-11-01

    The inspection of surface defects is a significant part of optical surface quality evaluation. Based on microscopic scattering dark-field imaging, sub-aperture scanning and stitching, the Surface Defects Evaluating System (SDES) can acquire a full-aperture image of defects on an optical element's surface and then extract geometric size and position information of the defects with image processing such as feature recognition. However, optical distortion in the SDES badly degrades the inspection precision of surface defects. In this paper, a distortion correction algorithm based on a standard lattice pattern is proposed. Feature extraction, polynomial fitting and bilinear interpolation techniques, in combination with adjacent sub-aperture stitching, are employed to correct the optical distortion of the SDES automatically and with high accuracy. Subsequently, in order to digitally evaluate surface defects against the American military standard MIL-PRF-13830B using the defect information obtained from the SDES, an American standard-based digital evaluation algorithm is proposed, which mainly includes a judgment method for surface defect concentration. The judgment method establishes a weight region for each defect and calculates defect concentration from the overlap of weight regions. This algorithm takes full advantage of the convenience of matrix operations and has the merits of low complexity and fast execution, which makes it well suited to high-efficiency inspection of surface defects. Finally, various experiments are conducted and the correctness of these algorithms is verified. At present, these algorithms are in use in the SDES.

  11. Polynomial interpretation of multipole vectors

    NASA Astrophysics Data System (ADS)

    Katz, Gabriel; Weeks, Jeff

    2004-09-01

    Copi, Huterer, Starkman, and Schwarz introduced multipole vectors in a tensor context and used them to demonstrate that the first-year Wilkinson Microwave Anisotropy Probe (WMAP) quadrupole and octopole planes align at roughly the 99.9% confidence level. In the present article, the language of polynomials provides a new and independent derivation of the multipole vector concept. Bézout's theorem supports an elementary proof that the multipole vectors exist and are unique (up to rescaling). The constructive nature of the proof leads to a fast, practical algorithm for computing multipole vectors. We illustrate the algorithm by finding exact solutions for some simple toy examples and numerical solutions for the first-year WMAP quadrupole and octopole. We then apply our algorithm to Monte Carlo skies to independently reconfirm the estimate that the WMAP quadrupole and octopole planes align at the 99.9% level.

  12. Image simulation for electron energy loss spectroscopy

    DOE PAGES

    Oxley, Mark P.; Pennycook, Stephen J.

    2007-10-22

    In this paper, aberration correction of the probe-forming optics of the scanning transmission electron microscope has allowed the probe-forming aperture to be increased in size, resulting in probes on the order of 1 Å in diameter. The next generation of correctors promises even smaller probes. Improved spectrometer optics also offers the possibility of larger electron energy loss spectrometry detectors. The localization of images based on core-loss electron energy loss spectroscopy is examined as a function of both probe-forming aperture and detector size. The effective ionization is nonlocal in nature, and two common local approximations are compared to full nonlocal calculations. Finally, the effect of the channelling of the electron probe within the sample is also discussed.

  13. Theoretical study of the dynamic magnetic response of ferrofluid to static and alternating magnetic fields

    NASA Astrophysics Data System (ADS)

    Batrudinov, Timur M.; Ambarov, Alexander V.; Elfimova, Ekaterina A.; Zverev, Vladimir S.; Ivanov, Alexey O.

    2017-06-01

    The dynamic magnetic response of ferrofluid in a static uniform external magnetic field to a weak, linearly polarized, alternating magnetic field is investigated theoretically. The ferrofluid is modeled as a system of dipolar hard spheres, suspended in a long cylindrical tube whose long axis is parallel to the direction of the static and alternating magnetic fields. The theory is based on the Fokker-Planck-Brown equation formulated for the case when both the static and alternating magnetic fields are applied. The solution of the Fokker-Planck-Brown equation describing the orientational probability density of a randomly chosen dipolar particle is expressed as a series in terms of the spherical Legendre polynomials. The obtained analytical expression connecting three neighboring coefficients of the series makes it possible to determine the probability density with any order of accuracy in terms of Legendre polynomials. The analytical formula for the probability density truncated at the first Legendre polynomial is evaluated and used for the calculation of the magnetization and dynamic susceptibility spectra. In the absence of the static magnetic field the presented theory gives the correct single-particle Debye-theory result, which is the exact solution of the Fokker-Planck-Brown equation for the case of an applied weak alternating magnetic field. The influence of the static magnetic field on the dynamic susceptibility is analyzed in terms of the low-frequency behavior of the real part and the position of the peak in the imaginary part.

  14. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers

    PubMed Central

    Dobie, Robert A; Wojcik, Nancy C

    2015-01-01

    Objectives The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999–2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Methods Regression analysis was used to derive new age-correction values using audiometric data from the 1999–2006 US NHANES. Using the NHANES median, better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20–75 years. Results The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20–75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61–75 years. Conclusions Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers. PMID:26169804
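
    A sketch of the regression approach, with made-up median thresholds standing in for the NHANES estimates; corrections are tabulated relative to age 20, mirroring how age-correction tables are applied.

```python
import numpy as np

age = np.array([20, 30, 40, 50, 60, 70, 75])
thr_4k = np.array([2.0, 4.0, 8.0, 14.0, 22.0, 33.0, 39.0])  # dB HL at 4 kHz

p = np.polyfit(age, thr_4k, 3)                   # simple polynomial equation
correction = np.polyval(p, age) - np.polyval(p, 20)
for a, c in zip(age, correction):
    print(f"age {a}: correction {c:5.1f} dB")
```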

  15. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers.

    PubMed

    Dobie, Robert A; Wojcik, Nancy C

    2015-07-13

    The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Regression analysis was used to derive new age-correction values using audiometric data from the 1999-2006 US NHANES. Using the NHANES median, better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20-75 years. The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20-75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61-75 years. Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Published by the BMJ Publishing Group Limited.

  16. Estimating Extracellular Spike Waveforms from CA1 Pyramidal Cells with Multichannel Electrodes

    PubMed Central

    Molden, Sturla; Moldestad, Olve; Storm, Johan F.

    2013-01-01

    Extracellular (EC) recordings of action potentials from the intact brain are embedded in background voltage fluctuations known as the "local field potential" (LFP). In order to use EC spike recordings for studying biophysical properties of neurons, the spike waveforms must be separated from the LFP. Linear low-pass and high-pass filters are usually insufficient to separate spike waveforms from the LFP, because they have overlapping frequency bands. Broad-band recordings of LFP and spikes were obtained with a 16-channel laminar electrode array (silicone probe). We developed an algorithm whereby the local LFP signal from a spike-containing channel was modeled using locally weighted polynomial regression analysis of adjoining channels without spikes. The modeled LFP signal was subtracted from the recording to estimate the embedded spike waveforms. We tested the method both on defined spike waveforms added to LFP recordings, and on in vivo-recorded extracellular spikes from hippocampal CA1 pyramidal cells in anaesthetized mice. We show that the algorithm can correctly extract the spike waveforms embedded in the LFP. In contrast, traditional high-pass filters failed to recover correct spike shapes, albeit producing smaller standard errors. We found that high-pass RC or 2-pole Butterworth filters with cut-off frequencies below 12.5 Hz are required to retrieve waveforms comparable to our method. The method was also compared to spike-triggered averages of the broad-band signal, and yielded waveforms with smaller standard errors and less distortion before and after the spike. PMID:24391714
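
    A simplified sketch of the subtraction idea, using plain least-squares polynomial fits across the spike-free channels at each time sample; the published method uses locally weighted regression, which is omitted here for brevity.

```python
import numpy as np

def estimate_spike(data, spike_ch, deg=3):
    """Recover a spike waveform embedded in the LFP.

    data     : (n_channels, n_samples) broad-band laminar recording
    spike_ch : index of the channel containing the spike
    """
    n_ch, n_t = data.shape
    depth = np.arange(n_ch)
    others = depth != spike_ch
    spike = np.empty(n_t)
    for t in range(n_t):
        # model the spatial LFP profile from the spike-free channels
        p = np.polyfit(depth[others], data[others, t], deg)
        spike[t] = data[spike_ch, t] - np.polyval(p, spike_ch)
    return spike
```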

  17. Numerical solutions for Helmholtz equations using Bernoulli polynomials

    NASA Astrophysics Data System (ADS)

    Bicer, Kubra Erdem; Yalcinbas, Salih

    2017-07-01

    This paper reports a new numerical method based on Bernoulli polynomials for the solution of Helmholtz equations. The method uses matrix forms of Bernoulli polynomials and their derivatives by means of collocation points. The aim of this paper is to solve Helmholtz equations using these matrix relations.

  18. PCR-enzyme-linked immunosorbent assay and partial rRNA gene sequencing: a rational approach to identifying mycobacteria.

    PubMed Central

    Patel, S; Yates, M; Saunders, N A

    1997-01-01

    A PCR-enzyme-linked immunosorbent assay (ELISA) for amplification and rapid identification of mycobacterial DNA coding for 16S rRNA was developed. The PCR selectively targeted and amplified part of the 16S rRNA gene from all mycobacteria while simultaneously labelling one strand of the amplified product with a 5' fluorescein-labelled primer. The identity of the labelled strand was subsequently determined by hybridization to a panel of mycobacterial species-specific capture probes, which were immobilized via their 5' biotin ends to a streptavidin-coated microtiter plate. Specific hybridization of a 5' fluorescein-labelled strand to a species probe was detected colorimetrically with an anti-fluorescein enzyme conjugate. The assay was able to identify 10 Mycobacterium spp. A probe able to hybridize to all Mycobacterium species (All1) was also included. By a heminested PCR, the assay was sensitive enough to detect as little as 10 fg of DNA, which is equivalent to approximately three bacilli. The assay was able to detect and identify mycobacteria directly from sputa. The specificities of the capture probes were assessed by analysis of 60 mycobacterial strains corresponding to 18 species. Probes Avi1, Int1, Kan1, Xen1, Che1, For1, Mal1, Ter1, and Gor1 were specific. The probe Tbc1 cross-hybridized with the Mycobacterium terrae amplicon. Analysis of 35 strains tested blind resulted in 34 strains being correctly identified. This method could be used for rapid identification of early cultures and may be suitable for the detection and concurrent identification of mycobacteria within clinical specimens. PMID:9276419

  19. A stable and high-order accurate discontinuous Galerkin based splitting method for the incompressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Piatkowski, Marian; Müthing, Steffen; Bastian, Peter

    2018-03-01

    In this paper we consider discontinuous Galerkin (DG) methods for the incompressible Navier-Stokes equations in the framework of projection methods. In particular we employ symmetric interior penalty DG methods within the second-order rotational incremental pressure correction scheme. The major focus of the paper is threefold: i) We propose a modified upwind scheme based on the Vijayasundaram numerical flux that has favourable properties in the context of DG. ii) We present a novel postprocessing technique in the Helmholtz projection step based on H (div) reconstruction of the pressure correction that is computed locally, is a projection in the discrete setting and ensures that the projected velocity satisfies the discrete continuity equation exactly. As a consequence it also provides local mass conservation of the projected velocity. iii) Numerical results demonstrate the properties of the scheme for different polynomial degrees applied to two-dimensional problems with known solution as well as large-scale three-dimensional problems. In particular we address second-order convergence in time of the splitting scheme as well as its long-time stability.

  20. Color calibration of swine gastrointestinal tract images acquired by radial imaging capsule endoscope

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Wu, Hsien-Ming; Lin, Jyh-Hung

    2016-01-01

    The type of illumination systems and color filters used typically generate varying levels of color difference in capsule endoscopes, which influence medical diagnoses. In order to calibrate the color difference caused by the optical system, this study applied a radial imaging capsule endoscope (RICE) to photograph standard color charts, which were then employed to calculate the color gamut of the RICE. The color gamut was also measured using a spectrometer in order to obtain high-precision color information, and the results obtained using both methods were compared. Subsequently, color-correction methods, namely polynomial transform and conformal mapping, were used to reduce the color difference. Before color calibration, the color difference value caused by the influences of the optical systems in the RICE was 21.45±1.09. Through the proposed polynomial transformation, the color difference could be reduced effectively to 1.53±0.07. With the alternative conformal mapping, the color difference value was further reduced to 1.32±0.11; this color difference is imperceptible to the human eye because it is <1.5. Then, real-time color correction was achieved using this algorithm combined with a field-programmable gate array, and the results of the color correction can be viewed in real-time images.
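
    A sketch of the polynomial-transform correction, assuming a second-order RGB feature expansion and placeholder chart values; the conformal mapping variant is not reproduced here.

```python
import numpy as np

def poly_features(rgb):
    """Second-order polynomial expansion of RGB triplets."""
    r, g, b = rgb.T
    ones = np.ones_like(r)
    return np.stack([ones, r, g, b, r*g, r*b, g*b, r*r, g*g, b*b], axis=1)

rng = np.random.default_rng(3)
measured = rng.uniform(0, 1, (24, 3))             # camera RGB of 24 patches
reference = np.clip(measured * 1.1 - 0.03, 0, 1)  # chart ground-truth RGB

A, *_ = np.linalg.lstsq(poly_features(measured), reference, rcond=None)
corrected = poly_features(measured) @ A
print("max residual after correction:", np.abs(corrected - reference).max())
```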

  1. The statistics of relativistic electron pitch angle distribution in the Earth's radiation belt based on the Van Allen Probes measurements

    NASA Astrophysics Data System (ADS)

    Zhao, H.; Freidel, R. H. W.; Chen, Y.; Henderson, M. G.; Kanekal, S. G.; Baker, D. N.; Spence, H. E.; Reeves, G. D.

    2015-12-01

    The relativistic electron pitch angle distribution (PAD) is an important characteristic of radiation belt electrons, which can give information on source or loss processes in a specific region. Using data from the MagEIS and REPT instruments onboard the Van Allen Probes, a statistical survey of relativistic electron PADs is performed. By fitting relativistic electron PADs to Legendre polynomials, an empirical model of PADs as a function of L (from 1.4 to 6), MLT, electron energy (~100 keV - 5 MeV), and geomagnetic activity is developed, and many intriguing features are found. In the outer radiation belt, an unexpected dawn/dusk asymmetry of ultra-relativistic electrons is found during quiet times, with the asymmetry becoming stronger at higher energies and at higher L shells. This may indicate the existence of physical processes acting on the relativistic electrons on timescales of the order of the drift period, or be a signature of the partial ring current. In the inner belt and slot region, PADs of 100s of keV electrons with minima at 90° are shown to be persistent in the inner belt and to appear in the slot region during storm times. The model also shows clear energy and L shell dependence of the 90°-minimum pitch angle distribution. On the other hand, head-and-shoulder pitch angle distributions are found during quiet times in the slot region, and the energy, L shell and geomagnetic activity dependences of those PADs are consistent with wave-particle interaction caused by hiss waves.
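
    A sketch of the fitting step on a synthetic pancake-like PAD; flux is expanded in Legendre polynomials of cos(alpha), as in the empirical model described above.

```python
import numpy as np
from numpy.polynomial import legendre

alpha = np.radians(np.arange(10, 171, 10))  # pitch angles (degrees -> rad)
flux = 1.0 + 0.6 * np.sin(alpha)**2         # toy pancake distribution

x = np.cos(alpha)
c = legendre.legfit(x, flux, deg=4)         # Legendre expansion coefficients
print("c0..c4:", c.round(3))                # odd terms ~0 by symmetry
```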

  2. Characterization of a 29.4-kilodalton structural protein of Giardia lamblia and localization to the ventral disk [corrected]

    PubMed Central

    Aggarwal, A; Adam, R D; Nash, T E

    1989-01-01

    The amino acid sequence of a 29.4-kilodalton [corrected] structural protein located in the ventral disk and axostyle of Giardia lamblia was determined. Clone lambda M16 from a mung bean expression library in lambda gt11 expressed a fusion protein recognized by three different isolate-specific antisera and sera from G. lamblia-infected gerbils. One of the three EcoRI fragments (M16; 1.26 kilobases) encoded the recognized protein. Sequence analysis revealed a single open reading frame of 813 base pairs. Two areas showed conservation of the positions of some amino acids. The abundance of arginine, glutamic acid, and threonine was increased. Two potential alpha-helical regions were deduced in the regions of repeats. Antisera to the M16 fusion protein reacted specifically with internal components of the ventral disk and axostyle, as well as Giardia fractions enriched for ventral disk structural proteins. An identical protein was recognized in different isolates by anti-M16, and a single identical band was recognized in Southern blots using the M16 1.26-kilobase fragment as a probe. Therefore, the 29.4-kilodalton [corrected] protein appears to be highly conserved compared with variant surface proteins. PMID:2925253

  3. Ensemble kalman filtering to perform data assimilation with soil water content probes and pedotransfer functions in modeling water flow in variably saturated soils

    USDA-ARS?s Scientific Manuscript database

    Data from modern soil water contents probes can be used for data assimilation in soil water flow modeling, i.e. continual correction of the flow model performance based on observations. The ensemble Kalman filter appears to be an appropriate method for that. The method requires estimates of the unce...

  4. Development and Evaluation of a Hydrostatic Dynamical Core Using the Spectral Element/Discontinuous Galerkin Methods

    DTIC Science & Technology

    2014-04-01

    The CG and DG horizontal discretizations employ high-order nodal basis functions associated with Lagrange polynomials based on Gauss-Lobatto-Legendre (GLL) quadrature points. Inside each element we build (N + 1) GLL quadrature points, where N indicates the polynomial order of the basis.

  5. Tympanometry in infants: a study of the sensitivity and specificity of 226-Hz and 1,000-Hz probe tones.

    PubMed

    Carmo, Michele Picanço; Costa, Nayara Thais de Oliveira; Momensohn-Santos, Teresa Maria

    2013-10-01

    Introduction For infants under 6 months, the literature recommends 1,000-Hz tympanometry, which has a greater sensitivity for the correct identification of middle ear disorders in this population. Objective To systematically analyze national and international publications found in electronic databases that used tympanometry with 226-Hz and 1,000-Hz probe tones. Data Synthesis Initially, we identified 36 articles in the SciELO database, 11 in the Latin American and Caribbean Literature on the Health Sciences (LILACS) database, 199 in MEDLINE, 0 in the Cochrane database, 16 in ISI Web of Knowledge, and 185 in the Scopus database. We excluded 433 articles because they did not fit the selection criteria, leaving 14 publications that were analyzed in their entirety. Conclusions The 1,000-Hz tone test has greater sensitivity and specificity for the correct identification of tympanometric curve changes. However, it is necessary to clarify the doubts that still exist regarding the use of this test frequency. Improved methods for rating curves, standardization of normality criteria, and the types of curves found in infants should be addressed.

  6. Tympanometry in Infants: A Study of the Sensitivity and Specificity of 226-Hz and 1,000-Hz Probe Tones

    PubMed Central

    Carmo, Michele Picanço; Costa, Nayara Thais de Oliveira; Momensohn-Santos, Teresa Maria

    2013-01-01

    Introduction For infants under 6 months, the literature recommends 1,000-Hz tympanometry, which has a greater sensitivity for the correct identification of middle ear disorders in this population. Objective To systematically analyze national and international publications found in electronic databases that used tympanometry with 226-Hz and 1,000-Hz probe tones. Data Synthesis Initially, we identified 36 articles in the SciELO database, 11 in the Latin American and Caribbean Literature on the Health Sciences (LILACS) database, 199 in MEDLINE, 0 in the Cochrane database, 16 in ISI Web of Knowledge, and 185 in the Scopus database. We excluded 433 articles because they did not fit the selection criteria, leaving 14 publications that were analyzed in their entirety. Conclusions The 1,000-Hz tone test has greater sensitivity and specificity for the correct identification of tympanometric curve changes. However, it is necessary to clarify the doubts that still exist regarding the use of this test frequency. Improved methods for rating curves, standardization of normality criteria, and the types of curves found in infants should be addressed. PMID:25992044

  7. Correction of geometric distortion in Propeller echo planar imaging using a modified reversed gradient approach.

    PubMed

    Chang, Hing-Chiu; Chuang, Tzu-Chao; Lin, Yi-Ru; Wang, Fu-Nien; Huang, Teng-Yi; Chung, Hsiao-Wen

    2013-04-01

    This study investigates the application of a modified reversed gradient algorithm to the Propeller-EPI imaging method (periodically rotated overlapping parallel lines with enhanced reconstruction based on echo-planar imaging readout) for corrections of geometric distortions due to the EPI readout. Propeller-EPI acquisition was executed with 360-degree rotational coverage of the k-space, from which the image pairs with opposite phase-encoding gradient polarities were extracted for reversed gradient geometric and intensity corrections. The spatial displacements obtained on a pixel-by-pixel basis were fitted using a two-dimensional polynomial followed by low-pass filtering to assure correction reliability in low-signal regions. Single-shot EPI images were obtained on a phantom, whereas high spatial resolution T2-weighted and diffusion tensor Propeller-EPI data were acquired in vivo from healthy subjects at 3.0 Tesla, to demonstrate the effectiveness of the proposed algorithm. Phantom images show success of the smoothed displacement map concept in providing improvements of the geometric corrections at low-signal regions. Human brain images demonstrate prominently superior reconstruction quality of Propeller-EPI images with modified reversed gradient corrections as compared with those obtained without corrections, as evidenced from verification against the distortion-free fast spin-echo images at the same level. The modified reversed gradient method is an effective approach to obtain high-resolution Propeller-EPI images with substantially reduced artifacts.
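
    A sketch of the displacement-map fitting step, assuming a synthetic pixelwise shift field and an illustrative polynomial degree; the subsequent low-pass filtering stage is omitted.

```python
import numpy as np

def fit_displacement(yy, xx, d, deg=3):
    """Least-squares two-dimensional polynomial fit of displacements d(y, x)."""
    terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.stack([yy**i * xx**j for i, j in terms], axis=1)
    coef, *_ = np.linalg.lstsq(A, d, rcond=None)
    return A @ coef  # smoothed displacement evaluated at the same pixels

yy, xx = np.mgrid[0:64, 0:64].reshape(2, -1) / 64.0  # normalized pixel grid
d = 0.5 * yy**2 - 0.2 * xx + np.random.default_rng(4).normal(0, 0.05, yy.size)
print("residual rms:", np.std(d - fit_displacement(yy, xx, d)).round(4))
```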

  8. Magnetic Resonance Imaging-derived Flow Parameters for the Analysis of Cardiovascular Diseases and Drug Development.

    PubMed

    Michael, Dada O; Bamidele, Awojoyogbe O; Adewale, Adesola O; Karem, Boubaker

    2013-01-01

    Nuclear magnetic resonance (NMR) allows for fast, accurate and noninvasive measurement of fluid flow in restricted and non-restricted media. The results of such measurements may be possible for a very small B0 field and can be enhanced through detailed examination of generating functions that may arise from polynomial solutions of NMR flow equations in terms of Legendre polynomials and Boubaker polynomials. The generating functions of these polynomials can present an array of interesting possibilities that may be useful for understanding the basic physics of extracting relevant NMR flow information from which various hemodynamic problems can be carefully studied. Specifically, these results may be used to develop effective drugs for cardiovascular-related diseases.

  9. Magnetic Resonance Imaging-derived Flow Parameters for the Analysis of Cardiovascular Diseases and Drug Development

    PubMed Central

    Michael, Dada O.; Bamidele, Awojoyogbe O.; Adewale, Adesola O.; Karem, Boubaker

    2013-01-01

    Nuclear magnetic resonance (NMR) allows for fast, accurate and noninvasive measurement of fluid flow in restricted and non-restricted media. The results of such measurements may be possible for a very small B0 field and can be enhanced through detailed examination of generating functions that may arise from polynomial solutions of NMR flow equations in terms of Legendre polynomials and Boubaker polynomials. The generating functions of these polynomials can present an array of interesting possibilities that may be useful for understanding the basic physics of extracting relevant NMR flow information from which various hemodynamic problems can be carefully studied. Specifically, these results may be used to develop effective drugs for cardiovascular-related diseases. PMID:25114546

  10. Generalized neurofuzzy network modeling algorithms using Bézier-Bernstein polynomial functions and additive decomposition.

    PubMed

    Hong, X; Harris, C J

    2000-01-01

    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The approach is generalized in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. This new construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, together with the additional advantages of structural parsimony and Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. This new modeling network is based on an additive decomposition approach together with two separate basis-function formation approaches for the univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
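
    An illustrative check (not the paper's construction) of why the Bernstein basis can serve as fuzzy membership functions: it is nonnegative and forms a partition of unity.

```python
# Illustrative check that the univariate Bernstein basis is nonnegative and
# sums to one, the properties cited above for fuzzy membership functions.
from math import comb
import numpy as np

def bernstein_basis(n, x):
    """Values B_{k,n}(x) = C(n,k) x^k (1-x)^(n-k) for k = 0..n, x in [0,1]."""
    x = np.asarray(x, dtype=float)
    return np.stack([comb(n, k) * x**k * (1 - x)**(n - k)
                     for k in range(n + 1)], axis=-1)

B = bernstein_basis(3, np.linspace(0, 1, 5))
assert (B >= 0).all()                      # nonnegativity
assert np.allclose(B.sum(axis=-1), 1.0)    # partition of unity
```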

  11. A Polynomial Subset-Based Efficient Multi-Party Key Management System for Lightweight Device Networks.

    PubMed

    Mahmood, Zahid; Ning, Huansheng; Ghafoor, AtaUllah

    2017-03-24

    Wireless Sensor Networks (WSNs) consist of lightweight devices to measure sensitive data that are highly vulnerable to security attacks due to their constrained resources. In a similar manner, the internet-based lightweight devices used in the Internet of Things (IoT) are facing severe security and privacy issues because of the direct accessibility of devices due to their connection to the internet. Complex and resource-intensive security schemes are infeasible and reduce the network lifetime. In this regard, we have explored the polynomial distribution-based key establishment schemes and identified an issue that the resultant polynomial value is either storage intensive or infeasible when large values are multiplied. It becomes more costly when these polynomials are regenerated dynamically after each node join or leave operation and whenever the key is refreshed. To reduce the computation, we have proposed an Efficient Key Management (EKM) scheme for multiparty communication-based scenarios. The proposed session key management protocol is established by applying a symmetric polynomial for group members, and the group head acts as a responsible node. The polynomial generation method uses security credentials and a secure hash function. Symmetric cryptographic parameters are efficient in terms of computation, communication, and storage. The security justification of the proposed scheme has been completed by using Rubin logic, which guarantees that the protocol strongly attains mutual validation and the session key agreement property among the participating entities. Simulation scenarios are performed using NS 2.35 to validate the results for storage, communication, latency, energy, and polynomial calculation costs during authentication, session key generation, node migration, secure joining, and leaving phases. EKM is efficient regarding storage, computation, and communication overhead and can protect WSN-based IoT infrastructure.
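
    The symmetric-polynomial idea underlying such session keys can be pictured with a textbook Blundo-style sketch; this is not the EKM protocol itself, and the prime, degree and node IDs below are arbitrary examples.

```python
# Textbook Blundo-style symmetric bivariate polynomial sketch, shown only to
# illustrate the symmetry f(x, y) = f(y, x) that pairwise/group keys rely on.
import random

P = 2**31 - 1  # public prime modulus (illustrative)

def gen_symmetric_poly(t):
    """Random symmetric coefficient matrix a[i][j] = a[j][i], degree t."""
    a = [[0] * (t + 1) for _ in range(t + 1)]
    for i in range(t + 1):
        for j in range(i, t + 1):
            a[i][j] = a[j][i] = random.randrange(P)
    return a

def share(a, node_id):
    """Coefficients of g(y) = f(node_id, y), stored by one node."""
    t = len(a) - 1
    return [sum(a[i][j] * pow(node_id, i, P) for i in range(t + 1)) % P
            for j in range(t + 1)]

def pairwise_key(my_share, peer_id):
    return sum(c * pow(peer_id, j, P) for j, c in enumerate(my_share)) % P

a = gen_symmetric_poly(2)
assert pairwise_key(share(a, 5), 9) == pairwise_key(share(a, 9), 5)
```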

  12. A Polynomial Subset-Based Efficient Multi-Party Key Management System for Lightweight Device Networks

    PubMed Central

    Mahmood, Zahid; Ning, Huansheng; Ghafoor, AtaUllah

    2017-01-01

    Wireless Sensor Networks (WSNs) consist of lightweight devices to measure sensitive data that are highly vulnerable to security attacks due to their constrained resources. In a similar manner, the internet-based lightweight devices used in the Internet of Things (IoT) are facing severe security and privacy issues because of the direct accessibility of devices due to their connection to the internet. Complex and resource-intensive security schemes are infeasible and reduce the network lifetime. In this regard, we have explored the polynomial distribution-based key establishment schemes and identified an issue that the resultant polynomial value is either storage intensive or infeasible when large values are multiplied. It becomes more costly when these polynomials are regenerated dynamically after each node join or leave operation and whenever the key is refreshed. To reduce the computation, we have proposed an Efficient Key Management (EKM) scheme for multiparty communication-based scenarios. The proposed session key management protocol is established by applying a symmetric polynomial for group members, and the group head acts as a responsible node. The polynomial generation method uses security credentials and a secure hash function. Symmetric cryptographic parameters are efficient in terms of computation, communication, and storage. The security justification of the proposed scheme has been completed by using Rubin logic, which guarantees that the protocol strongly attains mutual validation and the session key agreement property among the participating entities. Simulation scenarios are performed using NS 2.35 to validate the results for storage, communication, latency, energy, and polynomial calculation costs during authentication, session key generation, node migration, secure joining, and leaving phases. EKM is efficient regarding storage, computation, and communication overhead and can protect WSN-based IoT infrastructure. PMID:28338632

  13. Chebyshev polynomials in the spectral Tau method and applications to Eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Johnson, Duane

    1996-01-01

    Chebyshev spectral methods have received much attention recently as a technique for the rapid solution of ordinary differential equations. This technique also works well for solving linear eigenvalue problems. Specific attention is given to the properties and algebra of Chebyshev polynomials; the use of Chebyshev polynomials in spectral methods; and the recurrence relationships that are developed. These formulas and equations are then applied to several examples which are worked out in detail. The appendix contains an example FORTRAN program used in solving an eigenvalue problem.
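
    A minimal sketch of the central recurrence relationship, in Python rather than the report's FORTRAN:

```python
# The three-term Chebyshev recurrence T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x),
# the workhorse of the spectral Tau method described above.
import numpy as np

def chebyshev_T(n, x):
    """Evaluate T_0..T_n at the points x via the three-term recurrence."""
    x = np.asarray(x, dtype=float)
    T = [np.ones_like(x), x]
    for _ in range(n - 1):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T[:n + 1])
```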

  14. Periodontitis is related to lung volumes and airflow limitation: a cross-sectional study.

    PubMed

    Holtfreter, Birte; Richter, Stefanie; Kocher, Thomas; Dörr, Marcus; Völzke, Henry; Ittermann, Till; Obst, Anne; Schäper, Christoph; John, Ulrich; Meisel, Peter; Grotevendt, Anne; Felix, Stephan B; Ewert, Ralf; Gläser, Sven

    2013-12-01

    This study aimed to assess the potential association of periodontal diseases with lung volumes and airflow limitation in a general adult population. Based on a representative population sample of the Study of Health in Pomerania (SHIP), 1463 subjects aged 25-86 years were included. Periodontal status was assessed by clinical attachment loss (CAL), probing depth and number of missing teeth. Lung function was measured using spirometry, body plethysmography and diffusing capacity of the lung for carbon monoxide. Linear regression models using fractional polynomials were used to assess associations between periodontal disease and lung function. Fibrinogen and high-sensitivity C-reactive protein (hs-CRP) were evaluated as potential intermediate factors. After full adjustment for potential confounders mean CAL was significantly associated with variables of mobile dynamic and static lung volumes, airflow limitation and hyperinflation (p<0.05). Including fibrinogen and hs-CRP did not change coefficients of mean CAL; associations remained statistically significant. Mean CAL was not associated with total lung capacity and diffusing capacity of the lung for carbon monoxide. Associations were confirmed for mean probing depth, extent measures of CAL/probing depth and number of missing teeth. Periodontal disease was significantly associated with reduced lung volumes and airflow limitation in this general adult population sample. Systemic inflammation did not provide a mechanism linking both diseases.

  15. Single field double inflation and primordial black holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kannike, K.; Marzola, L.; Raidal, M.

    Within the framework of scalar-tensor theories, we study the conditions that allow single field inflation dynamics on small cosmological scales to significantly differ from that of the large scales probed by the observations of cosmic microwave background. The resulting single field double inflation scenario is characterised by two consequent inflation eras, usually separated by a period where the slow-roll approximation fails. At large field values the dynamics of the inflaton is dominated by the interplay between its non-minimal coupling to gravity and the radiative corrections to the inflaton self-coupling. For small field values the potential is, instead, dominated by a polynomial that results in a hilltop inflation. Without relying on the slow-roll approximation, which is invalidated by the appearance of the intermediate stage, we propose a concrete model that matches the current measurements of inflationary observables and employs the freedom granted by the framework on small cosmological scales to give rise to a sizeable population of primordial black holes generated by large curvature fluctuations. We find that these features generally require a potential with a local minimum. We show that the associated primordial black hole mass function is only approximately lognormal.

  16. Simultaneous Inference For The Mean Function Based on Dense Functional Data

    PubMed Central

    Cao, Guanqun; Yang, Lijian; Todem, David

    2012-01-01

    A polynomial spline estimator is proposed for the mean function of dense functional data together with a simultaneous confidence band which is asymptotically correct. In addition, the spline estimator and its accompanying confidence band enjoy oracle efficiency in the sense that they are asymptotically the same as if all random trajectories are observed entirely and without errors. The confidence band is also extended to the difference of mean functions of two populations of functional data. Simulation experiments provide strong evidence that corroborates the asymptotic theory while computing is efficient. The confidence band procedure is illustrated by analyzing the near infrared spectroscopy data. PMID:22665964

  17. GUIDELINES FOR INSTALLATION AND SAMPLING OF SUB-SLAB VAPOR PROBES TO SUPPORT ASSESSMENT OF VAPOR INTRUSION

    EPA Science Inventory

    The purpose of this paper is to provide guidelines for sub-slab sampling using dedicated vapor probes. Use of dedicated vapor probes allows for multiple sample events before and after corrective action and for vacuum testing to enhance the design and monitoring of a corrective m...

  18. Solving the interval type-2 fuzzy polynomial equation using the ranking method

    NASA Astrophysics Data System (ADS)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim

    2014-07-01

    Polynomial equations with trapezoidal and triangular fuzzy numbers have attracted some interest among researchers in mathematics, engineering and social sciences. Several methods have been developed to solve these equations. In this study we introduce the interval type-2 fuzzy polynomial equation and solve it using the ranking method of fuzzy numbers. The ranking method concept was first proposed to find real roots of fuzzy polynomial equations. Therefore, the ranking method is applied to find real roots of the interval type-2 fuzzy polynomial equation. We transform the interval type-2 fuzzy polynomial equation into a system of crisp interval type-2 fuzzy polynomial equations. This transformation is performed using the ranking method of fuzzy numbers based on three parameters, namely value, ambiguity and fuzziness. Finally, we illustrate our approach with a numerical example.

  19. LMI-based stability analysis of fuzzy-model-based control systems using approximated polynomial membership functions.

    PubMed

    Narimani, Mohammand; Lam, H K; Dilmaghani, R; Wolfe, Charles

    2011-06-01

    Relaxed linear-matrix-inequality-based stability conditions for fuzzy-model-based control systems with imperfect premise matching are proposed. First, the derivative of the Lyapunov function, containing the product terms of the fuzzy model and fuzzy controller membership functions, is derived. Then, in the partitioned operating domain of the membership functions, the relations between the state variables and the mentioned product terms are represented by approximated polynomials in each subregion. Next, the stability conditions containing the information of all subsystems and the approximated polynomials are derived. In addition, the concept of the S-procedure is utilized to release the conservativeness caused by considering the whole operating region for approximated polynomials. It is shown that the well-known stability conditions can be special cases of the proposed stability conditions. Simulation examples are given to illustrate the validity of the proposed approach.

  20. Von Bertalanffy's dynamics under a polynomial correction: Allee effect and big bang bifurcation

    NASA Astrophysics Data System (ADS)

    Leonel Rocha, J.; Taha, A. K.; Fournier-Prunaret, D.

    2016-02-01

    In this work we consider new one-dimensional populational discrete dynamical systems in which the growth of the population is described by a family of von Bertalanffy's functions, as a dynamical approach to von Bertalanffy's growth equation. The Allee effect is introduced into those models through a correction factor of polynomial type. We study classes of von Bertalanffy's functions with different types of Allee effect: strong and weak Allee's functions. Depending on the variation of four parameters, von Bertalanffy's functions also include another important class: functions with no Allee effect. The complex bifurcation structures of these von Bertalanffy's functions are investigated in detail. We verified that this family of functions has particular bifurcation structures: the big bang bifurcation of the so-called “box-within-a-box” type. The big bang bifurcation is associated with the asymptotic weight or carrying capacity. This work is a contribution to the study of big bang bifurcation analysis for continuous maps and their relationship with explosion birth and extinction phenomena.

  1. Creation and validation of the Singapore birth nomograms for birth weight, length and head circumference based on a 12-year birth cohort.

    PubMed

    Poon, Woei Bing; Fook-Chong, Stephanie M C; Ler, Grace Y L; Loh, Zhi Wen; Yeo, Cheo Lian

    2014-06-01

    Both gestation and birth weight have a significant impact on mortality and morbidity in newborn infants. Nomograms at birth allow classification of infants into small for gestational age (SGA) and large for gestational age (LGA) categories, for risk stratification and more intensive monitoring. To date, the growth charts for preterm newborn infants in Singapore are based on the Fenton growth charts, which are constructed by combining data from various Western growth cohorts. Hence, we aim to create Singapore nomograms for birth weight, length and head circumference at birth, which would reflect the norms and challenges faced by local infants. Growth parameters of all babies born or admitted to our unit from 2001 to 2012 were retrieved. Following exclusion of outliers, nomograms for the 10th, 50th and 90th percentiles were generated for the gestational age (GA) range of 25 to 42 weeks using quantile regression (QR) combined with restricted cubic splines. Various polynomial models (second to third degree) were investigated for suitability of fit. The optimum QR model was found to be a third-degree polynomial with a single cubic-spline knot at the mid-point of the GA range, 33.5 weeks. Goodness of fit was checked first by visual inspection, and then by verifying the correct proportions: 10% of all cases falling above the 90th percentile and 10% falling below the 10th percentile. Furthermore, an alternative formula-based method of nomogram construction, using the mean, standard deviation (SD) and an assumption of normality at each gestational age, was used for counterchecking. A total of 13,403 newborns were included in the analysis. The new infant-foetal growth charts for birth weight, heel-crown length and occipitofrontal circumference from 25 to 42 weeks' gestation with the 10th, 50th and 90th percentiles are presented. Nomograms for birth weight, length and head circumference at birth had a significant impact on neonatal practice, and validation of the Singapore birth nomograms against the Fenton growth charts showed better sensitivity and comparable specificity, positive and negative predictive values.
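
    The reported model class (third-degree polynomial quantile regression with one truncated-cubic spline knot at 33.5 weeks) can be sketched as follows; data and variable names are placeholders, not the study's cohort.

```python
# Hedged sketch of percentile-curve fitting by quantile regression with a
# single spline knot; the design matrix mirrors the model described above.
import numpy as np
import statsmodels.api as sm

def design(ga, knot=33.5):
    ga = np.asarray(ga, dtype=float)
    return np.column_stack([np.ones_like(ga), ga, ga**2, ga**3,
                            np.maximum(ga - knot, 0.0)**3])

def fit_percentiles(ga, weight, quantiles=(0.1, 0.5, 0.9)):
    X = design(ga)
    return {q: sm.QuantReg(weight, X).fit(q=q).params for q in quantiles}
```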

  2. Improved volumetric measurement of brain structure with a distortion correction procedure using an ADNI phantom.

    PubMed

    Maikusa, Norihide; Yamashita, Fumio; Tanaka, Kenichiro; Abe, Osamu; Kawaguchi, Atsushi; Kabasawa, Hiroyuki; Chiba, Shoma; Kasahara, Akihiro; Kobayashi, Nobuhisa; Yuasa, Tetsuya; Sato, Noriko; Matsuda, Hiroshi; Iwatsubo, Takeshi

    2013-06-01

    Serial magnetic resonance imaging (MRI) images acquired from multisite and multivendor MRI scanners are widely used in measuring longitudinal structural changes in the brain. Precise and accurate measurements are important in understanding the natural progression of neurodegenerative disorders such as Alzheimer's disease. However, geometric distortions in MRI images decrease the accuracy and precision of volumetric or morphometric measurements. To solve this problem, the authors suggest a commercially available phantom-based distortion correction method that accommodates the variation in geometric distortion within MRI images obtained with multivendor MRI scanners. The authors' method is based on image warping using a polynomial function. The method detects fiducial points within a phantom image using phantom analysis software developed by the Mayo Clinic and calculates warping functions for distortion correction. To quantify the effectiveness of the authors' method, the authors corrected phantom images obtained from multivendor MRI scanners and calculated the root-mean-square (RMS) of fiducial errors and the circularity ratio as evaluation values. The authors also compared the performance of the authors' method with that of a distortion correction method based on a spherical harmonics description of the generic gradient design parameters. Moreover, the authors evaluated whether this correction improves the test-retest reproducibility of voxel-based morphometry in human studies. A Wilcoxon signed-rank test with uncorrected and corrected images was performed. The root-mean-square errors and circularity ratios for all slices significantly improved (p < 0.0001) after the authors' distortion correction. Additionally, the authors' method was significantly better than a distortion correction method based on a description of spherical harmonics in reducing root-mean-square errors (p < 0.001 and p = 0.0337, respectively). Moreover, the authors' method reduced the RMS error arising from gradient nonlinearity more than gradwarp methods. In human studies, the coefficient of variation of voxel-based morphometry analysis of the whole brain improved significantly from 3.46% to 2.70% after distortion correction of the whole gray matter using the authors' method (Wilcoxon signed-rank test, p < 0.05). The authors proposed a phantom-based distortion correction method to improve reproducibility in longitudinal structural brain analysis using multivendor MRI. The authors evaluated the authors' method for phantom images in terms of two geometrical values and for human images in terms of test-retest reproducibility. The results showed that distortion was corrected significantly using the authors' method. In human studies, the reproducibility of voxel-based morphometry analysis for the whole gray matter significantly improved after distortion correction using the authors' method.

  3. Polynomial Conjoint Analysis of Similarities: A Model for Constructing Polynomial Conjoint Measurement Algorithms.

    ERIC Educational Resources Information Center

    Young, Forrest W.

    A model permitting construction of algorithms for the polynomial conjoint analysis of similarities is presented. This model, which is based on concepts used in nonmetric scaling, permits one to obtain the best approximate solution. The concepts used to construct nonmetric scaling algorithms are reviewed. Finally, examples of algorithmic models for…

  4. Photometric correction for an optical CCD-based system based on the sparsity of an eight-neighborhood gray gradient.

    PubMed

    Zhang, Yuzhong; Zhang, Yan

    2016-07-01

    In an optical measurement and analysis system based on a CCD, due to the existence of optical vignetting and natural vignetting, photometric distortion, in which the intensity falls off away from the image center, severely affects the subsequent processing and measuring precision. To deal with this problem, an easy and straightforward method for photometric distortion correction is presented in this paper. This method introduces a simple polynomial fitting model of the photometric distortion function and employs a particle swarm optimization algorithm to obtain the model parameters by minimizing the eight-neighborhood gray gradient. Compared with conventional calibration methods, this method can obtain the profile of the photometric distortion from only a single ordinary image captured by the optical CCD-based system, with no need for a uniform-luminance area source as a standard reference or for the relevant optical and geometric parameters in advance. To illustrate the applicability of this method, numerical simulations and photometric distortions with different lens parameters are evaluated using this method in this paper. Moreover, the application example of temperature field correction for casting billets also demonstrates the effectiveness of this method. The experimental results show that the proposed method achieves a maximum absolute error for vignetting estimation of 0.0765 and a relative error for vignetting estimation from different background images of 3.86%.
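
    A rough sketch of the idea follows, with two stated assumptions: an even radial polynomial stands in for the vignetting model, and a generic global optimizer stands in for the paper's particle swarm.

```python
# Model vignetting as an even radial polynomial and pick coefficients that
# minimize the summed eight-neighborhood gray gradient of the corrected
# image (the criterion described above).
import numpy as np
from scipy.optimize import differential_evolution

def corrected(img, c):
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r2 = ((xx - nx / 2)**2 + (yy - ny / 2)**2) / (nx**2 + ny**2)
    falloff = 1 + c[0] * r2 + c[1] * r2**2 + c[2] * r2**3
    return img / np.clip(falloff, 1e-6, None)

def gradient_energy(img):
    """Sum of absolute differences to the eight neighbors."""
    return sum(np.abs(img - np.roll(img, (dy, dx), axis=(0, 1))).sum()
               for dy in (-1, 0, 1) for dx in (-1, 0, 1) if dy or dx)

def devignette(img):
    res = differential_evolution(lambda c: gradient_energy(corrected(img, c)),
                                 bounds=[(-2, 2)] * 3, seed=0, maxiter=20)
    return corrected(img, res.x)
```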

  5. Correction of erroneously packed protein's side chains in the NMR structure based on ab initio chemical shift calculations.

    PubMed

    Zhu, Tong; Zhang, John Z H; He, Xiao

    2014-09-14

    In this work, protein side chain ¹H chemical shifts are used as probes to detect and correct side-chain packing errors in protein NMR structures through structural refinement. By applying the automated fragmentation quantum mechanics/molecular mechanics (AF-QM/MM) method for ab initio calculation of chemical shifts, incorrect side chain packing was detected in the NMR structures of the Pin1 WW domain. The NMR structure is then refined by using molecular dynamics simulation and the polarized protein-specific charge (PPC) model. The computationally refined structure of the Pin1 WW domain is in excellent agreement with the corresponding X-ray structure. In particular, the use of the PPC model yields a more accurate structure than that using the standard (nonpolarizable) force field. For comparison, some of the widely used empirical models for chemical shift calculations are unable to correctly describe the relationship between the particular proton chemical shift and protein structures. The AF-QM/MM method can be used as a powerful tool for protein NMR structure validation and structural flaw detection.

  6. Learning polynomial feedforward neural networks by genetic programming and backpropagation.

    PubMed

    Nikolaev, N Y; Iba, H

    2003-01-01

    This paper presents an approach to learning polynomial feedforward neural networks (PFNNs). The approach suggests, first, finding the polynomial network structure by means of a population-based search technique relying on the genetic programming paradigm, and second, further adjustment of the best discovered network weights by an especially derived backpropagation algorithm for higher order networks with polynomial activation functions. These two stages of the PFNN learning process enable us to identify networks with good training as well as generalization performance. Empirical results show that this approach finds PFNNs that considerably outperform some previous constructive polynomial network algorithms on benchmark time series.

  7. Algorithms for computing solvents of unilateral second-order matrix polynomials over prime finite fields using lambda-matrices

    NASA Astrophysics Data System (ADS)

    Burtyka, Filipp

    2018-01-01

    The paper considers algorithms for finding diagonalizable and non-diagonalizable roots (so-called solvents) of an arbitrary monic unilateral second-order matrix polynomial over a prime finite field. These algorithms are based on polynomial matrices (lambda-matrices). This is an extension of existing general methods for computing solvents of matrix polynomials over the field of complex numbers. We analyze how techniques for complex numbers can be adapted to a finite field and estimate the asymptotic complexity of the obtained algorithms.

  8. Design and development by direct polishing of the WFXT thin polynomial mirror shells

    NASA Astrophysics Data System (ADS)

    Proserpio, L.; Campana, S.; Citterio, O.; Civitani, M.; Combrinck, H.; Conconi, P.; Cotroneo, V.; Freeman, R.; Mattini, E.; Langstrof, P.; Morton, R.; Motta, G.; Oberle, O.; Pareschi, G.; Parodi, G.; Pels, C.; Schenk, C.; Stock, R.; Tagliaferri, G.

    2017-11-01

    The Wide Field X-ray Telescope (WFXT) is a medium-class mission proposed to address key questions about cosmic origins and the physics of the cosmos through an unprecedented survey of the sky in the soft X-ray band (0.2-6 keV) [1], [2]. In order to reach the desired angular resolution of 10 arcsec (5 arcsec goal) over the entire 1 degree Field Of View (FOV), the design of the optical system is based on nested grazing-incidence polynomial-profile mirrors, and assumes focal plane curvature and plate scale corrections among the shells. This design guarantees increased angular resolution also at large off-axis positions with respect to the usually adopted Wolter I configuration. In order to meet the requirements in terms of mass and effective area (less than 1200 kg, 6000 cm2 @ 1 keV), the nested shells are thin and made of quartz glass. The telescope assembly is composed of three identical modules of 78 nested shells each, with diameters up to 1.1 m, lengths in the range of 200-440 mm and thicknesses of less than 2.2 mm. In this regard, a deterministic direct polishing method is under investigation to manufacture the WFXT thin grazing-incidence mirrors made of quartz. The direct polishing method has already been used for past missions (such as Einstein, ROSAT, Chandra) but on much thicker shells (10 mm or more). The technological challenge for WFXT is to apply the same approach to shells roughly 5-10 times thinner. The proposed approach is based on two main steps: first, quartz glass tubes available on the market are ground to conical profiles; second, the pre-shaped shells are polished to the required polynomial profiles using a CNC polishing machine. In this paper, preliminary results on the direct grinding and polishing of thin prototype quartz-glass shells, representative of the WFXT optical design, are presented.

  9. Priming effects under correct change detection and change blindness.

    PubMed

    Caudek, Corrado; Domini, Fulvio

    2013-03-01

    In three experiments, we investigated the priming effects induced by an image change on a successive animate/inanimate decision task. We studied both perceptual (Experiments 1 and 2) and conceptual (Experiment 3) priming effects, under correct change detection and change blindness (CB). Under correct change detection, we found larger positive priming effects on congruent trials for probes representing animate entities than for probes representing artifactual objects. Under CB, we found performance impairment relative to a "no-change" baseline condition. This inhibition effect induced by CB was modulated by the semantic congruency between the changed item and the probe in the case of probe images, but not for probe words. We discuss our results in the context of the literature on the negative priming effect. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  11. Wavefront correction and high-resolution in vivo OCT imaging with an objective integrated multi-actuator adaptive lens

    PubMed Central

    Bonora, Stefano; Jian, Yifan; Zhang, Pengfei; Zam, Azhar; Pugh, Edward N.; Zawadzki, Robert J.; Sarunic, Marinko V.

    2015-01-01

    Adaptive optics is rapidly transforming microscopy and high-resolution ophthalmic imaging. The adaptive elements commonly used to control optical wavefronts are liquid crystal spatial light modulators and deformable mirrors. We introduce a novel Multi-actuator Adaptive Lens that can correct aberrations to high order, and which has the potential to increase the spread of adaptive optics to many new applications by simplifying its integration with existing systems. Our method combines an adaptive lens with an image-based optimization control that allows the correction of images to the diffraction limit, and provides a reduction of hardware complexity with respect to existing state-of-the-art adaptive optics systems. The Multi-actuator Adaptive Lens design that we present can correct wavefront aberrations up to the 4th order of the Zernike polynomial characterization. The performance of the Multi-actuator Adaptive Lens is demonstrated in a wide field microscope, using a Shack-Hartmann wavefront sensor for closed loop control. The Multi-actuator Adaptive Lens and image-based wavefront-sensorless control were also integrated into the objective of a Fourier Domain Optical Coherence Tomography system for in vivo imaging of mouse retinal structures. The experimental results demonstrate that the insertion of the Multi-actuator Objective Lens can generate arbitrary wavefronts to correct aberrations down to the diffraction limit, and can be easily integrated into optical systems to improve the quality of aberrated images. PMID:26368169

  12. Cylinder surface test with Chebyshev polynomial fitting method

    NASA Astrophysics Data System (ADS)

    Yu, Kui-bang; Guo, Pei-ji; Chen, Xi

    2017-10-01

    The Zernike polynomial fitting method is often applied in the testing of optical components and systems, where it is used to represent wavefront and surface error over a circular domain. Zernike polynomials are not orthogonal over a rectangular region, which makes them unsuitable for testing optical elements with rectangular apertures such as cylinder surfaces. Applying Chebyshev polynomials, which are orthogonal over a rectangular area, as a substitute basis in the fitting method solves the problem. For a cylinder surface with a diameter of 50 mm and an F number of 1/7, a measuring system has been designed in Zemax based on Fizeau interferometry. The expressions of the two-dimensional Chebyshev polynomials are given and their relationship with the aberrations is presented. Furthermore, Chebyshev polynomials are used as basis terms to analyze the rectangular-aperture test data. The coefficients of the different terms are obtained from the test data by the method of least squares. Comparison of the Chebyshev spectra under different misalignments shows that each misalignment is independent and corresponds to particular Chebyshev terms. The simulation results show that the Chebyshev polynomial fitting method yields a great improvement in the efficiency of the detection and adjustment in the cylinder surface test.
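
    The least-squares fit of two-dimensional Chebyshev coefficients over a rectangular aperture can be sketched as follows; the degrees are example choices, not the authors' values.

```python
# Illustrative 2D Chebyshev least-squares fit to surface-error samples
# over a rectangular aperture, the fitting step described above.
import numpy as np
from numpy.polynomial import chebyshev as C

def fit_cheb2d(x, y, z, deg=(4, 4)):
    """x, y normalized to [-1, 1]; z is the sampled surface error at (x, y)."""
    V = C.chebvander2d(x, y, deg)                 # orthogonal design matrix
    coef, *_ = np.linalg.lstsq(V, z, rcond=None)  # least-squares coefficients
    return coef.reshape(deg[0] + 1, deg[1] + 1)
```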

  13. Pulse transmission transmitter including a higher order time derivate filter

    DOEpatents

    Dress, Jr., William B.; Smith, Stephen F.

    2003-09-23

    Systems and methods for pulse-transmission low-power communication modes are disclosed. A pulse transmission transmitter includes: a clock; a pseudorandom polynomial generator coupled to the clock, the pseudorandom polynomial generator having a polynomial load input; an exclusive-OR gate coupled to the pseudorandom polynomial generator, the exclusive-OR gate having a serial data input; a programmable delay circuit coupled to both the clock and the exclusive-OR gate; a pulse generator coupled to the programmable delay circuit; and a higher order time derivative filter coupled to the pulse generator. The systems and methods significantly reduce lower-frequency emissions from pulse transmission spread-spectrum communication modes, which reduces potentially harmful interference to existing radio frequency services and users and also simultaneously permit transmission of multiple data bits by utilizing specific pulse shapes.
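
    A minimal sketch of the pseudorandom-polynomial-plus-XOR stage of such a transmitter; the LFSR taps below encode an example maximal-length 16-bit polynomial, not values taken from the patent.

```python
# Illustrative LFSR ("pseudorandom polynomial generator") whose output is
# XORed with serial data, echoing the architecture described above.
def lfsr_stream(taps=(16, 14, 13, 11), seed=0xACE1):
    state = seed
    while True:
        bit = 0
        for t in taps:                      # XOR of the tapped bits
            bit ^= (state >> (t - 1)) & 1
        state = ((state << 1) | bit) & 0xFFFF
        yield bit

def spread(data_bits):
    prng = lfsr_stream()
    return [b ^ next(prng) for b in data_bits]  # exclusive-OR with data
```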

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zlotnikov, Michael

    We develop a polynomial reduction procedure that transforms any gauge fixed CHY amplitude integrand for n scattering particles into a σ-moduli multivariate polynomial of what we call the standard form. We show that a standard form polynomial must have a specific ladder type monomial structure, which has finite size at any n, with highest multivariate degree given by (n – 3)(n – 4)/2. This set of monomials spans a complete basis for polynomials with rational coefficients in kinematic data on the support of scattering equations. Subsequently, at tree and one-loop level, we employ the global residue theorem to derive a prescription that evaluates any CHY amplitude by means of collecting simple residues at infinity only. Furthermore, the prescription is then applied explicitly to some tree and one-loop amplitude examples.

  15. Update on laser vision correction using wavefront analysis with the CustomCornea system and LADARVision 193-nm excimer laser

    NASA Astrophysics Data System (ADS)

    Maguen, Ezra I.; Salz, James J.; McDonald, Marguerite B.; Pettit, George H.; Papaioannou, Thanassis; Grundfest, Warren S.

    2002-06-01

    A study was undertaken to assess whether results of laser vision correction with the LADARVision 193-nm excimer laser (Alcon-Autonomous Technologies) can be improved with the use of wavefront analysis generated by a proprietary system including a Hartmann-Shack sensor and expressed using Zernike polynomials. A total of 82 eyes underwent LASIK in several centers with an improved algorithm, using the CustomCornea system. A subgroup of 48 eyes of 24 patients was randomized so that one eye underwent conventional treatment and the other underwent treatment based on wavefront analysis. Treatment parameters were equal for each type of refractive error. Overall, 83% of all eyes had uncorrected vision of 20/20 or better and 95% were 20/25 or better. In all groups, uncorrected visual acuities did not improve significantly in eyes treated with wavefront analysis compared to conventional treatments. Higher order aberrations were consistently better corrected in eyes undergoing treatment based on wavefront analysis for LASIK at 6 months postop. In addition, the number of eyes with reduced RMS was significantly higher in the subset of eyes treated with a wavefront algorithm (38% vs. 5%). Wavefront technology may improve the outcomes of laser vision correction with the LADARVision excimer laser. Further refinements of the technology and clinical trials will contribute to this goal.

  16. Stable Numerical Approach for Fractional Delay Differential Equations

    NASA Astrophysics Data System (ADS)

    Singh, Harendra; Pandey, Rajesh K.; Baleanu, D.

    2017-12-01

    In this paper, we present a new stable numerical approach based on the operational matrix of integration of Jacobi polynomials for solving fractional delay differential equations (FDDEs). The operational matrix approach converts the FDDE into a system of linear equations, and hence the numerical solution is obtained by solving the linear system. The error analysis of the proposed method is also established. Further, a comparative study of the approximate solutions is provided for the test examples of the FDDE by varying the values of the parameters in the Jacobi polynomials. As special cases, the Jacobi polynomials reduce to well-known polynomials: (1) the Legendre polynomial, (2) the Chebyshev polynomial of the second kind, (3) the Chebyshev polynomial of the third kind and (4) the Chebyshev polynomial of the fourth kind, respectively. Maximum absolute error and root mean square error are calculated for the illustrated examples and presented in tables for comparison. The numerical stability of the presented method with respect to all four kinds of polynomials is discussed. Further, the obtained numerical results are compared with some known methods from the literature, and the results obtained from the proposed method are observed to be better.
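
    One of the special-case reductions mentioned above can be checked directly; a quick SciPy sketch (not the paper's operational-matrix code) verifying that Jacobi parameters (0, 0) give the Legendre polynomials:

```python
# P_n^(0,0)(x) equals the Legendre polynomial P_n(x) exactly.
import numpy as np
from scipy.special import eval_jacobi, eval_legendre

x = np.linspace(-1, 1, 7)
assert np.allclose(eval_jacobi(4, 0, 0, x), eval_legendre(4, x))
```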

  17. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solves a preconditioned ℓ1-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. In conclusion, numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  18. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE PAGES

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    2017-06-22

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solves a preconditioned ℓ1-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. In conclusion, numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  19. Numerical algebraic geometry for model selection and its application to the life sciences

    PubMed Central

    Gross, Elizabeth; Davis, Brent; Ho, Kenneth L.; Bates, Daniel J.

    2016-01-01

    Researchers working with mathematical models are often confronted by the related problems of parameter estimation, model validation and model selection. These are all optimization problems, well known to be challenging due to nonlinearity, non-convexity and multiple local optima. Furthermore, the challenges are compounded when only partial data are available. Here, we consider polynomial models (e.g. mass-action chemical reaction networks at steady state) and describe a framework for their analysis based on optimization using numerical algebraic geometry. Specifically, we use probability-one polynomial homotopy continuation methods to compute all critical points of the objective function, then filter to recover the global optima. Our approach exploits the geometrical structures relating models and data, and we demonstrate its utility on examples from cell signalling, synthetic biology and epidemiology. PMID:27733697

  20. Comparative Analysis of Various Single-tone Frequency Estimation Techniques in High-order Instantaneous Moments Based Phase Estimation Method

    NASA Astrophysics Data System (ADS)

    Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod

    2010-04-01

    For phase estimation in digital holographic interferometry, a high-order instantaneous moments (HIM) based method was recently developed which relies on piecewise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients using the HIM operator. A crucial step in the method is mapping the polynomial coefficient estimation to single-tone frequency determination for which various techniques exist. The paper presents a comparative analysis of the performance of the HIM operator based method in using different single-tone frequency estimation techniques for phase estimation. The analysis is supplemented by simulation results.

  1. Spectral colors capture and reproduction based on digital camera

    NASA Astrophysics Data System (ADS)

    Chen, Defen; Huang, Qingmei; Li, Wei; Lu, Yang

    2018-01-01

    The purpose of this work is to develop a method for the accurate reproduction of the spectral colors captured by a digital camera. The spectral colors, being the purest colors of any hue, are difficult to reproduce without distortion on digital devices. In this paper, we attempt to achieve accurate hue reproduction of the spectral colors by focusing on two steps of color correction: the capture of the spectral colors and the color characterization of the digital camera. This establishes the relationship among the spectral color wavelength, the RGB color space of the digital camera and the CIEXYZ color space. The study also provides a basis for further work on spectral color reproduction on digital devices. Methods such as wavelength calibration of the spectral colors and digital camera characterization were utilized. The spectrum was obtained through a grating spectroscopy system. A photo of a clear and reliable primary spectrum was taken by adjusting the relevant parameters of the digital camera, from which the RGB values of the color spectrum were extracted at 1040 equally divided locations. Two wavelength values were obtained at each location, one calculated from the grating equation and one measured with a spectrophotometer. The polynomial fitting method for camera characterization was used to achieve color correction. After wavelength calibration, the maximum error between the two sets of wavelengths is 4.38 nm. With the polynomial fitting method, the average color difference of the test samples is 3.76. This satisfies the application needs of the spectral colors in digital devices for display and transmission.
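
    The polynomial characterization step admits a compact sketch: a least-squares map from expanded RGB terms to measured CIEXYZ values of training patches. The second-order term set below is an assumed example, not the paper's exact model.

```python
# Hedged sketch of polynomial camera characterization (RGB -> CIEXYZ).
import numpy as np

def expand(rgb):
    r, g, b = np.asarray(rgb, dtype=float).T
    return np.column_stack([np.ones_like(r), r, g, b,
                            r * g, r * b, g * b, r**2, g**2, b**2])

def fit_characterization(rgb_train, xyz_train):
    """Least-squares mapping from expanded RGB terms to measured XYZ."""
    M, *_ = np.linalg.lstsq(expand(rgb_train), xyz_train, rcond=None)
    return M                                   # (10, 3) mapping matrix

def rgb_to_xyz(rgb, M):
    return expand(np.atleast_2d(rgb)) @ M
```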

  2. Synthesis and characterization of time-resolved fluorescence probes for evaluation of competitive binding to melanocortin receptors.

    PubMed

    Alleti, Ramesh; Vagner, Josef; Dehigaspitiya, Dilani Chathurika; Moberg, Valerie E; Elshan, N G R D; Tafreshi, Narges K; Brabez, Nabila; Weber, Craig S; Lynch, Ronald M; Hruby, Victor J; Gillies, Robert J; Morse, David L; Mash, Eugene A

    2013-09-01

    Probes for use in time-resolved fluorescence competitive binding assays at melanocortin receptors based on the parental ligands MSH(4), MSH(7), and NDP-α-MSH were prepared by solid phase synthesis methods, purified, and characterized. The saturation binding of these probes was studied using HEK-293 cells engineered to overexpress the human melanocortin 4 receptor (hMC4R) as well as the human cholecystokinin 2 receptor (hCCK2R). The ratios of non-specific binding to total binding approached unity at high concentrations for each probe. At low probe concentrations, receptor-mediated binding and uptake was discernable, and so probe concentrations were kept as low as possible in determining Kd values. The Eu-DTPA-PEGO-MSH(4) probe exhibited low specific binding relative to non-specific binding, even at low nanomolar concentrations, and was deemed unsuitable for use in competition binding assays. The Eu-DTPA-PEGO probes based on MSH(7) and NDP-α-MSH exhibited Kd values of 27±3.9nM and 4.2±0.48nM, respectively, for binding with hMC4R. These probes were employed in competitive binding assays to characterize the interactions of hMC4R with monovalent and divalent MSH(4), MSH(7), and NDP-α-MSH constructs derived from squalene. Results from assays with both probes reflected only statistical enhancements, suggesting improper ligand spacing on the squalene scaffold for the divalent constructs. The Ki values from competitive binding assays that employed the MSH(7)-based probe were generally lower than the Ki values obtained when the probe based on NDP-α-MSH was employed, which is consistent with the greater potency of the latter probe. The probe based on MSH(7) was also competed with monovalent, divalent, and trivalent MSH(4) constructs that previously demonstrated multivalent binding in competitive binding assays against a variant of the probe based on NDP-α-MSH. Results from these assays confirm multivalent binding, but suggest a more modest increase in avidity for these MSH(4) constructs than was previously reported. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. HybProbes-based real-time PCR assay for specific identification of Streptomyces scabies and Streptomyces europaeiscabiei, the potato common scab pathogens.

    PubMed

    Xu, R; Falardeau, J; Avis, T J; Tambong, J T

    2016-02-01

    The aim of this study was to develop and validate a HybProbes-based real-time PCR assay targeting the trpB gene for specific identification of Streptomyces scabies and Streptomyces europaeiscabiei. Four primer pairs and a fluorescent probe were designed and evaluated for specificity in identifying S. scabies and Streptomyces europaeiscabiei, the potato common scab pathogens. The specificity of the HybProbes-based real-time PCR assay was evaluated using 46 bacterial strains, 23 Streptomyces strains and 23 non-Streptomyces bacterial species. Specific and strong fluorescence signals were detected from all nine strains of S. scabies and Streptomyces europaeiscabiei. No fluorescence signal was detected from 14 strains of other Streptomyces species and all non-Streptomyces strains. The identification was corroborated by the melting curve analysis that was performed immediately after the amplification step. Eight of the nine S. scabies and S. europaeiscabiei strains exhibited a unique melting peak, at a Tm of 69.1°C, while one strain, Warba-6, had a melt peak at a Tm of 65.4°C. This difference in Tm peaks could be attributed to a guanine to cytosine mutation in strain Warba-6 at the region spanning the donor HybProbe. The reported HybProbes assay provides a more specific tool for accurate identification of S. scabies and S. europaeiscabiei strains. This study reports a novel assay based on HybProbes chemistry for rapid and accurate identification of the potato common scab pathogens. Since the HybProbes chemistry requires two probes for positive identification, the assay is considered to be more specific than conventional PCR or TaqMan real-time PCR. The developed assay would be a useful tool with great potential in early diagnosis and detection of common scab pathogens of potatoes in infected plants or for surveillance of potatoes grown in soil environment. © 2015 Her Majesty the Queen in Right of Canada © 2015 The Society for Applied Microbiology.

  4. By any other name: when will preschoolers produce several labels for a referent?

    PubMed

    Deák, G O; Yen, L; Pettit, J

    2001-10-01

    Two experiments investigated why preschool children sometimes produce multiple words for a referent (i.e. polynomy), but other times seem to allow only one word. In Experiment 1, 40 three- and four-year-olds completed a modification of Deák & Maratsos' (1998) naming task. Although social demands to produce multiple words were reduced, children produced, on average, more than two words per object. Number of words produced was predicted by receptive vocabulary. Lexical insight (i.e. knowing that a word refers to function or appearance) and metalexical beliefs (i.e. that a hypothetical referent has one label, or more than one) were not preconditions of polynomy. Polynomy was independent of bias to map novel words to unfamiliar referents. In Experiment 2, 40 three- and four-year-olds learned new words for nameable objects. Children showed a correction effect, yet produced more than two words per object. Children do not have a generalized one-word-per-object bias, even during word learning. Other explanations (e.g. contextual restriction of lexical access) are discussed.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lue Xing; Sun Kun; Wang Pan

    In the framework of Bell-polynomial manipulations, under investigation hereby are three single-field bilinearizable equations: the (1+1)-dimensional shallow water wave model, Boiti-Leon-Manna-Pempinelli model, and (2+1)-dimensional Sawada-Kotera model. Based on the concept of scale invariance, a direct and unifying Bell-polynomial scheme is employed to achieve the Baecklund transformations and Lax pairs associated with those three soliton equations. Note that the Bell-polynomial expressions and Bell-polynomial-typed Baecklund transformations for those three soliton equations can be, respectively, cast into the bilinear equations and bilinear Baecklund transformations with symbolic computation. Consequently, it is also shown that the Bell-polynomial-typed Baecklund transformations can be linearized into the corresponding Lax pairs.

  6. An O(log² N) parallel algorithm for computing the eigenvalues of a symmetric tridiagonal matrix

    NASA Technical Reports Server (NTRS)

    Swarztrauber, Paul N.

    1989-01-01

    An O(log² N) parallel algorithm is presented for computing the eigenvalues of a symmetric tridiagonal matrix using a parallel algorithm for computing the zeros of the characteristic polynomial. The method is based on a quadratic recurrence in which the characteristic polynomial is constructed on a binary tree from polynomials whose degree doubles at each level. Intervals that contain exactly one zero are determined by the zeros of polynomials at the previous level which ensures that different processors compute different zeros. The exact behavior of the polynomials at the interval endpoints is used to eliminate the usual problems induced by finite precision arithmetic.
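
    The ingredient the parallel tree construction builds on is the standard serial characteristic-polynomial recurrence for a symmetric tridiagonal matrix, sketched below; the binary-tree parallelization itself is not shown.

```python
# For a symmetric tridiagonal matrix with diagonal d and off-diagonal e:
#   p_k(x) = (d_k - x) p_{k-1}(x) - e_{k-1}^2 p_{k-2}(x),
# with p_0 = 1. Returns det(T - x I).
def char_poly(d, e, x):
    p_prev, p = 1.0, d[0] - x
    for k in range(1, len(d)):
        p_prev, p = p, (d[k] - x) * p - e[k - 1]**2 * p_prev
    return p
```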

  7. Resistivity Correction Factor for the Four-Probe Method: Experiment I

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Enjoji, Hideo

    1988-05-01

    Experimental verification of the theoretically derived resistivity correction factor (RCF) is presented. Resistivity and sheet resistance measurements by the four-probe method are made on three samples: isotropic graphite, ITO film and Au film. It is indicated that the RCF can correct the apparent variations of experimental data to yield reasonable resistivities and sheet resistances.
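
    For orientation, the textbook in-line four-probe relation that such a correction factor multiplies is R_s = (π/ln 2)·(V/I); a minimal sketch, where F = 1 recovers the infinite-sheet case:

```python
# Textbook four-probe sheet resistance and resistivity with a geometric
# correction factor F (the RCF discussed above).
import math

def sheet_resistance(voltage, current, F=1.0):
    return F * (math.pi / math.log(2)) * voltage / current

def resistivity(voltage, current, thickness_m, F=1.0):
    return sheet_resistance(voltage, current, F) * thickness_m
```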

  8. Development of Prevotella intermedia-specific PCR primers based on the nucleotide sequences of a DNA probe Pig27.

    PubMed

    Kim, Min Jung; Hwang, Kyung Hwan; Lee, Young-Seok; Park, Jae-Yoon; Kook, Joong-Ki

    2011-03-01

    The aim of this study was to develop Prevotella intermedia-specific PCR primers based on the P. intermedia-specific DNA probe. The P. intermedia-specific DNA probe was screened by inverted dot blot hybridization and confirmed by Southern blot hybridization. The nucleotide sequences of the species-specific DNA probes were determined using a chain termination method. Southern blot analysis showed that the DNA probe, Pig27, detected only the genomic DNA of P. intermedia strains. PCR showed that the PCR primers, Pin-F1/Pin-R1, had species-specificity for P. intermedia. The detection limit of the PCR primers was 0.4 pg of the purified genomic DNA of P. intermedia ATCC 49046. These results suggest that the PCR primers, Pin-F1/Pin-R1, could be useful in the detection of P. intermedia as well as in the development of a PCR kit in epidemiological studies related to periodontal diseases. Crown Copyright © 2010. Published by Elsevier B.V. All rights reserved.

  9. Pre-correction of distorted Bessel-Gauss beams without wavefront detection

    NASA Astrophysics Data System (ADS)

    Fu, Shiyao; Wang, Tonglu; Zhang, Zheyuan; Zhai, Yanwang; Gao, Chunqing

    2017-12-01

    By utilizing the rapid phase solution of the Gerchberg-Saxton algorithm, we experimentally demonstrate a scheme that corrects, with good performance, Bessel-Gauss beams distorted by inhomogeneous media such as a weakly turbulent atmosphere. A probe Gaussian beam is employed and propagates coaxially with the Bessel-Gauss modes through the turbulence. No wavefront sensor is required: a matrix detector captures the probe Gaussian beam, and the correction phase mask is then computed by feeding this probe beam into the Gerchberg-Saxton algorithm. The experimental results indicate that both single and multiplexed BG beams can be corrected well, in terms of the improvement in mode purity and the mitigation of interchannel cross talk.
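
    A minimal Gerchberg-Saxton sketch of the phase retrieval exploited here; the amplitudes are assumed to be numpy arrays and the iteration count is an arbitrary choice, not the authors' parameters.

```python
# Iterate between two planes with known amplitudes to recover a phase.
import numpy as np

def gerchberg_saxton(src_amp, tgt_amp, iters=50):
    rng = np.random.default_rng(0)
    field = src_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, src_amp.shape))
    for _ in range(iters):
        far = np.fft.fft2(field)
        far = tgt_amp * np.exp(1j * np.angle(far))      # impose target amplitude
        field = np.fft.ifft2(far)
        field = src_amp * np.exp(1j * np.angle(field))  # impose source amplitude
    return np.angle(field)   # estimated correction phase
```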

  10. Utilizing Intrinsic Properties of Polyaniline to Detect Nucleic Acid Hybridization through UV-Enhanced Electrostatic Interaction.

    PubMed

    Sengupta, Partha Pratim; Gloria, Jared N; Amato, Dahlia N; Amato, Douglas V; Patton, Derek L; Murali, Beddhu; Flynt, Alex S

    2015-10-12

    Detection of specific RNA or DNA molecules by hybridization to "probe" nucleic acids via complementary base-pairing is a powerful method for analysis of biological systems. Here we describe a strategy for transducing hybridization events through modulating intrinsic properties of the electroconductive polymer polyaniline (PANI). When DNA-based probes electrostatically interact with PANI, its fluorescence increases, a phenomenon that can be enhanced by UV irradiation. Hybridization of target nucleic acids results in dissociation of probes, causing PANI fluorescence to return to basal levels. By monitoring restoration of basal PANI fluorescence, as little as 10⁻¹¹ M (10 pM) of target oligonucleotides could be detected within 15 min of hybridization. Detection of complementary oligos was specific, with introduction of a single mismatch failing to form a target-probe duplex that would dissociate from PANI. Furthermore, this approach is robust and is capable of detecting specific RNAs in extracts from animals. This sensor system improves on previously reported strategies by transducing highly specific probe dissociation events through intrinsic properties of a conducting polymer without the need for additional labels.

  11. Reply to ‘No correction for the light propagation within the cube: Comment on Relativistic theory of the falling cube gravimeter’

    NASA Astrophysics Data System (ADS)

    Ashby, Neil

    2018-06-01

    The comment (Nagornyi 2018 Metrologia) claims that, notwithstanding the conclusions stated in the paper Relativistic theory of the falling cube gravimeter (Ashby 2018 Metrologia 55 1–10), there is no need to consider the dimensions or refractive index of the cube in fitting data from falling cube absolute gravimeters; additional questions are raised about matching quartic polynomials while determining only three quantities. The comment also suggests errors were made in Ashby (2018 Metrologia 55 1–10) while implementing the fitting routines on which the conclusions were based. The main contention of the comment is shown to be invalid because retarded time was not properly used in constructing a fictitious cube position. Such a fictitious position, fixed relative to the falling cube, is derived and shown to be dependent on cube dimensions and refractive index. An example is given showing how in the present context, polynomials of fourth order can be effectively matched by determining only three quantities, and a new compact characterization of the interference signal arriving at the detector is given. Work of the U.S. government, not subject to copyright.

  12. Resistivity Correction Factor for the Four-Probe Method: Experiment II

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo

    1989-05-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F can be applied to a system consisting of a disk sample and a four-probe array. Measurements are made on isotropic graphite disks and crystalline ITO films. Factor F can correct the apparent variations of the data and lead to reasonable resistivities and sheet resistances. Here factor F is compared to other correction factors, i.e., F_ASTM and F_JIS.

  13. Computing Galois Groups of Eisenstein Polynomials Over P-adic Fields

    NASA Astrophysics Data System (ADS)

    Milstead, Jonathan

    The most efficient algorithms for computing Galois groups of polynomials over global fields are based on Stauduhar's relative resolvent method. These methods are not directly generalizable to the local field case, since they require a field that contains the global field in which all roots of the polynomial can be approximated. We present splitting-field-independent methods for computing the Galois group of an Eisenstein polynomial over a p-adic field. Our approach is to combine information from different disciplines. First, we make use of the ramification polygon of the polynomial, which is the Newton polygon of a related polynomial. This allows us to quickly calculate several invariants that serve to reduce the number of possible Galois groups. Algorithms by Greve and Pauli very efficiently return the Galois group of polynomials whose ramification polygon consists of one segment, as well as information about the subfields of the stem field. Second, we look at the factorization of linear absolute resolvents to further narrow the pool of possible groups.
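
    As a small illustration of the central object, the sketch below computes a Newton polygon as the lower convex hull of the points (i, v_p(a_i)); the ramification polygon itself is the Newton polygon of a related polynomial, which is not constructed here. Function names are ours:

        def val_p(n, p):
            """p-adic valuation of a nonzero integer."""
            v = 0
            while n % p == 0:
                n //= p
                v += 1
            return v

        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        def newton_polygon(coeffs, p):
            """Lower convex hull of the points (i, v_p(a_i)) for
            f(x) = sum a_i x^i; returns the hull vertices left to right."""
            pts = [(i, val_p(a, p)) for i, a in enumerate(coeffs) if a != 0]
            hull = []
            for pt in pts:
                while len(hull) >= 2 and cross(hull[-2], hull[-1], pt) <= 0:
                    hull.pop()
                hull.append(pt)
            return hull

        # Eisenstein example over Q_3: x^4 + 6x^2 + 3 has a single slope -1/4.
        print(newton_polygon([3, 0, 6, 0, 1], 3))    # [(0, 1), (4, 0)]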

  14. On polynomial preconditioning for indefinite Hermitian matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1989-01-01

    The minimal residual method is studied combined with polynomial preconditioning for solving large linear systems (Ax = b) with indefinite Hermitian coefficient matrices (A). The standard approach for choosing the polynomial preconditioners leads to preconditioned systems which are positive definite. Here, a different strategy is studied which leaves the preconditioned coefficient matrix indefinite. More precisely, the polynomial preconditioner is designed to cluster the positive eigenvalues of A around 1 and the negative eigenvalues around some negative constant. In particular, it is shown that such indefinite polynomial preconditioners can be obtained as the optimal solutions of a certain two-parameter family of Chebyshev approximation problems. Some basic results are established for these approximation problems and a Remez-type algorithm is sketched for their numerical solution. The problem of selecting the parameters such that the resulting indefinite polynomial preconditioner speeds up the convergence of the minimal residual method optimally is also addressed. An approach is proposed based on the concept of asymptotic convergence factors. Finally, some numerical examples of indefinite polynomial preconditioners are given.
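
    The Chebyshev-optimal construction of the paper is not reproduced here, but the mechanics of indefinite polynomial preconditioning can be sketched: fit a low-degree p so that p(λ)λ clusters near +1 on the positive part of the spectrum and near -1 on the negative part, apply p(A) via Horner's rule using only matrix-vector products, and hand the preconditioned operator to a Krylov solver (GMRES below, for simplicity; the least-squares fit is our stand-in, not the paper's method):

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        rng = np.random.default_rng(1)
        # Indefinite Hermitian test matrix with spectrum in [-2,-0.5] U [0.5,2].
        n = 400
        eigs = np.concatenate([rng.uniform(-2, -0.5, n // 2), rng.uniform(0.5, 2, n // 2)])
        Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
        A = (Q * eigs) @ Q.T
        b = rng.standard_normal(n)

        # Fit p so that p(t)*t ~ +1 on the positive interval and ~ -1 on the
        # negative one; the preconditioned matrix p(A)A then stays indefinite.
        deg = 6
        t = np.concatenate([np.linspace(-2, -0.5, 200), np.linspace(0.5, 2, 200)])
        target = np.where(t > 0, 1.0, -1.0) / t
        coeffs = np.polyfit(t, target, deg)          # highest power first

        def apply_p(v):
            """Horner evaluation of p(A) @ v using matrix-vector products only."""
            w = coeffs[0] * v
            for ck in coeffs[1:]:
                w = A @ w + ck * v
            return w

        B = LinearOperator((n, n), matvec=lambda v: apply_p(A @ v))
        x, info = gmres(B, apply_p(b))               # solve p(A)A x = p(A) b
        print(info, np.linalg.norm(A @ x - b))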

  15. Gradient nonlinearity calibration and correction for a compact, asymmetric magnetic resonance imaging gradient system

    PubMed Central

    Tao, S; Trzasko, J D; Gunter, J L; Weavers, P T; Shu, Y; Huston, J; Lee, S K; Tan, E T; Bernstein, M A

    2017-01-01

    Due to engineering limitations, the spatial encoding gradient fields in conventional magnetic resonance imaging cannot be perfectly linear and always contain higher-order, nonlinear components. If ignored during image reconstruction, gradient nonlinearity (GNL) manifests as image geometric distortion. Given an estimate of the GNL field, this distortion can be corrected to a degree proportional to the accuracy of the field estimate. The GNL of a gradient system is typically characterized using a spherical harmonic polynomial model with model coefficients obtained from electromagnetic simulation. Conventional whole-body gradient systems are symmetric in design; typically, only odd-order terms up to the 5th-order are required for GNL modeling. Recently, a high-performance, asymmetric gradient system was developed, which exhibits more complex GNL that requires higher-order terms including both odd- and even-orders for accurate modeling. This work characterizes the GNL of this system using an iterative calibration method and a fiducial phantom used in ADNI (Alzheimer’s Disease Neuroimaging Initiative). The phantom was scanned at different locations inside the 26-cm diameter spherical volume of this gradient, and the positions of fiducials in the phantom were estimated. An iterative calibration procedure was utilized to identify the model coefficients that minimize the mean-squared-error between the true fiducial positions and the positions estimated from images corrected using these coefficients. To examine the effect of higher-order and even-order terms, this calibration was performed using spherical harmonic polynomials of different orders up to the 10th order, including even- and odd-order terms, or odd-order terms only. The results showed that the model coefficients of this gradient can be successfully estimated. The residual root-mean-squared-error after correction using up to the 10th-order coefficients was reduced to 0.36 mm, yielding spatial accuracy comparable to conventional whole-body gradients. The even-order terms were necessary for accurate GNL modeling. In addition, the calibrated coefficients improved image geometric accuracy compared with the simulation-based coefficients. PMID:28033119
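
    A toy version of the calibration step, with a small 3D polynomial basis standing in for the spherical-harmonic expansion and a single linear least-squares solve standing in for the published iterative procedure (all names and numbers are illustrative):

        import numpy as np

        rng = np.random.default_rng(0)

        def basis(pos):
            """Toy distortion basis evaluated at positions (N, 3); a real
            gradient calibration would use solid/spherical-harmonic terms."""
            x, y, z = pos.T
            return np.stack([x**3, x * (y**2 + z**2), y**3, z**3, x * y * z], axis=1)

        # Synthetic ground truth: fiducial positions and a known distortion field.
        true_pos = rng.uniform(-0.13, 0.13, (200, 3))        # 26-cm DSV, metres
        true_coeffs = np.array([0.8, -0.5, 0.6, 0.4, 0.3])
        measured = true_pos.copy()
        measured[:, 0] += basis(true_pos) @ true_coeffs      # distort x only (toy)

        # Coefficients minimizing || measured - (true + B(true) c) ||^2. This is
        # linear in c, so one solve suffices; the published method iterates
        # because the basis must be evaluated at estimated positions.
        c_hat, *_ = np.linalg.lstsq(basis(true_pos),
                                    measured[:, 0] - true_pos[:, 0], rcond=None)

        corrected_x = measured[:, 0] - basis(true_pos) @ c_hat
        rmse = np.sqrt(np.mean((corrected_x - true_pos[:, 0]) ** 2))
        print(c_hat, rmse)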

  16. Pseudo spectral collocation with Maxwell polynomials for kinetic equations with energy diffusion

    NASA Astrophysics Data System (ADS)

    Sánchez-Vizuet, Tonatiuh; Cerfon, Antoine J.

    2018-02-01

    We study the approximation and stability properties of a recently popularized discretization strategy for the speed variable in kinetic equations, based on pseudo-spectral collocation on a grid defined by the zeros of a non-standard family of orthogonal polynomials called Maxwell polynomials. Taking a one-dimensional equation describing energy diffusion due to Fokker-Planck collisions with a Maxwell-Boltzmann background distribution as the test bench for the performance of the scheme, we find that Maxwell-based discretizations outperform other commonly used schemes in most situations, often by orders of magnitude. This provides a strong motivation for their use in high-dimensional gyrokinetic simulations. However, we also show that Maxwell-based schemes are subject to a non-modal time stepping instability in their most straightforward implementation, so that special care must be given to the discrete representation of the linear operators in order to benefit from the advantages provided by Maxwell polynomials.
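
    Assuming the Maxwellian weight w(v) = v² exp(-v²) on [0, ∞), the collocation grid can be sketched in two standard steps: a discretized Stieltjes procedure for the three-term recurrence coefficients, then Golub-Welsch (eigenvalues of the Jacobi matrix) for the nodes. A minimal sketch, not the authors' implementation:

        import numpy as np

        # Fine grid approximating integrals against w(v) = v^2 exp(-v^2) on
        # [0, 10]; the weight decays so fast that the truncation is harmless.
        v = np.linspace(0.0, 10.0, 20001)
        dv = v[1] - v[0]
        w = v ** 2 * np.exp(-(v ** 2))

        def inner(f, g):
            return np.sum(f * g * w) * dv

        def maxwell_nodes(n):
            """Discretized Stieltjes procedure for the three-term recurrence,
            then Golub-Welsch: the collocation nodes are the eigenvalues of
            the symmetric Jacobi matrix of the recurrence coefficients."""
            alpha, beta = np.zeros(n), np.zeros(n)
            p_prev, p = np.zeros_like(v), np.ones_like(v)
            norm2_prev = None
            for k in range(n):
                norm2 = inner(p, p)
                alpha[k] = inner(v * p, p) / norm2
                if k > 0:
                    beta[k] = norm2 / norm2_prev
                p_prev, p = p, (v - alpha[k]) * p - beta[k] * p_prev
                norm2_prev = norm2
            J = (np.diag(alpha) + np.diag(np.sqrt(beta[1:]), 1)
                 + np.diag(np.sqrt(beta[1:]), -1))
            return np.linalg.eigvalsh(J)

        print(maxwell_nodes(8))     # an 8-point speed grid on (0, infinity)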

  17. Quantification of different Eubacterium spp. in human fecal samples with species-specific 16S rRNA-targeted oligonucleotide probes.

    PubMed

    Schwiertz, A; Le Blay, G; Blaut, M

    2000-01-01

    Species-specific 16S rRNA-targeted, Cy3 (indocarbocyanine)-labeled oligonucleotide probes were designed and validated to quantify different Eubacterium species in human fecal samples. Probes were directed at Eubacterium barkeri, E. biforme, E. contortum, E. cylindroides (two probes), E. dolichum, E. hadrum, E. lentum, E. limosum, E. moniliforme, and E. ventriosum. The specificity of the probes was tested with the type strains and a range of common intestinal bacteria. With one exception, none of the probes showed cross-hybridization under stringent conditions. The species-specific probes were applied to fecal samples obtained from 12 healthy volunteers. E. biforme, E. cylindroides, E. hadrum, E. lentum, and E. ventriosum could be determined. All other Eubacterium species for which probes had been designed were under the detection limit of 10⁷ cells g (dry weight) of feces⁻¹. The cell counts obtained are essentially in accordance with the literature data, which are based on colony counts. This shows that whole-cell in situ hybridization with species-specific probes is a valuable tool for the enumeration of Eubacterium species in feces.

  18. Comprehensive characterization of evolutionary conserved breakpoints in four New World Monkey karyotypes compared to Chlorocebus aethiops and Homo sapiens.

    PubMed

    Fan, Xiaobo; Supiwong, Weerayuth; Weise, Anja; Mrasek, Kristin; Kosyakova, Nadezda; Tanomtong, Alongkoad; Pinthong, Krit; Trifonov, Vladimir A; Cioffi, Marcelo de Bello; Grothmann, Pierre; Liehr, Thomas; Oliveira, Edivaldo H C de

    2015-11-01

    Comparative cytogenetic analysis in New World Monkeys (NWMs) using human multicolor banding (MCB) probe sets had not previously been performed. Here we report on an MCB-based FISH-banding study complemented with selected locus-specific and heterochromatin-specific probes in four NWM and one Old World Monkey (OWM) species, i.e. in Alouatta caraya (ACA), Callithrix jacchus (CJA), Cebus apella (CAP), Saimiri sciureus (SSC), and Chlorocebus aethiops (CAE), respectively. 107 individual evolutionarily conserved breakpoints (ECBs) among those species were identified and compared with those of other species in previous reports. Especially for chromosomal regions syntenic to human chromosomes 6, 8, 9, 10, 11, 12 and 16, previously cryptic rearrangements could be observed. 50.4% (54/107) of the NWM-ECBs colocalized with those of OWMs, 62.6% (62/99) of the NWM-ECBs were related to those of Hylobates lar (HLA) and 66.3% (71/107) of the NWM-ECBs corresponded to those known from other mammals. Furthermore, human fragile sites (FS) were aligned with the ECBs found in the five studied species and, interestingly, 66.3% of the ECBs colocalized with those fragile sites. Overall, this study presents detailed chromosomal maps of one OWM and four NWM species. These data will be helpful for further investigation of chromosome evolution in NWMs and hominoids in general and are a prerequisite for correct interpretation of future sequencing-based genomic studies in those species.

  19. Multiple-correction hybrid k-exact schemes for high-order compressible RANS-LES simulations on fully unstructured grids

    NASA Astrophysics Data System (ADS)

    Pont, Grégoire; Brenner, Pierre; Cinnella, Paola; Maugars, Bruno; Robinet, Jean-Christophe

    2017-12-01

    A Godunov-type unstructured finite volume method suitable for highly compressible turbulent scale-resolving simulations around complex geometries is constructed by using a successive correction technique. First, a family of k-exact Godunov schemes is developed by recursively correcting the truncation error of the piecewise polynomial representation of the primitive variables. The keystone of the proposed approach is a quasi-Green gradient operator which ensures consistency on general meshes. In addition, a high-order single-point quadrature formula, based on high-order approximations of the successive derivatives of the solution, is developed for flux integration along cell faces. The proposed family of schemes is compact in the algorithmic sense, since it only involves communications between direct neighbors of the mesh cells. The numerical properties of the schemes up to fifth order are investigated, with focus on their resolvability in terms of the number of mesh points required to resolve a given wavelength accurately. Afterwards, with the aim of achieving the best possible trade-off between accuracy, computational cost and robustness in view of industrial flow computations, we focus more specifically on the third-order accurate scheme of the family, and modify its numerical flux locally in order to reduce the amount of numerical dissipation in vortex-dominated regions. This is achieved by switching from the upwind scheme, mostly applied in highly compressible regions, to a fourth-order centered one in vortex-dominated regions. An analytical switch function based on the local grid Reynolds number is adopted in order to guarantee the numerical stability of the recentering process. Numerical applications demonstrate the accuracy and robustness of the proposed methodology for compressible scale-resolving computations. In particular, supersonic RANS/LES computations of the flow over a cavity are presented to show the capability of the scheme to predict flows with shocks, vortical structures and complex geometries.

  1. Correction of geometric distortion in Propeller echo planar imaging using a modified reversed gradient approach

    PubMed Central

    Chang, Hing-Chiu; Chuang, Tzu-Chao; Wang, Fu-Nien; Huang, Teng-Yi; Chung, Hsiao-Wen

    2013-01-01

    Objective This study investigates the application of a modified reversed gradient algorithm to the Propeller-EPI imaging method (periodically rotated overlapping parallel lines with enhanced reconstruction based on echo-planar imaging readout) for corrections of geometric distortions due to the EPI readout. Materials and methods Propeller-EPI acquisition was executed with 360-degree rotational coverage of the k-space, from which the image pairs with opposite phase-encoding gradient polarities were extracted for reversed gradient geometric and intensity corrections. The spatial displacements obtained on a pixel-by-pixel basis were fitted using a two-dimensional polynomial followed by low-pass filtering to assure correction reliability in low-signal regions. Single-shot EPI images were obtained on a phantom, whereas high spatial resolution T2-weighted and diffusion tensor Propeller-EPI data were acquired in vivo from healthy subjects at 3.0 Tesla, to demonstrate the effectiveness of the proposed algorithm. Results Phantom images show success of the smoothed displacement map concept in providing improvements of the geometric corrections at low-signal regions. Human brain images demonstrate prominently superior reconstruction quality of Propeller-EPI images with modified reversed gradient corrections as compared with those obtained without corrections, as evidenced from verification against the distortion-free fast spin-echo images at the same level. Conclusions The modified reversed gradient method is an effective approach to obtain high-resolution Propeller-EPI images with substantially reduced artifacts. PMID:23630654
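
    The smoothing step lends itself to a compact sketch: fit a two-dimensional polynomial surface to the pixel-wise displacement map by least squares and evaluate it everywhere (the paper additionally low-pass filters and handles low-signal regions, which is omitted here; the data below are synthetic):

        import numpy as np

        def poly2d_design(x, y, order=3):
            """Design matrix of all monomials x**i * y**j with i + j <= order."""
            cols = [x ** i * y ** j
                    for i in range(order + 1) for j in range(order + 1 - i)]
            return np.stack(cols, axis=1)

        # Pixel-wise displacement map estimated from the opposite-polarity pair,
        # here synthesized with noise standing in for low-signal regions.
        ny, nx = 64, 64
        yy, xx = np.mgrid[0:ny, 0:nx]
        x, y = xx.ravel() / nx, yy.ravel() / ny
        rng = np.random.default_rng(2)
        d_map = 0.8 * x ** 2 - 0.5 * x * y + 0.1 + 0.05 * rng.standard_normal(x.size)

        A = poly2d_design(x, y, order=3)
        coef, *_ = np.linalg.lstsq(A, d_map, rcond=None)
        d_smooth = (A @ coef).reshape(ny, nx)   # displacement field used for correction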

  2. NADH-fluorescence scattering correction for absolute concentration determination in a liquid tissue phantom using a novel multispectral magnetic-resonance-imaging-compatible needle probe

    NASA Astrophysics Data System (ADS)

    Braun, Frank; Schalk, Robert; Heintz, Annabell; Feike, Patrick; Firmowski, Sebastian; Beuermann, Thomas; Methner, Frank-Jürgen; Kränzlin, Bettina; Gretz, Norbert; Rädle, Matthias

    2017-07-01

    In this report, a quantitative nicotinamide adenine dinucleotide hydrate (NADH) fluorescence measurement algorithm in a liquid tissue phantom using a fiber-optic needle probe is presented. To determine the absolute concentrations of NADH in this phantom, the fluorescence emission spectra at 465 nm were corrected using diffuse reflectance spectroscopy between 600 nm and 940 nm. The patented autoclavable Nitinol needle probe enables the acquisition of multispectral backscattering measurements of ultraviolet, visible, near-infrared and fluorescence spectra. As a phantom, a suspension of calcium carbonate (Calcilit) and water with physiological NADH concentrations between 0 mmol l⁻¹ and 2.0 mmol l⁻¹ was used to mimic human tissue. The light scattering characteristics were adjusted to match the backscattering attributes of human skin by modifying the concentration of Calcilit. To correct the scattering effects caused by the matrices of the samples, an algorithm based on the backscattered remission spectrum was employed to compensate for the influence of multiscattering on the optical pathway through the dispersed phase. The monitored backscattered visible light was used to correct the fluorescence spectra and thereby to determine the true NADH concentrations at unknown Calcilit concentrations. Despite the simplicity of the presented algorithm, the root-mean-square error of prediction (RMSEP) was 0.093 mmol l⁻¹.

  3. Dynamic pressure probe response tests for robust measurements in periodic flows close to probe resonating frequency

    NASA Astrophysics Data System (ADS)

    Ceyhun Şahin, Fatma; Schiffmann, Jürg

    2018-02-01

    A single-hole probe was designed to measure steady and periodic flows with high fluctuation amplitudes and with minimal flow intrusion. Because of its high aspect ratio, estimations showed that the probe resonates at a frequency two orders of magnitude lower than the cut-off frequencies of the fast-response sensors. The high fluctuation amplitudes cause a non-linear behavior of the probe, and available models are adequate neither for a quantitative estimation of the resonating frequencies nor for predicting the system damping. Instead, a non-linear data correction procedure based on individual transfer functions defined for each harmonic contribution is introduced for pneumatic probes, which allows their operating range to be extended beyond the resonating frequencies and linear dynamics. This data correction procedure was assessed on a miniature single-hole probe of 0.35 mm inner diameter designed to measure flow speed and direction. For the reliable use of such a probe in periodic flows, its frequency response was reproduced with a siren disk, which allows exciting the probe up to 10 kHz with peak-to-peak amplitudes ranging between 20%-170% of the absolute mean pressure. The effect of the probe interior design on the phase lag and amplitude distortion in periodic flow measurements was investigated on probes with similar inner diameters and different lengths, or similar aspect ratios (L/D) and different total interior volumes. The results suggest that while the tube length consistently sets the resonance frequency, the internal total volume affects the non-linear dynamic response in terms of varying gain functions. A detailed analysis of the introduced calibration methodology shows that the goodness of the reconstructed data compared to the reference data is above 75% for fundamental frequencies up to twice the probe resonance frequency. The results clearly suggest that the introduced procedure is adequate to capture non-linear pneumatic probe dynamics and to reproduce time-resolved data far above the probe resonance frequency.
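
    A minimal sketch of correction via per-harmonic transfer functions, assuming the complex gain of each harmonic of the fundamental is known from calibration (the values below are invented) and that the harmonics fall on exact FFT bins:

        import numpy as np

        def correct_periodic(signal, fs, f0, gains, phases):
            """Undo a resonant probe's distortion harmonic by harmonic:
            divide the FFT bin of the k-th harmonic of f0 by its calibrated
            complex transfer function H_k = gains[k] * exp(1j * phases[k])."""
            spec = np.fft.rfft(signal)
            freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
            for k, (g, ph) in enumerate(zip(gains, phases), start=1):
                idx = int(np.argmin(np.abs(freqs - k * f0)))
                spec[idx] /= g * np.exp(1j * ph)
            return np.fft.irfft(spec, signal.size)

        fs, f0 = 50_000, 500.0
        t = np.arange(5000) / fs                  # 500 Hz sits on an exact bin
        truth = np.sin(2 * np.pi * f0 * t) + 0.4 * np.sin(4 * np.pi * f0 * t)
        gains, phases = [1.8, 3.0], [0.3, 0.9]    # pretend calibration results
        measured = (1.8 * np.sin(2 * np.pi * f0 * t + 0.3)
                    + 1.2 * np.sin(4 * np.pi * f0 * t + 0.9))
        print(np.max(np.abs(correct_periodic(measured, fs, f0, gains, phases) - truth)))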

  4. A generalized algorithm to design finite field normal basis multipliers

    NASA Technical Reports Server (NTRS)

    Wang, C. C.

    1986-01-01

    Finite field arithmetic logic is central in the implementation of some error-correcting coders and some cryptographic devices. There is a need for good multiplication algorithms which can be easily realized. Massey and Omura recently developed a new multiplication algorithm for finite fields based on a normal basis representation. Using the normal basis representation, the design of the finite field multiplier is simple and regular. The fundamental design of the Massey-Omura multiplier is based on a design of a product function. In this article, a generalized algorithm to locate a normal basis in a field is first presented. Using this normal basis, an algorithm to construct the product function is then developed. This design does not depend on particular characteristics of the generator polynomial of the field.
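
    The product-function construction is not reproduced here, but the property that makes normal bases attractive is easy to demonstrate: in coordinates over the normal basis {β, β², β⁴} of GF(8) = GF(2)[x]/(x³ + x + 1) with β = x + 1, squaring is a cyclic shift. A self-contained check (helper names are ours):

        # GF(8) = GF(2)[x]/(x^3 + x + 1); elements are 3-bit ints, bit i = coeff of x^i.
        MOD = 0b1011  # x^3 + x + 1

        def gf_mul(a, b):
            """Carry-less multiply, reducing modulo x^3 + x + 1 as we go."""
            r = 0
            while b:
                if b & 1:
                    r ^= a
                b >>= 1
                a <<= 1
                if a & 0b1000:
                    a ^= MOD
            return r

        beta = 0b011                               # beta = x + 1 is a normal element
        basis = [beta, gf_mul(beta, beta), 0]
        basis[2] = gf_mul(basis[1], basis[1])      # {beta, beta^2, beta^4}

        def to_normal_coords(u):
            """Coordinates of u in the normal basis, by trying all 8 GF(2) combos."""
            for c in range(8):
                acc = 0
                for i in range(3):
                    if (c >> i) & 1:
                        acc ^= basis[i]
                if acc == u:
                    return [(c >> i) & 1 for i in range(3)]
            raise ValueError("not a basis")

        # Squaring in normal coordinates is a cyclic shift, which is what makes
        # Massey-Omura multipliers so simple and regular in hardware.
        for u in range(1, 8):
            cu = to_normal_coords(u)
            assert to_normal_coords(gf_mul(u, u)) == cu[-1:] + cu[:-1]
        print("squaring = cyclic shift in the normal basis: verified")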

  5. Scanning optical microscope with long working distance objective

    DOEpatents

    Cloutier, Sylvain G.

    2010-10-19

    A scanning optical microscope, including: a light source to generate a beam of probe light; collimation optics to substantially collimate the probe beam; a probe-result beamsplitter; a long working-distance, infinity-corrected objective; scanning means to scan a beam spot of the focused probe beam on or within a sample; relay optics; and a detector. The collimation optics are disposed in the probe beam. The probe-result beamsplitter is arranged in the optical paths of the probe beam and the resultant light from the sample. The beamsplitter reflects the probe beam into the objective and transmits resultant light. The long working-distance, infinity-corrected objective is also arranged in the optical paths of the probe beam and the resultant light. It focuses the reflected probe beam onto the sample, and collects and substantially collimates the resultant light. The relay optics are arranged to relay the transmitted resultant light from the beamsplitter to the detector.

  6. An image registration based ultrasound probe calibration

    NASA Astrophysics Data System (ADS)

    Li, Xin; Kumar, Dinesh; Sarkar, Saradwata; Narayanan, Ram

    2012-02-01

    Reconstructed 3D ultrasound of the prostate gland finds application in several medical areas such as image-guided biopsy, therapy planning and dose delivery. In our application, we use an end-fire probe rotated about its axis to acquire a sequence of rotational slices to reconstruct a 3D TRUS (transrectal ultrasound) image. The image acquisition system consists of an ultrasound transducer situated on a cradle directly attached to a rotational sensor. However, due to system tolerances, the axis of the probe does not align exactly with the designed axis of rotation, resulting in artifacts in the 3D reconstructed ultrasound volume. We present a rigid-registration-based automatic probe calibration approach. The method uses a sequence of phantom images, each pair acquired at an angular separation of 180 degrees, and registers corresponding image pairs to compute the deviation from the designed axis. A modified shadow removal algorithm is applied for preprocessing. An attribute vector is constructed from image intensity and a speckle-insensitive information-theoretic feature. We compare registration between the presented method and expert-corrected images in 16 prostate phantom scans. Images were acquired at multiple resolutions and different misalignment settings from two ultrasound machines. Screenshots from the 3D reconstruction are shown before and after misalignment correction. Registration parameters from automatic and manual correction were found to be in good agreement. Average absolute differences of translation and rotation between the automatic and manual methods were 0.27 mm and 0.65 degree, respectively. The registration parameters also showed lower variability for automatic registration (pooled standard deviation σtranslation = 0.50 mm, σrotation = 0.52 degree) compared to the manual approach (pooled standard deviation σtranslation = 0.62 mm, σrotation = 0.78 degree).

  7. CORFIG- CORRECTOR SURFACE DESIGN SOFTWARE

    NASA Technical Reports Server (NTRS)

    Dantzler, A.

    1994-01-01

    Corrector Surface Design Software, CORFIG, calculates the optimum figure of a corrector surface for an optical system based on real ray traces. CORFIG generates the corrector figure in the form of a spline data point table and/or a list of polynomial coefficients. The number of spline data points as well as the number of coefficients is user specified. First, the optical system's parameters (thickness, radii of curvature, etc.) are entered. CORFIG will trace the outermost axial real ray through the uncorrected system to determine approximate radial limits for all rays. Then, several real rays are traced backwards through the system from the image to the surface that originally followed the object, within these radial limits. At this first surface, the local curvature is adjusted on a small scale to direct the rays toward the object, thus removing any accumulated aberrations. For each ray traced, this adjustment will be different, so that at the end of this process the resultant surface is made up of many local curvatures. The equations that describe these local surfaces, expressed as high order polynomials, are then solved simultaneously to yield the final surface figure, from which data points are extracted. Finally, a spline table or list of polynomial coefficients is extracted from these data points. CORFIG is intended to be used in the late stages of optical design. The system's design must have at least a good paraxial foundation. Preferably, the design should be at a stage where traditional methods of Seidel aberration correction will not bring about the required image spot size specification. CORFIG will read the system parameters of such a design and calculate the optimum figure for the first surface such that all of the original parameters remain unchanged. Depending upon the system, CORFIG can reduce the RMS image spot radius by a factor of 5 to 25. The original parameters (magnification, back focal length, etc.) are maintained because all rays upon which the corrector figure is based are traced within the bounds of the original system's outermost ray. For this reason the original system must have a certain degree of integrity. CORFIG optimizes the corrector surface figure for on-axis images at a single wavelength only. However, it has been demonstrated many times that CORFIG's method also significantly improves the quality of field images and images formed from wavelengths other than the center wavelength. CORFIG is written completely in VAX FORTRAN. It has been implemented on a DEC VAX series computer under VMS with a central memory requirement of 55 K bytes. This program was developed in 1986.

  8. Resistivity Correction Factor for the Four-Probe Method: Experiment III

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo; Iwata, Atsushi

    1990-04-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F is applied to a system consisting of a rectangular parallelepiped sample and a square four-probe array. Resistivity and sheet resistance measurements are made on isotropic graphites and crystalline ITO films. Factor F corrects experimental data and leads to reasonable resistivity and sheet resistance.

  9. Framework for reanalysis of publicly available Affymetrix® GeneChip® data sets based on functional regions of interest.

    PubMed

    Saka, Ernur; Harrison, Benjamin J; West, Kirk; Petruska, Jeffrey C; Rouchka, Eric C

    2017-12-06

    Since the introduction of microarrays in 1995, researchers worldwide have used both commercial and custom-designed microarrays for understanding differential expression of transcribed genes. Public databases such as ArrayExpress and the Gene Expression Omnibus (GEO) have made millions of samples readily available. One main drawback to microarray data analysis involves the selection of probes to represent a specific transcript of interest, particularly in light of the fact that transcript-specific knowledge (notably alternative splicing) is dynamic in nature. We therefore developed a framework for reannotating and reassigning probe groups for Affymetrix® GeneChip® technology based on functional regions of interest. This framework addresses three issues of Affymetrix® GeneChip® data analyses: removing nonspecific probes, updating probe target mapping based on the latest genome knowledge and grouping probes into gene, transcript and region-based (UTR, individual exon, CDS) probe sets. Updated gene and transcript probe sets provide more specific analysis results based on current genomic and transcriptomic knowledge. The framework selects unique probes, aligns them to gene annotations and generates a custom Chip Description File (CDF). The analysis reveals that only 87% of the Affymetrix® GeneChip® HG-U133 Plus 2 probes align uniquely to the current hg38 human assembly without mismatches. We also tested the new mappings on publicly available data series using rat and human data from GSE48611 and GSE72551 obtained from GEO, and illustrate that functional grouping allows for the subtle detection of regions of interest likely to have phenotypical consequences. Through reanalysis of the publicly available data series GSE48611 and GSE72551, we profiled the contribution of UTR and CDS regions to the gene expression levels globally. The comparison between region- and gene-based results indicated that the expressed genes detected by gene-based and region-based CDFs show high consistency, and region-based results allow the detection of changes in transcript formation.

  10. DIFFERENTIAL CROSS SECTION ANALYSIS IN KAON PHOTOPRODUCTION USING ASSOCIATED LEGENDRE POLYNOMIALS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    P. T. P. HUTAURUK, D. G. IRELAND, G. ROSNER

    2009-04-01

    Angular distributions of differential cross sections from the latest CLAS data sets for the reaction γ + p → K⁺ + Λ have been analyzed using associated Legendre polynomials. This analysis is based upon theoretical calculations in Ref. 1, where all sixteen observables in kaon photoproduction can be classified into four Legendre classes. Each observable can be described by an expansion of associated Legendre polynomial functions. One of the questions to be addressed is how many associated Legendre polynomials are required to describe the data. In this preliminary analysis, we used data models with different numbers of associated Legendre polynomials. We then compared these models by calculating posterior probabilities of the models. We found that the CLAS data set needs no more than four associated Legendre polynomials to describe the differential cross section data. In addition, we also show the extracted coefficients of the best model.
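
    The model-selection question can be sketched on synthetic data: fit Legendre expansions of increasing maximum order by weighted least squares and compare them with an information criterion (the BIC below is a cheap stand-in for the posterior probabilities computed in the paper; all numbers are invented):

        import numpy as np
        from numpy.polynomial import legendre

        rng = np.random.default_rng(3)
        # Synthetic angular distribution: a low-order Legendre series plus noise.
        cos_th = np.linspace(-0.95, 0.95, 40)
        true = legendre.legval(cos_th, [1.0, 0.35, -0.22, 0.10])   # orders 0..3
        sigma = 0.03
        data = true + sigma * rng.standard_normal(cos_th.size)

        for L in range(1, 7):
            V = legendre.legvander(cos_th, L)
            coef, *_ = np.linalg.lstsq(V / sigma, data / sigma, rcond=None)
            chi2 = float(np.sum(((V @ coef - data) / sigma) ** 2))
            bic = chi2 + (L + 1) * np.log(cos_th.size)
            print(L, round(chi2, 1), round(bic, 1))   # BIC should bottom out at L = 3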

  11. Highly specific detection of genetic modification events using an enzyme-linked probe hybridization chip.

    PubMed

    Zhang, M Z; Zhang, X F; Chen, X M; Chen, X; Wu, S; Xu, L L

    2015-08-10

    The enzyme-linked probe hybridization chip utilizes a method based on ligase-hybridizing probe chip technology, with the principle of using thio-primers for protection against enzyme digestion, and using lambda DNA exonuclease to cut multiple PCR products obtained from the sample being tested into single-strand chains for hybridization. The 5'-end amino-labeled probe was fixed onto the aldehyde chip, and hybridized with the single-stranded PCR product, followed by addition of a fluorescent-modified probe that was then enzymatically linked with the adjacent, substrate-bound probe in order to achieve highly specific, parallel, and high-throughput detection. Specificity and sensitivity testing demonstrated that enzyme-linked probe hybridization technology could be applied to the specific detection of eight genetic modification events at the same time, with a sensitivity reaching 0.1% and the achievement of accurate, efficient, and stable results.

  12. Peptide Probe for Crystalline Hydroxyapatite: In Situ Detection of Biomineralization

    NASA Astrophysics Data System (ADS)

    Cicerone, Marcus; Becker, Matthew; Simon, Carl; Chatterjee, Kaushik

    2009-03-01

    While cells template mineralization in vitro and in vivo, specific detection strategies that impart chemical and structural information on this process have proven elusive. Recently we have developed an in situ based peptide probe via phage display methods that is specific to crystalline hydroxyapatite (HA). We are using this in fluorescence based assays to characterize mineralization. One application being explored is the screening of tissue engineering scaffolds for their ability to support osteogenesis. Specifically, osteoblasts are being cultured in hydrogel scaffolds possessing property gradients to provide a test bed for the HA peptide probe. Hydrogel properties that support osteogenesis and HA deposition will be identified using the probe to demonstrate its utility in optimizing design of tissue scaffolds.

  13. Comparison between splines and fractional polynomials for multivariable model building with continuous covariates: a simulation study with continuous response.

    PubMed

    Binder, Harald; Sauerbrei, Willi; Royston, Patrick

    2013-06-15

    In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R² = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.

  14. Quantum and electromagnetic propagation with the conjugate symmetric Lanczos method.

    PubMed

    Acevedo, Ramiro; Lombardini, Richard; Turner, Matthew A; Kinsey, James L; Johnson, Bruce R

    2008-02-14

    The conjugate symmetric Lanczos (CSL) method is introduced for the solution of the time-dependent Schrödinger equation. This remarkably simple and efficient time-domain algorithm is a low-order polynomial expansion of the quantum propagator for time-independent Hamiltonians and derives from the time-reversal symmetry of the Schrödinger equation. The CSL algorithm gives forward solutions by simply complex conjugating backward polynomial expansion coefficients. Interestingly, the expansion coefficients are the same for each uniform time step, a fact that is only spoiled by basis incompleteness and finite precision. This is true for the Krylov basis and, with further investigation, is also found to be true for the Lanczos basis, important for efficient orthogonal projection-based algorithms. The CSL method errors roughly track those of the short iterative Lanczos method while requiring fewer matrix-vector products than the Chebyshev method. With the CSL method, only a few vectors need to be stored at a time, there is no need to estimate the Hamiltonian spectral range, and only matrix-vector and vector-vector products are required. Applications using localized wavelet bases are made to harmonic oscillator and anharmonic Morse oscillator systems as well as electrodynamic pulse propagation using the Hamiltonian form of Maxwell's equations. For gold with a Drude dielectric function, the latter is non-Hermitian, requiring consideration of corrections to the CSL algorithm.

  15. Flux-corrected transport algorithms for continuous Galerkin methods based on high order Bernstein finite elements

    NASA Astrophysics Data System (ADS)

    Lohmann, Christoph; Kuzmin, Dmitri; Shadid, John N.; Mabuza, Sibusiso

    2017-09-01

    This work extends the flux-corrected transport (FCT) methodology to arbitrary order continuous finite element discretizations of scalar conservation laws on simplex meshes. Using Bernstein polynomials as local basis functions, we constrain the total variation of the numerical solution by imposing local discrete maximum principles on the Bézier net. The design of accuracy-preserving FCT schemes for high order Bernstein-Bézier finite elements requires the development of new algorithms and/or generalization of limiting techniques tailored for linear and multilinear Lagrange elements. In this paper, we propose (i) a new discrete upwinding strategy leading to local extremum bounded low order approximations with compact stencils, (ii) high order variational stabilization based on the difference between two gradient approximations, and (iii) new localized limiting techniques for antidiffusive element contributions. The optional use of a smoothness indicator, based on a second derivative test, makes it possible to potentially avoid unnecessary limiting at smooth extrema and achieve optimal convergence rates for problems with smooth solutions. The accuracy of the proposed schemes is assessed in numerical studies for the linear transport equation in 1D and 2D.

  16. Orthogonal basis with a conicoid first mode for shape specification of optical surfaces.

    PubMed

    Ferreira, Chelo; López, José L; Navarro, Rafael; Sinusía, Ester Pérez

    2016-03-07

    A rigorous and powerful theoretical framework is proposed to obtain systems of orthogonal functions (or shape modes) to represent optical surfaces. The method is general so it can be applied to different initial shapes and different polynomials. Here we present results for surfaces with circular apertures when the first basis function (mode) is a conicoid. The system for aspheres with rotational symmetry is obtained applying an appropriate change of variables to Legendre polynomials, whereas the system for general freeform case is obtained applying a similar procedure to spherical harmonics. Numerical comparisons with standard systems, such as Forbes and Zernike polynomials, are performed and discussed.

  17. Orbital component extraction by time-variant sinusoidal modeling.

    NASA Astrophysics Data System (ADS)

    Sinnesael, Matthias; Zivanovic, Miroslav; De Vleeschouwer, David; Claeys, Philippe; Schoukens, Johan

    2016-04-01

    Accurately deciphering periodic variations in paleoclimate proxy signals is essential for cyclostratigraphy. Classical spectral analysis often relies on methods based on the (Fast) Fourier Transformation. This technique has no unique solution separating variations in amplitude and frequency. This characteristic makes it difficult to correctly interpret a proxy's power spectrum or to accurately evaluate simultaneous changes in amplitude and frequency in evolutionary analyses. Here, we circumvent this drawback by using a polynomial approach to estimate instantaneous amplitude and frequency in orbital components. This approach has been proven useful to characterize audio signals (music and speech), which are non-stationary in nature (Zivanovic and Schoukens, 2010, 2012). Paleoclimate proxy signals and audio signals have similar dynamics; the only difference is the frequency relationship between the different components. A harmonic frequency relationship exists in audio signals, whereas this relation is non-harmonic in paleoclimate signals. However, the latter difference is irrelevant for the problem at hand. Using a sliding window approach, the model captures time variations of an orbital component by modulating a stationary sinusoid centered at its mean frequency, with a single polynomial. Hence, the parameters that determine the model are the mean frequency of the orbital component and the polynomial coefficients. The first parameter depends on geologic interpretation, whereas the latter are estimated by means of linear least-squares. As an output, the model provides the orbital component waveform, either in the depth or time domain. Furthermore, it allows for a unique decomposition of the signal into its instantaneous amplitude and frequency. Frequency modulation patterns can be used to reconstruct changes in accumulation rate, whereas amplitude modulation can be used to reconstruct e.g. eccentricity-modulated precession. The time-variant sinusoidal model is applied to well-established Pleistocene benthic isotope records to evaluate its performance. Zivanovic M. and Schoukens J. (2010) On The Polynomial Approximation for Time-Variant Harmonic Signal Modeling. IEEE Transactions On Audio, Speech, and Language Processing vol. 19, no. 3, pp. 458-467. Doi: 10.1109/TASL.2010.2049673. Zivanovic M. and Schoukens J. (2012) Single and Piecewise Polynomials for Modeling of Pitched Sounds. IEEE Transactions On Audio, Speech, and Language Processing vol. 20, no. 4, pp. 1270-1281. Doi: 10.1109/TASL.2011.2174228.
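
    A minimal sketch of the modeling step, assuming a single component with known mean frequency: a complex polynomial modulates a stationary carrier, its coefficients follow from linear least squares, and instantaneous amplitude and frequency are read off the fitted polynomial (function names and the test signal are ours):

        import numpy as np

        def tvs_component(t, s, f_mean, deg=5):
            """Fit s(t) ~ Re{ P(t) exp(2j*pi*f_mean*t) } with a complex
            polynomial P of degree deg by linear least squares; return the
            instantaneous amplitude and frequency of the component."""
            carrier = np.exp(2j * np.pi * f_mean * t)
            cols = ([t ** j * carrier.real for j in range(deg + 1)]
                    + [-(t ** j) * carrier.imag for j in range(deg + 1)])
            c, *_ = np.linalg.lstsq(np.stack(cols, axis=1), s, rcond=None)
            P = (np.polyval(c[:deg + 1][::-1], t)
                 + 1j * np.polyval(c[deg + 1:][::-1], t))
            amp = np.abs(P)
            freq = f_mean + np.gradient(np.unwrap(np.angle(P)), t) / (2 * np.pi)
            return amp, freq

        # Gentle chirp with a drifting envelope, analysed around 1.1 cycles/unit.
        t = np.linspace(0.0, 10.0, 2000)
        s = (1.0 + 0.03 * t) * np.sin(2 * np.pi * (t + 0.01 * t ** 2))
        amp, freq = tvs_component(t, s, f_mean=1.1)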

  18. Explaining variation in tropical plant community composition: influence of environmental and spatial data quality.

    PubMed

    Jones, Mirkka M; Tuomisto, Hanna; Borcard, Daniel; Legendre, Pierre; Clark, David B; Olivas, Paulo C

    2008-03-01

    The degree to which variation in plant community composition (beta-diversity) is predictable from environmental variation, relative to other spatial processes, is of considerable current interest. We addressed this question in Costa Rican rain forest pteridophytes (1,045 plots, 127 species). We also tested the effect of data quality on the results, which has largely been overlooked in earlier studies. To do so, we compared two alternative spatial models [polynomial vs. principal coordinates of neighbour matrices (PCNM)] and ten alternative environmental models (all available environmental variables vs. four subsets, and including their polynomials vs. not). Of the environmental data types, soil chemistry contributed most to explaining pteridophyte community variation, followed in decreasing order of contribution by topography, soil type and forest structure. Environmentally explained variation increased moderately when polynomials of the environmental variables were included. Spatially explained variation increased substantially when the multi-scale PCNM spatial model was used instead of the traditional, broad-scale polynomial spatial model. The best model combination (PCNM spatial model and full environmental model including polynomials) explained 32% of pteridophyte community variation, after correcting for the number of sampling sites and explanatory variables. Overall evidence for environmental control of beta-diversity was strong, and the main floristic gradients detected were correlated with environmental variation at all scales encompassed by the study (c. 100-2,000 m). Depending on model choice, however, total explained variation differed more than fourfold, and the apparent relative importance of space and environment could be reversed. Therefore, we advocate a broader recognition of the impacts that data quality has on analysis results. A general understanding of the relative contributions of spatial and environmental processes to species distributions and beta-diversity requires that methodological artefacts are separated from real ecological differences.

  19. ERRATUM: In vivo evaluation of a neural stem cell-seeded prosthesis In vivo evaluation of a neural stem cell-seeded prosthesis

    NASA Astrophysics Data System (ADS)

    Purcell, E. K.; Seymour, J. P.; Yandamuri, S.; Kipke, D. R.

    2009-08-01

    In the published article, an error was made in figure 5. Specifically, the three-month, NSC-seeded image is a duplicate of the six-week image, and the one-day, probe alone image is a duplicate of the three-month image. The corrected figure is reproduced below. Figure 5. Glial encapsulation of each probe condition over the 3 month time course. Ox-42 labeled microglia and GFAP labeled astrocytes are shown. Images are taken from probes implanted in the same animal at each time point. NSC seeding was associated with reduced non-neuronal density at 1 day post-implantation in comparison to alginate-coated probes and at the 1 week time point in comparison to untreated probes (P < 0.001). Glial activation is at its overall peak 1 week after insertion. A thin encapsulation layer surrounds probes at the 6 week and 3 month time points, with NSC-seeded probes having the greatest surrounding non-neuronal density (P < 0.001). Interestingly, microglia appeared to have a ramified, or 'surveilling', morphology surrounding a neural stem cell-alginate probe initially, whereas activated cells with an amoeboid structure were found near an alginate probe in the same hemisphere of one animal (left panels).

  20. Myocardial strains from 3D displacement encoded magnetic resonance imaging

    PubMed Central

    2012-01-01

    Background The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), makes detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. Results The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and to resolve transmural strain variations. Conclusions Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strain values agree with previously reported myocardial strains in normal human hearts. PMID:22533791
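
    A sketch of the core computation under the stated approach: fit a polynomial displacement model u(X) to scattered displacement samples by least squares, differentiate it analytically to obtain the deformation gradient F = I + ∂u/∂X, and form the Green-Lagrange strain E = (FᵀF − I)/2 (the quadratic basis and all names are ours, not the authors'):

        import numpy as np

        def strain_from_displacements(X, U, X_eval):
            """X: (N,3) material positions, U: (N,3) displacements. Returns
            the Green-Lagrange strain E = (F^T F - I)/2 at X_eval."""
            def basis(p):
                px, py, pz = p
                return np.array([1, px, py, pz, px*py, px*pz, py*pz,
                                 px**2, py**2, pz**2])
            def basis_grad(p):
                px, py, pz = p
                return np.array([
                    [0, 1, 0, 0, py, pz, 0, 2*px, 0, 0],
                    [0, 0, 1, 0, px, 0, pz, 0, 2*py, 0],
                    [0, 0, 0, 1, 0, px, py, 0, 0, 2*pz],
                ])
            A = np.array([basis(p) for p in X])
            C, *_ = np.linalg.lstsq(A, U, rcond=None)   # (10, 3), one column per u-component
            dudX = basis_grad(X_eval) @ C               # [i, j] = d u_j / d X_i
            F = np.eye(3) + dudX.T                      # F_ij = delta_ij + d u_i / d X_j
            return 0.5 * (F.T @ F - np.eye(3))

        # Check on a homogeneous deformation u = G X, where E is constant.
        rng = np.random.default_rng(4)
        X = rng.uniform(-1, 1, (200, 3))
        G = np.array([[0.10, 0.02, 0.0], [0.0, -0.05, 0.01], [0.0, 0.0, 0.03]])
        print(strain_from_displacements(X, X @ G.T, np.zeros(3)))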

  1. First Instances of Generalized Expo-Rational Finite Elements on Triangulations

    NASA Astrophysics Data System (ADS)

    Dechevsky, Lubomir T.; Zanaty, Peter; Lakså, Arne; Bang, Børre

    2011-12-01

    In this communication we consider a construction of simplicial finite elements on triangulated two-dimensional polygonal domains. This construction is, in some sense, dual to the construction of generalized expo-rational B-splines (GERBS). The main result is the construction of new polynomial simplicial patches of the first several lowest possible total polynomial degrees which exhibit Hermite interpolatory properties. The derivation of these results is based on the theory of piecewise polynomial GERBS called Euler Beta-function B-splines. We also provide 3-dimensional visualization of the graphs of the new polynomial simplicial patches and their control polygons.

  2. Baseline correction combined partial least squares algorithm and its application in on-line Fourier transform infrared quantitative analysis.

    PubMed

    Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping

    2011-04-01

    In order to eliminate lower-order polynomial interferences, a new quantitative calibration algorithm, "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into PLS weight selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirement of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. The BCC-PLS algorithm shows improved prediction performance over PLS. The root-mean-square error of cross-validation (RMSECV) on marzipan spectra for the prediction of moisture is found to be 0.53%, w/w (range 7-19%). The sugar content is predicted with an RMSECV of 2.04%, w/w (range 33-68%). Copyright © 2011 Elsevier B.V. All rights reserved.
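
    BCC-PLS embeds the baseline constraint in the PLS weight selection itself; the simpler two-step variant sketched below, projecting out a low-order polynomial baseline and then running ordinary PLS, conveys the idea (scikit-learn PLS, synthetic data, all parameters illustrative):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def remove_poly_baseline(spectra, wavenumbers, order=2):
            """Project each spectrum onto the orthogonal complement of the
            low-order polynomials, discarding baseline drift up to `order`."""
            V = np.vander(wavenumbers / wavenumbers.max(), order + 1)
            Q, _ = np.linalg.qr(V)
            return spectra - (spectra @ Q) @ Q.T

        # Toy data: 40 spectra, 300 channels, one peak, random quadratic baselines.
        rng = np.random.default_rng(5)
        wn = np.linspace(800, 1800, 300)
        conc = rng.uniform(0, 1, 40)
        peak = np.exp(-0.5 * ((wn - 1100) / 15) ** 2)
        baselines = np.polynomial.polynomial.polyvander(wn / 1800, 2) @ rng.normal(0, 0.5, (3, 40))
        spectra = conc[:, None] * peak + baselines.T

        X = remove_poly_baseline(spectra, wn, order=2)
        model = PLSRegression(n_components=2).fit(X, conc)
        print(np.sqrt(np.mean((model.predict(X).ravel() - conc) ** 2)))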

  3. Arrays of nucleic acid probes on biological chips

    DOEpatents

    Chee, Mark; Cronin, Maureen T.; Fodor, Stephen P. A.; Huang, Xiaohua X.; Hubbell, Earl A.; Lipshutz, Robert J.; Lobban, Peter E.; Morris, MacDonald S.; Sheldon, Edward L.

    1998-11-17

    DNA chips containing arrays of oligonucleotide probes can be used to determine whether a target nucleic acid has a nucleotide sequence identical to or different from a specific reference sequence. The array of probes comprises probes exactly complementary to the reference sequence, as well as probes that differ by one or more bases from the exactly complementary probes.

  4. Brain-shift compensation using intraoperative ultrasound and constraint-based biomechanical simulation.

    PubMed

    Morin, Fanny; Courtecuisse, Hadrien; Reinertsen, Ingerid; Le Lann, Florian; Palombi, Olivier; Payan, Yohan; Chabanas, Matthieu

    2017-08-01

    During brain tumor surgery, planning and guidance are based on preoperative images which do not account for brain-shift. However, this deformation is a major source of error in image-guided neurosurgery and affects the accuracy of the procedure. In this paper, we present a constraint-based biomechanical simulation method to compensate for craniotomy-induced brain-shift that integrates the deformations of the blood vessels and cortical surface, using a single intraoperative ultrasound acquisition. Prior to surgery, a patient-specific biomechanical model is built from preoperative images, accounting for the vascular tree in the tumor region and brain soft tissues. Intraoperatively, a navigated ultrasound acquisition is performed directly in contact with the organ. Doppler and B-mode images are recorded simultaneously, enabling the extraction of the blood vessels and probe footprint, respectively. A constraint-based simulation is then executed to register the pre- and intraoperative vascular trees as well as the cortical surface with the probe footprint. Finally, preoperative images are updated to provide the surgeon with images corresponding to the current brain shape for navigation. The robustness of our method is first assessed using sparse and noisy synthetic data. In addition, quantitative results for five clinical cases are provided, first using landmarks set on blood vessels, then based on anatomical structures delineated in medical images. The average distances between paired vessel landmarks ranged from 3.51 to 7.32 mm before compensation. With our method, on average 67% of the brain-shift is corrected (range [1.26; 2.33]) against 57% using one of the closest existing works (range [1.71; 2.84]). Finally, our method is proven to be fully compatible with a surgical workflow in terms of execution times and user interactions. In this paper, a new constraint-based biomechanical simulation method is proposed to compensate for craniotomy-induced brain-shift. While efficient at correcting this deformation, the method is fully integrable in a clinical process. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Optimizing the specificity of nucleic acid hybridization.

    PubMed

    Zhang, David Yu; Chen, Sherry Xi; Yin, Peng

    2012-01-22

    The specific hybridization of complementary sequences is an essential property of nucleic acids, enabling diverse biological and biotechnological reactions and functions. However, the specificity of nucleic acid hybridization is compromised for long strands, except near the melting temperature. Here, we analytically derived the thermodynamic properties of a hybridization probe that would enable near-optimal single-base discrimination and perform robustly across diverse temperature, salt and concentration conditions. We rationally designed 'toehold exchange' probes that approximate these properties, and comprehensively tested them against five different DNA targets and 55 spurious analogues with energetically representative single-base changes (replacements, deletions and insertions). These probes produced discrimination factors between 3 and 100+ (median, 26). Without retuning, our probes function robustly from 10 °C to 37 °C, from 1 mM Mg²⁺ to 47 mM Mg²⁺, and with nucleic acid concentrations from 1 nM to 5 µM. Experiments with RNA also showed effective single-base change discrimination.

  6. Coherence and diffraction limited resolution in microscopic OCT by a unified approach for the correction of dispersion and aberrations

    NASA Astrophysics Data System (ADS)

    Schulz-Hildebrandt, H.; Münter, Michael; Ahrens, M.; Spahr, H.; Hillmann, D.; König, P.; Hüttmann, G.

    2018-03-01

    Optical coherence tomography (OCT) images scattering tissues with 5 to 15 μm resolution. This is usually not sufficient to distinguish cellular and subcellular structures. Achieving cellular and subcellular resolution requires increasing the axial and lateral resolution and compensating artifacts caused by dispersion and aberrations, including defocus, which limits the usable depth of field at high lateral resolution. OCT gives access to the phase of the scattered light, and hence correction of dispersion and aberrations is possible by numerical algorithms. Here we present a unified dispersion/aberration correction which is based on a polynomial parameterization of the phase error and an optimization of the image quality using Shannon's entropy. For validation, a supercontinuum light source and a custom-made spectrometer with 400 nm bandwidth were combined with a high-NA microscope objective in a setup for tissue and small animal imaging. Using this setup and computational corrections, volumetric imaging at 1.5 μm resolution is possible. Cellular and near-cellular resolution is demonstrated in porcine cornea and the Drosophila larva when computational correction of dispersion and aberrations is used. Due to the excellent correction of the microscope objective used, defocus was the main contribution to the aberrations. In addition, higher-order aberrations caused by the sample itself were successfully corrected. Dispersion and aberrations are closely related artifacts in microscopic OCT imaging, and hence they can be corrected in the same way by optimization of the image quality. This way, microscopic resolution is easily achieved in OCT imaging of static biological tissues.
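
    A minimal sketch of this kind of computational correction is given below: the phase error is parameterized by a low-order polynomial in the pupil plane, and the coefficients are found by minimizing the Shannon entropy of the corrected intensity image. The two polynomial terms, the Nelder-Mead optimizer and the random stand-in data are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from scipy.optimize import minimize

        def image_entropy(intensity):
            """Shannon entropy of the normalized intensity distribution."""
            p = intensity / intensity.sum()
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        def apply_phase_correction(field, coeffs):
            """Remove a polynomial phase error in the Fourier (pupil) plane."""
            ny, nx = field.shape
            fy = np.fft.fftfreq(ny)[:, None]
            fx = np.fft.fftfreq(nx)[None, :]
            # low-order polynomial phase: defocus- and astigmatism-like terms
            phase = coeffs[0] * (fx**2 + fy**2) + coeffs[1] * (fx**2 - fy**2)
            return np.fft.ifft2(np.fft.fft2(field) * np.exp(-1j * phase))

        def cost(coeffs, field):
            return image_entropy(np.abs(apply_phase_correction(field, coeffs))**2)

        # 'field' stands in for one complex en-face OCT plane
        rng = np.random.default_rng(0)
        field = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
        result = minimize(cost, x0=[0.0, 0.0], args=(field,), method="Nelder-Mead")
        print("estimated phase-error coefficients:", result.x)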

  7. Segmented Mirror Telescope Model and Simulation

    DTIC Science & Technology

    2011-06-01

    mirror surface is treated as a grid of masses and springs. The actuators have surface normal forces applied to individual masses. The equation to...are not widely treated in the literature. The required modifications for the wavefront reconstruction algorithm of a circular aperture to correctly...Zernike polynomials, which are particularly suitable to describe the common optical characterizations of astigmatism, coma, defocus and others [9

  8. Fast transform decoding of nonsystematic Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Cheung, K.-M.; Reed, I. S.; Shiozaki, A.

    1989-01-01

    A Reed-Solomon (RS) code is considered to be a special case of a redundant residue polynomial (RRP) code, and a fast transform decoding algorithm to correct both errors and erasures is presented. This decoding scheme is an improvement of the decoding algorithm for the RRP code suggested by Shiozaki and Nishida, and can be realized readily on very large scale integration chips.

  9. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    NASA Astrophysics Data System (ADS)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents the reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, the modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle comprising SO, reliability assessment and constraint updates is repeated in the RBSO until the reliability requirements on constraint satisfaction are met. Finally, the RBSO is compared with the traditional DO and the traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and the efficiency of the proposed method.
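
    The nonintrusive PCE step can be illustrated compactly: the black-box dynamics are sampled, a polynomial chaos surrogate is fitted by least squares, and statistics then follow from orthogonality. The scalar toy model and the one-dimensional Hermite basis below are illustrative assumptions, not the entry-dynamics code of the paper.

        import math
        import numpy as np
        from numpy.polynomial import hermite_e as He   # probabilists' Hermite basis

        def model(xi):
            """Stand-in for an expensive trajectory propagation; xi ~ N(0, 1)."""
            return np.cos(0.5 * xi) + 0.1 * xi**2

        # Nonintrusive PCE: sample the uncertain input, evaluate the model,
        # and fit the chaos coefficients by least squares.
        rng = np.random.default_rng(1)
        xi = rng.standard_normal(200)
        coeffs = He.hermefit(xi, model(xi), deg=4)

        # Orthogonality of He_k w.r.t. N(0, 1) gives the moments directly:
        # mean = c_0, variance = sum_k k! c_k^2 for k >= 1.
        var = sum(math.factorial(k) * coeffs[k]**2 for k in range(1, 5))
        print(f"PCE mean ~ {coeffs[0]:.4f}, PCE variance ~ {var:.4f}")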

  10. The design and application of fluorophore–gold nanoparticle activatable probes

    PubMed Central

    Swierczewska, Magdalena; Lee, Seulki; Chen, Xiaoyuan

    2013-01-01

    Fluorescence-based assays and detection techniques are among the most highly sensitive and popular biological tests for researchers. However, to match the needs of research and the clinic, detection limits and specificities need to improve. One mechanism is to decrease non-specific background signals, which is most efficiently done by increasing fluorescence quenching abilities. Reports in the literature of theoretical and experimental work have shown that metallic gold surfaces and nanoparticles are ultra-efficient fluorescence quenchers. Based on these findings, subsequent reports have described gold nanoparticle fluorescence-based activatable probes that were designed to increase fluorescence intensity in response to a range of stimuli. In this way, these probes can detect and signify assorted biomarkers and changes in environmental conditions. In this review, we explore the various factors and theoretical models that affect gold nanoparticle fluorescence quenching, survey current uses of activatable probes, and propose an engineering approach for future development of fluorescence-based gold nanoparticle activatable probes. PMID:21380462

  11. Development of LC-13C NMR

    NASA Technical Reports Server (NTRS)

    Dorn, H. C.; Wang, J. S.; Glass, T. E.

    1986-01-01

    This study involves the development of C-13 nuclear magnetic resonance as an on-line detector for liquid chromatography (LC-C-13 NMR) for the chemical characterization of aviation fuels. The initial focus of this study was the development of a high-sensitivity flow C-13 NMR probe. Since C-13 NMR sensitivity is of paramount concern, considerable effort during the first year was directed at new NMR probe designs. In particular, various toroid coil designs were examined, along with the corresponding shim coils for correcting the main magnetic field (B sub 0) homogeneity. Based on these initial probe design studies, an LC-C-13 NMR probe was built and flow C-13 NMR data were obtained for a limited number of samples.

  12. On adaptive weighted polynomial preconditioning for Hermitian positive definite matrices

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Freund, Roland W.

    1992-01-01

    The conjugate gradient algorithm for solving Hermitian positive definite linear systems is usually combined with preconditioning in order to speed up convergence. In recent years, there has been a revival of polynomial preconditioning, motivated by the attractive features of the method on modern architectures. Standard techniques for choosing the preconditioning polynomial are based only on bounds for the extreme eigenvalues. Here a different approach is proposed, which aims at adapting the preconditioner to the eigenvalue distribution of the coefficient matrix. The technique is based on the observation that good estimates for the eigenvalue distribution can be derived after only a few steps of the Lanczos process. This information is then used to construct a weight function for a suitable Chebyshev approximation problem. The solution of this problem yields the polynomial preconditioner. In particular, we investigate the use of Bernstein-Szego weights.
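
    As an illustration of the basic idea (not of the weighted Bernstein-Szego construction itself), the sketch below preconditions CG with a fixed-degree Chebyshev polynomial in A built from assumed extreme-eigenvalue estimates; in the adaptive setting those estimates would come from a few Lanczos steps. The test matrix and step count are invented for illustration.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        rng = np.random.default_rng(2)
        n = 200
        Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
        A = Q @ np.diag(np.linspace(0.1, 10.0, n)) @ Q.T     # SPD test matrix
        b = rng.standard_normal(n)

        # Extreme-eigenvalue estimates (in practice: a few Lanczos steps)
        lmin, lmax = 0.1, 10.0
        theta, delta = (lmax + lmin) / 2.0, (lmax - lmin) / 2.0
        sigma = theta / delta

        def cheb_apply(r, steps=5):
            """Apply a degree-`steps` Chebyshev polynomial approximation of
            A^{-1} to r (classical Chebyshev iteration, zero initial guess)."""
            rho = 1.0 / sigma
            d = r / theta
            z = np.zeros_like(r)
            res = r.astype(float)
            for _ in range(steps):
                z = z + d
                res = res - A @ d
                rho_new = 1.0 / (2.0 * sigma - rho)
                d = rho_new * rho * d + (2.0 * rho_new / delta) * res
                rho = rho_new
            return z

        M = LinearOperator((n, n), matvec=cheb_apply)
        x, info = cg(A, b, M=M)
        print("converged:", info == 0, "residual:", np.linalg.norm(A @ x - b))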

  13. Simple synthesis of carbon-11 labeled styryl dyes as new potential PET RNA-specific, living cell imaging probes.

    PubMed

    Wang, Min; Gao, Mingzhang; Miller, Kathy D; Sledge, George W; Hutchins, Gary D; Zheng, Qi-Huang

    2009-05-01

    A new type of styryl dye has been developed as an RNA-specific, live-cell imaging probe for fluorescence microscopy to study nuclear structure and function. This study was designed to develop carbon-11 labeled styryl dyes as new probes for the biomedical imaging technique positron emission tomography (PET) to image RNA in living cells. Precursors (E)-2-(2-(1-(triisopropylsilyl)-1H-indol-3-yl)vinyl)quinoline (2), (E)-2-(2,4,6-trimethoxystyryl)quinoline (3) and (E)-4-(2-(6-methoxyquinolin-2-yl)vinyl)-N,N-dimethylaniline (4), and the standard styryl dyes E36 (6), E144 (7) and F22 (9) were synthesized in multiple steps with moderate to high chemical yields. Precursor 2 was labeled by [(11)C]CH(3)OTf, trapped on a cation-exchange CM Sep-Pak cartridge following a quick deprotecting reaction by addition of (n-Bu)(4)NF in THF, and isolated by solid-phase extraction (SPE) purification to provide the target tracer [(11)C]E36 ([(11)C]6) in 40-50% radiochemical yields, decay corrected to end of bombardment (EOB), based on [(11)C]CO(2). The target tracers [(11)C]E144 ([(11)C]7) and [(11)C]F22 ([(11)C]9) were prepared by N-[(11)C]methylation of the precursors 3 and 4, respectively, using [(11)C]CH(3)OTf, and isolated by the SPE method in 50-70% radiochemical yields at EOB. The specific activity of the target tracers [(11)C]6, [(11)C]7 and [(11)C]9 was in the range of 74-111 GBq/μmol at the end of synthesis (EOS).
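
    The decay correction to EOB quoted in such yields is a single exponential in the elapsed time. A small sketch using the standard carbon-11 half-life of about 20.4 min; the 18% yield and 35 min elapsed time are invented illustrations, not figures from the paper.

        import math

        T_HALF_C11 = 20.4                      # carbon-11 half-life, minutes
        LAMBDA = math.log(2.0) / T_HALF_C11

        def decay_corrected(measured_yield, minutes_since_eob):
            """Back-correct a measured radiochemical yield to end of bombardment."""
            return measured_yield * math.exp(LAMBDA * minutes_since_eob)

        # e.g. 18% measured at end of synthesis, 35 min after EOB (illustrative)
        print(f"{decay_corrected(0.18, 35.0):.1%} decay-corrected to EOB")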

  14. High-Throughput Genotyping of Single Nucleotide Polymorphisms in the Plasmodium falciparum dhfr Gene by Asymmetric PCR and Melt-Curve Analysis▿

    PubMed Central

    Cruz, Rochelle E.; Shokoples, Sandra E.; Manage, Dammika P.; Yanow, Stephanie K.

    2010-01-01

    Mutations within the Plasmodium falciparum dihydrofolate reductase gene (Pfdhfr) contribute to resistance to antimalarials such as sulfadoxine-pyrimethamine (SP). Of particular importance are the single nucleotide polymorphisms (SNPs) within codons 51, 59, 108, and 164 in the Pfdhfr gene that are associated with SP treatment failure. Given that traditional genotyping methods are time-consuming and laborious, we developed an assay that provides the rapid, high-throughput analysis of parasite DNA isolated from clinical samples. This assay is based on asymmetric real-time PCR and melt-curve analysis (MCA) performed on the LightCycler platform. Unlabeled probes specific to each SNP are included in the reaction mixture and hybridize differentially to the mutant and wild-type sequences within the amplicon, generating distinct melting curves. Since the probe is present throughout PCR and MCA, the assay proceeds seamlessly with no further addition of reagents. This assay was validated for analytical sensitivity and specificity using plasmids, purified genomic DNA from reference strains, and parasite cultures. For all four SNPs, correct genotypes were identified with 100 copies of the template. The performance of the assay was evaluated with a blind panel of clinical isolates from travelers with low-level parasitemia. The concordance between our assay and DNA sequencing ranged from 84 to 100% depending on the SNP. We also directly compared our MCA assay to a published TaqMan real-time PCR assay and identified major issues with the specificity of the TaqMan probes. Our assay provides a number of technical improvements that facilitate the high-throughput screening of patient samples to identify SP-resistant malaria. PMID:20631115

  15. [Development of a universal primers PCR-coupled liquid bead array to detect biothreat bacteria].

    PubMed

    Wen, Hai-yan; Wang, Jing; Liu, Heng-chuan; Sun, Xiao-hong; Yang, Yu; Hu, Kong-xin; Shan, Lin-jun

    2009-10-01

    To develop a fast, high-throughput screening method with suspension array technique for simultaneous detection of biothreat bacteria, 16S rDNA universal primers for Bacillus anthracis, Francisella tularensis, Yersinia pestis, Brucella spp. and Burkholderia pseudomallei were selected to amplify the corresponding regions, and genus-specific or species-specific probes were designed. After amplification of chromosomal DNA by 16S rDNA primers 341A and 519B, the PCR products were detected by suspension array technique. The sensitivity, specificity, reproducibility and detection power were also analyzed. After PCR amplification with the 16S rDNA primers and specific probe hybridization, the target microorganisms could be identified at the genus level; cross-reaction was recognized within the same genus. The detection sensitivity of the assay was 1.5 pg/μl (Burkholderia pseudomallei), 20 pg/μl (Brucella spp.), 7 pg/μl (Bacillus anthracis), 0.1 pg/μl (Francisella tularensis), and 1.1 pg/μl (Yersinia pestis), respectively. The coefficient of variation for 15 tests of different probes ranged from 5.18% to 17.88%, showing good reproducibility. The assay could correctly identify Bacillus anthracis and Yersinia pestis strains in simulated white powder samples. The suspension array technique could serve as an open screening method for rapid biothreat bacteria detection.

  16. Quantum models with energy-dependent potentials solvable in terms of exceptional orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulze-Halberg, Axel, E-mail: axgeschu@iun.edu; Department of Physics, Indiana University Northwest, 3400 Broadway, Gary IN 46408; Roy, Pinaki, E-mail: pinaki@isical.ac.in

    We construct energy-dependent potentials for which the Schrödinger equations admit solutions in terms of exceptional orthogonal polynomials. Our method of construction is based on certain point transformations, applied to the equations of exceptional Hermite, Jacobi and Laguerre polynomials. We present several examples of boundary-value problems with energy-dependent potentials that admit a discrete spectrum and the corresponding normalizable solutions in closed form.

  17. Algorithms for Solvents and Spectral Factors of Matrix Polynomials

    DTIC Science & Technology

    1981-01-01

    spectral factors of matrix polynomials LEANG S. SHIEH, YIH T. TSAY and NORMAN P. COLEMAN A generalized Newton method, based on the contracted gradient...of a matrix polynomial, is derived for solving the right (left) solvents and spectral factors of matrix polynomials. Two methods of selecting initial...estimates for rapid convergence of the newly developed numerical method are proposed. Also, new algorithms for solving complete sets of the right

  18. Long-time uncertainty propagation using generalized polynomial chaos and flow map composition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luchtenburg, Dirk M., E-mail: dluchten@cooper.edu; Brunton, Steven L.; Rowley, Clarence W.

    2014-10-01

    We present an efficient and accurate method for long-time uncertainty propagation in dynamical systems. Uncertain initial conditions and parameters are both addressed. The method approximates the intermediate short-time flow maps by spectral polynomial bases, as in the generalized polynomial chaos (gPC) method, and uses flow map composition to construct the long-time flow map. In contrast to the gPC method, this approach has spectral error convergence for both short and long integration times. The short-time flow map is characterized by small stretching and folding of the associated trajectories and hence can be well represented by a relatively low-degree basis. The composition of these low-degree polynomial bases then accurately describes the uncertainty behavior for long integration times. The key to the method is that the degree of the resulting polynomial approximation increases exponentially in the number of time intervals, while the number of polynomial coefficients either remains constant (for an autonomous system) or increases linearly in the number of time intervals (for a non-autonomous system). The findings are illustrated on several numerical examples including a nonlinear ordinary differential equation (ODE) with an uncertain initial condition, a linear ODE with an uncertain model parameter, and a two-dimensional, non-autonomous double gyre flow.
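
    The flow-map-composition idea is easy to demonstrate for a scalar autonomous ODE: fit a low-degree polynomial to the short-time flow map once, then compose it to reach long times. The logistic dynamics, the fitting range and the degree below are illustrative assumptions, and the uncertainty layer (pushing a gPC expansion of the initial condition through the composed map) is omitted for brevity.

        import numpy as np
        from scipy.integrate import solve_ivp

        def f(t, x):                          # illustrative autonomous dynamics
            return x * (1.0 - x)

        def short_time_flow(x0, dt=0.5):
            sol = solve_ivp(f, (0.0, dt), [x0], rtol=1e-10, atol=1e-12)
            return sol.y[0, -1]

        # Fit a low-degree polynomial to the short-time flow map once.
        xs = np.linspace(0.0, 1.2, 60)
        ys = np.array([short_time_flow(x) for x in xs])
        flow_poly = np.polynomial.Polynomial.fit(xs, ys, deg=7)

        # Long-time map by composition: 20 intervals of dt = 0.5 -> T = 10.
        x = 0.1
        for _ in range(20):
            x = flow_poly(x)
        print("composed map:", x, " direct integration:", short_time_flow(0.1, dt=10.0))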

  19. Estimation of Phase in Fringe Projection Technique Using High-order Instantaneous Moments Based Method

    NASA Astrophysics Data System (ADS)

    Gorthi, Sai Siva; Rajshekhar, G.; Rastogi, Pramod

    2010-04-01

    For three-dimensional (3D) shape measurement using fringe projection techniques, the information about the 3D shape of an object is encoded in the phase of a recorded fringe pattern. The paper proposes a high-order instantaneous moments based method to estimate phase from a single fringe pattern in fringe projection. The proposed method works by approximating the phase as a piece-wise polynomial and subsequently determining the polynomial coefficients using high-order instantaneous moments to construct the polynomial phase. Simulation results are presented to show the method's potential.

  20. Training apartment upkeep skills to rehabilitation clients: a comparison of task analytic strategies.

    PubMed Central

    Williams, G E; Cuvo, A J

    1986-01-01

    The research was designed to validate procedures to teach apartment upkeep skills to severely handicapped clients with various categorical disabilities. Methodological features of this research included performance comparisons between general and specific task analyses, effect of an impasse correction baseline procedure, social validation of training goals, natural environment assessments and contingencies, as well as long-term follow-up. Subjects were taught to perform upkeep responses on their air conditioner-heating unit, electric range, refrigerator, and electrical appliances within the context of a multiple-probe across subjects experimental design. The results showed acquisition, long-term maintenance, and generalization of the upkeep skills to a nontraining apartment. General task analyses were recommended for assessment and specific task analyses for training. The impasse correction procedure generally did not produce acquisition. PMID:3710947

  1. Airborne laser ranging system for monitoring regional crustal deformation

    NASA Technical Reports Server (NTRS)

    Degnan, J. J.

    1981-01-01

    Alternate approaches for making the atmospheric correction without benefit of a ground-based meteorological network are discussed. These include (1) a two-color channel that determines the atmospheric correction by measuring the time delay induced by dispersion between pulses at two optical frequencies; (2) single-color range measurements supported by an onboard temperature sounder, pressure altimeter readings, and surface measurements by a few existing meteorological facilities; and (3) inclusion of the quadratic polynomial coefficients as variables to be solved for along with target coordinates in the reduction of the single-color range data. It is anticipated that the initial Airborne Laser Ranging System (ALRS) experiments will be carried out in Southern California in a region bounded by Santa Barbara on the north and the Mexican border on the south. The target area will be bounded by the Pacific Ocean to the west and will extend eastward for approximately 400 km. The unique ability of the ALRS to provide a geodetic 'snapshot' of such a large area will make it a valuable geophysical tool.

  2. Image-based spectral distortion correction for photon-counting x-ray detectors

    PubMed Central

    Ding, Huanjun; Molloi, Sabee

    2012-01-01

    Purpose: To investigate the feasibility of using an image-based method to correct for distortions induced by various artifacts in the x-ray spectrum recorded with photon-counting detectors for their application in breast computed tomography (CT). Methods: The polyenergetic incident spectrum was simulated with the tungsten anode spectral model using the interpolating polynomials (TASMIP) code and carefully calibrated to match the x-ray tube in this study. Experiments were performed on a Cadmium-Zinc-Telluride (CZT) photon-counting detector with five energy thresholds. Energy bins were adjusted to evenly distribute the recorded counts above the noise floor. BR12 phantoms of various thicknesses were used for calibration. A nonlinear function was selected to fit the count correlation between the simulated and the measured spectra in the calibration process. To evaluate the proposed spectral distortion correction method, an empirical fitting derived from the calibration process was applied on the raw images recorded for polymethyl methacrylate (PMMA) phantoms of 8.7, 48.8, and 100.0 mm. Both the corrected counts and the effective attenuation coefficient were compared to the simulated values for each of the five energy bins. The feasibility of applying the proposed method to quantitative material decomposition was tested using a dual-energy imaging technique with a three-material phantom that consisted of water, lipid, and protein. The performance of the spectral distortion correction method was quantified using the relative root-mean-square (RMS) error with respect to the expected values from simulations or areal analysis of the decomposition phantom. Results: The implementation of the proposed method reduced the relative RMS error of the output counts in the five energy bins with respect to the simulated incident counts from 23.0%, 33.0%, and 54.0% to 1.2%, 1.8%, and 7.7% for 8.7, 48.8, and 100.0 mm PMMA phantoms, respectively. The accuracy of the effective attenuation coefficient of PMMA estimate was also improved with the proposed spectral distortion correction. Finally, the relative RMS error of water, lipid, and protein decompositions in dual-energy imaging was significantly reduced from 53.4% to 6.8% after correction was applied. Conclusions: The study demonstrated that dramatic distortions in the recorded raw image yielded from a photon-counting detector could be expected, which presents great challenges for applying the quantitative material decomposition method in spectral CT. The proposed semi-empirical correction method can effectively reduce these errors caused by various artifacts, including pulse pileup and charge sharing effects. Furthermore, rather than detector-specific simulation packages, the method requires a relatively simple calibration process and knowledge about the incident spectrum. Therefore, it may be used as a generalized procedure for the spectral distortion correction of different photon-counting detectors in clinical breast CT systems. PMID:22482608

  3. Response Surface Modeling Tolerance and Inference Error Risk Specifications: Proposed Industry Standards

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2012-01-01

    This paper reviews the derivation of an equation for scaling response surface modeling experiments. The equation represents the smallest number of data points required to fit a linear regression polynomial so as to achieve certain specified model adequacy criteria. Specific criteria are proposed which simplify an otherwise rather complex equation, generating a practical rule of thumb for the minimum volume of data required to adequately fit a polynomial with a specified number of terms in the model. This equation and the simplified rule of thumb it produces can be applied to minimize the cost of wind tunnel testing.
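
    The bookkeeping behind such a rule of thumb is simple: a full polynomial of order n in d factors has C(n+d, d) terms, and the minimum data volume is that term count inflated by some specified margin. The 50% margin in the sketch below is an arbitrary placeholder, not the specific criterion derived in the paper.

        import math

        def n_terms(order, factors):
            """Terms in a full polynomial of given order in `factors` variables."""
            return math.comb(order + factors, factors)

        def min_points(order, factors, inflation=1.5):
            """Smallest data volume: term count times an assumed safety margin."""
            return math.ceil(inflation * n_terms(order, factors))

        for order in (1, 2, 3):
            p = n_terms(order, 4)
            print(f"order {order}, 4 factors: {p} terms, >= {min_points(order, 4)} points")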

  4. Polynomial probability distribution estimation using the method of moments

    PubMed Central

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949
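
    On a bounded support the procedure reduces to a linear system: requiring that the polynomial reproduce the first N+1 raw moments gives N+1 linear equations in the coefficients, with matrix entries that are monomial integrals. A minimal sketch, in which the support, degree and target distribution are illustrative choices (and the known possibility of slightly negative tails is not handled):

        import numpy as np

        def polynomial_pdf(moments, a, b):
            """Coefficients c of p(x) = sum_j c_j x^j on [a, b] such that
            int x^k p(x) dx equals the given raw moments (method of moments)."""
            n = len(moments)
            A = np.array([[(b**(k + j + 1) - a**(k + j + 1)) / (k + j + 1)
                           for j in range(n)] for k in range(n)])
            return np.linalg.solve(A, np.asarray(moments, dtype=float))

        # Raw moments of the standard normal: 1, 0, 1, 0, 3, 0, 15 (degree-6 fit)
        c = polynomial_pdf([1, 0, 1, 0, 3, 0, 15], a=-4.0, b=4.0)
        x = np.linspace(-4.0, 4.0, 9)
        print(np.round(np.polynomial.polynomial.polyval(x, c), 4))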

  5. Phase unwrapping algorithm using polynomial phase approximation and linear Kalman filter.

    PubMed

    Kulkarni, Rishikesh; Rastogi, Pramod

    2018-02-01

    A noise-robust phase unwrapping algorithm is proposed based on state space analysis and polynomial phase approximation using wrapped phase measurement. The true phase is approximated as a two-dimensional first order polynomial function within a small sized window around each pixel. The estimates of polynomial coefficients provide the measurement of phase and local fringe frequencies. A state space representation of spatial phase evolution and the wrapped phase measurement is considered with the state vector consisting of polynomial coefficients as its elements. Instead of using the traditional nonlinear Kalman filter for the purpose of state estimation, we propose to use the linear Kalman filter operating directly with the wrapped phase measurement. The adaptive window width is selected at each pixel based on the local fringe density to strike a balance between the computation time and the noise robustness. In order to retrieve the unwrapped phase, either a line-scanning approach or a quality guided strategy of pixel selection is used depending on the underlying continuous or discontinuous phase distribution, respectively. Simulation and experimental results are provided to demonstrate the applicability of the proposed method.

  6. Polynomial probability distribution estimation using the method of moments.

    PubMed

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.

  7. Hydrodynamic description of an unmagnetized plasma with multiple ion species. I. General formulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simakov, Andrei N., E-mail: simakov@lanl.gov; Molvig, Kim

    2016-03-15

    A generalization of the Braginskii ion fluid description [S. I. Braginskii, Sov. Phys. - JETP 6, 358 (1958)] to the case of an unmagnetized collisional plasma with multiple ion species is presented. An asymptotic expansion in the ion Knudsen number is used to derive the individual ion species continuity, as well as the total ion mass density, momentum, and energy evolution equations accurate through the second order. Expressions for the individual ion species drift velocities with respect to the center of mass reference frame, as well as for the total ion heat flux and viscosity, which are required to close the fluid equations, are evaluated in terms of the first-order corrections to the lowest order Maxwellian ion velocity distribution functions. A variational formulation for evaluating such corrections and its relation to the plasma entropy are presented. Employing trial functions for the corrections, written in terms of expansions in generalized Laguerre polynomials, and maximizing the resulting functionals produce two systems of linear equations (for “vector” and “tensor” portions of the corrections) for the expansion coefficients. A general matrix formulation of the linear systems as well as expressions for the resulting transport fluxes are presented in forms convenient for numerical implementation. The general formulation is employed in Paper II [A. N. Simakov and K. Molvig, Phys. Plasmas 23, 032116 (2016)] to evaluate the individual ion drift velocities and the total ion heat flux and viscosity for specific cases of two and three ion species plasmas.

  8. Correction of aeroheating-induced intensity nonuniformity in infrared images

    NASA Astrophysics Data System (ADS)

    Liu, Li; Yan, Luxin; Zhao, Hui; Dai, Xiaobing; Zhang, Tianxu

    2016-05-01

    Aeroheating-induced intensity nonuniformity effects severely influence the effective performance of an infrared (IR) imaging system in high-speed flight. In this paper, we propose a new approach to the correction of intensity nonuniformity in IR images. The basic assumption is that the low-frequency intensity bias is additive and smoothly varying so that it can be modeled as a bivariate polynomial and estimated by using an isotropic total variation (TV) model. A half quadratic penalty method is applied to the isotropic form of TV discretization. And an alternating minimization algorithm is adopted for solving the optimization model. The experimental results of simulated and real aerothermal images show that the proposed correction method can effectively improve IR image quality.
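
    The additive smooth-bias model lends itself to a compact illustration: fit a low-order bivariate polynomial to the frame and subtract it. The sketch below uses plain least squares rather than the TV-regularized estimate of the paper, and the synthetic frame is an invented stand-in.

        import numpy as np

        def remove_polynomial_bias(img, order=2):
            """Least-squares fit of an order-`order` bivariate polynomial to
            `img`; returns (bias-subtracted image, estimated bias field)."""
            ny, nx = img.shape
            y, x = np.mgrid[0:ny, 0:nx]
            x = x / (nx - 1.0) - 0.5
            y = y / (ny - 1.0) - 0.5
            terms = [x**i * y**j for i in range(order + 1)
                     for j in range(order + 1 - i)]
            G = np.stack([t.ravel() for t in terms], axis=1)
            coef, *_ = np.linalg.lstsq(G, img.ravel(), rcond=None)
            bias = (G @ coef).reshape(img.shape)
            return img - bias, bias

        # Synthetic IR frame: scene plus a smooth aeroheating-style bias
        rng = np.random.default_rng(3)
        scene = rng.random((128, 128))
        yy, xx = np.mgrid[0:128, 0:128] / 127.0
        frame = scene + 3.0 * (xx**2 + 0.5 * xx * yy)
        corrected, _ = remove_polynomial_bias(frame)
        print("column-mean spread before/after:",
              np.ptp(frame.mean(axis=0)), np.ptp(corrected.mean(axis=0)))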

  9. XAP, a program for deconvolution and analysis of complex X-ray spectra

    USGS Publications Warehouse

    Quick, James E.; Haleby, Abdul Malik

    1989-01-01

    The X-ray analysis program (XAP) is a spectral-deconvolution program written in BASIC and specifically designed to analyze complex spectra produced by energy-dispersive X-ray analytical systems (EDS). XAP compensates for spectrometer drift, utilizes digital filtering to remove background from spectra, and solves for element abundances by least-squares, multiple-regression analysis. Rather than base analyses on only a few channels, broad spectral regions of a sample are reconstructed from standard reference spectra. The effects of this approach are (1) elimination of tedious spectrometer adjustments, (2) removal of background independent of sample composition, and (3) automatic correction for peak overlaps. Although the program was written specifically to operate a KEVEX 7000 X-ray fluorescence analytical system, it could be adapted (with minor modifications) to analyze spectra produced by scanning electron microscopes, electron microprobes, and other probes, as well as X-ray diffractometer patterns obtained from whole-rock powders.
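
    The regression step at the heart of such deconvolution is ordinary least squares of the background-removed sample spectrum on the standard reference spectra, which handles peak overlaps automatically. All spectra in the sketch below are synthetic Gaussian stand-ins; nothing here reproduces XAP itself.

        import numpy as np

        channels = np.arange(1024)

        def peak(center, width=12.0):
            return np.exp(-0.5 * ((channels - center) / width) ** 2)

        # Reference spectra for three hypothetical elements (overlapping peaks)
        refs = np.stack([peak(300), peak(330), peak(700)], axis=1)

        # Synthetic background-subtracted sample: 2:1:0.5 mixture plus noise
        rng = np.random.default_rng(4)
        sample = refs @ np.array([2.0, 1.0, 0.5]) + 0.02 * rng.standard_normal(1024)

        # Least-squares multiple regression: abundances that best reconstruct
        # the sample spectrum from the standards
        abund, *_ = np.linalg.lstsq(refs, sample, rcond=None)
        print("estimated abundances:", np.round(abund, 3))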

  10. Analysis of multicrystal pump–probe data sets. I. Expressions for the RATIO model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fournier, Bertrand; Coppens, Philip

    2014-08-30

    The RATIO method in time-resolved crystallography [Coppens et al. (2009). J. Synchrotron Rad. 16, 226–230] was developed for use with Laue pump–probe diffraction data to avoid complex corrections due to wavelength dependence of the intensities. The application of the RATIO method in processing/analysis prior to structure refinement requires an appropriate ratio model for modeling the light response. The assessment of the accuracy of pump–probe time-resolved structure refinements based on the observed ratios was discussed in a previous paper. In the current paper, a detailed ratio model is discussed, taking into account both geometric and thermal light-induced changes.

  11. PNA-COMBO-FISH: From combinatorial probe design in silico to vitality compatible, specific labelling of gene targets in cell nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Müller, Patrick; Rößler, Jens; Schwarz-Finsterle, Jutta

    Recently, advantages concerning targeting specificity of PCR constructed oligonucleotide FISH probes in contrast to established FISH probes, e.g. BAC clones, have been demonstrated. These techniques, however, are still using labelling protocols with DNA denaturing steps applying harsh heat treatment with or without further denaturing chemical agents. COMBO-FISH (COMBinatorial Oligonucleotide FISH) allows the design of specific oligonucleotide probe combinations in silico. Thus, being independent from primer libraries or PCR laboratory conditions, the probe sequences extracted by computer sequence data base search can also be synthesized as single stranded PNA-probes (Peptide Nucleic Acid probes). Gene targets can be specifically labelled with at least about 20 PNA-probes obtaining visibly background free specimens. By using appropriately designed triplex forming oligonucleotides, the denaturing procedures can completely be omitted. These results reveal a significant step towards oligonucleotide-FISH maintaining the 3D-nanostructure and even the viability of the cell target. The method is demonstrated with the detection of Her2/neu and GRB7 genes, which are indicators in breast cancer diagnosis and therapy. - Highlights: • Denaturation free protocols preserve 3D architecture of chromosomes and nuclei. • Labelling sets are determined in silico for duplex and triplex binding. • Probes are produced chemically with freely chosen backbones and base variants. • Peptide nucleic acid backbones reduce hindering charge interactions. • Intercalating side chains stabilize binding of short oligonucleotides.

  12. Constructing general partial differential equations using polynomial and neural networks.

    PubMed

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters with the aim of improving the ability of the polynomial derivative term series to approximate complicated periodic functions, as simple low-order polynomials cannot fully capture complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Inelastic scattering with Chebyshev polynomials and preconditioned conjugate gradient minimization.

    PubMed

    Temel, Burcin; Mills, Greg; Metiu, Horia

    2008-03-27

    We describe and test an implementation, using a basis set of Chebyshev polynomials, of a variational method for solving scattering problems in quantum mechanics. This minimum error method (MEM) determines the wave function Psi by minimizing the least-squares error in the function (H Psi - E Psi), where E is the desired scattering energy. We compare the MEM to an alternative, the Kohn variational principle (KVP), by solving the Secrest-Johnson model of two-dimensional inelastic scattering, which has been studied previously using the KVP and for which other numerical solutions are available. We use a conjugate gradient (CG) method to minimize the error, and by preconditioning the CG search, we are able to greatly reduce the number of iterations necessary; the method is thus faster and more stable than a matrix inversion, as is required in the KVP. Also, we avoid errors due to scattering off of the boundaries, which presents substantial problems for other methods, by matching the wave function in the interaction region to the correct asymptotic states at the specified energy; the use of Chebyshev polynomials allows this boundary condition to be implemented accurately. The use of Chebyshev polynomials allows for a rapid and accurate evaluation of the kinetic energy. This basis set is as efficient as plane waves but does not impose an artificial periodicity on the system. There are problems in surface science and molecular electronics which cannot be solved if periodicity is imposed, and the Chebyshev basis set is a good alternative in such situations.

  14. Measurement of electron density using reactance cutoff probe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, K. H.; Seo, B. H.; Kim, J. H.

    2016-05-15

    This paper proposes a new measurement method of electron density using the reactance spectrum of the plasma in the cutoff probe system instead of the transmission spectrum. The highly accurate reactance spectrum of the plasma-cutoff probe system, as expected from previous circuit simulations [Kim et al., Appl. Phys. Lett. 99, 131502 (2011)], was measured using the full two-port error correction and automatic port extension methods of the network analyzer. The electron density can be obtained from the analysis of the measured reactance spectrum, based on circuit modeling. According to the circuit simulation results, the reactance cutoff probe can measure the electron density more precisely than the previous cutoff probe at low densities or at higher pressure. The obtained results for the electron density are presented and discussed for a wide range of experimental conditions, and this method is compared with previous methods (a cutoff probe using the transmission spectrum and a single Langmuir probe).
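
    Whichever spectrum is analyzed, the measured cutoff frequency maps to electron density through the standard cold-plasma relation n_e = ε0 m_e (2π f_p)² / e². A small sketch; the 1.5 GHz cutoff is an illustrative value, not a result from the paper.

        import math

        EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
        M_E = 9.1093837015e-31    # electron mass, kg
        Q_E = 1.602176634e-19     # elementary charge, C

        def electron_density(f_cutoff_hz):
            """Electron density (m^-3) from the measured cutoff (plasma) frequency."""
            omega = 2.0 * math.pi * f_cutoff_hz
            return EPS0 * M_E * omega**2 / Q_E**2

        # e.g. a 1.5 GHz cutoff frequency (illustrative)
        print(f"n_e = {electron_density(1.5e9):.3e} m^-3")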

  15. Baecklund transformation, Lax pair, and solutions for the Caudrey-Dodd-Gibbon equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qu Qixing; Sun Kun; Jiang Yan

    2011-01-15

    By using Bell polynomials and symbolic computation, we investigate the Caudrey-Dodd-Gibbon equation analytically. Through a generalization of Bell polynomials, its bilinear form is derived, based on which the periodic wave solution and soliton solutions are presented; the soliton solutions are also given with graphic analysis. Furthermore, the Baecklund transformation and Lax pair are derived via the Bell exponential polynomials. Finally, the Ablowitz-Kaup-Newell-Segur system is constructed.

  16. Elevation data fitting and precision analysis of Google Earth in road survey

    NASA Astrophysics Data System (ADS)

    Wei, Haibin; Luan, Xiaohan; Li, Hanchao; Jia, Jiangkun; Chen, Zhao; Han, Leilei

    2018-05-01

    Objective: In order to improve the efficiency of road surveys and save manpower and material resources, this paper applies Google Earth to the feasibility study stage of road survey and design. Because Google Earth elevation data lack precision, this paper focuses on finding fitting or interpolation methods that improve the data precision, so as to meet the accuracy requirements of road survey and design specifications as far as possible. Method: On the basis of the elevation differences at a limited number of public (control) points, the elevation difference at any other point can be fitted or interpolated. Thus, the precise elevation can be obtained by subtracting the elevation difference from the Google Earth data. Quadratic polynomial surface fitting, cubic polynomial surface fitting, V4 interpolation in MATLAB and a neural network method are used in this paper to process the Google Earth elevation data, with internal conformity, external conformity and the cross-correlation coefficient used as evaluation indexes of the data processing effect. Results: The V4 interpolation method reproduces the fitting points exactly (no fitting residual), but its external conformity is the largest and its accuracy improvement the worst, so the V4 interpolation method is ruled out. The internal and external conformity of the cubic polynomial surface fitting method are both better than those of the quadratic polynomial surface fitting method. The neural network method has a fitting effect similar to the cubic polynomial surface fitting method, but performs better in the case of larger elevation differences. Because the neural network method is a less manageable fitting model, the cubic polynomial surface fitting method should be used as the main method, with the neural network method as an auxiliary in the case of larger elevation differences. Conclusions: The cubic polynomial surface fitting method can obviously improve the data precision of Google Earth. After the precision improvement, the error of data in hilly terrain areas meets the requirement of the specifications, and the data can be used in the feasibility study stage of road survey and design.
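
    The cubic-polynomial-surface step reduces to linear least squares on the ten cubic monomials in the horizontal coordinates. The sketch below fits the elevation difference at synthetic control points and evaluates it at a new location; the data, coordinates and coefficients are invented for illustration.

        import numpy as np

        def fit_poly_surface(x, y, dz, degree=3):
            """Least-squares surface dz ~ sum c_ij x^i y^j with i + j <= degree.
            Returns the coefficient vector and a callable evaluator."""
            powers = [(i, j) for i in range(degree + 1)
                      for j in range(degree + 1 - i)]
            G = np.stack([x**i * y**j for i, j in powers], axis=1)
            coef, *_ = np.linalg.lstsq(G, dz, rcond=None)
            def evaluate(xq, yq):
                return sum(c * xq**i * yq**j for c, (i, j) in zip(coef, powers))
            return coef, evaluate

        # Synthetic control points: Google Earth minus surveyed elevation (m)
        rng = np.random.default_rng(5)
        x, y = rng.uniform(0, 1, 60), rng.uniform(0, 1, 60)
        true_dz = 4.0 + 2.0 * x - 3.0 * y**2 + 1.5 * x * y
        dz = true_dz + 0.2 * rng.standard_normal(60)

        _, dz_hat = fit_poly_surface(x, y, dz)
        # Corrected elevation at a new point = Google Earth value - fitted difference
        print("fitted difference at (0.5, 0.5):", round(dz_hat(0.5, 0.5), 2),
              " true:", 4.625)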

  17. Spectral karyotyping (SKY) analysis of heritable effects of radiation-induced malignant transformation

    NASA Astrophysics Data System (ADS)

    Zitzelsberger, Horst; Fung, Jingly; Janish, C.; McNamara, George; Bryant, P. E.; Riches, A. C.; Weier, Heinz-Ulli G.

    1999-05-01

    Radiocarcinogenesis is widely recognized as an occupational, environmental and therapeutic hazard, but the underlying mechanisms and cellular targets have not yet been identified. We applied SKY to study chromosomal rearrangements leading to malignant transformation of irradiated thyroid epithelial cells. SKY is a recently developed technique to detect translocations involving non-homologous chromosomes, based on unique staining of all 24 human chromosomes by hybridization with a mixture of whole chromosome painting probes. A tuneable interferometer mounted on a fluorescence microscope in front of a CCD camera allows the 400 nm - 1000 nm fluorescence spectrum to be recorded for each pixel in the image. After background correction, the spectra recorded for each pixel are compared to reference spectra stored previously for each chromosome-specific probe. Thus, pixel spectra can be associated with specific chromosomes and displayed in 'classification' colors, which are defined so that even small translocations become readily discernible. SKY analysis was performed on several radiation-transformed cell lines. Line S48T was generated from a primary tumor of a child exposed to elevated levels of radiation following the Chernobyl nuclear accident. Subclones were generated from the human thyroid epithelial cell line (HTori-3) by exposure to gamma or alpha irradiation. SKY analysis revealed multiple translocations and, combined with G-banding, allowed the definition of targets for positional cloning of tumor-related genes.

  18. Target-cancer cell specific activatable fluorescence imaging Probes: Rational Design and in vivo Applications

    PubMed Central

    Kobayashi, Hisataka; Choyke, Peter L.

    2010-01-01

    CONSPECTUS Conventional imaging methods, such as angiography, computed tomography, magnetic resonance imaging and radionuclide imaging, rely on contrast agents (iodine, gadolinium, radioisotopes) that are “always on”. While these agents have proven clinically useful, they are not sufficiently sensitive because of the inadequate target to background ratio. A unique aspect of optical imaging is that fluorescence probes can be designed to be activatable, i.e. only “turned on” under certain conditions. These probes can be designed to emit signal only after binding a target tissue, greatly increasing sensitivity and specificity in the detection of disease. There are two basic types of activatable fluorescence probes: (1) conventional enzymatically activatable probes, which exist in the quenched state until activated by enzymatic cleavage mostly outside of the cells, and (2) newly designed target-cell specific activatable probes, which are quenched until activated in targeted cells by endolysosomal processing that results when the probe binds specific cell-surface receptors and is subsequently internalized. Herein, we present a review of the rational design and in vivo applications of target-cell specific activatable probes. Designing these probes based on their photo-chemical (e.g. activation strategy), pharmacological (e.g. biodistribution), and biological (e.g. target specificity) properties has recently allowed the rational design and synthesis of target-cell specific activatable fluorescence imaging probes, which can be conjugated to a wide variety of targeting molecules. Several different photo-chemical mechanisms have been utilized, each of which offers a unique capability for probe design. These include: self-quenching, homo- and hetero-fluorescence resonance energy transfer (FRET), H-dimer formation and photon-induced electron transfer (PeT). In addition, the repertoire is further expanded by the option for reversibility or irreversibility of the signal emitted using the aforementioned mechanisms. Given the wide range of photochemical mechanisms and properties, target-cell specific activatable probes possess considerable flexibility and can be adapted to specific diagnostic needs. Herein, we summarize the chemical, pharmacological, and biological basis of target-cell specific activatable imaging probes and discuss methods to successfully design such target-cell specific activatable probes for in vivo cancer imaging. PMID:21062101

  19. Fluorescent probes for nucleic acid visualization in fixed and live cells.

    PubMed

    Boutorine, Alexandre S; Novopashina, Darya S; Krasheninina, Olga A; Nozeret, Karine; Venyaminova, Alya G

    2013-12-11

    This review analyses the literature concerning non-fluorescent and fluorescent probes for nucleic acid imaging in fixed and living cells from the point of view of their suitability for imaging intracellular native RNA and DNA. Attention is mainly paid to fluorescent probes for fluorescence microscopy imaging. Requirements for the target-binding part and the fluorophore making up the probe are formulated. In the case of native double-stranded DNA, structure-specific and sequence-specific probes are discussed. Among the latter, three classes of dsDNA-targeting molecules are described: (i) sequence-specific peptides and proteins; (ii) triplex-forming oligonucleotides and (iii) polyamide oligo(N-methylpyrrole/N-methylimidazole) minor groove binders. Polyamides seem to be the most promising targeting agents for fluorescent probe design; however, some technical problems remain to be solved, such as the relatively low sequence specificity and the high background fluorescence inside the cells. Several examples of fluorescent probe applications for DNA imaging in fixed and living cells are cited. In the case of intracellular RNA, only modified oligonucleotides can provide such sequence-specific imaging. Several approaches for designing fluorescent probes are considered: linear fluorescent probes based on modified oligonucleotide analogs, molecular beacons, binary fluorescent probes and template-directed reactions with fluorescence probe formation, FRET donor-acceptor pairs, pyrene excimers, aptamers and others. The suitability of all these methods for living cell applications is discussed.

  20. Detection and discrimination of orthopoxviruses using microarrays of immobilized oligonucleotides.

    PubMed

    Laassri, Majid; Chizhikov, Vladimir; Mikheev, Maxim; Shchelkunov, Sergei; Chumakov, Konstantin

    2003-09-01

    Variola virus (VARV), causing smallpox, is a potential biological weapon. Methods to detect VARV rapidly and to differentiate it from other viruses causing similar clinical syndromes are needed urgently. We have developed a new microarray-based method that detects simultaneously and discriminates four orthopoxvirus (OPV) species pathogenic for humans (variola, monkeypox, cowpox, and vaccinia viruses) and distinguishes them from chickenpox virus (varicella-zoster virus or VZV). The OPV gene C23L/B29R, encoding the CC-chemokine binding protein, was sequenced for 41 strains of seven species of orthopox viruses obtained from different geographical regions. Those C23L/B29R sequences and the ORF 62 sequences from 13 strains of VZV (selected from GenBank) were used to design oligonucleotide probes that were immobilized on an aldehyde-coated glass surface (a total of 57 probes). The microchip contained several unique 13-21 bases long oligonucleotide probes specific to each virus species to ensure redundancy and robustness of the assay. A region approximately 1100 bases long was amplified from samples of viral DNA and fluorescently labeled with Cy5-modified dNTPs, and single-stranded DNA was prepared by strand separation. Hybridization was carried out under plastic coverslips, resulting in a fluorescent pattern that was quantified using a confocal laser scanner. 49 known and blinded samples of OPV DNA, representing different OPV species, and two VZV strains were tested. The oligonucleotide microarray hybridization technique identified reliably and correctly all samples. This new procedure takes only 3 h, and it can be used for parallel testing of multiple samples.

  1. A class of reduced-order models in the theory of waves and stability.

    PubMed

    Chapman, C J; Sorokin, S V

    2016-02-01

    This paper presents a class of approximations to a type of wave field for which the dispersion relation is transcendental. The approximations have two defining characteristics: (i) they give the field shape exactly when the frequency and wavenumber lie on a grid of points in the (frequency, wavenumber) plane and (ii) the approximate dispersion relations are polynomials that pass exactly through points on this grid. Thus, the method is interpolatory in nature, but the interpolation takes place in (frequency, wavenumber) space, rather than in physical space. Full details are presented for a non-trivial example, that of antisymmetric elastic waves in a layer. The method is related to partial fraction expansions and barycentric representations of functions. An asymptotic analysis is presented, involving Stirling's approximation to the psi function, and a logarithmic correction to the polynomial dispersion relation.

  2. Species-specific identification of Dekkera/Brettanomyces yeasts by fluorescently labeled DNA probes targeting the 26S rRNA.

    PubMed

    Röder, Christoph; König, Helmut; Fröhlich, Jürgen

    2007-09-01

    Sequencing of the complete 26S rRNA genes of all Dekkera/Brettanomyces species colonizing different beverages revealed the potential for a specific primer and probe design to support diagnostic PCR approaches and FISH. By analysis of the complete 26S rRNA genes of all five currently known Dekkera/Brettanomyces species (Dekkera bruxellensis, D. anomala, Brettanomyces custersianus, B. nanus and B. naardenensis), several regions with high nucleotide sequence variability yet distinct from the D1/D2 domains were identified. FISH species-specific probes targeting the 26S rRNA gene's most variable regions were designed. Accessibility of probe targets for hybridization was facilitated by the construction of partially complementary 'side'-labeled probes, based on secondary structure models of the rRNA sequences. The specificity and routine applicability of the FISH-based method for yeast identification were tested by analyzing different wine isolates. Investigation of the prevalence of Dekkera/Brettanomyces yeasts in the German viticultural regions Wonnegau, Nierstein and Bingen (Rhinehesse, Rhineland-Palatinate) resulted in the isolation of 37 D. bruxellensis strains from 291 wine samples.

  3. On the Numerical Formulation of Parametric Linear Fractional Transformation (LFT) Uncertainty Models for Multivariate Matrix Polynomial Problems

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.

    1998-01-01

    Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.

  4. Local polynomial estimation of heteroscedasticity in a multivariate linear regression model and its applications in economics.

    PubMed

    Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan

    2012-01-01

    Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. Firstly, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained using the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the non-parametric technique of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function, so the estimation precision can be improved even when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.

  5. A new sampling scheme for developing metamodels with the zeros of Chebyshev polynomials

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing

    2015-09-01

    The accuracy of metamodelling is determined by both the sampling and approximation. This article proposes a new sampling method based on the zeros of Chebyshev polynomials to capture the sampling information effectively. First, the zeros of one-dimensional Chebyshev polynomials are applied to construct Chebyshev tensor product (CTP) sampling, and the CTP is then used to construct high-order multi-dimensional metamodels using the 'hypercube' polynomials. Secondly, the CTP sampling is further enhanced to develop Chebyshev collocation method (CCM) sampling, to construct the 'simplex' polynomials. The samples of CCM are randomly and directly chosen from the CTP samples. Two widely studied sampling methods, namely the Smolyak sparse grid and Hammersley, are used to demonstrate the effectiveness of the proposed sampling method. Several numerical examples are utilized to validate the approximation accuracy of the proposed metamodel under different dimensions.
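
    CTP sampling itself is straightforward to generate: take the zeros of the one-dimensional Chebyshev polynomial T_m, map them to each design range, and form the tensor product. A minimal sketch; the two-dimensional bounds are illustrative choices.

        import numpy as np
        from itertools import product

        def chebyshev_zeros(m, lo=-1.0, hi=1.0):
            """The m zeros of the Chebyshev polynomial T_m, mapped to [lo, hi]."""
            k = np.arange(1, m + 1)
            z = np.cos((2 * k - 1) * np.pi / (2 * m))    # zeros on [-1, 1]
            return 0.5 * (hi + lo) + 0.5 * (hi - lo) * z

        def ctp_samples(n_per_dim, bounds):
            """Chebyshev tensor-product sampling over len(bounds) dimensions."""
            axes = [chebyshev_zeros(n_per_dim, lo, hi) for lo, hi in bounds]
            return np.array(list(product(*axes)))

        # 5 points per dimension over a 2-D design space -> 25 CTP samples
        pts = ctp_samples(5, [(0.0, 10.0), (-2.0, 2.0)])
        print(pts.shape)          # (25, 2)
        print(np.round(pts[:5], 3))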

  6. Molecular Isotopic Distribution Analysis (MIDAs) with Adjustable Mass Accuracy

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Ogurtsov, Aleksey Y.; Yu, Yi-Kuo

    2014-01-01

    In this paper, we present Molecular Isotopic Distribution Analysis (MIDAs), a new software tool designed to compute molecular isotopic distributions with adjustable accuracies. MIDAs offers two algorithms, one polynomial-based and one Fourier-transform-based, both of which compute molecular isotopic distributions accurately and efficiently. The polynomial-based algorithm contains few novel aspects, whereas the Fourier-transform-based algorithm consists mainly of improvements to other existing Fourier-transform-based algorithms. We have benchmarked the performance of the two algorithms implemented in MIDAs with that of eight software packages (BRAIN, Emass, Mercury, Mercury5, NeutronCluster, Qmass, JFC, IC) using a consensus set of benchmark molecules. Under the proposed evaluation criteria, MIDAs's algorithms, JFC, and Emass compute with comparable accuracy the coarse-grained (low-resolution) isotopic distributions and are more accurate than the other software packages. For fine-grained isotopic distributions, we compared IC, MIDAs's polynomial algorithm, and MIDAs's Fourier transform algorithm. Among the three, IC and MIDAs's polynomial algorithm compute isotopic distributions that better resemble their corresponding exact fine-grained (high-resolution) isotopic distributions. MIDAs can be accessed freely through a user-friendly web-interface at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/midas/index.html.
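
    A minimal sketch of the classic polynomial method underlying tools of this kind: each atom contributes a polynomial whose coefficients are isotope abundances indexed by nominal mass, and the molecular distribution is their product (repeated convolution). This illustrates the general idea at nominal-mass resolution only; MIDAs's own algorithms add accuracy and efficiency refinements not shown here.

      import numpy as np

      # nominal-mass abundance polynomials (coefficient index = mass in Da)
      ISOTOPES = {
          "C": {12: 0.9893, 13: 0.0107},
          "H": {1: 0.999885, 2: 0.000115},
          "O": {16: 0.99757, 17: 0.00038, 18: 0.00205},
      }

      def abundance_poly(element):
          d = ISOTOPES[element]
          p = np.zeros(max(d) + 1)
          for mass, ab in d.items():
              p[mass] = ab
          return p

      def isotopic_distribution(formula):
          """formula: dict such as {'C': 6, 'H': 12, 'O': 6} for glucose."""
          dist = np.array([1.0])
          for elem, count in formula.items():
              for _ in range(count):                # repeated convolution
                  dist = np.convolve(dist, abundance_poly(elem))
          return dist

      d = isotopic_distribution({"C": 6, "H": 12, "O": 6})
      for m in np.nonzero(d > 1e-6)[0][:5]:
          print(m, round(d[m], 6))                  # monoisotopic peak at 180 Da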

  7. Molecular Isotopic Distribution Analysis (MIDAs) with adjustable mass accuracy.

    PubMed

    Alves, Gelio; Ogurtsov, Aleksey Y; Yu, Yi-Kuo

    2014-01-01

    In this paper, we present Molecular Isotopic Distribution Analysis (MIDAs), a new software tool designed to compute molecular isotopic distributions with adjustable accuracies. MIDAs offers two algorithms, one polynomial-based and one Fourier-transform-based, both of which compute molecular isotopic distributions accurately and efficiently. The polynomial-based algorithm contains few novel aspects, whereas the Fourier-transform-based algorithm consists mainly of improvements to other existing Fourier-transform-based algorithms. We have benchmarked the performance of the two algorithms implemented in MIDAs with that of eight software packages (BRAIN, Emass, Mercury, Mercury5, NeutronCluster, Qmass, JFC, IC) using a consensus set of benchmark molecules. Under the proposed evaluation criteria, MIDAs's algorithms, JFC, and Emass compute with comparable accuracy the coarse-grained (low-resolution) isotopic distributions and are more accurate than the other software packages. For fine-grained isotopic distributions, we compared IC, MIDAs's polynomial algorithm, and MIDAs's Fourier transform algorithm. Among the three, IC and MIDAs's polynomial algorithm compute isotopic distributions that better resemble their corresponding exact fine-grained (high-resolution) isotopic distributions. MIDAs can be accessed freely through a user-friendly web-interface at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/midas/index.html.

  8. Aberration correction results in the IBM STEM instrument.

    PubMed

    Batson, P E

    2003-09-01

    Results from the installation of aberration correction in the IBM 120 kV STEM argue that a sub-angstrom probe size has been achieved. Results and the experimental methods used to obtain them are described here. Some post-experiment processing is necessary to demonstrate the probe size of about 0.078 nm. While the promise of aberration correction is demonstrated, we remain at the very threshold of practicality, given the very stringent stability requirements.

  9. Detection of cystic fibrosis mutations in a GeneChip™ assay format

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyada, C.G.; Cronin, M.T.; Kim, S.M.

    1994-09-01

    We are developing assays for the detection of cystic fibrosis mutations based on DNA hybridization. A DNA sample is amplified by PCR, labeled by incorporating a fluorescein-tagged dNTP, enzymatically treated to produce smaller fragments and hybridized to a series of short (13-16 bases) oligonucleotides synthesized on a glass surface via photolithography. The hybrids are detected by epifluorescence and mutations are identified by the specific pattern of hybridization. In a GeneChip assay, the chip surface is composed of a series of subarrays, each being specific for a particular mutation. Each subarray is further subdivided into a series of probes (40 total), half based on the mutant sequence and the remainder based on the wild-type sequence. For each of the subarrays, there is a redundancy in the number of probes that should hybridize to either a wild-type or a mutant target. The multiple probe strategy provides sequence information for a short five base region overlapping the mutation site. In addition, homozygous wild-type and mutant as well as heterozygous samples are each identified by a specific pattern of hybridization. The small size of each probe feature (250 × 250 μm²) permits the inclusion of additional probes required to generate sequence information by hybridization.

  10. Magnetic Gold Nanoparticle-Labeled Heparanase Monoclonal Antibody and its Subsequent Application for Tumor Magnetic Resonance Imaging

    NASA Astrophysics Data System (ADS)

    Li, Ning; Jie, Meng-Meng; Yang, Min; Tang, Li; Chen, Si-Yuan; Sun, Xue-Mei; Tang, Bo; Yang, Shi-Ming

    2018-04-01

    Heparanase (HPA) is ubiquitously expressed in various metastatic malignant tumors; previous studies have demonstrated that HPA is a potential tumor-associated antigen (TAA) for tumor immunotherapy. We sought to evaluate the feasibility of HPA as a common TAA for magnetic resonance imaging (MRI) of tumor metastasis and its potential application in tumor molecular imaging. We prepared a targeted probe based on magnetic gold nanoparticles coupled with an anti-HPA antibody for the specific detection of HPA by MRI. The specificity of the targeted probe was validated in vitro by incubation of the probe with various tumor cells, and the probe was able to selectively detect HPA (+) cells. We found the probes produced significantly reduced signal intensity in several tumor cell types, and the signal intensity decreased significantly after the targeted probe was injected in tumor-bearing nude mice. In this study, we demonstrated that the HPA&GoldMag probe had excellent physical and chemical properties and immune activity and could specifically target many tumor cells and tissues both in vitro and in vivo. This may provide an experimental basis for molecular imaging of tumors highly expressing heparanase using HPA mAbs.

  11. Overestimation of Mach number due to probe shadow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gosselin, J. J.; Thakur, S. C.; Tynan, G. R.

    2016-07-15

    Comparisons of the plasma ion flow speed measurements from Mach probes and laser induced fluorescence were performed in the Controlled Shear Decorrelation Experiment. We show that the presence of the probe causes a low density geometric shadow downstream of the probe that affects the current density collected by the probe in collisional plasmas if the ion-neutral mean free path is shorter than the probe shadow length, L_g = w² V_drift / D_⊥, resulting in erroneous Mach numbers. We then present a simple correction term that provides the corrected Mach number from probe data when the sound speed, ion-neutral mean free path, and perpendicular diffusion coefficient of the plasma are known. The probe shadow effect must be taken into account whenever the ion-neutral mean free path is on the order of the probe shadow length in linear devices and the open-field line region of fusion devices.
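
    A minimal sketch of the applicability check implied by the abstract: compute the probe-shadow length L_g = w² V_drift / D_⊥ and compare it with the ion-neutral mean free path. All numerical values are illustrative placeholders.

      w = 5e-3          # probe width (m), placeholder
      v_drift = 8e2     # ion drift speed (m/s), placeholder
      d_perp = 1.0      # perpendicular diffusion coefficient (m^2/s), placeholder
      mfp = 2e-3        # ion-neutral mean free path (m), placeholder

      L_g = w ** 2 * v_drift / d_perp               # probe shadow length
      print(f"shadow length = {L_g * 1e3:.1f} mm")
      if mfp < L_g:
          print("mean free path shorter than shadow: apply the Mach-number correction")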

  12. A resilient domain decomposition polynomial chaos solver for uncertain elliptic PDEs

    NASA Astrophysics Data System (ADS)

    Mycek, Paul; Contreras, Andres; Le Maître, Olivier; Sargsyan, Khachik; Rizzi, Francesco; Morris, Karla; Safta, Cosmin; Debusschere, Bert; Knio, Omar

    2017-07-01

    A resilient method is developed for the solution of uncertain elliptic PDEs on extreme scale platforms. The method is based on a hybrid domain decomposition, polynomial chaos (PC) framework that is designed to address soft faults. Specifically, parallel and independent solves of multiple deterministic local problems are used to define PC representations of local Dirichlet boundary-to-boundary maps that are used to reconstruct the global solution. A LAD-lasso type regression is developed for this purpose. The performance of the resulting algorithm is tested on an elliptic equation with an uncertain diffusivity field. Different test cases are considered in order to analyze the impacts of correlation structure of the uncertain diffusivity field, the stochastic resolution, as well as the probability of soft faults. In particular, the computations demonstrate that, provided sufficiently many samples are generated, the method effectively overcomes the occurrence of soft faults.
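
    A minimal sketch of regression-based polynomial chaos, the building block named in the abstract: Legendre coefficients of a model output are fit from sampled solves. The paper uses a LAD-lasso type regression so that corrupted (soft-fault) samples act like outliers; plain least squares on a stand-in scalar model is shown here for brevity.

      import numpy as np
      from numpy.polynomial import legendre

      def model(xi):
          """Stand-in for one deterministic (e.g. local PDE) solve."""
          return np.exp(0.3 * xi) * np.sin(2.0 * xi)

      rng = np.random.default_rng(2)
      xi = rng.uniform(-1, 1, 200)                  # samples of the uniform germ
      V = legendre.legvander(xi, 8)                 # Legendre design matrix
      coef, *_ = np.linalg.lstsq(V, model(xi), rcond=None)

      xt = np.linspace(-1, 1, 5)
      print(np.max(np.abs(legendre.legval(xt, coef) - model(xt))))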

  13. A Probe for Measuring Moisture Content in Dead Roundwood

    Treesearch

    Richard W. Blank; John S. Frost; James E. Eenigenburg

    1983-01-01

    This paper reports field test results of a wood moisture probe's accuracy in measuring fuel moisture content of dead roundwood. Probe measurements, corrected for temperature, correlated well with observed fuel moistures of 1-inch dead jack pine branchwood.

  14. Experimental testing of four correction algorithms for the forward scattering spectrometer probe

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.; Oldenburg, John R.; Lock, James A.

    1992-01-01

    Three number density correction algorithms and one size distribution correction algorithm for the Forward Scattering Spectrometer Probe (FSSP) were compared with data taken by the Phase Doppler Particle Analyzer (PDPA) and an optical number density measuring instrument (NDMI). Of the three number density correction algorithms, the one that compared best to the PDPA and NDMI data was the algorithm developed by Baumgardner, Strapp, and Dye (1985). The algorithm that corrects sizing errors in the FSSP that was developed by Lock and Hovenac (1989) was shown to be within 25 percent of the Phase Doppler measurements at number densities as high as 3000/cc.

  15. Activity-Based Probes for Isoenzyme- and Site-Specific Functional Characterization of Glutathione S -Transferases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoddard, Ethan G.; Killinger, Bryan J.; Nair, Reji N.

    Glutathione S-transferases (GSTs) comprise a highly diverse family of phase II drug metabolizing enzymes whose shared function is the conjugation of reduced glutathione to various endo- and xenobiotics. Although the conglomerate activity of these enzymes can be measured by colorimetric assays, measurement of the individual contribution from specific isoforms and their contribution to the detoxification of xenobiotics in complex biological samples has not been possible. For this reason, we have developed two activity-based probes that characterize active glutathione transferases in mammalian tissues. The GST active site is comprised of a glutathione binding “G site” and a distinct substrate binding “H site”. Therefore, we developed (1) a glutathione-based photoaffinity probe (GSH-ABP) to target the “G site”, and (2) a probe designed to mimic a substrate molecule and show “H site” activity (GST-ABP). The GSH-ABP features a photoreactive moiety for UV-induced covalent binding to GSTs and glutathione-binding enzymes. The GST-ABP is a derivative of a known mechanism-based GST inhibitor that binds within the active site and inhibits GST activity. Validation of probe targets and “G” and “H” site specificity was carried out using a series of competitors in liver homogenates. Herein, we present robust tools for the novel characterization of enzyme- and active site-specific GST activity in mammalian model systems.

  16. A Fast lattice-based polynomial digital signature system for m-commerce

    NASA Astrophysics Data System (ADS)

    Wei, Xinzhou; Leung, Lin; Anshel, Michael

    2003-01-01

    The privacy and data integrity are not guaranteed in current wireless communications due to the security hole inside the Wireless Application Protocol (WAP) version 1.2 gateway. One of the remedies is to provide end-to-end security in m-commerce by applying application-level security on top of current WAP 1.2. Traditional security technologies such as RSA and ECC, as applied on enterprise servers, are not practical for wireless devices because wireless devices have relatively weak computational power and limited memory compared with servers. In this paper, we developed a lattice-based polynomial digital signature system based on NTRU's Polynomial Authentication and Signature Scheme (PASS), which makes it feasible to apply high-level security on both the server and wireless device sides.

  17. Recent Progress in Fluorescent Imaging Probes

    PubMed Central

    Pak, Yen Leng; Swamy, K. M. K.; Yoon, Juyoung

    2015-01-01

    Owing to their simplicity and low detection limits, and especially their ability to image cells, fluorescent probes serve as unique detection methods. With the aid of molecular recognition and specific organic reactions, research on fluorescent imaging probes has blossomed during the last decade. In particular, reaction-based fluorescent probes have proven to be highly selective for specific analytes. This review highlights our recent progress on fluorescent imaging probes for biologically important species, such as biothiols, reactive oxygen species, reactive nitrogen species, metal ions including Zn2+, Hg2+, Cu2+ and Au3+, and anions including cyanide and adenosine triphosphate (ATP). PMID:26402684

  18. Recent Progress in Fluorescent Imaging Probes.

    PubMed

    Pak, Yen Leng; Swamy, K M K; Yoon, Juyoung

    2015-09-22

    Owing to their simplicity and low detection limits, and especially their ability to image cells, fluorescent probes serve as unique detection methods. With the aid of molecular recognition and specific organic reactions, research on fluorescent imaging probes has blossomed during the last decade. In particular, reaction-based fluorescent probes have proven to be highly selective for specific analytes. This review highlights our recent progress on fluorescent imaging probes for biologically important species, such as biothiols, reactive oxygen species, reactive nitrogen species, metal ions including Zn(2+), Hg(2+), Cu(2+) and Au(3+), and anions including cyanide and adenosine triphosphate (ATP).

  19. Continual in situ monitoring of pore water stable isotopes in the subsurface

    NASA Astrophysics Data System (ADS)

    Volkmann, T. H. M.; Weiler, M.

    2014-05-01

    Stable isotope signatures provide an integral fingerprint of origin, flow paths, transport processes, and residence times of water in the environment. However, the full potential of stable isotopes to quantitatively characterize subsurface water dynamics has yet to be realized, owing to the difficulty of obtaining extensive, detailed, and repeated measurements of pore water in the unsaturated and saturated zones. This paper presents a functional and cost-efficient system for non-destructive continual in situ monitoring of pore water stable isotope signatures with high resolution. Automatically controllable valve arrays are used to continuously extract diluted water vapor in soil air via a branching network of small microporous probes into a commercial laser-based isotope analyzer. Normalized liquid-phase isotope signatures are then obtained based on a specific on-site calibration approach along with basic corrections for instrument bias and temperature-dependent isotopic fractionation. The system was applied to sample depth profiles on three experimental plots with varied vegetation cover in southwest Germany. Two methods (i.e., based on advective versus diffusive vapor extraction) and two modes of sampling (i.e., using multiple permanently installed probes versus a single repeatedly inserted probe) were tested and compared. The results show that the isotope distribution along natural profiles could be resolved with sufficiently high accuracy and precision at sampling intervals of less than four minutes. The presented in situ approaches may be used interchangeably with each other and with concurrent laboratory-based direct equilibration measurements of destructively collected samples. The introduced sampling techniques thus provide powerful tools for developing a detailed quantitative understanding of dynamic and heterogeneous shallow subsurface and vadose zone processes.

  20. Distortion theorems for polynomials on a circle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubinin, V N

    2000-12-31

    Inequalities for the derivatives with respect to φ = arg z of the functions Re P(z), |P(z)|² and arg P(z) are established for an algebraic polynomial P(z) at points on the circle |z| = 1. These estimates depend, in particular, on the constant term and the leading coefficient of the polynomial P(z) and improve the classical Bernstein and Turán inequalities. The method of proof is based on the techniques of generalized reduced moduli.

  1. Design and development of thin quartz glass WFXT polynomial mirror shells by direct polishing

    NASA Astrophysics Data System (ADS)

    Proserpio, L.; Campana, S.; Citterio, O.; Civitani, M.; Combrinck, H.; Conconi, P.; Cotroneo, V.; Freeman, R.; Langstrof, P.; Mattaini, E.; Morton, R.; Oberle, B.; Pareschi, G.; Parodi, G.; Pels, C.; Schenk, C.; Stock, R.; Tagliaferri, G.

    2010-07-01

    The Wide Field X-ray Telescope (WFXT) is a medium class mission for X-ray surveys of the sky with an unprecedented area and sensitivity. In order to meet the effective area requirement, the design of the optical system is based on very thin mirror shells, with thicknesses in the 1-2 mm range. In order to get the desired angular resolution (10 arcsec requirement, 5 arcsec goal) across the entire 1x1 degree FOV (Field Of View), the design of the optical system is based on nested modified grazing incidence Wolter-I mirrors realized with polynomial profiles, focal plane curvature and plate scale corrections. This design guarantees an increased angular resolution at large off-axis angle with respect to the normally used Wolter I configuration, making WFXT ideal for survey purposes. The WFXT X-ray Telescope Assembly is composed of three identical mirror modules of 78 nested shells each, with diameter up to 1.1 m. The epoxy replication process with SiC shells has already been proved to be a valuable technology to meet the angular resolution requirement of 10 arcsec. To further mature the telescope manufacturing technology and to achieve the goal of 5 arcsec, a deterministic direct polishing method is under investigation. The direct polishing method has already been used for past missions (such as Einstein, Rosat, Chandra): the technological challenge now is to apply it to almost ten times thinner shells. Under investigation is quartz glass (fused silica), a well-known material with good thermo-mechanical properties and polishability that could meet our goal in terms of mass and stiffness, with significant cost and time savings with respect to SiC. Our approach is based on two main steps: first, quartz glass tubes available on the market are ground to conical profiles; second, the obtained shells are polished to the required polynomial profiles by a CNC (Computer Numerical Control) polishing machine. In this paper, the first results of the direct grinding and polishing of prototype shells made of quartz glass with low thickness, representative of the WFXT optical design, are presented.

  2. Optimal approach to quantum communication using dynamic programming.

    PubMed

    Jiang, Liang; Taylor, Jacob M; Khaneja, Navin; Lukin, Mikhail D

    2007-10-30

    Reliable preparation of entanglement between distant systems is an outstanding problem in quantum information science and quantum communication. In practice, this has to be accomplished by noisy channels (such as optical fibers) that generally result in exponential attenuation of quantum signals at large distances. A special class of quantum error correction protocols, quantum repeater protocols, can be used to overcome such losses. In this work, we introduce a method for systematically optimizing existing protocols and developing more efficient protocols. Our approach makes use of a dynamic programming-based searching algorithm, the complexity of which scales only polynomially with the communication distance, letting us efficiently determine near-optimal solutions. We find significant improvements in both the speed and the final-state fidelity for preparing long-distance entangled states.

  3. Growth and adhesion properties of monosodium urate monohydrate (MSU) crystals

    NASA Astrophysics Data System (ADS)

    Perrin, Clare M.

    The presence of monosodium urate monohydrate (MSU) crystals in the synovial fluid has long been associated with the joint disease gout. To elucidate the molecular level growth mechanism and adhesive properties of MSU crystals, atomic force microscopy (AFM), scanning electron microscopy, and dynamic light scattering (DLS) techniques were employed in the characterization of the (010) and (1-10) faces of MSU, as well as physiologically relevant solutions supersaturated with urate. Topographical AFM imaging of both MSU (010) and (1-10) revealed the presence of crystalline layers of urate arranged into v-shaped features of varying height. Growth rates were measured for both monolayers (elementary steps) and multiple layers (macrosteps) on both crystal faces under a wide range of urate supersaturation in physiologically relevant solutions. Step velocities for monolayers and multiple layers displayed a second order polynomial dependence on urate supersaturation on MSU (010) and (1-10), with step velocities on (1-10) generally half of those measured on MSU (010) in corresponding growth conditions. Perpendicular step velocities on MSU (010) were obtained and also showed a second order polynomial dependence of step velocity with respect to urate supersaturation, which implies a 2D-island nucleation growth mechanism for MSU (010). Extensive topographical imaging of MSU (010) showed island adsorption from urate growth solutions under all urate solution concentrations investigated, lending further support for the determined growth mechanism. Island sizes derived from DLS experiments on growth solutions were in agreement with those measured on MSU (010) topographical images. Chemical force microscopy (CFM) was utilized to characterize the adhesive properties of MSU (010) and (1-10). AFM probes functionalized with amino acid derivatives and bio-macromolecules found in the synovial fluid were brought into contact with both crystal faces and adhesion forces were tabulated into histograms for comparison. AFM probes functionalized with -COO-, -CH3, and -OH functionalities displayed similar adhesion force with both crystal surfaces of MSU, while adhesion force on (1-10) was three times greater than (010) for -NH2+ probes. For AFM probes functionalized with bovine serum albumin, adhesion force was three times greater on MSU (1-10) than (010), most likely due to the more ionic nature of (1-10).

  4. Ionospheric Correction of InSAR for Accurate Ice Motion Mapping at High Latitudes

    NASA Astrophysics Data System (ADS)

    Liao, H.; Meyer, F. J.

    2016-12-01

    Monitoring the motion of the large ice sheets is of great importance for determining ice mass balance and its contribution to sea level rise. Recently, the first comprehensive ice motion maps of Greenland and Antarctica were generated with InSAR. However, these studies have indicated that the performance of InSAR-based ice motion mapping is limited by the presence of the ionosphere. This is particularly true at high latitudes and for low-frequency SAR data. Filter-based and empirical methods (e.g., removing polynomials), which have often been used to mitigate ionospheric effects, are often ineffective in these areas due to the typically strong spatial variability of ionospheric phase delay at high latitudes and due to the risk of removing true deformation signals from the observations. In this study, we first present an outline of our split-spectrum InSAR-based ionospheric correction approach and particularly highlight how our method improves upon published techniques, such as the multiple sub-band approach to boost estimation accuracy as well as advanced error correction and filtering algorithms. We applied our workflow to a large number of ionosphere-affected datasets over the large ice sheets to estimate the benefit of ionospheric correction on ice motion mapping accuracy. Appropriate test sites over Greenland and the Antarctic were chosen through cooperation with authors (UW, Ian Joughin) of previous ice motion studies. To demonstrate the magnitude of ionospheric noise and to showcase the performance of ionospheric correction, we show examples of ionosphere-affected InSAR data alongside our ionosphere-corrected results for visual comparison. We also quantitatively compared the corrected phase data with known ice velocity fields for the analyzed areas, provided by experts in ice velocity mapping. From our studies we found that ionospheric correction significantly reduces biases in ice velocity estimates and boosts accuracy by a factor that depends on a set of system parameters (range bandwidth, temporal and spatial baseline) and processing parameters (e.g., filtering strength and sub-band configuration). A case study in Greenland is attached below.
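
    A minimal sketch of the split-spectrum relation that such corrections exploit: the dispersive (ionospheric) phase scales as 1/f while the nondispersive phase scales with f, so unwrapped interferograms from a low and a high range sub-band separate the two. The multi-sub-band estimation and filtering refinements described above are not shown, and the frequencies below are illustrative.

      import numpy as np

      def split_spectrum_iono(phi_low, phi_high, f_low, f_high, f0):
          """Dispersive (ionospheric) phase at carrier f0 from two unwrapped
          sub-band interferometric phases (frequencies in Hz)."""
          return (f_low * f_high) / (f0 * (f_high ** 2 - f_low ** 2)) * (
              phi_low * f_high - phi_high * f_low)

      # synthetic check: build sub-band phases from known components
      f0, df = 1.27e9, 40e6                         # L-band carrier, sub-band offset
      f_low, f_high = f0 - df, f0 + df
      phi_geo, phi_ion = 12.0, 3.0                  # radians at f0
      phi_l = phi_geo * f_low / f0 + phi_ion * f0 / f_low
      phi_h = phi_geo * f_high / f0 + phi_ion * f0 / f_high
      print(split_spectrum_iono(phi_l, phi_h, f_low, f_high, f0))   # ~ 3.0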

  5. Optimizing the specificity of nucleic acid hybridization

    PubMed Central

    Zhang, David Yu; Chen, Sherry Xi; Yin, Peng

    2014-01-01

    The specific hybridization of complementary sequences is an essential property of nucleic acids, enabling diverse biological and biotechnological reactions and functions. However, the specificity of nucleic acid hybridization is compromised for long strands, except near the melting temperature. Here, we analytically derived the thermodynamic properties of a hybridization probe that would enable near-optimal single-base discrimination and perform robustly across diverse temperature, salt and concentration conditions. We rationally designed ‘toehold exchange’ probes that approximate these properties, and comprehensively tested them against five different DNA targets and 55 spurious analogues with energetically representative single-base changes (replacements, deletions and insertions). These probes produced discrimination factors between 3 and 100+ (median, 26). Without retuning, our probes function robustly from 10 °C to 37 °C, from 1 mM Mg2+ to 47 mM Mg2+, and with nucleic acid concentrations from 1 nM to 5 μM. Experiments with RNA also showed effective single-base change discrimination. PMID:22354435
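
    A toy two-state equilibrium model of the thermodynamic reasoning above: with the probe tuned so that correct hybridization has a standard free energy near zero, a single-base change adds a penalty ddG, and the ratio of hybridization yields bounds the discrimination factor. This simplification is ours, not the paper's full analysis, and the penalty value is an assumed placeholder.

      import numpy as np

      R = 1.987e-3                                  # kcal / (mol K)

      def hybridization_yield(dG, T=298.15):
          """Idealized two-state equilibrium yield for reaction free energy dG."""
          K = np.exp(-dG / (R * T))
          return K / (1.0 + K)

      ddG = 3.0                                     # assumed single-mismatch penalty (kcal/mol)
      df = hybridization_yield(0.0) / hybridization_yield(ddG)
      print(round(df, 1))                           # ~ 0.5 * exp(ddG/RT) for large penalties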

  6. Time resolved optical system for an early detection of prostate tumor

    NASA Astrophysics Data System (ADS)

    Hervé, Lionel; Laidevant, Aurélie; Debourdeau, Mathieu; Boutet, Jérôme; Dinten, Jean-Marc

    2011-02-01

    We developed an endorectal time-resolved optical probe aiming at early detection of prostate tumors targeted by fluorescent markers. Optical fibers are embedded inside a clinically available endorectal ultrasound probe. Excitation light is driven sequentially from a femtosecond laser (775 nm) into 6 source fibers. 4 detection fibers collect the medium responses at the excitation and fluorescence (850 nm) wavelengths by means of 4 photomultipliers associated with a 4-channel time-correlated single photon counting card. We also developed the method to process the experimental data. This involves the numerical computation of the forward model, the creation of robust features that are automatically corrected for numerous possible experimental biases, and the reconstruction of the inclusion by using the intensity and mean time of these features. To evaluate our system performance, we acquired measurements of a 40 μL ICG inclusion (10 μmol.L-1) at various lateral and depth locations in a phantom. Analysis of the results showed that we correctly reconstructed the fluorophore for the lateral positions (16 mm range) and for distances to the probe of up to 1.5 cm. The precision of localization was found to be around 1 mm, which complies well with the precision specifications needed for the clinical application.

  7. Recovery and radiation corrections and time constants of several sizes of shielded and unshielded thermocouple probes for measuring gas temperature

    NASA Technical Reports Server (NTRS)

    Glawe, G. E.; Holanda, R.; Krause, L. N.

    1978-01-01

    Performance characteristics were experimentally determined for several sizes of a shielded and an unshielded thermocouple probe design. The probes are of swaged construction and were made of type K wire with a stainless steel sheath and shield and MgO insulation. The wire sizes ranged from 0.03- to 1.02-mm diameter for the unshielded design and from 0.16- to 0.81-mm diameter for the shielded design. The probes were tested through a Mach number range of 0.2 to 0.9, a temperature range of room ambient to 1420 K, and a total-pressure range of 0.03 to 2.2 MPa (0.3 to 22 atm). Tables and graphs are presented to aid in selecting a particular type and size. Recovery corrections, radiation corrections, and time constants were determined.

  8. Vector-valued Jack polynomials and wavefunctions on the torus

    NASA Astrophysics Data System (ADS)

    Dunkl, Charles F.

    2017-06-01

    The Hamiltonian of the quantum Calogero-Sutherland model of N identical particles on the circle with 1/r² interactions has eigenfunctions consisting of Jack polynomials times the base state. By use of the generalized Jack polynomials taking values in modules of the symmetric group and the matrix solution of a system of linear differential equations one constructs novel eigenfunctions of the Hamiltonian. Like the usual wavefunctions, each eigenfunction determines a symmetric probability density on the N-torus. The construction applies to any irreducible representation of the symmetric group. The methods depend on the theory of generalized Jack polynomials due to Griffeth, and the Yang-Baxter graph approach of Luque and the author.

  9. Phase demodulation method from a single fringe pattern based on correlation with a polynomial form.

    PubMed

    Robin, Eric; Valle, Valéry; Brémand, Fabrice

    2005-12-01

    The method presented extracts the demodulated phase from only one fringe pattern. Locally, this method approaches the fringe pattern morphology with the help of a mathematical model. The degree of similarity between the mathematical model and the real fringe is estimated by minimizing a correlation function. To use an optimization process, we have chosen a polynomial form as the mathematical model. However, the use of a polynomial form requires an identification procedure to retrieve the demodulated phase. This method, polynomial modulated phase correlation, is tested on several examples. Its performance, in terms of speed and precision, is demonstrated on very noisy fringe patterns.

  10. Combining freeform optics and curved detectors for wide field imaging: a polynomial approach over squared aperture.

    PubMed

    Muslimov, Eduard; Hugot, Emmanuel; Jahn, Wilfried; Vives, Sebastien; Ferrari, Marc; Chambion, Bertrand; Henry, David; Gaschet, Christophe

    2017-06-26

    In recent years significant progress has been achieved in the design and fabrication of optical systems based on freeform optical surfaces. They provide the possibility to build fast, wide-angle and high-resolution systems that are very compact and free of obscuration. However, design techniques for freeform surfaces remain underexplored. In the present paper we use the mathematical apparatus of orthogonal polynomials defined over a square aperture, developed earlier for the tasks of wavefront reconstruction, to describe the shape of a mirror surface. Two cases, namely Legendre polynomials and a generalization of the Zernike polynomials on a square, are considered. The potential advantages of these polynomial sets are demonstrated on the example of a three-mirror unobscured telescope with F/# = 2.5 and FoV = 7.2x7.2°. In addition, we discuss the possibility of using curved detectors in such a design.
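
    A minimal sketch of describing a freeform mirror departure with Legendre polynomials over a square aperture, as the paper proposes; numpy's 2-D Legendre evaluation is used, and the coefficient values are arbitrary placeholders rather than a telescope design.

      import numpy as np
      from numpy.polynomial import legendre

      # c[i, j] multiplies P_i(x) * P_j(y) over the normalized aperture [-1, 1]^2
      c = np.zeros((4, 4))
      c[2, 0] = 1.5e-3                              # x-curvature term (placeholder)
      c[0, 2] = 1.2e-3                              # y-curvature term (placeholder)
      c[2, 2] = -4.0e-5                             # cross term (placeholder)

      u = np.linspace(-1, 1, 101)
      X, Y = np.meshgrid(u, u)
      sag = legendre.legval2d(X, Y, c)              # freeform departure over the square pupil
      print(sag.shape, float(sag.max()))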

  11. Recurrence approach and higher order polynomial algebras for superintegrable monopole systems

    NASA Astrophysics Data System (ADS)

    Hoque, Md Fazlul; Marquette, Ian; Zhang, Yao-Zhong

    2018-05-01

    We revisit the MIC-harmonic oscillator in flat space with monopole interaction and derive the polynomial algebra satisfied by the integrals of motion and its energy spectrum using the ad hoc recurrence approach. We introduce a superintegrable monopole system in a generalized Taub-Newman-Unti-Tamburino (NUT) space. The Schrödinger equation of this model is solved in spherical coordinates in the framework of Stäckel transformation. It is shown that wave functions of the quantum system can be expressed in terms of the product of Laguerre and Jacobi polynomials. We construct ladder and shift operators based on the corresponding wave functions and obtain the recurrence formulas. By applying these recurrence relations, we construct higher order algebraically independent integrals of motion. We show that the integrals form a polynomial algebra. We construct the structure functions of the polynomial algebra and obtain the degenerate energy spectra of the model.

  12. Design of polynomial fuzzy observer-controller for nonlinear systems with state delay: sum of squares approach

    NASA Astrophysics Data System (ADS)

    Gassara, H.; El Hajjaji, A.; Chaabane, M.

    2017-07-01

    This paper investigates the problem of observer-based control for two classes of polynomial fuzzy systems with time-varying delay. The first class concerns a special case where the polynomial matrices do not depend on the estimated state variables. The second one is the general case where the polynomial matrices could depend on unmeasurable system states that will be estimated. For the last case, two design procedures are proposed. The first one gives the polynomial fuzzy controller and observer gains in two steps. In the second procedure, the designed gains are obtained using a single-step approach to overcome the drawback of a two-step procedure. The obtained conditions are presented in terms of sum of squares (SOS) which can be solved via the SOSTOOLS and a semi-definite program solver. Illustrative examples show the validity and applicability of the proposed results.

  13. Using Tutte polynomials to analyze the structure of the benzodiazepines

    NASA Astrophysics Data System (ADS)

    Cadavid Muñoz, Juan José

    2014-05-01

    Graph theory in general, and Tutte polynomials in particular, are used to analyze the chemical structure of the benzodiazepines. Similarity analyses based on the Tutte polynomials are used to find other molecules that are similar to the benzodiazepines and might therefore show similar psychoactive action for medical purposes, in order to avoid the drawbacks associated with benzodiazepine-based medicines. For each type of benzodiazepine, Tutte polynomials are computed and some numeric characteristics are obtained, such as the number of spanning trees and the number of spanning forests. Computations are done using Maple's GraphTheory computer algebra package. The obtained analytical results are of great importance in pharmaceutical engineering. As a future research line, the computational chemistry program Spartan will be used to extend these analyses and to compare its output with the results obtained from the Tutte polynomials of the benzodiazepines.
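
    A minimal deletion-contraction sketch of the Tutte polynomial for a small multigraph, with sympy for the symbolic result. (The paper computes Tutte polynomials with Maple's GraphTheory package; this independent illustration runs in exponential time and therefore suits only small molecular graphs.)

      import sympy as sp

      x, y = sp.symbols("x y")

      def components(vertices, edges):
          parent = {v: v for v in vertices}
          def find(v):
              while parent[v] != v:
                  parent[v] = parent[parent[v]]
                  v = parent[v]
              return v
          for u, v in edges:
              parent[find(u)] = find(v)
          return len({find(v) for v in vertices})

      def contract(edges, u, v):
          """Relabel endpoint v as u in every edge."""
          return [(u if a == v else a, u if b == v else b) for a, b in edges]

      def tutte(vertices, edges):
          if not edges:
              return sp.Integer(1)
          (u, v), rest = edges[0], edges[1:]
          if u == v:                                               # loop
              return y * tutte(vertices, rest)
          if components(vertices, rest) > components(vertices, edges):   # bridge
              return x * tutte(vertices - {v}, contract(rest, u, v))
          return (tutte(vertices, rest)                            # delete e
                  + tutte(vertices - {v}, contract(rest, u, v)))   # contract e

      # triangle C3: T(x, y) = x**2 + x + y
      print(sp.expand(tutte({1, 2, 3}, [(1, 2), (2, 3), (1, 3)])))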

  14. Secular Orbit Evolution in Systems with a Strong External Perturber—A Simple and Accurate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrade-Ines, Eduardo; Eggl, Siegfried, E-mail: eandrade.ines@gmail.com, E-mail: siegfried.eggl@jpl.nasa.gov

    We present a semi-analytical correction to the seminal solution for the secular motion of a planet's orbit under the gravitational influence of an external perturber derived by Heppenheimer. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system's parameters that can be applied to first-order forced eccentricity and secular frequency estimates. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer's solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters, and the limits of applicability are given.

  15. A DNA microarray-based assay to detect dual infection with two dengue virus serotypes.

    PubMed

    Díaz-Badillo, Alvaro; Muñoz, María de Lourdes; Perez-Ramirez, Gerardo; Altuzar, Victor; Burgueño, Juan; Mendoza-Alvarez, Julio G; Martínez-Muñoz, Jorge P; Cisneros, Alejandro; Navarrete-Espinosa, Joel; Sanchez-Sinencio, Feliciano

    2014-04-25

    Here we describe and test a microarray-based method for the screening of dengue virus (DENV) serotypes. This DNA microarray assay is specific and sensitive and can detect dual infections with two dengue virus serotypes as well as single-serotype infections. Other methodologies may underestimate samples containing more than one serotype. This technology can be used to discriminate between the four DENV serotypes. Single-stranded DNA targets were covalently attached to glass slides and hybridised with specific labelled probes. DENV isolates and dengue samples were used to evaluate microarray performance. Our results demonstrate that the probes hybridized specifically to the DENV serotypes, with no nonspecific signals detected. This finding provides evidence that specific probes can effectively identify single and double infections in DENV samples.

  16. A DNA Microarray-Based Assay to Detect Dual Infection with Two Dengue Virus Serotypes

    PubMed Central

    Díaz-Badillo, Alvaro; de Lourdes Muñoz, María; Perez-Ramirez, Gerardo; Altuzar, Victor; Burgueño, Juan; Mendoza-Alvarez, Julio G.; Martínez-Muñoz, Jorge P.; Cisneros, Alejandro; Navarrete-Espinosa, Joel; Sanchez-Sinencio, Feliciano

    2014-01-01

    Here we describe and test a microarray-based method for the screening of dengue virus (DENV) serotypes. This DNA microarray assay is specific and sensitive and can detect dual infections with two dengue virus serotypes as well as single-serotype infections. Other methodologies may underestimate samples containing more than one serotype. This technology can be used to discriminate between the four DENV serotypes. Single-stranded DNA targets were covalently attached to glass slides and hybridised with specific labelled probes. DENV isolates and dengue samples were used to evaluate microarray performance. Our results demonstrate that the probes hybridized specifically to the DENV serotypes, with no nonspecific signals detected. This finding provides evidence that specific probes can effectively identify single and double infections in DENV samples. PMID:24776933

  17. Robust distortion correction of endoscope

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Nie, Sixiang; Soto-Thompson, Marcelo; Chen, Chao-I.; A-Rahim, Yousif I.

    2008-03-01

    Endoscopic images suffer from a fundamental spatial distortion due to the wide-angle design of the endoscope lens. This barrel-type distortion is an obstacle for subsequent Computer Aided Diagnosis (CAD) algorithms and should be corrected. Various methods and research models for barrel-type distortion correction have been proposed and studied. For industrial applications, a stable, robust method with high accuracy is required to calibrate the different types of endoscopes in an easy-to-use way. The correction area must be large enough to cover all the regions that the physicians need to see. In this paper, we present our endoscope distortion correction procedure, which includes data acquisition, distortion center estimation, distortion coefficient calculation, and look-up table (LUT) generation. We investigate different polynomial models used for modeling the distortion and propose a new one which provides correction results with better visual quality. The method has been verified with four types of colonoscopes. The correction procedure is currently being applied to human subject data and the coefficients are being utilized in a subsequent 3D reconstruction project of the colon.
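
    A minimal sketch of the radial polynomial distortion model that such correction procedures typically fit: each corrected pixel is mapped back to a source (distorted) position through even-order terms about an estimated distortion center, producing the look-up table mentioned above. The model form and coefficient values are illustrative, not the paper's fitted values.

      import numpy as np

      def undistort_maps(width, height, center, k1, k2):
          """LUT sending each corrected pixel to its source (distorted) position,
          via r_src = r * (1 + k1*r^2 + k2*r^4) about the distortion center."""
          cx, cy = center
          xs, ys = np.meshgrid(np.arange(width), np.arange(height))
          x, y = xs - cx, ys - cy
          r2 = (x * x + y * y) / float(max(width, height)) ** 2   # normalized r^2
          scale = 1.0 + k1 * r2 + k2 * r2 * r2
          return cx + x * scale, cy + y * scale

      map_x, map_y = undistort_maps(640, 480, (320.0, 240.0), k1=-0.25, k2=0.05)
      print(map_x.shape)   # sample the source image at (map_x, map_y), e.g. with cv2.remap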

  18. Radius of curvature measurement of spherical smooth surfaces by multiple-beam interferometry in reflection

    NASA Astrophysics Data System (ADS)

    Abdelsalam, D. G.; Shaalan, M. S.; Eloker, M. M.; Kim, Daesuk

    2010-06-01

    In this paper a method is presented to accurately measure the radius of curvature of different types of smooth spherical surfaces, with radii of curvature of 38 000, 18 000 and 8000 mm, using multiple-beam interference fringes in reflection. The images captured by the digital detector were corrected by the flat-fielding method. The corrected images were analyzed and the form of the surfaces was obtained. A 3D profile for the three types of surfaces was obtained using Zernike polynomial fitting. Some sources of uncertainty in the measurement were calculated by means of ray tracing simulations, and the uncertainty budget was estimated to be within λ/40.
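
    A minimal sketch of Zernike polynomial fitting as used for the 3D surface profile: a few low-order terms are fit to height data on the unit disk by linear least squares. Normalization conventions for the basis vary; a handful of terms in (rho, theta) form are written out explicitly here, and the data are synthetic.

      import numpy as np

      def zernike_basis(rho, theta):
          """Columns: piston, x tilt, y tilt, defocus, two astigmatism terms."""
          return np.column_stack([
              np.ones_like(rho),
              rho * np.cos(theta),
              rho * np.sin(theta),
              2 * rho ** 2 - 1,
              rho ** 2 * np.cos(2 * theta),
              rho ** 2 * np.sin(2 * theta),
          ])

      rng = np.random.default_rng(3)
      rho = np.sqrt(rng.uniform(0, 1, 2000))        # uniform sampling of the unit disk
      theta = rng.uniform(0, 2 * np.pi, 2000)
      z = 0.8 * (2 * rho ** 2 - 1) + 0.05 * rng.standard_normal(2000)   # mostly defocus

      coef, *_ = np.linalg.lstsq(zernike_basis(rho, theta), z, rcond=None)
      print(np.round(coef, 3))                      # defocus coefficient ~ 0.8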

  19. Detecting Biological Warfare Agents

    PubMed Central

    Song, Linan; Ahn, Soohyoun

    2005-01-01

    We developed a fiber-optic, microsphere-based, high-density array composed of 18 species-specific probe microsensors to identify biological warfare agents. We simultaneously identified multiple biological warfare agents in environmental samples by looking at specific probe responses after hybridization and response patterns of the multiplexed array. PMID:16318712

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solves a preconditioned ℓ¹-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.
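
    A minimal sketch of the approach described above: samples are drawn from the arcsine (equilibrium) measure on [-1, 1], the Legendre system is preconditioned with Christoffel-function weights, and a sparse coefficient vector is recovered with an ℓ¹ solver. scikit-learn's Lasso is used here as a convenient stand-in for the paper's weighted ℓ¹-minimization, and all sizes are illustrative.

      import numpy as np
      from numpy.polynomial import legendre
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(4)
      N, m = 60, 40                                 # basis size, sample count
      c_true = np.zeros(N)
      c_true[[2, 7, 19]] = [1.0, -0.5, 0.25]        # sparse Legendre coefficients

      x = np.cos(np.pi * rng.uniform(0, 1, m))      # arcsine (equilibrium) samples
      V = legendre.legvander(x, N - 1) * np.sqrt(2 * np.arange(N) + 1)   # orthonormalized
      w = 1.0 / np.sqrt(np.sum(V ** 2, axis=1))     # Christoffel preconditioning weights

      y = legendre.legval(x, c_true)
      lasso = Lasso(alpha=1e-4, fit_intercept=False, max_iter=100000)
      lasso.fit(V * w[:, None], y * w)
      print(np.nonzero(np.abs(lasso.coef_) > 1e-2)[0])   # ideally [2, 7, 19]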

  1. Decoding fMRI Signatures of Real-world Autobiographical Memory Retrieval.

    PubMed

    Rissman, Jesse; Chow, Tiffany E; Reggente, Nicco; Wagner, Anthony D

    2016-04-01

    Extant neuroimaging data implicate frontoparietal and medial-temporal lobe regions in episodic retrieval, and the specific pattern of activity within and across these regions is diagnostic of an individual's subjective mnemonic experience. For example, in laboratory-based paradigms, memories for recently encoded faces can be accurately decoded from single-trial fMRI patterns [Uncapher, M. R., Boyd-Meredith, J. T., Chow, T. E., Rissman, J., & Wagner, A. D. Goal-directed modulation of neural memory patterns: Implications for fMRI-based memory detection. Journal of Neuroscience, 35, 8531-8545, 2015; Rissman, J., Greely, H. T., & Wagner, A. D. Detecting individual memories through the neural decoding of memory states and past experience. Proceedings of the National Academy of Sciences, U.S.A., 107, 9849-9854, 2010]. Here, we investigated the neural patterns underlying memory for real-world autobiographical events, probed at 1- to 3-week retention intervals as well as whether distinct patterns are associated with different subjective memory states. For 3 weeks, participants (n = 16) wore digital cameras that captured photographs of their daily activities. One week later, they were scanned while making memory judgments about sequences of photos depicting events from their own lives or events captured by the cameras of others. Whole-brain multivoxel pattern analysis achieved near-perfect accuracy at distinguishing correctly recognized events from correctly rejected novel events, and decoding performance did not significantly vary with retention interval. Multivoxel pattern classifiers also differentiated recollection from familiarity and reliably decoded the subjective strength of recollection, of familiarity, or of novelty. Classification-based brain maps revealed dissociable neural signatures of these mnemonic states, with activity patterns in hippocampus, medial PFC, and ventral parietal cortex being particularly diagnostic of recollection. Finally, a classifier trained on previously acquired laboratory-based memory data achieved reliable decoding of autobiographical memory states. We discuss the implications for neuroscientific accounts of episodic retrieval and comment on the potential forensic use of fMRI for probing experiential knowledge.

  2. A new numerical treatment based on Lucas polynomials for 1D and 2D sinh-Gordon equation

    NASA Astrophysics Data System (ADS)

    Oruç, Ömer

    2018-04-01

    In this paper, a new mixed method based on Lucas and Fibonacci polynomials is developed for the numerical solution of 1D and 2D sinh-Gordon equations. First, the time variable is discretized by central finite differences; then the unknown function and its derivatives are expanded in Lucas series. With the help of these series expansions and Fibonacci polynomials, differentiation matrices are derived. With this approach, solving the sinh-Gordon equation is transformed into solving a system of algebraic equations. The Lucas series coefficients are acquired by solving this system of algebraic equations. By plugging these coefficients back into the Lucas series expansion, numerical solutions are then obtained consecutively. The main objective of this paper is to demonstrate that the Lucas polynomial based method is convenient for 1D and 2D nonlinear problems. By calculating the L2 and L∞ error norms for several 1D and 2D test problems, the efficiency and performance of the proposed method are monitored. The accurate results obtained confirm the applicability of the method.
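
    A minimal sketch of the polynomial machinery behind the method: Lucas and Fibonacci polynomials from their shared three-term recurrence, together with the identity L_n'(x) = n·F_n(x) that links a Lucas series to its derivative. The paper's full differentiation-matrix construction is not reproduced here.

      import numpy as np

      def poly_recurrence(p0, p1, n):
          """Polynomials from p_k = x*p_{k-1} + p_{k-2} (coefficients lowest first)."""
          seq = [np.array(p0, float), np.array(p1, float)]
          for _ in range(2, n + 1):
              nxt = np.zeros(len(seq[-1]) + 1)
              nxt[1:] += seq[-1]                    # x * p_{k-1}
              nxt[: len(seq[-2])] += seq[-2]        # + p_{k-2}
              seq.append(nxt)
          return seq[: n + 1]

      L = poly_recurrence([2.0], [0.0, 1.0], 5)     # Lucas: L_0 = 2, L_1 = x
      F = poly_recurrence([0.0], [1.0], 5)          # Fibonacci: F_1 = 1, F_2 = x
      # derivative identity linking the two families: L_n'(x) = n * F_n(x)
      print(np.allclose(np.polynomial.polynomial.polyder(L[5]), 5 * F[5]))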

  3. Demographically Corrected Normative Standards for the Spanish Language Version of the NIH Toolbox Cognition Battery.

    PubMed

    Casaletto, Kaitlin B; Umlauf, Anya; Marquine, Maria; Beaumont, Jennifer L; Mungas, Daniel; Gershon, Richard; Slotkin, Jerry; Akshoomoff, Natacha; Heaton, Robert K

    2016-03-01

    Hispanics are the fastest growing ethnicity in the United States, yet there are limited well-validated neuropsychological tools in Spanish, and an even greater paucity of normative standards representing this population. The Spanish NIH Toolbox Cognition Battery (NIHTB-CB) is a novel neurocognitive screener; however, the original norms were developed combining the Spanish and English versions of the battery. We developed normative standards for the Spanish NIHTB-CB, fully adjusting for demographic variables and based entirely on a Spanish-speaking sample. A total of 408 Spanish-speaking neurologically healthy adults (ages 18-85 years) and 496 children (ages 3-7 years) completed the NIH Toolbox norming project. We developed three types of scores: uncorrected based on the entire Spanish-speaking cohort, age-corrected, and fully demographically corrected (age, education, sex) scores for each of the seven NIHTB-CB tests and three composites (Fluid, Crystallized, Total Composites). Corrected scores were developed using polynomial regression models. Demographic factors demonstrated medium-to-large effects on uncorrected NIHTB-CB scores in a pattern that differed from that observed on the English NIHTB-CB. For example, in Spanish-speaking adults, education was more strongly associated with Fluid scores, but showed the strongest association with Crystallized scores among English-speaking adults. Demographic factors were no longer associated with fully corrected scores. The original norms were not successful in eliminating demographic effects, overestimating children's performances, and underestimating adults' performances on the Spanish NIHTB-CB. The disparate pattern of demographic associations on the Spanish versus English NIHTB-CB supports the need for distinct normative standards developed separately for each population. Fully adjusted scores presented here will aid in more accurately characterizing acquired brain dysfunction among U.S. Spanish-speakers.

  4. New separated polynomial solutions to the Zernike system on the unit disk and interbasis expansion.

    PubMed

    Pogosyan, George S; Wolf, Kurt Bernardo; Yakhno, Alexander

    2017-10-01

    The differential equation proposed by Frits Zernike to obtain a basis of polynomial orthogonal solutions on the unit disk to classify wavefront aberrations in circular pupils is shown to have a set of new orthonormal solution bases involving Legendre and Gegenbauer polynomials in nonorthogonal coordinates, close to Cartesian ones. We find the overlaps between the original Zernike basis and a representative of the new set, which turn out to be Clebsch-Gordan coefficients.

  5. Distortion theorems for polynomials on a circle

    NASA Astrophysics Data System (ADS)

    Dubinin, V. N.

    2000-12-01

    Inequalities for the derivatives with respect to φ = arg z of the functions Re P(z), |P(z)|² and arg P(z) are established for an algebraic polynomial P(z) at points on the circle |z| = 1. These estimates depend, in particular, on the constant term and the leading coefficient of the polynomial P(z) and improve the classical Bernstein and Turán inequalities. The method of proof is based on the techniques of generalized reduced moduli.

  6. Polynomial expansions of single-mode motions around equilibrium points in the circular restricted three-body problem

    NASA Astrophysics Data System (ADS)

    Lei, Hanlun; Xu, Bo; Circi, Christian

    2018-05-01

    In this work, the single-mode motions around the collinear and triangular libration points in the circular restricted three-body problem are studied. To describe these motions, we adopt an invariant manifold approach, which states that a suitable pair of independent variables are taken as modal coordinates and the remaining state variables are expressed as polynomial series in them. Based on the invariant manifold approach, the general procedure for constructing polynomial expansions up to a certain order is outlined. Taking the Earth-Moon system as the example dynamical model, we construct the polynomial expansions up to the tenth order for the single-mode motions around the collinear libration points, and up to orders eight and six for the planar and vertical periodic motions around the triangular libration point, respectively. The polynomial expansions constructed can be used to determine the initial states for the single-mode motions around the equilibrium points. To check their validity, the accuracy of the initial states determined by the polynomial expansions is evaluated.

  7. Orbifold E-functions of dual invertible polynomials

    NASA Astrophysics Data System (ADS)

    Ebeling, Wolfgang; Gusein-Zade, Sabir M.; Takahashi, Atsushi

    2016-08-01

    An invertible polynomial is a weighted homogeneous polynomial with the number of monomials coinciding with the number of variables and such that the weights of the variables and the quasi-degree are well defined. In the framework of the search for mirror symmetric orbifold Landau-Ginzburg models, P. Berglund and M. Henningson considered a pair (f, G) consisting of an invertible polynomial f and an abelian group G of its symmetries together with a dual pair (f̃, G̃). We consider the so-called orbifold E-function of such a pair (f, G), which is a generating function for the exponents of the monodromy action on an orbifold version of the mixed Hodge structure on the Milnor fibre of f. We prove that the orbifold E-functions of Berglund-Henningson dual pairs coincide up to a sign depending on the number of variables and a simple change of variables. The proof is based on a relation between monomials (say, elements of a monomial basis of the Milnor algebra of an invertible polynomial) and elements of the whole symmetry group of the dual polynomial.

  8. Electron collection theory for a D-region subsonic blunt electrostatic probe

    NASA Technical Reports Server (NTRS)

    Wai-Kwong Lai, T.

    1974-01-01

    Blunt probe theory for subsonic flow in a weakly ionized and collisional gas is reviewed, and an electron collection theory is developed for the relatively unexplored case, Debye length approximately 1, which occurs in the lower ionosphere (D-region). It is found that the dimensionless Debye length is no longer an electric field screening parameter, and the space charge field effect can be neglected. For ion collection, Hoult-Sonin theory is recognized as a correct description of the thin, ion density-perturbed layer adjacent to the blunt probe surface. The large volume with electron density perturbed by a positively biased probe renders the usual thin boundary layer analysis inapplicable. Theories relating free stream conditions to the electron collection rate for both stationary and moving blunt probes are obtained. A model based on experimental nonlinear electron drift velocity data is proposed. For a subsonically moving probe, it is found that the perturbed region can be divided into four regions with distinct collection mechanisms.

  9. Quadratically Convergent Method for Simultaneously Approaching the Roots of Polynomial Solutions of a Class of Differential Equations

    NASA Astrophysics Data System (ADS)

    Recchioni, Maria Cristina

    2001-12-01

    This paper investigates the application of the method introduced by L. Pasquini (1989) for simultaneously approaching the zeros of polynomial solutions to a class of second-order linear homogeneous ordinary differential equations with polynomial coefficients, in the particular case in which these polynomial solutions have zeros symmetrically arranged with respect to the origin. The method is based on a family of nonlinear equations associated with the given class of differential equations. The roots of the nonlinear equations are related to the roots of the polynomial solutions of the differential equations considered, and Newton's method is applied to find them. Following the lines of (Pasquini, 1994), the nonsingularity of the roots of these nonlinear equations is studied, and more favourable results than those in (Pasquini, 1994) are proven in the particular case of polynomial solutions with symmetric zeros. The method is applied to approximate the roots of Hermite-Sobolev type polynomials and Freud polynomials. A lower bound for the smallest positive root of Hermite-Sobolev type polynomials is given via the nonlinear equation. The quadratic convergence of the method is proven. A comparison with a classical method that uses the Jacobi matrices is carried out. We show that the algorithm derived by the proposed method is sometimes preferable to the classical QR type algorithms for computing the eigenvalues of the Jacobi matrices even when these matrices are real and symmetric.
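
    A minimal sketch of the classical Jacobi-matrix comparison method mentioned in the abstract, for the (physicists') Hermite polynomials: the zeros of H_n are the eigenvalues of a symmetric tridiagonal Jacobi matrix with off-diagonal entries sqrt(k/2). The paper's own algorithm instead applies Newton's method to an associated nonlinear system, which is not reproduced here.

      import numpy as np

      def hermite_zeros_jacobi(n):
          """Zeros of the physicists' Hermite polynomial H_n as eigenvalues of
          the symmetric tridiagonal Jacobi matrix (off-diagonal sqrt(k/2))."""
          k = np.arange(1, n)
          off = np.sqrt(k / 2.0)
          J = np.diag(off, 1) + np.diag(off, -1)
          return np.linalg.eigvalsh(J)              # symmetric about the origin

      n = 8
      gauss_nodes = np.polynomial.hermite.hermgauss(n)[0]
      print(np.max(np.abs(hermite_zeros_jacobi(n) - gauss_nodes)))   # ~ machine precision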

  10. A PCR-Based Method for RNA Probes and Applications in Neuroscience.

    PubMed

    Hua, Ruifang; Yu, Shanshan; Liu, Mugen; Li, Haohong

    2018-01-01

    In situ hybridization (ISH) is a powerful technique that is used to detect the localization of specific nucleic acid sequences for understanding the organization, regulation, and function of genes. However, in most cases, RNA probes are obtained by in vitro transcription from plasmids containing specific promoter elements and mRNA-specific cDNA. Preparing probes from plasmid vectors is time-consuming and not suitable for rapid gene mapping. Here, we introduce a simplified method to prepare digoxigenin (DIG)-labeled non-radioactive RNA probes based on polymerase chain reaction (PCR) amplification, with applications in free-floating mouse brain sections. Employing a transgenic reporter line, we investigate the expression of somatostatin (SST) mRNA in the adult mouse brain. The method can be applied to identify the colocalization of SST mRNA and proteins, including corticotrophin-releasing hormone (CRH) and protein kinase C delta type (PKC-δ), using double immunofluorescence, which is useful for understanding the organization of complex brain nuclei. Moreover, the method can be combined with retrograde tracing to visualize functional connections in neural circuitry. In brief, the PCR-based method for non-radioactive RNA probes is a useful tool that can be widely applied in neuroscience studies.

  11. Specific 16S ribosomal RNA targeted oligonucleotide probe against Clavibacter michiganensis subsp. sepedonicus.

    PubMed

    Mirza, M S; Rademaker, J L; Janse, J D; Akkermans, A D

    1993-11-01

    In this article we report on the polymerase chain reaction amplification of a partial 16S rRNA gene from the plant-pathogenic bacterium Clavibacter michiganensis subsp. sepedonicus. A partial sequence (about 400 base pairs) of the gene was determined that covered two variable regions important for oligonucleotide probe development. A specific 24-mer oligonucleotide probe targeted against the V6 region of 16S rRNA was designed. Specificity of the probe was determined using dot blot hybridization. Under stringent conditions (60°C), the probe hybridized with all 16 Cl. michiganensis subsp. sepedonicus strains tested. Hybridization did not occur with 32 plant-pathogenic and saprophytic bacteria used as controls under the same conditions. Under less stringent conditions (55°C), the related Clavibacter michiganensis subsp. insidiosus, Clavibacter michiganensis subsp. nebraskensis, and Clavibacter michiganensis subsp. tesselarius also showed hybridization. At even lower stringency (40°C), all Cl. michiganensis subspecies tested, including Clavibacter michiganensis subsp. michiganensis, showed a hybridization signal, suggesting that under these conditions the probe may be used as a species-specific probe for Cl. michiganensis.

  12. Miniature electrically tunable rotary dual-focus lenses

    NASA Astrophysics Data System (ADS)

    Zou, Yongchao; Zhang, Wei; Lin, Tong; Chau, Fook Siong; Zhou, Guangya

    2016-03-01

    The emerging dual-focus lenses are drawing increasing attention recently due to their wide applications in both academia and industry, including laser cutting systems, microscopy systems, and interferometer-based surface profilers. In this paper, a miniature electrically tunable rotary dual-focus lens is developed. The lens consists of two optical elements, each having one optically flat surface and one freeform surface. The two freeform surfaces are initialized with the governing equation z = Ar²θ (where A is a constant to be determined, and r and θ denote radius and angle in the polar coordinate system) and then optimized by a ray-tracing technique with additional Zernike polynomial terms for aberration correction. The freeform surfaces are achieved by a single-point diamond turning technique, and a PDMS-based replication process is then utilized to materialize the final lens elements. To drive the two coaxial elements to rotate independently, two MEMS thermal rotary actuators are developed and fabricated by a standard MUMPs process. The experimental results show that the MEMS thermal actuator provides a maximum rotation angle of about 8.2 degrees with an input DC voltage of 6.5 V, leading to a wide tuning range for both focal lengths of the lens. Specifically, one focal length can be tuned from about 30 mm to 20 mm while the other can be adjusted from about 30 mm to 60 mm.
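
    To see why a relative rotation tunes the focal length: two complementary surfaces z = ±Ar²θ offset by a rotation angle α leave a net sag proportional to αr², i.e., a thin lens whose power grows linearly with α. A minimal thin-lens estimate (the value of A, the refractive index, and the thin-lens approximation are all assumptions for illustration, not the paper's design data):

    ```python
    import numpy as np

    # Element 1: z = +A r^2 theta; element 2 (rotated by alpha): z = -A r^2 (theta - alpha).
    # Net thickness profile: t(r) = A * alpha * r^2, a pure quadratic (a lens).
    # A thin lens with t(r) = r^2 / (2 f (n - 1)) gives f = 1 / (2 A alpha (n - 1)).
    A = 0.3    # surface constant in 1/mm (assumed value)
    n = 1.41   # approximate refractive index of PDMS

    def focal_length_mm(alpha_rad):
        return 1.0 / (2.0 * A * alpha_rad * (n - 1.0))

    for alpha_deg in (2, 4, 8):
        alpha = np.deg2rad(alpha_deg)
        print(f"rotation {alpha_deg} deg -> f ~ {focal_length_mm(alpha):.0f} mm")
    ```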

  13. A Bayesian nonparametric approach to dynamical noise reduction

    NASA Astrophysics Data System (ADS)

    Kaloudis, Konstantinos; Hatjispyros, Spyridon J.

    2018-06-01

    We propose a Bayesian nonparametric approach for the noise reduction of a given chaotic time series contaminated by dynamical noise, based on Markov chain Monte Carlo methods. The underlying unknown noise process possibly exhibits heavy-tailed behavior. We introduce the Dynamic Noise Reduction Replicator model, with which we reconstruct the unknown dynamical equations and, in parallel, replicate the dynamics under dynamical perturbations of reduced noise level. The dynamic noise reduction procedure is demonstrated specifically in the case of polynomial maps. Simulations based on synthetic time series are presented.
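
    For intuition, dynamical noise perturbs the evolution itself rather than the observations. A minimal sketch of a quadratic (polynomial) map driven by heavy-tailed dynamical noise (the map, parameter values, and the Student-t noise law are illustrative choices, not the paper's experimental setup):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_quadratic_map(x0=0.1, a=1.85, n=2000, scale=1e-3, df=3):
        """Iterate x_{t+1} = 1 - a*x_t^2 + e_t, with heavy-tailed noise e_t."""
        x = np.empty(n)
        x[0] = x0
        for t in range(n - 1):
            e = scale * rng.standard_t(df)    # Student-t dynamical perturbation
            x[t + 1] = 1.0 - a * x[t] ** 2 + e
        return x

    series = noisy_quadratic_map()
    print(series[:5])
    ```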

  14. Nonpeptide-Based Small-Molecule Probe for Fluorogenic and Chromogenic Detection of Chymotrypsin.

    PubMed

    Wu, Lei; Yang, Shu-Hou; Xiong, Hao; Yang, Jia-Qian; Guo, Jun; Yang, Wen-Chao; Yang, Guang-Fu

    2017-03-21

    We report herein a nonpeptide-based small-molecule probe for the fluorogenic and chromogenic detection of chymotrypsin, as well as a first application of this probe. The probe was rationally designed by mimicking the peptide substrate and optimized by adjusting the recognition group. The refined probe 2 exhibits good specificity toward chymotrypsin, producing an approximately 25-fold enhancement in both fluorescence intensity and absorbance upon catalysis by chymotrypsin. Compared with the most widely used peptide substrate (AMC-FPAA-Suc) of chymotrypsin, probe 2 shows about 5-fold higher binding affinity and comparable catalytic efficiency toward chymotrypsin. Furthermore, it was successfully applied to inhibitor characterization. To the best of our knowledge, probe 2 is the first nonpeptide-based small-molecule probe for chymotrypsin, with the advantages of simple structure and high sensitivity compared to the widely used peptide-based substrates. This small-molecule probe is expected to be a useful molecular tool for drug discovery and the diagnosis of chymotrypsin-related disease.

  15. Exploratory analysis of normative performance on the UCSD Performance-Based Skills Assessment-Brief.

    PubMed

    Vella, Lea; Patterson, Thomas L; Harvey, Philip D; McClure, Margaret McNamara; Mausbach, Brent T; Taylor, Michael J; Twamley, Elizabeth W

    2017-10-01

    The UCSD Performance-Based Skills Assessment (UPSA) is a performance-based measure of functional capacity. The brief, two-domain (finance and communication ability) version of the assessment (UPSA-B) is now widely used in both clinical research and treatment trials. To date, research has not examined possible demographic-UPSA-B relationships within a non-psychiatric population. We aimed to produce and describe preliminary normative scores for the UPSA-B over a full range of ages and levels of educational attainment. The finance and communication subscales of the UPSA were administered to 190 healthy participants in the context of three separate studies. These data were combined to examine the effects of age, sex, and educational attainment on the UPSA-B domain and total scores. Fractional polynomial regression was used to compute demographically corrected T-scores for the UPSA-B total score, and percentile rank conversion was used for the two subscales. Age and education both had significant non-linear effects on the UPSA-B total score. The finance subscale was significantly related to both gender and years of education, whereas the communication subscale was not significantly related to any of the demographic characteristics. Demographically corrected T-scores and percentile ranks for UPSA-B scores are now available for use in clinical research. Published by Elsevier B.V.
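
    A sketch of the general approach described above: fractional polynomial regression of scores on age, then conversion of standardized residuals to T-scores. The power set is Royston and Altman's standard set; the data, single-term model, and selection rule are illustrative, not the study's fitted norms:

    ```python
    import numpy as np

    # Candidate fractional-polynomial powers (Royston & Altman's standard set).
    POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]

    def fp_term(age, p):
        """One fractional-polynomial term; power 0 is defined as log(x)."""
        x = age / 10.0                      # scale to keep terms well conditioned
        return np.log(x) if p == 0 else x ** p

    def fit_fp1(age, score):
        """Pick the best single-term FP model by residual sum of squares."""
        best = None
        for p in POWERS:
            X = np.column_stack([np.ones_like(age), fp_term(age, p)])
            beta, *_ = np.linalg.lstsq(X, score, rcond=None)
            rss = np.sum((score - X @ beta) ** 2)
            if best is None or rss < best[0]:
                best = (rss, p, beta, X)
        return best

    # Illustrative synthetic data: scores decline nonlinearly with age.
    rng = np.random.default_rng(1)
    age = rng.uniform(20, 85, 300)
    score = 100 - 0.004 * (age - 20) ** 2 + rng.normal(0, 3, age.size)

    rss, p, beta, X = fit_fp1(age, score)
    pred = X @ beta
    sd = np.std(score - pred, ddof=2)
    t_scores = 50 + 10 * (score - pred) / sd   # demographically corrected T-scores
    print(f"chosen power: {p}; example T-scores: {np.round(t_scores[:5], 1)}")
    ```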

  16. A computerized Langmuir probe system

    NASA Astrophysics Data System (ADS)

    Pilling, L. S.; Bydder, E. L.; Carnegie, D. A.

    2003-07-01

    For low pressure plasmas it is important to record entire single or double Langmuir probe characteristics accurately. For plasmas with a depleted high-energy tail, the accuracy of the recorded ion current plays a critical role in determining the electron temperature. Even for high-density Maxwellian distributions, it is necessary to accurately model the ion current to obtain the correct electron density. Since the electron and ion current saturation values are, at best, orders of magnitude apart, a single current-sensing resistor cannot provide the resolution required to accurately record both. We present an automated, personal-computer-based data acquisition system for the determination of fundamental plasma properties in low pressure plasmas. The system is designed for single and double Langmuir probes, whose characteristics can be recorded over a bias voltage range of ±70 V with 12-bit resolution. The current flowing through the probes can be recorded over the range 5 nA to 100 mA. The use of a transimpedance amplifier for current sensing eliminates the requirement for traditional current-sensing resistors and hence the need to correct the raw data. The large current recording range is realized through the use of a real-time gain-switching system in the negative feedback loop of the transimpedance amplifier.

  17. A 2 epoch proper motion catalogue from the UKIDSS Large Area Survey

    NASA Astrophysics Data System (ADS)

    Smith, Leigh; Lucas, Phil; Burningham, Ben; Jones, Hugh; Pinfield, David; Smart, Ricky; Andrei, Alexandre

    2013-04-01

    The UKIDSS Large Area Survey (LAS) began in 2005, with the start of the UKIDSS program, as a 7-year effort to survey roughly 4000 square degrees at high galactic latitudes in the Y, J, H and K bands. The survey also included a significant quantity of two-epoch J band observations, with epoch baselines ranging from 2 to 7 years. We present a proper motion catalogue for the 1500 square degrees of the two-epoch LAS data, which includes some 800,000 sources with motions detected above the 5σ level. We developed a bespoke proper motion pipeline that applies a source-unique second-order polynomial transformation to the UKIDSS array coordinates of each source to counter potential local non-uniformity in the focal plane. Our catalogue agrees well with the proper motion data supplied in the current WFCAM Science Archive (WSA) DR9 catalogue where there is overlap, and with various optical catalogues, but it benefits from some improvements. One improvement is that we provide absolute proper motions, using LAS galaxies for the relative-to-absolute correction. Also, by using unique, local, second-order polynomial transformations, as opposed to the linear transformations in the WSA, we correct better for any local distortions in the focal plane, not including the radial distortion that is removed by their pipeline.
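
    A sketch of fitting a second-order 2D polynomial transformation between two epochs' array coordinates by least squares (matching of reference sources and the rest of the pipeline are outside this illustration):

    ```python
    import numpy as np

    def quad_design(x, y):
        """Design matrix for a full second-order 2D polynomial transformation."""
        return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

    def fit_transform(x1, y1, x2, y2):
        """Least-squares coefficients mapping epoch-1 coords onto epoch-2 coords."""
        A = quad_design(x1, y1)
        cx, *_ = np.linalg.lstsq(A, x2, rcond=None)
        cy, *_ = np.linalg.lstsq(A, y2, rcond=None)
        return cx, cy

    def apply_transform(cx, cy, x, y):
        A = quad_design(x, y)
        return A @ cx, A @ cy

    # Synthetic demo: reference stars with a small distortion between epochs.
    rng = np.random.default_rng(2)
    x1, y1 = rng.uniform(0, 2048, (2, 500))
    x2 = x1 + 1.5 + 1e-6 * x1**2 + rng.normal(0, 0.05, x1.size)   # fake distortion
    y2 = y1 - 0.7 + 2e-6 * x1 * y1 + rng.normal(0, 0.05, y1.size)

    cx, cy = fit_transform(x1, y1, x2, y2)
    xp, yp = apply_transform(cx, cy, x1, y1)
    r = np.hypot(xp - x2, yp - y2)
    print("rms residual (pixels):", np.sqrt(np.mean(r**2)))
    ```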

  18. Development and evaluation of probe based real time loop mediated isothermal amplification for Salmonella: A new tool for DNA quantification.

    PubMed

    Mashooq, Mohmad; Kumar, Deepak; Niranjan, Ankush Kiran; Agarwal, Rajesh Kumar; Rathore, Rajesh

    2016-07-01

    A one-step, single-tube, accelerated probe-based real-time loop-mediated isothermal amplification (RT-LAMP) assay was developed for detecting the invasion gene (invA) of Salmonella. Probe-based RT-LAMP is a novel method of gene amplification that amplifies nucleic acid with high specificity and rapidity under isothermal conditions with a set of six primers. The whole procedure is very simple and rapid, and amplification can be obtained in 20 min. Amplification was detected via the amplification curve, turbidity, and the addition of a DNA-binding dye at the end of the reaction, which produces a colour difference that can be visualized under normal daylight and under UV. The sensitivity of the developed assay was found to be 10-fold higher than that of TaqMan-based qPCR. The specificity of the RT-LAMP assay was validated by the absence of any cross-reaction with other members of the family Enterobacteriaceae and other gram-negative bacteria. These results indicate that the probe-based RT-LAMP assay is extremely rapid, cost-effective, highly specific and sensitive, and has potential usefulness for rapid Salmonella surveillance. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.

  19. Gold nanoparticle-based probes for the colorimetric detection of Mycobacterium avium subspecies paratuberculosis DNA.

    PubMed

    Ganareal, Thenor Aristotile Charles S; Balbin, Michelle M; Monserate, Juvy J; Salazar, Joel R; Mingala, Claro N

    2018-02-12

    Gold nanoparticles (AuNPs) are considered the most stable metal nanoparticles and have the ability to be functionalized with biomolecules. Recently, AuNP-based DNA detection methods have captured the interest of researchers worldwide. Paratuberculosis, or Johne's disease, a chronic gastroenteritis in ruminants caused by Mycobacterium avium subsp. paratuberculosis (MAP), has a negative effect on the livestock industry. In this study, AuNP-based probes were evaluated for the specific and sensitive detection of MAP DNA. An AuNP-based probe was produced by functionalization of AuNPs with a thiol-modified oligonucleotide and was confirmed by Fourier-transform infrared (FTIR) spectroscopy. UV-Vis spectroscopy and scanning electron microscopy (SEM) were used to characterize the AuNPs. DNA detection was done by hybridization of 10 μL of DNA with 5 μL of probe at 63 °C for 10 min and the addition of 3 μL of salt solution. The method was specific to MAP with a detection limit of 103 ng. UV-Vis and SEM showed dispersion and aggregation of the AuNPs for the positive and negative results, respectively, with no observed particle growth. This study therefore reports AuNP-based probes that can be used for the specific and sensitive detection of MAP DNA. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. PNA-COMBO-FISH: From combinatorial probe design in silico to vitality compatible, specific labelling of gene targets in cell nuclei.

    PubMed

    Müller, Patrick; Rößler, Jens; Schwarz-Finsterle, Jutta; Schmitt, Eberhard; Hausmann, Michael

    2016-07-01

    Recently, PCR-constructed oligonucleotide FISH probes have been shown to offer advantages in targeting specificity over established FISH probes, e.g., BAC clones. These techniques, however, still use labelling protocols with DNA-denaturing steps that apply harsh heat treatment, with or without further denaturing chemical agents. COMBO-FISH (COMBinatorial Oligonucleotide FISH) allows the design of specific oligonucleotide probe combinations in silico. Thus, being independent of primer libraries and PCR laboratory conditions, the probe sequences extracted by computer sequence database search can also be synthesized as single-stranded PNA probes (peptide nucleic acid probes) or TINA-DNA (twisted intercalating nucleic acids). Gene targets can be specifically labelled with at least about 20 probes, yielding visibly background-free specimens. By using appropriately designed triplex-forming oligonucleotides, the denaturing procedures can be omitted completely. These results reveal a significant step towards oligonucleotide FISH that maintains the 3D nanostructure and even the viability of the cell target. The method is demonstrated with the detection of the Her2/neu and GRB7 genes, which are indicators in breast cancer diagnosis and therapy. Copyright © 2016. Published by Elsevier Inc.

  1. A cancer cell-specific fluorescent probe for imaging Cu²⁺ in living cancer cells

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Dong, Baoli; Kong, Xiuqi; Song, Xuezhen; Zhang, Nan; Lin, Weiying

    2017-07-01

    Monitoring copper levels in cancer cells is important for further understanding its roles in cell proliferation, and could also afford novel copper-based strategies for cancer therapy. Herein, we have developed a novel cancer cell-specific fluorescent probe for detecting Cu²⁺ in living cancer cells. The probe employs biotin as the cancer cell-specific group. Before treatment with Cu²⁺, the probe shows nearly no fluorescence; however, it displays strong fluorescence at 581 nm in response to Cu²⁺. The probe exhibits excellent sensitivity and high selectivity for Cu²⁺ over other relevant species. Under the guidance of the biotin group, the probe could be successfully used for detecting Cu²⁺ in living cancer cells. We expect that this design strategy could be further applied to the detection of other important biomolecules in living cancer cells.

  2. Method and system for fiber optic determination of gas concentrations in liquid receptacles

    NASA Technical Reports Server (NTRS)

    Nguyen, Quang-Viet (Inventor)

    2008-01-01

    A system for determining gas compositions includes a probe, inserted into a source of gaseous material, the probe having a gas-permeable sensor tip and being capable of sending and receiving light to and from the gaseous material; a sensor body, connected to the probe and situated outside of the source; and a fiber bundle, connected to the sensor body and communicating light to and from the probe. The system also includes a laser source, connected to one portion of the fiber bundle, providing laser light to the fiber bundle and the probe; a Raman spectrograph, connected to another portion of the fiber bundle, receiving light from the probe and filtering the received light into specific channels; and a data processing unit, receiving and analyzing the received light in the specific channels and outputting the concentrations of specific gas species in the gaseous material based on the analyzed received light.

  3. Probing the space of toric quiver theories

    NASA Astrophysics Data System (ADS)

    Hewlett, Joseph; He, Yang-Hui

    2010-03-01

    We demonstrate a practical and efficient method for generating toric Calabi-Yau quiver theories, applicable to both D3 and M2 brane world-volume physics. A new analytic method is presented at low orders of the parameters, and an algorithm for the general case is developed which has polynomial complexity in the number of edges in the quiver. Using this algorithm, carefully implemented, we classify the quiver diagrams and assign possible superpotentials for various small values of the numbers of edges and nodes. We examine some preliminary statistics on this space of toric quiver theories.

  4. Computer-Based Instruction for Purchasing Skills

    ERIC Educational Resources Information Center

    Ayres, Kevin M.; Langone, John; Boon, Richard T.; Norman, Audrey

    2006-01-01

    The purpose of this study was to investigate use of computers and video technologies to teach students to correctly make purchases in a community grocery store using the dollar plus purchasing strategy. Four middle school students diagnosed with intellectual disabilities participated in this study. A multiple probe across participants research…

  5. Intraoperative magnetic tracker calibration using a magneto-optic hybrid tracker for 3-D ultrasound-based navigation in laparoscopic surgery.

    PubMed

    Nakamoto, Masahiko; Nakada, Kazuhisa; Sato, Yoshinobu; Konishi, Kozo; Hashizume, Makoto; Tamura, Shinichi

    2008-02-01

    This paper describes a 3-D ultrasound (3-D US) system that aims to achieve augmented reality (AR) visualization during laparoscopic surgery, especially of the liver. To acquire 3-D US data of the liver, the tip of a laparoscopic ultrasound probe is tracked inside the abdominal cavity using a magnetic tracker. The accuracy of magnetic trackers, however, is greatly affected by magnetic field distortion that results from the close proximity of metal objects and electronic equipment, which is usually unavoidable in the operating room. In this paper, we describe a calibration method for intraoperative magnetic distortion that can be applied to laparoscopic 3-D US data acquisition, and we evaluate the accuracy and feasibility of the method in in vitro and in vivo experiments. Although calibration data can be acquired freehand using a magneto-optic hybrid tracker, there are two problems associated with this method: error caused by the time delay between measurements of the optical and magnetic trackers, and instability of the calibration accuracy resulting from the uniformity and density of the calibration data. A temporal calibration procedure is developed to estimate the time delay, which is then integrated into the calibration, and a distortion model is formulated by zeroth-degree to fourth-degree polynomial fitting to the calibration data. In the in vivo experiment using a pig, the positional error caused by magnetic distortion was reduced from 44.1 to 2.9 mm. The standard deviation of corrected target positions was less than 1.0 mm. Freehand acquisition of calibration data was performed smoothly using a magneto-optic hybrid sampling tool through a trocar under guidance by real-time 3-D monitoring of the tool trajectory; data acquisition time was less than 2 min. The present study suggests that our proposed method can correct for magnetic field distortion inside the patient's abdomen during a laparoscopic procedure within a clinically permissible period of time, while enabling an accurate 3-D US reconstruction that can be superimposed onto live endoscopic images.
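
    A sketch of the core calibration idea (a least-squares polynomial map from distorted magnetic-tracker readings to optically tracked reference positions; the degree matches the zeroth-to-fourth-degree fitting mentioned above, while the data and the distortion field are synthetic placeholders):

    ```python
    import numpy as np
    from itertools import combinations_with_replacement

    def poly_features(P, degree):
        """Polynomial feature expansion of 3-D positions up to a given degree."""
        cols = [np.ones(len(P))]
        for d in range(1, degree + 1):
            for idx in combinations_with_replacement(range(3), d):
                cols.append(np.prod(P[:, idx], axis=1))
        return np.column_stack(cols)

    def fit_distortion_model(magnetic, optical, degree=4):
        """Least-squares polynomial map from magnetic readings to true positions."""
        A = poly_features(magnetic, degree)
        coeffs, *_ = np.linalg.lstsq(A, optical, rcond=None)
        return coeffs

    def correct(magnetic, coeffs, degree=4):
        return poly_features(magnetic, degree) @ coeffs

    # Synthetic demo: a smooth distortion field applied to true positions (mm).
    rng = np.random.default_rng(3)
    true = rng.uniform(-100, 100, (400, 3))
    distorted = true + 0.002 * true**2 - 1e-5 * true**3   # fake distortion
    coeffs = fit_distortion_model(distorted, true)
    err = np.linalg.norm(correct(distorted, coeffs) - true, axis=1)
    print("mean residual error (mm):", err.mean())
    ```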

  6. Detection of iron-depositing Pedomicrobium species in native biofilms from the Odertal National Park by a new, specific FISH probe.

    PubMed

    Braun, Burga; Richert, Inga; Szewzyk, Ulrich

    2009-10-01

    Iron-depositing bacteria play an important role in technical water systems (water wells, distribution systems) due to their intense deposition of iron oxides and the resulting clogging effects. Pedomicrobium is known as an iron- and manganese-oxidizing and -accumulating bacterium. The ability to detect and quantify members of this genus in biofilm communities is therefore desirable. In this study the fluorescence in situ hybridization (FISH) method was used to detect Pedomicrobium in iron- and manganese-incrusted biofilms. Based on comparative sequence analysis, we designed and evaluated a specific oligonucleotide probe (Pedo_1250) complementary to the hypervariable region 8 of the 16S rRNA gene of Pedomicrobium. Probe specificity was tested against three different strains of Pedomicrobium, with Sphingobium yanoikuyae as a non-target organism. Using optimized conditions, the probe hybridized with all tested strains of Pedomicrobium with an efficiency of 80%. The non-target organism showed no hybridization signals. The new FISH probe was applied successfully for the in situ detection of Pedomicrobium in different native, iron-depositing biofilms. The hybridization results for native biofilms using probe Pedo_1250 agreed with the morphological structure of Pedomicrobium biofilms as determined by scanning electron microscopy.

  7. Colorful protein-based fluorescent probes for collagen imaging.

    PubMed

    Aper, Stijn J A; van Spreeuwel, Ariane C C; van Turnhout, Mark C; van der Linden, Ardjan J; Pieters, Pascal A; van der Zon, Nick L L; de la Rambelje, Sander L; Bouten, Carlijn V C; Merkx, Maarten

    2014-01-01

    Real-time visualization of collagen is important in studies on tissue formation and remodeling in the research fields of developmental biology and tissue engineering. Our group has previously reported a fluorescent probe for the specific imaging of collagen in live tissue in situ, consisting of the native collagen-binding protein CNA35 labeled with the fluorescent dye Oregon Green 488 (CNA35-OG488). The CNA35-OG488 probe has become widely used for collagen imaging. To allow the use of CNA35-based probes in a broader range of applications, we here present a toolbox of six genetically encoded collagen probes, which are fusions of CNA35 to fluorescent proteins that span the visible spectrum: mTurquoise2, EGFP, mAmetrine, LSSmOrange, tdTomato and mCherry. While CNA35-OG488 requires a chemical conjugation step for labeling with the fluorescent dye, these protein-based probes can easily be produced in high yields by expression in E. coli and purified in one step using Ni²⁺-affinity chromatography. The probes all bind specifically to collagen, both in vitro and in porcine pericardial tissue. First applications of the probes are shown in multicolor imaging of engineered tissue and two-photon imaging of collagen in human skin. The fully genetic encoding of the new probes makes them easily accessible to all scientists interested in collagen formation and remodeling.

  8. A new surrogate modeling technique combining Kriging and polynomial chaos expansions – Application to uncertainty analysis in computational dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kersaudy, Pierric, E-mail: pierric.kersaudy@orange.com; Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux; ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée

    2015-04-01

    In numerical dosimetry, the recent advances in high performance computing led to a strong reduction of the required computational time to assess the specific absorption rate (SAR) characterizing the human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can request several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. The leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performances of the LARS-Kriging-PC are compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. The LARS-Kriging-PC appears to have better performances than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.
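
    A sketch of the basis-selection step: least-angle regression retains the most influential polynomial chaos terms, which would then serve as the regression functions of a universal Kriging model. Only the selection stage is shown here, using probabilists' Hermite polynomials and scikit-learn's Lars on a synthetic model (an illustration of the idea, not the authors' implementation):

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermeval
    from itertools import product
    from sklearn.linear_model import Lars

    def pc_basis(X, max_degree):
        """Multivariate Hermite polynomial chaos basis up to a total degree."""
        n, dim = X.shape
        terms = [a for a in product(range(max_degree + 1), repeat=dim)
                 if sum(a) <= max_degree]
        Phi = np.ones((n, len(terms)))
        for j, alpha in enumerate(terms):
            for k, d in enumerate(alpha):
                if d > 0:
                    # He_d(X[:, k]): the coefficient vector selects degree d only.
                    Phi[:, j] *= hermeval(X[:, k], [0] * d + [1])
        return Phi, terms

    # Synthetic "simulator" with standard-normal inputs.
    rng = np.random.default_rng(4)
    X = rng.standard_normal((200, 3))
    y = 1.0 + 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 - 0.3 * X[:, 0] * X[:, 2]

    Phi, terms = pc_basis(X, max_degree=3)
    lars = Lars(n_nonzero_coefs=8).fit(Phi, y)
    active = [terms[j] for j in np.flatnonzero(lars.coef_)]
    print("retained multi-indices:", active)   # the sparse PC basis for Kriging
    ```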

  9. Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers

    NASA Astrophysics Data System (ADS)

    Assaleh, Khaled; Al-Rousan, M.

    2005-12-01

    Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of the Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers: they do not require iterative training, and they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL recognition system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant; in particular, we achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
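
    At its core, a polynomial classifier expands each feature vector with monomials up to a chosen degree and solves a linear model in closed form, which is why no iterative training is needed. A minimal sketch (the one-vs-all least-squares formulation and the degree are standard choices, not necessarily the paper's exact configuration):

    ```python
    import numpy as np
    from itertools import combinations_with_replacement

    def poly_expand(X, degree=2):
        """Expand features with all monomials up to the given degree."""
        cols = [np.ones(len(X))]
        for d in range(1, degree + 1):
            for idx in combinations_with_replacement(range(X.shape[1]), d):
                cols.append(np.prod(X[:, idx], axis=1))
        return np.column_stack(cols)

    class PolynomialClassifier:
        """One-vs-all least-squares classifier on polynomial features."""

        def fit(self, X, y, degree=2, ridge=1e-6):
            self.degree = degree
            Phi = poly_expand(X, degree)
            T = np.eye(y.max() + 1)[y]               # one-hot targets
            A = Phi.T @ Phi + ridge * np.eye(Phi.shape[1])
            self.W = np.linalg.solve(A, Phi.T @ T)   # closed form, no iterations
            return self

        def predict(self, X):
            return (poly_expand(X, self.degree) @ self.W).argmax(axis=1)

    # Tiny synthetic demo with 3 classes.
    rng = np.random.default_rng(5)
    X = rng.standard_normal((300, 4))
    y = (X[:, 0] ** 2 + X[:, 1] > 1).astype(int) + (X[:, 2] > 1).astype(int)
    clf = PolynomialClassifier().fit(X, y)
    print("training accuracy:", (clf.predict(X) == y).mean())
    ```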

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sevast'yanov, E A; Sadekova, E Kh

    The Bulgarian mathematicians Sendov, Popov, and Boyanov have well-known results on the asymptotic behaviour of the least deviations of 2π-periodic functions in the classes H^ω from trigonometric polynomials in the Hausdorff metric. However, the asymptotics they give are not adequate to detect a difference in, for example, the rate of approximation of functions f whose moduli of continuity ω(f; δ) differ by factors of the form (log(1/δ))^β. Furthermore, a more detailed determination of the asymptotic behaviour by traditional methods becomes very difficult. This paper develops an approach based on using trigonometric snakes as approximating polynomials. The snakes of order n inscribed in the Minkowski δ-neighbourhood of the graph of the approximated function f provide, in a number of cases, the best approximation for f (for the appropriate choice of δ). The choice of δ depends on n and f and is based on constructing polynomial kernels adjusted to the Hausdorff metric and polynomials with special oscillatory properties. Bibliography: 19 titles.

  11. Specificity tests of an oligonucleotide probe against food-outbreak salmonella for biosensor detection

    NASA Astrophysics Data System (ADS)

    Chen, I.-H.; Horikawa, S.; Xi, J.; Wikle, H. C.; Barbaree, J. M.; Chin, B. A.

    2017-05-01

    Phage-based magnetoelastic (ME) biosensors have been shown to rapidly detect Salmonella in various food systems, serving food-pathogen monitoring purposes. In this ME biosensor platform, the free-standing, strip-shaped magnetoelastic sensor is the transducer, and the phage probe that recognizes Salmonella in food serves as the bio-recognition element. Sorokulova et al. (2005) reported a developed oligonucleotide probe, E2, with high specificity to Salmonella enterica Typhimurium; in that report, the specificity tests focused mostly on Enterobacteriaceae groups outside the Salmonella family. Here, to understand the specificity of phage E2 toward different Salmonella enterica serotypes within the Salmonella family, we further tested the specificity of the phage probe against thirty-two Salmonella serotypes present in major foodborne outbreaks during the past ten years (according to the Centers for Disease Control and Prevention). The tests were conducted in an enzyme-linked immunosorbent assay (ELISA) format. This assay mimics the probe-immobilized conditions on the magnetoelastic biosensor platform and also enables study of the binding specificity of oligonucleotide probes toward different Salmonella while avoiding phage/sensor lot variations. The test results confirmed that oligonucleotide probe E2 is highly specific to Salmonella Typhimurium cells but showed cross-reactivity to Salmonella Tennessee and four other serotypes among the thirty-two tested Salmonella serotypes.

  12. Correcting nonlinear drift distortion of scanning probe and scanning transmission electron microscopies from image pairs with orthogonal scan directions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ophus, Colin; Ciston, Jim; Nelson, Chris T.

    Unwanted motion of the probe with respect to the sample is a ubiquitous problem in scanning probe and scanning transmission electron microscopies, causing both linear and nonlinear artifacts in experimental images. We have designed a procedure to correct these artifacts by using orthogonal scan pairs to align each measurement line-by-line along the slow scan direction, by fitting contrast variation along the lines. We demonstrate the accuracy of our algorithm on both synthetic and experimental data and provide an implementation of our method.

  13. Correcting nonlinear drift distortion of scanning probe and scanning transmission electron microscopies from image pairs with orthogonal scan directions

    DOE PAGES

    Ophus, Colin; Ciston, Jim; Nelson, Chris T.

    2015-12-10

    Unwanted motion of the probe with respect to the sample is a ubiquitous problem in scanning probe and scanning transmission electron microscopies, causing both linear and nonlinear artifacts in experimental images. We have designed a procedure to correct these artifacts by using orthogonal scan pairs to align each measurement line-by-line along the slow scan direction, by fitting contrast variation along the lines. We demonstrate the accuracy of our algorithm on both synthetic and experimental data and provide an implementation of our method.

  14. Clearing the waters: Evaluating the need for site-specific field fluorescence corrections based on turbidity measurements

    USGS Publications Warehouse

    Saraceno, John F.; Shanley, James B.; Downing, Bryan D.; Pellerin, Brian A.

    2017-01-01

    In situ fluorescent dissolved organic matter (fDOM) measurements have gained increasing popularity as a proxy for dissolved organic carbon (DOC) concentrations in streams. One challenge to accurate fDOM measurements in many streams is light attenuation due to suspended particles. Downing et al. (2012) evaluated the need for corrections to compensate for particle interference on fDOM measurements using a single sediment standard in a laboratory study. The application of those results to a large river improved unfiltered field fDOM accuracy. We tested the same correction equation in a headwater tropical stream and found that it overcompensated fDOM when turbidity exceeded ∼300 formazin nephelometric units (FNU). Therefore, we developed a site-specific, field-based fDOM correction equation through paired in situ fDOM measurements of filtered and unfiltered streamwater. The site-specific correction increased fDOM accuracy up to a turbidity as high as 700 FNU, the maximum observed in this study. The difference in performance between the laboratory-based correction equation of Downing et al. (2012) and our site-specific, field-based correction equation likely arises from differences in particle size distribution between the sediment standard used in the lab (silt) and that observed in our study (fine to medium sand), particularly during high flows. Therefore, a particle interference correction equation based on a single sediment type may not be ideal when field sediment size is significantly different. Given that field fDOM corrections for particle interference under turbid conditions are a critical component in generating accurate DOC estimates, we describe a way to develop site-specific corrections.
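
    A sketch of deriving a site-specific correction from paired filtered/unfiltered fDOM measurements (the exponential attenuation form and all numbers are illustrative placeholders; the study's actual fitted equation is not reproduced here):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Model: unfiltered fDOM is the filtered ("true") signal attenuated by
    # suspended particles; exponential decay in turbidity is one common form.
    def attenuation(turbidity, k):
        return np.exp(-k * turbidity)

    # Paired field data: turbidity (FNU), unfiltered and filtered fDOM (QSU).
    # Values are synthetic placeholders for illustration.
    turb = np.array([5, 40, 120, 300, 500, 700], dtype=float)
    fdom_unfilt = np.array([48, 44, 36, 25, 17, 12], dtype=float)
    fdom_filt = np.array([50, 49, 51, 50, 48, 49], dtype=float)

    # Fit the attenuation coefficient from the observed ratio.
    ratio = fdom_unfilt / fdom_filt
    (k,), _ = curve_fit(attenuation, turb, ratio, p0=[1e-3])

    def correct_fdom(fdom_raw, turbidity):
        """Apply the site-specific particle-interference correction."""
        return fdom_raw / attenuation(turbidity, k)

    print(f"k = {k:.4f} per FNU; corrected at 700 FNU:",
          round(correct_fdom(12.0, 700.0), 1))
    ```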

  15. Method for reducing measurement errors of a Langmuir probe with a protective RF shield

    NASA Astrophysics Data System (ADS)

    Riaby, V.; Masherov, P.; Savinov, V.; Yakunin, V.

    2018-04-01

    Probe measurements were conducted in the middle cross-section of an inductive, low-pressure xenon plasma using a straight cylindrical Langmuir probe with a bare metal shield that protected the probe from radio frequency interference. As a result, reliable radial distributions of the plasma parameters were obtained. Subsequent analyses of these measurements revealed that the electron energy distribution function (EEDF) deviated substantially from Maxwellian and that this deviation depended on the length of the probe shield. To evaluate the shield's influence on the measurement results, in addition to the first probe (which was moved radially as its shield length l_sh1 was varied from l_max down to 0), an additional L-shaped probe was inserted at a different location. This probe was moved differently from the first probe and provided confirmational measurements at the common special position where l_sh1 = 0 and l_sh2 ≠ 0. In this position, the second shield decreased all the plasma parameters. A comparison of the probe datasets identified the principles of the relationships between measurement errors and EEDF distortions caused by bare probe shields. This dependence was used to correct the measurements performed using the first probe by eliminating the influence of its shield. Physical analyses based on earlier studies showed that these peculiarities are caused by a short-circuited double-probe effect that occurs in bare metal probe protective shields.

  16. Design and verification of a pangenome microarray oligonucleotide probe set for Dehalococcoides spp.

    PubMed

    Hug, Laura A; Salehi, Maryam; Nuin, Paulo; Tillier, Elisabeth R; Edwards, Elizabeth A

    2011-08-01

    Dehalococcoides spp. are an industrially relevant group of Chloroflexi bacteria capable of reductively dechlorinating contaminants in groundwater environments. Existing Dehalococcoides genomes revealed a high level of sequence identity within this group, including 98 to 100% 16S rRNA sequence identity between strains with diverse substrate specificities. Common molecular techniques for identification of microbial populations are often not applicable for distinguishing Dehalococcoides strains. Here we describe an oligonucleotide microarray probe set designed based on clustered Dehalococcoides genes from five different sources (strain DET195, CBDB1, BAV1, and VS genomes and the KB-1 metagenome). This "pangenome" probe set provides coverage of core Dehalococcoides genes as well as strain-specific genes while optimizing the potential for hybridization to closely related, previously unknown Dehalococcoides strains. The pangenome probe set was compared to probe sets designed independently for each of the five Dehalococcoides strains. The pangenome probe set demonstrated better predictability and higher detection of Dehalococcoides genes than strain-specific probe sets on nontarget strains with <99% average nucleotide identity. An in silico analysis of the expected probe hybridization against the recently released Dehalococcoides strain GT genome and additional KB-1 metagenome sequence data indicated that the pangenome probe set performs more robustly than the combined strain-specific probe sets in the detection of genes not included in the original design. The pangenome probe set represents a highly specific, universal tool for the detection and characterization of Dehalococcoides from contaminated sites. It has the potential to become a common platform for Dehalococcoides-focused research, allowing meaningful comparisons between microarray experiments regardless of the strain examined.

  17. Quantitative Measurement of Protease-Activity with Correction of Probe Delivery and Tissue Absorption Effects

    PubMed Central

    Salthouse, Christopher D.; Reynolds, Fred; Tam, Jenny M.; Josephson, Lee; Mahmood, Umar

    2009-01-01

    Proteases play important roles in a variety of pathologies from heart disease to cancer. Quantitative measurement of protease activity is possible using a novel spectrally matched dual fluorophore probe and a small animal lifetime imager. The recorded fluorescence from an activatable fluorophore, one that changes its fluorescent amplitude after biological target interaction, is also influenced by other factors including imaging probe delivery and optical tissue absorption of excitation and emission light. Fluorescence from a second spectrally matched constant (non-activatable) fluorophore on each nanoparticle platform can be used to correct for both probe delivery and tissue absorption. The fluorescence from each fluorophore is separated using fluorescence lifetime methods. PMID:20161242

  18. Challenges and Opportunities for Small-Molecule Fluorescent Probes in Redox Biology Applications.

    PubMed

    Jiang, Xiqian; Wang, Lingfei; Carroll, Shaina L; Chen, Jianwei; Wang, Meng C; Wang, Jin

    2018-02-16

    The concentrations of reactive oxygen/nitrogen species (ROS/RNS) are critical to various biochemical processes. Small-molecule fluorescent probes have been widely used to detect and/or quantify ROS/RNS in many redox biology studies and serve as an important complement to protein-based sensors, with unique applications. Recent Advances: New sensing reactions have emerged in probe development, allowing more selective and quantitative detection of ROS/RNS, especially in live cells. Improvements have been made in sensing reactions, fluorophores, and the bioavailability of probe molecules. In this review, we not only summarize redox-related small-molecule fluorescent probes but also lay out the challenges of designing probes to help redox biologists independently evaluate the quality of reported small-molecule fluorescent probes, especially in the chemistry literature. We specifically highlight the advantages of reversibility in sensing reactions and its applications in ratiometric probe design for quantitative measurements in living cells. In addition, we compare the advantages and disadvantages of small-molecule probes and protein-based probes. The low physiologically relevant concentrations of most ROS/RNS call for new sensing reactions with better selectivity, kinetics, and reversibility; fluorophores with high quantum yield, wide wavelength coverage, and large Stokes shifts; and structural designs with good aqueous solubility, membrane permeability, low protein interference, and organelle specificity. Antioxid. Redox Signal. 00, 000-000.

  19. Less haste, less waste: on recycling and its limits in strand displacement systems

    PubMed Central

    Condon, Anne; Hu, Alan J.; Maňuch, Ján; Thachuk, Chris

    2012-01-01

    We study the potential for molecule recycling in chemical reaction systems and their DNA strand displacement realizations. Recycling happens when a product of one reaction is a reactant in a later reaction. Recycling has the benefits of reducing consumption, or waste, of molecules and of avoiding fuel depletion. We present a binary counter that recycles molecules efficiently while incurring just a moderate slowdown compared with alternative counters that do not recycle strands. This counter is an n-bit binary reflecting Gray code counter that advances through 2n states. In the strand displacement realization of this counter, the waste—total number of nucleotides of the DNA strands consumed—is polynomial in n, the number of bits of the counter, while the waste of alternative counters grows exponentially in n. We also show that our n-bit counter fails to work correctly when many (Θ(n)) copies of the species that represent the bits of the counter are present initially. The proof applies more generally to show that in chemical reaction systems where all but one reactant of each reaction are catalysts, computations longer than a polynomial function of the size of the system are not possible when there are polynomially many copies of the system present. PMID:22649584
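
    For reference, the n-bit binary reflecting Gray code that such a counter advances through can be generated directly; successive states differ in exactly one bit, which is the property that bounds the work per increment. A small standalone sketch:

    ```python
    def gray_code(n):
        """Yield the 2**n states of the n-bit binary reflecting Gray code."""
        for i in range(2 ** n):
            yield i ^ (i >> 1)   # standard binary-to-Gray conversion

    # Each state differs from its predecessor in exactly one bit.
    states = list(gray_code(3))
    print([format(s, "03b") for s in states])
    assert all(bin(a ^ b).count("1") == 1
               for a, b in zip(states, states[1:]))
    ```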

  20. Computing Gröbner Bases within Linear Algebra

    NASA Astrophysics Data System (ADS)

    Suzuki, Akira

    In this paper, we present an alternative algorithm to compute Gröbner bases, based on sparse linear algebra computations. Both S-polynomial computations and monomial reductions are carried out simultaneously within the linear algebra in this algorithm, so it can be implemented on any computational system that can handle linear algebra. For a given ideal in a polynomial ring, it calculates a Gröbner basis together with an appropriate corresponding term order.
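
    A toy illustration of the linear-algebra view: rows of a Macaulay-style matrix are monomial multiples of the generators over a monomial basis, and a single reduced row echelon form performs the S-polynomial computations and monomial reductions simultaneously. The example ideal and the degree bound are illustrative, and this is a simplification rather than the paper's algorithm:

    ```python
    from itertools import product
    import sympy as sp

    x, y = sp.symbols("x y")
    F = [x**2 + y, x*y - 1]   # example generators
    D = 3                     # total-degree bound for the Macaulay matrix

    # Monomials of total degree <= D in graded-lex order (largest first).
    monos = sorted([x**i * y**j for i in range(D + 1) for j in range(D + 1 - i)],
                   key=lambda m: (sp.total_degree(m, x, y), sp.degree(m, x)),
                   reverse=True)

    # Rows of the Macaulay matrix: every multiple m*f staying within degree D.
    rows = []
    for f in F:
        for m in monos:
            if sp.total_degree(m, x, y) + sp.total_degree(f, x, y) <= D:
                p = sp.Poly(sp.expand(m * f), x, y)
                rows.append([p.coeff_monomial(mm) for mm in monos])

    # One RREF reduces all of these polynomials against each other at once.
    R, pivots = sp.Matrix(rows).rref()
    for r in range(len(pivots)):
        print(sum(R[r, c] * monos[c] for c in range(len(monos))))
    ```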

  1. A benzothiazole-based fluorescent probe for hypochlorous acid detection and imaging in living cells

    NASA Astrophysics Data System (ADS)

    Nguyen, Khac Hong; Hao, Yuanqiang; Zeng, Ke; Fan, Shengnan; Li, Fen; Yuan, Suke; Ding, Xuejing; Xu, Maotian; Liu, You-Nian

    2018-06-01

    A benzothiazole-based turn-on fluorescent probe with a large Stokes shift (190 nm) has been developed for hypochlorous acid detection. The probe displays a prompt fluorescence response to HClO, with excellent selectivity over other reactive oxygen species as well as a low detection limit of 0.08 μM. The sensing mechanism involves the HClO-induced specific oxidation of the oxime moiety of the probe to a nitrile oxide, which was confirmed by HPLC-MS. Furthermore, imaging studies demonstrated that the probe is cell permeable and can be applied to detect HClO in living cells.

  2. New class of photonic quantum error correction codes

    NASA Astrophysics Data System (ADS)

    Silveri, Matti; Michael, Marios; Brierley, R. T.; Salmilehto, Juha; Albert, Victor V.; Jiang, Liang; Girvin, S. M.

    We present a new class of quantum error correction codes for applications in quantum memories, communication and scalable computation. These codes are constructed from a finite superposition of Fock states and can exactly correct errors that are polynomial up to a specified degree in creation and destruction operators. Equivalently, they can perform approximate quantum error correction to any given order in time step for the continuous-time dissipative evolution under these errors. The codes are related to two-mode photonic codes but offer the advantage of requiring only a single photon mode to correct loss (amplitude damping), as well as the ability to correct other errors, e.g. dephasing. Our codes are also similar in spirit to photonic ''cat codes'' but have several advantages including smaller mean occupation number and exact rather than approximate orthogonality of the code words. We analyze how the rate of uncorrectable errors scales with the code complexity and discuss the unitary control for the recovery process. These codes are realizable with current superconducting qubit technology and can increase the fidelity of photonic quantum communication and memories.
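
    As a concrete illustration, the simplest code words of this binomial type, built from a finite superposition of Fock states (the standard smallest example protecting against a single photon loss):

    ```latex
    % Smallest binomial code: logical words from a finite Fock superposition.
    \[
      |W_{\uparrow}\rangle = \frac{|0\rangle + |4\rangle}{\sqrt{2}},
      \qquad
      |W_{\downarrow}\rangle = |2\rangle .
    \]
    % A single photon loss maps the code space onto the orthogonal error space
    % spanned by |3> and |1>, so the loss event is detectable and correctable.
    ```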

  3. Beam hardening correction in CT myocardial perfusion measurement

    NASA Astrophysics Data System (ADS)

    So, Aaron; Hsieh, Jiang; Li, Jian-Ying; Lee, Ting-Yim

    2009-05-01

    This paper presents a method for correcting beam hardening (BH) in cardiac CT perfusion imaging. The proposed algorithm works with reconstructed images instead of projection data. It applies thresholds to separate low (soft tissue) and high (bone and contrast) attenuating material in a CT image. The BH error in each projection is estimated by a polynomial function of the forward projection of the segmented image. The error image is reconstructed by back-projection of the estimated errors. A BH-corrected image is then obtained by subtracting a scaled error image from the original image. Phantoms were designed to simulate the BH artifacts encountered in cardiac CT perfusion studies of humans and animals that are most commonly used in cardiac research. These phantoms were used to investigate whether BH artifacts can be reduced with our approach and to determine the optimal settings, which depend upon the anatomy of the scanned subject, of the correction algorithm for patient and animal studies. The correction algorithm was also applied to correct BH in a clinical study to further demonstrate the effectiveness of our technique.
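
    A sketch of the image-domain workflow described above: segment the high-attenuation material, forward-project it, estimate the BH error with a polynomial of that projection, reconstruct the error image, and subtract a scaled version. The threshold, polynomial coefficients, and scale are placeholders that in practice would be tuned on phantoms:

    ```python
    import numpy as np
    from skimage.transform import radon, iradon

    def bh_correct(img, hi_threshold=0.5, c2=0.02, c3=0.004, scale=1.0):
        """Image-domain beam-hardening correction, after the described algorithm.

        c2, c3 and scale are placeholder calibration constants; they depend on
        the anatomy of the scanned subject and the scanner.
        """
        theta = np.linspace(0.0, 180.0, max(img.shape), endpoint=False)

        # 1. Segment the highly attenuating material (bone and contrast).
        hi = np.where(img > hi_threshold, img, 0.0)

        # 2. Forward-project the segmented image.
        p_hi = radon(hi, theta=theta)

        # 3. Estimate the BH error in each projection with a polynomial.
        p_err = c2 * p_hi**2 + c3 * p_hi**3

        # 4. Reconstruct the error image and subtract a scaled version.
        err_img = iradon(p_err, theta=theta, filter_name="ramp")
        return img - scale * err_img

    # Demo on a simple disk phantom with a dense insert.
    yy, xx = np.mgrid[-64:64, -64:64]
    phantom = 0.2 * (xx**2 + yy**2 < 60**2) + 0.8 * ((xx - 20)**2 + yy**2 < 10**2)
    corrected = bh_correct(phantom)
    print(corrected.shape)
    ```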

  4. Aligning the magnetic field of a linear induction accelerator with a low-energy electron beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, J.C.; Deadrick, F.J.; Kallman, J.S.

    1989-03-10

    The Experimental Test Accelerator II (ETA-II) linear induction accelerator at Lawrence Livermore National Laboratory uses a solenoid magnet in each acceleration cell to focus and transport an electron beam over the length of the accelerator. To control growth of the corkscrew mode, the magnetic field must be precisely aligned over the full length of the accelerator. Concentric with each solenoid magnet is a sine/cosine-wound correction coil to steer the beam and correct field errors. A low-energy electron probe traces the central flux line through the accelerator, referenced to a mechanical axis defined by a copropagating laser beam. Correction coils are activated to force the central flux line to cross the mechanical axis at the end of each acceleration cell. The ratios of correction coil currents determined by the low-energy electron probe are then kept fixed to correct for field errors during normal operation with an accelerated beam. We describe the construction of the low-energy electron probe and report the results of experiments we conducted to measure magnetic alignment with and without the correction coils activated. 5 refs., 3 figs.

  5. Large Area MEMS Based Ultrasound Device for Cancer Detection.

    PubMed

    Wodnicki, Robert; Thomenius, Kai; Hooi, Fong Ming; Sinha, Sumedha P; Carson, Paul L; Lin, Der-Song; Zhuang, Xuefeng; Khuri-Yakub, Pierre; Woychik, Charles

    2011-08-21

    We present image results obtained using a prototype ultrasound array which demonstrates the fundamental architecture for a large-area MEMS-based ultrasound device for the detection of breast cancer. The prototype array consists of a tiling of capacitive micro-machined ultrasound transducers (cMUTs) which have been flip-chip attached to a rigid organic substrate. The pitch of the cMUT elements is 185 µm and the operating frequency is nominally 9 MHz. The spatial resolution of the new probe is comparable to that of production PZT probes; however, the sensitivity is reduced by conditions that should be correctable. Simulated opposed-view image registration and speed-of-sound volume reconstruction results for ultrasound in the mammographic geometry are also presented.

  6. Simulation of Shallow Water Jets with a Unified Element-based Continuous/Discontinuous Galerkin Model with Grid Flexibility on the Sphere

    DTIC Science & Technology

    2013-01-01

    …is the derivative of the Nth-order Legendre polynomial. Given these definitions, the one-dimensional Lagrange polynomials h_i(ξ) are h_i(ξ) = −(1/(N(N+1))) … [Figure: detail of one interface patch in the northern hemisphere; the high-order Legendre-Gauss-Lobatto (LGL) points are added to the linear grid.] … smaller ones by a Lagrange polynomial of order n_I. The number of quadrilateral elements and grid points of the final grid are then given by N_p = 6(N…
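
    The truncated formula above is the standard Lagrange cardinal basis on Legendre-Gauss-Lobatto nodes, h_i(ξ) = −(1/(N(N+1))) · (1−ξ²) P′_N(ξ) / ((ξ−ξ_i) P_N(ξ_i)). A small sketch computing the LGL nodes and verifying the cardinal property (reconstructed from the standard formula, not from the report itself):

    ```python
    import numpy as np
    from numpy.polynomial import legendre as L

    def lgl_nodes(N):
        """Legendre-Gauss-Lobatto nodes: the endpoints plus the roots of P_N'."""
        interior = L.Legendre.basis(N).deriv().roots()
        return np.concatenate(([-1.0], np.sort(interior.real), [1.0]))

    def lagrange_lgl(i, xi, nodes, N):
        """h_i(xi) = -(1/(N(N+1))) (1-xi^2) P_N'(xi) / ((xi-xi_i) P_N(xi_i))."""
        PN = L.Legendre.basis(N)
        dPN = PN.deriv()
        xi = np.asarray(xi, dtype=float)
        with np.errstate(divide="ignore", invalid="ignore"):
            out = (-(1.0 / (N * (N + 1))) * (1.0 - xi**2) * dPN(xi)
                   / ((xi - nodes[i]) * PN(nodes[i])))
        return np.where(np.isclose(xi, nodes[i]), 1.0, out)  # limit at own node

    N = 4
    nodes = lgl_nodes(N)
    H = np.array([lagrange_lgl(i, nodes, nodes, N) for i in range(N + 1)])
    print(np.round(H, 10))   # identity matrix: h_i(xi_j) = delta_ij
    ```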

  7. A "signal on" protection-displacement-hybridization-based electrochemical hepatitis B virus gene sequence sensor with high sensitivity and peculiar adjustable specificity.

    PubMed

    Li, Fengqin; Xu, Yanmei; Yu, Xiang; Yu, Zhigang; He, Xunjun; Ji, Hongrui; Dong, Jinghao; Song, Yongbin; Yan, Hong; Zhang, Guiling

    2016-08-15

    One "signal on" electrochemical sensing strategy was constructed for the detection of a specific hepatitis B virus (HBV) gene sequence based on the protection-displacement-hybridization-based (PDHB) signaling mechanism. This sensing system is composed of three probes, one capturing probe (CP) and one assistant probe (AP) which are co-immobilized on the Au electrode surface, and one 3-methylene blue (MB) modified signaling probe (SP) free in the detection solution. One duplex are formed between AP and SP with the target, a specific HBV gene sequence, hybridizing with CP. This structure can drive the MB labels close to the electrode surface, thereby producing a large detection current. Two electrochemical testing techniques, alternating current voltammetry (ACV) and cyclic voltammetry (CV), were used for characterizing the sensor. Under the optimized conditions, the proposed sensor exhibits a high sensitivity with the detection limit of ∼5fM for the target. When used for the discrimination of point mutation, the sensor also features an outstanding ability and its peculiar high adjustability. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Assignment of xeroderma pigmentosum group C(XPC) gene to chromosome 3p25

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Legerski, R.J.; Liu, P.; Li, L.

    1994-05-01

    The human gene XPC (formerly designated XPCC), which corrects the repair deficiency of xeroderma pigmentosum (XP) group C cells, was mapped to 3p25. A cDNA probe for Southern blot hybridization and diagnostic PCR analyses of hybrid clone panels informative for human chromosomes in general, and portions of chromosome 3 in particular, produced the initial results. Fluorescence in situ hybridization utilizing both a yeast artificial chromosome DNA containing the gene and the XPC cDNA as probes provided verification and a specific regional assignment. A conflicting assignment of XPC to chromosome 5 is discussed in light of inadequacies in the exclusive use of microcell-mediated chromosome transfer for gene mapping. 12 refs., 3 figs.

  9. TU-A-12A-12: Improved Airway Measurement Accuracy for Low Dose Quantitative CT (qCT) Using Statistical (ASIR), at Reduced DFOV, and High Resolution Kernels in a Phantom and Swine Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yadava, G; Imai, Y; Hsieh, J

    2014-06-15

    Purpose: The quantitative accuracy of iodine Hounsfield units (HU) in conventional single-kVp scanning is susceptible to the beam-hardening effect. Dual-energy CT has unique capabilities for quantification using monochromatic CT images, but this scanning mode requires the availability of a state-of-the-art CT scanner and, therefore, is limited in routine clinical practice. The purpose of this work was to develop a beam-hardening correction (BHC) for single-kVp CT that can linearize iodine projections at any nominal energy, to apply this approach to study the iodine response with respect to keV, and to compare with dual-energy-based monochromatic images obtained from material decomposition using 80 kVp and 140 kVp. Methods: Tissue characterization phantoms (Gammex Inc.), containing solid-iodine inserts of different concentrations, were scanned using a GE multi-slice CT scanner at 80, 100, 120, and 140 kVp. A model-based BHC algorithm was developed in which iodine was estimated using re-projection of the image volume and corrected through an iterative process. In the correction, the re-projected iodine was linearized using a polynomial mapping between monochromatic path lengths at various nominal energies (40 to 140 keV) and physically modeled polychromatic path lengths. The beam-hardening-corrected 80 kVp and 140 kVp images (linearized approximately at the effective energy of the beam) were used for dual-energy material decomposition in the water-iodine basis pair, followed by generation of monochromatic images. Characterization of iodine HU and noise was carried out in the images obtained from single-kVp with BHC at various nominal keV and in the corresponding dual-energy monochromatic images. Results: The iodine HU vs. keV responses from single-kVp-with-BHC and dual-energy monochromatic images were found to be very similar, indicating that single-kVp data may be used to create material-specific monochromatic equivalents using model-based projection linearization. Conclusion: This approach may enable quantification of iodine contrast enhancement and a potential reduction in injected contrast without dual-energy scanning. However, in general, dual-energy scanning has unique value in material characterization and quantification, and its value cannot be discounted. GE Healthcare Employee.

  10. Homogenous polynomially parameter-dependent H∞ filter designs of discrete-time fuzzy systems.

    PubMed

    Zhang, Huaguang; Xie, Xiangpeng; Tong, Shaocheng

    2011-10-01

    This paper proposes a novel H∞ filtering technique for a class of discrete-time fuzzy systems. First, a novel kind of fuzzy H∞ filter, which is homogeneous polynomially parameter-dependent on the membership functions with an arbitrary degree, is developed to guarantee the asymptotic stability and a prescribed H∞ performance of the filtering error system. Second, relaxed conditions for H∞ performance analysis are proposed by using a new fuzzy Lyapunov function and the Finsler lemma with homogeneous polynomial matrix Lagrange multipliers. Then, based on a new kind of slack variable technique, relaxed linear-matrix-inequality-based H∞ filtering conditions are proposed. Finally, two numerical examples are provided to illustrate the effectiveness of the proposed approach.

  11. Empirical performance of interpolation techniques in risk-neutral density (RND) estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, H.; Abdullah, M. H.

    2017-03-01

    The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. First, the empirical performance is evaluated using statistical analysis based on the implied mean and the implied variance of the RND. Second, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The results of the LOOCV pricing error show that interpolation using a fourth-order polynomial provides the best fit to option prices, in that it has the lowest pricing error.
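
    The LOOCV pricing-error criterion is straightforward to implement: refit the interpolant with each quote held out and score the prediction error on the held-out point. A small numpy sketch under hypothetical option quotes (the strikes and prices below are made up for illustration):

      import numpy as np

      def loocv_error(strikes, prices, degree):
          """Leave-one-out RMSE of a polynomial fit of the given degree."""
          errors = []
          for i in range(len(strikes)):
              mask = np.arange(len(strikes)) != i
              coeffs = np.polyfit(strikes[mask], prices[mask], degree)
              errors.append(np.polyval(coeffs, strikes[i]) - prices[i])
          return np.sqrt(np.mean(np.square(errors)))

      # Hypothetical option quotes (strike, price), for illustration only.
      strikes = np.array([90., 95., 100., 105., 110., 115., 120.])
      prices  = np.array([12.1, 8.4, 5.5, 3.4, 2.0, 1.1, 0.6])

      for deg in (2, 4):
          print("degree", deg, "LOOCV RMSE:", loocv_error(strikes, prices, deg))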

  12. Polynomial Similarity Transformation Theory: A smooth interpolation between coupled cluster doubles and projected BCS applied to the reduced BCS Hamiltonian

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Degroote, M.; Henderson, T. M.; Zhao, J.

    We present a similarity transformation theory based on a polynomial form of a particle-hole pair excitation operator. In the weakly correlated limit, this polynomial becomes an exponential, leading to coupled cluster doubles. In the opposite strongly correlated limit, the polynomial becomes an extended Bessel expansion and yields the projected BCS wavefunction. In between, we interpolate using a single parameter. The effective Hamiltonian is non-hermitian, and this Polynomial Similarity Transformation Theory follows the philosophy of traditional coupled cluster, left-projecting the transformed Hamiltonian onto subspaces of the Hilbert space in which the wave function variance is forced to be zero. Similarly, the interpolation parameter is obtained by minimizing the next residual in the projective hierarchy. We rationalize and demonstrate how and why coupled cluster doubles is ill suited to the strongly correlated limit whereas the Bessel expansion remains well behaved. The model provides accurate wave functions with energy errors that in its best variant are smaller than 1% across all interaction strengths. The numerical cost is polynomial in system size and the theory can be straightforwardly applied to any realistic Hamiltonian.

  13. High-order local maximum principle preserving (MPP) discontinuous Galerkin finite element method for the transport equation

    NASA Astrophysics Data System (ADS)

    Anderson, R.; Dobrev, V.; Kolev, Tz.; Kuzmin, D.; Quezada de Luna, M.; Rieben, R.; Tomov, V.

    2017-04-01

    In this work we present an FCT-like Maximum-Principle Preserving (MPP) method to solve the transport equation. We use high-order polynomial spaces; in particular, we consider up to 5th-order spaces in two and three dimensions and 23rd-order spaces in one dimension. The method combines the concepts of positive basis functions for discontinuous Galerkin finite element spatial discretization, locally defined solution bounds, element-based flux correction, and non-linear local mass redistribution. We consider a simple 1D problem with non-smooth initial data to explain and understand the behavior of different parts of the method. Convergence tests in space indicate that high-order accuracy is achieved. Numerical results from several benchmarks in two and three dimensions are also reported.
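
    The "locally defined solution bounds" ingredient can be illustrated with a toy 1D limiter: bound each updated cell value by the min/max of its neighborhood at the previous time level. The plain clipping below only sketches the bound-enforcement idea; the paper's method enforces these bounds conservatively via element-based flux correction and mass redistribution, not by clipping:

      import numpy as np

      # Old and (hypothetical) new solution values on a periodic 1D grid.
      u_old = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
      u_new = np.array([0.0, -0.1, 1.05, 1.0, 0.02, 0.0])  # over/undershoots

      # Local bounds: min/max over the cell and its immediate neighbors.
      u_min = np.minimum(np.roll(u_old, 1), np.minimum(u_old, np.roll(u_old, -1)))
      u_max = np.maximum(np.roll(u_old, 1), np.maximum(u_old, np.roll(u_old, -1)))

      u_limited = np.clip(u_new, u_min, u_max)  # discrete maximum principle holds
      print(u_limited)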

  14. New Geometric-distortion Solution for STIS FUV-MAMA

    NASA Astrophysics Data System (ADS)

    Sohn, S. Tony

    2018-04-01

    We derived a new geometric distortion solution for the STIS FUV-MAMA detector. To do this, positions of stars in 89 FUV-MAMA observations of NGC 6681 were compared to an astrometric standard catalog created using WFC3/UVIS imaging data to derive a fourth-order polynomial solution that transforms raw (x, y) positions to geometrically-corrected (x, y) positions. When compared to astrometric catalog positions, the FUV-MAMA position measurements based on the IDCTAB showed residuals with an RMS of ∼30 mas in each coordinate. Using the new IDCTAB, the RMS is reduced to ∼4 mas, or 0.16 FUV-MAMA pixels, in each coordinate. The updated IDCTAB is now being used in the HST STIS pipeline to process all STIS FUV-MAMA images.
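
    A fourth-order two-dimensional polynomial solution of this kind can be fit by ordinary least squares on matched star positions. A sketch with synthetic positions standing in for the NGC 6681 catalog match (the injected distortion terms below are invented for the demo, not the STIS solution):

      import numpy as np

      def poly2d_design(x, y, order=4):
          """Design matrix with all monomials x**i * y**j for i + j <= order."""
          cols = [x**i * y**j for i in range(order + 1)
                              for j in range(order + 1 - i)]
          return np.column_stack(cols)

      rng = np.random.default_rng(0)
      # Hypothetical star positions: raw detector coords and reference coords.
      x_raw = rng.uniform(0, 1024, 500)
      y_raw = rng.uniform(0, 1024, 500)
      # Stand-in "true" distortion for the demo (a real solution would use
      # an astrometric catalog, as in the report).
      x_ref = x_raw + 1e-5 * x_raw * y_raw
      y_ref = y_raw - 2e-6 * x_raw**2

      A = poly2d_design(x_raw, y_raw)
      cx, *_ = np.linalg.lstsq(A, x_ref, rcond=None)
      cy, *_ = np.linalg.lstsq(A, y_ref, rcond=None)

      x_corr, y_corr = A @ cx, A @ cy
      rms = np.sqrt(np.mean((x_corr - x_ref)**2 + (y_corr - y_ref)**2))
      print("fit RMS (pixels):", rms)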

  15. Influence of a fat layer on the near infrared spectra of human muscle: quantitative analysis based on two-layered Monte Carlo simulations and phantom experiments

    NASA Technical Reports Server (NTRS)

    Yang, Ye; Soyemi, Olusola O.; Landry, Michelle R.; Soller, Babs R.

    2005-01-01

    The influence of fat thickness on the diffuse reflectance spectra of muscle in the near infrared (NIR) region is studied by Monte Carlo simulations of a two-layer structure and with phantom experiments. A polynomial relationship was established between the fat thickness and the detected diffuse reflectance. The influence of a range of optical coefficients (absorption and reduced scattering) for fat and muscle over the known range of human physiological values was also investigated. Subject-to-subject variation in the fat optical coefficients and thickness can be ignored if the fat thickness is less than 5 mm. A method was proposed to correct for the influence of fat thickness. © 2005 Optical Society of America.

  16. A quantitative relationship for the shock sensitivities of energetic compounds based on X-NO(2) (X=C, N, O) bond dissociation energy.

    PubMed

    Li, Jinshan

    2010-08-15

    The ZPE-corrected X-NO(2) (X=C, N, O) bond dissociation energies (BDEs(ZPE)) of 11 energetic nitrocompounds of different types have been calculated employing density functional theory methods. Computed results show that, using the 6-31G** basis set, the UB3LYP-calculated BDE(ZPE) is less than the UB3P86 value. For these typical energetic nitrocompounds the shock-initiated pressure (P(98)) is indeed strongly related to the BDE(ZPE), and a polynomial correlation of ln(P(98)) with the BDE(ZPE) has been established at different density functional theory levels, which provides a method for addressing the shock sensitivity problem. Copyright 2010 Elsevier B.V. All rights reserved.

  17. CONSTRUCTION OF SCALAR AND VECTOR FINITE ELEMENT FAMILIES ON POLYGONAL AND POLYHEDRAL MESHES

    PubMed Central

    GILLETTE, ANDREW; RAND, ALEXANDER; BAJAJ, CHANDRAJIT

    2016-01-01

    We combine theoretical results from polytope domain meshing, generalized barycentric coordinates, and finite element exterior calculus to construct scalar- and vector-valued basis functions for conforming finite element methods on generic convex polytope meshes in dimensions 2 and 3. Our construction recovers well-known bases for the lowest order Nédélec, Raviart-Thomas, and Brezzi-Douglas-Marini elements on simplicial meshes and generalizes the notion of Whitney forms to non-simplicial convex polygons and polyhedra. We show that our basis functions lie in the correct function space with regard to global continuity and that they reproduce the requisite polynomial differential forms described by finite element exterior calculus. We present a method to count the number of basis functions required to ensure these two key properties. PMID:28077939

  18. CONSTRUCTION OF SCALAR AND VECTOR FINITE ELEMENT FAMILIES ON POLYGONAL AND POLYHEDRAL MESHES.

    PubMed

    Gillette, Andrew; Rand, Alexander; Bajaj, Chandrajit

    2016-10-01

    We combine theoretical results from polytope domain meshing, generalized barycentric coordinates, and finite element exterior calculus to construct scalar- and vector-valued basis functions for conforming finite element methods on generic convex polytope meshes in dimensions 2 and 3. Our construction recovers well-known bases for the lowest order Nédélec, Raviart-Thomas, and Brezzi-Douglas-Marini elements on simplicial meshes and generalizes the notion of Whitney forms to non-simplicial convex polygons and polyhedra. We show that our basis functions lie in the correct function space with regard to global continuity and that they reproduce the requisite polynomial differential forms described by finite element exterior calculus. We present a method to count the number of basis functions required to ensure these two key properties.

  19. New approaches to probing Minkowski functionals

    NASA Astrophysics Data System (ADS)

    Munshi, D.; Smidt, J.; Cooray, A.; Renzi, A.; Heavens, A.; Coles, P.

    2013-10-01

    We generalize the concept of the ordinary skew-spectrum to probe the effect of non-Gaussianity on the morphology of cosmic microwave background (CMB) maps in several domains: in real space (where they are commonly known as cumulant-correlators), and in harmonic and needlet bases. The essential aim is to retain more information than is normally contained in these statistics, in order to assist in determining the source of any measured non-Gaussianity, in the same spirit in which the Munshi & Heavens skew-spectra were used to identify foreground contaminants to the CMB bispectrum in Planck data. Using a perturbative series to construct the Minkowski functionals (MFs), we provide a pseudo-C_ℓ based approach in both harmonic and needlet representations to estimate these spectra in the presence of a mask and inhomogeneous noise. Assuming homogeneous noise, we present approximate expressions for the error covariance for the purpose of joint estimation of these spectra. We present specific results for four different models of primordial non-Gaussianity: the local, equilateral, orthogonal and enfolded models, as well as non-Gaussianity caused by unsubtracted point sources. Closed-form results for the next-order corrections to the MFs are also obtained in terms of a quadruplet of kurt-spectra. We also use the method of modal decomposition of the bispectrum and trispectrum to reconstruct the MFs as an alternative route to the morphological properties of CMB maps. Finally, we introduce the odd-parity skew-spectra to probe the odd-parity bispectrum and its impact on the morphology of the CMB sky. Although developed for the CMB, the generic results obtained here can be useful in other areas of cosmology.

  20. NHS-Esters As Versatile Reactivity-Based Probes for Mapping Proteome-Wide Ligandable Hotspots.

    PubMed

    Ward, Carl C; Kleinman, Jordan I; Nomura, Daniel K

    2017-06-16

    Most of the proteome is considered undruggable, oftentimes hindering translational efforts for drug discovery. Identifying previously unknown druggable hotspots in proteins would enable strategies for pharmacologically interrogating these sites with small molecules. Activity-based protein profiling (ABPP) has arisen as a powerful chemoproteomic strategy that uses reactivity-based chemical probes to map reactive, functional, and ligandable hotspots in complex proteomes, which has enabled inhibitor discovery against various therapeutic protein targets. Here, we report an alkyne-functionalized N-hydroxysuccinimide-ester (NHS-ester) as a versatile reactivity-based probe for mapping the reactivity of a wide range of nucleophilic ligandable hotspots, including lysines, serines, threonines, and tyrosines, encompassing active sites, allosteric sites, post-translational modification sites, protein interaction sites, and previously uncharacterized potential binding sites. Surprisingly, we also show that fragment-based NHS-ester ligands can be made to confer selectivity for specific lysine hotspots on specific targets including Dpyd, Aldh2, and Gstt1. We thus put forth NHS-esters as promising reactivity-based probes and chemical scaffolds for covalent ligand discovery.

  1. Wide-field spectral imaging of human ovary autofluorescence and oncologic diagnosis via previously collected probe data

    NASA Astrophysics Data System (ADS)

    Renkoski, Timothy E.; Hatch, Kenneth D.; Utzinger, Urs

    2012-03-01

    With no sufficient screening test for ovarian cancer, a method to evaluate the ovarian disease state quickly and nondestructively is needed. The authors have applied a wide-field spectral imager to freshly resected ovaries of 30 human patients in a study believed to be the first of its magnitude. Endogenous fluorescence was excited with 365-nm light and imaged in eight emission bands collectively covering the 400- to 640-nm range. Linear discriminant analysis was used to classify all image pixels and generate diagnostic maps of the ovaries. Training the classifier with previously collected single-point autofluorescence measurements of a spectroscopic probe enabled this novel classification. The process by which probe-collected spectra were transformed for comparison with imager spectra is described. Sensitivity of 100% and specificity of 51% were obtained in classifying normal and cancerous ovaries using autofluorescence data alone. Specificity increased to 69% when autofluorescence data were divided by green reflectance data to correct for spatial variation in tissue absorption properties. Benign neoplasm ovaries were also found to classify as nonmalignant using the same algorithm. Although applied ex vivo, the method described here appears useful for quick assessment of cancer presence in the human ovary.
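
    The classification step is, in essence, a linear discriminant trained on labeled spectra and applied per pixel. A toy scikit-learn sketch with invented eight-band Gaussian "spectra" standing in for the calibrated probe and imager data (class means, spreads, and sample counts are assumptions for illustration):

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(2)
      # Stand-in training spectra: eight emission bands per measurement,
      # two classes (0 = normal, 1 = cancerous).
      normal = rng.normal(1.0, 0.1, size=(200, 8))
      cancer = rng.normal(1.2, 0.1, size=(200, 8))
      X = np.vstack([normal, cancer])
      y = np.array([0] * 200 + [1] * 200)

      clf = LinearDiscriminantAnalysis().fit(X, y)   # train on probe-style spectra

      pixels = rng.normal(1.1, 0.1, size=(5, 8))     # imager pixels to classify
      print(clf.predict(pixels))                     # per-pixel diagnostic labels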

  2. Sensitivity Analysis of Flutter Response of a Wing Incorporating Finite-Span Corrections

    NASA Technical Reports Server (NTRS)

    Issac, Jason Cherian; Kapania, Rakesh K.; Barthelemy, Jean-Francois M.

    1994-01-01

    Flutter analysis of a wing is performed in compressible flow using a state-space representation of the unsteady aerodynamic behavior. Three different expressions are used to incorporate corrections due to the finite-span effects of the wing in estimating the lift-curve slope. The structural formulation is based on a Rayleigh-Ritz technique with Chebyshev polynomials used for the wing deflections. The aeroelastic equations are solved as an eigenvalue problem to determine the flutter speed of the wing. The flutter speeds are found to be higher in these cases than those obtained without accounting for the finite-span effects. The derivatives of the flutter speed with respect to the shape parameters, namely aspect ratio, area, taper ratio and sweep angle, are calculated analytically. The shape sensitivity derivatives give a linear approximation to the flutter speed curves over a range of values of the perturbed shape parameter. Flutter and sensitivity calculations are performed on a wing using a lifting-surface unsteady aerodynamic theory with modules from a system of programs called FAST.

  3. Essentially nonoscillatory postprocessing filtering methods

    NASA Technical Reports Server (NTRS)

    Lafon, F.; Osher, S.

    1992-01-01

    High-order accurate centered flux approximations used in the computation of numerical solutions to nonlinear partial differential equations produce large oscillations in regions of sharp transitions. Here, we present a new class of filtering methods, denoted Essentially Nonoscillatory Least Squares (ENOLS), which constructs an upgraded filtered solution that is close to the physically correct weak solution of the original evolution equation. Our method relies on the evaluation of a least-squares polynomial approximation to oscillatory data using a set of points determined via the ENO network. Numerical results are given in one and two space dimensions for both scalar and systems of hyperbolic conservation laws. Computational running time, efficiency, and robustness of the method are illustrated in various examples, such as Riemann initial data for both Burgers' and Euler's equations of gas dynamics. In all standard cases, the filtered solution appears to converge numerically to the correct solution of the original problem. Some interesting results are also obtained by applying our filters to nonstandard central difference schemes, which exactly preserve entropy but have recently been shown generally not to be weakly convergent to a solution of the conservation law.

  4. The Accuracy and Correction of Fuel Consumption from Controller Area Network Broadcast

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Lijuan; Gonder, Jeffrey D; Wood, Eric W

    Fuel consumption (FC) has always been an important factor in vehicle cost. With the advent of electronically controlled engines, the controller area network (CAN) broadcasts information about engine and vehicle performance, including fuel use. However, the accuracy of the FC estimates is uncertain. In this study, the researchers first compared CAN-broadcast FC against physically measured fuel use for three different types of trucks, which revealed the inaccuracies of CAN-broadcast fueling estimates. To match precise gravimetric fuel-scale measurements, polynomial models were developed to correct the CAN-broadcast FC. Lastly, robustness testing of the correction models was performed. The training cycles included a variety of drive characteristics, such as high speed, acceleration, idling, and deceleration. The mean relative differences were reduced noticeably.
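
    A correction model of this kind reduces to fitting a low-order polynomial from CAN readings to reference fuel-scale readings over a training cycle. A minimal sketch with hypothetical paired measurements (all numbers, and the second-order choice, are invented for illustration):

      import numpy as np

      # Hypothetical paired observations from a training drive cycle:
      # CAN-broadcast fuel rate vs. gravimetric fuel-scale truth (g/s).
      can_fc   = np.array([0.8, 1.5, 2.3, 3.1, 4.0, 5.2, 6.5])
      scale_fc = np.array([0.9, 1.7, 2.4, 3.4, 4.5, 5.9, 7.4])

      # Second-order correction model mapping CAN readings to scale readings.
      coeffs = np.polyfit(can_fc, scale_fc, deg=2)

      def corrected_fc(can_value):
          """Apply the polynomial correction to a raw CAN fuel-rate value."""
          return np.polyval(coeffs, can_value)

      print(corrected_fc(3.5))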

  5. Rapid and sensitive detection of early esophageal squamous cell carcinoma with fluorescence probe targeting dipeptidylpeptidase IV

    PubMed Central

    Onoyama, Haruna; Kamiya, Mako; Kuriki, Yugo; Komatsu, Toru; Abe, Hiroyuki; Tsuji, Yosuke; Yagi, Koichi; Yamagata, Yukinori; Aikou, Susumu; Nishida, Masato; Mori, Kazuhiko; Yamashita, Hiroharu; Fujishiro, Mitsuhiro; Nomura, Sachiyo; Shimizu, Nobuyuki; Fukayama, Masashi; Koike, Kazuhiko; Urano, Yasuteru; Seto, Yasuyuki

    2016-01-01

    Early detection of esophageal squamous cell carcinoma (ESCC) is an important prognosticator, but is difficult to achieve by conventional endoscopy. Conventional lugol chromoendoscopy and equipment-based image-enhanced endoscopy, such as narrow-band imaging (NBI), have various practical limitations. Since fluorescence-based visualization is considered a promising approach, we aimed to develop an activatable fluorescence probe to visualize ESCCs. First, based on the fact that various aminopeptidase activities are elevated in cancer, we screened freshly resected specimens from patients with a series of aminopeptidase-activatable fluorescence probes. The results indicated that dipeptidylpeptidase IV (DPP-IV) is specifically activated in ESCCs, and would be a suitable molecular target for detection of esophageal cancer. Therefore, we designed, synthesized and characterized a series of DPP-IV-activatable fluorescence probes. When the selected probe was topically sprayed onto endoscopic submucosal dissection (ESD) or surgical specimens, tumors were visualized within 5 min, and when the probe was sprayed on biopsy samples, the sensitivity, specificity and accuracy reached 96.9%, 85.7% and 90.5%. We believe that DPP-IV-targeted activatable fluorescence probes are practically translatable as convenient tools for clinical application to enable rapid and accurate diagnosis of early esophageal cancer during endoscopic or surgical procedures. PMID:27245876

  6. Horizontal Line-of-Sight Turbulence Over Near-Ground Paths and Implications for Adaptive Optics Corrections in Laser Communications.

    PubMed

    Levine, B M; Martinsen, E A; Wirth, A; Jankevics, A; Toledo-Quinones, M; Landers, F; Bruno, T L

    1998-07-20

    Atmospheric turbulence over long horizontal paths perturbs phase and can also cause severe intensity scintillation in the pupil of an optical communications receiver, which limits the data rate over which intensity-based modulation schemes can operate. The feasibility of using low-order adaptive optics by applying phase-only corrections over horizontal propagation paths is investigated. A Shack-Hartmann wave-front sensor was built, and data were gathered on paths 1 m above ground at ranges between 1 and 2.5 km. Both intensity fluctuations and optical path fluctuation statistics were gathered within a single frame, and the wave-front reconstructor was modified to allow for scintillated data. The temporal power spectral density for various Zernike polynomial modes was used to determine the effects of the expected corrections by adaptive optics. The slopes of the inertial subrange of turbulence were found to be less than predicted by Kolmogorov theory with an infinite outer scale, and the distribution of variance explained by increasing order was also found to be different. Statistical analysis of these data at the 1-km range indicates that at communications wavelengths of 1.3 μm, a significant improvement in transmitted beam quality could be expected most of the time, to a performance of 10% Strehl ratio or better.

  7. A mass spectrometry-based multiplex SNP genotyping by utilizing allele-specific ligation and strand displacement amplification.

    PubMed

    Park, Jung Hun; Jang, Hyowon; Jung, Yun Kyung; Jung, Ye Lim; Shin, Inkyung; Cho, Dae-Yeon; Park, Hyun Gyu

    2017-05-15

    We herein describe a new mass spectrometry-based method for multiplex SNP genotyping by utilizing allele-specific ligation and strand displacement amplification (SDA) reaction. In this method, allele-specific ligation is first performed to discriminate base sequence variations at the SNP site within the PCR-amplified target DNA. The primary ligation probe is extended by a universal primer annealing site while the secondary ligation probe has base sequences as an overhang with a nicking enzyme recognition site and complementary mass marker sequence. The ligation probe pairs are ligated by DNA ligase only at specific allele in the target DNA and the resulting ligated product serves as a template to promote the SDA reaction using a universal primer. This process isothermally amplifies short DNA fragments, called mass markers, to be analyzed by mass spectrometry. By varying the sizes of the mass markers, we successfully demonstrated the multiplex SNP genotyping capability of this method by reliably identifying several BRCA mutations in a multiplex manner with mass spectrometry. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. In vivo targeted peripheral nerve imaging with a nerve-specific nanoscale magnetic resonance probe.

    PubMed

    Zheng, Linfeng; Li, Kangan; Han, Yuedong; Wei, Wei; Zheng, Sujuan; Zhang, Guixiang

    2014-11-01

    Neuroimaging plays a pivotal role in clinical practice. Currently, computed tomography (CT), magnetic resonance imaging (MRI), ultrasonography, and positron emission tomography (PET) are applied in the clinical setting as neuroimaging modalities. There is no optimal imaging modality for clinical peripheral nerve imaging even though fluorescence/bioluminescence imaging has been used for preclinical studies on the nervous system. Some studies have shown that molecular and cellular MRI (MCMRI) can be used to visualize and image the cellular and molecular level of the nervous system. Other studies revealed that there are different pathological/molecular changes in the proximal and distal sites after peripheral nerve injury (PNI). Therefore, we hypothesized that in vivo peripheral nerve targets can be imaged using MCMRI with specific MRI probes. Specific probes should have higher penetrability for the blood-nerve barrier (BNB) in vivo. Here, a functional nanometre MRI probe that is based on nerve-specific proteins as targets, specifically, using a molecular antibody (mAb) fragment conjugated to iron nanoparticles as an MRI probe, was constructed for further study. The MRI probe allows for imaging the peripheral nerve targets in vivo. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. DC Electric Field Measurement by the Double Probe System Aboard Geotail and its Simulation

    NASA Astrophysics Data System (ADS)

    Kasaba, Y.; Hayakawa, H.; Ishisaka, K.; Okada, T.; Matsuoka, A.; Mukai, T.; Okada, M.

    2005-12-01

    We summarize the characteristics of the DC electric field measurement by the double probe system, PANT and EFD-P, aboard Geotail. The accuracy and the correction factors for the gain (effective length) and offset, which depend on ambient plasma conditions, are provided. Accurate measurements of electric fields are essential for space plasma studies, for example, plasma convection, wave-particle interactions, and violation of the MHD approximation. One typical measurement technique is the 'Double Probe method', identical to that of a voltmeter: the potential difference between two top-hat probes is measured [cf. Pedersen et al., 1984]. This method can measure electric fields passively and continuously in all plasma conditions. However, the accuracy of the measured electric field values is limited. The probe measurement is also subject to the variable gain (effective length) of the probe antenna and the artificial offset of the measured values. These depend on (a) the disturbance from ambient plasma and (b) the disturbance from the spacecraft and the probe itself. In this paper, we show the characteristics of the DC electric field measurement by the PANT probe and the EFD-P (Electric Field Detector - Probe technique) receiver aboard Geotail [Tsuruda et al., 1994], in order to evaluate the accuracy, gain, and offset controlled by ambient plasmas. We conclude that the Geotail electric field measurement by the double probe system has an accuracy of 0.4 mV/m for Ex and 0.3 mV/m for Ey after correction of the gain and offset. In better conditions, the accuracy of Ey is 0.2 mV/m. The underlying accuracy may be better, because these values are limited by the accuracy of the particle measurement, especially in low-density conditions. In practical use, corrections based on long-term variation and spacecraft potential are effective in refining the electric field data. The characteristics of the long-term variation and the dependences on ambient plasma are not yet fully understood. Further work will be needed based on the calibrated LEP data after 1998. It will also cover the conditions rejected in this paper, i.e., low-density regions, potential-controlled periods, and electric fields quasi-parallel to the magnetic field. Comparison with EFD-B (EFD - Beam technique) data will also be included in order to reject the ambiguity in particle observations. In addition, we are trying to establish a numerical model of the double probe system for a fully quantitative understanding of the effects of the potential structure and photoelectron distributions. These will be the basis for planned experiments: BepiColombo to Mercury, ERG to the inner magnetosphere, and the multi-spacecraft magnetospheric mission SCOPE.
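
    The double-probe reduction itself is simple arithmetic once the gain (effective length) and offset are known. A minimal sketch with made-up calibration numbers (the actual Geotail factors are plasma dependent, as discussed above):

      # Minimal double-probe reduction with invented calibration numbers.
      v1, v2 = 0.35, 0.15          # probe potentials relative to spacecraft (V)
      separation = 100.0           # effective probe separation (m)
      gain = 0.8                   # effective-length (gain) factor
      offset = 0.1e-3              # artificial offset (V/m)

      e_raw = (v1 - v2) / separation        # uncorrected field (V/m)
      e_corr = e_raw / gain - offset        # apply gain and offset corrections
      print(e_corr * 1e3, "mV/m")           # -> 2.4 mV/m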

  10. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cieplak, Agnieszka M.; Slosar, Anze

    The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. In conclusion, we find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.
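
    Because E[P_n(X)] expands in the first n moments of X, each Legendre coefficient of the flux PDF can be estimated directly as a sample average of a Legendre polynomial. A small numpy sketch, with beta-distributed samples standing in for observed transmitted flux (the distribution and its parameters are assumptions for illustration):

      import numpy as np
      from numpy.polynomial import legendre as L

      rng = np.random.default_rng(1)
      # Stand-in "flux" samples in [0, 1]; a real analysis would use the
      # observed Lyman-alpha forest transmitted flux.
      flux = rng.beta(5.0, 2.0, size=100_000)
      x = 2.0 * flux - 1.0              # rescale to the Legendre domain [-1, 1]

      def legendre_coeff(n, samples):
          """a_n = (2n + 1)/2 * E[P_n(X)], estimated from samples."""
          basis = np.zeros(n + 1)
          basis[n] = 1.0                # coefficient vector selecting P_n
          return (2 * n + 1) / 2.0 * np.mean(L.legval(samples, basis))

      coeffs = [legendre_coeff(n, x) for n in range(6)]
      print(np.round(coeffs, 4))        # a_0 = 0.5 by normalization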

  11. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cieplak, Agnieszka M.; Slosar, Anže, E-mail: acieplak@bnl.gov, E-mail: anze@bnl.gov

    The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. We find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.

  12. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    DOE PAGES

    Cieplak, Agnieszka M.; Slosar, Anze

    2017-10-12

    The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. In conclusion, we find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.

  13. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    NASA Astrophysics Data System (ADS)

    Cieplak, Agnieszka M.; Slosar, Anže

    2017-10-01

    The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. We find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.

  14. Discrimination Power of Polynomial-Based Descriptors for Graphs by Using Functional Matrices.

    PubMed

    Dehmer, Matthias; Emmert-Streib, Frank; Shi, Yongtang; Stefu, Monica; Tripathi, Shailesh

    2015-01-01

    In this paper, we study the discrimination power of graph measures that are based on graph-theoretical matrices. The paper generalizes the work of [M. Dehmer, M. Moosbrugger, Y. Shi, Encoding structural information uniquely with polynomial-based descriptors by employing the Randić matrix, Applied Mathematics and Computation, 268(2015), 164-168]. We demonstrate that by using the new functional matrix approach, exhaustively generated graphs can be discriminated more uniquely than shown in the mentioned previous work.
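
    A polynomial-based descriptor of the kind studied here can be sketched as the characteristic-polynomial coefficient vector of a functional matrix. The example below uses a Randić-type matrix and two small non-isomorphic graphs; it illustrates the construction only, not the paper's exhaustive discrimination tests:

      import numpy as np

      def randic_matrix(adj):
          """Functional (Randic-type) matrix: R_ij = 1/sqrt(d_i * d_j) on edges."""
          deg = adj.sum(axis=1)
          return np.where(adj > 0, 1.0 / np.sqrt(np.outer(deg, deg)), 0.0)

      def descriptor(adj):
          """Characteristic-polynomial coefficients as a graph descriptor."""
          return np.round(np.poly(randic_matrix(adj)), 10)

      # Two small non-isomorphic graphs on 4 vertices: a path and a star.
      path = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], float)
      star = np.array([[0,1,1,1],[1,0,0,0],[1,0,0,0],[1,0,0,0]], float)

      print(descriptor(path))   # ~ [1, 0, -1.25, 0, 0.25]
      print(descriptor(star))   # ~ [1, 0, -1.00, 0, 0.00] -> distinct descriptors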

  15. Discrimination Power of Polynomial-Based Descriptors for Graphs by Using Functional Matrices

    PubMed Central

    Dehmer, Matthias; Emmert-Streib, Frank; Shi, Yongtang; Stefu, Monica; Tripathi, Shailesh

    2015-01-01

    In this paper, we study the discrimination power of graph measures that are based on graph-theoretical matrices. The paper generalizes the work of [M. Dehmer, M. Moosbrugger, Y. Shi, Encoding structural information uniquely with polynomial-based descriptors by employing the Randić matrix, Applied Mathematics and Computation, 268(2015), 164-168]. We demonstrate that by using the new functional matrix approach, exhaustively generated graphs can be discriminated more uniquely than shown in the mentioned previous work. PMID:26479495

  16. Improved electron probe microanalysis of trace elements in quartz

    USGS Publications Warehouse

    Donovan, John J.; Lowers, Heather; Rusk, Brian G.

    2011-01-01

    Quartz occurs in a wide range of geologic environments throughout the Earth's crust. The concentration and distribution of trace elements in quartz provide information such as temperature and other physical conditions of formation. Trace element analyses with modern electron-probe microanalysis (EPMA) instruments can achieve 99% confidence detection of ~100 ppm with fairly minimal effort for many elements in samples of low to moderate average atomic number such as many common oxides and silicates. However, trace element measurements below 100 ppm in many materials are limited, not only by the precision of the background measurement, but also by the accuracy with which background levels are determined. A new "blank" correction algorithm has been developed and tested on both Cameca and JEOL instruments, which applies a quantitative correction to the emitted X-ray intensities during the iteration of the sample matrix correction based on a zero level (or known trace) abundance calibration standard. This iterated blank correction, when combined with improved background fit models, and an "aggregate" intensity calculation utilizing multiple spectrometer intensities in software for greater geometric efficiency, yields a detection limit of 2 to 3 ppm for Ti and 6 to 7 ppm for Al in quartz at 99% t-test confidence with similar levels for absolute accuracy.

  17. The Effects of Using Different Procedures to Score Maze Measures

    ERIC Educational Resources Information Center

    Pierce, Rebecca L.; McMaster, Kristen L.; Deno, Stanley L.

    2010-01-01

    The purpose of this study was to examine how different scoring procedures affect interpretation of maze curriculum-based measurements. Fall and spring data were collected from 199 students receiving supplemental reading instruction. Maze probes were scored first by counting all correct maze choices, followed by four scoring variations designed to…

  18. Molecular identification of common Salmonella serovars using multiplex DNA sensor-based suspension array.

    PubMed

    Aydin, Muhsin; Carter-Conger, Jacqueline; Gao, Ning; Gilmore, David F; Ricke, Steven C; Ahn, Soohyoun

    2018-04-01

    Salmonella is one of the major foodborne pathogens and the leading cause of foodborne illness-related hospitalizations and deaths. It is critical to develop a sensitive and rapid detection assay that can identify Salmonella to ensure food safety. In this study, a DNA sensor-based suspension array system of high multiplexing ability was developed to identify eight Salmonella serovars commonly associated with foodborne outbreaks to the serotype level. Each DNA sensor was prepared by activating pre-encoded microspheres with oligonucleotide probes targeting virulence genes and serovar-specific regions. The mixture of 12 different types of DNA sensors was loaded into a 96-well microplate and used as a 12-plex DNA sensor array platform. DNA isolated from Salmonella was amplified by multiplex polymerase chain reaction (mPCR), and the presence of Salmonella was determined by reading fluorescent signals from hybridization between probes on DNA sensors and fluorescently labeled target DNA using the Bio-Plex® system. The developed multiplex array was able to detect synthetic DNA at concentrations as low as 100 fM and various Salmonella serovars at levels as low as 100 CFU/mL within 1 h post-PCR. Sensitivity of this assay was further improved to 1 CFU/mL with 6-h enrichment. The array system also correctly and specifically identified the serotype of tested Salmonella strains without any cross-reactivity with other common foodborne pathogens. Our results indicate the developed DNA sensor suspension array can be a rapid and reliable high-throughput method for simultaneous detection and molecular identification of common Salmonella serotypes.

  19. Transfer matrix computation of generalized critical polynomials in percolation

    DOE PAGES

    Scullard, Christian R.; Jacobsen, Jesper Lykke

    2012-09-27

    Percolation thresholds have recently been studied by means of a graph polynomial P_B(p), henceforth referred to as the critical polynomial, that may be defined on any periodic lattice. The polynomial depends on a finite subgraph B, called the basis, and on the way in which the basis is tiled to form the lattice. The unique root of P_B(p) in [0, 1] either gives the exact percolation threshold for the lattice, or provides an approximation that becomes more accurate with appropriately increasing size of B. Initially P_B(p) was defined by a contraction-deletion identity, similar to that satisfied by the Tutte polynomial. Here, we give an alternative probabilistic definition of P_B(p), which allows for much more efficient computations, by using the transfer matrix, than was previously possible with contraction-deletion. We present bond percolation polynomials for the (4, 8²), kagome, and (3, 12²) lattices for bases of up to respectively 96, 162, and 243 edges, much larger than the previous limit of 36 edges using contraction-deletion. We discuss in detail the role of the symmetries and the embedding of B. For the largest bases, we obtain the thresholds p_c(4, 8²) = 0.676 803 329…, p_c(kagome) = 0.524 404 998…, p_c(3, 12²) = 0.740 420 798…, comparable to the best simulation results. We also show that the alternative definition of P_B(p) can be applied to study site percolation problems.
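
    Once a critical polynomial is in hand, the threshold estimate is simply its unique root in [0, 1]. As a sketch, the classical exactly solved cubic for bond percolation on the triangular lattice (a textbook case, not one of the paper's transfer-matrix bases) can be rooted numerically and checked against the closed form:

      import numpy as np
      from scipy.optimize import brentq

      # Critical polynomial for bond percolation on the triangular lattice:
      # P(p) = p^3 - 3p + 1, whose root in [0, 1] is the exact threshold.
      def P(p):
          return p**3 - 3.0 * p + 1.0

      p_c = brentq(P, 0.0, 1.0)                 # unique root in [0, 1]
      print(p_c, 2.0 * np.sin(np.pi / 18.0))    # both ~ 0.347296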

  20. Novel Image Encryption Scheme Based on Chebyshev Polynomial and Duffing Map

    PubMed Central

    2014-01-01

    We present a novel image encryption algorithm using a Chebyshev polynomial based on permutation and substitution, and a Duffing map based on substitution. Comprehensive security analysis has been performed on the designed scheme using key space analysis, visual testing, histogram analysis, information entropy calculation, correlation coefficient analysis, differential analysis, key sensitivity testing, and speed testing. The study demonstrates that the proposed image encryption algorithm has a key space of more than 10¹¹³ and a desirable level of security based on the good statistical results and theoretical arguments. PMID:25143970
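
    The permutation stage of a Chebyshev-polynomial cipher can be sketched by iterating the Chebyshev map x → cos(k·arccos x) to derive a key-dependent pixel permutation. The toy below omits the substitution stages and the Duffing map entirely, so it illustrates the idea rather than the cited scheme (the key values x0 and k are arbitrary):

      import numpy as np

      def chebyshev_stream(x0, k, n):
          """Iterate the Chebyshev map x -> cos(k * arccos(x)) on [-1, 1]."""
          xs = np.empty(n)
          x = x0
          for i in range(n):
              x = np.cos(k * np.arccos(x))
              xs[i] = x
          return xs

      # Toy 8x8 "image", scrambled by a key-dependent permutation.
      img = np.arange(64, dtype=np.uint8).reshape(8, 8)
      stream = chebyshev_stream(x0=0.3, k=4, n=img.size)
      perm = np.argsort(stream)                 # permutation derived from the key

      scrambled = img.flatten()[perm].reshape(img.shape)

      recovered = np.empty_like(scrambled.flatten())
      recovered[perm] = scrambled.flatten()     # apply the inverse permutation
      assert np.array_equal(recovered.reshape(8, 8), img)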
