Sample records for polynomial-based probe-specific correction

  1. Specification Search for Identifying the Correct Mean Trajectory in Polynomial Latent Growth Models

    ERIC Educational Resources Information Center

    Kim, Minjung; Kwok, Oi-Man; Yoon, Myeongsun; Willson, Victor; Lai, Mark H. C.

    2016-01-01

    This study investigated the optimal strategy for model specification search under the latent growth modeling (LGM) framework, specifically searching for the correct polynomial mean, or average, growth model when no model is hypothesized a priori in the absence of theory. In this simulation study, the effectiveness of different starting…

  2. Fabrication and correction of freeform surface based on Zernike polynomials by slow tool servo

    NASA Astrophysics Data System (ADS)

    Cheng, Yuan-Chieh; Hsu, Ming-Ying; Peng, Wei-Jei; Hsu, Wei-Yao

    2017-10-01

    Freeform surfaces have recently been widely adopted in optical systems because they offer additional degrees of freedom that improve imaging performance. Freeform optical fabrication integrates freeform optical design, precision manufacturing, metrology, and a compensation method that corrects the form deviation of the surface introduced during production, providing greater flexibility and better performance. This paper focuses on the fabrication and correction of freeform surfaces. In this study, the freeform optics were machined by multi-axis ultra-precision manufacturing to improve surface quality, on a machine equipped with a positioning C-axis and a CXZ machining function, also called slow tool servo (STS). The compensation method based on Zernike polynomials was successfully verified: it corrects the form deviation of the freeform surface. Finally, the freeform surface was measured experimentally with an Ultrahigh Accurate 3D Profilometer (UA3P), and the form error was compensated using Zernike polynomial fitting to improve the form accuracy of the freeform surface.
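
    As a concrete illustration of the compensation step described above, the sketch below fits a measured form deviation with a few low-order Zernike terms (written in Cartesian form on unit-disk coordinates) and negates the fit to obtain a toolpath offset. The basis choice, data, and function names are illustrative, not taken from the paper.

```python
import numpy as np

def zernike_basis(x, y):
    """First six Zernike terms (piston, tilts, defocus, astigmatisms)
    in Cartesian form on unit-disk coordinates."""
    r2 = x**2 + y**2
    return np.stack([
        np.ones_like(x),   # piston
        x,                 # tilt x
        y,                 # tilt y
        2*r2 - 1,          # defocus
        x**2 - y**2,       # astigmatism 0 deg
        2*x*y,             # astigmatism 45 deg
    ], axis=-1)

def fit_form_error(x, y, dz):
    """Least-squares Zernike coefficients of the measured deviation dz."""
    A = zernike_basis(x, y)
    coeffs, *_ = np.linalg.lstsq(A, dz, rcond=None)
    return coeffs

# Synthetic measurement: a pure defocus form error of 0.5 (arbitrary units)
rng = np.random.default_rng(0)
x = rng.uniform(-0.7, 0.7, 500)
y = rng.uniform(-0.7, 0.7, 500)
dz = 0.5 * (2*(x**2 + y**2) - 1)

c = fit_form_error(x, y, dz)
compensation = -zernike_basis(x, y) @ c   # offset applied in the next cut
```

In practice the measured point cloud from the profilometer replaces the synthetic `dz`, and many more Zernike terms would typically be used.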

  3. Adaptable gene-specific dye bias correction for two-channel DNA microarrays.

    PubMed

    Margaritis, Thanasis; Lijnzaad, Philip; van Leenen, Dik; Bouwmeester, Diane; Kemmeren, Patrick; van Hooff, Sander R; Holstege, Frank C P

    2009-01-01

    DNA microarray technology is a powerful tool for monitoring gene expression or for finding the location of DNA-bound proteins. DNA microarrays can suffer from gene-specific dye bias (GSDB), causing some probes to be affected more by the dye than by the sample. This results in large measurement errors, which vary considerably for different probes and also across different hybridizations. GSDB is not corrected by conventional normalization and has been difficult to address systematically because of its variance. We show that GSDB is influenced by label incorporation efficiency, explaining the variation of GSDB across different hybridizations. A correction method (Gene- And Slide-Specific Correction, GASSCO) is presented, whereby sequence-specific corrections are modulated by the overall bias of individual hybridizations. GASSCO outperforms earlier methods and works well on a variety of publicly available datasets covering a range of platforms, organisms and applications, including ChIP on chip. A sequence-based model is also presented, which predicts which probes will suffer most from GSDB, useful for microarray probe design and correction of individual hybridizations. Software implementing the method is publicly available.
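
    The idea of a gene-specific bias modulated by a per-slide factor can be sketched as a rank-one model of the log-ratio residuals, M[g,h] ≈ b[g]·s[h], estimated here with an SVD. This is a stand-in for the idea only, not the GASSCO implementation; all data and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
n_genes, n_slides = 200, 6

# Synthetic log-ratios (M values): per-gene dye bias b scaled by a
# per-slide factor s, plus noise; in self-self data the true signal is 0.
b = rng.standard_normal(n_genes)                # gene-specific dye bias
s = 1.0 + 0.3 * rng.standard_normal(n_slides)   # slide-specific modulation
M = np.outer(b, s) + 0.01 * rng.standard_normal((n_genes, n_slides))

# Rank-one estimate of the bias surface via SVD: one simple way to realize
# a "sequence-specific correction modulated by overall slide bias".
U, S, Vt = np.linalg.svd(M, full_matrices=False)
bias = S[0] * np.outer(U[:, 0], Vt[0])
corrected = M - bias
```

After subtraction, only the (small) measurement noise remains in this synthetic example.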

  4. Adaptable gene-specific dye bias correction for two-channel DNA microarrays

    PubMed Central

    Margaritis, Thanasis; Lijnzaad, Philip; van Leenen, Dik; Bouwmeester, Diane; Kemmeren, Patrick; van Hooff, Sander R; Holstege, Frank CP

    2009-01-01

    DNA microarray technology is a powerful tool for monitoring gene expression or for finding the location of DNA-bound proteins. DNA microarrays can suffer from gene-specific dye bias (GSDB), causing some probes to be affected more by the dye than by the sample. This results in large measurement errors, which vary considerably for different probes and also across different hybridizations. GSDB is not corrected by conventional normalization and has been difficult to address systematically because of its variance. We show that GSDB is influenced by label incorporation efficiency, explaining the variation of GSDB across different hybridizations. A correction method (Gene- And Slide-Specific Correction, GASSCO) is presented, whereby sequence-specific corrections are modulated by the overall bias of individual hybridizations. GASSCO outperforms earlier methods and works well on a variety of publicly available datasets covering a range of platforms, organisms and applications, including ChIP on chip. A sequence-based model is also presented, which predicts which probes will suffer most from GSDB, useful for microarray probe design and correction of individual hybridizations. Software implementing the method is publicly available. PMID:19401678

  5. Using polynomials to simplify fixed pattern noise and photometric correction of logarithmic CMOS image sensors.

    PubMed

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-10-16

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
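
    The "approximately-linear FPN calibration" that exploits monotonicity can be sketched as follows: regress each pixel's response on a reference (mean) response across a few flat-field exposures, then invert the per-pixel offset and gain. The logarithmic pixel model and all values are synthetic illustrations, not the paper's measured data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_cal = 64, 8                        # pixels, calibration exposures
stimulus = np.logspace(0, 3, n_cal)         # flat-field light levels

# Hypothetical logarithmic pixels: y = a_p + b_p * log(x), with FPN in a_p, b_p
a = 0.1 * rng.standard_normal(n_pix)
b = 1.0 + 0.05 * rng.standard_normal(n_pix)
responses = a[:, None] + b[:, None] * np.log(stimulus)[None, :]

# Approximately-linear FPN calibration: regress each pixel's (monotonic)
# response on the mean reference response; no circuit-based model needed.
ref = responses.mean(axis=0)
A = np.stack([np.ones(n_cal), ref], axis=1)
coef = np.linalg.lstsq(A, responses.T, rcond=None)[0]   # offset, gain per pixel
corrected = (responses - coef[0][:, None]) / coef[1][:, None]
# After correction, every pixel reproduces the reference response.
```

A fixed-point version of the same arithmetic (one offset and one gain per pixel) is what makes the correction cheap in hardware.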

  6. Using Polynomials to Simplify Fixed Pattern Noise and Photometric Correction of Logarithmic CMOS Image Sensors

    PubMed Central

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-01-01

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient. PMID:26501287

  7. Correcting bias in the rational polynomial coefficients of satellite imagery using thin-plate smoothing splines

    NASA Astrophysics Data System (ADS)

    Shen, Xiang; Liu, Bin; Li, Qing-Quan

    2017-03-01

    The Rational Function Model (RFM) has proven to be a viable alternative to the rigorous sensor models used for geo-processing of high-resolution satellite imagery. Because of various errors in the satellite ephemeris and instrument calibration, the Rational Polynomial Coefficients (RPCs) supplied by image vendors are often not sufficiently accurate, and there is therefore a clear need to correct the systematic biases in order to meet the requirements of high-precision topographic mapping. In this paper, we propose a new RPC bias-correction method using the thin-plate spline modeling technique. Benefiting from its excellent performance and high flexibility in data fitting, the thin-plate spline model has the potential to remove complex distortions in vendor-provided RPCs, such as the errors caused by short-period orbital perturbations. The performance of the new method was evaluated by using Ziyuan-3 satellite images and was compared against the recently developed least-squares collocation approach, as well as the classical affine-transformation and quadratic-polynomial based methods. The results show that the accuracies of the thin-plate spline and the least-squares collocation approaches were better than the other two methods, which indicates that strong non-rigid deformations exist in the test data because they cannot be adequately modeled by simple polynomial-based methods. The performance of the thin-plate spline method was close to that of the least-squares collocation approach when only a few Ground Control Points (GCPs) were used, and it improved more rapidly with an increase in the number of redundant observations. In the test scenario using 21 GCPs (some of them located at the four corners of the scene), the correction residuals of the thin-plate spline method were about 36%, 37%, and 19% smaller than those of the affine transformation method, the quadratic polynomial method, and the least-squares collocation algorithm, respectively, which demonstrates
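
    A minimal sketch of thin-plate-spline bias correction in image space, assuming synthetic GCPs and a smooth systematic bias (the real method operates on vendor RPCs of actual satellite scenes): fit a spline from RPC-projected coordinates to the GCP residuals, then add the predicted residual at any image point.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical GCP data: image coordinates predicted by the vendor RPCs
# versus their true (measured) image coordinates.
rng = np.random.default_rng(2)
rpc_xy = rng.uniform(0, 1000, (21, 2))               # RPC-projected positions
bias = np.stack([5 + 0.01 * rpc_xy[:, 0],            # smooth systematic bias
                 -3 + 0.008 * rpc_xy[:, 1]], axis=1)
true_xy = rpc_xy + bias

# Fit a thin-plate spline from RPC-projected coordinates to the residuals;
# smoothing=0 interpolates the GCPs exactly (increase it for noisy GCPs).
tps = RBFInterpolator(rpc_xy, true_xy - rpc_xy,
                      kernel='thin_plate_spline', smoothing=0.0)

corrected = rpc_xy + tps(rpc_xy)   # apply the correction at any image point
```

With redundant GCPs, a nonzero smoothing parameter trades exact interpolation for robustness to GCP measurement noise.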

  8. Astigmatism corrected common path probe for optical coherence tomography.

    PubMed

    Singh, Kanwarpal; Yamada, Daisuke; Tearney, Guillermo

    2017-03-01

    Optical coherence tomography (OCT) catheters for intraluminal imaging are subject to various artifacts due to reference-sample arm dispersion imbalances and sample arm beam astigmatism. The goal of this work was to develop a probe that minimizes such artifacts. Our probe was fabricated using a single mode fiber at the tip of which a glass spacer and graded index objective lens were spliced to achieve the desired focal distance. The signal was reflected using a curved reflector to correct for astigmatism caused by the thin, protective, transparent sheath that surrounds the optics. The probe design was optimized using Zemax, a commercially available optical design software. Common path interferometric operation was achieved using Fresnel reflection from the tip of the focusing graded index objective lens. The performance of the probe was tested using a custom designed spectrometer-based OCT system. The probe achieved an axial resolution of 15.6 μm in air, a lateral resolution of 33 μm, and a sensitivity of 103 dB. A scattering tissue phantom was imaged to test the performance of the probe for astigmatism correction. Images of the phantom confirmed that this common-path, astigmatism-corrected OCT imaging probe had minimal artifacts in the axial and lateral dimensions. In this work, we developed an astigmatism-corrected, common path probe that minimizes artifacts associated with standard OCT probes. This design may be useful for OCT applications that require high axial and lateral resolutions. Lasers Surg. Med. 49:312-318, 2017. © 2016 Wiley Periodicals, Inc.

  9. A new digitized reverse correction method for hypoid gears based on a one-dimensional probe

    NASA Astrophysics Data System (ADS)

    Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo

    2017-12-01

    In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on measurement data from a one-dimensional probe. The minimization of tooth surface geometric deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles with the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretically designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of a numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed through the organic combination of numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.

  10. Field curvature correction method for ultrashort throw ratio projection optics design using an odd polynomial mirror surface.

    PubMed

    Zhuang, Zhenfeng; Chen, Yanting; Yu, Feihong; Sun, Xiaowei

    2014-08-01

    This paper presents a field curvature correction method for designing an ultrashort throw ratio (TR) projection lens for an imaging system. The projection lens is composed of several refractive optical elements and an odd polynomial mirror surface. The refractive elements form a curved image, located away from the odd polynomial mirror surface, from the image on the digital micromirror device (DMD) panel; this curved image is a virtual image. The odd polynomial mirror surface then enlarges the curved image, and a plane image is formed on the screen. Based on the relationship between the chief ray from the exit pupil of each field of view (FOV) and the corresponding prescribed position on the screen, the initial profile of the freeform mirror surface is calculated using hyperbolic segments according to the laws of reflection. For further optimization, the freeform mirror surface is expressed as a high-order odd polynomial through a least-squares fitting method. As an example, an ultrashort TR projection lens that projects onto a large 50 in. screen at a distance of only 510 mm is presented. The optical performance of the designed projection lens is analyzed by ray tracing. Results show that a modulation transfer function of over 60% at 0.5 cycles/mm for all optimization fields is achievable with an f-number of 2.0, a 126° full FOV, <1% distortion, and a 0.46 TR. Moreover, comparing the proposed lens' optical specifications with those of traditional projection lenses, aspheric mirror projection lenses, and conventional short TR projection lenses indicates that this lens has the advantages of ultrashort TR, low f-number, wide full FOV, and small distortion.
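
    The least-squares step, fitting a profile with odd polynomial terms only, can be sketched in one dimension (the paper fits a full 2-D freeform surface; the profile below is a hypothetical example):

```python
import numpy as np

def fit_odd_polynomial(x, z, max_degree=9):
    """Least-squares fit of z(x) using odd powers x, x^3, ..., x^max_degree."""
    degrees = np.arange(1, max_degree + 1, 2)
    A = x[:, None] ** degrees[None, :]
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return degrees, coeffs

x = np.linspace(-1.0, 1.0, 200)
z = 0.8 * x + 0.1 * x**3 - 0.02 * x**5    # sample freeform sag profile
deg, c = fit_odd_polynomial(x, z)
fit = (x[:, None] ** deg[None, :]) @ c
```

Restricting the basis to odd powers enforces the odd symmetry of the mirror profile by construction, rather than hoping even coefficients fit to zero.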

  11. Hydrodynamics-based functional forms of activity metabolism: a case for the power-law polynomial function in animal swimming energetics.

    PubMed

    Papadopoulos, Anthony

    2009-01-01

    The first-degree power-law polynomial function is frequently used to describe activity metabolism for steady swimming animals. This function has been used in hydrodynamics-based metabolic studies to evaluate important parameters of energetic costs, such as the standard metabolic rate and the drag power indices. In theory, however, the power-law polynomial function of any degree greater than one can be used to describe activity metabolism for steady swimming animals. In fact, activity metabolism has been described by the conventional exponential function and the cubic polynomial function, although only the power-law polynomial function models drag power since it conforms to hydrodynamic laws. Consequently, the first-degree power-law polynomial function yields incorrect parameter values of energetic costs if activity metabolism is governed by the power-law polynomial function of any degree greater than one. This issue is important in bioenergetics because correct comparisons of energetic costs among different steady swimming animals cannot be made unless the degree of the power-law polynomial function derives from activity metabolism. In other words, a hydrodynamics-based functional form of activity metabolism is a power-law polynomial function of any degree greater than or equal to one. Therefore, the degree of the power-law polynomial function should be treated as a parameter, not as a constant. This new treatment not only conforms to hydrodynamic laws, but also ensures correct comparisons of energetic costs among different steady swimming animals. Furthermore, the exponential power-law function, which is a new hydrodynamics-based functional form of activity metabolism, is a special case of the power-law polynomial function. Hence, the link between the hydrodynamics of steady swimming and the exponential-based metabolic model is defined.
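
    Treating the power-law degree as a free parameter, as argued above, amounts to fitting P(v) = SMR + b·v^c with c estimated from the data rather than fixed at 1. A minimal sketch with synthetic swimming data (all parameter values hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def powerlaw(v, smr, b, c):
    """Activity metabolism: standard metabolic rate plus a drag-power term,
    with the power-law degree c treated as a parameter, not a constant."""
    return smr + b * v**c

# Synthetic data: true degree 2.4, SMR 50, drag coefficient 20 (hypothetical)
v = np.linspace(0.2, 2.0, 30)
rng = np.random.default_rng(3)
p = powerlaw(v, 50.0, 20.0, 2.4) + 0.01 * rng.standard_normal(30)

popt, _ = curve_fit(powerlaw, v, p, p0=[40.0, 10.0, 2.0])
smr_hat, b_hat, c_hat = popt
```

Forcing c = 1 on such data would bias the recovered SMR and drag index, which is exactly the comparison problem the abstract raises.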

  12. Correction factors for on-line microprobe analysis of multielement alloy systems

    NASA Technical Reports Server (NTRS)

    Unnam, J.; Tenney, D. R.; Brewer, W. D.

    1977-01-01

    An on-line correction technique was developed for the conversion of electron probe X-ray intensities into concentrations of emitting elements. This technique consisted of off-line calculation and representation of binary interaction data which were read into an on-line minicomputer to calculate variable correction coefficients. These coefficients were used to correct the X-ray data without significantly increasing computer core requirements. The binary interaction data were obtained by running Colby's MAGIC 4 program in the reverse mode. The data for each binary interaction were represented by polynomial coefficients obtained by least-squares fitting a third-order polynomial. Polynomial coefficients were generated for most of the common binary interactions at different accelerating potentials and are included. Results are presented for the analyses of several alloy standards to demonstrate the applicability of this correction procedure.
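
    The off-line/on-line split described above can be sketched as follows: fit a third-order polynomial to binary interaction data once, store the four coefficients, and evaluate them cheaply on-line. The k-ratio curve below is an assumed illustration, not output of Colby's MAGIC 4 program.

```python
import numpy as np

# Hypothetical binary-interaction data: measured X-ray k-ratio versus true
# concentration for one element pair at one accelerating potential.
conc = np.linspace(0.0, 1.0, 11)
k_ratio = conc * (1 - 0.15 * conc + 0.05 * conc**2)   # assumed matrix effect

# Off-line step: represent the interaction by third-order polynomial
# coefficients (four numbers per binary pair) via least squares.
coeffs = np.polyfit(k_ratio, conc, deg=3)

def correct(k):
    """On-line step: convert a measured k-ratio to concentration."""
    return np.polyval(coeffs, k)
```

Storing only polynomial coefficients per binary interaction is what keeps the on-line minicomputer's core requirements small.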

  13. Orthonormal vector general polynomials derived from the Cartesian gradient of the orthonormal Zernike-based polynomials.

    PubMed

    Mafusire, Cosmas; Krüger, Tjaart P J

    2018-06-01

    The concept of orthonormal vector circle polynomials is revisited by deriving a set from the Cartesian gradient of Zernike polynomials in a unit circle using a matrix-based approach. The heart of this model is a closed-form matrix equation of the gradient of Zernike circle polynomials expressed as a linear combination of lower-order Zernike circle polynomials related through a gradient matrix. This is a sparse matrix whose elements are two-dimensional standard basis transverse Euclidean vectors. Using the outer product form of the Cholesky decomposition, the gradient matrix is used to calculate a new matrix, which we used to express the Cartesian gradient of the Zernike circle polynomials as a linear combination of orthonormal vector circle polynomials. Since this new matrix is singular, the orthonormal vector polynomials are recovered by reducing the matrix to its row echelon form using the Gauss-Jordan elimination method. We extend the model to derive orthonormal vector general polynomials, which are orthonormal in a general pupil by performing a similarity transformation on the gradient matrix to give its equivalent in the general pupil. The outer form of the Gram-Schmidt procedure and the Gauss-Jordan elimination method are then applied to the general pupil to generate the orthonormal vector general polynomials from the gradient of the orthonormal Zernike-based polynomials. The performance of the model is demonstrated with a simulated wavefront in a square pupil inscribed in a unit circle.

  14. A new class of homogeneous nucleic acid probes based on specific displacement hybridization

    PubMed Central

    Li, Qingge; Luan, Guoyan; Guo, Qiuping; Liang, Jixuan

    2002-01-01

    We have developed a new class of probes for homogeneous nucleic acid detection based on the proposed displacement hybridization. Our probes consist of two complementary oligodeoxyribonucleotides of different length labeled with a fluorophore and a quencher in close proximity in the duplex. The probes on their own are quenched, but they become fluorescent upon displacement hybridization with the target. These probes display complete discrimination between a perfectly matched target and single nucleotide mismatch targets. A comparison of double-stranded probes with corresponding linear probes confirms that the presence of the complementary strand significantly enhances their specificity. Using four such probes labeled with different color fluorophores, each designed to recognize a different target, we have demonstrated that multiple targets can be distinguished in the same solution, even if they differ from one another by as little as a single nucleotide. Double-stranded probes were used in real-time nucleic acid amplifications as either probes or as primers. In addition to its extreme specificity and flexibility, the new class of probes is simple to design and synthesize, has low cost and high sensitivity and is accessible to a wide range of labels. This class of probes should find applications in a variety of areas wherever high specificity of nucleic acid hybridization is relevant. PMID:11788731

  15. Influence of surface error on electromagnetic performance of reflectors based on Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Li, Tuanjie; Shi, Jiachen; Tang, Yaqiong

    2018-04-01

    This paper investigates the influence of surface error distribution on the electromagnetic performance of antennas. Normalized Zernike polynomials are used to describe a smooth, continuous deformation surface. Based on geometrical optics and a piecewise linear fitting method, the electrical performance of a reflector described by Zernike polynomials is derived to reveal the relationship between surface error distribution and electromagnetic performance. A database relating surface figure to electrical performance is then built for ideal and deformed surfaces to enable rapid calculation of far-field electrical performance. A simulation analysis of the influence of Zernike polynomials on the electrical properties of an axisymmetric reflector fed by an axial-mode helical antenna is further conducted to verify the correctness of the proposed method. Finally, the rules by which surface error distribution influences electromagnetic performance are summarized. The simulation results show that some Zernike terms may decrease the amplitude of the main lobe of the antenna pattern, and some may reduce the pointing accuracy. This work provides a new approach to reflector shape adjustment in the manufacturing process.

  16. Development of Thinopyrum ponticum-specific molecular markers and FISH probes based on SLAF-seq technology.

    PubMed

    Liu, Liqin; Luo, Qiaoling; Teng, Wan; Li, Bin; Li, Hongwei; Li, Yiwen; Li, Zhensheng; Zheng, Qi

    2018-05-01

    Based on SLAF-seq, 67 Thinopyrum ponticum-specific markers and eight Th. ponticum-specific FISH probes were developed; these markers and probes can be used for detection of alien chromatin in a wheat background. Decaploid Thinopyrum ponticum (2n = 10x = 70) is a valuable gene reservoir for wheat improvement. Identification of Th. ponticum introgressions would facilitate their transfer into diverse wheat genetic backgrounds and their practical utilization in wheat improvement. Based on specific-locus-amplified fragment sequencing (SLAF-seq) technology, 67 new Th. ponticum-specific molecular markers and eight Th. ponticum-specific fluorescence in situ hybridization (FISH) probes have been developed from a tiny wheat-Th. ponticum translocation line. These newly developed molecular markers allow specific and stable detection of Th. ponticum DNA in a variety of materials at high throughput. According to their hybridization signal patterns, the eight Th. ponticum-specific probes can be divided into two groups. The first group, comprising five dispersed repetitive sequence probes, identifies Th. ponticum chromatin more sensitively and accurately than genomic in situ hybridization (GISH), whereas the second group, comprising three tandem repetitive sequence probes, enables discrimination of Th. ponticum chromosomes together with another clone, pAs1, in the wheat-Th. ponticum partial amphiploid Xiaoyan 68.

  17. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

    Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods; however, few of them have been applied in the field of LIBS technology, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. Experiments on the background correction simulation indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) over polynomial fitting, Lorentz fitting and the model-free method after background correction. All of these background correction methods acquire larger SBR values than before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method acquires a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods exhibit improved quantitative results for Cu compared with those acquired before background correction (the linear correlation coefficient value before background correction is 0.9776; moreover, the linear correlation coefficient values after background correction using spline interpolation, polynomial fitting, Lorentz…
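
    A minimal sketch of spline-based background estimation, assuming one simple anchor-selection rule (the window minima); the paper's actual detection procedure and the LIBS spectrum below are not reproduced, the data being synthetic:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_background(wavelengths, spectrum, n_windows=10):
    """Estimate the continuous background by passing a cubic spline through
    the minimum of each of n_windows equal spectral windows."""
    segments = np.array_split(np.arange(len(spectrum)), n_windows)
    anchors = [seg[np.argmin(spectrum[seg])] for seg in segments]
    cs = CubicSpline(wavelengths[anchors], spectrum[anchors])
    return cs(wavelengths)

# Synthetic LIBS-like spectrum: smooth continuous background plus one
# narrow emission line at 400 nm.
w = np.linspace(200.0, 800.0, 600)
background = 100.0 + 0.1 * w - 5e-5 * w**2
lines = 500.0 * np.exp(-0.5 * ((w - 400.0) / 1.0) ** 2)
spectrum = background + lines

corrected = spectrum - spline_background(w, spectrum)
```

Because window minima fall between emission lines, the spline tracks the smooth background and subtraction leaves the line intensities for quantitative analysis.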

  18. Selection of turning-on fluorogenic probe as protein-specific detector obtained via the 10BASEd-T

    NASA Astrophysics Data System (ADS)

    Uematsu, Shuta; Midorikawa, Taiki; Ito, Yuji; Taki, Masumi

    2017-01-01

    In order to obtain a molecular probe for specific protein detection, we have synthesized a fluorogenic probe library of vast diversity on bacteriophage T7 via the gp10 based-thioetherification (10BASEd-T). A remarkable turning-on probe, excitable by widely applicable visible light, was selected from the library.

  19. Perceptually informed synthesis of bandlimited classical waveforms using integrated polynomial interpolation.

    PubMed

    Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan

    2012-01-01

    Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce similar-sounding signals to analog music synthesizers but which are free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, which is defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The superior method amongst those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
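
    The "polynomial correction function added around each discontinuity" can be illustrated with the common two-sample quadratic polyBLEP residual below. Note this is a lower-order variant than the paper's preferred integrated third-order B-spline correction, shown only to make the mechanism concrete:

```python
import numpy as np

def poly_blep(t, dt):
    """Two-sample quadratic polynomial residual applied around each
    discontinuity to suppress aliasing; t is phase in [0, 1)."""
    if t < dt:                 # sample just after the discontinuity
        t = t / dt
        return t + t - t * t - 1.0
    if t > 1.0 - dt:           # sample just before the discontinuity
        t = (t - 1.0) / dt
        return t * t + t + t + 1.0
    return 0.0                 # far from any discontinuity: no correction

def sawtooth_blep(freq, sr, n):
    """Approximately bandlimited sawtooth: trivial waveform minus residual."""
    dt = freq / sr
    phase = (np.arange(n) * dt) % 1.0
    naive = 2.0 * phase - 1.0          # trivial (aliased) sawtooth
    return np.array([s - poly_blep(t, dt) for s, t in zip(naive, phase)])

wave = sawtooth_blep(440.0, 44100.0, 1024)
```

Replacing the quadratic residual with an integrated B-spline correction function changes only `poly_blep`; the subtraction around each discontinuity stays the same.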

  20. Activity-Based Probes for Isoenzyme- and Site-Specific Functional Characterization of Glutathione S -Transferases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoddard, Ethan G.; Killinger, Bryan J.; Nair, Reji N.

    Glutathione S-transferases (GSTs) comprise a highly diverse family of phase II drug metabolizing enzymes whose shared function is the conjugation of reduced glutathione to various endo- and xenobiotics. Although the conglomerate activity of these enzymes can be measured by colorimetric assays, measurement of the individual contribution from specific isoforms and their contribution to the detoxification of xenobiotics in complex biological samples has not been possible. For this reason, we have developed two activity-based probes that characterize active glutathione transferases in mammalian tissues. The GST active site is comprised of a glutathione binding “G site” and a distinct substrate binding “Hmore » site”. Therefore, we developed (1) a glutathione-based photoaffinity probe (GSH-ABP) to target the “G site”, and (2) a probe designed to mimic a substrate molecule and show “H site” activity (GST-ABP). The GSH-ABP features a photoreactive moiety for UV-induced covalent binding to GSTs and glutathione-binding enzymes. The GST-ABP is a derivative of a known mechanism-based GST inhibitor that binds within the active site and inhibits GST activity. Validation of probe targets and “G” and “H” site specificity was carried out using a series of competitors in liver homogenates. Herein, we present robust tools for the novel characterization of enzyme- and active site-specific GST activity in mammalian model systems.« less

  1. Accurate polynomial expressions for the density and specific volume of seawater using the TEOS-10 standard

    NASA Astrophysics Data System (ADS)

    Roquet, F.; Madec, G.; McDougall, Trevor J.; Barker, Paul M.

    2015-06-01

    A new set of approximations to the standard TEOS-10 equation of state is presented. These follow a polynomial form, making them computationally efficient for use in numerical ocean models. Two versions are provided, the first being a fit of density for Boussinesq ocean models, and the second fitting specific volume, which is more suitable for compressible models. Both versions are given as the sum of a vertical reference profile (6th-order polynomial) and an anomaly (52-term polynomial, cubic in pressure), with relative errors of ∼0.1% on the thermal expansion coefficients. A 75-term polynomial expression is also presented for computing specific volume, with better accuracy than the existing TEOS-10 48-term rational approximation, especially regarding the sound speed, and it is suggested that this expression represents a valuable approximation of the TEOS-10 equation of state for hydrographic data analysis. In the last section, practical aspects of the implementation of TEOS-10 in ocean models are discussed.
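
A minimal sketch of how such a polynomial fit is evaluated efficiently: Horner's rule applied to a 6th-order vertical reference profile, plus an anomaly term. The coefficients below are placeholders for illustration only, not the published TEOS-10 fit.

```python
def horner(coeffs, x):
    """Evaluate a polynomial given highest-degree-first coefficients."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

# Placeholder coefficients -- NOT the published TEOS-10 polynomial.
R0 = [1e-17, -3e-14, 5e-11, -8e-8, 1e-4, -0.05, 1028.0]  # 6th-order in depth z

def density(z, anomaly=0.0):
    """rho(z) = vertical reference profile + anomaly term.

    In the paper the anomaly is a 52-term polynomial in salinity,
    temperature and pressure; a constant stands in for it here."""
    return horner(R0, z) + anomaly
```

Horner's rule evaluates a degree-n polynomial with n multiplies and n adds, which is why the polynomial form is attractive inside an ocean model's inner loop.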

  2. Mobile Image Based Color Correction Using Deblurring

    PubMed Central

    Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.

    2016-01-01

    Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed by utilizing food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e. a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique by combining image de-blurring and color correction. The contribution consists of introducing an automatic camera shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space. PMID:28572697

  3. Mobile image based color correction using deblurring

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.

    2015-03-01

    Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed by utilizing food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e. a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique by combining image de-blurring and color correction. The contribution consists of introducing an automatic camera shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space.

  4. Thermodynamic correction of particle concentrations measured by underwing probes on fast flying aircraft

    NASA Astrophysics Data System (ADS)

    Weigel, R.; Spichtinger, P.; Mahnke, C.; Klingebiel, M.; Afchine, A.; Petzold, A.; Krämer, M.; Costa, A.; Molleker, S.; Jurkat, T.; Minikin, A.; Borrmann, S.

    2015-12-01

    ξ as a function of TAS is provided for instances in which PAS measurements are lacking. The ξ-correction yields ambient particle concentrations about 15-25 % higher than those from conventional procedures - an improvement that is significant for many research applications. The calculated ξ-values are specifically related to the considered HALO underwing probe arrangement and may differ for other aircraft or instrument geometries. Moreover, the ξ-correction may not cover all impacts originating from high flight velocities and from interferences between the instruments and, e.g., the aircraft wings and/or fuselage. Consequently, it is important that PAS (as a function of TAS) is individually measured by each probe deployed underneath the wings of a fast-flying aircraft.

  5. Thermodynamic correction of particle concentrations measured by underwing probes on fast-flying aircraft

    NASA Astrophysics Data System (ADS)

    Weigel, Ralf; Spichtinger, Peter; Mahnke, Christoph; Klingebiel, Marcus; Afchine, Armin; Petzold, Andreas; Krämer, Martina; Costa, Anja; Molleker, Sergej; Reutter, Philipp; Szakáll, Miklós; Port, Max; Grulich, Lucas; Jurkat, Tina; Minikin, Andreas; Borrmann, Stephan

    2016-10-01

    underwing probe configuration. The ability of cloud particles to adapt to changes of air speed between ambient and measurement conditions depends on the cloud particles' inertia as a function of particle size (diameter Dp). The suggested inertia correction factor μ(Dp) for liquid cloud drops ranges between 1 (for Dp < 70 µm) and 0.8 (for 100 µm < Dp < 225 µm) but needs to be applied carefully with respect to the particles' phase and nature. The correction of measured concentrations by both factors, ξ and μ(Dp), yields ambient particle concentrations about 10-25 % higher than those from conventional procedures - an improvement that is significant for many research applications. The calculated ξ values are specifically related to the considered HALO underwing probe arrangement and may differ for other aircraft. Moreover, the suggested corrections may not cover all impacts originating from high flight velocities and from interferences between the instruments and, e.g., the aircraft wings and/or fuselage. Consequently, it is important that PAS (as a function of TAS) is individually measured by each probe deployed underneath the wings of a fast-flying aircraft.

  6. A polynomial based model for cell fate prediction in human diseases.

    PubMed

    Ma, Lichun; Zheng, Jie

    2017-12-21

    Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decisions sheds light on key regulators, facilitates understanding of the mechanisms, and suggests novel strategies to treat human diseases related to abnormal cell development. In this study, we proposed a polynomial based model to predict cell fate. This model was derived from the Taylor series. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e. correlation based and apoptosis pathway based. Then polynomials of different degrees were used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resultant cell fate prediction model by evaluating the ranges of the parameters, as well as assessing the variances of the predicted values at randomly selected points. Results show that, with both of the considered gene selection methods, the prediction accuracies of polynomials of different degrees differ little. Interestingly, the linear polynomial (degree 1) is more stable than the others. Comparing the linear polynomials based on the two gene selection methods shows that although the accuracy of the one using correlation analysis outcomes is slightly higher (86.62%), the one based on genes of the apoptosis pathway is much more stable. Considering both the prediction accuracy and the stability of polynomial models of different degrees, the linear model is the preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cell types, which may be important for basic research as well as clinical studies of cell development related diseases.
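
The evaluation protocol described above (polynomial feature expansion plus k-fold cross-validation) can be sketched with ordinary least squares on synthetic data. The helper names, per-feature power expansion, and 0.5 decision threshold are assumptions for illustration, not the authors' code.

```python
import numpy as np

def poly_design(X, degree):
    """Design matrix: bias column plus each feature raised to powers
    1..degree (a simplified stand-in for a truncated Taylor expansion)."""
    cols = [np.ones((X.shape[0], 1))]
    for d in range(1, degree + 1):
        cols.append(X ** d)
    return np.hstack(cols)

def cv_accuracy(X, y, degree, folds=10, seed=0):
    """k-fold cross-validated accuracy of a least-squares polynomial
    model with a hypothetical 0.5 decision threshold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    scores = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        A = poly_design(X[train], degree)
        w, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = poly_design(X[test], degree) @ w > 0.5
        scores.append(np.mean(pred == (y[test] > 0.5)))
    return float(np.mean(scores))
```

Comparing `cv_accuracy(X, y, 1)` with higher degrees mirrors the paper's finding that accuracy changes little with degree, while stability favors the linear model.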

  7. A Formally-Verified Decision Procedure for Univariate Polynomial Computation Based on Sturm's Theorem

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony J.; Munoz, Cesar A.

    2014-01-01

    Sturm's Theorem is a well-known result in real algebraic geometry that provides a function that computes the number of roots of a univariate polynomial in a semiopen interval. This paper presents a formalization of this theorem in the PVS theorem prover, as well as a decision procedure that checks whether a polynomial is always positive, nonnegative, nonzero, negative, or nonpositive on any input interval. The soundness and completeness of the decision procedure are proven in PVS. The procedure and its correctness properties enable the implementation of a PVS strategy for automatically proving existential and universal univariate polynomial inequalities. Since the decision procedure is formally verified in PVS, the soundness of the strategy depends solely on the internal logic of PVS rather than on an external oracle. The procedure itself uses a combination of Sturm's Theorem, an interval bisection procedure, and the fact that a polynomial with exactly one root in a bounded interval is always nonnegative on that interval if and only if it is nonnegative at both endpoints.
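
The root-counting core of Sturm's Theorem can be sketched in a few functions: build the Sturm chain by repeated polynomial remaindering (negating each remainder), then subtract the sign-change counts at the interval endpoints. This is a plain illustration of the theorem using exact rational arithmetic, not the PVS formalization.

```python
from fractions import Fraction

def poly_eval(p, x):
    """Horner evaluation; coefficients are highest degree first."""
    acc = Fraction(0)
    for c in p:
        acc = acc * x + c
    return acc

def poly_deriv(p):
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])]

def poly_rem(a, b):
    """Remainder of polynomial division a / b."""
    a = list(a)
    while len(a) >= len(b):
        if a[0] != 0:
            f = a[0] / b[0]
            for i in range(1, len(b)):
                a[i] -= f * b[i]
        a.pop(0)                      # leading term is now zero
    while a and a[0] == 0:            # strip remaining leading zeros
        a.pop(0)
    return a

def sign_changes(chain, x):
    vals = [poly_eval(q, x) for q in chain]
    signs = [v > 0 for v in vals if v != 0]
    return sum(1 for u, v in zip(signs, signs[1:]) if u != v)

def count_roots(p, a, b):
    """Number of distinct real roots of p in the half-open interval (a, b]."""
    chain = [[Fraction(c) for c in p]]
    chain.append(poly_deriv(chain[0]))
    while True:
        r = poly_rem(chain[-2], chain[-1])
        if not r:
            break
        chain.append([-c for c in r])   # Sturm chain: negated remainder
    return sign_changes(chain, a) - sign_changes(chain, b)
```

Exact `Fraction` arithmetic avoids the sign errors that floating-point remainders can introduce, which is the numerical analogue of the soundness concern the formal verification addresses.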

  8. Clearing the waters: Evaluating the need for site-specific field fluorescence corrections based on turbidity measurements

    USGS Publications Warehouse

    Saraceno, John F.; Shanley, James B.; Downing, Bryan D.; Pellerin, Brian A.

    2017-01-01

    In situ fluorescent dissolved organic matter (fDOM) measurements have gained increasing popularity as a proxy for dissolved organic carbon (DOC) concentrations in streams. One challenge to accurate fDOM measurements in many streams is light attenuation due to suspended particles. Downing et al. (2012) evaluated the need for corrections to compensate for particle interference on fDOM measurements using a single sediment standard in a laboratory study. The application of those results to a large river improved unfiltered field fDOM accuracy. We tested the same correction equation in a headwater tropical stream and found that it overcompensated fDOM when turbidity exceeded ∼300 formazin nephelometric units (FNU). Therefore, we developed a site-specific, field-based fDOM correction equation through paired in situ fDOM measurements of filtered and unfiltered streamwater. The site-specific correction increased fDOM accuracy up to a turbidity as high as 700 FNU, the maximum observed in this study. The difference in performance between the laboratory-based correction equation of Downing et al. (2012) and our site-specific, field-based correction equation likely arises from differences in particle size distribution between the sediment standard used in the lab (silt) and that observed in our study (fine to medium sand), particularly during high flows. Therefore, a particle interference correction equation based on a single sediment type may not be ideal when field sediment size is significantly different. Given that field fDOM corrections for particle interference under turbid conditions are a critical component in generating accurate DOC estimates, we describe a way to develop site-specific corrections.
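
A site-specific correction of the kind described (paired filtered/unfiltered fDOM measurements regressed against turbidity) might be sketched as follows. The data values and the quadratic functional form are hypothetical, not the equation developed in the study.

```python
import numpy as np

# Hypothetical paired observations: filtered (particle-free) and unfiltered
# fDOM readings at matched times, with coincident turbidity in FNU.
turbidity = np.array([10.0, 50.0, 150.0, 300.0, 500.0, 700.0])
fdom_filtered = np.array([40.0, 41.0, 39.0, 40.0, 42.0, 41.0])
attenuation = np.array([0.98, 0.90, 0.75, 0.55, 0.35, 0.22])
fdom_unfiltered = fdom_filtered * attenuation

# Site-specific fit: fraction of the fDOM signal surviving at a given turbidity.
coeffs = np.polyfit(turbidity, fdom_unfiltered / fdom_filtered, deg=2)

def correct_fdom(fdom_raw, turb):
    """Divide the raw signal by the fitted attenuation fraction (clipped
    so the correction never amplifies by more than 20x)."""
    frac = np.clip(np.polyval(coeffs, turb), 0.05, 1.0)
    return fdom_raw / frac
```

As the study emphasizes, such a fit is only valid for the particle size distribution it was derived from; a correction fitted to silt would not transfer to a sand-dominated site.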

  9. Extending a Property of Cubic Polynomials to Higher-Degree Polynomials

    ERIC Educational Resources Information Center

    Miller, David A.; Moseley, James

    2012-01-01

    In this paper, the authors examine a property that holds for all cubic polynomials given two zeros. This property is discovered after reviewing a variety of ways to determine the equation of a cubic polynomial given specific conditions through algebra and calculus. At the end of the article, they will connect the property to a very famous method…

  10. Coherent orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Celeghini, E., E-mail: celeghini@fi.infn.it; Olmo, M.A. del, E-mail: olmo@fta.uva.es

    2013-08-15

    We discuss a fundamental characteristic of orthogonal polynomials, like the existence of a Lie algebra behind them, which can be added to their other relevant aspects. At the basis of the complete framework for orthogonal polynomials we include thus–in addition to differential equations, recurrence relations, Hilbert spaces and square integrable functions–Lie algebra theory. We start here from the square integrable functions on the open connected subset of the real line whose bases are related to orthogonal polynomials. All these one-dimensional continuous spaces allow, besides the standard uncountable basis (|x〉), for an alternative countable basis (|n〉). The matrix elements that relate these two bases are essentially the orthogonal polynomials: Hermite polynomials for the line and Laguerre and Legendre polynomials for the half-line and the line interval, respectively. Differential recurrence relations of orthogonal polynomials allow us to realize that they determine an infinite-dimensional irreducible representation of a non-compact Lie algebra, whose second order Casimir C gives rise to the second order differential equation that defines the corresponding family of orthogonal polynomials. Thus, the Weyl–Heisenberg algebra h(1) with C=0 for Hermite polynomials and su(1,1) with C=−1/4 for Laguerre and Legendre polynomials are obtained. Starting from the orthogonal polynomials the Lie algebra is extended both to the whole space of the L² functions and to the corresponding Universal Enveloping Algebra and transformation group. Generalized coherent states from each vector in the space L² and, in particular, generalized coherent polynomials are thus obtained. -- Highlights: •Fundamental characteristic of orthogonal polynomials (OP): existence of a Lie algebra. •Differential recurrence relations of OP determine a unitary representation of a non-compact Lie group. •2nd order Casimir originates a 2nd order differential equation that
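
As a concrete instance of the recurrence relations mentioned above, the coefficients of the physicists' Hermite polynomials can be generated directly from the three-term recurrence H_{n+1} = 2x·H_n − 2n·H_{n−1} (generic illustration; the function name is ours):

```python
def hermite(n):
    """Coefficients (highest degree first) of the physicists' Hermite
    polynomial H_n, built from the three-term recurrence."""
    h_prev, h = [1], [2, 0]              # H_0 = 1, H_1 = 2x
    if n == 0:
        return h_prev
    for k in range(1, n):
        # 2x * H_k : multiply by 2 and shift up one degree
        term1 = [2 * c for c in h] + [0]
        # 2k * H_{k-1}, zero-padded to the same length
        term2 = [0] * (len(term1) - len(h_prev)) + [2 * k * c for c in h_prev]
        h_prev, h = h, [a - b for a, b in zip(term1, term2)]
    return h
```

For example, the recurrence reproduces H_2 = 4x² − 2 and H_3 = 8x³ − 12x, the polynomials whose differential equation carries the h(1) Casimir C = 0 mentioned in the abstract.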

  11. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    PubMed

    Liu, Chengjun

    2004-05-01

    This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power
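
The key step described above can be sketched as follows: a fractional power polynomial Gram matrix need not be positive semidefinite, so only eigenvectors associated with positive eigenvalues are retained. The kernel form, centering, and tolerance below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def frac_poly_kernel(X, d=0.8):
    """Gram matrix of k(x, y) = sign(x.y) * |x.y|**d; for fractional d
    this is not guaranteed to be positive semidefinite."""
    G = X @ X.T
    return np.sign(G) * np.abs(G) ** d

def kernel_pca(K, n_components):
    """Kernel PCA keeping only eigenvectors with positive eigenvalues."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)         # ascending eigenvalues
    keep = vals > 1e-10                     # discard non-positive directions
    vals, vecs = vals[keep][::-1], vecs[:, keep][:, ::-1]
    vals, vecs = vals[:n_components], vecs[:, :n_components]
    return vecs * np.sqrt(vals)             # projections of the training set
```

Discarding the non-positive eigendirections is what makes the extracted features real-valued despite the indefinite "kernel", paralleling the paper's treatment of fractional power polynomial models.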

  12. Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna R.; Wickert, Mark A.

    2017-05-01

    A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current, amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear, or piecewise linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise linear models perform better than the one and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher order polynomial NUC algorithms feasible. This study comprehensively tests higher order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods including the standard one and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating point results show 30% less non-uniformity, in post-corrected data, when using a third order polynomial correction algorithm rather than a second order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing, across multiple data sets, provides next generation model validation and performance benchmarks for higher order polynomial NUC methods.
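
A minimal per-pixel polynomial NUC sketch of the kind tested above: fit each pixel's flat-field response at several flux levels to a reference response with a third-order polynomial, then apply the fitted map to raw frames. The simulated pixel responses are hypothetical.

```python
import numpy as np

def fit_nuc(measurements, targets, order=3):
    """Per-pixel polynomial NUC calibration.

    measurements : (n_levels, n_pixels) flat-field frames at several flux levels
    targets      : (n_levels,) reference response, e.g. the per-level mean
    Returns an (order+1, n_pixels) coefficient array for np.polyval.
    """
    coeffs = np.empty((order + 1, measurements.shape[1]))
    for p in range(measurements.shape[1]):
        coeffs[:, p] = np.polyfit(measurements[:, p], targets, order)
    return coeffs

def apply_nuc(frame, coeffs):
    """Map each raw pixel value through its fitted polynomial."""
    return np.array([np.polyval(coeffs[:, p], v) for p, v in enumerate(frame)])
```

Note the storage trade-off the abstract describes: a third-order correction stores four coefficients per pixel, versus two for the standard two-point method.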

  13. Development of Prevotella intermedia-specific PCR primers based on the nucleotide sequences of a DNA probe Pig27.

    PubMed

    Kim, Min Jung; Hwang, Kyung Hwan; Lee, Young-Seok; Park, Jae-Yoon; Kook, Joong-Ki

    2011-03-01

    The aim of this study was to develop Prevotella intermedia-specific PCR primers based on the P. intermedia-specific DNA probe. The P. intermedia-specific DNA probe was screened by inverted dot blot hybridization and confirmed by Southern blot hybridization. The nucleotide sequences of the species-specific DNA probes were determined using the chain termination method. Southern blot analysis showed that the DNA probe, Pig27, detected only the genomic DNA of P. intermedia strains. PCR showed that the primers Pin-F1/Pin-R1 were species-specific for P. intermedia. The detection limit of the PCR primers was 0.4 pg of purified genomic DNA of P. intermedia ATCC 49046. These results suggest that the primers Pin-F1/Pin-R1 could be useful in the detection of P. intermedia as well as in the development of a PCR kit for epidemiological studies related to periodontal diseases. Crown Copyright © 2010. Published by Elsevier B.V. All rights reserved.

  14. Resistivity Correction Factor for the Four-Probe Method: Experiment II

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo

    1989-05-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F can be applied to a system consisting of a disk sample and a four-probe array. Measurements are made on isotropic graphite disks and crystalline ITO films. Factor F can correct the apparent variations of the data and lead to reasonable resistivities and sheet resistances. Here factor F is compared to other correction factors, i.e., F_ASTM and F_JIS.

  15. Resistivity Correction Factor for the Four-Probe Method: Experiment I

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Enjoji, Hideo

    1988-05-01

    Experimental verification of the theoretically derived resistivity correction factor (RCF) is presented. Resistivity and sheet resistance measurements by the four-probe method are made on three samples: isotropic graphite, ITO film and Au film. It is indicated that the RCF can correct the apparent variations of experimental data to yield reasonable resistivities and sheet resistances.
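
For context, the ideal in-line four-probe relation on an infinite thin sheet is R_s = (π/ln 2)(V/I) ≈ 4.532 V/I; the geometry-dependent correction factor discussed in these papers multiplies that result for finite samples. In this sketch F is a user-supplied value, not the derived RCF from the papers:

```python
import math

def sheet_resistance(voltage, current, F=1.0):
    """Four-probe sheet resistance (ohm/sq).

    For an infinite thin sheet probed by an equally spaced in-line array,
    R_s = (pi / ln 2) * (V / I).  The factor F is the geometry-dependent
    correction for finite samples; F = 1 recovers the ideal case.
    """
    return (math.pi / math.log(2.0)) * (voltage / current) * F

def resistivity(voltage, current, thickness, F=1.0):
    """Bulk resistivity (ohm*m) for a film of known thickness (m)."""
    return sheet_resistance(voltage, current, F) * thickness
```

The experiments above amount to checking that measured V/I values, multiplied by the appropriate F for the sample's shape and probe placement, collapse to a single resistivity.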

  16. Development of species-specific hybridization probes for marine luminous bacteria by using in vitro DNA amplification.

    PubMed Central

    Wimpee, C F; Nadeau, T L; Nealson, K H

    1991-01-01

    By using two highly conserved regions of the luxA gene as primers, polymerase chain reaction amplification methods were used to prepare species-specific probes against the luciferase gene from four major groups of marine luminous bacteria. Laboratory studies with test strains indicated that three of the four probes cross-reacted with themselves and with one or more of the other species at low stringencies but were specific for members of their own species at high stringencies. The fourth probe, generated from Vibrio harveyi DNA, cross-reacted with DNAs from two closely related species, V. orientalis and V. vulnificus. When nonluminous cultures were tested with the species-specific probes, no false-positive results were observed, even at low stringencies. Two field isolates were correctly identified as Photobacterium phosphoreum by using the species-specific hybridization probes at high stringency. A mixed probe (four different hybridization probes) used at low stringency gave positive results with all of the luminous bacteria tested, including the terrestrial species, Xenorhabdus luminescens, and the taxonomically distinct marine bacterial species Shewanella hanedai; minimal cross-hybridization with these species was seen at higher stringencies. PMID:1854194

  17. Resistivity Correction Factor for the Four-Probe Method: Experiment III

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo; Iwata, Atsushi

    1990-04-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F is applied to a system consisting of a rectangular parallelepiped sample and a square four-probe array. Resistivity and sheet resistance measurements are made on isotropic graphites and crystalline ITO films. Factor F corrects experimental data and leads to reasonable resistivity and sheet resistance.

  18. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules.
SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and
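
The "handful of matrix operations on the Hankel matrix of moments" can be sketched with the classical Golub–Welsch-style construction: Cholesky-factor the moment matrix, read off the three-term recurrence coefficients, and take the eigendecomposition of the resulting Jacobi matrix. This is a generic illustration of quadrature-from-moments, not the SAMBA code.

```python
import numpy as np

def quadrature_from_moments(moments):
    """Gaussian quadrature nodes/weights from raw moments m_0..m_{2n}.

    Builds the (n+1)x(n+1) Hankel matrix of moments, Cholesky-factors it,
    extracts the three-term recurrence coefficients, and diagonalizes the
    symmetric tridiagonal Jacobi matrix (Golub-Welsch)."""
    m = np.asarray(moments, dtype=float)
    n = (len(m) - 1) // 2                       # number of quadrature nodes
    H = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])
    R = np.linalg.cholesky(H).T                 # upper triangular factor
    alpha = np.empty(n)
    alpha[0] = R[0, 1] / R[0, 0]
    for k in range(1, n):
        alpha[k] = R[k, k + 1] / R[k, k] - R[k - 1, k] / R[k - 1, k - 1]
    beta = np.array([R[k, k] / R[k - 1, k - 1] for k in range(1, n)])
    J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    nodes, vecs = np.linalg.eigh(J)             # eigenvalues are the nodes
    weights = m[0] * vecs[0, :] ** 2            # first-row components squared
    return nodes, weights
```

Feeding in the moments of the uniform density on [-1, 1] recovers the Gauss–Legendre rule, which is the consistency check one would expect of any moment-based quadrature construction.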

  19. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    NASA Astrophysics Data System (ADS)

    Ahlfeld, R.; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10

  20. Three-dimensional ophthalmic optical coherence tomography with a refraction correction algorithm

    NASA Astrophysics Data System (ADS)

    Zawadzki, Robert J.; Leisser, Christoph; Leitgeb, Rainer; Pircher, Michael; Fercher, Adolf F.

    2003-10-01

    We built an optical coherence tomography (OCT) system with a rapid scanning optical delay (RSOD) line, which allows probing the full axial eye length. The system produces three-dimensional (3D) data sets that are used to generate 3D tomograms of the model eye. The raw tomographic data were processed by an algorithm based on Snell's law to correct the interface positions. The Zernike polynomial representation of the interfaces allows quantitative wave aberration measurements. 3D images of our results are presented to illustrate the capabilities of the system and the algorithm performance. The system allows us to measure intra-ocular distances.
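
The Zernike representation used for the interfaces rests on the radial polynomials R_n^m; a standard closed-form evaluation can be sketched as follows (generic textbook formula, not the authors' implementation):

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial Zernike polynomial R_n^m(rho) for 0 <= |m| <= n.

    Zero when n - |m| is odd; otherwise the standard factorial sum."""
    m = abs(m)
    if (n - m) % 2:
        return 0.0
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k)
           * factorial((n + m) // 2 - k)
           * factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )
```

For example, R_2^0(ρ) = 2ρ² − 1 is the defocus term, one of the low-order aberrations such a wavefront fit would quantify.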

  1. A general U-block model-based design procedure for nonlinear polynomial control systems

    NASA Astrophysics Data System (ADS)

    Zhu, Q. M.; Zhao, D. Y.; Zhang, Jianhua

    2016-10-01

    The proposition of the U-model concept (in terms of 'providing concise and applicable solutions for complex problems') and a corresponding basic U-control design algorithm originated in the first author's PhD thesis. The term U-model appeared (not rigorously defined) for the first time in the first author's other journal paper, which established a framework for using linear polynomial control system design approaches to design nonlinear polynomial control systems (in brief, linear polynomial approaches → nonlinear polynomial plants). This paper presents the next milestone: using linear state-space approaches to design nonlinear polynomial control systems (in brief, linear state-space approaches → nonlinear polynomial plants). The overall aim of the study is to establish a framework, defined as the U-block model, which provides a generic prototype for using linear state-space-based approaches to design control systems with smooth nonlinear plants/processes described by polynomial models. For analysing feasibility and effectiveness, the sliding mode control design approach is selected as an exemplary case study. Numerical simulation studies provide a user-friendly step-by-step procedure for readers/users with interest in their ad hoc applications. This is the first paper to present the U-model-oriented control system design formally and to study the associated properties and theorems. The previous publications, in the main, have been algorithm-based studies and simulation demonstrations. In some sense, this paper can be treated as a landmark for U-model-based research, marking the move from the intuitive/heuristic stage to rigorous, formal and comprehensive studies.

  2. Development of mercury (II) ion biosensors based on mercury-specific oligonucleotide probes.

    PubMed

    Li, Lanying; Wen, Yanli; Xu, Li; Xu, Qin; Song, Shiping; Zuo, Xiaolei; Yan, Juan; Zhang, Weijia; Liu, Gang

    2016-01-15

    Mercury (II) ion (Hg(2+)) contamination can accumulate along the food chain and poses a serious threat to public health. Considerable research effort has therefore been devoted to the development of fast, sensitive and selective biosensors for monitoring Hg(2+). Thymine was demonstrated to combine specifically with Hg(2+) and form a thymine-Hg(2+)-thymine (T-Hg(2+)-T) structure, with a binding constant even higher than that of the T-A Watson-Crick pair in the DNA duplex. Recently, various novel Hg(2+) biosensors have been developed based on T-rich Mercury-Specific Oligonucleotide (MSO) probes, exhibiting advanced selectivity and excellent sensitivity for Hg(2+) detection. In this review, we explain recent developments in MSO-based Hg(2+) biosensors in 3 groups: fluorescent biosensors, colorimetric biosensors and electrochemical biosensors. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. HybProbes-based real-time PCR assay for specific identification of Streptomyces scabies and Streptomyces europaeiscabiei, the potato common scab pathogens.

    PubMed

    Xu, R; Falardeau, J; Avis, T J; Tambong, J T

    2016-02-01

    The aim of this study was to develop and validate a HybProbes-based real-time PCR assay targeting the trpB gene for specific identification of Streptomyces scabies and Streptomyces europaeiscabiei. Four primer pairs and a fluorescent probe were designed and evaluated for specificity in identifying S. scabies and S. europaeiscabiei, the potato common scab pathogens. The specificity of the HybProbes-based real-time PCR assay was evaluated using 46 bacterial strains: 23 Streptomyces strains and 23 non-Streptomyces bacterial species. Specific and strong fluorescence signals were detected from all nine strains of S. scabies and S. europaeiscabiei. No fluorescence signal was detected from 14 strains of other Streptomyces species or from any of the non-Streptomyces strains. The identification was corroborated by melting curve analysis performed immediately after the amplification step. Eight of the nine S. scabies and S. europaeiscabiei strains exhibited a unique melting peak at a Tm of 69·1°C, while one strain, Warba-6, had a melting peak at a Tm of 65·4°C. This difference in Tm peaks could be attributed to a guanine-to-cytosine mutation in strain Warba-6 in the region spanning the donor HybProbe. The reported HybProbes assay provides a more specific tool for accurate identification of S. scabies and S. europaeiscabiei strains. This study reports a novel assay based on HybProbes chemistry for rapid and accurate identification of the potato common scab pathogens. Since the HybProbes chemistry requires two probes for positive identification, the assay is considered more specific than conventional PCR or TaqMan real-time PCR. The developed assay would be a useful tool with great potential for early diagnosis and detection of common scab pathogens of potato in infected plants, or for surveillance of potatoes grown in a soil environment.

  4. Polynomial equations for science orbits around Europa

    NASA Astrophysics Data System (ADS)

    Cinelli, Marco; Circi, Christian; Ortore, Emiliano

    2015-07-01

    In this paper, the design of science orbits for the observation of a celestial body is carried out using polynomial equations. The effects related to the main zonal harmonics of the celestial body and the perturbation deriving from the presence of a third celestial body are taken into account. The third body describes a circular, equatorial orbit with respect to the primary body and, for its disturbing potential, an expansion in Legendre polynomials up to the second order is considered. These polynomial equations allow the determination of science orbits around Jupiter's satellite Europa, where the third body's gravitational attraction represents one of the main forces influencing the motion of an orbiting probe. The retrieved relationships are therefore applied to this moon, and periodic sun-synchronous and multi-sun-synchronous orbits are determined. Finally, numerical simulations are carried out to validate the analytical results.
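
    The second-order Legendre truncation of a third body's disturbing potential can be sketched numerically. A minimal illustration, where mu3, d, r and cos_psi are generic placeholders (the third body's gravitational parameter, its distance, the probe's radial distance, and the cosine of the separation angle), not the paper's notation:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def third_body_potential_p2(mu3, d, r, cos_psi):
    """Third-body disturbing potential truncated at the P2 (second-order) term."""
    p2 = 0.5 * (3.0 * cos_psi**2 - 1.0)  # Legendre polynomial P2
    return (mu3 / d) * (r / d) ** 2 * p2

def third_body_potential_series(mu3, d, r, cos_psi, nmax=2):
    """Same expansion via numpy's Legendre evaluation, summing terms n = 2..nmax."""
    total = 0.0
    for n in range(2, nmax + 1):
        coeffs = np.zeros(n + 1)
        coeffs[n] = 1.0  # coefficient vector selecting P_n
        total += (r / d) ** n * legval(cos_psi, coeffs)
    return (mu3 / d) * total
```

With nmax=2 the two forms agree exactly; higher nmax adds the higher-order zonal terms the paper neglects.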

  5. LMI-based stability analysis of fuzzy-model-based control systems using approximated polynomial membership functions.

    PubMed

    Narimani, Mohammand; Lam, H K; Dilmaghani, R; Wolfe, Charles

    2011-06-01

    Relaxed linear-matrix-inequality-based stability conditions for fuzzy-model-based control systems with imperfect premise matching are proposed. First, the derivative of the Lyapunov function, containing the product terms of the fuzzy model and fuzzy controller membership functions, is derived. Then, in the partitioned operating domain of the membership functions, the relations between the state variables and the mentioned product terms are represented by approximated polynomials in each subregion. Next, the stability conditions containing the information of all subsystems and the approximated polynomials are derived. In addition, the concept of the S-procedure is utilized to release the conservativeness caused by considering the whole operating region for approximated polynomials. It is shown that the well-known stability conditions can be special cases of the proposed stability conditions. Simulation examples are given to illustrate the validity of the proposed approach.

  6. Polynomial meta-models with canonical low-rank approximations: Numerical insights and comparison to sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konakli, Katerina, E-mail: konakli@ibk.baug.ethz.ch; Sudret, Bruno

    2016-09-15

    The growing need for uncertainty analysis of complex computational models has led to an expanding use of meta-models across engineering and sciences. The efficiency of meta-modeling techniques relies on their ability to provide statistically-equivalent analytical representations based on relatively few evaluations of the original model. Polynomial chaos expansions (PCE) have proven a powerful tool for developing meta-models in a wide range of applications; the key idea thereof is to expand the model response onto a basis made of multivariate polynomials obtained as tensor products of appropriate univariate polynomials. The classical PCE approach nevertheless faces the “curse of dimensionality”, namely the exponential increase of the basis size with increasing input dimension. To address this limitation, the sparse PCE technique has been proposed, in which the expansion is carried out on only a few relevant basis terms that are automatically selected by a suitable algorithm. An alternative for developing meta-models with polynomial functions in high-dimensional problems is offered by the newly emerged low-rank approximations (LRA) approach. By exploiting the tensor-product structure of the multivariate basis, LRA can provide polynomial representations in highly compressed formats. Through extensive numerical investigations, we herein first shed light on issues relating to the construction of canonical LRA with a particular greedy algorithm involving a sequential updating of the polynomial coefficients along separate dimensions. Specifically, we examine the selection of optimal rank, stopping criteria in the updating of the polynomial coefficients, and error estimation. In the sequel, we confront canonical LRA with sparse PCE in structural-mechanics and heat-conduction applications based on finite-element solutions. Canonical LRA exhibit smaller errors than sparse PCE in cases when the number of available model evaluations is small with respect to the
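
    The tensor-product construction of a PCE basis can be sketched as follows; a minimal illustration using probabilists' Hermite polynomials (the orthogonal family for standard Gaussian inputs) and the common total-degree truncation. Names and the truncation rule are illustrative, not the authors' implementation:

```python
import itertools
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def total_degree_multi_indices(dim, p):
    """All multi-indices alpha with |alpha| <= p; the basis size is C(dim + p, dim)."""
    return [a for a in itertools.product(range(p + 1), repeat=dim) if sum(a) <= p]

def pce_basis_eval(x, alpha):
    """Evaluate one multivariate basis polynomial: the tensor product
    of univariate probabilists' Hermite polynomials He_{alpha_i}(x_i)."""
    out = 1.0
    for xi, ai in zip(x, alpha):
        c = np.zeros(ai + 1)
        c[ai] = 1.0  # coefficient vector selecting He_{ai}
        out *= hermeval(xi, c)
    return out
```

The "curse of dimensionality" mentioned above is visible directly: len(total_degree_multi_indices(dim, p)) grows combinatorially with dim.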

  7. Correction: Reactions of metallodrugs with proteins: selective binding of phosphane-based platinum(ii) dichlorides to horse heart cytochrome c probed by ESI MS coupled to enzymatic cleavage.

    PubMed

    Mügge, Carolin; Michelucci, Elena; Boscaro, Francesca; Gabbiani, Chiara; Messori, Luigi; Weigand, Wolfgang

    2018-05-23

    Correction for 'Reactions of metallodrugs with proteins: selective binding of phosphane-based platinum(ii) dichlorides to horse heart cytochrome c probed by ESI MS coupled to enzymatic cleavage' by Carolin Mügge et al., Metallomics, 2011, 3, 987-990.

  8. Improved multivariate polynomial factoring algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, P.S.

    1978-10-01

    A new algorithm for factoring multivariate polynomials over the integers, based on an algorithm by Wang and Rothschild, is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading coefficient problem, the bad-zero problem, and the occurrence of extraneous factors. It has an algorithm for correctly predetermining the leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically, it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included.

  9. Tutte polynomial in functional magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    García-Castillón, Marlly V.

    2015-09-01

    Methods of graph theory are applied to the processing of functional magnetic resonance images; specifically, the Tutte polynomial is used to analyze such images. Functional magnetic resonance imaging provides connectivity networks in the brain, which are represented by graphs to which the Tutte polynomial is applied. The problem of computing the Tutte polynomial of a given graph is #P-hard, even for planar graphs. For a practical application, the Maple packages "GraphTheory" and "SpecialGraphs" are used. We consider a diagram depicting functional connectivity, specifically between frontal and posterior areas, in autism during an inferential text comprehension task. The Tutte polynomial of the resulting neural networks is computed and some numerical invariants of such networks are obtained. Our results show that the Tutte polynomial is a powerful tool for analyzing and characterizing networks obtained from functional magnetic resonance imaging.
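
    The Tutte polynomial has a compact definition as a sum over edge subsets, T_G(x, y) = Σ_{A⊆E} (x−1)^{r(E)−r(A)} (y−1)^{|A|−r(A)} with rank r(A) = |V| − c(V, A). A brute-force evaluation sketch (exponential in |E|, reflecting the #P-hardness noted above, and independent of the Maple packages used in the paper):

```python
from itertools import combinations

def n_components(vertices, edges):
    """Number of connected components of (vertices, edges), via union-find."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edges:
        ru, rw = find(u), find(w)
        if ru != rw:
            parent[ru] = rw
    return len({find(v) for v in vertices})

def tutte(vertices, edges, x, y):
    """Evaluate the Tutte polynomial via the rank generating function:
    T = sum over edge subsets A of (x-1)^(r(E)-r(A)) * (y-1)^(|A|-r(A)),
    where r(A) = |V| - n_components(V, A)."""
    n = len(vertices)
    r_E = n - n_components(vertices, edges)
    total = 0
    for k in range(len(edges) + 1):
        for subset in combinations(edges, k):
            r_A = n - n_components(vertices, subset)
            total += (x - 1) ** (r_E - r_A) * (y - 1) ** (k - r_A)
    return total
```

For the triangle graph, T(x, y) = x² + x + y, so T(1, 1) = 3 recovers the number of spanning trees.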

  10. Tensor calculus in polar coordinates using Jacobi polynomials

    NASA Astrophysics Data System (ADS)

    Vasil, Geoffrey M.; Burns, Keaton J.; Lecoanet, Daniel; Olver, Sheehan; Brown, Benjamin P.; Oishi, Jeffrey S.

    2016-11-01

    Spectral methods are an efficient way to solve partial differential equations on domains possessing certain symmetries. The utility of a method depends strongly on the choice of spectral basis. In this paper we describe a set of bases built out of Jacobi polynomials, and associated operators for solving scalar, vector, and tensor partial differential equations in polar coordinates on a unit disk. By construction, the bases satisfy regularity conditions at r = 0 for any tensorial field. The coordinate singularity in a disk is a prototypical case for many coordinate singularities. The work presented here extends to other geometries. The operators represent covariant derivatives, multiplication by azimuthally symmetric functions, and the tensorial relationship between fields. These arise naturally from relations between classical orthogonal polynomials, and form a Heisenberg algebra. Other past work uses more specific polynomial bases for solving equations in polar coordinates. The main innovation in this paper is to use a larger set of possible bases to achieve maximum bandedness of linear operations. We provide a series of applications of the methods, illustrating their ease-of-use and accuracy.
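
    For reference, the Jacobi polynomials underlying such bases can be evaluated directly from their explicit sum formula P_n^{(a,b)}(x) = Σ_s C(n+a, n−s) C(n+b, s) ((x−1)/2)^s ((x+1)/2)^{n−s}. A minimal sketch of the polynomials themselves, not the banded operator machinery of the paper:

```python
def binom(z, k):
    """Generalized binomial coefficient C(z, k): z may be real, k a non-negative integer."""
    out = 1.0
    for i in range(k):
        out *= (z - i) / (k - i)
    return out

def jacobi(n, a, b, x):
    """Jacobi polynomial P_n^{(a,b)}(x) from the explicit sum formula."""
    return sum(
        binom(n + a, n - s) * binom(n + b, s)
        * ((x - 1) / 2) ** s * ((x + 1) / 2) ** (n - s)
        for s in range(n + 1)
    )
```

Setting a = b = 0 recovers the Legendre polynomials, e.g. P_2(x) = (3x² − 1)/2.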

  11. Label-free liquid crystal biosensor based on specific oligonucleotide probes for heavy metal ions.

    PubMed

    Yang, Shengyuan; Wu, Chao; Tan, Hui; Wu, Yan; Liao, Shuzhen; Wu, Zhaoyang; Shen, Guoli; Yu, Ruqin

    2013-01-02

    In this study, to enhance the capability of metal ions to disturb the orientation of liquid crystals (LCs), we designed a new label-free LC biosensor for the highly selective and sensitive detection of heavy metal ions. This strategy uses a target-induced DNA conformational change to enhance the disruption of the LC orientation by target molecules, leading to an amplified optical signal. The Hg(2+) ion, which possesses the unique property of binding specifically to two DNA thymine (T) bases, is used as a model heavy metal ion. In the presence of Hg(2+), the specific oligonucleotide probes undergo a conformational reorganization from a hairpin structure to duplex-like complexes. The duplex-like complexes are then bound on the triethoxysilylbutyraldehyde/N,N-dimethyl-N-octadecyl (3-aminopropyl) trimethoxysilyl chloride (TEA/DMOAP)-coated substrate modified with capture probes, which greatly distorts the orientational profile of the LC, making the optical image of the LC cell birefringent. The optical signal of the LC sensor shows a visible change at Hg(2+) concentrations as low as 0.1 nM, demonstrating good detection sensitivity. The cost-effective LC sensing method translates the concentration of heavy metal ions in solution into the presence of DNA duplexes and is expected to provide a sensitive detection platform for heavy metal ions and other small molecules.

  12. A branch-migration based fluorescent probe for straightforward, sensitive and specific discrimination of DNA mutations

    PubMed Central

    Xiao, Xianjin; Wu, Tongbo; Xu, Lei; Chen, Wei

    2017-01-01

    Abstract Genetic mutations are important biomarkers for cancer diagnostics and surveillance. Preferably, the methods for mutation detection should be straightforward, highly specific and sensitive to low-level mutations within various sequence contexts, fast and applicable at room-temperature. Though some of the currently available methods have shown very encouraging results, their discrimination efficiency is still very low. Herein, we demonstrate a branch-migration based fluorescent probe (BM probe) which is able to identify the presence of known or unknown single-base variations at abundances down to 0.3%-1% within 5 min, even in highly GC-rich sequence regions. The discrimination factors between the perfect-match target and single-base mismatched target are determined to be 89–311 by measurement of their respective branch-migration products via polymerase elongation reactions. The BM probe not only enabled sensitive detection of two types of EGFR-associated point mutations located in GC-rich regions, but also successfully identified the BRAF V600E mutation in the serum from a thyroid cancer patient which could not be detected by the conventional sequencing method. The new method would be an ideal choice for high-throughput in vitro diagnostics and precise clinical treatment. PMID:28201758

  13. A Polynomial Subset-Based Efficient Multi-Party Key Management System for Lightweight Device Networks.

    PubMed

    Mahmood, Zahid; Ning, Huansheng; Ghafoor, AtaUllah

    2017-03-24

    Wireless Sensor Networks (WSNs) consist of lightweight devices that measure sensitive data and are highly vulnerable to security attacks due to their constrained resources. In a similar manner, the internet-based lightweight devices used in the Internet of Things (IoT) face severe security and privacy issues because of the direct accessibility of devices arising from their connection to the internet. Complex and resource-intensive security schemes are infeasible and reduce the network lifetime. In this regard, we have explored polynomial distribution-based key establishment schemes and identified the issue that the resultant polynomial value is either storage intensive or infeasible to compute when large values are multiplied. It becomes more costly when these polynomials are regenerated dynamically after each node join or leave operation and whenever the key is refreshed. To reduce the computation, we propose an Efficient Key Management (EKM) scheme for multiparty communication-based scenarios. The proposed session key management protocol is established by applying a symmetric polynomial for group members, with the group head acting as the responsible node. The polynomial generation method uses security credentials and a secure hash function. Symmetric cryptographic parameters are efficient in computation, communication, and the storage required. The security justification of the proposed scheme has been completed using Rubin logic, which guarantees that the protocol strongly attains the mutual validation and session key agreement properties among the participating entities. Simulation scenarios are performed using NS 2.35 to validate the results for storage, communication, latency, energy, and polynomial calculation costs during the authentication, session key generation, node migration, secure joining, and leaving phases. EKM is efficient regarding storage, computation, and communication overhead and can protect WSN-based IoT infrastructure.
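
    A common form of the symmetric-polynomial session key idea is Blundo-style key predistribution: all parties share a symmetric bivariate polynomial f(x, y) = f(y, x) over a prime field, each node u stores the univariate share f(u, ·), and any pair of nodes derives the same key because f(u, v) = f(v, u). A minimal sketch under that scheme; parameters and names are illustrative, not the EKM protocol itself:

```python
import random

Q = 2**31 - 1  # a Mersenne prime modulus (illustrative choice)

def make_symmetric_poly(t, seed=0):
    """Random symmetric coefficient matrix a[i][j] = a[j][i] of degree t in each variable."""
    rng = random.Random(seed)
    a = [[0] * (t + 1) for _ in range(t + 1)]
    for i in range(t + 1):
        for j in range(i, t + 1):
            a[i][j] = a[j][i] = rng.randrange(Q)
    return a

def share(a, u):
    """Node u's predistributed share: coefficients (in y) of g_u(y) = f(u, y) mod Q."""
    t = len(a) - 1
    return [sum(a[i][j] * pow(u, i, Q) for i in range(t + 1)) % Q
            for j in range(t + 1)]

def pairwise_key(share_u, v):
    """Key node u computes for peer v: g_u(v) mod Q."""
    return sum(c * pow(v, j, Q) for j, c in enumerate(share_u)) % Q
```

Symmetry of the coefficient matrix guarantees both endpoints compute the identical key without any message exchange; the scheme is t-collusion resistant.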

  14. Von Bertalanffy's dynamics under a polynomial correction: Allee effect and big bang bifurcation

    NASA Astrophysics Data System (ADS)

    Leonel Rocha, J.; Taha, A. K.; Fournier-Prunaret, D.

    2016-02-01

    In this work we consider new one-dimensional populational discrete dynamical systems in which the growth of the population is described by a family of von Bertalanffy's functions, as a dynamical approach to von Bertalanffy's growth equation. The purpose of introducing the Allee effect in these models is served by a correction factor of polynomial type. We study classes of von Bertalanffy's functions with different types of Allee effect: strong and weak Allee functions. Depending on the variation of four parameters, von Bertalanffy's functions also include another important class: functions with no Allee effect. The complex bifurcation structures of these von Bertalanffy's functions are investigated in detail. We verify that this family of functions has particular bifurcation structures: the big bang bifurcation of the so-called “box-within-a-box” type. The big bang bifurcation is associated with the asymptotic weight or carrying capacity. This work is a contribution to the study of big bang bifurcation analysis for continuous maps and their relationship with explosion, birth, and extinction phenomena.
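
    The paper's specific family of von Bertalanffy maps is not reproduced here, but the qualitative role of a polynomial Allee correction can be illustrated with a generic strong-Allee map: a polynomial factor (x − A) makes populations below the threshold A decline to extinction, while those above it approach the carrying capacity K. All functional forms and parameter values below are illustrative, not the authors' family:

```python
def allee_map(x, r=0.1, A=0.3, K=1.0):
    """Illustrative strong-Allee discrete map: logistic-type growth multiplied
    by the polynomial correction factor (x - A), which flips the sign of the
    growth increment below the Allee threshold A."""
    return x + r * x * (x - A) * (1.0 - x / K)

def iterate(x0, steps=5000):
    """Iterate the map from x0 and return the final state."""
    x = x0
    for _ in range(steps):
        x = allee_map(x)
    return x
```

Starting below A = 0.3 the orbit collapses to 0 (extinction); starting above it the orbit converges to K = 1.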

  16. Experimental testing of four correction algorithms for the forward scattering spectrometer probe

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.; Oldenburg, John R.; Lock, James A.

    1992-01-01

    Three number density correction algorithms and one size distribution correction algorithm for the Forward Scattering Spectrometer Probe (FSSP) were compared with data taken by a Phase Doppler Particle Analyzer (PDPA) and an optical number density measuring instrument (NDMI). Of the three number density correction algorithms, the one that agreed best with the PDPA and NDMI data was the algorithm developed by Baumgardner, Strapp, and Dye (1985). The algorithm developed by Lock and Hovenac (1989) to correct sizing errors in the FSSP was shown to be within 25 percent of the Phase Doppler measurements at number densities as high as 3000/cc.
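
    The algorithms compared above are instrument-specific, but the shared idea of correcting measured number densities for probe busy time can be illustrated with the classic non-paralyzable dead-time formula, N_true = N_meas / (1 − N_meas · τ). This is a generic textbook sketch, not the algorithm of Baumgardner, Strapp, and Dye:

```python
def dead_time_corrected_count(measured_rate, dead_time):
    """Non-paralyzable dead-time correction for a counting instrument:
    true rate = measured rate / (1 - measured rate * dead time)."""
    activity = measured_rate * dead_time  # fraction of time the probe is busy
    if activity >= 1.0:
        raise ValueError("probe saturated; correction undefined")
    return measured_rate / (1.0 - activity)
```

At 900 counts/s with a 100 microsecond dead time the probe is busy 9% of the time, so the corrected rate is about 10% higher than the measured one.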

  17. PROcess Based Diagnostics PROBE

    NASA Technical Reports Server (NTRS)

    Clune, T.; Schmidt, G.; Kuo, K.; Bauer, M.; Oloso, H.

    2013-01-01

    Many of the aspects of the climate system that are of the greatest interest (e.g., the sensitivity of the system to external forcings) are emergent properties that arise via the complex interplay between disparate processes. This is also true for climate models: most diagnostics are not a function of an isolated portion of source code, but rather are affected by multiple components and procedures. Thus any model-observation mismatch is hard to attribute to any specific piece of code or imperfection in a specific model assumption. An alternative approach is to identify diagnostics that are more closely tied to specific processes, implying that if a mismatch is found, it should be much easier to identify and address the specific algorithmic choices that will improve the simulation. However, this approach requires looking at model output and observational data in a more sophisticated way than the traditional production of monthly or annual mean quantities: the data must instead be filtered in time and space for examples of the specific process being targeted. We are developing a data analysis environment called PROcess-Based Explorer (PROBE) that seeks to enable efficient and systematic computation of process-based diagnostics on very large sets of data. In this environment, investigators can define arbitrarily complex filters and then seamlessly perform computations in parallel on the filtered output from their model. The same analysis can be performed on additional related data sets (e.g., reanalyses), thereby enabling routine comparisons between model and observational data. PROBE also incorporates workflow technology to automatically update computed diagnostics for subsequent executions of a model. In this presentation, we discuss the design and current status of PROBE and share results from some preliminary use cases.

  18. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    NASA Astrophysics Data System (ADS)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-07-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
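
    A Chebyshev smoother of the kind discussed above can be sketched as the classical Chebyshev iteration for a symmetric positive definite system with known spectral bounds. This follows the textbook recurrence (e.g. Saad's formulation), not the paper's multilevel-specific polynomial; note that, unlike Gauss-Seidel, every step is built from matrix-vector products and so parallelizes trivially:

```python
import numpy as np

def chebyshev_smooth(A, b, x, lmin, lmax, iters=10):
    """Chebyshev iteration for SPD A with spectrum contained in [lmin, lmax]."""
    theta = 0.5 * (lmax + lmin)   # center of the spectral interval
    delta = 0.5 * (lmax - lmin)   # half-width of the spectral interval
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    r = b - A @ x
    d = r / theta                 # first correction
    for _ in range(iters):
        x = x + d
        r = b - A @ x
        rho_new = 1.0 / (2.0 * sigma1 - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x
```

In a multigrid setting, lmax is typically estimated cheaply (e.g. a few Lanczos steps) and lmin is chosen as a fraction of lmax so the smoother targets only the high-frequency error.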

  19. A Fast lattice-based polynomial digital signature system for m-commerce

    NASA Astrophysics Data System (ADS)

    Wei, Xinzhou; Leung, Lin; Anshel, Michael

    2003-01-01

    Privacy and data integrity are not guaranteed in current wireless communications due to the security hole inside the Wireless Application Protocol (WAP) version 1.2 gateway. One remedy is to provide end-to-end security in m-commerce by applying application-level security on top of current WAP 1.2. Traditional security technologies like RSA and ECC, as applied on enterprise servers, are not practical for wireless devices because wireless devices have relatively weak computational power and limited memory compared with servers. In this paper, we developed a lattice-based polynomial digital signature system based on NTRU's Polynomial Authentication and Signature Scheme (PASS), which makes it feasible to apply high-level security on both the server and wireless device sides.
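
    PASS-style schemes work in a truncated polynomial ring, and the core primitive, multiplication in Z_q[x]/(x^N − 1) (cyclic convolution), is cheap enough for constrained devices. A sketch of the ring arithmetic only, not of the PASS signature scheme itself:

```python
def ring_mul(f, g, q):
    """Multiply two polynomials in Z_q[x]/(x^N - 1), i.e. cyclic convolution.
    f and g are coefficient lists of length N (index = power of x)."""
    n = len(f)
    h = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[(i + j) % n] = (h[(i + j) % n] + fi * gj) % q  # x^N wraps to 1
    return h
```

Because exponents wrap modulo N, e.g. x² · x = x³ = 1 when N = 3, the product never grows in degree, which keeps storage and computation bounded on lightweight hardware.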

  20. Highly specific detection of genetic modification events using an enzyme-linked probe hybridization chip.

    PubMed

    Zhang, M Z; Zhang, X F; Chen, X M; Chen, X; Wu, S; Xu, L L

    2015-08-10

    The enzyme-linked probe hybridization chip utilizes a method based on ligase-hybridizing probe chip technology, with the principle of using thio-primers for protection against enzyme digestion, and using lambda DNA exonuclease to cut the multiple PCR products obtained from the sample being tested into single-stranded chains for hybridization. The 5'-end amino-labeled probe was fixed onto the aldehyde chip and hybridized with the single-stranded PCR product, followed by addition of a fluorescent-modified probe that was then enzymatically ligated to the adjacent, substrate-bound probe in order to achieve highly specific, parallel, and high-throughput detection. Specificity and sensitivity testing demonstrated that enzyme-linked probe hybridization technology can be applied to the specific detection of eight genetic modification events at the same time, with a sensitivity reaching 0.1% and the achievement of accurate, efficient, and stable results.

  1. Conformal Galilei algebras, symmetric polynomials and singular vectors

    NASA Astrophysics Data System (ADS)

    Křižka, Libor; Somberg, Petr

    2018-01-01

    We classify and explicitly describe homomorphisms of Verma modules for conformal Galilei algebras cga_ℓ(d, C) with d = 1 for any integer value ℓ ∈ N. The homomorphisms are uniquely determined by singular vectors as solutions of certain differential operators of flag type and identified with specific polynomials arising as coefficients in the expansion of a parametric family of symmetric polynomials into power sum symmetric polynomials.
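
    Expansions into power sum symmetric polynomials are governed by Newton's identities, k·e_k = Σ_{i=1}^{k} (−1)^{i−1} e_{k−i} p_i, which recover the elementary symmetric polynomials e_k from the power sums p_i. A numerical sketch of the identities only, not of the paper's construction:

```python
def power_sums(xs, kmax):
    """Power sums p_k = sum_i x_i^k for k = 1..kmax."""
    return [sum(x**k for x in xs) for k in range(1, kmax + 1)]

def elementary_from_power_sums(p):
    """Recover elementary symmetric polynomials e_1..e_n from power sums
    p_1..p_n via Newton's identities: k*e_k = sum_{i=1}^{k} (-1)^(i-1) e_{k-i} p_i."""
    e = [1]  # e_0 = 1
    for k in range(1, len(p) + 1):
        s = sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1))
        e.append(s / k)
    return e[1:]
```

For the values {1, 2, 3, 4} the power sums (10, 30, 100, 354) yield the elementary symmetric polynomials (10, 35, 50, 24), matching the coefficients of (x−1)(x−2)(x−3)(x−4).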

  2. Eye aberration analysis with Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Molebny, Vasyl V.; Chyzh, Igor H.; Sokurenko, Vyacheslav M.; Pallikaris, Ioannis G.; Naoumidis, Leonidas P.

    1998-06-01

    New horizons for accurate photorefractive sight correction, afforded by novel flying-spot technologies, require adequate measurement of the photorefractive properties of an eye. Existing techniques of eye refraction mapping present measurement results for a finite number of points of the eye aperture, requiring these data to be approximated by a 3D surface. A technique of wavefront approximation with Zernike polynomials is described, using optimization of the number of polynomial coefficients. The criterion of optimization is the closest proximity of the resulting continuous surface to the values calculated for the given discrete points. The methodology includes statistical evaluation of the minimal root mean square deviation (RMSD) of transverse aberrations; in particular, the values of the maximal coefficient indices of the Zernike polynomials are varied consecutively, the coefficients recalculated, and the value of RMSD computed. Optimization finishes at the minimal value of RMSD. Formulas are given for computing ametropia and the size of the spot of light on the retina caused by spherical aberration, coma, and astigmatism. Results are illustrated by experimental data that could be of interest for other applications where detailed evaluation of eye parameters is needed.
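
    The fitting step, approximating discrete wavefront samples by a truncated Zernike expansion in the least-squares sense, can be sketched as follows. The mode set and coefficient values are illustrative; this shows only the fit, not the paper's RMSD-driven selection of the polynomial order:

```python
import numpy as np

def zernike_basis(rho, theta):
    """First few Zernike modes on the unit disk, evaluated at polar samples
    (piston, x/y tilt, defocus, and the two astigmatism terms)."""
    return np.column_stack([
        np.ones_like(rho),           # Z0: piston
        rho * np.cos(theta),         # Z1: tilt x
        rho * np.sin(theta),         # Z2: tilt y
        2 * rho**2 - 1,              # Z3: defocus
        rho**2 * np.cos(2 * theta),  # Z4: astigmatism 0/90
        rho**2 * np.sin(2 * theta),  # Z5: astigmatism 45
    ])

def fit_zernike(rho, theta, wavefront):
    """Least-squares Zernike coefficients for sampled wavefront values."""
    B = zernike_basis(rho, theta)
    coeffs, *_ = np.linalg.lstsq(B, wavefront, rcond=None)
    return coeffs
```

With noise-free samples the least-squares fit recovers the generating coefficients exactly, up to floating-point precision.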

  3. Specificity tests of an oligonucleotide probe against food-outbreak salmonella for biosensor detection

    NASA Astrophysics Data System (ADS)

    Chen, I.-H.; Horikawa, S.; Xi, J.; Wikle, H. C.; Barbaree, J. M.; Chin, B. A.

    2017-05-01

    Phage-based magnetoelastic (ME) biosensors have been shown to rapidly detect Salmonella in various food systems for food pathogen monitoring purposes. In this ME biosensor platform, the free-standing strip-shaped magnetoelastic sensor is the transducer, and the phage probe that recognizes Salmonella in food serves as the bio-recognition element. Sorokulova et al. reported in 2005 that a developed oligonucleotide probe, E2, has high specificity to Salmonella enterica Typhimurium; their specificity tests focused mostly on Enterobacteriaceae groups outside the Salmonella family. Here, to understand the specificity of phage E2 to different Salmonella enterica serotypes within the Salmonella family, we further tested the specificity of the phage probe against thirty-two Salmonella serotypes present in the major foodborne outbreaks of the past ten years (according to the Centers for Disease Control and Prevention). The tests were conducted in an enzyme-linked immunosorbent assay (ELISA) format. This assay mimics probe-immobilized conditions on the magnetoelastic biosensor platform and also enables study of the binding specificity of oligonucleotide probes toward different Salmonella while avoiding phage/sensor lot variations. Test results confirmed that this oligonucleotide probe E2 is highly specific to Salmonella Typhimurium cells but showed cross-reactivity to Salmonella Tennessee and four other serotypes among the thirty-two tested Salmonella serotypes.

  4. Specific 16S ribosomal RNA targeted oligonucleotide probe against Clavibacter michiganensis subsp. sepedonicus.

    PubMed

    Mirza, M S; Rademaker, J L; Janse, J D; Akkermans, A D

    1993-11-01

    In this article we report on the polymerase chain reaction amplification of a partial 16S rRNA gene from the plant pathogenic bacterium Clavibacter michiganensis subsp. sepedonicus. A partial sequence (about 400 base pairs) of the gene was determined that covered two variable regions important for oligonucleotide probe development. A specific 24mer oligonucleotide probe targeted against the V6 region of 16S rRNA was designed. Specificity of the probe was determined using dot blot hybridization. Under stringent conditions (60 degrees C), the probe hybridized with all 16 Cl. michiganensis subsp. sepedonicus strains tested. Hybridization did not occur with 32 plant pathogenic and saprophytic bacteria used as controls under the same conditions. Under less stringent conditions (55 degrees C) the related Clavibacter michiganensis subsp. insidiosus, Clavibacter michiganensis subsp. nebraskensis, and Clavibacter michiganensis subsp. tesselarius also showed hybridization. At even lower stringency (40 degrees C), all Cl. michiganensis subspecies tested including Clavibacter michiganensis subsp. michiganensis showed hybridization signal, suggesting that under these conditions the probe may be used as a species-specific probe for Cl. michiganensis.

  5. Rate-distortion optimized tree-structured compression algorithms for piecewise polynomial images.

    PubMed

    Shukla, Rahul; Dragotti, Pier Luigi; Do, Minh N; Vetterli, Martin

    2005-03-01

    This paper presents novel coding algorithms based on tree-structured segmentation, which achieve the correct asymptotic rate-distortion (R-D) behavior for a simple class of signals, known as piecewise polynomials, by using an R-D based prune and join scheme. For the one-dimensional case, our scheme is based on binary-tree segmentation of the signal. This scheme approximates the signal segments using polynomial models and utilizes an R-D optimal bit allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly to achieve the correct exponentially decaying R-D behavior (D(R) ~ c_0 2^(-c_1 R)), thus improving over classic wavelet schemes. We also prove that the computational complexity of the scheme is O(N log N). We then show the extension of this scheme to the two-dimensional case using a quadtree. This quadtree coding scheme also achieves an exponentially decaying R-D behavior, for the polygonal image model composed of a white polygon-shaped object against a uniform black background, with a low computational cost of O(N log N). Again, the key is an R-D optimized prune and join strategy. Finally, we conclude with numerical results, which show that the proposed quadtree coding scheme outperforms JPEG2000 by about 1 dB for real images, like cameraman, at low rates of around 0.15 bpp.
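
    The binary-tree segmentation idea can be sketched with a greedy dyadic splitter that keeps a segment whenever a low-order polynomial fits it to within a tolerance. A toy illustration only; the paper's scheme additionally performs R-D optimal bit allocation and prune-and-join of similar neighbors:

```python
import numpy as np

def fit_error(seg, deg=1):
    """Squared error of the best degree-`deg` polynomial fit to a 1-D segment."""
    t = np.arange(len(seg))
    coeffs = np.polyfit(t, seg, deg)
    return float(np.sum((np.polyval(coeffs, t) - seg) ** 2))

def tree_segment(signal, lo, hi, tol, deg=1, min_len=4):
    """Binary-tree segmentation: keep [lo, hi) as a leaf if a degree-`deg`
    polynomial fits it within `tol`, otherwise split dyadically at the midpoint.
    Returns the list of leaf intervals."""
    if hi - lo <= min_len or fit_error(signal[lo:hi], deg) <= tol:
        return [(lo, hi)]
    mid = (lo + hi) // 2
    return (tree_segment(signal, lo, mid, tol, deg, min_len)
            + tree_segment(signal, mid, hi, tol, deg, min_len))
```

On a ramp-then-constant signal the splitter stops after a single split, recovering the two polynomial pieces, which is the behavior that gives tree coders their exponential R-D decay on piecewise polynomials.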

  6. Target-cancer cell specific activatable fluorescence imaging Probes: Rational Design and in vivo Applications

    PubMed Central

    Kobayashi, Hisataka; Choyke, Peter L.

    2010-01-01

    CONSPECTUS Conventional imaging methods, such as angiography, computed tomography, magnetic resonance imaging and radionuclide imaging, rely on contrast agents (iodine, gadolinium, radioisotopes) that are “always on”. While these agents have proven clinically useful, they are not sufficiently sensitive because of the inadequate target to background ratio. A unique aspect of optical imaging is that fluorescence probes can be designed to be activatable, i.e., only “turned on” under certain conditions. These probes can be designed to emit signal only after binding a target tissue, greatly increasing sensitivity and specificity in the detection of disease. There are two basic types of activatable fluorescence probes: (1) conventional enzymatically activatable probes, which exist in the quenched state until activated by enzymatic cleavage, mostly outside of the cells, and (2) newly designed target-cell specific activatable probes, which are quenched until activated in targeted cells by endolysosomal processing that results when the probe binds specific cell-surface receptors and is subsequently internalized. Herein, we present a review of the rational design and in vivo applications of target-cell specific activatable probes. Designing these probes based on their photo-chemical (e.g. activation strategy), pharmacological (e.g. biodistribution), and biological (e.g. target specificity) properties has recently allowed the rational design and synthesis of target-cell specific activatable fluorescence imaging probes, which can be conjugated to a wide variety of targeting molecules. Several different photo-chemical mechanisms have been utilized, each of which offers a unique capability for probe design. These include: self-quenching, homo- and hetero-fluorescence resonance energy transfer (FRET), H-dimer formation and photon-induced electron transfer (PeT). In addition, the repertoire is further expanded by the option for reversibility or irreversibility of the signal

  7. Simple Proof of Jury Test for Complex Polynomials

    NASA Astrophysics Data System (ADS)

    Choo, Younseok; Kim, Dongmin

    Recently, some attempts have been made in the literature to give simple proofs of the Jury test for real polynomials. This letter presents a similar result for complex polynomials. A simple proof of the Jury test for complex polynomials is provided based on Rouché's theorem and a single-parameter characterization of the Schur stability property for complex polynomials.

  8. Applications of polynomial optimization in financial risk investment

    NASA Astrophysics Data System (ADS)

    Zeng, Meilan; Fu, Hongwei

    2017-09-01

    Polynomial optimization has recently found many important applications in optimization, financial economics, tensor eigenvalue problems, and other areas. This paper studies the applications of polynomial optimization in financial risk investment. We consider the standard mean-variance risk measurement model and the mean-variance risk measurement model with transaction costs. We use Lasserre's hierarchy of semidefinite programming (SDP) relaxations to solve specific cases. The results show that polynomial optimization is effective for some financial optimization problems.
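    The paper's SDP-relaxation machinery is not reproduced here, but the standard mean-variance model it builds on is easy to illustrate. A minimal sketch with made-up covariance numbers, using the closed-form global-minimum-variance portfolio rather than Lasserre's hierarchy:

```python
import numpy as np

# Toy covariance matrix for three assets (illustrative numbers only).
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])

# Global-minimum-variance portfolio: minimize w' Sigma w s.t. sum(w) = 1.
# Closed form: w* = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
ones = np.ones(3)
w = np.linalg.solve(Sigma, ones)
w /= ones @ w
variance = w @ Sigma @ w
```

    The resulting portfolio variance can be no worse than holding the least-volatile asset alone, since that is itself a feasible point of the optimization.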

  9. An image registration based ultrasound probe calibration

    NASA Astrophysics Data System (ADS)

    Li, Xin; Kumar, Dinesh; Sarkar, Saradwata; Narayanan, Ram

    2012-02-01

    Reconstructed 3D ultrasound of the prostate gland finds application in several medical areas such as image-guided biopsy, therapy planning and dose delivery. In our application, we use an end-fire probe rotated about its axis to acquire a sequence of rotational slices to reconstruct a 3D TRUS (transrectal ultrasound) image. The image acquisition system consists of an ultrasound transducer situated on a cradle directly attached to a rotational sensor. However, due to system tolerances, the axis of the probe does not align exactly with the designed axis of rotation, resulting in artifacts in the 3D reconstructed ultrasound volume. We present a rigid-registration-based automatic probe calibration approach. The method uses a sequence of phantom images, each pair acquired at an angular separation of 180 degrees, and registers corresponding image pairs to compute the deviation from the designed axis. A modified shadow removal algorithm is applied for preprocessing. An attribute vector is constructed from image intensity and a speckle-insensitive information-theoretic feature. We compare registration between the presented method and expert-corrected images in 16 prostate phantom scans. Images were acquired at multiple resolutions and different misalignment settings from two ultrasound machines. Screenshots from the 3D reconstruction are shown before and after misalignment correction. Registration parameters from automatic and manual correction were found to be in good agreement. Average absolute differences of translation and rotation between the automatic and manual methods were 0.27 mm and 0.65 degree, respectively. The registration parameters also showed lower variability for automatic registration (pooled standard deviation σtranslation = 0.50 mm, σrotation = 0.52 degree) compared to the manual approach (pooled standard deviation σtranslation = 0.62 mm, σrotation = 0.78 degree).

  10. Aptamer-Based Dual-Functional Probe for Rapid and Specific Counting and Imaging of MCF-7 Cells.

    PubMed

    Yang, Bin; Chen, Beibei; He, Man; Yin, Xiao; Xu, Chi; Hu, Bin

    2018-02-06

    Development of multimodal detection technologies for accurate diagnosis of cancer at early stages is in great demand. In this work, we report a novel approach using an aptamer-based dual-functional probe for rapid, sensitive, and specific counting and visualization of MCF-7 cells by inductively coupled plasma-mass spectrometry (ICP-MS) and fluorescence imaging. The probe consists of a recognition unit of aptamer to catch cancer cells specifically, a fluorescent dye (FAM) moiety for fluorescence resonance energy transfer (FRET)-based "off-on" fluorescence imaging as well as gold nanoparticles (Au NPs) tag for both ICP-MS quantification and fluorescence quenching. Due to the signal amplification effect and low spectral interference of Au NPs in ICP-MS, an excellent linearity and sensitivity were achieved. Accordingly, a limit of detection of 81 MCF-7 cells and a relative standard deviation of 5.6% (800 cells, n = 7) were obtained. The dynamic linear range was 2 × 10² to 1.2 × 10⁴ cells, and the recoveries in human whole blood were in the range of 98-110%. Overall, the established method provides quantitative and visualized information on MCF-7 cells with a simple and rapid process and paves the way for a promising strategy for biomedical research and clinical diagnostics.

  11. Poly-Frobenius-Euler polynomials

    NASA Astrophysics Data System (ADS)

    Kurt, Burak

    2017-07-01

    Hamahata [3] defined poly-Euler polynomials and the generalized poly-Euler polynomials, and proved some relations and closed formulas for the poly-Euler polynomials. Motivated by this work, we define poly-Frobenius-Euler polynomials and give some relations for these polynomials. We also prove relationships between poly-Frobenius-Euler polynomials and Stirling numbers of the second kind.

  12. Derivatives of random matrix characteristic polynomials with applications to elliptic curves

    NASA Astrophysics Data System (ADS)

    Snaith, N. C.

    2005-12-01

    The value distribution of derivatives of characteristic polynomials of matrices from SO(N) is calculated at the point 1, the symmetry point on the unit circle of the eigenvalues of these matrices. We consider subsets of matrices from SO(N) that are constrained to have at least n eigenvalues equal to 1 and investigate the first non-zero derivative of the characteristic polynomial at that point. The connection between the values of random matrix characteristic polynomials and values of L-functions in families has been well established. The motivation for this work is the expectation that through this connection with L-functions derived from families of elliptic curves, and using the Birch and Swinnerton-Dyer conjecture to relate values of the L-functions to the rank of elliptic curves, random matrix theory will be useful in probing important questions concerning these ranks.

  13. Fluorescent quenching-based quantitative detection of specific DNA/RNA using a BODIPY® FL-labeled probe or primer

    PubMed Central

    Kurata, Shinya; Kanagawa, Takahiro; Yamada, Kazutaka; Torimura, Masaki; Yokomaku, Toyokazu; Kamagata, Yoichi; Kurane, Ryuichiro

    2001-01-01

    We have developed a simple method for the quantitative detection of specific DNA or RNA molecules based on the finding that BODIPY® FL fluorescence was quenched by its interaction with a uniquely positioned guanine. This approach makes use of an oligonucleotide probe or primer containing a BODIPY® FL-modified cytosine at its 5′-end. When such a probe was hybridized with a target DNA, its fluorescence was quenched by the guanine in the target, complementary to the modified cytosine, and the quench rate was proportional to the amount of target DNA. This widely applicable technique will be used directly with larger samples or in conjunction with the polymerase chain reaction to quantify small DNA samples. PMID:11239011

  14. Application of overlay modeling and control with Zernike polynomials in an HVM environment

    NASA Astrophysics Data System (ADS)

    Ju, JaeWuk; Kim, MinGyu; Lee, JuHan; Nabeth, Jeremy; Jeon, Sanghuck; Heo, Hoyoung; Robinson, John C.; Pierson, Bill

    2016-03-01

    Shrinking technology nodes and smaller process margins require improved photolithography overlay control. Generally, overlay measurement results are modeled with Cartesian polynomial functions for both intra-field and inter-field models and the model coefficients are sent to an advanced process control (APC) system operating in an XY Cartesian basis. Dampened overlay corrections, typically via exponentially or linearly weighted moving average in time, are then retrieved from the APC system to apply on the scanner in XY Cartesian form for subsequent lot exposure. The goal of the above method is to process lots with corrections that target the least possible overlay misregistration in steady state as well as in change point situations. In this study, we model overlay errors on product using Zernike polynomials with the same fitting capability as the process of reference (POR) to represent the wafer-level terms, and use the standard Cartesian polynomials to represent the field-level terms. APC calculations for wafer-level correction are performed in the Zernike basis while field-level calculations use the standard XY Cartesian basis. Finally, weighted wafer-level correction terms are converted to XY Cartesian space in order to be applied on the scanner, along with field-level corrections, for future wafer exposures. Since Zernike polynomials have the property of being orthogonal in the unit disk, we are able to reduce the amount of collinearity between terms and improve overlay stability. Our real-time Zernike modeling and feedback evaluation was performed on a 20-lot dataset in a high volume manufacturing (HVM) environment. The measured on-product results were compared to POR and showed a 7% reduction in overlay variation, including a 22% reduction in terms variation. This led to an on-product raw overlay Mean + 3Sigma X&Y improvement of 5% and resulted in 0.1% yield improvement.
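    As a hedged illustration of wafer-level Zernike modeling (not the authors' APC pipeline: the basis is truncated to piston, tilts, and defocus in unnormalized radial form, and the overlay data are synthetic), least-squares fitting of Zernike coefficients looks like:

```python
import numpy as np

def zernike_basis(x, y):
    """Design matrix of low-order Zernike terms on the unit disk:
    piston, x-tilt, y-tilt, defocus (unnormalized radial forms)."""
    r2 = x**2 + y**2
    return np.column_stack([np.ones_like(x), x, y, 2 * r2 - 1])

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2 * np.pi, 400)
r = np.sqrt(rng.uniform(0.0, 1.0, 400))   # uniform sampling over the disk
x, y = r * np.cos(theta), r * np.sin(theta)

true_coef = np.array([0.5, -1.2, 0.8, 0.3])   # synthetic wafer-level terms
overlay = zernike_basis(x, y) @ true_coef

fit_coef, *_ = np.linalg.lstsq(zernike_basis(x, y), overlay, rcond=None)
```

    Orthogonality of the Zernike terms over the disk is what keeps the columns of this design matrix nearly uncorrelated, which is the collinearity advantage the abstract describes.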

  15. Independence polynomial and matching polynomial of the Koch network

    NASA Astrophysics Data System (ADS)

    Liao, Yunhua; Xie, Xiaoliang

    2015-11-01

    The lattice gas model and the monomer-dimer model are two classical models in statistical mechanics. It is well known that the partition functions of these two models are associated with the independence polynomial and the matching polynomial in graph theory, respectively. Both polynomials have been shown to belong to the “#P-complete” class, which indicates that these problems are computationally “intractable”. We consider these two polynomials of the Koch networks, which are scale-free with small-world effects. Explicit recurrences are derived, and explicit formulae are presented for the number of independent sets of a certain type.
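    The paper derives explicit recurrences for Koch networks; for intuition only, here is a brute-force computation of the independence polynomial coefficients for a tiny graph (exponential in the vertex count, hence exactly the "intractable" approach the recurrences avoid):

```python
from itertools import combinations

def independence_poly_coeffs(n, edges):
    """c[k] = number of independent sets of size k in a graph on
    vertices 0..n-1 (brute force, exponential in n)."""
    coeffs = [0] * (n + 1)
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            s = set(subset)
            # keep the subset only if no edge has both endpoints in it
            if all(u not in s or v not in s for u, v in edges):
                coeffs[k] += 1
    return coeffs

# path graph P3 (0-1-2): independent sets are {}, {0}, {1}, {2}, {0,2}
coeffs = independence_poly_coeffs(3, [(0, 1), (1, 2)])
```

    For P3 this yields the polynomial 1 + 3x + x², matching the five independent sets listed in the comment.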

  16. Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna Roberts

    Infrared imaging is an expansive field with many applications. Advances in infrared technology have led to greater demand from both commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity. This non-uniformity stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors such as exposure time, temperature, and amplifier choice affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration-based techniques commonly use a linear model to approximate the nonlinear response. This often leaves unacceptable levels of residual non-uniformity. Calibration techniques often have to be repeated during use to continually correct the image. In this dissertation, alternatives to linear NUC algorithms are investigated. The goal of this dissertation is to determine and compare nonlinear non-uniformity correction algorithms. Ideally the results will provide better NUC performance, resulting in less residual non-uniformity, as well as reduce the need for recalibration. This dissertation will consider new approaches to nonlinear NUC such as higher-order polynomials and exponentials. More specifically, a new gain equalization algorithm has been developed. The various nonlinear non-uniformity correction algorithms will be compared with common linear non-uniformity correction algorithms. Performance will be compared based on RMS errors, residual non-uniformity, and the impact quantization has on correction. Performance will be improved by identifying and replacing bad pixels prior to correction. Two bad pixel identification and replacement techniques will be investigated and compared. Performance will be presented in the form of simulation results as well as before-and-after images taken with short-wave infrared cameras. The initial results show, using a third order

  17. Effects of Error Correction during Assessment Probes on the Acquisition of Sight Words for Students with Moderate Intellectual Disabilities

    ERIC Educational Resources Information Center

    Waugh, Rebecca E.

    2010-01-01

    Simultaneous prompting is an errorless learning strategy designed to reduce the number of errors students make; however, research has shown a disparity in the number of errors students make during instructional versus probe trials. This study directly examined the effects of error correction versus no error correction during probe trials on the…

  18. Effects of Error Correction during Assessment Probes on the Acquisition of Sight Words for Students with Moderate Intellectual Disabilities

    ERIC Educational Resources Information Center

    Waugh, Rebecca E.; Alberto, Paul A.; Fredrick, Laura D.

    2011-01-01

    Simultaneous prompting is an errorless learning strategy designed to reduce the number of errors students make; however, research has shown a disparity in the number of errors students make during instructional versus probe trials. This study directly examined the effects of error correction versus no error correction during probe trials on the…

  19. A Small and Slim Coaxial Probe for Single Rice Grain Moisture Sensing

    PubMed Central

    You, Kok Yeow; Mun, Hou Kit; You, Li Ling; Salleh, Jamaliah; Abbas, Zulkifly

    2013-01-01

    A moisture detection method for single rice grains using a slim and small open-ended coaxial probe is presented. The coaxial probe is suitable for the nondestructive measurement of moisture values in rice grains ranging from 9.5% to 26%. Empirical polynomial models are developed to predict the gravimetric moisture content of rice based on reflection coefficients measured using a vector network analyzer. The relationship between the reflection coefficient and relative permittivity was also established using a regression method and expressed as a polynomial model, whose coefficients were obtained by fitting data from finite element-based simulation. The designed single-rice-grain sample holder and experimental setup are also described. The measurement of single rice grains in this study is more precise than the measurement of conventional bulk rice grains, as the random air gaps present in bulk rice grains are excluded. PMID:23493127
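    A regression-based calibration of this kind can be sketched as follows; the calibration pairs below are hypothetical placeholders, not the paper's measured data:

```python
import numpy as np

# Hypothetical calibration pairs (reflection-coefficient magnitude vs.
# gravimetric moisture %); the paper's measured data are not reproduced.
gamma = np.array([0.62, 0.58, 0.55, 0.51, 0.48, 0.44])
moisture = np.array([9.5, 12.0, 15.0, 18.5, 22.0, 26.0])

# Quadratic empirical model moisture = p(|gamma|), fit by least squares;
# predict() then maps a new reflection reading to a moisture estimate.
coeffs = np.polyfit(gamma, moisture, 2)
predict = lambda g: np.polyval(coeffs, g)
```

    Once fitted, the polynomial replaces the network analyzer's raw reading with a direct moisture estimate for each grain.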

  20. Colorful protein-based fluorescent probes for collagen imaging.

    PubMed

    Aper, Stijn J A; van Spreeuwel, Ariane C C; van Turnhout, Mark C; van der Linden, Ardjan J; Pieters, Pascal A; van der Zon, Nick L L; de la Rambelje, Sander L; Bouten, Carlijn V C; Merkx, Maarten

    2014-01-01

    Real-time visualization of collagen is important in studies on tissue formation and remodeling in the research fields of developmental biology and tissue engineering. Our group has previously reported on a fluorescent probe for the specific imaging of collagen in live tissue in situ, consisting of the native collagen binding protein CNA35 labeled with fluorescent dye Oregon Green 488 (CNA35-OG488). The CNA35-OG488 probe has become widely used for collagen imaging. To allow for the use of CNA35-based probes in a broader range of applications, we here present a toolbox of six genetically-encoded collagen probes which are fusions of CNA35 to fluorescent proteins that span the visible spectrum: mTurquoise2, EGFP, mAmetrine, LSSmOrange, tdTomato and mCherry. While CNA35-OG488 requires a chemical conjugation step for labeling with the fluorescent dye, these protein-based probes can be easily produced in high yields by expression in E. coli and purified in one step using Ni2+-affinity chromatography. The probes all bind specifically to collagen, both in vitro and in porcine pericardial tissue. Some first applications of the probes are shown in multicolor imaging of engineered tissue and two-photon imaging of collagen in human skin. The fully-genetic encoding of the new probes makes them easily accessible to all scientists interested in collagen formation and remodeling.

  1. PROBES FOR THE SPECIFIC DETECTION OF CRYPTOSPORIDIUM PARVUM

    EPA Science Inventory

    A probe set, consisting of two synthetic oligonucleotides each tagged with a fluorescent reporter molecule, has been developed for specific detection of Cryptosporidium parvum.Each probe strand detects ribosomal RNA from a range of isolates of this species, and the combination is...

  2. Finite element and analytical solutions for van der Pauw and four-point probe correction factors when multiple non-ideal measurement conditions coexist

    NASA Astrophysics Data System (ADS)

    Reveil, Mardochee; Sorg, Victoria C.; Cheng, Emily R.; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O.

    2017-09-01

    This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.
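    For context, the ideal-geometry relation that these correction factors modify is the infinite-sheet four-point probe formula R_s = (π/ln 2)(V/I). A minimal sketch (the finite-sample factor 4.2 below is a placeholder for illustration, not a value from the paper's tables):

```python
import math

def sheet_resistance(V, I, correction=math.pi / math.log(2)):
    """Four-point probe sheet resistance R_s = C * (V / I).
    For an infinite thin sheet with equally spaced collinear probes,
    C = pi / ln 2 ≈ 4.532; finite or asymmetric geometries replace C
    with a smaller tabulated correction factor."""
    return correction * V / I

ideal = sheet_resistance(1e-3, 1e-3)                    # infinite-sheet assumption
finite = sheet_resistance(1e-3, 1e-3, correction=4.2)   # placeholder finite-size factor
```

    The point of the paper's library is supplying the right `correction` value when several non-ideal conditions (finite size, asymmetric placement, thickness) apply at once.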

  3. Finite element and analytical solutions for van der Pauw and four-point probe correction factors when multiple non-ideal measurement conditions coexist.

    PubMed

    Reveil, Mardochee; Sorg, Victoria C; Cheng, Emily R; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O

    2017-09-01

    This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.

  4. A cancer cell-specific fluorescent probe for imaging Cu²⁺ in living cancer cells

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Dong, Baoli; Kong, Xiuqi; Song, Xuezhen; Zhang, Nan; Lin, Weiying

    2017-07-01

    Monitoring copper levels in cancer cells is important for further understanding its roles in cell proliferation, and could also afford novel copper-based strategies for cancer therapy. Herein, we have developed a novel cancer cell-specific fluorescent probe for detecting Cu²⁺ in living cancer cells. The probe employs biotin as the cancer cell-specific group. Before treatment with Cu²⁺, the probe showed nearly no fluorescence; in response to Cu²⁺, it displays strong fluorescence at 581 nm. The probe exhibited excellent sensitivity and high selectivity for Cu²⁺ over other relevant species. Under the guidance of the biotin group, the probe could be successfully used for detecting Cu²⁺ in living cancer cells. We expect that this design strategy could be further applied to the detection of other important biomolecules in living cancer cells.

  5. New prediction model for probe specificity in an allele-specific extension reaction for haplotype-specific extraction (HSE) of Y chromosome mixtures.

    PubMed

    Rothe, Jessica; Watkins, Norman E; Nagy, Marion

    2012-01-01

    Allele-specific extension reactions (ASERs) use 3' terminus-specific primers for the selective extension of completely annealed matches by polymerase. The ability of the polymerase to extend non-specific 3' terminal mismatches leads to a failure of the reaction, a process that is only partly understood and predictable, and often requires time-consuming assay design. In our studies we investigated haplotype-specific extraction (HSE) for the separation of male DNA mixtures. HSE is an ASER and provides the ability to distinguish between diploid chromosomes from one or more individuals. Here, we show that the success of HSE and allele-specific extension depend strongly on the concentration difference between complete match and 3' terminal mismatch. Using the oligonucleotide-modeling platform Visual Omp, we demonstrated the dependency of the discrimination power of the polymerase on match- and mismatch-target hybridization between different probe lengths. Therefore, the probe specificity in HSE could be predicted by performing a relative comparison of different probe designs with their simulated differences between the duplex concentration of target-probe match and mismatches. We tested this new model for probe design in more than 300 HSE reactions with 137 different probes and obtained an accordance of 88%.

  6. On Polynomial Solutions of Linear Differential Equations with Polynomial Coefficients

    ERIC Educational Resources Information Center

    Si, Do Tan

    1977-01-01

    Demonstrates a method for solving linear differential equations with polynomial coefficients based on the fact that the operators z and D + d/dz are known to be Hermitian conjugates with respect to the Bargman and Louck-Galbraith scalar products. (MLH)

  7. In vivo targeted peripheral nerve imaging with a nerve-specific nanoscale magnetic resonance probe.

    PubMed

    Zheng, Linfeng; Li, Kangan; Han, Yuedong; Wei, Wei; Zheng, Sujuan; Zhang, Guixiang

    2014-11-01

    Neuroimaging plays a pivotal role in clinical practice. Currently, computed tomography (CT), magnetic resonance imaging (MRI), ultrasonography, and positron emission tomography (PET) are applied in the clinical setting as neuroimaging modalities. There is no optimal imaging modality for clinical peripheral nerve imaging, even though fluorescence/bioluminescence imaging has been used for preclinical studies of the nervous system. Some studies have shown that molecular and cellular MRI (MCMRI) can be used to visualize and image the nervous system at the cellular and molecular level. Other studies revealed that there are different pathological/molecular changes at the proximal and distal sites after peripheral nerve injury (PNI). Therefore, we hypothesized that in vivo peripheral nerve targets can be imaged using MCMRI with specific MRI probes. Specific probes should have higher penetrability for the blood-nerve barrier (BNB) in vivo. Here, a functional nanoscale MRI probe based on nerve-specific proteins as targets, specifically using a monoclonal antibody (mAb) fragment conjugated to iron nanoparticles as an MRI probe, was constructed for further study. The MRI probe allows for imaging of peripheral nerve targets in vivo. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Charge-based MOSFET model based on the Hermite interpolation polynomial

    NASA Astrophysics Data System (ADS)

    Colalongo, Luigi; Richelli, Anna; Kovacs, Zsolt

    2017-04-01

    An accurate charge-based compact MOSFET model is developed using the third order Hermite interpolation polynomial to approximate the relation between surface potential and inversion charge in the channel. This new formulation of the drain current retains the same simplicity of the most advanced charge-based compact MOSFET models such as BSIM, ACM and EKV, but it is developed without requiring the crude linearization of the inversion charge. Hence, the asymmetry and the non-linearity in the channel are accurately accounted for. Nevertheless, the expression of the drain current can be worked out to be analytically equivalent to BSIM, ACM and EKV. Furthermore, thanks to this new mathematical approach the slope factor is rigorously defined in all regions of operation and no empirical assumption is required.
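    A cubic Hermite interpolant of the kind referenced here matches function values and first derivatives at both endpoints. A generic sketch of that construction (not the MOSFET surface-potential/charge relation itself):

```python
def hermite_cubic(x0, x1, f0, f1, d0, d1):
    """Return the cubic Hermite interpolant p with
    p(x0)=f0, p(x1)=f1, p'(x0)=d0, p'(x1)=d1."""
    h = x1 - x0
    def p(x):
        t = (x - x0) / h
        # standard cubic Hermite basis functions
        h00 = (1 + 2 * t) * (1 - t) ** 2
        h10 = t * (1 - t) ** 2
        h01 = t ** 2 * (3 - 2 * t)
        h11 = t ** 2 * (t - 1)
        return h00 * f0 + h10 * h * d0 + h01 * f1 + h11 * h * d1
    return p

# interpolate f(x) = x**3 on [0, 1]; a cubic is reproduced exactly
p = hermite_cubic(0.0, 1.0, 0.0, 1.0, 0.0, 3.0)
```

    Because the interpolant honors both endpoint values and slopes, it can track an asymmetric, nonlinear charge profile without the crude linearization the abstract mentions.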

  9. Statistics of Data Fitting: Flaws and Fixes of Polynomial Analysis of Channeled Spectra

    NASA Astrophysics Data System (ADS)

    Karstens, William; Smith, David

    2013-03-01

    Starting from general statistical principles, we have critically examined Baumeister's procedure* for determining the refractive index of thin films from channeled spectra. Briefly, the method assumes that the index and interference fringe order may be approximated by polynomials quadratic and cubic in photon energy, respectively. The coefficients of the polynomials are related by differentiation, which is equivalent to comparing energy differences between fringes. However, we find that when the fringe order is calculated from the published IR index for silicon* and then analyzed with Baumeister's procedure, the results do not reproduce the original index. This problem has been traced to (1) the use of unphysical powers in the polynomials (e.g., time-reversal invariance requires that the index be an even function of photon energy), and (2) the use of insufficient terms of the correct parity. Exclusion of unphysical terms and addition of quartic and quintic terms to the index and order polynomials yields significantly better fits with fewer parameters. This represents a specific example of using statistics to determine if the assumed fitting model adequately captures the physics contained in experimental data. The use of analysis of variance (ANOVA) and the Durbin-Watson statistic to test criteria for the validity of least-squares fitting will be discussed. *D.F. Edwards and E. Ochoa, Appl. Opt. 19, 4130 (1980). Supported in part by the US Department of Energy, Office of Nuclear Physics under contract DE-AC02-06CH11357.
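    The parity constraint is easy to enforce in a least-squares fit by building the design matrix from even powers only. A sketch with a toy dispersion curve (the coefficients are illustrative, not the published silicon data):

```python
import numpy as np

# Toy even-parity dispersion curve n(E); coefficients are illustrative.
E = np.linspace(0.1, 1.0, 50)            # photon energy (arbitrary units)
n_true = 3.42 + 0.05 * E**2 + 0.01 * E**4

# Time-reversal invariance => n is even in E, so the design matrix
# contains only even powers: [1, E^2, E^4].
A = np.column_stack([np.ones_like(E), E**2, E**4])
coef, *_ = np.linalg.lstsq(A, n_true, rcond=None)
```

    Restricting the basis this way removes the unphysical odd terms while still leaving enough even-parity terms to capture the dispersion.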

  10. Aberration corrections for free-space optical communications in atmosphere turbulence using orbital angular momentum states.

    PubMed

    Zhao, S M; Leach, J; Gong, L Y; Ding, J; Zheng, B Y

    2012-01-02

    The effect of atmosphere turbulence on light's spatial structure compromises the information capacity of photons carrying the Orbital Angular Momentum (OAM) in free-space optical (FSO) communications. In this paper, we study two aberration correction methods to mitigate this effect. The first is the Shack-Hartmann wavefront correction method, which is based on the Zernike polynomials, and the second is a phase correction method specific to OAM states. Our numerical results show that the phase correction method for OAM states outperforms the Shack-Hartmann wavefront correction method, although both methods significantly improve the purity of a single OAM state and the channel capacities of the FSO communication link. At the same time, our experimental results show that the values of the participation functions decrease with the phase correction method for OAM states, i.e., the correction method effectively ameliorates the detrimental effect of atmosphere turbulence.

  11. Discrete-time state estimation for stochastic polynomial systems over polynomial observations

    NASA Astrophysics Data System (ADS)

    Hernandez-Gonzalez, M.; Basin, M.; Stepanov, O.

    2018-07-01

    This paper presents a solution to the mean-square state estimation problem for stochastic nonlinear polynomial systems over polynomial observations corrupted by additive white Gaussian noise. The solution is given in two steps: (a) computing the time-update equations and (b) computing the measurement-update equations for the state estimate and error covariance matrix. A closed form of this filter is obtained by expressing conditional expectations of polynomial terms as functions of the state estimate and error covariance. As a particular case, the mean-square filtering equations are derived for a third-degree polynomial system with second-degree polynomial measurements. Numerical simulations show the effectiveness of the proposed filter compared to the extended Kalman filter.

  12. New Prediction Model for Probe Specificity in an Allele-Specific Extension Reaction for Haplotype-Specific Extraction (HSE) of Y Chromosome Mixtures

    PubMed Central

    Rothe, Jessica; Watkins, Norman E.; Nagy, Marion

    2012-01-01

    Allele-specific extension reactions (ASERs) use 3′ terminus-specific primers for the selective extension of completely annealed matches by polymerase. The ability of the polymerase to extend non-specific 3′ terminal mismatches leads to a failure of the reaction, a process that is only partly understood and predictable, and often requires time-consuming assay design. In our studies we investigated haplotype-specific extraction (HSE) for the separation of male DNA mixtures. HSE is an ASER and provides the ability to distinguish between diploid chromosomes from one or more individuals. Here, we show that the success of HSE and allele-specific extension depend strongly on the concentration difference between complete match and 3′ terminal mismatch. Using the oligonucleotide-modeling platform Visual Omp, we demonstrated the dependency of the discrimination power of the polymerase on match- and mismatch-target hybridization between different probe lengths. Therefore, the probe specificity in HSE could be predicted by performing a relative comparison of different probe designs with their simulated differences between the duplex concentration of target-probe match and mismatches. We tested this new model for probe design in more than 300 HSE reactions with 137 different probes and obtained an accordance of 88%. PMID:23049901

  13. Characterizing baseline shift with a 4th-order polynomial function for portable biomedical near-infrared spectroscopy devices

    NASA Astrophysics Data System (ADS)

    Zhao, Ke; Ji, Yaoyao; Pan, Boan; Li, Ting

    2018-02-01

    Continuous-wave near-infrared spectroscopy (NIRS) devices have been highlighted for their clinical and health-care applications in noninvasive hemodynamic measurements. Baseline shift in these measurements has attracted much attention because of its clinical importance; nonetheless, currently published correction methods suffer from low reliability or high variability. In this study, we identify a polynomial fitting function well suited to baseline removal in NIRS. Unlike previous studies on baseline correction for near-infrared spectroscopy evaluation of non-hemodynamic particles, we focus on baseline fitting and the corresponding correction method for NIRS, and we find that a 4th-order polynomial fitting function outperforms the 2nd-order function reported in previous research. Through experimental tests of hemodynamic parameters on a solid phantom, we compared the fitting quality of the 4th-order and 2nd-order polynomials by recording and analyzing the R values and the SSE (sum of squares due to error) values. The R values of the 4th-order polynomial fits are all higher than 0.99 and significantly higher than the corresponding 2nd-order values, while the SSE values of the 4th order are significantly smaller than those of the 2nd order. By using this highly reliable, low-variability 4th-order polynomial fitting function, we are able to remove the baseline online and obtain more accurate NIRS measurements.
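
    The order comparison in the abstract can be sketched directly: fit 2nd- and 4th-order polynomial baselines and compare R and SSE. The drift curve below is synthetic stand-in data, not the paper's solid-phantom measurements:

```python
import numpy as np

def fit_baseline(t, signal, order):
    # Fit a polynomial baseline and report the goodness-of-fit R and the SSE
    coeffs = np.polyfit(t, signal, order)
    baseline = np.polyval(coeffs, t)
    sse = float(np.sum((signal - baseline) ** 2))           # sum of squared errors
    ss_tot = float(np.sum((signal - signal.mean()) ** 2))
    r = np.sqrt(max(0.0, 1.0 - sse / ss_tot))
    return baseline, r, sse

t = np.linspace(0.0, 1.0, 500)
drift = 0.8 * t**4 - 1.5 * t**3 + 0.6 * t**2 + 0.2 * t      # slow quartic drift
signal = drift + 0.002 * np.random.default_rng(0).normal(size=t.size)

_, r2, sse2 = fit_baseline(t, signal, 2)
_, r4, sse4 = fit_baseline(t, signal, 4)
corrected = signal - fit_baseline(t, signal, 4)[0]          # baseline removal
print(r4 > r2, sse4 < sse2)  # → True True
```

    Because the assumed drift is quartic, the 2nd-order model cannot track it and the 4th-order model reduces the SSE markedly, mirroring the paper's finding.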

  14. Quantitative Measurement of Protease-Activity with Correction of Probe Delivery and Tissue Absorption Effects

    PubMed Central

    Salthouse, Christopher D.; Reynolds, Fred; Tam, Jenny M.; Josephson, Lee; Mahmood, Umar

    2009-01-01

    Proteases play important roles in a variety of pathologies from heart disease to cancer. Quantitative measurement of protease activity is possible using a novel spectrally matched dual fluorophore probe and a small animal lifetime imager. The recorded fluorescence from an activatable fluorophore, one that changes its fluorescent amplitude after biological target interaction, is also influenced by other factors including imaging probe delivery and optical tissue absorption of excitation and emission light. Fluorescence from a second spectrally matched constant (non-activatable) fluorophore on each nanoparticle platform can be used to correct for both probe delivery and tissue absorption. The fluorescence from each fluorophore is separated using fluorescence lifetime methods. PMID:20161242

  15. Modern Focused-Ion-Beam-Based Site-Specific Specimen Preparation for Atom Probe Tomography.

    PubMed

    Prosa, Ty J; Larson, David J

    2017-04-01

    Approximately 30 years after the first use of focused ion beam (FIB) instruments to prepare atom probe tomography specimens, this technique has grown to be used by hundreds of researchers around the world. This past decade has seen tremendous advances in atom probe applications, enabled by the continued development of FIB-based specimen preparation methodologies. In this work, we provide a short review of the origin of the FIB method and the standard methods used today for lift-out and sharpening, using the annular milling method as applied to atom probe tomography specimens. Key steps for enabling correlative analysis with transmission electron-beam backscatter diffraction, transmission electron microscopy, and atom probe tomography are presented, and strategies for preparing specimens for modern microelectronic device structures are reviewed and discussed in detail. Examples are used for discussion of the steps for each of these methods. We conclude with examples of the challenges presented by complex topologies such as nanowires, nanoparticles, and organic materials.

  16. SHAPE Selection (SHAPES) enriches for RNA structure signal in SHAPE sequencing-based probing data

    PubMed Central

    Poulsen, Line Dahl; Kielpinski, Lukasz Jan; Salama, Sofie R.; Krogh, Anders; Vinther, Jeppe

    2015-01-01

    Selective 2′ Hydroxyl Acylation analyzed by Primer Extension (SHAPE) is an accurate method for probing of RNA secondary structure. In existing SHAPE methods, the SHAPE probing signal is normalized to a no-reagent control to correct for the background caused by premature termination of the reverse transcriptase. Here, we introduce a SHAPE Selection (SHAPES) reagent, N-propanone isatoic anhydride (NPIA), which retains the ability of SHAPE reagents to accurately probe RNA structure, but also allows covalent coupling between the SHAPES reagent and a biotin molecule. We demonstrate that SHAPES-based selection of cDNA–RNA hybrids on streptavidin beads effectively removes the large majority of background signal present in SHAPE probing data and that sequencing-based SHAPES data contain the same amount of RNA structure data as regular sequencing-based SHAPE data obtained through normalization to a no-reagent control. Moreover, the selection efficiently enriches for probed RNAs, suggesting that the SHAPES strategy will be useful for applications with high-background and low-probing signal such as in vivo RNA structure probing. PMID:25805860

  17. On polynomial preconditioning for indefinite Hermitian matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1989-01-01

    The minimal residual method combined with polynomial preconditioning is studied for solving large linear systems (Ax = b) with indefinite Hermitian coefficient matrices (A). The standard approach for choosing the polynomial preconditioner leads to preconditioned systems which are positive definite. Here, a different strategy is studied which leaves the preconditioned coefficient matrix indefinite. More precisely, the polynomial preconditioner is designed to cluster the positive eigenvalues of A around 1 and the negative eigenvalues around some negative constant. In particular, it is shown that such indefinite polynomial preconditioners can be obtained as the optimal solutions of a certain two-parameter family of Chebyshev approximation problems. Some basic results are established for these approximation problems, and a Remez-type algorithm is sketched for their numerical solution. The problem of selecting the parameters such that the resulting indefinite polynomial preconditioner speeds up the convergence of the minimal residual method optimally is also addressed. An approach is proposed based on the concept of asymptotic convergence factors. Finally, some numerical examples of indefinite polynomial preconditioners are given.
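
    The mechanics of polynomial preconditioning can be illustrated with the simplest possible choice p(A) = A, so MINRES is applied to p(A)A = A², whose spectrum no longer straddles zero. Note this toy choice makes the preconditioned matrix definite, whereas the paper's Chebyshev-optimal polynomials deliberately keep it indefinite; the sketch shows only the mechanics, not the paper's construction:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

rng = np.random.default_rng(1)
n = 200
# synthetic indefinite Hermitian matrix: spectrum in [-2, -0.5] ∪ [0.5, 2]
eigs = np.concatenate([rng.uniform(-2.0, -0.5, n // 2),
                       rng.uniform(0.5, 2.0, n // 2)])
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (Q * eigs) @ Q.T

# preconditioned operator p(A)A = A^2, spectrum clustered in [0.25, 4]
B = LinearOperator((n, n), matvec=lambda v: A @ (A @ v))

b = rng.standard_normal(n)
z, info = minres(B, A @ b)        # solve A^2 z = A b, i.e. A z = b
print(info, np.linalg.norm(A @ z - b) / np.linalg.norm(b))
```

    The clustered spectrum of A² is what accelerates MINRES; the paper's two-parameter Chebyshev polynomials achieve a similar clustering of the two eigenvalue branches without squaring the condition number.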

  18. Sequence-specific bias correction for RNA-seq data using recurrent neural networks.

    PubMed

    Zhang, Yao-Zhong; Yamaguchi, Rui; Imoto, Seiya; Miyano, Satoru

    2017-01-25

    The recent success of deep learning techniques in machine learning and artificial intelligence has stimulated a great deal of interest among bioinformaticians, who now wish to bring the power of deep learning to bear on a host of bioinformatics problems. Deep learning is ideally suited for biological problems that require automatic or hierarchical feature representation when prior knowledge is limited. In this work, we address the sequence-specific bias correction problem for RNA-seq data using recurrent neural networks (RNNs) to model nucleotide sequences without pre-determining sequence structures. The sequence-specific bias of a read is then calculated based on the sequence probabilities estimated by the RNNs and used in the estimation of gene abundance. We explore the application of two popular RNN recurrent units for this task and demonstrate that RNN-based approaches provide a flexible way to model nucleotide sequences without knowledge of predetermined sequence structures. Our experiments show that training an RNN-based nucleotide sequence model is efficient and that RNN-based bias correction methods compare well with the state-of-the-art sequence-specific bias correction method on the commonly used MAQC-III data set. RNNs thus provide an alternative and flexible way to calculate sequence-specific bias without explicitly pre-determining sequence structures.

  19. Locked Nucleic Acid Probe-Based Real-Time PCR Assay for the Rapid Detection of Rifampin-Resistant Mycobacterium tuberculosis

    PubMed Central

    Sun, Chongyun; Li, Chao; Wang, Xiaochen; Liu, Haican; Zhang, Pingping; Zhao, Xiuqin; Wang, Xinrui; Jiang, Yi; Yang, Ruifu; Wan, Kanglin; Zhou, Lei

    2015-01-01

    Drug-resistant Mycobacterium tuberculosis can be rapidly diagnosed through nucleic acid amplification techniques by analyzing the variations in the associated gene sequences. In the present study, a locked nucleic acid (LNA) probe-based real-time PCR assay was developed to identify the mutations in the rpoB gene associated with rifampin (RFP) resistance in M. tuberculosis. Six LNA probes with the discrimination capability of one-base mismatch were designed to monitor the 23 most frequent rpoB mutations. The target mutations were identified using the probes in a “probe dropout” manner (quantification cycle = 0); thus, the proposed technique exhibited superiority in mutation detection. The LNA probe-based real-time PCR assay was developed in a two-tube format with three LNA probes and one internal amplification control probe in each tube. The assay showed excellent specificity to M. tuberculosis with or without RFP resistance by evaluating 12 strains of common non-tuberculosis mycobacteria. The limit of detection of M. tuberculosis was 10 genomic equivalents (GE)/reaction by further introducing a nested PCR method. In a blind validation of 154 clinical mycobacterium isolates, 142/142 (100%) were correctly detected through the assay. Of these isolates, 88/88 (100%) were determined as RFP susceptible and 52/54 (96.3%) were characterized as RFP resistant. Two unrecognized RFP-resistant strains were sequenced and were found to contain mutations outside the range of the 23 mutation targets. In conclusion, this study established a sensitive, accurate, and low-cost LNA probe-based assay suitable for a four-multiplexing real-time PCR instrument. The proposed method can be used to diagnose RFP-resistant tuberculosis in clinical laboratories. PMID:26599667

  20. Resonance Raman Probes for Organelle-Specific Labeling in Live Cells

    NASA Astrophysics Data System (ADS)

    Kuzmin, Andrey N.; Pliss, Artem; Lim, Chang-Keun; Heo, Jeongyun; Kim, Sehoon; Rzhevskii, Alexander; Gu, Bobo; Yong, Ken-Tye; Wen, Shangchun; Prasad, Paras N.

    2016-06-01

    Raman microspectroscopy provides for high-resolution non-invasive molecular analysis of biological samples and has a breakthrough potential for dissection of cellular molecular composition at a single organelle level. However, the potential of Raman microspectroscopy can be fully realized only when novel types of molecular probes distinguishable in the Raman spectroscopy modality are developed for labeling of specific cellular domains to guide spectrochemical spatial imaging. Here we report on the design of a next generation Raman probe, based on BlackBerry Quencher 650 compound, which provides unprecedentedly high signal intensity through the Resonance Raman (RR) enhancement mechanism. Remarkably, RR enhancement occurs with low-toxic red light, which is close to maximum transparency in the biological optical window. The utility of proposed RR probes was validated for targeting lysosomes in live cultured cells, which enabled identification and subsequent monitoring of dynamic changes in this organelle by Raman imaging.

  1. Incorporation of extra amino acids in peptide recognition probe to improve specificity and selectivity of an electrochemical peptide-based sensor.

    PubMed

    Zaitouna, Anita J; Maben, Alex J; Lai, Rebecca Y

    2015-07-30

    We investigated the effect of incorporating extra amino acids (AA) at the N-terminus of the thiolated and methylene blue-modified peptide probe on both the specificity and selectivity of an electrochemical peptide-based (E-PB) HIV sensor. The addition of a flexible (SG)3 hexapeptide is particularly useful in improving sensor selectivity, whereas the addition of a highly hydrophilic (EK)3 hexapeptide has been shown to be effective in enhancing sensor specificity. Overall, both E-PB sensors fabricated using peptide probes with the added AA (SG-EAA and EK-EAA) showed better specificity and selectivity, especially when compared to the sensor fabricated using a peptide probe without the extra AA (EAA). For example, the selectivity factor recorded in 50% saliva was ∼2.5 for the EAA sensor, whereas it was 7.8 for both the SG-EAA and EK-EAA sensors. Other sensor properties such as the limit of detection and dynamic range were minimally affected by the addition of the six-AA sequence. The limit of detection was 0.5 nM for the EAA sensor and 1 nM for both the SG-EAA and EK-EAA sensors. The saturation target concentration was ∼200 nM for all three sensors. Unlike previously reported E-PB HIV sensors, the peptide probe functions as both the recognition element and an antifouling passivating agent; this modification eliminates the need for an additional antifouling diluent, which simplifies the sensor design and fabrication protocol. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Animal Viruses Probe dataset (AVPDS) for microarray-based diagnosis and identification of viruses.

    PubMed

    Yadav, Brijesh S; Pokhriyal, Mayank; Vasishtha, Dinesh P; Sharma, Bhaskar

    2014-03-01

    AVPDS (Animal Viruses Probe dataset) is a dataset of virus-specific and conserved oligonucleotides for the identification and diagnosis of viruses infecting animals. The current dataset contains 20,619 virus-specific probes for 833 viruses and their subtypes and 3,988 conserved probes for 146 viral genera. The dataset of virus-specific probes has two fields, virus name and probe sequence. Similarly, the table of conserved probes for viral genera has fields for genus, subgroup within genus, and probe sequence. The subgroups within a genus are artificial divisions with no taxonomic significance; each contains probes that identify viruses in that specific subgroup of the genus. Using this dataset, we successfully diagnosed the first case of Newcastle disease virus in sheep and reported a mixed infection of Bovine viral diarrhea virus and Bovine herpesvirus in cattle. The dataset also contains probes that cross-react across species experimentally even though they meet specifications computationally; these probes have been marked. We hope that this dataset will be useful in microarray-based detection of viruses. The dataset can be accessed through the link https://dl.dropboxusercontent.com/u/94060831/avpds/HOME.html.

  3. Cylinder surface test with Chebyshev polynomial fitting method

    NASA Astrophysics Data System (ADS)

    Yu, Kui-bang; Guo, Pei-ji; Chen, Xi

    2017-10-01

    The Zernike polynomial fitting method is often applied in the testing of optical components and systems to represent the wavefront and surface error over a circular domain. Zernike polynomials, however, are not orthogonal over a rectangular region, which makes them unsuitable for testing optical elements with rectangular apertures, such as cylinder surfaces. Applying Chebyshev polynomials, which are orthogonal over a rectangular area, as a substitute in the fitting method solves this problem. For a cylinder surface with a diameter of 50 mm and an F-number of 1/7, a measuring system based on Fizeau interferometry has been designed in Zemax. The expressions of the two-dimensional Chebyshev polynomials are given and their relationship with the aberrations is presented. Furthermore, Chebyshev polynomials are used as basis terms to analyze the rectangular-aperture test data, with the coefficients of the different terms obtained from the test data through the method of least squares. Comparing the Chebyshev spectra under different misalignments shows that each misalignment is independent and has a definite relationship with certain Chebyshev terms. The simulation results show that the Chebyshev polynomial fitting method offers a great improvement in the efficiency of the detection and adjustment of the cylinder surface test.
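
    The least-squares step over a rectangular aperture can be sketched with NumPy's two-dimensional Chebyshev tools; the synthetic "surface error" below is an assumption for illustration, not measured interferometer data:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# rectangular test aperture, coordinates normalized to [-1, 1] x [-1, 1]
x, y = np.meshgrid(np.linspace(-1, 1, 40), np.linspace(-1, 1, 30))
xf, yf = x.ravel(), y.ravel()

# synthetic surface error built from low-order 2-D Chebyshev terms T_i(x)T_j(y)
true = np.zeros((3, 3))
true[0, 1] = 0.5      # tilt-like term
true[1, 0] = -0.3
true[2, 2] = 0.1
z = C.chebval2d(xf, yf, true)

# least-squares fit of the 2-D Chebyshev coefficients (degree 2 per direction)
V = C.chebvander2d(xf, yf, [2, 2])            # design matrix
coef, *_ = np.linalg.lstsq(V, z, rcond=None)
print(np.allclose(coef.reshape(3, 3), true))  # → True
```

    Because the Chebyshev basis is orthogonal over the rectangle, the fitted coefficients form a "Chebyshev spectrum" whose individual terms can be read off as misalignment contributions, as the abstract describes.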

  4. Novel Image Encryption Scheme Based on Chebyshev Polynomial and Duffing Map

    PubMed Central

    2014-01-01

    We present a novel image encryption algorithm using a Chebyshev polynomial (for permutation and substitution) and a Duffing map (for substitution). Comprehensive security analysis has been performed on the designed scheme using key space analysis, visual testing, histogram analysis, information entropy calculation, correlation coefficient analysis, differential analysis, key sensitivity testing, and speed testing. The study demonstrates that the proposed image encryption algorithm offers a key space of more than 10^113 and a desirable level of security based on the good statistical results and theoretical arguments. PMID:25143970
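
    The permutation stage driven by a Chebyshev polynomial map can be sketched as follows. The key values and the tiny 16-pixel "image" are hypothetical, and the scheme's substitution stages (including the Duffing map) are omitted:

```python
import numpy as np

def chebyshev_map(x0, degree, n):
    # Iterate the Chebyshev map x_{k+1} = cos(degree * arccos(x_k)) on [-1, 1],
    # which is chaotic for integer degree >= 2
    xs = np.empty(n)
    x = x0
    for k in range(n):
        x = np.cos(degree * np.arccos(x))
        xs[k] = x
    return xs

# key = (x0, degree); the chaotic sequence orders a permutation of pixel indices
x0, degree, npix = 0.3, 4, 16
seq = chebyshev_map(x0, degree, npix)
perm = np.argsort(seq)                 # permutation stage of the scheme
pixels = np.arange(npix)               # stand-in image data
scrambled = pixels[perm]

# decryption: regenerate the sequence from the same key and invert the permutation
inv = np.argsort(perm)
print(np.array_equal(scrambled[inv], pixels))  # → True
```

    Sensitivity to (x0, degree) is what gives the scheme its large key space: a slightly different x0 yields an unrelated permutation.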

  5. Photoelectrocyclization as an activation mechanism for organelle-specific live-cell imaging probes.

    PubMed

    Tran, Mai N; Chenoweth, David M

    2015-05-26

    Photoactivatable fluorophores are useful tools in live-cell imaging owing to their potential for precise spatial and temporal control. In this report, a new photoactivatable organelle-specific live-cell imaging probe based on a 6π electrocyclization/oxidation mechanism is described. It is shown that this new probe is water-soluble, non-cytotoxic, cell-permeable, and useful for mitochondrial imaging. The probe displays large Stokes shifts in both pre-activated and activated forms, allowing simultaneous use with common dyes and fluorescent proteins. Sequential single-cell activation experiments in dense cellular environments demonstrate high spatial precision and utility in single- or multi-cell labeling experiments. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Analytical and numerical construction of vertical periodic orbits about triangular libration points based on polynomial expansion relations among directions

    NASA Astrophysics Data System (ADS)

    Qian, Ying-Jing; Yang, Xiao-Dong; Zhai, Guan-Qiao; Zhang, Wei

    2017-08-01

    Inspired by the nonlinear modes concept in vibrational dynamics, the vertical periodic orbits around the triangular libration points are revisited for the Circular Restricted Three-Body Problem. The ζ-component motion is treated as the dominant motion, and the ξ- and η-component motions are treated as the slave motions. The slave motions are related to the dominant motion through approximate nonlinear polynomial expansions with respect to the ζ-position and ζ-velocity during one of the periodic orbital motions. By employing the relations among the three directions, the three-dimensional system can be transformed into a one-dimensional problem. The approximate three-dimensional vertical periodic solution can then be obtained analytically by solving the dominant motion only in the ζ-direction. To demonstrate the effectiveness of the proposed method, an accuracy study was carried out to validate the polynomial expansion (PE) method. As one application, the invariant nonlinear relations in polynomial expansion form are used as constraints to obtain numerical solutions by differential correction. The nonlinear relations among the directions provide an alternative point of view from which to explore the overall dynamics of periodic orbits around libration points with general rules.

  7. Model-based multi-fringe interferometry using Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Gu, Wei; Song, Weihong; Wu, Gaofeng; Quan, Haiyang; Wu, Yongqian; Zhao, Wenchuan

    2018-06-01

    In this paper, a general phase retrieval method is proposed which is based on a single interferogram with a small number of fringes (either tilt or power). Zernike polynomials are used to characterize the phase to be measured, and the phase distribution is reconstructed by a non-linear least squares method. Experiments show that the proposed method obtains satisfactory results compared to the standard phase-shifting interferometry technique. Additionally, the retrace errors of the proposed method can be neglected because of the few fringes; the method needs no auxiliary phase-shifting facilities (low cost) and is easy to implement, with no phase-unwrapping step required.

  8. Stabilisation of discrete-time polynomial fuzzy systems via a polynomial Lyapunov approach

    NASA Astrophysics Data System (ADS)

    Nasiri, Alireza; Nguang, Sing Kiong; Swain, Akshya; Almakhles, Dhafer

    2018-02-01

    This paper deals with the problem of designing a controller for a class of discrete-time nonlinear systems represented by a discrete-time polynomial fuzzy model. Most existing control design methods for discrete-time fuzzy polynomial systems cannot guarantee that the Lyapunov function is a radially unbounded polynomial function, hence global stability cannot be assured. The proposed control design in this paper guarantees a radially unbounded polynomial Lyapunov function, which ensures global stability. In the proposed design, a state feedback structure is considered, and the non-convexity problem is solved by incorporating an integrator into the controller. Sufficient conditions for stability are derived in terms of polynomial matrix inequalities, which are solved via SOSTOOLS in MATLAB. A numerical example is presented to illustrate the effectiveness of the proposed controller.

  9. Microscopic Analysis of Corn Fiber Using Corn Starch- and Cellulose-Specific Molecular Probes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, S. E.; Donohoe, B. S.; Beery, K. E.

    Ethanol is the primary liquid transportation fuel produced from renewable feedstocks in the United States today. The majority of corn grain, the primary feedstock for ethanol production, has historically been processed in wet mills yielding products such as gluten feed, gluten meal, starch, and germ. Starch extracted from the grain is used to produce ethanol in saccharification and fermentation steps; however, the extraction of starch is not 100% efficient. To better understand starch extraction during the wet milling process, we have developed fluorescent probes that can be used to visually localize starch and cellulose in samples using confocal microscopy. These probes are based on the binding specificities of two types of carbohydrate binding modules (CBMs), which are small substrate-specific protein domains derived from carbohydrate degrading enzymes. CBMs were fused, using molecular cloning techniques, to a green fluorescent protein (GFP) or to the red fluorescent protein DsRed (RFP). Using these engineered probes, we found that the binding of the starch-specific probe correlates with starch content in corn fiber samples. We also demonstrate that there is starch internally localized in the endosperm that may contribute to the high starch content in corn fiber. We also surprisingly found that the cellulose-specific probe did not bind to most corn fiber samples, but only to corn fiber that had been hydrolyzed using a thermochemical process that removes the residual starch and much of the hemicellulose. Our findings should be of interest to those working to increase the efficiency of the corn grain to ethanol process.

  10. Calibration and correction procedures for cosmic-ray neutron soil moisture probes located across Australia

    NASA Astrophysics Data System (ADS)

    Hawdon, Aaron; McJannet, David; Wallace, Jim

    2014-06-01

    The cosmic-ray probe (CRP) provides continuous estimates of soil moisture over an area of ˜30 ha by counting fast neutrons produced from cosmic rays which are predominantly moderated by water molecules in the soil. This paper describes the setup, measurement correction procedures, and field calibration of CRPs at nine locations across Australia with contrasting soil type, climate, and land cover. These probes form the inaugural Australian CRP network, which is known as CosmOz. CRP measurements require neutron count rates to be corrected for effects of atmospheric pressure, water vapor pressure changes, and variations in incoming neutron intensity. We assess the magnitude and importance of these corrections and present standardized approaches for network-wide analysis. In particular, we present a new approach to correct for incoming neutron intensity variations and test its performance against existing procedures used in other studies. Our field calibration results indicate that a generalized calibration function for relating neutron counts to soil moisture is suitable for all soil types, with the possible exception of very sandy soils with low water content. Using multiple calibration data sets, we demonstrate that the generalized calibration function only applies after accounting for persistent sources of hydrogen in the soil profile. Finally, we demonstrate that by following standardized correction procedures and scaling neutron counting rates of all CRPs to a single reference location, differences in calibrations between sites are related to site biomass. This observation provides a means for estimating biomass at a given location or for deriving coefficients for the calibration function in the absence of field calibration data.
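
    The correction chain and the generalized calibration function can be sketched as below. The coefficients (β ≈ 0.0077 hPa⁻¹ for pressure, α ≈ 0.0054 per g m⁻³ for water vapour, and the a0, a1, a2 of the Desilets-type function) are the commonly used literature values rather than CosmOz site-specific ones, and the input numbers are hypothetical:

```python
import numpy as np

def correct_counts(n_raw, p, p_ref, h, h_ref, i, i_ref,
                   beta=0.0077, alpha=0.0054):
    f_p = np.exp(beta * (p - p_ref))      # barometric-pressure correction
    f_wv = 1.0 + alpha * (h - h_ref)      # atmospheric water-vapour correction
    f_i = i_ref / i                       # incoming neutron-intensity scaling
    return n_raw * f_p * f_wv * f_i

def soil_moisture(n_corr, n0, a0=0.0808, a1=0.372, a2=0.115):
    # Generalized (Desilets-type) calibration function relating corrected
    # counts to gravimetric soil moisture; n0 is the site calibration constant
    return a0 / (n_corr / n0 - a1) - a2

n_corr = correct_counts(1754.0, p=1013.0, p_ref=1000.0,   # pressures in hPa
                        h=12.0, h_ref=10.0,               # humidity in g/m^3
                        i=140.0, i_ref=150.0)             # neutron-monitor counts
theta = soil_moisture(n_corr, n0=3000.0)
print(round(float(theta), 3))
```

    All three correction factors multiply the raw counts before the calibration function is applied, which is why consistent reference values (p_ref, h_ref, i_ref) across the network matter for comparing sites.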

  11. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    NASA Astrophysics Data System (ADS)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents a reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in the entry dynamics of a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method, employing the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method supports the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and approximates the trajectory solution efficiently. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve computational efficiency. The cycle comprising SO, reliability assessment, and constraint update is repeated in the RBSO until the reliability requirements for constraint satisfaction are met. Finally, the RBSO is compared with traditional DO and with traditional sequential optimization based on Monte Carlo (MC) simulation on a specific Mars entry mission to demonstrate the effectiveness and efficiency of the proposed method.
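
    The nonintrusive PCE idea can be sketched in one dimension: project a black-box response g(ξ), ξ ~ N(0, 1), onto probabilists' Hermite polynomials via Gauss-Hermite quadrature, then read statistics off the coefficients. The toy response below is an assumption for illustration, not the paper's Mars-entry model:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coeffs(g, order, nquad=40):
    # Quadrature rule for the weight exp(-x^2/2), renormalized to the Gaussian pdf
    nodes, weights = hermegauss(nquad)
    weights = weights / np.sqrt(2.0 * np.pi)
    gv = g(nodes)
    # c_k = E[g * He_k] / E[He_k^2], with E[He_k^2] = k!
    return np.array([
        np.sum(weights * gv * hermeval(nodes, np.eye(order + 1)[k])) / factorial(k)
        for k in range(order + 1)
    ])

g = lambda xi: np.exp(0.3 * xi)        # toy uncertain response (lognormal output)
c = pce_coeffs(g, order=6)

# statistics fall out of the coefficients: E[g] = c_0, Var[g] = sum_k c_k^2 k!
mean = c[0]
var = sum(c[k] ** 2 * factorial(k) for k in range(1, 7))
print(mean, var)                       # compare with exact lognormal moments
```

    In the paper's setting the same projection is applied to trajectory outputs sampled from the deterministic dynamics, which is what makes the approach "nonintrusive": the solver is only ever evaluated, never rewritten.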

  12. Umbral orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez-Sendino, J. E.; del Olmo, M. A.

    2010-12-23

    We present an umbral operator version of the classical orthogonal polynomials. We obtain three families which are the umbral counterpart of the Jacobi, Laguerre and Hermite polynomials in the classical case.

  13. Multiple zeros of polynomials

    NASA Technical Reports Server (NTRS)

    Wood, C. A.

    1974-01-01

    For polynomials of higher degree, iterative numerical methods must be used. Four iterative methods are presented for approximating the zeros of a polynomial using a digital computer. Newton's method and Muller's method are two well known iterative methods which are presented. They extract the zeros of a polynomial by generating a sequence of approximations converging to each zero. However, both of these methods are very unstable when used on a polynomial which has multiple zeros. That is, either they fail to converge to some or all of the zeros, or they converge to very bad approximations of the polynomial's zeros. This material introduces two new methods, the greatest common divisor (G.C.D.) method and the repeated greatest common divisor (repeated G.C.D.) method, which are superior methods for numerically approximating the zeros of a polynomial having multiple zeros. These methods were programmed in FORTRAN 4 and comparisons in time and accuracy are given.
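
    The G.C.D. idea, dividing p by gcd(p, p′) to strip repeated factors so that every zero becomes simple, can be sketched with exact rational arithmetic. The example polynomial (a triple zero at 1 and a double zero at -2) is an assumption for illustration:

```python
from fractions import Fraction

def polydiv(a, b):
    # Divide polynomials given as coefficient lists (highest degree first);
    # returns (quotient, remainder) over the rationals
    a = a[:]
    q = []
    while len(a) >= len(b):
        f = a[0] / b[0]
        q.append(f)
        for i in range(len(b)):
            a[i] -= f * b[i]
        a.pop(0)          # leading coefficient is now exactly zero
    return q, a

def polygcd(a, b):
    # Euclid's algorithm on polynomials; returns a monic G.C.D.
    while any(c != 0 for c in b):
        _, r = polydiv(a, b)
        while r and r[0] == 0:
            r.pop(0)      # strip exact leading zeros of the remainder
        a, b = b, r
    return [c / a[0] for c in a]

def deriv(a):
    n = len(a) - 1
    return [c * (n - i) for i, c in enumerate(a[:-1])]

# p(x) = (x - 1)^3 (x + 2)^2 = x^5 + x^4 - 5x^3 - x^2 + 8x - 4
p = [Fraction(c) for c in (1, 1, -5, -1, 8, -4)]

# G.C.D. method: p / gcd(p, p') has the same zeros, each with multiplicity one,
# so Newton's or Muller's method regains stable convergence on the quotient
g = polygcd(p, deriv(p))
q, r = polydiv(p, g)      # q = (x - 1)(x + 2) = x^2 + x - 2, r = 0
print(q, r)
```

    Exact arithmetic is used here because floating-point polynomial GCDs are numerically delicate; the repeated-G.C.D. variant applies the same deflation again to recover each zero's multiplicity.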

  14. Quantification of different Eubacterium spp. in human fecal samples with species-specific 16S rRNA-targeted oligonucleotide probes.

    PubMed

    Schwiertz, A; Le Blay, G; Blaut, M

    2000-01-01

    Species-specific 16S rRNA-targeted, Cy3 (indocarbocyanine)-labeled oligonucleotide probes were designed and validated to quantify different Eubacterium species in human fecal samples. Probes were directed at Eubacterium barkeri, E. biforme, E. contortum, E. cylindroides (two probes), E. dolichum, E. hadrum, E. lentum, E. limosum, E. moniliforme, and E. ventriosum. The specificity of the probes was tested with the type strains and a range of common intestinal bacteria. With one exception, none of the probes showed cross-hybridization under stringent conditions. The species-specific probes were applied to fecal samples obtained from 12 healthy volunteers. E. biforme, E. cylindroides, E. hadrum, E. lentum, and E. ventriosum could be determined. All other Eubacterium species for which probes had been designed were under the detection limit of 10^7 cells g (dry weight) of feces^-1. The cell counts obtained are essentially in accordance with the literature data, which are based on colony counts. This shows that whole-cell in situ hybridization with species-specific probes is a valuable tool for the enumeration of Eubacterium species in feces.

  15. Quantification of Different Eubacterium spp. in Human Fecal Samples with Species-Specific 16S rRNA-Targeted Oligonucleotide Probes

    PubMed Central

    Schwiertz, Andreas; Le Blay, Gwenaelle; Blaut, Michael

    2000-01-01

    Species-specific 16S rRNA-targeted, Cy3 (indocarbocyanine)-labeled oligonucleotide probes were designed and validated to quantify different Eubacterium species in human fecal samples. Probes were directed at Eubacterium barkeri, E. biforme, E. contortum, E. cylindroides (two probes), E. dolichum, E. hadrum, E. lentum, E. limosum, E. moniliforme, and E. ventriosum. The specificity of the probes was tested with the type strains and a range of common intestinal bacteria. With one exception, none of the probes showed cross-hybridization under stringent conditions. The species-specific probes were applied to fecal samples obtained from 12 healthy volunteers. E. biforme, E. cylindroides, E. hadrum, E. lentum, and E. ventriosum could be determined. All other Eubacterium species for which probes had been designed were under the detection limit of 10^7 cells per g (dry weight) of feces. The cell counts obtained are essentially in accordance with the literature data, which are based on colony counts. This shows that whole-cell in situ hybridization with species-specific probes is a valuable tool for the enumeration of Eubacterium species in feces. PMID:10618251

  16. Polynomial fuzzy observer designs: a sum-of-squares approach.

    PubMed

    Tanaka, Kazuo; Ohtake, Hiroshi; Seo, Toshiaki; Tanaka, Motoyasu; Wang, Hua O

    2012-10-01

    This paper presents a sum-of-squares (SOS) approach to polynomial fuzzy observer designs for three classes of polynomial fuzzy systems. The proposed SOS-based framework provides a number of innovations and improvements over the existing linear matrix inequality (LMI)-based approaches to Takagi-Sugeno (T-S) fuzzy controller and observer designs. First, we briefly summarize previous results for the polynomial fuzzy system, which is a more general representation of the well-known T-S fuzzy system. Next, we propose polynomial fuzzy observers to estimate the states of the three classes of polynomial fuzzy systems and derive SOS conditions to design polynomial fuzzy controllers and observers. A remarkable feature of the SOS design conditions for the first two classes (Classes I and II) is that they realize the so-called separation principle, i.e., the polynomial fuzzy controller and observer for each class can be designed separately while still guaranteeing the stability of the overall control system and the convergence of the state-estimation error (via the observer) to zero. Although the separation principle does not hold for the last class (Class III), we propose an algorithm to design a polynomial fuzzy controller and observer that guarantee the stability of the overall control system and the convergence of the state-estimation error to zero. All the design conditions in the proposed approach can be represented in terms of SOS and are symbolically and numerically solved via the recently developed SOSTOOLS and a semidefinite-program solver, respectively. To illustrate the validity and applicability of the proposed approach, three design examples are provided. The examples demonstrate the advantages of the SOS-based approach over the existing LMI approaches to T-S fuzzy observer designs.
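
    The SOS certificate underlying such designs can be shown in miniature: a polynomial is a sum of squares exactly when it admits a positive semidefinite Gram matrix in a monomial basis. The sketch below hand-picks the Gram matrix for a toy polynomial, rather than solving a semidefinite program as SOSTOOLS would:

```python
import numpy as np

# p(x) = x^4 + 2x^2 + 1 is SOS iff p(x) = z(x)^T Q z(x) for some positive
# semidefinite Q, with monomial vector z(x) = [1, x, x^2].
Q = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])  # hand-picked Gram matrix for this toy case

# 1) Q is PSD, so p is a sum of squares: 1^2 + (sqrt(2) x)^2 + (x^2)^2.
assert np.all(np.linalg.eigvalsh(Q) >= -1e-12)

# 2) z^T Q z reproduces p at sample points.
for xv in np.linspace(-2.0, 2.0, 9):
    z = np.array([1.0, xv, xv**2])
    assert abs(z @ Q @ z - (xv**4 + 2*xv**2 + 1)) < 1e-9

print("SOS certificate verified")
```

    In the paper's setting, finding a PSD Gram matrix subject to the design constraints is exactly what the semidefinite-program solver automates.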

  17. Solutions of interval type-2 fuzzy polynomials using a new ranking method

    NASA Astrophysics Data System (ADS)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim; Ghani, Ahmad Termimi Ab.; Ahmad, Noor'Ani

    2015-10-01

    A few years ago, a ranking method was introduced for fuzzy polynomial equations. The concept of the ranking method is to find the actual roots of fuzzy polynomials (if they exist). Fuzzy polynomials are transformed into systems of crisp polynomials by means of a ranking method based on three parameters, namely Value, Ambiguity and Fuzziness. However, it was found that solutions based on these three parameters are quite inefficient at producing answers. Therefore, in this study a new ranking method has been developed with the aim of overcoming this inherent weakness. The new ranking method, which has four parameters, is then applied to interval type-2 fuzzy polynomials, covering the interval type-2 fuzzy polynomial equation, dual fuzzy polynomial equations and systems of fuzzy polynomials. The efficiency of the new ranking method is then examined numerically for triangular fuzzy numbers and trapezoidal fuzzy numbers. Finally, the approximate solutions produced in the numerical examples indicate that the new ranking method successfully produces actual roots for interval type-2 fuzzy polynomials.

  18. Cosmographic analysis with Chebyshev polynomials

    NASA Astrophysics Data System (ADS)

    Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando

    2018-05-01

    The limits of standard cosmography are here revised by addressing the problem of error propagation during statistical analyses. To do so, we propose the use of Chebyshev polynomials to parametrize cosmic distances. In particular, we demonstrate that building up rational Chebyshev polynomials significantly reduces error propagation with respect to standard Taylor series. This technique provides unbiased estimations of the cosmographic parameters and performs significantly better than previous numerical approximations. To show this, we compare rational Chebyshev polynomials with Padé series. In addition, we theoretically evaluate the convergence radius of the (1,1) Chebyshev rational polynomial and compare it with the convergence radii of Taylor and Padé approximations. We thus focus on regions in which the convergence of Chebyshev rational functions is better than that of standard approaches. With this recipe, as high-redshift data are employed, rational Chebyshev polynomials remain highly stable and enable one to derive highly accurate analytical approximations of Hubble's rate in terms of the cosmographic series. Finally, we check our theoretical predictions by setting bounds on cosmographic parameters through Monte Carlo integration techniques based on the Metropolis-Hastings algorithm. We apply our technique to high-redshift cosmic data, using the Joint Light-curve Analysis supernovae sample and the most recent versions of Hubble parameter and baryon acoustic oscillation measurements. We find that cosmography with Taylor series fails to be predictive with the aforementioned data sets, while it turns out to be much more stable using the Chebyshev approach.
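
    The core numerical point — an expansion built over a whole interval stays accurate where a Taylor expansion about one endpoint degrades — can be reproduced generically in a few lines, with exp as a stand-in integrand rather than the cosmographic series itself:

```python
import math
import numpy as np
from numpy.polynomial import chebyshev as C

a, b, deg = 0.0, 2.0, 4
z = np.linspace(a, b, 401)

# Degree-4 Taylor expansion of exp about z = 0 (the analogue of a
# cosmographic Taylor series about redshift zero).
taylor = sum(z**k / math.factorial(k) for k in range(deg + 1))

# Degree-4 Chebyshev interpolant built over the whole interval [0, 2].
cheb = C.Chebyshev.interpolate(np.exp, deg, domain=[a, b])

taylor_err = np.max(np.abs(taylor - np.exp(z)))
cheb_err = np.max(np.abs(cheb(z) - np.exp(z)))
print(taylor_err, cheb_err)  # the Chebyshev error is roughly 100x smaller
```

    At the far end of the interval the Taylor truncation error dominates, which mirrors the instability the abstract reports for Taylor cosmography at high redshift.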

  19. Comparative assessment of orthogonal polynomials for wavefront reconstruction over the square aperture.

    PubMed

    Ye, Jingfei; Gao, Zhishan; Wang, Shuai; Cheng, Jinlong; Wang, Wei; Sun, Wenqing

    2014-10-01

    Four orthogonal polynomial sets for reconstructing a wavefront over a square aperture based on the modal method are currently available, namely the 2D Chebyshev polynomials, 2D Legendre polynomials, Zernike square polynomials and Numerical polynomials. They are all orthogonal over the full unit square domain. The 2D Chebyshev polynomials are defined by the product of Chebyshev polynomials in the x and y variables, as are the 2D Legendre polynomials. The Zernike square polynomials are derived by the Gram-Schmidt orthogonalization process, where the integration region is the full unit square inscribed in the unit circle. The Numerical polynomials are obtained by numerical calculation. The present study compares these four orthogonal polynomial sets through theoretical analysis and numerical experiments in terms of reconstruction accuracy, remaining errors, and robustness. Results show that the Numerical orthogonal polynomials are superior to the other three because of their high accuracy and robustness, even in the case of a wavefront with incomplete data.
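
    A minimal modal-reconstruction sketch over the square aperture, using products of 1-D Chebyshev polynomials and a least-squares fit; the wavefront here is a made-up low-order aberration map, illustrative only:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sample a synthetic wavefront on the square aperture [-1, 1]^2.
x = np.linspace(-1, 1, 21)
X, Y = np.meshgrid(x, x)
wavefront = 0.5*X**2 - 0.3*X*Y + 0.1*Y**3   # hypothetical aberration map

# Modal fit: products of 1-D Chebyshev polynomials up to degree 3 in x and y.
V = C.chebvander2d(X.ravel(), Y.ravel(), [3, 3])
coeffs, *_ = np.linalg.lstsq(V, wavefront.ravel(), rcond=None)

recon = V @ coeffs
print(np.max(np.abs(recon - wavefront.ravel())))  # negligible residual
```

    Because the synthetic wavefront lies in the span of the basis, the fit is exact up to rounding; with measured slopes or missing data, the residual and the conditioning of V are what distinguish the four bases compared in the paper.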

  20. Robust stability of fractional order polynomials with complicated uncertainty structure

    PubMed Central

    Şenol, Bilal; Pekař, Libor

    2017-01-01

    The main aim of this article is to present a graphical approach to robust stability analysis for families of fractional order (quasi-)polynomials with complicated uncertainty structure. More specifically, the work emphasizes the multilinear, polynomial and general structures of uncertainty and, moreover, the retarded quasi-polynomials with parametric uncertainty are studied. Since the families with these complex uncertainty structures suffer from the lack of analytical tools, their robust stability is investigated by numerical calculation and depiction of the value sets and subsequent application of the zero exclusion condition. PMID:28662173

  1. Optical design of a novel instrument that uses the Hartmann-Shack sensor and Zernike polynomials to measure and simulate customized refraction correction surgery outcomes and patient satisfaction

    NASA Astrophysics Data System (ADS)

    Yasuoka, Fatima M. M.; Matos, Luciana; Cremasco, Antonio; Numajiri, Mirian; Marcato, Rafael; Oliveira, Otavio G.; Sabino, Luis G.; Castro N., Jarbas C.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.

    2016-03-01

    An optical system that conjugates the patient's pupil to the plane of a Hartmann-Shack (HS) wavefront sensor has been simulated using optical design software, and an optical bench prototype was assembled using a mechanical eye device, beam splitter, illumination system, lenses, mirrors, mirrored prism, movable mirror, wavefront sensor and CCD camera. The mechanical eye device is used to simulate aberrations of the eye. Rays emitted from this device travel through the beam splitter into the optical system; some fall on the CCD camera, while others pass through the optical system and finally reach the sensor. Eye models based on typical in vivo eye aberrations are constructed using the optical design software Zemax. The computed HS images for each case are acquired and processed using customized techniques. The simulated and real images for low-order aberrations are compared using centroid coordinates to ensure that the optical system is built precisely enough to match the simulated system. Afterwards, a simulated version of the retinal images is constructed to show how these typical eyes would perceive an optotype positioned 20 ft away. Personalized corrections are allowed by eye doctors based on different Zernike polynomial values, and the optical images are rendered with the new parameters. Optical images of how the eye would see with or without correction of certain aberrations are generated in order to show which aberrations can be corrected and to what degree. The patient can then "personalize" the correction to their own satisfaction. This new approach to wavefront sensing is a promising change of paradigm towards the betterment of the patient-physician relationship.
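
    The Zernike terms used in such per-aberration corrections are orthonormal over the unit pupil. A quick numerical check for two standard second-order terms (textbook Noll-normalized formulas, not the authors' code):

```python
import numpy as np

# Two common Zernike terms over the unit disk (Noll-normalized):
def defocus(rho, theta):        # Z(n=2, m=0)
    return np.sqrt(3.0) * (2.0 * rho**2 - 1.0)

def astigmatism(rho, theta):    # Z(n=2, m=2)
    return np.sqrt(6.0) * rho**2 * np.cos(2.0 * theta)

# Orthonormality check: <Zi, Zj> = (1/pi) * integral of Zi*Zj over the disk.
rho = np.linspace(0.0, 1.0, 400)
theta = np.linspace(0.0, 2.0 * np.pi, 400)
R, T = np.meshgrid(rho, theta)
dA = R * (rho[1] - rho[0]) * (theta[1] - theta[0])

inner = np.sum(defocus(R, T) * astigmatism(R, T) * dA) / np.pi
norm = np.sum(defocus(R, T)**2 * dA) / np.pi

# Orthogonal (inner ~ 0) and unit norm (norm ~ 1), up to grid discretization.
assert abs(inner) < 0.01 and abs(norm - 1.0) < 0.02
print(inner, norm)
```

    Orthonormality is what makes it possible to zero out or rescale one aberration coefficient at a time when rendering the "personalized" corrections.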

  2. A model-based 3D phase unwrapping algorithm using Gegenbauer polynomials.

    PubMed

    Langley, Jason; Zhao, Qun

    2009-09-07

    The application of a two-dimensional (2D) phase unwrapping algorithm to a three-dimensional (3D) phase map may result in an unwrapped phase map that is discontinuous in the direction normal to the unwrapped plane. This work investigates the problem of phase unwrapping for 3D phase maps. The phase map is modeled as a product of three one-dimensional Gegenbauer polynomials. The orthogonality of Gegenbauer polynomials and their derivatives on the interval [-1, 1] is exploited to calculate the expansion coefficients. The algorithm was implemented using two well-known Gegenbauer polynomials: Chebyshev polynomials of the first kind and Legendre polynomials. Both implementations of the phase unwrapping algorithm were tested on 3D datasets acquired from a magnetic resonance imaging (MRI) scanner. The first dataset was acquired from a homogeneous spherical phantom. The second dataset was acquired using the same spherical phantom, but magnetic field inhomogeneities were introduced by an external coil placed adjacent to the phantom, which provided an additional burden to the phase unwrapping algorithm. Then Gaussian noise was added to generate a low signal-to-noise ratio dataset. The third dataset was acquired from the brain of a human volunteer. The results showed that the Chebyshev implementation and the Legendre implementation of the phase unwrapping algorithm give similar results on the 3D datasets. Both implementations compare well to PRELUDE 3D, a 3D phase unwrapping software package that is well recognized in functional MRI.
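
    The coefficient-calculation step the abstract refers to rests on the orthogonality relations; for the Legendre special case it can be sketched generically (this is an illustration of the relation, not the published implementation):

```python
import numpy as np
from numpy.polynomial import legendre as L

# Expansion coefficients of f on [-1, 1] from the orthogonality relation
#   c_n = (2n + 1)/2 * integral of f(x) P_n(x) dx,
# evaluated with Gauss-Legendre quadrature.
f = lambda x: x**3
nodes, weights = L.leggauss(10)

coeffs = []
for n in range(4):
    Pn = L.legval(nodes, [0]*n + [1])          # P_n at the quadrature nodes
    coeffs.append((2*n + 1) / 2 * np.sum(weights * f(nodes) * Pn))

# Analytically x^3 = (3/5) P_1 + (2/5) P_3, i.e. coefficients 0, 0.6, 0, 0.4.
print(coeffs)
```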

  3. Numerical solutions for Helmholtz equations using Bernoulli polynomials

    NASA Astrophysics Data System (ADS)

    Bicer, Kubra Erdem; Yalcinbas, Salih

    2017-07-01

    This paper reports a new numerical method based on Bernoulli polynomials for the solution of Helmholtz equations. The method uses matrix forms of Bernoulli polynomials and their derivatives by means of collocation points. The aim of this paper is to solve Helmholtz equations using these matrix relations.
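
    The basic building block — Bernoulli polynomials evaluated at collocation points, assembled into a matrix — can be illustrated with SymPy on a hypothetical collocation grid (illustrative only; this is not the authors' solver):

```python
import sympy as sp

# Matrix of Bernoulli polynomials B_0..B_3 evaluated at collocation points
# x_i = i/3 on [0, 1]; row i holds [B_0(x_i), ..., B_3(x_i)].
colloc = [sp.Rational(i, 3) for i in range(4)]
B = sp.Matrix([[sp.bernoulli(n, xi) for n in range(4)] for xi in colloc])

print(B.row(0))  # B_n at x = 0: [1, -1/2, 1/6, 0]
```

    In the collocation method, such matrices (and their derivative counterparts) turn the Helmholtz equation into an algebraic system for the expansion coefficients.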

  4. Fluorescent Probes for Sensing and Imaging within Specific Cellular Organelles.

    PubMed

    Zhu, Hao; Fan, Jiangli; Du, Jianjun; Peng, Xiaojun

    2016-10-18

    Fluorescent probes have become powerful tools in biosensing and bioimaging because of their high sensitivity, specificity, fast response, and technical simplicity. In the last decades, researchers have made remarkable progress in developing fluorescent probes that respond to changes in microenvironments (e.g., pH, viscosity, and polarity) or quantities of biomolecules of interest (e.g., ions, reactive oxygen species, and enzymes). All of these analytes are specialized to carry out vital functions and are linked to serious disorders in distinct subcellular organelles. Each of these organelles plays a specific and indispensable role in cellular processes. For example, the nucleus regulates gene expression, mitochondria are responsible for aerobic metabolism, and lysosomes digest macromolecules for cell recycling. A certain organelle requires specific biological species and the appropriate microenvironment to perform its cellular functions, while breakdown of the homeostasis of biomolecules or microenvironmental mutations leads to organelle malfunctions, which further cause disorders or diseases. Fluorescent probes that can be targeted to both specific organelles and biochemicals/microenvironmental factors are capable of reporting localized bioinformation and are potentially useful for gaining insight into the contributions of analytes to both healthy and diseased states. In this Account, we review our recent work on the development of fluorescent probes for sensing and imaging within specific organelles. We present an overview of the design, photophysical properties, and biological applications of the probes, which can localize to mitochondria, lysosomes, the nucleus, the Golgi apparatus, and the endoplasmic reticulum. Although a diversity of organelle-specific fluorescent stains have been commercially available, our efforts place an emphasis on improvements in terms of low cytotoxicity, high photostability, near-infrared (NIR) emission, two-photon excitation, and

  5. Nonpeptide-Based Small-Molecule Probe for Fluorogenic and Chromogenic Detection of Chymotrypsin.

    PubMed

    Wu, Lei; Yang, Shu-Hou; Xiong, Hao; Yang, Jia-Qian; Guo, Jun; Yang, Wen-Chao; Yang, Guang-Fu

    2017-03-21

    We report herein a nonpeptide-based small-molecule probe for the fluorogenic and chromogenic detection of chymotrypsin, as well as its primary application. The probe was rationally designed by mimicking the peptide substrate and optimized by adjusting the recognition group. The refined probe 2 exhibits good specificity toward chymotrypsin, producing about 25-fold enhancement in both fluorescence intensity and absorbance upon catalysis by chymotrypsin. Compared with the most widely used peptide substrate (AMC-FPAA-Suc) of chymotrypsin, probe 2 shows about 5-fold higher binding affinity and comparable catalytic efficiency against chymotrypsin. Furthermore, it was successfully applied to inhibitor characterization. To the best of our knowledge, probe 2 is the first nonpeptide-based small-molecule probe for chymotrypsin, with the advantages of a simple structure and high sensitivity compared to the widely used peptide-based substrates. This small-molecule probe is expected to be a useful molecular tool for drug discovery and the diagnosis of chymotrypsin-related diseases.

  6. Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro; Abgrall, Remi

    2014-11-01

    Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Due to the intimate connection between PDD and analysis of variance (ANOVA), PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices than polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. To address this curse of dimensionality, this work proposes a variance-based adaptive strategy aiming to build a cheap meta-model by sparse PDD, with the PDD coefficients computed by regression. During this adaptive procedure, the PDD representation of the model contains only a few terms, so that the cost of repeatedly solving the linear system of the least-squares regression problem is negligible. The size of the final sparse-PDD representation is much smaller than the full PDD, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
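
    The regression step for the polynomial coefficients, plus a crude retain-only-significant-terms rule, can be sketched in one dimension. This is a stand-in for the paper's variance-based adaptivity, with a made-up model that is exactly sparse in the Legendre basis:

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)

# Hypothetical model with a sparse polynomial representation:
# y = 1*P_0 + 0.9*P_1 + 0.5*P_3 in the Legendre basis.
def model(x):
    return 1.0 + 0.9 * x + 0.5 * L.legval(x, [0, 0, 0, 1])

x = rng.uniform(-1, 1, 200)          # random design points
y = model(x)

# Least-squares regression on a full degree-7 Legendre basis ...
V = L.legvander(x, 7)
c, *_ = np.linalg.lstsq(V, y, rcond=None)

# ... then retain only the significant terms (crude magnitude threshold,
# standing in for the variance-based criterion of the paper).
kept = np.flatnonzero(np.abs(c) > 1e-6)
print(kept)  # indices of the retained basis terms
```

    Here only terms 0, 1 and 3 survive the threshold; the adaptive procedure in the paper grows and prunes the candidate set instead of fitting the full basis once.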

  7. A gradient-based model parametrization using Bernstein polynomials in Bayesian inversion of surface wave dispersion

    NASA Astrophysics Data System (ADS)

    Gosselin, Jeremy M.; Dosso, Stan E.; Cassidy, John F.; Quijano, Jorge E.; Molnar, Sheri; Dettmer, Jan

    2017-10-01

    This paper develops and applies a Bernstein-polynomial parametrization to efficiently represent general, gradient-based profiles in nonlinear geophysical inversion, with application to ambient-noise Rayleigh-wave dispersion data. Bernstein polynomials provide a stable parametrization in that small perturbations to the model parameters (basis-function coefficients) result in only small perturbations to the geophysical parameter profile. A fully nonlinear Bayesian inversion methodology is applied to estimate shear wave velocity (VS) profiles and uncertainties from surface wave dispersion data extracted from ambient seismic noise. The Bayesian information criterion is used to determine the appropriate polynomial order consistent with the resolving power of the data. Data error correlations are accounted for in the inversion using a parametric autoregressive model. The inversion solution is defined in terms of marginal posterior probability profiles for VS as a function of depth, estimated using Metropolis-Hastings sampling with parallel tempering. This methodology is applied to synthetic dispersion data as well as data processed from passive array recordings collected on the Fraser River Delta in British Columbia, Canada. Results from this work are in good agreement with previous studies, as well as with co-located invasive measurements. The approach considered here is better suited than `layered' modelling approaches in applications where smooth gradients in geophysical parameters are expected, such as soil/sediment profiles. Further, the Bernstein polynomial representation is more general than smooth models based on a fixed choice of gradient type (e.g. power-law gradient) because the form of the gradient is determined objectively by the data, rather than by a subjective parametrization choice.
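
    The stability property described above follows from the Bernstein basis being a partition of unity: the basis functions are non-negative and sum to one, so a small change in any coefficient perturbs the profile by at most that amount. A short sketch with made-up velocity coefficients:

```python
import math
import numpy as np

def bernstein_basis(n, t):
    """All n+1 Bernstein basis polynomials of degree n, evaluated at points t."""
    t = np.asarray(t)
    return np.array([math.comb(n, k) * t**k * (1 - t)**(n - k)
                     for k in range(n + 1)])

t = np.linspace(0, 1, 101)           # normalized depth
B = bernstein_basis(5, t)

# Partition of unity: the basis sums to 1 everywhere.
assert np.allclose(B.sum(axis=0), 1.0)

# A smooth Vs(depth) profile from 6 basis-function coefficients
# (hypothetical values in m/s).
coeffs = np.array([200.0, 260.0, 300.0, 420.0, 480.0, 500.0])
vs = coeffs @ B
print(vs[0], vs[-1])   # endpoints interpolate the first/last coefficients
```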

  8. Bias correction of satellite-based rainfall data

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Biswa; Solomatine, Dimitri

    2015-04-01

    Limited availability of hydro-meteorological data in many catchments restricts the possibility of reliable hydrological analyses, especially for near-real-time predictions. However, the variety of satellite-based and meteorological-model rainfall products provides new opportunities. Often, the accuracy of these rainfall products, when compared to rain gauge measurements, is not impressive. The systematic differences of these rainfall products from gauge observations can be partially compensated by adopting a bias (error) correction. Many such methods correct the satellite-based rainfall data by comparing their mean value to the mean value of rain gauge data. Refined approaches may first identify a suitable time scale at which the different data products are better comparable and then employ a bias correction at that time scale. More elegant methods use quantile-to-quantile bias correction, which, however, assumes that the available (often limited) sample size is sufficient for comparing probabilities of the different rainfall products. Analysis of rainfall data and understanding of the process of its generation reveal that the bias in different rainfall data varies in space and time. The time aspect is sometimes taken into account by considering seasonality. In this research we have adopted a bias correction approach that takes into account the variation of rainfall in space and time. A clustering-based approach is employed in which every new data point (e.g. from the Tropical Rainfall Measuring Mission (TRMM)) is first assigned to a specific cluster of that data product; then, by identifying the corresponding cluster of gauge data, the bias correction specific to that cluster is adopted. The presented approach considers the space-time variation of rainfall, and as a result the corrected data are more realistic. Keywords: bias correction, rainfall, TRMM, satellite rainfall
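
    The quantile-to-quantile mapping mentioned above can be sketched on synthetic data (hypothetical gamma-distributed rainfall with a deliberate low bias in the "satellite" product; this illustrates the baseline method, not the clustering scheme proposed in the abstract):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily rainfall: the "satellite" product systematically
# underestimates the "gauge" observations (made-up distributions).
gauge = rng.gamma(shape=2.0, scale=5.0, size=2000)
satellite = 0.6 * rng.gamma(shape=2.0, scale=5.0, size=2000)

# Quantile-to-quantile mapping: replace each satellite value by the gauge
# value at the same empirical quantile.
q = np.linspace(0, 1, 101)
sat_q = np.quantile(satellite, q)
gauge_q = np.quantile(gauge, q)
corrected = np.interp(satellite, sat_q, gauge_q)

print(gauge.mean(), satellite.mean(), corrected.mean())
```

    After mapping, the corrected sample reproduces the gauge distribution (and hence its mean) far more closely than the raw satellite product does.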

  9. Polynomial Chaos Based Acoustic Uncertainty Predictions from Ocean Forecast Ensembles

    NASA Astrophysics Data System (ADS)

    Dennis, S.

    2016-02-01

    Most significant ocean acoustic propagation occurs over tens of kilometers, at scales small compared to basin scales and to most fine-scale ocean modeling. To address the increased emphasis on uncertainty quantification, for example transmission loss (TL) probability density functions (PDFs) within some radius, a polynomial chaos (PC) based method is utilized. In order to capture uncertainty in ocean modeling, the Navy Coastal Ocean Model (NCOM) now includes ensembles distributed to reflect the ocean analysis statistics. Since the ensembles are included in the data assimilation for the new forecast ensembles, the acoustic modeling uses the ensemble predictions in a similar fashion for creating sound speed distributions over an acoustically relevant domain. Within an acoustic domain, singular value decomposition over the combined time-space structure of the sound speeds can be used to create Karhunen-Loève expansions of sound speed, subject to multivariate normality testing. These sound speed expansions serve as a basis for Hermite polynomial chaos expansions of derived quantities, in particular TL. The PC expansion coefficients result from so-called non-intrusive methods, involving evaluation of TL at multi-dimensional Gauss-Hermite quadrature collocation points. Traditional TL calculation from standard acoustic propagation modeling could be prohibitively time consuming at all multi-dimensional collocation points. This method employs Smolyak order and gridding methods to allow adaptive sub-sampling of the collocation points to determine only the most significant PC expansion coefficients to within a preset tolerance. Practically, the Smolyak order and grid sizes grow only polynomially in the number of Karhunen-Loève terms, alleviating the curse of dimensionality. The resulting TL PC coefficients allow the determination of TL PDF normality and its mean and standard deviation. In the non-normal case, PC Monte Carlo methods are used to rapidly establish the PDF. This work was
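
    The non-intrusive coefficient computation can be illustrated in one stochastic dimension, where the Gauss-Hermite quadrature and the analytic answer are both available. This is a generic example with g(xi) = exp(xi) standing in for the TL model:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Non-intrusive PC: coefficients of g(xi) = exp(xi), xi ~ N(0, 1), in the
# probabilists' Hermite basis He_n, via Gauss-Hermite quadrature:
#   c_n = E[g(xi) He_n(xi)] / n!
nodes, weights = He.hermegauss(20)          # weight function exp(-x^2/2)
norm = math.sqrt(2.0 * math.pi)             # total weight, for normalization

coeffs = []
for n in range(5):
    Hen = He.hermeval(nodes, [0]*n + [1])   # He_n at the quadrature nodes
    coeffs.append(np.sum(weights * np.exp(nodes) * Hen)
                  / (norm * math.factorial(n)))

# Analytically c_n = exp(1/2) / n! for this g.
print(coeffs)
```

    In the paper's setting the same quadrature runs over many Karhunen-Loève dimensions, which is why Smolyak sparse grids are needed to keep the number of TL evaluations tractable.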

  10. The usefulness of "corrected" body mass index vs. self-reported body mass index: comparing the population distributions, sensitivity, specificity, and predictive utility of three correction equations using Canadian population-based data.

    PubMed

    Dutton, Daniel J; McLaren, Lindsay

    2014-05-06

    National data on body mass index (BMI), computed from self-reported height and weight, is readily available for many populations including the Canadian population. Because self-reported weight is found to be systematically under-reported, it has been proposed that the bias in self-reported BMI can be corrected using equations derived from data sets which include both self-reported and measured height and weight. Such correction equations have been developed and adopted. We aim to evaluate the usefulness (i.e., distributional similarity; sensitivity and specificity; and predictive utility vis-à-vis disease outcomes) of existing and new correction equations in population-based research. The Canadian Community Health Surveys from 2005 and 2008 include both measured and self-reported values of height and weight, which allows for construction and evaluation of correction equations. We focused on adults age 18-65, and compared three correction equations (two correcting weight only, and one correcting BMI) against self-reported and measured BMI. We first compared population distributions of BMI. Second, we compared the sensitivity and specificity of self-reported BMI and corrected BMI against measured BMI. Third, we compared the self-reported and corrected BMI in terms of association with health outcomes using logistic regression. All corrections outperformed self-report when estimating the full BMI distribution; the weight-only correction outperformed the BMI-only correction for females in the 23-28 kg/m2 BMI range. In terms of sensitivity/specificity, when estimating obesity prevalence, corrected values of BMI (from any equation) were superior to self-report. In terms of modelling BMI-disease outcome associations, findings were mixed, with no correction proving consistently superior to self-report. If researchers are interested in modelling the full population distribution of BMI, or estimating the prevalence of obesity in a population, then a correction of any kind

  11. A minor groove binder probe real-time PCR assay for discrimination between type 2-based vaccines and field strains of canine parvovirus.

    PubMed

    Decaro, Nicola; Elia, Gabriella; Desario, Costantina; Roperto, Sante; Martella, Vito; Campolo, Marco; Lorusso, Alessio; Cavalli, Alessandra; Buonavoglia, Canio

    2006-09-01

    A minor groove binder (MGB) probe assay was developed to discriminate between type 2-based vaccines and field strains of canine parvovirus (CPV). Considering that most CPV vaccines contain the old type 2, which no longer circulates in the canine population, two MGB probes, specific for CPV-2 and for the antigenic variants (types 2a, 2b and 2c), respectively, were labeled with different fluorophores. The MGB probe assay was able to discriminate correctly between the old type and the variants, with a detection limit of 10^1 DNA copies and good reproducibility. Quantitation of the viral DNA loads was accurate, as demonstrated by comparing the CPV DNA titres to those calculated by means of the TaqMan assay recognising all CPV types. This assay will resolve most diagnostic problems in dogs showing CPV disease shortly after CPV vaccination, although it does not discriminate between field strains and the type 2b-based vaccines recently licensed in some countries.

  12. Discrimination Power of Polynomial-Based Descriptors for Graphs by Using Functional Matrices.

    PubMed

    Dehmer, Matthias; Emmert-Streib, Frank; Shi, Yongtang; Stefu, Monica; Tripathi, Shailesh

    2015-01-01

    In this paper, we study the discrimination power of graph measures that are based on graph-theoretical matrices. The paper generalizes the work of [M. Dehmer, M. Moosbrugger, Y. Shi, Encoding structural information uniquely with polynomial-based descriptors by employing the Randić matrix, Applied Mathematics and Computation, 268 (2015), 164-168]. We demonstrate that by using the new functional matrix approach, exhaustively generated graphs can be discriminated more uniquely than shown in the mentioned previous work.
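
    A minimal instance of a polynomial-based graph descriptor — here the characteristic polynomial of the plain adjacency matrix, rather than the functional matrices the paper generalizes to:

```python
import numpy as np

# Descriptor: coefficients of the characteristic polynomial of a graph
# matrix (here the adjacency matrix; the paper uses functional matrices).
def char_poly_descriptor(adj):
    eig = np.linalg.eigvalsh(np.asarray(adj, dtype=float))
    return np.round(np.poly(eig), 8)   # coefficients, highest power first

path4 = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]  # path P4
star4 = [[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]]  # star K_{1,3}

d1, d2 = char_poly_descriptor(path4), char_poly_descriptor(star4)
print(d1)  # x^4 - 3x^2 + 1
print(d2)  # x^4 - 3x^2  -> the descriptor separates the two trees
```

    Descriptors of this kind are not injective in general (cospectral graphs share them), which is exactly why the discrimination power of different matrix choices is worth measuring.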

  13. Calibration free beam hardening correction for cardiac CT perfusion imaging

    NASA Astrophysics Data System (ADS)

    Levi, Jacob; Fahmi, Rachid; Eck, Brendan L.; Fares, Anas; Wu, Hao; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.

    2016-03-01

    Myocardial perfusion imaging using CT (MPI-CT) and coronary CTA have the potential to make CT an ideal noninvasive gatekeeper for invasive coronary angiography. However, beam hardening artifacts (BHA) prevent accurate blood flow calculation in MPI-CT. Beam hardening correction (BHC) methods require either energy-sensitive CT, which is not widely available, or, typically, a calibration-based procedure. We developed a calibration-free, automatic BHC (ABHC) method suitable for MPI-CT. The algorithm works with any BHC method and iteratively determines the model parameters by minimizing a proposed BHA-specific cost function. In this work, we use the polynomial BHC extended to three materials. The image is segmented into soft tissue, bone, and iodine images based on mean HU and temporal enhancement. Forward projections of the bone and iodine images are obtained, and in each iteration a polynomial correction is applied. The corrections are then back-projected and combined to obtain the current iteration's BHC image. This process is iterated until the cost is minimized. We evaluate the algorithm on simulated and physical phantom images and on preclinical MPI-CT data. The scans were obtained on a prototype spectral detector CT (SDCT) scanner (Philips Healthcare). Mono-energetic reconstructed images were used as the reference. In the simulated phantom, BH streak artifacts were reduced from 12±2 HU to 1±1 HU and cupping was reduced by 81%. Similarly, in the physical phantom, BH streak artifacts were reduced from 48±6 HU to 1±5 HU and cupping was reduced by 86%. In preclinical MPI-CT images, BHA was reduced from 28±6 HU to less than 4±4 HU at peak enhancement. The results suggest that the algorithm can be used to reduce BHA in conventional CT and improve MPI-CT accuracy.
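
    The polynomial-correction building block can be sketched on a toy projection model. The beam-hardening curve below is hypothetical, and the coefficients are obtained by a direct calibration-style fit; the ABHC algorithm instead tunes such coefficients iteratively from an image-domain cost, without calibration data:

```python
import numpy as np

# Toy water-equivalent model: the measured polychromatic projection p_meas
# falls below the ideal monoenergetic projection p_true (made-up curve).
p_true = np.linspace(0.0, 8.0, 200)
p_meas = p_true - 0.02 * p_true**2       # sublinear "beam-hardened" response

# Polynomial BHC: fit a cubic mapping measured -> ideal projections, then
# apply it as the correction.
coefs = np.polyfit(p_meas, p_true, 3)
p_corr = np.polyval(coefs, p_meas)

print(np.max(np.abs(p_meas - p_true)),   # uncorrected projection error
      np.max(np.abs(p_corr - p_true)))   # residual after correction
```

    In the three-material version used in the paper, separate corrections are applied to the forward projections of the bone and iodine segmentations before they are back-projected and recombined.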

  14. Orbifold E-functions of dual invertible polynomials

    NASA Astrophysics Data System (ADS)

    Ebeling, Wolfgang; Gusein-Zade, Sabir M.; Takahashi, Atsushi

    2016-08-01

    An invertible polynomial is a weighted homogeneous polynomial with the number of monomials coinciding with the number of variables and such that the weights of the variables and the quasi-degree are well defined. In the framework of the search for mirror symmetric orbifold Landau-Ginzburg models, P. Berglund and M. Henningson considered a pair (f, G) consisting of an invertible polynomial f and an abelian group G of its symmetries, together with a dual pair (f̃, G̃). We consider the so-called orbifold E-function of such a pair (f, G), which is a generating function for the exponents of the monodromy action on an orbifold version of the mixed Hodge structure on the Milnor fibre of f. We prove that the orbifold E-functions of Berglund-Henningson dual pairs coincide up to a sign depending on the number of variables and a simple change of variables. The proof is based on a relation between monomials (say, elements of a monomial basis of the Milnor algebra of an invertible polynomial) and elements of the whole symmetry group of the dual polynomial.

  15. Designing Flavoprotein-GFP Fusion Probes for Analyte-Specific Ratiometric Fluorescence Imaging.

    PubMed

    Hudson, Devin A; Caplan, Jeffrey L; Thorpe, Colin

    2018-02-20

    The development of genetically encoded fluorescent probes for analyte-specific imaging has revolutionized our understanding of intracellular processes. Current classes of intracellular probes depend on the selection of binding domains that either undergo conformational changes on analyte binding or can be linked to thiol redox chemistry. Here we have designed novel probes by fusing a flavoenzyme, whose fluorescence is quenched on reduction by the analyte of interest, with a GFP domain to allow for rapid and specific ratiometric sensing. Two flavoproteins, Escherichia coli thioredoxin reductase and Saccharomyces cerevisiae lipoamide dehydrogenase, were successfully developed into thioredoxin- and NAD+/NADH-specific probes, respectively, and their performance was evaluated in vitro and in vivo. A flow cell format, which allowed dynamic measurements, was utilized in both bacterial and mammalian systems. In E. coli, the first reported intracellular steady state of the cytoplasmic thioredoxin pool was measured. In HEK293T mammalian cells, the steady-state cytosolic ratio of NAD+/NADH induced by glucose was determined. These genetically encoded fluorescent constructs represent a modular approach to intracellular probe design that should extend the range of metabolites that can be quantitated in live cells.

  16. Mismatch and G-Stack Modulated Probe Signals on SNP Microarrays

    PubMed Central

    Binder, Hans; Fasold, Mario; Glomb, Torsten

    2009-01-01

    Background: Single nucleotide polymorphism (SNP) arrays are important tools widely used for genotyping and copy number estimation. This technology utilizes the specific affinity of fragmented DNA for binding to surface-attached oligonucleotide DNA probes. We analyze the variability of the probe signals of Affymetrix GeneChip SNP arrays as a function of the probe sequence to identify relevant sequence motifs which potentially cause systematic biases of genotyping and copy number estimates. Methodology/Principal Findings: The probe design of GeneChip SNP arrays enables us to disentangle different sources of intensity modulations such as the number of mismatches per duplex, matched and mismatched base pairings including nearest and next-nearest neighbors and their position along the probe sequence. The effect of probe sequence was estimated in terms of triple-motifs with central matches and mismatches which include all 256 combinations of possible base pairings. The probe/target interactions on the chip can be decomposed into nearest neighbor contributions which correlate well with free energy terms of DNA/DNA-interactions in solution. The effect of mismatches is about twice as large as that of canonical pairings. Runs of guanines (G) and the particular type of mismatched pairings formed in cross-allelic probe/target duplexes constitute sources of systematic biases of the probe signals with consequences for genotyping and copy number estimates. The poly-G effect seems to be related to the crowded arrangement of probes which facilitates complex formation of neighboring probes with at minimum three adjacent G's in their sequence. Conclusions: The applied method of “triple-averaging” represents a model-free approach to estimate the mean intensity contributions of different sequence motifs which can be applied in calibration algorithms to correct signal values for sequence effects. Rules for appropriate sequence corrections are suggested. PMID:19924253

  17. Hadamard Factorization of Stable Polynomials

    NASA Astrophysics Data System (ADS)

    Loredo-Villalobos, Carlos Arturo; Aguirre-Hernández, Baltazar

    2011-11-01

    The stable (Hurwitz) polynomials are important in the study of systems of differential equations and control theory (see [7] and [19]). A property of these polynomials is related to the Hadamard product. Consider two polynomials p, q ∈ R[x], p(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 and q(x) = b_m x^m + b_{m-1} x^{m-1} + ... + b_1 x + b_0. The Hadamard product (p × q) is defined as (p × q)(x) = a_k b_k x^k + a_{k-1} b_{k-1} x^{k-1} + ... + a_1 b_1 x + a_0 b_0, where k = min(m, n). Some results (see [16]) show that if p, q ∈ R[x] are stable polynomials then (p × q) is also stable, i.e. the Hadamard product is closed on stable polynomials; however, the converse is not always true: not every stable polynomial of degree n > 4 has a factorization into two stable polynomials of the same degree (see [15]). In this work we give some conditions for the existence of a Hadamard factorization for stable polynomials.
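    The definitions above are easy to experiment with numerically. The sketch below, a hedged illustration not taken from [15] or [16], forms the Hadamard product coefficient-wise and tests Hurwitz stability with a Routh table computed in exact rational arithmetic:

```python
from fractions import Fraction

def hadamard(p, q):
    """Coefficient-wise (Hadamard) product; coefficients listed low-to-high,
    truncated to degree k = min(deg p, deg q)."""
    k = min(len(p), len(q))
    return [a * b for a, b in zip(p[:k], q[:k])]

def is_hurwitz(coeffs):
    """Routh-Hurwitz stability test; `coeffs` are low-to-high degree."""
    c = [Fraction(x) for x in reversed(coeffs)]   # high-to-low order
    if c[0] < 0:                                  # normalize leading sign
        c = [-x for x in c]
    if any(x <= 0 for x in c):                    # necessary condition
        return False
    n = len(c) - 1
    rows = [c[0::2], c[1::2]]                     # first two Routh rows
    while len(rows) < n + 1:
        r0, r1 = rows[-2], rows[-1]
        if not r1 or r1[0] == 0:
            return False                          # degenerate case: treat as unstable
        new = []
        for j in range(len(r0) - 1):
            b = r1[j + 1] if j + 1 < len(r1) else Fraction(0)
            new.append((r1[0] * r0[j + 1] - r0[0] * b) / r1[0])
        rows.append(new)
    return all(r and r[0] > 0 for r in rows)
```

    For example, p(x) = x^2 + 3x + 2 and q(x) = x^2 + 2x + 5 are both stable, and their Hadamard product x^2 + 6x + 10 is stable as well, consistent with the closure result cited above.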

  18. Quasi-kernel polynomials and convergence results for quasi-minimal residual iterations

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1992-01-01

    Recently, Freund and Nachtigal have proposed a novel polynomial-based iteration, the quasi-minimal residual algorithm (QMR), for solving general nonsingular non-Hermitian linear systems. Motivated by the QMR method, we have introduced the general concept of quasi-kernel polynomials, and we have shown that the QMR algorithm is based on a particular instance of quasi-kernel polynomials. In this paper, we continue our study of quasi-kernel polynomials. In particular, we derive bounds for the norms of quasi-kernel polynomials. These results are then applied to obtain convergence theorems both for the QMR method and for a transpose-free variant of QMR, the TFQMR algorithm.

  19. Enhancing sparsity of Hermite polynomial expansions by iterative rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Baker, Nathan A.

    2016-02-01

    Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for random variables through linear mappings such that the representation of the quantity of interest is more sparse with new basis functions associated with the new random variables. This sparsity increases both the efficiency and accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation-based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.

  20. A simplified procedure for correcting both errors and erasures of a Reed-Solomon code using the Euclidean algorithm

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.

    1987-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
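    The core of the procedure described above is an early-stopped Euclidean algorithm on polynomials. The sketch below is a generic, hedged illustration over the toy field GF(7), not the paper's (15, 9) Reed-Solomon decoder: it runs the Euclidean iteration on a pair (a, b), tracks the combination coefficient on b, and stops once the remainder degree falls below a threshold t, which is exactly the shape of the key-equation step.

```python
P = 7  # toy field GF(7); polynomials are coefficient lists, low-to-high degree

def trim(a):
    """Reduce coefficients mod P and drop trailing zero coefficients."""
    a = [x % P for x in a]
    while a and a[-1] == 0:
        a.pop()
    return a

def sub(a, b):
    n = max(len(a), len(b))
    return trim([(a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)
                 for i in range(n)])

def mul(a, b):
    if not a or not b:
        return []
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return trim(out)

def polydivmod(a, b):
    a, b = trim(a), trim(b)
    q = [0] * max(len(a) - len(b) + 1, 0)
    inv = pow(b[-1], P - 2, P)          # inverse of leading coefficient in GF(P)
    while len(a) >= len(b) and a:
        coef = a[-1] * inv % P
        d = len(a) - len(b)
        q[d] = coef
        a = sub(a, mul([0] * d + [coef], b))
    return trim(q), a

def partial_euclid(a, b, t):
    """Euclidean algorithm on (a, b), stopped once deg(remainder) < t.
    Returns (v, r) satisfying r ≡ v * b (mod a), the key-equation shape."""
    r0, r1 = trim(a), trim(b)
    v0, v1 = [], [1]
    while len(r1) - 1 >= t:
        q, r = polydivmod(r0, r1)
        r0, r1 = r1, r
        v0, v1 = v1, sub(v0, mul(q, v1))
    return v1, r1

a = [0, 0, 0, 0, 1]      # x^4, playing the role of x^(2t)
b = [1, 2, 3, 4]         # a toy "syndrome" polynomial, hypothetical values
v, r = partial_euclid(a, b, 2)
```

    The invariant maintained by the iteration guarantees r - v*b is a multiple of a, mirroring how the errata locator and evaluator emerge simultaneously from the Euclidean algorithm.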

  1. Calibration correction of an active scattering spectrometer probe to account for refractive index of stratospheric aerosols

    NASA Technical Reports Server (NTRS)

    Pueschel, R. F.; Overbeck, V. R.; Snetsinger, K. G.; Russell, P. B.; Ferry, G. V.

    1990-01-01

    The use of the active scattering spectrometer probe (ASAS-X) to measure sulfuric acid aerosols on U-2 and ER-2 research aircraft has yielded results that are at times ambiguous due to the dependence of particles' optical signatures on refractive index as well as physical dimensions. The calibration correction of the ASAS-X optical spectrometer probe for stratospheric aerosol studies is validated through independent and simultaneous sampling of the particles with impactors; sizing and counting of particles on SEM images yields total particle areas and volumes. Upon correction of the calibration in light of these data, spectrometer results averaged over four size distributions are found to agree with similarly averaged impactor results to within a few percent, indicating that the optical properties or chemical composition of the sample aerosol must be known in order to achieve accurate optical aerosol spectrometer size analysis.

  2. [A novel TaqMan® MGB probe for specifically detecting Streptococcus mutans].

    PubMed

    Zheng, Hui; Lin, Jiu-Xiang; DU, Ning; Chen, Feng

    2013-10-18

    To design a new TaqMan® MGB probe to improve the specificity of Streptococcus mutans detection, we extracted six DNA samples from different streptococcal strains for PCR. Conventional nested PCR and TaqMan® MGB real-time PCR were applied independently. The first round of nested PCR was carried out with bacterial universal primers, while the second PCR was conducted using primers specific for the 16S rRNA gene of Streptococcus mutans. The TaqMan® MGB probe for Streptococcus mutans was designed from sequence analyses, and the primers were the same as in the nested PCR. Streptococcus mutans DNA at 2.5 mg/L was serially diluted at 5-fold intervals down to 0.16 μg/L, and these standard DNA samples were used to generate standard curves by TaqMan® MGB real-time PCR. In the nested PCR, the primers specific for Streptococcus mutans also detected Streptococcus gordonii, with a visible band at 282 bp, giving false-positive results. In the TaqMan® MGB real-time PCR reaction, only Streptococcus mutans was detected. The detection limit of TaqMan® MGB real-time PCR for the Streptococcus mutans 16S rRNA gene was 20 μg/L. We designed a new TaqMan® MGB probe and successfully set up a PCR-based method for detecting oral Streptococcus mutans. TaqMan® MGB real-time PCR is both a specific and a sensitive bacterial detection method.
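    A standard curve like the one mentioned above relates threshold cycle (Ct) to the log of template concentration; the slope yields amplification efficiency. The sketch below is a generic illustration with synthetic Ct values for a perfectly efficient reaction over the 5-fold dilution series quoted in the abstract (2.5 mg/L down to 0.16 μg/L); the intercept of 40 cycles is a hypothetical choice, not data from the study.

```python
import math

def standard_curve(concs, cts):
    """Least-squares fit Ct = intercept + slope * log10(conc).
    Returns (slope, intercept, efficiency); perfect doubling per cycle
    corresponds to slope = -1/log10(2) ≈ -3.32 and efficiency = 1.0."""
    xs = [math.log10(c) for c in concs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(cts) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, cts))
    slope = sxy / sxx
    intercept = my - slope * mx
    efficiency = 10 ** (-1.0 / slope) - 1.0
    return slope, intercept, efficiency

# 5-fold dilution series in ug/L: 2500, 500, ..., 0.16 (seven points)
concs = [2500.0 / 5 ** i for i in range(7)]
true_slope = -1.0 / math.log10(2.0)
cts = [40.0 + true_slope * math.log10(c) for c in concs]   # synthetic Cts
slope, intercept, eff = standard_curve(concs, cts)
```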

  3. On universal knot polynomials

    NASA Astrophysics Data System (ADS)

    Mironov, A.; Mkrtchyan, R.; Morozov, A.

    2016-02-01

    We present universal knot polynomials for 2- and 3-strand torus knots in the adjoint representation, by universalization of the appropriate Rosso-Jones formula. According to universality, these polynomials coincide with the adjoint-colored HOMFLY and Kauffman polynomials at the SL and SO/Sp lines on Vogel's plane, respectively, and give their exceptional-group counterparts on the exceptional line. We demonstrate that the [m,n]=[n,m] topological invariance, when applicable, takes place on the entire Vogel's plane. We also suggest the universal form of the invariant of the figure-eight knot in the adjoint representation, and suggest the existence of such a universalization for any knot in the adjoint and its descendant representations. Properties of universal polynomials and applications of these results are discussed.

  4. Species-specific identification of Dekkera/Brettanomyces yeasts by fluorescently labeled DNA probes targeting the 26S rRNA.

    PubMed

    Röder, Christoph; König, Helmut; Fröhlich, Jürgen

    2007-09-01

    Sequencing of the complete 26S rRNA genes of all Dekkera/Brettanomyces species colonizing different beverages revealed the potential for a specific primer and probe design to support diagnostic PCR approaches and FISH. By analysis of the complete 26S rRNA genes of all five currently known Dekkera/Brettanomyces species (Dekkera bruxellensis, D. anomala, Brettanomyces custersianus, B. nanus and B. naardenensis), several regions with high nucleotide sequence variability yet distinct from the D1/D2 domains were identified. FISH species-specific probes targeting the 26S rRNA gene's most variable regions were designed. Accessibility of probe targets for hybridization was facilitated by the construction of partially complementary 'side'-labeled probes, based on secondary structure models of the rRNA sequences. The specificity and routine applicability of the FISH-based method for yeast identification were tested by analyzing different wine isolates. Investigation of the prevalence of Dekkera/Brettanomyces yeasts in the German viticultural regions Wonnegau, Nierstein and Bingen (Rhinehesse, Rhineland-Palatinate) resulted in the isolation of 37 D. bruxellensis strains from 291 wine samples.

  5. Discrimination Power of Polynomial-Based Descriptors for Graphs by Using Functional Matrices

    PubMed Central

    Dehmer, Matthias; Emmert-Streib, Frank; Shi, Yongtang; Stefu, Monica; Tripathi, Shailesh

    2015-01-01

    In this paper, we study the discrimination power of graph measures that are based on graph-theoretical matrices. The paper generalizes the work of [M. Dehmer, M. Moosbrugger, Y. Shi, Encoding structural information uniquely with polynomial-based descriptors by employing the Randić matrix, Applied Mathematics and Computation, 268(2015), 164–168]. We demonstrate that by using the new functional matrix approach, exhaustively generated graphs can be discriminated more uniquely than shown in the mentioned previous work. PMID:26479495

  6. Automatic transperineal ultrasound probe positioning based on CT scan for image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Camps, S. M.; Verhaegen, F.; Paiva Fonesca, G.; de With, P. H. N.; Fontanarosa, D.

    2017-03-01

    Image interpretation is crucial during ultrasound image acquisition. A skilled operator is typically needed to verify that the correct anatomical structures are all visualized and with sufficient quality. The need for this operator is one of the major reasons why ultrasound is currently not widely used in radiotherapy workflows. To solve this issue, we introduce an algorithm that uses anatomical information derived from a CT scan to automatically provide the operator with a patient-specific ultrasound probe setup. The first application we investigated, for its relevance to radiotherapy, is 4D transperineal ultrasound image acquisition for prostate cancer patients. As an initial test, the algorithm was applied to a CIRS multi-modality pelvic phantom. Probe setups were calculated to allow visualization of the prostate and the adjacent edges of bladder and rectum, as clinically required. Five of the proposed setups were reproduced using a precision robotic arm, and ultrasound volumes were acquired. A gel-filled probe cover was used to ensure proper acoustic coupling while taking into account possible tilted positions of the probe with respect to the flat phantom surface. Visual inspection of the acquired volumes revealed that the clinical requirements were fulfilled. A preliminary quantitative evaluation was also performed: the mean absolute distance (MAD) was calculated between actual anatomical structure positions and the positions predicted by the CT-based algorithm. This resulted in a MAD of (2.8±0.4) mm for the prostate, (2.5±0.6) mm for the bladder and (2.8±0.6) mm for the rectum. These results show that no significant systematic errors due to, e.g., probe misplacement were introduced.
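    The MAD metric used in the evaluation is simply the mean Euclidean distance between corresponding points. A minimal sketch, with hypothetical coordinates rather than the study's data:

```python
import math

def mean_absolute_distance(actual, predicted):
    """Mean Euclidean distance (MAD) between corresponding 3-D points, in mm."""
    dists = [math.dist(a, p) for a, p in zip(actual, predicted)]
    return sum(dists) / len(dists)

# Hypothetical structure positions (mm); predictions offset by a (2, 1, 2) mm
# vector, i.e. exactly 3 mm from each actual point.
actual = [(10.0, 20.0, 30.0), (11.0, 22.0, 33.0), (9.0, 18.0, 27.0)]
predicted = [(x + 2.0, y + 1.0, z + 2.0) for x, y, z in actual]
mad = mean_absolute_distance(actual, predicted)
```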

  7. Repeat sequence chromosome specific nucleic acid probes and methods of preparing and using

    DOEpatents

    Weier, H.U.G.; Gray, J.W.

    1995-06-27

    A primer directed DNA amplification method to isolate efficiently chromosome-specific repeated DNA wherein degenerate oligonucleotide primers are used is disclosed. The probes produced are a heterogeneous mixture that can be used with blocking DNA as a chromosome-specific staining reagent, and/or the elements of the mixture can be screened for high specificity, size and/or high degree of repetition among other parameters. The degenerate primers are sets of primers that vary in sequence but are substantially complementary to highly repeated nucleic acid sequences, preferably clustered within the template DNA, for example, pericentromeric alpha satellite repeat sequences. The template DNA is preferably chromosome-specific. Exemplary primers and probes are disclosed. The probes of this invention can be used to determine the number of chromosomes of a specific type in metaphase spreads, in germ line and/or somatic cell interphase nuclei, micronuclei and/or in tissue sections. Also provided is a method to select arbitrarily repeat sequence probes that can be screened for chromosome-specificity. 18 figs.

  8. Repeat sequence chromosome specific nucleic acid probes and methods of preparing and using

    DOEpatents

    Weier, Heinz-Ulrich G.; Gray, Joe W.

    1995-01-01

    A primer directed DNA amplification method to isolate efficiently chromosome-specific repeated DNA wherein degenerate oligonucleotide primers are used is disclosed. The probes produced are a heterogeneous mixture that can be used with blocking DNA as a chromosome-specific staining reagent, and/or the elements of the mixture can be screened for high specificity, size and/or high degree of repetition among other parameters. The degenerate primers are sets of primers that vary in sequence but are substantially complementary to highly repeated nucleic acid sequences, preferably clustered within the template DNA, for example, pericentromeric alpha satellite repeat sequences. The template DNA is preferably chromosome-specific. Exemplary primers and probes are disclosed. The probes of this invention can be used to determine the number of chromosomes of a specific type in metaphase spreads, in germ line and/or somatic cell interphase nuclei, micronuclei and/or in tissue sections. Also provided is a method to select arbitrarily repeat sequence probes that can be screened for chromosome-specificity.

  9. Learning polynomial feedforward neural networks by genetic programming and backpropagation.

    PubMed

    Nikolaev, N Y; Iba, H

    2003-01-01

    This paper presents an approach to learning polynomial feedforward neural networks (PFNNs). The approach suggests, first, finding the polynomial network structure by means of a population-based search technique relying on the genetic programming paradigm and, second, further adjusting the weights of the best discovered network by a specially derived backpropagation algorithm for higher-order networks with polynomial activation functions. These two stages of the PFNN learning process enable us to identify networks with good training as well as generalization performance. Empirical results show that this approach finds PFNNs that considerably outperform some previous constructive polynomial network algorithms on processing benchmark time series.
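    The second stage described above, gradient-based weight adjustment for polynomial terms, can be sketched in miniature. The snippet below trains a single polynomial unit by plain gradient descent on mean squared error; it is a hedged toy reduction, not the paper's genetic programming search or its higher-order backpropagation algorithm, and the training data are synthetic.

```python
def train_polynomial_neuron(xs, ys, degree=2, lr=0.1, steps=2000):
    """Fit y ≈ sum_k w_k * x^k by gradient descent on mean squared error,
    i.e. the weight-update step for one polynomial-activation unit."""
    w = [0.0] * (degree + 1)
    n = len(xs)
    for _ in range(steps):
        grads = [0.0] * (degree + 1)
        for x, y in zip(xs, ys):
            pred = sum(wk * x ** k for k, wk in enumerate(w))
            err = pred - y
            for k in range(degree + 1):
                grads[k] += 2.0 * err * x ** k / n   # d(mse)/d(w_k)
        w = [wk - lr * g for wk, g in zip(w, grads)]
    return w

xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
ys = [x * x for x in xs]                 # target function: y = x^2
w = train_polynomial_neuron(xs, ys)
mse = sum((sum(wk * x ** k for k, wk in enumerate(w)) - y) ** 2
          for x, y in zip(xs, ys)) / len(xs)
```

    Because the model is linear in its weights, gradient descent converges to the exact interpolant here, with w ≈ (0, 0, 1).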

  10. Target-cancer-cell-specific activatable fluorescence imaging probes: rational design and in vivo applications.

    PubMed

    Kobayashi, Hisataka; Choyke, Peter L

    2011-02-15

    Conventional imaging methods, such as angiography, computed tomography (CT), magnetic resonance imaging (MRI), and radionuclide imaging, rely on contrast agents (iodine, gadolinium, and radioisotopes, for example) that are "always on." Although these indicators have proven clinically useful, their sensitivity is lacking because of inadequate target-to-background signal ratio. A unique aspect of optical imaging is that fluorescence probes can be designed to be activatable, that is, only "turned on" under certain conditions. These probes are engineered to emit signal only after binding a target tissue; this design greatly increases sensitivity and specificity in the detection of disease. Current research focuses on two basic types of activatable fluorescence probes. The first developed were conventional enzymatically activatable probes. These fluorescent molecules exist in the quenched state until activated by enzymatic cleavage, which occurs mostly outside of the cells. However, more recently, researchers have begun designing target-cell-specific activatable probes. These fluorophores exist in the quenched state until activated within targeted cells by endolysosomal processing, which results when the probe binds specific receptors on the cell surface and is subsequently internalized. In this Account, we present a review of the rational design and in vivo applications of target-cell-specific activatable probes. In engineering these probes, researchers have asserted control over a variety of factors, including photochemistry, pharmacological profile, and biological properties. Their progress has recently allowed the rational design and synthesis of target-cell-specific activatable fluorescence imaging probes, which can be conjugated to a wide variety of targeting molecules. Several different photochemical mechanisms have been utilized, each of which offers a unique capability for probe design. These include self-quenching and homo- and hetero-fluorescence resonance energy transfer (FRET).

  11. Site-Specific Infrared Probes of Proteins

    PubMed Central

    Ma, Jianqiang; Pazos, Ileana M.; Zhang, Wenkai; Culik, Robert M.; Gai, Feng

    2015-01-01

    Infrared spectroscopy has played an instrumental role in studying a wide variety of biological questions. However, in many cases it is impossible or difficult to rely on the intrinsic vibrational modes of biological molecules of interest, such as proteins, to reveal structural and/or environmental information in a site-specific manner. To overcome this limitation, many recent efforts have been dedicated to the development and application of various extrinsic vibrational probes that can be incorporated into biological molecules and used to site-specifically interrogate their structural and/or environmental properties. In this Review, we highlight some recent advancements of this rapidly growing research area. PMID:25580624

  12. Polynomial interpretation of multipole vectors

    NASA Astrophysics Data System (ADS)

    Katz, Gabriel; Weeks, Jeff

    2004-09-01

    Copi, Huterer, Starkman, and Schwarz introduced multipole vectors in a tensor context and used them to demonstrate that the first-year Wilkinson Microwave Anisotropy Probe (WMAP) quadrupole and octopole planes align at roughly the 99.9% confidence level. In the present article, the language of polynomials provides a new and independent derivation of the multipole vector concept. Bézout’s theorem supports an elementary proof that the multipole vectors exist and are unique (up to rescaling). The constructive nature of the proof leads to a fast, practical algorithm for computing multipole vectors. We illustrate the algorithm by finding exact solutions for some simple toy examples and numerical solutions for the first-year WMAP quadrupole and octopole. We then apply our algorithm to Monte Carlo skies to independently reconfirm the estimate that the WMAP quadrupole and octopole planes align at the 99.9% level.

  13. Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.

    PubMed

    Petrinović, Davor; Brezović, Marko

    2011-04-01

    We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 kB block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device. © 2011 IEEE
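    The piecewise-cubic PSAC idea can be illustrated with a small table-based sine evaluator. The sketch below uses cubic Hermite segments (endpoint values and derivatives of sin) rather than the paper's spectral-domain spline coefficients, and floating point rather than a fixed-point datapath, so it is a hedged approximation of the approach, not the authors' design; the 64-segment table and 12-bit phase sweep are arbitrary choices.

```python
import math

def build_table(segments):
    """Endpoint values and derivatives of sin(2*pi*x) at the segment knots."""
    xs = [i / segments for i in range(segments + 1)]
    ys = [math.sin(2 * math.pi * x) for x in xs]
    ms = [2 * math.pi * math.cos(2 * math.pi * x) for x in xs]
    return xs, ys, ms

def psac(phase, xs, ys, ms):
    """Cubic Hermite phase-to-sinusoid amplitude conversion, phase in [0, 1)."""
    segments = len(xs) - 1
    i = min(int(phase * segments), segments - 1)
    h = xs[i + 1] - xs[i]
    t = (phase - xs[i]) / h
    h00 = 2 * t**3 - 3 * t**2 + 1        # Hermite basis functions
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * ys[i] + h10 * h * ms[i] + h01 * ys[i + 1] + h11 * h * ms[i + 1]

xs, ys, ms = build_table(64)
max_err = max(abs(psac(p / 4096, xs, ys, ms) - math.sin(2 * math.pi * p / 4096))
              for p in range(4096))
```

    With 64 segments the maximum absolute error is on the order of 1e-7, consistent with the h^4 error scaling of piecewise-cubic interpolation that makes such converters attractive at high table resolutions.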

  14. A new numerical treatment based on Lucas polynomials for 1D and 2D sinh-Gordon equation

    NASA Astrophysics Data System (ADS)

    Oruç, Ömer

    2018-04-01

    In this paper, a new mixed method based on Lucas and Fibonacci polynomials is developed for numerical solutions of 1D and 2D sinh-Gordon equations. First, the time variable is discretized by a central finite difference, and then the unknown function and its derivatives are expanded in a Lucas series. With the help of these series expansions and Fibonacci polynomials, matrices for differentiation are derived. With this approach, finding the solution of the sinh-Gordon equation is transformed into finding the solution of an algebraic system of equations. The Lucas series coefficients are acquired by solving this system of algebraic equations, and plugging these coefficients into the Lucas series expansion yields the numerical solutions. The main objective of this paper is to demonstrate that the Lucas polynomial based method is convenient for 1D and 2D nonlinear problems. By calculating the L2 and L∞ error norms of some 1D and 2D test problems, the efficiency and performance of the proposed method are monitored. The accurate results acquired confirm the applicability of the method.
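    The Lucas polynomials underlying the expansion satisfy a simple Fibonacci-like recurrence, L0 = 2, L1 = x, Ln = x·L(n-1) + L(n-2). The sketch below generates their coefficient lists and evaluates them; it illustrates only the basis construction, not the paper's differentiation matrices or the sinh-Gordon solver.

```python
def lucas_polynomials(n_max):
    """Coefficient lists (low-to-high degree) of Lucas polynomials:
    L0 = 2, L1 = x, Ln = x * L(n-1) + L(n-2)."""
    polys = [[2], [0, 1]]
    for _ in range(2, n_max + 1):
        prev, last = polys[-2], polys[-1]
        shifted = [0] + last                     # multiply L(n-1) by x
        coeffs = [a + (prev[i] if i < len(prev) else 0)
                  for i, a in enumerate(shifted)]
        polys.append(coeffs)
    return polys

def evaluate(coeffs, x):
    return sum(c * x ** k for k, c in enumerate(coeffs))

polys = lucas_polynomials(6)
```

    Evaluating at x = 1 recovers the Lucas numbers 2, 1, 3, 4, 7, 11, 18, and, for instance, L4(x) = x^4 + 4x^2 + 2.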

  15. A PCR-Based Method for RNA Probes and Applications in Neuroscience.

    PubMed

    Hua, Ruifang; Yu, Shanshan; Liu, Mugen; Li, Haohong

    2018-01-01

    In situ hybridization (ISH) is a powerful technique used to detect the localization of specific nucleic acid sequences for understanding the organization, regulation, and function of genes. In most cases, however, RNA probes are obtained by in vitro transcription from plasmids containing specific promoter elements and mRNA-specific cDNA. Preparing probes from plasmid vectors is time-consuming and not suitable for rapid gene mapping. Here, we introduce a simplified method to prepare digoxigenin (DIG)-labeled non-radioactive RNA probes based on polymerase chain reaction (PCR) amplification, with applications in free-floating mouse brain sections. Employing a transgenic reporter line, we investigate the expression of somatostatin (SST) mRNA in the adult mouse brain. The method can be applied to identify the colocalization of SST mRNA and proteins, including corticotrophin-releasing hormone (CRH) and protein kinase C delta type (PKC-δ), using double immunofluorescence, which is useful for understanding the organization of complex brain nuclei. Moreover, the method can be combined with retrograde tracing to visualize functional connections in neural circuitry. In brief, this PCR-based method for preparing non-radioactive RNA probes is a useful tool for neuroscience studies.

  16. Determinants with orthogonal polynomial entries

    NASA Astrophysics Data System (ADS)

    Ismail, Mourad E. H.

    2005-06-01

    We use moment representations of orthogonal polynomials to evaluate the corresponding Hankel determinants formed by the orthogonal polynomials. We also study the Hankel determinants which have p_n in the top left-hand corner. As examples we evaluate the Hankel determinants whose entries are q-ultraspherical or Al-Salam-Chihara polynomials.
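    The classical object behind such evaluations is the Hankel determinant built from the moments of the orthogonality measure. The sketch below computes these moment-Hankel determinants exactly for Lebesgue measure on [-1, 1] (the Legendre case); it is a hedged illustration of the standard moment construction, not of the paper's q-ultraspherical or Al-Salam-Chihara evaluations.

```python
from fractions import Fraction

def det(m):
    """Exact determinant by Laplace expansion (fine for small matrices)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def hankel_det(moments, n):
    """D_n = det[ mu_{i+j} ] for i, j = 0..n-1."""
    return det([[moments[i + j] for j in range(n)] for i in range(n)])

# Moments of Lebesgue measure on [-1, 1]: mu_k = 2/(k+1) for even k, 0 for odd k.
mu = [Fraction(2, k + 1) if k % 2 == 0 else Fraction(0) for k in range(8)]
```

    For these moments D_1 = 2, D_2 = 4/3, D_3 = 32/135, matching the product of the squared norms of the monic Legendre polynomials.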

  17. Exemplar-based human action pose correction.

    PubMed

    Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen

    2014-07-01

    The launch of Xbox Kinect has built a very successful computer vision product and made a big impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition. The accurate estimation of human poses from the depth image is universally a critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporation of pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over the contemporary approaches, including what is delivered by the current Kinect system. Our experiments for the facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.

  18. A fully automated and scalable timing probe-based method for time alignment of the LabPET II scanners

    NASA Astrophysics Data System (ADS)

    Samson, Arnaud; Thibaudeau, Christian; Bouchard, Jonathan; Gaudin, Émilie; Paulin, Caroline; Lecomte, Roger; Fontaine, Réjean

    2018-05-01

    A fully automated time alignment method based on a positron timing probe was developed to correct the channel-to-channel coincidence time dispersion of the LabPET II avalanche photodiode-based positron emission tomography (PET) scanners. The timing probe was designed to directly detect positrons and generate an absolute time reference. The probe-to-channel coincidences are recorded and processed using firmware embedded in the scanner hardware to compute the time differences between detector channels. The time corrections are then applied in real-time to each event in every channel during PET data acquisition to align all coincidence time spectra, thus enhancing the scanner time resolution. When applied to the mouse version of the LabPET II scanner, the calibration of 6 144 channels was performed in less than 15 min and showed a 47% improvement on the overall time resolution of the scanner, decreasing from 7 ns to 3.7 ns full width at half maximum (FWHM).

  19. Subfamily-Specific Fluorescent Probes for Cysteine Proteases Display Dynamic Protease Activities during Seed Germination.

    PubMed

    Lu, Haibin; Chandrasekar, Balakumaran; Oeljeklaus, Julian; Misas-Villamil, Johana C; Wang, Zheming; Shindo, Takayuki; Bogyo, Matthew; Kaiser, Markus; van der Hoorn, Renier A L

    2015-08-01

    Cysteine proteases are an important class of enzymes implicated in both developmental and defense-related programmed cell death and other biological processes in plants. Because there are dozens of cysteine proteases that are posttranslationally regulated by processing, environmental conditions, and inhibitors, new methodologies are required to study these pivotal enzymes individually. Here, we introduce fluorescence activity-based probes that specifically target three distinct cysteine protease subfamilies: aleurain-like proteases, cathepsin B-like proteases, and vacuolar processing enzymes. We applied protease activity profiling with these new probes on Arabidopsis (Arabidopsis thaliana) protease knockout lines and agroinfiltrated leaves to identify the probe targets and on other plant species to demonstrate their broad applicability. These probes revealed that most commercially available protease inhibitors target unexpected proteases in plants. When applied on germinating seeds, these probes reveal dynamic activities of aleurain-like proteases, cathepsin B-like proteases, and vacuolar processing enzymes, coinciding with the remobilization of seed storage proteins. © 2015 American Society of Plant Biologists. All Rights Reserved.

  20. Timing Calibration in PET Using a Time Alignment Probe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moses, William W.; Thompson, Christopher J.

    2006-05-05

    We evaluate the Scanwell Time Alignment Probe for performing the timing calibration for the LBNL Prostate-Specific PET Camera. We calibrate the time delay correction factors for each detector module in the camera using two methods: the Time Alignment Probe (which measures the time difference between the probe and each detector module) and the conventional method (which measures the timing difference between all module-module combinations in the camera). These correction factors, which are quantized in 2 ns steps, are compared on a module-by-module basis. The values are in excellent agreement: of the 80 correction factors, 62 agree exactly, 17 differ by 1 step, and 1 differs by 2 steps. We also measure on-time and off-time counting rates when the two sets of calibration factors are loaded into the camera and find that they agree within statistical error. We conclude that the performance using the Time Alignment Probe and the conventional method is equivalent.

  1. Generalized clustering conditions of Jack polynomials at negative Jack parameter {alpha}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernevig, B. Andrei; Department of Physics, Princeton University, Princeton, New Jersey 08544; Haldane, F. D. M.

    We present several conjectures on the behavior and clustering properties of Jack polynomials at a negative parameter α = -(k+1)/(r-1), with partitions that violate the (k, r, N)-admissibility rule of Feigin et al. [Int. Math. Res. Notices 23, 1223 (2002)]. We find that the "highest weight" Jack polynomials of specific partitions represent the minimum-degree polynomials in N variables that vanish when s distinct clusters of k+1 particles are formed, where s and k are positive integers. Explicit counting formulas are conjectured. The generalized clustering conditions are useful in a forthcoming description of fractional quantum Hall quasiparticles.

  2. Distortion theorems for polynomials on a circle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubinin, V N

    2000-12-31

    Inequalities for the derivatives with respect to φ = arg z of the functions Re P(z), |P(z)|² and arg P(z) are established for an algebraic polynomial P(z) at points on the circle |z| = 1. These estimates depend, in particular, on the constant term and the leading coefficient of the polynomial P(z) and improve the classical Bernstein and Turán inequalities. The method of proof is based on the techniques of generalized reduced moduli.
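    For reference, the classical Bernstein inequality that these estimates refine can be stated as follows (standard textbook form, not quoted from the paper):

```latex
% Bernstein's inequality for an algebraic polynomial P of degree n
% on the unit circle (the baseline sharpened by the abstract's estimates):
\[
  \max_{|z|=1} \bigl|P'(z)\bigr| \;\le\; n \,\max_{|z|=1} \bigl|P(z)\bigr|,
\]
% with equality attained for P(z) = c\,z^{n}.
```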

  3. Polynomial Conjoint Analysis of Similarities: A Model for Constructing Polynomial Conjoint Measurement Algorithms.

    ERIC Educational Resources Information Center

    Young, Forrest W.

    A model permitting construction of algorithms for the polynomial conjoint analysis of similarities is presented. This model, which is based on concepts used in nonmetric scaling, permits one to obtain the best approximate solution. The concepts used to construct nonmetric scaling algorithms are reviewed. Finally, examples of algorithmic models for…

  4. Interpolation and Polynomial Curve Fitting

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2014-01-01

    Two points determine a line. Three noncollinear points determine a quadratic function. Four points that do not lie on a lower-degree polynomial curve determine a cubic function. In general, n + 1 points uniquely determine a polynomial of degree n, presuming that they do not fall onto a polynomial of lower degree. The process of finding such a…
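    The uniqueness claim is easy to demonstrate numerically: fitting a degree-n polynomial through n + 1 points reproduces them exactly (a small sketch with made-up sample points):

```python
import numpy as np

# Four points not lying on any quadratic determine a unique cubic.
x = np.array([-1.0, 0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 2.0, 9.0])

coeffs = np.polyfit(x, y, deg=len(x) - 1)   # degree n for n + 1 points
p = np.poly1d(coeffs)

assert np.allclose(p(x), y)                  # interpolation is exact
```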

  5. Stochastic Estimation via Polynomial Chaos

    DTIC Science & Technology

    2015-10-01

    AFRL-RW-EG-TR-2015-108, Douglas V. Nance, Air Force Research Laboratory; period covered 20-04-2015 to 07-08-2015. This expository report discusses fundamental aspects of the polynomial chaos method for representing the properties of second order stochastic
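    The core of the polynomial chaos method is projecting a random quantity onto orthogonal polynomials of a standard random variable. A minimal 1-D Wiener-Hermite sketch (our own illustration, not the report's code): expand Y = exp(X) with X ~ N(0,1) in probabilists' Hermite polynomials, where the exact coefficients are known to be sqrt(e)/k!:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Gauss quadrature for the weight exp(-x^2/2) (probabilists' Hermite).
nodes, weights = He.hermegauss(40)
norm = np.sqrt(2 * np.pi)            # integral of the weight over the real line

def pc_coeff(k):
    """k-th polynomial chaos coefficient of Y = exp(X), X ~ N(0,1)."""
    ek = np.zeros(k + 1); ek[k] = 1.0        # coefficient vector selecting He_k
    # c_k = E[exp(X) He_k(X)] / k!   (He_k has squared norm k! under N(0,1))
    proj = np.sum(weights * np.exp(nodes) * He.hermeval(nodes, ek)) / norm
    return proj / math.factorial(k)

c = np.array([pc_coeff(k) for k in range(6)])
exact = np.array([np.sqrt(np.e) / math.factorial(k) for k in range(6)])
assert np.allclose(c, exact, rtol=1e-8)
```

    The rapid decay of the coefficients (factorial in k) is what makes low-order truncations of the chaos expansion useful for propagating uncertainty.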

  6. Parallel gene analysis with allele-specific padlock probes and tag microarrays

    PubMed Central

    Banér, Johan; Isaksson, Anders; Waldenström, Erik; Jarvius, Jonas; Landegren, Ulf; Nilsson, Mats

    2003-01-01

    Parallel, highly specific analysis methods are required to take advantage of the extensive information about DNA sequence variation and of expressed sequences. We present a scalable laboratory technique suitable to analyze numerous target sequences in multiplexed assays. Sets of padlock probes were applied to analyze single nucleotide variation directly in total genomic DNA or cDNA for parallel genotyping or gene expression analysis. All reacted probes were then co-amplified and identified by hybridization to a standard tag oligonucleotide array. The technique was illustrated by analyzing normal and pathogenic variation within the Wilson disease-related ATP7B gene, both at the level of DNA and RNA, using allele-specific padlock probes. PMID:12930977

  7. Chaos, Fractals, and Polynomials.

    ERIC Educational Resources Information Center

    Tylee, J. Louis; Tylee, Thomas B.

    1996-01-01

    Discusses chaos theory; linear algebraic equations and the numerical solution of polynomials, including the use of the Newton-Raphson technique to find polynomial roots; fractals; search region and coordinate systems; convergence; and generating color fractals on a computer. (LRW)
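    The Newton-Raphson fractal mentioned above can be generated in a few lines: iterate Newton's method for p(z) = z³ - 1 over a complex grid and color each point by the root it converges to (a minimal sketch; grid size and iteration count are arbitrary choices):

```python
import numpy as np

# Roots of z^3 = 1 (the three attractors of the Newton iteration).
roots = np.exp(2j * np.pi * np.arange(3) / 3)

re, im = np.meshgrid(np.linspace(-1.5, 1.5, 200), np.linspace(-1.5, 1.5, 200))
z = re + 1j * im
for _ in range(40):
    z = z - (z**3 - 1) / (3 * z**2 + 1e-12)   # Newton step, guarded near z = 0

# Label each grid point by its nearest root; plotting 'labels' as an image
# shows the three interlocking fractal basins of attraction.
labels = np.argmin(np.abs(z[..., None] - roots), axis=-1)
assert len(np.unique(labels)) == 3
```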

  8. High-throughput microarray technology in diagnostics of enterobacteria based on genome-wide probe selection and regression analysis.

    PubMed

    Friedrich, Torben; Rahmann, Sven; Weigel, Wilfried; Rabsch, Wolfgang; Fruth, Angelika; Ron, Eliora; Gunzer, Florian; Dandekar, Thomas; Hacker, Jörg; Müller, Tobias; Dobrindt, Ulrich

    2010-10-21

    The Enterobacteriaceae comprise a large number of clinically relevant species with several individual subspecies. Overlapping virulence-associated gene pools and the high overall genome plasticity often interfere with correct enterobacterial strain typing and risk assessment. Array technology offers a fast, reproducible and standardisable means for bacterial typing and thus provides many advantages for bacterial diagnostics, risk assessment and surveillance. The development of highly discriminative broad-range microbial diagnostic microarrays remains a challenge, because of marked genome plasticity of many bacterial pathogens. We developed a DNA microarray for strain typing and detection of major antimicrobial resistance genes of clinically relevant enterobacteria. For this purpose, we applied a global genome-wide probe selection strategy on 32 available complete enterobacterial genomes combined with a regression model for pathogen classification. The discriminative power of the probe set was further tested in silico on 15 additional complete enterobacterial genome sequences. DNA microarrays based on the selected probes were used to type 92 clinical enterobacterial isolates. Phenotypic tests confirmed the array-based typing results and corroborated that the selected probes allowed correct typing and prediction of major antibiotic resistances of clinically relevant Enterobacteriaceae, including the subspecies level, e.g. the reliable distinction of different E. coli pathotypes. Our results demonstrate that the global probe selection approach based on longest common factor statistics as well as the design of a DNA microarray with a restricted set of discriminative probes enables robust discrimination of different enterobacterial variants and represents a proof of concept that can be adopted for diagnostics of a wide range of microbial pathogens.
Our approach circumvents misclassifications arising from the application of virulence markers, which are highly affected by

  9. Recovery and radiation corrections and time constants of several sizes of shielded and unshielded thermocouple probes for measuring gas temperature

    NASA Technical Reports Server (NTRS)

    Glawe, G. E.; Holanda, R.; Krause, L. N.

    1978-01-01

    Performance characteristics were experimentally determined for several sizes of a shielded and unshielded thermocouple probe design. The probes are of swaged construction and were made of type K wire with a stainless steel sheath and shield and MgO insulation. The wire sizes ranged from 0.03- to 1.02-mm diameter for the unshielded design and from 0.16- to 0.81-mm diameter for the shielded design. The probes were tested through a Mach number range of 0.2 to 0.9, through a temperature range of room ambient to 1420 K, and through a total-pressure range of 0.03 to 2.2 MPa (0.3 to 22 atm). Tables and graphs are presented to aid in selecting a particular type and size. Recovery corrections, radiation corrections, and time constants were determined.

  10. Distortion theorems for polynomials on a circle

    NASA Astrophysics Data System (ADS)

    Dubinin, V. N.

    2000-12-01

    Inequalities for the derivatives with respect to φ = arg z of the functions Re P(z), |P(z)|² and arg P(z) are established for an algebraic polynomial P(z) at points on the circle |z| = 1. These estimates depend, in particular, on the constant term and the leading coefficient of the polynomial P(z) and improve the classical Bernstein and Turán inequalities. The method of proof is based on the techniques of generalized reduced moduli.

  11. [A new class of exciplex-forming probes for the detection of specific DNA sequences].

    PubMed

    Li, Qing-Yong; Zu, Yuan-Gang; Lü, Hong-Yan; Wang, Li-Min

    2009-07-01

    The aim of the present research was to develop exciplex-based fluorescence detection of DNA. A SNP-containing region of the cytochrome P450 2C9 gene was evaluated to define some of the structural and associated requirements of this new class of exciplex-forming probe, and a 24-base target was selected that contains single-nucleotide polymorphisms (SNPs) in genes coding for cytochrome P450. Both probes were 12 bases long to give coverage of the 24-base target region and ensure specificity within the human genome. Exciplex partners used in this study were prepared using analogous phosphoramide attachment to the 3'- or 5'-phosphate group of the appropriate oligonucleotide probes. The target effectively assembled its own detector by hybridization from components that were non-fluorescent at the detection wavelength, leading to a large improvement in terms of decreased background. This research provides details of the effects of different partners, partner positions and excitation wavelengths for the split-oligonucleotide probe system for exciplex-based fluorescence detection of DNA. The study demonstrates that the emission intensity of the excimer formed by the new pyrene derivative is the highest among these excimers and exciplexes, and that the excimer is easy to form and not sensitive to the position of the partners. The exciplex formed by the new pyrene derivative and naphthalene, however, emitted strongly at ~505 nm with large Stokes shifts (120-130 nm), with nearly zero monomer emission at 390 and 410 nm. An excitation wavelength of 400 nm is optimal for I(e505)/I(m410) (exciplex emission at 505 nm/monomer emission at 410 nm). This method features low background and high sensitivity. Moreover, the exciplex is sensitive to steric factors, partner position and microenvironment, so this exciplex system is promising and could be applied to identify SNP-containing genes.

  12. Polynomial probability distribution estimation using the method of moments

    PubMed Central

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949
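    The moment-matching idea can be sketched in a few lines (our own minimal variant, not the paper's algorithm): fit p(x) = Σⱼ aⱼ xʲ on a support [lo, hi] so that its raw moments match the target's, which reduces to a linear system in the coefficients:

```python
import numpy as np

def poly_pdf_from_moments(moments, lo, hi):
    """Coefficients a_j of p(x) = sum_j a_j x^j whose raw moments on [lo, hi]
    match the given moments m_0..m_N:
        sum_j a_j (hi^(k+j+1) - lo^(k+j+1)) / (k+j+1) = m_k,  k = 0..N."""
    n = len(moments)
    A = np.array([[(hi**(k + j + 1) - lo**(k + j + 1)) / (k + j + 1)
                   for j in range(n)] for k in range(n)])
    return np.linalg.solve(A, np.array(moments))

# Sanity check: Uniform(0,1) has moments m_k = 1/(k+1), and the recovered
# polynomial density is the constant p(x) = 1.
coef = poly_pdf_from_moments([1, 1/2, 1/3, 1/4], 0.0, 1.0)
assert np.allclose(coef, [1, 0, 0, 0], atol=1e-8)
```

    Because the approximation is polynomial, convolutions of two such densities reduce to integrals of polynomial expressions, which is the practical advantage the abstract highlights.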

  13. Polynomial probability distribution estimation using the method of moments.

    PubMed

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.

  14. PNA-COMBO-FISH: From combinatorial probe design in silico to vitality compatible, specific labelling of gene targets in cell nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Müller, Patrick; Rößler, Jens; Schwarz-Finsterle, Jutta

    Recently, advantages concerning targeting specificity of PCR constructed oligonucleotide FISH probes in contrast to established FISH probes, e.g. BAC clones, have been demonstrated. These techniques, however, are still using labelling protocols with DNA denaturing steps applying harsh heat treatment with or without further denaturing chemical agents. COMBO-FISH (COMBinatorial Oligonucleotide FISH) allows the design of specific oligonucleotide probe combinations in silico. Thus, being independent from primer libraries or PCR laboratory conditions, the probe sequences extracted by computer sequence data base search can also be synthesized as single stranded PNA-probes (Peptide Nucleic Acid probes). Gene targets can be specifically labelled with at least about 20 PNA-probes, obtaining visibly background-free specimens. By using appropriately designed triplex forming oligonucleotides, the denaturing procedures can completely be omitted. These results reveal a significant step towards oligonucleotide-FISH maintaining the 3D-nanostructure and even the viability of the cell target. The method is demonstrated with the detection of Her2/neu and GRB7 genes, which are indicators in breast cancer diagnosis and therapy. - Highlights: • Denaturation free protocols preserve 3D architecture of chromosomes and nuclei. • Labelling sets are determined in silico for duplex and triplex binding. • Probes are produced chemically with freely chosen backbones and base variants. • Peptide nucleic acid backbones reduce hindering charge interactions. • Intercalating side chains stabilize binding of short oligonucleotides.

  15. Application of polynomial su(1, 1) algebra to Pöschl-Teller potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hong-Biao, E-mail: zhanghb017@nenu.edu.cn; Lu, Lu

    2013-12-15

    Two novel polynomial su(1, 1) algebras for the physical systems with the first and second Pöschl-Teller (PT) potentials are constructed, and their specific representations are presented. Meanwhile, these polynomial su(1, 1) algebras are used as an algebraic technique to solve eigenvalues and eigenfunctions of the Hamiltonians associated with the first and second PT potentials. The algebraic approach explores an appropriate new pair of raising and lowering operators K̂± of the polynomial su(1, 1) algebra as a pair of shift operators of our Hamiltonians. In addition, two usual su(1, 1) algebras associated with the first and second PT potentials are derived naturally from the polynomial su(1, 1) algebras built by us.
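    For orientation, polynomial deformations of su(1, 1) are commonly defined by keeping the Cartan relations and letting the remaining commutator close on a polynomial in K̂₀; a generic form (our paraphrase, not the paper's specific construction) is:

```latex
\[
  [\hat K_0, \hat K_\pm] = \pm \hat K_\pm, \qquad
  [\hat K_+, \hat K_-] = \Phi(\hat K_0),
\]
% where \Phi is a polynomial; the ordinary su(1,1) algebra is recovered
% for the linear choice \Phi(\hat K_0) = -2\hat K_0.
```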

  16. Detector-specific correction factors in radiosurgery beams and their impact on dose distribution calculations.

    PubMed

    García-Garduño, Olivia A; Rodríguez-Ávila, Manuel A; Lárraga-Gutiérrez, José M

    2018-01-01

    Silicon-diode-based detectors are commonly used for the dosimetry of small radiotherapy beams due to their relatively small volumes and high sensitivity to ionizing radiation. Nevertheless, silicon-diode-based detectors tend to over-respond in small fields because of their high density relative to water. For that reason, detector-specific beam correction factors have been recommended not only to correct the total scatter factors but also to correct the tissue-maximum and off-axis ratios. However, the application of these correction factors to in-depth and off-axis locations has not been studied. The goal of this work is to address the impact of the correction factors on the calculated dose distribution in static non-conventional photon beams (specifically, in stereotactic radiosurgery with circular collimators). To achieve this goal, the total scatter factors, tissue-maximum and off-axis ratios were measured with a stereotactic field diode for 4.0-, 10.0-, and 20.0-mm circular collimators. The irradiation was performed with a Novalis® linear accelerator using a 6-MV photon beam. The detector-specific correction factors were calculated and applied to the experimental dosimetry data for in-depth and off-axis locations. The corrected and uncorrected dosimetry data were used to commission a treatment planning system for radiosurgery planning. Various plans were calculated with simulated lesions using the uncorrected and corrected dosimetry. The resulting dose calculations were compared using the gamma index test with several criteria. The results of this work present important conclusions for the use of detector-specific beam correction factors in a treatment planning system. The use of these factors for total scatter factors has an important impact on monitor unit calculation. On the contrary, their use for tissue-maximum and off-axis ratios does not have an important impact on the dose distribution
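    The mechanics of applying such correction factors are simple: each measured total scatter factor is multiplied by its field-size-specific factor before commissioning. A hypothetical illustration (all numerical values below are invented for demonstration, not measured data):

```python
# Field diameter (mm) -> raw diode total scatter factor (invented values).
measured_tsf = {4.0: 0.662, 10.0: 0.823, 20.0: 0.893}
# Field diameter (mm) -> detector-specific correction factor (invented values;
# below 1 because the diode over-responds in the smallest fields).
k_corr = {4.0: 0.955, 10.0: 0.985, 20.0: 0.998}

corrected_tsf = {d: measured_tsf[d] * k_corr[d] for d in measured_tsf}

# The smallest field changes the most, which is why the correction has a
# direct impact on monitor-unit calculation.
assert corrected_tsf[4.0] < measured_tsf[4.0]
```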

  17. Histogram-driven cupping correction (HDCC) in CT

    NASA Astrophysics Data System (ADS)

    Kyriakou, Y.; Meyer, M.; Lapp, R.; Kalender, W. A.

    2010-04-01

    Typical cupping correction methods are pre-processing methods which require either pre-calibration measurements or simulations of standard objects to approximate and correct for beam hardening and scatter. Some of them require the knowledge of spectra, detector characteristics, etc. The aim of this work was to develop a practical histogram-driven cupping correction (HDCC) method to post-process the reconstructed images. We use a polynomial representation of the raw data generated by forward projection of the reconstructed images; forward and backprojection are performed on graphics processing units (GPU). The coefficients of the polynomial are optimized using a simplex minimization of the joint entropy of the CT image and its gradient. The algorithm was evaluated using simulations and measurements of homogeneous and inhomogeneous phantoms. For the measurements, a C-arm flat-detector CT (FD-CT) system with a 30 × 40 cm² detector, a kilovoltage on-board imager (radiation therapy simulator) and a micro-CT system were used. The algorithm reduced cupping artifacts both in simulations and measurements using a fourth-order polynomial and was in good agreement with the reference. The minimization algorithm required less than 70 iterations to adjust the coefficients, only performing a linear combination of basis images, thus executing without time-consuming operations. HDCC reduced cupping artifacts without the necessity of pre-calibration or other scan information, enabling a retrospective improvement of CT image homogeneity. However, the method can work with other cupping correction algorithms or in a calibration manner, as well.
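    The objective being minimized can be sketched directly: the joint entropy of an image and its gradient magnitude, which drops as cupping is removed. A minimal sketch of that cost function on a synthetic phantom (our reading of the abstract; bin count and phantom are arbitrary choices):

```python
import numpy as np

def joint_entropy(image, bins=64):
    """Joint entropy (bits) of image intensity and gradient magnitude."""
    gy, gx = np.gradient(image)
    grad = np.hypot(gx, gy)
    hist, _, _ = np.histogram2d(image.ravel(), grad.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# A flat (cupping-free) disc phantom concentrates the joint histogram in a few
# bins; a cupped version spreads it out, raising the joint entropy.
y, x = np.mgrid[-64:64, -64:64]
disc = (x**2 + y**2 < 50**2).astype(float)
cupped = disc * (1.0 - 0.3 * np.exp(-(x**2 + y**2) / (2 * 30.0**2)))
assert joint_entropy(disc) < joint_entropy(cupped)
```

    In HDCC this scalar would be evaluated after applying candidate polynomial coefficients to the forward-projected raw data and reconstructing, with a simplex search driving the coefficients toward minimum entropy.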

  18. Polynomial reduction and evaluation of tree- and loop-level CHY amplitudes

    DOE PAGES

    Zlotnikov, Michael

    2016-08-24

    We develop a polynomial reduction procedure that transforms any gauge fixed CHY amplitude integrand for n scattering particles into a σ-moduli multivariate polynomial of what we call the standard form. We show that a standard form polynomial must have a specific ladder type monomial structure, which has finite size at any n, with highest multivariate degree given by (n – 3)(n – 4)/2. This set of monomials spans a complete basis for polynomials with rational coefficients in kinematic data on the support of scattering equations. Subsequently, at tree and one-loop level, we employ the global residue theorem to derive a prescription that evaluates any CHY amplitude by means of collecting simple residues at infinity only. Furthermore, the prescription is then applied explicitly to some tree and one-loop amplitude examples.

  19. Multi-indexed (q-)Racah polynomials

    NASA Astrophysics Data System (ADS)

    Odake, Satoru; Sasaki, Ryu

    2012-09-01

    As the second stage of the project multi-indexed orthogonal polynomials, we present, in the framework of ‘discrete quantum mechanics’ with real shifts in one dimension, the multi-indexed (q-)Racah polynomials. They are obtained from the (q-)Racah polynomials by the multiple application of the discrete analogue of the Darboux transformations or the Crum-Krein-Adler deletion of ‘virtual state’ vectors, in a similar way to the multi-indexed Laguerre and Jacobi polynomials reported earlier. The virtual state vectors are the ‘solutions’ of the matrix Schrödinger equation with negative ‘eigenvalues’, except for one of the two boundary points.

  20. Stabilization of nonlinear systems using sampled-data output-feedback fuzzy controller based on polynomial-fuzzy-model-based control approach.

    PubMed

    Lam, H K

    2012-02-01

    This paper investigates the stability of sampled-data output-feedback (SDOF) polynomial-fuzzy-model-based control systems. Representing the nonlinear plant using a polynomial fuzzy model, an SDOF fuzzy controller is proposed to perform the control process using the system output information. As only the system output is available for feedback compensation, it is more challenging for the controller design and system analysis compared to the full-state-feedback case. Furthermore, because of the sampling activity, the control signal is kept constant by the zero-order hold during the sampling period, which complicates the system dynamics and makes the stability analysis more difficult. In this paper, two cases of SDOF fuzzy controllers, which either share the same number of fuzzy rules or not, are considered. The system stability is investigated based on the Lyapunov stability theory using the sum-of-squares (SOS) approach. SOS-based stability conditions are obtained to guarantee the system stability and synthesize the SDOF fuzzy controller. Simulation examples are given to demonstrate the merits of the proposed SDOF fuzzy control approach.

  1. Computing Galois Groups of Eisenstein Polynomials Over P-adic Fields

    NASA Astrophysics Data System (ADS)

    Milstead, Jonathan

    The most efficient algorithms for computing Galois groups of polynomials over global fields are based on Stauduhar's relative resolvent method. These methods are not directly generalizable to the local field case, since they require a field that contains the global field in which all roots of the polynomial can be approximated. We present splitting-field-independent methods for computing the Galois group of an Eisenstein polynomial over a p-adic field. Our approach is to combine information from different disciplines. First, we make use of the ramification polygon of the polynomial, which is the Newton polygon of a related polynomial. This allows us to quickly calculate several invariants that serve to reduce the number of possible Galois groups. Algorithms by Greve and Pauli very efficiently return the Galois group of polynomials where the ramification polygon consists of one segment as well as information about the subfields of the stem field. Second, we look at the factorization of linear absolute resolvents to further narrow the pool of possible groups.
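    The Newton polygon underlying the ramification polygon is just the lower convex hull of the points (i, v_p(a_i)). A self-contained sketch (our own helper names, integer coefficients only for simplicity):

```python
from fractions import Fraction

def vp(n, p):
    """p-adic valuation of a nonzero integer."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def newton_polygon(coeffs, p):
    """Vertices of the lower convex hull of (i, v_p(a_i)) for a_i != 0,
    computed with a monotone-chain pass over increasing i."""
    pts = [(i, vp(c, p)) for i, c in enumerate(coeffs) if c != 0]
    hull = []
    for pt in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if Fraction(y2 - y1, x2 - x1) >= Fraction(pt[1] - y2, pt[0] - x2):
                hull.pop()           # middle point lies on or above the chord
            else:
                break
        hull.append(pt)
    return hull

# Eisenstein polynomial x^4 + 2x + 2 over Q_2: a single segment of slope -1/4,
# reflecting roots of valuation 1/4 (totally ramified of degree 4).
vertices = newton_polygon([2, 2, 0, 0, 1], 2)   # coefficients a_0..a_4
assert vertices == [(0, 1), (4, 0)]
```

    The segment slopes of this polygon (for the related polynomial) are exactly the invariants that prune the list of candidate Galois groups.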

  2. Bunch mode specific rate corrections for PILATUS3 detectors

    DOE PAGES

    Trueb, P.; Dejoie, C.; Kobas, M.; ...

    2015-04-09

    PILATUS X-ray detectors are in operation at many synchrotron beamlines around the world. This article reports on the characterization of the new PILATUS3 detector generation at high count rates. As for all counting detectors, the measured intensities have to be corrected for the dead-time of the counting mechanism at high photon fluxes. The large number of different bunch modes at these synchrotrons as well as the wide range of detector settings presents a challenge for providing accurate corrections. To avoid the intricate measurement of the count rate behaviour for every bunch mode, a Monte Carlo simulation of the counting mechanism has been implemented, which is able to predict the corrections for arbitrary bunch modes and a wide range of detector settings. This article compares the simulated results with experimental data acquired at different synchrotrons. It is found that the usage of bunch mode specific corrections based on this simulation improves the accuracy of the measured intensities by up to 40% for high photon rates and highly structured bunch modes. For less structured bunch modes, the instant retrigger technology of PILATUS3 detectors substantially reduces the dependency of the rate correction on the bunch mode. The acquired data also demonstrate that the instant retrigger technology allows for data acquisition up to 15 million photons per second per pixel.
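    The general idea behind such a simulation can be sketched with the simplest case, a non-paralyzable counter under Poisson illumination (the actual PILATUS3 retrigger logic and bunch structure are more involved; rate and dead time below are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
rate = 2.0e6          # assumed incident photons per second per pixel
tau = 120e-9          # assumed dead time, seconds

# Poisson arrivals: exponential inter-arrival gaps, cumulated into time stamps.
arrivals = np.cumsum(rng.exponential(1.0 / rate, size=200_000))

counted, last = 0, -np.inf
for t in arrivals:
    if t - last >= tau:          # counter live again: register, restart dead time
        counted, last = counted + 1, t

measured = counted / arrivals[-1]
expected = rate / (1.0 + rate * tau)    # classic non-paralyzable dead-time formula
assert abs(measured - expected) / expected < 0.02
```

    Replacing the exponential gaps with a synchrotron bunch pattern is what makes the correction bunch-mode specific: the same dead time clips a structured photon stream differently than a uniform one.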

  3. Structure design and characteristic analysis of micro-nano probe based on six dimensional micro-force measuring principle

    NASA Astrophysics Data System (ADS)

    Yang, Hong-tao; Cai, Chun-mei; Fang, Chuan-zhi; Wu, Tian-feng

    2013-10-01

    In order to develop a micro-nano probe with an error self-correcting function and a rigid structure, a new micro-nano probe system was developed based on a six-dimensional micro-force measuring principle. The structure and working principle of the probe are introduced in detail. A static nonlinear decoupling method was established with a BP neural network to correct for the dimensional coupling present in the force measurements in each direction. The optimal parameters of the BP neural network were selected and decoupling simulation experiments were performed. The maximum probe coupling rate after decoupling is 0.039% in the X direction, 0.025% in the Y direction and 0.027% in the Z direction. The static measurement sensitivity of the probe reaches 10.76 με/mN in the Z direction and 14.55 με/mN in the X and Y directions. Modal analysis and harmonic response analysis under three-dimensional harmonic load were performed using the finite element method. The natural frequencies under different vibration modes were obtained and the working frequency of the probe was determined, which is higher than 10,000 Hz. Transient response analysis indicates that the response time of the probe can reach 0.4 ms. These results show that the developed micro-nano probe meets the triggering requirements of a micro-nano probe. Three-dimensional measurement forces can be measured precisely by the developed probe, which can be used to predict and correct the force deformation error and the touch error of the measuring ball and the measuring rod.

  4. An algorithmic approach to solving polynomial equations associated with quantum circuits

    NASA Astrophysics Data System (ADS)

    Gerdt, V. P.; Zinin, M. V.

    2009-12-01

    In this paper we present two algorithms for reducing systems of multivariate polynomial equations over the finite field F_2 to the canonical triangular form called lexicographical Gröbner basis. This triangular form is the most appropriate for finding solutions of the system. On the other hand, the system of polynomials over F_2 whose variables also take values in F_2 (Boolean polynomials) completely describes the unitary matrix generated by a quantum circuit. In particular, the matrix itself can be computed by counting the number of solutions (roots) of the associated polynomial system. Thereby, efficient construction of the lexicographical Gröbner bases over F_2 associated with quantum circuits gives a method for computing their circuit matrices that is alternative to the direct numerical method based on linear algebra. We compare our implementation of both algorithms with some other software packages available for computing Gröbner bases over F_2.
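    The solution-counting step is easy to illustrate by brute force over F_2 (a toy system of our own, not a real circuit; Gröbner bases make this scale where enumeration cannot):

```python
from itertools import product

def solutions(polys, nvars):
    """Count common roots over F_2 of polynomials given as Python callables;
    each callable should vanish mod 2 at a solution."""
    count = 0
    for bits in product((0, 1), repeat=nvars):
        if all(p(*bits) % 2 == 0 for p in polys):
            count += 1
    return count

# Toy Boolean system: x*y + z = 1 and x + y = 1 over F_2
# (written as polynomials that must vanish mod 2).
system = [lambda x, y, z: x * y + z + 1,
          lambda x, y, z: x + y + 1]
assert solutions(system, 3) == 2   # (0,1,1) and (1,0,1)
```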

  5. Equivalences of the multi-indexed orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odake, Satoru

    2014-01-15

    Multi-indexed orthogonal polynomials describe eigenfunctions of exactly solvable shape-invariant quantum mechanical systems in one dimension obtained by the method of virtual states deletion. Multi-indexed orthogonal polynomials are labeled by a set of degrees of polynomial parts of virtual state wavefunctions. For multi-indexed orthogonal polynomials of Laguerre, Jacobi, Wilson, and Askey-Wilson types, two different index sets may give equivalent multi-indexed orthogonal polynomials. We clarify these equivalences. Multi-indexed orthogonal polynomials with both type I and II indices are proportional to those of type I indices only (or type II indices only) with shifted parameters.

  6. Probe-specific mixed-model approach to detect copy number differences using multiplex ligation-dependent probe amplification (MLPA)

    PubMed Central

    González, Juan R; Carrasco, Josep L; Armengol, Lluís; Villatoro, Sergi; Jover, Lluís; Yasui, Yutaka; Estivill, Xavier

    2008-01-01

    Background: The MLPA method is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample. Results: Through simulation studies we have shown that our proposed method outperforms two existing methods that are based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which targeted regions are variable in copy number in individuals suffering from disorders such as Prader-Willi, DiGeorge or autism, on which the method showed its best performance. Conclusion: Using the proposed mixed model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific for each individual, incorporating experimental variability, resulting in improved sensitivity and specificity, as the examples with real data have revealed. PMID:18522760
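    The thresholding idea can be sketched in its simplest form: estimate each sample's own error variability from control probes, then flag test probes whose dosage ratio falls outside a sample-specific tolerance interval (a deliberately simplified stand-in for the paper's mixed-model machinery; all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Control-probe dosage ratios for one sample, nominally 1.0 with that
# sample's own random error (assumed spread of 5%).
ref = 1.0 + rng.normal(0, 0.05, size=20)
# Candidate probes: two normal, one deletion-like (~0.5), one duplication-like (~1.5).
test = np.array([1.02, 0.97, 0.52, 1.48])

m, s = ref.mean(), ref.std(ddof=1)
k = 3.0                                  # tolerance-interval width factor (a choice)
altered = (test < m - k * s) | (test > m + k * s)
assert list(altered) == [False, False, True, True]
```

    Because m and s are estimated per sample, a noisy sample automatically gets a wider interval, which is the intuition behind the improved specificity reported above.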

  7. Transcranial passive acoustic mapping with hemispherical sparse arrays using CT-based skull-specific aberration corrections: a simulation study

    PubMed Central

    Jones, Ryan M.; O’Reilly, Meaghan A.; Hynynen, Kullervo

    2013-01-01

    The feasibility of transcranial passive acoustic mapping with hemispherical sparse arrays (30 cm diameter, 16 to 1372 elements, 2.48 mm receiver diameter) using CT-based aberration corrections was investigated via numerical simulations. A multi-layered ray acoustic transcranial ultrasound propagation model based on CT-derived skull morphology was developed. By incorporating skull-specific aberration corrections into a conventional passive beamforming algorithm (Norton and Won 2000 IEEE Trans. Geosci. Remote Sens. 38 1337–43), simulated acoustic source fields representing the emissions from acoustically stimulated microbubbles were spatially mapped through three digitized human skulls, with the transskull reconstructions closely matching the water-path control images. Image quality was quantified based on main lobe beamwidths, peak sidelobe ratio, and image signal-to-noise ratio. The effects on the resulting image quality of the source’s emission frequency and location within the skull cavity, the array sparsity and element configuration, the receiver element sensitivity, and the specific skull morphology were all investigated. The system’s resolution capabilities were also estimated for various degrees of array sparsity. Passive imaging of acoustic sources through an intact skull was shown to be possible with sparse hemispherical imaging arrays. This technique may be useful for the monitoring and control of transcranial focused ultrasound (FUS) treatments, particularly non-thermal, cavitation-mediated applications such as FUS-induced blood-brain barrier disruption or sonothrombolysis, for which no real-time monitoring technique currently exists. PMID:23807573
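
    The passive beamforming step can be sketched as a frequency-domain delay-and-sum reconstruction for a monochromatic source. Everything here (geometry, frequency, element count) is illustrative, and the CT-derived per-element skull corrections are left as an identity placeholder `corr` where they would enter as extra phase/amplitude factors.

```python
import numpy as np

c = 1500.0            # sound speed in water, m/s (water-path case)
f = 0.5e6             # source frequency, Hz
k = 2 * np.pi * f / c

rng = np.random.default_rng(1)
# 128 receivers scattered on a 15 cm radius hemisphere (z >= 0)
phi = rng.uniform(0, 2 * np.pi, 128)
cost = rng.uniform(0, 1, 128)
sint = np.sqrt(1 - cost**2)
rx = 0.15 * np.column_stack([sint * np.cos(phi), sint * np.sin(phi), cost])

src = np.array([0.005, -0.003, 0.06])          # true source location, m
d_src = np.linalg.norm(rx - src, axis=1)
signals = np.exp(-1j * k * d_src) / d_src      # monochromatic emissions

corr = np.ones(len(rx), dtype=complex)         # skull-correction placeholder

# image a small grid in the z = 0.06 m plane around the source
xs = np.linspace(-0.02, 0.02, 41)
img = np.zeros((41, 41))
for ix, x in enumerate(xs):
    for iy, y in enumerate(xs):
        d = np.linalg.norm(rx - np.array([x, y, 0.06]), axis=1)
        # compensate phase (exp(+jkd)) and spreading (×d), then sum coherently
        img[ix, iy] = np.abs(np.sum(signals * corr * np.exp(1j * k * d) * d))**2

peak = np.unravel_index(np.argmax(img), img.shape)
print(xs[peak[0]], xs[peak[1]])   # ≈ (0.005, -0.003)
```

With a skull in the path, `signals` would carry skull-induced phase errors and `corr` would hold the CT-derived conjugate corrections; the beamformer itself is unchanged.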

  8. Online Removal of Baseline Shift with a Polynomial Function for Hemodynamic Monitoring Using Near-Infrared Spectroscopy.

    PubMed

    Zhao, Ke; Ji, Yaoyao; Li, Yan; Li, Ting

    2018-01-21

    Near-infrared spectroscopy (NIRS) has become widely accepted as a valuable tool for noninvasively monitoring hemodynamics for clinical and diagnostic purposes. Baseline shift has attracted great attention in the field, but there has been little quantitative study on baseline removal. Here, we aimed to study the baseline characteristics of an in-house-built portable medical NIRS device over a long time (>3.5 h). We found that the measured baselines all formed near-perfect polynomial functions in phantom tests mimicking human bodies, consistent with recent NIRS studies. More importantly, our study shows that, among second- to sixth-order polynomials, the fourth-order polynomial gave distinguished performance with stable, low-computation-burden fitting calibration (R-square >0.99 for all probes), as evaluated by R-square, the sum of squares due to error, and the residual. This study provides a straightforward, efficient, and quantitatively evaluated solution for online baseline removal for hemodynamic monitoring using NIRS devices.
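
    A minimal sketch of the fourth-order fit-and-subtract step described above, on synthetic data: the drift shape, signal frequency, and amplitudes are illustrative, not taken from the paper.

```python
import numpy as np

t = np.linspace(0.0, 3.5, 2000)                       # hours
# slowly drifting baseline (here an exact fourth-order polynomial)
baseline = 0.4 - 0.3*t + 0.12*t**2 - 0.02*t**3 + 0.002*t**4
hemo = 0.01 * np.sin(2 * np.pi * 6.0 * t)             # fast physiological signal
raw = baseline + hemo

coeffs = np.polyfit(t, raw, deg=4)                    # fourth-order fit
corrected = raw - np.polyval(coeffs, t)               # baseline removed

# residual drift left after correction (should be far below the signal)
drift = np.polyval(coeffs, t) - baseline
print(np.max(np.abs(drift)))
```

Because the fast oscillation averages out in the least-squares fit, the polynomial tracks only the slow drift, and subtracting it leaves the hemodynamic signal essentially intact.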

  9. Image-based spectral distortion correction for photon-counting x-ray detectors

    PubMed Central

    Ding, Huanjun; Molloi, Sabee

    2012-01-01

    Purpose: To investigate the feasibility of using an image-based method to correct for distortions induced by various artifacts in the x-ray spectrum recorded with photon-counting detectors for their application in breast computed tomography (CT). Methods: The polyenergetic incident spectrum was simulated with the tungsten anode spectral model using the interpolating polynomials (TASMIP) code and carefully calibrated to match the x-ray tube in this study. Experiments were performed on a Cadmium-Zinc-Telluride (CZT) photon-counting detector with five energy thresholds. Energy bins were adjusted to evenly distribute the recorded counts above the noise floor. BR12 phantoms of various thicknesses were used for calibration. A nonlinear function was selected to fit the count correlation between the simulated and the measured spectra in the calibration process. To evaluate the proposed spectral distortion correction method, an empirical fitting derived from the calibration process was applied on the raw images recorded for polymethyl methacrylate (PMMA) phantoms of 8.7, 48.8, and 100.0 mm. Both the corrected counts and the effective attenuation coefficient were compared to the simulated values for each of the five energy bins. The feasibility of applying the proposed method to quantitative material decomposition was tested using a dual-energy imaging technique with a three-material phantom that consisted of water, lipid, and protein. The performance of the spectral distortion correction method was quantified using the relative root-mean-square (RMS) error with respect to the expected values from simulations or areal analysis of the decomposition phantom. Results: The implementation of the proposed method reduced the relative RMS error of the output counts in the five energy bins with respect to the simulated incident counts from 23.0%, 33.0%, and 54.0% to 1.2%, 1.8%, and 7.7% for 8.7, 48.8, and 100.0 mm PMMA phantoms, respectively. 
The accuracy of the effective attenuation …

  10. Linear precoding based on polynomial expansion: reducing complexity in massive MIMO.

    PubMed

    Mueller, Axel; Kammoun, Abla; Björnson, Emil; Debbah, Mérouane

    Massive multiple-input multiple-output (MIMO) techniques have the potential to bring tremendous improvements in spectral efficiency to future communication systems. Counterintuitively, the practical issues of having uncertain channel knowledge, high propagation losses, and implementing optimal non-linear precoding are solved more or less automatically by enlarging system dimensions. However, the computational precoding complexity grows with the system dimensions. For example, the close-to-optimal and relatively "antenna-efficient" regularized zero-forcing (RZF) precoding is very complicated to implement in practice, since it requires fast inversions of large matrices in every coherence period. Motivated by the high performance of RZF, we propose to replace the matrix inversion and multiplication by a truncated polynomial expansion (TPE), thereby obtaining the new TPE precoding scheme which is more suitable for real-time hardware implementation and significantly reduces the delay to the first transmitted symbol. The degree of the matrix polynomial can be adapted to the available hardware resources and enables smooth transition between simple maximum ratio transmission and more advanced RZF. By deriving new random matrix results, we obtain a deterministic expression for the asymptotic signal-to-interference-and-noise ratio (SINR) achieved by TPE precoding in massive MIMO systems. Furthermore, we provide a closed-form expression for the polynomial coefficients that maximizes this SINR. To maintain a fixed per-user rate loss as compared to RZF, the polynomial degree does not need to scale with the system, but it should be increased with the quality of the channel knowledge and the signal-to-noise ratio.
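
    The core idea of replacing the RZF matrix inversion with a matrix polynomial can be sketched with a plain Neumann-type truncated series: (A)^(-1) ≈ (1/α) Σ_{j≤J} (I − A/α)^j with A = HᴴH + δI and a normalization α above A's largest eigenvalue. The paper derives SINR-optimal polynomial coefficients; the Neumann coefficients and the dimensions below are simple stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
M, K, delta = 64, 8, 0.1                   # antennas, users, regularization
H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2 * M)

A = H.conj().T @ H + delta * np.eye(M)
alpha = 1.1 * np.linalg.eigvalsh(A)[-1]    # ensures the series converges

def tpe_precoder(J):
    # accumulate (1/alpha) * sum_{j=0}^{J} (I - A/alpha)^j  applied to H^H,
    # using only matrix-vector-style products (no explicit inverse)
    term = H.conj().T.copy()               # j = 0 term
    acc = term.copy()
    for _ in range(J):
        term = term - (A @ term) / alpha   # multiply by (I - A/alpha)
        acc += term
    return acc / alpha

W_exact = np.linalg.solve(A, H.conj().T)   # exact RZF direction
errs = [np.linalg.norm(tpe_precoder(J) - W_exact) for J in (2, 8, 32)]
print(errs)                                # decreasing with polynomial degree
```

Each extra degree costs one matrix product instead of a fresh inversion per coherence period, which is the hardware motivation given in the abstract.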

  11. Probe Array Correction With Strong Target Interactions

    DTIC Science & Technology

    2012-08-01

    exciting each probe array feed with a unit voltage source and computing the short-circuit currents, i_i, i = 1, 2, ..., 5, at each probe array feed ... that only one probe array element has unit terminal currents. In this case I_2 = Î_i = I_N − Y_N V_2 = I_N − Y_N [Y_is + Y_N]^−1 I_N = (I − Y_N [Y_is + Y ...

  12. Bayer Demosaicking with Polynomial Interpolation.

    PubMed

    Wu, Jiaji; Anisetti, Marco; Wu, Wei; Damiani, Ernesto; Jeon, Gwanggil

    2016-08-30

    Demosaicking is a digital image process that reconstructs a full-color image from the incomplete color samples produced by an image sensor. It is an unavoidable process for many devices incorporating a camera sensor (e.g., mobile phones, tablets). In this paper, we introduce a new demosaicking algorithm, polynomial interpolation-based demosaicking (PID). Our method makes three contributions: the calculation of error predictors, edge classification based on color differences, and a refinement stage using a weighted-sum strategy. Our new predictors are generated on the basis of polynomial interpolation and can be used as a sound alternative to predictors obtained by bilinear or Laplacian interpolation. In this paper we show how our predictors can be combined according to the proposed edge classifier. After populating the three color channels, a refinement stage is applied to enhance image quality and reduce demosaicking artifacts. Our experimental results show that the proposed method substantially improves over existing demosaicking methods in terms of objective performance (CPSNR, S-CIELAB ΔE, and FSIM) and visual performance.

  13. Constructing general partial differential equations using polynomial and neural networks.

    PubMed

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters, with the aim of improving the ability of the polynomial derivative term series to approximate complicated periodic functions, as simple low-order polynomials are not able to fully make up for the complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Restoring the lattice of Si-based atom probe reconstructions for enhanced information on dopant positioning.

    PubMed

    Breen, Andrew J; Moody, Michael P; Ceguerra, Anna V; Gault, Baptiste; Araullo-Peters, Vicente J; Ringer, Simon P

    2015-12-01

    The following manuscript presents a novel approach for creating lattice-based models of Sb-doped Si directly from atom probe reconstructions, for the purposes of improving information on dopant positioning and directly informing quantum-mechanics-based materials modeling approaches. Sophisticated crystallographic analysis techniques are used to detect latent crystal structure within the atom probe reconstructions with unprecedented accuracy. A distortion correction algorithm is then developed to precisely calibrate the detected crystal structure to the theoretically known diamond cubic lattice. The reconstructed atoms are then positioned on their most likely lattice positions. Simulations are then used to determine the accuracy of such an approach and show that improvements to short-range order measurements are possible for noise levels and detector efficiencies comparable with experimentally collected atom probe data. Copyright © 2015 Elsevier B.V. All rights reserved.
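
    The final "position atoms on their most likely lattice sites" step can be illustrated at toy scale. This sketch uses a simple cubic cell instead of the diamond cubic Si lattice and skips the distortion calibration; the noise level and atom count are arbitrary.

```python
import numpy as np

a = 0.543                                  # Si lattice parameter, nm
rng = np.random.default_rng(4)

# "true" atom sites on a simple cubic lattice, plus reconstruction noise
ideal = a * rng.integers(0, 10, size=(200, 3)).astype(float)
noisy = ideal + rng.normal(0.0, 0.08 * a, size=ideal.shape)

# snap each reconstructed position to the nearest lattice node
snapped = a * np.round(noisy / a)

# fraction of atoms restored to their exact original site
print(np.mean(np.all(snapped == ideal, axis=1)))
```

At this noise level (well under half a lattice spacing) every atom snaps back correctly; with larger reconstruction distortions, the calibration step described in the abstract becomes essential before snapping.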

  15. Extending Romanovski polynomials in quantum mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quesne, C.

    2013-12-15

    Some extensions of the (third-class) Romanovski polynomials (also called Romanovski/pseudo-Jacobi polynomials), which appear in bound-state wavefunctions of rationally extended Scarf II and Rosen-Morse I potentials, are considered. For the former potentials, the generalized polynomials satisfy a finite orthogonality relation, while for the latter an infinite set of relations among polynomials with degree-dependent parameters is obtained. Both types of relations are counterparts of those known for conventional polynomials. In the absence of any direct information on the zeros of the Romanovski polynomials present in denominators, the regularity of the constructed potentials is checked by taking advantage of the disconjugacy properties of second-order differential equations of Schrödinger type. It is also shown that on going from Scarf I to Scarf II or from Rosen-Morse II to Rosen-Morse I potentials, the variety of rational extensions is narrowed down from types I, II, and III to type III only.

  16. Evaluate More General Integrals Involving Universal Associated Legendre Polynomials via Taylor’s Theorem

    NASA Astrophysics Data System (ADS)

    Yañez-Navarro, G.; Sun, Guo-Hua; Sun, Dong-Sheng; Chen, Chang-Yuan; Dong, Shi-Hai

    2017-08-01

    A few important integrals involving the product of two universal associated Legendre polynomials P_{l′}^{m′}(x), P_{k′}^{n′}(x) and x^{2a}(1 − x²)^{−p−1}, x^b(1 ± x)^{−p−1} and x^c(1 − x²)^{−p−1}(1 ± x) are evaluated using the operator form of Taylor’s theorem and an integral over a single universal associated Legendre polynomial. These integrals are more general since the quantum numbers are unequal, i.e. l′ ≠ k′ and m′ ≠ n′. Their selection rules are also given. We also verify the correctness of those integral formulas numerically. Supported by 20170938-SIP-IPN, Mexico

  17. PNA-COMBO-FISH: From combinatorial probe design in silico to vitality compatible, specific labelling of gene targets in cell nuclei.

    PubMed

    Müller, Patrick; Rößler, Jens; Schwarz-Finsterle, Jutta; Schmitt, Eberhard; Hausmann, Michael

    2016-07-01

    Recently, advantages concerning the targeting specificity of PCR-constructed oligonucleotide FISH probes, in contrast to established FISH probes such as BAC clones, have been demonstrated. These techniques, however, still use labelling protocols with DNA-denaturing steps applying harsh heat treatment, with or without further denaturing chemical agents. COMBO-FISH (COMBinatorial Oligonucleotide FISH) allows the design of specific oligonucleotide probe combinations in silico. Thus, being independent of primer libraries or PCR laboratory conditions, the probe sequences extracted by computer sequence database search can also be synthesized as single-stranded PNA probes (Peptide Nucleic Acid probes) or TINA-DNA (Twisted Intercalating Nucleic Acids). Gene targets can be specifically labelled with at least about 20 probes, obtaining visibly background-free specimens. By using appropriately designed triplex-forming oligonucleotides, the denaturing procedures can be omitted completely. These results represent a significant step towards oligonucleotide FISH that maintains the 3D nanostructure and even the viability of the cell target. The method is demonstrated with the detection of the Her2/neu and GRB7 genes, which are indicators in breast cancer diagnosis and therapy. Copyright © 2016. Published by Elsevier Inc.

  18. Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network

    PubMed Central

    Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N.

    2015-01-01

    Wireless sensor networks monitor and control the physical world via large numbers of small, low-priced sensor nodes. Existing wireless sensor network (WSN) methods communicate sensed data through continuous data collection, resulting in higher delay and energy consumption. To address the routing issue and reduce the energy drain rate, the Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes Node Energy Distribution initially distributes the sensor nodes that detect objects of similar events (i.e., temperature, pressure, flow) into specific regions with the application of Bayes' rule. The detection of similar events is accomplished based on the Bayes probabilities and is sent to the sink node, minimizing energy consumption. Next, the Polynomial Regression Function is applied to combine the target objects of similar events considered for different sensors; these are based on the minimum and maximum values of the object events and are transferred to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. The energy-efficient routing path for each sensor node is created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users, reducing the communication overhead. PMID:26426701

  20. Solving the interval type-2 fuzzy polynomial equation using the ranking method

    NASA Astrophysics Data System (ADS)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim

    2014-07-01

    Polynomial equations with trapezoidal and triangular fuzzy numbers have attracted some interest among researchers in mathematics, engineering and the social sciences. Some methods have been developed to solve these equations. In this study we are interested in introducing the interval type-2 fuzzy polynomial equation and solving it using the ranking method of fuzzy numbers. The ranking method concept was first proposed to find real roots of fuzzy polynomial equations. Therefore, the ranking method is applied to find real roots of the interval type-2 fuzzy polynomial equation. We transform the interval type-2 fuzzy polynomial equation into a system of crisp polynomial equations. This transformation is performed using the ranking method of fuzzy numbers based on three parameters, namely value, ambiguity and fuzziness. Finally, we illustrate our approach by a numerical example.

  1. NHS-Esters As Versatile Reactivity-Based Probes for Mapping Proteome-Wide Ligandable Hotspots.

    PubMed

    Ward, Carl C; Kleinman, Jordan I; Nomura, Daniel K

    2017-06-16

    Most of the proteome is considered undruggable, oftentimes hindering translational efforts for drug discovery. Identifying previously unknown druggable hotspots in proteins would enable strategies for pharmacologically interrogating these sites with small molecules. Activity-based protein profiling (ABPP) has arisen as a powerful chemoproteomic strategy that uses reactivity-based chemical probes to map reactive, functional, and ligandable hotspots in complex proteomes, which has enabled inhibitor discovery against various therapeutic protein targets. Here, we report an alkyne-functionalized N-hydroxysuccinimide-ester (NHS-ester) as a versatile reactivity-based probe for mapping the reactivity of a wide range of nucleophilic ligandable hotspots, including lysines, serines, threonines, and tyrosines, encompassing active sites, allosteric sites, post-translational modification sites, protein interaction sites, and previously uncharacterized potential binding sites. Surprisingly, we also show that fragment-based NHS-ester ligands can be made to confer selectivity for specific lysine hotspots on specific targets including Dpyd, Aldh2, and Gstt1. We thus put forth NHS-esters as promising reactivity-based probes and chemical scaffolds for covalent ligand discovery.

  2. Individually addressable vertically aligned carbon nanofiber-based electrochemical probes

    NASA Astrophysics Data System (ADS)

    Guillorn, M. A.; McKnight, T. E.; Melechko, A.; Merkulov, V. I.; Britt, P. F.; Austin, D. W.; Lowndes, D. H.; Simpson, M. L.

    2002-03-01

    In this paper we present the fabrication and initial testing results of high-aspect-ratio vertically aligned carbon nanofiber (VACNF)-based electrochemical probes. Electron beam lithography was used to define the catalytic growth sites of the VACNFs. Following catalyst deposition, VACNFs were grown using a plasma-enhanced chemical vapor deposition process. Photolithography was performed to realize interconnect structures. These probes were passivated with a thin layer of SiO2, which was then removed from the tips of the VACNFs, rendering them electrochemically active. We have investigated the functionality of completed devices using cyclic voltammetry (CV) of ruthenium hexammine trichloride, a highly reversible, outer-sphere redox system. The faradaic current obtained during CV potential sweeps shows clear oxidation and reduction peaks at magnitudes that correspond well with the geometry of these nanoscale electrochemical probes. Due to the size and the site-specific directed synthesis of the VACNFs, these probes are ideally suited for characterizing electrochemical phenomena with an unprecedented degree of spatial resolution.

  3. Comparing Explicit Exemplar-Based and Rule-Based Corrective Feedback: Introducing Analogy-Based Corrective Feedback

    ERIC Educational Resources Information Center

    Thomas, Kavita E.

    2018-01-01

    This study introduces an approach to providing corrective feedback to L2 learners termed analogy-based corrective feedback that is motivated by analogical learning theories and syntactic alignment in dialogue. Learners are presented with a structurally similar synonymous version of their output where the erroneous form is corrected, and they must…

  4. Polynomial time blackbox identity testers for depth-3 circuits : the field doesn't matter.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seshadhri, Comandur; Saxena, Nitin

    Let C be a depth-3 circuit with n variables, degree d and top fanin k (called ΣΠΣ(k, d, n) circuits) over base field F. It is a major open problem to design a deterministic polynomial time blackbox algorithm that tests if C is identically zero. Klivans & Spielman (STOC 2001) observed that the problem is open even when k is a constant. This case has been subjected to a serious study over the past few years, starting from the work of Dvir & Shpilka (STOC 2005). We give the first polynomial time blackbox algorithm for this problem. Our algorithm runs in time poly(n)·d^k, regardless of the base field. The only field for which polynomial time algorithms were previously known is F = Q (Kayal & Saraf, FOCS 2009, and Saxena & Seshadhri, FOCS 2010). This is the first blackbox algorithm for depth-3 circuits that does not use the rank based approaches of Karnin & Shpilka (CCC 2008). We prove an important tool for the study of depth-3 identities. We design a blackbox polynomial time transformation that reduces the number of variables in a ΣΠΣ(k, d, n) circuit to k variables, but preserves the identity structure. Polynomial identity testing (PIT) is a major open problem in theoretical computer science. The input is an arithmetic circuit that computes a polynomial p(x_1, x_2, ..., x_n) over a base field F. We wish to check if p is the zero polynomial, or in other words, is identically zero. We may be provided with an explicit circuit, or may only have blackbox access. In the latter case, we can only evaluate the polynomial p at various domain points. The main goal is to devise a deterministic blackbox polynomial time algorithm for PIT.
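
    For contrast with the deterministic algorithm above, the classical randomized blackbox test it aims to derandomize (Schwartz-Zippel) fits in a few lines: a nonzero polynomial of degree d vanishes at a random point of S^n with probability at most d/|S|. The helper below is an illustrative sketch that evaluates over the integers rather than a general field.

```python
import random

def is_identically_zero(poly, n_vars, trials=20, field_size=10**9):
    """Randomized blackbox identity test (Schwartz-Zippel style).

    poly is a blackbox taking n_vars integer arguments. A nonzero polynomial
    of degree d evaluates to zero at a uniform random point with probability
    at most d/field_size, so a nonzero result at any trial is a proof of
    nonzero-ness; all-zero results mean "zero with high probability".
    """
    for _ in range(trials):
        point = [random.randrange(field_size) for _ in range(n_vars)]
        if poly(*point) != 0:
            return False
    return True

# identity: (x+y)^2 - x^2 - 2xy - y^2 is the zero polynomial
assert is_identically_zero(lambda x, y: (x + y)**2 - x*x - 2*x*y - y*y, 2)
# non-identity: xy - 1 is almost never zero at random points
assert not is_identically_zero(lambda x, y: x*y - 1, 2)
```

The open problem in the abstract is precisely to replace this random choice of evaluation points by a deterministic, polynomial-size hitting set.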

  5. Nodal Statistics for the Van Vleck Polynomials

    NASA Astrophysics Data System (ADS)

    Bourget, Alain

    The Van Vleck polynomials naturally arise from the generalized Lamé equation as the polynomials of degree for which Eq. (1) has a polynomial solution of some degree k. In this paper, we compute the limiting distribution, as well as the limiting mean level spacings distribution, of the zeros of any Van Vleck polynomial as N → ∞.

  6. [Application of an Adaptive Inertia Weight Particle Swarm Algorithm in the Magnetic Resonance Bias Field Correction].

    PubMed

    Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao

    2016-06-01

    An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local-optimum problem of traditional particle swarm optimization in estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure that the particle swarm is optimized globally and does not fall into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared to the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate in this study. The corrected image was then segmented, and the segmentation accuracy obtained in this research was 10% higher than that of the improved entropy minimum algorithm. This algorithm can be applied to the correction of MR image bias fields.
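
    A 1-D sketch of the Legendre-basis bias-field model used above: a slowly varying multiplicative field represented by low-order Legendre coefficients, estimated here by plain least squares in the log domain. The paper optimizes the coefficients with the adaptive-inertia-weight PSO and works on 2-D/3-D images; all signal shapes below are synthetic stand-ins.

```python
import numpy as np
from numpy.polynomial import legendre as L

x = np.linspace(-1.0, 1.0, 512)
# smooth multiplicative bias field in a low-order Legendre basis
true_bias = 1.0 + 0.3 * L.legval(x, [0.0, 0.2, -0.4, 0.1])
# two-tissue intensity profile (fast-varying, unlike the bias)
tissue = np.where(np.sin(60 * x) > 0, 100.0, 60.0)
observed = tissue * true_bias

# log-domain fit turns the multiplicative bias into an additive one;
# the degree-3 Legendre fit captures the smooth bias, not the tissue edges
coeffs = L.legfit(x, np.log(observed), deg=3)
est_bias = np.exp(L.legval(x, coeffs))
corrected = observed / est_bias * est_bias.mean()

# shape error of the estimated bias (global scale is irrelevant)
print(np.std(np.log(true_bias / est_bias)))
```

The estimated field matches the true one up to a constant intensity scale, which is all that bias correction requires; the PSO in the paper plays the role of the least-squares solver here, chosen to avoid local optima of an entropy-based cost.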

  7. NADH-fluorescence scattering correction for absolute concentration determination in a liquid tissue phantom using a novel multispectral magnetic-resonance-imaging-compatible needle probe

    NASA Astrophysics Data System (ADS)

    Braun, Frank; Schalk, Robert; Heintz, Annabell; Feike, Patrick; Firmowski, Sebastian; Beuermann, Thomas; Methner, Frank-Jürgen; Kränzlin, Bettina; Gretz, Norbert; Rädle, Matthias

    2017-07-01

    In this report, a quantitative nicotinamide adenine dinucleotide hydrate (NADH) fluorescence measurement algorithm in a liquid tissue phantom using a fiber-optic needle probe is presented. To determine the absolute concentrations of NADH in this phantom, the fluorescence emission spectra at 465 nm were corrected using diffuse reflectance spectroscopy between 600 nm and 940 nm. The patented autoclavable Nitinol needle probe enables the acquisition of multispectral backscattering measurements of ultraviolet, visible, near-infrared and fluorescence spectra. As a phantom, a suspension of calcium carbonate (Calcilit) and water with physiological NADH concentrations between 0 mmol l-1 and 2.0 mmol l-1 were used to mimic human tissue. The light scattering characteristics were adjusted to match the backscattering attributes of human skin by modifying the concentration of Calcilit. To correct the scattering effects caused by the matrices of the samples, an algorithm based on the backscattered remission spectrum was employed to compensate the influence of multiscattering on the optical pathway through the dispersed phase. The monitored backscattered visible light was used to correct the fluorescence spectra and thereby to determine the true NADH concentrations at unknown Calcilit concentrations. Despite the simplicity of the presented algorithm, the root-mean-square error of prediction (RMSEP) was 0.093 mmol l-1.

  8. On Certain Wronskians of Multiple Orthogonal Polynomials

    NASA Astrophysics Data System (ADS)

    Zhang, Lun; Filipuk, Galina

    2014-11-01

    We consider determinants of Wronskian type whose entries are multiple orthogonal polynomials associated with a path connecting two multi-indices. By assuming that the weight functions form an algebraic Chebyshev (AT) system, we show that the polynomials represented by the Wronskians keep a constant sign in some cases, while in some other cases oscillatory behavior appears, which generalizes classical results for orthogonal polynomials due to Karlin and Szegő. There are two applications of our results. The first application arises from the observation that the m-th moment of the average characteristic polynomials for multiple orthogonal polynomial ensembles can be expressed as a Wronskian of the type II multiple orthogonal polynomials. Hence, it is straightforward to obtain the distinct behavior of the moments for odd and even m in a special multiple orthogonal ensemble - the AT ensemble. As the second application, we derive some Turán type inequalities for multiple Hermite and multiple Laguerre polynomials (of two kinds). Finally, we study numerically the geometric configuration of zeros for the Wronskians of these multiple orthogonal polynomials. We observe that the zeros have regular configurations in the complex plane, which might be of independent interest.

  9. Polynomial Graphs and Symmetry

    ERIC Educational Resources Information Center

    Goehle, Geoff; Kobayashi, Mitsuo

    2013-01-01

    Most quadratic functions are not even, but every parabola has symmetry with respect to some vertical line. Similarly, every cubic has rotational symmetry with respect to some point, though most cubics are not odd. We show that every polynomial has at most one point of symmetry and give conditions under which the polynomial has rotational or…

  10. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ₁-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
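
    Sampling a PC expansion under its natural distribution can be sketched in one dimension: draw inputs from the Hermite weight (a standard Gaussian), build the measurement matrix of normalized probabilists' Hermite polynomials, and recover the PC coefficients. The paper studies sparse ℓ1 recovery and coherence-optimal sampling; ordinary least squares stands in here to keep the sketch short, and the model coefficients are arbitrary.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(3)
order, n_samp = 8, 400
xi = rng.normal(size=n_samp)              # natural (Gaussian) sampling

# He_n / sqrt(n!) is orthonormal under the standard normal weight
norms = np.sqrt([factorial(n) for n in range(order + 1)])
Psi = np.column_stack([hermeval(xi, np.eye(order + 1)[n]) / norms[n]
                       for n in range(order + 1)])

true_c = np.zeros(order + 1)
true_c[[0, 2, 5]] = [1.0, 0.5, -0.25]     # a sparse "model" in the PC basis
y = Psi @ true_c                          # noise-free model evaluations

c_hat, *_ = np.linalg.lstsq(Psi, y, rcond=None)
print(np.max(np.abs(c_hat - true_c)))     # ≈ 0 (noise-free, overdetermined)
```

In the compressive regime of the paper, n_samp would be much smaller than the basis size and the least-squares solve would be replaced by ℓ1-minimization, with the sampling distribution chosen to minimize the coherence of Psi.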

  11. Vector-valued Jack polynomials and wavefunctions on the torus

    NASA Astrophysics Data System (ADS)

    Dunkl, Charles F.

    2017-06-01

    The Hamiltonian of the quantum Calogero-Sutherland model of N identical particles on the circle with 1/r² interactions has eigenfunctions consisting of Jack polynomials times the base state. By use of the generalized Jack polynomials taking values in modules of the symmetric group and the matrix solution of a system of linear differential equations, one constructs novel eigenfunctions of the Hamiltonian. Like the usual wavefunctions, each eigenfunction determines a symmetric probability density on the N-torus. The construction applies to any irreducible representation of the symmetric group. The methods depend on the theory of generalized Jack polynomials due to Griffeth, and the Yang-Baxter graph approach of Luque and the author.

  12. Gold nanoparticle-based probes for the colorimetric detection of Mycobacterium avium subspecies paratuberculosis DNA.

    PubMed

    Ganareal, Thenor Aristotile Charles S; Balbin, Michelle M; Monserate, Juvy J; Salazar, Joel R; Mingala, Claro N

    2018-02-12

    Gold nanoparticles (AuNPs) are considered the most stable metal nanoparticles and have the ability to be functionalized with biomolecules. Recently, AuNP-based DNA detection methods have captured the interest of researchers worldwide. Paratuberculosis or Johne's disease, a chronic gastroenteritis in ruminants caused by Mycobacterium avium subsp. paratuberculosis (MAP), was found to have a negative effect on the livestock industry. In this study, AuNP-based probes were evaluated for the specific and sensitive detection of MAP DNA. The AuNP-based probe was produced by functionalization of AuNPs with a thiol-modified oligonucleotide and was confirmed by Fourier-Transform Infrared (FTIR) spectroscopy. UV-Vis spectroscopy and Scanning Electron Microscopy (SEM) were used to characterize the AuNPs. DNA detection was done by hybridization of 10 μL of DNA with 5 μL of probe at 63 °C for 10 min and addition of 3 μL salt solution. The method was specific to MAP with a detection limit of 103 ng. UV-Vis and SEM showed dispersion and aggregation of the AuNPs for the positive and negative results, respectively, with no observed particle growth. This study therefore reports an AuNP-based probe which can be used for the specific and sensitive detection of MAP DNA. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Algorithms for Solvents and Spectral Factors of Matrix Polynomials

    DTIC Science & Technology

    1981-01-01

    Algorithms for solvents and spectral factors of matrix polynomials, by Leang S. Shieh, Yih T. Tsay and Norman P. Coleman. A generalized Newton method, based on the contracted gradient of a matrix polynomial, is derived for solving the right (left) solvents and spectral factors of matrix polynomials. Two methods of selecting initial estimates for rapid convergence of the newly developed numerical method are proposed. Also, new algorithms for solving complete sets of the right…

  14. Approximating exponential and logarithmic functions using polynomial interpolation

    NASA Astrophysics Data System (ADS)

    Gordon, Sheldon P.; Yang, Yajun

    2017-04-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is analysed. The results of interpolating polynomials are compared with those of Taylor polynomials.
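    The comparison the article describes can be reproduced numerically; in this sketch the degree, interval, and equally spaced nodes are illustrative choices, not necessarily the article's exact setup.

```python
import math
import numpy as np

# Compare the degree-4 Taylor polynomial of exp at 0 with the degree-4
# interpolating polynomial through 5 equally spaced nodes on [0, 1].
deg = 4
x = np.linspace(0.0, 1.0, 401)

# Taylor polynomial: sum of x^k / k! for k = 0..deg.
taylor = sum(x**k / math.factorial(k) for k in range(deg + 1))

# Interpolating polynomial through (node, exp(node)) pairs.
nodes = np.linspace(0.0, 1.0, deg + 1)
interp = np.polyval(np.polyfit(nodes, np.exp(nodes), deg), x)

err_taylor = np.max(np.abs(np.exp(x) - taylor))
err_interp = np.max(np.abs(np.exp(x) - interp))
print(err_taylor, err_interp)  # the interpolant's maximum error is far smaller
```

    The Taylor error is one-sided, concentrated away from the expansion point, while the interpolation error is spread across the nodes, which is the qualitative behaviour the article analyses.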

  15. DIFFERENTIAL CROSS SECTION ANALYSIS IN KAON PHOTOPRODUCTION USING ASSOCIATED LEGENDRE POLYNOMIALS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    P. T. P. HUTAURUK, D. G. IRELAND, G. ROSNER

    2009-04-01

    Angular distributions of differential cross sections from the latest CLAS data sets [6] for the reaction γ + p → K⁺ + Λ have been analyzed using associated Legendre polynomials. This analysis is based upon theoretical calculations in Ref. 1 where all sixteen observables in kaon photoproduction can be classified into four Legendre classes. Each observable can be described by an expansion of associated Legendre polynomial functions. One of the questions to be addressed is how many associated Legendre polynomials are required to describe the data. In this preliminary analysis, we used data models with different numbers of associated Legendre polynomials. We then compared these models by calculating posterior probabilities of the models. We found that the CLAS data set needs no more than four associated Legendre polynomials to describe the differential cross section data. In addition, we also show the extracted coefficients of the best model.
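    The expansion idea can be sketched for the m = 0 (differential cross section) class, where the associated Legendre functions reduce to ordinary Legendre polynomials; the coefficients and binning below are mock values, not CLAS data, and model selection by posterior probability is replaced by a simple residual check.

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Mock differential cross section generated from a four-term Legendre series
# in cos(theta); the coefficient values are illustrative only.
coef_true = np.array([1.0, 0.4, -0.2, 0.05])
cos_theta = np.linspace(-1.0, 1.0, 40)
dsigma = leg.legval(cos_theta, coef_true)

# Fit expansions of increasing order: the residual collapses once the model
# contains as many terms as the underlying series.
for deg in range(1, 6):
    c = leg.legfit(cos_theta, dsigma, deg)
    print(deg, np.max(np.abs(leg.legval(cos_theta, c) - dsigma)))
```

    On real, noisy data the residual never collapses exactly, which is why the paper compares models by posterior probability rather than by raw fit quality.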

  16. Local polynomial chaos expansion for linear differential equations with high dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yi; Jakeman, John; Gittelson, Claude

    2015-01-08

    In this paper we present a localized polynomial chaos expansion for partial differential equations (PDE) with random inputs. In particular, we focus on time independent linear stochastic problems with high dimensional random inputs, where the traditional polynomial chaos methods, and most of the existing methods, incur prohibitively high simulation cost. The local polynomial chaos method employs a domain decomposition technique to approximate the stochastic solution locally. In each subdomain, a subdomain problem is solved independently and, more importantly, in a much lower dimensional random space. In a postprocessing stage, accurate samples of the original stochastic problems are obtained from the samples of the local solutions by enforcing the correct stochastic structure of the random inputs and the coupling conditions at the interfaces of the subdomains. Overall, the method is able to solve stochastic PDEs in very large dimensions by solving a collection of low dimensional local problems and can be highly efficient. In our paper we present the general mathematical framework of the methodology and use numerical examples to demonstrate the properties of the method.

  17. Percolation critical polynomial as a graph invariant

    DOE PAGES

    Scullard, Christian R.

    2012-10-18

    Every lattice for which the bond percolation critical probability can be found exactly possesses a critical polynomial, with the root in [0, 1] providing the threshold. Recent work has demonstrated that this polynomial may be generalized through a definition that can be applied on any periodic lattice. The polynomial depends on the lattice and on its decomposition into identical finite subgraphs, but once these are specified, the polynomial is essentially unique. On lattices for which the exact percolation threshold is unknown, the polynomials provide approximations for the critical probability, with the estimates appearing to converge to the exact answer with increasing subgraph size. In this paper, I show how the critical polynomial can be viewed as a graph invariant like the Tutte polynomial. In particular, the critical polynomial is computed on a finite graph and may be found using the deletion-contraction algorithm. This allows calculation on a computer, and I present such results for the kagome lattice using subgraphs of up to 36 bonds. For one of these, I find the prediction p_c = 0.52440572…, which differs from the numerical value, p_c = 0.52440503(5), by only 6.9 × 10⁻⁷.
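    For a lattice where the critical polynomial is known exactly, extracting the threshold reduces to root-finding in [0, 1]. The triangular-lattice bond polynomial below is the classical exactly solved case, used here as a worked illustration rather than one of the paper's kagome computations.

```python
import math
import numpy as np

# Triangular-lattice bond percolation has the exact critical polynomial
# P(p) = p^3 - 3p + 1; its unique root in [0, 1] is the exact threshold
# p_c = 2*sin(pi/18).
roots = np.roots([1.0, 0.0, -3.0, 1.0])
p_c = next(r.real for r in roots if abs(r.imag) < 1e-12 and 0.0 < r.real < 1.0)
print(p_c)  # ≈ 0.347296
```

    For lattices without an exact solution, the paper's deletion-contraction computation produces the polynomial itself; the root extraction step is the same.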

  18. Using Tutte polynomials to analyze the structure of the benzodiazepines

    NASA Astrophysics Data System (ADS)

    Cadavid Muñoz, Juan José

    2014-05-01

    Graph theory in general, and Tutte polynomials in particular, are applied to analyze the chemical structure of the benzodiazepines. Similarity analyses based on the Tutte polynomials are used to find other molecules that are similar to the benzodiazepines and that might therefore show similar psycho-active action for medical purposes, in order to avoid the drawbacks associated with benzodiazepine-based medicines. For each type of benzodiazepine, Tutte polynomials are computed and some numeric characteristics are obtained, such as the number of spanning trees and the number of spanning forests. Computations are done using the GraphTheory package of the Maple computer algebra system. The obtained analytical results are of great importance in pharmaceutical engineering. As a future research line, the computational chemistry program Spartan will be used to extend the results and compare them with those obtained from the Tutte polynomials of the benzodiazepines.
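    One of the numeric characteristics mentioned, the number of spanning trees, equals the Tutte polynomial evaluated at (1, 1) and can also be obtained from Kirchhoff's matrix-tree theorem. The sketch below uses a 6-cycle (the benzene-ring skeleton) as a deliberately simplified stand-in for a full benzodiazepine molecular graph.

```python
import numpy as np

# Number of spanning trees = T(1, 1) = any cofactor of the graph Laplacian
# (matrix-tree theorem), shown here for the cycle C6.
n = 6
edges = [(i, (i + 1) % n) for i in range(n)]

lap = np.zeros((n, n))
for u, v in edges:
    lap[u, u] += 1; lap[v, v] += 1
    lap[u, v] -= 1; lap[v, u] -= 1

# Delete one row and column and take the determinant.
spanning_trees = round(np.linalg.det(lap[1:, 1:]))
print(spanning_trees)  # a cycle C_n has exactly n spanning trees, here 6
```

    For larger molecular graphs the full Tutte polynomial (as computed by Maple's GraphTheory package) carries strictly more information than this single evaluation.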

  19. Transfer matrix computation of generalized critical polynomials in percolation

    DOE PAGES

    Scullard, Christian R.; Jacobsen, Jesper Lykke

    2012-09-27

    Percolation thresholds have recently been studied by means of a graph polynomial PB(p), henceforth referred to as the critical polynomial, that may be defined on any periodic lattice. The polynomial depends on a finite subgraph B, called the basis, and the way in which the basis is tiled to form the lattice. The unique root of PB(p) in [0, 1] either gives the exact percolation threshold for the lattice, or provides an approximation that becomes more accurate with appropriately increasing size of B. Initially PB(p) was defined by a contraction-deletion identity, similar to that satisfied by the Tutte polynomial. Here, we give an alternative probabilistic definition of PB(p), which allows for much more efficient computations, by using the transfer matrix, than was previously possible with contraction-deletion. We present bond percolation polynomials for the (4, 8²), kagome, and (3, 12²) lattices for bases of up to respectively 96, 162, and 243 edges, much larger than the previous limit of 36 edges using contraction-deletion. We discuss in detail the role of the symmetries and the embedding of B. For the largest bases, we obtain the thresholds p_c(4, 8²) = 0.676803329…, p_c(kagome) = 0.524404998…, p_c(3, 12²) = 0.740420798…, comparable to the best simulation results. We also show that the alternative definition of PB(p) can be applied to study site percolation problems.

  20. [Design and Implementation of Image Interpolation and Color Correction for Ultra-thin Electronic Endoscope on FPGA].

    PubMed

    Luo, Qiang; Yan, Zhuangzhi; Gu, Dongxing; Cao, Lei

    This paper proposes an image interpolation algorithm based on bilinear interpolation and a color correction algorithm based on polynomial regression, implemented on an FPGA, to address the limited number of imaging pixels and the color distortion of the ultra-thin electronic endoscope. Simulation experiment results showed that the proposed algorithm realizes real-time display of 1280×720@60 Hz HD video and that, using the X-rite color checker as the standard colors, the average color difference was reduced by about 30% compared with that before color correction.
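    The interpolation stage can be sketched in software (the FPGA pipeline itself is not reproduced here; the function name and sizes are illustrative): each output pixel blends its four nearest input neighbours with weights given by its fractional position.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Upscale a single-channel image by bilinear interpolation."""
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)   # fractional source coordinates
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Blend horizontally along the top and bottom rows, then vertically.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

small = np.array([[0.0, 1.0], [2.0, 3.0]])
print(bilinear_resize(small, 3, 3))  # centre pixel is the mean of all four, 1.5
```

    The color correction stage of the paper is a separate per-channel polynomial regression, fitted against the color-checker reference patches.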

  1. Protocol for chromosome-specific probe construction using PRINS, micromanipulation and DOP-PCR techniques.

    PubMed

    Passamani, Paulo Z; Carvalho, Carlos R; Soares, Fernanda A F

    2018-01-01

    Chromosome-specific probes have been widely used in molecular cytogenetics, being obtained with different methods. In this study, a reproducible protocol for construction of chromosome-specific probes is proposed which associates in situ amplification (PRINS), micromanipulation and degenerate oligonucleotide-primed PCR (DOP-PCR). Human lymphocyte cultures were used to obtain metaphases from male and female individuals. The chromosomes were amplified via PRINS, and subcentromeric fragments of the X chromosome were microdissected using microneedles coupled to a phase contrast microscope. The fragments were amplified by DOP-PCR and labeled with tetramethyl-rhodamine-5-dUTP. The probes were used in a fluorescence in situ hybridization (FISH) procedure to highlight these specific regions in the metaphases. The results show one fluorescent red spot in male and two in female X chromosomes and interphase nuclei.

  2. Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers

    NASA Astrophysics Data System (ADS)

    Assaleh, Khaled; Al-Rousan, M.

    2005-12-01

    Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and that they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant. In particular, we have achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
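    The non-iterative training the abstract highlights can be sketched with a least-squares polynomial classifier on a toy two-class problem; the feature expansion, degree, and synthetic data below are illustrative assumptions, not the paper's ArSL features.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_expand(X, degree=2):
    """Expand features into all monomials up to the given degree."""
    cols = [np.ones(len(X))]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(X.shape[1]), d):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.column_stack(cols)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)  # ring-shaped classes

# One linear model per class, fitted in closed form on one-hot targets:
# no iterative training, as the abstract emphasizes.
P = poly_expand(X)
T = np.eye(2)[y]
W, *_ = np.linalg.lstsq(P, T, rcond=None)
pred = np.argmax(P @ W, axis=1)
print((pred == y).mean())  # high accuracy on this quadratic boundary
```

    Because training is a single least-squares solve per class, adding classes scales linearly, which is the computational advantage claimed over iteratively trained classifiers.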

  3. Detection of iron-depositing Pedomicrobium species in native biofilms from the Odertal National Park by a new, specific FISH probe.

    PubMed

    Braun, Burga; Richert, Inga; Szewzyk, Ulrich

    2009-10-01

    Iron-depositing bacteria play an important role in technical water systems (water wells, distribution systems) due to their intense deposition of iron oxides and resulting clogging effects. Pedomicrobium is known as iron- and manganese-oxidizing and accumulating bacterium. The ability to detect and quantify members of this species in biofilm communities is therefore desirable. In this study the fluorescence in situ hybridization (FISH) method was used to detect Pedomicrobium in iron and manganese incrusted biofilms. Based on comparative sequence analysis, we designed and evaluated a specific oligonucleotide probe (Pedo 1250) complementary to the hypervariable region 8 of the 16S rRNA gene for Pedomicrobium. Probe specificities were tested against 3 different strains of Pedomicrobium and Sphingobium yanoikuyae as non-target organism. Using optimized conditions the probe hybridized with all tested strains of Pedomicrobium with an efficiency of 80%. The non-target organism showed no hybridization signals. The new FISH probe was applied successfully for the in situ detection of Pedomicrobium in different native, iron-depositing biofilms. The hybridization results of native bioflims using probe Pedo_1250 agreed with the results of the morphological structure of Pedomicrobium bioflims based on scanning electron microscopy.

  4. On adaptive weighted polynomial preconditioning for Hermitian positive definite matrices

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Freund, Roland W.

    1992-01-01

    The conjugate gradient algorithm for solving Hermitian positive definite linear systems is usually combined with preconditioning in order to speed up convergence. In recent years, there has been a revival of polynomial preconditioning, motivated by the attractive features of the method on modern architectures. Standard techniques for choosing the preconditioning polynomial are based only on bounds for the extreme eigenvalues. Here a different approach is proposed, which aims at adapting the preconditioner to the eigenvalue distribution of the coefficient matrix. The technique is based on the observation that good estimates for the eigenvalue distribution can be derived after only a few steps of the Lanczos process. This information is then used to construct a weight function for a suitable Chebyshev approximation problem. The solution of this problem yields the polynomial preconditioner. In particular, we investigate the use of Bernstein-Szego weights.
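    The key observation, that a few Lanczos steps already yield useful estimates of the extreme eigenvalues, can be illustrated on a synthetic Hermitian positive definite matrix with a known spectrum (no reorthogonalization; the matrix size, step count, and spectrum are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 200, 10
eigs = np.linspace(0.1, 10.0, n)   # known spectrum of a diagonal SPD test matrix
A = np.diag(eigs)

# Plain Lanczos: build the k x k tridiagonal T whose eigenvalues (Ritz values)
# approximate the spectrum of A.
q = rng.normal(size=n)
q /= np.linalg.norm(q)
alpha = np.zeros(k)
beta = np.zeros(k - 1)
q_prev = np.zeros(n)
for j in range(k):
    w = A @ q
    if j > 0:
        w -= beta[j - 1] * q_prev
    alpha[j] = q @ w
    w -= alpha[j] * q
    if j < k - 1:
        beta[j] = np.linalg.norm(w)
        q_prev, q = q, w / beta[j]

T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
ritz = np.linalg.eigvalsh(T)
print(ritz[0], ritz[-1])  # Ritz extremes approach the spectrum edges 0.1 and 10
```

    In the paper these estimates feed a weighted Chebyshev approximation problem whose solution is the preconditioning polynomial; the sketch stops at the eigenvalue-estimation step.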

  5. Simple synthesis of carbon-11 labeled styryl dyes as new potential PET RNA-specific, living cell imaging probes.

    PubMed

    Wang, Min; Gao, Mingzhang; Miller, Kathy D; Sledge, George W; Hutchins, Gary D; Zheng, Qi-Huang

    2009-05-01

    A new type of styryl dye has been developed as an RNA-specific, live-cell imaging probe for fluorescence microscopy to study nuclear structure and function. This study was designed to develop carbon-11 labeled styryl dyes as new probes for the biomedical imaging technique positron emission tomography (PET) for imaging of RNA in living cells. Precursors (E)-2-(2-(1-(triisopropylsilyl)-1H-indol-3-yl)vinyl)quinoline (2), (E)-2-(2,4,6-trimethoxystyryl)quinoline (3) and (E)-4-(2-(6-methoxyquinolin-2-yl)vinyl)-N,N-dimethylaniline (4), and standard styryl dyes E36 (6), E144 (7) and F22 (9) were synthesized in multiple steps with moderate to high chemical yields. Precursor 2 was labeled by [(11)C]CH(3)OTf, trapped on a cation-exchange CM Sep-Pak cartridge following a quick deprotecting reaction by addition of (n-Bu)(4)NF in THF, and isolated by solid-phase extraction (SPE) purification to provide target tracer [(11)C]E36 ([(11)C]6) in 40-50% radiochemical yields, decay corrected to end of bombardment (EOB), based on [(11)C]CO(2). The target tracers [(11)C]E144 ([(11)C]7) and [(11)C]F22 ([(11)C]9) were prepared by N-[(11)C]methylation of the precursors 3 and 4, respectively, using [(11)C]CH(3)OTf and isolated by the SPE method in 50-70% radiochemical yields at EOB. The specific activity of the target tracers [(11)C]6, [(11)C]7 and [(11)C]9 was in a range of 74-111 GBq/μmol at the end of synthesis (EOS).

  6. A comparative study on preprocessing techniques in diabetic retinopathy retinal images: illumination correction and contrast enhancement.

    PubMed

    Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza

    2015-01-01

    To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method using the median filter to estimate background illumination showed the lowest coefficients of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same accuracy, and has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using the median filter to estimate background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation.

  7. Photometric correction for an optical CCD-based system based on the sparsity of an eight-neighborhood gray gradient.

    PubMed

    Zhang, Yuzhong; Zhang, Yan

    2016-07-01

    In an optical measurement and analysis system based on a CCD, due to the existence of optical vignetting and natural vignetting, photometric distortion, in which the intensity falls off away from the image center, affects the subsequent processing and measuring precision severely. To deal with this problem, an easy and straightforward method used for photometric distortion correction is presented in this paper. This method introduces a simple polynomial fitting model of the photometric distortion function and employs a particle swarm optimization algorithm to get these model parameters by means of a minimizing eight-neighborhood gray gradient. Compared with conventional calibration methods, this method can obtain the profile information of photometric distortion from only a single common image captured by the optical CCD-based system, with no need for a uniform luminance area source used as a standard reference source and relevant optical and geometric parameters in advance. To illustrate the applicability of this method, numerical simulations and photometric distortions with different lens parameters are evaluated using this method in this paper. Moreover, the application example of temperature field correction for casting billets also demonstrates the effectiveness of this method. The experimental results show that the proposed method is able to achieve the maximum absolute error for vignetting estimation of 0.0765 and the relative error for vignetting estimation from different background images of 3.86%.
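    A simplified version of the polynomial vignetting model can be fitted directly when a flat reference scene is available; the paper instead estimates the parameters with particle swarm optimization by minimizing the eight-neighborhood gray gradient of an ordinary image, so no flat reference is needed. The even-order radial model, orders, and values below are illustrative assumptions.

```python
import numpy as np

# Synthetic flat scene attenuated by an even-order radial falloff
# I(r) = I0 * (1 + a2*r^2 + a4*r^4), a common vignetting model.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)

scene = np.full((h, w), 0.8)
true_falloff = 1.0 - 0.3 * r**2 + 0.05 * r**4
img = scene * true_falloff

# Least-squares fit of c0 + c2*r^2 + c4*r^4 to the observed intensities,
# then divide out the normalized falloff profile.
A = np.stack([np.ones(r.size), r.ravel() ** 2, r.ravel() ** 4], axis=1)
c, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
corrected = img / (A @ c / c[0]).reshape(h, w)

print(np.max(np.abs(corrected - scene)))  # ≈ 0 for this flat test scene
```

    On textured images the direct fit would absorb scene content into the falloff estimate, which is exactly the failure the gradient-sparsity objective in the paper is designed to avoid.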

  8. Weierstrass method for quaternionic polynomial root-finding

    NASA Astrophysics Data System (ADS)

    Falcão, M. Irene; Miranda, Fernando; Severino, Ricardo; Soares, M. Joana

    2018-01-01

    Quaternions, introduced by Hamilton in 1843 as a generalization of complex numbers, have found, in more recent years, a wealth of applications in a number of different areas which motivated the design of efficient methods for numerically approximating the zeros of quaternionic polynomials. In fact, one can find in the literature recent contributions to this subject based on the use of complex techniques, but numerical methods relying on quaternion arithmetic remain scarce. In this paper we propose a Weierstrass-like method for finding simultaneously all the zeros of unilateral quaternionic polynomials. The convergence analysis and several numerical examples illustrating the performance of the method are also presented.
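    The classical complex-coefficient analogue, the Weierstrass (Durand-Kerner) iteration, conveys the simultaneous-update idea; the quaternionic generalization of the paper is not reproduced here, and the starting points are the standard distinct-powers choice.

```python
import numpy as np

def weierstrass(coeffs, iters=100):
    """Durand-Kerner iteration for a monic complex polynomial (highest degree first)."""
    n = len(coeffs) - 1
    z = (0.4 + 0.9j) ** np.arange(n)   # standard distinct starting points
    for _ in range(iters):
        p = np.polyval(coeffs, z)
        # Each root estimate is corrected by p(z_i) / prod_{j != i} (z_i - z_j).
        denom = np.array([np.prod(z[i] - np.delete(z, i)) for i in range(n)])
        z = z - p / denom
    return z

roots = weierstrass([1.0, 0.0, 0.0, -1.0])   # z^3 - 1
print(np.sort_complex(roots))  # the three cube roots of unity
```

    All roots are refined at once, so no deflation step is needed; the paper's contribution is making this update well defined in the noncommutative quaternion setting.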

  9. Visualizing tributyltin (TBT) in bacterial aggregates by specific rhodamine-based fluorescent probes.

    PubMed

    Jin, Xilang; Hao, Likai; She, Mengyao; Obst, Martin; Kappler, Andreas; Yin, Bing; Liu, Ping; Li, Jianli; Wang, Lanying; Shi, Zhen

    2015-01-01

    Here we present the first examples of fluorescent and colorimetric probes for microscopic TBT imaging. The fluorescent probes are highly selective and sensitive to TBT and have successfully been applied for imaging of TBT in bacterial Rhodobacter ferrooxidans sp. strain SW2 cell-EPS-mineral aggregates and in cell suspensions of the marine cyanobacterium Synechococcus PCC 7002 by using confocal laser scanning microscopy. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Image distortion analysis using polynomial series expansion.

    PubMed

    Baggenstoss, Paul M

    2004-11-01

    In this paper, we derive a technique for analysis of local distortions which affect data in real-world applications. We focus on image data, specifically handwritten characters. Given a reference image and a distorted copy of it, the method is able to efficiently determine the rotations, translations, scaling, and any other distortions that have been applied. Because the method is robust, it is also able to estimate distortions for two unrelated images, thus determining the distortions that would be required to cause the two images to resemble each other. The approach is based on a polynomial series expansion using matrix powers of linear transformation matrices. The technique has applications in pattern recognition in the presence of distortions.

  11. Overview of Probe-based Storage Technologies

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Yang, Ci Hui; Wen, Jing; Gong, Si Di; Peng, Yuan Xiu

    2016-07-01

    The current world is in the age of big data, where the total amount of global digital data is growing at an incredible rate. This necessitates a drastic enhancement of the capacity of conventional data storage devices, which, however, suffer from their respective physical drawbacks. Under this circumstance, it is essential to aggressively explore and develop alternative promising mass storage devices, leading to the emergence of probe-based storage devices. In this paper, the physical principles and the current status of several different probe storage devices, including thermo-mechanical probe memory, magnetic probe memory, ferroelectric probe memory, and phase-change probe memory, are reviewed in detail, along with their respective merits and weaknesses. This paper provides an overview of the emerging probe memories as potential next-generation storage devices, so as to motivate the exploration of more innovative technologies to push forward the development of probe storage devices.

  12. Overview of Probe-based Storage Technologies.

    PubMed

    Wang, Lei; Yang, Ci Hui; Wen, Jing; Gong, Si Di; Peng, Yuan Xiu

    2016-12-01

    The current world is in the age of big data, where the total amount of global digital data is growing at an incredible rate. This necessitates a drastic enhancement of the capacity of conventional data storage devices, which, however, suffer from their respective physical drawbacks. Under this circumstance, it is essential to aggressively explore and develop alternative promising mass storage devices, leading to the emergence of probe-based storage devices. In this paper, the physical principles and the current status of several different probe storage devices, including thermo-mechanical probe memory, magnetic probe memory, ferroelectric probe memory, and phase-change probe memory, are reviewed in detail, along with their respective merits and weaknesses. This paper provides an overview of the emerging probe memories as potential next-generation storage devices, so as to motivate the exploration of more innovative technologies to push forward the development of probe storage devices.

  13. Correcting nonlinear drift distortion of scanning probe and scanning transmission electron microscopies from image pairs with orthogonal scan directions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ophus, Colin; Ciston, Jim; Nelson, Chris T.

    Unwanted motion of the probe with respect to the sample is a ubiquitous problem in scanning probe and scanning transmission electron microscopies, causing both linear and nonlinear artifacts in experimental images. We have designed a procedure to correct these artifacts by using orthogonal scan pairs to align each measurement line-by-line along the slow scan direction, by fitting contrast variation along the lines. We demonstrate the accuracy of our algorithm on both synthetic and experimental data and provide an implementation of our method.

  14. Correcting nonlinear drift distortion of scanning probe and scanning transmission electron microscopies from image pairs with orthogonal scan directions

    DOE PAGES

    Ophus, Colin; Ciston, Jim; Nelson, Chris T.

    2015-12-10

    Unwanted motion of the probe with respect to the sample is a ubiquitous problem in scanning probe and scanning transmission electron microscopies, causing both linear and nonlinear artifacts in experimental images. We have designed a procedure to correct these artifacts by using orthogonal scan pairs to align each measurement line-by-line along the slow scan direction, by fitting contrast variation along the lines. We demonstrate the accuracy of our algorithm on both synthetic and experimental data and provide an implementation of our method.

  15. SU-E-T-614: Derivation of Equations to Define Inflection Points and Its Analysis in Flattening Filter Free Photon Beams Based On the Principle of Polynomial function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muralidhar, K Raja; Komanduri, K

    2014-06-01

    Purpose: The objective of this work is to present a mechanism for calculating inflection points on profiles at various depths and field sizes, together with a study of the percentage of dose at the inflection points for various field sizes and depths for 6XFFF and 10XFFF energy profiles. Methods: The percentage of dose was plotted against the inflection points. Using a polynomial function, the authors formulated equations for calculating the inflection point on 6XFFF and 10XFFF profiles for all field sizes and at various depths. Results: In a flattening-filter-free beam, unlike in flattened beams, the dose at the inflection point of the profile decreases as field size increases for 10XFFF, whereas for 6XFFF the dose at the inflection point initially increases up to 10x10 cm2 and then decreases. The polynomial function was fitted for both FFF beams for all field sizes and depths. For small fields of less than 5x5 cm2 the inflection point and the FWHM are almost the same, and hence analysis can be done just as in FF beams. A change of 10% in dose can change the field width by 1 mm. Conclusion: Deriving equations based on a polynomial function to define the inflection point is a precise and accurate way to obtain the inflection-point dose on any FFF beam profile at any depth with better than 1% accuracy. Corrections can be made in future studies based on data from multiple machines. A brief study was also done to evaluate the inflection-point positions with respect to dose for various field sizes and depths for 6XFFF and 10XFFF energy profiles.
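    The polynomial-function approach to locating an inflection point can be sketched as fitting the penumbra region and finding a zero of the second derivative; the sigmoid profile below is synthetic stand-in data, not measured FFF beam data, and the fit order is an illustrative choice.

```python
import numpy as np

# Synthetic penumbra: a sigmoid rising from 0% to 100% relative dose,
# with its true inflection point at x = 0 (where the dose is 50%).
x = np.linspace(-2.0, 2.0, 81)                 # off-axis position (cm)
profile = 100.0 / (1.0 + np.exp(-3.0 * x))     # relative dose (%)

# Fit a polynomial and locate the real zero of its second derivative
# closest to the field edge.
coeffs = np.polyfit(x, profile, 7)
cands = np.roots(np.polyder(coeffs, 2))
real = [r.real for r in cands if abs(r.imag) < 1e-8 and -2 <= r.real <= 2]
x_infl = min(real, key=abs)
dose_at_infl = np.polyval(coeffs, x_infl)
print(x_infl, dose_at_infl)  # inflection near x = 0, at roughly 50% dose
```

    On measured FFF profiles the dose at the inflection point shifts with field size and depth, which is what the study's fitted equations are meant to capture.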

  16. Specific probe selection from landscape phage display library and its application in enzyme-linked immunosorbent assay of free prostate-specific antigen.

    PubMed

    Lang, Qiaolin; Wang, Fei; Yin, Long; Liu, Mingjun; Petrenko, Valery A; Liu, Aihua

    2014-03-04

    Probes against targets can be selected from the landscape phage library f8/8, displaying random octapeptides on the pVIII coat protein of the phage fd-tet and demonstrating many excellent features including multivalency, stability, and high structural homogeneity. Prostate-specific antigen (PSA) is usually determined by immunoassay, by which antibodies are frequently used as the specific probes. Herein we found that more advanced probes against free prostate-specific antigen (f-PSA) can be screened from the landscape phage library. Four phage monoclones were selected and identified by the specificity array. One phage clone displaying the fusion peptide ERNSVSPS showed good specificity and affinity to f-PSA and was used as a PSA capture probe in a sandwich enzyme-linked immunosorbent assay (ELISA) array. An anti-human PSA monoclonal antibody (anti-PSA mAb) was used to recognize the captured antigen, followed by horseradish peroxidase-conjugated antibody (HRP-IgG) and o-phenylenediamine, which were successively added to develop plate color. The ELISA conditions such as effect of blocking agent, coating buffer pH, phage concentration, antigen incubation time, and anti-PSA mAb dilution for phage ELISA were optimized. On the basis of the optimal phage ELISA conditions, the absorbance taken at 492 nm on a microplate reader was linear with f-PSA concentration within 0.825-165 ng/mL with a low limit of detection of 0.16 ng/mL. Thus, the landscape phage is an attractive biomolecular probe in bioanalysis.

  17. In Vivo Fluorescence Lifetime Imaging Monitors Binding of Specific Probes to Cancer Biomarkers

    PubMed Central

    Ardeshirpour, Yasaman; Chernomordik, Victor; Zielinski, Rafal; Capala, Jacek; Griffiths, Gary; Vasalatiy, Olga; Smirnov, Aleksandr V.; Knutson, Jay R.; Lyakhov, Ilya; Achilefu, Samuel; Gandjbakhche, Amir; Hassan, Moinuddin

    2012-01-01

    One of the most important factors in choosing a treatment strategy for cancer is characterization of biomarkers in cancer cells. Particularly, recent advances in Monoclonal Antibodies (MAB) as primary-specific drugs targeting tumor receptors show that their efficacy depends strongly on characterization of tumor biomarkers. Assessment of their status in individual patients would facilitate selection of an optimal treatment strategy, and the continuous monitoring of those biomarkers and their binding process to the therapy would provide a means for early evaluation of the efficacy of therapeutic intervention. In this study we have demonstrated for the first time in live animals that the fluorescence lifetime can be used to detect the binding of targeted optical probes to the extracellular receptors on tumor cells in vivo. The rationale was that fluorescence lifetime of a specific probe is sensitive to local environment and/or affinity to other molecules. We attached Near-InfraRed (NIR) fluorescent probes to Human Epidermal Growth Factor 2 (HER2/neu)-specific Affibody molecules and used our time-resolved optical system to compare the fluorescence lifetime of the optical probes that were bound and unbound to tumor cells in live mice. Our results show that the fluorescence lifetime changes in our model system delineate HER2 receptor bound from the unbound probe in vivo. Thus, this method is useful as a specific marker of the receptor binding process, which can open a new paradigm in the “image and treat” concept, especially for early evaluation of the efficacy of the therapy. PMID:22384092

  18. Phase demodulation method from a single fringe pattern based on correlation with a polynomial form.

    PubMed

    Robin, Eric; Valle, Valéry; Brémand, Fabrice

    2005-12-01

    The method presented extracts the demodulated phase from a single fringe pattern. Locally, the method approximates the fringe-pattern morphology with a mathematical model, and the degree of similarity between the model and the real fringe is estimated by minimizing a correlation function. To permit an optimization process, we have chosen a polynomial form as the mathematical model. However, the use of a polynomial form requires an identification procedure to retrieve the demodulated phase. This method, polynomial modulated phase correlation, is tested on several examples. Its performance, in terms of speed and precision, is demonstrated on very noisy fringe patterns.

  19. Novel Threshold Changeable Secret Sharing Schemes Based on Polynomial Interpolation.

    PubMed

    Yuan, Lifeng; Li, Mingchu; Guo, Cheng; Choo, Kim-Kwang Raymond; Ren, Yizhi

    2016-01-01

    After any distribution of secret-sharing shadows in a threshold changeable secret sharing scheme, the threshold may need to be adjusted to deal with changes in the security policy and adversary structure. For example, when employees leave the organization, it is not realistic to expect departing employees to ensure the security of their secret shadows. Therefore, in 2012, Zhang et al. proposed (t → t', n) and ({t1, t2,⋯, tN}, n) threshold changeable secret sharing schemes. However, their schemes suffer from a number of limitations, such as a strict limit on the threshold values, a large storage space requirement for secret shadows, and significant computation for constructing and recovering polynomials. To address these limitations, we propose two improved dealer-free threshold changeable secret sharing schemes. In our schemes, we construct polynomials to update secret shadows, and use a two-variable one-way function to resist collusion attacks and to secure the information stored by the combiner. We then demonstrate that our schemes can adjust the threshold safely.
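    The polynomial-interpolation building block that threshold changeable schemes extend is classic (t, n) secret sharing: the secret is the constant term of a random degree t-1 polynomial over a prime field, and any t shares recover it by Lagrange interpolation at zero. This is a minimal sketch of that standard primitive, not the paper's construction; the function names and the field modulus are illustrative.

    ```python
    import random

    # Minimal (t, n) threshold sharing by polynomial interpolation
    # (Shamir's scheme). Names and the prime modulus are illustrative.

    P = 2**127 - 1  # a Mersenne prime, used as the field modulus

    def make_shares(secret, t, n):
        """Embed the secret as the constant term of a random degree t-1 poly."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def recover(shares):
        """Lagrange interpolation at x = 0 recovers the constant term."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            # Modular inverse via Fermat's little theorem (P is prime).
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = make_shares(secret=424242, t=3, n=5)
    assert recover(shares[:3]) == 424242   # any 3 of the 5 shares suffice
    assert recover(shares[1:4]) == 424242
    ```

    Changing the threshold safely, as the record's schemes do, requires more machinery than this sketch (shadow updates and a two-variable one-way function); this only shows the interpolation core.
    
    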

  20. Hetero-site-specific X-ray pump-probe spectroscopy for femtosecond intramolecular dynamics

    DOE PAGES

    Picón, A.; Lehmann, C. S.; Bostedt, C.; ...

    2016-05-23

    New capabilities at X-ray free-electron laser facilities allow the generation of two-colour femtosecond X-ray pulses, opening the possibility of performing ultrafast studies of X-ray-induced phenomena. Specifically, the experimental realization of hetero-site-specific X-ray-pump/X-ray-probe spectroscopy is of special interest, in which an X-ray pump pulse is absorbed at one site within a molecule and an X-ray probe pulse follows the X-ray-induced dynamics at another site within the same molecule. In this paper, we show experimental evidence of a hetero-site pump-probe signal. By using two-colour 10-fs X-ray pulses, we are able to observe the femtosecond time dependence for the formation of F ions during the fragmentation of XeF2 molecules following X-ray absorption at the Xe site.

  1. Discrete Tchebycheff orthonormal polynomials and applications

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    Discrete Tchebycheff orthonormal polynomials offer a convenient way to make least squares polynomial fits of uniformly spaced discrete data. Computer programs to do so are simple and fast, and appear to be less affected by computer roundoff error, for the higher order fits, than conventional least squares programs. They are useful for any application of polynomial least squares fits: approximation of mathematical functions, noise analysis of radar data, and real time smoothing of noisy data, to name a few.
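    The least-squares idea in this record can be sketched with a modern stand-in: QR factorization of the Vandermonde matrix on uniformly spaced points yields an orthonormal discrete polynomial basis (the Gram, or discrete Tchebycheff, polynomials up to sign and scale), so the fit reduces to simple inner products. The function name and data are illustrative, not the NASA program.

    ```python
    import numpy as np

    # Build discrete orthonormal polynomials on N uniformly spaced points
    # via QR of the Vandermonde matrix, then fit by projection. With an
    # orthonormal basis the least-squares coefficients are inner products.

    def discrete_orthonormal_fit(x, y, degree):
        V = np.vander(x, degree + 1, increasing=True)   # columns 1, x, x^2, ...
        Q, R = np.linalg.qr(V)                          # Q columns orthonormal
        c = Q.T @ y                                     # projection coefficients
        return Q @ c, Q, c                              # fitted values, basis, coeffs

    x = np.linspace(0.0, 1.0, 21)
    y = 2.0 + 3.0 * x - x**2
    fit, Q, c = discrete_orthonormal_fit(x, y, degree=2)
    assert np.allclose(fit, y)   # a quadratic is reproduced exactly
    ```

    The numerical advantage the record mentions comes from this orthogonality: higher-order fits avoid the ill-conditioned normal equations of a naive monomial least squares.
    
    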

  2. Lattice Boltzmann method for bosons and fermions and the fourth-order Hermite polynomial expansion.

    PubMed

    Coelho, Rodrigo C V; Ilha, Anderson; Doria, Mauro M; Pereira, R M; Aibe, Valter Yoshihiko

    2014-04-01

    The Boltzmann equation with the Bhatnagar-Gross-Krook collision operator is considered for the Bose-Einstein and Fermi-Dirac equilibrium distribution functions. We show that the expansion of the microscopic velocity in terms of Hermite polynomials must be carried to the fourth order to correctly describe the energy equation. The viscosity and thermal coefficients, previously obtained by Yang et al. [Shi and Yang, J. Comput. Phys. 227, 9389 (2008); Yang and Hung, Phys. Rev. E 79, 056708 (2009)] through the Uehling-Uhlenbeck approach, are also derived here. Thus the construction of a lattice Boltzmann method for the quantum fluid is possible provided that the Bose-Einstein and Fermi-Dirac equilibrium distribution functions are expanded to fourth order in the Hermite polynomials.

  3. Wavefront reconstruction algorithm based on Legendre polynomials for radial shearing interferometry over a square area and error analysis.

    PubMed

    Kewei, E; Zhang, Chen; Li, Mengyang; Xiong, Zhao; Li, Dahai

    2015-08-10

    Based on Legendre polynomial expressions and their properties, this article proposes a new approach to reconstructing the distorted wavefront of a laser beam under test over a square area from the phase-difference data obtained by a radial shearing interferometry (RSI) system. Simulation and experimental results verify the reliability of the proposed method. The formula for the error-propagation coefficients is deduced for the case where the phase-difference data of the overlapping area contain random noise. A matrix T is proposed for evaluating the impact of high-order Legendre polynomial terms on the outcomes of the low-order terms due to mode aliasing; the magnitude of this impact can be estimated by calculating the F-norm of T. In addition, the relationships between shear ratio, sampling points, number of polynomial terms and noise-propagation coefficients, and between shear ratio, sampling points and the norm of the T matrix, are analyzed. These results provide a theoretical reference for the optimized design of radial shearing interferometry systems.
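    The fitting step this record relies on can be sketched as a least-squares projection of a wavefront onto a 2-D Legendre product basis, which is orthogonal over the square aperture where Zernike polynomials are not. The test wavefront, grid, and degree below are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    # Fit a wavefront over a square aperture with products P_i(x) P_j(y)
    # of Legendre polynomials, by linear least squares.

    deg = 3
    x = np.linspace(-1, 1, 41)
    X, Y = np.meshgrid(x, x)
    # Synthetic wavefront: tilt + defocus-like term + astigmatism-like term.
    wavefront = 0.5 * X + 0.2 * (1.5 * Y**2 - 0.5) + 0.1 * X * Y

    # Design matrix: one column per Legendre product term L_i(x) L_j(y).
    cols = []
    for i in range(deg + 1):
        for j in range(deg + 1):
            cij = np.zeros((deg + 1, deg + 1))
            cij[i, j] = 1.0
            cols.append(np.polynomial.legendre.legval2d(X, Y, cij).ravel())
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, wavefront.ravel(), rcond=None)
    assert np.allclose(A @ coeffs, wavefront.ravel())   # exact for this basis
    ```

    In an actual RSI reconstruction the left-hand side would be phase-difference data rather than the wavefront itself, with the difference operator folded into the design matrix.
    
    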

  4. On multiple orthogonal polynomials for discrete Meixner measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorokin, Vladimir N

    2010-12-07

    The paper examines two examples of multiple orthogonal polynomials generalizing orthogonal polynomials of a discrete variable, meaning thereby the Meixner polynomials. One example is bound up with a discrete Nikishin system, and the other leads to essentially new effects. The limit distribution of the zeros of polynomials is obtained in terms of logarithmic equilibrium potentials and in terms of algebraic curves. Bibliography: 9 titles.

  5. Stochastic Modeling of Flow-Structure Interactions using Generalized Polynomial Chaos

    DTIC Science & Technology

    2001-09-11

    [Abstract garbled in extraction; only fragments are recoverable.] The report discusses the Askey scheme, represented as a tree structure in figure 1 (following [24]), which classifies the hypergeometric orthogonal polynomials, and identifies the orthogonal polynomials associated with the generalized polynomial chaos. A cited reference: "Some basic hypergeometric polynomials that generalize Jacobi polynomials", Memoirs Amer. Math. Soc.

  6. Deciphering the Sensitivity and Specificity of the Implantable Doppler Probe in Free Flap Monitoring.

    PubMed

    Chang, Edward I; Ibrahim, Amir; Zhang, Hong; Liu, Jun; Nguyen, Alexander T; Reece, Gregory P; Yu, Peirong

    2016-03-01

    The efficacy of implantable Doppler probes remains an area of considerable debate. This study aims to decipher its sensitivity and specificity for free flap monitoring. A retrospective review of all free flaps with an implantable Doppler probe was performed between 2000 and 2012. A Cook-Swartz implantable Doppler probe was used in 439 patients (head and neck, n = 364; breast, n = 53; extremity, n = 22), and demonstrated equivalent sensitivity and specificity between flap types. The overall sensitivity and specificity were 77.8 percent and 88.4 percent, respectively. The artery was monitored in 267 patients, compared to venous monitoring in 101 patients, and in 71 patients both the artery and vein were monitored. Arterial monitoring had significantly greater specificity than venous monitoring, (94.2 percent versus 74.0 percent; p < 0.001), but no benefit was found in monitoring both the artery and the vein. Venous monitoring was significantly associated with reoperation (OR, 3.17; 95 percent CI, 1.70 to 5.91; p = 0.0003). There were 284 flaps that had a monitoring segment in addition to the implantable Doppler probe that significantly increased overall specificity for microvascular complications (OR, 17.71; 95 percent CI, 3.39 to 92.23; p = 0.0006). The specificity (90.5 percent versus 84.8 percent) and sensitivity (80.0 percent versus 66.7 percent) were significantly higher for clinically monitored flaps. The take-back rate was 13.0 percent, with positive findings in 59.6 percent, and 5.2 percent total flap loss. The use of implantable Doppler probes has high sensitivity and specificity for buried free flaps despite positive findings in less than 60 percent of take-backs. Monitoring the artery is recommended, but clinical examination remains the gold standard for flap monitoring. Diagnostic, IV.

  7. Gaussian quadrature for multiple orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Coussement, Jonathan; van Assche, Walter

    2005-06-01

    We study multiple orthogonal polynomials of type I and type II, which have orthogonality conditions with respect to r measures. These polynomials are connected by their recurrence relation of order r+1. First we show a relation with the eigenvalue problem of a banded lower Hessenberg matrix Ln, containing the recurrence coefficients. As a consequence, we easily find that the multiple orthogonal polynomials of type I and type II satisfy a generalized Christoffel-Darboux identity. Furthermore, we explain the notion of multiple Gaussian quadrature (for proper multi-indices), which is an extension of the theory of Gaussian quadrature for orthogonal polynomials and was introduced by Borges. In particular, we show that the quadrature points and quadrature weights can be expressed in terms of the eigenvalue problem of Ln.
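    For the classical single-measure case that multiple Gaussian quadrature generalizes, the nodes and weights already come from an eigenvalue problem of the recurrence matrix: the Golub-Welsch algorithm, where the symmetric tridiagonal Jacobi matrix plays the role of the banded Hessenberg matrix Ln of the record. A minimal Gauss-Legendre sketch (n and the integrand are illustrative):

    ```python
    import numpy as np

    # Golub-Welsch: Gauss-Legendre nodes are eigenvalues of the symmetric
    # tridiagonal recurrence (Jacobi) matrix; weights come from the first
    # component of each eigenvector, scaled by mu_0 = integral of weight = 2.

    n = 3
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)        # Legendre recurrence terms
    J = np.diag(beta, 1) + np.diag(beta, -1)    # Jacobi (recurrence) matrix
    nodes, vecs = np.linalg.eigh(J)
    weights = 2.0 * vecs[0, :] ** 2

    # n = 3 integrates polynomials up to degree 5 exactly on [-1, 1]:
    # integral of x^4 is 2/5.
    assert np.isclose((weights * nodes**4).sum(), 2.0 / 5.0)
    ```

    The multiple-orthogonality version in the record replaces this symmetric tridiagonal matrix with a banded lower Hessenberg matrix of order r+1 recurrence coefficients.
    
    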

  8. A benzothiazole-based fluorescent probe for hypochlorous acid detection and imaging in living cells

    NASA Astrophysics Data System (ADS)

    Nguyen, Khac Hong; Hao, Yuanqiang; Zeng, Ke; Fan, Shengnan; Li, Fen; Yuan, Suke; Ding, Xuejing; Xu, Maotian; Liu, You-Nian

    2018-06-01

    A benzothiazole-based turn-on fluorescent probe with a large Stokes shift (190 nm) has been developed for hypochlorous acid detection. The probe displays prompt fluorescence response for HClO with excellent selectivity over other reactive oxygen species as well as a low detection limit of 0.08 μM. The sensing mechanism involves the HClO-induced specific oxidation of oxime moiety of the probe to nitrile oxide, which was confirmed by HPLC-MS technique. Furthermore, imaging studies demonstrated that the probe is cell permeable and can be applied to detect HClO in living cells.

  9. Dirac(-Pauli), Fokker-Planck equations and exceptional Laguerre polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, Choon-Lin, E-mail: hcl@mail.tku.edu.tw

    2011-04-15

    Research Highlights: > Physical examples involving exceptional orthogonal polynomials. > Exceptional polynomials as deformations of classical orthogonal polynomials. > Exceptional polynomials from Darboux-Crum transformation. - Abstract: An interesting discovery in the last two years in the field of mathematical physics has been the exceptional X{sub l} Laguerre and Jacobi polynomials. Unlike the well-known classical orthogonal polynomials, which start with constant terms, these new polynomials have lowest degree l = 1, 2, ..., and yet they form a complete set with respect to some positive-definite measure. While the mathematical properties of these new X{sub l} polynomials deserve further analysis, it is also of interest to see if they play any role in physical systems. In this paper we indicate some physical models in which these new polynomials appear as the main part of the eigenfunctions. The systems we consider include the Dirac equations coupled minimally and non-minimally with some external fields, and the Fokker-Planck equations. The systems presented here have enlarged the number of exactly solvable physical systems known so far.

  10. Graphical Solution of Polynomial Equations

    ERIC Educational Resources Information Center

    Grishin, Anatole

    2009-01-01

    Graphing utilities, such as the ubiquitous graphing calculator, are often used in finding the approximate real roots of polynomial equations. In this paper the author offers a simple graphing technique that allows one to find all solutions of a polynomial equation (1) of arbitrary degree; (2) with real or complex coefficients; and (3) possessing…
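    As a numerical point of comparison for the graphical technique, companion-matrix root finding likewise handles an equation of arbitrary degree with real or complex coefficients. The example polynomial is illustrative:

    ```python
    import numpy as np

    # numpy.roots finds all roots (real and complex) as eigenvalues of the
    # companion matrix. Example: z^3 - 1 = 0 has one real root and a
    # complex-conjugate pair, all on the unit circle.
    roots = np.roots([1, 0, 0, -1])

    assert np.allclose(sorted(abs(roots)), [1.0, 1.0, 1.0])
    assert np.isclose(max(roots.real), 1.0)   # the real root z = 1
    ```

    The graphical method in the record offers geometric insight into where these roots lie; the companion-matrix route simply confirms them.
    
    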

  11. Thermodynamic characterization of networks using graph polynomials

    NASA Astrophysics Data System (ADS)

    Ye, Cheng; Comin, César H.; Peron, Thomas K. DM.; Silva, Filipi N.; Rodrigues, Francisco A.; Costa, Luciano da F.; Torsello, Andrea; Hancock, Edwin R.

    2015-09-01

    In this paper, we present a method for characterizing the evolution of time-varying complex networks by adopting a thermodynamic representation of network structure computed from a polynomial (or algebraic) characterization of graph structure. Commencing from a representation of graph structure based on a characteristic polynomial computed from the normalized Laplacian matrix, we show how the polynomial is linked to the Boltzmann partition function of a network. This allows us to compute a number of thermodynamic quantities for the network, including the average energy and entropy. Assuming that the system does not change volume, we can also compute the temperature, defined as the rate of change of entropy with energy. All three thermodynamic variables can be approximated using low-order Taylor series that can be computed using the traces of powers of the Laplacian matrix, avoiding explicit computation of the normalized Laplacian spectrum. These polynomial approximations allow a smoothed representation of the evolution of networks to be constructed in the thermodynamic space spanned by entropy, energy, and temperature. We show how these thermodynamic variables can be computed in terms of simple network characteristics, e.g., the total number of nodes and node degree statistics for nodes connected by edges. We apply the resulting thermodynamic characterization to real-world time-varying networks representing complex systems in the financial and biological domains. The study demonstrates that the method provides an efficient tool for detecting abrupt changes and characterizing different stages in network evolution.
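    The thermodynamic mapping described above can be sketched directly from the normalized Laplacian spectrum: treat the eigenvalues as energy levels of a Boltzmann ensemble and read off average energy and entropy. This is a hedged illustration of the general recipe; the temperature parameter beta and the example graph are illustrative, and the paper itself reaches these quantities through a characteristic-polynomial and trace-based Taylor route that avoids explicit diagonalization.

    ```python
    import numpy as np

    # Boltzmann-ensemble view of a graph: normalized Laplacian eigenvalues
    # as energy levels, partition function Z, average energy U, entropy S.

    def thermo(adjacency, beta=1.0):
        A = np.asarray(adjacency, dtype=float)
        d = A.sum(axis=1)
        Dinv = np.diag(1.0 / np.sqrt(d))
        L = np.eye(len(A)) - Dinv @ A @ Dinv     # normalized Laplacian
        lam = np.linalg.eigvalsh(L)              # its spectrum
        w = np.exp(-beta * lam)
        Z = w.sum()                              # partition function
        U = (lam * w).sum() / Z                  # average energy
        S = beta * U + np.log(Z)                 # entropy
        return U, S

    # 4-cycle: normalized Laplacian eigenvalues are 0, 1, 1, 2,
    # so U = 2/(e + 1) at beta = 1.
    ring = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
    U, S = thermo(ring)
    ```

    Tracking (U, S) and the derived temperature over time-stamped snapshots gives the smoothed evolution trajectory the record uses to detect abrupt network changes.
    
    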

  12. Approximating smooth functions using algebraic-trigonometric polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharapudinov, Idris I

    2011-01-14

    The problem under consideration is that of approximating classes of smooth functions by algebraic-trigonometric polynomials of the form p{sub n}(t)+{tau}{sub m}(t), where p{sub n}(t) is an algebraic polynomial of degree n and {tau}{sub m}(t)=a{sub 0}+{Sigma}{sub k=1}{sup m}a{sub k} cos k{pi}t + b{sub k} sin k{pi}t is a trigonometric polynomial of order m. The precise order of approximation by such polynomials in the classes W{sup r}{sub {infinity}}(M) and an upper bound for similar approximations in the class W{sup r}{sub p}(M) with 4/3 [...] polynomials which the author has introduced and investigated previously. Bibliography: 13 titles.

  13. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers

    PubMed Central

    Dobie, Robert A; Wojcik, Nancy C

    2015-01-01

    Objectives The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999–2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Methods Regression analysis was used to derive new age-correction values using audiometric data from the 1999–2006 US NHANES. The NHANES median better-ear thresholds were fit to simple polynomial equations, generating new age-correction values for both men and women for ages 20–75 years. Results The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20–75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61–75 years. Conclusions Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA.
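    The curve-fitting step described above reduces to fitting a low-order polynomial to median thresholds as a function of age and tabulating corrections relative to the youngest age. A minimal sketch with synthetic placeholder data (the threshold values are invented for illustration, not NHANES values):

    ```python
    import numpy as np

    # Fit a polynomial to "median hearing threshold vs age" and derive
    # age-correction values relative to age 20. Data are synthetic.

    ages = np.arange(20, 76)
    # Placeholder: thresholds rising slowly at first, faster with age.
    thresholds = 5.0 + 0.002 * (ages - 20) ** 2

    coeffs = np.polynomial.polynomial.polyfit(ages, thresholds, deg=3)
    fitted = np.polynomial.polynomial.polyval(ages, coeffs)

    # Age correction = fitted threshold at each age minus that at age 20.
    correction = fitted - fitted[0]
    assert correction[0] == 0.0                 # no correction at age 20
    assert np.all(np.diff(correction) > 0.0)    # corrections grow with age
    ```

    Smoothing via the fitted polynomial, rather than tabulating raw medians, is what keeps the resulting correction table monotone and free of sampling noise.
    
    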

  14. Long-time uncertainty propagation using generalized polynomial chaos and flow map composition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luchtenburg, Dirk M., E-mail: dluchten@cooper.edu; Brunton, Steven L.; Rowley, Clarence W.

    2014-10-01

    We present an efficient and accurate method for long-time uncertainty propagation in dynamical systems. Uncertain initial conditions and parameters are both addressed. The method approximates the intermediate short-time flow maps by spectral polynomial bases, as in the generalized polynomial chaos (gPC) method, and uses flow map composition to construct the long-time flow map. In contrast to the gPC method, this approach has spectral error convergence for both short and long integration times. The short-time flow map is characterized by small stretching and folding of the associated trajectories and hence can be well represented by a relatively low-degree basis. The composition of these low-degree polynomial bases then accurately describes the uncertainty behavior for long integration times. The key to the method is that the degree of the resulting polynomial approximation increases exponentially in the number of time intervals, while the number of polynomial coefficients either remains constant (for an autonomous system) or increases linearly in the number of time intervals (for a non-autonomous system). The findings are illustrated on several numerical examples including a nonlinear ordinary differential equation (ODE) with an uncertain initial condition, a linear ODE with an uncertain model parameter, and a two-dimensional, non-autonomous double gyre flow.

  15. Algorithms and applications of aberration correction and American standard-based digital evaluation in surface defects evaluating system

    NASA Astrophysics Data System (ADS)

    Wu, Fan; Cao, Pin; Yang, Yongying; Li, Chen; Chai, Huiting; Zhang, Yihui; Xiong, Haoliang; Xu, Wenlin; Yan, Kai; Zhou, Lin; Liu, Dong; Bai, Jian; Shen, Yibing

    2016-11-01

    The inspection of surface defects is an important part of optical surface quality evaluation. Based on microscopic scattering dark-field imaging, sub-aperture scanning and stitching, the Surface Defects Evaluating System (SDES) can acquire a full-aperture image of defects on an optical element's surface and then extract geometric size and position information of defects with image processing such as feature recognition. However, optical distortion in the SDES badly affects the inspection precision of surface defects. In this paper, a distortion correction algorithm based on a standard lattice pattern is proposed. Feature extraction, polynomial fitting and bilinear interpolation techniques, in combination with adjacent sub-aperture stitching, are employed to correct the optical distortion of the SDES automatically and with high accuracy. Subsequently, in order to evaluate digitally the surface defect information obtained from the SDES against the American military standard MIL-PRF-13830B, an American-standard-based digital evaluation algorithm is proposed, which mainly includes a judgment method for surface defect concentration. The judgment method establishes a weight region for each defect and calculates defect concentration from the overlap of weight regions. This algorithm takes full advantage of the convenience of matrix operations and has the merits of low complexity and fast execution, making it well suited to high-efficiency inspection of surface defects. Finally, various experiments are conducted and the correctness of these algorithms is verified. At present, these algorithms are in use in the SDES.

  16. Local collective motion analysis for multi-probe dynamic imaging and microrheology

    NASA Astrophysics Data System (ADS)

    Khan, Manas; Mason, Thomas G.

    2016-08-01

    Dynamical artifacts, such as mechanical drift, advection, and hydrodynamic flow, can adversely affect multi-probe dynamic imaging and passive particle-tracking microrheology experiments. Alternatively, active driving by molecular motors can cause interesting non-Brownian motion of probes in local regions. Existing drift-correction techniques, which require large ensembles of probes or fast temporal sampling, are inadequate for handling complex spatio-temporal drifts and non-Brownian motion of localized domains containing relatively few probes. Here, we report an analytical method based on local collective motion (LCM) analysis of as few as two probes for detecting the presence of non-Brownian motion and for accurately eliminating it to reveal the underlying Brownian motion. By calculating an ensemble-average, time-dependent, LCM mean square displacement (MSD) of two or more localized probes and comparing this MSD to constituent single-probe MSDs, we can identify temporal regimes during which either thermal or athermal motion dominates. Single-probe motion, when referenced relative to the moving frame attached to the multi-probe LCM trajectory, provides a true Brownian MSD after scaling by an appropriate correction factor that depends on the number of probes used in LCM analysis. We show that LCM analysis can be used to correct many different dynamical artifacts, including spatially varying drifts, gradient flows, cell motion, time-dependent drift, and temporally varying oscillatory advection, thereby offering a significant improvement over existing approaches.
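    The core of the LCM correction can be sketched with synthetic data: track N probes, subtract their collective (mean) trajectory to remove common drift, and rescale the residual MSD by N/(N-1), since removing the mean of N independent random walks also removes 1/N of each probe's own Brownian variance. This is a hedged illustration with a shared sinusoidal drift standing in for advection; all parameters are invented.

    ```python
    import numpy as np

    # N synthetic 1-D random walks sharing a common athermal drift.
    rng = np.random.default_rng(0)
    N, T, step = 4, 20000, 0.1
    brownian = np.cumsum(rng.normal(0.0, step, size=(N, T)), axis=1)
    drift = 5.0 * np.sin(np.linspace(0.0, 10.0, T))   # common athermal motion
    tracks = brownian + drift                          # what the camera sees

    lcm = tracks.mean(axis=0)                          # collective trajectory
    residual = tracks - lcm                            # drift removed exactly

    def msd_lag(x, lag):
        d = x[:, lag:] - x[:, :-lag]
        return (d ** 2).mean()

    lag = 10
    # Rescale: mean subtraction leaves a fraction (N-1)/N of the variance.
    corrected = msd_lag(residual, lag) * N / (N - 1)
    expected = lag * step ** 2                         # true Brownian MSD
    ```

    Without the N/(N-1) factor the corrected MSD systematically underestimates the Brownian motion, which is exactly the few-probe bias the record's analysis addresses.
    
    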

  17. Identification of salivary Lactobacillus rhamnosus species by DNA profiling and a specific probe.

    PubMed

    Richard, B; Groisillier, A; Badet, C; Dorignac, G; Lonvaud-Funel, A

    2001-03-01

    The Lactobacillus genus has been shown to be associated with the dental carious process, but little is known about the species related to the decay, although Lactobacillus rhamnosus is suspected to be the most implicated species. Conventional identification methods based on biochemical criteria lead to ambiguous results, since the Lactobacillus species found in saliva are phenotypically close. To clarify the role of this genus in the evolution of carious disease, this work aimed to find a rapid and reliable method for identifying the L. rhamnosus species. Methods based on hybridization with DNA probes and DNA amplification by PCR were used. The dominant salivary Lactobacillus species (reference strains from the ATCC) were selected for this purpose as well as some wild strains isolated from children's saliva. DNA profiling using semirandom polymorphic DNA amplification (semi-RAPD) generated specific patterns for L. rhamnosus ATCC 7469. The profiles of all L. rhamnosus strains tested were similar and could be grouped; these strains shared four common fragments. Wild strains first identified with classic methods shared common patterns with the L. rhamnosus species and could be reclassified. One fragment of the profile was purified, cloned, used as a probe and found to be specific to the L. rhamnosus species. These results may help to localize this species within its ecological niche and to elucidate the progression of the carious process.

  18. New frontier in regenerative medicine: site-specific gene correction in patient-specific induced pluripotent stem cells.

    PubMed

    Garate, Zita; Davis, Brian R; Quintana-Bustamante, Oscar; Segovia, Jose C

    2013-06-01

    Advances in cell and gene therapy are opening up new avenues for regenerative medicine. Because of their acquired pluripotency, human induced pluripotent stem cells (hiPSCs) are a promising source of autologous cells for regenerative medicine. They show unlimited self-renewal while retaining the ability, in principle, to differentiate into any cell type of the human body. Since Yamanaka and colleagues first reported the generation of hiPSCs in 2007, significant efforts have been made to understand the reprogramming process and to generate hiPSCs with potential for clinical use. On the other hand, the development of gene-editing platforms to increase homologous recombination efficiency, namely DNA nucleases (zinc finger nucleases, TAL effector nucleases, and meganucleases), is making the application of locus-specific gene therapy in human cells an achievable goal. The generation of patient-specific hiPSC, together with gene correction by homologous recombination, will potentially allow for their clinical application in the near future. In fact, reports have shown targeted gene correction through DNA-Nucleases in patient-specific hiPSCs. Various technologies have been described to reprogram patient cells and to correct these patient hiPSCs. However, no approach has been clearly more efficient and safer than the others. In addition, there are still significant challenges for the clinical application of these technologies, such as inefficient differentiation protocols, genetic instability resulting from the reprogramming process and hiPSC culture itself, the efficacy and specificity of the engineered DNA nucleases, and the overall homologous recombination efficiency. To summarize advances in the generation of gene corrected patient-specific hiPSCs, this review focuses on the available technological platforms, including their strengths and limitations regarding future therapeutic use of gene-corrected hiPSCs.

  19. Phase unwrapping algorithm using polynomial phase approximation and linear Kalman filter.

    PubMed

    Kulkarni, Rishikesh; Rastogi, Pramod

    2018-02-01

    A noise-robust phase unwrapping algorithm is proposed based on state space analysis and polynomial phase approximation using wrapped phase measurement. The true phase is approximated as a two-dimensional first order polynomial function within a small sized window around each pixel. The estimates of polynomial coefficients provide the measurement of phase and local fringe frequencies. A state space representation of spatial phase evolution and the wrapped phase measurement is considered with the state vector consisting of polynomial coefficients as its elements. Instead of using the traditional nonlinear Kalman filter for the purpose of state estimation, we propose to use the linear Kalman filter operating directly with the wrapped phase measurement. The adaptive window width is selected at each pixel based on the local fringe density to strike a balance between the computation time and the noise robustness. In order to retrieve the unwrapped phase, either a line-scanning approach or a quality guided strategy of pixel selection is used depending on the underlying continuous or discontinuous phase distribution, respectively. Simulation and experimental results are provided to demonstrate the applicability of the proposed method.
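    For contrast with the Kalman-filter approach in this record, the noise-free 1-D analogue of the problem is simply undoing the modulo-2π wrap, which numpy.unwrap does directly. The smooth synthetic phase below is illustrative; real noisy 2-D data is where the state-space method earns its keep.

    ```python
    import numpy as np

    # Baseline 1-D unwrapping: with noise-free data and phase increments
    # below pi per sample, numpy.unwrap recovers the true phase exactly.
    true_phase = 0.1 * np.linspace(0.0, 12.0, 200) ** 1.5   # smooth, increasing
    wrapped = np.angle(np.exp(1j * true_phase))             # wrap into (-pi, pi]
    unwrapped = np.unwrap(wrapped)

    assert np.allclose(unwrapped, true_phase)
    ```

    Noise breaks the increments-below-pi assumption this baseline relies on, which motivates the polynomial phase model and linear Kalman filtering of the record.
    
    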

  20. Fluorescent-protein-based probes: general principles and practices.

    PubMed

    Ai, Hui-Wang

    2015-01-01

    An important application of fluorescent proteins is to derive genetically encoded fluorescent probes that can actively respond to cellular dynamics such as pH change, redox signaling, calcium oscillation, enzyme activities, and membrane potential. Despite the large, diverse group of fluorescent-protein-based probes, a few basic principles have been established and are shared by most of these probes. In this article, the focus is on these general principles and strategies that guide the development of fluorescent-protein-based probes. A few examples are provided in each category to illustrate the corresponding principles. Since these principles are quite straightforward, others may adapt them to create fluorescent probes for their own interests. Hopefully, the development of the ever-growing family of fluorescent-protein-based probes will no longer be limited to a small number of laboratories specialized in sensor development, so that biological studies will be better assisted by genetically encoded sensors.

  1. Recurrence approach and higher order polynomial algebras for superintegrable monopole systems

    NASA Astrophysics Data System (ADS)

    Hoque, Md Fazlul; Marquette, Ian; Zhang, Yao-Zhong

    2018-05-01

    We revisit the MIC-harmonic oscillator in flat space with monopole interaction and derive the polynomial algebra satisfied by the integrals of motion and its energy spectrum using the ad hoc recurrence approach. We introduce a superintegrable monopole system in a generalized Taub-Newman-Unti-Tamburino (NUT) space. The Schrödinger equation of this model is solved in spherical coordinates in the framework of Stäckel transformation. It is shown that wave functions of the quantum system can be expressed in terms of the product of Laguerre and Jacobi polynomials. We construct ladder and shift operators based on the corresponding wave functions and obtain the recurrence formulas. By applying these recurrence relations, we construct higher order algebraically independent integrals of motion. We show that the integrals form a polynomial algebra. We construct the structure functions of the polynomial algebra and obtain the degenerate energy spectra of the model.

  2. Chebyshev polynomials in the spectral Tau method and applications to Eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Johnson, Duane

    1996-01-01

    Chebyshev spectral methods have received much attention recently as a technique for the rapid solution of ordinary differential equations. The technique also works well for solving linear eigenvalue problems. Specific attention is given to the properties and algebra of Chebyshev polynomials, the use of Chebyshev polynomials in spectral methods, and the recurrence relationships that are developed. These formulas and equations are then applied to several examples, which are worked out in detail. The appendix contains an example FORTRAN program used in solving an eigenvalue problem.
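    The three-term recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x) that underlies such spectral constructions can be evaluated directly; a minimal sketch (function name and interface are illustrative):

    ```python
    import numpy as np

    def chebyshev_T(n, x):
        """Evaluate T_n(x) via the recurrence T_{k+1} = 2x T_k - T_{k-1}."""
        t_prev, t_cur = np.ones_like(x), x
        if n == 0:
            return t_prev
        for _ in range(n - 1):
            t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev
        return t_cur
    ```

    The identity T_n(cos θ) = cos(nθ) gives a quick correctness check.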

  3. Quantum Hurwitz numbers and Macdonald polynomials

    NASA Astrophysics Data System (ADS)

    Harnad, J.

    2016-11-01

    Parametric families in the center Z(C[Sn]) of the group algebra of the symmetric group are obtained by identifying the indeterminates in the generating function for Macdonald polynomials as commuting Jucys-Murphy elements. Their eigenvalues provide coefficients in the double Schur function expansion of 2D Toda τ-functions of hypergeometric type. Expressing these in the basis of products of power sum symmetric functions, the coefficients may be interpreted geometrically as parametric families of quantum Hurwitz numbers, enumerating weighted branched coverings of the Riemann sphere. Combinatorially, they give quantum weighted sums over paths in the Cayley graph of Sn generated by transpositions. Dual pairs of bases for the algebra of symmetric functions with respect to the scalar product in which the Macdonald polynomials are orthogonal provide both the geometrical and combinatorial significance of these quantum weighted enumerative invariants.

  4. Estimating thermal diffusivity and specific heat from needle probe thermal conductivity data

    USGS Publications Warehouse

    Waite, W.F.; Gilbert, L.Y.; Winters, W.J.; Mason, D.H.

    2006-01-01

    Thermal diffusivity and specific heat can be estimated from thermal conductivity measurements made using a standard needle probe and a suitably high data acquisition rate. Thermal properties are calculated from the measured temperature change in a sample subjected to heating by a needle probe. Accurate thermal conductivity measurements are obtained from a linear fit to many tens or hundreds of temperature change data points. In contrast, thermal diffusivity calculations require a nonlinear fit to the measured temperature change occurring in the first few tenths of a second of the measurement, resulting in a lower accuracy than that obtained for thermal conductivity. Specific heat is calculated from the ratio of thermal conductivity to diffusivity, and thus can have an uncertainty no better than that of the diffusivity estimate. Our thermal conductivity measurements of ice Ih and of tetrahydrofuran (THF) hydrate, made using a 1.6 mm outer diameter needle probe and a data acquisition rate of 18.2 points/s, agree with published results. Our thermal diffusivity and specific heat results reproduce published results within 25% for ice Ih and 3% for THF hydrate. © 2006 American Institute of Physics.
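    At late times the standard line-source model gives ΔT(t) ≈ (q / 4πk) ln t + C, so conductivity follows from a linear fit of temperature change against ln t. A minimal sketch with synthetic, noise-free data (the power and conductivity values are illustrative assumptions, not the paper's measurements):

    ```python
    import numpy as np

    q = 2.0       # heater power per unit length, W/m (assumed)
    k_true = 2.2  # sample thermal conductivity, W/(m K) (assumed)
    t = np.linspace(1.0, 30.0, 200)                  # time, s
    dT = q / (4 * np.pi * k_true) * np.log(t) + 0.1  # late-time line-source model

    slope = np.polyfit(np.log(t), dT, 1)[0]  # linear fit of dT against ln(t)
    k_est = q / (4 * np.pi * slope)          # recover conductivity from slope
    ```

    The early-time nonlinear fit needed for diffusivity is the harder step the abstract describes; this sketch covers only the conductivity estimate.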

  5. Development of Fluorescent Protein Probes Specific for Parallel DNA and RNA G-Quadruplexes.

    PubMed

    Dang, Dung Thanh; Phan, Anh Tuân

    2016-01-01

    We have developed fluorescent protein probes specific for parallel G-quadruplexes by attaching cyan fluorescent protein to the G-quadruplex-binding motif of the RNA helicase RHAU. Fluorescent probes containing RHAU peptide fragments of different lengths were constructed, and their binding to G-quadruplexes was characterized. The selective recognition and discrimination of G-quadruplex topologies by the fluorescent protein probes was easily detected by the naked eye or by conventional gel imaging. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. 77 FR 3636 - Federal Acquisition Regulation; Brand-Name Specifications; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-25

    ... 26] RIN 9000-AK55 Federal Acquisition Regulation; Brand-Name Specifications; Correction AGENCY... Regulation (FAR) to implement the Office of Management and Budget memoranda on brand-name specifications, FAR Case 2005-037, Brand-Name Specifications, which published in the Federal Register at 77 FR 189 on...

  7. Novel Threshold Changeable Secret Sharing Schemes Based on Polynomial Interpolation

    PubMed Central

    Li, Mingchu; Guo, Cheng; Choo, Kim-Kwang Raymond; Ren, Yizhi

    2016-01-01

    After any distribution of secret sharing shadows in a threshold changeable secret sharing scheme, the threshold may need to be adjusted to deal with changes in the security policy and adversary structure. For example, when employees leave the organization, it is not realistic to expect departing employees to ensure the security of their secret shadows. Therefore, in 2012, Zhang et al. proposed (t → t′, n) and ({t1, t2,⋯, tN}, n) threshold changeable secret sharing schemes. However, their schemes suffer from a number of limitations, such as a strict limit on the threshold values, large storage space requirements for secret shadows, and significant computation for constructing and recovering polynomials. To address these limitations, we propose two improved dealer-free threshold changeable secret sharing schemes. In our schemes, we construct polynomials to update secret shadows, and use a two-variable one-way function to resist collusion attacks and secure the information stored by the combiner. We then demonstrate that our schemes can adjust the threshold safely. PMID:27792784
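    The polynomial-interpolation machinery such schemes build on is Shamir's classic (t, n) construction: the secret is the constant term of a random degree t-1 polynomial over a prime field, and any t shares recover it by Lagrange interpolation at x = 0. A minimal sketch of that baseline (the field size and parameters are illustrative; this is not the authors' threshold-changeable scheme):

    ```python
    import random

    P = 2**13 - 1  # a small prime field for illustration (8191)

    def make_shares(secret, t, n, rng=random.Random(42)):
        """Split secret into n shares, any t of which recover it."""
        coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
        def f(x):  # Horner evaluation of the sharing polynomial mod P
            acc = 0
            for c in reversed(coeffs):
                acc = (acc * x + c) % P
            return acc
        return [(i, f(i)) for i in range(1, n + 1)]

    def recover(shares):
        """Lagrange interpolation at x = 0 over GF(P)."""
        secret = 0
        for xj, yj in shares:
            num = den = 1
            for xm, _ in shares:
                if xm != xj:
                    num = num * (-xm) % P
                    den = den * (xj - xm) % P
            secret = (secret + yj * num * pow(den, -1, P)) % P
        return secret
    ```

    The three-argument `pow` (modular inverse) requires Python 3.8+.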

  8. The challenges of using fluorescent probes to detect and quantify specific reactive oxygen species in living cells.

    PubMed

    Winterbourn, Christine C

    2014-02-01

    Small molecule fluorescent probes are vital tools for monitoring reactive oxygen species in cells. The types of probe available, the extent to which they are specific or quantitative and complications in interpreting results are discussed. Most commonly used probes (e.g. dihydrodichlorofluorescein, dihydrorhodamine) have some value in providing information on changes to the redox environment of the cell, but they are not specific for any one oxidant and the response is affected by numerous chemical interactions and not just increased oxidant generation. These probes generate the fluorescent end product by a free radical mechanism, and to react with hydrogen peroxide they require a metal catalyst. Probe radicals can react with oxygen, superoxide, and various antioxidant molecules, all of which influence the signal. Newer generation probes such as boronates act by a different mechanism in which nucleophilic attack by the oxidant on a blocking group releases masked fluorescence. Boronates react with hydrogen peroxide, peroxynitrite, hypochlorous acid and in some cases superoxide, so are selective but not specific. They react with hydrogen peroxide very slowly, and kinetic considerations raise questions about how the reaction could occur in cells. Data from oxidant-sensitive fluorescent probes can provide some information on cellular redox activity but is widely misinterpreted. Recently developed non-redox probes show promise but are not generally available and more information on specificity and cellular reactions is needed. We do not yet have probes that can quantify cellular production of specific oxidants. This article is part of a Special Issue entitled Current methods to study reactive oxygen species - pros and cons and biophysics of membrane proteins. Guest Editor: Christine Winterbourn. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. A Comparative Study on Preprocessing Techniques in Diabetic Retinopathy Retinal Images: Illumination Correction and Contrast Enhancement

    PubMed Central

    Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza

    2015-01-01

    To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating coefficients of variation. The dividing method using the median filter to estimate background illumination showed the lowest coefficient of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, presented good results based on their low coefficients of variation. Contrast-limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same accuracy, and has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using the median filter to estimate background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation. PMID:25709940
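    Plain global histogram equalization, a simpler cousin of the CLAHE technique evaluated above, illustrates the CDF-remapping idea behind these contrast enhancers (a minimal sketch; the function name and uint8 assumption are illustrative):

    ```python
    import numpy as np

    def hist_equalize(img):
        """Global histogram equalization for a uint8 grayscale image:
        map each intensity through the normalized cumulative histogram."""
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = np.cumsum(hist).astype(float)
        cdf_min = cdf[cdf > 0].min()
        lut = np.clip(np.round(255 * (cdf - cdf_min) / (cdf[-1] - cdf_min)), 0, 255)
        return lut.astype(np.uint8)[img]
    ```

    CLAHE differs by applying this remapping in local tiles with a clip limit on the histogram, which is what preserves local vessel contrast in fundus images.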

  10. TARGETED PRINCIPAL COMPONENT ANALYSIS: A NEW MOTION ARTIFACT CORRECTION APPROACH FOR NEAR-INFRARED SPECTROSCOPY

    PubMed Central

    YÜCEL, MERYEM A.; SELB, JULIETTE; COOPER, ROBERT J.; BOAS, DAVID A.

    2014-01-01

    As near-infrared spectroscopy (NIRS) broadens its application area to different age and disease groups, motion artifacts in the NIRS signal due to subject movement are becoming an important challenge. Motion artifacts generally produce signal fluctuations that are larger than physiological NIRS signals, thus it is crucial to correct for them before obtaining an estimate of stimulus evoked hemodynamic responses. There are various methods for correction such as principal component analysis (PCA), wavelet-based filtering and spline interpolation. Here, we introduce a new approach to motion artifact correction, targeted principal component analysis (tPCA), which incorporates a PCA filter only on the segments of data identified as motion artifacts. It is expected that this will overcome the issues of filtering desired signals that plague standard PCA filtering of entire data sets. We compared the new approach with the most effective motion artifact correction algorithms on a set of data acquired simultaneously with a collodion-fixed probe (low motion artifact content) and a standard Velcro probe (high motion artifact content). Our results show that tPCA gives statistically better results in recovering hemodynamic response function (HRF) as compared to wavelet-based filtering and spline interpolation for the Velcro probe. It results in a significant reduction in mean-squared error (MSE) and significant enhancement in Pearson’s correlation coefficient to the true HRF. The collodion-fixed fiber probe with no motion correction performed better than the Velcro probe corrected for motion artifacts in terms of MSE and Pearson’s correlation coefficient. Thus, if the experimental study permits, the use of a collodion-fixed fiber probe may be desirable. If the use of a collodion-fixed probe is not feasible, then we suggest the use of tPCA in the processing of motion artifact contaminated data. PMID:25360181
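    The targeted idea, running a PCA filter only on the samples flagged as motion, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the array shapes, component count, and masking convention are all assumptions:

    ```python
    import numpy as np

    def tpca_correct(data, mask, n_comp=1):
        """Remove the top n_comp principal components from only the time
        samples flagged as motion (mask True); samples with mask False
        are left untouched."""
        seg = data[mask]                       # (n_motion_samples, n_channels)
        mean = seg.mean(axis=0)
        u, s, vt = np.linalg.svd(seg - mean, full_matrices=False)
        artifact = (u[:, :n_comp] * s[:n_comp]) @ vt[:n_comp]
        out = data.copy()
        out[mask] = seg - artifact             # subtract the dominant component(s)
        return out
    ```

    Restricting the SVD to the motion segments is what keeps the filter from also removing the evoked hemodynamic response present in the clean segments.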

  11. Direct calculation of modal parameters from matrix orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    El-Kafafy, Mahmoud; Guillaume, Patrick

    2011-10-01

    The object of this paper is to introduce a new technique to derive the global modal parameters (i.e., system poles) directly from estimated matrix orthogonal polynomials. This contribution generalizes the results given in Rolain et al. (1994) [5] and Rolain et al. (1995) [6] for scalar orthogonal polynomials to multivariable (matrix) orthogonal polynomials for multiple-input multiple-output (MIMO) systems. Using orthogonal polynomials improves the numerical properties of the estimation process. However, the derivation of the modal parameters from the orthogonal polynomials is in general ill-conditioned if not handled properly; in particular, the transformation of the coefficients from an orthogonal polynomial basis to a power polynomial basis is known to be ill-conditioned. In this paper a new approach is proposed to compute the system poles directly from the multivariable orthogonal polynomials, so that high-order models can be used without numerical problems. The proposed method is compared with existing methods (Van Der Auweraer and Leuridan (1987) [4]; Chen and Xu (2003) [7]) using both simulated and experimental data.

  12. On the derivatives of unimodular polynomials

    NASA Astrophysics Data System (ADS)

    Nevai, P.; Erdélyi, T.

    2016-04-01

    Let D be the open unit disk of the complex plane; its boundary, the unit circle, is denoted by \partial D. Let \mathscr{P}_n^c denote the set of all algebraic polynomials of degree at most n with complex coefficients. For \lambda \ge 0, let
    \mathscr{K}_n^\lambda \stackrel{\mathrm{def}}{=} \{ P_n : P_n(z) = \sum_{k=0}^{n} a_k k^\lambda z^k, \ a_k \in \mathbb{C}, \ |a_k| = 1 \} \subset \mathscr{P}_n^c.
    The class \mathscr{K}_n^0 is often called the collection of all (complex) unimodular polynomials of degree n. Given a sequence (\varepsilon_n) of positive numbers tending to 0, we say that a sequence (P_n) of polynomials P_n \in \mathscr{K}_n^\lambda is \{\lambda, (\varepsilon_n)\}-ultraflat if
    (1-\varepsilon_n) \frac{n^{\lambda+1/2}}{\sqrt{2\lambda+1}} \le |P_n(z)| \le (1+\varepsilon_n) \frac{n^{\lambda+1/2}}{\sqrt{2\lambda+1}}, \qquad z \in \partial D, \quad n \in \mathbb{N}_0.
    Although we do not know, in general, whether or not \{\lambda, (\varepsilon_n)\}-ultraflat sequences of polynomials P_n \in \mathscr{K}_n^\lambda exist for each fixed \lambda > 0, we make an effort to prove various interesting properties of them. These allow us to conclude that there are no sequences (P_n) of either conjugate, or plain, or skew reciprocal unimodular polynomials P_n \in \mathscr{K}_n^0 such that (Q_n), with Q_n(z) \stackrel{\mathrm{def}}{=} zP_n'(z) + 1, is a \{1, (\varepsilon_n)\}-ultraflat sequence of polynomials. Bibliography: 18 titles.

  13. A Formally Verified Conflict Detection Algorithm for Polynomial Trajectories

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony; Munoz, Cesar

    2015-01-01

    In air traffic management, conflict detection algorithms are used to determine whether or not aircraft are predicted to lose horizontal and vertical separation minima within a time interval assuming a trajectory model. In the case of linear trajectories, conflict detection algorithms have been proposed that are both sound, i.e., they detect all conflicts, and complete, i.e., they do not present false alarms. In general, for arbitrary nonlinear trajectory models, it is possible to define detection algorithms that are either sound or complete, but not both. This paper considers the case of nonlinear aircraft trajectory models based on polynomial functions. In particular, it proposes a conflict detection algorithm that precisely determines whether, given a lookahead time, two aircraft flying polynomial trajectories are in conflict. That is, it has been formally verified that, assuming that the aircraft trajectories are modeled as polynomial functions, the proposed algorithm is both sound and complete.
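    For polynomial trajectories the sound-and-complete property can rest on root-finding: the squared horizontal separation minus D² is itself a polynomial, so checking its sign at the interval endpoints, its real roots, and the midpoints between them decides conflict exactly (up to numerical root accuracy). A hedged sketch of this idea, not the paper's formally verified algorithm; coefficient ordering follows numpy's highest-degree-first convention:

    ```python
    import numpy as np

    def in_conflict(px, py, qx, qy, D, T):
        """True iff the horizontal separation of two polynomial trajectories
        (px, py) and (qx, qy) drops below D somewhere in [0, T]."""
        dx = np.polysub(px, qx)
        dy = np.polysub(py, qy)
        g = np.polysub(np.polyadd(np.polymul(dx, dx), np.polymul(dy, dy)),
                       np.array([D ** 2]))
        # Candidate times: endpoints, real roots of g in [0, T], and
        # midpoints between consecutive candidates (to catch sign dips).
        cands = sorted([0.0, T] + [r.real for r in np.roots(g)
                                   if abs(r.imag) < 1e-9 and 0.0 <= r.real <= T])
        pts = cands + [(a + b) / 2 for a, b in zip(cands, cands[1:])]
        return any(np.polyval(g, s) < -1e-12 for s in pts)
    ```

    Between consecutive roots the polynomial cannot change sign, which is why evaluating at midpoints suffices; the small negative tolerance avoids flagging trajectories that merely graze the separation boundary.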

  14. Generalized neurofuzzy network modeling algorithms using Bézier-Bernstein polynomial functions and additive decomposition.

    PubMed

    Hong, X; Harris, C J

    2000-01-01

    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The algorithm is generalized in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. The construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like the B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input-space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The new modeling network is based on the additive decomposition approach together with two separate basis function formation approaches, for univariate and bivariate Bézier-Bernstein polynomial functions, used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data based modeling approach.

  15. Use of localized performance-based functions for the specification and correction of hybrid imaging systems

    NASA Astrophysics Data System (ADS)

    Lisson, Jerold B.; Mounts, Darryl I.; Fehniger, Michael J.

    1992-08-01

    Localized wavefront performance analysis (LWPA) is a system that allows the full utilization of the system optical transfer function (OTF) for the specification and acceptance of hybrid imaging systems. We show that LWPA directs correction toward the wavefront errors with the greatest impact on critical imaging spatial frequencies. This is accomplished by generating an imaging performance map, analogous to a map of the optic pupil error, using a local OTF. The resulting performance map, a function of transfer-function spatial frequency, is directly relatable to the primary viewing condition of the end user. In addition to optimizing quality for the viewer, the system has the potential to improve the matching of the optical and electronic bandpasses of the imager and to support more realistic acceptance specifications. The LWPA system generates a local optical quality factor (LOQF) in the form of a map analogous to that used for the presentation and evaluation of wavefront errors. In conjunction with the local phase transfer function (LPTF), it can be used for maximally efficient specification and correction of imaging-system pupil errors. The LOQF and LPTF are respectively equivalent to the global modulation transfer function (MTF) and phase transfer function (PTF) parts of the OTF; the LPTF is related to the difference of the averaged errors in separated regions of the pupil.

  16. The Gibbs Phenomenon for Series of Orthogonal Polynomials

    ERIC Educational Resources Information Center

    Fay, T. H.; Kloppers, P. Hendrik

    2006-01-01

    This note considers the four classes of orthogonal polynomials--Chebyshev, Hermite, Laguerre, Legendre--and investigates the Gibbs phenomenon at a jump discontinuity for the corresponding orthogonal polynomial series expansions. The perhaps unexpected thing is that the Gibbs constant that arises for each class of polynomials appears to be the same…

  17. Piecewise polynomial representations of genomic tracks.

    PubMed

    Tarabichi, Maxime; Detours, Vincent; Konopka, Tomasz

    2012-01-01

    Genomic data from micro-array and sequencing projects consist of associations of measured values to chromosomal coordinates. These associations can be thought of as functions in one dimension and can thus be stored, analyzed, and interpreted as piecewise-polynomial curves. We present a general framework for building piecewise polynomial representations of genome-scale signals and illustrate some of its applications via examples. We show that piecewise constant segmentation, a typical step in copy-number analyses, can be carried out within this framework for both array and (DNA) sequencing data offering advantages over existing methods in each case. Higher-order polynomial curves can be used, for example, to detect trends and/or discontinuities in transcription levels from RNA-seq data. We give a concrete application of piecewise linear functions to diagnose and quantify alignment quality at exon borders (splice sites). Our software (source and object code) for building piecewise polynomial models is available at http://sourceforge.net/projects/locsmoc/.

  18. A recursive algorithm for Zernike polynomials

    NASA Technical Reports Server (NTRS)

    Davenport, J. W.

    1982-01-01

    The analysis of a function defined on a rotationally symmetric system, with either a circular or an annular pupil, is discussed. To numerically analyze such systems it is typical to expand the given function in terms of a class of orthogonal polynomials. Because of their particular properties, the Zernike polynomials are especially suited for numerical calculations. A recursive algorithm is developed that can be used to generate the Zernike polynomials up to a given order. The algorithm is recursively defined over J, where R(J,N) is the Zernike polynomial of degree N obtained by orthogonalizing the sequence R(J), R(J+2), ..., R(J+2N) over (epsilon, 1). The terms in the preceding row, the (J-1) row, up to the N+1 term are needed for generating the (J,N)th term; thus, the algorithm generates an upper left-triangular table. The algorithm was implemented in a computer program together with the necessary supporting routines.
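    For comparison with the recursive table described above, the radial Zernike polynomials R_n^m on the full circular pupil can also be evaluated by the standard explicit factorial sum; a minimal sketch (an illustrative alternative, not the paper's recursive algorithm):

    ```python
    from math import factorial

    def zernike_radial(n, m, rho):
        """Radial Zernike polynomial R_n^m(rho) by the explicit sum
        sum_k (-1)^k (n-k)! / (k! ((n+m)/2-k)! ((n-m)/2-k)!) rho^(n-2k)."""
        m = abs(m)
        if (n - m) % 2:           # R_n^m vanishes when n - m is odd
            return 0.0
        return sum(
            (-1) ** k * factorial(n - k)
            / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
            * rho ** (n - 2 * k)
            for k in range((n - m) // 2 + 1)
        )
    ```

    The direct sum is fine at low order, but alternating factorials lose precision as n grows, which is the practical motivation for recurrence-based schemes like the one in this record.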

  19. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Yang, Yajun

    2017-01-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…
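    The comparison the article draws can be reproduced numerically: a degree-4 interpolant of e^x on [0, 1] beats the degree-4 Taylor polynomial at 0 over the whole interval (the node placement below is an illustrative choice):

    ```python
    import numpy as np

    # Degree-4 interpolating polynomial of exp at 5 equally spaced nodes on [0, 1].
    nodes = np.linspace(0.0, 1.0, 5)
    coeffs = np.polyfit(nodes, np.exp(nodes), 4)

    x = np.linspace(0.0, 1.0, 201)
    interp_err = np.max(np.abs(np.polyval(coeffs, x) - np.exp(x)))

    # Degree-4 Taylor polynomial of exp about x = 0 (highest power first).
    taylor = np.array([1 / 24, 1 / 6, 1 / 2, 1.0, 1.0])
    taylor_err = np.max(np.abs(np.polyval(taylor, x) - np.exp(x)))
    ```

    The Taylor error grows toward x = 1 (about 1e-2 there), while the interpolation error stays spread across the interval and roughly two orders of magnitude smaller.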

  20. Wavefront propagation from one plane to another with the use of Zernike polynomials and Taylor monomials.

    PubMed

    Dai, Guang-ming; Campbell, Charles E; Chen, Li; Zhao, Huawei; Chernyak, Dimitri

    2009-01-20

    In wavefront-driven vision correction, ocular aberrations are often measured on the pupil plane and the correction is applied on a different plane. The problem with this practice is that any changes undergone by the wavefront as it propagates between planes are not currently included in devising customized vision correction. With some valid approximations, we have developed an analytical foundation based on geometric optics in which Zernike polynomials are used to characterize the propagation of the wavefront from one plane to another. Both the boundary and the magnitude of the wavefront change after the propagation. Taylor monomials were used to realize the propagation because of their simple form for this purpose. The method we developed to identify changes in low-order aberrations was verified with the classical vertex correction formula. The method we developed to identify changes in high-order aberrations was verified with ZEMAX ray-tracing software. Although the method may not be valid for highly irregular wavefronts and it was only proven for wavefronts with low-order or high-order aberrations, our analysis showed that changes in the propagating wavefront are significant and should, therefore, be included in calculating vision correction. This new approach could be of major significance in calculating wavefront-driven vision correction whether by refractive surgery, contact lenses, intraocular lenses, or spectacles.
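    The classical vertex correction formula used for verification above is a one-liner: moving a correction of power P a distance d toward the eye gives an effective power P' = P / (1 - dP). A minimal sketch (sign conventions and the sample values are illustrative):

    ```python
    def vertex_corrected_power(power_d, d_m):
        """Classical vertex correction: equivalent power (diopters) when a
        correction of power_d diopters is moved d_m meters toward the eye."""
        return power_d / (1 - d_m * power_d)
    ```

    For example, a -5.00 D spectacle lens worn at a 12 mm vertex distance corresponds to roughly -4.72 D at the cornea, which is the low-order change the abstract's propagation method must reproduce.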

  1. Small-molecule-based protein-labeling technology in live cell studies: probe-design concepts and applications.

    PubMed

    Mizukami, Shin; Hori, Yuichiro; Kikuchi, Kazuya

    2014-01-21

    The use of genetic engineering techniques allows researchers to combine functional proteins with fluorescent proteins (FPs) to produce fusion proteins that can be visualized in living cells, tissues, and animals. However, several limitations of FPs, such as slow maturation kinetics or issues with photostability under laser illumination, have led researchers to examine new technologies beyond FP-based imaging. Recently, new protein-labeling technologies using protein/peptide tags and tag-specific probes have attracted increasing attention. Although several protein-labeling systems are commercially available, researchers continue to work on addressing some of the limitations of this technology. To reduce the level of background fluorescence from unlabeled probes, researchers have pursued fluorogenic labeling, in which the labeling probes do not fluoresce until the target proteins are labeled. In this Account, we review two different fluorogenic protein-labeling systems that we have recently developed. First we give a brief history of protein labeling technologies and describe the challenges involved in protein labeling. In the second section, we discuss a fluorogenic labeling system based on a noncatalytic mutant of β-lactamase, which forms specific covalent bonds with β-lactam antibiotics such as ampicillin or cephalosporin. Based on fluorescence (or Förster) resonance energy transfer and other physicochemical principles, we have developed several types of fluorogenic labeling probes. To extend the utility of this labeling system, we took advantage of a hydrophobic β-lactam prodrug structure to achieve intracellular protein labeling. We also describe a small protein tag, photoactive yellow protein (PYP)-tag, and its probes. By utilizing a quenching mechanism based on close intramolecular contact, we incorporated a turn-on switch into the probes for fluorogenic protein labeling. One of these probes allowed us to rapidly image a protein while avoiding washout. In

  2. Mimtags: the use of phage display technology to produce novel protein-specific probes.

    PubMed

    Ahmed, Nayyar; Dhanapala, Pathum; Sadli, Nadia; Barrow, Colin J; Suphioglu, Cenk

    2014-03-01

    In recent times the use of protein-specific probes in the field of proteomics has undergone evolutionary changes leading to the discovery of new probing techniques. Protein-specific probes serve two main purposes: epitope mapping and detection assays. One such technique is the use of phage display in the random selection of peptide mimotopes (mimtags) that can tag epitopes of proteins, replacing the use of monoclonal antibodies in detection systems. In this study, phage display technology was used to screen a random peptide library with a biologically active purified human interleukin-4 receptor (IL-4R) and interleukin-13 (IL-13) to identify mimtag candidates that interacted with these proteins. Once identified, the mimtags were commercially synthesised, biotinylated and used for in vitro immunoassays. We have used phage display to identify M13 phage clones that demonstrated specific binding to IL-4R and IL-13 cytokine. A consensus in binding sequences was observed and phage clones characterised had identical peptide sequence motifs. Only one was synthesised for use in further immunoassays, demonstrating significant binding to either IL-4R or IL-13. We have successfully shown the use of phage display to identify and characterise mimtags that specifically bind to their target epitope. Thus, this new method of probing proteins can be used in the future as a novel tool for immunoassay and detection technique, which is cheaper and more rapidly produced and therefore a better alternative to the use of monoclonal antibodies. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Inelastic scattering with Chebyshev polynomials and preconditioned conjugate gradient minimization.

    PubMed

    Temel, Burcin; Mills, Greg; Metiu, Horia

    2008-03-27

    We describe and test an implementation, using a basis set of Chebyshev polynomials, of a variational method for solving scattering problems in quantum mechanics. This minimum error method (MEM) determines the wave function Psi by minimizing the least-squares error in the function (H Psi - E Psi), where E is the desired scattering energy. We compare the MEM to an alternative, the Kohn variational principle (KVP), by solving the Secrest-Johnson model of two-dimensional inelastic scattering, which has been studied previously using the KVP and for which other numerical solutions are available. We use a conjugate gradient (CG) method to minimize the error, and by preconditioning the CG search, we are able to greatly reduce the number of iterations necessary; the method is thus faster and more stable than a matrix inversion, as is required in the KVP. Also, we avoid errors due to scattering off of the boundaries, which presents substantial problems for other methods, by matching the wave function in the interaction region to the correct asymptotic states at the specified energy; the use of Chebyshev polynomials allows this boundary condition to be implemented accurately. The use of Chebyshev polynomials allows for a rapid and accurate evaluation of the kinetic energy. This basis set is as efficient as plane waves but does not impose an artificial periodicity on the system. There are problems in surface science and molecular electronics which cannot be solved if periodicity is imposed, and the Chebyshev basis set is a good alternative in such situations.

  4. Pseudo spectral collocation with Maxwell polynomials for kinetic equations with energy diffusion

    NASA Astrophysics Data System (ADS)

    Sánchez-Vizuet, Tonatiuh; Cerfon, Antoine J.

    2018-02-01

    We study the approximation and stability properties of a recently popularized discretization strategy for the speed variable in kinetic equations, based on pseudo-spectral collocation on a grid defined by the zeros of a non-standard family of orthogonal polynomials called Maxwell polynomials. Taking a one-dimensional equation describing energy diffusion due to Fokker-Planck collisions with a Maxwell-Boltzmann background distribution as the test bench for the performance of the scheme, we find that Maxwell based discretizations outperform other commonly used schemes in most situations, often by orders of magnitude. This provides a strong motivation for their use in high-dimensional gyrokinetic simulations. However, we also show that Maxwell based schemes are subject to a non-modal time stepping instability in their most straightforward implementation, so that special care must be given to the discrete representation of the linear operators in order to benefit from the advantages provided by Maxwell polynomials.

  5. Application of Zernike polynomials towards accelerated adaptive focusing of transcranial high intensity focused ultrasound.

    PubMed

    Kaye, Elena A; Hertzberg, Yoni; Marx, Michael; Werner, Beat; Navon, Gil; Levoy, Marc; Pauly, Kim Butts

    2012-10-01

    To study the phase aberrations produced by human skulls during transcranial magnetic resonance imaging guided focused ultrasound surgery (MRgFUS), to demonstrate the potential of Zernike polynomials (ZPs) to accelerate the adaptive focusing process, and to investigate the benefits of using phase corrections obtained in previous studies as the initial guess for correction of a new data set. The five phase aberration data sets analyzed here were calculated from preoperative computerized tomography (CT) images of the head obtained during previous transcranial MRgFUS treatments performed with a clinical prototype hemispherical transducer. The noniterative adaptive focusing algorithm [Larrat et al., "MR-guided adaptive focusing of ultrasound," IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57(8), 1734-1747 (2010)] was modified by replacing Hadamard encoding with Zernike encoding. The algorithm was tested in simulations to correct the patients' phase aberrations. MR acoustic radiation force imaging (MR-ARFI) was used to visualize the effect of the phase aberration correction on the focusing of a hemispherical transducer. In addition, two methods for constructing an initial phase-correction estimate from previous patients' data were investigated. The benefits of the initial estimates in the Zernike-based algorithm were analyzed by measuring their effect on the ultrasound intensity at the focus and on the number of ZP modes necessary to achieve 90% of the intensity of the nonaberrated case. Covariance of pairs of phase aberration data sets showed high correlation between the aberration data of several patients and suggested that subgroups can be formed based on the level of correlation. Simulation of the Zernike-based algorithm demonstrated the overall greater correction effectiveness of the low modes of ZPs. The focal intensity achieves 90% of nonaberrated intensity using fewer than 170 ZP modes. The initial estimates based on using the average of the
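
    Zernike encoding ultimately rests on expanding a phase map over a small number of smooth disk modes. As a hedged sketch (a handful of unnormalized low-order modes and synthetic coefficients, not the patient data or transducer geometry of the study), the fit is ordinary linear least squares:

```python
import numpy as np

def zernike_basis(rho, theta):
    """A few low-order Zernike modes on the unit disk (unnormalized):
    piston, tip, tilt, defocus, and the two primary astigmatisms."""
    return np.stack([
        np.ones_like(rho),            # piston
        rho * np.cos(theta),          # tip
        rho * np.sin(theta),          # tilt
        2 * rho**2 - 1,               # defocus
        rho**2 * np.cos(2 * theta),   # astigmatism 0/90
        rho**2 * np.sin(2 * theta),   # astigmatism 45
    ], axis=-1)

# Sample the disk, synthesize a phase "aberration" from known coefficients,
# then recover them by least squares -- the core of Zernike encoding.
rng = np.random.default_rng(1)
rho = np.sqrt(rng.uniform(0, 1, 2000))      # area-uniform radius
theta = rng.uniform(0, 2 * np.pi, 2000)
B = zernike_basis(rho, theta)
c_true = np.array([0.0, 0.3, -0.2, 0.5, 0.1, -0.4])
phase = B @ c_true
c_fit, *_ = np.linalg.lstsq(B, phase, rcond=None)
```

    Because the synthetic phase lies in the span of the modes, the coefficients are recovered exactly; with real skull data the truncated expansion is only an approximation, which is why mode count matters.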

  6. The Translated Dowling Polynomials and Numbers.

    PubMed

    Mangontarum, Mahid M; Macodi-Ringia, Amila P; Abdulcarim, Normalah S

    2014-01-01

    More properties of the translated Whitney numbers of the second kind, such as the horizontal generating function, explicit formula, and exponential generating function, are proposed. Using the translated Whitney numbers of the second kind, we define the translated Dowling polynomials and numbers. Basic properties such as exponential generating functions and explicit formulas for the translated Dowling polynomials and numbers are obtained. Convexity, integral representation, and other interesting identities are also investigated and presented. We show that the properties obtained generalize some of the known results involving the classical Bell polynomials and numbers. Lastly, we establish the Hankel transform of the translated Dowling numbers.
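
    Numerically the translated objects are easy to tabulate. The sketch below assumes the standard triangular recurrence for the translated Whitney numbers of the second kind, W̃_α(n, k) = W̃_α(n-1, k-1) + kα·W̃_α(n-1, k); at α = 1 the translated Dowling polynomial evaluated at x = 1 should then collapse to the classical Bell numbers.

```python
def translated_whitney2(n, k, alpha=1):
    """Translated Whitney numbers of the second kind via the (assumed)
    recurrence W(n, k) = W(n-1, k-1) + k*alpha*W(n-1, k)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k < 0 or k > n:
        return 0
    return (translated_whitney2(n - 1, k - 1, alpha)
            + k * alpha * translated_whitney2(n - 1, k, alpha))

def translated_dowling(n, x, alpha=1):
    """Translated Dowling polynomial: sum_k W(n, k) x^k."""
    return sum(translated_whitney2(n, k, alpha) * x**k for k in range(n + 1))

# Sanity check of the claimed specialization: alpha = 1, x = 1 gives the
# Bell numbers 1, 1, 2, 5, 15, 52, ...
bell_like = [translated_dowling(n, 1, alpha=1) for n in range(6)]
```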

  7. 'FloraArray' for screening of specific DNA probes representing the characteristics of a certain microbial community.

    PubMed

    Yokoi, Takahide; Kaku, Yoshiko; Suzuki, Hiroyuki; Ohta, Masayuki; Ikuta, Hajime; Isaka, Kazuichi; Sumino, Tatsuo; Wagatsuma, Masako

    2007-08-01

    To investigate uncharacterized microbial communities, a custom DNA microarray named 'FloraArray' was developed for screening specific probes that represent the characteristics of a microbial community. The array was prepared by spotting 2000 plasmid DNAs from a genomic shotgun library of a sludge sample onto a DNA microarray. By comparative hybridization of the array with two different samples of genomic DNA, one from the activated sludge and the other from a nonactivated sludge sample of an anaerobic ammonium oxidation (anammox) bacterial community, specific spots were visualized as a definite fluctuating profile in an MA (differential intensity ratio vs. spot intensity) plot. About 300 spots of the array were candidate probes representing the anammox reaction of the activated sludge. After sequence analysis of the probes and examination of the results of blastn searches against the reported anammox reference sequence, complete matches were found for 161 probes (58.3%) and >90% matches were found for 242 probes (87.1%). These results demonstrate that 'FloraArray' could be a useful tool for screening specific DNA molecules of unknown microbial communities.
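
    The MA-plot screening step can be made concrete. The sketch below uses hypothetical spot intensities and a log-ratio cutoff chosen for illustration; neither the pseudocount nor the threshold comes from the paper.

```python
import numpy as np

def ma_plot_values(active, control, pseudocount=1.0):
    """M (log2 intensity ratio) and A (mean log2 intensity) for each spot
    of a two-channel comparative hybridization."""
    r = active + pseudocount
    g = control + pseudocount
    M = np.log2(r / g)
    A = 0.5 * np.log2(r * g)
    return M, A

# Hypothetical spot intensities: most spots equal in both channels,
# two strongly enriched in the activated-sludge channel.
active = np.array([100., 220., 95., 3200., 110., 5000.])
control = np.array([105., 210., 100., 190., 100., 240.])
M, A = ma_plot_values(active, control)
candidates = np.flatnonzero(M > 2.0)   # spots with > 4-fold enrichment
```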

  8. Site-specific incorporation of probes into RNA polymerase by unnatural-amino-acid mutagenesis and Staudinger-Bertozzi ligation

    PubMed Central

    Chakraborty, Anirban; Mazumder, Abhishek; Lin, Miaoxin; Hasemeyer, Adam; Xu, Qumiao; Wang, Dongye; Ebright, Yon W.; Ebright, Richard H.

    2015-01-01

    A three-step procedure comprising (i) unnatural-amino-acid mutagenesis with 4-azido-phenylalanine, (ii) Staudinger-Bertozzi ligation with a probe-phosphine derivative, and (iii) in vitro reconstitution of RNA polymerase (RNAP) enables the efficient site-specific incorporation of a fluorescent probe, a spin label, a crosslinking agent, a cleaving agent, an affinity tag, or any other biochemical or biophysical probe at any site of interest in RNAP. Straightforward extensions of the procedure enable the efficient site-specific incorporation of two or more different probes in two or more different subunits of RNAP. We present protocols for synthesis of probe-phosphine derivatives, preparation of RNAP subunits and the transcription initiation factor σ, unnatural-amino-acid mutagenesis of RNAP subunits and σ, Staudinger ligation with unnatural-amino-acid-containing RNAP subunits and σ, quantitation of labelling efficiency and labelling specificity, and reconstitution of RNAP. PMID:25665560

  9. Degenerate r-Stirling Numbers and r-Bell Polynomials

    NASA Astrophysics Data System (ADS)

    Kim, T.; Yao, Y.; Kim, D. S.; Jang, G.-W.

    2018-01-01

    The purpose of this paper is to exploit umbral calculus in order to derive some properties, recurrence relations, and identities related to the degenerate r-Stirling numbers of the second kind and the degenerate r-Bell polynomials. In particular, we express the degenerate r-Bell polynomials as linear combinations of many well-known families of special polynomials.

  10. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach mitigates this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solving a preconditioned ℓ¹-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results showing that our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples also demonstrate that the proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  11. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE PAGES

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    2017-06-22

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach mitigates this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solving a preconditioned ℓ¹-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results showing that our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples also demonstrate that the proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.
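
    For the bounded-domain (Legendre) case, the sampling-plus-preconditioning step looks as follows. This is a sketch under the assumption that the equilibrium measure of [-1, 1] is the Chebyshev (arcsine) density; the ℓ¹ solve itself is omitted. By construction, every row of the preconditioned matrix has Euclidean norm √N.

```python
import numpy as np
from numpy.polynomial import legendre

# Sample from the arcsine (Chebyshev) density, the equilibrium measure of
# [-1, 1], rather than from the uniform density of orthogonality.
rng = np.random.default_rng(2)
M, N = 200, 21                        # number of samples, basis size
xs = np.cos(np.pi * rng.uniform(0.0, 1.0, M))

# Vandermonde matrix of orthonormal Legendre polynomials P_0 .. P_{N-1}.
V = legendre.legvander(xs, N - 1) * np.sqrt((2 * np.arange(N) + 1) / 2.0)

# Christoffel-function preconditioner: w_i^2 = N / sum_j phi_j(x_i)^2.
w = np.sqrt(N / np.sum(V**2, axis=1))
A = w[:, None] * V   # every row of A has Euclidean norm sqrt(N) by design
```

    The sparse coefficient vector would then be recovered from A and the correspondingly weighted observations by any ℓ¹ (basis pursuit) solver.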

  12. Determination for Enterobacter cloacae based on a europium ternary complex labeled DNA probe

    NASA Astrophysics Data System (ADS)

    He, Hui; Niu, Cheng-Gang; Zeng, Guang-Ming; Ruan, Min; Qin, Pin-Zhu; Liu, Jing

    2011-11-01

    The fast detection and accurate diagnosis of prevalent pathogenic bacteria is very important for the treatment of disease, and fluorescence techniques are important diagnostic tools. A two-probe tandem DNA hybridization assay was designed for the detection of Enterobacter cloacae based on time-resolved fluorescence. In this work, the authors synthesized a novel europium ternary complex, Eu(TTA)₃(5-NH₂-phen), with intense luminescence, high fluorescence quantum yield and long lifetime. We developed a method based on this europium complex for the specific detection of original extracted DNA from E. cloacae. In the hybridization assay format, the reporter probe was labeled with Eu(TTA)₃(5-NH₂-phen) on the 5'-terminus, and the capture probe was covalently immobilized on the surface of glutaraldehyde-treated glass slides. The original extracted DNA of samples was used directly without any DNA purification or amplification. The detection was conducted by monitoring the fluorescence intensity from the glass surface after DNA hybridization. The detection limit of the DNA was 5 × 10⁻¹⁰ mol L⁻¹. The results of the present work proved that this new approach is easy to operate with high sensitivity and specificity. It could serve as a powerful tool for the detection of pathogenic microorganisms in the environment.

  13. Polynomial solutions of the Monge-Ampère equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aminov, Yu A

    2014-11-30

    The question of the existence of polynomial solutions to the Monge-Ampère equation z_xx z_yy − z_xy² = f(x,y) is considered in the case when f(x,y) is a polynomial. It is proved that if f is a polynomial of the second degree, which is positive for all values of its arguments and has a positive squared part, then no polynomial solution exists. On the other hand, a solution which is not polynomial but is analytic in the whole of the x,y-plane is produced. Necessary and sufficient conditions for the existence of polynomial solutions of degree up to 4 are found, and methods for the construction of such solutions are indicated. An approximation theorem is proved. Bibliography: 10 titles.
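
    The statement is easy to probe numerically: central finite differences reproduce the Monge-Ampère operator exactly on quadratic polynomials. A minimal check (illustrative, not from the paper):

```python
import numpy as np

def monge_ampere(z, h):
    """Discrete Monge-Ampere operator z_xx*z_yy - z_xy^2 via central
    differences on a uniform grid with spacing h (interior points only)."""
    z_xx = (z[2:, 1:-1] - 2*z[1:-1, 1:-1] + z[:-2, 1:-1]) / h**2
    z_yy = (z[1:-1, 2:] - 2*z[1:-1, 1:-1] + z[1:-1, :-2]) / h**2
    z_xy = (z[2:, 2:] - z[2:, :-2] - z[:-2, 2:] + z[:-2, :-2]) / (4*h**2)
    return z_xx * z_yy - z_xy**2

h = 0.01
x = np.arange(-1, 1 + h/2, h)
X, Y = np.meshgrid(x, x, indexing="ij")

# The quadratic z = (x^2 + y^2)/2 solves the equation with f = 1 ...
f1 = monge_ampere(0.5 * (X**2 + Y**2), h)
# ... while z = x*y gives f = -1 (the hyperbolic case).
f2 = monge_ampere(X * Y, h)
```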

  14. A background correction algorithm for Van Allen Probes MagEIS electron flux measurements

    DOE PAGES

    Claudepierre, S. G.; O'Brien, T. P.; Blake, J. B.; ...

    2015-07-14

    We describe an automated computer algorithm designed to remove background contamination from the Van Allen Probes Magnetic Electron Ion Spectrometer (MagEIS) electron flux measurements. We provide a detailed description of the algorithm with illustrative examples from on-orbit data. We find two primary sources of background contamination in the MagEIS electron data: inner zone protons and bremsstrahlung X-rays generated by energetic electrons interacting with the spacecraft material. Bremsstrahlung X-rays primarily produce contamination in the lower energy MagEIS electron channels (~30–500 keV) and in regions of geospace where multi-MeV electrons are present. Inner zone protons produce contamination in all MagEIS energy channels at roughly L < 2.5. The background-corrected MagEIS electron data produce a more accurate measurement of the electron radiation belts, as most earlier measurements suffer from unquantifiable and uncorrectable contamination in this harsh region of the near-Earth space environment. These background-corrected data will also be useful for spacecraft engineering purposes, providing ground truth for the near-Earth electron environment and informing the next generation of spacecraft design models (e.g., AE9).

  15. Specificity Re-evaluation of Oligonucleotide Probes for the Detection of Marine Picoplankton by Tyramide Signal Amplification-Fluorescent In Situ Hybridization.

    PubMed

    Riou, Virginie; Périot, Marine; Biegala, Isabelle C

    2017-01-01

    Oligonucleotide probes are increasingly being used to characterize natural microbial assemblages by Tyramide Signal Amplification-Fluorescent In Situ Hybridization (TSA-FISH, or CAtalysed Reporter Deposition, CARD-FISH). In view of the fast-growing rRNA databases, we re-evaluated the in silico specificity of eleven bacterial and eukaryotic probes and competitors frequently used for the quantification of marine picoplankton. We performed tests on cell cultures to decrease the risk of non-specific hybridization before the probes are used on environmental samples. The probes were checked against recent databases, and hybridization conditions were tested against target strains matching the probes perfectly and against the closest non-target strains presenting one to four mismatches. We increased the hybridization stringency from 55 to 65% formamide for the Eub338+EubII+EubIII probe mix to be specific to the Eubacteria domain. In addition, we found that recent changes in the Gammaproteobacteria classification decreased the specificity of the Gam42a probe, and that the Roseo536R and Ros537 probes were not specific to, and missed part of, the Roseobacter clade. Changes in stringency conditions were important for bacterial probes; they induced, respectively, a significant increase in Eubacteria and Roseobacter concentrations and no significant change in Gammaproteobacteria concentrations in the investigated natural environment. We confirmed the original conditions for the eukaryotic probes and propose the Euk1209+NChlo01+Chlo02 probe mix to target the largest picoeukaryotic diversity. The experience acquired through these investigations leads us to propose a seven-step protocol for a complete FISH probe specificity check-up to improve data quality in environmental studies.

  16. Locus-specific oligonucleotide probes increase the usefulness of inter-Alu polymorphisms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarnik, M.; Tang, J.Q.; Korab-Laskowska, M.

    1994-09-01

    Most mapping approaches are based on single-locus codominant markers of known location. Their multiplex ratio, defined as the number of loci that can be simultaneously tested, is typically one. An increased multiplex ratio was obtained by typing anonymous polymorphisms using PCR primers anchored in ubiquitous Alu repeats. These so-called alumorphs are revealed by inter-Alu-PCR and seen as the presence or absence of an amplified band of a given length. We decided to map alumorphs and to develop locus-specific oligonucleotide (LSO) probes to facilitate their use and transfer among different laboratories. We studied the segregation of alumorphs in eight CEPH families, using two distinct Alu primers, both directing PCR between the repeats in a tail-to-tail orientation. The segregating bands were assigned to chromosomal locations by two-point linkage analysis with CEPH markers (V6.0). They were excised from dried gels, reamplified, cloned and sequenced. The resulting LSOs were used as hybridization probes (i) to confirm chromosomal assignments in a human/hamster somatic cell hybrid panel, and (ii) to group certain allelic length variants, originally coded as separate dominant markers, into more informative codominant loci. These codominants were then placed by multipoint analysis on a microsatellite Genethon map. Finally, the LSO probes were used as polymorphic STSs to identify by hybridization the corresponding markers among products of inter-Alu-PCR. The use of LSOs converts alumorphs into a system of non-anonymous, often multiallelic codominant markers which can be simultaneously typed, thus achieving the goal of a high multiplex ratio.

  17. Determination of solute site occupancies within γ' precipitates in nickel-base superalloys via orientation-specific atom probe tomography

    DOE PAGES

    Meher, Subhashish; Rojhirunsakool, Tanaporn; Nandwana, Peeyush; ...

    2015-04-28

    In this study, the analytical limitations of atom probe tomography, such as resolving a desired set of atomic planes for solving complex materials science problems, have been overcome by employing a well-developed, unique and reproducible crystallographic technique involving the synergetic coupling of orientation microscopy with atom probe tomography. The crystallographic information in atom probe reconstructions has been utilized to accurately determine the solute site occupancies in Ni-Al-Cr based superalloys. The structural information in the atom probe reveals that both Al and Cr occupy the same sub-lattice within the L1₂-ordered γ′ precipitates to form Ni₃(Al,Cr) precipitates in a Ni-14Al-7Cr (at.%) alloy. Interestingly, the addition of Co, which is a solid solution strengthener, to a Ni-14Al-7Cr alloy results in the partial reversal of Al site occupancy within γ′ precipitates to form (Ni,Al)₃(Al,Cr,Co) precipitates. This unique evidence of reversal of Al site occupancy, resulting from the introduction of other solutes within the ordered structures, gives insights into the relative energetics of different sub-lattice sites when occupied by different solutes.

  18. Ricci polynomial gravity

    NASA Astrophysics Data System (ADS)

    Hao, Xin; Zhao, Liu

    2017-12-01

    We study a novel class of higher-curvature gravity models in n spacetime dimensions which we call Ricci polynomial gravity. The action consists purely of a polynomial in Ricci curvature of order N. In the absence of the second-order terms in the action, the models are ghost free around the Minkowski vacuum. By appropriately choosing the coupling coefficients in front of each term in the action, it is shown that the models can have multiple vacua with different effective cosmological constants, and can be made free of ghost and scalar degrees of freedom around at least one of the maximally symmetric vacua for any n > 2 and any N ≥ 4. We also discuss some of the physical implications of the existence of multiple vacua in the contexts of black hole physics and cosmology.

  19. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers.

    PubMed

    Dobie, Robert A; Wojcik, Nancy C

    2015-07-13

    The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Regression analysis was used to derive new age-correction values using audiometric data from the 1999-2006 US NHANES. Using the NHANES median better-ear thresholds fitted to simple polynomial equations, new age-correction values were generated for both men and women for ages 20-75 years. The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20-75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61-75 years. Recent NHANES data offer a simple solution to the need for updated, population-based age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers.
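
    The mechanics of the proposed tables reduce to fitting a polynomial to median thresholds by age and tabulating differences from the age-20 value. The numbers below are invented for illustration; they are not NHANES medians, and the cubic degree is an arbitrary choice.

```python
import numpy as np

# Hypothetical median better-ear thresholds (dB HL) at one audiometric
# frequency, by age -- illustrative values only.
ages = np.array([20, 30, 40, 50, 60, 70, 75])
thresh = np.array([3.0, 4.0, 6.0, 10.0, 17.0, 27.0, 33.0])

# Fit a simple polynomial to the medians, then tabulate age-correction
# values relative to age 20, one entry per year of age.
coef = np.polyfit(ages, thresh, deg=3)
fit = np.poly1d(coef)
correction = {a: round(fit(a) - fit(20), 1) for a in range(20, 76)}
```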

  20. Dynamic pressure probe response tests for robust measurements in periodic flows close to probe resonating frequency

    NASA Astrophysics Data System (ADS)

    Ceyhun Şahin, Fatma; Schiffmann, Jürg

    2018-02-01

    A single-hole probe was designed to measure steady and periodic flows with high fluctuation amplitudes and with minimal flow intrusion. Because of its high aspect ratio, estimations showed that the probe resonates at a frequency two orders of magnitude lower than the fast-response sensor cut-off frequencies. The high fluctuation amplitudes cause a non-linear behavior of the probe, and available models are adequate neither for a quantitative estimation of the resonating frequencies nor for predicting the system damping. Instead, a non-linear data correction procedure based on individual transfer functions defined for each harmonic contribution is introduced for pneumatic probes, which allows their operating range to be extended beyond the resonating frequencies and linear dynamics. This data correction procedure was assessed on a miniature single-hole probe of 0.35 mm inner diameter designed to measure flow speed and direction. For the reliable use of such a probe in periodic flows, its frequency response was reproduced with a siren disk, which allows exciting the probe up to 10 kHz with peak-to-peak amplitudes ranging between 20% and 170% of the absolute mean pressure. The effect of the probe interior design on the phase lag and amplitude distortion in periodic flow measurements was investigated on probes with similar inner diameters and different lengths, or similar aspect ratios (L/D) and different total interior volumes. The results suggest that while the tube length consistently sets the resonance frequency, the internal total volume affects the non-linear dynamic response in terms of varying gain functions. A detailed analysis of the introduced calibration methodology shows that the goodness of the reconstructed data compared to the reference data is above 75% for fundamental frequencies up to twice the probe resonance frequency. The results clearly suggest that the introduced procedure is adequate to capture non-linear pneumatic probe dynamics and to
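
    A per-harmonic transfer-function correction of this kind can be sketched with an FFT: each harmonic of the measured signal is divided by its own complex gain. The gains and phases below are invented; in the paper the transfer functions are measured with the siren disk.

```python
import numpy as np

def correct_periodic(p_meas, gains, phases):
    """Reconstruct a periodic pressure signal from a resonant-probe
    measurement by dividing each harmonic by its own complex transfer
    function H_k = gains[k] * exp(1j * phases[k]); k = 0 is the mean."""
    P = np.fft.rfft(p_meas)
    H = gains * np.exp(1j * phases)
    return np.fft.irfft(P / H, n=len(p_meas))

# Synthetic check: distort a two-harmonic signal with a known transfer
# function, then undo the distortion harmonic by harmonic.
n = 256
t = np.arange(n) / n
p_true = 1.0 + 0.8 * np.sin(2*np.pi*t) + 0.3 * np.sin(2*np.pi*3*t + 0.4)
gains = np.ones(n//2 + 1)
phases = np.zeros(n//2 + 1)
gains[1], phases[1] = 1.9, -0.5       # near-resonance amplification
gains[3], phases[3] = 0.6, -1.1
p_meas = np.fft.irfft(np.fft.rfft(p_true) * gains * np.exp(1j*phases), n=n)
p_rec = correct_periodic(p_meas, gains, phases)
```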

  1. Imaging characteristics of Zernike and annular polynomial aberrations.

    PubMed

    Mahajan, Virendra N; Díaz, José Antonio

    2013-04-01

    The general equations for the point-spread function (PSF) and optical transfer function (OTF) are given for any pupil shape, and they are applied to optical imaging systems with circular and annular pupils. The symmetry properties of the PSF, the real and imaginary parts of the OTF, and the modulation transfer function (MTF) of a system with a circular pupil aberrated by a Zernike circle polynomial aberration are derived. The interferograms and PSFs are illustrated for some typical polynomial aberrations with a sigma value of one wave, and 3D PSFs and MTFs are shown for 0.1 wave. The Strehl ratio is also calculated for polynomial aberrations with a sigma value of 0.1 wave, and shown to be well estimated from the sigma value. The numerical results are compared with the corresponding results in the literature. Because of the same angular dependence of the corresponding annular and circle polynomial aberrations, the symmetry properties of systems with annular pupils aberrated by an annular polynomial aberration are the same as those for a circular pupil aberrated by a corresponding circle polynomial aberration. They are also illustrated with numerical examples.
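
    The PSF and Strehl-ratio computations described are standard Fourier-optics operations. A minimal sketch (a circular pupil with 0.1 waves of Zernike defocus; the grid size and zero-padding factor are arbitrary choices):

```python
import numpy as np

def psf(pupil_phase, aperture, pad=4):
    """Point-spread function: squared modulus of the FFT of the pupil
    function, zero-padded for finer sampling, normalized to unit energy."""
    n = aperture.shape[0]
    P = np.zeros((pad * n, pad * n), dtype=complex)
    P[:n, :n] = aperture * np.exp(1j * pupil_phase)
    img = np.abs(np.fft.fft2(P))**2
    return img / img.sum()

n = 128
Y, X = np.mgrid[-1:1:n*1j, -1:1:n*1j]
rho2 = X**2 + Y**2
ap = (rho2 <= 1.0).astype(float)

defocus = 2 * np.pi * 0.1 * (2 * rho2 - 1)    # 0.1-wave Zernike defocus
strehl = psf(defocus, ap).max() / psf(np.zeros_like(ap), ap).max()
```

    For this aberration level the Strehl ratio stays close to the Maréchal estimate, consistent with the paper's observation that it is well estimated from the sigma value.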

  2. Genetic parameters of legendre polynomials for first parity lactation curves.

    PubMed

    Pool, M H; Janss, L L; Meuwissen, T H

    2000-11-01

    Variance components of the covariance function coefficients in a random regression test-day model were estimated by Legendre polynomials up to a fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for permanent environment. Test-day records from cows registered between 1990 and 1996 and collected by regular milk recording were available. For the data set, 23,700 complete lactations were selected from 475 herds, sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of genetic and permanent environmental variance structures were compared with bivariate estimates on 30-d intervals. A third-order or higher polynomial modeled the shape of variance curves over DIM with sufficient accuracy for the genetic and permanent environment part. Also, the genetic correlation structure was fitted with sufficient accuracy by a third-order polynomial, but, for the permanent environmental component, a fourth order was needed. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for permanent environment allows a simpler covariance function with a reduced number of parameters based on the eigenvalues and eigenvectors.
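
    The covariance-function machinery of such a random regression model can be sketched directly: Legendre covariates of days in milk (DIM) rescaled to [-1, 1], and a covariance of breeding values between any two test days obtained from the coefficient covariance matrix. The DIM range and the matrix K below are hypothetical, not estimates from the paper.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(dim, dim_min=5.0, dim_max=310.0, order=3):
    """Legendre covariates for DIM rescaled to [-1, 1], as used in
    random regression test-day models (columns P0..P_order)."""
    t = 2.0 * (np.asarray(dim, dtype=float) - dim_min) / (dim_max - dim_min) - 1.0
    return legendre.legvander(t, order)

# Hypothetical covariance matrix K of the random regression coefficients
# (symmetric, diagonally dominant, hence positive definite).
K = np.array([[4.0, 1.0, 0.2, 0.0],
              [1.0, 2.0, 0.3, 0.1],
              [0.2, 0.3, 0.8, 0.2],
              [0.0, 0.1, 0.2, 0.4]])

dims = np.array([30, 100, 200, 300])
Phi = legendre_covariates(dims)
G = Phi @ K @ Phi.T   # genetic covariances among the four test days
```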

  3. Travel-time source-specific station correction improves location accuracy

    NASA Astrophysics Data System (ADS)

    Giuntini, Alessandra; Materni, Valerio; Chiappini, Stefano; Carluccio, Roberto; Console, Rodolfo; Chiappini, Massimo

    2013-04-01

    Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications like verifying compliance with the Comprehensive Test Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge about the 3-D structure of seismic wave velocity in the Earth. It is well known that modeling errors of calculated travel times may have the effect of shifting the computed epicenters far from the real locations, by a distance even larger than the size of the statistical error ellipses, regardless of the accuracy in picking seismic phase arrivals. The consequences of large mislocations of seismic events in the context of CTBT verification are particularly critical for triggering a possible On Site Inspection (OSI). In fact, the Treaty establishes that an OSI area cannot be larger than 1000 km², and its largest linear dimension cannot be larger than 50 km. Moreover, depth accuracy is crucial for the application of the depth event screening criterion. In the present study, we develop a method of source-specific travel-time corrections based on a set of well-located events recorded by dense national seismic networks in seismically active regions. The applications concern seismic sequences recorded in Japan, Iran and Italy. We show that mislocations of the order of 10-20 km affecting the epicenters, as well as larger mislocations in hypocentral depths, calculated from a global seismic network and using the standard IASPEI91 travel times, can be effectively removed by applying source-specific station corrections.
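
    In its simplest form, a source-specific station correction is a robust average of travel-time residuals from well-located reference events in the source region. The sketch below (hypothetical station names and residuals, median as the robust estimator) illustrates the idea:

```python
import numpy as np

def station_corrections(residuals_by_station):
    """Source-specific station corrections from well-located reference
    events: the median travel-time residual (observed minus predicted,
    e.g. via IASPEI91) at each station."""
    return {sta: float(np.median(res)) for sta, res in residuals_by_station.items()}

# Hypothetical residuals (seconds) from reference events in one region.
residuals = {"STA1": [0.42, 0.38, 0.45],
             "STA2": [-0.80, -0.74, -0.77]}
corr = station_corrections(residuals)
# When locating a new event from the same region, corr[sta] is subtracted
# from each observed arrival before inversion, removing the systematic
# path effect at that station.
```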

  4. 76 FR 12349 - Environmental Management Site-Specific Advisory Board, Nevada; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-07

    ... AGENCY: Department of Energy. ACTION: Notice of open meeting: Correction SUMMARY: On February 24, 2011... Site- Specific Advisory Board, Nevada to be held on March 9, 2011 (76 FR 10343). This document makes...: [email protected] . Corrections In the Federal Register of February 24, 2011, in FR Doc. 2011-4148, on...

  5. RPM-WEBBSYS: A web-based computer system to apply the rational polynomial method for estimating static formation temperatures of petroleum and geothermal wells

    NASA Astrophysics Data System (ADS)

    Wong-Loya, J. A.; Santoyo, E.; Andaverde, J. A.; Quiroz-Ruiz, A.

    2015-12-01

    A Web-Based Computer System (RPM-WEBBSYS) has been developed for the application of the Rational Polynomial Method (RPM) to estimate static formation temperatures (SFT) of geothermal and petroleum wells. The system is also capable of reproducing the full thermal recovery process that occurs during well completion. RPM-WEBBSYS has been programmed using advances in information technology to perform SFT computations more efficiently. RPM-WEBBSYS can be easily and rapidly executed from any computing device (e.g., personal computers and portable computing devices such as tablets or smartphones) with Internet access and a web browser. The computer system was validated using bottomhole temperature (BHT) measurements logged in a synthetic heat transfer experiment, where good agreement between predicted and true SFT was achieved. RPM-WEBBSYS was finally applied to BHT logs collected during well drilling and shut-in operations, where the typical problems of under- and over-estimation of the SFT (exhibited by most of the existing analytical methods) were effectively corrected.
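
    To give the flavor of a rational-polynomial extrapolation of shut-in temperatures, the sketch below assumes the simple form T(Δt) = (a + bΔt)/(1 + cΔt), whose Δt → ∞ limit b/c plays the role of the SFT. The functional form, coefficient names, and synthetic log are illustrative only, not the exact RPM formulation of the paper.

```python
import numpy as np

def fit_rational_sft(dt, T):
    """Fit T(dt) = (a + b*dt) / (1 + c*dt) by linearizing
    T = a + b*dt - c*dt*T and solving least squares for (a, b, c);
    the extrapolated static formation temperature is b / c."""
    A = np.column_stack([np.ones_like(dt), dt, -dt * T])
    a, b, c = np.linalg.lstsq(A, T, rcond=None)[0]
    return b / c

# Synthetic, noise-free BHT build-up log generated from the assumed form.
dt = np.linspace(1.0, 36.0, 30)            # shut-in times (hours)
sft_true, a, c = 120.0, 40.0, 0.05
T = (a + c * sft_true * dt) / (1 + c * dt)
```

    On noise-free synthetic data the linearization recovers the generating coefficients, so the extrapolated SFT matches the true value.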

  6. Neck curve polynomials in neck rupture model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurniadi, Rizal; Perkasa, Yudha S.; Waris, Abdul

    2012-06-06

    The Neck Rupture Model explains the scission process, in which the liquid drop has its smallest radius at a certain position. The rupture position was originally determined randomly, hence the name Random Neck Rupture Model (RNRM). Here, neck curve polynomials are employed in the Neck Rupture Model to calculate the fission yield of the neutron-induced fission reaction of ²⁸⁰X₉₀, varying both the order of the polynomials and the temperature. The neck curve polynomial approximation has an important effect on the shape of the fission yield curve.

  7. More on rotations as spin matrix polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtright, Thomas L.

    2015-09-15

    Any nonsingular function of spin j matrices always reduces to a matrix polynomial of order 2j. The challenge is to find a convenient form for the coefficients of the matrix polynomial. The theory of biorthogonal systems is a useful framework to meet this challenge. Central factorial numbers play a key role in the theoretical development. Explicit polynomial coefficients for rotations expressed either as exponentials or as rational Cayley transforms are considered here. Structural features of the results are discussed and compared, and large j limits of the coefficients are examined.
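
    The j = 1 case already illustrates the reduction: since the 3×3 matrix Jz satisfies its characteristic equation, the rotation exp(iθJz) collapses to a quadratic matrix polynomial in Jz with trigonometric coefficients. A quick numerical check of this standard identity (not specific to the paper's biorthogonal construction):

```python
import numpy as np

# Spin-1 (j = 1, so 2j = 2): rotations reduce to quadratics in Jz.
Jz = np.diag([1.0, 0.0, -1.0])
I = np.eye(3)

def rotation_exact(theta):
    """exp(i*theta*Jz), evaluated directly since Jz is diagonal."""
    return np.diag(np.exp(1j * theta * np.diag(Jz)))

def rotation_poly(theta):
    """The order-2j matrix polynomial: I + i*sin(theta)*Jz
    + (cos(theta) - 1)*Jz^2."""
    return I + 1j * np.sin(theta) * Jz + (np.cos(theta) - 1.0) * (Jz @ Jz)

theta = 0.7
```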

  8. A FAST POLYNOMIAL TRANSFORM PROGRAM WITH A MODULARIZED STRUCTURE

    NASA Technical Reports Server (NTRS)

    Truong, T. K.

    1994-01-01

    This program utilizes a fast polynomial transformation (FPT) algorithm applicable to two-dimensional mathematical convolutions. Two-dimensional convolution has many applications, particularly in image processing. Two-dimensional cyclic convolutions can be converted to a one-dimensional convolution in a polynomial ring. Traditional FPT methods decompose the one-dimensional cyclic polynomial into polynomial convolutions of different lengths. This program will decompose a cyclic polynomial into polynomial convolutions of the same length. Thus, only FPTs and Fast Fourier Transforms of the same length are required. This modular approach can save computational resources. To further enhance its appeal, the program is written in the transportable 'C' language. The steps in the algorithm are: 1) formulate the modulus reduction equations, 2) calculate the polynomial transforms, 3) multiply the transforms using a generalized fast Fourier transformation, 4) compute the inverse polynomial transforms, and 5) reconstruct the final matrices using the Chinese remainder theorem. Input to this program is comprised of the row and column dimensions and the initial two matrices. The matrices are printed out at all steps, ending with the final reconstruction. This program is written in 'C' for batch execution and has been implemented on the IBM PC series of computers under DOS with a central memory requirement of approximately 18K of 8 bit bytes. This program was developed in 1986.
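
    The core operation the program accelerates is two-dimensional cyclic convolution. A minimal Python sketch (using ordinary FFTs as a stand-in for the polynomial transforms; this is not the original 'C' implementation) illustrates the quantity being computed and checks it against the direct definition:

```python
import numpy as np

def cyclic_convolve_2d(a, b):
    """2D cyclic (circular) convolution via the convolution theorem.
    The FPT program computes the same quantity with polynomial transforms;
    plain FFTs are used here only as an illustrative stand-in."""
    return np.real_if_close(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def cyclic_convolve_2d_direct(a, b):
    """Direct O(N^4) reference implementation for verification."""
    n, m = a.shape
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for k in range(n):
                for l in range(m):
                    out[i, j] += a[k, l] * b[(i - k) % n, (j - l) % m]
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4))
y = rng.standard_normal((4, 4))
print(np.allclose(cyclic_convolve_2d(x, y), cyclic_convolve_2d_direct(x, y)))
```

    The FFT-based route mirrors steps 2-4 of the algorithm (transform, pointwise multiply, inverse transform); the modulus-reduction and Chinese-remainder steps are specific to the polynomial-transform formulation.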

  9. New concepts of fluorescent probes for specific detection of DNA sequences: bis-modified oligonucleotides in excimer and exciplex detection.

    PubMed

    Gbaj, A; Bichenkova, Ev; Walsh, L; Savage, He; Sardarian, Ar; Etchells, Ll; Gulati, A; Hawisa, S; Douglas, Kt

    2009-12-01

    The detection of single base mismatches in DNA is important for diagnostics, treatment of genetic diseases, and identification of single nucleotide polymorphisms. Highly sensitive, specific assays are needed to investigate genetic samples from patients. The use of a simple fluorescent nucleoside analogue for the detection of DNA sequence and point mutations by hybridisation in solution is described in this study. The 5'-bispyrene and 3'-naphthalene oligonucleotide probes form an exciplex on hybridisation to target in water, and the 5'-bispyrene oligonucleotide alone is an adequate probe to determine the concentration of target present. The results also indicate that this system has the potential to identify mismatches and insertions. The aim of this work was to investigate experimental structures and conditions that permit strong exciplex emission for nucleic acid detectors, and to show how such exciplexes can register the presence of mismatches as required in SNP analysis. This study revealed that the hybridisation of the 5'-bispyrenyl fluorophore to a DNA target results in the formation of a fluorescent probe with a high signal intensity change and specificity for detecting a complementary target in a homogeneous system. Detection of SNP mutations using this split-probe system is a highly specific, simple, and accessible method that meets the rigorous requirements of pharmacogenomic studies. Thus, the system can act as an SNP detector, and it shows promise for future applications in genetic testing.

  10. Modeling Uncertainty in Steady State Diffusion Problems via Generalized Polynomial Chaos

    DTIC Science & Technology

    2002-07-25

    Some basic hypergeometric polynomials that generalize Jacobi polynomials. Memoirs Amer. Math. Soc., AMS... orthogonal polynomial functionals from the Askey scheme, as a generalization of the original polynomial chaos idea of Wiener (1938). A Galerkin projection... (1) by generalized polynomial chaos expansion, where the uncertainties can be introduced through κ, f, or g, or some combinations. It is worth

  11. Electron-beam-induced-current and active secondary-electron voltage-contrast with aberration-corrected electron probes

    DOE PAGES

    Han, Myung-Geun; Garlow, Joseph A.; Marshall, Matthew S. J.; ...

    2017-03-23

    The ability to map out electrostatic potentials in materials is critical for the development and design of nanoscale electronic and spintronic devices in modern industry. Electron holography has been an important tool for revealing electric and magnetic field distributions in microelectronics and magnetic-based memory devices; however, its utility is hindered by several practical constraints, such as charging artifacts and limitations in sensitivity and in field of view. In this article, we report electron-beam-induced-current (EBIC) and secondary-electron voltage-contrast (SE-VC) with an aberration-corrected electron probe in a transmission electron microscope (TEM), as complementary techniques to electron holography, to measure electric fields and surface potentials, respectively. These two techniques were applied to ferroelectric thin films, multiferroic nanowires, and single crystals. Electrostatic potential maps obtained by off-axis electron holography were compared with EBIC and SE-VC to show that these techniques can be used as a complementary approach to validate quantitative results obtained from electron holography analysis.

  12. Orthonormal aberration polynomials for anamorphic optical imaging systems with circular pupils.

    PubMed

    Mahajan, Virendra N

    2012-06-20

    In a recent paper, we considered the classical aberrations of an anamorphic optical imaging system with a rectangular pupil, representing the terms of a power series expansion of its aberration function. These aberrations are inherently separable in the Cartesian coordinates (x,y) of a point on the pupil. Accordingly, there is x-defocus and x-coma, y-defocus and y-coma, and so on. We showed that the aberration polynomials orthonormal over the pupil and representing balanced aberrations for such a system are represented by the products of two Legendre polynomials, one for each of the two Cartesian coordinates of the pupil point; for example, L(l)(x)L(m)(y), where l and m are positive integers (including zero) and L(l)(x), for example, represents an orthonormal Legendre polynomial of degree l in x. The compound two-dimensional (2D) Legendre polynomials, like the classical aberrations, are thus also inherently separable in the Cartesian coordinates of the pupil point. Moreover, for every orthonormal polynomial L(l)(x)L(m)(y), there is a corresponding orthonormal polynomial L(l)(y)L(m)(x) obtained by interchanging x and y. These polynomials are different from the corresponding orthogonal polynomials for a system with rotational symmetry but a rectangular pupil. In this paper, we show that the orthonormal aberration polynomials for an anamorphic system with a circular pupil, obtained by the Gram-Schmidt orthogonalization of the 2D Legendre polynomials, are not separable in the two coordinates. Moreover, for a given polynomial in x and y, there is no corresponding polynomial obtained by interchanging x and y. For example, there are polynomials representing x-defocus, balanced x-coma, and balanced x-spherical aberration, but no corresponding y-aberration polynomials. The missing y-aberration terms are contained in other polynomials. We emphasize that the Zernike circle polynomials, although orthogonal over a circular pupil, are not suitable for an anamorphic system as
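
    The non-separability described above can be checked numerically. A sketch (assuming simple grid quadrature over the unit disk; not the paper's analytical derivation) shows that the x-defocus and y-defocus Legendre products, orthogonal over a square pupil, are no longer orthogonal over a circular pupil once the piston (mean) component is removed, so Gram-Schmidt must mix x and y:

```python
import numpy as np
from numpy.polynomial import legendre

def legP(l, t):
    """Legendre polynomial P_l evaluated at t."""
    c = np.zeros(l + 1)
    c[l] = 1.0
    return legendre.legval(t, c)

# Simple grid quadrature over the unit disk and the square [-1,1]^2.
n = 801
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
cell = (x[1] - x[0]) ** 2
disk = X**2 + Y**2 <= 1.0

def inner_disk(f, g):
    return np.sum((f * g)[disk]) * cell / np.pi    # normalized by disk area

def inner_square(f, g):
    return np.sum(f * g) * cell / 4.0              # normalized by square area

fx = legP(2, X) * legP(0, Y)     # x-defocus Legendre product L2(x)L0(y)
fy = legP(0, X) * legP(2, Y)     # y-defocus Legendre product L0(x)L2(y)

# Remove piston over the disk before comparing the two defocus terms.
gx = fx - inner_disk(fx, np.ones_like(fx))
gy = fy - inner_disk(fy, np.ones_like(fy))
disk_val = inner_disk(gx, gy)      # analytically -3/64, clearly nonzero
square_val = inner_square(fx, fy)  # zero: separability survives on the square
print(disk_val, square_val)
```

    Because the two centered products have a nonzero inner product over the disk, orthogonalizing one against the other necessarily produces polynomials mixing x and y, in line with the paper's conclusion.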

  13. Creating and virtually screening databases of fluorescently-labelled compounds for the discovery of target-specific molecular probes

    NASA Astrophysics Data System (ADS)

    Kamstra, Rhiannon L.; Dadgar, Saedeh; Wigg, John; Chowdhury, Morshed A.; Phenix, Christopher P.; Floriano, Wely B.

    2014-11-01

    Our group has recently demonstrated that virtual screening is a useful technique for the identification of target-specific molecular probes. In this paper, we discuss some of our proof-of-concept results involving two biologically relevant target proteins, and report the development of a computational script to generate large databases of fluorescence-labelled compounds for computer-assisted molecular design. The virtual screening of a small library of 1,153 fluorescently-labelled compounds against two targets, and the experimental testing of selected hits reveal that this approach is efficient at identifying molecular probes, and that the screening of a labelled library is preferred over the screening of base compounds followed by conjugation of confirmed hits. The automated script for library generation explores the known reactivity of commercially available dyes, such as NHS-esters, to create large virtual databases of fluorescence-tagged small molecules that can be easily synthesized in a laboratory. A database of 14,862 compounds, each tagged with the ATTO680 fluorophore was generated with the automated script reported here. This library is available for downloading and it is suitable for virtual ligand screening aiming at the identification of target-specific fluorescent molecular probes.

  14. From sequences to polynomials and back, via operator orderings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amdeberhan, Tewodros, E-mail: tamdeber@tulane.edu; Dixit, Atul, E-mail: adixit@tulane.edu; Moll, Victor H., E-mail: vhm@tulane.edu

    2013-12-15

    Bender and Dunne [“Polynomials and operator orderings,” J. Math. Phys. 29, 1727–1731 (1988)] showed that linear combinations of words q{sup k}p{sup n}q{sup n−k}, where p and q are subject to the relation qp − pq = ı, may be expressed as a polynomial in the symbol z=1/2 (qp+pq). Relations between such polynomials and linear combinations of the transformed coefficients are explored. In particular, examples yielding orthogonal polynomials are provided.

  15. Flat bases of invariant polynomials and P-matrices of E{sub 7} and E{sub 8}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talamini, Vittorino

    2010-02-15

    Let G be a compact group of linear transformations of a Euclidean space V. The G-invariant C{sup {infinity}} functions can be expressed as C{sup {infinity}} functions of a finite basic set of G-invariant homogeneous polynomials, sometimes called an integrity basis. The mathematical description of the orbit space V/G depends on the integrity basis too: it is realized through polynomial equations and inequalities expressing rank and positive semidefiniteness conditions of the P-matrix, a real symmetric matrix determined by the integrity basis. The choice of the basic set of G-invariant homogeneous polynomials forming an integrity basis is not unique, so the mathematical description of the orbit space is not unique either. If G is an irreducible finite reflection group, Saito et al. [Commun. Algebra 8, 373 (1980)] characterized some special basic sets of G-invariant homogeneous polynomials that they called flat. They also found explicitly the flat basic sets of invariant homogeneous polynomials of all the irreducible finite reflection groups except for the two largest groups E{sub 7} and E{sub 8}. In this paper the flat basic sets of invariant homogeneous polynomials of E{sub 7} and E{sub 8} and the corresponding P-matrices are determined explicitly. Using the results reported here, one can easily determine the P-matrices corresponding to any other integrity basis of E{sub 7} or E{sub 8}. From the P-matrices one may then write down the equations and inequalities defining the orbit spaces of E{sub 7} and E{sub 8} relative to a flat basis or to any other integrity basis. These results can be employed to study analytically the symmetry breaking in all theories where the symmetry group is one of the finite reflection groups E{sub 7} and E{sub 8} or one of the Lie groups E{sub 7} and E{sub 8} in their adjoint representations.

  16. A new sampling scheme for developing metamodels with the zeros of Chebyshev polynomials

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing

    2015-09-01

    The accuracy of metamodelling is determined by both the sampling and approximation. This article proposes a new sampling method based on the zeros of Chebyshev polynomials to capture the sampling information effectively. First, the zeros of one-dimensional Chebyshev polynomials are applied to construct Chebyshev tensor product (CTP) sampling, and the CTP is then used to construct high-order multi-dimensional metamodels using the 'hypercube' polynomials. Secondly, the CTP sampling is further enhanced to develop Chebyshev collocation method (CCM) sampling, to construct the 'simplex' polynomials. The samples of CCM are randomly and directly chosen from the CTP samples. Two widely studied sampling methods, namely the Smolyak sparse grid and Hammersley, are used to demonstrate the effectiveness of the proposed sampling method. Several numerical examples are utilized to validate the approximation accuracy of the proposed metamodel under different dimensions.
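
    A minimal sketch of CTP sampling (test function, degrees, and basis are illustrative choices, not the authors' code): generate the tensor grid of Chebyshev zeros, fit a full tensor "hypercube" polynomial metamodel by least squares, and check its accuracy at random points.

```python
import numpy as np

def chebyshev_zeros(m):
    """Zeros of the degree-m Chebyshev polynomial of the first kind on [-1, 1]."""
    k = np.arange(1, m + 1)
    return np.cos((2.0 * k - 1.0) * np.pi / (2.0 * m))

def ctp_samples(m, dim):
    """Chebyshev tensor-product (CTP) sampling: the full grid of 1D zeros."""
    grid = np.meshgrid(*([chebyshev_zeros(m)] * dim), indexing="ij")
    return np.column_stack([g.ravel() for g in grid])

# Fit a tensor ("hypercube") polynomial metamodel on CTP samples and test it
# on a smooth, arbitrarily chosen function f(x, y) = exp(x) * cos(y).
m = 8
f = lambda p: np.exp(p[:, 0]) * np.cos(p[:, 1])
pts = ctp_samples(m, 2)                            # 64 samples in 2D
V = np.stack([pts[:, 0]**i * pts[:, 1]**j
              for i in range(m) for j in range(m)], axis=1)
coef, *_ = np.linalg.lstsq(V, f(pts), rcond=None)

test_pts = np.random.default_rng(1).uniform(-1.0, 1.0, size=(200, 2))
Vt = np.stack([test_pts[:, 0]**i * test_pts[:, 1]**j
               for i in range(m) for j in range(m)], axis=1)
err = np.max(np.abs(Vt @ coef - f(test_pts)))
print(err)
```

    Because the nodes are Chebyshev zeros, the fit is a tensor-product Chebyshev interpolant in disguise, and the error on a smooth function decays rapidly with the per-axis degree.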

  17. New Class of Quantum Error-Correcting Codes for a Bosonic Mode

    NASA Astrophysics Data System (ADS)

    Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.

    2016-07-01

    We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
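
    The smallest binomial code, with code words |0_L> = (|0> + |4>)/sqrt(2) and |1_L> = |2>, can be checked against the Knill-Laflamme conditions for single boson loss in a truncated Fock space. A numerical sketch (illustrative; the paper treats the general code family and larger error sets):

```python
import numpy as np

dim = 6                                          # truncated Fock space |0>..|5>
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)     # annihilation operator

def fock(n):
    v = np.zeros(dim)
    v[n] = 1.0
    return v

# Smallest binomial code: |0_L> = (|0> + |4>)/sqrt(2),  |1_L> = |2>.
zero_L = (fock(0) + fock(4)) / np.sqrt(2)
one_L = fock(2)
code = np.column_stack([zero_L, one_L])

# Knill-Laflamme conditions for the error set {I, a} (single boson loss):
# the 2x2 matrix <i_L| Ek^T El |j_L> must be proportional to the identity.
errors = [np.eye(dim), a]
ok = True
for Ek in errors:
    for El in errors:
        M = code.T @ Ek.T @ El @ code
        ok &= np.allclose(M, M[0, 0] * np.eye(2), atol=1e-12)
print(ok)
```

    Both code words have mean boson number 2 (the generalized number-parity structure mentioned above), which is why the loss error maps them to orthogonal, equal-norm states and the conditions hold.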

  18. Probing Planckian Corrections at the Horizon Scale with LISA Binaries

    NASA Astrophysics Data System (ADS)

    Maselli, Andrea; Pani, Paolo; Cardoso, Vitor; Abdelsalhin, Tiziano; Gualtieri, Leonardo; Ferrari, Valeria

    2018-02-01

    Several quantum-gravity models of compact objects predict microscopic or even Planckian corrections at the horizon scale. We explore the possibility of measuring two model-independent, smoking-gun effects of these corrections in the gravitational waveform of a compact binary, namely, the absence of tidal heating and the presence of tidal deformability. For events detectable by the future space-based interferometer LISA, we show that the effect of tidal heating dominates and allows one to constrain putative corrections down to the Planck scale. The measurement of the tidal Love numbers with LISA is more challenging but, in optimistic scenarios, it allows us to constrain the compactness of a supermassive exotic compact object down to the Planck scale. Our analysis suggests that highly spinning, supermassive binaries at 1-20 Gpc provide unparalleled tests of quantum-gravity effects at the horizon scale.

  19. Probing Planckian Corrections at the Horizon Scale with LISA Binaries.

    PubMed

    Maselli, Andrea; Pani, Paolo; Cardoso, Vitor; Abdelsalhin, Tiziano; Gualtieri, Leonardo; Ferrari, Valeria

    2018-02-23

    Several quantum-gravity models of compact objects predict microscopic or even Planckian corrections at the horizon scale. We explore the possibility of measuring two model-independent, smoking-gun effects of these corrections in the gravitational waveform of a compact binary, namely, the absence of tidal heating and the presence of tidal deformability. For events detectable by the future space-based interferometer LISA, we show that the effect of tidal heating dominates and allows one to constrain putative corrections down to the Planck scale. The measurement of the tidal Love numbers with LISA is more challenging but, in optimistic scenarios, it allows us to constrain the compactness of a supermassive exotic compact object down to the Planck scale. Our analysis suggests that highly spinning, supermassive binaries at 1-20 Gpc provide unparalleled tests of quantum-gravity effects at the horizon scale.

  20. Prior-based artifact correction (PBAC) in computed tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heußer, Thorsten, E-mail: thorsten.heusser@dkfz-heidelberg.de; Brehm, Marcus; Ritschl, Ludwig

    2014-02-15

    Purpose: Image quality in computed tomography (CT) often suffers from artifacts which may reduce the diagnostic value of the image. In many cases, these artifacts result from missing or corrupt regions in the projection data, e.g., in the case of metal, truncation, and limited angle artifacts. The authors propose a generalized correction method for different kinds of artifacts resulting from missing or corrupt data by making use of available prior knowledge to perform data completion. Methods: The proposed prior-based artifact correction (PBAC) method requires prior knowledge in form of a planning CT of the same patient or in form of a CT scan of a different patient showing the same body region. In both cases, the prior image is registered to the patient image using a deformable transformation. The registered prior is forward projected and data completion of the patient projections is performed using smooth sinogram inpainting. The obtained projection data are used to reconstruct the corrected image. Results: The authors investigate metal and truncation artifacts in patient data sets acquired with a clinical CT and limited angle artifacts in an anthropomorphic head phantom data set acquired with a gantry-based flat detector CT device. In all cases, the corrected images obtained by PBAC are nearly artifact-free. Compared to conventional correction methods, PBAC achieves better artifact suppression while preserving the patient-specific anatomy at the same time. Further, the authors show that prominent anatomical details in the prior image seem to have only minor impact on the correction result. Conclusions: The results show that PBAC has the potential to effectively correct for metal, truncation, and limited angle artifacts if adequate prior data are available. Since the proposed method makes use of a generalized algorithm, PBAC may also be applicable to other artifacts resulting from missing or corrupt data.
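
    A toy one-dimensional sketch of the inpainting idea (hypothetical smooth-offset blending over a single detector row; a drastic simplification of the authors' sinogram inpainting, not their implementation): corrupt readings are replaced by the registered prior's forward projection, shifted by an offset interpolated from the valid neighbours so the filled region blends into the patient data.

```python
import numpy as np

def pbac_inpaint(patient_sino, prior_sino, corrupt):
    """Fill corrupt detector readings from a registered prior projection.
    The patient-minus-prior offset is interpolated smoothly across the
    corrupt region so the completion matches the surrounding patient data."""
    out = patient_sino.copy()
    idx = np.arange(out.size)
    valid = ~corrupt
    offset = np.interp(idx, idx[valid], (patient_sino - prior_sino)[valid])
    out[corrupt] = prior_sino[corrupt] + offset[corrupt]
    return out

# Demo: the prior differs from the patient by a smooth bias; a block of
# channels (a "metal shadow") is corrupt.
t = np.linspace(0.0, np.pi, 64)
patient = np.sin(t) + 0.2              # true patient projection
prior = np.sin(t)                      # registered prior projection
corrupt = (np.arange(64) >= 25) & (np.arange(64) < 40)
patient_bad = patient.copy()
patient_bad[corrupt] = 0.0             # simulated corrupt data
filled = pbac_inpaint(patient_bad, prior, corrupt)
gap_err = np.max(np.abs(filled[corrupt] - patient[corrupt]))
print(gap_err)
```

    In this idealized case the patient-prior difference is constant, so the completion is exact; in practice the interpolation only approximates the missing data, which is why the prior must be deformably registered first.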

  1. Inequalities for a polynomial and its derivative

    NASA Astrophysics Data System (ADS)

    Chanam, Barchand; Dewan, K. K.

    2007-12-01

    Let p(z), 1 ≤ μ ≤ n, be a polynomial of degree n such that p(z) ≠ 0 in … ; then, for … , Dewan et al. [Inequalities for a polynomial and its derivative, Math. Inequal. Appl. 2 (2) (1999) 203-205] proved … . Equality holds for the polynomial … , where n is a multiple of μ. In this paper, we obtain an improvement of the above inequality by involving some of the coefficients. As an application of our result, we further improve upon a result recently proved by Aziz and Shah [A. Aziz, W.M. Shah, Inequalities for a polynomial and its derivative, Math. Inequal. Appl. 7 (3) (2004) 379-391].

  2. Tympanometry in infants: a study of the sensitivity and specificity of 226-Hz and 1,000-Hz probe tones.

    PubMed

    Carmo, Michele Picanço; Costa, Nayara Thais de Oliveira; Momensohn-Santos, Teresa Maria

    2013-10-01

    Introduction: For infants under 6 months, the literature recommends 1,000-Hz tympanometry, which has a greater sensitivity for the correct identification of middle ear disorders in this population. Objective: To systematically analyze national and international publications found in electronic databases that used tympanometry with 226-Hz and 1,000-Hz probe tones. Data Synthesis: Initially, we identified 36 articles in the SciELO database, 11 in the Latin American and Caribbean Literature on the Health Sciences (LILACS) database, 199 in MEDLINE, 0 in the Cochrane database, 16 in ISI Web of Knowledge, and 185 in the Scopus database. We excluded 433 articles because they did not fit the selection criteria, leaving 14 publications that were analyzed in their entirety. Conclusions: The 1,000-Hz tone test has greater sensitivity and specificity for the correct identification of tympanometric curve changes. However, it is necessary to clarify the doubts that still exist regarding the use of this test frequency. Improved methods for rating curves, standardization of normality criteria, and the types of curves found in infants should be addressed.

  3. Tympanometry in Infants: A Study of the Sensitivity and Specificity of 226-Hz and 1,000-Hz Probe Tones

    PubMed Central

    Carmo, Michele Picanço; Costa, Nayara Thais de Oliveira; Momensohn-Santos, Teresa Maria

    2013-01-01

    Introduction: For infants under 6 months, the literature recommends 1,000-Hz tympanometry, which has a greater sensitivity for the correct identification of middle ear disorders in this population. Objective: To systematically analyze national and international publications found in electronic databases that used tympanometry with 226-Hz and 1,000-Hz probe tones. Data Synthesis: Initially, we identified 36 articles in the SciELO database, 11 in the Latin American and Caribbean Literature on the Health Sciences (LILACS) database, 199 in MEDLINE, 0 in the Cochrane database, 16 in ISI Web of Knowledge, and 185 in the Scopus database. We excluded 433 articles because they did not fit the selection criteria, leaving 14 publications that were analyzed in their entirety. Conclusions: The 1,000-Hz tone test has greater sensitivity and specificity for the correct identification of tympanometric curve changes. However, it is necessary to clarify the doubts that still exist regarding the use of this test frequency. Improved methods for rating curves, standardization of normality criteria, and the types of curves found in infants should be addressed. PMID:25992044

  4. Astigmatism-corrected echelle spectrometer using an off-the-shelf cylindrical lens.

    PubMed

    Fu, Xiao; Duan, Fajie; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Lv, Changrong

    2017-10-01

    As a special kind of spectrometer with the Czerny-Turner structure, the echelle spectrometer features two-dimensional dispersion, which leads to a complex astigmatic condition. In this work, we propose an optical design for an astigmatism-corrected echelle spectrometer using an off-the-shelf cylindrical lens. A mathematical model considering the astigmatism introduced by the off-axis mirrors, the echelle grating, and the prism is established. Our solution features simplified calculation and low-cost construction, and is capable of overall compensation of the astigmatism over a wide spectral range (200-600 nm). An optical simulation utilizing ZEMAX software, an astigmatism assessment based on Zernike polynomials, and an instrument experiment are implemented to validate the effect of the astigmatism correction. The results demonstrate that the astigmatism of the echelle spectrometer was corrected to a large extent, and a high spectral resolution, better than 0.1 nm, was achieved.

  5. Construction of specific magnetic resonance imaging/optical dual-modality molecular probe used for imaging angiogenesis of gastric cancer.

    PubMed

    Yan, Xuejie; Song, Xiaoyan; Wang, Zhenbo

    2017-05-01

    The purpose of this study was to construct a specific magnetic resonance imaging (MRI)/optical dual-modality molecular probe. Tumor-bearing animal models were established. The MRI/optical dual-modality molecular probe was constructed by coupling polyethylene glycol (PEG)-modified nano-Fe3O4 with the specific targeted cyclopeptide GX1 and the near-infrared fluorescent dye Cy5.5. The MRI/optical imaging performance of the probe was observed and the feasibility of in vivo dual-modality imaging was discussed. The dual-modality probe showed high stability; the tumor signal of the experimental group weakened after injection of the probe but returned to near its previous level after 18 h (p > 0.05). We successfully constructed an ideal MRI/optical dual-modality molecular probe. An MRI/optical dual-modality molecular probe that selectively accumulates in gastric cancer is expected to be a novel probe for diagnosing gastric cancer at an early stage.

  6. A New Navigation Satellite Clock Bias Prediction Method Based on Modified Clock-bias Quadratic Polynomial Model

    NASA Astrophysics Data System (ADS)

    Wang, Y. P.; Lu, Z. P.; Sun, D. S.; Wang, N.

    2016-01-01

    In order to better express the characteristics of satellite clock bias (SCB) and improve SCB prediction precision, this paper proposes a new SCB prediction model which takes the physical characteristics of the space-borne atomic clock, the cyclic variation, and the random part of SCB into consideration. First, the new model employs a quadratic polynomial model with periodic items to fit and extract the trend term and cyclic term of SCB; then, based on the characteristics of the fitting residuals, a time series ARIMA (Auto-Regressive Integrated Moving Average) model is used to model the residuals; eventually, the results from the two models are combined to obtain the final SCB prediction values. Finally, this paper uses precise SCB data from the IGS (International GNSS Service) to conduct prediction tests, and the results show that the proposed model is effective and has better prediction performance than the quadratic polynomial model, the grey model, and the ARIMA model. In addition, the new method also overcomes the insufficiency of the ARIMA model in model recognition and order determination.
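
    The trend-plus-periodic fitting step can be sketched with ordinary least squares on synthetic data (the ~12 h period, coefficients, and noise level are hypothetical; a lag-1 regression stands in here for the ARIMA residual model, which in practice would use a dedicated library such as statsmodels):

```python
import numpy as np

def design_matrix(t, period):
    """Quadratic trend plus one periodic term with a known cyclic period."""
    w = 2.0 * np.pi / period
    return np.column_stack([np.ones_like(t), t, t**2,
                            np.sin(w * t), np.cos(w * t)])

rng = np.random.default_rng(2)
t = np.arange(0.0, 48.0, 0.25)                     # epochs in hours
truth = lambda s: (1e-4 + 2e-6 * s + 3e-9 * s**2
                   + 5e-7 * np.sin(2 * np.pi * s / 12.0))
y = truth(t) + 1e-8 * rng.standard_normal(t.size)  # synthetic SCB series

# Step 1: fit and extract the trend and cyclic terms by least squares.
coef, *_ = np.linalg.lstsq(design_matrix(t, 12.0), y, rcond=None)
resid = y - design_matrix(t, 12.0) @ coef

# Step 2 (stand-in): the paper models resid with ARIMA; a lag-1 regression
# coefficient sketches the autoregressive part of that step.
phi = np.dot(resid[:-1], resid[1:]) / np.dot(resid[:-1], resid[:-1])

# Step 3: extrapolate the deterministic part to future epochs.
t_new = np.arange(48.0, 54.0, 0.25)
pred = design_matrix(t_new, 12.0) @ coef
pred_err = np.max(np.abs(pred - truth(t_new)))
print(pred_err)
```

    In the combined model, the ARIMA forecast of the residual series would be added to the deterministic extrapolation computed in step 3.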

  7. Uncertainty Quantification in Simulations of Epidemics Using Polynomial Chaos

    PubMed Central

    Santonja, F.; Chen-Charpentier, B.

    2012-01-01

    Mathematical models based on ordinary differential equations are a useful tool to study the processes involved in epidemiology. Many models consider the parameters to be deterministic variables. But in practice, the transmission parameters present large variability and it is not possible to determine them exactly, so it is necessary to introduce randomness. In this paper, we present an application of the polynomial chaos approach to epidemiological mathematical models based on ordinary differential equations with random coefficients. Taking into account the variability of the transmission parameters of the model, this approach allows us to obtain an auxiliary system of differential equations, which is then integrated numerically to obtain the first- and second-order moments of the output stochastic processes. A sensitivity analysis based on the polynomial chaos approach is also performed to determine which parameters have the greatest influence on the results. As an example, we apply the approach to an obesity epidemic model. PMID:22927889
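
    A non-intrusive cousin of this idea can be sketched with Gauss-Hermite collocation on a toy SIS-type model with a Gaussian random transmission rate (the paper derives an intrusive Galerkin system instead; the model, parameters, and distribution here are hypothetical). The deterministic ODE is solved at the quadrature nodes and the first two moments follow by quadrature, cross-checked against Monte Carlo:

```python
import numpy as np

def sis_final_prevalence(beta, i0=0.01, gamma=0.5, T=10.0, dt=0.02):
    """Euler integration of the SIS-type model dI/dt = beta*I*(1-I) - gamma*I.
    Works for scalar or array-valued beta (vectorized over samples)."""
    i = i0
    for _ in range(int(T / dt)):
        i = i + dt * (beta * i * (1.0 - i) - gamma * i)
    return i

# Random transmission rate beta ~ Normal(1.0, 0.1^2). Probabilists'
# Gauss-Hermite quadrature supplies the collocation nodes and weights.
nodes, weights = np.polynomial.hermite_e.hermegauss(8)
weights = weights / weights.sum()            # normalize to a probability rule
vals = sis_final_prevalence(1.0 + 0.1 * nodes)
pc_mean = np.dot(weights, vals)
pc_var = np.dot(weights, (vals - pc_mean) ** 2)

# Brute-force Monte Carlo cross-check.
rng = np.random.default_rng(3)
mc_vals = sis_final_prevalence(1.0 + 0.1 * rng.standard_normal(4000))
print(pc_mean, mc_vals.mean(), pc_var, mc_vals.var())
```

    Eight quadrature nodes match thousands of Monte Carlo runs because the output depends smoothly on the random parameter, which is the same smoothness the Hermite polynomial chaos expansion exploits.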

  8. A detailed transcript-level probe annotation reveals alternative splicing based microarray platform differences

    PubMed Central

    Lee, Joseph C; Stiles, David; Lu, Jun; Cam, Margaret C

    2007-01-01

    Background Microarrays are a popular tool used in experiments to measure gene expression levels. Improving the reproducibility of microarray results produced by different chips from various manufacturers is important to create comparable and combinable experimental results. Alternative splicing has been cited as a possible cause of differences in expression measurements across platforms, though no study to this point has been conducted to show its influence in cross-platform differences. Results Using probe sequence data, a new microarray probe/transcript annotation was created based on the AceView Aug05 release that allowed for the categorization of genes based on their expression measurements' susceptibility to alternative splicing differences across microarray platforms. Examining gene expression data from multiple platforms in light of the new categorization, genes unsusceptible to alternative splicing differences showed higher signal agreement than those genes most susceptible to alternative splicing differences. The analysis gave rise to a different probe-level visualization method that can highlight probe differences according to transcript specificity. Conclusion The results highlight the need for detailed probe annotation at the transcriptome level. The presence of alternative splicing within a given sample can affect gene expression measurements and is a contributing factor to overall technical differences across platforms. PMID:17708771

  9. Optimal Chebyshev polynomials on ellipses in the complex plane

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Freund, Roland

    1989-01-01

    The design of iterative schemes for sparse matrix computations often leads to constrained polynomial approximation problems on sets in the complex plane. For the case of ellipses, we introduce a new class of complex polynomials which are in general very good approximations to the best polynomials and even optimal in most cases.

  10. A fast tumor-targeting near-infrared fluorescent probe based on bombesin analog for in vivo tumor imaging.

    PubMed

    Chen, Haiyan; Wan, Shunan; Zhu, Fenxia; Wang, Chuan; Cui, Sisi; Du, Changli; Ma, Yuxiang; Gu, Yueqing

    2014-01-01

    Bombesin (BBN), an analog of gastrin-releasing peptide (GRP) whose receptors are over-expressed on various tumor cells, binds specifically to the GRP receptor. In this study, a near-infrared fluorescent dye (MPA) and polyethylene glycol (PEG) were conjugated to a BBN analog to form BBN[7-14]-MPA and BBN[7-14]-SA-PEG-MPA. The successful synthesis of the two probes was confirmed by characterization via sodium dodecylsulfate-polyacrylamide gel electrophoresis and infrared and optical spectra. Cellular uptake studies indicated that the BBN-based probes were mediated by gastrin-releasing peptide receptors (GRPR) on tumor cells and that the PEG-modified probe had higher affinity. The dynamic distribution and clearance investigations showed that the BBN-based probes were eliminated by the liver-kidney pathway. Furthermore, both BBN-based probes displayed tumor-targeting ability in GRPR over-expressing tumor-bearing mice. The PEG-modified probe exhibited faster and higher tumor-targeting capability than BBN[7-14]-MPA. The results imply that BBN[7-14]-SA-PEG-MPA could act as an effective fluorescence probe for tumor imaging. Copyright © 2014 John Wiley & Sons, Ltd.

  11. Principal polynomial analysis.

    PubMed

    Laparra, Valero; Jiménez, Sandra; Tuia, Devis; Camps-Valls, Gustau; Malo, Jesus

    2014-11-01

    This paper presents a new framework for manifold learning based on a sequence of principal polynomials that capture the possibly nonlinear nature of the data. The proposed Principal Polynomial Analysis (PPA) generalizes PCA by modeling the directions of maximal variance by means of curves, instead of straight lines. Contrary to previous approaches, PPA reduces to performing simple univariate regressions, which makes it computationally feasible and robust. Moreover, PPA shows a number of interesting analytical properties. First, PPA is a volume-preserving map, which in turn guarantees the existence of the inverse. Second, such an inverse can be obtained in closed form. Invertibility is an important advantage over other learning methods, because it permits understanding the identified features in the input domain, where the data have physical meaning. Moreover, it allows evaluation of the performance of dimensionality reduction in sensible (input-domain) units. Volume preservation also allows an easy computation of information-theoretic quantities, such as the reduction in multi-information after the transform. Third, the analytical nature of PPA leads to a clear geometrical interpretation of the manifold: it allows the computation of Frenet-Serret frames (local features) and of generalized curvatures at any point of the space. And fourth, the analytical Jacobian allows the computation of the metric induced by the data, thus generalizing the Mahalanobis distance. These properties are demonstrated theoretically and illustrated experimentally. The performance of PPA is evaluated in dimensionality and redundancy reduction, on both synthetic and real datasets from the UCI repository.
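
    The curve-fitting idea behind PPA can be sketched concretely: take the leading PCA direction, then model the orthogonal residual as a univariate polynomial function of the projection, turning the straight first principal component into a principal curve. The sketch below illustrates this under simplifying assumptions (a single component, ordinary least-squares `np.polyfit`); it is not the authors' implementation.

```python
import numpy as np

def ppa_first_component(X, degree=2):
    """Hypothetical helper (not the authors' code): one PPA step.
    Project onto the leading PCA direction, then fit each coordinate of
    the orthogonal residual as a polynomial in that projection, so the
    first 'component' becomes a curve instead of a straight line."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    v = Vt[0]                              # leading principal direction
    t = Xc @ v                             # scores along v
    R = Xc - np.outer(t, v)                # residuals orthogonal to v
    coeffs = [np.polyfit(t, R[:, j], degree) for j in range(X.shape[1])]

    def curve(tq):
        # Reconstruct points on the fitted principal curve.
        pred = np.stack([np.polyval(c, tq) for c in coeffs], axis=-1)
        return mu + np.outer(tq, v) + pred

    return t, curve

# Noisy parabola: a straight first PC misses the bend; the curve tracks it.
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 400)
X = np.column_stack([x1, x1**2 + 0.01 * rng.normal(size=400)])
t, curve = ppa_first_component(X, degree=2)
rms = np.sqrt(np.mean((curve(t) - X) ** 2))
```

    Because each residual coordinate is fitted by a plain univariate regression, the cost stays close to that of PCA itself, which is the computational point the abstract makes.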

  12. Tutorial on Reed-Solomon error correction coding

    NASA Technical Reports Server (NTRS)

    Geisel, William A.

    1990-01-01

    This tutorial attempts to provide a frank, step-by-step approach to Reed-Solomon (RS) error correction coding. RS encoding and RS decoding both with and without erasing code symbols are emphasized. There is no need to present rigorous proofs and extreme mathematical detail. Rather, the simple concepts of groups and fields, specifically Galois fields, are presented with a minimum of complexity. Before RS codes are presented, other block codes are presented as a technical introduction into coding. A primitive (15, 9) RS coding example is then completely developed from start to finish, demonstrating the encoding and decoding calculations and a derivation of the famous error-locator polynomial. The objective is to present practical information about Reed-Solomon coding in a manner such that it can be easily understood.
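
    The Galois-field arithmetic the tutorial builds up can be shown in a few lines. The sketch below implements multiplication in GF(2^4), the field underlying a (15, 9) RS code, using the primitive polynomial x^4 + x + 1 (a common choice; the tutorial's polynomial may differ).

```python
def gf16_mul(a, b):
    """Multiply two elements of GF(2^4), each a 4-bit integer, reducing
    modulo the primitive polynomial x^4 + x + 1. This is the field
    arithmetic underlying a (15, 9) Reed-Solomon code."""
    p = 0
    for _ in range(4):
        if b & 1:
            p ^= a                  # add (XOR) the current shift of a
        b >>= 1
        high = a & 0x8              # will the x^3 term overflow on shift?
        a = (a << 1) & 0xF
        if high:
            a ^= 0b0011             # x^4 == x + 1 (mod x^4 + x + 1)
    return p

# alpha = x (0b0010) is primitive: its powers enumerate every nonzero element.
powers, e = [], 1
for _ in range(15):
    powers.append(e)
    e = gf16_mul(e, 0b0010)
```

    In an RS encoder, message symbols become coefficients of a polynomial over this field, which is divided by a generator polynomial built from consecutive powers of alpha; the 15 distinct powers above are what give the code its block length of 15 symbols.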

  13. Recent Progress in Fluorescent Imaging Probes

    PubMed Central

    Pak, Yen Leng; Swamy, K. M. K.; Yoon, Juyoung

    2015-01-01

    Owing to their simplicity and low detection limits, and especially their ability to image cells, fluorescent probes serve as unique detection methods. With the aid of molecular recognition and specific organic reactions, research on fluorescent imaging probes has blossomed during the last decade. In particular, reaction-based fluorescent probes have proven to be highly selective for specific analytes. This review highlights our recent progress on fluorescent imaging probes for biologically important species, such as biothiols, reactive oxygen species, reactive nitrogen species, metal ions including Zn2+, Hg2+, Cu2+ and Au3+, and anions including cyanide and adenosine triphosphate (ATP). PMID:26402684

  14. Recent Progress in Fluorescent Imaging Probes.

    PubMed

    Pak, Yen Leng; Swamy, K M K; Yoon, Juyoung

    2015-09-22

    Owing to their simplicity and low detection limits, and especially their ability to image cells, fluorescent probes serve as unique detection methods. With the aid of molecular recognition and specific organic reactions, research on fluorescent imaging probes has blossomed during the last decade. In particular, reaction-based fluorescent probes have proven to be highly selective for specific analytes. This review highlights our recent progress on fluorescent imaging probes for biologically important species, such as biothiols, reactive oxygen species, reactive nitrogen species, metal ions including Zn(2+), Hg(2+), Cu(2+) and Au(3+), and anions including cyanide and adenosine triphosphate (ATP).

  15. Robust distortion correction of endoscope

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Nie, Sixiang; Soto-Thompson, Marcelo; Chen, Chao-I.; A-Rahim, Yousif I.

    2008-03-01

    Endoscopic images suffer from a fundamental spatial distortion due to the wide-angle design of the endoscope lens. This barrel-type distortion is an obstacle for subsequent Computer Aided Diagnosis (CAD) algorithms and should be corrected. Various methods and research models for barrel-type distortion correction have been proposed and studied. For industrial applications, a stable, robust method with high accuracy is required to calibrate the different types of endoscopes in an easy-to-use way. The correction area must be large enough to cover all the regions that the physicians need to see. In this paper, we present our endoscope distortion correction procedure, which includes data acquisition, distortion center estimation, distortion coefficient calculation, and look-up table (LUT) generation. We investigate different polynomial models used for modeling the distortion and propose a new one which provides correction results with better visual quality. The method has been verified with four types of colonoscopes. The correction procedure is currently being applied to human subject data and the coefficients are being utilized in a subsequent 3D reconstruction project of the colon.
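
    A minimal sketch of the LUT-generation step, assuming the widely used radial polynomial model r_u = r_d(1 + k1*r_d^2 + k2*r_d^4) with illustrative coefficients (not necessarily the paper's model): for each pixel of the corrected image, the distorted source radius is recovered by fixed-point iteration.

```python
import numpy as np

def undistort_lut(width, height, cx, cy, k1, k2):
    """Backward-mapping LUT for the common radial model
    r_u = r_d * (1 + k1*r_d**2 + k2*r_d**4) (illustrative model and
    coefficients, not the paper's). For each corrected-image pixel we
    recover the distorted radius by fixed-point iteration and return
    the source coordinates to sample in the distorted image."""
    ys, xs = np.mgrid[0:height, 0:width].astype(float)
    dx, dy = xs - cx, ys - cy
    ru = np.hypot(dx, dy)                   # radius in the corrected image
    rd = ru.copy()
    for _ in range(20):                     # contraction for small k1, k2
        rd = ru / (1.0 + k1 * rd**2 + k2 * rd**4)
    scale = np.where(ru > 0, rd / ru, 1.0)
    return cx + dx * scale, cy + dy * scale

# Pixels far from the distortion center map back toward it, undoing the
# barrel stretch; the two arrays together form the look-up table.
src_x, src_y = undistort_lut(64, 64, 31.5, 31.5, 1e-4, 1e-9)
```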

  16. Universal Racah matrices and adjoint knot polynomials: Arborescent knots

    NASA Astrophysics Data System (ADS)

    Mironov, A.; Morozov, A.

    2016-04-01

    By now it is well established that the quantum dimensions of descendants of the adjoint representation can be described in a universal form, independent of a particular family of simple Lie algebras. The Rosso-Jones formula then implies a universal description of the adjoint knot polynomials for torus knots, which in particular unifies the HOMFLY (SU(N)) and Kauffman (SO(N)) polynomials. For E8 the adjoint representation is also fundamental. We suggest extending the universality from the dimensions to the Racah matrices, and this immediately produces a unified description of the adjoint knot polynomials for all arborescent (double-fat) knots, including twist, 2-bridge and pretzel. Technically, we develop together the universality and the "eigenvalue conjecture", which expresses the Racah and mixing matrices through the eigenvalues of the quantum R-matrix, and for dealing with the adjoint polynomials one has to extend it to the previously unknown 6 × 6 case. The adjoint polynomials do not distinguish between mutants and therefore are not very efficient in knot theory; however, universal polynomials in higher representations can probably be better in this respect.

  17. Quantum models with energy-dependent potentials solvable in terms of exceptional orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulze-Halberg, Axel, E-mail: axgeschu@iun.edu; Department of Physics, Indiana University Northwest, 3400 Broadway, Gary IN 46408; Roy, Pinaki, E-mail: pinaki@isical.ac.in

    We construct energy-dependent potentials for which the Schrödinger equations admit solutions in terms of exceptional orthogonal polynomials. Our method of construction is based on certain point transformations, applied to the equations of exceptional Hermite, Jacobi and Laguerre polynomials. We present several examples of boundary-value problems with energy-dependent potentials that admit a discrete spectrum and the corresponding normalizable solutions in closed form.

  18. Phage & phosphatase: a novel phage-based probe for rapid, multi-platform detection of bacteria.

    PubMed

    Alcaine, S D; Pacitto, D; Sela, D A; Nugen, S R

    2015-11-21

    Genetic engineering of bacteriophages allows for the development of rapid, highly specific, and easily manufactured probes for the detection of bacterial pathogens. A challenge for novel probes is the ease of their adoption in real-world laboratories. We have engineered the bacteriophage T7, which targets Escherichia coli, to carry the alkaline phosphatase gene, phoA. This inclusion results in phoA overexpression following phage infection of E. coli. Alkaline phosphatase is commonly used in a wide range of diagnostics, and thus a signal produced by our phage-based probe could be detected using common laboratory equipment. Our work demonstrates the successful: (i) modification of T7 phage to carry phoA; (ii) overexpression of alkaline phosphatase in E. coli; and (iii) detection of this T7-induced alkaline phosphatase activity using commercially available colorimetric and chemiluminescent methods. Furthermore, we demonstrate the application of our phage-based probe to rapidly detect low levels of bacteria and discern the antibiotic resistance of E. coli isolates. Using our bioengineered phage-based probe we were able to detect 10(3) CFU per mL of E. coli in 6 hours using a chemiluminescent substrate and 10(4) CFU per mL within 7.5 hours using a colorimetric substrate. We also show the application of this phage-based probe for antibiotic resistance testing. We were able to determine whether an E. coli isolate was resistant to ampicillin within 4.5 hours using a chemiluminescent substrate and within 6 hours using a colorimetric substrate. This phage-based scheme could be readily adopted in labs without significant capital investment and can be translated to other phage-bacteria pairs for further detection.

  19. Application of Zernike polynomials towards accelerated adaptive focusing of transcranial high intensity focused ultrasound

    PubMed Central

    Kaye, Elena A.; Hertzberg, Yoni; Marx, Michael; Werner, Beat; Navon, Gil; Levoy, Marc; Pauly, Kim Butts

    2012-01-01

    Purpose: To study the phase aberrations produced by human skulls during transcranial magnetic resonance imaging guided focused ultrasound surgery (MRgFUS), to demonstrate the potential of Zernike polynomials (ZPs) to accelerate the adaptive focusing process, and to investigate the benefits of using phase corrections obtained in previous studies to provide the initial guess for correction of a new data set. Methods: The five phase aberration data sets analyzed here were calculated based on preoperative computerized tomography (CT) images of the head obtained during previous transcranial MRgFUS treatments performed using a clinical prototype hemispherical transducer. The noniterative adaptive focusing algorithm [Larrat et al., "MR-guided adaptive focusing of ultrasound," IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57(8), 1734-1747 (2010); doi:10.1109/TUFFC.2010.1612] was modified by replacing Hadamard encoding with Zernike encoding. The algorithm was tested in simulations to correct the patients' phase aberrations. MR acoustic radiation force imaging (MR-ARFI) was used to visualize the effect of the phase aberration correction on the focusing of a hemispherical transducer. In addition, two methods for constructing an initial phase correction estimate based on previous patients' data were investigated. The benefits of the initial estimates in the Zernike-based algorithm were analyzed by measuring their effect on the ultrasound intensity at the focus and on the number of ZP modes necessary to achieve 90% of the intensity of the nonaberrated case. Results: Covariance of the pairs of the phase aberration data sets showed high correlation between the aberration data of several patients and suggested that subgroups can be formed based on level of correlation. Simulation of the Zernike-based algorithm demonstrated the overall greater correction effectiveness of the low modes of ZPs. The focal intensity achieves 90% of nonaberrated intensity using fewer than 170 modes of ZPs. The
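
    Zernike encoding rests on the fact that ZPs form an orthogonal basis over the unit disk (here, the transducer aperture), so a phase aberration map can be compressed into a small number of mode coefficients. A small sketch using unnormalized low-order modes (normalization conventions vary; this is an illustration, not the study's code) checks that orthogonality numerically:

```python
import numpy as np

def zernike_modes(rho, theta):
    # A few low-order Zernike modes in unnormalized form (conventions vary).
    return {
        "piston":  np.ones_like(rho),
        "tilt_x":  rho * np.cos(theta),
        "tilt_y":  rho * np.sin(theta),
        "defocus": 2 * rho**2 - 1,
        "astig":   rho**2 * np.cos(2 * theta),
    }

# Sample the unit disk on a grid and check that distinct modes are
# (numerically) orthogonal: their product averages to ~0 over the disk.
n = 801
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
rho, theta = np.hypot(x, y), np.arctan2(y, x)
mask = rho <= 1.0
Z = {name: vals[mask] for name, vals in zernike_modes(rho, theta).items()}

def inner(a, b):
    return np.mean(Z[a] * Z[b])     # average over the sampled disk
```

    Orthogonality is what lets a truncated ZP expansion (the "fewer than 170 modes" above) approximate the aberration with independent, non-redundant coefficients.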

  20. New Concepts of Fluorescent Probes for Specific Detection of DNA Sequences: Bis-Modified Oligonucleotides in Excimer and Exciplex Detection

    PubMed Central

    Gbaj, A; Bichenkova, EV; Walsh, L; Savage, HE; Sardarian, AR; Etchells, LL; Gulati, A; Hawisa, S; Douglas, KT

    2009-01-01

    The detection of single-base mismatches in DNA is important for diagnostics, treatment of genetic diseases, and identification of single nucleotide polymorphisms. Highly sensitive, specific assays are needed to investigate genetic samples from patients. The use of a simple fluorescent nucleoside analogue in detection of DNA sequence and point mutations by hybridisation in solution is described in this study. The 5′-bispyrene and 3′-naphthalene oligonucleotide probes form an exciplex on hybridisation to target in water, and the 5′-bispyrene oligonucleotide alone is an adequate probe to determine the concentration of target present. It was also indicated that this system has the potential to identify mismatches and insertions. The aim of this work was to investigate experimental structures and conditions that permit strong exciplex emission for nucleic acid detectors, and to show how such exciplexes can register the presence of mismatches as required in SNP analysis. This study revealed that the hybridisation of the 5′-bispyrenyl fluorophore to a DNA target results in formation of a fluorescent probe with high signal intensity change and specificity for detecting a complementary target in a homogeneous system. Detection of SNP mutations using this split-probe system is a highly specific, simple, and accessible method that meets the rigorous requirements of pharmacogenomic studies. Thus, it is possible for the system to act as an SNP detector, and it shows promise for future applications in genetic testing. PMID:21483539

  1. Hydrophobic fluorescent probes introduce artifacts into single molecule tracking experiments due to non-specific binding.

    PubMed

    Zanetti-Domingues, Laura C; Tynan, Christopher J; Rolfe, Daniel J; Clarke, David T; Martin-Fernandez, Marisa

    2013-01-01

    Single-molecule techniques are powerful tools to investigate the structure and dynamics of macromolecular complexes; however, data quality can suffer because of weak specific signal, background noise and dye bleaching and blinking. It is less well-known, but equally important, that non-specific binding of probe to substrates results in a large number of immobile fluorescent molecules, introducing significant artifacts in live cell experiments. Following from our previous work in which we investigated glass coating substrates and demonstrated that the main contribution to this non-specific probe adhesion comes from the dye, we carried out a systematic investigation of how different dye chemistries influence the behaviour of spectrally similar fluorescent probes. Single-molecule brightness, bleaching and probe mobility on the surface of live breast cancer cells cultured on a non-adhesive substrate were assessed for anti-EGFR affibody conjugates with 14 different dyes from 5 different manufacturers, belonging to 3 spectrally homogeneous bands (491 nm, 561 nm and 638 nm laser lines excitation). Our results indicate that, as well as influencing their photophysical properties, dye chemistry has a strong influence on the propensity of dye-protein conjugates to adhere non-specifically to the substrate. In particular, hydrophobicity has a strong influence on interactions with the substrate, with hydrophobic dyes showing much greater levels of binding. Crucially, high levels of non-specific substrate binding result in calculated diffusion coefficients significantly lower than the true values. We conclude that the physico-chemical properties of the dyes should be considered carefully when planning single-molecule experiments. Favourable dye characteristics such as photostability and brightness can be offset by the propensity of a conjugate for non-specific adhesion.

  2. Hydrophobic Fluorescent Probes Introduce Artifacts into Single Molecule Tracking Experiments Due to Non-Specific Binding

    PubMed Central

    Zanetti-Domingues, Laura C.; Tynan, Christopher J.; Rolfe, Daniel J.; Clarke, David T.; Martin-Fernandez, Marisa

    2013-01-01

    Single-molecule techniques are powerful tools to investigate the structure and dynamics of macromolecular complexes; however, data quality can suffer because of weak specific signal, background noise and dye bleaching and blinking. It is less well-known, but equally important, that non-specific binding of probe to substrates results in a large number of immobile fluorescent molecules, introducing significant artifacts in live cell experiments. Following from our previous work in which we investigated glass coating substrates and demonstrated that the main contribution to this non-specific probe adhesion comes from the dye, we carried out a systematic investigation of how different dye chemistries influence the behaviour of spectrally similar fluorescent probes. Single-molecule brightness, bleaching and probe mobility on the surface of live breast cancer cells cultured on a non-adhesive substrate were assessed for anti-EGFR affibody conjugates with 14 different dyes from 5 different manufacturers, belonging to 3 spectrally homogeneous bands (491 nm, 561 nm and 638 nm laser lines excitation). Our results indicate that, as well as influencing their photophysical properties, dye chemistry has a strong influence on the propensity of dye-protein conjugates to adhere non-specifically to the substrate. In particular, hydrophobicity has a strong influence on interactions with the substrate, with hydrophobic dyes showing much greater levels of binding. Crucially, high levels of non-specific substrate binding result in calculated diffusion coefficients significantly lower than the true values. We conclude that the physico-chemical properties of the dyes should be considered carefully when planning single-molecule experiments. Favourable dye characteristics such as photostability and brightness can be offset by the propensity of a conjugate for non-specific adhesion. PMID:24066121
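
    The bias these two records describe is easy to reproduce numerically: mixing immobile (non-specifically bound) trajectories into a mobile population drags the mean-squared-displacement estimate of the diffusion coefficient below the true value. A toy 2D Brownian simulation with assumed illustrative parameters (not the papers' data):

```python
import numpy as np

rng = np.random.default_rng(1)
D_true, dt, n_steps = 0.5, 0.05, 100     # um^2/s and s; assumed toy values

def simulate(n_tracks, D):
    # 2D Brownian tracks: independent Gaussian steps, variance 2*D*dt per axis.
    steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_tracks, n_steps, 2))
    return np.cumsum(steps, axis=1)

def apparent_D(tracks):
    # One-lag mean-squared displacement; for 2D diffusion MSD = 4*D*dt.
    disp = np.diff(tracks, axis=1)
    return np.mean(np.sum(disp**2, axis=2)) / (4 * dt)

mobile = simulate(300, D_true)
stuck = np.zeros((100, n_steps, 2))      # 25% immobile, non-specifically bound
D_pure = apparent_D(mobile)
D_mixed = apparent_D(np.concatenate([mobile, stuck]))
```

    With a 25% immobile fraction, the pooled estimate falls to 75% of the mobile-only value, illustrating why unfiltered stuck probes systematically lower calculated diffusion coefficients.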

  3. Fast conjugate phase image reconstruction based on a Chebyshev approximation to correct for B0 field inhomogeneity and concomitant gradients

    PubMed Central

    Chen, Weitian; Sica, Christopher T.; Meyer, Craig H.

    2008-01-01

    Off-resonance effects can cause image blurring in spiral scanning and various forms of image degradation in other MRI methods. Off-resonance effects can be caused by both B0 inhomogeneity and concomitant gradient fields. Previously developed off-resonance correction methods focus on the correction of a single source of off-resonance. This work introduces a computationally efficient method of correcting for B0 inhomogeneity and concomitant gradients simultaneously. The method is a fast alternative to conjugate phase reconstruction, with the off-resonance phase term approximated by Chebyshev polynomials. The proposed algorithm is well suited for semiautomatic off-resonance correction, which works well even with an inaccurate or low-resolution field map. The proposed algorithm is demonstrated using phantom and in vivo data sets acquired by spiral scanning. Semiautomatic off-resonance correction alone is shown to provide a moderate amount of correction for concomitant gradient field effects, in addition to B0 inhomogeneity effects. However, better correction is provided by the proposed combined method. The best results were produced using the semiautomatic version of the proposed combined method. PMID:18956462

  4. Fast conjugate phase image reconstruction based on a Chebyshev approximation to correct for B0 field inhomogeneity and concomitant gradients.

    PubMed

    Chen, Weitian; Sica, Christopher T; Meyer, Craig H

    2008-11-01

    Off-resonance effects can cause image blurring in spiral scanning and various forms of image degradation in other MRI methods. Off-resonance effects can be caused by both B0 inhomogeneity and concomitant gradient fields. Previously developed off-resonance correction methods focus on the correction of a single source of off-resonance. This work introduces a computationally efficient method of correcting for B0 inhomogeneity and concomitant gradients simultaneously. The method is a fast alternative to conjugate phase reconstruction, with the off-resonance phase term approximated by Chebyshev polynomials. The proposed algorithm is well suited for semiautomatic off-resonance correction, which works well even with an inaccurate or low-resolution field map. The proposed algorithm is demonstrated using phantom and in vivo data sets acquired by spiral scanning. Semiautomatic off-resonance correction alone is shown to provide a moderate amount of correction for concomitant gradient field effects, in addition to B0 inhomogeneity effects. However, better correction is provided by the proposed combined method. The best results were produced using the semiautomatic version of the proposed combined method.
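
    The core trick in these two records can be sketched as follows: the off-resonance phase factor exp(iωt) is approximated by a low-order Chebyshev expansion in ω, so the conjugate phase sum splits into a small number of frequency-independent terms. The snippet below (an illustration with assumed timing and bandwidth values, not the authors' code) shows how small the approximation error is at a modest degree:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Assumed illustrative values: one readout time point and a +/-100 Hz band.
t = 0.005                                  # readout time point (s)
wmax = 2 * np.pi * 100.0                   # off-resonance band edge (rad/s)
w = np.linspace(-wmax, wmax, 201)
u = w / wmax                               # map the band onto [-1, 1]

# Fit real and imaginary parts of exp(i*w*t) with degree-10 Chebyshev series.
deg = 10
coef_r = C.chebfit(u, np.cos(w * t), deg)
coef_i = C.chebfit(u, np.sin(w * t), deg)
approx = C.chebval(u, coef_r) + 1j * C.chebval(u, coef_i)
err = np.max(np.abs(approx - np.exp(1j * w * t)))
```

    Because the Chebyshev coefficients do not depend on position, the reconstruction factors into deg+1 basis images weighted by per-pixel polynomial values of the field map, which is where the speedup over exact conjugate phase reconstruction comes from.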

  5. Effect of chelate type and radioisotope on the imaging efficacy of four fibrin-specific PET probes

    PubMed Central

    Blasi, Francesco; Oliveira, Bruno L.; Rietz, Tyson A.; Rotile, Nicholas J.; Day, Helen; Looby, Richard J.; Ay, Ilknur; Caravan, Peter

    2014-01-01

    Thrombus formation plays a major role in cardiovascular diseases, but noninvasive thrombus imaging is still challenging. Fibrin is a major component of both arterial and venous thrombi, and represents an ideal candidate for imaging of thrombosis. Recently we showed that 64Cu-DOTA-labeled PET probes based on fibrin-specific peptides are suitable for thrombus imaging in vivo; however, the metabolic stability of these probes was limited. Here we describe four new probes using either 64Cu or Al18F chelated to two NOTA derivatives. Methods: Probes were synthesized using a known fibrin-specific peptide conjugated to either NODAGA (FBP8, FBP10) or NOTA-monoamide (FBP9, FBP11) as chelators, followed by labeling with 64Cu (FBP8 and FBP9) or Al18F (FBP10 and FBP11). PET imaging efficacy, pharmacokinetics, biodistribution and metabolic stability were assessed in a rat model of arterial thrombosis. Results: All probes had similar nanomolar affinity (435–760 nM) for the soluble fibrin fragment DD(E). PET imaging allowed clear visualization of thrombus by all probes, with a 5-fold or higher thrombus-to-background ratio. Compared to the previous DOTA derivative, the new 64Cu probes FBP8 and FBP9 showed substantially improved metabolic stability (>85% intact in blood at 4 h post-injection), which resulted in high uptake at the target site (0.5–0.8% ID/g) that persisted over 5 h, producing increasingly greater target-to-background ratios. The thrombus uptake was 5- to 20-fold higher than the uptake in the contralateral artery, blood, muscle, lungs, bone, spleen, large intestine and heart at 2 h post-injection, and 10- to 40-fold higher at 5 h. The Al18F derivatives FBP10 and FBP11 were less stable, in particular the NODAGA conjugate (FBP10, <30% intact in blood at 4 h post-injection), which showed high bone uptake and low thrombus:background ratios that decreased over time. The high thrombus:contralateral ratios for all probes were confirmed by ex vivo biodistribution and autoradiography.

  6. Hetero-site-specific X-ray pump-probe spectroscopy for femtosecond intramolecular dynamics

    PubMed Central

    Picón, A.; Lehmann, C. S.; Bostedt, C.; Rudenko, A.; Marinelli, A.; Osipov, T.; Rolles, D.; Berrah, N.; Bomme, C.; Bucher, M.; Doumy, G.; Erk, B.; Ferguson, K. R.; Gorkhover, T.; Ho, P. J.; Kanter, E. P.; Krässig, B.; Krzywinski, J.; Lutman, A. A.; March, A. M.; Moonshiram, D.; Ray, D.; Young, L.; Pratt, S. T.; Southworth, S. H.

    2016-01-01

    New capabilities at X-ray free-electron laser facilities allow the generation of two-colour femtosecond X-ray pulses, opening the possibility of performing ultrafast studies of X-ray-induced phenomena. Particularly, the experimental realization of hetero-site-specific X-ray-pump/X-ray-probe spectroscopy is of special interest, in which an X-ray pump pulse is absorbed at one site within a molecule and an X-ray probe pulse follows the X-ray-induced dynamics at another site within the same molecule. Here we show experimental evidence of a hetero-site pump-probe signal. By using two-colour 10-fs X-ray pulses, we are able to observe the femtosecond time dependence for the formation of F ions during the fragmentation of XeF2 molecules following X-ray absorption at the Xe site. PMID:27212390

  7. Hetero-site-specific X-ray pump-probe spectroscopy for femtosecond intramolecular dynamics.

    PubMed

    Picón, A; Lehmann, C S; Bostedt, C; Rudenko, A; Marinelli, A; Osipov, T; Rolles, D; Berrah, N; Bomme, C; Bucher, M; Doumy, G; Erk, B; Ferguson, K R; Gorkhover, T; Ho, P J; Kanter, E P; Krässig, B; Krzywinski, J; Lutman, A A; March, A M; Moonshiram, D; Ray, D; Young, L; Pratt, S T; Southworth, S H

    2016-05-23

    New capabilities at X-ray free-electron laser facilities allow the generation of two-colour femtosecond X-ray pulses, opening the possibility of performing ultrafast studies of X-ray-induced phenomena. Particularly, the experimental realization of hetero-site-specific X-ray-pump/X-ray-probe spectroscopy is of special interest, in which an X-ray pump pulse is absorbed at one site within a molecule and an X-ray probe pulse follows the X-ray-induced dynamics at another site within the same molecule. Here we show experimental evidence of a hetero-site pump-probe signal. By using two-colour 10-fs X-ray pulses, we are able to observe the femtosecond time dependence for the formation of F ions during the fragmentation of XeF2 molecules following X-ray absorption at the Xe site.

  8. Protein-based stable isotope probing.

    PubMed

    Jehmlich, Nico; Schmidt, Frank; Taubert, Martin; Seifert, Jana; Bastida, Felipe; von Bergen, Martin; Richnow, Hans-Hermann; Vogt, Carsten

    2010-12-01

    We describe a stable isotope probing (SIP) technique that was developed to link microbe-specific metabolic function to phylogenetic information. Carbon ((13)C)- or nitrogen ((15)N)-labeled substrates (typically with >98% heavy label) were used in cultivation experiments, and the incorporation of the heavy isotope into proteins (protein-SIP) during growth was determined. The amount of incorporation provides a measure of assimilation of a substrate, and the sequence information from peptide analysis obtained by mass spectrometry delivers phylogenetic information about the microorganisms responsible for the metabolism of the particular substrate. In this article, we provide guidelines for incubating microbial cultures with labeled substrates and a protocol for protein-SIP. The protocol guides readers through the proteomics pipeline, including protein extraction, gel-free and gel-based protein separation, the subsequent mass spectrometric analysis of peptides and the calculation of the incorporation of stable isotopes into peptides. Extraction of proteins and the mass fingerprint measurements of unlabeled and labeled fractions can be performed in 2-3 d.
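
    As a back-of-envelope illustration of the incorporation measure (the actual protocol fits the full peptide isotope envelope rather than a single mass shift), the fraction of (13)C label can be estimated from a peptide's observed mass shift; the function name and example values below are hypothetical:

```python
# Illustrative estimate, not the protocol's fitting procedure.
M_SHIFT_13C = 1.00336   # mass difference between 13C and 12C in Da

def carbon13_incorporation(mass_unlabeled, mass_labeled, n_carbons):
    """Fraction of a peptide's carbon atoms carrying 13C, assuming the
    entire observed mass shift comes from 12C -> 13C exchange."""
    return (mass_labeled - mass_unlabeled) / (n_carbons * M_SHIFT_13C)

# A 50-carbon peptide shifted by ~25.08 Da is about 50% labeled.
ria = carbon13_incorporation(1000.000, 1025.084, 50)
```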

  9. Pedagogical Knowledge Base Underlying EFL Teachers' Provision of Oral Corrective Feedback in Grammar Instruction

    ERIC Educational Resources Information Center

    Atai, Mahmood Reza; Shafiee, Zahra

    2017-01-01

    The present study investigated the pedagogical knowledge base underlying EFL teachers' provision of oral corrective feedback in grammar instruction. More specifically, we explored the consistent thought patterns guiding the decisions of three Iranian teachers regarding oral corrective feedback on grammatical errors. We also examined the potential…

  10. Community Analysis of Biofilters Using Fluorescence In Situ Hybridization Including a New Probe for the Xanthomonas Branch of the Class Proteobacteria

    PubMed Central

    Friedrich, Udo; Naismith, Michèle M.; Altendorf, Karlheinz; Lipski, André

    1999-01-01

    Domain-, class-, and subclass-specific rRNA-targeted probes were applied to investigate the microbial communities of three industrial and three laboratory-scale biofilters. The set of probes also included a new probe (named XAN818) specific for the Xanthomonas branch of the class Proteobacteria; this probe is described in this study. The members of the Xanthomonas branch do not hybridize with previously developed rRNA-targeted oligonucleotide probes for the α-, β-, and γ-Proteobacteria. Bacteria of the Xanthomonas branch accounted for up to 4.5% of total direct counts obtained with 4′,6-diamidino-2-phenylindole. In biofilter samples, the relative abundance of these bacteria was similar to that of the γ-Proteobacteria. Actinobacteria (gram-positive bacteria with a high G+C DNA content) and α-Proteobacteria were the most dominant groups. Detection rates obtained with probe EUB338 varied between about 40 and 70%. For samples with high contents of gram-positive bacteria, these percentages were substantially improved when the calculations were corrected for the reduced permeability of gram-positive bacteria when formaldehyde was used as a fixative. The set of applied bacterial class- and subclass-specific probes yielded, on average, 58.5% (± a standard deviation of 23.0%) of the corrected eubacterial detection rates, thus indicating the necessity of additional probes for studies of biofilter communities. The Xanthomonas-specific probe presented here may serve as an efficient tool for identifying potential phytopathogens. In situ hybridization proved to be a practical tool for microbiological studies of biofiltration systems. PMID:10427047

  11. A micromachined membrane-based active probe for biomolecular mechanics measurement

    NASA Astrophysics Data System (ADS)

    Torun, H.; Sutanto, J.; Sarangapani, K. K.; Joseph, P.; Degertekin, F. L.; Zhu, C.

    2007-04-01

    A novel micromachined, membrane-based probe has been developed and fabricated in arrays to enable parallel measurements. Each probe in the array can be individually actuated, and the membrane displacement can be measured with high resolution using an integrated diffraction-based optical interferometer. To illustrate its application in single-molecule mechanics experiments, this membrane probe was used to measure unbinding forces between L-selectin reconstituted in a polymer-cushioned lipid bilayer on the probe membrane and an antibody adsorbed on an atomic force microscope cantilever. Piconewton-range forces between single pairs of interacting molecules were measured from the cantilever bending while using the membrane probe as an actuator. The integrated diffraction-based optical interferometer of the probe was demonstrated to have a <10 fm Hz^-1/2 noise floor for frequencies as low as 3 Hz with a differential readout scheme. With soft probe membranes, this low noise level would be suitable for direct force measurements without the need for a cantilever. Furthermore, the probe membranes were shown to have a 0.5 µm actuation range with a flat response up to 100 kHz, enabling measurements at fast speeds.

  12. Approximating Multilinear Monomial Coefficients and Maximum Multilinear Monomials in Multivariate Polynomials

    NASA Astrophysics Data System (ADS)

    Chen, Zhixiang; Fu, Bin

    This paper is our third step towards developing a theory of testing monomials in multivariate polynomials and concentrates on two problems: (1) how to compute the coefficients of multilinear monomials; and (2) how to find a maximum multilinear monomial when the input is a ΠΣΠ polynomial. We first prove that the first problem is #P-hard and then devise an O*(3^n s(n)) upper bound for this problem for any polynomial represented by an arithmetic circuit of size s(n). Later, this upper bound is improved to O*(2^n) for ΠΣΠ polynomials. We then design fully polynomial-time randomized approximation schemes for this problem for ΠΣ polynomials. On the negative side, we prove that, even for ΠΣΠ polynomials with terms of degree ≤ 2, the first problem cannot be approximated at all for any approximation factor ≥ 1, nor "weakly approximated" in a much relaxed setting, unless P=NP. For the second problem, we first give a polynomial-time λ-approximation algorithm for ΠΣΠ polynomials with terms of degree no more than a constant λ ≥ 2. On the inapproximability side, we give an n^((1-ɛ)/2) lower bound, for any ɛ > 0, on the approximation factor for ΠΣΠ polynomials. When the degrees of the terms in these polynomials are constrained to be ≤ 2, we prove a 1.0476 lower bound, assuming P ≠ NP, and a higher 1.0604 lower bound, assuming the Unique Games Conjecture.
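
    For the special case of a ΠΣ polynomial (a product of linear forms), the coefficient of the fully multilinear monomial x_1⋯x_n equals the permanent of the coefficient matrix, since each factor must contribute a distinct variable. A tiny brute-force sketch makes this concrete; its n! cost is in line with the #P-hardness results discussed in the record:

```python
import math
from itertools import permutations

def multilinear_coefficient(A):
    """Coefficient of x_1*x_2*...*x_n in the Pi-Sigma polynomial
    prod_i (sum_j A[i][j] * x_j). Each linear factor must contribute a
    distinct variable, so the coefficient equals the permanent of A,
    computed here by brute force over all n! assignments."""
    n = len(A)
    return sum(
        math.prod(A[i][perm[i]] for i in range(n))
        for perm in permutations(range(n))
    )

# (x1 + 2*x2) * (3*x1 + x2): coefficient of x1*x2 is 1*1 + 2*3 = 7.
coeff = multilinear_coefficient([[1, 2], [3, 1]])
```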

  13. Polynomial solution of quantum Grassmann matrices

    NASA Astrophysics Data System (ADS)

    Tierz, Miguel

    2017-05-01

    We study a model of quantum mechanical fermions with matrix-like index structure (with indices N and L) and quartic interactions, recently introduced by Anninos and Silva. We compute the partition function exactly with q-deformed orthogonal polynomials (Stieltjes-Wigert polynomials), for different values of L and arbitrary N. From the explicit evaluation of the thermal partition function, the energy levels and degeneracies are determined. For a given L, the number of states of different energy is quadratic in N, which implies an exponential degeneracy of the energy levels. We also show that at high temperature we have a Gaussian matrix model, which implies a symmetry that swaps N and L, together with a Wick rotation of the spectral parameter. In this limit, we also write the partition function, for generic L and N, in terms of a single generalized Hermite polynomial.

  14. Morpholine Derivative-Functionalized Carbon Dots-Based Fluorescent Probe for Highly Selective Lysosomal Imaging in Living Cells.

    PubMed

    Wu, Luling; Li, Xiaolin; Ling, Yifei; Huang, Chusen; Jia, Nengqin

    2017-08-30

    The development of a suitable fluorescent probe for the specific labeling and imaging of lysosomes through a direct visual fluorescent signal is extremely important for understanding lysosomal dysfunction, which might induce various pathologies, including neurodegenerative diseases, cancer, and Alzheimer's disease. Herein, a new carbon dot-based fluorescent probe (CDs-PEI-ML) was designed and synthesized for highly selective imaging of lysosomes in live cells. In this probe, PEI (polyethylenimine) is introduced to improve water solubility and provide abundant amine groups for the as-prepared CDs-PEI, and the morpholine group (ML) serves as a targeting unit for lysosomes. More importantly, passivation with PEI could dramatically increase the fluorescence quantum yield of CDs-PEI-ML as well as the stability of its fluorescence emission under different excitation wavelengths. Experimental data demonstrated that the target probe CDs-PEI-ML has low cytotoxicity and excellent photostability. Additionally, live-cell imaging experiments indicated that CDs-PEI-ML is a highly selective fluorescent probe for lysosomes. We speculate that the mechanism of selective staining is that CDs-PEI-ML is initially taken up through the endocytic pathway and then accumulates in acidic lysosomes. Notably, there was little diffusion of CDs-PEI-ML into the cytoplasm, which can be ascribed to the presence of the lysosome-targeting morpholine group on the surface of CDs-PEI-ML. The blue emission wavelength, combined with the high photostability and capacity for long-lasting cell imaging, makes CDs-PEI-ML an alternative fluorescent probe for multicolor labeling and long-term tracking of lysosomes in live cells, with potential application in super-resolution imaging. To the best of our knowledge, few carbon dot-based fluorescent probes have been studied for specific lysosomal imaging in live cells. The concept of surface

  15. Comparison and Analysis of Geometric Correction Models of Spaceborne SAR

    PubMed Central

    Jiang, Weihao; Yu, Anxi; Dong, Zhen; Wang, Qingsong

    2016-01-01

    Following the development of synthetic aperture radar (SAR), SAR images have become increasingly common. Many researchers have studied geolocation models extensively, but little work has been done on which models are suitable for the geometric correction of SAR images of different terrain. To address the terrain issue, four different models were compared and are described in this paper: a rigorous range-Doppler (RD) model, a rational polynomial coefficients (RPC) model, a revised polynomial (PM) model and an elevation derivation (EDM) model. The results of comparisons of the geolocation capabilities of the models show that a proper model for a SAR image of a specific terrain can be determined, and a solution table is provided to recommend a suitable model to users. Three TerraSAR-X images, two ALOS-PALSAR images and one Envisat-ASAR image were used for the experiment, including flat-terrain and mountain-terrain SAR images as well as two large-area images. Geolocation accuracies of the models for SAR images of different terrain were computed and analyzed. The comparisons show that the RD model was accurate but the least efficient; therefore, it is not the ideal model for real-time implementations. The RPC model is sufficiently accurate and efficient for the geometric correction of SAR images of flat terrain, with precision below 0.001 pixels. The EDM model is suitable for the geolocation of SAR images of mountainous terrain, and its precision can reach 0.007 pixels. Although the PM model does not produce results as precise as the other models, its efficiency is excellent and its potential should not be underestimated. With respect to the geometric correction of SAR images over large areas, the EDM model has higher accuracy, below one pixel, whereas the RPC model consumes one third of the time of the EDM model. PMID:27347973
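
    As an illustration of the simplest of the four models, a PM-style polynomial correction can be fitted by least squares from ground control points. The sketch below is a generic 2-D polynomial mapping from image to ground coordinates, not the authors' implementation (the RD, RPC and EDM models are considerably more involved).

```python
import numpy as np

def fit_polynomial_correction(src, dst, degree=2):
    """Least-squares fit of a 2-D polynomial mapping from image
    coordinates `src` (N x 2) to ground coordinates `dst` (N x 2),
    estimated from ground control points.  Returns a predictor for
    new points.  Generic PM-style sketch, not the paper's code."""
    def design(x, y):
        # All monomials x^i * y^j with i + j <= degree
        return np.column_stack([x**i * y**j
                                for i in range(degree + 1)
                                for j in range(degree + 1 - i)])
    A = design(src[:, 0], src[:, 1])
    cx, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    def predict(pts):
        D = design(pts[:, 0], pts[:, 1])
        return np.column_stack([D @ cx, D @ cy])
    return predict
```

    With a quadratic model there are six coefficients per axis, so at least six well-spread ground control points are needed.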

  16. Social Security Polynomials.

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    1992-01-01

    Demonstrates how the uniqueness and anonymity of a student's Social Security number can be utilized to create individualized polynomial equations that students can investigate using computers or graphing calculators. Students write reports of their efforts to find and classify all real roots of their equation. (MDH)
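
    The activity can be reproduced in a few lines: treat the nine digits as polynomial coefficients and ask a solver for the real roots. The highest-power-first digit-to-coefficient assignment below is a hypothetical choice; the article may order the digits differently.

```python
import numpy as np

def digit_polynomial_real_roots(number):
    """Treat the digits of an identification number as the coefficients
    of a polynomial (highest power first -- a hypothetical convention)
    and return its real roots, sorted."""
    coeffs = [int(d) for d in number if d.isdigit()]
    roots = np.roots(coeffs)
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

# "110" encodes x^2 + x, whose real roots are -1 and 0
print(digit_polynomial_real_roots("110"))
```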

  17. Why High-Order Polynomials Should Not Be Used in Regression Discontinuity Designs. NBER Working Paper No. 20405

    ERIC Educational Resources Information Center

    Gelman, Andrew; Imbens, Guido

    2014-01-01

    It is common in regression discontinuity analysis to control for high order (third, fourth, or higher) polynomials of the forcing variable. We argue that estimators for causal effects based on such methods can be misleading, and we recommend researchers do not use them, and instead use estimators based on local linear or quadratic polynomials or…

  18. Phase Error Correction in Time-Averaged 3D Phase Contrast Magnetic Resonance Imaging of the Cerebral Vasculature

    PubMed Central

    MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard

    2016-01-01

    Purpose Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600
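
    A whole-brain polynomial correction of the WBPC kind can be sketched as follows: fit a low-order 2-D polynomial to the phase measured in static background tissue and subtract the fitted surface from the whole image. This is a generic sketch under that assumption, not the authors' code.

```python
import numpy as np

def polynomial_phase_correction(phase, background_mask, degree=1):
    """WBPC-style correction sketch: fit a low-order 2-D polynomial to
    the phase in static background pixels (True in `background_mask`)
    and subtract the fitted surface from the whole image."""
    yy, xx = np.mgrid[:phase.shape[0], :phase.shape[1]]
    terms = [xx**i * yy**j for i in range(degree + 1)
                           for j in range(degree + 1 - i)]
    A = np.column_stack([t[background_mask] for t in terms])
    coeffs, *_ = np.linalg.lstsq(A, phase[background_mask], rcond=None)
    fitted = sum(c * t for c, t in zip(coeffs, terms))
    return phase - fitted
```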

  19. Temperature and pressure effects on capacitance probe cryogenic liquid level measurement accuracy

    NASA Technical Reports Server (NTRS)

    Edwards, Lawrence G.; Haberbusch, Mark

    1993-01-01

    The inaccuracies of liquid nitrogen and liquid hydrogen level measurements by use of a coaxial capacitance probe were investigated as a function of fluid temperature and pressure. Significant liquid level measurement errors were found to occur due to the changes in the fluids' dielectric constants over the operating temperature and pressure ranges of the cryogenic storage tanks. These inaccuracies can be reduced by using fluid dielectric correction factors based on measured fluid temperatures and pressures. The errors in the corrected liquid level measurements were estimated from the reported calibration errors of the temperature and pressure measurement systems. Experimental liquid nitrogen (LN2) and liquid hydrogen (LH2) level measurements were obtained using the calibrated capacitance probe equations and also by the dielectric constant correction factor method. The liquid levels obtained by the capacitance probe for the two methods were compared with the liquid level estimated from the fluid temperature profiles. Results show that the dielectric-constant-corrected liquid levels agreed within 0.5 percent of the temperature-profile-estimated liquid level, whereas the uncorrected measurements deviated from the temperature profile level by more than 5 percent. This paper identifies the magnitude of liquid level measurement error that can occur for LN2 and LH2 due to temperature and pressure effects on the dielectric constants over tank storage conditions from 5 to 40 psia. A method of reducing the level measurement errors by using dielectric constant correction factors based on fluid temperature and pressure measurements is derived, and the improved accuracy is experimentally verified by comparison against liquid levels derived from fluid temperature profiles.
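
    In the simplest two-section (parallel) capacitor model, the correction-factor idea reduces to solving the probe capacitance equation for the wetted fraction once the dielectric constant has been corrected for the measured temperature and pressure. The sketch below is deliberately simplified: fringing fields and probe-geometry details are ignored, and the eps_r value is assumed to come from a separate correction table.

```python
def liquid_level(C_meas, C_dry, eps_r, length):
    """Level from a coaxial capacitance probe via a simple two-section
    (parallel) capacitor model:
        C_meas = C_dry * (1 - h/L) + eps_r * C_dry * (h/L)
    where eps_r is the fluid dielectric constant, corrected beforehand
    for the measured temperature and pressure.  Simplified sketch."""
    frac = (C_meas / C_dry - 1.0) / (eps_r - 1.0)
    return max(0.0, min(1.0, frac)) * length

# Half-full 2 m probe with an LN2-like eps_r of 1.43:
level = liquid_level(121.5, 100.0, 1.43, 2.0)   # ~1.0 m
```

    An error in eps_r propagates directly into the inferred fraction, which is why the temperature- and pressure-based correction factors matter.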

  20. An automated baseline correction protocol for infrared spectra of atmospheric aerosols collected on polytetrafluoroethylene (Teflon) filters

    NASA Astrophysics Data System (ADS)

    Kuzmiakova, Adele; Dillner, Ann M.; Takahama, Satoshi

    2016-06-01

    A growing body of research on statistical applications for characterization of atmospheric aerosol Fourier transform infrared (FT-IR) samples collected on polytetrafluoroethylene (PTFE) filters (e.g., Russell et al., 2011; Ruthenburg et al., 2014) and a rising interest in analyzing FT-IR samples collected by air quality monitoring networks call for an automated PTFE baseline correction solution. The existing polynomial technique (Takahama et al., 2013) is not scalable to a project with a large number of aerosol samples because it contains many parameters and requires expert intervention. Therefore, the question of how to develop an automated method for baseline correcting hundreds to thousands of ambient aerosol spectra given the variability in both environmental mixture composition and PTFE baselines remains. This study approaches the question by detailing the statistical protocol, which allows for the precise definition of analyte and background subregions, applies nonparametric smoothing splines to reproduce sample-specific PTFE variations, and integrates performance metrics from atmospheric aerosol and blank samples alike in the smoothing parameter selection. Referencing 794 atmospheric aerosol samples from seven Interagency Monitoring of PROtected Visual Environments (IMPROVE) sites collected during 2011, we start by identifying key FT-IR signal characteristics, such as non-negative absorbance or analyte segment transformation, to capture sample-specific transitions between background and analyte. While referring to qualitative properties of PTFE background, the goal of smoothing splines interpolation is to learn the baseline structure in the background region to predict the baseline structure in the analyte region. 
We then validate the model by comparing smoothing splines baseline-corrected spectra with uncorrected and polynomial baseline (PB)-corrected equivalents via three statistical applications: (1) clustering analysis, (2) functional group quantification
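
    The core of such a protocol — fitting a smoothing spline to the background subregions and predicting the baseline under the analyte band — can be sketched with SciPy. In this sketch the smoothing parameter `s` is simply user-supplied, whereas the paper selects it from performance metrics on aerosol and blank samples.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_baseline_correct(wavenumber, absorbance, background_mask, s=0.01):
    """Fit a smoothing spline to the background subregions of a spectrum
    (True in `background_mask`) and subtract the predicted baseline over
    the full wavenumber axis.  Illustrative sketch of the protocol."""
    spl = UnivariateSpline(wavenumber[background_mask],
                           absorbance[background_mask], s=s)
    return absorbance - spl(wavenumber)
```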

  1. Arrays of nucleic acid probes on biological chips

    DOEpatents

    Chee, Mark; Cronin, Maureen T.; Fodor, Stephen P. A.; Huang, Xiaohua X.; Hubbell, Earl A.; Lipshutz, Robert J.; Lobban, Peter E.; Morris, MacDonald S.; Sheldon, Edward L.

    1998-11-17

    DNA chips containing arrays of oligonucleotide probes can be used to determine whether a target nucleic acid has a nucleotide sequence identical to or different from a specific reference sequence. The array of probes comprises probes exactly complementary to the reference sequence, as well as probes that differ by one or more bases from the exactly complementary probes.

  2. Conformational Ensembles of Calmodulin Revealed by Nonperturbing Site-Specific Vibrational Probe Groups.

    PubMed

    Kelly, Kristen L; Dalton, Shannon R; Wai, Rebecca B; Ramchandani, Kanika; Xu, Rosalind J; Linse, Sara; Londergan, Casey H

    2018-03-22

    Seven native residues on the regulatory protein calmodulin (CaM), including three key methionine residues, were replaced (one by one) by the vibrational probe amino acid cyanylated cysteine, which has a unique CN stretching vibration that reports on its local environment. Almost no perturbation was caused by this probe at any of the seven sites, as reported by CD spectra of calcium-bound and apo calmodulin and by the binding thermodynamics, measured by isothermal titration calorimetry, for the formation of a complex between calmodulin and a canonical target peptide from skeletal muscle myosin light chain kinase. The surprising lack of perturbation suggests that this probe group could be applied directly in many protein-protein binding interfaces. The infrared absorption bands for the probe groups reported many dramatic changes in the probes' local environments as CaM went from apo to calcium-saturated to target peptide-bound conditions, including large frequency shifts and a variety of line shapes, from narrow (interpreted as a rigid and invariant local environment) to symmetric to broad and asymmetric (likely from multiple coexisting and dynamically exchanging structures). The fast intrinsic time scale of infrared spectroscopy means that the line shapes report directly on site-specific details of calmodulin's variable structural distribution. Though quantitative interpretation of the probe line shapes depends on a direct connection between simulated ensembles and experimental data that does not yet exist, forming such a connection to data like that reported here would provide a new way to evaluate conformational ensembles from data that directly contains the structural distribution. The calmodulin probe sites developed here will also be useful in evaluating the binding mode of calmodulin with many uncharacterized regulatory targets.

  3. Optimization of Designs for Nanotube-based Scanning Probes

    NASA Technical Reports Server (NTRS)

    Harik, V. M.; Gates, T. S.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Optimization of designs for nanotube-based scanning probes, which may be used for high-resolution characterization of nanostructured materials, is examined. Continuum models to analyze the nanotube deformations are proposed to help guide selection of the optimum probe. The limitations on the use of these models that must be accounted for before applying to any design problem are presented. These limitations stem from the underlying assumptions and the expected range of nanotube loading, end conditions, and geometry. Once the limitations are accounted for, the key model parameters along with the appropriate classification of nanotube structures may serve as a basis for the design optimization of nanotube-based probe tips.

  4. Supersymmetric quantum mechanics: Engineered hierarchies of integrable potentials and related orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balondo Iyela, Daddy; Centre for Cosmology, Particle Physics and Phenomenology; Département de Physique, Université de Kinshasa

    2013-09-15

    Within the context of supersymmetric quantum mechanics and its related hierarchies of integrable quantum Hamiltonians and potentials, a general programme is outlined and applied to its first two simplest illustrations. Going beyond the usual restriction of shape invariance for intertwined potentials, it is suggested to require a similar relation for Hamiltonians in the hierarchy separated by an arbitrary number of levels, N. By requiring further that these two Hamiltonians be in fact identical up to an overall shift in energy, a periodic structure is installed in the hierarchy which should allow for its resolution. Specific classes of orthogonal polynomials characteristic of such periodic hierarchies are thereby generated, while the methods of supersymmetric quantum mechanics then lead to generalised Rodrigues formulae and recursion relations for such polynomials. The approach also offers the practical prospect of quantum modelling through the engineering of quantum potentials from experimental energy spectra. In this paper, these ideas are presented and solved explicitly for the cases N = 1 and N = 2. The latter case is related to the generalised Laguerre polynomials, for which indeed new results are thereby obtained. In the context of dressing chains and deformed polynomial Heisenberg algebras, some partial results for N ⩾ 3 also exist in the literature, which should be relevant to a complete study of the N ⩾ 3 general periodic hierarchies.

  5. On a Family of Multivariate Modified Humbert Polynomials

    PubMed Central

    Aktaş, Rabia; Erkuş-Duman, Esra

    2013-01-01

    This paper attempts to present a multivariable extension of generalized Humbert polynomials. The results obtained here include various families of multilinear and multilateral generating functions, miscellaneous properties, and also some special cases for these multivariable polynomials. PMID:23935411

  6. Gene correction in patient-specific iPSCs for therapy development and disease modeling

    PubMed Central

    Jang, Yoon-Young

    2018-01-01

    The discovery that mature cells can be reprogrammed to become pluripotent and the development of engineered endonucleases for enhancing genome editing are two of the most exciting and impactful technology advances in modern medicine and science. Human pluripotent stem cells have the potential to establish new model systems for studying human developmental biology and disease mechanisms. Gene correction in patient-specific iPSCs can also provide a novel source for autologous cell therapy. Although historically challenging, precise genome editing in human iPSCs is becoming more feasible with the development of new genome-editing tools, including ZFNs, TALENs, and CRISPR. iPSCs derived from patients of a variety of diseases have been edited to correct disease-associated mutations and to generate isogenic cell lines. After directed differentiation, many of the corrected iPSCs showed restored functionality and demonstrated their potential in cell replacement therapy. Genome-wide analyses of gene-corrected iPSCs have collectively demonstrated a high fidelity of the engineered endonucleases. Remaining challenges in clinical translation of these technologies include maintaining genome integrity of the iPSC clones and the differentiated cells. Given the rapid advances in genome-editing technologies, gene correction is no longer the bottleneck in developing iPSC-based gene and cell therapies; generating functional and transplantable cell types from iPSCs remains the biggest challenge needing to be addressed by the research field. PMID:27256364

  7. A computerized Langmuir probe system

    NASA Astrophysics Data System (ADS)

    Pilling, L. S.; Bydder, E. L.; Carnegie, D. A.

    2003-07-01

    For low pressure plasmas it is important to record entire single or double Langmuir probe characteristics accurately. For plasmas with a depleted high energy tail, the accuracy of the recorded ion current plays a critical role in determining the electron temperature. Even for high density Maxwellian distributions, it is necessary to accurately model the ion current to obtain the correct electron density. Since the electron and ion current saturation values are, at best, orders of magnitude apart, a single current sensing resistor cannot provide the required resolution to accurately record these values. We present an automated, personal computer based data acquisition system for the determination of fundamental plasma properties in low pressure plasmas. The system is designed for single and double Langmuir probes, whose characteristics can be recorded over a bias voltage range of ±70 V with 12 bit resolution. The current flowing through the probes can be recorded within the range of 5 nA-100 mA. The use of a transimpedance amplifier for current sensing eliminates the requirement for traditional current sensing resistors and hence the need to correct the raw data. The large current recording range is realized through the use of a real time gain switching system in the negative feedback loop of the transimpedance amplifier.
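
    Once the ion current has been modeled and removed, the electron temperature follows from the inverse slope of ln(I_e) versus bias voltage in the electron-retardation region. A minimal sketch, assuming the ion contribution is supplied externally (a real analysis must fit it, as the abstract stresses):

```python
import numpy as np

def electron_temperature(V, I, ion_current):
    """Electron temperature (in eV) from a single Langmuir probe trace
    in the retardation region: subtract the externally modeled ion
    current, then fit the slope of ln(I_e) against bias voltage."""
    Ie = I - ion_current              # electron current only
    good = Ie > 0
    slope, _ = np.polyfit(V[good], np.log(Ie[good]), 1)
    return 1.0 / slope                # Te in eV when V is in volts
```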

  8. Symmetric polynomials in information theory: Entropy and subentropy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jozsa, Richard; Mitchison, Graeme

    2015-06-15

    Entropy and other fundamental quantities of information theory are customarily expressed and manipulated as functions of probabilities. Here we study the entropy H and subentropy Q as functions of the elementary symmetric polynomials in the probabilities and reveal a series of remarkable properties. Derivatives of all orders are shown to satisfy a complete monotonicity property. H and Q themselves become multivariate Bernstein functions, and we derive the density functions of their Levy-Khintchine representations. We also show that H and Q are Pick functions in each symmetric polynomial variable separately. Furthermore, we see that H and the intrinsically quantum informational quantity Q become surprisingly closely related in functional form, suggesting a special significance for the symmetric polynomials in quantum information theory. Using the symmetric polynomials, we also derive a series of further properties of H and Q.

  9. Algorithms for computing solvents of unilateral second-order matrix polynomials over prime finite fields using lambda-matrices

    NASA Astrophysics Data System (ADS)

    Burtyka, Filipp

    2018-01-01

    The paper considers algorithms for finding diagonalizable and non-diagonalizable roots (so-called solvents) of an arbitrary monic unilateral second-order matrix polynomial over a prime finite field. These algorithms are based on polynomial matrices (lambda-matrices). This is an extension of existing general methods for computing solvents of matrix polynomials over the field of complex numbers. We analyze how the techniques for complex numbers can be adapted to a finite field and estimate the asymptotic complexity of the resulting algorithms.
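
    For small matrices and small primes, solvents can also be found by exhaustive search, which makes a useful correctness check on faster algorithms even though it scales as p raised to the number of matrix entries. A sketch for 2x2 matrices over GF(p) (this brute force is our own illustration, not the paper's lambda-matrix method):

```python
import itertools
import numpy as np

def solvents_2x2(A, B, p):
    """Exhaustive search for solvents X of the monic unilateral quadratic
    matrix polynomial X^2 + A X + B = 0 over GF(p), for 2x2 matrices.
    Cost p**4, so this only works for small primes."""
    A = np.asarray(A) % p
    B = np.asarray(B) % p
    found = []
    for entries in itertools.product(range(p), repeat=4):
        X = np.array(entries).reshape(2, 2)
        if np.all((X @ X + A @ X + B) % p == 0):
            found.append(X)
    return found
```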

  10. Nested polynomial trends for the improvement of Gaussian process-based predictors

    NASA Astrophysics Data System (ADS)

    Perrin, G.; Soize, C.; Marque-Pucheu, S.; Garnier, J.

    2017-10-01

    The role of simulation keeps increasing for the sensitivity analysis and uncertainty quantification of complex systems. Such numerical procedures are generally based on the processing of a huge number of code evaluations. When the computational cost of one evaluation of the code is high, direct approaches based on the computer code alone are not affordable, and surrogate models have to be introduced to interpolate the information given by a fixed set of code evaluations to the whole input space. For deterministic mappings, Gaussian process regression (GPR), or kriging, presents a good compromise between complexity, efficiency and error control. Such a method considers the quantity of interest of the system as a particular realization of a Gaussian stochastic process, whose mean and covariance functions have to be identified from the available code evaluations. In this context, this work proposes an innovative parametrization of the mean function, based on the composition of two polynomials. The approach is particularly relevant for approximating strongly nonlinear quantities of interest from very little information. After presenting the theoretical basis of the method, this work compares its efficiency to alternative approaches on a series of examples.
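
    A minimal numpy-only sketch of a GP predictor with a polynomial mean illustrates the general setup: the trend coefficients are estimated by generalized least squares and the GP, here with a squared-exponential covariance, interpolates the residual. The paper's composed (nested) polynomial trend is more elaborate than the plain trend used below.

```python
import numpy as np

def gp_predict_with_trend(x_train, y_train, x_test,
                          length=1.0, degree=2, noise=1e-6):
    """1-D Gaussian process (kriging) predictor with a polynomial trend
    as mean function.  Trend coefficients by generalized least squares;
    squared-exponential covariance on the residual.  Sketch only."""
    def kern(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    F = np.vander(x_train, degree + 1)                # polynomial trend basis
    K = kern(x_train, x_train) + noise * np.eye(len(x_train))
    Ki = np.linalg.inv(K)
    beta = np.linalg.solve(F.T @ Ki @ F, F.T @ Ki @ y_train)  # GLS trend fit
    resid = y_train - F @ beta
    return np.vander(x_test, degree + 1) @ beta + kern(x_test, x_train) @ Ki @ resid
```

    When the quantity of interest lies in the span of the trend basis, the GP part only mops up residual noise and the predictor reproduces the trend exactly.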

  11. On the Waring problem for polynomial rings

    PubMed Central

    Fröberg, Ralf; Ottaviani, Giorgio; Shapiro, Boris

    2012-01-01

    In this note we discuss an analog of the classical Waring problem for polynomial rings. Namely, we show that a general homogeneous polynomial of degree divisible by k ≥ 2 can be represented as a sum of at most k^n k-th powers of homogeneous polynomials. Noticeably, k^n coincides with the number obtained by a naive dimension count. PMID:22460787

  12. Dual exponential polynomials and linear differential equations

    NASA Astrophysics Data System (ADS)

    Wen, Zhi-Tao; Gundersen, Gary G.; Heittokangas, Janne

    2018-01-01

    We study linear differential equations with exponential polynomial coefficients, where exactly one coefficient is of order greater than all the others. The main result shows that a nontrivial exponential polynomial solution of such an equation has a certain dual relationship with the maximum order coefficient. Several examples illustrate our results and exhibit possibilities that can occur.

  13. Polyhydric polymer-functionalized fluorescent probe with enhanced aqueous solubility and specific ion recognition: A test strips-based fluorimetric strategy for the rapid and visual detection of Fe3+ ions.

    PubMed

    Duan, Zhiqiang; Zhang, Chunxian; Qiao, Yuchun; Liu, Fengjuan; Wang, Deyan; Wu, Mengfan; Wang, Ke; Lv, Xiaoxia; Kong, Xiangmu; Wang, Hua

    2017-08-01

    A polyhydric polymer-functionalized probe with enhanced aqueous solubility was designed by coupling 1-pyrenecarboxyaldehyde (Pyr) onto poly(vinyl alcohol) (PVA) via a one-step condensation reaction. The polyhydric PVA polymer chains endow the Pyr fluorophore with largely improved aqueous solubility and especially strong cyan fluorescence. Importantly, the fluorescence of the PVA-Pyr probe can be quenched specifically by Fe3+ ions through the strong PVA-Fe3+ interaction, which triggers aggregation of the polymeric probe. Furthermore, a test strips-based fluorimetric method was developed with stable and uniform probe distribution by taking advantage of the unique film-forming ability of the PVA matrix and its capacity to suppress "coffee-stain" effects. The as-developed test strips allow the rapid and visual detection of Fe3+ ions by simple dipping, showing a linear concentration range of 5.00-300 µM with a detection limit of 0.73 µM. Moreover, the proposed method was applied to the evaluation of Fe3+ ions in natural water samples, showing analytical performance better than or comparable to that of current detection techniques. This test strips-based fluorimetric strategy promises extensive application to the rapid on-site monitoring of Fe3+ ions in environmental water and the field exploration of potential iron ores. Copyright © 2017 Elsevier B.V. All rights reserved.
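
    Figures of merit like the quoted detection limit typically follow the 3σ/slope convention from a linear calibration; the abstract does not state its exact formula, so the computation below is an assumption.

```python
import numpy as np

def lod_3sigma(conc, signal, blank_sd):
    """Detection limit as 3*sigma/|slope| from a linear calibration,
    where `blank_sd` is the standard deviation of blank measurements.
    A common analytical-chemistry convention, assumed here."""
    slope, _ = np.polyfit(conc, signal, 1)
    return 3.0 * blank_sd / abs(slope)
```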

  14. Planar harmonic polynomials of type B

    NASA Astrophysics Data System (ADS)

    Dunkl, Charles F.

    1999-11-01

    The hyperoctahedral group acting on ℝ^N is the Weyl group of type B and is associated with a two-parameter family of differential-difference operators {T_i : 1 ≤ i ≤ N}. These operators are analogous to partial derivative operators. This paper finds all the polynomials h on ℝ^N which are harmonic, Δ_B h = 0, and annihilated by T_i for i > 2, where the Laplacian Δ_B = T_1^2 + T_2^2 + ... + T_N^2. They are given explicitly in terms of a novel basis of polynomials, defined by generating functions. The harmonic polynomials can be used to find wavefunctions for the quantum many-body spin Calogero model.

  15. Georeferencing CAMS data: Polynomial rectification and beyond

    NASA Astrophysics Data System (ADS)

    Yang, Xinghe

    The Calibrated Airborne Multispectral Scanner (CAMS) is a sensor used in the commercial remote sensing program at NASA Stennis Space Center. In geographic applications of the CAMS data, accurate geometric rectification is essential for the analysis of the remotely sensed data and for the integration of the data into Geographic Information Systems (GIS). The commonly used rectification techniques such as the polynomial transformation and ortho rectification have been very successful in the field of remote sensing and GIS for most remote sensing data such as Landsat imagery, SPOT imagery and aerial photos. However, due to the geometric nature of the airborne line scanner which has high spatial frequency distortions, the polynomial model and the ortho rectification technique in current commercial software packages such as Erdas Imagine are not adequate for obtaining sufficient geometric accuracy. In this research, the geometric nature, especially the major distortions, of the CAMS data has been described. An analytical step-by-step geometric preprocessing has been utilized to deal with the potential high frequency distortions of the CAMS data. A generic sensor-independent photogrammetric model has been developed for the ortho-rectification of the CAMS data. Three generalized kernel classes and directional elliptical basis have been formulated into a rectification model of summation of multisurface functions, which is a significant extension to the traditional radial basis functions. The preprocessing mechanism has been fully incorporated into the polynomial, the triangle-based finite element analysis as well as the summation of multisurface functions. While the multisurface functions and the finite element analysis have the characteristics of localization, piecewise logic has been applied to the polynomial and photogrammetric methods, which can produce significant accuracy improvement over the global approach. A software module has been implemented with full

  16. Phylogenetic group- and species-specific oligonucleotide probes for single-cell detection of lactic acid bacteria in oral biofilms

    PubMed Central

    2011-01-01

    Background The purpose of this study was to design and evaluate fluorescent in situ hybridization (FISH) probes for the single-cell detection and enumeration of lactic acid bacteria, in particular organisms belonging to the major phylogenetic groups and species of oral lactobacilli and to Abiotrophia/Granulicatella. Results As lactobacilli are known for notorious resistance to probe penetration, probe-specific assay protocols were experimentally developed to provide maximum cell wall permeability, probe accessibility, hybridization stringency, and fluorescence intensity. The new assays were then applied in a pilot study to three biofilm samples harvested from variably demineralized bovine enamel discs that had been carried in situ for 10 days by different volunteers. Best probe penetration and fluorescent labeling of reference strains were obtained after combined lysozyme and achromopeptidase treatment followed by exposure to lipase. Hybridization stringency had to be established strictly for each probe. Thereafter all probes showed the expected specificity with reference strains and labeled the anticipated morphotypes in dental plaques. Applied to in situ grown biofilms the set of probes detected only Lactobacillus fermentum and bacteria of the Lactobacillus casei group. The most cariogenic biofilm contained two orders of magnitude higher L. fermentum cell numbers than the other biofilms. Abiotrophia/Granulicatella and streptococci from the mitis group were found in all samples at high levels, whereas Streptococcus mutans was detected in only one sample in very low numbers. Conclusions Application of these new group- and species-specific FISH probes to oral biofilm-forming lactic acid bacteria will allow a clearer understanding of the supragingival biome, its spatial architecture and of structure-function relationships implicated during plaque homeostasis and caries development. 
The probes should prove of value far beyond the field of oral microbiology.

  17. Comparison Between Polynomial, Euler Beta-Function and Expo-Rational B-Spline Bases

    NASA Astrophysics Data System (ADS)

    Kristoffersen, Arnt R.; Dechevsky, Lubomir T.; Lakså, Arne; Bang, Børre

    2011-12-01

    Euler Beta-function B-splines (BFBS) are the most important instance in practice of generalized expo-rational B-splines (GERBS) which are not true expo-rational B-splines (ERBS). BFBS do not enjoy the full range of the superproperties of ERBS but, while ERBS are special functions computable by very rapidly converging yet approximate numerical quadrature algorithms, BFBS are explicitly computable piecewise polynomials (for integer multiplicities), similar to classical Schoenberg B-splines. In the present communication we define, compute and visualize for the first time all possible BFBS of degree up to 3 which provide Hermite interpolation in three consecutive knots of multiplicity up to 3, i.e., the function is being interpolated together with its derivatives of order up to 2. We compare the BFBS obtained for different degrees and multiplicities among themselves and versus the classical Schoenberg polynomial B-splines and the true ERBS for the considered knots. The results of the graphical comparison are discussed from an analytical point of view. For the numerical computation and visualization of the new B-splines we have used Maple 12.

  18. A note on the zeros of Freud-Sobolev orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Moreno-Balcazar, Juan J.

    2007-10-01

    We prove that the zeros of a certain family of Sobolev orthogonal polynomials involving the Freud weight function e^(-x^4) on the real line are real, simple, and interlace with the zeros of the Freud polynomials, i.e., those polynomials orthogonal with respect to the weight function e^(-x^4). Some numerical examples are shown.

  19. Extended substrate specificity and first potent irreversible inhibitor/activity-based probe design for Zika virus NS2B-NS3 protease.

    PubMed

    Rut, Wioletta; Zhang, Linlin; Kasperkiewicz, Paulina; Poreba, Marcin; Hilgenfeld, Rolf; Drąg, Marcin

    2017-03-01

    Zika virus is spread by Aedes mosquitoes and is linked to acute neurological disorders, especially to microcephaly in newborn children and Guillain-Barré syndrome. The NS2B-NS3 protease of this virus is responsible for polyprotein processing and therefore considered an attractive drug target. In this study, we have used the Hybrid Combinatorial Substrate Library (HyCoSuL) approach to determine the substrate specificity of ZIKV NS2B-NS3 protease in the P4-P1 positions using natural and a large spectrum of unnatural amino acids. The obtained data demonstrate a high level of specificity of the S3-S1 subsites, especially for basic amino acids. However, the S4 site exhibits a very broad preference toward natural and unnatural amino acids, with selected D-amino acids being favored over L enantiomers. This information was used for the design of a very potent phosphonate inhibitor/activity-based probe of ZIKV NS2B-NS3 protease. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Verification of the FtCayuga fault-tolerant microprocessor system. Volume 2: Formal specification and correctness theorems

    NASA Technical Reports Server (NTRS)

    Bickford, Mark; Srivas, Mandayam

    1991-01-01

    Presented here is a formal specification and verification of a property of a quadruplicately redundant fault tolerant microprocessor system design. A complete listing of the formal specification of the system and the correctness theorems that are proved are given. The system performs the task of obtaining interactive consistency among the processors using a special instruction on the processors. The design is based on an algorithm proposed by Pease, Shostak, and Lamport. The property verified ensures that an execution of the special instruction by the processors correctly accomplishes interactive consistency, provided certain preconditions hold. The verification was carried out using a computer-aided design verification tool, Spectool, and the theorem prover, Clio. A major contribution of the work is the demonstration of a significant fault tolerant hardware design that is mechanically verified by a theorem prover.

  1. Polynomial Supertree Methods Revisited

    PubMed Central

    Brinkmeyer, Malte; Griebel, Thasso; Böcker, Sebastian

    2011-01-01

    Supertree methods allow the reconstruction of large phylogenetic trees by combining smaller trees with overlapping leaf sets into one more comprehensive supertree. The most commonly used supertree method, matrix representation with parsimony (MRP), produces accurate supertrees but is rather slow due to the underlying hard optimization problem. In this paper, we present an extensive simulation study comparing the performance of MRP and the polynomial supertree methods MinCut Supertree, Modified MinCut Supertree, Build-with-distances, PhySIC, PhySIC_IST, and super distance matrix. We consider both quality and resolution of the reconstructed supertrees. Our findings illustrate the tradeoff between accuracy and running time in supertree construction, as well as the pros and cons of voting- and veto-based supertree approaches. Based on our results, we make some general suggestions for supertree methods yet to come. PMID:22229028

  2. Flux-corrected transport algorithms for continuous Galerkin methods based on high order Bernstein finite elements

    NASA Astrophysics Data System (ADS)

    Lohmann, Christoph; Kuzmin, Dmitri; Shadid, John N.; Mabuza, Sibusiso

    2017-09-01

    This work extends the flux-corrected transport (FCT) methodology to arbitrary order continuous finite element discretizations of scalar conservation laws on simplex meshes. Using Bernstein polynomials as local basis functions, we constrain the total variation of the numerical solution by imposing local discrete maximum principles on the Bézier net. The design of accuracy-preserving FCT schemes for high order Bernstein-Bézier finite elements requires the development of new algorithms and/or generalization of limiting techniques tailored for linear and multilinear Lagrange elements. In this paper, we propose (i) a new discrete upwinding strategy leading to local extremum bounded low order approximations with compact stencils, (ii) high order variational stabilization based on the difference between two gradient approximations, and (iii) new localized limiting techniques for antidiffusive element contributions. The optional use of a smoothness indicator, based on a second derivative test, makes it possible to potentially avoid unnecessary limiting at smooth extrema and achieve optimal convergence rates for problems with smooth solutions. The accuracy of the proposed schemes is assessed in numerical studies for the linear transport equation in 1D and 2D.
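The convex-hull (range-bounding) property of the Bernstein basis is what makes local maximum principles on the Bézier net enforceable: on [0, 1], a polynomial written in Bernstein form never leaves the range of its Bézier coefficients. A minimal illustration (the coefficients are an arbitrary example, not data from the paper):

```python
from math import comb

def bernstein_eval(coeffs, t):
    """Evaluate a polynomial given by its Bernstein (Bezier) coefficients on [0, 1]."""
    n = len(coeffs) - 1
    return sum(c * comb(n, i) * t**i * (1 - t)**(n - i)
               for i, c in enumerate(coeffs))

coeffs = [0.0, 1.2, -0.3, 0.8]   # arbitrary cubic Bezier coefficients (assumed)

# Convex-hull property: the polynomial stays within [min(coeffs), max(coeffs)]
# on [0, 1] -- the bound that limiters can impose on the Bezier net directly.
values = [bernstein_eval(coeffs, k / 200) for k in range(201)]
assert min(coeffs) <= min(values) and max(values) <= max(coeffs)
```

Bounding the Bézier coefficients therefore bounds the polynomial itself, which is the lever the FCT limiters described above pull on.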

  3. Total Water Content Measurements with an Isokinetic Sampling Probe

    NASA Technical Reports Server (NTRS)

    Reehorst, Andrew L.; Miller, Dean R.; Bidwell, Colin S.

    2010-01-01

    The NASA Glenn Research Center has developed a Total Water Content (TWC) Isokinetic Sampling Probe. Since it is sensitive to neither cloud water particle phase nor size, it is particularly attractive to support super-cooled large droplet and high ice water content aircraft icing studies. The instrument is comprised of the Sampling Probe, Sample Flow Control, and Water Vapor Measurement subsystems. Analysis and testing have been conducted on the subsystems to ensure their proper function and accuracy. End-to-end bench testing has also been conducted to ensure the reliability of the entire instrument system. A Stokes Number based collection efficiency correction was developed to correct for probe thickness effects. The authors further discuss the need to ensure that no condensation occurs within the instrument plumbing. Instrument measurements compared to facility calibrations from testing in the NASA Glenn Icing Research Tunnel are presented and discussed. There appear to be liquid water content and droplet size effects in the differences between the two measurement techniques.
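The Stokes number underlying such a collection-efficiency correction compares a droplet's inertial response time with the flow time scale around the probe. A hedged sketch with illustrative values (the droplet size, airspeed, and probe dimension are assumptions, not the paper's data):

```python
# All numerical values are illustrative assumptions, not data from the paper.
rho_w = 1000.0     # kg/m^3, water droplet density
mu_air = 1.7e-5    # Pa*s, dynamic viscosity of air
d = 20e-6          # m, droplet diameter (assumed 20 um)
U = 67.0           # m/s, airspeed (assumed)
L = 2e-3           # m, characteristic probe lip dimension (assumed)

# Stokes number: droplet inertial response time over the flow time scale.
# Large Stk means droplets deviate little from straight paths near the probe lip.
stk = rho_w * d**2 * U / (18.0 * mu_air * L)
print(f"Stokes number: {stk:.1f}")
```

A correction curve would then map Stk to a collection efficiency; the exact functional form used in the study is not reproduced here.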

  4. Fast Minimum Variance Beamforming Based on Legendre Polynomials.

    PubMed

    Bae, MooHo; Park, Sung Bae; Kwon, Sung Jae

    2016-09-01

    Currently, minimum variance beamforming (MV) is actively investigated as a method that can improve the performance of an ultrasound beamformer in terms of the lateral and contrast resolution. However, this method has the disadvantage of excessive computational complexity, since the inverse spatial covariance matrix must be calculated. Noteworthy among various attempts to solve this problem are beam space adaptive beamforming methods and the fast MV method based on principal component analysis. These are similar in that the original signal in the element space is transformed to another domain using an orthonormal basis matrix, and the dimension of the covariance matrix is reduced by approximating the matrix only with its important components, hence making the inversion of the matrix very simple. Recently, we proposed a new method with further reduced computational demand that uses Legendre polynomials as the basis matrix for such a transformation. In this paper, we verify the efficacy of the proposed method through Field II simulations as well as in vitro and in vivo experiments. The results show that the approximation error of this method is less than or similar to those of the above-mentioned methods, and that the lateral response of point targets and the contrast-to-speckle noise in anechoic cysts are also better than or similar to those methods when the dimensionality of the covariance matrices is reduced to the same dimension.
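The dimension-reduction idea can be sketched as follows: project the element-space covariance matrix onto a few orthonormalized Legendre basis vectors and solve the minimum variance problem in that reduced space, so the matrix inversion shrinks from M×M to K×K. All sizes and the synthetic data below are assumptions for illustration, not the paper's setup:

```python
import numpy as np
from numpy.polynomial.legendre import legvander

M, K = 32, 4                               # elements / reduced dimension (assumed)
pos = np.linspace(-1.0, 1.0, M)            # element positions mapped to [-1, 1]
V = legvander(pos, K - 1)                  # M x K samples of Legendre polynomials
Q, _ = np.linalg.qr(V)                     # orthonormal basis matrix

rng = np.random.default_rng(0)
X = rng.standard_normal((M, 200))          # synthetic element-space snapshots
a = np.ones(M)                             # steering vector (broadside)

R = X @ X.T / X.shape[1]                   # full M x M spatial covariance
Rr = Q.T @ R @ Q                           # reduced K x K covariance
ar = Q.T @ a                               # reduced steering vector

wr = np.linalg.solve(Rr, ar)               # inversion now costs O(K^3), not O(M^3)
wr /= ar @ wr                              # distortionless-response normalization
w = Q @ wr                                 # MV weights back in element space
```

Because the degree-0 Legendre polynomial is constant, the broadside steering vector lies in the span of Q, so the distortionless constraint w·a = 1 is preserved exactly after reduction.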

  5. Fast beampattern evaluation by polynomial rooting

    NASA Astrophysics Data System (ADS)

    Häcker, P.; Uhlich, S.; Yang, B.

    2011-07-01

    Current automotive radar systems measure the distance, the relative velocity and the direction of objects in their environment. This information enables the car to support the driver. The direction estimation capabilities of a sensor array depend on its beampattern. To find the array configuration leading to the best angle estimation by a global optimization algorithm, a huge number of beampatterns have to be calculated to detect their maxima. In this paper, a novel algorithm is proposed to find all maxima of an array's beampattern fast and reliably, leading to accelerated array optimizations. The algorithm works for arrays having the sensors on a uniformly spaced grid. We use a general version of the gcd (greatest common divisor) function in order to write the problem as a polynomial. We differentiate and root the polynomial to get the extrema of the beampattern. In addition, we show a method to reduce the computational burden even more by decreasing the order of the polynomial.
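The core trick, restated: for a uniformly spaced array, the squared beampattern is a (Laurent) polynomial in z = e^(j*psi), so its extrema are roots of the derivative polynomial that lie on the unit circle. A hedged numpy sketch for an assumed 8-element uniform array (not the paper's gcd-based formulation):

```python
import numpy as np

w = np.ones(8)                          # uniform 8-element array weights (assumed)
# Squared beampattern on the unit circle: for real weights its Laurent
# coefficients are the autocorrelation of w, with exponents -(N-1)..(N-1).
g = np.convolve(w, w[::-1])             # coefficients of |P|^2, lowest exponent first
n = np.arange(-(len(w) - 1), len(w))    # exponent of each coefficient
# d/dpsi |P(e^{j psi})|^2 = 0  <=>  z G'(z) = 0 on the unit circle;
# multiplying by z^(N-1) turns z G'(z) into an ordinary polynomial.
dg = g * n
roots = np.roots(dg[::-1])              # np.roots expects highest power first
on_circle = roots[np.abs(np.abs(roots) - 1.0) < 1e-6]
psi = np.sort(np.angle(on_circle))      # candidate extremum angles of |P|^2
```

For this symmetric example every derivative root lies on the unit circle, giving the main-lobe maximum at psi = 0 plus the sidelobe maxima and nulls; in general only the roots near the unit circle are kept as extremum candidates.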

  6. Quantitative Electron Probe Microanalysis: State of the Art

    NASA Technical Reports Server (NTRS)

    Carpenter, P. K.

    2005-01-01

    Quantitative electron-probe microanalysis (EPMA) has improved due to better instrument design and X-ray correction methods. Design improvement of the electron column and X-ray spectrometer has resulted in measurement precision that exceeds analytical accuracy. Wavelength-dispersive spectrometers (WDS) have layered-dispersive diffraction crystals with improved light-element sensitivity. Newer energy-dispersive spectrometers (EDS) have Si-drift detector elements, thin window designs, and digital processing electronics with X-ray throughput approaching that of WDS systems. Using these systems, digital X-ray mapping coupled with spectrum imaging is a powerful compositional mapping tool. Improvements in analytical accuracy are due to better X-ray correction algorithms, mass absorption coefficient data sets, and analysis methods for complex geometries. ZAF algorithms have been superseded by Phi(pz) algorithms that better model the depth distribution of primary X-ray production. Complex thin film and particle geometries are treated using Phi(pz) algorithms, and results agree well with Monte Carlo simulations. For geological materials, X-ray absorption dominates the corrections and depends on the accuracy of mass absorption coefficient (MAC) data sets. However, few MACs have been experimentally measured, and the use of fitted coefficients continues due to the general success of the analytical technique. A polynomial formulation of the Bence-Albee alpha-factor technique, calibrated using Phi(pz) algorithms, is used to critically evaluate accuracy issues. Accuracy approaches 2% relative and is limited by measurement precision for ideal cases, but for many elements the analytical accuracy is unproven. The EPMA technique has improved to the point where it is frequently used instead of the petrographic microscope for reconnaissance work. Examples of stagnant research areas are: WDS detector design, characterization of calibration standards, and the need for more complete

  7. Whole-body multicolor spectrally resolved fluorescence imaging for development of target-specific optical contrast agents using genetically engineered probes

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hisataka; Hama, Yukihiro; Koyama, Yoshinori; Barrett, Tristan; Urano, Yasuteru; Choyke, Peter L.

    2007-02-01

    Target-specific contrast agents are being developed for the molecular imaging of cancer. Optically detectable target-specific agents are promising for clinical applications because of their high sensitivity and specificity. Preclinical testing is needed, however, to validate the actual sensitivity and specificity of these agents in animal models, and involves both conventional histology and immunohistochemistry, which requires large numbers of animals and samples with costly handling. However, a superior validation tool takes advantage of genetic engineering technology whereby cell lines are transfected with genes that induce the target cell to produce fluorescent proteins with characteristic emission spectra, thus identifying them as cancer cells. Multicolor fluorescence imaging of these genetically engineered probes can provide rapid validation of newly developed exogenous probes that fluoresce at different wavelengths. For example, the plasmid containing the gene encoding red fluorescent protein (RFP) was transfected into cell lines previously developed to either express or not express specific cell surface receptors. Various antibody-based or receptor ligand-based optical contrast agents with either green or near infrared fluorophores were developed to concurrently target and validate cancer cells and their positive and negative controls, such as β-D-galactose receptor, HER1 and HER2, in a single animal/organ. Spectrally resolved fluorescence multicolor imaging was used to detect separate fluorescent emission spectra from the exogenous agents and RFP. Therefore, using this in vivo imaging technique, we were able to demonstrate the sensitivity and specificity of the target-specific optical contrast agents, thus reducing the number of animals needed to conduct these experiments.

  8. In situ temperature measurements with thermocouple probes during laser interstitial thermotherapy (LITT): quantification and correction of a measurement artifact.

    PubMed

    Manns, F; Milne, P J; Gonzalez-Cirre, X; Denham, D B; Parel, J M; Robinson, D S

    1998-01-01

    The purpose of this work was to quantify the magnitude of an artifact induced by stainless steel thermocouple probes in temperature measurements made in situ during experimental laser interstitial thermotherapy (LITT). A procedure for correction of this observational error is outlined. A CW Nd:YAG laser system emitting 20 W for 25-30 s delivered through a fiber-optic probe was used to create localized heating. The temperature field around the fiber-optic probe during laser irradiation was measured every 0.3 s in air, water, 0.4% intralipid solution, and fatty cadaver pig tissue, with a field of up to fifteen needle thermocouple probes. Direct absorption of Nd:YAG laser radiation by the thermocouple probes induced an overestimation of the temperature, ranging from 1.8 degrees C to 118.6 degrees C in air, 2.2 degrees C to 9.9 degrees C in water, 0.7 degrees C to 4.7 degrees C in intralipid, and 0.3 degrees C to 17.9 degrees C in porcine tissue after irradiation at 20 W for 30 s, depending on the thermocouple location. The artifact in porcine tissue was removed by applying exponential and linear fits to the measured temperature curves. Light absorption by thermocouple probes can induce a significant artifact in the measurement of laser-induced temperature increases. When the time constant of the thermocouple effect is much smaller than the thermal relaxation time of the surrounding tissue, the artifact can be accurately quantified. During LITT experiments where temperature differences of a few degrees are significant, the thermocouple artifact must be removed in order to accurately predict the treatment outcome.
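The correction strategy described, separating a fast-decaying exponential probe artifact from the slowly varying tissue temperature by fitting, can be sketched as follows. The model, time constant, and temperatures below are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Assumed model for illustration: measured T(t) = tissue trend + artifact,
# with a linear tissue trend and an exponentially decaying probe artifact.
t = np.arange(0.0, 6.0, 0.3)                     # s, 0.3 s sampling as in the study
tau, A = 0.4, 12.0                               # artifact time constant / amplitude (assumed)
tissue = 60.0 - 1.5 * t                          # degrees C, slowly varying tissue temperature
measured = tissue + A * np.exp(-t / tau)         # thermocouple reading with artifact

late = t > 5 * tau                               # window where the artifact has decayed
c1, c0 = np.polyfit(t[late], measured[late], 1)  # linear fit to the tissue trend
artifact = measured - (c0 + c1 * t)              # residual ~ the exponential artifact
corrected = measured - artifact                  # artifact-free temperature estimate
```

The separation works precisely because the artifact's time constant is much smaller than the tissue's thermal relaxation time, as the abstract notes.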

  9. Homogenous polynomially parameter-dependent H∞ filter designs of discrete-time fuzzy systems.

    PubMed

    Zhang, Huaguang; Xie, Xiangpeng; Tong, Shaocheng

    2011-10-01

    This paper proposes a novel H(∞) filtering technique for a class of discrete-time fuzzy systems. First, a novel kind of fuzzy H(∞) filter, which is homogenous polynomially parameter dependent on membership functions with an arbitrary degree, is developed to guarantee the asymptotic stability and a prescribed H(∞) performance of the filtering error system. Second, relaxed conditions for H(∞) performance analysis are proposed by using a new fuzzy Lyapunov function and the Finsler lemma with homogenous polynomial matrix Lagrange multipliers. Then, based on a new kind of slack variable technique, relaxed linear matrix inequality-based H(∞) filtering conditions are proposed. Finally, two numerical examples are provided to illustrate the effectiveness of the proposed approach.

  10. Computing Tutte polynomials of contact networks in classrooms

    NASA Astrophysics Data System (ADS)

    Hincapié, Doracelly; Ospina, Juan

    2013-05-01

    Objective: The topological complexity of contact networks in classrooms and the potential transmission of an infectious disease were analyzed by sex and age. Methods: The Tutte polynomials, some topological properties and the number of spanning trees were used to algebraically compute the topological complexity. Computations were made with the Maple package GraphTheory. Published data of mutually reported social contacts within a classroom taken from primary school, consisting of children in the age ranges of 4-5, 7-8 and 10-11, were used. Results: The algebraic complexity of the Tutte polynomial and the probability of disease transmission increase with age. The contact networks are not bipartite graphs; gender segregation was observed, especially in younger children. Conclusion: Tutte polynomials are tools to understand the topology of contact networks and to derive numerical indexes of such topologies. It is possible to establish relationships between the Tutte polynomial of a given contact network and the potential transmission of an infectious disease within such a network.
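The paper computes Tutte polynomials with Maple's GraphTheory package; as a standalone illustration, the classical deletion-contraction recursion can be coded directly, representing T(G; x, y) as a dictionary mapping (i, j) to the coefficient of x^i y^j:

```python
def _connected(edges, u, v):
    """True if u and v are joined by the remaining edges (BFS)."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, stack = {u}, [u]
    while stack:
        w = stack.pop()
        if w == v:
            return True
        for nb in adj.get(w, ()):
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return False

def _contract(edges, u, v):
    """Merge vertex v into u, keeping any loops/parallel edges that result."""
    return [(u if a == v else a, u if b == v else b) for a, b in edges]

def _add(p, q):
    r = dict(p)
    for k, c in q.items():
        r[k] = r.get(k, 0) + c
    return r

def tutte(edges):
    """Tutte polynomial by deletion-contraction; exponential time, small graphs only."""
    if not edges:
        return {(0, 0): 1}                     # empty graph: T = 1
    (u, v), rest = edges[0], edges[1:]
    if u == v:                                 # loop: factor of y
        return {(i, j + 1): c for (i, j), c in tutte(rest).items()}
    if _connected(rest, u, v):                 # ordinary edge: delete + contract
        return _add(tutte(rest), tutte(_contract(rest, u, v)))
    # bridge: factor of x times the contraction
    return {(i + 1, j): c for (i, j), c in tutte(_contract(rest, u, v)).items()}

# Triangle K3: the classical result T = x^2 + x + y
print(tutte([(1, 2), (2, 3), (1, 3)]))        # {(2, 0): 1, (1, 0): 1, (0, 1): 1}
```

Evaluations of T then give the counts used above, e.g. T(1, 1) is the number of spanning trees.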

  11. Polynomial interpolation and sums of powers of integers

    NASA Astrophysics Data System (ADS)

    Cereceda, José Luis

    2017-02-01

    In this note, we revisit the problem of polynomial interpolation and explicitly construct two polynomials in n of degree k + 1, P_k(n) and Q_k(n), such that P_k(n) = Q_k(n) = f_k(n) for n = 1, 2, ..., k, where f_k(1), f_k(2), ..., f_k(k) are k arbitrarily chosen (real or complex) values. Then, we focus on the case that f_k(n) is given by the sum of powers of the first n positive integers, S_k(n) = 1^k + 2^k + ... + n^k, and show that S_k(n) admits the polynomial representations S_k(n) = P_k(n) and S_k(n) = Q_k(n) for all n = 1, 2, ..., and k ≥ 1, where the first representation involves the Eulerian numbers, and the second one the Stirling numbers of the second kind. Finally, we consider yet another polynomial formula for S_k(n) alternative to the well-known formula of Bernoulli.
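The Stirling-number representation can be checked numerically: expanding i^k in falling factorials gives the standard closed form S_k(n) = Σ_j S2(k, j) · j! · C(n+1, j+1). A small sketch (the identity is standard, though the paper's exact polynomial Q_k(n) may be presented differently):

```python
from math import comb, factorial

def stirling2(k, j):
    """Stirling numbers of the second kind S2(k, j) via the standard recurrence."""
    if j == 0:
        return 1 if k == 0 else 0
    if j > k:
        return 0
    return j * stirling2(k - 1, j) + stirling2(k - 1, j - 1)

def S(k, n):
    """1^k + 2^k + ... + n^k, using i^k = sum_j S2(k, j) * (falling factorial)."""
    return sum(stirling2(k, j) * factorial(j) * comb(n + 1, j + 1)
               for j in range(k + 1))

# Check against direct summation for a range of k and n
assert all(S(k, n) == sum(i**k for i in range(1, n + 1))
           for k in range(1, 6) for n in range(1, 20))
```

For example, S(3, 10) returns 3025, matching the classical identity (1 + 2 + ... + 10)^2.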

  12. Combinatorial theory of Macdonald polynomials I: proof of Haglund's formula.

    PubMed

    Haglund, J; Haiman, M; Loehr, N

    2005-02-22

    Haglund recently proposed a combinatorial interpretation of the modified Macdonald polynomials H(mu). We give a combinatorial proof of this conjecture, which establishes the existence and integrality of H(mu). As corollaries, we obtain the cocharge formula of Lascoux and Schutzenberger for Hall-Littlewood polynomials, a formula of Sahi and Knop for Jack's symmetric functions, a generalization of this result to the integral Macdonald polynomials J(mu), a formula for H(mu) in terms of Lascoux-Leclerc-Thibon polynomials, and combinatorial expressions for the Kostka-Macdonald coefficients K(lambda,mu) when mu is a two-column shape.

  13. Higher order derivatives of R-Jacobi polynomials

    NASA Astrophysics Data System (ADS)

    Das, Sourav; Swaminathan, A.

    2016-06-01

    In this work, the R-Jacobi polynomials defined on the nonnegative real axis, related to the F-distribution, are considered. Using their Sturm-Liouville system, higher order derivatives are constructed. Orthogonality properties of these higher-order R-Jacobi polynomials are obtained, besides their normal form, self-adjoint form and hypergeometric representation. Interesting results on the interpolation formula and Gaussian quadrature formulae are obtained, with numerical examples.

  14. Measurement of electron density using reactance cutoff probe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, K. H.; Seo, B. H.; Kim, J. H.

    2016-05-15

    This paper proposes a new measurement method of electron density using the reactance spectrum of the plasma in the cutoff probe system instead of the transmission spectrum. The highly accurate reactance spectrum of the plasma-cutoff probe system, as expected from previous circuit simulations [Kim et al., Appl. Phys. Lett. 99, 131502 (2011)], was measured using the full two-port error correction and automatic port extension methods of the network analyzer. The electron density can be obtained from the analysis of the measured reactance spectrum, based on circuit modeling. According to the circuit simulation results, the reactance cutoff probe can measure the electron density more precisely than the previous cutoff probe at low densities or at higher pressure. The obtained results for the electron density are presented and discussed for a wide range of experimental conditions, and this method is compared with previous methods (a cutoff probe using the transmission spectrum and a single Langmuir probe).
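Whichever spectrum is analyzed, a cutoff probe ultimately maps the measured cutoff (electron plasma) frequency to density through the standard relation n_e = (2*pi*f_pe)^2 * eps0 * m_e / e^2. A sketch with an assumed example frequency:

```python
from math import pi

eps0 = 8.854e-12      # F/m, vacuum permittivity
m_e = 9.109e-31       # kg, electron mass
e = 1.602e-19         # C, elementary charge

def electron_density(f_pe_hz):
    """Electron density (m^-3) from the measured cutoff/plasma frequency (Hz)."""
    return (2 * pi * f_pe_hz) ** 2 * eps0 * m_e / e ** 2

n_e = electron_density(2.8e9)         # an assumed 2.8 GHz cutoff, for illustration
print(f"n_e = {n_e:.2e} m^-3")        # on the order of 1e17 m^-3
```

This reproduces the familiar rule of thumb f_pe ≈ 8980·sqrt(n_e[cm^-3]) Hz.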

  15. TALEN-mediated generation and genetic correction of disease-specific human induced pluripotent stem cells.

    PubMed

    Ramalingam, Sivaprakash; Annaluru, Narayana; Kandavelou, Karthikeyan; Chandrasegaran, Srinivasan

    2014-01-01

    Generation and precise genetic correction of patient-derived hiPSCs have great potential in regenerative medicine. Such targeted genetic manipulations can now be achieved using gene-editing nucleases. Here, we report generation of cystic fibrosis (CF) and Gaucher's disease (GD) hiPSCs respectively from CF (homozygous for CFTRΔF508 mutation) and Type II GD [homozygous for β-glucocerebrosidase (GBA) 1448T>C mutation] patient fibroblasts, using CCR5-specific TALENs. Site-specific addition of a loxP-flanked Oct4/Sox2/Klf4/Lin28/Nanog/eGFP gene cassette at the endogenous CCR5 site of patient-derived disease-specific primary fibroblasts induced reprogramming, giving rise to both monoallele (heterozygous) and biallele CCR5-modified hiPSCs. Subsequent excision of the donor cassette was done by treating CCR5-modified CF and GD hiPSCs with Cre. We also demonstrate site-specific correction of sickle cell disease (SCD) mutations at the endogenous HBB locus of patient-specific hiPSCs [TNC1 line that is homozygous for mutated β-globin alleles (βS/βS)], using HBB-specific TALENs. SCD-corrected hiPSC lines showed gene conversion of the mutated βS to the wild-type βA in one of the HBB alleles, while the other allele retained the mutant phenotype. After excision of the loxP-flanked DNA cassette from the SCD-corrected hiPSC lines using Cre, we obtained secondary heterozygous βS/βA hiPSCs, which express the wild-type (βA) transcript at a 30-40% level as compared to uncorrected (βS/βS) SCD hiPSCs when differentiated into erythroid cells. Furthermore, we also show that TALEN-mediated generation and genetic correction of disease-specific hiPSCs did not induce any off-target mutations at closely related sites.

  16. Subclass-specific labeling of protein-reactive natural products with customized nucleophilic probes.

    PubMed

    Rudolf, Georg C; Koch, Maximilian F; Mandl, Franziska A M; Sieber, Stephan A

    2015-02-23

    Natural products represent a rich source of bioactive compounds that constitute a large fraction of approved drugs. Among those are molecules with electrophilic scaffolds, such as Michael acceptors, β-lactams, and epoxides that irreversibly inhibit essential enzymes based on their catalytic mechanism. In the search for novel bioactive molecules, current methods are challenged by the frequent rediscovery of known chemical entities. Herein small nucleophilic probes that attack electrophilic natural products and enhance their detection by HPLC-UV and HPLC-MS are introduced. A screen of diverse probe designs revealed one compound with a desired selectivity for epoxide- and maleimide-based antibiotics. Correspondingly, the natural products showdomycin and phosphomycin could be selectively targeted in extracts of their natural producing organism, in which the probe-modified molecules exhibited superior retention and MS detection relative to their unmodified counterparts. This method may thus help to discover small, electrophilic molecules that might otherwise easily elude detection in complex samples. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Torus Knot Polynomials and Susy Wilson Loops

    NASA Astrophysics Data System (ADS)

    Giasemidis, Georgios; Tierz, Miguel

    2014-12-01

    We give, using an explicit expression obtained in (Jones V, Ann Math 126:335, 1987), a basic hypergeometric representation of the HOMFLY polynomial of (n, m) torus knots, and present a number of equivalent expressions, all related by Heine's transformations. Using this result, the symmetry and the leading polynomial at large N are explicit. We show the latter to be the Wilson loop of 2d Yang-Mills theory on the plane. In addition, after taking one winding to infinity, it becomes the Wilson loop in the zero instanton sector of the 2d Yang-Mills theory, which is known to give averages of Wilson loops in N = 4 SYM theory. We also give, using matrix models, an interpretation of the HOMFLY polynomial and the corresponding Jones-Rosso representation in terms of q-harmonic oscillators.

  18. Y chromosome specific nucleic acid probe and method for determining the Y chromosome in situ

    DOEpatents

    Gray, Joe W.; Weier, Heinz-Ulrich

    1998-01-01

    A method for producing a Y chromosome specific probe selected from highly repeating sequences on that chromosome is described. There is little or no nonspecific binding to autosomal and X chromosomes, and a very large signal is provided. Inventive primers allowing the use of PCR for both sample amplification and probe production are described, as is their use in producing large DNA chromosome painting sequences.

  19. Y chromosome specific nucleic acid probe and method for identifying the Y chromosome in SITU

    DOEpatents

    Gray, Joe W.; Weier, Heinz-Ulrich

    1999-01-01

    A method for producing a Y chromosome specific probe selected from highly repeating sequences on that chromosome is described. There is little or no nonspecific binding to autosomal and X chromosomes, and a very large signal is provided. Inventive primers allowing the use of PCR for both sample amplification and probe production are described, as is their use in producing large DNA chromosome painting sequences.

  20. Y chromosome specific nucleic acid probe and method for determining the Y chromosome in situ

    DOEpatents

    Gray, Joe W.; Weier, Heinz-Ulrich

    2001-01-01

    A method for producing a Y chromosome specific probe selected from highly repeating sequences on that chromosome is described. There is little or no nonspecific binding to autosomal and X chromosomes, and a very large signal is provided. Inventive primers allowing the use of PCR for both sample amplification and probe production are described, as is their use in producing large DNA chromosome painting sequences.

  1. Y chromosome specific nucleic acid probe and method for determining the Y chromosome in situ

    DOEpatents

    Gray, J.W.; Weier, H.U.

    1998-11-24

    A method for producing a Y chromosome specific probe selected from highly repeating sequences on that chromosome is described. There is little or no nonspecific binding to autosomal and X chromosomes, and a very large signal is provided. Inventive primers allowing the use of PCR for both sample amplification and probe production are described, as is their use in producing large DNA chromosome painting sequences. 9 figs.

  2. Y chromosome specific nucleic acid probe and method for identifying the Y chromosome in SITU

    DOEpatents

    Gray, J.W.; Weier, H.U.

    1999-03-30

    A method for producing a Y chromosome specific probe selected from highly repeating sequences on that chromosome is described. There is little or no nonspecific binding to autosomal and X chromosomes, and a very large signal is provided. Inventive primers allowing the use of PCR for both sample amplification and probe production are described, as is their use in producing large DNA chromosome painting sequences. 9 figs.

  3. M-Polynomials and topological indices of V-Phenylenic Nanotubes and Nanotori.

    PubMed

    Kwun, Young Chel; Munir, Mobeen; Nazeer, Waqas; Rafique, Shazia; Min Kang, Shin

    2017-08-18

    V-Phenylenic nanotubes and nanotori are among the most comprehensively studied nanostructures due to widespread applications in the production of catalytic, gas-sensing and corrosion-resistant materials. Representing chemical compounds with an M-polynomial is a recent idea, and it produces nice formulas of degree-based topological indices which correlate with chemical properties of the material under investigation. These indices are used in the development of quantitative structure-activity relationships (QSARs), in which the biological activity and other properties of molecules, like boiling point, stability, strain energy etc., are correlated with their structures. In this paper, we determine general closed formulae for M-polynomials of V-Phenylenic nanotubes and nanotori. We recover important topological degree-based indices. We also give different graphs of topological indices and their relations with the parameters of the structures.
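The mechanics of recovering degree-based indices from an M-polynomial can be sketched generically: with M(x, y) = Σ m_ij x^i y^j, where m_ij counts edges whose endpoint degrees are i and j, the first Zagreb index is (D_x + D_y)M evaluated at (1, 1), i.e. Σ (i+j)·m_ij, and the second Zagreb index is D_x D_y M at (1, 1), i.e. Σ i·j·m_ij. The cycle-graph example below is illustrative, not one of the paper's nanostructures:

```python
# M-polynomial represented as a dict {(i, j): m_ij}.
def first_zagreb(M):
    """(D_x + D_y) M evaluated at (1, 1): sum of (i + j) * m_ij."""
    return sum((i + j) * m for (i, j), m in M.items())

def second_zagreb(M):
    """D_x D_y M evaluated at (1, 1): sum of i * j * m_ij."""
    return sum(i * j * m for (i, j), m in M.items())

# Example (not from the paper): a cycle C_n has n edges, each with endpoint
# degrees (2, 2), so M(x, y) = n * x^2 * y^2. For n = 5:
M_C5 = {(2, 2): 5}
print(first_zagreb(M_C5), second_zagreb(M_C5))    # 20 20
```

The paper's closed formulae for V-Phenylenic nanotubes and nanotori plug into the same operator recipe with their specific m_ij counts.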

  4. In-Process Atomic-Force Microscopy (AFM) Based Inspection

    PubMed Central

    Mekid, Samir

    2017-01-01

    A new in-process atomic-force microscopy (AFM) based inspection is presented for nanolithography to compensate for any deviation such as instantaneous degradation of the lithography probe tip. The traditional method uses the AFM probe for lithography work and retracts it to inspect the obtained feature, but this practice degrades the probe tip shape and hence affects the measurement quality. This paper suggests a second dedicated lithography probe that is positioned back-to-back to the AFM probe under two synchronized controllers to correct any deviation in the process compared to specifications. This method yields improved nanomachining quality, monitoring of in-process probe tip wear, and a better understanding of nanomachining. The system is hosted in a recently developed nanomanipulator for educational and research purposes. PMID:28561747

  5. The Fixed-Links Model in Combination with the Polynomial Function as a Tool for Investigating Choice Reaction Time Data

    ERIC Educational Resources Information Center

    Schweizer, Karl

    2006-01-01

    A model with fixed relations between manifest and latent variables is presented for investigating choice reaction time data. The numbers for fixation originate from the polynomial function. Two options are considered: the component-based (1 latent variable for each component of the polynomial function) and composite-based options (1 latent…

  6. Identities associated with Milne-Thomson type polynomials and special numbers.

    PubMed

    Simsek, Yilmaz; Cakic, Nenad

    2018-01-01

    The purpose of this paper is to give identities and relations including the Milne-Thomson polynomials, the Hermite polynomials, the Bernoulli numbers, the Euler numbers, the Stirling numbers, the central factorial numbers, and the Cauchy numbers. By using fermionic and bosonic p -adic integrals, we derive some new relations and formulas related to these numbers and polynomials, and also the combinatorial sums.

  7. Detector to detector corrections: a comprehensive experimental study of detector specific correction factors for beam output measurements for small radiotherapy beams.

    PubMed

    Azangwe, Godfrey; Grochowska, Paulina; Georg, Dietmar; Izewska, Joanna; Hopfgartner, Johannes; Lechner, Wolfgang; Andersen, Claus E; Beierholm, Anders R; Helt-Hansen, Jakob; Mizuno, Hideyuki; Fukumura, Akifumi; Yajima, Kaori; Gouldstone, Clare; Sharpe, Peter; Meghzifene, Ahmed; Palmans, Hugo

    2014-07-01

    The aim of the present study is to provide a comprehensive set of detector-specific correction factors for beam output measurements for small beams, for a wide range of real-time and passive detectors. The detector-specific correction factors determined in this study may be potentially useful as a reference data set for small beam dosimetry measurements. The dose response of passive and real-time detectors was investigated for small field sizes shaped with a micro-multileaf collimator, ranging from 0.6 × 0.6 cm² to 4.2 × 4.2 cm², and the measurements were extended to larger fields of up to 10 × 10 cm². Measurements were performed at 5 cm depth in a 6 MV photon beam. Detectors used included alanine, thermoluminescent dosimeters (TLDs), stereotactic diode, electron diode, photon diode, radiophotoluminescent dosimeters (RPLDs), a radioluminescence detector based on carbon-doped aluminium oxide (Al2O3:C), organic plastic scintillators, diamond detectors, a liquid-filled ion chamber, and a range of small-volume air-filled ionization chambers (volumes ranging from 0.002 cm³ to 0.3 cm³). All detector measurements were corrected for the volume averaging effect and compared with dose ratios determined from alanine to derive detector correction factors that account for beam perturbation related to the non-water equivalence of the detector materials. For the detectors used in this study, volume averaging corrections ranged from unity for the smallest detectors, such as the diodes, to 1.148 for the 0.14 cm³ air-filled ionization chamber, and were as high as 1.924 for the 0.3 cm³ ionization chamber. After applying volume averaging corrections, the detector readings were consistent among themselves and with alanine measurements for several small detectors, but they differed for larger detectors, in particular for some small ionization chambers with volumes larger than 0.1 cm³. The results demonstrate how important it is for the appropriate corrections to be applied to give
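
    The correction chain described in the abstract reduces to simple arithmetic. The function names and example readings below are hypothetical; only the volume-averaging values (1.148, 1.924) come from the abstract:

```python
def corrected_output_ratio(detector_ratio, k_vol, k_det=1.0):
    """Apply the volume-averaging correction (k_vol) and the
    detector-specific correction (k_det) to a raw output ratio."""
    return detector_ratio * k_vol * k_det

def detector_specific_factor(alanine_ratio, detector_ratio, k_vol):
    """Solve for k_det so the volume-corrected detector ratio
    matches the alanine dose ratio."""
    return alanine_ratio / (detector_ratio * k_vol)

# Hypothetical small field: alanine dose ratio 0.68, a 0.3 cm^3 chamber
# reads 0.35 before its k_vol = 1.924 volume-averaging correction
k_det = detector_specific_factor(0.68, 0.35, 1.924)
print(round(k_det, 3))
```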

  8. Optimization and control of perfusion cultures using a viable cell probe and cell specific perfusion rates.

    PubMed

    Dowd, Jason E; Jubb, Anthea; Kwok, K Ezra; Piret, James M

    2003-05-01

    Consistent perfusion culture production requires reliable cell retention and control of feed rates. An on-line cell probe based on capacitance was used to assay viable biomass concentrations. A constant cell-specific perfusion rate controlled medium feed rates with a bioreactor cell concentration of approximately 5 × 10⁶ cells mL⁻¹. Perfusion feeding was automatically adjusted based on the cell concentration signal from the on-line biomass sensor. Cell-specific perfusion rates were varied over a range of 0.05 to 0.4 nL cell⁻¹ day⁻¹. Pseudo-steady-state bioreactor indices (concentrations, cellular rates and yields) were correlated to the cell-specific perfusion rates investigated to maximize recombinant protein production from a Chinese hamster ovary cell line. The tissue-type plasminogen activator concentration was maximized (approximately 40 mg L⁻¹) at 0.2 nL cell⁻¹ day⁻¹. The volumetric protein productivity (approximately 60 mg L⁻¹ day⁻¹) was maximized above 0.3 nL cell⁻¹ day⁻¹. The use of cell-specific perfusion rates provided a straightforward basis for controlling, modeling and optimizing perfusion cultures.
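
    The control law implied above — a medium feed rate proportional to total viable cells at a fixed cell-specific perfusion rate (CSPR) — is one line of arithmetic. A sketch under that assumption (the function name is hypothetical):

```python
def perfusion_feed_rate_ml_per_day(cspr_nl_per_cell_day, cells_per_ml, volume_ml):
    """Feed rate = CSPR x total viable cells, converted from nL/day to mL/day."""
    total_cells = cells_per_ml * volume_ml
    return total_cells * cspr_nl_per_cell_day * 1e-6  # 1 nL = 1e-6 mL

# 5e6 cells/mL in a 1 L (1000 mL) reactor at CSPR 0.2 nL/cell/day
# -> 1000 mL/day, i.e. one reactor volume exchanged per day
print(perfusion_feed_rate_ml_per_day(0.2, 5e6, 1000.0))
```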

  9. Chemical synthesis of oligonucleotides containing a free sulphydryl group and subsequent attachment of thiol specific probes.

    PubMed Central

    Connolly, B A; Rider, P

    1985-01-01

    Oligonucleotides containing a free sulphydryl group at their 5'-termini have been synthesised and further derivatised with thiol-specific probes. The required nucleotide sequence is prepared using standard solid-phase phosphoramidite techniques, and an extra round of synthesis is then performed using the S-triphenylmethyl O-methoxymorpholinophosphite derivatives of 2-mercaptoethanol, 3-mercaptopropan-1-ol or 6-mercaptohexan-1-ol. After cleavage from the resin and removal of the phosphate and base protecting groups, this yields an oligonucleotide containing an S-triphenylmethyl group attached to the 5'-phosphate group via a two-, three- or six-carbon chain. The triphenylmethyl group can be readily removed with silver nitrate to give the free thiol. With the three- and six-carbon chain oligonucleotides, this thiol can be used, at pH 8, for the attachment of thiol-specific probes, as illustrated by the reaction with fluorescent conjugates of iodoacetates and maleimides. However, oligonucleotides containing a thiol attached to the 5'-phosphate group via a two-carbon chain are unstable at pH 8, decomposing to the free 5'-phosphate, and so are unsuitable for further derivatisation. PMID:4011448

  10. Activatable clinical fluorophore-quencher antibody pairs as dual molecular probes for the enhanced specificity of image-guided surgery

    NASA Astrophysics Data System (ADS)

    Obaid, Girgis; Spring, Bryan Q.; Bano, Shazia; Hasan, Tayyaba

    2017-12-01

    The emergence of fluorescently labeled therapeutic antibodies has given rise to molecular probes for image-guided surgery. However, the extraneous interstitial presence of an unbound and nonspecifically accumulated probe gives rise to false-positive detection of tumor tissue and margins. Thus, the concept of tumor-cell activation of smart probes provides a potentially superior mechanism of delineating tumor margins as well as small tumor deposits. The combination of molecular targeting with intracellular activation circumvents the presence of extracellular, nonspecific signals of targeted probe accumulation. Here, we present a demonstration of the clinical antibodies cetuximab (cet, anti-EGFR mAb) and trastuzumab (trast, anti-HER-2 mAb) conjugated to Alexa Fluor molecules and IRDye QC-1 quencher optimized at the ratio of 1∶2∶6 to provide the greatest degree of proteolytic fluorescence activation, synonymous with intracellular lysosomal degradation. The cet-AF-QC-1 conjugate (1∶2∶6) provides up to 9.8-fold proteolytic fluorescence activation. By preparing a spectrally distinct, irrelevant sham IgG-AF-QC-1 conjugate, a dual-activatable probe approach is shown to enhance the specificity of imaging within an orthotopic AsPC-1 pancreatic cancer xenograft model. The dual-activatable approach warrants expedited clinical translation to improve the specificity of image-guided surgery by spectrally decomposing specific from nonspecific probe accumulation, binding, and internalization.

  11. Overestimation of Mach number due to probe shadow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gosselin, J. J.; Thakur, S. C.; Tynan, G. R.

    2016-07-15

    Comparisons of plasma ion flow speed measurements from Mach probes and laser-induced fluorescence were performed in the Controlled Shear Decorrelation Experiment. We show that the presence of the probe causes a low-density geometric shadow downstream of the probe that affects the current density collected by the probe in collisional plasmas if the ion-neutral mean free path is shorter than the probe shadow length, L_g = w² V_drift / D_⊥, resulting in erroneous Mach numbers. We then present a simple correction term that provides the corrected Mach number from probe data when the sound speed, ion-neutral mean free path, and perpendicular diffusion coefficient of the plasma are known. The probe shadow effect must be taken into account whenever the ion-neutral mean free path is on the order of the probe shadow length in linear devices and the open-field-line region of fusion devices.
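
    The shadow-length criterion quoted above, L_g = w² V_drift / D_⊥, can be evaluated directly. A sketch (variable names and example values are illustrative; the abstract's actual Mach-number correction term is not reproduced here):

```python
def probe_shadow_length(w, v_drift, d_perp):
    """Geometric shadow length L_g = w^2 * V_drift / D_perp, with probe
    width w (m), drift speed V_drift (m/s), diffusion D_perp (m^2/s)."""
    return w**2 * v_drift / d_perp

def shadow_matters(mfp_ion_neutral, w, v_drift, d_perp):
    """Shadow affects the collected current when the ion-neutral mean
    free path is shorter than the shadow length."""
    return mfp_ion_neutral < probe_shadow_length(w, v_drift, d_perp)

# Illustrative numbers: 5 mm probe, 500 m/s drift, D_perp = 1 m^2/s
print(probe_shadow_length(0.005, 500.0, 1.0))   # 0.0125 m
print(shadow_matters(0.001, 0.005, 500.0, 1.0))  # True: 1 mm mfp < 12.5 mm
```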

  12. Langmuir Probe Spacecraft Potential End Item Specification Document

    NASA Technical Reports Server (NTRS)

    Gilchrist, Brian; Curtis, Leslie (Technical Monitor)

    2001-01-01

    This document describes the Langmuir Probe Spacecraft Potential (LPSP) investigation of the plasma environment in the vicinity of the ProSEDS Delta II spacecraft. This investigation will employ a group of three (3) Langmuir Probe Assemblies (LPAs), mounted on the Delta II second stage, to measure the electron density and temperature (n_e and T_e), the ion density (n_i), and the spacecraft potential (V_s) relative to the surrounding ionospheric plasma. This document is also intended to define the technical requirements and flight-vehicle installation interfaces for the design, development, assembly, testing, qualification, and operation of the LPSP subsystem for the Propulsive Small Expendable Deployer System (ProSEDS) and its associated Ground Support Equipment (GSE). It also defines the interfaces between the LPSP instrument and the ProSEDS Delta II spacecraft, as well as the design, fabrication, operation, and other requirements established to meet the mission objectives. The LPSP is the primary measurement instrument designed to characterize the background plasma environment and is a supporting instrument for measuring the spacecraft potential of the Delta II vehicle used for the ProSEDS mission. Specifically, the LPSP will use the three LPAs, equally spaced around the Delta II body, to make measurements of the ambient ionospheric plasma during passive operations to aid in validating existing models of electrodynamic-tether propulsion. These same probes will also be used to measure the Delta II spacecraft potential when active operations occur. When the electron-emitting plasma contactor is on, dense neutral plasma is emitted. Effective operation of the plasma contactor (PC) will mean a low potential difference between the Delta II second stage and the surrounding plasma, and represents one of the voltage parameters needed to fully characterize the electrodynamic-tether closed circuit. Given that the LP already needs to be well away from any

  13. Development and evaluation of probe based real time loop mediated isothermal amplification for Salmonella: A new tool for DNA quantification.

    PubMed

    Mashooq, Mohmad; Kumar, Deepak; Niranjan, Ankush Kiran; Agarwal, Rajesh Kumar; Rathore, Rajesh

    2016-07-01

    A one-step, single-tube, accelerated probe-based real-time loop-mediated isothermal amplification (RT-LAMP) assay was developed for detecting the invasion gene (InvA) of Salmonella. The probe-based RT-LAMP is a novel method of gene amplification that amplifies nucleic acid with high specificity and rapidity under isothermal conditions with a set of six primers. The whole procedure is very simple and rapid, and amplification can be obtained in 20 min. Detection of gene amplification was accomplished by the amplification curve and turbidity; addition of a DNA-binding dye at the end of the reaction produces a colour difference that can be visualized under normal daylight and under UV. The sensitivity of the developed assay was found to be 10-fold higher than that of TaqMan-based qPCR. The specificity of the RT-LAMP assay was validated by the absence of any cross-reaction with other members of the family Enterobacteriaceae and other gram-negative bacteria. These results indicate that the probe-based RT-LAMP assay is extremely rapid, cost-effective, highly specific and sensitive, and has potential usefulness for rapid Salmonella surveillance. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.

  14. FRET-based small-molecule fluorescent probes: rational design and bioimaging applications.

    PubMed

    Yuan, Lin; Lin, Weiying; Zheng, Kaibo; Zhu, Sasa

    2013-07-16

    Fluorescence imaging has emerged as a powerful tool for monitoring biomolecules within the context of living systems with high spatial and temporal resolution. Researchers have constructed a large number of synthetic intensity-based fluorescent probes for bioimaging. However, intensity-based fluorescent probes have some limitations: variations in probe concentration, probe environment, and excitation intensity may influence the fluorescence intensity measurements. In principle, the use of ratiometric fluorescent probes can alleviate this shortcoming. Förster resonance energy transfer (FRET) is one of the most widely used sensing mechanisms for ratiometric fluorescent probes. However, the development of synthetic FRET probes with favorable photophysical properties that are also suitable for biological imaging applications remains challenging. In this Account, we review the rational design and biological applications of synthetic FRET probes, focusing primarily on studies from our laboratory. To construct useful FRET probes, a prerequisite is to develop a FRET platform with favorable photophysical properties. The design criteria of a FRET platform include (1) well-resolved absorption spectra of the donor and acceptor, (2) well-separated emission spectra of the donor and acceptor, (3) donors and acceptors with comparable brightness, (4) rigid linkers, and (5) near-perfect efficiency in energy transfer. With an efficient FRET platform in hand, it is then necessary to modulate the donor-acceptor distance or spectral overlap integral in an analyte-dependent fashion for the development of FRET probes. Herein, we emphasize our most recent progress on the development of FRET probes via the spectral overlap integral, in particular by changing the molar absorption coefficient of donor dyes such as rhodamine dyes, which undergo unique changes in their absorption profiles during the ring-opening and -closing processes. Although partial success has been obtained in design of

  15. Genetic Correction and Hepatic Differentiation of Hemophilia B-specific Human Induced Pluripotent Stem Cells.

    PubMed

    He, Qiong; Wang, Hui-Hui; Cheng, Tao; Yuan, Wei-Ping; Ma, Yu-Po; Jiang, Yong-Ping; Ren, Zhi-Hua

    2017-09-27

    Objective To genetically correct a disease-causing point mutation in human induced pluripotent stem cells (iPSCs) derived from a hemophilia B patient. Methods First, the disease-causing mutation was detected by sequencing the coding region of the human coagulation factor IX (F IX) gene. Genomic DNA was extracted from the iPSCs, and primers were designed to amplify the eight exons of F IX. Next, the point mutation in those iPSCs was genetically corrected using CRISPR/Cas9 technology in the presence of a 129-nucleotide homologous repair template that contained two synonymous mutations. Then, the top 8 potential off-target sites were subsequently analyzed using Sanger sequencing. Finally, the corrected clones were differentiated into hepatocyte-like cells, and the secretion of F IX was validated by immunocytochemistry and ELISA assay. Results The cell line bore a missense mutation in the 6th coding exon (c.676 C>T) of the F IX gene. Correction of the point mutation was achieved via CRISPR/Cas9 technology in situ with a high efficacy of about 22% (10/45), and no off-target effects were detected in the corrected iPSC clones. F IX secretion, which was further visualized by immunocytochemistry and quantified by ELISA in vitro, reached about 6 ng/ml on day 21 of the differentiation procedure. Conclusions Mutations in human disease-specific iPSCs can be precisely corrected by CRISPR/Cas9 technology, and the corrected cells still maintain hepatic differentiation capability. Our findings might shed light on iPSC-based personalized therapies in clinical application, especially for hemophilia B.

  16. Local polynomial estimation of heteroscedasticity in a multivariate linear regression model and its applications in economics.

    PubMed

    Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan

    2012-01-01

    Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained using the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the non-parametric technique of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function, so the estimation precision can be improved even when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
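
    A minimal sketch of the two-stage idea, assuming a Gaussian kernel, a local linear fit, and a univariate regressor for illustration (the paper treats the multivariate case):

```python
import numpy as np

# Synthetic heteroscedastic data: y = 1 + 2x + sigma(x) * eps
rng = np.random.default_rng(0)
n = 400
x = rng.uniform(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])
y = X @ np.array([1.0, 2.0]) + (0.2 + 0.8 * x) * rng.normal(size=n)

# Stage 1: OLS fit and squared residuals
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = (y - X @ beta_ols) ** 2

# Stage 2: local linear (kernel-weighted) fit of the squared residuals
# gives a nonparametric estimate of the variance function
def local_poly(x0, xs, ys, h=0.15, degree=1):
    sw = np.exp(-0.25 * ((xs - x0) / h) ** 2)        # sqrt of Gaussian kernel
    V = np.vander(xs - x0, degree + 1, increasing=True)
    coef, *_ = np.linalg.lstsq(V * sw[:, None], ys * sw, rcond=None)
    return coef[0]                                    # fitted value at x0

var_hat = np.maximum([local_poly(xi, x, r2) for xi in x], 1e-6)

# Stage 3: feasible GLS = weighted least squares with weights 1/var_hat
sw = 1.0 / np.sqrt(var_hat)
beta_gls, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
print(beta_gls)  # close to [1, 2]
```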

  17. Detection of Helicobacter Pylori Genome with an Optical Biosensor Based on Hybridization of Urease Gene with a Gold Nanoparticles-Labeled Probe

    NASA Astrophysics Data System (ADS)

    Shahrashoob, M.; Mohsenifar, A.; Tabatabaei, M.; Rahmani-Cherati, T.; Mobaraki, M.; Mota, A.; Shojaei, T. R.

    2016-05-01

    A novel optics-based nanobiosensor for sensitive determination of the Helicobacter pylori genome using a gold nanoparticles (AuNPs)-labeled probe is reported. Two specific thiol-modified capture and signal probes were designed based on a single-stranded complementary DNA (cDNA) region of the urease gene. The capture probe was immobilized on AuNPs, which were previously immobilized on an APTES-activated glass, and the signal probe was conjugated to different AuNPs as well. The presence of the cDNA in the reaction mixture led to the hybridization of the AuNPs-labeled capture probe and the signal probe with the cDNA, and consequently the optical density of the reaction mixture (AuNPs) was reduced proportionally to the cDNA concentration. The limit of detection was measured at 0.5 nM.

  18. Functionals of Gegenbauer polynomials and D-dimensional hydrogenic momentum expectation values

    NASA Astrophysics Data System (ADS)

    Van Assche, W.; Yáñez, R. J.; González-Férez, R.; Dehesa, Jesús S.

    2000-09-01

    The system of Gegenbauer or ultraspherical polynomials {C_n^λ(x); n = 0, 1, …} is a classical family of polynomials orthogonal with respect to the weight function ω_λ(x) = (1 - x²)^(λ-1/2) on the support interval [-1, +1]. Integral functionals of Gegenbauer polynomials with integrand f(x)[C_n^λ(x)]² ω_λ(x), where f(x) is an arbitrary function which does not depend on n or λ, are considered in this paper. First, a general recursion formula for these functionals is obtained. Then, the explicit expression for some specific functionals of this type is found in a closed and compact form; namely, for the functionals with f(x) equal to (1 - x)^α (1 + x)^β, log(1 - x²), and (1 + x) log(1 + x), which appear in numerous physico-mathematical problems. Finally, these functionals are used in the explicit evaluation of the momentum expectation values of the D-dimensional hydrogenic atom with nuclear charge Z ⩾ 1. The power expectation values are given by means of a terminating ₅F₄ hypergeometric function with unit argument, a considerable improvement with respect to Hey's expression (the only one existing up to now), which requires a double sum.
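
    Functionals of this form can be checked numerically from the standard three-term recurrence for C_n^λ and the substitution x = cos θ, which absorbs the weight's endpoint singularities. A sketch, verified against the known norm π/2 for λ = 1 (where C_n^1 is the Chebyshev polynomial U_n):

```python
import math

def gegenbauer(n, lam, x):
    """C_n^lambda(x) via the standard three-term recurrence:
    k C_k = 2(k + lam - 1) x C_{k-1} - (k + 2 lam - 2) C_{k-2}."""
    c0, c1 = 1.0, 2.0 * lam * x
    if n == 0:
        return c0
    for k in range(2, n + 1):
        c0, c1 = c1, (2.0 * (k + lam - 1) * x * c1 - (k + 2 * lam - 2) * c0) / k
    return c1

def functional(f, n, lam, m=20000):
    """Midpoint-rule evaluation of
    integral_{-1}^{1} f(x) [C_n^lam(x)]^2 (1-x^2)^(lam-1/2) dx
    via x = cos(theta), so the weight becomes sin(theta)^(2 lam)."""
    h = math.pi / m
    total = 0.0
    for i in range(m):
        t = (i + 0.5) * h
        x = math.cos(t)
        total += f(x) * gegenbauer(n, lam, x) ** 2 * math.sin(t) ** (2 * lam)
    return total * h

# Sanity check against the closed-form norm for lam = 1 (Chebyshev U_n): pi/2
print(functional(lambda x: 1.0, 3, 1.0))  # ~ 1.5708
```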

  19. On direct theorems for best polynomial approximation

    NASA Astrophysics Data System (ADS)

    Auad, A. A.; AbdulJabbar, R. S.

    2018-05-01

    This paper obtains analogues of well-known direct theorems for the degree of best approximation of functions that are unbounded in the weighted space L_{p,α}(A), A = [0, 1], by algebraic polynomials, E_n^H(f)_{p,α}, and for the degree of best approximation in the same space on the interval [0, 2π] by trigonometric polynomials, E_n^T(f)_{p,α}, in terms of averaged moduli.

  20. Glowing locked nucleic acids: brightly fluorescent probes for detection of nucleic acids in cells.

    PubMed

    Østergaard, Michael E; Cheguru, Pallavi; Papasani, Madhusudhan R; Hill, Rodney A; Hrdlicka, Patrick J

    2010-10-13

    Fluorophore-modified oligonucleotides have found widespread use in genomics and enable detection of single-nucleotide polymorphisms, real-time monitoring of PCR, and imaging of mRNA in living cells. Hybridization probes modified with polarity-sensitive fluorophores and molecular beacons (MBs) are among the most popular approaches to produce hybridization-induced increases in fluorescence intensity for nucleic acid detection. In the present study, we demonstrate that the 2'-N-(pyren-1-yl)carbonyl-2'-amino locked nucleic acid (LNA) monomer X is a highly versatile building block for generation of efficient hybridization probes and quencher-free MBs. The hybridization and fluorescence properties of these Glowing LNA probes are efficiently modulated and optimized by changes in probe backbone chemistry and architecture. Correctly designed probes are shown to exhibit (a) high affinity toward RNA targets, (b) excellent mismatch discrimination, (c) high biostability, and (d) pronounced hybridization-induced increases in fluorescence intensity leading to formation of brightly fluorescent duplexes with unprecedented emission quantum yields (Φ_F = 0.45-0.89) among pyrene-labeled oligonucleotides. Finally, specific binding between messenger RNA and multilabeled quencher-free MBs based on Glowing LNA monomers is demonstrated (a) using in vitro transcription assays and (b) by quantitative fluorometric assays and direct microscopic observation of probes bound to mRNA in its native form. These features render Glowing LNA as promising diagnostic probes for biomedical applications.

  1. Characterizing the Lyman-alpha forest flux probability distribution function using Legendre polynomials

    NASA Astrophysics Data System (ADS)

    Cieplak, Agnieszka; Slosar, Anze

    2017-01-01

    The Lyman-alpha forest has become a powerful cosmological probe of the underlying matter distribution at high redshift. It is a highly non-linear field with much information present beyond the two-point statistics of the power spectrum. The flux probability distribution function (PDF) in particular has been used as a successful probe of small-scale physics. In addition to the cosmological evolution however, it is also sensitive to pixel noise, spectrum resolution, and continuum fitting, all of which lead to possible biased estimators. Here we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over the binned PDF as is commonly done. Since the n-th coefficient can be expressed as a linear combination of the first n moments of the field, this allows for the coefficients to be measured in the presence of noise and allows for a clear route towards marginalization over the mean flux. In addition, we use hydrodynamic cosmological simulations to demonstrate that in the presence of noise, a finite number of these coefficients are well measured with a very sharp transition into noise dominance. This compresses the information into a finite small number of well-measured quantities.
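
    The key identity used above — the n-th Legendre coefficient of the PDF equals (2n+1)/2 times the expectation of P_n, and is hence a linear combination of the first n moments — can be sketched directly (the Beta-distributed mock flux below is purely illustrative, not a simulation output):

```python
import numpy as np

def legendre_pdf_coeffs(samples, nmax):
    """Estimate coefficients a_n of p(x) ~ sum a_n P_n(x) on [-1, 1]:
    a_n = (2n+1)/2 * E[P_n(X)], a linear combination of the first n
    moments of the field."""
    coeffs = []
    for n in range(nmax + 1):
        c = np.zeros(n + 1)
        c[n] = 1.0                       # selects the Legendre polynomial P_n
        coeffs.append((2 * n + 1) / 2.0
                      * np.polynomial.legendre.legval(samples, c).mean())
    return np.array(coeffs)

# Map a mock flux F in [0, 1] to x in [-1, 1], then expand its PDF
flux = np.random.default_rng(1).beta(2.0, 5.0, 100_000)
x = 2.0 * flux - 1.0
a = legendre_pdf_coeffs(x, 5)
print(a[:2])  # a_0 is exactly 1/2 (PDF normalisation on [-1, 1])
```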

  2. Characterizing the Lyman-alpha forest flux probability distribution function using Legendre polynomials

    NASA Astrophysics Data System (ADS)

    Cieplak, Agnieszka; Slosar, Anze

    2018-01-01

    The Lyman-alpha forest has become a powerful cosmological probe at intermediate redshift. It is a highly non-linear field with much information present beyond the power spectrum. The flux probability distribution function (PDF) in particular has been a successful probe of small-scale physics. However, it is also sensitive to pixel noise, spectrum resolution, and continuum fitting, all of which lead to possible biased estimators. Here we argue that measuring the coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. Since the n-th Legendre coefficient can be expressed as a linear combination of the first n moments of the field, this allows for the coefficients to be measured in the presence of noise and allows for a clear route towards marginalization over the mean flux. Additionally, in the presence of noise, a finite number of these coefficients are well measured with a very sharp transition into noise dominance. This compresses the information into a small number of well-measured quantities. Finally, we find that measuring fewer quasars with high signal-to-noise produces a higher amount of recoverable information.

  3. Engineering a lifetime-based activatable probe for photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Morgounova, Ekaterina; Shao, Qi; Hackel, Benjamin; Ashkenazi, Shai

    2013-02-01

    High-resolution, high-penetration depth activatable probes are needed for in-vivo imaging of enzyme activity. In this paper, we will describe the contrast mechanism of a new photoacoustic activatable probe that changes its excitation lifetime upon activation. The excitation decay of methylene blue (MB), a chromophore commonly used in therapeutic and diagnostic applications, is probed by photoacoustic lifetime contrast imaging (PLCI). The monomer of the dye presents a high-quantum yield of intersystem-crossing and long lifetime (70 μs) whereas the dimer is statically quenched with a short lifetime (a few ns). This forms the basis of a highly sensitive contrast mechanism between monomers and dimers. Two dimerization models - one using sodium sulfate, the other using sodium dodecyl sulfate - were applied to control the monomer-to-dimer ratio in MB solutions. Preliminary results show that the photoacoustic signal of a dimer solution is efficiently suppressed (< 20 dB) due to their short lifetime compared to the monomer sample. Flash-photolysis of the same solutions reveals a 99% decrease in transient absorption confirming PLCI results. This contrast mechanism can be applied to design a MB dual-labeled activatable probe bound by an enzyme-specific cleavable peptide linker. When the probe is cleaved by its target, MB molecules will separate by molecular diffusion and recover their long excitation lifetime enabling their detection by PLCI. Our long-term goal is to investigate enzyme-specific imaging in small animals and establish pre-clinical data for translational research and implementation of the technology in clinical applications.

  4. Reaction-based small-molecule fluorescent probes for chemoselective bioimaging

    PubMed Central

    Chan, Jefferson; Dodani, Sheel C.; Chang, Christopher J.

    2014-01-01

    The dynamic chemical diversity of elements, ions and molecules that form the basis of life offers both a challenge and an opportunity for study. Small-molecule fluorescent probes can make use of selective, bioorthogonal chemistries to report on specific analytes in cells and in more complex biological specimens. These probes offer powerful reagents to interrogate the physiology and pathology of reactive chemical species in their native environments with minimal perturbation to living systems. This Review presents a survey of tools and tactics for using such probes to detect biologically important chemical analytes. We highlight design criteria for effective chemical tools for use in biological applications as well as gaps for future exploration. PMID:23174976

  5. Guaranteed cost control of polynomial fuzzy systems via a sum of squares approach.

    PubMed

    Tanaka, Kazuo; Ohtake, Hiroshi; Wang, Hua O

    2009-04-01

    This paper presents the guaranteed cost control of polynomial fuzzy systems via a sum of squares (SOS) approach. First, we present a polynomial fuzzy model and controller that are more general representations of the well-known Takagi-Sugeno (T-S) fuzzy model and controller, respectively. Second, we derive a guaranteed cost control design condition based on polynomial Lyapunov functions. Hence, the design approach discussed in this paper is more general than the existing LMI approaches (to T-S fuzzy control system designs) based on quadratic Lyapunov functions. The design condition realizes a guaranteed cost control by minimizing the upper bound of a given performance function. In addition, the design condition in the proposed approach can be represented in terms of SOS and is numerically (partially symbolically) solved via the recent developed SOSTOOLS. To illustrate the validity of the design approach, two design examples are provided. The first example deals with a complicated nonlinear system. The second example presents micro helicopter control. Both the examples show that our approach provides more extensive design results for the existing LMI approach.

  6. Uncertainty Quantification for Polynomial Systems via Bernstein Expansions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2012-01-01

    This paper presents a unifying framework for uncertainty quantification for systems having polynomial response metrics that depend on both aleatory and epistemic uncertainties. The proposed approach, which is based on the Bernstein expansions of polynomials, enables bounding the range of moments and failure probabilities of response metrics as well as finding supersets of the extreme epistemic realizations where the limits of such ranges occur. These bounds and supersets, whose analytical structure renders them free of approximation error, can be made arbitrarily tight with additional computational effort. Furthermore, this framework enables determining the importance of particular uncertain parameters according to the extent to which they affect the first two moments of response metrics and failure probabilities. This analysis enables determining the parameters that should be considered uncertain as well as those that can be assumed to be constants without incurring significant error. The analytical nature of the approach eliminates the numerical error that characterizes the sampling-based techniques commonly used to propagate aleatory uncertainties, as well as the possibility of underpredicting the range of the statistic of interest that may result from searching for the best- and worst-case epistemic values via nonlinear optimization or sampling.
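
    The core mechanism — the Bernstein coefficients of a polynomial enclose its range on [0, 1] — can be illustrated in a few lines (a generic sketch, not the paper's full framework for moments and failure probabilities):

```python
from math import comb

def bernstein_coeffs(a):
    """Bernstein coefficients on [0,1] of p(x) = sum a[j] x^j, degree n:
        b_i = sum_{j<=i} C(i,j)/C(n,j) * a[j].
    The enclosure property gives min(b) <= p(x) <= max(b) on [0,1],
    with b_0 = p(0) and b_n = p(1)."""
    n = len(a) - 1
    return [sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1))
            for i in range(n + 1)]

# p(x) = x^2 - x has exact range [-1/4, 0] on [0, 1]
b = bernstein_coeffs([0.0, -1.0, 1.0])
print(min(b), max(b))  # enclosure: -0.5 <= p(x) <= 0.0
```

    The enclosure tightens under degree elevation or subdivision of [0, 1], which is how such bounds are made arbitrarily tight with more computation.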

  7. Delay correction model for estimating bus emissions at signalized intersections based on vehicle specific power distributions.

    PubMed

    Song, Guohua; Zhou, Xixi; Yu, Lei

    2015-05-01

    The intersection is one of the biggest emission points for buses and also a high-exposure site for people. Several traffic performance indexes have been developed and are widely used for intersection evaluations. However, few studies have focused on the relationship between these indexes and emissions at intersections. This paper intends to propose a model that relates emissions to the two commonly used measures of effectiveness (i.e., delay time and number of stops) by using bus activity data and emission data at intersections. First, with a large amount of field instantaneous emission data and corresponding activity data collected by the Portable Emission Measurement System (PEMS), emission rates are derived for different vehicle specific power (VSP) bins. Then, 2002 sets of trajectory data, equivalent to about 140,000 sets of second-by-second activity data, are obtained from Global Positioning System (GPS)-equipped diesel buses in Beijing. The delay and the emission factors of each trajectory are estimated. Then, using baseline emission factors for two types of intersections, e.g. the Arterial @ Arterial Intersection and the Arterial @ Collector, delay correction factors are calculated for the two types of intersections at different congestion levels. Finally, delay correction models are established for adjusting emission factors for each type of intersection and different numbers of stops. A comparative analysis between estimated and field emission factors demonstrates that the delay correction model is reliable. Copyright © 2015 Elsevier B.V. All rights reserved.
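
    VSP binning as described can be sketched as follows. The coefficients below are the widely cited light-duty form of VSP; the paper's bus-specific coefficients and bin widths are not given in the abstract, so treat both as illustrative placeholders:

```python
import math

def vsp_light_duty(v_mps, a_mps2, grade=0.0):
    """Illustrative vehicle specific power (kW/tonne) using the widely
    cited light-duty form (NOT the paper's bus-specific coefficients):
        VSP = v*(1.1*a + 9.81*grade + 0.132) + 0.000302*v**3
    with speed v in m/s and acceleration a in m/s^2."""
    return v_mps * (1.1 * a_mps2 + 9.81 * grade + 0.132) + 0.000302 * v_mps ** 3

def vsp_bin(vsp, width=1.0):
    """Assign a VSP value to an integer bin of the given width; each bin
    carries an average emission rate derived from PEMS data."""
    return math.floor(vsp / width)

# Cruising at 10 m/s with mild acceleration of 0.5 m/s^2
print(vsp_bin(vsp_light_duty(10.0, 0.5)))
```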

  8. Recent Progress in Aptamer-Based Functional Probes for Bioanalysis and Biomedicine.

    PubMed

    Zhang, Huimin; Zhou, Leiji; Zhu, Zhi; Yang, Chaoyong

    2016-07-11

    Nucleic acid aptamers are short synthetic DNA or RNA sequences that can bind to a wide range of targets with high affinity and specificity. In recent years, aptamers have attracted increasing research interest due to their unique features of high binding affinity and specificity, small size, excellent chemical stability, easy chemical synthesis, facile modification, and minimal immunogenicity. These properties make aptamers ideal recognition ligands for bioanalysis, disease diagnosis, and cancer therapy. This review highlights the recent progress in aptamer selection and the latest applications of aptamer-based functional probes in the fields of bioanalysis and biomedicine. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Maximal aggregation of polynomial dynamical systems

    PubMed Central

    Cardelli, Luca; Tschaikowski, Max

    2017-01-01

    Ordinary differential equations (ODEs) with polynomial derivatives are a fundamental tool for understanding the dynamics of systems across many branches of science, but our ability to gain mechanistic insight and effectively conduct numerical evaluations is critically hindered when dealing with large models. Here we propose an aggregation technique that rests on two notions of equivalence relating ODE variables whenever they have the same solution (backward criterion) or if a self-consistent system can be written for describing the evolution of sums of variables in the same equivalence class (forward criterion). A key feature of our proposal is to encode a polynomial ODE system into a finitary structure akin to a formal chemical reaction network. This enables the development of a discrete algorithm to efficiently compute the largest equivalence, building on approaches rooted in computer science to minimize basic models of computation through iterative partition refinements. The physical interpretability of the aggregation is shown on polynomial ODE systems for biochemical reaction networks, gene regulatory networks, and evolutionary game theory. PMID:28878023
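
    The forward criterion has a compact linear special case that can be checked directly: summing the variables in each block yields a self-consistent ODE exactly when V A = Â V for the aggregation matrix V and the lumped matrix Â = V A V⁺. The code below is our illustrative reduction of the idea to linear systems (function name and matrices are assumptions); the paper's algorithm handles general polynomial ODEs via a reaction-network encoding and partition refinement.

```python
import numpy as np

def is_lumpable(A, blocks, tol=1e-10):
    """Forward-criterion check for a linear ODE dx/dt = A x: the partition
    'blocks' is aggregable iff V A = A_hat V with A_hat = V A pinv(V),
    where row r of V sums the variables in block r."""
    V = np.zeros((len(blocks), A.shape[0]))
    for r, blk in enumerate(blocks):
        V[r, blk] = 1.0
    A_hat = V @ A @ np.linalg.pinv(V)
    return np.allclose(V @ A, A_hat @ V, atol=tol)

# Two species produced symmetrically and degrading at the same rate
# can be summed into one aggregate variable.
A = np.array([[-1.0, 0.0, 0.0],
              [0.0, -1.0, 0.0],
              [0.5, 0.5, -2.0]])
ok = is_lumpable(A, [[0, 1], [2]])  # x1 + x2 evolves autonomously
```

Grouping asymmetric variables (e.g. x1 with x3 here) fails the same check, which is what the iterative partition refinement in the paper detects and avoids.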

  10. Animating Nested Taylor Polynomials to Approximate a Function

    ERIC Educational Resources Information Center

    Mazzone, Eric F.; Piper, Bruce R.

    2010-01-01

    The way that Taylor polynomials approximate functions can be demonstrated by moving the center point while keeping the degree fixed. These animations are particularly nice when the Taylor polynomials do not intersect and form a nested family. We prove a result that shows when this nesting occurs. The animations can be shown in class or…

  11. Critical Analysis of Dual-Probe Heat-Pulse Technique Applied to Measuring Thermal Diffusivity

    NASA Astrophysics Data System (ADS)

    Bovesecchi, G.; Coppa, P.; Corasaniti, S.; Potenza, M.

    2018-07-01

    The paper presents an analysis of the experimental parameters involved in applying the dual-probe heat-pulse technique, followed by a critical review of methods for processing thermal response data (e.g., maximum detection and nonlinear least-squares regression) and the uncertainty obtainable with each. Glycerol was selected as the test liquid, and its thermal diffusivity was evaluated over the temperature range from -20 °C to 60 °C. In addition, Monte Carlo simulation was used to assess the uncertainty propagation for maximum detection. It was concluded that the maximum-detection approach to processing thermal response data gives results closest to the reference data, whereas nonlinear regression results are affected by larger uncertainties due to partial correlation between the evaluated parameters. Moreover, interpolating the temperature data with a polynomial to find the maximum leads to a systematic difference between measured and reference data, as evidenced by the Monte Carlo simulations; through its correction, this systematic error can be reduced to a negligible value, about 0.8 %.
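
    The maximum-detection idea can be sketched with the idealized instantaneous line-source model, for which the temperature rise at probe spacing r peaks at t_m with α = r²/(4 t_m); fitting a parabola near the sampled peak locates t_m between samples. This is our simplified illustration (the paper's finite-pulse model and its bias correction are more involved), with assumed names and synthetic data.

```python
import numpy as np

def diffusivity_from_peak(t, T, r):
    """Locate the temperature-rise maximum by fitting a parabola to the
    samples around the peak, then apply the instantaneous-pulse relation
    alpha = r^2 / (4 t_m).  Illustrative only: a finite heat pulse adds a
    correction to this idealized formula."""
    i = int(np.argmax(T))
    sl = slice(max(0, i - 2), min(len(t), i + 3))
    a, b, _ = np.polyfit(t[sl], T[sl], 2)   # T ~ a t^2 + b t + c near peak
    t_m = -b / (2 * a)
    return r ** 2 / (4 * t_m)

# Synthetic line-source response: T(t) ~ (1/t) exp(-r^2 / (4 alpha t)),
# which peaks exactly at t_m = r^2 / (4 alpha).
alpha_true, r = 1.0e-7, 6.0e-3            # m^2/s, 6 mm probe spacing
t = np.linspace(1.0, 300.0, 600)
T = (1.0 / t) * np.exp(-r ** 2 / (4 * alpha_true * t))
alpha_est = diffusivity_from_peak(t, T, r)
```

The small residual bias of the polynomial fit relative to the true peak position is exactly the kind of systematic error the abstract reports quantifying (and correcting) via Monte Carlo simulation.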

  12. A mass spectrometry-based multiplex SNP genotyping by utilizing allele-specific ligation and strand displacement amplification.

    PubMed

    Park, Jung Hun; Jang, Hyowon; Jung, Yun Kyung; Jung, Ye Lim; Shin, Inkyung; Cho, Dae-Yeon; Park, Hyun Gyu

    2017-05-15

    We herein describe a new mass spectrometry-based method for multiplex SNP genotyping by utilizing allele-specific ligation and strand displacement amplification (SDA) reaction. In this method, allele-specific ligation is first performed to discriminate base sequence variations at the SNP site within the PCR-amplified target DNA. The primary ligation probe is extended by a universal primer annealing site while the secondary ligation probe has base sequences as an overhang with a nicking enzyme recognition site and complementary mass marker sequence. The ligation probe pairs are ligated by DNA ligase only at specific allele in the target DNA and the resulting ligated product serves as a template to promote the SDA reaction using a universal primer. This process isothermally amplifies short DNA fragments, called mass markers, to be analyzed by mass spectrometry. By varying the sizes of the mass markers, we successfully demonstrated the multiplex SNP genotyping capability of this method by reliably identifying several BRCA mutations in a multiplex manner with mass spectrometry. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Improving the psychometric properties of dot-probe attention measures using response-based computation.

    PubMed

    Evans, Travis C; Britton, Jennifer C

    2018-09-01

    Abnormal threat-related attention in anxiety disorders is most commonly assessed and modified using the dot-probe paradigm; however, poor psychometric properties of reaction-time measures may contribute to inconsistencies across studies. Typically, standard attention measures are derived using average reaction-times obtained in experimentally-defined conditions. However, current approaches based on experimentally-defined conditions are limited. In this study, the psychometric properties of a novel response-based computation approach to analyze dot-probe data are compared to standard measures of attention. 148 adults (19.19 ± 1.42 years, 84 women) completed a standardized dot-probe task including threatening and neutral faces. We generated both standard and response-based measures of attention bias, attentional orientation, and attentional disengagement. We compared overall internal consistency, number of trials necessary to reach internal consistency, test-retest reliability (n = 72), and criterion validity obtained using each approach. Compared to standard attention measures, response-based measures demonstrated uniformly high levels of internal consistency with relatively few trials and varying improvements in test-retest reliability. Additionally, response-based measures demonstrated specific evidence of anxiety-related associations above and beyond both standard attention measures and other confounds. Future studies are necessary to validate this approach in clinical samples. Response-based attention measures demonstrate superior psychometric properties compared to standard attention measures, which may improve the detection of anxiety-related associations and treatment-related changes in clinical samples. Copyright © 2018 Elsevier Ltd. All rights reserved.
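
    For orientation, the standard experimentally-defined bias score that the paper improves upon is simply a difference of condition means. The sketch below computes that conventional measure (names and trial data are illustrative assumptions); the paper's response-based computation is a different, trial-level approach not reproduced here.

```python
import statistics

def attention_bias(trials):
    """Conventional attention-bias index: mean reaction time on incongruent
    trials (probe opposite the threat cue) minus mean RT on congruent
    trials.  Positive values indicate vigilance toward threat.  This is
    the standard measure, not the paper's response-based computation."""
    inc = [rt for cond, rt in trials if cond == "incongruent"]
    con = [rt for cond, rt in trials if cond == "congruent"]
    return statistics.mean(inc) - statistics.mean(con)

trials = [("congruent", 510), ("congruent", 490),
          ("incongruent", 540), ("incongruent", 560)]
bias = attention_bias(trials)   # 550 - 500 = 50 ms toward threat
```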

  14. Tsallis p, q-deformed Touchard polynomials and Stirling numbers

    NASA Astrophysics Data System (ADS)

    Herscovici, O.; Mansour, T.

    2017-01-01

    In this paper, we develop and investigate a new two-parametrized deformation of the Touchard polynomials, based on the definition of the NEXT q-exponential function of Tsallis. We obtain new generalizations of the Stirling numbers of the second kind and of the binomial coefficients and represent two new statistics for the set partitions.
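
    The undeformed objects being generalized are easy to state: the classical Touchard polynomial is T_n(x) = Σ_k S(n, k) x^k with S(n, k) the Stirling numbers of the second kind, and T_n(1) gives the Bell numbers. The sketch below computes these classical (not p,q-deformed) quantities; function names are ours.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling numbers of the second kind via the standard recurrence
    S(n, k) = k * S(n-1, k) + S(n-1, k-1)."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def touchard(n, x):
    """Classical Touchard polynomial T_n(x) = sum_k S(n, k) x^k; the
    paper's deformation replaces S(n, k) by p,q-analogues built from
    Tsallis' q-exponential."""
    return sum(stirling2(n, k) * x ** k for k in range(n + 1))

# T_n(1) reproduces the Bell numbers 1, 1, 2, 5, 15, 52, ...
```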

  15. Aptamer-based electrochemical sensors with aptamer-complementary DNA oligonucleotides as probe.

    PubMed

    Lu, Ying; Li, Xianchan; Zhang, Limin; Yu, Ping; Su, Lei; Mao, Lanqun

    2008-03-15

    This study describes a facile and general strategy for the development of aptamer-based electrochemical sensors with a high specificity toward the targets and a ready regeneration feature. Very different from the existing strategies for the development of electrochemical aptasensors with the aptamers as the probes, the strategy proposed here is essentially based on the utilization of the aptamer-complementary DNA (cDNA) oligonucleotides as the probes for electrochemical sensing. In this context, the sequences at both ends of the cDNA are tailor-made to be complementary and both the redox moiety (i.e., ferrocene in this study) and thiol group are labeled onto the cDNA. The labeled cDNA are hybridized with their respective aptamers (i.e., ATP- and thrombin-binding aptamers in this study) to form double-stranded DNA (ds-DNA) and the electrochemical aptasensors are prepared by self-assembling the labeled ds-DNA onto Au electrodes. Upon target binding, the aptamers confined onto electrode surface dissociate from their respective cDNA oligonucleotides into the solution and the single-stranded cDNA could thus tend to form a hairpin structure through the hybridization of the complementary sequences at both its ends. Such a conformational change of the cDNA resulting from the target binding-induced dissociation of the aptamers essentially leads to the change in the voltammetric signal of the redox moiety labeled onto the cDNA and thus constitutes the mechanism for the electrochemical aptasensors for specific target sensing. The aptasensors demonstrated here with the cDNA as the probe are readily regenerated and show good responses toward the targets. This study may offer a new and relatively general approach to electrochemical aptasensors with good analytical properties and potential applications.

  16. Polynomial expansions of single-mode motions around equilibrium points in the circular restricted three-body problem

    NASA Astrophysics Data System (ADS)

    Lei, Hanlun; Xu, Bo; Circi, Christian

    2018-05-01

    In this work, the single-mode motions around the collinear and triangular libration points in the circular restricted three-body problem are studied. To describe these motions, we adopt an invariant manifold approach, which states that a suitable pair of independent variables are taken as modal coordinates and the remaining state variables are expressed as polynomial series of them. Based on the invariant manifold approach, the general procedure on constructing polynomial expansions up to a certain order is outlined. Taking the Earth-Moon system as the example dynamical model, we construct the polynomial expansions up to the tenth order for the single-mode motions around collinear libration points, and up to order eight and six for the planar and vertical-periodic motions around triangular libration point, respectively. The application of the polynomial expansions constructed lies in that they can be used to determine the initial states for the single-mode motions around equilibrium points. To check the validity, the accuracy of initial states determined by the polynomial expansions is evaluated.

  17. On the coefficients of differentiated expansions of ultraspherical polynomials

    NASA Technical Reports Server (NTRS)

    Karageorghis, Andreas; Phillips, Timothy N.

    1989-01-01

    A formula is proved that expresses the coefficients of an expansion in ultraspherical polynomials which has been differentiated an arbitrary number of times in terms of the coefficients of the original expansion. The particular examples of Chebyshev and Legendre polynomials are considered.
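
    In the Chebyshev special case, the relation takes the well-known backward-recurrence form b_{k-1} = b_{k+1} + 2k a_k (with the zeroth coefficient halved), where a are the coefficients of f and b those of f'. The sketch below implements that single-differentiation case and cross-checks it against NumPy; the paper's formula covers general ultraspherical weights and arbitrary differentiation order.

```python
import numpy as np

def cheb_diff_coeffs(a):
    """Chebyshev coefficients b of f' from coefficients a of f, via the
    backward recurrence b_{k-1} = b_{k+1} + 2k a_k, then b_0 /= 2.
    (Chebyshev special case of the ultraspherical formula.)"""
    n = len(a) - 1
    b = [0.0] * (n + 2)
    for k in range(n, 0, -1):
        b[k - 1] = b[k + 1] + 2 * k * a[k]
    b[0] /= 2.0
    return b[:n]           # differentiation drops the degree by one

a = [0.0, 0.0, 0.0, 1.0]   # f = T_3
b = cheb_diff_coeffs(a)    # f' = 3*T_0 + 6*T_2
```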

  18. Mapping Landslides in Lunar Impact Craters Using Chebyshev Polynomials and DEMs

    NASA Astrophysics Data System (ADS)

    Yordanov, V.; Scaioni, M.; Brunetti, M. T.; Melis, M. T.; Zinzi, A.; Giommi, P.

    2016-06-01

    Geological slope failure processes have been observed on the Moon's surface for decades; nevertheless, a detailed and exhaustive lunar landslide inventory has not been produced yet. For a preliminary survey, WAC images and DEM maps from LROC at 100 m/pixel have been exploited in combination with the criteria applied by Brunetti et al. (2015) to detect the landslides. These criteria are based on the visual analysis of optical images to recognize mass wasting features. In the literature, Chebyshev polynomials have been applied to interpolate crater cross-sections in order to obtain a parametric characterization useful for classification into different morphological shapes. Here a new implementation of Chebyshev polynomial approximation is proposed, taking into account statistical testing of the results obtained during least-squares estimation. The presence of landslides in lunar craters is then investigated by analyzing the absolute values of odd coefficients of estimated Chebyshev polynomials. A case study on the Cassini A crater demonstrated the key points of the proposed methodology and outlined the future development required to carry it out.
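
    The odd-coefficient criterion can be sketched directly: a symmetric crater cross-section expands into even Chebyshev terms only, so large odd coefficients flag an asymmetry such as a landslide deposit. The code below is our illustration (profile data, degree, and any flagging threshold are assumptions, not the paper's calibrated values).

```python
import numpy as np

def asymmetry_index(x, z, deg=10):
    """Fit a Chebyshev series to a crater cross-section and sum the
    absolute odd-order coefficients; a symmetric (landslide-free) profile
    gives odd coefficients near zero.  Any flagging threshold is a
    hypothetical choice here."""
    xs = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # map to [-1, 1]
    c = np.polynomial.chebyshev.chebfit(xs, z, deg)
    return np.sum(np.abs(c[1::2]))

x = np.linspace(-1.0, 1.0, 201)
sym = x ** 2                                              # symmetric bowl
slump = x ** 2 + 0.3 * np.exp(-((x - 0.5) / 0.2) ** 2)    # one-sided deposit
```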

  19. SU-C-304-07: Are Small Field Detector Correction Factors Strongly Dependent On Machine-Specific Characteristics?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathew, D; Tanny, S; Parsai, E

    2015-06-15

    Purpose: The current small field dosimetry formalism utilizes quality correction factors to compensate for the difference in detector response relative to dose deposited in water. The correction factors are defined on a machine-specific basis for each beam quality and detector combination. Some research has suggested that the correction factors may be only weakly dependent on machine-to-machine variations, allowing for determination of class-specific correction factors for various accelerator models. This research examines the differences in small field correction factors for three detectors across two Varian Truebeam accelerators to determine the correction factor dependence on machine-specific characteristics. Methods: Output factors were measured on two Varian Truebeam accelerators for equivalently tuned 6 MV and 6 FFF beams. Measurements were obtained using a commercial plastic scintillation detector (PSD), two ion chambers, and a diode detector. Measurements were made at a depth of 10 cm with an SSD of 100 cm for jaw-defined field sizes ranging from 3×3 cm{sup 2} to 0.6×0.6 cm{sup 2}, normalized to values at 5×5 cm{sup 2}. Correction factors for each field on each machine were calculated as the ratio of the detector response to the PSD response. Percent changes of correction factors for the chambers are presented relative to the primary machine. Results: The Exradin A26 demonstrates a difference of 9% for 6×6 mm{sup 2} fields in both the 6FFF and 6MV beams. The A16 chamber demonstrates a 5% and 3% difference in 6FFF and 6MV fields at the same field size, respectively. The Edge diode exhibits less than 1.5% difference across both evaluated energies. Field sizes larger than 1.4×1.4 cm{sup 2} demonstrated less than 1% difference for all detectors. Conclusion: Preliminary results suggest that class-specific corrections may not be appropriate for micro-ionization chambers. For diode systems, the correction factor was substantially similar and may be useful for
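
    The ratio computation described in Methods reduces to a few lines: normalize each detector's readings to the 5×5 cm² field, then take the per-field ratio of detector response to PSD response. The readings and function name below are invented for illustration; they are not the measured data.

```python
def response_ratios(detector, psd, norm="5x5"):
    """Per-field ratio of detector response to PSD response, after
    normalising each detector's readings to the reference field, following
    the abstract's definition.  All numbers here are made up."""
    return {fs: (detector[fs] / detector[norm]) / (psd[fs] / psd[norm])
            for fs in detector}

detector = {"5x5": 1.000, "3x3": 0.985, "0.6x0.6": 0.640}
psd = {"5x5": 1.000, "3x3": 0.982, "0.6x0.6": 0.690}
k = response_ratios(detector, psd)
```

Comparing such per-field ratios between two machines, as percent changes, is what the abstract reports for each chamber and the diode.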

  20. Design of polynomial fuzzy observer-controller for nonlinear systems with state delay: sum of squares approach

    NASA Astrophysics Data System (ADS)

    Gassara, H.; El Hajjaji, A.; Chaabane, M.

    2017-07-01

    This paper investigates the problem of observer-based control for two classes of polynomial fuzzy systems with time-varying delay. The first class concerns a special case where the polynomial matrices do not depend on the estimated state variables. The second one is the general case where the polynomial matrices could depend on unmeasurable system states that will be estimated. For the last case, two design procedures are proposed. The first one gives the polynomial fuzzy controller and observer gains in two steps. In the second procedure, the designed gains are obtained using a single-step approach to overcome the drawback of a two-step procedure. The obtained conditions are presented in terms of sum of squares (SOS) which can be solved via the SOSTOOLS and a semi-definite program solver. Illustrative examples show the validity and applicability of the proposed results.

  1. Applicability of the polynomial chaos expansion method for personalization of a cardiovascular pulse wave propagation model.

    PubMed

    Huberts, W; Donders, W P; Delhaas, T; van de Vosse, F N

    2014-12-01

    Patient-specific modeling requires model personalization, which can be achieved in an efficient manner by parameter fixing and parameter prioritization. An efficient variance-based method is using generalized polynomial chaos expansion (gPCE), but it has not been applied in the context of model personalization, nor has it ever been compared with standard variance-based methods for models with many parameters. In this work, we apply the gPCE method to a previously reported pulse wave propagation model and compare the conclusions for model personalization with that of a reference analysis performed with Saltelli's efficient Monte Carlo method. We furthermore differentiate two approaches for obtaining the expansion coefficients: one based on spectral projection (gPCE-P) and one based on least squares regression (gPCE-R). It was found that in general the gPCE yields similar conclusions as the reference analysis but at much lower cost, as long as the polynomial metamodel does not contain unnecessary high order terms. Furthermore, the gPCE-R approach generally yielded better results than gPCE-P. The weak performance of the gPCE-P can be attributed to the assessment of the expansion coefficients using the Smolyak algorithm, which might be hampered by the high number of model parameters and/or by possible non-smoothness in the output space. Copyright © 2014 John Wiley & Sons, Ltd.
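
    A one-dimensional toy version of the gPCE-R route makes the mechanics concrete: for a uniform input on [-1, 1] the basis is Legendre, the coefficients come from least-squares regression on samples, the mean is the zeroth coefficient, and the variance is Σ c_i² ⟨P_i²⟩ with ⟨P_i²⟩ = 1/(2i+1). Everything below is our minimal sketch (the paper's model has many inputs and uses Sobol-style sensitivity indices on top of this).

```python
import numpy as np

def gpce_regression(f, order=4, n_samples=200, seed=0):
    """1-D generalized polynomial chaos by least-squares regression
    (gPCE-R) for a uniform input on [-1, 1], Legendre basis.  Returns the
    metamodel's mean and variance."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n_samples)
    Phi = np.polynomial.legendre.legvander(x, order)  # columns P_0..P_order
    c, *_ = np.linalg.lstsq(Phi, f(x), rcond=None)
    mean = c[0]
    var = sum(c[i] ** 2 / (2 * i + 1) for i in range(1, order + 1))
    return mean, var

mean, var = gpce_regression(lambda x: x ** 2)
# exact values for x ~ U(-1, 1): mean = 1/3, variance = 4/45
```

The spectral-projection variant (gPCE-P) would instead compute the same coefficients by quadrature against the basis, which is where the Smolyak sparse grids mentioned in the abstract enter.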

  2. Orthonormal vector polynomials in a unit circle, Part I: Basis set derived from gradients of Zernike polynomials.

    PubMed

    Zhao, Chunyu; Burge, James H

    2007-12-24

    Zernike polynomials provide a well known, orthogonal set of scalar functions over a circular domain, and are commonly used to represent wavefront phase or surface irregularity. A related set of orthogonal functions is given here which represent vector quantities, such as mapping distortion or wavefront gradient. These functions are generated from gradients of Zernike polynomials, made orthonormal using the Gram-Schmidt technique. This set provides a complete basis for representing vector fields that can be defined as a gradient of some scalar function. It is then efficient to transform from the coefficients of the vector functions to the scalar Zernike polynomials that represent the function whose gradient was fit. These new vector functions have immediate application for fitting data from a Shack-Hartmann wavefront sensor or for fitting mapping distortion for optical testing. A subsequent paper gives an additional set of vector functions consisting only of rotational terms with zero divergence. The two sets together provide a complete basis that can represent all vector distributions in a circular domain.

  3. Design of thermocouple probes for measurement of rocket exhaust plume temperatures

    NASA Astrophysics Data System (ADS)

    Warren, R. C.

    1994-06-01

    This paper summarizes a literature survey on high temperature measurement and describes the design of probes used in plume measurements. There were no cases reported of measurements in extreme environments such as exist in solid rocket exhausts, but there were a number of thermocouple designs which had been used under less extreme conditions and which could be further developed. Tungsten-rhenium (W-Rh) thermocouples had the combined properties of strength at high temperatures, high thermoelectric emf, and resistance to chemical attack. A shielded probe was required, both to protect the thermocouple junction, and to minimise radiative heat losses. After some experimentation, a twin shielded design made from molybdenum gave acceptable results. Corrections for thermal conduction losses were made based on a method obtained from the literature. Radiation losses were minimized with this probe design, and corrections for these losses were too complex and unreliable to be included.

  4. Polynomial asymptotes of the second kind

    NASA Astrophysics Data System (ADS)

    Dobbs, David E.

    2011-03-01

    This note uses the analytic notion of asymptotic functions to study when a function is asymptotic to a polynomial function. Along with associated existence and uniqueness results, this kind of asymptotic behaviour is related to the type of asymptote that was recently defined in a more geometric way. Applications are given to rational functions and conics. Prerequisites include the division algorithm for polynomials with coefficients in the field of real numbers and elementary facts about limits from calculus. This note could be used as enrichment material in courses ranging from Calculus to Real Analysis to Abstract Algebra.
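
    The link to the division algorithm is direct: writing f(x) = num(x)/den(x) as q(x) + r(x)/den(x), the remainder term vanishes as |x| → ∞, so the quotient q is the polynomial asymptote. A minimal sketch (function name and example are ours):

```python
import numpy as np

def polynomial_asymptote(num, den):
    """Polynomial asymptote of the rational function num/den via the
    division algorithm: f = q + r/den with deg r < deg den, and
    r/den -> 0 as |x| -> infinity, so q is the asymptote.
    Coefficients are low-to-high order."""
    q, _ = np.polynomial.polynomial.polydiv(num, den)
    return q

# f(x) = (x^3 + 2x + 1) / x is asymptotic to q(x) = x^2 + 2
num = [1.0, 2.0, 0.0, 1.0]   # 1 + 2x + x^3
den = [0.0, 1.0]             # x
q = polynomial_asymptote(num, den)
```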

  5. Rapid real-time diagnostic PCR for Trichophyton rubrum and Trichophyton mentagrophytes in patients with tinea unguium and tinea pedis using specific fluorescent probes.

    PubMed

    Miyajima, Yoshiharu; Satoh, Kazuo; Uchida, Takao; Yamada, Tsuyoshi; Abe, Michiko; Watanabe, Shin-ichi; Makimura, Miho; Makimura, Koichi

    2013-03-01

    Trichophyton rubrum and Trichophyton mentagrophytes human-type (synonym, Trichophyton interdigitale (anthropophilic)) are major causative pathogens of tinea unguium. For suitable diagnosis and treatment, rapid and accurate identification of etiologic agents in clinical samples using a reliable molecular-based method is required. For identification of organisms causing tinea unguium, we developed a new real-time polymerase chain reaction (PCR) with a pan-fungal primer set and probe, as well as specific primer sets and probes for T. rubrum and T. mentagrophytes human-type. We designed two sets of primers from the internal transcribed spacer 1 (ITS1) region of fungal ribosomal DNA (rDNA) and three quadruple fluorescent probes, one for detection of a wide range of pathogenic fungi and two for classification of T. rubrum and T. mentagrophytes by specific binding to different sites in the ITS1 region. We investigated the specificity of these primer sets and probes using fungal genomic DNA, and also examined 42 clinical specimens with our real-time PCR. The primers and probes specifically detected T. rubrum, T. mentagrophytes, and a wide range of pathogenic fungi. The causative pathogens were identified in 42 nail and skin samples from 32 patients. The total time required for identification of fungal species in each clinical specimen was about 3 h. The copy number of each fungal DNA in the clinical specimens was estimated from the intensity of fluorescence simultaneously. This PCR system is one of the most rapid and sensitive methods available for diagnosing dermatophytosis, including tinea unguium and tinea pedis. Copyright © 2012 Japanese Society for Investigative Dermatology. Published by Elsevier Ireland Ltd. All rights reserved.

  6. Development of Tethered Hsp90 Inhibitors Carrying Radioiodinated Probes to Specifically Discriminate and Kill Malignant Breast Tumor Cells

    DTIC Science & Technology

    2016-05-01

    AWARD NUMBER: W81XWH-15-1-0072. Reporting period: 1 May 2015 - 30 Apr 2016. …inhibitor capable of carrying various radioactive iodine isotopes for early detection and ablation of metastatic breast cancers. These probes…

  7. Quantization of gauge fields, graph polynomials and graph homology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kreimer, Dirk, E-mail: kreimer@physik.hu-berlin.de; Sars, Matthias; Suijlekom, Walter D. van

    2013-09-15

    We review quantization of gauge fields using algebraic properties of 3-regular graphs. We derive the Feynman integrand at n loops for a non-abelian gauge theory quantized in a covariant gauge from scalar integrands for connected 3-regular graphs, obtained from the two Symanzik polynomials. The transition to the full gauge theory amplitude is obtained by the use of a third, new, graph polynomial, the corolla polynomial. This implies effectively a covariant quantization without ghosts, where all the relevant signs of the ghost sector are incorporated in a double complex furnished by the corolla polynomial (we call it cycle homology) and by graph homology. Highlights: • We derive gauge theory Feynman rules from scalar field theory with 3-valent vertices. • We clarify the role of graph homology and cycle homology. • We use parametric renormalization and the new corolla polynomial.

  8. From Chebyshev to Bernstein: A Tour of Polynomials Small and Large

    ERIC Educational Resources Information Center

    Boelkins, Matthew; Miller, Jennifer; Vugteveen, Benjamin

    2006-01-01

    Consider the family of monic polynomials of degree n having zeros at -1 and +1 and all their other real zeros in between these two values. This article explores the size of these polynomials using the supremum of the absolute value on [-1, 1], showing that scaled Chebyshev and Bernstein polynomials give the extremes.
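
    The classical fact underlying this comparison is that the scaled (monic) Chebyshev polynomial T_n/2^{n-1} minimizes the sup norm on [-1, 1] among monic polynomials of degree n, achieving 2^{1-n}. The sketch below checks this numerically for n = 4 against one other member of a monic family with real zeros in [-1, 1]; the grid size and examples are our illustrative choices.

```python
import numpy as np

def sup_norm(coeffs, n_pts=20001):
    """Supremum of |p| on [-1, 1] by dense sampling; coefficients are in
    the power basis, low-to-high order."""
    x = np.linspace(-1.0, 1.0, n_pts)
    return np.max(np.abs(np.polynomial.polynomial.polyval(x, coeffs)))

# Monic Chebyshev: T_4 / 2^3 = x^4 - x^2 + 1/8, with sup norm 2^{1-4} = 0.125.
t4_monic = [0.125, 0.0, -1.0, 0.0, 1.0]
# Another monic quartic with zeros at -1, 0, 0, 1: x^4 - x^2, sup norm 0.25.
other = [0.0, 0.0, -1.0, 0.0, 1.0]
```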

  9. Design and development by direct polishing of the WFXT thin polynomial mirror shells

    NASA Astrophysics Data System (ADS)

    Proserpio, L.; Campana, S.; Citterio, O.; Civitani, M.; Combrinck, H.; Conconi, P.; Cotroneo, V.; Freeman, R.; Mattini, E.; Langstrof, P.; Morton, R.; Motta, G.; Oberle, O.; Pareschi, G.; Parodi, G.; Pels, C.; Schenk, C.; Stock, R.; Tagliaferri, G.

    2017-11-01

    The Wide Field X-ray Telescope (WFXT) is a medium-class mission proposed to address key questions about cosmic origins and physics of the cosmos through an unprecedented survey of the sky in the soft X-ray band (0.2-6 keV) [1], [2]. In order to achieve the desired angular resolution of 10 arcsec (5 arcsec goal) over the entire 1 degree Field Of View (FOV), the design of the optical system is based on nested grazing-incidence polynomial-profile mirrors, and assumes a focal plane curvature and plate-scale corrections among the shells. This design guarantees an increased angular resolution also at large off-axis positions with respect to the usually adopted Wolter I configuration. In order to meet the requirements in terms of mass and effective area (less than 1200 kg, 6000 cm2 @ 1 keV), the nested shells are thin and made of quartz glass. The telescope assembly is composed of three identical modules of 78 nested shells each, with diameters up to 1.1 m, lengths in the range of 200-440 mm and thicknesses of less than 2.2 mm. In this regard, a deterministic direct polishing method is under investigation to manufacture the WFXT thin grazing-incidence mirrors made of quartz. The direct polishing method has already been used for past missions (such as Einstein, Rosat, Chandra), but based on much thicker shells (10 mm or more). The technological challenge for WFXT is to apply the same approach to shells that are thinner by roughly a factor of five to ten. The proposed approach is based on two main steps: first, quartz glass tubes available on the market are ground to conical profiles; second, the pre-shaped shells are polished to the required polynomial profiles using a CNC polishing machine. In this paper, preliminary results on the direct grinding and polishing of prototype shells made of quartz glass with low thickness, representative of the WFXT optical design, are presented.

  10. Surface-enhanced Raman scattering based nonfluorescent probe for multiplex DNA detection.

    PubMed

    Sun, Lan; Yu, Chenxu; Irudayaraj, Joseph

    2007-06-01

    To provide rapid and accurate detection of DNA markers in a straightforward, inexpensive, and multiplex format, an alternative surface-enhanced Raman scattering based probe was designed and fabricated to covalently attach both DNA probing sequence and nonfluorescent Raman tags to the surface of gold nanoparticles (DNA-AuP-RTag). The intensity of Raman signal of the probes could be controlled through the surface coverage of the nonfluorescent Raman tags (RTags). Detection sensitivity of these probes could be optimized by fine-tuning the amount of DNA molecules and RTags on the probes. Long-term stability of the DNA-AuP-RTag probes was found to be good (over 3 months). Excellent multiplexing capability of the DNA-AuP-RTag scheme was demonstrated by simultaneous identification of up to eight probes in a mixture. Detection of hybridization of single-stranded DNA to its complementary targets was successfully accomplished with a long-term goal to use nonfluorescent RTags in a Raman-based DNA microarray platform.

  11. Novel quadrilateral elements based on explicit Hermite polynomials for bending of Kirchhoff-Love plates

    NASA Astrophysics Data System (ADS)

    Beheshti, Alireza

    2018-03-01

    The contribution addresses the finite element analysis of bending of plates given the Kirchhoff-Love model. To analyze the static deformation of plates with different loadings and geometries, the principle of virtual work is used to extract the weak form. Following deriving the strain field, stresses and resultants may be obtained. For constructing four-node quadrilateral plate elements, the Hermite polynomials defined with respect to the variables in the parent space are applied explicitly. Based on the approximated field of displacement, the stiffness matrix and the load vector in the finite element method are obtained. To demonstrate the performance of the subparametric 4-node plate elements, some known, classical examples in structural mechanics are solved and there are comparisons with the analytical solutions available in the literature.
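
    The explicit Hermite polynomials in question are, in one dimension, the cubic shape functions that interpolate nodal deflections and slopes; tensor products of these build a C¹-type deflection field on the quadrilateral's parent space. The sketch below gives the 1-D functions on an assumed parent interval [0, 1] (the paper's parametrization may differ) and verifies that they reproduce constant and linear fields exactly.

```python
import numpy as np

def hermite_shape(xi):
    """Cubic Hermite shape functions on the parent interval [0, 1]:
    H1, H3 interpolate nodal values; H2, H4 nodal slopes."""
    H1 = 1 - 3 * xi ** 2 + 2 * xi ** 3
    H2 = xi - 2 * xi ** 2 + xi ** 3
    H3 = 3 * xi ** 2 - 2 * xi ** 3
    H4 = -xi ** 2 + xi ** 3
    return np.array([H1, H2, H3, H4])

def interpolate(w0, th0, w1, th1, xi):
    """Deflection at xi from end deflections w and end slopes theta."""
    return hermite_shape(xi) @ np.array([w0, th0, w1, th1])
```

Rigid translation (equal end deflections, zero slopes) and a pure linear field are reproduced exactly, which is the minimal completeness requirement for a bending element.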

  12. Riemann-Liouville Fractional Calculus of Certain Finite Class of Classical Orthogonal Polynomials

    NASA Astrophysics Data System (ADS)

    Malik, Pradeep; Swaminathan, A.

    2010-11-01

    In this work we consider certain class of classical orthogonal polynomials defined on the positive real line. These polynomials have their weight function related to the probability density function of F distribution and are finite in number up to orthogonality. We generalize these polynomials for fractional order by considering the Riemann-Liouville type operator on these polynomials. Various properties like explicit representation in terms of hypergeometric functions, differential equations, recurrence relations are derived.
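
    The Riemann-Liouville operator acts on monomials in closed form, D^a x^p = Γ(p+1)/Γ(p+1-a) · x^(p-a), and extends term by term to any polynomial, which is all the machinery a fractional-order generalization of a polynomial family needs. The sketch below implements just this monomial rule (names are ours; the paper's specific weight functions and hypergeometric representations are not reproduced).

```python
from math import gamma, isclose

def rl_factor(p, alpha):
    """Riemann-Liouville fractional derivative (base point 0) of a
    monomial: D^a x^p = Gamma(p+1)/Gamma(p+1-a) * x^(p-a); this returns
    the Gamma-function prefactor."""
    return gamma(p + 1) / gamma(p + 1 - alpha)

def rl_derivative_poly(coeffs, alpha, x):
    """Evaluate D^a applied to sum coeffs[p] * x**p at a point x > 0,
    extending the monomial rule term by term."""
    return sum(c * rl_factor(p, alpha) * x ** (p - alpha)
               for p, c in enumerate(coeffs) if c)

# Classic check: the half-derivative of x is 2*sqrt(x/pi).
half = rl_derivative_poly([0.0, 1.0], 0.5, 1.0)
```

At α = 1 the rule reduces to the ordinary derivative, e.g. D¹ x² = 2x.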

  13. [Development of fluorescent probes for bone imaging in vivo ~Fluorescent probes for intravital imaging of osteoclast activity~].

    PubMed

    Minoshima, Masafumi; Kikuchi, Kazuya

    Fluorescent molecules are widely used as tools to directly visualize target biomolecules in vivo. Fluorescent probes have the advantage that a desired function can be built in through rational design. Fluorescent probes for bone imaging in vivo must be delivered to bone tissue upon administration. Recently, a fluorescent probe for detecting osteoclast activity was developed. This probe has acid-sensitive fluorescence, is delivered specifically to bone tissue, and is durable against laser irradiation, which enabled real-time intravital imaging of bone-resorbing osteoclasts over long periods of time.

  14. Generalized Freud's equation and level densities with polynomial potential

    NASA Astrophysics Data System (ADS)

    Boobna, Akshat; Ghosh, Saugata

    2013-08-01

    We study orthogonal polynomials with weight $\exp[-NV(x)]$, where $V(x)=\sum_{k=1}^{d}a_{2k}x^{2k}/2k$ is a polynomial of degree 2d. We derive the generalised Freud's equations for $d=3$, 4 and 5 and use them to obtain $R_{\mu}=h_{\mu}/h_{\mu-1}$, where $h_{\mu}$ is the normalization constant for the corresponding orthogonal polynomials. Moments of the density functions, expressed in terms of $R_{\mu}$, are obtained using Freud's equation, and from these, explicit results for the level densities as $N\rightarrow\infty$ are derived.
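For the classical quartic case w(x) = exp(−x⁴) (a known special case, not the d = 3, 4, 5 cases derived in the paper), Freud's equation reads 4Rₙ(Rₙ₋₁ + Rₙ + Rₙ₊₁) = n, and the ratios Rₙ = hₙ/hₙ₋₁ can be checked numerically from Hankel determinants of the moments:

```python
import numpy as np

# Moments of the quartic Freud weight w(x) = exp(-x^4) by a fine Riemann sum
# (the weight is negligible outside [-4, 4]).
x = np.linspace(-4.0, 4.0, 200001)
dx = x[1] - x[0]
w = np.exp(-x ** 4)
m = [float(np.sum(x ** k * w) * dx) for k in range(12)]

# Hankel determinants D_n = det[m_{i+j}], normalizations h_n = D_{n+1}/D_n,
# and the ratios R_n = h_n / h_{n-1} from the abstract.
def hankel_det(n):
    if n == 0:
        return 1.0
    return float(np.linalg.det(np.array([[m[i + j] for j in range(n)]
                                         for i in range(n)])))

D = [hankel_det(n) for n in range(7)]
h = [D[n + 1] / D[n] for n in range(6)]
R = [0.0] + [h[n] / h[n - 1] for n in range(1, 6)]

# Freud's (string) equation for this weight: 4 R_n (R_{n-1}+R_n+R_{n+1}) = n
for n in range(1, 4):
    print(n, 4 * R[n] * (R[n - 1] + R[n] + R[n + 1]))
```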

  15. Asymptotic formulae for the zeros of orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badkov, V M

    2012-09-30

    Let p_n(t) be an algebraic polynomial that is orthonormal with weight p(t) on the interval [-1, 1]. When p(t) is a perturbation (within certain limits) of the Chebyshev weight of the first kind, the zeros of the polynomial p_n(cos τ) and the differences between pairs of (not necessarily consecutive) zeros are shown to satisfy asymptotic formulae as n → ∞, which hold uniformly with respect to the indices of the zeros. Similar results are also obtained for perturbations of the Chebyshev weight of the second kind. First, some needed preliminary results on the asymptotic behaviour of the difference between two zeros of an orthogonal trigonometric polynomial are established. Bibliography: 15 titles.

  16. Algebraic calculations for spectrum of superintegrable system from exceptional orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Hoque, Md. Fazlul; Marquette, Ian; Post, Sarah; Zhang, Yao-Zhong

    2018-04-01

    We introduce an extended Kepler-Coulomb quantum model in spherical coordinates. The Schrödinger equation of this Hamiltonian is solved in these coordinates, and it is shown that the wave functions of the system can be expressed in terms of Laguerre, Legendre and exceptional Jacobi polynomials (of hypergeometric type). We construct ladder and shift operators based on the corresponding wave functions and obtain their recurrence formulas. These recurrence relations are used to construct higher-order, algebraically independent integrals of motion that prove the superintegrability of the Hamiltonian. The integrals form a higher-rank polynomial algebra. By constructing the structure functions of the associated deformed oscillator algebras, we derive the degeneracy of the energy spectrum of the superintegrable system.

  17. Evaluating Protein Structure and Dynamics Using Co-Solvents, Photochemical Triggers, and Site-Specific Spectroscopic Probes

    NASA Astrophysics Data System (ADS)

    Abaskharon, Rachel M.

    As ubiquitous and diverse biopolymers, proteins are dynamic molecules that constantly engage in inter- and intramolecular interactions responsible for their structure, fold, and function. A comprehensive understanding of the factors that control protein conformation and dynamics nevertheless remains elusive, as current experimental techniques often lack the ability to initiate and probe a specific interaction or conformational transition. For this reason, this thesis aims to develop methods to control and monitor protein conformations, conformational transitions, and dynamics in a site-specific manner, as well as to understand how specific and non-specific interactions affect the protein folding energy landscape. First, using the co-solvent trifluoroethanol (TFE), we show that the rate at which a peptide folds can be greatly impacted, and thus controlled, by the excluded volume effect. Second, we demonstrate the utility of several light-responsive molecules and reactions as methods to manipulate and investigate protein-folding processes. Using an azobenzene linker as a photo-initiator, we are able to increase the folding rate of a protein system by an order of magnitude by channeling a sub-population through a parallel, faster folding pathway. Additionally, we utilize a tryptophan-mediated electron transfer process to a nearby disulfide bond to strategically unfold a protein molecule with ultraviolet light. We also demonstrate the potential of two ruthenium polypyridyl complexes as ultrafast phototriggers of protein reactions. Finally, we develop several site-specific spectroscopic probes of protein structure and environment. Specifically, we demonstrate that a 13C-labeled aspartic acid residue constitutes a useful site-specific infrared probe for investigating salt-bridges and hydration dynamics of proteins, particularly in proteins containing several acidic amino acids. We also show that a proline-derivative, 4-oxoproline, possesses novel

  18. Zeros and logarithmic asymptotics of Sobolev orthogonal polynomials for exponential weights

    NASA Astrophysics Data System (ADS)

    Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.

    2009-12-01

    We obtain the (contracted) weak zero asymptotics for orthogonal polynomials with respect to Sobolev inner products with exponential weights in the real semiaxis, of the form , with γ>0, which include as particular cases the counterparts of the so-called Freud (i.e., when φ has polynomial growth at infinity) and Erdős (when φ grows faster than any polynomial at infinity) weights. In addition, the boundedness of the distance of the zeros of these Sobolev orthogonal polynomials to the convex hull of the support and, as a consequence, a result on logarithmic asymptotics are derived.

  19. Polynomial compensation, inversion, and approximation of discrete time linear systems

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1987-01-01

    The least-squares transformation of a discrete-time multivariable linear system into a desired one by convolving the first with a polynomial system yields optimal polynomial solutions to the problems of system compensation, inversion, and approximation. The polynomial coefficients are obtained from the solution to a so-called normal linear matrix equation, whose coefficients are shown to be the weighting patterns of certain linear systems. These, in turn, can be used in the recursive solution of the normal equation.
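A minimal numerical sketch of this idea, with an invented impulse response: the compensator coefficients come from the normal equations of a convolution least-squares problem (here targeting system inversion, i.e. the desired system is the identity):

```python
import numpy as np

# Hypothetical example: find an FIR "polynomial system" c that makes
# conv(h, c) closest (in least squares) to a desired response d.
h = np.array([1.0, 0.5, 0.25])   # given system impulse response (invented)
d = np.zeros(8)
d[0] = 1.0                       # desired: unit impulse -> inversion
L = 6                            # compensator length

# Convolution matrix H such that H @ c == np.convolve(h, c)
H = np.zeros((len(h) + L - 1, L))
for j in range(L):
    H[j:j + len(h), j] = h

# The "normal linear matrix equation": (H^T H) c = H^T d
c = np.linalg.solve(H.T @ H, H.T @ d)
print(np.convolve(h, c)[:4])     # close to [1, 0, 0, 0]
```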

  20. Corrective Osteotomy for Symptomatic Clavicle Malunion Using Patient-specific Osteotomy and Reduction Guides.

    PubMed

    Haefeli, Mathias; Schenkel, Matthias; Schumacher, Ralf; Eid, Karim

    2017-09-01

    Midshaft clavicular fractures are often treated nonoperatively with good reported clinical outcome in a majority of patients. However, malunion with shortening of the affected clavicle is not uncommon. Shortening of the clavicle has been shown to affect shoulder strength and kinematics, with alteration of scapular position. Whereas the exact clinical impact of these factors is unknown, the deformity may lead to cosmetic and functional impairment, such as pain with weight-bearing on the shoulder girdle. Other reported complications of clavicular malunion include thoracic outlet syndrome, subclavicular vein thrombosis, and axillary plexus compression. Corrective osteotomy has therefore been recommended for symptomatic clavicular malunions, generally using plain x-rays for planning the necessary elongation. Particularly in malunited multifragmentary fractures, it may be difficult to determine the exact plane of osteotomy intraoperatively so as to restore the precise anatomic shape of the clavicle. We present a technique for corrective osteotomy using preoperative computer planning and 3-dimensionally printed patient-specific intraoperative osteotomy and reduction guides based on the healthy contralateral clavicle.

  1. Fluoromodule-based reporter/probes designed for in vivo fluorescence imaging

    PubMed Central

    Zhang, Ming; Chakraborty, Subhasish K.; Sampath, Padma; Rojas, Juan J.; Hou, Weizhou; Saurabh, Saumya; Thorne, Steve H.; Bruchez, Marcel P.; Waggoner, Alan S.

    2015-01-01

    Optical imaging of whole, living animals has proven to be a powerful tool in multiple areas of preclinical research and has allowed noninvasive monitoring of immune responses, tumor and pathogen growth, and treatment responses in longitudinal studies. However, fluorescence-based studies in animals are challenging because tissue absorbs and autofluoresces strongly in the visible light spectrum. These optical properties drive development and use of fluorescent labels that absorb and emit at longer wavelengths. Here, we present a far-red absorbing fluoromodule–based reporter/probe system and show that this system can be used for imaging in living mice. The probe we developed is a fluorogenic dye called SC1 that is dark in solution but highly fluorescent when bound to its cognate reporter, Mars1. The reporter/probe complex, or fluoromodule, produced peak emission near 730 nm. Mars1 was able to bind a variety of structurally similar probes that differ in color and membrane permeability. We demonstrated that a tool kit of multiple probes can be used to label extracellular and intracellular reporter–tagged receptor pools with 2 colors. Imaging studies may benefit from this far-red excited reporter/probe system, which features tight coupling between probe fluorescence and reporter binding and offers the option of using an expandable family of fluorogenic probes with a single reporter gene. PMID:26348895

  2. The usefulness of “corrected” body mass index vs. self-reported body mass index: comparing the population distributions, sensitivity, specificity, and predictive utility of three correction equations using Canadian population-based data

    PubMed Central

    2014-01-01

    Background National data on body mass index (BMI), computed from self-reported height and weight, is readily available for many populations including the Canadian population. Because self-reported weight is found to be systematically under-reported, it has been proposed that the bias in self-reported BMI can be corrected using equations derived from data sets which include both self-reported and measured height and weight. Such correction equations have been developed and adopted. We aim to evaluate the usefulness (i.e., distributional similarity; sensitivity and specificity; and predictive utility vis-à-vis disease outcomes) of existing and new correction equations in population-based research. Methods The Canadian Community Health Surveys from 2005 and 2008 include both measured and self-reported values of height and weight, which allows for construction and evaluation of correction equations. We focused on adults age 18–65, and compared three correction equations (two correcting weight only, and one correcting BMI) against self-reported and measured BMI. We first compared population distributions of BMI. Second, we compared the sensitivity and specificity of self-reported BMI and corrected BMI against measured BMI. Third, we compared the self-reported and corrected BMI in terms of association with health outcomes using logistic regression. Results All corrections outperformed self-report when estimating the full BMI distribution; the weight-only correction outperformed the BMI-only correction for females in the 23–28 kg/m2 BMI range. In terms of sensitivity/specificity, when estimating obesity prevalence, corrected values of BMI (from any equation) were superior to self-report. In terms of modelling BMI-disease outcome associations, findings were mixed, with no correction proving consistently superior to self-report. Conclusions If researchers are interested in modelling the full population distribution of BMI, or estimating the prevalence of obesity in a
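The sensitivity/specificity comparison described above can be sketched on synthetic data; the under-reporting bias, noise level, and correction coefficients below are invented for illustration and are not the paper's equations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: measured BMI, systematically under-reported
# self-report, and a toy linear correction of the self-report.
measured = rng.normal(27.0, 4.5, 10_000)
self_rep = measured - 1.2 + rng.normal(0.0, 1.0, 10_000)  # under-reported
corrected = 1.0 * self_rep + 1.2                          # toy correction

def sens_spec(pred_bmi, true_bmi, cut=30.0):
    """Sensitivity/specificity for obesity (BMI >= cut) vs measured BMI."""
    pred, true = pred_bmi >= cut, true_bmi >= cut
    sens = (pred & true).sum() / true.sum()
    spec = (~pred & ~true).sum() / (~true).sum()
    return sens, spec

print("self-report:", sens_spec(self_rep, measured))
print("corrected:  ", sens_spec(corrected, measured))
```

Because self-report is biased low, it misses obese individuals near the cut-point; removing the bias raises sensitivity, mirroring the paper's finding that corrected BMI is superior for prevalence estimation.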

  3. Quadratically Convergent Method for Simultaneously Approaching the Roots of Polynomial Solutions of a Class of Differential Equations

    NASA Astrophysics Data System (ADS)

    Recchioni, Maria Cristina

    2001-12-01

    This paper investigates the application of the method introduced by L. Pasquini (1989) for simultaneously approaching the zeros of polynomial solutions to a class of second-order linear homogeneous ordinary differential equations with polynomial coefficients, in the particular case in which these polynomial solutions have zeros symmetrically arranged with respect to the origin. The method is based on a family of nonlinear equations associated with the given class of differential equations, whose roots are related to the roots of the polynomial solutions of the differential equations considered. Newton's method is applied to find the roots of these nonlinear equations. The nonsingularity of these roots was studied in (Pasquini, 1994); here, following the same lines, more favourable results are proven in the particular case of polynomial solutions with symmetrical zeros. The method is applied to approximate the roots of Hermite-Sobolev type polynomials and Freud polynomials. A lower bound for the smallest positive root of Hermite-Sobolev type polynomials is given via the nonlinear equation. The quadratic convergence of the method is proven. A comparison with a classical method that uses the Jacobi matrices is carried out, showing that the algorithm derived from the proposed method is sometimes preferable to the classical QR-type algorithms for computing the eigenvalues of the Jacobi matrices, even when these matrices are real and symmetric.
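The "classical method that uses the Jacobi matrices" can be sketched for ordinary (physicists') Hermite polynomials, whose zeros are symmetric about the origin: they are the eigenvalues of a symmetric tridiagonal Jacobi matrix (the Hermite-Sobolev case of the paper requires the modified recurrences, not shown here):

```python
import numpy as np

# Golub-Welsch-style computation: the zeros of the physicists' Hermite
# polynomial H_n are the eigenvalues of the symmetric tridiagonal Jacobi
# matrix with zero diagonal and off-diagonal entries sqrt(k/2).
def hermite_zeros(n):
    off = np.sqrt(np.arange(1, n) / 2.0)
    J = np.diag(off, 1) + np.diag(off, -1)
    return np.sort(np.linalg.eigvalsh(J))

z = hermite_zeros(4)
print(z)   # symmetric about the origin: -z reversed equals z
```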

  4. Fatigue Crack Growth Rate and Stress-Intensity Factor Corrections for Out-of-Plane Crack Growth

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Herman, Dave J.; James, Mark A.

    2003-01-01

    Fatigue crack growth rate testing is performed by automated data collection systems that assume straight crack growth in the plane of symmetry and use standard polynomial solutions to compute crack length and stress-intensity factors from compliance or potential drop measurements. Visual measurements used to correct the collected data typically include only the horizontal crack length, which for cracks that propagate out-of-plane under-estimates the crack growth rates and over-estimates the stress-intensity factors. The authors have devised an approach for correcting both the crack growth rates and stress-intensity factors based on two-dimensional mixed mode-I/II finite element analysis (FEA). The approach is used to correct out-of-plane data for 7050-T7451 and 2025-T6 aluminum alloys. Results indicate the correction process works well for high ΔK levels but fails to capture the mixed-mode effects at ΔK levels approaching threshold (da/dN ≈ 10^-10 m/cycle).

  5. Combining freeform optics and curved detectors for wide field imaging: a polynomial approach over squared aperture.

    PubMed

    Muslimov, Eduard; Hugot, Emmanuel; Jahn, Wilfried; Vives, Sebastien; Ferrari, Marc; Chambion, Bertrand; Henry, David; Gaschet, Christophe

    2017-06-26

    In recent years, significant progress has been achieved in the design and fabrication of optical systems based on freeform optical surfaces. They make it possible to build fast, wide-angle and high-resolution systems that are very compact and free of obscuration. However, design techniques for freeform surfaces remain underexplored. In the present paper we use the mathematical apparatus of orthogonal polynomials defined over a square aperture, developed earlier for wavefront reconstruction, to describe the shape of a mirror surface. Two cases, namely Legendre polynomials and a generalization of the Zernike polynomials on a square, are considered. The potential advantages of these polynomial sets are demonstrated on the example of a three-mirror unobscured telescope with F/# = 2.5 and FoV = 7.2x7.2°. In addition, we discuss the possibility of using curved detectors in such a design.
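A toy illustration of describing a surface over a square aperture with 2-D Legendre terms (the coefficient values are invented; `numpy.polynomial.legendre` supplies the basis):

```python
import numpy as np
from numpy.polynomial import legendre

# Toy freeform sag over the unit square [-1, 1]^2 built from 2-D Legendre
# terms; c[i, j] multiplies P_i(x) * P_j(y). Values are for illustration only.
c = np.zeros((3, 3))
c[2, 0] = 0.5e-3   # P2(x): x-curvature-like term
c[0, 2] = 0.3e-3   # P2(y)
c[1, 1] = 0.1e-3   # cross (astigmatism-like) term
x = y = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, y)
sag = legendre.legval2d(X, Y, c)
print(sag[100, 100])               # sag at the aperture center

# Legendre terms are orthogonal over the square, so each coefficient is
# independent -- a quick 1-D check that <P1, P2> = 0:
P1 = legendre.legval(x, [0, 1])
P2 = legendre.legval(x, [0, 0, 1])
dx = x[1] - x[0]
print(np.sum(P1 * P2) * dx)        # ~ 0
```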

  6. Investigation of magneto-hemodynamic flow in a semi-porous channel using orthonormal Bernstein polynomials

    NASA Astrophysics Data System (ADS)

    Hosseini, E.; Loghmani, G. B.; Heydari, M.; Rashidi, M. M.

    2017-07-01

    In this paper, the problem of the magneto-hemodynamic laminar viscous flow of a conducting physiological fluid in a semi-porous channel under a transverse magnetic field is investigated numerically. Using Berman's similarity transformation, the two-dimensional momentum conservation partial differential equations can be written as a system of nonlinear ordinary differential equations incorporating Lorentzian magneto-hydrodynamic body force terms. A new computational method based on the operational matrix of derivative of orthonormal Bernstein polynomials for solving the resulting differential systems is introduced. Moreover, by using the residual correction process, two types of error estimates are provided and reported to show the strength of the proposed method. Graphical and tabular results are presented to investigate the influence of the Hartmann number (Ha) and the transpiration Reynolds number (Re) on velocity profiles in the channel. The results are compared with those obtained in previous works to confirm the accuracy and efficiency of the proposed scheme.
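The plain (non-orthonormalized) Bernstein basis on [0, 1] is easy to sketch; the orthonormal variant used in the paper adds a Gram-Schmidt step not shown here:

```python
import numpy as np
from math import comb

# Bernstein basis of degree n on [0, 1]:
#   B_{i,n}(x) = C(n, i) * x^i * (1 - x)^(n - i)
def bernstein(n, x):
    x = np.asarray(x, dtype=float)
    return np.array([comb(n, i) * x**i * (1 - x)**(n - i)
                     for i in range(n + 1)])

x = np.linspace(0.0, 1.0, 101)
B = bernstein(4, x)
print(B.sum(axis=0)[:3])   # partition of unity: all ones
```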

  7. Multi-indexed Meixner and little q-Jacobi (Laguerre) polynomials

    NASA Astrophysics Data System (ADS)

    Odake, Satoru; Sasaki, Ryu

    2017-04-01

    As the fourth stage of the project multi-indexed orthogonal polynomials, we present the multi-indexed Meixner and little q-Jacobi (Laguerre) polynomials in the framework of ‘discrete quantum mechanics’ with real shifts defined on the semi-infinite lattice in one dimension. They are obtained, in a similar way to the multi-indexed Laguerre and Jacobi polynomials reported earlier, from the quantum mechanical systems corresponding to the original orthogonal polynomials by multiple application of the discrete analogue of the Darboux transformations or the Crum-Krein-Adler deletion of virtual state vectors. The virtual state vectors are the solutions of the matrix Schrödinger equation on all the lattice points having negative energies and infinite norm. This is in good contrast to the (q-)Racah systems defined on a finite lattice, in which the ‘virtual state’ vectors satisfy the matrix Schrödinger equation except for one of the two boundary points.

  8. Staircase tableaux, the asymmetric exclusion process, and Askey-Wilson polynomials

    PubMed Central

    Corteel, Sylvie; Williams, Lauren K.

    2010-01-01

    We introduce some combinatorial objects called staircase tableaux, which have cardinality 4^n n!, and connect them to both the asymmetric exclusion process (ASEP) and Askey-Wilson polynomials. The ASEP is a model from statistical mechanics introduced in the late 1960s, which describes a system of interacting particles hopping left and right on a one-dimensional lattice of n sites with open boundaries. It has been cited as a model for traffic flow and translation in protein synthesis. In its most general form, particles may enter and exit at the left with probabilities α and γ, and they may exit and enter at the right with probabilities β and δ. In the bulk, the probability of hopping left is q times the probability of hopping right. Our first result is a formula for the stationary distribution of the ASEP with all parameters general, in terms of staircase tableaux. Our second result is a formula for the moments of (the weight function of) Askey-Wilson polynomials, also in terms of staircase tableaux. Since the 1980s there has been a great deal of work giving combinatorial formulas for moments of classical orthogonal polynomials (e.g. Hermite, Charlier, Laguerre); among these polynomials, the Askey-Wilson polynomials are the most important, because they are at the top of the hierarchy of classical orthogonal polynomials. PMID:20348417

  9. Staircase tableaux, the asymmetric exclusion process, and Askey-Wilson polynomials.

    PubMed

    Corteel, Sylvie; Williams, Lauren K

    2010-04-13

    We introduce some combinatorial objects called staircase tableaux, which have cardinality 4^n n!, and connect them to both the asymmetric exclusion process (ASEP) and Askey-Wilson polynomials. The ASEP is a model from statistical mechanics introduced in the late 1960s, which describes a system of interacting particles hopping left and right on a one-dimensional lattice of n sites with open boundaries. It has been cited as a model for traffic flow and translation in protein synthesis. In its most general form, particles may enter and exit at the left with probabilities α and γ, and they may exit and enter at the right with probabilities β and δ. In the bulk, the probability of hopping left is q times the probability of hopping right. Our first result is a formula for the stationary distribution of the ASEP with all parameters general, in terms of staircase tableaux. Our second result is a formula for the moments of (the weight function of) Askey-Wilson polynomials, also in terms of staircase tableaux. Since the 1980s there has been a great deal of work giving combinatorial formulas for moments of classical orthogonal polynomials (e.g. Hermite, Charlier, Laguerre); among these polynomials, the Askey-Wilson polynomials are the most important, because they are at the top of the hierarchy of classical orthogonal polynomials.
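For a handful of sites, the stationary distribution with all five parameters (α, β, γ, δ, q) general can be computed directly from the Markov generator, a useful cross-check against tableau-based formulas; a sketch (continuous-time rates, not the paper's tableaux):

```python
import itertools
import numpy as np

# Exact stationary distribution of the open ASEP on n sites by solving
# pi Q = 0 for the continuous-time generator Q over all 2^n occupancy states.
def asep_stationary(n, alpha, beta, gamma, delta, q):
    states = list(itertools.product([0, 1], repeat=n))
    idx = {s: i for i, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))

    def add(s, t, rate):
        Q[idx[s], idx[t]] += rate
        Q[idx[s], idx[s]] -= rate

    for s in states:
        if s[0] == 0: add(s, (1,) + s[1:], alpha)    # enter at left
        if s[0] == 1: add(s, (0,) + s[1:], gamma)    # exit at left
        if s[-1] == 1: add(s, s[:-1] + (0,), beta)   # exit at right
        if s[-1] == 0: add(s, s[:-1] + (1,), delta)  # enter at right
        for i in range(n - 1):
            if s[i] == 1 and s[i + 1] == 0:          # hop right, rate 1
                t = list(s); t[i], t[i + 1] = 0, 1
                add(s, tuple(t), 1.0)
            if s[i] == 0 and s[i + 1] == 1:          # hop left, rate q
                t = list(s); t[i], t[i + 1] = 1, 0
                add(s, tuple(t), q)

    # Left null vector of Q with the normalization sum(pi) = 1
    A = np.vstack([Q.T, np.ones(len(states))])
    b = np.zeros(len(states) + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return states, pi

states, pi = asep_stationary(3, 1.0, 0.5, 0.2, 0.1, 0.3)
print(dict(zip(states, np.round(pi, 4))))
```

For n = 1 this reduces to the closed form P(occupied) = (α + δ)/(α + β + γ + δ), which the test below checks.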

  10. An experimental comparison of ETM+ image geometric correction methods in the mountainous areas of Yunnan Province, China

    NASA Astrophysics Data System (ADS)

    Wang, Jinliang; Wu, Xuejiao

    2010-11-01

    Geometric correction of imagery is a basic application of remote sensing technology. Its precision directly impacts the accuracy and reliability of subsequent applications. The accuracy of geometric correction depends on many factors, including the correction model used, the accuracy of the reference map, the number of ground control points (GCPs) and their spatial distribution, and the resampling method. An ETM+ image of the Kunming Dianchi Lake Basin and 1:50000 geographical maps were used to compare different correction methods. The results showed that: (1) The correction errors were more than one pixel, and sometimes several pixels, when the polynomial model was used; the correction accuracy was not stable when the Delaunay model was used; the correction errors were less than one pixel when the collinearity equation was used. (2) With 6, 9, 25 and 35 GCPs selected randomly for geometric correction using the polynomial correction model, the best result was obtained with 25 GCPs. (3) Among the resampling methods, nearest neighbor gave the best contrast and the fastest computation, but the continuity of pixel gray values was not very good; cubic convolution gave the worst contrast and the longest computation time. According to the above results, bilinear resampling gave the best overall result.
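A first-order polynomial (affine) correction fitted to GCPs by least squares can be sketched as follows; the coordinates and distortion below are synthetic, not the study's data:

```python
import numpy as np

# Synthetic GCPs: map coordinates, plus image coordinates produced by an
# (invented) affine distortion with measurement noise.
rng = np.random.default_rng(1)
map_xy = rng.uniform(0, 5000, (9, 2))              # 9 GCPs in map coords
A_true = np.array([[0.92, 0.05], [-0.04, 0.98]])
t_true = np.array([120.0, -60.0])
img_xy = map_xy @ A_true.T + t_true + rng.normal(0, 0.3, (9, 2))

# First-order polynomial model: image = [x, y, 1] @ coef, fit by least squares
G = np.hstack([map_xy, np.ones((9, 1))])
coef, *_ = np.linalg.lstsq(G, img_xy, rcond=None)

resid = G @ coef - img_xy
rmse = np.sqrt((resid ** 2).mean())
print("RMSE (pixels):", rmse)
```

Higher-order polynomial models add x², xy, y², … columns to `G`; as the abstract notes, they need more well-distributed GCPs to stay stable.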

  11. New separated polynomial solutions to the Zernike system on the unit disk and interbasis expansion.

    PubMed

    Pogosyan, George S; Wolf, Kurt Bernardo; Yakhno, Alexander

    2017-10-01

    The differential equation proposed by Frits Zernike to obtain a basis of polynomial orthogonal solutions on the unit disk to classify wavefront aberrations in circular pupils is shown to have a set of new orthonormal solution bases involving Legendre and Gegenbauer polynomials in nonorthogonal coordinates, close to Cartesian ones. We find the overlaps between the original Zernike basis and a representative of the new set, which turn out to be Clebsch-Gordan coefficients.

  12. Error reduction in three-dimensional metrology combining optical and touch probe data

    NASA Astrophysics Data System (ADS)

    Gerde, Janice R.; Christens-Barry, William A.

    2010-08-01

    Analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS) is partly based on identifying the boundary ("parting line") between the "external surface area upper" (ESAU) and the sample's sole. Often, that boundary is obscured. We establish the parting line as the curved intersection between the sample's outer surface and its insole surface. The outer surface is determined by discrete point cloud coordinates obtained using a laser scanner. The insole surface is defined by point cloud data obtained using a touch-probe device, a coordinate measuring machine (CMM). Because these point cloud data sets do not overlap spatially, a polynomial surface is fitted to the insole data and extended to intersect a mesh fitted to the outer-surface point cloud. This line of intersection defines the ESAU boundary, permitting further fractional area calculations to proceed. The location of the defined parting line is sensitive to the polynomial used to fit the experimental data, and extrapolation to the intersection with the ESAU can heighten this sensitivity. We discuss a methodology for transforming these data into a common reference frame. Three error sources are considered: measurement error in the point cloud coordinates; error from fitting a polynomial surface to a point cloud and then extrapolating beyond the data set; and error from the reference frame transformation. These error sources can influence calculated surface areas. We describe experiments to assess error magnitude, the sensitivity of calculated results to these errors, and ways of minimizing error impact on calculated quantities. Ultimately, we must ensure that the statistical error from these procedures is minimized and within acceptance criteria.
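The fit-then-extrapolate step can be sketched with a synthetic point cloud: a quadratic surface is fitted by least squares and then evaluated outside the data range, where the abstract notes the sensitivity is highest:

```python
import numpy as np

# Synthetic "insole" point cloud: z = f(x, y) plus measurement noise
# (surface and noise level invented for illustration).
rng = np.random.default_rng(2)
x, y = rng.uniform(-1, 1, (2, 400))
z = 0.2 + 0.1 * x - 0.3 * y + 0.05 * x * y + rng.normal(0, 0.01, 400)

# Quadratic surface z = c0 + c1 x + c2 y + c3 x^2 + c4 xy + c5 y^2
def design(x, y):
    return np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

c, *_ = np.linalg.lstsq(design(x, y), z, rcond=None)

inside = design(np.array([0.5]), np.array([0.5])) @ c
outside = design(np.array([3.0]), np.array([3.0])) @ c  # extrapolation
print("inside fit:", inside[0], " extrapolated:", outside[0])
```

Inside the data, the fit is tightly constrained; far outside, small errors in the quadratic coefficients are amplified by the squared terms, which is exactly the sensitivity the abstract flags.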

  13. Specificity Bio-identification of CNT-Based Transistor

    NASA Astrophysics Data System (ADS)

    Wang, Sheng-Yu; Wu, Hue-Min

    2017-12-01

    In this research, we report a simple and general approach to π-π stacking functionalization of the sidewalls of CNTs by 1-pyrenebutanoic acid, succinimidyl ester (PSE), and subsequent immobilization of insulin-like growth factor 1 receptor (IGF1R) onto SWNTs with a high degree of control and specificity. The choice of PSE allows visualization and characterization of individual CNTs through its strong luminescence. In addition, we designed a simple and efficient electrode with a staggered pattern to increase the effect of electrophoresis, using an electric field for the macroscopic alignment of CNTs to complete a field-effect device for CNT-based biosensors. Scanning electron microscopy (SEM) was used to investigate the morphology of the biosensors. Results of the four-point probe method demonstrated high selectivity and sensitivity of detection. The functionalization of SWNTs was investigated by Fourier transform infrared spectroscopy (FTIR). The experimental results imply that specific binding between IGF1R and its specific mAb produces a dramatic change in the electrical current of CNT-based devices, and suggest that the devices are promising biosensor candidates for detecting circulating cancer cells.

  14. Bohm-criterion approximation versus optimal matched solution for a cylindrical probe in radial-motion theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Din, Alif

    2016-08-15

    The theory of positive-ion collection by a probe immersed in a low-pressure plasma was reviewed and extended by Allen et al. [Proc. Phys. Soc. 70, 297 (1957)]. The numerical computations for cylindrical and spherical probes in a sheath region were presented by F. F. Chen [J. Nucl. Energy C 7, 41 (1965)]. Here, in this paper, the sheath and presheath solutions for a cylindrical probe are matched through a numerical matching procedure to yield a "matched" potential profile, or "M solution." The solution based on the Bohm-criterion approach, the "B solution," is discussed for this particular problem. The cylindrical-probe characteristics obtained from the correct potential profile (M solution) and from the approximated Bohm-criterion approach differ, which raises questions about the correctness of cylindrical probe theories relying only on the Bohm-criterion approach. A comparison between theoretical and experimental ion current characteristics also shows that in an argon plasma the ion motion towards the probe is almost radial.

  15. Polynomial Phase Estimation Based on Adaptive Short-Time Fourier Transform

    PubMed Central

    Jing, Fulong; Zhang, Chunjie; Si, Weijian; Wang, Yu; Jiao, Shuhong

    2018-01-01

    Polynomial phase signals (PPSs) have numerous applications in many fields including radar, sonar, geophysics, and radio communication systems. Therefore, estimation of PPS coefficients is very important. In this paper, a novel approach for PPS parameters estimation based on adaptive short-time Fourier transform (ASTFT), called the PPS-ASTFT estimator, is proposed. Using the PPS-ASTFT estimator, both one-dimensional and multi-dimensional searches and error propagation problems, which widely exist in PPSs field, are avoided. In the proposed algorithm, the instantaneous frequency (IF) is estimated by S-transform (ST), which can preserve information on signal phase and provide a variable resolution similar to the wavelet transform (WT). The width of the ASTFT analysis window is equal to the local stationary length, which is measured by the instantaneous frequency gradient (IFG). The IFG is calculated by the principal component analysis (PCA), which is robust to the noise. Moreover, to improve estimation accuracy, a refinement strategy is presented to estimate signal parameters. Since the PPS-ASTFT avoids parameter search, the proposed algorithm can be computed in a reasonable amount of time. The estimation performance, computational cost, and implementation of the PPS-ASTFT are also analyzed. The conducted numerical simulations support our theoretical results and demonstrate an excellent statistical performance of the proposed algorithm. PMID:29438317

  16. Polynomial Phase Estimation Based on Adaptive Short-Time Fourier Transform.

    PubMed

    Jing, Fulong; Zhang, Chunjie; Si, Weijian; Wang, Yu; Jiao, Shuhong

    2018-02-13

    Polynomial phase signals (PPSs) have numerous applications in many fields including radar, sonar, geophysics, and radio communication systems. Therefore, estimation of PPS coefficients is very important. In this paper, a novel approach for PPS parameters estimation based on adaptive short-time Fourier transform (ASTFT), called the PPS-ASTFT estimator, is proposed. Using the PPS-ASTFT estimator, both one-dimensional and multi-dimensional searches and error propagation problems, which widely exist in PPSs field, are avoided. In the proposed algorithm, the instantaneous frequency (IF) is estimated by S-transform (ST), which can preserve information on signal phase and provide a variable resolution similar to the wavelet transform (WT). The width of the ASTFT analysis window is equal to the local stationary length, which is measured by the instantaneous frequency gradient (IFG). The IFG is calculated by the principal component analysis (PCA), which is robust to the noise. Moreover, to improve estimation accuracy, a refinement strategy is presented to estimate signal parameters. Since the PPS-ASTFT avoids parameter search, the proposed algorithm can be computed in a reasonable amount of time. The estimation performance, computational cost, and implementation of the PPS-ASTFT are also analyzed. The conducted numerical simulations support our theoretical results and demonstrate an excellent statistical performance of the proposed algorithm.
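A fixed-window STFT peak-picking baseline (not the adaptive ASTFT of the paper) already recovers the instantaneous frequency of a simple quadratic-phase (linear FM) signal; a sketch with synthetic parameters:

```python
import numpy as np

# Synthetic quadratic-phase signal: IF(t) = f0 + k*t (parameters invented).
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
f0, k = 50.0, 200.0
sig = np.exp(2j * np.pi * (f0 * t + 0.5 * k * t ** 2))

# Fixed-window STFT: Hann window, zero-padded FFT, peak bin per frame.
win, hop, nfft = 64, 32, 512
freqs = np.fft.fftfreq(nfft, 1 / fs)
est_t, est_f = [], []
for start in range(0, len(sig) - win, hop):
    seg = sig[start:start + win] * np.hanning(win)
    spec = np.abs(np.fft.fft(seg, nfft))
    est_t.append((start + win / 2) / fs)       # frame center time
    est_f.append(freqs[np.argmax(spec)])       # IF estimate at that time

est_t, est_f = np.array(est_t), np.array(est_f)
print(est_f[:5])   # tracks 50 + 200*t near the start
```

The adaptive step of the paper amounts to making `win` vary with the local IF gradient instead of staying fixed at 64 samples.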

  17. Control of magnetic bearing systems via the Chebyshev polynomial-based unified model (CPBUM) neural network.

    PubMed

    Jeng, J T; Lee, T T

    2000-01-01

    A Chebyshev polynomial-based unified model (CPBUM) neural network is introduced and applied to control a magnetic bearing system. First, we show that the CPBUM neural network not only retains the universal approximation capability, but also learns faster than conventional feedforward/recurrent neural networks. This makes the CPBUM neural network more suitable for controller design than conventional feedforward/recurrent neural networks. Second, we propose an inverse system method, based on CPBUM neural networks, to control a magnetic bearing system. The proposed controller has two structures, namely off-line and on-line learning structures, and we derive a new learning algorithm for each. The experimental results show that the proposed neural network architecture provides greater flexibility and better performance in controlling magnetic bearing systems.
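
    The abstract does not give the CPBUM equations, but the functional-link idea behind Chebyshev polynomial networks is easy to sketch: expand the input in a fixed Chebyshev basis and train only a linear output layer, which is what makes learning fast. The sketch below is an assumption-laden stand-in (plain LMS on a 1-D toy function), not the paper's unified model or its controller.

```python
import math
import random

def cheb_features(x, order):
    """Chebyshev basis T_0..T_order via the recurrence T_0 = 1, T_1 = x,
    T_{k+1} = 2x*T_k - T_{k-1}; inputs are assumed scaled into [-1, 1]."""
    t = [1.0, x]
    for _ in range(order - 1):
        t.append(2.0 * x * t[-1] - t[-2])
    return t[:order + 1]

# Functional-link flavour of the CPBUM idea: the nonlinearity lives in the
# fixed Chebyshev expansion, so only one linear layer needs training.
order = 6
w = [0.0] * (order + 1)
random.seed(0)
for _ in range(20000):
    x = random.uniform(-1.0, 1.0)
    phi = cheb_features(x, order)
    target = math.sin(math.pi * x)                  # toy function to learn
    err = target - sum(wi * pi for wi, pi in zip(w, phi))
    w = [wi + 0.05 * err * pi for wi, pi in zip(w, phi)]   # LMS update

def approx(x):
    return sum(wi * pi for wi, pi in zip(w, cheb_features(x, order)))
```

    After training, approx(x) matches sin(pi*x) on [-1, 1] up to the degree-6 truncation error; only the output weights were ever updated, which is the source of the speed advantage the abstract claims.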

  18. TMG-chitotriomycin as a probe for the prediction of substrate specificity of β-N-acetylhexosaminidases.

    PubMed

    Shiota, Hiroto; Kanzaki, Hiroshi; Hatanaka, Tadashi; Nitoda, Teruhiko

    2013-06-28

    TMG-chitotriomycin (1) produced by the actinomycete Streptomyces annulatus NBRC13369 was examined as a probe for the prediction of substrate specificity of β-N-acetylhexosaminidases (HexNAcases). According to the results of inhibition assays, 14 GH20 HexNAcases from various organisms were divided into 1-sensitive and 1-insensitive enzymes. Three representatives of each group were investigated for their substrate specificity. The 1-sensitive HexNAcases hydrolyzed N-acetylchitooligosaccharides but not N-glycan-type oligosaccharides, whereas the 1-insensitive enzymes hydrolyzed N-glycan-type oligosaccharides but not N-acetylchitooligosaccharides, indicating that TMG-chitotriomycin can be used as a molecular probe to distinguish between chitin-degrading HexNAcases and glycoconjugate-processing HexNAcases. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Goldindec: A Novel Algorithm for Raman Spectrum Baseline Correction

    PubMed Central

    Liu, Juntao; Sun, Jianyang; Huang, Xiuzhen; Li, Guojun; Liu, Binqiang

    2016-01-01

    Raman spectra have been widely used in biology, physics, and chemistry and have become an essential tool for the study of macromolecules. Nevertheless, the raw Raman signal is often obscured by a broad background curve (or baseline) due to the intrinsic fluorescence of the organic molecules, which leads to unpredictable negative effects in quantitative analysis of Raman spectra. It is therefore essential to correct this baseline before analyzing raw Raman spectra. Polynomial fitting has proven to be the most convenient and simplest method and has high accuracy. In polynomial fitting, the cost function used and its parameters are crucial. This article proposes a novel iterative algorithm named Goldindec, freely available for noncommercial use as noted in the text, with a new cost function that not only suppresses the influence of large peaks but also solves the problem of low correction accuracy when the number of peaks is high. Goldindec generates its parameters automatically from the raw data rather than by empirical choice, as in previous methods. Comparisons with other algorithms on benchmark data show that Goldindec has higher accuracy and computational efficiency, and is hardly affected by large peaks, peak number, or wavenumber. PMID:26037638
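
    Goldindec's specific cost function is not reproduced in the abstract, so the sketch below shows the simpler iterative polynomial-fitting family it improves on (ModPoly-style clipping): fit a low-order polynomial, clip the spectrum to the fit so peaks cannot drag it upward, and refit until the curve settles onto the background. All names and parameters here are illustrative.

```python
import math

def polyfit_ls(xs, ys, deg):
    """Least-squares polynomial fit (normal equations + Gaussian elimination);
    returns coefficients, lowest power first. Fine for low degrees on x in [0, 1]."""
    n = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))   # partial pivoting
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [arj - f * acj for arj, acj in zip(A[r], A[c])]
            b[r] -= f * b[c]
    coef = [0.0] * n
    for i in reversed(range(n)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

def estimate_baseline(xs, ys, deg=3, iters=50):
    """ModPoly-style iteration (a simpler cousin of Goldindec's cost function):
    clip the spectrum to the current fit after each pass, so Raman peaks stop
    pulling the polynomial upward and it sinks onto the fluorescence background."""
    work = list(ys)
    fit = work
    for _ in range(iters):
        c = polyfit_ls(xs, work, deg)
        fit = [sum(ck * x ** k for k, ck in enumerate(c)) for x in xs]
        work = [min(w, f) for w, f in zip(work, fit)]
    return fit

# synthetic spectrum: quadratic fluorescence background + one narrow peak
xs = [i / 200 for i in range(201)]
background = [5 + 2 * x + 3 * x * x for x in xs]
spectrum = [bg + 10 * math.exp(-((x - 0.5) / 0.02) ** 2)
            for x, bg in zip(xs, background)]
est = estimate_baseline(xs, spectrum)
```

    Subtracting the estimated baseline recovers the peak; Goldindec's contribution is a cost function and automatic parameter choice that keep this idea accurate when peaks are large or numerous.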

  20. Aggregation-induced emission spectral shift as a measure of local concentration of a pH-activatable rhodamine-based smart probe

    NASA Astrophysics Data System (ADS)

    Arsov, Zoran; Urbančič, Iztok; Štrancar, Janez

    2018-02-01

    Activatable probes that report on their molecular vicinity through contact-based mechanisms such as aggregation can be very convenient: such probes change a particular spectral property only at the intended biologically relevant target. Xanthene derivatives, for example rhodamines, are able to form aggregates. Aggregation is typically examined by absorption spectroscopy, but for microscopy applications utilizing fluorescent probes it is important to perform the characterization by measuring fluorescence spectra. First, we show that excitation spectra of aqueous solutions of rhodamine 6G can be very informative about the aggregation features. Next, we establish the dependence of the fluorescence emission spectral maximum shift on the dimer concentration. This information allowed us to confirm that a recently designed and synthesized rhodamine 6G-based pH-activatable fluorescent probe can aggregate, and to study the pH and concentration dependence of this aggregation. The size of the aggregation-induced emission spectral shift at a specific position on the sample can be measured by fluorescence microspectroscopy, which at a given pH allows estimation of the local concentration of the observed probe at the microscopic level. We therefore show that, besides aggregation-caused quenching and aggregation-induced emission, the aggregation-induced emission spectral shift can also be a useful photophysical phenomenon.

  1. Orthogonal Polynomials Associated with Complementary Chain Sequences

    NASA Astrophysics Data System (ADS)

    Behera, Kiran Kumar; Sri Ranga, A.; Swaminathan, A.

    2016-07-01

    Using the minimal parameter sequence of a given chain sequence, we introduce the concept of complementary chain sequences, which we view as perturbations of chain sequences. Using the relation between these complementary chain sequences and the corresponding Verblunsky coefficients, the para-orthogonal polynomials and the associated Szegő polynomials are analyzed. Two illustrations, one involving Gaussian hypergeometric functions and the other involving Carathéodory functions, are also provided. A connection between these two illustrations by means of complementary chain sequences is also observed.

  2. Euler polynomials and identities for non-commutative operators

    NASA Astrophysics Data System (ADS)

    De Angelis, Valerio; Vignat, Christophe

    2015-12-01

    Three kinds of identities involving non-commuting operators and Euler and Bernoulli polynomials are studied. The first identity, given by Bender and Bettencourt [Phys. Rev. D 54(12), 7710-7723 (1996)], expresses the nested commutator of the Hamiltonian and momentum operators as the commutator of the momentum and the shifted Euler polynomial of the Hamiltonian. The second, by Pain [J. Phys. A: Math. Theor. 46, 035304 (2013)], links the commutators and anti-commutators of the monomials of the position and momentum operators. The third appears in a work by Figueira de Morisson Faria and Fring [J. Phys. A: Math. Gen. 39, 9269 (2006)] in the context of non-Hermitian Hamiltonian systems. In each case, we provide several proofs and extensions of these identities that highlight the role of Euler and Bernoulli polynomials.

  3. A Probe for Measuring Moisture Content in Dead Roundwood

    Treesearch

    Richard W. Blank; John S. Frost; James E. Eenigenburg

    1983-01-01

    This paper reports field test results of a wood moisture probe's accuracy in measuring the fuel moisture content of dead roundwood. Probe measurements, corrected for temperature, correlated well with observed fuel moistures of 1-inch dead jack pine branchwood.

  4. Surface-Enhanced Raman Scattering Based Nonfluorescent Probe for Multiplex DNA Detection

    PubMed Central

    Sun, Lan; Yu, Chenxu; Irudayaraj, Joseph

    2008-01-01

    To provide rapid and accurate detection of DNA markers in a straightforward, inexpensive and multiplex format, an alternative surface enhanced Raman scattering (SERS) based probe was designed and fabricated to covalently attach both DNA probing sequence and non-fluorescent Raman tags to the surface of gold nanoparticles (DNA-AuP-RTag). The intensity of Raman signal of the probes could be controlled through the surface coverage of the non-fluorescent Raman tags (RTags). Detection sensitivity of these probes could be optimized by fine-tuning the amount of DNA molecules and RTags on the probes. Long-term stability of the DNA-AuP-RTag probes was found to be good (over 3 months). Excellent multiplexing capability of the DNA-AuP-RTag scheme was demonstrated by simultaneous identification of up to eight probes in a mixture. Detection of hybridization of single-stranded DNA (ssDNA) to its complementary targets was successfully accomplished with a long-term goal to use non-fluorescent RTags in a Raman-based DNA microarray platform. PMID:17465531

  5. Quantum dots-based probes conjugated to Annexin V for photostable apoptosis detection and imaging

    NASA Astrophysics Data System (ADS)

    Le Gac, Séverine; Vermes, Istvan; van den Berg, Albert

    2008-02-01

    Quantum dots (Qdots) are nanoparticles with fluorescent properties that are widely applied for cell staining. We present here the development of quantum dots for specific targeting of apoptotic cells, for both apoptosis detection and staining of apoptotic "living" cells. These Qdots are functionalized with Annexin V, a 35-kDa protein that specifically interacts with the membrane of apoptotic cells: Annexin V recognizes and binds to phosphatidylserine (PS) moieties, which are present on the outer membrane of apoptotic cells but not on that of healthy or necrotic cells. The Annexin V thus makes our Qdot probes specific for apoptotic cells. For that purpose, Qdot Streptavidin Conjugates are coupled to biotinylated Annexin V. Staining of apoptotic cells was verified using fluorescence and confocal microscopy on nonfixed cells. We show that, unlike organic dyes, Qdots are insensitive to bleaching after prolonged and frequent exposure, which makes them excellent candidates for time-lapse imaging. We illustrate the application of our Qdot-based probes by continuously following fast changes occurring on the membrane of apoptotic cells.

  6. On polynomial selection for the general number field sieve

    NASA Astrophysics Data System (ADS)

    Kleinjung, Thorsten

    2006-12-01

    The general number field sieve (GNFS) is the asymptotically fastest algorithm for factoring large integers. Its runtime depends on a good choice of a polynomial pair. In this article we present an improvement of the polynomial selection method of Montgomery and Murphy which has been used in recent GNFS records.
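
    For readers unfamiliar with polynomial selection, the classical starting point that Montgomery-Murphy-style methods (and the improvement above) refine is the base-m construction; a minimal sketch, with an illustrative small N:

```python
def base_m_polynomial(N, d):
    """Classic base-m construction: pick m near N**(1/d) and write N in base m.
    The digits give f(x) = sum(c_i * x**i) with f(m) = N, so the pair
    (f(x), x - m) shares the root m modulo N, which is what GNFS needs."""
    m = round(N ** (1.0 / d))
    digits, r = [], N
    for _ in range(d + 1):
        digits.append(r % m)       # digits[i] is the coefficient of x**i
        r //= m
    return digits, m

N = 10 ** 12 + 39                  # toy composite; real GNFS targets are huge
coeffs, m = base_m_polynomial(N, 4)
```

    The selection problem proper is then to search over many such pairs (plus rotations and translations) for one whose values are unusually small and smooth; speeding up that search is what the Montgomery-Murphy method and the improvement reported here address.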

  7. Novel, Miniature Multi-Hole Probes and High-Accuracy Calibration Algorithms for their use in Compressible Flowfields

    NASA Technical Reports Server (NTRS)

    Rediniotis, Othon K.

    1999-01-01

    Two new calibration algorithms were developed for the calibration of non-nulling multi-hole probes in compressible, subsonic flowfields. The reduction algorithms are robust and able to reduce data from any multi-hole probe inserted into any subsonic flowfield, generating very accurate predictions of the velocity vector, flow direction, total pressure, and static pressure. One of the algorithms, PROBENET, is based on the theory of neural networks, while the other is of a more conventional nature (a polynomial approximation technique) and introduces a novel idea of local least-squares fits. Both algorithms have been developed into complete, user-friendly software packages. New technology was developed for the fabrication of miniature multi-hole probes, with probe tip diameters down to 0.035". Several miniature 5- and 7-hole probes, with different probe tip geometries (hemispherical, conical, faceted) and different overall shapes (straight, cobra, elbow probes), were fabricated, calibrated, and tested. Emphasis was placed on the development of four stainless-steel conical 7-hole probes, 1/16" in diameter, calibrated at NASA Langley for the entire subsonic regime. The developed calibration algorithms were extensively tested with these probes, demonstrating excellent prediction capabilities. The probes were used in the "trap wing" wind tunnel tests in the 14'x22' wind tunnel at NASA Langley, providing valuable information on the flowfield over the wing. This report is organized in the following fashion: it consists of a "Technical Achievements" section that summarizes the major achievements, followed by an assembly of journal articles produced from this project, and ends with two manuals for the two probe calibration algorithms developed.
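
    The conventional algorithm's local least-squares idea can be illustrated in one dimension: rather than fitting one global polynomial over the whole calibration map, fit a small least-squares polynomial through only the calibration points nearest the query. The sketch below is a hypothetical 1-D stand-in (a line through the k nearest points of a made-up calibration curve); the real probe calibration works in a multi-dimensional pressure-coefficient space.

```python
import math

def local_linear_predict(cal_x, cal_y, x0, k=5):
    """Local least-squares sketch: unweighted straight-line fit through the
    k calibration points nearest x0, evaluated at x0."""
    pts = sorted(zip(cal_x, cal_y), key=lambda p: abs(p[0] - x0))[:k]
    n = float(len(pts))
    sx = sum(p[0] for p in pts)
    sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts)
    sxy = sum(p[0] * p[1] for p in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    a = (sy - b * sx) / n                            # intercept
    return a + b * x0

# made-up calibration curve: "flow angle" vs. a pressure coefficient
cal_x = [i / 10 for i in range(21)]        # coefficient samples 0.0 .. 2.0
cal_y = [math.sin(x) for x in cal_x]       # stand-in calibration response
pred = local_linear_predict(cal_x, cal_y, 0.7)
```

    Because each fit only has to match the calibration surface locally, a low-order model suffices even when the global map is strongly nonlinear, which is the appeal of local fits over a single global polynomial.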

  8. Diketopyrrolopyrrole: brilliant red pigment dye-based fluorescent probes and their applications.

    PubMed

    Kaur, Matinder; Choi, Dong Hoon

    2015-01-07

    The development of fluorescent probes for the detection of biologically relevant species is a burgeoning topic in the field of supramolecular chemistry. A number of available dyes, such as rhodamine, coumarin, fluorescein, and cyanine, have been employed in the design and synthesis of new fluorescent probes. However, diketopyrrolopyrrole (DPP) and its derivatives have a distinguished role in supramolecular chemistry for the design of fluorescent dyes. DPP dyes offer distinctive advantages relative to other organic dyes, including high fluorescence quantum yields and good light and thermal stability. Significant advances have been made in the development of new DPP-based fluorescent probes in recent years. In this tutorial review, we highlight progress in the development of DPP-based fluorescent probes from 2009 to the present and the applications of these probes to the recognition of biologically relevant species, including anions, cations, reactive oxygen species, thiols, and gases, along with other miscellaneous applications. This review is targeted toward providing readers with a deeper understanding for the future design of DPP-based fluorogenic probes for chemical and biological applications.

  9. Volumetric calibration of a plenoptic camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert

    Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.

  10. Volumetric calibration of a plenoptic camera.

    PubMed

    Hall, Elise Munz; Fahringer, Timothy W; Guildenbecher, Daniel R; Thurow, Brian S

    2018-02-01

    The volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
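
    A 1-D toy version of the polynomial-mapping idea: image a few known "dot" positions through a made-up distortion, then build the polynomial that maps sensor coordinates back to object coordinates without modeling the lens. The distortion model, points, and degree below are all invented for illustration; the paper's mapping is 3-D and fitted to a dot card at many depths.

```python
def newton_coeffs(xs, ys):
    """Divided-difference coefficients of the interpolating polynomial."""
    coef = list(ys)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(coef, xs, x):
    """Horner-style evaluation of the Newton-form polynomial."""
    y = coef[-1]
    for c, xk in zip(reversed(coef[:-1]), reversed(xs[:-1])):
        y = y * (x - xk) + c
    return y

# made-up 1-D "lens": true position t lands on the sensor at s = t*(1 + 0.1*t^2)
true_pts = [-0.9, -0.3, 0.3, 0.9]                      # known dot positions
sensor_pts = [t * (1 + 0.1 * t * t) for t in true_pts]  # where they were imaged
coef = newton_coeffs(sensor_pts, true_pts)              # mapping sensor -> object

# correct a fresh sensor reading whose true position is 0.5
reading = 0.5 * (1 + 0.1 * 0.5 * 0.5)
corrected = newton_eval(coef, sensor_pts, reading)
```

    No lens parameters appear anywhere: the polynomial absorbs whatever distortion the calibration targets experienced, which is the flexibility the paper exploits for nonplanar calibration as well.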

  11. Volumetric calibration of a plenoptic camera

    DOE PAGES

    Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert; ...

    2018-02-01

    Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.

  12. Calculators and Polynomial Evaluation.

    ERIC Educational Resources Information Center

    Weaver, J. F.

    The intent of this paper is to suggest and illustrate how electronic hand-held calculators, especially non-programmable ones with limited data-storage capacity, can be used to advantage by students in one particular aspect of work with polynomial functions. The basic mathematical background upon which calculator application is built is summarized.…
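
    The calculator-friendly evaluation scheme such a paper would build on is presumably Horner's rule, which keeps only one running value, so even a non-programmable calculator with a single memory register can evaluate any polynomial by repeating "recall, multiply by x, add the next coefficient, store":

```python
def horner(coeffs, x):
    """Evaluate a polynomial by Horner's rule; coeffs are highest power first.
    n multiplications and n additions, no powers, one running value."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

# p(x) = 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
value = horner([2, -6, 2, -1], 3.0)
```

    Compared with evaluating each power separately, Horner's rule needs no data storage beyond the running total, which matches the limited-storage setting the paper describes.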

  13. Peptide Probe for Crystalline Hydroxyapatite: In Situ Detection of Biomineralization

    NASA Astrophysics Data System (ADS)

    Cicerone, Marcus; Becker, Matthew; Simon, Carl; Chatterjee, Kaushik

    2009-03-01

    While cells template mineralization in vitro and in vivo, specific detection strategies that impart chemical and structural information on this process have proven elusive. Recently, we have developed, via phage display methods, a peptide probe for in situ detection that is specific to crystalline hydroxyapatite (HA). We are using this probe in fluorescence-based assays to characterize mineralization. One application being explored is the screening of tissue engineering scaffolds for their ability to support osteogenesis. Specifically, osteoblasts are being cultured in hydrogel scaffolds possessing property gradients to provide a test bed for the HA peptide probe. Hydrogel properties that support osteogenesis and HA deposition will be identified using the probe to demonstrate its utility in optimizing the design of tissue scaffolds.

  14. Generic expansion of the Jastrow correlation factor in polynomials satisfying symmetry and cusp conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lüchow, Arne, E-mail: luechow@rwth-aachen.de; Jülich Aachen Research Alliance; Sturm, Alexander

    2015-02-28

    Jastrow correlation factors play an important role in quantum Monte Carlo calculations. Together with an orbital-based antisymmetric function, they allow the construction of highly accurate correlation wave functions. In this paper, a generic expansion of the Jastrow correlation function in terms of polynomials that satisfy both the electron exchange symmetry constraint and the cusp conditions is presented. In particular, an expansion of the three-body electron-electron-nucleus contribution in terms of cuspless homogeneous symmetric polynomials is proposed. The polynomials can be expressed in a fairly arbitrary scaling function, allowing a generic implementation of the Jastrow factor. It is demonstrated with a few examples that the new Jastrow factor achieves 85%–90% of the total correlation energy in a variational quantum Monte Carlo calculation and more than 90% of the diffusion Monte Carlo correlation energy.
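
    As a concrete (and much simplified) illustration of the ingredients named above, the sketch below shows a two-body Jastrow term built from a Padé-type scaling function, with its linear coefficient fixed by the antiparallel electron-electron cusp condition u'(0) = 1/2. The scaling choice and parameters are assumptions, and the paper's cuspless symmetric three-body polynomials are not reproduced here.

```python
import math

def scaled(r, b=1.0):
    """Padé-type scaling r/(1 + b*r): bounded, so high polynomial orders in the
    scaled variable stay tame at large distance. (A common choice, assumed
    here; the paper allows fairly arbitrary scaling functions.)"""
    return r / (1.0 + b * r)

def jastrow_ee(r, c1=0.5, b=1.0):
    """Minimal two-body Jastrow term exp(u(r)) with u(r) = c1 * scaled(r).
    Choosing c1 = 1/2 enforces the antiparallel electron-electron cusp
    condition u'(0) = 1/2; higher-order terms in scaled(r) would form the
    cuspless variational part the abstract describes."""
    return math.exp(c1 * scaled(r, b))

# numerical check of the cusp: d/dr log J at r -> 0 should equal 1/2
h = 1e-7
cusp = (math.log(jastrow_ee(h)) - math.log(jastrow_ee(0.0))) / h
```

    Because the cusp is carried entirely by this fixed term, all remaining polynomial coefficients can be optimized freely without re-imposing the cusp constraint, which is the design point of the cuspless expansion.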

  15. Roots of polynomials by ratio of successive derivatives

    NASA Technical Reports Server (NTRS)

    Crouse, J. E.; Putt, C. W.

    1972-01-01

    An order-of-magnitude study of the ratios of successive polynomial derivatives yields information about the multiplicity of the root being approached and the approximate location of a root from a nearby point. The location approximation improves as a root is approached, so a powerful convergence procedure becomes available. These principles are developed into a computer program which finds the roots of polynomials with real coefficients.
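
    The derivative-ratio principle can be sketched directly: near a root r of multiplicity m, p/p' is approximately (x - r)/m and p*p''/p'^2 is approximately (m - 1)/m, so the ratios of successive derivatives estimate both the multiplicity and the distance to the root, and stepping by m*p/p' converges quickly even at multiple roots. This is a minimal reconstruction of the idea, not the report's actual program:

```python
def poly_eval(c, x):
    """Horner evaluation; coefficients highest power first."""
    v = 0.0
    for a in c:
        v = v * x + a
    return v

def poly_deriv(c):
    n = len(c) - 1
    return [a * (n - i) for i, a in enumerate(c[:-1])]

def root_by_derivative_ratio(c, x, steps=30):
    """Estimate multiplicity m from rho = p*p''/p'^2 (about (m-1)/m near the
    root), then take the multiplicity-corrected Newton step m*p/p'."""
    d1 = poly_deriv(c)
    d2 = poly_deriv(d1)
    m = 1
    for _ in range(steps):
        p, p1 = poly_eval(c, x), poly_eval(d1, x)
        if abs(p) < 1e-9 or p1 == 0.0:
            break
        rho = poly_eval(d2, x) * p / (p1 * p1)
        m = max(1, round(1.0 / (1.0 - rho))) if rho < 1.0 else 1
        x -= m * p / p1
    return x, m

# p(x) = (x - 2)^3 * (x + 1), started near the triple root
root, mult = root_by_derivative_ratio([1, -5, 6, 4, -8], 2.5)
```

    A plain Newton step stalls to linear convergence at the triple root; scaling the step by the estimated multiplicity restores the fast convergence the abstract describes.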

  16. Polynomial Interpolation and Sums of Powers of Integers

    ERIC Educational Resources Information Center

    Cereceda, José Luis

    2017-01-01

    In this note, we revisit the problem of polynomial interpolation and explicitly construct two polynomials in n of degree k + 1, P_k(n) and Q_k(n), such that P_k(n) = Q_k(n) = f_k(n) for n = 1, 2, …, k, where f_k(1), f_k(2), …, f_k(k) are k arbitrarily chosen…
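
    The interpolation fact such constructions rely on is that S_k(n) = 1^k + … + n^k is a polynomial of degree k + 1 and is therefore pinned down by k + 2 samples. A minimal sketch using exact rational Lagrange interpolation (the note's specific polynomials P_k and Q_k are not reproduced here):

```python
from fractions import Fraction

def sum_powers_poly(k):
    """Return S_k as a callable built by exact Lagrange interpolation through
    the k + 2 points (n, S_k(n)), n = 0..k+1; valid for k >= 1."""
    pts = [(0, 0)]
    s = 0
    for n in range(1, k + 2):
        s += n ** k
        pts.append((n, s))

    def eval_at(x):
        total = Fraction(0)
        for i, (xi, yi) in enumerate(pts):
            term = Fraction(yi)
            for j, (xj, _) in enumerate(pts):
                if i != j:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total

    return eval_at

S3 = sum_powers_poly(3)
# Faulhaber's classical closed form gives S_3(n) = (n(n+1)/2)^2 as a check
```

    Using Fraction keeps the interpolation exact, so the recovered polynomial reproduces the power sums at every n, not just the sample points.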

  17. Asymptotically extremal polynomials with respect to varying weights and application to Sobolev orthogonality

    NASA Astrophysics Data System (ADS)

    Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.

    2008-10-01

    We study the asymptotic behavior of the zeros of a sequence of polynomials whose weighted norms, with respect to a sequence of weight functions, have the same nth-root asymptotic behavior as the weighted norms of certain extremal polynomials. This result is applied to obtain the (contracted) weak zero distribution for orthogonal polynomials with respect to a Sobolev inner product with exponential weights of the form e^(−φ(x)), giving a unified treatment of the so-called Freud case (when φ has polynomial growth at infinity) and the Erdős case (when φ grows faster than any polynomial at infinity). In addition, we provide a new proof for the bound on the distance of the zeros to the convex hull of the support for these Sobolev orthogonal polynomials.

  18. A Riemann-Hilbert approach to asymptotic questions for orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Deift, P.; Kriecherbauer, T.; McLaughlin, K. T.-R.; Venakides, S.; Zhou, X.

    2001-08-01

    A few years ago the authors introduced a new approach to study asymptotic questions for orthogonal polynomials. In this paper we give an overview of our method and review the results which have been obtained in Deift et al. (Internat. Math. Res. Notices (1997) 759, Comm. Pure Appl. Math. 52 (1999) 1491, 1335), Deift (Orthogonal Polynomials and Random Matrices: A Riemann-Hilbert Approach, Courant Lecture Notes, Vol. 3, New York University, 1999), Kriecherbauer and McLaughlin (Internat. Math. Res. Notices (1999) 299) and Baik et al. (J. Amer. Math. Soc. 12 (1999) 1119). We mainly consider orthogonal polynomials with respect to weights on the real line which are either (1) Freud-type weights dα(x) = e^(−Q(x)) dx (Q a polynomial or Q(x) = x^β, β > 0), or (2) varying weights dα_n(x) = e^(−nV(x)) dx (V analytic, lim_{x→∞} V(x)/log x = ∞). We obtain Plancherel-Rotach-type asymptotics in the entire complex plane as well as asymptotic formulae with error estimates for the leading coefficients, for the recurrence coefficients, and for the zeros of the orthogonal polynomials. Our proof starts from an observation of Fokas et al. (Comm. Math. Phys. 142 (1991) 313) that the orthogonal polynomials can be determined as solutions of certain matrix-valued Riemann-Hilbert problems. We analyze the Riemann-Hilbert problems by a steepest-descent-type method introduced by Deift and Zhou (Ann. Math. 137 (1993) 295) and further developed in Deift and Zhou (Comm. Pure Appl. Math. 48 (1995) 277) and Deift et al. (Proc. Nat. Acad. Sci. USA 95 (1998) 450). A crucial step in our analysis is the use of the well-known equilibrium measure which describes the asymptotic distribution of the zeros of the orthogonal polynomials.

  19. Accurate Estimation of Solvation Free Energy Using Polynomial Fitting Techniques

    PubMed Central

    Shyu, Conrad; Ytreberg, F. Marty

    2010-01-01

    This report details an approach to improve the accuracy of free energy difference estimates using thermodynamic integration data (the slope of the free energy with respect to the switching variable λ) and its application to calculating solvation free energy. The central idea is to use polynomial fitting schemes to approximate the thermodynamic integration data and thereby improve the accuracy of the free energy difference estimates. Previously, we introduced the use of a polynomial regression technique to fit thermodynamic integration data (Shyu and Ytreberg, J Comput Chem 30: 2297–2304, 2009). In this report we introduce polynomial and spline interpolation techniques. Two systems with analytically solvable relative free energies are used to test the accuracy of the interpolation approach. We also use both interpolation and regression methods to determine a small-molecule solvation free energy. Our simulations show that, using such polynomial techniques and non-equidistant λ values, the solvation free energy can be estimated with high accuracy without using soft-core scaling and separate simulations for Lennard-Jones and partial charges. The results from our study suggest that these polynomial techniques, especially with non-equidistant λ values, improve the accuracy of ΔF estimates without demanding additional simulations. We also provide general guidelines for the use of polynomial fitting to estimate free energy. To allow researchers to immediately utilize these methods, free software and documentation are provided via http://www.phys.uidaho.edu/ytreberg/software. PMID:20623657
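
    A minimal sketch of the fit-then-integrate idea: interpolate the dF/dλ samples at non-equidistant λ values by a polynomial and integrate it analytically over [0, 1]. The "slope data" below come from a made-up smooth curve rather than a simulation, and this is plain interpolation, not the report's regression or spline variants:

```python
def interp_coeffs(xs, ys):
    """Monomial coefficients (lowest power first) of the polynomial through
    the given points, built by expanding the Lagrange basis."""
    n = len(xs)
    coeffs = [0.0] * n
    for i in range(n):
        basis = [1.0]                  # numerator prod_{j != i} (x - xs[j])
        denom = 1.0
        for j in range(n):
            if j == i:
                continue
            shifted = [0.0] + basis    # multiply current basis by x
            for k2, bk in enumerate(basis):
                shifted[k2] -= xs[j] * bk
            basis = shifted
            denom *= xs[i] - xs[j]
        for k2, bk in enumerate(basis):
            coeffs[k2] += ys[i] * bk / denom
    return coeffs

# non-equidistant lambda values, as the report recommends
lams = [0.0, 0.1, 0.4, 0.75, 1.0]
slopes = [12 * l * l - 6 * l + 1 for l in lams]   # pretend <dU/dlambda> data
c = interp_coeffs(lams, slopes)
# analytic integral of the fitted polynomial over [0, 1] gives Delta F
dF = sum(ck / (k + 1) for k, ck in enumerate(c))
```

    Once the slope curve is a polynomial, the free energy difference is a closed-form integral, so accuracy hinges entirely on how well the polynomial captures the dF/dλ profile between the sampled λ values.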

  20. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression.

    PubMed

    Ding, A Adam; Wu, Hulin

    2014-10-01

    We propose a new method that uses a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models, with the goal of improving the smoothing-based two-stage pseudo-least-squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that the new estimator is clearly better than the pseudo-least-squares estimator in estimation accuracy, at a small computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed method.
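
    The smoothing-based first stage that the paper improves on can be sketched as an unconstrained local linear regression: a kernel-weighted straight-line fit around each point yields both a smoothed state value and a derivative estimate, which are then plugged into the ODE to get the pseudo-least-squares parameter estimate. The ODE-derived constraints that define the paper's method are omitted in this sketch:

```python
import math

def local_linear(ts, ys, t0, h):
    """Gaussian-kernel-weighted local linear fit around t0: returns the
    fitted value a and fitted slope b at t0 (closed-form 2x2 solve)."""
    s0 = s1 = s2 = r0 = r1 = 0.0
    for t, y in zip(ts, ys):
        d = t - t0
        w = math.exp(-0.5 * (d / h) ** 2)
        s0 += w; s1 += w * d; s2 += w * d * d
        r0 += w * y; r1 += w * y * d
    det = s0 * s2 - s1 * s1
    a = (s2 * r0 - s1 * r1) / det      # smoothed state at t0
    b = (s0 * r1 - s1 * r0) / det      # derivative estimate at t0
    return a, b

# toy ODE y' = theta * y with theta = 0.5; plugging the smoothed y and y'
# into the ODE gives the two-stage estimate theta_hat = y'/y at a point
theta = 0.5
ts = [i * 0.05 for i in range(41)]     # t in [0, 2]
ys = [math.exp(theta * t) for t in ts]
a, b = local_linear(ts, ys, 1.0, 0.2)
theta_hat = b / a
```

    With noisy data, the unconstrained derivative estimate b is the weak link of this two-stage scheme; incorporating the equation constraints into the local fit, as the paper proposes, is what tightens the resulting parameter estimate.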