Sample records for background correction technique

  1. Background correction in separation techniques hyphenated to high-resolution mass spectrometry - Thorough correction with mass spectrometry scans recorded as profile spectra.

    PubMed

    Erny, Guillaume L; Acunha, Tanize; Simó, Carolina; Cifuentes, Alejandro; Alves, Arminda

    2017-04-07

Separation techniques hyphenated with high-resolution mass spectrometry have been a true revolution in analytical separation techniques. Such instruments not only provide unmatched resolution, but they also allow measuring the peaks' accurate masses, which permits the identification of monoisotopic formulae. However, data files can be large, with a major contribution from background noise and background ions. Such unnecessary contributions to the overall signal can hide important features as well as decrease the accuracy of the centroid determination, especially for minor features. Thus, noise and baseline correction can be a valuable pre-processing step. The methodology described here, unlike other approaches, corrects the original dataset with the MS scans recorded as profile spectra. Using urine metabolic studies as examples, we demonstrate that this thorough correction reduces the data complexity by more than 90%. Such correction not only permits an improved visualisation of secondary peaks in the chromatographic domain, but it also facilitates the complete assignment of each MS scan, which is invaluable for detecting possible comigrating/coeluting species. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Simple automatic strategy for background drift correction in chromatographic data analysis.

    PubMed

    Fu, Hai-Yan; Li, He-Dong; Yu, Yong-Jie; Wang, Bing; Lu, Peng; Cui, Hua-Peng; Liu, Ping-Ping; She, Yuan-Bin

    2016-06-03

Chromatographic background drift correction, which influences peak detection and time-shift alignment results, is a critical stage in chromatographic data analysis. In this study, an automatic background drift correction methodology was developed. Local minimum values in a chromatogram were initially detected and organized as a new baseline vector. Iterative optimization was then employed to recognize outliers in this vector, which belong to the chromatographic peaks, and to update them in the baseline until convergence. The optimized baseline vector was finally expanded onto the original chromatogram, and linear interpolation was employed to estimate background drift in the chromatogram. The principle underlying the proposed method was confirmed using a complex gas chromatographic dataset. Finally, the proposed approach was applied to eliminate background drift in liquid chromatography quadrupole time-of-flight data used in a metabolic study of Escherichia coli samples. The proposed method was comparable with three classical techniques: morphological weighted penalized least squares, the moving window minimum value strategy, and background drift correction by orthogonal subspace projection. The proposed method allows almost automatic implementation of background drift correction, which is convenient for practical use. Copyright © 2016 Elsevier B.V. All rights reserved.
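The three steps described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the neighbour-interpolation outlier update and the convergence tolerance are assumptions.

```python
import numpy as np

def correct_drift(y, tol=1e-6, max_iter=200):
    """Sketch of an automatic drift correction of the kind described above.

    1. Local minima of the chromatogram form an initial baseline vector.
    2. Anchors still sitting on chromatographic peaks show up as outliers
       lying above the interpolation of their neighbours; they are pulled
       down iteratively until convergence.
    3. The converged anchors are linearly interpolated back onto the full
       time axis to give the background estimate.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    idx = [0] + [i for i in range(1, n - 1)
                 if y[i] <= y[i - 1] and y[i] <= y[i + 1]] + [n - 1]
    v = y[idx].copy()
    for _ in range(max_iter):
        mid = 0.5 * (v[:-2] + v[2:])       # neighbour interpolation
        outliers = v[1:-1] > mid + tol     # anchors still on a peak
        if not outliers.any():
            break
        v[1:-1] = np.where(outliers, mid, v[1:-1])
    baseline = np.interp(np.arange(n), idx, v)
    return y - baseline, baseline
```

On a chromatogram consisting of a linear drift plus a single Gaussian peak, the estimated baseline follows the drift while the corrected trace retains the peak.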

  3. Effect of background correction on peak detection and quantification in online comprehensive two-dimensional liquid chromatography using diode array detection.

    PubMed

    Allen, Robert C; John, Mallory G; Rutan, Sarah C; Filgueira, Marcelo R; Carr, Peter W

    2012-09-07

A singular value decomposition-based background correction (SVD-BC) technique is proposed for the reduction of background contributions in online comprehensive two-dimensional liquid chromatography (LC×LC) data. The SVD-BC technique was compared to simply subtracting a blank chromatogram from a sample chromatogram and to a previously reported background correction technique for one-dimensional chromatography, which uses an asymmetric weighted least squares (AWLS) approach. AWLS was the only background correction technique to completely remove the background artifacts from the samples, as evaluated by visual inspection. However, the SVD-BC technique also greatly reduced or eliminated the background artifacts and preserved the peak intensity better than AWLS. The loss in peak intensity with AWLS resulted in lower peak counts at the detection thresholds established using standard samples. However, the SVD-BC technique was found to introduce noise, which led to the detection of false peaks at the lower detection thresholds. As a result, the AWLS technique gave more precise peak counts than the SVD-BC technique, particularly at the lower detection thresholds. While the AWLS technique resulted in more consistent percent residual standard deviation values, a statistical improvement in peak quantification after background correction was not found regardless of the background correction technique used. Copyright © 2012 Elsevier B.V. All rights reserved.
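A minimal sketch of what an SVD-based background correction of this kind can look like, assuming the background subspace is estimated from a blank run; the paper's exact SVD-BC formulation may differ.

```python
import numpy as np

def svd_background_correct(sample, blank, k=1):
    """Remove the background subspace spanned by the first k right
    singular vectors of a blank (background-only) run: each sample
    spectrum is corrected by subtracting its projection onto that
    subspace."""
    _, _, vt = np.linalg.svd(np.asarray(blank, dtype=float),
                             full_matrices=False)
    basis = vt[:k]                          # k background spectral shapes
    return sample - (sample @ basis.T) @ basis
```

With a rank-1 synthetic background, rows containing only background are driven to zero while an analyte peak largely survives (attenuated only by its overlap with the background spectrum).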

  4. Elementary review of electron microprobe techniques and correction requirements

    NASA Technical Reports Server (NTRS)

    Hart, R. K.

    1968-01-01

    Report contains requirements for correction of instrumented data on the chemical composition of a specimen, obtained by electron microprobe analysis. A condensed review of electron microprobe techniques is presented, including background material for obtaining X ray intensity data corrections and absorption, atomic number, and fluorescence corrections.

  5. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods. However, few of them apply these methods in the field of LIBS technology, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness characteristic. Experiments on background correction simulation indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) over polynomial fitting, Lorentz fitting and model-free methods after background correction. These background correction methods all acquire larger SBR values than that acquired before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method acquires large SBR values, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods exhibit improved quantitative results for Cu compared with those acquired before background correction (the linear correlation coefficient value before background correction is 0.9776; moreover, the linear correlation coefficient values after background correction using spline interpolation, polynomial fitting, Lorentz
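A sketch of the spline-interpolation background estimate, using SciPy's `CubicSpline`. The anchor channels are assumed to be already identified as background-only; the paper detects such points automatically from the smoothness property, and the synthetic spectrum below is invented for illustration.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_background(y, anchor_idx):
    """Estimate the smooth continuous background of a spectrum by
    cubic-spline interpolation through background-only anchor channels."""
    anchor_idx = np.asarray(anchor_idx)
    y = np.asarray(y, dtype=float)
    return CubicSpline(anchor_idx, y[anchor_idx])(np.arange(len(y)))

# synthetic LIBS-like spectrum: slowly varying continuum + one narrow line
x = np.arange(200.0)
continuum = 5.0 * np.exp(-((x - 100.0) ** 2) / 8000.0)
line = 2.0 * np.exp(-((x - 60.0) ** 2) / 4.0)
spectrum = continuum + line
anchors = [0, 25, 45, 80, 110, 140, 170, 199]   # background-only channels
background = spline_background(spectrum, anchors)
corrected = spectrum - background
```

Because the continuum is smooth between anchors, the spline recovers it closely and the corrected spectrum retains the emission line on a near-zero baseline, which is what raises the SBR.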

  6. A comparative study on preprocessing techniques in diabetic retinopathy retinal images: illumination correction and contrast enhancement.

    PubMed

    Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza

    2015-01-01

To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method, using the median filter to estimate background illumination, showed the lowest coefficient of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same level of accuracy, and showed higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, namely the dividing method using the median filter to estimate background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation.
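The "dividing method" can be illustrated as follows. The kernel size, the edge padding, and the synthetic image are assumptions for the sketch, not the paper's implementation.

```python
import numpy as np

def dividing_correct(img, k=15):
    """Estimate the slowly varying illumination with a k x k median
    filter (naive pure-NumPy version) and divide the image by that
    estimate, so that uniform tissue maps to values near 1."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    background = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            background[i, j] = np.median(padded[i:i + k, j:j + k])
    return img / np.maximum(background, 1e-6), background
```

On a flat scene under a linear illumination gradient, the division flattens the background while a small dark vessel-like structure, too small to shift the local median, keeps its contrast.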

  7. A Comparative Study on Preprocessing Techniques in Diabetic Retinopathy Retinal Images: Illumination Correction and Contrast Enhancement

    PubMed Central

    Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza

    2015-01-01

To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method, using the median filter to estimate background illumination, showed the lowest coefficient of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same level of accuracy, and showed higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, namely the dividing method using the median filter to estimate background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation. PMID:25709940

  8. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

Title 40, Protection of Environment: Air Pollution Controls, Engine-Testing Procedures, Calculations and Data Requirements, § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract...

  9. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

Title 40, Protection of Environment: Air Pollution Controls, Engine-Testing Procedures, Calculations and Data Requirements, § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract...

  10. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

Title 40, Protection of Environment: Air Pollution Controls, Engine-Testing Procedures, Calculations and Data Requirements, § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract...

  11. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

Title 40, Protection of Environment: Air Pollution Controls, Engine-Testing Procedures, Calculations and Data Requirements, § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract...

  12. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

Title 40, Protection of Environment: Air Pollution Controls, Engine-Testing Procedures, Calculations and Data Requirements, § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract...

  13. [Raman spectroscopy fluorescence background correction and its application in clustering analysis of medicines].

    PubMed

    Chen, Shan; Li, Xiao-ning; Liang, Yi-zeng; Zhang, Zhi-min; Liu, Zhao-xia; Zhang, Qi-ming; Ding, Li-xia; Ye, Fei

    2010-08-01

During Raman spectroscopy analysis, organic molecules and contaminations can obscure or swamp the Raman signals. The present study starts from Raman spectra of prednisone acetate tablets and glibenclamide tablets, acquired on a BWTek i-Raman spectrometer. The background is corrected with the R package baselineWavelet. Principal component analysis and random forests are then used to perform clustering analysis. By analyzing the Raman spectra of the two medicines, the accuracy and validity of this background-correction algorithm are checked, and the influence of the fluorescence background on clustering analysis of Raman spectra is discussed. It is concluded that correcting the fluorescence background is important for further analysis, and an effective background correction solution is provided for clustering and other analyses.

  14. Laser line illumination scheme allowing the reduction of background signal and the correction of absorption heterogeneities effects for fluorescence reflectance imaging.

    PubMed

    Fantoni, Frédéric; Hervé, Lionel; Poher, Vincent; Gioux, Sylvain; Mars, Jérôme I; Dinten, Jean-Marc

    2015-10-01

Intraoperative fluorescence imaging in reflectance geometry is an attractive imaging modality, as it allows noninvasive monitoring of fluorescence-targeted tumors located below the tissue surface. Drawbacks of this technique are the background fluorescence, which decreases the contrast, and absorption heterogeneities, which lead to misinterpretation of fluorescence concentrations. We propose a correction technique based on a laser line scanning illumination scheme. We scan the medium with the laser line and acquire, at each position of the line, both fluorescence and excitation images. We then use the finding that there is a relationship between the excitation intensity profile and the background fluorescence profile to predict the amount of signal to subtract from the fluorescence images to obtain a better contrast. As the light absorption information is contained in both the fluorescence and excitation images, this method also permits us to correct the effects of absorption heterogeneities. The technique has been validated in simulations and experimentally. Fluorescent inclusions are observed in several configurations at depths ranging from 1 mm to 1 cm. Results obtained with this technique are compared with those obtained with a classical wide-field detection scheme for contrast enhancement, and with a fluorescence-to-excitation ratio approach for absorption correction.
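A toy sketch of the correction idea. Here `alpha`, the proportionality between the excitation profile and the background-fluorescence profile, is assumed known (the paper estimates this relationship from the acquired image pairs); subtracting the predicted background and then normalising by the excitation image cancels absorption heterogeneities that attenuate both images alike.

```python
import numpy as np

def line_scan_correct(fluo, exc, alpha):
    """Subtract the background predicted from the excitation profile,
    then divide by the excitation image to compensate absorption."""
    fluo = np.asarray(fluo, dtype=float)
    exc = np.asarray(exc, dtype=float)
    return (fluo - alpha * exc) / np.maximum(exc, 1e-9)
```

In a toy model where both the fluorophore emission and the background scale with the local excitation, the correction recovers the underlying fluorophore distribution.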

  15. Correction of Stahl ear deformity using a suture technique.

    PubMed

    Khan, Muhammad Adil Abbas; Jose, Rajive M; Ali, Syed Nadir; Yap, Lok Huei

    2010-09-01

Correction of partial ear deformities can be a challenging task for the plastic surgeon. There are no standard techniques for correcting many of these deformities, and several different techniques are described in the literature. Stahl ear is one such anomaly, characterized by an accessory third crus in the ear cartilage giving rise to an irregular helical rim. The conventional techniques for correcting this deformity include excision of the cartilage, repositioning of the cartilage, or scoring techniques. We recently encountered a case of Stahl ear deformity and undertook correction using internal sutures, with very good results. The technical details of the surgery are described along with a review of the literature on correcting similar anomalies.

  16. Computational technique for stepwise quantitative assessment of equation correctness

    NASA Astrophysics Data System (ADS)

    Othman, Nuru'l Izzah; Bakar, Zainab Abu

    2017-04-01

Many of the computer-aided mathematics assessment systems that are available today possess the capability to implement stepwise correctness checking of a working scheme for solving equations. The computational technique for assessing the correctness of each response in the scheme mainly involves checking the mathematical equivalence and providing qualitative feedback. This paper presents a technique, known as the Stepwise Correctness Checking and Scoring (SCCS) technique, that checks the correctness of each equation in terms of structural equivalence and provides quantitative feedback. The technique, which is based on the Multiset framework, adapts certain techniques from textual information retrieval involving tokenization, document modelling and similarity evaluation. The performance of the SCCS technique was tested using worked solutions for solving linear algebraic equations in one variable. A total of 350 working schemes comprising 1385 responses were collected using a marking engine prototype, which was developed based on the technique. The results show that both the automated analytical scores and the automated overall scores generated by the marking engine exhibit high percent agreement, high correlation and a high degree of agreement with manual scores, with small average absolute and mixed errors.
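The multiset-based similarity at the heart of such a technique can be sketched with Python's `Counter`. The tokenisation pattern and the Jaccard-style overlap score are illustrative assumptions, not the paper's exact measure.

```python
import re
from collections import Counter

def token_multiset(eq):
    """Tokenise an equation string into a multiset of numbers,
    identifiers and single-character operators."""
    return Counter(re.findall(r"\d+|[a-zA-Z]+|[^\s]", eq))

def step_score(response, reference):
    """Multiset-overlap score in [0, 1] between a student's step and
    the reference step: intersection count over union count."""
    a, b = token_multiset(response), token_multiset(reference)
    union = sum((a | b).values())
    return sum((a & b).values()) / union if union else 1.0
```

A structurally identical step scores 1.0; a step that differs in one token is penalised proportionally, giving the quantitative per-step feedback the abstract describes.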

  17. GafChromic EBT film dosimetry with flatbed CCD scanner: a novel background correction method and full dose uncertainty analysis.

    PubMed

    Saur, Sigrun; Frengen, Jomar

    2008-07-01

    Film dosimetry using radiochromic EBT film in combination with a flatbed charge coupled device scanner is a useful method both for two-dimensional verification of intensity-modulated radiation treatment plans and for general quality assurance of treatment planning systems and linear accelerators. Unfortunately, the response over the scanner area is nonuniform, and when not corrected for, this results in a systematic error in the measured dose which is both dose and position dependent. In this study a novel method for background correction is presented. The method is based on the subtraction of a correction matrix, a matrix that is based on scans of films that are irradiated to nine dose levels in the range 0.08-2.93 Gy. Because the response of the film is dependent on the film's orientation with respect to the scanner, correction matrices for both landscape oriented and portrait oriented scans were made. In addition to the background correction method, a full dose uncertainty analysis of the film dosimetry procedure was performed. This analysis takes into account the fit uncertainty of the calibration curve, the variation in response for different film sheets, the nonuniformity after background correction, and the noise in the scanned films. The film analysis was performed for film pieces of size 16 x 16 cm, all with the same lot number, and all irradiations were done perpendicular onto the films. The results show that the 2-sigma dose uncertainty at 2 Gy is about 5% and 3.5% for landscape and portrait scans, respectively. The uncertainty gradually increases as the dose decreases, but at 1 Gy the 2-sigma dose uncertainty is still as good as 6% and 4% for landscape and portrait scans, respectively. The study shows that film dosimetry using GafChromic EBT film, an Epson Expression 1680 Professional scanner and a dedicated background correction technique gives precise and accurate results. For the purpose of dosimetric verification, the calculated dose distribution
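The correction-matrix idea can be sketched with a simple additive nonuniformity model (an assumption for illustration; the paper builds its matrices from films irradiated to nine dose levels between 0.08 and 2.93 Gy, separately for landscape and portrait orientation, and the real scanner response is dose dependent).

```python
import numpy as np

def build_correction_matrix(flat_scans):
    """Correction matrix from scans of uniformly irradiated films:
    the per-pixel deviation from each scan's mean response, averaged
    over the available dose levels."""
    return np.mean([s - s.mean() for s in flat_scans], axis=0)

# hypothetical 1-D 'scanner' with a linear response nonuniformity
profile = np.linspace(-0.2, 0.2, 50)
flat_scans = [dose + profile for dose in (0.5, 1.0, 2.0)]
correction = build_correction_matrix(flat_scans)

measured = 1.5 + profile              # film of unknown, uniform dose
corrected = measured - correction     # nonuniformity subtracted out
```

Subtracting the matrix removes the position-dependent systematic error, which is exactly why the nonuniformity term shrinks in the paper's uncertainty budget.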

  18. Correction of Phase Distortion by Nonlinear Optical Techniques

    DTIC Science & Technology

    1981-05-01

Correction of Phase Distortion by Nonlinear Optical Techniques. Hughes Research Laboratories, 3011 Malibu Canyon... Authors: R. C. Lind, W. B. Browne, C. R. Giuliano, R. K... Keywords: phase conjugation, adaptive optics, laser compensation, SBS, four-wave mixing.

  19. Correcting the lobule in otoplasty using the fillet technique.

    PubMed

    Sadick, Haneen; Artinger, Verena M; Haubner, Frank; Gassner, Holger G

    2014-01-01

Correction of the protruded lobule in otoplasty continues to represent an important challenge. The lack of skeletal elements within the lobule makes controlled lobule repositioning less predictable. Objective: To present a new surgical technique for lobule correction in otoplasty. Human cadaver studies were performed for detailed anatomical analysis of lobule deformities. In addition, we evaluated a novel algorithmic approach to correction of the lobule in 12 consecutive patients. Interventions/Exposures: Otoplasty with surgical correction of the lobule using the fillet technique. The surgical outcome in the 12 most recent consecutive patients with at least 3 months of follow-up was assessed retrospectively, and the postsurgical results were independently reviewed by a panel of noninvolved experts. The 3 major anatomic components of lobular deformities are the axial angular protrusion, the coronal angular protrusion, and the inherent shape. The fillet technique described in the present report addresses all 3 aspects in an effective way. Clinical data analysis revealed no immediate or long-term complications associated with this new surgical method. The patients' subjective ratings and the panel's objective ratings revealed "good" to "very good" postoperative results. This newly described fillet technique represents a safe and efficient method to correct protruded ear lobules in otoplasty. It allows precise and predictable positioning of the lobule with an excellent safety profile. Level of evidence: 4.

  20. Evaluation of Shifted Excitation Raman Difference Spectroscopy and Comparison to Computational Background Correction Methods Applied to Biochemical Raman Spectra.

    PubMed

    Cordero, Eliana; Korinth, Florian; Stiebing, Clara; Krafft, Christoph; Schie, Iwan W; Popp, Jürgen

    2017-07-27

Raman spectroscopy provides label-free biochemical information from tissue samples without complicated sample preparation. The clinical capability of Raman spectroscopy has been demonstrated in a wide range of in vitro and in vivo applications. However, a challenge for in vivo applications is the simultaneous excitation of auto-fluorescence in the majority of tissues of interest, such as liver, bladder, brain, and others. Raman bands are then superimposed on a fluorescence background, which can be several orders of magnitude larger than the Raman signal. To eliminate the disturbing fluorescence background, several approaches are available. Among instrumental methods, shifted excitation Raman difference spectroscopy (SERDS) has been widely applied and studied. Similarly, computational techniques, for instance extended multiplicative scatter correction (EMSC), have also been employed to remove undesired background contributions. Here, we present a theoretical and experimental evaluation and comparison of fluorescence background removal approaches for Raman spectra based on SERDS and EMSC.
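The SERDS principle can be demonstrated on synthetic spectra (all numbers invented for illustration): the Raman line follows the excitation-wavelength shift while the broad auto-fluorescence does not, so the difference of the two acquisitions cancels the fluorescence and leaves a derivative-like Raman signature.

```python
import numpy as np

channels = np.arange(500.0)

def raman_line(shift):
    """Narrow Raman band that moves with the excitation wavelength."""
    return np.exp(-((channels - 200.0 - shift) ** 2) / 8.0)

# broad auto-fluorescence, much larger than the Raman signal
fluorescence = 50.0 * np.exp(-((channels - 250.0) ** 2) / 20000.0)

spectrum_1 = raman_line(0.0) + fluorescence   # first excitation wavelength
spectrum_2 = raman_line(5.0) + fluorescence   # excitation shifted by 5 channels
serds = spectrum_1 - spectrum_2               # fluorescence cancels
```

In this idealised model the cancellation is exact; in practice the fluorescence also shifts slightly with excitation, which is part of what the paper's evaluation quantifies.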

  1. Continuous Glucose Monitoring in Subjects with Type 1 Diabetes: Improvement in Accuracy by Correcting for Background Current

    PubMed Central

    Youssef, Joseph El; Engle, Julia M.; Massoud, Ryan G.; Ward, W. Kenneth

    2010-01-01

Background: A cause of suboptimal accuracy in amperometric glucose sensors is the presence of a background current (current produced in the absence of glucose) that is not accounted for. We hypothesized that a mathematical correction for the estimated background current of a commercially available sensor would lead to greater accuracy compared to a situation in which we assumed the background current to be zero. We also tested whether increasing the frequency of sensor calibration would improve sensor accuracy. Methods: This report includes analysis of 20 sensor datasets from seven human subjects with type 1 diabetes. Data were divided into a training set for algorithm development and a validation set on which the algorithm was tested. A range of potential background currents was tested. Results: Use of the background current correction of 4 nA led to a substantial improvement in accuracy (improvement of absolute relative difference or absolute difference of 3.5–5.5 units). An increase in calibration frequency led to a modest accuracy improvement, with an optimum at every 4 h. Conclusions: Compared to no correction, a correction for the estimated background current of a commercially available glucose sensor led to greater accuracy and better detection of hypoglycemia and hyperglycemia. The accuracy-optimizing scheme presented here can be implemented in real time. PMID:20879968
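A one-point calibration with the background-current correction can be sketched as follows. Function and parameter names are illustrative; the study's algorithm, calibration schedule, and unit handling are more involved.

```python
def estimate_glucose(current_na, cal_current_na, cal_glucose_mg_dl, bg_na=4.0):
    """Estimate glucose from sensor current using one calibration point,
    after subtracting an assumed background current (4 nA, the value the
    study found optimal)."""
    sensitivity = (cal_current_na - bg_na) / cal_glucose_mg_dl  # nA per mg/dL
    return (current_na - bg_na) / sensitivity
```

With a calibration point of 24 nA at 100 mg/dL, a reading of 14 nA maps to 50 mg/dL; assuming zero background instead maps the same reading to about 58 mg/dL, i.e. low readings are biased upward, consistent with the reported gains in hypoglycemia detection.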

  2. Training for Corrections: Rationale and Techniques.

    ERIC Educational Resources Information Center

    Southern Illinois Univ., Carbondale. Center for the Study of Crime, Delinquency and Corrections.

    A manual focuses on how to teach in inservice training programs for professional personnel in correctional agencies. A chapter on rationale discusses training objectives and curriculum. A second chapter covers learning environment, lesson plans, and learning problems. One, on teaching techniques, covers lecture, group discussion, case study,…

  3. Correcting prominent ears with the island technique.

    PubMed

    DeMoura, L F

    1977-01-01

    A surgical procedure is described which corrects the ansiform ear by repositioning and reconstructing the anthelix and the anterior crus with the formation of the triangular fossa. This corrects the scaphoconchal angle and improves the cephaloauricular angle, overcoming the problem of prominent ears. Correction in early childhood is recommended in order to avoid personality problems that may result from the deformity, particularly in boys. The technique employed yields important advantages: (1) prolonged use of the helmet-type of surgical dressing is unnecessary; (2) scars are less conspicuous; (3) the outcome is attractive and normal; (4) bleeding and inflammatory complications are avoided; and (5) recurrence of the malformation is unlikely.

  4. Correction techniques for depth errors with stereo three-dimensional graphic displays

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Holden, Anthony; Williams, Steven P.

    1992-01-01

Three-dimensional (3-D), 'real-world' pictorial displays that incorporate 'true' depth cues via stereopsis techniques have proved effective for displaying complex information in a natural way to enhance situational awareness and to improve pilot/vehicle performance. In such displays, the display designer must map the depths in the real world to the depths available with the stereo display system. However, empirical data have shown that the human subject does not perceive the information at exactly the depth at which it is mathematically placed. Head movements can also seriously distort the depth information that is embedded in stereo 3-D displays, because the transformations used in mapping the visual scene to the depth-viewing volume (DVV) depend intrinsically on the viewer location. The goal of this research was to provide two correction techniques: the first corrects the original visual-scene-to-DVV mapping based on human perception errors, and the second (which is based on head-positioning sensor input data) corrects for errors induced by head movements. Empirical data are presented to validate both correction techniques. A combination of the two correction techniques effectively eliminates the distortions of depth information embedded in stereo 3-D displays.

  5. A background correction algorithm for Van Allen Probes MagEIS electron flux measurements

    DOE PAGES

    Claudepierre, S. G.; O'Brien, T. P.; Blake, J. B.; ...

    2015-07-14

We describe an automated computer algorithm designed to remove background contamination from the Van Allen Probes Magnetic Electron Ion Spectrometer (MagEIS) electron flux measurements. We provide a detailed description of the algorithm with illustrative examples from on-orbit data. We find two primary sources of background contamination in the MagEIS electron data: inner zone protons and bremsstrahlung X-rays generated by energetic electrons interacting with the spacecraft material. Bremsstrahlung X-rays primarily produce contamination in the lower energy MagEIS electron channels (~30–500 keV) and in regions of geospace where multi-MeV electrons are present. Inner zone protons produce contamination in all MagEIS energy channels at roughly L < 2.5. The background-corrected MagEIS electron data produce a more accurate measurement of the electron radiation belts, as most earlier measurements suffer from unquantifiable and uncorrectable contamination in this harsh region of the near-Earth space environment. These background-corrected data will also be useful for spacecraft engineering purposes, providing ground truth for the near-Earth electron environment and informing the next generation of spacecraft design models (e.g., AE9).

  6. Evaluation of Shifted Excitation Raman Difference Spectroscopy and Comparison to Computational Background Correction Methods Applied to Biochemical Raman Spectra

    PubMed Central

    Cordero, Eliana; Korinth, Florian; Stiebing, Clara; Krafft, Christoph; Schie, Iwan W.; Popp, Jürgen

    2017-01-01

Raman spectroscopy provides label-free biochemical information from tissue samples without complicated sample preparation. The clinical capability of Raman spectroscopy has been demonstrated in a wide range of in vitro and in vivo applications. However, a challenge for in vivo applications is the simultaneous excitation of auto-fluorescence in the majority of tissues of interest, such as liver, bladder, brain, and others. Raman bands are then superimposed on a fluorescence background, which can be several orders of magnitude larger than the Raman signal. To eliminate the disturbing fluorescence background, several approaches are available. Among instrumental methods, shifted excitation Raman difference spectroscopy (SERDS) has been widely applied and studied. Similarly, computational techniques, for instance extended multiplicative scatter correction (EMSC), have also been employed to remove undesired background contributions. Here, we present a theoretical and experimental evaluation and comparison of fluorescence background removal approaches for Raman spectra based on SERDS and EMSC. PMID:28749450

  7. Background correction in forensic photography. II. Photography of blood under conditions of non-uniform illumination or variable substrate color--practical aspects and limitations.

    PubMed

    Wagner, John H; Miskelly, Gordon M

    2003-05-01

    The combination of photographs taken at wavelengths at and bracketing the peak of a narrow absorbance band can lead to enhanced visualization of the substance causing the narrow absorbance band. This concept can be used to detect putative bloodstains by division of a linear photographic image taken at or near 415 nm with an image obtained by averaging linear photographs taken at or near 395 and 435 nm. Nonlinear images can also be background corrected by substituting subtraction for the division. This paper details experimental applications and limitations of this technique, including wavelength selection of the illuminant and at the camera. Characterization of a digital camera to be used in such a study is also detailed. Detection limits for blood using the three wavelength correction method under optimum conditions have been determined to be as low as 1 in 900 dilution, although on strongly patterned substrates blood diluted more than twenty-fold is difficult to detect. Use of only the 435 nm photograph to estimate the background in the 415 nm image led to a twofold improvement in detection limit on unpatterned substrates compared with the three wavelength method with the particular camera and lighting system used, but it gave poorer background correction on patterned substrates.
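    The division described above can be sketched in a few lines; the scene values below are invented for illustration, and the camera linearisation step discussed in the paper is assumed to have been done already:

```python
import numpy as np

def three_wavelength_ratio(img_415, img_395, img_435):
    """Background correction by division: the linear 415 nm image is
    divided by the average of the bracketing 395 nm and 435 nm images.
    Substrate reflectance that varies smoothly with wavelength cancels;
    the narrow Soret absorbance of blood near 415 nm does not."""
    return img_415 / (0.5 * (img_395 + img_435))

# Toy 2x2 scene (hypothetical values): a strongly patterned substrate and
# a bloodstain that absorbs only in the 415 nm image.
substrate = np.array([[0.9, 0.2],
                      [0.5, 0.7]])               # reflectance pattern
blood = np.array([[1.0, 1.0],
                  [0.6, 1.0]])                   # 415 nm transmittance

img_395 = substrate.copy()
img_435 = substrate.copy()
img_415 = substrate * blood

ratio = three_wavelength_ratio(img_415, img_395, img_435)
# ratio == 1 wherever there is no blood and < 1 on the stain,
# independent of the substrate pattern.
```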

  8. Cooperative Learning as a Correction and Grammar Revision Technique: Communicative Exchanges, Self-Correction Rates and Scores

    ERIC Educational Resources Information Center

    Servetti, Sara

    2010-01-01

    This paper focuses on cooperative learning (CL) used as a correction and grammar revision technique and considers the data collected in six Italian parallel classes, three of which (sample classes) corrected mistakes and revised grammar through cooperative learning, while the other three (control classes) did so in a traditional way. All the classes…

  9. Encouraging junior community netball players to learn correct safe landing technique.

    PubMed

    White, Peta E; Ullah, Shahid; Donaldson, Alex; Otago, Leonie; Saunders, Natalie; Romiti, Maria; Finch, Caroline F

    2012-01-01

    Behavioural factors and beliefs are important determinants of the adoption of sports injury interventions. This study aimed to understand behavioural factors associated with junior community netball players' intentions to learn correct landing technique during coach-led training sessions, proposed as a means of reducing their risk of lower limb injury. Cross-sectional survey. 287 female players from 58 junior netball teams in the 2007/2008-summer competition completed a 13-item questionnaire developed from the Theory of Planned Behaviour (TPB). This assessed players' attitudes (four items), subjective norms (four), perceived behavioural control (four) and intentions (one) around the safety behaviour of learning correct landing technique at netball training. All items were rated on a seven-point bipolar scale. Cluster-adjusted logistic regression was used to assess which TPB constructs were most associated with strong intentions. Players had positive intentions and attitudes towards learning safe landing technique and perceived positive social pressure from significant others. They also perceived themselves to have considerable control over engaging (or not) in this behaviour. Players' attitudes (p<0.001) and subjective norms (p<0.001), but not perceived behavioural control (p=0.49), were associated with strong intentions to learn correct landing technique at training. Injury prevention implementation strategies aimed at maximising junior players' participation in correct landing training programs should emphasise the benefits of learning correct landing technique (i.e. change attitudes) and involve significant others and role models whom junior players admire (i.e. capitalise on social norms) in the promotion of such programs. Copyright © 2011 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  10. A square-wave wavelength modulation system for automatic background correction in carbon furnace atomic emission spectrometry

    NASA Astrophysics Data System (ADS)

    Bezur, L.; Marshall, J.; Ottaway, J. M.

    A square-wave wavelength modulation system, based on a rotating quartz chopper with four quadrants of different thicknesses, has been developed and evaluated as a method for automatic background correction in carbon furnace atomic emission spectrometry. Accurate background correction is achieved for the residual black body radiation (Rayleigh scatter) from the tube wall and Mie scatter from particles generated by a sample matrix and formed by condensation of atoms in the optical path. Intensity modulation caused by overlap at the edges of the quartz plates and by the divergence of the optical beam at the position of the modulation chopper has been investigated and is likely to be small.

  11. A New Correction Technique for Strain-Gage Measurements Acquired in Transient-Temperature Environments

    NASA Technical Reports Server (NTRS)

    Richards, W. Lance

    1996-01-01

    Significant strain-gage errors may exist in measurements acquired in transient-temperature environments if conventional correction methods are applied. As heating or cooling rates increase, temperature gradients between the strain-gage sensor and substrate surface increase proportionally. These temperature gradients introduce strain-measurement errors that are currently neglected in both conventional strain-correction theory and practice. Therefore, the conventional correction theory has been modified to account for these errors. A new experimental method has been developed to correct strain-gage measurements acquired in environments experiencing significant temperature transients. The new correction technique has been demonstrated through a series of tests in which strain measurements were acquired for temperature-rise rates ranging from 1 to greater than 100 degrees F/sec. Strain-gage data from these tests have been corrected with both the new and conventional methods and then compared with an analysis. Results show that, for temperature-rise rates greater than 10 degrees F/sec, the strain measurements corrected with the conventional technique produced strain errors that deviated from analysis by as much as 45 percent, whereas results corrected with the new technique were in good agreement with analytical results.

  12. Proof of concept of a simple computer-assisted technique for correcting bone deformities.

    PubMed

    Ma, Burton; Simpson, Amber L; Ellis, Randy E

    2007-01-01

    We propose a computer-assisted technique for correcting bone deformities using the Ilizarov method. Our technique is an improvement over prior art in that it does not require a tracking system, navigation hardware and software, or intraoperative registration. Instead, we rely on a postoperative CT scan to obtain all of the information necessary to plan the correction and compute a correction schedule for the patient. Our laboratory experiments using plastic phantoms produced deformity corrections accurate to within 3.0 degrees of rotation and 1 mm of lengthening.

  13. Optimized distortion correction technique for echo planar imaging.

    PubMed

    Chen, N K; Wyrwicz, A M

    2001-03-01

    A new phase-shifted EPI pulse sequence is described that encodes EPI phase errors due to all off-resonance factors, including B0 field inhomogeneity, eddy current effects, and gradient waveform imperfections. Combined with the previously proposed multichannel modulation postprocessing algorithm (Chen and Wyrwicz, MRM 1999;41:1206-1213), the encoded phase error information can be used to effectively remove geometric distortions in subsequent EPI scans. The proposed EPI distortion correction technique has been shown to be effective in removing distortions due to gradient waveform imperfections and phase gradient-induced eddy current effects. In addition, this new method retains advantages of the earlier method, such as simultaneous correction of different off-resonance factors without use of a complicated phase unwrapping procedure. The effectiveness of this technique is illustrated with EPI studies on phantoms and animal subjects. Implementation to different versions of EPI sequences is also described. Magn Reson Med 45:525-528, 2001. Copyright 2001 Wiley-Liss, Inc.

  14. Complete NLO corrections to W+W+ scattering and its irreducible background at the LHC

    NASA Astrophysics Data System (ADS)

    Biedermann, Benedikt; Denner, Ansgar; Pellen, Mathieu

    2017-10-01

    The process pp → μ+νμe+νejj receives several contributions of different orders in the strong and electroweak coupling constants. Using appropriate event selections, this process is dominated by vector-boson scattering (VBS) and has recently been measured at the LHC. It is thus of prime importance to estimate precisely each contribution. In this article we compute for the first time the full NLO QCD and electroweak corrections to VBS and its irreducible background processes with realistic experimental cuts. We do not rely on approximations but use complete amplitudes involving two different orders at tree level and three different orders at one-loop level. Since we take into account all interferences, at NLO level the corrections to the VBS process and to the QCD-induced irreducible background process contribute at the same orders. Hence the two processes cannot be unambiguously distinguished, and all contributions to the μ+νμe+νejj final state should preferably be measured together.

  15. PET attenuation correction for rigid MR Tx/Rx coils from 176Lu background activity

    NASA Astrophysics Data System (ADS)

    Lerche, Christoph W.; Kaltsas, Theodoris; Caldeira, Liliana; Scheins, Jürgen; Rota Kops, Elena; Tellmann, Lutz; Pietrzyk, Uwe; Herzog, Hans; Shah, N. Jon

    2018-02-01

    One challenge for PET-MR hybrid imaging is the correction for attenuation of the 511 keV annihilation radiation by the required RF transmit and/or RF receive coils. Although there are strategies for building PET-transparent Tx/Rx coils, such optimised coils still cause significant attenuation of the annihilation radiation, leading to artefacts and biases in the reconstructed activity concentrations. We present a straightforward method to measure the attenuation of Tx/Rx coils in simultaneous MR-PET imaging based on the natural 176Lu background contained in the scintillator of the PET detector, without the requirement of an external CT scanner or a PET scanner with transmission source. The method was evaluated on a prototype 3T MR-BrainPET produced by Siemens Healthcare GmbH, both with phantom studies and with true emission images from patient/volunteer examinations. Furthermore, the count rate stability of the PET scanner and the x-ray properties of the Tx/Rx head coil were investigated. Even without energy extrapolation from the two dominant γ energies of 176Lu to 511 keV, the presented method for attenuation correction, based on the measurement of 176Lu background attenuation, shows slightly better performance than the coil attenuation correction currently used, which is based on an external transmission scan with rotating 68Ge sources acquired on a Siemens ECAT HR+ PET scanner. However, the main advantage of the presented approach is its straightforwardness and ready availability without the need for additional accessories.
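    The transmission idea behind such coil measurements can be reduced to a ratio of counts. The sketch below is our own simplification (the actual method works on reconstructed 176Lu background images, and the bin values are invented): two background acquisitions, with and without the coil in the field of view, give a per-bin transmission whose reciprocal is the attenuation correction factor applied to emission data.

```python
import numpy as np

def coil_attenuation_factors(counts_blank, counts_with_coil):
    """Per-bin attenuation correction factors from two 176Lu background
    acquisitions, one with and one without the coil in the field of
    view.  Transmission = I_coil / I_blank, and the ACF is its
    reciprocal.  No energy extrapolation from the 176Lu gamma lines to
    511 keV is attempted in this sketch."""
    counts_blank = np.asarray(counts_blank, dtype=float)
    counts_with_coil = np.asarray(counts_with_coil, dtype=float)
    acf = np.ones_like(counts_blank)
    valid = counts_with_coil > 0        # leave empty bins uncorrected
    acf[valid] = counts_blank[valid] / counts_with_coil[valid]
    return acf

# Hypothetical sinogram bins: the coil attenuates the middle bins by 10%.
blank = np.array([1000.0, 1000.0, 1000.0, 1000.0])
coil = np.array([1000.0, 900.0, 900.0, 1000.0])
acf = coil_attenuation_factors(blank, coil)   # ≈ [1.0, 1.11, 1.11, 1.0]
```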

  16. Determination of serum aluminum by electrothermal atomic absorption spectrometry: A comparison between Zeeman and continuum background correction systems

    NASA Astrophysics Data System (ADS)

    Kruger, Pamela C.; Parsons, Patrick J.

    2007-03-01

    Excessive exposure to aluminum (Al) can produce serious health consequences in people with impaired renal function, especially those undergoing hemodialysis. Al can accumulate in the brain and in bone, causing dialysis-related encephalopathy and renal osteodystrophy. Thus, dialysis patients are routinely monitored for Al overload, through measurement of their serum Al. Electrothermal atomic absorption spectrometry (ETAAS) is widely used for serum Al determination. Here, we assess the analytical performances of three ETAAS instruments, equipped with different background correction systems and heating arrangements, for the determination of serum Al. Specifically, we compare (1) a Perkin Elmer (PE) Model 3110 AAS, equipped with a longitudinally (end) heated graphite atomizer (HGA) and continuum-source (deuterium) background correction, with (2) a PE Model 4100ZL AAS equipped with a transversely heated graphite atomizer (THGA) and longitudinal Zeeman background correction, and (3) a PE Model Z5100 AAS equipped with a HGA and transverse Zeeman background correction. We were able to transfer the method for serum Al previously established for the Z5100 and 4100ZL instruments to the 3110, with only minor modifications. As with the Zeeman instruments, matrix-matched calibration was not required for the 3110 and, thus, aqueous calibration standards were used. However, the 309.3-nm line was chosen for analysis on the 3110 due to failure of the continuum background correction system at the 396.2-nm line. A small, seemingly insignificant overcorrection error was observed in the background channel on the 3110 instrument at the 309.3-nm line. On the 4100ZL, signal oscillation was observed in the atomization profile. The sensitivity, or characteristic mass (m0), for Al at the 309.3-nm line on the 3110 AAS was found to be 12.1 ± 0.6 pg, compared to 16.1 ± 0.7 pg for the Z5100, and 23.3 ± 1.3 pg for the 4100ZL at the 396.2-nm line. However, the instrumental detection limits (3

  17. A comprehensive numerical analysis of background phase correction with V-SHARP.

    PubMed

    Özbay, Pinar Senay; Deistung, Andreas; Feng, Xiang; Nanz, Daniel; Reichenbach, Jürgen Rainer; Schweser, Ferdinand

    2017-04-01

    Sophisticated harmonic artifact reduction for phase data (SHARP) is a method to remove background field contributions in MRI phase images, which is an essential processing step for quantitative susceptibility mapping (QSM). To perform SHARP, a spherical kernel radius and a regularization parameter need to be defined. In this study, we carried out an extensive analysis of the effect of these two parameters on the corrected phase images and on the reconstructed susceptibility maps. As a result of the dependence of the parameters on acquisition and processing characteristics, we propose a new SHARP scheme with generalized parameters. The new SHARP scheme uses a high-pass filtering approach to define the regularization parameter. We employed the variable-kernel SHARP (V-SHARP) approach, using different maximum radii (Rm) between 1 and 15 mm and varying regularization parameters (f) in a numerical brain model. The local root-mean-square error (RMSE) between the ground-truth, background-corrected field map and the results from SHARP decreased towards the center of the brain. RMSE of susceptibility maps calculated with a spatial domain algorithm was smallest for Rm between 6 and 10 mm and f between 0 and 0.01 mm⁻¹, and for maps calculated with a Fourier domain algorithm for Rm between 10 and 15 mm and f between 0 and 0.0091 mm⁻¹. We demonstrated and confirmed the new parameter scheme in vivo. The novel regularization scheme allows the use of the same regularization parameter irrespective of other imaging parameters, such as image resolution. Copyright © 2016 John Wiley & Sons, Ltd.
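    The SHARP filtering that V-SHARP generalises can be sketched as a spherical-mean subtraction followed by a truncated deconvolution. Below is a simplified single-radius version of that idea (our own sketch: no brain-mask erosion, the radius is in voxels rather than millimetres, and `threshold` stands in for the regularization parameter f discussed above):

```python
import numpy as np

def smv_kernel(shape, radius):
    """Normalized spherical-mean-value kernel on a periodic grid."""
    coords = np.meshgrid(*[np.fft.fftfreq(n, d=1.0 / n) for n in shape],
                         indexing="ij")
    inside = sum(c ** 2 for c in coords) <= radius ** 2
    k = inside.astype(float)
    return k / k.sum()

def sharp(phase, radius=6, threshold=0.05):
    """Single-radius SHARP sketch: subtract the spherical mean (which
    maps harmonic background fields to ~zero), then deconvolve with a
    truncated inverse of (delta - kernel)."""
    K = np.fft.fftn(smv_kernel(phase.shape, radius))
    H = 1.0 - K                                  # (delta - rho) in k-space
    filtered = np.fft.fftn(phase) * H            # harmonic parts -> ~0
    safe = np.where(H == 0, 1.0, H)              # avoid divide-by-zero
    H_inv = np.where(np.abs(H) > threshold, 1.0 / safe, 0.0)
    return np.fft.ifftn(filtered * H_inv).real

# A spatially constant field is harmonic, so it is removed entirely:
flat = sharp(np.ones((16, 16, 16)))
```

    The truncation threshold trades residual background against loss of genuine low-spatial-frequency local field, which is exactly the parameter dependence the paper quantifies.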

  18. Percutaneous triplanar femoral osteotomy correction for developmental coxa vara: a new technique.

    PubMed

    Sabharwal, Sanjeev; Mittal, Rahul; Cox, Garrick

    2005-01-01

    Developmental coxa vara (DCV) is a well-known pediatric hip disorder that is associated with triplanar deformity of the proximal femur. Several techniques of proximal femur osteotomies have been cited in the literature, with variable outcomes. Recently, the authors have used a percutaneous technique with application of a low-profile Ilizarov external fixator for acute opening wedge correction of the femoral deformity associated with DCV. Five children (six affected hips) underwent the above procedure at an average age of 8 + 4 years. The average improvement in Hilgenreiner's epiphyseal angle was from 74 degrees before surgery to 33 degrees after surgery, the neck-shaft angle improved from 86 degrees to 137 degrees, and the articulo-trochanteric distance improved from -6 mm to +11 mm. Latest follow-up at a mean of 2.1 years after surgery showed satisfactory healing with no significant loss of correction in any case. This percutaneous technique offers several advantages over currently available methods for surgical correction of DCV.

  19. Analysis techniques for background rejection at the Majorana Demonstrator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cuestra, Clara; Rielage, Keith Robert; Elliott, Steven Ray

    2015-06-11

    The MAJORANA Collaboration is constructing the MAJORANA DEMONSTRATOR, an ultra-low background, 40-kg modular HPGe detector array to search for neutrinoless double beta decay in 76Ge. In view of the next generation of tonne-scale Ge-based 0νββ-decay searches that will probe the neutrino mass scale in the inverted-hierarchy region, a major goal of the MAJORANA DEMONSTRATOR is to demonstrate a path forward to achieving a background rate at or below 1 count/tonne/year in the 4 keV region of interest around the Q-value at 2039 keV. The background rejection techniques to be applied to the data include cuts based on data reduction, pulse shape analysis, event coincidences, and time correlations. The Point Contact design of the DEMONSTRATOR's germanium detectors allows for significant reduction of gamma background.

  20. Background Characterization Techniques For Pattern Recognition Applications

    NASA Astrophysics Data System (ADS)

    Noah, Meg A.; Noah, Paul V.; Schroeder, John W.; Kessler, Bernard V.; Chernick, Julian A.

    1989-08-01

    The Department of Defense has a requirement to investigate technologies for the detection of air and ground vehicles in a clutter environment. The use of autonomous systems using infrared, visible, and millimeter wave detectors has the potential to meet DOD's needs. In general, however, the hardware technology (large detector arrays with high sensitivity) has outpaced the development of processing techniques and software. In a complex background scene the "problem" is as much one of clutter rejection as it is target detection. The work described in this paper has investigated a new, and innovative, methodology for background clutter characterization, target detection and target identification. The approach uses multivariate statistical analysis to evaluate a set of image metrics applied to infrared cloud imagery and terrain clutter scenes. The techniques are applied to two distinct problems: the characterization of atmospheric water vapor cloud scenes for the Navy's Infrared Search and Track (IRST) applications to support the Infrared Modeling Measurement and Analysis Program (IRAMMP); and the detection of ground vehicles for the Army's Autonomous Homing Munitions (AHM) problems. This work was sponsored under two separate Small Business Innovative Research (SBIR) programs by the Naval Surface Warfare Center (NSWC), White Oak MD, and the Army Material Systems Analysis Activity at Aberdeen Proving Ground MD. The software described in this paper will be available from the respective contract technical representatives.

  1. A technique for correction of equinus contracture using a wire fixator and elastic tension.

    PubMed

    Melvin, J Stuart; Dahners, Laurence E

    2006-02-01

    Equinus contracture often is a complication of trauma, burns, or neurologic deficit. Many patients with contractures secondary to trauma or burns have poor soft tissue, which makes invasive correction a less appealing option. The Ilizarov external fixator has been used as a less invasive attempt to correct equinus contracture. We describe our "dynamic" technique and present a clinical patient series using a variation of the unconstrained Ilizarov technique, which uses elastic bands rather than threaded rods to supply the corrective force.

  2. Two flaps and Z-plasty technique for correction of longitudinal ear lobe cleft.

    PubMed

    Lee, Paik-Kwon; Ju, Hong-Sil; Rhie, Jong-Won; Ahn, Sang-Tae

    2005-06-01

    Various surgical techniques have been reported for the correction of congenital ear lobe deformities. Our method, the two-flaps-and-Z-plasty technique, for correcting the longitudinal ear lobe cleft is presented. This technique is simple and easy to perform. It enables us to keep the bulkiness of the ear lobe with minimal tissue sacrifice, and to make a shorter operation scar. The small Z-plasty at the free ear lobe margin avoids notching deformity and makes the shape of the ear lobe smoother. The result is satisfactory in terms of matching the contralateral normal ear lobe in shape and symmetry.

  3. A technique for correcting ERTS data for solar and atmospheric effects

    NASA Technical Reports Server (NTRS)

    Rogers, R. H.; Peacock, K.

    1973-01-01

    A technique is described by which an ERTS investigator can obtain absolute target reflectances by correcting spacecraft radiance measurements for variable target irradiance, atmospheric attenuation, and atmospheric backscatter. A simple measuring instrument and the necessary atmospheric measurements are discussed, and examples demonstrate the nature and magnitude of the atmospheric corrections.
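    In its simplest single-scattering form, the kind of correction described reduces to one formula relating at-sensor radiance to target reflectance. The variable names and numbers below are ours, not the paper's, and a Lambertian target is assumed:

```python
import math

def target_reflectance(at_sensor_radiance, path_radiance,
                       target_irradiance, transmittance):
    """Absolute reflectance of a Lambertian target from spacecraft
    radiance: subtract the atmospheric backscatter (path radiance),
    undo the attenuation along the view path, then normalise by the
    measured target irradiance."""
    surface_radiance = (at_sensor_radiance - path_radiance) / transmittance
    return math.pi * surface_radiance / target_irradiance

# Hypothetical numbers: 50 W m^-2 sr^-1 at the sensor, of which 10 is
# atmospheric backscatter, 80% transmittance, 1000 W m^-2 on the target.
rho = target_reflectance(50.0, 10.0, 1000.0, 0.8)   # ≈ 0.157
```

    The path radiance, transmittance and irradiance are exactly the quantities the paper proposes to obtain from a simple ground-based measuring instrument.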

  4. New minimally invasive technique for correction of pectus carinatum.

    PubMed

    Pérez, David; Cano, Jose Ramón; Quevedo, Santiago; López, Luis

    2011-02-01

    We describe a new video-assisted operative technique for correction of pectus carinatum (PC) using a modified Nuss procedure. A new design of the steel bar was developed, so that it could be introduced and placed in a suitable position through very small skin incisions. Substantial modifications were introduced in the bar length and shape aimed at facilitating insertion and subsequent removal when required. All the surgical manoeuvres took place under direct vision using a 30° thoracoscope. Single unilateral fixation of the bar in a subpectoral pocket provided satisfactory stabilisation without the need for lateral stabilisers. Adequate correction of the deformity was achieved with minor postoperative scars. Our results support the view that minimally invasive surgical repair should be preferred over open surgery for correction of pectus carinatum in young adults and children. Copyright © 2010 European Association for Cardio-Thoracic Surgery. Published by Elsevier B.V. All rights reserved.

  5. Improvements in Technique of NMR Imaging and NMR Diffusion Measurements in the Presence of Background Gradients.

    NASA Astrophysics Data System (ADS)

    Lian, Jianyu

    In this work, a modification of the cosine current distribution rf coil, PCOS, has been introduced and tested. The coil produces a very homogeneous rf magnetic field, and it is inexpensive to build and easy to tune for multiple resonance frequencies. The geometrical parameters of the coil are optimized to produce the most homogeneous rf field over a large volume. To avoid rf field distortion when the coil length is comparable to a quarter wavelength, a parallel PCOS coil is proposed and discussed. For testing rf coils and correcting B1 in NMR experiments, a simple, rugged and accurate NMR rf field mapping technique has been developed. The method has been tested and used in 1D, 2D, 3D and in vivo rf mapping experiments. The method has been proven to be very useful in the design of rf coils. To preserve the linear relation between rf output applied on an rf coil and modulating input for an rf modulating-amplifying system of an NMR imaging spectrometer, a quadrature feedback loop is employed in an rf modulator with two orthogonal rf channels to correct the amplitude and phase non-linearities caused by the rf components in the rf system. The modulator is very linear over a large range and it can generate an arbitrary rf shape. A diffusion imaging sequence has been developed for measuring and imaging diffusion in the presence of background gradients. Cross terms between the diffusion sensitizing gradients and background gradients or imaging gradients can complicate diffusion measurement and make the interpretation of NMR diffusion data ambiguous, but these have been eliminated in this method. Further, the background gradients have been measured and imaged. A dipole random distribution model has been established to study background magnetic fields ΔB and background magnetic gradients G0 produced by small particles in a sample when it is in a B0 field. From this model, the minimum distance that a spin can approach a particle can be determined by measuring

  6. Summing coincidence correction for γ-ray measurements using the HPGe detector with a low background shielding system

    NASA Astrophysics Data System (ADS)

    He, L.-C.; Diao, L.-J.; Sun, B.-H.; Zhu, L.-H.; Zhao, J.-W.; Wang, M.; Wang, K.

    2018-02-01

    A Monte Carlo method based on the GEANT4 toolkit has been developed to correct the full-energy peak (FEP) efficiencies of a high purity germanium (HPGe) detector equipped with a low background shielding system, and the corrections have been evaluated numerically using summing peaks. It is found that the FEP efficiencies of 60Co, 133Ba and 152Eu can be improved up to 18% by taking the calculated true summing coincidence factors (TSCFs) into account. Counts of summing coincidence γ peaks in the spectrum of 152Eu can be well reproduced using the corrected efficiency curve within an accuracy of 3%.

  7. Modulation/demodulation techniques for satellite communications. Part 1: Background

    NASA Technical Reports Server (NTRS)

    Omura, J. K.; Simon, M. K.

    1981-01-01

    Basic characteristics of digital data transmission systems described include the physical communication links, the notion of bandwidth, FCC regulations, and performance measurements such as bit rates, bit error probabilities, throughputs, and delays. The error probability performance and spectral characteristics of various modulation/demodulation techniques commonly used or proposed for use in radio and satellite communication links are summarized. Forward error correction with block or convolutional codes is also discussed along with the important coding parameter, channel cutoff rate.

  8. A New CT Reconstruction Technique Using Adaptive Deformation Recovery and Intensity Correction (ADRIC)

    PubMed Central

    Zhang, You; Ma, Jianhua; Iyengar, Puneeth; Zhong, Yuncheng; Wang, Jing

    2017-01-01

    Purpose Sequential same-patient CT images may involve deformation-induced and non-deformation-induced voxel intensity changes. An adaptive deformation recovery and intensity correction (ADRIC) technique was developed to improve the CT reconstruction accuracy, and to separate deformation from non-deformation-induced voxel intensity changes between sequential CT images. Materials and Methods ADRIC views the new CT volume as a deformation of a prior high-quality CT volume, but with additional non-deformation-induced voxel intensity changes. ADRIC first applies the 2D-3D deformation technique to recover the deformation field between the prior CT volume and the new, to-be-reconstructed CT volume. Using the deformation-recovered new CT volume, ADRIC further corrects the non-deformation-induced voxel intensity changes with an updated algebraic reconstruction technique (‘ART-dTV’). The resulting intensity-corrected new CT volume is subsequently fed back into the 2D-3D deformation process to further correct the residual deformation errors, which forms an iterative loop. By ADRIC, the deformation field and the non-deformation voxel intensity corrections are optimized separately and alternately to reconstruct the final CT. CT myocardial perfusion imaging scenarios were employed to evaluate the efficacy of ADRIC, using both simulated data of the extended-cardiac-torso (XCAT) digital phantom and experimentally acquired porcine data. The reconstruction accuracy of the ADRIC technique was compared to the technique using ART-dTV alone, and to the technique using 2D-3D deformation alone. The relative error metric and the universal quality index metric are calculated between the images for quantitative analysis. The relative error is defined as the square root of the sum of squared voxel intensity differences between the reconstructed volume and the ‘ground-truth’ volume, normalized by the square root of the sum of squared ‘ground-truth’ voxel intensities. In addition to

  9. Graviton propagator from background-independent quantum gravity.

    PubMed

    Rovelli, Carlo

    2006-10-13

    We study the graviton propagator in Euclidean loop quantum gravity. We use spin foam, boundary-amplitude, and group-field-theory techniques. We compute a component of the propagator to first order, under some approximations, obtaining the correct large-distance behavior. This indicates a way for deriving conventional spacetime quantities from a background-independent theory.

  10. On the Limitations of Variational Bias Correction

    NASA Technical Reports Server (NTRS)

    Moradi, Isaac; Mccarty, Will; Gelaro, Ronald

    2018-01-01

    Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between the background error, forward operator error, and observation error, so all these errors are summed together and counted as observation error. We identify some sources of observation errors (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.
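    Stripped of the variational machinery, the bias estimate is a regression of the O-B time series onto a set of bias predictors. The sketch below (predictors and coefficients are invented) also makes the limitation concrete: whatever part of the background or forward-operator error projects onto the predictors is indistinguishable from observation bias.

```python
import numpy as np

def estimate_bias(predictors, omb):
    """Least-squares fit of the observation-minus-background (O-B) time
    series to a set of bias predictors -- a static stand-in for what
    variational bias correction does inside the assimilation.  Anything
    in the background or forward-operator error that projects onto the
    predictors is absorbed into this 'observation bias' estimate."""
    beta, *_ = np.linalg.lstsq(predictors, omb, rcond=None)
    return beta

rng = np.random.default_rng(0)
n = 500
# Hypothetical predictors: a constant offset and a scan-angle-like term.
predictors = np.column_stack([np.ones(n), rng.uniform(-1.0, 1.0, n)])
true_beta = np.array([0.8, -0.3])                       # bias in kelvin
omb = predictors @ true_beta + 0.1 * rng.standard_normal(n)
beta_hat = estimate_bias(predictors, omb)               # ≈ [0.8, -0.3]
```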

  11. Dual ring multilayer ionization chamber and theory-based correction technique for scanning proton therapy.

    PubMed

    Takayanagi, Taisuke; Nihongi, Hideaki; Nishiuchi, Hideaki; Tadokoro, Masahiro; Ito, Yuki; Nakashima, Chihiro; Fujitaka, Shinichiro; Umezawa, Masumi; Matsuda, Koji; Sakae, Takeji; Terunuma, Toshiyuki

    2016-07-01

    To develop a multilayer ionization chamber (MLIC) and a correction technique that suppresses differences between the MLIC and water phantom measurements in order to achieve fast and accurate depth dose measurements in pencil beam scanning proton therapy. The authors distinguish between a calibration procedure and an additional correction: (1) the calibration for variations in the air gap thickness and the electrometer gains is performed without involving measurements in water; (2) the correction suppresses the difference between depth dose profiles in water and in the MLIC materials, caused by differing nuclear interaction cross sections, using a semiempirical model tuned with measurements in water. In the correction technique, raw MLIC data are obtained for each energy layer and integrated after multiplying them by the correction factor, because the correction factor depends on incident energy. The MLIC described here has been designed especially for pencil beam scanning proton therapy. This MLIC is called a dual ring multilayer ionization chamber (DRMLIC). The shape of the electrodes allows the DRMLIC to measure both the percentage depth dose (PDD) and integrated depth dose (IDD) because ionization electrons are collected from inner and outer air gaps independently. IDDs for which the beam energies were 71.6, 120.6, 159, 180.6, and 221.4 MeV were measured and compared with water phantom results. Furthermore, the measured PDDs along the central axis of the proton field with a nominal field size of 10 × 10 cm(2) were compared. The spread out Bragg peak was 20 cm for fields with a range of 30.6 cm and 3 cm for fields with a range of 6.9 cm. The IDDs measured with the DRMLIC using the correction technique were consistent with those of the water phantom; except for the beam energy of 71.6 MeV, all of the points satisfied the 1% dose/1 mm distance to agreement criterion of the gamma index. The 71.6 MeV depth dose profile showed slight differences in the shallow

  12. A Technique for Real-Time Ionospheric Ranging Error Correction Based On Radar Dual-Frequency Detection

    NASA Astrophysics Data System (ADS)

    Lyu, Jiang-Tao; Zhou, Chen

    2017-12-01

    Ionospheric refraction is one of the principal error sources limiting the accuracy of radar systems for space target detection. High-accuracy measurement of the ionospheric electron density along the propagation path of the radar wave is the most important procedure for ionospheric refraction correction. Traditionally, ionospheric models and ionospheric detection instruments, such as ionosondes or GPS receivers, are employed to obtain the electron density. However, neither method is capable of satisfying the correction-accuracy requirements of advanced space target radar systems. In this study, we propose a novel technique for ionospheric refraction correction based on radar dual-frequency detection. Radar target range measurements at two adjacent frequencies are utilized to calculate the integral of the electron density exactly along the propagation path of the radar wave, which yields an accurate ionospheric range correction. The implementation of radar dual-frequency detection is validated with a P-band radar located in midlatitude China. The experimental results show that this novel technique is more accurate than traditional ionospheric model correction. The technique proposed in this study is very promising for high-accuracy radar detection and tracking of objects in geospace.
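    The first-order relation underlying the dual-frequency approach can be sketched in code: the ionospheric group delay is approximately 40.3·TEC/f², so range measurements at two adjacent frequencies determine both the ionosphere-free range and the slant total electron content (TEC). This is a generic sketch of the standard first-order correction; the function name, example frequencies, and values are illustrative, not taken from the paper.

```python
def dual_frequency_correct(r1, r2, f1, f2):
    """Solve r_i = r_true + K * TEC / f_i**2 for r_true and TEC.

    r1, r2: measured ranges (m) at frequencies f1, f2 (Hz).
    K = 40.3 m^3 s^-2 is the first-order ionospheric constant.
    """
    K = 40.3
    tec = (r1 - r2) / (K * (1.0 / f1**2 - 1.0 / f2**2))
    r_true = (f1**2 * r1 - f2**2 * r2) / (f1**2 - f2**2)
    return r_true, tec
```

    For a slant TEC of order 10^17 el/m² at P-band frequencies near 430 MHz, the uncorrected range error is tens of metres; the dual-frequency combination removes it to first order.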

  13. Enhanced identification and biological validation of differential gene expression via Illumina whole-genome expression arrays through the use of the model-based background correction methodology

    PubMed Central

    Ding, Liang-Hao; Xie, Yang; Park, Seongmi; Xiao, Guanghua; Story, Michael D.

    2008-01-01

    Despite the tremendous growth of microarray usage in scientific studies, there is a lack of standards for background correction methodologies, especially in single-color microarray platforms. Traditional background subtraction methods often generate negative signals and thus cause large amounts of data loss. Hence, some researchers prefer to avoid background correction, which typically results in the underestimation of differential expression. Here, by utilizing nonspecific negative control features integrated into Illumina whole-genome expression arrays, we have developed a method of model-based background correction for BeadArrays (MBCB). We compared MBCB with a method adapted from the Affymetrix robust multi-array analysis algorithm and with no background subtraction, using a mouse acute myeloid leukemia (AML) dataset. We demonstrated that differential expression ratios obtained by using MBCB had the best correlation with quantitative RT-PCR. MBCB also achieved better sensitivity in detecting differentially expressed genes with biological significance. For example, we demonstrated that the differential regulation of Tnfr2, Ikk, and NF-kappaB in the death receptor pathway in the AML samples could only be detected by using data after MBCB implementation. We conclude that MBCB is a robust background correction method that will lead to more precise determination of gene expression and better biological interpretation of Illumina BeadArray data. PMID:18450815

  14. Dual ring multilayer ionization chamber and theory-based correction technique for scanning proton therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takayanagi, Taisuke, E-mail: taisuke.takayanagi.wd

    2016-07-15

    Purpose: To develop a multilayer ionization chamber (MLIC) and a correction technique that suppresses differences between the MLIC and water phantom measurements in order to achieve fast and accurate depth dose measurements in pencil beam scanning proton therapy. Methods: The authors distinguish between a calibration procedure and an additional correction: (1) the calibration for variations in the air gap thickness and the electrometer gains is addressed without involving measurements in water; (2) the correction is addressed to suppress the difference between depth dose profiles in water and in the MLIC materials due to the nuclear interaction cross sections, by a semiempirical model tuned using measurements in water. In the correction technique, raw MLIC data are obtained for each energy layer and multiplied by the correction factor before integration, because the correction factor depends on incident energy. The MLIC described here has been designed especially for pencil beam scanning proton therapy and is called a dual ring multilayer ionization chamber (DRMLIC). The shape of the electrodes allows the DRMLIC to measure both the percentage depth dose (PDD) and integrated depth dose (IDD), because ionization electrons are collected from the inner and outer air gaps independently. Results: IDDs for beam energies of 71.6, 120.6, 159, 180.6, and 221.4 MeV were measured and compared with water phantom results. Furthermore, the measured PDDs along the central axis of a proton field with a nominal field size of 10 × 10 cm² were compared. The spread-out Bragg peak was 20 cm for fields with a range of 30.6 cm and 3 cm for fields with a range of 6.9 cm. The IDDs measured with the DRMLIC using the correction technique were consistent with those of the water phantom; except for the beam energy of 71.6 MeV, all of the points satisfied the 1% dose/1 mm distance-to-agreement criterion of the gamma index. The 71.6 MeV depth dose profile
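    The order of operations in the correction is the key point: because the correction factor is energy dependent, each energy layer's raw profile is scaled before the layers are summed. A minimal sketch of that accumulation step (the data layout, names, and example factor are illustrative; the paper's semiempirical correction model is not reproduced):

```python
def corrected_depth_dose(raw_layers, correction_factor):
    """raw_layers: {energy_MeV: [dose per depth bin]} of raw MLIC data.
    correction_factor: callable energy -> scalar k(E).
    Returns sum over energy layers of k(E) * d_E(z), i.e. the
    correction is applied per layer before integration."""
    n_bins = len(next(iter(raw_layers.values())))
    total = [0.0] * n_bins
    for energy, profile in raw_layers.items():
        k = correction_factor(energy)
        for i, dose in enumerate(profile):
            total[i] += k * dose
    return total
```

    Summing first and correcting afterwards would apply a single factor to a mixed-energy profile, which is exactly what the per-layer ordering avoids.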

  15. A survey of provably correct fault-tolerant clock synchronization techniques

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    1988-01-01

    Six provably correct fault-tolerant clock synchronization algorithms are examined. These algorithms are all presented in the same notation to permit easier comprehension and comparison. The advantages and disadvantages of the different techniques are examined and issues related to the implementation of these algorithms are discussed. The paper argues for the use of such algorithms in life-critical applications.

  16. Our Experience with Brow Ptosis Correction: A Comparison of 4 Techniques

    PubMed Central

    Pascali, Michele; Avantaggiato, Anna; Cervelli, Valerio

    2015-01-01

    Background: Brow elevation is one of the goals of surgical rejuvenation procedures. In this article, the authors review their experience with brow lift and compare 4 techniques: direct brow lift, brow lift with endotine ribbon device, brow lift with temporoparietalis imbrication, and brow lift with Mersilene mesh to provide long-lasting results. Methods: This is a retrospective study of 80 patients (20 in each group), aged between 48 and 75 years, undergoing brow lift surgery between January 2011 and January 2013. In all cases, the brow lift was associated with an upper blepharoplasty. The amount of brow elevation was assessed by comparison of the preoperative and postoperative vertical distances between the superior eyebrow hairline and the midpupil and the lateral and medial canthal angles. The average follow-up period was 18 months. Results: No incidences of infection, alopecia, or excessive scarring were noticed. The main complication associated with direct brow lift was visibility of the scar, in 2 patients. One patient treated with brow lift with suture had recurrent eyebrow ptosis. Transient frontal paresthesia was noticed in 1 case treated with the endotine ribbon device and in 1 case treated with Mersilene mesh, but sensation returned by 6-12 weeks. Conclusions: In our experience, no one technique is better than the others; the best procedure depends on eyebrow contour, sex and age of the patient, magnitude of desired correction, presence or absence of the patient's hair, and the patient's expectations. PMID:25878922

  17. A Background Noise Reduction Technique Using Adaptive Noise Cancellation for Microphone Arrays

    NASA Technical Reports Server (NTRS)

    Spalt, Taylor B.; Fuller, Christopher R.; Brooks, Thomas F.; Humphreys, William M., Jr.

    2011-01-01

    Background noise in wind tunnel environments poses a challenge to acoustic measurements due to the possible low or negative Signal-to-Noise Ratios (SNRs) present in the testing environment. This paper overviews the application of time-domain Adaptive Noise Cancellation (ANC) to microphone array signals, with the intended application of background noise reduction in wind tunnels. An experiment was conducted to simulate background noise from a wind tunnel circuit measured by an out-of-flow microphone array in the tunnel test section. A reference microphone was used to acquire a background noise signal, which interfered with the desired primary noise source signal at the array. The technique's efficacy was investigated using frequency spectra from the array microphones, array beamforming of the point source region, and subsequent deconvolution using the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) algorithm. Comparisons were made with the conventional SNR-improvement techniques of spectral subtraction and Cross-Spectral Matrix subtraction. The method was seen to recover the primary signal level at SNRs as low as -29 dB and to outperform the conventional methods. A second processing approach, using the center array microphone as the noise reference, was investigated for more general applicability of the ANC technique. It outperformed the conventional methods at the -29 dB SNR but yielded less accurate results when coherence over the array dropped. This approach could possibly improve conventional testing methodology but must be investigated further under more realistic testing conditions.
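    The cancellation step can be illustrated with a standard time-domain LMS adaptive filter: the reference (noise-only) channel is filtered to estimate the noise reaching the primary microphone, and that estimate is subtracted. This is a generic LMS sketch under assumed tap count and step size, not the paper's exact processing chain.

```python
import numpy as np

def lms_anc(primary, reference, n_taps=16, mu=0.002):
    """primary: desired signal + coupled noise; reference: correlated
    noise-only channel. Returns the error signal, which converges
    toward the desired signal as the filter adapts."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps + 1:n + 1][::-1]  # tap-delay line
        y = w @ x                                   # noise estimate
        e = primary[n] - y                          # cancelled sample
        w += 2.0 * mu * e * x                       # LMS weight update
        out[n] = e
    return out
```

    Because the desired signal is uncorrelated with the reference, the filter converges to the noise coupling path and the error output retains the signal, even when the input SNR is strongly negative.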

  18. Scene-based nonuniformity correction technique for infrared focal-plane arrays.

    PubMed

    Liu, Yong-Jin; Zhu, Hong; Zhao, Yi-Gong

    2009-04-20

    A scene-based nonuniformity correction algorithm is presented to compensate for the gain and bias nonuniformity in infrared focal-plane array sensors, which can be separated into three parts. First, an interframe-prediction method is used to estimate the true scene, since nonuniformity correction is a typical blind-estimation problem and both scene values and detector parameters are unavailable. Second, the estimated scene, along with its corresponding observed data obtained by detectors, is employed to update the gain and the bias by means of a line-fitting technique. Finally, with these nonuniformity parameters, the compensated output of each detector is obtained by computing a very simple formula. The advantages of the proposed algorithm lie in its low computational complexity and storage requirements and ability to capture temporal drifts in the nonuniformity parameters. The performance of every module is demonstrated with simulated and real infrared image sequences. Experimental results indicate that the proposed algorithm exhibits a superior correction effect.
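    The line-fitting stage reduces to an ordinary least-squares fit of each detector's observed response against the estimated scene over a stack of frames, after which a frame is compensated by inverting the fitted line. The sketch below covers that stage only, with illustrative array shapes; the paper's interframe scene-prediction step is not reproduced.

```python
import numpy as np

def fit_gain_bias(observed, scene_est):
    """Per-detector least-squares fit of y = g * x + b over a frame stack.
    observed, scene_est: arrays of shape (n_frames, height, width)."""
    xm = scene_est.mean(axis=0)
    ym = observed.mean(axis=0)
    cov = ((scene_est - xm) * (observed - ym)).mean(axis=0)
    var = ((scene_est - xm) ** 2).mean(axis=0)
    gain = cov / var
    bias = ym - gain * xm
    return gain, bias

def correct_frame(frame, gain, bias):
    """Invert the fitted response to recover the compensated output."""
    return (frame - bias) / gain
```

    Re-running the fit on recent frames lets the parameters track the temporal drifts mentioned in the abstract.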

  19. Simple and Efficient Technique for Correction of Unilateral Scissor Bite Using Straight Wire

    PubMed Central

    Dolas, Siddhesh Gajanan; Chitko, Shrikant Shrinivas; Kerudi, Veerendra Virupaxappa; Bonde, Prasad Vasudeo

    2016-01-01

    Unilateral scissor bite is a relatively rare malocclusion. However, its correction is often difficult and a challenge for the clinician. This article presents a simple and efficient technique for the correction of severe unilateral scissor bite in a 14-year-old boy, using 0.020 S.S. A. J. Wilcock wire (premium plus) out of the spool, with minimal adjustments, placed in the mandibular arch. After about six weeks, a good amount of correction was seen in the lower arch and the lower molar had been relieved of the scissor bite. PMID:27231682

  20. Simple and Efficient Technique for Correction of Unilateral Scissor Bite Using Straight Wire.

    PubMed

    Dolas, Siddhesh Gajanan; Chitko, Shrikant Shrinivas; Kerudi, Veerendra Virupaxappa; Patil, Harshal Ashok; Bonde, Prasad Vasudeo

    2016-03-01

    Unilateral scissor bite is a relatively rare malocclusion. However, its correction is often difficult and a challenge for the clinician. This article presents a simple and efficient technique for the correction of severe unilateral scissor bite in a 14-year-old boy, using 0.020 S.S. A. J. Wilcock wire (premium plus) out of the spool, with minimal adjustments, placed in the mandibular arch. After about six weeks, a good amount of correction was seen in the lower arch and the lower molar had been relieved of the scissor bite.

  1. Near-station terrain corrections for gravity data by a surface-integral technique

    USGS Publications Warehouse

    Gettings, M.E.

    1982-01-01

    A new method of computing gravity terrain corrections by use of a digitizer and digital computer can result in substantial savings in the time and manual labor required to perform such corrections by conventional manual ring-chart techniques. The method is typically applied to estimate terrain effects for topography near the station, for example within 3 km of the station, although it has been used successfully to a radius of 15 km to estimate corrections in areas where topographic mapping is poor. Points (about 20) that define topographic maxima, minima, and changes in slope gradient are picked on the topographic map within the desired correction radius about the station. Particular attention must be paid to the area immediately surrounding the station to ensure a good topographic representation. The horizontal and vertical coordinates of these points are entered into the computer, usually by means of a digitizer. The computer then fits a multiquadric surface to the input points to form an analytic representation of the surface. By means of the divergence theorem, the gravity effect of an interior closed solid can be expressed as a surface integral, and the terrain correction is calculated by numerical evaluation of the integral over the surfaces of a cylinder whose vertical sides are at the correction radius about the station, whose flat bottom surface is at the topographic minimum, and whose upper surface is given by the multiquadric equation. The method has been tested with favorable results against models for which an exact result is available and against manually computed field-station locations in areas of rugged topography. By increasing the number of points defining the topographic surface, any desired degree of accuracy can be obtained. The method is more objective than manual ring-chart techniques because no average compartment elevations need be estimated.
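    The multiquadric representation named in the abstract is a standard scattered-data interpolant: the surface is a weighted sum of basis functions sqrt(r² + δ²) centred on the digitized points, with weights obtained from one linear solve. The sketch below covers the surface-fitting step only; the surface-integral evaluation of the gravity effect is not shown, and δ is an illustrative smoothing parameter.

```python
import numpy as np

def multiquadric_surface(points, elevations, delta=1.0):
    """Fit z(x, y) = sum_i c_i * sqrt(|p - p_i|^2 + delta^2) through
    scattered topographic points. points: (n, 2); elevations: (n,).
    Returns a callable evaluating the surface at query points (m, 2)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    coeff = np.linalg.solve(np.sqrt(d2 + delta**2), elevations)

    def surface(query):
        d2q = ((query[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        return np.sqrt(d2q + delta**2) @ coeff

    return surface
```

    The interpolant passes exactly through the input points, which is why adding points wherever the topography changes slope directly improves the accuracy of the fitted surface.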

  2. A generalised background correction algorithm for a Halo Doppler lidar and its application to data from Finland

    DOE PAGES

    Manninen, Antti J.; O'Connor, Ewan J.; Vakkari, Ville; ...

    2016-03-03

    Current commercially available Doppler lidars provide an economical and robust solution for measuring vertical and horizontal wind velocities, together with the ability to provide co- and cross-polarised backscatter profiles. The high temporal resolution of these instruments allows turbulent properties to be obtained from studying the variation in radial velocities. However, the instrument specifications mean that certain characteristics, especially the background noise behaviour, become a limiting factor for the instrument sensitivity in regions where the aerosol load is low. Turbulent calculations require an accurate estimate of the contribution from velocity uncertainty estimates, which are directly related to the signal-to-noise ratio. Any bias in the signal-to-noise ratio will propagate through as a bias in turbulent properties. In this paper we present a method to correct for artefacts in the background noise behaviour of commercially available Doppler lidars and reduce the signal-to-noise ratio threshold used to discriminate between noise, and cloud or aerosol signals. We show that, for Doppler lidars operating continuously at a number of locations in Finland, the data availability can be increased by as much as 50 % after performing this background correction and subsequent reduction in the threshold. Furthermore, the reduction in bias also greatly improves subsequent calculations of turbulent properties in weak signal regimes.

  3. A generalised background correction algorithm for a Halo Doppler lidar and its application to data from Finland

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manninen, Antti J.; O'Connor, Ewan J.; Vakkari, Ville

    Current commercially available Doppler lidars provide an economical and robust solution for measuring vertical and horizontal wind velocities, together with the ability to provide co- and cross-polarised backscatter profiles. The high temporal resolution of these instruments allows turbulent properties to be obtained from studying the variation in radial velocities. However, the instrument specifications mean that certain characteristics, especially the background noise behaviour, become a limiting factor for the instrument sensitivity in regions where the aerosol load is low. Turbulent calculations require an accurate estimate of the contribution from velocity uncertainty estimates, which are directly related to the signal-to-noise ratio. Any bias in the signal-to-noise ratio will propagate through as a bias in turbulent properties. In this paper we present a method to correct for artefacts in the background noise behaviour of commercially available Doppler lidars and reduce the signal-to-noise ratio threshold used to discriminate between noise, and cloud or aerosol signals. We show that, for Doppler lidars operating continuously at a number of locations in Finland, the data availability can be increased by as much as 50 % after performing this background correction and subsequent reduction in the threshold. Furthermore, the reduction in bias also greatly improves subsequent calculations of turbulent properties in weak signal regimes.

  4. Simplified projection technique to correct geometric and chromatic lens aberrations using plenoptic imaging.

    PubMed

    Dallaire, Xavier; Thibault, Simon

    2017-04-01

    Plenoptic imaging has been used in the past decade mainly for 3D reconstruction or digital refocusing. It has also been shown that this technology has potential for correcting monochromatic aberrations in a standard optical system. In this paper, we present an algorithm for reconstructing images using a projection technique while correcting defects present in them, applicable to chromatic aberrations and to wide-angle optical systems. We show that the impact of noise on the reconstruction procedure is minimal. Trade-offs between the sampling of the optical system needed for characterization and image quality are presented. Examples are shown for aberrations in a classic optical system and for chromatic aberrations. The technique is also applied to a wide-angle optical system with a full field of view of 140° (FFOV 140°). This technique could be used to further simplify or minimize optical systems.

  5. Instrumental background in balloon-borne gamma-ray spectrometers and techniques for its reduction

    NASA Technical Reports Server (NTRS)

    Gehrels, N.

    1985-01-01

    A calculation of the instrumental background in balloon-borne gamma-ray spectrometers is presented. The calculations are based on newly available interaction cross sections and new analytic techniques, and are the most detailed and accurate published to date. Results compare well with measurements made in the 20 keV to 10 MeV energy range by the Goddard Low Energy Gamma-ray Spectrometer (LEGS). The principal components of the continuum background in spectrometers with Ge detectors and thick active shields are: (1) elastic scattering of atmospheric neutrons on the Ge nuclei; (2) aperture flux of atmospheric and cosmic gamma rays; (3) beta decays of unstable nuclides produced by nuclear interactions of atmospheric protons and neutrons with Ge nuclei; and (4) shield leakage of atmospheric gamma rays. The improved understanding of these components leads to several recommended techniques for reducing the background.

  6. Characterization techniques for incorporating backgrounds into DIRSIG

    NASA Astrophysics Data System (ADS)

    Brown, Scott D.; Schott, John R.

    2000-07-01

    The appearance of operational hyperspectral imaging spectrometers in both the solar and thermal regions has led to the development of a variety of spectral detection algorithms. The development and testing of these algorithms require well-characterized field collection campaigns that can be time and cost prohibitive. Radiometrically robust synthetic image generation (SIG) environments that can generate appropriate images under a variety of atmospheric conditions and with a variety of sensors offer an excellent supplement to reduce the scope of expensive field collections. In addition, SIG image products provide the algorithm developer with per-pixel truth, allowing for improved characterization of algorithm performance. To meet the needs of the algorithm development community, the image modeling community needs to supply synthetic image products that contain all the spatial and spectral variability present in real-world scenes, and that provide the large area coverage typically acquired with actual sensors. This places a heavy burden on synthetic scene builders to construct well-characterized scenes that span large areas. Several SIG models have demonstrated the ability to accurately model targets (vehicles, buildings, etc.) using well-constructed target geometry (from CAD packages) and robust thermal and radiometry models. However, background objects (vegetation, infrastructure, etc.) dominate the percentage of real-world scene pixels, and utilizing target-building techniques for them is time and resource prohibitive. This paper discusses new methods that have been integrated into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model to characterize backgrounds. The new suite of scene construct types allows the user to incorporate both terrain and surface properties to obtain wide area coverage. The terrain can be incorporated using a triangular irregular network (TIN) derived from elevation data or digital elevation model (DEM) data from actual

  7. Chemometric strategy for automatic chromatographic peak detection and background drift correction in chromatographic data.

    PubMed

    Yu, Yong-Jie; Xia, Qiao-Ling; Wang, Sheng; Wang, Bing; Xie, Fu-Wei; Zhang, Xiao-Bing; Ma, Yun-Ming; Wu, Hai-Long

    2014-09-12

    Peak detection and background drift correction (BDC) are the key stages in using chemometric methods to analyze chromatographic fingerprints of complex samples. This study developed a novel chemometric strategy for simultaneous automatic chromatographic peak detection and BDC. A robust statistical method was used for intelligent estimation of the instrumental noise level, coupled with the first-order derivative of the chromatographic signal, to automatically extract chromatographic peaks in the data. A local curve-fitting strategy was then employed for BDC. Simulated and real liquid chromatographic data were designed with various kinds of background drift and degrees of peak overlap to verify the performance of the proposed strategy. The underlying chromatographic peaks can be automatically detected and reasonably integrated by this strategy. Meanwhile, chromatograms with BDC can be precisely obtained. The proposed method was used to analyze a complex gas chromatography dataset that monitored quality changes in plant extracts during a storage procedure. Copyright © 2014 Elsevier B.V. All rights reserved.
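    The two stages the abstract pairs — robust noise estimation with derivative-based peak flagging, then baseline estimation over peak-free regions — can be sketched as below. This is a generic illustration with assumed thresholds; the paper's local curve-fitting model is replaced here by simple linear interpolation across the flagged regions.

```python
import numpy as np

def detect_and_correct(signal, k=3.0, gap=25):
    """Flag peak regions where |first derivative| exceeds k * robust
    noise sigma, close short gaps so peak apexes (zero derivative)
    stay flagged, then interpolate the baseline across flagged
    regions and subtract it."""
    d = np.diff(signal)
    sigma = np.median(np.abs(d - np.median(d))) / 0.6745  # robust noise
    hot = np.abs(d) > k * sigma
    in_peak = np.zeros(len(signal), dtype=bool)
    in_peak[:-1] |= hot
    in_peak[1:] |= hot
    flagged = np.flatnonzero(in_peak)
    for a, b in zip(flagged[:-1], flagged[1:]):   # close short gaps
        if b - a <= gap:
            in_peak[a:b] = True
    idx = np.arange(len(signal))
    baseline = np.interp(idx, idx[~in_peak], signal[~in_peak])
    return in_peak, signal - baseline
```

    Using a robust (median-based) noise estimate keeps the threshold insensitive to the peaks themselves, which is what makes the flagging automatic.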

  8. Parameter estimation for the exponential-normal convolution model for background correction of Affymetrix GeneChip data.

    PubMed

    McGee, Monnie; Chen, Zhongxue

    2006-01-01

    There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
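    For reference, the conditional expectation itself has a standard closed form under the convolution model O = S + B with S ~ Exp(α) and B ~ N(μ, σ²); the sketch below evaluates it for given parameter estimates. Estimating μ, σ, and α well is precisely the subject of the paper and is not reproduced here.

```python
import math

def rma_adjust(o, mu, sigma, alpha):
    """E[S | O = o] for O = S + B, S ~ Exp(alpha), B ~ N(mu, sigma^2):
    a + b * (phi(a/b) - phi((o-a)/b)) / (Phi(a/b) + Phi((o-a)/b) - 1),
    with a = o - mu - sigma^2 * alpha and b = sigma; phi/Phi are the
    standard normal pdf/cdf."""
    phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    a = o - mu - sigma * sigma * alpha
    b = sigma
    num = phi(a / b) - phi((o - a) / b)
    den = Phi(a / b) + Phi((o - a) / b) - 1.0
    return a + b * num / den
```

    The adjusted signal is always positive, which is the property that motivates model-based correction over naive background subtraction.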

  9. Revised radiometric calibration technique for LANDSAT-4 Thematic Mapper data by the Canada Centre for Remote Sensing

    NASA Technical Reports Server (NTRS)

    Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.

    1984-01-01

    Observations of raw image data, raw radiometric calibration data, and background measurements extracted from the raw data streams on high-density tape reveal major shortcomings in a technique proposed by the Canada Centre for Remote Sensing in 1982 for the radiometric correction of TM data. Results are presented which correlate measurements of the DC background with variations in both the image data background and the calibration samples. The effect on both raw data and data corrected using the earlier proposed technique is explained, and the correction required for these factors as a function of individual scan line number for each detector is described. How the revised technique can be incorporated into an operational environment is demonstrated.

  10. An investigation of error correcting techniques for OMV data

    NASA Technical Reports Server (NTRS)

    Ingels, Frank; Fryer, John

    1992-01-01

    Papers on the following topics are presented: considerations of testing the Orbital Maneuvering Vehicle (OMV) system with CLASS; OMV CLASS test results (first go around); equivalent system gain available from R-S encoding versus a desire to lower the power amplifier from 25 watts to 20 watts for OMV; command word acceptance/rejection rates for OMV; a memo concerning energy-to-noise ratio for the Viterbi-BSC Channel and the impact of Manchester coding loss; and an investigation of error correcting techniques for OMV and Advanced X-ray Astrophysics Facility (AXAF).

  11. [Research on Residual Aberrations Correction with Adaptive Optics Technique in Patients Undergoing Orthokeratology].

    PubMed

    Gong, Rui; Yang, Bi; Liu, Longqian; Dai, Yun; Zhang, Yudong; Zhao, Haoxin

    2016-06-01

    We conducted this study to explore the influence of changes in the ocular residual aberrations on contrast sensitivity (CS) function in eyes undergoing orthokeratology, using an adaptive optics technique. Nineteen eyes of nineteen subjects were included in this study. The subjects were between 12 and 20 years (14.27 ± 2.23 years) of age. An adaptive optics (AO) system was adopted to measure and compensate the residual aberrations through a 4-mm artificial pupil, and at the same time the contrast sensitivities were measured at five spatial frequencies (2, 4, 8, 16, and 32 cycles per degree). The CS measurements with and without AO correction were completed. The sequence of the measurements with and without AO correction was randomly arranged without informing the observers. A two-interval forced-choice procedure was used for the CS measurements. The paired t-test was used to compare the contrast sensitivity with and without AO correction at each spatial frequency. The results revealed that the AO system decreased the mean total root mean square (RMS) from 0.356 μm to 0.160 μm (t=10.517, P<0.001), and the mean total higher-order RMS from 0.246 μm to 0.095 μm (t=10.113, P<0.001). The difference in log contrast sensitivity with and without AO correction was significant only at 8 cpd (t=-2.51, P=0.02). We conclude that correcting the ocular residual aberrations using an adaptive optics technique could improve the contrast sensitivity function at intermediate spatial frequency in patients undergoing orthokeratology.

  12. Correction of anterior mitral prolapse: the parachute technique.

    PubMed

    Zannis, Konstantinos; Mitchell-Heggs, Laurens; Di Nitto, Valentina; Kirsch, Matthias E W; Noghin, Milena; Ghorayeb, Gabriel; Lessana, Arrigo

    2012-04-01

    To evaluate a new surgical technique for the correction of anterior mitral leaflet prolapse. From October 2006 to November 2011, 44 consecutive patients (28 males, mean age 55 ± 13 years) underwent mitral valve repair because of anterior mitral leaflet prolapse. Echocardiography was performed to evaluate the distance from the tip of each papillary muscle to the annular plane. A specially designed caliper was used to manufacture a parachute-like device, by looping a 4-0 polytetrafluoroethylene suture between a Dacron strip and Teflon felt pledget, according to the preoperative echocardiographic measurements. This parachute was then used to resuspend the anterior mitral leaflet to the corresponding papillary muscle. Of the 44 patients, 35 (80%) required concomitant posterior leaflet repair. Additional procedures were required in 16 patients (36%). The preoperative logistic European System for Cardiac Operative Risk Evaluation was 4.3 ± 6.9. The clinical and echocardiographic follow-up were complete. The total follow-up was 1031 patient-months and averaged 23.4 ± 17.2 months per patient. The overall mortality rate was 4.5% (n = 2). Also, 2 patients (4.5%) with recurrent mitral regurgitation required mitral valve replacement, 1 on the first postoperative day and 1 after 13 months. In the latter patient, histologic analysis showed complete endothelialization of the Dacron strip. At follow-up, all non-reoperated survivors (n = 40) were in New York Heart Association class I, with no regurgitation in 40 patients (93%) and grade 2+ mitral regurgitation in 3 (7%). This technique offers a simple and reproducible solution for correction of anterior leaflet prolapse. Echocardiography can reliably evaluate the length of the chordae. However, the long-term results must be evaluated and compared with other surgical strategies. Copyright © 2012 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.

  13. Segmentation and pulse shape discrimination techniques for rejecting background in germanium detectors

    NASA Technical Reports Server (NTRS)

    Roth, J.; Primbsch, J. H.; Lin, R. P.

    1984-01-01

    The possibility of rejecting the internal beta-decay background in coaxial germanium detectors by distinguishing between the multi-site energy losses characteristic of photons and the single-site energy losses of electrons in the range 0.2-2 MeV is examined. The photon transport was modeled with a Monte Carlo routine. Background rejection by both multiple-segmentation and pulse shape discrimination (PSD) techniques is investigated. The efficiency of a coaxial detector with six 1-cm-thick segments operating in coincidence mode alone is compared to that of a two-segment (1 cm and 5 cm) detector employing both front-rear coincidence and PSD in the rear segment to isolate photon events. Both techniques can provide at least 95 percent rejection of single-site events while accepting at least 80 percent of the multi-site events above 500 keV.
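    The coincidence logic for the multiple-segmentation approach is simple to state: an event is kept as photon-like only if it deposits energy above threshold in more than one segment. A toy sketch of that decision rule (the threshold value is illustrative, and the pulse-shape discrimination branch is not modelled):

```python
def is_multi_site(segment_energies_mev, threshold_mev=0.02):
    """Return True for events with above-threshold deposits in two or
    more segments (photon-like, accepted); single-segment events
    (beta-decay-like) are rejected."""
    hits = sum(1 for e in segment_energies_mev if e > threshold_mev)
    return hits >= 2
```

    The trade-off in the abstract follows directly: raising the threshold rejects more single-site background but also discards multi-site photon events whose secondary deposits are small.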

  14. Skin reduction technique for correction of lateral deviation of the erect straight penis.

    PubMed

    Shaeer, Osama

    2014-07-01

Lateral deviation of the erect straight penis (LDESP) refers to a penis that, despite being straight in the erect state, points laterally, yet can be directed forward manually without the use of force. While LDESP should not negatively affect sexual function, it may have a negative cosmetic impact. This work describes the skin reduction technique (SRT) for correction of LDESP. Counseling was offered to males with LDESP after excluding other abnormalities. Surgery was performed in cases where counseling failed. In the erect state, the degree and direction of the LDESP were noted. Skin at the base of the penis on the side contralateral to the deviation was excised and the edges approximated to correct the LDESP. Further excision was repeated if needed. The incision was closed in two layers. The long-term efficacy of SRT was the main outcome measure. Of 183 males with LDESP, 66.7% were not sexually active. Counseling relieved 91.8% of cases. Fifteen patients insisted on surgery, mostly from among the sexually active, where the complaint came from both patient and partner. SRT resulted in full correction of the angle of erection in 12 of 15 cases. Two had minimal recurrence, and one had major recurrence necessitating repeat SRT. LDESP is a more common complaint among those who have not experienced a coital relationship, and is mostly relieved by counseling. However, sexually active males with this complaint are more difficult to relieve by counseling. A minority of patients may opt for surgical correction. SRT achieves a forward erection in such patients, is minimally invasive, and is relatively safe, provided the angle of erection can be corrected manually without force. Shaeer O. Skin reduction technique for correction of lateral deviation of the erect straight penis. © 2014 International Society for Sexual Medicine.

  15. Novel Principles and Techniques to Create a Natural Design in Female Hairline Correction Surgery

    PubMed Central

    2015-01-01

Background: Female hairline correction surgery is becoming increasingly popular. However, no guidelines or methods of female hairline design have been introduced to date. Methods: The purpose of this study was to create an initial framework based on the novel principles of female hairline design and then use artistic ability and experience to fine-tune this framework. An understanding of the concept of 5 areas (frontal area, frontotemporal recess area, temporal peak, infratemple area, and sideburns) and 5 points (C, A, B, T, and S) is required for female hairline correction surgery (the 5A5P principle). The general concepts of female hairline correction surgery and natural design methods are, herein, explained with a focus on the correlations between these 5 areas and 5 points. Results: A natural and aesthetic female hairline can be created with application of the above-mentioned concepts. Conclusion: The 5A5P principle of forming the female hairline is very useful in female hairline correction surgery. PMID:26894014

  16. PSF and MTF comparison of two different surface ablation techniques for laser visual correction

    NASA Astrophysics Data System (ADS)

    Cruz Félix, Angel Sinue; López Olazagasti, Estela; Rosales, Marco A.; Ibarra, Jorge; Tepichín Rodríguez, Eduardo

    2009-08-01

    It is well known that the Zernike expansion of the wavefront aberrations has been extensively used to evaluate the performance of image forming optical systems. Recently, these techniques were adopted in the field of Ophthalmology to evaluate the objective performance of the human ocular system. We have been working in the characterization and evaluation of the performance of normal human eyes; i.e., eyes which do not require any refractive correction (20/20 visual acuity). These data provide us a reference model to analyze Pre- and Post-Operated results from eyes that have been subjected to laser refractive surgery. Two different ablation techniques are analyzed in this work. These techniques were designed to correct the typical refractive errors known as myopia, hyperopia, and presbyopia. When applied to the corneal surface, these techniques provide a focal shift and, in principle, an improvement of the visual performance. These features can be suitably described in terms of the PSF and MTF of the corresponding Pre- and Post-Operated wavefront aberrations. We show the preliminary results of our comparison.

  17. Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna R.; Wickert, Mark A.

    2017-05-01

    A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current, amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear, or piecewise linear models to approximate the non-uniform gain and off set characteristics as well as the nonlinear response. Piecewise linear models perform better than the one and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher order polynomial NUC algorithms feasible. This study comprehensively tests higher order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods including the standard one and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating point results show 30% less non-uniformity, in post-corrected data, when using a third order polynomial correction algorithm rather than a second order algorithm. To maximize overall performance, a trade off analysis on polynomial order and coefficient precision is performed. Comprehensive testing, across multiple data sets, provides next generation model validation and performance benchmarks for higher order polynomial NUC methods.
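A per-pixel polynomial NUC of the kind benchmarked above can be sketched as follows; this is a simplified version assuming uniform-flux calibration frames, with the function names, calibration levels and polynomial order purely illustrative:

```python
import numpy as np

def fit_nuc(frames, targets, order=3):
    """Fit a per-pixel polynomial mapping raw response to the desired
    (uniform) output. frames: (K, H, W) responses at K calibration flux
    levels; targets: (K,) true flux at each level."""
    K, H, W = frames.shape
    x = frames.reshape(K, -1)
    coeffs = np.empty((order + 1, H * W))
    for p in range(H * W):                       # independent fit per pixel
        coeffs[:, p] = np.polyfit(x[:, p], targets, order)
    return coeffs.reshape(order + 1, H, W)

def apply_nuc(frame, coeffs):
    """Evaluate the stored per-pixel polynomial on a raw frame."""
    order = coeffs.shape[0] - 1
    out = np.zeros_like(frame, dtype=float)
    for k, c in enumerate(coeffs):               # Horner-free polyval
        out += c * frame ** (order - k)
    return out
```

For a third-order correction, only four coefficients per pixel need to be stored, which is the storage advantage the abstract weighs against piecewise-linear tables.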

  18. Students' Preferences and Attitude toward Oral Error Correction Techniques at Yanbu University College, Saudi Arabia

    ERIC Educational Resources Information Center

    Alamri, Bushra; Fawzi, Hala Hassan

    2016-01-01

    Error correction has been one of the core areas in the field of English language teaching. It is "seen as a form of feedback given to learners on their language use" (Amara, 2015). Many studies investigated the use of different techniques to correct students' oral errors. However, only a few focused on students' preferences and attitude…

  19. A post-reconstruction method to correct cupping artifacts in cone beam breast computed tomography

    PubMed Central

    Altunbas, M. C.; Shaw, C. C.; Chen, L.; Lai, C.; Liu, X.; Han, T.; Wang, T.

    2007-01-01

In cone beam breast computed tomography (CT), scattered radiation leads to nonuniform biasing of CT numbers known as a cupping artifact. Besides being visual distractions, cupping artifacts appear as background nonuniformities, which impair efficient gray scale windowing and pose a problem in threshold-based volume visualization/segmentation. To overcome this problem, we have developed a background nonuniformity correction method specifically designed for cone beam breast CT. With this technique, the cupping artifact is modeled as an additive background signal profile in the reconstructed breast images. Due to the largely circularly symmetric shape of a typical breast, the additive background signal profile was also assumed to be circularly symmetric. The radial variation of the background signals was estimated by measuring the spatial variation of adipose tissue signals in front-view breast images. To extract adipose tissue signals in an automated manner, a signal sampling scheme in polar coordinates and a background trend fitting algorithm were implemented. The background fits were then compared with the targeted adipose tissue signal value (constant throughout the breast volume) to obtain an additive correction value for each tissue voxel. To test the accuracy, we applied the technique to cone beam CT images of mastectomy specimens. After correction, the images demonstrated significantly improved signal uniformity in both front- and side-view slices. The reduction of both intra-slice and inter-slice variations in adipose tissue CT numbers supported our observations. PMID:17822018

  20. Development of Spatiotemporal Bias-Correction Techniques for Downscaling GCM Predictions

    NASA Astrophysics Data System (ADS)

    Hwang, S.; Graham, W. D.; Geurink, J.; Adams, A.; Martinez, C. J.

    2010-12-01

Accurately representing the spatial variability of precipitation is an important factor for predicting watershed response to climatic forcing, particularly in small, low-relief watersheds affected by convective storm systems. Although Global Circulation Models (GCMs) generally preserve spatial relationships between large-scale and local-scale mean precipitation trends, most GCM downscaling techniques focus on preserving only observed temporal variability on a point-by-point basis, not spatial patterns of events. Downscaled GCM results (e.g., CMIP3 ensembles) have been widely used to predict hydrologic implications of climate variability and climate change in large snow-dominated river basins in the western United States (Diffenbaugh et al., 2008; Adam et al., 2009). However, fewer applications to smaller rain-driven river basins in the southeastern US (where preserving the spatial variability of rainfall patterns may be more important) have been reported. In this study a new method was developed to bias-correct GCMs to preserve both the long-term temporal mean and variance of the precipitation data and the spatial structure of daily precipitation fields. Forty-year retrospective simulations (1960-1999) from 16 GCMs were collected (IPCC, 2007; WCRP CMIP3 multi-model database: https://esg.llnl.gov:8443/), and the daily precipitation data at coarse resolution (i.e., 280 km) were interpolated to 12 km spatial resolution and bias corrected using gridded observations over the state of Florida (Maurer et al., 2002; Wood et al., 2002; Wood et al., 2004). In this method, spatial random fields were generated which preserved the observed spatial correlation structure of the historic gridded observations and the spatial mean corresponding to the coarse-scale GCM daily rainfall. The spatiotemporal variability of the spatiotemporally bias-corrected GCMs was evaluated against gridded observations, and compared to the original temporally bias-corrected and downscaled CMIP3 data for the
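For contrast with the spatial method developed here, the conventional temporal-only correction that matches long-term mean and variance at a single grid cell can be sketched as a mean-and-variance rescaling (a simplified stand-in; quantile mapping is the more common production choice, and the names are illustrative):

```python
import numpy as np

def scale_bias_correct(gcm_daily, obs_daily):
    """Rescale a GCM daily series at one grid cell so the corrected series
    matches the observed long-term mean and variance. Note this is purely
    temporal: it does nothing to restore spatial correlation between cells,
    which is the gap the spatial-random-field method addresses."""
    scale = obs_daily.std() / gcm_daily.std()
    return obs_daily.mean() + (gcm_daily - gcm_daily.mean()) * scale
```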

  1. Simultaneous double-rod rotation technique in posterior instrumentation surgery for correction of adolescent idiopathic scoliosis.

    PubMed

    Ito, Manabu; Abumi, Kuniyoshi; Kotani, Yoshihisa; Takahata, Masahiko; Sudo, Hideki; Hojo, Yoshihiro; Minami, Akio

    2010-03-01

The authors present a new posterior correction technique consisting of simultaneous double-rod rotation using 2 contoured rods and polyaxial pedicle screws with or without Nesplon tapes. The purpose of this study is to introduce the basic principles and surgical procedures of this new posterior surgery for correction of adolescent idiopathic scoliosis. Through gradual rotation of the concave-side rod by 2 rod holders, the convex-side rod simultaneously rotates with the concave-side rod. This procedure does not involve any force pushing down the spinal column around the apex. Since this procedure consists of upward pushing and lateral translation of the spinal column with simultaneous double-rod rotation maneuvers, it is simple and achieves thoracic kyphosis as well as favorable scoliosis correction. This technique is applicable not only to a thoracic single curve but also to double major curves in cases of adolescent idiopathic scoliosis.

  2. Incisionless otoplasty: a reliable and replicable technique for the correction of prominauris.

    PubMed

    Mehta, Shaun; Gantous, Andres

    2014-01-01

    This study evaluates the postoperative outcomes achieved with incisionless otoplasty for the correction of prominauris. To determine whether incisionless otoplasty is a reliable and replicable technique in correcting prominauris. This study consisted of a retrospective electronic medical record review for 72 patients undergoing incisionless otoplasty for the correction of prominauris by a single surgeon from November 2006 to April 2013. Follow-up ranged from 1 to 87 months. The patients were operated on at both St Joseph's Health Centre (a community hospital) and The Cumberland Clinic (private practice) in Toronto, Ontario, Canada. All patients undergoing an incisionless otoplasty for the correction of prominauris were eligible. Participants' ages ranged from 3 to 55 years, with the majority being adults. Seventy patients were followed up for outcomes. Incisionless otoplasty. Number and type of sutures used, perioperative complications, and postoperative follow-up including complications and revisions. Complications included infection, hematoma, bleeding, perichondritis, suture granuloma, suture exposure, and suture failure. A mean (SD) 2.5 (0.8) sutures were used in the left ear, 2.48 (0.75) in the right ear, and 4.69 (1.75) in total. The number of sutures used in the left vs right ear was not significantly different (P = .60). All patients had horizontal mattress sutures placed for correction of prominauris. There were no serious perioperative complications such as infection, bleeding, hematoma, perichondritis, or cartilage necrosis. Follow-up data were extracted and analyzed in 70 patients, with a mean follow-up time of 31 months. Complications were seen in 10 patients (14%): 4 were due to suture failure, 3 were due to suture exposure, 2 were due to granuloma formation, and 1 was due to a Polysporin (bacitracin zinc/polymyxin B sulfate) reaction. Nine patients (13%) needed a revision to achieve a desirable result. The technique of incisionless otoplasty used

  3. Validation of motion correction techniques for liver CT perfusion studies

    PubMed Central

    Chandler, A; Wei, W; Anderson, E F; Herron, D H; Ye, Z; Ng, C S

    2012-01-01

Objectives: Motion in images potentially compromises the evaluation of temporally acquired CT perfusion (CTp) data; image registration should mitigate this, but first requires validation. Our objective was to compare the relative performance of manual, rigid and non-rigid registration techniques to correct anatomical misalignment in acquired liver CTp data sets. Methods: 17 data sets in patients with liver tumours who had undergone a CTp protocol were evaluated. Each data set consisted of a cine acquisition during a breath-hold (Phase 1), followed by six further sets of cine scans (each containing 11 images) acquired during free breathing (Phase 2). Phase 2 images were registered to a reference image from the Phase 1 cine using two semi-automated intensity-based registration techniques (rigid and non-rigid) and a manual technique (the only option available in the relevant vendor CTp software). The performance of each technique in aligning liver anatomy was assessed by four observers, independently and blindly, on two separate occasions, using a semi-quantitative visual validation study (employing a six-point score). The registration techniques were statistically compared using an ordinal probit regression model. Results: 306 registrations (2448 observer scores) were evaluated. The three registration techniques were significantly different from each other (p=0.03). On pairwise comparison, the semi-automated techniques were significantly superior to the manual technique, with non-rigid significantly superior to rigid (p<0.0001), which in turn was significantly superior to manual registration (p=0.04). Conclusion: Semi-automated registration techniques achieved superior alignment of liver anatomy compared with the manual technique. We hope this will translate into more reliable CTp analyses. PMID:22374283

  4. Refinements in pectus carinatum correction: the pectoralis muscle split technique.

    PubMed

    Schwabegger, Anton H; Jeschke, Johannes; Schuetz, Tanja; Del Frari, Barbara

    2008-04-01

    The standard approach for correction of pectus carinatum deformity includes elevation of the pectoralis major and rectus abdominis muscle from the sternum and adjacent ribs. A postoperative restriction of shoulder activity for several weeks is necessary to allow stable healing of the elevated muscles. To reduce postoperative immobilization, we present a modified approach to the parasternal ribs using a pectoralis muscle split technique. At each level of rib cartilage resection, the pectoralis muscle is split along the direction of its fibers instead of elevating the entire muscle as performed with the standard technique. From July 2000 to May 2007, we successfully used this technique in 33 patients with pectus carinatum deformity. After the muscle split approach, patients returned to full unrestricted shoulder activity as early as 3 weeks postoperatively, compared to 6 weeks in patients treated with muscle flap elevation. Postoperative pain was reduced and the patients were discharged earlier from the hospital than following the conventional approach. The muscle split technique is a modified surgical approach to the parasternal ribs in patients with pectus carinatum deformity. It helps to maintain pectoralis muscle vascularization and function and can reduce postoperative pain, hospitalization, and rehabilitation period.

  5. A Search Technique for Weak and Long-Duration Gamma-Ray Bursts from Background Model Residuals

    NASA Technical Reports Server (NTRS)

    Skelton, R. T.; Mahoney, W. A.

    1993-01-01

    We report on a planned search technique for Gamma-Ray Bursts too weak to trigger the on-board threshold. The technique is to search residuals from a physically based background model used for analysis of point sources by the Earth occultation method.

  6. A technique for phase correction in Fourier transform spectroscopy

    NASA Astrophysics Data System (ADS)

    Artsang, P.; Pongchalee, P.; Palawong, K.; Buisset, C.; Meemon, P.

    2018-03-01

Fourier transform spectroscopy (FTS) is a type of spectroscopy that can be used to analyze the components in a sample. The basic setup commonly used in this technique is the Michelson interferometer. The interference signal obtained from the interferometer can be Fourier transformed into the spectral pattern of the illuminating light source. To experimentally study the concept of Fourier transform spectroscopy, the project started by setting up a Michelson interferometer in the laboratory. The implemented system used a broadband light source in the near infrared region (0.81-0.89 μm) and controlled the movable mirror using a computer-controlled motorized translation stage. In this early study, no sample was placed in the interference path. Therefore, the spectrum obtained by Fourier transformation of the captured interferogram should match the spectral shape of the light source. One main challenge of FTS is to retrieve the correct phase information of the interferogram, which relates to the correct spectral shape of the light source. One main source of phase distortion that we observed in our system is the nonlinear movement of the movable reference mirror of the Michelson interferometer. Therefore, to improve the result, we coupled a monochromatic light source into the implemented interferometer. We simultaneously measured the interferograms of the monochromatic and broadband light sources. The interferogram of the monochromatic light source was used to correct the phase of the interferogram of the broadband light source. The result shows significant improvement in the computed spectral shape.
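The reference-fringe correction can be sketched as resampling the broadband interferogram at the zero crossings of the monochromatic fringe, which are equally spaced in optical path regardless of how nonlinearly the stage moves in time; this single-tone demonstration is an assumption-laden sketch, not the authors' code:

```python
import numpy as np

def resample_by_reference(ref, sig):
    """Resample the broadband interferogram `sig` at the zero crossings of
    the simultaneously recorded monochromatic reference fringe `ref`.
    Successive crossings are half a reference wavelength apart in optical
    path, whatever the stage does in time."""
    ref = ref - ref.mean()
    idx = np.where(ref[:-1] * ref[1:] < 0)[0]     # bracketing sample pairs
    frac = ref[idx] / (ref[idx] - ref[idx + 1])   # linear sub-sample refine
    return np.interp(idx + frac, np.arange(len(sig)), sig)
```

After resampling, the interferogram sits on a uniform optical-path grid, so its FFT no longer suffers the phase distortion introduced by nonlinear mirror motion.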

  7. An Accurate Scatter Measurement and Correction Technique for Cone Beam Breast CT Imaging Using Scanning Sampled Measurement (SSM) Technique.

    PubMed

    Liu, Xinming; Shaw, Chris C; Wang, Tianpeng; Chen, Lingyun; Altunbas, Mustafa C; Kappadath, S Cheenu

    2006-02-28

We developed and investigated a scanning sampled measurement (SSM) technique for scatter measurement and correction in cone beam breast CT imaging. A cylindrical polypropylene phantom (water equivalent) was mounted on a rotating table in a stationary-gantry experimental cone beam breast CT imaging system. A 2-D array of lead beads, spaced ~1 cm apart and slightly tilted vertically, was placed between the object and the x-ray source. A series of projection images was acquired as the phantom was rotated 1 degree per projection view and the lead bead array was shifted vertically from one projection view to the next. A series of lead bars was also placed at the phantom edge to produce a better scatter estimate across the phantom edges. Image signals in the lead bead/bar shadows were used to obtain sampled scatter measurements, which were then interpolated to form an estimated scatter distribution across the projection images. The image data behind the lead bead/bar shadows were restored by interpolating image data from two adjacent projection views to form beam-block-free projection images. The estimated scatter distribution was then subtracted from the corresponding restored projection image to obtain the scatter-removed projection images. Our preliminary experiment has demonstrated that it is feasible to implement the SSM technique for scatter estimation and correction in cone beam breast CT imaging. Scatter correction was successfully performed on all projection images using the scatter distribution interpolated from the SSM and the restored projection image data. The scatter-corrected projection data yielded elevated CT numbers and largely reduced cupping effects.
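The interpolation-and-subtraction step of the SSM scheme can be sketched with separable bilinear interpolation of the sampled scatter values; the grid layout and names are illustrative, and beam-block shadow restoration from adjacent views is omitted:

```python
import numpy as np

def ssm_scatter_correct(proj, grid_rows, grid_cols, scatter_samples):
    """Subtract a scatter field estimated from signals measured behind a
    sparse 2-D array of beam blockers. scatter_samples[i, j] is the signal
    behind the bead at (grid_rows[i], grid_cols[j]); the full field is
    filled in by separable (bilinear) interpolation."""
    H, W = proj.shape
    # interpolate along columns on each sampled row, then along rows
    tmp = np.stack([np.interp(np.arange(W), grid_cols, row)
                    for row in scatter_samples])
    scatter = np.stack([np.interp(np.arange(H), grid_rows, tmp[:, c])
                        for c in range(W)], axis=1)
    return proj - scatter
```

Linear interpolation is adequate here because scatter is a slowly varying, low-frequency field, which is what makes sparse sampling behind ~1 cm-spaced beads workable.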

  8. Revised Radiometric Calibration Technique for LANDSAT-4 Thematic Mapper Data by the Canada Centre for Remote Sensing

    NASA Technical Reports Server (NTRS)

    Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.

    1984-01-01

A technique for the radiometric correction of LANDSAT-4 Thematic Mapper data was proposed by the Canada Centre for Remote Sensing. Subsequent detailed observations of raw image data, raw radiometric calibration data and background measurements extracted from the raw data stream on High Density Tape highlighted major shortcomings in the proposed method which, if left uncorrected, can cause severe radiometric striping in the output product. Results are presented which correlate measurements of the DC background with variations in both image data background and calibration samples. The effect on both raw data and on data corrected using the earlier proposed technique is explained, and the correction required for these factors as a function of individual scan line number for each detector is described. It is shown how the revised technique can be incorporated into an operational environment.

  9. Iterative Correction Scheme Based on Discrete Cosine Transform and L1 Regularization for Fluorescence Molecular Tomography With Background Fluorescence.

    PubMed

    Zhang, Jiulou; Shi, Junwei; Guang, Huizhi; Zuo, Simin; Liu, Fei; Bai, Jing; Luo, Jianwen

    2016-06-01

    High-intensity background fluorescence is generally encountered in fluorescence molecular tomography (FMT), because of the accumulation of fluorescent probes in nontarget tissues or the existence of autofluorescence in biological tissues. The reconstruction results are affected or even distorted by the background fluorescence, especially when the distribution of fluorescent targets is relatively sparse. The purpose of this paper is to reduce the negative effect of background fluorescence on FMT reconstruction. After each iteration of the Tikhonov regularization algorithm, 3-D discrete cosine transform is adopted to filter the intermediate results. And then, a sparsity constraint step based on L1 regularization is applied to restrain the energy of the objective function. Phantom experiments with different fluorescence intensities of homogeneous and heterogeneous background are carried out to validate the performance of the proposed scheme. The results show that the reconstruction quality can be improved with the proposed iterative correction scheme. The influence of background fluorescence in FMT can be reduced effectively because of the filtering of the intermediate results, the detail preservation, and noise suppression of L1 regularization.
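A 1-D sketch of one iteration cycle (a gradient step on the least-squares data term, DCT-domain filtering of the intermediate estimate, then an L1 soft threshold); the system operator, the cutoff and the threshold value are illustrative assumptions, not the authors' 3-D FMT implementation:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are cosine basis vectors)."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def iterative_correction(A, b, tau=1e-4, keep=0.25, n_iter=500):
    """Per iteration: gradient step on ||Ax - b||^2, discard high DCT
    frequencies of the intermediate result, then soft-threshold (L1)."""
    n = A.shape[1]
    C = dct_matrix(n)
    cut = max(1, int(keep * n))
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x - step * A.T @ (A @ x - b)         # data-fidelity step
        c = C @ x
        c[cut:] = 0.0                            # DCT filtering
        x = C.T @ c
        x = np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)  # L1 threshold
    return x
```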

  10. Modified 16-Dot plication technique for correction of penile curvature: prevention of knot-related complications.

    PubMed

    Salem, Emad A

    2018-05-08

Penile curvature is a common urological disease. Tunical plication for correction of penile curvature has become popular because it is simpler, adjustable to avoid overcorrection, and associated with less bleeding and less postoperative erectile dysfunction. This study aims to assess the results of a modified 16-dot plication technique for correction of congenital and acquired penile curvature and avoidance of knot-associated complications. Eighteen patients underwent correction of their penile curvature using the modified 16-dot plication technique between January 2014 and October 2015. Patients' pre- and postoperative data were analyzed. The mean age of the patients was 44 years. Of the 15 patients who were available for follow-up, 8 had congenital penile curvature (CPC) and 7 had Peyronie's disease (PD). The angle of deviation ranged from 30° to 90°. Erectile function (EF) was assessed preoperatively by IIEF score and duplex ultrasound. Postoperative follow-up at 3 and 6 months revealed a straight erect penis in all patients. At longer follow-up of 1 to 2 years, 2 patients complained of slight recurrence of the curve (<20°) and 2 patients complained of worsening of their erectile function. Penile shortening was noted by 6 patients. None of our patients reported any knot-related complication or bother, nor hematomas, numbness, or painful erections. The modified 16-dot plication technique for correction of penile curvature is a safe and effective method. This modification allowed the knots to be tucked into the plicated tunical tissue, avoiding knot-associated complications. Further investigation in a larger series of patients or multicenter studies is recommended.

  11. Application of point-to-point matching algorithms for background correction in on-line liquid chromatography-Fourier transform infrared spectrometry (LC-FTIR).

    PubMed

    Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M

    2010-03-15

A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on the use of a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user interaction, thus considerably facilitating the application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks. Copyright (c) 2009 Elsevier B.V. All rights reserved.
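The core of the point-to-point matching can be sketched as a nearest-reference search restricted to a chosen spectral window (one free of analyte absorption), followed by subtraction of the winning reference; array shapes and names are assumptions for illustration:

```python
import numpy as np

def correct_background(sample_specs, ref_specs, window):
    """For each sample spectrum, pick the eluent-only reference spectrum
    that matches best point-to-point inside `window` (index pair lo, hi)
    and subtract it, leaving the analyte contribution."""
    lo, hi = window
    out = np.empty_like(sample_specs, dtype=float)
    for i, s in enumerate(sample_specs):
        # sum of squared point-to-point differences in the window only
        d = np.sum((ref_specs[:, lo:hi] - s[lo:hi]) ** 2, axis=1)
        out[i] = s - ref_specs[np.argmin(d)]
    return out
```

Restricting the comparison window to an analyte-free region is what lets the algorithm track the changing eluent background through the gradient without subtracting analyte signal.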

  12. Differential Deposition Technique for Figure Corrections in Grazing Incidence X-ray Optics

    NASA Technical Reports Server (NTRS)

    Kilaru, Kiranmayee; Ramsey, Brian D.; Gubarev, Mikhail

    2009-01-01

A differential deposition technique is being developed to correct the low- and mid-spatial-frequency deviations in the axial figure profile of Wolter-type grazing incidence X-ray optics. These deviations arise from various factors in the fabrication process, and they degrade the performance of the optics by limiting the achievable angular resolution. In the differential deposition technique, material of varying thickness is selectively deposited along the length of the optic to minimize these deviations, thereby improving the overall figure. High-resolution focusing optics being developed at MSFC for small animal radionuclide imaging are being coated to test the differential deposition technique. The required spatial resolution for these optics is 100 μm. This base resolution is achievable with the regular electroform-nickel-replication fabrication technique used at MSFC. However, by improving the figure quality of the optics through differential deposition, we aim to significantly improve the resolution beyond this value.
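The figure-correction principle reduces to depositing more material where the surface sits low relative to the desired profile; a deliberately simplified sketch that ignores beam footprint and dwell-time calibration (names are illustrative):

```python
import numpy as np

def deposition_profile(measured, desired):
    """Thickness of material to add at each axial position so that figure
    deviations are levelled out: deposit most where the surface is lowest
    relative to the desired profile, and nothing at the highest point."""
    err = measured - desired         # axial figure error
    return err.max() - err           # non-negative deposition thickness
```

In practice the commanded profile would also be deconvolved with the deposition beam's footprint, which is what limits how high a spatial frequency the method can correct.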

  13. Herramientas y tecnicas para corregir composiciones electronicamente (Tools and Techniques for Correcting Compositions Electronically).

    ERIC Educational Resources Information Center

    Larsen, Mark D.

    2001-01-01

Although most teachers use word processors and electronic mail on a daily basis, they still depend on paper and pencil for correcting their students' compositions. This article suggests some tools and techniques for submitting, editing, and returning written work electronically.

  14. A technique for correcting ERTS data for solar and atmospheric effects

    NASA Technical Reports Server (NTRS)

    Rogers, R. H.; Peacock, K.; Shah, N. J.

    1974-01-01

A technique is described by which ERTS investigators can obtain and utilize solar and atmospheric parameters to transform spacecraft radiance measurements into absolute target reflectance signatures. A radiant power measuring instrument (RPMI) and its use in determining atmospheric parameters needed for ground truth are discussed. The procedures used and results achieved in processing ERTS CCTs to correct imagery for atmospheric effects are reviewed. Examples are given which demonstrate the nature and magnitude of atmospheric effects on computer classification programs.

  15. Biogeosystem Technique as a method to correct the climate

    NASA Astrophysics Data System (ADS)

    Kalinitchenko, Valery; Batukaev, Abdulmalik; Batukaev, Magomed; Minkina, Tatiana

    2017-04-01

can be produced; the less energy is consumed for climate correction, the better. The proposed algorithm was never discussed before because most of its ingredients were unenforceable. Now the possibility of executing the algorithm exists within the framework of our new scientific-technical branch, Biogeosystem Technique (BGT*). The BGT* is a transcendental (not imitating natural processes) approach to soil processing and to the regulation of energy, matter and water fluxes and the biological productivity of the biosphere: intra-soil machining to provide a new, highly productive dispersed soil system; intra-soil pulse continuous-discrete plant watering to reduce the transpiration rate and the water consumption of plants by 5-20 times; and intra-soil environmentally safe return of matter during intra-soil milling processing and (or) intra-soil pulse continuous-discrete plant watering with nutrition. The following become possible: waste management; reducing the flow of nutrients to water systems; transformation of carbon and other organic and mineral substances in the soil into plant nutrition elements; less degradation of biological matter to greenhouse gases; increased biological sequestration of carbon dioxide in terrestrial photosynthesis; oxidation of methane and hydrogen sulfide by fresh, photosynthesis-ionized, biologically active oxygen; and expansion of the active terrestrial site of the biosphere. A high biological product output of the biosphere will be gained. BGT* robotic systems are of low cost, energy and material consumption. By BGT* methods the uncertainties of climate and biosphere will be reduced. Key words: Biogeosystem Technique, method to correct, climate

  16. A new technique for measuring aerosols with moonlight observations and a sky background model

    NASA Astrophysics Data System (ADS)

    Jones, Amy; Noll, Stefan; Kausch, Wolfgang; Kimeswenger, Stefan; Szyszka, Ceszary; Unterguggenberger, Stefanie

    2014-05-01

There have been ample studies of aerosols in urban, daylight conditions, but few of remote, nocturnal aerosols. We have developed a new technique for investigating such aerosols using our sky background model and astronomical observations. With a dedicated observing proposal we have successfully tested this technique for nocturnal, remote aerosol studies. This technique relies on three requirements: (a) a sky background model, (b) observations taken with scattered moonlight, and (c) spectrophotometric standard star observations for flux calibration. The sky background model was developed for the European Southern Observatory and is optimized for the Very Large Telescope at Cerro Paranal in the Atacama desert in Chile. This is a remote location with almost no urban aerosols. It is well suited for studying remote background aerosols that are normally difficult to detect. Our sky background model has an uncertainty of around 20 percent, and the scattered moonlight portion is even more accurate. The last two requirements are having astronomical observations with moonlight and of standard stars at different airmasses, all during the same night. We had a dedicated observing proposal at Cerro Paranal with the instrument X-Shooter to use as a case study for this method. X-Shooter is a medium-resolution echelle spectrograph which covers the wavelengths from 0.3 to 2.5 micrometers. We observed plain sky at six different angular distances (7, 13, 20, 45, 90, and 110 degrees) from the Moon for three different Moon phases (between full and half). Also, direct observations of spectrophotometric standard stars were taken at two different airmasses each night to measure the extinction curve via the Langley method. This is an ideal data set for testing this technique. The underlying assumption is that all components, other than the atmospheric conditions (specifically aerosols and airglow), can be calculated with the model for the given observing parameters. The scattered
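The Langley extinction measurement mentioned above can be sketched in a few lines: the logarithm of a standard star's signal is fit against airmass, and the slope gives the optical depth. All numbers below are synthetic, not observed values:

```python
import numpy as np

# Langley-method sketch: ln(S) = ln(S0) - tau * X for signal S at airmass X.
# The optical depth tau_true and top-of-atmosphere signal s0_true are invented.
tau_true, s0_true = 0.12, 1000.0
airmass = np.array([1.0, 1.2, 1.5, 2.0, 2.5])
signal = s0_true * np.exp(-tau_true * airmass)

# Fit ln(signal) against airmass; the slope is -tau, the intercept ln(S0)
slope, intercept = np.polyfit(airmass, np.log(signal), 1)
tau_est, s0_est = -slope, np.exp(intercept)
```

Repeating the fit per wavelength bin yields the extinction curve, from which the aerosol contribution is separated.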

  17. Fast correction approach for wavefront sensorless adaptive optics based on a linear phase diversity technique.

    PubMed

    Yue, Dan; Nie, Haitao; Li, Ye; Ying, Changsheng

    2018-03-01

Wavefront sensorless (WFSless) adaptive optics (AO) systems have been widely studied in recent years. To reach optimum results, such systems require an efficient correction method. This paper presents a fast wavefront correction approach for a WFSless AO system based mainly on the linear phase diversity (PD) technique. The fast closed-loop control algorithm is built on the linear relationship between the drive voltage of the deformable mirror (DM) and the far-field images of the system, which is obtained through the linear PD algorithm combined with the influence function of the DM. A large number of phase screens under different turbulence strengths are simulated to test the performance of the proposed method. The numerical simulation results show that the method has a fast convergence rate and strong correction ability: a few correction iterations achieve good results, and the imaging quality of the system is effectively improved while fewer CCD measurements are needed.

  18. The Taylor saddle effacement: a new technique for correction of saddle nose deformity.

    PubMed

    Taylor, S Mark; Rigby, Matthew H

    2008-02-01

To describe a novel technique, the Taylor saddle effacement (TSE), for correction of saddle nose deformity using autologous grafts from the lower lateral cartilages. A prospective evaluation of six patients, all of whom had the TSE performed. Photographs were taken in combination with completion of a rhinoplasty outcomes questionnaire preoperatively and at 6 months. The questionnaire included a visual analogue scale (VAS) of nasal breathing and a rhinoplasty outcomes evaluation (ROE) of nasal function and esthetics. All six patients had statistically significant improvement both in global nasal airflow on the VAS and in their ROE scores. The mean preoperative VAS score was 5.8, compared with a postoperative mean of 8.5 of a possible 10. Mean ROE scores improved from 34.7 to 85.5. At 6 months, all patients felt that their nasal appearance had improved. The TSE is a simple and reliable technique for correction of saddle nose deformity. This prospective study demonstrated improvement in both nasal function and esthetics when it was employed.

  19. Publisher Correction: Cluster richness-mass calibration with cosmic microwave background lensing

    NASA Astrophysics Data System (ADS)

    Geach, James E.; Peacock, John A.

    2018-03-01

    Owing to a technical error, the `Additional information' section of the originally published PDF version of this Letter incorrectly gave J.A.P. as the corresponding author; it should have read J.E.G. This has now been corrected. The HTML version is correct.

  20. [Preliminary results in the correction of the pectus excavatum with the Acastello modified Welch technique].

    PubMed

    Lorenzo, G R; Gutiérrez Dueñas, J M; Ardela, E; Martín Pinto, F

    2011-10-01

Congenital malformations of the chest wall are a heterogeneous group of diseases affecting the costal cartilage, ribs, sternum, scapula and clavicle. The pectus excavatum is characterized by a posterior depression of the sternum. The Acastello-Welch technique consists of a partial resection of the costal cartilages, adding bars or plates unilaterally fixed to the sternum in each hemithorax. From October 2008 to March 2011 we evaluated 108 patients with congenital malformations of the chest wall. Forty-seven patients (44%) had a pectus excavatum and 12 were treated with the Acastello-Welch technique. There were no intraoperative complications. After a mean follow-up of 27 months, correction of the deformity was very satisfactory, both objectively and subjectively, for the patients. The Welch thoracoplasty modified by Acastello is a good option for the correction of the pectus excavatum, associating little morbidity and good esthetic outcomes.

  1. Quantitative coronary angiography using image recovery techniques for background estimation in unsubtracted images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Jerry T.; Kamyar, Farzad; Molloi, Sabee

    2007-10-15

Densitometry measurements have been performed previously using subtracted images. However, digital subtraction angiography (DSA) in coronary angiography is highly susceptible to misregistration artifacts due to the temporal separation of background and target images. Misregistration artifacts due to respiration and patient motion occur frequently, and organ motion is unavoidable. Quantitative densitometric techniques would be more clinically feasible if they could be implemented using unsubtracted images. The goal of this study is to evaluate image recovery techniques for densitometry measurements using unsubtracted images. A humanoid phantom and eight swine (25-35 kg) were used to evaluate the accuracy and precision of the following image recovery techniques: local averaging (LA), morphological filtering (MF), linear interpolation (LI), and curvature-driven diffusion image inpainting (CDD). Images of iodinated vessel phantoms placed over the heart of the humanoid phantom or swine were acquired. In addition, coronary angiograms were obtained after power injections of a nonionic iodinated contrast solution in an in vivo swine study. Background signals were estimated and removed with LA, MF, LI, and CDD. Iodine masses in the vessel phantoms were quantified and compared to known amounts. Moreover, the total iodine in left anterior descending arteries was measured and compared with DSA measurements. In the humanoid phantom study, the average root mean square errors associated with quantifying iodine mass using LA and MF were approximately 6% and 9%, respectively. The corresponding average root mean square errors associated with quantifying iodine mass using LI and CDD were both approximately 3%. In the in vivo swine study, the root mean square errors associated with quantifying iodine in the vessel phantoms with LA and MF were approximately 5% and 12%, respectively. The corresponding average root mean square errors using LI and CDD were both 3%. The standard
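Of the four recovery techniques, linear interpolation (LI) is the simplest to illustrate: the background under the vessel is estimated from the flanking tissue pixels and subtracted, leaving the densitometric iodine signal. A one-dimensional sketch with invented intensities (not study data):

```python
import numpy as np

# Synthetic intensity profile across a vessel; indices 3-5 lie over iodine.
profile = np.array([10.0, 10.5, 11.0, 18.0, 19.0, 18.5, 12.0, 12.5])
vessel = np.zeros(len(profile), dtype=bool)
vessel[3:6] = True                       # pixels covered by the vessel

x = np.arange(len(profile))
# Estimate the background under the vessel by linearly interpolating
# between the surrounding (non-vessel) tissue pixels
background = np.interp(x, x[~vessel], profile[~vessel])
iodine_signal = profile - background     # densitometric signal of interest
```

Outside the vessel the estimate reproduces the data exactly, so the subtraction only acts where the background was hidden, which is what removes the need for a temporally separated mask image.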

  2. Correction for intravascular activity in the oxygen-15 steady-state technique is independent of the regional hematocrit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lammertsma, A.A.; Baron, J.C.; Jones, T.

    1987-06-01

The oxygen-15 steady-state technique to measure the regional cerebral metabolic rate for oxygen requires a correction for the nonextracted intravascular molecular oxygen-15. To perform this correction, an additional procedure is carried out using RBCs labeled with ¹¹CO or C¹⁵O. The previously reported correction method, however, required knowledge of the regional cerebral to large vessel hematocrit ratio. A closer examination of the underlying model eliminated this ratio. Both molecular oxygen and carbon monoxide are carried by RBCs and are therefore similarly affected by a change in hematocrit.

  3. Quantitative Evaluation of 2 Scatter-Correction Techniques for 18F-FDG Brain PET/MRI in Regard to MR-Based Attenuation Correction.

    PubMed

    Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika

    2017-10-01

In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC on scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET

  4. Evaluation of new Deflux administration techniques: intraureteric HIT and Double HIT for the endoscopic correction of vesicoureteral reflux.

    PubMed

    Kirsch, Andrew J; Arlen, Angela M

    2014-09-01

    Vesicoureteral reflux (VUR) is one of the most common urologic diagnoses affecting children, and optimal treatment requires an individualized approach that considers potential risks. Management options include observation with or without continuous antibiotic prophylaxis and surgical correction via endoscopic, open or laparoscopic/robotic approaches. Endoscopic correction of VUR is an outpatient procedure associated with decreased morbidity compared with ureteral reimplantation. The concept of ureteral hydrodistention and intraluminal submucosal injection (Hydrodistention Implantation Technique [HIT]) has led to improved success rates in eliminating VUR compared with the subureteral transurethral injection technique. Further modifications now include use of proximal and distal intraluminal injections (Double HIT) that result in coaptation of both the ureteral tunnel and orifice. Endoscopic injection of dextranomer/hyaluronic acid copolymer, via the HIT and Double HIT, has emerged as a highly successful, minimally invasive alternative to open surgical correction, with minimal associated morbidity.

  5. Analysis and correction of ground reflection effects in measured narrowband sound spectra using cepstral techniques

    NASA Technical Reports Server (NTRS)

    Miles, J. H.; Stevens, G. H.; Leininger, G. G.

    1975-01-01

Ground reflections generate undesirable effects on acoustic measurements such as those conducted outdoors for jet noise research, aircraft certification, and motor vehicle regulation. Cepstral techniques developed in speech processing are adapted to identify echo delay time and to correct for ground reflection effects. A sample result is presented using an actual narrowband sound pressure level spectrum. The technique can readily be adapted to existing fast Fourier transform type spectrum measurement instrumentation to provide field measurements of echo time delays.
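The cepstral identification of echo delay can be sketched directly: a signal plus a delayed, attenuated copy produces ripple in the log spectrum, and the real cepstrum (inverse FFT of the log magnitude spectrum) peaks at the echo delay. The signal, delay, and reflectivity below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n, delay, reflectivity = 1024, 25, 0.6
s = rng.standard_normal(n)               # broadband "direct" signal
x = s.copy()
x[delay:] += reflectivity * s[:-delay]   # add a ground-reflection echo

# Real cepstrum: inverse FFT of the log magnitude spectrum
log_mag = np.log(np.abs(np.fft.rfft(x)) + 1e-12)
cepstrum = np.fft.irfft(log_mag, n=n)

# The dominant peak away from zero quefrency marks the echo delay
est_delay = int(np.argmax(cepstrum[1 : n // 2])) + 1
```

Knowing the delay, the periodic ripple it imposes on the measured spectrum can then be modeled and removed.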

  6. Background and survey of bioreplication techniques.

    PubMed

    Pulsifer, Drew Patrick; Lakhtakia, Akhlesh

    2011-09-01

    Bioreplication is the direct reproduction of a biological structure in order to realize at least one specific functionality. Current bioreplication techniques include the sol-gel technique, atomic layer deposition, physical vapor deposition, and imprint lithography and casting. The combined use of a focused ion beam and a scanning electron microscope could develop into a bioreplication technique as well. Some of these techniques are more suitable for reproducing surface features, others for bulk three-dimensional structures. Industrial upscaling appears possible only for imprint lithography and casting (which can be replaced by stamping).

  7. Comparison of a two-dimensional adaptive-wall technique with analytical wall interference correction techniques

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.

    1992-01-01

A two-dimensional airfoil model was tested in the adaptive-wall test section of the NASA Langley 0.3-meter Transonic Cryogenic Tunnel (TCT) and in the ventilated test section of the National Aeronautical Establishment Two-Dimensional High Reynolds Number Facility (HRNF). The primary goal of the tests was to compare different techniques (adaptive test-section walls and classical analytical corrections) for accounting for wall interference. Tests were conducted over a Mach number range from 0.3 to 0.8 at chord Reynolds numbers of 10 × 10^6, 15 × 10^6, and 20 × 10^6. The angle of attack was varied from about 12 degrees up to stall. Movement of the top and bottom test-section walls was used to account for the wall interference in the HRNF tests. The test results are in good agreement.

  8. Intensity-corrected Herschel Observations of Nearby Isolated Low-mass Clouds

    NASA Astrophysics Data System (ADS)

    Sadavoy, Sarah I.; Keto, Eric; Bourke, Tyler L.; Dunham, Michael M.; Myers, Philip C.; Stephens, Ian W.; Di Francesco, James; Webb, Kristi; Stutz, Amelia M.; Launhardt, Ralf; Tobin, John J.

    2018-01-01

We present intensity-corrected Herschel maps at 100, 160, 250, 350, and 500 μm for 56 isolated low-mass clouds. We determine the zero-point corrections for Herschel Photodetector Array Camera and Spectrometer (PACS) and Spectral Photometric Imaging Receiver (SPIRE) maps from the Herschel Science Archive (HSA) using Planck data. Since these HSA maps are small, we cannot correct them using typical methods. Here we introduce a technique to measure the zero-point corrections for small Herschel maps. We use radial profiles to identify offsets between the observed HSA intensities and the expected intensities from Planck. Most clouds have reliable offset measurements with this technique. In addition, we find that roughly half of the clouds have underestimated HSA-SPIRE intensities in their outer envelopes relative to Planck, even though the HSA-SPIRE maps were previously zero-point corrected. Using our technique, we produce corrected Herschel intensity maps for all 56 clouds and determine their line-of-sight average dust temperatures and optical depths from modified blackbody fits. The clouds have typical temperatures of ∼14–20 K and optical depths of ∼10⁻⁵–10⁻³. Across the whole sample, we find an anticorrelation between temperature and optical depth. We also find lower temperatures than what was measured in previous Herschel studies, which subtracted out a background level from their intensity maps to circumvent the zero-point correction. Accurate Herschel observations of clouds are key to obtaining accurate density and temperature profiles. To make such future analyses possible, intensity-corrected maps for all 56 clouds are publicly available in the electronic version. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
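The offset measurement can be sketched simply: with the map-to-model slope fixed at unity, the zero-point correction reduces to the mean difference between the Planck-predicted and observed intensities along the radial profile. The profile shape, units, and 5 MJy/sr offset below are invented for illustration:

```python
import numpy as np

# Synthetic radial profile (MJy/sr): a centrally peaked cloud on a pedestal
radius = np.arange(1, 11, dtype=float)
expected = 40.0 / radius + 12.0          # Planck-based predicted intensities
observed = expected - 5.0                # HSA map missing an additive zero point

# Zero-point correction: mean offset between prediction and observation
offset = np.mean(expected - observed)
corrected = observed + offset
```

Because the correction is purely additive, it restores the absolute intensity scale without altering the profile shape used for the modified blackbody fits.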

  9. Improvement of Aerosol Optical Depth Retrieval over Hong Kong from a Geostationary Meteorological Satellite Using Critical Reflectance with Background Optical Depth Correction

    NASA Technical Reports Server (NTRS)

    Kim, Mijin; Kim, Jhoon; Wong, Man Sing; Yoon, Jongmin; Lee, Jaehwa; Wu, Dong L.; Chan, P.W.; Nichol, Janet E.; Chung, Chu-Yong; Ou, Mi-Lim

    2014-01-01

Despite continuous efforts to retrieve aerosol optical depth (AOD) using a conventional 5-channel meteorological imager in geostationary orbit, the accuracy in urban areas has been poorer than in other areas, primarily due to complex urban surface properties and mixed aerosol types from different emission sources. The two largest error sources in aerosol retrieval have been aerosol type selection and surface reflectance. In selecting the aerosol type from a single visible channel, the season-dependent aerosol optical properties were adopted from long-term measurements of Aerosol Robotic Network (AERONET) sun-photometers. With the aerosol optical properties obtained from the AERONET inversion data, look-up tables were calculated by using a radiative transfer code: the Second Simulation of the Satellite Signal in the Solar Spectrum (6S). Surface reflectance was estimated using the clear-sky composite method, a widely used technique for geostationary retrievals. Over East Asia, the AOD retrieved from the Meteorological Imager showed good agreement, although the values were affected by cloud contamination errors. However, the conventional retrieval of the AOD over Hong Kong was largely underestimated due to the lack of information on the aerosol type and surface properties. To detect spatial and temporal variation of aerosol type over the area, the critical reflectance method, a technique to retrieve single scattering albedo (SSA), was applied. Additionally, the background aerosol effect was corrected to improve the accuracy of the surface reflectance over Hong Kong. The AOD retrieved from a modified algorithm was compared to the collocated data measured by AERONET in Hong Kong. The comparison showed that the new aerosol type selection using the critical reflectance and the corrected surface reflectance significantly improved the accuracy of AODs in Hong Kong areas, with a correlation coefficient increase from 0.65 to 0.76 and a regression line change from tMI [basic algorithm] = 0

  10. PET motion correction in context of integrated PET/MR: Current techniques, limitations, and future projections.

    PubMed

    Gillman, Ashley; Smith, Jye; Thomas, Paul; Rose, Stephen; Dowson, Nicholas

    2017-12-01

    Patient motion is an important consideration in modern PET image reconstruction. Advances in PET technology mean motion has an increasingly important influence on resulting image quality. Motion-induced artifacts can have adverse effects on clinical outcomes, including missed diagnoses and oversized radiotherapy treatment volumes. This review aims to summarize the wide variety of motion correction techniques available in PET and combined PET/CT and PET/MR, with a focus on the latter. A general framework for the motion correction of PET images is presented, consisting of acquisition, modeling, and correction stages. Methods for measuring, modeling, and correcting motion and associated artifacts, both in literature and commercially available, are presented, and their relative merits are contrasted. Identified limitations of current methods include modeling of aperiodic and/or unpredictable motion, attaining adequate temporal resolution for motion correction in dynamic kinetic modeling acquisitions, and maintaining availability of the MR in PET/MR scans for diagnostic acquisitions. Finally, avenues for future investigation are discussed, with a focus on improvements that could improve PET image quality, and that are practical in the clinical environment. © 2017 American Association of Physicists in Medicine.

  11. A fast combination calibration of foreground and background for pipelined ADCs

    NASA Astrophysics Data System (ADS)

    Kexu, Sun; Lenian, He

    2012-06-01

This paper describes a fast digital calibration scheme for pipelined analog-to-digital converters (ADCs). The proposed method corrects the nonlinearity caused by finite opamp gain and capacitor mismatch in multiplying digital-to-analog converters (MDACs). The considered calibration technique takes advantage of both foreground and background calibration schemes. In this combination calibration algorithm, a novel parallel background calibration with signal-shifted correlation is proposed, and its calibration cycle is very short. The details of this technique are described in the example of a 14-bit 100 Msample/s pipelined ADC. The high convergence speed of this background calibration is achieved by three means. First, a modified 1.5-bit stage is proposed in order to allow the injection of a large pseudo-random dither without missing codes. Second, before correlating the signal, it is shifted according to the input signal so that the correlation error converges quickly. Finally, the front pipeline stages are calibrated simultaneously rather than stage by stage to reduce the calibration tracking constants. Simulation results confirm that the combination calibration has a fast startup process and a short background calibration cycle of 2 × 2^21 conversions.
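The correlation idea behind such background calibration can be sketched in isolation: a zero-mean pseudo-random dither is injected in a stage, and correlating the digitized output with the dither sequence recovers the stage's actual (error-laden) gain, because the input signal is uncorrelated with the dither and averages out. The gain error, dither amplitude, and sample count below are made up, and the model omits the stage-by-stage pipeline details:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1 << 20                               # conversions used for the estimate
gain_ideal, gain_actual = 2.0, 1.97       # finite-opamp-gain error (invented)
d = 0.25                                  # dither amplitude
pn = rng.choice([-1.0, 1.0], size=n)      # pseudo-random +/-1 sequence
x = rng.uniform(-1.0, 1.0, size=n)        # stage input signal

y = gain_actual * (x + d * pn)            # stage output seen by the backend

# E[y * pn] = gain_actual * d, since the input term averages out
gain_est = np.mean(y * pn) / d
correction = gain_ideal / gain_est        # digital gain-correction factor
```

The convergence speed is set by how quickly the input term averages out, which is why the paper shifts the signal before correlating.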

  12. Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Deng, H.; Lin, S.

    1984-01-01

A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K × 8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special high-speed decoding techniques are presented for extended single- and double-error-correcting RS codes. These techniques are designed to find the error locations and the error values directly from the syndrome without having to form the error locator polynomial and solve for its roots.
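For the single-error case, decoding directly from the syndromes is short enough to sketch: with syndromes S1 = e·α^j and S2 = e·α^(2j), the error location is log(S2/S1) and the error value is S1²/S2, with no locator polynomial. The field (GF(2^8) with primitive polynomial 0x11d) and the 15-symbol word length are illustrative choices, not the paper's code parameters:

```python
# GF(2^8) log/antilog tables, primitive polynomial x^8+x^4+x^3+x^2+1 (0x11d)
EXP, LOG = [0] * 512, [0] * 256
v = 1
for i in range(255):
    EXP[i] = v
    LOG[v] = i
    v <<= 1
    if v & 0x100:
        v ^= 0x11D
for i in range(255, 512):                 # extend so products need no modulo
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def syndrome(r, k):
    """Evaluate the received word at alpha^k: S_k = sum_i r_i * alpha^(k*i)."""
    s = 0
    for i, ri in enumerate(r):
        s ^= gf_mul(ri, EXP[(k * i) % 255])
    return s

# Received word: a valid (all-zero) codeword with one injected error
received = [0] * 15
received[5] ^= 0x37                       # error value 0x37 at position 5

s1, s2 = syndrome(received, 1), syndrome(received, 2)
# S1 = e*alpha^j, S2 = e*alpha^(2j)  =>  j = log(S2/S1), e = S1^2/S2
loc = (LOG[s2] - LOG[s1]) % 255
val = EXP[(2 * LOG[s1] - LOG[s2]) % 255]
received[loc] ^= val                      # corrected
```

This is one division and one squaring per corrected symbol, which is the kind of direct syndrome arithmetic the abstract contrasts with iterative decoding.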

  13. Techniques for the correction of topographical effects in scanning Auger electron microscopy

    NASA Technical Reports Server (NTRS)

    Prutton, M.; Larson, L. A.; Poppa, H.

    1983-01-01

    A number of ratioing methods for correcting Auger images and linescans for topographical contrast are tested using anisotropically etched silicon substrates covered with Au or Ag. Thirteen well-defined angles of incidence are present on each polyhedron produced on the Si by this etching. If N1 electrons are counted at the energy of an Auger peak and N2 are counted in the background above the peak, then N1, N1 - N2, (N1 - N2)/(N1 + N2) are measured and compared as methods of eliminating topographical contrast. The latter method gives the best compensation but can be further improved by using a measurement of the sample absorption current. Various other improvements are discussed.
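Why the (N1 − N2)/(N1 + N2) ratio suppresses topographic contrast can be seen from a toy model in which each facet's counts are a common topographic factor t (set by the local angle of incidence) multiplying the Auger peak signal plus background; the factor cancels in the ratio. The signal, background, and facet factors below are invented:

```python
import numpy as np

A, B = 100.0, 40.0                        # Auger signal and background (invented)
t = np.array([1.0, 0.6, 1.4])             # topographic factors of three facets

N1 = t * (A + B)                          # counts at the Auger peak energy
N2 = t * B                                # counts in the background above it

raw = N1 - N2                             # still proportional to t
ratio = (N1 - N2) / (N1 + N2)             # t cancels: equals A / (A + 2*B)
```

The background-subtracted image still varies facet to facet, while the normalized ratio is identical across facets, matching the compensation ranking reported in the abstract.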

  14. An improved dark-object subtraction technique for atmospheric scattering correction of multispectral data

    USGS Publications Warehouse

    Chavez, P.S.

    1988-01-01

Digital analysis of remotely sensed data has become an important component of many earth-science studies. These data are often processed through a set of preprocessing or "clean-up" routines that includes a correction for atmospheric scattering, often called haze. Various methods to correct or remove the additive haze component have been developed, including the widely used dark-object subtraction technique. A problem with most of these methods is that the haze values for each spectral band are selected independently. This can create problems because atmospheric scattering is highly wavelength-dependent in the visible part of the electromagnetic spectrum and the scattering values are correlated with each other. Therefore, multispectral data such as from the Landsat Thematic Mapper and Multispectral Scanner must be corrected with haze values that are spectral band dependent. An improved dark-object subtraction technique is demonstrated that allows the user to select a relative atmospheric scattering model to predict the haze values for all the spectral bands from a selected starting band haze value. The improved method normalizes the predicted haze values for the different gain and offset parameters used by the imaging system. Examples of haze value differences between the old and improved methods for Thematic Mapper Bands 1, 2, 3, 4, 5, and 7 are 40.0, 13.0, 12.0, 8.0, 5.0, and 2.0 vs. 40.0, 13.2, 8.9, 4.9, 16.7, and 3.3, respectively, using a relative scattering model of a clear atmosphere. In one Landsat multispectral scanner image the haze value differences for Bands 4, 5, 6, and 7 were 30.0, 50.0, 50.0, and 40.0 for the old method vs. 30.0, 34.4, 43.6, and 6.4 for the new method using a relative scattering model of a hazy atmosphere. © 1988.
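The improved prediction step can be sketched as: convert the starting band's measured haze DN to radiance, scale it across bands with the relative scattering model, and convert back to DN with each band's own gain and offset. The band centers, gains, and offsets below are invented, and a λ⁻⁴ clear-atmosphere scattering model is assumed rather than taken from the paper:

```python
import numpy as np

# Hypothetical sensor description (not Landsat calibration values)
wavelength = np.array([0.49, 0.56, 0.66, 0.83])   # band centers, micrometers
gain = np.array([1.0, 1.3, 1.1, 0.9])             # DN per radiance unit
offset = np.array([2.0, 2.5, 2.3, 1.8])           # DN

start_band, start_haze_dn = 0, 40.0               # dark-object DN in the start band

# DN -> radiance in the starting band, scale by the relative scattering
# model, then radiance -> DN per band with that band's gain and offset
start_rad = (start_haze_dn - offset[start_band]) / gain[start_band]
model = (wavelength / wavelength[start_band]) ** -4.0
haze_dn = gain * (start_rad * model) + offset
```

Predicting all bands from one starting value is what keeps the per-band haze estimates mutually consistent with the wavelength dependence of scattering, instead of selecting them independently.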

  15. A New Approach for the Correction of Prominent Ear Deformity: The Distally Based Perichondrio-Adipo-Dermal Flap Technique.

    PubMed

    Cihandide, Ercan; Kayiran, Oguz; Aydin, Elif Eren; Uzunismail, Adnan

    2016-06-01

    Otoplasty techniques are generally divided into 2 categories as cartilage-cutting and cartilage-sparing. The cartilage-cutting techniques have been criticized because of their high risk of hematoma, skin necrosis, and ear deformity. As a result, suture-based cartilage-sparing methods like Mustardé and Furnas-type suture techniques have become increasingly popular. However, with these techniques postauricular suture extrusion may be seen and recurrence rates of up to 25% have been reported. In this study, cartilage-sparing otoplasty is redefined by introduction of the distally based perichondrio-adipo-dermal flap which is elevated from the postauricular region. Thirty-seven ears (17 bilateral and 3 unilateral) in 20 patients (14 females and 6 males) have been operated with the defined technique by the same surgeon. The distally based perichondrio-adipo-dermal flap is advanced posteriorly to correct the deformity, also acting as a strong postauricular support to prevent recurrence. In addition to the resultant natural-looking antihelical fold, the posterior advancement of the flap corrects both the otherwise wide conchoscaphal and conchomastoid angles. The operative technique is explained in detail with results and the literature is reviewed. There were no hematomas. After an average follow-up of 8.3 months (2-16 months), recurrence was seen in only 1 patient who requested no further surgery. No patients developed suture extrusion or granuloma. The authors introduce a simple and safe procedure to correct prominent ears with benefits including a resultant natural-looking antihelical fold and less tissue trauma. The distally based perichondrio-adipo-dermal flap seems to prevent suture extrusion and may also help to reduce recurrence rates. By forming neochondrogenesis which is stimulated by elevation of the perichondrium, this flap gives the promise of longer durability of the newly formed antihelical fold.

  16. A new corrective technique for adolescent idiopathic scoliosis: convex manipulation using 6.35 mm diameter pure titanium rod followed by concave fixation using 6.35 mm diameter titanium alloy

    PubMed Central

    2015-01-01

    Background It has been thought that corrective posterior surgery for adolescent idiopathic scoliosis (AIS) should be started on the concave side because initial convex manipulation would increase the risk of vertebral malrotation, worsening the rib hump. With the many new materials, implants, and manipulation techniques (e.g., direct vertebral rotation) now available, we hypothesized that manipulating the convex side first is no longer taboo. Methods Our technique has two major facets. (1) Curve correction is started from the convex side with a derotation maneuver and in situ bending followed by concave rod application. (2) A 6.35 mm diameter pure titanium rod is used on the convex side and a 6.35 mm diameter titanium alloy rod on the concave side. Altogether, 52 patients were divided into two groups. Group N included 40 patients (3 male, 37 female; average age 15.9 years) of Lenke type 1 (23 patients), type 2 (2), type 3 (3), type 5 (10), type 6 (2). They were treated with a new technique using 6.35 mm diameter different-stiffness titanium rods. Group C included 12 patients (all female, average age 18.8 years) of Lenke type 1 (6 patients), type 2 (3), type 3 (1), type 5 (1), type 6 (1). They were treated with conventional methods using 5.5 mm diameter titanium alloy rods. Radiographic parameters (Cobb angle/thoracic kyphosis/correction rates) and perioperative data were retrospectively collected and analyzed. Results Preoperative main Cobb angles (groups N/C) were 56.8°/60.0°, which had improved to 15.2°/17.1° at the latest follow-up. Thoracic kyphosis increased from 16.8° to 21.3° in group N and from 16.0° to 23.4° in group C. Correction rates were 73.2% in group N and 71.7% in group C. There were no significant differences for either parameter. Mean operating time, however, was significantly shorter in group N (364 min) than in group C (456 min). Conclusion We developed a new corrective surgical technique for AIS using a 6.35 mm diameter pure titanium

  17. Phase Error Correction in Time-Averaged 3D Phase Contrast Magnetic Resonance Imaging of the Cerebral Vasculature

    PubMed Central

    MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard

    2016-01-01

    Purpose Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600
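The whole-brain polynomial correction (WBPC) scheme can be sketched as a least-squares polynomial fit to the phase of static background tissue, evaluated and subtracted over the full image. Here a first-order (planar) fit on a synthetic phase map with an invented vessel region; the bias coefficients are made up:

```python
import numpy as np

ny, nx = 32, 32
y, x = np.mgrid[0:ny, 0:nx].astype(float)
bias = 0.01 * x - 0.02 * y + 0.5          # eddy-current-like phase bias

phase = bias.copy()
vessel = (x - 16) ** 2 + (y - 16) ** 2 < 16   # flowing blood, excluded from fit
phase[vessel] += 3.0                      # flow-induced phase in the vessel

# Least-squares fit of [1, x, y] over background (static tissue) pixels only
mask = ~vessel
design = np.column_stack([np.ones(mask.sum()), x[mask], y[mask]])
coef, *_ = np.linalg.lstsq(design, phase[mask], rcond=None)
fitted = coef[0] + coef[1] * x + coef[2] * y

corrected = phase - fitted                # background ~0, vessel phase preserved
```

Fitting only where no flow is expected is what lets the subtraction remove the spatially varying bias without touching the velocity-encoded phase inside the vessels.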

  18. Correlations Between Teacher and Student Backgrounds and Teacher Perceptions of Discipline Problems and Disciplinary Techniques.

    ERIC Educational Resources Information Center

    Moore, W. L.; Cooper, Harris

    1984-01-01

    A study in Columbia, Missouri, revealed that many teacher and student background characteristics correlated weakly but significantly with teachers' perceptions of the frequency of discipline infractions and the effectiveness of disciplinary techniques. The data (derived from school records and from a questionnaire to which 162 elementary teachers…

  19. Recovery of Background Structures in Nanoscale Helium Ion Microscope Imaging

    PubMed Central

    Carasso, Alfred S; Vladár, András E

    2014-01-01

    This paper discusses a two-step enhancement technique applicable to noisy Helium Ion Microscope images in which background structures are not easily discernible due to a weak signal. The method is based on a preliminary adaptive histogram equalization, followed by ‘slow motion’ low-exponent Lévy fractional diffusion smoothing. This combined approach is unexpectedly effective, resulting in a companion enhanced image in which background structures are rendered much more visible, and noise is significantly reduced, all with minimal loss of image sharpness. The method also provides useful enhancements of scanning charged-particle microscopy images obtained by composing multiple drift-corrected ‘fast scan’ frames. The paper includes software routines, written in Interactive Data Language (IDL), that can perform the above image processing tasks. PMID:26601050
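    The fractional diffusion step can be sketched as damping Fourier modes by exp(-t·|k|^(2β)) with a small exponent β < 1, the heavy-tailed Lévy analogue of Gaussian smoothing. A hedged numpy illustration (the parameter values and test image are assumptions; this does not reproduce the paper's IDL routines):

```python
import numpy as np

def levy_fractional_smooth(img, t=2.0, beta=0.2):
    """Fractional diffusion smoothing: damp each Fourier mode by
    exp(-t * |k|^(2*beta)).  beta = 1 recovers Gaussian (heat) smoothing;
    small beta gives the gentler low-exponent Levy smoothing."""
    ny, nx = img.shape
    ky = np.fft.fftfreq(ny).reshape(-1, 1)
    kx = np.fft.fftfreq(nx).reshape(1, -1)
    k2 = kx**2 + ky**2
    damp = np.exp(-t * k2**beta)        # damp[0, 0] == 1: mean preserved
    return np.real(np.fft.ifft2(np.fft.fft2(img) * damp))

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[20:40, 20:40] = 1.0               # a bright background structure
noisy = clean + 0.3 * rng.standard_normal((64, 64))
smoothed = levy_fractional_smooth(noisy, t=2.0, beta=0.2)
```

    Because every non-zero frequency is damped while the mean is untouched, noise variance strictly decreases while large-scale structure survives.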

  20. Recovery of Background Structures in Nanoscale Helium Ion Microscope Imaging.

    PubMed

    Carasso, Alfred S; Vladár, András E

    2014-01-01

    This paper discusses a two-step enhancement technique applicable to noisy Helium Ion Microscope images in which background structures are not easily discernible due to a weak signal. The method is based on a preliminary adaptive histogram equalization, followed by 'slow motion' low-exponent Lévy fractional diffusion smoothing. This combined approach is unexpectedly effective, resulting in a companion enhanced image in which background structures are rendered much more visible, and noise is significantly reduced, all with minimal loss of image sharpness. The method also provides useful enhancements of scanning charged-particle microscopy images obtained by composing multiple drift-corrected 'fast scan' frames. The paper includes software routines, written in Interactive Data Language (IDL), that can perform the above image processing tasks.

  1. Lightweight and Statistical Techniques for Petascale Debugging: Correctness on Petascale Systems (CoPS) Preliminary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Supinski, B R; Miller, B P; Liblit, B

    2011-09-13

    Petascale platforms with O(10^5) and O(10^6) processing cores are driving advancements in a wide range of scientific disciplines. These large systems create unprecedented application development challenges. Scalable correctness tools are critical to shorten the time-to-solution on these systems. Currently, many DOE application developers use primitive manual debugging based on printf or traditional debuggers such as TotalView or DDT. This paradigm breaks down beyond a few thousand cores, yet bugs often arise above that scale. Programmers must reproduce problems in smaller runs to analyze them with traditional tools, or else perform repeated runs at scale using only primitive techniques. Even when traditional tools run at scale, the approach wastes substantial effort and computation cycles. Continued scientific progress demands new paradigms for debugging large-scale applications. The Correctness on Petascale Systems (CoPS) project is developing a revolutionary debugging scheme that will reduce the debugging problem to a scale that human developers can comprehend. The scheme can provide precise diagnoses of the root causes of failure, including suggestions of the location and the type of errors down to the level of code regions or even a single execution point. Our fundamentally new strategy combines and expands three relatively new complementary debugging approaches. The Stack Trace Analysis Tool (STAT), a 2011 R&D 100 Award Winner, identifies behavior equivalence classes in MPI jobs and highlights behavior when elements of the class demonstrate divergent behavior, often the first indicator of an error. The Cooperative Bug Isolation (CBI) project has developed statistical techniques for isolating programming errors in widely deployed code that we will adapt to large-scale parallel applications. Finally, we are developing a new approach to parallelizing expensive correctness analyses, such as analysis of memory usage in the Memgrind tool. In the…

  2. Techniques for transparent lattice measurement and correction

    NASA Astrophysics Data System (ADS)

    Cheng, Weixing; Li, Yongjun; Ha, Kiman

    2017-07-01

    A novel method has been successfully demonstrated at NSLS-II to characterize the lattice parameters with gated BPM turn-by-turn (TbT) capability. This method can be used at high current operation. Conventional lattice characterization and tuning are carried out at low current in dedicated machine studies which include beam-based measurement/correction of orbit, tune, dispersion, beta-beat, phase advance, coupling etc. At the NSLS-II storage ring, we observed lattice drifting during beam accumulation in user operation. Coupling and lifetime change while insertion device (ID) gaps are moved. With the new method, dynamic lattice correction becomes possible, enabling reliable and productive operation. A bunch-by-bunch feedback system excites a small fraction (∼1%) of bunches and gated BPMs are aligned to see those bunch motions. The gated TbT position data are used to characterize the lattice so that corrections can be applied. As only ∼1% of the total charge is disturbed for a short period of time (several ms), this method is transparent to general user operation. We demonstrated the effectiveness of these tools during high current user operation.

  3. Experimental technique for simultaneous measurement of absorption-, emission cross-sections, and background loss coefficient in doped optical fibers

    NASA Astrophysics Data System (ADS)

    Karimi, M.; Seraji, F. E.

    2010-01-01

    We report a simple new technique for the simultaneous measurement of absorption and emission cross-sections, background loss coefficient, and dopant density of doped optical fibers with low dopant concentration. Using the proposed technique, the experimental characterization of a sample Ge-Er-doped optical fiber is presented, and the results are analyzed and compared with other reports. This technique is suitable for the production line of doped optical fibers.

  4. SU-E-I-07: An Improved Technique for Scatter Correction in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, S; Wang, Y; Lue, K

    2014-06-01

    Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to real scatter amounts by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of our proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For our technique and SSS, the normalized standard deviations of the reconstructed images were 0.053 and 0.182, respectively; the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using SF. This method can avoid the bias caused by the…
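    The calibration idea (scale the SSS scatter shape so that total scatter equals the predicted scatter fraction times the total prompts) can be sketched as follows. The calibration curve and projection data below are invented for illustration; they are not the paper's phantom values:

```python
import numpy as np

# Hypothetical phantom calibration: mean attenuation coefficient (1/cm)
# versus measured scatter fraction (made-up numbers).
mu_cal = np.array([0.080, 0.090, 0.096, 0.100])
sf_cal = np.array([0.25, 0.32, 0.38, 0.42])

def calibrated_scatter(sss_shape, prompts, mu_mean):
    """Scale an unscaled SSS scatter-shape estimate so that total scatter
    equals SF * total prompts, with SF predicted from the empirical
    SF-vs-attenuation curve instead of tail fitting."""
    sf = np.interp(mu_mean, mu_cal, sf_cal)   # predicted scatter fraction
    target = sf * prompts.sum()               # desired total scatter counts
    return sss_shape * (target / sss_shape.sum())

prompts = np.full(100, 50.0)                              # toy projection
sss_shape = np.exp(-((np.arange(100) - 50) / 30.0) ** 2)  # broad scatter shape
scatter = calibrated_scatter(sss_shape, prompts, mu_mean=0.096)
```

    The shape of the SSS estimate is kept; only its absolute scale is set by the predicted SF, avoiding reliance on a possibly tiny scatter-only tail.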

  5. Photon correlation study of background suppressed single InGaN nanocolumns

    NASA Astrophysics Data System (ADS)

    Yamamoto, Takatoshi; Maekawa, Michiru; Imanishi, Yusuke; Ishizawa, Shunsuke; Nakaoka, Toshihiro; Kishino, Katsumi

    2016-04-01

    We report on a linearly polarized non-classical light emission from a single InGaN/GaN nanocolumn, which is a site-controlled nanostructure allowing for pixel-like large-scale integration. We have developed a shadow mask technique to reduce background emissions arising from nitride deposits around single nanocolumns and defect states of GaN. The signal to background ratio is improved from 0.5:1 to 10:1, which allows for detailed polarization-dependent measurement and photon-correlation measurements. Polarization-dependent measurements show that linearly polarized emissions arise from excitonic recombination involving a heavy-hole-like electronic state, corresponding to the bulk exciton of an in-plane polarized A exciton. The second-order coherence function at time zero g (2)(0) is 0.52 at 20 K without background correction. This value is explained in terms of a statistical mixture of a single-photon emission with residual weak background emissions, as well as efficient carrier injection from other localized states.

  6. Correcting Flank Skin Laxity and Dog Ear Plus Aggressive Liposuction: A Technique for Classic Abdominoplasty in Middle-Eastern Obese Women

    PubMed Central

    Hosseini, Seyed Nejat; Ammari, Ali; Mousavizadeh, Seyed Mehdi

    2018-01-01

    BACKGROUND Nowadays, obesity is a common problem, as it leads to abdominal deformation and people's dissatisfaction with their own bodies. This study explored a new surgical technique, based on a different incision, to correct flank skin laxity and dog ear, plus aggressive liposuction, in women with abdominal deformities. METHODS From May 2014 to February 2016, 25 women were chosen for this study. All women had a body mass index greater than 28 kg/m2, flank folding, bulging and excess fat, and abdominal and flank skin sagging and laxity. An important point of the new technique was that the paramedian perforator was preserved. RESULTS All women were between 33 and 62 years old (mean age of 47±7.2 years). The average amount of liposuction aspirate was 2,350 mL (1700-3200 mL), and the average excised skin ellipse was 23.62×16.08 cm (from 19×15 to 27×18 cm). Dog ear, skin laxity, bulging and fat deposit correction were assessed and scored at two and four months after the surgery. CONCLUSION Aggressive abdominal and flank liposuction can be safely done when the paramedian perforator is preserved. This has a good cosmetic result in the abdomen and flank and prevents bulging at the incision end and flank. This abdominoplasty technique is recommended for patients with high body mass indexes. PMID:29651396

  7. Comparison of motion correction techniques applied to functional near-infrared spectroscopy data from children

    NASA Astrophysics Data System (ADS)

    Hu, Xiao-Su; Arredondo, Maria M.; Gomba, Megan; Confer, Nicole; DaSilva, Alexandre F.; Johnson, Timothy D.; Shalinsky, Mark; Kovelman, Ioulia

    2015-12-01

    Motion artifacts are the most significant sources of noise in the context of pediatric brain imaging designs and data analyses, especially in applications of functional near-infrared spectroscopy (fNIRS), where they can severely compromise the quality of the acquired data. Different methods have been developed to correct motion artifacts in fNIRS data, but the relative effectiveness of these methods for data from child and infant subjects (which is often found to be significantly noisier than adult data) remains largely unexplored. The issue is further complicated by the heterogeneity of fNIRS data artifacts. We compared the efficacy of the six most prevalent motion artifact correction techniques with fNIRS data acquired from children participating in a language acquisition task: wavelet filtering, spline interpolation, principal component analysis, moving average (MA), correlation-based signal improvement, and a combination of wavelet filtering and MA. The evaluation of five predefined metrics suggests that the MA and wavelet methods yield the best outcomes. These findings elucidate the varied nature of fNIRS data artifacts and the efficacy of artifact correction methods with pediatric populations, as well as help inform both the theory and practice of optical brain imaging analysis.
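    Of the compared methods, the moving average is the simplest to illustrate: convolve each channel with a short boxcar so brief motion spikes are diluted while slow haemodynamic trends pass through. A minimal sketch (window length and synthetic signal are assumptions, not the study's settings):

```python
import numpy as np

def moving_average(signal, window=5):
    """Simple moving-average smoothing of a single fNIRS channel."""
    kernel = np.ones(window) / window
    # mode="same" keeps the original length; edges are zero-padded
    return np.convolve(signal, kernel, mode="same")

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)
hemo = np.sin(0.5 * t)                        # slow haemodynamic-like signal
noisy = hemo + 0.1 * rng.standard_normal(t.size)
noisy[200] += 5.0                             # a sharp motion spike
cleaned = moving_average(noisy, window=11)
```

    The spike amplitude is reduced by roughly the window length, at the cost of some temporal blurring, which is why MA competes with the more selective wavelet approach in the comparison above.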

  8. Chromatographic background drift correction coupled with parallel factor analysis to resolve coelution problems in three-dimensional chromatographic data: quantification of eleven antibiotics in tap water samples by high-performance liquid chromatography coupled with a diode array detector.

    PubMed

    Yu, Yong-Jie; Wu, Hai-Long; Fu, Hai-Yan; Zhao, Juan; Li, Yuan-Na; Li, Shu-Fang; Kang, Chao; Yu, Ru-Qin

    2013-08-09

    Chromatographic background drift correction has been an important field of research in chromatographic analysis. In the present work, orthogonal spectral space projection for background drift correction of three-dimensional chromatographic data was described in detail and combined with parallel factor analysis (PARAFAC) to resolve overlapped chromatographic peaks and obtain the second-order advantage. This strategy was verified by simulated chromatographic data and afforded significant improvement in quantitative results. Finally, this strategy was successfully utilized to quantify eleven antibiotics in tap water samples. Compared with the traditional methodology of introducing excessive factors for the PARAFAC model to eliminate the effect of background drift, clear improvement in the quantitative performance of PARAFAC was observed after background drift correction by orthogonal spectral space projection. Copyright © 2013 Elsevier B.V. All rights reserved.
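    The core of orthogonal spectral space projection can be sketched as: estimate the background spectral subspace from peak-free scans, then project every spectrum onto its orthogonal complement, leaving only analyte signal. An illustrative numpy sketch with synthetic data (the spectra, drift model, and one-component subspace are assumptions, not the paper's dataset):

```python
import numpy as np

def osp_background_correction(X, bg_rows, n_comp=1):
    """Orthogonal spectral-space projection: X is a time x wavelength
    matrix; bg_rows selects peak-free scans used to estimate the
    background spectral basis."""
    # Principal background spectra via SVD of the peak-free scans
    _, _, Vt = np.linalg.svd(X[bg_rows], full_matrices=False)
    B = Vt[:n_comp]                        # background spectral basis
    P = np.eye(X.shape[1]) - B.T @ B       # projector onto orthogonal complement
    return X @ P

# Toy data: drifting background spectrum plus an analyte band
wl = np.arange(50)
bg_spec = np.exp(-wl / 25.0)               # background spectral shape
analyte = np.exp(-0.5 * ((wl - 30) / 2.0) ** 2)
X = np.linspace(1.0, 2.0, 40)[:, None] * bg_spec[None, :]   # drifting baseline
X[15:20] += 0.5 * analyte[None, :]         # analyte elutes in scans 15-19
corrected = osp_background_correction(X, bg_rows=slice(0, 10), n_comp=1)
```

    Scans containing only background are annihilated by the projection, while the analyte band keeps the part of its spectrum orthogonal to the background, which is what restores the trilinearity needed by PARAFAC.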

  9. Inverted Nipple Correction with Selective Dissection of Lactiferous Ducts Using an Operative Microscope and a Traction Technique.

    PubMed

    Sowa, Yoshihiro; Itsukage, Sizu; Morita, Daiki; Numajiri, Toshiaki

    2017-10-01

    An inverted nipple is a common congenital condition in young women that may cause breastfeeding difficulty, psychological distress, repeated inflammation, and loss of sensation. Various surgical techniques have been reported for correction of inverted nipples, and all have advantages and disadvantages. Here, we report a new technique for correction of an inverted nipple using an operative microscope and traction that results in low recurrence and preserves lactation function and sensation. Between January 2010 and January 2013, we treated eight inverted nipples in seven patients with selective lactiferous duct dissection using an operative microscope. An opposite Z-plasty was added at the junction of the nipple and areola. Postoperatively, traction was applied through an apparatus made from a rubber gasket attached to a sterile syringe. Patients were followed up for 15-48 months. Adequate projection was achieved in all patients, and there was no wound dehiscence or complications such as infection. Three patients had successful pregnancies and subsequent breastfeeding that was not adversely affected by the treatment. There was no loss of sensation in any patient during the postoperative period. Our technique for treating an inverted nipple is effective and preserves lactation function and nipple sensation. The method maintains traction for a longer period, which we believe increases the success rate of the surgery for correction of severely inverted nipples. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .

  10. A Comparison of Off-Level Correction Techniques for Airborne Gravity using GRAV-D Re-Flights

    NASA Astrophysics Data System (ADS)

    Preaux, S. A.; Melachroinos, S.; Diehl, T. M.

    2011-12-01

    The airborne gravity data collected for the GRAV-D project contain a number of tracks which have been flown multiple times, either by design or due to data collection issues. Where viable data can be retrieved, these re-flights are a valuable resource not only for assessing the quality of the data but also for evaluating the relative effectiveness of various processing techniques. Correcting for the instantaneous misalignment of the gravimeter sensitive axis with local vertical has been a long-standing challenge for stable platform airborne gravimetry. GRAV-D re-flights are used to compare the effectiveness of existing methods of computing this off-level correction (Valliant 1991, Peters and Brozena 1995, Swain 1996, etc.) and to assess the impact of possible modifications to these methods including pre-filtering accelerations, use of IMU horizontal accelerations in place of those derived from GPS positions and accurately compensating for GPS lever-arm and attitude effects prior to computing accelerations from the GPS positions (Melachroinos et al. 2010, B. de Saint-Jean, et al. 2005). The resulting corrected gravity profiles are compared to each other and to EGM08 in order to assess the accuracy and precision of each method. Preliminary results indicate that the methods presented in Peters & Brozena 1995 and Valliant 1991 completely correct the off-level error some of the time but only partially correct it at other times, while introducing an overall bias to the data of -0.5 to -2 mGal.

  11. Oxygen Reduction Reaction Measurements on Platinum Electrocatalysts Utilizing Rotating Disk Electrode Technique: I. Impact of Impurities, Measurement Protocols and Applied Corrections

    DOE PAGES

    Shinozaki, Kazuma; Zack, Jason W.; Richards, Ryan M.; ...

    2015-07-22

    The rotating disk electrode (RDE) technique is being extensively used as a screening tool to estimate the activity of novel PEMFC electrocatalysts synthesized in lab-scale (mg) quantities. Discrepancies in measured activity attributable to glassware and electrolyte impurity levels, as well as conditioning, protocols and corrections are prevalent in the literature. Moreover, the electrochemical response to a broad spectrum of commercially sourced perchloric acid and the effect of acid molarity on impurity levels and solution resistance were also assessed. Our findings reveal that an area specific activity (SA) exceeding 2.0 mA/cm2 (20 mV/s, 25°C, 100 kPa, 0.1 M HClO4) for polished poly-Pt is an indicator of impurity levels that do not impede the accurate measurement of the ORR activity of Pt based catalysts. After exploring various conditioning protocols to approach maximum utilization of the electrochemical area (ECA) and peak ORR activity without introducing catalyst degradation, an investigation of measurement protocols for ECA and ORR activity was conducted. Down-selected protocols were based on the criteria of reproducibility, duration of experiments, impurity effects and magnitude of pseudo-capacitive background correction. In sum, statistical reproducibility of ORR activity for poly-Pt and Pt supported on high surface area carbon was demonstrated.
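    The pseudo-capacitive background correction and mass-transport correction commonly applied in such RDE measurements can be sketched as follows. The currents below are invented; the Koutecky-Levich form i_k = i·i_lim/(i_lim - i) is the standard textbook correction, not necessarily this paper's exact protocol:

```python
import numpy as np

# Toy ORR polarization data at a few potentials (mA/cm2, cathodic negative)
i_o2 = np.array([-1.2, -2.8, -4.6, -5.6])   # measured in O2-saturated acid
i_n2 = np.array([-0.2, -0.2, -0.2, -0.2])   # pseudo-capacitive background in N2
i_lim = -6.0                                 # diffusion-limited current density

# Step 1: subtract the N2 background scan point-by-point
i_corr = i_o2 - i_n2

# Step 2: remove mass-transport limitation to get the kinetic current density
i_k = i_corr * i_lim / (i_lim - i_corr)
```

    Dividing i_k by the Pt electrochemical area (from Hupd or CO stripping) then yields the area specific activity discussed above.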

  12. Effectiveness of Indirect and Direct Metalinguistic Error Correction Techniques on the Essays of Senior Secondary School Students in South Western Nigeria

    ERIC Educational Resources Information Center

    Eyengho, Toju; Fawole, Oyebisi

    2013-01-01

    The study assessed error-correction techniques used in correcting students' essays in English language and also determined the effects of these strategies and other related variables on students' performance in essay writing with a view to improving students' writing skill in English language in South Western Nigeria. A quasi-experimental design…

  13. Finding of Correction Factor and Dimensional Error in Bio-AM Model by FDM Technique

    NASA Astrophysics Data System (ADS)

    Manmadhachary, Aiamunoori; Ravi Kumar, Yennam; Krishnanand, Lanka

    2018-06-01

    Additive Manufacturing (AM) is a rapid manufacturing process in which input data can be provided from various sources, such as 3-Dimensional (3D) Computer Aided Design (CAD), Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and 3D scanner data. From CT/MRI data, Biomedical Additive Manufacturing (Bio-AM) models can be manufactured. The Bio-AM model gives a better lead on preplanning of oral and maxillofacial surgery. However, manufacturing an accurate Bio-AM model remains an unsolved problem. The current paper quantifies the error between the Standard Triangle Language (STL) model and the Bio-AM model of a dry mandible and determines a correction factor for Bio-AM models produced with the Fused Deposition Modelling (FDM) technique. In the present work, dry mandible CT images are acquired by a CT scanner and converted into a 3D CAD model in the form of an STL model. The data are then sent to an FDM machine for fabrication of the Bio-AM model. The difference between the Bio-AM and STL model dimensions is considered the dimensional error, and the ratio of STL to Bio-AM model dimensions is considered the correction factor. This correction factor helps to fabricate AM models with accurate dimensions of the patient anatomy. These true-dimensional Bio-AM models increase the safety and accuracy in pre-planning of oral and maxillofacial surgery. The correction factor for the Dimension SST 768 FDM machine is 1.003 and the dimensional error is limited to 0.3 %.
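    The arithmetic behind the reported numbers is simple. A sketch with an invented measured length chosen to be consistent with the reported 1.003 factor and 0.3 % error (the lengths themselves are not from the paper):

```python
# Illustrative calculation of dimensional error and correction factor
stl_length = 100.0     # mm, dimension on the STL model (assumed value)
bioam_length = 99.7    # mm, same dimension on the printed Bio-AM model (assumed)

dimensional_error = (stl_length - bioam_length) / stl_length * 100  # percent
correction_factor = stl_length / bioam_length                       # ~1.003

# Pre-scaling a target dimension by the correction factor compensates
# for the shrinkage of this particular FDM machine:
compensated = stl_length * correction_factor
```

    In practice the factor is machine-specific, so it must be re-derived for each printer and material before being applied to patient models.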

  14. Finding of Correction Factor and Dimensional Error in Bio-AM Model by FDM Technique

    NASA Astrophysics Data System (ADS)

    Manmadhachary, Aiamunoori; Ravi Kumar, Yennam; Krishnanand, Lanka

    2016-06-01

    Additive Manufacturing (AM) is a rapid manufacturing process in which input data can be provided from various sources, such as 3-Dimensional (3D) Computer Aided Design (CAD), Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and 3D scanner data. From CT/MRI data, Biomedical Additive Manufacturing (Bio-AM) models can be manufactured. The Bio-AM model gives a better lead on preplanning of oral and maxillofacial surgery. However, manufacturing an accurate Bio-AM model remains an unsolved problem. The current paper quantifies the error between the Standard Triangle Language (STL) model and the Bio-AM model of a dry mandible and determines a correction factor for Bio-AM models produced with the Fused Deposition Modelling (FDM) technique. In the present work, dry mandible CT images are acquired by a CT scanner and converted into a 3D CAD model in the form of an STL model. The data are then sent to an FDM machine for fabrication of the Bio-AM model. The difference between the Bio-AM and STL model dimensions is considered the dimensional error, and the ratio of STL to Bio-AM model dimensions is considered the correction factor. This correction factor helps to fabricate AM models with accurate dimensions of the patient anatomy. These true-dimensional Bio-AM models increase the safety and accuracy in pre-planning of oral and maxillofacial surgery. The correction factor for the Dimension SST 768 FDM machine is 1.003 and the dimensional error is limited to 0.3 %.

  15. Correction of the lobule.

    PubMed

    Siegert, Ralf

    2004-11-01

    Many techniques have been described for the correction of protruding ears. Most of them concentrate on correcting the form and position of auricular cartilage. The lobule is a soft tissue structure. Skin resections of its posterior surface have been proposed for correcting its position; however, these cause tension on the wound and might increase the already relatively high risk for the development of keloids. We have modified the technique for correcting the protruding lobule for its exact positioning and minimizing the risk for relapse and keloids. Starting from the incision performed for the anthelix plasty, a subcutaneous pocket is prepared between the anterior and posterior sides of the lobule. Afterwards, the subcutaneous layer of the postlobular skin is adjusted and fixed to the cartilage of the conchal cavum with a special mattress suture. This technique is a refinement of otoplasty for bat ears. It is indicated for precise modification of form and position of protruding lobules.

  16. Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection.

    PubMed

    Nguyen, Thanh; Bui, Vy; Lam, Van; Raub, Christopher B; Chang, Lin-Ching; Nehmetallah, George

    2017-06-26

    We propose a fully automatic technique to obtain aberration-free quantitative phase imaging in digital holographic microscopy (DHM) based on deep learning. Traditional DHM solves the phase aberration compensation problem by manually detecting the background for quantitative measurement. This is a drawback for real-time implementation and for dynamic processes such as cell migration phenomena. A recent automatic aberration compensation approach using principal component analysis (PCA) in DHM avoids human intervention regardless of the cells' motion. However, it corrects spherical/elliptical aberration only and disregards higher-order aberrations. Traditional image segmentation techniques can be employed to spatially detect cell locations. Ideally, automatic image segmentation techniques make real-time measurement possible. However, existing automatic unsupervised segmentation techniques perform poorly when applied to DHM phase images because of aberrations and speckle noise. In this paper, we propose a novel method that combines a supervised deep learning technique based on a convolutional neural network (CNN) with Zernike polynomial fitting (ZPF). The deep learning CNN is implemented to perform automatic background region detection that allows ZPF to compute the self-conjugated phase to compensate for most aberrations.
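    The ZPF step amounts to least-squares fitting low-order Zernike terms over the detected background region and subtracting the fitted surface from the whole phase map. A hedged numpy sketch (the six-term basis, synthetic defocus, and hand-made mask are assumptions standing in for the CNN output, not the authors' implementation):

```python
import numpy as np

def zernike_basis(rho, theta):
    """First six Zernike polynomials (piston, tilts, defocus, astigmatism)
    evaluated on normalised coordinates."""
    return np.stack([
        np.ones_like(rho),                       # Z0 piston
        2 * rho * np.cos(theta),                 # Z1 tilt x
        2 * rho * np.sin(theta),                 # Z2 tilt y
        np.sqrt(3) * (2 * rho**2 - 1),           # Z3 defocus
        np.sqrt(6) * rho**2 * np.cos(2 * theta), # Z4/Z5 astigmatism
        np.sqrt(6) * rho**2 * np.sin(2 * theta),
    ], axis=-1)

def compensate(phase, background_mask):
    """Fit Zernike terms to the phase inside the background region
    and subtract the fitted aberration everywhere."""
    ny, nx = phase.shape
    y, x = np.mgrid[0:ny, 0:nx]
    x = 2 * x / (nx - 1) - 1
    y = 2 * y / (ny - 1) - 1
    A = zernike_basis(np.sqrt(x**2 + y**2), np.arctan2(y, x)).reshape(-1, 6)
    m = background_mask.ravel()
    coef, *_ = np.linalg.lstsq(A[m], phase.ravel()[m], rcond=None)
    return phase - (A @ coef).reshape(ny, nx)

# Synthetic hologram phase: a defocus aberration plus a "cell" in the centre
ny = nx = 64
yy, xx = np.mgrid[0:ny, 0:nx]
xn = 2 * xx / (nx - 1) - 1
yn = 2 * yy / (ny - 1) - 1
phase = 1.5 * np.sqrt(3) * (2 * (xn**2 + yn**2) - 1)  # pure defocus
phase[28:36, 28:36] += 2.0                            # cell phase
mask = np.ones((ny, nx), bool)
mask[24:40, 24:40] = False                            # "CNN-detected" background
flat = compensate(phase, mask)
```

    With the cell excluded from the fit, the background flattens to zero while the cell's quantitative phase survives intact.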

  17. Corrections Officer Candidate Information Booklet and User's Manual. Standards and Training for Corrections Program.

    ERIC Educational Resources Information Center

    California State Board of Corrections, Sacramento.

    This package consists of an information booklet for job candidates preparing to take California's Corrections Officer Examination and a user's manual intended for those who will administer the examination. The candidate information booklet provides background information about the development of the Corrections Officer Examination, describes its…

  18. Low background materials and fabrication techniques for cables and connectors in the Majorana Demonstrator

    NASA Astrophysics Data System (ADS)

    Busch, M.; Abgrall, N.; Alvis, S. I.; Arnquist, I. J.; Avignone, F. T.; Barabash, A. S.; Barton, C. J.; Bertrand, F. E.; Bode, T.; Bradley, A. W.; Brudanin, V.; Buuck, M.; Caldwell, T. S.; Chan, Y.-D.; Christofferson, C. D.; Chu, P.-H.; Cuesta, C.; Detwiler, J. A.; Dunagan, C.; Efremenko, Yu.; Ejiri, H.; Elliott, S. R.; Gilliss, T.; Giovanetti, G. K.; Green, M. P.; Gruszko, J.; Guinn, I. S.; Guiseppe, V. E.; Haufe, C. R.; Hehn, L.; Henning, R.; Hoppe, E. W.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Konovalov, S. I.; Kouzes, R. T.; Lopez, A. M.; Martin, R. D.; Massarczyk, R.; Meijer, S. J.; Mertens, S.; Myslik, J.; O'Shaughnessy, C.; Othman, G.; Poon, A. W. P.; Radford, D. C.; Rager, J.; Reine, A. L.; Rielage, K.; Robertson, R. G. H.; Rouf, N. W.; Shanks, B.; Shirchenko, M.; Suriano, A. M.; Tedeschi, D.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Yu, C.-H.; Yumatov, V.; Zhitnikov, I.; Zhu, B. X.

    2018-01-01

    The Majorana Collaboration is searching for the neutrinoless double-beta decay of the nucleus 76Ge. The Majorana Demonstrator is an array of germanium detectors deployed with the aim of implementing background reduction techniques suitable for a tonne scale 76Ge-based search (the LEGEND collaboration). In the Demonstrator, germanium detectors operate in an ultra-pure vacuum cryostat at 80 K. One special challenge of an ultra-pure environment is to develop reliable cables, connectors, and electronics that do not significantly contribute to the radioactive background of the experiment. This paper highlights the experimental requirements and how these requirements were met for the Majorana Demonstrator, including plans to upgrade the wiring for higher reliability in the summer of 2018. Also described are the additional requirements for LEGEND and the R&D efforts underway to meet them.

  19. Pattern matching techniques for correcting low-confidence OCR words in a known context

    NASA Astrophysics Data System (ADS)

    Ford, Glenn; Hauser, Susan E.; Le, Daniel X.; Thoma, George R.

    2000-12-01

    A commercial OCR system is a key component of a system developed at the National Library of Medicine for the automated extraction of bibliographic fields from biomedical journals. This 5-engine OCR system, while exhibiting high performance overall, does not reliably convert very small characters, especially those that are in italics. As a result, the 'affiliations' field that typically contains such characters in most journals, is not captured accurately, and requires a disproportionately high manual input. To correct this problem, dictionaries have been created from words occurring in this field (e.g., university, department, street addresses, names of cities, etc.) from 230,000 articles already processed. The OCR output corresponding to the affiliation field is then matched against these dictionary entries by approximate string-matching techniques, and the ranked matches are presented to operators for verification. This paper outlines the techniques employed and the results of a comparative evaluation.
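    The matching step described above can be sketched with a plain edit-distance ranking against a dictionary. The dictionary here is invented for illustration; NLM's actual dictionaries and matcher are not given in the abstract:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def rank_matches(ocr_word, dictionary, k=3):
    """Rank dictionary entries by edit distance to a low-confidence OCR
    word and return the k best candidates for operator verification."""
    return sorted(dictionary, key=lambda w: levenshtein(ocr_word, w))[:k]

dictionary = ["university", "department", "laboratory", "institute", "hospital"]
matches = rank_matches("unlversity", dictionary)   # OCR misread 'i' as 'l'
```

    Presenting the ranked candidates instead of the raw OCR string is what cuts the manual keying effort for the affiliation field.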

  20. PETPVC: a toolbox for performing partial volume correction techniques in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Thomas, Benjamin A.; Cuplov, Vesna; Bousse, Alexandre; Mendes, Adriana; Thielemans, Kris; Hutton, Brian F.; Erlandsson, Kjell

    2016-11-01

    Positron emission tomography (PET) images are degraded by a phenomenon known as the partial volume effect (PVE). Approaches have been developed to reduce PVEs, typically through the utilisation of structural information provided by other imaging modalities such as MRI or CT. These methods, known as partial volume correction (PVC) techniques, reduce PVEs by compensating for the effects of the scanner resolution, thereby improving the quantitative accuracy. The PETPVC toolbox described in this paper comprises a suite of methods, both classic and more recent approaches, for the purposes of applying PVC to PET data. Eight core PVC techniques are available. These core methods can be combined to create a total of 22 different PVC techniques. Simulated brain PET data are used to demonstrate the utility of the toolbox in idealised conditions, the effects of applying PVC with mismatched point-spread function (PSF) estimates and the potential of novel hybrid PVC methods to improve the quantification of lesions. All anatomy-based PVC techniques achieve complete recovery of the PET signal in cortical grey matter (GM) when performed in idealised conditions. Applying deconvolution-based approaches results in incomplete recovery due to premature termination of the iterative process. PVC techniques are sensitive to PSF mismatch, causing a bias of up to 16.7% in GM recovery when over-estimating the PSF by 3 mm. The recovery of both GM and a simulated lesion was improved by combining two PVC techniques together. The PETPVC toolbox has been written in C++, supports Windows, Mac and Linux operating systems, is open-source and publicly available.
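    Among the deconvolution-based PVC approaches, the Van Cittert iteration is the simplest to sketch: repeatedly add back the residual between the observed image and the blurred current estimate. An illustrative numpy version with an assumed Gaussian PSF (a sketch of the general technique, not PETPVC's C++ implementation):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """FFT-based Gaussian blur standing in for the scanner PSF."""
    ny, nx = img.shape
    ky = np.fft.fftfreq(ny).reshape(-1, 1)
    kx = np.fft.fftfreq(nx).reshape(1, -1)
    H = np.exp(-2 * (np.pi * sigma) ** 2 * (kx**2 + ky**2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def van_cittert(observed, sigma, n_iter=30, alpha=1.0):
    """Van Cittert deconvolution: f <- f + alpha * (g - PSF * f)."""
    estimate = observed.copy()
    for _ in range(n_iter):
        estimate = estimate + alpha * (observed - gaussian_blur(estimate, sigma))
    return estimate

truth = np.zeros((64, 64))
truth[28:36, 28:36] = 10.0                   # small hot lesion
blurred = gaussian_blur(truth, sigma=2.0)    # partial volume effect
recovered = van_cittert(blurred, sigma=2.0, n_iter=30)
```

    The iteration restores peak intensity lost to the PSF while preserving total counts, and stopping it early (as noted above) leaves the recovery incomplete.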

  1. Evaluation of a technique for color correction in restoring anterior teeth.

    PubMed

    Rauber, Gabrielle Branco; Bernardon, Jussara Karina; Vieira, Luiz Clovis Cardoso; Baratieri, Luiz Narciso

    2017-09-01

    This study aimed to evaluate the ability of the proposed technique in producing restorations that exhibit mimesis with tooth structure and to define a restorative clinical protocol. For this study a typodont was used. The right upper central incisor with Class IV lesion was restored with the layering technique (reference tooth, RT). For the left upper central incisor with Class IV lesion, six teeth were restored monochromatically (test teeth, TT), using DA3.5 (n = 3) and DA4 (n = 3) composite resins-resulting in six unsatisfactory color restorations. TT were divided into six groups depending on the color of unsatisfactory restoration and preparation depth. First, a preparation was realized on the labial surface with 0.5 mm, 0.7 mm or 1.0 mm of depth. A second preparation was then performed to reproduce the dentinal mamelons. Next, adhesive procedures were performed and the teeth restored. Opaque halo, opalescent halo and vestibular enamel were then reproduced by the addition of different composite resins. The RT and TT were photographed side by side in typodont to obtain six photographic prints. The photographs of the groups were subjected to visual evaluation by 120 volunteers via a questionnaire. Data were analyzed by the prevalence of answers, and Chi-square test was used to investigate the association between variables at .05 significance. Furthermore, ΔE of groups was evaluated in comparison RT. The results demonstrated that the moderate intensity restorations (DA3.5) with depths of 0.5 mm and 0.7 mm had the highest prevalence of acceptance. For severe intensity restorations (DA4), the preparation depth of 1.0 mm obtained better acceptance. The technique was able to modify the final color of Class IV restorations, producing satisfactory color restorations. This technique can be used for color correction in cases of Class IV restorations, in situations where there is no time for immediate layered restoration, and as a restorative technique. © 2017

  2. Observation-Corrected Precipitation Estimates in GEOS-5

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf H.; Liu, Qing

    2014-01-01

    Several GEOS-5 applications, including the GEOS-5 seasonal forecasting system and the MERRA-Land data product, rely on global precipitation data that have been corrected with satellite- and/or gauge-based precipitation observations. This document describes the methodology used to generate the corrected precipitation estimates and their use in GEOS-5 applications. The corrected precipitation estimates are derived by disaggregating publicly available, observationally based, global precipitation products from daily or pentad totals to hourly accumulations using background precipitation estimates from the GEOS-5 atmospheric data assimilation system. Depending on the specific combination of the observational precipitation product and the GEOS-5 background estimates, the observational product may also be downscaled in space. The resulting corrected precipitation data product is at the finer temporal and spatial resolution of the GEOS-5 background and matches the observed precipitation at the coarser scale of the observational product, separately for each day (or pentad) and each grid cell.
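    For a single grid cell and day, the disaggregation reduces to rescaling the background's hourly values so they sum to the observed daily total; a minimal sketch (hypothetical function name, spatial downscaling omitted):

```python
import numpy as np

def disaggregate_daily(observed_daily_total, background_hourly):
    """Rescale model-background hourly precipitation so the day's sum matches
    the observed daily total, preserving the background's temporal pattern."""
    background_hourly = np.asarray(background_hourly, dtype=float)
    bg_total = background_hourly.sum()
    if bg_total > 0:
        return background_hourly * (observed_daily_total / bg_total)
    # If the background is completely dry, spread the observed total uniformly.
    return np.full_like(background_hourly, observed_daily_total / background_hourly.size)

# 24 hourly background values summing to 6 mm, rescaled to an observed 12 mm day
hourly = disaggregate_daily(12.0, [0, 0, 1, 3, 2, 0] + [0] * 18)
```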

  3. Low background materials and fabrication techniques for cables and connectors in the Majorana Demonstrator

    DOE PAGES

    Busch, M.; Abgrall, N.; Alvis, S. I.; ...

    2018-01-03

    Here, the Majorana Collaboration is searching for the neutrinoless double-beta decay of the nucleus 76Ge. The Majorana Demonstrator is an array of germanium detectors deployed with the aim of implementing background reduction techniques suitable for a tonne-scale 76Ge-based search (the LEGEND collaboration). In the Demonstrator, germanium detectors operate in an ultra-pure vacuum cryostat at 80 K. One special challenge of an ultra-pure environment is to develop reliable cables, connectors, and electronics that do not significantly contribute to the radioactive background of the experiment. This paper highlights the experimental requirements and how these requirements were met for the Majorana Demonstrator, including plans to upgrade the wiring for higher reliability in the summer of 2018. Also described are requirements for LEGEND and the R&D efforts underway to meet these additional requirements.

  4. Validation of Regression-Based Myogenic Correction Techniques for Scalp and Source-Localized EEG

    PubMed Central

    McMenamin, Brenton W.; Shackman, Alexander J.; Maxwell, Jeffrey S.; Greischar, Lawrence L.; Davidson, Richard J.

    2008-01-01

    EEG and EEG source-estimation are susceptible to electromyographic artifacts (EMG) generated by the cranial muscles. EMG can mask genuine effects or masquerade as a legitimate effect - even in low frequencies, such as alpha (8–13Hz). Although regression-based correction has been used previously, only cursory attempts at validation exist and the utility for source-localized data is unknown. To address this, EEG was recorded from 17 participants while neurogenic and myogenic activity were factorially varied. We assessed the sensitivity and specificity of four regression-based techniques: between-subjects, between-subjects using difference-scores, within-subjects condition-wise, and within-subject epoch-wise on the scalp and in data modeled using the LORETA algorithm. Although within-subject epoch-wise showed superior performance on the scalp, no technique succeeded in the source-space. Aside from validating the novel epoch-wise methods on the scalp, we highlight methods requiring further development. PMID:19298626

  5. Simple statistical bias correction techniques greatly improve moderate resolution air quality forecast at station level

    NASA Astrophysics Data System (ADS)

    Curci, Gabriele; Falasca, Serena

    2017-04-01

    Deterministic air quality forecast is routinely carried out at many local environmental agencies in Europe and throughout the world by means of Eulerian chemistry-transport models. The skill of these models in predicting the ground-level concentrations of relevant pollutants (ozone, nitrogen dioxide, particulate matter) a few days ahead has greatly improved in recent years, but it is not yet always compliant with the quality level required for decision making (e.g. the European Commission has set a maximum uncertainty of 50% on daily values of relevant pollutants). Post-processing of deterministic model output is thus still regarded as a useful tool to make the forecast more reliable. In this work, we test several bias correction techniques applied to a long-term dataset of air quality forecasts over Europe and Italy. We used the WRF-CHIMERE modelling system, which provides operational experimental chemical weather forecasts at CETEMPS (http://pumpkin.aquila.infn.it/forechem/), to simulate the years 2008-2012 at low resolution over Europe (0.5° x 0.5°) and moderate resolution over Italy (0.15° x 0.15°). We compared the simulated dataset with available observations from the European Environment Agency database (AirBase) and characterized model skill and compliance with EU legislation using the Delta tool from the FAIRMODE project (http://fairmode.jrc.ec.europa.eu/). The bias correction techniques adopted are, in order of complexity: (1) application of multiplicative factors calculated as the ratio of model-to-observed concentrations averaged over the previous days; (2) correction of the statistical distribution of model forecasts, in order to make it similar to that of the observations; (3) development and application of Model Output Statistics (MOS) regression equations. We illustrate the differences and advantages/disadvantages of the three approaches. All the methods are relatively easy to implement for other modelling systems.
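    Methods (1) and (2) can be sketched as follows; the function names and the empirical quantile-mapping formulation are illustrative, not the operational CETEMPS code:

```python
import numpy as np

def multiplicative_correction(forecast_today, obs_prev, model_prev):
    """Method (1): scale today's forecast by the mean observed/model ratio
    computed over the previous days."""
    factor = np.mean(obs_prev) / np.mean(model_prev)
    return forecast_today * factor

def quantile_mapping(forecast_today, obs_hist, model_hist):
    """Method (2): map the forecast through the model's empirical CDF onto the
    observed distribution, so the corrected forecasts share the obs statistics."""
    model_sorted = np.sort(model_hist)
    obs_sorted = np.sort(obs_hist)
    rank = np.searchsorted(model_sorted, forecast_today) / len(model_sorted)
    rank = np.clip(rank, 0.0, 1.0)
    return np.quantile(obs_sorted, rank)
```

    If the model has recently run a factor of two too high, method (1) simply halves today's forecast; method (2) instead adjusts each value according to where it sits in the model's own climatology.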

  6. Fast and precise technique for magnet lattice correction via sine-wave excitation of fast correctors

    DOE PAGES

    Yang, X.; Smaluk, V.; Yu, L. H.; ...

    2017-05-02

    A novel technique has been developed to improve the precision and shorten the measurement time of the LOCO (linear optics from closed orbits) method. This technique, named AC LOCO, is based on sine-wave (ac) beam excitation via fast correctors. Such fast correctors are typically installed at synchrotron light sources for the fast orbit feedback. The beam oscillations are measured by beam position monitors. The narrow band used for the beam excitation and measurement not only allows us to suppress effectively the beam position noise but also opens the opportunity for simultaneously exciting multiple correctors at different frequencies (multifrequency mode). We demonstrated at NSLS-II that AC LOCO provides better lattice corrections and works much faster than the traditional LOCO method.
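    The narrow-band measurement can be illustrated with a simple quadrature (lock-in style) demodulation of a BPM signal at the known excitation frequency; this is a generic sketch, not the NSLS-II implementation:

```python
import numpy as np

def lockin_amplitude(signal, fs, f_exc):
    """Recover the oscillation amplitude at a known excitation frequency by
    demodulating against in-phase and quadrature references; broadband noise
    away from f_exc averages out."""
    signal = np.asarray(signal, dtype=float)
    t = np.arange(signal.size) / fs
    i = 2.0 * np.mean(signal * np.cos(2 * np.pi * f_exc * t))
    q = 2.0 * np.mean(signal * np.sin(2 * np.pi * f_exc * t))
    return np.hypot(i, q)

# A 0.3-unit beam oscillation at 37 Hz, sampled at 1 kHz for one second.
fs = 1000.0
t = np.arange(1000) / fs
bpm = 0.3 * np.sin(2 * np.pi * 37.0 * t + 0.4)
```

    Because each corrector can be driven at its own frequency, several such demodulations can run on the same BPM data simultaneously, which is what enables the multifrequency mode.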

  7. Posterior Double Vertebral Column Resections Combined with Satellite Rod Technique to Correct Severe Congenital Angular Kyphosis.

    PubMed

    Sun, Xu; Zhu, Ze-Zhang; Chen, Xi; Liu, Zhen; Wang, Bin; Qiu, Yong

    2016-08-01

    This paper presents a highly challenging technique involving posterior double vertebral column resections (VCRs) and satellite rods placement. This was a young adult case with severe angular thoracolumbar kyphosis of 101 degrees, secondary to anterior segmentation failure from T11 to L1. There were hemivertebrae at T11 and T12, and a wedged vertebra at L1. He received double VCRs at T12 and T11 and instrumented fusion from T6 to L4 via a posterior-only approach. Autologous grafts and a cage were placed between the bony surfaces of the osteotomy gap. Once closure of the osteotomy was achieved, bilateral permanent CoCr rods were placed with the addition of satellite rods. Postoperative X-ray demonstrated marked correction of kyphosis. On the 10th day after surgery, the patient was able to walk without assistance. In conclusion, double VCRs are effective to correct severe angular kyphosis, and the addition of satellite rods may be imperative to enhance instrumentation strength and thus prevent correction loss. © 2016 Chinese Orthopaedic Association and John Wiley & Sons Australia, Ltd.

  8. Atmospheric Precorrected Differential Absorption technique to retrieve columnar water vapor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlaepfer, D.; Itten, K.I.; Borel, C.C.

    1998-09-01

    Differential absorption techniques are suitable to retrieve the total column water vapor contents from imaging spectroscopy data. A technique called Atmospheric Precorrected Differential Absorption (APDA) is derived directly from simplified radiative transfer equations. It combines a partial atmospheric correction with a differential absorption technique. The atmospheric path radiance term is iteratively corrected during the retrieval of water vapor. This improves the results especially over low background albedos. The error of the method for various ground reflectance spectra is below 7% for most of the spectra. The channel combinations for two test cases are then defined, using a quantitative procedure, which is based on MODTRAN simulations and the image itself. An error analysis indicates that the influence of aerosols and channel calibration is minimal. The APDA technique is then applied to two AVIRIS images acquired in 1991 and 1995. The accuracy of the measured water vapor columns is within a range of ±5% compared to ground truth radiosonde data.

  9. Assessing the impact of background spectral graph construction techniques on the topological anomaly detection algorithm

    NASA Astrophysics Data System (ADS)

    Ziemann, Amanda K.; Messinger, David W.; Albano, James A.; Basener, William F.

    2012-06-01

    Anomaly detection algorithms have historically been applied to hyperspectral imagery in order to identify pixels whose material content is incongruous with the background material in the scene. Typically, the application involves extracting man-made objects from natural and agricultural surroundings. A large challenge in designing these algorithms is determining which pixels initially constitute the background material within an image. The topological anomaly detection (TAD) algorithm constructs a graph theory-based, fully non-parametric topological model of the background in the image scene, and uses codensity to measure deviation from this background. In TAD, the initial graph theory structure of the image data is created by connecting an edge between any two pixel vertices x and y if the Euclidean distance between them is less than some resolution r. While this type of proximity graph is among the most well-known approaches to building a geometric graph based on a given set of data, there is a wide variety of different geometrically-based techniques. In this paper, we present a comparative test of the performance of TAD across four different constructs of the initial graph: mutual k-nearest neighbor graph, sigma-local graph for two different values of σ > 1, and the proximity graph originally implemented in TAD.
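    The proximity-graph construction described above (an edge between x and y whenever their Euclidean distance is below r) can be sketched directly; the brute-force O(n²) loop is for illustration only, since real hyperspectral scenes need spatial indexing:

```python
import numpy as np

def proximity_graph(pixels, r):
    """Build the TAD-style proximity graph over pixel vectors: connect i and j
    whenever ||pixels[i] - pixels[j]|| < r. Returns edges as (i, j) with i < j."""
    pixels = np.asarray(pixels, dtype=float)
    n = len(pixels)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pixels[i] - pixels[j]) < r:
                edges.add((i, j))
    return edges
```

    Swapping this constructor for a mutual k-nearest-neighbor or sigma-local graph changes only this step; the rest of TAD (component analysis, codensity ranking) is unchanged, which is what makes the comparison in the paper possible.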

  10. Estimating representative background PM2.5 concentration in heavily polluted areas using baseline separation technique and chemical mass balance model

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Yang, Wen; Zhang, Hui; Sun, Yanling; Mao, Jian; Ma, Zhenxing; Cong, Zhiyuan; Zhang, Xian; Tian, Shasha; Azzi, Merched; Chen, Li; Bai, Zhipeng

    2018-02-01

    The determination of background concentration of PM2.5 is important to understand the contribution of local emission sources to total PM2.5 concentration. The purpose of this study was to examine the performance of baseline separation techniques to estimate PM2.5 background concentration. Five separation methods, which included recursive digital filters (Lyne-Hollick, one-parameter algorithm, and Boughton two-parameter algorithm), sliding interval and smoothed minima, were applied to one-year PM2.5 time-series data in two heavily polluted cities, Tianjin and Jinan. To obtain the proper filter parameters and recession constants for the separation techniques, we conducted regression analysis at a background site during the emission reduction period enforced by the Government for the 2014 Asia-Pacific Economic Cooperation (APEC) meeting in Beijing. Background concentrations in Tianjin and Jinan were then estimated by applying the determined filter parameters and recession constants. The chemical mass balance (CMB) model was also applied to ascertain the effectiveness of the new approach. Our results showed that the contribution of background PM concentration to ambient pollution was at a comparable level to the contribution obtained from the previous study. The best performance was achieved using the Boughton two-parameter algorithm. The background concentrations were estimated at (27 ± 2) μg/m3 for the whole year, (34 ± 4) μg/m3 for the heating period (winter), (21 ± 2) μg/m3 for the non-heating period (summer), and (25 ± 2) μg/m3 for the sandstorm period in Tianjin. The corresponding values in Jinan were (30 ± 3) μg/m3, (40 ± 4) μg/m3, (24 ± 5) μg/m3, and (26 ± 2) μg/m3, respectively. The study revealed that these baseline separation techniques are valid for estimating levels of PM2.5 air pollution, and that our proposed method has great potential for estimating the background level of other air pollutants.
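    A single forward pass of the Lyne-Hollick recursive digital filter, repurposed here for a PM2.5 baseline, can be sketched as follows. The recession constant alpha = 0.925 is a common hydrology default assumed for illustration; the paper fits its own parameters at a background site:

```python
import numpy as np

def lyne_hollick_baseline(series, alpha=0.925):
    """One forward pass of the Lyne-Hollick filter: split the signal into a
    fast 'quick' component and a slowly varying baseline (background)."""
    y = np.asarray(series, dtype=float)
    quick = np.zeros_like(y)
    for k in range(1, len(y)):
        q = alpha * quick[k - 1] + 0.5 * (1 + alpha) * (y[k] - y[k - 1])
        quick[k] = max(q, 0.0)          # the quick component cannot be negative
    baseline = y - quick                 # background = total minus quick component
    return np.minimum(baseline, y)       # baseline never exceeds the total
```

    A constant series passes through unchanged, while a short pollution spike is mostly assigned to the quick component, leaving the baseline near the ambient level.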

  11. Techniques to improve the accuracy of noise power spectrum measurements in digital x-ray imaging based on background trends removal.

    PubMed

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2011-03-01

    Noise characterization through estimation of the noise power spectrum (NPS) is a central component of the evaluation of digital x-ray systems. Extensive works have been conducted to achieve accurate and precise measurement of NPS. One approach to improve the accuracy of the NPS measurement is to reduce the statistical variance of the NPS results by involving more data samples. However, this method is based on the assumption that the noise in a radiographic image is arising from stochastic processes. In the practical data, the artifactuals always superimpose on the stochastic noise as low-frequency background trends and prevent us from achieving accurate NPS. The purpose of this study was to investigate an appropriate background detrending technique to improve the accuracy of NPS estimation for digital x-ray systems. In order to achieve the optimal background detrending technique for NPS estimate, four methods for artifactuals removal were quantitatively studied and compared: (1) Subtraction of a low-pass-filtered version of the image, (2) subtraction of a 2-D first-order fit to the image, (3) subtraction of a 2-D second-order polynomial fit to the image, and (4) subtracting two uniform exposure images. In addition, background trend removal was separately applied within original region of interest or its partitioned sub-blocks for all four methods. The performance of background detrending techniques was compared according to the statistical variance of the NPS results and low-frequency systematic rise suppression. Among four methods, subtraction of a 2-D second-order polynomial fit to the image was most effective in low-frequency systematic rise suppression and variances reduction for NPS estimate according to the authors' digital x-ray system. Subtraction of a low-pass-filtered version of the image led to NPS variance increment above low-frequency components because of the side lobe effects of frequency response of the boxcar filtering function. Subtracting two
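    Method (3), subtraction of a 2-D second-order polynomial fit, can be sketched as an ordinary least-squares fit over the six quadratic basis terms; this is a generic illustration of the detrending step, not the authors' exact pipeline:

```python
import numpy as np

def detrend_poly2(roi):
    """Subtract a 2-D second-order polynomial fit from a region of interest,
    removing low-frequency background trends before NPS estimation."""
    roi = np.asarray(roi, dtype=float)
    ny, nx = roi.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    x, y = xx.ravel(), yy.ravel()
    # design matrix for the basis 1, x, y, x^2, x*y, y^2
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, roi.ravel(), rcond=None)
    trend = (A @ coeffs).reshape(ny, nx)
    return roi - trend
```

    Applied to an ROI whose background is exactly quadratic, the residual is zero; on real data the residual keeps the stochastic noise while the systematic low-frequency rise is removed.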

  12. Karl Ludloff (1864-1945): An Inventive Orthopedic Surgeon, His Work and His Surgical Technique for the Correction of Hallux Valgus.

    PubMed

    Markatos, Konstantinos; Karaoglanis, Georgios; Damaskos, Christos; Garmpis, Nikolaos; Tsourouflis, Gerasimos; Laios, Konstantinos; Tsoucalas, Gregory

    2018-05-01

    The purpose of this article is to summarize the work and pioneering achievements in the field of orthopedic surgery of the German orthopedic surgeon Karl Ludloff. Ludloff had an impact in the diagnostics, physical examination, orthopedic imaging, and orthopedic surgical technique of his era. He was a pioneer in the surgical treatment of dysplastic hip, anterior cruciate ligament reconstruction, and hallux valgus. His surgical technique for the correction of hallux valgus, initially stabilized with plaster of Paris, remained unpopular among other orthopedic surgeons for decades. In the 1990s, the advent and use of improved orthopedic materials for fixation attracted the interest of numerous orthopedic surgeons in the Ludloff osteotomy for its ability to correct the deformity in all 3 dimensions, its anatomic outcomes, and its low recurrence rate and patient satisfaction.

  13. A new technique for correction of simple congenital earlobe clefts: diametric hinge flaps method.

    PubMed

    Qing, Yong; Cen, Ying; Xu, Xuewen; Chen, Junjie

    2013-06-01

    The earlobe plays an important part in the aesthetic appearance of the auricle. Congenital cleft earlobe may vary considerably in severity from a simple notching to extensive tissue deficiency. Most patients with cleft earlobe require surgical correction because of abnormal appearance. In this article, a new surgical technique for correcting congenital simple cleft earlobe using diametric hinge flaps is introduced. We retrospectively reviewed 4 patients diagnosed with congenital cleft earlobe between 2008 and 2010. All of them received this new surgical method. The patients were followed up from 3 to 6 months. All patients attained relatively full bodied earlobes with smooth contours, inconspicuous scars, and found their reconstructed earlobes to be aesthetically satisfactory. One patient experienced hypoesthesia in the area operated on, but recovered 3 months later. No other complications were noted. This simple method not only makes full use of the surrounding tissues to reconstruct full bodied earlobes but also avoids small notch formation caused by the linear scar contraction sometimes seen when using more traditional methods.

  14. Surgical correction of pectus arcuatum

    PubMed Central

    Ershova, Ksenia; Adamyan, Ruben

    2016-01-01

    Background Pectus arcuatum is a rare congenital chest wall deformity and methods of surgical correction are debatable. Methods Surgical correction of pectus arcuatum always includes one or more horizontal sternal osteotomies, resection of deformed rib cartilages and finally anterior chest wall stabilization. The study was approved by the institutional ethical committee and informed consent was obtained from every patient. Results In this video we show our modification of pectus arcuatum correction with only partial sternal osteotomy and further stabilization by vertical parallel titanium plates. Conclusions The reported method is a feasible option for surgical correction of pectus arcuatum. PMID:29078483

  15. Evaluation of an improved fiberoptics luminescence skin monitor with background correction.

    PubMed

    Vo-Dinh, T

    1987-06-01

    In this work, an improved version of a fiberoptics luminescence monitor, the prototype luminoscope II, is evaluated for in situ quantitative measurements. The instrument was developed to detect traces of luminescing organic contaminants on skin. An electronic background-nulling system was designed and incorporated into the instrument to compensate for various skin background emissions. A dose-response curve for a coal liquid spotted on mouse skin was established. The results illustrated the usefulness of the instrument for in vivo detection of organic materials on laboratory mouse skin.

  16. A comparison of radiometric correction techniques in the evaluation of the relationship between LST and NDVI in Landsat imagery.

    PubMed

    Tan, Kok Chooi; Lim, Hwee San; Matjafri, Mohd Zubir; Abdullah, Khiruddin

    2012-06-01

    Atmospheric corrections for multi-temporal optical satellite images are necessary, especially in change detection analyses, such as normalized difference vegetation index (NDVI) ratioing. Abrupt change detection analysis using remote-sensing techniques requires radiometric congruity and atmospheric correction to monitor terrestrial surfaces over time. Two atmospheric correction methods were used for this study: relative radiometric normalization and the simplified method for atmospheric correction (SMAC) in the solar spectrum. A multi-temporal data set consisting of two sets of Landsat images from the period between 1991 and 2002 of Penang Island, Malaysia, was used to compare NDVI maps, which were generated using the proposed atmospheric correction methods. Land surface temperature (LST) was retrieved using ATCOR3_T in PCI Geomatica 10.1 image processing software. Linear regression analysis was utilized to analyze the relationship between NDVI and LST. This study reveals that both of the proposed atmospheric correction methods yielded high accuracy through examination of the linear correlation coefficients. To check the accuracy of the equation obtained through linear regression analysis for every single satellite image, 20 points were randomly chosen. The results showed that the SMAC method yielded a constant value (in terms of error) to predict the NDVI value from the linear regression analysis-derived equation. The errors (average) from both proposed atmospheric correction methods were less than 10%.
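    For reference, NDVI itself is the simple band ratio (NIR − Red)/(NIR + Red), computed from atmospherically corrected reflectances; a minimal sketch:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from near-infrared and red
    reflectance bands (apply after atmospheric correction)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)
```

    Because NDVI is a ratio of band differences to band sums, residual atmospheric effects that scale or offset the bands differently between dates bias the index, which is why the correction step above matters for change detection.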

  17. Building a new predictor for multiple linear regression technique-based corrective maintenance turnaround time.

    PubMed

    Cruz, Antonio M; Barr, Cameron; Puñales-Pozo, Elsa

    2008-01-01

    This research's main goals were to build a predictor for a turnaround time (TAT) indicator for estimating its values, and to use a numerical clustering technique to find possible causes of undesirable TAT values. The following stages were used: domain understanding, data characterisation and sample reduction, and insight characterisation. A multiple linear regression predictor of the TAT indicator and clustering techniques were used to improve corrective maintenance task efficiency in a clinical engineering department (CED). The indicator being studied was turnaround time (TAT). Multiple linear regression was used for building a predictive TAT value model. The variables contributing to the model were clinical engineering department response time (CE(rt), 0.415 positive coefficient), stock service response time (Stock(rt), 0.734 positive coefficient), priority level (0.21 positive coefficient) and service time (0.06 positive coefficient). The regression process showed heavy reliance on Stock(rt), CE(rt) and priority, in that order. Clustering techniques revealed the main causes of high TAT values. This examination has provided a means for analysing current technical service quality and effectiveness. In doing so, it has demonstrated a process for identifying areas and methods of improvement and a model against which to analyse these methods' effectiveness.
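    Fitting such a multiple linear regression predictor can be sketched with ordinary least squares. The work-order records below are invented for illustration, not the CED's data, and the fitted coefficients will differ from those reported above:

```python
import numpy as np

# Hypothetical records: [CE response time, stock response time,
# priority level, service time] and the resulting turnaround time (hours).
X = np.array([
    [2.0, 5.0, 1, 0.5],
    [1.0, 2.0, 2, 1.0],
    [4.0, 8.0, 1, 0.5],
    [3.0, 6.0, 3, 2.0],
    [0.5, 1.0, 2, 0.3],
])
tat = np.array([8.0, 4.5, 13.0, 11.5, 2.2])

# Fit TAT = b0 + b1*CE_rt + b2*Stock_rt + b3*priority + b4*service_time
A = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(A, tat, rcond=None)

def predict_tat(ce_rt, stock_rt, priority, service_time):
    """Estimate turnaround time for a new work order from the fitted model."""
    return float(coeffs @ np.array([1.0, ce_rt, stock_rt, priority, service_time]))
```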

  18. Posterior spinal fusion for adolescent idiopathic scoliosis using a convex pedicle screw technique: a novel concept of deformity correction.

    PubMed

    Tsirikos, A I; Mataliotakis, G; Bounakis, N

    2017-08-01

    We present the results of correcting a double or triple curve adolescent idiopathic scoliosis using a convex segmental pedicle screw technique. We reviewed 191 patients with a mean age at surgery of 15 years (11 to 23.3). Pedicle screws were placed at the convexity of each curve. Concave screws were inserted at one or two cephalad levels and two caudal levels. The mean operating time was 183 minutes (132 to 276) and the mean blood loss 0.22% of the total blood volume (0.08% to 0.4%). Multimodal monitoring remained stable throughout the operation. The mean hospital stay was 6.8 days (5 to 15). The mean post-operative follow-up was 5.8 years (2.5 to 9.5). There were no neurological complications, deep wound infection, obvious nonunion or need for revision surgery. Upper thoracic scoliosis was corrected by a mean 68.2% (38% to 48%, p < 0.001). Main thoracic scoliosis was corrected by a mean 71% (43.5% to 8.9%, p < 0.001). Lumbar scoliosis was corrected by a mean 72.3% (41% to 90%, p < 0.001). No patient lost more than 3° of correction at follow-up. The thoracic kyphosis improved by 13.1° (-21° to 49°, p < 0.001); the lumbar lordosis remained unchanged (p = 0.58). Coronal imbalance was corrected by a mean 98% (0% to 100%, p < 0.001). Sagittal imbalance was corrected by a mean 96% (20% to 100%, p < 0.001). The Scoliosis Research Society Outcomes Questionnaire score improved from a mean 3.6 to 4.6 (2.4 to 4, p < 0.001); patient satisfaction was a mean 4.9 (4.8 to 5). This technique carries low neurological and vascular risks because the screws are placed in the pedicles of the convex side of the curve, away from the spinal cord, cauda equina and the aorta. A low implant density (pedicle screw density 1.2, when a density of 2 represents placement of pedicle screws bilaterally at every instrumented segment) achieved satisfactory correction of the scoliosis, an improved thoracic kyphosis and normal global sagittal balance. Both patient satisfaction and functional

  19. Data analysis techniques

    NASA Technical Reports Server (NTRS)

    Park, Steve

    1990-01-01

    A large and diverse number of computational techniques are routinely used to process and analyze remotely sensed data. These techniques include: univariate statistics; multivariate statistics; principal component analysis; pattern recognition and classification; other multivariate techniques; geometric correction; registration and resampling; radiometric correction; enhancement; restoration; Fourier analysis; and filtering. Each of these techniques will be considered, in order.

  20. Using Statistical Techniques and Web Search to Correct ESL Errors

    ERIC Educational Resources Information Center

    Gamon, Michael; Leacock, Claudia; Brockett, Chris; Dolan, William B.; Gao, Jianfeng; Belenko, Dmitriy; Klementiev, Alexandre

    2009-01-01

    In this paper we present a system for automatic correction of errors made by learners of English. The system has two novel aspects. First, machine-learned classifiers trained on large amounts of native data and a very large language model are combined to optimize the precision of suggested corrections. Second, the user can access real-life web…

  1. Chemical Plume Detection with an Iterative Background Estimation Technique

    DTIC Science & Technology

    2016-05-17

    schemes because of contamination of background statistics by the plume. To mitigate the effects of plume contamination, a first pass of the detector...can be used to create a background mask. However, large diffuse plumes are typically not removed by a single pass. Instead, contamination can be...is estimated using plume pixels, the covariance matrix is contaminated and detection performance may be significantly reduced. To avoid...

  2. MRI artifact reduction and quality improvement in the upper abdomen with PROPELLER and prospective acquisition correction (PACE) technique.

    PubMed

    Hirokawa, Yuusuke; Isoda, Hiroyoshi; Maetani, Yoji S; Arizono, Shigeki; Shimada, Kotaro; Togashi, Kaori

    2008-10-01

    The purpose of this study was to evaluate the effectiveness of the periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER [BLADE in the MR systems from Siemens Medical Solutions]) with a respiratory compensation technique for motion correction, image noise reduction, improved sharpness of liver edge, and image quality of the upper abdomen. Twenty healthy adult volunteers with a mean age of 28 years (age range, 23-42 years) underwent upper abdominal MRI with a 1.5-T scanner. For each subject, fat-saturated T2-weighted turbo spin-echo (TSE) sequences with respiratory compensation (prospective acquisition correction [PACE]) were performed with and without the BLADE technique. Ghosting artifact, other artifacts (such as respiratory motion and bowel movement), sharpness of liver edge, image noise, and overall image quality were evaluated visually by three radiologists using a 5-point scale for qualitative analysis. The Wilcoxon signed rank test was used to determine whether a significant difference existed between images with and without BLADE. A p value less than 0.05 was considered to be statistically significant. In the BLADE images, image artifacts, sharpness of liver edge, image noise, and overall image quality were significantly improved (p < 0.001). With the BLADE technique, T2-weighted TSE images of the upper abdomen could provide reduced image artifacts, including ghosting artifact and image noise, and better image quality.

  3. Correction of moderate to severe hallux valgus with combined proximal opening wedge and distal chevron osteotomies: a reliable technique.

    PubMed

    Jeyaseelan, L; Chandrashekar, S; Mulligan, A; Bosman, H A; Watson, A J S

    2016-09-01

    The mainstay of surgical correction of hallux valgus is first metatarsal osteotomy, either proximally or distally. We present a technique of combining a distal chevron osteotomy with a proximal opening wedge osteotomy, for the correction of moderate to severe hallux valgus. We reviewed 45 patients (49 feet) who had undergone double osteotomy. Outcome was assessed using the American Orthopaedic Foot and Ankle Society (AOFAS) and the Short Form (SF)-36 Health Survey scores. Radiological measurements were undertaken to assess the correction. The mean age of the patients was 60.8 years (44.2 to 75.3). The mean follow-up was 35.4 months (24 to 51). The mean AOFAS score improved from 54.7 to 92.3 (p < 0.001) and the mean SF-36 score from 59 to 86 (p < 0.001). The mean hallux valgus and intermetatarsal angles improved from 41.6° to 12.8° (p < 0.001) and from 22.1° to 7.1°, respectively (p < 0.001). The mean distal metatarsal articular angle improved from 23° to 9.7°. The mean sesamoid position, as described by Hardy and Clapham, improved from 6.8 to 3.5. The mean length of the first metatarsal was unchanged. The overall rate of complications was 4.1% (two patients). These results suggest that a double osteotomy of the first metatarsal is a reliable, safe technique which, when compared with other metatarsal osteotomies, provides strong angular correction and excellent outcomes with a low rate of complications. Cite this article: Bone Joint J 2016;98-B:1202-7. ©2016 The British Editorial Society of Bone & Joint Surgery.

  4. Time-of-day Corrections to Aircraft Noise Metrics

    NASA Technical Reports Server (NTRS)

    Clevenson, S. (Editor); Shepherd, W. T. (Editor)

    1980-01-01

    The historical and background aspects of time-of-day corrections, as well as the evidence supporting these corrections, are discussed. Health, welfare, and economic impacts; needs and criteria; and government policy and regulation are also reported.
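
    The best-known time-of-day correction of this kind is the 10 dB nighttime penalty applied in the Day-Night Average Sound Level (DNL). The sketch below is a generic illustration of that computation, not a formula taken from this report; the 22:00-07:00 night window and the function name are assumptions.

    ```python
    import math

    def dnl(hourly_leq_db):
        """Day-Night Average Sound Level (Ldn) from 24 hourly Leq values (dB).
        A 10 dB penalty is added to the night hours (22:00-07:00) before
        energy-averaging over the 24-hour day."""
        assert len(hourly_leq_db) == 24
        total = 0.0
        for hour, leq in enumerate(hourly_leq_db):
            penalty = 10.0 if (hour >= 22 or hour < 7) else 0.0
            total += 10 ** ((leq + penalty) / 10.0)   # convert to energy
        return 10.0 * math.log10(total / 24.0)        # back to decibels
    ```

    Because of the penalty, a constant 60 dB Leq around the clock yields a DNL of about 66.4 dB rather than 60 dB.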

  5. [Techniques for pixel response nonuniformity correction of CCD in interferential imaging spectrometer].

    PubMed

    Yao, Tao; Yin, Shi-Min; Xiangli, Bin; Lü, Qun-Bo

    2010-06-01

    Based on in-depth analysis of relative radiometric calibration theory and the acquired calibration data for pixel response nonuniformity correction of the CCD (charge-coupled device) in a spaceborne visible interferential imaging spectrometer, a CCD pixel response nonuniformity correction method suited to visible and infrared interferential imaging spectrometer systems was developed, effectively resolving the engineering problem of nonuniformity correction in detector arrays for interferential imaging spectrometer systems. The quantitative impact of CCD nonuniformity on interferogram correction and recovered-spectrum accuracy is also given. Furthermore, an improved method was proposed in which calibration and nonuniformity correction are performed after the instrument is fully assembled. This method saves time and manpower; it can correct nonuniformity arising from sources in the spectrometer system other than the CCD itself, can acquire recalibration data when the working environment changes, and can more effectively improve the nonuniformity calibration accuracy of interferential imaging spectrometers.
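
    A standard form of pixel response nonuniformity correction is the two-point dark/flat calibration: subtract a dark frame and normalize by the dark-subtracted flat-field response. The sketch below shows that generic scheme, not the paper's specific method; pixels are flattened to 1-D lists for brevity, and the renormalization to the mean gain is an illustrative choice.

    ```python
    def flat_field_correct(raw, dark, flat):
        """Two-point (dark/flat) pixel nonuniformity correction.
        raw, dark, flat are per-pixel lists from the same detector."""
        gains = [f - d for f, d in zip(flat, dark)]       # per-pixel response
        mean_gain = sum(gains) / len(gains)
        # Dark-subtract, divide out each pixel's gain, rescale to mean gain.
        return [(r - d) / g * mean_gain
                for r, d, g in zip(raw, dark, gains)]
    ```

    With a synthetic uniform scene viewed through pixels of unequal gain, the corrected frame comes out flat, which is the point of the calibration.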

  6. Correcting Poor Posture without Awareness or Willpower

    ERIC Educational Resources Information Center

    Wernik, Uri

    2012-01-01

    In this article, a new technique for correcting poor posture is presented. Rather than intentionally increasing awareness or mobilizing willpower to correct posture, this approach offers a game using randomly drawn cards with easy daily assignments. A case using the technique is presented to emphasize the subjective experience of living with poor…

  7. Adaptive correction of ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Pelosi, Anna; Battista Chirico, Giovanni; Van den Bergh, Joris; Vannitsem, Stephane

    2017-04-01

    Forecasts from numerical weather prediction (NWP) models often suffer from both systematic and non-systematic errors. These are present in both deterministic and ensemble forecasts, and originate from various sources such as model error and subgrid variability. Statistical post-processing techniques can partly remove such errors, which is particularly important when NWP outputs concerning surface weather variables are employed for site-specific applications. Many different post-processing techniques have been developed. For deterministic forecasts, adaptive methods such as the Kalman filter are often used, which sequentially post-process the forecasts by continuously updating the correction parameters as new ground observations become available. These methods are especially valuable when long training data sets do not exist. For ensemble forecasts, well-known techniques are ensemble model output statistics (EMOS) and so-called "member-by-member" approaches (MBM). Here, we introduce a new adaptive post-processing technique for ensemble predictions. The proposed method is a sequential Kalman filtering technique that fully exploits the information content of the ensemble. One correction equation is retrieved and applied to all members; however, the parameters of the regression equation are estimated by exploiting the second-order statistics of the forecast ensemble. We compare our new method with two other techniques: a simple method that makes use of a running bias correction of the ensemble mean, and an MBM post-processing approach that rescales the ensemble mean and spread, based on minimization of the Continuous Ranked Probability Score (CRPS). We perform a verification study for the region of Campania in southern Italy. We use two years (2014-2015) of daily meteorological observations of 2-meter temperature and 10-meter wind speed from 18 ground-based automatic weather stations distributed across the region, comparing them with the corresponding COSMO
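
    A minimal sketch of the adaptive idea, assuming a scalar random-walk bias model: a Kalman filter sequentially updates an additive bias estimate for the (ensemble-mean) forecast as observations arrive. The noise variances `q` and `r` are illustrative placeholders, not values from the study, and the ensemble-specific regression of the proposed method is omitted.

    ```python
    def kalman_bias_correction(forecasts, observations, q=0.01, r=1.0):
        """Sequentially estimate a slowly varying additive forecast bias with
        a scalar Kalman filter; return the bias-corrected forecast series.
        q: variance of the random-walk bias evolution; r: observation noise."""
        bias, p = 0.0, 1.0                  # bias estimate and its variance
        corrected = []
        for f, y in zip(forecasts, observations):
            corrected.append(f - bias)      # correct with the current estimate
            p += q                          # predict step (random-walk bias)
            k = p / (p + r)                 # Kalman gain
            bias += k * ((f - y) - bias)    # update with the new innovation
            p *= (1.0 - k)
        return corrected
    ```

    Fed a forecast series with a constant +2 offset, the filter converges to that bias within a few tens of steps, so late corrected values track the observations closely even though no long training set was used.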

  8. A new technique for correcting cryptotia: bolster external fixation method.

    PubMed

    Qing, Yong; Cen, Ying; Yu, Rong; Xu, Xuewen

    2010-11-01

    Cryptotia is a congenital auricular deformity in which the upper third of the auricle is buried under the temporal skin. There is no standard surgical method to correct cryptotia. This study aimed to devise a new surgical method to correct cryptotia with good auricular contour and an inconspicuous scar. We retrospectively reviewed 8 patients diagnosed with cryptotia in West China Hospital between 2006 and 2009, all of whom underwent this new surgical method. The follow-up period ranged from 6 months to 1 year. All patients had good auricular contour and sufficient skin for release of the upper part of the auricle without the need for a skin graft or local skin flap transfer. All patients had deep auriculotemporal sulci and inconspicuous scars. There were no complications, and cryptotia did not recur in any patient.

  9. Assessing the performance of different DTI motion correction strategies in the presence of EPI distortion correction.

    PubMed

    Taylor, Paul A; Alhamud, A; van der Kouwe, Andre; Saleh, Muhammad G; Laughton, Barbara; Meintjes, Ernesta

    2016-12-01

    Diffusion tensor imaging (DTI) is susceptible to several artifacts due to eddy currents, echo planar imaging (EPI) distortion and subject motion. While several techniques correct for individual distortion effects, no optimal combination of DTI acquisition and processing has been determined. Here, the effects of several motion correction techniques are investigated while also correcting for EPI distortion: prospective correction, using navigation; retrospective correction, using two different popular packages (FSL and TORTOISE); and the combination of both methods. Data from a pediatric group that exhibited incidental motion in varying degrees are analyzed. Comparisons are carried out while implementing eddy current and EPI distortion correction. DTI parameter distributions, white matter (WM) maps and probabilistic tractography are examined. The importance of prospective correction during data acquisition is demonstrated. In contrast to some previous studies, results also show that the inclusion of retrospective processing also improved ellipsoid fits and both the sensitivity and specificity of group tractographic results, even for navigated data. Matches with anatomical WM maps are highest throughout the brain for data that have been both navigated and processed using TORTOISE. The inclusion of both prospective and retrospective motion correction with EPI distortion correction is important for DTI analysis, particularly when studying subject populations that are prone to motion. Hum Brain Mapp 37:4405-4424, 2016. © 2016 Wiley Periodicals, Inc.

  10. Minor Distortions with Major Consequences: Correcting Distortions in Imaging Spectrographs

    PubMed Central

    Esmonde-White, Francis W. L.; Esmonde-White, Karen A.; Morris, Michael D.

    2010-01-01

    Projective transformation is a mathematical correction (implemented in software) used in the remote imaging field to produce distortion-free images. We present the application of projective transformation to correct minor alignment and astigmatism distortions that are inherent in dispersive spectrographs. Patterned white-light images and neon emission spectra were used to produce registration points for the transformation. Raman transects collected on microscopy and fiber-optic systems were corrected using established methods and compared with the same transects corrected using the projective transformation. Even minor distortions have a significant effect on reproducibility and apparent fluorescence background complexity. Simulated Raman spectra were used to optimize the projective transformation algorithm. We demonstrate that the projective transformation reduced the apparent fluorescent background complexity and improved reproducibility of measured parameters of Raman spectra. Distortion correction using a projective transformation provides a major advantage in reducing the background fluorescence complexity even in instrumentation where slit-image distortions and camera rotation were minimized using manual or mechanical means. We expect these advantages should be readily applicable to other spectroscopic modalities using dispersive imaging spectrographs. PMID:21211158
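
    The core operation in such a correction is mapping pixel coordinates through a 3x3 homography, with dehomogenization by the third coordinate. A minimal sketch follows; the matrices in the example are illustrative, and the paper's step of estimating the homography from white-light and neon registration points is omitted.

    ```python
    def apply_homography(h, pts):
        """Map (x, y) points through a 3x3 projective transformation h,
        given as nested lists; returns dehomogenized (x', y') tuples."""
        out = []
        for x, y in pts:
            u = h[0][0] * x + h[0][1] * y + h[0][2]
            v = h[1][0] * x + h[1][1] * y + h[1][2]
            w = h[2][0] * x + h[2][1] * y + h[2][2]
            out.append((u / w, v / w))      # divide out homogeneous scale
        return out
    ```

    Affine maps (translation, rotation, shear) are the special case with bottom row [0, 0, 1]; a nonzero bottom row encodes the keystone-like distortions that make the transformation properly projective. Note that h is only defined up to scale: multiplying all entries by a constant leaves the mapped points unchanged.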

  11. Motion Correction in PROPELLER and Turboprop-MRI

    PubMed Central

    Tamhane, Ashish A.; Arfanakis, Konstantinos

    2009-01-01

    PROPELLER and Turboprop-MRI are characterized by greatly reduced sensitivity to motion, compared to their predecessors, fast spin-echo and gradient and spin-echo, respectively. This is due to the inherent self-navigation and motion correction of PROPELLER-based techniques. However, it is unknown how various acquisition parameters that determine k-space sampling affect the accuracy of motion correction in PROPELLER and Turboprop-MRI. The goal of this work was to evaluate the accuracy of motion correction in both techniques, to identify an optimal rotation correction approach, and determine acquisition strategies for optimal motion correction. It was demonstrated that blades with multiple lines allow more accurate estimation of motion than blades with fewer lines. Also, it was shown that Turboprop-MRI is less sensitive to motion than PROPELLER. Furthermore, it was demonstrated that the number of blades does not significantly affect motion correction. Finally, clinically appropriate acquisition strategies that optimize motion correction were discussed for PROPELLER and Turboprop-MRI. PMID:19365858

  12. Ionospheric propagation correction modeling for satellite altimeters

    NASA Technical Reports Server (NTRS)

    Nesterczuk, G.

    1981-01-01

    The theoretical basis and available accuracy verifications were reviewed and compared for ionospheric correction procedures based on a global ionospheric model driven by solar flux, and a technique in which the measured electron content (using Faraday rotation measurements) for one path is mapped into corrections for a hemisphere. For these two techniques, RMS errors for correcting satellite altimeter data (at 14 GHz) are estimated to be 12 cm and 3 cm, respectively. On the basis of global accuracy and reliability after implementation, the solar flux model is recommended.

  13. Impact of nonrigid motion correction technique on pixel-wise pharmacokinetic analysis of free-breathing pulmonary dynamic contrast-enhanced MR imaging.

    PubMed

    Tokuda, Junichi; Mamata, Hatsuho; Gill, Ritu R; Hata, Nobuhiko; Kikinis, Ron; Padera, Robert F; Lenkinski, Robert E; Sugarbaker, David J; Hatabu, Hiroto

    2011-04-01

    To investigate the impact of nonrigid motion correction on pixel-wise pharmacokinetic analysis of free-breathing DCE-MRI in patients with solitary pulmonary nodules (SPNs). Misalignment of focal lesions due to respiratory motion in free-breathing dynamic contrast-enhanced MRI (DCE-MRI) precludes obtaining reliable time-intensity curves, which are crucial for pharmacokinetic analysis for tissue characterization. Single-slice 2D DCE-MRI was obtained in 15 patients. Misalignments of SPNs were corrected using nonrigid B-spline image registration. Pixel-wise pharmacokinetic parameters K(trans), v(e), and k(ep) were estimated from both original and motion-corrected DCE-MRI by fitting the two-compartment pharmacokinetic model to the time-intensity curve obtained in each pixel. The "goodness-of-fit" was tested with a χ²-test on a pixel-by-pixel basis to evaluate the reliability of the parameters. The percentages of reliable pixels within the SPNs were compared between the original and motion-corrected DCE-MRI. In addition, the parameters obtained from benign and malignant SPNs were compared. The percentage of reliable pixels in the motion-corrected DCE-MRI was significantly larger than in the original DCE-MRI (P = 4 × 10^-7). Both K(trans) and k(ep) derived from the motion-corrected DCE-MRI showed significant differences between benign and malignant SPNs (P = 0.024, 0.015). The study demonstrated the impact of nonrigid motion correction technique on pixel-wise pharmacokinetic analysis of free-breathing DCE-MRI in SPNs. Copyright © 2011 Wiley-Liss, Inc.

  14. Holographic corrections to the Veneziano amplitude

    NASA Astrophysics Data System (ADS)

    Armoni, Adi; Ireson, Edwin

    2017-08-01

    We propose a holographic computation of the 2 → 2 meson scattering in a curved string background, dual to a QCD-like theory. We recover the Veneziano amplitude and compute a perturbative correction due to the background curvature. The result implies a small deviation from a linear trajectory, which is a requirement of the UV regime of QCD.

  15. [Therapeutic evaluation of the correction of the severe bi-maxillary protrusion cases by Tweed-Merrifield technique].

    PubMed

    Huang, J Q; Liu, S Y; Jiang, J H

    2016-06-18

    To evaluate the influence of the Tweed-Merrifield technique in the correction of severe bimaxillary protrusion in adult patients, measuring the dental and skeletal changes after orthodontic treatment by Johnston analysis and regular cephalometric analysis. Twelve adolescent patients with severe bimaxillary protrusion were included in this self-controlled retrospective study. Lateral cephalometric radiographs were taken before and after treatment. All the radiographs were traced and analyzed by the method of Johnston analysis. Other measurements were evaluated using a series of 13 linear and angular measurements including SNA, SNB, ANB, U1-SN, U1-NA, U1/NA, L1-NB, U1/NB, L1/MP, U1-L1, (U1+L1)/2-AB, MP/SN and MP/FH from regular cephalometric analysis. These measurements were also used to compare the differences between pre- and post-treatment, which clarify the dental and skeletal changes by Johnston analysis. The effect of orthodontic correction was determined using non-parametric tests. The maxilla moved backward by 1.3 mm relative to the stable cranial base, while the mandible moved forward by 2.12 mm. The relative position between the maxilla and mandible (ABCH) changed 3.42 mm. The upper and lower incisors retracted significantly. The upper and lower molars moved slightly forward, and the relative positions of the upper and lower molars and anterior teeth after treatment were 3.44 mm and 4.23 mm, respectively. After treatment, the parameters of ANB, U1-NA, U1/NA, U1-SN, L1-NB, L1/NB and L1-M were reduced by -(1.98±1.55)° (P=0.012), -(5.08±4.6) mm (P=0.002), -(11.79±1.21)° (P=0.004), -(13.55±6.32)° (P=0.047), -(3.17±3.07) mm (P=0.010), -(6.84±2.55)° (P=0.038) and -(4.13±2.24)° (P=0.048) on average, all statistically significant changes. The Tweed-Merrifield technique (directional force technique) can stabilize the anchorage molar, retract anterior teeth and significantly improve the hard and soft tissue profile for patients with

  16. The coincidence counting technique for orders of magnitude background reduction in data obtained with the magnetic recoil spectrometer at OMEGA and the NIF.

    PubMed

    Casey, D T; Frenje, J A; Séguin, F H; Li, C K; Rosenberg, M J; Rinderknecht, H; Manuel, M J-E; Gatu Johnson, M; Schaeffer, J C; Frankel, R; Sinenian, N; Childs, R A; Petrasso, R D; Glebov, V Yu; Sangster, T C; Burke, M; Roberts, S

    2011-07-01

    A magnetic recoil spectrometer (MRS) has been built and successfully used at OMEGA for measurements of down-scattered neutrons (DS-n), from which areal density in both warm-capsule and cryogenic-DT implosions has been inferred. Another MRS is currently being commissioned on the National Ignition Facility (NIF) for diagnosing low-yield tritium-hydrogen-deuterium implosions and high-yield DT implosions. As CR-39 detectors are used in the MRS, the principal sources of background are neutron-induced tracks and intrinsic tracks (defects in the CR-39). The coincidence counting technique was developed to reduce these types of background tracks to the required level for the DS-n measurements at OMEGA and the NIF. Using this technique, it has been demonstrated that the number of background tracks is reduced by a couple of orders of magnitude, which exceeds the requirement for the DS-n measurements at both facilities.
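
    The underlying idea is that real signal tracks appear at coincident positions in two independent scans, while intrinsic defects and most neutron-induced tracks do not. The toy sketch below illustrates only this matching logic with 2-D coordinates and a position tolerance; the actual CR-39 procedure (front/back surface scans, track diameters, etch depths) is more involved, and all names and tolerances here are illustrative.

    ```python
    def coincidence_filter(tracks_a, tracks_b, tol=0.01):
        """Keep only tracks in scan A that have a coincident partner in
        scan B within +/- tol in both coordinates (same units as input).
        Uncorrelated background tracks rarely coincide, so they are cut."""
        kept = []
        for xa, ya in tracks_a:
            if any(abs(xa - xb) <= tol and abs(ya - yb) <= tol
                   for xb, yb in tracks_b):
                kept.append((xa, ya))
        return kept
    ```

    Because random background lands independently in each scan, the chance a given background track survives scales with the coincidence-window area, which is how orders-of-magnitude rejection is achieved.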

  17. 76 FR 56949 - Biomass Crop Assistance Program; Corrections

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-15

    .... ACTION: Interim rule; correction. SUMMARY: The Commodity Credit Corporation (CCC) is amending the Biomass... funds in favor of the ``project area'' portion of BCAP. CCC is also correcting errors in the regulation... INFORMATION: Background CCC published a final rule on October 27, 2010 (75 FR 66202-66243) implementing BCAP...

  18. The "clover technique" as a novel approach for correction of post-traumatic tricuspid regurgitation.

    PubMed

    Alfieri, O; De Bonis, M; Lapenna, E; Agricola, E; Quarti, A; Maisano, F

    2003-07-01

    To describe a novel technique, named "clover," to correct complex post-traumatic tricuspid valve lesions. Five patients with severe post-traumatic tricuspid insufficiency underwent valve reconstruction with the clover technique, a new surgical approach that consists of stitching together the middle point of the free edges of the tricuspid leaflets, producing a clover-shaped valve. The mechanism of tricuspid regurgitation was complex in all patients, and right ventricular function was always moderately to severely depressed. An echocardiographic study was performed after cardiopulmonary bypass, at discharge, and at follow-up. Cardiopulmonary bypass time was 32 +/- 6.3 minutes and crossclamp time was 23 +/- 7.4 minutes. There was no hospital mortality or morbidity. Intraoperative transesophageal and predischarge transthoracic echocardiography showed perfect results in all patients. No late deaths occurred. At the latest follow-up, extending to 14.2 months (mean 11.3; median 12.4), all patients were asymptomatic (New York Heart Association class I) with trivial (2 patients) or no residual regurgitation (3 patients) on 2-dimensional echocardiogram. No transvalvular gradient was revealed in any patient. A significant reduction of the right ventricular end-diastolic dimensions was noted as well (from 54 +/- 7.1 mm to 40 +/- 7.5 mm, P <.001). In this preliminary experience, the clover technique increased the feasibility of tricuspid valve repair in case of severe traumatic tricuspid valve insufficiency, leading to very satisfactory mid-term results even in the presence of complex lesions or dilatation and deterioration of the right ventricle.

  19. Validation of phenol red versus gravimetric method for water reabsorption correction and study of gender differences in Doluisio's absorption technique.

    PubMed

    Tuğcu-Demiröz, Fatmanur; Gonzalez-Alvarez, Isabel; Gonzalez-Alvarez, Marta; Bermejo, Marival

    2014-10-01

    The aim of the present study was to develop a method for water flux reabsorption measurement in Doluisio's Perfusion Technique based on the use of phenol red as a non-absorbable marker and to validate it by comparison with the gravimetric procedure. The compounds selected for the study were metoprolol, atenolol, cimetidine and cefadroxil, in order to include low, intermediate and high permeability drugs absorbed by passive diffusion and by carrier-mediated mechanisms. The intestinal permeabilities (Peff) of the drugs were obtained in male and female Wistar rats and calculated using both methods of water flux correction. The absorption rate coefficients of all the assayed compounds did not show statistically significant differences between male and female rats; consequently, all the individual values were combined to compare between reabsorption methods. The absorption rate coefficients and permeability values did not show statistically significant differences between the two strategies of concentration correction. The apparent zero order water absorption coefficients were also similar in both correction procedures. In conclusion, the gravimetric and phenol red methods for water reabsorption correction are accurate and interchangeable for permeability estimation in the closed-loop perfusion method. Copyright © 2014 Elsevier B.V. All rights reserved.
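
    The logic of the marker method: since phenol red is not absorbed, any rise in its concentration reflects water reabsorption, so the luminal volume at time t is V_t = V_0 · PR_0 / PR_t, and measured drug concentrations can be referenced back to the initial volume before fitting absorption kinetics. A minimal sketch under that assumption (function and variable names are illustrative, not from the paper):

    ```python
    def volume_corrected_conc(c_drug, c_pr, c_pr0):
        """Correct luminal drug concentrations for water reabsorption using
        phenol red as a non-absorbable marker.
        c_drug: measured drug concentrations at each sampling time
        c_pr:   phenol red concentrations at the same times
        c_pr0:  initial phenol red concentration
        Returns concentrations referenced to the initial volume V_0,
        using V_t = V_0 * c_pr0 / c_pr[t]."""
        return [c * c_pr0 / pr for c, pr in zip(c_drug, c_pr)]
    ```

    For example, a drug reading that stays flat at 5 while phenol red doubles actually corresponds to a halved amount of drug once the shrinking volume is accounted for.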

  20. Dead-time Corrected Disdrometer Data

    DOE Data Explorer

    Bartholomew, Mary Jane

    2008-03-05

    Original and dead-time corrected disdrometer results for observations made at SGP and TWP. The correction is based on the technique discussed in Sheppard and Joe, 1994. In addition, these files contain calculated radar reflectivity factor, mean Doppler velocity and attenuation for every measurement for both the original and dead-time corrected data at the following wavelengths: 0.316, 0.856, 3.2, 5, and 10 cm (W, K, X, C, S bands). Pavlos Kollias provided the code to do these calculations.
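
    Sheppard and Joe (1994) describe a disdrometer-specific correction scheme; as a hedged stand-in, the sketch below shows only the generic non-paralyzable dead-time formula on which such corrections are based, with illustrative parameter names.

    ```python
    def deadtime_correct(n_measured, interval_s, dead_time_s):
        """Non-paralyzable dead-time correction for event counts:
        after each registered event the instrument is blind for tau seconds,
        so n_true = n_meas / (1 - n_meas * tau / T) over an interval T."""
        frac = n_measured * dead_time_s / interval_s   # fraction of time blind
        if frac >= 1.0:
            raise ValueError("count rate saturates the detector")
        return n_measured / (1.0 - frac)
    ```

    With 100 counts in 1 s and a 1 ms dead time, the instrument was blind 10% of the time, so the corrected count is about 111.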

  1. A Phase Correction Technique Based on Spatial Movements of Antennas in Real-Time (S.M.A.R.T.) for Designing Self-Adapting Conformal Array Antennas

    NASA Astrophysics Data System (ADS)

    Roy, Sayan

    This research presents a real-time adaptive phase correction technique for flexible phased array antennas on conformal surfaces of variable shapes. Previously reported pattern correctional methods for flexible phased array antennas require prior knowledge on the possible non-planar shapes in which the array may adapt for conformal applications. For the first time, this initial requirement of shape curvature knowledge is no longer needed and the instantaneous information on the relative location of array elements is used here for developing a geometrical model based on a set of Bezier curves. Specifically, by using an array of inclinometer sensors and an adaptive phase-correctional algorithm, it has been shown that the proposed geometrical model can successfully predict different conformal orientations of a 1-by-4 antenna array in real-time without the requirement of knowing the shape-changing characteristics of the surface the array is attached upon. Moreover, the phase correction technique is validated by determining the field patterns and broadside gain of the 1-by-4 antenna array on four different conformal surfaces with multiple points of curvatures. Throughout this work, measurements are shown to agree with the analytical solutions and full-wave simulations.
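
    At the heart of any such correction is converting each element's displacement normal to the nominal array plane into a compensating excitation phase, Δφ = -(2π/λ)·Δz, so the array radiates co-phased at broadside again. A minimal sketch under that assumption (the 2.45 GHz frequency and function name are illustrative; the Bezier-curve surface model and inclinometer processing of the dissertation are omitted):

    ```python
    import math

    def compensation_phases(z_mm, freq_hz=2.45e9):
        """Per-element phase offsets (degrees, in [0, 360)) that restore a
        co-phased broadside beam for elements displaced by z_mm (millimetres)
        normal to the nominal array plane."""
        lam_mm = 3e11 / freq_hz          # free-space wavelength in mm
        k = 2 * math.pi / lam_mm         # wavenumber in rad/mm
        return [math.degrees(-k * z) % 360 for z in z_mm]
    ```

    An element pushed forward by half a wavelength needs a 180° offset; an undisplaced element needs none.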

  2. a Novel Technique for Precision Geometric Correction of Jitter Distortion for the Europa Imaging System and Other Rolling-Shutter Cameras

    NASA Astrophysics Data System (ADS)

    Kirk, R. L.; Shepherd, M.; Sides, S. C.

    2018-04-01

    We use simulated images to demonstrate a novel technique for mitigating geometric distortions caused by platform motion ("jitter") as two-dimensional image sensors are exposed and read out line by line ("rolling shutter"). The results indicate that the Europa Imaging System (EIS) on NASA's Europa Clipper can likely meet its scientific goals requiring 0.1-pixel precision. We are therefore adapting the software used to demonstrate and test rolling shutter jitter correction to become part of the standard processing pipeline for EIS. The correction method will also apply to other rolling-shutter cameras, provided they have the operational flexibility to read out selected "check lines" at chosen times during the systematic readout of the frame area.

  3. Artifact correction and absolute radiometric calibration techniques employed in the Landsat 7 image assessment system

    USGS Publications Warehouse

    Boncyk, Wayne C.; Markham, Brian L.; Barker, John L.; Helder, Dennis

    1996-01-01

    The Landsat-7 Image Assessment System (IAS), part of the Landsat-7 Ground System, will calibrate and evaluate the radiometric and geometric performance of the Enhanced Thematic Mapper Plus (ETM+) instrument. The IAS incorporates new instrument radiometric artifact correction and absolute radiometric calibration techniques which overcome some limitations to calibration accuracy inherent in historical calibration methods. Knowledge of ETM+ instrument characteristics gleaned from analysis of archival Thematic Mapper in-flight data and from ETM+ prelaunch tests allows the determination and quantification of the sources of instrument artifacts. This a priori knowledge will be utilized in IAS algorithms designed to minimize the effects of the noise sources before calibration, in both ETM+ image and calibration data.

  4. Electroweak Corrections to pp→μ^{+}μ^{-}e^{+}e^{-}+X at the LHC: A Higgs Boson Background Study.

    PubMed

    Biedermann, B; Denner, A; Dittmaier, S; Hofer, L; Jäger, B

    2016-04-22

    The first complete calculation of the next-to-leading-order electroweak corrections to four-lepton production at the LHC is presented, where all off-shell effects of intermediate Z bosons and photons are taken into account. Focusing on the mixed final state μ^{+}μ^{-}e^{+}e^{-}, we study differential cross sections that are particularly interesting for Higgs boson analyses. The electroweak corrections are divided into photonic and purely weak corrections. The former exhibit patterns familiar from similar W- or Z-boson production processes with very large radiative tails near resonances and kinematical shoulders. The weak corrections are of the generic size of 5% and show interesting variations, in particular, a sign change between the regions of resonant Z-pair production and the Higgs signal.

  5. High-energy electrons from the muon decay in orbit: Radiative corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szafron, Robert; Czarnecki, Andrzej

    2015-12-07

    We determine the O(α) correction to the energy spectrum of electrons produced in the decay of muons bound in atoms. We focus on the high-energy end of the spectrum that constitutes a background for the muon-electron conversion and will be precisely measured by the upcoming experiments Mu2e and COMET. As a result, the correction suppresses the background by about 20%.

  6. A Novel Surgical Technique to Correct Intra-areolar Polythelia.

    PubMed

    Cherubino, Mario; Pellegatta, Igor; Frigo, Claudia; Scamoni, Stefano; Taibi, Dominic; Maggiulli, Francesca; Valdatta, Luigi

    2014-07-01

    Polythelia is a rare congenital malformation that occurs in 1-2% of the population. Intra-areolar polythelia is the presence of one or more supernumerary nipples located within the areola. This is extremely rare. This article presents 3 cases of intra-areolar polythelia treated at our Department. These cases did not present other associated malformation. Surgical correction was performed for psychological and cosmetic reasons using advancement flaps. The aesthetic and functional results were satisfactory.

  7. Photobleaching correction in fluorescence microscopy images

    NASA Astrophysics Data System (ADS)

    Vicente, Nathalie B.; Diaz Zamboni, Javier E.; Adur, Javier F.; Paravani, Enrique V.; Casco, Víctor H.

    2007-11-01

    Fluorophores are used to detect molecular expression by highly specific antigen-antibody reactions in fluorescence microscopy techniques. A portion of the fluorophore emits fluorescence when irradiated with electromagnetic waves of particular wavelengths, enabling its detection. Photobleaching irreversibly destroys fluorophores stimulated by radiation within the excitation spectrum, thus eliminating potentially useful information. Since this process may not be completely prevented, techniques have been developed to slow it down or to correct resulting alterations (mainly, the decrease in fluorescent signal). In the present work, the correction by photobleaching curve was studied using E-cadherin (a cell-cell adhesion molecule) expression in Bufo arenarum embryos. Significant improvements were observed when applying this simple, inexpensive and fast technique.
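
    Correction by a photobleaching curve typically fits a monoexponential decay, I(t) = I0·exp(-k·t), to the mean intensity of each frame and then divides every frame by the normalized fitted curve. The sketch below shows that generic scheme under the monoexponential assumption; it works on a list of frame means, and the log-linear fitting route is one common implementation choice, not necessarily the authors'.

    ```python
    import math

    def bleach_correction_factors(frame_means):
        """Fit I(t) = I0 * exp(-k*t) to per-frame mean intensities by
        log-linear least squares; return per-frame divisors normalized so
        that frame 0 is unchanged (divide each frame by its factor)."""
        n = len(frame_means)
        ts = range(n)
        logs = [math.log(m) for m in frame_means]
        tbar = sum(ts) / n
        lbar = sum(logs) / n
        # Ordinary least squares on (t, log I): slope = -k, intercept = log I0
        sxy = sum((t - tbar) * (l - lbar) for t, l in zip(ts, logs))
        sxx = sum((t - tbar) ** 2 for t in ts)
        slope = sxy / sxx
        intercept = lbar - slope * tbar
        fit = [math.exp(intercept + slope * t) for t in ts]
        return [f / fit[0] for f in fit]
    ```

    Dividing each frame by its factor flattens the decay, so intensity differences that remain after correction reflect biology rather than bleaching.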

  8. Ionospheric Correction of D-InSAR Using Split-Spectrum Technique and 3D Ionosphere Model in Deformation Monitoring

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Guo, L.; Wu, J. J.; Chen, Q.; Song, S.

    2014-12-01

    In Differential Interferometric Synthetic Aperture Radar (D-InSAR), the atmospheric effect, comprising tropospheric and ionospheric contributions, is one of the dominant sources of error in most interferograms and greatly reduces the accuracy of deformation monitoring. In recent years, tropospheric correction in InSAR data processing, especially of the zenith wet delay (ZWD), has been widely investigated and efficiently suppressed. We therefore focused our study on ionospheric correction using two different methods: the split-spectrum technique and the NeQuick model, one of the three-dimensional electron density models. We processed Wenchuan ALOS PALSAR images and compared InSAR surface deformation after ionospheric correction using the two approaches mentioned above with ground GPS subsidence observations, to validate the effect of the split-spectrum method and the NeQuick model, and further discussed the performance and feasibility of external data and of InSAR itself for eliminating the InSAR ionospheric effect.
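
    The split-spectrum technique exploits the fact that the ionospheric phase is dispersive (scales as 1/f) while the geometric phase is non-dispersive (scales as f). Forming interferograms from a low and a high range sub-band, centered at f_L and f_H, the dispersive phase at the carrier f0 follows from the standard combination used in the literature; the sketch below implements that formula, with all numerical values in the example being illustrative.

    ```python
    def split_spectrum_iono(phi_l, phi_h, f_l, f_h, f0):
        """Dispersive (ionospheric) phase at carrier f0 from sub-band
        interferometric phases phi_l (at f_l) and phi_h (at f_h):
        phi_iono = f_l*f_h / (f0*(f_h^2 - f_l^2)) * (phi_l*f_h - phi_h*f_l)
        The non-dispersive term, proportional to f, cancels exactly."""
        return (f_l * f_h) / (f0 * (f_h ** 2 - f_l ** 2)) \
               * (phi_l * f_h - phi_h * f_l)
    ```

    A quick consistency check: for a synthetic phase phi(f) = a·f + b/f, the combination returns exactly b/f0, i.e. the dispersive part alone, whatever the non-dispersive coefficient a.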

  9. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  10. Review of approaches to the recording of background lesions in toxicologic pathology studies in rats.

    PubMed

    McInnes, E F; Scudamore, C L

    2014-08-17

    Pathological evaluation of lesions caused directly by xenobiotic treatment must always take into account the recognition of background (incidental) findings. Background lesions can include congenital or hereditary lesions, histological variations, changes related to trauma or normal aging, and physiologic or hormonal changes. This review focuses on the importance of, and the correct approach to, recording background changes, and includes discussion on sources of variability in background changes, the correct use of terminology, the concept of thresholds, historical control data, diagnostic drift, blind reading of slides, scoring and artifacts. The review is illustrated with background lesions in Sprague Dawley and Wistar rats. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  11. Transconjunctival Incision with Lateral Paracanthal Extension for Corrective Osteotomy of Malunioned Zygoma

    PubMed Central

    Chung, Jae-Ho; You, Hi-Jin; Hwang, Na-Hyun; Yoon, Eul-Sik

    2016-01-01

    Background Conventional correction of malunioned zygoma requires complete regional exposure through a bicoronal flap combined with a lower eyelid incision and an upper buccal sulcus incision. However, there are many potential complications following bicoronal incisions, such as infection, hematoma, alopecia, scarring and nerve injury. We have adopted a zygomaticofrontal suture osteotomy technique using a transconjunctival incision with lateral paracanthal extension. We performed a retrospective review of clinical cases that underwent correction of malunioned zygoma with this approach to evaluate its outcomes. Methods Between June 2009 and September 2015, corrective osteotomies were performed in 14 patients with malunioned zygoma by a single surgeon. All 14 patients received both upper gingivobuccal and transconjunctival incisions with lateral paracanthal extension. The mean interval from injury to operation was 16 months (range, 12 months to 4 years), and the mean follow-up was 1 year (range, 4 months to 3 years). Results Our surgical technique allowed excellent access to the infraorbital rim, orbital floor, zygomaticofrontal suture and anterior surface of the maxilla. Of the 14 patients, only 1 patient suffered a complication (oral wound dehiscence). Among the 6 patients who received infraorbital nerve decompression, numbness was gradually relieved in 4 patients. Two patients continued to experience persistent numbness. Conclusion Transconjunctival incision with lateral paracanthal extension combined with an upper gingivobuccal sulcus incision offers excellent exposure of the zygoma-orbit complex, and could be a valid alternative to the bicoronal approach for osteotomy of malunioned zygoma. PMID:28913268

  12. Rapid Vision Correction by Special Operations Forces.

    PubMed

    Reynolds, Mark E

    This report describes a rapid method of vision correction used by Special Operations Medics in multiple operational engagements. Between 2011 and 2015, Special Operations Medics used an algorithm-driven refraction technique. A standard block of instruction was provided to the medics, along with a packaged kit. The technique was used in multiple operational engagements with host nation military and civilians. Data collected for program evaluation were later analyzed to assess the utility of the technique. Glasses were distributed to 230 patients with complaints of either decreased distance vision or decreased near (reading) vision. Most patients (84%) with distance complaints achieved corrected binocular vision of 20/40 or better, and 97% of patients with near-vision complaints achieved corrected near binocular vision of 20/40 or better. There was no statistically significant difference between the percentages of patients achieving 20/40 when medics used the technique under direct supervision versus independent use. A basic refraction technique using a designed kit allows for meaningful improvement in distance and/or near vision at austere locations. Special Operations Medics can leverage this approach after specific training with minimal time commitment. It can serve as a rapid, effective intervention with multiple applications in diverse operational environments. 2017.

  13. A Novel Surgical Technique to Correct Intra-areolar Polythelia

    PubMed Central

    Cherubino, Mario; Pellegatta, Igor; Frigo, Claudia; Scamoni, Stefano; Taibi, Dominic; Maggiulli, Francesca; Valdatta, Luigi

    2014-01-01

    Polythelia is a rare congenital malformation that occurs in 1–2% of the population. Intra-areolar polythelia is the presence of one or more supernumerary nipples located within the areola. This is extremely rare. This article presents 3 cases of intra-areolar polythelia treated at our Department. These cases did not present other associated malformations. Surgical correction was performed for psychological and cosmetic reasons using advancement flaps. The aesthetic and functional results were satisfactory. PMID:28331667

  14. Delegation in Correctional Nursing Practice.

    PubMed

    Tompkins, Frances

    2016-07-01

    Correctional nurses face daily challenges as a result of their work environment. Common challenges include availability of resources for appropriate care delivery, negotiating with custody staff for access to patients, adherence to scope of practice standards, and working with a varied staffing mix. Professional correctional nurses must consider the educational backgrounds and competency of other nurses and assistive personnel in planning for care delivery. Budgetary constraints and varied staff preparation can be a challenge for the professional nurse. Adequate care planning requires understanding the educational level and competency of licensed and unlicensed staff. Delegation is the process of assessing patient needs and transferring responsibility for care to appropriately educated and competent staff. Correctional nurses can benefit from increased knowledge about delegation. © The Author(s) 2016.

  15. Quantitative assessment of scatter correction techniques incorporated in next generation dual-source computed tomography

    NASA Astrophysics Data System (ADS)

    Mobberley, Sean David

    Accurate, cross-scanner assessment of in-vivo air density used to quantitatively assess the amount and distribution of emphysema in COPD subjects has remained elusive. Hounsfield units (HU) within tracheal air can be considerably more positive than -1000 HU. With the advent of new dual-source scanners which employ dedicated scatter correction techniques, it is of interest to evaluate how the quantitative measures of lung density compare between dual-source and single-source scan modes. This study sought to characterize in-vivo and phantom-based air metrics using dual-energy computed tomography technology where the nature of the technology has required adjustments to scatter correction. Anesthetized ovine (N=6), swine (N=13: more human-like rib cage shape), a lung phantom and a thoracic phantom were studied using a dual-source MDCT scanner (Siemens Definition Flash). Multiple dual-source dual-energy (DSDE) and single-source (SS) scans taken at different energy levels and scan settings were acquired for direct quantitative comparison. Density histograms were evaluated for the lung, tracheal, water and blood segments. Image data were obtained at 80, 100, 120, and 140 kVp in the SS mode (B35f kernel) and at 80, 100, 140, and 140-Sn (tin filtered) kVp in the DSDE mode (B35f and D30f kernels), in addition to variations in dose, rotation time, and pitch. To minimize the effect of cross-scatter, the phantom scans in the DSDE mode were obtained by reducing the tube current of one of the tubes to its minimum (near zero) value. When using image data obtained in the DSDE mode, the median HU values in the tracheal regions of all animals and the phantom were consistently closer to -1000 HU regardless of reconstruction kernel (chapters 3 and 4). Similarly, HU values of water and blood were consistently closer to their nominal values of 0 HU and 55 HU respectively. When using image data obtained in the SS mode, the air CT numbers demonstrated a consistent positive shift of up to 35 HU.

  16. Robust Approach for Nonuniformity Correction in Infrared Focal Plane Array.

    PubMed

    Boutemedjet, Ayoub; Deng, Chenwei; Zhao, Baojun

    2016-11-10

    In this paper, we propose a new scene-based nonuniformity correction technique for infrared focal plane arrays. Our work builds on two well-known scene-based methods: an adaptive method and an interframe registration-based method that exploits a pure-translation motion model between frames. Each approach has benefits and drawbacks that make it extremely effective in certain conditions and poorly adapted to others. We therefore developed a method that is robust to conditions which may slow or degrade the correction process, by elaborating a decision criterion that switches the process to the most effective technique so as to ensure fast and reliable correction. Problems such as bad pixels and ghosting artifacts are also dealt with to enhance the overall quality of the correction. The performance of the proposed technique is investigated and compared with the two state-of-the-art techniques cited above.


  18. Correction of Angle Class II division 1 malocclusion with a mandibular protraction appliance and multiloop edgewise archwire technique

    PubMed Central

    Freitas, Heloiza; dos Santos, Pedro César F; Janson, Guilherme

    2014-01-01

    A Brazilian girl aged 14 years and 9 months presented with a chief complaint of protrusive teeth. She had a convex facial profile, extreme overjet, deep bite, lack of passive lip seal, acute nasolabial angle, and retrognathic mandible. Intraorally, she showed maxillary diastemas, slight mandibular incisor crowding, a small maxillary arch, 13-mm overjet, and 4-mm overbite. After the diagnosis of severe Angle Class II division 1 malocclusion, a mandibular protraction appliance was placed to correct the Class II relationships and multiloop edgewise archwires were used for finishing. Follow-up examinations revealed an improved facial profile, normal overjet and overbite, and good intercuspation. The patient was satisfied with her occlusion, smile, and facial appearance. The excellent results suggest that orthodontic camouflage by using a mandibular protraction appliance in combination with the multiloop edgewise archwire technique is an effective option for correcting Class II malocclusions in patients who refuse orthognathic surgery. PMID:25309867

  19. Towards quantitative PET/MRI: a review of MR-based attenuation correction techniques.

    PubMed

    Hofmann, Matthias; Pichler, Bernd; Schölkopf, Bernhard; Beyer, Thomas

    2009-03-01

    Positron emission tomography (PET) is a fully quantitative technology for imaging metabolic pathways and dynamic processes in vivo. Attenuation correction of raw PET data is a prerequisite for quantification and is typically based on separate transmission measurements. In PET/CT, however, attenuation correction is performed routinely based on the available CT transmission data. Recently, combined PET/magnetic resonance (MR) has been proposed as a viable alternative to PET/CT. Current concepts of PET/MRI do not include CT-like transmission sources and, therefore, alternative methods of PET attenuation correction must be found. This article reviews existing approaches to MR-based attenuation correction (MR-AC). Most groups have proposed MR-AC algorithms for brain PET studies and more recently also for torso PET/MR imaging. Most MR-AC strategies require the use of complementary MR and transmission images, or morphology templates generated from transmission images. We review and discuss these algorithms and point out challenges for using MR-AC in clinical routine. MR-AC is a work in progress with potentially promising results from a template-based approach applicable to both brain and torso imaging. While efforts are ongoing in making clinically viable MR-AC fully automatic, further studies are required to realize the potential benefits of MR-based motion compensation and partial volume correction of the PET data.

  20. Judgments of aircraft noise in a traffic noise background

    NASA Technical Reports Server (NTRS)

    Powell, C. A.; Rice, C. G.

    1975-01-01

    An investigation was conducted to determine subjective response to aircraft noise in different road traffic backgrounds. In addition, two laboratory techniques for presenting the aircraft noise with the background noise were evaluated. For one technique, the background noise was continuous over an entire test session; for the other, the background noise level was changed with each aircraft noise during a session. Subjective response to aircraft noise was found to decrease with increasing background noise level, for a range of typical indoor noise levels. Subjective response was found to be highly correlated with the Noise Pollution Level (NPL) measurement scale.

  1. Demonstration of Cosmic Microwave Background Delensing Using the Cosmic Infrared Background.

    PubMed

    Larsen, Patricia; Challinor, Anthony; Sherwin, Blake D; Mak, Daisy

    2016-10-07

    Delensing is an increasingly important technique to reverse the gravitational lensing of the cosmic microwave background (CMB) and thus reveal primordial signals the lensing may obscure. We present a first demonstration of delensing on Planck temperature maps using the cosmic infrared background (CIB). Reversing the lensing deflections in Planck CMB temperature maps using a linear combination of the 545 and 857 GHz maps as a lensing tracer, we find that the lensing effects in the temperature power spectrum are reduced in a manner consistent with theoretical expectations. In particular, the characteristic sharpening of the acoustic peaks of the temperature power spectrum resulting from successful delensing is detected at a significance of 16σ, with an amplitude of A_delens = 1.12 ± 0.07 relative to the expected value of unity. This first demonstration on data of CIB delensing, and of delensing techniques in general, is significant because lensing removal will soon be essential for achieving high-precision constraints on inflationary B-mode polarization.

  2. Ultra-low background mass spectrometry for rare-event searches

    NASA Astrophysics Data System (ADS)

    Dobson, J.; Ghag, C.; Manenti, L.

    2018-01-01

    Inductively Coupled Plasma Mass Spectrometry (ICP-MS) allows for rapid, high-sensitivity determination of trace impurities, notably the primordial radioisotopes 238U and 232Th, in candidate materials for low-background rare-event search experiments. We describe the setup and characterisation of a dedicated low-background screening facility at University College London where we operate an Agilent 7900 ICP-MS. The impact of reagent and carrier gas purity is evaluated and we show that twice-distilled ROMIL-SpATM-grade nitric acid and zero-grade Ar gas delivers similar sensitivity to ROMIL-UpATM-grade acid and research-grade gas. A straightforward procedure for sample digestion and analysis of materials with U/Th concentrations down to 10 ppt g/g is presented. This includes the use of 233U and 230Th spikes to correct for signal loss from a range of sources and verification of 238U and 232Th recovery through digestion and analysis of a certified reference material with a complex sample matrix. Finally, we demonstrate assays and present results from two sample preparation and assay methods: a high-sensitivity measurement of ultra-pure Ti using open digestion techniques, and a closed vessel microwave digestion of a nickel-chromium-alloy using a multi-acid mixture.
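    The spike-based loss correction described above reduces to a simple recovery ratio. A minimal sketch under assumed numbers (function names and values are illustrative, not the paper's actual procedure):

```python
def spike_recovery(measured_spike_ng, added_spike_ng):
    # Fraction of a non-natural tracer (e.g. 233U) surviving digestion
    # and measurement; signal losses affect 238U in the same proportion.
    return measured_spike_ng / added_spike_ng

def corrected_concentration(measured_analyte_ng, recovery, sample_mass_g):
    # Recovery-corrected analyte concentration in ng/g.
    return measured_analyte_ng / recovery / sample_mass_g

# Hypothetical assay: 10 ng of 233U spike added, 8 ng measured back.
r = spike_recovery(8.0, 10.0)              # 0.8, i.e. 80% recovery
c = corrected_concentration(0.4, r, 50.0)  # 0.4 ng 238U found in 50 g sample
# -> 0.01 ng/g, i.e. 10 ppt g/g
```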

  3. Apprendre a apprendre: L'autocorrection (Learning to Learn: Self-Correction).

    ERIC Educational Resources Information Center

    Noir, Pascal

    1996-01-01

    A technique used in an advanced French writing class to encourage student self-correction is described. The technique focused on correction of verbs and their tenses; reduction of repetition; appropriate use of "on" and "nous;" and verification of possessive adjectives, negatives, personal pronouns, spelling, and punctuation.…

  4. Correction of nasal deformity in infants with unilateral cleft lip and palate using multiple digital techniques.

    PubMed

    Zheng, Yaqi; Zhang, Dapeng; Qin, Tian; Wu, Guofeng

    2016-06-01

    Presurgical correction of severe nasal deformities before cheiloplasty is often recommended for infants with cleft lip and palate. This article describes an approach for the computer-aided design and fabrication of a nasal molding stent. A 3-dimensional photogrammetric system was used to obtain the shape information of the nosewing that was then built as the nostril support for the nasal molding stent. The stent was fabricated automatically with a rapid prototyping machine. This technique may be an alternative approach to presurgical nasal molding in the clinic. Moreover, the patient's nasal morphology can be saved as clinical data for future study. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  5. ICT: isotope correction toolbox.

    PubMed

    Jungreuthmayer, Christian; Neubauer, Stefan; Mairinger, Teresa; Zanghellini, Jürgen; Hann, Stephan

    2016-01-01

    Isotope tracer experiments are an invaluable technique to analyze and study the metabolism of biological systems. However, isotope labeling experiments are often affected by naturally abundant isotopes especially in cases where mass spectrometric methods make use of derivatization. The correction of these additive interferences--in particular for complex isotopic systems--is numerically challenging and still an emerging field of research. When positional information is generated via collision-induced dissociation, even more complex calculations for isotopic interference correction are necessary. So far, no freely available tools can handle tandem mass spectrometry data. We present isotope correction toolbox, a program that corrects tandem mass isotopomer data from tandem mass spectrometry experiments. Isotope correction toolbox is written in the multi-platform programming language Perl and, therefore, can be used on all commonly available computer platforms. Source code and documentation can be freely obtained under the Artistic License or the GNU General Public License from: https://github.com/jungreuc/isotope_correction_toolbox/ {christian.jungreuthmayer@boku.ac.at,juergen.zanghellini@boku.ac.at} Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
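    ICT itself corrects tandem mass isotopomer data; as background on the simpler non-tandem case, correcting a measured mass isotopomer distribution for naturally abundant 13C can be sketched as a lower-triangular correction matrix solved by least squares (a generic illustration, not ICT's algorithm; all names are ours):

```python
import numpy as np
from math import comb

def correction_matrix(n_atoms, p13c=0.0107):
    # C[i, j]: probability that a species with j tracer labels is measured
    # at mass shift i, because each of the remaining unlabeled positions
    # carries natural 13C with probability p13c (binomial model).
    size = n_atoms + 1
    C = np.zeros((size, size))
    for j in range(size):
        free = n_atoms - j  # positions still subject to natural abundance
        for k in range(free + 1):
            if j + k < size:
                C[j + k, j] = comb(free, k) * p13c**k * (1 - p13c)**(free - k)
    return C

def correct_mid(measured, n_atoms):
    # Solve C x = measured for the natural-abundance-free distribution,
    # then clip negatives and renormalize.
    C = correction_matrix(n_atoms)
    x, *_ = np.linalg.lstsq(C, np.asarray(measured, float), rcond=None)
    x = np.clip(x, 0.0, None)
    return x / x.sum()
```

    For a 3-carbon fragment, `correct_mid(measured, 3)` removes the M+1/M+2 contributions that natural 13C adds to each labeled species.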

  6. Quantum corrections for spinning particles in de Sitter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fröb, Markus B.; Verdaguer, Enric, E-mail: mbf503@york.ac.uk, E-mail: enric.verdaguer@ub.edu

    We compute the one-loop quantum corrections to the gravitational potentials of a spinning point particle in a de Sitter background, due to the vacuum polarisation induced by conformal fields in an effective field theory approach. We consider arbitrary conformal field theories, assuming only that the theory contains a large number N of fields in order to separate their contribution from the one induced by virtual gravitons. The corrections are described in a gauge-invariant way, classifying the induced metric perturbations around the de Sitter background according to their behaviour under transformations on equal-time hypersurfaces. There are six gauge-invariant modes: two scalar Bardeen potentials, one transverse vector and one transverse traceless tensor, of which one scalar and the vector couple to the spinning particle. The quantum corrections consist of three different parts: a generalisation of the flat-space correction, which is only significant at distances of the order of the Planck length; a constant correction depending on the undetermined parameters of the renormalised effective action; and a term which grows logarithmically with the distance from the particle. This last term is the most interesting, and when resummed gives a modified power law, enhancing the gravitational force at large distances. As a check on the accuracy of our calculation, we recover the linearised Kerr-de Sitter metric in the classical limit and the flat-space quantum correction in the limit of vanishing Hubble constant.

  7. The surgical correction of mandibular prognathism using rigid internal fixation--a report of a new technique together with its long-term stability.

    PubMed Central

    Reitzik, M.

    1988-01-01

    A historical review of the literature for the surgical correction of mandibular prognathism is presented, together with a list of ideal conditions for the successful treatment of this condition. This is a report of a new surgical technique which satisfies the majority of these principles and demonstrates stability at the osteotomy site. PMID:3207331

  8. T1 mapping with the variable flip angle technique: A simple correction for insufficient spoiling of transverse magnetization.

    PubMed

    Baudrexel, Simon; Nöth, Ulrike; Schüre, Jan-Rüdiger; Deichmann, Ralf

    2018-06-01

    The variable flip angle method derives T1 maps from radiofrequency-spoiled gradient-echo data sets, acquired with different flip angles α. Because the method assumes validity of the Ernst equation, insufficient spoiling of transverse magnetization yields errors in T1 estimation, depending on the chosen radiofrequency-spoiling phase increment (Δϕ). This paper presents a versatile correction method that uses modified flip angles α' to restore the validity of the Ernst equation. Spoiled gradient-echo signals were simulated for three commonly used phase increments Δϕ (50°/117°/150°), different values of α, repetition time (TR), T1, and a T2 of 85 ms. For each parameter combination, α' (for which the Ernst equation yielded the same signal) and a correction factor C_Δϕ(α, TR, T1) = α'/α were determined. C_Δϕ was found to be independent of T1 and fitted as a polynomial C_Δϕ(α, TR), allowing calculation of α' for any protocol using this Δϕ. The accuracy of the correction method for T2 values deviating from 85 ms was also determined. The method was tested in vitro and in vivo for variable flip angle scans with different acquisition parameters. The technique considerably improved the accuracy of variable flip angle-based T1 maps in vitro and in vivo. The proposed method allows for a simple correction of insufficient spoiling in gradient-echo data. The required polynomial parameters are supplied for three common Δϕ. Magn Reson Med 79:3082-3092, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
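    Under ideal spoiling, the Ernst-equation fit behind the variable flip angle method can be sketched as below; the `correction` argument stands in for the paper's polynomial factors C_Δϕ(α, TR), whose coefficients are given in the paper and are not reproduced here (all names are illustrative):

```python
import numpy as np

def ernst_signal(alpha_deg, m0, tr_ms, t1_ms):
    # Ideal spoiled gradient-echo signal (Ernst equation).
    a = np.deg2rad(alpha_deg)
    e1 = np.exp(-tr_ms / t1_ms)
    return m0 * np.sin(a) * (1 - e1) / (1 - e1 * np.cos(a))

def vfa_t1(signals, alphas_deg, tr_ms, correction=None):
    # Linearized Ernst fit: S/sin(a) = E1 * S/tan(a) + M0*(1 - E1),
    # so the slope of y vs x gives E1 and hence T1 = -TR / ln(E1).
    # 'correction' holds per-angle factors alpha' / alpha (the paper's
    # C_dphi), applied before fitting to emulate the proposed method.
    a = np.deg2rad(np.asarray(alphas_deg, float))
    if correction is not None:
        a = a * np.asarray(correction, float)
    s = np.asarray(signals, float)
    y = s / np.sin(a)
    x = s / np.tan(a)
    slope, _ = np.polyfit(x, y, 1)
    return -tr_ms / np.log(slope)
```

    With noiseless Ernst-equation signals and correction factors of unity, the fit recovers the simulated T1 exactly.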

  9. Geometric and shading correction for images of printed materials using boundary.

    PubMed

    Brown, Michael S; Tsoi, Yau-Chat

    2006-06-01

    A novel technique that uses boundary interpolation to correct geometric distortion and shading artifacts present in images of printed materials is presented. Unlike existing techniques, our algorithm can simultaneously correct a variety of geometric distortions, including skew, fold distortion, binder curl, and combinations of these. In addition, the same interpolation framework can be used to estimate the intrinsic illumination component of the distorted image to correct shading artifacts. We detail our algorithm for geometric and shading correction and demonstrate its usefulness on real-world and synthetic data.

  10. Are conventional statistical techniques exhaustive for defining metal background concentrations in harbour sediments? A case study: The Coastal Area of Bari (Southeast Italy).

    PubMed

    Mali, Matilda; Dell'Anna, Maria Michela; Mastrorilli, Piero; Damiani, Leonardo; Ungaro, Nicola; Belviso, Claudia; Fiore, Saverio

    2015-11-01

    Sediment contamination by metals poses significant risks to coastal ecosystems and is considered to be problematic for dredging operations. The determination of the background values of metal and metalloid distribution based on site-specific variability is fundamental in assessing pollution levels in harbour sediments. The novelty of the present work consists of addressing the scope and limitations of analysing port sediments through the use of conventional statistical techniques (such as linear regression analysis, construction of cumulative frequency curves and the iterative 2σ technique) that are commonly employed for assessing Regional Geochemical Background (RGB) values in coastal sediments. This study ascertained that although the tout court use of such techniques in determining the RGB values in harbour sediments seems appropriate (the chemical-physical parameters of port sediments fit well with statistical equations), it should nevertheless be avoided because it may be misleading and can mask key aspects of the study area that can only be revealed by further investigations, such as mineralogical and multivariate statistical analyses. Copyright © 2015 Elsevier Ltd. All rights reserved.
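    Of the conventional techniques listed, the iterative 2σ technique admits a compact sketch: recompute mean and standard deviation, discard values outside mean ± 2σ, and repeat until the retained set is stable; the final mean ± 2σ is taken as the background range (an illustration of the generic technique, not the authors' code):

```python
import numpy as np

def iterative_2sigma(values, max_iter=100):
    # Iteratively trim values outside mean +/- 2*std until no value
    # is removed (or the iteration cap is reached); the surviving
    # mean and std define the geochemical background range.
    x = np.asarray(values, float)
    for _ in range(max_iter):
        m, s = x.mean(), x.std()
        kept = x[np.abs(x - m) <= 2 * s]
        if kept.size == x.size:
            break
        x = kept
    return x.mean(), x.std()
```

    On data that mix a background population with a tail of contaminated samples, the trimming discards the high outliers and the returned mean settles near the centre of the background population.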

  11. BPM CALIBRATION INDEPENDENT LHC OPTICS CORRECTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CALAGA,R.; TOMAS, R.; GIOVANNOZZI, M.

    2007-06-25

    The tight mechanical aperture for the LHC imposes severe constraints on both the beta and dispersion beating. Robust techniques to compensate these errors are critical for the operation of high-intensity beams in the LHC. We present simulations using realistic errors from magnet measurements and alignment tolerances in the presence of BPM noise. These studies reveal that BPM-calibration-independent and model-independent observables are key ingredients for accomplishing optics correction. Experiments at RHIC to verify the algorithms for optics correction are also presented.

  12. An investigation of error correcting techniques for OMV and AXAF

    NASA Technical Reports Server (NTRS)

    Ingels, Frank; Fryer, John

    1991-01-01

    The original objectives of this project were to build a test system for the NASA 255/223 Reed/Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with a Poisson time distribution between errors and Gaussian burst lengths. Sample means, variances, and numbers of uncorrectable errors were calculated for each data set before testing.
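    The error-pattern generation described (Poisson-distributed times between errors, Gaussian burst lengths) can be sketched as below; a (255,223) Reed-Solomon code corrects up to 16 byte errors per codeword, so patterns like these probe the chip set around that limit. Names and parameter values are illustrative, not the original test apparatus code:

```python
import random

def burst_error_positions(n_bytes, mean_gap=200.0, burst_mean=3.0,
                          burst_sd=1.0, seed=1):
    # Burst starts arrive as a Poisson process (exponential gaps);
    # burst lengths are drawn from a Gaussian, truncated at 1.
    rng = random.Random(seed)
    positions, i = [], 0
    while True:
        i += int(rng.expovariate(1.0 / mean_gap)) + 1  # gap to next burst
        if i >= n_bytes:
            return positions
        length = max(1, int(round(rng.gauss(burst_mean, burst_sd))))
        positions.extend(range(i, min(i + length, n_bytes)))
        i += length

def inject_errors(data, positions):
    # Corrupt the chosen byte positions by flipping every bit.
    corrupted = bytearray(data)
    for p in positions:
        corrupted[p] ^= 0xFF
    return bytes(corrupted)
```

    Comparing decoder output against the known injected pattern then gives the counts of corrected and uncorrectable errors per data set.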

  13. Exposed and Embedded Corrections in Aphasia Therapy: Issues of Voice and Identity

    ERIC Educational Resources Information Center

    Simmons-Mackie, Nina; Damico, Jack S.

    2008-01-01

    Background: Because communication after the onset of aphasia can be fraught with errors, therapist corrections are pervasive in therapy for aphasia. Although corrections are designed to improve the accuracy of communication, some corrections can have social and emotional consequences during interactions. That is, exposure of errors can potentially…

  14. Development of a Technique for Separating Raman Scattering Signals from Background Emission with Single-Shot Measurement Potential

    NASA Technical Reports Server (NTRS)

    Hartfield, Roy

    1996-01-01

    Raman scattering is a powerful technique for quantitatively probing high temperature and high speed flows. However, this technique has typically been limited to clean hydrogen flames because of the broadband fluorescence interference which occurs in hydrocarbon flames. Fluorescence can also interfere with the Raman signal in clean hydrogen flames when broadband UV lasers are used as the scattering source. A solution to this problem has been demonstrated. The solution to the fluorescence interference lies in the fact that the vibrational Q-branch Raman signal is highly polarized for 90 deg. signal collection and the fluorescence background is essentially unpolarized. Two basic schemes are available for separating the Raman from the background. One scheme involves using a polarized laser and collecting a signal with both horizontal and vertical laser polarizations separately. The signal with the vertical polarization will contain both the Raman and the fluorescence while the signal with the horizontal polarization will contain only the fluorescence. The second scheme involves polarization discrimination on the collection side of the optical setup. For vertical laser polarization, the scattered Q-branch Raman signal will be vertically polarized; hence the two polarizations can be collected separately and the difference between the two is the Raman signal. This approach has been used for the work found herein and has the advantage of allowing the data to be collected from the same laser shot(s). This makes it possible to collect quantitative Raman data with single shot resolution in conditions where interference cannot otherwise be eliminated.
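    The second scheme's polarization discrimination reduces to a per-shot subtraction: the vertically polarized channel carries the polarized Raman Q-branch plus (unpolarized) fluorescence, while the horizontal channel carries fluorescence alone. A minimal sketch with synthetic spectra (illustrative only):

```python
import numpy as np

def raman_from_polarizations(s_vertical, s_horizontal):
    # For vertical laser polarization and 90-degree collection, the
    # Q-branch Raman signal appears only in the vertical channel;
    # the unpolarized fluorescence appears in both. Subtracting the
    # horizontal channel therefore removes the fluorescence background.
    return np.asarray(s_vertical, float) - np.asarray(s_horizontal, float)
```

    Because both channels can be recorded from the same laser shot(s), the subtraction preserves single-shot resolution, as the abstract notes.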

  15. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    NASA Astrophysics Data System (ADS)

    Di Mauro, M.; Manconi, S.; Zechlin, H.-S.; Ajello, M.; Charles, E.; Donato, F.

    2018-04-01

    The Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ∼10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.

  16. LWIR pupil imaging and prospects for background compensation

    NASA Astrophysics Data System (ADS)

    LeVan, Paul; Sakoglu, Ünal; Stegall, Mark; Pierce, Greg

    2015-08-01

    A previous paper described LWIR pupil imaging with a sensitive, low-flux focal plane array, and the behavior of this type of system in higher-flux operation as understood at the time. We continue this investigation and report on a more detailed characterization of the system over a broad range of pixel fluxes. This characterization is then shown to enable non-uniformity correction over the flux range, using a standard approach. Since many commercial tracking platforms include a "guider port" that accepts pulse width modulation (PWM) error signals, we have also investigated a variation on the use of this port to "dither" the tracking platform in synchronization with the continuous collection of infrared images. The resulting capability has a broad range of applications, extending from generating scene motion in the laboratory for quantifying the performance of "real-time, scene-based non-uniformity correction" approaches to enabling subtraction of bright backgrounds by alternating the viewing aspect between a point source and adjacent, source-free backgrounds.
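    The alternating-aspect background subtraction mentioned at the end reduces to a frame-averaging difference. A hedged illustration (array shapes, values, and names are assumptions, not from the paper):

```python
import numpy as np

def dither_background_subtract(frames, on_source):
    """Average the frames taken while pointed at the source and subtract the
    average of the frames taken at the adjacent, source-free pointing.

    frames    : (N, H, W) stack of infrared images
    on_source : (N,) booleans marking which frames contain the point source
    """
    frames = np.asarray(frames, dtype=float)
    on_source = np.asarray(on_source, dtype=bool)
    return frames[on_source].mean(axis=0) - frames[~on_source].mean(axis=0)

# Synthetic demo: bright uniform background plus a point source in half the frames
bg = np.full((4, 4), 1000.0)
src = np.zeros((4, 4))
src[1, 1] = 50.0
frames = np.stack([bg + src, bg, bg + src, bg])
on_source = np.array([True, False, True, False])
difference = dither_background_subtract(frames, on_source)
```

    The bright common background cancels in the difference, leaving only the point source.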

  17. Background of SAM atom-fraction profiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ernst, Frank

    Atom-fraction profiles acquired by SAM (scanning Auger microprobe) have important applications, e.g. in the context of alloy surface engineering by infusion of carbon or nitrogen through the alloy surface. However, such profiles often exhibit an artifact in the form of a background whose level anti-correlates with the local atom fraction. This article presents a theory explaining this phenomenon as a consequence of the way in which random noise in the spectrum propagates into the discretized differentiated spectrum that is used for quantification. The resulting model of “energy channel statistics” leads to a useful semi-quantitative background reduction procedure, which is validated by applying it to simulated data. Subsequently, the procedure is applied to an example of experimental SAM data. The analysis leads to conclusions regarding optimum experimental acquisition conditions. The proposed method of background reduction is based on general principles and should be useful for a broad variety of applications. - Highlights: • Atom-fraction-depth profiles of carbon measured by scanning Auger microprobe. • Strong background that varies with the local carbon concentration. • Needs correction, e.g. for quantitative comparison with simulations. • Quantitative theory explains the background. • Provides a background removal strategy and practical advice for acquisition.

  18. Use of 3D Printed Bone Plate in Novel Technique to Surgically Correct Hallux Valgus Deformities

    PubMed Central

    Smith, Kathryn E.; Dupont, Kenneth M.; Safranski, David L.; Blair, Jeremy; Buratti, Dawn; Zeetser, Vladimir; Callahan, Ryan; Lin, Jason; Gall, Ken

    2016-01-01

    Three-dimensional (3-D) printing offers many potential advantages in designing and manufacturing plating systems for foot and ankle procedures that involve small, geometrically complex bony anatomy. Here, we describe the design and clinical use of a Ti-6Al-4V ELI bone plate (FastForward™ Bone Tether Plate, MedShape, Inc., Atlanta, GA) manufactured through 3-D printing processes. The plate protects the second metatarsal when tethering suture tape between the first and second metatarsals and is part of a new procedure that corrects hallux valgus (bunion) deformities without relying on an osteotomy or fusion procedure. The surgical technique and two clinical cases describing the use of this procedure with the 3-D printed bone plate are presented within. PMID:28337049

  19. Motion correction in MRI of the brain

    PubMed Central

    Godenschweger, F; Kägebein, U; Stucht, D; Yarach, U; Sciarra, A; Yakupov, R; Lüsebrink, F; Schulze, P; Speck, O

    2016-01-01

    Subject motion in MRI is a relevant problem in the daily clinical routine as well as in scientific studies. Since the beginning of clinical use of MRI, many research groups have developed methods to suppress or correct motion artefacts. This review focuses on rigid body motion correction of head and brain MRI and its application in diagnosis and research. It explains the sources and types of motion and related artefacts, classifies and describes existing techniques for motion detection, compensation and correction and lists established and experimental approaches. Retrospective motion correction modifies the MR image data during the reconstruction, while prospective motion correction performs an adaptive update of the data acquisition. Differences, benefits and drawbacks of different motion correction methods are discussed. PMID:26864183

  20. Motion correction in MRI of the brain

    NASA Astrophysics Data System (ADS)

    Godenschweger, F.; Kägebein, U.; Stucht, D.; Yarach, U.; Sciarra, A.; Yakupov, R.; Lüsebrink, F.; Schulze, P.; Speck, O.

    2016-03-01

    Subject motion in MRI is a relevant problem in the daily clinical routine as well as in scientific studies. Since the beginning of clinical use of MRI, many research groups have developed methods to suppress or correct motion artefacts. This review focuses on rigid body motion correction of head and brain MRI and its application in diagnosis and research. It explains the sources and types of motion and related artefacts, classifies and describes existing techniques for motion detection, compensation and correction and lists established and experimental approaches. Retrospective motion correction modifies the MR image data during the reconstruction, while prospective motion correction performs an adaptive update of the data acquisition. Differences, benefits and drawbacks of different motion correction methods are discussed.

  1. Calculation of background effects on the VESUVIO eV neutron spectrometer

    NASA Astrophysics Data System (ADS)

    Mayers, J.

    2011-01-01

    The VESUVIO spectrometer at the ISIS pulsed neutron source measures the momentum distribution n(p) of atoms by 'neutron Compton scattering' (NCS). Measurements of n(p) provide a unique window into the quantum behaviour of atomic nuclei in condensed matter systems. The VESUVIO 6Li-doped neutron detectors at forward scattering angles were replaced in February 2008 by yttrium aluminium perovskite (YAP)-doped γ-ray detectors. This paper compares the performance of the two detection systems. It is shown that the YAP detectors provide much superior resolution and general performance, but suffer from a sample-dependent gamma background. This report details how this background can be calculated and the data corrected. The calculation is compared with data for two different instrument geometries. Corrected and uncorrected data are also compared for the current instrument geometry. Some indications of how the gamma background can be reduced are also given.

  2. Automatic motion correction of clinical shoulder MR images

    NASA Astrophysics Data System (ADS)

    Manduca, Armando; McGee, Kiaran P.; Welch, Edward B.; Felmlee, Joel P.; Ehman, Richard L.

    1999-05-01

    A technique for the automatic correction of motion artifacts in MR images was developed. The algorithm uses only the raw (complex) data from the MR scanner and requires no knowledge of the patient motion during the acquisition. It operates by searching over the space of possible patient motions and determining the motion which, when used to correct the image, optimizes the image quality. The performance of this algorithm was tested on coronal images of the rotator cuff in a series of 144 patients. A four-observer comparison of the autocorrected images with the uncorrected images demonstrated that motion artifacts were significantly reduced in 48% of the cases. The improvements in image quality were similar to those achieved with a previously reported navigator-echo-based adaptive motion correction. The results demonstrate that autocorrection is a practical technique for retrospectively reducing motion artifacts in a demanding clinical MRI application. It achieves performance comparable to a navigator-based correction technique, which is significant because autocorrection does not require an imaging sequence that has been modified to explicitly track motion during acquisition. The approach is flexible and should be readily extensible to other types of MR acquisitions that are corrupted by global motion.
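    The search-over-motions idea can be illustrated in one dimension: a candidate translation corresponds to a linear phase ramp applied to the raw k-space lines acquired after the motion, and the candidate that optimizes an image-quality metric wins. A toy sketch under simplified assumptions (single instantaneous shift, entropy as the quality metric; not the authors' actual implementation):

```python
import numpy as np

def image_entropy(img):
    """Entropy of the normalized magnitude image; sharper, artifact-free
    images concentrate intensity and score lower."""
    p = np.abs(img).ravel()
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def autocorrect_shift(kspace, moved_mask, candidate_shifts):
    """Try each candidate translation: apply the corresponding linear phase
    ramp to the k-space samples acquired after the motion and keep the
    shift that yields the lowest-entropy (best-quality) image."""
    kf = np.fft.fftfreq(len(kspace))
    best_h, best_dx = np.inf, 0.0
    for dx in candidate_shifts:
        phase = np.where(moved_mask, np.exp(2j * np.pi * kf * dx), 1.0)
        h = image_entropy(np.fft.ifft(kspace * phase))
        if h < best_h:
            best_h, best_dx = h, dx
    return best_dx

# Simulated acquisition: the object shifts by 5 pixels partway through,
# corrupting the k-space samples collected afterwards.
n = 128
obj = np.zeros(n)
obj[40:60] = 1.0
kf = np.fft.fftfreq(n)
moved = kf > 0.1
corrupted = np.fft.fft(obj) * np.where(moved, np.exp(-2j * np.pi * kf * 5.0), 1.0)
best_dx = autocorrect_shift(corrupted, moved, np.arange(-8.0, 8.5, 0.5))
```

    The full technique searches a much richer motion space and needs no knowledge of when the motion occurred; the sketch only shows why optimizing a quality metric recovers the motion.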

  3. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    DOE PAGES

    Di Mauro, M.; Manconi, S.; Zechlin, H. -S.; ...

    2018-03-29

    Here, the Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.

  4. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Mauro, M.; Manconi, S.; Zechlin, H. -S.

    Here, the Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.

  5. Automatic background updating for video-based vehicle detection

    NASA Astrophysics Data System (ADS)

    Hu, Chunhai; Li, Dongmei; Liu, Jichuan

    2008-03-01

    Video-based vehicle detection is one of the most valuable techniques for the Intelligent Transportation System (ITS). The widely used video-based vehicle detection technique is the background subtraction method. The key problem of this method is how to subtract and update the background effectively. In this paper an efficient background updating scheme based on Zone-Distribution for vehicle detection is proposed to resolve the problems caused by sudden camera perturbation, sudden or gradual illumination change and the sleeping person problem. The proposed scheme is robust and fast enough to satisfy the real-time constraints of vehicle detection.
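    The core of any such scheme is a subtraction step plus a selective update that avoids absorbing stopped vehicles into the background model (the sleeping-person problem) while still tracking illumination change. A generic running-average sketch, not the paper's Zone-Distribution method (names and thresholds are illustrative):

```python
import numpy as np

def detect_foreground(background, frame, threshold=25.0):
    """Background subtraction: flag pixels that deviate strongly from
    the current background model."""
    return np.abs(np.asarray(frame, dtype=float) - background) > threshold

def update_background(background, frame, foreground, alpha=0.05):
    """Blend the background toward the frame only where no foreground was
    detected, so gradual illumination change is absorbed while detected
    vehicles are not burned into the model."""
    blended = (1.0 - alpha) * background + alpha * np.asarray(frame, dtype=float)
    return np.where(foreground, background, blended)

# Demo: a uniform road background and one frame containing a "vehicle"
bg = np.full((4, 4), 100.0)
frame = bg.copy()
frame[1:3, 1:3] = 200.0
fg = detect_foreground(bg, frame)
new_bg = update_background(bg, frame, fg)
```

    Sudden camera perturbation or illumination change is what motivates replacing the fixed `alpha` blend with the adaptive, zone-based update the abstract describes.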

  6. [IR spectral-analysis-based range estimation for an object with small temperature difference from background].

    PubMed

    Fu, Xiao-Ning; Wang, Jie; Yang, Lin

    2013-01-01

    Estimating the distance of an object from the transmission characteristics of infrared radiation is a typical passive ranging technology and a current hotspot in electro-optic countermeasures. Because it avoids transmitting energy during detection, this ranging technology significantly enhances the penetration capability and infrared concealment of missiles or unmanned aerial vehicles. To overcome the shortcomings of existing passive ranging systems in ranging an oncoming target with a small temperature difference from the background, an improved distance estimation scheme is proposed. This article begins by introducing the concept of the signal transfer function, clarifies the working curve of the current algorithm, and points out that the estimated distance is not unique owing to the inherent nonlinearity of the working curve. A new distance calculation algorithm was obtained through a nonlinear correction technique: a ranging formula that uses sensing information at 3-5 and 8-12 μm combined with the background temperature and field meteorological conditions. The authors' study shows that the ranging error can be kept around the 10% level under the condition that the target-background apparent temperature difference equals ±5 K and the error in estimating the background temperature is no more than ±15 K.

  7. Holographic corrections to meson scattering amplitudes

    NASA Astrophysics Data System (ADS)

    Armoni, Adi; Ireson, Edwin

    2017-06-01

    We compute meson scattering amplitudes using the holographic duality between confining gauge theories and string theory, in order to consider holographic corrections to the Veneziano amplitude and associated higher-point functions. The generic nature of such computations is explained, thanks to the well-understood nature of confining string backgrounds, and two different examples of the calculation in given backgrounds are used to illustrate the details. The effect we discover, whilst only qualitative, is re-obtainable in many such examples, in four-point but also higher-point amplitudes.

  8. Technique of Dynamic Flexor Digitorum Superficialis Transfer to Lateral Bands for Proximal Interphalangeal Joint Deformity Correction in Severe Dupuytren Disease.

    PubMed

    Schreck, Michael J; Holbrook, Hayden S; Koman, L Andrew

    2018-02-01

    Pseudo-boutonniere deformity is an uncommon complication from long-standing proximal interphalangeal (PIP) joint contracture in Dupuytren disease. Prolonged flexion contracture of the PIP joint can lead to central slip attenuation and resultant imbalances in the extensor mechanism. We present a technique of flexor digitorum superficialis (FDS) tendon transfer to the lateral bands to correct pseudo-boutonniere deformity at the time of palmar fasciectomy for the treatment of Dupuytren disease. The FDS tendon is transferred from volar to dorsal through the lumbrical canal and sutured into the dorsally mobilized lateral bands. This technique presents an approach to the repair of pseudo-boutonniere deformity in Dupuytren disease. Copyright © 2018 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  9. Precise predictions for V+jets dark matter backgrounds

    NASA Astrophysics Data System (ADS)

    Lindert, J. M.; Pozzorini, S.; Boughezal, R.; Campbell, J. M.; Denner, A.; Dittmaier, S.; Gehrmann-De Ridder, A.; Gehrmann, T.; Glover, N.; Huss, A.; Kallweit, S.; Maierhöfer, P.; Mangano, M. L.; Morgan, T. A.; Mück, A.; Petriello, F.; Salam, G. P.; Schönherr, M.; Williams, C.

    2017-12-01

    High-energy jets recoiling against missing transverse energy (MET) are powerful probes of dark matter at the LHC. Searches based on large MET signatures require a precise control of the Z(νν̄)+jet background in the signal region. This can be achieved by taking accurate data in control regions dominated by Z(ℓ⁺ℓ⁻)+jet, W(ℓν)+jet and γ+jet production, and extrapolating to the Z(νν̄)+jet background by means of precise theoretical predictions. In this context, recent advances in perturbative calculations open the door to significant sensitivity improvements in dark matter searches. In this spirit, we present a combination of state-of-the-art calculations for all relevant V+jets processes, including throughout NNLO QCD corrections and NLO electroweak corrections supplemented by Sudakov logarithms at two loops. Predictions at parton level are provided together with detailed recommendations for their usage in experimental analyses based on the reweighting of Monte Carlo samples. Particular attention is devoted to the estimate of theoretical uncertainties in the framework of dark matter searches, where subtle aspects such as correlations across different V+jet processes play a key role. The anticipated theoretical uncertainty in the Z(νν̄)+jet background is at the few percent level up to the TeV range.
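    The recommended usage, reweighting Monte Carlo samples with the parton-level predictions, typically reduces to applying per-event weights binned in the boson transverse momentum. A schematic sketch (the binning, k-factor values, and names are hypothetical, not from the paper):

```python
import numpy as np

def reweight_weights(boson_pt, edges, k_factors):
    """Per-event weights from binned higher-order/LO ratios ("k-factors")
    in boson transverse momentum; events outside the binning keep weight 1.

    boson_pt  : array of per-event boson pT values (GeV)
    edges     : bin edges (GeV), len(edges) = len(k_factors) + 1
    k_factors : one correction factor per pT bin
    """
    pt = np.asarray(boson_pt, dtype=float)
    idx = np.digitize(pt, edges) - 1
    w = np.ones_like(pt)
    inside = (idx >= 0) & (idx < len(k_factors))
    w[inside] = np.asarray(k_factors, dtype=float)[idx[inside]]
    return w

pt = np.array([50.0, 150.0, 1200.0])               # GeV, illustrative events
weights = reweight_weights(pt, edges=[100.0, 200.0, 500.0, 1000.0],
                           k_factors=[1.10, 1.18, 1.25])
```

    The paper's actual prescription additionally propagates correlated theory uncertainties across the different V+jet processes, which a scalar per-bin factor does not capture.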

  10. Three-dimensional accuracy of different correction methods for cast implant bars

    PubMed Central

    Kwon, Ji-Yung; Kim, Chang-Whe; Lim, Young-Jun; Kwon, Ho-Beom

    2014-01-01

    PURPOSE The aim of the present study was to evaluate the accuracy of three techniques for the correction of cast implant bars. MATERIALS AND METHODS Thirty cast implant bars were fabricated on a metal master model. All cast implant bars were sectioned at 5 mm from the left gold cylinder using a disk of 0.3 mm thickness, and each group of ten specimens was then corrected by gas-air torch soldering, laser welding, or the additional casting technique. Three-dimensional evaluation comprising horizontal, vertical, and twisting measurements was based on measurement and comparison of (1) gap distances at the right abutment replica-gold cylinder interface on the buccal, distal, and lingual sides, (2) changes in bar length, and (3) axis angle changes of the right gold cylinders at the post-correction measurement step in the three groups, using contact and non-contact coordinate measuring machines. One-way analysis of variance (ANOVA) and paired t-tests were performed at the 5% significance level. RESULTS Gap distances of the cast implant bars after the correction procedure showed no statistically significant differences among groups. Changes in bar length between the pre-casting and post-correction measurements were statistically significant among groups. Axis angle changes of the right gold cylinders were not statistically significant among groups. CONCLUSION There was no statistically significant difference among the three techniques in horizontal, vertical, and axial errors, but the gas-air torch soldering technique showed the most consistent and accurate trend in the correction of implant bar error. The laser welding technique, however, showed a large mean and standard deviation in the vertical and twisting measurements and may be a technique-sensitive method. PMID:24605205

  11. The PICS Technique: A Novel Approach for Residual Curvature Correction During Penile Prosthesis Implantation in Patients With Severe Peyronie's Disease Using the Collagen Fleece TachoSil.

    PubMed

    Hatzichristodoulou, Georgios

    2018-03-01

    Correction of residual curvature during inflatable penile prosthesis (IPP) implantation in patients with Peyronie's disease (PD) by plaque incision and grafting is a common approach. To present a novel technique for residual curvature correction during IPP implantation using collagen fleece (TachoSil, Baxter Healthcare Corp, Deerfield, IL, USA). After the IPP (Titan Touch, Coloplast, Minneapolis, MN, USA) is placed, the implant is inflated maximally. When residual curvature exceeds 40°, the PICS (penile implant in combination with the Sealing technique) technique is performed. The device is deflated, and a circumcising skin incision and penile degloving are performed. After elevation of the neurovascular bundle, the device is reinflated maximally. Plaque incision is performed at the point of maximum curvature using electrocautery. This leads to penile straightening because the tension is removed. In the next step, the defect of the tunica is closed with collagen fleece, which sticks to the tunica and defect without any sutures needed. The neurovascular bundle is reapproximated and the Buck fascia is closed. This is followed by closure of penile skin. Primary outcome measurements were straightening rates, operative times, 5-item International Index of Erectile Function (IIEF-5) scores at follow-up, immediate and late complications, and patient satisfaction. The PICS technique was applied to 15 patients. Mean patient age was 61.7 years (52-79 years). Mean residual curvature after IPP was 66.7° (50-90°). Mean operative time was 117.3 minutes (100-140 minutes). Mean follow-up was 15.1 months (1-29 months). 12 of 15 patients (80%) showed a totally straight penis. 3 patients (20%) had residual curvature of 10° at follow-up, which did not interfere with sexual intercourse. Mean IIEF-5 score at follow-up was 24.2 (22-25). No immediate or late complications occurred. All patients were satisfied with the surgical outcomes. This novel technique prevents puncture or

  12. Loop corrections to primordial non-Gaussianity

    NASA Astrophysics Data System (ADS)

    Boran, Sibel; Kahya, E. O.

    2018-02-01

    We discuss quantum gravitational loop effects to observable quantities such as curvature power spectrum and primordial non-Gaussianity of cosmic microwave background (CMB) radiation. We first review the previously shown case where one gets a time dependence for zeta-zeta correlator due to loop corrections. Then we investigate the effect of loop corrections to primordial non-Gaussianity of CMB. We conclude that, even with a single scalar inflaton, one might get a huge value for non-Gaussianity which would exceed the observed value by at least 30 orders of magnitude. Finally we discuss the consequences of this result for scalar driven inflationary models.

  13. Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna Roberts

    Infrared imaging is an expansive field with many applications. Advances in infrared technology have led to greater demand from both the commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity. This non-uniformity stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors such as exposure time, temperature, and amplifier choice affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration-based techniques commonly use a linear model to approximate the nonlinear response. This often leaves unacceptable levels of residual non-uniformity. Calibration techniques often have to be repeated during use to continually correct the image. In this dissertation, alternatives to linear NUC algorithms are investigated. The goal of this dissertation is to determine and compare nonlinear non-uniformity correction algorithms. Ideally the results will provide better NUC performance, resulting in less residual non-uniformity, and reduce the need for recalibration. This dissertation will consider new approaches to nonlinear NUC such as higher-order polynomials and exponentials. More specifically, a new gain equalization algorithm has been developed. The various nonlinear non-uniformity correction algorithms will be compared with common linear non-uniformity correction algorithms. Performance will be compared based on RMS errors, residual non-uniformity, and the impact quantization has on correction. Performance will be improved by identifying and replacing bad pixels prior to correction. Two bad-pixel identification and replacement techniques will be investigated and compared. Performance will be presented in the form of simulation results as well as before and after images taken with short-wave infrared cameras. The initial results show, using a third order
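    The linear two-point NUC generalizes naturally to a per-pixel polynomial fitted to several calibration flux levels. A compact sketch of that idea (all names are illustrative; the demo uses an exactly linear synthetic response so the cubic fit can be verified, while real data would exercise the nonlinear terms):

```python
import numpy as np

def fit_nuc_polynomials(responses, targets, order=3):
    """Fit a per-pixel correction polynomial mapping raw pixel output to the
    true flux, generalizing the linear two-point NUC.

    responses : (n_levels, H, W) raw outputs at known uniform illumination
    targets   : (n_levels,) true flux at each calibration level
    Returns   : (order + 1, H, W) coefficients, highest degree first.
    """
    n, h, w = responses.shape
    flat = responses.reshape(n, -1)
    coeffs = np.empty((order + 1, h * w))
    for i in range(h * w):
        coeffs[:, i] = np.polyfit(flat[:, i], targets, order)
    return coeffs.reshape(order + 1, h, w)

def apply_nuc(coeffs, frame):
    """Evaluate each pixel's correction polynomial on a raw frame
    (Horner's scheme over the leading coefficient axis)."""
    out = np.zeros_like(frame, dtype=float)
    for c in coeffs:
        out = out * frame + c
    return out

# Synthetic check: per-pixel gain/offset response at five calibration levels
levels = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
gain = np.array([[1.00, 1.10], [0.90, 1.05]])
offset = np.array([[2.0, -3.0], [0.5, 1.0]])
responses = gain * levels[:, None, None] + offset
coeffs = fit_nuc_polynomials(responses, levels, order=3)
corrected = apply_nuc(coeffs, gain * 60.0 + offset)
```

    Fitting the inverse response per pixel is what lets a higher-order model absorb the residual non-uniformity a purely linear model leaves behind.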

  14. Nonrigid Autofocus Motion Correction for Coronary MR Angiography with a 3D Cones Trajectory

    PubMed Central

    Ingle, R. Reeve; Wu, Holden H.; Addy, Nii Okai; Cheng, Joseph Y.; Yang, Phillip C.; Hu, Bob S.; Nishimura, Dwight G.

    2014-01-01

    Purpose: To implement a nonrigid autofocus motion correction technique to improve respiratory motion correction of free-breathing whole-heart coronary magnetic resonance angiography (CMRA) acquisitions using an image-navigated 3D cones sequence. Methods: 2D image navigators acquired every heartbeat are used to measure superior-inferior, anterior-posterior, and right-left translation of the heart during a free-breathing CMRA scan using a 3D cones readout trajectory. Various tidal respiratory motion patterns are modeled by independently scaling the three measured displacement trajectories. These scaled motion trajectories are used for 3D translational compensation of the acquired data, and a bank of motion-compensated images is reconstructed. From this bank, a gradient entropy focusing metric is used to generate a nonrigid motion-corrected image on a pixel-by-pixel basis. The performance of the autofocus motion correction technique is compared with rigid-body translational correction and no correction in phantom, volunteer, and patient studies. Results: Nonrigid autofocus motion correction yields improved image quality compared to rigid-body-corrected images and uncorrected images. Quantitative vessel sharpness measurements indicate superiority of the proposed technique in 14 out of 15 coronary segments from three patient and two volunteer studies. Conclusion: The proposed technique corrects nonrigid motion artifacts in free-breathing 3D cones acquisitions, improving image quality compared to rigid-body motion correction. PMID:24006292
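    A gradient entropy metric of the kind used here for focusing rewards images whose edge energy is concentrated in a few strong gradients, so lower values indicate sharper images. A minimal sketch (illustrative only, not the authors' code):

```python
import numpy as np

def gradient_entropy(img):
    """Gradient entropy focusing metric: entropy of the normalized gradient
    magnitude; sharp images concentrate edge energy and score lower."""
    gy, gx = np.gradient(img.astype(float))
    g = np.hypot(gx, gy).ravel()
    total = g.sum()
    if total == 0.0:
        return 0.0
    p = g[g > 0] / total
    return -np.sum(p * np.log(p))

# Sharp edge vs. a blurred version of the same scene
sharp = np.zeros((32, 32))
sharp[:, 16:] = 1.0
blurred = sharp.copy()
for _ in range(5):  # crude blur by repeated neighbor averaging (wraps at edges)
    blurred = (blurred
               + np.roll(blurred, 1, axis=0) + np.roll(blurred, -1, axis=0)
               + np.roll(blurred, 1, axis=1) + np.roll(blurred, -1, axis=1)) / 5.0
```

    In the paper's method this metric is evaluated per pixel across the bank of translation-compensated reconstructions, and each pixel takes its value from the candidate image that minimizes the metric locally, which is what makes the overall correction nonrigid.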

  15. Technique of Antireflux Procedure without Creating Submucosal Tunnel for Surgical Correction of Vesicoureteric Reflux during Bladder Closure in Exstrophy.

    PubMed

    Sunil, Kanoujia; Gupta, Archika; Chaubey, Digamber; Pandey, Anand; Kureel, Shiv Narain; Verma, Ajay Kumar

    2018-01-01

    To report the clinical application of a new surgical technique of antireflux procedure without creating a submucosal tunnel for the surgical correction of vesicoureteric reflux during bladder closure in exstrophy. Based on the report of the published experimental technique, the procedure was clinically executed over the last 18 months in seven patients with classic bladder exstrophy having a small bladder plate with polyps, where the creation of a submucosal tunnel was not possible. The ureters were mobilized. A rectangular patch of bladder mucosa at the trigone was removed, exposing the detrusor. The mobilized ureters were advanced, crossed, and anchored to the exposed detrusor parallel to each other. Reconstruction included bladder and epispadias repair with abdominal wall closure. The outcome was measured by the assessment of complications, abolition of reflux on cystogram, and upper tract status. At the 3-month follow-up cystogram, reflux was absent in all. Follow-up ultrasound revealed mild dilatation of the pelvis and ureter in one. The technique of extra-mucosal ureteric reimplantation without the creation of a submucosal tunnel is simple to execute without risk of complications and effectively provides an antireflux mechanism for the preservation of the upper tract in bladder exstrophy. With this technique, reflux can be prevented from the very beginning of exstrophy reconstruction.

  16. Recovery correction technique for NMR spectroscopy of perchloric acid extracts using DL-valine-2,3-d2: validation and application to 5-fluorouracil-induced brain damage.

    PubMed

    Nakagami, Ryutaro; Yamaguchi, Masayuki; Ezawa, Kenji; Kimura, Sadaaki; Hamamichi, Shusei; Sekine, Norio; Furukawa, Akira; Niitsu, Mamoru; Fujii, Hirofumi

    2014-01-01

    We explored a recovery correction technique that can correct metabolite loss during perchloric acid (PCA) extraction and minimize inter-assay variance in quantitative (1)H nuclear magnetic resonance (NMR) spectroscopy of the brain and evaluated its efficacy in 5-fluorouracil (5-FU)- and saline-administered rats. We measured the recovery of creatine and dl-valine-2,3-d2 from PCA extract containing both compounds (0.5 to 8 mM). We intravenously administered either 5-FU for 4 days (total, 100 mg/kg body weight) or saline into 2 groups of 11 rats each. We subsequently performed PCA extraction of the whole brain on Day 9, externally adding 7 µmol of dl-valine-2,3-d2. We estimated metabolite concentrations using an NMR spectrometer with recovery correction, correcting metabolite concentrations based on the recovery factor of dl-valine-2,3-d2. For each metabolite concentration, we calculated the coefficient of variation (CEV) and compared differences between the 2 groups using unpaired t-test. Equivalent recoveries of dl-valine-2,3-d2 (89.4 ± 3.9%) and creatine (89.7 ± 3.9%) in the PCA extract of the mixed solution indicated the suitability of dl-valine-2,3-d2 as an internal reference. In the rat study, recovery of dl-valine-2,3-d2 was 90.6 ± 9.2%. Nine major metabolite concentrations adjusted by recovery of dl-valine-2,3-d2 in saline-administered rats were comparable to data in the literature. CEVs of these metabolites were reduced from 10 to 17% before to 7 to 16% after correction. The significance of differences in alanine and taurine between the 5-FU- and saline-administered groups was determined only after recovery correction (0.75 ± 0.12 versus 0.86 ± 0.07 for alanine; 5.17 ± 0.59 versus 5.66 ± 0.42 for taurine [µmol/g brain tissue]; P < 0.05). A new recovery correction technique corrected metabolite loss during PCA extraction, minimized inter-assay variance in quantitative (1)H NMR spectroscopy of brain tissue, and effectively detected inter
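    The correction itself is a division by the recovery factor of the internal reference. A minimal sketch (the alanine reading is a hypothetical raw value; only the reference amounts echo the abstract):

```python
def recovery_corrected(measured, reference_added_umol, reference_recovered_umol):
    """Divide a measured metabolite concentration by the recovery factor of
    the internal reference (reference recovered / reference added), which
    compensates for the loss all metabolites share during PCA extraction."""
    recovery = reference_recovered_umol / reference_added_umol
    return measured / recovery

# Illustrative numbers: 7 umol of dl-valine-2,3-d2 added, ~90.6% recovered,
# and a hypothetical raw alanine reading of 0.68 umol/g brain tissue.
alanine_corrected = recovery_corrected(0.68, 7.0, 6.34)
```

    Because the same factor scales every metabolite in an assay, the correction removes a shared source of inter-assay variance rather than adding per-metabolite noise.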

  17. 40Ar/39Ar technique of K-Ar dating: a comparison with the conventional technique

    USGS Publications Warehouse

    Brent, Dalrymple G.; Lanphere, M.A.

    1971-01-01

    K-Ar ages have been determined by the 40Ar/39Ar total fusion technique on 19 terrestrial samples whose conventional K-Ar ages range from 3.4 m.y. to nearly 1700 m.y. Sample materials included biotite, muscovite, sanidine, adularia, plagioclase, hornblende, actinolite, alunite, dacite, and basalt. For 18 samples there are no significant differences at the 95% confidence level between the K-Ar ages obtained by these two techniques; for one sample the difference is 4.3% and is statistically significant. For the neutron doses used in these experiments (≈4 × 10¹⁸ nvt) it appears that corrections for interfering Ca- and K-derived Ar isotopes can be made without significant loss of precision for samples with K/Ca > 1 as young as about 5 × 10⁵ yr, and for samples with K/Ca < 1 as young as about 10⁷ yr. For younger samples the combination of large atmospheric Ar corrections and large corrections for Ca- and K-derived Ar may make the precision of the 40Ar/39Ar technique less than that of the conventional technique unless the irradiation parameters are adjusted to minimize these corrections. © 1971.
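    The interference corrections discussed above feed into the standard 40Ar/39Ar age equation. For context, a sketch of the basic computation (the decay constant is the conventional total 40K value; the monitor age and ratios are illustrative):

```python
import math

LAMBDA_K40 = 5.543e-10   # total decay constant of 40K, yr^-1

def j_from_monitor(monitor_age_yr, monitor_ratio):
    """Irradiation parameter J from a flux-monitor mineral of known age."""
    return (math.exp(LAMBDA_K40 * monitor_age_yr) - 1.0) / monitor_ratio

def ar_ar_age(j_factor, ar40_star_over_ar39):
    """t = (1/lambda) * ln(1 + J * 40Ar*/39Ar), where 40Ar* is the
    radiogenic argon remaining after the atmospheric and Ca-/K-derived
    interference corrections."""
    return math.log(1.0 + j_factor * ar40_star_over_ar39) / LAMBDA_K40

j = j_from_monitor(100.0e6, 20.0)   # hypothetical monitor: 100 Ma, ratio 20
age = ar_ar_age(j, 20.0)            # same ratio recovers the monitor age
```

    The abstract's point about young, low-K/Ca samples is visible here: when the corrections dominate the measured 40Ar*/39Ar ratio, their uncertainty propagates directly into the age.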

  18. Correction Technique for Raman Water Vapor Lidar Signal-Dependent Bias and Suitability for Water Vapor Trend Monitoring in the Upper Troposphere

    NASA Technical Reports Server (NTRS)

    Whiteman, D. N.; Cadirola, M.; Venable, D.; Calhoun, M.; Miloshevich, L.; Vermeesch, K.; Twigg, L.; Dirisu, A.; Hurst, D.; Hall, E.

    2012-01-01

    The MOHAVE-2009 campaign brought together diverse instrumentation for measuring atmospheric water vapor. We report on the participation of the ALVICE (Atmospheric Laboratory for Validation, Interagency Collaboration and Education) mobile laboratory in the MOHAVE-2009 campaign. In appendices we also report on the performance of the corrected Vaisala RS92 radiosonde measurements during the campaign, on a new radiosonde-based calibration algorithm that reduces the influence of atmospheric variability on the derived calibration constant, and on other results of the ALVICE deployment. The MOHAVE-2009 campaign permitted the participating Raman lidar systems to discover and address measurement biases in the upper troposphere and lower stratosphere. The ALVICE lidar system was found to possess a wet bias, which was attributed to fluorescence of insect material deposited on the telescope early in the mission. Other sources of wet biases are discussed and data from other Raman lidar systems are investigated, revealing that wet biases in upper tropospheric (UT) and lower stratospheric (LS) water vapor measurements appear to be quite common in Raman lidar systems. Lower stratospheric climatology of water vapor is investigated both as a means to check for the existence of these wet biases in Raman lidar data and as a source of correction for the bias. A correction technique is derived and applied to the ALVICE lidar water vapor profiles. Good agreement is found between corrected ALVICE lidar measurements and those of the RS92, frost-point hygrometer and total column water. The correction is offered as a general method both to quality-control Raman water vapor lidar data and to correct those data that have signal-dependent bias. The influence of the correction is shown to be small at regions in the upper troposphere where recent work indicates detection of trends in atmospheric water vapor may be most robust. The correction shown here holds promise for permitting useful upper tropospheric water vapor trend monitoring with Raman lidar.

  19. The robust corrective action priority-an improved approach for selecting competing corrective actions in FMEA based on principle of robust design

    NASA Astrophysics Data System (ADS)

    Sutrisno, Agung; Gunawan, Indra; Vanany, Iwan

    2017-11-01

    Despite being an integral part of risk-based quality improvement efforts, studies improving the selection of corrective action priorities using the FMEA technique are still limited in the literature, and none considers robustness and risk when selecting among competing improvement initiatives. This study proposes a theoretical model for selecting among competing risk-based corrective actions by considering their robustness and risk. We incorporated the principle of robust design in computing the preference score among corrective action candidates. Along with the cost and benefit of competing corrective actions, we also incorporate their risk and robustness. An example is provided to demonstrate the applicability of the proposed model.

  20. Relativistic Corrections to the Sunyaev-Zeldovich Effect for Clusters of Galaxies. III. Polarization Effect

    NASA Astrophysics Data System (ADS)

    Itoh, Naoki; Nozawa, Satoshi; Kohyama, Yasuharu

    2000-04-01

    We extend the formalism of relativistic thermal and kinematic Sunyaev-Zeldovich effects and include the polarization of the cosmic microwave background photons. We consider the situation of a cluster of galaxies moving with a velocity β≡v/c with respect to the cosmic microwave background radiation. In the present formalism, polarization of the scattered cosmic microwave background radiation caused by the proper motion of a cluster of galaxies is naturally derived as a special case of the kinematic Sunyaev-Zeldovich effect. The relativistic corrections are also included in a natural way. Our results are in complete agreement with the recent results of relativistic corrections obtained by Challinor, Ford, & Lasenby with an entirely different method, as well as the nonrelativistic limit obtained by Sunyaev & Zeldovich. The relativistic correction becomes significant in the Wien region.

  1. Blind retrospective motion correction of MR images.

    PubMed

    Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard

    2013-12-01

    Subject motion can severely degrade MR images. A retrospective motion correction algorithm, gradient-based motion correction, is proposed that significantly reduces ghosting and blurring artifacts due to subject motion. The technique uses the raw data of standard imaging sequences; no sequence modifications or additional equipment such as tracking devices are required. Rigid motion is assumed. The approach iteratively searches for the motion trajectory yielding the sharpest image as measured by the entropy of spatial gradients. The vast space of motion parameters is efficiently explored by gradient-based optimization with a convergence guarantee. The method has been evaluated on both synthetic and real data in two and three dimensions using standard imaging techniques. MR images are consistently improved over different kinds of motion trajectories. Using a graphics processing unit implementation, computation times are on the order of a few minutes for a full three-dimensional volume. The presented technique can be an alternative or a complement to prospective motion correction methods and is able to improve images with strong motion artifacts from standard imaging sequences without requiring additional data. Copyright © 2013 Wiley Periodicals, Inc., a Wiley company.
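    The sharpness criterion named above — entropy of spatial gradients, which is low when gradient energy is concentrated at true edges — can be sketched as follows (a minimal NumPy illustration, not the paper's implementation):

```python
import numpy as np

def gradient_entropy(img):
    """Entropy of the normalized spatial-gradient magnitudes: lower = sharper."""
    gx, gy = np.gradient(img.astype(float))
    mag = np.sqrt(gx**2 + gy**2).ravel()
    p = mag / mag.sum()
    p = p[p > 0]                      # treat 0*log(0) as 0
    return -np.sum(p * np.log(p))

# A sharp step edge vs. the same edge blurred by a moving average:
sharp = np.zeros((32, 32)); sharp[:, 16:] = 1.0
blurred = np.apply_along_axis(
    lambda r: np.convolve(r, np.ones(5) / 5, mode="same"), 1, sharp)
```

    Motion artifacts spread gradient energy over many pixels, so minimizing this entropy over candidate motion trajectories favors the artifact-free image.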

  2. A forward error correction technique using a high-speed, high-rate single chip codec

    NASA Astrophysics Data System (ADS)

    Boyd, R. W.; Hartman, W. F.; Jones, Robert E.

    The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10⁻⁵ bit-error rate with phase-shift-keying modulation and additive white Gaussian noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32n data bits followed by 32 overhead bits.
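    The code-word structure stated above (32n data bits plus 32 overhead bits) fixes the code rate at n/(n+1), which reproduces the quoted 7/8 for n = 7 and exceeds it for larger n. A one-line check (the function name is ours):

```python
def burst_code_rate(n):
    """Code rate for words of 32*n data bits plus 32 overhead bits."""
    data_bits = 32 * n
    return data_bits / (data_bits + 32)
```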

  3. Determination of background levels on water quality of groundwater bodies: a methodological proposal applied to a Mediterranean River basin (Guadalhorce River, Málaga, southern Spain).

    PubMed

    Urresti-Estala, Begoña; Carrasco-Cantos, Francisco; Vadillo-Pérez, Iñaki; Jiménez-Gavilán, Pablo

    2013-03-15

    Determining background levels is a key element in the further characterisation of groundwater bodies, according to Water Framework Directive 2000/60/EC and, more specifically, Groundwater Directive 2006/118/EC. In many cases, these levels present very high values for some parameters and types of groundwater, which makes their correct estimation significant as a prior step to establishing thresholds, assessing the status of water bodies and subsequently identifying contaminant patterns. The Guadalhorce River basin presents widely varying hydrogeological and hydrochemical conditions; therefore, its background levels are the result of the many factors represented in the natural chemical composition of water bodies in this basin. The question of determining background levels under objective criteria is generally addressed as a statistical problem, arising from the many aspects involved in its calculation. In the present study, we outline the advantages of two statistical techniques applied specifically for this purpose: (1) the iterative 2σ technique and (2) the distribution function, and examine whether the conclusions reached by these techniques are similar or whether they differ considerably. In addition, we identify the specific characteristics of each approach and the circumstances under which each should be used. Copyright © 2012 Elsevier Ltd. All rights reserved.
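    The iterative 2σ technique named above repeatedly discards values outside mean ± 2σ and recomputes until every remaining value lies inside; the surviving range is taken as the natural background. A minimal sketch under that common formulation (the abstract does not give the authors' exact stopping rule):

```python
import numpy as np

def iterative_2sigma(values, max_iter=100):
    """Iteratively discard values outside mean +/- 2*std until all remain inside.
    The surviving values approximate the natural background population."""
    x = np.asarray(values, dtype=float)
    for _ in range(max_iter):
        m, s = x.mean(), x.std()
        inside = (x >= m - 2 * s) & (x <= m + 2 * s)
        if inside.all():
            return x
        x = x[inside]
    return x
```

    Contaminated samples sit in the upper tail, so they are stripped in the first iterations while the bulk of the background distribution survives.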

  4. RSA and its Correctness through Modular Arithmetic

    NASA Astrophysics Data System (ADS)

    Meelu, Punita; Malik, Sitender

    2010-11-01

    To ensure the security of business applications, the business sector uses Public Key Cryptographic Systems (PKCS). An RSA system generally belongs to the category of PKCS used for both encryption and authentication. This paper provides an introduction to RSA through its encryption and decryption schemes, the mathematical background, which includes theorems for combining modular equations, and the correctness of RSA. In short, this paper explains some of the mathematical concepts that RSA is based on, and then provides a complete proof that RSA works correctly. We can prove the correctness of RSA through the combined process of encryption and decryption, based on the Chinese Remainder Theorem (CRT) and Euler's theorem. However, there is no mathematical proof that RSA is secure; everyone takes that on trust.
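    The correctness claim — decrypt(encrypt(m)) = m, provable via Euler's theorem, with CRT usable to recombine the residues mod p and mod q — can be exercised with textbook-sized primes (these parameters are a standard toy example, far too small to be secure):

```python
# Toy RSA with small primes to illustrate correctness (not security).
p, q = 61, 53
n = p * q                      # modulus, 3233
phi = (p - 1) * (q - 1)        # Euler totient, 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e

def encrypt(m): return pow(m, e, n)
def decrypt(c): return pow(c, d, n)

def crt_decrypt(c):
    """Decrypt by recombining residues mod p and mod q (Chinese Remainder Theorem)."""
    mp = pow(c, d % (p - 1), p)
    mq = pow(c, d % (q - 1), q)
    h = (pow(q, -1, p) * (mp - mq)) % p
    return mq + h * q
```

    Both decryption paths recover the plaintext because m^(ed) ≡ m (mod p) and (mod q), hence (mod n) by the CRT.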

  5. Developing Formal Correctness Properties from Natural Language Requirements

    NASA Technical Reports Server (NTRS)

    Nikora, Allen P.

    2006-01-01

    This viewgraph presentation reviews the rationale of the program to transform natural language specifications into formal notation, specifically, to automate the generation of Linear Temporal Logic (LTL) correctness properties from natural language temporal specifications. There are several reasons for this approach: (1) model-based techniques are becoming more widely accepted; (2) analytical verification techniques (e.g., model checking, theorem proving) are significantly more effective at detecting certain types of specification design errors (e.g., race conditions, deadlock) than manual inspection; (3) many requirements are still written in natural language, specification languages and their associated tools have a high learning curve, and increased schedule and budget pressure on projects reduces training opportunities for engineers; and (4) the formulation of correctness properties for system models can be a difficult problem. This is relevant to NASA in that it would simplify the development of formal correctness properties, lead to more widespread use of model-based specification and design techniques, assist in earlier identification of defects, and reduce residual defect content for space mission software systems. The presentation also discusses potential applications, accomplishments, technological transfer potential and the next steps.

  6. [PERCUTANEOUS CORRECTION OF FOREFOOT DEFORMITIES IN DIABETIC PATIENTS IN ORDER TO PREVENT PRESSURE SORES - TECHNIQUE AND RESULTS IN 20 CONSECUTIVE PATIENTS].

    PubMed

    Yassin, Mustafa; Garti, Avraham; Heller, Eyal; Weissbrot, Moshe; Robinson, Dror

    2017-04-01

    Diabetes mellitus is a 21st century pandemic. Due to life-span prolongation combined with the increased rate of diabetes, a growing population of patients is afflicted with neuropathic foot deformities. Traditional operative repair of these deformities is associated with a high complication rate and a relatively common incidence of infection. In recent years, in order to prevent these complications, percutaneous deformity correction methods were developed. We describe the experience accumulated in treating 20 consecutive patients with diabetic neuropathic foot deformities in a percutaneous fashion. A consecutive series of patients treated at our institute for neuropathic foot deformity was assessed according to a standard protocol using the AOFAS forefoot score and the LUMT score performed at baseline as well as at 6 months and 12 months. Treatment-related complications were monitored. All procedures were performed in an ambulatory setting using local anesthesia. A total of 12 patients had soft tissue corrections, and 8 had a combined soft tissue and bone correction. Baseline AOFAS score was 48±7 and improved to 73±9 at six months and 75±7 at one year. LUMT score in 11 patients with a chronic wound decreased from 22±4 to 2±1 at one year post-op. One patient required hospitalization due to post-op bleeding. Percutaneous techniques allow deformity correction of diabetic feet, including those with open wounds, in an ambulatory setting with a low complication rate.

  7. Influence of detector noise and background noise on detection-system

    NASA Astrophysics Data System (ADS)

    Song, Yiheng; Wang, Zhiyong

    2018-02-01

    Studying the noise contributed by detectors and by background light, we find that the influence of background noise on detection exceeds that of the detector itself. Therefore, based on a fiber-coupled beam-splitting technique, a small-area detector is used to replace the large-area detector. This achieves a high signal-to-noise ratio (SNR) and reduces speckle interference from the background light. This technique is expected to resolve the bottleneck of achieving both a large field of view and high sensitivity.

  8. General relativistic corrections in density-shear correlations

    NASA Astrophysics Data System (ADS)

    Ghosh, Basundhara; Durrer, Ruth; Sellentin, Elena

    2018-06-01

    We investigate the corrections which relativistic light-cone computations induce on the correlation of the tangential shear with galaxy number counts, also known as galaxy-galaxy lensing. The standard approach to galaxy-galaxy lensing treats the number density of sources in a foreground bin as observable, whereas it is in reality unobservable due to the presence of relativistic corrections. We find that already in the redshift range covered by the DES first year data, these currently neglected relativistic terms lead to a systematic correction of up to 50% in the density-shear correlation function for the highest redshift bins. This correction is dominated by the fact that a redshift bin of number counts does not only lens sources in a background bin, but is itself again lensed by all masses between the observer and the counted source population. Relativistic corrections are currently ignored in standard galaxy-galaxy analyses, and the additional lensing of the counted source population is only included in the error budget (via the covariance matrix). At increasingly higher redshifts and larger scales, however, these relativistic and lensing corrections become increasingly important, and we argue that it is then more efficient, and also cleaner, to account for these corrections in the density-shear correlations.

  9. Real-Time Correction By Optical Tracking with Integrated Geometric Distortion Correction for Reducing Motion Artifacts in fMRI

    NASA Astrophysics Data System (ADS)

    Rotenberg, David J.

    Artifacts caused by head motion are a substantial source of error in fMRI that limits its use in neuroscience research and clinical settings. Real-time scan-plane correction by optical tracking has been shown to correct slice misalignment and non-linear spin-history artifacts; however, residual artifacts due to dynamic magnetic field non-uniformity may remain in the data. A recently developed correction technique, PLACE, can correct for absolute geometric distortion using the complex image data from two EPI images with slightly shifted k-space trajectories. We present a correction approach that integrates PLACE into a real-time scan-plane update system by optical tracking, applied to a tissue-equivalent phantom undergoing complex motion and an fMRI finger-tapping experiment with overt head motion to induce dynamic field non-uniformity. Experiments suggest that including volume-by-volume geometric distortion correction by PLACE can suppress dynamic geometric distortion artifacts in a phantom and in vivo and provide more robust activation maps.

  10. [Can fixation disparity be detected reliably by measurement and correctional techniques according H.J. Haase (MKH)?].

    PubMed

    Gerling, J; de Paz, H; Schroth, V; Bach, M; Kommerell, G

    2000-06-01

    The theory of the "Measuring and Correction Methods of H.-J. Haase" (MCH) states that a small misalignment of one eye, called fixation disparity, indicates a difficulty in overcoming a "vergence position of rest" that is different from the ortho position. According to the theory, this difficulty can cause asthenopic complaints, such as headaches, and these complaints can be relieved by prisms. The theory further claims that fixation disparity can be ascertained by a series of tests which depend on the subject's perception. The tests most decisive for the diagnosis of a so-called fixation disparity type 2 consist of stereo displays. The magnitude of the prism that allows the subject to see the test configurations in symmetry is thought to be the one that corrects the "vergence position of rest". Nine subjects with healthy eyes in whom a "fixation disparity type 2" had been diagnosed were selected for the study. Misalignment of the eyes was determined according to the principle of the unilateral cover test. Targets identical for both eyes were presented on the screen of the Polatest E. Then, the target was deleted for one eye and the ensuing position change of the other eye was measured using the search coil technique. This test was performed both with and without the MCH prism. In all 9 subjects the misalignment was less than 10 minutes of arc, i.e. in the range of normal fixation instability. Averaged across the 9 subjects, the deviation of the eye (misaligned according to MCH) was 0.79 +/- 3.45 minutes of arc in the direction opposite to that predicted by the MCH, a value not significantly different from zero. The MCH prism elicited a fusional vergence movement whose magnitude corresponded to the magnitude of the MCH prism. Ascertaining fixation disparity with the MCH is unreliable. Accordingly, it appears dubious to correct a "vergence position of rest" on the basis of the MCH.

  11. ViBe: a universal background subtraction algorithm for video sequences.

    PubMed

    Barnich, Olivier; Van Droogenbroeck, Marc

    2011-06-01

    This paper presents a technique for motion detection that incorporates several innovative mechanisms. For example, our proposed technique stores, for each pixel, a set of values taken in the past at the same location or in the neighborhood. It then compares this set to the current pixel value in order to determine whether that pixel belongs to the background, and adapts the model by choosing randomly which values to substitute from the background model. This approach differs from those based upon the classical belief that the oldest values should be replaced first. Finally, when the pixel is found to be part of the background, its value is propagated into the background model of a neighboring pixel. We describe our method in full detail (including pseudo-code and the parameter values used) and compare it to other background subtraction techniques. Efficiency figures show that our method outperforms recent and proven state-of-the-art methods in terms of both computation speed and detection rate. We also analyze the performance of a downscaled version of our algorithm, reduced to the absolute minimum of one comparison and one byte of memory per pixel. It appears that even such a simplified version of our algorithm performs better than mainstream techniques.
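    The per-pixel decision and the random update described above can be sketched for a single pixel as follows (parameter values are illustrative placeholders, not the paper's tuned defaults):

```python
import numpy as np

rng = np.random.default_rng(1)
N, R, MIN_MATCHES = 20, 20, 2   # samples per pixel, match radius, match threshold

def classify(samples, pixel):
    """ViBe-style decision: background if the pixel is close (within R)
    to at least MIN_MATCHES of the stored samples."""
    return np.sum(np.abs(samples - pixel) < R) >= MIN_MATCHES

def update(samples, pixel):
    """Random replacement: overwrite a randomly chosen sample,
    not the oldest one, as the abstract emphasizes."""
    samples[rng.integers(len(samples))] = pixel

# One pixel whose observed history hovers around intensity 100
history = rng.normal(100, 5, N)
```

    A full implementation would hold such a sample set per pixel and, on a background match, also propagate the value into a random neighbor's model.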

  12. An automated baseline correction protocol for infrared spectra of atmospheric aerosols collected on polytetrafluoroethylene (Teflon) filters

    NASA Astrophysics Data System (ADS)

    Kuzmiakova, Adele; Dillner, Ann M.; Takahama, Satoshi

    2016-06-01

    A growing body of research on statistical applications for characterization of atmospheric aerosol Fourier transform infrared (FT-IR) samples collected on polytetrafluoroethylene (PTFE) filters (e.g., Russell et al., 2011; Ruthenburg et al., 2014) and a rising interest in analyzing FT-IR samples collected by air quality monitoring networks call for an automated PTFE baseline correction solution. The existing polynomial technique (Takahama et al., 2013) is not scalable to a project with a large number of aerosol samples because it contains many parameters and requires expert intervention. Therefore, the question of how to develop an automated method for baseline-correcting hundreds to thousands of ambient aerosol spectra, given the variability in both environmental mixture composition and PTFE baselines, remains. This study approaches the question by detailing the statistical protocol, which allows for the precise definition of analyte and background subregions, applies nonparametric smoothing splines to reproduce sample-specific PTFE variations, and integrates performance metrics from atmospheric aerosol and blank samples alike in the smoothing parameter selection. Referencing 794 atmospheric aerosol samples from seven Interagency Monitoring of PROtected Visual Environments (IMPROVE) sites collected during 2011, we start by identifying key FT-IR signal characteristics, such as non-negative absorbance or analyte segment transformation, to capture sample-specific transitions between background and analyte. While referring to qualitative properties of the PTFE background, the goal of smoothing splines interpolation is to learn the baseline structure in the background region to predict the baseline structure in the analyte region.
We then validate the model by comparing smoothing splines baseline-corrected spectra with uncorrected and polynomial baseline (PB)-corrected equivalents via three statistical applications: (1) clustering analysis, (2) functional group quantification
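    The core idea — fit a smoothing spline only on the background subregions and predict the baseline under the analyte band — can be sketched on a synthetic spectrum (SciPy's `UnivariateSpline` stands in for the study's spline machinery; the region boundaries and smoothing parameter are illustrative):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic spectrum: smooth PTFE-like baseline plus one analyte band
x = np.linspace(0, 1, 400)
baseline = 0.2 + 0.1 * np.sin(3 * x)
band = 0.5 * np.exp(-((x - 0.5) / 0.03) ** 2)
spectrum = baseline + band

# Background subregions exclude the analyte band around x = 0.5
bg = (x < 0.35) | (x > 0.65)
spline = UnivariateSpline(x[bg], spectrum[bg], s=1e-4)

# Predict the baseline everywhere and subtract it
corrected = spectrum - spline(x)
```

    The smoothing parameter `s` plays the role selected in the protocol above via performance metrics on aerosol and blank samples.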

  13. The location and recognition of anti-counterfeiting code image with complex background

    NASA Astrophysics Data System (ADS)

    Ni, Jing; Liu, Quan; Lou, Ping; Han, Ping

    2017-07-01

    The order of the cigarette market is a key issue in the tobacco business system. The anti-counterfeiting code, as a kind of effective anti-counterfeiting technology, can identify counterfeit goods and effectively maintain the normal order of the market and consumers' rights and interests. The anti-counterfeiting code images obtained by the tobacco recognizer suffer from complex backgrounds, light interference and other problems. To solve these problems, the paper proposes a locating method based on the Susan operator, combined with a sliding window and line scanning. In order to reduce the interference of background and noise, we extract the red component of the image and convert the color image into a gray image. For confusable characters, recognition-result correction based on template matching has been adopted to improve the recognition rate. With this method, the anti-counterfeiting code can be located and recognized correctly in images with complex backgrounds. The experimental results show the effectiveness and feasibility of the approach.

  14. Low dose scatter correction for digital chest tomosynthesis

    NASA Astrophysics Data System (ADS)

    Inscoe, Christina R.; Wu, Gongting; Shan, Jing; Lee, Yueh Z.; Zhou, Otto; Lu, Jianping

    2015-03-01

    Digital chest tomosynthesis (DCT) provides superior image quality and depth information for thoracic imaging at relatively low dose, though the presence of strong photon scatter degrades the image quality. In most chest radiography, anti-scatter grids are used. However, the grid also blocks a large fraction of the primary beam photons, requiring a significantly higher imaging dose for patients. Previously, we proposed an efficient low-dose scatter correction technique using a primary beam sampling apparatus. We implemented the technique in stationary digital breast tomosynthesis, and found the method to be efficient in correcting patient-specific scatter with only a 3% increase in dose. In this paper we report a feasibility study of applying the same technique to chest tomosynthesis. This investigation was performed utilizing phantom and cadaver subjects. The method involves an initial tomosynthesis scan of the object. A lead plate with an array of holes, or primary sampling apparatus (PSA), was placed above the object. A second tomosynthesis scan was performed to measure the primary (scatter-free) transmission. The PSA data were used with the full-field projections to compute the scatter, which was then interpolated to full-field scatter maps unique to each projection angle. Full-field projection images were scatter corrected prior to reconstruction. Projections and reconstruction slices were evaluated, and the correction method was found to be effective at improving image quality and practical for clinical implementation.
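    The correction logic above — sample the scatter-free primary through sparse holes, take the difference from the full-field scan at those positions, and interpolate to a full scatter map — reduces in one dimension to the following sketch (signal shapes and hole spacing are invented for illustration):

```python
import numpy as np

# 1-D sketch of the PSA idea: total signal = primary + smooth scatter
x = np.arange(512)
primary = 100 + 50 * np.exp(-((x - 256) / 80.0) ** 2)   # scatter-free transmission
scatter = 40 + 20 * np.sin(2 * np.pi * x / 512)          # slowly varying scatter
total = primary + scatter

# The hole array samples the primary-only transmission at sparse positions;
# the scatter there is the difference between the two scans.
holes = np.arange(16, 512, 32)
scatter_at_holes = total[holes] - primary[holes]

# Interpolate the sparse scatter estimates to a full-field scatter map,
# then subtract it from the full-field projection.
scatter_map = np.interp(x, holes, scatter_at_holes)
corrected = total - scatter_map
```

    Because scatter varies slowly across the field, sparse sampling plus interpolation recovers it well away from the image borders.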

  15. Particle dynamics during nanoparticle synthesis by laser ablation in a background gas

    NASA Astrophysics Data System (ADS)

    Nakata, Yoshiki; Muramoto, Junichi; Okada, Tatsuo; Maeda, Mitsuo

    2002-02-01

    Particle dynamics during Si nanoparticle synthesis in a laser-ablation plume in different background gases were investigated by laser-spectroscopic imaging techniques. Two-dimensional laser-induced fluorescence and ultraviolet Rayleigh scattering techniques were used to visualize the spatial distributions of the Si atoms and the grown nanoparticles, respectively. We have developed a visualization technique called re-decomposition laser-induced fluorescence to observe small nanoparticles (hereafter called clusters), which are difficult to observe by the conventional imaging techniques. In this article, the whole process of nanoparticle synthesis in different background gases of He, Ne, Ar, N2 and O2 was investigated by these techniques. In He, Ne, Ar and N2 background gases at 10 Torr, the clustering of the Si atoms started 200, 250, 300 and 800 μs after ablation, respectively. The growth rate of the clusters in He background gas was much larger than that in the other gases. The spatial distributions of the Si nanoparticles were mushroom-like in He, N2 and O2, and column-like in Ne and Ar. It is thought that the difference in distribution was caused by differences in the flow characteristics of the background gases, which would imply that the viscosity of the background gas is one of the main governing parameters.

  16. Optimal Background Estimators in Single-Molecule FRET Microscopy.

    PubMed

    Preus, Søren; Hildebrandt, Lasse L; Birkedal, Victoria

    2016-09-20

    Single-molecule total internal reflection fluorescence (TIRF) microscopy constitutes an umbrella of powerful tools that facilitate direct observation of the biophysical properties, population heterogeneities, and interactions of single biomolecules without the need for ensemble synchronization. Due to the low signal/noise ratio in single-molecule TIRF microscopy experiments, it is important to determine the local background intensity, especially when the fluorescence intensity of the molecule is used quantitatively. Here we compare and evaluate the performance of different aperture-based background estimators used particularly in single-molecule Förster resonance energy transfer. We introduce the general concept of multiaperture signatures and use this technique to demonstrate how the choice of background can affect the measured fluorescence signal considerably. A new, to our knowledge, and simple background estimator is proposed, called the local statistical percentile (LSP). We show that the LSP background estimator performs as well as current background estimators at low molecular densities and significantly better in regions of high molecular densities. The LSP background estimator is thus suited for single-particle TIRF microscopy of dense biological samples in which the intensity itself is an observable of the technique. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
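    The abstract does not spell out the LSP formula, but the family of estimators it belongs to can be illustrated: a low percentile of the aperture pixels is robust to bright neighboring molecules, where the plain mean is biased upward (all numbers below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)

def percentile_background(aperture_pixels, q=25):
    """Estimate the local background as a low percentile of the aperture pixels,
    robust to bright neighbouring molecules (unlike the mean)."""
    return np.percentile(aperture_pixels, q)

# Aperture around one molecule: mostly true background (~100 counts),
# contaminated by a bright neighbour in a minority of pixels.
background_pixels = rng.normal(100, 3, 60)
contaminated = np.concatenate([background_pixels, rng.normal(400, 10, 15)])
```

    This mirrors the abstract's point: at high molecular densities the choice of background estimator changes the measured fluorescence signal considerably.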

  17. Novel Principles and Techniques to Create a Natural Design in Female Hairline Correction Surgery.

    PubMed

    Park, Jae Hyun

    2015-12-01

    Female hairline correction surgery is becoming increasingly popular. However, no guidelines or methods of female hairline design have been introduced to date. The purpose of this study was to create an initial framework based on the novel principles of female hairline design and then use artistic ability and experience to fine-tune this framework. An understanding of the concept of 5 areas (frontal area, frontotemporal recess area, temporal peak, infratemple area, and sideburns) and 5 points (C, A, B, T, and S) is required for female hairline correction surgery (the 5A5P principle). The general concepts of female hairline correction surgery and natural design methods are explained herein, with a focus on the correlations between these 5 areas and 5 points. A natural and aesthetic female hairline can be created with application of the above-mentioned concepts. The 5A5P principle of forming the female hairline is very useful in female hairline correction surgery.

  18. Small incision lenticule extraction (SMILE) in the correction of myopic astigmatism: outcomes and limitations - an update.

    PubMed

    Alió Del Barrio, Jorge L; Vargas, Verónica; Al-Shymali, Olena; Alió, Jorge L

    2017-01-01

    Small Incision Lenticule Extraction (SMILE) is a flap-free intrastromal technique for the correction of myopia and myopic astigmatism. To date, this technique lacks automated centration and cyclotorsion control, so several concerns have been raised regarding its capability to correct moderate or high levels of astigmatism. The objective of this paper is to review the reported SMILE outcomes for the correction of myopic astigmatism associated with a cylinder over 0.75 D, and its comparison with the outcomes reported with the excimer laser-based corneal refractive surgery techniques. A total of five studies clearly reporting SMILE astigmatic outcomes were identified. SMILE shows acceptable outcomes for the correction of myopic astigmatism, although a general agreement exists about the superiority of the excimer laser-based techniques for low to moderate levels of astigmatism. Manual correction of the static cyclotorsion should be adopted for any SMILE astigmatic correction over 0.75 D.

  19. Improving Focal Photostimulation of Cortical Neurons with Pre-derived Wavefront Correction

    PubMed Central

    Choy, Julian M. C.; Sané, Sharmila S.; Lee, Woei M.; Stricker, Christian; Bachor, Hans A.; Daria, Vincent R.

    2017-01-01

    Recent progress in neuroscience to image and investigate brain function has been made possible by impressive developments in optogenetic and opto-molecular tools. Such research requires advances in optical techniques for the delivery of light through brain tissue with high spatial resolution. The tissue causes distortions to the wavefront of the incoming light which broadens the focus and consequently reduces the intensity and degrades the resolution. Such effects are detrimental in techniques requiring focal stimulation. Adaptive wavefront correction has been demonstrated to compensate for these distortions. However, iterative derivation of the corrective wavefront introduces time constraints that limit its applicability to probe living cells. Here, we demonstrate that we can pre-determine and generalize a small set of Zernike modes to correct for aberrations of the light propagating through specific brain regions. A priori identification of a corrective wavefront is a direct and fast technique that improves the quality of the focus without the need for iterative adaptive wavefront correction. We verify our technique by measuring the efficiency of two-photon photolysis of caged neurotransmitters along the dendrites of a whole-cell patched neuron. Our results show that encoding the selected Zernike modes on the excitation light can improve light propagation through brain slices of rats as observed by the neuron's evoked excitatory post-synaptic potential in response to localized focal uncaging at the spines of the neuron's dendrites. PMID:28507508
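    A corrective wavefront built from a small pre-derived set of Zernike modes, as described above, amounts to a weighted sum of mode maps over the pupil. A minimal sketch (the mode selection and coefficients are arbitrary placeholders, not values from the study):

```python
import numpy as np

def zernike_phase(coeffs, size=64):
    """Corrective phase map from a few low-order Zernike modes on the unit pupil
    (Noll-style normalisation); zero outside the pupil."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    pupil = rho <= 1.0
    modes = {
        "defocus":   np.sqrt(3) * (2 * rho**2 - 1),
        "astig_0":   np.sqrt(6) * rho**2 * np.cos(2 * theta),
        "coma_x":    np.sqrt(8) * (3 * rho**3 - 2 * rho) * np.cos(theta),
        "spherical": np.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1),
    }
    phase = sum(c * modes[name] for name, c in coeffs.items())
    return phase * pupil

# Hypothetical pre-derived coefficients for a given brain region
phase = zernike_phase({"defocus": 0.5, "spherical": -0.2})
```

    Encoding such a phase on a spatial light modulator ahead of time is what removes the need for iterative adaptive correction on living tissue.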

  20. Readout circuit with novel background suppression for long wavelength infrared focal plane arrays

    NASA Astrophysics Data System (ADS)

    Xie, L.; Xia, X. J.; Zhou, Y. F.; Wen, Y.; Sun, W. F.; Shi, L. X.

    2011-02-01

    In this article, a novel pixel readout circuit using a switched-capacitor integrator mode background suppression technique is presented for long wavelength infrared focal plane arrays. This circuit can improve dynamic range and signal-to-noise ratio by suppressing the large background current during integration. Compared with other background suppression techniques, the new background suppression technique is less sensitive to the process mismatch and has no additional shot noise. The proposed circuit is theoretically analysed and simulated while taking into account the non-ideal characteristics. The result shows that the background suppression non-uniformity is ultra-low even for a large process mismatch. The background suppression non-uniformity of the proposed circuit can also remain very small with technology scaling.

  1. Our method of correcting cryptotia.

    PubMed

    Yanai, A; Tange, I; Bandoh, Y; Tsuzuki, K; Sugino, H; Nagata, S

    1988-12-01

    Our technique for the correction of cryptotia using both Z-plasty and the advancement flap is described. The main advantages are the simple design of the skin incision and the possibility of its application to cryptotia other than severe cartilage deformity and extreme lack of skin.

  2. Blind multirigid retrospective motion correction of MR images.

    PubMed

    Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard

    2015-04-01

    Physiological nonrigid motion is inevitable when imaging, e.g., abdominal viscera, and can lead to serious deterioration of the image quality. Prospective techniques for motion correction can handle only special types of nonrigid motion, as they only allow global correction. Retrospective methods developed so far need guidance from navigator sequences or external sensors. We propose a fully retrospective nonrigid motion correction scheme that only needs raw data as an input. Our method is based on a forward model that describes the effects of nonrigid motion by partitioning the image into patches with locally rigid motion. Using this forward model, we construct an objective function that we can optimize with respect to both the unknown motion parameters per patch and the underlying sharp image. We evaluate our method on both synthetic and real data in 2D and 3D. In vivo data were acquired using standard imaging sequences. The correction algorithm significantly improves the image quality. Our compute unified device architecture (CUDA)-enabled graphics processing unit implementation ensures feasible computation times. The presented technique is the first computationally feasible retrospective method that uses the raw data of standard imaging sequences, and allows correction for nonrigid motion without guidance from external motion sensors. © 2014 Wiley Periodicals, Inc.

  3. Inhaler technique maintenance: gaining an understanding from the patient's perspective.

    PubMed

    Ovchinikova, Ludmila; Smith, Lorraine; Bosnic-Anticevich, Sinthia

    2011-08-01

    The aim of this study was to determine the patient-, education-, and device-related factors that predict inhaler technique maintenance. Thirty-one community pharmacists were trained to deliver inhaler technique education to people with asthma. Pharmacists evaluated (based on published checklists), and where appropriate, delivered inhaler technique education to patients (participants) in the community pharmacy at baseline (Visit 1) and 1 month later (Visit 2). Data were collected on participant demographics, asthma history, current asthma control, history of inhaler technique education, and a range of psychosocial aspects of disease management (including adherence to medication, motivation for correct technique, beliefs regarding the importance of maintaining correct technique, and necessity and concern beliefs regarding preventer therapy). Stepwise backward logistic regression was used to identify the predictors of inhaler technique maintenance at 1 month. In total 145 and 127 participants completed Visits 1 and 2, respectively. At baseline, 17% of patients (n = 24) demonstrated correct technique (score 11/11), which increased to 100% (n = 139) after remedial education by pharmacists. At follow-up, 61% (n = 77) of patients demonstrated correct technique. The predictors of inhaler technique maintenance based on the logistic regression model (χ²(3, N = 125) = 16.22, p = .001) were use of a dry powder inhaler over a pressurized metered-dose inhaler (OR 2.6), having better asthma control at baseline (OR 2.3), and being more motivated to practice correct inhaler technique (OR 1.2). Contrary to what is typically recommended in previous research, correct inhaler technique maintenance may involve more than repetition of instructions. This study found that past technique education factors had no bearing on technique maintenance, whereas patient psychosocial factors (motivation) did.

  4. An algebraic algorithm for nonuniformity correction in focal-plane arrays.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Hardie, Russell C

    2002-09-01

    A scene-based algorithm is developed to compensate for bias nonuniformity in focal-plane arrays. Nonuniformity can be extremely problematic, especially for mid- to far-infrared imaging systems. The technique is based on use of estimates of interframe subpixel shifts in an image sequence, in conjunction with a linear-interpolation model for the motion, to extract information on the bias nonuniformity algebraically. The performance of the proposed algorithm is analyzed by using real infrared and simulated data. One advantage of this technique is its simplicity; it requires relatively few frames to generate an effective correction matrix, thereby permitting the execution of frequent on-the-fly nonuniformity correction as drift occurs. Additionally, the performance is shown to exhibit considerable robustness with respect to lack of the common types of temporal and spatial irradiance diversity that are typically required by statistical scene-based nonuniformity correction techniques.
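The algebraic idea can be shown in a toy version that assumes an exactly known one-pixel horizontal shift between two frames (the paper handles estimated subpixel shifts via linear interpolation): pairing shifted observations cancels the scene and exposes bias differences, which integrate to the bias pattern up to one constant per row.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 8, 32
scene = rng.uniform(0, 100, (H, W))
bias = rng.normal(0, 5, (H, W))          # fixed-pattern bias nonuniformity

# Two frames separated by a known one-pixel horizontal pan:
frame1 = scene + bias
frame2 = scene[:, 1:] + bias[:, :-1]     # pixel (i, j) now sees scene (i, j+1)

# Pairing shifted observations isolates bias *differences* algebraically:
# frame2[i, j] - frame1[i, j+1] = bias[i, j] - bias[i, j+1]
d = frame2 - frame1[:, 1:]

# Integrate the differences; each row is recovered up to one constant.
b_hat = np.zeros((H, W))
for j in range(W - 1):
    b_hat[:, j + 1] = b_hat[:, j] - d[:, j]
```

With noiseless data and an integer shift the recovered pattern matches the true bias exactly, row by row, up to the unknown per-row offset; the full algorithm averages many such estimates over an image sequence.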

  5. Power corrections in the N-jettiness subtraction scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boughezal, Radja; Liu, Xiaohui; Petriello, Frank

    We discuss the leading-logarithmic power corrections in the N-jettiness subtraction scheme for higher-order perturbative QCD calculations. We compute the next-to-leading order power corrections for an arbitrary N-jet process, and we explicitly calculate the power correction through next-to-next-to-leading order for color-singlet production for both $q\bar{q}$ and gg initiated processes. Our results are compact and simple to implement numerically. Including the leading power correction in the N-jettiness subtraction scheme substantially improves its numerical efficiency. Finally, we discuss what features of our techniques extend to processes containing final-state jets.

  7. Weighted Mean of Signal Intensity for Unbiased Fiber Tracking of Skeletal Muscles: Development of a New Method and Comparison With Other Correction Techniques.

    PubMed

    Giraudo, Chiara; Motyka, Stanislav; Weber, Michael; Resinger, Christoph; Thorsten, Feiweier; Traxler, Hannes; Trattnig, Siegfried; Bogner, Wolfgang

    2017-08-01

    The aim of this study was to investigate the origin of random image artifacts in stimulated echo acquisition mode diffusion tensor imaging (STEAM-DTI), assess the role of averaging, develop an automated artifact postprocessing correction method using weighted mean of signal intensities (WMSIs), and compare it with other correction techniques. Institutional review board approval and written informed consent were obtained. The right calf and thigh of 10 volunteers were scanned on a 3 T magnetic resonance imaging scanner using a STEAM-DTI sequence. Artifacts (ie, signal loss) in STEAM-based DTI, presumably caused by involuntary muscle contractions, were investigated in volunteers and ex vivo (ie, human cadaver calf and turkey leg using the same DTI parameters as for the volunteers). An automated postprocessing artifact correction method based on the WMSI was developed and compared with previous approaches (ie, iteratively reweighted linear least squares and informed robust estimation of tensors by outlier rejection [iRESTORE]). Diffusion tensor imaging and fiber tracking metrics, using different averages and artifact corrections, were compared for region of interest- and mask-based analyses. One-way repeated measures analysis of variance with Greenhouse-Geisser correction and Bonferroni post hoc tests were used to evaluate differences among all tested conditions. Qualitative assessment (ie, image quality) for native and corrected images was performed using the paired t test. Randomly localized and shaped artifacts affected all volunteer data sets. Artifact burden during voluntary muscle contractions increased on average from 23.1% to 77.5%, but artifacts were absent ex vivo. Diffusion tensor imaging metrics (mean diffusivity, fractional anisotropy, radial diffusivity, and axial diffusivity) showed heterogeneous behavior but remained in the range reported in the literature. Fiber track metrics (number, length, and volume) significantly improved in both calves and thighs after artifact
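The weighting idea can be illustrated with a minimal sketch (the exact WMSI weighting used in the paper may differ): each repeated acquisition is weighted by its own signal intensity, so repetitions hit by contraction-induced signal loss are down-weighted instead of dragging the plain average down.

```python
import numpy as np

def wmsi(repeats, eps=1e-12):
    """Weighted mean across the averages axis (axis 0), weighting each
    repetition by its own signal intensity, so repetitions with signal
    loss (near-zero intensity) contribute little to the result."""
    w = repeats / (repeats.sum(axis=0, keepdims=True) + eps)
    return (w * repeats).sum(axis=0)

rng = np.random.default_rng(1)
true_signal = np.full((4, 4), 100.0)
repeats = np.stack([true_signal + rng.normal(0, 2, (4, 4)) for _ in range(8)])
repeats[3] *= 0.05                      # one repetition hit by signal dropout

plain = repeats.mean(axis=0)            # biased low by the corrupted repeat
weighted = wmsi(repeats)                # stays close to the true signal
```

The plain average of 8 repeats with one near-zero dropout is biased low by roughly 1/8 of the signal, while the intensity-weighted mean remains close to the uncorrupted value.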

  8. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy–Richardson deconvolution algorithm to the current estimation of the image, at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated
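The deconvolution building block is the standard Richardson–Lucy update, shown here standalone in 1-D with numpy. In the paper's scheme an update of this kind, plus a wavelet denoising step, is applied to the current image estimate inside each list-mode OSEM iteration; the denoising is omitted in this sketch.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=25):
    """Plain Richardson-Lucy deconvolution in 1-D: each iteration blurs the
    current estimate, compares it with the observation, and reapplies the
    mirrored PSF to the ratio. Estimates stay nonnegative by construction."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy object: two point sources suffering partial-volume-like blur.
x = np.zeros(64)
x[30] = 1.0
x[38] = 0.5
t = np.arange(-4, 5)
psf = np.exp(-t**2 / (2 * 1.5**2))
observed = np.convolve(x, psf / psf.sum(), mode="same")
recovered = richardson_lucy(observed, psf)
```

On this noiseless example the deconvolved profile is substantially closer to the true spikes than the blurred observation; with noisy data the iteration amplifies noise, which is exactly why the paper couples it to a denoising step.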

  9. Blackfolds in (anti)-de Sitter backgrounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armas, Jay; Obers, Niels A.

    2011-04-15

    We construct different neutral blackfold solutions in Anti-de Sitter and de Sitter background spacetimes in the limit where the cosmological constant is taken to be much smaller than the horizon size. This includes a class of blackfolds with horizons that are products of odd-spheres times a transverse sphere, for which the thermodynamic stability is also studied. Moreover, we exhibit a specific case in which the same blackfold solution can describe different limiting black hole spacetimes, therefore illustrating the geometric character of the blackfold approach. Furthermore, we show that the higher-dimensional Kerr-(Anti)-de Sitter black hole allows for ultraspinning regimes in the same limit under consideration and demonstrate that this is correctly described by a pancaked blackfold geometry. We also give evidence for the possibility of saturating the rigidity theorem in these backgrounds.

  10. Rotational distortion correction in endoscopic optical coherence tomography based on speckle decorrelation

    PubMed Central

    Uribe-Patarroyo, Néstor; Bouma, Brett E.

    2015-01-01

    We present a new technique for the correction of nonuniform rotation distortion in catheter-based optical coherence tomography (OCT), based on the statistics of speckle between A-lines using intensity-based dynamic light scattering. This technique does not rely on tissue features and can be performed on single frames of data, thereby enabling real-time image correction. We demonstrate its suitability in a gastrointestinal balloon-catheter OCT system, determining the actual rotational speed with high temporal resolution, and present corrected cross-sectional and en face views showing significant enhancement of image quality. PMID:26625040
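The core statistic is the speckle correlation between adjacent A-lines, sketched below. The paper's intensity-based dynamic light scattering model, which maps this decorrelation to an actual rotation speed, is not reproduced here.

```python
import numpy as np

def aline_correlation(frame):
    """Normalized zero-lag cross-correlation between adjacent A-lines
    (columns) of an intensity frame. In NURD correction, a drop in this
    speckle correlation indicates a larger angular step, i.e. the probe
    momentarily rotating faster than nominal."""
    a = frame[:, :-1].astype(float)
    b = frame[:, 1:].astype(float)
    num = (a * b).mean(axis=0)
    den = np.sqrt((a**2).mean(axis=0) * (b**2).mean(axis=0))
    return num / den

rng = np.random.default_rng(2)
speckle = rng.rayleigh(1.0, (256, 2))
# Three A-lines: the first two are identical (no angular step), the third
# is an independent speckle realization (large angular step):
frame = np.column_stack([speckle[:, 0], speckle[:, 0], speckle[:, 1]])
rho = aline_correlation(frame)
```

Identical adjacent A-lines give a correlation of exactly 1, while statistically independent speckle gives a markedly lower value; a calibrated decorrelation-versus-angle curve turns this into a rotational-speed estimate per A-line.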

  11. Electroweak radiative corrections to the top quark decay

    NASA Astrophysics Data System (ADS)

    Kuruma, Toshiyuki

    1993-12-01

    The top quark, once produced, should be an important window to the electroweak symmetry breaking sector. We compute electroweak radiative corrections to the decay process t → b + W⁺ in order to extract information on the Higgs sector and to fix the background in searches for a possible new physics contribution. The large Yukawa coupling of the top quark induces a new form factor through vertex corrections and causes a discrepancy from the tree-level longitudinal W-boson production fraction, but the effect is of order 1% or less for m_H < 1 TeV.

  12. Advanced Background Subtraction Applied to Aeroacoustic Wind Tunnel Testing

    NASA Technical Reports Server (NTRS)

    Bahr, Christopher J.; Horne, William C.

    2015-01-01

    An advanced form of background subtraction is presented and applied to aeroacoustic wind tunnel data. A variant of this method has seen use in other fields such as climatology and medical imaging. The technique, based on an eigenvalue decomposition of the background noise cross-spectral matrix, is robust against situations where isolated background auto-spectral levels are measured to be higher than levels of combined source and background signals. It also provides an alternate estimate of the cross-spectrum, which previously might have poor definition for low signal-to-noise ratio measurements. Simulated results indicate similar performance to conventional background subtraction when the subtracted spectra are weaker than the true contaminating background levels. Superior performance is observed when the subtracted spectra are stronger than the true contaminating background levels. Experimental results show limited success in recovering signal behavior for data where conventional background subtraction fails. They also demonstrate the new subtraction technique's ability to maintain a proper coherence relationship in the modified cross-spectral matrix. Beam-forming and de-convolution results indicate the method can successfully separate sources. Results also show a reduced need for the use of diagonal removal in phased array processing, at least for the limited data sets considered.
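A minimal sketch of eigenvalue-based background subtraction on a cross-spectral matrix (the estimator in the paper may differ in detail): subtract the background CSM, then clip negative eigenvalues so the result remains a physically valid, positive semi-definite CSM even when isolated background levels exceed the combined measurement.

```python
import numpy as np

def subtract_background_csm(csm_total, csm_background):
    """Background subtraction for a cross-spectral matrix (CSM).
    A plain subtraction can leave negative eigenvalues when the isolated
    background exceeds the combined measurement; clipping them restores a
    positive semi-definite signal CSM with a consistent cross-spectrum."""
    diff = csm_total - csm_background
    w, v = np.linalg.eigh(diff)          # Hermitian eigendecomposition
    w = np.clip(w, 0.0, None)            # discard negative-energy modes
    return (v * w) @ v.conj().T

rng = np.random.default_rng(3)
steer = np.exp(1j * rng.uniform(0, 2 * np.pi, 4))    # hypothetical source vector
csm_signal = np.outer(steer, steer.conj())           # rank-one source CSM
noise = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
csm_bg = noise @ noise.conj().T                      # PSD background estimate
measured = csm_signal + csm_bg

recovered = subtract_background_csm(measured, csm_bg)
```

Because the recovered matrix is rebuilt from the positive eigenspace, its off-diagonal entries keep a proper coherence relationship, which is what allows diagonal removal to be relaxed in subsequent phased-array processing.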

  13. Carrier-phase multipath corrections for GPS-based satellite attitude determination

    NASA Technical Reports Server (NTRS)

    Axelrad, A.; Reichert, P.

    2001-01-01

    This paper demonstrates the high degree of spatial repeatability of carrier-phase multipath errors in a spacecraft environment and describes a correction technique, termed the sky map method, which exploits this spatial correlation to correct measurements and improve the accuracy of GPS-based attitude solutions.
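The sky map method can be sketched as a direction-indexed lookup table (the bin sizes and residual values below are illustrative assumptions, not the paper's): repeatable multipath errors are accumulated by line-of-sight direction in the body frame, then replayed as corrections when a signal later arrives from the same direction.

```python
import numpy as np

class SkyMap:
    """Multipath 'sky map': accumulate carrier-phase residuals in
    azimuth/elevation bins (body frame) and replay the per-bin mean as a
    correction when a later signal arrives from the same direction."""
    def __init__(self, az_bins=36, el_bins=9):
        self.sum = np.zeros((az_bins, el_bins))
        self.count = np.zeros((az_bins, el_bins))
        self.az_bins, self.el_bins = az_bins, el_bins

    def _idx(self, az_deg, el_deg):
        i = int(az_deg % 360 // (360 / self.az_bins))
        j = int(min(el_deg, 89.999) // (90 / self.el_bins))
        return i, j

    def learn(self, az_deg, el_deg, residual):
        i, j = self._idx(az_deg, el_deg)
        self.sum[i, j] += residual
        self.count[i, j] += 1

    def correct(self, az_deg, el_deg, measurement):
        i, j = self._idx(az_deg, el_deg)
        if self.count[i, j] == 0:
            return measurement               # no history for this direction
        return measurement - self.sum[i, j] / self.count[i, j]

m = SkyMap()
for _ in range(5):
    m.learn(123.0, 45.0, 0.02)               # repeatable 2 cm multipath error
corrected = m.correct(123.0, 45.0, 1.00 + 0.02)
```

The technique relies on the error being a function of arrival direction in the body frame, which is exactly the spatial repeatability the paper demonstrates.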

  14. Low background screening capability in the UK

    NASA Astrophysics Data System (ADS)

    Ghag, Chamkaur

    2015-08-01

    Low background rare event searches in underground laboratories seeking observation of direct dark matter interactions or neutrino-less double beta decay have the potential to profoundly advance our understanding of the physical universe. Successful results from these experiments depend critically on construction from extremely radiologically clean materials and accurate knowledge of subsequent low levels of expected background. The experiments must conduct comprehensive screening campaigns to reduce radioactivity from detector components, and these measurements also inform detailed characterisation and quantification of background sources and their impact, necessary to assign statistical significance to any potential discovery. To provide requisite sensitivity for material screening and characterisation in the UK to support our rare event search activities, we have re-developed our infrastructure to add ultra-low background capability across a range of complementary techniques that collectively allow complete radioactivity measurements. Ultra-low background HPGe and BEGe detectors have been installed at the Boulby Underground Laboratory, itself undergoing substantial facility refurbishment, to provide high sensitivity gamma spectroscopy, in particular for measuring the uranium and thorium decay series products. Dedicated low-activity mass spectrometry instrumentation has been developed at UCL for part-per-trillion-level contaminant identification to complement underground screening with direct U and Th measurements, and meet throughput demands. Finally, radon emanation screening at UCL measures radon background inaccessible to gamma or mass spectrometry techniques. With this new capability the UK is delivering half of the radioactivity screening for the LZ dark matter search experiment.

  15. Efficient genomic correction methods in human iPS cells using CRISPR-Cas9 system.

    PubMed

    Li, Hongmei Lisa; Gee, Peter; Ishida, Kentaro; Hotta, Akitsu

    2016-05-15

    Precise gene correction using the CRISPR-Cas9 system in human iPS cells holds great promise for various applications, such as the study of gene functions, disease modeling, and gene therapy. In this review article, we summarize methods for effective editing of genomic sequences of iPS cells based on our experiences correcting dystrophin gene mutations with the CRISPR-Cas9 system. Designing specific sgRNAs as well as having efficient transfection methods and proper detection assays to assess genomic cleavage activities are critical for successful genome editing in iPS cells. In addition, because iPS cells are fragile by nature when dissociated into single cells, a step-by-step confirmation during the cell recovery process is recommended to obtain an adequate number of genome-edited iPS cell clones. We hope that the techniques described here will be useful for researchers from diverse backgrounds who would like to perform genome editing in iPS cells. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Three dimensional socket preservation: a technique for soft tissue augmentation along with socket grafting

    PubMed Central

    2012-01-01

    Background A cursory review of the current socket preservation literature well depicts the necessity of further esthetic considerations in corrective procedures of the alveolar ridge upon and after extraction. The new technique described here is a rotational pedicle combined epithelialized and connective tissue graft (RPC graft), adjunct to an immediate guided bone regeneration (GBR) procedure. Results We review this technique through a case report and discuss its benefits in comparison to other socket preservation procedures. Conclusion The main advantages of the RPC graft can be summarized as follows: stable primary closure during bone remodeling, preserving or creating sufficient vestibular depth, providing adequate keratinized gingiva on the buccal surface, and being esthetically pleasing. PMID:22540920

  17. Correction factors for on-line microprobe analysis of multielement alloy systems

    NASA Technical Reports Server (NTRS)

    Unnam, J.; Tenney, D. R.; Brewer, W. D.

    1977-01-01

    An on-line correction technique was developed for the conversion of electron probe X-ray intensities into concentrations of emitting elements. This technique consisted of off-line calculation and representation of binary interaction data which were read into an on-line minicomputer to calculate variable correction coefficients. These coefficients were used to correct the X-ray data without significantly increasing computer core requirements. The binary interaction data were obtained by running Colby's MAGIC 4 program in the reverse mode. The data for each binary interaction were represented by polynomial coefficients obtained by least-squares fitting a third-order polynomial. Polynomial coefficients were generated for most of the common binary interactions at different accelerating potentials and are included. Results are presented for the analyses of several alloy standards to demonstrate the applicability of this correction procedure.
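The off-line/on-line split can be sketched as follows (the interaction data below are made up for illustration; the real coefficients came from running MAGIC 4 in the reverse mode): a third-order polynomial is least-squares fitted off-line, and the on-line computer merely evaluates the stored coefficients.

```python
import numpy as np

# Off-line step: least-squares fit of a third-order polynomial to binary
# interaction data (hypothetical correction factor vs. mass fraction).
c = np.linspace(0.0, 1.0, 11)                       # concentration of element B
k = 1.00 + 0.15 * c - 0.07 * c**2 + 0.02 * c**3     # k-ratio correction factor
coeffs = np.polyfit(c, k, deg=3)                    # 4 stored coefficients

# On-line step: the minicomputer only evaluates the stored polynomial,
# so the full matrix-correction calculation never runs during acquisition.
def correction(conc):
    return np.polyval(coeffs, conc)

factor = correction(0.35)
```

Storing four polynomial coefficients per binary interaction keeps the on-line memory footprint tiny, which is the point the abstract makes about minicomputer core requirements.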

  18. Large mirror surface control by corrective coating

    NASA Astrophysics Data System (ADS)

    Bonnand, Romain; Degallaix, Jerome; Flaminio, Raffaele; Giacobone, Laurent; Lagrange, Bernard; Marion, Fréderique; Michel, Christophe; Mours, Benoit; Mugnier, Pierre; Pacaud, Emmanuel; Pinard, Laurent

    2013-08-01

    The Advanced Virgo gravitational wave detector aims at a sensitivity ten times better than the initial LIGO and Virgo detectors. This implies very stringent requirement on the optical losses in the interferometer arm cavities. In this paper we focus on the mirrors which form the interferometer arm cavities and that require a surface figure error to be well below one nanometre on a diameter of 150 mm. This ‘sub-nanometric flatness’ is not achievable by classical polishing on such a large diameter. Therefore we present the corrective coating technique which has been developed to reach this requirement. Its principle is to add a non-uniform thin film on top of the substrate in order to flatten its surface. In this paper we will introduce the Advanced Virgo requirements and present the basic principle of the corrective coating technique. Then we show the results obtained experimentally on an initial Virgo substrate. Finally we provide an evaluation of the round-trip losses in the Fabry-Perot arm cavities once the corrected surface is used.

  19. Effects of novel corrective spinal technique on adolescent idiopathic scoliosis as assessed by radiographic imaging.

    PubMed

    Noh, Dong Koog; You, Joshua Sung-H; Koh, Jae-Hyun; Kim, Hoseong; Kim, Donghyun; Ko, Sung-Mok; Shin, Ji-Youn

    2014-01-01

    To compare the therapeutic effects of a 3-dimensional corrective spinal technique (CST) and a conventional exercise program (CE) on altered spinal curvature and health related quality-of-life in patients with adolescent idiopathic scoliosis (AIS). Adolescents with idiopathic scoliosis (N=32, 6 males and 26 females) between 10 and 19 years of age (14.34 ± 2.60 years) were recruited and underwent the CST or CE for 60 minutes/day, 2-3 times a week, for an average of 30 sessions in total. Diagnostic X-ray imaging was used to determine intervention-related changes in the Cobb angle, thoracic kyphosis angle, lumbar lordosis angle, sacral slope, pelvic tilt, pelvic incidence, and vertebral rotation (Nash-Moe method). The Scoliosis Research Society-22 (SRS-22) health related quality-of-life questionnaire was used. Data were analysed using independent t-test, paired t-test, and non-parametric Mann-Whitney U-test at p < 0.05. CST showed greater improvements in Cobb angle (p=0.003), vertebral rotation (p < 0.001), and SRS-22 scores (self-image and treatment satisfaction subscale scores and total score, p=0.026, p=0.039, and p=0.041, respectively) as compared to the controls. There were no significant changes in the other measures between the two groups. This is the first clinical trial to investigate the effects of the 3-dimensional CST on spinal curvatures and health related quality-of-life in AIS, providing an important clinical rationale and compelling evidence for the effective management of AIS.

  20. Correction of Dual-PRF Doppler Velocity Outliers in the Presence of Aliasing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altube, Patricia; Bech, Joan; Argemí, Oriol

    In Doppler weather radars, the presence of unfolding errors or outliers is a well-known quality issue for radial velocity fields estimated using the dual–pulse repetition frequency (PRF) technique. Postprocessing methods have been developed to correct dual-PRF outliers, but these need prior application of a dealiasing algorithm for an adequate correction. Our paper presents an alternative procedure based on circular statistics that corrects dual-PRF errors in the presence of extended Nyquist aliasing. The correction potential of the proposed method is quantitatively tested by means of velocity field simulations and is exemplified in the application to real cases, including severe storm events. The comparison with two other existing correction methods indicates an improved performance in the correction of clustered outliers. The technique we propose is well suited for real-time applications requiring high-quality Doppler radar velocity fields, such as wind shear and mesocyclone detection algorithms, or assimilation in numerical weather prediction models.
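A minimal 1-D sketch of the idea (the published method works on 2-D velocity fields and treats both PRF-specific error magnitudes; only the low-PRF case with illustrative Nyquist values is shown here): the reference velocity is a circular mean of neighbouring gates on the extended Nyquist circle, and a flagged outlier is replaced by the unfolding candidate v + 2k·V_low closest to that reference.

```python
import numpy as np

V_EXT = 24.0   # extended (dual-PRF) Nyquist velocity, m/s (illustrative)
V_LOW = 8.0    # low-PRF Nyquist velocity; dual-PRF outliers are off by 2k*V_LOW

def circ_mean(vels):
    """Circular mean of velocities mapped onto the extended Nyquist circle,
    so the reference stays meaningful even across aliased data."""
    ang = np.pi * np.asarray(vels) / V_EXT
    return V_EXT * np.angle(np.mean(np.exp(1j * ang))) / np.pi

def circ_dist(a, b):
    """Shortest distance between two velocities on the extended circle."""
    d = np.pi * (a - b) / V_EXT
    return abs(V_EXT * np.angle(np.exp(1j * d)) / np.pi)

def correct_gate(v, neighbours, thresh=4.0, kmax=2):
    """Flag a gate whose circular deviation from its neighbours exceeds
    `thresh`, and replace it with the unfolding candidate v + 2k*V_LOW
    (kept inside the extended interval) closest to the reference."""
    ref = circ_mean(neighbours)
    if circ_dist(v, ref) <= thresh:
        return v
    candidates = [v + 2 * k * V_LOW for k in range(-kmax, kmax + 1)]
    candidates = [c for c in candidates if abs(c) <= V_EXT]
    return min(candidates, key=lambda c: circ_dist(c, ref))

ray = np.array([5.1, 4.8, -11.0, 5.2, 4.9])   # gate 2 is a dual-PRF outlier
fixed = correct_gate(ray[2], np.delete(ray, 2))
```

Working on the circle rather than the line is what lets the correction operate without a prior dealiasing pass: the circular mean and distance are insensitive to jumps of the full extended Nyquist interval.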

  2. Efficacy of distortion correction on diffusion imaging: comparison of FSL eddy and eddy_correct using 30 and 60 directions diffusion encoding.

    PubMed

    Yamada, Haruyasu; Abe, Osamu; Shizukuishi, Takashi; Kikuta, Junko; Shinozaki, Takahiro; Dezawa, Ko; Nagano, Akira; Matsuda, Masayuki; Haradome, Hiroki; Imamura, Yoshiki

    2014-01-01

    Diffusion imaging is a unique noninvasive tool to detect brain white matter trajectory and integrity in vivo. However, this technique suffers from spatial distortion and signal pileup or dropout originating from local susceptibility gradients and eddy currents. Although there are several methods to mitigate these problems, most techniques can be applicable either to susceptibility or eddy-current induced distortion alone, with a few exceptions. The present study compared the correction efficiency of FSL tools, "eddy_correct" and the combination of "eddy" and "topup", in terms of diffusion-derived fractional anisotropy (FA). The brain diffusion images were acquired from 10 healthy subjects using 30 and 60 directions encoding schemes based on the electrostatic repulsive forces. For the 30 directions encoding, 2 sets of diffusion images were acquired with the same parameters, except for the phase-encode blips, which had opposing polarities along the anteroposterior direction. For the 60 directions encoding, non-diffusion-weighted and diffusion-weighted images were obtained with forward phase-encoding blips, and non-diffusion-weighted images with the same parameters, except for the phase-encode blips, which had opposing polarities. FA images without and with distortion correction were compared in a voxel-wise manner with tract-based spatial statistics. We showed that images corrected with eddy and topup possessed higher FA values than images uncorrected and corrected with eddy_correct with trilinear (FSL default setting) or spline interpolation in most white matter skeletons, using both encoding schemes. Furthermore, the 60 directions encoding scheme was superior, as measured by increased FA values, to the 30 directions encoding scheme, despite comparable acquisition time. This study supports the combination of eddy and topup as a superior correction tool in diffusion imaging rather than the eddy_correct tool, especially with trilinear interpolation, using 60 directions

  3. A Gas Dynamics Method Based on The Spectral Deferred Corrections (SDC) Time Integration Technique and The Piecewise Parabolic Method (PPM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samet Y. Kadioglu

    2011-12-01

    We present a computational gas dynamics method based on the Spectral Deferred Corrections (SDC) time integration technique and the Piecewise Parabolic Method (PPM) finite volume method. The PPM framework is used to define edge averaged quantities which are then used to evaluate numerical flux functions. The SDC technique is used to integrate the solution in time. This kind of approach was first taken by Anita et al. in [17]. However, the approach of [17] is problematic when applied to certain shock problems. Here we propose significant improvements to [17]. The method is fourth order (both in space and time) for smooth flows, and provides highly resolved discontinuous solutions. We tested the method by solving a variety of problems. Results indicate that fourth-order accuracy in both space and time is achieved when the flow is smooth. Results also demonstrate the shock capturing ability of the method.
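The SDC time integrator is easiest to see on a scalar ODE rather than the full gas-dynamics system: a low-order predictor on sub-nodes is refined by correction sweeps that use spectral quadrature of the interpolated right-hand side, raising the formal order with each sweep up to the quadrature order. A numpy sketch, with illustrative node and sweep counts:

```python
import numpy as np

def sdc_step(f, y0, T, n_nodes=3, sweeps=3):
    """One step of explicit spectral deferred corrections for y' = f(y).
    A forward-Euler predictor on the sub-nodes is refined by sweeps that
    add the spectral quadrature of the interpolant of f at the nodes."""
    t = np.linspace(0.0, T, n_nodes)
    M = n_nodes - 1
    # S[m, j] = integral over [t_m, t_{m+1}] of the j-th Lagrange basis poly.
    S = np.zeros((M, n_nodes))
    for j in range(n_nodes):
        e = np.zeros(n_nodes)
        e[j] = 1.0
        p = np.polyint(np.polyfit(t, e, n_nodes - 1))
        S[:, j] = np.polyval(p, t[1:]) - np.polyval(p, t[:-1])

    y = np.zeros(n_nodes)
    y[0] = y0
    for m in range(M):                          # forward-Euler predictor
        y[m + 1] = y[m] + (t[m + 1] - t[m]) * f(y[m])
    for _ in range(sweeps):                     # correction sweeps
        fold = np.array([f(v) for v in y])
        ynew = np.zeros(n_nodes)
        ynew[0] = y0
        for m in range(M):
            dt = t[m + 1] - t[m]
            ynew[m + 1] = (ynew[m] + dt * (f(ynew[m]) - fold[m])
                           + S[m] @ fold)
        y = ynew
    return y[-1]

# Linear test problem y' = -y over one step; exact solution is exp(-T):
approx = sdc_step(lambda y: -y, 1.0, 0.2)
```

In the paper's setting the role of `f` is played by the PPM flux divergence, and the sweeps drive the solution toward the underlying collocation scheme; here the correction already reduces the forward-Euler error by orders of magnitude.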

  4. Full-wave acoustic and thermal modeling of transcranial ultrasound propagation and investigation of skull-induced aberration correction techniques: a feasibility study.

    PubMed

    Kyriakou, Adamos; Neufeld, Esra; Werner, Beat; Székely, Gábor; Kuster, Niels

    2015-01-01

    inhomogeneity is required to predict the pressure distribution for given steering parameters. Simulation-based approaches to calculate aberration corrections may aid in the extension of the tcFUS treatment envelope as well as predict and avoid secondary effects (standing waves, skull heating). Due to their superior performance, simulation-based techniques may prove invaluable in the amelioration of skull-induced aberration effects in tcFUS therapy. The next steps are to investigate shear-wave-induced effects in order to reliably exclude secondary hot-spots, and to develop comprehensive uncertainty assessment and validation procedures.

  5. Fast automatic correction of motion artifacts in shoulder MRI

    NASA Astrophysics Data System (ADS)

    Manduca, Armando; McGee, Kiaran P.; Welch, Edward B.; Felmlee, Joel P.; Ehman, Richard L.

    2001-07-01

    The ability to correct certain types of MR images for motion artifacts from the raw data alone, by iterative optimization of an image quality measure, has recently been demonstrated. In the first study on a large data set of clinical images, we showed that such an autocorrection technique significantly improved the quality of clinical rotator cuff images, and performed almost as well as navigator echo correction while never degrading an image. One major criticism of such techniques is that they are computationally intensive, and reports of the processing time required have ranged from a few minutes to tens of minutes per slice. In this paper we describe a variety of improvements to our algorithm as well as approaches to correct sets of adjacent slices efficiently. The resulting algorithm is able to correct 256x256x20 clinical shoulder data sets for motion at an effective rate of 1 second/image on a standard commercial workstation. Future improvements in processor speeds and/or the use of specialized hardware will translate directly to corresponding reductions in this calculation time.
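
    Autocorrection of the kind described above treats the motion parameters as unknowns and searches for the values that optimize an image quality metric computed from the raw data alone. The 1-D sketch below recovers a synthetic motion-induced phase error; the entropy metric, the single mid-scan phase jump, and the brute-force grid search are all illustrative simplifications, not the clinical algorithm.

```python
import numpy as np

def image_entropy(img):
    """Entropy of the normalized magnitude image; sharper images score lower."""
    p = np.abs(img)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

# Synthetic 1-D "scan": a compact object whose second half of k-space picks up
# a phase jump (an abrupt shift halfway through the acquisition).
n = 256
x = np.arange(n)
obj = np.exp(-0.5 * ((x - n / 2) / 8.0) ** 2)
k = np.fft.fft(obj)
true_phi = 0.8                         # radians, the unknown motion-induced phase
k_corrupt = k.copy()
k_corrupt[n // 2:] *= np.exp(1j * true_phi)

# Autocorrection: try candidate corrections, keep the one minimizing entropy.
candidates = np.linspace(0.0, 1.6, 33)
scores = []
for phi in candidates:
    trial = k_corrupt.copy()
    trial[n // 2:] *= np.exp(-1j * phi)
    scores.append(image_entropy(np.fft.ifft(trial)))
best_phi = candidates[int(np.argmin(scores))]
```

    The record's speed improvements amount to making this search cheap enough (better optimizers, shared computations across adjacent slices) to run at about a second per image.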

  6. Quantum gravitational contributions to the cosmic microwave background anisotropy spectrum.

    PubMed

    Kiefer, Claus; Krämer, Manuel

    2012-01-13

    We derive the primordial power spectrum of density fluctuations in the framework of quantum cosmology. For this purpose we perform a Born-Oppenheimer approximation to the Wheeler-DeWitt equation for an inflationary universe with a scalar field. In this way, we first recover the scale-invariant power spectrum that is found as an approximation in the simplest inflationary models. We then obtain quantum gravitational corrections to this spectrum and discuss whether they lead to measurable signatures in the cosmic microwave background anisotropy spectrum. The nonobservation so far of such corrections translates into an upper bound on the energy scale of inflation.

  7. [Astigmatism correction with Excimer laser].

    PubMed

    Gauthier, L

    2012-03-01

    The excimer laser is the best and most widely used technique for astigmatism correction. LASIK is generally preferred to PRK and should be the choice for hyperopic and mixed astigmatisms. Myopic astigmatisms are the easiest cases to treat: the length of the photoablation is placed on the flat meridian. Hyperopic and mixed astigmatisms are more difficult to correct because they are more technically demanding and because the optical zone of the photoablation must be large. Flying-spot lasers are the best for these cases. The most important point is to trace the photoablation very precisely on the astigmatism axis. The use of eye trackers with iris recognition, or a preoperative marking of the reference axis, avoids cyclotorsion errors and incorrect head positioning. Irregular astigmatisms are better corrected with topography-guided or wavefront-guided photoablations. Copyright © 2012 Elsevier Masson SAS. All rights reserved.

  8. Transcranial phase aberration correction using beam simulations and MR-ARFI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vyas, Urvi, E-mail: urvi.vyas@gmail.com; Kaye, Elena; Pauly, Kim Butts

    2014-03-15

    Purpose: Transcranial magnetic resonance-guided focused ultrasound surgery is a noninvasive technique for causing selective tissue necrosis. Variations in density, thickness, and shape of the skull cause aberrations in the location and shape of the focal zone. In this paper, the authors propose a hybrid simulation-MR-ARFI technique to achieve aberration correction for transcranial MR-guided focused ultrasound surgery. The technique uses ultrasound beam propagation simulations with MR Acoustic Radiation Force Imaging (MR-ARFI) to correct skull-caused phase aberrations. Methods: Skull-based numerical aberrations were obtained from an MR-guided focused ultrasound patient treatment and were added to all elements of the InSightec conformal bone focused ultrasound surgery transducer during transmission. In the first experiment, the 1024 aberrations derived from a human skull were condensed into 16 aberrations by averaging over the transducer area of 64 elements. In the second experiment, all 1024 aberrations were applied to the transducer. The aberrated MR-ARFI images were used in the hybrid simulation-MR-ARFI technique to find 16 estimated aberrations. These estimated aberrations were subtracted from the original aberrations to result in the corrected images. Each aberration experiment (16-aberration and 1024-aberration) was repeated three times. Results: The corrected MR-ARFI image was compared to the aberrated image and the ideal image (image with zero aberrations) for each experiment. The hybrid simulation-MR-ARFI technique resulted in an average increase in focal MR-ARFI phase of 44% for the 16-aberration case and 52% for the 1024-aberration case, and recovered 83% and 39% of the ideal MR-ARFI phase for the 16-aberration and 1024-aberration case, respectively. 
Conclusions: Using one MR-ARFI image and no a priori information about the applied phase aberrations, the hybrid simulation-MR-ARFI technique improved the maximum MR-ARFI phase of the beam's focus.

  9. caCORRECT2: Improving the accuracy and reliability of microarray data in the presence of artifacts

    PubMed Central

    2011-01-01

    Background In previous work, we reported the development of caCORRECT, a novel microarray quality control system built to identify and correct spatial artifacts commonly found on Affymetrix arrays. We have made recent improvements to caCORRECT, including the development of a model-based data-replacement strategy and integration with typical microarray workflows via caCORRECT's web portal and caBIG grid services. In this report, we demonstrate that caCORRECT improves the reproducibility and reliability of experimental results across several common Affymetrix microarray platforms. caCORRECT represents an advance over state-of-the-art quality control methods such as Harshlighting, and acts to improve gene expression calculation techniques such as PLIER, RMA and MAS5.0, because it incorporates spatial information into outlier detection as well as outlier information into probe normalization. The ability of caCORRECT to recover accurate gene expressions from low-quality probe intensity data is assessed using a combination of real and synthetic artifacts with PCR follow-up confirmation and the affycomp spike-in data. The caCORRECT tool can be accessed at the website: http://cacorrect.bme.gatech.edu. Results We demonstrate that (1) caCORRECT's artifact-aware normalization avoids the undesirable global data warping that happens when any damaged chips are processed without caCORRECT; (2) When used upstream of RMA, PLIER, or MAS5.0, the data imputation of caCORRECT generally improves the accuracy of microarray gene expression in the presence of artifacts more than using Harshlighting or not using any quality control; (3) Biomarkers selected from artifactual microarray data which have undergone the quality control procedures of caCORRECT are more likely to be reliable, as shown by both spike-in and PCR validation experiments. 
Finally, we present a case study of the use of caCORRECT to reliably identify biomarkers for renal cell carcinoma, yielding two diagnostic biomarkers with

  10. Atmospheric Correction Algorithm for Hyperspectral Imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. J. Pollina

    1999-09-01

    In December 1997, the US Department of Energy (DOE) established a Center of Excellence (Hyperspectral-Multispectral Algorithm Research Center, HyMARC) for promoting the research and development of algorithms to exploit spectral imagery. This center is located at the DOE Remote Sensing Laboratory in Las Vegas, Nevada, and is operated for the DOE by Bechtel Nevada. This paper presents the results to date of a research project begun at the center during 1998 to investigate the correction of hyperspectral data for atmospheric aerosols. Results of a project conducted by the Rochester Institute of Technology to define, implement, and test procedures for absolute calibration and correction of hyperspectral data to absolute units of high spectral resolution imagery will be presented. Hybrid techniques for atmospheric correction using image or spectral scene data coupled through radiative propagation models will be specifically addressed. Results of this effort to analyze HYDICE sensor data will be included. Preliminary results based on studying the performance of standard routines, such as Atmospheric Pre-corrected Differential Absorption and Nonlinear Least Squares Spectral Fit, in retrieving reflectance spectra show overall reflectance retrieval errors of approximately one to two reflectance units in the 0.4- to 2.5-micron-wavelength region (outside of the absorption features). These results are based on HYDICE sensor data collected from the Southern Great Plains Atmospheric Radiation Measurement site during overflights conducted in July of 1997. Results of an upgrade made in the model-based atmospheric correction techniques, which take advantage of updates made to the moderate resolution atmospheric transmittance model (MODTRAN 4.0) software, will also be presented. 
Data will be shown to demonstrate how the reflectance retrieval in the shorter wavelengths of the blue-green region will be improved because of enhanced modeling of multiple scattering effects.

  11. Effects of Cartilage Scoring in Correction of Prominent Ear with Incisionless Otoplasty Technique in Pediatric Patients.

    PubMed

    Haytoğlu, Süheyl; Haytoğlu, Tahir Gökhan; Kuran, Gökhan; Yıldırım, İlhami; Arıkan, Osman Kürşat

    2017-04-01

    The aim of this study was to investigate the efficacy, complication rates, patient satisfaction, and recurrence risks of the incisionless otoplasty technique performed with or without cartilage scoring for correcting the prominent ear in pediatric patients. A total of 49 patients with prominent ears were operated with incisionless otoplasty. In Group 1, 44 ears of 24 patients were operated with incisionless otoplasty without cartilage scoring. In Group 2, 46 ears of 25 patients were operated with incisionless otoplasty with cartilage scoring. For comparison, auriculocephalic distances were measured at three different levels: preoperatively, at the end of surgery, and at the 1st and 6th months postoperatively. Patient satisfaction was evaluated using a visual analog scale (VAS). The global esthetic improvement scale (GAIS) was applied by an independent, non-participating plastic surgeon at 6 months after surgery. Prior to surgery and at the end of surgery, no statistically significant difference was observed between the groups in terms of auriculocephalic distances at the three levels. At the 1st and 6th months after surgery, auriculocephalic distances were significantly higher in Group 1. There were no significant differences in VAS results and GAIS values between the groups. The recurrence rate was 9.1% in Group 1 and 4.3% in Group 2. The suture extrusion rate was 18.2% in Group 1 and 13% in Group 2. Although there was a significant difference of 1-2 mm in auriculocephalic distances, our study showed that cartilage scoring is not mandatory to correct the prominent ear in pediatric patients with soft cartilages and to achieve patient and surgeon satisfaction.

  12. Vision correction via multi-layer pattern corneal surgery

    NASA Astrophysics Data System (ADS)

    Sun, Han-Yin; Wang, Hsiang-Chen; Yang, Shun-Fa

    2013-07-01

    With the rapid development of vision correction techniques, increasing numbers of people have undergone laser vision corrective surgery in recent years. The use of a laser scalpel instead of a traditional surgical knife reduces the size of the wound and quickens recovery after surgery. The primary objective of this article is to examine multi-layer swim-ring-shaped wave circle corneal surgery for vision correction through optical simulations using the Monte Carlo ray-tracing method. Presbyopia stems from the loss of flexibility of the crystalline lens due to aging of the eyeball. Diopter adjustment of a normal crystalline lens can reach 5 D; in the case of presbyopia, the adjustment is approximately 1 D, which leaves patients unable to see near objects clearly. Corneal laser surgery with multi-layer swim-ring-shaped wave circles was performed, which ablated multiple circles on the cornea to improve the flexibility of the crystalline lens. Simulation results showed that the accommodative ability of the crystalline lens increased tremendously from 1 D to 4 D. The method was also used to compare the images displayed on the retina before and after the treatment. The results clearly indicated a significant improvement in presbyopia symptoms with the use of this technique.

  13. Long-Term Maintenance of Pharmacists' Inhaler Technique Demonstration Skills

    PubMed Central

    Armour, Carol L; Reddel, Helen K; Bosnic-Anticevich, Sinthia Z

    2009-01-01

    Objective To assess the effectiveness of a single educational intervention, followed by patient education training, in pharmacists retaining their inhaler technique skills. Methods A convenience sample of 31 pharmacists attended an educational workshop and their inhaler techniques were assessed. Those randomly assigned to the active group were trained to assess and teach correct Turbuhaler and Diskus inhaler techniques to patients and provided with patient education tools to use in their pharmacies during a 6-month study. Control pharmacists delivered standard care. All pharmacists were reassessed 2 years after initial training. Results Thirty-one pharmacists participated in the study. At the initial assessment, few pharmacists demonstrated correct technique (Turbuhaler: 13%, Diskus: 6%). All pharmacists in the active group demonstrated correct technique following training. Two years later, pharmacists in the active group demonstrated significantly better inhaler technique than pharmacists in the control group (p < 0.05) for Turbuhaler and Diskus (83% vs. 11%; 75% vs. 11%, respectively). Conclusion Providing community pharmacists with effective patient education tools and encouraging their involvement in educating patients may contribute to pharmacists maintaining their competence in correct inhaler technique long-term. PMID:19513170

  14. Surgical correction of severe spinal deformities using a staged protocol of external and internal techniques.

    PubMed

    Prudnikova, Oksana G; Shchurova, Elena N

    2018-02-01

    There is a high risk of neurologic complications in one-stage management of severe rigid spinal deformities in adolescents. Therefore, gradual spine stretching variants are applied. One of them is the use of external transpedicular fixation. Our aim was to retrospectively study the outcomes of gradual correction with an apparatus for external transpedicular fixation followed by internal fixation used for high-grade kyphoscoliosis in adolescents. Twenty-five patients were reviewed (mean age, 15.1 ± 0.4 years). Correction was performed in two stages: 1) gradual controlled correction with the apparatus for external transpedicular fixation; and 2) internal posterior transpedicular fixation. Rigid deformities in eight patients required discapophysectomy. Clinical and radiographic study of the outcomes was conducted immediately after treatment and at a mean long-term period of 3.8 ± 0.4 years. Pain was evaluated using the visual analogue scale (VAS, 10 points). The Oswestry questionnaire (ODI scale) was used for functional assessment. Deformity correction with the external apparatus was 64.2 ± 4.6% in the main curve and 60.7 ± 3.7% in the compensatory one. It was 72.8 ± 4.1% and 66.2 ± 5.3% immediately after treatment and 70.8 ± 4.6% and 64.3 ± 4.2% at long term, respectively. Pain was relieved by 33.2 ± 4.2% (p < 0.05) immediately after treatment and by 55.6 ± 2.8% (p < 0.05) at long term. ODI was reduced by 30.2 ± 1.7% (p < 0.05) immediately after treatment and by 37.2 ± 1.6% (p < 0.05) at long term. The apparatus for external transpedicular fixation provides gradual controlled correction for high-grade kyphoscoliosis in adolescents. Transition to internal fixation preserves the correction achieved, and correction is maintained at long term.

  15. Partial fourier and parallel MR image reconstruction with integrated gradient nonlinearity correction.

    PubMed

    Tao, Shengzhen; Trzasko, Joshua D; Shu, Yunhong; Weavers, Paul T; Huston, John; Gray, Erin M; Bernstein, Matt A

    2016-06-01

    To describe how integrated gradient nonlinearity (GNL) correction can be used within noniterative partial Fourier (homodyne) and parallel (SENSE and GRAPPA) MR image reconstruction strategies, and demonstrate that performing GNL correction during, rather than after, these routines mitigates the image blurring and resolution loss caused by postreconstruction image-domain-based GNL correction. Starting from partial Fourier and parallel magnetic resonance imaging signal models that explicitly account for GNL, noniterative image reconstruction strategies for each accelerated acquisition technique are derived under the same core mathematical assumptions as their standard counterparts. A series of phantom and in vivo experiments on retrospectively undersampled data were performed to investigate the spatial resolution benefit of integrated GNL correction over conventional postreconstruction correction. Phantom and in vivo results demonstrate that the integrated GNL correction reduces the image blurring introduced by the conventional GNL correction, while still correcting GNL-induced coarse-scale geometrical distortion. Images generated from undersampled data using the proposed integrated GNL strategies offer superior depiction of fine image detail, for example, phantom resolution inserts and anatomical tissue boundaries. Noniterative partial Fourier and parallel imaging reconstruction methods with integrated GNL correction reduce the resolution loss that occurs during conventional postreconstruction GNL correction while preserving the computational efficiency of standard reconstruction techniques. Magn Reson Med 75:2534-2544, 2016. © 2015 Wiley Periodicals, Inc.

  16. Using dynamic mode decomposition for real-time background/foreground separation in video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutz, Jose Nathan; Grosek, Jacob; Brunton, Steven

    The technique of dynamic mode decomposition (DMD) is disclosed herein for the purpose of robustly separating video frames into background (low-rank) and foreground (sparse) components in real-time. Foreground/background separation is achieved at the computational cost of just one singular value decomposition (SVD) and one linear equation solve, thus producing results orders of magnitude faster than robust principal component analysis (RPCA). Additional techniques, including techniques for analyzing the video for multi-resolution time-scale components, and techniques for reusing computations to allow processing of streaming video in real time, are also described herein.
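
    The separation described in this record can be sketched in a few lines of NumPy: fit a linear operator that advances one frame to the next via an SVD, then assign the DMD modes whose temporal eigenvalues are near-stationary (|omega| close to 0) to the background and the residual to the foreground. The threshold and variable names are illustrative; the real system adds the multi-resolution and streaming optimizations mentioned above.

```python
import numpy as np

def dmd_background(frames, rank):
    """Split a pixels-by-time video matrix into a (near-)stationary background and
    a foreground residual via dynamic mode decomposition. Simplified sketch; rank
    must not exceed the numerical rank of the data."""
    X1, X2 = frames[:, :-1], frames[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)   # projected propagator
    eigvals, W = np.linalg.eig(Atilde)
    Phi = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W               # DMD modes
    omega = np.log(eigvals.astype(complex))                     # per-frame dynamics rates
    b = np.linalg.lstsq(Phi, frames[:, 0].astype(complex), rcond=None)[0]
    bg = np.abs(omega) < 1e-2            # background modes: nearly stationary dynamics
    t = np.arange(frames.shape[1])
    dynamics = b[bg, None] * np.exp(omega[bg, None] * t[None, :])
    background = np.real(Phi[:, bg] @ dynamics)
    return background, frames - background
```

    The cost profile claimed in the record follows from the structure: one SVD and one least-squares solve per batch of frames, rather than the iterative optimization RPCA requires.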

  17. Relativistic electron plasma oscillations in an inhomogeneous ion background

    NASA Astrophysics Data System (ADS)

    Karmakar, Mithun; Maity, Chandan; Chakrabarti, Nikhil

    2018-06-01

    The combined effect of relativistic electron mass variation and background ion inhomogeneity on the phase mixing process of large amplitude electron oscillations in cold plasmas has been analyzed by using Lagrangian coordinates. An inhomogeneity in the ion density is assumed to be time-independent but spatially periodic, and a periodic perturbation in the electron density is considered as well. An approximate space-time dependent solution is obtained in the weakly-relativistic limit by employing the Bogolyubov and Krylov method of averaging. It is shown that the phase mixing process of relativistically corrected electron oscillations is strongly influenced by the presence of a pre-existing ion density ripple in the plasma background.

  18. Interstellar cyanogen and the temperature of the cosmic microwave background radiation

    NASA Technical Reports Server (NTRS)

    Roth, Katherine C.; Meyer, David M.; Hawkins, Isabel

    1993-01-01

    We present the results of a recently completed effort to determine the amount of CN rotational excitation in five diffuse interstellar clouds for the purpose of accurately measuring the temperature of the cosmic microwave background radiation (CMBR). In addition, we report a new detection of emission from the strongest hyperfine component of the 2.64 mm CN rotational transition (N = 1-0) in the direction toward HD 21483. We have used this result in combination with existing emission measurements toward our other stars to correct for local excitation effects within diffuse clouds which raise the measured CN rotational temperature above that of the CMBR. After making this correction, we find a weighted mean value of T(CMBR) = 2.729 (+0.023, -0.031) K. This temperature is in excellent agreement with the new COBE measurement of 2.726 +/- 0.010 K (Mather et al., 1993). Our result, which samples the CMBR far from the near-Earth environment, attests to the accuracy of the COBE measurement and reaffirms the cosmic nature of this background radiation. From the observed agreement between our CMBR temperature and the COBE result, we conclude that corrections for local CN excitation based on millimeter emission measurements provide an accurate adjustment to the measured rotational excitation.
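
    The inference in this record rests on the Boltzmann relation between the CN N = 0 and N = 1 level populations: for the 2.64 mm transition, the transition energy corresponds to about 5.45 K, so the rotational temperature follows from the population ratio. A minimal sketch (the input ratio is hypothetical, and the paper's correction for local excitation within the clouds is not modeled):

```python
import math

H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s
K = 1.380649e-23      # Boltzmann constant, J/K

def excitation_temperature(ratio_n1_n0, wavelength_m=2.64e-3, g1_over_g0=3.0):
    """Rotational excitation temperature from the N=1 to N=0 population ratio,
    inverting N1/N0 = (g1/g0) * exp(-h*c / (wavelength * k * T))."""
    de_over_k = H * C / (wavelength_m * K)    # transition energy in kelvin (~5.45 K)
    return de_over_k / math.log(g1_over_g0 / ratio_n1_n0)
```

    A population ratio of roughly 0.41 yields a temperature close to the paper's 2.73 K; an excess above that implies local excitation, which is exactly what the millimeter emission measurements in the record are used to subtract.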

  19. Quantum Error Correction with Biased Noise

    NASA Astrophysics Data System (ADS)

    Brooks, Peter

    Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. 
Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled
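
    The payoff of tailoring a code to dephasing-biased noise (Chapter 3 above) already shows up in the simplest asymmetric construction: a length-n repetition code spent entirely on phase flips converts a physical dephasing rate p into a majority-vote logical rate. A toy calculation assuming independent errors and ideal decoding:

```python
from math import comb

def logical_phase_flip_rate(p, n=3):
    """Probability that majority-vote decoding fails for a length-n repetition
    code protecting against independent phase flips of probability p each."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))
```

    At p = 0.05 the n = 3 code gives 0.00725, roughly a sevenfold suppression, and longer codes suppress further; under strongly biased noise this redundancy need not be spent on the rare bit-flip errors it ignores.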

  20. Double-modulation spectroscopy of molecular ions - Eliminating the background in velocity-modulation spectroscopy

    NASA Technical Reports Server (NTRS)

    Lan, Guang; Tholl, Hans Dieter; Farley, John W.

    1991-01-01

    Velocity-modulation spectroscopy is an established technique for performing laser absorption spectroscopy of molecular ions in a discharge. However, such experiments are often plagued by a coherent background signal arising from emission from the discharge or from electronic pickup. Fluctuations in the background can obscure the desired signal. A simple technique using amplitude modulation of the laser and two lock-in amplifiers in series to detect the signal is demonstrated. The background and background fluctuations are thereby eliminated, facilitating the detection of molecular ions.
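
    The two-stage demodulation can be mimicked numerically: only the laser-induced absorption carries both the velocity modulation at f_v and the laser amplitude modulation at f_a, so it alone survives two lock-ins in series, while discharge emission at f_v is rejected at the second stage. All frequencies and amplitudes below are made-up illustration values, not parameters from the experiment.

```python
import numpy as np

fs = 100_000.0                        # sample rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
f_v, f_a = 1_000.0, 170.0             # velocity- and amplitude-modulation frequencies

ion = 2.0e-3                          # ion absorption, present only when the laser is on
backg = 5.0e-2                        # discharge emission/pickup at f_v, 25x larger

laser_on = 0.5 * (1.0 + np.sign(np.sin(2 * np.pi * f_a * t)))  # chopped laser, 0/1
detector = (laser_on * ion + backg) * np.sin(2 * np.pi * f_v * t)

# First lock-in: demodulate at f_v, low-pass with a one-period moving average.
mixed = detector * 2.0 * np.sin(2 * np.pi * f_v * t)
win = int(fs / f_v)
baseband = np.convolve(mixed, np.ones(win) / win, mode="same")

# Second lock-in: demodulate at f_a; the laser-independent background is now DC
# and averages to zero against the f_a reference.
recovered = np.mean(baseband * 2.0 * np.sin(2 * np.pi * f_a * t))
```

    The recovered value tracks the ion signal (scaled by the square-wave fundamental, 2/pi, and the filter response) even though the background is 25 times larger; stopping after the first lock-in would instead return the background plus half the ion signal.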

  1. Segmentation-free empirical beam hardening correction for CT.

    PubMed

    Schüller, Sören; Sawall, Stefan; Stannigel, Kai; Hülsbusch, Markus; Ulrici, Johannes; Hell, Erich; Kachelrieß, Marc

    2015-02-01

    The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for a general cupping, methods like water precorrection exist. They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other techniques of correction. If using only the information of one single energy scan, there are two types of corrections. The first one is a physical approach. Thereby, artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the used spectrum, the detector response, the physical attenuation and scatter properties of the intersected materials. A second method is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physical-based technique both rely on a segmentation of the present tissues inside the patient. The difficulty thereby is that beam hardening by itself, scatter, and other effects, which diminish the image quality, also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The herein proposed method works similarly to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data, which are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed in a way that no additional calibration or parameter fitting is needed. To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. This step is essential for the proposed

  2. Segmentation-free empirical beam hardening correction for CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schüller, Sören; Sawall, Stefan; Stannigel, Kai

    2015-02-15

    Purpose: The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for a general cupping, methods like water precorrection exist. They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other techniques of correction. If using only the information of one single energy scan, there are two types of corrections. The first one is a physical approach. Thereby, artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the used spectrum, the detector response, the physical attenuation and scatter properties of the intersected materials. A second method is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physical-based technique both rely on a segmentation of the present tissues inside the patient. The difficulty thereby is that beam hardening by itself, scatter, and other effects, which diminish the image quality, also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The herein proposed method works similarly to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data, which are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed in a way that no additional calibration or parameter fitting is needed. Methods: To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. 
This step is essential

  3. Efficacy of Distortion Correction on Diffusion Imaging: Comparison of FSL Eddy and Eddy_Correct Using 30 and 60 Directions Diffusion Encoding

    PubMed Central

    Yamada, Haruyasu; Abe, Osamu; Shizukuishi, Takashi; Kikuta, Junko; Shinozaki, Takahiro; Dezawa, Ko; Nagano, Akira; Matsuda, Masayuki; Haradome, Hiroki; Imamura, Yoshiki

    2014-01-01

    Diffusion imaging is a unique noninvasive tool to detect brain white matter trajectory and integrity in vivo. However, this technique suffers from spatial distortion and signal pileup or dropout originating from local susceptibility gradients and eddy currents. Although there are several methods to mitigate these problems, most techniques can be applicable either to susceptibility or eddy-current induced distortion alone with a few exceptions. The present study compared the correction efficiency of FSL tools, “eddy_correct” and the combination of “eddy” and “topup” in terms of diffusion-derived fractional anisotropy (FA). The brain diffusion images were acquired from 10 healthy subjects using 30 and 60 directions encoding schemes based on the electrostatic repulsive forces. For the 30 directions encoding, 2 sets of diffusion images were acquired with the same parameters, except for the phase-encode blips which had opposing polarities along the anteroposterior direction. For the 60 directions encoding, non–diffusion-weighted and diffusion-weighted images were obtained with forward phase-encoding blips and non–diffusion-weighted images with the same parameter, except for the phase-encode blips, which had opposing polarities. FA images without and with distortion correction were compared in a voxel-wise manner with tract-based spatial statistics. We showed that images corrected with eddy and topup possessed higher FA values than images uncorrected and corrected with eddy_correct with trilinear (FSL default setting) or spline interpolation in most white matter skeletons, using both encoding schemes. Furthermore, the 60 directions encoding scheme was superior as measured by increased FA values to the 30 directions encoding scheme, despite comparable acquisition time. This study supports the combination of eddy and topup as a superior correction tool in diffusion imaging rather than the eddy_correct tool, especially with trilinear interpolation, using 60 directions

  4. Generalized algebraic scene-based nonuniformity correction algorithm.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott

    2005-02-01

    A generalization of a recently developed algebraic scene-based nonuniformity correction algorithm for focal plane array (FPA) sensors is presented. The new technique uses pairs of image frames exhibiting arbitrary one- or two-dimensional translational motion to compute compensator quantities that are then used to remove nonuniformity in the bias of the FPA response. Unlike its predecessor, the generalization does not require the use of either a blackbody calibration target or a shutter. The algorithm has a low computational overhead, lending itself to real-time hardware implementation. The high-quality correction ability of this technique is demonstrated through application to real IR data from both cooled and uncooled infrared FPAs. A theoretical and experimental error analysis is performed to study the accuracy of the bias compensator estimates in the presence of two main sources of error.
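    The bias-difference bookkeeping behind such algebraic methods can be illustrated with a toy 1-D sketch (function and variable names are hypothetical; the actual algorithm handles two-dimensional, arbitrary translations and error propagation): two frames of the same scene, shifted by one detector pixel, differ only through the fixed-pattern bias, so chaining pairwise differences recovers every bias up to a global offset.

    ```python
    def estimate_bias_1d(frame1, frame2):
        """Toy algebraic bias estimation for a 1-D detector array.

        Assumes frame2 views the same scene shifted by exactly +1 pixel:
            frame1[i]     = scene[i] + bias[i]
            frame2[i + 1] = scene[i] + bias[i + 1]
        so frame1[i] - frame2[i + 1] = bias[i] - bias[i + 1], and the biases
        follow by accumulation (the global offset is unobservable).
        """
        n = len(frame1)
        bias = [0.0] * n  # fix bias[0] = 0; all other biases are relative to it
        for i in range(n - 1):
            diff = frame1[i] - frame2[i + 1]  # = bias[i] - bias[i + 1]
            bias[i + 1] = bias[i] - diff
        return bias
    ```

    Correcting a frame then amounts to subtracting the estimated bias; a real FPA implementation uses many frame pairs and estimated (not assumed) shifts.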

  5. Measurement of θ 13 in Double Chooz using neutron captures on hydrogen with novel background rejection techniques

    DOE PAGES

    Abe, Y.; Appel, S.; Abrahão, T.; ...

    2016-01-27

    We report a measurement, by the Double Chooz collaboration, of the neutrino mixing angle θ13 using reactor ν̄e observed via the inverse beta decay reaction in which the neutron is captured on hydrogen. The measurement is based on 462.72 live days of data, approximately twice as much as in the previous such analysis, collected with a detector positioned at an average distance of 1050 m from two reactor cores. Several novel techniques have been developed to achieve significant reductions of the backgrounds and systematic uncertainties. Accidental coincidences, the dominant background in this analysis, are suppressed by more than an order of magnitude with respect to our previous publication by a multi-variate analysis. Furthermore, these improvements demonstrate the capability of precisely measuring reactor ν̄e without gadolinium loading. Spectral distortions from the reactor ν̄e flux predictions previously reported with the neutron capture on gadolinium events are confirmed in the independent data sample presented here. A value of sin²2θ13 = 0.095 +0.038/−0.039 (stat+syst) is obtained from a fit to the observed event rate as a function of the reactor power, a method insensitive to the energy spectrum shape. A simultaneous fit of the hydrogen capture events and of the gadolinium capture events yields a measurement of sin²2θ13 = 0.088 ± 0.033 (stat+syst).

  6. High speed wavefront sensorless aberration correction in digital micromirror based confocal microscopy.

    PubMed

    Pozzi, P; Wilding, D; Soloviev, O; Verstraete, H; Bliek, L; Vdovin, G; Verhaegen, M

    2017-01-23

    The quality of fluorescence microscopy images is often impaired by sample-induced optical aberrations. Adaptive optical elements such as deformable mirrors or spatial light modulators can be used to correct aberrations. However, previously reported techniques either require special sample preparation or rely on time-consuming optimization procedures that correct only static aberrations. This paper reports a technique for optical-sectioning fluorescence microscopy capable of correcting dynamic aberrations in any fluorescent sample during the acquisition. This is achieved by implementing adaptive optics in an unconventional confocal microscopy setup with multiple programmable confocal apertures, in which out-of-focus light can be detected separately and used to optimize the correction performance at a sampling frequency an order of magnitude faster than the imaging rate of the system. The paper reports results comparing the correction performance to traditional image-optimization algorithms, and demonstrates how the system can compensate for dynamic changes in the aberrations, such as those introduced during a focal-stack acquisition through a thick sample.

  7. Hybrid two-dimensional navigator correction: a new technique to suppress respiratory-induced physiological noise in multi-shot echo-planar functional MRI

    PubMed Central

    Barry, Robert L.; Klassen, L. Martyn; Williams, Joy M.; Menon, Ravi S.

    2008-01-01

    A troublesome source of physiological noise in functional magnetic resonance imaging (fMRI) is due to the spatio-temporal modulation of the magnetic field in the brain caused by normal subject respiration. fMRI data acquired using echo-planar imaging is very sensitive to these respiratory-induced frequency offsets, which cause significant geometric distortions in images. Because these effects increase with main magnetic field, they can nullify the gains in statistical power expected by the use of higher magnetic fields. As a study of existing navigator correction techniques for echo-planar fMRI has shown that further improvements can be made in the suppression of respiratory-induced physiological noise, a new hybrid two-dimensional (2D) navigator is proposed. Using a priori knowledge of the slow spatial variations of these induced frequency offsets, 2D field maps are constructed for each shot using spatial frequencies between ±0.5 cm−1 in k-space. For multi-shot fMRI experiments, we estimate that the improvement of hybrid 2D navigator correction over the best performance of one-dimensional navigator echo correction translates into a 15% increase in the volume of activation, 6% and 10% increases in the maximum and average t-statistics, respectively, for regions with high t-statistics, and 71% and 56% increases in the maximum and average t-statistics, respectively, in regions with low t-statistics due to contamination by residual physiological noise. PMID:18024159

  8. Nonrelativistic fluids on scale covariant Newton-Cartan backgrounds

    NASA Astrophysics Data System (ADS)

    Mitra, Arpita

    2017-12-01

    The nonrelativistic covariant framework for fields is extended to investigate fields and fluids on scale covariant curved backgrounds. The scale covariant Newton-Cartan background is constructed using the localization of space-time symmetries of nonrelativistic fields in flat space. Following this, we provide a Weyl covariant formalism that can be used to study scale invariant fluids. Considering an ideal fluid as an example, we describe its thermodynamic and hydrodynamic properties and explicitly demonstrate that it satisfies the local second law of thermodynamics. As a further application, we consider the low energy description of Hall fluids. Specifically, we find that the gauge fields for scale transformations lead to corrections of the Wen-Zee and Berry phase terms contained in the effective action.

  9. Scene-based nonuniformity correction technique that exploits knowledge of the focal-plane array readout architecture.

    PubMed

    Narayanan, Balaji; Hardie, Russell C; Muse, Robert A

    2005-06-10

    Spatial fixed-pattern noise is a common and major problem in modern infrared imagers owing to the nonuniform response of the photodiodes in the focal plane array of the imaging system. In addition, the nonuniform response of the readout and digitization electronics, which multiplex the signals from the photodiodes, causes further nonuniformity. We describe a novel scene-based nonuniformity correction algorithm that treats the aggregate nonuniformity in separate stages. First, the nonuniformity from the readout amplifiers is corrected by use of knowledge of the readout architecture of the imaging system. Second, the nonuniformity arising from the individual detectors is corrected with a nonlinear filter-based method. We demonstrate the performance of the proposed algorithm by applying it to simulated imagery and real infrared data. Quantitative results in terms of the mean absolute error and the signal-to-noise ratio are also presented to demonstrate the efficacy of the proposed algorithm. One advantage of the proposed algorithm is that it requires only a few frames to obtain high-quality corrections.

  10. Non-perturbative background field calculations

    NASA Astrophysics Data System (ADS)

    Stephens, C. R.

    1988-01-01

    New methods are developed for calculating one loop functional determinants in quantum field theory. Instead of relying on a calculation of all the eigenvalues of the small fluctuation equation, these techniques exploit the ability of the proper time formalism to reformulate an infinite dimensional field theoretic problem into a finite dimensional covariant quantum mechanical analog, thereby allowing powerful tools such as the method of Jacobi fields to be used advantageously in a field theory setting. More generally the methods developed herein should be extremely valuable when calculating quantum processes in non-constant background fields, offering a utilitarian alternative to the two standard methods of calculation—perturbation theory in the background field or taking the background field into account exactly. The formalism developed also allows for the approximate calculation of covariances of partial differential equations from a knowledge of the solutions of a homogeneous ordinary differential equation.

  11. High-resolution subgrid models: background, grid generation, and implementation

    NASA Astrophysics Data System (ADS)

    Sehili, Aissa; Lang, Günther; Lippert, Christoph

    2014-04-01

    The basic idea of subgrid models is the use of available high-resolution bathymetric data at subgrid level in computations that are performed on relatively coarse grids, allowing large time steps. For that purpose, an algorithm that correctly represents the precise mass balance in regions where wetting and drying occur was derived by Casulli (Int J Numer Method Fluids 60:391-408, 2009) and Casulli and Stelling (Int J Numer Method Fluids 67:441-449, 2010). Computational grid cells are permitted to be wet, partially wet, or dry, and no drying threshold is needed. Based on the subgrid technique, practical applications involving various scenarios were implemented, including an operational forecast model for water level, salinity, and temperature of the Elbe Estuary in Germany. The grid generation procedure allows detailed boundary fitting at subgrid level. The computational grid is made of flow-aligned quadrilaterals, including a few triangles where necessary. User-defined grid subdivision at subgrid level allows a correct representation of the volume up to measurement accuracy. Bottom friction requires particular treatment; based on the conveyance approach, an appropriate empirical correction was worked out. The aforementioned features make the subgrid technique very efficient, robust, and accurate. Comparison of predicted water levels with those of a comparably highly resolved classical unstructured-grid model shows very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out in less than 10 min on standard PC-like hardware. The subgrid technique is therefore a promising framework for accurate temporal and spatial large-scale simulations of coastal and estuarine flow and transport processes at low computational cost.
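    The volume bookkeeping at the heart of the technique can be sketched in a few lines (a schematic illustration, not the Casulli scheme itself): a coarse cell's wet volume is accumulated over its subgrid bathymetry, so the cell is naturally wet, partially wet, or dry with no drying threshold.

    ```python
    def wet_cell_volume(eta, subgrid_bed_levels, sub_area):
        """Wet volume of one coarse cell for free-surface elevation eta.

        Each subgrid pixel with bed level z_b contributes max(0, eta - z_b) * area,
        so partially wet cells are handled exactly and drying needs no threshold.
        """
        return sum(max(0.0, eta - z_b) * sub_area for z_b in subgrid_bed_levels)
    ```

    In the full scheme this volume-elevation relation enters the mass-conservation equation of every coarse cell, which is what preserves the precise mass balance during wetting and drying.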

  12. Quantum annealing correction with minor embedding

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Albash, Tameem; Paz-Silva, Gerardo; Hen, Itay; Lidar, Daniel A.

    2015-10-01

    Quantum annealing provides a promising route for the development of quantum optimization devices, but the usefulness of such devices will be limited in part by the range of implementable problems as dictated by hardware constraints. To overcome constraints imposed by restricted connectivity between qubits, a larger set of interactions can be approximated using minor embedding techniques whereby several physical qubits are used to represent a single logical qubit. However, minor embedding introduces new types of errors due to its approximate nature. We introduce and study quantum annealing correction schemes designed to improve the performance of quantum annealers in conjunction with minor embedding, thus leading to a hybrid scheme defined over an encoded graph. We argue that this scheme can be efficiently decoded using an energy minimization technique provided the density of errors does not exceed the per-site percolation threshold of the encoded graph. We test the hybrid scheme using a D-Wave Two processor on problems for which the encoded graph is a two-level grid and the Ising model is known to be NP-hard. The problems we consider are frustrated Ising model problem instances with "planted" (a priori known) solutions. Applied in conjunction with optimized energy penalties and decoding techniques, we find that this approach enables the quantum annealer to solve minor embedded instances with significantly higher success probability than it would without error correction. Our work demonstrates that quantum annealing correction can and should be used to improve the robustness of quantum annealing not only for natively embeddable problems but also when minor embedding is used to extend the connectivity of physical devices.
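    The decoding step can be illustrated with the simplest strategy, a majority vote over each logical qubit's chain of physical qubits (the paper itself advocates energy-minimization decoding; this sketch and its names are illustrative only):

    ```python
    def majority_vote_decode(sample, chains):
        """Decode a physical spin sample (+1/-1 per physical qubit index)
        into logical spins.

        chains maps each logical qubit to the list of physical qubits that
        represent it in the minor embedding; ties are broken toward -1 here,
        a deterministic toy choice.
        """
        logical = {}
        for lq, phys in chains.items():
            vote = sum(sample[p] for p in phys)
            logical[lq] = 1 if vote > 0 else -1
        return logical
    ```

    Energy-minimization decoding replaces the per-chain vote with a search for the lowest-energy logical configuration consistent with the broken chains, which is what makes decoding reliable up to the percolation threshold discussed above.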

  13. Accelerated Slice Encoding for Metal Artifact Correction

    PubMed Central

    Hargreaves, Brian A.; Chen, Weitian; Lu, Wenmiao; Alley, Marcus T.; Gold, Garry E.; Brau, Anja C. S.; Pauly, John M.; Pauly, Kim Butts

    2010-01-01

    Purpose: To demonstrate accelerated imaging with artifact reduction near metallic implants and different contrast mechanisms. Materials and Methods: Slice-encoding for metal artifact correction (SEMAC) is a modified spin echo sequence that uses view-angle tilting and slice-direction phase encoding to correct both in-plane and through-plane artifacts. Standard spin echo trains and short-TI inversion recovery (STIR) allow efficient PD-weighted imaging with optional fat suppression. A completely linear reconstruction allows incorporation of parallel imaging and partial Fourier imaging. The SNR effects of all reconstructions were quantified in one subject. 10 subjects with different metallic implants were scanned using SEMAC protocols, all with scan times below 11 minutes, as well as with standard spin echo methods. Results: The SNR using standard acceleration techniques is unaffected by the linear SEMAC reconstruction. In all cases with implants, accelerated SEMAC significantly reduced artifacts compared with standard imaging techniques, with no additional artifacts from acceleration techniques. The use of different contrast mechanisms allowed differentiation of fluid from other structures in several subjects. Conclusion: SEMAC imaging can be combined with standard echo-train imaging, parallel imaging, partial-Fourier imaging and inversion recovery techniques to offer flexible image contrast with a dramatic reduction of metal-induced artifacts in scan times under 11 minutes. PMID:20373445

  14. Accelerated slice encoding for metal artifact correction.

    PubMed

    Hargreaves, Brian A; Chen, Weitian; Lu, Wenmiao; Alley, Marcus T; Gold, Garry E; Brau, Anja C S; Pauly, John M; Pauly, Kim Butts

    2010-04-01

    To demonstrate accelerated imaging with both artifact reduction and different contrast mechanisms near metallic implants. Slice-encoding for metal artifact correction (SEMAC) is a modified spin echo sequence that uses view-angle tilting and slice-direction phase encoding to correct both in-plane and through-plane artifacts. Standard spin echo trains and short-TI inversion recovery (STIR) allow efficient PD-weighted imaging with optional fat suppression. A completely linear reconstruction allows incorporation of parallel imaging and partial Fourier imaging. The signal-to-noise ratio (SNR) effects of all reconstructions were quantified in one subject. Ten subjects with different metallic implants were scanned using SEMAC protocols, all with scan times below 11 minutes, as well as with standard spin echo methods. The SNR using standard acceleration techniques is unaffected by the linear SEMAC reconstruction. In all cases with implants, accelerated SEMAC significantly reduced artifacts compared with standard imaging techniques, with no additional artifacts from acceleration techniques. The use of different contrast mechanisms allowed differentiation of fluid from other structures in several subjects. SEMAC imaging can be combined with standard echo-train imaging, parallel imaging, partial-Fourier imaging, and inversion recovery techniques to offer flexible image contrast with a dramatic reduction of metal-induced artifacts in scan times under 11 minutes. (c) 2010 Wiley-Liss, Inc.

  15. Parameter Sensitivity Study of the Wall Interference Correction System (WICS)

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Everhart, Joel L.; Iyer, Venkit

    2001-01-01

    An off-line version of the Wall Interference Correction System (WICS) has been implemented for the NASA Langley National Transonic Facility. The correction capability is currently restricted to corrections for solid-wall interference in the model pitch plane for Mach numbers less than 0.45, due to a limitation in tunnel calibration data. A study to assess output sensitivity to the aerodynamic parameters of Reynolds number and Mach number was conducted on this code to further ensure quality during the correction process. In addition, this paper includes an investigation into possible corrections due to a semispan test technique using a non-metric standoff, and an improvement to the standard data rejection algorithm.

  16. Closed anterior scoring for prominent-ear correction revisited.

    PubMed

    Thomas, S S; Fatah, F

    2001-10-01

    The closed-anterior-scoring technique has been used over the past 3 years to correct 56 prominent ears in 32 patients at the West Midlands Regional Plastic Surgery Unit at Wordsley Hospital. A review was carried out to assess the result of this surgical procedure. We briefly discuss the historical development of other surgical techniques for prominent-ear correction, and describe in detail the operative technique for this procedure, which includes closed scoring and suturing of the cartilage. We used this technique to treat 24 patients with bilateral prominent ears and eight patients with unilateral prominent ears. The series comprised 20 females and 12 males, 26 children and six adults. The age range was from 4 to 24 years old. There were two complications (an upper-pole recurrence and protrusion of a buried prolene suture). Patients were followed up for between 6 months and 3 years (mean: 1.5 years). This procedure is quick and technically easy to learn, with no anterior scars or posterior cartilage overlap. Minimal dissection is involved, leading to a low rate of complications. The learning curve is rapid; this paper represents the experience of a specialist trainee (SST) after he was taught the technique by the senior author. Copyright 2001 The British Association of Plastic Surgeons.

  17. Error Correction: A Cognitive-Affective Stance

    ERIC Educational Resources Information Center

    Saeed, Aziz Thabit

    2007-01-01

    This paper investigates the application of some of the most frequently used writing error correction techniques to see the extent to which this application takes learners' cognitive and affective characteristics into account. After showing how unlearned application of these styles could be discouraging and/or damaging to students, the paper…

  18. A deconvolution technique to correct deep images of galaxies from instrumental scattered light

    NASA Astrophysics Data System (ADS)

    Karabal, E.; Duc, P.-A.; Kuntschner, H.; Chanial, P.; Cuillandre, J.-C.; Gwyn, S.

    2017-05-01

    Deep imaging of the diffuse light that is emitted by stellar fine structures and outer halos around galaxies is now often used to probe their past mass assembly. Because the extended halos survive longer than the relatively fragile tidal features, they trace more ancient mergers. We use images that reach surface brightness limits as low as 28.5-29 mag arcsec−2 (g-band) to obtain light and color profiles up to 5-10 effective radii for a sample of nearby early-type galaxies. The images were acquired with MegaCam as part of the CFHT MATLAS large programme. These profiles may be compared to those produced using simulations of galaxy formation and evolution, once corrected for instrumental effects. Indeed, they can be heavily contaminated by the scattered light caused by internal reflections within the instrument. In particular, the nucleus of a galaxy generates artificial flux in the outer halo, which has to be precisely subtracted. We present a deconvolution technique, making use of very large kernels, to remove these artificial halos. The technique, which is based on PyOperators, is more time-efficient than the model-convolution methods that are also used for that purpose. This is especially the case for galaxies with complex structures that are hard to model. Having a good knowledge of the point spread function (PSF), including its outer wings, is critical for the method. A database of MegaCam PSF models corresponding to different seeing conditions and bands was generated directly from the deep images. We show that the difference between the PSFs in different bands causes artificial changes in the color profiles, in particular a reddening of the outskirts of galaxies with a bright nucleus. The method is validated on a set of simulated images and applied to three representative test cases: NGC 3599, NGC 3489, and NGC 4274, two of which exhibit a prominent ghost halo that the method successfully removes. The library of PSFs (FITS files) is only available at the
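    The paper's large-kernel deconvolution is built on PyOperators; purely as a generic illustration of PSF deconvolution, here is a minimal 1-D Richardson-Lucy iteration (a standard textbook scheme, not the authors' implementation):

    ```python
    def convolve(signal, kernel):
        """Direct 'same'-size convolution with zero padding at the edges."""
        n, m = len(signal), len(kernel)
        half = m // 2
        return [
            sum(signal[i + j - half] * kernel[j]
                for j in range(m) if 0 <= i + j - half < n)
            for i in range(n)
        ]

    def richardson_lucy(observed, psf, iterations=50):
        """Multiplicative Richardson-Lucy deconvolution (keeps estimates nonnegative)."""
        estimate = [max(v, 1e-12) for v in observed]
        psf_mirror = psf[::-1]
        for _ in range(iterations):
            blurred = convolve(estimate, psf)
            ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
            correction = convolve(ratio, psf_mirror)
            estimate = [e * c for e, c in zip(estimate, correction)]
        return estimate
    ```

    On a noiseless blurred spike this iteration re-concentrates the flux that the PSF spread into the "halo"; the real problem differs in using measured 2-D MegaCam PSFs with extended wings, which is why the kernels must be very large.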

  19. B1- non-uniformity correction of phased-array coils without measuring coil sensitivity.

    PubMed

    Damen, Frederick C; Cai, Kejia

    2018-04-18

    Parallel imaging can be used to increase SNR and shorten acquisition times, albeit at the cost of image non-uniformity. B1⁻ non-uniformity correction techniques are confounded by signal that varies not only with coil-induced B1⁻ sensitivity but also with the object's own intrinsic signal. Herein, we propose a method that makes minimal assumptions and uses only the coil images themselves to produce a single combined B1⁻ non-uniformity-corrected complex image with the highest available SNR. A novel background-noise classifier is used to select voxels of sufficient quality to avoid the need for regularization. Unique properties of the magnitude and phase were used to reduce the B1⁻ sensitivity to two joint additive models for estimation of the B1⁻ inhomogeneity. The complementary corruption of the imaged object across the coil images is used to abate individual coil correction imperfections. Results are presented for two anatomical cases: (a) an abdominal image that is challenging in both extreme B1⁻ sensitivity and intrinsic tissue signal variation, and (b) a brain image with moderate B1⁻ sensitivity and intrinsic tissue signal variation. A new relative Signal-to-Noise Ratio (rSNR) quality metric is proposed to evaluate the performance of the proposed method and the RF receiving coil array. The proposed method has been shown to be robust to imaged objects with widely inhomogeneous intrinsic signal, and resilient to poorly performing coil elements. Copyright © 2018. Published by Elsevier Inc.

  20. Correcting the planar perspective projection in geometric structures applied to forensic facial analysis.

    PubMed

    Baldasso, Rosane Pérez; Tinoco, Rachel Lima Ribeiro; Vieira, Cristina Saft Matos; Fernandes, Mário Marques; Oliveira, Rogério Nogueira

    2016-10-01

    The process of forensic facial analysis may be founded on several scientific techniques and imaging modalities, such as digital signal processing, photogrammetry and craniofacial anthropometry. However, one of the main limitations in this analysis is the comparison of images acquired at different angles of incidence. The present study aimed to explore a potential approach for the correction of the planar perspective projection (PPP) in geometric structures traced from the human face. A technique for the correction of the PPP was calibrated on photographs of two geometric structures obtained with angles of incidence distorted at 80°, 60° and 45°. The technique was performed using ImageJ® 1.46r (National Institutes of Health, Bethesda, Maryland). The corrected images were compared with photographs of the same object obtained at 90° (reference). In a second step, the technique was validated on a digital human face created using MakeHuman® 1.0.2 (Free Software Foundation, Massachusetts, USA) and Blender® 2.75 (Blender® Foundation, Amsterdam, Netherlands) software packages. The images registered with angular distortion presented a gradual decrease in height when compared to the reference. The digital technique for the correction of the PPP is a valuable tool for forensic applications using photographic imaging modalities, such as forensic facial analysis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
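    A minimal geometric model of the reported height decrease can be written down directly (the function name is hypothetical, and the paper's ImageJ-based procedure corrects the full planar projection rather than a single dimension): a planar length photographed at incidence angle θ, with 90° being the frontal view, appears scaled by sin θ, so dividing the measurement by sin θ restores the frontal value.

    ```python
    import math

    def correct_foreshortened_length(measured, incidence_deg):
        """Invert the sin(theta) foreshortening of a planar length measured
        at incidence angle theta (90 degrees = frontal reference view)."""
        return measured / math.sin(math.radians(incidence_deg))
    ```

    Under this toy model, the 80°, 60° and 45° views of a 100-unit feature would measure about 98.5, 86.6 and 70.7 units, matching the qualitative trend of a gradual height decrease with angular distortion.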

  1. Occupations at Case Closure for Vocational Rehabilitation Applicants with Criminal Backgrounds

    ERIC Educational Resources Information Center

    Whitfield, Harold Wayne

    2009-01-01

    The purpose of this study was to identify industries that hire persons with disabilities and criminal backgrounds. The researcher obtained data on 1,355 applicants for vocational rehabilitation services who were living in adult correctional facilities at the time of application. Service-based industries hired the most ex-inmates with disabilities…

  2. Evaluation of different distortion correction methods and interpolation techniques for an automated classification of celiac disease

    PubMed Central

    Gadermayr, M.; Liedlgruber, M.; Uhl, A.; Vécsei, A.

    2013-01-01

    Due to the optics used in endoscopes, a typical degradation observed in endoscopic images are barrel-type distortions. In this work we investigate the impact of methods used to correct such distortions in images on the classification accuracy in the context of automated celiac disease classification. For this purpose we compare various different distortion correction methods and apply them to endoscopic images, which are subsequently classified. Since the interpolation used in such methods is also assumed to have an influence on the resulting classification accuracies, we also investigate different interpolation methods and their impact on the classification performance. In order to be able to make solid statements about the benefit of distortion correction we use various different feature extraction methods used to obtain features for the classification. Our experiments show that it is not possible to make a clear statement about the usefulness of distortion correction methods in the context of an automated diagnosis of celiac disease. This is mainly due to the fact that an eventual benefit of distortion correction highly depends on the feature extraction method used for the classification. PMID:23981585

  3. A novel background field removal method for MRI using projection onto dipole fields (PDF).

    PubMed

    Liu, Tian; Khalidov, Ildar; de Rochefort, Ludovic; Spincemaille, Pascal; Liu, Jing; Tsiouris, A John; Wang, Yi

    2011-11-01

    For optimal image quality in susceptibility-weighted imaging and accurate quantification of susceptibility, it is necessary to isolate the local field generated by local magnetic sources (such as iron) from the background field that arises from imperfect shimming and variations in magnetic susceptibility of surrounding tissues (including air). Previous background removal techniques have limited effectiveness depending on the accuracy of model assumptions or information input. In this article, we report an observation that the magnetic field for a dipole outside a given region of interest (ROI) is approximately orthogonal to the magnetic field of a dipole inside the ROI. Accordingly, we propose a nonparametric background field removal technique based on projection onto dipole fields (PDF). In this PDF technique, the background field inside an ROI is decomposed into a field originating from dipoles outside the ROI using the projection theorem in Hilbert space. This novel PDF background removal technique was validated on a numerical simulation and a phantom experiment and was applied in human brain imaging, demonstrating substantial improvement in background field removal compared with the commonly used high-pass filtering method. Copyright © 2011 John Wiley & Sons, Ltd.
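    The projection idea can be sketched in one dimension (a schematic with toy kernels and invented helper names, not the paper's 3-D dipole fields): the background inside the ROI is fit, by least squares, as a combination of fields generated by sources outside the ROI, and that fitted component is subtracted.

    ```python
    def solve_linear(A, b):
        """Gaussian elimination with partial pivoting for a small dense system."""
        n = len(b)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for col in range(n):
            pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[pivot] = M[pivot], M[col]
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
        return x

    def remove_background(field, xs, outside_positions):
        """Least-squares projection of the measured field onto toy 1/(x - p)^3
        kernels of sources outside the ROI; the fitted part is subtracted."""
        basis = [[1.0 / (x - p) ** 3 for x in xs] for p in outside_positions]
        m = len(basis)
        # Normal equations (B B^T) c = B f for the projection coefficients c.
        A = [[sum(bi[k] * bj[k] for k in range(len(xs))) for bj in basis] for bi in basis]
        rhs = [sum(bi[k] * field[k] for k in range(len(xs))) for bi in basis]
        coeff = solve_linear(A, rhs)
        background = [sum(coeff[j] * basis[j][k] for j in range(m)) for k in range(len(xs))]
        return [f - b for f, b in zip(field, background)]
    ```

    When the measured field lies entirely in the span of the external-source fields, the projection removes it exactly; the approximate orthogonality noted in the abstract is what keeps the local field largely untouched by this subtraction.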

  4. Fringe Capacitance Correction for a Coaxial Soil Cell

    PubMed Central

    Pelletier, Mathew G.; Viera, Joseph A.; Schwartz, Robert C.; Lascano, Robert J.; Evett, Steven R.; Green, Tim R.; Wanjura, John D.; Holt, Greg A.

    2011-01-01

    Accurate measurement of moisture content is a prime requirement in hydrological, geophysical and biogeochemical research as well as for material characterization and process control. Within these areas, accurate measurement of surface area and bound water content is becoming increasingly important for providing answers to many fundamental questions, ranging from characterization of cotton fiber maturity, to accurate characterization of soil water content in soil water conservation research, to bio-plant water utilization, to chemical reactions and diffusion of ionic species across membranes in cells as well as in the dense suspensions that occur in surface films. One promising technique to address the increasing demand for higher-accuracy water content measurements is electrical permittivity characterization of materials. This technique has enjoyed a strong following in the soil-science and geological community through measurements of apparent permittivity via time-domain reflectometry (TDR), as well as in many process control applications. Recent research, however, indicates a need to increase the accuracy beyond that available from traditional TDR. The most logical pathway then becomes a transition from TDR-based measurements to network-analyzer measurements of absolute permittivity, which removes the adverse effects that high-surface-area soils and conductivity impart onto measurements of apparent permittivity in traditional TDR applications. This research examines an observed experimental error in the coaxial probe, from which the modern TDR probe originated, hypothesized to be due to fringe capacitance. The research provides an experimental and theoretical basis for the cause of the error and a technique by which to correct the system to remove this source of error. To test this theory, a Poisson model of a coaxial cell was formulated to calculate the effective theoretical extra length caused by the fringe capacitance.
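    The correction strategy can be sketched with a lumped-element toy model (a deliberate simplification of the effective-extra-length analysis: the fringe field is folded into one parallel capacitance, and the reference values below are hypothetical): calibrating with two fluids of known permittivity separates the cell constant from the fringe term, after which measured capacitances invert cleanly to permittivity.

    ```python
    def calibrate_cell(c1, eps1, c2, eps2):
        """Fit the toy model C = eps * C_cell + C_fringe from two reference fluids
        of known relative permittivity (eps1, eps2)."""
        c_cell = (c1 - c2) / (eps1 - eps2)
        c_fringe = c1 - eps1 * c_cell
        return c_cell, c_fringe

    def permittivity(c_measured, c_cell, c_fringe):
        """Invert the calibrated model: remove the fringe term, then scale."""
        return (c_measured - c_fringe) / c_cell
    ```

    In practice the fringe contribution is frequency- and geometry-dependent, which is why the paper derives it from a Poisson model of the cell rather than a single lumped constant.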

  5. Using Correction Codes to Enhance Understanding of 4-Parts Harmony

    ERIC Educational Resources Information Center

    Engür, Doruk

    2018-01-01

    Effective ways of correcting errors in teaching musical harmony have been neglected. Making students realize their mistakes, and having them think the mistakes over, is assumed to be helpful in harmony teaching. In this sense, the correction-code technique is thought to help students realize their mistakes and solve them on their own. Forty…

  6. Training Correctional Educators: A Needs Assessment Study.

    ERIC Educational Resources Information Center

    Jurich, Sonia; Casper, Marta; Hull, Kim A.

    2001-01-01

    Focus groups and a training needs survey of Virginia correctional educators identified educational philosophy, communication skills, human behavior, and teaching techniques as topics of interest. Classroom observations identified additional areas: teacher isolation, multiple challenges, absence of grade structure, and safety constraints. (Contains…

  7. 77 FR 72199 - Technical Corrections; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-05

    ... changes. This correcting amendment is necessary to correct the statutory authority that is cited in one of... is necessary to correct the statutory authority that is cited in the authority citation for part 171...

  8. Wavelength modulated surface enhanced (resonance) Raman scattering for background-free detection.

    PubMed

    Praveen, Bavishna B; Steuwe, Christian; Mazilu, Michael; Dholakia, Kishan; Mahajan, Sumeet

    2013-05-21

    Spectra in surface-enhanced Raman scattering (SERS) are always accompanied by a continuum emission called the 'background' which complicates analysis and is especially problematic for quantification and automation. Here, we implement a wavelength modulation technique to eliminate the background in SERS and its resonant version, surface-enhanced resonance Raman scattering (SERRS). This is demonstrated on various nanostructured substrates used for SER(R)S. An enhancement in the signal to noise ratio for the Raman bands of the probe molecules is also observed. This technique helps to improve the analytical ability of SERS by alleviating the problem due to the accompanying background and thus making observations substrate independent.
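    The principle can be sketched numerically: Raman peaks follow the modulated excitation wavelength while the continuum background does not, so the difference of two modulated frames cancels the background. In this idealized model all peak positions, widths and amplitudes are made up for illustration:

```python
import numpy as np

wn = np.linspace(400, 1800, 1400)  # detection axis, cm^-1 (relative to nominal excitation)

def lorentz(x, x0, w):
    return 1.0 / (1.0 + ((x - x0) / w) ** 2)

def frame(shift_cm):
    # Raman peaks track the excitation modulation; the continuum does not
    raman = 1.0 * lorentz(wn, 1000 + shift_cm, 8) + 0.5 * lorentz(wn, 1450 + shift_cm, 8)
    background = 5.0 * np.exp(-((wn - 1100) / 900) ** 2)  # broad, modulation-independent
    return raman + background

diff = frame(+5) - frame(-5)  # modulated difference: background cancels, peaks survive
```

    The derivative-like difference spectrum retains only the modulated (Raman) features, independent of the background level, which is the essence of the background-free detection described above.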

  9. Validation of spectroscopic gas analyzer accuracy using gravimetric standard gas mixtures: impact of background gas composition on CO2 quantitation by cavity ring-down spectroscopy

    NASA Astrophysics Data System (ADS)

    Lim, Jeong Sik; Park, Miyeon; Lee, Jinbok; Lee, Jeongsoon

    2017-12-01

    The effect of background gas composition on the measurement of CO2 levels was investigated by wavelength-scanned cavity ring-down spectrometry (WS-CRDS) employing a spectral line centered at the R(1) of the (3 00 1)III ← (0 0 0) band. For this purpose, eight cylinders with various gas compositions were gravimetrically and volumetrically prepared within 2σ = 0.1 %, and these gas mixtures were introduced into the WS-CRDS analyzer calibrated against standards of ambient air composition. Depending on the gas composition, deviations between CRDS-determined and gravimetrically (or volumetrically) assigned CO2 concentrations ranged from -9.77 to 5.36 µmol mol-1; e.g., excess N2 exhibited a negative deviation, whereas excess Ar showed a positive one. The total pressure broadening coefficients (TPBCs) obtained from the composition of N2, O2, and Ar corrected the deviations to within -0.5 to 0.6 µmol mol-1, whereas the values were -0.43 to 1.43 µmol mol-1 when considering only the PBCs induced by N2. The use of TPBCs allowed deviations to be corrected to ~0.15 %. Furthermore, the above correction shifted CRDS responses linearly over a wide range of TPBCs, from 0.065 to 0.081 cm-1 atm-1. Thus, accurate measurements using optical intensity-based techniques such as WS-CRDS require TPBC-based instrument calibration or standards prepared in the same background composition as ambient air.
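    A minimal sketch of a composition-weighted total pressure-broadening coefficient, assuming it is the mole-fraction-weighted sum of per-gas coefficients; the coefficient values below are placeholders, not the measured values from the study:

```python
# Hypothetical per-gas pressure-broadening coefficients, cm^-1 atm^-1
gamma = {"N2": 0.070, "O2": 0.064, "Ar": 0.058}

def total_pbc(x):
    """Mole-fraction-weighted total pressure-broadening coefficient.
    x maps gas name -> mole fraction; fractions must sum to 1."""
    assert abs(sum(x.values()) - 1.0) < 1e-6
    return sum(x[g] * gamma[g] for g in x)

# Dry-air-like background composition (N2/O2/Ar)
tpbc_air = total_pbc({"N2": 0.7808, "O2": 0.2095, "Ar": 0.0097})
```

    A mixture with excess Ar would yield a lower TPBC than ambient air under these placeholder coefficients, qualitatively matching the sign of the deviations reported above.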

  10. Normalization, bias correction, and peak calling for ChIP-seq

    PubMed Central

    Diaz, Aaron; Park, Kiyoub; Lim, Daniel A.; Song, Jun S.

    2012-01-01

    Next-generation sequencing is rapidly transforming our ability to profile the transcriptional, genetic, and epigenetic states of a cell. In particular, sequencing DNA from the immunoprecipitation of protein-DNA complexes (ChIP-seq) and methylated DNA (MeDIP-seq) can reveal the locations of protein binding sites and epigenetic modifications. These approaches contain numerous biases which may significantly influence the interpretation of the resulting data. Rigorous computational methods for detecting and removing such biases are still lacking. Also, multi-sample normalization still remains an important open problem. This theoretical paper systematically characterizes the biases and properties of ChIP-seq data by comparing 62 separate publicly available datasets, using rigorous statistical models and signal processing techniques. Statistical methods for separating ChIP-seq signal from background noise, as well as correcting enrichment test statistics for sequence-dependent and sonication biases, are presented. Our method effectively separates reads into signal and background components prior to normalization, improving the signal-to-noise ratio. Moreover, most peak callers currently use a generic null model which suffers from low specificity at the sensitivity level requisite for detecting subtle, but true, ChIP enrichment. The proposed method of determining a cell type-specific null model, which accounts for cell type-specific biases, is shown to be capable of achieving a lower false discovery rate at a given significance threshold than current methods. PMID:22499706

  11. Correction of Lying Ears by Augmentation of the Conchoscaphal Angle.

    PubMed

    Kim, Sung-Eun; Yeo, Chi-Ho; Kim, Taegon; Kim, Yong-Ha; Lee, Jun Ho; Chung, Kyu-Jin

    2017-01-01

    Lying ears are defined as ears that protrude less from the head, and in frontal view, are characterized by lateral positioning of antihelical contour relative to the helical rim. These aesthetically displeasing ears require correction in accord with the goals of otoplasty stated by McDowell. The authors present a case of lying ears treated by correcting the conchomastoid angle using Z-plasty, resection of posterior auricular muscle, and correction of the conchoscaphal angle by releasing cartilage using 2 full-thickness incisions and grafting of a conchal cartilage spacer. By combining these techniques, the authors efficiently corrected lying ears and produced aesthetically pleasing results.

  12. Correcting Flank Skin Laxity and Dog Ear Plus Aggressive Liposuction: A Technique for Classic Abdominoplasty in Middle-Eastern Obese Women.

    PubMed

    Hosseini, Seyed Nejat; Ammari, Ali; Mousavizadeh, Seyed Mehdi

    2018-01-01

    Nowadays obesity is a common problem, as it leads to abdominal deformation and people's dissatisfaction with their own bodies. This study explored a new surgical technique, based on a different incision, to reform flank skin laxity and dog ear plus aggressive liposuction in women with abdominal deformities. From May 2014 to February 2016, 25 women were chosen for this study. All women had a body mass index of more than 28 kg/m2, flank folding, bulging and excess fat, and abdominal and flank skin sagging and laxity. An important point of the new technique was that the paramedian perforator was preserved. All women were between 33 and 62 years old (mean age 47±7.2 years). The average amount of liposuction aspirate was 2,350 mL (1700-3200 mL), and the average excised skin ellipse was 23.62×16.08 cm (from 19×15 to 27×18 cm). Dog ear, skin laxity, bulging and fat deposit correction were assessed and scored two and four months after the surgery. Aggressive abdominal and flank liposuction can be safely done when the paramedian perforator is preserved. This has a good cosmetic result in the abdomen and flank and prevents bulging at the incision end and flank. This abdominoplasty technique is recommended for patients with high body mass indexes.

  13. Comparison of fluorescence rejection methods of baseline correction and shifted excitation Raman difference spectroscopy

    NASA Astrophysics Data System (ADS)

    Cai, Zhijian; Zou, Wenlong; Wu, Jianhong

    2017-10-01

    Raman spectroscopy has been extensively used in biochemical tests, explosives detection, and the analysis of food additives and environmental pollutants. However, fluorescence disturbance poses a serious problem for applications of portable Raman spectrometers. Currently, baseline correction and shifted-excitation Raman difference spectroscopy (SERDS) are the most prevalent fluorescence-suppression methods. In this paper, we compared the performance of baseline correction and SERDS, experimentally and by simulation. The comparison demonstrates that baseline correction can yield an acceptable fluorescence-removed Raman spectrum if the original Raman signal has a good signal-to-noise ratio, but it cannot recover small Raman signals from a large noise background. With SERDS, Raman signals, even those very weak compared to the fluorescence intensity and noise level, can be clearly extracted, and the fluorescence background can be completely rejected. The Raman spectrum recovered by SERDS has a good signal-to-noise ratio. Baseline correction is thus more suitable for large bench-top Raman systems with better signal quality, while SERDS is more suitable for noisy devices, especially portable Raman spectrometers.
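    The SERDS idea can be sketched in a few lines: two spectra acquired at slightly shifted excitation wavelengths are subtracted, cancelling the excitation-independent fluorescence, and the Raman spectrum is then approximately recovered by integrating the difference. Peak positions and magnitudes below are illustrative only:

```python
import numpy as np

x = np.linspace(0, 2000, 2000)  # detection axis, cm^-1

def spectrum(excitation_offset):
    peak = np.exp(-0.5 * ((x - 1000 - excitation_offset) / 6) ** 2)  # shifts with laser
    fluorescence = 50.0 * np.exp(-((x - 800) / 1500) ** 2)           # does not shift
    return peak + fluorescence

serds = spectrum(0.0) - spectrum(10.0)  # fluorescence cancels in the difference
recon = np.cumsum(serds)                # crude reconstruction by integration
```

    The reconstructed peak sits between the two shifted peak positions; in practice more careful deconvolution is used, but the fluorescence rejection step is exactly this subtraction.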

  14. Development of a Technique for Separating Raman Scattering Signals from Background Emission with Single-Shot Measurement Potential

    NASA Technical Reports Server (NTRS)

    Hartfield, Roy J., Jr.; Dobson, Chris; Eskridge, Richard; Wehrmeyer, Joseph A.

    1997-01-01

    A novel technique for extracting Q-branch Raman signals scattered by a diatomic species from the emission spectrum resulting from the irradiation of combustion products using a broadband excimer laser has been developed. This technique is based on the polarization characteristics of vibrational Raman scattering and can be used for both single-shot Raman extraction and time-averaged data collection. The Q-branch Raman signal has a unique set of polarization characteristics which depend on the direction of the scattering, while fluorescence signals are unpolarized. For the present work, a calcite crystal is used to separate the horizontal component of a collected signal from the vertical component. The two components are then sent through a UV spectrometer and imaged onto an intensified CCD camera separately. The vertical component contains both the Raman signal and the interfering fluorescence signal. The horizontal component contains the fluorescence signal and a very weak component of the Raman signal; hence, the Raman scatter can be extracted by taking the difference between the two signals. The separation of the Raman scatter from interfering fluorescence signals is critically important to the interpretation of the Raman spectra for cases in which a broadband ultraviolet (UV) laser is used as an excitation source in a hydrogen-oxygen flame and in all hydrocarbon flames. The present work provides a demonstration of the separation of the Raman scatter from the fluorescence background in real time.
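    A toy model of the polarization-based separation described above, assuming the vertical channel carries Raman plus fluorescence and the horizontal channel carries the fluorescence plus a small depolarized Raman leakage (the depolarization ratio used here is an arbitrary choice, not a measured value):

```python
import numpy as np

n = 512
raman = np.zeros(n)
raman[200], raman[340] = 100.0, 60.0                  # polarized Q-branch lines
fluor = 20.0 + 10.0 * np.sin(np.linspace(0, 3, n))    # unpolarized interference

rho = 0.05                                            # assumed depolarization ratio
vertical = raman + fluor                              # Raman + fluorescence
horizontal = rho * raman + fluor                      # fluorescence + weak Raman leakage

# Difference of the two channels removes fluorescence; rescale for the leakage
extracted = (vertical - horizontal) / (1.0 - rho)
```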

  15. Correcting ligands, metabolites, and pathways

    PubMed Central

    Ott, Martin A; Vriend, Gert

    2006-01-01

    Background A wide range of research areas in bioinformatics, molecular biology and medicinal chemistry require precise chemical structure information about molecules and reactions, e.g. drug design, ligand docking, metabolic network reconstruction, and systems biology. Most available databases, however, treat chemical structures more as illustrations than as a data field in their own right. Lack of chemical accuracy impedes progress in the areas mentioned above. We present a database of metabolites called BioMeta that augments the existing pathway databases by explicitly assessing the validity, correctness, and completeness of chemical structure and reaction information. Description The main bulk of the data in BioMeta were obtained from the KEGG Ligand database. We developed a tool for chemical structure validation which assesses the chemical validity and stereochemical completeness of a molecule description. The validation tool was used to examine the compounds in BioMeta, showing that a relatively small number of compounds had an incorrect constitution (connectivity only, not considering stereochemistry) and that a considerable number (about one third) had incomplete or even incorrect stereochemistry. We made a large effort to correct the errors and to complete the structural descriptions. A total of 1468 structures were corrected and/or completed. We also established the reaction balance of the reactions in BioMeta and corrected 55% of the unbalanced (stoichiometrically incorrect) reactions in an automatic procedure. The BioMeta database was implemented in PostgreSQL and provided with a web-based interface. Conclusion We demonstrate that the validation of metabolite structures and reactions is a feasible and worthwhile undertaking, and that the validation results can be used to trigger corrections and improvements to BioMeta, our metabolite database. BioMeta provides some tools for rational drug design, reaction searches, and visualization. It is freely available.

  16. Dual-energy digital mammography for calcification imaging: scatter and nonuniformity corrections.

    PubMed

    Kappadath, S Cheenu; Shaw, Chris C

    2005-11-01

    Mammographic images of small calcifications, which are often the earliest signs of breast cancer, can be obscured by overlapping fibroglandular tissue. We have developed and implemented a dual-energy digital mammography (DEDM) technique for calcification imaging under full-field imaging conditions using a commercially available aSi:H/CsI:Tl flat-panel based digital mammography system. The low- and high-energy images were combined using a nonlinear mapping function to cancel the tissue structures and generate the dual-energy (DE) calcification images. The total entrance-skin exposure and mean-glandular dose from the low- and high-energy images were constrained so that they were similar to screening-examination levels. To evaluate the DE calcification image, we designed a phantom using calcium carbonate crystals to simulate calcifications of various sizes (212-425 microm) overlaid with breast-tissue-equivalent material 5 cm thick with a continuously varying glandular-tissue ratio from 0% to 100%. We report on the effects of scatter radiation and nonuniformity in x-ray intensity and detector response on the DE calcification images. The nonuniformity was corrected by normalizing the low- and high-energy images with full-field reference images. Correction of scatter in the low- and high-energy images significantly reduced the background signal in the DE calcification image. Under the current implementation of DEDM, utilizing the mammography system and dose level tested, calcifications in the 300-355 microm size range were clearly visible in DE calcification images. Calcification threshold sizes decreased to the 250-280 microm size range when the visibility criteria were lowered to barely visible. Calcifications smaller than approximately 250 microm were usually not visible in most cases. The visibility of calcifications with our DEDM imaging technique was limited by quantum noise, not system noise.

  17. Dual-energy digital mammography for calcification imaging: Scatter and nonuniformity corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kappadath, S. Cheenu; Shaw, Chris C.

    Mammographic images of small calcifications, which are often the earliest signs of breast cancer, can be obscured by overlapping fibroglandular tissue. We have developed and implemented a dual-energy digital mammography (DEDM) technique for calcification imaging under full-field imaging conditions using a commercially available aSi:H/CsI:Tl flat-panel based digital mammography system. The low- and high-energy images were combined using a nonlinear mapping function to cancel the tissue structures and generate the dual-energy (DE) calcification images. The total entrance-skin exposure and mean-glandular dose from the low- and high-energy images were constrained so that they were similar to screening-examination levels. To evaluate the DE calcification image, we designed a phantom using calcium carbonate crystals to simulate calcifications of various sizes (212-425 µm) overlaid with breast-tissue-equivalent material 5 cm thick with a continuously varying glandular-tissue ratio from 0% to 100%. We report on the effects of scatter radiation and nonuniformity in x-ray intensity and detector response on the DE calcification images. The nonuniformity was corrected by normalizing the low- and high-energy images with full-field reference images. Correction of scatter in the low- and high-energy images significantly reduced the background signal in the DE calcification image. Under the current implementation of DEDM, utilizing the mammography system and dose level tested, calcifications in the 300-355 µm size range were clearly visible in DE calcification images. Calcification threshold sizes decreased to the 250-280 µm size range when the visibility criteria were lowered to barely visible. Calcifications smaller than ~250 µm were usually not visible in most cases. The visibility of calcifications with our DEDM imaging technique was limited by quantum noise, not system noise.

  18. Adjustment of geochemical background by robust multivariate statistics

    USGS Publications Warehouse

    Zhou, D.

    1985-01-01

    Conventional analyses of exploration geochemical data assume that the background is a constant or slowly changing value, equivalent to a plane or a smoothly curved surface. However, it is better to regard the geochemical background as a rugged surface, varying with changes in geology and environment. This rugged surface can be estimated from observed geological, geochemical and environmental properties by using multivariate statistics. A method of background adjustment was developed and applied to groundwater and stream sediment reconnaissance data collected from the Hot Springs Quadrangle, South Dakota, as part of the National Uranium Resource Evaluation (NURE) program. Source-rock lithology appears to be a dominant factor controlling the chemical composition of groundwater or stream sediments. The most efficacious adjustment procedure is to regress uranium concentration on selected geochemical and environmental variables for each lithologic unit, and then to delineate anomalies by a common threshold set as a multiple of the standard deviation of the combined residuals. Robust versions of regression and RQ-mode principal components analysis techniques were used rather than ordinary techniques to guard against distortion caused by outliers. Anomalies delineated by this background adjustment procedure correspond with uranium prospects much better than do anomalies delineated by conventional procedures. The procedure should be applicable to geochemical exploration at different scales for other metals. © 1985.
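    The adjustment procedure can be sketched as per-unit regression followed by a pooled residual threshold. This simplified version uses ordinary least squares rather than the robust regression of the paper, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(42)

def anomalies(uranium, covariates, lithology, k=2.5):
    """Regress concentration on covariates within each lithologic unit, then
    flag samples whose residual exceeds k standard deviations of the pooled
    residuals (OLS here; the paper uses robust regression)."""
    residuals = np.empty_like(uranium)
    for unit in np.unique(lithology):
        m = lithology == unit
        X = np.column_stack([np.ones(m.sum()), covariates[m]])
        beta, *_ = np.linalg.lstsq(X, uranium[m], rcond=None)
        residuals[m] = uranium[m] - X @ beta
    return residuals > k * residuals.std()

# Synthetic demo: two units with different backgrounds, one planted anomaly
cov = rng.normal(size=200)
lith = np.repeat([0, 1], 100)
u = 2.0 + 0.5 * cov + np.where(lith == 1, 3.0, 0.0) + rng.normal(0, 0.2, 200)
u[17] += 5.0  # anomalous sample
flags = anomalies(u, cov, lith)
```

    Because the regression absorbs the lithology-dependent background, the common residual threshold flags the planted anomaly without flagging the unit with the higher (but normal) background level.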

  19. The Trojan Female Technique for pest control: a candidate mitochondrial mutation confers low male fertility across diverse nuclear backgrounds in Drosophila melanogaster.

    PubMed

    Dowling, Damian K; Tompkins, Daniel M; Gemmell, Neil J

    2015-10-01

    Pest species represent a major ongoing threat to global biodiversity. Effective management approaches are required that regulate pest numbers, while minimizing collateral damage to nontarget species. The Trojan Female Technique (TFT) was recently proposed as a prospective approach to biological pest control. The TFT draws on the evolutionary hypothesis that maternally inherited mitochondrial genomes are prone to the accumulation of male, but not female, harming mutations. These mutations could be harnessed to provide trans-generational fertility-based control of pest species. A candidate TFT mutation was recently described in the fruit fly, Drosophila melanogaster, which confers male-only sterility in the specific isogenic nuclear background in which it is maintained. However, applicability of the TFT relies on mitochondrial mutations whose male-sterilizing effects are general across nuclear genomic contexts. We test this assumption, expressing the candidate TFT-mutation bearing haplotype alongside a range of nuclear backgrounds and comparing its fertility in males, relative to that of control haplotypes. We document consistently lower fertility for males harbouring the TFT mutation, in both competitive and noncompetitive mating contexts, across all nuclear backgrounds screened. This indicates that TFT mutations conferring reduced male fertility can segregate within populations and could be harnessed to facilitate this novel form of pest control.

  20. The Trojan Female Technique for pest control: a candidate mitochondrial mutation confers low male fertility across diverse nuclear backgrounds in Drosophila melanogaster

    PubMed Central

    Dowling, Damian K; Tompkins, Daniel M; Gemmell, Neil J

    2015-01-01

    Pest species represent a major ongoing threat to global biodiversity. Effective management approaches are required that regulate pest numbers, while minimizing collateral damage to nontarget species. The Trojan Female Technique (TFT) was recently proposed as a prospective approach to biological pest control. The TFT draws on the evolutionary hypothesis that maternally inherited mitochondrial genomes are prone to the accumulation of male, but not female, harming mutations. These mutations could be harnessed to provide trans-generational fertility-based control of pest species. A candidate TFT mutation was recently described in the fruit fly, Drosophila melanogaster, which confers male-only sterility in the specific isogenic nuclear background in which it is maintained. However, applicability of the TFT relies on mitochondrial mutations whose male-sterilizing effects are general across nuclear genomic contexts. We test this assumption, expressing the candidate TFT-mutation bearing haplotype alongside a range of nuclear backgrounds and comparing its fertility in males, relative to that of control haplotypes. We document consistently lower fertility for males harbouring the TFT mutation, in both competitive and noncompetitive mating contexts, across all nuclear backgrounds screened. This indicates that TFT mutations conferring reduced male fertility can segregate within populations and could be harnessed to facilitate this novel form of pest control. PMID:26495040

  1. Simple aerosol correction technique based on the spectral relationships of the aerosol multiple-scattering reflectances for atmospheric correction over the oceans.

    PubMed

    Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram

    2016-12-26

    An estimation of the aerosol multiple-scattering reflectance is an important part of the atmospheric correction procedure in satellite ocean color data processing. Most commonly, two near-infrared (NIR) bands are used to estimate the aerosol optical properties and hence the effects of aerosols. Previously, the operational Geostationary Ocean Color Imager (GOCI) atmospheric correction scheme relied on a single-scattering reflectance ratio (SSE), developed for the processing of Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, to determine the appropriate aerosol models and their aerosol optical thicknesses. The scheme computes reflectance contributions (weighting factors) of candidate aerosol models in the single-scattering domain, then spectrally extrapolates the single-scattering aerosol reflectance from NIR to visible (VIS) bands using the SSE. However, it applies the weight value directly to all wavelengths in the multiple-scattering domain, although the multiple-scattering aerosol reflectance is non-linearly related to the single-scattering reflectance and the inter-band relationship of multiple-scattering aerosol reflectances is itself non-linear. To avoid these issues, we propose an alternative scheme for estimating the aerosol reflectance that uses the spectral relationships in the aerosol multiple-scattering reflectance between different wavelengths (called SRAMS). The process directly calculates the multiple-scattering reflectance contributions in NIR with no residual errors for selected aerosol models. It then spectrally extrapolates the reflectance contribution from NIR to visible bands for each selected model using the SRAMS.
To assess the performance of the algorithm regarding the errors in the water reflectance at the surface or remote-sensing reflectance retrieval, we compared the SRAMS atmospheric correction results with the SSE atmospheric correction using both simulations and in situ match-ups with the

  2. CT acquisition technique and quantitative analysis of the lung parenchyma: variability and corrections

    NASA Astrophysics Data System (ADS)

    Zheng, Bin; Leader, J. K.; Coxson, Harvey O.; Scuirba, Frank C.; Fuhrman, Carl R.; Balkan, Arzu; Weissfeld, Joel L.; Maitz, Glenn S.; Gur, David

    2006-03-01

    The fraction of lung voxels below a pixel-value "cut-off" has been correlated with pathologic estimates of emphysema. We performed a "standard" quantitative CT (QCT) lung analysis using a -950 HU cut-off to determine the volume fraction of emphysema (below the cut-off) and a "corrected" QCT analysis after removing small groups (5 and 10 pixels) of connected pixels ("blobs") below the cut-off. Two datasets of CT examinations, 15 subjects each with a range of visible emphysema and pulmonary obstruction, were acquired at low dose and conventional dose and reconstructed using a high-spatial-frequency kernel at 2.5 mm section thickness for the same subjects. The "blob" size (i.e., number of connected pixels) removed was inversely related to the computed fraction of emphysema. The slopes of emphysema fraction versus blob size were 0.013, 0.009, and 0.005 for subjects with no emphysema and no pulmonary obstruction, moderate emphysema and pulmonary obstruction, and severe emphysema and severe pulmonary obstruction, respectively. The slopes of emphysema fraction versus blob size were 0.008 and 0.006 for low-dose and conventional CT examinations, respectively. The small blobs of pixels removed are most likely CT image artifacts and do not represent actual emphysema. The magnitude of the blob correction was appropriately associated with COPD severity. The blob correction appears to be applicable to QCT analysis in low-dose and conventional CT exams.
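    A minimal version of the "blob" correction, assuming 4-connectivity on a 2D slice: pixels below the cut-off are grouped into connected components, and components smaller than the blob size are discarded before computing the emphysema fraction:

```python
import numpy as np

def emphysema_fraction(hu, cutoff=-950, min_blob=5):
    """Fraction of voxels below `cutoff`, ignoring connected groups ("blobs")
    smaller than `min_blob` pixels (4-connectivity), which are likely image
    artifacts rather than true emphysema."""
    mask = hu < cutoff
    seen = np.zeros_like(mask, dtype=bool)
    H, W = mask.shape
    kept = 0
    for i in range(H):
        for j in range(W):
            if mask[i, j] and not seen[i, j]:
                stack, blob = [(i, j)], 0   # flood-fill one connected component
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    blob += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if blob >= min_blob:         # keep only blobs at or above the size threshold
                    kept += blob
    return kept / mask.size

# Toy slice: a 3x3 true low-attenuation region plus two isolated noise pixels
img = np.full((20, 20), -800.0)
img[5:8, 5:8] = -980.0   # 9-pixel region survives the blob filter
img[0, 0] = -980.0       # single-pixel blobs are removed
img[15, 3] = -980.0
frac = emphysema_fraction(img)
```

    Without the blob filter the fraction would be 11/400; with it, only the 9-pixel region counts, which is the direction of correction reported above.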

  3. Analysis and correction of Landsat 4 and 5 Thematic Mapper Sensor Data

    NASA Technical Reports Server (NTRS)

    Bernstein, R.; Hanson, W. A.

    1985-01-01

    Procedures for the correction and registration of Landsat TM image data are examined. The registration of Landsat-4 TM images of San Francisco to Landsat-5 TM images of the same area using an interactive geometric correction program and the cross-correlation technique is described. The geometric correction program and cross-correlation results are presented. The corrections of the TM data to a map reference and to a cartographic database are discussed; geometric and cartographic analyses are applied to the registration results.

  4. Pulse shape discrimination for background rejection in germanium gamma-ray detectors

    NASA Technical Reports Server (NTRS)

    Feffer, P. T.; Smith, D. M.; Campbell, R. D.; Primbsch, J. H.; Lin, R. P.

    1989-01-01

    A pulse-shape discrimination (PSD) technique is developed to reject the beta-decay background resulting from activation of Ge gamma-ray detectors by cosmic-ray secondaries. These beta decays are a major source of background at 0.2-2 MeV energies in well shielded Ge detector systems. The technique exploits the difference between the detected current pulse shapes of single- and multiple-site energy depositions within the detector: beta decays are primarily single-site events, while photons at these energies typically Compton scatter before being photoelectrically absorbed to produce multiple-site events. Depending upon the amount of background due to sources other than beta decay, PSD can more than double the detector sensitivity.
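    A cartoon of the pulse-shape idea: single-site depositions produce one current lobe, multiple-site depositions produce several, so counting well-separated lobes discriminates beta-decay background from Compton-scattered photons. The pulse shapes below are idealized Gaussians, not real detector current pulses:

```python
import numpy as np

t = np.linspace(0, 1, 400)

def site(t0, amp, tau=0.03):
    """Current pulse from one energy deposition (idealized Gaussian shape)."""
    return amp * np.exp(-0.5 * ((t - t0) / tau) ** 2)

def n_lobes(pulse, frac=0.25):
    """Count well-separated current lobes above a fraction of the maximum."""
    above = pulse > frac * pulse.max()
    # rising edges of the above-threshold mask, plus one if it starts high
    return int(np.sum(np.diff(above.astype(int)) == 1) + above[0])

single = site(0.5, 1.0)                     # beta decay: one interaction site
multiple = site(0.3, 0.6) + site(0.7, 0.5)  # Compton scatter: two sites

is_background = n_lobes(single) == 1        # single-site events flagged as background
```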

  5. Blindness to background: an inbuilt bias for visual objects.

    PubMed

    O'Hanlon, Catherine G; Read, Jenny C A

    2017-09-01

    Sixty-eight 2- to 12-year-olds and 30 adults were shown colorful displays on a touchscreen monitor and trained to point to the location of a named color. Participants located targets near-perfectly when presented with four abutting colored patches. When presented with three colored patches on a colored background, toddlers failed to locate targets in the background. Eye tracking demonstrated that the effect was partially mediated by a tendency not to fixate the background. However, the effect was abolished when the targets were named as nouns, whilst the change to nouns had little impact on eye movement patterns. Our results imply a powerful, inbuilt tendency to attend to objects, which may slow the development of color concepts and acquisition of color words. A video abstract of this article can be viewed at: https://youtu.be/TKO1BPeAiOI. [Correction added on 27 January 2017, after first online publication: The video abstract link was added.]. © 2016 John Wiley & Sons Ltd.

  6. Background rejection in NEXT using deep neural networks

    DOE PAGES

    Renner, J.; Farbin, A.; Vidal, J. Muñoz; ...

    2017-01-16

    Here, we investigate the potential of using deep learning techniques to reject background events in searches for neutrinoless double beta decay with high pressure xenon time projection chambers capable of detailed track reconstruction. The differences in the topological signatures of background and signal events can be learned by deep neural networks via training over many thousands of events. These networks can then be used to classify further events as signal or background, providing an additional background rejection factor at an acceptable loss of efficiency. The networks trained in this study performed better than previous methods developed based on the use of the same topological signatures by a factor of 1.2 to 1.6, and there is potential for further improvement.

  7. Data-driven modeling of background and mine-related acidity and metals in river basins

    USGS Publications Warehouse

    Friedel, Michael J

    2013-01-01

    A novel application of self-organizing map (SOM) and multivariate statistical techniques is used to model the nonlinear interaction among basin mineral resources, mining activity, and surface-water quality. First, the SOM is trained using sparse measurements from 228 sample sites in the Animas River Basin, Colorado. The model performance is validated by comparing stochastic predictions of basin-alteration assemblages and mining activity at 104 independent sites. The SOM correctly predicts (>98%) the predominant type of basin hydrothermal alteration and presence (or absence) of mining activity. Second, application of the Davies-Bouldin criterion to k-means clustering of SOM neurons identified ten unique environmental groups. Median statistics of these groups define a nonlinear water-quality response along the spatiotemporal hydrothermal alteration-mining gradient. These results reveal that it is possible to differentiate along the continuum between background and mine-related inputs of acidity and metals, and they provide a basis for future research and empirical model development.
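    The cluster-number selection step can be sketched with a small k-means and a hand-rolled Davies-Bouldin index; the SOM stage is omitted and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=50):
    # deterministic init: evenly spaced data points as starting centers
    centers = X[::max(1, len(X) // k)][:k].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        new = []
        for i in range(k):
            pts = X[labels == i]
            new.append(pts.mean(0) if len(pts) else centers[i])  # keep empty clusters put
        centers = np.array(new)
    return labels, centers

def davies_bouldin(X, labels, centers):
    k = len(centers)
    S = np.array([np.linalg.norm(X[labels == i] - centers[i], axis=1).mean()
                  for i in range(k)])  # within-cluster scatter
    worst = []
    for i in range(k):
        R = [(S[i] + S[j]) / np.linalg.norm(centers[i] - centers[j])
             for j in range(k) if j != i]
        worst.append(max(R))  # most-confusable partner cluster
    return float(np.mean(worst))

# Two well-separated synthetic "environmental groups"
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
scores = {k: davies_bouldin(X, *kmeans(X, k)) for k in (2, 3, 4)}
best_k = min(scores, key=scores.get)  # lower Davies-Bouldin is better
```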

  8. Application of split window technique to TIMS data

    NASA Technical Reports Server (NTRS)

    Matsunaga, Tsuneo; Rokugawa, Shuichi; Ishii, Yoshinori

    1992-01-01

    Absorption by the atmosphere in the thermal infrared region is mainly due to water vapor, carbon dioxide, and ozone. As the content of water vapor in the atmosphere changes greatly according to weather conditions, it is important to know its amount between the sensor and the ground (e.g., from radiosonde profiles) for atmospheric correction of Thermal Infrared Multispectral Scanner (TIMS) data. On the other hand, various atmospheric correction techniques have already been developed for sea surface temperature estimation from satellites. Among such techniques, the split window technique, now widely used for AVHRR (Advanced Very High Resolution Radiometer), uses no radiosonde or other supplementary data, only the difference between observed brightness temperatures in two channels, to estimate atmospheric effects. Application of the split window technique to TIMS data is discussed because the availability of atmospheric profile data is not assured when ASTER operates. After these theoretical discussions, the technique is experimentally applied to TIMS data at three ground targets and the results are compared with data atmospherically corrected using LOWTRAN 7 with radiosonde data.
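    The split-window correction has the familiar two-channel form T_s ≈ T4 + a(T4 - T5) + b, where the inter-channel brightness-temperature difference serves as a proxy for water-vapor absorption. The coefficients below are hypothetical, since the real ones are fit to atmospheric and sensor conditions:

```python
def split_window(t4, t5, a=2.5, b=0.5):
    """Split-window surface temperature estimate (kelvin).
    t4, t5: brightness temperatures in the two thermal channels;
    a, b: regression coefficients (hypothetical values here)."""
    return t4 + a * (t4 - t5) + b

# Moist atmosphere attenuates channel 5 more, so T4 > T5 and the
# correction raises the estimate above either brightness temperature
ts = split_window(294.0, 292.5)
```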

  9. A simple technique for correction of relapsed overjet.

    PubMed

    Kakkirala, Neelima; Saxena, Ruchi

    2014-01-01

    Class III malocclusions are usually growth-related discrepancies, which often become more severe when growth is completed. Orthognathic surgery can be a part of the treatment plan, although a good number of cases can be treated non-surgically by camouflage treatment. The purpose of this report is to review the relapse tendency in patients treated non-surgically. A simple technique is described to combat one such post-treatment relapse condition in an adult patient who had undergone orthodontic treatment with extraction of a single lower incisor.

  10. Stahl ear correction using the third crus cartilage flap.

    PubMed

    Gundeslioglu, A Ozlem; Ince, Bilsev

    2013-12-01

    Stahl ear is usually associated with the existence of a third crus that traverses the scapha. The absence of the superior crus of the antihelix, a broadened scapha, and an unfolded, long helical rim may contribute to the formation of the deformity. The method presented in this article is a combined technique intended to recreate the absent superior crus using the third crus as a cartilage flap, along with elimination of the third crus, reduction of the scapha size, and correction of helical rim deformities by skin and cartilage excisions. One bilateral and three unilateral cases were operated on with this technique (five auricles). The first unilateral case was performed without scapha reduction and helix correction; these steps were used in the other three cases with successful results. This procedure can be used to correct Stahl ear variations in the case of an absent superior crus when a third crus is present, the scapha is of increased size, and the helix is long without a fold. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  11. Radon background in liquid xenon detectors

    NASA Astrophysics Data System (ADS)

    Rupp, N.

    2018-02-01

    The radioactive daughter isotopes of 222Rn are among the highest-risk contaminants in liquid xenon detectors aiming to measure small signal rates. The noble gas is permanently emanated from the detector surfaces and mixed into the xenon target. Because of its long half-life, 222Rn is homogeneously distributed in the target, and its subsequent decays can mimic signal events. Since no shielding is possible, this background source can be the dominant one in future large-scale experiments. This article provides an overview of strategies used to mitigate this source of background by means of material selection and on-line radon removal techniques.

  12. Advances in iterative non-uniformity correction techniques for infrared scene projection

    NASA Astrophysics Data System (ADS)

    Danielson, Tom; Franks, Greg; LaVeigne, Joe; Prewarski, Marcus; Nehring, Brian

    2015-05-01

    Santa Barbara Infrared (SBIR) is continually developing improved methods for non-uniformity correction (NUC) of its Infrared Scene Projectors (IRSPs) as part of its comprehensive efforts to achieve the best possible projector performance. The most recent step forward, Advanced Iterative NUC (AI-NUC), improves upon previous NUC approaches in several ways. The key to NUC performance is achieving the most accurate possible input drive-to-radiance output mapping for each emitter pixel. This requires many highly accurate radiance measurements of emitter output, as well as sophisticated manipulation of the resulting data set. AI-NUC expands the available radiance data set to include all measurements made of emitter output at any point. In addition, it allows the user to efficiently manage that data for use in the construction of a new NUC table that is generated from an improved fit of the emitter response curve. Not only does this improve the overall NUC by offering more statistics for interpolation than previous approaches, it also simplifies the removal of erroneous data from the set so that it does not propagate into the correction tables. AI-NUC is implemented by SBIR's IRWindows4 automated test software as part of its advanced turnkey IRSP product (the Calibration Radiometry System, or CRS), which incorporates all necessary measurement, calibration and NUC table generation capabilities. By employing AI-NUC on the CRS, SBIR has demonstrated the best uniformity results on resistive emitter arrays to date.
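
    The core of any NUC pass — fitting each emitter's drive-to-radiance response and inverting it into a per-pixel correction table — can be sketched as below. The synthetic quadratic responses and the linear-interpolation inversion are assumptions for illustration; AI-NUC's actual response fits and data management are more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_emitters, n_levels = 64, 32
drive = np.linspace(0.0, 1.0, n_levels)

# Synthetic measured responses: each emitter has its own monotonic, slightly
# nonlinear drive-to-radiance curve (gain and curvature spread stand in for
# array non-uniformity).
gain = rng.uniform(0.8, 1.2, (n_emitters, 1))
curve = rng.uniform(0.1, 0.4, (n_emitters, 1))
radiance = gain * (drive[None, :] + curve * drive[None, :] ** 2)

# NUC table: invert each emitter's measured response so that commanding
# nuc_table[e, j] produces a common target radiance target[j].
target = np.linspace(radiance[:, 1].max(), radiance[:, -1].min(), n_levels)
nuc_table = np.array([np.interp(target, radiance[e], drive)
                      for e in range(n_emitters)])

# Verify: driving every emitter through its table entry gives a uniform scene.
achieved = np.array([np.interp(nuc_table[e], drive, radiance[e])
                     for e in range(n_emitters)])
spread = achieved.std(axis=0).max()
```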

  13. Automatic correction of dental artifacts in PET/MRI

    PubMed Central

    Ladefoged, Claes N.; Andersen, Flemming L.; Keller, Sune. H.; Beyer, Thomas; Law, Ian; Højgaard, Liselotte; Darkner, Sune; Lauze, Francois

    2015-01-01

    A challenge when using current magnetic resonance (MR)-based attenuation correction in positron emission tomography/MR imaging (PET/MRI) is that the MRIs can have a signal void around the dental fillings that is segmented as artificial air-regions in the attenuation map. For artifacts connected to the background, we propose an extension to an existing active contour algorithm to delineate the outer contour using the nonattenuation corrected PET image and the original attenuation map. We propose a combination of two different methods for differentiating the artifacts within the body from the anatomical air-regions by first using a template of artifact regions, and second, representing the artifact regions with a combination of active shape models and k-nearest-neighbors. The accuracy of the combined method has been evaluated using 25 F18-fluorodeoxyglucose PET/MR patients. Results showed that the approach was able to correct an average of 97±3% of the artifact areas. PMID:26158104

  14. An improved bias correction method of daily rainfall data using a sliding window technique for climate change impact assessment

    NASA Astrophysics Data System (ADS)

    Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.

    2018-01-01

    Regional climate models (RCMs) are used to downscale coarse-resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from the observed climatological data, and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods, and is tested on six watersheds in different climatic zones of India to assess the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data was used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed to evaluate the different bias correction methods. The analysis suggested that the proposed method effectively corrects the daily bias in rainfall as compared to using monthly factors. The methods such as local intensity scaling, modified power transformation and distribution mapping, which adjusted the wet day frequencies, performed superior compared to the other methods, which did not consider adjustment of wet day frequencies. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of observed data with NSE values above 0.81 over most parts of India. Hydrological
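
    The sliding-window idea — pooling values within ±N days of each calendar day across all years, rather than by calendar month, before deriving a correction factor — can be illustrated with the simplest case, a multiplicative (linear-scaling) factor. The window half-width, synthetic rainfall series and 30% wet bias below are assumptions for illustration:

```python
import numpy as np

def daily_factors(obs, mod, window=15):
    """Multiplicative correction factor per day-of-year, pooling all values
    within a +/- `window`-day sliding window (wrapping over the year end)."""
    n = 365
    doy = np.arange(len(obs)) % n
    factors = np.empty(n)
    for d in range(n):
        dist = np.minimum((doy - d) % n, (d - doy) % n)  # circular distance
        sel = dist <= window
        factors[d] = obs[sel].mean() / max(mod[sel].mean(), 1e-12)
    return factors

rng = np.random.default_rng(0)
years = 10
doy = np.arange(365 * years) % 365
truth = 5.0 + 3.0 * np.sin(2 * np.pi * doy / 365)        # seasonal rainfall signal
obs = truth * rng.gamma(4.0, 0.25, truth.size)           # noisy "observations"
mod = 1.3 * truth * rng.gamma(4.0, 0.25, truth.size)     # model with 30% wet bias

f = daily_factors(obs, mod, window=15)
corrected = mod * f[doy]
bias_before = mod.mean() / obs.mean()
bias_after = corrected.mean() / obs.mean()
```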

  15. Beam hardening correction in CT myocardial perfusion measurement

    NASA Astrophysics Data System (ADS)

    So, Aaron; Hsieh, Jiang; Li, Jian-Ying; Lee, Ting-Yim

    2009-05-01

    This paper presents a method for correcting beam hardening (BH) in cardiac CT perfusion imaging. The proposed algorithm works with reconstructed images instead of projection data. It applies thresholds to separate low- (soft tissue) and high- (bone and contrast) attenuating material in a CT image. The BH error in each projection is estimated by a polynomial function of the forward projection of the segmented image. The error image is reconstructed by back-projection of the estimated errors. A BH-corrected image is then obtained by subtracting a scaled error image from the original image. Phantoms were designed to simulate the BH artifacts encountered in cardiac CT perfusion studies of humans and of the animal models most commonly used in cardiac research. These phantoms were used to investigate whether BH artifacts can be reduced with our approach, and to determine the optimal settings of the correction algorithm, which depend upon the anatomy of the scanned subject, for patient and animal studies. The correction algorithm was also applied to correct BH in a clinical study to further demonstrate the effectiveness of our technique.
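
    The image-domain pipeline described above (threshold, forward-project the segmented image, estimate a polynomial BH error per ray, back-project, subtract) can be sketched for a single parallel-beam view. A real implementation uses a Radon transform over many angles and calibrated polynomial coefficients; all values here are illustrative:

```python
import numpy as np

# Stand-in for a reconstructed image with BH artifacts: uniform soft tissue
# (value 1.0) plus two dense "bone" columns (value 3.0).
recon = np.full((64, 64), 1.0)
recon[:, 20] = recon[:, 45] = 3.0

# 1) Threshold to isolate high-attenuation material (bone/contrast).
bone = np.where(recon > 2.0, recon, 0.0)

# 2) Forward-project the segmented image. For parallel rays along axis 0 this
#    is a column sum; a full implementation sweeps many projection angles.
proj_bone = bone.sum(axis=0)

# 3) Model the BH error per ray as a polynomial of the bone projection
#    (the quadratic coefficient is illustrative, not calibrated).
c2 = 1e-3
err_proj = c2 * proj_bone ** 2

# 4) Back-project the error (uniform smear along each parallel ray) and
#    subtract a scaled error image from the original reconstruction.
err_img = np.tile(err_proj / recon.shape[0], (recon.shape[0], 1))
scale = 1.0
corrected = recon - scale * err_img
```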

  16. Diffraction, chopping, and background subtraction for LDR

    NASA Technical Reports Server (NTRS)

    Wright, Edward L.

    1988-01-01

    The Large Deployable Reflector (LDR) will be an extremely sensitive infrared telescope if the noise due to the photons in the large thermal background is the only limiting factor. For observations with a 3 arcsec aperture in a broad band at 100 micrometers, a 20-meter LDR will emit 10^12 photons per second, while the photon-noise-limited sensitivity in a deep survey observation will be 3,000 photons per second. Thus the background subtraction has to work at the 1 part per billion level. Very small amounts of scattered or diffracted energy can be significant if they are modulated by the chopper. The results are presented for 1-D and 2-D diffraction calculations for the lightweight, low-cost LDR concept that uses an active chopping quaternary to correct the wavefront errors introduced by the primary. Fourier transforms were used to evaluate the diffraction of 1 mm waves through this system. Unbalanced signals due to dust and thermal gradients were also studied.

  17. Techniques for correcting velocity and density fluctuations of ion beams in ion induction accelerators

    NASA Astrophysics Data System (ADS)

    Woo, K. M.; Yu, S. S.; Barnard, J. J.

    2013-06-01

    It is well known that imperfections in the pulsed power sources that drive linear induction accelerators can lead to time-varying fluctuations in the accelerating voltages, which in turn lead to longitudinal emittance growth. We show that this source of emittance growth is correctable, even in space-charge-dominated beams with significant transients induced by space-charge waves. Two correction methods are proposed, and their efficacy in reducing longitudinal emittance is demonstrated with three-dimensional particle-in-cell simulations.

  18. Background oriented schlieren in a density stratified fluid.

    PubMed

    Verso, Lilly; Liberzon, Alex

    2015-10-01

    Non-intrusive quantitative fluid density measurement methods are essential in stratified flow experiments. Digital imaging has led to synthetic schlieren methods in which the variations of the index of refraction are reconstructed computationally. In this study, an extension to one of these methods, called background oriented schlieren, is proposed. The extension enables an accurate reconstruction of the density field in stratified liquid experiments. Typically, the experiments are performed with the light source, background pattern, and camera positioned on opposite sides of a transparent vessel. The multimedia imaging through air-glass-water-glass-air introduces an additional aberration that destroys the reconstruction. A two-step calibration and image remapping transform are the key components that correct the images through the stratified media and provide non-intrusive full-field density measurements of transparent liquids.

  19. Enhanced Analysis Techniques for an Imaging Neutron and Gamma Ray Spectrometer

    NASA Astrophysics Data System (ADS)

    Madden, Amanda C.

    The presence of gamma rays and neutrons is a strong indicator of the presence of Special Nuclear Material (SNM). The imaging Neutron and gamma ray SPECTrometer (NSPECT), developed by the University of New Hampshire and Michigan Aerospace Corporation, detects the fast neutrons and prompt gamma rays from fissile material, and the gamma rays from radioactive material. The instrument operates as a double-scatter device, requiring a neutron or a gamma ray to interact twice in the instrument. While this detection requirement decreases the efficiency of the instrument, it offers superior background rejection and the ability to measure the energy and momentum of the incident particle. These measurements create energy spectra and images of the emitting source for source identification and localization. The dual-species instrument provides superior detection capability compared to either single species alone. In realistic detection scenarios, few particles are detected from a potential threat due to source shielding, detection at a distance, high background, and weak sources. This contributes to a small signal-to-noise ratio, and threat detection becomes difficult. To address these difficulties, several enhanced data analysis tools were developed. A Receiver Operating Characteristic (ROC) curve helps set instrumental alarm thresholds as well as identify the presence of a source. Analysis of a dual-species ROC curve provides superior detection capabilities. Bayesian analysis helps to detect and identify the presence of a source through model comparisons, and helps create a background-corrected count spectrum for enhanced spectroscopy. Development of an instrument response using simulations and numerical analyses will help perform spectral and image deconvolution. This thesis will outline the principles of operation of the NSPECT instrument using the double-scatter technology, traditional analysis techniques, and enhanced analysis techniques as applied to data from the NSPECT instrument, and an
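
    A hand-rolled sketch of the ROC-based alarm-threshold step: sweep a threshold over detector scores (e.g., counts in a time window), trace out true-positive rate versus false-positive rate, and pick an operating point. The Gaussian count distributions and the Youden's J threshold rule are assumptions for illustration, not details of the NSPECT analysis:

```python
import numpy as np

def roc_points(scores, labels):
    """ROC points obtained by sweeping a threshold over the sorted scores."""
    order = np.argsort(-scores)
    lab = labels[order]
    tpr = np.cumsum(lab) / lab.sum()
    fpr = np.cumsum(1 - lab) / (1 - lab).sum()
    return fpr, tpr, scores[order]

rng = np.random.default_rng(0)
background = rng.normal(10.0, 2.0, 1000)   # counts with no source present
source = rng.normal(18.0, 2.0, 1000)       # counts with a weak source present
scores = np.concatenate([background, source])
labels = np.concatenate([np.zeros(1000), np.ones(1000)])

fpr, tpr, thr = roc_points(scores, labels)
# Area under the curve via the trapezoid rule.
auc = np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2.0)
# One common alarm-threshold choice: maximise Youden's J = TPR - FPR.
best = thr[np.argmax(tpr - fpr)]
```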

  20. Demagnetizing correction in fluxmetric measurements of magnetization curves and hysteresis loops of ferromagnetic cylinders

    NASA Astrophysics Data System (ADS)

    Chen, Du-Xing; Pardo, Enric; Zhu, Yong-Hong; Xiang, Li-Xiong; Ding, Jia-Quan

    2018-03-01

    A technique is proposed for demagnetizing correction of the measured magnetization curve and hysteresis loop, i.e., the M*(Ha) curve, of a ferromagnetic cylinder into the true M(H) curve of the material, where Ha is the uniform applied field provided by a long solenoid and M* is the magnetization measured by a fluxmeter with the measuring coil surrounding the cylinder midplane. Different from ordinary demagnetizing correction using a fixed demagnetizing factor, an (Ha, M*)-dependent fluxmetric demagnetizing factor Nf(γ, χd) is used in this technique, where γ is the ratio of cylinder length to diameter, χd is the differential susceptibility on the corrected M(H) curve, and Nf(γ, χd) is approximated by the accurately calculated Nf(γ, χ) of paramagnetic cylinders of the same γ and χ = χd. The validity of the technique is studied by comparing results for several samples of different lengths cut from the same cylinder. Such a demagnetizing correction is unambiguous, but its success requires very high accuracy in the Nf determination and in the M*(Ha) measurements.
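
    The essence of the correction — mapping each measured point (Ha, M*) to the internal field H = Ha - Nf·M* — can be sketched with a constant Nf. Note that the technique described above instead re-evaluates Nf(γ, χd) from the differential susceptibility along the corrected curve, which this simplification omits:

```python
import numpy as np

def demag_correct(Ha, M, Nf):
    """Map each measured point (Ha, M*) to the internal field H = Ha - Nf*M*."""
    return Ha - Nf * M, M

# Synthetic check: a linear material M = chi*H measured inside a long solenoid.
chi, Nf = 50.0, 0.02
H_true = np.linspace(0.0, 100.0, 50)
M_true = chi * H_true
Ha = H_true + Nf * M_true        # applied field needed to reach H_true inside
H_rec, M_rec = demag_correct(Ha, M_true, Nf)
# The apparent susceptibility M/Ha is chi/(1 + Nf*chi) = 25 here; the corrected
# curve recovers the true chi = 50.
```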

  1. Background Studies in CZT Detectors at Balloon Altitudes

    NASA Astrophysics Data System (ADS)

    Slavis, K. R.; Dowkontt, P. F.; Epstein, J. W.; Hink, P. L.; Matteson, J. L.; Duttweiler, F.; Huszar, G. L.; Leblanc, P. C.; Skelton, R. T.; Stephan, E. A.

    1998-12-01

    Cadmium Zinc Telluride (CZT) is a room temperature semiconductor detector well suited for high energy X-ray astronomy. We have developed a CZT detector with crossed strip readout, 500 micron resolution, and an advanced electrode design that greatly improves energy resolution. The latter varies from 3 keV to 6 keV FWHM over the range from 14-184 keV. We have conducted two balloon flights using this cross-strip detector and a standard planar detector sensitive in the energy range of 20-350 keV. These flights utilized a total of seven shielding schemes: 3 passive (7, 2, and 0 mm thick Pb/Sn/Cu), 2 active (NaI-CsI with 2 opening angles) and 2 hybrid passive-active. In the active shielding modes, the shield pulse heights were telemetered for each CZT event, allowing us to study the effect of shield energy-loss threshold on the background. The flights were launched from Fort Sumner, NM in October 1997 and May 1998, and had float altitudes of 109,000 and 105,000 feet, respectively. Periodic energy calibrations showed the detector performance to be identical to that in the laboratory. The long duration of the May flight, 22 hours, enables us to study activation effects in the background. We present results on the effectiveness of each of the shielding schemes, activation effects and two new background reduction techniques for the strip detector. These reduction techniques employ the depth of interaction, as indicated by the ratio of cathode to anode pulse height, and multiple-site signatures to reject events that are unlikely to be X-rays incident on the detector's face. The depth of interaction technique reduces the background by a factor of 4 in the 20-40 keV energy range with passive shielding. Our preliminary results indicate a background level of 8.6×10^-3 cts/cm^2-s-keV using passive shielding and 6×10^-4 cts/cm^2-s-keV using active shielding in the 20-40 keV range.

  2. Modelling of celestial backgrounds

    NASA Astrophysics Data System (ADS)

    Hickman, Duncan L.; Smith, Moira I.; Lim, Jae-Wan; Jeon, Yun-Ho

    2018-05-01

    For applications where a sensor's image includes the celestial background, stars and solar system bodies compromise the ability of the sensor system to correctly classify a target. Such false targets are particularly significant for the detection of weak target signatures that have only a small relative angular motion. The detection of celestial features is well established in the visible spectral band. However, given the increasing sensitivity and low noise afforded by emergent infrared focal plane array technology, together with larger and more efficient optics, the signatures of celestial features can also impact performance at infrared wavelengths. A methodology has been developed that allows the rapid generation of celestial signatures in any required spectral band, using star data from star catalogues and other open-source information. Within this paper, the radiometric calculations are presented to determine the irradiance values of stars and planets in any spectral band.
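
    The radiometric core of such a calculation — in-band irradiance of a star modelled as a blackbody disc — might be sketched as below. Integrating the Planck function over a wide band and scaling by the solid-angle factor (R/d)² reproduces the solar constant as a sanity check; real catalogue-based signatures fold in measured magnitudes, spectral types and sensor response, which are omitted here:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def planck(lam, T):
    """Spectral radiance B_lambda (W m^-2 sr^-1 m^-1) of a blackbody."""
    return 2.0 * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * T))

def band_irradiance(T, radius, distance, lam_lo, lam_hi, n=4000):
    """In-band irradiance (W/m^2) of a star modelled as a blackbody disc:
    E = pi * (R/d)^2 * integral of B_lambda over [lam_lo, lam_hi]."""
    lam = np.geomspace(lam_lo, lam_hi, n)
    b = planck(lam, T)
    integral = np.sum((lam[1:] - lam[:-1]) * (b[1:] + b[:-1]) / 2.0)
    return np.pi * (radius / distance) ** 2 * integral

# Sanity check with solar values: total irradiance at 1 AU is ~1361 W/m^2.
e_sun = band_irradiance(5772.0, 6.957e8, 1.496e11, 5e-8, 2e-4)
```

    Narrower limits (e.g., 3-5 µm or 8-12 µm) give the in-band irradiance for a particular infrared sensor band.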

  3. Background Model for the Majorana Demonstrator

    NASA Astrophysics Data System (ADS)

    Cuesta, C.; Abgrall, N.; Aguayo, E.; Avignone, F. T.; Barabash, A. S.; Bertrand, F. E.; Boswell, M.; Brudanin, V.; Busch, M.; Byram, D.; Caldwell, A. S.; Chan, Y.-D.; Christofferson, C. D.; Combs, D. C.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu.; Egorov, V.; Ejiri, H.; Elliott, S. R.; Fast, J. E.; Finnerty, P.; Fraenkle, F. M.; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, V. E.; Gusev, K.; Hallin, A. L.; Hazama, R.; Hegai, A.; Henning, R.; Hoppe, E. W.; Howard, S.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J.; Leviner, L. E.; Loach, J. C.; MacMullin, J.; MacMullin, S.; Martin, R. D.; Meijer, S.; Mertens, S.; Nomachi, M.; Orrell, J. L.; O'Shaughnessy, C.; Overman, N. R.; Phillips, D. G.; Poon, A. W. P.; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, K.; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Schubert, A. G.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, K. J.; Snyder, N.; Suriano, A. M.; Thompson, J.; Timkin, V.; Tornow, W.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A. R.; Yu, C.-H.; Yumatov, V.

    The Majorana Collaboration is constructing a system containing 40 kg of HPGe detectors to demonstrate the feasibility and potential of a future tonne-scale experiment capable of probing the neutrino mass scale in the inverted-hierarchy region. To realize this, a major goal of the Majorana Demonstrator is to demonstrate a path forward to achieving a background rate at or below 1 cnt/(ROI-t-y) in the 4 keV region of interest around the Q-value at 2039 keV. This goal is pursued through a combination of a significant reduction of radioactive impurities in construction materials with analytical methods for background rejection, for example using powerful pulse shape analysis techniques benefiting from the p-type point-contact HPGe detector technology. The effectiveness of these methods is assessed using simulations of the different background components whose purity levels are constrained from radioassay measurements.

  4. Background model for the Majorana Demonstrator

    DOE PAGES

    Cuesta, C.; Abgrall, N.; Aguayo, E.; ...

    2015-01-01

    The Majorana Collaboration is constructing a system containing 40 kg of HPGe detectors to demonstrate the feasibility and potential of a future tonne-scale experiment capable of probing the neutrino mass scale in the inverted-hierarchy region. To realize this, a major goal of the Majorana Demonstrator is to demonstrate a path forward to achieving a background rate at or below 1 cnt/(ROI-t-y) in the 4 keV region of interest around the Q-value at 2039 keV. This goal is pursued through a combination of a significant reduction of radioactive impurities in construction materials with analytical methods for background rejection, for example using powerful pulse shape analysis techniques profiting from the p-type point contact HPGe detectors technology. The effectiveness of these methods is assessed using simulations of the different background components whose purity levels are constrained from radioassay measurements.

  5. New technique for calibrating hydrocarbon gas flowmeters

    NASA Technical Reports Server (NTRS)

    Singh, J. J.; Puster, R. L.

    1984-01-01

    A technique for measuring calibration correction factors for hydrocarbon mass flowmeters is described. It is based on the Nernst theorem for matching the partial pressure of oxygen in the combustion products of the test hydrocarbon, burned in oxygen-enriched air, with that in normal air. It is applied to a widely used type of commercial thermal mass flowmeter for a number of hydrocarbons. The calibration correction factors measured using this technique are in good agreement with the values obtained by other independent procedures. The technique is successfully applied to the measurement of differences as low as one percent of the effective hydrocarbon content of the natural gas test samples.

  6. Using NASA Techniques to Atmospherically Correct AWiFS Data for Carbon Sequestration Studies

    NASA Technical Reports Server (NTRS)

    Holekamp, Kara L.

    2007-01-01

    Carbon dioxide is a greenhouse gas emitted in a number of ways, including the burning of fossil fuels and the conversion of forest to agriculture. Research has begun to quantify the ability of vegetative land cover and oceans to absorb and store carbon dioxide. The USDA (U.S. Department of Agriculture) Forest Service is currently evaluating a DSS (decision support system) developed by researchers at the NASA Ames Research Center called CASA-CQUEST (Carnegie-Ames-Stanford Approach-Carbon Query and Evaluation Support Tools). CASA-CQUEST is capable of estimating levels of carbon sequestration based on different land cover types and of predicting the effects of land use change on atmospheric carbon amounts to assist land use management decisions. The CASA-CQUEST DSS currently uses land cover data acquired from MODIS (the Moderate Resolution Imaging Spectroradiometer), and the CASA-CQUEST project team is involved in several projects that use moderate-resolution land cover data derived from Landsat surface reflectance. Landsat offers higher spatial resolution than MODIS, allowing for increased ability to detect land use changes and forest disturbance. However, because of the rate at which changes occur and the fact that disturbances can be hidden by regrowth, updated land cover classifications may be required before the launch of the Landsat Data Continuity Mission, and consistent classifications will be needed after that time. This candidate solution investigates the potential of using NASA atmospheric correction techniques to produce science-quality surface reflectance data from the Indian Remote Sensing Advanced Wide-Field Sensor on the RESOURCESAT-1 mission, from which land cover classification maps can be generated for the CASA-CQUEST DSS.

  7. Wind Tunnel Wall Interference Assessment and Correction, 1983

    NASA Technical Reports Server (NTRS)

    Newman, P. A. (Editor); Barnwell, R. W. (Editor)

    1984-01-01

    Technical information focused upon emerging wall interference assessment/correction (WIAC) techniques applicable to transonic wind tunnels with conventional and passively or partially adapted walls is given. The possibility of improving the assessment and correction of data taken in conventional transonic wind tunnels by utilizing simultaneously obtained flow field data (generally taken near the walls) appears to offer a larger, nearer-term payoff than the fully adaptive wall concept. Development of WIAC procedures continues, and aspects related to validating the concept need to be addressed. Thus, the scope of wall interference topics discussed was somewhat limited.

  8. Motion correction in periodically-rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) and turboprop MRI.

    PubMed

    Tamhane, Ashish A; Arfanakis, Konstantinos

    2009-07-01

    Periodically-rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) and Turboprop MRI are characterized by greatly reduced sensitivity to motion, compared to their predecessors, fast spin-echo (FSE) and gradient and spin-echo (GRASE), respectively. This is due to the inherent self-navigation and motion correction of PROPELLER-based techniques. However, it is unknown how various acquisition parameters that determine k-space sampling affect the accuracy of motion correction in PROPELLER and Turboprop MRI. The goal of this work was to evaluate the accuracy of motion correction in both techniques, to identify an optimal rotation correction approach, and determine acquisition strategies for optimal motion correction. It was demonstrated that blades with multiple lines allow more accurate estimation of motion than blades with fewer lines. Also, it was shown that Turboprop MRI is less sensitive to motion than PROPELLER. Furthermore, it was demonstrated that the number of blades does not significantly affect motion correction. Finally, clinically appropriate acquisition strategies that optimize motion correction are discussed for PROPELLER and Turboprop MRI. (c) 2009 Wiley-Liss, Inc.

  9. Use of bias correction techniques to improve seasonal forecasts for reservoirs - A case-study in northwestern Mediterranean.

    PubMed

    Marcos, Raül; Llasat, Ma Carmen; Quintana-Seguí, Pere; Turco, Marco

    2018-01-01

    In this paper, we have compared different bias correction methodologies to assess whether they could be advantageous for improving the performance of a seasonal prediction model for volume anomalies in the Boadella reservoir (northwestern Mediterranean). The bias correction adjustments have been applied to precipitation and temperature from the European Centre for Medium-Range Weather Forecasts (ECMWF) System 4 (S4). We have used three bias correction strategies: two linear (mean bias correction, BC, and linear regression, LR) and one non-linear (Model Output Statistics analogs, MOS-analog). The results have been compared with climatology and persistence. The volume-anomaly model is a previously computed multiple linear regression that ingests precipitation, temperature and inflow anomaly data to simulate monthly volume anomalies. The potential utility for end-users has been assessed using economic value curve areas. We have studied the S4 hindcast period 1981-2010 for each month of the year and up to seven months ahead, considering an ensemble of 15 members. We have shown that the MOS-analog and LR bias corrections can improve on the original S4. The application to volume anomalies points towards the possibility of introducing bias correction methods as a tool to improve water resource seasonal forecasts in an end-user context of climate services. In particular, the MOS-analog approach gives generally better results than the other approaches in late autumn and early winter. Copyright © 2017 Elsevier B.V. All rights reserved.
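
    The two linear strategies named above (BC and LR) are straightforward to sketch. The synthetic hindcast data and the bias structure below are assumptions for illustration, and the non-linear MOS-analog method is not shown:

```python
import numpy as np

def mean_bias_correction(model, obs_hind, mod_hind):
    """BC: shift forecasts by the hindcast-period mean bias."""
    return model - (mod_hind.mean() - obs_hind.mean())

def linear_regression_correction(model, obs_hind, mod_hind):
    """LR: map forecasts through an obs-on-model regression fit on hindcasts."""
    slope, intercept = np.polyfit(mod_hind, obs_hind, 1)
    return slope * model + intercept

rng = np.random.default_rng(0)
obs_hind = rng.normal(15.0, 3.0, 360)                        # e.g. monthly temperature
mod_hind = 0.8 * obs_hind + 5.0 + rng.normal(0.0, 1.0, 360)  # biased model

forecast = mod_hind[:60]   # re-use a hindcast slice as stand-in forecasts
bc = mean_bias_correction(forecast, obs_hind, mod_hind)
lr = linear_regression_correction(forecast, obs_hind, mod_hind)
```

    BC removes the mean offset exactly over the hindcast period; LR additionally corrects the amplitude error, so its RMS error against the observations is lower here.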

  10. Using soft computing techniques to predict corrected air permeability using Thomeer parameters, air porosity and grain density

    NASA Astrophysics Data System (ADS)

    Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez

    2014-03-01

    Soft computing techniques have recently become very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained from mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a recent reservoir characterization workflow ensures consistency between micro- and macro-scale information, represented mainly by the Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step and to show better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were carried out. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
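
    The overall workflow — log-transform the permeability target, split 80/20, fit, and report error measures — can be sketched with a linear least-squares model standing in for the neural-network and fuzzy models compared in the study. The synthetic inputs (including a hypothetical Thomeer entry-pressure variable) are assumptions, not the published dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
porosity = rng.uniform(0.05, 0.30, n)
grain_density = rng.normal(2.68, 0.03, n)       # g/cc
pd_thomeer = rng.uniform(0.5, 50.0, n)          # hypothetical Thomeer entry pressure

# Synthetic log-permeability with a log-linear structure plus noise (stand-in data).
log_k = 2.0 + 8.0 * porosity - 1.2 * np.log10(pd_thomeer) + rng.normal(0.0, 0.2, n)

# Log-transformed target, 80/20 train/test split, linear least squares standing
# in for the soft-computing models.
X = np.column_stack([np.ones(n), porosity, grain_density, np.log10(pd_thomeer)])
idx = rng.permutation(n)
train, test = idx[:400], idx[400:]
coef, *_ = np.linalg.lstsq(X[train], log_k[train], rcond=None)

pred_log = X[test] @ coef
corr = np.corrcoef(pred_log, log_k[test])[0, 1]
# Average absolute relative error on the permeability (linear) scale.
aare = np.mean(np.abs(10.0 ** pred_log - 10.0 ** log_k[test]) / 10.0 ** log_k[test])
```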

  11. Scatter and veiling glare corrections for quantitative digital subtraction angiography

    NASA Astrophysics Data System (ADS)

    Ersahin, Atila; Molloi, Sabee Y.; Qian, Yao-Jin

    1994-05-01

    In order to quantitate anatomical and physiological parameters such as vessel dimensions and volumetric blood flow, it is necessary to correct for scatter and veiling glare (SVG), which are the major sources of nonlinearity in videodensitometric digital subtraction angiography (DSA). A convolution filtering technique has been investigated to estimate the SVG distribution in DSA images without the need to sample the SVG for each patient. This technique utilizes exposure parameters and image gray levels to estimate the SVG intensity by predicting the total thickness for every pixel in the image. Corrections were also made for the variation of the SVG fraction with beam energy and field size. To test its ability to estimate SVG intensity, the correction technique was applied to images of a Lucite step phantom, an anthropomorphic chest phantom, a head phantom, and animal models at different thicknesses, projections, and beam energies. The root-mean-square (rms) percentage errors of these estimates were obtained by comparison with direct SVG measurements made behind a lead strip. The average rms percentage errors in the SVG estimate for the 25 phantom studies and for the 17 animal studies were 6.22% and 7.96%, respectively. These results indicate that the SVG intensity can be estimated for a wide range of thicknesses, projections, and beam energies.
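
    A toy version of the convolution-filtering idea: model SVG as a fraction of the image convolved with a broad kernel, estimate it from the measured image itself (a first-order approximation), and subtract. The Gaussian kernel, SVG fraction and phantom are illustrative assumptions, and the thickness-prediction step from exposure parameters is omitted:

```python
import numpy as np

def glare_kernel(shape, sigma):
    """Broad, normalised 2-D Gaussian modelling the low-frequency SVG spread."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    k = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def convolve2d_fft(img, kernel):
    """Circular 2-D convolution via FFT (adequate for this toy demonstration)."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(kernel))))

# Synthetic primary image and a measured image contaminated by SVG.
primary = np.full((64, 64), 100.0)
primary[24:40, 24:40] = 40.0     # a "vessel" region with contrast agent
kernel = glare_kernel(primary.shape, sigma=12.0)
svg_fraction = 0.3               # assumed known here; estimated from exposure
measured = primary + svg_fraction * convolve2d_fft(primary, kernel)

# Convolution-filtering correction: estimate SVG from the measured image itself
# (first-order approximation, since measured ~ primary + glare) and subtract.
svg_est = svg_fraction * convolve2d_fft(measured, kernel)
corrected = measured - svg_est
err_before = np.abs(measured - primary).mean()
err_after = np.abs(corrected - primary).mean()
```

    The residual error after one pass is of order svg_fraction² of the signal; iterating the estimate reduces it further.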

  12. New techniques for fluorescence background rejection in microscopy and endoscopy

    NASA Astrophysics Data System (ADS)

    Ventalon, Cathie

    2009-03-01

    Confocal microscopy is a popular technique in the bioimaging community, mainly because it provides optical sectioning. However, its standard implementation requires 3-dimensional scanning of focused illumination throughout the sample. Efficient non-scanning alternatives have been implemented, among which is the simple and well-established incoherent structured illumination microscopy (SIM) [1]. We recently proposed a similar technique, called Dynamic Speckle Illumination (DSI) microscopy, wherein the incoherent grid illumination pattern is replaced with a coherent speckle illumination pattern from a laser, taking advantage of the fact that speckle contrast is well maintained in scattering media, which makes the technique well adapted to tissue imaging [2]. DSI microscopy relies on illuminating a sample with a sequence of dynamic speckle patterns and on an image processing algorithm based only on a priori knowledge of speckle statistics. The choice of this post-processing algorithm is crucial for obtaining good sectioning strength: in particular, we developed a novel post-processing algorithm based on wavelet pre-filtering of the raw images and obtained near-confocal fluorescence sectioning in a mouse brain labeled with GFP, with good image quality maintained throughout a depth of ˜100 μm [3]. With the aim of imaging fluorescent tissue at greater depth, we recently applied structured illumination to endoscopy. We used a similar set-up wherein the illumination pattern (a one-dimensional grid) is transported to the sample with an imaging fiber bundle with a miniaturized objective and the fluorescence image is collected through the same bundle. Using a post-processing algorithm similar to the one previously described [3], we obtained high-quality images of a fluorescein-labeled rat colonic mucosa [4], establishing the potential of our endomicroscope for bioimaging applications. [4pt] Ref: [0pt] [1] M. A. A. Neil et al, Opt. Lett. 22, 1905 (1997) [0pt] [2] C
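    A minimal sketch of the DSI principle (without the wavelet pre-filtering refinement described in the record) reduces to taking the pixelwise standard deviation over the sequence of speckle-illuminated frames; in-focus structures see high speckle contrast and survive, while out-of-focus background, where the speckle averages out, is suppressed. The function name is illustrative:

    ```python
    import numpy as np

    def dsi_sectioned_image(frames):
        """Optically sectioned image as the temporal standard deviation
        over a stack of speckle-illuminated frames (simplified DSI sketch).
        """
        stack = np.asarray(frames, dtype=float)
        return stack.std(axis=0)
    ```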

  13. Evaluation of scatter limitation correction: a new method of correcting photopenic artifacts caused by patient motion during whole-body PET/CT imaging.

    PubMed

    Miwa, Kenta; Umeda, Takuro; Murata, Taisuke; Wagatsuma, Kei; Miyaji, Noriaki; Terauchi, Takashi; Koizumi, Mitsuru; Sasaki, Masayuki

    2016-02-01

    Overcorrection of scatter caused by patient motion during whole-body PET/computed tomography (CT) imaging can induce the appearance of photopenic artifacts in the PET images. The present study aimed to quantify the accuracy of scatter limitation correction (SLC) for eliminating photopenic artifacts. This study analyzed photopenic artifacts in (18)F-fluorodeoxyglucose ((18)F-FDG) PET/CT images acquired from 12 patients and from a National Electrical Manufacturers Association phantom with two peripheral plastic bottles that simulated the human body and arms, respectively. The phantom comprised a sphere (diameter, 10 or 37 mm) containing fluorine-18 solutions with target-to-background ratios of 2, 4, and 8. The plastic bottles were moved 10 cm posteriorly between the CT and PET acquisitions. All PET data were reconstructed using model-based scatter correction (SC), no scatter correction (NSC), and SLC, and the presence or absence of artifacts on the PET images was visually evaluated. The SC and SLC images were also semiquantitatively evaluated using standardized uptake values (SUVs). Photopenic artifacts were not recognizable in any NSC or SLC image from all 12 patients in the clinical study. The SUVmax values of mismatched SLC PET/CT images were almost equal to those of matched SC and SLC PET/CT images. Applying NSC and SLC substantially eliminated the photopenic artifacts on SC PET images in the phantom study. SLC improved the activity concentration of the sphere for all target-to-background ratios. The highest %errors of the 10- and 37-mm spheres were 93.3 and 58.3%, respectively, for mismatched SC, and 73.2 and 22.0%, respectively, for mismatched SLC. Photopenic artifacts caused by SC error induced by CT and PET image misalignment were corrected using SLC, indicating that this method is useful and practical for clinical qualitative and quantitative PET/CT assessment.

  14. Solving for the Surface: An Automated Approach to THEMIS Atmospheric Correction

    NASA Astrophysics Data System (ADS)

    Ryan, A. J.; Salvatore, M. R.; Smith, R.; Edwards, C. S.; Christensen, P. R.

    2013-12-01

    Here we present the initial results of an automated atmospheric correction algorithm for the Thermal Emission Imaging System (THEMIS) instrument, whereby high spectral resolution Thermal Emission Spectrometer (TES) data are queried to generate numerous atmospheric opacity values for each THEMIS infrared image. While the pioneering methods of Bandfield et al. [2004] also used TES spectra to atmospherically correct THEMIS data, the algorithm presented here is a significant improvement because of the reduced dependency on user-defined inputs for individual images. Additionally, this technique is particularly useful for correcting THEMIS images that have captured a range of atmospheric conditions and/or surface elevations, issues that have been difficult to correct for using previous techniques. Thermal infrared observations of the Martian surface can be used to determine the spatial distribution and relative abundance of many common rock-forming minerals. This information is essential to understanding the planet's geologic and climatic history. However, the Martian atmosphere also has absorptions in the thermal infrared which complicate the interpretation of infrared measurements obtained from orbit. TES has sufficient spectral resolution (143 bands at 10 cm-1 sampling) to linearly unmix and remove atmospheric spectral end-members from the acquired spectra. THEMIS has the benefit of higher spatial resolution (~100 m/pixel vs. 3x5 km/TES-pixel) but has lower spectral resolution (8 surface sensitive spectral bands). As such, it is not possible to isolate the surface component by unmixing the atmospheric contribution from the THEMIS spectra, as is done with TES. Bandfield et al. [2004] developed a technique using atmospherically corrected TES spectra as tie-points for constant radiance offset correction and surface emissivity retrieval. This technique is the primary method used to correct THEMIS but is highly susceptible to inconsistent results if great care in the

  15. Septal penetration correction in I-131 imaging following thyroid cancer treatment

    NASA Astrophysics Data System (ADS)

    Barrack, Fiona; Scuffham, James; McQuaid, Sarah

    2018-04-01

    Whole body gamma camera images acquired after I-131 treatment for thyroid cancer can suffer from collimator septal penetration artefacts because of the high energy of the gamma photons. This results in the appearance of ‘spoke’ artefacts, emanating from regions of high activity concentration, caused by the non-isotropic attenuation of the collimator. Deconvolution has the potential to reduce such artefacts, by taking into account the non-Gaussian point-spread-function (PSF) of the system. A Richardson–Lucy deconvolution algorithm, with and without prior scatter-correction was tested as a method of reducing septal penetration in planar gamma camera images. Phantom images (hot spheres within a warm background) were acquired and deconvolution using a measured PSF was applied. The results were evaluated through region-of-interest and line profile analysis to determine the success of artefact reduction and the optimal number of deconvolution iterations and damping parameter (λ). Without scatter-correction, the optimal results were obtained with 15 iterations and λ  =  0.01, with the counts in the spokes reduced to 20% of the original value, indicating a substantial decrease in their prominence. When a triple-energy-window scatter-correction was applied prior to deconvolution, the optimal results were obtained with six iterations and λ  =  0.02, which reduced the spoke counts to 3% of the original value. The prior application of scatter-correction therefore produced the best results, with a marked change in the appearance of the images. The optimal settings were then applied to six patient datasets, to demonstrate its utility in the clinical setting. In all datasets, spoke artefacts were substantially reduced after the application of scatter-correction and deconvolution, with the mean spoke count being reduced to 10% of the original value. This indicates that deconvolution is a promising technique for septal penetration artefact reduction that
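    The core of the approach above is standard Richardson–Lucy iteration with a measured PSF. A minimal 1-D sketch is given below; the exact damped-RL variant and damping parameter λ used in the paper are not reproduced here (the `eps` floor merely guards the division), so this illustrates only the basic multiplicative update:

    ```python
    import numpy as np

    def richardson_lucy(observed, psf, iterations=15, eps=1e-12):
        """Basic 1-D Richardson-Lucy deconvolution (undamped sketch).

        observed : measured profile
        psf      : measured point-spread-function (normalised internally)
        """
        observed = np.asarray(observed, dtype=float)
        psf = np.asarray(psf, dtype=float)
        psf = psf / psf.sum()
        psf_mirror = psf[::-1]
        # Flat, positive initial estimate.
        estimate = np.full_like(observed, observed.mean())
        for _ in range(iterations):
            blurred = np.convolve(estimate, psf, mode="same")
            ratio = observed / np.maximum(blurred, eps)
            estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
        return estimate
    ```

    Applied to a blurred spike, the iterations progressively re-concentrate counts into the peak, which is the same mechanism that pulls septal-penetration counts out of the spokes and back into the source region.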

  16. Rapid correction of electron microprobe data for multicomponent metallic systems

    NASA Technical Reports Server (NTRS)

    Gupta, K. P.; Sivakumar, R.

    1973-01-01

    This paper describes an empirical relation for the correction of electron microprobe data for multicomponent metallic systems. It evaluates the empirical correction parameter, a, for each element in a binary alloy system using a modification of Colby's MAGIC III computer program, and outlines a simple and quick way of correcting the probe data. This technique has been tested on a number of multicomponent metallic systems, and the agreement with results from theoretical expressions is found to be excellent. Limitations and suitability of this relation are discussed, and a model calculation is also presented in the Appendix.

  17. Survey of background scattering from materials found in small-angle neutron scattering.

    PubMed

    Barker, J G; Mildner, D F R

    2015-08-01

    Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300-700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed.

  18. Survey of background scattering from materials found in small-angle neutron scattering

    PubMed Central

    Barker, J. G.; Mildner, D. F. R.

    2015-01-01

    Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300–700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed. PMID:26306088

  19. Ray tracing evaluation of a technique for correcting the refraction errors in satellite tracking data

    NASA Technical Reports Server (NTRS)

    Gardner, C. S.; Rowlett, J. R.; Hendrickson, B. E.

    1978-01-01

    Errors may be introduced in satellite laser ranging data by atmospheric refractivity. Ray tracing data have indicated that horizontal refractivity gradients may introduce nearly 3-cm rms error when satellites are near 10-degree elevation. A correction formula to compensate for the horizontal gradients has been developed. Its accuracy is evaluated by comparing it to refractivity profiles. It is found that if both spherical and gradient correction formulas are employed in conjunction with meteorological measurements, a range resolution of one cm or less is feasible for satellite elevation angles above 10 degrees.

  20. Issues in Correctional Training and Casework. Correctional Monograph.

    ERIC Educational Resources Information Center

    Wolford, Bruce I., Ed.; Lawrenz, Pam, Ed.

    The eight papers contained in this monograph were drawn from two national meetings on correctional training and casework. Titles and authors are: "The Challenge of Professionalism in Correctional Training" (Michael J. Gilbert); "A New Perspective in Correctional Training" (Jack Lewis); "Reasonable Expectations in Correctional Officer Training:…

  1. Extragalactic background light measurements and applications.

    PubMed

    Cooray, Asantha

    2016-03-01

    This review covers the measurements related to the extragalactic background light intensity from γ-rays to radio in the electromagnetic spectrum over 20 decades in wavelength. The cosmic microwave background (CMB) remains the best measured spectrum with an accuracy better than 1%. The measurements related to the cosmic optical background (COB), centred at 1 μm, are impacted by the large zodiacal light associated with interplanetary dust in the inner Solar System. The best measurements of COB come from an indirect technique involving γ-ray spectra of bright blazars with an absorption feature resulting from pair-production off of COB photons. The cosmic infrared background (CIB) peaking at around 100 μm established an energetically important background with an intensity comparable to the optical background. This discovery paved the way for large aperture far-infrared and sub-millimetre observations resulting in the discovery of dusty, starbursting galaxies. Their role in galaxy formation and evolution remains an active area of research in modern-day astrophysics. The extreme UV (EUV) background remains mostly unexplored and will be a challenge to measure due to the high Galactic background and absorption of extragalactic photons by the intergalactic medium at these EUV/soft X-ray energies. We also summarize our understanding of the spatial anisotropies and angular power spectra of intensity fluctuations. We motivate a precise direct measurement of the COB between 0.1 and 5 μm using a small aperture telescope observing either from the outer Solar System, at distances of 5 AU or more, or out of the ecliptic plane. Other future applications include improving our understanding of the background at TeV energies and spectral distortions of CMB and CIB.

  2. Extragalactic background light measurements and applications

    PubMed Central

    Cooray, Asantha

    2016-01-01

    This review covers the measurements related to the extragalactic background light intensity from γ-rays to radio in the electromagnetic spectrum over 20 decades in wavelength. The cosmic microwave background (CMB) remains the best measured spectrum with an accuracy better than 1%. The measurements related to the cosmic optical background (COB), centred at 1 μm, are impacted by the large zodiacal light associated with interplanetary dust in the inner Solar System. The best measurements of COB come from an indirect technique involving γ-ray spectra of bright blazars with an absorption feature resulting from pair-production off of COB photons. The cosmic infrared background (CIB) peaking at around 100 μm established an energetically important background with an intensity comparable to the optical background. This discovery paved the way for large aperture far-infrared and sub-millimetre observations resulting in the discovery of dusty, starbursting galaxies. Their role in galaxy formation and evolution remains an active area of research in modern-day astrophysics. The extreme UV (EUV) background remains mostly unexplored and will be a challenge to measure due to the high Galactic background and absorption of extragalactic photons by the intergalactic medium at these EUV/soft X-ray energies. We also summarize our understanding of the spatial anisotropies and angular power spectra of intensity fluctuations. We motivate a precise direct measurement of the COB between 0.1 and 5 μm using a small aperture telescope observing either from the outer Solar System, at distances of 5 AU or more, or out of the ecliptic plane. Other future applications include improving our understanding of the background at TeV energies and spectral distortions of CMB and CIB. PMID:27069645

  3. GUP parameter from quantum corrections to the Newtonian potential

    NASA Astrophysics Data System (ADS)

    Scardigli, Fabio; Lambiase, Gaetano; Vagenas, Elias C.

    2017-04-01

    We propose a technique to compute the deformation parameter of the generalized uncertainty principle by using the leading quantum corrections to the Newtonian potential. We assume only General Relativity as the theory of gravitation, and the thermal nature of the GUP corrections to the Hawking spectrum. With these minimal assumptions, our calculation gives, to first order, a specific numerical result. The physical meaning of this value is discussed and compared with previously obtained bounds on the generalized uncertainty principle deformation parameter.

  4. The Classification of Tongue Colors with Standardized Acquisition and ICC Profile Correction in Traditional Chinese Medicine

    PubMed Central

    Tu, Li-ping; Chen, Jing-bo; Hu, Xiao-juan; Zhang, Zhi-feng

    2016-01-01

    Background and Goal. The application of digital image processing techniques and machine learning methods in tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied nowadays. However, it is difficult for the outcomes to generalize because of lack of color reproducibility and image standardization. Our study aims at the exploration of tongue colors classification with a standardized tongue image acquisition process and color correction. Methods. Three traditional Chinese medical experts are chosen to identify the selected tongue pictures taken by the TDA-1 tongue imaging device in TIFF format through ICC profile correction. Then we compare the mean value of L*a*b* of different tongue colors and evaluate the effect of the tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. Random forest method has a better performance than SVM in classification. SMOTE algorithm can increase classification accuracy by solving the imbalance of the varied color samples. Conclusions. At the premise of standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in Traditional Chinese Medicine (TCM) is feasible. PMID:28050555

  5. The Classification of Tongue Colors with Standardized Acquisition and ICC Profile Correction in Traditional Chinese Medicine.

    PubMed

    Qi, Zhen; Tu, Li-Ping; Chen, Jing-Bo; Hu, Xiao-Juan; Xu, Jia-Tuo; Zhang, Zhi-Feng

    2016-01-01

    Background and Goal. The application of digital image processing techniques and machine learning methods in tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied nowadays. However, it is difficult for the outcomes to generalize because of lack of color reproducibility and image standardization. Our study aims at the exploration of tongue colors classification with a standardized tongue image acquisition process and color correction. Methods. Three traditional Chinese medical experts are chosen to identify the selected tongue pictures taken by the TDA-1 tongue imaging device in TIFF format through ICC profile correction. Then we compare the mean value of L*a*b* of different tongue colors and evaluate the effect of the tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. Random forest method has a better performance than SVM in classification. SMOTE algorithm can increase classification accuracy by solving the imbalance of the varied color samples. Conclusions. At the premise of standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in Traditional Chinese Medicine (TCM) is feasible.
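    The study's finding that the five tongue colors have statistically different mean L*a*b* values suggests a very simple baseline classifier. The sketch below uses nearest-centroid classification in L*a*b* space as a deliberately simplified stand-in for the paper's random-forest classifier; the centroid values and class names are made up for the illustration, not taken from the paper:

    ```python
    import numpy as np

    # Illustrative mean L*a*b* centroids for three hypothetical tongue-colour
    # classes; these numbers are invented for the sketch.
    CENTROIDS = {
        "pale":   np.array([70.0, 15.0, 10.0]),
        "red":    np.array([55.0, 35.0, 20.0]),
        "purple": np.array([45.0, 25.0, -5.0]),
    }

    def classify_lab(lab):
        """Assign a tongue-colour class by nearest centroid in L*a*b* space
        (a simplified stand-in for the random-forest classifier)."""
        lab = np.asarray(lab, dtype=float)
        return min(CENTROIDS, key=lambda c: np.linalg.norm(lab - CENTROIDS[c]))
    ```

    Because the class means are well separated, even this baseline behaves sensibly; the random forest in the paper additionally handles within-class spread and the SMOTE-balanced training samples.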

  6. Influence of different topographic correction strategies on mountain vegetation classification accuracy in the Lancang Watershed, China

    NASA Astrophysics Data System (ADS)

    Zhang, Zhiming; de Wulf, Robert R.; van Coillie, Frieke M. B.; Verbeke, Lieven P. C.; de Clercq, Eva M.; Ou, Xiaokun

    2011-01-01

    Mapping of vegetation using remote sensing in mountainous areas is considerably hampered by topographic effects on the spectral response pattern. A variety of topographic normalization techniques have been proposed to correct these illumination effects due to topography. The purpose of this study was to compare six different topographic normalization methods (Cosine correction, Minnaert correction, C-correction, Sun-canopy-sensor correction, two-stage topographic normalization, and slope matching technique) for their effectiveness in enhancing vegetation classification in mountainous environments. Since most of the vegetation classes in the rugged terrain of the Lancang Watershed (China) did not feature a normal distribution, artificial neural networks (ANNs) were employed as a classifier. Comparing the ANN classifications, none of the topographic correction methods could significantly improve ETM+ image classification overall accuracy. Nevertheless, at the class level, the accuracy of pine forest could be increased by using topographically corrected images. On the contrary, oak forest and mixed forest accuracies were significantly decreased by using corrected images. The results also showed that none of the topographic normalization strategies was satisfactorily able to correct for the topographic effects in severely shadowed areas.
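    Of the six methods compared, the Cosine correction is the simplest to state: the local illumination angle i on a tilted pixel is computed from the solar geometry and the terrain slope/aspect, and radiance is rescaled as if the pixel were horizontal. A minimal sketch (angles in radians, names illustrative):

    ```python
    import numpy as np

    def cosine_correction(radiance, sun_zenith, sun_azimuth, slope, aspect):
        """Cosine topographic correction: L_h = L * cos(theta_s) / cos(i),
        where cos(i) is the cosine of the local illumination angle.
        """
        cos_i = (np.cos(sun_zenith) * np.cos(slope)
                 + np.sin(sun_zenith) * np.sin(slope)
                 * np.cos(sun_azimuth - aspect))
        return radiance * np.cos(sun_zenith) / cos_i
    ```

    On flat terrain the factor collapses to 1, while strongly shadowed pixels (cos i near zero) are boosted sharply, which is exactly the known overcorrection weakness of the Cosine method that the more elaborate Minnaert and C-corrections try to tame.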

  7. Three-dimensional surface profile intensity correction for spatially modulated imaging

    NASA Astrophysics Data System (ADS)

    Gioux, Sylvain; Mazhar, Amaan; Cuccia, David J.; Durkin, Anthony J.; Tromberg, Bruce J.; Frangioni, John V.

    2009-05-01

    We describe a noncontact profile correction technique for quantitative, wide-field optical measurement of tissue absorption (μa) and reduced scattering (μs') coefficients, based on geometric correction of the sample's Lambertian (diffuse) reflectance intensity. Because the projection of structured light onto an object is the basis for both phase-shifting profilometry and modulated imaging, we were able to develop a single instrument capable of performing both techniques. In so doing, the surface of the three-dimensional object could be acquired and used to extract the object's optical properties. The optical properties of flat polydimethylsiloxane (silicone) phantoms with homogenous tissue-like optical properties were extracted, with and without profilometry correction, after vertical translation and tilting of the phantoms at various angles. Objects having a complex shape, including a hemispheric silicone phantom and human fingers, were acquired and similarly processed, with vascular constriction of a finger being readily detectable through changes in its optical properties. Using profilometry correction, the accuracy of extracted absorption and reduced scattering coefficients improved from two- to ten-fold for surfaces having height variations as much as 3 cm and tilt angles as high as 40 deg. These data lay the foundation for employing structured light for quantitative imaging during surgery.
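    The geometric part of the profilometry correction can be sketched as follows: once the surface shape is known, the measured diffuse (Lambertian) reflectance is divided by the cosine of the angle between the local surface normal and the illumination direction. This is a simplified sketch under stated assumptions; the height-dependent intensity calibration and the full modulated-imaging inversion used in the paper are omitted, and all names are illustrative:

    ```python
    import numpy as np

    def lambertian_tilt_correction(intensity, surface_normal,
                                   illum_dir=(0.0, 0.0, 1.0)):
        """Correct measured Lambertian reflectance for surface tilt by
        dividing by cos(theta) between the profilometry-derived surface
        normal and the illumination direction.
        """
        n = np.asarray(surface_normal, dtype=float)
        s = np.asarray(illum_dir, dtype=float)
        cos_theta = np.dot(n, s) / (np.linalg.norm(n) * np.linalg.norm(s))
        return intensity / cos_theta
    ```

    A surface tilted 60 degrees returns half the flat-surface signal, so the correction doubles it back, consistent with the reported accuracy gains at tilt angles up to 40 degrees.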

  8. Environmental corrections of a dual-induction logging while drilling tool in vertical wells

    NASA Astrophysics Data System (ADS)

    Kang, Zhengming; Ke, Shizhen; Jiang, Ming; Yin, Chengfang; Li, Anzong; Li, Junjian

    2018-04-01

    With the development of Logging While Drilling (LWD) technology, dual-induction LWD logging is not only widely applied in deviated and horizontal wells, but is also commonly used in vertical wells. Accordingly, it is necessary to simulate the response of LWD tools in vertical wells for logging interpretation. In this paper, the investigation characteristics of a dual-induction LWD tool, together with the effects of the tool structure, the skin effect and the drilling environment, are simulated by the three-dimensional (3D) finite element method (FEM). In order to closely match the actual situation, the real structure of the tool is taken into account. The results demonstrate that the influence of the background value of the tool structure can be eliminated. The values obtained after deducting the tool-structure background agree quantitatively with the analytical solution in homogeneous formations. The effect of measurement frequency can be effectively eliminated using a skin-effect correction chart. In addition, the measurement environment (borehole size, mud resistivity, shoulder beds, layer thickness and invasion) affects the true resistivity. To eliminate these effects, borehole correction charts, shoulder bed correction charts and tornado charts are computed based on the real tool structure. Based on these correction charts, well logging data can be corrected automatically by a suitable interpolation method, which is convenient and fast. Verified against actual logging data in vertical wells, this method can obtain the true resistivity of the formation.
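    The automatic chart-based correction amounts to interpolating a precomputed chart and applying the resulting factor to the measured value. A minimal sketch for a single borehole-correction chart follows; the chart values and names are hypothetical, invented for illustration (a real workflow would interpolate multi-dimensional charts over borehole size and mud resistivity as well):

    ```python
    import numpy as np

    # Hypothetical borehole-correction chart for one borehole size and mud
    # resistivity: apparent resistivity -> multiplicative correction factor.
    CHART_RA = np.array([1.0, 2.0, 5.0, 10.0, 20.0])    # apparent res., ohm-m
    CHART_F  = np.array([1.20, 1.12, 1.05, 1.02, 1.00])  # correction factor

    def borehole_correct(apparent_resistivity):
        """Apply the chart by linear interpolation, mirroring the automatic
        interpolation-based correction described in the text."""
        factor = np.interp(apparent_resistivity, CHART_RA, CHART_F)
        return apparent_resistivity * factor
    ```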

  9. Effect of novel inhaler technique reminder labels on the retention of inhaler technique skills in asthma: a single-blind randomized controlled trial.

    PubMed

    Basheti, Iman A; Obeidat, Nathir M; Reddel, Helen K

    2017-02-09

    Inhaler technique can be corrected with training, but skills drop off quickly without repeated training. The aim of our study was to explore the effect of novel inhaler technique labels on the retention of correct inhaler technique. In this single-blind randomized parallel-group active-controlled study, clinical pharmacists enrolled asthma patients using controller medication by Accuhaler [Diskus] or Turbuhaler. Inhaler technique was assessed using published checklists (score 0-9). Symptom control was assessed by asthma control test. Patients were randomized into active (ACCa; THa) and control (ACCc; THc) groups. All patients received a "Show-and-Tell" inhaler technique counseling service. Active patients also received inhaler labels highlighting their initial errors. Baseline data were available for 95 patients, 68% females, mean age 44.9 (SD 15.2) years. Mean inhaler scores were ACCa:5.3 ± 1.0; THa:4.7 ± 0.9, ACCc:5.5 ± 1.1; THc:4.2 ± 1.0. Asthma was poorly controlled (mean ACT scores ACCa:13.9 ± 4.3; THa:12.1 ± 3.9; ACCc:12.7 ± 3.3; THc:14.3 ± 3.7). After training, all patients had correct technique (score 9/9). After 3 months, there was significantly less decline in inhaler technique scores for active than control groups (mean difference: Accuhaler -1.04 (95% confidence interval -1.92, -0.16, P = 0.022); Turbuhaler -1.61 (-2.63, -0.59, P = 0.003). Symptom control improved significantly, with no significant difference between active and control patients, but active patients used less reliever medication (active 2.19 (SD 1.78) vs. control 3.42 (1.83) puffs/day, P = 0.002). After inhaler training, novel inhaler technique labels improve retention of correct inhaler technique skills with dry powder inhalers. Inhaler technique labels represent a simple, scalable intervention that has the potential to extend the benefit of inhaler training on asthma outcomes. REMINDER LABELS IMPROVE INHALER TECHNIQUE: Personalized

  10. Nerve transfers in tetraplegia I: Background and technique

    PubMed Central

    Brown, Justin M.

    2011-01-01

    Background: The recovery of hand function is consistently rated as the highest priority for persons with tetraplegia. Recovering even partial arm and hand function can have an enormous impact on independence and quality of life of an individual. Currently, tendon transfers are the accepted modality for improving hand function. In this procedure, the distal end of a functional muscle is cut and reattached at the insertion site of a nonfunctional muscle. The tendon transfer sacrifices the function at a lesser location to provide function at a more important location. Nerve transfers are conceptually similar to tendon transfers and involve cutting and connecting a healthy but less critical nerve to a more important but paralyzed nerve to restore its function. Methods: We present a case of a 28-year-old patient with a C5-level ASIA B (international classification level 1) injury who underwent nerve transfers to restore arm and hand function. Intact peripheral innervation was confirmed in the paralyzed muscle groups corresponding to finger flexors and extensors, wrist flexors and extensors, and triceps bilaterally. Volitional control and good strength were present in the biceps and brachialis muscles, the deltoid, and the trapezius. The patient underwent nerve transfers to restore finger flexion and extension, wrist flexion and extension, and elbow extension. Intraoperative motor-evoked potentials and direct nerve stimulation were used to identify donor and recipient nerve branches. Results: The patient tolerated the procedure well, with a preserved function in both elbow flexion and shoulder abduction. Conclusions: Nerve transfers are a technically feasible means of restoring the upper extremity function in tetraplegia in cases that may not be amenable to tendon transfers. PMID:21918736

  11. Chandra ACIS-I particle background: an analytical model

    NASA Astrophysics Data System (ADS)

    Bartalucci, I.; Mazzotta, P.; Bourdin, H.; Vikhlinin, A.

    2014-06-01

    Aims: Imaging and spectroscopy of X-ray extended sources require a proper characterisation of a spatially unresolved background signal. This background includes sky and instrumental components, each of which is characterised by its own spatial and spectral behaviour. While the X-ray sky background has been extensively studied in previous work, here we analyse and model the instrumental background of the ACIS-I detector on board the Chandra X-ray observatory in very faint mode. Methods: Caused by the interaction of highly energetic particles with the detector, the ACIS-I instrumental background is spectrally characterised by the superimposition of several fluorescence emission lines onto a continuum. To isolate its flux from any sky component, we fitted an analytical model of the continuum to observations performed in very faint mode with the detector in the stowed position, shielded from the sky, gathered over the eight-year period starting in 2001. The remaining emission lines were fitted to blank-sky observations of the same period. We found 11 emission lines. Analysing the spatial variation of the amplitude, energy and width of these lines has further allowed us to infer that three of these lines are presumably due to an energy correction artefact produced in the frame store. Results: We provide an analytical model that predicts the instrumental background with a precision of 2% in the continuum and 5% in the lines. We use this model to measure the flux of the unresolved cosmic X-ray background in the Chandra deep field south. We obtain a flux of 10.2 (+0.5/-0.4) × 10^-13 erg cm^-2 deg^-2 s^-1 for the [1-2] keV band and (3.8 ± 0.2) × 10^-12 erg cm^-2 deg^-2 s^-1 for the [2-8] keV band.
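    A background model of the kind described (smooth continuum plus Gaussian fluorescence lines) can be evaluated as below. The power-law continuum is an assumption made here for the sketch, as the record does not give the paper's exact continuum form, and all parameter names are illustrative:

    ```python
    import numpy as np

    def particle_background_model(energy, continuum_params, lines):
        """Evaluate continuum + Gaussian emission lines at given energies.

        continuum_params : (norm, index) of an assumed power-law continuum
        lines            : iterable of (amplitude, centre, sigma) tuples
        """
        energy = np.asarray(energy, dtype=float)
        norm, index = continuum_params
        model = norm * energy ** (-index)
        for amp, centre, sigma in lines:
            model += amp * np.exp(-0.5 * ((energy - centre) / sigma) ** 2)
        return model
    ```

    Fitting such a model to stowed-detector spectra (continuum) and blank-sky spectra (lines) separates the instrumental component so it can be subtracted before measuring the sky signal.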

  12. Nanowire growth kinetics in aberration corrected environmental transmission electron microscopy

    DOE PAGES

Chou, Yi-Chia; Panciera, Federico; Reuter, Mark C.; ...

    2016-03-15

    Here, we visualize atomic level dynamics during Si nanowire growth using aberration corrected environmental transmission electron microscopy, and compare with lower pressure results from ultra-high vacuum microscopy. We discuss the importance of higher pressure observations for understanding growth mechanisms and describe protocols to minimize effects of the higher pressure background gas.

  13. Correction of hyperopia by intrastromal cutting and biocompatible filler injection (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Freidank, Sebastian; Vogel, Alfred; Anderson, Richard R.; Birngruber, Reginald; Linz, Norbert

    2017-02-01

For ametropic eyes, LASIK is a common surgical procedure to correct the refractive error. However, the correction of hyperopia is more difficult than that of myopia because the increase of the central corneal curvature by excimer ablation is only possible by intrastromal tissue removal within a ring-like zone in the corneal periphery. For high hyperopia, the ring-shaped indentation leads to problems with the stability and reproducibility of the correction due to epithelial regrowth. Recently, it was shown that the correction of hyperopia can be achieved by implanting intracorneal inlays into a laser-dissected intrastromal pocket. In this paper we demonstrate the feasibility of a new approach in which a transparent, biocompatible liquid filler material is injected into a laser-dissected corneal pocket, and the refractive change is monitored via OCT. This technique allows for a precise and adjustable change of the corneal curvature. Precise cutting of the intrastromal pocket was achieved by focusing picosecond UV laser pulses from a microchip laser system into the cornea. After laser dissection, the transparent filler material was injected into the pocket. The increase of the refractive power by filler injection was evaluated by taking OCT images from the cornea. With this novel technique, it is possible to precisely correct hyperopia of up to 10 diopters. An astigmatism correction is also possible by using ellipsoidal intrastromal pockets.

  14. InSAR Tropospheric Correction Methods: A Statistical Comparison over Different Regions

    NASA Astrophysics Data System (ADS)

    Bekaert, D. P.; Walters, R. J.; Wright, T. J.; Hooper, A. J.; Parker, D. J.

    2015-12-01

Observing small magnitude surface displacements through InSAR is highly challenging, and requires advanced correction techniques to reduce noise. In fact, one of the largest obstacles facing the InSAR community is related to tropospheric noise correction. Spatial and temporal variations in temperature, pressure, and relative humidity result in a spatially-variable InSAR tropospheric signal, which masks smaller surface displacements due to tectonic or volcanic deformation. Correction methods applied today include those relying on weather model data, GNSS and/or spectrometer data. Unfortunately, these methods are often limited by the spatial and temporal resolution of the auxiliary data. Alternatively, a correction can be estimated from the high-resolution interferometric phase by assuming a linear or a power-law relationship between the phase and topography. For these methods, the challenge lies in separating deformation from tropospheric signals. We will present results of a statistical comparison of the state-of-the-art tropospheric corrections estimated from spectrometer products (MERIS and MODIS), a low and high spatial-resolution weather model (ERA-I and WRF), and both the conventional linear and power-law empirical methods. We evaluate the correction capability over Southern Mexico, Italy, and El Hierro, and investigate the impact of increasing cloud cover on the accuracy of the tropospheric delay estimation. We find that each method has its strengths and weaknesses, and suggest that further developments should aim to combine different correction methods. All the presented methods are included in our new open-source software package called TRAIN - Toolbox for Reducing Atmospheric InSAR Noise (Bekaert et al., in review), which is available to the community. Reference: Bekaert, D., R. Walters, T. Wright, A. Hooper, and D. Parker (in review), Statistical comparison of InSAR tropospheric correction techniques, Remote Sensing of Environment.
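
    The conventional linear empirical method mentioned above amounts to a least-squares fit of interferometric phase against elevation over pixels assumed free of deformation. A minimal sketch, with an invented function name, a hypothetical deformation mask argument, and synthetic numbers:

```python
import numpy as np

def linear_tropo_correction(phase, elevation, mask=None):
    # Fit phase = k * h + c over assumed non-deforming pixels (choosing the
    # mask is the hard part in practice: deformation correlated with
    # topography biases the estimate), then subtract the trend everywhere.
    if mask is None:
        mask = np.ones(phase.shape, dtype=bool)
    k, c = np.polyfit(elevation[mask], phase[mask], 1)
    return phase - (k * elevation + c), k

# Synthetic data: stratified delay of 0.002 rad/m plus a constant offset
rng = np.random.default_rng(1)
h = rng.uniform(0.0, 2000.0, 5000)            # pixel elevations, m
phase = 0.002 * h + 0.3 + rng.normal(0.0, 0.05, h.size)
corrected, k_hat = linear_tropo_correction(phase, h)
```

    The power-law variant replaces the linear term with (h0 - h)^alpha; TRAIN implements both, along with the auxiliary-data methods.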

  15. Impact of Next-to-Leading Order Contributions to Cosmic Microwave Background Lensing.

    PubMed

    Marozzi, Giovanni; Fanizza, Giuseppe; Di Dio, Enea; Durrer, Ruth

    2017-05-26

In this Letter we study the impact on cosmological parameter estimation, from present and future surveys, due to lensing corrections on cosmic microwave background temperature and polarization anisotropies beyond leading order. In particular, we show how post-Born corrections, large-scale structure effects, and the correction due to the change in the polarization direction between the emission at the source and the detection at the observer are non-negligible in the determination of the polarization spectra. They have to be taken into account for an accurate estimation of cosmological parameters sensitive to or even based on these spectra. We study in detail the impact of higher order lensing on the determination of the tensor-to-scalar ratio r and on the estimation of the effective number of relativistic species N_{eff}. We find that neglecting higher order lensing terms can lead to misinterpreting these corrections as a primordial tensor-to-scalar ratio of about O(10^{-3}). Furthermore, it leads to a shift of the parameter N_{eff} by nearly 2σ at the level of accuracy targeted by future S4 surveys.

  16. Investigation of Overlap Correction Techniques for Application in the Micro-Pulse Lidar Network (MPLNET)

    NASA Technical Reports Server (NTRS)

    Berkoff, Timothy A.; Welton, Ellsworth J.; Campbell, James R.; Scott, Vibart S.; Spinhirne, James D.

    2003-01-01

The Micro-Pulse Lidar NETwork (MPLNET) comprises micro-pulse lidars (MPL) stationed around the globe to provide measurements of aerosol and cloud vertical distribution on a continuous basis. MPLNET sites are co-located with sunphotometers in the AErosol Robotic NETwork (AERONET) to provide joint measurements of aerosol optical depth, size, and other inherent optical properties. The IPCC 2001 report discusses the importance of obtaining routine measurements of aerosol vertical structure, especially for absorbing aerosols. MPLNET provides exactly this sort of measurement, including calculation of aerosol extinction profiles, in a near real-time basis for all sites in the network. In order to obtain aerosol profiles, near range signal returns (0-6 km) must be accurately measured by the MPL. This measurement is complicated by the instrument's overlap range, i.e., the minimum distance at which returning signals are completely within the instrument's field-of-view (FOV). Typical MPL overlap distances are large, between 5-6 km, due to the narrow FOV of the MPL receiver. A function describing the MPL overlap must be determined and used to correct signals in this range. Currently, overlap functions for MPLNET are determined using horizontal MPL measurements along a path with 10-15 km clear line-of-sight and a homogeneous atmosphere. These conditions limit the location and ease in which successful overlaps can be obtained. Furthermore, the current MPLNET process of correcting for overlap increases the uncertainty and bias error for the near range signals and the resulting aerosol extinction profiles. To address these issues, an alternative overlap correction method using a small-diameter, wide FOV receiver is being considered for potential use in MPLNET. The wide FOV receiver has a much shorter overlap distance and will be used to calculate the overlap function of the MPL receiver. This approach has a significant benefit in that overlap corrections could be obtained
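
    The wide-FOV approach can be sketched as a channel ratio: where the wide-FOV receiver already sees the full beam, the ratio of the two range-corrected signals traces the narrow-FOV overlap function. The code below is an illustrative sketch with synthetic signals, not MPLNET's actual processing (which also requires background subtraction and cross-calibration of the two channels):

```python
import numpy as np

def overlap_from_ratio(p_narrow, p_wide):
    # Estimate the narrow-FOV overlap function as the ratio of the two
    # range-corrected signals, normalised to 1 in the far field where
    # both receivers see the full beam.
    ratio = p_narrow / p_wide
    return ratio / ratio[-1]

# Synthetic test: a 1/r^2 return modulated by a linear overlap ramp to 5 km
r = np.linspace(0.1, 10.0, 100)               # range, km
true_overlap = np.clip(r / 5.0, 0.0, 1.0)
p_wide = 1.0 / r**2                           # wide FOV: full overlap assumed
p_narrow = p_wide * true_overlap              # narrow FOV: overlap-limited
estimated = overlap_from_ratio(p_narrow, p_wide)
```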

  17. 77 FR 18914 - National Motor Vehicle Title Information System (NMVTIS): Technical Corrections

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-29

    ... 1121-AA79 National Motor Vehicle Title Information System (NMVTIS): Technical Corrections AGENCY... (OJP) is promulgating this direct final rule for its National Motor Vehicle Title Information System... INFORMATION CONTACT paragraph. II. Background The National Motor Vehicle Title Information System was...

  18. Massive graviton on arbitrary background: derivation, syzygies, applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernard, Laura; Deffayet, Cédric; IHES, Institut des Hautes Études Scientifiques,Le Bois-Marie, 35 route de Chartres, F-91440 Bures-sur-Yvette

    2015-06-23

We give the detailed derivation of the fully covariant form of the quadratic action and the derived linear equations of motion for a massive graviton in an arbitrary background metric (which were presented in arXiv:1410.8302 [hep-th]). Our starting point is the de Rham-Gabadadze-Tolley (dRGT) family of ghost free massive gravities and using a simple model of this family, we are able to express this action and these equations of motion in terms of a single metric in which the graviton propagates, hence removing in particular the need for a "reference metric" which is present in the non-perturbative formulation. We show further how 5 covariant constraints can be obtained including one which leads to the tracelessness of the graviton on flat space-time and removes the Boulware-Deser ghost. This last constraint involves powers and combinations of the curvature of the background metric. The 5 constraints are obtained for a background metric which is unconstrained, i.e. which does not have to obey the background field equations. We then apply these results to the case of Einstein space-times, where we show that the 5 constraints become trivial, and Friedmann-Lemaître-Robertson-Walker space-times, for which we correct in particular some results that appeared elsewhere. To reach our results, we derive several non-trivial identities, syzygies, involving the graviton fields, its derivatives and the background metric curvature. These identities have their own interest. We also discover that there exist backgrounds for which the dRGT equations cannot be unambiguously linearized.

  19. Massive graviton on arbitrary background: derivation, syzygies, applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernard, Laura; Deffayet, Cédric; Strauss, Mikael von, E-mail: bernard@iap.fr, E-mail: deffayet@iap.fr, E-mail: strauss@iap.fr

    2015-06-01

We give the detailed derivation of the fully covariant form of the quadratic action and the derived linear equations of motion for a massive graviton in an arbitrary background metric (which were presented in arXiv:1410.8302 [hep-th]). Our starting point is the de Rham-Gabadadze-Tolley (dRGT) family of ghost free massive gravities and using a simple model of this family, we are able to express this action and these equations of motion in terms of a single metric in which the graviton propagates, hence removing in particular the need for a "reference metric" which is present in the non-perturbative formulation. We show further how 5 covariant constraints can be obtained including one which leads to the tracelessness of the graviton on flat space-time and removes the Boulware-Deser ghost. This last constraint involves powers and combinations of the curvature of the background metric. The 5 constraints are obtained for a background metric which is unconstrained, i.e. which does not have to obey the background field equations. We then apply these results to the case of Einstein space-times, where we show that the 5 constraints become trivial, and Friedmann-Lemaître-Robertson-Walker space-times, for which we correct in particular some results that appeared elsewhere. To reach our results, we derive several non-trivial identities, syzygies, involving the graviton fields, its derivatives and the background metric curvature. These identities have their own interest. We also discover that there exist backgrounds for which the dRGT equations cannot be unambiguously linearized.

  20. A nonlinear lag correction algorithm for a-Si flat-panel x-ray detectors

    PubMed Central

    Starman, Jared; Star-Lack, Josh; Virshup, Gary; Shapiro, Edward; Fahrig, Rebecca

    2012-01-01

    Purpose: Detector lag, or residual signal, in a-Si flat-panel (FP) detectors can cause significant shading artifacts in cone-beam computed tomography reconstructions. To date, most correction models have assumed a linear, time-invariant (LTI) model and correct lag by deconvolution with an impulse response function (IRF). However, the lag correction is sensitive to both the exposure intensity and the technique used for determining the IRF. Even when the LTI correction that produces the minimum error is found, residual artifact remains. A new non-LTI method was developed to take into account the IRF measurement technique and exposure dependencies. Methods: First, a multiexponential (N = 4) LTI model was implemented for lag correction. Next, a non-LTI lag correction, known as the nonlinear consistent stored charge (NLCSC) method, was developed based on the LTI multiexponential method. It differs from other nonlinear lag correction algorithms in that it maintains a consistent estimate of the amount of charge stored in the FP and it does not require intimate knowledge of the semiconductor parameters specific to the FP. For the NLCSC method, all coefficients of the IRF are functions of exposure intensity. Another nonlinear lag correction method that only used an intensity weighting of the IRF was also compared. The correction algorithms were applied to step-response projection data and CT acquisitions of a large pelvic phantom and an acrylic head phantom. The authors collected rising and falling edge step-response data on a Varian 4030CB a-Si FP detector operating in dynamic gain mode at 15 fps at nine incident exposures (2.0%–92% of the detector saturation exposure). For projection data, 1st and 50th frame lag were measured before and after correction. For the CT reconstructions, five pairs of ROIs were defined and the maximum and mean signal differences within a pair were calculated for the different exposures and step-response edge techniques. Results: The LTI
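
    A minimal sketch of the LTI part of such a correction, using a recursive trap-state formulation of a multiexponential IRF (the NLCSC extension, in which the trapping fractions depend on exposure intensity, is not shown; the parameter values are invented for illustration):

```python
import numpy as np

def apply_lag(x, b, d):
    # Forward multiexponential LTI lag model: each measured frame is the
    # true frame plus residual signal from earlier frames, held in N
    # decaying traps (b = trapping fractions, d = per-frame decay factors).
    b, d = np.asarray(b, float), np.asarray(d, float)
    S = np.zeros_like(b)
    y = np.empty(len(x))
    for k, xk in enumerate(x):
        y[k] = xk + S.sum()          # measured = true + released lag charge
        S = d * S + b * xk           # trap a fraction of this frame's signal
    return y

def correct_lag(y, b, d):
    # Invert the model recursively: subtract the predicted lag signal, then
    # update the trap states with the just-corrected frame.
    b, d = np.asarray(b, float), np.asarray(d, float)
    S = np.zeros_like(b)
    x = np.empty(len(y))
    for k, yk in enumerate(y):
        x[k] = yk - S.sum()
        S = d * S + b * x[k]
    return x

# Round trip on a synthetic rising/falling step-response sequence
b, d = [0.05, 0.02], [0.80, 0.95]
frames = np.concatenate([np.ones(30), np.zeros(30)])
lagged = apply_lag(frames, b, d)
recovered = correct_lag(lagged, b, d)
```

    In the falling-edge frames, `lagged` stays above zero (the residual signal that causes shading artifacts), while `recovered` returns to the true step.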

  1. Error-correction coding for digital communications

    NASA Astrophysics Data System (ADS)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.

  2. Electroweak Sudakov Corrections to New Physics Searches at the LHC

    NASA Astrophysics Data System (ADS)

    Chiesa, Mauro; Montagna, Guido; Barzè, Luca; Moretti, Mauro; Nicrosini, Oreste; Piccinini, Fulvio; Tramontano, Francesco

    2013-09-01

We compute the one-loop electroweak Sudakov corrections to the production process Z(→ νν̄) + n jets, with n=1, 2, 3, in pp collisions at the LHC. It represents the main irreducible background to new physics searches at the energy frontier. The results are obtained at the leading and next-to-leading logarithmic accuracy by implementing the general algorithm of Denner and Pozzorini in the event generator for multiparton processes alpgen. For the standard selection cuts used by the ATLAS and CMS Collaborations, we show that the Sudakov corrections to the relevant observables can grow up to -40% at √s = 14 TeV. We also include the contribution due to undetected real radiation of massive gauge bosons, to show to what extent the partial cancellation with the large negative virtual corrections takes place in realistic event selections.

  3. An Analysis of College Students' Attitudes towards Error Correction in EFL Context

    ERIC Educational Resources Information Center

    Zhu, Honglin

    2010-01-01

    This article is based on a survey on the attitudes towards the error correction by their teachers in the process of teaching and learning and it is intended to improve the language teachers' understanding of the nature of error correction. Based on the analysis, the article expounds some principles and techniques that can be applied in the process…

  4. Correction of Spatial Bias in Oligonucleotide Array Data

    PubMed Central

    Lemieux, Sébastien

    2013-01-01

    Background. Oligonucleotide microarrays allow for high-throughput gene expression profiling assays. The technology relies on the fundamental assumption that observed hybridization signal intensities (HSIs) for each intended target, on average, correlate with their target's true concentration in the sample. However, systematic, nonbiological variation from several sources undermines this hypothesis. Background hybridization signal has been previously identified as one such important source, one manifestation of which appears in the form of spatial autocorrelation. Results. We propose an algorithm, pyn, for the elimination of spatial autocorrelation in HSIs, exploiting the duality of desirable mutual information shared by probes in a common probe set and undesirable mutual information shared by spatially proximate probes. We show that this correction procedure reduces spatial autocorrelation in HSIs; increases HSI reproducibility across replicate arrays; increases differentially expressed gene detection power; and performs better than previously published methods. Conclusions. The proposed algorithm increases both precision and accuracy, while requiring virtually no changes to users' current analysis pipelines: the correction consists merely of a transformation of raw HSIs (e.g., CEL files for Affymetrix arrays). A free, open-source implementation is provided as an R package, compatible with standard Bioconductor tools. The approach may also be tailored to other platform types and other sources of bias. PMID:23573083

  5. Comparison of techniques for correction of magnification of pelvic X-rays for hip surgery planning.

    PubMed

    The, Bertram; Kootstra, Johan W J; Hosman, Anton H; Verdonschot, Nico; Gerritsma, Carina L E; Diercks, Ron L

    2007-12-01

The aim of this study was to develop an accurate method for correction of magnification of pelvic x-rays to enhance the accuracy of hip surgery planning. All investigated methods aim at estimating the anteroposterior location of the hip joint in supine position to correctly position a reference object for correction of magnification. An existing method, currently used in clinical practice in our clinics, is based on estimating the position of the hip joint by palpation of the greater trochanter. It is only moderately accurate and difficult to execute reliably in clinical practice. To develop a new method, 99 patients who already had a hip implant in situ were included; this enabled determining the true location of the hip joint, deduced from the magnification of the prosthesis. Physical examination was used to obtain predictor variables possibly associated with the height of the hip joint. This included a simple dynamic hip joint examination to estimate the position of the center of rotation. Prediction equations were then constructed using regression analysis. The performance of these prediction equations was compared with the performance of the existing protocol. The mean absolute error in predicting the height of the hip joint center using the old method was 20 mm (range -79 mm to +46 mm). This was 11 mm for the new method (-32 mm to +39 mm). The prediction equation is: height (mm) = 34 + 1/2 abdominal circumference (cm). The newly developed prediction equation is a superior method for predicting the height of the hip joint center for correction of magnification of pelvic x-rays. We recommend its implementation in the departments of radiology and orthopedic surgery.
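
    The reported prediction equation, height (mm) = 34 + 1/2 × abdominal circumference (cm), translates directly into code; the function name below is ours:

```python
def predicted_hip_center_height_mm(abdominal_circumference_cm):
    # Prediction equation from the study:
    # height (mm) = 34 + 1/2 * abdominal circumference (cm)
    return 34 + 0.5 * abdominal_circumference_cm

# A patient with a 100 cm abdominal circumference: predicted height of the
# hip joint center above the table is 84 mm
print(predicted_hip_center_height_mm(100))  # 84.0
```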

  6. Mobile Image Based Color Correction Using Deblurring

    PubMed Central

    Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.

    2016-01-01

Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed by utilizing food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e. a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique by combining image de-blurring and color correction. The contribution consists of introducing an automatic camera shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space. PMID:28572697

  7. Mobile image based color correction using deblurring

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.

    2015-03-01

Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed by utilizing food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e. a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique by combining image de-blurring and color correction. The contribution consists of introducing an automatic camera shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space.

  8. Estimating background-subtracted fluorescence transients in calcium imaging experiments: a quantitative approach.

    PubMed

    Joucla, Sébastien; Franconville, Romain; Pippow, Andreas; Kloppenburg, Peter; Pouzat, Christophe

    2013-08-01

Calcium imaging has become a routine technique in neuroscience for subcellular to network level investigations. The fast progress in the development of new indicators and imaging techniques calls for dedicated reliable analysis methods. In particular, efficient and quantitative background fluorescence subtraction routines would be beneficial to most of the calcium imaging research field. A background-subtracted fluorescence transients estimation method that does not require any independent background measurement is therefore developed. This method is based on a fluorescence model fitted to single-trial data using a classical nonlinear regression approach. The model includes an appropriate probabilistic description of the acquisition system's noise, leading to accurate confidence intervals on all quantities of interest (background fluorescence, normalized background-subtracted fluorescence time course) when background fluorescence is homogeneous. An automatic procedure detecting background inhomogeneities inside the region of interest is also developed and is shown to be efficient on simulated data. The implementation and performance of the proposed method on experimental recordings from the mouse hypothalamus are presented in detail. This method, which applies to both single-cell and bulk-stained tissue recordings, should help improve the statistical comparison of fluorescence calcium signals between experiments and studies. Copyright © 2013 Elsevier Ltd. All rights reserved.
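
    The single-trial regression idea can be sketched with an ordinary nonlinear least-squares fit. The monoexponential transient model and Gaussian noise below are simplifying assumptions for illustration; the actual method fits a model with a proper probabilistic description of the acquisition system's noise:

```python
import numpy as np
from scipy.optimize import curve_fit

def fluo_model(t, B, A, tau):
    # Constant background B plus a decaying calcium transient of
    # amplitude A and time constant tau (illustrative model form).
    return B + A * np.exp(-t / tau)

# Synthetic single-trial recording
rng = np.random.default_rng(2)
t = np.linspace(0.0, 5.0, 200)                 # time, s
trace = fluo_model(t, 100.0, 40.0, 1.2) + rng.normal(0.0, 2.0, t.size)

popt, _ = curve_fit(fluo_model, t, trace, p0=(80.0, 20.0, 1.0))
B_hat = popt[0]                                # estimated background
dff = (trace - B_hat) / B_hat                  # normalized background-subtracted
```

    The key point is that the background estimate comes from the fit itself, with no independent background measurement required.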

  9. Investigation of background acoustical effect on online surveys: A case study of a farmers' market customer survey

    NASA Astrophysics Data System (ADS)

    Tang, Xingdi

Since the mid-1990s, the internet has become a new platform for surveys. Previous studies have discussed the visual design features of internet surveys. However, the application of acoustics as a design characteristic of online surveys has been rarely investigated. The present study aimed to fill that research gap. The purpose of the study was to assess the impact of background sound on respondents' engagement and satisfaction with online surveys. Two forms of background sound were evaluated: audio recorded in studios and audio edited with the convolution reverb technique. The author recruited 80 undergraduate students for the experiment. These students were assigned to one of three groups. Each of the three groups was asked to evaluate their engagement and satisfaction with a specific online survey. The content of the online survey was the same. However, the three groups were exposed to the online survey with no background sound, with background sound recorded in studios, and with background sound edited with the convolution reverb technique. The results showed no significant difference in engagement and satisfaction among the three groups of online surveys: without background sound, with background sound recorded in studios, and with background sound edited with the convolution reverb technique. The author suggests that background sound does not contribute to online surveys in all contexts. Industry practitioners should be careful to evaluate the survey context to decide whether background sound should be added. In particular, ear-piercing noise or acoustics that may link to respondents' unpleasant experiences should be avoided. Moreover, although the results did not support the advantage of the convolution reverb technique in improving respondents' engagement and satisfaction, the author suggests that the potential of the convolution reverb technique in online survey applications cannot be totally denied, since it may be useful for some contexts which need further

  10. Correction methods for underwater turbulence degraded imaging

    NASA Astrophysics Data System (ADS)

    Kanaev, A. V.; Hou, W.; Restaino, S. R.; Matt, S.; Gładysz, S.

    2014-10-01

The use of remote sensing techniques such as adaptive optics and image restoration post-processing to correct for aberrations in a wavefront of light propagating through a turbulent environment has become customary for many areas including astronomy, medical imaging, and industrial applications. EO imaging underwater has mainly concentrated on overcoming scattering effects rather than dealing with underwater turbulence. However, the effects of turbulence have a crucial impact over long image-transmission ranges and, under extreme turbulence conditions, become important over path lengths of a few feet. Our group has developed a program that attempts to define under which circumstances application of atmospheric remote sensing techniques could be envisioned. In our experiments we employ the NRL Rayleigh-Bénard convection tank for a simulated turbulence environment at Stennis Space Center, MS. A 5 m long water tank is equipped with heating and cooling plates that generate a well-measured thermal gradient that in turn produces various degrees of turbulence. The image or laser beam spot can be propagated along the tank's length where it is distorted by induced turbulence. In this work we report on the experimental and theoretical findings of the ongoing program. The paper will introduce the experimental setup, the techniques used, and the measurements made, as well as describe novel methods for post-processing and correction of images degraded by underwater turbulence.

  11. Incidence of Speech-Correcting Surgery in Children With Isolated Cleft Palate.

    PubMed

    Gustafsson, Charlotta; Heliövaara, Arja; Leikola, Junnu; Rautio, Jorma

    2018-01-01

    Speech-correcting surgeries (pharyngoplasty) are performed to correct velopharyngeal insufficiency (VPI). This study aimed to analyze the need for speech-correcting surgery in children with isolated cleft palate (ICP) and to determine differences among cleft extent, gender, and primary technique used. In addition, we assessed the timing and number of secondary procedures performed and the incidence of operated fistulas. Retrospective medical chart review study from hospital archives and electronic records. These comprised the 423 consecutive nonsyndromic children (157 males and 266 females) with ICP treated at the Cleft Palate and Craniofacial Center of Helsinki University Hospital during 1990 to 2016. The total incidence of VPI surgery was 33.3% and the fistula repair rate, 7.8%. Children with cleft of both the hard and soft palate (n = 300) had a VPI secondary surgery rate of 37.3% (fistula repair rate 10.7%), whereas children with only cleft of the soft palate (n = 123) had a corresponding rate of 23.6% (fistula repair rate 0.8%). Gender and primary palatoplasty technique were not considered significant factors in need for VPI surgery. The majority of VPI surgeries were performed before school age. One fifth of patients receiving speech-correcting surgery had more than one subsequent procedure. The need for speech-correcting surgery and fistula repair was related to the severity of the cleft. Although the majority of the corrective surgeries were done before the age of 7 years, a considerable number were performed at a later stage, necessitating long-term observation.

  12. Correcting deformities of the aged earlobe.

    PubMed

    Connell, Bruce F

    2005-01-01

    An earlobe that appears aged or malpositioned can sabotage the results of a well performed face lift. The most frequently noted sign of a naturally aged earlobe is increased length. Improper planning of face lift incisions may also result in disfigurement of the ear. The author suggests simple excisional techniques to correct the aged earlobe, as well as methods to avoid subsequent earlobe distortion when performing a face lift.

  13. New window into stochastic gravitational wave background.

    PubMed

    Rotti, Aditya; Souradeep, Tarun

    2012-11-30

    A stochastic gravitational wave background (SGWB) would gravitationally lens the cosmic microwave background (CMB) photons. We correct the results provided in existing literature for modifications to the CMB polarization power spectra due to lensing by gravitational waves. Weak lensing by gravitational waves distorts all four CMB power spectra; however, its effect is most striking in the mixing of power between the E mode and B mode of CMB polarization. This suggests the possibility of using measurements of the CMB angular power spectra to constrain the energy density (Ω(GW)) of the SGWB. Using current data sets (QUAD, WMAP, and ACT), we find that the most stringent constraints on the present Ω(GW) come from measurements of the angular power spectra of CMB temperature anisotropies. In the near future, more stringent bounds on Ω(GW) can be expected with improved upper limits on the B modes of CMB polarization. Any detection of B modes of CMB polarization above the expected signal from large scale structure lensing could be a signal for a SGWB.

  14. Normalized Shape and Location of Perturbed Craniofacial Structures in the Xenopus Tadpole Reveal an Innate Ability to Achieve Correct Morphology

    PubMed Central

    Vandenberg, Laura N.; Adams, Dany S.; Levin, Michael

    2012-01-01

    Background Embryonic development can often adjust its morphogenetic processes to counteract external perturbation. The existence of self-monitoring responses during pattern formation is of considerable importance to the biomedicine of birth defects, but has not been quantitatively addressed. To understand the computational capabilities of biological tissues in a molecularly-tractable model system, we induced craniofacial defects in Xenopus embryos, then tracked tadpoles with craniofacial deformities and used geometric morphometric techniques to characterize changes in the shape and position of the craniofacial structures. Results Canonical variate analysis revealed that the shapes and relative positions of perturbed jaws and branchial arches were corrected during the first few months of tadpole development. Analysis of the relative movements of the anterior-most structures indicates that misplaced structures move along the anterior-posterior and left-right axes in ways that are significantly different from their normal movements. Conclusions Our data suggest a model in which craniofacial structures utilize a measuring mechanism to assess and adjust their location relative to other local organs. Understanding the correction mechanisms at work in this system could lead to the better understanding of the adaptive decision-making capabilities of living tissues and suggest new approaches to correct birth defects in humans. PMID:22411736

  15. [Lumbar canal stenosis in achondroplasia. Prevention and correction of lumbosacral lordosis].

    PubMed

    Gómez Prat, A; García Ollé, L; Ginebreda Martí, I; Gairí Tahull, J; Vilarrubias Guillamet, J

    2001-02-01

    To determine through the measurement of different angles the correction of lumbar hyperlordosis after bilateral femoral lengthening using the Icatme technique and to assess the absence of neurological symptomatology secondary to stenosis of the lumbar canal after femoral lengthening. Thirty-four patients with achondroplasia were studied. Mean age was 22.3 years. The patients underwent femoral lengthening using the Icatme technique. X rays of the lateral rachis taken before and after lengthening were used to measure a series of angles. The lumbar lordosis angle, Sez's angle and the L5S1 angle decreased while the lumbosacral angle, Jungham's angle and the sacrum angle increased, leading to correction of lumbar hyperlordosis, verticalization of the sacrum and improvement in thoracolumbar and lumbosacral inflection. Values were similar to the standard for individuals without achondroplasia. Femoral lengthening using the Icatme technique in achondroplastics modifies the statics of the lumbar spine, making them similar to those of nonachondroplastics. The procedure corrects lumbar hyperlordosis and prevents the appearance of neurological symptomatology due to stenosis of the lumbar canal. The incidence of neurological complications due to stenosis of the lumbar canal in achondroplastics who have undergone femoral lengthening is low compared with that of achondroplastics of the same age and sex who have not undergone this procedure.

  16. Studies of the Low-energy Gamma Background

    NASA Astrophysics Data System (ADS)

    Bikit, K.; Mrđa, D.; Bikit, I.; Slivka, J.; Veskovic, M.; Knezevic, D.

    Investigation of the contributions to the low-energy part of the background gamma spectrum (below 100 keV), and knowledge of the detection efficiency in this region, are important for both fundamental and applied research. In this work, the components contributing to the low-energy region of the background gamma spectrum for a shielded detector are analyzed, including the production and spectral distribution of muon-induced continuous low-energy radiation in the vicinity of a high-purity germanium detector. In addition, the detection efficiency for the low-energy gamma region is determined using the GEANT4 simulation package. This technique offers an excellent opportunity to predict the detector response in this region. Unfortunately, the often poorly known dead-layer thickness on the surface of the extended-range detector, as well as some processes that are not incorporated in the simulation (e.g., charge collection from the detector active volume), may limit the reliability of the simulation technique. Thus, the 14, 17, 21, 26, 33, and 59.5 keV transitions of a calibrated 241Am point source were used to check the simulated efficiencies.

  17. A novel baseline correction method using convex optimization framework in laser-induced breakdown spectroscopy quantitative analysis

    NASA Astrophysics Data System (ADS)

    Yi, Cancan; Lv, Yong; Xiao, Han; Ke, Ke; Yu, Xun

    2017-12-01

    For the laser-induced breakdown spectroscopy (LIBS) quantitative analysis technique, baseline correction is an essential part of LIBS data preprocessing. Baseline drift is widespread: it is generated by fluctuations in laser energy, inhomogeneity of sample surfaces, and background noise, and it has attracted the interest of many researchers. Most prevalent algorithms require presetting key parameters, such as a suitable spline function or the fitting order, and thus lack adaptability. Based on the characteristics of LIBS signals, namely the sparsity of spectral peaks and the low-pass filtered nature of the baseline, a novel baseline correction and spectral denoising method is studied in this paper. The technique uses a convex optimization scheme to form a non-parametric baseline correction model. Meanwhile, an asymmetric penalty function is used to enhance the signal-to-noise ratio (SNR) of the LIBS signal and improve reconstruction precision. Furthermore, an efficient iterative algorithm is applied to the optimization process to ensure convergence. To validate the proposed method, the concentrations of chromium (Cr), manganese (Mn), and nickel (Ni) in 23 certified high-alloy steel samples were assessed using quantitative models based on Partial Least Squares (PLS) and Support Vector Machine (SVM). Because it requires no prior knowledge of sample composition and no mathematical hypothesis, the proposed method achieves better accuracy in quantitative analysis than competing methods and fully demonstrates its adaptive ability.
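
    The abstract does not spell out the exact objective function, but the ingredients it names (sparse peaks, a low-pass baseline, an asymmetric penalty, iterative convex optimization) are shared with the classical asymmetric least squares family of baseline estimators. Below is a minimal numpy sketch of that family, not the authors' algorithm; the smoothness weight `lam`, asymmetry `p`, and the synthetic spectrum are illustrative assumptions.

```python
import numpy as np

def asls_baseline(y, lam=1e4, p=0.01, n_iter=10):
    """Asymmetric least squares baseline estimate.

    Minimizes sum_i w_i*(y_i - z_i)^2 + lam*||D2 z||^2, where D2 is the
    second-difference operator and the weights w_i are small for points
    above the baseline (candidate peaks) and large for points below it.
    """
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)   # second-difference operator
    penalty = lam * (D.T @ D)
    w = np.ones(n)
    for _ in range(n_iter):
        z = np.linalg.solve(np.diag(w) + penalty, w * y)
        w = np.where(y > z, p, 1.0 - p)   # asymmetric re-weighting
    return z

# Synthetic LIBS-like spectrum: linear baseline drift plus two narrow
# emission peaks (all values illustrative).
x = np.linspace(0.0, 1.0, 200)
true_baseline = 0.5 + 0.4 * x
peaks = np.exp(-((x - 0.3) / 0.01) ** 2) + 0.7 * np.exp(-((x - 0.7) / 0.01) ** 2)
spectrum = true_baseline + peaks

corrected = spectrum - asls_baseline(spectrum)
```

    The asymmetric weighting is what makes the estimate hug the lower envelope of the signal: points above the current baseline are treated as peak candidates and penalized only weakly.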

  18. Island custom blocking technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carabetta, R.J.

    The technique of Island blocking is being used more frequently since the advent of our new head and neck blocking techniques and the implementation of a newly devised lung protocol. The system presented affords the mould room personnel a quick and accurate means of island block fabrication without the constant remeasuring or subtle shifting to approximate correct placement. The cookie cutter is easily implemented into any department's existing block cutting techniques. The device is easily and inexpensively made either in a machine shop or acquired by contacting the author.

  19. Application of the minimum correlation technique to the correction of the magnetic field measured by magnetometers on spacecraft

    NASA Technical Reports Server (NTRS)

    Mariani, F.

    1979-01-01

    Some aspects of the problem of obtaining precise, absolute determinations of the low magnetic field vector in the interplanetary medium are addressed. On a real spacecraft there is always the possibility of a spurious field, which includes the spacecraft residual field and/or a possible field from the sensors themselves, due to electronic drifts or changes in the magnetic properties of the sensor cores. These latter effects may occur during storage of the sensors prior to launch and/or in flight. The reliability is demonstrated for a method which postulates that there should be no correlation between changes in the measured field magnitude and changes in the measured inclination of the field with respect to any one of three fixed Cartesian component directions. Application of this minimum correlation technique to data from IMP-8 and Helios 1-2 shows that it is appropriate for determining the zero offset corrections of triaxial magnetometers. In general, on the order of 1000 consecutive data points are sufficient for a good determination.
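
    As a rough illustration of the idea, the sketch below (not the paper's implementation) simulates a field of nearly constant magnitude with a spurious x-axis offset, and recovers the offset by scanning for the candidate value that minimizes the correlation between the field magnitude and the field's inclination to the x axis. For simplicity it correlates the signals themselves rather than their changes; all field parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "interplanetary" field: nearly constant magnitude, slowly
# varying direction, plus an unknown spurious offset on the x sensor.
n = 2000
t = np.linspace(0.0, 6.0, n)
theta = 0.5 + 0.3 * np.sin(t)                 # inclination to the x axis
phi = np.cumsum(rng.normal(0.0, 0.05, n))     # rotation about the x axis
mag_true = 5.0 + rng.normal(0.0, 0.05, n)     # nT; small fluctuations
B_true = np.column_stack([
    mag_true * np.cos(theta),
    mag_true * np.sin(theta) * np.cos(phi),
    mag_true * np.sin(theta) * np.sin(phi),
])
true_offset = 0.8                             # nT, x axis only (assumed)
B_meas = B_true + np.array([true_offset, 0.0, 0.0])

def correlation_cost(offset):
    """|correlation| between field magnitude and the cosine of the field's
    inclination to x, after removing a candidate x offset. A wrong offset
    couples magnitude to direction; the true offset decouples them."""
    Bc = B_meas - np.array([offset, 0.0, 0.0])
    mag = np.linalg.norm(Bc, axis=1)
    cos_incl = Bc[:, 0] / mag
    return abs(np.corrcoef(mag, cos_incl)[0, 1])

offsets = np.linspace(-2.0, 2.0, 401)         # 0.01 nT scan grid
estimated = offsets[np.argmin([correlation_cost(o) for o in offsets])]
```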

  20. Breast tissue decomposition with spectral distortion correction: A postmortem study

    PubMed Central

    Ding, Huanjun; Zhao, Bo; Baturin, Pavlo; Behroozi, Farnaz; Molloi, Sabee

    2014-01-01

    Purpose: To investigate the feasibility of an accurate measurement of water, lipid, and protein composition of breast tissue using a photon-counting spectral computed tomography (CT) with spectral distortion corrections. Methods: Thirty-eight postmortem breasts were imaged with a cadmium-zinc-telluride-based photon-counting spectral CT system at 100 kV. The energy-resolving capability of the photon-counting detector was used to separate photons into low and high energy bins with a splitting energy of 42 keV. The estimated mean glandular dose for each breast ranged from 1.8 to 2.2 mGy. Two spectral distortion correction techniques were implemented, respectively, on the raw images to correct the nonlinear detector response due to pulse pileup and charge-sharing artifacts. Dual energy decomposition was then used to characterize each breast in terms of water, lipid, and protein content. In the meantime, the breasts were chemically decomposed into their respective water, lipid, and protein components to provide a gold standard for comparison with dual energy decomposition results. Results: The accuracy of the tissue compositional measurement with spectral CT was determined by comparing to the reference standard from chemical analysis. The averaged root-mean-square error in percentage composition was reduced from 15.5% to 2.8% after spectral distortion corrections. Conclusions: The results indicate that spectral CT can be used to quantify the water, lipid, and protein content in breast tissue. The accuracy of the compositional analysis depends on the applied spectral distortion correction technique. PMID:25281953
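
    The dual energy decomposition step can be illustrated as a small linear system: two energy-bin attenuation measurements plus the constraint that the water, lipid, and protein volume fractions sum to one give three equations in three unknowns. The attenuation coefficients below are illustrative placeholders, not calibrated values from the study.

```python
import numpy as np

# Assumed effective linear attenuation coefficients (1/cm) for
# [water, lipid, protein] in the low and high energy bins.
mu_low = np.array([0.25, 0.21, 0.28])
mu_high = np.array([0.18, 0.16, 0.20])

def decompose(att_low, att_high):
    """Solve for (water, lipid, protein) volume fractions from the two
    energy-bin attenuations plus the sum-to-one volume constraint."""
    A = np.vstack([mu_low, mu_high, np.ones(3)])
    b = np.array([att_low, att_high, 1.0])
    return np.linalg.solve(A, b)

# Forward-simulate a tissue that is 50% water, 40% lipid, 10% protein,
# then recover the composition from its two attenuation measurements.
f_true = np.array([0.5, 0.4, 0.1])
fractions = decompose(mu_low @ f_true, mu_high @ f_true)
```

    In practice the system is poorly conditioned because soft tissues attenuate similarly, which is exactly why the spectral distortion corrections described in the abstract matter: small detector nonlinearities translate into large compositional errors.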

  1. ForCent model development and testing using the Enriched Background Isotope Study experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parton, W.J.; Hanson, P. J.; Swanston, C.

    The ForCent forest ecosystem model was developed by making major revisions to the DayCent model including: (1) adding a humus organic pool, (2) incorporating a detailed root growth model, and (3) including plant phenological growth patterns. Observed plant production and soil respiration data from 1993 to 2000 were used to demonstrate that the ForCent model could accurately simulate ecosystem carbon dynamics for the Oak Ridge National Laboratory deciduous forest. A comparison of ForCent versus observed soil pool 14C signature (Δ14C) data from the Enriched Background Isotope Study 14C experiment (1999-2006) shows that the model correctly simulates the temporal dynamics of the 14C label as it moved from the surface litter and roots into the mineral soil organic matter pools. ForCent model validation was performed by comparing the observed Enriched Background Isotope Study experimental data with simulated live and dead root biomass Δ14C data, and with soil respiration Δ14C (mineral soil, humus layer, leaf litter layer, and total soil respiration) data. Results show that the model correctly simulates the impact of the Enriched Background Isotope Study 14C experimental treatments on soil respiration Δ14C values for the different soil organic matter pools. Model results suggest that a two-pool root growth model correctly represents root carbon dynamics and inputs to the soil. The model fitting process and sensitivity analysis exposed uncertainty in our estimates of the fraction of mineral soil in the slow and passive pools, dissolved organic carbon flux out of the litter layer into the mineral soil, and mixing of the humus layer into the mineral soil layer.

  2. Comparison of observation level versus 24-hour average atmospheric loading corrections in VLBI analysis

    NASA Astrophysics Data System (ADS)

    MacMillan, D. S.; van Dam, T. M.

    2009-04-01

    Variations in the horizontal distribution of atmospheric mass induce displacements of the Earth's surface. Theoretical estimates of the amplitude of the surface displacement indicate that the predicted surface displacement is often large enough to be detected by current geodetic techniques. In fact, the effects of atmospheric pressure loading have been detected in Global Positioning System (GPS) coordinate time series [van Dam et al., 1994; Dong et al., 2002; Scherneck et al., 2003; Zerbini et al., 2004] and very long baseline interferometry (VLBI) coordinates [Rabbel and Schuh, 1986; Manabe et al., 1991; van Dam and Herring, 1994; Schuh et al., 2003; MacMillan and Gipson, 1994; and Petrov and Boy, 2004]. Some of these studies applied the atmospheric displacement at the observation level, and in other studies, the predicted atmospheric and observed geodetic surface displacements have been averaged over 24 hours. A direct comparison of observation level and 24 hour corrections has not been carried out for VLBI to determine if one or the other approach is superior. In this presentation, we address the following questions: 1) Is it better to correct geodetic data at the observation level rather than applying corrections averaged over 24 hours to estimated geodetic coordinates a posteriori? 2) At sub-daily periods, the atmospheric mass signal is composed of two components: a tidal component and a non-tidal component. If observation level corrections reduce the scatter of VLBI data more than a posteriori corrections, is it sufficient to model only the atmospheric tides, or must the entire atmospheric load signal be incorporated into the corrections? 3) When solutions from different geodetic techniques (or analysis centers within a technique) are combined (e.g., for ITRF2008), not all solutions may have applied atmospheric loading corrections. Are any systematic effects on the estimated TRF introduced when atmospheric loading is applied?

  3. Improving the limits of detection of low background alpha emission measurements

    NASA Astrophysics Data System (ADS)

    McNally, Brendan D.; Coleman, Stuart; Harris, Jack T.; Warburton, William K.

    2018-01-01

    Alpha particle emission - even at extremely low levels - is a significant issue in the search for rare events (e.g., double beta decay, dark matter detection). Traditional measurement techniques require long counting times to measure low sample rates in the presence of much larger instrumental backgrounds. To address this, a commercially available instrument developed by XIA uses pulse shape analysis to discriminate alpha emissions produced by the sample from those produced by other surfaces of the instrument itself. Experience with this system has uncovered two residual sources of background: cosmogenics and radon emanation from internal components. An R&D program is underway to enhance the system and extend the pulse shape analysis technique further, so that these residual sources can be identified and rejected as well. In this paper, we review the theory of operation and pulse shape analysis techniques used in XIA's alpha counter, and briefly explore data suggesting the origin of the residual background terms. We will then present our approach to enhance the system's ability to identify and reject these terms. Finally, we will describe a prototype system that incorporates our concepts and demonstrates their feasibility.
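
    As a toy illustration of pulse shape discrimination (not XIA's actual algorithm or pulse model), the sketch below separates two synthetic pulse populations by their 10%-90% rise time, standing in for the distinction between alphas originating on the sample and those from other surfaces; the time constants, noise level, and cut value are all invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def rise_time(pulse, lo=0.1, hi=0.9):
    """10%-90% rise time (in samples) of a normalized pulse."""
    p = pulse / pulse.max()
    return np.argmax(p >= hi) - np.argmax(p >= lo)

def make_pulse(tau, n=500, noise=0.01):
    """Toy preamplifier-style pulse: exponential rise with time
    constant tau (in samples), plus white noise."""
    t = np.arange(n)
    return 1.0 - np.exp(-t / tau) + rng.normal(0.0, noise, n)

# Two populations: fast-rising pulses (sample-side events) and
# slow-rising pulses (events originating elsewhere in the counter).
fast = [rise_time(make_pulse(tau=10)) for _ in range(200)]
slow = [rise_time(make_pulse(tau=60)) for _ in range(200)]

cut = 60  # samples; threshold placed between the two populations
accepted = sum(r < cut for r in fast)   # kept as sample events
rejected = sum(r >= cut for r in slow)  # vetoed as background
```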

  4. ForCent Model Development and Testing using the Enriched Background Isotope Study (EBIS) Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parton, William; Hanson, Paul J; Swanston, Chris

    The ForCent forest ecosystem model was developed by making major revisions to the DayCent model including: (1) adding a humus organic pool, (2) incorporating a detailed root growth model, and (3) including plant phenological growth patterns. Observed plant production and soil respiration data from 1993 to 2000 were used to demonstrate that the ForCent model could accurately simulate ecosystem carbon dynamics for the Oak Ridge National Laboratory deciduous forest. A comparison of ForCent versus observed soil pool 14C signature (Δ14C) data from the Enriched Background Isotope Study 14C experiment (1999-2006) shows that the model correctly simulates the temporal dynamics of the 14C label as it moved from the surface litter and roots into the mineral soil organic matter pools. ForCent model validation was performed by comparing the observed Enriched Background Isotope Study experimental data with simulated live and dead root biomass Δ14C data, and with soil respiration Δ14C (mineral soil, humus layer, leaf litter layer, and total soil respiration) data. Results show that the model correctly simulates the impact of the Enriched Background Isotope Study 14C experimental treatments on soil respiration Δ14C values for the different soil organic matter pools. Model results suggest that a two-pool root growth model correctly represents root carbon dynamics and inputs to the soil. The model fitting process and sensitivity analysis exposed uncertainty in our estimates of the fraction of mineral soil in the slow and passive pools, dissolved organic carbon flux out of the litter layer into the mineral soil, and mixing of the humus layer into the mineral soil layer.

  5. Differential Deposition to Correct Surface Figure Deviations in Astronomical Grazing-Incidence X-Ray Optics

    NASA Technical Reports Server (NTRS)

    Kilaru, Kiranmayee; Ramsey, Brian D.; Gubarev, Mikhail V.

    2011-01-01

    A coating technique is being developed to correct surface figure deviations in reflective grazing-incidence X-ray optics. These optics are typically designed to have precise conic profiles, and any deviation from this profile introduced during fabrication degrades the imaging performance. To correct the mirror profiles, physical vapor deposition has been utilized to selectively deposit a filler material inside the mirror shell. The technique, termed differential deposition, has been implemented as a proof of concept on miniature X-ray optics developed at MSFC for medical-imaging applications. The technique is now being transferred to larger grazing-incidence optics suitable for astronomy, and progress to date is reported.

  6. Correction of spin diffusion during iterative automated NOE assignment

    NASA Astrophysics Data System (ADS)

    Linge, Jens P.; Habeck, Michael; Rieping, Wolfgang; Nilges, Michael

    2004-04-01

    Indirect magnetization transfer increases the observed nuclear Overhauser enhancement (NOE) between two protons in many cases, leading to an underestimation of target distances. Wider distance bounds are necessary to account for this error. However, this leads to a loss of information and may reduce the quality of the structures generated from the inter-proton distances. Although several methods for spin diffusion correction have been published, they are often not employed to derive distance restraints. This prompted us to write a user-friendly and CPU-efficient method to correct for spin diffusion that is fully integrated in our program ambiguous restraints for iterative assignment (ARIA). ARIA thus allows automated iterative NOE assignment and structure calculation with spin diffusion corrected distances. The method relies on numerical integration of the coupled differential equations which govern relaxation by matrix squaring and sparse matrix techniques. We derive a correction factor for the distance restraints from calculated NOE volumes and inter-proton distances. To evaluate the impact of our spin diffusion correction, we tested the new calibration process extensively with data from the Pleckstrin homology (PH) domain of Mus musculus β-spectrin. By comparing structures refined with and without spin diffusion correction, we show that spin diffusion corrected distance restraints give rise to structures of higher quality (notably fewer NOE violations and a more regular Ramachandran map). Furthermore, spin diffusion correction permits the use of tighter error bounds which improves the distinction between signal and noise in an automated NOE assignment scheme.
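
    The core computation described, evolving the full relaxation matrix to obtain spin-diffusion-corrected NOE volumes, can be sketched as follows. The three-proton relaxation matrix and mixing time are invented for illustration; ARIA's actual calibration is more involved. The volume matrix V = exp(-R*t_mix) is computed by the matrix-squaring approach the abstract mentions and compared with the isolated two-spin approximation (ISPA) for a distant proton pair.

```python
import numpy as np

def noe_volumes(R, t_mix, k=20):
    """NOE volume matrix V = exp(-R * t_mix), computed by matrix
    squaring: a first-order step over t_mix / 2**k, squared k times."""
    step = np.eye(R.shape[0]) - R * (t_mix / 2**k)
    for _ in range(k):
        step = step @ step
    return step

# Toy 3-proton relaxation matrix (1/s): spins 0-1 and 1-2 are close
# (large cross-relaxation), spins 0-2 are distant (small direct rate).
R = np.array([[ 2.00, -0.50, -0.02],
              [-0.50,  2.00, -0.50],
              [-0.02, -0.50,  2.00]])
t_mix = 0.3   # mixing time, s (illustrative)
V = noe_volumes(R, t_mix)

# Isolated two-spin (initial-rate) estimate of the distant 0-2 NOE;
# the full-matrix volume is larger because magnetization is relayed
# through spin 1 (spin diffusion).
ispa_02 = 0.02 * t_mix * np.exp(-2.0 * t_mix)
correction = V[0, 2] / ispa_02
```

    The ratio `correction` is the kind of factor that can be folded back into the distance calibration, so that relayed intensity is not mistaken for a short direct contact.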

  7. Platysma Flap with Z-Plasty for Correction of Post-Thyroidectomy Swallowing Deformity

    PubMed Central

    Jeon, Min Kyeong; Kang, Seok Joo

    2013-01-01

    Background Recently, the number of thyroid surgery cases has been increasing; consequently, the number of patients who visit plastic surgery departments with a chief complaint of swallowing deformity has also increased. We performed a scar correction technique on post-thyroidectomy swallowing deformity via platysma flap with Z-plasty and obtained satisfactory aesthetic and functional outcomes. Methods The authors performed operations upon 18 patients who presented a definitive retraction on the swallowing mechanism as an objective sign of swallowing deformity, or throat or neck discomfort on swallowing mechanism such as sensation of throat traction as a subjective sign after thyroidectomy from January 2009 till June 2012. The scar tissue that adhered to the subcutaneous tissue layer was completely excised. A platysma flap as mobile interference was applied to remove the continuity of the scar adhesion, and additionally, Z-plasty for prevention of midline platysma banding was performed. Results The follow-up results of the 18 patients indicated that the definitive retraction on the swallowing mechanism was completely removed. Throat or neck discomfort on the swallowing mechanism such as sensation of throat traction also was alleviated in all 18 patients. When preoperative and postoperative Vancouver scar scales were compared to each other, the scale had decreased significantly after surgery (P<0.05). Conclusions Our simple surgical method involved the formation of a platysma flap with Z-plasty as mobile interference for the correction of post-thyroidectomy swallowing deformity. This method resulted in aesthetically and functionally satisfying outcomes. PMID:23898442

  8. Background Model for the Majorana Demonstrator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cuesta, C.; Abgrall, N.; Aguayo, Estanislao

    2015-06-01

    The Majorana Collaboration is constructing a prototype system containing 40 kg of HPGe detectors to demonstrate the feasibility and potential of a future tonne-scale experiment to search for neutrinoless double-beta (0νββ) decay in 76Ge. In view of the requirement that the next generation of tonne-scale Ge-based 0νββ-decay experiments be capable of probing the neutrino mass scale in the inverted-hierarchy region, a major goal of the Majorana Demonstrator is to demonstrate a path forward to achieving a background rate at or below 1 cnt/(ROI-t-y) in the 4 keV region of interest around the Q-value at 2039 keV. This goal is pursued through a combination of a significant reduction of radioactive impurities in construction materials with analytical methods for background rejection, for example using powerful pulse shape analysis techniques profiting from the p-type point contact HPGe detector technology. The effectiveness of these methods is assessed using Geant4 simulations of the different background components whose purity levels are constrained from radioassay measurements.

  9. Correction of Congenital Auricular Deformities Using the Ear-Molding Technique.

    PubMed

    Woo, Taeyong; Kim, Young Seok; Roh, Tai Suk; Lew, Dae Hyun; Yun, In Sik

    2016-11-01

    Studies of the ear-molding technique have emphasized the importance of initiating molding early to achieve the best results. In the present study, we describe the immediate effects and long-term outcomes of this technique, focusing on children who were older than the ideal age of treatment initiation. Patients who visited our institution from July 2014 to November 2015 were included. Medical charts were reviewed to collect data on demographics, the duration of treatment, the types of deformities, and the manner of recognition of the deformity and referral to our institution. Parents were surveyed to assess the degree of improvement, the level of procedural discomfort at the end of treatment, any changes in the shape of the molded auricle, and overall satisfaction 12 months after their last follow-up visits. A review of 28 ears in 18 patients was conducted, including the following types of deformities: constricted ear (64.2%), Stahl ear (21.4%), prominent ear (7.1%), and cryptotia (7.1%). The average score for the degree of improvement, rated on a 5-point scale (1, very poor; 5, excellent), was 3.5 at the end of treatment, with a score of 2.6 for procedural discomfort (1, very mild; 5, very severe). After 12 months, the shapes of all ears were well maintained. The average overall satisfaction score was 3.6 (1, very dissatisfied; 5, very satisfied). We had reasonable outcomes in older patients. After 1 year of follow-up, these outcomes were well maintained. Patients past the ideal age at presentation can still be candidates for the molding technique.

  10. Kepler Planet Detection Metrics: Automatic Detection of Background Objects Using the Centroid Robovetter

    NASA Technical Reports Server (NTRS)

    Mullally, Fergal

    2017-01-01

    We present an automated method of identifying background eclipsing binaries masquerading as planet candidates in the Kepler planet candidate catalogs. We codify the manual vetting process for Kepler Objects of Interest (KOIs) described in Bryson et al. (2013) with a series of measurements and tests that can be performed algorithmically. We compare our automated results with a sample of manually vetted KOIs from the catalog of Burke et al. (2014) and find excellent agreement. We test the performance on a set of simulated transits and find our algorithm correctly identifies simulated false positives approximately 50% of the time, and correctly identifies 99% of simulated planet candidates.

  11. Automatic red eye correction and its quality metric

    NASA Astrophysics Data System (ADS)

    Safonov, Ilia V.; Rychagov, Michael N.; Kang, KiMin; Kim, Sang Ho

    2008-01-01

    Red eye artifacts are a troublesome defect of amateur photos. Correcting red eyes during printing without user intervention, making photos more pleasant for the observer, is an important task. A novel, efficient technique for automatic red eye correction aimed at photo printers is proposed. The algorithm is independent of face orientation and is capable of detecting paired red eyes as well as single red eyes. The approach is based on 3D tables with typicalness levels for red eyes and human skin tones, and on directional edge detection filters for processing of the redness image. Machine learning is applied for feature selection. For classification of red eye regions, a cascade of classifiers including a Gentle AdaBoost committee of Classification and Regression Trees (CART) is applied. The retouching stage includes desaturation, darkening, and blending with the initial image. Several implementations of the approach are possible, trading off detection and correction quality, processing time, and memory consumption. A numeric quality criterion for automatic red eye correction is proposed. This quality metric is constructed by applying the Analytic Hierarchy Process (AHP) to consumer opinions about correction outcomes. The proposed numeric metric helped to choose algorithm parameters via an optimization procedure. Experimental results demonstrate the high accuracy and efficiency of the proposed algorithm in comparison with existing solutions.

  12. Thin wing corrections for phase-change heat-transfer data.

    NASA Technical Reports Server (NTRS)

    Hunt, J. L.; Pitts, J. I.

    1971-01-01

    Since no methods are available for determining the magnitude of the errors incurred when the semi-infinite slab assumption is violated, a computer program was developed to calculate the heat-transfer coefficients on both sides of a finite, one-dimensional slab subject to the boundary conditions ascribed to the phase-change coating technique. The results have been correlated in the form of correction factors to the semi-infinite slab solutions in terms of parameters normally used with the technique.
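
    The kind of comparison such a program makes can be sketched with a 1-D explicit finite-difference model: a slab with convective heating on the front face and an insulated back face, compared against the semi-infinite slab surface solution theta = 1 - exp(beta^2)*erfc(beta) with beta = h*sqrt(alpha*t)/k. All material and flow values below are illustrative, not the paper's.

```python
import math
import numpy as np

# Illustrative properties (not from the paper):
alpha = 1.0e-7        # thermal diffusivity, m^2/s
k = 0.2               # conductivity, W/(m K)
rho_c = k / alpha     # volumetric heat capacity, J/(m^3 K)
h = 200.0             # convective coefficient, W/(m^2 K)
t_end = 2.0           # exposure time, s

def front_face_theta(L, nx=60):
    """Dimensionless front-face temperature (0 = initial, 1 = adiabatic
    wall limit) for a slab of thickness L: convective heating at x = 0,
    insulated back face, explicit 1-D finite differences."""
    dx = L / (nx - 1)
    dt = 0.2 * dx * dx / alpha          # stable explicit time step
    T = np.zeros(nx)
    for _ in range(int(t_end / dt)):
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        # Front node: half-cell energy balance, convection from fluid at theta = 1.
        Tn[0] = T[0] + (2.0 * dt / dx) * (h * (1.0 - T[0]) / rho_c
                                          + alpha * (T[1] - T[0]) / dx)
        Tn[-1] = Tn[-2]                 # insulated back face
        T = Tn
    return T[0]

# Semi-infinite slab surface solution assumed by the phase-change technique.
beta = h * math.sqrt(alpha * t_end) / k
theta_semi = 1.0 - math.exp(beta**2) * math.erfc(beta)

theta_thick = front_face_theta(L=5.0e-3)   # thick wall: assumption holds
theta_thin = front_face_theta(L=3.0e-4)    # thin wall: assumption violated
```

    The thin wall runs hotter than the semi-infinite prediction because the insulated back face stops heat from conducting deeper, which is exactly the error the published correction factors quantify.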

  13. The Recovery of Optical Quality after Laser Vision Correction

    PubMed Central

    Jung, Hyeong-Gi

    2013-01-01

    Purpose To evaluate the optical quality after laser in situ keratomileusis (LASIK) or serial photorefractive keratectomy (PRK) using a double-pass system and to follow the recovery of optical quality after laser vision correction. Methods This study measured the visual acuity, manifest refraction and optical quality before and one day, one week, one month, and three months after laser vision correction. Optical quality parameters including the modulation transfer function, Strehl ratio and intraocular scattering were evaluated with a double-pass system. Results This study included 51 eyes that underwent LASIK and 57 that underwent PRK. The optical quality three months post-surgery did not differ significantly between these laser vision correction techniques. Furthermore, the preoperative and postoperative optical quality did not differ significantly in either group. Optical quality recovered within one week after LASIK but took between one and three months to recover after PRK. The optical quality of patients in the PRK group seemed to recover slightly more slowly than their uncorrected distance visual acuity. Conclusions Optical quality recovers to the preoperative level after laser vision correction, so laser vision correction is efficacious for correcting myopia. The double-pass system is a useful tool for clinical assessment of optical quality. PMID:23908570

  14. Station Correction Uncertainty in Multiple Event Location Algorithms and the Effect on Error Ellipses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne

    Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL, Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We re-locate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.
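
    The proposed weighting can be illustrated with a toy estimate of a common origin-time shift from travel-time residuals: weighting each arrival by the inverse of its total variance (pick variance plus station-correction variance) downweights stations whose corrections are inconsistent. All numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed per-station correction standard deviations from a multiple-event
# inversion (seconds); stations 3 and 5 have inconsistent corrections.
sc_std = np.array([0.05, 0.05, 0.30, 0.05, 0.50])
pick_std = 0.05                       # phase-pick measurement error, s
total_var = pick_std**2 + sc_std**2

# Simulate residuals for many events: a common 0.2 s origin-time error
# plus noise scaled by each station's total uncertainty.
true_shift = 0.2
n_events = 500
resid = true_shift + rng.normal(0.0, 1.0, (n_events, len(sc_std))) * np.sqrt(total_var)

# Inverse-variance weighting: arrivals from stations with consistent
# corrections dominate the estimate.
w = 1.0 / total_var
shift_weighted = resid @ w / w.sum()
shift_unweighted = resid.mean(axis=1)

rmse_weighted = np.sqrt(np.mean((shift_weighted - true_shift) ** 2))
rmse_unweighted = np.sqrt(np.mean((shift_unweighted - true_shift) ** 2))
```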

  15. [Individual indirect bonding technique (IIBT) using set-up model].

    PubMed

    Kyung, H M

    1989-01-01

    There has been much progress in the edgewise appliance since E. H. Angle. One of the most important procedures with the edgewise appliance is correct bracket positioning. Not only the conventional edgewise appliance but also the straight wire appliance and the lingual appliance cannot be used effectively unless the bracket position is accurate. Improper bracket positioning may cause many problems during treatment, especially in the finishing stage. It may require either rebonding after removal of the malpositioned bracket, or a greater number of arch wires and more complex wire bending, making effective treatment much more difficult. This led me to develop the Individual Indirect Bonding Technique, which uses a multi-purpose set-up model to determine a correct and objective bracket position for each individual patient. This technique positions brackets more accurately than earlier indirect bonding techniques because it determines the bracket position on a set-up model fabricated to have the occlusal relationship the clinician desires. The technique is especially effective for the straight wire appliance and the lingual appliance, for which correct bracket positioning is indispensable.

  16. Correction to Hill (2005).

    PubMed

    Hill, Clara E

    2006-01-01

    Reports an error in "Therapist Techniques, Client Involvement, and the Therapeutic Relationship: Inextricably Intertwined in the Therapy Process" by Clara E. Hill (Psychotherapy: Theory, Research, Practice, Training, 2005 Win, Vol 42(4), 431-442). An author's name was incorrectly spelled in a reference. The correct reference is presented. (The following abstract of the original article appeared in record 2006-03309-003.) I propose that therapist techniques, client involvement, and the therapeutic relationship are inextricably intertwined and need to be considered together in any discussion of the therapy process. Furthermore, I present a pantheoretical model of how these three variables evolve over four stages of successful therapy: initial impression formation, beginning the therapy (involves the components of facilitating client exploration and developing case conceptualization and treatment strategies), the core work of therapy (involves the components of theory-relevant tasks and overcoming obstacles), and termination. Theoretical propositions as well as implications for training and research are presented. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  17. Correlation processing for correction of phase distortions in subaperture imaging.

    PubMed

    Tavh, B; Karaman, M

    1999-01-01

    Ultrasonic subaperture imaging combines synthetic aperture and phased array approaches and permits low-cost systems with improved image quality. In subaperture processing, a large array is synthesized using echo signals collected from a number of receive subapertures by multiple firings of a phased transmit subaperture. Tissue inhomogeneities and displacements in subaperture imaging may cause significant phase distortions on received echo signals. Correlation processing on reference echo signals can be used for correction of the phase distortions, for which the accuracy and robustness are critically limited by the signal correlation. In this study, we explore correlation processing techniques for adaptive subaperture imaging with phase correction for motion and tissue inhomogeneities. The proposed techniques use new subaperture data acquisition schemes to produce reference signal sets with improved signal correlation. The experimental test results were obtained using raw radio frequency (RF) data acquired from two different phantoms with a 3.5 MHz, 128-element transducer array. The results show that phase distortions can effectively be compensated by the proposed techniques in real-time adaptive subaperture imaging.

  18. Optimizing inhalation technique using web-based videos in obstructive lung diseases.

    PubMed

    Müller, Tobias; Müller, Annegret; Hübel, Christian; Knipel, Verena; Windisch, Wolfram; Cornelissen, Christian Gabriel; Dreher, Michael

    2017-08-01

    Inhaled agents are widely used in the treatment of chronic airway diseases. Correct technique is required to ensure appropriate drug deposition, but poor technique is common. This study investigated whether inhalation technique could be improved by patient training using short videos from the German Airway League. Outpatients from a university hospital respiratory clinic who had incorrect inhalation technique were asked to demonstrate this again immediately after viewing the training videos, and after 4-8 weeks' follow-up. Inhalation technique was rated by a study nurse using specific checklists. One hundred and twelve patients with obstructive lung disease treated with inhaled bronchodilators or corticosteroids were included. More than half (51.8%) had at least one mistake in inhalation technique at baseline. Of these, most (88%) understood the training videos, 76% demonstrated correct device use immediately after training, and 72% were still able to demonstrate correct inhalation technique at follow-up (p = 0.0008 for trend). In addition, the number of mistakes decreased significantly after video training (by 1.82 [95% confidence interval 1.39-2.25]; p < 0.0001 vs. baseline). German Airway League inhalation technique training videos were easy to understand and effectively improved inhalation technique in patients with airway diseases. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Adaptive error correction codes for face identification

    NASA Astrophysics Data System (ADS)

    Hussein, Wafaa R.; Sellahewa, Harin; Jassim, Sabah A.

    2012-06-01

    Face recognition in uncontrolled environments is greatly affected by fuzziness of face feature vectors as a result of extreme variation in recording conditions (e.g. illumination, poses or expressions) in different sessions. Many techniques have been developed to deal with these variations, resulting in improved performances. This paper aims to model template fuzziness as errors and investigate the use of error detection/correction techniques for face recognition in uncontrolled environments. Error correction codes (ECCs) have recently been used for biometric key generation but not on biometric templates. We have investigated error patterns in binary face feature vectors extracted from different image windows of differing sizes and for different recording conditions. By estimating statistical parameters for the intra-class and inter-class distributions of Hamming distances in each window, we encode with appropriate ECCs. The proposed approach is tested for binarised wavelet templates using two face databases: Extended Yale-B and Yale. We demonstrate that using different combinations of BCH-based ECCs for different blocks and different recording conditions leads to different accuracy rates, and that using ECCs results in significantly improved recognition results.
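
    The block-wise parameter estimation described above can be illustrated with a minimal sketch: estimate the mean intra-class and inter-class Hamming distances for a block, then pick a correction capacity t between them, so the code corrects within-class noise without erasing between-class differences. The helper names and the halfway heuristic are assumptions, not the paper's exact procedure.

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary sequences."""
    return sum(x != y for x, y in zip(a, b))

def choose_correction_capacity(intra, inter):
    """Pick an error-correction capacity t between the mean intra-class
    Hamming distance (template noise to be corrected) and the mean
    inter-class distance (identity differences that must survive).
    The halfway rule is an illustrative heuristic."""
    mu_intra = sum(intra) / len(intra)
    mu_inter = sum(inter) / len(inter)
    return max(int(mu_intra), int((mu_intra + mu_inter) / 2))

# Intra-class distances are small (same face, noisy conditions);
# inter-class distances are large (different faces).
t = choose_correction_capacity([2, 3, 2, 3], [10, 12, 11])
```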

  20. A color-corrected strategy for information multiplexed Fourier ptychographic imaging

    NASA Astrophysics Data System (ADS)

    Wang, Mingqun; Zhang, Yuzhen; Chen, Qian; Sun, Jiasong; Fan, Yao; Zuo, Chao

    2017-12-01

    Fourier ptychography (FP) is a novel computational imaging technique that provides both wide field of view (FoV) and high-resolution (HR) imaging capacity for biomedical imaging. Combined with information multiplexing technology, wavelength multiplexed (or color multiplexed) FP imaging can be implemented by lighting up R/G/B LED units simultaneously. Furthermore, a HR image can be recovered at each wavelength from the multiplexed dataset. This enhances the efficiency of data acquisition. However, since the same dataset of intensity measurements is used to recover the HR image at each wavelength, the mean value in each channel would converge to the same value. In this paper, a color correction strategy embedded in the multiplexing FP scheme is demonstrated, termed color-corrected wavelength multiplexed Fourier ptychography (CWMFP). Three images captured by turning on the LED array in R/G/B are required as a priori knowledge to improve the accuracy of reconstruction in the recovery process. Using the reported technique, the redundancy requirement of information multiplexed FP is reduced. Moreover, the accuracy of reconstruction in each channel is improved, with correct color reproduction of the specimen.

  1. Interactive QR code beautification with full background image embedding

    NASA Astrophysics Data System (ADS)

    Lin, Lijian; Wu, Song; Liu, Sijiang; Jiang, Bo

    2017-06-01

    QR (Quick Response) code is a kind of two-dimensional barcode first developed in the automotive industry. Nowadays, QR codes are widely used in commercial applications like product promotion, mobile payment, product information management, etc. Traditional QR codes in accordance with the international standard are reliable and fast to decode, but lack the aesthetic appearance needed to convey visual information to customers. In this work, we present a novel interactive method to generate aesthetic QR codes. Given the information to be encoded and an image to be embedded as the full QR code background, our method accepts a user's interactive strokes as hints to remove undesired parts of QR code modules, relying on the QR code error correction mechanism and background color thresholds. Compared to previous approaches, our method follows the intention of the QR code designer and can thus achieve a more pleasing result, while keeping high machine readability.

  2. Conservation of ζ with radiative corrections from heavy field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanaka, Takahiro; Yukawa Institute for Theoretical Physics, Kyoto University,Kyoto, 606-8502; Urakawa, Yuko

    2016-06-08

    In this paper, we address a possible impact of radiative corrections from a heavy scalar field χ on the curvature perturbation ζ. Integrating out χ, we derive the effective action for ζ, which includes the loop corrections of the heavy field χ. When the mass of χ is much larger than the Hubble scale H, the loop corrections of χ only yield a local contribution to the effective action and hence the effective action simply gives an action for ζ in a single field model, where, as is widely known, ζ is conserved in time after the Hubble crossing time. Meanwhile, when the mass of χ is comparable to H, the loop corrections of χ can give a non-local contribution to the effective action. Because of the non-local contribution from χ, in general, ζ may not be conserved, even if the classical background trajectory is determined only by the evolution of the inflaton. In this paper, we derive the condition that ζ is conserved in time in the presence of the radiative corrections from χ. Namely, we show that when the dilatation invariance, which is a part of the diffeomorphism invariance, is preserved at the quantum level, the loop corrections of the massive field χ do not disturb the constant evolution of ζ at super Hubble scales. In this discussion, we show the Ward-Takahashi identity for the dilatation invariance, which yields a consistency relation for the correlation functions of the massive field χ.

  3. Error correcting coding-theory for structured light illumination systems

    NASA Astrophysics Data System (ADS)

    Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben

    2017-06-01

    Intensity discrete structured light illumination systems project a series of projection patterns for the estimation of the absolute fringe order using only the temporal grey-level sequence at each pixel. This work proposes the use of error-correcting codes for pixel-wise correction of measurement errors. The use of an error correcting code is advantageous in many ways: it reduces the effect of random intensity noise, it corrects outliers near the border of the fringe commonly present when using intensity discrete patterns, and it provides robustness in case of severe measurement errors (even for burst errors where whole frames are lost). The latter aspect is particularly interesting in environments with varying ambient light as well as in safety-critical applications, such as the monitoring of deformations of components in nuclear power plants, where high reliability must be ensured even in case of short measurement disruptions. A special form of burst error is the so-called salt-and-pepper noise, which can largely be removed with error correcting codes using only the information of a given pixel. The performance of this technique is evaluated using both simulations and experiments.
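
    As a concrete instance of the idea, a Hamming(7,4) code corrects any single flipped bit in a 7-bit codeword carrying 4 data bits, which is the same principle applied to a pixel's temporal grey-level sequence. The paper's codes are tailored to its pattern sequences, so this is only an illustrative sketch.

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword.
    Bit layout (1-indexed positions 1..7): [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-indexed position of the error
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

    A single corrupted frame in the temporal sequence (one flipped bit per pixel) is thus recovered exactly, which is why burst-error robustness follows from stronger codes of the same family.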

  4. SU-F-T-70: A High Dose Rate Total Skin Electron Irradiation Technique with A Specific Inter-Film Variation Correction Method for Very Large Electron Beam Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, X; Rosenfield, J; Dong, X

    2016-06-15

    Purpose: Rotational total skin electron irradiation (RTSEI) is used in the treatment of cutaneous T-cell lymphoma. Due to inter-film uniformity variations, dosimetry measurements of a very large, very low energy electron beam are challenging. This work provides a method to improve the accuracy of flatness and symmetry measurements for the very large, low-energy treatment field used in dual-beam RTSEI. Methods: RTSEI is delivered by a dual-angle field, with the gantry at 270 ± 20 degrees, to cover the upper and lower halves of the patient body with acceptable beam uniformity. The field size is on the order of 230 cm in vertical height and 120 cm in horizontal width, and the beam energy is a degraded 6 MeV (6 mm PMMA spoiler). We utilized parallel plate chambers, Gafchromic films, and OSLDs as measuring devices for absolute dose, B-factor, stationary and rotational percent depth dose, and beam uniformity. To reduce inter-film dosimetric variation, we introduced a new correction method for analyzing beam uniformity. This correction method uses image processing techniques that combine film values before and after irradiation to compensate for dose-response differences among films. Results: Stationary and rotational depth-dose measurements demonstrated that Rp is 2 cm for the rotational case and that the maximum dose is shifted toward the surface (3 mm). Phantom dosimetry showed that after correction the vertical flatness improved to 3.01% and the horizontal flatness to 2.35%, achieving better flatness and uniformity. The absolute dose readings of calibrated films after our correction matched the readings from OSLDs. Conclusion: The proposed correction method for Gafchromic films will be a useful tool for correcting inter-film dosimetric variation in future clinical film dosimetry verification in very large fields, allowing the optimization of other parameters.

  5. A study of nuclear recoil backgrounds in dark matter detectors

    NASA Astrophysics Data System (ADS)

    Westerdale, Shawn S.

    Despite the great success of the Standard Model of particle physics, a preponderance of astrophysical evidence suggests that it cannot explain most of the matter in the universe. This so-called dark matter has eluded direct detection, though many theoretical extensions to the Standard Model predict the existence of particles with a mass on the 1-1000 GeV scale that interact only via the weak nuclear force. Particles in this class are referred to as Weakly Interacting Massive Particles (WIMPs), and their high masses and low scattering cross sections make them viable dark matter candidates. The rarity of WIMP-nucleus interactions makes them challenging to detect: any background can mask the signal they produce. Background rejection is therefore a major problem in dark matter detection. Many experiments greatly reduce their backgrounds by employing techniques to reject electron recoils. However, nuclear recoil backgrounds, which produce signals similar to what we expect from WIMPs, remain problematic. There are two primary sources of such backgrounds: surface backgrounds and neutron recoils. Surface backgrounds result from radioactivity on the inner surfaces of the detector sending recoiling nuclei into the detector. These backgrounds can be removed with fiducial cuts, at some cost to the experiment's exposure. In this dissertation we briefly discuss a novel technique for rejecting these events based on signals they make in the wavelength shifter coating on the inner surfaces of some detectors. Neutron recoils result from neutrons scattering off of nuclei in the detector. These backgrounds may produce a signal identical to what we expect from WIMPs and are extensively discussed here. We additionally present a new tool for calculating (alpha, n) yields in various materials. We introduce the concept of a neutron veto system designed to shield against, measure, and provide an anti-coincidence veto signal for background neutrons. We discuss the research and development

  6. A Study of Nuclear Recoil Backgrounds in Dark Matter Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westerdale, Shawn S.

    2016-01-01

    Despite the great success of the Standard Model of particle physics, a preponderance of astrophysical evidence suggests that it cannot explain most of the matter in the universe. This so-called dark matter has eluded direct detection, though many theoretical extensions to the Standard Model predict the existence of particles with a mass on the 1-1000 GeV scale that interact only via the weak nuclear force. Particles in this class are referred to as Weakly Interacting Massive Particles (WIMPs), and their high masses and low scattering cross sections make them viable dark matter candidates. The rarity of WIMP-nucleus interactions makes them challenging to detect: any background can mask the signal they produce. Background rejection is therefore a major problem in dark matter detection. Many experiments greatly reduce their backgrounds by employing techniques to reject electron recoils. However, nuclear recoil backgrounds, which produce signals similar to what we expect from WIMPs, remain problematic. There are two primary sources of such backgrounds: surface backgrounds and neutron recoils. Surface backgrounds result from radioactivity on the inner surfaces of the detector sending recoiling nuclei into the detector. These backgrounds can be removed with fiducial cuts, at some cost to the experiment's exposure. In this dissertation we briefly discuss a novel technique for rejecting these events based on signals they make in the wavelength shifter coating on the inner surfaces of some detectors. Neutron recoils result from neutrons scattering from nuclei in the detector. These backgrounds may produce a signal identical to what we expect from WIMPs and are extensively discussed here. We additionally present a new tool for calculating (α, n) yields in various materials. We introduce the concept of a neutron veto system designed to shield against, measure, and provide an anti-coincidence veto signal for background neutrons. We discuss the research and

  7. Turboprop IDEAL: a motion-resistant fat-water separation technique.

    PubMed

    Huo, Donglai; Li, Zhiqiang; Aboussouan, Eric; Karis, John P; Pipe, James G

    2009-01-01

    Suppression of the fat signal in MRI is very important for many clinical applications. Multi-point water-fat separation methods, such as IDEAL (Iterative Decomposition of water and fat with Echo Asymmetry and Least-squares estimation), can robustly separate water and fat signal, but inevitably increase scan time, making separated images more easily affected by patient motions. PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) and Turboprop techniques offer an effective approach to correct for motion artifacts. By combining these techniques together, we demonstrate that the new TP-IDEAL method can provide reliable water-fat separation with robust motion correction. The Turboprop sequence was modified to acquire source images, and motion correction algorithms were adjusted to assure the registration between different echo images. Theoretical calculations were performed to predict the optimal shift and spacing of the gradient echoes. Phantom images were acquired, and results were compared with regular FSE-IDEAL. Both T1- and T2-weighted images of the human brain were used to demonstrate the effectiveness of motion correction. TP-IDEAL images were also acquired for pelvis, knee, and foot, showing great potential of this technique for general clinical applications.

  8. Radon backgrounds in the DEAP-1 liquid-argon-based Dark Matter detector

    NASA Astrophysics Data System (ADS)

    Amaudruz, P.-A.; Batygov, M.; Beltran, B.; Boudjemline, K.; Boulay, M. G.; Cai, B.; Caldwell, T.; Chen, M.; Chouinard, R.; Cleveland, B. T.; Contreras, D.; Dering, K.; Duncan, F.; Ford, R.; Gagnon, R.; Giuliani, F.; Gold, M.; Golovko, V. V.; Gorel, P.; Graham, K.; Grant, D. R.; Hakobyan, R.; Hallin, A. L.; Harvey, P.; Hearns, C.; Jillings, C. J.; Kuźniak, M.; Lawson, I.; Li, O.; Lidgard, J.; Liimatainen, P.; Lippincott, W. H.; Mathew, R.; McDonald, A. B.; McElroy, T.; McFarlane, K.; McKinsey, D.; Muir, A.; Nantais, C.; Nicolics, K.; Nikkel, J.; Noble, T.; O'Dwyer, E.; Olsen, K. S.; Ouellet, C.; Pasuthip, P.; Pollmann, T.; Rau, W.; Retiere, F.; Ronquest, M.; Skensved, P.; Sonley, T.; Tang, J.; Vázquez-Jáuregui, E.; Veloce, L.; Ward, M.

    2015-03-01

    The DEAP-1 7 kg single phase liquid argon scintillation detector was operated underground at SNOLAB in order to test the techniques and measure the backgrounds inherent to single phase detection, in support of the DEAP-3600 Dark Matter detector. Backgrounds in DEAP are controlled through material selection, construction techniques, pulse shape discrimination, and event reconstruction. This report details the analysis of background events observed in three iterations of the DEAP-1 detector, and the measures taken to reduce them. The ²²²Rn decay rate in the liquid argon was measured to be between 16 and 26 μBq kg⁻¹. We found that the background spectrum near the region of interest for Dark Matter detection in the DEAP-1 detector can be described considering events from three sources: radon daughters decaying on the surface of the active volume, the expected rate of electromagnetic events misidentified as nuclear recoils due to inefficiencies in the pulse shape discrimination, and leakage of events from outside the fiducial volume due to imperfect position reconstruction. These backgrounds statistically account for all observed events, and they will be strongly reduced in the DEAP-3600 detector due to its higher light yield and simpler geometry.

  9. Practice makes perfect: self-reported adherence a positive marker of inhaler technique maintenance.

    PubMed

    Azzi, Elizabeth; Srour, Pamela; Armour, Carol; Rand, Cynthia; Bosnic-Anticevich, Sinthia

    2017-04-24

    Poor inhaler technique and non-adherence to treatment are major problems in the management of asthma. Patients can be taught how to achieve good inhaler technique, however maintenance remains problematic, with 50% of patients unable to demonstrate correct technique. The aim of this study was to determine the clinical, patient-related and/or device-related factors that predict inhaler technique maintenance. Data from a quality-controlled longitudinal community care dataset were utilized. 238 patients using preventer medications were included. Data consisted of patient demographics, clinical data, medication-related factors and patient-reported outcomes. Mixed effects logistic regression was used to identify predictors of inhaler technique maintenance at 1 month. The variables found to be independently associated with inhaler technique maintenance using logistic regression (χ²(3, n = 238) = 33.24, p < 0.001) were inhaler technique at Visit 1 (OR 7.1), device type (metered dose inhaler and dry powder inhalers) (OR 2.2) and self-reported adherent behavior in the prior 7 days (OR 1.3). This research is the first to unequivocally establish a predictive relationship between inhaler technique maintenance and actual patient adherence, reinforcing the notion that inhaler technique maintenance is more than just a physical skill. Inhaler technique maintenance has an underlying behavioral component, which future studies need to investigate. BEHAVIORAL ELEMENT TO CORRECT LONG-TERM INHALER TECHNIQUES: Patients who consciously make an effort to perfect asthma inhaler technique will maintain their skills long-term. Elizabeth Azzi at the University of Sydney, Australia, and co-workers further add evidence that there is a strong behavioral component to patients retaining correct inhaler technique over time. Poor inhaler technique can limit asthma control, affecting quality of life and increasing the chances of severe exacerbations. Azzi's team followed 238 patients to

  10. Fluorescence lifetime correlation spectroscopy for precise concentration detection in vivo by background subtraction

    NASA Astrophysics Data System (ADS)

    Gärtner, Maria; Mütze, Jörg; Ohrt, Thomas; Schwille, Petra

    2009-07-01

    In vivo studies of single molecule dynamics by means of fluorescence correlation spectroscopy can suffer from high background. Fluorescence lifetime correlation spectroscopy provides a tool to distinguish between signal and unwanted contributions via lifetime separation. By studying the motion of the RNA-induced silencing complex (RISC) within two compartments of a human cell, the nucleus and the cytoplasm, we observed clear differences in concentration as well as mobility of the protein complex between those two locations. Especially in the nucleus, where the fluorescence signal is very weak, a correction for background is crucial to provide reliable results for the particle number. Utilizing the fluorescence lifetimes of the different contributions, we show that it is possible to distinguish between the fluorescence signal and the autofluorescent background in vivo in a single measurement.
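
    The effect of background on the apparent particle number can be sketched with the standard FCS amplitude correction, in which uncorrelated background of count rate B scales the correlation amplitude by (S/(S+B))². Note the paper separates background by lifetime filtering rather than by this rate-based formula; the function below is only a minimal illustration.

```python
def background_corrected_n(apparent_n, total_rate, background_rate):
    """Correct an FCS particle-number estimate for uncorrelated
    background: background of count rate B scales the correlation
    amplitude G(0) = 1/N by (S/(S+B))**2, so the apparent N
    overestimates the true N by the inverse of that factor
    (standard FCS result; illustrative sketch only)."""
    signal_rate = total_rate - background_rate
    return apparent_n * (signal_rate / total_rate) ** 2

# With half of the detected counts being background, the apparent
# particle number is four times too large.
n_true = background_corrected_n(10.0, total_rate=100.0, background_rate=50.0)
```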

  11. Intracavity adaptive optics. 1: Astigmatism correction performance.

    PubMed

    Spinhirne, J M; Anafi, D; Freeman, R H; Garcia, H R

    1981-03-15

    A detailed experimental study has been conducted on adaptive optical control methodologies inside a laser resonator. A comparison is presented of several optimization techniques using a multidither zonal coherent optical adaptive technique system within a laser resonator for the correction of astigmatism. A dramatic performance difference is observed when optimizing on beam quality compared with optimizing on power-in-the-bucket. Experimental data are also presented on proper selection criteria for dither frequencies when controlling phase front errors. The effects of hardware limitations and design considerations on the performance of the system are presented, and general conclusions and physical interpretations on the results are made when possible.

  12. Correcting geometric and photometric distortion of document images on a smartphone

    NASA Astrophysics Data System (ADS)

    Simon, Christian; Williem; Park, In Kyu

    2015-01-01

    A set of document image processing algorithms for improving the optical character recognition (OCR) capability of smartphone applications is presented. The scope of the problem covers the geometric and photometric distortion correction of document images. The proposed framework was developed to satisfy industrial requirements. It is implemented on an off-the-shelf smartphone with limited resources in terms of speed and memory. Geometric distortions, i.e., skew and perspective distortion, are corrected by sending horizontal and vertical vanishing points toward infinity in a downsampled image. Photometric distortion includes image degradation from moiré pattern noise and specular highlights. Moiré pattern noise is removed using low-pass filters with different sizes independently applied to the background and text region. The contrast of the text in a specular highlighted area is enhanced by locally enlarging the intensity difference between the background and text while the noise is suppressed. Intensive experiments indicate that the proposed methods show a consistent and robust performance on a smartphone with a runtime of less than 1 s.

  13. 78 FR 16783 - Approval and Promulgation of Implementation Plans; Georgia; Control Techniques Guidelines and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-19

    ... Promulgation of Implementation Plans; Georgia; Control Techniques Guidelines and Reasonably Available Control...), related to reasonably available control technology (RACT) requirements. This correcting amendment corrects... October 21, 2009, SIP submittal for certain source categories for which EPA has issued control technique...

  14. Geometric correction method for 3d in-line X-ray phase contrast image reconstruction

    PubMed Central

    2014-01-01

    Background Mechanical imperfections or misalignment of X-ray phase contrast imaging (XPCI) components cause projection data to be misplaced, and thus result in blurred reconstructed computed tomography (CT) slice images or edge artifacts. The features of the biological microstructures under investigation are thereby destroyed, and the spatial resolution of the XPCI image is decreased. This makes data correction an essential pre-processing step for CT reconstruction in XPCI. Methods To remove these blurs and edge artifacts, a mathematical model for in-line XPCI is built in this paper by considering the primary geometric parameters, which include a rotation angle and a shift. Optimal geometric parameters are obtained by solving a maximization problem. An iterative approach is employed to solve the maximization problem using a two-step scheme: performing a composite geometric transformation and then a linear regression. After applying the geometric transformation with optimal parameters to the projection data, the standard filtered back-projection algorithm is used to reconstruct CT slice images. Results Numerical experiments were carried out on both synthetic and real in-line XPCI datasets. Experimental results demonstrate that the proposed method improves CT image quality by removing both blurring and edge artifacts at the same time, compared to existing correction methods. Conclusions The method proposed in this paper provides an effective projection data correction scheme and significantly improves image quality by removing both blurring and edge artifacts at the same time for in-line XPCI. It is easy to implement and can also be extended to other XPCI techniques. PMID:25069768
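
    The composite geometric transformation at the heart of the correction, a rotation followed by a shift applied to projection coordinates, can be sketched as follows. The parameter values are illustrative; the paper estimates them by solving a maximization problem rather than taking them as given.

```python
import math

def correct_projection(points, angle_deg, shift):
    """Apply the composite geometric transform (rotation by the
    estimated misalignment angle, then a shift) to 2-D projection
    coordinates. The corrected data would then be passed to standard
    filtered back-projection. Illustrative sketch only."""
    t = math.radians(angle_deg)
    c, s = math.cos(t), math.sin(t)
    return [(c * x - s * y + shift[0], s * x + c * y + shift[1])
            for x, y in points]
```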

  15. Infrared images target detection based on background modeling in the discrete cosine domain

    NASA Astrophysics Data System (ADS)

    Ye, Han; Pei, Jihong

    2018-02-01

    Background modeling is a critical technique for detecting moving targets in video surveillance. Most background modeling techniques are aimed at land monitoring and operate in the spatial domain. Establishing a background model becomes difficult when the scene is a complex, fluctuating sea surface. In this paper, the background stability and the separability between background and target are analyzed in depth in the discrete cosine transform (DCT) domain, and on this basis we propose a background modeling method. The proposed method models each frequency point as a single Gaussian to represent the background, and the target is extracted by suppressing the background coefficients. Experimental results show that our approach can establish an accurate background model for seawater, and the detection results outperform other background modeling methods in the spatial domain.
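
    A per-frequency single Gaussian background model of the kind described can be sketched as below; the exponential-update learning rate and the k-sigma decision threshold are assumed values, not parameters from the paper.

```python
class FrequencyGaussian:
    """Single Gaussian background model for one DCT frequency point.
    The learning rate alpha is an assumed value."""

    def __init__(self, alpha=0.05):
        self.alpha = alpha
        self.mean = None
        self.var = 1.0

    def update(self, coeff):
        """Fold a new frame's coefficient into the running model."""
        if self.mean is None:
            self.mean = coeff
            return
        d = coeff - self.mean
        self.mean += self.alpha * d
        self.var = (1.0 - self.alpha) * self.var + self.alpha * d * d

    def is_background(self, coeff, k=2.5):
        """Coefficients within k standard deviations of the model are
        treated as fluctuating background and suppressed; the rest are
        kept as target energy."""
        return abs(coeff - self.mean) <= k * self.var ** 0.5

# Train on a fluctuating "sea surface" coefficient, then test a target.
g = FrequencyGaussian()
for _ in range(50):
    for coeff in (10.0, 10.2, 9.8):
        g.update(coeff)
```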

  16. The Effects of Background Noise on Dichotic Listening to Consonant-Vowel Syllables

    ERIC Educational Resources Information Center

    Sequeira, Sarah Dos Santos; Specht, Karsten; Hamalainen, Heikki; Hugdahl, Kenneth

    2008-01-01

    Lateralization of verbal processing is frequently studied with the dichotic listening technique, yielding a so-called right ear advantage (REA) for consonant-vowel (CV) syllables. However, little is known about how background noise affects the REA. To address this issue, we presented CV syllables either in silence or with traffic background noise…

  17. Background field removal technique using regularization enabled sophisticated harmonic artifact reduction for phase data with varying kernel sizes.

    PubMed

    Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta

    2016-09-01

    An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with kernel sizes varying from 2 to 14 mm. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and the relationships between the true local field and the estimated local field for REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization enabled SHARP (RESHARP). A human experiment was also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP, respectively. The REV-SHARP result exhibited the highest correlation between the true and estimated local fields. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In the human experiments, no obvious errors due to artifacts were present with REV-SHARP. The proposed REV-SHARP is a new method that combines a variable spherical kernel size with Tikhonov regularization. This technique may enable more accurate background field removal and help achieve better QSM accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
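    The two figures of merit quoted in this record, the relative error and the linear-regression slope between the true and estimated local fields, can be computed directly; this generic sketch is ours, not the authors' code.

```python
import numpy as np

def field_metrics(true_field, est_field):
    """Relative error (L2 norm of the difference over the norm of the true
    field) and least-squares regression slope of estimate vs. truth."""
    err = np.linalg.norm(est_field - true_field) / np.linalg.norm(true_field)
    slope = np.polyfit(np.ravel(true_field), np.ravel(est_field), 1)[0]
    return err, slope
```

    A perfect background removal would give a relative error of 0 and a slope of 1, which is why slopes such as 1.005 (REV-SHARP) versus 0.536 (RESHARP) are meaningful comparisons.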

  18. CORRECTING PHOTOLYSIS RATES ON THE BASIS OF SATELLITE OBSERVED CLOUDS

    EPA Science Inventory

    Clouds can significantly affect photochemical activities in the boundary layer by altering radiation intensity, and therefore their correct specification in air quality models is of utmost importance. In this study we introduce a technique for using the satellite observed c...

  19. [Surgical Correction of Scoliosis: Does Intraoperative CT Navigation Prolong Operative Time?]

    PubMed

    Skála-Rosenbaum, J; Ježek, M; Džupa, V; Kadeřábek, R; Douša, P; Rusnák, R; Krbec, M

    2016-01-01

    PURPOSE OF THE STUDY The aim of the study was to compare the duration of corrective surgery for scoliosis in relation to the intra-operative use of either fluoroscopic or CT navigation. MATERIAL AND METHODS The indication for surgery was adolescent idiopathic scoliosis in younger patients and degenerative scoliosis in middle-aged or elderly patients. In a retrospective study, treatment outcomes in 43 consecutive patients operated on between April 2011 and April 2014 were compared. Only patients undergoing surgical correction of five or more spinal segments (fixation of six or more vertebrae) were included. RESULTS Transpedicular screw fixation of six to 13 vertebrae was performed under C-arm fluoroscopy guidance in 22 patients, and transpedicular screws were inserted in six to 14 vertebrae using the O-arm imaging system in 21 patients. A total of 246 screws were placed using the C-arm system and 340 screws were inserted using the O-arm system (p < 0.001). The procedures with use of the O-arm system were more complicated and required an average operative time longer by 48% (measured from the first skin incision to the completion of skin suture). However, the mean time needed for one screw placement (the sum of all surgical procedures with the use of a navigation technique divided by the number of screws placed using this technique) was the same in both techniques (19 min). DISCUSSION With good teamwork (surgeons, anaesthesiologists and a radiologist attending to the O-arm system), the time required to obtain one intra-operative CT scan is 3 to 5 minutes. The study showed that the mean time for placement of one screw was identical in both techniques although the average operative time was longer in surgery with O-arm navigation. The 19-minute interval was not the real placement time per screw. It was the sum of all operative times of surgical procedures (from first incision to suture completion including the whole approach within the range of planned stabilization

  20. Atmospheric correction of AVIRIS data in ocean waters

    NASA Technical Reports Server (NTRS)

    Terrie, Gregory; Arnone, Robert

    1992-01-01

    Hyperspectral data offer unique capabilities for characterizing the ocean environment. The spectral characterization of the composition of ocean waters can be organized into biological and terrigenous components. Biological photosynthetic pigments in ocean waters have unique spectral ocean color signatures which can be associated with different biological species. Additionally, suspended sediment has different scattering coefficients which result in ocean color signatures. Measuring the spatial distributions of these components in maritime environments provides important tools for understanding and monitoring the ocean environment. These tools have significant applications in pollution monitoring, the carbon cycle, current and water mass detection, location of fronts and eddies, sewage discharge and fate, etc. Satellite ocean color has been used to describe the spatial variability of chlorophyll, water clarity (K(sub 490)), suspended sediment concentration, currents, etc. Additionally, with improved atmospheric correction methods, ocean color measurements produced global products of spectral water-leaving radiance (L(sub W)). Ocean color results clearly indicated strong applications for characterizing the spatial and temporal variability of bio-optical oceanography. These studies were largely the result of advanced atmospheric correction techniques applied to multispectral imagery. The atmosphere contributes approximately 80-90 percent of the satellite-received radiance in the blue-green portion of the spectrum. In deep ocean waters, maximum transmission of visible radiance is achieved at 490 nm. Conversely, nearly all of the light is absorbed by the water at wavelengths greater than about 650 nm, so the water appears black there. These spectral ocean properties are exploited by algorithms developed for the atmospheric correction used in satellite ocean color processing. The objective was to apply atmospheric correction techniques that were used for processing satellite Coastal

  1. Analyzing the effectiveness of a frame-level redundancy scrubbing technique for SRAM-based FPGAs

    DOE PAGES

    Tonfat, Jorge; Lima Kastensmidt, Fernanda; Rech, Paolo; ...

    2015-12-17

    Radiation effects such as soft errors are the major threat to the reliability of SRAM-based FPGAs. This work analyzes the effectiveness of a novel scrubbing technique that uses internal frame redundancy, called Frame-level Redundancy Scrubbing (FLR-scrubbing), in correcting soft errors. This correction technique can be implemented in a coarse-grain TMR design. The FLR-scrubbing technique was implemented on a mid-size Xilinx Virtex-5 FPGA device used as a case study and was tested under neutron radiation and fault injection. Implementation results demonstrated minimum area and energy consumption overhead compared to other techniques. The time to repair a fault is also improved by using the Internal Configuration Access Port (ICAP). Lastly, neutron radiation test results demonstrated that the proposed technique is suitable for correcting accumulated SEUs and MBUs.

  2. Analyzing the effectiveness of a frame-level redundancy scrubbing technique for SRAM-based FPGAs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonfat, Jorge; Lima Kastensmidt, Fernanda; Rech, Paolo

    Radiation effects such as soft errors are the major threat to the reliability of SRAM-based FPGAs. This work analyzes the effectiveness of a novel scrubbing technique that uses internal frame redundancy, called Frame-level Redundancy Scrubbing (FLR-scrubbing), in correcting soft errors. This correction technique can be implemented in a coarse-grain TMR design. The FLR-scrubbing technique was implemented on a mid-size Xilinx Virtex-5 FPGA device used as a case study and was tested under neutron radiation and fault injection. Implementation results demonstrated minimum area and energy consumption overhead compared to other techniques. The time to repair a fault is also improved by using the Internal Configuration Access Port (ICAP). Lastly, neutron radiation test results demonstrated that the proposed technique is suitable for correcting accumulated SEUs and MBUs.

  3. Correcting for the effects of pupil discontinuities with the ACAD method

    NASA Astrophysics Data System (ADS)

    Mazoyer, Johan; Pueyo, Laurent; N'Diaye, Mamadou; Mawet, Dimitri; Soummer, Rémi; Norman, Colin

    2016-07-01

    The current generation of ground-based coronagraphic instruments uses deformable mirrors to correct for phase errors and to improve contrast levels at small angular separations. Building on these techniques, several space- and ground-based instruments are currently being developed that use two deformable mirrors to correct for both phase and amplitude errors. However, as wavefront control techniques improve, more complex telescope pupil geometries (support structures, segmentation) will soon be a limiting factor for these next-generation coronagraphic instruments. The technique presented in this proceeding, the Active Correction of Aperture Discontinuities method, takes advantage of the fact that most future coronagraphic instruments will include two deformable mirrors, and finds the mirror shapes and actuator movements that correct for the effects introduced by these complex pupil geometries. For any coronagraph previously designed for continuous apertures, this technique allows similar contrast performance to be obtained with a complex aperture (with segmentation and secondary mirror support structures), with high throughput and flexibility to adapt to changing pupil geometry (e.g., in case of segment failure or maintenance of the segments). We here present the results of a parametric analysis performed on the WFIRST pupil, for which we obtained high contrast levels with several deformable mirror setups (size, separation between them), coronagraphs (vortex charge 2, vortex charge 4, APLC) and spectral bandwidths. However, because contrast levels and separation are not the only metrics that maximize the scientific return of an instrument, we also included in this study the influence of the deformable mirror shapes on the throughput of the instrument and the sensitivity to pointing jitter. Finally, we present results obtained on another potential space-based segmented-aperture telescope. The main result of this proceeding is that we now obtain comparable performance than the

  4. Contribution to the G0 parity-violation experiment: calculation and simulation of radiative corrections and study of the background noise; Contribution a l'experience G0 de violation de la parite : calcul et simulation des corrections radiatives et etude du bruit de fond (in French)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guler, Hayg

    2003-12-17

    In the framework of quantum chromodynamics, the nucleon is made of three valence quarks surrounded by a sea of gluons and quark-antiquark pairs. Only the lightest quarks (u, d and s) contribute significantly to the nucleon properties. In G0, the parity-violating property of the weak interaction is used to determine separately the contributions of the three types of quarks to the nucleon form factors. The experiment, which takes place at the Thomas Jefferson laboratory (USA), aims at measuring the parity-violation asymmetry in electron-proton scattering. Several measurements at different squared momenta of the exchanged photon and at different kinematics (forward angle, where the proton is detected, and backward angle, where it will be the electron) will permit the separate determination of the strange-quark electric and magnetic contributions to the nucleon form factors. To extract an asymmetry with small errors, it is necessary to correct for all the beam parameters and to have high enough counting rates in the detectors. Special electronics were developed to treat the information coming from 16 scintillator pairs in each of the 8 sectors of the G0 spectrometer. A complete calculation of the radiative corrections has been done, and Monte Carlo simulations with the GEANT program have made it possible to determine the shape of the experimental spectra, including the inelastic background. This work will allow a comparison between experimental data and theoretical calculations based on the Standard Model.

  5. Ozone Correction for AM0 Calibrated Solar Cells for the Aircraft Method

    NASA Technical Reports Server (NTRS)

    Snyder, David B.; Scheiman, David A.; Jenkins, Phillip P.; Rieke, William J.; Blankenship, Kurt S.

    2002-01-01

    The aircraft solar cell calibration method has provided cells calibrated to space conditions for 37 years. However, it is susceptible to systematic errors due to ozone concentrations in the stratosphere. The present correction procedure applies a 1 percent increase to the measured I(sub SC) values. High band-gap cells are more sensitive to the ozone-absorbed wavelengths (0.4 to 0.8 microns), so it becomes important to reassess the correction technique. This paper evaluates the ozone correction to be 1 + O3 x Fo, where O3 is the total ozone along the optical path in Dobson units (du), and Fo is 29.8 x 10(exp -6)/du for a silicon solar cell, 42.6 x 10(exp -6)/du for a GaAs cell, and 57.2 x 10(exp -6)/du for an InGaP cell. These correction factors work best to correct data points obtained during the flight rather than as a correction to the final result.
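    The stated correction is a one-line formula; the sketch below hard-codes the Fo values quoted in the abstract (the function and dictionary names are our own).

```python
# Ozone correction factors Fo from the abstract, in 1/Dobson-unit (1/du)
F_O3 = {"Si": 29.8e-6, "GaAs": 42.6e-6, "InGaP": 57.2e-6}

def ozone_corrected_isc(isc_measured, ozone_du, cell="Si"):
    """Scale a measured short-circuit current by (1 + O3 * Fo), where O3 is
    the total ozone along the optical path in Dobson units."""
    return isc_measured * (1.0 + ozone_du * F_O3[cell])
```

    For a typical 300 du ozone column, the silicon correction is about 0.9%, close to the flat 1 percent used previously, while the InGaP correction is noticeably larger, which is the point of the reassessment.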

  6. Revised radiometric calibration technique for LANDSAT-4 Thematic Mapper data

    NASA Technical Reports Server (NTRS)

    Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.

    1984-01-01

    Depending on detector number, there are random fluctuations in the background level for spectral band 1 of magnitudes ranging from 2 to 3.5 digital numbers (DN). Similar variability is observed in all the other reflective bands, but with smaller magnitudes in the range 0.5 to 2.5 DN. Observations of background reference levels show that line-dependent variations in raw TM image data and in the associated calibration data can be measured and corrected within an operational environment by applying simple offset corrections on a line-by-line basis. The radiometric calibration procedure defined by the Canada Centre for Remote Sensing was revised accordingly in order to prevent striping in the output product.
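    The line-by-line offset correction described in this record can be sketched as follows, assuming (hypothetically) that each scan line carries background reference samples from which its offset is estimated; names and the averaging choice are ours.

```python
import numpy as np

def correct_line_offsets(image, background_reference):
    """Subtract a per-line offset from each scan line of `image`. The offset
    for line i is the mean of that line's background reference samples,
    which removes line-dependent background variations (striping)."""
    offsets = background_reference.mean(axis=1, keepdims=True)  # one per line
    return image - offsets
```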

  7. Processing techniques for global land 1-km AVHRR data

    USGS Publications Warehouse

    Eidenshink, Jeffery C.; Steinwand, Daniel R.; Wivell, Charles E.; Hollaren, Douglas M.; Meyer, David

    1993-01-01

    The U.S. Geological Survey's (USGS) Earth Resources Observation Systems (EROS) Data Center (EDC), in cooperation with several international science organizations, has developed techniques for processing daily Advanced Very High Resolution Radiometer (AVHRR) 1-km data of the entire global land surface. These techniques include orbital stitching, geometric rectification, radiometric calibration, and atmospheric correction. An orbital stitching algorithm was developed to combine consecutive observations acquired along an orbit by ground receiving stations into contiguous half-orbital segments. The geometric rectification process uses an AVHRR satellite model that contains modules for forward mapping, forward terrain correction, and inverse mapping with terrain correction. The correction is accomplished by using hydrologic features (coastlines and lakes) from the Digital Chart of the World. These features are rasterized into the satellite projection and matched to the AVHRR imagery using binary edge correlation techniques. The resulting coefficients are related to six attitude correction parameters: roll, roll rate, pitch, pitch rate, yaw, and altitude. The image can then be precision corrected to a variety of map projections and user-selected image frames. Because the AVHRR lacks onboard calibration for the optical wavelengths, a series of time-variant calibration coefficients derived from vicarious calibration methods is used to model the degradation profile of the instruments. Reducing atmospheric effects on AVHRR data is also important. A method has been developed that removes the effects of molecular scattering and absorption from clear-sky observations using climatological measurements of ozone. Other methods to remove the effects of water vapor and aerosols are being investigated.

  8. Thermal corrections to the Casimir energy in a general weak gravitational field

    NASA Astrophysics Data System (ADS)

    Nazari, Borzoo

    2016-12-01

    We calculate finite-temperature corrections to the Casimir energy of two conducting parallel plates in a general weak gravitational field. After solving the Klein-Gordon equation inside the apparatus, the mode frequencies are obtained in terms of the parameters of the weak background. Using Matsubara's approach to quantum statistical mechanics, gravity-induced thermal corrections to the energy density are obtained. Well-known weak static and stationary gravitational fields are analyzed, and it is found that in the low-temperature limit the energy of the system increases compared to the zero-temperature case.

  9. Plenoptic background oriented schlieren imaging

    NASA Astrophysics Data System (ADS)

    Klemkowsky, Jenna N.; Fahringer, Timothy W.; Clifford, Christopher J.; Bathel, Brett F.; Thurow, Brian S.

    2017-09-01

    The combination of the background oriented schlieren (BOS) technique with the unique imaging capabilities of a plenoptic camera, termed plenoptic BOS, is introduced as a new addition to the family of schlieren techniques. Compared to conventional single camera BOS, plenoptic BOS is capable of sampling multiple lines-of-sight simultaneously. Displacements from each line-of-sight are collectively used to build a four-dimensional displacement field, which is a vector function structured similarly to the original light field captured in a raw plenoptic image. The displacement field is used to render focused BOS images, which qualitatively are narrow depth of field slices of the density gradient field. Unlike focused schlieren methods that require manually changing the focal plane during data collection, plenoptic BOS synthetically changes the focal plane position during post-processing, such that all focal planes are captured in a single snapshot. Through two different experiments, this work demonstrates that plenoptic BOS is capable of isolating narrow depth of field features, qualitatively inferring depth, and quantitatively estimating the location of disturbances in 3D space. Such results motivate future work to transition this single-camera technique towards quantitative reconstructions of 3D density fields.

  10. Correction of tailor's bunion with the Boesch technique: a retrospective study.

    PubMed

    Legenstein, Robert; Bonomo, Johannes; Huber, Wolfgang; Boesch, Peter

    2007-07-01

    The Boesch technique(1,2) is a minimally invasive and time-saving subcutaneous subcapital metatarsal osteotomy. Since 1984, we have been using this osteotomy technique for patients with a symptomatic tailor's bunion in whom conservative treatment has failed. This distal osteotomy is stabilized by a combination of a Kirschner wire and a special dressing. The results of this technique in patients with symptomatic tailor's bunion were reviewed. Between March, 1998, and June, 2002, surgery was done in 77 feet of 65 patients with a mean age of 64.6 years. The mean followup was 56.6 (range 14 to 79) months. The 100-point American Orthopaedic Foot and Ankle Society (AOFAS) Lesser Metatarsophalangeal-Interphalangeal Scale was used for scoring. At final followup, 86.4% of the patients (57 patients, 66 feet) were free of pain. The mean 4-5 intermetatarsal angle was 12 degrees before and 8 degrees after surgery. The mean lateral deviation of the fifth metatarsal was 5.7 degrees before and 5.1 degrees after surgery. The mean fifth metatarsophalangeal angle was 17.8 degrees before and 6.2 degrees after surgery. The mean preoperative 100-point AOFAS score was 59.1 (range 23 to 88) and the postoperative score, 95.2 (range 73 to 100). The overall results were excellent in 87.9% (58 feet), good in 6.1% (4 feet), and satisfactory in 6.1%; none was poor. The advantages of the subcutaneous subcapital Boesch technique are that it is time saving, it causes less bone and soft-tissue trauma, and it is performed under local anesthesia without a tourniquet. It is an effective operative option for symptomatic tailor's bunion; excellent and good clinical and radiographic results were found in 86.4% (57 patients, 66 feet) of the patients.

  11. A multilevel correction adaptive finite element method for Kohn-Sham equation

    NASA Astrophysics Data System (ADS)

    Hu, Guanghui; Xie, Hehu; Xu, Fei

    2018-02-01

    In this paper, an adaptive finite element method is proposed for solving the Kohn-Sham equation with the multilevel correction technique. In the method, the Kohn-Sham equation is solved on a fixed and appropriately coarse mesh with the finite element method, and the finite element space is successively improved by solving derived boundary value problems on a series of adaptively and successively refined meshes. A main feature of the method is that solving the large-scale Kohn-Sham system is avoided effectively, and solving the derived boundary value problems can be handled efficiently by classical methods such as the multigrid method. Hence, significant acceleration can be obtained in solving the Kohn-Sham equation with the proposed multilevel correction technique. The performance of the method is examined by a variety of numerical experiments.

  12. Modeling background radiation in Southern Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haber, Daniel A.; Burnley, Pamela C.; Adcock, Christopher T.

    Aerial gamma ray surveys are an important tool for national security, scientific, and industrial interests in determining locations of both anthropogenic and natural sources of radioactivity. There is a relationship between radioactivity and geology, and in the past this relationship has been used to predict geology from an aerial survey. The purpose of this project is to develop a method to predict the radiologic exposure rate of geologic materials by creating a high resolution background model. The intention is for this method to be used in an emergency response scenario where the background radiation environment is unknown. Two study areas in Southern Nevada have been modeled using geologic data, images from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), geochemical data, and pre-existing low resolution aerial surveys from the National Uranium Resource Evaluation (NURE) Survey. Using these data, geospatial areas that are homogeneous in terms of K, U, and Th, referred to as background radiation units, are defined and the gamma ray exposure rate is predicted. The prediction is compared to data collected via detailed aerial survey by the Department of Energy's Remote Sensing Lab - Nellis, allowing for the refinement of the technique. By using geologic units to define radiation background units of exposed bedrock and ASTER visualizations to subdivide and define radiation background units within alluvium, successful models have been produced for Government Wash, north of Lake Mead, and for the western shore of Lake Mohave, east of Searchlight, NV.

  13. Modeling background radiation in Southern Nevada

    DOE PAGES

    Haber, Daniel A.; Burnley, Pamela C.; Adcock, Christopher T.; ...

    2017-02-06

    Aerial gamma ray surveys are an important tool for national security, scientific, and industrial interests in determining locations of both anthropogenic and natural sources of radioactivity. There is a relationship between radioactivity and geology, and in the past this relationship has been used to predict geology from an aerial survey. The purpose of this project is to develop a method to predict the radiologic exposure rate of geologic materials by creating a high resolution background model. The intention is for this method to be used in an emergency response scenario where the background radiation environment is unknown. Two study areas in Southern Nevada have been modeled using geologic data, images from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), geochemical data, and pre-existing low resolution aerial surveys from the National Uranium Resource Evaluation (NURE) Survey. Using these data, geospatial areas that are homogeneous in terms of K, U, and Th, referred to as background radiation units, are defined and the gamma ray exposure rate is predicted. The prediction is compared to data collected via detailed aerial survey by the Department of Energy's Remote Sensing Lab - Nellis, allowing for the refinement of the technique. By using geologic units to define radiation background units of exposed bedrock and ASTER visualizations to subdivide and define radiation background units within alluvium, successful models have been produced for Government Wash, north of Lake Mead, and for the western shore of Lake Mohave, east of Searchlight, NV.

  14. Anomaly-corrected supersymmetry algebra and supersymmetric holographic renormalization

    NASA Astrophysics Data System (ADS)

    An, Ok Song

    2017-12-01

    We present a systematic approach to supersymmetric holographic renormalization for a generic 5D N=2 gauged supergravity theory with matter multiplets, including its fermionic sector, with all gauge fields consistently set to zero. We determine the complete set of supersymmetric local boundary counterterms, including the finite counterterms that parameterize the choice of supersymmetric renormalization scheme. This allows us to derive holographically the superconformal Ward identities of a 4D superconformal field theory on a generic background, including the Weyl and super-Weyl anomalies. Moreover, we show that these anomalies satisfy the Wess-Zumino consistency condition. The super-Weyl anomaly implies that the fermionic operators of the dual field theory, such as the supercurrent, do not transform as tensors under rigid supersymmetry on backgrounds that admit a conformal Killing spinor, and their anticommutator with the conserved supercharge contains anomalous terms. This property is explicitly checked for a toy model. Finally, using the anomalous transformation of the supercurrent, we obtain the anomaly-corrected supersymmetry algebra on curved backgrounds admitting a conformal Killing spinor.

  15. Gaussian Process Kalman Filter for Focal Plane Wavefront Correction and Exoplanet Signal Extraction

    NASA Astrophysics Data System (ADS)

    Sun, He; Kasdin, N. Jeremy

    2018-01-01

    Currently, the ultimate limitation of space-based coronagraphy is the ability to subtract the residual PSF after wavefront correction to reveal the planet. Called reference difference imaging (RDI), the technique consists of conducting wavefront control to collect the reference point spread function (PSF) by observing a bright star, and then extracting target planet signals by subtracting a weighted sum of reference PSFs. Unfortunately, this technique is inherently inefficient because it spends a significant fraction of the observing time on the reference star rather than the target star with the planet. Recent progress in model based wavefront estimation suggests an alternative approach. A Kalman filter can be used to estimate the stellar PSF for correction by the wavefront control system while simultaneously estimating the planet signal. Without observing the reference star, the (extended) Kalman filter directly utilizes the wavefront correction data and combines the time series observations and model predictions to estimate the stellar PSF and planet signals. Because wavefront correction is used during the entire observation with no slewing, the system has inherently better stability. In this poster we show our results aimed at further improving our Kalman filter estimation accuracy by including not only temporal correlations but also spatial correlations among neighboring pixels in the images. This technique is known as a Gaussian process Kalman filter (GPKF). We also demonstrate the advantages of using a Kalman filter rather than RDI by simulating a real space exoplanet detection mission.
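    The core idea in this record, jointly estimating a time-varying stellar speckle and a constant planet signal from the same data stream, can be caricatured with a two-state scalar Kalman filter. The sketch below assumes (hypothetically) that wavefront control modulates the speckle by a known gain each frame, which is what makes the two states separately observable; the estimator described in the abstract is far more elaborate than this toy.

```python
import numpy as np

def kalman_separate(obs, gains, r=1e-4):
    """Toy per-pixel Kalman filter with state x = [speckle, planet].
    Each measurement is z_t = g_t * speckle + planet + noise, where g_t is
    a known (assumed) modulation applied by the wavefront control system.
    Returns the final state estimate."""
    x = np.zeros(2)          # state estimate: [speckle, planet]
    P = np.eye(2)            # state covariance
    for z, g in zip(obs, gains):
        H = np.array([[g, 1.0]])             # measurement model
        S = (H @ P @ H.T).item() + r         # innovation variance
        K = (P @ H.T) / S                    # Kalman gain, shape (2, 1)
        x = x + K.ravel() * (z - (H @ x).item())
        P = (np.eye(2) - K @ H) @ P
    return x
```

    With a constant gain the speckle and planet terms would be degenerate (only their sum is observable); the varying gain is what lets the filter attribute intensity to the right source, loosely analogous to how the wavefront-control time series provides diversity in the actual method.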

  16. Surgical Correction of Whistle Deformity Using Cross-Muscle Flap in Secondary Cleft Lip

    PubMed Central

    Choi, Woo Young; Kim, Gyu Bo; Han, Yun Ju

    2012-01-01

    Background The whistle deformity is one of the common sequelae of secondary cleft lip deformities. Santos reported using a crossed-denuded flap for primary cleft lip repair to prevent vermilion notching. The authors modified this technique to correct the whistle deformity, calling their version the cross-muscle flap. Methods From May 2005 to January 2011, 14 secondary unilateral cleft lip patients were treated. All suffered from a whistle deformity, which is characterized by deficiency of the central tubercle, notching in the upper lip, and bulging on the lateral segment. The mean age of the patients was 13.8 years and the mean follow-up period was 21.8 weeks. After elevation from the lateral vermilion and medial tubercle, two muscle flaps were crossed and turned over. The authors measured the three vertical heights and compared the two height ratios before and after surgery for evaluation of the postoperative results. Results None of the patients had any notable complications and the whistle deformity was corrected in all cases. The vertical height ratios at the midline on the upper lip and the affected Cupid's bow point were increased (P<0.05). The motion of the upper lip was acceptable. Conclusions A cross-muscle flap is simple and it leaves a minimal scar on the lip. We were able to reconstruct the whistle deformity in secondary unilateral cleft lip patients with a single-stage procedure using a cross-muscle flap. PMID:23094241

  17. V+jets Background and Systematic Uncertainties in Top Quark Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adomeit, Stefanie; Peters, Reinhild Yvonne

    2014-12-01

    Vector boson production in association with jets is an important process for testing perturbative quantum chromodynamics and is also a background process in top quark analyses. Measurements of vector boson production in association with light- and heavy-flavour jets are presented, performed by the D0 and CDF collaborations at the Tevatron as well as the ATLAS and CMS experiments at the LHC. Techniques applied in top quark analyses to estimate the vector boson + jets background are also discussed.

  18. Alphas and surface backgrounds in liquid argon dark matter detectors

    NASA Astrophysics Data System (ADS)

    Stanford, Christopher J.

    Current observations from astrophysics indicate the presence of dark matter, an invisible form of matter that makes up a large part of the mass of the universe. One of the leading theories for dark matter is that it is made up of Weakly Interacting Massive Particles (WIMPs). One of the ways we try to discover WIMPs is by directly detecting their interaction with regular matter. This can be done using a scintillator such as liquid argon, which gives off light when a particle interacts with it. Liquid argon (LAr) is a favorable means of detecting WIMPs because it has an inherent property that enables a technique called pulse-shape discrimination (PSD). PSD can distinguish a WIMP signal from the constant background of electromagnetic signals from other sources, like gamma rays. However, there are other background signals that PSD is not as capable of rejecting, such as those caused by alpha decays on the interior surfaces of the detector. Radioactive elements that undergo alpha decay are introduced to detector surfaces during construction by radon gas that is naturally present in the air, as well as other means. When these surface isotopes undergo alpha decay, they can produce WIMP-like signals in the detector. We present here two LAr experiments. The first (RaDOSE) discovered a property of an organic compound that led to a technique for rejecting surface alpha decays in LAr detectors with high efficiency. The second (DarkSide-50) is a dark matter experiment operated at LNGS in Italy and is the work of an international collaboration. A detailed look is given into alpha decays and surface backgrounds present in the detector, and projections are made of alpha-related backgrounds for 500 live days of data. The technique developed with RaDOSE is applied to DarkSide-50 to determine its effectiveness in practice. It is projected to suppress the surface background in DarkSide-50 by more than a factor of 1000.
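Pulse-shape discrimination in liquid argon is commonly quantified by a prompt-fraction discriminant (often called f90): nuclear recoils produce mostly fast singlet light, electron recoils mostly slow triplet light. A minimal sketch, assuming a digitized photodetector waveform with a fixed sample interval; the 90 ns window and the function name are illustrative, not the exact DarkSide pipeline:

```python
import numpy as np

def prompt_fraction(waveform, dt, prompt_window=90e-9):
    """Fraction of total scintillation light arriving within the prompt
    window (f90-style discriminant). High values suggest nuclear recoils
    (WIMP-like), low values electron recoils (gamma background)."""
    waveform = np.asarray(waveform, dtype=float)
    n_prompt = int(round(prompt_window / dt))  # samples in the prompt window
    total = waveform.sum()
    return float(waveform[:n_prompt].sum() / total) if total > 0 else 0.0
```

A cut on this quantity is what separates a candidate WIMP signal from the dominant electromagnetic background.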

  19. Comparison of different Aethalometer correction schemes and a reference multi-wavelength absorption technique for ambient aerosol data

    NASA Astrophysics Data System (ADS)

    Saturno, Jorge; Pöhlker, Christopher; Massabò, Dario; Brito, Joel; Carbone, Samara; Cheng, Yafang; Chi, Xuguang; Ditas, Florian; Hrabě de Angelis, Isabella; Morán-Zuloaga, Daniel; Pöhlker, Mira L.; Rizzo, Luciana V.; Walter, David; Wang, Qiaoqiao; Artaxo, Paulo; Prati, Paolo; Andreae, Meinrat O.

    2017-08-01

    Deriving absorption coefficients from Aethalometer attenuation data requires different corrections to compensate for artifacts related to filter-loading effects, scattering by filter fibers, and scattering by aerosol particles. In this study, two different correction schemes were applied to seven-wavelength Aethalometer data, using multi-angle absorption photometer (MAAP) data as a reference absorption measurement at 637 nm. The compensation algorithms were compared to five-wavelength offline absorption measurements obtained with a multi-wavelength absorbance analyzer (MWAA), which serves as a multiple-wavelength reference measurement. The online measurements took place in the Amazon rainforest, from the wet-to-dry transition season to the dry season (June-September 2014). The mean absorption coefficient (at 637 nm) during this period was 1.8 ± 2.1 Mm-1, with a maximum of 15.9 Mm-1. Under these conditions, the filter-loading compensation was negligible. One of the correction schemes was found to artificially increase the short-wavelength absorption coefficients. It was found that accounting for the aerosol optical properties in the scattering compensation significantly affects the absorption Ångström exponent (åABS) retrievals. Proper Aethalometer data compensation schemes are crucial to retrieve the correct åABS, which is commonly implemented in brown carbon contribution calculations. Additionally, we found that the wavelength dependence of uncompensated Aethalometer attenuation data significantly correlates with the åABS retrieved from offline MWAA measurements.
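The absorption Ångström exponent (åABS) discussed above follows from the assumed power-law wavelength dependence of absorption, b_abs ∝ λ^(−å). A minimal two-wavelength sketch (the wavelength pair below is illustrative):

```python
import numpy as np

def absorption_angstrom_exponent(b1, wl1, b2, wl2):
    """Absorption Angstrom exponent from absorption coefficients b1, b2
    measured at wavelengths wl1, wl2 (same units for each pair):
    b_abs ~ wl**(-a)  =>  a = -ln(b1/b2) / ln(wl1/wl2)."""
    return float(-np.log(b1 / b2) / np.log(wl1 / wl2))
```

Because å enters brown carbon attribution calculations, errors introduced by an inadequate scattering compensation propagate directly through this formula.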

  20. [Endoscopically assisted fronto-orbitary correction in trigonocephaly].

    PubMed

    Hinojosa, J; Esparza, J; García-Recuero, I; Romance, A

    2007-01-01

    The development of multidisciplinar Units for Craneofacial Surgery has led to a considerable decrease in morbidity even in the cases of more complex craniofacial syndromes. The use of minimally invasive techniques for the correction of some of these malformations allows the surgeon to minimize the incidence of complications by means of a decrease in the surgical time, blood salvage and shortening of postoperative hospitalization in comparison to conventional craniofacial techniques. Simple and milder craniosynostosis are best approached by these techniques and render the best results. Different osteotomies resembling standard fronto-orbital remodelling besides simple suturectomies and the use of postoperative cranial orthesis may improve the final aesthetic appearence. In endoscopic treatment of trigonocephaly the use of preauricular incisions achieves complete pterional resection, lower lateral orbital osteotomies and successful precoronal frontal osteotomies to obtain long lasting and satisfactory outcomes.

  1. Dual-beam manually-actuated distortion-corrected imaging (DMDI) with micromotor catheters.

    PubMed

    Lee, Anthony M D; Hohert, Geoffrey; Angkiriwang, Patricia T; MacAulay, Calum; Lane, Pierre

    2017-09-04

    We present a new paradigm for performing two-dimensional scanning called dual-beam manually-actuated distortion-corrected imaging (DMDI). DMDI operates by imaging the same object with two spatially-separated beams that are being mechanically scanned rapidly in one dimension with slower manual actuation along a second dimension. Registration of common features between the two imaging channels allows remapping of the images to correct for distortions due to manual actuation. We demonstrate DMDI using a 4.7 mm OD rotationally scanning dual-beam micromotor catheter (DBMC). The DBMC requires a simple, one-time calibration of the beam paths by imaging a patterned phantom. DMDI allows for distortion correction of non-uniform axial speed and rotational motion of the DBMC. We show the utility of this technique by demonstrating en face OCT image distortion correction of a manually-scanned checkerboard phantom and fingerprint scan.

  2. Accelerating EPI distortion correction by utilizing a modern GPU-based parallel computation.

    PubMed

    Yang, Yao-Hao; Huang, Teng-Yi; Wang, Fu-Nien; Chuang, Tzu-Chao; Chen, Nan-Kuei

    2013-04-01

The combination of phase demodulation and field mapping is a practical method to correct echo planar imaging (EPI) geometric distortion. However, since phase dispersion accumulates with each phase-encoding step, the calculation complexity of phase demodulation is Ny-fold higher than that of conventional image reconstruction. Thus, correcting EPI images via phase demodulation is generally a time-consuming task. Parallel computing employing general-purpose calculation on graphics processing units (GPU) can accelerate scientific computing if the algorithm is parallelized. This study proposes a method that incorporates the GPU-based technique into phase demodulation calculations to reduce computation time. The proposed parallel algorithm was applied to a PROPELLER-EPI diffusion tensor data set. The GPU-based phase demodulation method effectively reduced the EPI distortion and accelerated the computation. The total reconstruction time for the 16-slice PROPELLER-EPI diffusion tensor images with a matrix size of 128 × 128 was reduced from 1,754 seconds to 101 seconds by the parallelized 4-GPU program. GPU computing is a promising method to accelerate EPI geometric correction. The resulting reduction in the computation time of phase demodulation should accelerate postprocessing for studies performed with EPI, and should make the PROPELLER-EPI technique practical for clinical use. Copyright © 2011 by the American Society of Neuroimaging.
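The Ny-fold cost arises because each image row along the phase-encode direction needs its own demodulation of the accumulated off-resonance phase. A serial (CPU-style) sketch of a conjugate-phase correction of this general kind, assuming hybrid data already Fourier-transformed along the readout axis; variable names and conventions are illustrative, not the paper's implementation:

```python
import numpy as np

def conjugate_phase_epi(hybrid, fieldmap_hz, esp):
    """Conjugate-phase EPI correction sketch.
    hybrid:      (Ny, Nx) array, k-space along phase-encode (axis 0),
                 image space along readout (axis 1).
    fieldmap_hz: (Ny, Nx) B0 off-resonance map in Hz.
    esp:         echo spacing in seconds (time between PE lines).
    """
    Ny, Nx = hybrid.shape
    ky = np.arange(Ny) - Ny // 2          # centered PE k-space indices
    y = np.arange(Ny) - Ny // 2           # centered PE image positions
    t = (ky - ky.min()) * esp             # acquisition time of each PE line
    img = np.zeros((Ny, Nx), dtype=complex)
    for j in range(Ny):                   # per-row loop: the Ny-fold extra cost
        # inverse DFT kernel plus demodulation of the off-resonance phase
        # accumulated at each line's acquisition time
        kernel = np.exp(2j * np.pi * (ky[:, None] * y[j] / Ny
                                      + fieldmap_hz[j] * t[:, None]))
        img[j] = (hybrid * kernel).sum(axis=0) / Ny
    return img
```

The per-row kernel evaluations are independent, which is exactly why this computation parallelizes well on a GPU.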

  3. A Novel Technique of Posterolateral Suturing in Thoracoscopic Diaphragmatic Hernia Repair

    PubMed Central

    Boo, Yoon Jung; Rohleder, Stephan; Muensterer, Oliver J.

    2017-01-01

    Background  Closure of the posterolateral defect in some cases of congenital diaphragmatic hernia (CDH) can be difficult. Percutaneous transcostal suturing is often helpful to create a complete, watertight closure of the diaphragm. A challenge with the technique is passing the needle out the same tract that it entered so that no skin is caught when the knots are laid down into the subcutaneous tissue. This report describes a novel technique using a Tuohy needle to percutaneously suture the posterolateral defect during thoracoscopic repair of CDH. Case  We report a case of a 6-week-old infant who presented with a CDH and ipsilateral intrathoracic kidney that was repaired using thoracoscopic approach. The posterolateral part of the defect was repaired by percutaneous transcostal suturing and extracorporeal knot tying. To assure correct placement of the sutures and knots, a Tuohy needle was used to guide the suture around the rib and out through the same subcutaneous tract. The total operative time was 145 minutes and there were no perioperative complications. The patient was followed up for 3 months, during which there was no recurrence. Conclusion  Our percutaneous Tuohy technique for closure of the posterolateral part of CDH enables a secure, rapid, and tensionless repair. PMID:28804698

  4. A Novel Technique of Posterolateral Suturing in Thoracoscopic Diaphragmatic Hernia Repair.

    PubMed

    Boo, Yoon Jung; Rohleder, Stephan; Muensterer, Oliver J

    2017-01-01

    Background  Closure of the posterolateral defect in some cases of congenital diaphragmatic hernia (CDH) can be difficult. Percutaneous transcostal suturing is often helpful to create a complete, watertight closure of the diaphragm. A challenge with the technique is passing the needle out the same tract that it entered so that no skin is caught when the knots are laid down into the subcutaneous tissue. This report describes a novel technique using a Tuohy needle to percutaneously suture the posterolateral defect during thoracoscopic repair of CDH. Case  We report a case of a 6-week-old infant who presented with a CDH and ipsilateral intrathoracic kidney that was repaired using thoracoscopic approach. The posterolateral part of the defect was repaired by percutaneous transcostal suturing and extracorporeal knot tying. To assure correct placement of the sutures and knots, a Tuohy needle was used to guide the suture around the rib and out through the same subcutaneous tract. The total operative time was 145 minutes and there were no perioperative complications. The patient was followed up for 3 months, during which there was no recurrence. Conclusion  Our percutaneous Tuohy technique for closure of the posterolateral part of CDH enables a secure, rapid, and tensionless repair.

  5. An Ensemble Method for Spelling Correction in Consumer Health Questions

    PubMed Central

    Kilicoglu, Halil; Fiszman, Marcelo; Roberts, Kirk; Demner-Fushman, Dina

    2015-01-01

Orthographic and grammatical errors are a common feature of informal texts written by lay people. Health-related questions asked by consumers are a case in point. Automatic interpretation of consumer health questions is hampered by such errors. In this paper, we propose a method that combines techniques based on edit distance and frequency counts with a contextual similarity-based method for detecting and correcting orthographic errors, including misspellings, word breaks, and punctuation errors. We evaluate our method on a set of spell-corrected questions extracted from the NLM collection of consumer health questions. Our method achieves an F1 score of 0.61, compared to an informed baseline of 0.29 achieved using ESpell, a spelling correction system developed for biomedical queries. Our results show that orthographic similarity is most relevant to spelling error correction in consumer health questions and that frequency and contextual information are complementary to orthographic features. PMID:26958208
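The edit-distance-plus-frequency component of such a system can be sketched as follows. This is a generic illustration of the idea, not the paper's ensemble, and the vocabulary with its frequency counts is hypothetical:

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def correct(word, vocab_freq, max_dist=2):
    """Return the closest vocabulary word within max_dist edits,
    breaking distance ties by corpus frequency (higher wins)."""
    if word in vocab_freq:
        return word
    d, _neg_freq, best = min((edit_distance(word, w), -f, w)
                             for w, f in vocab_freq.items())
    return best if d <= max_dist else word
```

In the paper's setting, a contextual similarity score would be combined with these orthographic and frequency signals rather than using frequency alone as the tie-breaker.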

  6. Energy-based adaptive focusing of waves: application to noninvasive aberration correction of ultrasonic wavefields

    PubMed Central

    Herbert, Eric; Pernot, Mathieu; Montaldo, Gabriel; Fink, Mathias; Tanter, Mickael

    2009-01-01

An aberration correction method based on the maximization of the wave intensity at the focus of an emitting array is presented. The potential of this new adaptive focusing technique is investigated for ultrasonic focusing in biological tissues. The acoustic intensity is maximized noninvasively through the direct measurement or indirect estimation of the beam energy at the focus for a series of spatially coded emissions. For ultrasonic waves, the acoustic energy at the desired focus can be indirectly estimated from the local displacements induced in tissues by the ultrasonic radiation force of the beam. Based on the measurement of these displacements, this method allows the precise estimation of the phase and amplitude aberrations and consequently the correction of aberrations along the beam travel path. The proof of concept is first performed experimentally using a large therapeutic array with strong electronic phase aberrations (up to 2π). Displacements induced by the ultrasonic radiation force at the desired focus are indirectly estimated using the time shift of backscattered echoes recorded on the array. The phase estimation is deduced accurately using a direct inversion algorithm, which reduces the standard deviation of the phase distribution from σ = 1.89 before correction to σ = 0.53 following correction. The corrected beam focusing quality is verified using a needle hydrophone. The peak intensity obtained through the aberrator is found to be −7.69 dB relative to the reference intensity obtained without any aberration. Using the phase correction, a sharp focus is restored through the aberrator with a relative peak intensity of −0.89 dB. The technique is also tested experimentally using a linear transmit/receive array through a real aberrating layer: the array automatically corrects its own beam quality, as it both generates the radiation force with coded excitations and indirectly estimates the acoustic intensity at the focus with speckle tracking.

  7. Impact of a primordial magnetic field on cosmic microwave background B modes with weak lensing

    NASA Astrophysics Data System (ADS)

    Yamazaki, Dai G.

    2018-05-01

We discuss the manner in which the primordial magnetic field (PMF) suppresses the cosmic microwave background (CMB) B mode due to the weak-lensing (WL) effect. The WL effect depends on the lensing potential (LP) caused by matter perturbations, whose distribution at cosmological scales is given by the matter power spectrum (MPS). Therefore, the WL effect on the CMB B mode is affected by the MPS. When the effect of the ensemble average energy density of the PMF, which we call "the background PMF," on the MPS is considered, the amplitude of the MPS is suppressed in the wave number range of k > 0.01 h Mpc-1. The MPS affects the LP and the WL effect on the CMB B mode; however, the PMF can damp this effect. Previous studies of the CMB B mode with the PMF have only considered the vector and tensor modes. These modes boost the CMB B mode in the multipole range of ℓ > 1000, whereas the background PMF damps the CMB B mode through the WL effect over the entire multipole range. The matter density in the Universe controls the WL effect; therefore, when we constrain the PMF and the matter density parameters from cosmological observational data sets including the CMB B mode, we expect degeneracy between these parameters. The CMB B mode also provides important information on background gravitational waves, inflation theory, matter density fluctuations, and structure formation at cosmological scales through the cosmological parameter search. To study these topics and correctly constrain the cosmological parameters from cosmological observations including the CMB B mode, we need to correctly account for the background PMF.

  8. Progress in the Development of CdZnTe Unipolar Detectors for Different Anode Geometries and Data Corrections

    PubMed Central

    Zhang, Qiushi; Zhang, Congzhe; Lu, Yanye; Yang, Kun; Ren, Qiushi

    2013-01-01

CdZnTe detectors have been under development for the past two decades, providing good stopping power for gamma rays, lightweight camera heads, and improved energy resolution. However, the performance of this type of detector is limited primarily by incomplete charge collection resulting from charge-carrier trapping. This paper reviews progress in the development of CdZnTe unipolar detectors, together with data correction techniques for improving detector performance. We first briefly review the relevant theories. Thereafter, two classes of techniques for overcoming the hole-trapping issue are summarized: irradiation direction configuration and pulse shape correction methods. CdZnTe detectors of different geometries are discussed in detail, covering the principles of electrode geometry design, the design and performance characteristics, the development of several detector prototypes, and special correction techniques to improve energy resolution. Finally, the state-of-the-art development of 3-D position sensing and Compton imaging techniques is also discussed. The spectroscopic performance of CdZnTe semiconductor detectors can be greatly improved, even approaching the statistical limit on energy resolution, by combining several of these techniques. PMID:23429509

  9. Titanium vs cobalt chromium: what is the best rod material to enhance adolescent idiopathic scoliosis correction with sublaminar bands?

    PubMed

    Angelliaume, Audrey; Ferrero, E; Mazda, K; Le Hanneur, M; Accabled, F; de Gauzy, J Sales; Ilharreborde, B

    2017-06-01

Cobalt chromium (CoCr) rods have recently gained popularity in adolescent idiopathic scoliosis (AIS) surgical treatment, replacing titanium (Ti) rods, with promising frontal correction rates in all-screw constructs. Posteromedial translation has been shown to emphasize thoracic sagittal correction, but the influence of rod material in this correction technique has never been investigated. The aim of this study was to compare the postoperative correction achieved with Ti and CoCr rods in the treatment of thoracic AIS using the posteromedial translation technique. 70 patients operated on for thoracic (Lenke 1 or 2) AIS in 2 institutions between 2010 and 2013 were included. All patients underwent posterior fusion with hybrid constructs using the posteromedial translation technique. The only difference between groups in the surgical procedure was the rod material (Ti or CoCr). Radiological measurements were compared preoperatively, postoperatively, and at last follow-up (minimum 2 years). Preoperatively, the groups were similar in terms of coronal and sagittal parameters. Postoperatively, no significant difference was observed between Ti and CoCr regarding frontal correction, even when the preoperative flexibility of the curves was taken into account (p = 0.13). CoCr rods allowed greater restoration of T4T12 thoracic kyphosis, which remained stable over time (p = 0.01). The most common postoperative complication was proximal junctional kyphosis (n = 4); however, no significant difference was found between groups in postoperative complication rate. CoCr and Ti rods both provide significant and stable frontal correction in AIS treated with the posteromedial translation technique using hybrid constructs. However, CoCr might be considered to emphasize sagittal correction in hypokyphotic patients.

  10. Station corrections for the Katmai Region Seismic Network

    USGS Publications Warehouse

    Searcy, Cheryl K.

    2003-01-01

Most procedures for routinely locating earthquake hypocenters within a local network are constrained to using laterally homogeneous velocity models to represent the Earth's crustal velocity structure. As a result, earthquake location errors may arise due to actual lateral variations in the Earth's velocity structure. Station corrections can be used to compensate for heterogeneous velocity structure near individual stations (Douglas, 1967; Pujol, 1988). The HYPOELLIPSE program (Lahr, 1999) used by the Alaska Volcano Observatory (AVO) to locate earthquakes in Cook Inlet and the Aleutian Islands is a robust and efficient program that uses one-dimensional velocity models to determine hypocenters of local and regional earthquakes. This program can utilize station corrections within its earthquake location procedure. The velocity structures of Cook Inlet and Aleutian volcanoes very likely contain laterally varying heterogeneities; for this reason, the accuracy of earthquake locations in these areas will benefit from the determination and addition of station corrections. In this study, I determine corrections for each station in the Katmai region. The Katmai region is defined to lie between latitudes 57.5 and 59.0 degrees north and longitudes -154.00 and -156.00 (see Figure 1) and includes Mount Katmai, Novarupta, Mount Martin, Mount Mageik, Snowy Mountain, Mount Trident, and Mount Griggs volcanoes. Station corrections were determined using the computer program VELEST (Kissling, 1994). VELEST inverts arrival time data for one-dimensional velocity models and station corrections using a joint hypocenter determination technique. VELEST can also be used to locate single events.
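VELEST obtains station corrections through a joint inversion with the velocity model and hypocenters, but the underlying idea can be illustrated with a first-order approximation: a station's correction is its mean travel-time residual (observed minus predicted), demeaned across the network so the origin-time baseline is not shifted. The station names and residual values below are hypothetical:

```python
import numpy as np

def station_corrections(residuals_by_station):
    """First-order station corrections from travel-time residuals.
    residuals_by_station: dict mapping station code -> list of residuals
    (seconds). Returns corrections demeaned over the network."""
    means = {s: float(np.mean(r)) for s, r in residuals_by_station.items()}
    baseline = float(np.mean(list(means.values())))  # network-wide mean
    return {s: m - baseline for s, m in means.items()}
```

A positive correction indicates systematically late arrivals at that station (e.g., a slow near-surface layer) relative to the one-dimensional model.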

  11. Local concurrent error detection and correction in data structures using virtual backpointers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, C.C.J.; Chen, P.P.; Fuchs, W.K.

    1989-11-01

A new technique, based on virtual backpointers, is presented in this paper for local concurrent error detection and correction in linked data structures. Two new data structures utilizing virtual backpointers, the Virtual Double-Linked List and the B-Tree with Virtual Backpointers, are described. For these structures, double errors within a fixed-size checking window can be detected in constant time, and single errors detected during forward moves can be corrected in constant time.
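The paper's exact encodings differ, but the general idea of a virtual backpointer — a redundant field from which the predecessor can be recomputed and checked during a forward traversal — can be sketched with an array-backed list whose virtual backpointer is the XOR of the neighbouring indices. This is an illustrative scheme, not the paper's VDLL definition:

```python
class VNode:
    def __init__(self, val):
        self.val = val
        self.nxt = 0   # forward pointer (array index; 0 = null sentinel)
        self.vbp = 0   # virtual backpointer: prev_index XOR next_index

def build(vals):
    """Array-backed list with virtual backpointers; index 0 is a sentinel."""
    nodes = [VNode(None)] + [VNode(v) for v in vals]
    for i in range(1, len(nodes)):
        nxt = i + 1 if i + 1 < len(nodes) else 0
        prv = i - 1 if i > 1 else 0
        nodes[i].nxt = nxt
        nodes[i].vbp = prv ^ nxt
    return nodes

def check_forward(nodes, head=1):
    """Traverse forward, verifying vbp ^ nxt == previous index at each
    node. Returns the index of the first inconsistent node, or None."""
    prev, cur = 0, head
    while cur != 0:
        node = nodes[cur]
        if node.vbp ^ node.nxt != prev:
            return cur          # local detection during a forward move
        prev, cur = cur, node.nxt
    return None
```

Because the check at each node uses only locally stored fields plus the index just visited, detection happens within a small window during normal forward traversal, which is the property the paper exploits for constant-time concurrent checking.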

  12. Correction of caudal deflections of the nasal septum with a modified Goldman septoplasty technique: how we do it.

    PubMed

    Lawson, William; Westreich, Richard

    2007-10-01

Correcting deviations of the caudal septum can be challenging because of cartilage memory, the need to provide adequate nasal tip and dorsal septal support, and the long-term effects of healing. The authors describe a minimally invasive, endonasal approach to the correction of caudal septal deviations. The procedure involves a hemitransfixion incision, unilateral flap elevation, and cartilage repositioning by limited dissection and excision.

  13. Review of Static Approaches to Surgical Correction of Presbyopia

    PubMed Central

    Zare Mehrjerdi, Mohammad Ali; Mohebbi, Masomeh; Zandian, Mehdi

    2017-01-01

Presbyopia is the primary cause of reduction in the quality of life of people in their 40s, due to dependence on spectacles. Therefore, presbyopia correction has become an evolving and rapidly progressive field in refractive surgery. There are two primary options for presbyopia correction: the dynamic approach uses the residual accommodative capacity of the eye, and the static approach attempts to enhance the depth of focus of the optical system. The dynamic approach attempts to reverse suspected pathophysiologic changes. Dynamic approaches such as accommodative intraocular lenses (IOLs), scleral expansion techniques, refilling, and photodisruption of the crystalline lens have attracted less clinical interest due to inconsistent results and the complexity of the techniques. We have reviewed the most popular static techniques in presbyopia surgery, including multifocal IOLs, PresbyLASIK, and corneal inlays, but we should emphasize that these techniques are very different from the physiologic status of an untouched eye. A systematic PubMed search for the keywords “presbylasik”, “multifocal IOL”, and “presbyopic corneal inlay” revealed 634 articles; 124 were controlled clinical trials, 95 were published in the previous 10 years, and 78 were English with available full text. We reviewed the abstracts and rejected the unrelated articles; other references were included as needed. This narrative review compares different treatments according to available information on the optical basis of each treatment modality, including clinical outcomes such as near, intermediate, and far visual acuity, spectacle independence, quality of vision, and dysphotopic phenomena. PMID:29090052

  14. [The validation of the effect of correcting spectral background changes based on floating reference method by simulation].

    PubMed

    Wang, Zhu-lou; Zhang, Wan-jie; Li, Chen-xi; Chen, Wen-liang; Xu, Ke-xin

    2015-02-01

Near-infrared noninvasive blood glucose measurement faces several challenges, such as the low signal-to-noise ratio of the instrument, unstable measurement conditions, and unpredictable, irregular changes in the measured object. It is therefore difficult to extract information on blood glucose concentration accurately from the complicated signals. A reference measurement is usually considered for eliminating the effect of background changes, but no reference substance changes synchronously with the analyte. After many years of research, our group has proposed the floating reference method, which succeeds in eliminating the spectral effects induced by instrument drift and by variations in the measured object's background. However, our studies indicate that the reference point changes with measurement location and wavelength, so the effectiveness of the floating reference method should be verified comprehensively. In this paper, for simplicity, a Monte Carlo simulation employing Intralipid solutions with concentrations of 5% and 10% is performed to verify the ability of the floating reference method to eliminate the consequences of light-source drift, where the drift is introduced by varying the number of incident photons. The effectiveness of the floating reference method, with corresponding reference points at different wavelengths, in eliminating light-source drift is estimated. A comparison of the prediction abilities of calibration models with and without the method shows that the RMSEPs are decreased by about 98.57% (5% Intralipid) and 99.36% (10% Intralipid). The results indicate that the floating reference method is clearly effective in eliminating background changes.

  15. A Simple Noise Correction Scheme for Diffusional Kurtosis Imaging

    PubMed Central

    Glenn, G. Russell; Tabesh, Ali; Jensen, Jens H.

    2014-01-01

    Purpose Diffusional kurtosis imaging (DKI) is sensitive to the effects of signal noise due to strong diffusion weightings and higher order modeling of the diffusion weighted signal. A simple noise correction scheme is proposed to remove the majority of the noise bias in the estimated diffusional kurtosis. Methods Weighted linear least squares (WLLS) fitting together with a voxel-wise, subtraction-based noise correction from multiple, independent acquisitions are employed to reduce noise bias in DKI data. The method is validated in phantom experiments and demonstrated for in vivo human brain for DKI-derived parameter estimates. Results As long as the signal-to-noise ratio (SNR) for the most heavily diffusion weighted images is greater than 2.1, errors in phantom diffusional kurtosis estimates are found to be less than 5 percent with noise correction, but as high as 44 percent for uncorrected estimates. In human brain, noise correction is also shown to improve diffusional kurtosis estimates derived from measurements made with low SNR. Conclusion The proposed correction technique removes the majority of noise bias from diffusional kurtosis estimates in noisy phantom data and is applicable to DKI of human brain. Features of the method include computational simplicity and ease of integration into standard WLLS DKI post-processing algorithms. PMID:25172990
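The general shape of a subtraction-based noise correction can be sketched as follows. This is a simplified illustration under a Rician-noise assumption, not the paper's exact scheme: the noise variance is estimated from the voxel-wise difference of two independent acquisitions, and the resulting noise floor E[M²] = A² + 2σ² is removed from the squared magnitudes:

```python
import numpy as np

def noise_corrected_signal(s1, s2):
    """Noise-bias-corrected signal magnitude from two independent
    magnitude acquisitions s1, s2 of the same object (arrays of voxels).
    Assumes stationary Rician noise with per-channel variance sigma^2."""
    s1 = np.asarray(s1, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    sigma2 = 0.5 * np.var(s1 - s2)     # Var(s1 - s2) = 2 * sigma^2
    m2 = 0.5 * (s1**2 + s2**2)         # mean squared magnitude per voxel
    # remove the Rician noise floor; clip to avoid negative arguments
    return np.sqrt(np.clip(m2 - 2.0 * sigma2, 0.0, None))
```

In DKI the benefit is largest for the heavily diffusion-weighted images, where the true signal A is small and the 2σ² floor would otherwise inflate the estimated kurtosis.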

  16. Thermodynamically constrained correction to ab initio equations of state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    French, Martin; Mattsson, Thomas R.

    2014-07-07

We show how equations of state generated by density functional theory methods can be augmented to match experimental data without distorting the correct behavior in the high- and low-density limits. The technique is thermodynamically consistent and relies on knowledge of the density and bulk modulus at a reference state and an estimation of the critical density of the liquid phase. We apply the method to four materials representing different classes of solids: carbon, molybdenum, lithium, and lithium fluoride. It is demonstrated that the corrected equations of state for both the liquid and solid phases show a significantly reduced dependence on the exchange-correlation functional used.

  17. cgCorrect: a method to correct for confounding cell-cell variation due to cell growth in single-cell transcriptomics

    NASA Astrophysics Data System (ADS)

    Blasi, Thomas; Buettner, Florian; Strasser, Michael K.; Marr, Carsten; Theis, Fabian J.

    2017-06-01

Accessing gene expression at a single-cell level has unraveled often large heterogeneity among seemingly homogeneous cells, which remains obscured when using traditional population-based approaches. The computational analysis of single-cell transcriptomics data, however, still imposes unresolved challenges with respect to normalization, visualization and modeling of the data. One such issue is differences in cell size, which introduce additional variability into the data and for which appropriate normalization techniques are needed. Otherwise, these differences in cell size may obscure genuine heterogeneities among cell populations and lead to overdispersed steady-state distributions of mRNA transcript numbers. We present cgCorrect, a statistical framework to correct for differences in cell size that are due to cell growth in single-cell transcriptomics data. We derive the probability for the cell-growth-corrected mRNA transcript number given the measured, cell-size-dependent mRNA transcript number, based on the assumption that the average number of transcripts in a cell increases proportionally to the cell's volume during the cell cycle. cgCorrect can be used both for data normalization and to analyze the steady-state distributions used to infer the gene expression mechanism. We demonstrate its applicability on simulated data, single-cell quantitative real-time polymerase chain reaction (PCR) data from mouse blood stem and progenitor cells, and quantitative single-cell RNA-sequencing data obtained from mouse embryonic stem cells. We show that correcting for differences in cell size affects the interpretation of the data obtained by typically performed computational analyses.
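The stated assumption — expected transcript number grows proportionally to cell volume — implies a simple deterministic normalization, which can serve as a toy stand-in for the full probabilistic model (this sketch is not the published cgCorrect framework, and the volume values are hypothetical):

```python
import numpy as np

def cg_correct_naive(counts, cell_volume):
    """Rescale each cell's transcript counts by its volume relative to
    the population mean, assuming expected counts scale with volume.
    counts:      (n_cells, n_genes) array of transcript counts.
    cell_volume: (n_cells,) array of relative cell volumes."""
    counts = np.asarray(counts, dtype=float)
    v = np.asarray(cell_volume, dtype=float)
    scale = v / v.mean()                 # per-cell growth factor
    return counts / scale[:, None]       # broadcast over genes
```

The published method instead computes a posterior over corrected integer transcript numbers, which matters when counts are small and the deterministic rescaling above would be a poor summary.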

  18. Ten-year experience with the muscle split technique, bioabsorbable plates, and postoperative bracing for correction of pectus carinatum: the Innsbruck protocol.

    PubMed

    Del Frari, Barbara; Schwabegger, Anton H

    2011-06-01

    We reviewed further clinical experience with our approach to pectus carinatum repair: a modified pectoralis muscle split technique, bioabsorbable plates with screws, and a postoperative compressive brace. From April 2000 to February 2010, 55 patients underwent pectus carinatum repair at our department with modifications of the conventional Ravitch repair. There were 14 female and 41 male patients, with a mean age of 19.3 years at the onset of treatment. Postoperative treatment involved fitting of a lightweight, patient-controlled chest brace. Average follow-up was 13.7 months. Patient satisfaction was excellent for 40 patients (72.7%) and good for the remaining 15 (27.3%); aesthetic appearance was rated excellent for 37 patients (67.3%) and good for the remaining 18 (32.7%). Postoperative evaluation consisted of objective measurement with a thorax caliper and clinical examination. No major perioperative complications were observed. Postoperative complications were mild recurrence of the deformity (n = 3) and persistent, mild, single costal cartilage protrusion (n = 2). No patient had palpable plates or screws, and there was no material breakdown. The combination of the muscle split technique and absorbable osteosynthesis represents an alternative in pectus carinatum repair. The pectoralis muscle split technique allows early patient mobilization and rehabilitation. The bioabsorbable plates are completely absorbed, avoiding a second operation, and the chest brace immobilizes the anterior thoracic wall during healing and prevents development of hypertrophic scars. Our combined approach to the correction of pectus carinatum deformities yields predominantly excellent aesthetic results with low morbidity, low costs, and low invasiveness, leading to high patient satisfaction. Copyright © 2011 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.

  19. Relationship of forces acting on implant rods and degree of scoliosis correction.

    PubMed

    Salmingo, Remel Alingalan; Tadano, Shigeru; Fujisaki, Kazuhiro; Abe, Yuichiro; Ito, Manabu

    2013-02-01

    Adolescent idiopathic scoliosis is a complex spinal pathology characterized as a three-dimensional spine deformity combined with vertebral rotation. Various surgical techniques for correction of severe scoliotic deformity have evolved and become more advanced in applying corrective forces. The objective of this study was to investigate the relationship between the corrective forces acting on deformed implant rods and the degree of scoliosis correction. Implant rod geometries of six adolescent idiopathic scoliosis patients were measured before and after surgery. An elasto-plastic finite element model of each patient's implant rod before surgery was reconstructed. An inverse method based on finite element analysis was then used to apply forces to the implant rod model such that its deformed shape matched the measured postoperative geometry. The relationship between the magnitude of the corrective forces and the degree of correction, expressed as the change in Cobb angle, was evaluated, as were the effects of screw configuration on the corrective forces. Corrective forces acting on the rods and degree of correction were not correlated. Increasing the number of implant screws tended to decrease the magnitude of the corrective forces but did not provide a higher degree of correction. Although greater correction was achieved with higher screw density, the forces increased in some configurations. The biomechanics of scoliosis correction depends not only on the corrective forces acting on the implant rods but also on parameters such as screw placement configuration and spine stiffness. Considering the magnitude of the forces involved, increasing screw density is not guaranteed to be the safest surgical strategy. Copyright © 2012 Elsevier Ltd. All rights reserved.
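    The reported lack of correlation between force magnitude and Cobb angle change is the kind of result a simple Pearson correlation would quantify. The sketch below shows that calculation on made-up per-patient values; the numbers are purely illustrative and are not the study's data.

```python
import numpy as np

# Hypothetical per-patient values (illustrative only, not from the study):
# resultant corrective force on the rod (N) and Cobb angle change (degrees).
force = np.array([180.0, 250.0, 210.0, 300.0, 160.0, 275.0])
cobb_change = np.array([28.0, 22.0, 35.0, 25.0, 30.0, 27.0])

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    x, y = x - x.mean(), y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

r = pearson_r(force, cobb_change)
# An |r| near 0 would indicate, consistent with the study's finding, that
# force magnitude alone does not predict the degree of correction.
```

    With only six patients, any such coefficient would also need a significance test before drawing conclusions, which is one reason the authors frame the result in terms of additional parameters such as screw configuration and spine stiffness.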

  20. Finite temperature corrections to tachyon mass in intersecting D-branes

    NASA Astrophysics Data System (ADS)

    Sethi, Varun; Chowdhury, Sudipto Paul; Sarkar, Swarnendu

    2017-04-01

    We continue the analysis of finite temperature corrections to the tachyon mass in intersecting branes initiated in [1]. In this paper we extend the computation to intersecting D3 branes by considering a setup of two intersecting branes in a flat-space background. A holographic model dual to a BCS superconductor, consisting of intersecting D8 branes in a D4 brane background, was proposed in [2]; the background considered here is a simplified configuration of that dual model. We compute the one-loop tachyon amplitude in the Yang-Mills approximation and show that the result is finite. Analyzing the amplitudes further, we numerically compute the transition temperature at which the tachyon becomes massless. The analytic expressions for the one-loop amplitudes obtained here reduce to those for intersecting D1 branes obtained in [1], as well as to those for intersecting D2 branes.