Sample records for background subtraction algorithm

  1. Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors

    PubMed Central

    Yao, Guangle; Lei, Tao; Zhong, Jiandan; Jiang, Ping; Jia, Wenwu

    2017-01-01

Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important and common to many fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges, and the pixel-wise ground truth of the foreground (FG) for each frame is also provided. A series of experiments was conducted to evaluate BS algorithms on this proposed dataset. The overall performance of the BS algorithms and their processor/memory requirements were compared. Proper evaluation metrics or criteria were employed to evaluate the capability of each BS algorithm to handle different kinds of BS challenges represented in this dataset. The results and conclusions in this paper provide valid references for developing new BS algorithms for remote scene IR video sequences, and some of them are not limited to remote scene or IR video but are generic to background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR. PMID:28837112

  2. Compressive Sensing for Background Subtraction

    DTIC Science & Technology

    2009-12-20

i) reconstructing an image using only a single optical photodiode (infrared, hyperspectral, etc.) along with a digital micromirror device (DMD... curves, we use the full images, run the background subtraction algorithm proposed in [19], and obtain baseline background subtracted images. We then...the images to generate the ROC curve. 5.5 Silhouettes vs. Difference Images We have used a multi-camera setup for a 3D voxel reconstruction using the

  3. ViBe: a universal background subtraction algorithm for video sequences.

    PubMed

    Barnich, Olivier; Van Droogenbroeck, Marc

    2011-06-01

This paper presents a technique for motion detection that incorporates several innovative mechanisms. For example, our proposed technique stores, for each pixel, a set of values taken in the past at the same location or in the neighborhood. It then compares this set to the current pixel value in order to determine whether that pixel belongs to the background, and adapts the model by choosing randomly which values to substitute from the background model. This approach differs from those based upon the classical belief that the oldest values should be replaced first. Finally, when the pixel is found to be part of the background, its value is propagated into the background model of a neighboring pixel. We describe our method in full detail (including pseudo-code and the parameter values used) and compare it to other background subtraction techniques. Efficiency figures show that our method outperforms recent and proven state-of-the-art methods in terms of both computation speed and detection rate. We also analyze the performance of a downscaled version of our algorithm to the absolute minimum of one comparison and one byte of memory per pixel. It appears that even such a simplified version of our algorithm performs better than mainstream techniques.
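
The pixel-level decision and random-substitution update described above can be sketched in Python; this is a minimal single-pixel illustration of the idea, not the authors' reference implementation, and the radius, match count, and subsampling factor here are illustrative values (the paper documents its actual parameters and pseudo-code):

```python
import random

def vibe_classify_and_update(samples, pixel, radius=20, min_matches=2, subsample=16):
    """Classify one pixel against its stored sample set and update the model.

    A value is background if enough stored samples lie within `radius` of it.
    On a background classification, a *random* stored sample (not the oldest)
    is occasionally replaced by the current value.
    """
    matches = sum(1 for s in samples if abs(s - pixel) < radius)
    is_background = matches >= min_matches
    if is_background and random.randrange(subsample) == 0:
        # random substitution: any stored value may be replaced, old or new
        samples[random.randrange(len(samples))] = pixel
    return is_background
```

In the full method, a background classification also propagates the pixel value into a random neighbor's sample set, which this sketch omits.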

  4. Research on the algorithm of infrared target detection based on the frame difference and background subtraction method

    NASA Astrophysics Data System (ADS)

    Liu, Yun; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Hui, Mei; Liu, Xiaohua; Wu, Yijian

    2015-09-01

As an important branch of infrared imaging technology, infrared target tracking and detection has very important scientific value and a wide range of applications in both military and civilian areas. For infrared images, which are characterized by low SNR and serious disturbance from background noise, an innovative and effective target detection algorithm is proposed in this paper, based on OpenCV, exploiting the frame-to-frame correlation of a moving target and the irrelevance of noise in sequential images. Firstly, since temporal differencing and background subtraction are very complementary, we use a combined detection method of frame difference and background subtraction which is based on adaptive background updating. Results indicate that it is simple and can extract the foreground moving target from the video sequence stably. With the background updating mechanism continuously updating each pixel, we can detect the infrared moving target more accurately. It paves the way for eventually realizing real-time infrared target detection and tracking, when transplanting the algorithms on OpenCV to the DSP platform. Afterwards, we use an optimal thresholding algorithm to segment the image. It transforms the gray images to black-and-white images in order to provide a better condition for detection in the image sequences. Finally, according to the relevance of moving objects between different frames and mathematical morphology processing, we can eliminate noise, decrease spurious areas, and smooth region boundaries. Experimental results prove that our algorithm achieves rapid detection of small infrared targets.
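
The combined frame-difference/background-subtraction scheme with adaptive background updating described above can be sketched as follows; the function name, the AND fusion of the two cues, and all parameter values are our illustrative assumptions, since the abstract does not specify them:

```python
import numpy as np

def detect_moving_target(prev_frame, curr_frame, background, alpha=0.05, thresh=25):
    """Fuse frame differencing with background subtraction, then update the
    background adaptively so the moving target does not pollute the model."""
    frame_diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float)) > thresh
    bg_diff = np.abs(curr_frame.astype(float) - background) > thresh
    foreground = frame_diff & bg_diff          # AND fusion of the two cues
    # adaptive update: blend the new frame into the background only where
    # no motion was detected
    background = np.where(foreground, background,
                          alpha * curr_frame + (1 - alpha) * background)
    return foreground, background
```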

  5. A retention-time-shift-tolerant background subtraction and noise reduction algorithm (BgS-NoRA) for extraction of drug metabolites in liquid chromatography/mass spectrometry data from biological matrices.

    PubMed

    Zhu, Peijuan; Ding, Wei; Tong, Wei; Ghosal, Anima; Alton, Kevin; Chowdhury, Swapan

    2009-06-01

A retention-time-shift-tolerant background subtraction and noise reduction algorithm (BgS-NoRA) is implemented using the statistical programming language R to remove non-drug-related ion signals from accurate mass liquid chromatography/mass spectrometry (LC/MS) data. The background-subtraction part of the algorithm is similar to a previously published procedure (Zhang H and Yang Y. J. Mass Spectrom. 2008, 43: 1181-1190). The noise reduction algorithm (NoRA) is an add-on feature to help further clean up the residual matrix ion noise after background subtraction. It functions by removing ion signals that are not consistent across many adjacent scans. The effectiveness of BgS-NoRA was examined in biological matrices by spiking blank plasma extract, bile and urine with diclofenac and ibuprofen that have been pre-metabolized by microsomal incubation. Efficient removal of background ions permitted the detection of drug-related ions in in vivo samples (plasma, bile, urine and feces) obtained from rats orally dosed with (14)C-loratadine with minimal interference. Results from these experiments demonstrate that BgS-NoRA is more effective in removing analyte-unrelated ions than background subtraction alone. NoRA is shown to be particularly effective in the early retention region for urine samples and the middle retention region for bile samples, where the matrix ion signals still dominate the total ion chromatograms (TICs) after background subtraction. In most cases, the TICs after BgS-NoRA are in excellent qualitative agreement with the radiochromatograms. BgS-NoRA will be a very useful tool in metabolite detection and identification work, especially in first-in-human (FIH) studies and multiple dose toxicology studies where non-radio-labeled drugs are administered. Data from these types of studies are critical to meet the latest FDA guidance on Metabolite in Safety Testing (MIST). Copyright (c) 2009 John Wiley & Sons, Ltd.
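
The NoRA add-on, which removes ion signals that are not consistent across adjacent scans, might look roughly like this on an m/z-by-scan intensity matrix; the run length and presence threshold are illustrative assumptions, not the published values:

```python
import numpy as np

def noise_reduce(intensity, min_scans=3, thresh=0.0):
    """Keep an ion signal (row = m/z channel, column = scan) only where it is
    present in at least `min_scans` consecutive scans; isolated one-scan
    spikes are treated as matrix noise and zeroed."""
    present = intensity > thresh
    keep = np.zeros_like(present)
    for mz in range(intensity.shape[0]):
        run = 0
        for scan in range(intensity.shape[1]):
            run = run + 1 if present[mz, scan] else 0
            if run >= min_scans:                     # mark the whole run so far
                keep[mz, scan - min_scans + 1:scan + 1] = True
    return np.where(keep, intensity, 0.0)
```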

  6. Advanced Background Subtraction Applied to Aeroacoustic Wind Tunnel Testing

    NASA Technical Reports Server (NTRS)

    Bahr, Christopher J.; Horne, William C.

    2015-01-01

An advanced form of background subtraction is presented and applied to aeroacoustic wind tunnel data. A variant of this method has seen use in other fields such as climatology and medical imaging. The technique, based on an eigenvalue decomposition of the background noise cross-spectral matrix, is robust against situations where isolated background auto-spectral levels are measured to be higher than levels of combined source and background signals. It also provides an alternate estimate of the cross-spectrum, which previously might have poor definition for low signal-to-noise ratio measurements. Simulated results indicate similar performance to conventional background subtraction when the subtracted spectra are weaker than the true contaminating background levels. Superior performance is observed when the subtracted spectra are stronger than the true contaminating background levels. Experimental results show limited success in recovering signal behavior for data where conventional background subtraction fails. They also demonstrate the new subtraction technique's ability to maintain a proper coherence relationship in the modified cross-spectral matrix. Beamforming and deconvolution results indicate the method can successfully separate sources. Results also show a reduced need for the use of diagonal removal in phased array processing, at least for the limited data sets considered.
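
One way to read the eigenvalue-based idea is the following hedged sketch: subtract the background cross-spectral matrix, then clip negative eigenvalues of the result so that over-measured background auto-spectra cannot produce an unphysical (non-positive-semidefinite) source estimate. This is a simplification for illustration only, not the paper's full procedure:

```python
import numpy as np

def subtract_background_csm(total_csm, background_csm):
    """Subtract the background CSM, then project the difference back onto the
    cone of positive-semidefinite matrices by zeroing negative eigenvalues."""
    diff = total_csm - background_csm
    w, v = np.linalg.eigh(diff)        # Hermitian eigendecomposition
    w = np.maximum(w, 0.0)             # enforce a physical (PSD) estimate
    return (v * w) @ v.conj().T
```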

  7. Linear model for fast background subtraction in oligonucleotide microarrays.

    PubMed

    Kroll, K Myriam; Barkema, Gerard T; Carlon, Enrico

    2009-11-16

One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters such that minimization can be performed through linear algebra. The model incorporates two effects: (1) correlated intensities between neighboring features in the chip and (2) sequence-dependent affinities for non-specific hybridization fitted by an extended nearest-neighbor model. The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments as well as between the free-energy parameters and their counterparts in aqueous solution indicate that the model captures a significant part of the underlying physical chemistry.
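
The computational point of the abstract, that a cost quadratic in the fitting parameters reduces minimization to linear algebra, is the ordinary least-squares solve; a generic sketch follows (the paper's actual design matrix encodes neighbor correlations and nearest-neighbor affinities, which are not reproduced here):

```python
import numpy as np

def fit_quadratic_cost(design, observed):
    """Minimize ||design @ params - observed||^2: because the cost is
    quadratic in `params`, the minimizer is found by a single linear solve
    rather than iterative optimization."""
    params, *_ = np.linalg.lstsq(design, observed, rcond=None)
    return params
```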

  8. PCA-based approach for subtracting thermal background emission in high-contrast imaging data

    NASA Astrophysics Data System (ADS)

    Hunziker, S.; Quanz, S. P.; Amara, A.; Meyer, M. R.

    2018-03-01

Aims. Ground-based observations at thermal infrared wavelengths suffer from large background radiation due to the sky, telescope and warm surfaces in the instrument. This significantly limits the sensitivity of ground-based observations at wavelengths longer than 3 μm. The main purpose of this work is to analyse this background emission in infrared high-contrast imaging data as illustrative of the problem, show how it can be modelled and subtracted and demonstrate that it can improve the detection of faint sources, such as exoplanets. Methods: We used principal component analysis (PCA) to model and subtract the thermal background emission in three archival high-contrast angular differential imaging datasets in the M' and L' filters. We used an M' dataset of β Pic to describe in detail how the algorithm works and explain how it can be applied. The results of the background subtraction are compared to the results from a conventional mean background subtraction scheme applied to the same dataset. Finally, both methods for background subtraction are compared by performing complete data reductions. We analysed the results from the M' dataset of HD 100546 only qualitatively. For the M' band dataset of β Pic and the L' band dataset of HD 169142, which was obtained with an angular groove phase mask vortex vector coronagraph, we also calculated and analysed the achieved signal-to-noise ratio (S/N). Results: We show that applying PCA is an effective way to remove spatially and temporally varying thermal background emission down to close to the background limit. The procedure also proves to be very successful at reconstructing the background that is hidden behind the point spread function. In the complete data reductions, we find at least qualitative improvements for HD 100546 and HD 169142; however, we fail to find a significant increase in S/N of β Pic b. We discuss these findings and argue that in particular datasets with strongly varying observing conditions or
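
The core PCA step can be sketched as follows: learn the leading principal components of a stack of background frames and subtract the science frame's projection onto them. This is a minimal illustration of the idea; the paper's handling of the stellar PSF region and frame selection is omitted:

```python
import numpy as np

def pca_background_subtract(bg_frames, science_frame, n_modes=3):
    """Model the thermal background with the leading principal components of
    a background-frame stack and subtract the reconstruction from a science
    frame."""
    X = bg_frames.reshape(len(bg_frames), -1)      # (n_frames, n_pixels)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt[:n_modes]                           # principal components
    resid = science_frame.ravel() - mean
    model = mean + modes.T @ (modes @ resid)       # background estimate
    return (science_frame.ravel() - model).reshape(science_frame.shape)
```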

  9. Dual-tracer background subtraction approach for fluorescent molecular tomography

    PubMed Central

    Holt, Robert W.; El-Ghussein, Fadi; Davis, Scott C.; Samkoe, Kimberley S.; Gunn, Jason R.; Leblond, Frederic

    2013-01-01

Diffuse fluorescence tomography requires high contrast-to-background ratios to accurately reconstruct inclusions of interest. This is a problem when imaging the uptake of fluorescently labeled molecularly targeted tracers in tissue, which can result in high levels of heterogeneously distributed background uptake. We present a dual-tracer background subtraction approach, wherein signal from the uptake of an untargeted tracer is subtracted from targeted tracer signal prior to image reconstruction, resulting in maps of targeted tracer binding. The approach is demonstrated in simulations, a phantom study, and in a mouse glioma imaging study, demonstrating substantial improvement over conventional and homogeneous background subtraction image reconstruction approaches. PMID:23292612

  10. Verification of IEEE Compliant Subtractive Division Algorithms

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.; Leathrum, James F., Jr.

    1996-01-01

    A parameterized definition of subtractive floating point division algorithms is presented and verified using PVS. The general algorithm is proven to satisfy a formal definition of an IEEE standard for floating point arithmetic. The utility of the general specification is illustrated using a number of different instances of the general algorithm.

  11. Mental Computation or Standard Algorithm? Children's Strategy Choices on Multi-Digit Subtractions

    ERIC Educational Resources Information Center

    Torbeyns, Joke; Verschaffel, Lieven

    2016-01-01

    This study analyzed children's use of mental computation strategies and the standard algorithm on multi-digit subtractions. Fifty-eight Flemish 4th graders of varying mathematical achievement level were individually offered subtractions that either stimulated the use of mental computation strategies or the standard algorithm in one choice and two…

  12. Comparison of Three Instructional Sequences for the Addition and Subtraction Algorithms. Technical Report 273.

    ERIC Educational Resources Information Center

    Wiles, Clyde A.

    The study's purpose was to investigate the differential effects on the achievement of second-grade students that could be attributed to three instructional sequences for the learning of the addition and subtraction algorithms. One sequence presented the addition algorithm first (AS), the second presented the subtraction algorithm first (SA), and…

  13. Transactional Algorithm for Subtracting Fractions: Go Shopping

    ERIC Educational Resources Information Center

    Pinckard, James Seishin

    2009-01-01

    The purpose of this quasi-experimental research study was to examine the effects of an alternative or transactional algorithm for subtracting mixed numbers within the middle school setting. Initial data were gathered from the student achievement of four mathematics teachers at three different school sites. The results indicated students who…

  14. New subtraction algorithms for evaluation of lesions on dynamic contrast-enhanced MR mammography.

    PubMed

    Choi, Byung Gil; Kim, Hak Hee; Kim, Euy Neyng; Kim, Bum-soo; Han, Ji-Youn; Yoo, Seung-Schik; Park, Seog Hee

    2002-12-01

We report new subtraction algorithms for the detection of lesions in dynamic contrast-enhanced MR mammography (CE MRM). Twenty-five patients with suspicious breast lesions underwent dynamic CE MRM using 3D fast low-angle shot. After the acquisition of the T1-weighted scout images, dynamic images were acquired six times after the bolus injection of contrast media. Serial subtractions, step-by-step subtractions, and reverse subtractions were performed. Two radiologists attempted to differentiate benign from malignant lesions in consensus. The sensitivity, specificity, and accuracy of the method in differentiating malignant tumors from benign lesions were 85.7%, 100%, and 96%, respectively. Subtraction images allowed for better visualization of the enhancement, as well as its temporal pattern, than visual inspection of the dynamic images alone. Our findings suggest that the new subtraction algorithms are adequate for screening malignant breast lesions and can potentially replace time-intensity profile analysis on user-selected regions of interest.
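
The abstract names three subtraction schemes without defining them; under our assumption of their usual meanings (serial = each post-contrast frame minus the pre-contrast frame, step-by-step = each frame minus its predecessor, reverse = the last frame minus each earlier frame), they can be sketched as:

```python
import numpy as np

def subtraction_series(frames):
    """Compute serial, step-by-step, and reverse subtraction images from a
    dynamic series; frames[0] is taken as the pre-contrast acquisition."""
    frames = np.asarray(frames, dtype=float)
    serial = frames[1:] - frames[0]       # each frame minus pre-contrast
    step = frames[1:] - frames[:-1]       # each frame minus its predecessor
    reverse = frames[-1] - frames[:-1]    # last frame minus each earlier one
    return serial, step, reverse
```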

  15. A 3D image sensor with adaptable charge subtraction scheme for background light suppression

    NASA Astrophysics Data System (ADS)

    Shin, Jungsoon; Kang, Byongmin; Lee, Keechang; Kim, James D. K.

    2013-02-01

We present a 3D ToF (Time-of-Flight) image sensor with an adaptive charge subtraction scheme for background light suppression. The proposed sensor can alternately capture a high-resolution color image and a high-quality depth map in each frame. In depth mode, the sensor requires a long enough integration time for accurate depth acquisition, but saturation will occur under strong background illumination. We propose to divide the integration time into N sub-integration times adaptively. In each sub-integration time, our sensor captures an image without saturation and subtracts the charge to prevent pixel saturation. The subtraction results are then accumulated over the N sub-integrations, yielding a final image free of background illumination at the full integration time. Experimental results with our own ToF sensor show high background suppression performance. We also propose an in-pixel storage and column-level subtraction circuit for chip-level implementation of the proposed method. We believe the proposed scheme will enable 3D sensors to be used in outdoor environments.
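
A scalar toy model of the charge-subtraction idea: split the integration into N sub-integrations and subtract the background charge after each, so the accumulator never saturates while the signal charge adds up. Rates, times, and the full-well check are illustrative assumptions, not device parameters:

```python
def accumulate_with_subtraction(signal_rate, background_rate,
                                total_time, n_sub, full_well):
    """Accumulate signal charge over n_sub sub-integrations, subtracting the
    background charge after each so no single step exceeds the full well."""
    t = total_time / n_sub
    acc = 0.0
    for _ in range(n_sub):
        charge = (signal_rate + background_rate) * t  # one sub-integration
        if charge > full_well:
            raise ValueError("sub-integration still saturates")
        acc += charge - background_rate * t           # subtract background
    return acc  # approximately signal_rate * total_time
```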

  16. Estimating background-subtracted fluorescence transients in calcium imaging experiments: a quantitative approach.

    PubMed

    Joucla, Sébastien; Franconville, Romain; Pippow, Andreas; Kloppenburg, Peter; Pouzat, Christophe

    2013-08-01

Calcium imaging has become a routine technique in neuroscience for subcellular to network level investigations. Rapid progress in the development of new indicators and imaging techniques calls for dedicated, reliable analysis methods. In particular, efficient and quantitative background fluorescence subtraction routines would be beneficial to most of the calcium imaging research field. A background-subtracted fluorescence transients estimation method that does not require any independent background measurement is therefore developed. This method is based on a fluorescence model fitted to single-trial data using a classical nonlinear regression approach. The model includes an appropriate probabilistic description of the acquisition system's noise, leading to accurate confidence intervals on all quantities of interest (background fluorescence, normalized background-subtracted fluorescence time course) when the background fluorescence is homogeneous. An automatic procedure detecting background inhomogeneities inside the region of interest is also developed and is shown to be efficient on simulated data. The implementation and performance of the proposed method on experimental recordings from the mouse hypothalamus are presented in detail. This method, which applies to both single-cell and bulk-stained tissue recordings, should help improve the statistical comparison of fluorescence calcium signals between experiments and studies. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Improved Savitzky-Golay-method-based fluorescence subtraction algorithm for rapid recovery of Raman spectra.

    PubMed

    Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan

    2014-08-20

In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying the novel successive-relaxation (SG-SR) iterative method, additional improvement in convergence speed over the standard Savitzky-Golay procedure is realized. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved one order of magnitude improvement in iteration number and two orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore, the processing time for an experimentally measured raw Raman spectrum from skin tissue decreased from 6.72 s to 0.094 s. In general, SG-SR processing can be completed within dozens of milliseconds, which makes a real-time procedure practical.
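
For context, the baseline scheme that RIA-SG-SR accelerates is iterative Savitzky-Golay smoothing with clipping: smooth the working curve, keep the pointwise minimum, and repeat until the sharp Raman peaks are suppressed and only the broad fluorescence remains. A sketch with illustrative parameters (this is the standard SG iteration, not the accelerated SG-SR variant):

```python
import numpy as np
from scipy.signal import savgol_filter

def iterative_sg_baseline(spectrum, window=31, poly=3, n_iter=50):
    """Estimate and subtract a fluorescence baseline by iterative SG
    smoothing: the min-clip ensures peaks are progressively shaved off the
    baseline estimate while broad structure is preserved."""
    baseline = spectrum.astype(float).copy()
    for _ in range(n_iter):
        smoothed = savgol_filter(baseline, window, poly)
        baseline = np.minimum(baseline, smoothed)   # peaks cannot grow back
    return spectrum - baseline                      # fluorescence-subtracted
```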

  18. Informed baseline subtraction of proteomic mass spectrometry data aided by a novel sliding window algorithm.

    PubMed

    Stanford, Tyman E; Bagley, Christopher J; Solomon, Patty J

    2016-01-01

Proteomic matrix-assisted laser desorption/ionisation (MALDI) linear time-of-flight (TOF) mass spectrometry (MS) may be used to produce protein profiles from biological samples with the aim of discovering biomarkers for disease. However, the raw protein profiles suffer from several sources of bias or systematic variation which need to be removed via pre-processing before meaningful downstream analysis of the data can be undertaken. Baseline subtraction, an early pre-processing step that removes the non-peptide signal from the spectra, is complicated by the following: (i) each spectrum has, on average, wider peaks for peptides with higher mass-to-charge ratios (m/z), and (ii) the time-consuming and error-prone trial-and-error process for optimising the baseline subtraction input arguments. With reference to the aforementioned complications, we present an automated pipeline that includes (i) a novel 'continuous' line segment algorithm that efficiently operates over data with a transformed m/z-axis to remove the relationship between peptide mass and peak width, and (ii) an input-free algorithm to estimate peak widths on the transformed m/z scale. The automated baseline subtraction method was deployed on six publicly available proteomic MS datasets using six different m/z-axis transformations. Optimality of the automated baseline subtraction pipeline was assessed quantitatively using the mean absolute scaled error (MASE) when compared to a gold-standard baseline subtracted signal. Several of the transformations investigated were able to reduce, if not entirely remove, the peak width and peak location relationship resulting in near-optimal baseline subtraction using the automated pipeline. The proposed novel 'continuous' line segment algorithm is shown to far outperform naive sliding window algorithms with regard to the computational time required. The improvement in computational time was at least four-fold on real MALDI TOF-MS data and at least an order of
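
For reference, the naive sliding-window baseline that the proposed 'continuous' line segment algorithm outperforms can be sketched as a windowed minimum followed by subtraction; the window size is illustrative, and the paper's m/z-axis transformation is omitted:

```python
import numpy as np

def sliding_window_baseline(signal, half_width=2):
    """Naive O(n * window) baseline: the baseline at each point is the
    minimum of the signal over a surrounding window, then subtracted."""
    n = len(signal)
    baseline = np.array([signal[max(0, i - half_width):i + half_width + 1].min()
                         for i in range(n)])
    return signal - baseline
```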

  19. The Pedestrian Detection Method Using an Extension Background Subtraction about the Driving Safety Support Systems

    NASA Astrophysics Data System (ADS)

    Muranaka, Noriaki; Date, Kei; Tokumaru, Masataka; Imanishi, Shigeru

In recent years, traffic accidents have occurred frequently as traffic density has exploded. We therefore believe that a safe and comfortable transportation system that protects pedestrians, the most vulnerable road users, is necessary. First, we detect and recognize pedestrians (crossing persons) by image processing. Next, we inform drivers turning right or left that a pedestrian is present, using sound, images, and so on. By prompting drivers to drive safely in this way, accidents involving pedestrians can be reduced. In this paper, we use a background subtraction method to detect moving objects. In background subtraction, the background update method is important; in the conventional approach, the threshold values for the subtraction processing and for the background update were identical. That is, the mixing rate between the input image and the background image during the update was fixed, and fine tuning in response to environmental changes such as the weather was difficult. We therefore propose a background update method in which estimation errors are difficult to amplify. We experiment with and examine five conditions: sunshine, cloud, evening, rain, and sunlight change, excluding night. This technique can set the thresholds for subtraction processing and for background update separately, suited to environmental conditions such as the weather. Therefore, the mixing rate between the input image and the background image during the update can be tuned freely. Because parameter settings suited to the environmental conditions are important for minimizing the error rate, we also examine how to set the parameters.
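
The proposal's central point, separate thresholds for subtraction and for background updating with a tunable mixing rate, can be sketched as follows (the threshold and mixing-rate values are illustrative assumptions):

```python
import numpy as np

def update_background(frame, background, fg_thresh=30, update_thresh=10, alpha=0.05):
    """Detect foreground with one threshold, but blend the frame into the
    background (mixing rate alpha) only where a *stricter* threshold judges
    the scene static, so estimation errors are not amplified."""
    diff = np.abs(frame.astype(float) - background)
    foreground = diff > fg_thresh          # detection threshold
    stable = diff < update_thresh          # stricter update threshold
    background = np.where(stable,
                          alpha * frame + (1 - alpha) * background,
                          background)
    return foreground, background
```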

  20. Diffraction, chopping, and background subtraction for LDR

    NASA Technical Reports Server (NTRS)

    Wright, Edward L.

    1988-01-01

The Large Deployable Reflector (LDR) will be an extremely sensitive infrared telescope if the noise due to the photons in the large thermal background is the only limiting factor. For observations with a 3 arcsec aperture in a broadband at 100 micrometers, a 20-meter LDR will emit 10^12 photons per second, while the photon noise limited sensitivity in a deep survey observation will be 3,000 photons per second. Thus the background subtraction has to work at the 1 part per billion level. Very small amounts of scattered or diffracted energy can be significant if they are modulated by the chopper. The results are presented for 1-D and 2-D diffraction calculations for the lightweight, low-cost LDR concept that uses an active chopping quaternary to correct the wavefront errors introduced by the primary. Fourier transforms were used to evaluate the diffraction of 1 mm waves through this system. Unbalanced signals due to dust and thermal gradients were also studied.

  21. To BG or not to BG: Background Subtraction for EIT Coronal Loops

    NASA Astrophysics Data System (ADS)

    Beene, J. E.; Schmelz, J. T.

    2003-05-01

    One of the few observational tests for various coronal heating models is to determine the temperature profile along coronal loops. Since loops are such an abundant coronal feature, this method originally seemed quite promising - that the coronal heating problem might actually be solved by determining the temperature as a function of arc length and comparing these observations with predictions made by different models. But there are many instruments currently available to study loops, as well as various techniques used to determine their temperature characteristics. Consequently, there are many different, mostly conflicting temperature results. We chose data for ten coronal loops observed with the Extreme ultraviolet Imaging Telescope (EIT), and chose specific pixels along each loop, as well as corresponding nearby background pixels where the loop emission was not present. Temperature analysis from the 171-to-195 and 195-to-284 angstrom image ratios was then performed on three forms of the data: the original data alone, the original data with a uniform background subtraction, and the original data with a pixel-by-pixel background subtraction. The original results show loops of constant temperature, as other authors have found before us, but the 171-to-195 and 195-to-284 results are significantly different. Background subtraction does not change the constant-temperature result or the value of the temperature itself. This does not mean that loops are isothermal, however, because the background pixels, which are not part of any contiguous structure, also produce a constant-temperature result with the same value as the loop pixels. These results indicate that EIT temperature analysis should not be trusted, and the isothermal loops that result from EIT (and TRACE) analysis may be an artifact of the analysis process. Solar physics research at the University of Memphis is supported by NASA grants NAG5-9783 and NAG5-12096.

  22. A multi-band spectral subtraction-based algorithm for real-time noise cancellation applied to gunshot acoustics

    NASA Astrophysics Data System (ADS)

    Ramos, António L. L.; Holm, Sverre; Gudvangen, Sigmund; Otterlei, Ragnvald

    2013-06-01

Acoustical sniper positioning is based on the detection and direction-of-arrival estimation of the shockwave and the muzzle blast acoustical signals. In real-life situations, the detection and direction-of-arrival estimation processes are usually performed under the influence of background noise sources, e.g., vehicle noise, and this can result in non-negligible inaccuracies that affect the system performance and reliability negatively, especially when detecting the muzzle sound at long range and over absorbing terrain. This paper introduces a multi-band spectral subtraction based algorithm for real-time noise reduction, applied to gunshot acoustical signals. The ballistic shockwave and the muzzle blast signals exhibit distinct frequency contents that are affected differently by additive noise. In most real situations, the noise component is colored, and a multi-band spectral subtraction approach for noise reduction helps reduce the presence of artifacts in denoised signals. The proposed algorithm is tested using a dataset generated by combining signals from real gunshots and real vehicle noise. The noise component was generated using a steel-tracked military tank running on asphalt and therefore includes the sound of the vehicle engine, which varies slightly in frequency over time according to the engine's rpm, and the sound of the steel tracks as the vehicle moves.
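
A generic multi-band spectral subtraction sketch on magnitude spectra: split the spectrum into sub-bands and subtract the noise estimate with a band-dependent over-subtraction factor driven by the per-band SNR, which suits colored noise better than a single full-band factor. The factor schedule and spectral floor are illustrative choices, not the paper's:

```python
import numpy as np

def multiband_spectral_subtraction(signal_mag, noise_mag, n_bands=4, beta=0.002):
    """Per band, subtract alpha * noise power from signal power, with more
    aggressive subtraction (larger alpha) in low-SNR bands, and floor the
    result at a small fraction of the signal power."""
    out = np.empty_like(signal_mag, dtype=float)
    for band in np.array_split(np.arange(len(signal_mag)), n_bands):
        snr_db = 10 * np.log10(np.sum(signal_mag[band] ** 2) /
                               np.sum(noise_mag[band] ** 2))
        alpha = np.clip(4.0 - 0.15 * snr_db, 1.0, 5.0)  # more at low SNR
        sub = signal_mag[band] ** 2 - alpha * noise_mag[band] ** 2
        out[band] = np.sqrt(np.maximum(sub, beta * signal_mag[band] ** 2))
    return out
```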

  23. A new background subtraction method for energy dispersive X-ray fluorescence spectra using a cubic spline interpolation

    NASA Astrophysics Data System (ADS)

    Yi, Longtao; Liu, Zhiguo; Wang, Kai; Chen, Man; Peng, Shiqi; Zhao, Weigang; He, Jialin; Zhao, Guangcui

    2015-03-01

    A new method is presented to subtract the background from the energy dispersive X-ray fluorescence (EDXRF) spectrum using a cubic spline interpolation. To accurately obtain interpolation nodes, a smooth fitting and a set of discriminant formulations were adopted. From these interpolation nodes, the background is estimated by a calculated cubic spline function. The method has been tested on spectra measured from a coin and an oil painting using a confocal MXRF setup. In addition, the method has been tested on an existing sample spectrum. The result confirms that the method can properly subtract the background.
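
Given a set of peak-free interpolation nodes, the spline-subtraction step is straightforward; a sketch assuming the nodes are already known (the paper's contribution includes deriving them automatically via a smooth fit and discriminant formulations):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_background(energy, counts, nodes):
    """Estimate the EDXRF background by cubic-spline interpolation through
    peak-free nodes (indices into the spectrum), then subtract it."""
    spline = CubicSpline(energy[nodes], counts[nodes])
    background = spline(energy)
    return counts - background
```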

  4. Wind profiling for a coherent wind Doppler lidar by an auto-adaptive background subtraction approach.

    PubMed

    Wu, Yanwei; Guo, Pan; Chen, Siying; Chen, He; Zhang, Yinchao

    2017-04-01

    Auto-adaptive background subtraction (AABS) is proposed as a denoising method for data processing of the coherent Doppler lidar (CDL). The method is designed specifically for the low-signal-to-noise-ratio regime, in which the power spectral density of CDL data drifts. Unlike the periodogram maximum (PM) and adaptive iteratively reweighted penalized least squares (airPLS) methods, the proposed method yields reliable peaks and is thus advantageous in identifying peak locations. According to the analysis of simulated and field-measured data, the proposed method outperforms the airPLS method and the PM algorithm in the furthest detectable range, improving the detection range by approximately 16.7% and 40%, respectively. It also has smaller mean wind velocity and standard error values than the airPLS and PM methods. The AABS approach improves the quality of Doppler shift estimates and can be applied to obtain full wind profiles with the CDL.

  5. Optimizing Energy Consumption in Vehicular Sensor Networks by Clustering Using Fuzzy C-Means and Fuzzy Subtractive Algorithms

    NASA Astrophysics Data System (ADS)

    Ebrahimi, A.; Pahlavani, P.; Masoumi, Z.

    2017-09-01

    Traffic monitoring and management in urban intelligent transportation systems (ITS) can be carried out based on vehicular sensor networks. In a vehicular sensor network, vehicles equipped with sensors such as GPS can act as mobile sensors for sensing urban traffic and sending reports to a traffic monitoring center (TMC) for traffic estimation. Energy consumption by the sensor nodes is a major problem in wireless sensor networks (WSNs) and the most important consideration in designing them. Clustering the sensor nodes is considered an effective solution for reducing the energy consumption of WSNs. Each cluster has a cluster head (CH) and a number of nodes located within its supervision area. The cluster heads are responsible for gathering and aggregating the information of their clusters and transmitting it to the data collection center. Hence, clustering decreases the volume of transmitted information and, consequently, reduces the energy consumption of the network. In this paper, Fuzzy C-Means (FCM) and Fuzzy Subtractive algorithms are employed to cluster sensors, and their impact on the energy consumption of the sensors is investigated. The FCM and Fuzzy Subtractive algorithms reduced the energy consumption of the vehicle sensors by up to 90.68% and 92.18%, respectively; comparing the two shows an improvement of about 1.5 percentage points in favor of the Fuzzy Subtractive algorithm.
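
For reference, the standard Fuzzy C-Means iteration (alternating center and membership updates) can be sketched as follows; this is the textbook algorithm, not the paper's network-specific setup.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Minimal Fuzzy C-Means: returns cluster centers and memberships.

    X: (n, features) data; c: number of clusters; m: fuzzifier (> 1).
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # rows are fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Distances of every point to every center, guarded against zero.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return centers, U
```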

  6. An automatic fuzzy-based multi-temporal brain digital subtraction angiography image fusion algorithm using curvelet transform and content selection strategy.

    PubMed

    Momeni, Saba; Pourghassem, Hossein

    2014-08-01

    Image fusion has recently come to play a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely used imaging modalities for diagnosing cerebrovascular diseases and for brain radiosurgery. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of the vessel dispersion generated by the injected contrast material. The proposed fusion scheme contains different fusion methods for high- and low-frequency contents, based on the coefficient characteristics of the wrapping-based second-generation curvelet transform and a novel content selection strategy defined on the sample correlation of the curvelet transform coefficients. In the proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules to the high-frequency coefficients. For the low-frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. The fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of the proposed fusion algorithm in comparison with common and basic fusion algorithms.

  7. SU-D-17A-02: Four-Dimensional CBCT Using Conventional CBCT Dataset and Iterative Subtraction Algorithm of a Lung Patient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, E; Lasio, G; Yi, B

    2014-06-01

    Purpose: The Iterative Subtraction Algorithm (ISA) method retrospectively generates a pre-selected motion-phase cone-beam CT image from the full-motion cone-beam CT acquired at standard rotation speed. This work evaluates the ISA method with real lung patient data. Methods: The goal of the ISA algorithm is to extract the motion and no-motion components from the full CBCT reconstruction. The workflow consists of subtracting from the full CBCT all of the undesired motion phases to obtain a motion-deblurred single-phase CBCT image, followed by iteration of this subtraction process. ISA is realized as follows: 1) The projections are sorted into phases, and a full reconstruction over all phases generates an image CTM. 2) Forward projections of CTM are generated at the projection angles of the desired phase; subtracting the forward projections from the measured projections reconstructs CTSub1, in which the desired phase component is diminished. 3) By adding CTSub1 back to CTM, a no-motion CBCT, CTS1, can be computed. 4) CTS1 still contains a residual motion component. 5) This residual motion component can be further reduced by iteration. The ISA 4DCBCT technique was implemented using the Varian Trilogy accelerator OBI system. To evaluate the method, a lung patient CBCT dataset was used; the reconstruction algorithm was FDK. Results: The single-phase CBCT reconstruction generated via ISA successfully isolates the desired motion phase from the full-motion CBCT, effectively reducing motion blur. It also shows improved image quality, with reduced streak artifacts relative to reconstructions from unprocessed phase-sorted projections alone. Conclusion: A CBCT motion-deblurring algorithm, ISA, has been developed and evaluated with lung patient data. The algorithm allows improved visualization of a single motion phase extracted from a standard CBCT dataset. This study has been supported by the National Institutes of Health through R01CA133539.

  8. Contexts for Column Addition and Subtraction

    ERIC Educational Resources Information Center

    Lopez Fernandez, Jorge M.; Velazquez Estrella, Aileen

    2011-01-01

    In this article, the authors discuss their approach to column addition and subtraction algorithms. Adapting an original idea of Paul Cobb and Erna Yackel's from "A Contextual Investigation of Three-Digit Addition and Subtraction" related to packing and unpacking candy in a candy factory, the authors provided an analogous context by…

  9. Temporal subtraction contrast-enhanced dedicated breast CT

    PubMed Central

    Gazi, Peymon M.; Aminololama-Shakeri, Shadi; Yang, Kai; Boone, John M.

    2016-01-01

    Purpose: To develop a framework of deformable image registration and segmentation for temporal-subtraction contrast-enhanced breast CT. Methods: An iterative histogram-based two-means clustering method was used for the segmentation. Dedicated breast CT images were segmented into background (air), adipose, fibroglandular and skin components. Fibroglandular tissue was classified as either normal or contrast-enhanced, then divided into tiers for the purpose of categorizing degrees of contrast enhancement. A variant of the Demons deformable registration algorithm, Intensity Difference Adaptive Demons (IDAD), was developed to correct for the large deformation forces that stemmed from contrast enhancement. The accuracy of the proposed method was evaluated in both mathematically simulated and physically acquired phantom images. Clinical usage and accuracy of the temporal subtraction framework were demonstrated using contrast-enhanced breast CT datasets from five patients. Registration performance was quantified using Normalized Cross Correlation (NCC), Symmetric Uncertainty Coefficient (SUC), Normalized Mutual Information (NMI), Mean Square Error (MSE) and Target Registration Error (TRE). Results: The proposed method outperformed conventional affine and other Demons variations in contrast-enhanced breast CT image registration. In simulation studies, IDAD exhibited improvements in MSE (0-16%), NCC (0-6%), NMI (0-13%) and TRE (0-34%) compared to the conventional Demons approaches, depending on the size and intensity of the enhancing lesion. As lesion size and contrast enhancement levels increased, so did the improvement. The drop in the correlation between the pre- and post-contrast images for the largest enhancement levels in phantom studies is less than 1.2% (150 Hounsfield units). Registration error, measured by TRE, shows only submillimeter mismatches between the concordant anatomical target points in all patient studies.
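
The iterative histogram-based two-means clustering used for the segmentation here is, in spirit, the classic Ridler-Calvard threshold iteration: alternate between splitting the intensities at a threshold and moving the threshold to the midpoint of the two class means. A minimal sketch (the actual multi-class tissue segmentation is more involved):

```python
import numpy as np

def two_means_threshold(image, tol=0.5, max_iter=100):
    """Iterative two-means (Ridler-Calvard) threshold selection."""
    vals = np.asarray(image, dtype=float).ravel()
    t = vals.mean()                      # start at the global mean
    for _ in range(max_iter):
        lo, hi = vals[vals <= t], vals[vals > t]
        if len(lo) == 0 or len(hi) == 0:
            break
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:         # converged
            return t_new
        t = t_new
    return t
```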

  10. Temporal subtraction contrast-enhanced dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Gazi, Peymon M.; Aminololama-Shakeri, Shadi; Yang, Kai; Boone, John M.

    2016-09-01

    The development of a framework of deformable image registration and segmentation for the purpose of temporal subtraction contrast-enhanced breast CT is described. An iterative histogram-based two-means clustering method was used for the segmentation. Dedicated breast CT images were segmented into background (air), adipose, fibroglandular and skin components. Fibroglandular tissue was classified as either normal or contrast-enhanced then divided into tiers for the purpose of categorizing degrees of contrast enhancement. A variant of the Demons deformable registration algorithm, intensity difference adaptive Demons (IDAD), was developed to correct for the large deformation forces that stemmed from contrast enhancement. In this application, the accuracy of the proposed method was evaluated in both mathematically-simulated and physically-acquired phantom images. Clinical usage and accuracy of the temporal subtraction framework was demonstrated using contrast-enhanced breast CT datasets from five patients. Registration performance was quantified using normalized cross correlation (NCC), symmetric uncertainty coefficient, normalized mutual information (NMI), mean square error (MSE) and target registration error (TRE). The proposed method outperformed conventional affine and other Demons variations in contrast enhanced breast CT image registration. In simulation studies, IDAD exhibited improvement in MSE (0-16%), NCC (0-6%), NMI (0-13%) and TRE (0-34%) compared to the conventional Demons approaches, depending on the size and intensity of the enhancing lesion. As lesion size and contrast enhancement levels increased, so did the improvement. The drop in the correlation between the pre- and post-contrast images for the largest enhancement levels in phantom studies is less than 1.2% (150 Hounsfield units). Registration error, measured by TRE, shows only submillimeter mismatches between the concordant anatomical target points in all patient studies. The algorithm was

  11. FPGA implementation for real-time background subtraction based on Horprasert model.

    PubMed

    Rodriguez-Gomez, Rafael; Fernandez-Sanchez, Enrique J; Diaz, Javier; Ros, Eduardo

    2012-01-01

    Background subtraction is considered the first processing stage in video surveillance systems; it consists of determining the objects in movement in a scene captured by a static camera and is an intensive task with a high computational cost. This work proposes a novel embedded FPGA architecture that is able to extract the background in resource-limited environments and offers low degradation (introduced by the hardware-friendly model modifications). In addition, the original model is extended in order to detect shadows and improve the quality of the segmentation of the moving objects. We have analyzed the resource consumption and performance in Spartan3 Xilinx FPGAs and compared the architecture to other works available in the literature, showing that it offers a good trade-off between accuracy, performance and resource utilization. Using less than 65% of the resources of a XC3SD3400 Spartan-3A low-cost family FPGA, the system runs at 66.5 MHz, reaching 32.8 fps at a resolution of 1,024 × 1,024 pixels, with an estimated power consumption of 5.76 W.

  12. FPGA Implementation for Real-Time Background Subtraction Based on Horprasert Model

    PubMed Central

    Rodriguez-Gomez, Rafael; Fernandez-Sanchez, Enrique J.; Diaz, Javier; Ros, Eduardo

    2012-01-01

    Background subtraction is considered the first processing stage in video surveillance systems; it consists of determining the objects in movement in a scene captured by a static camera and is an intensive task with a high computational cost. This work proposes a novel embedded FPGA architecture that is able to extract the background in resource-limited environments and offers low degradation (introduced by the hardware-friendly model modifications). In addition, the original model is extended in order to detect shadows and improve the quality of the segmentation of the moving objects. We have analyzed the resource consumption and performance in Spartan3 Xilinx FPGAs and compared the architecture to other works available in the literature, showing that it offers a good trade-off between accuracy, performance and resource utilization. Using less than 65% of the resources of a XC3SD3400 Spartan-3A low-cost family FPGA, the system runs at 66.5 MHz, reaching 32.8 fps at a resolution of 1,024 × 1,024 pixels, with an estimated power consumption of 5.76 W. PMID:22368487
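
A simplified per-pixel sketch of the Horprasert color model that these two records build on: a brightness distortion alpha scales the expected background color to best match the observed pixel, and the chromaticity distortion is the remaining residual. The thresholds and the unit-variance simplification below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def horprasert_classify(frame, mean_bg, tau_cd=15.0, tau_lo=0.6, tau_hi=1.2):
    """Classify pixels as background (0), shadow (1), highlight (2) or
    foreground (3) with a simplified Horprasert color model.
    """
    I = np.asarray(frame, float)
    E = np.asarray(mean_bg, float)
    # Brightness distortion: scalar that best fits I = alpha * E per pixel.
    alpha = np.sum(I * E, axis=-1) / (np.sum(E * E, axis=-1) + 1e-12)
    # Chromaticity distortion: residual after removing the brightness change.
    cd = np.linalg.norm(I - alpha[..., None] * E, axis=-1)
    labels = np.full(alpha.shape, 3, dtype=int)                    # foreground
    chroma_ok = cd <= tau_cd
    labels[chroma_ok & (alpha >= tau_lo) & (alpha <= tau_hi)] = 0  # background
    labels[chroma_ok & (alpha < tau_lo)] = 1                       # shadow
    labels[chroma_ok & (alpha > tau_hi)] = 2                       # highlight
    return labels, alpha, cd
```

The full model normalizes each channel by its per-pixel standard deviation learned during training; the hardware version in these records modifies the model further to be FPGA-friendly.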

  13. Two-step digit-set-restricted modified signed-digit addition-subtraction algorithm and its optoelectronic implementation.

    PubMed

    Qian, F; Li, G; Ruan, H; Jing, H; Liu, L

    1999-09-10

    A novel, to our knowledge, two-step digit-set-restricted modified signed-digit (MSD) addition-subtraction algorithm is proposed. With the introduction of reference digits, the operand words are mapped into an intermediate carry word, with all digits restricted to the set {-1, 0}, and an intermediate sum word, with all digits restricted to the set {0, 1}, which can be summed to form the final result without carry generation. The operation can be performed in parallel by use of binary logic. An optical system that utilizes an electron-trapping device is suggested for accomplishing the required binary logic operations. By programming the illumination of the data arrays, any complex logic operation of multiple variables can be realized without additional temporal latency for the intermediate results. This technique has a high space-bandwidth product and signal-to-noise ratio. The main structure can be stacked to construct a compact optoelectronic MSD adder-subtracter.
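
The paper's two-step digit-set-restricted scheme is tied to its optical implementation; as a plain-software illustration of carry-free MSD arithmetic, here is a standard two-pass signed-digit addition (local transfer/weight selection followed by a carry-free digit-wise sum), not the paper's exact algorithm.

```python
def msd_add(x, y):
    """Carry-free addition of modified signed-digit (MSD) numbers.

    x, y: digit lists in {-1, 0, 1}, most significant digit first.
    Pass 1 chooses a (transfer, weight) pair per position using only the
    next lower-order pair sum; pass 2 sums weights and incoming transfers,
    which by construction cannot generate a new carry.
    """
    n = max(len(x), len(y))
    x = [0] * (n - len(x)) + list(x)
    y = [0] * (n - len(y)) + list(y)
    p = [a + b for a, b in zip(x, y)]   # pair sums, each in -2..2
    t = [0] * n                          # t[i]: transfer out of position i
    w = [0] * n
    for i in range(n):
        lower = p[i + 1] if i + 1 < n else 0
        if p[i] == 2:
            t[i], w[i] = 1, 0
        elif p[i] == -2:
            t[i], w[i] = -1, 0
        elif p[i] == 1:
            t[i], w[i] = (1, -1) if lower >= 1 else (0, 1)
        elif p[i] == -1:
            t[i], w[i] = (0, -1) if lower >= 1 else (-1, 1)
    # Final carry-free sum: each digit absorbs the transfer from below.
    return [t[0]] + [w[i] + (t[i + 1] if i + 1 < n else 0) for i in range(n)]

def msd_sub(x, y):
    """MSD subtraction: negate the subtrahend digit-wise and add."""
    return msd_add(x, [-d for d in y])

def msd_to_int(digits):
    """Interpret an MSD digit list (MSB first) as an integer."""
    v = 0
    for d in digits:
        v = 2 * v + d
    return v
```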

  14. The spectrum of static subtracted geometries

    NASA Astrophysics Data System (ADS)

    Andrade, Tomás; Castro, Alejandra; Cohen-Maldonado, Diego

    2017-05-01

    Subtracted geometries are black hole solutions of the four dimensional STU model with rather interesting ties to asymptotically flat black holes. A peculiar feature is that the solutions to the Klein-Gordon equation on this subtracted background can be organized according to representations of the conformal group SO(2, 2). We test if this behavior persists for the linearized fluctuations of gravitational and matter fields on static, electrically charged backgrounds of this kind. We find that there is a subsector of the modes that do display conformal symmetry, while some modes do not. We also discuss two different effective actions that describe these subtracted geometries and how the spectrum of quasinormal modes is dramatically different depending upon the action used.

  15. getimages: Background derivation and image flattening method

    NASA Astrophysics Data System (ADS)

    Men'shchikov, Alexander

    2017-05-01

    getimages performs background derivation and image flattening for high-resolution images obtained with space observatories. It is based on median filtering with sliding windows corresponding to a range of spatial scales from the observational beam size up to a maximum structure width X. The latter is a single free parameter of getimages that can be evaluated manually from the observed image. The median filtering algorithm provides a background image for structures of all widths below X. The same median filtering procedure applied to an image of standard deviations derived from a background-subtracted image results in a flattening image. Finally, a flattened image is computed by dividing the background-subtracted by the flattening image. Standard deviations in the flattened image are now uniform outside sources and filaments. Detecting structures in such radically simplified images results in much cleaner extractions that are more complete and reliable. getimages also reduces various observational and map-making artifacts and equalizes noise levels between independent tiles of mosaicked images. The code (a Bash script) uses FORTRAN utilities from getsources (ascl:1507.014), which must be installed.
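
The pipeline getimages describes (median-filter background, residual image, local-noise equalization) can be sketched with plain NumPy. The window handling and the local-noise estimate below are simplifications of the actual code, which sweeps a range of window sizes.

```python
import numpy as np

def median_filter2d(img, size):
    """Sliding-window median filter with reflective padding (pure NumPy)."""
    r = size // 2
    padded = np.pad(img, r, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

def flatten_image(img, max_width):
    """Derive a background by median filtering, then flatten the residual.

    The local-noise image below (median of squared residuals) is a
    stand-in for the standard-deviation image used by getimages.
    """
    background = median_filter2d(img, max_width)
    subtracted = img - background
    local_sd = np.sqrt(np.maximum(median_filter2d(subtracted ** 2, max_width),
                                  1e-12))
    return subtracted / local_sd, background
```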

  16. Object tracking via background subtraction for monitoring illegal activity in crossroad

    NASA Astrophysics Data System (ADS)

    Ghimire, Deepak; Jeong, Sunghwan; Park, Sang Hyun; Lee, Joonwhoan

    2016-07-01

    In the field of intelligent transportation systems, a great number of vision-based techniques have been proposed to prevent pedestrians from being hit by vehicles. This paper presents a system that can perform pedestrian and vehicle detection and monitor illegal activity at zebra crossings. At a zebra crossing, to fully avoid a collision, a driver or pedestrian should be warned early, according to the traffic light status, of any illegal moves. In this research we first detect the status of the pedestrian traffic light and then monitor the crossroad for vehicle and pedestrian movements. Background subtraction based object detection and tracking is performed to detect pedestrians and vehicles in the crossroad. Shadow removal, blob segmentation, and trajectory analysis are used to improve object detection and classification performance. We demonstrate the experiment on several video sequences recorded at different times and in different environments, such as daytime and nighttime, sunny and rainy conditions. Our experimental results show that this simple and efficient technique can be used successfully as a traffic surveillance system to prevent accidents at zebra crossings.

  17. M-band imaging of the HR 8799 planetary system using an innovative LOCI-based background subtraction technique

    DOE PAGES

    Galicher, Raphael; Marois, Christian; Macintosh, Bruce; ...

    2011-09-02

    Multi-wavelength observations/spectroscopy of exoplanetary atmospheres are the basis of the emerging exciting field of comparative exoplanetology. The HR 8799 planetary system is an ideal laboratory to study our current knowledge gap between massive field brown dwarfs and the cold 5 Gyr old solar system planets. The HR 8799 planets have so far been imaged at J- to L-band, with only upper limits available at M-band. We present here deep high-contrast Keck II adaptive optics M-band observations that show the imaging detection of three of the four currently known HR 8799 planets. Such detections were made possible by the development of an innovative LOCI-based background subtraction scheme that is three times more efficient than a classical median background subtraction for Keck II AO data, representing a gain in telescope time of up to a factor of nine. These M-band detections extend the broadband photometric coverage out to ~5 μm and provide access to the strong CO fundamental absorption band at 4.5 μm. The new M-band photometry shows that the HR 8799 planets are located near the L/T-type dwarf transition, similar to what was found by other studies. Finally, we also confirm that the best atmospheric fits are consistent with low surface gravity, dusty, and non-equilibrium CO/CH4 chemistry models.
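
The core LOCI idea, building the background as the least-squares optimal linear combination of reference frames, can be sketched as follows. Real LOCI optimizes coefficients over local subsections that exclude the source; this global version is only an illustration of the principle.

```python
import numpy as np

def loci_background(science, references):
    """LOCI-style background subtraction sketch.

    Builds the background for a science frame as the least-squares optimal
    linear combination of reference frames, then subtracts it.
    references: array of shape (n_refs, H, W).
    """
    A = references.reshape(len(references), -1).T   # (pixels, n_refs)
    b = science.ravel()
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)  # optimal combination
    background = (A @ coeffs).reshape(science.shape)
    return science - background, coeffs
```

A compact point source barely perturbs the fit, so it survives the subtraction while the shared background structure is removed.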

  18. A New Moving Object Detection Method Based on Frame-difference and Background Subtraction

    NASA Astrophysics Data System (ADS)

    Guo, Jiajia; Wang, Junping; Bai, Ruixue; Zhang, Yao; Li, Yong

    2017-09-01

    Although many methods of moving object detection have been proposed, moving object extraction remains at the core of video surveillance. With the complex scenes of the real world, however, false detections, missed detections, and deficiencies resulting from cavities inside the body still exist. In order to solve the problem of incomplete detection of moving objects, a new moving object detection method combining an improved frame difference with Gaussian mixture background subtraction is proposed in this paper. To make moving object detection more complete and accurate, image repair and morphological processing techniques, which act as spatial compensation, are applied in the proposed method. Experimental results show that our method can effectively eliminate ghosts and noise and fill the cavities of the moving object. Compared to four other moving object detection methods (GMM, ViBe, frame difference, and a method from the literature), the proposed method improves the efficiency and accuracy of detection.
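
The combination strategy, taking the union of a background-subtraction mask and a frame-difference mask so each cue fills the other's gaps, can be sketched with a simple running-average background in place of the paper's Gaussian mixture model. Thresholds and the learning rate are illustrative.

```python
import numpy as np

def detect_moving(frames, alpha=0.05, tau_bg=25.0, tau_fd=15.0):
    """Combine frame differencing with running-average background subtraction.

    A pixel is foreground if it differs from the background model OR from
    the previous frame; the union fills cavities either cue alone leaves.
    Returns one boolean mask per frame after the first.
    """
    background = frames[0].astype(float)
    prev = frames[0].astype(float)
    masks = []
    for f in frames[1:]:
        f = f.astype(float)
        bg_mask = np.abs(f - background) > tau_bg
        fd_mask = np.abs(f - prev) > tau_fd
        mask = bg_mask | fd_mask
        # Update the background only where no motion was detected.
        background[~mask] = ((1 - alpha) * background[~mask]
                             + alpha * f[~mask])
        prev = f
        masks.append(mask)
    return masks
```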

  19. Fluorescence background subtraction technique for hybrid fluorescence molecular tomography/x-ray computed tomography imaging of a mouse model of early stage lung cancer.

    PubMed

    Ale, Angelique; Ermolayev, Vladimir; Deliolanis, Nikolaos C; Ntziachristos, Vasilis

    2013-05-01

    The ability to visualize early stage lung cancer is important in the study of biomarkers and targeting agents that could lead to earlier diagnosis. The recent development of hybrid free-space 360-deg fluorescence molecular tomography (FMT) and x-ray computed tomography (XCT) imaging yields a superior optical imaging modality for three-dimensional small animal fluorescence imaging over stand-alone optical systems. Imaging accuracy was improved by using XCT information in the fluorescence reconstruction method. Despite this progress, the detection sensitivity of targeted fluorescence agents remains limited by nonspecific background accumulation of the fluorochrome employed, which complicates early detection of murine cancers. Therefore we examine whether x-ray CT information and bulk fluorescence detection can be combined to increase detection sensitivity. Correspondingly, we research the performance of a data-driven fluorescence background estimator employed for subtraction of background fluorescence from acquisition data. Using mice containing known fluorochromes ex vivo, we demonstrate the reduction of background signals from reconstructed images and sensitivity improvements. Finally, by applying the method to in vivo data from K-ras transgenic mice developing lung cancer, we find small tumors at an early stage compared with reconstructions performed using raw data. We conclude with the benefits of employing fluorescence subtraction in hybrid FMT-XCT for early detection studies.

  20. Image Processing Of Images From Peripheral-Artery Digital Subtraction Angiography (DSA) Studies

    NASA Astrophysics Data System (ADS)

    Wilson, David L.; Tarbox, Lawrence R.; Cist, David B.; Faul, David D.

    1988-06-01

    A system is being developed to test the possibility of doing peripheral, digital subtraction angiography (DSA) with a single contrast injection using a moving gantry system. Given repositioning errors that occur between the mask and contrast-containing images, factors affecting the success of subtractions following image registration have been investigated theoretically and experimentally. For a 1 mm gantry displacement, parallax and geometric image distortion (pin-cushion) both give subtraction errors following registration that are approximately 25% of the error resulting from no registration. Image processing techniques improve the subtractions. The geometric distortion effect is reduced using a piece-wise, 8 parameter unwarping method. Plots of image similarity measures versus pixel shift are well behaved and well fit by a parabola, leading to the development of an iterative, automatic registration algorithm that uses parabolic prediction of the new minimum. The registration algorithm converges quickly (less than 1 second on a MicroVAX) and is relatively immune to the region of interest (ROI) selected.
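
The registration loop's key step, fitting a parabola through three trial points and jumping to its vertex, is classic successive parabolic interpolation; here is a 1-D sketch (the paper applies it to image-similarity measures versus pixel shift, which the abstract reports are well fit by a parabola).

```python
def parabolic_minimize(f, x0, x1, x2, tol=1e-8, max_iter=100):
    """Minimize f by repeatedly fitting a parabola through the last three
    trial points and jumping to the vertex of that parabola."""
    pts = [x0, x1, x2]
    for _ in range(max_iter):
        a, b, c = pts[-3], pts[-2], pts[-1]
        fa, fb, fc = f(a), f(b), f(c)
        den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
        if abs(den) < 1e-15:             # points are collinear; give up
            break
        # Vertex of the parabola through (a,fa), (b,fb), (c,fc):
        x = b - 0.5 * ((b - a) ** 2 * (fb - fc)
                       - (b - c) ** 2 * (fb - fa)) / den
        if abs(x - pts[-1]) < tol:       # converged
            return x
        pts.append(x)
    return pts[-1]
```

For an exactly quadratic similarity curve the vertex is found in one jump, which matches the fast (sub-second) convergence reported for the registration algorithm.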

  1. Sky Subtraction with Fiber-Fed Spectrograph

    NASA Astrophysics Data System (ADS)

    Rodrigues, Myriam

    2017-09-01

    Historically, fiber-fed spectrographs had been deemed inadequate for the observation of faint targets, mainly because of the difficulty of achieving high accuracy in the sky subtraction. The impossibility of sampling the sky in the immediate vicinity of the target in fiber instruments has led to a commonly held view that a multi-object fiber spectrograph cannot achieve sky subtraction accurate to better than 1%, contrary to its slit counterparts. The next-generation multi-object spectrograph at the VLT (MOONS) and the planned MOS for the E-ELT (MOSAIC) are fiber-fed instruments, and are aimed at observing targets fainter than the sky continuum level. In this talk, I will present the state of the art in sky subtraction strategies and data reduction algorithms specifically developed for fiber-fed spectrographs. I will also present the main results of an observational campaign to better characterise the sky's spatial and temporal variations (in particular the continuum and faint sky lines).

  2. Multivariate Spatial Condition Mapping Using Subtractive Fuzzy Cluster Means

    PubMed Central

    Sabit, Hakilo; Al-Anbuky, Adnan

    2014-01-01

    Wireless sensor networks are usually deployed for monitoring given physical phenomena taking place in a specific space and over a specific duration of time. The spatio-temporal distribution of these phenomena often correlates to certain physical events. To appropriately characterise these event-phenomena relationships over a given space for a given time frame, we require continuous monitoring of the conditions. WSNs are perfectly suited for these tasks due to their inherent robustness. This paper presents a subtractive fuzzy cluster means algorithm and its application in data stream mining for wireless sensor systems over a cloud-computing-like architecture, which we call sensor cloud data stream mining. Benchmarking against standard mining algorithms, k-means and FCM, we demonstrate that the subtractive fuzzy cluster means model can perform high-quality distributed data stream mining tasks comparable to centralised data stream mining. PMID:25313495
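
Chiu's subtractive clustering, which this algorithm builds on, assigns each point a potential from the density of its neighbors, greedily picks the highest-potential point as a center, and subtracts the potential that center explains. A simplified sketch, with the stopping rule reduced to a single threshold:

```python
import numpy as np

def subtractive_clustering(X, ra=1.0, eps=0.15, max_centers=10):
    """Chiu-style subtractive clustering: data points themselves are the
    candidate centers; `ra` is the neighborhood radius, `eps` the stopping
    fraction of the first (largest) potential."""
    rb = 1.5 * ra                        # wider radius for potential removal
    X = np.asarray(X, float)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    P = np.sum(np.exp(-4.0 * d2 / ra ** 2), axis=1)   # point potentials
    p_first = P.max()
    centers = []
    while len(centers) < max_centers:
        k = int(np.argmax(P))
        if P[k] < eps * p_first:         # remaining potential is negligible
            break
        centers.append(X[k])
        P = P - P[k] * np.exp(-4.0 * d2[k] / rb ** 2)  # subtract around center
    return np.array(centers)
```

Unlike FCM, the number of clusters is not fixed in advance; it falls out of the radius and stopping threshold.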

  3. Fast and fully automatic phalanx segmentation using a grayscale-histogram morphology algorithm

    NASA Astrophysics Data System (ADS)

    Hsieh, Chi-Wen; Liu, Tzu-Chiang; Jong, Tai-Lang; Chen, Chih-Yen; Tiu, Chui-Mei; Chan, Din-Yuen

    2011-08-01

    Bone age assessment is a common radiological examination used in pediatrics to diagnose the discrepancy between the skeletal and chronological age of a child; therefore, it is beneficial to develop a computer-based bone age assessment to help junior pediatricians estimate bone age easily. Unfortunately, the phalanx on radiograms is not easily separated from the background and soft tissue. Therefore, we proposed a new method, called the grayscale-histogram morphology algorithm, to segment the phalanges fast and precisely. The algorithm includes three parts: a tri-stage sieve algorithm used to eliminate the background of hand radiograms, a centroid-edge dual scanning algorithm to frame the phalanx region, and finally a segmentation algorithm based on disk traverse-subtraction filter to segment the phalanx. Moreover, two more segmentation methods: adaptive two-mean and adaptive two-mean clustering were performed, and their results were compared with the segmentation algorithm based on disk traverse-subtraction filter using five indices comprising misclassification error, relative foreground area error, modified Hausdorff distances, edge mismatch, and region nonuniformity. In addition, the CPU time of the three segmentation methods was discussed. The result showed that our method had a better performance than the other two methods. Furthermore, satisfactory segmentation results were obtained with a low standard error.

  4. Subtraction with hadronic initial states at NLO: an NNLO-compatible scheme

    NASA Astrophysics Data System (ADS)

    Somogyi, Gábor

    2009-05-01

    We present an NNLO-compatible subtraction scheme for computing QCD jet cross sections of hadron-initiated processes at NLO accuracy. The scheme is constructed specifically with those complications in mind, that emerge when extending the subtraction algorithm to next-to-next-to-leading order. It is therefore possible to embed the present scheme in a full NNLO computation without any modifications.

  5. Background derivation and image flattening: getimages

    NASA Astrophysics Data System (ADS)

    Men'shchikov, A.

    2017-11-01

    Modern high-resolution images obtained with space observatories display extremely strong intensity variations across images on all spatial scales. Source extraction in such images with methods based on global thresholding may bring unacceptably large numbers of spurious sources in bright areas while failing to detect sources in low-background or low-noise areas. It would be highly beneficial to subtract background and equalize the levels of small-scale fluctuations in the images before extracting sources or filaments. This paper describes getimages, a new method of background derivation and image flattening. It is based on median filtering with sliding windows that correspond to a range of spatial scales from the observational beam size up to a maximum structure width Xλ. The latter is a single free parameter of getimages that can be evaluated manually from the observed image Iλ. The median filtering algorithm provides a background image Bλ for structures of all widths below Xλ. The same median filtering procedure applied to an image of standard deviations Dλ derived from the background-subtracted image Sλ results in a flattening image Fλ. Finally, a flattened detection image IλD = Sλ/Fλ is computed, whose standard deviations are uniform outside sources and filaments. Detecting sources in such greatly simplified images results in much cleaner extractions that are more complete and reliable. As a bonus, getimages reduces various observational and map-making artifacts and equalizes noise levels between independent tiles of mosaicked images.

  6. True ion pick (TIPick): a denoising and peak picking algorithm to extract ion signals from liquid chromatography/mass spectrometry data.

    PubMed

    Ho, Tsung-Jung; Kuo, Ching-Hua; Wang, San-Yuan; Chen, Guan-Yuan; Tseng, Yufeng J

    2013-02-01

    Liquid Chromatography-Time of Flight Mass Spectrometry has become an important technique for toxicological screening and metabolomics. We describe TIPick, a novel algorithm that accurately and sensitively detects target compounds in biological samples. TIPick comprises two main steps: background subtraction and peak picking. By subtracting a blank chromatogram, TIPick eliminates chemical signals of blank injections and reduces false positive results. TIPick detects peaks by calculating the S(CC(INI)) values of extracted ion chromatograms (EICs) without considering peak shapes, and it is therefore able to detect tailing and fronting peaks. TIPick also uses duplicate injections to enhance peak signals and thus improve the peak detection power. Commonly seen split peaks, caused either by saturation of the mass spectrometer detector or by the mathematical background subtraction algorithm, can be resolved by adjusting the mass error tolerance of the EICs and by comparing the EICs before and after background subtraction. The performance of TIPick was tested on a data set containing 297 standard mixtures; the recall, precision and F-score were 0.99, 0.97 and 0.98, respectively. TIPick was successfully used to construct and analyze the NTU MetaCore metabolomics chemical standards library, and it was applied to toxicological screening and metabolomics studies. Copyright © 2013 John Wiley & Sons, Ltd.

  7. Ambient-Light-Canceling Camera Using Subtraction of Frames

    NASA Technical Reports Server (NTRS)

    Morookian, John Michael

    2004-01-01

    The ambient-light-canceling camera (ALCC) is a proposed near-infrared electronic camera that would utilize a combination of (1) synchronized illumination during alternate frame periods and (2) subtraction of readouts from consecutive frames to obtain images without a background component of ambient light. The ALCC is intended especially for use in tracking the motion of an eye by the pupil center corneal reflection (PCCR) method. Eye tracking by the PCCR method has shown potential for application in human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological deficiencies. In the PCCR method, an eye is illuminated by near-infrared light from a light-emitting diode (LED). Some of the infrared light is reflected from the surface of the cornea. Some of the infrared light enters the eye through the pupil and is reflected from the back of the eye out through the pupil, a phenomenon commonly observed as the red-eye effect in flash photography. An electronic camera is oriented to image the user's eye. The output of the camera is digitized and processed by algorithms that locate the two reflections. Then, from the locations of the centers of the two reflections, the direction of gaze is computed. As described thus far, the PCCR method is susceptible to errors caused by reflections of ambient light. Although a near-infrared band-pass optical filter can be used to discriminate against ambient light, some sources of ambient light have enough in-band power to compete with the LED signal. The mode of operation of the ALCC would complement or supplant spectral filtering by providing more nearly complete cancellation of the effect of ambient light. In the operation of the ALCC, a near-infrared LED would be pulsed on during one camera frame period and off during the next frame period. Thus, the scene would be illuminated by both the LED (signal) light and the ambient (background) light during one frame period, and by only the ambient light during the next.
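    The frame-pair subtraction at the heart of the proposal can be sketched as follows (illustrative only; the array layout and the lit/dark frame ordering are assumptions):

```python
import numpy as np

def ambient_cancel(frames):
    """Pair up consecutive frames (even index: LED on, odd index: LED off)
    and subtract, cancelling any ambient component that is steady across
    the two frame periods."""
    frames = np.asarray(frames, dtype=float)
    lit, dark = frames[0::2], frames[1::2]
    n = min(len(lit), len(dark))
    return lit[:n] - dark[:n]
```

    Any ambient contribution that changes between the two frame periods (e.g. flicker) is not cancelled, which is why the scheme complements rather than replaces spectral filtering.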

  8. Comparative efficiency of a scheme of cyclic alternating-period subtraction

    NASA Astrophysics Data System (ADS)

    Golikov, V. S.; Artemenko, I. G.; Malinin, A. P.

    1986-06-01

    The estimation of the detection quality of a signal on a background of correlated noise according to the Neumann-Pearson criterion is examined. It is shown that, in a number of cases, the cyclic alternating-period subtraction scheme has a higher noise immunity than the conventional alternating-period subtraction scheme.

  9. Tomographic image via background subtraction using an x-ray projection image and a priori computed tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Jin; Yi Byongyong; Lasio, Giovanni

    Kilovoltage x-ray projection images (kV images for brevity) are increasingly available in image guided radiotherapy (IGRT) for patient positioning. These images are two-dimensional (2D) projections of a three-dimensional (3D) object along the x-ray beam direction. Projecting a 3D object onto a plane may lead to ambiguities in the identification of anatomical structures and to poor contrast in kV images. Therefore, the use of kV images in IGRT is mainly limited to bony landmark alignments. This work proposes a novel subtraction technique that isolates a slice of interest (SOI) from a kV image with the assistance of a priori information from a previous CT scan. The method separates structural information within a preselected SOI by suppressing contributions to the unprocessed projection from out-of-SOI-plane structures. Up to a five-fold increase in the contrast-to-noise ratios (CNRs) was observed in selected regions of the isolated SOI, when compared to the original unprocessed kV image. The tomographic image via background subtraction (TIBS) technique aims to provide a quick snapshot of the slice of interest with greatly enhanced image contrast over conventional kV x-ray projections for fast and accurate image guidance of radiation therapy. With further refinements, TIBS could, in principle, provide real-time tumor localization using gantry-mounted x-ray imaging systems without the need for implanted markers.

  10. Subtractive Leadership

    ERIC Educational Resources Information Center

    Larwin, K. H.; Thomas, Eugene M.; Larwin, David A.

    2015-01-01

    This paper introduces a new term and concept to the leadership discourse: Subtractive Leadership. As an extension of the distributive leadership model, the notion of subtractive leadership refers to a leadership style that detracts from organizational culture and productivity. Subtractive leadership fails to embrace and balance the characteristics…

  11. Probing Large-scale Coherence between Spitzer IR and Chandra X-Ray Source-subtracted Cosmic Backgrounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cappelluti, N.; Urry, M.; Arendt, R.

    2017-09-20

    We present new measurements of the large-scale clustering component of the cross-power spectra of the source-subtracted Spitzer-IRAC cosmic infrared background and Chandra-ACIS cosmic X-ray background surface brightness fluctuations. Our investigation uses data from the Chandra Deep Field South, Hubble Deep Field North, Extended Groth Strip/AEGIS field, and UDS/SXDF surveys, comprising 1160 Spitzer hours and ∼12 Ms of Chandra data collected over a total area of 0.3 deg². We report the first (>5σ) detection of a cross-power signal on large angular scales >20″ between the [0.5–2] keV band and the 3.6 and 4.5 μm bands, at ∼5σ and 6.3σ significance, respectively. The correlation with harder X-ray bands is marginally significant. Comparing the new observations with existing models for the contribution of the known unmasked source population at z < 7, we find an excess of about an order of magnitude at 5σ confidence. We discuss possible interpretations for the origin of this excess in terms of the contribution from accreting early black holes (BHs), including both direct collapse BHs and primordial BHs, as well as from scattering in the interstellar medium and intra-halo light.

  12. Automatic vehicle counting using background subtraction method on gray scale images and morphology operation

    NASA Astrophysics Data System (ADS)

    Adi, K.; Widodo, A. P.; Widodo, C. E.; Pamungkas, A.; Putranto, A. B.

    2018-05-01

    Traffic on roads needs to be monitored, which requires counting the number of vehicles that pass, particularly for highway transportation management. It is therefore necessary to develop a system that counts the number of vehicles automatically, and video processing methods make this possible. This research developed a vehicle counting system for a toll road. The system includes video acquisition, frame extraction, and image processing of each frame. Video acquisition was conducted in the morning, at noon, in the afternoon, and in the evening. The system employs background subtraction and morphology methods on grayscale images for vehicle counting. The best counting results were obtained in the morning, with an accuracy of 86.36%, whereas the lowest accuracy, 21.43%, occurred in the evening. The difference between the morning and evening results is caused by the different illumination, which changes the pixel values in the images.
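    A minimal sketch of such a counting pipeline, with background subtraction, thresholding, and a blob-size filter standing in for the morphological cleanup (the threshold and area values are made up for illustration):

```python
import numpy as np
from collections import deque

def count_vehicles(frame, background, thresh=30, min_area=4):
    """Background subtraction on grayscale frames followed by a crude
    connected-component count; a minimum-area filter stands in for the
    morphological opening used to suppress noise."""
    fg = np.abs(frame.astype(float) - background.astype(float)) > thresh
    seen = np.zeros_like(fg)
    h, w = fg.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if fg[i, j] and not seen[i, j]:
                # flood-fill one 4-connected blob and measure its area
                queue, area = deque([(i, j)]), 0
                seen[i, j] = True
                while queue:
                    y, x = queue.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and fg[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if area >= min_area:
                    count += 1
    return count
```

    Isolated noise pixels fall below the area threshold and are discarded, which mimics the effect of a small morphological opening.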

  13. A new background subtraction method for Western blot densitometry band quantification through image analysis software.

    PubMed

    Gallo-Oller, Gabriel; Ordoñez, Raquel; Dotor, Javier

    2018-06-01

    Since its first description, Western blot has been widely used in molecular biology labs. It is a multistep method that allows the detection and/or quantification of proteins from simple to complex protein mixtures. The Western blot quantification step is critical for obtaining accurate and reproducible results. Because of the technical knowledge required for densitometry analysis, together with limited resource availability, standard office scanners are often used for image acquisition of developed Western blot films. Furthermore, the use of semi-quantitative software such as ImageJ (Java-based image-processing and analysis software) is clearly increasing in different scientific fields. In this work, we describe the use of an office scanner coupled with the ImageJ software, together with a new image background subtraction method, for accurate Western blot quantification. The proposed method represents an affordable, accurate and reproducible approach that can be used when resource availability is limited. Copyright © 2018 Elsevier B.V. All rights reserved.
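    The paper's specific subtraction method is not reproduced here, but the general idea of local background subtraction in densitometry can be illustrated on a 1-D lane profile (the function and its parameters are hypothetical, not the authors' procedure):

```python
import numpy as np

def band_intensity(profile, band, window=15):
    """Generic local-background subtraction for a 1-D densitometry profile:
    estimate the baseline under a band from its flanking regions, then
    integrate the baseline-corrected band."""
    lo, hi = band
    left = profile[max(0, lo - window):lo]
    right = profile[hi:hi + window]
    baseline = np.median(np.concatenate([left, right]))
    return float(np.sum(profile[lo:hi] - baseline))
```

    Using the flanking median makes the estimate robust to small bumps next to the band, which is the essential property any background subtraction scheme for band quantification must have.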

  14. Binarization algorithm for document image with complex background

    NASA Astrophysics Data System (ADS)

    Miao, Shaojun; Lu, Tongwei; Min, Feng

    2015-12-01

    The most important step in image preprocessing for Optical Character Recognition (OCR) is binarization. Due to a complex background or varying light in the text image, binarization is a difficult problem. This paper presents an improved binarization algorithm, which can be divided into several steps. First, the background approximation is obtained by polynomial fitting, and the text is sharpened using a bilateral filter. Second, image contrast compensation is performed to reduce the impact of lighting and improve the contrast of the original image. Third, the first derivative of the pixels in the compensated image is calculated to get the average threshold value, and the edges are detected. Fourth, the stroke width of the text is estimated by measuring the distance between edge pixels; the final stroke width is determined by choosing the most frequent distance in the histogram. Fifth, the window size is calculated from the final stroke width, and a local threshold estimation approach binarizes the image. Finally, small noise is removed with morphological operators. The experimental results show that the proposed method can effectively remove the noise caused by a complex background and varying light.
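    The first step, background approximation by polynomial fitting, might look like the following sketch (the degree, normalization and one-sigma margin are assumptions; the paper's full pipeline adds contrast compensation and stroke-width-adaptive local thresholding):

```python
import numpy as np

def poly_background(image, degree=2):
    """Least-squares fit of a low-order 2-D polynomial to the whole image,
    giving a smooth background approximation."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x = xx.ravel() / w
    y = yy.ravel() / h
    cols = [x ** i * y ** j for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, image.ravel().astype(float), rcond=None)
    return (A @ coef).reshape(h, w)

def binarize(image, degree=2):
    """Mark pixels that fall well below the fitted background (dark text on a
    bright page); the fixed one-sigma margin is an illustrative choice."""
    resid = image - poly_background(image, degree)
    return resid < resid.mean() - resid.std()
```

    Because the illumination gradient lies inside the polynomial basis, it is absorbed by the fit, and only the text strokes remain as strong negative residuals.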

  15. Crystal identification for a dual-layer-offset LYSO based PET system via Lu-176 background radiation and mean shift algorithm

    NASA Astrophysics Data System (ADS)

    Wei, Qingyang; Ma, Tianyu; Xu, Tianpeng; Zeng, Ming; Gu, Yu; Dai, Tiantian; Liu, Yaqiang

    2018-01-01

    Modern positron emission tomography (PET) detectors are made from pixelated scintillation crystal arrays and read out with Anger logic. The interaction position of the gamma ray must be assigned to a crystal using a crystal position map or look-up table, so crystal identification is a critical procedure for pixelated PET systems. In this paper, we propose a novel crystal identification method for a dual-layer-offset LYSO-based animal PET system via the Lu-176 background radiation and the mean shift algorithm. Single photon event data of the Lu-176 background radiation are acquired in list mode for 3 h to generate a single photon flood map (SPFM). Coincidence events are obtained from the same data using time information to generate a coincidence flood map (CFM). The CFM is used to identify the peaks of the inner layer using the mean shift algorithm. The response of the inner layer is then deducted from the SPFM by subtracting the CFM, and the peaks of the outer layer are likewise identified using the mean shift algorithm. The automatically identified peaks are manually inspected with a graphical user interface program. Finally, a crystal position map is generated using a distance criterion based on these peaks. The proposed method was verified on the animal PET system with 48 detector blocks on a laptop with an Intel i7-5500U processor. The total runtime for whole-system peak identification is 67.9 s. Results show that the automatic crystal identification has 99.98% and 99.09% accuracy for the peaks of the inner and outer layers of the whole system, respectively. In conclusion, the proposed method is suitable for dual-layer-offset lutetium-based PET systems, performing crystal identification without external radiation sources.
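    A toy version of the mean shift peak identification step (flat kernel, fixed bandwidth; the parameters are illustrative, not those used by the authors):

```python
import numpy as np

def mean_shift_peaks(points, bandwidth=1.0, iters=50, merge_tol=0.5):
    """Flat-kernel mean shift: every query point climbs to the centroid of the
    data points within `bandwidth`; converged positions are merged into
    distinct peaks (here, candidate crystal positions in a flood map)."""
    pts = np.asarray(points, dtype=float)
    shifted = pts.copy()
    for _ in range(iters):
        for k, p in enumerate(shifted):
            d = np.linalg.norm(pts - p, axis=1)
            shifted[k] = pts[d < bandwidth].mean(axis=0)
    peaks = []
    for p in shifted:
        if not any(np.linalg.norm(p - q) < merge_tol for q in peaks):
            peaks.append(p)
    return np.array(peaks)
```

    In the real system the "points" would be event positions in the flood map, and the number of recovered peaks should match the number of crystals in the layer being identified.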

  16. Identification of hand motion using background subtraction method and extraction of image binary with backpropagation neural network on skeleton model

    NASA Astrophysics Data System (ADS)

    Fauziah; Wibowo, E. P.; Madenda, S.; Hustinawati

    2018-03-01

    Capturing and recording human motion is mostly done for applications in sports, health, animated films, criminal investigation, and robotics. This study combines background subtraction with a backpropagation neural network in order to identify and match hand movements. The acquisition process used an 8 MP camera in MP4 format, with a duration of 48 seconds at 30 frames/s; frame extraction produced 1444 images for the hand motion identification process. The image processing phases are segmentation, feature extraction, and identification. Segmentation uses background subtraction; the extracted features are what distinguish one object from another. Feature extraction is performed with motion-based morphology analysis using the seven invariant moments, producing four motion classes: no object, hand down, hand to the side, and hands up. The identification process recognizes the hand movement from seven inputs. Testing and training with a variety of parameters showed that an architecture with one hundred hidden neurons provides the highest accuracy; this architecture propagates the input values through the implemented system into the user interface. Identification of the type of human movement achieved a highest accuracy of 98.5447%. The training process was repeated to obtain the best results.
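    As an illustration of the invariant-moment features mentioned above, the first Hu moment of a binary silhouette can be computed as follows (a single feature shown for brevity; the study uses all seven):

```python
import numpy as np

def hu_first(mask):
    """First Hu invariant moment, eta20 + eta02, of a binary silhouette: a
    translation- and (approximately) scale-invariant shape feature of the
    kind fed to a backpropagation classifier."""
    ys, xs = np.nonzero(mask)
    m00 = float(len(xs))
    xc, yc = xs.mean(), ys.mean()
    eta20 = ((xs - xc) ** 2).sum() / m00 ** 2
    eta02 = ((ys - yc) ** 2).sum() / m00 ** 2
    return eta20 + eta02
```

    The normalization by m00 squared is what makes the feature insensitive to the silhouette's size, so the same hand pose at different distances maps to nearly the same value.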

  17. Developing Essential Understanding of Addition and Subtraction for Teaching Mathematics in Pre-K-Grade 2

    ERIC Educational Resources Information Center

    Karp, Karen; Caldwell, Janet; Zbiek, Rose Mary; Bay-Williams, Jennifer

    2011-01-01

    What is the relationship between addition and subtraction? How do individuals know whether an algorithm will always work? Can they explain why order matters in subtraction but not in addition, or why it is false to assert that the sum of any two whole numbers is greater than either number? It is organized around two big ideas and supported by…

  18. AccuTyping: new algorithms for automated analysis of data from high-throughput genotyping with oligonucleotide microarrays

    PubMed Central

    Hu, Guohong; Wang, Hui-Yun; Greenawalt, Danielle M.; Azaro, Marco A.; Luo, Minjie; Tereshchenko, Irina V.; Cui, Xiangfeng; Yang, Qifeng; Gao, Richeng; Shen, Li; Li, Honghua

    2006-01-01

    Microarray-based analysis of single nucleotide polymorphisms (SNPs) has many applications in large-scale genetic studies. To minimize the influence of experimental variation, microarray data usually need to be processed in several respects, including background subtraction, normalization and low-signal filtering, before genotype determination. Although many sophisticated algorithms exist for these purposes, biases are still present. In the present paper, new algorithms for SNP microarray data analysis, and the software AccuTyping developed from these algorithms, are described. The algorithms take advantage of the large number of SNPs included in each assay and the fact that the top and bottom 20% of SNPs can be safely treated as homozygous after sorting by the ratio of their signal intensities. These SNPs are then used as controls for color channel normalization and background subtraction. Genotype calls are made based on the logarithms of the signal intensity ratios using two cutoff values, which were determined after training the program with a dataset of ∼160 000 genotypes and validated by non-microarray methods. AccuTyping was used to determine >300 000 genotypes of DNA and sperm samples. The accuracy was shown to be >99%. AccuTyping can be downloaded from . PMID:16982644
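    One plausible reading of the control-SNP normalization step can be sketched as follows (the gain model and the function name are assumptions, not the published procedure):

```python
import numpy as np

def channel_normalize(red, green, frac=0.2):
    """Sort SNPs by their (log) red/green intensity ratio, treat the top and
    bottom `frac` as presumed-homozygous controls, and use them to estimate
    a single gain factor that rescales the green channel."""
    red = np.asarray(red, dtype=float)
    green = np.asarray(green, dtype=float)
    order = np.argsort(np.log(red / green))
    k = max(1, int(frac * len(order)))
    controls = np.concatenate([order[:k], order[-k:]])
    gain = red[controls].sum() / green[controls].sum()
    return green * gain
```

    The key idea carried over from the abstract is that the extremes of the sorted ratio distribution are safe anchors, so no external calibration sample is needed.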

  19. A Wireless FSCV Monitoring IC With Analog Background Subtraction and UWB Telemetry.

    PubMed

    Dorta-Quiñones, Carlos I; Wang, Xiao Y; Dokania, Rajeev K; Gailey, Alycia; Lindau, Manfred; Apsel, Alyssa B

    2016-04-01

    A 30-μW wireless fast-scan cyclic voltammetry monitoring integrated circuit for ultra-wideband (UWB) transmission of dopamine release events in freely-behaving small animals is presented. On-chip integration of analog background subtraction and UWB telemetry yields a 32-fold increase in resolution versus standard Nyquist-rate conversion alone, near a four-fold decrease in the volume of uplink data versus single-bit, third-order, delta-sigma modulation, and more than a 20-fold reduction in transmit power versus narrowband transmission for low data rates. The 1.5- mm(2) chip, which was fabricated in 65-nm CMOS technology, consists of a low-noise potentiostat frontend, a two-step analog-to-digital converter (ADC), and an impulse-radio UWB transmitter (TX). The duty-cycled frontend and ADC/UWB-TX blocks draw 4 μA and 15 μA from 3-V and 1.2-V supplies, respectively. The chip achieves an input-referred current noise of 92 pA(rms) and an input current range of ±430 nA at a conversion rate of 10 kHz. The packaged device operates from a 3-V coin-cell battery, measures 4.7 × 1.9 cm(2), weighs 4.3 g (including the battery and antenna), and can be carried by small animals. The system was validated by wirelessly recording flow-injection of dopamine with concentrations in the range of 250 nM to 1 μM with a carbon-fiber microelectrode (CFM) using 300-V/s FSCV.

  20. A Wireless FSCV Monitoring IC with Analog Background Subtraction and UWB Telemetry

    PubMed Central

    Dorta-Quiñones, Carlos I.; Wang, Xiao Y.; Dokania, Rajeev K.; Gailey, Alycia; Lindau, Manfred; Apsel, Alyssa B.

    2015-01-01

    A 30-μW wireless fast-scan cyclic voltammetry monitoring integrated circuit for ultra-wideband (UWB) transmission of dopamine release events in freely-behaving small animals is presented. On-chip integration of analog background subtraction and UWB telemetry yields a 32-fold increase in resolution versus standard Nyquist-rate conversion alone, near a four-fold decrease in the volume of uplink data versus single-bit, third-order, delta-sigma modulation, and more than a 20-fold reduction in transmit power versus narrowband transmission for low data rates. The 1.5-mm2 chip, which was fabricated in 65-nm CMOS technology, consists of a low-noise potentiostat frontend, a two-step analog-to-digital converter (ADC), and an impulse-radio UWB transmitter (TX). The duty-cycled frontend and ADC/UWB-TX blocks draw 4 μA and 15 μA from 3-V and 1.2-V supplies, respectively. The chip achieves an input-referred current noise of 92 pArms and an input current range of ±430 nA at a conversion rate of 10 kHz. The packaged device operates from a 3-V coin-cell battery, measures 4.7 × 1.9 cm2, weighs 4.3 g (including the battery and antenna), and can be carried by small animals. The system was validated by wirelessly recording flow-injection of dopamine with concentrations in the range of 250 nM to 1 μM with a carbon-fiber microelectrode (CFM) using 300-V/s FSCV. PMID:26057983

  1. Modeling Self-subtraction in Angular Differential Imaging: Application to the HD 32297 Debris Disk

    NASA Astrophysics Data System (ADS)

    Esposito, Thomas M.; Fitzgerald, Michael P.; Graham, James R.; Kalas, Paul

    2014-01-01

    We present a new technique for forward-modeling self-subtraction of spatially extended emission in observations processed with angular differential imaging (ADI) algorithms. High-contrast direct imaging of circumstellar disks is limited by quasi-static speckle noise, and ADI is commonly used to suppress those speckles. However, the application of ADI can result in self-subtraction of the disk signal due to the disk's finite spatial extent. This signal attenuation varies with radial separation and biases measurements of the disk's surface brightness, thereby compromising inferences regarding the physical processes responsible for the dust distribution. To compensate for this attenuation, we forward model the disk structure and compute the form of the self-subtraction function at each separation. As a proof of concept, we apply our method to 1.6 and 2.2 μm Keck adaptive optics NIRC2 scattered-light observations of the HD 32297 debris disk reduced using a variant of the "locally optimized combination of images" algorithm. We are able to recover disk surface brightness that was otherwise lost to self-subtraction and produce simplified models of the brightness distribution as it appears with and without self-subtraction. From the latter models, we extract radial profiles for the disk's brightness, width, midplane position, and color that are unbiased by self-subtraction. Our analysis of these measurements indicates a break in the brightness profile power law at r ≈ 110 AU and a disk width that increases with separation from the star. We also verify disk curvature that displaces the midplane by up to 30 AU toward the northwest relative to a straight fiducial midplane.

  2. An improved algorithm of laser spot center detection in strong noise background

    NASA Astrophysics Data System (ADS)

    Zhang, Le; Wang, Qianqian; Cui, Xutai; Zhao, Yu; Peng, Zhong

    2018-01-01

    Laser spot center detection is required in many applications. Common algorithms for laser spot center detection, such as the centroid and Hough transform methods, have poor anti-interference ability and low detection accuracy under strong background noise. In this paper, median filtering was first used to remove noise while preserving the edge details of the image. Secondly, binarization of the laser spot image was carried out to separate the target from the background. Morphological filtering was then performed to eliminate noise points inside and outside the spot. Finally, the edge of the pretreated spot image was extracted and the laser spot center was obtained using a circle fitting method. On top of the circle fitting algorithm, the improved algorithm adds median filtering, morphological filtering, and other processing steps. Theoretical analysis and experimental verification show that this method effectively filters background noise, which enhances the anti-interference ability of laser spot center detection and improves detection accuracy.
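    The final circle-fitting step can be sketched with an algebraic (Kasa) least-squares fit (the edge extraction here is deliberately crude; a real implementation would run the full filtering chain described above first):

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 = 2*a*x + 2*b*y + c, so the center is (a, b) and the
    radius is sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs ** 2 + ys ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return (a, b), np.sqrt(c + a ** 2 + b ** 2)

def spot_center(image, thresh):
    """Threshold the (already denoised) image, keep foreground pixels that
    touch the background (a crude edge), and fit a circle to them."""
    mask = image > thresh
    interior = (np.roll(mask, 1, 0) & np.roll(mask, -1, 0)
                & np.roll(mask, 1, 1) & np.roll(mask, -1, 1))
    ys, xs = np.nonzero(mask & ~interior)
    return fit_circle(xs.astype(float), ys.astype(float))
```

    Fitting only boundary pixels rather than computing a centroid is what gives the method its robustness to partial occlusion and asymmetric intensity inside the spot.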

  3. An investigation of self-subtraction holography in LiNbO3

    NASA Technical Reports Server (NTRS)

    Vahey, D. W.; Kenan, R. P.; Hartman, N. F.; Sherman, R. C.

    1981-01-01

    A sample having very promising self-subtraction characteristics was tested in depth: hologram formation times were on the order of 150 sec, the null signal was less than 2.5% of the peak signal, and no fatigue or instability was detected over the span of the experiments. Another sample, fabricated with at most slight modifications, did not perform nearly as well. In all samples, attempts to improve the self-subtraction characteristics by various thermal treatments had no effect or adverse effects, with one exception in which improvement was noted after a time delay of several days. A theory developed to describe self-subtraction reproduced the observed decrease in beam intensity with time, but the shape of the predicted decay curve was oscillatory, in contrast to the exponential-like decay observed. The theory was also inadequate to account for the experimental sensitivity of self-subtraction to the Bragg angle of the hologram. It is concluded that self-subtraction is a viable method for optical processing systems requiring background discrimination.

  4. Research on the Improved Image Dodging Algorithm Based on Mask Technique

    NASA Astrophysics Data System (ADS)

    Yao, F.; Hu, H.; Wan, Y.

    2012-08-01

    The remote sensing image dodging algorithm based on the Mask technique is a good method for removing uneven lightness within a single image. However, this algorithm has some problems, such as how to set an appropriate filter size, for which there is no good solution. To solve these problems, an improved algorithm is proposed. In the improved algorithm, the original image is divided into blocks, and the image blocks with different definitions are smoothed using low-pass filters with different cut-off frequencies to obtain the background image; in the image after subtraction, the regions with different lightness are processed using different linear transformation models. The improved algorithm achieves a better dodging result than the original one and makes the contrast of the whole image more consistent.
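    A minimal sketch of Mask-style dodging with a single global low-pass filter (the improved algorithm instead adapts the cut-off frequency per block; the sigma and brightness target here are illustrative):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian low-pass filter: the background (lightness) estimate
    in a Mask-style dodging scheme."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img.astype(float), r, mode="edge")
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, tmp)

def dodge(img, sigma=8, target_mean=None):
    """Mask-method dodging: subtract the low-pass background, then shift the
    result to a uniform target brightness."""
    bg = gaussian_blur(img, sigma)
    if target_mean is None:
        target_mean = img.mean()
    return img - bg + target_mean
```

    Subtracting the blurred copy removes the slowly varying lightness while leaving high-frequency detail intact; the per-region linear transformations of the improved algorithm would replace the single constant shift used here.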

  5. SWCD: a sliding window and self-regulated learning-based background updating method for change detection in videos

    NASA Astrophysics Data System (ADS)

    Işık, Şahin; Özkan, Kemal; Günal, Serkan; Gerek, Ömer Nezih

    2018-03-01

    Change detection by background subtraction remains an unresolved issue and attracts research interest due to the challenges encountered in static and dynamic scenes. The key challenge is how to update dynamically changing backgrounds from frames with an adaptive and self-regulated feedback mechanism. To achieve this, we present an effective change detection algorithm for pixelwise changes. A sliding window approach combined with dynamic control of the update parameters is introduced for updating background frames, which we call sliding window-based change detection (SWCD). Comprehensive experiments on related test videos show that the integrated algorithm yields good objective and subjective performance by overcoming illumination variations, camera jitter, and intermittent object motion. We argue that the method is a fair alternative for most types of foreground extraction scenarios, unlike case-specific methods, which normally fail in scenarios they were not designed for.
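    The sliding-window background update can be sketched as follows (a fixed-length median window with a constant threshold; the published method adds self-regulated control of the update parameters):

```python
import numpy as np
from collections import deque

class SlidingWindowBackground:
    """Keep the last `n` frames in a sliding window; the per-pixel median is
    the background model, and pixels deviating by more than `thresh` are
    flagged as foreground before the window is updated."""
    def __init__(self, n=5, thresh=25):
        self.frames = deque(maxlen=n)
        self.thresh = thresh

    def apply(self, frame):
        frame = np.asarray(frame, dtype=float)
        if not self.frames:
            self.frames.append(frame)
            return np.zeros(frame.shape, dtype=bool)
        bg = np.median(np.stack(self.frames), axis=0)
        mask = np.abs(frame - bg) > self.thresh
        self.frames.append(frame)
        return mask
```

    Because the window slides, gradual illumination changes drift into the background model within n frames, while sudden per-pixel changes are reported as foreground.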

  6. Brain Activation during Addition and Subtraction Tasks In-Noise and In-Quiet

    PubMed Central

    Abd Hamid, Aini Ismafairus; Yusoff, Ahmad Nazlim; Mukari, Siti Zamratol-Mai Sarah; Mohamad, Mazlyfarina

    2011-01-01

    Background: In spite of extensive research conducted to study how the human brain works, little is known about a special function of the brain that stores and manipulates information (the working memory) and how noise influences this ability. In this study, functional magnetic resonance imaging (fMRI) was used to investigate brain responses to arithmetic problems solved in noisy and quiet backgrounds. Methods: Eighteen healthy young males performed simple arithmetic operations of addition and subtraction with in-quiet and in-noise backgrounds. The MATLAB-based Statistical Parametric Mapping (SPM8) was implemented on the fMRI datasets to generate and analyse the activated brain regions. Results: Group results showed that addition and subtraction operations evoked extended activation in the left inferior parietal lobe, left precentral gyrus, left superior parietal lobe, left supramarginal gyrus, and left middle temporal gyrus. This supported the hypothesis that the human brain activates its left hemisphere relatively more than the right hemisphere when solving arithmetic problems. The insula, middle cingulate cortex, and middle frontal gyrus, however, showed more extended right-hemispheric activation, potentially due to the involvement of attention, executive processes, and working memory. For addition operations, there was extensive left-hemispheric activation in the superior temporal gyrus, inferior frontal gyrus, and thalamus. In contrast, subtraction tasks evoked a greater activation of similar brain structures in the right hemisphere. For both addition and subtraction operations, the total number of activated voxels was higher for in-noise than in-quiet conditions. Conclusion: These findings suggest that when arithmetic operations were delivered auditorily, the auditory, attention, and working memory functions were required to accomplish the executive processing of the mathematical calculation. The respective brain activation patterns appear to be

  7. Accurate phase extraction algorithm based on Gram–Schmidt orthonormalization and least square ellipse fitting method

    NASA Astrophysics Data System (ADS)

    Lei, Hebing; Yao, Yong; Liu, Haopeng; Tian, Yiting; Yang, Yanfu; Gu, Yinglong

    2018-06-01

    An accurate algorithm combining Gram-Schmidt orthonormalization and least-squares ellipse fitting is proposed, which can be used for phase extraction from two or three interferograms. The DC term of the background intensity is suppressed by a subtraction operation on three interferograms or by a high-pass filter on two interferograms. Performing Gram-Schmidt orthonormalization on the pre-processed interferograms corrects the phase shift error and yields a general ellipse form. The background intensity error and the residual correction error can then be compensated by the least-squares ellipse fitting method. Finally, the phase can be extracted rapidly. The algorithm copes with two or three interferograms subject to environmental disturbance, a low fringe number, or small phase shifts. The accuracy and effectiveness of the proposed algorithm are verified by both numerical simulations and experiments.
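    For the two-interferogram case, the Gram-Schmidt step can be sketched as follows (DC removal by simple mean subtraction; the ellipse-fitting correction stage is omitted, and the result carries the usual global sign and piston ambiguity):

```python
import numpy as np

def gs_phase(i1, i2):
    """Two-frame Gram-Schmidt phase retrieval: remove the DC term, normalize
    the first frame, orthonormalize the second frame against it, and read
    the fringe phase off the resulting quadrature pair."""
    u1 = i1 - i1.mean()
    u1 = u1 / np.linalg.norm(u1)
    u2 = i2 - i2.mean()
    u2 = u2 - (u2 @ u1) * u1          # Gram-Schmidt orthogonalization step
    u2 = u2 / np.linalg.norm(u2)
    return np.arctan2(u2, u1)         # phase up to a global sign and offset
```

    The orthonormalized pair behaves like cosine and sine channels of the same fringe pattern, which is why the arctangent recovers the phase without knowing the actual phase shift between the two frames.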

  8. Children's Understanding of the Addition/Subtraction Complement Principle

    ERIC Educational Resources Information Center

    Torbeyns, Joke; Peters, Greet; De Smedt, Bert; Ghesquière, Pol; Verschaffel, Lieven

    2016-01-01

    Background: In the last decades, children's understanding of mathematical principles has become an important research topic. Different from the commutativity and inversion principles, only few studies have focused on children's understanding of the addition/subtraction complement principle (if a - b = c, then c + b = a), mainly relying on verbal…

  9. Target detection using the background model from the topological anomaly detection algorithm

    NASA Astrophysics Data System (ADS)

    Dorado Munoz, Leidy P.; Messinger, David W.; Ziemann, Amanda K.

    2013-05-01

    The Topological Anomaly Detection (TAD) algorithm has been used as an anomaly detector in hyperspectral and multispectral images. TAD is an algorithm based on graph theory that constructs a topological model of the background in a scene, and computes an anomalousness ranking for all of the pixels in the image with respect to the background in order to identify pixels with uncommon or strange spectral signatures. The pixels that are modeled as background are clustered into groups or connected components, which could be representative of spectral signatures of materials present in the background. Therefore, the idea of using the background components given by TAD in target detection is explored in this paper. These connected components are characterized in three different approaches, where the mean signature and endmembers for each component are calculated and used as background basis vectors in Orthogonal Subspace Projection (OSP) and Adaptive Subspace Detector (ASD). Likewise, the covariance matrix of those connected components is estimated and used in the detectors Constrained Energy Minimization (CEM) and Adaptive Coherence Estimator (ACE). The performance of these approaches and the different detectors is compared with a global approach, where the background characterization is derived directly from the image. Experiments and results using the self-test data set provided as part of the RIT blind test target detection project are shown.
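
    As an illustration of how background basis vectors feed a detector, a minimal Orthogonal Subspace Projection score can be written as follows (a textbook OSP form for a single pixel; here `B` would hold the TAD component mean signatures as columns, and all names are illustrative):

```python
import numpy as np

def osp_detector(x, target, B):
    """OSP sketch: project the pixel onto the orthogonal complement of
    the background subspace spanned by the columns of B, then correlate
    the residual with the known target signature."""
    # Projector onto the orthogonal complement of the background subspace.
    P = np.eye(len(target)) - B @ np.linalg.pinv(B)
    return float(target @ P @ x)
```

For a pixel x = a*target + b*background with background lying in the span of B, the score reduces to a times the projected target energy, i.e. the background contribution is annihilated.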

  10. Computational solution of spike overlapping using data-based subtraction algorithms to resolve synchronous sympathetic nerve discharge

    PubMed Central

    Su, Chun-Kuei; Chiang, Chia-Hsun; Lee, Chia-Ming; Fan, Yu-Pei; Ho, Chiu-Ming; Shyu, Liang-Yu

    2013-01-01

    Sympathetic nerves conveying central commands to regulate visceral functions often display activities in synchronous bursts. To understand how individual fibers fire synchronously, we established "oligofiber recording techniques" to record "several" nerve fiber activities simultaneously, using in vitro splanchnic sympathetic nerve–thoracic spinal cord preparations of neonatal rats as experimental models. While distinct spike potentials were easily recorded from collagenase-dissociated sympathetic fibers, a problem arising from synchronous nerve discharges is a higher incidence of complex waveforms resulting from spike overlapping. Because commercial software does not provide an explicit solution for spike overlapping, a series of custom-made LabVIEW programs incorporating MATLAB scripts was written for spike sorting. Spikes were represented as data points after waveform feature extraction and automatically grouped by k-means clustering, followed by principal component analysis (PCA) to verify their waveform homogeneity. For dissimilar waveforms whose Hotelling's T2 distances from the cluster centroids were excessive, a unique data-based subtraction algorithm (SA) was used to determine whether they were complex waveforms resulting from superimposing a spike pattern close to the cluster centroid with other signals that could be observed in the original recordings. In comparison with commercial software, higher accuracy was achieved by analyses using our algorithms on synthetic data that contained synchronous spiking and complex waveforms. Moreover, both T2-selected and SA-retrieved spikes were combined as unit activities. Quantitative analyses were performed to evaluate whether unit activities truly originated from single fibers. We conclude that application of our programs can help to resolve synchronous sympathetic nerve discharges (SND). PMID:24198782
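
    The subtraction idea can be illustrated in miniature: subtract a candidate template from an unresolved waveform and test whether the residual matches another template. This sketch ignores time shifts and the Hotelling's T2 selection step, and all names and the tolerance are illustrative:

```python
import numpy as np

def resolve_overlap(waveform, templates, tol=0.5):
    """Data-based subtraction sketch: test whether a waveform is the
    superposition of one known spike template with another, by
    subtracting each template and matching the residual against the
    rest (no time shifts; a simplification of the paper's SA)."""
    best = None
    for i, t in enumerate(templates):
        residual = waveform - t
        # Match the residual against the candidate templates.
        for j, u in enumerate(templates):
            err = np.linalg.norm(residual - u) / np.linalg.norm(u)
            if err < tol and (best is None or err < best[2]):
                best = (i, j, err)
    return best  # (template subtracted, template matched, relative error)
```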

  11. Automatic background updating for video-based vehicle detection

    NASA Astrophysics Data System (ADS)

    Hu, Chunhai; Li, Dongmei; Liu, Jichuan

    2008-03-01

    Video-based vehicle detection is one of the most valuable techniques for the Intelligent Transportation System (ITS). The most widely used video-based vehicle detection technique is the background subtraction method, whose key problem is how to subtract and update the background effectively. In this paper an efficient background updating scheme based on Zone-Distribution is proposed for vehicle detection, to resolve the problems caused by sudden camera perturbation and sudden or gradual illumination change, as well as the sleeping-person problem. The proposed scheme is robust and fast enough to satisfy the real-time constraints of vehicle detection.
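
    For context, the baseline that such schemes improve on is a running-average background model with a per-pixel threshold test; a generic sketch (not the paper's Zone-Distribution method; all parameters are illustrative):

```python
import numpy as np

def update_background(bg, frame, alpha=0.05, thresh=30):
    """Baseline background subtraction with running-average update.
    Pixels flagged as foreground are updated at a reduced rate so that
    briefly stopped vehicles are not absorbed into the background too
    quickly (the naive fix that leads to the sleeping-person problem)."""
    diff = np.abs(frame.astype(np.float64) - bg)
    fg_mask = diff > thresh
    # Slow learning rate on foreground pixels, normal rate elsewhere.
    rate = np.where(fg_mask, alpha * 0.1, alpha)
    bg_new = (1.0 - rate) * bg + rate * frame
    return fg_mask, bg_new
```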

  12. Self-Adaptive Prediction of Cloud Resource Demands Using Ensemble Model and Subtractive-Fuzzy Clustering Based Fuzzy Neural Network

    PubMed Central

    Chen, Zhijia; Zhu, Yuanchang; Di, Yanqiang; Feng, Shaochong

    2015-01-01

    In an IaaS (infrastructure as a service) cloud environment, users are provisioned with virtual machines (VMs). To allocate resources for users dynamically and effectively, accurately predicting resource demands is essential. For this purpose, this paper proposes a self-adaptive prediction method using an ensemble model and a subtractive-fuzzy clustering based fuzzy neural network (ESFCFNN). We analyze the characteristics of user preferences and demands. Then the architecture of the prediction model is constructed. We adopt several base predictors to compose the ensemble model. Then the structure and learning algorithm of the fuzzy neural network are studied. To obtain the number of fuzzy rules and the initial values of the premise and consequent parameters, this paper proposes fuzzy c-means combined with a subtractive clustering algorithm, that is, subtractive-fuzzy clustering. Finally, we adopt different criteria to evaluate the proposed method. The experimental results show that the method is accurate and effective in predicting resource demands. PMID:25691896
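
    The subtractive-clustering step that seeds the fuzzy rules can be sketched as follows (a Chiu-style formulation under illustrative parameter choices; the paper combines it with fuzzy c-means):

```python
import numpy as np

def subtractive_clustering(X, ra=1.0, eps=0.15):
    """Chiu-style subtractive clustering sketch: every point is a
    candidate centre; potentials come from local point density and are
    revised downward around each accepted centre until the best
    remaining potential drops below a fraction eps of the first."""
    alpha = 4.0 / ra ** 2
    beta = 4.0 / (1.5 * ra) ** 2      # penalty radius rb = 1.5 * ra
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    pot = np.exp(-alpha * d2).sum(1)
    centres, p0 = [], pot.max()
    while pot.max() > eps * p0 and len(centres) < len(X):
        k = int(pot.argmax())
        centres.append(X[k])
        # Revise potentials: penalize points near the new centre.
        pot = pot - pot[k] * np.exp(-beta * d2[k])
    return np.array(centres)
```

The number of centres found gives the number of fuzzy rules, and the centres initialize the premise parameters.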

  13. A Novel Sky-Subtraction Method Based on Non-negative Matrix Factorisation with Sparsity for Multi-object Fibre Spectroscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Zhang, Long; Ye, Zhongfu

    2016-12-01

    A novel sky-subtraction method based on non-negative matrix factorisation with sparsity is proposed in this paper. The proposed method is redesigned for sky subtraction considering the characteristics of the skylights. It has two constraint terms, one for sparsity and the other for homogeneity. Unlike standard sky-subtraction techniques, such as B-spline curve fitting methods and Principal Component Analysis approaches, sky subtraction based on the non-negative matrix factorisation with sparsity method offers higher accuracy and flexibility. The method has research value for sky subtraction in multi-object fibre spectroscopic telescope surveys. To demonstrate the effectiveness and superiority of the proposed algorithm, experiments are performed on Large Sky Area Multi-Object Fiber Spectroscopic Telescope data, as the mechanisms of multi-object fibre spectroscopic telescopes are similar.
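
    For orientation, a bare-bones sparse NMF with multiplicative updates looks like this (a generic L1-penalised variant, not the paper's objective, which also carries a homogeneity constraint; all parameters are illustrative):

```python
import numpy as np

def nmf_sparse(V, rank, sparsity=0.1, iters=500, seed=0):
    """Multiplicative-update NMF (Frobenius loss) with an L1 sparsity
    penalty on the coefficients H. Rows of V would be sky spectra; the
    factors stay non-negative by construction."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-3
    H = rng.random((rank, n)) + 1e-3
    for _ in range(iters):
        # Standard Lee-Seung updates; the L1 term enters H's denominator.
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
        H *= (W.T @ V) / (W.T @ W @ H + sparsity + 1e-9)
    return W, H
```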

  14. Novel artefact removal algorithms for co-registered EEG/fMRI based on selective averaging and subtraction.

    PubMed

    de Munck, Jan C; van Houdt, Petra J; Gonçalves, Sónia I; van Wegen, Erwin; Ossenblok, Pauly P W

    2013-01-01

    Co-registered EEG and functional MRI (EEG/fMRI) is a potential clinical tool for planning invasive EEG in patients with epilepsy. In addition, the analysis of EEG/fMRI data provides a fundamental insight into the precise physiological meaning of both fMRI and EEG data. Routine application of EEG/fMRI for localization of epileptic sources is hampered by large artefacts in the EEG, caused by switching of scanner gradients and heartbeat effects. Residuals of the ballistocardiogram (BCG) artefacts are shaped similarly to epileptic spikes, and may therefore cause false identification of spikes. In this study, new ideas and methods are presented to remove gradient artefacts and to reduce BCG artefacts of different shapes that mutually overlap in time. Gradient artefacts can be removed efficiently by subtracting an average artefact template when the EEG sampling frequency and EEG low-pass filtering are sufficient in relation to MR gradient switching (Gonçalves et al., 2007). When this is not the case, the gradient artefacts repeat themselves at time intervals that depend on the remainder between the fMRI repetition time and the closest multiple of the EEG acquisition time. These repetitions are deterministic, but difficult to predict due to the limited precision with which these timings are known. Therefore, we propose to estimate gradient artefact repetitions using a clustering algorithm, combined with selective averaging. Clustering of the gradient artefacts yields cleaner EEG for data recorded during scanning on a 3T scanner when using a sampling frequency of 2048 Hz. It even gives clean EEG when the EEG is sampled at only 256 Hz. Current BCG artefact-reduction algorithms based on average template subtraction have the intrinsic limitation that they fail to deal properly with artefacts that overlap in time. To eliminate this constraint, the precise timings of artefact overlaps were modelled and represented in a sparse matrix. Next, the artefacts were disentangled with…
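
    The average-artefact-template baseline that this work extends can be sketched compactly (assuming an integer sample period and no timing drift, which is exactly the failure mode the paper's clustering approach targets):

```python
import numpy as np

def remove_gradient_artefact(eeg, period):
    """Average-artefact-subtraction (AAS-style) sketch: cut the
    recording into epochs of one scanner repetition, average them into
    a template, and subtract the template from every epoch. Samples
    beyond the last full epoch are passed through unchanged."""
    n = len(eeg) // period
    epochs = eeg[: n * period].reshape(n, period)
    template = epochs.mean(0)
    cleaned = (epochs - template).ravel()
    return np.concatenate([cleaned, eeg[n * period:]])
```

When the artefact repeats exactly with the assumed period, the template matches it and the residual is (near) zero; timing jitter or drift smears the template, motivating the clustering plus selective averaging proposed here.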

  15. Three-photon N00N states generated by photon subtraction from double photon pairs.

    PubMed

    Kim, Heonoh; Park, Hee Su; Choi, Sang-Kyung

    2009-10-26

    We describe an experimental demonstration of a novel three-photon N00N state generation scheme using a single source of photons based on spontaneous parametric down-conversion (SPDC). The three-photon entangled state is generated when a photon is subtracted from a double pair of photons and detected by a heralding counter. Interference fringes measured with an emulated three-photon detector reveal the three-photon de Broglie wavelength and exhibit visibility > 70% without background subtraction.

  16. Removal of two large-scale cosmic microwave background anomalies after subtraction of the integrated Sachs-Wolfe effect

    NASA Astrophysics Data System (ADS)

    Rassat, A.; Starck, J.-L.; Dupé, F.-X.

    2013-09-01

    Context. Although there is currently a debate over the significance of the claimed large-scale anomalies in the cosmic microwave background (CMB), their existence is not totally dismissed. In parallel to the debate over their statistical significance, recent work has also focussed on masks and secondary anisotropies as potential sources of these anomalies. Aims: In this work we investigate simultaneously the impact of the method used to account for masked regions as well as the impact of the integrated Sachs-Wolfe (ISW) effect, which is the large-scale secondary anisotropy most likely to affect the CMB anomalies. In this sense, our work is an update of previous works. Our aim is to identify trends in CMB data from different years and with different mask treatments. Methods: We reconstruct the ISW signal due to 2 Micron All-Sky Survey (2MASS) and NRAO VLA Sky Survey (NVSS) galaxies, effectively reconstructing the low-redshift ISW signal out to z ~ 1. We account for regions of missing data using the sparse inpainting technique. We test sparse inpainting of the CMB, large scale structure and ISW and find that it constitutes a bias-free reconstruction method suitable to study large-scale statistical isotropy and the ISW effect. Results: We focus on three large-scale CMB anomalies: the low quadrupole, the quadrupole/octopole alignment, and the octopole planarity. After sparse inpainting, the low quadrupole becomes more anomalous, whilst the quadrupole/octopole alignment becomes less anomalous. The significance of the low quadrupole is unchanged after subtraction of the ISW effect, while the trend amongst the CMB maps is that both the low quadrupole and the quadrupole/octopole alignment have reduced significance, yet other hypotheses remain possible as well (e.g. exotic physics). Our results also suggest that both of these anomalies may be due to the quadrupole alone. While the octopole planarity significance is reduced after inpainting and after ISW subtraction, however…

  17. Focal plane infrared readout circuit with automatic background suppression

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Yang, Guang (Inventor); Sun, Chao (Inventor); Shaw, Timothy J. (Inventor); Wrigley, Chris J. (Inventor)

    2002-01-01

    A circuit for reading out a signal from an infrared detector includes a current-mode background-signal subtracting circuit having a current memory which can be enabled to sample and store a dark level signal from the infrared detector during a calibration phase. The signal stored by the current memory is subtracted from a signal received from the infrared detector during an imaging phase. The circuit also includes a buffered direct injection input circuit and a differential voltage readout section. By performing most of the background signal estimation and subtraction in a current mode, a low gain can be provided by the buffered direct injection input circuit to keep the gain of the background signal relatively small, while a higher gain is provided by the differential voltage readout circuit. An array of such readout circuits can be used in an imager having an array of infrared detectors. The readout circuits can provide a high effective handling capacity.

  18. Nonrigid Image Registration in Digital Subtraction Angiography Using Multilevel B-Spline

    PubMed Central

    2013-01-01

    We address the problem of motion artifact reduction in digital subtraction angiography (DSA) using image registration techniques. Most of the registration algorithms proposed for application in DSA have been designed for peripheral and cerebral angiography images, in which we mainly deal with global rigid motions. These algorithms did not yield good results when applied to coronary angiography images because of the complex nonrigid motions that exist in this type of angiography image. Multiresolution and iterative algorithms have been proposed to cope with this problem, but they are associated with a high computational cost, which makes them unacceptable for real-time clinical applications. In this paper we propose a nonrigid image registration algorithm for coronary angiography images that is significantly faster than multiresolution and iterative blocking methods and outperforms competing algorithms evaluated on the same data sets. The algorithm is based on a sparse set of matched feature point pairs, and the elastic registration is performed by means of multilevel B-spline image warping. Experimental results with several clinical data sets demonstrate the effectiveness of our approach. PMID:23971026

  19. A background correction algorithm for Van Allen Probes MagEIS electron flux measurements

    DOE PAGES

    Claudepierre, S. G.; O'Brien, T. P.; Blake, J. B.; ...

    2015-07-14

    We describe an automated computer algorithm designed to remove background contamination from the Van Allen Probes Magnetic Electron Ion Spectrometer (MagEIS) electron flux measurements. We provide a detailed description of the algorithm with illustrative examples from on-orbit data. We find two primary sources of background contamination in the MagEIS electron data: inner zone protons and bremsstrahlung X-rays generated by energetic electrons interacting with the spacecraft material. Bremsstrahlung X-rays primarily produce contamination in the lower energy MagEIS electron channels (~30–500 keV) and in regions of geospace where multi-MeV electrons are present. Inner zone protons produce contamination in all MagEIS energy channels at roughly L < 2.5. The background-corrected MagEIS electron data produce a more accurate measurement of the electron radiation belts, as most earlier measurements suffer from unquantifiable and uncorrectable contamination in this harsh region of the near-Earth space environment. These background-corrected data will also be useful for spacecraft engineering purposes, providing ground truth for the near-Earth electron environment and informing the next generation of spacecraft design models (e.g., AE9).

  20. Dereverberation and denoising based on generalized spectral subtraction by multi-channel LMS algorithm using a small-scale microphone array

    NASA Astrophysics Data System (ADS)

    Wang, Longbiao; Odani, Kyohei; Kai, Atsuhiko

    2012-12-01

    A blind dereverberation method based on power spectral subtraction (SS) using a multi-channel least mean squares algorithm was previously proposed to suppress reverberant speech without additive noise. The results of isolated word speech recognition experiments showed that this method achieved significant improvements over conventional cepstral mean normalization (CMN) in a reverberant environment. In this paper, we propose a blind dereverberation method based on generalized spectral subtraction (GSS), which has been shown to be effective for noise reduction, instead of power SS. Furthermore, we extend the missing feature theory (MFT), which was initially proposed to enhance robustness against additive noise, to dereverberation. A one-stage dereverberation and denoising method based on GSS is presented to simultaneously suppress both the additive noise and the nonstationary multiplicative noise (reverberation). The proposed dereverberation method based on GSS with MFT is evaluated on a large vocabulary continuous speech recognition task. When the additive noise is absent, the dereverberation method based on GSS with MFT using only 2 microphones achieves relative word error reduction rates of 11.4% and 32.6% compared to the dereverberation method based on power SS and the conventional CMN, respectively. For reverberant and noisy speech, the dereverberation and denoising method based on GSS achieves a relative word error reduction rate of 12.8% compared to the conventional CMN with a GSS-based additive noise reduction method. We also analyze the factors affecting the compensation parameter estimation for the SS-based dereverberation method, such as the number of channels (the number of microphones), the length of reverberation to be suppressed, and the length of the utterance used for parameter estimation. The experimental results showed that the SS-based method is robust in a variety of reverberant environments for both isolated and continuous speech.
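
    The GSS operation itself is compact; a textbook sketch on magnitude spectra (not the paper's dereverberation variant, which estimates the late-reverberation "noise" term from preceding frames; all parameters are illustrative):

```python
import numpy as np

def generalized_spectral_subtraction(X_mag, N_mag, alpha=1.0, n=0.5, floor=0.01):
    """Generalized spectral subtraction: subtract the noise estimate in
    the |.|^(2n) domain (n = 1 recovers power SS, n = 0.5 magnitude SS),
    with spectral flooring to avoid negative bins."""
    xs = X_mag ** (2 * n)
    ns = N_mag ** (2 * n)
    sub = xs - alpha * ns
    # Floor residual bins to a small fraction of the observed spectrum.
    sub = np.maximum(sub, floor * xs)
    return sub ** (1.0 / (2 * n))
```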

  1. 3D temporal subtraction on multislice CT images using nonlinear warping technique

    NASA Astrophysics Data System (ADS)

    Ishida, Takayuki; Katsuragawa, Shigehiko; Kawashita, Ikuo; Kim, Hyounseop; Itai, Yoshinori; Awai, Kazuo; Li, Qiang; Doi, Kunio

    2007-03-01

    The detection of very subtle lesions and/or lesions overlapped with vessels on CT images is a time-consuming and difficult task for radiologists. In this study, we have developed a 3D temporal subtraction method to enhance interval changes between previous and current multislice CT images based on a nonlinear image warping technique. Our method provides a subtraction CT image which is obtained by subtraction of a previous CT image from a current CT image. Reduction of misregistration artifacts is important in the temporal subtraction method. Therefore, our computerized method includes global and local image matching techniques for accurate registration of current and previous CT images. For global image matching, we selected the corresponding previous section image for each current section image by using 2D cross-correlation between a blurred low-resolution current CT image and a blurred previous CT image. For local image matching, we applied the 3D template matching technique with translation and rotation of volumes of interest (VOIs) which were selected in the current and the previous CT images. The local shift vector for each VOI pair was determined when the cross-correlation value became maximal in the 3D template matching. The local shift vectors at all voxels were determined by interpolation of the shift vectors of VOIs, and then the previous CT image was nonlinearly warped according to the shift vector at each voxel. Finally, the warped previous CT image was subtracted from the current CT image. The 3D temporal subtraction method was applied to 19 clinical cases. The normal background structures such as vessels, ribs, and heart were removed without large misregistration artifacts. Thus, interval changes due to lung diseases were clearly enhanced as white shadows on the subtraction CT images.
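
    The local matching step can be illustrated with translation-only normalized cross-correlation over a small search window (rotation and the subsequent shift-vector interpolation are omitted; names are illustrative):

```python
import numpy as np

def best_local_shift(curr_voi, prev_vol, center, search=2):
    """Exhaustive normalized cross-correlation over a small search
    window: find the shift of the VOI in the previous volume that
    maximizes correlation with the current VOI. `center` is the VOI
    corner in the previous volume; translation only."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        d = np.linalg.norm(a) * np.linalg.norm(b)
        return float((a * b).sum() / d) if d > 0 else -1.0
    dz, dy, dx = curr_voi.shape
    best_score, best_shift = -2.0, (0, 0, 0)
    for sz in range(-search, search + 1):
        for sy in range(-search, search + 1):
            for sx in range(-search, search + 1):
                z0, y0, x0 = center[0] + sz, center[1] + sy, center[2] + sx
                if min(z0, y0, x0) < 0:
                    continue  # avoid negative-index wraparound
                cand = prev_vol[z0:z0 + dz, y0:y0 + dy, x0:x0 + dx]
                if cand.shape != curr_voi.shape:
                    continue  # candidate window ran off the volume
                score = ncc(curr_voi, cand)
                if score > best_score:
                    best_score, best_shift = score, (sz, sy, sx)
    return best_shift, best_score
```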

  2. Preschoolers' Understanding of Subtraction-Related Principles

    ERIC Educational Resources Information Center

    Baroody, Arthur J.; Lai, Meng-lung; Li, Xia; Baroody, Alison E.

    2009-01-01

    Little research has focused on an informal understanding of subtractive negation (e.g., 3 - 3 = 0) and subtractive identity (e.g., 3 - 0 = 3). Previous research indicates that preschoolers may have a fragile (i.e., unreliable or localized) understanding of the addition-subtraction inverse principle (e.g., 2 + 1 - 1 = 2). Recognition of a small…

  3. Intermediate Palomar Transient Factory: Realtime Image Subtraction Pipeline

    DOE PAGES

    Cao, Yi; Nugent, Peter E.; Kasliwal, Mansi M.

    2016-09-28

    A fast-turnaround pipeline for realtime data reduction plays an essential role in discovering and permitting follow-up observations of young supernovae and fast-evolving transients in modern time-domain surveys. In this paper, we present the realtime image subtraction pipeline in the intermediate Palomar Transient Factory. By using high-performance computing, efficient databases, and machine-learning algorithms, this pipeline manages to reliably deliver transient candidates within 10 minutes of images being taken. Our experience in using high-performance computing resources to process big data in astronomy serves as a trailblazer for dealing with data from large-scale time-domain facilities in the near future.

  4. Intermediate Palomar Transient Factory: Realtime Image Subtraction Pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Yi; Nugent, Peter E.; Kasliwal, Mansi M.

    A fast-turnaround pipeline for realtime data reduction plays an essential role in discovering and permitting follow-up observations of young supernovae and fast-evolving transients in modern time-domain surveys. In this paper, we present the realtime image subtraction pipeline in the intermediate Palomar Transient Factory. By using high-performance computing, efficient databases, and machine-learning algorithms, this pipeline manages to reliably deliver transient candidates within 10 minutes of images being taken. Our experience in using high-performance computing resources to process big data in astronomy serves as a trailblazer for dealing with data from large-scale time-domain facilities in the near future.

  5. Coherent bremsstrahlung used for digital subtraction angiography

    NASA Astrophysics Data System (ADS)

    Überall, Herbert

    2007-05-01

    Digital subtraction angiography (DSA), also known as Dichromography, using synchrotron radiation beams has been developed at Stanford University (R. Hofstadter) and was subsequently taken over at the Brookhaven Synchrotron and later at Hamburg (HASYLAB) [see, e.g., W.R. Dix, Physik in unserer Zeit 30 (1999) 160]. The imaging of coronary arteries is carried out with an iodine-based contrast agent which need not be injected into the heart. The radiation must be monochromatized and is applied above and below the K-edge of iodine (33.16 keV), with a subsequent digital subtraction of the two images. Monochromatization of the synchrotron radiation causes a loss of intensity of 10^-3. We propose instead the use of coherent bremsstrahlung [see, e.g., A.W. Saenz and H. Uberall, Phys. Rev. B25 (1982) 448], which is inherently monochromatic, furnishing a flux of 10^12 photons/sec. This requires a 10-20 MeV electron linac, which can be obtained by many larger hospitals, eliminating the scheduling problems present at synchrotrons. The large, broad incoherent bremsstrahlung background underlying the monochromatic spike would lead to inadmissible overexposure of the patient. This problem can be solved with the use of Kumakhov's capillary optics [see, e.g., S.B. Dabagov, Physics-Uspekhi 46 (2003) 1053]: the low-energy spiked radiation can be deflected towards the patient, while the higher energy incoherent background continues forward, avoiding the patient, who is placed several meters from the source.

  6. Assessing the impact of background spectral graph construction techniques on the topological anomaly detection algorithm

    NASA Astrophysics Data System (ADS)

    Ziemann, Amanda K.; Messinger, David W.; Albano, James A.; Basener, William F.

    2012-06-01

    Anomaly detection algorithms have historically been applied to hyperspectral imagery in order to identify pixels whose material content is incongruous with the background material in the scene. Typically, the application involves extracting man-made objects from natural and agricultural surroundings. A large challenge in designing these algorithms is determining which pixels initially constitute the background material within an image. The topological anomaly detection (TAD) algorithm constructs a graph theory-based, fully non-parametric topological model of the background in the image scene, and uses codensity to measure deviation from this background. In TAD, the initial graph theory structure of the image data is created by connecting an edge between any two pixel vertices x and y if the Euclidean distance between them is less than some resolution r. While this type of proximity graph is among the most well-known approaches to building a geometric graph based on a given set of data, there is a wide variety of different geometrically-based techniques. In this paper, we present a comparative test of the performance of TAD across four different constructs of the initial graph: the mutual k-nearest neighbor graph, the sigma-local graph for two different values of σ > 1, and the proximity graph originally implemented in TAD.
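
    Of the four constructions, the mutual k-nearest-neighbour graph is easily stated: an edge joins two pixels only if each is among the other's k nearest neighbours. A brute-force sketch (fine for small data; real imagery would need a spatial index):

```python
import numpy as np

def mutual_knn_edges(X, k):
    """Mutual k-nearest-neighbour graph: edge (i, j) exists only when
    i is among j's k nearest neighbours AND j is among i's."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)  # a point is not its own neighbour
    # knn[i] holds the indices of i's k nearest neighbours.
    knn = np.argsort(d, axis=1)[:, :k]
    edges = set()
    for i in range(len(X)):
        for j in knn[i]:
            if i in knn[j] and i < j:
                edges.add((i, int(j)))
    return sorted(edges)
```

Note the asymmetry this repairs: in a plain kNN digraph, an outlier always points at k neighbours, but nothing need point back, so mutual kNN naturally isolates anomalous pixels from the background graph.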

  7. A subtraction scheme for computing QCD jet cross sections at NNLO: integrating the subtraction terms I

    NASA Astrophysics Data System (ADS)

    Somogyi, Gábor; Trócsányi, Zoltán

    2008-08-01

    In previous articles we outlined a subtraction scheme for regularizing doubly-real emission and real-virtual emission in next-to-next-to-leading order (NNLO) calculations of jet cross sections in electron-positron annihilation. In order to find the NNLO correction these subtraction terms have to be integrated over the factorized unresolved phase space and combined with the two-loop corrections. In this paper we perform the integration of all one-parton unresolved subtraction terms.

  8. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Dilution air background emission...

  9. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Dilution air background emission...

  10. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Dilution air background emission...

  11. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 33 2011-07-01 2011-07-01 false Dilution air background emission...

  12. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Dilution air background emission...

  13. Enhanced identification and biological validation of differential gene expression via Illumina whole-genome expression arrays through the use of the model-based background correction methodology

    PubMed Central

    Ding, Liang-Hao; Xie, Yang; Park, Seongmi; Xiao, Guanghua; Story, Michael D.

    2008-01-01

    Despite the tremendous growth of microarray usage in scientific studies, there is a lack of standards for background correction methodologies, especially in single-color microarray platforms. Traditional background subtraction methods often generate negative signals and thus cause large amounts of data loss. Hence, some researchers prefer to avoid background corrections, which typically result in the underestimation of differential expression. Here, by utilizing nonspecific negative control features integrated into Illumina whole genome expression arrays, we have developed a method of model-based background correction for BeadArrays (MBCB). We compared the MBCB with a method adapted from the Affymetrix robust multi-array analysis algorithm and with no background subtraction, using a mouse acute myeloid leukemia (AML) dataset. We demonstrated that differential expression ratios obtained by using the MBCB had the best correlation with quantitative RT–PCR. MBCB also achieved better sensitivity in detecting differentially expressed genes with biological significance. For example, we demonstrated that the differential regulation of Tnfr2, Ikk and NF-kappaB, the death receptor pathway, in the AML samples, could only be detected by using data after MBCB implementation. We conclude that MBCB is a robust background correction method that will lead to more precise determination of gene expression and better biological interpretation of Illumina BeadArray data. PMID:18450815
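
    The modelling idea behind MBCB can be sketched with the normal-exponential convolution it builds on: the background mean and sd come from the negative-control probes, and the corrected intensity is the conditional expectation of the signal given the observation, which stays positive exactly where naive subtraction goes negative. A sketch under stated assumptions (the signal-rate parameter `alpha` is fixed here for illustration; MBCB estimates it from the data as well):

```python
import math

def normexp_correct(x, neg_controls, alpha=0.005):
    """Normal-exponential background correction sketch: observed =
    exponential(signal) + normal(background). Returns E[signal | x],
    which is strictly positive, unlike naive subtraction x - mean."""
    mu = sum(neg_controls) / len(neg_controls)
    sigma = (sum((v - mu) ** 2 for v in neg_controls) / len(neg_controls)) ** 0.5
    phi = lambda z: math.exp(-z * z / 2) / math.sqrt(2 * math.pi)   # N(0,1) pdf
    Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))          # N(0,1) cdf
    mu_sx = x - mu - sigma * sigma * alpha
    t = mu_sx / sigma
    # Mean of a normal truncated to (0, inf): always positive.
    return mu_sx + sigma * phi(t) / Phi(t)
```

For intensities far above background the correction reduces to plain subtraction of the background mean; near or below background it shrinks smoothly toward zero instead of going negative, which is why no data are lost.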

  14. A semi-supervised classification algorithm using the TAD-derived background as training data

    NASA Astrophysics Data System (ADS)

    Fan, Lei; Ambeau, Brittany; Messinger, David W.

    2013-05-01

    In general, spectral image classification algorithms fall into one of two categories: supervised and unsupervised. In unsupervised approaches, the algorithm automatically identifies clusters in the data without a priori information about those clusters (except perhaps the expected number of them). Supervised approaches require an analyst to identify training data to learn the characteristics of the clusters such that they can then classify all other pixels into one of the pre-defined groups. The classification algorithm presented here is a semi-supervised approach based on the Topological Anomaly Detection (TAD) algorithm. The TAD algorithm defines background components based on a mutual k-Nearest Neighbor graph model of the data, along with a spectral connected components analysis. Here, the largest components produced by TAD are used as regions of interest (ROIs), or training data, for a supervised classification scheme. By combining those ROIs with a Gaussian Maximum Likelihood (GML) or a Minimum Distance to the Mean (MDM) algorithm, we are able to achieve a semi-supervised classification method. We test this classification algorithm against data collected by the HyMAP sensor over the Cooke City, MT, area and the University of Pavia scene.
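
    The MDM half of the scheme is one line of linear algebra: assign each pixel to the nearest class mean. A sketch (in the paper's setting the class means would come from TAD's largest background components):

```python
import numpy as np

def mdm_classify(pixels, class_means):
    """Minimum-Distance-to-the-Mean classifier: each pixel spectrum is
    assigned to the class whose mean is nearest in Euclidean distance."""
    # Distance matrix of shape (n_pixels, n_classes) via broadcasting.
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)
```

Swapping in GML would replace the Euclidean distance with a per-class Mahalanobis distance plus a log-determinant term, at the cost of estimating a covariance per class.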

  15. Infrared Thermography Approach for Effective Shielding Area of Field Smoke Based on Background Subtraction and Transmittance Interpolation.

    PubMed

    Tang, Runze; Zhang, Tonglai; Chen, Yongpeng; Liang, Hao; Li, Bingyang; Zhou, Zunning

    2018-05-06

    Effective shielding area is a crucial indicator for the evaluation of infrared smoke-obscuring effectiveness on the battlefield. The conventional methods for assessing the shielding area of a smoke screen are time-consuming and labor-intensive, in addition to lacking precision. Therefore, an efficient and convincing technique for measuring the effective shielding area of a smoke screen has great potential benefit for smoke screen applications in field trials. In this study, a thermal infrared sensor with a mid-wavelength infrared (MWIR) range of 3 to 5 μm was first used to capture images of the target scene, at regular intervals, through clear air as well as through obscuring smoke. Background subtraction, as used in motion detection, was then applied to obtain the contour of the smoke cloud in each frame. The smoke transmittance at each pixel within the smoke contour was interpolated from data collected from the image. Finally, the effective shielding area of the smoke was calculated by accumulating the effective shielding pixel points. One advantage of this approach is that it uses only one thermal infrared sensor, without any additional equipment in the field trial, which contributes significantly to its efficiency and convenience. Experiments demonstrate that this approach can determine the effective shielding area of field infrared smoke both practically and efficiently.
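
    The per-pixel accumulation step can be sketched as follows. This is a simplified stand-in: transmittance is computed directly from the frame/background ratio at every masked pixel rather than interpolated from sparse measurements as in the paper, and the tiny frames and thresholds are hypothetical.

```python
def smoke_mask(frame, background, diff_thresh):
    # background subtraction: pixels that changed belong to the smoke cloud
    return [[abs(f - b) > diff_thresh for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]

def shielding_area(frame, background, diff_thresh, trans_thresh, pixel_area):
    mask = smoke_mask(frame, background, diff_thresh)
    area = 0.0
    for i, row in enumerate(mask):
        for j, in_smoke in enumerate(row):
            if not in_smoke:
                continue
            # crude transmittance: attenuated radiance over clear-scene radiance
            tau = frame[i][j] / background[i][j]
            if tau < trans_thresh:   # pixel is effectively shielded
                area += pixel_area
    return area

bg    = [[100, 100], [100, 100]]   # clear-scene intensities (hypothetical)
frame = [[ 30,  95], [ 25,  90]]   # same scene viewed through smoke
print(shielding_area(frame, bg, diff_thresh=20, trans_thresh=0.5, pixel_area=0.25))
```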

  16. Manual for the Jet Event and Background Simulation Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinz, M.; Soltz, R.; Angerami, A.

    Jets are the collimated streams of particles resulting from hard scattering in the initial state of high-energy collisions. In heavy-ion collisions, jets interact with the quark-gluon plasma (QGP) before freezeout, providing a probe into the internal structure and properties of the QGP. In order to study jets, background must be subtracted from the measured event, potentially introducing a bias. We aim to understand and quantify this subtraction bias. PYTHIA, a library to simulate pure jet events, is used to simulate a model for a signature with one pure jet (a photon) and one quenched jet, where all quenched particle momenta are reduced by a user-defined constant fraction. Background for the event is simulated using multiplicity values generated by the TRENTO initial state model of heavy-ion collisions fed into a thermal model consisting of a 3-dimensional Boltzmann distribution for particle types and momenta. Data from the simulated events is used to train a statistical model, which computes a posterior distribution of the quench factor for a data set. The model was tested first on pure jet events and then on full events including the background. This model will allow for a quantitative determination of biases induced by various methods of background subtraction.

  17. Improved wavelet packet classification algorithm for vibrational intrusions in distributed fiber-optic monitoring systems

    NASA Astrophysics Data System (ADS)

    Wang, Bingjie; Pi, Shaohua; Sun, Qi; Jia, Bo

    2015-05-01

    An improved classification algorithm that considers multiscale wavelet packet Shannon entropy is proposed. Decomposition coefficients at all levels are obtained to build the initial Shannon entropy feature vector. After subtracting the Shannon entropy map of the background signal, the components with the strongest discriminating power in the initial feature vector are picked out to rebuild the Shannon entropy feature vector, which is fed into a radial basis function (RBF) neural network for classification. Four types of man-made vibrational intrusion signals are recorded based on a modified Sagnac interferometer. The performance of the improved classification algorithm has been evaluated through classification experiments with the RBF neural network under different diffusion coefficients. An 85% classification accuracy rate is achieved, which is higher than that of other common algorithms. The classification results show that this improved classification algorithm can be used to classify vibrational intrusion signals in an automatic real-time monitoring system.
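
    The feature-building step can be sketched with a two-level Haar wavelet packet (a minimal pure-Python stand-in; the paper's wavelet family, decomposition depth, and component-selection rule are not reproduced here). Each terminal subband contributes one Shannon entropy value, and the background entropy map is subtracted component-wise.

```python
import math

def haar_split(x):
    # one Haar step: approximation and detail sequences at half length
    a = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return a, d

def wavelet_packet_nodes(x, levels):
    nodes = [x]
    for _ in range(levels):
        nxt = []
        for node in nodes:
            a, d = haar_split(node)
            nxt += [a, d]
        nodes = nxt
    return nodes  # 2**levels terminal subbands

def shannon_entropy(node):
    e = [c * c for c in node]
    total = sum(e) or 1.0
    p = [v / total for v in e]
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def entropy_features(signal, background, levels=2):
    sig = [shannon_entropy(n) for n in wavelet_packet_nodes(signal, levels)]
    bg = [shannon_entropy(n) for n in wavelet_packet_nodes(background, levels)]
    return [s - b for s, b in zip(sig, bg)]  # background-subtracted feature vector

feat = entropy_features([0, 1, 0, -1, 0, 1, 0, -1], [0.1] * 8)
print(len(feat))  # 4 subband features at 2 levels
```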

  18. Effect of color coding and subtraction on the accuracy of contrast echocardiography

    NASA Technical Reports Server (NTRS)

    Pasquet, A.; Greenberg, N.; Brunken, R.; Thomas, J. D.; Marwick, T. H.

    1999-01-01

    BACKGROUND: Contrast echocardiography may be used to assess myocardial perfusion. However, gray scale assessment of myocardial contrast echocardiography (MCE) is difficult because of variations in regional backscatter intensity, difficulties in distinguishing varying shades of gray, and artifacts or attenuation. We sought to determine whether the assessment of rest myocardial perfusion by MCE could be improved with subtraction and color coding. METHODS AND RESULTS: MCE was performed in 31 patients with previous myocardial infarction with a 2nd generation agent (NC100100, Nycomed AS), using harmonic triggered or continuous imaging and gain settings were kept constant throughout the study. Digitized images were post processed by subtraction of baseline from contrast data and colorized to reflect the intensity of myocardial contrast. Gray scale MCE alone, MCE images combined with baseline and subtracted colorized images were scored independently using a 16 segment model. The presence and severity of myocardial contrast abnormalities were compared with perfusion defined by rest MIBI-SPECT. Segments that were not visualized by continuous (17%) or triggered imaging (14%) after color processing were excluded from further analysis. The specificity of gray scale MCE alone (56%) or MCE combined with baseline 2D (47%) was significantly enhanced by subtraction and color coding (76%, p<0.001) of triggered images. The accuracy of the gray scale approaches (respectively 52% and 47%) was increased to 70% (p<0.001). Similarly, for continuous images, the specificity of gray scale MCE with and without baseline comparison was 23% and 42% respectively, compared with 60% after post processing (p<0.001). The accuracy of colorized images (59%) was also significantly greater than gray scale MCE (43% and 29%, p<0.001). The sensitivity of MCE for both acquisitions was not altered by subtraction. 
CONCLUSION: Post-processing with subtraction and color coding significantly improves the accuracy of myocardial contrast echocardiography.

  19. Laboratory test of a polarimetry imaging subtraction system for the high-contrast imaging

    NASA Astrophysics Data System (ADS)

    Dou, Jiangpei; Ren, Deqing; Zhu, Yongtian; Zhang, Xi; Li, Rong

    2012-09-01

    We propose a polarimetry imaging subtraction test system that can be used for the direct imaging of the reflected light from exoplanets. Such a system is able to remove the speckle noise scattered by the wave-front error and thus enhance high-contrast imaging. In this system, we use a Wollaston Prism (WP) to divide the incoming light into two simultaneous images with perpendicular linear polarizations. One of the images is used as the reference image; both phase and geometric distortion corrections are then performed on the other image. The reference image is subtracted from the corrected image to remove the speckles. The whole procedure is based on an optimization algorithm whose target function is to minimize the residual speckles after subtraction. For demonstration purposes, we use only a circular pupil in the test, without integrating our apodized-pupil coronagraph. It is shown that the best result is obtained by introducing both phase and distortion corrections. The system reaches an extra contrast gain of 50 times on average, which is promising for the direct imaging of exoplanets.
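
    The optimization loop can be caricatured in one dimension: a grid search over a shift (standing in for the geometric distortion correction) and a gain (standing in for the phase/intensity correction) that minimizes the residual energy after subtracting the reference channel. All values below are hypothetical and the real system optimizes far richer correction parameters.

```python
def residual_energy(img, ref, shift, gain):
    # energy left after subtracting the reference from a shifted, scaled copy
    n = len(img)
    return sum((gain * img[(i - shift) % n] - ref[i]) ** 2 for i in range(n))

def optimize_subtraction(img, ref, shifts=range(-3, 4), gains=None):
    gains = gains or [0.8 + 0.05 * k for k in range(9)]  # 0.80 .. 1.20
    best = min((residual_energy(img, ref, s, g), s, g)
               for s in shifts for g in gains)
    return best  # (residual, shift, gain)

ref = [0, 2, 5, 2, 0, 1, 3, 1]                 # reference polarization channel
img = [1.25 * v for v in ref[-1:] + ref[:-1]]  # other channel: shifted, brighter

res, s, g = optimize_subtraction(img, ref)
print(res, s, g)  # residual ~0 at shift -1, gain 0.8
```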

  20. Real-time out-of-plane artifact subtraction tomosynthesis imaging using prior CT for scanning beam digital x-ray system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Meng, E-mail: mengwu@stanford.edu; Fahrig, Rebecca

    2014-11-01

    Purpose: The scanning beam digital x-ray system (SBDX) is an inverse geometry fluoroscopic system with high dose efficiency and the ability to perform continuous real-time tomosynthesis in multiple planes. This system could be used for image guidance during lung nodule biopsy. However, the reconstructed images suffer from strong out-of-plane artifact due to the small tomographic angle of the system. Methods: The authors propose an out-of-plane artifact subtraction tomosynthesis (OPAST) algorithm that utilizes a prior CT volume to augment the run-time image processing. A blur-and-add (BAA) analytical model, derived from the project-to-backproject physical model, permits the generation of tomosynthesis images that are a good approximation to the shift-and-add (SAA) reconstructed image. A computationally practical algorithm is proposed to simulate images and out-of-plane artifacts from patient-specific prior CT volumes using the BAA model. A 3D image registration algorithm to align the simulated and reconstructed images is described. The accuracy of the BAA analytical model and the OPAST algorithm was evaluated using three lung cancer patients’ CT data. The OPAST and image registration algorithms were also tested with added nonrigid respiratory motions. Results: Image similarity measurements, including the correlation coefficient, mean squared error, and structural similarity index, indicated that the BAA model is very accurate in simulating the SAA images from the prior CT for the SBDX system. The shift-variant effect of the BAA model can be ignored when the shifts between SBDX images and CT volumes are within ±10 mm in the x and y directions. The nodule visibility and depth resolution are improved by subtracting simulated artifacts from the reconstructions. The image registration and OPAST are robust in the presence of added respiratory motions. The dominant artifacts in the subtraction images are caused by the mismatches between the real object and the

  1. Motion compensation in digital subtraction angiography using graphics hardware.

    PubMed

    Deuerling-Zheng, Yu; Lell, Michael; Galant, Adam; Hornegger, Joachim

    2006-07-01

    An inherent disadvantage of digital subtraction angiography (DSA) is its sensitivity to patient motion, which causes artifacts in the subtraction images. These artifacts often reduce the diagnostic value of the technique. Automated, fast and accurate motion compensation is therefore required. To cope with this requirement, we first examine a method explicitly designed to detect local motions in DSA. Then, we implement a motion compensation algorithm by means of block matching on modern graphics hardware. Both methods search for maximal local similarity by evaluating a histogram-based measure. In this context, we are the first to map an optimizing search strategy onto graphics hardware while parallelizing block matching. Moreover, we provide an innovative method for creating histograms on graphics hardware with vertex texturing and frame buffer blending. It turns out that both methods can effectively correct the artifacts in most cases, while the hardware implementation of block matching performs much faster: the displacements of two 1024 x 1024 images can be calculated at 3 frames/s with integer precision or 2 frames/s with sub-pixel precision. Preliminary clinical evaluation indicates that the computation with integer precision could already be sufficient.
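
    The histogram-based similarity measure can be sketched as follows: for each candidate displacement, the gray-level histogram of the difference block is computed, and a peaked histogram (high energy, i.e., sum of squared bin probabilities) indicates a good match. The frames below are synthetic, and this single-block CPU sketch deliberately ignores the GPU mapping that is the paper's contribution.

```python
def hist_energy(diff, bins=511, lo=-255, hi=255):
    # energy of the gray-level histogram of a difference block:
    # a well-aligned block gives a peaked histogram, i.e., high energy
    counts = [0] * bins
    for v in diff:
        k = min(bins - 1, int((v - lo) * bins / (hi - lo + 1)))
        counts[k] += 1
    n = len(diff)
    return sum((c / n) ** 2 for c in counts)

def best_displacement(mask, live, block, search=2):
    r0, c0, h, w = block
    best = (-1.0, (0, 0))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            diff = [live[r0 + i][c0 + j] - mask[r0 + dy + i][c0 + dx + j]
                    for i in range(h) for j in range(w)]
            best = max(best, (hist_energy(diff), (dy, dx)))
    return best[1]

# synthetic 8x8 frames: the mask is the live frame shifted down by one row
live = [[r * c for c in range(8)] for r in range(8)]
mask = [live[(r - 1) % 8] for r in range(8)]
print(best_displacement(mask, live, block=(3, 3, 2, 2)))  # -> (1, 0)
```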

  2. Addition and subtraction by students with Down syndrome

    NASA Astrophysics Data System (ADS)

    Noda Herrera, Aurelia; Bruno, Alicia; González, Carina; Moreno, Lorenzo; Sanabria, Hilda

    2011-01-01

    We present a research report on addition and subtraction conducted with Down syndrome students between the ages of 12 and 31. We interviewed a group of students with Down syndrome who executed algorithms and solved problems using specific materials and paper and pencil. The results show that students with Down syndrome progress through the same procedural levels as those without disabilities though they have difficulties in reaching the most abstract level (numerical facts). The use of fingers or concrete representations (balls) appears as a fundamental process among these students. As for errors, these vary widely depending on the students, and can be attributed mostly to an incomplete knowledge of the decimal number system.

  3. Digital subtraction dark-lumen MR colonography: initial experience.

    PubMed

    Ajaj, Waleed; Veit, Patrick; Kuehle, Christiane; Joekel, Michaela; Lauenstein, Thomas C; Herborn, Christoph U

    2005-06-01

    To evaluate image subtraction for the detection of colonic pathologies in a dark-lumen MR colonography exam. A total of 20 patients (12 males; 8 females; mean 51.4 years of age) underwent MR colonography after standard cleansing and a rectal water enema on a 1.5-T whole-body MR system. After suppression of peristaltic motion, native and Gd-contrast-enhanced three-dimensional T1-w gradient echo images were acquired in the coronal plane. Two radiologists analyzed the MR data sets in consensus on two separate occasions, with and without the subtracted images for lesion detection, and assessed the value of the subtracted data set on a five-point Likert scale (1=very helpful to 5=very unhelpful). All imaging results were compared with endoscopy. Without subtracted images, MR-colonography detected a total of five polyps, two inflammatory lesions, and one carcinoma in eight patients, which were all verified by endoscopy. Using subtraction, an additional polyp was found, and readout time was significantly shorter (6:41 vs. 7:39 minutes; P<0.05). In two patients, endoscopy detected a flat adenoma and a polyp (0.4 cm) that were missed in the MR exam. Sensitivity and specificity without subtraction were 0.67/1.0, and 0.76/1.0 with the subtracted images, respectively. Subtraction was assessed as helpful in all exams (mean value 1.8+/-0.5; Likert scale). We consider subtraction of native from contrast-enhanced dark-lumen MR colonography data sets as a beneficial supplement to the exam. Copyright (c) 2005 Wiley-Liss, Inc.

  4. [Affine transformation-based automatic registration for peripheral digital subtraction angiography (DSA)].

    PubMed

    Kong, Gang; Dai, Dao-Qing; Zou, Lu-Min

    2008-07-01

    In order to remove the artifacts of peripheral digital subtraction angiography (DSA), an affine transformation-based automatic image registration algorithm is introduced here. The whole process is as follows: first, rectangular feature templates are constructed centered at the Harris corners extracted from the mask image, and motion vectors of the central feature points are estimated using template matching with maximum histogram energy as the similarity measure. The optimal parameters of the affine transformation are then calculated with the matrix singular value decomposition (SVD) method. Finally, bilinear intensity interpolation is applied to the mask according to the resulting affine transformation. More than 30 peripheral DSA registrations were performed with the presented algorithm; as a result, motion artifacts were removed with sub-pixel precision, and the computation time is low enough to satisfy clinical requirements. Experimental results show the efficiency and robustness of the algorithm.
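
    The parameter-estimation step can be sketched as a least-squares affine fit to matched corner points. The sketch below solves the normal equations directly rather than via SVD as the paper does, and the corner correspondences (a small rotation-like transform plus a shift) are hypothetical.

```python
def solve3(A, b):
    # Gauss-Jordan elimination for a 3x3 system (enough for affine normal equations)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_affine(src, dst):
    # least-squares affine [a b tx; c d ty] from matched (x, y) -> (u, v) pairs
    G = [[0.0] * 3 for _ in range(3)]
    bx = [0.0] * 3
    by = [0.0] * 3
    for (x, y), (u, v) in zip(src, dst):
        p = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                G[i][j] += p[i] * p[j]
            bx[i] += p[i] * u
            by[i] += p[i] * v
    return solve3(G, bx), solve3(G, by)

# hypothetical corner matches related by a small rotation-like transform + shift
src = [(0, 0), (10, 0), (0, 10), (10, 10)]
dst = [(2.0, 3.0), (11.8, 4.0), (1.0, 12.8), (10.8, 13.8)]
row_x, row_y = fit_affine(src, dst)
print([round(v, 3) for v in row_x], [round(v, 3) for v in row_y])
```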

  5. Techniques to improve the accuracy of noise power spectrum measurements in digital x-ray imaging based on background trends removal.

    PubMed

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2011-03-01

    Noise characterization through estimation of the noise power spectrum (NPS) is a central component of the evaluation of digital x-ray systems. Extensive work has been conducted to achieve accurate and precise measurement of the NPS. One approach to improve the accuracy of the NPS measurement is to reduce the statistical variance of the NPS results by involving more data samples. However, this method is based on the assumption that the noise in a radiographic image arises from stochastic processes. In practical data, artifactual components always superimpose on the stochastic noise as low-frequency background trends and prevent us from achieving an accurate NPS. The purpose of this study was to investigate an appropriate background detrending technique to improve the accuracy of NPS estimation for digital x-ray systems. In order to determine the optimal background detrending technique for NPS estimation, four methods for artifact removal were quantitatively studied and compared: (1) subtraction of a low-pass-filtered version of the image, (2) subtraction of a 2-D first-order fit to the image, (3) subtraction of a 2-D second-order polynomial fit to the image, and (4) subtracting two uniform exposure images. In addition, background trend removal was separately applied within the original region of interest or its partitioned sub-blocks for all four methods. The performance of the background detrending techniques was compared according to the statistical variance of the NPS results and low-frequency systematic rise suppression. Among the four methods, subtraction of a 2-D second-order polynomial fit to the image was most effective in low-frequency systematic rise suppression and variance reduction for NPS estimation with the authors' digital x-ray system. Subtraction of a low-pass-filtered version of the image led to NPS variance increment above low-frequency components because of the side lobe effects of the frequency response of the boxcar filtering function. Subtracting two
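
    Method (3), subtraction of a 2-D second-order polynomial fit, can be sketched as below. The flat-field ROI is a hypothetical noise-free quadratic trend so the residual should vanish; a real NPS estimate would then take the squared magnitude of the Fourier transform of the detrended ROI.

```python
def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting for an n x n system
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def detrend_poly2(img):
    # subtract a least-squares 2-D second-order polynomial background:
    # p(x, y) = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
    h, w = len(img), len(img[0])
    terms = lambda x, y: (1.0, x, y, x * x, x * y, y * y)
    G = [[0.0] * 6 for _ in range(6)]
    rhs = [0.0] * 6
    for y in range(h):
        for x in range(w):
            t = terms(x, y)
            for i in range(6):
                for j in range(6):
                    G[i][j] += t[i] * t[j]
                rhs[i] += t[i] * img[y][x]
    c = solve(G, rhs)
    return [[img[y][x] - sum(ci * ti for ci, ti in zip(c, terms(x, y)))
             for x in range(w)] for y in range(h)]

# hypothetical flat-field ROI: a pure quadratic heel-effect-like trend, no noise
img = [[5 + 0.3 * x + 0.1 * y * y for x in range(8)] for y in range(8)]
flat = detrend_poly2(img)
print(max(abs(v) for row in flat for v in row))  # ~0: trend fully removed
```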

  6. The Brera Multiscale Wavelet ROSAT HRI Source Catalog. I. The Algorithm

    NASA Astrophysics Data System (ADS)

    Lazzati, Davide; Campana, Sergio; Rosati, Piero; Panzera, Maria Rosa; Tagliaferri, Gianpiero

    1999-10-01

    We present a new detection algorithm based on the wavelet transform for the analysis of high-energy astronomical images. The wavelet transform, because of its multiscale structure, is suited to the optimal detection of pointlike as well as extended sources, regardless of any loss of resolution with the off-axis angle. Sources are detected as significant enhancements in the wavelet space, after the subtraction of the nonflat components of the background. Detection thresholds are computed through Monte Carlo simulations in order to establish the expected number of spurious sources per field. The source characterization is performed through a multisource fitting in the wavelet space. The procedure is designed to correctly deal with very crowded fields, allowing for the simultaneous characterization of nearby sources. To obtain a fast and reliable estimate of the source parameters and related errors, we apply a novel decimation technique that, taking into account the correlation properties of the wavelet transform, extracts a subset of almost independent coefficients. We test the performance of this algorithm on synthetic fields, analyzing with particular care the characterization of sources in poor background situations, where the assumption of Gaussian statistics does not hold. In these cases, for which standard wavelet algorithms generally provide underestimated errors, we infer errors through a procedure that relies on robust basic statistics. Our algorithm is well suited to the analysis of images taken with the new generation of X-ray instruments equipped with CCD technology, which will produce images with very low background and/or high source density.

  7. SU-E-J-23: An Accurate Algorithm to Match Imperfectly Matched Images for Lung Tumor Detection Without Markers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozario, T; Bereg, S; Chiu, T

    Purpose: In order to locate lung tumors on projection images without internal markers, digitally reconstructed radiograph (DRR) is created and compared with projection images. Since lung tumors always move and their locations change on projection images while they are static on DRRs, a special DRR (background DRR) is generated based on modified anatomy from which lung tumors are removed. In addition, global discrepancies exist between DRRs and projections due to their different image originations, scattering, and noises. This adversely affects comparison accuracy. A simple but efficient comparison algorithm is reported. Methods: This method divides global images into a matrix of small tiles and similarities will be evaluated by calculating normalized cross correlation (NCC) between corresponding tiles on projections and DRRs. The tile configuration (tile locations) will be automatically optimized to keep the tumor within a single tile which has bad matching with the corresponding DRR tile. A pixel based linear transformation will be determined by linear interpolations of tile transformation results obtained during tile matching. The DRR will be transformed to the projection image level and subtracted from it. The resulting subtracted image now contains only the tumor. A DRR of the tumor is registered to the subtracted image to locate the tumor. Results: This method has been successfully applied to kV fluoro images (about 1000 images) acquired on a Vero (Brainlab) for dynamic tumor tracking on phantom studies. Radiation opaque markers are implanted and used as ground truth for tumor positions. Although, other organs and bony structures introduce strong signals superimposed on tumors at some angles, this method accurately locates tumors on every projection over 12 gantry angles. The maximum error is less than 2.6 mm while the total average error is 1.0 mm. Conclusion: This algorithm is capable of detecting tumor without markers despite strong background
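
    The tile-similarity step can be sketched with a plain normalized cross-correlation between flattened tiles. NCC is invariant to global gain and offset differences, which is why it tolerates the intensity discrepancies between DRRs and projections; the 3x3 tiles below are hypothetical.

```python
import math

def ncc(a, b):
    # normalized cross-correlation of two equal-size tiles (flattened lists)
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

drr_tile   = [20, 24, 60, 22, 26, 62, 20, 24, 58]  # DRR tile: bone, no tumor
proj_clear = [10, 12, 30, 11, 13, 31, 10, 12, 29]  # same anatomy, different gain
proj_tumor = [22, 24, 25, 23, 25, 26, 22, 24, 25]  # tumor flattens the pattern

print(round(ncc(proj_clear, drr_tile), 3))  # 1.0: NCC ignores the gain change
print(round(ncc(proj_tumor, drr_tile), 3))  # noticeably lower: a "bad" tile
```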

  8. I.v. and intraarterial hybrid digital subtraction angiography: clinical evaluation.

    PubMed

    Foley, W D; Beres, J; Smith, D F; Bell, R M; Milde, M W; Lipchik, E O

    1986-09-01

    Temporal/energy (hybrid) subtraction is a technique for removing soft-tissue motion artifact from digital subtraction angiograms. The diagnostic utility of hybrid subtraction for i.v. and intraarterial angiography was assessed in the first 9 months of operation of a dedicated production system. In i.v. carotid arteriography (N = 127), hybrid subtraction (H) provided a double-profile projection of the carotid bifurcation in an additional 14% of studies, compared with temporal subtraction (T) alone (H79:T48, p less than 0.001). However, a change in estimated percent stenosis or additional diagnostic information occurred in only 2% of studies. In i.v. abdominal arteriography (N = 23), hybrid subtraction, compared with temporal subtraction, provided a diagnostic examination in an additional 14% of studies (H20:T17); however, this difference is not statistically significant. An additional three i.v. abdominal angiograms were nondiagnostic. In intraarterial abdominal (N = 98) and pelvic (N = 60) angiography, hybrid subtraction provided a diagnostic examination in an additional 5% of studies (abdomen H94:T90, pelvis H58:T56); this difference was not statistically significant. An additional 5% of all intraarterial abdominal and pelvic digital subtraction angiographic studies were considered nondiagnostic. Hybrid subtraction provides a double-profile view of the carotid bifurcation in a significant number of patients. However, apart from some potential for improved i.v. abdominal arteriography, hybrid subtraction does not result in significant improvement in comparison to conventional temporal-subtraction techniques.

  9. Effects of global signal regression and subtraction methods on resting-state functional connectivity using arterial spin labeling data.

    PubMed

    Silva, João Paulo Santos; Mônaco, Luciana da Mata; Paschoal, André Monteiro; Oliveira, Ícaro Agenor Ferreira de; Leoni, Renata Ferranti

    2018-05-16

    Arterial spin labeling (ASL) is an established magnetic resonance imaging (MRI) technique that is finding broader applications in functional studies of the healthy and diseased brain. To promote improvement in cerebral blood flow (CBF) signal specificity, many algorithms and imaging procedures, such as subtraction methods, were proposed to eliminate or, at least, minimize noise sources. Therefore, this study addressed the main considerations of how CBF functional connectivity (FC) is changed, regarding resting brain network (RBN) identification and correlations between regions of interest (ROI), by different subtraction methods and removal of residual motion artifacts and global signal fluctuations (RMAGSF). Twenty young healthy participants (13 M/7F, mean age = 25 ± 3 years) underwent an MRI protocol with a pseudo-continuous ASL (pCASL) sequence. Perfusion-based images were obtained using simple, sinc and running subtraction. RMAGSF removal was applied to all CBF time series. Independent Component Analysis (ICA) was used for RBN identification, while Pearson's correlation was performed for ROI-based FC analysis. Temporal signal-to-noise ratio (tSNR) was higher in CBF maps obtained by sinc subtraction, although RMAGSF removal had a significant effect on maps obtained with simple and running subtractions. Neither the subtraction method nor the RMAGSF removal directly affected the identification of RBNs. However, the number of correlated and anti-correlated voxels varied for different subtraction and filtering methods. In an ROI-to-ROI level, changes were prominent in FC values and their statistical significance. Our study showed that both RMAGSF filtering and subtraction method might influence resting-state FC results, especially in an ROI level, consequently affecting FC analysis and its interpretation. Taking our results and the whole discussion together, we understand that for an exploratory assessment of the brain, one could avoid removing RMAGSF to
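
    The two non-interpolating subtraction schemes can be sketched on an interleaved control/label series (sinc subtraction, which interpolates between time points, is omitted). The series below is hypothetical and assumes control frames come first.

```python
def simple_subtraction(series):
    # pairwise: control[i] - label[i] -> one perfusion point per pair
    return [series[i] - series[i + 1] for i in range(0, len(series) - 1, 2)]

def running_subtraction(series):
    # every adjacent frame pair, sign-corrected -> full temporal resolution
    out = []
    for i in range(len(series) - 1):
        d = series[i] - series[i + 1]
        out.append(d if i % 2 == 0 else -d)
    return out

series = [100, 90, 102, 91, 101, 89]  # control, label, control, label, ...
print(simple_subtraction(series))     # [10, 11, 12]
print(running_subtraction(series))    # [10, 12, 11, 10, 12]
```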

  10. An accurate algorithm to match imperfectly matched images for lung tumor detection without markers

    PubMed Central

    Rozario, Timothy; Bereg, Sergey; Yan, Yulong; Chiu, Tsuicheng; Liu, Honghuan; Kearney, Vasant; Jiang, Lan

    2015-01-01

    In order to locate lung tumors on kV projection images without internal markers, digitally reconstructed radiographs (DRRs) are created and compared with projection images. However, lung tumors always move due to respiration and their locations change on projection images while they are static on DRRs. In addition, global image intensity discrepancies exist between DRRs and projections due to their different image orientations, scattering, and noises. This adversely affects comparison accuracy. A simple but efficient comparison algorithm is reported to match imperfectly matched projection images and DRRs. The kV projection images were matched with different DRRs in two steps. Preprocessing was performed in advance to generate two sets of DRRs. The tumors were removed from the planning 3D CT for a single phase of planning 4D CT images using planning contours of tumors. DRRs of background and DRRs of tumors were generated separately for every projection angle. The first step was to match projection images with DRRs of background signals. This method divided global images into a matrix of small tiles and similarities were evaluated by calculating normalized cross‐correlation (NCC) between corresponding tiles on projections and DRRs. The tile configuration (tile locations) was automatically optimized to keep the tumor within a single projection tile that had a bad matching with the corresponding DRR tile. A pixel‐based linear transformation was determined by linear interpolations of tile transformation results obtained during tile matching. The background DRRs were transformed to the projection image level and subtracted from it. The resulting subtracted image now contained only the tumor. The second step was to register DRRs of tumors to the subtracted image to locate the tumor. This method was successfully applied to kV fluoro images (about 1000 images) acquired on a Vero (BrainLAB) for dynamic tumor tracking on phantom studies. Radiation opaque markers were

  11. Manual for the Jet Event and Background Simulation Library (JEBSimLib)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinz, Matthias; Soltz, Ron; Angerami, Aaron

    Jets are the collimated streams of particles resulting from hard scattering in the initial state of high-energy collisions. In heavy-ion collisions, jets interact with the quark-gluon plasma (QGP) before freezeout, providing a probe into the internal structure and properties of the QGP. In order to study jets, background must be subtracted from the measured event, potentially introducing a bias. We aim to understand and quantify this subtraction bias. PYTHIA, a library to simulate pure jet events, is used to simulate a model for a signature with one pure jet (a photon) and one quenched jet, where all quenched particle momenta are reduced by a user-defined constant fraction. Background for the event is simulated using multiplicity values generated by the TRENTO initial state model of heavy-ion collisions fed into a thermal model consisting of a 3-dimensional Boltzmann distribution for particle types and momenta. Data from the simulated events is used to train a statistical model, which computes a posterior distribution of the quench factor for a data set. The model was tested first on pure jet events and then on full events including the background. This model will allow for a quantitative determination of biases induced by various methods of background subtraction.

  12. [An Algorithm to Eliminate Power Frequency Interference in ECG Using Template].

    PubMed

    Shi, Guohua; Li, Jiang; Xu, Yan; Feng, Liang

    2017-01-01

    We describe an algorithm to eliminate power frequency interference in the ECG. The algorithm first creates a power frequency interference template and then subtracts the template from the original ECG signals; finally, it obtains the ECG signals without interference. Experiments show that the algorithm can eliminate the interference effectively and has no side effects on the normal signal. It is efficient and suitable for practice.
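
    The template idea can be sketched as follows: average the signal over whole interference periods to estimate one cycle of the hum, then subtract that cycle periodically. For clarity the input below is pure 50 Hz hum; on a real ECG the template would have to be estimated so as not to absorb cardiac content, and the paper's exact template construction is not specified here.

```python
import math

def powerline_template(signal, fs, f0=50.0):
    # average the signal over whole interference periods to build the template
    period = int(round(fs / f0))          # samples per 50 Hz cycle
    n_cycles = len(signal) // period
    template = [0.0] * period
    for k in range(n_cycles):
        for i in range(period):
            template[i] += signal[k * period + i] / n_cycles
    return template

def subtract_template(signal, template):
    p = len(template)
    return [v - template[i % p] for i, v in enumerate(signal)]

fs = 500                                  # hypothetical sampling rate (Hz)
sig = [0.5 * math.sin(2 * math.pi * 50 * t / fs) for t in range(1000)]  # pure hum
clean = subtract_template(sig, powerline_template(sig, fs))
print(max(abs(v) for v in clean))         # ~0 once the hum is removed
```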

  13. Power Spectral Density Error Analysis of Spectral Subtraction Type of Speech Enhancement Methods

    NASA Astrophysics Data System (ADS)

    Händel, Peter

    2006-12-01

    A theoretical framework for analysis of speech enhancement algorithms is introduced for performance assessment of spectral subtraction type of methods. The quality of the enhanced speech is related to physical quantities of the speech and noise (such as stationarity time and spectral flatness), as well as to design variables of the noise suppressor. The derived theoretical results are compared with the outcome of subjective listening tests as well as successful design strategies, performed by independent research groups.
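
    Basic magnitude spectral subtraction, the family of methods analyzed here, can be sketched as below. A deterministic DC offset stands in for a real noise estimate, and the spectral floor beta is the usual guard against negative magnitudes (the source of "musical noise" trade-offs); a real enhancer would work frame-by-frame with windowing.

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def spectral_subtract(noisy, noise_mag, alpha=1.0, beta=0.01):
    # subtract the noise magnitude spectrum, keep the noisy phase,
    # and floor the result to avoid negative magnitudes
    X = dft(noisy)
    out = []
    for Xk, Nk in zip(X, noise_mag):
        mag = max(abs(Xk) - alpha * Nk, beta * abs(Xk))
        out.append(cmath.rect(mag, cmath.phase(Xk)))
    return idft(out)

N = 64
tone = [math.sin(2 * math.pi * 4 * n / N) for n in range(N)]   # "speech" stand-in
noise = [0.3] * N              # deterministic "noise" with a known spectrum (DC)
noisy = [t + w for t, w in zip(tone, noise)]
noise_spectrum = [abs(v) for v in dft(noise)]
clean = spectral_subtract(noisy, noise_spectrum)
err = max(abs(c - t) for c, t in zip(clean, tone))
print(err)  # small: the deterministic "noise" is almost fully subtracted
```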

  14. Data Series Subtraction with Unknown and Unmodeled Background Noise

    NASA Technical Reports Server (NTRS)

    Vitale, Stefano; Congedo, Giuseppe; Dolesi, Rita; Ferroni, Valerio; Hueller, Mauro; Vetrugno, Daniele; Weber, William Joseph; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; hide

    2014-01-01

    LISA Pathfinder (LPF), the precursor mission to a gravitational wave observatory of the European Space Agency, will measure the degree to which two test masses can be put into free fall, aiming to demonstrate a suppression of disturbance forces corresponding to a residual relative acceleration with a power spectral density (PSD) below (30 fm s⁻²/√Hz)² around 1 mHz. In LPF data analysis, the disturbance forces are obtained as the difference between the acceleration data and a linear combination of other measured data series. In many circumstances, the coefficients for this linear combination are obtained by fitting these data series to the acceleration, and the disturbance forces appear then as the data series of the residuals of the fit. Thus the background noise or, more precisely, its PSD, whose knowledge is needed to build up the likelihood function in ordinary maximum likelihood fitting, is here unknown, and its estimate constitutes instead one of the goals of the fit. In this paper we present a fitting method that does not require the knowledge of the PSD of the background noise. The method is based on the analytical marginalization of the posterior parameter probability density with respect to the background noise PSD, and returns an estimate both for the fitting parameters and for the PSD. We show that both these estimates are unbiased, and that, when using averaged Welch's periodograms for the residuals, the estimate of the PSD is consistent, as its error tends to zero with the inverse square root of the number of averaged periodograms. Additionally, we find that the method is equivalent to some implementations of iteratively reweighted least-squares fitting. We have tested the method both on simulated data of known PSD and on data from several experiments performed with the LISA Pathfinder end-to-end mission simulator.


  15. A Low-Stress Algorithm for Fractions

    ERIC Educational Resources Information Center

    Ruais, Ronald W.

    1978-01-01

    An algorithm is given for the addition and subtraction of fractions based on dividing the sum of diagonal numerator and denominator products by the product of the denominators. As an explanation of the teaching method, activities used in teaching are demonstrated. (MN)
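
    Concretely, the diagonal rule computes a/b ± c/d as (a·d ± c·b)/(b·d). A minimal sketch (the function names are mine, and the final reduction step is an addition the algorithm itself leaves to the student):

```python
from math import gcd

def add_fractions(a, b, c, d):
    """a/b + c/d via the diagonal rule: sum of diagonal products
    over the product of the denominators, then reduced."""
    num = a * d + c * b
    den = b * d
    g = gcd(num, den)
    return num // g, den // g

def subtract_fractions(a, b, c, d):
    """a/b - c/d the same way: difference of diagonal products
    over the product of the denominators."""
    num = a * d - c * b
    den = b * d
    g = gcd(abs(num), den)
    return num // g, den // g
```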

  16. Longitudinal development of subtraction performance in elementary school.

    PubMed

    Artemenko, Christina; Pixner, Silvia; Moeller, Korbinian; Nuerk, Hans-Christoph

    2017-10-05

    A major goal of education in elementary mathematics is the mastery of arithmetic operations. However, research on subtraction is rather scarce, probably because subtraction is often implicitly assumed to be cognitively similar to addition, its mathematical inverse. To evaluate this assumption, we examined the relation between the borrow effect in subtraction and the carry effect in addition, and the developmental trajectory of the borrow effect in children using a choice reaction paradigm in a longitudinal study. In contrast to the carry effect in adults, carry and borrow effects in children were found to be categorical rather than continuous. From grades 3 to 4, children became more proficient in two-digit subtraction in general, but not in performing the borrow operation in particular. Thus, we observed no specific developmental progress in place-value computation, but a general improvement in subtraction procedures. Statement of contribution What is already known on this subject? The borrow operation increases difficulty in two-digit subtraction in adults. The carry effect in addition, as the inverse operation of borrowing, comprises categorical and continuous processing characteristics. What does this study add? In contrast to the carry effect in adults, the borrow and carry effects are categorical in elementary school children. Children generally improve in subtraction performance from grades 3 to 4 but do not progress in place-value computation in particular. © 2017 The British Psychological Society.

  17. A method to characterise site, urban and regional ambient background radiation.

    PubMed

    Passmore, C; Kirr, M

    2011-03-01

    Control dosemeters are routinely provided to customers to monitor the background radiation so that it can be subtracted from the gross response of the dosemeter to arrive at the occupational dose. Landauer, the largest dosimetry processor in the world, with subsidiaries in Australia, Brazil, China, France, Japan, Mexico and the UK, has clients in approximately 130 countries. The Glenwood facility processes over 1.1 million controls per year. This network of clients around the world provides a unique ability to monitor the world's ambient background radiation. Control data can be mined to provide useful historical information regarding ambient background rates and a historical baseline for geographical areas. This historical baseline can be used to provide site- or region-specific background subtraction values, document the variation in ambient background radiation around a client's site, or provide a baseline for measuring the efficiency of clean-up efforts in urban areas after a dirty bomb detonation.

  18. An automated subtraction of NLO EW infrared divergences

    NASA Astrophysics Data System (ADS)

    Schönherr, Marek

    2018-02-01

    In this paper a generalisation of the Catani-Seymour dipole subtraction method to next-to-leading order electroweak calculations is presented. All singularities due to photon and gluon radiation off both massless and massive partons in the presence of both massless and massive spectators are accounted for. Particular attention is paid to the simultaneous subtraction of singularities of both QCD and electroweak origin, which are present in the next-to-leading order corrections to processes with more than one perturbative order contributing at Born level. Similarly, the embedding of non-dipole-like photon splittings in the dipole subtraction scheme is discussed. The implementation of the formulated subtraction scheme in the framework of the Sherpa Monte-Carlo event generator, including the restriction of the dipole phase space through the α-parameters and the extension of its existing subtraction for NLO QCD calculations, is detailed, and numerous internal consistency checks validating the obtained results are presented.

  19. Development of Fire Detection Algorithm at Its Early Stage Using Fire Colour and Shape Information

    NASA Astrophysics Data System (ADS)

    Suleiman Abdullahi, Zainab; Hamisu Dalhatu, Shehu; Hassan Abdullahi, Zakariyya

    2018-04-01

    Fire can be defined as a state in which substances combine chemically with oxygen from the air and give out heat, smoke and flame. Most conventional fire detection techniques, such as smoke, fire and heat detectors, suffer from transport (travelling) delay and also give high false-alarm rates. The algorithm begins by loading the selected video clip from the database developed to identify the presence or absence of fire in a frame. In this approach, background subtraction was employed. If the result of subtraction is less than the set threshold, the difference is ignored and the next frame is taken. However, if the difference is equal to or greater than the set threshold, it is subjected to colour and shape tests, using a combined RGB colour model and a shape signature. The proposed technique was very effective in detecting fire compared with techniques using only motion or colour cues.
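
    The frame-differencing-plus-colour-test pipeline outlined above can be sketched with numpy. The threshold value and the simple R > G > B fire-colour rule are illustrative assumptions, not the authors' exact parameters:

```python
import numpy as np

DIFF_THRESHOLD = 30.0   # illustrative motion threshold (grey-level units)

def candidate_fire_mask(prev_frame, frame):
    """Background-subtraction step followed by a crude colour test:
    flag pixels whose grey level changed by at least the threshold AND
    whose RGB values satisfy a simple R > G > B fire-colour rule."""
    moving = np.abs(frame.mean(axis=2) - prev_frame.mean(axis=2)) >= DIFF_THRESHOLD
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    fire_colour = (r > g) & (g > b) & (r > 180)
    return moving & fire_colour
```

    The shape-signature test the abstract describes would then run on the connected regions of this mask.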

  20. A generalized time-frequency subtraction method for robust speech enhancement based on wavelet filter banks modeling of human auditory system.

    PubMed

    Shao, Yu; Chang, Chip-Hong

    2007-08-01

    We present a new speech enhancement scheme for a single-microphone system to meet the demand for quality noise reduction algorithms capable of operating at a very low signal-to-noise ratio. A psychoacoustic model is incorporated into the generalized perceptual wavelet denoising method to reduce the residual noise and improve the intelligibility of speech. The proposed method is a generalized time-frequency subtraction algorithm, which advantageously exploits the wavelet multirate signal representation to preserve the critical transient information. Simultaneous masking and temporal masking of the human auditory system are modeled by the perceptual wavelet packet transform via the frequency and temporal localization of speech components. The wavelet coefficients are used to calculate the Bark spreading energy and temporal spreading energy, from which a time-frequency masking threshold is deduced to adaptively adjust the subtraction parameters of the proposed method. An unvoiced speech enhancement algorithm is also integrated into the system to improve the intelligibility of speech. Through rigorous objective and subjective evaluations, it is shown that the proposed speech enhancement system is capable of reducing noise with little speech degradation in adverse noise environments and the overall performance is superior to several competitive methods.
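
    The proposed system adapts its subtraction parameters with perceptual wavelet-packet masking thresholds; as a point of reference, the plain short-time magnitude spectral subtraction it generalizes can be sketched as follows (the frame sizes, the fixed noise estimate, and the spectral-floor value are illustrative assumptions):

```python
import numpy as np

def spectral_subtract(noisy, noise_est, n_fft=256, hop=128, floor=0.02):
    """Plain short-time magnitude spectral subtraction: subtract an average
    noise magnitude spectrum from each frame, with a relative spectral floor
    to limit musical noise, then overlap-add the frames back together."""
    win = np.hanning(n_fft)
    # Average noise magnitude spectrum from a noise-only recording.
    n_frames = (len(noise_est) - n_fft) // hop + 1
    noise_mag = np.mean([np.abs(np.fft.rfft(noise_est[i*hop:i*hop+n_fft] * win))
                         for i in range(n_frames)], axis=0)
    out = np.zeros(len(noisy))
    norm = np.zeros(len(noisy))
    n_frames = (len(noisy) - n_fft) // hop + 1
    for i in range(n_frames):
        seg = noisy[i*hop:i*hop+n_fft] * win
        spec = np.fft.rfft(seg)
        mag = np.abs(spec)
        clean_mag = np.maximum(mag - noise_mag, floor * mag)
        frame = np.fft.irfft(clean_mag * np.exp(1j * np.angle(spec)), n_fft)
        out[i*hop:i*hop+n_fft] += frame
        norm[i*hop:i*hop+n_fft] += win
    return out / np.maximum(norm, 1e-8)
```

    The paper's contribution is, in effect, to replace the fixed `noise_mag` and `floor` here with time-frequency masking thresholds derived from the wavelet-packet representation.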

  1. The IPAC Image Subtraction and Discovery Pipeline for the Intermediate Palomar Transient Factory

    NASA Astrophysics Data System (ADS)

    Masci, Frank J.; Laher, Russ R.; Rebbapragada, Umaa D.; Doran, Gary B.; Miller, Adam A.; Bellm, Eric; Kasliwal, Mansi; Ofek, Eran O.; Surace, Jason; Shupe, David L.; Grillmair, Carl J.; Jackson, Ed; Barlow, Tom; Yan, Lin; Cao, Yi; Cenko, S. Bradley; Storrie-Lombardi, Lisa J.; Helou, George; Prince, Thomas A.; Kulkarni, Shrinivas R.

    2017-01-01

    We describe the near real-time transient-source discovery engine for the intermediate Palomar Transient Factory (iPTF), currently in operations at the Infrared Processing and Analysis Center (IPAC), Caltech. We coin this system the IPAC/iPTF Discovery Engine (or IDE). We review the algorithms used for PSF-matching, image subtraction, detection, photometry, and machine-learned (ML) vetting of extracted transient candidates. We also review the performance of our ML classifier. For a limiting signal-to-noise ratio of 4 in relatively unconfused regions, bogus candidates from processing artifacts and imperfect image subtractions outnumber real transients by ≃10:1. This can be considerably higher for image data with inaccurate astrometric and/or PSF-matching solutions. Despite this occasionally high contamination rate, the ML classifier is able to identify real transients with an efficiency (or completeness) of ≃97% for a maximum tolerable false-positive rate of 1% when classifying raw candidates. All subtraction-image metrics, source features, ML probability-based real-bogus scores, contextual metadata from other surveys, and possible associations with known Solar System objects are stored in a relational database for retrieval by the various science working groups. We review our efforts in mitigating false-positives and our experience in optimizing the overall system in response to the multitude of science projects underway with iPTF.

  2. The IPAC Image Subtraction and Discovery Pipeline for the Intermediate Palomar Transient Factory

    NASA Technical Reports Server (NTRS)

    Masci, Frank J.; Laher, Russ R.; Rebbapragada, Umaa D.; Doran, Gary B.; Miller, Adam A.; Bellm, Eric; Kasliwal, Mansi; Ofek, Eran O.; Surace, Jason; Shupe, David L.; et al.

    2016-01-01

    We describe the near real-time transient-source discovery engine for the intermediate Palomar Transient Factory (iPTF), currently in operations at the Infrared Processing and Analysis Center (IPAC), Caltech. We coin this system the IPAC/iPTF Discovery Engine (or IDE). We review the algorithms used for PSF-matching, image subtraction, detection, photometry, and machine-learned (ML) vetting of extracted transient candidates. We also review the performance of our ML classifier. For a limiting signal-to-noise ratio of 4 in relatively unconfused regions, bogus candidates from processing artifacts and imperfect image subtractions outnumber real transients by approximately equal to 10:1. This can be considerably higher for image data with inaccurate astrometric and/or PSF-matching solutions. Despite this occasionally high contamination rate, the ML classifier is able to identify real transients with an efficiency (or completeness) of approximately equal to 97% for a maximum tolerable false-positive rate of 1% when classifying raw candidates. All subtraction-image metrics, source features, ML probability-based real-bogus scores, contextual metadata from other surveys, and possible associations with known Solar System objects are stored in a relational database for retrieval by the various science working groups. We review our efforts in mitigating false-positives and our experience in optimizing the overall system in response to the multitude of science projects underway with iPTF.

  3. Tuckshop Subtraction

    ERIC Educational Resources Information Center

    Duke, Roger; Graham, Alan; Johnston-Wilder, Sue

    2007-01-01

    This article describes a recent and successful initiative on teaching place value and the decomposition method of subtraction to pupils having difficulty with this technique in the 9-12-year age range. The aim of the research was to explore whether using the metaphor of selling chews (i.e., sweets) in a tuck shop and developing this into an iconic…

  4. A robust background regression based score estimation algorithm for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei

    2016-12-01

    Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is the background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to formulate the foundation of the regression. Furthermore, a manifold regularization term which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is undertaken on the robust background and potential anomaly datasets, to implement the detection output. The experimental results show that RBRSE achieves better ROC curves, AUC values, and background-anomaly separation than some other state-of-the-art anomaly detection methods, and is easy to implement.

  5. Color Addition and Subtraction Apps

    NASA Astrophysics Data System (ADS)

    Ruiz, Frances; Ruiz, Michael J.

    2015-10-01

    Color addition and subtraction apps in HTML5 have been developed for students as an online hands-on experience so that they can more easily master principles introduced through traditional classroom demonstrations. The evolution of the additive RGB color model is traced through the early IBM color adapters so that students can proceed step by step in understanding mathematical representations of RGB color. Finally, color addition and subtraction are presented for the X11 colors from web design to illustrate yet another real-life application of color mixing.
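
    The two mixing rules the apps demonstrate can be sketched in a few lines: additive mixing of lights adds channels (clipped at 255), while subtractive mixing through a filter multiplies each channel by the filter's transmittance. The helper names and the value/255 transmittance model are my own simplifications:

```python
def add_colors(c1, c2):
    """Additive mixing of two RGB lights: channels add, clipped at 255."""
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

def filter_color(light, filt):
    """Subtractive mixing: a filter transmits each channel in proportion
    to its own channel value (transmittance = value / 255)."""
    return tuple(round(a * b / 255) for a, b in zip(light, filt))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)
YELLOW = add_colors(RED, GREEN)   # red light + green light
WHITE = add_colors(YELLOW, BLUE)  # all three primaries
```

    For example, yellow light through a cyan filter emerges green, the classic subtractive-mixing demonstration.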

  6. Quantum noise properties of CT images with anatomical textured backgrounds across reconstruction algorithms: FBP and SAFIRE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solomon, Justin, E-mail: justin.solomon@duke.edu; Samei, Ehsan

    2014-09-15

    Purpose: Quantum noise properties of CT images are generally assessed using simple geometric phantoms with uniform backgrounds. Such phantoms may be inadequate when assessing nonlinear reconstruction or postprocessing algorithms. The purpose of this study was to design anatomically informed textured phantoms and use the phantoms to assess quantum noise properties across two clinically available reconstruction algorithms, filtered back projection (FBP) and sinogram affirmed iterative reconstruction (SAFIRE). Methods: Two phantoms were designed to represent lung and soft-tissue textures. The lung phantom included intricate vessel-like structures along with embedded nodules (spherical, lobulated, and spiculated). The soft tissue phantom was designed based on a three-dimensional clustered lumpy background with included low-contrast lesions (spherical and anthropomorphic). The phantoms were built using rapid prototyping (3D printing) technology and, along with a uniform phantom of similar size, were imaged on a Siemens SOMATOM Definition Flash CT scanner and reconstructed with FBP and SAFIRE. Fifty repeated acquisitions were obtained for each background type and noise was assessed by estimating pixel-value statistics, such as standard deviation (i.e., noise magnitude), autocorrelation, and noise power spectrum. Noise stationarity was also assessed by examining the spatial distribution of noise magnitude. The noise properties were compared across background types and between the two reconstruction algorithms. Results: In FBP and SAFIRE images, noise was globally nonstationary for all phantoms. In FBP images of all phantoms, and in SAFIRE images of the uniform phantom, noise appeared to be locally stationary (within a reasonably small region of interest). Noise was locally nonstationary in SAFIRE images of the textured phantoms with edge pixels showing higher noise magnitude compared to pixels in more homogenous regions. For pixels in uniform regions, noise
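
    The ensemble noise metrics used above (per-pixel noise magnitude and the noise power spectrum estimated from repeated acquisitions) can be sketched as follows. The NPS normalization follows the common pixel-area/(Nx·Ny) convention, and synthetic white noise stands in for CT data; this is not the authors' analysis code:

```python
import numpy as np

def noise_stats(repeats, pixel_mm=0.5):
    """Estimate the per-pixel noise magnitude (std across repeats) and the
    2D noise power spectrum from an ensemble of repeated acquisitions."""
    repeats = np.asarray(repeats, dtype=float)
    noise = repeats - repeats.mean(axis=0)   # zero-mean noise realizations
    std_map = noise.std(axis=0, ddof=1)      # local noise magnitude
    ny, nx = repeats.shape[1:]
    # NPS: ensemble-averaged squared FFT magnitude, scaled by pixel area / N.
    nps = np.mean(np.abs(np.fft.fft2(noise)) ** 2, axis=0)
    nps *= (pixel_mm ** 2) / (nx * ny)
    return std_map, nps
```

    Mapping `std_map` spatially is exactly the stationarity check described in the abstract: a flat map indicates locally stationary noise, while structure in the map flags nonstationarity.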

  7. LASR-Guided Variability Subtraction: The Linear Algorithm for Significance Reduction of Stellar Seismic Activity

    NASA Astrophysics Data System (ADS)

    Horvath, Sarah; Myers, Sam; Ahlers, Johnathon; Barnes, Jason W.

    2017-10-01

    Stellar seismic activity produces variations in brightness that introduce oscillations into transit light curves, which can create challenges for traditional fitting models. These oscillations disrupt baseline stellar flux values and potentially mask transits. We develop a model that removes these oscillations from transit light curves by minimizing the significance of each oscillation in frequency space. By removing stellar variability, we prepare each light curve for traditional fitting techniques. We apply our model to δ-Scuti KOI-976 and demonstrate that our variability subtraction routine successfully allows for measuring bulk system characteristics using traditional light curve fitting. These results open a new window for characterizing bulk system parameters of planets orbiting seismically active stars.
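
    A toy version of frequency-space variability removal, which simply notches the strongest Fourier peaks rather than minimizing their significance as the LASR algorithm does, can be sketched as:

```python
import numpy as np

def subtract_dominant_oscillation(flux, n_modes=1):
    """Remove the n strongest oscillation modes from a light curve by zeroing
    their Fourier coefficients, always preserving the mean (DC) term."""
    spec = np.fft.rfft(flux)
    power = np.abs(spec) ** 2
    power[0] = 0.0                 # never remove the mean flux level
    for _ in range(n_modes):
        k = np.argmax(power)
        spec[k] = 0.0
        power[k] = 0.0
    return np.fft.irfft(spec, len(flux))
```

    On a synthetic light curve with a sinusoidal pulsation and a box-shaped transit, notching the pulsation's peak leaves the transit essentially intact for a traditional fit.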

  8. Renormalization of quark bilinear operators in a momentum-subtraction scheme with a nonexceptional subtraction point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sturm, C.; Soni, A.; Aoki, Y.

    2009-07-01

    We extend the Rome-Southampton regularization independent momentum-subtraction renormalization scheme (RI/MOM) for bilinear operators to one with a nonexceptional, symmetric subtraction point. Two-point Green's functions with the insertion of quark bilinear operators are computed with scalar, pseudoscalar, vector, axial-vector and tensor operators at one-loop order in perturbative QCD. We call this new scheme RI/SMOM, where the S stands for 'symmetric'. Conversion factors are derived, which connect the RI/SMOM scheme and the MS scheme and can be used to convert results obtained in lattice calculations into the MS scheme. Such a symmetric subtraction point involves nonexceptional momenta implying a lattice calculation with substantially suppressed contamination from infrared effects. Further, we find that the size of the one-loop corrections for these infrared improved kinematics is substantially decreased in the case of the pseudoscalar and scalar operator, suggesting a much better behaved perturbative series. Therefore it should allow us to reduce the error in the determination of the quark mass appreciably.

  9. B-spline based image tracking by detection

    NASA Astrophysics Data System (ADS)

    Balaji, Bhashyam; Sithiravel, Rajiv; Damini, Anthony; Kirubarajan, Thiagalingam; Rajan, Sreeraman

    2016-05-01

    Visual image tracking involves the estimation of the motion of any desired targets in a surveillance region using a sequence of images. A standard method of isolating moving targets in image tracking uses background subtraction. The standard background subtraction method is often impacted by irrelevant information in the images, which can lead to poor performance in image-based target tracking. In this paper, a B-Spline based image tracking is implemented. The novel method models the background and foreground using the B-Spline method followed by a tracking-by-detection algorithm. The effectiveness of the proposed algorithm is demonstrated.
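
    The standard background-subtraction baseline the paper improves on can be sketched with a per-pixel temporal median model (this is the generic baseline, not the paper's B-spline formulation; the threshold is illustrative):

```python
import numpy as np

def median_background_mask(frames, current, thresh=25.0):
    """Classic background-subtraction baseline: model the background as the
    per-pixel temporal median of past frames, then flag pixels of the
    current frame deviating from it by more than `thresh`."""
    background = np.median(np.asarray(frames, dtype=float), axis=0)
    return np.abs(np.asarray(current, dtype=float) - background) > thresh
```

    The median is robust to a moving object passing through a pixel in a minority of frames, which is why it is a common background model before a tracking-by-detection stage.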

  10. A Dynamic Enhancement With Background Reduction Algorithm: Overview and Application to Satellite-Based Dust Storm Detection

    NASA Astrophysics Data System (ADS)

    Miller, Steven D.; Bankert, Richard L.; Solbrig, Jeremy E.; Forsythe, John M.; Noh, Yoo-Jeong; Grasso, Lewis D.

    2017-12-01

    This paper describes a Dynamic Enhancement Background Reduction Algorithm (DEBRA) applicable to multispectral satellite imaging radiometers. DEBRA uses ancillary information about the clear-sky background to reduce false detections of atmospheric parameters in complex scenes. Applied here to the detection of lofted dust, DEBRA enlists a surface emissivity database coupled with a climatological database of surface temperature to approximate the clear-sky equivalent signal for selected infrared-based multispectral dust detection tests. This background allows for suppression of false alarms caused by land surface features while retaining some ability to detect dust above those problematic surfaces. The algorithm is applicable to both day and nighttime observations and enables weighted combinations of dust detection tests. The results are provided quantitatively, as a detection confidence factor [0, 1], but are also readily visualized as enhanced imagery. Utilizing the DEBRA confidence factor as a scaling factor in false color red/green/blue imagery enables depiction of the targeted parameter in the context of the local meteorology and topography. In this way, the method holds utility for automated clients and human analysts alike. Examples of DEBRA performance from notable dust storms and comparisons against other detection methods and independent observations are presented.

  11. Investigation of contrast-enhanced subtracted breast CT images with MAP-EM based on projection-based weighting imaging.

    PubMed

    Zhou, Zhengdong; Guan, Shaolin; Xin, Runchao; Li, Jianbo

    2018-06-01

    Contrast-enhanced subtracted breast computer tomography (CESBCT) images acquired using an energy-resolved photon counting detector can be helpful to enhance the visibility of breast tumors. In such technology, one challenge is the limited number of photons in each energy bin, thereby possibly leading to high noise in the separate images from each energy bin, the projection-based weighted image, and the subtracted image. In conventional low-dose CT imaging, iterative image reconstruction provides a superior signal-to-noise ratio compared with the filtered back projection (FBP) algorithm. In this paper, maximum a posteriori expectation maximization (MAP-EM) based on projection-based weighting imaging for reconstruction of CESBCT images acquired using an energy-resolving photon counting detector is proposed, and its performance was investigated in terms of contrast-to-noise ratio (CNR). The simulation study shows that MAP-EM based on projection-based weighting imaging can improve the CNR in CESBCT images by 117.7%-121.2% compared with FBP based on projection-based weighting imaging. When compared with energy-integrating imaging that uses the MAP-EM algorithm, projection-based weighting imaging that uses the MAP-EM algorithm can improve the CNR of CESBCT images by 10.5%-13.3%. In conclusion, MAP-EM based on projection-based weighting imaging shows a significant improvement in the CNR of the CESBCT image compared with FBP based on projection-based weighting imaging, and MAP-EM based on projection-based weighting imaging outperforms MAP-EM based on energy-integrating imaging for CESBCT imaging.
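
    The CNR figure of merit used throughout the comparison above is commonly computed as the contrast between lesion and background means divided by the background noise; conventions vary, and this sketch (names mine) uses the background standard deviation as the noise term:

```python
import numpy as np

def cnr(image, lesion_mask, background_mask):
    """Contrast-to-noise ratio: |mean(lesion) - mean(background)| divided by
    the standard deviation of the background region."""
    img = np.asarray(image, dtype=float)
    contrast = abs(img[lesion_mask].mean() - img[background_mask].mean())
    return contrast / img[background_mask].std(ddof=1)
```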

  12. Development of a two wheeled self balancing robot with speech recognition and navigation algorithm

    NASA Astrophysics Data System (ADS)

    Rahman, Md. Muhaimin; Ashik-E-Rasul, Haq, Nowab. Md. Aminul; Hassan, Mehedi; Hasib, Irfan Mohammad Al; Hassan, K. M. Rafidh

    2016-07-01

    This paper is aimed to discuss the modeling, construction and development of the navigation algorithm of a two wheeled self balancing mobile robot in an enclosure. In this paper, we have discussed the design of the two main controller algorithms, namely PID algorithms, on the robot model. Simulation is performed in the SIMULINK environment. The controller is developed primarily for self-balancing of the robot and also its positioning. As for the navigation in an enclosure, a template matching algorithm is proposed for precise measurement of the robot position. The navigation system needs to be calibrated before the navigation process starts. Almost all of the earlier template matching algorithms that can be found in the open literature can only trace the robot. But the proposed algorithm here can also locate the position of other objects in an enclosure, like furniture, tables etc. This will enable the robot to know the exact location of every stationary object in the enclosure. Moreover, some additional features, such as Speech Recognition and Object Detection, are added. For Object Detection, the single board computer Raspberry Pi is used. The system is programmed to analyze images captured via the camera, which are then processed through background subtraction, followed by active noise reduction.
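
    Template matching of the kind described above can be illustrated with a brute-force zero-mean normalized cross-correlation search; this is a generic textbook formulation (names mine), not the paper's specific algorithm:

```python
import numpy as np

def match_template(image, template):
    """Brute-force zero-mean normalized cross-correlation: return the
    (row, col) of the best match of `template` inside `image`."""
    img = np.asarray(image, dtype=float)
    tpl = np.asarray(template, dtype=float)
    tpl = tpl - tpl.mean()
    th, tw = tpl.shape
    tnorm = np.sqrt((tpl ** 2).sum()) + 1e-12
    best, best_pos = -np.inf, (0, 0)
    for r in range(img.shape[0] - th + 1):
        for c in range(img.shape[1] - tw + 1):
            win = img[r:r+th, c:c+tw]
            win = win - win.mean()
            score = (win * tpl).sum() / (np.sqrt((win ** 2).sum()) * tnorm + 1e-12)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

    The zero-mean normalization makes the score insensitive to lighting offsets, which matters when locating fixed landmarks or furniture across frames.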

  13. Speech Enhancement of Mobile Devices Based on the Integration of a Dual Microphone Array and a Background Noise Elimination Algorithm.

    PubMed

    Chen, Yung-Yue

    2018-05-08

    Mobile devices are often used in our daily lives for speech and communication. The speech quality of mobile devices is always degraded by the environmental noises surrounding mobile device users. Unfortunately, an effective background noise reduction solution cannot easily be developed for this speech enhancement problem. For these reasons, a methodology is systematically proposed to eliminate the effects of background noises on the speech communication of mobile devices. This methodology integrates a dual microphone array with a background noise elimination algorithm. The proposed background noise elimination algorithm includes a whitening process, a speech modelling method and an H₂ estimator. Due to the adoption of the dual microphone array, a low-cost design can be obtained for the speech enhancement of mobile devices. Practical tests have proven that the proposed method is immune to random background noises, and noiseless speech can be obtained after executing this denoising process.

  14. Tomographic digital subtraction angiography for lung perfusion estimation in rodents.

    PubMed

    Badea, Cristian T; Hedlund, Laurence W; De Lin, Ming; Mackel, Julie S Boslego; Samei, Ehsan; Johnson, G Allan

    2007-05-01

    In vivo measurements of perfusion present a challenge to existing small animal imaging techniques such as magnetic resonance microscopy, micro computed tomography, micro positron emission tomography, and microSPECT, due to combined requirements for high spatial and temporal resolution. We demonstrate the use of tomographic digital subtraction angiography (TDSA) for estimation of perfusion in small animals. TDSA augments conventional digital subtraction angiography (DSA) by providing three-dimensional spatial information using tomosynthesis algorithms. TDSA is based on the novel paradigm that the same time density curves can be reproduced in a number of consecutive injections of μL volumes of contrast at a series of different angles of rotation. The capabilities of TDSA are established in studies on lung perfusion in rats. Using an imaging system developed in-house, we acquired data for four-dimensional (4D) imaging with temporal resolution of 140 ms, in-plane spatial resolution of 100 μm, and slice thickness on the order of millimeters. Based on a structured experimental approach, we optimized TDSA imaging providing a good trade-off between slice thickness, the number of injections, contrast to noise, and immunity to artifacts. Both DSA and TDSA images were used to create parametric maps of perfusion. TDSA imaging has potential application in a number of areas where functional perfusion measurements in 4D can provide valuable insight into animal models of disease and response to therapeutics.

  15. Embossed radiography utilizing energy subtraction.

    PubMed

    Osawa, Akihiro; Watanabe, Manabu; Sato, Eiichi; Matsukiyo, Hiroshi; Enomoto, Toshiyuki; Nagao, Jiro; Abderyim, Purkhet; Aizawa, Katsuo; Tanaka, Etsuro; Mori, Hidezo; Kawai, Toshiaki; Ehara, Shigeru; Sato, Shigehiro; Ogawa, Akira; Onagawa, Jun

    2009-01-01

    Currently, it is difficult to carry out refraction-contrast radiography by using a conventional X-ray generator. Thus, we developed an embossed radiography system utilizing dual-energy subtraction for decreasing the absorption contrast in unnecessary regions, and the contrast resolution of a target region was increased by use of image-shifting subtraction and a linear-contrast system in a flat panel detector (FPD). The X-ray generator had a 100-μm-focus tube. Energy subtraction was performed at tube voltages of 45 and 65 kV, a tube current of 0.50 mA, and an X-ray exposure time of 5.0 s. A 1.0-mm-thick aluminum filter was used for absorbing low-photon-energy bremsstrahlung X-rays. Embossed radiography was achieved with cohesion imaging by use of the FPD with pixel sizes of 48 × 48 μm, and the shifting dimension of an object in the horizontal direction ranged from 100 to 200 μm. At a shifting distance of 100 μm, the spatial resolutions in the horizontal and vertical directions measured with a lead test chart were both 83 μm. In embossed radiography of non-living animals, we obtained high-contrast embossed images of fine bones, gadolinium oxide particles in the kidney, and coronary arteries approximately 100 μm in diameter.
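
    Dual-energy subtraction of the kind used here is often implemented as a weighted log subtraction of the low- and high-kV images: choosing the weight to match the background material's attenuation ratio cancels the background and leaves the target material. The sketch below uses an idealized two-material monoenergetic attenuation model with made-up coefficients, not the authors' acquisition parameters:

```python
import numpy as np

def dual_energy_subtract(low_kv, high_kv, w):
    """Weighted log subtraction: ln(I_high) - w * ln(I_low). With w equal to
    the background material's high/low attenuation ratio, the background
    cancels and only the contrast material's signal remains."""
    return np.log(np.asarray(high_kv, dtype=float)) - \
           w * np.log(np.asarray(low_kv, dtype=float))
```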

  16. Background suppression of infrared small target image based on inter-frame registration

    NASA Astrophysics Data System (ADS)

    Ye, Xiubo; Xue, Bindang

    2018-04-01

    We propose a multi-frame background suppression method for remote infrared small target detection. Inter-frame information is necessary when heavy background clutter makes it difficult to distinguish real targets from false alarms. A registration procedure based on point matching in image patches is used to compensate for the local deformation of the background. The target can then be separated by background subtraction. Experiments show that our method serves as an effective preliminary step for target detection.

  17. An efficient voting algorithm for finding additive biclusters with random background.

    PubMed

    Xiao, Jing; Wang, Lusheng; Liu, Xiaowen; Jiang, Tao

    2008-12-01

    The biclustering problem has been extensively studied in many areas, including e-commerce, data mining, machine learning, pattern recognition, statistics, and, more recently, computational biology. Given an n × m matrix A (n ≥ m), the main goal of biclustering is to identify a subset of rows (called objects) and a subset of columns (called properties) such that some objective function that specifies the quality of the found bicluster (formed by the subsets of rows and of columns of A) is optimized. The problem has been proved or conjectured to be NP-hard for various objective functions. In this article, we study a probabilistic model for the implanted additive bicluster problem, where each element in the n × m background matrix is a random integer from [0, L - 1] for some integer L, and a k × k implanted additive bicluster is obtained from an error-free additive bicluster by randomly changing each element to a number in [0, L - 1] with probability θ. We propose an O(n²m) time algorithm based on voting to solve the problem. We show that when k ≥ Ω(√(n log n)), the voting algorithm can correctly find the implanted bicluster with probability at least 1 - 9/n². We also implement our algorithm as a C++ program named VOTE. The implementation incorporates several ideas for estimating the size of an implanted bicluster, adjusting the threshold in voting, dealing with small biclusters, and dealing with overlapping implanted biclusters. Our experimental results on both simulated and real datasets show that VOTE can find biclusters with a high accuracy and speed.

  18. Target tracking and 3D trajectory acquisition of cabbage butterfly (P. rapae) based on the KCF-BS algorithm.

    PubMed

    Guo, Yang-Yang; He, Dong-Jian; Liu, Cong

    2018-06-25

    Insect behaviour is an important research topic in plant protection. To study insect behaviour accurately, it is necessary to observe and record their flight trajectories quantitatively and precisely in three dimensions (3D). The goal of this research was to analyse frames extracted from videos using Kernelized Correlation Filters (KCF) and Background Subtraction (BS) (KCF-BS) to plot the 3D trajectory of the cabbage butterfly (P. rapae). Considering the experimental environment with a wind tunnel, a quadrature binocular vision insect video capture system was designed and applied in this study. The KCF-BS algorithm was used to track the butterfly in video frames and obtain the coordinates of the target centroid in the two videos. Finally, the 3D trajectory was calculated according to the matching relationship in the corresponding frames of the two camera views. To verify the validity of the KCF-BS algorithm, the Compressive Tracking (CT) and Spatio-Temporal Context Learning (STC) algorithms were run for comparison. The results revealed that the KCF-BS tracking algorithm performed more favourably than CT and STC in terms of accuracy and robustness.

  19. Characterization of unknown genetic modifications using high throughput sequencing and computational subtraction

    PubMed Central

    Tengs, Torstein; Zhang, Haibo; Holst-Jensen, Arne; Bohlin, Jon; Butenko, Melinka A; Kristoffersen, Anja Bråthen; Sorteberg, Hilde-Gunn Opsahl; Berdal, Knut G

    2009-01-01

    Background When generating a genetically modified organism (GMO), the primary goal is to give a target organism one or several novel traits by using biotechnology techniques. A GMO will differ from its parental strain in that its pool of transcripts will be altered. Currently, there are no methods that are reliably able to determine if an organism has been genetically altered if the nature of the modification is unknown. Results We show that the concept of computational subtraction can be used to identify transgenic cDNA sequences from genetically modified plants. Our datasets include 454-type sequences from a transgenic line of Arabidopsis thaliana and published EST datasets from commercially relevant species (rice and papaya). Conclusion We believe that computational subtraction represents a powerful new strategy for determining if an organism has been genetically modified as well as to define the nature of the modification. Fewer assumptions have to be made compared to methods currently in use and this is an advantage particularly when working with unknown GMOs. PMID:19814792

  20. Processor core for real time background identification of HD video based on OpenCV Gaussian mixture model algorithm

    NASA Astrophysics Data System (ADS)

    Genovese, Mariangela; Napoli, Ettore

    2013-05-01

    The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high-definition video in real time. This paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920 × 1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for background segmentation that is, however, computationally intensive and impossible to run on a general-purpose CPU under real-time constraints. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low-power implementation of the algorithm is obtained. The reported performances are also the result of using state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that surpass previously proposed implementations. The second circuit is oriented to an ASIC (UMC 90 nm) standard cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
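    The per-pixel mixture update at the heart of the OpenCV GMM can be sketched for a single pixel as follows. This is a simplified Stauffer-Grimson-style update (fixed learning rate, first-match mode selection), not the optimized fixed-point equations of the paper; all names and constants here are illustrative.

```python
import numpy as np

def gmm_update(x, w, mu, var, alpha=0.05, thresh=2.5, t_bg=0.7):
    """One per-pixel update of a K-mode Gaussian mixture background model
    (simplified Stauffer-Grimson scheme). Arrays w, mu, var are modified
    in place; returns True if sample x is classified as foreground."""
    matched = np.abs(x - mu) < thresh * np.sqrt(var)
    if matched.any():
        k = int(np.flatnonzero(matched)[0])        # first matching mode
        w *= (1.0 - alpha)                         # decay all weights...
        w[k] += alpha                              # ...and boost the match
        d = x - mu[k]
        mu[k] += alpha * d                         # running mean update
        var[k] += alpha * (d * d - var[k])         # running variance update
    else:
        k = int(np.argmin(w))                      # recycle the weakest mode
        mu[k], var[k], w[k] = x, 30.0 ** 2, alpha
    w /= w.sum()
    # Modes ranked by weight; the top modes covering t_bg of the total
    # weight are declared "background".
    order = np.argsort(w)[::-1]
    n_bg = int(np.searchsorted(np.cumsum(w[order]), t_bg)) + 1
    return not (bool(matched.any()) and k in order[:n_bg])

# Learn a static pixel value, then present an outlier.
w = np.array([0.5, 0.5])
mu = np.array([100.0, 0.0])
var = np.array([100.0, 100.0])
for _ in range(10):
    gmm_update(100.0, w, mu, var)
```

A hardware implementation evaluates exactly this kind of per-mode arithmetic for every pixel of every frame, which is why the equations are worth optimizing.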

  1. A new registration method with voxel-matching technique for temporal subtraction images

    NASA Astrophysics Data System (ADS)

    Itai, Yoshinori; Kim, Hyoungseop; Ishikawa, Seiji; Katsuragawa, Shigehiko; Doi, Kunio

    2008-03-01

    A temporal subtraction image, which is obtained by subtraction of a previous image from a current one, can be used for enhancing interval changes on medical images by removing most of normal structures. One of the important problems in temporal subtraction is that subtraction images commonly include artifacts created by slight differences in the size, shape, and/or location of anatomical structures. In this paper, we developed a new registration method with voxel-matching technique for substantially removing the subtraction artifacts on the temporal subtraction image obtained from multiple-detector computed tomography (MDCT). With this technique, the voxel value in a warped (or non-warped) previous image is replaced by a voxel value within a kernel, such as a small cube centered at a given location, which would be closest (identical or nearly equal) to the voxel value in the corresponding location in the current image. Our new method was examined on 16 clinical cases with MDCT images. Preliminary results indicated that interval changes on the subtraction images were enhanced considerably, with a substantial reduction of misregistration artifacts. The temporal subtraction images obtained by use of the voxel-matching technique would be very useful for radiologists in the detection of interval changes on MDCT images.
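    The voxel-matching step described above can be sketched in a few lines; `voxel_match` is our illustrative name, and the brute-force loop stands in for whatever optimized search the authors use.

```python
import numpy as np

def voxel_match(prev, curr, k=1):
    """Replace each voxel of the (warped) previous volume by the value
    inside a (2k+1)^3 kernel that is closest to the current volume's
    voxel at the same location (toy version of the described technique)."""
    out = np.empty_like(prev)
    p = np.pad(prev, k, mode='edge')
    for z, y, x in np.ndindex(prev.shape):
        win = p[z:z + 2 * k + 1, y:y + 2 * k + 1, x:x + 2 * k + 1]
        out[z, y, x] = win.flat[np.argmin(np.abs(win - curr[z, y, x]))]
    return out

# A one-voxel "nodule" shifted by one slice between scans: plain
# subtraction leaves a dipole artifact, matched subtraction leaves nothing.
prev = np.zeros((5, 5, 5)); prev[2, 2, 2] = 10.0
curr = np.zeros((5, 5, 5)); curr[2, 2, 3] = 10.0
plain = curr - prev
matched = curr - voxel_match(prev, curr)
```

A genuine interval change larger than the kernel radius would survive the matched subtraction, which is the behavior the method relies on.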

  2. Chemicalome and metabolome profiling of polymethoxylated flavonoids in Citri Reticulatae Pericarpium based on an integrated strategy combining background subtraction and modified mass defect filter in a Microsoft Excel Platform.

    PubMed

    Zeng, Su-Ling; Duan, Li; Chen, Bai-Zhong; Li, Ping; Liu, E-Hu

    2017-07-28

    Detection of metabolites in complex biological matrixes is a great challenge because of background noise and endogenous components. Herein, we proposed an integrated strategy that combined a background subtraction program and modified mass defect filter (MMDF) data mining in a Microsoft Excel platform for chemicalome and metabolome profiling of the polymethoxylated flavonoids (PMFs) in Citri Reticulatae Pericarpium (CRP). The exogenously sourced ions were first filtered out by the developed Visual Basic for Applications (VBA) program incorporated in Microsoft Office. The novel MMDF strategy was proposed for detecting both targeted and untargeted constituents and metabolites based on narrow, well-defined mass defect ranges. The approach was validated to be powerful and potentially useful for metabolite identification of both single compounds and homologous compound mixtures. We successfully identified 30 and 31 metabolites from rat biosamples after oral administration of nobiletin and tangeretin, respectively. A total of 56 PMFs compounds were chemically characterized and 125 metabolites were captured. This work demonstrated the feasibility of the integrated approach for reliable characterization of the constituents and metabolites in herbal medicines. Copyright © 2017 Elsevier B.V. All rights reserved.
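    The mass-defect-filter step can be illustrated with a minimal sketch. This is our simplification of MMDF with hypothetical names and a made-up peak list: keep only ions whose mass defect falls in a narrow window around that of a template compound.

```python
def mass_defect(mz):
    """Fractional part of an m/z value relative to the nearest integer."""
    return mz - round(mz)

def mmdf(peaks, template, half_width):
    """Hypothetical simplification of the MMDF step: keep ions whose mass
    defect lies within a narrow window around a template compound's defect."""
    ref = mass_defect(template)
    return [m for m in peaks if abs(mass_defect(m) - ref) <= half_width]

# Made-up peak list; 403.1387 is roughly [nobiletin+H]+, for illustration.
peaks = [403.1387, 300.05, 389.123, 500.9]
selected = mmdf(peaks, 403.1387, 0.02)
```

Metabolic transformations shift the nominal mass a lot but the mass defect only a little, which is why a narrow defect window retains plausible metabolites while discarding background ions.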

  3. Differential cDNA cloning by enzymatic degrading subtraction (EDS).

    PubMed Central

    Zeng, J; Gorski, R A; Hamer, D

    1994-01-01

    We describe a new method, called enzymatic degrading subtraction (EDS), for the construction of subtractive libraries from PCR amplified cDNA. The novel features of this method are that i) the tester DNA is blocked by thionucleotide incorporation; ii) the rate of hybridization is accelerated by phenol-emulsion reassociation; and iii) the driver cDNA and hybrid molecules are enzymatically removed by digestion with exonucleases III and VII rather than by physical partitioning. We demonstrate the utility of EDS by constructing a subtractive library enriched for cDNAs expressed in adult but not in embryonic rat brains. PMID:7971268

  4. An EPIC Tale of the Quiescent Particle Background

    NASA Technical Reports Server (NTRS)

    Snowden, S.L.; Kuntz, K.D.

    2017-01-01

    Extended Source Analysis Software Use Based Empirical Investigation: (1) Builds quiescent particle background (QPB) spectra and images for observations of extended sources that fill (or mostly fill) the FOV, i.e., where annular background subtraction won't work. (2) Uses a combination of Filter Wheel Closed (FWC) and corner data to capture the spectral, spatial, and temporal variation of the quiescent particle background. New Work: (1) Improved understanding of the QPB (aided by adding a whole lot of data since 2008). (2) Significantly improved statistics (did I mention a LOT more data?). (3) Better characterization and identification of anomalous states. (4) Builds backgrounds for some anomalous states. (5) New efficient method for non-anomalous states.

  5. The potential for neurovascular intravenous angiography using K-edge digital subtraction angiography

    NASA Astrophysics Data System (ADS)

    Schültke, E.; Fiedler, S.; Kelly, M.; Griebel, R.; Juurlink, B.; LeDuc, G.; Estève, F.; Le Bas, J.-F.; Renier, M.; Nemoz, C.; Meguro, K.

    2005-08-01

    Background: Catheterization of small-caliber blood vessels in the central nervous system can be extremely challenging. Alternatively, intravenous (i.v.) administration of contrast agent is minimally invasive and therefore carries a much lower risk for the patient. With conventional X-ray equipment, volumes of contrast agent that could be safely administered to the patient do not allow acquisition of high-quality images after i.v. injection, because the contrast bolus is extremely diluted by passage through the heart. However, synchrotron-based digital K-edge subtraction angiography does allow acquisition of high-quality images after i.v. administration of relatively small doses of contrast agent. Materials and methods: Eight adult male New Zealand rabbits were used for our experiments. Animals were submitted to both angiography with conventional X-ray equipment and synchrotron-based digital subtraction angiography. Results: With conventional X-ray equipment, no contrast was seen in either cerebral or spinal blood vessels after i.v. injection of iodinated contrast agent. However, using K-edge digital subtraction angiography, as little as 1 ml iodinated contrast agent, when administered as i.v. bolus, yielded images of small-caliber blood vessels in the central nervous system (both brain and spinal cord). Conclusions: If it would be possible to image blood vessels of the same diameter in the central nervous system of human patients, the synchrotron-based technique could yield high-quality images at a significantly lower risk for the patient than conventional X-ray imaging. Images could be acquired where catheterization of feeding blood vessels has proven impossible.

  6. Data Processing Algorithm for Diagnostics of Combustion Using Diode Laser Absorption Spectrometry.

    PubMed

    Mironenko, Vladimir R; Kuritsyn, Yuril A; Liger, Vladimir V; Bolshov, Mikhail A

    2018-02-01

    A new algorithm is proposed for evaluating the integral line intensity in order to infer the correct temperature of a hot zone in combustion diagnostics by absorption spectroscopy with diode lasers. The algorithm is based not on fitting the baseline (BL) but on expanding the experimental and simulated spectra in a series of orthogonal polynomials, subtracting the first three components of the expansion from both the experimental and simulated spectra, and fitting the spectra thus modified. The algorithm is tested in a numerical experiment by simulating the absorption spectra using a spectroscopic database and adding white noise and a parabolic BL. The spectra so constructed are treated as experimental in further calculations. The theoretical absorption spectra were simulated with parameters (temperature, total pressure, concentration of water vapor) close to those used for simulation of the experimental data. Then, the spectra were expanded in the series of orthogonal polynomials and the first components were subtracted from both spectra. The correct integral line intensities, and hence the correct temperature evaluation, were obtained by fitting the thus-modified experimental and simulated spectra. The dependence of the mean and standard deviation of the integral line intensity estimate on the linewidth and on the number of subtracted components (first two or three) was examined. The proposed algorithm provides a correct estimation of temperature with a standard deviation better than 60 K (for T = 1000 K) for line half-widths up to 0.6 cm⁻¹. The proposed algorithm allows the parameters of a hot zone to be obtained without fitting the usually unknown BL.
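    The subtraction of the first expansion components can be sketched with Legendre polynomials (our choice of orthogonal basis; the function name and parameters are illustrative). Because a parabolic baseline lies entirely in the first three Legendre components, stripping them makes the "experimental" and simulated spectra directly comparable without ever fitting the baseline:

```python
import numpy as np

def strip_low_orders(x, y, n_strip=3, deg=12):
    """Expand y in Legendre polynomials and zero the first n_strip
    coefficients, which absorb a slowly varying (e.g. parabolic) baseline."""
    c = np.polynomial.legendre.legfit(x, y, deg)
    c[:n_strip] = 0.0
    return np.polynomial.legendre.legval(x, c)

x = np.linspace(-1.0, 1.0, 400)
line = np.exp(-(x / 0.1) ** 2)              # simulated absorption line
baseline = 2.0 + 1.5 * x + 0.8 * x ** 2     # unknown parabolic baseline
experimental = strip_low_orders(x, line + baseline)
simulated = strip_low_orders(x, line)
```

Since the least-squares expansion is linear in the input spectrum, the baseline contributes only to the removed low-order coefficients, and the two stripped spectra agree.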

  7. A Background Noise Reduction Technique Using Adaptive Noise Cancellation for Microphone Arrays

    NASA Technical Reports Server (NTRS)

    Spalt, Taylor B.; Fuller, Christopher R.; Brooks, Thomas F.; Humphreys, William M., Jr.

    2011-01-01

    Background noise in wind tunnel environments poses a challenge to acoustic measurements due to possible low or negative Signal to Noise Ratios (SNRs) present in the testing environment. This paper overviews the application of time domain Adaptive Noise Cancellation (ANC) to microphone array signals with an intended application of background noise reduction in wind tunnels. An experiment was conducted to simulate background noise from a wind tunnel circuit measured by an out-of-flow microphone array in the tunnel test section. A reference microphone was used to acquire a background noise signal which interfered with the desired primary noise source signal at the array. The technique's efficacy was investigated using frequency spectra from the array microphones, array beamforming of the point source region, and subsequent deconvolution using the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) algorithm. Comparisons were made with the conventional techniques for improving SNR of spectral and Cross-Spectral Matrix subtraction. The method was seen to recover the primary signal level in SNRs as low as -29 dB and outperform the conventional methods. A second processing approach using the center array microphone as the noise reference was investigated for more general applicability of the ANC technique. It outperformed the conventional methods at the -29 dB SNR but yielded less accurate results when coherence over the array dropped. This approach could possibly improve conventional testing methodology but must be investigated further under more realistic testing conditions.
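    A minimal time-domain ANC sketch using the LMS algorithm (our illustrative implementation, not the paper's processing chain): an FIR filter learns the path from the reference microphone to the noise in the primary channel, and the prediction error is the cleaned signal.

```python
import numpy as np

def lms_anc(primary, reference, taps=8, mu=0.005):
    """Time-domain adaptive noise cancellation with the LMS algorithm.
    The filter output predicts the noise component of `primary` from
    `reference`; the prediction error is the noise-cancelled signal."""
    w = np.zeros(taps)
    out = np.zeros_like(primary)
    for n in range(taps, len(primary)):
        x = reference[n - taps:n][::-1]   # most recent reference samples
        e = primary[n] - w @ x            # error = cleaned signal sample
        w += 2.0 * mu * e * x             # LMS weight update
        out[n] = e
    return out

rng = np.random.default_rng(1)
n = 5000
sig = np.sin(2 * np.pi * 0.01 * np.arange(n))          # desired tone
ref = rng.standard_normal(n)                            # reference noise
noise = 0.9 * np.concatenate([np.zeros(2), ref[:-2]])   # delayed, scaled path
cleaned = lms_anc(sig + noise, ref)
```

After convergence the residual noise power is set by the LMS misadjustment, which is far below the raw contaminating noise power in this toy setup.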

  8. Measurement and subtraction of Schumann resonances at gravitational-wave interferometers

    NASA Astrophysics Data System (ADS)

    Coughlin, Michael W.; Cirone, Alessio; Meyers, Patrick; Atsuta, Sho; Boschi, Valerio; Chincarini, Andrea; Christensen, Nelson L.; De Rosa, Rosario; Effler, Anamaria; Fiori, Irene; Gołkowski, Mark; Guidry, Melissa; Harms, Jan; Hayama, Kazuhiro; Kataoka, Yuu; Kubisz, Jerzy; Kulak, Andrzej; Laxen, Michael; Matas, Andrew; Mlynarczyk, Janusz; Ogawa, Tsutomu; Paoletti, Federico; Salvador, Jacobo; Schofield, Robert; Somiya, Kentaro; Thrane, Eric

    2018-05-01

    Correlated magnetic noise from Schumann resonances threatens to contaminate the observation of a stochastic gravitational-wave background in interferometric detectors. In previous work, we reported on the first effort to eliminate global correlated noise from the Schumann resonances using Wiener filtering, demonstrating as much as a factor of two reduction in the coherence between magnetometers on different continents. In this work, we present results from dedicated magnetometer measurements at the Virgo and KAGRA sites, which are the first results for subtraction using data from gravitational-wave detector sites. We compare these measurements to a growing network of permanent magnetometer stations, including at the LIGO sites. We show the effect of mutual magnetometer attraction, arguing that magnetometers should be placed at least one meter from one another. In addition, for the first time, we show how dedicated measurements by magnetometers near to the interferometers can reduce coherence to a level consistent with uncorrelated noise, making a potential detection of a stochastic gravitational-wave background possible.

  9. A novel algorithm for osteoarthritis detection in Hough domain

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Sabyasachi; Poria, Nilanjan; Chakraborty, Rajanya; Pratiher, Sawon; Mukherjee, Sukanya; Panigrahi, Prasanta K.

    2018-02-01

    Background subtraction of knee MRI images has been performed, followed by edge detection with a Canny edge detector. In order to avoid discontinuities among edges, the Daubechies-4 (Db-4) discrete wavelet transform (DWT) is applied to smooth the edges identified by the Canny detector. The approximation coefficients of Db-4, which carry the highest energy, are selected to remove discontinuities in the edges. The Hough transform is then applied to find imperfect knee locations, as a function of distance (r) and angle (θ). The final outcome of the linear Hough transform is a two-dimensional array, the accumulator space (r, θ), where one dimension of this matrix is the quantized angle θ and the other is the quantized distance r. A novel algorithm has been suggested such that any deviation from the healthy knee bone structure caused by diseases like osteoarthritis can be clearly depicted in the accumulator space.
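    The accumulator-space construction can be sketched with a minimal linear Hough transform (illustrative code, not the authors' implementation): every edge pixel votes for all quantized (r, θ) lines passing through it, and peaks mark straight structures such as bone edges.

```python
import numpy as np

def hough_lines(edges, n_theta=180, n_rho=200):
    """Minimal linear Hough transform. Returns the (n_rho, n_theta)
    accumulator, the sampled angles, and the rho normalization used."""
    h, w = edges.shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    max_rho = np.hypot(h, w)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for y, x in zip(*np.nonzero(edges)):
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + max_rho) / (2 * max_rho) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1         # one vote per angle
    return acc, thetas, max_rho

edges = np.zeros((64, 64), dtype=bool)
edges[10, 0:50] = True                            # a horizontal edge at y = 10
acc, thetas, max_rho = hough_lines(edges)
```

The 50 collinear pixels all vote into the same (r, θ) bin at θ = π/2, producing a sharp peak; a deformed contour would smear its votes across bins, which is the kind of deviation the proposed algorithm looks for.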

  10. Developing a Model to Support Students in Solving Subtraction

    ERIC Educational Resources Information Center

    Murdiyani, Nila Mareta; Zulkardi; Putri, Ratu Ilma Indra; van Eerde, Dolly; van Galen, Frans

    2013-01-01

    Subtraction has two meanings, and each meaning leads to different strategies. The meaning of "taking away something" suggests a direct subtraction, while the meaning of "determining the difference between two numbers" is more likely to be modeled as indirect addition. Many prior studies found that the second meaning and…

  11. Dose algorithm for EXTRAD 4100S extremity dosimeter for use at Sandia National Laboratories.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potter, Charles Augustus

    An updated algorithm for the EXTRAD 4100S extremity dosimeter has been derived. This algorithm optimizes the binning of dosimeter element ratios and uses a quadratic function to determine the response factors for low response ratios. This results in lower systematic bias across all test categories and eliminates the need for the 'red strap' algorithm that was used for high-energy beta/gamma-emitting radionuclides. The Radiation Protection Dosimetry Program (RPDP) at Sandia National Laboratories uses the Thermo Fisher EXTRAD 4100S extremity dosimeter, shown in Fig 1.1, to determine shallow dose to the extremities of potentially exposed individuals. This dosimeter consists of two LiF TLD elements or 'chipstrates', one of TLD-700 (⁷Li) and one of TLD-100 (natural Li), separated by a tin filter. Following readout and background subtraction, the ratio of the responses of the two elements is determined, defining the penetrability of the incident radiation. While this penetrability approximates the incident energy of the radiation, X-rays and beta particles exist in energy distributions that make the determination of dose conversion factors less straightforward.

  12. A new FOD recognition algorithm based on multi-source information fusion and experiment analysis

    NASA Astrophysics Data System (ADS)

    Li, Yu; Xiao, Gang

    2011-08-01

    Foreign Object Debris (FOD) is any substance, debris, or article alien to an aircraft or system that could potentially cause serious damage when present on an airport runway. Given the airport's complex environment, quick and precise detection of FOD targets on the runway is an important safeguard for aircraft safety. A multi-sensor system including millimeter-wave radar and infrared (IR) image sensors is introduced, and a new FOD detection and recognition algorithm based on inherent features of FOD is proposed in this paper. Firstly, the FOD's location and coordinates are accurately obtained by the millimeter-wave radar, and then, according to the coordinates, the IR camera takes target images and background images. Secondly, the runway's edges, which are straight lines in the IR image, are extracted using the Hough transform. The potential target region, that is, the runway region, can then be segmented from the whole image. Thirdly, background subtraction is utilized to localize the FOD target within the runway region. Finally, in the detailed small images of the FOD target, a new characteristic is discussed and used for target classification. The experimental results show that this algorithm can effectively reduce computational complexity, satisfy the real-time requirement, and achieve high detection and recognition probability.

  13. Hardware Implementation of a Bilateral Subtraction Filter

    NASA Technical Reports Server (NTRS)

    Huertas, Andres; Watson, Robert; Villalpando, Carlos; Goldberg, Steven

    2009-01-01

    A bilateral subtraction filter has been implemented as a hardware module in the form of a field-programmable gate array (FPGA). In general, a bilateral subtraction filter is a key subsystem of a high-quality stereoscopic machine vision system that utilizes images that are large and/or dense. Bilateral subtraction filters have been implemented in software on general-purpose computers, but the processing speeds attainable in this way, even on computers containing the fastest processors, are insufficient for real-time applications. The present FPGA bilateral subtraction filter is intended to accelerate processing to real-time speed and to be a prototype of a link in a stereoscopic-machine-vision processing chain, now under development, that would process large and/or dense images in real time and would be implemented in an FPGA. In terms that are necessarily oversimplified for the sake of brevity, a bilateral subtraction filter is a smoothing, edge-preserving filter for suppressing low-frequency noise. The filter operation amounts to replacing the value for each pixel with a weighted average of the values of that pixel and the neighboring pixels in a predefined neighborhood or window (e.g., a 9 × 9 window). The filter weights depend partly on pixel values and partly on the window size. The present FPGA implementation of a bilateral subtraction filter utilizes a 9 × 9 window. This implementation was designed to take advantage of the ability to perform many of the component computations in parallel pipelines to enable processing of image data at the rate at which they are generated. The filter can be considered to be divided into the following parts (see figure): a) an image pixel pipeline with a 9 × 9-pixel window generator; b) an array of processing elements; c) an adder tree; d) a smoothing-and-delaying unit; and e) a subtraction unit. After each 9 × 9 window is created, the affected pixel data are fed to the processing elements.
    Each processing element is fed the pixel value for

  14. Purification of photon subtraction from continuous squeezed light by filtering

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Jun-ichi; Asavanant, Warit; Furusawa, Akira

    2017-11-01

    Photon subtraction from squeezed states is a powerful scheme to create good approximations of so-called Schrödinger cat states. However, conventional continuous-wave-based methods actually involve some impurity in the squeezing of localized wave packets, even in the ideal case of no optical losses. Here, we theoretically discuss this impurity by introducing the mode match of squeezing. Furthermore, we propose a method to remove this impurity by filtering the photon-subtraction field. Our method in principle enables the creation of pure photon-subtracted squeezed states, which was not possible with conventional methods.

  15. Dual-wavelength digital holographic imaging with phase background subtraction

    NASA Astrophysics Data System (ADS)

    Khmaladze, Alexander; Matz, Rebecca L.; Jasensky, Joshua; Seeley, Emily; Holl, Mark M. Banaszak; Chen, Zhan

    2012-05-01

    Three-dimensional digital holographic microscopic phase imaging of objects that are thicker than the wavelength of the imaging light is ambiguous and results in phase wrapping. In recent years, several unwrapping methods that employ two or more wavelengths were introduced. These methods compare the phase information obtained from each of the wavelengths and extend the range of unambiguous height measurements. A straightforward dual-wavelength phase imaging method is presented which allows a flexible tradeoff between the maximum measurable height of the sample and the amount of noise the method can tolerate. For highly accurate phase measurements, phase unwrapping of objects with heights greater than the beat (synthetic) wavelength (i.e., the product of the two original wavelengths divided by their difference) can be achieved. Consequently, three-dimensional measurements of a wide variety of biological systems and microstructures become technically feasible. Additionally, an effective method of removing phase background curvature based on slowly varying polynomial fitting is proposed. This method allows accurate volume measurements of several small objects within the same image frame.
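    The beat-wavelength relation quoted in the abstract, and the way the difference of the two wrapped phases extends the unambiguous range, can be written out directly. A simple transmission-geometry phase model (φ = 2πh/λ) is assumed here; real reconstructions include refractive-index and geometry factors omitted in this sketch.

```python
import math

def beat_wavelength(lam1, lam2):
    """Beat (synthetic) wavelength: the product of the two wavelengths
    divided by their difference, as defined in the abstract."""
    return lam1 * lam2 / abs(lam1 - lam2)

def coarse_height(phi_short, phi_long, lam_short, lam_long):
    """Height from the wrapped-phase difference, unambiguous up to the
    beat wavelength (simple phase model phi = 2*pi*h/lambda)."""
    dphi = (phi_short - phi_long) % (2 * math.pi)
    return dphi / (2 * math.pi) * beat_wavelength(lam_short, lam_long)

# A 2000 nm feature: taller than either wavelength, so each single-
# wavelength phase wraps, but the ~3334 nm beat wavelength resolves it.
h = 2000.0
phi_532 = (2 * math.pi * h / 532.0) % (2 * math.pi)
phi_633 = (2 * math.pi * h / 633.0) % (2 * math.pi)
```

For 633 nm and 532 nm the beat wavelength is about 3334 nm, so the unambiguous range grows by roughly a factor of five while the phase noise is amplified by the same factor, which is the tradeoff the abstract refers to.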

  16. Fluorescence lifetime correlation spectroscopy for precise concentration detection in vivo by background subtraction

    NASA Astrophysics Data System (ADS)

    Gärtner, Maria; Mütze, Jörg; Ohrt, Thomas; Schwille, Petra

    2009-07-01

    In vivo studies of single-molecule dynamics by means of fluorescence correlation spectroscopy can suffer from high background. Fluorescence lifetime correlation spectroscopy provides a tool to distinguish signal from unwanted contributions via lifetime separation. By studying the motion of the RNA-induced silencing complex (RISC) within two compartments of a human cell, the nucleus and the cytoplasm, we observed clear differences in concentration as well as mobility of the protein complex between these two locations. Especially in the nucleus, where the fluorescence signal is very weak, a correction for background is crucial to provide reliable estimates of the particle number. Utilizing the fluorescence lifetimes of the different contributions, we show that it is possible to distinguish between the fluorescence signal and the autofluorescent background in vivo in a single measurement.

  17. Motion induced second order temperature and y-type anisotropies after the subtraction of linear dipole in the CMB maps

    NASA Astrophysics Data System (ADS)

    Sunyaev, Rashid A.; Khatri, Rishi

    2013-03-01

    y-type spectral distortions of the cosmic microwave background allow us to detect clusters and groups of galaxies, filaments of hot gas and the non-uniformities in the warm hot intergalactic medium. Several CMB experiments (on small areas of sky) and theoretical groups (for full sky) have recently published y-type distortion maps. We propose to search for two artificial hot spots in such y-type maps resulting from the incomplete subtraction of the effect of the motion induced dipole on the cosmic microwave background sky. This dipole introduces, at second order, additional temperature and y-distortion anisotropy on the sky of amplitude few μK which could potentially be measured by Planck HFI and Pixie experiments and can be used as a source of cross channel calibration by CMB experiments. This y-type distortion is present in every pixel and is not the result of averaging the whole sky. This distortion, calculated exactly from the known linear dipole, can be subtracted from the final y-type maps, if desired.

  18. Motion induced second order temperature and y-type anisotropies after the subtraction of linear dipole in the CMB maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunyaev, Rashid A.; Khatri, Rishi, E-mail: sunyaev@mpa-garching.mpg.de, E-mail: khatri@mpa-garching.mpg.de

    2013-03-01

    y-type spectral distortions of the cosmic microwave background allow us to detect clusters and groups of galaxies, filaments of hot gas and the non-uniformities in the warm hot intergalactic medium. Several CMB experiments (on small areas of sky) and theoretical groups (for full sky) have recently published y-type distortion maps. We propose to search for two artificial hot spots in such y-type maps resulting from the incomplete subtraction of the effect of the motion induced dipole on the cosmic microwave background sky. This dipole introduces, at second order, additional temperature and y-distortion anisotropy on the sky of amplitude few μK which could potentially be measured by Planck HFI and Pixie experiments and can be used as a source of cross channel calibration by CMB experiments. This y-type distortion is present in every pixel and is not the result of averaging the whole sky. This distortion, calculated exactly from the known linear dipole, can be subtracted from the final y-type maps, if desired.

  19. Contrast-enhanced dual-energy subtraction imaging using electronic spectrum-splitting and multi-prism x-ray lenses

    NASA Astrophysics Data System (ADS)

    Fredenberg, Erik; Cederström, Björn; Lundqvist, Mats; Ribbing, Carolina; Åslund, Magnus; Diekmann, Felix; Nishikawa, Robert; Danielsson, Mats

    2008-03-01

    Dual-energy subtraction imaging (DES) is a method to improve the detectability of contrast agents over a lumpy background. Two images, acquired at x-ray energies above and below an absorption edge of the agent material, are logarithmically subtracted, resulting in suppression of the signal from the tissue background and a relative enhancement of the signal from the agent. Although promising, DES is still not widely used in clinical practice. One reason may be the need for two distinctly separated x-ray spectra that are still close to the absorption edge, realized through dual exposures which may introduce motion unsharpness. In this study, electronic spectrum-splitting with a silicon-strip detector is theoretically and experimentally investigated for a mammography model with iodinated contrast agent. Comparisons are made to absorption imaging and a near-ideal detector using a signal-to-noise ratio that includes both statistical and structural noise. Similar to previous studies, heavy absorption filtration was needed to narrow the spectra at the expense of a large reduction in x-ray flux. Therefore, potential improvements using a chromatic multi-prism x-ray lens (MPL) for filtering were evaluated theoretically. The MPL offers a narrow tunable spectrum, and we show that the image quality can be improved compared to conventional filtering methods.
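    The logarithmic subtraction itself is a one-liner. The sketch below uses made-up attenuation coefficients in a Beer-Lambert toy model to show how choosing the weight as the ratio of the tissue attenuations cancels a lumpy tissue background and leaves only the agent term:

```python
import numpy as np

def dual_energy_subtract(I_hi, I_lo, w):
    """Logarithmic dual-energy subtraction of two intensity images."""
    return np.log(I_hi) - w * np.log(I_lo)

# Beer-Lambert toy model with made-up coefficients: tissue attenuation
# changes smoothly across the K-edge, the agent's attenuation jumps.
mu_t_hi, mu_t_lo = 0.5, 0.8      # tissue, above/below the K-edge
mu_a_hi, mu_a_lo = 2.0, 0.3      # contrast agent, above/below the K-edge
t = np.linspace(1.0, 2.0, 100)   # lumpy tissue thickness profile
a = np.zeros(100)
a[40:60] = 0.1                   # agent present only in a small region
I_hi = np.exp(-(mu_t_hi * t + mu_a_hi * a))   # I0 = 1
I_lo = np.exp(-(mu_t_lo * t + mu_a_lo * a))
# Weight chosen as the tissue attenuation ratio cancels the tissue term.
des = dual_energy_subtract(I_hi, I_lo, mu_t_hi / mu_t_lo)
```

With that weight, log I_hi − w·log I_lo reduces to −(μ_a,hi − w·μ_a,lo)·a: the varying tissue thickness vanishes and the subtracted image is proportional to the agent column density alone.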

  20. Optimization of dual-energy subtraction chest radiography by use of a direct-conversion flat-panel detector system.

    PubMed

    Fukao, Mari; Kawamoto, Kiyosumi; Matsuzawa, Hiroaki; Honda, Osamu; Iwaki, Takeshi; Doi, Tsukasa

    2015-01-01

    We aimed to optimize the exposure conditions for the acquisition of soft-tissue images using dual-energy subtraction chest radiography with a direct-conversion flat-panel detector system. Two separate chest images were acquired at high- and low-energy exposures with standard or thick chest phantoms. The high-energy exposure was fixed at 120 kVp with the use of an auto-exposure control technique. For the low-energy exposure, the tube voltages and entrance surface doses ranged from 40 to 80 kVp and from 20 to 100 % of the dose required for the high-energy exposure, respectively. Further, a repetitive processing algorithm was used to reduce the image noise generated by the subtraction process. Seven radiology technicians ranked the soft-tissue images, and the results were analyzed using the normalized-rank method. Images acquired at 60 kVp were of acceptable quality regardless of the entrance surface dose and phantom size. Using the repetitive processing algorithm, the minimum acceptable dose was reduced from 75 to 40 % for the standard phantom and to 50 % for the thick phantom. We determined that the optimum low-energy exposure was 60 kVp at 50 % of the dose required for the high-energy exposure. This allowed the simultaneous acquisition of standard radiographs and soft-tissue images at 1.5 times the dose required for a standard radiograph alone, which is significantly lower than the values reported previously.

  1. Space debris tracking based on fuzzy running Gaussian average adaptive particle filter track-before-detect algorithm

    NASA Astrophysics Data System (ADS)

    Torteeka, Peerapong; Gao, Peng-Qi; Shen, Ming; Guo, Xiao-Zhang; Yang, Da-Tao; Yu, Huan-Huan; Zhou, Wei-Ping; Zhao, You

    2017-02-01

    Although tracking with a passive optical telescope is a powerful technique for space debris observation, it is limited by its sensitivity to dynamic background noise. Traditionally, in the field of astronomy, static background subtraction based on a median image technique has been used to extract moving space objects prior to the tracking operation, as this is computationally efficient. The main disadvantage of this technique is that it is not robust to variable illumination conditions. In this article, we propose an approach for tracking small and dim space debris in the context of a dynamic background via one of the optical telescopes that is part of the space surveillance network project, named the Asia-Pacific ground-based Optical Space Observation System or APOSOS. The approach combines a fuzzy running Gaussian average for robust moving-object extraction with dim-target tracking using a particle-filter-based track-before-detect method. The performance of the proposed algorithm is experimentally evaluated, and the results show that the scheme achieves a satisfactory level of accuracy for space debris tracking.
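
    The classical running Gaussian average that the proposed fuzzy variant builds on maintains a per-pixel mean and variance and flags pixels that deviate too far from the model. A minimal sketch (frames are flattened to 1-D pixel lists; alpha and k are illustrative values, and the fuzzy weighting of the learning rate used by the paper is omitted):

```python
class RunningGaussianBackground:
    """Per-pixel running Gaussian background model."""

    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = [float(p) for p in first_frame]
        self.var = [100.0] * len(first_frame)  # generous initial variance
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        """Return a per-pixel foreground mask and update the model."""
        fg = []
        for i, p in enumerate(frame):
            d = p - self.mean[i]
            is_fg = d * d > (self.k ** 2) * self.var[i]
            fg.append(is_fg)
            if not is_fg:  # update only with background evidence
                self.mean[i] += self.alpha * d
                self.var[i] += self.alpha * (d * d - self.var[i])
        return fg
```

    A static pixel stays absorbed into the background model, while a sudden large change is flagged as foreground; replacing the fixed alpha with a per-pixel adaptive rate is what makes the model robust to the variable illumination the abstract describes.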

  2. Subtraction coronary CT angiography using second-generation 320-detector row CT.

    PubMed

    Yoshioka, Kunihiro; Tanaka, Ryoichi; Muranaka, Kenta; Sasaki, Tadashi; Ueda, Takanori; Chiba, Takuya; Takeda, Kouta; Sugawara, Tsuyoshi

    2015-06-01

    The purpose of this study was to explore the feasibility of subtraction coronary computed tomography angiography (CCTA) by second-generation 320-detector row CT in patients with severe coronary artery calcification, using invasive coronary angiography (ICA) as the gold standard. This study was approved by the institutional board, and all subjects provided written consent. Twenty patients with calcium scores of >400 underwent conventional CCTA and subtraction CCTA followed by ICA. A total of 82 segments were evaluated for image quality using a 4-point scale and for the presence of significant (>50 %) luminal stenosis by two independent readers. The average image quality was 2.3 ± 0.8 with conventional CCTA and 3.2 ± 0.6 with subtraction CCTA (P < 0.001). The percentage of segments with non-diagnostic image quality was 43.9 % on conventional CCTA versus 8.5 % on subtraction CCTA (P = 0.004). The segment-based diagnostic accuracy for detecting significant stenosis according to ICA revealed an area under the receiver operating characteristics curve of 0.824 (95 % confidence interval [CI], 0.750-0.899) for conventional CCTA and 0.936 (95 % CI 0.889-0.936) for subtraction CCTA (P = 0.001). The sensitivity, specificity, positive predictive value, and negative predictive value for conventional CCTA were 88.2, 62.5, 62.5, and 88.2 %, respectively, and for subtraction CCTA they were 94.1, 85.4, 82.1, and 95.3 %, respectively. Compared with conventional CCTA, subtraction CCTA using a second-generation 320-detector row CT improved segment-based diagnostic accuracy in patients with severe calcifications.

  3. An Experimental Implementation of Chemical Subtraction

    PubMed Central

    Chen, Shao-Nong; Turner, Allison; Jaki, Birgit U.; Nikolic, Dejan; van Breemen, Richard B.; Friesen, J. Brent; Pauli, Guido F.

    2008-01-01

    A preparative analytical method was developed to selectively remove (“chemically subtract”) a single compound from a complex mixture, such as a natural extract or fraction, in a single step. The proof of concept is demonstrated by the removal of pure benzoic acid (BA) from cranberry (Vaccinium macrocarpon Ait.) juice fractions that exhibit anti-adhesive effects vs. uropathogenic E. coli. Chemical subtraction of BA, representing a major constituent of the fractions, eliminates the potential in vitro interference of the bacteriostatic effect of BA with the E. coli anti-adherence action measured in bioassays. Upon BA removal, the anti-adherent activity of the fraction was fully retained: 36% inhibition of adherence in the parent fraction at 100 µg/mL increased to 58% in the BA-free active fraction. The method employs countercurrent chromatography (CCC) and operates loss-free for both the subtracted and the retained portions, as only liquid-liquid partitioning is involved. While the high purity (97.47% by quantitative 1H NMR) of the subtracted BA confirms the selectivity of the method, one minor impurity was determined to be scopoletin by HR-ESI-MS and (q)HNMR and represents the first coumarin reported from cranberries. A general concept for the selective removal of phytoconstituents by CCC is presented, which has potential broad applicability in the biological evaluation of medicinal plant extracts and complex pharmaceutical preparations. PMID:18234463

  4. Subjective comparison and evaluation of speech enhancement algorithms

    PubMed Central

    Hu, Yi; Loizou, Philipos C.

    2007-01-01

    Making meaningful comparisons between the performance of the various speech enhancement algorithms proposed over the years has been elusive due to the lack of a common speech database, differences in the types of noise used, and differences in testing methodology. To facilitate such comparisons, we report on the development of a noisy speech corpus suitable for the evaluation of speech enhancement algorithms. This corpus is subsequently used for the subjective evaluation of 13 speech enhancement methods encompassing four classes of algorithms: spectral subtractive, subspace, statistical-model based, and Wiener-type algorithms. The subjective evaluation was performed by Dynastat, Inc. using the ITU-T P.835 methodology, which is designed to evaluate speech quality along three dimensions: signal distortion, noise distortion, and overall quality. This paper reports the results of the subjective tests. PMID:18046463

  5. Subtraction of Positive and Negative Numbers: The Difference and Completion Approaches with Chips

    ERIC Educational Resources Information Center

    Flores, Alfinio

    2008-01-01

    Diverse contexts such as "take away," "comparison," and "completion" give rise to subtraction problems. The take-away interpretation of subtraction has been explored using two-colored chips to help students understand addition and subtraction of integers. This article illustrates how the difference and completion (or missing addend) interpretations…

  6. Characterizing the Background Corona with SDO/AIA

    NASA Technical Reports Server (NTRS)

    Napier, Kate; Alexander, Caroline; Winebarger, Amy

    2014-01-01

    Characterizing the nature of the solar coronal background would enable scientists to more accurately determine plasma parameters, and may lead to a better understanding of the coronal heating problem. Because scientists study the 3D structure of the Sun in 2D, any line-of-sight includes both foreground and background material, and thus, the issue of background subtraction arises. By investigating the intensity values in and around an active region, using multiple wavelengths collected from the Atmospheric Imaging Assembly (AIA) on the Solar Dynamics Observatory (SDO) over an eight-hour period, this project aims to characterize the background as smooth or structured. Different methods were employed to measure the true coronal background and create minimum intensity images. These were then investigated for the presence of structure. The background images created were found to contain long-lived structures, including coronal loops, that were still present in all of the wavelengths, 131, 171, 193, 211, and 335 A. The intensity profiles across the active region indicate that the background is much more structured than previously thought.
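
    The minimum intensity images described above can be built with a pixel-wise minimum over the co-aligned frames of the time series. A generic sketch (frames as equal-length 1-D pixel lists), not the project's actual processing chain:

```python
def minimum_intensity_image(frames):
    """Pixel-wise minimum over a stack of co-aligned frames:
    transient brightenings drop out, leaving an estimate of the
    steady coronal background."""
    return [min(pixel_series) for pixel_series in zip(*frames)]

# Three toy frames; the middle pixel brightens transiently in frame 2.
frames = [[3, 5, 7],
          [3, 9, 7],
          [4, 5, 8]]
background = minimum_intensity_image(frames)
```

    Structures that persist across the whole observing window, such as long-lived coronal loops, survive this minimum and therefore show up in the background image, which is exactly what the study reports.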

  7. Reducing false-positive detections by combining two stage-1 computer-aided mass detection algorithms

    NASA Astrophysics Data System (ADS)

    Bedard, Noah D.; Sampat, Mehul P.; Stokes, Patrick A.; Markey, Mia K.

    2006-03-01

    In this paper we present a strategy for reducing the number of false-positives in computer-aided mass detection. Our approach is to only mark "consensus" detections from among the suspicious sites identified by different "stage-1" detection algorithms. By "stage-1" we mean that each of the Computer-aided Detection (CADe) algorithms is designed to operate with high sensitivity, allowing for a large number of false positives. In this study, two mass detection methods were used: (1) Heath and Bowyer's algorithm based on the average fraction under the minimum filter (AFUM) and (2) a low-threshold bi-lateral subtraction algorithm. The two methods were applied separately to a set of images from the Digital Database for Screening Mammography (DDSM) to obtain paired sets of mass candidates. The consensus mass candidates for each image were identified by a logical "and" operation of the two CADe algorithms so as to eliminate regions of suspicion that were not independently identified by both techniques. It was shown that by combining the evidence from the AFUM filter method with that obtained from bi-lateral subtraction, the same sensitivity could be reached with fewer false-positives per image relative to using the AFUM filter alone.
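
    The consensus step itself reduces to intersecting the candidate sets produced by the two detectors; a schematic sketch with hypothetical region identifiers:

```python
def consensus_detections(marks_a, marks_b):
    """Logical 'and' of two stage-1 detectors: keep only the
    suspicious regions flagged by both algorithms."""
    return marks_a & marks_b

# Hypothetical candidate regions from the two stage-1 detectors.
afum_hits = {"region1", "region2", "region3", "region5"}
bilateral_hits = {"region2", "region3", "region4"}
consensus = consensus_detections(afum_hits, bilateral_hits)
```

    In practice the "and" is applied to spatially overlapping detections rather than exact identifiers; the matching tolerance is a design choice of the CADe pipeline.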

  8. PSF subtraction to search for distant Jupiters with SPITZER

    NASA Astrophysics Data System (ADS)

    Rameau, Julien; Artigau, Etienne; Baron, Frédérique; Lafrenière, David; Doyon, Rene; Malo, Lison; Naud, Marie-Eve; Delorme, Philippe; Janson, Markus; Albert, Loic; Gagné, Jonathan; Beichman, Charles

    2015-12-01

    In the course of the search for extrasolar planets, the focus has been on rocky planets very close (within a few AU) to their parent stars. However, planetary systems may host gas giants as well, possibly at larger separations from the central star. Direct imaging is the only technique able to probe the outer parts of planetary systems. With the advent of the new generation of planet finders like GPI and SPHERE, extrasolar systems are now studied at the solar-system scale. Nevertheless, very extended planetary systems do exist and have been found (GU Psc b, AB Pic b, etc.). They are easier to detect and characterize, and they are excellent proxies for the close-in gas giants detected from the ground. These planets have no equivalent in our solar system, and their origin remains a matter of speculation. Studying a planetary system from its innermost to its outermost parts is therefore mandatory for a clear understanding of its architecture, and hence for hints about its formation and evolution. We are carrying out a space-based survey using SPITZER to search for distant companions around a well-characterized sample of 120 young and nearby stars. We designed an observing strategy that allows building a very homogeneous PSF library. With this library, we perform a PSF subtraction to search for planets from 10’’ down to 1’’. In this poster, I will present the library, the different algorithms used to subtract the PSF, and the promising detection sensitivity that we are able to reach with this survey. This project to search for the most extreme planetary systems is unique in the exoplanet community. It is also the only realistic means of directly imaging, and subsequently obtaining spectroscopy of, young Saturn- or Jupiter-mass planets in the JWST era.

  9. Young children's use of derived fact strategies for addition and subtraction

    PubMed Central

    Dowker, Ann

    2014-01-01

    Forty-four children between 6;0 and 7;11 took part in a study of derived fact strategy use. They were assigned to addition and subtraction levels on the basis of calculation pretests. They were then given Dowker's (1998) test of derived fact strategies in addition, involving strategies based on the Identity, Commutativity, Addend +1, Addend −1, and addition/subtraction Inverse principles; and test of derived fact strategies in subtraction, involving strategies based on the Identity, Minuend +1, Minuend −1, Subtrahend +1, Subtrahend −1, Complement and addition/subtraction Inverse principles. The exact arithmetic problems given varied according to the child's previously assessed calculation level and were selected to be just a little too difficult for the child to solve unaided. Children were given the answer to a problem and then asked to solve another problem that could be solved quickly by using this answer, together with the principle being assessed. The children also took the WISC Arithmetic subtest. Strategies differed greatly in difficulty, with Identity being the easiest, and the Inverse and Complement principles being most difficult. The Subtrahend +1 and Subtrahend −1 problems often elicited incorrect strategies based on an overextension of the principles of addition to subtraction. It was concluded that children may have difficulty with understanding and applying the relationships between addition and subtraction. Derived fact strategy use was significantly related to both calculation level and to WISC Arithmetic scaled score. PMID:24431996

  10. Digital image comparison by subtracting contextual transformations—percentile rank order differentiation

    USGS Publications Warehouse

    Wehde, M. E.

    1995-01-01

    The common method of digital image comparison by subtraction imposes various constraints on the image contents. Precise registration of images is required to assure proper evaluation of surface locations. The attribute being measured and the calibration and scaling of the sensor are also important to the validity and interpretability of the subtraction result. Influences of sensor gains and offsets complicate the subtraction process. The presence of any uniform systematic transformation component in one of two images to be compared distorts the subtraction results and requires analyst intervention to interpret or remove it. A new technique has been developed to overcome these constraints. Images to be compared are first transformed using the cumulative relative frequency as a transfer function. The transformed images represent the contextual relationship of each surface location with respect to all others within the image. The process of differentiating between the transformed images results in a percentile rank ordered difference. This process produces consistent terrain-change information even when the above requirements necessary for subtraction are relaxed. This technique may be valuable to an appropriately designed hierarchical terrain-monitoring methodology because it does not require human participation in the process.
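
    The transform-then-difference scheme can be sketched as follows: each image is first mapped through its own cumulative relative frequency, and the resulting percentile-rank images are subtracted (a minimal sketch on 1-D toy data):

```python
import bisect

def percentile_rank_transform(image):
    """Replace each pixel by its cumulative relative frequency
    (fraction of pixels <= its value) within the image."""
    ordered = sorted(image)
    n = len(image)
    return [bisect.bisect_right(ordered, v) / n for v in image]

def rank_order_difference(img_a, img_b):
    """Difference of percentile-rank images: a uniform systematic
    component in either image cancels, unlike in direct subtraction."""
    ra = percentile_rank_transform(img_a)
    rb = percentile_rank_transform(img_b)
    return [a - b for a, b in zip(ra, rb)]

# A uniform sensor offset (+50) would corrupt a direct difference
# but leaves the percentile-rank difference at zero everywhere.
img = [10, 20, 30, 40]
diff = rank_order_difference(img, [v + 50 for v in img])
```

    This is why the technique tolerates sensor gain and offset differences that would require analyst intervention under plain subtraction.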

  11. Passive Fourier-transform infrared spectroscopy of chemical plumes: an algorithm for quantitative interpretation and real-time background removal

    NASA Astrophysics Data System (ADS)

    Polak, Mark L.; Hall, Jeffrey L.; Herr, Kenneth C.

    1995-08-01

    We present a ratioing algorithm for quantitative analysis of the passive Fourier-transform infrared spectrum of a chemical plume. We show that the transmission of a near-field plume is given by tau_plume = (L_obsd − L_bb-plume) / (L_bkgd − L_bb-plume), where tau_plume is the frequency-dependent transmission of the plume, L_obsd is the spectral radiance of the scene that contains the plume, L_bkgd is the spectral radiance of the same scene without the plume, and L_bb-plume is the spectral radiance of a blackbody at the plume temperature. The algorithm simultaneously achieves background removal, elimination of the spectrometer internal signature, and quantification of the plume spectral transmission. It has applications to both real-time processing for plume visualization and quantitative measurements of plume column densities. The plume temperature (L_bb-plume), which is not always precisely known, can have a profound effect on the quantitative interpretation of the algorithm and is discussed in detail. Finally, we provide an illustrative example of the use of the algorithm on a trichloroethylene and acetone plume.
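
    The ratioing formula translates directly into code; a minimal sketch with toy spectral radiances on a common frequency grid:

```python
def plume_transmission(L_obsd, L_bkgd, L_bb_plume):
    """Per-frequency plume transmission:
    tau = (L_obsd - L_bb_plume) / (L_bkgd - L_bb_plume)."""
    return [(o - p) / (g - p)
            for o, g, p in zip(L_obsd, L_bkgd, L_bb_plume)]

# Toy two-bin spectrum: the second bin is half-absorbed by the plume
# (L_obsd interpolates between background and plume blackbody).
L_bkgd = [3.0, 3.0]   # scene radiance without the plume
L_bb   = [1.0, 1.0]   # blackbody radiance at the plume temperature
L_obsd = [3.0, 2.0]   # scene radiance with the plume present
tau = plume_transmission(L_obsd, L_bkgd, L_bb)
```

    As the abstract stresses, an error in the assumed plume temperature shifts L_bb and therefore biases the recovered tau.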

  12. Differential Gene Expression at Coral Settlement and Metamorphosis - A Subtractive Hybridization Study

    PubMed Central

    Hayward, David C.; Hetherington, Suzannah; Behm, Carolyn A.; Grasso, Lauretta C.; Forêt, Sylvain; Miller, David J.; Ball, Eldon E.

    2011-01-01

    Background A successful metamorphosis from a planktonic larva to a settled polyp, which under favorable conditions will establish a future colony, is critical for the survival of corals. However, in contrast to the situation in other animals, e.g., frogs and insects, little is known about the molecular basis of coral metamorphosis. We have begun to redress this situation with previous microarray studies, but there is still a great deal to learn. In the present paper we have utilized a different technology, subtractive hybridization, to characterize genes differentially expressed across this developmental transition and to compare the success of this method to microarray. Methodology/Principal Findings Suppressive subtractive hybridization (SSH) was used to identify two pools of transcripts from the coral, Acropora millepora. One is enriched for transcripts expressed at higher levels at the pre-settlement stage, and the other for transcripts expressed at higher levels at the post-settlement stage. Virtual northern blots were used to demonstrate the efficacy of the subtractive hybridization technique. Both pools contain transcripts coding for proteins in various functional classes but transcriptional regulatory proteins were represented more frequently in the post-settlement pool. Approximately 18% of the transcripts showed no significant similarity to any other sequence on the public databases. Transcripts of particular interest were further characterized by in situ hybridization, which showed that many are regulated spatially as well as temporally. Notably, many transcripts exhibit axially restricted expression patterns that correlate with the pool from which they were isolated. Several transcripts are expressed in patterns consistent with a role in calcification. Conclusions We have characterized over 200 transcripts that are differentially expressed between the planula larva and post-settlement polyp of the coral, Acropora millepora. 

  13. Subtraction method in the Second Random Phase Approximation

    NASA Astrophysics Data System (ADS)

    Gambacurta, Danilo

    2018-02-01

    We discuss the subtraction method applied to the Second Random Phase Approximation (SRPA). This method has been proposed to overcome double counting and stability issues appearing in beyond-mean-field calculations. We show that the subtraction procedure leads to a considerable reduction of the SRPA downward shift with respect to the random phase approximation (RPA) spectra and to results that are weakly cutoff dependent. Applications to the isoscalar monopole and quadrupole response in 16O and to the low-lying dipole response in 48Ca are shown and discussed.

  14. Blood flow measurement using digital subtraction angiography for assessing hemodialysis access function

    NASA Astrophysics Data System (ADS)

    Koirala, Nischal; Setser, Randolph M.; Bullen, Jennifer; McLennan, Gordon

    2017-03-01

    Blood flow rate is a critical parameter for diagnosing dialysis access function during fistulography, where a flow rate of 600 ml/min in an arteriovenous graft or 400-500 ml/min in an arteriovenous fistula is considered the clinical threshold for a fully functioning access. In this study, a computational model for calculating intra-access flow to evaluate dialysis access patency was developed and validated in an in vitro setup using digital subtraction angiography. Flow rates were computed by tracking the bolus through two regions of interest using cross correlation (XCOR) and mean arrival time (MAT) algorithms, and correlated against an in-line transonic flow meter measurement. The mean difference (mean +/- standard deviation) between XCOR and in-line flow measurements for the in vitro setup at 3, 6, 7.5, and 10 frames/s was 118+/-63, 37+/-59, 31+/-31, and 46+/-57 ml/min, respectively, while for the MAT method it was 86+/-56, 57+/-72, 35+/-85, and 19+/-129 ml/min, respectively. The results of this investigation will be helpful for selecting candidate algorithms as a blood flow computational tool is developed for clinical application.
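
    The XCOR approach can be sketched as finding the frame lag that best aligns the two ROI time-intensity curves; transit time is lag × frame interval, and flow then follows from the known vessel geometry between the ROIs. The curves below are toy values, not measured data:

```python
def transit_time_xcorr(curve_up, curve_down, frame_interval):
    """Bolus transit time between two ROIs: the lag (in frames)
    maximizing the un-normalized cross correlation of the two
    time-intensity curves, converted to seconds."""
    n = len(curve_up)
    best_lag, best_score = 0, float("-inf")
    for lag in range(n):
        score = sum(curve_up[t] * curve_down[t + lag]
                    for t in range(n - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag * frame_interval

# Downstream curve is the upstream curve delayed by two frames;
# at 6 frames/s the estimated transit time is 2/6 s.
up   = [0.0, 1.0, 5.0, 1.0, 0.0, 0.0, 0.0]
down = [0.0, 0.0, 0.0, 1.0, 5.0, 1.0, 0.0]
transit = transit_time_xcorr(up, down, 1.0 / 6.0)
```

    The frame-rate dependence reported in the study is visible here: the lag is quantized to whole frames, so slower acquisition rates coarsen the transit-time, and hence flow, estimate.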

  15. An Efficient Voting Algorithm for Finding Additive Biclusters with Random Background

    PubMed Central

    Xiao, Jing; Wang, Lusheng; Liu, Xiaowen

    2008-01-01

    The biclustering problem has been extensively studied in many areas, including e-commerce, data mining, machine learning, pattern recognition, statistics, and, more recently, computational biology. Given an n × m matrix A (n ≥ m), the main goal of biclustering is to identify a subset of rows (called objects) and a subset of columns (called properties) such that some objective function that specifies the quality of the found bicluster (formed by the subsets of rows and of columns of A) is optimized. The problem has been proved or conjectured to be NP-hard for various objective functions. In this article, we study a probabilistic model for the implanted additive bicluster problem, where each element in the n × m background matrix is a random integer from [0, L − 1] for some integer L, and a k × k implanted additive bicluster is obtained from an error-free additive bicluster by randomly changing each element to a number in [0, L − 1] with probability θ. We propose an O(n²m) time algorithm based on voting to solve the problem. We show that when k ≥ Ω(√(n log n)), the voting algorithm can correctly find the implanted bicluster with probability at least 1 − 9/n². We also implement our algorithm as a C++ program named VOTE. The implementation incorporates several
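
    The additive structure that the voting exploits is easy to see in a toy example: rows of the bicluster differ by a per-row constant on the bicluster's columns, so differences against a seed row concentrate their "votes" on a single value. The sketch below is a simplified illustration of that idea, not the paper's full O(n²m) algorithm (which needs no known seed row):

```python
from collections import Counter

def rows_voting_with_seed(matrix, seed, k):
    """For each row, columns vote with the row's difference to the
    seed row; a difference value winning at least k votes marks the
    row (and the voting columns) as sharing an additive bicluster
    with the seed."""
    seed_row = matrix[seed]
    members = {}
    for r, row in enumerate(matrix):
        diffs = [v - s for v, s in zip(row, seed_row)]
        value, count = Counter(diffs).most_common(1)[0]
        if count >= k:
            members[r] = [j for j, d in enumerate(diffs) if d == value]
    return members

# Rows 0 and 1 carry an implanted additive bicluster on columns 0-2
# (row 1 = row 0 + 3 there); row 2 is random background.
matrix = [[1, 2, 3, 9],
          [4, 5, 6, 0],
          [7, 1, 8, 2]]
members = rows_voting_with_seed(matrix, seed=0, k=3)
```

    Background rows scatter their votes over many difference values and so fall below the threshold k, which is the intuition behind the paper's probabilistic guarantee.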

  16. Improvement of two-way continuous-variable quantum key distribution with virtual photon subtraction

    NASA Astrophysics Data System (ADS)

    Zhao, Yijia; Zhang, Yichen; Li, Zhengyu; Yu, Song; Guo, Hong

    2017-08-01

    We propose a method to improve the performance of two-way continuous-variable quantum key distribution protocol by virtual photon subtraction. The virtual photon subtraction implemented via non-Gaussian post-selection not only enhances the entanglement of two-mode squeezed vacuum state but also has advantages in simplifying physical operation and promoting efficiency. In two-way protocol, virtual photon subtraction could be applied on two sources independently. Numerical simulations show that the optimal performance of renovated two-way protocol is obtained with photon subtraction only used by Alice. The transmission distance and tolerable excess noise are improved by using the virtual photon subtraction with appropriate parameters. Moreover, the tolerable excess noise maintains a high value with the increase in distance so that the robustness of two-way continuous-variable quantum key distribution system is significantly improved, especially at long transmission distance.

  17. Subtractive Structural Modification of Morpho Butterfly Wings.

    PubMed

    Shen, Qingchen; He, Jiaqing; Ni, Mengtian; Song, Chengyi; Zhou, Lingye; Hu, Hang; Zhang, Ruoxi; Luo, Zhen; Wang, Ge; Tao, Peng; Deng, Tao; Shang, Wen

    2015-11-11

    Different from studies of butterfly wings through additive modification, this work for the first time studies the property change of butterfly wings through subtractive modification using oxygen plasma etching. The controlled modification of butterfly wings through such a subtractive process results in a gradual change of the optical properties, and helps the further understanding of structural optimization through natural evolution. The brilliant color of Morpho butterfly wings originates from the hierarchical nanostructure on the wing scales. Such nanoarchitecture has attracted a lot of research effort, including the study of its optical properties, its potential use in sensing and infrared imaging, and also its use as a template for the fabrication of high-performance photocatalytic materials. The controlled subtractive processes provide a new path to modify such nanoarchitecture and its optical properties. Distinct from previous studies on the optical properties of the Morpho wing structure, this study provides additional experimental evidence for the origin of the optical properties of natural butterfly wing scales. The study also offers a facile approach to generate new 3D nanostructures using butterfly wings as the templates and may lead to simpler structure models for large-scale man-made structures than those offered by original butterfly wings. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. The Automatic Recognition of the Abnormal Sky-subtraction Spectra Based on Hadoop

    NASA Astrophysics Data System (ADS)

    An, An; Pan, Jingchang

    2017-10-01

    Skylines superimpose on the target spectrum as a major source of noise; if a spectrum still contains many high-strength skylight residuals after sky-subtraction processing, follow-up analysis of the target spectrum is compromised. At the same time, LAMOST can collect a large quantity of spectroscopic data every night, so an efficient platform is needed to recognize abnormal sky-subtraction spectra in large numbers quickly. Hadoop, as a distributed parallel data computing platform, can deal with large amounts of data effectively. In this paper, we first perform continuum normalization and then present a simple and effective method to automatically recognize abnormal sky-subtraction spectra on the Hadoop platform. Experiments show that the Hadoop platform can carry out the recognition with greater speed and efficiency, and that the simple method can effectively recognize abnormal sky-subtraction spectra and find the abnormal skyline positions of different residual strengths; it can be applied to the automatic detection of abnormal sky-subtraction in large numbers of spectra.

  19. Color Addition and Subtraction Apps

    ERIC Educational Resources Information Center

    Ruiz, Frances; Ruiz, Michael J.

    2015-01-01

    Color addition and subtraction apps in HTML5 have been developed for students as an online hands-on experience so that they can more easily master principles introduced through traditional classroom demonstrations. The evolution of the additive RGB color model is traced through the early IBM color adapters so that students can proceed step by step…

  20. VAKT for Basic Subtraction Facts.

    ERIC Educational Resources Information Center

    Thornton, Carol A.; Toohey, Margaret A.

    Guidelines are presented for modifying basic instruction of subtraction facts for elementary level learning disabled students. A detailed case study is used to illustrate a five-step structured program: (1) find a way to work it out; (2) add to check; (3) learn the partner facts; (4) study families of facts; (5) review and practice. The selection…

  1. Summation and subtraction using a modified autoshaping procedure in pigeons.

    PubMed

    Ploog, Bertram O

    2008-06-01

    A modified autoshaping paradigm (significantly different from those previously reported in the summation literature) was employed to allow for the simultaneous assessment of stimulus summation and subtraction in pigeons. The response requirements and the probability of food delivery were adjusted such that towards the end of training 12 of 48 trials ended in food delivery, the same proportion as under testing. Stimuli (outlines of squares of three sizes and colors: A, B, and C) were used that could be presented separately or in any combination of two or three stimuli. Twelve of the pigeons (summation groups) were trained with either A, B, and C or with AB, BC, and CA, and tested with ABC. The remaining 12 pigeons (subtraction groups) received training with ABC but were tested with A, B, and C or with AB, BC, and CA. These groups were further subdivided according to whether stimulus elements were presented either in a concentric or dispersed manner. Summation did not occur; subtraction occurred in the two concentric groups. For interpretation of the results, configural theory, the Rescorla-Wagner model, and the composite-stimulus control model were considered. The results suggest different mechanisms responsible for summation and subtraction.

  2. Nonenhanced magnetic resonance angiography (MRA) of the calf arteries at 3 Tesla: intraindividual comparison of 3D flow-dependent subtractive MRA and 2D flow-independent non-subtractive MRA.

    PubMed

    Knobloch, Gesine; Lauff, Marie-Teres; Hirsch, Sebastian; Schwenke, Carsten; Hamm, Bernd; Wagner, Moritz

    2016-12-01

    To prospectively compare 3D flow-dependent subtractive MRA vs. 2D flow-independent non-subtractive MRA for assessment of the calf arteries at 3 Tesla. Forty-two patients with peripheral arterial occlusive disease underwent nonenhanced MRA of calf arteries at 3 Tesla with 3D flow-dependent subtractive MRA (fast spin echo sequence; 3D-FSE-MRA) and 2D flow-independent non-subtractive MRA (balanced steady-state-free-precession sequence; 2D-bSSFP-MRA). Moreover, all patients underwent contrast-enhanced MRA (CE-MRA) as standard-of-reference. Two readers performed a per-segment evaluation for image quality (4 = excellent to 0 = non-diagnostic) and severity of stenosis. Image quality scores of 2D-bSSFP-MRA were significantly higher compared to 3D-FSE-MRA (medians across readers: 4 vs. 3; p < 0.0001) with lower rates of non-diagnostic vessel segments on 2D-bSSFP-MRA (reader 1: <1 % vs. 15 %; reader 2: 1 % vs. 29 %; p < 0.05). Diagnostic performance of 2D-bSSFP-MRA and 3D-FSE-MRA across readers showed sensitivities of 89 % (214/240) vs. 70 % (168/240), p = 0.0153; specificities: 91 % (840/926) vs. 63 % (585/926), p < 0.0001; and diagnostic accuracies of 90 % (1054/1166) vs. 65 % (753/1166), p < 0.0001. 2D flow-independent non-subtractive MRA (2D-bSSFP-MRA) is a robust nonenhanced MRA technique for assessment of the calf arteries at 3 Tesla with significantly higher image quality and diagnostic accuracy compared to 3D flow-dependent subtractive MRA (3D-FSE-MRA). • 2D flow-independent non-subtractive MRA (2D-bSSFP-MRA) is a robust NE-MRA technique at 3T • 2D-bSSFP-MRA outperforms 3D flow-dependent subtractive MRA (3D-FSE-MRA) as NE-MRA of calf arteries • 2D-bSSFP-MRA is a promising alternative to CE-MRA for calf PAOD evaluation.

  3. Microscopic image analysis for reticulocyte based on watershed algorithm

    NASA Astrophysics Data System (ADS)

    Wang, J. Q.; Liu, G. F.; Liu, J. G.; Wang, G.

    2007-12-01

    We present a watershed-based algorithm for the analysis of light microscopic images of reticulocytes (RETs), to be used in an automated RET recognition system for peripheral blood. The original images, obtained by micrography, are segmented by a modified watershed algorithm and recognized in terms of gray entropy and the area of connected regions. In the watershed algorithm, judgment conditions are controlled according to the character of the image, and the segmentation is performed by morphological subtraction. The algorithm was simulated with MATLAB software. Automated and manual scoring produced similar results, with good correlation (r = 0.956) between the two methods across 50 RET images. The results indicate that the algorithm is comparable to conventional manual scoring for peripheral blood RETs and is superior in objectivity. The algorithm avoids time-consuming calculations such as ultra-erosion and region growth, which speeds up the computation considerably.
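
    The morphological subtraction step can be illustrated on a small binary image: subtracting the erosion of an object from the object itself leaves its boundary. A generic pure-Python sketch with a 3 × 3 structuring element, not the authors' MATLAB implementation:

```python
def erode(img):
    """Binary erosion with a 3x3 structuring element; border pixels
    are treated as eroded away."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(img[y + dy][x + dx]
                                for dy in (-1, 0, 1)
                                for dx in (-1, 0, 1)))
    return out

def morphological_boundary(img):
    """Morphological subtraction: original minus its erosion keeps
    only the object's boundary pixels."""
    eroded = erode(img)
    return [[a - b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img, eroded)]

# A 3x3 square: erosion keeps only its center pixel, so the
# subtraction leaves the 8 boundary pixels.
square = [[0, 0, 0, 0, 0],
          [0, 1, 1, 1, 0],
          [0, 1, 1, 1, 0],
          [0, 1, 1, 1, 0],
          [0, 0, 0, 0, 0]]
boundary = morphological_boundary(square)
```

    Boundaries extracted this way can serve as markers or ridge cues in a watershed segmentation, which is one common role for morphological subtraction in cell-image pipelines.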

  4. Characterizing the True Background Corona with SDO/AIA

    NASA Technical Reports Server (NTRS)

    Napier, Kate; Winebarger, Amy; Alexander, Caroline

    2014-01-01

    Characterizing the nature of the solar coronal background would enable scientists to more accurately determine plasma parameters, and may lead to a better understanding of the coronal heating problem. Because scientists study the 3D structure of the Sun in 2D, any line of sight includes both foreground and background material, and thus, the issue of background subtraction arises. By investigating the intensity values in and around an active region, using multiple wavelengths collected from the Atmospheric Imaging Assembly (AIA) on the Solar Dynamics Observatory (SDO) over an eight-hour period, this project aims to characterize the background as smooth or structured. Different methods were employed to measure the true coronal background and create minimum intensity images. These were then investigated for the presence of structure. The background images created were found to contain long-lived structures, including coronal loops, that were still present in all of the wavelengths: 193, 171, 131, and 211 Angstroms. The intensity profiles across the active region indicate that the background is much more structured than previously thought.
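    The minimum-intensity background images described above amount to a per-pixel minimum over the frames of a time series, so transient brightenings are rejected and only persistent emission remains. A minimal sketch (the array values are hypothetical):

```python
# Sketch of a minimum-intensity background image: for each pixel, take the
# minimum value over all frames of the series, keeping only emission that is
# present throughout the observation window.

def min_intensity_image(frames):
    # frames: list of 2D lists with identical shapes
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[min(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]

frames = [
    [[5, 9], [7, 4]],
    [[6, 3], [7, 8]],
    [[5, 4], [9, 4]],
]
background = min_intensity_image(frames)
```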

  5. Appearance of the canine meninges in subtraction magnetic resonance images.

    PubMed

    Lamb, Christopher R; Lam, Richard; Keenihan, Erin K; Frean, Stephen

    2014-01-01

    The canine meninges are not visible as discrete structures in noncontrast magnetic resonance (MR) images, and are incompletely visualized in T1-weighted, postgadolinium images, reportedly appearing as short, thin curvilinear segments with minimal enhancement. Subtraction imaging facilitates detection of enhancement of tissues, hence may increase the conspicuity of meninges. The aim of the present study was to describe qualitatively the appearance of canine meninges in subtraction MR images obtained using a dynamic technique. Images were reviewed of 10 consecutive dogs that had dynamic pre- and postgadolinium T1W imaging of the brain that was interpreted as normal, and had normal cerebrospinal fluid. Image-anatomic correlation was facilitated by dissection and histologic examination of two canine cadavers. Meningeal enhancement was relatively inconspicuous in postgadolinium T1-weighted images, but was clearly visible in subtraction images of all dogs. Enhancement was visible as faint, small-rounded foci compatible with vessels seen end on within the sulci, a series of larger rounded foci compatible with vessels of variable caliber on the dorsal aspect of the cerebral cortex, and a continuous thin zone of moderate enhancement around the brain. Superimposition of color-encoded subtraction images on pregadolinium T1- and T2-weighted images facilitated localization of the origin of enhancement, which appeared to be predominantly dural, with relatively few leptomeningeal structures visible. Dynamic subtraction MR imaging should be considered for inclusion in clinical brain MR protocols because of the possibility that its use may increase sensitivity for lesions affecting the meninges. © 2014 American College of Veterinary Radiology.

  6. Number Words in Young Children's Conceptual and Procedural Knowledge of Addition, Subtraction and Inversion

    ERIC Educational Resources Information Center

    Canobi, Katherine H.; Bethune, Narelle E.

    2008-01-01

    Three studies addressed children's arithmetic. First, 50 3- to 5-year-olds judged physical demonstrations of addition, subtraction and inversion, with and without number words. Second, 20 3- to 4-year-olds made equivalence judgments of additions and subtractions. Third, 60 4- to 6-year-olds solved addition, subtraction and inversion problems that…

  7. Removing Background Noise with Phased Array Signal Processing

    NASA Technical Reports Server (NTRS)

    Podboy, Gary; Stephens, David

    2015-01-01

    Preliminary results are presented from a test conducted to determine how well microphone phased array processing software could pull an acoustic signal out of background noise. The array consisted of 24 microphones in an aerodynamic fairing designed to be mounted in-flow. The processing was conducted using Functional Beamforming software developed by Optinav combined with cross-spectral matrix subtraction. The test was conducted in the free-jet of the Nozzle Acoustic Test Rig at NASA GRC. The background noise was produced by the interaction of the free-jet flow with the solid surfaces in the flow. The acoustic signals were produced by acoustic drivers. The results show that the phased array processing was able to pull the acoustic signal out of the background noise provided the signal was no more than 20 dB below the background noise level measured using a conventional single microphone equipped with an aerodynamic forebody.
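    The cross-spectral matrix (CSM) subtraction used in the processing can be sketched as an element-wise subtraction of a background-only CSM from the total CSM before any beamforming. The two-microphone snapshots below are hypothetical:

```python
# Hedged sketch of cross-spectral-matrix subtraction: a CSM measured with
# background noise alone is subtracted element-wise from the CSM measured
# with signal plus background.

def csm(snapshots):
    # snapshots: list of complex pressure vectors (one entry per microphone);
    # returns the snapshot-averaged outer-product matrix x x^H.
    m = len(snapshots[0])
    acc = [[0j] * m for _ in range(m)]
    for x in snapshots:
        for i in range(m):
            for j in range(m):
                acc[i][j] += x[i] * x[j].conjugate()
    n = len(snapshots)
    return [[v / n for v in row] for row in acc]

def subtract_csm(total, background):
    return [[t - b for t, b in zip(rt, rb)] for rt, rb in zip(total, background)]

bg = csm([[1 + 0j, 1 + 0j]])    # background-only snapshot
tot = csm([[2 + 0j, 2 + 0j]])   # background plus signal
clean = subtract_csm(tot, bg)
```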

  8. Relearning To Teach Arithmetic Addition and Subtraction: A Teacher's Study Guide.

    ERIC Educational Resources Information Center

    Russell, Susan Jo

    This package features videotapes and a study guide designed to help teachers revisit the operations of addition and subtraction and consider how students can develop meaningful approaches to these operations. The study guide's sessions cover addition, subtraction, the teacher's role, and goals for students and teachers. The readings in…

  9. The Use of Procedural Knowledge in Simple Addition and Subtraction Problems

    ERIC Educational Resources Information Center

    Fayol, Michel; Thevenot, Catherine

    2012-01-01

    In a first experiment, adults were asked to solve one-digit additions, subtractions and multiplications. When the sign appeared 150 ms before the operands, addition and subtraction were solved faster than when the sign and the operands appeared simultaneously on screen. This priming effect was not observed for multiplication problems. A second…

  10. Temporal subtraction of chest radiographs compensating pose differences

    NASA Astrophysics Data System (ADS)

    von Berg, Jens; Dworzak, Jalda; Klinder, Tobias; Manke, Dirk; Kreth, Adrian; Lamecker, Hans; Zachow, Stefan; Lorenz, Cristian

    2011-03-01

    Temporal subtraction techniques using 2D image registration improve the detectability of interval changes in chest radiographs. Although such methods have been known for some time, they are not widely used in radiologic practice. The reason is the occurrence of strong pose differences between two acquisitions separated by months to years, which arise in a considerable number of cases. They cannot be compensated by available image registration methods and thus mask interval changes, rendering them undetectable. In this paper a method is proposed to estimate a 3D pose difference by adapting a 3D rib cage model to both projections. The difference between the two poses is then compensated for, producing a subtraction image with virtually no change in pose. The method assumes that no 3D image data are available for the patient. The accuracy of pose estimation is validated with chest phantom images acquired under controlled geometric conditions. A subtle interval change, simulated by a piece of plastic foam attached to the phantom, becomes visible in subtraction images generated with this technique even at strong angular pose differences such as an anterior-posterior inclination of 13 degrees.

  11. Power corrections in the N -jettiness subtraction scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boughezal, Radja; Liu, Xiaohui; Petriello, Frank

    We discuss the leading-logarithmic power corrections in the N-jettiness subtraction scheme for higher-order perturbative QCD calculations. We compute the next-to-leading order power corrections for an arbitrary N-jet process, and we explicitly calculate the power correction through next-to-next-to-leading order for color-singlet production for both $$q\\bar{q}$$ and gg initiated processes. Our results are compact and simple to implement numerically. Including the leading power correction in the N-jettiness subtraction scheme substantially improves its numerical efficiency. Finally, we discuss what features of our techniques extend to processes containing final-state jets.

  12. Power corrections in the N -jettiness subtraction scheme

    DOE PAGES

    Boughezal, Radja; Liu, Xiaohui; Petriello, Frank

    2017-03-30

    We discuss the leading-logarithmic power corrections in the N-jettiness subtraction scheme for higher-order perturbative QCD calculations. We compute the next-to-leading order power corrections for an arbitrary N-jet process, and we explicitly calculate the power correction through next-to-next-to-leading order for color-singlet production for both $$q\\bar{q}$$ and gg initiated processes. Our results are compact and simple to implement numerically. Including the leading power correction in the N-jettiness subtraction scheme substantially improves its numerical efficiency. Finally, we discuss what features of our techniques extend to processes containing final-state jets.

  13. The functional architectures of addition and subtraction: Network discovery using fMRI and DCM.

    PubMed

    Yang, Yang; Zhong, Ning; Friston, Karl; Imamura, Kazuyuki; Lu, Shengfu; Li, Mi; Zhou, Haiyan; Wang, Haiyuan; Li, Kuncheng; Hu, Bin

    2017-06-01

    The neuronal mechanisms underlying arithmetic calculations are not well understood but the differences between mental addition and subtraction could be particularly revealing. Using fMRI and dynamic causal modeling (DCM), this study aimed to identify the distinct neuronal architectures engaged by the cognitive processes of simple addition and subtraction. Our results revealed significantly greater activation during subtraction in regions along the dorsal pathway, including the left inferior frontal gyrus (IFG), middle portion of dorsolateral prefrontal cortex (mDLPFC), and supplementary motor area (SMA), compared with addition. Subsequent analysis of the underlying changes in connectivity - with DCM - revealed a common circuit processing basic (numeric) attributes and the retrieval of arithmetic facts. However, DCM showed that addition was more likely to engage (numeric) retrieval-based circuits in the left hemisphere, while subtraction tended to draw on (magnitude) processing in bilateral parietal cortex, especially the right intraparietal sulcus (IPS). Our findings endorse previous hypotheses about the differences in strategic implementation, dominant hemisphere, and the neuronal circuits underlying addition and subtraction. Moreover, for simple arithmetic, our connectivity results suggest that subtraction calls on more complex processing than addition: auxiliary phonological, visual, and motor processes, for representing numbers, were engaged by subtraction, relative to addition. Hum Brain Mapp 38:3210-3225, 2017. © 2017 Wiley Periodicals, Inc.

  14. P300: Waves Identification with and without Subtraction of Traces

    PubMed Central

    Romero, Ana Carla Leite; Reis, Ana Cláudia Mirândola Barbosa; Oliveira, Anna Caroline Silva de; Oliveira Simões, Humberto de; Oliveira Junqueira, Cinthia Amorim de; Frizzo, Ana Cláudia Figueiredo

    2017-01-01

    Introduction  The P300 test requires well-defined and unique criteria, in addition to training for the examiners, to ensure uniform analysis across studies and to avoid variations and errors in the interpretation of measurement results. Objectives  The objective of this study is to verify whether there are differences in P300 identification with and without subtraction of the traces of standard and nonstandard stimuli. Method  We conducted this study in collaboration with two research electrophysiology laboratories. From Laboratory 1, we selected 40 tests of subjects aged 7–44 years; from Laboratory 2, we selected 83 tests of subjects aged 18–44 years. We first performed the identification with the nonstandard stimuli; then, we subtracted the nonstandard stimuli from the standard stimuli. The examiners identified the waves, performing a descriptive and comparative analysis of traces with and without subtraction. Results  Comparative analysis of the traces with and without subtraction showed no significant difference for the right ears (p = 0.13 and 0.28 for differences between latency and amplitude measurements) or the left ears (p = 0.15 and 0.09) from Laboratory 1. For Laboratory 2, investigating both ears, the results did not identify significant differences (p = 0.098 and 0.28 for differences between latency and amplitude measurements). Conclusion  No difference was verified between traces with and without subtraction. We suggest identifying this potential using the nonstandard stimuli. PMID:29018497
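    The with-subtraction analysis amounts to forming a point-wise difference of two averaged traces before identifying the wave. A minimal sketch follows; the rare-minus-frequent sign convention and the toy epochs are assumptions, not taken from the study:

```python
# Hedged sketch of an ERP difference wave: average the epochs of each stimulus
# class, subtract the traces point by point, and locate the peak on the
# resulting difference trace.

def average_trace(epochs):
    n = len(epochs)
    return [sum(vals) / n for vals in zip(*epochs)]

def difference_wave(rare_epochs, frequent_epochs):
    rare = average_trace(rare_epochs)
    freq = average_trace(frequent_epochs)
    return [r - f for r, f in zip(rare, freq)]

rare = [[0.0, 2.0, 6.0, 2.0], [0.0, 2.0, 8.0, 2.0]]
freq = [[0.0, 2.0, 2.0, 2.0], [0.0, 2.0, 2.0, 2.0]]
diff = difference_wave(rare, freq)
peak_sample = max(range(len(diff)), key=diff.__getitem__)
```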

  15. Background subtraction for fluorescence EXAFS data of a very dilute dopant Z in Z + 1 host.

    PubMed

    Medling, Scott; Bridges, Frank

    2011-07-01

    When conducting EXAFS at the Cu K-edge for ZnS:Cu with very low Cu concentration (<0.04% Cu), a large background was present that increased with energy. This background arises from a Zn X-ray Raman peak, which moves through the Cu fluorescence window, plus the tail of the Zn fluorescence peak. This large background distorts the EXAFS and must be removed separately before reducing the data. A simple means to remove this background is described.
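    A background removal of the kind described can be sketched generically (this is not the authors' exact procedure, which is tailored to the Zn Raman and fluorescence contributions): fit a smooth curve to signal-free regions of the spectrum and subtract it:

```python
# Hedged generic sketch: a smoothly rising background under a fluorescence
# signal is estimated by a straight-line least-squares fit to points known to
# be signal-free, then subtracted from the full spectrum.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def subtract_background(energies, counts, bg_points):
    xs = [energies[i] for i in bg_points]   # indices assumed signal-free
    ys = [counts[i] for i in bg_points]
    a, b = fit_line(xs, ys)
    return [c - (a * e + b) for e, c in zip(energies, counts)]

energies = [0, 1, 2, 3, 4]
counts = [1, 2, 3, 9, 5]        # linear background plus a peak at index 3
net = subtract_background(energies, counts, bg_points=[0, 1, 2, 4])
```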

  16. Improvements in floating point addition/subtraction operations

    DOEpatents

    Farmwald, P.M.

    1984-02-24

    Apparatus is described for decreasing the latency time associated with floating point addition and subtraction in a computer, using a novel bifurcated, pre-normalization/post-normalization approach that distinguishes between differences of floating point exponents.
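    The steps that such hardware accelerates, aligning exponents before adding significands and normalizing afterwards, can be sketched in software as follows (the patent's bifurcated pre-/post-normalization datapath itself is not modeled):

```python
# Hedged sketch of floating-point addition on values m * 2**e with integer
# significands: the significand of the smaller-exponent operand is shifted
# right until exponents match (alignment, which may discard low bits), the
# significands are added, and the result is normalized.

def fp_add(m1, e1, m2, e2):
    if e1 < e2:
        m1, e1, m2, e2 = m2, e2, m1, e1
    m2 >>= (e1 - e2)            # align: shift smaller-exponent significand
    m, e = m1 + m2, e1
    while m and m % 2 == 0:     # post-normalize (keep significand odd here)
        m, e = m >> 1, e + 1
    return m, e

result = fp_add(3, 2, 8, 0)     # 3*2**2 + 8*2**0 = 20 = 5 * 2**2
```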

  17. Digging Deeper: Observing Primordial Gravitational Waves below the Binary-Black-Hole-Produced Stochastic Background.

    PubMed

    Regimbau, T; Evans, M; Christensen, N; Katsavounidis, E; Sathyaprakash, B; Vitale, S

    2017-04-14

    The merger rate of black hole binaries inferred from the detections in the first Advanced LIGO science run implies that a stochastic background produced by a cosmological population of mergers will likely mask the primordial gravitational wave background. Here we demonstrate that the next generation of ground-based detectors, such as the Einstein Telescope and Cosmic Explorer, will be able to observe binary black hole mergers throughout the Universe with sufficient efficiency that the confusion background can potentially be subtracted to observe the primordial background at the level of Ω_{GW}≃10^{-13} after 5 years of observation.

  18. Reduction of background clutter in structured lighting systems

    DOEpatents

    Carlson, Jeffrey J.; Giles, Michael K.; Padilla, Denise D.; Davidson, Jr., Patrick A.; Novick, David K.; Wilson, Christopher W.

    2010-06-22

    Methods for segmenting the reflected light of an illumination source having a characteristic wavelength from background illumination (i.e. clutter) in structured lighting systems can comprise pulsing the light source used to illuminate a scene, pulsing the light source synchronously with the opening of a shutter in an imaging device, estimating the contribution of background clutter by interpolation of images of the scene collected at multiple spectral bands not including the characteristic wavelength and subtracting the estimated background contribution from an image of the scene comprising the wavelength of the light source and, placing a polarizing filter between the imaging device and the scene, where the illumination source can be polarized in the same orientation as the polarizing filter. Apparatus for segmenting the light of an illumination source from background illumination can comprise an illuminator, an image receiver for receiving images of multiple spectral bands, a processor for calculations and interpolations, and a polarizing filter.
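    The interpolation step in the claims can be sketched as follows, assuming two flanking spectral bands that contain no light at the characteristic wavelength; the wavelengths and pixel values below are hypothetical:

```python
# Hedged sketch: background clutter at the illuminator's characteristic
# wavelength is estimated by linear interpolation between two flanking
# spectral bands (which contain no illuminator light) and subtracted from
# the band containing the illuminator's line.

def interp_background(img_lo, img_hi, w_lo, w_hi, w_laser):
    t = (w_laser - w_lo) / (w_hi - w_lo)
    return [[(1 - t) * a + t * b for a, b in zip(ra, rb)]
            for ra, rb in zip(img_lo, img_hi)]

def segment_laser(img_laser, img_lo, img_hi, w_lo, w_hi, w_laser):
    bg = interp_background(img_lo, img_hi, w_lo, w_hi, w_laser)
    return [[p - q for p, q in zip(rp, rq)] for rp, rq in zip(img_laser, bg)]

# Hypothetical wavelengths (nm) and one-row images.
lo = [[10.0, 20.0]]
hi = [[30.0, 40.0]]
laser_band = [[25.0, 130.0]]    # clutter only, and clutter plus laser return
fg = segment_laser(laser_band, lo, hi, 600.0, 700.0, 650.0)
```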

  19. Cloning the Gravity and Shear Stress Related Genes from MG-63 Cells by Subtracting Hybridization

    NASA Astrophysics Data System (ADS)

    Zhang, Shu; Dai, Zhong-quan; Wang, Bing; Cao, Xin-sheng; Li, Ying-hui; Sun, Xi-qing

    2008-06-01

    Background The purpose of the present study was to clone gravity- and shear-stress-related genes from osteoblast-like human osteosarcoma MG-63 cells by subtractive hybridization. Method MG-63 cells were divided into two groups (a 1G group and a simulated microgravity group). After being cultured for 60 h in the two different gravitational environments, both groups of MG-63 cells were treated with 1.5 Pa fluid shear stress (FSS) for 60 min. Total RNA was isolated from the cells, and the gravity and shear stress related genes were cloned by subtractive hybridization. Result 200 clones were obtained. Thirty positive clones were selected by PCR, using primers based on the vector, and sequenced. The obtained sequences were analyzed by BLAST. Changes in 17 sequences were confirmed by RT-PCR; these genes are related to cell proliferation, cell differentiation, protein synthesis, signal transduction and apoptosis. Five unknown genes related to gravity and shear stress were found. Conclusion Our results indicate that simulated microgravity may change the activities of MG-63 cells by inducing functional alterations of specific genes.

  20. Continuous-variable measurement-device-independent quantum key distribution with photon subtraction

    NASA Astrophysics Data System (ADS)

    Ma, Hong-Xin; Huang, Peng; Bai, Dong-Yun; Wang, Shi-Yu; Bao, Wan-Su; Zeng, Gui-Hua

    2018-04-01

    It has been found that non-Gaussian operations can be applied to increase and distill entanglement between Gaussian entangled states. We show the successful use of a non-Gaussian operation, in particular the photon subtraction operation, in the continuous-variable measurement-device-independent quantum key distribution (CV-MDI-QKD) protocol. The proposed method can be implemented based on existing technologies. Security analysis shows that the photon subtraction operation can remarkably increase the maximal transmission distance of the CV-MDI-QKD protocol, which precisely makes up for the shortcoming of the original CV-MDI-QKD protocol, and the one-photon subtraction operation has the best performance. Moreover, the proposed protocol provides a feasible method for the experimental implementation of the CV-MDI-QKD protocol.

  1. Gas leak detection in infrared video with background modeling

    NASA Astrophysics Data System (ADS)

    Zeng, Xiaoxia; Huang, Likun

    2018-03-01

    Background modeling plays an important role in the task of gas detection based on infrared video. The VIBE algorithm has been a widely used background modeling algorithm in recent years. However, its processing speed sometimes cannot meet the requirements of real-time detection applications. Therefore, based on the traditional VIBE algorithm, we propose a fast foreground model and optimize the results by combining the connected-domain algorithm and the nine-spaces algorithm in the subsequent processing steps. Experiments show the effectiveness of the proposed method.
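    A minimal single-pixel model in the spirit of ViBe (not the authors' optimized variant) can be sketched as follows; the parameter values are the commonly cited defaults, assumed here rather than taken from the paper:

```python
# Hedged single-pixel sketch of a ViBe-style background model: each pixel
# keeps N past samples; a new value is background if it lies within radius R
# of at least MIN_MATCHES samples, in which case one randomly chosen sample
# is replaced (conservative in-place update).
import random

N, R, MIN_MATCHES = 20, 20, 2

def classify(samples, value):
    matches = sum(1 for s in samples if abs(s - value) < R)
    return "background" if matches >= MIN_MATCHES else "foreground"

def update(samples, value, rng):
    samples[rng.randrange(len(samples))] = value

rng = random.Random(0)
samples = [100] * N                 # model seeded with a gray background
label_bg = classify(samples, 105)   # close to the stored samples
label_fg = classify(samples, 200)   # far from every stored sample
if label_bg == "background":
    update(samples, 105, rng)
```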

  2. [The background sky subtraction around the [OIII] line in LAMOST QSO spectra].

    PubMed

    Shi, Zhi-Xin; Comte, Georges; Luo, A-Li; Tu, Liang-Ping; Zhao, Yong-Heng; Wu, Fu-Chao

    2014-11-01

    At present, most sky-subtraction methods focus on the full spectrum rather than a particular location, especially the background sky around the [OIII] line, which is very important for low-redshift quasars. A new method to precisely subtract sky lines in a local region is proposed in the present paper, which solves the problem that the width of the Hβ-[OIII] line is affected by the background sky subtraction. The experimental results show that, for quasars at different redshifts, the spectral quality is significantly improved using our method relative to the original batch program of LAMOST. It provides a complementary solution for the small fraction of LAMOST spectra that are not well handled by the LAMOST 2D pipeline. Meanwhile, this method has been used in searching for candidates of double-peaked Active Galactic Nuclei.

  3. Combined subtraction hybridization and polymerase chain reaction amplification procedure for isolation of strain-specific Rhizobium DNA sequences.

    PubMed Central

    Bjourson, A J; Stone, C E; Cooper, J E

    1992-01-01

    A novel subtraction hybridization procedure, incorporating a combination of four separation strategies, was developed to isolate unique DNA sequences from a strain of Rhizobium leguminosarum bv. trifolii. Sau3A-digested DNA from this strain, i.e., the probe strain, was ligated to a linker and hybridized in solution with an excess of pooled subtracter DNA from seven other strains of the same biovar which had been restricted, ligated to a different, biotinylated, subtracter-specific linker, and amplified by polymerase chain reaction to incorporate dUTP. Subtracter DNA and subtracter-probe hybrids were removed by phenol-chloroform extraction of a streptavidin-biotin-DNA complex. NENSORB chromatography of the sequences remaining in the aqueous layer captured biotinylated subtracter DNA which may have escaped removal by phenol-chloroform treatment. Any traces of contaminating subtracter DNA were removed by digestion with uracil DNA glycosylase. Finally, remaining sequences were amplified by polymerase chain reaction with a probe strain-specific primer, labelled with 32P, and tested for specificity in dot blot hybridizations against total genomic target DNA from each strain in the subtracter pool. Two rounds of subtraction-amplification were sufficient to remove cross-hybridizing sequences and to give a probe which hybridized only with homologous target DNA. The method is applicable to the isolation of DNA and RNA sequences from both procaryotic and eucaryotic cells. PMID:1637166

  4. Nagy-Soper subtraction scheme for multiparton final states

    NASA Astrophysics Data System (ADS)

    Chung, Cheng-Han; Robens, Tania

    2013-04-01

    In this work, we present the extension of an alternative subtraction scheme for next-to-leading order QCD calculations to the case of an arbitrary number of massless final state partons. The scheme is based on the splitting kernels of an improved parton shower and comes with a reduced number of final state momentum mappings. While a previous publication including the setup of the scheme has been restricted to cases with maximally two massless partons in the final state, we here provide the final state real emission and integrated subtraction terms for processes with any number of massless partons. We apply our scheme to three jet production at lepton colliders at next-to-leading order and present results for the differential C parameter distribution.

  5. Complete Nagy-Soper subtraction for next-to-leading order calculations in QCD

    NASA Astrophysics Data System (ADS)

    Bevilacqua, G.; Czakon, M.; Kubocz, M.; Worek, M.

    2013-10-01

    We extend the Helac-Dipoles package with the implementation of a new subtraction formalism, first introduced by Nagy and Soper in the formulation of an improved parton shower. We discuss a systematic, semi-numerical approach for the evaluation of the integrated subtraction terms for both massless and massive partons, which provides the missing ingredient for a complete implementation. In consequence, the new scheme can now be used as part of a complete NLO QCD calculation for processes with arbitrary parton masses and multiplicities. We assess its overall performance through a detailed comparison with results based on Catani-Seymour subtraction. The importance of random polarization and color sampling of the external partons is also examined.

  6. Redefining the Whole: Common Errors in Elementary Preservice Teachers' Self-Authored Word Problems for Fraction Subtraction

    ERIC Educational Resources Information Center

    Dixon, Juli K.; Andreasen, Janet B.; Avila, Cheryl L.; Bawatneh, Zyad; Deichert, Deana L.; Howse, Tashana D.; Turner, Mercedes Sotillo

    2014-01-01

    A goal of this study was to examine elementary preservice teachers' (PSTs) ability to contextualize and decontextualize fraction subtraction by asking them to write word problems to represent fraction subtraction expressions and to choose prewritten word problems to support given fraction subtraction expressions. Three themes emerged from the…

  7. Computer-Aided Diagnostic (CAD) Scheme by Use of Contralateral Subtraction Technique

    NASA Astrophysics Data System (ADS)

    Nagashima, Hiroyuki; Harakawa, Tetsumi

    We developed a computer-aided diagnostic (CAD) scheme for detecting subtle image findings of acute cerebral infarction in brain computed tomography (CT) using a contralateral subtraction technique. In our computerized scheme, the lateral inclination of the image is first corrected automatically by rotating and shifting. A contralateral subtraction image is then derived by subtracting the reversed image from the original image. Initial candidates for acute cerebral infarction are identified using multiple-thresholding and image filtering techniques. As the first step in removing false-positive candidates, fourteen image features are extracted for each initial candidate, and intermediate candidates are selected by applying a rule-based test to these features. In the second step, five image features are extracted using the overlap between each intermediate candidate in the slice of interest and the upper/lower slice images. Finally, acute cerebral infarction candidates are detected by applying a rule-based test to the five image features. The sensitivity of detection for 74 training cases was 97.4% with 3.7 false positives per image, and the performance for 44 testing cases was similar to that for the training cases. Our CAD scheme using the contralateral subtraction technique can reveal suspected image findings of acute cerebral infarction in CT images.
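    The core contralateral subtraction step, mirroring the corrected image about its vertical midline and subtracting, can be sketched as follows (the slice values are hypothetical, and the inclination correction and candidate filtering are omitted):

```python
# Hedged sketch of contralateral subtraction: the image is mirrored about its
# vertical midline and the mirrored copy is subtracted from the original, so
# left-right asymmetries (lesion candidates) stand out in the difference.

def contralateral_subtraction(img):
    mirrored = [row[::-1] for row in img]
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(img, mirrored)]

# Hypothetical slice: one asymmetric bright pixel on the left side.
img = [
    [9, 0, 0, 0],
    [0, 0, 0, 0],
]
diff = contralateral_subtraction(img)
```

    Symmetric structures cancel, while the asymmetric pixel appears as a positive/negative pair.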

  8. Microarray image analysis: background estimation using quantile and morphological filters.

    PubMed

    Bengtsson, Anders; Bengtsson, Henrik

    2006-02-28

    In a microarray experiment the difference in expression between genes on the same slide can be 10^3-fold or more. At low expression, even a small error in the estimate will have great influence on the final test and reference ratios. In addition to the true spot intensity, the scanned signal consists of different kinds of noise referred to as background. In order to assess the true spot intensity, the background must be subtracted. The standard approach to estimating background intensities is to assume they are equal to the intensity levels between spots. In the literature, morphological opening is suggested to be one of the best methods for estimating background this way. This paper examines fundamental properties of rank and quantile filters, which include morphological filters at the extremes, with focus on their ability to estimate between-spot intensity levels. The bias and variance of these filter estimates are driven by the number of background pixels used and their distributions. A new rank-filter algorithm is implemented and compared to methods available in Spot by CSIRO and GenePix Pro by Axon Instruments. Spot's morphological opening has a mean bias between -47 and -248, compared to a bias between 2 and -2 for the rank filter, and the variability of the morphological opening estimate is 3 times higher than for the rank filter. The mean bias of Spot's second method, morph.close.open, is between -5 and -16 and the variability is approximately the same as for morphological opening. The variability of GenePix Pro's region-based estimate is more than ten times higher than the variability of the rank-filter estimate, with slightly more bias. The large variability is because the size of the background window changes with spot size. To overcome this, a non-adaptive region-based method is implemented. Its bias and variability are comparable to those of the rank filter. The performance of more advanced rank filters is equal to that of the best region-based methods.
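    The quantile (rank) filtering discussed above can be sketched in 1D: each output sample is a low quantile of a sliding window, which tracks the between-spot background floor while ignoring the bright spots. The window size, quantile, and data below are illustrative assumptions:

```python
# Hedged sketch of a quantile (rank) filter for background estimation: each
# output value is the q-quantile of the input values inside a sliding window,
# so bright spots are rejected and the local background level is retained.

def quantile_filter(values, window, q):
    r = window // 2
    out = []
    for i in range(len(values)):
        neigh = sorted(values[max(0, i - r):i + r + 1])
        out.append(neigh[int(q * (len(neigh) - 1))])
    return out

scanline = [3, 3, 50, 3, 3, 60, 3]      # background ~3 with two bright spots
background = quantile_filter(scanline, window=5, q=0.25)
net = [v - b for v, b in zip(scanline, background)]
```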

  9. [Construction of forward and reverse subtracted cDNA libraries between muscle tissue of Meishan and Landrace pigs].

    PubMed

    Xu, De-Quan; Zhang, Yi-Bing; Xiong, Yuan-Zhu; Gui, Jian-Fang; Jiang, Si-Wen; Su, Yu-Hong

    2003-07-01

    Using the suppression subtractive hybridization (SSH) technique, forward and reverse subtracted cDNA libraries were constructed between Longissimus muscles from Meishan and Landrace pigs. A housekeeping gene, G3PDH, was used to estimate the efficiency of the subtraction. In the two cDNA libraries, G3PDH was subtracted very efficiently, at approximately 2^10- and 2^5-fold respectively, indicating that differentially expressed genes were enriched to a similar extent and that the two subtractive cDNA libraries were very successful. A total of 709 and 673 positive clones were isolated from the forward and reverse subtracted cDNA libraries, respectively. PCR analysis showed that most plasmids in the clones contained 150-750 bp inserts. The construction of subtractive cDNA libraries between muscle tissue from different pig breeds lays a solid foundation for isolating and identifying the genes determining muscle growth and meat quality, which will be important for understanding the mechanisms of muscle growth and the determination of meat quality, and for the practice of molecular breeding.

  10. Toward particle-level filtering of individual collision events at the Large Hadron Collider and beyond

    NASA Astrophysics Data System (ADS)

    Colecchia, Federico

    2014-03-01

    Low-energy strong interactions are a major source of background at hadron colliders, and methods of subtracting the associated energy flow are well established in the field. Traditional approaches treat the contamination as diffuse, and estimate background energy levels either by averaging over large data sets or by restricting to given kinematic regions inside individual collision events. On the other hand, more recent techniques take into account the discrete nature of background, most notably by exploiting the presence of substructure inside hard jets, i.e. inside collections of particles originating from scattered hard quarks and gluons. However, none of the existing methods subtract background at the level of individual particles inside events. We illustrate the use of an algorithm that will allow particle-by-particle background discrimination at the Large Hadron Collider, and we envisage this as the basis for a novel event filtering procedure upstream of the official reconstruction chains. Our hope is that this new technique will improve physics analysis when used in combination with state-of-the-art algorithms in high-luminosity hadron collider environments.

  11. Accessing the diffracted wavefield by coherent subtraction

    NASA Astrophysics Data System (ADS)

    Schwarz, Benjamin; Gajewski, Dirk

    2017-10-01

    Diffractions have unique properties which are still rarely exploited in common practice. Aside from containing subwavelength information on the scattering geometry or indicating small-scale structural complexity, they provide superior illumination compared to reflections. While diffraction occurs arguably on all scales and in most realistic media, the respective signatures typically have low amplitudes and are likely to be masked by more prominent wavefield components. It has been widely observed that automated stacking acts as a directional filter favouring the most coherent arrivals. In contrast to other works, which commonly aim at steering the summation operator towards fainter contributions, we utilize this directional selection to coherently approximate the most dominant arrivals and subtract them from the data. Supported by additional filter functions which can be derived from wave front attributes gained during the stacking procedure, this strategy allows for a fully data-driven recovery of faint diffractions and makes them accessible for further processing. A complex single-channel field data example recorded in the Aegean sea near Santorini illustrates that the diffracted background wavefield is surprisingly rich and despite the absence of a high channel count can still be detected and characterized, suggesting a variety of applications in industry and academia.
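    The stack-and-subtract idea can be sketched on toy traces: a coherent stack approximates the dominant (most coherent) arrival, and subtracting it from each trace leaves the fainter, less coherent energy. The alignment, wavefront attributes, and filter functions of the actual method are omitted here:

```python
# Hedged sketch of coherent subtraction: the dominant arrival is approximated
# by stacking aligned traces, and the stack is subtracted from each trace so
# that fainter arrivals (e.g. diffractions) remain in the residual.

def coherent_stack(traces):
    n = len(traces)
    return [sum(s) / n for s in zip(*traces)]

def subtract_stack(traces):
    stack = coherent_stack(traces)
    return [[v - s for v, s in zip(tr, stack)] for tr in traces]

# Coherent reflection (same on every trace) plus a weak event on trace 1 only.
traces = [
    [0.0, 4.0, 0.0, 0.0],
    [0.0, 4.0, 0.0, 1.0],
    [0.0, 4.0, 0.0, 0.0],
    [0.0, 4.0, 0.0, 0.0],
]
residual = subtract_stack(traces)
```

    The fully coherent sample cancels exactly, while most of the weak, incoherent event survives on its trace.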

  12. [Construction of fetal mesenchymal stem cell cDNA subtractive library].

    PubMed

    Yang, Li; Wang, Dong-Mei; Li, Liang; Bai, Ci-Xian; Cao, Hua; Li, Ting-Yu; Pei, Xue-Tao

    2002-04-01

To identify differentially expressed genes between fetal mesenchymal stem cells (MSC) and adult MSC, especially genes specifically expressed in fetal MSC, a cDNA subtractive library of fetal MSC was constructed using the suppression subtractive hybridization (SSH) technique. First, total RNA was isolated from fetal and adult MSC. Using the SMART PCR synthesis method, single-strand and double-strand cDNAs were synthesized. After Rsa I digestion, fetal MSC cDNAs were divided into two groups and ligated to adaptor 1 and adaptor 2, respectively. The amplified library contained 890 clones, of which PCR analysis showed 768 to be positive, a positive rate of 86.3%. The size of the inserted fragments in these positive clones was between 0.2 and 1 kb, with an average of 400-600 bp. SSH is a convenient and effective method for screening differentially expressed genes. The constructed cDNA subtractive library of fetal MSC lays a solid foundation for screening and cloning new, function-related genes specific to fetal MSC.

  13. Computed tomography lung iodine contrast mapping by image registration and subtraction

    NASA Astrophysics Data System (ADS)

    Goatman, Keith; Plakas, Costas; Schuijf, Joanne; Beveridge, Erin; Prokop, Mathias

    2014-03-01

Pulmonary embolism (PE) is a relatively common and potentially life-threatening disease, affecting around 600,000 people annually in the United States alone. Prompt treatment using anticoagulants is effective and saves lives, but unnecessary treatment risks life-threatening haemorrhage. The specificity of any diagnostic test for PE is therefore as important as its sensitivity. Computed tomography (CT) angiography is routinely used to diagnose PE. However, there are concerns it may over-report the condition. Additional information about the severity of an occlusion can be obtained from an iodine contrast map that represents tissue perfusion. Such maps tend to be derived from dual-energy CT acquisitions. However, they may also be calculated by subtracting pre- and post-contrast CT scans. Indeed, there are technical advantages to such a subtraction approach, including a better contrast-to-noise ratio for the same radiation dose, and bone suppression. However, subtraction relies on accurate image registration. This paper presents a framework for the automatic alignment of pre- and post-contrast lung volumes prior to subtraction. The registration accuracy is evaluated for seven subjects for whom pre- and post-contrast helical CT scans were acquired using a Toshiba Aquilion ONE scanner. One hundred corresponding points were annotated on the pre- and post-contrast scans, distributed throughout the lung volume. Surface-to-surface error distances were also calculated from lung segmentations. Prior to registration the mean Euclidean landmark alignment error was 2.57 mm (range 1.43-4.34 mm), and following registration the mean error was 0.54 mm (range 0.44-0.64 mm). The mean surface error distance was 1.89 mm before registration and 0.47 mm after registration. There was a commensurate reduction in visual artefacts following registration. In conclusion, a framework for pre- and post-contrast lung registration has been developed that is sufficiently accurate for lung subtraction.
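The landmark-based accuracy figures above come down to a mean Euclidean distance between corresponding point sets. A minimal sketch, with made-up coordinates rather than the study's annotations:

```python
import numpy as np

def mean_landmark_error(points_a, points_b):
    """Mean Euclidean distance between corresponding landmark sets (N x 3 arrays, mm)."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    return float(np.mean(np.linalg.norm(a - b, axis=1)))

# Toy example: landmarks displaced by 3 mm along x before registration,
# with a residual 0.5 mm displacement afterwards.
pre = np.array([[0.0, 0.0, 0.0], [10.0, 5.0, 2.0]])
post_before_reg = pre + np.array([3.0, 0.0, 0.0])
post_after_reg = pre + np.array([0.5, 0.0, 0.0])

print(mean_landmark_error(pre, post_before_reg))  # 3.0
print(mean_landmark_error(pre, post_after_reg))   # 0.5
```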

  14. New Spectral Evidence of an Unaccounted Component of the Near-infrared Extragalactic Background Light from the CIBER

    NASA Astrophysics Data System (ADS)

    Matsuura, Shuji; Arai, Toshiaki; Bock, James J.; Cooray, Asantha; Korngut, Phillip M.; Kim, Min Gyu; Lee, Hyung Mok; Lee, Dae Hee; Levenson, Louis R.; Matsumoto, Toshio; Onishi, Yosuke; Shirahata, Mai; Tsumura, Kohji; Wada, Takehiko; Zemcov, Michael

    2017-04-01

The extragalactic background light (EBL) captures the total integrated emission from stars and galaxies throughout cosmic history. The amplitude of the near-infrared EBL from space-based absolute photometry observations has been controversial and depends strongly on the modeling and subtraction of the zodiacal light (ZL) foreground. We report the first measurement of the diffuse background spectrum at 0.8-1.7 μm from the CIBER experiment. The observations were obtained with an absolute spectrometer over two flights in multiple sky fields to enable the subtraction of ZL, stars, terrestrial emission, and diffuse Galactic light. After subtracting foregrounds and accounting for systematic errors, we find the nominal EBL brightness, assuming the Kelsall ZL model, is 42.7 (+11.9/−10.6) nW m⁻² sr⁻¹ at 1.4 μm. We also analyzed the data using the Wright ZL model, which results in a worse statistical fit to the data and an unphysical EBL, falling below the known background light from galaxies at λ < 1.3 μm. Using a model-independent analysis based on the minimum EBL brightness, we find an EBL brightness of 28.7 (+5.1/−3.3) nW m⁻² sr⁻¹ at 1.4 μm. While the derived EBL amplitude strongly depends on the ZL model, we find that we cannot fit the spectral data to ZL, Galactic emission, and EBL from integrated galactic light from galaxy counts alone. The results require a new diffuse component, such as an additional foreground or an excess EBL with a redder spectrum than that of the ZL.

  15. A general method for baseline-removal in ultrafast electron powder diffraction data using the dual-tree complex wavelet transform.

    PubMed

    René de Cotret, Laurent P; Siwick, Bradley J

    2017-07-01

The general problem of background subtraction in ultrafast electron powder diffraction (UEPD) is presented, with a focus on the diffraction patterns obtained from materials of moderately complex structure, which contain many overlapping peaks and effectively no scattering-vector regions that can be considered exclusively background. We compare the performance of background subtraction algorithms based on discrete and dual-tree complex (DTCWT) wavelet transforms when applied to simulated UEPD data on the M1-R phase transition in VO2 with a time-varying background. We find that the DTCWT approach is capable of extracting intensities that are accurate to better than 2% across the whole range of scattering vector simulated, effectively independent of delay time. A Python package is available.

  16. Efficiency and Flexibility of Indirect Addition in the Domain of Multi-Digit Subtraction

    ERIC Educational Resources Information Center

    Torbeyns, Joke; Ghesquiere, Pol; Verschaffel, Lieven

    2009-01-01

    This article discusses the characteristics of the indirect addition strategy (IA) in the domain of multi-digit subtraction. In two studies, adults' use of IA on three-digit subtractions with a small, medium, or large difference between the integers was analysed using the choice/no-choice method. Results from both studies indicate that adults…

  17. Efficient algorithm for baseline wander and powerline noise removal from ECG signals based on discrete Fourier series.

    PubMed

    Bahaz, Mohamed; Benzid, Redha

    2018-03-01

Electrocardiogram (ECG) signals are often contaminated with artefacts and noises which can lead to incorrect diagnosis when they are visually inspected by cardiologists. In this paper, the well-known discrete Fourier series (DFS) is re-explored and an efficient DFS-based method is proposed to reduce the contribution of both baseline wander (BW) and powerline interference (PLI) noises in ECG records. In the first step, the exact number of low-frequency harmonics contributing to the BW is determined. Next, the baseline drift is estimated by the sum of all associated Fourier sinusoid components. Then, the baseline shift is discarded efficiently by subtracting its approximated version from the original biased ECG signal. Concerning the PLI, subtraction of the contributing harmonics, calculated in the same manner, efficiently reduces this type of noise. In addition to visual quality results, the proposed algorithm shows superior performance in terms of higher signal-to-noise ratio and smaller mean square error when compared with the DCT-based algorithm.
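The harmonic-subtraction idea can be sketched with NumPy's FFT on synthetic data (a generic illustration, not the authors' implementation, which determines the exact number of BW harmonics adaptively):

```python
import numpy as np

def remove_baseline_dfs(signal, n_harmonics):
    """Estimate baseline wander as the sum of the DC term and the first
    n_harmonics low-frequency Fourier components, then subtract it."""
    spectrum = np.fft.rfft(signal)
    baseline_spec = np.zeros_like(spectrum)
    baseline_spec[:n_harmonics + 1] = spectrum[:n_harmonics + 1]  # DC + low harmonics
    baseline = np.fft.irfft(baseline_spec, n=len(signal))
    return signal - baseline, baseline

# Toy ECG-like signal: a 25 Hz oscillation riding on a slow 1-cycle drift.
t = np.linspace(0, 1, 500, endpoint=False)
ecg = np.sin(2 * np.pi * 25 * t)
drift = 0.8 * np.sin(2 * np.pi * 1 * t)
cleaned, est = remove_baseline_dfs(ecg + drift, n_harmonics=2)
print(np.max(np.abs(cleaned - ecg)) < 1e-6)  # True: drift captured by low harmonics
```

Because the drift here completes an integer number of cycles over the window, it falls exactly on a low Fourier bin and is removed completely; real BW is only approximately periodic, which is why the number of contributing harmonics must be estimated.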

  18. Subtraction of cap-trapped full-length cDNA libraries to select rare transcripts.

    PubMed

    Hirozane-Kishikawa, Tomoko; Shiraki, Toshiyuki; Waki, Kazunori; Nakamura, Mari; Arakawa, Takahiro; Kawai, Jun; Fagiolini, Michela; Hensch, Takao K; Hayashizaki, Yoshihide; Carninci, Piero

    2003-09-01

The normalization and subtraction of highly expressed cDNAs from relatively large tissues before cloning dramatically enhanced gene discovery by sequencing for the mouse full-length cDNA encyclopedia, but these methods have not been suitable for limited RNA materials. To normalize and subtract full-length cDNA libraries derived from limited quantities of total RNA, here we report a method to subtract plasmid libraries excised from size-unbiased amplified lambda phage cDNA libraries that avoids heavily biasing steps such as PCR and plasmid library amplification. The proportion of full-length cDNAs and the gene discovery rate are high, and library diversity can be validated by in silico randomization.

  19. A simultaneous all-optical half/full-subtraction strategy using cascaded highly nonlinear fibers

    NASA Astrophysics Data System (ADS)

    Singh, Karamdeep; Kaur, Gurmeet; Singh, Maninder Lal

    2018-02-01

Using non-linear effects such as cross-gain modulation (XGM) and cross-phase modulation (XPM) inside two highly non-linear fibres (HNLFs) arranged in a cascaded configuration, a simultaneous half/full-subtracter is proposed. The design is attractive for several reasons, including input data pattern independence and the use of a minimal number of non-linear elements, i.e. HNLFs. Proof-of-concept simulations conducted at 100 Gbps indicate good performance: extinction ratios greater than 6.28 dB and eye-opening factors (EO) above 77.1072% are recorded for each implemented output. The proposed simultaneous half/full-subtracter can be used as a key component in all-optical information processing circuits.
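The optical half/full-subtracter realizes standard binary subtraction logic. In Boolean form (a plain software sketch of the logic only, independent of the XGM/XPM implementation) it is:

```python
def half_subtractor(a, b):
    """Half-subtracter: Difference = A XOR B; Borrow = (NOT A) AND B."""
    return a ^ b, (1 - a) & b

def full_subtractor(a, b, borrow_in):
    """Full subtracter from two cascaded half-subtracters; the borrow-out
    is the OR of the two intermediate borrows."""
    d1, b1 = half_subtractor(a, b)
    diff, b2 = half_subtractor(d1, borrow_in)
    return diff, b1 | b2

# Exhaustive check against integer arithmetic: a - b - c == diff - 2*borrow.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            diff, borrow = full_subtractor(a, b, c)
            assert a - b - c == diff - 2 * borrow
```

The cascade of two half-subtracters mirrors the cascaded-HNLF arrangement in the proposed design, with each stage producing one difference and one borrow term.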

  20. On-demand generation of background-free single photons from a solid-state source

    NASA Astrophysics Data System (ADS)

    Schweickert, Lucas; Jöns, Klaus D.; Zeuner, Katharina D.; Covre da Silva, Saimon Filipe; Huang, Huiying; Lettner, Thomas; Reindl, Marcus; Zichi, Julien; Trotta, Rinaldo; Rastelli, Armando; Zwiller, Val

    2018-02-01

True on-demand high-repetition-rate single-photon sources are highly sought after for quantum information processing applications. However, any coherently driven two-level quantum system suffers from a finite re-excitation probability under pulsed excitation, causing undesirable multi-photon emission. Here, we present a solid-state source of on-demand single photons yielding a raw second-order coherence of g⁽²⁾(0) = (7.5 ± 1.6) × 10⁻⁵ without any background subtraction or data processing. To date, this is the lowest value of g⁽²⁾(0) reported for any single-photon source, even compared with the best previously reported background-subtracted values. We achieve this result on GaAs/AlGaAs quantum dots embedded in a low-Q planar cavity by employing (i) a two-photon excitation process and (ii) a filtering and detection setup featuring two superconducting single-photon detectors with ultralow dark-count rates of (0.0056 ± 0.0007) s⁻¹ and (0.017 ± 0.001) s⁻¹, respectively. Re-excitation processes are dramatically suppressed by (i), while (ii) removes false coincidences, resulting in a negligibly low noise floor.

  1. The EPIC-MOS Particle-Induced Background Spectrum

    NASA Technical Reports Server (NTRS)

    Kuntz, K. D.; Snowden, S. L.

    2006-01-01

We have developed a method for constructing a spectrum of the particle-induced instrumental background of the XMM-Newton EPIC MOS detectors that can be used for observations of the diffuse background and extended sources that fill a significant fraction of the instrument field of view. The strength and spectrum of the particle-induced background, that is, the background due to the interaction of particles with the detector and the detector surroundings, is temporally variable as well as spatially variable over individual chips. Our method uses a combination of the filter-wheel-closed data and a database of unexposed-region data to construct a spectrum of the "quiescent" background. We show that, using this method of background subtraction, the differences between independent observations of the same region of "blank sky" are consistent with the statistical uncertainties except when there is clear evidence of solar wind charge exchange (SWCX) emission. We use the blank sky observations to show that contamination by SWCX emission is a strong function of the solar wind proton flux, and that observations through the flanks of the magnetosheath appear to be contaminated only at much higher solar wind fluxes. We have also developed a spectral model of the residual soft proton flares, which allows their effects to be removed to a substantial degree during spectral fitting.

  2. A comparative intelligibility study of single-microphone noise reduction algorithms.

    PubMed

    Hu, Yi; Loizou, Philipos C

    2007-09-01

The evaluation of the intelligibility of noise reduction algorithms is reported. IEEE sentences and consonants were corrupted by four types of noise (babble, car, street and train) at two signal-to-noise ratio levels (0 and 5 dB), and then processed by eight speech enhancement methods encompassing four classes of algorithms: spectral subtractive, subspace, statistical-model-based and Wiener-type algorithms. The enhanced speech was presented to normal-hearing listeners for identification. With the exception of a single noise condition, no algorithm produced significant improvements in speech intelligibility. Information transmission analysis of the consonant confusion matrices indicated that no algorithm significantly improved the place feature score, which is critically important for speech recognition. The algorithms found in previous studies to perform best in terms of overall quality were not the same algorithms that performed best in terms of speech intelligibility. The subspace algorithm, for instance, was previously found to perform the worst in terms of overall quality, but performed well in the present study in terms of preserving speech intelligibility. Overall, the analysis of consonant confusion matrices suggests that in order for noise reduction algorithms to improve speech intelligibility, they need to improve the place and manner feature scores.
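For reference, the spectral-subtractive class mentioned above can be sketched in a few lines (a generic textbook formulation on synthetic data, not one of the eight algorithms evaluated in the study):

```python
import numpy as np

def spectral_subtraction(noisy, noise_ref, frame_len=256, alpha=2.0, floor=0.01):
    """Basic magnitude spectral subtraction: per frame, subtract an over-estimate
    of the noise magnitude spectrum, floor the result, and keep the noisy phase."""
    noise_mag = np.abs(np.fft.rfft(noise_ref[:frame_len]))
    out = np.zeros(len(noisy))
    for start in range(0, len(noisy) - frame_len + 1, frame_len):
        spec = np.fft.rfft(noisy[start:start + frame_len])
        mag = np.abs(spec) - alpha * noise_mag          # over-subtraction (alpha > 1)
        mag = np.maximum(mag, floor * np.abs(spec))     # spectral floor avoids negatives
        out[start:start + frame_len] = np.fft.irfft(
            mag * np.exp(1j * np.angle(spec)), n=frame_len)
    return out

# A pure-noise input with a matching noise reference is almost fully suppressed.
rng = np.random.default_rng(0)
noise = rng.normal(size=256)
residual = spectral_subtraction(noise, noise)
print(np.sum(residual ** 2) < 1e-3 * np.sum(noise ** 2))  # True
```

The over-subtraction factor and spectral floor are the usual tuning knobs; aggressive settings suppress more noise but introduce the distortions that tend to hurt intelligibility.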

  3. First results of the COBE satellite measurement of the anisotropy of the cosmic microwave background radiation

    NASA Technical Reports Server (NTRS)

    Smoot, G. F.; Aymon, J.; De Amici, G.; Bennett, C. L.; Kogut, A.; Gulkis, S.; Backus, C.; Galuk, K.; Jackson, P. D.; Keegstra, P.

    1991-01-01

    The concept and operation of the Differential Microwave Radiometers (DMR) instrument aboard NASA's Cosmic Background Explorer satellite are reviewed, with emphasis on the software identification and subtraction of potential systematic effects. Preliminary results obtained from the first six months of DMR data are presented, and implications for cosmology are discussed.

  4. Distinct representations of subtraction and multiplication in the neural systems for numerosity and language

    PubMed Central

    Prado, Jérôme; Mutreja, Rachna; Zhang, Hongchuan; Mehta, Rucha; Desroches, Amy S.; Minas, Jennifer E.; Booth, James R.

    2010-01-01

It has been proposed that recent cultural inventions such as symbolic arithmetic recycle evolutionarily older neural mechanisms. A central assumption of this hypothesis is that the degree to which a pre-existing mechanism is recycled depends upon the degree of similarity between its initial function and the novel task. To test this assumption, we investigated whether the brain region involved in magnitude comparison in the intraparietal sulcus (IPS), localized by a numerosity comparison task, is recruited to a greater degree by arithmetic problems that involve number comparison (single-digit subtractions) than by problems that involve retrieving facts from memory (single-digit multiplications). Our results confirmed that subtractions are associated with greater activity in the IPS than multiplications, whereas multiplications elicit greater activity than subtractions in regions involved in verbal processing, including the middle temporal gyrus and inferior frontal gyrus, that were localized by a phonological processing task. Pattern analyses further indicated that the neural mechanisms more active for subtraction than multiplication in the IPS overlap with those involved in numerosity comparison, and that the strength of this overlap predicts inter-individual performance in the subtraction task. These findings provide novel evidence that elementary arithmetic relies on the co-option of evolutionarily older neural circuits. PMID:21246667

  5. Spinal pedicle subtraction osteotomy for fixed sagittal imbalance patients

    PubMed Central

    Hyun, Seung-Jae; Kim, Yongjung J; Rhim, Seung-Chul

    2013-01-01

    In addressing spinal sagittal imbalance through a posterior approach, the surgeon now may choose from among a variety of osteotomy techniques. Posterior column osteotomies such as the facetectomy or Ponte or Smith-Petersen osteotomy provide the least correction, but can be used at multiple levels with minimal blood loss and a lower operative risk. Pedicle subtraction osteotomies provide nearly 3 times the per-level correction of Ponte/Smith-Petersen osteotomies; however, they carry increased technical demands, longer operative time, and greater blood loss and associated significant morbidity, including neurological injury. The literature focusing on pedicle subtraction osteotomy for fixed sagittal imbalance patients is reviewed. The long-term overall outcomes, surgical tips to reduce the complications and suggestions for their proper application are also provided. PMID:24340276

  6. Lung nodule detection by microdose CT versus chest radiography (standard and dual-energy subtracted).

    PubMed

    Ebner, Lukas; Bütikofer, Yanik; Ott, Daniel; Huber, Adrian; Landau, Julia; Roos, Justus E; Heverhagen, Johannes T; Christe, Andreas

    2015-04-01

The purpose of this study was to investigate the feasibility of microdose CT using a comparable dose as for conventional chest radiographs in two planes including dual-energy subtraction for lung nodule assessment. We investigated 65 chest phantoms with 141 lung nodules, using an anthropomorphic chest phantom with artificial lung nodules. Microdose CT parameters were 80 kV and 6 mAs, with a pitch of 2.2. Iterative reconstruction algorithms and an integrated circuit detector system (Stellar, Siemens Healthcare) were applied for maximum dose reduction. Maximum intensity projections (MIPs) were reconstructed. Chest radiographs were acquired in two projections with bone suppression. Four blinded radiologists interpreted the images in random order. A soft-tissue CT kernel (I30f) delivered better sensitivities in a pilot study than a hard kernel (I70f), with respective mean (SD) sensitivities of 91.1%±2.2% versus 85.6%±5.6% (p=0.041). Nodule size was measured accurately for all kernels. Mean clustered nodule sensitivity with chest radiography was 45.7%±8.1% (with bone suppression, 46.1%±8%; p=0.94); for microdose CT, nodule sensitivity was 83.6%±9% without MIP (with additional MIP, 92.5%±6%; p < 10⁻³). Individual sensitivities of microdose CT for readers 1, 2, 3, and 4 were 84.3%, 90.7%, 68.6%, and 45.0%, respectively. Sensitivities with chest radiography for readers 1, 2, 3, and 4 were 42.9%, 58.6%, 36.4%, and 90.7%, respectively. In the per-phantom analysis, respective sensitivities of microdose CT versus chest radiography were 96.2% and 75% (p < 10⁻⁶). The effective dose for chest radiography including dual-energy subtraction was 0.242 mSv; for microdose CT, the applied dose was 0.1323 mSv. Microdose CT is better than the combination of chest radiography and dual-energy subtraction for the detection of solid nodules between 5 and 12 mm at a lower dose level of 0.13 mSv. Soft-tissue kernels allow better sensitivities. These preliminary results indicate that

  7. SR high-speed K-edge subtraction angiography in the small animal (abstract)

    NASA Astrophysics Data System (ADS)

    Takeda, T.; Akisada, M.; Nakajima, T.; Anno, I.; Ueda, K.; Umetani, K.; Yamaguchi, C.

    1989-07-01

To assess the ability of the high-speed K-edge energy subtraction system built at beamline 8C of the Photon Factory, Tsukuba, we performed an animal experiment. Rabbits were used for intravenous K-edge subtraction angiography. In this paper, the actual images of the arteries obtained with this system are demonstrated. The high-speed K-edge subtraction system consisted of movable silicon (111) monocrystals, II-ITV, and a digital memory system. Image processing was performed by a 68000-IP computer. The monochromatic x-ray beam size was 50×60 mm. The photon energy above and below the iodine K edge was switched within 16 ms and 32 frames of images were obtained sequentially. The rabbits were anaesthetized with phenobarbital and a 5F catheter was inserted into the inferior vena cava via the femoral vein. Contrast material (Conlaxin H), 1.5 ml/kg, was injected at a rate of 0.5 ml/kg/s. TV images were obtained 3 s after the start of injection. Using this system, clear K-edge subtracted images were obtained sequentially, as with a conventional DSA system. The quality of the images was better than that obtained by DSA. The dynamic blood flow was analyzed, and the best arterial image could be selected from the sequential images. The structures of the aortic arch, common carotid arteries, right subclavian artery, and internal thoracic artery were obtained at the chest. Both common carotid arteries and vertebral arteries were recorded at the neck. Arteries of about 0.3-0.4 mm in diameter could be clearly revealed. The high-speed K-edge subtraction system demonstrates very sharp arterial images clearly and dynamically.
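The arithmetic behind K-edge energy subtraction can be sketched as a difference of log-attenuation images (a simplified toy model that assumes tissue attenuation is equal at the two energies; all values are illustrative, not from the experiment):

```python
import numpy as np

def k_edge_subtraction(i_above, i_below, i0=1.0):
    """Difference of log-attenuation images acquired just above and just below
    the iodine K-edge; tissue attenuation largely cancels, iodine remains."""
    return -np.log(i_above / i0) + np.log(i_below / i0)

# Toy 1-D "image": tissue everywhere, iodine only in the vessel pixel (index 1).
mu_tissue = 0.4                 # assumed equal at both energies (approximation)
mu_iodine_above = 0.9           # iodine attenuates strongly above its K-edge
i_below = np.exp(-np.array([mu_tissue, mu_tissue, mu_tissue]))
i_above = np.exp(-np.array([mu_tissue, mu_tissue + mu_iodine_above, mu_tissue]))
iodine_map = k_edge_subtraction(i_above, i_below)
print(np.round(iodine_map, 3))  # background pixels ~0, vessel pixel ~0.9
```

In practice tissue attenuation is only approximately equal across the narrow energy gap, which is why the two monochromatic images must be acquired in rapid succession, as in the 16 ms switching described above.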

  8. Developmental dissociation in the neural responses to simple multiplication and subtraction problems

    PubMed Central

    Prado, Jérôme; Mutreja, Rachna; Booth, James R.

    2014-01-01

Mastering single-digit arithmetic during school years is commonly thought to depend upon an increasing reliance on verbally memorized facts. An alternative model, however, posits that fluency in single-digit arithmetic might also be achieved via the increasing use of efficient calculation procedures. To test between these hypotheses, we used a cross-sectional design to measure the neural activity associated with single-digit subtraction and multiplication in 34 children from 2nd to 7th grade. The neural correlates of language and numerical processing were also identified in each child via localizer scans. Although multiplication and subtraction were indistinguishable in terms of behavior, we found a striking developmental dissociation in their neural correlates. First, we observed grade-related increases of activity for multiplication, but not for subtraction, in a language-related region of the left temporal cortex. Second, we found grade-related increases of activity for subtraction, but not for multiplication, in a region of the right parietal cortex involved in the procedural manipulation of numerical quantities. The present results suggest that fluency in simple arithmetic in children may be achieved both by increasing reliance on verbal retrieval and by greater use of efficient quantity-based procedures, depending on the operation. PMID:25089323

  9. "Abuelita" Epistemologies: Counteracting Subtractive Schools in American Education

    ERIC Educational Resources Information Center

    Gonzales, Sandra M.

    2015-01-01

    This autoethnographic inquiry examines the intersection of elder epistemology and subtractive education, exploring how one "abuelita" countered her granddaughter's divestment of Mexican-ness. I demonstrate how the grandmother used "abuelita" epistemologies to navigate this tension and resist the assimilative pressures felt…

  10. A comparative study of additive and subtractive manufacturing for dental restorations.

    PubMed

    Bae, Eun-Jeong; Jeong, Il-Do; Kim, Woong-Chul; Kim, Ji-Hwan

    2017-08-01

Digital systems have recently found widespread application in the fabrication of dental restorations. For the clinical assessment of digitally fabricated dental restorations, it is necessary to evaluate their accuracy. However, studies of the accuracy of inlay restorations fabricated with additive manufacturing are lacking. The purpose of this in vitro study was to evaluate and compare the accuracy of inlay restorations fabricated using recently introduced additive manufacturing with the accuracy of subtractive methods. The inlay (distal occlusal cavity) shape was fabricated using 3-dimensional image (reference data) software. Specimens were fabricated using 4 different methods (each n=10, total N=40): 2 additive manufacturing methods, stereolithography apparatus and selective laser sintering; and 2 subtractive methods, wax and zirconia milling. Fabricated specimens were scanned using a dental scanner and then compared by overlapping the reference data. The results were statistically analyzed using a 1-way analysis of variance (α=.05). Additionally, the surface morphology of 1 randomly selected specimen from each group (the first of each) was evaluated using a digital microscope. The results of the overlap analysis of the dental restorations indicated that the root mean square (RMS) deviation observed in the restorations fabricated using the additive manufacturing methods was significantly different from that in restorations fabricated using the subtractive methods (P<.05). However, no significant differences were found between restorations fabricated using stereolithography apparatus and selective laser sintering, the additive manufacturing methods (P=.466). Similarly, no significant differences were found between wax and zirconia, the subtractive methods (P=.986). The observed RMS values were 106 μm for stereolithography apparatus, 113 μm for selective laser sintering, 116 μm for wax, and 119 μm for zirconia. Microscopic evaluation of the surface
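The RMS figures reported above are root-mean-square deviations over the overlapped surfaces. A minimal sketch of the metric, with hypothetical per-point deviations rather than the study's scan data:

```python
import numpy as np

def rms_deviation(deviations_um):
    """Root-mean-square of per-point deviations between the scanned restoration
    and the reference design (micrometres)."""
    d = np.asarray(deviations_um, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical per-point deviations from one overlap analysis (not study data):
print(round(rms_deviation([100.0, -120.0, 110.0, -90.0]), 1))  # 105.6
```

Because deviations are squared, the sign of each deviation (material surplus vs. deficit) does not matter, and large local errors dominate the score.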

  11. Error Patterns in Portuguese Students' Addition and Subtraction Calculation Tasks: Implications for Teaching

    ERIC Educational Resources Information Center

    Watson, Silvana Maria R.; Lopes, João; Oliveira, Célia; Judge, Sharon

    2018-01-01

    Purpose: The purpose of this descriptive study is to investigate why some elementary children have difficulties mastering addition and subtraction calculation tasks. Design/methodology/approach: The researchers have examined error types in addition and subtraction calculation made by 697 Portuguese students in elementary grades. Each student…

  12. Four-State Continuous-Variable Quantum Key Distribution with Photon Subtraction

    NASA Astrophysics Data System (ADS)

    Li, Fei; Wang, Yijun; Liao, Qin; Guo, Ying

    2018-06-01

Four-state continuous-variable quantum key distribution (CVQKD) is a discretely modulated CVQKD protocol which generates four nonorthogonal coherent states and exploits the sign of the measured quadrature of each state to encode information, rather than using the quadrature x̂ or p̂ itself. It has been proven that four-state CVQKD is more suitable than Gaussian-modulated CVQKD in terms of transmission distance. In this paper, we propose an improved four-state CVQKD using a non-Gaussian operation, photon subtraction. A suitable photon-subtraction operation can be exploited to improve the maximal transmission distance of CVQKD in point-to-point quantum communication, since it provides a method to enhance the performance of entanglement-based (EB) CVQKD. Photon subtraction not only lengthens the maximal transmission distance by increasing the signal-to-noise ratio but also can be easily implemented with existing technologies. Security analysis shows that the proposed scheme lengthens the maximum transmission distance. Furthermore, by taking the finite-size effect into account we obtain a tighter bound on the secure distance, which is more practical than that obtained in the asymptotic limit.

  13. Misconception on Addition and Subtraction of Fraction at Primary School Students in Fifth-Grade

    NASA Astrophysics Data System (ADS)

    Trivena, V.; Ningsih, A. R.; Jupri, A.

    2017-09-01

This study aims to investigate students' concept mastery in mathematics learning, especially in addition and subtraction of fractions, at the primary school level. Using a qualitative research method, data were collected from 23 grade-five students (10-11 years old). Instruments included a test accompanied by the Certainty Response Index (CRI) and interviews with students and the teacher. The test results were processed by analyzing the students' answers to each item, grouping them by CRI category, and combining them with the results of the interviews. The results showed that students' concept mastery of addition and subtraction was dominated by the category 'misconception'. We can therefore say that concept mastery of addition and subtraction of fractions among fifth-grade students is still low, which may lead many primary students to consider addition and subtraction of fractions in mathematics difficult.
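The CRI-based grouping described above combines answer correctness with stated confidence. A sketch of the usual four-way classification (the 0-5 scale and 2.5 cut-off are common conventions in CRI studies, assumed here rather than taken from this paper):

```python
def cri_category(is_correct, cri, threshold=2.5):
    """Classify one response by answer correctness and Certainty Response Index.
    High confidence + wrong answer is the signature of a misconception;
    low confidence means the student is guessing either way."""
    if is_correct:
        return "knows the concept" if cri > threshold else "lucky guess"
    return "misconception" if cri > threshold else "lacks knowledge"

print(cri_category(False, 4))  # wrong but confident -> misconception
print(cri_category(True, 1))   # right but unsure -> lucky guess
```

Interview data then serves to confirm whether high-confidence wrong answers really reflect a stable misconception rather than a careless slip.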

  14. Subtraction of subcutaneous fat to improve the prediction of visceral adiposity: exploring a new anthropometric track in overweight and obese youth.

    PubMed

    Samouda, H; De Beaufort, C; Stranges, S; Van Nieuwenhuyse, J-P; Dooms, G; Keunen, O; Leite, S; Vaillant, M; Lair, M-L; Dadoun, F

    2017-08-01

The efficiency of traditional anthropometric measurements such as body mass index (BMI) or waist circumference (Waist C), used in place of biomedical imaging for assessing visceral adipose tissue (VAT), is still highly controversial in youth. We evaluated the most accurate model for predicting VAT in overweight/obese youth, using various anthropometric measurements and their correlations with different body fat compartments. In particular, we tested, for the first time in youth, the hypothesis that subtracting an anthropometric measurement highly correlated with subcutaneous abdominal adipose tissue (SAAT), and as weakly correlated as possible with VAT, from an anthropometric abdominal measurement highly correlated with visceral and total abdominal adipose tissue (TAAT) predicts VAT with higher accuracy. VAT and SAAT data were obtained from magnetic resonance imaging (MRI) analysis performed on 181 boys and girls (7-17 y) from the Diabetes & Endocrinology Care Paediatrics Clinic in Luxembourg. Height, weight, abdominal diameters, and waist, hip, and thigh circumferences were measured with a view to developing the anthropometric VAT predictive algorithms. In girls, subtracting proximal thigh circumference (Proximal Thigh C), the anthropometric measurement most closely correlated with SAAT, from Waist C, the anthropometric measurement most closely correlated with VAT, improved VAT prediction in comparison with the most accurate single VAT anthropometric surrogate. [Formula: see text] Residual analysis showed a negligible estimation error (5 cm²). In boys, Waist C was the best VAT predictor. Subtraction of abdominal subcutaneous fat is important to predict VAT in overweight/obese girls. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  15. Children's Understanding of Addition and Subtraction Concepts

    ERIC Educational Resources Information Center

    Robinson, Katherine M.; Dube, Adam K.

    2009-01-01

Little is known about how children's understanding of the arithmetic concepts of inversion and associativity develops after the onset of formal schooling. On problems of the form a+b-b (e.g., 3+26-26), if children understand the inversion concept (i.e., that addition and subtraction are inverse operations), then no calculations are needed…
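
    The inversion shortcut described above can be made concrete in a few lines: computing left to right and applying the inversion concept must agree, but the latter needs no arithmetic at all.

    ```python
    # The inversion principle: on a + b - b problems, addition and subtraction
    # cancel, so the answer is simply a.

    def solve_naively(a, b):
        return (a + b) - b   # compute left to right

    def solve_with_inversion(a, b):
        return a             # b - b cancels; no calculation needed
    ```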

  16. Observation of yttrium oxide nanoparticles in cabbage (Brassica oleracea) through dual energy K-edge subtraction imaging

    DOE PAGES

    Chen, Yunyun; Sanchez, Carlos; Yue, Yuan; ...

    2016-03-25

Background: The potential transfer of engineered nanoparticles (ENPs) from plants into the food chain has raised widespread concerns. In order to investigate the effects of ENPs on plants, young cabbage plants (Brassica oleracea) were exposed to a hydroponic system containing yttrium oxide (yttria) ENPs. The objective of this study was to reveal the impacts of NPs on plants by using the K-edge subtraction imaging technique. Results: Using synchrotron dual-energy X-ray micro-tomography with the K-edge subtraction technique, we studied the uptake, accumulation, distribution and concentration mapping of yttria ENPs in cabbage plants. It was found that yttria ENPs were taken up by the cabbage roots but were not effectively transferred and mobilized through the cabbage stem and leaves. This could be due to accumulated yttria ENPs being blocked at the primary-lateral-root junctions. Instead, non-yttria minerals were found in the xylem vessels of roots and stem. Conclusions: Synchrotron dual-energy X-ray micro-tomography is an effective method to observe yttria NPs inside cabbage plants at both the whole-body and microscale levels. Furthermore, the blockage of a plant's roots by nanoparticles is likely the first and potentially fatal environmental effect of this type of nanoparticle.

  17. A Novel Approach to model EPIC variable background

    NASA Astrophysics Data System (ADS)

    Marelli, M.; De Luca, A.; Salvetti, D.; Belfiore, A.

    2017-10-01

One of the main aims of the EXTraS (Exploring the X-ray Transient and variable Sky) project is to characterise the variability of serendipitous XMM-Newton sources within each single observation. Unfortunately, 164 Ms out of the 774 Ms of cumulative exposure considered (21%) are badly affected by soft proton flares, hampering any classical analysis of field sources. De facto, the latest releases of the 3XMM catalog, as well as most analyses in the literature, simply exclude these 'high background' periods from analysis. We implemented a novel SAS-independent approach to produce background-subtracted light curves, which makes it possible to treat the case of very faint sources and very bright proton flares. EXTraS light curves of 3XMM-DR5 sources will soon be released to the community, together with new tools we are developing.

  18. The performance of projective standardization for digital subtraction radiography.

    PubMed

    Mol, André; Dunn, Stanley M

    2003-09-01

We sought to test the performance and robustness of projective standardization in preserving invariant properties of subtraction images in the presence of irreversible projection errors. Study design: Twenty bone chips (1-10 mg each) were placed on dentate dry mandibles. Follow-up images were obtained without the bone chips, and irreversible projection errors of up to 6 degrees were introduced. Digitized image intensities were normalized, and follow-up images were geometrically reconstructed by 2 operators using anatomical and fiduciary landmarks. Subtraction images were analyzed by 3 observers. Regression analysis revealed a linear relationship between radiographic estimates of mineral loss and actual mineral loss (R(2) = 0.99; P <.05). The effect of projection error was not significant (general linear model [GLM]: P >.05). There was no difference between the radiographic estimates from images standardized with anatomical landmarks and those standardized with fiduciary landmarks (Wilcoxon signed rank test: P >.05). Operator variability was low for image analysis alone (R(2) = 0.99; P <.05), as well as for the entire procedure (R(2) = 0.98; P <.05). The predicted detection limit was smaller than 1 mg. Subtraction images registered by projective standardization yield estimates of osseous change that are invariant to irreversible projection errors of up to 6 degrees. Within these limits, operator precision is high and anatomical landmarks can be used to establish correspondence.

  19. Two-dimensional real-time imaging system for subtraction angiography using an iodine filter

    NASA Astrophysics Data System (ADS)

    Umetani, Keiji; Ueda, Ken; Takeda, Tohoru; Anno, Izumi; Itai, Yuji; Akisada, Masayoshi; Nakajima, Teiichi

    1992-01-01

A new type of subtraction imaging system was developed using an iodine filter and a single-energy broad-bandwidth monochromatized x ray. The x-ray images of coronary arteries made after intravenous injection of a contrast agent are enhanced by an energy-subtraction technique. Filter chopping of the x-ray beam switches energies rapidly, so that a nearly simultaneous pair of filtered and nonfiltered images can be made. By using a high-speed video camera, a pair of 512 × 512 pixel images can be obtained within 9 ms. Three hundred eighty-four images (raw data) are stored in a 144-Mbyte frame memory. After phantom studies, in vivo subtracted images of coronary arteries in dogs were obtained at a rate of 15 images/s.

  20. Biologically inspired binaural hearing aid algorithms: Design principles and effectiveness

    NASA Astrophysics Data System (ADS)

    Feng, Albert

    2002-05-01

Despite rapid advances in the sophistication of hearing aid technology and microelectronics, listening in noise remains problematic for people with hearing impairment. To solve this problem, two algorithms were designed for use in binaural hearing aid systems. The signal processing strategies are based on principles in auditory physiology and psychophysics: (a) the location/extraction (L/E) binaural computational scheme determines the directions of source locations and cancels noise by applying a simple subtraction method over every frequency band; and (b) the frequency-domain minimum-variance (FMV) scheme extracts a target sound from a known direction amidst multiple interfering sound sources. Both algorithms were evaluated using standard metrics such as signal-to-noise-ratio gain and articulation index. Results were compared with those from conventional adaptive beam-forming algorithms. In free-field tests with multiple interfering sound sources our algorithms performed better than conventional algorithms. Preliminary intelligibility and speech reception results in multitalker environments showed gains for every listener with normal or impaired hearing when the signals were processed in real time with the FMV binaural hearing aid algorithm. [Work supported by NIH-NIDCD Grant No. R21DC04840 and the Beckman Institute.]
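
    The per-frequency-band subtraction idea can be sketched generically. This is a minimal spectral-subtraction toy operating on band magnitudes, not the published L/E implementation; the band values and noise estimate are invented for illustration.

    ```python
    # Generic per-band subtraction sketch: subtract an estimate of the noise
    # magnitude in each frequency band from the noisy signal's band
    # magnitudes, flooring at zero so no band goes negative.

    def spectral_subtract(signal_mags, noise_mags, floor=0.0):
        return [max(s - n, floor) for s, n in zip(signal_mags, noise_mags)]

    noisy = [0.9, 1.4, 0.3, 2.0, 0.5]   # per-band magnitudes of speech + noise
    noise = [0.4, 0.4, 0.4, 0.4, 0.4]   # per-band noise estimate
    clean = spectral_subtract(noisy, noise)
    ```

    Bands dominated by noise are driven to the floor, while bands where the target is strong survive largely intact.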

  1. A subtraction scheme for computing QCD jet cross sections at NNLO: integrating the doubly unresolved subtraction terms

    NASA Astrophysics Data System (ADS)

    Somogyi, Gábor

    2013-04-01

    We finish the definition of a subtraction scheme for computing NNLO corrections to QCD jet cross sections. In particular, we perform the integration of the soft-type contributions to the doubly unresolved counterterms via the method of Mellin-Barnes representations. With these final ingredients in place, the definition of the scheme is complete and the computation of fully differential rates for electron-positron annihilation into two and three jets at NNLO accuracy becomes feasible.

  2. Enriching Addition and Subtraction Fact Mastery through Games

    ERIC Educational Resources Information Center

    Bay-Williams, Jennifer M.; Kling, Gina

    2014-01-01

    The learning of "basic facts"--single-digit combinations for addition, subtraction, multiplication, and division--has long been a focus of elementary school mathematics. Many people remember completing endless worksheets, timed tests, and flash card drills as they attempted to "master" their basic facts as children. However,…

  3. Pediatric head and neck lesions: assessment of vascularity by MR digital subtraction angiography.

    PubMed

    Chooi, Weng Kong; Woodhouse, Neil; Coley, Stuart C; Griffiths, Paul D

    2004-08-01

    Pediatric head and neck lesions can be difficult to characterize on clinical grounds alone. We investigated the use of dynamic MR digital subtraction angiography as a noninvasive adjunct for the assessment of the vascularity of these abnormalities. Twelve patients (age range, 2 days to 16 years) with known or suspected vascular abnormalities were studied. Routine MR imaging, time-of-flight MR angiography, and MR digital subtraction angiography were performed in all patients. The dynamic sequence was acquired in two planes at one frame per second by using a thick section (6-10 cm) selective radio-frequency spoiled fast gradient-echo sequence and an IV administered bolus of contrast material. The images were subtracted from a preliminary mask sequence and viewed as a video-inverted cine loop. In all cases, MR digital subtraction angiography was successfully performed. The technique showed the following: 1) slow flow lesions (two choroidal angiomas, eyelid hemangioma, and scalp venous malformation); 2) high flow lesions that were not always suspected by clinical examination alone (parotid hemangioma, scalp, occipital, and eyelid arteriovenous malformations plus a palatal teratoma); 3) a hypovascular tumor for which a biopsy could be safely performed (Burkitt lymphoma); and 4) a hypervascular tumor of the palate (cystic teratoma). Our early experience suggests that MR digital subtraction angiography can be reliably performed in children of all ages without complication. The technique provided a noninvasive assessment of the vascularity of each lesion that could not always have been predicted on the basis of clinical examination or routine MR imaging alone.
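
    The core subtraction step described above (dynamic frames subtracted from a preliminary mask) can be sketched in a few lines. The tiny 2×3 "frames" and intensity values are illustrative only.

    ```python
    # Mask-mode subtraction as in digital subtraction angiography: a
    # pre-contrast mask frame is subtracted from each post-contrast frame,
    # so only pixels where contrast agent arrived remain non-zero.

    def subtract_mask(frame, mask):
        return [[f - m for f, m in zip(fr, mr)] for fr, mr in zip(frame, mask)]

    mask  = [[10, 10, 10],
             [10, 10, 10]]
    frame = [[10, 40, 10],   # contrast agent raises two "vessel" pixels
             [10, 10, 35]]
    diff = subtract_mask(frame, mask)
    ```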

  4. A compact radiation source for digital subtractive angiography

    NASA Astrophysics Data System (ADS)

    Wiedemann, H.; Baltay, M.; Carr, R.; Hernandez, M.; Lavender, W.

    1994-08-01

    Beam requirements for 33 keV radiation used in digital subtraction angiography have been established through extended experimentation first at Stanford and later at the National Synchrotron Light Source in Brookhaven. So far research and development of this medical procedure to image coronary blood vessels have been undertaken on large high energy electron storage rings. With progress in this diagnostic procedure, it is interesting to look for an optimum concept for providing a 33 keV radiation source which would fit into the environment of a hospital. A variety of competing effects and technologies to produce 33 keV radiation are available, but none of these processes provides the combination of sufficient photon flux and monochromaticity except for synchrotron radiation from an electron storage ring. The conceptual design of a compact storage ring optimized to fit into a hospital environment and producing sufficient 33 keV radiation for digital subtraction radiography will be discussed.

  5. [Development of a digital chest phantom for studies on energy subtraction techniques].

    PubMed

    Hayashi, Norio; Taniguchi, Anna; Noto, Kimiya; Shimosegawa, Masayuki; Ogura, Toshihiro; Doi, Kunio

    2014-03-01

    Digital chest phantoms continue to play a significant role in optimizing imaging parameters for chest X-ray examinations. The purpose of this study was to develop a digital chest phantom for studies on energy subtraction techniques under ideal conditions without image noise. Computed tomography (CT) images from the LIDC (Lung Image Database Consortium) were employed to develop a digital chest phantom. The method consisted of the following four steps: 1) segmentation of the lung and bone regions on CT images; 2) creation of simulated nodules; 3) transformation to attenuation coefficient maps from the segmented images; and 4) projection from attenuation coefficient maps. To evaluate the usefulness of digital chest phantoms, we determined the contrast of the simulated nodules in projection images of the digital chest phantom using high and low X-ray energies, soft tissue images obtained by energy subtraction, and "gold standard" images of the soft tissues. Using our method, the lung and bone regions were segmented on the original CT images. The contrast of simulated nodules in soft tissue images obtained by energy subtraction closely matched that obtained using the gold standard images. We thus conclude that it is possible to carry out simulation studies based on energy subtraction techniques using the created digital chest phantoms. Our method is potentially useful for performing simulation studies for optimizing the imaging parameters in chest X-ray examinations.
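
    The energy-subtraction soft-tissue step can be sketched with a weighted log subtraction. This is a hedged illustration, not the paper's method or data: the attenuation coefficients are made up, and the weight is chosen analytically to null the bone term.

    ```python
    # Dual-energy (weighted log) subtraction sketch: with high- and low-energy
    # log-attenuation values, the weight w that matches bone's low/high
    # attenuation ratio cancels bone and leaves a soft-tissue signal.
    # All coefficients below are synthetic illustrations.

    MU = {"soft": {"low": 0.20, "high": 0.18},
          "bone": {"low": 0.60, "high": 0.30}}

    def log_attenuation(t_soft, t_bone, energy):
        return MU["soft"][energy] * t_soft + MU["bone"][energy] * t_bone

    w = MU["bone"]["low"] / MU["bone"]["high"]   # weight that nulls bone

    def soft_tissue_signal(t_soft, t_bone):
        low = log_attenuation(t_soft, t_bone, "low")
        high = log_attenuation(t_soft, t_bone, "high")
        return low - w * high                    # bone term cancels

    # Bone thickness no longer affects the subtracted signal:
    s1 = soft_tissue_signal(t_soft=5.0, t_bone=0.0)
    s2 = soft_tissue_signal(t_soft=5.0, t_bone=2.0)
    ```

    In a noise-free digital phantom this cancellation is exact, which is precisely why such phantoms are useful for isolating the effect of imaging parameters.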

  6. VNIR hyperspectral background characterization methods in adverse weather conditions

    NASA Astrophysics Data System (ADS)

    Romano, João M.; Rosario, Dalton; Roth, Luz

    2009-05-01

Hyperspectral technology is currently being used by the military to detect regions of interest where potential targets may be located. Weather variability, however, may affect the ability of an algorithm to discriminate possible targets from background clutter. Nonetheless, different background characterization approaches may facilitate the ability of an algorithm to discriminate potential targets over a variety of weather conditions. In a previous paper, we introduced a new autonomous, target-size-invariant background characterization process, Autonomous Background Characterization (ABC), also known as the Parallel Random Sampling (PRS) method, which features a random sampling stage; a parallel process to mitigate the inclusion by chance of target samples into clutter background classes during random sampling; and a fusion of results at the end. In this paper, we demonstrate how different background characterization approaches are able to improve the performance of algorithms over a variety of challenging weather conditions. Using the Mahalanobis distance as the standard algorithm for this study, we compare the performance of different characterization methods, such as global information, 2-stage global information, and our proposed method, ABC, using data collected under a variety of adverse weather conditions. For this study, we used ARDEC's Hyperspectral VNIR Adverse Weather data collection comprised of heavy, light, and transitional fog, light and heavy rain, and low light conditions.
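
    The Mahalanobis detector used as the standard algorithm above scores each pixel spectrum against the background's mean and covariance. A minimal sketch is given below, assuming a diagonal covariance for brevity; the two-band values are synthetic.

    ```python
    # Mahalanobis distance of a pixel spectrum x from background statistics
    # (mean m, per-band variance v), with a diagonal covariance for brevity:
    # d = sqrt(sum_i (x_i - m_i)^2 / v_i). Larger d = more target-like.
    import math

    def mahalanobis_diag(x, mean, var):
        return math.sqrt(sum((xi - mi) ** 2 / vi
                             for xi, mi, vi in zip(x, mean, var)))

    background_mean = [0.30, 0.55]   # per-band mean reflectance (synthetic)
    background_var  = [0.01, 0.04]   # per-band variance (synthetic)

    clutter = [0.32, 0.50]           # close to the background statistics
    target  = [0.80, 0.10]           # spectrally anomalous pixel
    ```

    A better background characterization (e.g., per-class statistics from ABC rather than one global mean and covariance) tightens the variances and sharpens this score, which is the mechanism behind the performance differences reported above.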

  7. Subtraction method of computing QCD jet cross sections at NNLO accuracy

    NASA Astrophysics Data System (ADS)

    Trócsányi, Zoltán; Somogyi, Gábor

    2008-10-01

We present a general subtraction method for computing radiative corrections to QCD jet cross sections at next-to-next-to-leading order accuracy. The steps needed to set up this subtraction scheme are the same as those used in next-to-leading order computations. However, all steps need non-trivial modifications, which we implement such that they can be defined at any order in perturbation theory. We give a status report on the implementation of the method for computing jet cross sections in electron-positron annihilation at next-to-next-to-leading order accuracy.

  8. Assessing the use of an infrared spectrum hyperpixel array imager to measure temperature during additive and subtractive manufacturing

    NASA Astrophysics Data System (ADS)

    Whitenton, Eric; Heigel, Jarred; Lane, Brandon; Moylan, Shawn

    2016-05-01

Accurate non-contact temperature measurement is important to optimize manufacturing processes. This applies to both additive (3D printing) and subtractive (material removal by machining) manufacturing. Accurate single-wavelength thermography faces numerous challenges. A potential alternative is hyperpixel array hyperspectral imaging. Focusing on metals, this paper discusses issues involved, such as unknown or changing emissivity, inaccurate greybody assumptions, motion blur, and size-of-source effects. The algorithm which converts measured thermal spectra to emissivity and temperature uses a customized multistep non-linear equation solver to determine the best-fit emission curve. Emissivity dependence on wavelength may be assumed uniform or have a relationship typical for metals. The custom software displays residuals for intensity, temperature, and emissivity to gauge the correctness of the greybody assumption. Initial results are shown from a laser powder-bed fusion additive process, as well as a machining process. In addition, the effects of motion blur are analyzed, which occur in both additive and subtractive manufacturing processes. In a laser powder-bed fusion additive process, the scanning laser causes the melt pool to move rapidly, causing a motion blur-like effect. In machining, measuring the temperature of the rapidly moving chip is a desirable goal to develop and validate simulations of the cutting process. A moving slit target is imaged to characterize how the measured temperature values are affected by motion of a measured target.

  9. Simple and complex mental subtraction: strategy choice and speed-of-processing differences in younger and older adults.

    PubMed

    Geary, D C; Frensch, P A; Wiley, J G

    1993-06-01

    Thirty-six younger adults (10 male, 26 female; ages 18 to 38 years) and 36 older adults (14 male, 22 female; ages 61 to 80 years) completed simple and complex paper-and-pencil subtraction tests and solved a series of simple and complex computer-presented subtraction problems. For the computer task, strategies and solution times were recorded on a trial-by-trial basis. Older Ss used a developmentally more mature mix of problem-solving strategies to solve both simple and complex subtraction problems. Analyses of component scores derived from the solution times suggest that the older Ss are slower at number encoding and number production but faster at executing the borrow procedure. In contrast, groups did not appear to differ in the speed of subtraction fact retrieval. Results from a computational simulation are consistent with the interpretation that older adults' advantage for strategy choices and for the speed of executing the borrow procedure might result from more practice solving subtraction problems.

  10. Detection of admittivity anomaly on high-contrast heterogeneous backgrounds using frequency difference EIT.

    PubMed

    Jang, J; Seo, J K

    2015-06-01

This paper describes a multiple background subtraction method in frequency difference electrical impedance tomography (fdEIT) to detect an admittivity anomaly in a high-contrast background conductivity distribution. The proposed method expands the use of the conventional weighted frequency-difference EIT method, which has so far been limited to detecting admittivity anomalies in a roughly homogeneous background. The proposed method can be viewed as multiple weighted difference imaging in fdEIT. Although the spatial resolutions of the output images by fdEIT are very low due to the inherent ill-posedness, numerical simulations and phantom experiments of the proposed method demonstrate its feasibility to detect anomalies. It has potential application in stroke detection in a head model, which is highly heterogeneous due to the skull.

  11. Bone images from dual-energy subtraction chest radiography in the detection of rib fractures.

    PubMed

    Szucs-Farkas, Zsolt; Lautenschlager, Katrin; Flach, Patricia M; Ott, Daniel; Strautz, Tamara; Vock, Peter; Ruder, Thomas D

    2011-08-01

To assess the sensitivity and image quality of chest radiography (CXR) with or without dual-energy subtracted (ES) bone images in the detection of rib fractures. In this retrospective study, 39 patients with 204 rib fractures and 24 subjects with no fractures were examined with a single-exposure dual-energy subtraction digital radiography system. Three blinded readers first evaluated the non-subtracted posteroanterior and lateral chest radiographs alone, and 3 months later they evaluated the non-subtracted images together with the subtracted posteroanterior bone images. The locations of rib fractures were registered with confidence levels on a 3-grade scale. Image quality was rated on a 5-point scale. Marks by readers were compared with fracture localizations in CT as a standard of reference. The sensitivity for fracture detection using both methods was very similar (34.3% with standard CXR and 33.5% with ES-CXR, p=0.92). At the patient level, both sensitivity (71.8%) and specificity (92.9%) with or without ES were identical. Diagnostic confidence was not significantly different (2.61 with CXR and 2.75 with ES-CXR, p=0.063). Image quality with ES was rated higher than that on standard CXR (4.08 vs. 3.74, p<0.001). Despite a better image quality, adding ES bone images to standard radiographs of the chest does not provide better sensitivity or improved diagnostic confidence in the detection of rib fractures. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  12. An Evaluation of Pixel-Based Methods for the Detection of Floating Objects on the Sea Surface

    NASA Astrophysics Data System (ADS)

    Borghgraef, Alexander; Barnich, Olivier; Lapierre, Fabian; Van Droogenbroeck, Marc; Philips, Wilfried; Acheroy, Marc

    2010-12-01

Ship-based automatic detection of small floating objects on an agitated sea surface remains a hard problem. Our main concern is the detection of floating mines, which proved a real threat to shipping in confined waterways during the first Gulf War, but applications include salvaging, search-and-rescue operations, and perimeter or harbour defense. Detection in infrared (IR) is challenging because a rough sea is seen as a dynamic background of moving objects similar in size, shape, and temperature to the floating mine. In this paper we have applied a selection of background subtraction algorithms to the problem, and we show that recent algorithms such as ViBe and behaviour subtraction, which take into account spatial and temporal correlations within the dynamic scene, significantly outperform the more conventional parametric techniques, with only few prior assumptions about the physical properties of the scene.
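
    A minimal parametric baseline of the kind such comparisons include is the per-pixel running-average model sketched below; ViBe and behaviour subtraction replace this single average with sample-based and temporal models. The 4-pixel "image" and thresholds are illustrative.

    ```python
    # Running-average background subtraction: each pixel's background is an
    # exponentially weighted average of past frames; a pixel is foreground
    # when it deviates from that average by more than a threshold.

    def update_background(bg, frame, alpha=0.05):
        return [b + alpha * (f - b) for b, f in zip(bg, frame)]

    def foreground_mask(bg, frame, threshold=20):
        return [abs(f - b) > threshold for b, f in zip(bg, frame)]

    bg = [100.0, 100.0, 100.0, 100.0]   # 4-pixel "image" for brevity
    for _ in range(50):                 # near-static scene: model converges
        bg = update_background(bg, [102, 98, 101, 99])
    mask = foreground_mask(bg, [103, 97, 180, 100])  # object at pixel 2
    ```

    On an agitated sea this per-pixel Gaussian-like assumption breaks down, which is exactly why the sample-based methods above fare better.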

  13. DeepAnomaly: Combining Background Subtraction and Deep Learning for Detecting Obstacles and Anomalies in an Agricultural Field.

    PubMed

    Christiansen, Peter; Nielsen, Lars N; Steen, Kim A; Jørgensen, Rasmus N; Karstoft, Henrik

    2016-11-11

Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained for detecting and classifying a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles, such as people and animals, occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogeneous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors including "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45-90 m) than RCNN. RCNN has a similar performance at short range (0-30 m). However, DeepAnomaly has far fewer model parameters and a 7.28-times faster processing time per image (182 ms vs. 25 ms). Unlike most CNN-based methods, the high accuracy, low computation time and low memory footprint make it suitable for a real-time system running on an embedded GPU (Graphics Processing Unit).

  14. DeepAnomaly: Combining Background Subtraction and Deep Learning for Detecting Obstacles and Anomalies in an Agricultural Field

    PubMed Central

    Christiansen, Peter; Nielsen, Lars N.; Steen, Kim A.; Jørgensen, Rasmus N.; Karstoft, Henrik

    2016-01-01

Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained for detecting and classifying a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles, such as people and animals, occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogeneous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors including “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks” (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45–90 m) than RCNN. RCNN has a similar performance at short range (0–30 m). However, DeepAnomaly has far fewer model parameters and a 7.28-times faster processing time per image (182 ms vs. 25 ms). Unlike most CNN-based methods, the high accuracy, low computation time and low memory footprint make it suitable for a real-time system running on an embedded GPU (Graphics Processing Unit). PMID:27845717

  15. Following subtraction of the dipole anisotropy and components of the detected emission arising from

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Following subtraction of the dipole anisotropy and components of the detected emission arising from dust (thermal emission), hot gas (free-free emission), and charged particles interacting with magnetic fields (synchrotron emission) in the Milky Way Galaxy, the cosmic microwave background (CMB) anisotropy can be seen. CMB anisotropy - tiny fluctuations in the sky brightness at a level of a part in one hundred thousand - was first detected by the COBE DMR instrument. The CMB radiation is a remnant of the Big Bang, and the fluctuations are the imprint of density contrast in the early Universe (see slide 24 caption). This image represents the anisotropy detected in data collected during the first two years of DMR operation. Ultimately the DMR was operated for four years. See slide 19 caption for information about map smoothing and projection.

  16. Demonstration of an optoelectronic interconnect architecture for a parallel modified signed-digit adder and subtracter

    NASA Astrophysics Data System (ADS)

    Sun, Degui; Wang, Na-Xin; He, Li-Ming; Weng, Zhao-Heng; Wang, Daheng; Chen, Ray T.

    1996-06-01

    A space-position-logic-encoding scheme is proposed and demonstrated. This encoding scheme not only makes the best use of the convenience of binary logic operation, but is also suitable for the trinary property of modified signed- digit (MSD) numbers. Based on the space-position-logic-encoding scheme, a fully parallel modified signed-digit adder and subtractor is built using optoelectronic switch technologies in conjunction with fiber-multistage 3D optoelectronic interconnects. Thus an effective combination of a parallel algorithm and a parallel architecture is implemented. In addition, the performance of the optoelectronic switches used in this system is experimentally studied and verified. Both the 3-bit experimental model and the experimental results of a parallel addition and a parallel subtraction are provided and discussed. Finally, the speed ratio between the MSD adder and binary adders is discussed and the advantage of the MSD in operating speed is demonstrated.
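
    One reason MSD numbers suit such parallel hardware can be shown in software: with digits in {-1, 0, 1}, negation is digitwise, so subtraction reduces to adding a digitwise-negated operand with no borrow propagation. This sketch illustrates only that representation property, not the optoelectronic implementation.

    ```python
    # Modified signed-digit (MSD) numbers: base-2 positional digits drawn
    # from {-1, 0, 1}. Negation flips each digit independently, so forming
    # -x needs no carry/borrow chain.

    def msd_value(digits):
        """Value of an MSD number, most significant digit first."""
        v = 0
        for d in digits:
            v = 2 * v + d
        return v

    def msd_negate(digits):
        return [-d for d in digits]

    x = [1, -1, 0, 1]   # 2^3 - 2^2 + 2^0 = 5
    ```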

  17. Design and Implementation of Hybrid CORDIC Algorithm Based on Phase Rotation Estimation for NCO

    PubMed Central

    Zhang, Chaozhu; Han, Jinan; Li, Ke

    2014-01-01

The numerically controlled oscillator has wide application in radar, digital receivers, and software radio systems. Firstly, this paper introduces the traditional CORDIC algorithm. Then, in order to improve computing speed and save resources, this paper proposes a hybrid CORDIC algorithm based on phase rotation estimation, applied in a numerically controlled oscillator (NCO). By estimating the direction of part of the phase rotations, the algorithm reduces the number of phase rotations and add-subtract units, thereby decreasing delay. Furthermore, the paper simulates and implements the numerically controlled oscillator with the Quartus II and Modelsim software tools. Finally, simulation results indicate that an improvement over the traditional CORDIC algorithm is achieved in terms of ease of computation, resource utilization, and computing speed/delay while maintaining precision. It is suitable for high-speed, high-precision digital modulation and demodulation. PMID:25110750
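
    The traditional algorithm the paper starts from can be sketched as rotation-mode CORDIC, the phase-to-amplitude stage of an NCO: each iteration rotates by ±atan(2⁻ⁱ) using only shifts and add/subtract operations. The hybrid scheme's rotation-direction estimation is not reproduced here; this is the plain iterative form in floating point.

    ```python
    # Rotation-mode CORDIC: start from a gain-corrected unit vector and
    # drive the residual angle z to zero with micro-rotations by atan(2^-i).
    import math

    def cordic_sin_cos(angle, iterations=24):
        """Return (cos(angle), sin(angle)) for |angle| <= pi/2 via CORDIC."""
        atans = [math.atan(2.0 ** -i) for i in range(iterations)]
        k = 1.0                                   # CORDIC gain correction
        for i in range(iterations):
            k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
        x, y, z = k, 0.0, angle
        for i in range(iterations):
            d = 1.0 if z >= 0 else -1.0           # direction of micro-rotation
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * atans[i]
        return x, y

    c, s = cordic_sin_cos(math.pi / 6)
    ```

    In hardware the multiplications by 2⁻ⁱ are wire shifts, which is why reducing the number of iterations (as the hybrid scheme does) directly reduces delay and add-subtract units.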

  18. Ending up with Less: The Role of Working Memory in Solving Simple Subtraction Problems with Positive and Negative Answers

    ERIC Educational Resources Information Center

    Robert, Nicole D.; LeFevre, Jo-Anne

    2013-01-01

    Does solving subtraction problems with negative answers (e.g., 5-14) require different cognitive processes than solving problems with positive answers (e.g., 14-5)? In a dual-task experiment, young adults (N=39) combined subtraction with two working memory tasks, verbal memory and visual-spatial memory. All of the subtraction problems required…

  19. Operational momentum in large-number addition and subtraction by 9-month-olds.

    PubMed

    McCrink, Koleen; Wynn, Karen

    2009-08-01

    Recent studies on nonsymbolic arithmetic have illustrated that under conditions that prevent exact calculation, adults display a systematic tendency to overestimate the answers to addition problems and underestimate the answers to subtraction problems. It has been suggested that this operational momentum results from exposure to a culture-specific practice of representing numbers spatially; alternatively, the mind may represent numbers in spatial terms from early in development. In the current study, we asked whether operational momentum is present during infancy, prior to exposure to culture-specific representations of numbers. Infants (9-month-olds) were shown videos of events involving the addition or subtraction of objects with three different types of outcomes: numerically correct, too large, and too small. Infants looked significantly longer only at those incorrect outcomes that violated the momentum of the arithmetic operation (i.e., at too-large outcomes in subtraction events and too-small outcomes in addition events). The presence of operational momentum during infancy indicates developmental continuity in the underlying mechanisms used when operating over numerical representations.

  20. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    PubMed

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., a relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP), which uses the background modeled from the original input frames as the long-term reference, and the background difference prediction (BDP), which predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency by using the higher-quality background as the reference, whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting the background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio of the AVC (MPEG-4 Advanced Video Coding) high profile on surveillance videos, with only slightly higher encoding complexity. Moreover, for foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
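The gain from background difference prediction can be illustrated numerically: for a foreground-background-hybrid block, subtracting the modeled background before prediction shrinks the residual that must be entropy-coded. The 8x8 blocks and the zero predictor in the difference domain below are illustrative assumptions, not the BMAP codec itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8x8 block: a static background plus a moving object.
background = rng.integers(100, 110, size=(8, 8)).astype(float)
current = background.copy()
current[:4, :4] += 60.0          # foreground occupies the top-left quadrant

# Conventional inter reference: a previous frame whose object sat elsewhere.
previous = background.copy()
previous[4:, 4:] += 60.0

# Plain inter prediction residual (current minus previous frame).
residual_inter = current - previous

# Background difference prediction (BDP): subtract the modeled background
# first; the static part cancels exactly and only the foreground remains.
residual_bdp = current - background

cost_inter = np.abs(residual_inter).sum()   # two quadrants of |60| -> 1920
cost_bdp = np.abs(residual_bdp).sum()       # one quadrant of |60|  -> 960
print(cost_inter, cost_bdp)
```

The moved object appears twice in the plain inter residual (once where it is, once where it was), while the background-difference residual contains it only once; halving the residual energy is what makes BDP attractive for hybrid blocks.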

  1. Noise covariance incorporated MEG-MUSIC algorithm: a method for multiple-dipole estimation tolerant of the influence of background brain activity.

    PubMed

    Sekihara, K; Poeppel, D; Marantz, A; Koizumi, H; Miyashita, Y

    1997-09-01

    This paper proposes a method of localizing multiple current dipoles from spatio-temporal biomagnetic data. The method is based on the multiple signal classification (MUSIC) algorithm and is tolerant of the influence of background brain activity. In this method, the noise covariance matrix is estimated using a portion of the data that contains noise, but does not contain any signal information. Then, a modified noise subspace projector is formed using the generalized eigenvectors of the noise and measured-data covariance matrices. The MUSIC localizer is calculated using this noise subspace projector and the noise covariance matrix. The results from a computer simulation have verified the effectiveness of the method. The method was then applied to source estimation for auditory-evoked fields elicited by syllable speech sounds. The results strongly suggest the method's effectiveness in removing the influence of background activity.
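The noise-covariance-incorporated localizer can be sketched with synthetic array data; whitening by the Cholesky factor of the noise covariance is equivalent to working with the generalized eigenvectors of the measured-data and noise covariance matrices. The lead-field model, dimensions, and noise levels below are invented for illustration and are not the paper's MEG forward model:

```python
import numpy as np

rng = np.random.default_rng(1)
m, t = 8, 2000                          # sensors, time samples

def leadfield(theta):
    # Hypothetical sensor gains for a candidate source parameter theta.
    return np.cos(np.arange(m) * theta) + 0.1

# One source at theta = 0.7 plus correlated background activity with
# covariance C (in the paper, C is estimated from a signal-free epoch).
a_true = leadfield(0.7)
A = 0.2 * rng.normal(size=(m, m))
C = A @ A.T + 0.5 * np.eye(m)
noise = rng.multivariate_normal(np.zeros(m), C, size=t).T
data = np.outer(a_true, rng.normal(size=t)) + noise
R = data @ data.T / t                   # measured-data covariance

# Whitening by the Cholesky factor of C is equivalent to the generalized
# eigendecomposition of (R, C); small eigenvalues span the noise subspace.
W = np.linalg.inv(np.linalg.cholesky(C))
_, U = np.linalg.eigh(W @ R @ W.T)
En = U[:, :-1]                          # one source -> m-1 noise eigenvectors

def localizer(theta):
    a = W @ leadfield(theta)
    a = a / np.linalg.norm(a)
    # Peaks where the whitened lead field is orthogonal to the noise subspace.
    return 1.0 / (np.linalg.norm(En.T @ a) ** 2 + 1e-12)

thetas = np.linspace(0.1, 1.5, 141)
est = thetas[int(np.argmax([localizer(th) for th in thetas]))]
print(round(est, 2))
```

Because the noise is whitened by its own covariance rather than assumed white, the correlated background activity does not masquerade as extra signal subspace dimensions, which is the point of the modified projector.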

  2. LWIR pupil imaging and prospects for background compensation

    NASA Astrophysics Data System (ADS)

    LeVan, Paul; Sakoglu, Ünal; Stegall, Mark; Pierce, Greg

    2015-08-01

    A previous paper described LWIR Pupil Imaging with a sensitive, low-flux focal plane array, and behavior of this type of system for higher flux operations as understood at the time. We continue this investigation, and report on a more detailed characterization of the system over a broad range of pixel fluxes. This characterization is then shown to enable non-uniformity correction over the flux range, using a standard approach. Since many commercial tracking platforms include a "guider port" that accepts pulse width modulation (PWM) error signals, we have also investigated a variation on the use of this port to "dither" the tracking platform in synchronization with the continuous collection of infrared images. The resulting capability has a broad range of applications that extend from generating scene motion in the laboratory for quantifying performance of "realtime, scene-based non-uniformity correction" approaches, to effectuating subtraction of bright backgrounds by alternating viewing aspect between a point source and adjacent, source-free backgrounds.

  3. Digital Sound Synthesis Algorithms: a Tutorial Introduction and Comparison of Methods

    NASA Astrophysics Data System (ADS)

    Lee, J. Robert

The objectives of the dissertation are to provide both a compendium of sound-synthesis methods with detailed descriptions and sound examples, as well as a comparison of the relative merits of each method based on ease of use, observed sound quality, execution time, and data storage requirements. The methods are classified under the general headings of wavetable-lookup synthesis, additive synthesis, subtractive synthesis, nonlinear methods, and physical modelling. The nonlinear methods comprise a large group that ranges from the well-known frequency-modulation synthesis to waveshaping. The final category explores computer modelling of real musical instruments and includes numerical and analytical solutions to the classical wave equation of motion, along with some of the more sophisticated time-domain models that are possible through the prudent combination of simpler synthesis techniques. The dissertation is intended to be understandable by a musician who is mathematically literate but who does not necessarily have a background in digital signal processing. With this limitation in mind, a brief and somewhat intuitive description of digital sampling theory is provided in the introduction. Other topics such as filter theory are discussed as the need arises. By employing each of the synthesis methods to produce the same type of sound, interesting comparisons can be made. For example, a struck string sound, such as that typical of a piano, can be produced by algorithms in each of the synthesis classifications. Many sounds, however, are peculiar to a single algorithm and must be examined independently. Psychoacoustic studies were conducted as an aid in the comparison of the sound quality of several implementations of the synthesis algorithms. Other psychoacoustic experiments were conducted to supplement the established notions of which timbral issues are important in the re-synthesis of the sounds of acoustic musical instruments.
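Among the nonlinear methods surveyed, frequency-modulation synthesis is compact enough to sketch directly. The fragment below is a minimal Chowning-style illustration with made-up carrier/modulator parameters, not an excerpt from the dissertation's implementations:

```python
import numpy as np

sr = 44100
t = np.arange(sr // 2) / sr               # half a second of samples
fc, fm, mod_index = 1000.0, 100.0, 1.0    # illustrative parameters

# Carrier whose instantaneous phase is modulated by a second sinusoid.
tone = np.sin(2 * np.pi * fc * t + mod_index * np.sin(2 * np.pi * fm * t))

# FM places sidebands at fc +/- k*fm with Bessel-function amplitudes, so
# the spectrum shows components at 900, 1000, 1100 Hz, and so on.
spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(len(tone), 1 / sr)
print(freqs[int(np.argmax(spectrum))])    # -> 1000.0 (J0 dominates for index 1)
```

Raising the modulation index shifts energy from the carrier into higher-order sidebands, which is how a single pair of oscillators yields rich, time-varying timbres at very low computational cost.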

  4. Relation between thallium-201/iodine 123-BMIPP subtraction and fluorine 18 deoxyglucose polar maps in patients with hypertrophic cardiomyopathy.

    PubMed

    Ito, Y; Hasegawa, S; Yamaguchi, H; Yoshioka, J; Uehara, T; Nishimura, T

    2000-01-01

    Clinical studies have shown discrepancies in the distribution of thallium-201 and iodine 123-beta-methyl-iodophenylpentadecanoic acid (BMIPP) in patients with hypertrophic cardiomyopathy (HCM). Myocardial uptake of fluorine 18 deoxyglucose (FDG) is increased in the hypertrophic area in HCM. We examined whether the distribution of a Tl-201/BMIPP subtraction polar map correlates with that of an FDG polar map. We normalized to maximum count each Tl-201 and BMIPP bull's-eye polar map of 6 volunteers and obtained a standard Tl-201/BMIPP subtraction polar map by subtracting a normalized BMIPP bull's-eye polar map from a normalized Tl-201 bull's-eye polar map. The Tl-201/BMIPP subtraction polar map was then applied to 8 patients with HCM (mean age 65+/-12 years) to evaluate the discrepancy between Tl-201 and BMIPP distribution. We compared the Tl-201/BMIPP subtraction polar map with an FDG polar map. In patients with HCM, the Tl-201/BMIPP subtraction polar map showed a focal uptake pattern in the hypertrophic area similar to that of the FDG polar map. By quantitative analysis, the severity score of the Tl-201/BMIPP subtraction polar map was significantly correlated with the percent dose uptake of the FDG polar map. These results suggest that this new quantitative method may be an alternative to FDG positron emission tomography for the routine evaluation of HCM.

  5. Addition and subtraction operation of optical orbital angular momentum with dielectric metasurfaces

    NASA Astrophysics Data System (ADS)

    Yi, Xunong; Li, Ying; Ling, Xiaohui; Liu, Yachao; Ke, Yougang; Fan, Dianyuan

    2015-12-01

    In this work, we propose a simple approach to realize addition and subtraction operation of optical orbital angular momentum (OAM) based on dielectric metasurfaces. The spin-orbit interaction of light in spatially inhomogeneous and anisotropic metasurfaces results in the spin-to-orbital angular momentum conversion. The subtraction system of OAM consists of two cascaded metasurfaces, while the addition system of OAM is constituted by inserting a half waveplate (HWP) between the two metasurfaces. Our experimental results are in good agreement with the theoretical calculation. These results could be useful for OAM-carrying beams applied in optical communication, information processing, etc.
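The charge bookkeeping behind the cascaded-metasurface scheme can be illustrated with a scalar model. This ignores the polarization (Jones-matrix) details of the actual spin-orbit conversion; the assumed rule, that a metasurface of charge q imprints a spiral phase exp(+/- i*2q*theta) with the sign set by the input handedness, and that the HWP flips that sign, is an illustrative simplification:

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 4097)   # closed loop around the beam axis

def charge(field):
    """Topological charge: total phase accumulated around the loop / 2*pi."""
    ph = np.unwrap(np.angle(field))
    return int(round((ph[-1] - ph[0]) / (2 * np.pi)))

beam = np.exp(1j * 2 * theta)             # input beam carrying OAM l = +2

# Assumed model: each metasurface of charge q imprints exp(+/- i*2q*theta);
# without the HWP the second surface acts with the opposite sign
# (subtraction), and inserting the HWP flips the handedness so both
# surfaces add charge.
q1, q2 = 3, 1
subtracted = beam * np.exp(1j * 2 * q1 * theta) * np.exp(-1j * 2 * q2 * theta)
added = beam * np.exp(1j * 2 * q1 * theta) * np.exp(1j * 2 * q2 * theta)
print(charge(subtracted), charge(added))  # -> 6 10
```

Counting the accumulated phase around a closed loop is also how the topological charge of the output beam would be verified experimentally, e.g., from interferograms.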

  6. Deblurring in digital tomosynthesis by iterative self-layer subtraction

    NASA Astrophysics Data System (ADS)

    Youn, Hanbean; Kim, Jee Young; Jang, SunYoung; Cho, Min Kook; Cho, Seungryong; Kim, Ho Kyung

    2010-04-01

Recent developments in large-area flat-panel detectors have brought renewed attention to tomosynthesis technology in multiplanar x-ray imaging. However, the typical shift-and-add (SAA) or backprojection reconstruction method is notably plagued by a lack of sharpness in the reconstructed images because of blur artifacts, i.e., the superposition of out-of-plane structures. In this study, we have devised a simple, intuitive method to reduce the blur artifact based on an iterative approach. This method repeats a forward and backward projection procedure to determine the blur artifact affecting the plane-of-interest (POI), and then subtracts it from the POI. The proposed method does not include any Fourier-domain operations, hence avoiding Fourier-domain-originated artifacts. We describe the concept of self-layer subtractive tomosynthesis and demonstrate its performance with numerical simulation and experiments. A comparative analysis with conventional methods, such as the SAA and filtered backprojection methods, is presented.
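The shift-and-add reconstruction that the paper improves upon is easy to sketch, and the sketch also exhibits the blur the authors subtract: focusing one plane leaves out-of-plane features smeared across it. The 1-D geometry and per-view shift values below are invented for illustration:

```python
import numpy as np

# Two 1-D "layers" at different depths, imaged from several source positions.
n, n_views = 64, 9
layer_a = np.zeros(n); layer_a[20] = 1.0     # feature in plane A
layer_b = np.zeros(n); layer_b[45] = 1.0     # feature in plane B

# In a linear-scan geometry, a feature's lateral shift per view grows with
# its depth; assume 1 px/view for plane A and 3 px/view for plane B.
projections = []
for v in range(-(n_views // 2), n_views // 2 + 1):
    projections.append((v, np.roll(layer_a, 1 * v) + np.roll(layer_b, 3 * v)))

def saa(projections, depth_shift):
    """Shift-and-add: undo one depth's shift, then average. Features at that
    depth align and stay sharp; out-of-plane features smear into blur."""
    acc = np.zeros(n)
    for v, p in projections:
        acc += np.roll(p, -depth_shift * v)
    return acc / len(projections)

plane_a = saa(projections, 1)
print(plane_a[20], plane_a[45])   # in-focus peak vs smeared out-of-plane residue
```

The residue from plane B (amplitude 1/9 spread over many pixels of plane A's slice) is exactly the kind of superposition artifact that an iterative self-layer subtraction would estimate and remove.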

  7. Baseline-Subtraction-Free (BSF) Damage-Scattered Wave Extraction for Stiffened Isotropic Plates

    NASA Technical Reports Server (NTRS)

    He, Jiaze; Leser, Patrick E.; Leser, William P.

    2017-01-01

    Lamb waves enable long distance inspection of structures for health monitoring purposes. However, this capability is diminished when applied to complex structures where damage-scattered waves are often buried by scattering from various structural components or boundaries in the time-space domain. Here, a baseline-subtraction-free (BSF) inspection concept based on the Radon transform (RT) is proposed to identify and separate these scattered waves from those scattered by damage. The received time-space domain signals can be converted into the Radon domain, in which the scattered signals from structural components are suppressed into relatively small regions such that damage-scattered signals can be identified and extracted. In this study, a piezoelectric wafer and a linear scan via laser Doppler vibrometer (LDV) were used to excite and acquire the Lamb-wave signals in an aluminum plate with multiple stiffeners. Linear and inverse linear Radon transform algorithms were applied to the direct measurements. The results demonstrate the effectiveness of the Radon transform as a reliable extraction tool for damage-scattered waves in a stiffened aluminum plate and also suggest the possibility of generalizing this technique for application to a wide variety of complex, large-area structures.
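The way a linear Radon transform (slant-stack) concentrates a constant-slope wavefront into a small region of the Radon domain can be sketched on a synthetic time-space record; the geometry, slope, and amplitudes below are invented for illustration:

```python
import numpy as np

# Synthetic time-space record: nx receiver positions by nt time samples.
nx, nt = 32, 200
d = np.zeros((nx, nt))

# A non-dispersive wavefront with constant slope p0 (samples per trace),
# like a stiffener or boundary reflection crossing the scan line.
p0, t0 = 3, 20
for ix in range(nx):
    d[ix, t0 + p0 * ix] = 1.0

def linear_radon(data, slopes):
    """Slant-stack (linear Radon): sum along lines t = tau + p * x."""
    nx, nt = data.shape
    out = np.zeros((len(slopes), nt))
    for ip, p in enumerate(slopes):
        for ix in range(nx):
            shift = p * ix
            if shift < nt:
                out[ip, : nt - shift] += data[ix, shift:]
    return out

slopes = list(range(8))
r = linear_radon(d, slopes)
ip, tau = np.unravel_index(np.argmax(r), r.shape)
print(ip, tau)   # -> 3 20: the whole wavefront collapses to one cell
```

Once the structural scattering occupies a compact region of the (p, tau) plane, it can be masked there and inverse-transformed away, leaving the damage-scattered arrivals, which is the baseline-subtraction-free idea in miniature.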

  8. Highly noise-tolerant hybrid algorithm for phase retrieval from a single-shot spatial carrier fringe pattern

    NASA Astrophysics Data System (ADS)

    Dong, Zhichao; Cheng, Haobo

    2018-01-01

A highly noise-tolerant hybrid algorithm (NTHA) is proposed in this study for phase retrieval from a single-shot spatial carrier fringe pattern (SCFP), which effectively combines the merits of the spatial carrier phase shift method and the two-dimensional continuous wavelet transform (2D-CWT). NTHA first extracts three phase-shifted fringe patterns from the SCFP with one-pixel malposition; it then calculates phase gradients by subtracting the reference phase from the other two target phases, which are retrieved from the three phase-shifted fringe patterns by 2D-CWT; finally, it reconstructs the phase map by a least-squares gradient integration method. Its typical characteristics include, but are not limited to: (1) it does not require the spatial carrier to be constant; (2) the subtraction mitigates the edge errors of 2D-CWT; (3) it is highly noise-tolerant, because not only is 2D-CWT noise-insensitive, but the noise in the fringe pattern also does not directly take part in the phase reconstruction, as it does in the previous hybrid algorithm. Its feasibility and performance are validated extensively by simulations and by comparative experiments against the temporal phase shift, Fourier transform, and 2D-CWT methods.

  9. Nagy-Soper Subtraction: a Review

    NASA Astrophysics Data System (ADS)

    Robens, Tania

    2013-07-01

We present a review of an alternative NLO subtraction scheme, based on the splitting kernels of an improved parton shower, that promises to facilitate the inclusion of higher-order corrections into Monte Carlo event generators. We give expressions for the scheme for massless emitters and point to work on the extension to massive cases. As an example, we show results for the C parameter of the process e+e-→3 jets at NLO, which have recently been published as a verification of this scheme. We also provide analytic expressions for integrated counterterms that have not been presented in previous work, and comment on the possibility of analytic approximations for the remaining numerical integrals.

  10. Space moving target detection and tracking method in complex background

    NASA Astrophysics Data System (ADS)

    Lv, Ping-Yue; Sun, Sheng-Li; Lin, Chang-Qing; Liu, Gao-Rui

    2018-06-01

The background of space-borne detectors in a real space-based environment is extremely complex and the signal-to-clutter ratio is very low (SCR ≈ 1), which makes detecting space moving targets difficult. To solve this problem, an algorithm combining background suppression based on a two-dimensional least mean square filter (TDLMS) with target enhancement based on neighborhood gray-scale difference (GSD) is proposed in this paper. The latter can filter out most of the residual background clutter left by the former, such as cloud edges. Through this procedure, both global and local SCR obtain substantial improvement, indicating that the target has been greatly enhanced. After removing the detector's inherent clutter region through connected-domain processing, the image contains only the target point and isolated noise, and the isolated noise can be filtered out effectively through multi-frame association. The proposed algorithm has been compared with several state-of-the-art algorithms on moving target detection and tracking tasks. The experimental results show that this algorithm performs best in terms of SCR gain, background suppression factor (BSF), and detection results.
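The TDLMS background-suppression stage can be sketched as a small adaptive predictor: each pixel is predicted from its neighborhood with weights updated by the LMS rule, so the spatially correlated clutter is predicted well and cancels in the error image, while an uncorrelated point target survives. The synthetic scene, kernel size, and step size below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic frame: smooth clutter plus a dim point target at (40, 40).
n = 64
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
frame = 50 * np.sin(4 * x) * np.cos(3 * y) + 100 + rng.normal(0, 0.5, (n, n))
frame[40, 40] += 12.0

def tdlms(img, ksize=5, mu=1e-7):
    """2-D LMS filter: adaptively predict each pixel from its neighborhood;
    the prediction error suppresses the spatially correlated background."""
    pad = ksize // 2
    w = np.full((ksize, ksize), 1.0 / (ksize * ksize))
    w[pad, pad] = 0.0                   # never use the pixel itself
    padded = np.pad(img, pad, mode='edge')
    err = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + ksize, j:j + ksize]
            e = img[i, j] - float((w * patch).sum())
            err[i, j] = e
            w = w + mu * e * patch      # LMS weight update
            w[pad, pad] = 0.0
    return err

residual = tdlms(frame)
print(np.unravel_index(np.argmax(np.abs(residual)), residual.shape))
```

In the error image the target stands out at roughly its injected amplitude while the clutter collapses to near the noise floor; the paper's GSD enhancement and multi-frame association then clean up what little residual clutter remains.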

  11. [Correction of posttraumatic thoracolumbar kyphosis with modified pedicle subtraction osteotomy].

    PubMed

    Chen, Fei; Kang, Yijun; Zhou, Bin; Dai, Zhehao

    2016-11-28

To evaluate the efficacy and safety of modified pedicle subtraction osteotomy for the treatment of old thoracolumbar fracture with kyphosis.
 Methods: From January 2003 to January 2013, 58 patients with thoracolumbar kyphosis who underwent modified pedicle subtraction osteotomy were reviewed. Among them, 45 underwent an initial operation and 13 underwent revision surgery. Preoperative and postoperative kyphotic Cobb's angle, back pain scores, and the incidence of complications were assessed using the visual analogue scale (VAS) and Oswestry disability index (ODI).
 Results: Mean follow-up duration was 42 months (range, 24-60 months). Average operative time was 258 min (range, 190-430 min), and average blood loss was 950 mL (range, 600-1 600 mL). All patients showed significant improvement in function and self-image, with a mean kyphosis correction of 17.9°± 4.3°. The VAS score for low back pain decreased by 3.1±0.6, and the ODI dropped by 25.3%±5.5%. Three patients (5.2%) suffered anterior thigh numbness and recovered after 3 months of follow-up. Complications occurred in 19 patients, including 12 with cerebrospinal fluid leak, 4 with superficial wound infection, and 3 with urinary tract infection. All complications were managed properly and none required reoperation.
 Conclusion: Modified pedicle subtraction osteotomy is a safe and effective technique for the treatment of old thoracolumbar fracture with kyphosis.

  12. An interactive ontology-driven information system for simulating background radiation and generating scenarios for testing special nuclear materials detection algorithms

    DOE PAGES

    Sorokine, Alexandre; Schlicher, Bob G.; Ward, Richard C.; ...

    2015-05-22

This paper describes an original approach to generating scenarios for the purpose of testing the algorithms used to detect special nuclear materials (SNM) that incorporates the use of ontologies. Separating the signal of SNM from the background requires sophisticated algorithms. To assist in developing such algorithms, there is a need for scenarios that capture a very wide range of variables affecting the detection process, depending on the type of detector being used. To provide such a capability, we developed an ontology-driven information system (ODIS) for generating scenarios that can be used in testing algorithms for SNM detection. The ontology-driven scenario generator (ODSG) is an ODIS based on information supplied by subject matter experts and other documentation. The details of the creation of the ontology, the development of the ontology-driven information system, and the design of the web user interface (UI) are presented along with specific examples of scenarios generated using the ODSG. We demonstrate that the paradigm behind the ODSG is capable of addressing the problem of semantic complexity at both the user and developer levels. Compared to traditional approaches, an ODIS provides benefits such as faithful representation of the users' domain conceptualization, simplified management of very large and semantically diverse datasets, and the ability to handle frequent changes to the application and the UI. Furthermore, the approach makes possible the generation of a much larger number of specific scenarios based on limited user-supplied information.

  13. Hybrid active contour model for inhomogeneous image segmentation with background estimation

    NASA Astrophysics Data System (ADS)

    Sun, Kaiqiong; Li, Yaqin; Zeng, Shan; Wang, Jun

    2018-03-01

    This paper proposes a hybrid active contour model for inhomogeneous image segmentation. The data term of the energy function in the active contour consists of a global region fitting term in a difference image and a local region fitting term in the original image. The difference image is obtained by subtracting the background from the original image. The background image is dynamically estimated from a linear filtered result of the original image on the basis of the varying curve locations during the active contour evolution process. As in existing local models, fitting the image to local region information makes the proposed model robust against an inhomogeneous background and maintains the accuracy of the segmentation result. Furthermore, fitting the difference image to the global region information makes the proposed model robust against the initial contour location, unlike existing local models. Experimental results show that the proposed model can obtain improved segmentation results compared with related methods in terms of both segmentation accuracy and initial contour sensitivity.
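The background-estimation step that produces the difference image can be sketched with a wide mean filter standing in for the paper's linear filter (the synthetic scene, filter size, and threshold below are assumptions for illustration, and the dynamic, curve-dependent updating is omitted):

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(3)

# Inhomogeneous image: a bright disc on a strong illumination gradient.
n = 100
yy, xx = np.mgrid[0:n, 0:n]
obj = ((xx - 30) ** 2 + (yy - 60) ** 2 < 10 ** 2).astype(float)
img = 0.8 * xx / n + 0.4 * obj + rng.normal(0, 0.02, (n, n))

# Estimate the background with a wide linear (mean) filter, standing in
# for the dynamically estimated background, and form the difference image.
background = uniform_filter(img, size=41)
diff = img - background

# A single global threshold now separates the object on `diff`, even
# though no global threshold works on the raw gradient-corrupted image.
mask = diff > 0.15
print(bool(mask[60, 30]), bool(mask[20, 80]))  # object pixel vs background pixel
```

Because the slowly varying illumination is absorbed into the estimated background, global region fitting on the difference image stays meaningful, which is what gives the hybrid model its robustness to the initial contour location.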

  14. Adult Learners' Knowledge of Fraction Addition and Subtraction

    ERIC Educational Resources Information Center

    Muckridge, Nicole A.

    2017-01-01

    The purpose of this study was to examine adult developmental mathematics (ADM) students' knowledge of fraction addition and subtraction as it relates to their demonstrated fraction schemes and ability to disembed in multiplicative contexts with whole numbers. The study was conducted using a mixed methods sequential explanatory design. In the first…

  15. Curricular Approaches to Connecting Subtraction to Addition and Fostering Fluency with Basic Differences in Grade 1

    ERIC Educational Resources Information Center

    Baroody, Arthur J.

    2016-01-01

    Six widely used US Grade 1 curricula do not adequately address the following three developmental prerequisites identified by a proposed learning trajectory for the meaningful learning of the subtraction-as-addition strategy (e.g., for 13-8 think "what + 8 = 13?"): (a) reverse operations (adding 8 is undone by subtracting 8); (b) common…

  16. Continuous-variable measurement-device-independent quantum key distribution with virtual photon subtraction

    NASA Astrophysics Data System (ADS)

    Zhao, Yijia; Zhang, Yichen; Xu, Bingjie; Yu, Song; Guo, Hong

    2018-04-01

    The method of improving the performance of continuous-variable quantum key distribution protocols by postselection has been recently proposed and verified. In continuous-variable measurement-device-independent quantum key distribution (CV-MDI QKD) protocols, the measurement results are obtained from untrusted third party Charlie. There is still not an effective method of improving CV-MDI QKD by the postselection with untrusted measurement. We propose a method to improve the performance of coherent-state CV-MDI QKD protocol by virtual photon subtraction via non-Gaussian postselection. The non-Gaussian postselection of transmitted data is equivalent to an ideal photon subtraction on the two-mode squeezed vacuum state, which is favorable to enhance the performance of CV-MDI QKD. In CV-MDI QKD protocol with non-Gaussian postselection, two users select their own data independently. We demonstrate that the optimal performance of the renovated CV-MDI QKD protocol is obtained with the transmitted data only selected by Alice. By setting appropriate parameters of the virtual photon subtraction, the secret key rate and tolerable excess noise are both improved at long transmission distance. The method provides an effective optimization scheme for the application of CV-MDI QKD protocols.

  17. Correction of Atmospheric Haze in RESOURCESAT-1 LISS-4 MX Data for Urban Analysis: AN Improved Dark Object Subtraction Approach

    NASA Astrophysics Data System (ADS)

    Mustak, S.

    2013-09-01

The correction of atmospheric effects is essential because the visible bands of shorter wavelength are strongly affected by atmospheric scattering, especially Rayleigh scattering. The objectives of the paper are to find the haze values present in all spectral bands and to correct them for urban analysis. In this paper, the Improved Dark Object Subtraction method of P. Chavez (1988) is applied to correct atmospheric haze in the Resourcesat-1 LISS-4 multispectral satellite image. Dark Object Subtraction is a very simple image-based method of atmospheric haze correction which assumes that at least a few pixels within an image should be black (0% reflectance); such black pixels, termed dark objects, are clear water bodies and shadows whose DN values are zero or close to zero in the image. Simple Dark Object Subtraction is a first-order atmospheric correction, whereas Improved Dark Object Subtraction corrects the haze in terms of atmospheric scattering and path radiance based on a power law of the relative scattering effect of the atmosphere. The haze values extracted using the Simple Dark Object Subtraction method for the green band (Band 2), red band (Band 3), and NIR band (Band 4) are 40, 34, and 18, whereas those extracted using the Improved Dark Object Subtraction method are 40, 18.02, and 11.80 for the aforesaid bands. It is concluded that the haze values extracted by the Improved Dark Object Subtraction method provide more realistic results than those of the Simple Dark Object Subtraction method.
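The mechanics of the improved method can be sketched as follows. The band wavelengths, the scattering exponent, and the choice of starting band below are illustrative assumptions; the paper's own relative-scattering model and calibration produce its reported 40, 18.02, and 11.80 values:

```python
import numpy as np

# Illustrative band-centre wavelengths (micrometres); assumed values,
# not taken from the LISS-4 specification.
wavelengths = {"green": 0.555, "red": 0.650, "nir": 0.815}

# Simple DOS: the haze DN per band is its dark-object (minimum) DN.
simple_haze = {"green": 40.0, "red": 34.0, "nir": 18.0}   # from the abstract

# Improved DOS (Chavez 1988): choose a starting band, then predict the
# other bands' haze from a relative-scattering power law DN_haze ~ lambda**-s
# (s = 4 approximates a very clear, Rayleigh-dominated atmosphere).
s, ref = 4, "green"
improved_haze = {
    b: simple_haze[ref] * (wavelengths[b] / wavelengths[ref]) ** -s
    for b in wavelengths
}

def correct(dn, haze):
    """Subtract the haze DN from a band, clipping at zero."""
    return np.clip(np.asarray(dn, dtype=float) - haze, 0.0, None)

print({b: round(h, 1) for b, h in improved_haze.items()})
```

With these assumed inputs the power law pulls the red and NIR haze well below the simple dark-object values, mirroring the trend reported above; the exact numbers depend on the scattering exponent and starting haze chosen.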

  18. [X-ray semiotics of sialolithiasis in functional digital subtraction sialography].

    PubMed

    Iudin, L A; Kondrashin, S A; Afanas'ev, V V; Shchipskiĭ, A V

    1995-01-01

    Twenty-seven patients with sialolithiasis were examined using functional subtraction sialography developed by the authors. Differential diagnostic signs characterizing the degree of involvement of the salivary gland were defined. High efficacy of the method helps correctly plan the treatment strategy.

  19. Analytical optimization of digital subtraction mammography with contrast medium using a commercial unit.

    PubMed

    Rosado-Méndez, I; Palma, B A; Brandan, M E

    2008-12-01

Contrast-medium-enhanced digital mammography (CEDM) is an image subtraction technique which might help unmask lesions embedded in very dense breasts. Previous works have established the feasibility of CEDM and the imperative need for radiological optimization. This work presents an extension of a former analytical formalism to predict the contrast-to-noise ratio (CNR) in subtracted mammograms. The goal is to optimize the radiological parameters available in a clinical mammographic unit (x-ray tube anode/filter combination, voltage, and loading) by maximizing CNR and minimizing total mean glandular dose (D(gT)), simulating the experimental application of an iodine-based contrast medium and the image subtraction under dual-energy nontemporal and single- or dual-energy temporal modalities. Total breast-entrance air kerma is limited to a fixed 8.76 mGy (1 R, similar to screening studies). Mathematical expressions obtained from the formalism are evaluated using computed mammographic x-ray spectra attenuated by an adipose/glandular breast containing an elongated structure filled with an iodinated solution in various concentrations. A systematic study of contrast, its associated variance, and CNR for different spectral combinations is performed, concluding in the proposal of optimum x-ray spectra. The linearity between contrast in subtracted images and iodine mass thickness is proven, including the determination of iodine visualization limits based on Rose's detection criterion. Finally, total breast-entrance air kerma is distributed between both images in various proportions in order to maximize the figure of merit CNR2/D(gT). Predicted results indicate the advantage of temporal subtraction (either single- or dual-energy modalities) with optimum parameters corresponding to high-voltage, strongly hardened Rh/Rh spectra.
For temporal techniques, CNR was found to depend mostly on the energy of the iodinated image, and thus reduction in D(gT) could be achieved if the spectral energy

  20. Gadolinium-enhanced magnetic resonance angiography in renal artery stenosis: comparison with digital subtraction angiography.

    PubMed

    Law, Y M; Tay, K H; Gan, Y U; Cheah, F K; Tan, B S

    2008-04-01

    To evaluate the accuracy of gadolinium-enhanced magnetic resonance angiography in assessing renal artery stenosis compared to catheter digital subtraction angiography. Retrospective study. Singapore General Hospital. Records of patients who underwent magnetic resonance angiography as well as digital subtraction angiography for assessment of renal artery stenosis from January 2003 to December 2005 were reviewed. There were 27 patients (14 male, 13 female) with a mean age of 62 (range, 44-77) years. There were 10 patients with renal transplants; their native renal arteries were not evaluated. Each of the two experienced interventional and body magnetic resonance radiologists, who were blinded to the results, reviewed the digital subtraction angiography and magnetic resonance angiography images respectively. Digital subtraction angiography was used as the standard of reference. A total of 39 renal arteries from these 27 patients were evaluated. One of the arteries was previously stented and could not be assessed with magnetic resonance angiography due to severe artefacts. Of the remaining 38 renal arteries, two were graded as normal, seven as having mild stenosis (<50%), eight as having moderate stenosis (> or =50% but <75%), and 21 as having severe stenosis (> or =75%). Magnetic resonance angiography and digital subtraction angiography were concordant in 89% of the arteries; magnetic resonance angiography overestimated the degree of stenosis in 8% and underestimated it in 3% of them. In the evaluation of clinically significant renal artery stenosis (> or =50%) with magnetic resonance angiography, the overall sensitivity, specificity, positive predictive value, and negative predictive value were 97%, 67%, 90%, and 86% respectively. The sensitivity and specificity of magnetic resonance angiography in transplant renal artery stenosis was 100%. CONCLUSION. Our experience suggested that gadolinium-enhanced magnetic resonance angiography is a sensitive non

  1. Suppression Subtractive Hybridization Reveals Transcript Profiling of Chlorella under Heterotrophy to Photoautotrophy Transition

    PubMed Central

    Huang, Jianke; Wang, Weiliang; Yin, Weibo; Hu, Zanmin; Li, Yuanguang

    2012-01-01

    Background Microalgae have been extensively investigated and exploited because of their competitive nutritive bioproducts and biofuel production ability. Chlorella are green algae that can grow well heterotrophically and photoautotrophically. Previous studies proved that shifting from heterotrophy to photoautotrophy in light-induced environments causes photooxidative damage as well as distinct physiologic features that lead to dynamic changes in Chlorella intracellular components, which have great potential in algal health food and biofuel production. However, the molecular mechanisms underlying the trophic transition remain unclear. Methodology/Principal Findings In this study, suppression subtractive hybridization strategy was employed to screen and characterize genes that are differentially expressed in response to the light-induced shift from heterotrophy to photoautotrophy. Expressed sequence tags (ESTs) were obtained from 770 and 803 randomly selected clones among the forward and reverse libraries, respectively. Sequence analysis identified 544 unique genes in the two libraries. The functional annotation of the assembled unigenes demonstrated that 164 (63.1%) from the forward library and 62 (21.8%) from the reverse showed significant similarities with the sequences in the NCBI non-redundant database. The time-course expression patterns of 38 selected differentially expressed genes further confirmed their responsiveness to a diverse trophic status. The majority of the genes enriched in the subtracted libraries were associated with energy metabolism, amino acid metabolism, protein synthesis, carbohydrate metabolism, and stress defense. Conclusions/Significance The data presented here offer the first insights into the molecular foundation underlying the diverse microalgal trophic niche. 
In addition, the results can be used as a reference for unraveling candidate genes associated with the transition of Chlorella from heterotrophy to photoautotrophy, which holds

  2. Instrumental and atmospheric background lines observed by the SMM gamma-ray spectrometer

    NASA Technical Reports Server (NTRS)

    Share, G. H.; Kinzer, R. L.; Strickman, M. S.; Letaw, J. R.; Chupp, E. L.

    1989-01-01

    Preliminary identifications of instrumental and atmospheric background lines detected by the gamma-ray spectrometer on NASA's Solar Maximum Mission satellite (SMM) are presented. The long-term and stable operation of this experiment has provided data of high quality for use in this analysis. Methods are described for identifying radioactive isotopes which use their different decay times. Temporal evolution of the features are revealed by spectral comparisons, subtractions, and fits. An understanding of these temporal variations has enabled the data to be used for detecting celestial gamma-ray sources.

  3. Animal experiments by K-edge subtraction angiography by using SR (abstract)

    NASA Astrophysics Data System (ADS)

    Anno, I.; Akisada, M.; Takeda, T.; Sugishita, Y.; Kakihana, M.; Ohtsuka, S.; Nishimura, K.; Hasegawa, S.; Takenaka, E.; Hyodo, K.; Ando, M.

    1989-07-01

    Ischemic heart disease is one of the most common and lethal diseases among elderly people worldwide, and it is usually diagnosed by transarterial selective coronary arteriography. However, that procedure is rather invasive and somewhat dangerous, so selective coronary arteriography is not feasible for prospective screening of coronary occlusive heart disease. Conventional digital subtraction angiography (DSA) is widely known as a relatively noninvasive and useful technique in diagnosing arterial occlusive disease, especially ischemic heart disease. Conventional intravenous subtraction angiography by temporal subtraction, however, has several problems when applied to moving objects. A digital subtraction method using high-speed switching above and below the iodine K edge could be the ideal solution. We intend to build a synchrotron radiation digital K-edge subtraction angiography system along these lines and to apply it to human coronary ischemic disease on an outpatient basis. The principles and experimental systems have already been described in detail by our coworkers. Our prototype experimental system is situated at the AR (accumulation ring) of the TRISTAN project for high-energy physics. The available beam size is 70 mm by 120 mm. The electron energy of the AR is 6.5 GeV, and the average beam current is approximately 10 mA. This paper presents the animal experiments performed with our K-edge subtraction system and discusses some problems and technical difficulties. Three dogs, weighing approximately 15 kg each, were examined to evaluate the prototype synchrotron radiation DSA unit that we are now constructing. The dogs were anaesthetized with intravenous pentobarbital sodium (30 mg/kg). A 6-French (1.52 mm i.d.) pigtail catheter with multiple side holes was introduced via the right femoral vein into the right atrium by the cutdown technique under conventional x-ray fluoroscopic control. Respiration of the dogs was

  4. Studying extragalactic background fluctuations with the Cosmic Infrared Background ExpeRiment 2 (CIBER-2)

    NASA Astrophysics Data System (ADS)

    Lanz, Alicia; Arai, Toshiaki; Battle, John; Bock, James; Cooray, Asantha; Hristov, Viktor; Korngut, Phillip; Lee, Dae Hee; Mason, Peter; Matsumoto, Toshio; Matsuura, Shuji; Morford, Tracy; Onishi, Yosuke; Shirahata, Mai; Tsumura, Kohji; Wada, Takehiko; Zemcov, Michael

    2014-08-01

    Fluctuations in the extragalactic background light trace emission from the history of galaxy formation, including the emission from the earliest sources from the epoch of reionization. A number of recent near-infrared measurements show excess spatial power at large angular scales inconsistent with models of z < 5 emission from galaxies. These measurements have been interpreted as arising from either redshifted stellar and quasar emission from the epoch of reionization, or the combined intra-halo light from stars thrown out of galaxies during merging activity at lower redshifts. Though astrophysically distinct, both interpretations arise from faint, low surface brightness source populations that are difficult to detect except by statistical approaches using careful observations with suitable instruments. The key to determining the source of these background anisotropies will be wide-field imaging measurements spanning multiple bands from the optical to the near-infrared. The Cosmic Infrared Background ExpeRiment 2 (CIBER-2) will measure spatial anisotropies in the extragalactic infrared background caused by cosmological structure using six broad spectral bands. The experiment uses three 2048 x 2048 Hawaii-2RG near-infrared arrays in three cameras coupled to a single 28.5 cm telescope housed in a reusable sounding rocket-borne payload. A small portion of each array will also be combined with a linear-variable filter to make absolute measurements of the spectrum of the extragalactic background with high spatial resolution for deep subtraction of Galactic starlight. The large field of view and multiple spectral bands make CIBER-2 unique in its sensitivity to fluctuations predicted by models of lower limits on the luminosity of the first stars and galaxies and in its ability to distinguish between primordial and foreground anisotropies. In this paper the scientific motivation for CIBER-2 and details of its first flight instrumentation will be discussed, including

  5. Realization of arithmetic addition and subtraction in a quantum system

    NASA Astrophysics Data System (ADS)

    Um, Mark; Zhang, Junhua; Lv, Dingshun; Lu, Yao; An, Shuoming; Zhang, Jing-Ning; Kim, Kihwan; Kim, M. S.; Nha, Hyunchul

    2015-05-01

    We report an experimental realization of conventional arithmetic on a bosonic system, in particular, phonons of a 171Yb+ ion trapped in a harmonic potential. Conventional addition and subtraction are quite different from the quantum operations of creation â† and annihilation â, which carry an extra √n factor due to the symmetric nature of bosons. In our realization, the addition and subtraction do not depend on the number of particles originally in the system and nearly deterministically bring a classical state into a non-classical state. We implement these operations by applying the scheme of transitionless shortcuts to adiabaticity to the anti-Jaynes-Cummings transition. This technique enables quantum state engineering and can be applied to many other experimental platforms. This work was supported by the National Basic Research Program of China under Grants No. 2011CBA00300 (No. 2011CBA00301) and by the National Natural Science Foundation of China under Grant No. 11374178.

  6. Iodine filter imaging system for subtraction angiography using synchrotron radiation

    NASA Astrophysics Data System (ADS)

    Umetani, K.; Ueda, K.; Takeda, T.; Itai, Y.; Akisada, M.; Nakajima, T.

    1993-11-01

    A new type of real-time imaging system was developed for transvenous coronary angiography. A combination of an iodine filter and a single energy broad-bandwidth X-ray produces two-energy images for the iodine K-edge subtraction technique. X-ray images are sequentially converted to visible images by an X-ray image intensifier. By synchronizing the timing of the movement of the iodine filter into and out of the X-ray beam, two output images of the image intensifier are focused side by side on the photoconductive layer of a camera tube by an oscillating mirror. Both images are read out by electron beam scanning of a 1050-scanning-line video camera within a camera frame time of 66.7 ms. One hundred ninety two pairs of iodine-filtered and non-iodine-filtered images are stored in the frame memory at a rate of 15 pairs/s. In vivo subtracted images of coronary arteries in dogs were obtained in the form of motion pictures.
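
    The iodine K-edge subtraction principle used by this system can be illustrated numerically. The sketch below is not the authors' implementation; it assumes log-transformed detector intensities and uses made-up attenuation values for a synthetic iodine-filled region:

    ```python
    import numpy as np

    def k_edge_subtract(above, below, eps=1e-6):
        """Log-subtraction of images taken above and below the iodine K edge.

        Iodine attenuation jumps sharply at the K edge, so the difference of
        log-transformed intensities cancels background tissue and leaves a
        signal roughly proportional to the iodine column density.
        """
        return np.log(below + eps) - np.log(above + eps)

    # Toy example: uniform background with an iodine-filled "vessel".
    background = np.full((8, 8), 0.8)
    above = background.copy()
    below = background.copy()
    above[3:5, :] *= 0.5   # strong iodine absorption above the K edge
    below[3:5, :] *= 0.9   # weak absorption below the K edge

    iodine_map = k_edge_subtract(above, below)
    ```

    In the vessel rows the difference is large and positive, while the unchanged background cancels to zero, which is why the paired iodine-filtered and non-iodine-filtered frames can be subtracted in real time.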

  7. Application of point-to-point matching algorithms for background correction in on-line liquid chromatography-Fourier transform infrared spectrometry (LC-FTIR).

    PubMed

    Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M

    2010-03-15

    A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on the use of a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user interaction, thus considerably facilitating the application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks. Copyright (c) 2009 Elsevier B.V. All rights reserved.
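
    The matching step can be sketched as follows. This is an illustrative reconstruction rather than the authors' code: the Euclidean point-to-point distance, the comparison window, and the toy eluent spectra are all assumptions:

    ```python
    import numpy as np

    def select_reference(sample_spectrum, reference_set, window):
        """Pick the reference spectrum closest to the sample in a spectral window.

        `window` is a slice over spectral channels used for the point-to-point
        comparison; the best match is subtracted to correct the eluent background.
        """
        seg = sample_spectrum[window]
        dists = [np.linalg.norm(seg - ref[window]) for ref in reference_set]
        best = int(np.argmin(dists))
        return best, sample_spectrum - reference_set[best]

    # Toy data: three eluent backgrounds at different gradient compositions.
    refs = [np.full(100, level) for level in (0.2, 0.5, 0.8)]
    analyte = np.zeros(100)
    analyte[40:60] = 1.0                   # analyte absorption band
    sample = refs[1] + analyte             # eluent background + analyte

    # Compare in an analyte-free region so only the eluent drives the match.
    idx, corrected = select_reference(sample, refs, slice(0, 30))
    ```

    The middle reference is selected and subtracting it leaves only the analyte band, mirroring how the method corrects a changing gradient background.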

  8. Holistic approach for automated background EEG assessment in asphyxiated full-term infants

    NASA Astrophysics Data System (ADS)

    Matic, Vladimir; Cherian, Perumpillichira J.; Koolen, Ninah; Naulaers, Gunnar; Swarte, Renate M.; Govaert, Paul; Van Huffel, Sabine; De Vos, Maarten

    2014-12-01

    Objective. To develop an automated algorithm to quantify background EEG abnormalities in full-term neonates with hypoxic ischemic encephalopathy. Approach. The algorithm classifies 1 h of continuous neonatal EEG (cEEG) into a mild, moderate or severe background abnormality grade. These classes are well established in the literature and a clinical neurophysiologist labeled 272 1 h cEEG epochs selected from 34 neonates. The algorithm is based on adaptive EEG segmentation and mapping of the segments into the so-called segments’ feature space. Three features are suggested and further processing is obtained using a discretized three-dimensional distribution of the segments’ features represented as a 3-way data tensor. Further classification has been achieved using recently developed tensor decomposition/classification methods that reduce the size of the model and extract a significant and discriminative set of features. Main results. Effective parameterization of cEEG data has been achieved resulting in high classification accuracy (89%) to grade background EEG abnormalities. Significance. For the first time, the algorithm for the background EEG assessment has been validated on an extensive dataset which contained major artifacts and epileptic seizures. The demonstrated high robustness, while processing real-case EEGs, suggests that the algorithm can be used as an assistive tool to monitor the severity of hypoxic insults in newborns.

  9. Subtraction Radiography for the Diagnosis of Bone Lesions in Dogs.

    DTIC Science & Technology

    1984-05-31

    Journal of Periodontology, 211 East Chicago Avenue, Room 924, Chicago, IL 60611. Dear Sirs: I am submitting an original research article titled "Subtraction Radiography for the Diagnosis of Bone Lesions in Dogs" solely to the Journal of Periodontology for review and

  10. Performance improvement of continuous-variable quantum key distribution with an entangled source in the middle via photon subtraction

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Liao, Qin; Wang, Yijun; Huang, Duan; Huang, Peng; Zeng, Guihua

    2017-03-01

    A suitable photon-subtraction operation can be exploited to improve the maximal transmission distance of continuous-variable quantum key distribution (CVQKD) in point-to-point quantum communication. It remains unclear, however, whether photon subtraction can improve transmission in practical quantum networks, where the entangled source is located at a third party, which may be controlled by a malicious eavesdropper, instead of at one of the trusted parties controlled by Alice or Bob. In this paper, we show that a solution can come from using a non-Gaussian operation, in particular the photon-subtraction operation, which provides a method to enhance the performance of entanglement-based (EB) CVQKD. Photon subtraction not only lengthens the maximal transmission distance by increasing the signal-to-noise ratio but also can be easily implemented with existing technologies. Security analysis shows that CVQKD with an entangled source in the middle (ESIM) with photon subtraction can substantially increase the secure transmission distance in both direct and reverse reconciliation of the EB-CVQKD scheme, even if the entangled source originates from an untrusted party. Moreover, it can defend against the inner-source attack, a specific attack mounted by an untrusted entangled source in the ESIM framework.

  11. Evaluation of chronic periapical lesions by digital subtraction radiography by using Adobe Photoshop CS: a technical report.

    PubMed

    Carvalho, Fabiola B; Gonçalves, Marcelo; Tanomaru-Filho, Mário

    2007-04-01

    The purpose of this study was to describe a new technique by using Adobe Photoshop CS (San Jose, CA) image-analysis software to evaluate the radiographic changes of chronic periapical lesions after root canal treatment by digital subtraction radiography. Thirteen upper anterior human teeth with pulp necrosis and radiographic image of chronic periapical lesion were endodontically treated and radiographed 0, 2, 4, and 6 months after root canal treatment by using a film holder. The radiographic films were automatically developed and digitized. The radiographic images taken 0, 2, 4, and 6 months after root canal therapy were submitted to digital subtraction in pairs (0 and 2 months, 2 and 4 months, and 4 and 6 months) choosing "image," "calculation," "subtract," and "new document" tools from Adobe Photoshop CS image-analysis software toolbar. The resulting images showed areas of periapical healing in all cases. According to this methodology, the healing or expansion of periapical lesions can be evaluated by means of digital subtraction radiography by using Adobe Photoshop CS software.
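
    The arithmetic behind the described "subtract" calculation can be sketched with NumPy. This is a hedged illustration, not the study's protocol: the mid-gray offset of 128 follows the common digital-subtraction-radiography convention (unchanged anatomy maps to gray, healing appears brighter, bone loss darker), and the pixel values are invented:

    ```python
    import numpy as np

    def subtract_radiographs(follow_up, baseline, offset=128):
        """Digital subtraction of two registered 8-bit radiographs.

        Assumes the films are already aligned (the study used a film holder
        for standardized geometry). The signed difference is shifted by
        `offset` so that unchanged regions render as mid-gray.
        """
        diff = follow_up.astype(np.int16) - baseline.astype(np.int16) + offset
        return np.clip(diff, 0, 255).astype(np.uint8)

    baseline = np.full((4, 4), 100, dtype=np.uint8)
    follow_up = baseline.copy()
    follow_up[1:3, 1:3] = 140          # periapical healing: increased density

    result = subtract_radiographs(follow_up, baseline)
    ```

    Unchanged pixels come out at 128 and the healing region brighter than 128, which is the cue the authors read as periapical repair between time points.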

  12. Using background knowledge for picture organization and retrieval

    NASA Astrophysics Data System (ADS)

    Quintana, Yuri

    1997-01-01

    A picture knowledge base management system is described that is used to represent, organize and retrieve pictures from a frame knowledge base. Experiments with human test subjects were conducted to obtain further descriptions of pictures from news magazines. These descriptions were used to represent the semantic content of pictures in frame representations. A conceptual clustering algorithm is described which organizes pictures not only on the observable features, but also on implicit properties derived from the frame representations. The algorithm uses inheritance reasoning to take into account background knowledge in the clustering. The algorithm creates clusters of pictures using a group similarity function that is based on the gestalt theory of picture perception. For each cluster created, a frame is generated which describes the semantic content of pictures in the cluster. Clustering and retrieval experiments were conducted with and without background knowledge. The paper shows how the use of background knowledge and semantic similarity heuristics improves the speed, precision, and recall of queries processed. The paper concludes with a discussion of how natural language processing can be used to assist in the development of knowledge bases and the processing of user queries.

  13. Temporal Subtraction of Serial CT Images with Large Deformation Diffeomorphic Metric Mapping in the Identification of Bone Metastases.

    PubMed

    Sakamoto, Ryo; Yakami, Masahiro; Fujimoto, Koji; Nakagomi, Keita; Kubo, Takeshi; Emoto, Yutaka; Akasaka, Thai; Aoyama, Gakuto; Yamamoto, Hiroyuki; Miller, Michael I; Mori, Susumu; Togashi, Kaori

    2017-11-01

    Purpose: To determine the improvement of radiologist efficiency and performance in the detection of bone metastases at serial follow-up computed tomography (CT) by using a temporal subtraction (TS) technique based on an advanced nonrigid image registration algorithm. Materials and Methods: This retrospective study was approved by the institutional review board, and informed consent was waived. CT image pairs (previous and current scans of the torso) in 60 patients with cancer (primary lesion location: prostate, n = 14; breast, n = 16; lung, n = 20; liver, n = 10) were included. These consisted of 30 positive cases with a total of 65 bone metastases depicted only on current images and confirmed by two radiologists who had access to additional imaging examinations and clinical courses and 30 matched negative control cases (no bone metastases). Previous CT images were semiautomatically registered to current CT images by the algorithm, and TS images were created. Seven radiologists independently interpreted CT image pairs to identify newly developed bone metastases without and with TS images with an interval of at least 30 days. Jackknife free-response receiver operating characteristics (JAFROC) analysis was conducted to assess observer performance. Reading time was recorded, and usefulness was evaluated with subjective scores of 1-5, with 5 being extremely useful and 1 being useless. Significance of these values was tested with the Wilcoxon signed-rank test. Results: The subtraction images depicted various types of bone metastases (osteolytic, n = 28; osteoblastic, n = 26; mixed osteolytic and blastic, n = 11) as temporal changes. The average reading time was significantly reduced (384.3 vs 286.8 seconds; Wilcoxon signed rank test, P = .028). The average figure-of-merit value increased from 0.758 to 0.835; however, this difference was not significant (JAFROC analysis, P = .092).
The subjective usefulness survey response showed a median score of 5 for use of the technique

  14. A Method of Time-Intensity Curve Calculation for Vascular Perfusion of Uterine Fibroids Based on Subtraction Imaging with Motion Correction

    NASA Astrophysics Data System (ADS)

    Zhu, Xinjian; Wu, Ruoyu; Li, Tao; Zhao, Dawei; Shan, Xin; Wang, Puling; Peng, Song; Li, Faqi; Wu, Baoming

    2016-12-01

    The time-intensity curve (TIC) from contrast-enhanced ultrasound (CEUS) image sequence of uterine fibroids provides important parameter information for qualitative and quantitative evaluation of efficacy of treatment such as high-intensity focused ultrasound surgery. However, respiration and other physiological movements inevitably affect the process of CEUS imaging, and this reduces the accuracy of TIC calculation. In this study, a method of TIC calculation for vascular perfusion of uterine fibroids based on subtraction imaging with motion correction is proposed. First, the fibroid CEUS recording video was decoded into frame images based on the record frame rate. Next, the Brox optical flow algorithm was used to estimate the displacement field and correct the motion between two frames based on warp technique. Then, subtraction imaging was performed to extract the positional distribution of vascular perfusion (PDOVP). Finally, the average gray of all pixels in the PDOVP from each image was determined, and this was considered the TIC of CEUS image sequence. Both the correlation coefficient and mutual information of the results with proposed method were larger than those determined using the original method. PDOVP extraction results have been improved significantly after motion correction. The variance reduction rates were all positive, indicating that the fluctuations of TIC had become less pronounced, and the calculation accuracy has been improved after motion correction. This proposed method can effectively overcome the influence of motion mainly caused by respiration and allows precise calculation of TIC.
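
    The TIC computation after motion correction can be sketched compactly. This is a simplified illustration, not the paper's pipeline: the Brox optical-flow correction is assumed to have been applied already, and the PDOVP extraction is reduced here to a positive pre-contrast difference on synthetic frames:

    ```python
    import numpy as np

    def time_intensity_curve(frames, pre_contrast):
        """TIC from a CEUS frame sequence via subtraction imaging.

        Each (motion-corrected) frame has the pre-contrast frame subtracted
        to isolate the perfusion signal; the TIC value is the average gray
        of the resulting difference image.
        """
        return np.array([
            np.clip(f.astype(float) - pre_contrast, 0, None).mean()
            for f in frames
        ])

    pre = np.zeros((8, 8))
    frames = []
    for enhancement in (0.0, 10.0, 30.0, 20.0, 5.0):   # wash-in then wash-out
        f = pre.copy()
        f[2:6, 2:6] = enhancement                       # perfused fibroid region
        frames.append(f)

    tic = time_intensity_curve(frames, pre)
    ```

    The curve rises and falls with the synthetic contrast enhancement, peaking at the wash-in maximum; without motion correction, misaligned frames would add spurious fluctuations to exactly this curve.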

  15. An image hiding method based on cascaded iterative Fourier transform and public-key encryption algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Sang, Jun; Alam, Mohammad S.

    2013-03-01

    An image hiding method based on cascaded iterative Fourier transform and public-key encryption algorithm was proposed. Firstly, the original secret image was encrypted into two phase-only masks M1 and M2 via cascaded iterative Fourier transform (CIFT) algorithm. Then, the public-key encryption algorithm RSA was adopted to encrypt M2 into M2'. Finally, a host image was enlarged by extending one pixel into 2×2 pixels and each element in M1 and M2' was multiplied with a superimposition coefficient and added to or subtracted from two different elements in the 2×2 pixels of the enlarged host image. To recover the secret image from the stego-image, the two masks were extracted from the stego-image without the original host image. By applying public-key encryption algorithm, the key distribution was facilitated, and also compared with the image hiding method based on optical interference, the proposed method may reach higher robustness by employing the characteristics of the CIFT algorithm. Computer simulations show that this method has good robustness against image processing.
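
    The embedding arithmetic can be sketched numerically. This is a simplified reconstruction: the CIFT and RSA stages are omitted, and the placement of the added/subtracted elements within each 2×2 block and the coefficient value are assumptions chosen for illustration, not the paper's exact layout:

    ```python
    import numpy as np

    def embed(host, m1, m2p, alpha=0.1):
        """Hide two masks in a host enlarged pixel-by-pixel into 2x2 blocks.

        Each mask value, scaled by the superimposition coefficient `alpha`,
        is added to one element and subtracted from another element of the
        corresponding 2x2 block, so the pair can later be differenced.
        """
        stego = np.kron(host, np.ones((2, 2)))    # replicate into 2x2 blocks
        stego[0::2, 0::2] += alpha * m1           # M1: add at top-left
        stego[1::2, 0::2] -= alpha * m1           #     subtract at bottom-left
        stego[0::2, 1::2] += alpha * m2p          # M2': add at top-right
        stego[1::2, 1::2] -= alpha * m2p          #      subtract at bottom-right
        return stego

    def extract(stego, alpha=0.1):
        """Recover both masks by differencing paired elements (no host needed)."""
        m1 = (stego[0::2, 0::2] - stego[1::2, 0::2]) / (2 * alpha)
        m2p = (stego[0::2, 1::2] - stego[1::2, 1::2]) / (2 * alpha)
        return m1, m2p

    host = np.full((4, 4), 0.5)
    m1 = np.full((4, 4), 0.25)
    m2p = np.full((4, 4), 0.75)
    stego = embed(host, m1, m2p)
    r1, r2 = extract(stego)
    ```

    Because each 2x2 block starts from one replicated host value, the add/subtract pair cancels the host exactly on extraction, which is why recovery needs no original host image.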

  16. Incremental principal component pursuit for video background modeling

    DOEpatents

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    An incremental Principal Component Pursuit (PCP) algorithm for video background modeling that processes one frame at a time while adapting to changes in the background. Its computational complexity allows for real-time processing, it has a low memory footprint, and it is robust to translational and rotational jitter.

  17. Recursive least squares background prediction of univariate syndromic surveillance data

    PubMed Central

    2009-01-01

    Background: Surveillance of univariate syndromic data as a potential indicator of developing public health conditions has been used extensively. This paper aims to improve the performance of detecting outbreaks by using a background forecasting algorithm based on the adaptive recursive least squares method combined with a novel treatment of the day-of-the-week effect. Methods: Previous work by the first author has suggested that univariate recursive least squares analysis of syndromic data can be used to characterize the background upon which a prediction and detection component of a biosurveillance system may be built. An adaptive implementation is used to deal with data non-stationarity. In this paper we develop and implement the RLS method for background estimation of univariate data. The distinctly dissimilar distribution of data for different days of the week, however, can affect filter implementations adversely, and so a novel procedure based on linear transformations of the sorted values of the daily counts is introduced. Seven-day-ahead daily predicted counts are used as background estimates. A signal injection procedure is used to examine the integrated algorithm's ability to detect synthetic anomalies in real syndromic time series. We compare the method to a baseline CDC forecasting algorithm known as the W2 method. Results: We present detection results in the form of receiver operating characteristic curve values for four different injected signal-to-noise ratios using 16 sets of syndromic data. We find improvements in the false alarm probabilities when compared to the baseline W2 background forecasts. Conclusion: The current paper introduces a prediction approach for city-level biosurveillance data streams such as time series of outpatient clinic visits and sales of over-the-counter remedies.
This approach uses RLS filters modified by a correction for the weekly patterns often seen in these data series, and a threshold detection algorithm from the
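
    The core background-prediction idea can be sketched as an adaptive autoregressive RLS filter. This is an illustrative sketch only: the filter order, forgetting factor, and initialization are assumed values, and the paper's day-of-week transformation and seven-day-ahead horizon are omitted:

    ```python
    import numpy as np

    def rls_predict(series, order=3, lam=0.98, delta=100.0):
        """One-step-ahead RLS background prediction for a daily count series.

        An adaptive autoregressive predictor with forgetting factor `lam`;
        the residual (observed minus predicted background) would feed a
        threshold detector in a full biosurveillance pipeline.
        """
        w = np.zeros(order)                    # filter weights
        P = np.eye(order) * delta              # inverse correlation estimate
        preds = np.full(len(series), np.nan)
        for t in range(order, len(series)):
            x = series[t - order:t][::-1].astype(float)   # recent history
            preds[t] = w @ x                               # predict, then update
            e = series[t] - preds[t]
            k = P @ x / (lam + x @ P @ x)                  # gain vector
            w = w + k * e
            P = (P - np.outer(k, x) @ P) / lam
        return preds

    # A steady background: the adaptive filter locks on after a short burn-in.
    series = np.array([10.0] * 60)
    preds = rls_predict(series)
    ```

    On real syndromic counts the forgetting factor lets the weights track slow non-stationarity, while sudden departures from the predicted background are what the injected-signal detection experiments measure.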

  18. Dynamic cone beam CT angiography of carotid and cerebral arteries using canine model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai Weixing; Zhao Binghui; Conover, David

    2012-01-15

    Purpose: This research is designed to develop and evaluate a flat-panel detector-based dynamic cone beam CT system for dynamic angiography imaging, which is able to provide both dynamic functional information and dynamic anatomic information from one multirevolution cone beam CT scan. Methods: A dynamic cone beam CT scan acquired projections over four revolutions within a time window of 40 s after contrast agent injection through a femoral vein to cover the entire wash-in and wash-out phases. A dynamic cone beam CT reconstruction algorithm was utilized and a novel recovery method was developed to correct the time-enhancement curve of contrast flow. From the same data set, both projection-based subtraction and reconstruction-based subtraction approaches were utilized and compared to remove the background tissues and visualize the 3D vascular structure to provide the dynamic anatomic information. Results: Through computer simulations, the new recovery algorithm for dynamic time-enhancement curves was optimized and showed excellent accuracy to recover the actual contrast flow. Canine model experiments also indicated that the recovered time-enhancement curves from dynamic cone beam CT imaging agreed well with that of an IV-digital subtraction angiography (DSA) study. The dynamic vascular structures reconstructed using both projection-based subtraction and reconstruction-based subtraction were almost identical as the differences between them were comparable to the background noise level. At the enhancement peak, all the major carotid and cerebral arteries and the Circle of Willis could be clearly observed. Conclusions: The proposed dynamic cone beam CT approach can accurately recover the actual contrast flow, and dynamic anatomic imaging can be obtained with high isotropic 3D resolution. This approach is promising for diagnosis and treatment planning of vascular diseases and strokes.

  19. Fostering Taiwanese Preschoolers' Understanding of the Addition-Subtraction Inverse Principle

    ERIC Educational Resources Information Center

    Lai, Meng-Lung; Baroody, Arthur J.; Johnson, Amanda R.

    2008-01-01

    The present research involved gauging preschoolers' learning potential for a key arithmetic concept, the addition-subtraction inverse principle (e.g., 2+1-1=2). Sixty 4- and 5-year-old Taiwanese children from two public preschools serving low- and middle-income families participated in the training experiment. Half were randomly assigned to an…

  20. [Myocardial perfusion imaging by digital subtraction angiography].

    PubMed

    Kadowaki, H; Ishikawa, K; Ogai, T; Katori, R

    1986-03-01

    Several methods of digital subtraction angiography (DSA) were compared to determine which could better visualize regional myocardial perfusion using coronary angiography in seven patients with myocardial infarction, two with angina pectoris and five with normal coronary arteries. Satisfactory DSA was judged to be achieved if the shape of the heart on the mask film was identical to that on the live film and if both films were exactly superimposed. To obtain a mask film identical in shape to each live film, both films were selected from the following three phases of the cardiac cycle: at the R wave of the electrocardiogram, 100 msec before the R wave, and 200 msec before the R wave. The last two were superior for obtaining mask and live films which were similar in shape, because the cardiac motion in these phases was relatively small. Using these mask and live films, DSA was performed either with the continuous image mode (CI mode) or the time interval difference mode (TID mode). The overall perfusion of contrast medium through the artery to the vein was adequately visualized using the CI mode. Passage of contrast medium through the artery, capillary and vein was visualized at each phase using the TID mode. Subtracted images were displayed and photographed, and the density of the contrast medium was adequate to display contour lines as in a relief map. Using this DSA, it was found that regional perfusion of the contrast medium was not always uniform in normal subjects, depending on the topography of the coronary artery.(ABSTRACT TRUNCATED AT 250 WORDS)
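
    The difference between the two subtraction modes can be sketched on a toy frame sequence. This is an illustrative sketch under stated assumptions: the mask here is simply the pre-contrast frame (standing in for the cardiac-phase-matched mask film), and the frame values are synthetic:

    ```python
    import numpy as np

    def ci_mode(frames, mask):
        """Continuous-image DSA: subtract one fixed mask from every live frame."""
        return [f - mask for f in frames]

    def tid_mode(frames, interval):
        """Time-interval-difference DSA: subtract frames a fixed interval apart."""
        return [frames[i] - frames[i - interval] for i in range(interval, len(frames))]

    # Toy sequence: static anatomy, with contrast arriving at frame 2 and persisting.
    anatomy = np.full((4, 4), 50.0)
    frames = []
    for t in range(4):
        f = anatomy.copy()
        if t >= 2:
            f[1, 1] += 30.0              # opacified vessel pixel
        frames.append(f)

    ci = ci_mode(frames[1:], mask=frames[0])
    tid = tid_mode(frames, interval=1)
    ```

    CI mode keeps showing the opacified vessel in every frame after arrival (overall artery-to-vein perfusion), whereas TID mode highlights only the frames where the contrast level changes (phase-by-phase passage), matching the roles the abstract assigns to each mode.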

  1. Fast Image Subtraction Using Multi-cores and GPUs

    NASA Astrophysics Data System (ADS)

    Hartung, Steven; Shukla, H.

    2013-01-01

    Many important image processing techniques in astronomy require a massive number of computations per pixel. Among them is an image differencing technique known as Optimal Image Subtraction (OIS), which is very useful for detecting and characterizing transient phenomena. Like many image processing routines, OIS computations increase proportionally with the number of pixels being processed, and the number of pixels in need of processing is increasing rapidly. Utilizing many-core graphical processing unit (GPU) technology in a hybrid conjunction with multi-core CPU and computer clustering technologies, this work presents a new astronomy image processing pipeline architecture. The chosen OIS implementation focuses on the 2nd order spatially-varying kernel with the Dirac delta function basis, a powerful image differencing method that has seen limited deployment in part because of the heavy computational burden. This tool can process standard image calibration and OIS differencing in a fashion that is scalable with the increasing data volume. It employs several parallel processing technologies in a hierarchical fashion in order to best utilize each of their strengths. The Linux/Unix based application can operate on a single computer, or on an MPI configured cluster, with or without GPU hardware. With GPU hardware available, even low-cost commercial video cards, the OIS convolution and subtraction times for large images can be accelerated by up to three orders of magnitude.

  2. Objective evaluation of linear and nonlinear tomosynthetic reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Webber, Richard L.; Hemler, Paul F.; Lavery, John E.

    2000-04-01

    This investigation objectively tests five different tomosynthetic reconstruction methods involving three different digital sensors, each used in a different radiologic application: chest, breast, and pelvis, respectively. The common task was to simulate a specific representative projection for each application by summation of appropriately shifted tomosynthetically generated slices produced by using the five algorithms. These algorithms were, respectively, (1) conventional back projection, (2) iteratively deconvoluted back projection, (3) a nonlinear algorithm similar to back projection, except that the minimum value from all of the component projections for each pixel is computed instead of the average value, (4) a similar algorithm wherein the maximum value was computed instead of the minimum value, and (5) the same type of algorithm except that the median value was computed. Using these five algorithms, we obtained data from each sensor-tissue combination, yielding three factorially distributed series of contiguous tomosynthetic slices. The respective slice stacks then were aligned orthogonally and averaged to yield an approximation of a single orthogonal projection radiograph of the complete (unsliced) tissue thickness. Resulting images were histogram equalized, and actual projection control images were subtracted from their tomosynthetically synthesized counterparts. Standard deviations of the resulting histograms were recorded as inverse figures of merit (FOMs). Visual rankings of image differences by five human observers of a subset (breast data only) also were performed to determine whether their subjective observations correlated with homologous FOMs. Nonparametric statistical analysis of these data demonstrated significant differences (P < 0.05) between reconstruction algorithms. The nonlinear minimization reconstruction method nearly always outperformed the other methods tested. Observer rankings were similar to those measured objectively.
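
    The linear versus nonlinear per-pixel combination rules being compared can be sketched in one dimension. This is an illustrative sketch, not the study's reconstruction code: shifts are reduced to integer column offsets and the projection values are synthetic:

    ```python
    import numpy as np

    def tomosynthesize(projections, shifts, mode="min"):
        """Tomosynthetic slice from shifted projections.

        Each projection is shifted to register the plane of interest; pixels
        are then combined per-pixel by mean (conventional back projection)
        or by min/max/median (the nonlinear variants compared in the study).
        """
        aligned = np.stack([np.roll(p, -s, axis=1)
                            for p, s in zip(projections, shifts)])
        combine = {"mean": np.mean, "min": np.min,
                   "max": np.max, "median": np.median}
        return combine[mode](aligned, axis=0)

    # Toy case: an in-plane feature lines up after alignment, while an
    # out-of-plane structure lands at a different column in each projection.
    projs, shifts = [], [0, 1, 2]
    for s in shifts:
        p = np.zeros((1, 10))
        p[0, 5 + s] = 1.0          # in-plane feature, moves with the shift
        p[0, 2] = 0.5              # out-of-plane structure, fixed position
        projs.append(p)

    slice_min = tomosynthesize(projs, shifts, mode="min")
    slice_mean = tomosynthesize(projs, shifts, mode="mean")
    ```

    The per-pixel minimum suppresses the out-of-plane structure entirely (it is absent from at least one aligned projection), while averaging smears it into the slice as residual blur; this is the mechanism behind the minimization method's strong showing in the study's figures of merit.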

  3. High Spatial and Temporal Resolution Dynamic Contrast-Enhanced Magnetic Resonance Angiography (CE-MRA) using Compressed Sensing with Magnitude Image Subtraction

    PubMed Central

    Rapacchi, Stanislas; Han, Fei; Natsuaki, Yutaka; Kroeker, Randall; Plotnik, Adam; Lehman, Evan; Sayre, James; Laub, Gerhard; Finn, J Paul; Hu, Peng

    2014-01-01

    Purpose We propose a compressed-sensing (CS) technique based on magnitude image subtraction for high spatial and temporal resolution dynamic contrast-enhanced MR angiography (CE-MRA). Methods Our technique integrates the magnitude difference image into the CS reconstruction to promote subtraction sparsity. Fully sampled Cartesian 3D CE-MRA datasets from 6 volunteers were retrospectively under-sampled and three reconstruction strategies were evaluated: k-space subtraction CS, independent CS, and magnitude subtraction CS. The techniques were compared in image quality (vessel delineation, image artifacts, and noise) and image reconstruction error. Our CS technique was further tested on 7 volunteers using a prospectively under-sampled CE-MRA sequence. Results Compared with k-space subtraction and independent CS, our magnitude subtraction CS provides significantly better vessel delineation and less noise at 4X acceleration, and significantly less reconstruction error at 4X and 8X (p<0.05 for all). On a 1–4 point image quality scale in vessel delineation, our technique scored 3.8±0.4 at 4X, 2.8±0.4 at 8X and 2.3±0.6 at 12X acceleration. Using our CS sequence at 12X acceleration, we were able to acquire dynamic CE-MRA with higher spatial and temporal resolution than current clinical TWIST protocol while maintaining comparable image quality (2.8±0.5 vs. 3.0±0.4, p=NS). Conclusion Our technique is promising for dynamic CE-MRA. PMID:23801456
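
    The idea of promoting subtraction sparsity can be illustrated with a toy 1D iterative soft-thresholding (ISTA) reconstruction; everything here (solver, names, parameters) is an assumption for illustration, not the authors' implementation:

    ```python
    import numpy as np

    def cs_recon_subtraction(y, mask, x_pre, lam=0.05, iters=50):
        """Reconstruct a post-contrast signal from undersampled Fourier data y
        (sampled where mask is True), soft-thresholding the *difference* from
        the pre-contrast signal x_pre rather than the signal itself."""
        n = x_pre.size
        x = x_pre.astype(complex)
        for _ in range(iters):
            r = mask * (np.fft.fft(x) / np.sqrt(n)) - y    # residual on sampled lines
            x = x - np.sqrt(n) * np.fft.ifft(mask * r)     # gradient step (unitary FFT)
            d = x - x_pre                                  # difference image
            mag = np.abs(d)
            d *= np.maximum(1.0 - lam / np.maximum(mag, 1e-12), 0.0)  # soft-threshold
            x = x_pre + d
        return x.real
    ```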

  4. Decision making regarding Smith-Petersen vs. pedicle subtraction osteotomy vs. vertebral column resection for spinal deformity.

    PubMed

    Bridwell, Keith H

    2006-09-01

    Author experience and literature review. To investigate and discuss decision-making on when to perform a Smith-Petersen osteotomy as opposed to a pedicle subtraction procedure and/or a vertebral column resection. Articles have been published regarding Smith-Petersen osteotomies, pedicle subtraction procedures, and vertebral column resections. Expectations and complications have been reviewed. However, decision-making regarding which of the 3 procedures is most useful for a particular spinal deformity case has not been clearly investigated. Discussed in this manuscript are the author's experience and the literature regarding the operative options for a fixed coronal or sagittal deformity. There are roles for Smith-Petersen osteotomy, pedicle subtraction, and vertebral column resection. Each has specific applications and potential complications. As the magnitude of resection increases, the ability to correct deformity improves, but the risk of complication also increases. Therefore, an understanding of potential applications and complications is helpful.

  5. Parametric color coding of digital subtraction angiography.

    PubMed

    Strother, C M; Bender, F; Deuerling-Zheng, Y; Royalty, K; Pulfer, K A; Baumgart, J; Zellerhoff, M; Aagaard-Kienitz, B; Niemann, D B; Lindstrom, M L

    2010-05-01

    Color has been shown to facilitate both visual search and recognition tasks. It was our purpose to examine the impact of a color-coding algorithm on the interpretation of 2D-DSA acquisitions by experienced and inexperienced observers. Twenty-six 2D-DSA acquisitions obtained as part of routine clinical care from subjects with a variety of cerebrovascular disease processes were selected from an internal database so as to include a variety of disease states (aneurysms, AVMs, fistulas, stenosis, occlusions, dissections, and tumors). Three experienced and 3 less experienced observers were each shown the acquisitions on a prerelease version of a commercially available double-monitor workstation (XWP, Siemens Healthcare). Acquisitions were presented first as a subtracted image series and then as a single composite color-coded image of the entire acquisition. Observers were then asked a series of questions designed to assess the value of the color-coded images for the following purposes: 1) to enhance their ability to make a diagnosis, 2) to have confidence in their diagnosis, 3) to plan a treatment, and 4) to judge the effect of a treatment. The results were analyzed by using 1-sample Wilcoxon tests. Color-coded images enhanced the ease of evaluating treatment success in >40% of cases (P < .0001). They also had a statistically significant impact on treatment planning, making planning easier in >20% of the cases (P = .0069). In >20% of the examples, color-coding made diagnosis and treatment planning easier for all readers (P < .0001). Color-coding also increased the confidence of diagnosis compared with the use of DSA alone (P = .056). The impact of this was greater for the naïve readers than for the expert readers. At no additional cost in x-ray dose or contrast medium, color-coding of DSA enhanced the conspicuity of findings on DSA images. It was particularly useful in situations in which there was a complex flow pattern and in evaluation of pre- and posttreatment
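
    Parametric color coding of this kind collapses the subtracted series into one image in which each pixel encodes a temporal parameter of its contrast curve, which the workstation then maps to a color scale. A minimal sketch, assuming a simple argmax definition of time-to-peak (not the vendor's algorithm):

    ```python
    import numpy as np

    def time_to_peak(dsa_series):
        """Collapse a subtracted DSA series of shape (frames, rows, cols) into
        a single parametric image: each pixel holds the frame index at which
        its contrast signal peaks.  A display step (not shown) maps early
        arrival and late arrival to opposite ends of a color scale."""
        return np.argmax(dsa_series, axis=0)
    ```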

  6. Black hole thermodynamics from a variational principle: asymptotically conical backgrounds

    DOE PAGES

    An, Ok Song; Cvetič, Mirjam; Papadimitriou, Ioannis

    2016-03-14

    The variational problem of gravity theories is directly related to black hole thermodynamics. For asymptotically locally AdS backgrounds it is known that holographic renormalization results in a variational principle in terms of equivalence classes of boundary data under the local asymptotic symmetries of the theory, which automatically leads to finite conserved charges satisfying the first law of thermodynamics. We show that this connection holds well beyond asymptotically AdS black holes. In particular, we formulate the variational problem for N = 2 STU supergravity in four dimensions with boundary conditions corresponding to those obeyed by the so-called ‘subtracted geometries’. We show that such boundary conditions can be imposed covariantly in terms of a set of asymptotic second class constraints, and we derive the appropriate boundary terms that render the variational problem well posed in two different duality frames of the STU model. This allows us to define finite conserved charges associated with any asymptotic Killing vector and to demonstrate that these charges satisfy the Smarr formula and the first law of thermodynamics. Moreover, by uplifting the theory to five dimensions and then reducing on a 2-sphere, we provide a precise map between the thermodynamic observables of the subtracted geometries and those of the BTZ black hole. Finally, surface terms play a crucial role in this identification.

  7. Black hole thermodynamics from a variational principle: asymptotically conical backgrounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Ok Song; Cvetič, Mirjam; Papadimitriou, Ioannis

    The variational problem of gravity theories is directly related to black hole thermodynamics. For asymptotically locally AdS backgrounds it is known that holographic renormalization results in a variational principle in terms of equivalence classes of boundary data under the local asymptotic symmetries of the theory, which automatically leads to finite conserved charges satisfying the first law of thermodynamics. We show that this connection holds well beyond asymptotically AdS black holes. In particular, we formulate the variational problem for N = 2 STU supergravity in four dimensions with boundary conditions corresponding to those obeyed by the so-called ‘subtracted geometries’. We show that such boundary conditions can be imposed covariantly in terms of a set of asymptotic second class constraints, and we derive the appropriate boundary terms that render the variational problem well posed in two different duality frames of the STU model. This allows us to define finite conserved charges associated with any asymptotic Killing vector and to demonstrate that these charges satisfy the Smarr formula and the first law of thermodynamics. Moreover, by uplifting the theory to five dimensions and then reducing on a 2-sphere, we provide a precise map between the thermodynamic observables of the subtracted geometries and those of the BTZ black hole. Finally, surface terms play a crucial role in this identification.

  8. K-edge subtraction synchrotron X-ray imaging in bio-medical research.

    PubMed

    Thomlinson, W; Elleaume, H; Porra, L; Suortti, P

    2018-05-01

    High contrast in X-ray medical imaging, while maintaining acceptable radiation dose levels to the patient, has long been a goal. One of the most promising methods is that of K-edge subtraction imaging. This technique, first advanced as long ago as 1953 by B. Jacobson, uses the large difference in the absorption coefficient of elements at energies above and below the K-edge. Two images, one taken above the edge and one below the edge, are subtracted leaving, ideally, only the image of the distribution of the target element. This paper reviews the development of the KES techniques and technology as applied to bio-medical imaging from the early low-power tube sources of X-rays to the latest high-power synchrotron sources. Applications to coronary angiography, functional lung imaging and bone growth are highlighted. A vision of possible imaging with new compact sources is presented. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
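
    The subtraction operates on log-attenuation images, so material whose attenuation varies smoothly across the edge cancels. A sketch of the arithmetic (flat-field handling and names are illustrative):

    ```python
    import numpy as np

    def kes_image(above, below, flat_above=1.0, flat_below=1.0):
        """K-edge subtraction: difference of log-attenuation images acquired
        just above and just below the target element's K-edge.  Tissues whose
        attenuation varies smoothly across the edge largely cancel, leaving
        the distribution of the K-edge element."""
        mu_t_above = -np.log(above / flat_above)   # line integral of attenuation
        mu_t_below = -np.log(below / flat_below)
        return mu_t_above - mu_t_below
    ```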

  9. Recursive least squares background prediction of univariate syndromic surveillance data.

    PubMed

    Najmi, Amir-Homayoon; Burkom, Howard

    2009-01-16

    Surveillance of univariate syndromic data as a potential indicator of developing public health conditions has been used extensively. This paper aims to improve the performance of detecting outbreaks by using a background forecasting algorithm based on the adaptive recursive least squares (RLS) method combined with a novel treatment of the day-of-the-week effect. Previous work by the first author has suggested that univariate recursive least squares analysis of syndromic data can be used to characterize the background upon which a prediction and detection component of a biosurveillance system may be built. An adaptive implementation is used to deal with data non-stationarity. In this paper we develop and implement the RLS method for background estimation of univariate data. The distinctly dissimilar distribution of data for different days of the week, however, can affect filter implementations adversely, and so a novel procedure based on linear transformations of the sorted values of the daily counts is introduced. Seven-day-ahead daily predicted counts are used as background estimates. A signal injection procedure is used to examine the integrated algorithm's ability to detect synthetic anomalies in real syndromic time series. We compare the method to a baseline CDC forecasting algorithm known as the W2 method. We present detection results in the form of Receiver Operating Characteristic curve values for four different injected signal-to-noise ratios using 16 sets of syndromic data. We find improvements in the false alarm probabilities when compared to the baseline W2 background forecasts. The current paper introduces a prediction approach for city-level biosurveillance data streams such as time series of outpatient clinic visits and sales of over-the-counter remedies. This approach uses RLS filters modified by a correction for the weekly patterns often seen in these data series, and a threshold detection algorithm from the residuals of the RLS forecasts. We
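
    The core of such a background estimator is the standard exponentially weighted RLS recursion; a compact sketch (the order, forgetting factor, and the paper's day-of-week transformation are simplified away):

    ```python
    import numpy as np

    class RLSPredictor:
        """Adaptive recursive least squares: predict the next count from the
        last `order` observations, updating the weights with a forgetting
        factor so the filter tracks a non-stationary baseline."""
        def __init__(self, order=7, lam=0.98, delta=100.0):
            self.lam = lam
            self.w = np.zeros(order)          # filter weights
            self.P = np.eye(order) * delta    # inverse correlation estimate

        def update(self, x, y):
            """x: last `order` observations; y: the value that followed."""
            x = np.asarray(x, float)
            k = self.P @ x / (self.lam + x @ self.P @ x)   # gain vector
            err = y - self.w @ x                           # a priori error
            self.w += k * err
            self.P = (self.P - np.outer(k, x @ self.P)) / self.lam
            return err

        def predict(self, x):
            return self.w @ np.asarray(x, float)
    ```

    Residuals between observed counts and such one-step predictions are what a threshold detector would operate on.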

  10. In vivo optical imaging of amblyopia: Digital subtraction autofluorescence and split-spectrum amplitude-decorrelation angiography.

    PubMed

    Guo, Lei; Tao, Jun; Xia, Fan; Yang, Zhi; Ma, Xiaoli; Hua, Rui

    2016-09-01

    Amblyopia is a visual impairment that is attributed to either abnormal binocular interactions or visual deprivation. The retina and choroid have been shown to be involved in the development of amblyopia. The purpose of this study was to investigate the retinal and choroidal microstructural abnormalities of amblyopia using digital subtraction autofluorescence and split-spectrum amplitude-decorrelation angiography (SSADA) approaches. This prospective study included 44 eyes of 22 patients with unilateral amblyopia. All patients underwent pupil dilation and received indirect ophthalmoscopy, combined depth imaging spectral domain optical coherence tomography (OCT), SSADA-OCT, and macular blue light (BL-) and near-infrared (NIR-) autofluorescence imaging. The subfoveal choroidal thickness (SFCT) was measured. BL- and NIR-autofluorescence images were acquired for all patients and used to generate subtraction images with ImageJ software. The superficial and deep layers of the retina and the inner choroid layer were imaged with SSADA-OCT. For the normal eyes, a regularly increasing signal was observed in the central macula based on the subtraction images. In contrast, a decreased signal for the central patch or a reduced peak was detected in 16 of 22 amblyopic eyes (72.7%). The mean SFCT of the amblyopic eyes was greater than that of the fellow normal eyes (399.25 ± 4.944 µm vs. 280.58 ± 6.491 µm, respectively, P < 0.05). SSADA-OCT revealed a normal choroidal capillary network in all fellow normal eyes. However, 18 of 22 amblyopic eyes (86.4%) exhibited a blurry choroidal capillary network, and 15 of 22 amblyopic eyes (68.2%) displayed a dark atrophic patch. This is the first report of amblyopia using SSADA-OCT and digital subtraction images of autofluorescence. The mechanistic relationship of a thicker choroid and choroidal capillary atrophy with amblyopia remains to be described. The digital subtraction image confirmed the changes in the microstructure of the

  11. Generative electronic background music system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazurowski, Lukasz

    In this short paper (extended abstract), a new approach to the generation of electronic background music is presented. The Generative Electronic Background Music System (GEBMS) is positioned among related approaches within the musical-algorithm framework proposed by Woller et al. The music composition process is performed by a number of mini-models parameterized by a set of properties described in the paper. The mini-models generate fragments of musical patterns used in the output composition. Musical pattern and output generation are controlled by a container for the mini-models, a host-model. The general mechanism is presented, including an example of the synthesized output compositions.

  12. Water dynamics in small reverse micelles in two solvents: two-dimensional infrared vibrational echoes with two-dimensional background subtraction.

    PubMed

    Fenn, Emily E; Wong, Daryl B; Fayer, M D

    2011-02-07

    Water dynamics as reflected by the spectral diffusion of the water hydroxyl stretch were measured in w(0) = 2 (1.7 nm diameter) Aerosol-OT (AOT)/water reverse micelles in carbon tetrachloride and in isooctane solvents using ultrafast 2D IR vibrational echo spectroscopy. Orientational relaxation and population relaxation are observed for w(0) = 2, 4, and 7.5 in both solvents using IR pump-probe measurements. It is found that the pump-probe observables are sensitive to w(0), but not to the solvent. However, initial analysis of the vibrational echo data from the water nanopool in the reverse micelles in the isooctane solvent seems to yield different dynamics than the CCl(4) system in spite of the fact that the spectra, vibrational lifetimes, and orientational relaxation are the same in the two systems. It is found that there are beat patterns in the interferograms with isooctane as the solvent. The beats are observed from a signal generated by the AOT/isooctane system even when there is no water in the system. A beat subtraction data processing procedure does a reasonable job of removing the distortions in the isooctane data, showing that the reverse micelle dynamics are the same within experimental error regardless of whether isooctane or carbon tetrachloride is used as the organic phase. Two time scales are observed in the vibrational echo data, ~1 and ~10 ps. The slower component contains a significant amount of the total inhomogeneous broadening. Physical arguments indicate that there is a much slower component of spectral diffusion that is too slow to observe within the experimental window, which is limited by the OD stretch vibrational lifetime.

  13. Water dynamics in small reverse micelles in two solvents: Two-dimensional infrared vibrational echoes with two-dimensional background subtraction

    NASA Astrophysics Data System (ADS)

    Fenn, Emily E.; Wong, Daryl B.; Fayer, M. D.

    2011-02-01

    Water dynamics as reflected by the spectral diffusion of the water hydroxyl stretch were measured in w0 = 2 (1.7 nm diameter) Aerosol-OT (AOT)/water reverse micelles in carbon tetrachloride and in isooctane solvents using ultrafast 2D IR vibrational echo spectroscopy. Orientational relaxation and population relaxation are observed for w0 = 2, 4, and 7.5 in both solvents using IR pump-probe measurements. It is found that the pump-probe observables are sensitive to w0, but not to the solvent. However, initial analysis of the vibrational echo data from the water nanopool in the reverse micelles in the isooctane solvent seems to yield different dynamics than the CCl4 system in spite of the fact that the spectra, vibrational lifetimes, and orientational relaxation are the same in the two systems. It is found that there are beat patterns in the interferograms with isooctane as the solvent. The beats are observed from a signal generated by the AOT/isooctane system even when there is no water in the system. A beat subtraction data processing procedure does a reasonable job of removing the distortions in the isooctane data, showing that the reverse micelle dynamics are the same within experimental error regardless of whether isooctane or carbon tetrachloride is used as the organic phase. Two time scales are observed in the vibrational echo data, ~1 and ~10 ps. The slower component contains a significant amount of the total inhomogeneous broadening. Physical arguments indicate that there is a much slower component of spectral diffusion that is too slow to observe within the experimental window, which is limited by the OD stretch vibrational lifetime.

  14. Improving chlorophyll-a retrievals and cross-sensor consistency through the OCI algorithm concept

    NASA Astrophysics Data System (ADS)

    Feng, L.; Hu, C.; Lee, Z.; Franz, B. A.

    2016-02-01

    Abstract: The recently developed band-subtraction based OCI chlorophyll-a algorithm is more tolerant than the band-ratio OCx algorithms to errors from atmospheric correction and other sources in oligotrophic oceans (Chl ≤ 0.25 mg m⁻³), and it has been implemented by NASA as the default algorithm to produce global Chl data from all ocean color missions. However, two areas still require improvements in its current implementation. Firstly, the originally proposed algorithm switch between oligotrophic and more productive waters has been changed from 0.25-0.3 mg m⁻³ to 0.15-0.2 mg m⁻³ to account for the observed discontinuity in data statistics. Additionally, the algorithm does not account for variable proportions of colored dissolved organic matter (CDOM) in different ocean basins. Here, new step-wise regression equations with fine-tuned regression coefficients are used to raise the algorithm switch zone and to improve data statistics as well as retrieval accuracy. A new CDOM index (CDI) based on three spectral bands (412, 443 and 490 nm) is used as a weighting factor to adjust the algorithm for the optical disparities between different oceans. The updated Chl OCI algorithm is then evaluated for its overall accuracy using field observations through the SeaBASS data archive, and for its cross-sensor consistency using multi-sensor observations over the global oceans. Keywords: Chlorophyll-a, Remote sensing, Ocean color, OCI, OCx, CDOM, MODIS, SeaWiFS, VIIRS
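
    The band-subtraction core of the OCI approach is the color index (CI): the departure of the green-band reflectance from a linear baseline between the blue and red bands. A sketch using the SeaWiFS band set (the Chl-vs-CI regression and the OCx blending in the switch zone are omitted):

    ```python
    def color_index(rrs443, rrs555, rrs670):
        """Band-subtraction color index: Rrs(555) minus a linear baseline
        drawn between Rrs(443) and Rrs(670).  In the OCI scheme, Chl in clear
        water is estimated from CI, blending into a band-ratio OCx estimate
        in the algorithm switch zone."""
        baseline = rrs443 + (555.0 - 443.0) / (670.0 - 443.0) * (rrs670 - rrs443)
        return rrs555 - baseline
    ```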

  15. Developing Prospective Teachers' Understanding of Addition and Subtraction with Whole Numbers

    ERIC Educational Resources Information Center

    Roy, George J.

    2014-01-01

    This study was situated in a semester-long classroom teaching experiment examining prospective teachers' understanding of number concepts and operations. The purpose of this paper is to describe the learning goals, tasks, and tools used to cultivate prospective teachers' understanding of addition and subtraction with whole numbers. Research…

  16. Simultaneous K-edge subtraction tomography for tracing strontium using parametric X-ray radiation

    NASA Astrophysics Data System (ADS)

    Hayakawa, Y.; Hayakawa, K.; Kaneda, T.; Nogami, K.; Sakae, T.; Sakai, T.; Sato, I.; Takahashi, Y.; Tanaka, T.

    2017-07-01

    The X-ray source based on parametric X-ray radiation (PXR) has been regularly providing a coherent X-ray beam for application studies at Nihon University. Recently, three dimensional (3D) computed tomography (CT) has become one of the most important applications of the PXR source. The methodology referred to as K-edge subtraction (KES) imaging is a particularly successful application utilizing the energy selectivity of PXR. In order to demonstrate the applicability of PXR-KES, a simultaneous KES experiment for a specimen containing strontium was performed using a PXR beam having an energy near the Sr K-edge of 16.1 keV. As a result, the 3D distribution of Sr was obtained by subtraction between the two simultaneously acquired tomographic images.

  17. Finding False Positives Planet Candidates Due To Background Eclipsing Binaries in K2

    NASA Astrophysics Data System (ADS)

    Mullally, Fergal; Thompson, Susan E.; Coughlin, Jeffrey; DAVE Team

    2016-06-01

    We adapt the difference image centroid approach, used for finding background eclipsing binaries, to vet K2 planet candidates. Difference image centroids were used with great success to vet planet candidates in the original Kepler mission, where the source of a transit could be identified by subtracting images of out-of-transit cadences from in-transit cadences. To account for K2's roll pattern, we reconstruct out-of-transit images from cadences that are nearby in both time and spacecraft roll angle. We describe the method and discuss some K2 planet candidates which this method suggests are false positives.
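
    The vetting logic can be sketched as follows: subtracting the mean in-transit image from the reconstructed out-of-transit image leaves flux only at the pixels of the source that dimmed, and its flux-weighted centroid locates that source (names and the clipping choice are illustrative):

    ```python
    import numpy as np

    def difference_image_centroid(out_of_transit, in_transit):
        """Locate the source of a transit: the difference image is bright
        where flux was lost in transit; its flux-weighted centroid gives the
        (row, col) position of the dimming star."""
        diff = np.clip(out_of_transit - in_transit, 0, None)  # keep dimming signal
        rows, cols = np.indices(diff.shape)
        total = diff.sum()
        return (rows * diff).sum() / total, (cols * diff).sum() / total
    ```

    A centroid that falls away from the target star flags a likely background eclipsing binary.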

  18. Separation of gravitational-wave and cosmic-shear contributions to cosmic microwave background polarization.

    PubMed

    Kesden, Michael; Cooray, Asantha; Kamionkowski, Marc

    2002-07-01

    Inflationary gravitational waves (GW) contribute to the curl component in the polarization of the cosmic microwave background (CMB). Cosmic shear--gravitational lensing of the CMB--converts a fraction of the dominant gradient polarization to the curl component. Higher-order correlations can be used to map the cosmic shear and subtract this contribution to the curl. Arcminute resolution will be required to pursue GW amplitudes smaller than those accessible by the Planck surveyor mission. With this reconstruction technique, the blurring of small-scale CMB power by lensing leads to a minimum detectable GW amplitude corresponding to an inflation energy near 10^15 GeV.

  19. Cloud Optical Depth Retrievals from Solar Background "signal" of Micropulse Lidars

    NASA Technical Reports Server (NTRS)

    Chiu, J. Christine; Marshak, A.; Wiscombe, W.; Valencia, S.; Welton, E. J.

    2007-01-01

    Pulsed lidars are commonly used to retrieve vertical distributions of cloud and aerosol layers. It is widely believed that lidar cloud retrievals (other than cloud base altitude) are limited to optically thin clouds. Here we demonstrate that lidars can retrieve optical depths of thick clouds using solar background light as a signal, rather than (as now) merely a noise to be subtracted. Validations against other instruments show that retrieved cloud optical depths agree within 10-15% for overcast stratus and broken clouds. In fact, for broken cloud situations one can retrieve not only the aerosol properties in clear-sky periods using lidar signals, but also the optical depth of thick clouds in cloudy periods using solar background signals. This indicates that, in general, it may be possible to retrieve both aerosol and cloud properties using a single lidar. Thus, lidar observations have great untapped potential to study interactions between clouds and aerosols.

  20. Imaging of a parapharyngeal hemangiopericytoma. Radioimmunoscintigraphy (SPECT) with indium-111-labeled anti-CEA antibody, and comparison to digital subtraction angiography, computed tomography, and immunohistochemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kairemo, K.J.; Hopsu, E.V.; Melartin, E.J.

    1991-01-01

    A 27-year-old male patient with a parapharyngeal hemangiopericytoma was investigated radiologically with orthopantomography, computed tomography, and digital subtraction angiography before the operation. Because a malignancy was suspected, the patient was imaged with a gamma camera using a radiolabeled monoclonal anti-carcinoembryonic antigen (CEA) antibody, including single photon emission computed tomography. The radioantibody accumulated strongly in the neoplasm. The tumor-to-background ratio was 2.2. Samples of the excised tumor were stained immunohistochemically for desmin, vimentin, muscle actin, cytokeratin, CEA, and factor VIII. They showed that the antibody uptake was of an unspecific nature and not due to CEA expression in the tumor.

  1. A subtraction scheme for computing QCD jet cross sections at NNLO: integrating the iterated singly-unresolved subtraction terms

    NASA Astrophysics Data System (ADS)

    Bolzoni, Paolo; Somogyi, Gábor; Trócsányi, Zoltán

    2011-01-01

    We perform the integration of all iterated singly-unresolved subtraction terms, as defined in ref. [1], over the two-particle factorized phase space. We also sum over the unresolved parton flavours. The final result can be written as a convolution (in colour space) of the Born cross section and an insertion operator. We spell out the insertion operator in terms of 24 basic integrals that are defined explicitly. We compute the coefficients of the Laurent expansion of these integrals in two different ways, with the method of Mellin-Barnes representations and sector decomposition. Finally, we present the Laurent-expansion of the full insertion operator for the specific examples of electron-positron annihilation into two and three jets.

  2. Fast parallel DNA-based algorithms for molecular computation: quadratic congruence and factoring integers.

    PubMed

    Chang, Weng-Long

    2012-03-01

    Assume that n is a positive integer. If there is an integer M such that M^2 ≡ C (mod n), i.e., the congruence has a solution, then C is said to be a quadratic congruence (mod n). If the congruence does not have a solution, then C is said to be a quadratic noncongruence (mod n). The task of solving this problem is central to many important applications, the most obvious being cryptography. In this article, we describe a DNA-based algorithm for solving quadratic congruences and factoring integers. In addition to this novel contribution, we also show the utility of our encoding scheme and of the algorithm's submodules. We demonstrate how a variety of arithmetic, shift, and comparison operations, namely bitwise and full addition, subtraction, left shifting, and comparison, can be performed using strands of DNA.
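
    The mathematical task the DNA algorithm addresses can be specified conventionally in a few lines (brute force, for illustration only; the article's contribution is carrying out the equivalent search with DNA-encoded bitwise operations):

    ```python
    def is_quadratic_congruence(C, n):
        """Return True if M^2 ≡ C (mod n) has a solution, i.e. if C is a
        quadratic congruence (residue) mod n, by exhaustive search over M."""
        C %= n
        return any((M * M) % n == C for M in range(n))
    ```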

  3. Performance evaluations of demons and free form deformation algorithms for the liver region.

    PubMed

    Wang, Hui; Gong, Guanzhong; Wang, Hongjun; Li, Dengwang; Yin, Yong; Lu, Jie

    2014-04-01

    We investigated the influence of breathing motion on radiation therapy using four-dimensional computed tomography (4D-CT) and showed that registration of the different phase images of 4D-CT is significant for tumor tracking. The demons algorithm in two interpolation modes was compared to the free form deformation (FFD) model algorithm for registering the different phase images of 4D-CT, using iodipin as verification. Linear interpolation was used in both mode 1 and mode 2. Mode 1 set outside pixels to the nearest pixel, while mode 2 set outside pixels to zero. We used normalized mutual information (NMI), sum of squared differences, modified Hausdorff distance, and registration speed to evaluate the performance of each algorithm. The average NMI after registration with the demons method in mode 1 improved by 1.76% and 4.75% compared to mode 2 and the FFD model algorithm, respectively. Further, the modified Hausdorff distance did not differ between demons modes 1 and 2, but mode 1 was 15.2% lower than FFD. Finally, the demons algorithm had a clear advantage in registration speed. The demons algorithm in mode 1 was therefore found to be much more suitable for the registration of 4D-CT images. The subtraction of floating images from the reference image before and after registration by demons further verified that the influence of breathing motion cannot be ignored and that the demons registration method is feasible.
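
    Of the evaluation measures, NMI is the least standard to compute; a minimal histogram-based sketch (the bin count is an assumption):

    ```python
    import numpy as np

    def normalized_mutual_information(a, b, bins=32):
        """Histogram-based NMI between two images: (H(A) + H(B)) / H(A, B).
        It is 2 for identical images and approaches 1 as they become
        statistically independent."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pab = joint / joint.sum()
        pa, pb = pab.sum(axis=1), pab.sum(axis=0)
        entropy = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
        return (entropy(pa) + entropy(pb)) / entropy(pab.ravel())
    ```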

  4. Home Camera-Based Fall Detection System for the Elderly.

    PubMed

    de Miguel, Koldo; Brunete, Alberto; Hernando, Miguel; Gambao, Ernesto

    2017-12-09

    Falls are the leading cause of injury and death in elderly individuals. Unfortunately, fall detectors are typically based on wearable devices, and the elderly often forget to wear them. In addition, fall detectors based on artificial vision are not yet available on the market. In this paper, we present a new low-cost fall detector for smart homes based on artificial vision algorithms. Our detector combines several algorithms (background subtraction, Kalman filtering and optical flow) as input to a machine learning algorithm with high detection accuracy. Tests conducted on over 50 different fall videos have shown a detection ratio of greater than 96%.
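
    The background-subtraction stage such a pipeline begins with can be sketched with a running-average model (learning rate and threshold are illustrative; the Kalman-filter, optical-flow and machine-learning stages are omitted):

    ```python
    import numpy as np

    def foreground_masks(frames, alpha=0.05, thresh=30.0):
        """Running-average background subtraction: maintain a slowly adapting
        background image and flag pixels that deviate from it by more than
        `thresh` grey levels as foreground."""
        bg = frames[0].astype(float)
        masks = []
        for f in frames[1:]:
            f = f.astype(float)
            masks.append(np.abs(f - bg) > thresh)   # foreground mask
            bg = (1 - alpha) * bg + alpha * f       # absorb gradual scene change
        return masks
    ```

    Features of the mask sequence (area, bounding-box aspect ratio, vertical speed) are the kind of inputs a fall classifier would consume.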

  5. Subtractive transcriptome analysis of leaf and rhizome reveals differentially expressed transcripts in Panax sokpayensis.

    PubMed

    Gurung, Bhusan; Bhardwaj, Pardeep K; Talukdar, Narayan C

    2016-11-01

    In the present study, suppression subtractive hybridization (SSH) strategy was used to identify rare and differentially expressed transcripts in leaf and rhizome tissues of Panax sokpayensis. Out of 1102 randomly picked clones, 513 and 374 high quality expressed sequenced tags (ESTs) were generated from leaf and rhizome subtractive libraries, respectively. Out of them, 64.92 % ESTs from leaf and 69.26 % ESTs from rhizome SSH libraries were assembled into different functional categories, while others were of unknown function. In particular, ESTs encoding galactinol synthase 2, ribosomal RNA processing Brix domain protein, and cell division cycle protein 20.1, which are involved in plant growth and development, were most abundant in the leaf SSH library. Other ESTs encoding protein KIAA0664 homologue, ubiquitin-activating enzyme e11, and major latex protein, which are involved in plant immunity and defense response, were most abundant in the rhizome SSH library. Subtractive ESTs also showed similarity with genes involved in ginsenoside biosynthetic pathway, namely farnesyl pyrophosphate synthase, squalene synthase, and dammarenediol synthase. Expression profiles of selected ESTs validated the quality of libraries and confirmed their differential expression in the leaf, stem, and rhizome tissues. In silico comparative analyses revealed that around 13.75 % of unigenes from the leaf SSH library were not represented in the available leaf transcriptome of Panax ginseng. Similarly, around 18.12, 23.75, 25, and 6.25 % of unigenes from the rhizome SSH library were not represented in available root/rhizome transcriptomes of P. ginseng, Panax notoginseng, Panax quinquefolius, and Panax vietnamensis, respectively, indicating a major fraction of novel ESTs. Therefore, these subtractive transcriptomes provide valuable resources for gene discovery in P. sokpayensis and would complement the available transcriptomes from other Panax species.

  6. Subtractive fabrication of ferroelectric thin films with precisely controlled thickness

    NASA Astrophysics Data System (ADS)

    Ievlev, Anton V.; Chyasnavichyus, Marius; Leonard, Donovan N.; Agar, Joshua C.; Velarde, Gabriel A.; Martin, Lane W.; Kalinin, Sergei V.; Maksymovych, Petro; Ovchinnikova, Olga S.

    2018-04-01

    The ability to control thin-film growth has led to advances in our understanding of fundamental physics as well as to the emergence of novel technologies. However, common thin-film growth techniques introduce a number of limitations related to the concentration of defects on film interfaces and surfaces, which limit the scope of systems that can be produced and studied experimentally. Here, we developed an ion-beam-based subtractive fabrication process that enables creation and modification of thin films with pre-defined thicknesses. To accomplish this, we transformed a multimodal imaging platform that combines time-of-flight secondary ion mass spectrometry with atomic force microscopy into a unique fabrication tool that allows for precise sputtering of nanometer-thin layers of material. To demonstrate fabrication of thin films with in situ feedback and control of film thickness and functionality, we systematically studied the thickness dependence of ferroelectric switching of lead zirconate titanate within a single epitaxial film. Our results demonstrate that through a subtractive film fabrication process we can control the piezoelectric response as a function of film thickness as well as improve the overall piezoelectric response relative to an untreated film.

  7. Subtractive fabrication of ferroelectric thin films with precisely controlled thickness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ievlev, Anton; Chyasnavichyus, Marius; Leonard, Donovan N.

    The ability to control thin-film growth has led to advances in our understanding of fundamental physics as well as to the emergence of novel technologies. However, common thin-film growth techniques introduce a number of limitations related to the concentration of defects on film interfaces and surfaces, which limit the scope of systems that can be produced and studied experimentally. Here, we developed an ion-beam-based subtractive fabrication process that enables creation and modification of thin films with pre-defined thicknesses. To accomplish this, we transformed a multimodal imaging platform that combines time-of-flight secondary ion mass spectrometry with atomic force microscopy into a unique fabrication tool that allows for precise sputtering of nanometer-thin layers of material. To demonstrate fabrication of thin films with in situ feedback and control of film thickness and functionality, we systematically studied the thickness dependence of ferroelectric switching of lead zirconate titanate within a single epitaxial film. Lastly, our results demonstrate that through a subtractive film fabrication process we can control the piezoelectric response as a function of film thickness as well as improve the overall piezoelectric response relative to an untreated film.

  8. Subtractive fabrication of ferroelectric thin films with precisely controlled thickness

    DOE PAGES

    Ievlev, Anton; Chyasnavichyus, Marius; Leonard, Donovan N.; ...

    2018-02-22

    The ability to control thin-film growth has led to advances in our understanding of fundamental physics as well as to the emergence of novel technologies. However, common thin-film growth techniques introduce a number of limitations related to the concentration of defects on film interfaces and surfaces, which limit the scope of systems that can be produced and studied experimentally. Here, we developed an ion-beam-based subtractive fabrication process that enables creation and modification of thin films with pre-defined thicknesses. To accomplish this, we transformed a multimodal imaging platform that combines time-of-flight secondary ion mass spectrometry with atomic force microscopy into a unique fabrication tool that allows for precise sputtering of nanometer-thin layers of material. To demonstrate fabrication of thin films with in situ feedback and control of film thickness and functionality, we systematically studied the thickness dependence of ferroelectric switching of lead zirconate titanate within a single epitaxial film. Lastly, our results demonstrate that through a subtractive film fabrication process we can control the piezoelectric response as a function of film thickness as well as improve the overall piezoelectric response relative to an untreated film.

  9. Subtractive fabrication of ferroelectric thin films with precisely controlled thickness.

    PubMed

    Ievlev, Anton V; Chyasnavichyus, Marius; Leonard, Donovan N; Agar, Joshua C; Velarde, Gabriel A; Martin, Lane W; Kalinin, Sergei V; Maksymovych, Petro; Ovchinnikova, Olga S

    2018-04-02

    The ability to control thin-film growth has led to advances in our understanding of fundamental physics as well as to the emergence of novel technologies. However, common thin-film growth techniques introduce a number of limitations related to the concentration of defects on film interfaces and surfaces, which limit the scope of systems that can be produced and studied experimentally. Here, we developed an ion-beam-based subtractive fabrication process that enables creation and modification of thin films with pre-defined thicknesses. To accomplish this, we transformed a multimodal imaging platform that combines time-of-flight secondary ion mass spectrometry with atomic force microscopy into a unique fabrication tool that allows for precise sputtering of nanometer-thin layers of material. To demonstrate fabrication of thin films with in situ feedback and control of film thickness and functionality, we systematically studied the thickness dependence of ferroelectric switching of lead zirconate titanate within a single epitaxial film. Our results demonstrate that through a subtractive film fabrication process we can control the piezoelectric response as a function of film thickness as well as improve the overall piezoelectric response relative to an untreated film.

  10. Analysis of luminosity distributions of strong lensing galaxies: subtraction of diffuse lensed signal

    NASA Astrophysics Data System (ADS)

    Biernaux, J.; Magain, P.; Hauret, C.

    2017-08-01

    Context. Strong gravitational lensing gives access to the total mass distribution of galaxies. It can unveil a great deal of information about the lenses' dark matter content when combined with the study of the lenses' light profile. However, gravitational lensing galaxies, by definition, appear surrounded by lensed signal, both point-like and diffuse, that is irrelevant to the lens flux. The observer is therefore most often restricted to studying the innermost portions of the galaxy, where classical fitting methods show some instabilities. Aims: We aim at subtracting that lensed signal and at characterising some lenses' light profiles by computing their shape parameters (half-light radius, ellipticity, and position angle). Our objective is to evaluate the total integrated flux in an aperture the size of the Einstein ring in order to obtain a robust estimate of the quantity of ordinary (luminous) matter in each system. Methods: We expand the work started in a previous paper, which consisted of subtracting the point-like lensed images and independently measuring each shape parameter. We improve on it by designing a subtraction of the diffuse lensed signal based on a single, simple symmetry hypothesis, and apply it to the cases where it proves necessary. This extra step improves our study of the shape parameters, which we refine further by upgrading our half-light radius measurement method. We also calculate the impact of our specific image processing on the error bars. Results: The diffuse lensed signal subtraction makes it possible to study a larger portion of the relevant galactic flux, as the radius of the fitting region increases by 17% on average. We retrieve new half-light radii that are on average 11% smaller than in our previous work, although the uncertainties overlap in most cases. This shows that neglecting the diffuse lensed signal may lead to a significant overestimate of the half-light radius. 
We are also able to measure

  11. Sensory subtraction in robot-assisted surgery: fingertip skin deformation feedback to ensure safety and improve transparency in bimanual haptic interaction.

    PubMed

    Meli, Leonardo; Pacchierotti, Claudio; Prattichizzo, Domenico

    2014-04-01

    This study presents a novel approach to force feedback in robot-assisted surgery. It consists of substituting the haptic stimuli, composed of a kinesthetic component and a skin deformation, with cutaneous stimuli only. The force fed back can then be thought of as the subtraction of the kinesthetic part from the complete haptic interaction (cutaneous plus kinesthetic); for this reason, we refer to this approach as sensory subtraction. Sensory subtraction aims at outperforming other non-kinesthetic feedback techniques in teleoperation (e.g., sensory substitution) while guaranteeing the stability and safety of the system. We tested the proposed approach in a challenging 7-DoF bimanual teleoperation task, similar to the Pegboard experiment of the da Vinci Skills Simulator. Sensory subtraction showed improved performance in terms of completion time, force exerted, and total displacement of the rings with respect to two popular sensory substitution techniques. Moreover, it guaranteed a stable interaction in the presence of a communication delay in the haptic loop.

  12. The Aquarius Salinity Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank; Hilburn, Kyle; Lagerloef, Gary; Le Vine, David

    2012-01-01

    The first part of this presentation gives an overview of the Aquarius salinity retrieval algorithm. The instrument calibration [2] converts Aquarius radiometer counts into antenna temperatures (TA). The salinity retrieval algorithm converts those TA into brightness temperatures (TB) at a flat ocean surface. As a first step, contributions arising from the intrusion of solar, lunar, and galactic radiation are subtracted. The antenna pattern correction (APC) removes the effects of cross-polarization contamination and spillover. The Aquarius radiometer measures the 3rd Stokes parameter in addition to vertical (v) and horizontal (h) polarizations, which allows for straightforward removal of ionospheric Faraday rotation. The atmospheric absorption at L-band is almost entirely due to molecular oxygen; it can be calculated from auxiliary input fields of numerical weather prediction models and then removed from the TB. The final step in the TA-to-TB conversion is the correction for the roughness of the sea surface due to wind, which is addressed in more detail in section 3. The TB of the flat ocean surface can then be matched to a salinity value using a surface emission model that is based on a model for the dielectric constant of sea water [3], [4] and an auxiliary field for the sea surface temperature. In the current processing, only the v-pol TB is used for this last step.
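
    The correction chain described above can be sketched as a sequence of subtractions followed by a table lookup. Every function name, coefficient, and the nearest-neighbour lookup below are illustrative placeholders, not the actual Aquarius processing:

```python
import numpy as np

def antenna_to_surface_tb(ta, t_celestial, t_atmosphere, roughness_delta,
                          spillover=0.01):
    """Strip successive contributions from the antenna temperature TA."""
    tb = ta - t_celestial          # remove solar/lunar/galactic intrusion
    tb = tb / (1.0 - spillover)    # crude stand-in for the antenna pattern correction
    tb = tb - t_atmosphere         # remove oxygen-dominated atmospheric emission
    return tb - roughness_delta    # remove wind-roughness excess emission

def tb_to_salinity(tb_flat, lookup):
    """Match the flat-surface TB to a salinity value via a precomputed table.

    A real retrieval also uses sea surface temperature and a dielectric
    model; a nearest-neighbour lookup stands in for both here.
    """
    idx = np.argmin(np.abs(lookup["tb"] - tb_flat))
    return lookup["sss"][idx]

# toy table and measurement, purely for illustration
lookup = {"tb": np.array([90.0, 95.0, 100.0]),
          "sss": np.array([30.0, 35.0, 40.0])}
tb = antenna_to_surface_tb(100.0, 2.0, 3.0, 1.0)
sss = tb_to_salinity(tb, lookup)
```

    The real algorithm applies these corrections per polarization and per beam; the point of the sketch is only the order of the subtractive steps.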

  13. Analysis of differentially expressed genes in two immunologically distinct strains of Eimeria maxima using suppression subtractive hybridization and dot-blot hybridization

    PubMed Central

    2014-01-01

    Background It is well known that different Eimeria maxima strains exhibit significant antigenic variation. However, the genetic basis of these phenotypes remains unclear. Methods Total RNA and mRNA were isolated from unsporulated oocysts of E. maxima strains SH and NT, which were found to have significant differences in immunogenicity in our previous research. Two subtractive cDNA libraries were constructed using suppression subtractive hybridization (SSH) and specific genes were further analyzed by dot-blot hybridization and qRT-PCR analysis. Results A total of 561 clones were selected from both cDNA libraries and the length of the inserted fragments was 0.25–1.0 kb. Dot-blot hybridization revealed a total of 86 differentially expressed clones (63 from strain SH and 23 from strain NT). Nucleotide sequencing analysis of these clones revealed ten specific contigs (six from strain SH and four from strain NT). Further analysis found that six contigs from strain SH and three from strain NT shared significant identities with previously reported proteins, and one contig was presumed to be novel. The specific differentially expressed genes were finally verified by RT-PCR and qRT-PCR analyses. Conclusions The data presented here suggest that specific genes identified between the two strains may be important molecules in the immunogenicity of E. maxima that may present potential new drug targets or vaccine candidates for coccidiosis. PMID:24894832

  14. The value of subtraction MRI in detection of amyloid-related imaging abnormalities with oedema or effusion in Alzheimer's patients: An interobserver study.

    PubMed

    Martens, Roland M; Bechten, Arianne; Ingala, Silvia; van Schijndel, Ronald A; Machado, Vania B; de Jong, Marcus C; Sanchez, Esther; Purcell, Derk; Arrighi, Michael H; Brashear, Robert H; Wattjes, Mike P; Barkhof, Frederik

    2018-03-01

    Immunotherapeutic treatments targeting amyloid-β plaques in Alzheimer's disease (AD) are associated with the presence of amyloid-related imaging abnormalities with oedema or effusion (ARIA-E), whose detection and classification are crucial to evaluate subjects enrolled in clinical trials. We investigated the applicability of subtraction MRI to ARIA-E detection using an established ARIA-E rating scale. We included 75 AD patients receiving bapineuzumab treatment, including 29 ARIA-E cases. Five neuroradiologists rated their brain MRI scans with and without subtraction images. The accuracy of evaluating the presence of ARIA-E, the intraclass correlation coefficient (ICC), and specific agreement were calculated. Subtraction resulted in higher sensitivity (0.966) and lower specificity (0.970) than native images (0.959 and 0.991, respectively). Individual rater detection was excellent. ICC scores ranged from excellent to good, except for gyral swelling (moderate). Excellent negative and good positive specific agreement among all ARIA-E imaging features was reported in both groups. Combining sulcal hyperintensity and gyral swelling significantly increased positive agreement for subtraction images. Subtraction MRI has potential as a visual aid that increases the sensitivity of ARIA-E assessment; however, isotropic acquisition and enhanced training are required to improve its usefulness. The ARIA-E rating scale may benefit from combining sulcal hyperintensity and swelling. • The subtraction technique can improve detection of amyloid-related imaging abnormalities with oedema/effusion in Alzheimer's patients. • The value of ARIA-E detection, classification, and monitoring using subtraction was assessed. • An established ARIA-E rating scale was validated, and recommendations for improvement are reported. • Complementary statistical methods were employed to measure accuracy, inter-rater reliability, and specific agreement.

  15. [The improved design of table operating box of digital subtraction angiography device].

    PubMed

    Qi, Xianying; Zhang, Minghai; Han, Fengtan; Tang, Feng; He, Lemin

    2009-12-01

    This paper analyzes the disadvantages of the table operating box of the CGO-3000 digital subtraction angiography system. The authors put forward a communication control scheme between a single-chip microcomputer (SCM) and a programmable logic controller (PLC), and give the details of the communication hardware and software.

  16. A Voxel-by-Voxel Comparison of Deformable Vector Fields Obtained by Three Deformable Image Registration Algorithms Applied to 4DCT Lung Studies.

    PubMed

    Fatyga, Mirek; Dogan, Nesrin; Weiss, Elizabeth; Sleeman, William C; Zhang, Baoshe; Lehman, William J; Williamson, Jeffrey F; Wijesooriya, Krishni; Christensen, Gary E

    2015-01-01

    Commonly used methods of assessing the accuracy of deformable image registration (DIR) rely on image segmentation or landmark selection. These methods are very labor intensive and are thus limited to a relatively small number of image pairs. A direct voxel-by-voxel comparison can be automated to examine fluctuations in DIR quality over a long series of image pairs. A voxel-by-voxel comparison of three DIR algorithms applied to lung patients is presented. Registrations are compared both through volume histograms formed with the individual DIR maps and through a voxel-by-voxel subtraction of the two maps. When two DIR maps agree, one concludes that both maps are interchangeable in treatment planning applications, though one cannot conclude that either agrees with the ground truth. If two DIR maps significantly disagree, one concludes that at least one of them deviates from the ground truth. We use the method to compare three DIR algorithms applied to peak inhale-peak exhale registrations of 4DFBCT data obtained from 13 patients. All three algorithms appear to be nearly equivalent when compared using DICE similarity coefficients. A comparison based on Jacobian volume histograms shows that all three algorithms measure changes in the total volume of the lungs with reasonable accuracy, but show large differences in the variance of the Jacobian distribution on contoured structures. Analysis of the voxel-by-voxel subtraction of DIR maps shows differences between algorithms that exceed a centimeter for some registrations. Deformation maps produced by DIR algorithms must be treated as mathematical approximations of physical tissue deformation that are not self-consistent and may thus be useful only in applications for which they have been specifically validated. The three algorithms tested in this work perform fairly robustly for the task of contour propagation, but produce potentially unreliable results for the task of DVH accumulation or measurement of local volume change. Performance of
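
    The two comparisons described (voxel-by-voxel subtraction of deformation vector fields, and Jacobian-based volume histograms) can be sketched in 2D as follows; the array shapes and synthetic fields are assumptions for illustration, not the paper's 3D patient data:

```python
import numpy as np

def jacobian_det(u):
    """Jacobian determinant of the mapping x -> x + u(x), u of shape (2, H, W)."""
    duy_dy, duy_dx = np.gradient(u[0])
    dux_dy, dux_dx = np.gradient(u[1])
    return (1.0 + duy_dy) * (1.0 + dux_dx) - duy_dx * dux_dy

rng = np.random.default_rng(0)
u_a = 0.1 * rng.standard_normal((2, 64, 64))            # DVF from algorithm A
u_b = u_a + 0.01 * rng.standard_normal((2, 64, 64))     # slightly different DVF

# voxel-by-voxel subtraction: magnitude of the difference of the two DVFs
diff_mag = np.linalg.norm(u_a - u_b, axis=0)
hist, edges = np.histogram(diff_mag, bins=20)

# Jacobian histograms: values near 1 mean little local volume change
j_a, j_b = jacobian_det(u_a), jacobian_det(u_b)
```

    Large tails in the `diff_mag` histogram flag registrations where the two maps disagree, without requiring landmarks or contours.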

  17. Linear: A Novel Algorithm for Reconstructing Slitless Spectroscopy from HST/WFC3

    NASA Astrophysics Data System (ADS)

    Ryan, R. E., Jr.; Casertano, S.; Pirzkal, N.

    2018-03-01

    We present a grism extraction package (LINEAR) designed to reconstruct 1D spectra from a collection of slitless spectroscopic images, ideally taken at a variety of orientations, dispersion directions, and/or dither positions. Our approach is to enumerate every transformation between all direct image positions (i.e., potential sources) and the collection of grism images at all relevant wavelengths. This leads to a large, sparse system of linear equations, which we invert using the standard LSQR algorithm. We implement a number of color and geometric corrections (such as flat field, pixel-area map, source morphology, and spectral bandwidth), but assume many effects have been calibrated out (such as basic reductions, background subtraction, and astrometric refinement). We demonstrate the power of our approach with several Monte Carlo simulations and the analysis of archival data. The simulations include astrometric and photometric uncertainties, sky-background estimation, and signal-to-noise calculations. The data are G141 observations of the Hubble Ultra-Deep Field obtained with Wide Field Camera 3, and show the power of our formalism by improving the spectral resolution without sacrificing signal-to-noise (a tradeoff that is often made by current approaches). Additionally, our approach naturally accounts for source contamination, which is only handled heuristically by present software. We conclude with a discussion of various observations where our approach will provide much-improved 1D spectra, such as crowded fields (star or galaxy clusters), spatially resolved spectroscopy, or surveys with strict completeness requirements. At present our software is heavily geared toward Wide Field Camera 3 IR; however, we plan to extend the codebase to additional instruments.
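
    The extraction step described above amounts to solving a linear system A f = b that links wavelength-binned source fluxes to grism pixel values. A toy sketch follows, with an invented matrix layout and with numpy's dense least squares standing in for the sparse LSQR solver that LINEAR actually uses:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_wave = 200, 30               # grism pixels, wavelength bins

# Each grism pixel receives flux from a couple of wavelength bins of a source.
A = np.zeros((n_pix, n_wave))
for i in range(n_pix):
    j = i % n_wave                    # guarantees every bin is sampled
    A[i, j] = rng.uniform(0.5, 1.0)
    A[i, (j + 1) % n_wave] = rng.uniform(0.0, 0.5)

f_true = 1.0 + 0.5 * np.sin(np.linspace(0.0, 3.0, n_wave))  # "true" 1D spectrum
b = A @ f_true + 0.001 * rng.standard_normal(n_pix)         # noisy grism data

# LINEAR inverts the (much larger, sparse) system with LSQR; dense least
# squares is equivalent at this toy size.
f_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```

    Because every pixel-to-wavelength transformation is enumerated explicitly, multiple orientations and dither positions simply add rows to A, which is how the formalism absorbs contamination and improves resolution.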

  18. Nonlinear ultrasonic imaging method for closed cracks using subtraction of responses at different external loads.

    PubMed

    Ohara, Yoshikazu; Horinouchi, Satoshi; Hashimoto, Makoto; Shintaku, Yohei; Yamanaka, Kazushi

    2011-08-01

    To improve the selectivity for closed cracks over objects other than cracks in ultrasonic imaging, we propose an extension of a novel imaging method, the subharmonic phased array for crack evaluation (SPACE), as well as another approach using the subtraction of responses at different external loads. When external static or dynamic loads are applied to closed cracks, the contact state in the cracks varies, resulting in an intensity change of the responses at the cracks. In contrast, objects other than cracks are independent of the external load. Therefore, only cracks can be extracted by subtracting the responses at different loads. In this study, we performed fundamental experiments on a closed fatigue crack formed in an aluminum alloy compact tension (CT) specimen using the proposed method. We examined the static load dependence of SPACE images and the dynamic load dependence of linear phased array (PA) images by simulating the external loads with a servohydraulic fatigue testing machine. By subtracting the images at different external loads, we show that this method is useful for extracting only the intensity change of responses related to closed cracks, while canceling the responses of objects other than cracks. Copyright © 2010 Elsevier B.V. All rights reserved.
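
    The load-subtraction principle can be illustrated with synthetic images: a load-independent reflector cancels in the subtraction, while the load-dependent crack response survives. All values below are invented for illustration:

```python
import numpy as np

img_low = np.zeros((32, 32))
img_low[10, 10] = 1.0            # geometric reflector (load-independent)
img_low[20, 20] = 0.2            # closed crack: weak response at low load

img_high = img_low.copy()
img_high[20, 20] = 0.8           # crack contact state changes under load

# subtracting the two images cancels the reflector and keeps the crack
crack_only = np.abs(img_high - img_low)
```

    In the real experiment the two images are SPACE or PA reconstructions at different static or dynamic loads, but the cancellation logic is exactly this pixel-wise subtraction.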

  19. New algorithm for lossless hyper-spectral image compression with mixing transform to eliminate redundancy

    NASA Astrophysics Data System (ADS)

    Xie, ChengJun; Xu, Lin

    2008-03-01

    This paper presents a new algorithm that eliminates redundancy with a mixing transform: SHIRCT combined with a subtraction mixing transform eliminates spectral redundancy, and a 2D CDF(2,2) DWT eliminates spatial redundancy. This transform is convenient to realize in hardware, since it can be implemented entirely with add and shift operations, and its redundancy-elimination effect is better than that of a (1D+2D) CDF(2,2) DWT. An improved SPIHT+CABAC mixed coding algorithm is used for compression coding. The experimental results show that in lossless image compression applications this method is slightly better than (1D+2D) CDF(2,2) DWT + improved SPIHT+CABAC, and much better than JPEG-LS, WinZip, ARJ, DPCM, the research achievements of a research team of the Chinese Academy of Sciences, NMST, and MST. Using the hyper-spectral image Canal from the JPL laboratory as the data set for the lossless compression test, the compression ratio of this algorithm exceeds those of the above algorithms by 42%, 37%, 35%, 30%, 16%, 13%, and 11% on average, respectively.
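
    As a much simpler stand-in for the SHIRCT/subtraction mixing transform named above, plain band-to-band subtraction illustrates how a lossless, integer, addition-only transform removes spectral redundancy (neighbouring bands of a hyperspectral cube are highly correlated, so residuals are small and cheap to code):

```python
import numpy as np

def spectral_forward(cube):
    """cube: int array (bands, H, W) -> first band plus band-to-band residuals."""
    out = cube.astype(np.int64).copy()
    out[1:] = cube[1:].astype(np.int64) - cube[:-1].astype(np.int64)
    return out

def spectral_inverse(res):
    """Exactly invert the forward transform by cumulative summation."""
    return np.cumsum(res, axis=0)
```

    The round trip is bit-exact on integer data, which is the property a lossless coder such as SPIHT+CABAC needs from its decorrelating transform.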

  20. Quantitative Assessment of Regional Wall Motion Abnormalities Using Dual-Energy Digital Subtraction Intravenous Ventriculography

    NASA Astrophysics Data System (ADS)

    McCollough, Cynthia H.

    Healthy portions of the left ventricle (LV) can often compensate for regional dysfunction, thereby masking regional disease when global indices of LV function are employed. Thus, quantitation of regional function provides a more useful method of assessing LV function, especially in diseases that have regional effects such as coronary artery disease. This dissertation studied the ability of a phase-matched dual-energy digital subtraction angiography (DE-DSA) technique to quantitate changes in regional LV systolic volume. The potential benefits and a theoretical description of the DE imaging technique are detailed. A correlated noise reduction algorithm is also presented which raises the signal-to-noise ratio of DE images by a factor of 2-4. Ten open-chest dogs were instrumented with transmural ultrasonic crystals to assess regional LV function in terms of systolic normalized-wall-thickening rate (NWTR) and percent-systolic-thickening (PST). A pneumatic occluder was placed on the left-anterior-descending (LAD) coronary artery to temporarily reduce myocardial blood flow, thereby changing regional LV function in the LAD bed. DE-DSA intravenous left ventriculograms were obtained at control and four levels of graded myocardial ischemia, as determined by reductions in PST. Phase-matched images displaying changes in systolic contractile function were created by subtracting an end-systolic (ES) control image from ES images acquired at each level of myocardial ischemia. The resulting wall-motion difference signal (WMD), which represents a change in regional systolic volume between the control and ischemic states, was quantitated by videodensitometry and compared with changes in NWTR and PST. Regression analysis of 56 data points from 10 animals shows a linear relationship between WMD and both NWTR and PST: WMD = -2.46 NWTR + 13.9, r = 0.64, p < 0.001; WMD = -2.11 PST + 18.4, r = 0.54, p < 0.001. 
Thus, changes in regional ES LV volume between rest and ischemic states, as

  1. [Design and development of the DSA digital subtraction workstation].

    PubMed

    Peng, Wen-Xian; Peng, Tian-Zhou; Xia, Shun-Ren; Jin, Guang-Bo

    2008-05-01

    A DSA digital subtraction workstation meeting the patient examination criteria and the demands of all related departments has been successfully designed; it is introduced in this paper through an analysis of the characteristics of the video source of a DSA system manufactured by GE that has no DICOM standard interface. The workstation includes an image-capturing gateway and post-processing software. With the developed workstation, all images from this early DSA equipment are converted into DICOM format and can then be shared among different machines.

  2. Improved visualization of intracranial vessels with intraoperative coregistration of rotational digital subtraction angiography and intraoperative 3D ultrasound.

    PubMed

    Podlesek, Dino; Meyer, Tobias; Morgenstern, Ute; Schackert, Gabriele; Kirsch, Matthias

    2015-01-01

    Ultrasound can visualize and update the vessel status in real time during cerebral vascular surgery. We studied the depiction of parent vessels and aneurysms with a high-resolution 3D intraoperative ultrasound imaging system during aneurysm clipping, using rotational digital subtraction angiography as a reference. We analyzed 3D intraoperative ultrasound in 39 patients with cerebral aneurysms to visualize the aneurysm intraoperatively and the nearby vascular tree before and after clipping. Simultaneous coregistration of preoperative subtraction angiography data with 3D intraoperative ultrasound was performed to verify the anatomical assignment. Intraoperative ultrasound detected 35 of 43 aneurysms (81%) in 39 patients. Thirty-nine intraoperative ultrasound measurements were matched with rotational digital subtraction angiography and successfully reconstructed during the procedure. In 7 patients, the aneurysm was only partially visualized by 3D-ioUS or was not in the field of view. Post-clipping intraoperative ultrasound was obtained in 26 patients and successfully reconstructed in 18 (69%) despite clip-related artefacts. The overlap between the 3D-ioUS aneurysm volume and the preoperative rDSA aneurysm volume resulted in a mean accuracy of 0.71 (Dice coefficient). Intraoperative coregistration of 3D intraoperative ultrasound data with preoperative rotational digital subtraction angiography is thus possible with high accuracy. It allows the immediate visualization of vessels beyond the microscopic field, as well as parallel assessment of blood velocity, aneurysm, and vascular tree configuration. Although the spatial resolution is lower than for standard angiography, the method provides an excellent vascular overview, advantageous interpretation of 3D-ioUS, and immediate intraoperative feedback on the vascular status. Prerequisites for understanding vascular intraoperative ultrasound are good image quality and a successful match with preoperative rotational digital subtraction angiography.
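
    The Dice coefficient used above to score the overlap between the 3D-ioUS and rDSA aneurysm volumes is simply 2|A∩B| / (|A| + |B|); a minimal boolean-mask version:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient of two binary masks of the same shape."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```

    A value of 1.0 means identical volumes; the paper's mean of 0.71 indicates substantial but imperfect overlap.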

  3. Demonstration of an optical directed half-subtracter using integrated silicon photonic circuits.

    PubMed

    Liu, Zilong; Zhao, Yongpeng; Xiao, Huifu; Deng, Lin; Meng, Yinghao; Guo, Xiaonan; Liu, Guipeng; Tian, Yonghui; Yang, Jianhong

    2018-04-01

    An integrated silicon photonic circuit consisting of two silicon microring resonators (MRRs) is proposed and experimentally demonstrated for the half-subtraction operation. A thermo-optic modulation scheme is employed to modulate the MRRs owing to its relatively simple fabrication process. The high and low levels of the electrical pulse signal define logic 1 and 0 in the electrical domain, respectively, and the high and low levels of the optical power represent logic 1 and 0 in the optical domain, respectively. Two electrical pulse sequences regarded as the operands are applied to the corresponding micro-heaters fabricated on top of the MRRs to achieve their dynamic modulation. The final operation results of bit-wise borrow and difference are obtained at their corresponding output ports in the form of light. Finally, the subtraction of two bits at an operation speed of 10 kbps is demonstrated successfully.
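
    The borrow and difference outputs realized optically here follow the standard half-subtracter truth table (difference = a XOR b, borrow = NOT a AND b), which can be stated in a few lines:

```python
def half_subtract(a, b):
    """Single-bit a - b: returns (difference, borrow)."""
    return a ^ b, (a ^ 1) & b

# full truth table, mirroring the circuit's two optical output ports
table = {(a, b): half_subtract(a, b) for a in (0, 1) for b in (0, 1)}
```

    The photonic circuit maps each row of this table to high/low optical power at the difference and borrow ports.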

  4. Characterization of unknown genetic modifications using high throughput sequencing and computational subtraction.

    PubMed

    Tengs, Torstein; Zhang, Haibo; Holst-Jensen, Arne; Bohlin, Jon; Butenko, Melinka A; Kristoffersen, Anja Bråthen; Sorteberg, Hilde-Gunn Opsahl; Berdal, Knut G

    2009-10-08

    When generating a genetically modified organism (GMO), the primary goal is to give a target organism one or several novel traits using biotechnology techniques. A GMO will differ from its parental strain in that its pool of transcripts is altered. Currently, there are no methods that can reliably determine whether an organism has been genetically altered when the nature of the modification is unknown. We show that the concept of computational subtraction can be used to identify transgenic cDNA sequences from genetically modified plants. Our datasets include 454-type sequences from a transgenic line of Arabidopsis thaliana and published EST datasets from commercially relevant species (rice and papaya). We believe that computational subtraction represents a powerful new strategy for determining whether an organism has been genetically modified as well as for defining the nature of the modification. Fewer assumptions have to be made compared with the methods currently in use, which is an advantage particularly when working with unknown GMOs.
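
    The idea of computational subtraction can be sketched as follows: reads that match the parental reference are discarded, and the survivors are candidate transgenic sequence. Exact k-mer matching below stands in for real read alignment, and all sequences are invented toy data:

```python
def subtract_reads(reads, reference, k=8):
    """Keep only reads that share no k-mer with the parental reference."""
    ref_kmers = {reference[i:i + k] for i in range(len(reference) - k + 1)}
    def maps(read):
        return any(read[i:i + k] in ref_kmers
                   for i in range(len(read) - k + 1))
    return [r for r in reads if not maps(r)]

reference = "ACGT" * 10                  # toy parental transcriptome
reads = ["ACGTACGTACGT",                 # parental: subtracted away
         "TTTTGGGGCCCC"]                 # foreign: survives subtraction
survivors = subtract_reads(reads, reference)
```

    In practice the subtraction runs against the full parental transcriptome or genome, and the surviving reads are then assembled and inspected to characterize the modification.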

  5. Modification of the background flow by roll vortices

    NASA Technical Reports Server (NTRS)

    Shirer, Hampton N.; Haack, Tracy

    1990-01-01

    Use of observed wind profiles, such as those obtained from ascent or descent aircraft soundings, for the identification of the expected roll modes is hindered by the fact that these modes are able to modify the wind profiles. When such modified wind profiles are utilized to estimate the critical values of the dynamic and thermodynamic forcing rates, large errors in the preferred orientation angles and aspect ratios of the rolls may result. Nonlinear analysis of a 14-coefficient spectral model of roll circulations shows that the primary modification of the background wind is the addition of a linear component. When a linear profile having the correct amount of shear is subtracted from the observed cross-roll winds, the pre-roll wind profile can be estimated. A preliminary test of this hypothesis is given for a case in which cloud streets were observed during FIRE.
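
    The correction suggested above can be sketched directly: given the roll-induced linear shear (which the spectral model supplies), subtracting that linear component from the observed cross-roll wind recovers the pre-roll profile. The profile shape and shear value below are invented for illustration:

```python
import numpy as np

z = np.linspace(0.0, 1000.0, 21)            # height (m)
pre_roll = 5.0 + 2.0 * np.sin(z / 400.0)    # hypothetical pre-roll wind (m/s)
roll_shear = 0.003                          # linear shear added by the rolls (1/s)
observed = pre_roll + roll_shear * z        # what an aircraft sounding would see

# subtract the linear profile with the model-supplied shear
estimated_pre_roll = observed - roll_shear * z
```

    The difficulty in practice is of course estimating the correct shear, which is what the nonlinear spectral analysis provides.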

  6. Entanglement evaluation of non-Gaussian states generated by photon subtraction from squeezed states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitagawa, Akira; Takeoka, Masahiro; Sasaki, Masahide

    2006-04-15

    We consider the problem of evaluating the entanglement of non-Gaussian mixed states generated by photon subtraction from entangled squeezed states. The entanglement measures we use are the negativity and the logarithmic negativity. These measures possess the unusual property of being computable with linear algebra packages even for high-dimensional quantum systems. We numerically evaluate these measures for the non-Gaussian mixed states which are generated by photon subtraction with on/off photon detectors. The results are compared with the behavior of certain operational measures, namely the teleportation fidelity and the mutual information in the dense coding scheme. It is found that all of these results are mutually consistent, in the sense that whenever the enhancement is seen in terms of the operational measures, the negativity and the logarithmic negativity are also enhanced.
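    Both measures can indeed be computed with plain linear algebra: take the partial transpose of the density matrix over one subsystem and read off its trace norm. A small self-contained sketch for finite-dimensional dense matrices (function names are ours, not from the paper):

```python
import numpy as np

def partial_transpose(rho, dA, dB):
    """Partial transpose over subsystem B of a (dA*dB)-dim density matrix."""
    r = rho.reshape(dA, dB, dA, dB)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def negativities(rho, dA, dB):
    """Negativity (||rho^TB||_1 - 1)/2 and logarithmic negativity
    log2 ||rho^TB||_1, via eigenvalues of the partial transpose."""
    evals = np.linalg.eigvalsh(partial_transpose(rho, dA, dB))
    trace_norm = np.sum(np.abs(evals))
    return (trace_norm - 1) / 2, np.log2(trace_norm)
```

    For a two-qubit Bell state this gives negativity 0.5 and logarithmic negativity 1, the maximal values.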

  7. Algorithm for Detecting a Bright Spot in an Image

    NASA Technical Reports Server (NTRS)

    2009-01-01

    An algorithm processes the pixel intensities of a digitized image to detect and locate a circular bright spot, the approximate size of which is known in advance. The algorithm is used to find images of the Sun in cameras aboard the Mars Exploration Rovers. (The images are used in estimating orientations of the Rovers relative to the direction to the Sun.) The algorithm can also be adapted to the tracking of circular bright targets in other diverse applications. The first step in the algorithm is to calculate a dark-current ramp, a correction necessitated by the scheme that governs the readout of pixel charges in the charge-coupled-device camera in the original Mars Exploration Rover application. In this scheme, the fraction of each frame period during which dark current is accumulated in a given pixel (and, hence, the dark-current contribution to the pixel image-intensity reading) is proportional to the pixel row number. For the purpose of the algorithm, the dark-current contribution to the intensity reading from each pixel is assumed to equal the average of intensity readings from all pixels in the same row, and the factor of proportionality is estimated on the basis of this assumption. Then the product of the row number and the factor of proportionality is subtracted from the reading from each pixel to obtain a dark-current-corrected intensity reading. The next step in the algorithm is to determine the best location, within the overall image, for a window of N × N pixels (where N is an odd number) large enough to contain the bright spot of interest plus a small margin. (In the original application, the overall image contains 1,024 by 1,024 pixels, the image of the Sun is about 22 pixels in diameter, and N is chosen to be 29.)
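    The two steps described, the row-proportional dark-current correction and the N × N window search, can be sketched as follows. This is an illustrative reconstruction from the description above, not the flight code; the ramp slope is fitted from row means, and the window search simply maximizes the summed intensity via an integral image:

```python
import numpy as np

def correct_dark_ramp(img):
    """Remove a dark-current ramp proportional to row number.
    The per-row dark level is taken as the row mean (assumes the
    bright spot occupies a small fraction of each row), and the
    proportionality factor is fit against row index."""
    rows = np.arange(img.shape[0])
    k = np.polyfit(rows, img.mean(axis=1), 1)[0]   # ramp slope per row
    return img - k * rows[:, None]

def best_window(img, N):
    """Top-left corner of the N-by-N window with the largest sum
    (a simple stand-in for the bright-spot window search)."""
    # an integral image makes every window sum O(1)
    ii = np.pad(img.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    sums = ii[N:, N:] - ii[:-N, N:] - ii[N:, :-N] + ii[:-N, :-N]
    return np.unravel_index(np.argmax(sums), sums.shape)
```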

  8. A Novel mRNA Level Subtraction Method for Quick Identification of Target-Orientated Uniquely Expressed Genes Between Peanut Immature Pod and Leaf

    PubMed Central

    2010-01-01

    Subtraction techniques have been broadly applied for target gene discovery. However, most current protocols apply relative differential subtraction and yield large mixtures of clones of unique and differentially expressed genes. This makes it more difficult to identify unique or target-orientated expressed genes. In this study, we developed a novel method for subtraction at the mRNA level by integrating magnetic particle technology into driver preparation and tester–driver hybridization to facilitate uniquely expressed gene discovery between peanut immature pod and leaf through a single round of subtraction. The resulting target clones were further validated through polymerase chain reaction screening using peanut immature pod and leaf cDNA libraries as templates. This study identified several genes expressed uniquely in the immature peanut pod. These target genes can be used for future peanut functional genome and genetic engineering research. PMID:21406066

  9. Suppression subtractive hybridization identified differentially expressed genes in lung adenocarcinoma: ERGIC3 as a novel lung cancer-related gene

    PubMed Central

    2013-01-01

    Background To understand the carcinogenesis caused by accumulated genetic and epigenetic alterations and to seek novel biomarkers for various cancers, studying differentially expressed genes between cancerous and normal tissues is crucial. In this study, two cDNA libraries of lung cancer were constructed and screened for the identification of differentially expressed genes. Methods Two cDNA libraries of differentially expressed genes were constructed from lung adenocarcinoma tissue and adjacent nonmalignant lung tissue by suppression subtractive hybridization. The data of the cDNA libraries were then analyzed and compared using bioinformatics analysis. Levels of mRNA and protein were measured by quantitative real-time polymerase chain reaction (q-RT-PCR) and western blot respectively, and the expression and localization of proteins were determined by immunostaining. Gene functions were investigated using proliferation and migration assays after gene silencing and gene over-expression. Results Two libraries of differentially expressed genes were obtained. The forward-subtracted library (FSL) and the reverse-subtracted library (RSL) contained 177 and 59 genes, respectively. Bioinformatic analysis demonstrated that these genes were involved in a wide range of cellular functions. The vast majority of these genes were newly identified as abnormally expressed in lung cancer. In the first stage of the screening of 16 genes, we compared lung cancer tissues with their adjacent non-malignant tissues at the mRNA level, and found that six genes (ERGIC3, DDR1, HSP90B1, SDC1, RPSA, and LPCAT1) from the FSL were significantly up-regulated while two genes (GPX3 and TIMP3) from the RSL were significantly down-regulated (P < 0.05). The ERGIC3 protein was also over-expressed in lung cancer tissues and cultured cells, and expression of ERGIC3 was correlated with the degree of differentiation and histological type of lung cancer. The up-regulation of ERGIC3 could promote cellular migration

  10. Design Study: Integer Subtraction Operation Teaching Learning Using Multimedia in Primary School

    ERIC Educational Resources Information Center

    Aris, Rendi Muhammad; Putri, Ratu Ilma Indra

    2017-01-01

    This study aims to develop a learning trajectory to help students understand the concept of subtraction of integers using multimedia in the fourth grade. The study uses PMRI-based thematic integrative learning under Curriculum 2013. The method used is design research, which consists of three stages: preparing for the experiment, the design experiment, and retrospective…

  11. Improved Visualization of Intracranial Vessels with Intraoperative Coregistration of Rotational Digital Subtraction Angiography and Intraoperative 3D Ultrasound

    PubMed Central

    Podlesek, Dino; Meyer, Tobias; Morgenstern, Ute; Schackert, Gabriele; Kirsch, Matthias

    2015-01-01

    Introduction Ultrasound can visualize and update the vessel status in real time during cerebral vascular surgery. We studied the depiction of parent vessels and aneurysms with a high-resolution 3D intraoperative ultrasound imaging system during aneurysm clipping, using rotational digital subtraction angiography as a reference. Methods We analyzed 3D intraoperative ultrasound in 39 patients with cerebral aneurysms to visualize the aneurysm intraoperatively and the nearby vascular tree before and after clipping. Simultaneous coregistration of preoperative subtraction angiography data with 3D intraoperative ultrasound was performed to verify the anatomical assignment. Results Intraoperative ultrasound detected 35 of 43 aneurysms (81%) in 39 patients. Thirty-nine intraoperative ultrasound measurements were matched with rotational digital subtraction angiography and were successfully reconstructed during the procedure. In 7 patients, the aneurysm was only partially visualized by 3D-ioUS or was not in the field of view. Post-clipping intraoperative ultrasound was obtained in 26 patients and successfully reconstructed in 18 (69%) despite clip-related artefacts. The overlap between the 3D-ioUS aneurysm volume and the preoperative rDSA aneurysm volume resulted in a mean accuracy of 0.71 (Dice coefficient). Conclusions Intraoperative coregistration of 3D intraoperative ultrasound data with preoperative rotational digital subtraction angiography is possible with high accuracy. It allows the immediate visualization of vessels beyond the microscopic field, as well as parallel assessment of blood velocity, aneurysm and vascular tree configuration. Although spatial resolution is lower than for standard angiography, the method provides an excellent vascular overview, advantageous interpretation of 3D-ioUS and immediate intraoperative feedback of the vascular status. A prerequisite for understanding vascular intraoperative ultrasound is image quality and a successful match with preoperative
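    The overlap measure quoted above (the Dice coefficient) is straightforward to compute from two binary segmentation masks; a minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```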

  12. Dynamic Segmentation Of Behavior Patterns Based On Quantity Value Movement Using Fuzzy Subtractive Clustering Method

    NASA Astrophysics Data System (ADS)

    Sangadji, Iriansyah; Arvio, Yozika; Indrianto

    2018-03-01

    To analyze, with reasonable accuracy, patterns of change in value movements that can vary dynamically over a given period, a method based on sound technical working principles or a specific analytical approach is required. This affects the validity of the output produced by such a system. Subtractive clustering is based on the density (potential) of data points in a space of variables. The basic concept of subtractive clustering is to identify the regions of a variable that have high potential relative to the surrounding points. The result presented in this paper is a segmentation of behavior patterns based on quantity value movement; it shows the number of clusters formed and their membership.
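    A compact sketch of Chiu-style subtractive clustering as described above: each point's potential is the summed Gaussian density of its neighbors, the highest-potential point becomes a cluster center, and the potential around each accepted center is subtracted before the next pass. The radii and stopping rule below are common textbook defaults, not necessarily the authors' settings:

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, rb=None, eps=0.15):
    """Subtractive clustering sketch (after Chiu, 1994).

    Potentials are summed Gaussian densities with radius ra; after a
    center is accepted, its neighborhood potential (radius rb) is
    subtracted, and the loop stops once the remaining peak falls
    below eps times the first peak."""
    if rb is None:
        rb = 1.5 * ra
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    P = np.exp(-4.0 * d2 / ra**2).sum(axis=1)
    centers = []
    p_first = P.max()
    while True:
        k = int(np.argmax(P))
        if centers and P[k] < eps * p_first:
            break
        centers.append(X[k])
        P = P - P[k] * np.exp(-4.0 * d2[k] / rb**2)
    return np.array(centers)
```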

  13. THE IMPACT OF POINT-SOURCE SUBTRACTION RESIDUALS ON 21 cm EPOCH OF REIONIZATION ESTIMATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trott, Cathryn M.; Wayth, Randall B.; Tingay, Steven J., E-mail: cathryn.trott@curtin.edu.au

    Precise subtraction of foreground sources is crucial for detecting and estimating 21 cm H I signals from the Epoch of Reionization (EoR). We quantify how imperfect point-source subtraction due to limitations of the measurement data set yields structured residual signal in the data set. We use the Cramer-Rao lower bound, as a metric for quantifying the precision with which a parameter may be measured, to estimate the residual signal in a visibility data set due to imperfect point-source subtraction. We then propagate these residuals into two metrics of interest for 21 cm EoR experiments (the angular power spectrum and the two-dimensional power spectrum) using a combination of full analytic covariant derivation, analytic variant derivation, and covariant Monte Carlo simulations. This methodology differs from previous work in two ways: (1) it uses information theory to set the point-source position error, rather than assuming a global rms error, and (2) it describes a method for propagating the errors analytically, thereby obtaining the full correlation structure of the power spectra. The methods are applied to two upcoming low-frequency instruments that are proposing to perform statistical EoR experiments: the Murchison Widefield Array and the Precision Array for Probing the Epoch of Reionization. In addition to the actual antenna configurations, we apply the methods to minimally redundant and maximally redundant configurations. We find that for peeling sources above 1 Jy, the amplitude of the residual signal, and its variance, will be smaller than the contribution from thermal noise for the observing parameters proposed for upcoming EoR experiments, and that optimal subtraction of bright point sources will not be a limiting factor for EoR parameter estimation. We then use the formalism to provide an ab initio analytic derivation motivating the 'wedge' feature in the two-dimensional power spectrum, complementing previous discussion in the literature.

  14. Implementation and performance evaluation of acoustic denoising algorithms for UAV

    NASA Astrophysics Data System (ADS)

    Chowdhury, Ahmed Sony Kamal

    Unmanned Aerial Vehicles (UAVs) have become a popular alternative for wildlife monitoring and border surveillance applications. Eliminating the UAV's background noise and effectively classifying the target audio signal remain major challenges. The main goal of this thesis is to remove the UAV's background noise by means of acoustic denoising techniques. Existing denoising algorithms, such as Adaptive Least Mean Square (LMS), Wavelet Denoising, Time-Frequency Block Thresholding, and Wiener Filter, were implemented and their performance evaluated. The denoising algorithms were evaluated using average Signal to Noise Ratio (SNR), Segmental SNR (SSNR), Log Likelihood Ratio (LLR), and Log Spectral Distance (LSD) metrics. To evaluate the effectiveness of the denoising algorithms on the classification of target audio, we implemented Support Vector Machine (SVM) and Naive Bayes classification algorithms. Simulation results demonstrate that the LMS and Discrete Wavelet Transform (DWT) denoising algorithms offered superior performance compared with the other algorithms. Finally, we implemented the LMS and DWT algorithms on a DSP board for hardware evaluation. Experimental results showed that the LMS algorithm's performance is robust compared to DWT across various noise types in classifying target audio signals.
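    Of the metrics listed, Segmental SNR is the least standardized in its details; a common formulation (frame-wise SNR in dB, clamped to [-10, 35] dB and averaged) can be sketched as follows. The frame length and clamp range are conventional assumptions, not necessarily those used in the thesis:

```python
import numpy as np

def segmental_snr(clean, denoised, frame=256, floor=(-10.0, 35.0)):
    """Segmental SNR: per-frame SNR in dB, clamped to a sane range
    and averaged, a common objective metric for audio denoising."""
    snrs = []
    for i in range(0, len(clean) - frame + 1, frame):
        s = clean[i:i + frame]
        e = s - denoised[i:i + frame]
        num, den = np.sum(s ** 2), np.sum(e ** 2)
        snr = 10.0 * np.log10(num / den) if den > 0 else floor[1]
        snrs.append(np.clip(snr, *floor))
    return float(np.mean(snrs))
```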

  15. Children's understanding of the addition/subtraction complement principle.

    PubMed

    Torbeyns, Joke; Peters, Greet; De Smedt, Bert; Ghesquière, Pol; Verschaffel, Lieven

    2016-09-01

    In recent decades, children's understanding of mathematical principles has become an important research topic. In contrast to the commutativity and inversion principles, only a few studies have focused on children's understanding of the addition/subtraction complement principle (if a - b = c, then c + b = a), mainly relying on verbal techniques. This contribution aimed at deepening our understanding of children's knowledge of the addition/subtraction complement principle, combining verbal and non-verbal techniques. Participants were 67 third and fourth graders (9- to 10-year-olds). Children solved two tasks in which verbal reports as well as accuracy and speed data were collected. These two tasks differed only in the order of the problems and the instructions. In the looking-back task, children were told that sometimes the preceding problem might help to answer the next problem. In the baseline task, no helpful preceding items were offered. The looking-back task included 10 trigger-target problem pairs on the complement relation. Children verbally reported looking back on about 40% of all target problems in the looking-back task; the target problems were also solved faster and more accurately than in the baseline task. These results suggest that children used their understanding of the complement principle. The verbal and non-verbal data were highly correlated. This study complements previous work on children's understanding of mathematical principles by highlighting interindividual differences in 9- to 10-year-olds' understanding of the complement principle and indicating the potential of combining verbal and non-verbal techniques to investigate (the acquisition of) this understanding. © 2016 The British Psychological Society.

  16. DFT Calculation of IR Absorption Spectra for PCE-nH2O, TCE-nH2O, DCE-nH2O, VC-nH2O for Small and Water-Dominated Molecular Clusters

    DTIC Science & Technology

    2017-10-31

    of isolated molecules and that of bulk systems. DFT calculated absorption spectra represent quantitative estimates that can be correlated with...spectra, can be correlated with the presence of these hydrocarbons (see reference [1]). Accordingly, the molecular structure and IR absorption spectra of...associated with different types of ambient molecules, e.g., H2O, in order to apply background subtraction or spectral-signature- correlation algorithms

  17. Fluid surface compensation in digital holographic microscopy for topography measurement

    NASA Astrophysics Data System (ADS)

    Lin, Li-Chien; Tu, Han-Yen; Lai, Xin-Ji; Wang, Sheng-Shiun; Cheng, Chau-Jern

    2012-06-01

    A novel technique is presented for surface compensation and topography measurement of a specimen in fluid medium by digital holographic microscopy (DHM). In the measurement, the specimen is preserved in a culture dish full of liquid culture medium and an environmental vibration induces a series of ripples to create a non-uniform background on the reconstructed phase image. A background surface compensation algorithm is proposed to account for this problem. First, we distinguish the cell image from the non-uniform background and a morphological image operation is used to reduce the noise effect on the background surface areas. Then, an adaptive sampling from the background surface is employed, taking dense samples from the high-variation area while leaving the smooth region mostly untouched. A surface fitting algorithm based on the optimal bi-cubic functional approximation is used to establish a whole background surface for the phase image. Once the background surface is found, the background compensated phase can be obtained by subtracting the estimated background from the original phase image. From the experimental results, the proposed algorithm performs effectively in removing the non-uniform background of the phase image and has the ability to obtain the specimen topography inside fluid medium under environmental vibrations.
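    The compensation pipeline above (sample the background, fit a smooth surface, subtract it from the phase image) can be sketched with an ordinary least-squares bicubic surface. This is an illustrative stand-in for the paper's optimal bi-cubic functional approximation and omits the morphological cleanup and adaptive sampling steps:

```python
import numpy as np

def fit_background_surface(phase, mask, order=3):
    """Least-squares polynomial surface (bicubic by default) fitted
    only to pixels flagged as background by `mask`, then evaluated
    everywhere; subtracting it flattens the non-uniform phase."""
    H, W = phase.shape
    y, x = np.mgrid[0:H, 0:W]
    xn, yn = x / (W - 1), y / (H - 1)       # normalize for conditioning
    terms = [(xn ** i) * (yn ** j)
             for i in range(order + 1) for j in range(order + 1)]
    A = np.stack([t[mask] for t in terms], axis=1)
    coef, *_ = np.linalg.lstsq(A, phase[mask], rcond=None)
    return sum(c * t for c, t in zip(coef, terms))

# usage: flat = phase - fit_background_surface(phase, background_mask)
```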

  18. Clinical use of the ABO-Scoring Index: reliability and subtraction frequency.

    PubMed

    Lieber, William S; Carlson, Sean K; Baumrind, Sheldon; Poulton, Donald R

    2003-10-01

    This study tested the reliability and subtraction frequency of the study model-scoring system of the American Board of Orthodontists (ABO). We used a sample of 36 posttreatment study models that were selected randomly from six different orthodontic offices. Intrajudge and interjudge reliability was calculated using nonparametric statistics (Spearman rank coefficient, Wilcoxon, Kruskal-Wallis, and Mann-Whitney tests). We found differences ranging from 3 to 6 subtraction points (total score) for intrajudge scoring between two sessions. For overall total ABO score, the average correlation was .77. Intrajudge correlation was greatest for occlusal relationships and least for interproximal contacts. Interjudge correlation for ABO score averaged r = .85. Correlation was greatest for buccolingual inclination and least for overjet. The data show that some judges, on average, were much more lenient than others and that this resulted in a range of total scores between 19.7 and 27.5. Most of the deductions were found in the buccal segments and most were related to the second molars. We present these findings in the context of clinicians preparing for the ABO phase III examination and for orthodontists in their ongoing evaluation of clinical results.

  19. Work and information from thermal states after subtraction of energy quanta.

    PubMed

    Hloušek, J; Ježek, M; Filip, R

    2017-10-12

    Quantum oscillators prepared out of thermal equilibrium can be used to produce work and transmit information. By intensive cooling of a single oscillator, its thermal energy deterministically dissipates to a colder environment, and the oscillator substantially reduces its entropy. This out-of-equilibrium state allows us to obtain work and to carry information. Here, we propose and experimentally demonstrate an advanced approach that conditionally prepares more efficient out-of-equilibrium states using only weak dissipation, an inefficient quantum measurement of the dissipated thermal energy, and subsequent triggering of those states. Although this procedure conditionally subtracts energy quanta from the oscillator, the average energy grows and the second-order correlation function approaches unity, as under coherent external driving. On the other hand, the Fano factor remains constant and the entropy of the subtracted state increases, which raises doubts about a possible application of this approach. To resolve this, we predict and experimentally verify that both the available work and the transmitted information can be conditionally higher in this case than under arbitrary cooling or adequate thermal heating up to the same average energy. This qualifies the conditional procedure as a useful source for experiments in quantum information and thermodynamics.

  20. Home Camera-Based Fall Detection System for the Elderly

    PubMed Central

    de Miguel, Koldo

    2017-01-01

    Falls are the leading cause of injury and death in elderly individuals. Unfortunately, fall detectors are typically based on wearable devices, and the elderly often forget to wear them. In addition, fall detectors based on artificial vision are not yet available on the market. In this paper, we present a new low-cost fall detector for smart homes based on artificial vision algorithms. Our detector combines several algorithms (background subtraction, Kalman filtering and optical flow) as input to a machine learning algorithm with high detection accuracy. Tests conducted on over 50 different fall videos have shown a detection ratio of greater than 96%. PMID:29232846
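    Of the ingredients named above, background subtraction is the simplest to illustrate. A minimal exponential-running-average subtractor, which is not the authors' exact algorithm and omits the Kalman-filtering and optical-flow stages, might look like:

```python
import numpy as np

class RunningAverageBS:
    """Minimal background-subtraction ingredient: an exponential
    running average models the background; pixels deviating by more
    than `thresh` are flagged as foreground."""
    def __init__(self, alpha=0.05, thresh=25.0):
        self.alpha, self.thresh = alpha, thresh
        self.bg = None

    def apply(self, frame):
        frame = frame.astype(float)
        if self.bg is None:
            self.bg = frame.copy()
        fg = np.abs(frame - self.bg) > self.thresh
        # only update the background where no motion was detected
        self.bg[~fg] += self.alpha * (frame[~fg] - self.bg[~fg])
        return fg
```

    The resulting foreground mask would then feed shape and motion features into the machine learning classifier the paper describes.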

  1. Identification of floral genes for sex determination in Calamus palustris Griff. by using suppression subtractive hybridization.

    PubMed

    Ng, C Y; Wickneswari, R; Choong, C Y

    2014-08-07

    Calamus palustris Griff. is an economically important dioecious rattan species in Southeast Asia. However, dioecy and the onset of flowering at 3-4 years of age make it uncertain whether the desired female:male seedling ratio can be achieved when establishing a productive seed orchard for this rattan species. We constructed a subtractive library for male floral tissue to understand the genetic mechanism of gender determination in C. palustris. The subtractive library produced 1536 clones, 1419 of which were of high quality. Reverse Northern screening showed 313 clones with differential expression, and sequence analyses clustered them into 205 unigenes, including 32 contigs and 173 singletons. The subtractive library was further validated with reverse transcription-quantitative polymerase chain reaction analysis. Homology identification classified the unigenes into 12 putative functional categories, with 83% of the unigenes showing a significant match to proteins in databases. Functional annotation of these unigenes revealed genes involved in male flower development, including MADS-box genes, pollen-related genes, phytohormones for flower development, and male flower organ development. Our results show that the male floral genes may play a vital role in sex determination in C. palustris. The identified genes can be exploited to understand the molecular basis of sex determination in C. palustris.

  2. Subspace-based optimization method for inverse scattering problems with an inhomogeneous background medium

    NASA Astrophysics Data System (ADS)

    Chen, Xudong

    2010-07-01

    This paper proposes a version of the subspace-based optimization method to solve the inverse scattering problem with an inhomogeneous background medium where the known inhomogeneities are bounded in a finite domain. Although the background Green's function at each discrete point in the computational domain is not directly available in an inhomogeneous background scenario, the paper uses the finite element method to simultaneously obtain the Green's function at all discrete points. The essence of the subspace-based optimization method is that part of the contrast source is determined from the spectrum analysis without using any optimization, whereas the orthogonally complementary part is determined by solving a lower dimension optimization problem. This feature significantly speeds up the convergence of the algorithm and at the same time makes it robust against noise. Numerical simulations illustrate the efficacy of the proposed algorithm. The algorithm presented in this paper finds wide applications in nondestructive evaluation, such as through-wall imaging.

  3. Quantifying the Relative Contributions of Divisive and Subtractive Feedback to Rhythm Generation

    PubMed Central

    Tabak, Joël; Rinzel, John; Bertram, Richard

    2011-01-01

    Biological systems are characterized by a high number of interacting components. Determining the role of each component is difficult, addressed here in the context of biological oscillations. Rhythmic behavior can result from the interplay of positive feedback that promotes bistability between high and low activity, and slow negative feedback that switches the system between the high and low activity states. Many biological oscillators include two types of negative feedback processes: divisive (decreases the gain of the positive feedback loop) and subtractive (increases the input threshold) that both contribute to slowly move the system between the high- and low-activity states. Can we determine the relative contribution of each type of negative feedback process to the rhythmic activity? Does one dominate? Do they control the active and silent phase equally? To answer these questions we use a neural network model with excitatory coupling, regulated by synaptic depression (divisive) and cellular adaptation (subtractive feedback). We first attempt to apply standard experimental methodologies: either passive observation to correlate the variations of a variable of interest to system behavior, or deletion of a component to establish whether a component is critical for the system. We find that these two strategies can lead to contradictory conclusions, and at best their interpretive power is limited. We instead develop a computational measure of the contribution of a process, by evaluating the sensitivity of the active (high activity) and silent (low activity) phase durations to the time constant of the process. The measure shows that both processes control the active phase, in proportion to their speed and relative weight. However, only the subtractive process plays a major role in setting the duration of the silent phase. This computational method can be used to analyze the role of negative feedback processes in a wide range of biological rhythms. PMID:21533065
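    The distinction between the two feedback types can be made concrete with a threshold-linear rate function: divisive feedback scales the gain (slope) down, while subtractive feedback shifts the threshold. A toy sketch, with parameter values chosen purely for illustration:

```python
import numpy as np

def rate(inp, gain=1.0, theta=0.0):
    """Threshold-linear firing rate: max(0, gain * input - theta)."""
    return np.maximum(0.0, gain * inp - theta)

# divisive feedback d scales the gain down:   gain  = g / (1 + d)
# subtractive feedback s raises the threshold: theta = theta0 + s
r_divisive = rate(2.0, gain=1.0 / (1 + 1.0))   # halved slope
r_subtractive = rate(2.0, theta=1.0)           # shifted threshold
```

    Note that both mechanisms can produce the same rate at one input level while diverging at others, which is one reason the passive-observation strategy discussed above can mislead.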

  4. A Monte Carlo simulation study of an improved K-edge log-subtraction X-ray imaging using a photon counting CdTe detector

    NASA Astrophysics Data System (ADS)

    Lee, Youngjin; Lee, Amy Candy; Kim, Hee-Joung

    2016-09-01

    Recently, significant effort has been spent on the development of photon counting detectors (PCDs) based on CdTe for applications in X-ray imaging systems. The motivation for developing PCDs is higher image quality. In particular, the K-edge subtraction (KES) imaging technique using a PCD is able to improve image quality and is useful for increasing the contrast resolution of a target material by utilizing a contrast agent. Based on the above-mentioned technique, we present an idea for an improved K-edge log-subtraction (KELS) imaging technique. The KELS imaging technique based on PCDs can be realized by using different subtraction energy widths of the energy window. In this study, the effects of the KELS imaging technique and the subtraction energy width of the energy window were investigated with respect to the contrast, standard deviation, and CNR with a Monte Carlo simulation. We simulated a PCD X-ray imaging system based on CdTe and a polymethylmethacrylate (PMMA) phantom containing various iodine contrast agents. To acquire KELS images, images of the phantom were acquired above and below the K-edge absorption energy of the iodine contrast agent (33.2 keV) at different energy ranges. According to the results, the contrast and standard deviation decreased when the subtraction energy width of the energy window was increased. Also, the CNR using the KELS imaging technique is higher than that of the images acquired using the whole energy range. In particular, the maximum differences in CNR between the whole-energy-range and KELS images using 1, 2, and 3 mm diameter iodine contrast agents were 11.33, 8.73, and 8.29 times, respectively. Additionally, the optimum subtraction energy width of the energy window was found to be 5, 4, and 3 keV for the 1, 2, and 3 mm diameter iodine contrast agents, respectively. In conclusion, we successfully established an improved KELS imaging technique and optimized the subtraction energy width of the energy window, and based on
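    The core KES operation, log-subtraction of images acquired just below and just above the contrast agent's K-edge, can be sketched as follows. In this toy model, tissue attenuation is identical at the two energies and cancels, while iodine's attenuation jump across the K-edge survives (all coefficients are illustrative, not measured values):

```python
import numpy as np

def k_edge_log_subtraction(I_below, I_above):
    """Log-subtraction of images acquired just below and just above
    the contrast agent's K-edge: background tissue (similar attenuation
    at the two energies) cancels, while the agent's attenuation jump
    yields a positive signal."""
    return np.log(I_below) - np.log(I_above)
```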

  5. The Anisotropy of the Microwave Background to l=3500: Mosaic Observations with the Cosmic Background Imager

    NASA Technical Reports Server (NTRS)

    Pearson, T. J.; Mason, B. S.; Readhead, A. C. S.; Shepherd, M. C.; Sievers, J. L.; Udomprasert, P. S.; Cartwright, J. K.; Farmer, A. J.; Padin, S.; Myers, S. T.; hide

    2002-01-01

    Using the Cosmic Background Imager, a 13-element interferometer array operating in the 26-36 GHz frequency band, we have observed 40 deg^2 of sky in three pairs of fields, each approximately 145 arcmin x 165 arcmin, using overlapping pointings ("mosaicing"). We present images and power spectra of the cosmic microwave background radiation in these mosaic fields. We remove ground radiation and other low-level contaminating signals by differencing matched observations of the fields in each pair. The primary foreground contamination is due to point sources (radio galaxies and quasars). We have subtracted the strongest sources from the data using higher-resolution measurements, and we have projected out the response to other sources of known position in the power-spectrum analysis. The images show features on scales of approximately 6-15 arcmin, corresponding to masses of approximately (5-80) x 10^14 solar masses at the surface of last scattering, which are likely to be the seeds of clusters of galaxies. The power spectrum estimates have a resolution delta l ~ 200 and are consistent with earlier results in the multipole range l <~ 1000. The power spectrum is detected with high signal-to-noise ratio in the range 300 <~ l <~ 1700. For 1700 <~ l <~ 3000 the observations are consistent with the results from more sensitive CBI deep-field observations. The results agree with the extrapolation of cosmological models fitted to observations at lower l, and show the predicted drop at high l (the "damping tail").

  6. Infrared dim-small target tracking via singular value decomposition and improved Kernelized correlation filter

    NASA Astrophysics Data System (ADS)

    Qian, Kun; Zhou, Huixin; Rong, Shenghui; Wang, Bingjian; Cheng, Kuanhong

    2017-05-01

    Infrared small target tracking plays an important role in applications including military reconnaissance, early warning and terminal guidance. In this paper, an effective algorithm based on Singular Value Decomposition (SVD) and an improved Kernelized Correlation Filter (KCF) is presented for infrared small target tracking. Firstly, a key strength of the SVD-based algorithm is that it takes advantage of the target's global information and obtains a background estimate of an infrared image. A dim target is enhanced by subtracting the continuously updated background estimate from the original image. Secondly, the KCF algorithm is combined with a Gaussian Curvature Filter (GCF) to eliminate the drift problem. The GCF technology is adopted to preserve the edges and eliminate the noise of the base sample in the KCF algorithm, helping to calculate the classifier parameters for a small target. At last, the target position is estimated with a response map, which is obtained via the kernelized classifier. Experimental results demonstrate that the presented algorithm performs favorably in terms of efficiency and accuracy, compared with several state-of-the-art algorithms.
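    The SVD background-estimation step can be sketched as a low-rank reconstruction: the leading singular components capture the smooth background, and subtracting them enhances a dim small target. The rank is a tuning assumption here, and the paper's background-update scheme is omitted:

```python
import numpy as np

def svd_background_subtract(img, rank=3):
    """Estimate the smooth background as a low-rank SVD
    reconstruction and subtract it to enhance a dim small target."""
    U, s, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
    background = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return img - background
```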

  7. The Australia Telescope search for cosmic microwave background anisotropy

    NASA Astrophysics Data System (ADS)

    Subrahmanyan, Ravi; Kesteven, Michael J.; Ekers, Ronald D.; Sinclair, Malcolm; Silk, Joseph

    1998-08-01

    In an attempt to detect cosmic microwave background (CMB) anisotropy on arcmin scales, we have made an 8.7-GHz image of a sky region with a resolution of 2 arcmin and high surface brightness sensitivity using the Australia Telescope Compact Array (ATCA) in an ultracompact configuration. The foreground discrete-source confusion was estimated from observations with higher resolution at the same frequency and in a scaled array at a lower frequency. Following the subtraction of the foreground confusion, the field shows no features in excess of the instrument noise. This limits the CMB anisotropy flat-band power to Q_flat < 23.6 μK with 95 per cent confidence; the ATCA filter function F_l (available at www.atnf.csiro.au/Research/cmbr/cmbr_atca.html) in multipole l-space peaks at l_eff=4700 and has half-maximum values at l=3350 and 6050.

  8. Polarization independent subtractive color printing based on ultrathin hexagonal nanodisk-nanohole hybrid structure arrays.

    PubMed

    Zhao, Jiancun; Yu, Xiaochang; Yang, Xiaoming; Xiang, Quan; Duan, Huigao; Yu, Yiting

    2017-09-18

    Structural color printing based on plasmonic metasurfaces has been recognized as a promising alternative to conventional dye colorants, though color brightness and polarization tolerance remain great challenges for practical applications. In this work, we report a novel plasmonic metasurface for subtractive color printing employing ultrathin hexagonal nanodisk-nanohole hybrid structure arrays. Through both experimental and numerical investigations, the subtractive colors thus generated, which take advantage of extraordinary low transmission (ELT), exhibit high brightness, polarization independence and wide color tunability by varying key geometrical parameters. In addition, other regular patterns including square, pentagonal and circular shapes are also surveyed, and likewise reveal high color brightness, a wide gamut and polarization independence. These results indicate that the demonstrated plasmonic metasurface has various potential applications in high-definition displays, high-density optical data storage, imaging and filtering technologies.

  9. NNLO jet cross sections by subtraction

    NASA Astrophysics Data System (ADS)

    Somogyi, G.; Bolzoni, P.; Trócsányi, Z.

    2010-08-01

    We report on the computation of a class of integrals that appear when integrating the so-called iterated singly-unresolved approximate cross section of the NNLO subtraction scheme of Refs. [G. Somogyi, Z. Trócsányi, and V. Del Duca, JHEP 06, 024 (2005), arXiv:hep-ph/0502226; G. Somogyi and Z. Trócsányi, (2006), arXiv:hep-ph/0609041; G. Somogyi, Z. Trócsányi, and V. Del Duca, JHEP 01, 070 (2007), arXiv:hep-ph/0609042; G. Somogyi and Z. Trócsányi, JHEP 01, 052 (2007), arXiv:hep-ph/0609043] over the factorised phase space of unresolved partons. The integrated approximate cross section itself can be written as the product of an insertion operator (in colour space) times the Born cross section. We give selected results for the insertion operator for processes with two and three hard partons in the final state.

  10. Improvement of the diagnostic accuracy of MRA with subtraction technique in cerebral vasospasm.

    PubMed

    Hamaguchi, Akiyoshi; Fujima, Noriyuki; Yoshida, Daisuke; Hamaguchi, Naoko; Kodera, Shuichi

    2014-01-01

    Vasospasm has been considered the most severe acute complication after subarachnoid hemorrhage (SAH). MRA is not considered ideal for detecting cerebral vasospasm because of background signal, including the hemorrhage. The aim of this study was to evaluate the efficacy of subtraction MRA (SMRA) by comparing it to conventional MRA (CMRA) for the diagnosis of cerebral vasospasm. Arteries were assigned to one of three categories based on the MRA diagnostic quality for vasospasm (quality score): 0, bad … 2, good. Furthermore, each artery was assigned to one of four categories based on the degree of vasospasm severity (SV score): 0, no vasospasm … 3, severe. The difference between the DSA-SV score and the MRA-SV score was defined as the DIF score. CMRA and SMRA were compared for each arterial region with regard to quality score and DIF score. The average CMRA and SMRA quality scores were 1.46 and 1.79; the difference was statistically significant. The average CMRA and SMRA DIF scores were 1.08 and 0.60; the difference was statistically significant. Diagnosis of cerebral vasospasm is more accurate by SMRA than by CMRA. The advantages are its noninvasive nature and its ability to detect cerebral vasospasm. Copyright © 2014 by the American Society of Neuroimaging.

  11. Geometrical accuracy of metallic objects produced with additive or subtractive manufacturing: A comparative in vitro study.

    PubMed

    Braian, Michael; Jönsson, David; Kevci, Mir; Wennerberg, Ann

    2018-07-01

    To evaluate the accuracy and precision of objects produced by additive manufacturing systems (AM) for use in dentistry and to compare with subtractive manufacturing systems (SM). Ten specimens of two geometrical objects were produced by five different AM machines and one SM machine. Object A mimics an inlay-shaped object, while object B imitates a four-unit bridge model. All the objects were sorted into different measurement dimensions (x, y, z), linear distances, angles and corner radius. None of the additive manufacturing or subtractive manufacturing groups presented a perfect match to the CAD file with regard to all parameters included in the present study. Considering linear measurements, the precision for subtractive manufacturing group was consistent in all axes for object A, presenting results of <0.050mm. The additive manufacturing groups had consistent precision in the x-axis and y-axis but not in the z-axis. With regard to corner radius measurements, the SM group had the best overall accuracy and precision for both objects A and B when compared to the AM groups. Within the limitations of this in vitro study, the conclusion can be made that subtractive manufacturing presented overall precision on all measurements below 0.050mm. The AM machines also presented fairly good precision, <0.150mm, on all axes except for the z-axis. Knowledge regarding accuracy and precision for different production techniques utilized in dentistry is of great clinical importance. The dental community has moved from casting to milling and additive techniques are now being implemented. Thus all these production techniques need to be tested, compared and validated. Copyright © 2018 The Academy of Dental Materials. Published by Elsevier Inc. All rights reserved.

  12. The identification of genes specific to Prevotella intermedia and Prevotella nigrescens using genomic subtractive hybridization.

    PubMed

    Masakiyo, Yoshiaki; Yoshida, Akihiro; Shintani, Yasuyuki; Takahashi, Yusuke; Ansai, Toshihiro; Takehara, Tadamichi

    2010-06-01

    Prevotella intermedia and Prevotella nigrescens, which are often isolated from periodontal sites, were once considered two different genotypes of P. intermedia. Although the genomic sequence of P. intermedia was determined recently, little is known about the genetic differences between P. intermedia and P. nigrescens. The subtractive hybridization technique is a powerful method for generating a set of DNA fragments that differ between two closely related bacterial strains or species. We used subtractive hybridization to identify the DNA regions specific to P. intermedia ATCC 25611 and P. nigrescens ATCC 25261. Using this method, four P. intermedia ATCC 25611-specific and three P. nigrescens ATCC 25261-specific regions were identified. From the species-specific regions, insertion sequence (IS) elements were isolated for P. intermedia; IS elements play an important role in bacterial pathogenicity. The P. intermedia-specific regions also yielded the genes for adenine-specific DNA-methyltransferase and 8-amino-7-oxononanoate synthase. The P. nigrescens-specific regions contained a Flavobacterium psychrophilum SprA homologue (a cell-surface protein involved in gliding motility), Prevotella melaninogenica ATCC 25845 glutathione peroxide, and Porphyromonas gingivalis ATCC 33277 leucyl-tRNA synthetase. The results demonstrate that the subtractive hybridization technique was useful for distinguishing between the two closely related species. Furthermore, this technique will contribute to our understanding of the virulence of these species. 2009 Elsevier Ltd. All rights reserved.

  13. Methodology for the Elimination of Reflection and System Vibration Effects in Particle Image Velocimetry Data Processing

    NASA Technical Reports Server (NTRS)

    Bremmer, David M.; Hutcheson, Florence V.; Stead, Daniel J.

    2005-01-01

    A methodology to eliminate model reflection and system vibration effects from post-processed particle image velocimetry data is presented. Reflection and vibration lead to loss of data and biased velocity calculations in PIV processing. A series of algorithms was developed to alleviate these problems. Reflections of the laser light sheet emanating from the model surface are removed from the PIV images by subtracting an image in which only the reflections are visible from all of the images within a data acquisition set. The result is a set of PIV images in which only the seeded particles are apparent. Fiduciary marks painted on the surface of the test model were used as reference points in the images. By locating the centroids of these marks it was possible to shift all of the images to a common reference frame. This image alignment procedure, as well as the subtraction of model reflections, is performed in a first algorithm. Once the images have been shifted, they are compared with a background image that was recorded under no-flow conditions. The second and third algorithms find the coordinates of the fiduciary marks in the acquisition-set images and the background image and calculate the displacement between them. The final algorithm shifts all of the images so that the fiduciary mark centroids lie in the same location as the background image centroids. This methodology effectively eliminated the effects of vibration so that unbiased data could be used for PIV processing. The PIV data used for this work were generated at the NASA Langley Research Center Quiet Flow Facility. The experiment entailed flow visualization near the flap side edge region of an airfoil model. Commercial PIV software was used for data acquisition and processing. In this paper, the experiment and the PIV acquisition of the data are described. The methodology used to develop the algorithms for reflection and system vibration removal is stated, and the implementation, testing and
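The reflection-subtraction and fiduciary-mark alignment steps can be sketched as follows. This is a minimal illustration with hypothetical inputs: binary masks stand in for detected fiduciary marks, and the shift is rounded to whole pixels rather than the centroid-based subpixel registration described above:

```python
import numpy as np

def remove_reflection(frame, reflection_img):
    # Subtract the reflection-only image; clip at zero so that only
    # seeded-particle intensity remains.
    return np.clip(frame.astype(float) - reflection_img, 0.0, None)

def centroid(mask):
    # Centroid (row, col) of the nonzero pixels of a fiduciary-mark mask.
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()

def align_to_background(frame, mark_mask, bg_mark_mask):
    # Shift the frame so its mark centroid coincides with the
    # background image's mark centroid (whole-pixel shift).
    dy = int(round(centroid(bg_mark_mask)[0] - centroid(mark_mask)[0]))
    dx = int(round(centroid(bg_mark_mask)[1] - centroid(mark_mask)[1]))
    return np.roll(frame, (dy, dx), axis=(0, 1))
```

In the workflow above, every frame in an acquisition set would be cleaned and then aligned to the common no-flow background frame before PIV correlation.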

  14. Automated detection of abnormalities in paranasal sinus on dental panoramic radiographs by using contralateral subtraction technique based on mandible contour

    NASA Astrophysics Data System (ADS)

    Mori, Shintaro; Hara, Takeshi; Tagami, Motoki; Muramatsu, Chicako; Kaneda, Takashi; Katsumata, Akitoshi; Fujita, Hiroshi

    2013-02-01

    Inflammation in the paranasal sinus sometimes becomes chronic, requiring long-term treatment. Early recognition of the finding is important, but general dentists may overlook it because they focus on the treatment of teeth. The purpose of this study was to develop a computer-aided detection (CAD) system for inflammation in the paranasal sinus on dental panoramic radiographs (DPRs) by using the mandible contour, and to demonstrate the potential usefulness of the CAD system by means of receiver operating characteristic (ROC) analysis. The detection scheme consists of 3 steps: 1) contour extraction of the mandible, 2) contralateral subtraction, and 3) automated detection. The Canny operator and an active contour model were applied to extract the edge in the first step. At the subtraction step, the right region of the extracted contour image was flipped for comparison with the left region. Mutual information between the two selected regions was computed to estimate the shift parameters for image registration, and the subtraction images were generated based on the estimated shift. Rectangular regions of the left and right paranasal sinus on the subtraction image were determined based on the size of the mandible, and the abnormal side was determined by taking the difference between the averages of each region. Thirteen readers interpreted all cases, first without and then with the automated results. The average AUC of all readers increased from 0.69 to 0.73, with statistical significance (p=0.032), when the automated detection results were provided. In conclusion, the automated detection method based on the contralateral subtraction technique improves readers' interpretation performance for inflammation in the paranasal sinus on DPRs.
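The contralateral-subtraction idea can be sketched as follows. This is a deliberately minimal version: the horizontal shift is taken as a given integer, whereas the system above estimates it by maximizing mutual information between the left and right regions:

```python
import numpy as np

def contralateral_subtraction(img, shift=0):
    # Flip the image left-right, optionally shift to compensate for
    # midline misregistration, and subtract. Symmetric anatomy largely
    # cancels; a unilateral finding survives, with opposite signs on
    # the two sides of the difference image.
    mirrored = np.roll(np.fliplr(img.astype(float)), shift, axis=1)
    return img - mirrored

# A unilateral "opacity" on an otherwise symmetric image:
img = np.ones((8, 8))
img[2, 1] += 10.0
diff = contralateral_subtraction(img)
```

Thresholding the positive and negative lobes of `diff` then localizes the asymmetric finding and identifies which side it lies on.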

  15. Adults' understanding of inversion concepts: how does performance on addition and subtraction inversion problems compare to performance on multiplication and division inversion problems?

    PubMed

    Robinson, Katherine M; Ninowski, Jerilyn E

    2003-12-01

    Problems of the form a + b - b have been used to assess conceptual understanding of the relationship between addition and subtraction. No study has investigated the same relationship between multiplication and division on problems of the form d x e / e. In both types of inversion problems, no calculation is required if the inverse relationship between the operations is understood. Adult participants solved addition/subtraction and multiplication/division inversion (e.g., 9 x 22 / 22) and standard (e.g., 2 + 27 - 28) problems. Participants started to use the inversion strategy earlier and more frequently on addition/subtraction problems. Participants took longer to solve both types of multiplication/division problems. Overall, conceptual understanding of the relationship between multiplication and division was not as strong as that between addition and subtraction. One explanation for this difference in performance is that the operation of division is more weakly represented and understood than the other operations and that this weakness affects performance on problems of the form d x e / e.

  16. Identification of cadmium-induced Agaricus blazei genes through suppression subtractive hybridization.

    PubMed

    Wang, Liling; Li, Haibo; Wei, Hailong; Wu, Xueqian; Ke, Leqin

    2014-01-01

    Cadmium (Cd) is one of the most serious environmental pollutants. Filamentous fungi are very promising organisms for controlling and reducing the amount of heavy metals released by human and industrial activities. However, the molecular mechanisms involved in Cd accumulation and tolerance of filamentous fungi are not fully understood. Agaricus blazei Murrill, an edible mushroom with medicinal properties, demonstrates high tolerance for heavy metals, especially Cd. To investigate the molecular mechanisms underlying the response of A. blazei after Cd exposure, we constructed a forward subtractive library that represents cadmium-induced genes in A. blazei under 4 ppm Cd stress for 14 days using suppression subtractive hybridization combined with mirror orientation selection. Differential screening allowed us to identify 39 upregulated genes, 26 of which are involved in metabolism, protein fate, cellular transport, transport facilitation and transport routes, cell rescue, defense and virulence, transcription, and the action of proteins with a binding function, and 13 are encoding hypothetical proteins with unknown functions. Induction of six A. blazei genes after Cd exposure was further confirmed by RT-qPCR. The cDNAs isolated in this study contribute to our understanding of genes involved in the biochemical pathways that participate in the response of filamentous fungi to Cd exposure. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  17. Characterization techniques for incorporating backgrounds into DIRSIG

    NASA Astrophysics Data System (ADS)

    Brown, Scott D.; Schott, John R.

    2000-07-01

    The appearance of operational hyperspectral imaging spectrometers in both the solar and thermal regions has led to the development of a variety of spectral detection algorithms. The development and testing of these algorithms require well-characterized field collection campaigns that can be time- and cost-prohibitive. Radiometrically robust synthetic image generation (SIG) environments that can generate appropriate images under a variety of atmospheric conditions and with a variety of sensors offer an excellent supplement that reduces the scope of the expensive field collections. In addition, SIG image products provide the algorithm developer with per-pixel truth, allowing for improved characterization of algorithm performance. To meet the needs of the algorithm development community, the image modeling community needs to supply synthetic image products that contain all the spatial and spectral variability present in real-world scenes, and that provide the large-area coverage typically acquired with actual sensors. This places a heavy burden on synthetic scene builders to construct well-characterized scenes that span large areas. Several SIG models have demonstrated the ability to accurately model targets (vehicles, buildings, etc.) using well-constructed target geometry (from CAD packages) and robust thermal and radiometry models. However, background objects (vegetation, infrastructure, etc.) dominate the percentage of real-world scene pixels, and applying target-building techniques to them is time- and resource-prohibitive. This paper discusses new methods that have been integrated into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model to characterize backgrounds. The new suite of scene construct types allows the user to incorporate both terrain and surface properties to obtain wide-area coverage. The terrain can be incorporated using a triangular irregular network (TIN) derived from elevation data or digital elevation model (DEM) data from actual

  18. A fast, robust algorithm for power line interference cancellation in neural recording.

    PubMed

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2014-04-01

    Power line interference may severely corrupt neural recordings at 50/60 Hz and harmonic frequencies. The interference is usually non-stationary and can vary in frequency, amplitude and phase. To retrieve the gamma-band oscillations at the contaminated frequencies, it is desired to remove the interference without compromising the actual neural signals at the interference frequency bands. In this paper, we present a robust and computationally efficient algorithm for removing power line interference from neural recordings. The algorithm includes four steps. First, an adaptive notch filter is used to estimate the fundamental frequency of the interference. Subsequently, based on the estimated frequency, harmonics are generated by using discrete-time oscillators, and then the amplitude and phase of each harmonic are estimated by using a modified recursive least squares algorithm. Finally, the estimated interference is subtracted from the recorded data. The algorithm does not require any reference signal, and can track the frequency, phase and amplitude of each harmonic. When benchmarked with other popular approaches, our algorithm performs better in terms of noise immunity, convergence speed and output signal-to-noise ratio (SNR). While minimally affecting the signal bands of interest, the algorithm consistently yields fast convergence (<100 ms) and substantial interference rejection (output SNR >30 dB) in different conditions of interference strengths (input SNR from -30 to 30 dB), power line frequencies (45-65 Hz) and phase and amplitude drifts. In addition, the algorithm features a straightforward parameter adjustment since the parameters are independent of the input SNR, input signal power and the sampling rate. A hardware prototype was fabricated in a 65 nm CMOS process and tested. Software implementation of the algorithm has been made available for open access at https://github.com/mrezak/removePLI. The proposed algorithm features a highly robust operation, fast
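The final estimate-and-subtract step of the algorithm above can be sketched as follows. This simplified version assumes the fundamental frequency is already known and fits the harmonic amplitudes and phases by ordinary least squares over the whole record, whereas the paper tracks the frequency with an adaptive notch filter and the amplitudes/phases with recursive least squares:

```python
import numpy as np

def remove_line_interference(x, fs, f0=50.0, n_harm=3):
    # Build sine/cosine regressors for each harmonic of f0 and fit their
    # coefficients by least squares; the fitted sum is the interference
    # estimate, which is subtracted from the recording.
    t = np.arange(len(x)) / fs
    cols = []
    for k in range(1, n_harm + 1):
        cols.append(np.sin(2 * np.pi * k * f0 * t))
        cols.append(np.cos(2 * np.pi * k * f0 * t))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return x - A @ coef
```

Because the sine/cosine pair at each harmonic jointly encodes amplitude and phase, the fit needs no reference signal, mirroring the reference-free property of the published algorithm; what this sketch gives up is the tracking of slow frequency, amplitude and phase drifts.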

  19. A fast, robust algorithm for power line interference cancellation in neural recording

    NASA Astrophysics Data System (ADS)

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2014-04-01

    Objective. Power line interference may severely corrupt neural recordings at 50/60 Hz and harmonic frequencies. The interference is usually non-stationary and can vary in frequency, amplitude and phase. To retrieve the gamma-band oscillations at the contaminated frequencies, it is desired to remove the interference without compromising the actual neural signals at the interference frequency bands. In this paper, we present a robust and computationally efficient algorithm for removing power line interference from neural recordings. Approach. The algorithm includes four steps. First, an adaptive notch filter is used to estimate the fundamental frequency of the interference. Subsequently, based on the estimated frequency, harmonics are generated by using discrete-time oscillators, and then the amplitude and phase of each harmonic are estimated by using a modified recursive least squares algorithm. Finally, the estimated interference is subtracted from the recorded data. Main results. The algorithm does not require any reference signal, and can track the frequency, phase and amplitude of each harmonic. When benchmarked with other popular approaches, our algorithm performs better in terms of noise immunity, convergence speed and output signal-to-noise ratio (SNR). While minimally affecting the signal bands of interest, the algorithm consistently yields fast convergence (<100 ms) and substantial interference rejection (output SNR >30 dB) in different conditions of interference strengths (input SNR from -30 to 30 dB), power line frequencies (45-65 Hz) and phase and amplitude drifts. In addition, the algorithm features a straightforward parameter adjustment since the parameters are independent of the input SNR, input signal power and the sampling rate. A hardware prototype was fabricated in a 65 nm CMOS process and tested. Software implementation of the algorithm has been made available for open access at https://github.com/mrezak/removePLI. Significance. The proposed

  20. Teaching Students with Cognitive Impairment Chained Mathematical Task of Decimal Subtraction Using Simultaneous Prompting

    ERIC Educational Resources Information Center

    Rao, Shaila; Kane, Martha T.

    2009-01-01

    This study assessed effectiveness of simultaneous prompting procedure in teaching two middle school students with cognitive impairment decimal subtraction using regrouping. A multiple baseline, multiple probe design replicated across subjects successfully taught two students with cognitive impairment at middle school level decimal subtraction…

  1. Texture orientation-based algorithm for detecting infrared maritime targets.

    PubMed

    Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai

    2015-05-20

    Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutter, such as ocean waves, clouds or sea fog, usually has high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. The algorithm first extracts suspected targets by analyzing the inter-subband correlation between the horizontal and vertical wavelet subbands of the original IMI on the first scale. Then self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI are combined to further remove false alarms. Experiments show that, compared with traditional algorithms, this algorithm suppresses background clutter much better and achieves better single-frame detection of infrared maritime targets. In addition, to further guarantee accurate target extraction, a pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of the proposed strategy are strongly supported by experimental data acquired under different environmental conditions.

  2. A robust generalized fuzzy operator approach to film contrast correction in digital subtraction radiography.

    PubMed

    Leung, Chung-Chu

    2006-03-01

    Digital subtraction radiography requires close matching of the contrast in each pair of X-ray images to be subtracted. Previous studies have shown that nonparametric contrast/brightness correction methods using the cumulative density function (CDF) and its improvements, which are based on gray-level transformations associated with the pixel histogram, perform well under uniform contrast/brightness difference conditions. However, for radiographs with nonuniform contrast/brightness, the CDF produces unsatisfactory results. In this paper, we propose a new approach to contrast correction based on a generalized fuzzy operator (GFO) combined with a least-squares method. The results show that 50% of the contrast/brightness errors can be corrected using this approach when the contrast/brightness difference between a radiographic pair is 10 U. A comparison of our approach with the CDF is presented, and the modified GFO method produces better contrast normalization results than the CDF approach.

  3. Distortion correction and cross-talk compensation algorithm for use with an imaging spectrometer based spatially resolved diffuse reflectance system

    NASA Astrophysics Data System (ADS)

    Cappon, Derek J.; Farrell, Thomas J.; Fang, Qiyin; Hayward, Joseph E.

    2016-12-01

    Optical spectroscopy of human tissue has been widely applied within the field of biomedical optics to allow rapid, in vivo characterization and analysis of the tissue. When designing an instrument of this type, an imaging spectrometer is often employed to allow for simultaneous analysis of distinct signals. This is especially important when performing spatially resolved diffuse reflectance spectroscopy. In this article, an algorithm is presented that allows for the automated processing of 2-dimensional images acquired from an imaging spectrometer. The algorithm automatically defines distinct spectrometer tracks and adaptively compensates for distortion introduced by optical components in the imaging chain. Crosstalk resulting from the overlap of adjacent spectrometer tracks in the image is detected and subtracted from each signal. The algorithm's performance is demonstrated in the processing of spatially resolved diffuse reflectance spectra recovered from an Intralipid and ink liquid phantom and is shown to increase the range of wavelengths over which usable data can be recovered.

  4. Development of Cloud and Precipitation Property Retrieval Algorithms and Measurement Simulators from ASR Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mace, Gerald G.

    What has made the ASR program unique is the amount of information that is available. The suite of recently deployed instruments significantly expands the scope of the program (Mather and Voyles, 2013). The breadth of this information allows us to pose sophisticated process-level questions. Our ASR project, now entering its third year, has been about developing algorithms that use this information in ways that fully exploit the new capacity of the ARM data streams. Using optimal estimation (OE) and Markov Chain Monte Carlo (MCMC) inversion techniques, we have developed methodologies that allow us to use multiple radar frequency Doppler spectra along with lidar and passive constraints, where data streams can be added or subtracted efficiently and algorithms can be reformulated for various combinations of hydrometeors by exchanging sets of empirical coefficients. These methodologies have been applied to boundary layer clouds, mixed-phase snow cloud systems, and cirrus.

  5. How number line estimation skills relate to neural activations in single digit subtraction problems

    PubMed Central

    Berteletti, I.; Man, G.; Booth, J.R.

    2014-01-01

    The Number Line (NL) task requires judging the relative numerical magnitude of a number and estimating its value spatially on a continuous line. Children's skill on this task has been shown to correlate with and predict future mathematical competence. Neurofunctionally, this task has been shown to rely on brain regions involved in numerical processing. However, there is no direct evidence that performance on the NL task is related to brain areas recruited during arithmetical processing and that these areas are domain-specific to numerical processing. In this study, we test whether 8- to 14-year-olds' behavioral performance on the NL task is related to fMRI activation during small and large single-digit subtraction problems. Domain-specific areas for numerical processing were independently localized through a numerosity judgment task. Results show a direct relation between NL estimation performance and the amount of activation in key areas for arithmetical processing. Better NL estimators showed a larger problem-size effect than poorer NL estimators in numerical magnitude (i.e., intraparietal sulcus) and visuospatial areas (i.e., posterior superior parietal lobules), marked by less activation for small problems. In addition, the direction of the activation with problem size within the IPS was associated with differences in accuracy for small subtraction problems. This study is the first to show that performance on the NL task, i.e. estimating the spatial position of a number on an interval, correlates with brain activity observed during single-digit subtraction problems in regions thought to be involved in numerical magnitude and spatial processing. PMID:25497398

  6. An environment-adaptive management algorithm for hearing-support devices incorporating listening situation and noise type classifiers.

    PubMed

    Yook, Sunhyun; Nam, Kyoung Won; Kim, Heepyung; Hong, Sung Hwa; Jang, Dong Pyo; Kim, In Young

    2015-04-01

    In order to provide more consistent sound intelligibility for the hearing-impaired person, regardless of environment, it is necessary to adjust the setting of the hearing-support (HS) device to accommodate various environmental circumstances. In this study, a fully automatic HS device management algorithm that can adapt to various environmental situations is proposed; it is composed of a listening-situation classifier, a noise-type classifier, an adaptive noise-reduction algorithm, and a management algorithm that can selectively turn on/off one or more of the three basic algorithms (beamforming, noise reduction, and feedback cancellation) and can also adjust internal gains and parameters of the wide-dynamic-range compression (WDRC) and noise-reduction (NR) algorithms in accordance with variations in environmental situations. Experimental results demonstrated that the implemented algorithms can classify both listening situations and ambient noise types with high accuracies (92.8-96.4% and 90.9-99.4%, respectively), and the gains and parameters of the WDRC and NR algorithms were successfully adjusted according to variations in the environmental situation. The average values of the signal-to-noise ratio (SNR), frequency-weighted segmental SNR, Perceptual Evaluation of Speech Quality, and mean opinion test scores of 10 normal-hearing volunteers for the adaptive multiband spectral subtraction (MBSS) algorithm were improved by 1.74 dB, 2.11 dB, 0.49, and 0.68, respectively, compared to the conventional fixed-parameter MBSS algorithm. These results indicate that the proposed environment-adaptive management algorithm can be applied to HS devices to improve sound intelligibility for hearing-impaired individuals in various acoustic environments. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
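The core operation behind the MBSS algorithm mentioned above can be sketched as follows. This is single-band magnitude spectral subtraction with a fixed over-subtraction factor `alpha` and spectral floor `beta` (both values illustrative); the multiband variant applies band-specific factors chosen from per-band segmental SNR:

```python
import numpy as np

def spectral_subtraction(frame, noise_mag, alpha=2.0, beta=0.01):
    # Subtract an (over-)estimated noise magnitude spectrum from the
    # frame's magnitude spectrum, floor the result at beta times the
    # original magnitude to avoid negative values, and resynthesize
    # using the noisy phase.
    spec = np.fft.rfft(frame)
    mag, phase = np.abs(spec), np.angle(spec)
    clean_mag = np.maximum(mag - alpha * noise_mag, beta * mag)
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(frame))
```

In practice `noise_mag` is estimated during speech pauses and the frame is windowed with overlap-add; those details are omitted here for brevity.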

  7. On the Difference Between Additive and Subtractive QM/MM Calculations

    PubMed Central

    Cao, Lili; Ryde, Ulf

    2018-01-01

    The combined quantum mechanical (QM) and molecular mechanical (MM) approach (QM/MM) is a popular method to study reactions in biochemical macromolecules. Even if the general procedure of using QM for a small, but interesting part of the system and MM for the rest is common to all approaches, the details of the implementations vary extensively, especially the treatment of the interface between the two systems. For example, QM/MM can use either additive or subtractive schemes, of which the former is often said to be preferable, although the two schemes are often mixed up with mechanical and electrostatic embedding. In this article, we clarify the similarities and differences of the two approaches. We show that inherently, the two approaches should be identical and in practice require the same sets of parameters. However, the subtractive scheme provides an opportunity to correct errors introduced by the truncation of the QM system, i.e., the link atoms, but such corrections require additional MM parameters for the QM system. We describe and test three types of link-atom correction, viz. for van der Waals, electrostatic, and bonded interactions. The calculations show that electrostatic and bonded link-atom corrections often give rise to problems in the geometries and energies. The van der Waals link-atom corrections are quite small and give results similar to a pure additive QM/MM scheme. Therefore, both approaches can be recommended. PMID:29666794
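
    The subtractive bookkeeping discussed in this abstract reduces to one line; a sketch of the energy combination (the three input energies would come from real QM and MM engines, which are not modeled here):

```python
def subtractive_qmmm(e_mm_full, e_qm_small, e_mm_small):
    """Subtractive QM/MM energy:
        E = E_MM(full system) + E_QM(QM region) - E_MM(QM region).
    The MM description of the QM region cancels, leaving a QM
    treatment of the interesting part embedded in the MM environment."""
    return e_mm_full + e_qm_small - e_mm_small
```

    The link-atom corrections tested in the paper would enter as extra MM terms evaluated for the truncated QM region before the subtraction.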

  8. On the difference between additive and subtractive QM/MM calculations

    NASA Astrophysics Data System (ADS)

    Cao, Lili; Ryde, Ulf

    2018-04-01

    The combined quantum mechanical (QM) and molecular mechanical (MM) approach (QM/MM) is a popular method to study reactions in biochemical macromolecules. Even if the general procedure of using QM for a small, but interesting part of the system and MM for the rest is common to all approaches, the details of the implementations vary extensively, especially the treatment of the interface between the two systems. For example, QM/MM can use either additive or subtractive schemes, of which the former is often said to be preferable, although the two schemes are often mixed up with mechanical and electrostatic embedding. In this article, we clarify the similarities and differences of the two approaches. We show that inherently, the two approaches should be identical and in practice require the same sets of parameters. However, the subtractive scheme provides an opportunity to correct errors introduced by the truncation of the QM system, i.e. the link atoms, but such corrections require additional MM parameters for the QM system. We describe and test three types of link-atom correction, viz. for van der Waals, electrostatic and bonded interactions. The calculations show that electrostatic and bonded link-atom corrections often give rise to problems in the geometries and energies. The van der Waals link-atom corrections are quite small and give results similar to a pure additive QM/MM scheme. Therefore, both approaches can be recommended.

  9. Evaluation of the morphology structure of meibomian glands based on mask dodging method

    NASA Astrophysics Data System (ADS)

    Yan, Huangping; Zuo, Yingbo; Chen, Yisha; Chen, Yanping

    2016-10-01

    Low contrast and non-uniform illumination of infrared (IR) meibography images make the detection of meibomian glands challenging. An improved Mask dodging algorithm is proposed. To overcome the low contrast left by the traditional Mask dodging method, a scale factor is used to enhance the image after subtracting the background image from the original one. Meibomian glands are detected and the ratio of the meibomian gland area to the measurement area is calculated. The results show that the improved Mask algorithm has an ideal dodging effect, which can eliminate non-uniform illumination and improve the contrast of meibography images effectively.
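
    The improved Mask dodging step described here amounts to subtracting a background estimate and rescaling the residual; a pixel-wise sketch (the scale factor, offset recentering, and 8-bit clamping are illustrative assumptions, not values from the paper):

```python
def mask_dodge(image, background, scale=2.0, offset=128):
    """Subtract an estimated background image from the original and
    multiply the residual by a scale factor to restore contrast;
    an offset recenters the values and results are clamped to 8 bits."""
    return [[max(0, min(255, int(scale * (p - b)) + offset))
             for p, b in zip(row_i, row_b)]
            for row_i, row_b in zip(image, background)]
```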

  10. Chemical Source Localization Fusing Concentration Information in the Presence of Chemical Background Noise.

    PubMed

    Pomareda, Víctor; Magrans, Rudys; Jiménez-Soto, Juan M; Martínez, Dani; Tresánchez, Marcel; Burgués, Javier; Palacín, Jordi; Marco, Santiago

    2017-04-20

    We present the estimation of a likelihood map for the location of the source of a chemical plume dispersed under atmospheric turbulence and uniform wind conditions. The main contribution of this work is to extend previous proposals based on Bayesian inference with binary detections to the use of concentration information, while at the same time being robust against the presence of background chemical noise. To that end, the algorithm builds a background model with robust statistical measurements to assess the posterior probability that a given chemical concentration reading comes from the background or from a source emitting at a distance with a specific release rate. In addition, our algorithm allows multiple mobile gas sensors to be used. Ten realistic simulations and ten real data experiments are used for evaluation purposes. For the simulations, we have supposed that sensors are mounted on cars which do not have navigating toward the source among their main tasks. To collect the real dataset, a special arena with induced wind was built, and an autonomous vehicle equipped with several sensors, including a photo ionization detector (PID) for sensing chemical concentration, was used. Simulation results show that our algorithm provides a better estimate of the source location even at the low background levels that favor the binary version. The improvement is clear for the synthetic data, while for real data the estimate is only slightly better, probably because our exploration arena is not able to provide uniform wind conditions. Finally, an estimate of the computational cost of the algorithmic proposal is presented.
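
    The robust background model can be sketched with a median/MAD rule that flags a concentration reading as unlikely to come from the background; this is a simplification of the paper's Bayesian posterior, and the threshold `k` is an illustrative assumption:

```python
import statistics

def is_source_reading(reading, background_samples, k=5.0):
    """Flag a concentration reading as likely coming from a source
    rather than from background chemical noise, using the median and
    median absolute deviation (MAD) as robust background statistics."""
    med = statistics.median(background_samples)
    mad = statistics.median(abs(x - med) for x in background_samples)
    return abs(reading - med) > k * max(mad, 1e-12)
```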

  11. Children's Understanding of the Relation between Addition and Subtraction: Inversion, Identity, and Decomposition.

    ERIC Educational Resources Information Center

    Bryant, Peter; Rendu, Alison; Christie, Clare

    1999-01-01

    Examined whether 5- and 6-year-olds understand that addition and subtraction cancel each other and whether this understanding is based on identity or quantity of addend and subtrahend. Found that children used the inversion principle. Six- to eight-year-olds also used inversion and decomposition to solve a + b - (b + 1) problems. Concluded that…

  12. Automatic detection of the breast border and nipple position on digital mammograms using genetic algorithm for asymmetry approach to detection of microcalcifications.

    PubMed

    Karnan, M; Thangavel, K

    2007-07-01

    The presence of microcalcifications in breast tissue is one of the most important signs considered by radiologists for early diagnosis of breast cancer, which is one of the most common forms of cancer among women. In this paper, a Genetic Algorithm (GA) is proposed to automatically detect the breast border and nipple position and to discover suspicious regions on digital mammograms based on asymmetries between the left and right breast images. The basic idea of the asymmetry approach is that the left and right images are subtracted to extract the suspicious region. The proposed system consists of two steps. First, the mammogram images are enhanced using a median filter and normalized, the pectoral muscle region is excluded, and the breast border is extracted from the binary image for both left and right images; the GA is then applied to refine the detected border. A figure of merit is calculated to evaluate whether the detected border is accurate, and the nipple position is also identified using the GA. Second, using the border points and nipple position as references, the mammogram images are aligned and subtracted to extract the suspicious regions. The algorithms were tested on 114 abnormal digitized mammograms from the Mammographic Image Analysis Society database.

  13. The Application of Continuous Wavelet Transform Based Foreground Subtraction Method in 21 cm Sky Surveys

    NASA Astrophysics Data System (ADS)

    Gu, Junhua; Xu, Haiguang; Wang, Jingying; An, Tao; Chen, Wen

    2013-08-01

    We propose a continuous wavelet transform based non-parametric foreground subtraction method for the detection of the redshifted 21 cm signal from the epoch of reionization. This method is based on the assumption that the foreground spectra are smooth in the frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures, so their characteristic scales are significantly different. We can therefore distinguish them easily in the wavelet coefficient space and perform the foreground subtraction. Compared with the traditional spectral fitting based method, our method is more tolerant of complex foregrounds. Furthermore, we find that when the instrument has uncorrected response error, our method also works significantly better than the spectral fitting based method. Our method obtains results similar to the Wp smoothing method, which is also non-parametric, but consumes much less computing time.

  14. Genetic differences between two strains of Xylella fastidiosa revealed by suppression subtractive hybridization.

    PubMed

    Harakava, Ricardo; Gabriel, Dean W

    2003-02-01

    Suppression subtractive hybridization was used to rapidly identify 18 gene differences between a citrus variegated chlorosis (CVC) strain and a Pierce's disease of grape (PD) strain of Xylella fastidiosa. The results were validated as being highly representative of actual differences by comparison of the completely sequenced genome of a CVC strain with that of a PD strain.

  15. Quantitative kinetic analysis of lung nodules by temporal subtraction technique in dynamic chest radiography with a flat panel detector

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Yuichiro; Kodera, Yoshie; Tanaka, Rie; Sanada, Shigeru

    2007-03-01

    Early detection and treatment of lung cancer is one of the most effective means to reduce cancer mortality; chest X-ray radiography has been widely used as a screening examination or health checkup. A new examination method and the development of a computer analysis system allow respiratory kinetics to be obtained with a flat panel detector (FPD), an extension of chest X-ray radiography. Through such changes, functional evaluation of respiratory kinetics in the chest has become available, and its introduction into clinical practice is expected in the future. In this study, we developed a computer analysis algorithm for detecting lung nodules and evaluating quantitative kinetics. Breathing chest radiographs obtained by a modified FPD were converted into four static feature images by sequential temporal subtraction processing, morphologic enhancement processing, kinetic visualization processing, and lung region detection processing, after a breath synchronization process utilizing diaphragmatic analysis of the vector movement. An artificial neural network analyzing the density patterns detected the true nodules in these static images and drew their kinetic tracks. In an evaluation of algorithm performance and clinical effectiveness with 7 normal patients and simulated nodules, both showed sufficient detection capability and kinetic imaging function without statistically significant difference. Our technique can quantitatively evaluate the kinetic range of nodules, and is effective in detecting a nodule on a breathing chest radiograph. Moreover, the application of this technique is expected to extend computer-aided diagnosis systems and facilitate the development of an automatic planning system for radiation therapy.

  16. N-jettiness subtractions for gg → H at subleading power

    NASA Astrophysics Data System (ADS)

    Moult, Ian; Rothen, Lorena; Stewart, Iain W.; Tackmann, Frank J.; Zhu, Hua Xing

    2018-01-01

    N-jettiness subtractions provide a general approach for performing fully-differential next-to-next-to-leading order (NNLO) calculations. Since they are based on the physical resolution variable N-jettiness, T_N, subleading power corrections in τ = T_N/Q, with Q a hard interaction scale, can also be systematically computed. We study the structure of power corrections for 0-jettiness, T_0, for the gg → H process. Using the soft-collinear effective theory we analytically compute the leading power corrections α_s τ ln τ and α_s² τ ln³ τ (finding partial agreement with a previous result in the literature), and perform a detailed numerical study of the power corrections in the gg, gq, and qq̄ channels. This includes a numerical extraction of the α_s τ and α_s² τ ln² τ corrections, and a study of the dependence on the T_0 definition. Including such power suppressed logarithms significantly reduces the size of missing power corrections, and hence improves the numerical efficiency of the subtraction method. Having a more detailed understanding of the power corrections for both qq̄ and gg initiated processes also provides insight into their universality, and hence their behavior in more complicated processes where they have not yet been analytically calculated.

  17. The parallel-sequential field subtraction technique for coherent nonlinear ultrasonic imaging

    NASA Astrophysics Data System (ADS)

    Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.

    2018-06-01

    Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage than was previously possible and have sensitivity to partially closed defects. This study explores a coherent imaging technique based on the subtraction of two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In parallel focusing a high intensity ultrasonic beam is formed in the specimen at the focal point, whereas in sequential focusing only low intensity signals from individual elements enter the sample and the full matrix of transmit-receive signals is recorded and post-processed to form an image. Under linear elastic assumptions, both parallel and sequential images are expected to be identical. Here we measure the difference between these images and use this to characterise the nonlinearity of small closed fatigue cracks. In particular we monitor the change in relative phase and amplitude at the fundamental frequencies for each focal point and use this nonlinear coherent imaging metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g. back wall or large scatterers) effectively when instrumentation noise compensation is applied, thereby allowing damage to be detected at an early stage (c. 15% of fatigue life) and reliably quantified in later fatigue life.

  18. Video Analytics Evaluation: Survey of Datasets, Performance Metrics and Approaches

    DTIC Science & Technology

    2014-09-01

    training phase and a fusion of the detector outputs. 6.3.1 Training Techniques 1. Bagging: The basic idea of Bagging is to train multiple classifiers...can reduce more noise interesting points. Person detection and background subtraction methods were used to create hot regions. The hot regions were...detection algorithms are incorporated with MHT to construct one integrated detector /tracker. 6.8 IRDS-CASIA team IRDS-CASIA proposed a method to solve a

  19. Characterizing the Cosmic Infrared Background Fluctuations

    NASA Astrophysics Data System (ADS)

    Li, Yanxia

    2015-08-01

    A salient feature of the Cosmic Infrared Background (CIB) fluctuations is that their spatial power spectrum rises a factor of ~10 above the expected contribution from all known sources at angular scales >20″. A tantalizing large-scale correlation signal between the residual Cosmic X-ray Background (CXB) and CIB found in the Extended Groth Strip (EGS) further suggests that at least 20% of the CIB fluctuations are associated with accreting X-ray sources, with efficient energy production similar to black holes. However, there is still a controversy about the sources that produce the excess flux. They could be faint, local populations with a different spatial distribution from other known galaxies, e.g., intra-halo light (emitted from stars in the outskirts of local galaxies), or truly high-z populations at the epoch of reionization that we know little of. Constraining the origin of the CIB fluctuations will help to establish our understanding of the overall cosmic energy budget. In this talk, we will present our plan to resolve this controversy and the current state of data collection and analysis. (1) We will combine the archival Spitzer/IRAC and Herschel/PACS data, with the Chandra data of the Cosmic Evolution Survey (COSMOS), to accurately measure the source-subtracted CIB and CXB fluctuations to the largest angular scale (~1-2 deg) to date. The newly discovered link between CIB and CXB fluctuations found in the EGS will be revisited in COSMOS, which provides better photon statistics. (2) We have been working on cross-correlating the unresolved background with the discrete sources detected at shorter wavelengths (1-2 μm), using ground-based multi-wavelength observations. In addition to exploring the Pan-STARRS 3PI and Medium Deep Survey database, we have also been awarded telescope time on CFHT/WIRCam and Subaru/Hyper-Suprime-Cam for this purpose. The preliminary data analysis will be presented.

  20. In Pursuit of LSST Science Requirements: A Comparison of Photometry Algorithms

    NASA Astrophysics Data System (ADS)

    Becker, Andrew C.; Silvestri, Nicole M.; Owen, Russell E.; Ivezić, Željko; Lupton, Robert H.

    2007-12-01

    We have developed an end-to-end photometric data-processing pipeline to compare current photometric algorithms commonly used on ground-based imaging data. This test bed is exceedingly adaptable and enables us to perform many research and development tasks, including image subtraction and co-addition, object detection and measurements, the production of photometric catalogs, and the creation and stocking of database tables with time-series information. This testing has been undertaken to evaluate existing photometry algorithms for consideration by a next-generation image-processing pipeline for the Large Synoptic Survey Telescope (LSST). We outline the results of our tests for four packages: the Sloan Digital Sky Survey's Photo package, DAOPHOT and ALLFRAME, DOPHOT, and two versions of Source Extractor (SExtractor). The ability of these algorithms to perform point-source photometry, astrometry, shape measurements, and star-galaxy separation and to measure objects at low signal-to-noise ratio is quantified. We also perform a detailed crowded-field comparison of DAOPHOT and ALLFRAME, and profile the speed and memory requirements in detail for SExtractor. We find that both DAOPHOT and Photo are able to perform aperture photometry to high enough precision to meet LSST's science requirements, and less adequately at PSF-fitting photometry. Photo performs the best at simultaneous point- and extended-source shape and brightness measurements. SExtractor is the fastest algorithm, and recent upgrades in the software yield high-quality centroid and shape measurements with little bias toward faint magnitudes. ALLFRAME yields the best photometric results in crowded fields.
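
    In its simplest form, the aperture photometry these packages perform sums pixel values within a radius of the source and subtracts a per-pixel sky estimate; a sketch under that simplification (real packages add centroiding, partial-pixel weighting, and error propagation):

```python
def aperture_photometry(image, cx, cy, radius, sky_level):
    """Sum flux inside a circular aperture centred on (cx, cy) and
    subtract the estimated sky background for each enclosed pixel."""
    flux, npix = 0.0, 0
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                flux += value
                npix += 1
    return flux - npix * sky_level
```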

  1. Effect of background correction on peak detection and quantification in online comprehensive two-dimensional liquid chromatography using diode array detection.

    PubMed

    Allen, Robert C; John, Mallory G; Rutan, Sarah C; Filgueira, Marcelo R; Carr, Peter W

    2012-09-07

    A singular value decomposition-based background correction (SVD-BC) technique is proposed for the reduction of background contributions in online comprehensive two-dimensional liquid chromatography (LC×LC) data. The SVD-BC technique was compared to simply subtracting a blank chromatogram from a sample chromatogram and to a previously reported background correction technique for one dimensional chromatography, which uses an asymmetric weighted least squares (AWLS) approach. AWLS was the only background correction technique to completely remove the background artifacts from the samples as evaluated by visual inspection. However, the SVD-BC technique greatly reduced or eliminated the background artifacts as well and preserved the peak intensity better than AWLS. The loss in peak intensity by AWLS resulted in lower peak counts at the detection thresholds established using standard samples. However, the SVD-BC technique was found to introduce noise which led to detection of false peaks at the lower detection thresholds. As a result, the AWLS technique gave more precise peak counts than the SVD-BC technique, particularly at the lower detection thresholds. While the AWLS technique resulted in more consistent percent residual standard deviation values, a statistical improvement in peak quantification after background correction was not found regardless of the background correction technique used. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Research on Bayes matting algorithm based on Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Quan, Wei; Jiang, Shan; Han, Cheng; Zhang, Chao; Jiang, Zhengang

    2015-12-01

    The digital matting problem is a classical problem of imaging. It aims at separating non-rectangular foreground objects from a background image and compositing them onto a new background image. Accurate matting determines the quality of the composited image. A Bayesian matting algorithm based on a Gaussian mixture model is proposed to solve this matting problem. First, the traditional Bayesian framework is improved by introducing a Gaussian mixture model. Then, a weighting factor is added to suppress noise in the composited images. Finally, the result is further improved by regulating the user's input. The algorithm is applied to matting tasks on classical images and the results are compared to the traditional Bayesian method. It is shown that our algorithm performs better on details such as hair and eliminates noise well, and it is very effective for work involving objects of interest with intricate boundaries.
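
    Matting recovers, per pixel, the alpha and foreground in the compositing equation C = αF + (1 − α)B; a sketch of the final compositing step onto a new background, assuming alpha and foreground have already been estimated:

```python
def composite(alpha, fg, new_bg):
    """Per-pixel compositing C = alpha * F + (1 - alpha) * B, applied
    once matting has recovered the alpha matte and foreground colors."""
    return [a * f + (1.0 - a) * b
            for a, f, b in zip(alpha, fg, new_bg)]
```

    Bayesian matting estimates the (alpha, F, B) triple per pixel by maximizing their posterior probability given the observed composite color.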

  3. Discrimination of malignant lymphomas and leukemia using Radon transform based-higher order spectra

    NASA Astrophysics Data System (ADS)

    Luo, Yi; Celenk, Mehmet; Bejai, Prashanth

    2006-03-01

    A new algorithm that can be used to automatically recognize and classify malignant lymphomas and leukemia is proposed in this paper. The algorithm utilizes the morphological watersheds to obtain boundaries of cells from cell images and isolate them from the surrounding background. The areas of cells are extracted from cell images after background subtraction. The Radon transform and higher-order spectra (HOS) analysis are utilized as an image processing tool to generate class feature vectors of different type cells and to extract testing cells' feature vectors. The testing cells' feature vectors are then compared with the known class feature vectors for a possible match by computing the Euclidean distances. The cell in question is classified as belonging to one of the existing cell classes in the least Euclidean distance sense.

  4. Approach for counting vehicles in congested traffic flow

    NASA Astrophysics Data System (ADS)

    Tan, Xiaojun; Li, Jun; Liu, Wei

    2005-02-01

    More and more image sensors are used in intelligent transportation systems. In practice, occlusion is always a problem when counting vehicles in congested traffic. This paper tries to present an approach to solve the problem. The proposed approach consists of three main procedures. Firstly, a new algorithm of background subtraction is performed. The aim is to segment moving objects from an illumination-variant background. Secondly, object tracking is performed, where the CONDENSATION algorithm is used. This can avoid the problem of matching vehicles in successive frames. Thirdly, an inspecting procedure is executed to count the vehicles. When a bus firstly occludes a car and then the bus moves away a few frames later, the car will appear in the scene. The inspecting procedure should find the "new" car and add it as a tracking object.
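
    A common way to realize the first procedure, background subtraction under slowly varying illumination, is a running-average background model; a minimal per-pixel sketch (the learning rate and threshold are illustrative assumptions, not the paper's algorithm):

```python
def update_and_segment(background, frame, rate=0.05, thresh=30):
    """Blend the current frame into a running-average background model
    (so gradual illumination changes are absorbed) and return the
    updated background plus a foreground mask for the current frame."""
    new_bg = [(1 - rate) * b + rate * f for b, f in zip(background, frame)]
    mask = [abs(f - b) > thresh for f, b in zip(frame, background)]
    return new_bg, mask
```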

  5. Subtraction CT angiography in head and neck with low radiation and contrast dose dual-energy spectral CT using rapid kV-switching technique.

    PubMed

    Ma, Guangming; Yu, Yong; Duan, Haifeng; Dou, Yuequn; Jia, Yongjun; Zhang, Xirong; Yang, Chuangbo; Chen, Xiaoxia; Han, Dong; Guo, Changyi; He, Taiping

    2018-06-01

    To investigate the application of low radiation and contrast dose spectral CT angiography using the rapid kV-switching technique in the head and neck with the subtraction method for bone removal. This prospective study was approved by the local ethics committee. 64 cases for head and neck CT angiography were randomly divided into Groups A (n = 32) and B (n = 32). Group A underwent unenhanced CT with 100 kVp, 200 mA and contrast-enhanced CT in spectral CT mode with body mass index-dependent low dose protocols. Group B used conventional helical scanning with 120 kVp, auto mA for a noise index of 12 HU (Hounsfield unit) for both the unenhanced and contrast-enhanced CT. Subtraction images were formed by subtracting the unenhanced images from enhanced images (with the 65 keV-enhanced spectral CT image in Group A). CT numbers and their standard deviations in the aortic arch, carotid arteries, middle cerebral artery and air were measured in the subtraction images. The signal-to-noise ratio and contrast-to-noise ratio for the common and internal carotid arteries and middle cerebral artery were calculated. Image quality in terms of bone removal effect was evaluated by two experienced radiologists independently and blindly using a 4-point system. Radiation dose and total iodine load were recorded. Measurements were statistically compared between the two groups. The two groups had the same demographic results. There was no difference in the CT number, signal-to-noise and contrast-to-noise ratio values for the carotid arteries and middle cerebral artery in the subtraction images between the two groups (p > 0.05). However, the bone removal effect score [median (min-max)] in Group A [4 (3-4)] was rated better than in Group B [3 (2-4)] (p < 0.001), with excellent agreement between the two observers (κ > 0.80). The radiation dose in Group A (average of 2.64 mSv) was 57% lower than the 6.18 mSv in Group B (p < 0.001). The total iodine intake in Group A was 13.5 g, 36% lower than the 21 g in

  6. Comparison of Measured Galactic Background Radiation at L-Band with Model

    NASA Technical Reports Server (NTRS)

    LeVine, David M.; Abraham, Saji; Kerr, Yann H.; Wilson, William J.; Skou, Niels; Sobjaerg, Sten

    2004-01-01

    Radiation from the celestial sky in the spectral window at 1.413 GHz is strong, and an accurate accounting of this background radiation is needed for calibration and retrieval algorithms. Modern radio astronomy measurements in this window have been converted into a brightness temperature map of the celestial sky at L-band suitable for such applications. This paper presents a comparison of the background predicted by this map with the measurements of several modern L-band remote sensing radiometers. Keywords: Galactic background; microwave radiometry; remote sensing.

  7. Multicore and GPU algorithms for Nussinov RNA folding

    PubMed Central

    2014-01-01

    Background One segment of an RNA sequence might be paired with another segment of the same RNA sequence due to the force of hydrogen bonds. This two-dimensional structure is called the RNA sequence's secondary structure. Several algorithms have been proposed to predict an RNA sequence's secondary structure. These algorithms are referred to as RNA folding algorithms. Results We develop cache efficient, multicore, and GPU algorithms for RNA folding using Nussinov's algorithm. Conclusions Our cache efficient algorithm provides a speedup between 1.6 and 3.0 relative to a naive straightforward single core code. The multicore version of the cache efficient single core algorithm provides a speedup, relative to the naive single core algorithm, between 7.5 and 14.0 on a 6 core hyperthreaded CPU. Our GPU algorithm for the NVIDIA C2050 is up to 1582 times as fast as the naive single core algorithm and between 5.1 and 11.2 times as fast as the fastest previously known GPU algorithm for Nussinov RNA folding. PMID:25082539
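
    The recurrence that all of these implementations accelerate is the classic O(n³) Nussinov dynamic program; a plain single-core reference version for comparison (this minimal form allows adjacent pairing, i.e. no minimum loop length):

```python
def nussinov(seq, pairs=frozenset({"AU", "UA", "GC", "CG", "GU", "UG"})):
    """Maximum number of nested base pairs in an RNA sequence, via the
    Nussinov dynamic-programming recurrence over subsequences [i, j]."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                       # i left unpaired
            if seq[i] + seq[j] in pairs:
                best = max(best, dp[i + 1][j - 1] + 1)  # i pairs with j
            for k in range(i + 1, j):                 # bifurcation
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```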

  8. Addition Table of Colours: Additive and Subtractive Mixtures Described Using a Single Reasoning Model

    ERIC Educational Resources Information Center

    Mota, A. R.; Lopes dos Santos, J. M. B.

    2014-01-01

    Students' misconceptions concerning colour phenomena and the apparent complexity of the underlying concepts--due to the different domains of knowledge involved--make its teaching very difficult. We have developed and tested a teaching device, the addition table of colours (ATC), that encompasses additive and subtractive mixtures in a single…

  9. MAXIMA-1: A Measurement of the Cosmic Microwave Background Anisotropy on Angular Scales of 10' to 5 degrees

    DOE R&D Accomplishments Database

    Ade, P.; Balbi, A.; Bock, J.; Borrill, J.; Boscaleri, A.; de Bernardis, P.; Ferreira, P. G.; Hanany, S.; Hristov, V. V.; Jaffe, A. H.; Lange, A. E.; Lee, A. T.; Mauskopf, P. D.; Netterfield, C. B.; Oh, S.; Pascale, E.; Rabii, B.; Richards, P. L.; Smoot, G. F.; Stompor, R.; Winant, C. D.; Wu, J. H. P.

    2005-06-04

    We present a map and an angular power spectrum of the anisotropy of the cosmic microwave background (CMB) from the first flight of MAXIMA. MAXIMA is a balloon-borne experiment with an array of 16 bolometric photometers operated at 100 mK. MAXIMA observed a 124 deg² region of the sky with 10' resolution at frequencies of 150, 240 and 410 GHz. The data were calibrated using in-flight measurements of the CMB dipole anisotropy. A map of the CMB anisotropy was produced from three 150 and one 240 GHz photometer without need for foreground subtractions.

  10. IMAGEP - A FORTRAN ALGORITHM FOR DIGITAL IMAGE PROCESSING

    NASA Technical Reports Server (NTRS)

    Roth, D. J.

    1994-01-01

    IMAGEP is a FORTRAN computer algorithm containing various image processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines. Within the subroutines are other routines, also selected via keyboard. Some of the functions performed by IMAGEP include digitization, storage and retrieval of images; image enhancement by contrast expansion, addition and subtraction, magnification, inversion, and bit shifting; display and movement of cursor; display of grey level histogram of image; and display of the variation of grey level intensity as a function of image position. This algorithm has possible scientific, industrial, and biomedical applications in material flaw studies, steel and ore analysis, and pathology, respectively. IMAGEP is written in VAX FORTRAN for DEC VAX series computers running VMS. The program requires the use of a Grinnell 274 image processor which can be obtained from Mark McCloud Associates, Campbell, CA. An object library of the required GMR series software is included on the distribution media. IMAGEP requires 1Mb of RAM for execution. The standard distribution medium for this program is a 1600 BPI 9-track magnetic tape in VAX FILES-11 format. It is also available on a TK50 tape cartridge in VAX FILES-11 format. This program was developed in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation.
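
    Among IMAGEP's enhancement functions, contrast expansion is a linear stretch of the grey-level range; a sketch of that operation in Python (the original is VAX FORTRAN, and the output-range handling here is an assumption):

```python
def contrast_expand(pixels, out_min=0, out_max=255):
    """Linearly stretch grey levels so the darkest input value maps to
    out_min and the brightest to out_max; a flat image maps to out_min."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return [out_min] * len(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]
```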

  11. Quantitative Image Quality and Histogram-Based Evaluations of an Iterative Reconstruction Algorithm at Low-to-Ultralow Radiation Dose Levels: A Phantom Study in Chest CT

    PubMed Central

    Lee, Ki Baek

    2018-01-01

    Objective To describe the quantitative image quality and histogram-based evaluation of an iterative reconstruction (IR) algorithm in chest computed tomography (CT) scans at low-to-ultralow CT radiation dose levels. Materials and Methods In an adult anthropomorphic phantom, chest CT scans were performed with 128-section dual-source CT at 70, 80, 100, 120, and 140 kVp, at the reference (3.4 mGy in volume CT Dose Index [CTDIvol]) and 30%-, 60%-, and 90%-reduced radiation dose levels (2.4, 1.4, and 0.3 mGy). The CT images were reconstructed by using filtered back projection (FBP) algorithms and the IR algorithm with strengths 1, 3, and 5. Image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were statistically compared between different dose levels, tube voltages, and reconstruction algorithms. Moreover, histograms of subtraction images before and after standardization in the x- and y-axes were visually compared. Results Compared with FBP images, IR images with strengths 1, 3, and 5 demonstrated image noise reduction of up to 49.1%, an SNR increase of up to 100.7%, and a CNR increase of up to 67.3%. Noteworthy image quality degradations on IR images, including a 184.9% increase in image noise, a 63.0% decrease in SNR, and a 51.3% decrease in CNR, were shown between the 60%- and 90%-reduced levels of radiation dose (p < 0.0001). Subtraction histograms between FBP and IR images showed progressively increased dispersion with increased IR strength and increased dose reduction. After standardization, the histograms appeared deviated and ragged between FBP images and IR images with strength 3 or 5, but almost normally distributed between FBP images and IR images with strength 1. Conclusion The IR algorithm may be used to save radiation dose without substantial image quality degradation in chest CT scanning of the adult anthropomorphic phantom, down to approximately 1.4 mGy in CTDIvol (60% reduced dose). PMID:29354008
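
    The SNR and CNR figures above follow the usual ROI-based definitions. A minimal sketch of those metrics in Python/NumPy (the study's exact ROI placement and noise definition may differ; pooled-noise variants of CNR also exist):

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio: mean grey value over standard deviation within one ROI."""
    return roi.mean() / roi.std(ddof=1)

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio: absolute mean difference over background noise."""
    return abs(roi_signal.mean() - roi_background.mean()) / roi_background.std(ddof=1)
```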

  12. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal, and this procedure is repeated on the residual in subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce substantial performance gains while extracting only slightly less energy than the classical MPD algorithm.
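
    The greedy select-subtract-repeat loop of classical MPD described above can be sketched in a few lines of Python/NumPy. This is a generic matching pursuit over a finite dictionary, not the MPD++ implementation; the correlation threshold stands in for the stopping criterion mentioned in the abstract:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10, corr_threshold=0.0):
    """Greedy MPD: repeatedly pick the dictionary atom that best
    correlates with the residual, then subtract its contribution."""
    residual = signal.astype(float).copy()
    # Normalize atoms (rows) so inner products are true correlations.
    atoms = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    decomposition = []
    for _ in range(n_atoms):
        corr = atoms @ residual            # cross-correlate every atom
        best = int(np.argmax(np.abs(corr)))
        if np.abs(corr[best]) <= corr_threshold:
            break                          # stopping criterion: no significant atom left
        coeff = corr[best]
        residual -= coeff * atoms[best]    # subtract the best-fit atom
        decomposition.append((best, coeff))
    return decomposition, residual
```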

  13. An introduction to kernel-based learning algorithms.

    PubMed

    Müller, K R; Mika, S; Rätsch, G; Tsuda, K; Schölkopf, B

    2001-01-01

    This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.
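
    At the heart of all three methods discussed is the kernel Gram matrix, which supplies inner products in a high-dimensional feature space without ever constructing that space. A minimal sketch of the widely used Gaussian (RBF) kernel in Python/NumPy:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2): the kernel
    'trick' evaluates feature-space inner products directly in input space."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)
```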

  14. Unmanned Vehicle Guidance Using Video Camera/Vehicle Model

    NASA Technical Reports Server (NTRS)

    Sutherland, T.

    1999-01-01

    A video guidance sensor (VGS) system has flown on both STS-87 and STS-95 to validate a single camera/target concept for vehicle navigation. The main part of the image algorithm was the subtraction of two consecutive images using software. For a nominal size image of 256 x 256 pixels this subtraction can take a large portion of the time between successive frames in standard rate video leaving very little time for other computations. The purpose of this project was to integrate the software subtraction into hardware to speed up the subtraction process and allow for more complex algorithms to be performed, both in hardware and software.
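
    The image-subtraction step described above amounts to frame differencing: subtract consecutive frames and threshold the absolute difference. A minimal software sketch in Python/NumPy (the threshold value is illustrative, not taken from the VGS system):

```python
import numpy as np

def frame_difference(prev, curr, threshold=25):
    """Subtract consecutive frames; pixels whose absolute change exceeds
    the threshold are flagged as changed (e.g. target motion)."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > threshold).astype(np.uint8)
```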

  15. Study of robot landmark recognition with complex background

    NASA Astrophysics Data System (ADS)

    Huang, Yuqing; Yang, Jia

    2007-12-01

    Perceiving and recognizing environmental characteristics is of great importance in assisting a robot with path planning, position navigation, and task performance. To solve the problem of monocular-vision-oriented landmark recognition for a mobile intelligent robot marching against a complex background, a nested region-growing algorithm is proposed that fuses prior color information and grows from the current maximum convergence center, providing localization that is invariant to changes in position, scale, rotation, jitter, and weather conditions. First, a novel experimental threshold based on the RGB vision model is used for the first image segmentation, in which some objects and partial scenes with colors similar to the landmarks are detected together with the landmarks. Second, with the current maximum convergence center of the segmented image as each growing seed point, the region-growing algorithm establishes several regions of interest (ROIs) in order. According to shape characteristics, a quick and effective contour analysis based on primitive elements decides whether the current ROI is retained or deleted after each region growing; each ROI is then initially judged and positioned. When the position information is fed back to the gray-level image, the complete landmarks are extracted accurately by a second segmentation of the local image restricted to the landmark area. Finally, landmarks are recognized by a Hopfield neural network. Results from experiments on a large number of images with both campus and urban-district backgrounds show the effectiveness of the proposed algorithm.
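
    The core of the approach is region growing from a seed point. A minimal, generic sketch in Python (seed at the convergence center, 4-connected growth by grey-level similarity; the tolerance is illustrative, and the paper's fused color criteria are not modeled here):

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol=10):
    """Grow a region from a seed pixel, accepting 4-connected neighbours
    whose grey level is within `tol` of the seed's grey level."""
    h, w = img.shape
    seed_val = int(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(int(img[ny, nx]) - seed_val) <= tol:
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```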

  16. High fidelity nanopatterning of proteins onto well-defined surfaces through subtractive contact printing

    PubMed Central

    García, José R.; Singh, Ankur; García, Andrés J.

    2016-01-01

    In the pursuit to develop enhanced technologies for cellular bioassays as well as understand single cell interactions with its underlying substrate, the field of biotechnology has extensively utilized lithographic techniques to spatially pattern proteins onto surfaces in user-defined geometries. Microcontact printing (μCP) remains an incredibly useful patterning method due to its inexpensive nature, scalability, and the lack of considerable use of specialized clean room equipment. However, as new technologies emerge that necessitate various nano-sized areas of deposited proteins, traditional microcontact printing methods may not be able to supply users with the needed resolution size. Recently, our group developed a modified “subtractive microcontact printing” method which still retains many of the benefits offered by conventional μCP. Using this technique, we have been able to reach resolution sizes of fibronectin as small as 250 nm in largely spaced arrays for cell culture. In this communication, we present a detailed description of our subtractive μCP procedure that expands on many of the little tips and tricks that together make this procedure an easy and effective method for controlling protein patterning. PMID:24439290

  17. High fidelity nanopatterning of proteins onto well-defined surfaces through subtractive contact printing.

    PubMed

    García, José R; Singh, Ankur; García, Andrés J

    2014-01-01

    In the pursuit to develop enhanced technologies for cellular bioassays as well as understand single cell interactions with its underlying substrate, the field of biotechnology has extensively utilized lithographic techniques to spatially pattern proteins onto surfaces in user-defined geometries. Microcontact printing (μCP) remains an incredibly useful patterning method due to its inexpensive nature, scalability, and the lack of considerable use of specialized clean room equipment. However, as new technologies emerge that necessitate various nano-sized areas of deposited proteins, traditional μCP methods may not be able to supply users with the needed resolution size. Recently, our group developed a modified "subtractive μCP" method which still retains many of the benefits offered by conventional μCP. Using this technique, we have been able to reach resolution sizes of fibronectin as small as 250 nm in largely spaced arrays for cell culture. In this communication, we present a detailed description of our subtractive μCP procedure that expands on many of the little tips and tricks that together make this procedure an easy and effective method for controlling protein patterning. © 2014 Elsevier Inc. All rights reserved.

  18. Trust index based fault tolerant multiple event localization algorithm for WSNs.

    PubMed

    Xu, Xianghua; Gao, Xueyong; Wan, Jian; Xiong, Naixue

    2011-01-01

    This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events continually emit signals whose strength is attenuated inversely proportional to the distance from the source. In this context, faults occur for various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of the information that the nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on the event localization by decreasing their trust index, in order to improve the accuracy of event localization and the fault tolerance of multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes and determines the number of events that occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The algorithm improves the accuracy of localization and the fault tolerance of multiple event source localization. The experimental results show that even when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of events and achieves better localization accuracy than other algorithms.

  19. Trust Index Based Fault Tolerant Multiple Event Localization Algorithm for WSNs

    PubMed Central

    Xu, Xianghua; Gao, Xueyong; Wan, Jian; Xiong, Naixue

    2011-01-01

    This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events continually emit signals whose strength is attenuated inversely proportional to the distance from the source. In this context, faults occur for various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of the information that the nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on the event localization by decreasing their trust index, in order to improve the accuracy of event localization and the fault tolerance of multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes and determines the number of events that occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The algorithm improves the accuracy of localization and the fault tolerance of multiple event source localization. The experimental results show that even when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of events and achieves better localization accuracy than other algorithms. PMID:22163972
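
    The "Subtract on Negative, Add on Positive" idea behind TISNAP can be illustrated with a toy trust-index update. This is our simplified reading of the general scheme, not the paper's actual update rule or parameters:

```python
def update_trust(trust, report, consensus, step=0.1, floor=0.0, ceil=1.0):
    """'Add on positive, subtract on negative': raise a node's trust index
    when its binary report agrees with the consensus decision, and lower it
    when the report contradicts the consensus (step size is hypothetical)."""
    if report == consensus:
        return min(ceil, trust + step)
    return max(floor, trust - step)
```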

  20. Computerized mass detection in whole breast ultrasound images: reduction of false positives using bilateral subtraction technique

    NASA Astrophysics Data System (ADS)

    Ikedo, Yuji; Fukuoka, Daisuke; Hara, Takeshi; Fujita, Hiroshi; Takada, Etsuo; Endo, Tokiko; Morita, Takako

    2007-03-01

    The comparison of left and right mammograms is a common technique used by radiologists for the detection and diagnosis of masses. In mammography, computer-aided detection (CAD) schemes using bilateral subtraction technique have been reported. However, in breast ultrasonography, there are no reports on CAD schemes using comparison of left and right breasts. In this study, we propose a scheme of false positive reduction based on bilateral subtraction technique in whole breast ultrasound images. Mass candidate regions are detected by using the information of edge directions. Bilateral breast images are registered with reference to the nipple positions and skin lines. A false positive region is detected based on a comparison of the average gray values of a mass candidate region and a region with the same position and same size as the candidate region in the contralateral breast. In evaluating the effectiveness of the false positive reduction method, three normal and three abnormal bilateral pairs of whole breast images were employed. These abnormal breasts included six masses larger than 5 mm in diameter. The sensitivity was 83% (5/6) with 13.8 (165/12) false positives per breast before applying the proposed reduction method. By applying the method, false positives were reduced to 4.5 (54/12) per breast without removing a true positive region. This preliminary study indicates that the bilateral subtraction technique is effective for improving the performance of a CAD scheme in whole breast ultrasound images.
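
    The bilateral false-positive test described above reduces to comparing mean grey values of mirrored regions. A toy sketch in Python/NumPy (the decision threshold is hypothetical, and the registration by nipple position and skin line is assumed to have been done already):

```python
import numpy as np

def is_false_positive(candidate_roi, contralateral_roi, max_diff=15):
    """Reject a candidate mass as a false positive when its average grey
    value is close to that of the same-position, same-size region in the
    contralateral breast."""
    return abs(candidate_roi.mean() - contralateral_roi.mean()) <= max_diff
```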

  1. Statistical Signal Models and Algorithms for Image Analysis

    DTIC Science & Technology

    1984-10-25

    In this report, two-dimensional stochastic linear models are used in developing algorithms for image analysis such as classification, segmentation, and object detection in images characterized by textured backgrounds. These models generate two-dimensional random processes as outputs to which statistical inference procedures can naturally be applied. A common thread throughout our algorithms is the interpretation of the inference procedures in terms of linear prediction

  2. High-speed scanning: an improved algorithm

    NASA Astrophysics Data System (ADS)

    Nachimuthu, A.; Hoang, Khoi

    1995-10-01

    In using machine vision for assessing an object's surface quality, many images must be processed in order to separate the good areas from the defective ones. Examples can be found in the leather hide grading process, in the inspection of garments and canvas on the production line, and in the nesting of irregular shapes into a given surface. The most common method, subtracting the sum of defective areas from the total area, does not give an acceptable indication of how much of the 'good' area can be used, particularly if the findings are to be used for the nesting of irregular shapes. This paper presents an image scanning technique which enables the estimation of usable areas within an inspected surface in terms of the user's definition rather than the supplier's claims; that is, how much area the user can actually use, not the total good area as estimated by the supplier. An important application of the developed technique is in the leather industry, where the tanner (the supplier) and the footwear manufacturer (the user) are constantly locked in argument over disputed quality standards of finished leather hide, which disrupts production schedules and wastes costs in re-grading and re-sorting. The basic algorithm developed for area scanning of a digital image is presented, and the implementation of an improved scanning algorithm is discussed in detail. The improved features include Boolean OR operations and many other innovative functions which aim at optimizing the scanning process in terms of computing time and the accurate estimation of usable areas.

  3. Suppression subtractive hybridization as a tool to identify anthocyanin metabolism-related genes in apple skin.

    PubMed

    Ban, Yusuke; Moriguchi, Takaya

    2010-01-01

    The pigmentation of anthocyanins is one of the important determinants of consumer preference and marketability in horticultural crops such as fruits and flowers. To elucidate the mechanisms underlying the physiological process leading to anthocyanin pigmentation, identifying the genes differentially expressed in response to anthocyanin accumulation is a useful strategy. Currently, microarrays are widely used to isolate differentially expressed genes. However, the use of microarrays is limited by the high cost of specialized apparatus and materials; their availability is therefore limited, and they are not yet in common use. Suppression subtractive hybridization (SSH) is an alternative tool that has been widely used to identify differentially expressed genes owing to its easy handling and relatively low cost. This chapter describes the procedures for SSH, including RNA extraction from polysaccharide- and polyphenol-rich samples, poly(A)+ RNA purification, evaluation of subtraction efficiency, and differential screening using reverse northern blotting in apple skin.

  4. Adaptive reference update (ARU) algorithm. A stochastic search algorithm for efficient optimization of multi-drug cocktails

    PubMed Central

    2012-01-01

    Background Multi-target therapeutics has been shown to be effective for treating complex diseases, and currently, it is a common practice to combine multiple drugs to treat such diseases to optimize the therapeutic outcomes. However, considering the huge number of possible ways to mix multiple drugs at different concentrations, it is practically difficult to identify the optimal drug combination through exhaustive testing. Results In this paper, we propose a novel stochastic search algorithm, called the adaptive reference update (ARU) algorithm, that can provide an efficient and systematic way for optimizing multi-drug cocktails. The ARU algorithm iteratively updates the drug combination to improve its response, where the update is made by comparing the response of the current combination with that of a reference combination, based on which the beneficial update direction is predicted. The reference combination is continuously updated based on the drug response values observed in the past, thereby adapting to the underlying drug response function. To demonstrate the effectiveness of the proposed algorithm, we evaluated its performance based on various multi-dimensional drug functions and compared it with existing algorithms. Conclusions Simulation results show that the ARU algorithm significantly outperforms existing stochastic search algorithms, including the Gur Game algorithm. In fact, the ARU algorithm can more effectively identify potent drug combinations and it typically spends fewer iterations for finding effective combinations. Furthermore, the ARU algorithm is robust to random fluctuations and noise in the measured drug response, which makes the algorithm well-suited for practical drug optimization applications. PMID:23134742
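
    The ARU idea of comparing a trial combination against a reference and moving in the beneficial direction can be caricatured as a simple stochastic hill-climber over drug concentrations. This sketch is our illustration of the general scheme only, not the paper's ARU update rule (which predicts the update direction from a separately maintained, adaptively updated reference combination):

```python
import numpy as np

def reference_guided_search(response, x0, step=0.1, n_iter=50, rng=None):
    """Sketch of a reference-guided stochastic search: perturb one drug
    concentration at a time, keep the move if the measured response beats
    the reference, and let the reference track the best response so far."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    reference = response(x)
    for _ in range(n_iter):
        i = rng.integers(len(x))                 # pick a drug to adjust
        delta = step * rng.choice((-1.0, 1.0))   # trial direction
        trial = x.copy()
        trial[i] = np.clip(trial[i] + delta, 0.0, 1.0)
        r = response(trial)
        if r > reference:                        # beneficial update: accept
            x, reference = trial, r
    return x, reference
```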

  5. Spectral K-edge subtraction imaging

    NASA Astrophysics Data System (ADS)

    Zhu, Y.; Samadi, N.; Martinson, M.; Bassey, B.; Wei, Z.; Belev, G.; Chapman, D.

    2014-05-01

    We describe a spectral x-ray transmission method to provide images of independent material components of an object using a synchrotron x-ray source. The imaging system and process is similar to K-edge subtraction (KES) imaging where two imaging energies are prepared above and below the K-absorption edge of a contrast element and a quantifiable image of the contrast element and a water equivalent image are obtained. The spectral method, termed ‘spectral-KES’ employs a continuous spectrum encompassing an absorption edge of an element within the object. The spectrum is prepared by a bent Laue monochromator with good focal and energy dispersive properties. The monochromator focuses the spectral beam at the object location, which then diverges onto an area detector such that one dimension in the detector is an energy axis. A least-squares method is used to interpret the transmitted spectral data with fits to either measured and/or calculated absorption of the contrast and matrix material-water. The spectral-KES system is very simple to implement and is comprised of a bent Laue monochromator, a stage for sample manipulation for projection and computed tomography imaging, and a pixelated area detector. The imaging system and examples of its applications to biological imaging are presented. The system is particularly well suited for a synchrotron bend magnet beamline with white beam access.
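
    The least-squares step at the core of spectral-KES fits each measured log-attenuation spectrum to two material basis functions, -ln(I/I0)(E) ≈ μ_c(E)·t_c + μ_w(E)·t_w, yielding a contrast-element thickness and a water-equivalent thickness. A minimal sketch in Python/NumPy (the basis spectra in the test are placeholders, not measured μ values):

```python
import numpy as np

def kes_decompose(log_atten, mu_contrast, mu_water):
    """Least-squares fit of a measured log-attenuation spectrum to two
    material basis spectra, returning contrast-element and water-equivalent
    thicknesses t_c and t_w."""
    A = np.column_stack((mu_contrast, mu_water))     # design matrix, one row per energy
    (t_c, t_w), *_ = np.linalg.lstsq(A, log_atten, rcond=None)
    return t_c, t_w
```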

  6. Monte Carlo Algorithms for a Bayesian Analysis of the Cosmic Microwave Background

    NASA Technical Reports Server (NTRS)

    Jewell, Jeffrey B.; Eriksen, H. K.; ODwyer, I. J.; Wandelt, B. D.; Gorski, K.; Knox, L.; Chu, M.

    2006-01-01

    This viewgraph presentation reviews the Bayesian approach to cosmic microwave background (CMB) analysis, its numerical implementation with Gibbs sampling, a summary of its application to WMAP I, and work in progress on generalizations to polarization, foregrounds, asymmetric beams, and 1/f noise.

  7. Gadolinium-enhanced digital subtraction angiography of hemodialysis fistulas: a diagnostic and therapeutic approach.

    PubMed

    Le Blanche, Alain-Ferdinand; Tassart, Marc; Deux, Jean-François; Rossert, Jérôme; Bigot, Jean-Michel; Boudghene, Frank

    2002-10-01

    The aim of our study was to evaluate the feasibility, safety, and potential role of the contrast agent gadoterate meglumine for digital subtraction angiography as a single diagnostic procedure or before percutaneous transluminal angioplasty of malfunctioning native dialysis fistulas. Over a 20-month period, 23 patients (15 women, 8 men) aged 42-87 years (mean, 63 years) with end-stage renal insufficiency and recent surgical placement of a hemodialysis fistula underwent gadoterate-enhanced digital subtraction angiography with a digital 1024 x 1024 matrix. Opacification was performed on the forearm, arm, and chest with the patient in the supine position using an injection (retrograde, n = 14; anterograde, n = 8; arterial, n = 1) of gadoterate meglumine into the perianastomotic fistula segment at a rate of 3 mL/sec for a total volume ranging from 24 to 32 mL. Percutaneous transluminal angioplasty was performed in three patients and required an additional 8 mL per procedure. Examinations were compared using a 3-step confidence scale and two-radiologist agreement (Cohen's kappa statistic) for diagnostic and opacification quality. Tolerability was evaluated on the basis of serum creatinine levels and the development of complications. No impairment of renal function was found in the 15 patients who were not treated with hemodialysis. Serum creatinine level change varied from -11.9% to 11.6%. All studies were of diagnostic quality. The presence of stenosis (n = 14) or thrombosis (n = 3) in arteriovenous fistulas was shown with good interobserver agreement (kappa = 0.71-0.80) in relation to opacification quality (kappa = 0.59-0.84). No pain, neurologic complications, or allergiclike reactions occurred. Three percutaneous transluminal angioplasty procedures (brachiocephalic, n = 2; radiocephalic, n = 1) were successfully performed. Gadoterate-enhanced digital subtraction angiography is an effective and safe method to assess causes of malfunction of native dialysis fistulas.

  8. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    PubMed

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low

  9. Interobserver Agreement on Arteriovenous Malformation Diffuseness Using Digital Subtraction Angiography.

    PubMed

    Braileanu, Maria; Yang, Wuyang; Caplan, Justin M; Lin, Li-Mei; Radvany, Martin G; Tamargo, Rafael J; Huang, Judy

    2016-11-01

    Arteriovenous malformation (AVM) diffuseness has been shown to be prognostic of treatment outcomes. We assessed interobserver agreement of AVM diffuseness among physicians of different specialty and training backgrounds using digital subtraction angiography (DSA). All research protocols were approved by the institutional review board for this retrospective chart review. In a single-blinded setting, 2 attending neurosurgeons, 1 attending interventional neuroradiologist, and 1 senior neurosurgical resident rated 80 DSA views of 36 AVMs as either compact or diffuse. Individual interobserver agreement and subgroup agreement were analyzed using κ agreement and intraclass correlation coefficient. Disagreement regarding AVM diffuseness occurred in 43.8% of all DSA views (n = 80). Interobserver κ agreement on AVM diffuseness using DSA views among 4 physicians ranged from fair (κ = 0.40 [95% confidence interval (CI) = 0.22-0.58]) to substantial (κ = 0.65 [95% CI = 0.48-0.81]), whereas total intraclass correlation coefficient was 0.81 (95% CI = 0.73-0.87). For the 36 AVMs, κ agreement ranged from fair (κ = 0.36 [95% CI = 0.13-0.60]) to moderate (κ = 0.57 [95% CI = 0.35-0.79]), whereas intraclass correlation coefficient among all 4 physicians was 0.68 (95% CI = 0.47-0.82). Moderate agreement on AVM diffuseness (n = 80) was found between attending and resident assessments (κ = 0.57 [95% CI = 0.39-0.75]) and between neurosurgeon and interventional neuroradiologist assessments (κ = 0.55 [95% CI = 0.37-0.73]). Agreement of individual physicians on AVM diffuseness varies from fair to substantial. Objective and three-dimensional measures of AVM diffuseness should be developed for consistent clinical application. Copyright © 2016 Elsevier Inc. All rights reserved.
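
    The κ values reported above are Cohen's kappa: observed agreement corrected for the agreement expected by chance. A minimal implementation in Python for two raters' labels (here, binary compact/diffuse ratings coded as 0/1):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' label lists: (po - pe) / (1 - pe),
    where po is observed agreement and pe is chance agreement."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n                # observed agreement
    labels = set(a) | set(b)
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance agreement
    return (po - pe) / (1 - pe)
```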

  10. New Algorithm to Enable Construction and Display of 3D Structures from Scanning Probe Microscopy Images Acquired Layer-by-Layer.

    PubMed

    Deng, William Nanqiao; Wang, Shuo; Ventrici de Souza, Joao; Kuhl, Tonya L; Liu, Gang-Yu

    2018-06-25

    Scanning probe microscopy (SPM), such as atomic force microscopy (AFM), is widely known for high-resolution imaging of surface structures and nanolithography in two dimensions (2D), providing important physical insights into surface science and material science. This work reports a new algorithm to enable construction and display of layer-by-layer 3D structures from SPM images. The algorithm enables alignment of SPM images acquired during layer-by-layer deposition and removal of redundant features and faithfully constructs the deposited 3D structures. The display uses a "see-through" strategy to enable the structure of each layer to be visible. The results demonstrate high spatial accuracy as well as algorithm versatility; users can set parameters for reconstruction and display as per image quality and research needs. To the best of our knowledge, this method represents the first report to enable SPM technology for 3D imaging construction and display. The detailed algorithm is provided to facilitate usage of the same approach in any SPM software. These new capabilities support wide applications of SPM that require 3D image reconstruction and display, such as 3D nanoprinting and 3D additive and subtractive manufacturing and imaging.

  11. Anomaly detection in hyperspectral imagery: statistics vs. graph-based algorithms

    NASA Astrophysics Data System (ADS)

    Berkson, Emily E.; Messinger, David W.

    2016-05-01

    Anomaly detection (AD) algorithms are frequently applied to hyperspectral imagery, but different algorithms produce different outlier results depending on the image scene content and the assumed background model. This work provides the first comparison of anomaly score distributions between common statistics-based anomaly detection algorithms (RX and subspace-RX) and the graph-based Topological Anomaly Detector (TAD). Anomaly scores in statistical AD algorithms should theoretically approximate a chi-squared distribution; however, this is rarely the case with real hyperspectral imagery. The expected distribution of scores found with graph-based methods remains unclear. We also look for general trends in algorithm performance with varied scene content. Three separate scenes were extracted from the hyperspectral MegaScene image taken over downtown Rochester, NY with the VIS-NIR-SWIR ProSpecTIR instrument. In order of most to least cluttered, we study an urban, suburban, and rural scene. The three AD algorithms were applied to each scene, and the distributions of the most anomalous 5% of pixels were compared. We find that subspace-RX performs better than RX, because the data becomes more normal when the highest variance principal components are removed. We also see that compared to statistical detectors, anomalies detected by TAD are easier to separate from the background. Due to their different underlying assumptions, the statistical and graph-based algorithms highlighted different anomalies within the urban scene. These results will lead to a deeper understanding of these algorithms and their applicability across different types of imagery.
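    The RX detector at the core of this comparison is simple to state: each pixel's anomaly score is its squared Mahalanobis distance from the scene mean under a multivariate Gaussian background model. A minimal global-RX sketch (a simplification, not the authors' exact implementation) is:

```python
import numpy as np

def rx_scores(cube):
    """Global RX anomaly detector: squared Mahalanobis distance of each
    pixel spectrum from the scene mean, assuming a multivariate Gaussian
    background model."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / (X.shape[0] - 1)
    cov_inv = np.linalg.pinv(cov)   # pseudo-inverse guards against rank deficiency
    # per-pixel Mahalanobis distance: x^T C^{-1} x
    scores = np.einsum('ij,jk,ik->i', Xc, cov_inv, Xc)
    return scores.reshape(h, w)

# Synthetic 3-band scene with one implanted anomalous pixel
rng = np.random.default_rng(0)
cube = rng.normal(size=(16, 16, 3))
cube[8, 8] += 10.0
scores = rx_scores(cube)
print(np.unravel_index(scores.argmax(), scores.shape))  # → (8, 8)
```

    Subspace-RX differs only in projecting out the highest-variance principal components before computing the same distance.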

  12. Comparison of Intraoperative Indocyanine Green Angiography and Digital Subtraction Angiography for Clipping of Intracranial Aneurysms

    PubMed Central

    Doss, Vinodh T.; Goyal, Nitin; Humphries, William; Hoit, Dan; Arthur, Adam; Elijovich, Lucas

    2015-01-01

    Background Residual aneurysm after microsurgical clipping carries a risk of aneurysm growth and rupture. Digital subtraction angiography (DSA) remains the standard to determine the adequacy of clipping. Intraoperative indocyanine green (ICG) angiography is increasingly utilized to confirm optimal clip positioning across the neck and to evaluate the adjacent vasculature. Objective We evaluated the correlation between ICG and DSA in clipped intracranial aneurysms. Methods We retrospectively reviewed patients who underwent craniotomy and microsurgical clipping of intracranial aneurysms with ICG over a 2-year period. Patient characteristics, presentation details, operative reports, and pre- and postclipping angiographic images were reviewed to determine the adequacy of the clipping. Results Forty-seven patients underwent clipping with ICG and postoperative DSA: 57 aneurysms were clipped; 23 patients (48.9%) presented with subarachnoid hemorrhage. Nine aneurysms demonstrated a residual on DSA not identified on ICG (residual sizes ranged from 0.5 to 4.3 mm; average size: 1.8 mm). Postoperative DSA demonstrated no branch occlusions. Conclusion Intraoperative ICG is useful in the clipping of intracranial aneurysms to ensure gross patency of branch vessels; however, residual aneurysms and subtle changes in flow in branch vessels are best seen by DSA. This has important clinical implications with regard to follow-up imaging and surgical/endovascular management. PMID:26279659

  13. The Chandra Source Catalog: Algorithms

    NASA Astrophysics Data System (ADS)

    McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.

  14. On Semiotics and Jumping Frogs: The Role of Gesture in the Teaching of Subtraction

    ERIC Educational Resources Information Center

    Farrugia, Marie Therese

    2017-01-01

    In this article, I describe a research/teaching experience I undertook with a class of 5-year-old children in Malta. The topic was subtraction on the number line. I interpret the teaching/learning process through a semiotic perspective. In particular, I highlight the role played by the gesture of forming "frog jumps" on the number line.…

  15. Dissociation of Subtraction and Multiplication in the Right Parietal Cortex: Evidence from Intraoperative Cortical Electrostimulation

    ERIC Educational Resources Information Center

    Yu, Xiaodan; Chen, Chuansheng; Pu, Song; Wu, Chenxing; Li, Yongnian; Jiang, Tao; Zhou, Xinlin

    2011-01-01

    Previous research has consistently shown that the left parietal cortex is critical for numerical processing, but the role of the right parietal lobe has been much less clear. This study used the intraoperative cortical electrical stimulation approach to investigate neural dissociation in the right parietal cortex for subtraction and…

  16. Background levels in the Borexino detector

    NASA Astrophysics Data System (ADS)

    D'Angelo, Davide; Wurm, Michael; Borexino Collaboration

    2008-11-01

    The Borexino detector, designed and constructed for sub-MeV solar neutrino spectroscopy, has been taking data at the Gran Sasso Laboratory, Italy, since May 2007. The main physics objective of Borexino, based on elastic scattering of neutrinos in organic liquid scintillator, is the real-time flux measurement of the 862 keV mono-energetic neutrinos from 7Be, which sets extremely severe radio-purity requirements on the detector's design and handling. The first year of continuous data taking now provides evidence of the extremely low background levels achieved in the construction of the detector and in the purification of the target mass. Several analyses sense the presence of radioisotopes of the 238U and 232Th chains, of 85Kr, and of 210Po out of equilibrium with the other radon daughters. Particular emphasis is given to the detection of the cosmic muon background, whose angular distributions have been obtained with the outer detector tracking algorithm, and to the possibility of tagging the muon-induced neutron background in the scintillator with the recently enhanced electronics setup.

  17. Remote Evaluation of Rotational Velocity Using a Quadrant Photo-Detector and a DSC Algorithm

    PubMed Central

    Zeng, Xiangkai; Zhu, Zhixiong; Chen, Yang

    2016-01-01

    This paper presents an approach to remotely evaluate the rotational velocity of a measured object by using a quadrant photo-detector and a differential subtraction correlation (DSC) algorithm. The rotational velocity of a rotating object is determined by two temporal-delay numbers at the minima of two DSCs that are derived from the four output signals of the quadrant photo-detector, and the sign of the calculated rotational velocity directly represents the rotational direction. The DSC algorithm does not require any multiplication operations. Experimental calculations were performed to confirm the proposed evaluation method. The calculated rotational velocity, including its amplitude and direction, showed good agreement with the given one, with an amplitude error of ~0.3%, and the method was over 1100 times more efficient than the traditional cross-correlation method for data lengths N > 4800. These confirmations show that the remote evaluation of rotational velocity can be done without any circular division disk, and that it has much fewer error sources, making it simple, accurate and effective for remotely evaluating rotational velocity. PMID:27120607
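    The DSC idea of locating a correlation minimum using only additions and subtractions can be illustrated with a generic sum-of-absolute-differences delay estimator (a simplification for illustration, not the quadrant-detector pipeline itself):

```python
import numpy as np

def sad_delay(x, y, max_lag):
    """Estimate the delay of y relative to x by minimizing the mean
    absolute difference over candidate lags -- additions and subtractions
    only, no multiplications, in the spirit of the DSC minimum search."""
    n = len(x)
    best_lag, best_cost = 0, np.inf
    for lag in range(max_lag + 1):
        cost = np.abs(x[:n - lag] - y[lag:]).mean()
        if cost < best_cost:
            best_lag, best_cost = lag, cost
    return best_lag

t = np.arange(400)
x = np.sin(0.1 * t)
y = np.concatenate([np.zeros(7), x[:-7]])   # x delayed by 7 samples
print(sad_delay(x, y, 20))  # → 7
```

    In the paper, two such minima (one per detector axis) give the temporal delays from which the rotational speed and sign are recovered.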

  18. Use of the Genomic Subtractive Hybridization Technique To Develop a Real-Time PCR Assay for Quantitative Detection of Prevotella spp. in Oral Biofilm Samples

    PubMed Central

    Nagashima, Shiori; Yoshida, Akihiro; Suzuki, Nao; Ansai, Toshihiro; Takehara, Tadamichi

    2005-01-01

    Genomic subtractive hybridization was used to design Prevotella nigrescens-specific primers and TaqMan probes. Based on this technique, a TaqMan real-time PCR assay was developed for quantifying four oral black-pigmented Prevotella species. The combination of real-time PCR and genomic subtractive hybridization is useful for preparing species-specific primer-probe sets for closely related species. PMID:15956428

  19. Hyper-spectral image compression algorithm based on mixing transform of wave band grouping to eliminate redundancy

    NASA Astrophysics Data System (ADS)

    Xie, ChengJun; Xu, Lin

    2008-03-01

    This paper presents an algorithm based on a mixing transform with wave-band grouping to eliminate spectral redundancy. The algorithm adapts to the varying correlation between different spectral bands, and it still works well when the band number is not a power of 2. It uses a non-boundary-extension CDF(2,2) DWT and a subtraction mixing transform to eliminate spectral redundancy, a CDF(2,2) DWT to eliminate spatial redundancy, and SPIHT+CABAC for compression coding; experiments show that a satisfactory lossless compression result can be achieved. Using the hyper-spectral image Canal from the American JPL laboratory as the data set for the lossless compression test, when the band number is not a power of 2 the lossless compression results of this algorithm are much better than those acquired by JPEG-LS, WinZip, ARJ, DPCM, the research achievements of a research team of the Chinese Academy of Sciences, Minimum Spanning Tree, and Near Minimum Spanning Tree; on average the compression ratio of this algorithm exceeds the above algorithms by 41%, 37%, 35%, 29%, 16%, 10%, and 8%, respectively. When the band number is a power of 2, for 128 frames of the image Canal we tested groupings of 8, 16 and 32 bands; considering factors like compression storage complexity, the type of wave band and the compression effect, we suggest using 8 as the number of bands in one group to achieve a better compression effect. The algorithm also has advantages in operation speed and ease of hardware implementation.

  20. Biological aerosol background characterization

    NASA Astrophysics Data System (ADS)

    Blatny, Janet; Fountain, Augustus W., III

    2011-05-01

    To provide useful information during military operations, or as part of other security situations, a biological aerosol detector has to respond within seconds or minutes to an attack by virulent biological agents, and with low false alarms. Within this time frame, measuring virulence of a known microorganism is extremely difficult, especially if the microorganism is of unknown antigenic or nucleic acid properties. Measuring "live" characteristics of an organism directly is not generally an option, yet only viable organisms are potentially infectious. Fluorescence-based instruments have been designed to optically determine if aerosol particles have viability characteristics. Still, such commercially available biological aerosol detection equipment needs to be improved for use in military and civil applications. Air has an endogenous population of microorganisms that may interfere with alarm software technologies. To design robust algorithms, comprehensive knowledge of the airborne biological background content is essential. For this reason, there is a need to study ambient live bacterial populations in as many locations as possible. Doing so will permit collection of data to define diverse biological characteristics that in turn can be used to fine-tune alarm algorithms. To avoid false alarms, improving software technologies for biological detectors is a crucial feature requiring consideration of various parameters that can be applied to suppress alarm triggers. This NATO Task Group aims to develop reference methods for monitoring biological aerosol characteristics to improve alarm algorithms for biological detection. Additionally, it will focus on developing reference standard methodology for monitoring biological aerosol characteristics to reduce false alarm rates.

  1. Image-processing algorithms for inspecting characteristics of hybrid rice seed

    NASA Astrophysics Data System (ADS)

    Cheng, Fang; Ying, Yibin

    2004-03-01

    Incompletely closed glumes, germ and disease are three characteristics of hybrid rice seed. Image-processing algorithms developed to detect these seed characteristics are presented in this paper. The rice seed used for this study involved five varieties: Jinyou402, Shanyou10, Zhongyou207, Jiayou and IIyou. The algorithms were implemented with a 5*600 image set, a 4*400 image set and another 5*600 image set, respectively. The image sets included black-background images, white-background images and both-side images of rice seed. Results show that the algorithm for inspecting seeds with incompletely closed glumes, based on the Radon transform, achieved an accuracy of 96% for normal seeds, 92% for seeds with fine fissures and 87% for seeds with unclosed glumes; the algorithm for inspecting germinated seeds on panicle, based on PCA and ANN, achieved an average accuracy of 98% for normal seeds and 88% for germinated seeds on panicle; and the algorithm for inspecting diseased seeds, based on color features, achieved an accuracy of 92% for normal and healthy seeds, 95% for spot-diseased seeds and 83% for severely diseased seeds.

  2. Attentional bias induced by solving simple and complex addition and subtraction problems.

    PubMed

    Masson, Nicolas; Pesenti, Mauro

    2014-01-01

    The processing of numbers has been shown to induce shifts of spatial attention in simple probe detection tasks, with small numbers orienting attention to the left and large numbers to the right side of space. Recently, the investigation of this spatial-numerical association has been extended to mental arithmetic with the hypothesis that solving addition or subtraction problems may induce attentional displacements (to the right and to the left, respectively) along a mental number line onto which the magnitude of the numbers would range from left to right, from small to large numbers. Here we investigated such attentional shifts using a target detection task primed by arithmetic problems in healthy participants. The constituents of the addition and subtraction problems (first operand; operator; second operand) were flashed sequentially in the centre of a screen, then followed by a target on the left or the right side of the screen, which the participants had to detect. This paradigm was employed with arithmetic facts (Experiment 1) and with more complex arithmetic problems (Experiment 2) in order to assess the effects of the operation, the magnitude of the operands, the magnitude of the results, and the presence or absence of a requirement for the participants to carry or borrow numbers. The results showed that arithmetic operations induce some spatial shifts of attention, possibly through a semantic link between the operation and space.

  3. Optical image encryption based on real-valued coding and subtracting with the help of QR code

    NASA Astrophysics Data System (ADS)

    Deng, Xiaopeng

    2015-08-01

    A novel optical image encryption based on real-valued coding and subtracting is proposed with the help of quick response (QR) code. In the encryption process, the original image to be encoded is firstly transformed into the corresponding QR code, and then the QR code is encoded into two phase-only masks (POMs) by using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In the decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and actual smartphone-collected results show that the method is feasible and has strong tolerance to noise, phase difference and variations in the intensity ratio of the two decryption light beams.

  4. Brief Report: Additive and Subtractive Counterfactual Reasoning of Children with High-Functioning Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Begeer, Sander; Terwogt, Mark Meerum; Lunenburg, Patty; Stegge, Hedy

    2009-01-01

    The development of additive ("If only I had done...") and subtractive ("If only I had not done....") counterfactual reasoning was examined in children with High Functioning Autism Spectrum Disorders (HFASD) (n = 72) and typically developing controls (n = 71), aged 6-12 years. Children were presented four stories where they could generate…

  5. Spatiotemporal models for the simulation of infrared backgrounds

    NASA Astrophysics Data System (ADS)

    Wilkes, Don M.; Cadzow, James A.; Peters, R. Alan, II; Li, Xingkang

    1992-09-01

    It is highly desirable for designers of automatic target recognizers (ATRs) to be able to test their algorithms on targets superimposed on a wide variety of background imagery. Background imagery in the infrared spectrum is expensive to gather from real sources, consequently, there is a need for accurate models for producing synthetic IR background imagery. We have developed a model for such imagery that will do the following: Given a real, infrared background image, generate another image, distinctly different from the one given, that has the same general visual characteristics as well as the first and second-order statistics of the original image. The proposed model consists of a finite impulse response (FIR) kernel convolved with an excitation function, and histogram modification applied to the final solution. A procedure for deriving the FIR kernel using a signal enhancement algorithm has been developed, and the histogram modification step is a simple memoryless nonlinear mapping that imposes the first order statistics of the original image onto the synthetic one, thus the overall model is a linear system cascaded with a memoryless nonlinearity. It has been found that the excitation function relates to the placement of features in the image, the FIR kernel controls the sharpness of the edges and the global spectrum of the image, and the histogram controls the basic coloration of the image. A drawback to this method of simulating IR backgrounds is that a database of actual background images must be collected in order to produce accurate FIR and histogram models. If this database must include images of all types of backgrounds obtained at all times of the day and all times of the year, the size of the database would be prohibitive. In this paper we propose improvements to the model described above that enable time-dependent modeling of the IR background. This approach can greatly reduce the number of actual IR backgrounds that are required to produce a
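    The core of the model described above, an FIR kernel convolved with an excitation function followed by histogram modification, can be sketched as follows. This is a simplified stand-in: it assumes a separable smoothing kernel rather than a kernel derived by the paper's signal enhancement procedure.

```python
import numpy as np

def synth_background(reference, kernel_1d, rng):
    """Sketch of the linear-system-plus-nonlinearity model: white-noise
    excitation convolved with an FIR kernel, then histogram-matched to a
    reference image so the synthetic frame shares its first-order
    statistics."""
    h, w = reference.shape
    excite = rng.normal(size=(h, w))
    # separable FIR filtering: row pass, then column pass
    smooth = np.apply_along_axis(np.convolve, 1, excite, kernel_1d, 'same')
    smooth = np.apply_along_axis(np.convolve, 0, smooth, kernel_1d, 'same')
    # histogram matching: impose the reference's sorted pixel values
    order = np.argsort(smooth, axis=None)
    matched = np.empty(h * w)
    matched[order] = np.sort(reference, axis=None)
    return matched.reshape(h, w)

rng = np.random.default_rng(1)
ref = rng.gamma(2.0, 20.0, size=(32, 32))   # stand-in for a real IR frame
kernel = np.ones(5) / 5                     # assumed smoothing FIR kernel
synth = synth_background(ref, kernel, rng)
# the synthetic image has exactly the reference's first-order statistics:
print(np.allclose(np.sort(synth, axis=None), np.sort(ref, axis=None)))  # → True
```

    The FIR kernel controls edge sharpness and the global spectrum; the histogram step, as in the paper, is a memoryless nonlinearity imposing the reference's coloration.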

  6. Global Binary Continuity for Color Face Detection With Complex Background

    NASA Astrophysics Data System (ADS)

    Belavadi, Bhaskar; Mahendra Prashanth, K. V.; Joshi, Sujay S.; Suprathik, N.

    2017-08-01

    In this paper, we propose a method to detect human faces in color images with complex backgrounds. The proposed algorithm makes use of two color space models, HSV and YCgCr. The color-segmented image is filled uniformly with a single color (binary) and then all unwanted discontinuous lines are removed to get the final image. Experimental results on the Caltech database show that the proposed model is able to accomplish far better segmentation for faces of varying orientations, skin color and background environment.
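    Color-space skin segmentation of the kind described can be sketched in a few lines. The HSV thresholds below are illustrative assumptions, not the values used by the authors, and the YCgCr stage is omitted:

```python
import numpy as np

def rgb_to_hsv(img):
    """Vectorized RGB (floats in [0, 1]) to H in degrees, S, V."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx, mn = img.max(-1), img.min(-1)
    d = mx - mn + 1e-12
    h = np.where(mx == r, (g - b) / d % 6.0,
        np.where(mx == g, (b - r) / d + 2.0, (r - g) / d + 4.0))
    return h * 60.0 % 360.0, d / (mx + 1e-12), mx

def skin_mask(img):
    # Illustrative thresholds only (not the paper's trained ranges):
    # skin tones sit at low, reddish hues with moderate saturation.
    h, s, v = rgb_to_hsv(img)
    return (h < 50) & (s > 0.15) & (s < 0.9) & (v > 0.3)

img = np.zeros((1, 2, 3))
img[0, 0] = [0.80, 0.55, 0.45]   # skin-like tone
img[0, 1] = [0.20, 0.30, 0.80]   # background blue
print(skin_mask(img))  # → [[ True False]]
```

    In the paper, the binary mask produced at this stage is then uniformly filled and cleaned of discontinuous lines to yield the final face regions.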

  7. NLSE: Parameter-Based Inversion Algorithm

    NASA Astrophysics Data System (ADS)

    Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.; Knopp, Jeremy S.

    Chapter 11 introduced us to the notion of an inverse problem and gave us some examples of the value of this idea to the solution of realistic industrial problems. The basic inversion algorithm described in Chap. 11 was based upon the Gauss-Newton theory of nonlinear least-squares estimation and is called NLSE in this book. In this chapter we will develop the mathematical background of this theory more fully, because this algorithm will be the foundation of inverse methods and their applications during the remainder of this book. We hope, thereby, to introduce the reader to the application of sophisticated mathematical concepts to engineering practice without introducing excessive mathematical sophistication.
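    The Gauss-Newton update underlying NLSE can be stated compactly: given residuals r(p) and Jacobian J(p), iterate p ← p − (JᵀJ)⁻¹Jᵀr. A generic sketch on a toy exponential-fitting inversion (an illustration of the iteration only, not an eddy-current model):

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, iters=30):
    """Generic Gauss-Newton iteration for nonlinear least squares:
    p <- p - (J^T J)^{-1} J^T r."""
    p = np.asarray(p0, float)
    for _ in range(iters):
        r = residual(p)
        J = jacobian(p)
        p = p - np.linalg.solve(J.T @ J, J.T @ r)
    return p

# Toy inverse problem: recover (a, b) from noiseless data y = a * exp(b * t)
t = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(-1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.stack([np.exp(p[1] * t),
                          p[0] * t * np.exp(p[1] * t)], axis=1)
p_est = gauss_newton(res, jac, [1.5, -1.0])
print(p_est)  # ≈ [2.0, -1.5]
```

    Practical NLSE-style inversion adds safeguards (damping, parameter bounds) that this bare iteration omits.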

  8. An advanced algorithm for fetal heart rate estimation from non-invasive low electrode density recordings.

    PubMed

    Dessì, Alessia; Pani, Danilo; Raffo, Luigi

    2014-08-01

    Non-invasive fetal electrocardiography is still an open research issue. The recent publication of an annotated dataset on Physionet providing four-channel non-invasive abdominal ECG traces promoted an international challenge on the topic. Starting from that dataset, an algorithm for the identification of the fetal QRS complexes from a reduced number of electrodes and without any a priori information about the electrode positioning was developed, entering the top ten best-performing open-source algorithms presented at the challenge. In this paper, an improved version of that algorithm is presented and evaluated exploiting the same challenge metrics. It is mainly based on the subtraction of the maternal QRS complexes in every lead, obtained by synchronized averaging of morphologically similar complexes, the filtering of the maternal P and T waves, and the enhancement of the fetal QRS through independent component analysis (ICA) applied on the processed signals before a final fetal QRS detection stage. The RR time series of both the mother and the fetus are analyzed to enhance pseudoperiodicity with the aim of correcting wrong annotations. The algorithm was designed and extensively evaluated on the open dataset A (N = 75), and finally evaluated on datasets B (N = 100) and C (N = 272) to obtain mean scores over data not used during algorithm development. Compared to the results achieved by the previous version of the algorithm, the current version would mark the 5th and 4th position in the final ranking related to events 1 and 2, reserved to the open-source challenge entries, taking into account both official and unofficial entrants. On dataset A, the algorithm achieves 0.982 median sensitivity and 0.976 median positive predictivity.
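    The maternal-QRS subtraction step, synchronized averaging of morphologically similar complexes followed by template subtraction, can be sketched as follows. Peak positions are assumed given here; the real algorithm must detect them first:

```python
import numpy as np

def subtract_maternal(ecg, peaks, half_width):
    """Template-subtraction sketch: beats aligned on the detected maternal
    R peaks are averaged into a template, which is then subtracted at every
    peak; the residual keeps the (much smaller) fetal component."""
    beats = np.array([ecg[p - half_width:p + half_width] for p in peaks])
    template = beats.mean(axis=0)
    out = ecg.astype(float).copy()
    for p in peaks:
        out[p - half_width:p + half_width] -= template
    return out

# Synthetic trace: one identical "maternal" beat repeated five times
beat = np.exp(-0.5 * (np.arange(-30, 30) / 3.0) ** 2)
ecg = np.zeros(1000)
peaks = [100, 300, 500, 700, 900]
for p in peaks:
    ecg[p - 30:p + 30] += beat
residual = subtract_maternal(ecg, peaks, 30)
print(np.abs(residual).max())  # near machine zero: identical beats cancel
```

    With real signals the beats are only morphologically similar, so averaging attenuates rather than eliminates the maternal complex, and the paper adds P/T-wave filtering and ICA afterwards.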

  9. Fostering First Graders' Fluency with Basic Subtraction and Larger Addition Combinations via Computer-Assisted Instruction

    ERIC Educational Resources Information Center

    Baroody, Arthur J.; Purpura, David J.; Eiland, Michael D.; Reid, Erin E.

    2014-01-01

    Achieving fluency with basic subtraction and add-with-8 or -9 combinations is difficult for primary grade children. A 9-month training experiment entailed evaluating the efficacy of software designed to promote such fluency via guided learning of reasoning strategies. Seventy-five eligible first graders were randomly assigned to one of three…

  10. Influence of the large-small split effect on strategy choice in complex subtraction.

    PubMed

    Xiang, Yan Hui; Wu, Hao; Shang, Rui Hong; Chao, Xiaomei; Ren, Ting Ting; Zheng, Li Ling; Mo, Lei

    2018-04-01

    Two main theories have been used to explain the arithmetic split effect: decision-making process theory and strategy choice theory. Using the inequality paradigm, previous studies have confirmed that individuals tend to adopt a plausibility-checking strategy and a whole-calculation strategy to solve large and small split problems in complex addition arithmetic, respectively. This supports strategy choice theory, but it is unknown whether this theory also explains performance in solving different split problems in complex subtraction arithmetic. This study used small, intermediate and large split sizes, with each split condition being further divided into problems requiring and not requiring borrowing. The reaction times (RTs) for large and intermediate splits were significantly shorter than those for small splits, while accuracy was significantly higher for large and intermediate splits than for small splits, reflecting no speed-accuracy trade-off. Further, RTs and accuracy differed significantly between the borrow and no-borrow conditions only for small splits. This study indicates that strategy choice theory is suitable to explain the split effect in complex subtraction arithmetic. That is, individuals tend to choose the plausibility-checking strategy or the whole-calculation strategy according to the split size. © 2016 International Union of Psychological Science.

  11. Rapid generation of three-dimensional microchannels for vascularization using a subtractive printing technique.

    PubMed

    Burtch, Stephanie R; Sameti, Mahyar; Olmstead, Richard T; Bashur, Chris A

    2018-05-01

    The development of tissue-engineered products has been limited by lack of a perfused microvasculature that delivers nutrients and maintains cell viability. Current strategies to promote vascularization such as additive three-dimensional printing techniques have limitations. This study validates the use of an ultra-fast laser subtractive printing technique to generate capillary-sized channels in hydrogels prepopulated with cells by demonstrating cell viability relative to the photodisrupted channels in the gel. The system can move the focal spot laterally in the gel at a rate of 2500 mm/s by using a galvanometric scanner to raster the in plane focal spot. A Galilean telescope allows z-axis movement. Blended hydrogels of polyethylene glycol and collagen with a range of optical clarities, mechanical properties and swelling behavior were tested to demonstrate that the subtractive printing process for writing vascular channels is compatible with all of the blended hydrogels tested. Channel width and patterns were controlled by adjusting the laser energy and focal spot positioning, respectively. After treatment, high cell viability was observed at distances greater than or equal to 18 μm from the fabricated channels. Overall, this study demonstrates a flexible technique that has the potential to rapidly generate channels in tissue-engineered constructs. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Algorithm architecture co-design for ultra low-power image sensor

    NASA Astrophysics Data System (ADS)

    Laforest, T.; Dupret, A.; Verdant, A.; Lattard, D.; Villard, P.

    2012-03-01

    In the context of embedded video surveillance, stand-alone, left-behind image sensors are used to detect events with a high level of confidence, but also with very low power consumption. With a steady camera, motion detection algorithms based on background estimation to find moving regions are simple to implement and computationally efficient. To reduce power consumption, the background is estimated using a down-sampled image formed of macropixels. In order to extend the class of moving objects to be detected, we propose an original mixed-mode architecture developed through an algorithm-architecture co-design methodology. This programmable architecture is composed of a vector of SIMD processors. A basic RISC architecture was optimized in order to implement motion detection algorithms with a dedicated set of 42 instructions. Defining delta modulation as a calculation primitive has allowed algorithms to be implemented in a very compact way. Thereby, a 1920x1080@25fps CMOS image sensor performing integrated motion detection is proposed, with an estimated power consumption of 1.8 mW.
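    The macropixel background-estimation scheme can be sketched in a few lines; this is a software stand-in for what the mixed-mode SIMD architecture computes, with illustrative parameters:

```python
import numpy as np

def detect_motion(frame, background, block=8, alpha=0.05, thresh=10.0):
    """Low-power motion-detection sketch: compare a down-sampled
    (macropixel) frame against a running-average background and flag
    blocks whose mean deviates; the background is updated recursively."""
    h, w = frame.shape
    macro = frame.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    motion = np.abs(macro - background) > thresh
    new_bg = (1 - alpha) * background + alpha * macro   # recursive update
    return motion, new_bg

frame = np.full((64, 64), 100.0)
frame[16:24, 32:40] += 50.0          # one moving 8x8 region
bg = np.full((8, 8), 100.0)
motion, bg = detect_motion(frame, bg)
print(motion.sum(), motion[2, 4])    # → 1 True
```

    Working on block means rather than full-resolution pixels is what keeps the memory footprint and arithmetic cost, and hence the power budget, small.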

  13. Development of gradient descent adaptive algorithms to remove common mode artifact for improvement of cardiovascular signal quality.

    PubMed

    Ciaccio, Edward J; Micheli-Tzanakou, Evangelia

    2007-07-01

    Common-mode noise degrades cardiovascular signal quality and diminishes measurement accuracy. Filtering to remove noise components in the frequency domain often distorts the signal. Two adaptive noise canceling (ANC) algorithms were tested to adjust weighted reference signals for optimal subtraction from a primary signal. Update of weight w was based upon the gradient term of the steepest descent equation: [see text], where the error ε is the difference between primary and weighted reference signals. ∇ was estimated from Δε² and Δw without using a variable Δw in the denominator, which can cause instability. The Parallel Comparison (PC) algorithm computed Δε² using fixed finite differences ±Δw in parallel during each discrete time k. The ALOPEX algorithm computed Δε² × Δw from time k to k + 1 to estimate ∇, with a random number added to account for Δε² · Δw → 0 near the optimal weighting. Using simulated data, both algorithms stably converged to the optimal weighting within 50-2000 discrete sample points k even with an SNR = 1:8 and weights which were initialized far from the optimal. Using a sharply pulsatile cardiac electrogram signal with added noise so that the SNR = 1:5, both algorithms exhibited stable convergence within 100 ms (100 sample points). Fourier spectral analysis revealed minimal distortion when comparing the signal without added noise to the ANC-restored signal. ANC algorithms based upon difference calculations can rapidly and stably converge to the optimal weighting in simulated and real cardiovascular data. Signal quality is restored with minimal distortion, increasing the accuracy of biophysical measurement.
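    A single-weight version of the Parallel Comparison update can be sketched as follows (simulated signals; the paper's multi-reference setting and the ALOPEX variant are omitted):

```python
import numpy as np

def anc_pc(primary, reference, w=0.0, dw=0.05, mu=0.1):
    """Single-weight adaptive noise canceller in the spirit of the
    Parallel Comparison scheme: at each sample k the squared error is
    evaluated at w + dw and w - dw in parallel, and w steps toward the
    lower one -- a finite-difference gradient with no division by dw."""
    hist = np.empty(len(primary))
    for k, (p, r) in enumerate(zip(primary, reference)):
        e_plus = (p - (w + dw) * r) ** 2
        e_minus = (p - (w - dw) * r) ** 2
        w -= mu * (e_plus - e_minus)   # algebraically an LMS step of 4*mu*dw
        hist[k] = w
    return hist

rng = np.random.default_rng(2)
k = np.arange(4000)
signal = np.sin(0.05 * k)
noise = rng.normal(size=k.size)
primary = signal + 0.8 * noise        # common-mode noise enters at weight 0.8
w_hist = anc_pc(primary, noise)
print(round(np.mean(w_hist[-500:]), 1))  # → 0.8, the optimal weight
```

    Because the finite-difference update never divides by Δw, it avoids the instability the paper attributes to variable-denominator gradient estimates.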

  14. Segmentation methods for breast vasculature in dual-energy contrast-enhanced digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Lau, Kristen C.; Lee, Hyo Min; Singh, Tanushriya; Maidment, Andrew D. A.

    2015-03-01

    Dual-energy contrast-enhanced digital breast tomosynthesis (DE CE-DBT) uses an iodinated contrast agent to image the three-dimensional breast vasculature. The University of Pennsylvania has an ongoing DE CE-DBT clinical study in patients with known breast cancers. The breast is compressed continuously and imaged at four time points (1 pre-contrast; 3 post-contrast). DE images are obtained by a weighted logarithmic subtraction of the high-energy (HE) and low-energy (LE) image pairs. Temporal subtraction of the post-contrast DE images from the pre-contrast DE image is performed to analyze iodine uptake. Our previous work investigated image registration methods to correct for patient motion, enhancing the evaluation of vascular kinetics. In this project we investigate a segmentation algorithm which identifies blood vessels in the breast from our temporal DE subtraction images. Anisotropic diffusion filtering, Gabor filtering, and morphological filtering are used for the enhancement of vessel features. Vessel labeling methods are then used to distinguish vessel features from the background. Statistical and clinical evaluations of segmentation accuracy in DE CE-DBT images are ongoing.
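    The weighted logarithmic subtraction step can be illustrated with a toy attenuation model. The coefficients below are made-up for illustration, not clinical calibration values:

```python
import numpy as np

def de_subtract(high, low, w):
    """Weighted logarithmic subtraction sketch: DE = ln(HE) - w * ln(LE),
    with w chosen so the breast-tissue term cancels and iodine remains."""
    return np.log(high) - w * np.log(low)

# Toy Beer-Lambert model at the two energies (illustrative coefficients)
mu_t = {'HE': 0.5, 'LE': 1.0}   # tissue attenuation
mu_i = {'HE': 2.0, 'LE': 3.0}   # iodine attenuation
def intensity(e, t_tissue, t_iodine):
    return np.exp(-mu_t[e] * t_tissue - mu_i[e] * t_iodine)

w = mu_t['HE'] / mu_t['LE']     # this weight cancels tissue thickness
a = de_subtract(intensity('HE', 1.0, 0.1), intensity('LE', 1.0, 0.1), w)
b = de_subtract(intensity('HE', 2.0, 0.1), intensity('LE', 2.0, 0.1), w)
c = de_subtract(intensity('HE', 2.0, 0.0), intensity('LE', 2.0, 0.0), w)
print(np.isclose(a, b), np.isclose(a, c))  # → True False
```

    Pixels a and b share the same iodine but different tissue thickness, so their DE values agree; pixel c has no iodine and differs, which is exactly the separation the clinical subtraction exploits.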

  15. People counting in classroom based on video surveillance

    NASA Astrophysics Data System (ADS)

    Zhang, Quanbin; Huang, Xiang; Su, Juan

    2014-11-01

    Currently, the switches for the lights and other electronic devices in classrooms rely mainly on manual control; as a result, many lights remain on when the classroom is empty or occupied by only a few people. It is important to change this situation and control the electronic devices intelligently, according to the number and distribution of the students in the classroom, so as to reduce the considerable waste of electrical resources. This paper studies the problem of people counting in a classroom based on video surveillance. Because the camera in the classroom cannot capture the full shape contours of bodies or clear facial features, most classical algorithms, such as pedestrian detection based on HOG (histograms of oriented gradient) features and face detection based on machine learning, are unable to obtain satisfactory results. A new dual background updating model based on sparse and low-rank matrix decomposition is proposed in this paper, exploiting the fact that most students in a classroom are nearly stationary, with only occasional body movement. First, frame differencing is combined with the sparse and low-rank matrix decomposition to predict the moving areas, and the background model is updated with different parameters according to the positional relationship between the pixels of the current video frame and the predicted motion regions. Second, the regions of moving objects are determined from the updated background using the background subtraction method. Finally, several operations, including binarization, median filtering, morphological processing, and connected component detection, are performed on the regions obtained by background subtraction, in order to reduce the effects of noise and obtain the number of people in the classroom. The experimental results demonstrate the validity of the people counting algorithm.
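
    The paper's dual background model builds on background subtraction; as a minimal illustration of that underlying step (a generic running-average model with thresholded differencing, not the proposed sparse/low-rank algorithm), consider:

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background model, per pixel."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=20):
    """Background subtraction: pixels far from the model are foreground."""
    return [abs(f - b) > thresh for f, b in zip(bg, frame)]

# Toy 1-D "video": a static background of value 100, then a frame in which
# a bright object (value 200) occupies pixels 2-3.
bg = [0.0] * 8
for t in range(200):                 # let the model settle on the empty scene
    bg = update_background(bg, [100] * 8)
frame = [100, 100, 200, 200, 100, 100, 100, 100]
mask = foreground_mask(bg, frame)
print(mask)  # → [False, False, True, True, False, False, False, False]
```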

  16. A private DNA motif finding algorithm.

    PubMed

    Chen, Rui; Peng, Yun; Choi, Byron; Xu, Jianliang; Hu, Haibo

    2014-08-01

    With the increasing availability of genomic sequence data, numerous methods have been proposed for finding DNA motifs. The discovery of DNA motifs serves as a critical step in many biological applications. However, the privacy implications of DNA analysis are normally neglected in the existing methods. In this work, we propose a private DNA motif finding algorithm in which a DNA owner's privacy is protected by a rigorous privacy model, known as ε-differential privacy. It provides provable privacy guarantees that are independent of adversaries' background knowledge. Our algorithm makes use of the n-gram model and is optimized for processing large-scale DNA sequences. We evaluate the performance of our algorithm over real-life genomic data and demonstrate the promise of integrating privacy into DNA motif finding. Copyright © 2014 Elsevier Inc. All rights reserved.
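
    The paper's mechanism is more elaborate, but the core idea of releasing n-gram counts under ε-differential privacy can be sketched with the Laplace mechanism. The sensitivity bound used below (the maximum number of n-grams one sequence contributes) is a simplifying assumption, not the authors' calibration:

```python
import random, math

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def private_ngram_counts(sequences, n, epsilon, rng):
    """Release n-gram counts under epsilon-differential privacy.
    Sensitivity is taken as the max n-grams one sequence contributes
    (a simplifying assumption for this sketch)."""
    counts = {}
    max_per_seq = 0
    for s in sequences:
        grams = [s[i:i + n] for i in range(len(s) - n + 1)]
        max_per_seq = max(max_per_seq, len(grams))
        for g in grams:
            counts[g] = counts.get(g, 0) + 1
    scale = max_per_seq / epsilon   # Laplace scale = sensitivity / epsilon
    return {g: c + laplace_noise(scale, rng) for g, c in counts.items()}

rng = random.Random(42)
seqs = ["ACGTACGT", "ACGTTTGA", "TTGACGTA"]   # toy DNA sequences
noisy = private_ngram_counts(seqs, 3, epsilon=1.0, rng=rng)
print(sorted(noisy)[:3])  # → ['ACG', 'CGT', 'GAC']
```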

  17. Electrode configuration and signal subtraction technique for single polarity charge carrier sensing in ionization detectors

    DOEpatents

    Luke, Paul

    1996-01-01

    An ionization detector electrode and signal subtraction apparatus and method provide at least one first conductive trace formed onto the first surface of an ionization detector. The first surface opposes a second surface of the ionization detector. At least one second conductive trace is also formed on the first surface of the ionization detector in a substantially interlaced and symmetrical pattern with the at least one first conductive trace. Both of the traces are held at a voltage potential of a first polarity type. By forming the traces in a substantially interlaced and symmetric pattern, signals generated by a charge carrier are substantially of equal strength with respect to both of the traces. The only significant difference in measured signal strength occurs when the charge carrier moves to within close proximity of the traces and is received at the collecting trace. The measured signals are then subtracted and compared to quantitatively measure the magnitude of the charge and to determine the position at which the charge carrier originated within the ionization detector.

  18. Electrode configuration and signal subtraction technique for single polarity charge carrier sensing in ionization detectors

    DOEpatents

    Luke, P.

    1996-06-25

    An ionization detector electrode and signal subtraction apparatus and method provide at least one first conductive trace formed onto the first surface of an ionization detector. The first surface opposes a second surface of the ionization detector. At least one second conductive trace is also formed on the first surface of the ionization detector in a substantially interlaced and symmetrical pattern with the at least one first conductive trace. Both of the traces are held at a voltage potential of a first polarity type. By forming the traces in a substantially interlaced and symmetric pattern, signals generated by a charge carrier are substantially of equal strength with respect to both of the traces. The only significant difference in measured signal strength occurs when the charge carrier moves to within close proximity of the traces and is received at the collecting trace. The measured signals are then subtracted and compared to quantitatively measure the magnitude of the charge and to determine the position at which the charge carrier originated within the ionization detector. 9 figs.
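
    The single-polarity sensing principle can be illustrated with toy weighting potentials (invented for illustration; the patent's actual interlaced geometry is not modeled): the two traces see equal induced signals until the carrier nears the electrode plane, so their difference isolates the collected charge regardless of drift depth:

```python
def coplanar_signals(z, q=1.0):
    """Induced charge on the collecting and non-collecting traces as a
    carrier of charge q drifts to normalized depth z in [0, 1].
    Toy weighting potentials: identical in the bulk, splitting only
    within the last 10% of the drift toward the electrode plane."""
    bulk = 0.5 * z * q                 # shared signal far from the traces
    near = max(0.0, (z - 0.9) / 0.1)   # ramps 0 -> 1 over the final approach
    collecting = bulk * (1 - near) + q * near
    noncollecting = bulk * (1 - near)  # falls back to 0 at collection
    return collecting, noncollecting

# Far from the electrode plane the subtracted signal is zero; at collection
# it equals the full charge q, independent of where the carrier started.
for z in (0.3, 0.6, 1.0):
    c, n = coplanar_signals(z)
    print(z, round(c - n, 3))   # subtracted signal vs depth
```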

  19. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique

    PubMed Central

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan

    2014-01-01

    Objective This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Methods Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions The accuracy and precision of PUT dental models for evaluating the performance of the oral scanner and subtractive RP technology were acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models. PMID:24696823

  20. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique.

    PubMed

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan; Kim, Hae-Young

    2014-03-01

    This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. The accuracy and precision of PUT dental models for evaluating the performance of the oral scanner and subtractive RP technology were acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models.

  1. WAXS fat subtraction model to estimate differential linear scattering coefficients of fatless breast tissue: Phantom materials evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Robert Y., E-mail: rx-tang@laurentian.ca; Laamanen, Curtis, E-mail: cx-laamanen@laurentian.ca; McDonald, Nancy, E-mail: mcdnancye@gmail.com

    Purpose: Develop a method to subtract fat tissue contributions to wide-angle x-ray scatter (WAXS) signals of breast biopsies in order to estimate the differential linear scattering coefficients μs of fatless tissue. Cancerous and fibroglandular tissue can then be compared independent of fat content. In this work phantom materials with known compositions were used to test the efficacy of the WAXS subtraction model. Methods: Each sample, 5 mm in diameter and 5 mm thick, was interrogated by a 50 kV, 2.7 mm diameter beam for 3 min. A 25 mm², 1 mm thick CdTe detector allowed measurements of a portion of the θ = 6° scattered field. A scatter technique provided means to estimate the incident spectrum N0(E) needed in the calculations of μs[x(E, θ)], where x is the momentum transfer argument. Values of μ̄s for composite phantoms consisting of three plastic layers were estimated and compared to the values obtained via the sum μ̄sΣ(x) = ν1 μs1(x) + ν2 μs2(x) + ν3 μs3(x), where νi is the fractional volume of the ith plastic component. Water, polystyrene, and a volume mixture of 0.6 water + 0.4 polystyrene labelled as fibphan were chosen to mimic cancer, fat, and fibroglandular tissue, respectively. A WAXS subtraction model was used to remove the polystyrene signal from tissue composite phantoms so that the μs of water and fibphan could be estimated. Although the composite samples were layered, simulations were performed to test the models under nonlayered conditions. Results: The well-known μs signal of water was reproduced effectively between 0.5 < x < 1.6 nm⁻¹. The μ̄s obtained for the heterogeneous samples agreed with μ̄sΣ. Polystyrene signals were subtracted successfully from composite phantoms. The simulations validated the usefulness of the WAXS models for nonlayered biopsies. Conclusions: The methodology

  2. Boundary layer noise subtraction in hydrodynamic tunnel using robust principal component analysis.

    PubMed

    Amailland, Sylvain; Thomas, Jean-Hugh; Pézerat, Charles; Boucheron, Romuald

    2018-04-01

    The acoustic study of propellers in a hydrodynamic tunnel is of paramount importance during the design process, but can involve significant difficulties due to the boundary layer noise (BLN). Indeed, advanced denoising methods are needed to recover the acoustic signal in case of poor signal-to-noise ratio. The technique proposed in this paper is based on the decomposition of the wall-pressure cross-spectral matrix (CSM) by taking advantage of both the low-rank property of the acoustic CSM and the sparse property of the BLN CSM. Thus, the algorithm belongs to the class of robust principal component analysis (RPCA), which derives from the widely used principal component analysis. If the BLN is spatially decorrelated, the proposed RPCA algorithm can blindly recover the acoustical signals even for negative signal-to-noise ratio. Unfortunately, in a realistic case, acoustic signals recorded in a hydrodynamic tunnel show that the noise may be partially correlated. A prewhitening strategy is then considered in order to take into account the spatially coherent background noise. Numerical simulations and experimental results show an improvement in terms of BLN reduction in the large hydrodynamic tunnel. The effectiveness of the denoising method is also investigated in the context of acoustic source localization.
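
    The low-rank-plus-sparse decomposition at the heart of RPCA can be sketched with the textbook inexact augmented Lagrange multiplier (IALM) solver for principal component pursuit. This is a generic implementation run on invented data, not the authors' prewhitened variant:

```python
import numpy as np

def rpca_ialm(M, lam=None, iters=50):
    """Robust PCA via inexact ALM: split M into a low-rank part L
    (here standing in for the acoustic CSM) and a sparse part S
    (standing in for spatially decorrelated boundary layer noise)."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm2 = np.linalg.norm(M, 2)
    Y = M / max(norm2, np.abs(M).max() / lam)   # dual variable init
    mu, rho = 1.25 / norm2, 1.5
    S = np.zeros_like(M)
    for _ in range(iters):
        # Singular-value thresholding -> low-rank update
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1 / mu, 0)) @ Vt
        # Soft thresholding -> sparse update
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0)
        Y = Y + mu * (M - L - S)                # dual ascent on constraint
        mu *= rho
    return L, S

# Toy test: rank-1 "acoustic" matrix plus a few large sparse outliers.
rng = np.random.default_rng(0)
u, v = rng.standard_normal(30), rng.standard_normal(30)
L0 = np.outer(u, v)
S0 = np.zeros((30, 30))
S0[rng.integers(0, 30, 20), rng.integers(0, 30, 20)] = 10.0
L, S = rpca_ialm(L0 + S0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0) < 0.05)  # low-rank part recovered
```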

  3. Effect of scaling and root planing on alveolar bone as measured by subtraction radiography.

    PubMed

    Hwang, You-Jeong; Fien, Matthew Jonas; Lee, Sam-Sun; Kim, Tae-Il; Seol, Yang-Jo; Lee, Yong-Moo; Ku, Young; Rhyu, In-Chul; Chung, Chong-Pyoung; Han, Soo-Boo

    2008-09-01

    Scaling and root planing of diseased periodontal pockets is fundamental to the treatment of periodontal disease. Although various clinical parameters have been used to assess the efficacy of this therapy, radiographic analysis of changes in bone density following scaling and root planing has not been extensively researched. In this study, digital subtraction radiography was used to analyze changes that occurred in the periodontal hard tissues following scaling and root planing. Thirteen subjects with a total of 39 sites that presented with >3 mm of vertical bone loss were included in this study. Clinical examinations were performed and radiographs were taken prior to treatment and were repeated 6 months following scaling and root planing. Radiographic analysis was performed with computer-assisted radiographic evaluation software. Three regions of interest (ROI) were defined as the most coronal, middle, and apical portions of each defect. A fourth ROI was used for each site as a control region and was placed at a distant, untreated area. Statistical analysis was carried out to evaluate changes in the mean gray level at the coronal, middle, and apical region of each treated defect. Digital subtraction radiography revealed an increase in radiographic density in 101 of the 117 test regions (83.3%). A 256-level gray scale was used, and a value >128 was assumed to represent a density gain in the ROI. The average gray level increase was 18.65. Although the coronal, middle, and apical regions displayed increases in bone density throughout this study, the bone density of the apical ROI (gray level = 151.27 ± 20.62) increased significantly more than the bone density of the coronal ROI (gray level = 139.19 ± 21.78). A significant increase in bone density was seen in probing depths >5 mm compared to those <5 mm in depth. No significant difference was found with regard to bone-density changes surrounding single- versus multiple-rooted teeth.

  4. NIKOS II - A System For Non-Invasive Imaging Of Coronary Arteries

    NASA Astrophysics Data System (ADS)

    Dix, Wolf-Rainer; Engelke, Klaus; Heintze, Gerhard; Heuer, Joachim; Graeff, Walter; Kupper, Wolfram; Lohmann, Michael; Makin, I.; Moechel, Thomas; Reumann, Reinhold; Stellmaschek, Karl-Heinz

    1989-05-01

    This paper presents results of the initial in-vivo investigations with the system NIKOS II (NIKOS = Nicht-invasive Koronarangiographie mit Synchrotronstrahlung), an advanced version of NIKOS I, which has been under development since 1981. The aim of the work is to visualize coronary arteries down to 1 mm diameter at an iodine mass density of 1 mg/cm², thus allowing non-invasive investigations by intravenous injection of the contrast agent. For this purpose, Digital Subtraction Angiography (DSA) in energy subtraction mode (dichromography) is employed. The two images for subtraction are taken at photon energies just below and above the iodine K-edge (33.17 keV). After subtraction, the background contrast from bone and soft tissue is suppressed and the iodinated structures are strongly enhanced because of the abrupt change of absorption at the K-edge. The two monoenergetic beams are filtered out of a synchrotron radiation beam by a crystal monochromator and measured with a two-line detector. One scan (two images) lasts between 250 ms (final version) and 1 s (at present). The images from the in-vivo investigations of dogs have been promising. The right coronary artery (diameter 1.5 mm) was clearly visible. With the application of better image processing algorithms, the images illustrated in this paper have definite potential for improvement.

  5. Influence of background size, luminance and eccentricity on different adaptation mechanisms

    PubMed Central

    Gloriani, Alejandro H.; Matesanz, Beatriz M.; Barrionuevo, Pablo A.; Arranz, Isabel; Issolio, Luis; Mar, Santiago; Aparicio, Juan A.

    2016-01-01

    Mechanisms of light adaptation have been traditionally explained with reference to psychophysical experimentation. However, the neural substrata involved in those mechanisms remain to be elucidated. Our study analyzed links between psychophysical measurements and retinal physiological evidence with consideration for the phenomena of rod-cone interactions, photon noise, and spatial summation. Threshold test luminances were obtained with steady background fields at mesopic and photopic light levels (i.e., 0.06–110 cd/m2) for retinal eccentricities from 0° to 15° using three combinations of background/test field sizes (i.e., 10°/2°, 10°/0.45°, and 1°/0.45°). A two-channel Maxwellian view optical system was employed to eliminate pupil effects on the measured thresholds. A model based on visual mechanisms that were described in the literature was optimized to fit the measured luminance thresholds in all experimental conditions. Our results can be described by a combination of visual mechanisms. We determined how spatial summation changed with eccentricity and how subtractive adaptation changed with eccentricity and background field size. According to our model, photon noise plays a significant role to explain contrast detection thresholds measured with the 1/0.45° background/test size combination at mesopic luminances and at off-axis eccentricities. In these conditions, our data reflect the presence of rod-cone interaction for eccentricities between 6° and 9° and luminances between 0.6 and 5 cd/m2. In spite of the increasing noise effects with eccentricity, results also show that the visual system tends to maintain a constant signal-to-noise ratio in the off-axis detection task over the whole mesopic range. PMID:27210038

  6. Influence of background size, luminance and eccentricity on different adaptation mechanisms.

    PubMed

    Gloriani, Alejandro H; Matesanz, Beatriz M; Barrionuevo, Pablo A; Arranz, Isabel; Issolio, Luis; Mar, Santiago; Aparicio, Juan A

    2016-08-01

    Mechanisms of light adaptation have been traditionally explained with reference to psychophysical experimentation. However, the neural substrata involved in those mechanisms remain to be elucidated. Our study analyzed links between psychophysical measurements and retinal physiological evidence with consideration for the phenomena of rod-cone interactions, photon noise, and spatial summation. Threshold test luminances were obtained with steady background fields at mesopic and photopic light levels (i.e., 0.06-110 cd/m²) for retinal eccentricities from 0° to 15° using three combinations of background/test field sizes (i.e., 10°/2°, 10°/0.45°, and 1°/0.45°). A two-channel Maxwellian view optical system was employed to eliminate pupil effects on the measured thresholds. A model based on visual mechanisms that were described in the literature was optimized to fit the measured luminance thresholds in all experimental conditions. Our results can be described by a combination of visual mechanisms. We determined how spatial summation changed with eccentricity and how subtractive adaptation changed with eccentricity and background field size. According to our model, photon noise plays a significant role to explain contrast detection thresholds measured with the 1/0.45° background/test size combination at mesopic luminances and at off-axis eccentricities. In these conditions, our data reflect the presence of rod-cone interaction for eccentricities between 6° and 9° and luminances between 0.6 and 5 cd/m². In spite of the increasing noise effects with eccentricity, results also show that the visual system tends to maintain a constant signal-to-noise ratio in the off-axis detection task over the whole mesopic range. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Infrared image background modeling based on improved Susan filtering

    NASA Astrophysics Data System (ADS)

    Yuehua, Xia

    2018-02-01

    When the SUSAN filter is used to model the background of an infrared image, its isotropic Gaussian kernel lacks directional selectivity. After filtering, the edge information of the image is poorly preserved, leaving many edge singular points in the difference image and increasing the difficulty of target detection. To solve this problem, anisotropy is introduced in this paper: an anisotropic Gaussian filter replaces the Gaussian filter in the SUSAN filter operator. First, an anisotropic gradient operator computes the horizontal and vertical gradients at each image point to determine the direction of the filter's long axis. Second, the smoothness of the point's local area and neighborhood is used to calculate the variances along the filter's long and short axes. Next, the first-order norm of the difference between the local gray levels and their mean determines the threshold of the SUSAN filter. Finally, the constructed SUSAN filter is convolved with the image to obtain the background image, and the difference between the background image and the original image is computed. The background modeling quality is evaluated by Mean Squared Error (MSE), Structural Similarity (SSIM), and local Signal-to-Noise Ratio Gain (GSNR). Compared with the traditional filtering algorithm, the improved SUSAN filter achieves a better background modeling effect: edge information in the image is effectively preserved, dim small targets are effectively enhanced in the difference image, and the false alarm rate is greatly reduced.

  8. Quantitative coronary angiography using image recovery techniques for background estimation in unsubtracted images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Jerry T.; Kamyar, Farzad; Molloi, Sabee

    2007-10-15

    Densitometry measurements have been performed previously using subtracted images. However, digital subtraction angiography (DSA) in coronary angiography is highly susceptible to misregistration artifacts due to the temporal separation of background and target images. Misregistration artifacts due to respiration and patient motion occur frequently, and organ motion is unavoidable. Quantitative densitometric techniques would be more clinically feasible if they could be implemented using unsubtracted images. The goal of this study is to evaluate image recovery techniques for densitometry measurements using unsubtracted images. A humanoid phantom and eight swine (25-35 kg) were used to evaluate the accuracy and precision of the following image recovery techniques: Local averaging (LA), morphological filtering (MF), linear interpolation (LI), and curvature-driven diffusion image inpainting (CDD). Images of iodinated vessel phantoms placed over the heart of the humanoid phantom or swine were acquired. In addition, coronary angiograms were obtained after power injections of a nonionic iodinated contrast solution in an in vivo swine study. Background signals were estimated and removed with LA, MF, LI, and CDD. Iodine masses in the vessel phantoms were quantified and compared to known amounts. Moreover, the total iodine in left anterior descending arteries was measured and compared with DSA measurements. In the humanoid phantom study, the average root mean square errors associated with quantifying iodine mass using LA and MF were approximately 6% and 9%, respectively. The corresponding average root mean square errors associated with quantifying iodine mass using LI and CDD were both approximately 3%. In the in vivo swine study, the root mean square errors associated with quantifying iodine in the vessel phantoms with LA and MF were approximately 5% and 12%, respectively. The corresponding average root mean square errors using LI and CDD were both 3%. The standard

  9. Use of Caval Subtraction 2D Phase-Contrast MR Imaging to Measure Total Liver and Hepatic Arterial Blood Flow: Preclinical Validation and Initial Clinical Translation.

    PubMed

    Chouhan, Manil D; Mookerjee, Rajeshwar P; Bainbridge, Alan; Walker-Samuel, Simon; Davies, Nathan; Halligan, Steve; Lythgoe, Mark F; Taylor, Stuart A

    2016-09-01

    Purpose To validate caval subtraction two-dimensional (2D) phase-contrast magnetic resonance (MR) imaging measurements of total liver blood flow (TLBF) and hepatic arterial fraction in an animal model and evaluate consistency and reproducibility in humans. Materials and Methods Approval from the institutional ethical committee for animal care and research ethics was obtained. Fifteen Sprague-Dawley rats underwent 2D phase-contrast MR imaging of the portal vein (PV) and infrahepatic and suprahepatic inferior vena cava (IVC). TLBF and hepatic arterial flow were estimated by subtracting infrahepatic from suprahepatic IVC flow and PV flow from estimated TLBF, respectively. Direct PV transit-time ultrasonography (US) and fluorescent microsphere measurements of hepatic arterial fraction were the standards of reference. Thereafter, consistency of caval subtraction phase-contrast MR imaging-derived TLBF and hepatic arterial flow was assessed in 13 volunteers (mean age, 28.3 years ± 1.4) against directly measured phase-contrast MR imaging PV and proper hepatic arterial inflow; reproducibility was measured after 7 days. Bland-Altman analysis of agreement and coefficient of variation comparisons were undertaken. Results There was good agreement between PV flow measured with phase-contrast MR imaging and that measured with transit-time US (mean difference, -3.5 mL/min/100 g; 95% limits of agreement [LOA], ±61.3 mL/min/100 g). Hepatic arterial fraction obtained with caval subtraction agreed well with those with fluorescent microspheres (mean difference, 4.2%; 95% LOA, ±20.5%). Good consistency was demonstrated between TLBF in humans measured with caval subtraction and direct inflow phase-contrast MR imaging (mean difference, -1.3 mL/min/100 g; 95% LOA, ±23.1 mL/min/100 g). TLBF reproducibility at 7 days was similar between the two methods (95% LOA, ±31.6 mL/min/100 g vs ±29.6 mL/min/100 g). Conclusion Caval subtraction phase-contrast MR imaging is a simple and clinically
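
    The caval subtraction arithmetic itself is simple; a sketch with made-up flow values (not measurements from the study):

```python
def caval_subtraction(suprahepatic_ivc, infrahepatic_ivc, portal_vein):
    """Estimate total liver blood flow (TLBF) and hepatic arterial flow
    from phase-contrast MR flow measurements (all in mL/min/100 g):
    TLBF = suprahepatic IVC - infrahepatic IVC (all hepatic outflow),
    arterial = TLBF - portal venous inflow."""
    tlbf = suprahepatic_ivc - infrahepatic_ivc
    hepatic_arterial = tlbf - portal_vein
    arterial_fraction = hepatic_arterial / tlbf
    return tlbf, hepatic_arterial, arterial_fraction

# Illustrative values only:
tlbf, ha, frac = caval_subtraction(300.0, 180.0, 90.0)
print(tlbf, ha, round(frac, 2))  # → 120.0 30.0 0.25
```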

  10. Image quality improvement in three-dimensional time-of-flight magnetic resonance angiography using the subtraction method for brain and temporal bone diseases.

    PubMed

    Peng, Shu-Hui; Shen, Chao-Yu; Wu, Ming-Chi; Lin, Yue-Der; Huang, Chun-Huang; Kang, Ruei-Jin; Tyan, Yeu-Sheng; Tsao, Teng-Fu

    2013-08-01

    Time-of-flight (TOF) magnetic resonance (MR) angiography is based on flow-related enhancement using the T1-weighted spoiled gradient echo, or the fast low-angle shot gradient echo sequence. However, materials with short T1 relaxation times may show hyperintensity signals and contaminate the TOF images. The objective of our study was to determine whether subtraction three-dimensional (3D) TOF MR angiography improves image quality in brain and temporal bone diseases with unwanted contaminations with short T1 relaxation times. During the 12-month study period, patients who had masses with short T1 relaxation times noted on precontrast T1-weighted brain MR images and 24 healthy volunteers were scanned using conventional and subtraction 3D TOF MR angiography. The qualitative evaluation of each MR angiogram was based on signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and scores in three categories, namely, (1) presence of misregistration artifacts, (2) ability to display arterial anatomy selectively (without contamination by materials with short T1 relaxation times), and (3) arterial flow-related enhancement. We included 12 patients with intracranial hematomas, brain tumors, or middle-ear cholesterol granulomas. Subtraction 3D TOF MR angiography yielded higher CNRs between the area of the basilar artery (BA) and normal-appearing parenchyma of the brain and lower SNRs in the area of the BA compared with the conventional technique (147.7 ± 77.6 vs. 130.6 ± 54.2, p < 0.003 and 162.5 ± 79.9 vs. 194.3 ± 62.3, p < 0.001, respectively) in all 36 cases. The 3D subtraction angiography did not deteriorate image quality with misregistration artifacts and showed a better selective display of arteries (p < 0.0001) and arterial flow-related enhancement (p < 0.044) than the conventional method. Subtraction 3D TOF MR angiography is more appropriate than the conventional method in improving the image quality in brain and temporal bone diseases with unwanted contaminations

  11. An AK-LDMeans algorithm based on image clustering

    NASA Astrophysics Data System (ADS)

    Chen, Huimin; Li, Xingwei; Zhang, Yongbin; Chen, Nan

    2018-03-01

    Clustering is an effective analytical technique for mining value from unlabeled data. Its ultimate goal is to label unclassified data quickly and correctly. We use road maps from current image processing as the experimental background. In this paper, we propose an AK-LDMeans algorithm that automatically locks the value of K by designing a K-cost fold line, and then selects the clustering centers using a long-distance, high-density method, replacing the traditional initial clustering center selection and further improving the efficiency and accuracy of the traditional K-Means algorithm. The experimental results are compared with those of current clustering algorithms. The algorithm can provide an effective reference in the fields of image processing, machine vision, and data mining.
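
    The abstract gives few details of AK-LDMeans, but the two ideas it names can be sketched generically: choosing K at the bend of a cost fold line, and selecting far-apart points as initial centers. In this sketch the bend is found as the largest second difference of the log cost, and density weighting and Lloyd refinement are omitted for brevity; none of this is the authors' exact procedure:

```python
import math

def kmeans_cost(points, centers):
    """Sum of squared distances from each point to its nearest center."""
    return sum(min((p - c) ** 2 for c in centers) for p in points)

def farthest_point_centers(points, k):
    """Pick k initial centers that are mutually far apart
    (long-distance selection; density weighting omitted)."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points,
                           key=lambda p: min((p - c) ** 2 for c in centers)))
    return centers

def pick_k_by_elbow(points, k_max=6):
    """Choose K where the (log) cost fold line bends most sharply,
    i.e. at the maximum second difference of log cost."""
    costs = [kmeans_cost(points, farthest_point_centers(points, k))
             for k in range(1, k_max + 1)]
    logc = [math.log(c) for c in costs]
    bends = [logc[i - 1] - 2 * logc[i] + logc[i + 1]
             for i in range(1, len(logc) - 1)]
    return bends.index(max(bends)) + 2   # bends[0] corresponds to K = 2

# Three well-separated 1-D clusters:
points = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2, 10.0, 10.1, 10.2]
print(pick_k_by_elbow(points))  # → 3
```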

  12. A Spectral Algorithm for Solving the Relativistic Vlasov-Maxwell Equations

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    2001-01-01

    A spectral method algorithm is developed for the numerical solution of the full six-dimensional Vlasov-Maxwell system of equations. Here, the focus is on the electron distribution function, with positive ions providing a constant background. The algorithm consists of a Jacobi polynomial-spherical harmonic formulation in velocity space and a trigonometric formulation in position space. A transform procedure is used to evaluate nonlinear terms. The algorithm is suitable for performing moderate resolution simulations on currently available supercomputers for both scientific and engineering applications.

  13. Objective criteria for acceptability and constancy tests of digital subtraction angiography.

    PubMed

    de las Heras, Hugo; Torres, Ricardo; Fernández-Soto, José Miguel; Vañó, Eliseo

    2016-01-01

    The aim was to demonstrate an objective procedure to quantify image quality in digital subtraction angiography (DSA) and to suggest thresholds for acceptability and constancy tests. Series of images were obtained in a DSA system simulating a small (paediatric) and a large patient using the dynamic phantom described in the IEC and DIN standards for acceptance tests of DSA equipment. Image quality was quantified using measurements of contrast-to-noise ratio (CNR). Overall scores combining the CNR of 10-100 mg/ml Iodine at a vascular diameter of 1-4 mm in a homogeneous background were defined. Phantom entrance surface air kerma (Ka,e) was measured with an ionisation chamber. The visibility of a low-contrast vessel in DSA images has been identified with a CNR value of 0.50 ± 0.03. Despite using 14 times more Ka,e (8.85 vs 0.63 mGy/image), the protocol for large patients showed a decrease in the overall score CNRsum of 67% (4.21 ± 0.06 vs 2.10 ± 0.05). The uncertainty in the results of the objective method was below 5%. Objective evaluation of DSA images using CNR is feasible with dedicated phantom measurements. An objective methodology has been suggested for acceptance tests compliant with the IEC/DIN standards. The defined overall scores can serve to set a reproducible baseline for constancy tests, as well as to study the device stability within one acquisition series and to compare different imaging protocols. This work provides aspects that have not been included in the recent European guidelines on Criteria for Acceptability of Medical Radiological Equipment. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
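A CNR of the kind used for these scores is commonly computed as (mean ROI signal minus mean background) divided by the background standard deviation. A small sketch on synthetic data; the "phantom" geometry and numbers below are made up for illustration, not the IEC/DIN phantom:

```python
import numpy as np

def cnr(image, vessel_mask, background_mask):
    """Contrast-to-noise ratio between a vessel ROI and homogeneous background."""
    signal = image[vessel_mask].mean()
    bg = image[background_mask]
    return (signal - bg.mean()) / bg.std()

# Synthetic DSA-like frame: flat noisy background plus a faint vessel.
rng = np.random.default_rng(1)
frame = rng.normal(100.0, 5.0, (64, 64))
vessel = np.zeros((64, 64), bool)
vessel[:, 30:33] = True               # a 3-pixel-wide "vessel" ROI
frame[vessel] += 10.0                 # contrast 10 over noise sigma 5
print(round(cnr(frame, vessel, ~vessel), 2))   # roughly 2 (contrast/sigma)
```

An overall score like the paper's CNRsum would then just sum such CNR values over the iodine concentrations and vessel diameters of interest.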

  14. Ocean observations with EOS/MODIS: Algorithm development and post launch studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1996-01-01

    An investigation of the influence of stratospheric aerosol on the performance of the atmospheric correction algorithm is nearly complete. The results indicate how the performance of the algorithm is degraded if the stratospheric aerosol is ignored. Use of the MODIS 1380 nm band to effect a correction for stratospheric aerosols was also studied. Simple algorithms such as subtracting the reflectance at 1380 nm from the visible and near infrared bands can significantly reduce the error; however, only if the diffuse transmittance of the aerosol layer is taken into account. The atmospheric correction code has been modified for use with absorbing aerosols. Tests of the code showed that, in contrast to nonabsorbing aerosols, the retrievals were strongly influenced by the vertical structure of the aerosol, even when the candidate aerosol set was restricted to a set appropriate to the absorbing aerosol. This will further complicate the problem of atmospheric correction in an atmosphere with strongly absorbing aerosols. Our whitecap radiometer system and solar aureole camera were both tested at sea and performed well. An investigation of a technique to remove the effects of residual instrument polarization sensitivity was initiated and applied to an instrument possessing (approx.) 3-4 times the polarization sensitivity expected for MODIS. Preliminary results suggest that for such an instrument, elimination of the polarization effect is possible at the required level of accuracy by estimating the polarization of the top-of-atmosphere radiance to be that expected for a pure Rayleigh scattering atmosphere. This may be of significance for design of a follow-on MODIS instrument. W.M. Balch participated in two month-long cruises to the Arabian sea, measuring coccolithophore abundance, production, and optical properties.
A thorough understanding of the relationship between calcite abundance and light scatter, in situ, will provide the basis for a generic suspended calcite algorithm.
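The simple 1380 nm correction described above can be sketched as follows. The band set, reflectance values and diffuse-transmittance factors are placeholders, and the exact form used by Gordon's group (in particular where the transmittance factor enters) is not specified in this abstract; this is one plausible reading:

```python
import numpy as np

# Hypothetical top-of-atmosphere reflectances by band (values invented)
bands_nm = np.array([443, 555, 670, 865])
rho_toa = np.array([0.085, 0.052, 0.031, 0.018])
rho_1380 = 0.004                       # stratospheric-aerosol/cirrus signal

# Assumed diffuse transmittance of the stratospheric layer at each band;
# ignoring it (t = 1) is the "simple subtraction" the abstract warns about.
t_diffuse = np.array([0.98, 0.985, 0.99, 0.992])

# Remove the stratospheric contribution, scaled by the layer transmittance.
rho_corrected = rho_toa - t_diffuse * rho_1380
```

The point of the abstract is only that the scaling by `t_diffuse` matters; the actual MODIS coefficients would come from radiative-transfer modelling.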

  15. An Analysis of Kindergarten and First Grade Children's Addition and Subtraction Problem Solving Modeling and Accuracy.

    ERIC Educational Resources Information Center

    Shores, Jay H.; Underhill, Robert G.

    A study was undertaken of the effects of formal education and conservation of numerousness on addition and subtraction problem types. Thirty-six kindergarten and 36 first-grade subjects randomly selected from one area of a school district were administered measures of conservation, problem-solving success, and modeling ability. Following factor…

  16. Heritage Language Learners in Mixed Spanish Classes: Subtractive Practices and Perceptions of High School Spanish Teachers

    ERIC Educational Resources Information Center

    Randolph, Linwood J., Jr.

    2017-01-01

    This qualitative study investigated the language ideologies and instructional practices of an entire Spanish language faculty at a high school in a new gateway state for immigration. The study examined additive and subtractive practices of teachers as they strove to teach Spanish to heritage language learners (HLLs) enrolled in mixed…

  17. An algorithm to improve speech recognition in noise for hearing-impaired listeners

    PubMed Central

    Healy, Eric W.; Yoho, Sarah E.; Wang, Yuxuan; Wang, DeLiang

    2013-01-01

    Despite considerable effort, monaural (single-microphone) algorithms capable of increasing the intelligibility of speech in noise have remained elusive. Successful development of such an algorithm is especially important for hearing-impaired (HI) listeners, given their particular difficulty in noisy backgrounds. In the current study, an algorithm based on binary masking was developed to separate speech from noise. Unlike the ideal binary mask, which requires prior knowledge of the premixed signals, the masks used to segregate speech from noise in the current study were estimated by training the algorithm on speech not used during testing. Sentences were mixed with speech-shaped noise and with babble at various signal-to-noise ratios (SNRs). Testing using normal-hearing and HI listeners indicated that intelligibility increased following processing in all conditions. These increases were larger for HI listeners, for the modulated background, and for the least-favorable SNRs. They were also often substantial, allowing several HI listeners to improve intelligibility from scores near zero to values above 70%. PMID:24116438
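The ideal binary mask referenced here keeps time-frequency units whose local SNR exceeds a criterion and zeroes the rest. A toy magnitude-spectrogram sketch follows; note the study estimated such masks with a trained classifier, whereas this toy version uses the premixed signals directly (which is exactly what the ideal mask requires and a deployed system cannot have):

```python
import numpy as np

def ideal_binary_mask(speech_tf, noise_tf, lc_db=-6.0):
    """1 where the local speech-to-noise ratio exceeds the criterion (dB)."""
    eps = 1e-12
    snr_db = 10 * np.log10((speech_tf ** 2 + eps) / (noise_tf ** 2 + eps))
    return (snr_db > lc_db).astype(float)

rng = np.random.default_rng(0)
speech = rng.rayleigh(1.0, (64, 100))      # synthetic |STFT| magnitudes
noise = rng.rayleigh(0.5, (64, 100))
mixture = speech + noise                   # crude magnitude-domain mixing

mask = ideal_binary_mask(speech, noise)
separated = mask * mixture                 # keep speech-dominant T-F units
```

Re-synthesis to a waveform would invert the STFT with the mixture phase; only the masking step is shown here.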

  18. Lidar detection algorithm for time and range anomalies.

    PubMed

    Ben-David, Avishai; Davidson, Charles E; Vanderbeek, Richard G

    2007-10-10

    A new detection algorithm for lidar applications has been developed. The detection is based on hyperspectral anomaly detection that is implemented for time anomaly where the question "is a target (aerosol cloud) present at range R within time t(1) to t(2)" is addressed, and for range anomaly where the question "is a target present at time t within ranges R(1) and R(2)" is addressed. A detection score significantly different in magnitude from the detection scores for background measurements suggests that an anomaly (interpreted as the presence of a target signal in space/time) exists. The algorithm employs an option for a preprocessing stage where undesired oscillations and artifacts are filtered out with a low-rank orthogonal projection technique. The filtering technique adaptively removes the one over range-squared dependence of the background contribution of the lidar signal and also aids visualization of features in the data when the signal-to-noise ratio is low. A Gaussian-mixture probability model for two hypotheses (anomaly present or absent) is computed with an expectation-maximization algorithm to produce a detection threshold and probabilities of detection and false alarm. Results of the algorithm for CO(2) lidar measurements of bioaerosol clouds Bacillus atrophaeus (formerly known as Bacillus subtilis niger, BG) and Pantoea agglomerans, Pa (formerly known as Erwinia herbicola, Eh) are shown and discussed.
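The Gaussian-mixture step can be reproduced with a small EM loop on one-dimensional detection scores. This generic two-component sketch (not the paper's implementation) recovers the background and anomaly modes, from which a threshold and detection/false-alarm probabilities follow:

```python
import numpy as np
from scipy.stats import norm

def em_two_gaussians(scores, iters=200):
    """EM for a two-component 1-D Gaussian mixture (background vs anomaly)."""
    mu = np.percentile(scores, [10, 90]).astype(float)
    sigma = np.array([scores.std(), scores.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each score
        like = pi * norm.pdf(scores[:, None], mu, sigma)
        resp = like / like.sum(1, keepdims=True)
        # M-step: weighted means, standard deviations and mixing weights
        nk = resp.sum(0)
        mu = (resp * scores[:, None]).sum(0) / nk
        sigma = np.sqrt((resp * (scores[:, None] - mu) ** 2).sum(0) / nk)
        pi = nk / len(scores)
    return pi, mu, sigma

rng = np.random.default_rng(2)
scores = np.concatenate([rng.normal(0, 1, 900),     # background scores
                         rng.normal(5, 1, 100)])    # anomaly (target) scores
pi, mu, sigma = em_two_gaussians(scores)
```

A detection threshold can then be placed where the two weighted component densities cross; the tail areas of each Gaussian beyond it give the error probabilities.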

  19. [A new peak detection algorithm of Raman spectra].

    PubMed

    Jiang, Cheng-Zhi; Sun, Qiang; Liu, Ying; Liang, Jing-Qiu; An, Yan; Liu, Bing

    2014-01-01

    The authors proposed a new Raman peak recognition method named the bi-scale correlation algorithm. The algorithm uses the combination of the correlation coefficient and the local signal-to-noise ratio under two scales to achieve Raman peak identification. We compared the performance of the proposed algorithm with that of the traditional continuous wavelet transform method in MATLAB, and then tested the algorithm on real Raman spectra. The results show that the average time for identifying a Raman spectrum is 0.51 s with the algorithm, while it is 0.71 s with the continuous wavelet transform. When the signal-to-noise ratio of a Raman peak is greater than or equal to 6 (modern Raman spectrometers feature an excellent signal-to-noise ratio), the recognition accuracy with the algorithm is higher than 99%, while it is less than 84% with the continuous wavelet transform method. The mean and the standard deviation of the peak-position identification error of the algorithm are both smaller than those of the continuous wavelet transform method. Simulation analysis and experimental verification prove that the new algorithm possesses the following advantages: no need for human intervention, no need for de-noising or background-removal operations, higher recognition speed and higher recognition accuracy. The proposed algorithm is well suited to Raman peak identification.
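The bi-scale idea, accepting a peak only where a peak-shaped template correlates strongly at two different scales, can be sketched as below. The Gaussian templates, scales and threshold are illustrative guesses, not the published algorithm's parameters:

```python
import numpy as np

def local_corr(signal, template):
    """Sliding Pearson correlation of the signal with a template."""
    t = template - template.mean()
    n = len(t)
    out = np.zeros(len(signal))
    for i in range(len(signal) - n + 1):
        w = signal[i:i + n] - signal[i:i + n].mean()
        denom = np.linalg.norm(w) * np.linalg.norm(t)
        out[i + n // 2] = (w @ t) / denom if denom > 0 else 0.0
    return out

def gaussian(width):
    xs = np.arange(-3 * width, 3 * width + 1)
    return np.exp(-xs ** 2 / (2.0 * width ** 2))

def find_peaks_biscale(y, w1=3, w2=9, rho_min=0.75):
    """Flag local maxima where templates at BOTH scales correlate strongly."""
    c = np.minimum(local_corr(y, gaussian(w1)), local_corr(y, gaussian(w2)))
    return [i for i in range(1, len(y) - 1)
            if c[i] > rho_min and c[i] >= c[i - 1] and c[i] >= c[i + 1]]

x = np.arange(400)
y = np.exp(-(x - 120) ** 2 / 50.0) + 0.7 * np.exp(-(x - 280) ** 2 / 80.0)
y += np.random.default_rng(3).normal(0, 0.01, x.size)
peaks = find_peaks_biscale(y)
```

Because correlation is amplitude-invariant, weak and strong peaks are treated alike, which is one reason such methods need no explicit background removal.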

  20. An Expectation-Maximization Algorithm for Amplitude Estimation of Saturated Optical Transient Signals.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kagie, Matthew J.; Lanterman, Aaron D.

    2017-12-01

    This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
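An EM treatment of right-censored (saturated) Poisson counts with a known shape and background can be sketched as follows. The E-step uses E[X | X >= C] = lambda * P(X >= C-1) / P(X >= C) for X ~ Poisson(lambda); the M-step here takes one multiplicative ML-EM-style update for the amplitude, which is an assumption of this sketch rather than the authors' exact algorithm:

```python
import numpy as np
from scipy.stats import poisson

def em_censored_amplitude(y, shape, bg, cens, iters=100):
    """EM estimate of amplitude A, where counts x_t ~ Poisson(A*shape_t + bg_t)
    are observed as y_t = min(x_t, cens) (right-censored / saturated)."""
    A = max((y.sum() - bg.sum()) / shape.sum(), 1e-6)
    sat = y >= cens                       # saturated (censored) bins
    for _ in range(iters):
        lam = A * shape + bg
        # E-step: replace saturated counts by E[x | x >= cens]
        x = y.astype(float)
        num = lam[sat] * poisson.sf(cens - 2, lam[sat])  # lam * P(x >= cens-1)
        den = poisson.sf(cens - 1, lam[sat])             # P(x >= cens)
        x[sat] = num / np.maximum(den, 1e-300)
        # M-step: one multiplicative (ML-EM style) amplitude update
        A *= (shape * x / lam).sum() / shape.sum()
    return A
```

On a simulated Gaussian-shaped pulse with amplitude 50 over a flat background of 2, clipped at 30 counts, the estimate lands close to the true amplitude despite the saturated core.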

  1. Rotational digital subtraction angiography of the renal arteries: technique and evaluation in the study of native and transplant renal arteries.

    PubMed

    Seymour, H R; Matson, M B; Belli, A M; Morgan, R; Kyriou, J; Patel, U

    2001-02-01

    Rotational digital subtraction angiography (RDSA) allows multidirectional angiographic acquisitions with a single injection of contrast medium. The role of RDSA was evaluated in 60 patients referred over a 7-month period for diagnostic renal angiography and 12 patients referred for renal transplant studies. All angiograms were assessed for their diagnostic value, the presence of anomalies and the quantity of contrast medium used. The effective dose for native renal RDSA was determined. 41 (68.3%) native renal RDSA images and 8 (66.7%) transplant renal RDSA images were of diagnostic quality. Multiple renal arteries were identified in 9/41 (22%) native renal RDSA diagnostic images. The mean volume of contrast medium in the RDSA runs was 51.2 ml and 50 ml for native and transplant renal studies, respectively. The mean effective dose for 120 degrees native renal RDSA was 2.36 mSv, equivalent to 1 year's mean background radiation. Those RDSA images that were non-diagnostic allowed accurate prediction of the optimal angle for further static angiographic series, which is of great value in transplant renal vessels.

  2. Subtractive Plasma-Assisted-Etch Process for Developing High Performance Nanocrystalline Zinc-Oxide Thin-Film-Transistors

    DTIC Science & Technology

    2015-03-26

    Thesis by Thomas M. Donigan, First Lieutenant, USAF (report no. AFIT-ENG-MS-15-M-027), presented to the Faculty, Department of Electrical and …, Air Force Institute of Technology, Air University, Department of the Air Force.

  3. Background radiation in inelastic X-ray scattering and X-ray emission spectroscopy. A study for Johann-type spectrometers

    NASA Astrophysics Data System (ADS)

    Paredes Mellone, O. A.; Bianco, L. M.; Ceppi, S. A.; Goncalves Honnicke, M.; Stutz, G. E.

    2018-06-01

    A study of the background radiation in inelastic X-ray scattering (IXS) and X-ray emission spectroscopy (XES) based on an analytical model is presented. The calculation model considers spurious radiation originating from elastic and inelastic scattering processes along the beam paths of a Johann-type spectrometer. The dependence of the background radiation intensity on the medium of the beam paths (air and helium), the analysed energy and the radius of the Rowland circle was studied. The present study shows that both for IXS and XES experiments the background radiation is dominated by spurious radiation owing to scattering processes along the sample-analyser beam path. For IXS experiments the spectral distribution of the main component of the background radiation shows a weak linear dependence on the energy in most cases. In the case of XES, a strong non-linear behaviour of the background radiation intensity was predicted for energy analysis very close to the backdiffraction condition, with a rapid increase in intensity as the analyser Bragg angle approaches π/2. The contribution of the analyser-detector beam path is significantly weaker and resembles the spectral distribution of the measured spectra. The present results show that for usual experimental conditions no appreciable structures are introduced by the background radiation into the measured spectra, in either IXS or XES experiments. The usefulness of properly calculating the background profile is demonstrated in a background subtraction procedure for a real experimental situation. The calculation model was able to simulate with high accuracy the energy dependence of the background radiation intensity measured in a particular XES experiment with air beam paths.

  4. Real-time detection of small and dim moving objects in IR video sequences using a robust background estimator and a noise-adaptive double thresholding

    NASA Astrophysics Data System (ADS)

    Zingoni, Andrea; Diani, Marco; Corsini, Giovanni

    2016-10-01

    We developed an algorithm for automatically detecting small and poorly contrasted (dim) moving objects in real-time, within video sequences acquired through a steady infrared camera. The algorithm is suitable for different situations since it is independent of the background characteristics and of changes in illumination. Unlike other solutions, small objects of any size (up to single-pixel), either hotter or colder than the background, can be successfully detected. The algorithm is based on accurately estimating the background at the pixel level and then rejecting it. A novel approach permits background estimation to be robust to changes in the scene illumination and to noise, and not to be biased by the transit of moving objects. Care was taken in avoiding computationally costly procedures, in order to ensure the real-time performance even using low-cost hardware. The algorithm was tested on a dataset of 12 video sequences acquired in different conditions, providing promising results in terms of detection rate and false alarm rate, independently of background and objects characteristics. In addition, the detection map was produced frame by frame in real-time, using cheap commercial hardware. The algorithm is particularly suitable for applications in the fields of video-surveillance and computer vision. Its reliability and speed permit it to be used also in critical situations, like in search and rescue, defence and disaster monitoring.
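A stripped-down version of the pipeline described, a per-pixel background estimate, a robust noise estimate, double (hysteresis) thresholding, and background updates that exclude detected pixels so transiting objects do not bias the estimate, might look like this; all constants are illustrative, not the authors' values:

```python
import numpy as np

def detect_dim_targets(frames, alpha=0.05, k_low=3.0, k_high=5.0):
    """Per-pixel running background with noise-adaptive double thresholding."""
    bg = frames[0].astype(float)
    detections = []
    for frame in frames[1:]:
        residual = frame - bg
        dev = np.abs(residual)            # objects may be hotter OR colder
        # robust noise estimate via the median absolute deviation
        sigma = 1.4826 * np.median(np.abs(residual - np.median(residual)))
        high = dev > k_high * sigma
        low = dev > k_low * sigma
        # simple hysteresis: keep low-threshold pixels 4-adjacent to a high one
        grow = high.copy()
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            grow |= np.roll(high, shift, axis=(0, 1))
        det = low & grow
        detections.append(det)
        # update the background only where nothing was detected
        bg[~det] += alpha * residual[~det]
    return detections
```

On synthetic frames with a single moving hot pixel over flat noise, the detection map isolates the object with essentially no false alarms.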

  5. Successive ratio subtraction as a novel manipulation of ratio spectra for quantitative determination of a mixture of furosemide, spironolactone and canrenone

    NASA Astrophysics Data System (ADS)

    Emam, Aml A.; Abdelaleem, Eglal A.; Naguib, Ibrahim A.; Abdallah, Fatma F.; Ali, Nouruddin W.

    2018-03-01

    Furosemide and spironolactone are commonly prescribed antihypertensive drugs. Canrenone is the main degradation product and main metabolite of spironolactone. Ratio subtraction and extended ratio subtraction spectrophotometric methods were previously applied to the quantitation of binary mixtures only. An extension of the above-mentioned methods, successive ratio subtraction, is introduced in the present work for the quantitative determination of ternary mixtures, exemplified by furosemide, spironolactone and canrenone. Manipulating the ratio spectra of the ternary mixture allowed their determination at 273.6 nm, 285 nm and 240 nm in the concentration ranges of 2-16 μg mL-1, 4-32 μg mL-1 and 1-18 μg mL-1 for furosemide, spironolactone and canrenone, respectively. Method specificity was ensured by application to laboratory-prepared mixtures. The introduced method was shown to be accurate and precise. Validation of the developed method was carried out with respect to ICH guidelines, and its validity was further confirmed by application to the pharmaceutical formulation. Statistical comparison between the obtained results and those obtained by the reported HPLC method was performed using Student's t-test and the F-ratio test, and no significant difference was observed.
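Plain ratio subtraction for a binary mixture, the building block that the successive version applies repeatedly, can be demonstrated on synthetic Gaussian bands: divide the mixture spectrum by the interferent's normalized spectrum, read the constant off the plateau where the analyte does not absorb, subtract it, and multiply back. The band positions and concentrations below are invented for illustration:

```python
import numpy as np

wl = np.linspace(200, 400, 1000)          # wavelength grid, nm

def band(center, width):
    """Synthetic Gaussian absorption band (unit concentration)."""
    return np.exp(-(wl - center) ** 2 / (2 * width ** 2))

spec_x = band(260, 12)                    # analyte X
spec_y = band(300, 25)                    # interferent Y
mixture = 3.0 * spec_x + 2.0 * spec_y     # Beer-Lambert additivity

# Ratio subtraction: divide by Y's unit spectrum; where X does not absorb
# (long wavelengths here) the ratio is a constant equal to Y's amount.
ratio = mixture / spec_y
const = ratio[wl > 370].mean()            # plateau region: X ~ 0 there
recovered_x = (ratio - const) * spec_y    # recovered_x ~ 3.0 * spec_x
```

The successive method repeats this divide/plateau/subtract/multiply cycle with a second divisor to peel a ternary mixture apart one component at a time.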

  6. How to build institutionalization on students: a pilot experiment on a didactical design of addition and subtraction involving negative integers

    NASA Astrophysics Data System (ADS)

    Fuadiah, N. F.; Suryadi, D.; Turmudi

    2018-05-01

    This study focuses on the design of a didactical situation in addition and subtraction involving negative integers at the pilot experiment phase. As we know, negative numbers become an obstacle for students in solving problems related to them. This study aims to create a didactical design that can assist students in understanding the addition and subtraction. Another expected result in this way is that students are introduced to the characteristics of addition and subtraction of integers. The design was implemented on 32 seventh grade students in one of the classes in a junior secondary school as the pilot experiment. Learning activities were observed thoroughly including the students’ responses that emerged during the learning activities. The written documentation of the students was also used to support the analysis in the learning activities. The results of the analysis showed that this method could help the students perform a large number of integer operations that could not be done with a number line. The teacher’s support as a didactical potential contract was still needed to encourage institutionalization processes. The results of the design analysis used as the basis of the revision are expected to be implemented by the teacher in the teaching experiment.

  7. Assessment of blood supply to intracranial pathologies in children using MR digital subtraction angiography.

    PubMed

    Chooi, Weng Kong; Connolly, Dan J A; Coley, Stuart C; Griffiths, Paul D

    2006-10-01

    MR digital subtraction angiography (MR-DSA) is a contrast-enhanced MR angiographic sequence that enables time-resolved evaluation of the cerebral circulation. We describe the feasibility and technical success of our attempts at MR-DSA for the assessment of intracranial pathology in children. We performed MR-DSA in 15 children (age range 5 days to 16 years) referred for MR imaging because of known or suspected intracranial pathology that required a dynamic assessment of the cerebral vasculature. MR-DSA consisted of a thick (6-10 mm) slice-selective RF-spoiled fast gradient-echo sequence (RF-FAST) acquired before and during passage of an intravenously administered bolus of Gd-DTPA. The images were subtracted and viewed as a cine loop. MR-DSA was performed successfully in all patients. High-flow lesions were shown in four patients; these included vein of Galen aneurysmal malformation, dural fistula, and two partially treated arteriovenous malformations (AVMs). Low-flow lesions were seen in three patients, all of which were tumours. Normal flow was confirmed in eight patients including two with successfully treated AVMs, and in three patients with cavernomas. Our early experience suggests that MR-DSA is a realistic, non-invasive alternative to catheter angiography in certain clinical settings.

  8. [Raman spectroscopy fluorescence background correction and its application in clustering analysis of medicines].

    PubMed

    Chen, Shan; Li, Xiao-ning; Liang, Yi-zeng; Zhang, Zhi-min; Liu, Zhao-xia; Zhang, Qi-ming; Ding, Li-xia; Ye, Fei

    2010-08-01

    During Raman spectroscopic analysis, fluorescence from organic molecules and contaminants can obscure or swamp the Raman signal. The present study starts from Raman spectra of prednisone acetate tablets and glibenclamide tablets, which were acquired with a BWTek i-Raman spectrometer. The background is corrected by the R package baselineWavelet. Then principal component analysis and random forests are used to perform clustering analysis. Through analyzing the Raman spectra of the two medicines, the accuracy and validity of this background-correction algorithm are checked, and the influence of fluorescence background on the clustering analysis of Raman spectra is discussed. It is concluded that correcting the fluorescence background is important for further analysis, and an effective background-correction solution is provided for clustering and other analyses.
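baselineWavelet is an R package; as an illustration of the same preprocessing goal in Python, here is the widely used asymmetric-least-squares (Eilers-Boelens) baseline, a different correction method from the paper's wavelet approach, shown only as a stand-in:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def als_baseline(y, lam=1e5, p=0.01, iters=10):
    """Asymmetric least squares baseline: a smooth curve that hugs the lower
    envelope of the spectrum, leaving sharp Raman peaks above it."""
    n = len(y)
    D = sparse.diags([1, -2, 1], [0, 1, 2], shape=(n - 2, n))  # 2nd differences
    w = np.ones(n)
    for _ in range(iters):
        W = sparse.diags(w)
        z = spsolve((W + lam * D.T @ D).tocsc(), w * y)
        w = np.where(y > z, p, 1 - p)   # small weight above the fit (peaks)
    return z

x = np.linspace(0, 100, 1000)
fluorescence = 5 * np.exp(-((x - 60) ** 2) / 2000)   # broad background
peaks = np.exp(-((x - 30) ** 2) / 2) + 0.8 * np.exp(-((x - 70) ** 2) / 1.5)
spectrum = fluorescence + peaks
corrected = spectrum - als_baseline(spectrum)
```

The smoothness penalty `lam` and asymmetry `p` are typical textbook values and would need tuning per instrument.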

  9. An open-source framework for stress-testing non-invasive foetal ECG extraction algorithms.

    PubMed

    Andreotti, Fernando; Behar, Joachim; Zaunseder, Sebastian; Oster, Julien; Clifford, Gari D

    2016-05-01

    Over the past decades, many studies have been published on the extraction of non-invasive foetal electrocardiogram (NI-FECG) from abdominal recordings. Most of these contributions claim to obtain excellent results in detecting foetal QRS (FQRS) complexes in terms of location. A small subset of authors have investigated the extraction of morphological features from the NI-FECG. However, due to the shortage of available public databases, the large variety of performance measures employed and the lack of open-source reference algorithms, most contributions cannot be meaningfully assessed. This article attempts to address these issues by presenting a standardised methodology for stress testing NI-FECG algorithms, including absolute data, as well as extraction and evaluation routines. To that end, a large database of realistic artificial signals was created, totaling 145.8 h of multichannel data and over one million FQRS complexes. An important characteristic of this dataset is the inclusion of several non-stationary events (e.g. foetal movements, uterine contractions and heart rate fluctuations) that are critical for evaluating extraction routines. To demonstrate our testing methodology, three classes of NI-FECG extraction algorithms were evaluated: blind source separation (BSS), template subtraction (TS) and adaptive methods (AM). Experiments were conducted to benchmark the performance of eight NI-FECG extraction algorithms on the artificial database focusing on: FQRS detection and morphological analysis (foetal QT and T/QRS ratio). The overall median FQRS detection accuracies (i.e. considering all non-stationary events) for the best performing methods in each group were 99.9% for BSS, 97.9% for AM and 96.0% for TS. Both FQRS detections and morphological parameters were shown to heavily depend on the extraction techniques and signal-to-noise ratio. Particularly, it is shown that their evaluation in the source domain, obtained after using a BSS technique, should be
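Of the three algorithm classes compared, template subtraction (TS) is the simplest to sketch: average a window around each maternal R peak to form a beat template, then subtract that template at every peak. A toy version on synthetic signals; the peak locations are taken as known here, whereas a real pipeline would detect them first:

```python
import numpy as np

def template_subtract(ecg, r_peaks, half_width):
    """Template subtraction (TS): subtract the average maternal beat."""
    w = np.arange(-half_width, half_width + 1)
    segs = np.array([ecg[r + w] for r in r_peaks
                     if r - half_width >= 0 and r + half_width < len(ecg)])
    template = segs.mean(axis=0)
    residual = ecg.copy()
    for r in r_peaks:
        if r - half_width >= 0 and r + half_width < len(ecg):
            residual[r + w] -= template
    return residual

# Synthetic abdominal mixture: strong periodic "maternal" beats plus a
# weak faster "foetal" stand-in component and noise.
fs = 250
t = np.arange(10 * fs)
maternal = np.zeros(t.size)
m_peaks = np.arange(125, t.size - 125, 250)      # ~60 bpm
for r in m_peaks:
    maternal[r - 10:r + 11] += np.hamming(21)    # stylized QRS shape
foetal = 0.1 * np.sin(2 * np.pi * 2.2 * t / fs)  # weak 2.2 Hz stand-in
noise = np.random.default_rng(6).normal(0, 0.01, t.size)
mixture = maternal + foetal + noise

cleaned = template_subtract(mixture, m_peaks, 20)
```

Because the maternal beat repeats while the foetal component is asynchronous, averaging cancels the latter out of the template, so subtraction removes the maternal beats and leaves the weak component behind.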

  10. A Toolbox to Improve Algorithms for Insulin-Dosing Decision Support

    PubMed Central

    Donsa, K.; Plank, J.; Schaupp, L.; Mader, J. K.; Truskaller, T.; Tschapeller, B.; Höll, B.; Spat, S.; Pieber, T. R.

    2014-01-01

    Summary. Background: Standardized insulin order sets for subcutaneous basal-bolus insulin therapy are recommended by clinical guidelines for the inpatient management of diabetes. The algorithm-based GlucoTab system electronically assists health care personnel by supporting clinical workflow and providing insulin-dose suggestions. Objective: To develop a toolbox for improving clinical decision-support algorithms. Methods: The toolbox has three main components. 1) Data preparation: data from several heterogeneous sources is extracted, cleaned and stored in a uniform data format. 2) Simulation: the effects of algorithm modifications are estimated by simulating treatment workflows based on real data from clinical trials. 3) Analysis: algorithm performance is measured, analyzed and simulated by using data from three clinical trials with a total of 166 patients. Results: Use of the toolbox led to algorithm improvements as well as the detection of potential individualized subgroup-specific algorithms. Conclusion: These results are a first step towards individualized algorithm modifications for specific patient subgroups. PMID:25024768

  11. A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT

    PubMed Central

    Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.; Pan, Xiaochuan

    2010-01-01

    Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. Methods: The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack–Noo-formula-based filtered backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: They developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories. PMID:20175463

  12. A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.

    2010-01-15

    Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. Methods: The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: They developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories.

  13. A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT.

    PubMed

    Cho, Seungryong; Xia, Dan; Pellizzari, Charles A; Pan, Xiaochuan

    2010-01-01

    Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. They developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories.

  14. Use of Caval Subtraction 2D Phase-Contrast MR Imaging to Measure Total Liver and Hepatic Arterial Blood Flow: Preclinical Validation and Initial Clinical Translation

    PubMed Central

    Walker-Samuel, Simon; Davies, Nathan; Halligan, Steve; Lythgoe, Mark F.

    2016-01-01

    Purpose To validate caval subtraction two-dimensional (2D) phase-contrast magnetic resonance (MR) imaging measurements of total liver blood flow (TLBF) and hepatic arterial fraction in an animal model and evaluate consistency and reproducibility in humans. Materials and Methods Approval from the institutional ethical committee for animal care and research ethics was obtained. Fifteen Sprague-Dawley rats underwent 2D phase-contrast MR imaging of the portal vein (PV) and infrahepatic and suprahepatic inferior vena cava (IVC). TLBF and hepatic arterial flow were estimated by subtracting infrahepatic from suprahepatic IVC flow and PV flow from estimated TLBF, respectively. Direct PV transit-time ultrasonography (US) and fluorescent microsphere measurements of hepatic arterial fraction were the standards of reference. Thereafter, consistency of caval subtraction phase-contrast MR imaging–derived TLBF and hepatic arterial flow was assessed in 13 volunteers (mean age, 28.3 years ± 1.4) against directly measured phase-contrast MR imaging PV and proper hepatic arterial inflow; reproducibility was measured after 7 days. Bland-Altman analysis of agreement and coefficient of variation comparisons were undertaken. Results There was good agreement between PV flow measured with phase-contrast MR imaging and that measured with transit-time US (mean difference, −3.5 mL/min/100 g; 95% limits of agreement [LOA], ±61.3 mL/min/100 g). The hepatic arterial fraction obtained with caval subtraction agreed well with that measured with fluorescent microspheres (mean difference, 4.2%; 95% LOA, ±20.5%). Good consistency was demonstrated between TLBF in humans measured with caval subtraction and direct inflow phase-contrast MR imaging (mean difference, −1.3 mL/min/100 g; 95% LOA, ±23.1 mL/min/100 g). TLBF reproducibility at 7 days was similar between the two methods (95% LOA, ±31.6 mL/min/100 g vs ±29.6 mL/min/100 g). Conclusion Caval subtraction phase-contrast MR imaging is a simple and
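
    The caval-subtraction bookkeeping described above is simple arithmetic; a minimal sketch, with purely illustrative flow values (not measurements from the study):

```python
def caval_subtraction(suprahepatic_ivc, infrahepatic_ivc, portal_vein):
    """Caval-subtraction estimates (all flows in mL/min/100 g):
    TLBF = suprahepatic IVC flow - infrahepatic IVC flow;
    hepatic arterial flow = TLBF - portal venous flow."""
    tlbf = suprahepatic_ivc - infrahepatic_ivc
    hepatic_arterial = tlbf - portal_vein
    arterial_fraction = hepatic_arterial / tlbf
    return tlbf, hepatic_arterial, arterial_fraction

# Illustrative numbers only:
tlbf, ha, frac = caval_subtraction(250.0, 130.0, 100.0)
# tlbf = 120.0, ha = 20.0, frac ≈ 0.167
```

    The appeal of the method is that only venous flows (PV and the two IVC stations) need to be measured directly; the small, hard-to-image proper hepatic artery is obtained by subtraction.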

  15. Global Energetics of Thirty-Eight Large Solar Eruptive Events

    DTIC Science & Technology

    2012-10-17

    is the energy radiated in the narrow GOES band from 1 to 8 Å, obtained directly from background-subtracted data (Section 2.1). The second is the energy... background-subtracted fluxes over the duration of the flare, from the GOES start time (given by NOAA and listed in Table 1) to the time when the flux... had decreased to 10% of the peak value. The background that was subtracted was taken as the lowest flux in the hour or so before and/or after the flare
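
    The integration described in the snippet (background-subtracted flux summed from the flare start until the flux decays to 10% of its peak) might be sketched as follows. The function name and arrays are assumptions, and for simplicity the background is taken as the lowest flux before the start time only, rather than "before and/or after the flare" as in the record:

```python
import numpy as np

def radiated_energy(t, flux, t_start):
    """Integrate background-subtracted flux (e.g. GOES 1-8 A) over a
    flare: from t_start until the net flux first decays to 10% of its
    peak. Background = lowest pre-flare flux (simplified sketch)."""
    background = flux[t < t_start].min()          # lowest pre-flare flux
    net = np.clip(flux - background, 0.0, None)   # background-subtracted
    peak_i = int(net.argmax())
    below = np.nonzero(net[peak_i:] <= 0.1 * net[peak_i])[0]
    end_i = peak_i + (int(below[0]) if below.size else len(net) - 1 - peak_i)
    mask = (t >= t_start) & (t <= t[end_i])
    tm, fm = t[mask], net[mask]
    # trapezoidal integration of the net flux over the flare interval
    return float(np.sum(0.5 * (fm[1:] + fm[:-1]) * np.diff(tm)))
```

    With flux in W/m^2 and time in seconds, the result is a fluence in J/m^2, which is then scaled by the appropriate geometric factor to obtain a radiated energy.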

  16. Topographic prominence discriminator for the detection of short-latency spikes of retinal ganglion cells

    NASA Astrophysics Data System (ADS)

    Choi, Myoung-Hwan; Ahn, Jungryul; Park, Dae Jin; Lee, Sang Min; Kim, Kwangsoo; Cho, Dong-il Dan; Senok, Solomon S.; Koo, Kyo-in; Goo, Yong Sook

    2017-02-01

    Objective. Direct stimulation of retinal ganglion cells in degenerate retinas by implanting epi-retinal prostheses is a recognized strategy for restoration of visual perception in patients with retinitis pigmentosa or age-related macular degeneration. Elucidating the best stimulus-response paradigms in the laboratory using multielectrode arrays (MEA) is complicated by the fact that the short-latency spikes (within 10 ms) elicited by direct retinal ganglion cell (RGC) stimulation are obscured by the stimulus artifact which is generated by the electrical stimulator. Approach. We developed an artifact subtraction algorithm based on topographic prominence discrimination, wherein the duration of prominences within the stimulus artifact is used as a strategy for identifying the artifact for subtraction and clarifying the obfuscated spikes which are then quantified using standard thresholding. Main results. We found that the prominence discrimination based filters perform creditably in simulation conditions by successfully isolating randomly inserted spikes in the presence of simple and even complex residual artifacts. We also show that the algorithm successfully isolated short-latency spikes in an MEA-based recording from degenerate mouse retinas, where the amplitude and frequency characteristics of the stimulus artifact vary according to the distance of the recording electrode from the stimulating electrode. By ROC analysis of false positive and false negative first spike detection rates in a dataset of one hundred and eight RGCs from four retinal patches, we found that the performance of our algorithm is comparable to that of a generally-used artifact subtraction filter algorithm which uses a strategy of local polynomial approximation (SALPA). Significance. We conclude that the application of topographic prominence discrimination is a valid and useful method for subtraction of stimulation artifacts with variable amplitudes and shapes. We propose that our algorithm
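
    Topographic prominence of a 1-D signal can be computed directly from its definition: the height of a local maximum above the higher of the two saddles reached by walking left and right until a taller sample (or the signal edge) is met. The sketch below is a generic illustration of that definition, not the authors' duration-based artifact filter:

```python
import numpy as np

def prominences(x):
    """Return (peak indices, prominences) for a 1-D signal x.
    Prominence = peak height minus the higher of the two key saddles,
    each saddle being the minimum between the peak and the nearest
    taller sample on that side (or the signal edge)."""
    peaks = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] >= x[i + 1]]
    proms = []
    for p in peaks:
        left = x[:p][::-1]                     # samples to the left, nearest first
        stop = int(np.argmax(left > x[p])) if np.any(left > x[p]) else len(left)
        lmin = left[:stop].min() if stop else x[p]
        right = x[p + 1:]                      # samples to the right
        stop = int(np.argmax(right > x[p])) if np.any(right > x[p]) else len(right)
        rmin = right[:stop].min() if stop else x[p]
        proms.append(x[p] - max(lmin, rmin))
    return peaks, np.array(proms)
```

    In the artifact-subtraction setting, prominences (and, in the paper, their durations) computed on the recorded trace let the broad stimulus artifact be distinguished from narrow short-latency spikes before thresholding.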

  17. Parametric Imaging Of Digital Subtraction Angiography Studies For Renal Transplant Evaluation

    NASA Astrophysics Data System (ADS)

    Gallagher, Joe H.; Meaney, Thomas F.; Flechner, Stuart M.; Novick, Andrew C.; Buonocore, Edward

    1981-11-01

    A noninvasive method for diagnosing acute tubular necrosis and rejection would be an important tool for the management of renal transplant patients. From a sequence of digital subtraction angiographic images acquired after an intravenous injection of radiographic contrast material, three parametric images are computed: the maximum contrast, the time at which the maximum contrast is reached, and twice the time at which one half of the maximum contrast is reached. The parametric image of the time to maximum clearly distinguishes normal from abnormal renal function. However, it is the parametric image of twice the time to half maximum that provides some assistance in differentiating acute tubular necrosis from rejection.
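
    The three parametric images are per-pixel summaries of the contrast-time curve; a minimal numpy sketch (the array layout and function name are assumptions, not from the article):

```python
import numpy as np

def parametric_images(stack, times):
    """From a DSA time series `stack` of shape (T, H, W) holding
    contrast values, compute per-pixel parametric images:
      - maximum contrast,
      - time at which the maximum is reached,
      - twice the time at which half the maximum is first reached."""
    cmax = stack.max(axis=0)
    t_peak = times[stack.argmax(axis=0)]
    # first frame index where each pixel reaches half its own maximum
    half_idx = (stack >= 0.5 * cmax).argmax(axis=0)
    two_t_half = 2.0 * times[half_idx]
    return cmax, t_peak, two_t_half
```

    Each output is an (H, W) image that can be displayed directly, which is what makes the representation convenient for visual comparison of the two kidneys or of serial studies.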

  18. A revision of the subtract-with-borrow random number generators

    NASA Astrophysics Data System (ADS)

    Sibidanov, Alexei

    2017-12-01

    The most popular and widely used subtract-with-borrow generator, also known as RANLUX, is reimplemented as a linear congruential generator using large-integer arithmetic with a modulus size of 576 bits. Modern computers, as well as the specific structure of the modulus inferred from RANLUX, allow for the development of a fast modular multiplication, the core of the procedure, which was previously believed to be slow and too costly in terms of computing resources. Our tests show a significant gain in generation speed, comparable with other fast, high-quality random number generators. An additional feature is fast skipping of generator states, leading to a seeding scheme that guarantees the uniqueness of random number sequences. Licensing provisions: GPLv3. Programming languages: C++, C, Assembler.
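
    For orientation, a plain subtract-with-borrow generator with the RANLUX parameters (base b = 2^24, lags r = 24 and s = 10) fits in a few lines; its equivalent linear congruential modulus is b^24 - b^10 + 1 = 2^576 - 2^240 + 1, which is where the 576 bits come from. This sketch omits RANLUX's luxury-level decimation, and the LCG used for seeding is illustrative:

```python
class SubtractWithBorrow:
    """Marsaglia-Zaman subtract-with-borrow generator, RANLUX lags:
    x_n = (x_{n-s} - x_{n-r} - c) mod b, with b = 2^24, r = 24, s = 10,
    where c is the borrow from the previous step. Plain SWB sketch,
    without RANLUX's luxury-level decimation."""
    B, R, S = 1 << 24, 24, 10

    def __init__(self, seed=314159265):
        # Fill the 24-word state with a simple 32-bit LCG (illustrative
        # seeding only, not RANLUX's scheme).
        x, self.state = seed, []
        for _ in range(self.R):
            x = (x * 69069 + 1) % (1 << 32)
            self.state.append(x % self.B)
        self.carry = 0
        self.i = 0          # index of the oldest word, x_{n-r}

    def next(self):
        j = (self.i + self.R - self.S) % self.R   # index of x_{n-s}
        t = self.state[j] - self.state[self.i] - self.carry
        self.carry = 1 if t < 0 else 0            # borrow for next step
        t %= self.B
        self.state[self.i] = t
        self.i = (self.i + 1) % self.R
        return t
```

    The LCG view makes state skipping cheap: jumping ahead n steps amounts to one modular exponentiation of the multiplier, which is what enables the uniqueness-guaranteeing seeding scheme the abstract mentions.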

  19. A Teaching Approach from the Exhaustive Search Method to the Needleman-Wunsch Algorithm

    ERIC Educational Resources Information Center

    Xu, Zhongneng; Yang, Yayun; Huang, Beibei

    2017-01-01

    The Needleman-Wunsch algorithm has become one of the core algorithms in bioinformatics; however, this programming requires more suitable explanations for students with different major backgrounds. In supposing sample sequences and using a simple store system, the connection between the exhaustive search method and the Needleman-Wunsch algorithm…
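
    As a concrete companion to the pedagogical point, the Needleman-Wunsch recurrence itself is a short dynamic program; the scoring parameters below are common teaching defaults (match +1, mismatch -1, gap -1), not values taken from the article:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score: F[i][j] is the best score for aligning
    the prefixes a[:i] and b[:j]."""
    n, m = len(a), len(b)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):          # aligning a[:i] against gaps
        F[i][0] = i * gap
    for j in range(1, m + 1):          # aligning b[:j] against gaps
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # (mis)match
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    return F[n][m]

# e.g. needleman_wunsch("GATTACA", "GCATGCU") evaluates the classic
# textbook pair; a traceback through F would recover the alignment.
```

    The exhaustive-search view enumerates all alignments and scores each; the table F makes the connection explicit, since each cell caches the best score over the exponentially many alignments of the corresponding prefixes.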

  20. Research on infrared dim-point target detection and tracking under sea-sky-line complex background

    NASA Astrophysics Data System (ADS)

    Dong, Yu-xing; Li, Yan; Zhang, Hai-bo

    2011-08-01

    Target detection and tracking in infrared imagery is an important part of modern military defense systems. Detecting and recognizing infrared dim point targets against complex backgrounds is a difficult and challenging research topic of strategic value. The main objects detected by a carrier-borne infrared vigilance system are sea-skimming aircraft and missiles. Because the vigilance system has a wide field of view, the target usually lies within sea clutter, which greatly complicates detection and recognition. Traditional point-target detection algorithms, such as adaptive background prediction, work well when the background has a dispersion-decreasing structure; but where the background has a large gray-level gradient, such as at the sea-sky line or over sea waves, they produce a high false-alarm rate in those local areas and cannot achieve satisfactory results. Because a dim point target has no obvious geometric or texture features, in our view the detection of dim point targets in an image is, from a mathematical perspective, a problem of singular-function analysis; from an image-processing perspective, the key problem is identifying isolated singularities in the image. In essence, dim-point-target detection is the separation of target from background on the basis of their different singularity characteristics. The image from an infrared sensor is usually accompanied by various kinds of noise, caused by the complicated background or by the sensor itself, and this noise can degrade target detection and tracking. The purpose of image preprocessing is therefore to reduce the effects of noise, raise the SNR of the image, and increase the contrast between target and background. In view of the characteristics of low-altitude, sea-skimming infrared small targets, the median filter is used to
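
    A minimal sketch of the median-filter preprocessing idea: estimate the local background with a sliding-window median, which a single point-like target barely influences, and subtract it so the target stands out against clutter. The window size and function name are illustrative, not taken from the record:

```python
import numpy as np

def median_subtract(img, k=5):
    """Point-target enhancement sketch: a k x k sliding median estimates
    the local background (robust to an isolated bright pixel); the
    residual img - background suppresses clutter and keeps the target."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")          # replicate borders
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    background = np.median(windows, axis=(-2, -1))  # per-pixel median
    return img - background
```

    A threshold on the residual image then yields candidate target pixels; because the median ignores the single bright target pixel but tracks smooth clutter, the subtraction raises the local signal-to-clutter ratio, which is the stated goal of the preprocessing step.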