Methods in Astronomical Image Processing
NASA Astrophysics Data System (ADS)
Jörsäter, S.
Contents: A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future
Effect of color coding and subtraction on the accuracy of contrast echocardiography
NASA Technical Reports Server (NTRS)
Pasquet, A.; Greenberg, N.; Brunken, R.; Thomas, J. D.; Marwick, T. H.
1999-01-01
BACKGROUND: Contrast echocardiography may be used to assess myocardial perfusion. However, gray scale assessment of myocardial contrast echocardiography (MCE) is difficult because of variations in regional backscatter intensity, difficulties in distinguishing varying shades of gray, and artifacts or attenuation. We sought to determine whether the assessment of rest myocardial perfusion by MCE could be improved with subtraction and color coding. METHODS AND RESULTS: MCE was performed in 31 patients with previous myocardial infarction with a second-generation agent (NC100100, Nycomed AS), using harmonic triggered or continuous imaging; gain settings were kept constant throughout the study. Digitized images were post-processed by subtraction of baseline from contrast data and colorized to reflect the intensity of myocardial contrast. Gray scale MCE alone, MCE combined with baseline images, and subtracted colorized images were scored independently using a 16-segment model. The presence and severity of myocardial contrast abnormalities were compared with perfusion defined by rest MIBI-SPECT. Segments that were not visualized by continuous (17%) or triggered imaging (14%) after color processing were excluded from further analysis. The specificity of gray scale MCE alone (56%) or MCE combined with baseline 2D (47%) was significantly enhanced by subtraction and color coding (76%, p<0.001) of triggered images. The accuracy of the gray scale approaches (respectively 52% and 47%) was increased to 70% (p<0.001). Similarly, for continuous images, the specificity of gray scale MCE with and without baseline comparison was 23% and 42% respectively, compared with 60% after post-processing (p<0.001). The accuracy of colorized images (59%) was also significantly greater than that of gray scale MCE (43% and 29%, p<0.001). The sensitivity of MCE for both acquisitions was not altered by subtraction.
CONCLUSION: Post-processing with subtraction and color coding significantly improves the accuracy and specificity of MCE for detection of perfusion defects.
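The baseline-subtraction and colour-coding step described above can be sketched in a few lines of numpy; this is a hedged illustration with a hypothetical blue-to-red lookup table, not the study's actual post-processing software:

```python
import numpy as np

def subtract_and_colorize(baseline, contrast, n_levels=8):
    """Subtract the baseline frame from the contrast frame, then map the
    residual contrast intensity onto discrete colour levels."""
    diff = np.clip(contrast.astype(float) - baseline.astype(float), 0, None)
    # Normalise to [0, 1] and quantise into colour levels.
    if diff.max() > 0:
        diff = diff / diff.max()
    levels = np.floor(diff * (n_levels - 1)).astype(int)
    # Hypothetical blue-to-red lookup table, one RGB triple per level.
    lut = np.stack([np.linspace(0, 255, n_levels),
                    np.zeros(n_levels),
                    np.linspace(255, 0, n_levels)], axis=1).astype(np.uint8)
    return lut[levels]  # H x W x 3 colour-coded image

baseline = np.zeros((4, 4))
contrast = np.full((4, 4), 100.0)   # uniform contrast enhancement
rgb = subtract_and_colorize(baseline, contrast)
```

Uniform maximal enhancement maps every pixel to the top (red) end of the lookup table.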
NASA Astrophysics Data System (ADS)
Gu, Junhua; Xu, Haiguang; Wang, Jingying; An, Tao; Chen, Wen
2013-08-01
We propose a continuous wavelet transform based non-parametric foreground subtraction method for the detection of the redshifted 21 cm signal from the epoch of reionization. The method rests on the assumption that the foreground spectra are smooth in the frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures, so their characteristic scales differ significantly and they can easily be distinguished in wavelet coefficient space, where the foreground subtraction is performed. Compared with the traditional spectral-fitting based method, our method is more tolerant of complex foregrounds. Furthermore, we find that when the instrument has uncorrected response errors, our method also performs significantly better than the spectral-fitting based method. Our method obtains results similar to those of the Wp smoothing method, which is also non-parametric, but consumes much less computing time.
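The scale-separation idea can be illustrated with an à trous (starlet-style) wavelet smoothing in pure numpy: wide-scale smoothing captures the spectrally smooth foreground, and the small-scale residual approximates the saw-tooth signal. This is a toy sketch on synthetic spectra, not the authors' CWT pipeline; the band, foreground law, and signal amplitude below are invented:

```python
import numpy as np

def atrous_smooth(spec, n_scales=5):
    """Iteratively smooth a spectrum with a widening B3-spline kernel
    (a trous scheme); the result keeps only the largest scales."""
    kernel = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    smooth = spec.astype(float).copy()
    for j in range(n_scales):
        step = 2 ** j
        # Insert holes ("a trous") between the kernel taps at each scale.
        padded = np.pad(smooth, 2 * step, mode="reflect")
        out = np.zeros_like(smooth)
        for k, w in enumerate(kernel):
            offset = (k - 2) * step
            out += w * padded[2 * step + offset : 2 * step + offset + smooth.size]
        smooth = out
    return smooth

freq = np.linspace(100.0, 200.0, 512)                 # MHz, hypothetical band
foreground = 50.0 - 0.1 * freq + 1e-4 * (freq - 150.0) ** 2   # smooth component
signal = 0.02 * np.sign(np.sin(7.0 * freq))           # saw-tooth-like small scales
total = foreground + signal

estimated_fg = atrous_smooth(total)   # large scales -> foreground estimate
recovered = total - estimated_fg      # small-scale residual -> 21 cm-like signal
```

Away from the band edges the residual tracks the injected small-scale signal closely, while the smooth component is absorbed into the foreground estimate.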
Martens, Roland M; Bechten, Arianne; Ingala, Silvia; van Schijndel, Ronald A; Machado, Vania B; de Jong, Marcus C; Sanchez, Esther; Purcell, Derk; Arrighi, Michael H; Brashear, Robert H; Wattjes, Mike P; Barkhof, Frederik
2018-03-01
Immunotherapeutic treatments targeting amyloid-β plaques in Alzheimer's disease (AD) are associated with the presence of amyloid-related imaging abnormalities with oedema or effusion (ARIA-E), whose detection and classification are crucial for evaluating subjects enrolled in clinical trials. To investigate the applicability of subtraction MRI to ARIA-E detection using an established ARIA-E rating scale, we included 75 AD patients receiving bapineuzumab treatment, including 29 ARIA-E cases. Five neuroradiologists rated their brain MRI scans with and without subtraction images. The accuracy of evaluating the presence of ARIA-E, the intraclass correlation coefficient (ICC), and specific agreement were calculated. Subtraction resulted in higher sensitivity (0.966) and lower specificity (0.970) than native images (0.959 and 0.991, respectively). Individual rater detection was excellent. ICC scores ranged from excellent to good, except for gyral swelling (moderate). Excellent negative and good positive specific agreement among all ARIA-E imaging features was reported in both groups. Combining sulcal hyperintensity and gyral swelling significantly increased positive agreement for subtraction images. Subtraction MRI has potential as a visual aid increasing the sensitivity of ARIA-E assessment; however, in order to improve its usefulness, isotropic acquisition and enhanced training are required. The ARIA-E rating scale may benefit from combining sulcal hyperintensity and swelling. • The subtraction technique can improve detection of amyloid-related imaging abnormalities with oedema/effusion in Alzheimer's patients. • The value of ARIA-E detection, classification and monitoring using subtraction was assessed. • Validation of an established ARIA-E rating scale and recommendations for improvement are reported. • Complementary statistical methods were employed to measure accuracy, inter-rater reliability and specific agreement.
Stanford, Tyman E; Bagley, Christopher J; Solomon, Patty J
2016-01-01
Proteomic matrix-assisted laser desorption/ionisation (MALDI) linear time-of-flight (TOF) mass spectrometry (MS) may be used to produce protein profiles from biological samples with the aim of discovering biomarkers for disease. However, the raw protein profiles suffer from several sources of bias or systematic variation which need to be removed via pre-processing before meaningful downstream analysis of the data can be undertaken. Baseline subtraction, an early pre-processing step that removes the non-peptide signal from the spectra, is complicated by the following: (i) each spectrum has, on average, wider peaks for peptides with higher mass-to-charge ratios (m/z), and (ii) the time-consuming and error-prone trial-and-error process for optimising the baseline subtraction input arguments. With reference to the aforementioned complications, we present an automated pipeline that includes (i) a novel 'continuous' line segment algorithm that efficiently operates over data with a transformed m/z-axis to remove the relationship between peptide mass and peak width, and (ii) an input-free algorithm to estimate peak widths on the transformed m/z scale. The automated baseline subtraction method was deployed on six publicly available proteomic MS datasets using six different m/z-axis transformations. Optimality of the automated baseline subtraction pipeline was assessed quantitatively using the mean absolute scaled error (MASE) when compared to a gold-standard baseline subtracted signal. Several of the transformations investigated were able to reduce, if not entirely remove, the peak width and peak location relationship, resulting in near-optimal baseline subtraction using the automated pipeline. The proposed novel 'continuous' line segment algorithm is shown to far outperform naive sliding window algorithms with regard to the computational time required.
The improvement in computational time was at least four-fold on real MALDI TOF-MS data and at least an order of magnitude on many simulated datasets. The advantages of the proposed pipeline include informed and data specific input arguments for baseline subtraction methods, the avoidance of time-intensive and subjective piecewise baseline subtraction, and the ability to automate baseline subtraction completely. Moreover, individual steps can be adopted as stand-alone routines.
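For contrast, the kind of naive sliding-window baseline the pipeline improves upon can be sketched as a rolling minimum followed by a moving-average smooth; the synthetic spectrum, window size, and peak parameters below are invented for illustration:

```python
import numpy as np

def rolling_min_baseline(spectrum, half_window=25):
    """Naive sliding-window baseline: local minimum over a fixed window,
    then a moving-average smooth of the same width."""
    n = spectrum.size
    padded = np.pad(spectrum.astype(float), half_window, mode="edge")
    mins = np.array([padded[i:i + 2 * half_window + 1].min() for i in range(n)])
    kernel = np.ones(2 * half_window + 1) / (2 * half_window + 1)
    return np.convolve(np.pad(mins, half_window, mode="edge"), kernel, mode="valid")

rng = np.random.default_rng(0)
mz = np.arange(2000.0, 10000.0, 4.0)
baseline = 200.0 * np.exp(-(mz - 2000.0) / 3000.0)   # decaying chemical noise
peak = 50.0 * np.exp(-0.5 * ((mz - 5000.0) / 20.0) ** 2)
spectrum = baseline + peak + rng.normal(0.0, 1.0, mz.size)

subtracted = spectrum - rolling_min_baseline(spectrum)
```

A fixed window works tolerably here because the peak width is constant; the paper's point is that real MALDI peaks widen with m/z, which is why an axis transformation plus an input-free width estimate is preferable to hand-tuning `half_window` piecewise.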
2D and 3D registration methods for dual-energy contrast-enhanced digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Lau, Kristen C.; Roth, Susan; Maidment, Andrew D. A.
2014-03-01
Contrast-enhanced digital breast tomosynthesis (CE-DBT) uses an iodinated contrast agent to image the three-dimensional breast vasculature. The University of Pennsylvania is conducting a CE-DBT clinical study in patients with known breast cancers. The breast is compressed continuously and imaged at four time points (1 pre-contrast; 3 post-contrast). A hybrid subtraction scheme is proposed. First, dual-energy (DE) images are obtained by a weighted logarithmic subtraction of the high-energy and low-energy image pairs. Then, post-contrast DE images are subtracted from the pre-contrast DE image. This hybrid temporal subtraction of DE images is performed to analyze iodine uptake, but suffers from motion artifacts. Image registration helps to correct for this motion, enhancing the evaluation of vascular kinetics. Registration using ANTS (Advanced Normalization Tools) is performed in an iterative manner: mutual information optimization first corrects large-scale motion, and normalized cross-correlation optimization then iteratively corrects fine-scale misalignment. Two methods have been evaluated: a 2D method using a slice-by-slice approach, and a 3D method using a volumetric approach to account for out-of-plane breast motion. Our results demonstrate that the registration improves qualitatively with each iteration (five iterations total). Motion artifacts near the edge of the breast are corrected effectively, and structures within the breast (e.g., blood vessels, surgical clips) are better visualized. Statistical and clinical evaluations of registration accuracy in the CE-DBT images are ongoing.
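The hybrid subtraction scheme can be sketched directly from its definition: a weighted logarithmic dual-energy subtraction at each time point, followed by a post-minus-pre temporal subtraction. The weighting factor and pixel values below are toy numbers, not calibrated dual-energy weights:

```python
import numpy as np

def dual_energy_subtract(high, low, w=0.3):
    """Weighted logarithmic subtraction of a high/low-energy image pair."""
    eps = 1e-6  # avoid log(0)
    return np.log(high + eps) - w * np.log(low + eps)

def hybrid_temporal_subtract(pre_pair, post_pair, w=0.3):
    """DE image at each time point, then post-minus-pre temporal subtraction."""
    de_pre = dual_energy_subtract(*pre_pair, w=w)
    de_post = dual_energy_subtract(*post_pair, w=w)
    return de_post - de_pre

pre_high = np.full((8, 8), 100.0)
pre_low = np.full((8, 8), 120.0)
post_high = pre_high.copy()
post_low = pre_low.copy()
post_high[2:4, 2:4] = 80.0   # toy iodine-uptake region changes the signal

diff = hybrid_temporal_subtract((pre_high, pre_low), (post_high, post_low))
```

Static anatomy cancels in the temporal step, leaving signal only where uptake changed, which is exactly why residual motion between time points produces the artifacts that the registration is meant to remove.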
Bone images from dual-energy subtraction chest radiography in the detection of rib fractures.
Szucs-Farkas, Zsolt; Lautenschlager, Katrin; Flach, Patricia M; Ott, Daniel; Strautz, Tamara; Vock, Peter; Ruder, Thomas D
2011-08-01
To assess the sensitivity and image quality of chest radiography (CXR) with or without dual-energy subtracted (ES) bone images in the detection of rib fractures. In this retrospective study, 39 patients with 204 rib fractures and 24 subjects with no fractures were examined with a single exposure dual-energy subtraction digital radiography system. Three blinded readers first evaluated the non-subtracted posteroanterior and lateral chest radiographs alone, and 3 months later they evaluated the non-subtracted images together with the subtracted posteroanterior bone images. The locations of rib fractures were registered with confidence levels on a 3-grade scale. Image quality was rated on a 5-point scale. Marks by readers were compared with fracture localizations in CT as a standard of reference. The sensitivity for fracture detection using both methods was very similar (34.3% with standard CXR and 33.5% with ES-CXR, p=0.92). At the patient level, both sensitivity (71.8%) and specificity (92.9%) with or without ES were identical. Diagnostic confidence was not significantly different (2.61 with CXR and 2.75 with ES-CXR, p=0.063). Image quality with ES was rated higher than that on standard CXR (4.08 vs. 3.74, p<0.001). Despite a better image quality, adding ES bone images to standard radiographs of the chest does not provide better sensitivity or improved diagnostic confidence in the detection of rib fractures. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Wehde, M. E.
1995-01-01
The common method of digital image comparison by subtraction imposes various constraints on the image contents. Precise registration of images is required to ensure proper evaluation of surface locations. The attribute being measured and the calibration and scaling of the sensor are also important to the validity and interpretability of the subtraction result. Influences of sensor gains and offsets complicate the subtraction process. The presence of any uniform systematic transformation component in one of the two images to be compared distorts the subtraction results and requires analyst intervention to interpret or remove it. A new technique has been developed to overcome these constraints. Images to be compared are first transformed using the cumulative relative frequency as a transfer function. The transformed images represent the contextual relationship of each surface location with respect to all others within the image. Differencing the transformed images yields a percentile rank ordered difference. This process produces consistent terrain-change information even when the above requirements necessary for subtraction are relaxed. This technique may be valuable to an appropriately designed hierarchical terrain-monitoring methodology because it does not require human participation in the process.
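The described transform is straightforward to sketch: each pixel is mapped to its cumulative relative frequency (percentile rank) within its own image, and the transformed images are then differenced. The toy example below shows the claimed invariance to a uniform gain-and-offset change between acquisitions:

```python
import numpy as np

def cumulative_rank_transform(img):
    """Map each pixel to its cumulative relative frequency (percentile rank)
    within the image, in (0, 1]."""
    flat = img.ravel()
    order = flat.argsort(kind="stable")
    ranks = np.empty(flat.size, dtype=float)
    ranks[order] = np.arange(1, flat.size + 1)
    return (ranks / flat.size).reshape(img.shape)

def percentile_rank_difference(img_a, img_b):
    """Difference of rank-transformed images: zero under any monotone
    systematic transformation, nonzero where the ordering changes."""
    return cumulative_rank_transform(img_a) - cumulative_rank_transform(img_b)

a = np.arange(256.0).reshape(16, 16) / 256.0
b = 2.0 * a + 10.0                 # uniform sensor gain and offset
d_same = percentile_rank_difference(a, b)   # identically zero

c = b.copy()
c[0, 0] = 1000.0                   # a genuine terrain change at one pixel
d_change = percentile_rank_difference(a, c)
```

The changed pixel jumps from the lowest to the highest rank, while every other pixel shifts by only one rank position, so the change stands out cleanly despite the uncorrected gain and offset.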
Digital subtraction dark-lumen MR colonography: initial experience.
Ajaj, Waleed; Veit, Patrick; Kuehle, Christiane; Joekel, Michaela; Lauenstein, Thomas C; Herborn, Christoph U
2005-06-01
To evaluate image subtraction for the detection of colonic pathologies in a dark-lumen MR colonography exam. A total of 20 patients (12 males; 8 females; mean 51.4 years of age) underwent MR colonography after standard cleansing and a rectal water enema on a 1.5-T whole-body MR system. After suppression of peristaltic motion, native and Gd-contrast-enhanced three-dimensional T1-w gradient echo images were acquired in the coronal plane. Two radiologists analyzed the MR data sets in consensus on two separate occasions, with and without the subtracted images for lesion detection, and assessed the value of the subtracted data set on a five-point Likert scale (1=very helpful to 5=very unhelpful). All imaging results were compared with endoscopy. Without subtracted images, MR-colonography detected a total of five polyps, two inflammatory lesions, and one carcinoma in eight patients, which were all verified by endoscopy. Using subtraction, an additional polyp was found, and readout time was significantly shorter (6:41 vs. 7:39 minutes; P<0.05). In two patients, endoscopy detected a flat adenoma and a polyp (0.4 cm) that were missed in the MR exam. Sensitivity and specificity without subtraction were 0.67/1.0, and 0.76/1.0 with the subtracted images, respectively. Subtraction was assessed as helpful in all exams (mean value 1.8+/-0.5; Likert scale). We consider subtraction of native from contrast-enhanced dark-lumen MR colonography data sets as a beneficial supplement to the exam. Copyright (c) 2005 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Adi, K.; Widodo, A. P.; Widodo, C. E.; Pamungkas, A.; Putranto, A. B.
2018-05-01
Traffic monitoring requires counting the number of vehicles passing on the road; this is particularly important for highway transportation management. It is therefore necessary to develop a system that counts the number of vehicles automatically, and video processing methods make this possible. This research developed a vehicle-counting system for a toll road. The system includes video acquisition, frame extraction, and image processing of each frame. Video acquisition was conducted in the morning, at noon, in the afternoon, and in the evening. The system employs background subtraction and morphology methods on gray scale images for vehicle counting. The best vehicle counting results were obtained in the morning, with a counting accuracy of 86.36%, whereas the lowest accuracy, 21.43%, occurred in the evening. The difference between the morning and evening results is caused by the different illumination, which changes the values of the image pixels.
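A minimal sketch of the counting pipeline, assuming scipy is available for the morphological opening and connected-component labelling; the threshold, minimum blob area, and frames are invented stand-ins for the paper's toll-road footage:

```python
import numpy as np
from scipy import ndimage

def count_vehicles(frame, background, threshold=30, min_area=20):
    """Background subtraction on gray-scale frames, binary opening to
    clean up noise, then connected-component counting of large blobs."""
    fg = np.abs(frame.astype(int) - background.astype(int)) > threshold
    fg = ndimage.binary_opening(fg, structure=np.ones((3, 3)))
    labels, n = ndimage.label(fg)
    sizes = ndimage.sum(fg, labels, range(1, n + 1))
    return int(np.sum(sizes >= min_area))

background = np.zeros((60, 80), dtype=np.uint8)   # empty-road model
frame = background.copy()
frame[10:20, 10:25] = 200   # vehicle-sized blob
frame[40:48, 50:70] = 180   # second vehicle
frame[5, 70] = 255          # single-pixel noise, removed by the opening
print(count_vehicles(frame, background))  # 2
```

The illumination sensitivity reported above corresponds here to the fixed `threshold`: when evening lighting compresses the foreground/background contrast below it, blobs fragment or vanish and the count degrades.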
Rapacchi, Stanislas; Han, Fei; Natsuaki, Yutaka; Kroeker, Randall; Plotnik, Adam; Lehman, Evan; Sayre, James; Laub, Gerhard; Finn, J Paul; Hu, Peng
2014-01-01
Purpose: We propose a compressed-sensing (CS) technique based on magnitude image subtraction for high spatial and temporal resolution dynamic contrast-enhanced MR angiography (CE-MRA). Methods: Our technique integrates the magnitude difference image into the CS reconstruction to promote subtraction sparsity. Fully sampled Cartesian 3D CE-MRA datasets from 6 volunteers were retrospectively under-sampled and three reconstruction strategies were evaluated: k-space subtraction CS, independent CS, and magnitude subtraction CS. The techniques were compared on image quality (vessel delineation, image artifacts, and noise) and image reconstruction error. Our CS technique was further tested on 7 volunteers using a prospectively under-sampled CE-MRA sequence. Results: Compared with k-space subtraction and independent CS, our magnitude subtraction CS provides significantly better vessel delineation and less noise at 4X acceleration, and significantly less reconstruction error at 4X and 8X (p<0.05 for all). On a 1–4 point image quality scale of vessel delineation, our technique scored 3.8±0.4 at 4X, 2.8±0.4 at 8X and 2.3±0.6 at 12X acceleration. Using our CS sequence at 12X acceleration, we were able to acquire dynamic CE-MRA with higher spatial and temporal resolution than the current clinical TWIST protocol while maintaining comparable image quality (2.8±0.5 vs. 3.0±0.4, p=NS). Conclusion: Our technique is promising for dynamic CE-MRA. PMID: 23801456
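The magnitude-subtraction idea can be sketched as an iterative soft-thresholding (ISTA) reconstruction in which sparsity is promoted on the difference from a known reference (pre-contrast) image rather than on the image itself. This single-coil Cartesian toy with invented dimensions illustrates the principle, not the authors' reconstruction:

```python
import numpy as np

def cs_magnitude_subtraction(y, mask, ref, lam=0.02, n_iter=200):
    """ISTA sketch: gradient step on the undersampled k-space data term,
    then soft-thresholding of the difference from the reference image."""
    x = ref.copy()
    for _ in range(n_iter):
        # Data consistency in undersampled k-space (unitary FFT, step size 1).
        resid = mask * np.fft.fft2(x, norm="ortho") - y
        x = x - np.real(np.fft.ifft2(mask * resid, norm="ortho"))
        # Soft-threshold the subtraction image to promote its sparsity.
        d = x - ref
        d = np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)
        x = ref + d
    return x

rng = np.random.default_rng(1)
ref = np.outer(np.linspace(0.2, 0.8, 64), np.linspace(0.3, 0.7, 64))  # pre-contrast
d_true = np.zeros((64, 64))
d_true[[10, 20, 30, 40, 50], [12, 44, 25, 55, 8]] = 1.0   # sparse "vessels"
mask = rng.random((64, 64)) < 0.35                         # random undersampling
y = mask * np.fft.fft2(ref + d_true, norm="ortho")         # measured k-space

recon = cs_magnitude_subtraction(y, mask, ref)
d_rec = recon - ref   # reconstructed subtraction angiogram
```

Because the subtraction image is far sparser than the anatomy itself, thresholding the difference recovers the enhancing voxels from heavily undersampled data.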
NASA Astrophysics Data System (ADS)
Zhang, Bo; Zhang, Long; Ye, Zhongfu
2016-12-01
A novel sky-subtraction method based on non-negative matrix factorisation with sparsity is proposed in this paper. The factorisation is redesigned for sky subtraction, taking the characteristics of the skylights into account: it has two constraint terms, one for sparsity and the other for homogeneity. Unlike standard sky-subtraction techniques, such as B-spline curve fitting methods and Principal Components Analysis approaches, the proposed method has higher accuracy and flexibility, and it is of practical value for sky subtraction in multi-object fibre spectroscopic telescope surveys. To demonstrate its effectiveness and superiority, experiments are performed on Large Sky Area Multi-Object Fiber Spectroscopic Telescope data, as the mechanisms of multi-object fibre spectroscopic telescopes are similar.
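A generic sparse NMF can be sketched with multiplicative updates and an L1 penalty on the activations; this is a stand-in for, not a reproduction of, the paper's specific sparsity-plus-homogeneity formulation, and the data dimensions are invented:

```python
import numpy as np

def sparse_nmf(V, rank=3, sparsity=0.01, n_iter=500, seed=0):
    """Multiplicative-update NMF, V ~ W @ H with W, H >= 0, plus an L1
    sparsity penalty on H (added into the update's denominator)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + sparsity + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy "fibre x wavelength" matrix built from 3 nonnegative basis spectra.
rng = np.random.default_rng(42)
V = rng.random((30, 3)) @ rng.random((3, 40))

W, H = sparse_nmf(V)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Nonnegativity is preserved automatically by the multiplicative form, which is what makes the decomposition physically interpretable as additive sky components.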
NASA Astrophysics Data System (ADS)
Mustak, S.
2013-09-01
The correction of atmospheric effects is essential because visible bands of shorter wavelength are strongly affected by atmospheric scattering, especially Rayleigh scattering. The objectives of the paper are to determine the haze values present in all spectral bands and to correct them for urban analysis. In this paper, the Improved Dark Object Subtraction method of P. Chavez (1988) is applied to correct atmospheric haze in a Resourcesat-1 LISS-4 multispectral satellite image. Dark Object Subtraction is a very simple image-based method of atmospheric haze correction which assumes that at least a few pixels within an image should be black (0% reflectance); such black pixels, termed dark objects, are typically clear water bodies and shadows whose DN values are zero or close to zero. Simple Dark Object Subtraction is a first-order atmospheric correction, whereas Improved Dark Object Subtraction corrects the haze in terms of atmospheric scattering and path radiance based on a power law describing the relative scattering effect of the atmosphere. The haze values extracted using Simple Dark Object Subtraction for the green band (band 2), red band (band 3) and NIR band (band 4) are 40, 34 and 18, whereas those extracted using Improved Dark Object Subtraction are 40, 18.02 and 11.80 for the same bands. It is concluded that the haze values extracted by the Improved Dark Object Subtraction method provide more realistic results than the Simple Dark Object Subtraction method.
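The improved method can be sketched as follows, assuming a Rayleigh-like exponent n = 4 and invented band centres; Chavez actually selects the power-law exponent according to the observed haze level, and the paper's LISS-4 values differ from this toy:

```python
import numpy as np

def improved_dark_object_subtraction(bands, wavelengths, ref_band=0, n=4.0):
    """Estimate haze in a reference band from its darkest pixels, predict
    haze in the other bands with a relative-scattering power law
    (scattering ~ wavelength**-n), and subtract it from each band."""
    ref_haze = np.percentile(bands[ref_band], 0.1)   # darkest pixels ~ haze DN
    scale = (np.asarray(wavelengths) / wavelengths[ref_band]) ** (-n)
    haze = ref_haze * scale
    corrected = [np.clip(b - h, 0.0, None) for b, h in zip(bands, haze)]
    return corrected, haze

rng = np.random.default_rng(3)
wavelengths = [0.56, 0.65, 0.80]   # micrometres, illustrative green/red/NIR centres
surface = rng.integers(0, 200, size=(3, 50, 50)).astype(float)
surface[:, 0, :5] = 0.0            # guarantee genuine dark objects in each band
true_haze = 40.0 * (np.array(wavelengths) / 0.56) ** -4.0
bands = [s + h for s, h in zip(surface, true_haze)]

corrected, haze_est = improved_dark_object_subtraction(bands, wavelengths)
```

Because the haze here is generated with the same power law used for estimation, the haze in the red and NIR bands is recovered from the green-band dark object alone, mirroring how the improved method avoids over-subtracting longer-wavelength bands.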
Argimón, Silvia; Konganti, Kranti; Chen, Hao; Alekseyenko, Alexander V.; Brown, Stuart; Caufield, Page W.
2014-01-01
Comparative genomics is a popular method for the identification of microbial virulence determinants, especially since the sequencing of a large number of whole bacterial genomes from pathogenic and non-pathogenic strains has become relatively inexpensive. The bioinformatics pipelines for comparative genomics usually include gene prediction and annotation and can require significant computer power. To circumvent this, we developed a rapid method for genome-scale in silico subtractive hybridization, based on blastn and independent of feature identification and annotation. Whole genome comparisons by in silico genome subtraction were performed to identify genetic loci specific to Streptococcus mutans strains associated with severe early childhood caries (S-ECC), compared to strains isolated from caries-free (CF) children. The genome similarity of the 20 S. mutans strains included in this study, calculated by Simrank k-mer sharing, ranged from 79.5 to 90.9%, confirming that this is a genetically heterogeneous group of strains. We identified strain-specific genetic elements in 19 strains, with sizes ranging from 200 bp to 39 kb. These elements contained protein-coding regions with functions mostly associated with mobile DNA. We did not, however, identify any genetic loci consistently associated with dental caries, i.e., shared by all the S-ECC strains and absent in the CF strains. Conversely, we did not identify any genetic loci specific to the healthy group. Comparison of previously published genomes from pathogenic and carriage strains of Neisseria meningitidis with our in silico genome subtraction yielded the same set of genes specific to the pathogenic strains, thus validating our method. Our results suggest that S. mutans strains derived from caries-active or caries-free dentitions cannot be differentiated based on the presence or absence of specific genetic elements.
Our in silico genome subtraction method is available as the Microbial Genome Comparison (MGC) tool, with a user-friendly JAVA graphical interface. PMID:24291226
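The annotation-free subtractive idea can be sketched with exact k-mer sets standing in for blastn matching; the sequences below are invented toy genomes, not S. mutans data:

```python
def kmer_subtraction(target, references, k=8):
    """Return k-mers present in the target genome but absent from every
    reference genome (a toy stand-in for blastn-based subtraction)."""
    def kmers(seq):
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    ref_kmers = set()
    for ref in references:
        ref_kmers |= kmers(ref)
    return kmers(target) - ref_kmers

shared = "ACGTACGGTTCAGGCATTAC"
island = "GGGCCCGGGTTTAAACCCGG"   # hypothetical strain-specific element
target = shared + island + shared

specific = kmer_subtraction(target, [shared + shared])
```

Subtracting shared sequence leaves only the island (and its junctions), with no gene prediction or annotation required, which is the efficiency argument made above.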
NASA Astrophysics Data System (ADS)
Lotfy, Hayam M.; Tawakkol, Shereen M.; Fahmy, Nesma M.; Shehata, Mostafa A.
2015-02-01
Simultaneous determination of mixtures of lidocaine hydrochloride (LH) and flucortolone pivalate (FCP) in the presence of chlorquinaldol (CQ) without prior separation steps was achieved using either successive or progressive resolution techniques. The extent of spectral overlap changes with the concentration of CQ, so CQ can be eliminated from the mixture to obtain the binary mixture of LH and FCP using the ratio subtraction method for partially overlapped spectra, or constant value via amplitude difference followed by ratio subtraction, or constant center followed by spectrum subtraction, for severely overlapped spectra. Successive ratio subtraction coupled with extended ratio subtraction, constant multiplication, derivative subtraction coupled with constant multiplication, and spectrum subtraction can be applied for the analysis of partially overlapped spectra. Severely overlapped spectra can be analyzed by constant center and by the novel methods, namely differential dual wavelength (D1 DWL) for CQ and ratio difference and differential derivative ratio (D1 DR) for FCP, while LH was determined by applying constant value via amplitude difference followed by successive ratio subtraction, and by successive derivative subtraction. The spectra of the cited drugs can also be resolved and their concentrations determined progressively from the same ratio spectrum using the amplitude modulation method. The specificity of the developed methods was investigated by analyzing laboratory-prepared mixtures, and the methods were successfully applied to pharmaceutical formulations containing the cited drugs with no interference from additives. The proposed methods were validated according to the ICH guidelines. The obtained results were statistically compared with those of the official or reported methods using Student's t-test, F-test, and one-way ANOVA, showing no significant difference with respect to accuracy and precision.
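The basic ratio subtraction step for a partially overlapped binary mixture can be sketched as follows; the Gaussian absorptivity spectra and concentrations are invented stand-ins for the real LH/FCP/CQ data:

```python
import numpy as np

wl = np.linspace(200.0, 400.0, 401)   # wavelength grid, nm

def gauss(center, width, amp):
    return amp * np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical absorptivity spectra of an analyte X and an interferent Y,
# where Y's spectrum extends beyond X's absorption region.
spec_x = gauss(250.0, 12.0, 1.0)
spec_y = gauss(280.0, 40.0, 0.8)
mixture = 0.6 * spec_x + 1.5 * spec_y

# Ratio subtraction: divide by the interferent's standard spectrum, read the
# constant plateau where X is transparent, subtract it, multiply back.
ratio = mixture / spec_y
plateau = ratio[wl > 350.0].mean()
recovered_x = (ratio - plateau) * spec_y   # = 0.6 * spec_x
```

Dividing by the divisor spectrum turns the interferent's contribution into a flat constant, which is why the plateau reading both removes Y and, as the amplitude 1.5 here shows, simultaneously measures its concentration.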
Purification of photon subtraction from continuous squeezed light by filtering
NASA Astrophysics Data System (ADS)
Yoshikawa, Jun-ichi; Asavanant, Warit; Furusawa, Akira
2017-11-01
Photon subtraction from squeezed states is a powerful scheme for creating good approximations of so-called Schrödinger cat states. However, conventional continuous-wave-based methods actually involve some impurity in the squeezing of localized wave packets, even in the ideal case of no optical losses. Here, we discuss this impurity theoretically by introducing the mode match of squeezing, and we propose a method to remove it by filtering the photon-subtraction field. Our method in principle enables the creation of pure photon-subtracted squeezed states, which was not possible with conventional methods.
NASA Astrophysics Data System (ADS)
Rassat, A.; Starck, J.-L.; Dupé, F.-X.
2013-09-01
Context. Although there is currently a debate over the significance of the claimed large-scale anomalies in the cosmic microwave background (CMB), their existence is not totally dismissed. In parallel to the debate over their statistical significance, recent work has also focussed on masks and secondary anisotropies as potential sources of these anomalies. Aims: In this work we investigate simultaneously the impact of the method used to account for masked regions as well as the impact of the integrated Sachs-Wolfe (ISW) effect, which is the large-scale secondary anisotropy most likely to affect the CMB anomalies. In this sense, our work is an update of previous works. Our aim is to identify trends in CMB data from different years and with different mask treatments. Methods: We reconstruct the ISW signal due to 2 Micron All-Sky Survey (2MASS) and NRAO VLA Sky Survey (NVSS) galaxies, effectively reconstructing the low-redshift ISW signal out to z ~ 1. We account for regions of missing data using the sparse inpainting technique. We test sparse inpainting of the CMB, large scale structure and ISW and find that it constitutes a bias-free reconstruction method suitable to study large-scale statistical isotropy and the ISW effect. Results: We focus on three large-scale CMB anomalies: the low quadrupole, the quadrupole/octopole alignment, and the octopole planarity. After sparse inpainting, the low quadrupole becomes more anomalous, whilst the quadrupole/octopole alignment becomes less anomalous. The significance of the low quadrupole is unchanged after subtraction of the ISW effect, while the trend amongst the CMB maps is that both the low quadrupole and the quadrupole/octopole alignment have reduced significance, yet other hypotheses remain possible as well (e.g. exotic physics). Our results also suggest that both of these anomalies may be due to the quadrupole alone. 
The octopole planarity significance is reduced after inpainting and after ISW subtraction; however, we do not find that it was very anomalous to start with. In the spirit of participating in reproducible research, we make public all codes and resulting products which constitute the main results of this paper here: http://www.cosmostat.org/anomaliesCMB.html
A new registration method with voxel-matching technique for temporal subtraction images
NASA Astrophysics Data System (ADS)
Itai, Yoshinori; Kim, Hyoungseop; Ishikawa, Seiji; Katsuragawa, Shigehiko; Doi, Kunio
2008-03-01
A temporal subtraction image, which is obtained by subtraction of a previous image from a current one, can be used for enhancing interval changes on medical images by removing most of normal structures. One of the important problems in temporal subtraction is that subtraction images commonly include artifacts created by slight differences in the size, shape, and/or location of anatomical structures. In this paper, we developed a new registration method with voxel-matching technique for substantially removing the subtraction artifacts on the temporal subtraction image obtained from multiple-detector computed tomography (MDCT). With this technique, the voxel value in a warped (or non-warped) previous image is replaced by a voxel value within a kernel, such as a small cube centered at a given location, which would be closest (identical or nearly equal) to the voxel value in the corresponding location in the current image. Our new method was examined on 16 clinical cases with MDCT images. Preliminary results indicated that interval changes on the subtraction images were enhanced considerably, with a substantial reduction of misregistration artifacts. The temporal subtraction images obtained by use of the voxel-matching technique would be very useful for radiologists in the detection of interval changes on MDCT images.
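The voxel-matching step can be sketched directly from its description: for each voxel, the previous-scan value is replaced by the value within a small search kernel that is closest to the current value before subtracting. A toy 2D example with a one-voxel shift standing in for residual misregistration (the real method operates on warped 3D MDCT volumes):

```python
import numpy as np

def voxel_matching_subtraction(current, previous, radius=1):
    """For each voxel, replace the previous-scan value by the value inside a
    small search kernel that is closest to the current value, then subtract."""
    pad = np.pad(previous.astype(float), radius, mode="edge")
    out = np.empty_like(current, dtype=float)
    for idx in np.ndindex(current.shape):
        sl = tuple(slice(i, i + 2 * radius + 1) for i in idx)
        kernel = pad[sl]  # neighbourhood of idx in the previous scan
        best = kernel.flat[np.abs(kernel - current[idx]).argmin()]
        out[idx] = current[idx] - best
    return out

prev = np.zeros((10, 10))
prev[3:6, 3:6] = 100.0          # anatomical structure, previous scan
curr = np.zeros((10, 10))
curr[4:7, 4:7] = 100.0          # same structure, shifted one voxel
curr[0, 0] = 50.0               # genuine interval change

vm = voxel_matching_subtraction(curr, prev)
```

Plain subtraction of these frames leaves rim artifacts around the shifted structure, whereas voxel matching suppresses them and keeps only the true interval change.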
Erasing the Milky Way: new cleaning technique applied to GBT intensity mapping data
NASA Astrophysics Data System (ADS)
Wolz, L.; Blake, C.; Abdalla, F. B.; Anderson, C. J.; Chang, T.-C.; Li, Y.-C.; Masui, K. W.; Switzer, E.; Pen, U.-L.; Voytek, T. C.; Yadav, J.
2017-02-01
We present the first application of a new foreground removal pipeline to the current leading H I intensity mapping data set, obtained by the Green Bank Telescope (GBT). We study the 15- and 1-h-field data of the GBT observations previously presented in Masui et al. and Switzer et al., covering about 41 deg2 at 0.6 < z < 1.0, for which cross-correlations may be measured with the galaxy distribution of the WiggleZ Dark Energy Survey. In the presented pipeline, we subtract the Galactic foreground continuum and the point-source contamination using an independent component analysis technique (FASTICA), and develop a Fourier-based optimal estimator to compute the temperature power spectrum of the intensity maps and cross-correlation with the galaxy survey data. We show that FASTICA is a reliable tool to subtract diffuse and point-source emission through the non-Gaussian nature of their probability distributions. The temperature power spectra of the intensity maps are dominated by instrumental noise on small scales which FASTICA, as a conservative subtraction technique of non-Gaussian signals, cannot mitigate. However, we determine similar GBT-WiggleZ cross-correlation measurements to those obtained by the singular value decomposition (SVD) method, and confirm that foreground subtraction with FASTICA is robust against 21 cm signal loss, as seen by the converged amplitude of these cross-correlation measurements. We conclude that SVD and FASTICA are complementary methods to investigate the foregrounds and noise systematics present in intensity mapping data sets.
Silva, João Paulo Santos; Mônaco, Luciana da Mata; Paschoal, André Monteiro; Oliveira, Ícaro Agenor Ferreira de; Leoni, Renata Ferranti
2018-05-16
Arterial spin labeling (ASL) is an established magnetic resonance imaging (MRI) technique that is finding broader applications in functional studies of the healthy and diseased brain. To improve cerebral blood flow (CBF) signal specificity, many algorithms and imaging procedures, such as subtraction methods, have been proposed to eliminate or, at least, minimize noise sources. This study therefore addressed how CBF functional connectivity (FC) is changed, in terms of resting brain network (RBN) identification and correlations between regions of interest (ROI), by different subtraction methods and by removal of residual motion artifacts and global signal fluctuations (RMAGSF). Twenty young healthy participants (13 M/7 F, mean age = 25 ± 3 years) underwent an MRI protocol with a pseudo-continuous ASL (pCASL) sequence. Perfusion-based images were obtained using simple, sinc, and running subtraction. RMAGSF removal was applied to all CBF time series. Independent component analysis (ICA) was used for RBN identification, while Pearson correlation was performed for ROI-based FC analysis. Temporal signal-to-noise ratio (tSNR) was higher in CBF maps obtained by sinc subtraction, although RMAGSF removal had a significant effect on maps obtained with simple and running subtraction. Neither the subtraction method nor the RMAGSF removal directly affected the identification of RBNs. However, the number of correlated and anti-correlated voxels varied across subtraction and filtering methods. At the ROI-to-ROI level, changes were prominent in FC values and their statistical significance. Our study showed that both RMAGSF filtering and the subtraction method may influence resting-state FC results, especially at the ROI level, consequently affecting FC analysis and its interpretation.
Taking our results and the whole discussion together, we conclude that for an exploratory assessment of the brain one could avoid removing RMAGSF so as not to bias FC measures, but could use sinc subtraction to minimize low-frequency contamination. However, CBF signal specificity and the frequency range for filtering purposes still need to be assessed in future studies. Copyright © 2018 Elsevier Inc. All rights reserved.
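The subtraction schemes compared in this abstract operate on the alternating control/label frames of the pCASL time series. A minimal numpy sketch of two of them follows (simple pairwise subtraction, and running subtraction against neighbouring controls); the frame ordering, edge handling, and all names are assumptions of this illustration, not the authors' code:

```python
import numpy as np

def simple_subtraction(ts):
    """Pairwise control-minus-label: assumes ts alternates
    control, label, control, label, ... (an assumption of this sketch)."""
    control, label = ts[0::2], ts[1::2]
    return control - label

def running_subtraction(ts):
    """Each label is subtracted from the average of its two neighbouring
    controls (the last control is reused at the trailing edge)."""
    control, label = ts[0::2], ts[1::2]
    ctrl_next = np.r_[control[1:], control[-1]]
    return 0.5 * (control + ctrl_next) - label

signal = np.array([100., 98., 101., 97., 99., 96.])  # toy control/label series
print(simple_subtraction(signal))   # -> [2. 4. 3.]
print(running_subtraction(signal))  # -> [2.5 3. 3.]
```

Sinc subtraction, which the study favours for tSNR, would replace the neighbour average with sinc interpolation of the control series to each label's acquisition time.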
Erasing the Milky Way: New Cleaning Technique Applied to GBT Intensity Mapping Data
NASA Technical Reports Server (NTRS)
Wolz, L.; Blake, C.; Abdalla, F. B.; Anderson, C. J.; Chang, T.-C.; Li, Y.-C.; Masui, K. W.; Switzer, E.; Pen, U.-L.; Voytek, T. C.
2016-01-01
We present the first application of a new foreground removal pipeline to the current leading H I intensity mapping data set, obtained by the Green Bank Telescope (GBT). We study the 15- and 1-h-field data of the GBT observations previously presented in Masui et al. (2013) and Switzer et al. (2013), covering about 41 deg2 at 0.6 < z < 1.0, for which cross-correlations may be measured with the galaxy distribution of the WiggleZ Dark Energy Survey. In the presented pipeline, we subtract the Galactic foreground continuum and the point-source contamination using an independent component analysis technique (FASTICA), and develop a Fourier-based optimal estimator to compute the temperature power spectrum of the intensity maps and the cross-correlation with the galaxy survey data. We show that FASTICA is a reliable tool to subtract diffuse and point-source emission through the non-Gaussian nature of their probability distributions. The temperature power spectra of the intensity maps are dominated by instrumental noise on small scales, which FASTICA, as a conservative subtraction technique for non-Gaussian signals, cannot mitigate. However, we determine similar GBT-WiggleZ cross-correlation measurements to those obtained by the singular value decomposition (SVD) method, and confirm that foreground subtraction with FASTICA is robust against 21 cm signal loss, as seen by the converged amplitude of these cross-correlation measurements. We conclude that SVD and FASTICA are complementary methods to investigate the foregrounds and noise systematics present in intensity mapping data sets.
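The FASTICA step of such a pipeline can be illustrated on mock data: decompose the frequency-by-pixel map matrix into independent components, treat the reconstruction from those components as the foreground model, and subtract it. The sketch below uses scikit-learn's FastICA on a toy rank-one foreground; the mock data and the number of removed components are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_freq, n_pix = 64, 500

# Toy data: a smooth, bright "foreground" plus a faint Gaussian "signal"
freqs = np.linspace(0.0, 1.0, n_freq)[:, None]
foreground = 100.0 * np.exp(-2.0 * freqs) * rng.lognormal(0.0, 1.0, n_pix)[None, :]
signal = rng.normal(0.0, 1.0, (n_freq, n_pix))
maps = foreground + signal

# ICA along the frequency axis; the strongly non-Gaussian components
# capture the foreground, which is reconstructed and subtracted.
n_comp = 4  # number of removed components is a tuning choice
ica = FastICA(n_components=n_comp, random_state=0, max_iter=1000)
sources = ica.fit_transform(maps)            # (n_freq, n_comp) mixing amplitudes
fg_model = ica.inverse_transform(sources)    # low-rank foreground model
cleaned = maps - fg_model

print(cleaned.std(), maps.std())  # residual scatter is far below the raw map's
```

As the abstract notes, a decomposition like this removes non-Gaussian structure but leaves Gaussian instrumental noise untouched, which is why the small-scale auto-power stays noise dominated.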
Subtractive Structural Modification of Morpho Butterfly Wings.
Shen, Qingchen; He, Jiaqing; Ni, Mengtian; Song, Chengyi; Zhou, Lingye; Hu, Hang; Zhang, Ruoxi; Luo, Zhen; Wang, Ge; Tao, Peng; Deng, Tao; Shang, Wen
2015-11-11
In contrast to studies of butterfly wings through additive modification, this work studies for the first time the property changes of butterfly wings through subtractive modification using oxygen plasma etching. The controlled modification of butterfly wings through such a subtractive process results in a gradual change of the optical properties, and helps further the understanding of structural optimization through natural evolution. The brilliant color of Morpho butterfly wings originates from the hierarchical nanostructure on the wing scales. Such nanoarchitecture has attracted much research effort, including the study of its optical properties, its potential use in sensing and infrared imaging, and the use of such structures as templates for the fabrication of high-performance photocatalytic materials. The controlled subtractive processes provide a new path to modify such nanoarchitecture and its optical properties. Distinct from previous studies on the optical properties of the Morpho wing structure, this study provides additional experimental evidence for the origin of the optical properties of the natural butterfly wing scales. The study also offers a facile approach to generate new 3D nanostructures using butterfly wings as the templates and may lead to simpler structure models for large-scale man-made structures than those offered by the original butterfly wings. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
3D temporal subtraction on multislice CT images using nonlinear warping technique
NASA Astrophysics Data System (ADS)
Ishida, Takayuki; Katsuragawa, Shigehiko; Kawashita, Ikuo; Kim, Hyounseop; Itai, Yoshinori; Awai, Kazuo; Li, Qiang; Doi, Kunio
2007-03-01
The detection of very subtle lesions and/or lesions overlapped with vessels on CT images is a time-consuming and difficult task for radiologists. In this study, we have developed a 3D temporal subtraction method to enhance interval changes between previous and current multislice CT images based on a nonlinear image warping technique. Our method provides a subtraction CT image obtained by subtraction of a previous CT image from a current CT image. Reduction of misregistration artifacts is important in the temporal subtraction method. Therefore, our computerized method includes global and local image matching techniques for accurate registration of current and previous CT images. For global image matching, we selected the corresponding previous section image for each current section image by using 2D cross-correlation between a blurred low-resolution current CT image and a blurred previous CT image. For local image matching, we applied a 3D template matching technique with translation and rotation of volumes of interest (VOIs) selected in the current and previous CT images. The local shift vector for each VOI pair was determined where the cross-correlation value reached its maximum in the 3D template matching. The local shift vectors at all voxels were then determined by interpolation of the shift vectors of the VOIs, and the previous CT image was nonlinearly warped according to the shift vector for each voxel. Finally, the warped previous CT image was subtracted from the current CT image. The 3D temporal subtraction method was applied to 19 clinical cases. Normal background structures such as vessels, ribs, and the heart were removed without large misregistration artifacts. Thus, interval changes due to lung diseases were clearly enhanced as white shadows on the subtraction CT images.
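The local-matching step described here, maximizing 3D cross-correlation over candidate shifts of a VOI, can be sketched as a brute-force integer search (translation only; the rotation search and the shift-vector interpolation of the full method are omitted, and all names are illustrative):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally shaped arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def local_shift(template, volume, origin, search=2):
    """Brute-force the integer (dz, dy, dx) shift that maximizes the
    normalized cross-correlation of `template` against `volume`,
    searching a +/-`search` voxel neighbourhood around `origin`."""
    tz, ty, tx = template.shape
    best, best_shift = -2.0, (0, 0, 0)
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                z, y, x = origin[0] + dz, origin[1] + dy, origin[2] + dx
                patch = volume[z:z + tz, y:y + ty, x:x + tx]
                if patch.shape != template.shape:
                    continue  # skip positions that run off the volume
                c = ncc(template, patch)
                if c > best:
                    best, best_shift = c, (dz, dy, dx)
    return best_shift

# Toy check: extract a template from a volume at a known position
rng = np.random.default_rng(1)
vol = rng.normal(size=(20, 20, 20))
tmpl = vol[6:11, 7:12, 5:10].copy()          # true origin (6, 7, 5)
print(local_shift(tmpl, vol, (5, 6, 4)))     # -> (1, 1, 1)
```

In the full method, shifts found this way for many VOI pairs are interpolated to every voxel and drive the nonlinear warp of the previous volume before subtraction.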
Estimating Noise in the Hydrogen Epoch of Reionization Array
NASA Astrophysics Data System (ADS)
Englund Mathieu, Philip; HERA Team
2017-01-01
The Hydrogen Epoch of Reionization Array (HERA) is a radio telescope dedicated to observing large scale structure during and prior to the epoch of reionization. Once completed, HERA will have unprecedented sensitivity to the 21-cm signal from hydrogen reionization. This poster will present time- and frequency-subtraction methods and results from a preliminary analysis of the noise characteristics of the nineteen-element pathfinder array.
Measurement of Device Parameters Using Image Recovery Techniques in Large-Scale IC Devices
NASA Technical Reports Server (NTRS)
Scheick, Leif; Edmonds, Larry
2004-01-01
Devices that respond to radiation on a cell level will produce histograms showing the relative frequency of cell damage as a function of damage. The measured distribution is the convolution of distributions from radiation responses, measurement noise, and manufacturing parameters. A method of extracting device characteristics and parameters from measured distributions via mathematical and image subtraction techniques is described.
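Since the measured histogram is described as a convolution of the device-parameter distribution with noise and manufacturing distributions, one generic way to illustrate the recovery is regularized Fourier deconvolution. This is a Wiener-style sketch under assumed toy data, not the authors' specific image-recovery procedure:

```python
import numpy as np

def deconvolve_hist(measured, noise_kernel, eps=1e-4):
    """Recover an underlying distribution from a measured histogram that
    is (circularly) convolved with a known noise kernel, via regularized
    division in the Fourier domain."""
    M = np.fft.rfft(measured)
    N = np.fft.rfft(noise_kernel)
    D = M * N.conj() / (np.abs(N) ** 2 + eps)  # Wiener-style regularization
    return np.fft.irfft(D, n=len(measured))

# Toy example: a boxcar "device response" blurred by a narrow Gaussian
n = 128
x = np.arange(n)
true = np.zeros(n)
true[40:60] = 1.0
shift = ((x + n // 2) % n) - n // 2      # kernel centred on bin 0, wrapped
kernel = np.exp(-0.5 * (shift / 2.0) ** 2)
kernel /= kernel.sum()
measured = np.fft.irfft(np.fft.rfft(true) * np.fft.rfft(kernel), n=n)

recovered = deconvolve_hist(measured, kernel)
print(np.abs(recovered - true).max())    # edges ring a little; the bulk is recovered
```

The regularization constant eps trades sharpness for noise amplification; with real measurement noise it must be chosen much more conservatively than in this noiseless toy.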
[The background sky subtraction around the [OIII] line in LAMOST QSO spectra].
Shi, Zhi-Xin; Comte, Georges; Luo, A-Li; Tu, Liang-Ping; Zhao, Yong-Heng; Wu, Fu-Chao
2014-11-01
At present, most sky-subtraction methods focus on the full spectrum rather than on particular locations, especially the background sky around the [OIII] line, which is very important for low-redshift quasars. A new method to precisely subtract sky lines in a local region is proposed in the present paper, which solves the problem that the width of the Hβ-[OIII] line is affected by the background sky subtraction. The experimental results show that, for quasars at different redshifts, the spectral quality is significantly improved using our method relative to the original batch program of LAMOST. It provides a complementary solution for the small part of LAMOST spectra that are not well handled by the LAMOST 2D pipeline. This method has also been used in searching for candidates of double-peaked active galactic nuclei.
[Correction of posttraumatic thoracolumbar kyphosis with modified pedicle subtraction osteotomy].
Chen, Fei; Kang, Yijun; Zhou, Bin; Dai, Zhehao
2016-11-28
To evaluate the efficacy and safety of modified pedicle subtraction osteotomy for the treatment of old thoracolumbar fractures with kyphosis. Methods: From January 2003 to January 2013, 58 patients with thoracolumbar kyphosis who underwent modified pedicle subtraction osteotomy were reviewed. Among them, 45 underwent an initial operation and 13 underwent revision surgery. Preoperative and postoperative kyphotic Cobb angle, back pain scores, and the incidence of complications were assessed using the visual analogue scale (VAS) and Oswestry disability index (ODI). Results: Mean follow-up was 42 months (range, 24-60 months). Average operative time was 258 min (range, 190-430 min), and average blood loss was 950 mL (range, 600-1600 mL). All patients were significantly improved in function and self-image, and achieved kyphosis correction of 17.9°±4.3°. The VAS score for low back pain decreased by 3.1±0.6; the ODI dropped by 25.3%±5.5%. Three patients (5.2%) suffered anterior thigh numbness and recovered after 3 months of follow-up. Complications occurred in 19 patients, including 12 with cerebrospinal fluid leak, 4 with superficial wound infection, and 3 with urinary tract infection. All these complications were managed properly and none required reoperation. Conclusion: Modified pedicle subtraction osteotomy is a safe and effective technique for the treatment of old fractures with kyphosis.
Differential cDNA cloning by enzymatic degrading subtraction (EDS).
Zeng, J; Gorski, R A; Hamer, D
1994-01-01
We describe a new method, called enzymatic degrading subtraction (EDS), for the construction of subtractive libraries from PCR-amplified cDNA. The novel features of this method are that (i) the tester DNA is blocked by thionucleotide incorporation; (ii) the rate of hybridization is accelerated by phenol-emulsion reassociation; and (iii) the driver cDNA and hybrid molecules are enzymatically removed by digestion with exonucleases III and VII rather than by physical partitioning. We demonstrate the utility of EDS by constructing a subtractive library enriched for cDNAs expressed in adult but not in embryonic rat brains. PMID:7971268
NASA Astrophysics Data System (ADS)
Sanlı, Ceyda; Saitoh, Kuniyasu; Luding, Stefan; van der Meer, Devaraj
2014-09-01
When a densely packed monolayer of macroscopic spheres floats on chaotic capillary Faraday waves, a coexistence of large scale convective motion and caging dynamics typical for glassy systems is observed. We subtract the convective mean flow using a coarse graining (homogenization) method and reveal subdiffusion for the caging time scales followed by a diffusive regime at later times. We apply the methods developed to study dynamic heterogeneity and show that the typical time and length scales of the fluctuations due to rearrangements of observed particle groups significantly increase when the system approaches its largest experimentally accessible packing concentration. To connect the system to the dynamic criticality literature, we fit power laws to our results. The resultant critical exponents are consistent with those found in densely packed suspensions of colloids.
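The mean-flow subtraction step described here, coarse-graining particle velocities and removing the local average before analysing fluctuations, might be sketched as follows; the grid resolution and the absence of temporal averaging are simplifying assumptions of this illustration:

```python
import numpy as np

def subtract_mean_flow(pos, vel, box, nbins=4):
    """Coarse-grain particle velocities on an nbins x nbins spatial grid
    and subtract each cell's mean (convective) flow, leaving only the
    fluctuation velocities used for caging/heterogeneity analysis."""
    ix = np.clip((pos[:, 0] / box * nbins).astype(int), 0, nbins - 1)
    iy = np.clip((pos[:, 1] / box * nbins).astype(int), 0, nbins - 1)
    cell = ix * nbins + iy
    fluct = vel.copy()
    for c in np.unique(cell):
        m = cell == c
        fluct[m] -= vel[m].mean(axis=0)  # remove the local mean flow
    return fluct

rng = np.random.default_rng(2)
pos = rng.uniform(0, 1, (1000, 2))
vel = rng.normal(0, 0.1, (1000, 2)) + np.array([1.0, 0.0])  # drift + noise
fluct = subtract_mean_flow(pos, vel, box=1.0)
print(np.abs(fluct.mean(axis=0)))  # mean flow removed, ~0 in both components
```

A proper coarse-graining kernel (e.g. a Gaussian in space and time, as in homogenization methods) would replace the hard grid cells used here.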
NASA Astrophysics Data System (ADS)
Wang, Longbiao; Odani, Kyohei; Kai, Atsuhiko
2012-12-01
A blind dereverberation method based on power spectral subtraction (SS) using a multi-channel least mean squares algorithm was previously proposed to suppress reverberant speech without additive noise. The results of isolated-word speech recognition experiments showed that this method achieved significant improvements over conventional cepstral mean normalization (CMN) in a reverberant environment. In this paper, we propose a blind dereverberation method based on generalized spectral subtraction (GSS), which has been shown to be effective for noise reduction, instead of power SS. Furthermore, we extend the missing feature theory (MFT), which was initially proposed to enhance robustness against additive noise, to dereverberation. A one-stage dereverberation and denoising method based on GSS is presented to simultaneously suppress both the additive noise and the nonstationary multiplicative noise (reverberation). The proposed dereverberation method based on GSS with MFT is evaluated on a large-vocabulary continuous speech recognition task. When additive noise was absent, the dereverberation method based on GSS with MFT using only 2 microphones achieved relative word error reduction rates of 11.4% and 32.6% compared to the dereverberation method based on power SS and the conventional CMN, respectively. For reverberant and noisy speech, the dereverberation and denoising method based on GSS achieved a relative word error reduction rate of 12.8% compared to the conventional CMN with a GSS-based additive noise reduction method. We also analyze the factors affecting the compensation parameter estimation for the SS-based dereverberation method, such as the number of channels (the number of microphones), the length of reverberation to be suppressed, and the length of the utterance used for parameter estimation.
The experimental results showed that the SS-based method is robust in a variety of reverberant environments for both isolated and continuous speech recognition and under various parameter estimation conditions.
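The core operation shared by power SS (p = 2) and generalized SS can be sketched per short-time spectral frame; the flooring constant, the known noise estimate, and all names below are illustrative assumptions, not the paper's parameter settings:

```python
import numpy as np

def generalized_ss(mag, noise_mag, p=2.0, alpha=1.0, floor=0.01):
    """Generalized spectral subtraction on one STFT magnitude frame:
    subtract the (estimated) noise magnitude raised to power p, then
    floor the result to avoid negative spectra (musical noise).
    p = 2 gives classic power spectral subtraction; p = 1 a magnitude
    variant."""
    sub = mag ** p - alpha * noise_mag ** p
    sub = np.maximum(sub, (floor * noise_mag) ** p)
    return sub ** (1.0 / p)

frame = np.array([1.0, 0.5, 0.2])   # toy magnitudes for three bins
noise = np.array([0.3, 0.3, 0.3])   # assumed noise magnitude estimate
print(generalized_ss(frame, noise))        # power SS (p = 2)
print(generalized_ss(frame, noise, p=1.0)) # magnitude-domain GSS variant
```

In the dereverberation setting, noise_mag would be the late-reverberation estimate derived blindly from the multi-channel observations rather than a stationary additive-noise estimate.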
Background derivation and image flattening: getimages
NASA Astrophysics Data System (ADS)
Men'shchikov, A.
2017-11-01
Modern high-resolution images obtained with space observatories display extremely strong intensity variations across images on all spatial scales. Source extraction in such images with methods based on global thresholding may bring unacceptably large numbers of spurious sources in bright areas while failing to detect sources in low-background or low-noise areas. It would be highly beneficial to subtract background and equalize the levels of small-scale fluctuations in the images before extracting sources or filaments. This paper describes getimages, a new method of background derivation and image flattening. It is based on median filtering with sliding windows that correspond to a range of spatial scales from the observational beam size up to a maximum structure width Xλ. The latter is a single free parameter of getimages that can be evaluated manually from the observed image Iλ. The median filtering algorithm provides a background image Bλ for structures of all widths below Xλ. The same median filtering procedure, applied to an image of standard deviations Dλ derived from the background-subtracted image Sλ, results in a flattening image Fλ. Finally, a flattened detection image IλD = Sλ/Fλ is computed, whose standard deviations are uniform outside sources and filaments. Detecting sources in such greatly simplified images results in much cleaner extractions that are more complete and reliable. As a bonus, getimages reduces various observational and map-making artifacts and equalizes noise levels between independent tiles of mosaicked images.
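The getimages idea, a sliding-window median as the background estimate followed by division of the residual by a smoothed map of its local scatter, can be sketched at a single scale with scipy. The full method iterates over a range of window sizes up to Xλ; the single window and the toy image below are simplifications:

```python
import numpy as np
from scipy.ndimage import median_filter

def flatten_image(img, window=15):
    """Single-scale sketch: sliding-window median as the background Bλ,
    then divide the background-subtracted residual Sλ by a median-smoothed
    map of its local scatter (a stand-in for Fλ) so that small-scale
    fluctuations are equalized across the image."""
    background = median_filter(img, size=window)
    resid = img - background
    scatter = median_filter(np.abs(resid), size=window) + 1e-12
    return resid / scatter

rng = np.random.default_rng(3)
yy, xx = np.mgrid[0:64, 0:64]
img = 50.0 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 400.0)  # bright background
img += rng.normal(0.0, 1.0 + 0.1 * img)   # noise that scales with brightness
flat = flatten_image(img)
print(flat.std())  # roughly uniform, order-unity fluctuations everywhere
```

A compact thresholded detection on flat would then see comparable noise levels in the bright centre and the faint outskirts, which is the point of the flattening.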
NASA Astrophysics Data System (ADS)
Yi, Longtao; Liu, Zhiguo; Wang, Kai; Chen, Man; Peng, Shiqi; Zhao, Weigang; He, Jialin; Zhao, Guangcui
2015-03-01
A new method is presented to subtract the background from the energy-dispersive X-ray fluorescence (EDXRF) spectrum using cubic spline interpolation. To accurately obtain the interpolation nodes, a smoothing fit and a set of discriminant formulas were adopted. From these interpolation nodes, the background is estimated by a calculated cubic spline function. The method has been tested on spectra measured from a coin and an oil painting using a confocal MXRF setup. In addition, the method has been tested on an existing sample spectrum. The results confirm that the method can properly subtract the background.
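The background estimation described, a cubic spline through interpolation nodes placed in peak-free regions, can be sketched with scipy on a synthetic spectrum. Here the nodes are chosen by hand; the paper selects them automatically via smoothing and discriminant tests, and all numbers below are fabricated for illustration:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Toy EDXRF-like spectrum: a smooth continuum plus two Gaussian peaks
chan = np.arange(512)
continuum = 200.0 * np.exp(-chan / 300.0)
peaks = 500.0 * np.exp(-0.5 * ((chan - 120) / 4.0) ** 2) \
      + 300.0 * np.exp(-0.5 * ((chan - 310) / 5.0) ** 2)
spectrum = continuum + peaks

# Interpolation nodes in peak-free channels (hand-picked in this sketch)
nodes = np.array([0, 60, 200, 260, 400, 511])
spline = CubicSpline(nodes, spectrum[nodes])
background = spline(chan)

net = spectrum - background
print(net[120], net[310])  # peak heights survive; the continuum is removed
```

Because the continuum is smooth between nodes, the cubic spline tracks it to well below the counting noise of a real spectrum, so the net peak areas are essentially unbiased.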
Subtraction method in the Second Random Phase Approximation
NASA Astrophysics Data System (ADS)
Gambacurta, Danilo
2018-02-01
We discuss the subtraction method applied to the Second Random Phase Approximation (SRPA). This method has been proposed to overcome double counting and stability issues appearing in beyond mean-field calculations. We show that the subtraction procedure leads to a considerable reduction of the SRPA downwards shift with respect to the random phase approximation (RPA) spectra and to results that are weakly cutoff dependent. Applications to the isoscalar monopole and quadrupole response in 16O and to the low-lying dipole response in 48Ca are shown and discussed.
Advanced Background Subtraction Applied to Aeroacoustic Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
Bahr, Christopher J.; Horne, William C.
2015-01-01
An advanced form of background subtraction is presented and applied to aeroacoustic wind tunnel data. A variant of this method has seen use in other fields such as climatology and medical imaging. The technique, based on an eigenvalue decomposition of the background noise cross-spectral matrix, is robust in situations where isolated background auto-spectral levels are measured to be higher than the levels of the combined source and background signals. It also provides an alternative estimate of the cross-spectrum, which previously might have had poor definition for low signal-to-noise-ratio measurements. Simulated results indicate performance similar to conventional background subtraction when the subtracted spectra are weaker than the true contaminating background levels, and superior performance when the subtracted spectra are stronger than the true contaminating background levels. Experimental results show limited success in recovering signal behavior for data where conventional background subtraction fails. They also demonstrate the new subtraction technique's ability to maintain a proper coherence relationship in the modified cross-spectral matrix. Beamforming and deconvolution results indicate that the method can successfully separate sources. Results also show a reduced need for diagonal removal in phased array processing, at least for the limited data sets considered.
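One way to sketch the eigenvalue-based idea is to subtract the background cross-spectral matrix (CSM) and project the difference back onto the positive semidefinite cone by clipping negative eigenvalues. This is a simplified stand-in for the authors' decomposition, not their exact algorithm, and the three-channel toy data are fabricated:

```python
import numpy as np

def subtract_background_csm(csm_total, csm_bg):
    """Subtract a background cross-spectral matrix, then clip negative
    eigenvalues of the (Hermitian) difference so the result remains a
    valid CSM -- negative eigenvalues would represent unphysical power."""
    diff = csm_total - csm_bg
    w, v = np.linalg.eigh(diff)          # Hermitian eigendecomposition
    w = np.maximum(w, 0.0)               # discard unphysical negative power
    return (v * w) @ v.conj().T

rng = np.random.default_rng(4)
steer = rng.normal(size=3) + 1j * rng.normal(size=3)
csm_src = np.outer(steer, steer.conj())           # rank-1 "source" CSM
csm_bg = 2.0 * np.eye(3)                          # white "background" CSM
noisy_total = csm_src + csm_bg + 0.01 * np.eye(3) # slight excess noise

clean = subtract_background_csm(noisy_total, csm_bg)
print(np.linalg.eigvalsh(clean))  # all non-negative by construction
```

Keeping the full matrix positive semidefinite is what preserves a proper coherence relationship for downstream beamforming, the property the abstract highlights over element-wise auto-spectral subtraction.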
Service Discovery Oriented Management System Construction Method
NASA Astrophysics Data System (ADS)
Li, Huawei; Ren, Ying
2017-10-01
To address the lack of a uniform method for designing service quality management systems in large-scale, complex service environments, this paper proposes a construction method for a distributed, service-discovery-oriented management system. Three measurement functions are proposed to compute nearest-neighbour user similarity at different levels. In view of the currently low efficiency of service quality management systems, three solutions are proposed to improve system efficiency. Finally, the key technologies of a distributed service quality management system based on service discovery are summarized through quantitative experiments with factor addition and subtraction.
Subtraction of cap-trapped full-length cDNA libraries to select rare transcripts.
Hirozane-Kishikawa, Tomoko; Shiraki, Toshiyuki; Waki, Kazunori; Nakamura, Mari; Arakawa, Takahiro; Kawai, Jun; Fagiolini, Michela; Hensch, Takao K; Hayashizaki, Yoshihide; Carninci, Piero
2003-09-01
The normalization and subtraction of highly expressed cDNAs from relatively large tissues before cloning dramatically enhanced gene discovery by sequencing for the mouse full-length cDNA encyclopedia, but these methods have not been suitable for limited RNA materials. To normalize and subtract full-length cDNA libraries derived from limited quantities of total RNA, we report here a method to subtract plasmid libraries excised from size-unbiased amplified lambda phage cDNA libraries that avoids heavily biasing steps such as PCR and plasmid library amplification. The proportion of full-length cDNAs and the gene discovery rate are high, and library diversity can be validated by in silico randomization.
2010-01-01
Subtraction techniques have been broadly applied for target gene discovery. However, most current protocols apply relative differential subtraction and yield large mixtures of clones of unique and differentially expressed genes, which makes it more difficult to identify uniquely or target-oriented expressed genes. In this study, we developed a novel method for subtraction at the mRNA level by integrating magnetic particle technology into driver preparation and tester–driver hybridization, to facilitate the discovery of genes uniquely expressed between immature peanut pod and leaf through a single round of subtraction. The resulting target clones were further validated through polymerase chain reaction screening using peanut immature pod and leaf cDNA libraries as templates. This study identified several genes expressed uniquely in the immature peanut pod. These target genes can be used for future peanut functional genome and genetic engineering research. PMID:21406066
Lotfy, Hayam Mahmoud; Hegazy, Maha A; Rezk, Mamdouh R; Omran, Yasmin Rostom
2014-05-21
Two smart and novel spectrophotometric methods namely; absorbance subtraction (AS) and amplitude modulation (AM) were developed and validated for the determination of a binary mixture of timolol maleate (TIM) and dorzolamide hydrochloride (DOR) in presence of benzalkonium chloride without prior separation, using unified regression equation. Additionally, simple, specific, accurate and precise spectrophotometric methods manipulating ratio spectra were developed and validated for simultaneous determination of the binary mixture namely; simultaneous ratio subtraction (SRS), ratio difference (RD), ratio subtraction (RS) coupled with extended ratio subtraction (EXRS), constant multiplication method (CM) and mean centering of ratio spectra (MCR). The proposed spectrophotometric procedures do not require any separation steps. Accuracy, precision and linearity ranges of the proposed methods were determined and the specificity was assessed by analyzing synthetic mixtures of both drugs. They were applied to their pharmaceutical formulation and the results obtained were statistically compared to that of a reported spectrophotometric method. The statistical comparison showed that there is no significant difference between the proposed methods and the reported one regarding both accuracy and precision. Copyright © 2014 Elsevier B.V. All rights reserved.
The Automatic Recognition of the Abnormal Sky-subtraction Spectra Based on Hadoop
NASA Astrophysics Data System (ADS)
An, An; Pan, Jingchang
2017-10-01
Skylines superimpose on the target spectrum as a main source of noise; if the spectrum still contains many high-strength skylight residuals after sky-subtraction processing, follow-up analysis of the target spectrum is hampered. At the same time, LAMOST can observe a large quantity of spectroscopic data every night, so an efficient platform is needed to recognize large numbers of abnormal sky-subtraction spectra quickly. Hadoop, as a distributed parallel data computing platform, can deal with large amounts of data effectively. In this paper, we first perform continuum normalization and then present a simple and effective method to automatically recognize abnormal sky-subtraction spectra on the Hadoop platform. Experiments show that the Hadoop platform can carry out the recognition with greater speed and efficiency, and that the simple method can effectively recognize abnormal sky-subtraction spectra and find the abnormal skyline positions of different residual strengths; it can be applied to the automatic detection of abnormal sky subtraction in large numbers of spectra.
Subtraction method of computing QCD jet cross sections at NNLO accuracy
NASA Astrophysics Data System (ADS)
Trócsányi, Zoltán; Somogyi, Gábor
2008-10-01
We present a general subtraction method for computing radiative corrections to QCD jet cross sections at next-to-next-to-leading order accuracy. The steps needed to set up this subtraction scheme are the same as those used in next-to-leading order computations. However, all steps need non-trivial modifications, which we implement so that they can be defined at any order in perturbation theory. We give a status report of the implementation of the method for computing jet cross sections in electron-positron annihilation at next-to-next-to-leading order accuracy.
NASA Astrophysics Data System (ADS)
Simiele, E.; Kapsch, R.-P.; Ankerhold, U.; Culberson, W.; DeWerd, L.
2018-04-01
The purpose of this work was to characterize intensity and spectral response changes in a plastic scintillation detector (PSD) as a function of magnetic field strength. Spectral measurements as a function of magnetic field strength were performed using an optical spectrometer. The responses of both a PSD and a PMMA fiber were investigated to isolate the changes in response from the scintillator and the noise signal as a function of magnetic field strength. All irradiations were performed in water at a photon beam energy of 6 MV. Magnetic field strengths of (0, ±0.35, ±0.70, ±1.05, and ±1.40) T were investigated. Four noise subtraction techniques were investigated to evaluate their impact on the resulting noise-subtracted scintillator response with magnetic field strength. The noise subtraction methods included direct spectral subtraction, the spectral method, and variants thereof. The PMMA fiber exhibited changes in response of up to 50% with magnetic field strength due to the directional light emission from Čerenkov radiation. The PSD showed increases in response of up to 10% when not corrected for the noise signal, which agrees with previous investigations of scintillator response in magnetic fields. Decreases in the Čerenkov light ratio with negative field strength were observed, with a maximum change at -1.40 T of 3.2% compared to 0 T. The change in the noise-subtracted PSD response as a function of magnetic field strength varied with the noise subtraction technique used. Even after noise subtraction, the PSD exhibited changes in response of up to 5.5% over the four noise subtraction methods investigated.
A comparative study of additive and subtractive manufacturing for dental restorations.
Bae, Eun-Jeong; Jeong, Il-Do; Kim, Woong-Chul; Kim, Ji-Hwan
2017-08-01
Digital systems have recently found widespread application in the fabrication of dental restorations. For the clinical assessment of digitally fabricated dental restorations, it is necessary to evaluate their accuracy. However, studies of the accuracy of inlay restorations fabricated with additive manufacturing are lacking. The purpose of this in vitro study was to evaluate the accuracy of inlay restorations fabricated using recently introduced additive manufacturing methods and to compare it with the accuracy of subtractive methods. The inlay (distal occlusal cavity) shape was fabricated using 3-dimensional image (reference data) software. Specimens were fabricated using 4 different methods (each n=10, total N=40): 2 additive manufacturing methods, stereolithography apparatus and selective laser sintering; and 2 subtractive methods, wax and zirconia milling. Fabricated specimens were scanned using a dental scanner and then compared by overlapping the reference data. The results were statistically analyzed using a 1-way analysis of variance (α=.05). Additionally, the surface morphology of 1 specimen selected from each group (the first of each group) was evaluated using a digital microscope. The results of the overlap analysis of the dental restorations indicated that the root mean square (RMS) deviations observed in the restorations fabricated using the additive manufacturing methods were significantly different from those fabricated using the subtractive methods (P<.05). However, no significant differences were found between restorations fabricated using stereolithography apparatus and selective laser sintering, the additive manufacturing methods (P=.466). Similarly, no significant differences were found between wax and zirconia, the subtractive methods (P=.986). The observed RMS values were 106 μm for stereolithography apparatus, 113 μm for selective laser sintering, 116 μm for wax, and 119 μm for zirconia.
Microscopic evaluation of the surface revealed a fine linear gap between the layers of restorations fabricated using stereolithography apparatus and a grooved hole with inconsistent weak scratches when fabricated using selective laser sintering. In the wax and zirconia restorations, possible traces of milling bur passes were observed. The results indicate that the accuracy of dental restorations fabricated using the additive manufacturing methods is higher than that of subtractive methods. Therefore, additive manufacturing methods are a viable alternative to subtractive methods. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Lotfy, Hayam M; Fayez, Yasmin M; Michael, Adel M; Nessim, Christine K
2016-02-15
Smart, sensitive, simple and accurate spectrophotometric methods were developed and validated for the quantitative determination of a binary mixture of mebeverine hydrochloride (MVH) and chlordiazepoxide (CDZ) without prior separation steps, via different manipulating pathways. These pathways were applied either to the zero-order absorption spectra, namely absorbance subtraction (AS); to the zero-order absorption spectra recovered via a decoding technique, namely derivative transformation (DT); or to ratio spectra, namely ratio subtraction (RS) coupled with extended ratio subtraction (EXRS), spectrum subtraction (SS), constant multiplication (CM) and constant value (CV) methods. The manipulation steps applied to the ratio spectra are the ratio difference (RD) and amplitude modulation (AM) methods, or applying a derivative to these ratio spectra, namely derivative ratio (DD(1)) or second derivative (D(2)). Finally, the pathway based on the ratio spectra of derivative spectra is derivative subtraction (DS). The specificity of the developed methods was investigated by analyzing laboratory mixtures, and the methods were successfully applied to the combined dosage form. The proposed methods were validated according to ICH guidelines. These methods exhibited linearity in the range of 2-28 μg/mL for mebeverine hydrochloride and 1-12 μg/mL for chlordiazepoxide. The obtained results were statistically compared with those of the official methods using Student's t-test, F-test, and one-way ANOVA, showing no significant difference with respect to accuracy and precision. Copyright © 2015 Elsevier B.V. All rights reserved.
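Of the ratio-spectra manipulations listed, ratio subtraction (RS) is straightforward to illustrate on synthetic spectra: dividing the mixture by one component's normalized spectrum yields a constant plateau in the region where the other component does not absorb, and the plateau is subtracted before multiplying back. The Gaussian spectra, concentrations, and plateau region below are fabricated for illustration:

```python
import numpy as np

wl = np.linspace(200, 400, 401)  # synthetic wavelength grid, nm
gauss = lambda c, s: np.exp(-0.5 * ((wl - c) / s) ** 2)

spec_x = gauss(250, 15)          # normalized spectrum of drug "X"
spec_y = gauss(320, 20)          # normalized spectrum of drug "Y"
mixture = 0.7 * spec_x + 0.4 * spec_y

# Ratio subtraction: divide by Y's normalized spectrum; where X does not
# absorb, the ratio is a constant plateau equal to Y's contribution.
ratio = mixture / (spec_y + 1e-12)
plateau = ratio[wl > 360].mean()          # region where X ~ 0 (assumed known)
recovered_x = (ratio - plateau) * spec_y  # back to X's contribution alone
print(recovered_x.max())                  # ~0.7, X's share of the mixture
```

The plateau value itself quantifies Y (the basis of amplitude modulation), while the recovered spectrum of X can be evaluated against a calibration curve, which is how these methods avoid physical separation.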
Continuous-variable measurement-device-independent quantum key distribution with photon subtraction
NASA Astrophysics Data System (ADS)
Ma, Hong-Xin; Huang, Peng; Bai, Dong-Yun; Wang, Shi-Yu; Bao, Wan-Su; Zeng, Gui-Hua
2018-04-01
It has been found that non-Gaussian operations can be applied to increase and distill entanglement between Gaussian entangled states. We show the successful use of a non-Gaussian operation, in particular the photon subtraction operation, in the continuous-variable measurement-device-independent quantum key distribution (CV-MDI-QKD) protocol. The proposed method can be implemented with existing technologies. Security analysis shows that the photon subtraction operation can remarkably increase the maximal transmission distance of the CV-MDI-QKD protocol, precisely making up for the shortcoming of the original CV-MDI-QKD protocol, and that one-photon subtraction offers the best performance. Moreover, the proposed protocol provides a feasible method for the experimental implementation of the CV-MDI-QKD protocol.
Efficiency and Flexibility of Indirect Addition in the Domain of Multi-Digit Subtraction
ERIC Educational Resources Information Center
Torbeyns, Joke; Ghesquiere, Pol; Verschaffel, Lieven
2009-01-01
This article discusses the characteristics of the indirect addition strategy (IA) in the domain of multi-digit subtraction. In two studies, adults' use of IA on three-digit subtractions with a small, medium, or large difference between the integers was analysed using the choice/no-choice method. Results from both studies indicate that adults…
Disentangling Random Motion and Flow in a Complex Medium
Koslover, Elena F.; Chan, Caleb K.; Theriot, Julie A.
2016-01-01
We describe a technique for deconvolving the stochastic motion of particles from large-scale fluid flow in a dynamic environment such as that found in living cells. The method leverages the separation of timescales to subtract out the persistent component of motion from single-particle trajectories. The mean-squared displacement of the resulting trajectories is rescaled so as to enable robust extraction of the diffusion coefficient and subdiffusive scaling exponent of the stochastic motion. We demonstrate the applicability of the method for characterizing both diffusive and fractional Brownian motion overlaid by flow and analytically calculate the accuracy of the method in different parameter regimes. This technique is employed to analyze the motion of lysosomes in motile neutrophil-like cells, showing that the cytoplasm of these cells behaves as a viscous fluid at the timescales examined. PMID:26840734
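The core of the approach, subtracting the persistent (flow) component from a trajectory and reading a diffusion coefficient off the residual mean-squared displacement, can be sketched in one dimension as follows. This is a minimal toy version: the paper's MSD-rescaling correction is not reproduced, and all parameter values (window width, drift, diffusion coefficient) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, D_true, v, n = 1.0, 0.5, 0.2, 20000

# One-dimensional Brownian motion (D = 0.5) overlaid by a constant flow v.
x = np.cumsum(rng.normal(0.0, np.sqrt(2 * D_true * dt), n)) + v * dt * np.arange(n)

# Subtract the persistent component with a centred running mean, exploiting
# the separation of timescales between flow and diffusion.
W = 201
trend = np.convolve(x, np.ones(W) / W, mode="valid")
resid = x[W // 2 : W // 2 + trend.size] - trend

# MSD of the detrended trajectory at lag tau: for pure 1-D diffusion,
# MSD(tau) ~ 2*D*tau*dt, up to a small bias introduced by the subtraction
# (which the paper corrects by rescaling the MSD).
lag = 10
msd = np.mean((resid[lag:] - resid[:-lag]) ** 2)
D_est = msd / (2 * lag * dt)
```

With the timescale separation respected (lag much smaller than the window), the drift cancels exactly inside the window interior and the recovered coefficient lands close to the true value.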
Subtraction coronary CT angiography using second-generation 320-detector row CT.
Yoshioka, Kunihiro; Tanaka, Ryoichi; Muranaka, Kenta; Sasaki, Tadashi; Ueda, Takanori; Chiba, Takuya; Takeda, Kouta; Sugawara, Tsuyoshi
2015-06-01
The purpose of this study was to explore the feasibility of subtraction coronary computed tomography angiography (CCTA) by second-generation 320-detector row CT in patients with severe coronary artery calcification using invasive coronary angiography (ICA) as the gold standard. This study was approved by the institutional board, and all subjects provided written consent. Twenty patients with calcium scores of >400 underwent conventional CCTA and subtraction CCTA followed by ICA. A total of 82 segments were evaluated for image quality using a 4-point scale and the presence of significant (>50 %) luminal stenosis by two independent readers. The average image quality was 2.3 ± 0.8 with conventional CCTA and 3.2 ± 0.6 with subtraction CCTA (P < 0.001). The percentage of segments with non-diagnostic image quality was 43.9 % on conventional CCTA versus 8.5 % on subtraction CCTA (P = 0.004). The segment-based diagnostic accuracy for detecting significant stenosis according to ICA revealed an area under the receiver operating characteristics curve of 0.824 (95 % confidence interval [CI], 0.750-0.899) for conventional CCTA and 0.936 (95 % CI 0.889-0.936) for subtraction CCTA (P = 0.001). The sensitivity, specificity, positive predictive value, and negative predictive value for conventional CCTA were 88.2, 62.5, 62.5, and 88.2 %, respectively, and for subtraction CCTA they were 94.1, 85.4, 82.1, and 95.3 %, respectively. As compared to conventional, subtraction CCTA using a second-generation 320-detector row CT showed improvement in diagnostic accuracy at segment base analysis in patients with severe calcifications.
N-jettiness subtractions for gg → H at subleading power
NASA Astrophysics Data System (ADS)
Moult, Ian; Rothen, Lorena; Stewart, Iain W.; Tackmann, Frank J.; Zhu, Hua Xing
2018-01-01
N-jettiness subtractions provide a general approach for performing fully-differential next-to-next-to-leading order (NNLO) calculations. Since they are based on the physical resolution variable N-jettiness, T_N, subleading power corrections in τ = T_N/Q, with Q a hard interaction scale, can also be systematically computed. We study the structure of power corrections for 0-jettiness, T_0, for the gg → H process. Using the soft-collinear effective theory we analytically compute the leading power corrections α_s τ ln τ and α_s^2 τ ln^3 τ (finding partial agreement with a previous result in the literature), and perform a detailed numerical study of the power corrections in the gg, gq, and qq̄ channels. This includes a numerical extraction of the α_s τ and α_s^2 τ ln^2 τ corrections, and a study of the dependence on the T_0 definition. Including such power suppressed logarithms significantly reduces the size of missing power corrections, and hence improves the numerical efficiency of the subtraction method. Having a more detailed understanding of the power corrections for both qq̄ and gg initiated processes also provides insight into their universality, and hence their behavior in more complicated processes where they have not yet been analytically calculated.
Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan; Kim, Hae-Young
2014-03-01
This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of the x, y, and z axes using a non-contact white light scanner. Accuracy was assessed using mean differences between the two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6%, and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. The accuracy and precision of PUT dental models for evaluating the performance of the oral scanner and subtractive RP technology were acceptable. Because of recent improvements in block materials and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models.
Evaluation of the morphology structure of meibomian glands based on mask dodging method
NASA Astrophysics Data System (ADS)
Yan, Huangping; Zuo, Yingbo; Chen, Yisha; Chen, Yanping
2016-10-01
Low contrast and non-uniform illumination of infrared (IR) meibography images make the detection of meibomian glands challenging. An improved Mask dodging algorithm is proposed. To overcome the low contrast left by the traditional Mask dodging method, a scale factor is used to enhance the image after subtracting the background image from the original one. Meibomian glands are detected and the ratio of the meibomian gland area to the measurement area is calculated. The results show that the improved Mask algorithm achieves the desired dodging effect: it eliminates non-uniform illumination and effectively improves the contrast of meibography images.
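The improved Mask dodging step, estimate a low-pass background, subtract it, then stretch the residual by a scale factor, can be sketched as below. The box-filter background, window size and scale factor are illustrative assumptions, not values from the paper.

```python
import numpy as np

def mask_dodge(img, win=31, scale=2.0):
    # Background ("Mask") estimate via a separable box filter; `win` and
    # `scale` are illustrative choices, not the paper's parameters.
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    k = np.ones(win) / win
    bg = np.apply_along_axis(lambda r: np.convolve(r, k, "valid"), 1, padded)
    bg = np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 0, bg)
    # Subtract the background, stretch the residual by the scale factor,
    # and re-centre on the global mean.
    return np.clip(scale * (img - bg) + img.mean(), 0, 255)

# Synthetic IR meibography-like frame: bright gland stripes superimposed on
# a non-uniform illumination ramp.
y, x = np.mgrid[0:128, 0:128]
frame = 40 + 0.5 * x + 30 * (np.sin(x / 4.0) > 0.5)
corrected = mask_dodge(frame)
```

After correction the illumination ramp is largely removed while the stripe (gland) contrast is amplified rather than flattened.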
New subtraction algorithms for evaluation of lesions on dynamic contrast-enhanced MR mammography.
Choi, Byung Gil; Kim, Hak Hee; Kim, Euy Neyng; Kim, Bum-soo; Han, Ji-Youn; Yoo, Seung-Schik; Park, Seog Hee
2002-12-01
We report new subtraction algorithms for the detection of lesions in dynamic contrast-enhanced MR mammography (CE MRM). Twenty-five patients with suspicious breast lesions underwent dynamic CE MRM using a 3D fast low-angle shot sequence. After the acquisition of the T1-weighted scout images, dynamic images were acquired six times after the bolus injection of contrast media. Serial subtractions, step-by-step subtractions, and reverse subtractions were performed. Two radiologists attempted to differentiate benign from malignant lesions in consensus. The sensitivity, specificity, and accuracy of the method for differentiating malignant tumors from benign lesions were 85.7%, 100%, and 96%, respectively. Subtraction images allowed better visualization of the enhancement, as well as its temporal pattern, than visual inspection of the dynamic images alone. Our findings suggest that the new subtraction algorithms are adequate for screening malignant breast lesions and can potentially replace time-intensity profile analysis on user-selected regions of interest.
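Dynamic subtraction of this kind reduces to simple frame arithmetic on the acquired series. The abstract names three algorithms without defining them, so the frame pairings below are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def serial_subtraction(frames):
    # Assumed pairing: each post-contrast frame minus the baseline (frame 0).
    return frames[1:] - frames[0]

def step_by_step_subtraction(frames):
    # Assumed pairing: consecutive-frame differences (enhancement rate).
    return np.diff(frames, axis=0)

def reverse_subtraction(frames):
    # Assumed pairing: final frame minus each earlier frame (late enhancement).
    return frames[-1] - frames[:-1]

# Toy dynamic series (1 baseline + 6 post-contrast acquisitions) for a
# two-pixel "image": pixel 0 enhances rapidly, pixel 1 slowly.
t = np.arange(7.0)
frames = np.stack([100 + 50 * (1 - np.exp(-t)), 100 + 5 * t], axis=1)
ser = serial_subtraction(frames)
```

The rapidly enhancing pixel stands out most in the serial subtraction, which is the behaviour the temporal pattern assessment relies on.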
Young children's use of derived fact strategies for addition and subtraction
Dowker, Ann
2014-01-01
Forty-four children between 6;0 and 7;11 took part in a study of derived fact strategy use. They were assigned to addition and subtraction levels on the basis of calculation pretests. They were then given Dowker's (1998) test of derived fact strategies in addition, involving strategies based on the Identity, Commutativity, Addend +1, Addend −1, and addition/subtraction Inverse principles; and test of derived fact strategies in subtraction, involving strategies based on the Identity, Minuend +1, Minuend −1, Subtrahend +1, Subtrahend −1, Complement and addition/subtraction Inverse principles. The exact arithmetic problems given varied according to the child's previously assessed calculation level and were selected to be just a little too difficult for the child to solve unaided. Children were given the answer to a problem and then asked to solve another problem that could be solved quickly by using this answer, together with the principle being assessed. The children also took the WISC Arithmetic subtest. Strategies differed greatly in difficulty, with Identity being the easiest, and the Inverse and Complement principles being most difficult. The Subtrahend +1 and Subtrahend −1 problems often elicited incorrect strategies based on an overextension of the principles of addition to subtraction. It was concluded that children may have difficulty with understanding and applying the relationships between addition and subtraction. Derived fact strategy use was significantly related to both calculation level and to WISC Arithmetic scaled score. PMID:24431996
Effect of scaling and root planing on alveolar bone as measured by subtraction radiography.
Hwang, You-Jeong; Fien, Matthew Jonas; Lee, Sam-Sun; Kim, Tae-Il; Seol, Yang-Jo; Lee, Yong-Moo; Ku, Young; Rhyu, In-Chul; Chung, Chong-Pyoung; Han, Soo-Boo
2008-09-01
Scaling and root planing of diseased periodontal pockets is fundamental to the treatment of periodontal disease. Although various clinical parameters have been used to assess the efficacy of this therapy, radiographic analysis of changes in bone density following scaling and root planing has not been extensively researched. In this study, digital subtraction radiography was used to analyze changes that occurred in the periodontal hard tissues following scaling and root planing. Thirteen subjects with a total of 39 sites that presented with >3 mm of vertical bone loss were included in this study. Clinical examinations were performed and radiographs were taken prior to treatment and were repeated 6 months following scaling and root planing. Radiographic analysis was performed with computer-assisted radiographic evaluation software. Three regions of interest (ROI) were defined as the most coronal, middle, and apical portions of each defect. A fourth ROI was used for each site as a control region and was placed at a distant, untreated area. Statistical analysis was carried out to evaluate changes in the mean gray level at the coronal, middle, and apical region of each treated defect. Digital subtraction radiography revealed an increase in radiographic density in 101 of the 117 test regions (83.3%). A 256 gray level was used, and a value >128 was assumed to represent a density gain in the ROI. The average gray level increase was 18.65. Although the coronal, middle, and apical regions displayed increases in bone density throughout this study, the bone density of the apical ROI (gray level = 151.27 +/- 20.62) increased significantly more than the bone density of the coronal ROI (gray level = 139.19 +/- 21.78). A significant increase in bone density was seen in probing depths >5 mm compared to those <5 mm in depth. No significant difference was found with regard to bone-density changes surrounding single- versus multiple-rooted teeth. 
Scaling and root planing of diseased periodontal pockets can significantly increase radiographic alveolar bone density as demonstrated through the use of digital subtraction radiography.
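The gray-level convention described above (256 levels, with values above 128 indicating a density gain) maps directly onto a simple offset subtraction. The sketch below uses synthetic radiographs; the registration step a real system needs is omitted.

```python
import numpy as np

def subtraction_image(baseline, followup):
    # Offset the density change by 128 so that gray level 128 means
    # "no change" and values above 128 indicate a density gain,
    # matching the abstract's convention.
    diff = followup.astype(int) - baseline.astype(int) + 128
    return np.clip(diff, 0, 255).astype(np.uint8)

rng = np.random.default_rng(1)
base = rng.integers(90, 110, size=(64, 64))
follow = base.copy()
follow[20:30, 20:30] += 23            # simulated density gain in one defect
sub = subtraction_image(base, follow)
roi_mean = sub[20:30, 20:30].mean()   # cf. the paper's mean ROI gray levels
```

Unchanged regions sit at exactly 128, and the treated region's mean gray level rises by the simulated density gain.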
NASA Astrophysics Data System (ADS)
Emam, Aml A.; Abdelaleem, Eglal A.; Naguib, Ibrahim A.; Abdallah, Fatma F.; Ali, Nouruddin W.
2018-03-01
Furosemide and spironolactone are commonly prescribed antihypertensive drugs. Canrenone is the main degradation product and main metabolite of spironolactone. Ratio subtraction and extended ratio subtraction spectrophotometric methods were previously applied to the quantitation of binary mixtures only. An extension of these methods, successive ratio subtraction, is introduced in the present work for the quantitative determination of ternary mixtures, exemplified by furosemide, spironolactone and canrenone. Manipulating the ratio spectra of the ternary mixture allowed their determination at 273.6 nm, 285 nm and 240 nm, in the concentration ranges of 2-16 μg/mL, 4-32 μg/mL and 1-18 μg/mL for furosemide, spironolactone and canrenone, respectively. Method specificity was ensured by application to laboratory-prepared mixtures. The introduced method was shown to be accurate and precise. The method was validated according to ICH guidelines, and its validity was further ensured by application to the pharmaceutical formulation. Statistical comparison between the obtained results and those of a reported HPLC method, using Student's t-test and the F-test, showed no significant difference.
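The building block of the successive variant, plain ratio subtraction, works by dividing the mixture spectrum by a divisor spectrum of the interfering component, reading the constant plateau where only that component absorbs, subtracting it, and multiplying back. A sketch with synthetic Gaussian spectra (not the real absorption spectra of these drugs) is shown below.

```python
import numpy as np

wl = np.linspace(200, 400, 1001)                 # wavelength grid, nm
gauss = lambda mu, sig: np.exp(-((wl - mu) ** 2) / (2 * sig ** 2))

# Synthetic unit spectra; band positions and widths are assumptions.
X = gauss(260, 15)            # analyte X
Y = gauss(320, 20)            # interferent Y, extending beyond X's band
mix = 3.0 * X + 2.0 * Y       # binary mixture: cX = 3, cY = 2

# Ratio subtraction: divide by the divisor spectrum of Y, read the plateau
# where only Y absorbs (ratio ~ cY there), subtract it, multiply back.
divisor = Y
ratio = mix / np.maximum(divisor, 1e-12)
plateau = ratio[wl > 370].mean()                 # region where X ~ 0
X_recovered = (ratio - plateau) * divisor        # ~ cX * X
```

The recovered spectrum is the analyte's spectrum scaled by its concentration, which is then read off at the analytical wavelength; the successive variant repeats this step to peel components off a ternary mixture one at a time.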
Bjourson, A J; Stone, C E; Cooper, J E
1992-01-01
A novel subtraction hybridization procedure, incorporating a combination of four separation strategies, was developed to isolate unique DNA sequences from a strain of Rhizobium leguminosarum bv. trifolii. Sau3A-digested DNA from this strain, i.e., the probe strain, was ligated to a linker and hybridized in solution with an excess of pooled subtracter DNA from seven other strains of the same biovar which had been restricted, ligated to a different, biotinylated, subtracter-specific linker, and amplified by polymerase chain reaction to incorporate dUTP. Subtracter DNA and subtracter-probe hybrids were removed by phenol-chloroform extraction of a streptavidin-biotin-DNA complex. NENSORB chromatography of the sequences remaining in the aqueous layer captured biotinylated subtracter DNA which may have escaped removal by phenol-chloroform treatment. Any traces of contaminating subtracter DNA were removed by digestion with uracil DNA glycosylase. Finally, remaining sequences were amplified by polymerase chain reaction with a probe strain-specific primer, labelled with 32P, and tested for specificity in dot blot hybridizations against total genomic target DNA from each strain in the subtracter pool. Two rounds of subtraction-amplification were sufficient to remove cross-hybridizing sequences and to give a probe which hybridized only with homologous target DNA. The method is applicable to the isolation of DNA and RNA sequences from both procaryotic and eucaryotic cells. PMID:1637166
An automated subtraction of NLO EW infrared divergences
NASA Astrophysics Data System (ADS)
Schönherr, Marek
2018-02-01
In this paper a generalisation of the Catani-Seymour dipole subtraction method to next-to-leading order electroweak calculations is presented. All singularities due to photon and gluon radiation off both massless and massive partons, in the presence of both massless and massive spectators, are accounted for. Particular attention is paid to the simultaneous subtraction of singularities of both QCD and electroweak origin, which are present in the next-to-leading order corrections to processes with more than one perturbative order contributing at Born level. Similarly, the embedding of non-dipole-like photon splittings in the dipole subtraction scheme is discussed. The implementation of the formulated subtraction scheme in the framework of the Sherpa Monte Carlo event generator, including the restriction of the dipole phase space through the α-parameters and the expansion of its existing subtraction for NLO QCD calculations, is detailed, and numerous internal consistency checks validating the obtained results are presented.
Improvement of two-way continuous-variable quantum key distribution with virtual photon subtraction
NASA Astrophysics Data System (ADS)
Zhao, Yijia; Zhang, Yichen; Li, Zhengyu; Yu, Song; Guo, Hong
2017-08-01
We propose a method to improve the performance of the two-way continuous-variable quantum key distribution protocol by virtual photon subtraction. The virtual photon subtraction, implemented via non-Gaussian post-selection, not only enhances the entanglement of the two-mode squeezed vacuum state but also simplifies the physical operation and improves efficiency. In the two-way protocol, virtual photon subtraction can be applied to the two sources independently. Numerical simulations show that the optimal performance of the modified two-way protocol is obtained with photon subtraction used only by Alice. The transmission distance and tolerable excess noise are improved by using virtual photon subtraction with appropriate parameters. Moreover, the tolerable excess noise maintains a high value as the distance increases, so that the robustness of the two-way continuous-variable quantum key distribution system is significantly improved, especially at long transmission distances.
Chen, Zhijia; Zhu, Yuanchang; Di, Yanqiang; Feng, Shaochong
2015-01-01
In an IaaS (infrastructure as a service) cloud environment, users are provisioned with virtual machines (VMs). To allocate resources for users dynamically and effectively, accurately predicting resource demands is essential. For this purpose, this paper proposes a self-adaptive prediction method using an ensemble model and a subtractive-fuzzy-clustering-based fuzzy neural network (ESFCFNN). We analyze the characteristics of user preferences and demands, and then construct the architecture of the prediction model. Several base predictors are combined to form the ensemble model, and the structure and learning algorithm of the fuzzy neural network are described. To obtain the number of fuzzy rules and the initial values of the premise and consequent parameters, the paper proposes fuzzy c-means combined with a subtractive clustering algorithm, that is, subtractive-fuzzy clustering. Finally, different criteria are adopted to evaluate the proposed method. The experimental results show that the method is accurate and effective in predicting resource demands. PMID:25691896
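The subtractive clustering step that seeds the fuzzy rules can be sketched in the standard Chiu form: assign each point a potential from its neighbours, pick the highest-potential point as a centre, subtract that centre's influence, and repeat. The radii and stopping ratio below are common defaults, not the paper's values.

```python
import numpy as np

def subtractive_clustering(X, ra=1.0, rb=1.5, stop=0.15):
    # Chiu-style subtractive clustering (sketch): ra sets the neighbourhood
    # radius for the potential, rb the suppression radius after a centre
    # is chosen; iteration stops when the top potential falls below a
    # fraction `stop` of the first one.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    P = np.exp(-4.0 * d2 / ra**2).sum(axis=1)     # potential of each point
    centers, P0 = [], P.max()
    while P.max() > stop * P0:
        k = int(P.argmax())
        centers.append(X[k])
        # Subtract the chosen centre's influence from all potentials.
        P = P - P[k] * np.exp(-4.0 * d2[k] / rb**2)
    return np.array(centers)

rng = np.random.default_rng(2)
demands = np.vstack([rng.normal(0.0, 0.1, (40, 2)),
                     rng.normal(3.0, 0.1, (40, 2))])   # two demand regimes
centers = subtractive_clustering(demands)
```

Each recovered centre then becomes one fuzzy rule's premise; fuzzy c-means refines the centres from there.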
Response to Comment on "Does the Earth Have an Adaptive Infrared Iris?"
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Chou, Ming-Dah; Lindzen, Richard S.; Hou, Arthur Y.
2001-01-01
In his comment on Lindzen et al., Harrison found that the amount of high-level clouds, A, and the sea-surface temperature beneath clouds, T, averaged over a large oceanic domain in the western Pacific have secular linear trends of opposite signs over a period of 20 months. He found that when the linear trends are subtracted from the data, the correlation between the residual A and T is much reduced. His estimates of the confidence levels for the correlation indicate, moreover, that this correlation is not statistically significant. The domain-averaged A and, to a lesser degree, T, have distinct intra-seasonal and seasonal variations. These variations are influenced by the large-scale wind and temperature distributions and by the seasonal variation of insolation. To separate the local effect from the effect of slowly changing large-scale conditions, rather than subtracting 20-month linear trends from the series, which has the potential to spuriously extrapolate intra-seasonal and seasonal variations to even longer time scales, we subtracted 30-day running means of A and T from each time series; in effect, the data were high-pass filtered. The number of points (days), N, is reduced by this process from the original value of 510 to 480.
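The high-pass filtering described, subtracting a 30-day running mean so that only fast local variations remain, can be sketched as below. The series is synthetic (a seasonal swing plus weekly fluctuations), not the actual cloud-fraction data.

```python
import numpy as np

def highpass_30d(series, window=30):
    # Subtract the running mean of the preceding `window` days from each
    # day; the first `window` days are lost, reducing N from 510 to 480.
    kernel = np.ones(window) / window
    running = np.convolve(series, kernel, mode="valid")  # mean of days t..t+29
    return series[window:] - running[:-1]                # day t minus mean of t-30..t-1

days = np.arange(510.0)
# Toy daily series A: slow seasonal swing plus fast weekly fluctuations
# (amplitudes are illustrative, not the paper's data).
A = 0.3 + 0.05 * np.sin(2 * np.pi * days / 365) + 0.02 * np.sin(2 * np.pi * days / 7)
resid = highpass_30d(A)
```

The residual keeps the sub-monthly variability while the seasonal component, which the running mean tracks, is strongly suppressed; the point count drops from 510 to 480 exactly as the response states.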
Joucla, Sébastien; Franconville, Romain; Pippow, Andreas; Kloppenburg, Peter; Pouzat, Christophe
2013-08-01
Calcium imaging has become a routine technique in neuroscience for subcellular- to network-level investigations. The fast progress in the development of new indicators and imaging techniques calls for dedicated, reliable analysis methods. In particular, efficient and quantitative background fluorescence subtraction routines would benefit most of the calcium imaging research field. A background-subtracted fluorescence transient estimation method that does not require any independent background measurement is therefore developed. This method is based on a fluorescence model fitted to single-trial data using a classical nonlinear regression approach. The model includes an appropriate probabilistic description of the acquisition system's noise, leading to accurate confidence intervals on all quantities of interest (background fluorescence, normalized background-subtracted fluorescence time course) when background fluorescence is homogeneous. An automatic procedure detecting background inhomogeneities inside the region of interest is also developed and is shown to be efficient on simulated data. The implementation and performance of the proposed method on experimental recordings from the mouse hypothalamus are presented in detail. This method, which applies to recordings of both single cells and bulk-stained tissue, should help improve the statistical comparison of fluorescence calcium signals between experiments and studies.
Deblurring in digital tomosynthesis by iterative self-layer subtraction
NASA Astrophysics Data System (ADS)
Youn, Hanbean; Kim, Jee Young; Jang, SunYoung; Cho, Min Kook; Cho, Seungryong; Kim, Ho Kyung
2010-04-01
Recent developments in large-area flat-panel detectors have renewed interest in tomosynthesis for multiplanar x-ray imaging. However, the typical shift-and-add (SAA) or backprojection reconstruction method suffers from a lack of sharpness in the reconstructed images because of blur artifacts arising from the superposition of out-of-plane objects. In this study, we have devised an intuitively simple method to reduce the blur artifact based on an iterative approach. The method repeats a forward- and backward-projection procedure to determine the blur artifact affecting the plane-of-interest (POI), and then subtracts it from the POI. The proposed method does not include any Fourier-domain operations, hence excluding Fourier-domain-originated artifacts. We describe the concept of self-layer subtractive tomosynthesis and demonstrate its performance with numerical simulation and experiments. A comparative analysis with conventional methods, such as the SAA and filtered backprojection methods, is also presented.
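The forward/backproject-and-subtract loop can be demonstrated on a 1-D, two-plane toy problem: shift-and-add reconstructs each plane with out-of-plane blur, and each iteration reprojects the current estimate of the *other* planes to estimate and remove that blur. This is a sketch of the idea only; the shift geometry, iteration count and non-negativity clipping are our assumptions, not details from the paper.

```python
import numpy as np

def shift(arr, s):
    # Integer shift with zero fill.
    out = np.zeros_like(arr)
    if s >= 0:
        out[s:] = arr[:arr.size - s]
    else:
        out[:s] = arr[-s:]
    return out

angles = [-2, -1, 0, 1, 2]        # projection "angles" as per-plane pixel shifts

def project(layers):
    # Forward model: at angle a, plane z is displaced by a*(z+1) pixels.
    return [sum(shift(layers[z], a * (z + 1)) for z in range(len(layers)))
            for a in angles]

def saa(projs):
    # Shift-and-add backprojection: realign each plane, average over angles.
    return np.array([np.mean([shift(p, -a * (z + 1))
                              for p, a in zip(projs, angles)], axis=0)
                     for z in range(2)])

# Two-plane 1-D phantom with one impulse per plane.
layers = np.zeros((2, 64))
layers[0, 20] = 1.0
layers[1, 40] = 1.0
recon = saa(project(layers))      # SAA planes, blurred by out-of-plane impulses

# Iterative self-layer subtraction: reproject the other planes' estimate,
# backproject it onto the POI, and subtract that blur from the SAA result.
est = recon.copy()
for _ in range(20):
    new = np.empty_like(est)
    for z in range(2):
        others = est.copy()
        others[z] = 0.0
        new[z] = recon[z] - saa(project(others))[z]
    est = np.clip(new, 0.0, None)
```

In this toy setup the iteration converges to the true layers: the impulse amplitudes are restored and the out-of-plane ghost copies are driven to zero.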
The AAPM/RSNA physics tutorial for residents: digital fluoroscopy.
Pooley, R A; McKinney, J M; Miller, D A
2001-01-01
A digital fluoroscopy system is most commonly configured as a conventional fluoroscopy system (tube, table, image intensifier, video system) in which the analog video signal is converted to and stored as digital data. Other methods of acquiring the digital data (eg, digital or charge-coupled device video and flat-panel detectors) will become more prevalent in the future. Fundamental concepts related to digital imaging in general include binary numbers, pixels, and gray levels. Digital image data allow the convenient use of several image processing techniques including last image hold, gray-scale processing, temporal frame averaging, and edge enhancement. Real-time subtraction of digital fluoroscopic images after injection of contrast material has led to widespread use of digital subtraction angiography (DSA). Additional image processing techniques used with DSA include road mapping, image fade, mask pixel shift, frame summation, and vessel size measurement. Peripheral angiography performed with an automatic moving table allows imaging of the peripheral vasculature with a single contrast material injection.
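The DSA step described above reduces to subtracting a pre-contrast mask from a post-injection frame; working in the log domain makes the static anatomy cancel exactly under the Beer-Lambert model. The geometry and attenuation values in this sketch are illustrative.

```python
import numpy as np

def dsa(mask, contrast, eps=1e-6):
    # Log-domain mask subtraction: tissue and bone cancel, leaving only
    # the iodine-filled vessel (eps guards against log(0)).
    return np.log(mask + eps) - np.log(contrast + eps)

# Toy fluoroscopy frames: transmitted intensity = exp(-attenuation).
bone = np.zeros((32, 32))
bone[:, 10:14] = 2.0                  # static anatomy
vessel = np.zeros((32, 32))
vessel[16, :] = 1.0                   # opacified vessel crossing the bone
mask_img = np.exp(-bone)              # pre-contrast mask frame
contrast_img = np.exp(-(bone + vessel))
angio = dsa(mask_img, contrast_img)
```

In the subtraction image the bone column vanishes entirely while the vessel appears at its full attenuation value, even where it overlaps the bone; techniques like mask pixel shift exist precisely because patient motion breaks this cancellation.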
Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin
2011-03-01
Noise characterization through estimation of the noise power spectrum (NPS) is a central component of the evaluation of digital x-ray systems. Extensive work has been conducted to achieve accurate and precise measurement of the NPS. One approach to improving the accuracy of the NPS measurement is to reduce the statistical variance of the NPS results by involving more data samples. However, this method is based on the assumption that the noise in a radiographic image arises from stochastic processes. In practical data, artifacts always superimpose on the stochastic noise as low-frequency background trends and prevent accurate NPS estimation. The purpose of this study was to investigate an appropriate background-detrending technique to improve the accuracy of NPS estimation for digital x-ray systems. To identify the optimal background-detrending technique for NPS estimation, four artifact-removal methods were quantitatively studied and compared: (1) subtraction of a low-pass-filtered version of the image, (2) subtraction of a 2-D first-order fit to the image, (3) subtraction of a 2-D second-order polynomial fit to the image, and (4) subtraction of two uniform exposure images. In addition, background trend removal was applied either to the original region of interest or separately to its partitioned sub-blocks for all four methods. The performance of the background-detrending techniques was compared in terms of the statistical variance of the NPS results and the suppression of the low-frequency systematic rise. Among the four methods, subtraction of a 2-D second-order polynomial fit to the image was the most effective at suppressing the low-frequency systematic rise and reducing variance in the NPS estimate for the authors' digital x-ray system. Subtraction of a low-pass-filtered version of the image increased the NPS variance above low-frequency components because of the side-lobe effects of the frequency response of the boxcar filtering function. Subtracting two uniform exposure images gave the worst result in terms of the smoothness of the NPS curve, although it was effective at suppressing the low-frequency systematic rise. Subtraction of a 2-D first-order fit to the image was also effective for background detrending, but less so than subtraction of a 2-D second-order polynomial fit for the authors' system. As a result of this study, the authors verified that it is necessary and feasible to obtain a better NPS estimate through appropriate background trend removal, and that subtraction of a 2-D second-order polynomial fit to the image was the most appropriate background-detrending technique.
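The preferred technique, subtracting a 2-D second-order polynomial fit before computing the NPS, can be sketched with a linear least-squares fit over the six quadratic basis terms. The ROI size, pixel pitch and noise level below are illustrative, and the NPS normalization shown (px²/N) is one common convention rather than the paper's exact formula.

```python
import numpy as np

def detrend_poly2(roi):
    # Fit and subtract a 2-D second-order polynomial background:
    # basis = {1, x, y, xy, x^2, y^2}, solved by linear least squares.
    ny, nx = roi.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    A = np.stack([np.ones_like(x), x, y, x * y, x**2, y**2], -1).reshape(-1, 6)
    coef, *_ = np.linalg.lstsq(A, roi.ravel(), rcond=None)
    return roi - (A @ coef).reshape(ny, nx)

def nps_2d(roi, px=0.1):
    # NPS of one detrended ROI: |FFT|^2 scaled by pixel area over pixel count.
    r = detrend_poly2(roi)
    return np.abs(np.fft.fft2(r)) ** 2 * (px * px / r.size)

rng = np.random.default_rng(3)
ny = nx = 64
y, x = np.mgrid[0:ny, 0:nx].astype(float)
# White noise (sigma = 2) plus a smooth quadratic background trend (the artifact).
roi = rng.normal(0, 2.0, (ny, nx)) + 0.01 * (x - 32) ** 2 + 0.05 * y
nps = nps_2d(roi)
```

After detrending, the spectrum is flat at the white-noise level (variance times pixel area) and the low-frequency rise that the quadratic background would otherwise inject, including the DC bin, is removed.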
An investigation of self-subtraction holography in LiNbO3
NASA Technical Reports Server (NTRS)
Vahey, D. W.; Kenan, R. P.; Hartman, N. F.; Sherman, R. C.
1981-01-01
A sample having very promising self-subtraction characteristics was tested in depth: hologram formation times were on the order of 150 s, the null signal was less than 2.5% of the peak signal, and no fatigue or instability was detected over the span of the experiments. Another sample, fabricated with at most slight modifications, did not perform nearly as well. In all samples, attempts to improve the self-subtraction characteristics by various thermal treatments had either no effect or adverse effects, with one exception in which improvement was noted after a time delay of several days. A theory developed to describe self-subtraction reproduced the observed decrease in beam intensity with time, but the predicted decay curve was oscillatory, in contrast to the exponential-like decay observed. The theory was also inadequate to account for the experimental sensitivity of self-subtraction to the Bragg angle of the hologram. It is concluded that self-subtraction is a viable method for optical processing systems requiring background discrimination.
Temporal subtraction of chest radiographs compensating pose differences
NASA Astrophysics Data System (ADS)
von Berg, Jens; Dworzak, Jalda; Klinder, Tobias; Manke, Dirk; Kreth, Adrian; Lamecker, Hans; Zachow, Stefan; Lorenz, Cristian
2011-03-01
Temporal subtraction techniques using 2D image registration improve the detectability of interval changes in chest radiographs. Although such methods have been known for some time, they are not widely used in radiologic practice. The reason is that strong pose differences occur between two acquisitions separated by months to years, and they do so in a considerable number of cases. Such differences cannot be compensated by available image registration methods and thus mask interval changes, rendering them undetectable. In this paper a method is proposed to estimate the 3D pose difference by adapting a 3D rib cage model to both projections. The estimated difference is then compensated for, producing a subtraction image with virtually no change in pose. The method assumes that no 3D image data are available for the patient. The accuracy of the pose estimation is validated with chest phantom images acquired under controlled geometric conditions. A subtle interval change simulated by a piece of plastic foam attached to the phantom becomes visible in subtraction images generated with this technique even at strong angular pose differences, such as an anterior-posterior inclination of 13 degrees.
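The paper's pose compensation relies on a 3-D rib-cage model, which is beyond a short sketch; for comparison, the simplest register-then-subtract baseline that such methods improve on (whole-pixel 2-D translation estimated by phase correlation) might look like:

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Whole-pixel translation that aligns `mov` to `ref`, estimated from
    the peak of the normalized cross-power spectrum (phase correlation)."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    r = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    # Unwrap shifts larger than half the image size to negative values
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

def temporal_subtraction(prior, current):
    """Align the prior radiograph to the current one, then subtract, so
    common anatomy cancels and interval changes remain. Translation only;
    it cannot compensate the perspective pose differences the paper targets."""
    dy, dx = phase_correlation_shift(current, prior)
    aligned = np.roll(prior, (dy, dx), axis=(0, 1))
    return current - aligned
```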
Ohara, Yoshikazu; Horinouchi, Satoshi; Hashimoto, Makoto; Shintaku, Yohei; Yamanaka, Kazushi
2011-08-01
To improve the selectivity for closed cracks over other reflectors in ultrasonic imaging, we propose an extension of a novel imaging method, the subharmonic phased array for crack evaluation (SPACE), together with another approach based on the subtraction of responses at different external loads. When external static or dynamic loads are applied to closed cracks, the contact state within the cracks varies, changing the intensity of the crack responses. In contrast, responses from objects other than cracks are independent of the external load; hence only cracks can be extracted by subtracting the responses at different loads. In this study, we performed fundamental experiments on a closed fatigue crack formed in an aluminum alloy compact tension (CT) specimen using the proposed method. We examined the static load dependence of SPACE images and the dynamic load dependence of linear phased array (PA) images by simulating the external loads with a servohydraulic fatigue testing machine. By subtracting the images at different external loads, we show that this method is useful in extracting only the intensity change of responses related to closed cracks, while canceling the responses of objects other than cracks. Copyright © 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhao, Yijia; Zhang, Yichen; Xu, Bingjie; Yu, Song; Guo, Hong
2018-04-01
The method of improving the performance of continuous-variable quantum key distribution protocols by postselection has recently been proposed and verified. In continuous-variable measurement-device-independent quantum key distribution (CV-MDI QKD) protocols, the measurement results are obtained from the untrusted third party Charlie, and there has been no effective method of improving CV-MDI QKD by postselection with untrusted measurement. We propose a method to improve the performance of the coherent-state CV-MDI QKD protocol by virtual photon subtraction via non-Gaussian postselection. The non-Gaussian postselection of the transmitted data is equivalent to an ideal photon subtraction on the two-mode squeezed vacuum state, which enhances the performance of CV-MDI QKD. In the CV-MDI QKD protocol with non-Gaussian postselection, the two users select their own data independently. We demonstrate that the optimal performance of the modified CV-MDI QKD protocol is obtained when the transmitted data are selected only by Alice. By setting appropriate parameters for the virtual photon subtraction, the secret key rate and the tolerable excess noise are both improved at long transmission distance. The method provides an effective optimization scheme for the application of CV-MDI QKD protocols.
Improving Arterial Spin Labeling by Using Deep Learning.
Kim, Ki Hwan; Choi, Seung Hong; Park, Sung-Hong
2018-05-01
Purpose To develop a deep learning algorithm that generates arterial spin labeling (ASL) perfusion images with higher accuracy and robustness by using a smaller number of subtraction images. Materials and Methods For ASL image generation from pair-wise subtraction, we used a convolutional neural network (CNN) as a deep learning algorithm. The ground truth perfusion images were generated by averaging six or seven pairwise subtraction images acquired with (a) conventional pseudocontinuous arterial spin labeling from seven healthy subjects or (b) Hadamard-encoded pseudocontinuous ASL from 114 patients with various diseases. CNNs were trained to generate perfusion images from a smaller number (two or three) of subtraction images and evaluated by means of cross-validation. CNNs from the patient data sets were also tested on 26 separate stroke data sets. CNNs were compared with the conventional averaging method in terms of mean square error and radiologic score by using a paired t test and/or Wilcoxon signed-rank test. Results Mean square errors were approximately 40% lower than those of the conventional averaging method for the cross-validation with the healthy subjects and patients and the separate test with the patients who had experienced a stroke (P < .001). Region-of-interest analysis in stroke regions showed that cerebral blood flow maps from CNN (mean ± standard deviation, 19.7 mL per 100 g/min ± 9.7) had smaller mean square errors than those determined with the conventional averaging method (43.2 ± 29.8) (P < .001). Radiologic scoring demonstrated that CNNs suppressed noise and motion and/or segmentation artifacts better than the conventional averaging method did (P < .001). Conclusion CNNs provided superior perfusion image quality and more accurate perfusion measurement compared with those of the conventional averaging method for generation of ASL images from pair-wise subtraction images. © RSNA, 2017.
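The conventional averaging baseline against which the CNN is compared is simply the mean of the pairwise control-minus-label subtraction images; a minimal sketch (array names are illustrative):

```python
import numpy as np

def asl_perfusion_average(controls, labels):
    """Conventional ASL perfusion estimate: the mean of the pairwise
    control-minus-label subtraction images (the baseline the CNN is
    compared against). Inputs are stacks of 2-D images, pair axis first."""
    controls = np.asarray(controls, dtype=float)
    labels = np.asarray(labels, dtype=float)
    return (controls - labels).mean(axis=0)

# Averaging N pairs reduces the noise standard deviation roughly as
# 1/sqrt(N), which is why generating images from only two or three pairs
# (the CNN's input) is a harder, noisier problem.
```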
Bi, Huan -Yu; Wu, Xing -Gang; Ma, Yang; ...
2015-06-26
The Principle of Maximum Conformality (PMC) eliminates QCD renormalization scale-setting uncertainties using fundamental renormalization group methods. The resulting scale-fixed pQCD predictions are independent of the choice of renormalization scheme and show rapid convergence. The coefficients of the scale-fixed couplings are identical to the corresponding conformal series with zero β-function. Two all-orders methods for systematically implementing the PMC scale-setting procedure for existing high-order calculations are discussed in this article. One implementation is based on the PMC-BLM correspondence (PMC-I); the other, more recent, method (PMC-II) uses the R_δ-scheme, a systematic generalization of the minimal subtraction renormalization scheme. Both approaches satisfy all of the principles of the renormalization group and lead to scale-fixed and scheme-independent predictions at each finite order. In this work, we show that the PMC-I and PMC-II scale-setting methods are in practice equivalent to each other. We illustrate this equivalence for the four-loop calculations of the annihilation ratio R(e+e−) and the Higgs partial width Γ(H→bb̄). Both methods lead to the same resummed ('conformal') series up to all orders. The small scale differences between the two approaches are reduced as additional renormalization group {β_i}-terms in the pQCD expansion are taken into account. In addition, we show that the special degeneracy relations, which underlie the equivalence of the two PMC approaches and the resulting conformal features of the pQCD series, are in fact general properties of non-Abelian gauge theory.
Y-MP floating point and Cholesky factorization
NASA Technical Reports Server (NTRS)
Carter, Russell
1991-01-01
The floating-point arithmetic implemented in the Cray 2 and Cray Y-MP computer systems is nearly identical, but large-scale computations performed on the two systems have exhibited significant differences in accuracy. This difference is analyzed for the Cholesky factorization algorithm, and its source is found to be the subtract-magnitude operation of the Cray Y-MP. Results from numerical experiments for a range of problem sizes are presented, along with an efficient method for improving the accuracy of the factorization obtained on the Y-MP.
[Development of a digital chest phantom for studies on energy subtraction techniques].
Hayashi, Norio; Taniguchi, Anna; Noto, Kimiya; Shimosegawa, Masayuki; Ogura, Toshihiro; Doi, Kunio
2014-03-01
Digital chest phantoms continue to play a significant role in optimizing imaging parameters for chest X-ray examinations. The purpose of this study was to develop a digital chest phantom for studies on energy subtraction techniques under ideal conditions without image noise. Computed tomography (CT) images from the LIDC (Lung Image Database Consortium) were employed to develop the digital chest phantom. The method consisted of the following four steps: 1) segmentation of the lung and bone regions on CT images; 2) creation of simulated nodules; 3) transformation of the segmented images to attenuation coefficient maps; and 4) projection from the attenuation coefficient maps. To evaluate the usefulness of the digital chest phantom, we determined the contrast of the simulated nodules in projection images of the phantom acquired at high and low X-ray energies, in soft tissue images obtained by energy subtraction, and in "gold standard" images of the soft tissues. Using our method, the lung and bone regions were segmented on the original CT images. The contrast of simulated nodules in soft tissue images obtained by energy subtraction closely matched that obtained using the gold standard images. We therefore conclude that it is possible to carry out simulation studies of energy subtraction techniques using the created digital chest phantoms. Our method is potentially useful for simulation studies aimed at optimizing the imaging parameters of chest X-ray examinations.
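Energy subtraction itself, as applied to such projection images, can be illustrated with the textbook log-domain weighted subtraction (the weight and attenuation values below are illustrative, not those of the study):

```python
import numpy as np

def energy_subtraction_soft(low_kv, high_kv, w):
    """Log-domain weighted subtraction. With I = I0 * exp(-mu * t), the
    log images are linear in the material thicknesses, so a weight w
    chosen to cancel the bone term leaves a soft-tissue-only image."""
    return np.log(low_kv) - w * np.log(high_kv)

# With bone attenuation coefficients mu_b_low and mu_b_high at the two
# energies, the cancelling weight is w = mu_b_low / mu_b_high.
```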
Computer-Aided Diagnostic (CAD) Scheme by Use of Contralateral Subtraction Technique
NASA Astrophysics Data System (ADS)
Nagashima, Hiroyuki; Harakawa, Tetsumi
We developed a computer-aided diagnostic (CAD) scheme for the detection of subtle image findings of acute cerebral infarction in brain computed tomography (CT) by using a contralateral subtraction technique. In our computerized scheme, the lateral inclination of the image was first corrected automatically by rotating and shifting. The contralateral subtraction image was then derived by subtracting the left-right reversed image from the original image. Initial candidates for acute cerebral infarction were identified using multiple-thresholding and image filtering techniques. In the first step of false-positive removal, fourteen image features were extracted from each of the initial candidates, and halfway candidates were detected by applying a rule-based test to these features. In the second step, five image features were extracted using the overlap between halfway candidates in the slice of interest and the adjacent upper/lower slices, and the final acute cerebral infarction candidates were detected by applying a rule-based test to these five features. The sensitivity of detection for 74 training cases was 97.4% with 3.7 false positives per image. The performance of the CAD scheme for 44 test cases was similar to that for the training cases. Our CAD scheme using the contralateral subtraction technique can reveal suspected findings of acute cerebral infarction in CT images.
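The core of the contralateral subtraction step, after the inclination correction, is a mirror-and-subtract operation; a minimal sketch:

```python
import numpy as np

def contralateral_subtraction(ct_slice):
    """Subtract the left-right mirrored slice from the original, so that
    bilaterally symmetric anatomy cancels and asymmetric findings (e.g.
    early infarction) stand out. Assumes the midline tilt has already
    been corrected, as in the scheme's first step."""
    return ct_slice - ct_slice[:, ::-1]
```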
NASA Astrophysics Data System (ADS)
Pinter, S.; Bagoly, Z.; Balázs, L. G.; Horvath, I.; Racz, I. I.; Zahorecz, S.; Tóth, L. V.
2018-05-01
Investigating the distant extragalactic Universe requires subtraction of the Galactic foreground. One of the major difficulties in deriving the fine structure of the Galactic foreground is the foreground and background point sources embedded in the observed fields, especially in the infrared. We report a study of subtracting point sources from Herschel images with Kriging, an interpolation method in which the interpolated values are modelled by a Gaussian process governed by prior covariances. Using the Kriging method on Herschel multi-wavelength observations, the structure of the Galactic foreground can be studied at much higher resolution than previously, leading to a better foreground subtraction.
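Kriging over masked point-source pixels can be sketched as Gaussian-process prediction from the unmasked neighbours under a squared-exponential prior covariance (a simplified illustration; the kernel parameters are assumptions, not values from the Herschel study):

```python
import numpy as np

def gp_interpolate(image, mask, length=3.0, sigma2=1.0, noise=1e-4):
    """Replace masked (point-source) pixels with the Gaussian-process
    (simple-Kriging) prediction from the unmasked pixels, using a
    squared-exponential prior covariance around a constant mean."""
    yy, xx = np.nonzero(~mask)           # known pixel coordinates
    ty, tx = np.nonzero(mask)            # pixels to predict
    X = np.column_stack([yy, xx]).astype(float)
    Xs = np.column_stack([ty, tx]).astype(float)

    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return sigma2 * np.exp(-0.5 * d2 / length ** 2)

    mean = image[~mask].mean()
    K = k(X, X) + noise * np.eye(len(X))  # jitter for numerical conditioning
    w = np.linalg.solve(K, image[~mask] - mean)
    out = image.astype(float).copy()
    out[mask] = mean + k(Xs, X) @ w
    return out
```

In practice one would restrict the known pixels to a neighbourhood of each masked source, since the solve scales cubically with their number.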
NASA Astrophysics Data System (ADS)
Rickard, Scott
Electromagnets are a crucial component in a wide range of more complex electrical devices because of their ability to turn electrical energy into mechanical energy and vice versa. The trend toward smaller and lighter electronics has led to increased interest in flat, planar electromagnetic coils, which have been shown to perform better at scaled-down sizes. The two-dimensional geometry of a planar electromagnetic coil lends itself to production by a roll-to-roll additive manufacturing process. The emergence of the printed electronics field, which uses traditional printing processes to pattern functional inks, has led to new methods of mass-producing basic electrical components. The ability to print a planar electromagnetic coil using printed electronics could rival the traditional subtractive and semi-subtractive PCB manufacturing processes. The ability to print lightweight planar electromagnetic coils on flexible substrates could lead to their inclusion in intelligent packaging applications, with specific uses in actuating devices, transformers, and electromagnetic induction applications such as energy harvesting and wireless charging. To better understand the limitations of printing planar electromagnetic coils, the effect of the coils' design parameters on the achievable magnetic field strength was researched. A comparison between the prototyping method of digital extrusion and manufacturing-scale flexographic printing is presented, discussing consistency in the printed coils and their performance in generating magnetic fields. A method to predict the performance of these planar coils is introduced to allow design within the requirements of an application. Results from the research include a demonstration of a printed coil used in a flat speaker design, operating on actuation principles.
[X-ray semiotics of sialolithiasis in functional digital subtraction sialography].
Iudin, L A; Kondrashin, S A; Afanas'ev, V V; Shchipskiĭ, A V
1995-01-01
Twenty-seven patients with sialolithiasis were examined using functional subtraction sialography developed by the authors. Differential diagnostic signs characterizing the degree of involvement of the salivary gland were defined. High efficacy of the method helps correctly plan the treatment strategy.
NASA Astrophysics Data System (ADS)
Somogyi, Gábor
2013-04-01
We finish the definition of a subtraction scheme for computing NNLO corrections to QCD jet cross sections. In particular, we perform the integration of the soft-type contributions to the doubly unresolved counterterms via the method of Mellin-Barnes representations. With these final ingredients in place, the definition of the scheme is complete and the computation of fully differential rates for electron-positron annihilation into two and three jets at NNLO accuracy becomes feasible.
ERIC Educational Resources Information Center
Soydan, Sema; Quadir, Seher Ersoy
2013-01-01
The principal aim of this study is to show the effectiveness of a program prepared by the researchers to enable 6-year-old children attending preschool educational institutions to gain addition and subtraction skills through a drama-based method. The study group comprised 80 children who continued their education in…
Mazurek, Artur; Jamroz, Jerzy
2015-04-15
In food analysis, a method for the determination of vitamin C should measure the total content of ascorbic acid (AA) and dehydroascorbic acid (DHAA), because both chemical forms exhibit biological activity. The aim of this work was to confirm the applicability of an HPLC-DAD method for the analysis of total vitamin C content (TC) and ascorbic acid in various types of food by determining validation parameters such as selectivity, precision, accuracy, linearity, and limits of detection and quantitation. The results showed that the method applied for the determination of TC and AA was selective, linear, and precise. The precision of DHAA determination by the subtraction method was also evaluated. The results of DHAA determination obtained by the subtraction method were not precise, which follows directly from the assumptions of the method and the principles of uncertainty propagation. The proposed chromatographic method can be recommended for routine determinations of total vitamin C in various foods. Copyright © 2014 Elsevier Ltd. All rights reserved.
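Why the subtraction method for DHAA is inherently imprecise follows from uncertainty propagation: the absolute uncertainties of TC and AA add in quadrature, while the difference itself is small. A sketch with illustrative numbers (not data from the paper):

```python
import math

def dhaa_by_subtraction(tc, u_tc, aa, u_aa):
    """DHAA = TC - AA, with the standard uncertainty propagated as the
    root-sum-of-squares of the independent input uncertainties. When the
    difference is small relative to the inputs, the relative uncertainty
    blows up -- the imprecision the paper reports for this method."""
    dhaa = tc - aa
    u = math.hypot(u_tc, u_aa)
    return dhaa, u

# e.g. TC = 50 +/- 1.5 and AA = 45 +/- 1.4 (mg/100 g) give
# DHAA = 5 +/- 2.05: ~3% input uncertainty becomes ~41% in the result.
```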
Scale dependencies of proton spin constituents with a nonperturbative αs
NASA Astrophysics Data System (ADS)
Jia, Shaoyang; Huang, Feng
2012-11-01
By introducing the contribution from a dynamically generated gluon mass, we present a new parametrized form of the QCD beta function that yields an infrared-finite running behavior of the QCD coupling constant αs. This parametrized form is essential for determining the scale dependencies of the proton spin constituents at very low scales. To compare directly with experimental results, we work within the gauge-invariant framework for decomposing the proton spin. Utilizing the updated next-to-next-to-leading-order evolution equations for angular momentum observables within a modified minimal subtraction scheme, we show that the gluon contribution to the proton spin cannot be ignored. Specifically, assuming the asymptotic limits of the total quark and gluon angular momenta are valid, the scale dependencies of the quark angular momentum Jq and the gluon angular momentum Jg down to Q² ~ 1 GeV² are presented, which are comparable with the preliminary analyses of deeply virtual Compton scattering experiments by HERMES and JLab. After solving for the scale dependence of the quark spin ΔΣq, the quark orbital angular momenta Lq are obtained by subtraction, presenting a holistic picture of the proton spin partition among up and down quarks at a low scale.
Ito, Y; Hasegawa, S; Yamaguchi, H; Yoshioka, J; Uehara, T; Nishimura, T
2000-01-01
Clinical studies have shown discrepancies in the distribution of thallium-201 and iodine-123-beta-methyl-iodophenylpentadecanoic acid (BMIPP) in patients with hypertrophic cardiomyopathy (HCM). Myocardial uptake of fluorine-18 deoxyglucose (FDG) is increased in the hypertrophic area in HCM. We examined whether the distribution of a Tl-201/BMIPP subtraction polar map correlates with that of an FDG polar map. We normalized each Tl-201 and BMIPP bull's-eye polar map from 6 volunteers to its maximum count and obtained a standard Tl-201/BMIPP subtraction polar map by subtracting the normalized BMIPP map from the normalized Tl-201 map. The Tl-201/BMIPP subtraction polar map was then applied to 8 patients with HCM (mean age 65+/-12 years) to evaluate the discrepancy between the Tl-201 and BMIPP distributions. We compared the Tl-201/BMIPP subtraction polar map with an FDG polar map. In patients with HCM, the Tl-201/BMIPP subtraction polar map showed a focal uptake pattern in the hypertrophic area similar to that of the FDG polar map. By quantitative analysis, the severity score of the Tl-201/BMIPP subtraction polar map was significantly correlated with the percent dose uptake of the FDG polar map. These results suggest that this new quantitative method may be an alternative to FDG positron emission tomography for the routine evaluation of HCM.
Central Stars of Planetary Nebulae in the LMC
NASA Technical Reports Server (NTRS)
Bianchi, Luciana
2004-01-01
In FUSE Cycle 2's program B001 we studied Central Stars of Planetary Nebulae (CSPN) in the Large Magellanic Cloud. All FUSE observations have been successfully completed, reduced, analyzed, and published. The analysis and results are summarized below. The FUSE data were reduced using the latest available version of the FUSE calibration pipeline (CALFUSE v2.2.2). The flux of these LMC post-AGB objects is at the threshold of FUSE's sensitivity, so special care in the background subtraction was needed during the reduction. Because of their faintness, the targets required many orbit-long exposures, each of which typically had low target count rates. Each calibrated extracted sequence was checked for unacceptable count-rate variations (a sign of detector drift), misplaced extraction windows, and other anomalies. All the good calibrated exposures were combined using FUSE pipeline routines. The default FUSE pipeline attempts to model the background measured off-target and subtracts it from the target spectrum. We found that, for these faint objects, this method over-estimated the background, particularly at shorter wavelengths (i.e., < 1000 Å). We therefore tried two other reductions. In the first, subtraction of the measured background is turned off and the background is taken to be the model scattered light scaled by the exposure time. In the second, the first few steps of the pipeline were run on the individual exposures (correcting for effects unique to each exposure, such as Doppler shift and grating motions), the photon lists from the individual exposures were combined, and the remaining steps of the pipeline were run on the combined file. The larger total counts for both the target and the background then allowed a better extraction.
NASA Astrophysics Data System (ADS)
Muranaka, Noriaki; Date, Kei; Tokumaru, Masataka; Imanishi, Shigeru
In recent years, traffic accidents have become frequent as traffic density has exploded. We therefore believe that a safe and comfortable transportation system protecting pedestrians, the most vulnerable road users, is necessary. First, we detect and recognize the pedestrian (the crossing person) by image processing. Next, we inform drivers turning right or left of the pedestrian's presence by sound, images, and so on. By prompting drivers to drive safely in this way, accidents involving pedestrians can be reduced. In this paper, we use a background subtraction method to detect moving objects. In background subtraction, the background update method is important; in the conventional approach, the threshold values for the subtraction processing and the background update are identical. That is, the mixing rate of the input image and the background image during background update is fixed, and fine tuning in response to changing weather conditions is difficult. We therefore propose a background update method in which estimation errors are not easily amplified. We compare five daytime conditions: sunshine, cloud, evening, rain, and changing sunlight (night is excluded). This technique allows the thresholds for subtraction processing and background update to be set separately to suit environmental conditions such as the weather, so the mixing rate of the input and background images can be tuned freely. Because parameter settings suited to the environmental conditions are important for minimizing the error rate, we also examine parameter selection.
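The separate-threshold background update argued for above can be sketched as a selective running-average update (the threshold and mixing rates below are illustrative defaults, not the authors' tuned values):

```python
import numpy as np

def update_background(background, frame, alpha_still=0.02, alpha_moving=0.0,
                      detect_thresh=25):
    """One step of background subtraction with selective update: pixels
    flagged as moving are NOT blended into the background, so a slowly
    moving pedestrian is not absorbed into it. Detection and update use
    separate parameters, which is the point made in the paper."""
    diff = np.abs(frame.astype(float) - background)
    moving = diff > detect_thresh
    # Per-pixel mixing rate: blend only where the scene appears static
    alpha = np.where(moving, alpha_moving, alpha_still)
    new_bg = (1.0 - alpha) * background + alpha * frame
    return moving, new_bg
```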
[Construction of fetal mesenchymal stem cell cDNA subtractive library].
Yang, Li; Wang, Dong-Mei; Li, Liang; Bai, Ci-Xian; Cao, Hua; Li, Ting-Yu; Pei, Xue-Tao
2002-04-01
To identify differentially expressed genes between fetal mesenchymal stem cells (MSC) and adult MSC, especially genes specifically expressed in fetal MSC, a cDNA subtractive library of fetal MSC was constructed using the suppression subtractive hybridization (SSH) technique. First, total RNA was isolated from fetal and adult MSC. Using the SMART PCR synthesis method, single-strand and double-strand cDNAs were synthesized. After Rsa I digestion, fetal MSC cDNAs were divided into two groups and ligated to adaptor 1 and adaptor 2, respectively. The amplified library contained 890 clones, of which PCR analysis showed 768 to be positive, a positive rate of 86.3%. The size of the inserted fragments in these positive clones was between 0.2 and 1 kb, with an average of 400-600 bp. SSH is a convenient and effective method for screening differentially expressed genes. The constructed cDNA subtractive library of fetal MSC lays a solid foundation for screening and cloning new, function-related genes specific to fetal MSC.
NASA Astrophysics Data System (ADS)
Zaghary, Wafaa A.; Mowaka, Shereen; Hassan, Mostafa A.; Ayoub, Bassam M.
2017-11-01
Different simple spectrophotometric methods were developed for simultaneous determination of alogliptin and metformin manipulating their ratio spectra with successful application on recently approved combination, Kazano® tablets. Spiking was implemented to detect alogliptin in spite of its low contribution in the pharmaceutical formulation as low quantity in comparison to metformin. Linearity was acceptable over the concentration range of 2.5-25.0 μg/mL and 2.5-15.0 μg/mL for alogliptin and metformin, respectively using derivative ratio, ratio subtraction coupled with extended ratio subtraction and spectrum subtraction coupled with constant multiplication. The optimized methods were compared using one-way analysis of variance (ANOVA) and proved to be accurate for assay of the investigated drugs in their pharmaceutical dosage form.
NASA Astrophysics Data System (ADS)
Sangadji, Iriansyah; Arvio, Yozika; Indrianto
2018-03-01
Understanding patterns of change in values that vary dynamically over a given period, with reasonable accuracy, requires tools based on sound technical working principles or a specific analytical method; this affects the validity of the system's output. Subtractive clustering is based on the density (potential) of data points in the variable space: its basic concept is to find the regions of a variable that have high potential relative to the surrounding points. The result in this paper is a segmentation of behavior patterns based on quantity-value movement, showing the number of clusters formed and their memberships.
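Subtractive clustering in the sense of Chiu's potential-based method can be sketched as follows (the radius and stopping ratio are illustrative; data are assumed scaled to [0, 1]):

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, eps=0.15):
    """Chiu-style subtractive clustering: every data point is a candidate
    centre scored by the density (potential) of the points around it;
    after a centre is accepted, its potential is subtracted from its
    neighbourhood so the next densest region can be found."""
    alpha = 4.0 / ra ** 2                # scoring neighbourhood radius
    beta = 4.0 / (1.5 * ra) ** 2         # wider radius for subtraction
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    P = np.exp(-alpha * d2).sum(axis=1)  # potential of each point
    p_first = P.max()
    centers = []
    while len(centers) < len(X):
        i = int(np.argmax(P))
        if P[i] < eps * p_first:         # remaining potential too weak
            break
        centers.append(X[i].copy())
        P = P - P[i] * np.exp(-beta * d2[i])
    return np.array(centers)
```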
NASA Technical Reports Server (NTRS)
Hermance, J. F. (Principal Investigator)
1982-01-01
The two stages of analysis of MAGSAT magnetic data designed to evaluate electromagnetic induction effects are described. The first stage consists of comparing data from contiguous orbit passes over large-scale geologic boundaries, such as ocean-land interfaces, at several levels of magnetic disturbance. The purpose of these comparisons is to separate induction effects from the effects of lithospheric magnetization. The procedure for reducing the data includes: (1) identifying and subtracting quiet-time effects; (2) modelling and subtracting first-order ring current effects; and (3) projecting an orbit track onto a map as a nearly straight line so that it can serve as an axis on which to plot the corresponding orbit pass data in a geographic context. The second stage consists of comparing MAGSAT data with standard hourly observatory data in order to constrain the time evolution of ionospheric and magnetospheric current systems. Qualitative features of the ground-based data set are discussed, and methods for reducing the ground-based data are described.
getimages: Background derivation and image flattening method
NASA Astrophysics Data System (ADS)
Men'shchikov, Alexander
2017-05-01
getimages performs background derivation and image flattening for high-resolution images obtained with space observatories. It is based on median filtering with sliding windows corresponding to a range of spatial scales from the observational beam size up to a maximum structure width X. The latter is the single free parameter of getimages and can be evaluated manually from the observed image. The median filtering algorithm provides a background image for structures of all widths below X. The same median filtering procedure applied to an image of standard deviations derived from the background-subtracted image results in a flattening image. Finally, a flattened image is computed by dividing the background-subtracted image by the flattening image. Standard deviations in the flattened image are then uniform outside sources and filaments. Detecting structures in such radically simplified images results in much cleaner extractions that are more complete and reliable. getimages also reduces various observational and map-making artifacts and equalizes noise levels between independent tiles of mosaicked images. The code (a Bash script) uses Fortran utilities from getsources (ascl:1507.014), which must be installed.
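A single-scale toy version of the background/flattening procedure (getimages itself uses optimized Fortran utilities and a full range of spatial scales) might look like:

```python
import numpy as np

def sliding_median(img, size):
    """Brute-force sliding-window median filter; O(N * size^2), which is
    fine for a sketch but far slower than the real implementation."""
    pad = size // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

def flatten_image(img, size=15):
    """Derive a background by median filtering, subtract it, median-filter
    a robust local-deviation image of the residual to get a flattening
    image, and divide (a simplified single-scale sketch of the scheme)."""
    background = sliding_median(img, size)
    residual = img - background
    # 1.4826 * median(|residual|) is a robust stand-in for the local sigma
    local_std = sliding_median(np.abs(residual) * 1.4826, size)
    return residual / np.maximum(local_std, 1e-12)
```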
Leung, Chung-Chu
2006-03-01
Digital subtraction radiography requires close matching of the contrast in each pair of X-ray images to be subtracted. Previous studies have shown that nonparametric contrast/brightness correction methods using the cumulative distribution function (CDF) and its improvements, which are based on gray-level transformations derived from the pixel histogram, perform well when the contrast/brightness difference is uniform. For radiographs with nonuniform contrast/brightness, however, the CDF produces unsatisfactory results. In this paper, we propose a new approach to contrast correction based on a generalized fuzzy operator (GFO) combined with a least-squares method. The results show that 50% of the contrast/brightness error can be corrected using this approach when the contrast/brightness difference between a radiographic pair is 10 U. A comparison of our approach with the CDF is presented; the modified GFO method produces better contrast normalization results than the CDF approach.
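The CDF baseline that the paper improves on is classic histogram matching: map each source gray level to the reference level of equal cumulative probability. A minimal sketch:

```python
import numpy as np

def match_histogram(source, reference):
    """CDF-based gray-level matching: each source gray level is mapped to
    the reference gray level with the same cumulative probability (the
    nonparametric baseline discussed in the paper)."""
    s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source level, find the reference level of equal CDF value
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return np.interp(source.ravel(), s_vals, mapped).reshape(source.shape)
```

This works well for a uniform brightness offset, but being a single global gray-level transformation, it cannot correct spatially nonuniform differences, which motivates the paper's GFO approach.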
Walker-Samuel, Simon; Davies, Nathan; Halligan, Steve; Lythgoe, Mark F.
2016-01-01
Purpose To validate caval subtraction two-dimensional (2D) phase-contrast magnetic resonance (MR) imaging measurements of total liver blood flow (TLBF) and hepatic arterial fraction in an animal model and evaluate consistency and reproducibility in humans. Materials and Methods Approval from the institutional ethical committee for animal care and research ethics was obtained. Fifteen Sprague-Dawley rats underwent 2D phase-contrast MR imaging of the portal vein (PV) and infrahepatic and suprahepatic inferior vena cava (IVC). TLBF and hepatic arterial flow were estimated by subtracting infrahepatic from suprahepatic IVC flow and PV flow from estimated TLBF, respectively. Direct PV transit-time ultrasonography (US) and fluorescent microsphere measurements of hepatic arterial fraction were the standards of reference. Thereafter, consistency of caval subtraction phase-contrast MR imaging–derived TLBF and hepatic arterial flow was assessed in 13 volunteers (mean age, 28.3 years ± 1.4) against directly measured phase-contrast MR imaging PV and proper hepatic arterial inflow; reproducibility was measured after 7 days. Bland-Altman analysis of agreement and coefficient of variation comparisons were undertaken. Results There was good agreement between PV flow measured with phase-contrast MR imaging and that measured with transit-time US (mean difference, −3.5 mL/min/100 g; 95% limits of agreement [LOA], ±61.3 mL/min/100 g). Hepatic arterial fraction obtained with caval subtraction agreed well with that measured with fluorescent microspheres (mean difference, 4.2%; 95% LOA, ±20.5%). Good consistency was demonstrated between TLBF in humans measured with caval subtraction and direct inflow phase-contrast MR imaging (mean difference, −1.3 mL/min/100 g; 95% LOA, ±23.1 mL/min/100 g). TLBF reproducibility at 7 days was similar between the two methods (95% LOA, ±31.6 mL/min/100 g vs ±29.6 mL/min/100 g).
Conclusion Caval subtraction phase-contrast MR imaging is a simple and clinically viable method for measuring TLBF and hepatic arterial flow. Online supplemental material is available for this article. PMID:27171018
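The subtraction arithmetic underlying the method is simple; a sketch with invented flow values (not figures from the study):

```python
def caval_subtraction(suprahepatic_ivc, infrahepatic_ivc, portal_vein):
    """TLBF and hepatic arterial flow from the two IVC planes and the PV
    (flows in mL/min/100 g), per the description in the abstract."""
    tlbf = suprahepatic_ivc - infrahepatic_ivc   # total liver blood flow
    hepatic_arterial = tlbf - portal_vein        # arterial component of inflow
    arterial_fraction = hepatic_arterial / tlbf  # hepatic arterial fraction
    return tlbf, hepatic_arterial, arterial_fraction

# illustrative numbers only
tlbf, ha, frac = caval_subtraction(suprahepatic_ivc=130.0,
                                   infrahepatic_ivc=30.0,
                                   portal_vein=75.0)
```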
Chouhan, Manil D; Mookerjee, Rajeshwar P; Bainbridge, Alan; Walker-Samuel, Simon; Davies, Nathan; Halligan, Steve; Lythgoe, Mark F; Taylor, Stuart A
2016-09-01
Effective image differencing with convolutional neural networks for real-time transient hunting
NASA Astrophysics Data System (ADS)
Sedaghat, Nima; Mahabal, Ashish
2018-06-01
Large sky surveys are increasingly relying on image subtraction pipelines for real-time (and archival) transient detection. In this process one has to contend with a varying point-spread function (PSF) and small brightness variations in many sources, as well as artefacts resulting from saturated stars and, in general, matching errors. Very often the differencing is done against a reference image that is deeper than the individual images, and the attendant difference in noise characteristics can also lead to artefacts. We present here a deep-learning approach to transient detection that encapsulates all the steps of a traditional image-subtraction pipeline - image registration, background subtraction, noise removal, PSF matching and subtraction - in a single real-time convolutional network. Once trained, the method works lightning-fast and, given that it performs multiple steps in one go, the time saved and false positives eliminated for multi-CCD surveys like the Zwicky Transient Facility and the Large Synoptic Survey Telescope will be immense, as millions of subtractions will be needed per night.
A 3D image sensor with adaptable charge subtraction scheme for background light suppression
NASA Astrophysics Data System (ADS)
Shin, Jungsoon; Kang, Byongmin; Lee, Keechang; Kim, James D. K.
2013-02-01
We present a 3D ToF (Time-of-Flight) image sensor with an adaptive charge subtraction scheme for background light suppression. The proposed sensor can alternately capture a high-resolution color image and a high-quality depth map in each frame. In depth mode, the sensor requires a sufficiently long integration time for accurate depth acquisition, but saturation will occur under strong background illumination. We propose to divide the integration time into N sub-integration times adaptively. In each sub-integration time, our sensor captures an image without saturation and subtracts the background charge to keep the pixel from saturating. The subtraction results are then accumulated over the N sub-integrations, yielding a final image free of background illumination at the full integration time. Experimental results with our own ToF sensor show high background suppression performance. We also propose an in-pixel storage and column-level subtraction circuit for chip-level implementation of the proposed method. We believe the proposed scheme will enable 3D sensors to be used in outdoor environments.
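A toy numeric model of the sub-integration scheme, assuming ideal charge rates and an exactly known background contribution (the real sensor performs the subtraction in pixel/column circuitry):

```python
def accumulate_with_subtraction(signal_rate, background_rate, full_T, N, full_well):
    """Split the integration time into N sub-integrations; each sub-frame is
    captured below saturation and the background charge is subtracted before
    accumulation. Hypothetical rate model, not the actual sensor circuit."""
    dt = full_T / N
    acc = 0.0
    for _ in range(N):
        charge = (signal_rate + background_rate) * dt   # charge in one sub-integration
        if charge > full_well:
            raise ValueError("sub-integration saturates; increase N")
        acc += charge - background_rate * dt            # per-sub-frame subtraction
    return acc                                          # ~ signal_rate * full_T

# one full integration would collect (5 + 95) * 10 = 1000 units >> full well,
# but 20 sub-integrations of 50 units each stay below saturation
out = accumulate_with_subtraction(signal_rate=5.0, background_rate=95.0,
                                  full_T=10.0, N=20, full_well=60.0)
```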
Image Processing Of Images From Peripheral-Artery Digital Subtraction Angiography (DSA) Studies
NASA Astrophysics Data System (ADS)
Wilson, David L.; Tarbox, Lawrence R.; Cist, David B.; Faul, David D.
1988-06-01
A system is being developed to test the possibility of doing peripheral digital subtraction angiography (DSA) with a single contrast injection using a moving-gantry system. Given the repositioning errors that occur between the mask and contrast-containing images, factors affecting the success of subtractions following image registration have been investigated theoretically and experimentally. For a 1 mm gantry displacement, parallax and geometric image distortion (pin-cushion) each give subtraction errors following registration that are approximately 25% of the error resulting from no registration. Image processing techniques improve the subtractions. The geometric distortion effect is reduced using a piecewise, 8-parameter unwarping method. Plots of image similarity measures versus pixel shift are well behaved and well fit by a parabola, leading to the development of an iterative, automatic registration algorithm that uses parabolic prediction of the new minimum. The registration algorithm converges quickly (less than 1 second on a MicroVAX) and is relatively immune to the region of interest (ROI) selected.
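The parabolic-prediction update can be sketched as follows; the similarity function, step size, and tolerances are illustrative stand-ins for the authors' system:

```python
import numpy as np

def parabolic_minimum(x, y):
    """Fit a parabola to three (shift, similarity) samples and return the
    predicted minimizing shift."""
    a, b, c = np.polyfit(x, y, 2)
    return -b / (2.0 * a)

def register_1d(similarity, x0, step=1.0, tol=1e-3, max_iter=20):
    """Toy iterative registration: sample the similarity at x0 +/- step,
    jump to the parabola's predicted minimum, repeat until converged."""
    x = x0
    for _ in range(max_iter):
        xs = np.array([x - step, x, x + step])
        ys = np.array([similarity(v) for v in xs])
        x_new = parabolic_minimum(xs, ys)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# quadratic-like similarity with a minimum at shift = 2.35 (illustrative)
best = register_1d(lambda s: (s - 2.35) ** 2 + 0.1, x0=0.0)
```

For an exactly parabolic similarity the predicted minimum is found in one jump, which is why well-behaved, parabola-like similarity curves make this scheme converge so quickly.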
Misconception on Addition and Subtraction of Fraction at Primary School Students in Fifth-Grade
NASA Astrophysics Data System (ADS)
Trivena, V.; Ningsih, A. R.; Jupri, A.
2017-09-01
This study aims to investigate primary school students’ mastery of concepts in mathematics learning, especially addition and subtraction of fractions. Using a qualitative research method, data were collected from 23 fifth-grade students (10-11 years old). Instruments included a test accompanied by a Certainty Response Index (CRI) and interviews with students and the teacher. The test results were processed by analyzing the students’ answers to each item, grouping them by CRI category, and combining them with the results of the interviews. The results showed that students’ concept mastery of addition and subtraction was dominated by the ‘misconception’ category, so mastery of addition and subtraction of fractions among fifth-grade students is still low. As a consequence, most primary students may come to think that learning addition and subtraction of fractions in mathematics is difficult.
Method to acquire regions of fruit, branch and leaf from image of red apple in orchard
NASA Astrophysics Data System (ADS)
Lv, Jidong; Xu, Liming
2017-07-01
This work proposed a method to acquire the fruit, branch, and leaf regions from images of red apples in an orchard. To acquire the fruit image, the R-G image was extracted from the RGB image, followed in order by erosion, hole filling, subregion removal, dilation, and an opening operation; the fruit image was then acquired by threshold segmentation. To acquire the leaf image, the fruit image was subtracted from the RGB image before extracting the 2G-R-B image; the leaf image was then acquired by subregion removal and threshold segmentation. To acquire the branch image, dynamic threshold segmentation was applied to the R-G image, and the segmented image was added to the fruit image; this combined image, together with the leaf image, was subtracted from the RGB image. Finally, the branch image was acquired by an opening operation, subregion removal, and threshold segmentation after extracting the R-G image from the subtracted image. Compared with previous methods, more complete images of fruit, leaf, and branch can be acquired from red apple images with this method.
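The two color indices at the heart of the segmentation can be illustrated with made-up pixel values; the fixed thresholds here are arbitrary (the paper uses per-image threshold segmentation):

```python
import numpy as np

def color_indices(rgb):
    """R-G and 2G-R-B index images used to separate red fruit from green
    leaves. rgb: float array of shape (H, W, 3)."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return R - G, 2.0 * G - R - B

# tiny synthetic scene: one 'fruit' pixel (red) and one 'leaf' pixel (green)
scene = np.array([[[200.0, 40.0, 30.0],     # red apple pixel
                   [60.0, 180.0, 50.0]]])   # green leaf pixel
rg, g2rb = color_indices(scene)
fruit_mask = rg > 50.0     # illustrative fixed threshold
leaf_mask = g2rb > 50.0
```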
Suppressing multiples using an adaptive multichannel filter based on L1-norm
NASA Astrophysics Data System (ADS)
Shi, Ying; Jing, Hongliang; Zhang, Wenwu; Ning, Dezhi
2017-08-01
Adaptive subtraction is an important step in removing surface-related multiples in wave equation-based methods. In this paper, we propose an adaptive multichannel subtraction method based on the L1-norm. We achieve enhanced compensation for the mismatch between the input seismogram and the predicted multiples in terms of amplitude, phase, frequency band, and travel time. Unlike the conventional L2-norm, the proposed method does not rely on the assumption that the primaries and the multiples are orthogonal, and it also takes advantage of the fact that the L1-norm is more robust when dealing with outliers. In addition, we propose a frequency band extension via modulation to reconstruct the high frequencies and compensate for the frequency misalignment. We present a parallel computing scheme to accelerate the subtraction algorithm on graphics processing units (GPUs), which significantly reduces the computational cost. Synthetic and field seismic data tests show that the proposed method effectively suppresses the multiples.
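A single-channel sketch of L1 adaptive subtraction via iteratively reweighted least squares (IRLS); the filter length, IRLS scheme, and toy data are assumptions, not the paper's multichannel algorithm:

```python
import numpy as np

def adaptive_subtract_l1(data, predicted, flen=5, n_iter=20, eps=1e-8):
    """Estimate a short matching filter f minimizing ||data - f * predicted||_1
    by IRLS, then subtract the matched multiples. Single-channel toy version."""
    n = len(data)
    A = np.zeros((n, flen))               # convolution matrix of predicted multiples
    for k in range(flen):
        A[k:, k] = predicted[:n - k]
    f = np.linalg.lstsq(A, data, rcond=None)[0]   # L2 starting point
    for _ in range(n_iter):
        r = data - A @ f
        w = (r * r + eps) ** -0.25        # row weights: w**2 ~ 1/|r| gives the L1 objective
        f = np.linalg.lstsq(A * w[:, None], w * data, rcond=None)[0]
    return data - A @ f                   # estimated primaries

rng = np.random.default_rng(1)
primary = np.zeros(200); primary[50] = 1.0        # sparse primary: an outlier to L2
multiple = rng.normal(0.0, 1.0, 200)              # 'predicted multiples' trace
data = primary + np.convolve(multiple, [0.8, -0.3])[:200]
est = adaptive_subtract_l1(data, multiple)
```

The spike at sample 50 plays the role of a primary event: the L1 objective down-weights it instead of smearing it into the matching filter, which is the robustness-to-outliers property the abstract refers to.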
Intermediate Palomar Transient Factory: Realtime Image Subtraction Pipeline
Cao, Yi; Nugent, Peter E.; Kasliwal, Mansi M.
2016-09-28
A fast-turnaround pipeline for realtime data reduction plays an essential role in discovering, and permitting follow-up observations of, young supernovae and fast-evolving transients in modern time-domain surveys. In this paper, we present the realtime image subtraction pipeline of the intermediate Palomar Transient Factory. By using high-performance computing, efficient databases, and machine-learning algorithms, this pipeline manages to reliably deliver transient candidates within 10 minutes of images being taken. Our experience in using high-performance computing resources to process big data in astronomy serves as a trailblazer for dealing with data from large-scale time-domain facilities in the near future.
Intermediate Palomar Transient Factory: Realtime Image Subtraction Pipeline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Yi; Nugent, Peter E.; Kasliwal, Mansi M.
B-spline based image tracking by detection
NASA Astrophysics Data System (ADS)
Balaji, Bhashyam; Sithiravel, Rajiv; Damini, Anthony; Kirubarajan, Thiagalingam; Rajan, Sreeraman
2016-05-01
Visual image tracking involves the estimation of the motion of any desired targets in a surveillance region using a sequence of images. A standard method of isolating moving targets in image tracking uses background subtraction. The standard background subtraction method is often impacted by irrelevant information in the images, which can lead to poor performance in image-based target tracking. In this paper, a B-Spline based image tracking algorithm is implemented. The novel method models the background and foreground using the B-Spline method, followed by a tracking-by-detection algorithm. The effectiveness of the proposed algorithm is demonstrated.
Design Study: Integer Subtraction Operation Teaching Learning Using Multimedia in Primary School
ERIC Educational Resources Information Center
Aris, Rendi Muhammad; Putri, Ratu Ilma Indra
2017-01-01
This study aims to develop a learning trajectory to help students understand the concept of subtraction of integers using multimedia in the fourth grade. The study uses thematic integrative learning based on Curriculum 2013 and PMRI. The method used is design research, which consists of three stages: preparing for the experiment, the design experiment, and retrospective…
ERIC Educational Resources Information Center
Duke, Roger; Graham, Alan; Johnston-Wilder, Sue
2007-01-01
This article describes a recent and successful initiative on teaching place value and the decomposition method of subtraction to pupils having difficulty with this technique in the 9-12-year age range. The aim of the research was to explore whether using the metaphor of selling chews (i.e., sweets) in a tuck shop and developing this into an iconic…
Adult Learners' Knowledge of Fraction Addition and Subtraction
ERIC Educational Resources Information Center
Muckridge, Nicole A.
2017-01-01
The purpose of this study was to examine adult developmental mathematics (ADM) students' knowledge of fraction addition and subtraction as it relates to their demonstrated fraction schemes and ability to disembed in multiplicative contexts with whole numbers. The study was conducted using a mixed methods sequential explanatory design. In the first…
NASA Astrophysics Data System (ADS)
Ayoub, B. M.
2017-11-01
Two simple spectrophotometric methods were developed for determination of empagliflozin and metformin by manipulating their ratio spectra with application on a recently approved pharmaceutical combination, Synjardy® tablets. A spiking technique was used to increase the concentration of empagliflozin after extraction from the tablets to allow its simultaneous determination with metformin. Validation parameters according to ICH guidelines were acceptable over the concentration range of 2-12 μg/mL for both drugs using constant multiplication and spectrum subtraction methods. The optimized methods are suitable for QC labs.
NASA Astrophysics Data System (ADS)
Xu, Jing; Liu, Xiaofei; Wang, Yutian
2016-08-01
Parallel factor analysis is a widely used method for extracting qualitative and quantitative information about the analyte of interest from a fluorescence excitation-emission matrix containing unknown components. Large-amplitude scattering influences the results of parallel factor analysis, and many methods for eliminating scattering have been proposed, each with its advantages and disadvantages. Here the combination of symmetrical subtraction and interpolated values is discussed, where 'combination' refers both to combining results and to combining methods. Nine methods were used for comparison. The results show that the combination of results yields better concentration predictions for all the components.
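The interpolation half of that combination can be sketched as follows; the scatter band half-width and the synthetic excitation-emission matrix are illustrative:

```python
import numpy as np

def interpolate_scatter(eem, ex_wavelengths, em_wavelengths, half_width=10.0):
    """Replace first-order Rayleigh scatter (emission ~ excitation) in an
    excitation-emission matrix with values interpolated along each emission
    row. Simplified sketch of the 'interpolated values' approach."""
    out = eem.astype(float).copy()
    for i, ex in enumerate(ex_wavelengths):
        mask = np.abs(em_wavelengths - ex) <= half_width   # scatter band
        if mask.any() and not mask.all():
            good = ~mask
            out[i, mask] = np.interp(em_wavelengths[mask],
                                     em_wavelengths[good], out[i, good])
    return out

ex = np.array([300.0, 320.0])
em = np.arange(280.0, 400.0, 5.0)
eem = np.ones((2, em.size))
eem[0, np.abs(em - 300.0) <= 10.0] += 50.0   # synthetic scatter ridge on row 0
clean = interpolate_scatter(eem, ex, em)
```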
Le Blanche, Alain-Ferdinand; Tassart, Marc; Deux, Jean-François; Rossert, Jérôme; Bigot, Jean-Michel; Boudghene, Frank
2002-10-01
The aim of our study was to evaluate the feasibility, safety, and potential role of the contrast agent gadoterate meglumine for digital subtraction angiography as a single diagnostic procedure or before percutaneous transluminal angioplasty of malfunctioning native dialysis fistulas. Over a 20-month period, 23 patients (15 women, 8 men) with an age range of 42-87 years (mean, 63 years) having end-stage renal insufficiency and with recent hemodialysis fistula surgical placement underwent gadoterate-enhanced digital subtraction angiography with a digital 1024 x 1024 matrix. Opacification was performed on the forearm, arm, and chest with the patient in the supine position using an injection (retrograde, n = 14; anterograde, n = 8; arterial, n = 1) of gadoterate meglumine into the perianastomotic fistula segment at a rate of 3 mL/sec for a total volume ranging from 24 to 32 mL. Percutaneous transluminal angioplasty was performed in three patients and required an additional 8 mL per procedure. Examinations were compared using a 3-step confidence scale and two-radiologist agreement (Cohen's kappa statistic) for diagnostic and opacification quality. Tolerability was evaluated on the basis of serum creatinine levels and the development of complications. No impairment of renal function was found in the 15 patients who were not treated with hemodialysis. Serum creatinine level change varied from -11.9% to 11.6%. All studies were of diagnostic quality. The presence of stenosis (n = 14) or thrombosis (n = 3) in arteriovenous fistulas was shown with good interobserver agreement (kappa = 0.71-0.80) in relation to opacification quality (kappa = 0.59-0.84). No pain, neurologic complications, or allergic-like reactions occurred. Three percutaneous transluminal angioplasty procedures (brachiocephalic, n = 2; radiocephalic, n = 1) were successfully performed.
Gadoterate-enhanced digital subtraction angiography is an effective and safe method to assess causes of malfunction of hemodialysis fistulas. It can also be used to plan and perform percutaneous transluminal angioplasty.
Ma, Guangming; Yu, Yong; Duan, Haifeng; Dou, Yuequn; Jia, Yongjun; Zhang, Xirong; Yang, Chuangbo; Chen, Xiaoxia; Han, Dong; Guo, Changyi; He, Taiping
2018-06-01
To investigate the application of low radiation and contrast dose spectral CT angiography using the rapid kV-switching technique in the head and neck with a subtraction method for bone removal. This prospective study was approved by the local ethics committee. 64 cases for head and neck CT angiography were randomly divided into Groups A (n = 32) and B (n = 32). Group A underwent unenhanced CT with 100 kVp, 200 mA and contrast-enhanced CT in spectral CT mode with body mass index-dependent low-dose protocols. Group B used conventional helical scanning with 120 kVp, auto mA for a noise index of 12 HU (Hounsfield units) for both the unenhanced and contrast-enhanced CT. Subtraction images were formed by subtracting the unenhanced images from the enhanced images (with the 65 keV-enhanced spectral CT image in Group A). CT numbers and their standard deviations in the aortic arch, carotid arteries, middle cerebral artery and air were measured in the subtraction images. The signal-to-noise ratio and contrast-to-noise ratio for the common and internal carotid arteries and middle cerebral artery were calculated. Image quality in terms of bone removal was evaluated by two experienced radiologists independently and blindly using a 4-point system. Radiation dose and total iodine load were recorded. Measurements were statistically compared between the two groups. The two groups had the same demographic characteristics. There was no difference in the CT number, signal-to-noise and contrast-to-noise ratio values for the carotid arteries and middle cerebral artery in the subtraction images between the two groups (p > 0.05). However, the bone removal score [median (min-max)] in Group A [4 (3-4)] was rated better than in Group B [3 (2-4)] (p < 0.001), with excellent agreement between the two observers (κ > 0.80). The radiation dose in Group A (average of 2.64 mSv) was 57% lower than the 6.18 mSv in Group B (p < 0.001). The total iodine intake in Group A was 13.5 g, 36% lower than the 21 g in Group B.
Spectral CT imaging with rapid kV-switching for subtraction angiography in the head and neck provides better bone removal with significantly reduced radiation and contrast dose compared with the conventional subtraction method. Advances in knowledge: This novel method provides better bone removal with significant radiation and contrast dose reduction compared with conventional subtraction CT, and may be used clinically to protect the thyroid gland and ocular lenses from unnecessarily high radiation.
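The two ratios reported above are simple ROI statistics; a sketch using common textbook definitions (the abstract does not spell out its exact formulas, so these are assumptions):

```python
import numpy as np

def snr_cnr(vessel_hu, vessel_sd, background_hu, background_sd):
    """SNR and CNR as commonly defined for CT ROI measurements:
    SNR = vessel mean / vessel noise,
    CNR = (vessel - background) / pooled noise. Assumed definitions."""
    pooled_noise = np.sqrt((vessel_sd ** 2 + background_sd ** 2) / 2.0)
    snr = vessel_hu / vessel_sd
    cnr = (vessel_hu - background_hu) / pooled_noise
    return snr, cnr

# illustrative HU values for a subtraction image, not data from the study
snr, cnr = snr_cnr(vessel_hu=400.0, vessel_sd=20.0,
                   background_hu=40.0, background_sd=10.0)
```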
Abt, Nicholas B; Lehar, Mohamed; Guajardo, Carolina Trevino; Penninger, Richard T; Ward, Bryan K; Pearl, Monica S; Carey, John P
2016-04-01
Whether the round window membrane (RWM) is permeable to iodine-based contrast agents (IBCA) is unknown; therefore, our goal was to determine if IBCAs could diffuse through the RWM using CT volume acquisition imaging. Imaging of hydrops in the living human ear has attracted recent interest. Intratympanic (IT) injection has shown gadolinium's ability to diffuse through the RWM, enhancing the perilymphatic space. Four unfixed human cadaver temporal bones underwent intratympanic IBCA injection using three sequentially studied methods. The first method was direct IT injection. The second method used direct RWM visualization via tympanomeatal flap for IBCA-soaked absorbable gelatin pledget placement. In the third method, the middle ear was filled with contrast after flap elevation. Volume acquisition CT images were obtained immediately postexposure, and at 1-, 6-, and 24-hour intervals. Postprocessing was accomplished using color ramping and subtraction imaging. After the third method, positive RWM and perilymphatic enhancement were observed with endolymph sparing. Gray scale and color ramp multiplanar reconstructions displayed increased signal within the cochlea compared with precontrast imaging. The cochlea was measured for attenuation differences compared with pure water, revealing a preinjection average of -1,103 HU and a postinjection average of 338 HU. Subtraction imaging shows enhancement remaining within the cochlear space, Eustachian tube, middle ear epithelial lining, and mastoid. Iohexol iodine contrast is able to diffuse across the RWM. Volume acquisition CT imaging was able to detect perilymphatic enhancement at 0.5-mm slice thickness. The clinical application of IBCA IT injection seems promising but requires further safety studies.
NASA Astrophysics Data System (ADS)
Essam, Hebatallah M.; Abd-El Rahman, Mohamed K.
2015-04-01
Two smart, specific, accurate and precise spectrophotometric methods manipulating ratio spectra are developed for simultaneous determination of Methocarbamol (METH) and Paracetamol (PAR) in their combined pharmaceutical formulation without preliminary separation. Method A, is an extended ratio subtraction one (EXRSM) coupled with ratio subtraction method (RSM), which depends on subtraction of the plateau values from the ratio spectrum. Method B is a ratio difference spectrophotometric one (RDM) which measures the difference in amplitudes of ratio spectra between 278 and 286 nm for METH and 247 and 260 nm for PAR. The calibration curves are linear over the concentration range of 10-100 μg mL-1 and 2-20 μg mL-1 for METH and PAR, respectively. The specificity of the developed methods was investigated by analyzing different laboratory prepared mixtures of the two drugs. Both methods were applied successfully for the determination of the selected drugs in their combined dosage form. Furthermore, validation was performed according to ICH guidelines; accuracy, precision and repeatability are found to be within the acceptable limits. Statistical studies showed that both methods can be competitively applied in quality control laboratories.
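A numerical sketch of the ratio-difference measurement, with synthetic Gaussian bands standing in for the real METH and PAR spectra (all band parameters and concentrations are invented for illustration):

```python
import numpy as np

wl = np.arange(230.0, 310.0, 1.0)   # wavelength grid, nm

def band(center, width, amp):
    """Synthetic Gaussian absorption band (stand-in for a real spectrum)."""
    return amp * np.exp(-0.5 * ((wl - center) / width) ** 2)

meth_unit = band(274.0, 12.0, 0.05)   # hypothetical unit-concentration METH spectrum
par_unit = band(249.0, 10.0, 0.07)    # hypothetical unit-concentration PAR spectrum
mixture = 40.0 * meth_unit + 8.0 * par_unit

# Dividing by the PAR spectrum turns PAR's contribution into a constant, so an
# amplitude *difference* between two wavelengths of the ratio spectrum isolates METH.
ratio = mixture / par_unit
i1 = np.argmin(np.abs(wl - 278.0))
i2 = np.argmin(np.abs(wl - 286.0))
delta = ratio[i1] - ratio[i2]

# same difference for a METH standard of known concentration (20 units here)
ratio_std = (20.0 * meth_unit) / par_unit
delta_std = ratio_std[i1] - ratio_std[i2]
meth_estimate = 20.0 * delta / delta_std   # the difference is linear in concentration
```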
NASA Astrophysics Data System (ADS)
Rahman, Taibur; Renaud, Luke; Heo, Deuk; Renn, Michael; Panat, Rahul
2015-10-01
The fabrication of 3D metal-dielectric structures at sub-mm length scale is highly important in order to realize low-loss passives and GHz wavelength antennas with applications in wearable and Internet-of-Things (IoT) devices. The inherent 2D nature of lithographic processes severely limits the available manufacturing routes to fabricate 3D structures. Further, the lithographic processes are subtractive and require the use of environmentally harmful chemicals. In this letter, we demonstrate an additive manufacturing method to fabricate 3D metal-dielectric structures at sub-mm length scale. A UV curable dielectric is dispensed from an Aerosol Jet system at 10-100 µm length scale and instantaneously cured to build complex 3D shapes at a length scale <1 mm. A metal nanoparticle ink is then dispensed over the 3D dielectric using a combination of jetting action and tilted dispense head, also using the Aerosol Jet technique and at a length scale 10-100 µm, followed by the nanoparticle sintering. Simulation studies are carried out to demonstrate the feasibility of using such structures as mm-wave antennas. The manufacturing method described in this letter opens up the possibility of fabricating an entirely new class of custom-shaped 3D structures at a sub-mm length scale with potential applications in 3D antennas and passives.
Power Spectral Density Error Analysis of Spectral Subtraction Type of Speech Enhancement Methods
NASA Astrophysics Data System (ADS)
Händel, Peter
2006-12-01
A theoretical framework for analysis of speech enhancement algorithms is introduced for performance assessment of spectral subtraction type of methods. The quality of the enhanced speech is related to physical quantities of the speech and noise (such as stationarity time and spectral flatness), as well as to design variables of the noise suppressor. The derived theoretical results are compared with the outcome of subjective listening tests as well as successful design strategies, performed by independent research groups.
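As background, the basic power spectral subtraction rule that this family of methods builds on can be sketched in a few lines; the over-subtraction factor, spectral floor, and oracle noise estimate below are illustrative choices, not the paper's design variables:

```python
import numpy as np

def spectral_subtraction(noisy, noise_psd, alpha=2.0, beta=0.01, nfft=256):
    """One-frame power spectral subtraction: subtract alpha * noise PSD,
    clamp at a spectral floor beta, and keep the noisy phase."""
    Y = np.fft.rfft(noisy, nfft)
    power = np.abs(Y) ** 2
    cleaned = np.maximum(power - alpha * noise_psd, beta * power)
    S = np.sqrt(cleaned) * np.exp(1j * np.angle(Y))
    return np.fft.irfft(S, nfft)[:len(noisy)]

rng = np.random.default_rng(2)
t = np.arange(256) / 8000.0
clean = np.sin(2 * np.pi * 500.0 * t)               # 500 Hz tone (bin-aligned at nfft=256)
noise = 0.3 * rng.normal(size=256)
noise_psd = np.abs(np.fft.rfft(noise, 256)) ** 2    # oracle noise estimate, demo only
enhanced = spectral_subtraction(clean + noise, noise_psd)

err_noisy = float(np.mean(noise ** 2))
err_enh = float(np.mean((enhanced - clean) ** 2))
```

Real implementations frame the signal with overlap-add and estimate the noise PSD from speech pauses; the trade-off between the over-subtraction factor and the spectral floor is exactly the kind of design variable the paper's error analysis addresses.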
Podlesek, Dino; Meyer, Tobias; Morgenstern, Ute; Schackert, Gabriele; Kirsch, Matthias
2015-01-01
Introduction Ultrasound can visualize and update the vessel status in real time during cerebral vascular surgery. We studied the depiction of parent vessels and aneurysms with a high-resolution 3D intraoperative ultrasound imaging system during aneurysm clipping, using rotational digital subtraction angiography as a reference. Methods We analyzed 3D intraoperative ultrasound in 39 patients with cerebral aneurysms to visualize the aneurysm intraoperatively and the nearby vascular tree before and after clipping. Simultaneous coregistration of preoperative subtraction angiography data with 3D intraoperative ultrasound was performed to verify the anatomical assignment. Results Intraoperative ultrasound detected 35 of 43 aneurysms (81%) in 39 patients. Thirty-nine intraoperative ultrasound measurements were matched with rotational digital subtraction angiography and were successfully reconstructed during the procedure. In 7 patients, the aneurysm was partially visualized by 3D-ioUS or was not in the field of view. Post-clipping intraoperative ultrasound was obtained in 26 patients and successfully reconstructed in 18 (69%) despite clip-related artefacts. The overlap between the 3D-ioUS aneurysm volume and the preoperative rDSA aneurysm volume resulted in a mean accuracy of 0.71 (Dice coefficient). Conclusions Intraoperative coregistration of 3D intraoperative ultrasound data with preoperative rotational digital subtraction angiography is possible with high accuracy. It allows immediate visualization of vessels beyond the microscopic field, as well as parallel assessment of blood velocity, aneurysm and vascular tree configuration. Although spatial resolution is lower than that of standard angiography, the method provides an excellent vascular overview, aids interpretation of 3D-ioUS and gives immediate intraoperative feedback on the vascular status.
A prerequisite for understanding vascular intraoperative ultrasound is image quality and a successful match with preoperative rotational digital subtraction angiography. PMID:25803318
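The Dice coefficient quoted as the overlap accuracy above is defined as 2|A∩B|/(|A|+|B|); a minimal check on toy binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary volumes: 2|A n B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

a = np.zeros((4, 4), dtype=bool); a[:2, :] = True   # 8 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, :] = True  # 8 voxels, 4 overlapping
d = dice(a, b)   # 2*4 / (8 + 8) = 0.5
```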
Farouk, M; Elaziz, Omar Abd; Tawakkol, Shereen M; Hemdan, A; Shehata, Mostafa A
2014-04-05
Four simple, accurate, reproducible, and selective methods have been developed and subsequently validated for the determination of Benazepril (BENZ) alone and in combination with Amlodipine (AML) in pharmaceutical dosage form. The first method is pH-induced difference spectrophotometry, where BENZ can be measured in the presence of AML as it shows maximum absorption at 237nm and 241nm in 0.1N HCl and 0.1N NaOH, respectively, while AML has no wavelength shift in either solvent. The second method is the new Extended Ratio Subtraction Method (EXRSM) coupled to the Ratio Subtraction Method (RSM) for determination of both drugs in commercial dosage form. The third and fourth methods are multivariate calibrations, namely Principal Component Regression (PCR) and Partial Least Squares (PLS). A detailed validation of the methods was performed following the ICH guidelines, and the standard curves were found to be linear in the range of 2-30μg/mL for BENZ in the difference and extended ratio subtraction spectrophotometric methods, and 5-30μg/mL for AML in the EXRSM method, with well-accepted mean correlation coefficients for each analyte. The intra-day and inter-day precision and accuracy results were well within the acceptable limits. Copyright © 2013 Elsevier B.V. All rights reserved.
El-Bardicy, Mohammad G; Lotfy, Hayam M; El-Sayed, Mohammad A; El-Tarras, Mohammad F
2008-01-01
Ratio subtraction and isosbestic point methods are 2 innovative spectrophotometric methods used to determine vincamine in the presence of its acid degradation product and a mixture of cinnarizine (CN) and nicergoline (NIC). Linear correlations were obtained in the concentration ranges of 8-40 microg/mL for vincamine (I), 6-22 microg/mL for CN (II), and 6-36 microg/mL for NIC (III), with mean accuracies of 99.72 +/- 0.917% for I, 99.91 +/- 0.703% for II, and 99.58 +/- 0.847 and 99.83 +/- 1.039% for III. The ratio subtraction method was utilized for the analysis of laboratory-prepared mixtures containing different ratios of vincamine and its degradation product, and it was valid in the presence of up to 80% degradation product. CN and NIC in synthetic mixtures were analyzed by the 2 proposed methods, with the total content of the mixture determined at their respective isosbestic points of 270.2 and 235.8 nm, and the content of CN determined by the ratio subtraction method. The proposed method was validated and found to be suitable as a stability-indicating assay for vincamine in pharmaceutical formulations. The standard addition technique was applied to validate the results and ensure the specificity of the proposed methods.
NASA Astrophysics Data System (ADS)
Guo, Ying; Liao, Qin; Wang, Yijun; Huang, Duan; Huang, Peng; Zeng, Guihua
2017-03-01
A suitable photon-subtraction operation can be exploited to improve the maximal transmission distance of continuous-variable quantum key distribution (CVQKD) in point-to-point quantum communication. Unfortunately, extending this improvement to practical quantum networks is not straightforward when the entangled source is located at a third party, which may be controlled by a malicious eavesdropper, instead of at one of the trusted parties controlled by Alice or Bob. In this paper, we show that a solution can come from using a non-Gaussian operation, in particular the photon-subtraction operation, which provides a method to enhance the performance of entanglement-based (EB) CVQKD. Photon subtraction not only lengthens the maximal transmission distance by increasing the signal-to-noise ratio but also can be easily implemented with existing technologies. Security analysis shows that applying photon subtraction to CVQKD with an entangled source in the middle (ESIM) can markedly increase the secure transmission distance in both direct and reverse reconciliation of the EB-CVQKD scheme, even if the entangled source originates from an untrusted party. Moreover, it can defend against the inner-source attack, a specific attack mounted through an untrusted entangled source in the ESIM framework.
Comparing transformation methods for DNA microarray data
Thygesen, Helene H; Zwinderman, Aeilko H
2004-01-01
Background When DNA microarray data are used for gene clustering, genotype/phenotype correlation studies, or tissue classification, the signal intensities are usually transformed and normalized in several steps in order to improve comparability and the signal/noise ratio. These steps may include subtraction of an estimated background signal, subtraction of the reference signal, smoothing (to account for nonlinear measurement effects), and more. Different authors use different approaches, and it is generally not clear to users which method they should prefer. Results We used the ratio between biological variance and measurement variance (an F-like statistic) as a quality measure for transformation methods, and we demonstrate a method for maximizing that variance ratio on real data. We explore a number of transformation issues, including the Box-Cox transformation, baseline shift, partial subtraction of the log-reference signal, and smoothing. It appears that the optimal choice of parameters for the transformation methods depends on the data. Further, the behavior of the variance ratio under the null hypothesis of zero biological variance appears to depend on the choice of parameters. Conclusions The use of replicates in microarray experiments is important. Adjustment for the null-hypothesis behavior of the variance ratio is critical to the selection of transformation method. PMID:15202953
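The F-like quality measure described above can be sketched concretely: the variance of per-sample means (biological variance) divided by the pooled within-sample variance of replicates (measurement variance). This is a minimal illustration of the statistic, not the authors' full pipeline; the data layout (one replicate list per biological sample) is an assumption for the sketch.

```python
# Variance-ratio sketch: a good transformation should make biological
# differences large relative to replicate (measurement) scatter.

def mean(xs):
    return sum(xs) / len(xs)

def variance_ratio(samples):
    """samples: one list of replicate measurements per biological sample."""
    sample_means = [mean(reps) for reps in samples]
    grand = mean(sample_means)
    # between-sample ("biological") variance of the per-sample means
    biological = sum((m - grand) ** 2 for m in sample_means) / (len(samples) - 1)
    # pooled within-sample ("measurement") variance from the replicates
    ss_within = sum((x - mean(reps)) ** 2 for reps in samples for x in reps)
    dof = sum(len(reps) - 1 for reps in samples)
    return biological / (ss_within / dof)

# Tight replicates with well-separated samples give a large ratio
print(variance_ratio([[1.0, 1.1], [2.0, 2.1], [3.0, 2.9]]))
```

One would compute this ratio for each candidate transformation (e.g. over a grid of Box-Cox parameters) and prefer the transformation that maximizes it, after checking its null-hypothesis behavior as the abstract cautions.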
Xu, Jing; Liu, Xiaofei; Wang, Yutian
2016-08-05
Parallel factor analysis is a widely used method to extract qualitative and quantitative information about the analyte of interest from fluorescence excitation-emission matrices containing unknown components. Large-amplitude scattering influences the results of parallel factor analysis, and many methods of eliminating scattering have been proposed, each with its own advantages and disadvantages. The combination of symmetrical subtraction and interpolated values is discussed, where the combination refers both to the combination of results and to the combination of methods. Nine methods were used for comparison. The results show that combining results yields a better concentration prediction for all the components. Copyright © 2016 Elsevier B.V. All rights reserved.
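One of the scatter-handling ingredients mentioned above, replacing values in the scatter band by interpolation, can be sketched in one dimension. This is an illustrative toy along a single emission scan, not the authors' combined method; real EEM processing interpolates across the two-dimensional Rayleigh/Raman scatter ridges.

```python
# Interpolation sketch: values inside a scatter band are replaced by a
# straight line between the band edges, removing the scatter spike while
# preserving the underlying fluorescence baseline.

def interpolate_scatter(spectrum, band):
    """Replace spectrum[i0..i1] (inclusive) by linear interpolation between
    the points just outside the band."""
    i0, i1 = band
    out = list(spectrum)
    left, right = spectrum[i0 - 1], spectrum[i1 + 1]
    for i in range(i0, i1 + 1):
        t = (i - (i0 - 1)) / (i1 + 1 - (i0 - 1))
        out[i] = left + t * (right - left)
    return out

# A scatter spike at indices 2-3 is smoothed over
print(interpolate_scatter([1.0, 2.0, 10.0, 12.0, 3.0, 4.0], (2, 3)))
```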
Factors affecting volume calculation with single photon emission tomography (SPECT) method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, T.H.; Lee, K.H.; Chen, D.C.P.
1985-05-01
Several factors may influence the calculation of absolute volumes (VL) from SPECT images, and their effects must be established to optimize the technique. The authors investigated the influence of the following on VL calculations: percentage of background (BG) subtraction, reconstruction filters, sample activity, angular sampling, and edge detection methods. Transaxial images of a liver-trunk phantom filled with Tc-99m from 1 to 3 μCi/cc were obtained in a 64x64 matrix with a Siemens Rota Camera and MDS computer. Different reconstruction filters were used, including Hanning 20, 32, 64 and Butterworth 20, 32. Angular sampling was performed in 3 and 6 degree increments. ROIs were drawn manually and with an automatic edge detection program around the image after BG subtraction. VLs were calculated by multiplying the number of pixels within the ROI by the slice thickness and the x- and y-calibrations of each pixel; one or two pixels per slice thickness were applied in the calculation. An inverse correlation was found between the calculated VL and the percentage of BG subtraction (r=0.99 for 1, 2, and 3 μCi/cc activity). Based on the authors' linear regression analysis, the correct liver VL was measured with about 53% BG subtraction. The reconstruction filters, slice thickness, and angular sampling had only minor effects on the calculated phantom volumes. Automatic detection of the ROI by the computer was not as accurate as the manual method. The authors conclude that the percentage of BG subtraction appears to be the most important factor affecting the VL calculation. With good quality control and appropriate reconstruction factors, correct VL calculations can be achieved with SPECT.
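The volume calculation described above, counting ROI pixels that survive a percentage background subtraction and scaling by slice thickness and pixel calibration, can be sketched directly. All numbers below are invented for illustration; the 53% figure is the abstract's reported optimum, not a general constant.

```python
# SPECT volume sketch: VL = (pixels above threshold) x slice thickness
# x x-calibration x y-calibration, with the threshold set as a fraction
# of the measured background level.

def volume_from_slices(slices, bg_fraction, background,
                       slice_thickness_cm, x_cal_cm, y_cal_cm):
    """slices: list of 2D pixel-value grids (one per transaxial slice)."""
    threshold = bg_fraction * background
    n_pixels = sum(1 for sl in slices for row in sl for v in row
                   if v - threshold > 0)
    return n_pixels * slice_thickness_cm * x_cal_cm * y_cal_cm

# One toy 3x3 slice, background level 100, 53% BG subtraction,
# 1.0 cm slices and 0.6 cm square pixels (all values illustrative)
toy_slice = [[120, 40, 80],
             [90, 30, 60],
             [10, 70, 55]]
print(volume_from_slices([toy_slice], 0.53, 100, 1.0, 0.6, 0.6))
```

Raising `bg_fraction` shrinks the counted region, which is the inverse correlation between calculated volume and percentage of BG subtraction that the authors report.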
García, José R.; Singh, Ankur; García, Andrés J.
2016-01-01
In the pursuit of enhanced technologies for cellular bioassays, as well as an understanding of single-cell interactions with the underlying substrate, the field of biotechnology has extensively utilized lithographic techniques to spatially pattern proteins onto surfaces in user-defined geometries. Microcontact printing (μCP) remains an incredibly useful patterning method due to its inexpensive nature, its scalability, and its minimal reliance on specialized clean-room equipment. However, as new technologies emerge that require nano-sized areas of deposited proteins, traditional microcontact printing methods may not supply users with the needed resolution. Recently, our group developed a modified "subtractive microcontact printing" method that retains many of the benefits offered by conventional μCP. Using this technique, we have been able to pattern fibronectin features as small as 250 nm in widely spaced arrays for cell culture. In this communication, we present a detailed description of our subtractive μCP procedure that expands on the many small tips and tricks that together make this procedure an easy and effective method for controlling protein patterning. PMID:24439290
Abt, Nicholas B.; Lehar, Mohamed; Guajardo, Carolina Trevino; Penninger, Richard T.; Ward, Bryan K.; Pearl, Monica S.; Carey, John P.
2016-01-01
Hypothesis Whether the round window membrane (RWM) is permeable to iodine-based contrast agents (IBCAs) is unknown; our goal was therefore to determine whether IBCAs could diffuse through the RWM using CT volume acquisition imaging. Introduction Imaging of hydrops in the living human ear has attracted recent interest. Intratympanic (IT) injection has shown gadolinium's ability to diffuse through the RWM, enhancing the perilymphatic space. Methods Four unfixed human cadaver temporal bones underwent intratympanic IBCA injection using three sequentially studied methods. The first method was direct IT injection. The second method used direct RWM visualization via a tympanomeatal flap for placement of an IBCA-soaked absorbable gelatin pledget. In the third method, the middle ear was filled with contrast after flap elevation. Volume acquisition CT images were obtained immediately post-exposure and at 1-, 6-, and 24-hour intervals. Post-processing was accomplished using color ramping and subtraction imaging. Results Following the third method, positive RWM and perilymphatic enhancement were seen with endolymph sparing. Gray-scale and color-ramp multiplanar reconstructions displayed increased signal within the cochlea compared with pre-contrast imaging. The cochlea was measured for attenuation differences relative to pure water, revealing a pre-injection average of −1,103 HU and a post-injection average of 338 HU. Subtraction imaging shows enhancement remaining within the cochlear space, Eustachian tube, middle-ear epithelial lining, and mastoid. Conclusions Iohexol iodine contrast is able to diffuse across the RWM. Volume acquisition CT imaging was able to detect perilymphatic enhancement at 0.5 mm slice thickness. The clinical application of IBCA IT injection appears promising but requires further safety studies. PMID:26859543
Higgs boson decay into b-quarks at NNLO accuracy
NASA Astrophysics Data System (ADS)
Del Duca, Vittorio; Duhr, Claude; Somogyi, Gábor; Tramontano, Francesco; Trócsányi, Zoltán
2015-04-01
We compute the fully differential decay rate of the Standard Model Higgs boson into b-quarks at next-to-next-to-leading order (NNLO) accuracy in αs. We employ a general subtraction scheme developed for the calculation of higher order perturbative corrections to QCD jet cross sections, which is based on the universal infrared factorization properties of QCD squared matrix elements. We show that the subtractions render the various contributions to the NNLO correction finite. In particular, we demonstrate analytically that the sum of integrated subtraction terms correctly reproduces the infrared poles of the two-loop double virtual contribution to this process. We present illustrative differential distributions obtained by implementing the method in a parton level Monte Carlo program. The basic ingredients of our subtraction scheme, used here for the first time to compute a physical observable, are universal and can be employed for the computation of more involved processes.
NASA Technical Reports Server (NTRS)
Kashlinsky, A.; Arendt, R. G.; Ashby, M. L. N.; Fazio, G. G.; Mather, J.; Moseley, S. H.
2012-01-01
We extend the previous measurements of CIB fluctuations to angular scales of up to 1 degree using new data obtained in the course of the 2,000+ hour Spitzer Extended Deep Survey. Two fields with completed observations of approximately 12 hr/pixel are analyzed for source-subtracted CIB fluctuations at 3.6 and 4.5 micrometers. The fields, EGS and UDS, cover a total area of approximately 0.25 deg² and lie at high Galactic and ecliptic latitudes, thus minimizing cirrus and zodiacal light contributions to the fluctuations. The observations were conducted at 3 distinct epochs separated by about 6 months. As in our previous studies, the fields were assembled using the self-calibration method, which is uniquely suitable for probing faint diffuse backgrounds. The assembled fields were cleaned of bright sources down to the low shot-noise levels corresponding to AB mag of approximately 25, Fourier-transformed, and their power spectra evaluated. The noise was estimated from the time-differenced data and subtracted from the signal, isolating the fluctuations remaining above the noise levels. The power spectra of the source-subtracted fields remain identical (within the observational uncertainties) for the three epochs of observation, indicating that zodiacal light contributes negligibly to the fluctuations. By comparing to measurements of the same regions at 8 micrometers, we demonstrate that Galactic cirrus cannot account for the levels of the fluctuations either. The signal appears isotropically distributed on the sky, as required by its origin in the CIB fluctuations. This measurement thus extends our earlier results to the important range of sub-degree scales. We find that the CIB fluctuations continue to diverge to more than 10 times those of known galaxy populations on angular scales out to 1 degree. 
The low shot-noise levels remaining in the diffuse maps indicate that the large-scale fluctuations arise from the spatial clustering of faint sources well within the confusion noise. The spatial spectrum of these fluctuations is in reasonable agreement with a simple fit assuming that they originate in early populations spatially distributed according to the standard cosmological model (ΛCDM) at epochs coinciding with the first-stars era. The alternative to this identification would require a new population, never observed before nor expected on theoretical grounds; if true, this would represent an important discovery in its own right.
An Experimental Implementation of Chemical Subtraction
Chen, Shao-Nong; Turner, Allison; Jaki, Birgit U.; Nikolic, Dejan; van Breemen, Richard B.; Friesen, J. Brent; Pauli, Guido F.
2008-01-01
A preparative analytical method was developed to selectively remove ("chemically subtract") a single compound from a complex mixture, such as a natural extract or fraction, in a single step. The proof of concept is demonstrated by the removal of pure benzoic acid (BA) from cranberry (Vaccinium macrocarpon Ait.) juice fractions that exhibit anti-adhesive effects against uropathogenic E. coli. Chemical subtraction of BA, a major constituent of the fractions, eliminates the potential in vitro interference of the bacteriostatic effect of BA with the E. coli anti-adherence action measured in bioassays. Upon BA removal, the anti-adherent activity of the fraction was fully retained: 36% inhibition of adherence in the parent fraction at 100 μg/mL increased to 58% in the BA-free active fraction. The method employs countercurrent chromatography (CCC) and operates loss-free for both the subtracted and the retained portions, as only liquid-liquid partitioning is involved. While the high purity (97.47% by quantitative 1H NMR) of the subtracted BA confirms the selectivity of the method, one minor impurity was determined to be scopoletin by HR-ESI-MS and qHNMR and represents the first coumarin reported from cranberries. A general concept for the selective removal of phytoconstituents by CCC is presented, which has potentially broad applicability in the biological evaluation of medicinal plant extracts and complex pharmaceutical preparations. PMID:18234463
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, E; Lasio, G; Yi, B
2014-06-01
Purpose: The Iterative Subtraction Algorithm (ISA) method retrospectively generates a pre-selected motion-phase cone-beam CT image from the full-motion cone-beam CT acquired at standard rotation speed. This work evaluates the ISA method with real lung patient data. Methods: The goal of the ISA algorithm is to extract motion and no-motion components from the full-reconstruction CBCT. The workflow consists of subtracting from the full CBCT all of the undesired motion phases to obtain a motion-deblurred single-phase CBCT image, followed by iteration of this subtraction process. ISA is realized as follows: 1) The projections are sorted into phases, and from all phases a full reconstruction is performed to generate an image CTM. 2) Forward projections of CTM are generated at the desired-phase projection angles; reconstructing the subtraction of the measured projections and these forward projections yields CTSub1, in which the desired phase component is diminished. 3) By adding CTSub1 back to CTM, a no-motion CBCT, CTS1, can be computed. 4) CTS1 still contains a residual motion component. 5) This residual motion component can be further reduced by iteration. The ISA 4D-CBCT technique was implemented on a Varian Trilogy accelerator OBI system. To evaluate the method, a lung patient CBCT dataset was used; the reconstruction algorithm is FDK. Results: The single-phase CBCT reconstruction generated via ISA successfully isolates the desired motion phase from the full-motion CBCT, effectively reducing motion blur. It also shows improved image quality, with reduced streak artifacts with respect to reconstructions from unprocessed phase-sorted projections only. Conclusion: A CBCT motion-deblurring algorithm, ISA, has been developed and evaluated with lung patient data. The algorithm allows improved visualization of a single motion phase extracted from a standard CBCT dataset. This study has been supported by the National Institutes of Health through R01CA133539.
Efficient Computation of Difference Vibrational Spectra in Isothermal-Isobaric Ensemble.
Joutsuka, Tatsuya; Morita, Akihiro
2016-11-03
Difference spectroscopy between two closely related systems is widely used to enhance selectivity to the differing parts of the observed system, though the molecular dynamics calculation of tiny difference spectra would be computationally extraordinarily demanding if done by subtraction of two separately computed spectra. We have therefore proposed an efficient computational algorithm for difference spectra that does not resort to subtraction. The present paper reports our extension of the theoretical method to the isothermal-isobaric (NPT) ensemble, which expands the scope of our analysis to include the pressure dependence of the spectra. We verified that the present theory yields accurate difference spectra in the NPT condition as well, with remarkable computational efficiency over the straightforward subtraction, by several orders of magnitude. The method is further applied to vibrational spectra of liquid water under varying pressure and succeeds in reproducing the tiny difference spectra induced by pressure change. The anomalous pressure dependence is elucidated in relation to other properties of liquid water.
Shao, Yu; Chang, Chip-Hong
2007-08-01
We present a new speech enhancement scheme for a single-microphone system to meet the demand for quality noise reduction algorithms capable of operating at a very low signal-to-noise ratio. A psychoacoustic model is incorporated into the generalized perceptual wavelet denoising method to reduce the residual noise and improve the intelligibility of speech. The proposed method is a generalized time-frequency subtraction algorithm, which advantageously exploits the wavelet multirate signal representation to preserve the critical transient information. Simultaneous masking and temporal masking of the human auditory system are modeled by the perceptual wavelet packet transform via the frequency and temporal localization of speech components. The wavelet coefficients are used to calculate the Bark spreading energy and temporal spreading energy, from which a time-frequency masking threshold is deduced to adaptively adjust the subtraction parameters of the proposed method. An unvoiced speech enhancement algorithm is also integrated into the system to improve the intelligibility of speech. Through rigorous objective and subjective evaluations, it is shown that the proposed speech enhancement system is capable of reducing noise with little speech degradation in adverse noise environments and the overall performance is superior to several competitive methods.
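The time-frequency subtraction at the core of such schemes can be illustrated with the classical magnitude spectral-subtraction rule. This sketch is not the authors' perceptual wavelet method: it uses a plain DFT, a fixed over-subtraction factor `alpha`, and a spectral floor, all of which are simplifying assumptions.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (adequate for short illustrative frames)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def spectral_subtract(noisy_frame, noise_mag, alpha=1.0, floor=0.01):
    """Subtract a per-bin noise magnitude estimate from the noisy spectrum,
    flooring the result, and keep the noisy phase (a standard simplification)."""
    spec = dft(noisy_frame)
    cleaned = []
    for X, N in zip(spec, noise_mag):
        mag = max(abs(X) - alpha * N, floor * abs(X))
        cleaned.append(cmath.rect(mag, cmath.phase(X)))
    return cleaned

# With a zero noise estimate the spectrum passes through unchanged;
# with a nonzero estimate each bin's magnitude is reduced.
out = spectral_subtract([1.0, 0.0, 0.0, 0.0], [0.25] * 4)
print([abs(X) for X in out])
```

In the paper's method the fixed `alpha` is replaced by parameters adapted per time-frequency cell from a psychoacoustic masking threshold, which is what suppresses residual ("musical") noise.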
Cloning and Characterizing Genes Involved in Monoterpene Induced Mammary Tumor Regression
1998-05-01
Monoterpene-induced/repressed genes were identified in regressing rat mammary carcinomas treated with dietary limonene using a newly developed method termed subtractive display. The subtractive display screen identified 42 monoterpene-induced genes, comprising 9 known genes and 33 unidentified genes, as well as 58 monoterpene-repressed genes, comprising 1 known gene and 57 unidentified genes. Several of the identified differentially expressed ...
Optical constants of solid ammonia in the infrared
NASA Technical Reports Server (NTRS)
Robertson, C. W.; Downing, H. D.; Curnutte, B.; Williams, D.
1975-01-01
No direct measurements of the refractive index of solid ammonia could be obtained because of failures in attempts to map the reflection spectrum; Kramers-Kronig techniques were therefore used in the investigation. The subtractive Kramers-Kronig techniques employed are similar to those discussed by Ahrenkiel (1971). The subtractive method provides more rapid convergence than the conventional techniques when data are available over only a limited spectral range.
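For reference, the singly subtractive Kramers-Kronig relation in its commonly quoted form anchors the dispersion integral at a frequency ν₀ where n(ν₀) is known independently (the variable conventions here are the textbook ones and may differ in detail from those of the paper):

```latex
n(\nu) = n(\nu_0) + \frac{2}{\pi}\,\left(\nu^2 - \nu_0^2\right)\,
         \mathrm{P}\!\int_0^{\infty}
         \frac{\nu'\,k(\nu')}{\left(\nu'^2 - \nu^2\right)\left(\nu'^2 - \nu_0^2\right)}\,
         \mathrm{d}\nu'
```

The extra factor (ν′² − ν₀²) in the denominator makes the integrand fall off more rapidly at large ν′, which is why the subtractive form converges faster than the conventional relation when the extinction coefficient k(ν′) is known only over a limited spectral range.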
Peng, Shu-Hui; Shen, Chao-Yu; Wu, Ming-Chi; Lin, Yue-Der; Huang, Chun-Huang; Kang, Ruei-Jin; Tyan, Yeu-Sheng; Tsao, Teng-Fu
2013-08-01
Time-of-flight (TOF) magnetic resonance (MR) angiography is based on flow-related enhancement using the T1-weighted spoiled gradient echo or the fast low-angle shot gradient echo sequence. However, materials with short T1 relaxation times may show hyperintense signals and contaminate the TOF images. The objective of our study was to determine whether subtraction three-dimensional (3D) TOF MR angiography improves image quality in brain and temporal bone diseases with unwanted contamination from materials with short T1 relaxation times. During the 12-month study period, patients who had masses with short T1 relaxation times noted on precontrast T1-weighted brain MR images and 24 healthy volunteers were scanned using conventional and subtraction 3D TOF MR angiography. The qualitative evaluation of each MR angiogram was based on the signal-to-noise ratio (SNR), the contrast-to-noise ratio (CNR), and scores in three categories: (1) presence of misregistration artifacts, (2) ability to display arterial anatomy selectively (without contamination by materials with short T1 relaxation times), and (3) arterial flow-related enhancement. We included 12 patients with intracranial hematomas, brain tumors, or middle-ear cholesterol granulomas. Subtraction 3D TOF MR angiography yielded higher CNRs between the area of the basilar artery (BA) and normal-appearing brain parenchyma and lower SNRs in the area of the BA compared with the conventional technique (147.7 ± 77.6 vs. 130.6 ± 54.2, p < 0.003 and 162.5 ± 79.9 vs. 194.3 ± 62.3, p < 0.001, respectively) in all 36 cases. The 3D subtraction angiography did not degrade image quality with misregistration artifacts and showed a better selective display of arteries (p < 0.0001) and arterial flow-related enhancement (p < 0.044) than the conventional method. 
Subtraction 3D TOF MR angiography is more appropriate than the conventional method for improving image quality in brain and temporal bone diseases with unwanted contamination from materials with short T1 relaxation times. Copyright © 2013. Published by Elsevier B.V.
Temporal subtraction contrast-enhanced dedicated breast CT
NASA Astrophysics Data System (ADS)
Gazi, Peymon M.; Aminololama-Shakeri, Shadi; Yang, Kai; Boone, John M.
2016-09-01
The development of a framework of deformable image registration and segmentation for the purpose of temporal subtraction contrast-enhanced breast CT is described. An iterative histogram-based two-means clustering method was used for the segmentation. Dedicated breast CT images were segmented into background (air), adipose, fibroglandular and skin components. Fibroglandular tissue was classified as either normal or contrast-enhanced then divided into tiers for the purpose of categorizing degrees of contrast enhancement. A variant of the Demons deformable registration algorithm, intensity difference adaptive Demons (IDAD), was developed to correct for the large deformation forces that stemmed from contrast enhancement. In this application, the accuracy of the proposed method was evaluated in both mathematically-simulated and physically-acquired phantom images. Clinical usage and accuracy of the temporal subtraction framework was demonstrated using contrast-enhanced breast CT datasets from five patients. Registration performance was quantified using normalized cross correlation (NCC), symmetric uncertainty coefficient, normalized mutual information (NMI), mean square error (MSE) and target registration error (TRE). The proposed method outperformed conventional affine and other Demons variations in contrast enhanced breast CT image registration. In simulation studies, IDAD exhibited improvement in MSE (0-16%), NCC (0-6%), NMI (0-13%) and TRE (0-34%) compared to the conventional Demons approaches, depending on the size and intensity of the enhancing lesion. As lesion size and contrast enhancement levels increased, so did the improvement. The drop in the correlation between the pre- and post-contrast images for the largest enhancement levels in phantom studies is less than 1.2% (150 Hounsfield units). Registration error, measured by TRE, shows only submillimeter mismatches between the concordant anatomical target points in all patient studies. 
The algorithm was implemented using a parallel processing architecture resulting in rapid execution time for the iterative segmentation and intensity-adaptive registration techniques. Characterization of contrast-enhanced lesions is improved using temporal subtraction contrast-enhanced dedicated breast CT. Adaptation of Demons registration forces as a function of contrast-enhancement levels provided a means to accurately align breast tissue in pre- and post-contrast image acquisitions, improving subtraction results. Spatial subtraction of the aligned images yields useful diagnostic information with respect to enhanced lesion morphology and uptake.
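One of the registration quality metrics cited above, normalized cross correlation (NCC), is easy to state concretely. This is a minimal sketch for two equally sized images flattened to lists, not the authors' implementation.

```python
import math

def ncc(a, b):
    """Normalized cross correlation of two equal-length intensity lists:
    +1 for a perfect positive linear relationship, -1 for a negative one."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

# A linearly rescaled copy of an image correlates perfectly with the original,
# which is why NCC is robust to global intensity (e.g. contrast) changes.
print(ncc([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]))
```

That insensitivity to global intensity scaling is what makes NCC a reasonable measure of geometric alignment between pre- and post-contrast acquisitions, where overall intensity differs by design.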
Attia, Khalid A M; Nassar, Mohammed W I; El-Zeiny, Mohamed B; Serag, Ahmed
2016-05-15
Three different spectrophotometric methods were applied for the quantitative analysis of flucloxacillin and amoxicillin in their binary mixture, namely ratio subtraction, absorbance subtraction, and amplitude modulation. A comparative study was performed, listing the advantages and disadvantages of each method. All the methods were validated according to the ICH guidelines, and the obtained accuracy, precision, and repeatability were found to be within the acceptable limits. The selectivity of the proposed methods was tested using laboratory-prepared mixtures and assessed by applying the standard addition technique. Thus, they can be used for the routine analysis of flucloxacillin and amoxicillin in their binary mixtures. Copyright © 2016 Elsevier B.V. All rights reserved.
3D chemical imaging in the laboratory by hyperspectral X-ray computed tomography
Egan, C. K.; Jacques, S. D. M.; Wilson, M. D.; Veale, M. C.; Seller, P.; Beale, A. M.; Pattrick, R. A. D.; Withers, P. J.; Cernik, R. J.
2015-01-01
We report the development of laboratory based hyperspectral X-ray computed tomography which allows the internal elemental chemistry of an object to be reconstructed and visualised in three dimensions. The method employs a spectroscopic X-ray imaging detector with sufficient energy resolution to distinguish individual elemental absorption edges. Elemental distributions can then be made by K-edge subtraction, or alternatively by voxel-wise spectral fitting to give relative atomic concentrations. We demonstrate its application to two material systems: studying the distribution of catalyst material on porous substrates for industrial scale chemical processing; and mapping of minerals and inclusion phases inside a mineralised ore sample. The method makes use of a standard laboratory X-ray source with measurement times similar to that required for conventional computed tomography. PMID:26514938
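The K-edge subtraction step described above amounts to a voxel-wise difference of reconstructed attenuation maps from energy bins just above and just below an element's K-edge. This is an illustrative toy on small 2D grids of invented values; real data would be 3D arrays of fitted attenuation coefficients.

```python
# K-edge subtraction sketch: the target element's attenuation jumps sharply
# across its K-edge, while other materials vary smoothly, so the above-minus-
# below difference map highlights voxels containing that element.

def k_edge_map(mu_above, mu_below):
    """Voxel-wise difference of attenuation maps above and below the edge."""
    return [[a - b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(mu_above, mu_below)]

mu_above = [[5.0, 2.0],
            [1.0, 3.0]]   # attenuation just above the K-edge (illustrative)
mu_below = [[4.0, 2.0],
            [1.0, 0.5]]   # attenuation just below the K-edge (illustrative)

print(k_edge_map(mu_above, mu_below))
```

In the hyperspectral case each voxel carries a full spectrum, so the same subtraction can be repeated per element, or replaced by the voxel-wise spectral fitting the abstract mentions.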
Power counting and modes in SCET
NASA Astrophysics Data System (ADS)
Goerke, Raymond; Luke, Michael
2018-02-01
We present a formulation of soft-collinear effective theory (SCET) in the two-jet sector as a theory of decoupled sectors of QCD coupled to Wilson lines. The formulation is manifestly boost-invariant, does not require the introduction of ultrasoft modes at the hard matching scale Q, and has manifest power counting in inverse powers of Q. The spurious infrared divergences which arise in SCET when ultrasoft modes are not included in loops disappear when the overlap between the sectors is correctly subtracted, in a manner similar to the familiar zero-bin subtraction of SCET. We illustrate this approach by analyzing deep inelastic scattering in the endpoint region in SCET and comment on other applications.
NASA Astrophysics Data System (ADS)
Lau, Kristen C.; Lee, Hyo Min; Singh, Tanushriya; Maidment, Andrew D. A.
2015-03-01
Dual-energy contrast-enhanced digital breast tomosynthesis (DE CE-DBT) uses an iodinated contrast agent to image the three-dimensional breast vasculature. The University of Pennsylvania has an ongoing DE CE-DBT clinical study in patients with known breast cancers. The breast is compressed continuously and imaged at four time points (1 pre-contrast; 3 post-contrast). DE images are obtained by a weighted logarithmic subtraction of the high-energy (HE) and low-energy (LE) image pairs. Temporal subtraction of the post-contrast DE images from the pre-contrast DE image is performed to analyze iodine uptake. Our previous work investigated image registration methods to correct for patient motion, enhancing the evaluation of vascular kinetics. In this project we investigate a segmentation algorithm which identifies blood vessels in the breast from our temporal DE subtraction images. Anisotropic diffusion filtering, Gabor filtering, and morphological filtering are used for the enhancement of vessel features. Vessel labeling methods are then used to distinguish vessel features from the background. Statistical and clinical evaluations of segmentation accuracy in DE CE-DBT images are ongoing.
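The weighted logarithmic subtraction step can be sketched in a few lines. This is a toy model with illustrative attenuation numbers, not the study's calibration: the weight w is chosen so that the tissue term cancels in log space, leaving the iodine signal.

```python
import numpy as np

def dual_energy_subtract(high, low, w):
    """Weighted logarithmic subtraction: attenuation is additive in log
    space, so ln(HE) - w*ln(LE) cancels the breast-tissue signal for a
    suitable weight w, leaving the iodine contrast."""
    return np.log(high) - w * np.log(low)

# Toy phantom: tissue thickness t varies across the image; an iodine
# patch adds extra low-energy attenuation. (Numbers are illustrative.)
t = np.linspace(0.5, 1.5, 100)[None, :] * np.ones((100, 1))
iodine = np.zeros((100, 100))
iodine[40:60, 40:60] = 0.3
high = np.exp(-t)                        # HE: tissue only (toy model)
low = np.exp(-(2.0 * t + iodine))        # LE: tissue twice as opaque + iodine
de = dual_energy_subtract(high, low, w=0.5)   # w = 1/2 cancels tissue here
```

In the resulting DE image the tissue gradient vanishes and only the iodine patch survives; temporal subtraction of a pre-contrast DE image would then remove any residual static signal.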
NASA Astrophysics Data System (ADS)
Lotfy, Hayam Mahmoud; Hegazy, Maha Abdel Monem
2013-09-01
Four simple, specific, accurate and precise spectrophotometric methods manipulating ratio spectra were developed and validated for the simultaneous determination of simvastatin (SM) and ezetimibe (EZ), namely extended ratio subtraction (EXRSM), simultaneous ratio subtraction (SRSM), ratio difference (RDSM), and absorption factor (AFM). The proposed procedures do not require any preliminary separation step. The accuracy, precision, and linearity ranges of the proposed methods were determined, and their specificity was assessed by analyzing synthetic mixtures containing the cited drugs. The four methods were applied to the determination of the cited drugs in tablets, and the results were statistically compared with each other and with those of a reported HPLC method. The comparison showed no significant difference between the proposed methods and the reported method regarding either accuracy or precision.
Motion compensation in digital subtraction angiography using graphics hardware.
Deuerling-Zheng, Yu; Lell, Michael; Galant, Adam; Hornegger, Joachim
2006-07-01
An inherent disadvantage of digital subtraction angiography (DSA) is its sensitivity to patient motion, which causes artifacts in the subtraction images and often reduces the diagnostic value of the technique. Automated, fast and accurate motion compensation is therefore required. To meet this requirement, we first examine a method explicitly designed to detect local motion in DSA. We then implement a motion compensation algorithm by means of block matching on modern graphics hardware. Both methods search for maximal local similarity by evaluating a histogram-based measure. In this context, we are the first to map an optimizing search strategy onto graphics hardware while parallelizing block matching. Moreover, we provide an innovative method for creating histograms on graphics hardware with vertex texturing and frame buffer blending. Both methods can effectively correct the artifacts in most cases, while the hardware implementation of block matching performs much faster: the displacements of two 1024 x 1024 images can be calculated at 3 frames/s with integer precision or 2 frames/s with sub-pixel precision. Preliminary clinical evaluation indicates that computation with integer precision may already be sufficient.
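A CPU sketch of the block-matching step may help fix ideas. The similarity measure below (how sharply peaked the histogram of the difference block is) is a simple stand-in for the histogram-based measure the paper evaluates; the search window and bin count are illustrative parameters, and the paper's contribution is running this on graphics hardware.

```python
import numpy as np

def best_shift(mask_block, live, top_left, search=4, bins=32):
    """Exhaustive block matching over a small search window. Each
    candidate displacement is scored by how concentrated the histogram
    of the difference block is: a good match gives differences
    clustered near a single value."""
    h, w = mask_block.shape
    y0, x0 = top_left
    best, best_score = (0, 0), -1.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = live[y0 + dy:y0 + dy + h, x0 + dx:x0 + dx + w]
            hist, _ = np.histogram(cand - mask_block, bins=bins, range=(-255, 255))
            score = hist.max() / float(mask_block.size)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

# Synthetic check: the 'live' image is the mask image shifted by (2, 3).
rng = np.random.default_rng(1)
base = rng.integers(0, 256, (64, 64)).astype(float)
mask_block = base[10:26, 10:26].copy()
live = np.roll(base, (2, 3), axis=(0, 1))
shift = best_shift(mask_block, live, (10, 10))
```

The recovered displacement per block would then be used to warp the mask image before subtraction.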
García, José R; Singh, Ankur; García, Andrés J
2014-01-01
In the pursuit to develop enhanced technologies for cellular bioassays as well as understand single cell interactions with its underlying substrate, the field of biotechnology has extensively utilized lithographic techniques to spatially pattern proteins onto surfaces in user-defined geometries. Microcontact printing (μCP) remains an incredibly useful patterning method due to its inexpensive nature, scalability, and the lack of considerable use of specialized clean room equipment. However, as new technologies emerge that necessitate various nano-sized areas of deposited proteins, traditional μCP methods may not be able to supply users with the needed resolution size. Recently, our group developed a modified "subtractive μCP" method which still retains many of the benefits offered by conventional μCP. Using this technique, we have been able to reach resolution sizes of fibronectin as small as 250 nm in largely spaced arrays for cell culture. In this communication, we present a detailed description of our subtractive μCP procedure that expands on many of the little tips and tricks that together make this procedure an easy and effective method for controlling protein patterning. © 2014 Elsevier Inc. All rights reserved.
Low frequency ac waveform generator
Bilharz, O.W.
1983-11-22
Low frequency sine, cosine, triangle and square waves are synthesized in circuitry which allows variation of the waveform amplitude and frequency while exhibiting good stability and without requiring significant stabilization time. A triangle waveform is formed by a ramped integration process controlled by a saturation amplifier circuit, which produces the hysteresis necessary for the triangle waveform. The output of the saturation circuit is tapped to produce the square waveform. The sine waveform is synthesized by taking the absolute value of the triangle waveform, raising this absolute value to a predetermined power, multiplying the result by the triangle wave itself, properly scaling the product, and subtracting it from the triangle waveform. The cosine waveform is synthesized from the squared waveform: the squared waveform raised to the predetermined power is added to a DC reference and the squared waveform is subtracted therefrom, with all waveforms properly scaled. The resultant waveform is then multiplied by a square wave in order to correct the polarity and produce the cosine waveform.
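The triangle-to-sine shaping step (triangle minus a scaled odd-power correction of itself) can be sketched numerically. The coefficients below are illustrative choices of my own, picked so the slope at zero matches pi/2 and the peaks stay at +/-1; they are not the patent's values.

```python
import numpy as np

# One rising ramp of a unit triangle wave, i.e. tri values in [-1, 1].
tri = np.linspace(-1.0, 1.0, 201)
# Shape toward a sine: subtract a scaled tri * |tri|**2 correction
# (equivalently a cubic term, since tri * |tri|**2 == tri**3).
shaped = tri * (1.5708 - 0.5708 * tri * tri)
# Residual error against the true quarter-wave sine.
err = float(np.max(np.abs(shaped - np.sin(np.pi * tri / 2.0))))
```

Even this lowest-order correction keeps the worst-case deviation from a true sine under about 1.5% of full scale, which illustrates why a single odd-power shaping stage suffices for a low-distortion function generator.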
NASA Astrophysics Data System (ADS)
Ikedo, Yuji; Fukuoka, Daisuke; Hara, Takeshi; Fujita, Hiroshi; Takada, Etsuo; Endo, Tokiko; Morita, Takako
2007-03-01
The comparison of left and right mammograms is a common technique used by radiologists for the detection and diagnosis of masses. In mammography, computer-aided detection (CAD) schemes using the bilateral subtraction technique have been reported; in breast ultrasonography, however, there are no reports of CAD schemes comparing the left and right breasts. In this study, we propose a false-positive reduction scheme based on the bilateral subtraction technique in whole breast ultrasound images. Mass candidate regions are detected using edge-direction information. Bilateral breast images are registered with reference to the nipple positions and skin lines. A false positive is identified by comparing the average gray value of a mass candidate region with that of a region of the same position and size in the contralateral breast. To evaluate the effectiveness of the false-positive reduction method, three normal and three abnormal bilateral pairs of whole breast images were employed. The abnormal breasts included six masses larger than 5 mm in diameter. The sensitivity was 83% (5/6) with 13.8 (165/12) false positives per breast before applying the proposed reduction method. Applying the method reduced false positives to 4.5 (54/12) per breast without removing any true positive region. This preliminary study indicates that the bilateral subtraction technique is effective for improving the performance of a CAD scheme in whole breast ultrasound images.
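The bilateral comparison step can be sketched as follows. The 20% relative-difference threshold is an illustrative assumption, not the paper's criterion, and the images are assumed already registered via nipple position and skin line.

```python
import numpy as np

def is_false_positive(image, contra, center, size, rel_thresh=0.2):
    """Bilateral test: compare the mean gray value of a candidate region
    with the region at the same position and size in the registered
    contralateral breast. Similar means suggest symmetric normal tissue,
    i.e. a likely false positive."""
    y, x = center
    h = size // 2
    roi = image[y - h:y + h, x - h:x + h].astype(float)
    ref = contra[y - h:y + h, x - h:x + h].astype(float)
    return abs(roi.mean() - ref.mean()) / ref.mean() < rel_thresh

# Toy images: a genuine (hypoechoic) mass darkens one breast only.
left = np.full((64, 64), 120.0)
right = np.full((64, 64), 120.0)
left[20:36, 20:36] = 60.0      # mass candidate in the left breast
```

A candidate over the asymmetric dark patch is kept as a detection, while a candidate over tissue that looks the same in both breasts is discarded.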
Lotfy, Hayam M; Mohamed, Dalia; Mowaka, Shereen
2015-01-01
Simple, specific, accurate and precise spectrophotometric methods were developed and validated for the simultaneous determination of the oral antidiabetic drugs sitagliptin phosphate (STG) and metformin hydrochloride (MET) in combined pharmaceutical formulations. Three methods manipulate ratio spectra, namely ratio difference (RD), ratio subtraction (RS), and a novel induced amplitude modulation (IAM) approach. The first two were used for the determination of STG, while MET was determined directly by measuring its absorbance at its λmax of 232 nm; IAM was used for the simultaneous determination of both drugs. Three further methods were developed based on derivative spectroscopy followed by mathematical manipulation steps, namely amplitude factor (P-factor), amplitude subtraction (AS), and modified amplitude subtraction (MAS). In addition, the novel sample enrichment technique named spectrum addition was adopted in this work. The proposed spectrophotometric methods did not require any preliminary separation step. The accuracy, precision and linearity ranges of the proposed methods were determined. The selectivity of the developed methods was investigated by analyzing laboratory-prepared mixtures of the drugs and their combined pharmaceutical formulations. Standard deviation values were less than 1.5 in the assay of raw materials and tablets. The obtained results were statistically compared with those of a reported spectrophotometric method; the comparison showed no significant difference between the proposed methods and the reported one regarding either accuracy or precision. Copyright © 2015 Elsevier B.V. All rights reserved.
Using Cross Correlation for Evaluating Shape Models of Asteroids
NASA Astrophysics Data System (ADS)
Palmer, Eric; Weirich, John; Barnouin, Olivier; Campbell, Tanner; Lambert, Diane
2017-10-01
The Origins, Spectral Interpretation, Resource Identification, and Security-Regolith Explorer (OSIRIS-REx) sample return mission to Bennu will use optical navigation during its proximity operations. Optical navigation depends heavily on an accurate shape model to calculate the spacecraft's position and pointing. In support of this, we have conducted extensive testing of the accuracy and precision of shape models. OSIRIS-REx will use shape models generated by stereophotoclinometry (SPC; Gaskell, 2008). The most typical way to evaluate models is to subtract two shape models, producing the difference in height at each node between the two models. During flight, absolute accuracy cannot be determined; however, our testing allowed us to characterize both systematic and non-systematic errors. We have demonstrated that SPC provides an accurate and reproducible shape model (Weirich, et al., 2017), but also that shape model subtraction tells only part of the story. Our advanced shape model evaluation uses normalized cross-correlation to show a different aspect of shape model quality. In this method, we generate synthetic images using the shape model and calculate their cross-correlation with images of the truth asteroid. This technique tests both the shape model's representation of the topographic features (size, shape, depth and relative position) and its estimates of the surface's albedo. The albedo can be used to determine both the Bond and geometric albedo of the surface (Palmer, et al., 2014). A high correlation score between the model's synthetic images and the truth images shows that the local topography and albedo have been well represented over the length scale of the image. Global properties, such as overall shape and size, are still best evaluated by shape model subtraction.
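The scoring metric itself is compact. A minimal sketch of normalized cross-correlation between a rendered and a truth image (the rendering step, which requires the shape model and illumination geometry, is assumed done elsewhere):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size images: 1.0 means
    a perfect match up to overall brightness and contrast."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# A rendered image matching the truth up to illumination scaling scores
# 1.0; an unrelated image scores near 0.
rng = np.random.default_rng(7)
truth = rng.random((32, 32))
rendered = 1.8 * truth + 10.0      # same topography, different scale/offset
unrelated = rng.random((32, 32))
```

Because the score is invariant to brightness and contrast, it isolates whether the topography and relative albedo pattern were reproduced, which is exactly what shape model subtraction cannot measure.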
'Dem DEMs: Comparing Methods of Digital Elevation Model Creation
NASA Astrophysics Data System (ADS)
Rezza, C.; Phillips, C. B.; Cable, M. L.
2017-12-01
Topographic details of Europa's surface yield implications for large-scale processes that occur on the moon, including surface strength, modification, composition, and formation mechanisms for geologic features. In addition, small scale details presented from this data are imperative for future exploration of Europa's surface, such as by a potential Europa Lander mission. A comparison of different methods of Digital Elevation Model (DEM) creation and variations between them can help us quantify the relative accuracy of each model and improve our understanding of Europa's surface. In this work, we used data provided by Phillips et al. (2013, AGU Fall meeting, abs. P34A-1846) and Schenk and Nimmo (2017, in prep.) to compare DEMs that were created using Ames Stereo Pipeline (ASP), SOCET SET, and Paul Schenk's own method. We began by locating areas of the surface with multiple overlapping DEMs, and our initial comparisons were performed near the craters Manannan, Pwyll, and Cilix. For each region, we used ArcGIS to draw profile lines across matching features to determine elevation. Some of the DEMs had vertical or skewed offsets, and thus had to be corrected. The vertical corrections were applied by adding or subtracting the global minimum of the data set to create a common zero-point. The skewed data sets were corrected by rotating the plot so that it had a global slope of zero and then subtracting for a zero-point vertical offset. Once corrections were made, we plotted the three methods on one graph for each profile of each region. Upon analysis, we found relatively good feature correlation between the three methods. The smoothness of a DEM depends on both the input set of images and the stereo processing methods used. In our comparison, the DEMs produced by SOCET SET were less smoothed than those from ASP or Schenk. Height comparisons show that ASP and Schenk's model appear similar, alternating in maximum height. 
SOCET SET has more topographic variability due to its decreased smoothing, which is borne out by preliminary offset calculations. In the future, we plan to expand upon this preliminary work with more regions of Europa, continue quantifying the height differences and relative accuracy of each method, and generate more DEMs to expand our available comparison regions.
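The vertical and skew corrections described above can be sketched as follows. This is a generic sketch of min-shifting and linear detrending along a profile; the exact procedure applied to each DEM may differ.

```python
import numpy as np

def zero_point(profile):
    """Vertical correction: shift so the profile minimum is zero,
    giving all DEMs a common zero-point."""
    return profile - profile.min()

def deskew(distance, profile):
    """Skew correction: fit and remove a global linear trend, then
    re-apply the zero-point shift."""
    slope, intercept = np.polyfit(distance, profile, 1)
    return zero_point(profile - (slope * distance + intercept))

# Synthetic skewed profile: topography riding on a 0.5-unit/px tilt.
d = np.arange(100.0)
topo = np.sin(d / 10.0)
skewed = 3.0 + 0.5 * d + topo
flat = deskew(d, skewed)
```

After this correction, matched profiles from the different DEM pipelines can be overplotted and differenced directly.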
Modeling Self-subtraction in Angular Differential Imaging: Application to the HD 32297 Debris Disk
NASA Astrophysics Data System (ADS)
Esposito, Thomas M.; Fitzgerald, Michael P.; Graham, James R.; Kalas, Paul
2014-01-01
We present a new technique for forward-modeling self-subtraction of spatially extended emission in observations processed with angular differential imaging (ADI) algorithms. High-contrast direct imaging of circumstellar disks is limited by quasi-static speckle noise, and ADI is commonly used to suppress those speckles. However, the application of ADI can result in self-subtraction of the disk signal due to the disk's finite spatial extent. This signal attenuation varies with radial separation and biases measurements of the disk's surface brightness, thereby compromising inferences regarding the physical processes responsible for the dust distribution. To compensate for this attenuation, we forward model the disk structure and compute the form of the self-subtraction function at each separation. As a proof of concept, we apply our method to 1.6 and 2.2 μm Keck adaptive optics NIRC2 scattered-light observations of the HD 32297 debris disk reduced using a variant of the "locally optimized combination of images" algorithm. We are able to recover disk surface brightness that was otherwise lost to self-subtraction and produce simplified models of the brightness distribution as it appears with and without self-subtraction. From the latter models, we extract radial profiles for the disk's brightness, width, midplane position, and color that are unbiased by self-subtraction. Our analysis of these measurements indicates a break in the brightness profile power law at r ≈ 110 AU and a disk width that increases with separation from the star. We also verify disk curvature that displaces the midplane by up to 30 AU toward the northwest relative to a straight fiducial midplane.
A comparison of change detection methods using multispectral scanner data
Seevers, Paul M.; Jones, Brenda K.; Qiu, Zhicheng; Liu, Yutong
1994-01-01
Change detection methods were investigated as a cooperative activity between the U.S. Geological Survey and the National Bureau of Surveying and Mapping, People's Republic of China. Subtraction of band 2, band 3, normalized difference vegetation index, and tasseled cap bands 1 and 2 data from two multispectral scanner images was tested using two sites in the United States and one in the People's Republic of China. A new statistical method was also tested. Band 2 subtraction gives the best results for detecting change from vegetative cover to urban development. The statistical method identifies areas that have changed and uses a fast classification algorithm to classify the original data of the changed areas by the land cover type present on each image date.
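Band differencing itself is simple to sketch. The k-sigma threshold below is an illustrative choice, not the study's; the study's contribution was evaluating which band's difference best separates the change classes.

```python
import numpy as np

def change_mask(band_t1, band_t2, k=2.0):
    """Band-differencing change detection: difference the two image
    dates and flag pixels whose difference deviates from the mean
    difference by more than k standard deviations."""
    d = band_t2.astype(float) - band_t1.astype(float)
    return np.abs(d - d.mean()) > k * d.std()

# Synthetic band-2 pair: unchanged scene plus one developed patch.
rng = np.random.default_rng(3)
date1 = rng.normal(100.0, 2.0, (32, 32))
date2 = date1 + rng.normal(0.0, 1.0, (32, 32))
date2[10:14, 10:14] += 50.0        # vegetation -> urban: brighter band 2
mask = change_mask(date1, date2)
```

The flagged pixels would then be handed to the fast classifier to label the land cover type at each date.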
NASA Technical Reports Server (NTRS)
Mcginnies, W. G. (Principal Investigator); Conn, J. S.; Haase, E. F.; Lepley, L. K.; Musick, H. B.; Foster, K. E.
1975-01-01
The author has identified the following significant results. Research results include a method for determining the reflectivities of natural areas from ERTS data, taking into account sun angle and atmospheric effects on the radiance seen by the satellite sensor. Ground truth spectral signature data were collected for various types of scenes, including ground with and without annuals, and various shrubs. Large areas of varnished desert pavement are visible and mappable on ERTS and high-altitude aircraft imagery. Both a large-scale and a small-scale vegetation pattern were found to be correlated with the presence of desert pavement. A comparison of radiometric data with video recordings shows quantitatively that for most areas of desert vegetation, soils are the most influential factor in determining the signature of a scene. Additive and subtractive image processing techniques were applied in the darkroom to enhance vegetational aspects of the ERTS imagery.
Automatic background updating for video-based vehicle detection
NASA Astrophysics Data System (ADS)
Hu, Chunhai; Li, Dongmei; Liu, Jichuan
2008-03-01
Video-based vehicle detection is one of the most valuable techniques for the Intelligent Transportation System (ITS). The widely used video-based vehicle detection technique is the background subtraction method. The key problem of this method is how to subtract and update the background effectively. In this paper an efficient background updating scheme based on Zone-Distribution for vehicle detection is proposed to resolve the problems caused by sudden camera perturbation, sudden or gradual illumination change and the sleeping person problem. The proposed scheme is robust and fast enough to satisfy the real-time constraints of vehicle detection.
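A baseline version of the subtract-and-update loop can be sketched as follows. The paper's zone-distribution scheme is considerably more elaborate; this sketch only shows the selective running-average update that keeps a stopped vehicle (the "sleeping person" problem) from being absorbed into the background, with illustrative threshold and blending parameters.

```python
import numpy as np

def foreground(bg, frame, thresh=25.0):
    """Background subtraction: flag pixels far from the background model."""
    return np.abs(frame.astype(float) - bg) > thresh

def update_background(bg, frame, fg_mask, alpha=0.05):
    """Selective running-average update: blend the new frame into the
    background only where no foreground was detected; alpha controls
    the adaptation speed to gradual illumination change."""
    blended = (1.0 - alpha) * bg + alpha * frame
    return np.where(fg_mask, bg, blended)

# One step: empty-road model, then a frame containing a vehicle patch.
bg = np.full((48, 64), 80.0)
frame = bg.copy()
frame[20:30, 10:25] = 200.0        # vehicle
mask = foreground(bg, frame)
bg_next = update_background(bg, frame, mask)
```

Road pixels track the illumination while the vehicle region leaves the model untouched, so the vehicle keeps being detected on subsequent frames.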
Yan, Hongping; Wang, Cheng; McCarn, Allison R; Ade, Harald
2013-04-26
A practical and accurate method to obtain the index of refraction, especially the decrement δ, across the carbon 1s absorption edge is demonstrated. The combination of absorption spectra scaled to the Henke atomic scattering factor database, the use of the doubly subtractive Kramers-Kronig relations, and high precision specular reflectivity measurements from thin films allow the notoriously difficult-to-measure δ to be determined with high accuracy. No independent knowledge of the film thickness or density is required. High confidence interpolation between relatively sparse measurements of δ across an absorption edge is achieved. Accurate optical constants determined by this method are expected to greatly improve the simulation and interpretation of resonant soft x-ray scattering and reflectivity data. The method is demonstrated using poly(methyl methacrylate) and should be extendable to all organic materials.
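The subtractive Kramers-Kronig idea can be written down explicitly. Below is the singly subtractive form in one common x-ray convention, with refractive index n = 1 - δ + iβ; the doubly subtractive relations used in the paper add a second anchor energy to further suppress sensitivity to the unmeasured spectral tails. This is a general textbook statement, not the paper's specific working equations.

```latex
% Plain Kramers-Kronig relation for the decrement:
\delta(E) \;=\; \frac{2}{\pi}\,\mathcal{P}\!\int_0^{\infty}
  \frac{E'\,\beta(E')}{E^2 - E'^2}\, dE'
% Singly subtractive form, anchored at a measured value \delta(E_0);
% the difference of the two kernels falls off faster, so the integral
% is far less sensitive to beta far from the edge:
\delta(E) \;=\; \delta(E_0) \;+\; \frac{2\,(E_0^2 - E^2)}{\pi}\,
  \mathcal{P}\!\int_0^{\infty}
  \frac{E'\,\beta(E')}{\bigl(E^2 - E'^2\bigr)\bigl(E_0^2 - E'^2\bigr)}\, dE'
```

The anchor values δ(E₀) are what the high-precision reflectivity measurements supply, which is why no independent film thickness or density is needed.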
Hu, Wenhao; Wang, Bin; Run, Hongyu; Zhang, Xuesong; Wang, Yan
2016-10-12
It is estimated that upwards of 50,000 individuals suffer traumatic fracture of the spine each year; instability of the fractured vertebra and/or the local deformity results in pain and, as kyphosis increases, neurological impairment can occur. There is significant controversy over the ideal management. The purpose of this study is to present the clinical and radiographic results of pedicle subtraction osteotomy and disc resection with cage placement in correcting post-traumatic thoracolumbar kyphosis. From May 2010 to May 2013, 46 consecutive patients with post-traumatic thoracolumbar kyphosis underwent one-stage pedicle subtraction osteotomy and disc resection with cage placement and long-segment fixation. Pelvic incidence (PI), pelvic tilt (PT), sagittal vertical axis (SVA), and sagittal Cobb angle were measured to evaluate sagittal balance. The Oswestry disability index (ODI), visual analog scale (VAS), and general complications were recorded. The average surgical time was 260 min (240-320 min). The mean intraoperative blood loss was 643 ml (400-1200 ml). The maximum correction angle was 58°, with an average of 47°, and the SVA improved from +10.7 ± 3.5 cm (+7.2 to +17.1 cm) to +4.1 ± 2.7 cm (+3.2 to +7.6 cm) at final follow-up (p < 0.01). PT decreased from 27.2 ± 5.3° preoperatively to 15.2 ± 4.7° postoperatively (p < 0.01). The VAS improved from 7.8 ± 1.6 (5.0-9.0) preoperatively to 3.2 ± 1.8 (2.0-5.0) (p < 0.01). Clinical symptoms and neurological function were significantly improved at the final follow-up. All patients completed follow-up of 41 months on average. Pedicle subtraction osteotomy and disc resection with cage placement and long-segment fixation is an effective and safe method of treating thoracolumbar post-traumatic kyphosis.
Albin, Thomas J
2013-01-01
Designers and ergonomists occasionally must produce anthropometric models of workstations with only summary percentile data available regarding the intended users. Until now, the only option has been to add or subtract percentiles of the anthropometric elements used in the model, e.g. heights and widths, despite the known resultant errors in the estimate of the percentage of users accommodated. This paper introduces a new method, the Median Correlation Method (MCM), that reduces this error. We compare the relative accuracy of MCM with that of combining percentiles for anthropometric models comprising all possible pairs of five anthropometric elements, and describe the mathematical basis of MCM's greater accuracy. 95th-percentile accommodation values are calculated for the sums and differences of all combinations of five anthropometric elements, both by combining percentiles and by using MCM. The resulting estimates are compared with empirical values of the 95th percentiles, and the relative errors are reported. MCM is shown to be significantly more accurate than adding percentiles and is demonstrated to have a mathematical advantage in estimating accommodation. MCM should therefore be preferred to adding or subtracting percentiles when limited data prevent more sophisticated anthropometric models.
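Why adding percentiles overestimates can be shown numerically: for two imperfectly correlated body dimensions, the 95th percentile of the sum lies below the sum of the two 95th percentiles. The numbers below are synthetic and illustrative only, and MCM itself works from published median correlations rather than the direct variance formula used here as a reference.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 200_000, 0.4
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
a = 90.0 + 4.0 * z1                  # element A, cm (synthetic)
b = 60.0 + 3.0 * z2                  # element B, cm (synthetic)

naive = np.percentile(a, 95) + np.percentile(b, 95)  # adding percentiles
true = np.percentile(a + b, 95)                      # what users experience
# Correlation-aware estimate: the sd of the sum uses the correlation,
# not sd1 + sd2, so the combined 95th percentile is smaller.
sd_sum = np.sqrt(4.0**2 + 3.0**2 + 2.0 * rho * 4.0 * 3.0)
est = 150.0 + 1.645 * sd_sum
```

The naive sum overshoots the true combined percentile by roughly 1.8 cm in this example, meaning a workstation sized that way accommodates more than the intended 95% at extra cost; the correlation-aware estimate lands on the true value.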
NASA Technical Reports Server (NTRS)
McGhee, D. S.
2004-01-01
Launch vehicles consume large quantities of propellant quickly, causing the mass properties and structural dynamics of the vehicle to change dramatically. Currently, structural load assessments account for this change with a large collection of structural models representing various propellant fill levels. This creates a large database of models, complicating the delivery of reduced models and requiring extensive work for model changes. Presented here is a method to account for these mass changes in a more efficient manner. The method allows the propellant mass to be subtracted as the propellant is consumed in the simulation. This subtraction is done in the modal domain of the vehicle's generalized model. The additional computation required is primarily the construction of the used-propellant mass matrix from an initial propellant model, plus further matrix multiplications and subtractions. An additional eigenvalue solution is required to uncouple the new equations of motion; however, this is a much simpler calculation, starting from a system that is already substantially uncoupled. The method was successfully tested in a simulation of Saturn V loads. Results from the method are compared to results from separate structural models for several propellant levels, showing excellent agreement. Further development to encompass more complicated propellant models, including slosh dynamics, is possible.
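The modal-domain subtraction can be sketched on a toy system. A 3-DOF spring-mass chain stands in for the vehicle (the real models are far larger and truncated to a reduced mode set; the numbers here are illustrative): the burned mass is subtracted in generalized coordinates and a small eigenproblem re-uncouples the equations, reproducing the modes of a model rebuilt at the new fill level.

```python
import numpy as np

def modes(K, M):
    """Undamped modes of a diagonal-mass system via the symmetric
    eigenproblem M^(-1/2) K M^(-1/2)."""
    Mi = np.diag(1.0 / np.sqrt(np.diag(M)))
    w2, V = np.linalg.eigh(Mi @ K @ Mi)
    return w2, Mi @ V            # eigenvalues, mass-normalized shapes

# Toy 3-DOF spring-mass chain standing in for the launch vehicle.
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
M0 = np.diag([2.0, 2.0, 2.0])
_, Phi = modes(K, M0)            # modes of the initial (full) vehicle

# Burn propellant: remove mass dM at DOF 0 (the tank), in modal space.
dM = np.diag([0.8, 0.0, 0.0])
M_hat = Phi.T @ (M0 - dM) @ Phi  # generalized mass after subtraction
K_hat = Phi.T @ K @ Phi          # generalized stiffness (unchanged)
# Re-uncouple with a small symmetric eigenproblem via Cholesky of M_hat.
L = np.linalg.cholesky(M_hat)
Li = np.linalg.inv(L)
w2_modal = np.sort(np.linalg.eigvalsh(Li @ K_hat @ Li.T))

# Reference: rebuild the physical model at the new fill level directly.
w2_direct, _ = modes(K, M0 - dM)
```

Because all modes are retained in this toy case the agreement is exact; with a truncated mode set the modal-domain result is an approximation whose accuracy the paper assesses against separate full models.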
Image Processing for Educators in Global Hands-On Universe
NASA Astrophysics Data System (ADS)
Miller, J. P.; Pennypacker, C. R.; White, G. L.
2006-08-01
A method of image processing to find time-varying objects is being developed for the National Virtual Observatory as part of Global Hands-On Universe(tm) (Lawrence Hall of Science; University of California, Berkeley). Objects that vary in space or time are of prime importance in modern astronomy and astrophysics. Such objects include active galactic nuclei, variable stars, supernovae, and objects moving across the field of view, such as an asteroid, a comet, or an extrasolar planet transiting its parent star. The search for these objects is undertaken by acquiring an image of the region of sky where they occur, followed by a second image taken at a later time. Ideally, both images are taken with the same telescope using the same filter and charge-coupled device. The two images are aligned and subtracted, with the subtracted image revealing any changes in light during the time period between the two images. We use the method of Christophe Alard, implemented with the image processing software IDL Version 6.2 (Research Systems, Inc.), except for the background correction, which is applied to the two images prior to subtraction. Testing has been extensive, using images provided by a number of National Virtual Observatory and collaborating projects, including the Supernovae Trace Cosmic Expansion project (Cerro Tololo Inter-American Observatory), the Supernovae/Acceleration Program (Lawrence Berkeley National Laboratory), the Lowell Observatory Near-Earth Object Search (Lowell Observatory), and the Centre National de la Recherche Scientifique (Paris, France). Further testing has been done with students, including a May 2006 two-week program at the Lawrence Berkeley National Laboratory. Students from Hardin-Simmons University (Abilene, TX) and Jackson State University (Jackson, MS) used the subtraction method to analyze images from the Cerro Tololo Inter-American Observatory (CTIO), searching for new asteroids and Kuiper Belt objects. In October 2006 students from five U.S.
high schools will use the subtraction method in an asteroid search campaign using CTIO images with 7-day follow-up images to be provided by the Las Cumbres Observatory (Santa Barbara, CA). During the Spring 2006 semester, students from Cape Fear High School used the method to search for near-Earth objects and supernovae. Using images from the Astronomical Research Institute (Charleston, IL) the method contributed to the original discovery of two supernovae, SN 2006al and SN 2006bi.
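The align-and-subtract workflow can be sketched minimally. Registration and PSF matching, which are central to Alard's full method, are assumed already done here; only the background correction and subtraction are shown, with a toy star field.

```python
import numpy as np

def difference_image(ref, new):
    """Remove each epoch's median sky level, then subtract: anything
    static cancels, anything that appeared, moved, or brightened
    survives in the difference."""
    return (new - np.median(new)) - (ref - np.median(ref))

# Toy frames: the same stars under different sky brightness, plus one
# transient in the second epoch.
stars = np.zeros((64, 64))
stars[12, 40] = 500.0
stars[33, 8] = 300.0               # static stars
ref = stars + 10.0                 # epoch 1, sky level 10
new = stars + 20.0                 # epoch 2, sky level 20
new[30, 30] += 50.0                # the transient
diff = difference_image(ref, new)
```

The static stars and the differing sky levels cancel, leaving only the transient, which is what students then vet by eye against follow-up images.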
NASA Astrophysics Data System (ADS)
Bolzoni, Paolo; Somogyi, Gábor; Trócsányi, Zoltán
2011-01-01
We perform the integration of all iterated singly-unresolved subtraction terms, as defined in ref. [1], over the two-particle factorized phase space. We also sum over the unresolved parton flavours. The final result can be written as a convolution (in colour space) of the Born cross section and an insertion operator. We spell out the insertion operator in terms of 24 basic integrals that are defined explicitly. We compute the coefficients of the Laurent expansion of these integrals in two different ways, with the method of Mellin-Barnes representations and sector decomposition. Finally, we present the Laurent-expansion of the full insertion operator for the specific examples of electron-positron annihilation into two and three jets.
Aalto, Sargo; Wallius, Esa; Näätänen, Petri; Hiltunen, Jaana; Metsähonkala, Liisa; Sipilä, Hannu; Karlsson, Hasse
2005-09-01
A methodological study on subject-specific regression analysis (SSRA) exploring the correlation between the neural response and the subjective evaluation of emotional experience in eleven healthy females is presented. The target emotions, i.e., amusement and sadness, were induced using validated film clips, regional cerebral blood flow (rCBF) was measured using positron emission tomography (PET), and the subjective intensity of the emotional experience during the PET scanning was measured using a category ratio (CR-10) scale. Reliability analysis of the rating data indicated that the subjects rated the intensity of their emotional experience fairly consistently on the CR-10 scale (Cronbach alphas 0.70-0.97). A two-phase random-effects analysis was performed to ensure the generalizability and inter-study comparability of the SSRA results. Random-effects SSRAs using Statistical non-Parametric Mapping 99 (SnPM99) showed that rCBF correlated with the self-rated intensity of the emotional experience mainly in the brain regions that were identified in the random-effects subtraction analyses using the same imaging data. Our results give preliminary evidence of a linear association between the neural responses related to amusement and sadness and the self-evaluated intensity of the emotional experience in several regions involved in the emotional response. SSRA utilizing subjective evaluation of emotional experience turned out to be a feasible and promising method of analysis. It allows versatile exploration of the neurobiology of emotions and the neural correlates of actual and individual emotional experience. Thus, SSRA might be able to catch the idiosyncratic aspects of the emotional response better than traditional subtraction analysis.
Myohara, Maroko; Niva, Cintia Carla; Lee, Jae Min
2006-08-01
To identify genes specifically activated during annelid regeneration, suppression subtractive hybridization was performed with cDNAs from regenerating and intact Enchytraeus japonensis, a terrestrial oligochaete that can regenerate a complete organism from small body fragments within 4-5 days. Filter array screening subsequently revealed that about 38% of the forward-subtracted cDNA clones contained genes that were upregulated during regeneration. Two hundred seventy-nine of these clones were sequenced and found to contain 165 different sequences (79 known and 86 unknown). Nine clones were fully sequenced and four of these sequences were matched to known genes for glutamine synthetase, glucosidase 1, retinal protein 4, and phosphoribosylaminoimidazole carboxylase, respectively. The remaining five clones encoded an unknown open-reading frame. The expression levels of these genes were highest during blastema formation. Our present results, therefore, demonstrate the great potential of annelids as a new experimental subject for the exploration of unknown genes that play critical roles in animal regeneration.
Kido, S; Kuriyama, K; Hosomi, N; Inoue, E; Kuroda, C; Horai, T
2000-02-01
This study endeavored to clarify the usefulness of single-exposure dual-energy subtraction computed radiography (CR) of the chest, and the ability of observers using soft-copy images to detect low-contrast simulated pulmonary nodules. Conventional and bone-subtracted CR images of 25 chest phantom image sets with a low-contrast nylon nodule and 25 without a nodule were interpreted by 12 observers (6 radiologists, 6 chest physicians), who rated each on a continuous confidence scale and marked the position of the nodule if one was present. Hard-copy images were 7 x 7-inch laser-printed CR films, and soft-copy images were displayed on a 21-inch noninterlaced color CRT monitor with an optimized dynamic range. Soft-copy images were adjusted to the same size as hard-copy images and were viewed under darkened illumination in the reading room. No significant differences in detection performance were found between hard- and soft-copy images. In conclusion, soft-copy images were found to be useful in detecting low-contrast simulated pulmonary nodules.
THE IMPACT OF POINT-SOURCE SUBTRACTION RESIDUALS ON 21 cm EPOCH OF REIONIZATION ESTIMATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trott, Cathryn M.; Wayth, Randall B.; Tingay, Steven J., E-mail: cathryn.trott@curtin.edu.au
Precise subtraction of foreground sources is crucial for detecting and estimating 21 cm H I signals from the Epoch of Reionization (EoR). We quantify how imperfect point-source subtraction due to limitations of the measurement data set yields structured residual signal in the data set. We use the Cramér-Rao lower bound, as a metric for quantifying the precision with which a parameter may be measured, to estimate the residual signal in a visibility data set due to imperfect point-source subtraction. We then propagate these residuals into two metrics of interest for 21 cm EoR experiments, the angular power spectrum and the two-dimensional power spectrum, using a combination of full analytic covariant derivation, analytic variant derivation, and covariant Monte Carlo simulations. This methodology differs from previous work in two ways: (1) it uses information theory to set the point-source position error, rather than assuming a global rms error, and (2) it describes a method for propagating the errors analytically, thereby obtaining the full correlation structure of the power spectra. The methods are applied to two upcoming low-frequency instruments that are proposing to perform statistical EoR experiments: the Murchison Widefield Array and the Precision Array for Probing the Epoch of Reionization. In addition to the actual antenna configurations, we apply the methods to minimally redundant and maximally redundant configurations. We find that for peeling sources above 1 Jy, the amplitude of the residual signal, and its variance, will be smaller than the contribution from thermal noise for the observing parameters proposed for upcoming EoR experiments, and that optimal subtraction of bright point sources will not be a limiting factor for EoR parameter estimation. We then use the formalism to provide an ab initio analytic derivation motivating the 'wedge' feature in the two-dimensional power spectrum, complementing previous discussion in the literature.
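As a toy illustration of the Cramér-Rao idea used above (the bound on how precisely a source position can be estimated, here for a 1D Gaussian instrument response with additive Gaussian noise, not the paper's visibility-domain derivation):

```python
import math

def crlb_position(samples, amplitude, width, sigma_noise, x0=0.0):
    """Cramér-Rao lower bound on the variance of a point source's fitted
    position. Model: amplitude * exp(-(x - x0)^2 / (2 width^2)) sampled at
    `samples` with i.i.d. Gaussian noise of std sigma_noise. The bound is
    the inverse Fisher information; all parameters here are illustrative."""
    def dmodel_dx0(x):
        # derivative of the Gaussian model with respect to the position x0
        return (amplitude * (x - x0) / width ** 2
                * math.exp(-(x - x0) ** 2 / (2 * width ** 2)))
    fisher = sum(dmodel_dx0(x) ** 2 for x in samples) / sigma_noise ** 2
    return 1.0 / fisher
```

The familiar scalings follow directly: doubling the noise quadruples the variance bound, mirroring why brighter sources can be peeled more precisely.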
Computer-aided diagnosis and artificial intelligence in clinical imaging.
Shiraishi, Junji; Li, Qiang; Appelbaum, Daniel; Doi, Kunio
2011-11-01
Computer-aided diagnosis (CAD) is rapidly entering the radiology mainstream. It has already become part of routine clinical work for the detection of breast cancer with mammograms. The computer output is used as a "second opinion" in assisting radiologists' image interpretations. The computer algorithm generally consists of several steps that may include image processing, image feature analysis, and data classification via the use of tools such as artificial neural networks (ANN). In this article, we will explore these and other current processes that have come to be referred to as "artificial intelligence." One element of CAD, temporal subtraction, has been applied for enhancing interval changes and for suppressing unchanged structures (e.g., normal structures) between 2 successive radiologic images. To reduce misregistration artifacts on the temporal subtraction images, a nonlinear image warping technique for matching the previous image to the current one has been developed. Development of the temporal subtraction method originated with chest radiographs, with the method subsequently being applied to chest computed tomography (CT) and nuclear medicine bone scans. The usefulness of the temporal subtraction method for bone scans was demonstrated by an observer study in which reading times and diagnostic accuracy improved significantly. An additional prospective clinical study verified that the temporal subtraction image could be used as a "second opinion" by radiologists with negligible detrimental effects. ANN was first used in 1990 for computerized differential diagnosis of interstitial lung diseases in CAD. Since then, ANN has been widely used in CAD schemes for the detection and diagnosis of various diseases in different imaging modalities, including the differential diagnosis of lung nodules and interstitial lung diseases in chest radiography, CT, and positron emission tomography/CT.
It is likely that CAD will be integrated into picture archiving and communication systems and will become a standard of care for diagnostic examinations in daily clinical work. Copyright © 2011 Elsevier Inc. All rights reserved.
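The registration-then-subtract idea can be imitated at toy scale. The sketch below is an assumption-laden stand-in that uses small rigid integer shifts instead of the nonlinear warping the abstract describes: it aligns the previous image to the current one by minimizing the sum of squared differences, then subtracts.

```python
def best_shift_subtract(prev, curr, max_shift=2):
    """Crude temporal subtraction: try integer shifts of the previous image
    within +/-max_shift, keep the shift minimizing the sum of squared
    differences over the overlap, then subtract the shifted previous image
    (out-of-bounds pixels contribute 0)."""
    h, w = len(curr), len(curr[0])

    def ssd(dy, dx):
        s = 0.0
        for y in range(h):
            for x in range(w):
                py, px = y - dy, x - dx
                if 0 <= py < h and 0 <= px < w:
                    s += (curr[y][x] - prev[py][px]) ** 2
        return s

    dy, dx = min(((dy, dx) for dy in range(-max_shift, max_shift + 1)
                  for dx in range(-max_shift, max_shift + 1)),
                 key=lambda d: ssd(*d))
    return [[curr[y][x] - (prev[y - dy][x - dx]
             if 0 <= y - dy < h and 0 <= x - dx < w else 0.0)
             for x in range(w)] for y in range(h)]
```

A real temporal-subtraction pipeline warps nonrigidly and handles exposure differences; this only shows why aligned subtraction suppresses unchanged anatomy.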
Tengs, Torstein; Zhang, Haibo; Holst-Jensen, Arne; Bohlin, Jon; Butenko, Melinka A; Kristoffersen, Anja Bråthen; Sorteberg, Hilde-Gunn Opsahl; Berdal, Knut G
2009-10-08
When generating a genetically modified organism (GMO), the primary goal is to give a target organism one or several novel traits by using biotechnology techniques. A GMO will differ from its parental strain in that its pool of transcripts will be altered. Currently, there are no methods that are reliably able to determine if an organism has been genetically altered if the nature of the modification is unknown. We show that the concept of computational subtraction can be used to identify transgenic cDNA sequences from genetically modified plants. Our datasets include 454-type sequences from a transgenic line of Arabidopsis thaliana and published EST datasets from commercially relevant species (rice and papaya). We believe that computational subtraction represents a powerful new strategy for determining if an organism has been genetically modified as well as to define the nature of the modification. Fewer assumptions have to be made compared to methods currently in use and this is an advantage particularly when working with unknown GMOs.
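The core of computational subtraction can be illustrated with a deliberately tiny k-mer filter: reads whose every k-mer occurs in the host reference are discarded, and what survives is a candidate transgene-derived sequence. The sequences and `k=4` below are toy values; real pipelines align full-length reads against the complete genome.

```python
def kmers(seq, k=4):
    """All overlapping substrings of length k."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def computational_subtraction(reads, reference, k=4):
    """Keep reads containing at least one k-mer absent from the host
    reference; those reads cannot be fully explained by the parental
    genome and are candidate transgenic sequences."""
    ref_kmers = kmers(reference, k)
    return [r for r in reads if not kmers(r, k) <= ref_kmers]
```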
Chen, Yingyi; Yu, Huihui; Cheng, Yanjun; Cheng, Qianqian; Li, Daoliang
2018-01-01
A precise predictive model is important for obtaining a clear understanding of the changes in dissolved oxygen content in crab ponds. Highly accurate interval forecasting of dissolved oxygen content is fundamental to reducing risk, and three-dimensional prediction can provide more accurate results and overall guidance. In this study, a hybrid three-dimensional (3D) dissolved oxygen content prediction model based on a radial basis function (RBF) neural network, K-means and subtractive clustering was developed, named the subtractive clustering (SC)-K-means-RBF model. In this modeling process, the K-means and subtractive clustering methods were employed to determine the hyperparameters required by the RBF neural network model. Comparison with the predictions of traditional models validated the effectiveness and accuracy of the proposed hybrid SC-K-means-RBF model for three-dimensional prediction of dissolved oxygen content. Consequently, the proposed model can effectively display the three-dimensional distribution of dissolved oxygen content and serve as a guide for feeding and future studies.
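The SC step can be sketched with Chiu-style subtractive clustering, commonly used to seed RBF centers: each point's "potential" is a sum of Gaussian kernels over all points; the highest-potential point becomes a center and its influence is subtracted before picking the next. The radii `ra`, `rb` and the data below are illustrative assumptions, not values from the paper.

```python
import math

def subtractive_clustering(points, n_centers=2, ra=1.0, rb=1.5):
    """Pick n_centers cluster centers by potential reduction. ra sets the
    neighborhood radius for the potential; rb (> ra) sets the suppression
    radius so nearby points are not chosen twice."""
    a, b = 4.0 / ra ** 2, 4.0 / rb ** 2
    d2 = lambda p, q: sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    pot = [sum(math.exp(-a * d2(p, q)) for q in points) for p in points]
    centers = []
    for _ in range(n_centers):
        i = max(range(len(points)), key=pot.__getitem__)
        centers.append(points[i])
        # subtract the chosen center's influence from every point's potential
        pot = [pot[j] - pot[i] * math.exp(-b * d2(points[j], points[i]))
               for j in range(len(points))]
    return centers
```

The returned centers would then become the RBF hidden-unit centers, with widths set from `ra` or fitted separately.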
Quantitation of Fine Displacement in Echography
NASA Astrophysics Data System (ADS)
Masuda, Kohji; Ishihara, Ken; Yoshii, Ken; Furukawa, Toshiyuki; Kumagai, Sadatoshi; Maeda, Hajime; Kodama, Shinzo
1993-05-01
A high-speed digital subtraction echography system was developed to visualize the fine displacement of human internal organs. This method indicates differences in position through time-series images of high-frame-rate echography. Fine displacement of less than an ultrasonic wavelength can be observed. This method, however, lacks the ability to measure displacement length quantitatively. The subtraction between two successive images was affected by the displacement direction even when the displacement length was the same. To solve this problem, convolution of the echogram with a Gaussian distribution was used. To express displacement length quantitatively as brightness, normalization by the brightness gradient was applied. The quantitation algorithm was applied to successive B-mode images. Compared with the simply subtracted images, the quantitated images express the motion of organs more precisely. Expansion of the carotid artery and fine motion of the ventricular walls can be visualized more easily. Displacement length can be quantitated on the scale of a wavelength; under more static conditions, the system quantitates displacement lengths much smaller than a wavelength.
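The smooth-subtract-normalize chain described above can be sketched in one dimension. This is a minimal reading of the abstract, not the authors' implementation: both scan lines are Gaussian-smoothed, subtracted, and the difference is divided by the local brightness gradient so the output approximates displacement length rather than raw intensity change.

```python
import math

def gauss_kernel(sigma, radius):
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(line, kernel):
    """Convolve with edge clamping."""
    r = len(kernel) // 2
    return [sum(kernel[j + r] * line[min(max(i + j, 0), len(line) - 1)]
                for j in range(-r, r + 1)) for i in range(len(line))]

def displacement_map(frame0, frame1, sigma=1.5):
    """Difference of smoothed frames divided by the local gradient:
    for small shifts d, f(x - d) - f(x) ~ -d * f'(x), so the ratio
    recovers (minus) the displacement d along the scan line."""
    k = gauss_kernel(sigma, radius=4)
    a, b = smooth(frame0, k), smooth(frame1, k)
    out = []
    for i in range(1, len(a) - 1):
        grad = (a[i + 1] - a[i - 1]) / 2.0
        out.append((b[i] - a[i]) / grad if abs(grad) > 1e-6 else 0.0)
    return out
```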
Microchannel plate cross-talk mitigation for spatial autocorrelation measurements
NASA Astrophysics Data System (ADS)
Lipka, Michał; Parniak, Michał; Wasilewski, Wojciech
2018-05-01
Microchannel plates (MCPs) are the basis for many spatially resolved single-particle detectors, such as ICCD or I-sCMOS cameras employing image intensifiers (IIs), MCPs with delay-line anodes for the detection of cold gas particles, or Cherenkov radiation detectors. However, the spatial characterization provided by an MCP is severely limited by cross-talk between its microchannels, rendering MCPs and IIs ill-suited for autocorrelation measurements. Here, we present a cross-talk subtraction method, experimentally exemplified for an I-sCMOS-based measurement of the second-order intensity autocorrelation function of pseudo-thermal light at the single-photon level. The method merely requires a dark-counts measurement for calibration. A reference cross-correlation measurement certifies the cross-talk subtraction. While remaining universal for MCP applications, the presented cross-talk subtraction in particular simplifies quantum-optical setups. With the possibility of autocorrelation measurements, the signal no longer needs to be divided into two camera regions for a cross-correlation measurement, reducing the complexity of the experimental setup and at least doubling the simultaneously usable camera sensor region.
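A toy version of the dark-counts calibration conveys the idea (this is an assumption-laden simplification, not the paper's estimator): dark counts are uncorrelated, so any neighbor-pair coincidences seen in dark frames must come from cross-talk; that rate per detected count, scaled to the signal data, is subtracted from the measured pair count.

```python
def crosstalk_corrected_pairs(signal_pairs, signal_counts,
                              dark_pairs, dark_counts):
    """Estimate the cross-talk pair rate per count from dark frames and
    subtract the expected number of spurious pairs from the signal
    measurement. All four inputs are raw tallies over the same ROI."""
    crosstalk_per_count = dark_pairs / dark_counts
    return signal_pairs - crosstalk_per_count * signal_counts
```

The corrected pair count would then feed the g2 estimate in place of the raw one.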
A New Moving Object Detection Method Based on Frame-difference and Background Subtraction
NASA Astrophysics Data System (ADS)
Guo, Jiajia; Wang, Junping; Bai, Ruixue; Zhang, Yao; Li, Yong
2017-09-01
Although many moving object detection methods have been proposed, moving object extraction remains the core task in video surveillance. In the complex scenes of the real world, however, false detections, missed detections, and cavities inside the detected body persist. To address the incomplete detection of moving objects, this paper proposes a new detection method that combines an improved frame difference with Gaussian mixture background subtraction. To make the detection more complete and accurate, image repair and morphological processing techniques, which act as spatial compensation, are applied in the proposed method. Experimental results show that the method effectively eliminates ghosts and noise and fills the cavities of the moving object. Compared with four other moving object detection methods (GMM, ViBe, frame difference, and a method from the literature), the proposed method improves the efficiency and accuracy of detection.
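The fusion step can be sketched minimally: OR the frame-difference mask with the background-subtraction mask, then apply one morphological dilation as crude spatial compensation. A static background frame stands in for the Gaussian mixture model here, and the threshold `t` is an illustrative value.

```python
def threshold_diff(a, b, t):
    """Binary mask where two images differ by more than t."""
    return [[1 if abs(x - y) > t else 0 for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]

def dilate(mask):
    """3x3 binary dilation: fills single-pixel cavities and gaps."""
    h, w = len(mask), len(mask[0])
    return [[1 if any(mask[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if 0 <= y + dy < h and 0 <= x + dx < w) else 0
             for x in range(w)] for y in range(h)]

def detect_moving(frame_prev, frame_curr, background, t=10):
    """Fuse the two cues (frame difference catches motion, background
    subtraction catches the whole object) and dilate the union."""
    fd = threshold_diff(frame_prev, frame_curr, t)
    bg = threshold_diff(background, frame_curr, t)
    fused = [[a | b for a, b in zip(ra, rb)] for ra, rb in zip(fd, bg)]
    return dilate(fused)
```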
VizieR Online Data Catalog: NGC 1068 deep millimeter spectroscopy observations (Qiu+, 2018)
NASA Astrophysics Data System (ADS)
Qiu, J.; Wang, J.; Shi, Y.; Zhang, J.; Fang, M.; Li, F.
2018-05-01
We present the results of a 1-3 mm molecular line survey toward NGC 1068. We show the final averaged spectra after eliminating bad scans and subtracting zeroth-order baselines from the individual spectra. The y-axis is on the antenna temperature scale. (3 data files).
NASA Astrophysics Data System (ADS)
Lotfy, Hayam M.; Tawakkol, Shereen M.; Fahmy, Nesma M.; Shehata, Mostafa A.
2014-03-01
A novel spectrophotometric technique was developed for the simultaneous determination of ternary mixtures, without prior separation steps, called the successive spectrophotometric resolution technique. The technique is based on either successive ratio subtraction or successive derivative subtraction, and the mathematical explanation of the procedure is illustrated. In order to evaluate the applicability of the methods, model data as well as experimental data were tested. The experimental data concern the simultaneous spectrophotometric determination of lidocaine hydrochloride (LH), calcium dobesilate (CD) and dexamethasone acetate (DA) in the presence of hydroquinone (HQ), the degradation product of calcium dobesilate. The proposed drugs were determined at their maxima, 202 nm, 305 nm, 239 nm and 225 nm for LH, CD, DA and HQ, respectively, by successive ratio subtraction coupled with the constant multiplication method to obtain the zero-order absorption spectra, while by applying successive derivative subtraction they were determined from their first-derivative spectra at 210 nm for LH, 320 nm or P292-320 for CD, 256 nm or P225-252 for DA, and P220-233 for HQ. The calibration curves were linear over the concentration ranges of 2-20 μg/mL for both LH and DA, 6-50 μg/mL for CD, and 3-40 μg/mL for HQ. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of a pharmaceutical formulation containing the cited drugs, with no interference from other dosage-form additives. The proposed methods were validated according to the ICH guidelines. The obtained results were statistically compared with those of the official BP methods for LH, DA, and CD, and with the official USP method for HQ, using Student's t-test, the F-test, and one-way ANOVA, showing no significant difference with respect to accuracy and precision.
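One step of ratio subtraction can be shown with a worked numeric example (toy spectra, not the paper's data): divide the mixture spectrum by the divisor component's spectrum, subtract the constant plateau that appears where only the divisor absorbs, and multiply back to recover the remaining components.

```python
def ratio_subtraction(mixture, divisor):
    """If mixture = X + c * divisor and X absorbs zero at the spectrum's
    tail, then mixture/divisor flattens to the constant c there; removing
    that plateau and multiplying back yields X's spectrum."""
    ratio = [m / d for m, d in zip(mixture, divisor)]
    constant = ratio[-1]  # plateau read where the other component absorbs zero
    return [(r - constant) * d for r, d in zip(ratio, divisor)]
```

Successive application with each component as divisor resolves a ternary mixture, which is the "successive" part of the technique.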
Dynamic cone beam CT angiography of carotid and cerebral arteries using canine model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai Weixing; Zhao Binghui; Conover, David
2012-01-15
Purpose: This research is designed to develop and evaluate a flat-panel detector-based dynamic cone beam CT system for dynamic angiography imaging, which is able to provide both dynamic functional information and dynamic anatomic information from one multirevolution cone beam CT scan. Methods: A dynamic cone beam CT scan acquired projections over four revolutions within a time window of 40 s after contrast agent injection through a femoral vein to cover the entire wash-in and wash-out phases. A dynamic cone beam CT reconstruction algorithm was utilized and a novel recovery method was developed to correct the time-enhancement curve of contrast flow. From the same data set, both projection-based subtraction and reconstruction-based subtraction approaches were utilized and compared to remove the background tissues and visualize the 3D vascular structure to provide the dynamic anatomic information. Results: Through computer simulations, the new recovery algorithm for dynamic time-enhancement curves was optimized and showed excellent accuracy in recovering the actual contrast flow. Canine model experiments also indicated that the recovered time-enhancement curves from dynamic cone beam CT imaging agreed well with those of an IV-digital subtraction angiography (DSA) study. The dynamic vascular structures reconstructed using projection-based subtraction and reconstruction-based subtraction were almost identical, as the differences between them were comparable to the background noise level. At the enhancement peak, all the major carotid and cerebral arteries and the circle of Willis could be clearly observed. Conclusions: The proposed dynamic cone beam CT approach can accurately recover the actual contrast flow, and dynamic anatomic imaging can be obtained with high isotropic 3D resolution. This approach is promising for diagnosis and treatment planning of vascular diseases and strokes.
Hu, Guohong; Wang, Hui-Yun; Greenawalt, Danielle M.; Azaro, Marco A.; Luo, Minjie; Tereshchenko, Irina V.; Cui, Xiangfeng; Yang, Qifeng; Gao, Richeng; Shen, Li; Li, Honghua
2006-01-01
Microarray-based analysis of single nucleotide polymorphisms (SNPs) has many applications in large-scale genetic studies. To minimize the influence of experimental variation, microarray data usually need to be processed in several respects, including background subtraction, normalization and low-signal filtering, before genotype determination. Although many sophisticated algorithms exist for these purposes, biases are still present. In the present paper, new algorithms for SNP microarray data analysis and the software AccuTyping, developed on the basis of these algorithms, are described. The algorithms take advantage of the large number of SNPs included in each assay and the fact that the top and bottom 20% of SNPs can be safely treated as homozygous after sorting based on the ratios of their signal intensities. These SNPs are then used as controls for color-channel normalization and background subtraction. Genotype calls are made based on the logarithms of the signal-intensity ratios using two cutoff values, which were determined after training the program with a dataset of ∼160 000 genotypes and validated by non-microarray methods. AccuTyping was used to determine >300 000 genotypes of DNA and sperm samples. The accuracy was shown to be >99%. AccuTyping can be downloaded from . PMID:16982644
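The sorting-and-cutoff rule can be sketched at toy scale. This is a loose reading of the abstract, not AccuTyping itself: the bottom and top 20% of intensity ratios serve as homozygous controls for channel normalization, and calls use a symmetric log-ratio cutoff (the value `cut=1.0` is a hypothetical threshold, not the trained one).

```python
import math

def call_genotypes(intensities, cut=1.0):
    """intensities: list of (channel_a, channel_b) pairs, one per SNP.
    Homozygous extremes calibrate the channel scale; calls are then
    AA / AB / BB by log-ratio against +/-cut."""
    ratios = sorted(a / b for a, b in intensities)
    n = len(ratios)
    lo = sum(ratios[: n // 5]) / (n // 5)        # bottom 20%: homozygous BB
    hi = sum(ratios[-(n // 5):]) / (n // 5)      # top 20%: homozygous AA
    scale = math.sqrt(lo * hi)                   # channel normalization factor
    calls = []
    for a, b in intensities:
        lr = math.log((a / b) / scale)
        calls.append('AA' if lr > cut else 'BB' if lr < -cut else 'AB')
    return calls
```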
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carvalho, Paulo R. S.; Leite, Marcelo M.
2013-09-15
We introduce a simpler although unconventional minimal subtraction renormalization procedure in the case of a massive scalar λφ⁴ theory in Euclidean space using dimensional regularization. We show that this method is very similar to its counterpart in massless field theory. In particular, the choice of using the bare mass at higher perturbative order instead of employing its tree-level counterpart eliminates all tadpole insertions at that order. As an application, we compute diagrammatically the critical exponents η and ν at least up to two loops. We perform an explicit comparison with the Bogoliubov-Parasyuk-Hepp-Zimmermann (BPHZ) method at the same loop order, show that the proposed method requires fewer diagrams and establish a connection between the two approaches.
Masakiyo, Yoshiaki; Yoshida, Akihiro; Shintani, Yasuyuki; Takahashi, Yusuke; Ansai, Toshihiro; Takehara, Tadamichi
2010-06-01
Prevotella intermedia and Prevotella nigrescens, which are often isolated from periodontal sites, were once considered two different genotypes of P. intermedia. Although the genomic sequence of P. intermedia was determined recently, little is known about the genetic differences between P. intermedia and P. nigrescens. The subtractive hybridization technique is a powerful method for generating a set of DNA fragments differing between two closely related bacterial strains or species. We used subtractive hybridization to identify the DNA regions specific to P. intermedia ATCC 25611 and P. nigrescens ATCC 25261. Using this method, four P. intermedia ATCC 25611-specific and three P. nigrescens ATCC 25261-specific regions were determined. From the species-specific regions, insertion sequence (IS) elements were isolated for P. intermedia. IS elements play an important role in the pathogenicity of bacteria. From the P. intermedia-specific regions, the genes for adenine-specific DNA methyltransferase and 8-amino-7-oxononanoate synthase were isolated. The P. nigrescens-specific region contained a Flavobacterium psychrophilum SprA homologue, a cell-surface protein involved in gliding motility, Prevotella melaninogenica ATCC 25845 glutathione peroxidase, and Porphyromonas gingivalis ATCC 33277 leucyl-tRNA synthetase. The results demonstrate that the subtractive hybridization technique was useful for distinguishing between the two closely related species. Furthermore, this technique will contribute to our understanding of the virulence of these species. 2009 Elsevier Ltd. All rights reserved.
Marker, Ryan J; Maluf, Katrina S
2014-12-01
Electromyography (EMG) recordings from the trapezius are often contaminated by the electrocardiography (ECG) signal, making it difficult to distinguish low-level muscle activity from muscular rest. This study investigates the influence of ECG contamination on EMG amplitude and frequency estimations in the upper trapezius during muscular rest and low-level contractions. A new method of ECG contamination removal, filtered template subtraction (FTS), is described and compared to 30 Hz high-pass filter (HPF) and averaged template subtraction (ATS) methods. FTS creates a unique template of each ECG artifact using a low-pass filtered copy of the contaminated signal, which is subtracted from contaminated periods in the original signal. ECG contamination results in an over-estimation of EMG amplitude during rest in the upper trapezius, with negligible effects on amplitude and frequency estimations during low-intensity isometric contractions. FTS and HPF successfully removed ECG contamination from periods of muscular rest, yet introduced errors during muscle contraction. Conversely, ATS failed to fully remove ECG contamination during muscular rest, yet did not introduce errors during muscle contraction. The relative advantages and disadvantages of different ECG contamination removal methods should be considered in the context of the specific motor tasks that require analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.
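The FTS idea admits a compact sketch (a simplification under stated assumptions: a moving-average stands in for the paper's low-pass filter, and artifact windows are given rather than detected): each ECG artifact's template is a low-pass filtered copy of the contaminated signal itself, subtracted only inside that artifact's window.

```python
def lowpass(signal, width=5):
    """Moving-average low-pass stand-in for the filter used in FTS."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def filtered_template_subtraction(emg, qrs_windows):
    """Build a unique template per artifact from the low-pass filtered
    signal and subtract it inside each (start, stop) window, leaving the
    rest of the EMG untouched."""
    template = lowpass(emg)
    out = list(emg)
    for start, stop in qrs_windows:
        for i in range(start, stop):
            out[i] -= template[i]
    return out
```

Because the template follows each individual heartbeat, the method tracks beat-to-beat ECG variation, which is the advantage the abstract reports over averaged templates during rest.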
Breaking the diffraction barrier using coherent anti-Stokes Raman scattering difference microscopy.
Wang, Dong; Liu, Shuanglong; Chen, Yue; Song, Jun; Liu, Wei; Xiong, Maozhen; Wang, Guangsheng; Peng, Xiao; Qu, Junle
2017-05-01
We propose a method to improve the resolution of coherent anti-Stokes Raman scattering (CARS) microscopy, and present a theoretical model. The proposed method, coherent anti-Stokes Raman scattering difference microscopy (CARS-D), is based on the intensity difference between two differently acquired images: one is the conventional CARS image, and the other is obtained when the sample is illuminated by a doughnut-shaped spot. The final super-resolution CARS-D image is constructed by intensity subtraction of these two images. A subtractive factor weights the two images, and the theoretical model sets this factor to obtain the best imaging performance.
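The subtraction itself is simple enough to state directly; the sketch below uses a hypothetical factor `r=0.7` and clips negative values to zero, both illustrative choices rather than the paper's optimized settings.

```python
def cars_d(conventional, doughnut, r=0.7):
    """Pixelwise difference image I = I_solid - r * I_doughnut.
    The factor r is tuned so the doughnut signal cancels the periphery of
    the solid-spot point spread function, shrinking the effective focal
    volume; negatives are clipped to zero."""
    return [[max(a - r * b, 0.0) for a, b in zip(ra, rb)]
            for ra, rb in zip(conventional, doughnut)]
```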
P300: Waves Identification with and without Subtraction of Traces
Romero, Ana Carla Leite; Reis, Ana Cláudia Mirândola Barbosa; Oliveira, Anna Caroline Silva de; Oliveira Simões, Humberto de; Oliveira Junqueira, Cinthia Amorim de; Frizzo, Ana Cláudia Figueiredo
2017-01-01
Introduction The P300 test requires well-defined and unique criteria, in addition to training for the examiners, to allow a uniform analysis across studies and to avoid variations and errors in the interpretation of measurement results. Objectives The objective of this study is to verify whether there are differences in P300 waves identified with and without subtraction of traces of standard and nonstandard stimuli. Method We conducted this study in collaboration with two electrophysiology research laboratories. From Laboratory 1, we selected 40 tests of subjects aged 7–44 years; from Laboratory 2, we selected 83 tests of subjects aged 18–44 years. We first performed the identification with the nonstandard stimuli; then, we subtracted the nonstandard stimuli from the standard stimuli. The examiners identified the waves, performing a descriptive and comparative analysis of traces with and without subtraction. Results The comparative analysis of traces with and without subtraction from Laboratory 1 showed no significant difference for the right ears (p = 0.13 and 0.28 for differences between latency and amplitude measurements) or the left ears (p = 0.15 and 0.09 for differences between latency and amplitude measurements). For Laboratory 2, investigating both ears, the results did not identify significant differences (p = 0.098 and 0.28 for differences between latency and amplitude measurements). Conclusion No difference was verified between traces with and without subtraction. We suggest identifying this potential using the nonstandard stimuli. PMID:29018497
Four-State Continuous-Variable Quantum Key Distribution with Photon Subtraction
NASA Astrophysics Data System (ADS)
Li, Fei; Wang, Yijun; Liao, Qin; Guo, Ying
2018-06-01
Four-state continuous-variable quantum key distribution (CVQKD) is a discretely modulated CVQKD protocol that generates four nonorthogonal coherent states and exploits the sign of the measured quadrature of each state to encode information, rather than using the quadrature x̂ or p̂ itself. It has been proven that four-state CVQKD is more suitable than Gaussian-modulated CVQKD in terms of transmission distance. In this paper, we propose an improved four-state CVQKD using a non-Gaussian operation, photon subtraction. A suitable photon-subtraction operation can be exploited to improve the maximal transmission distance of CVQKD in point-to-point quantum communication, since it provides a method to enhance the performance of entanglement-based (EB) CVQKD. Photon subtraction not only lengthens the maximal transmission distance by increasing the signal-to-noise ratio but also can be easily implemented with existing technologies. Security analysis shows that the proposed scheme lengthens the maximum transmission distance. Furthermore, by taking the finite-size effect into account, we obtain a tighter bound on the secure distance, which is more practical than that obtained in the asymptotic limit.
A Novel Technique to Detect Code for SAC-OCDMA System
NASA Astrophysics Data System (ADS)
Bharti, Manisha; Kumar, Manoj; Sharma, Ajay K.
2018-04-01
The main task of an optical code division multiple access (OCDMA) system is the detection of the code used by a user in the presence of multiple access interference (MAI). In this paper, a new method of detection, known as XOR subtraction detection, for spectral amplitude coding OCDMA (SAC-OCDMA) based on double-weight codes is proposed and presented. As MAI is the main source of performance deterioration in OCDMA systems, the SAC technique is used here to eliminate the effect of MAI to a large extent. A comparative analysis is then made between the proposed scheme and conventional detection schemes such as complementary subtraction detection, AND subtraction detection and NAND subtraction detection. The system performance is characterized by the Q-factor, BER and received optical power (ROP) with respect to input laser power and fiber length. The theoretical and simulation investigations reveal that the proposed detection technique provides better quality factor, security and received power than the conventional techniques. The wide opening of the eye diagram in the case of the proposed technique also proves its robustness.
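A hypothetical two-user illustration of the XOR-branch idea (toy binary codes and an idealized noiseless channel; the actual optical receiver and code construction differ): correlate the received spectral power with user A's code and with the XOR pattern A^B, then subtract the XOR branch scaled so user B's interference cancels exactly.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def xor_subtraction_detect(received, code_a, code_b):
    """For equal-weight codes with weight w and in-phase cross-correlation
    lam: r.A = d_a*w + d_b*lam and r.(A^B) = (d_a + d_b)*(w - lam), so
    r.A - lam/(w - lam) * r.(A^B) = d_a*(w - lam), isolating user A's bit."""
    w = sum(code_a)
    lam = dot(code_a, code_b)                  # in-phase cross-correlation
    xor = [a ^ b for a, b in zip(code_a, code_b)]
    decision = dot(received, code_a) - lam / (w - lam) * dot(received, xor)
    return decision / (w - lam)                # user A's bit amplitude
```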
NASA Astrophysics Data System (ADS)
Snyder, Jeff; Hanstock, Chris C.; Wilman, Alan H.
2009-10-01
A general in vivo magnetic resonance spectroscopy editing technique is presented that detects weakly coupled spin systems through subtraction while preserving singlets through addition; it is applied to the specific brain metabolite γ-aminobutyric acid (GABA) at 4.7 T. The new method uses double spin echo localization (PRESS) and is based on a constant-echo-time difference spectroscopy approach employing subtraction of two asymmetric echo timings, which is normally applicable only to strongly coupled spin systems. By reducing the flip angle of one of the two refocusing pulses in the PRESS sequence, we demonstrate that this difference method may be extended to weakly coupled systems, thereby providing a very simple yet effective editing process. The difference method is first illustrated analytically using a simple weakly coupled two-spin system. The technique was then demonstrated for the 3.01 ppm resonance of GABA, which is obscured by the strong singlet peak of creatine in vivo. Full numerical simulations, as well as phantom and in vivo experiments, were performed. The difference method used two asymmetric PRESS timings with a constant total echo time of 131 ms and a reduced 120° final pulse, providing 25% GABA yield upon subtraction compared with two short-echo standard PRESS experiments. Phantom and in vivo results from human brain demonstrate the efficacy of this method, in agreement with numerical simulations.
PCA-based approach for subtracting thermal background emission in high-contrast imaging data
NASA Astrophysics Data System (ADS)
Hunziker, S.; Quanz, S. P.; Amara, A.; Meyer, M. R.
2018-03-01
Aims: Ground-based observations at thermal infrared wavelengths suffer from large background radiation due to the sky, telescope and warm surfaces in the instrument. This significantly limits the sensitivity of ground-based observations at wavelengths longer than 3 μm. The main purpose of this work is to analyse this background emission in infrared high-contrast imaging data as an illustration of the problem, show how it can be modelled and subtracted, and demonstrate that doing so can improve the detection of faint sources, such as exoplanets. Methods: We used principal component analysis (PCA) to model and subtract the thermal background emission in three archival high-contrast angular differential imaging datasets in the M' and L' filters. We used an M' dataset of β Pic to describe in detail how the algorithm works and explain how it can be applied. The results of the background subtraction are compared to the results from a conventional mean background subtraction scheme applied to the same dataset. Finally, both methods for background subtraction are compared by performing complete data reductions. We analysed the results from the M' dataset of HD 100546 only qualitatively. For the M' band dataset of β Pic and the L' band dataset of HD 169142, which was obtained with an annular groove phase mask vortex vector coronagraph, we also calculated and analysed the achieved signal-to-noise ratio (S/N). Results: We show that applying PCA is an effective way to remove spatially and temporally varying thermal background emission down to close to the background limit. The procedure also proves to be very successful at reconstructing the background that is hidden behind the point spread function. In the complete data reductions, we find at least qualitative improvements for HD 100546 and HD 169142; however, we fail to find a significant increase in the S/N of β Pic b.
We discuss these findings and argue that in particular datasets with strongly varying observing conditions or infrequently sampled sky background will benefit from the new approach.
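The core of a PCA background subtraction can be sketched in pure Python (a minimal sketch under strong assumptions: frames flattened to vectors, only the first principal component kept via power iteration, and no masking of the source region, whereas real pipelines keep many components and mask the PSF before fitting):

```python
def pca_background_subtract(backgrounds, science, n_iter=50):
    """Model the background as mean + first principal component of the
    background frames, project the science frame onto the model, subtract.
    backgrounds: list of flattened frames; science: one flattened frame."""
    n, m = len(backgrounds), len(backgrounds[0])
    mean = [sum(f[j] for f in backgrounds) / n for j in range(m)]
    resid = [[f[j] - mean[j] for j in range(m)] for f in backgrounds]
    # power iteration for the leading eigenvector of the covariance
    v = [1.0] + [0.0] * (m - 1)   # fixed start; a random start is more robust
    for _ in range(n_iter):
        w = [sum(sum(r[j] * v[j] for j in range(m)) * r[k] for r in resid)
             for k in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        if norm < 1e-12:
            break
        v = [x / norm for x in w]
    centered = [science[j] - mean[j] for j in range(m)]
    coef = sum(centered[j] * v[j] for j in range(m))
    return [centered[j] - coef * v[j] for j in range(m)]
```

Because the fitted model spans the background variations seen in the reference frames, structure the mean subtraction would miss (the temporally varying part) is removed, while signal outside that subspace survives.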
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rana, R; Bednarek, D; Rudin, S
2015-06-15
Purpose: Anti-scatter grid-line artifacts are more prominent for high-resolution x-ray detectors since the fraction of a pixel blocked by the grid septa is large. Direct logarithmic subtraction of the artifact pattern is limited by residual scattered radiation and we investigate an iterative method for scatter correction. Methods: A stationary Smit-Röntgen anti-scatter grid was used with a high resolution Dexela 1207 CMOS X-ray detector (75 µm pixel size) to image an artery block (Nuclear Associates, Model 76-705) placed within a uniform head equivalent phantom as the scattering source. The image of the phantom was divided by a flat-field image obtained without scatter but with the grid to eliminate grid-line artifacts. Constant scatter values were subtracted from the phantom image before dividing by the averaged flat-field-with-grid image. The standard deviation of pixel values for a fixed region of the resultant images with different subtracted scatter values provided a measure of the remaining grid-line artifacts. Results: A plot of the standard deviation of image pixel values versus the subtracted scatter value shows that the image structure noise reaches a minimum before going up again as the scatter value is increased. This minimum corresponds to a minimization of the grid-line artifacts as demonstrated in line profile plots obtained through each of the images perpendicular to the grid lines. Artifact-free images of the artery block were obtained with the optimal scatter value obtained by this iterative approach. Conclusion: Residual scatter subtraction can provide improved grid-line artifact elimination when using the flat-field with grid “subtraction” technique. The standard deviation of image pixel values can be used to determine the optimal scatter value to subtract to obtain a minimization of grid line artifacts with high resolution x-ray imaging detectors.
This study was supported by NIH Grant R01EB002873 and an equipment grant from Toshiba Medical Systems Corp.
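The search described above can be sketched in a few lines; the grid model, function names, and synthetic phantom below are illustrative assumptions, not the authors' actual processing chain:

```python
import numpy as np

def optimal_scatter(phantom, flat_with_grid, scatter_values, roi):
    """For each candidate constant scatter value: subtract it from the phantom
    image, divide by the flat-field-with-grid image, and record the standard
    deviation over a fixed ROI. The minimum marks the scatter estimate that
    leaves the least residual grid-line structure."""
    stds = []
    for s in scatter_values:
        corrected = (phantom - s) / flat_with_grid
        stds.append(corrected[roi].std())
    stds = np.asarray(stds)
    return scatter_values[int(np.argmin(stds))], stds
```

With a synthetic uniform object the minimum falls exactly at the true scatter constant; with real data the curve dips and rises around the optimum, as in the paper's plot.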
Ding, Huanjun; Molloi, Sabee
2017-08-01
To investigate the feasibility of accurate quantification of iodine mass thickness in contrast-enhanced spectral mammography. A computer simulation model was developed to evaluate the performance of a photon-counting spectral mammography system in the application of contrast-enhanced spectral mammography. A figure-of-merit (FOM), which was defined as the decomposed iodine signal-to-noise ratio (SNR) with respect to the square root of the mean glandular dose (MGD), was chosen to optimize the imaging parameters, in terms of beam energy, splitting energy, and prefiltrations for breasts of various thicknesses and densities. Experimental phantom studies were also performed using a beam energy of 40 kVp and a splitting energy of 34 keV with 3 mm Al prefiltration. A two-step calibration method was investigated to quantify the iodine mass thickness, and was validated using phantoms composed of a mixture of glandular and adipose materials, for various breast thicknesses and densities. Finally, the traditional dual-energy log-weighted subtraction method was also studied as a comparison. The measured iodine signal from both methods was compared to the known value to characterize the quantification accuracy and precision. The optimal imaging parameters, which lead to the highest FOM, were found at a beam energy between 42 and 46 kVp with a splitting energy at 34 keV. The optimal tube voltage decreased as the breast thickness or the Al prefiltration increased. The proposed quantification method was able to measure iodine mass thickness on phantoms of various thicknesses and densities with high accuracy. The root-mean-square (RMS) error for cm-scale lesion phantoms was estimated to be 0.20 mg/cm². The precision of the technique, characterized by the standard deviation of the measurements, was estimated to be 0.18 mg/cm². The traditional weighted subtraction method also predicted a linear correlation between the measured signal and the known iodine mass thickness.
However, the correlation slope and offset values were strongly dependent on the total breast thickness and density. The results of this study suggest that iodine mass thickness for cm-scale lesions can be accurately quantified with contrast-enhanced spectral mammography. The quantitative information can potentially improve the differential power for malignancy. © 2017 American Association of Physicists in Medicine.
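The traditional log-weighted subtraction mentioned above can be sketched with a two-material Beer-Lambert toy model; the attenuation coefficients and thicknesses below are made-up illustration values, not the paper's calibration data:

```python
import numpy as np

def weighted_log_subtraction(low, high, w):
    """Dual-energy log-weighted subtraction: with w chosen as the ratio of the
    background (tissue) attenuation at the two energies, the tissue term
    cancels and the residual tracks the iodine mass thickness."""
    return np.log(low) - w * np.log(high)
```

In the toy model the residual is exactly the iodine term; in practice the slope and offset depend on breast thickness and density, which is the limitation the abstract reports.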
Confining potential in momentum space
NASA Technical Reports Server (NTRS)
Norbury, John W.; Kahana, David E.; Maung, Khin Maung
1992-01-01
A method is presented for the solution in momentum space of the bound state problem with a linear potential in r space. The potential is unbounded at large r leading to a singularity at small q. The singularity is integrable, when regulated by exponentially screening the r-space potential, and is removed by a subtraction technique. The limit of zero screening is taken analytically, and the numerical solution of the subtracted integral equation gives eigenvalues and wave functions in good agreement with position space calculations.
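The subtraction technique can be sketched generically (this is the standard identity, with K standing in for the singular momentum-space kernel; the paper's specific kernel and screening limit are not reproduced here):

```latex
\int_0^\infty dq'\, K(q,q')\,\phi(q')
  \;=\; \int_0^\infty dq'\, K(q,q')\,\bigl[\phi(q') - \phi(q)\bigr]
  \;+\; \phi(q) \int_0^\infty dq'\, K(q,q')
```

The bracketed difference vanishes at q' = q, taming the singularity in the first integral, while the second integral is evaluated analytically after regulating the potential with an exponential screening and taking the zero-screening limit.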
An Effectiveness Index and Profile for Instructional Media.
ERIC Educational Resources Information Center
Bond, Jack H.
A scale was developed for judging the relative value of various media in teaching children. Posttest scores were partitioned into several components: error, prior knowledge, guessing, and gain from the learning exercise. By estimating the amounts of prior knowledge, guessing, and error, and then subtracting these from the total score, an index of…
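The partition described above amounts to a simple subtraction; a minimal sketch with illustrative names and numbers:

```python
def effectiveness_index(posttest, prior_knowledge, guessing, error):
    """Gain attributable to the instruction: the posttest score minus the
    estimated non-instructional components (prior knowledge, guessing, error)."""
    return posttest - (prior_knowledge + guessing + error)
```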
NASA Astrophysics Data System (ADS)
Réfy, D. I.; Brix, M.; Gomes, R.; Tál, B.; Zoletnik, S.; Dunai, D.; Kocsis, G.; Kálvin, S.; Szabolics, T.; JET Contributors
2018-04-01
Diagnostic alkali atom (e.g., lithium) beams are routinely used to diagnose magnetically confined plasmas, namely, to measure the plasma electron density profile in the edge and the scrape off layer region. A light splitting optics system was installed into the observation system of the lithium beam emission spectroscopy diagnostic at the Joint European Torus (JET) tokamak, which allows simultaneous measurement of the beam light emission with a spectrometer and a fast avalanche photodiode (APD) camera. The spectrometer measurement allows density profile reconstruction with ˜10 ms time resolution, absolute position calculation from the Doppler shift, spectral background subtraction as well as relative intensity calibration of the channels for each discharge. The APD system is capable of measuring light intensities on the microsecond time scale. However ˜100 μs integration is needed to have an acceptable signal to noise ratio due to moderate light levels. Fast modulation of the beam up to 30 kHz is implemented which allows background subtraction on the 100 μs time scale. The measurement covers the 0.9 < ρpol < 1.1 range with 6-10 mm optical resolution at the measurement location which translates to 3-5 mm radial resolution at the midplane due to flux expansion. An automated routine has been developed which performs the background subtraction, the relative calibration, and the comprehensive error calculation, runs a Bayesian density reconstruction code, and loads results to the JET database. The paper demonstrates the capability of the APD system by analyzing fast phenomena like pellet injection and edge localized modes.
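The fast-modulation background subtraction can be sketched as follows, assuming alternating beam-on/beam-off samples; the diagnostic interpolates the background light measured during beam-off phases, and the variable names here are illustrative:

```python
import numpy as np

def chopped_background_subtraction(signal, beam_on):
    """During beam-off samples only background light is recorded; interpolate
    that background over the whole record and subtract it from the beam-on
    samples to isolate the beam emission."""
    t = np.arange(signal.size)
    background = np.interp(t, t[~beam_on], signal[~beam_on])
    emission = signal - background
    emission[~beam_on] = 0.0       # no beam light during the off phase
    return emission
```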
Low frequency AC waveform generator
Bilharz, Oscar W.
1986-01-01
Low frequency sine, cosine, triangle and square waves are synthesized in circuitry which allows variation in the waveform amplitude and frequency while exhibiting good stability and without requiring significant stabilization time. A triangle waveform is formed by a ramped integration process controlled by a saturation amplifier circuit which produces the necessary hysteresis for the triangle waveform. The output of the saturation circuit is tapped to produce the square waveform. The sine waveform is synthesized by taking the absolute value of the triangular waveform, raising this absolute value to a predetermined power, multiplying the raised absolute value of the triangle wave with the triangle wave itself and properly scaling the resultant waveform and subtracting it from the triangular waveform itself. The cosine is synthesized by squaring the triangular waveform, raising the triangular waveform to a predetermined power and adding the squared waveform raised to the predetermined power with a DC reference and subtracting the squared waveform therefrom, with all waveforms properly scaled. The resultant waveform is then multiplied with a square wave in order to correct the polarity and produce the resultant cosine waveform.
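The sine-shaping step can be sketched numerically. The exponent and scaling below are illustrative choices (p = 2 with the peaks pinned to ±1, giving a cubic approximation), not the patented circuit's exact constants:

```python
import numpy as np

def triangle(t):
    """Unit-amplitude triangle wave, period 1, peaking at t = 0."""
    return 2.0 * np.abs(2.0 * (t % 1.0) - 1.0) - 1.0

def shaped_sine(tri, p=2.0):
    """Following the recipe in the text: raise |tri| to a power, multiply by
    tri itself, scale, and subtract from tri. With p = 2 and c = 1 - 2/pi this
    is a cubic sine approximation accurate to about 1.5 percent."""
    c = 1.0 - 2.0 / np.pi
    return (np.pi / 2.0) * (tri - c * np.abs(tri) ** p * tri)
```

The same shaping idea underlies the cosine branch, where the triangle is first squared to shift the phase by a quarter period.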
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin Mingde; Marshall, Craig T.; Qi, Yi
Purpose: The use of preclinical rodent models of disease continues to grow because these models help elucidate pathogenic mechanisms and provide robust test beds for drug development. Among the major anatomic and physiologic indicators of disease progression and genetic or drug modification of responses are measurements of blood vessel caliber and flow. Moreover, cardiopulmonary blood flow is a critical indicator of gas exchange. Current methods of measuring cardiopulmonary blood flow suffer from some or all of the following limitations: they produce relative values, are limited to global measurements, do not provide vasculature visualization, are not able to measure acute changes, are invasive, or require euthanasia. Methods: In this study, high-spatial and high-temporal resolution x-ray digital subtraction angiography (DSA) was used to obtain vasculature visualization, quantitative blood flow in absolute metrics (ml/min instead of arbitrary units or velocity), and relative blood volume dynamics from discrete regions of interest on a pixel-by-pixel basis (100 × 100 µm²). Results: A series of calibrations linked the DSA flow measurements to standard physiological measurement using thermodilution and Fick's method for cardiac output (CO), which in eight anesthetized Fischer-344 rats was found to be 37.0 ± 5.1 ml/min. Phantom experiments were conducted to calibrate the radiographic density to vessel thickness, allowing a link of DSA cardiac output measurements to cardiopulmonary blood flow measurements in discrete regions of interest. The scaling factor linking relative DSA cardiac output measurements to the Fick's absolute measurements was found to be 18.90 × CO_DSA = CO_Fick. Conclusions: This calibrated DSA approach allows repeated simultaneous visualization of vasculature and measurement of blood flow dynamics on a regional level in the living rat.
NASA Astrophysics Data System (ADS)
Alinea, Allan L.; Kubota, Takahiro
2018-03-01
We perform adiabatic regularization of the power spectrum in nonminimally coupled general single-field inflation with varying speed of sound. The subtraction is performed within the framework of the earlier study by Urakawa and Starobinsky dealing with canonical inflation. Inspired by Fakir and Unruh's model of nonminimally coupled chaotic inflation, we find, upon imposing the near scale-invariant condition, that the subtraction term decays exponentially with the number of e-folds. As in the result for canonical inflation, the regularized power spectrum tends to the "bare" power spectrum as the Universe expands during (and even after) inflation. This work justifies the use of the "bare" power spectrum in standard calculations in the most general context of slow-roll single-field inflation involving nonminimal coupling and varying speed of sound.
Wu, Yanwei; Guo, Pan; Chen, Siying; Chen, He; Zhang, Yinchao
2017-04-01
Auto-adaptive background subtraction (AABS) is proposed as a denoising method for data processing of coherent Doppler lidar (CDL). The method is designed specifically for the low-signal-to-noise-ratio regime, in which the power spectral density of CDL data drifts. Unlike the periodogram maximum (PM) and adaptive iteratively reweighted penalized least squares (airPLS) methods, the proposed method presents reliable peaks and is thus advantageous in identifying peak locations. According to the analysis of simulated and measured data, the proposed method outperforms the airPLS method and the PM algorithm in the furthest detectable range, improving it by approximately 16.7% and 40%, respectively. It also yields smaller mean wind velocity and standard error values than the airPLS and PM methods. The AABS approach improves the quality of Doppler shift estimates and can be applied to obtain complete wind profiles with the CDL.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cappelluti, N.; Urry, M.; Arendt, R.
2017-09-20
We present new measurements of the large-scale clustering component of the cross-power spectra of the source-subtracted Spitzer-IRAC cosmic infrared background and Chandra-ACIS cosmic X-ray background surface brightness fluctuations. Our investigation uses data from the Chandra Deep Field South, Hubble Deep Field North, Extended Groth Strip/AEGIS field, and UDS/SXDF surveys, comprising 1160 Spitzer hours and ∼12 Ms of Chandra data collected over a total area of 0.3 deg². We report the first (>5σ) detection of a cross-power signal on large angular scales >20″ between [0.5–2] keV and the 3.6 and 4.5 μm bands, at ∼5σ and 6.3σ significance, respectively. The correlation with harder X-ray bands is marginally significant. Comparing the new observations with existing models for the contribution of the known unmasked source population at z < 7, we find an excess of about an order of magnitude at 5σ confidence. We discuss possible interpretations for the origin of this excess in terms of the contribution from accreting early black holes (BHs), including both direct collapse BHs and primordial BHs, as well as from scattering in the interstellar medium and intra-halo light.
Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan
2014-08-20
In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying a novel successive relaxation (SG-SR) iterative method to the relaxation factor, additional improvement in the convergence speed over the standard Savitzky-Golay procedure is realized. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved 1 order of magnitude improvement in iteration number and 2 orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore the computation time of the experimentally measured raw Raman spectrum processing from skin tissue decreased from 6.72 to 0.094 s. In general, the processing of the SG-SR method can be conducted within dozens of milliseconds, which can provide a real-time procedure in practical situations.
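The flavor of the iterative fluorescence-rejection scheme, including the relaxation-factor idea, can be sketched as below. A moving average stands in for the Savitzky-Golay filter, and all parameters are illustrative rather than the paper's:

```python
import numpy as np

def iterative_baseline(spectrum, window=15, relax=0.5, iters=50):
    """Iterative smoothing baseline estimate for fluorescence rejection.
    Each pass smooths the current estimate, clips it to lie below the
    spectrum (the baseline must stay under the Raman peaks), and blends
    old and new estimates with a relaxation factor, echoing the SG-SR idea."""
    kernel = np.ones(window) / window
    baseline = spectrum.astype(float)
    for _ in range(iters):
        smoothed = np.convolve(baseline, kernel, mode="same")
        clipped = np.minimum(smoothed, spectrum)
        baseline = relax * clipped + (1.0 - relax) * baseline
    return baseline
```

Subtracting the converged baseline leaves the narrow Raman peaks while removing the broad fluorescence background; the relaxation weight trades per-iteration progress against stability.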
Yu, Huihui; Cheng, Yanjun; Cheng, Qianqian; Li, Daoliang
2018-01-01
A precise predictive model is important for obtaining a clear understanding of the changes in dissolved oxygen content in crab ponds. Highly accurate interval forecasting of dissolved oxygen content is fundamental to reduce risk, and three-dimensional prediction can provide more accurate results and overall guidance. In this study, a hybrid three-dimensional (3D) dissolved oxygen content prediction model based on a radial basis function (RBF) neural network, K-means and subtractive clustering was developed and named the subtractive clustering (SC)-K-means-RBF model. In this modeling process, K-means and subtractive clustering methods were employed to enhance the hyperparameters required in the RBF neural network model. The comparison of the predicted results of different traditional models validated the effectiveness and accuracy of the proposed hybrid SC-K-means-RBF model for three-dimensional prediction of dissolved oxygen content. Consequently, the proposed model can effectively display the three-dimensional distribution of dissolved oxygen content and serve as a guide for feeding and future studies. PMID:29466394
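The clustering-initialized RBF idea can be sketched in one dimension; k-means supplies the RBF centers, and ordinary least squares fits the output weights. The deterministic quantile initialization, the toy target function, and all parameters are illustrative assumptions (the paper additionally uses subtractive clustering to set hyperparameters):

```python
import numpy as np

def kmeans_1d(x, k, iters=20):
    """Plain k-means on 1-D data, deterministically initialized at quantiles."""
    centers = np.quantile(x, np.linspace(0.05, 0.95, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return centers

def rbf_fit(x, y, centers, sigma):
    """Least-squares output weights for a Gaussian RBF network whose centers
    come from clustering (the K-means step of the SC-K-means-RBF idea)."""
    phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2.0 * sigma ** 2))
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return w

def rbf_predict(x, centers, sigma, w):
    phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2.0 * sigma ** 2))
    return phi @ w
```

Placing centers where the data actually cluster, rather than on a fixed grid, is what lets a small network track the spatial structure of a field like dissolved oxygen content.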
NASA Astrophysics Data System (ADS)
Lotfy, Hayam M.; Hegazy, Maha A.; Mowaka, Shereen; Mohamed, Ekram Hany
2016-01-01
A comparative study of smart spectrophotometric techniques for the simultaneous determination of Omeprazole (OMP), Tinidazole (TIN) and Doxycycline (DOX) without prior separation steps is developed. These techniques consist of several consecutive steps utilizing zero-order, ratio, or derivative spectra. The proposed techniques adopt nine simple different methods, namely direct spectrophotometry, dual wavelength, first derivative-zero crossing, amplitude factor, spectrum subtraction, ratio subtraction, derivative ratio-zero crossing, constant center, and successive derivative ratio method. The calibration graphs are linear over the concentration ranges of 1-20 μg/mL, 5-40 μg/mL and 2-30 μg/mL for OMP, TIN and DOX, respectively. These methods are tested by analyzing synthetic mixtures of the above drugs and successfully applied to a commercial pharmaceutical preparation. The methods were validated according to the ICH guidelines; accuracy, precision, and repeatability were found to be within the acceptable limits.
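Of the nine methods, ratio subtraction is easy to sketch. The synthetic two-component spectra below are illustrative; the plateau is a wavelength region where only component X absorbs:

```python
import numpy as np

def ratio_subtraction(mixture, spectrum_x, plateau):
    """Divide the mixture by the unit-concentration spectrum of X; where the
    other component does not absorb, the ratio is flat at X's concentration.
    Subtract that constant and multiply back to recover the other component."""
    ratio = mixture / spectrum_x
    cx = ratio[plateau].mean()           # constant plateau = X's contribution
    other = (ratio - cx) * spectrum_x
    return cx, other
```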
I.v. and intraarterial hybrid digital subtraction angiography: clinical evaluation.
Foley, W D; Beres, J; Smith, D F; Bell, R M; Milde, M W; Lipchik, E O
1986-09-01
Temporal/energy (hybrid) subtraction is a technique for removing soft-tissue motion artifact from digital subtraction angiograms. The diagnostic utility of hybrid subtraction for i.v. and intraarterial angiography was assessed in the first 9 months of operation of a dedicated production system. In i.v. carotid arteriography (N = 127), hybrid subtraction (H) provided a double-profile projection of the carotid bifurcation in an additional 14% of studies, compared with temporal subtraction (T) alone (H79:T48, p < 0.001). However, a change in estimated percent stenosis or additional diagnostic information occurred in only 2% of studies. In i.v. abdominal arteriography (N = 23), hybrid subtraction, compared with temporal subtraction, provided a diagnostic examination in an additional 14% of studies (H20:T17); however, this difference is not statistically significant. An additional three i.v. abdominal angiograms were nondiagnostic. In intraarterial abdominal (N = 98) and pelvic (N = 60) angiography, hybrid subtraction provided a diagnostic examination in an additional 5% of studies (abdomen H94:T90, pelvis H58:T56); this difference was not statistically significant. An additional 5% of all intraarterial abdominal and pelvic digital subtraction angiographic studies were considered nondiagnostic. Hybrid subtraction provides a double-profile view of the carotid bifurcation in a significant number of patients. However, apart from some potential for improved i.v. abdominal arteriography, hybrid subtraction does not result in significant improvement in comparison to conventional temporal-subtraction techniques.
Maps based on 53 GHz (5.7 mm wavelength)
NASA Technical Reports Server (NTRS)
2002-01-01
Maps based on 53 GHz (5.7 mm wavelength) observations made with the DMR over the entire 4-year mission (top) on a scale from 0 - 4 K, showing the near-uniformity of the CMB brightness, (middle) on a scale intended to enhance the contrast due to the dipole described in the slide 19 caption, and (bottom) following subtraction of the dipole component. Emission from the Milky Way Galaxy is evident in the bottom image. See slide 19 caption for information about map smoothing and projection.
Multiresolution image registration in digital x-ray angiography with intensity variation modeling.
Nejati, Mansour; Pourghassem, Hossein
2014-02-01
Digital subtraction angiography (DSA) is a widely used technique for visualization of vessel anatomy in diagnosis and treatment. However, due to unavoidable patient motion, both external and internal, the subtracted angiography images often suffer from motion artifacts that adversely affect the quality of the medical diagnosis. To cope with this problem and improve the quality of DSA images, registration algorithms are often employed before subtraction. In this paper, a novel elastic registration algorithm for registration of digital X-ray angiography images, particularly of the coronary region, is proposed. This algorithm includes a multiresolution search strategy in which a global transformation is calculated iteratively based on local searches in coarse and fine sub-image blocks. The local searches are accomplished in a differential multiscale framework which allows us to capture both large- and small-scale transformations. The local registration transformation also explicitly accounts for local variations in the image intensities, which are incorporated into our model as changes of local contrast and brightness. These local transformations are then smoothly interpolated using a thin-plate spline interpolation function to obtain the global model. Experimental results with several clinical datasets demonstrate the effectiveness of our algorithm in motion artifact reduction.
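A toy version of the local block search with the contrast/brightness model (I_fixed ≈ c·I_moving + b) can be sketched as below. Exhaustive integer shifts and circular boundary handling are simplifications assumed here, not the paper's differential multiscale framework:

```python
import numpy as np

def block_match_with_intensity(fixed, moving, search=2):
    """Exhaustive local search for the integer shift minimizing the residual
    after allowing a per-block contrast/brightness change, fitted by least
    squares at each candidate shift."""
    best, best_err = (0, 0), np.inf
    h, w = fixed.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            A = np.stack([shifted.ravel(), np.ones(h * w)], axis=1)
            coef, *_ = np.linalg.lstsq(A, fixed.ravel(), rcond=None)
            err = np.sum((A @ coef - fixed.ravel()) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best
```

Modeling the intensity change jointly with the shift is what keeps contrast-agent inflow from being mistaken for motion; without the (c, b) fit, the residual would be dominated by brightness differences rather than misalignment.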
NASA Astrophysics Data System (ADS)
Mori, Shintaro; Hara, Takeshi; Tagami, Motoki; Muramatsu, Chicako; Kaneda, Takashi; Katsumata, Akitoshi; Fujita, Hiroshi
2013-02-01
Inflammation of the paranasal sinuses sometimes becomes chronic and requires long-term treatment. The finding is important for early treatment, but general dentists may not recognize it because they focus on the teeth. The purpose of this study was to develop a computer-aided detection (CAD) system for inflammation of the paranasal sinus on dental panoramic radiographs (DPRs) by using the mandible contour, and to demonstrate the potential usefulness of the CAD system by means of receiver operating characteristic analysis. The detection scheme consists of 3 steps: 1) contour extraction of the mandible, 2) contralateral subtraction, and 3) automated detection. The Canny operator and an active contour model were applied to extract the edge in the first step. At the subtraction step, the right region of the extracted contour image was flipped for comparison with the left region. Mutual information between the two selected regions was computed to estimate the shift parameters for image registration. The subtraction images were generated based on the shift parameters. Rectangular regions of the left and right paranasal sinuses on the subtraction image were determined based on the size of the mandible. The abnormal side was determined by taking the difference between the averages of each region. Thirteen readers interpreted all cases without and with the automated results. The averaged AUC of all readers increased from 0.69 to 0.73 with statistical significance (p=0.032) when the automated detection results were provided. In conclusion, the automated detection method based on the contralateral subtraction technique improves readers' interpretation performance for inflammation of the paranasal sinus on DPRs.
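The contralateral subtraction step can be sketched on a toy image. The mutual-information registration is omitted here, and a simple mean comparison stands in for the paper's region scoring; column ranges and values are illustrative:

```python
import numpy as np

def contralateral_subtraction(image):
    """Flip the image left-right and subtract, so left/right asymmetries
    (e.g. unilateral sinus opacification) stand out; symmetric anatomy cancels."""
    return image - image[:, ::-1]

def abnormal_side(image, left_cols, right_cols):
    """Report which side is denser on average, mimicking the region-average
    difference used to pick the abnormal side."""
    diff = image[:, left_cols].mean() - image[:, right_cols].mean()
    return "left" if diff > 0 else "right"
```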
Podlesek, Dino; Meyer, Tobias; Morgenstern, Ute; Schackert, Gabriele; Kirsch, Matthias
2015-01-01
Ultrasound can visualize and update the vessel status in real time during cerebral vascular surgery. We studied the depiction of parent vessels and aneurysms with a high-resolution 3D intraoperative ultrasound (3D-ioUS) imaging system during aneurysm clipping, using rotational digital subtraction angiography (rDSA) as a reference. We analyzed 3D intraoperative ultrasound in 39 patients with cerebral aneurysms to visualize the aneurysm intraoperatively and the nearby vascular tree before and after clipping. Simultaneous coregistration of preoperative subtraction angiography data with 3D intraoperative ultrasound was performed to verify the anatomical assignment. Intraoperative ultrasound detected 35 of 43 aneurysms (81%) in 39 patients. Thirty-nine intraoperative ultrasound measurements were matched with rotational digital subtraction angiography and successfully reconstructed during the procedure. In 7 patients, the aneurysm was only partially visualized by 3D-ioUS or was not in the field of view. Post-clipping intraoperative ultrasound was obtained in 26 patients and successfully reconstructed in 18 (69%) despite clip-related artefacts. The overlap between the 3D-ioUS aneurysm volume and the preoperative rDSA aneurysm volume yielded a mean accuracy of 0.71 (Dice coefficient). Intraoperative coregistration of 3D intraoperative ultrasound data with preoperative rotational digital subtraction angiography is possible with high accuracy. It allows immediate visualization of vessels beyond the microscopic field, as well as parallel assessment of blood velocity and of aneurysm and vascular tree configuration. Although its spatial resolution is lower than that of standard angiography, the method provides an excellent vascular overview, facilitates interpretation of 3D-ioUS, and gives immediate intraoperative feedback on the vascular status. A prerequisite for useful vascular intraoperative ultrasound is image quality and a successful match with preoperative rotational digital subtraction angiography.
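The reported accuracy measure is the Dice coefficient; a minimal implementation for two binary volumes (the toy arrays in the check below are illustrative, not study data):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap of two binary segmentations: 2|A and B| / (|A| + |B|).
    1.0 means identical volumes, 0.0 means no overlap."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
```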
NASA Astrophysics Data System (ADS)
Biernaux, J.; Magain, P.; Hauret, C.
2017-08-01
Context. Strong gravitational lensing gives access to the total mass distribution of galaxies. It can unveil a great deal of information about the lenses' dark matter content when combined with the study of the lenses' light profile. However, gravitational lensing galaxies, by definition, appear surrounded by lensed signal, both point-like and diffuse, that is irrelevant to the lens flux. Therefore, the observer is most often restricted to studying the innermost portions of the galaxy, where classical fitting methods show some instabilities. Aims: We aim at subtracting that lensed signal and at characterising some lenses' light profile by computing their shape parameters (half-light radius, ellipticity, and position angle). Our objective is to evaluate the total integrated flux in an aperture the size of the Einstein ring in order to obtain a robust estimate of the quantity of ordinary (luminous) matter in each system. Methods: We are expanding the work we started in a previous paper that consisted in subtracting point-like lensed images and in independently measuring each shape parameter. We improve it by designing a subtraction of the diffuse lensed signal, based only on one simple hypothesis of symmetry. We apply it to the cases where it proves to be necessary. This extra step improves our study of the shape parameters and we refine it even more by upgrading our half-light radius measurement method. We also calculate the impact of our specific image processing on the error bars. Results: The diffuse lensed signal subtraction makes it possible to study a larger portion of relevant galactic flux, as the radius of the fitting region increases by on average 17%. We retrieve new half-light radii values that are on average 11% smaller than in our previous work, although the uncertainties overlap in most cases. This shows that not taking the diffuse lensed signal into account may lead to a significant overestimate of the half-light radius. 
We are also able to measure the flux within the Einstein radius and to compute secure error bars to all of our results.
Dissociating functional brain networks by decoding the between-subject variability
Seghier, Mohamed L.; Price, Cathy J.
2009-01-01
In this study we illustrate how the functional networks involved in a single task (e.g. the sensory, cognitive and motor components) can be segregated without cognitive subtractions at the second-level. The method used is based on meaningful variability in the patterns of activation between subjects with the assumption that regions belonging to the same network will have comparable variations from subject to subject. fMRI data were collected from thirty nine healthy volunteers who were asked to indicate with a button press if visually presented words were semantically related or not. Voxels were classified according to the similarity in their patterns of between-subject variance using a second-level unsupervised fuzzy clustering algorithm. The results were compared to those identified by cognitive subtractions of multiple conditions tested in the same set of subjects. This illustrated that the second-level clustering approach (on activation for a single task) was able to identify the functional networks observed using cognitive subtractions (e.g. those associated with vision, semantic associations or motor processing). In addition the fuzzy clustering approach revealed other networks that were not dissociated by the cognitive subtraction approach (e.g. those associated with high- and low-level visual processing and oculomotor movements). We discuss the potential applications of our method which include the identification of “hidden” or unpredicted networks as well as the identification of systems level signatures for different subgroupings of clinical and healthy populations. PMID:19150501
Mohamed, Heba M; Lamie, Nesrine T
2016-02-15
Telmisartan (TL), Hydrochlorothiazide (HZ) and Amlodipine besylate (AM) are co-formulated together for hypertension management. Three smart, specific and precise spectrophotometric methods were applied and validated for simultaneous determination of the three cited drugs. Method A is the ratio isoabsorptive point and ratio difference in subtracted spectra (RIDSS) method, which is based on dividing the ternary mixture of the studied drugs by the spectrum of AM to get the division spectrum, from which the concentration of AM can be obtained by measuring the amplitude values in the plateau region at 360 nm. Then the amplitude value of the plateau region was subtracted from the division spectrum, and the HZ concentration was obtained by measuring the difference in amplitude values at 278.5 and 306 nm (corresponding to zero difference of TL), while the total concentration of HZ and TL in the mixture was measured at their isoabsorptive point in the division spectrum at 278.5 nm (Aiso). The TL concentration is then obtained by subtraction. Method B is double divisor ratio spectra derivative spectrophotometry (RS-DS), and Method C is mean centering of ratio spectra (MCR). The proposed methods did not require any initial separation steps prior to the analysis of the three drugs. A comparative study was done between the three methods regarding their simplicity, sensitivity and limitations. Specificity was investigated by analyzing synthetic mixtures containing different ratios of the three studied drugs and their tablet dosage form. Statistical comparison of the obtained results with those found by the official methods was done; differences were non-significant with regard to accuracy and precision. The three methods were validated in accordance with ICH guidelines and can be used in quality control laboratories for TL, HZ and AM. Copyright © 2015 Elsevier B.V. All rights reserved.
Watanabe, Taisuke; Isobe, Kazushige; Suzuki, Taiji; Kawabata, Hideo; Nakamura, Masayuki; Tsukioka, Tsuneyuki; Okudera, Toshimitsu; Okudera, Hajime; Uematsu, Kohya; Okuda, Kazuhiro; Nakata, Koh; Kawase, Tomoyuki
2017-01-01
Platelet concentrates should be quality-assured for purity and identity prior to clinical use. Unlike for the liquid form of platelet-rich plasma, platelet counts cannot be directly determined in solid fibrin clots; instead, they are calculated by subtracting the counts in other liquid or semi-clotted fractions from those in whole blood samples. Having long questioned the validity of this method, we herein examined the possible loss of platelets in the preparation process. Blood samples collected from healthy male donors were immediately centrifuged for advanced platelet-rich fibrin (A-PRF) and concentrated growth factors (CGF) according to recommended centrifugal protocols. Blood cells in liquid and semi-clotted fractions were directly counted. Platelets aggregated on clot surfaces were observed by scanning electron microscopy. A higher centrifugal force increased the numbers of platelets and platelet aggregates in the liquid red blood cell fraction and the semi-clotted red thrombus in the presence and absence of the anticoagulant, respectively. Nevertheless, the calculated platelet counts in A-PRF/CGF preparations were much higher than expected, rendering the currently accepted subtraction method inaccurate for determining platelet counts in fibrin clots. To ensure the quality of solid types of platelet concentrates chairside in a timely manner, a simple and accurate platelet-counting method should be developed immediately. PMID:29563413
Destructive effect of HIFU on rabbit embedded endometrial carcinoma tissues and their vascularities
Guan, Liming; Xu, Gang
2017-01-01
Objectives To evaluate the destructive effect of high-intensity focused ultrasound (HIFU) on early-stage endometrial cancer tissues and their vascularities. Materials and Methods Rabbit endometrial cancer models were established via tumor block implantation for a prospective controlled study. Ultrasonic ablation efficacy was evaluated by pathologic and imaging changes. The target lesions of experimental rabbits before and after ultrasonic ablation were examined at autopsy. Slides were prepared for hematoxylin-eosin staining, elastic fiber staining and endothelial cell staining and observed by optical microscopy; one slide was observed by electron microscopy. The target lesions of experimental animals after ultrasonic ablation were then examined by vascular imaging: one group was visualized by digital subtraction angiography, one group was quantified by color Doppler flow imaging, and one group was assessed by dye perfusion. SPSS 19.0 software was used for statistical analyses. Results Histological examination indicated that HIFU caused coagulative necrosis of the tumor tissues and their vascularities. Tumor vascular structural components, including elastic fibers and endothelial cells, were all destroyed by ultrasonic ablation. Digital subtraction angiography showed that tumor vascular shadows disappeared after ultrasonic ablation. After ultrasonic ablation, the gray-scale of tumor nodules was enhanced on ultrasonography, and tumor peripheral and internal blood flow signals disappeared or were significantly reduced on color Doppler flow imaging. When vascular perfusion was performed after ultrasonic ablation, tumor vessels could not be filled by dye. Conclusion HIFU, as a noninvasive method, can destroy whole endometrial cancer cells and their supplying vascularities, and may be an alternative approach for targeted therapy and a new antiangiogenic strategy for endometrial cancer. PMID:28121624
NASA Astrophysics Data System (ADS)
Zhu, Xinjian; Wu, Ruoyu; Li, Tao; Zhao, Dawei; Shan, Xin; Wang, Puling; Peng, Song; Li, Faqi; Wu, Baoming
2016-12-01
The time-intensity curve (TIC) from contrast-enhanced ultrasound (CEUS) image sequences of uterine fibroids provides important parameter information for qualitative and quantitative evaluation of the efficacy of treatments such as high-intensity focused ultrasound surgery. However, respiration and other physiological movements inevitably affect the process of CEUS imaging, and this reduces the accuracy of TIC calculation. In this study, a method of TIC calculation for vascular perfusion of uterine fibroids based on subtraction imaging with motion correction is proposed. First, the fibroid CEUS video was decoded into frames based on the recording frame rate. Next, the Brox optical flow algorithm was used to estimate the displacement field and correct the motion between frames using a warping technique. Then, subtraction imaging was performed to extract the positional distribution of vascular perfusion (PDOVP). Finally, the average gray level of all pixels in the PDOVP of each image was determined and taken as the TIC of the CEUS image sequence. Both the correlation coefficient and the mutual information of the results obtained with the proposed method were larger than those obtained with the original method. PDOVP extraction results were improved significantly after motion correction. The variance reduction rates were all positive, indicating that the fluctuations of the TIC became less pronounced and that calculation accuracy improved after motion correction. The proposed method can effectively overcome the influence of motion, mainly caused by respiration, and allows precise calculation of the TIC.
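A minimal sketch of the subtraction-imaging and TIC steps, assuming frames are already motion-corrected (the Brox optical-flow correction itself is omitted) and using a synthetic wash-in sequence:

```python
import numpy as np

# Subtraction imaging and TIC on a synthetic contrast wash-in sequence.
rng = np.random.default_rng(0)
baseline = rng.uniform(40, 60, size=(64, 64))        # pre-contrast frame
disc = np.fromfunction(
    lambda y, x: ((y - 32) ** 2 + (x - 32) ** 2) < 100, (64, 64))
frames = [baseline + t * disc for t in range(10)]    # linear wash-in

tic = []
for frame in frames:
    diff = frame - baseline                          # subtraction imaging
    pdovp = diff > 3.0                               # perfused pixels
    tic.append(diff[pdovp].mean() if pdovp.any() else 0.0)
```

With real data, each `frame` would first be warped onto the baseline with a dense optical-flow estimate before the subtraction step.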
Thin-film chip-to-substrate interconnect and methods for making same
Tuckerman, D.B.
1988-06-06
Integrated circuit chips are electrically connected to a silicon wafer interconnection substrate. Thin film wiring is fabricated down bevelled edges of the chips. A subtractive wire fabrication method uses a series of masks and etching steps to form wires in a metal layer. An additive method uses direct laser writing to deposit very thin lines, which can then be plated up to form wires. A quasi-additive or subtractive/additive method forms a pattern of trenches to expose a metal surface which can nucleate subsequent electrolytic deposition of wires. Low inductance interconnections on a 25 micron pitch (1600 wires on a 1 cm square chip) can be produced. The thin film hybrid interconnect eliminates solder joints or welds, and minimizes the levels of metallization. Advantages include good electrical properties, very high wiring density, excellent backside contact, compactness, and high thermal and mechanical reliability. 6 figs.
Thin-film chip-to-substrate interconnect and methods for making same
Tuckerman, David B.
1991-01-01
Integrated circuit chips are electrically connected to a silicon wafer interconnection substrate. Thin film wiring is fabricated down bevelled edges of the chips. A subtractive wire fabrication method uses a series of masks and etching steps to form wires in a metal layer. An additive method uses direct laser writing to deposit very thin metal lines, which can then be plated up to form wires. A quasi-additive or subtractive/additive method forms a pattern of trenches to expose a metal surface which can nucleate subsequent electrolytic deposition of wires. Low inductance interconnections on a 25 micron pitch (1600 wires on a 1 cm square chip) can be produced. The thin film hybrid interconnect eliminates solder joints or welds, and minimizes the levels of metallization. Advantages include good electrical properties, very high wiring density, excellent backside contact, compactness, and high thermal and mechanical reliability.
Lotfy, Hayam M; Hegazy, Maha A; Rezk, Mamdouh R; Omran, Yasmin Rostom
2015-09-05
Smart spectrophotometric methods have been applied and validated for the simultaneous determination of a binary mixture of chloramphenicol (CPL) and prednisolone acetate (PA) without preliminary separation. Two novel methods have been developed: the first depends upon advanced absorbance subtraction (AAS), while the other relies on advanced amplitude modulation (AAM); these are in addition to the well-established dual wavelength (DW), ratio difference (RD) and constant center coupled with spectrum subtraction (CC-SS) methods. Accuracy, precision and linearity ranges of these methods were determined. Moreover, selectivity was assessed by analyzing synthetic mixtures of both drugs. The proposed methods were successfully applied to the assay of the drugs in their pharmaceutical formulations. No interference was observed from common additives, and the validity of the methods was tested. The obtained results were statistically compared with those of the official spectrophotometric methods, leading to the conclusion that there is no significant difference between the proposed methods and the official ones with respect to accuracy and precision. Copyright © 2015 Elsevier B.V. All rights reserved.
Optical image encryption based on real-valued coding and subtracting with the help of QR code
NASA Astrophysics Data System (ADS)
Deng, Xiaopeng
2015-08-01
A novel optical image encryption method based on real-valued coding and subtraction is proposed with the help of the quick response (QR) code. In the encryption process, the original image to be encoded is first transformed into the corresponding QR code, and then the QR code is encoded into two phase-only masks (POMs) using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In the decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and actual smartphone-collected results show that the method is feasible and has strong tolerance to noise, phase difference and the ratio between the intensities of the two decryption light beams.
NASA Technical Reports Server (NTRS)
Carey, Sean J.; Shipman, R. F.; Clark, F. O.
1996-01-01
We present large-scale images of the infrared emission of the region around the Pleiades using the ISSA data product from the IRAS mission. Residual zodiacal background and a discontinuity in the image due to the scanning strategy of the satellite necessitated special background subtraction methods. The 60/100 color image clearly shows the heating of the ambient interstellar medium by the cluster. The 12/100 and 25/100 images peak on the cluster, as expected for exposure of small dust grains to an enhanced UV radiation field; however, the 25/100 color declines to below the average interstellar value at the periphery of the cluster. Potential causes of the color deficit are discussed. A new method of identifying dense molecular material through infrared emission properties is presented. The difference between the 100 micron flux density and the 60 micron flux density scaled by the average interstellar 60/100 color ratio (ΔI100) is a sensitive diagnostic of material with embedded heating sources (ΔI100 < 0) and cold, dense cores (ΔI100 > 0). The dense cores of the Taurus cloud complex as well as Lynds 1457 are clearly identified by this method, while the IR-bright but diffuse Pleiades molecular cloud is virtually indistinguishable from the nearby infrared cirrus.
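The ΔI100 diagnostic is a one-line subtraction; the sketch below uses invented flux densities and an assumed mean interstellar 60/100 ratio:

```python
import numpy as np

# Delta I_100 diagnostic: subtract the 60-micron map, scaled by the
# mean interstellar 60/100 colour, from the 100-micron map.
i60 = np.array([10.0, 5.0, 20.0])      # 60 um flux densities (invented)
i100 = np.array([50.0, 40.0, 60.0])    # 100 um flux densities (invented)
r_avg = 0.25                           # assumed mean interstellar 60/100 ratio

delta_i100 = i100 - i60 / r_avg
# delta_i100 > 0: cold dense core; delta_i100 < 0: embedded heating source
```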
Combined slope ratio analysis and linear-subtraction: An extension of the Pearce ratio method
NASA Astrophysics Data System (ADS)
De Waal, Sybrand A.
1996-07-01
A new technique, called combined slope ratio analysis, has been developed by extending the Pearce element ratio or conserved-denominator method (Pearce, 1968) to its logical conclusions. If two stoichiometric substances are mixed and certain chemical components are uniquely contained in either one of the two mixing substances, then by treating these unique components as conserved, the composition of the substance not containing the relevant component can be accurately calculated within the limits allowed by analytical and geological error. The calculated composition can then be subjected to rigorous statistical testing using the linear-subtraction method recently advanced by Woronow (1994). Application of combined slope ratio analysis to the rocks of the Uwekahuna Laccolith, Hawaii, USA, and the lavas of the 1959-summit eruption of Kilauea Volcano, Hawaii, USA, yields results that are consistent with field observations.
Formal treatment of astronomical images with a spatially variable PSF
NASA Astrophysics Data System (ADS)
Sánchez, B. O.; Domínguez, M. J.; Lares, M.
2017-10-01
We present a python implementation of a method for PSF determination in the context of optimal subtraction of astronomical images. We introduce an expansion of the spatially variant point spread function (PSF) in terms of the Karhunen-Loève basis. The advantage of this approach is that the basis naturally adapts to the data instead of imposing a fixed, ad hoc analytic form. Simulated image reconstruction using the measured PSF was analyzed, with good agreement in sky background level between the reconstructed and original images. The technique is simple enough to be implemented in more sophisticated image subtraction methods, since it improves their results without extra computational cost in a spatially variant PSF environment.
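A minimal sketch of building a Karhunen-Loève (PCA) basis from star stamps via SVD, using synthetic Gaussian PSFs whose width varies across the field; the paper's actual pipeline and data model are not reproduced here:

```python
import numpy as np

# Karhunen-Loeve (PCA) basis from star stamps via SVD.
rng = np.random.default_rng(2)
yy, xx = np.mgrid[-7:8, -7:8]                     # 15 x 15 stamps

stamps = []
for _ in range(50):
    sigma = rng.uniform(1.0, 2.0)                 # field-dependent width
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    stamps.append((psf / psf.sum()).ravel())

A = np.array(stamps)                              # (n_stars, n_pixels)
A -= A.mean(axis=0)                               # centre the sample
_, s, vt = np.linalg.svd(A, full_matrices=False)
basis = vt[:4].reshape(4, 15, 15)                 # leading KL elements
explained = s[:4] ** 2 / (s ** 2).sum()           # variance fractions
```

A spatially variant PSF model then fits smooth field coordinates against the coefficients of each star in this basis.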
Gravity gradient preprocessing at the GOCE HPF
NASA Astrophysics Data System (ADS)
Bouman, J.; Rispens, S.; Gruber, T.; Schrama, E.; Visser, P.; Tscherning, C. C.; Veicherts, M.
2009-04-01
One of the products derived from the GOCE observations are the gravity gradients. These gravity gradients are provided in the Gradiometer Reference Frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. In order to use these gravity gradients for application in Earth sciences and gravity field analysis, additional pre-processing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information, and error assessment. The temporal gravity gradient corrections consist of tidal and non-tidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and it is robust, as is the median absolute deviation. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models, and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low degree gravity field model as well as gravity gradient scale factors. Both methods allow estimation of gravity gradient scale factors down to the 10^-3 level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10^-2 level with this method.
Preprocessing of gravity gradients at the GOCE high-level processing facility
NASA Astrophysics Data System (ADS)
Bouman, Johannes; Rispens, Sietse; Gruber, Thomas; Koop, Radboud; Schrama, Ernst; Visser, Pieter; Tscherning, Carl Christian; Veicherts, Martin
2009-07-01
One of the products derived from the gravity field and steady-state ocean circulation explorer (GOCE) observations are the gravity gradients. These gravity gradients are provided in the gradiometer reference frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. To use these gravity gradients for application in Earth sciences and gravity field analysis, additional preprocessing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information, and error assessment. The temporal gravity gradient corrections consist of tidal and nontidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection, the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and it is robust, as is the median absolute deviation. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models, and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low-degree gravity field model as well as gravity gradient scale factors. Both methods allow estimation of gravity gradient scale factors down to the 10^-3 level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10^-2 level with this method.
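The outlier-screening step described above (local-median high-pass plus a median-absolute-deviation scale estimate) can be sketched directly. The drift model, window size, and 5-sigma threshold below are illustrative assumptions, not GOCE processing parameters:

```python
import numpy as np
from scipy.ndimage import median_filter

# Outlier screening: a local median acts as a robust high-pass filter
# against 1/f error; the residual scale is set by the median absolute
# deviation (MAD).
def flag_outliers(series, window=11, k=5.0):
    residual = series - median_filter(series, size=window, mode="nearest")
    mad = np.median(np.abs(residual - np.median(residual)))
    return np.abs(residual) > k * 1.4826 * mad    # MAD -> sigma equivalent

rng = np.random.default_rng(1)
grad = np.cumsum(rng.normal(0.0, 1e-3, 500))      # slow 1/f-like drift
grad[[100, 300]] += 0.5                           # two injected spikes
mask = flag_outliers(grad)
```

Because both the median and the MAD are robust, the injected spikes barely perturb the filter or the threshold, so they are cleanly flagged.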
Cloning the Gravity and Shear Stress Related Genes from MG-63 Cells by Subtracting Hybridization
NASA Astrophysics Data System (ADS)
Zhang, Shu; Dai, Zhong-quan; Wang, Bing; Cao, Xin-sheng; Li, Ying-hui; Sun, Xi-qing
2008-06-01
Background The purpose of the present study was to clone gravity- and shear-stress-related genes from osteoblast-like human osteosarcoma MG-63 cells by subtractive hybridization. Method MG-63 cells were divided into two groups (a 1G group and a simulated microgravity group). After being cultured for 60 h in the two different gravitational environments, both groups of MG-63 cells were treated with 1.5 Pa fluid shear stress (FSS) for 60 min. Total RNA was isolated from the cells, and the gravity- and shear-stress-related genes were cloned by subtractive hybridization. Result 200 clones were obtained. 30 positive clones were selected by PCR, using primers based on the vector, and sequenced. The obtained sequences were analyzed by BLAST. Expression changes of 17 sequences were confirmed by RT-PCR; these genes are related to cell proliferation, cell differentiation, protein synthesis, signal transduction and apoptosis. Five unknown genes related to gravity and shear stress were found. Conclusion Our results indicate that simulated microgravity may change the activities of MG-63 cells by inducing functional alterations of specific genes.
Revealing small-scale diffracting discontinuities by an optimization inversion algorithm
NASA Astrophysics Data System (ADS)
Yu, Caixia; Zhao, Jingtao; Wang, Yanfei
2017-02-01
Small-scale diffracting geologic discontinuities play a significant role in studying carbonate reservoirs. Their seismic responses are encoded in diffracted/scattered waves. However, compared with reflections, the energy of these valuable diffractions is generally one or even two orders of magnitude weaker, which means that the information in diffractions is strongly masked by reflections in seismic images. Detecting small-scale cavities and tiny faults in deep carbonate reservoirs, mainly below 6 km, poses an even bigger challenge for seismic diffractions, as the surveyed seismic signals are weak and have a low signal-to-noise ratio (SNR). After analyzing the mechanism of the Kirchhoff migration method, the residual of prestack diffractions located in the neighborhood of the first Fresnel aperture is found to remain in the image space. Therefore, a strategy for extracting diffractions in the image space is proposed, and a regularized L2-norm model with a smoothness constraint on the local slopes is suggested for predicting reflections. According to the focusing conditions of residual diffractions in the image space, two approaches are provided for extracting diffractions. Diffraction extraction can be accomplished directly by subtracting the predicted reflections from the seismic imaging data if the residual diffractions are focused; otherwise, a diffraction velocity analysis is performed to refocus the residual diffractions. Two synthetic examples and one field application demonstrate the feasibility and efficiency of the two proposed methods in detecting small-scale geologic scatterers, tiny faults and cavities.
ERIC Educational Resources Information Center
Larwin, K. H.; Thomas, Eugene M.; Larwin, David A.
2015-01-01
This paper introduces a new term and concept to the leadership discourse: Subtractive Leadership. As an extension of the distributive leadership model, the notion of subtractive leadership refers to a leadership style that detracts from organizational culture and productivity. Subtractive leadership fails to embrace and balance the characteristics…
EPA Region 1 - Valley Depth in Meters
Raster of the Depth in meters of EPA-delimited Valleys in Region 1.Valleys (areas that are lower than their neighbors) were extracted from a Digital Elevation Model (USGS, 30m) by finding the local average elevation, subtracting the actual elevation from the average, and selecting areas where the actual elevation was below the average. The landscape was sampled at seven scales (circles of 1, 2, 4, 7, 11, 16, and 22 km radius) to take into account the diversity of valley shapes and sizes. Areas selected in at least four scales were designated as valleys.
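The multi-scale valley rule described above can be sketched with a separable mean filter standing in for the circular sampling windows; the radii here are in pixels and purely illustrative, and the DEM is a synthetic V-shaped valley:

```python
import numpy as np
from scipy import ndimage

# A cell is a valley pixel if it lies below the local average elevation
# at >= 4 of the sampling scales.  Square mean-filter windows stand in
# for the circular ones used in the actual product.
def valleys(dem, radii_px, min_scales=4):
    votes = np.zeros(dem.shape, dtype=int)
    for r in radii_px:
        local_avg = ndimage.uniform_filter(dem, size=2 * r + 1)
        votes += (local_avg - dem) > 0            # below local average
    return votes >= min_scales

dem = np.abs(np.arange(-40.0, 41.0))[:, None] * np.ones(81)  # V-shaped valley
mask = valleys(dem, radii_px=[1, 2, 4, 7, 11, 16, 22])
```

The valley floor (row 40) is below its local average at every scale, while the ridge tops never are.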
Thinking about efficiency of resource use in forests
Dan Binkley; Jose Luiz Stape; Michael G. Ryan
2004-01-01
The growth of forests can be described as a function of the supply of resources, the proportion of resources captured by trees, and the efficiency with which trees use resources to fix carbon dioxide. This function can be modified to explain wood production by subtracting the allocation of biomass to other tissues and to respiration. At the scale of leaves and seconds...
Accessing the diffracted wavefield by coherent subtraction
NASA Astrophysics Data System (ADS)
Schwarz, Benjamin; Gajewski, Dirk
2017-10-01
Diffractions have unique properties which are still rarely exploited in common practice. Aside from containing subwavelength information on the scattering geometry or indicating small-scale structural complexity, they provide superior illumination compared to reflections. While diffraction occurs arguably on all scales and in most realistic media, the respective signatures typically have low amplitudes and are likely to be masked by more prominent wavefield components. It has been widely observed that automated stacking acts as a directional filter favouring the most coherent arrivals. In contrast to other works, which commonly aim at steering the summation operator towards fainter contributions, we utilize this directional selection to coherently approximate the most dominant arrivals and subtract them from the data. Supported by additional filter functions which can be derived from wave front attributes gained during the stacking procedure, this strategy allows for a fully data-driven recovery of faint diffractions and makes them accessible for further processing. A complex single-channel field data example recorded in the Aegean sea near Santorini illustrates that the diffracted background wavefield is surprisingly rich and despite the absence of a high channel count can still be detected and characterized, suggesting a variety of applications in industry and academia.
The Relationship Between Non-Symbolic Multiplication and Division in Childhood
McCrink, Koleen; Shafto, Patrick; Barth, Hilary
2016-01-01
Children without formal education in addition and subtraction are able to perform multi-step operations over an approximate number of objects. Further, their performance improves when solving approximate (but not exact) addition and subtraction problems that allow for inversion as a shortcut (e.g., a + b − b = a). The current study examines children’s ability to perform multi-step operations, and the potential for an inversion benefit, for the operations of approximate, non-symbolic multiplication and division. Children were trained to compute a multiplication and division scaling factor (*2 or /2, *4 or /4), and then tested on problems that combined two of these factors in a way that either allowed for an inversion shortcut (e.g., 8 * 4 / 4) or did not (e.g., 8 * 4 / 2). Children’s performance was significantly better than chance for all scaling factors during training, and they successfully computed the outcomes of the multi-step testing problems. They did not exhibit a performance benefit for problems with the a * b / b structure, suggesting they did not draw upon inversion reasoning as a logical shortcut to help them solve the multi-step test problems. PMID:26880261
NASA Astrophysics Data System (ADS)
Lotfy, Hayam M.; Hegazy, Maha A.; Mowaka, Shereen; Mohamed, Ekram Hany
2015-04-01
This work presents a comparative study of two smart spectrophotometric techniques, successive resolution and progressive resolution, for the simultaneous determination of ternary mixtures of amlodipine (AML), hydrochlorothiazide (HCT) and valsartan (VAL) without prior separation steps. These techniques consist of several consecutive steps utilizing zero-order and/or ratio and/or derivative spectra. By applying successive spectrum subtraction coupled with the constant multiplication method, the proposed drugs were obtained in their zero-order absorption spectra and determined at their maxima of 237.6 nm, 270.5 nm and 250 nm for AML, HCT and VAL, respectively, while by applying successive derivative subtraction they were obtained in their first derivative spectra and determined at P230.8-246, P261.4-278.2 and P233.7-246.8 for AML, HCT and VAL, respectively. In progressive resolution, the concentrations of the components were determined progressively, either from the same zero-order absorption spectrum using absorbance subtraction coupled with absorptivity factor methods, or from the same ratio spectrum using only one divisor via the amplitude modulation method. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied to the analysis of a pharmaceutical formulation containing the cited drugs. Moreover, a comparative study between spectrum addition, as a novel enrichment technique, and the well-established spiking technique was carried out for the analysis of pharmaceutical formulations containing a low concentration of AML. The methods were validated as per ICH guidelines, where accuracy, precision and specificity were found to be within their acceptable limits. The results obtained by the proposed methods were statistically compared with the reported one, and no significant difference was observed.
Gallo-Oller, Gabriel; Ordoñez, Raquel; Dotor, Javier
2018-06-01
Since its first description, the Western blot has been widely used in molecular biology labs. It is a multistep method that allows the detection and/or quantification of proteins from simple to complex protein mixtures. Western blot quantification constitutes a critical step in obtaining accurate and reproducible results. Owing to the technical knowledge required for densitometry analysis, together with limited resource availability, standard office scanners are often used for image acquisition of developed Western blot films. Furthermore, the use of semi-quantitative software such as ImageJ (Java-based image-processing and analysis software) is clearly increasing in different scientific fields. In this work, we describe the use of an office scanner coupled with ImageJ, together with a new image background subtraction method, for accurate Western blot quantification. The proposed method represents an affordable, accurate and reproducible approach that can be used when resources are limited. Copyright © 2018 Elsevier B.V. All rights reserved.
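The abstract does not spell out the proposed background-subtraction algorithm, so the sketch below only illustrates the general idea behind densitometry background removal: estimate the slowly varying film background with a wide, rolling-ball-like morphological opening and subtract it. The lane profile and all parameters are invented:

```python
import numpy as np
from scipy.ndimage import grey_opening

# Generic densitometry background removal (not the paper's method):
# a wide grey opening removes narrow peaks, leaving the smooth background.
rng = np.random.default_rng(3)
x = np.arange(200)
background = 50 + 0.1 * x                        # film fog + gradient
band = 80 * np.exp(-((x - 100) / 8.0) ** 2)      # protein band
lane = background + band + rng.normal(0, 1, 200)

est_bg = grey_opening(lane, size=51)             # removes narrow peaks
signal = lane - est_bg
band_volume = signal.sum()                       # integrated band density
```

The opening window must be wider than the widest band of interest, otherwise part of the band is absorbed into the background estimate.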
NASA Astrophysics Data System (ADS)
Salem, Hesham; Mohamed, Dalia
2015-04-01
Six simple, specific, accurate and precise spectrophotometric methods were developed and validated for the simultaneous determination of the analgesic drug paracetamol (PARA) and the skeletal muscle relaxant dantrolene sodium (DANT). Three methods manipulate ratio spectra, namely ratio difference (RD), ratio subtraction (RS) and mean centering (MC). The other three methods utilize the isoabsorptive point, either at zero order, namely absorbance ratio (AR) and absorbance subtraction (AS), or at the ratio spectrum, namely amplitude modulation (AM). The proposed spectrophotometric procedures do not require any preliminary separation step. The accuracy, precision and linearity ranges of the proposed methods were determined. The selectivity of the developed methods was investigated by analyzing laboratory-prepared mixtures of the drugs and their combined dosage form. Standard deviation values are less than 1.5 in the assay of raw materials and capsules. The obtained results were statistically compared with each other and with those of reported spectrophotometric methods. The comparison showed no significant difference between the proposed and reported methods regarding either accuracy or precision.
Siniatchkin, Michael; Moeller, Friederike; Jacobs, Julia; Stephani, Ulrich; Boor, Rainer; Wolff, Stephan; Jansen, Olav; Siebner, Hartwig; Scherg, Michael
2007-09-01
The ballistocardiogram (BCG) represents one of the most prominent sources of artifacts that contaminate the electroencephalogram (EEG) during functional MRI. The BCG artifacts may affect the detection of interictal epileptiform discharges (IED) in patients with epilepsy, reducing the sensitivity of the combined EEG-fMRI method. In this study we improved the BCG artifact correction using a multiple source correction (MSC) approach. On the one hand, a source analysis of the IEDs was applied to the EEG data obtained outside the MRI scanner to prevent the distortion of EEG signals of interest during the correction of BCG artifacts. On the other hand, the topographies of the BCG artifacts were defined based on the EEG recorded inside the scanner. The topographies of the BCG artifacts were then added to the surrogate model of IED sources and a combined source model was applied to the data obtained inside the scanner. The artifact signal was then subtracted without considerable distortion of the IED topography. The MSC approach was compared with the traditional averaged artifact subtraction (AAS) method. Both methods reduced the spectral power of BCG-related harmonics and enabled better detection of IEDs. Compared with the conventional AAS method, the MSC approach increased the sensitivity of IED detection because the IED signal was less attenuated when subtracting the BCG artifacts. The proposed MSC method is particularly useful in situations in which the BCG artifact is spatially correlated and time-locked with the EEG signal produced by the focal brain activity of interest.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Robert Y., E-mail: rx-tang@laurentian.ca; Laamanen, Curtis, E-mail: cx-laamanen@laurentian.ca; McDonald, Nancy, E-mail: mcdnancye@gmail.com
Purpose: To develop a method to subtract fat tissue contributions to wide-angle x-ray scatter (WAXS) signals of breast biopsies in order to estimate the differential linear scattering coefficients μ{sub s} of fatless tissue. Cancerous and fibroglandular tissue can then be compared independent of fat content. In this work, phantom materials with known compositions were used to test the efficacy of the WAXS subtraction model. Methods: Each sample, 5 mm in diameter and 5 mm thick, was interrogated by a 50 kV, 2.7 mm diameter beam for 3 min. A 25 mm{sup 2} by 1 mm thick CdTe detector allowed measurements of a portion of the θ = 6° scattered field. A scatter technique provided a means to estimate the incident spectrum N{sub 0}(E) needed in the calculations of μ{sub s}[x(E, θ)], where x is the momentum transfer argument. Values of μ{sup ¯}{sub s} for composite phantoms consisting of three plastic layers were estimated and compared to the values obtained via the sum μ{sup ¯}{sub s}{sup ∑}(x)=ν{sub 1}μ{sub s1}(x)+ν{sub 2}μ{sub s2}(x)+ν{sub 3}μ{sub s3}(x), where ν{sub i} is the fractional volume of the ith plastic component. Water, polystyrene, and a volume mixture of 0.6 water + 0.4 polystyrene labelled as fibphan were chosen to mimic cancer, fat, and fibroglandular tissue, respectively. A WAXS subtraction model was used to remove the polystyrene signal from tissue composite phantoms so that the μ{sub s} of water and fibphan could be estimated. Although the composite samples were layered, simulations were performed to test the models under nonlayered conditions. Results: The well-known μ{sub s} signal of water was reproduced effectively between 0.5 < x < 1.6 nm{sup −1}. The μ{sup ¯}{sub s} obtained for the heterogeneous samples agreed with μ{sup ¯}{sub s}{sup ∑}. Polystyrene signals were subtracted successfully from composite phantoms. The simulations validated the usefulness of the WAXS models for nonlayered biopsies.
Conclusions: The methodology to measure μ{sub s} of homogeneous samples was quantitatively accurate. Simple WAXS models predicted the probabilities for specific x-ray scattering to occur from heterogeneous biopsies. The fat subtraction model can allow μ{sub s} signals of breast cancer and fibroglandular tissue to be compared without the effects of fat, provided there is an independent measurement of the fat volume fraction ν{sub f}. Future work will consist of devising a quantitative x-ray digital imaging method to estimate ν{sub f} in ex vivo breast samples.
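The volume-weighted sum μ{sup ¯}{sub s}{sup ∑} and the fat subtraction described above amount to a weighted mixture rule and its inversion. A minimal numpy sketch of that reading of the model (function names and the renormalization by 1 − ν{sub fat} are our interpretation, not the authors' code):

```python
import numpy as np

def mixture_mu_s(volume_fractions, mu_s_components):
    """Volume-weighted sum of component scattering coefficients:
    mu_bar(x) = sum_i nu_i * mu_s_i(x), one row per component."""
    nu = np.asarray(volume_fractions, dtype=float)
    mu = np.asarray(mu_s_components, dtype=float)
    if not np.isclose(nu.sum(), 1.0):
        raise ValueError("volume fractions must sum to 1")
    return nu @ mu

def subtract_fat(mu_bar, nu_fat, mu_s_fat):
    """Remove the fat contribution and renormalize to the fat-free volume:
    mu_fatless(x) = (mu_bar(x) - nu_fat * mu_s_fat(x)) / (1 - nu_fat)."""
    return (np.asarray(mu_bar) - nu_fat * np.asarray(mu_s_fat)) / (1.0 - nu_fat)
```

For the two-component case (e.g. water mimicking cancer, polystyrene mimicking fat), subtracting the polystyrene term and renormalizing recovers the water coefficient exactly when the mixture rule holds.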
QCD Condensates and Holographic Wilson Loops for Asymptotically AdS Spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quevedo, R. Carcasses; Goity, Jose L.; Trinchero, Roberto C.
2014-02-01
The minimization of the Nambu-Goto (NG) action for a surface whose contour defines a circular Wilson loop of radius a placed at a finite value of the coordinate orthogonal to the border is considered. This is done for asymptotically AdS spaces. The condensates of dimension n = 2, 4, 6, 8, and 10 are calculated in terms of the coefficients in the expansion in powers of the radius a of the on-shell subtracted NG action for small a. The subtraction employed is such that it presents no conflict with conformal invariance in the AdS case and need not introduce an additional infrared scale for the case of confining geometries. It is shown that the UV value of the gluon condensates is universal in the sense that it only depends on the first coefficients of the difference with the AdS case.
VizieR Online Data Catalog: M33 SNR candidates properties (Lee+, 2014)
NASA Astrophysics Data System (ADS)
Lee, J. H.; Lee, M. G.
2017-04-01
We utilized the Hα and [S II] images in the LGGS to find new M33 remnants. The LGGS covered three 36' square fields of M33. We subtracted continuum sources from the narrowband images using R-band images. We smoothed the images with better seeing to match the point-spread function in the images with worse seeing, using the IRAF task psfmatch. We then scaled and subtracted the resulting continuum images from narrowband images. We selected M33 remnants considering three criteria: emission-line ratio ([S II]/Hα), the morphological structure, and the absence of blue stars inside the sources. Details are described in L14 (Lee et al. 2014ApJ...786..130L). We detected objects with [S II]/Hα>0.4 in emission-line ratio maps, and selected objects with round or shell structures in each narrowband image. As a result, we chose 435 sources. (2 data files).
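The continuum-removal and selection steps described above (scale the PSF-matched R-band image, subtract it from each narrowband image, then apply the [S II]/Hα > 0.4 cut) can be sketched as follows; the scaling factor and function names are illustrative, not from the catalogue's pipeline:

```python
import numpy as np

def continuum_subtract(narrowband, continuum, scale):
    """Subtract the scaled, PSF-matched continuum image so that only
    line emission remains; `scale` absorbs filter-width differences."""
    return narrowband - scale * continuum

def snr_candidate_mask(s2_line, ha_line, threshold=0.4):
    """Flag pixels whose [S II]/Halpha line ratio exceeds the SNR
    selection criterion; pixels with no Halpha signal are excluded."""
    s2 = np.asarray(s2_line, dtype=float)
    ha = np.asarray(ha_line, dtype=float)
    ratio = np.divide(s2, ha, out=np.zeros_like(s2), where=ha > 0)
    return ratio > threshold
```

The candidates would then still need the morphological checks (shell structure, absence of blue stars) applied to the masked regions.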
Trends in Computer-Aided Manufacturing in Prosthodontics: A Review of the Available Streams
Bennamoun, Mohammed
2014-01-01
In prosthodontics, conventional methods of fabrication of oral and facial prostheses have been considered the gold standard for many years. The development of computer-aided manufacturing and the medical application of this industrial technology have provided an alternative way of fabricating oral and facial prostheses. This narrative review aims to evaluate the different streams of computer-aided manufacturing in prosthodontics. To date, there are two streams: the subtractive and the additive approaches. The differences reside in the processing protocols, materials used, and their respective accuracy. In general, there is a tendency for the subtractive method to provide more homogeneous objects with acceptable accuracy that may be more suitable for the production of intraoral prostheses where high occlusal forces are anticipated. Additive manufacturing methods have the ability to produce large workpieces with significant surface variation and competitive accuracy. Such advantages make them ideal for the fabrication of facial prostheses. PMID:24817888
NASA Astrophysics Data System (ADS)
Asavanant, Warit; Nakashima, Kota; Shiozawa, Yu; Yoshikawa, Jun-Ichi; Furusawa, Akira
2017-12-01
Until now, Schrödinger's cat states have been generated by subtracting single photons from the whole bandwidth of squeezed vacua. However, it was pointed out recently that the achievable purities are limited in such a method (J. Yoshikawa, W. Asavanant, and A. Furusawa, arXiv:1707.08146 [quant-ph] (2017)). In this paper, we used our new photon subtraction method with a narrowband filtering cavity and generated a highly pure Schrödinger's cat state with the value of $-0.184$ at the origin of the Wigner function. To our knowledge, this is the highest value ever reported without any loss corrections. The temporal mode also becomes exponentially rising in our method, which allows us to make a real-time quadrature measurement on Schrödinger's cat states, and we obtained the value of $-0.162$ at the origin of the Wigner function.
Hawkins, Keith A; Cromer, Jennifer R; Piotrowski, Andrea S; Pearlson, Godfrey D
2011-11-01
The Mini-Mental State Exam (MMSE) is a clinically ubiquitous yet incompletely standardized instrument. Though the test offers considerable examiner leeway, little data exist on the normative consequences of common administration variations. We sought to: (a) determine the effects of education, age, gender, health status, and a common administration variation (serial 7s subtraction vs. "world" spelled backward) on MMSE score within a minority sample, (b) provide normative data stratified on the most empirically relevant bases, and (c) briefly address item failure rates. African American citizens (N = 298) aged 55-87 living independently in the community were recruited by advertisement, community recruitment, and word of mouth. Total score with "world" spelled backward exceeded total score with serial 7s subtraction across all levels of education, replicating findings in Caucasian samples. Education is the primary source of variance on MMSE score, followed by age. In this cohort, women out-performed men when "world" spelled backward was included, but there was no gender effect when serial 7s subtraction was included in MMSE total score. To ensure an appropriate interpretation of MMSE scores, reports, whether clinical or in publications of research findings, should be explicit regarding the administration method. Stratified normative data are provided.
NASA Astrophysics Data System (ADS)
Fuadiah, N. F.; Suryadi, D.; Turmudi
2018-05-01
This study focuses on the design of a didactical situation for addition and subtraction involving negative integers at the pilot experiment phase. As we know, negative numbers are an obstacle for students in solving problems that involve them. This study aims to create a didactical design that can assist students in understanding the addition and subtraction of negative integers. Another expected result is that students are introduced to the characteristics of addition and subtraction of integers. The design was implemented with 32 seventh-grade students in one class of a junior secondary school as the pilot experiment. Learning activities were observed thoroughly, including the students' responses that emerged during the activities. The students' written work was also used to support the analysis of the learning activities. The results of the analysis showed that this method could help the students perform a large number of integer operations that could not be done with a number line. The teacher's support, as part of the didactical contract, was still needed to encourage the institutionalization process. The results of the design analysis, used as the basis of the revision, are expected to be implemented by the teacher in the teaching experiment.
NASA Astrophysics Data System (ADS)
Somogyi, Gábor; Trócsányi, Zoltán
2008-08-01
In previous articles we outlined a subtraction scheme for regularizing doubly-real emission and real-virtual emission in next-to-next-to-leading order (NNLO) calculations of jet cross sections in electron-positron annihilation. In order to find the NNLO correction these subtraction terms have to be integrated over the factorized unresolved phase space and combined with the two-loop corrections. In this paper we perform the integration of all one-parton unresolved subtraction terms.
High-resolution Observations of Hα Spectra with a Subtractive Double Pass
NASA Astrophysics Data System (ADS)
Beck, C.; Rezaei, R.; Choudhary, D. P.; Gosain, S.; Tritschler, A.; Louis, R. E.
2018-02-01
High-resolution imaging spectroscopy in solar physics has relied on Fabry-Pérot interferometers (FPIs) in recent years. FPI systems, however, become technically challenging and expensive for telescopes larger than the 1 m class. A conventional slit spectrograph with a diffraction-limited performance over a large field of view (FOV) can be built at much lower cost and effort. It can be converted into an imaging spectro(polari)meter using the concept of a subtractive double pass (SDP). We demonstrate that an SDP system can reach a performance similar to that of FPI-based systems, with a high spatial and moderate spectral resolution across a FOV of 100″ × 100″ with a spectral coverage of 1 nm. We use Hα spectra taken with an SDP system at the Dunn Solar Telescope and complementary full-disc data to infer the properties of small-scale superpenumbral filaments. We find that the majority of all filaments end in patches of opposite-polarity fields. The internal fine-structure in the line-core intensity of Hα at spatial scales of about 0.5″ exceeds that in other parameters such as the line width, indicating small-scale opacity effects in a larger-scale structure with common properties. We conclude that SDP systems in combination with (multi-conjugate) adaptive optics are a valid alternative to FPI systems when high spatial resolution and a large FOV are required. They can also reach a cadence that is comparable to that of FPI systems, while providing a much larger spectral range and a simultaneous multi-line capability.
Subtraction of Positive and Negative Numbers: The Difference and Completion Approaches with Chips
ERIC Educational Resources Information Center
Flores, Alfinio
2008-01-01
Diverse contexts such as "take away," "comparison," and "completion" give rise to subtraction problems. The take-away interpretation of subtraction has been explored using two-colored chips to help students understand addition and subtraction of integers. This article illustrates how the difference and completion (or missing addend) interpretations…
Preschoolers' Understanding of Subtraction-Related Principles
ERIC Educational Resources Information Center
Baroody, Arthur J.; Lai, Meng-lung; Li, Xia; Baroody, Alison E.
2009-01-01
Little research has focused on an informal understanding of subtractive negation (e.g., 3 - 3 = 0) and subtractive identity (e.g., 3 - 0 = 3). Previous research indicates that preschoolers may have a fragile (i.e., unreliable or localized) understanding of the addition-subtraction inverse principle (e.g., 2 + 1 - 1 = 2). Recognition of a small…
Xu, De-Quan; Zhang, Yi-Bing; Xiong, Yuan-Zhu; Gui, Jian-Fang; Jiang, Si-Wen; Su, Yu-Hong
2003-07-01
Using the suppression subtractive hybridization (SSH) technique, forward and reverse subtracted cDNA libraries were constructed between Longissimus muscles from Meishan and Landrace pigs. A housekeeping gene, G3PDH, was used to estimate the efficiency of the subtraction. In the two cDNA libraries, G3PDH was subtracted very efficiently, at approximately 2^10- and 2^5-fold, respectively, indicating that some differentially expressed genes were also enriched at the same folds and that the two subtractive cDNA libraries were very successful. A total of 709 and 673 positive clones were isolated from the forward and reverse subtracted cDNA libraries, respectively. PCR analysis showed that most plasmids in the clones contained 150-750 bp inserts. The construction of subtractive cDNA libraries between muscle tissue from different pig breeds lays a solid foundation for isolating and identifying the genes determining muscle growth and meat quality, which will be important for understanding the mechanism of muscle growth, the determination of meat quality, and the practice of molecular breeding.
NASA Astrophysics Data System (ADS)
Yao, Rutao; Ma, Tianyu; Shao, Yiping
2008-08-01
This work is part of a feasibility study to develop SPECT imaging capability on a lutetium oxyorthosilicate (LSO) based animal PET system. The SPECT acquisition was enabled by inserting a collimator assembly inside the detector ring and acquiring data in singles mode. The same LSO detectors were used for both PET and SPECT imaging. The intrinsic radioactivity of 176Lu in the LSO crystals, however, contaminates the SPECT data, and can generate image artifacts and introduce quantification error. The objectives of this study were to evaluate the effectiveness of an LSO background subtraction method, and to estimate the minimal detectable target activity (MDTA) of the image object for SPECT imaging. For LSO background correction, the LSO contribution in an image study was estimated from a pre-measured long LSO background scan and subtracted prior to the image reconstruction. The MDTA was estimated in two ways. The empirical MDTA (eMDTA) was estimated by screening the tomographic images at different activity levels. The calculated MDTA (cMDTA) was estimated using a formula based on applying a modified Currie equation to an average projection dataset. Two simulated and two experimental phantoms with different object activity distributions and levels were used in this study. The results showed that the LSO background adds concentric ring artifacts to the reconstructed image, and the simple subtraction method can effectively remove these artifacts—the effect of the correction was more visible when the object activity level was near or above the eMDTA. For the four phantoms studied, the cMDTA was consistently about five times the corresponding eMDTA. In summary, we implemented a simple LSO background subtraction method and demonstrated its effectiveness. The projection-based calculation formula yielded MDTA results that closely correlate with those obtained empirically and may have predictive value for imaging applications.
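The correction described above is a scaled subtraction of a pre-measured background rate from the projection data. A schematic numpy version, with the classic (unmodified) Currie expression standing in for the paper's modified form; names and the clipping choice are ours:

```python
import numpy as np

def subtract_lso_background(projection, background_rate, scan_seconds):
    """Subtract the intrinsic 176Lu background, estimated from a long
    reference scan as counts/s per projection bin and scaled to this
    scan's duration; clip so bins cannot go negative."""
    return np.clip(projection - background_rate * scan_seconds, 0.0, None)

def currie_detection_limit(background_counts):
    """Classic Currie minimum detectable counts above a known
    background (the paper applies a modified form of this equation)."""
    return 2.71 + 4.65 * np.sqrt(background_counts)
```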
Additive Manufacturing of Metal Structures at the Micrometer Scale.
Hirt, Luca; Reiser, Alain; Spolenak, Ralph; Zambelli, Tomaso
2017-05-01
Currently, the focus of additive manufacturing (AM) is shifting from simple prototyping to actual production. One driving factor of this process is the ability of AM to build geometries that are not accessible by subtractive fabrication techniques. While these techniques often call for a geometry that is easiest to manufacture, AM enables the geometry required for best performance to be built by freeing the design process from restrictions imposed by traditional machining. At the micrometer scale, the design limitations of standard fabrication techniques are even more severe. Microscale AM thus holds great potential, as confirmed by the rapid success of commercial micro-stereolithography tools as an enabling technology for a broad range of scientific applications. For metals, however, there is still no established AM solution at small scales. To tackle the limited resolution of standard metal AM methods (a few tens of micrometers at best), various new techniques aimed at the micrometer scale and below are presently under development. Here, we review these recent efforts. Specifically, we feature the techniques of direct ink writing, electrohydrodynamic printing, laser-assisted electrophoretic deposition, laser-induced forward transfer, local electroplating methods, laser-induced photoreduction and focused electron or ion beam induced deposition. Although these methods have proven to facilitate the AM of metals with feature sizes in the range of 0.1-10 µm, they are still in a prototype stage and their potential is not fully explored yet. For instance, comprehensive studies of material availability and material properties are often lacking, yet compulsory for actual applications. We address these items while critically discussing and comparing the potential of current microscale metal AM techniques. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Efficient thermal noise removal of Sentinel-1 image and its impacts on sea ice applications
NASA Astrophysics Data System (ADS)
Park, Jeong-Won; Korosov, Anton; Babiker, Mohamed
2017-04-01
Wide-swath SAR observations from several spaceborne SAR missions have played an important role in studying sea ice in the polar regions. Sentinel-1A and 1B are producing dual-polarization observation data with the highest temporal resolution ever. For proper use of such dense time series, the radiometric properties must be well characterized. Thermal noise is often neglected in many sea ice applications, but it seriously impacts the utility of dual-polarization SAR data. Sentinel-1 TOPSAR image intensity is disturbed by additive thermal noise, particularly in the cross-polarization channel. Although ESA provides calibrated noise vectors for noise power subtraction, the residual noise contribution is significant given the relatively narrow backscattering distribution of the cross-polarization channel. In this study, we investigate the noise characteristics and propose an efficient method for noise reduction based on three types of correction: azimuth de-scalloping, noise scaling, and inter-swath power balancing. The core idea is to find optimum correction coefficients resulting in the most noise-uncorrelated, gentle backscatter profile over homogeneous regions and to combine them with the scalloping gain to reconstruct the complete two-dimensional noise field. Denoising is accomplished by subtracting the reconstructed noise field from the original image. The correction coefficients determined by extensive experiments showed different noise characteristics for different Instrument Processing Facility (IPF) versions of Level 1 product generation. Even after thermal noise subtraction, the image still suffers from residual noise, which distorts local statistics. Since this residual noise depends on the local signal-to-noise ratio, it can be compensated by variance normalization with coefficients determined from an empirical model. Denoising improved not only visual interpretability but also performance in SAR intensity-based sea ice applications.
Results from two applications showed the effectiveness of the proposed method: feature-tracking-based sea ice drift and texture-analysis-based sea ice classification. For feature tracking, the large spatial asymmetry of the keypoint distribution caused by the higher noise level in the nearest subswath was decreased, so that matched features were selected evenly in space. For texture analysis, inter-subswath texture differences caused by different noise-equivalent sigma zero were normalized, so that the texture features estimated in any subswath have values similar to those in the other subswaths.
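The three corrections above combine into a reconstructed two-dimensional noise field that is subtracted from the image. A schematic numpy sketch of that reconstruction (the per-subswath `scale` and `balance` coefficients would come from the optimization the abstract describes; names are ours):

```python
import numpy as np

def reconstruct_noise_field(range_noise_vector, descalloping_gain,
                            scale=1.0, balance=0.0):
    """Build a 2-D additive-noise estimate: the annotated range noise
    vector modulated in azimuth by the de-scalloping gain, then scaled
    and offset with per-subswath coefficients."""
    noise2d = np.outer(descalloping_gain, range_noise_vector)  # az x rg
    return scale * noise2d + balance

def denoise_intensity(intensity, noise_field):
    """Subtract the reconstructed noise power; clip negative intensities."""
    return np.clip(intensity - noise_field, 0.0, None)
```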
Renormalization of quark propagators from twisted-mass lattice QCD at N{sub f}=2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blossier, B.; Boucaud, Ph.; Pene, O.
2011-04-01
We present results concerning the nonperturbative evaluation of the renormalization constant for the quark field, Z{sub q}, from lattice simulations with twisted-mass quarks and three values of the lattice spacing. We use the regularization-invariant momentum-subtraction (RI'-MOM) scheme. Z{sub q} has very large lattice-spacing artefacts; it is considered here as a test bed to elaborate accurate methods which will be used for other renormalization constants. We recall and develop the nonperturbative correction methods and propose tools to test the quality of the correction. These tests are also applied to the perturbative correction method. We check that the lattice-spacing artefacts indeed scale as a{sup 2}p{sup 2}. We then study the running of Z{sub q} with particular attention to the nonperturbative effects, presumably dominated by the dimension-two gluon condensate ⟨A{sup 2}⟩ in Landau gauge. We show indeed that this effect is present, and not small. We check its scaling in physical units, confirming that it is a continuum effect. It gives a {approx}4% contribution at 2 GeV. Different variants are used in order to test the reliability of our result and estimate the systematic uncertainties. Finally, combining all our results and using the known Wilson coefficient of ⟨A{sup 2}⟩, we find g{sup 2}({mu}{sup 2})⟨A{sup 2}⟩{sub {mu},CM}=2.01(11)({sub -0.73}{sup +0.61}) GeV{sup 2} at {mu}=10 GeV, the local operator A{sup 2} being renormalized in the MS scheme. This last result is in fair agreement within uncertainties with the value independently extracted from the strong coupling constant. We convert the nonperturbative part of Z{sub q} from the RI'-MOM scheme to MS.
Our result for the quark field renormalization constant in the MS scheme is Z{sub q}{sup MS pert}((2 GeV){sup 2},g{sub bare}{sup 2})=0.750(3)(7)-0.313(20)(g{sub bare}{sup 2}-1.5) for the perturbative contribution and Z{sub q}{sup MS nonperturbative}((2 GeV){sup 2},g{sub bare}{sup 2})=0.781(6)(21)-0.313(20)(g{sub bare}{sup 2}-1.5) when the nonperturbative contribution is included.
Method and apparatus for reducing range ambiguity in synthetic aperture radar
Kare, Jordin T.
1999-10-26
A modified Synthetic Aperture Radar (SAR) system with reduced sensitivity to range ambiguities, which uses secondary receiver channels to detect the range-ambiguous signals and subtract them from the signal received by the main channel. Both desired and range-ambiguous signals are detected by a main receiver and by one or more identical secondary receivers. All receivers are connected to a common antenna with two or more feed systems offset in elevation (e.g., a reflector antenna with multiple feed horns, or a phased array with multiple phase-shift networks). The secondary receiver output(s) is (are) then subtracted from the main receiver output in such a way as to cancel the ambiguous signals while only slightly attenuating the desired signal and slightly increasing the noise in the main channel, so the desired signal is not significantly affected. This subtraction may be done in real time, or the outputs of the receivers may be recorded separately and combined during signal processing.
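The cancellation reduces to a weighted channel subtraction: if the main channel sees the desired signal s plus the ambiguous return a with gain g_m, and the secondary channel sees it with gain g_s (plus a small leakage ε of s), choosing the weight g_m/g_s nulls the ambiguity. A toy sketch; the gains and leakage values are illustrative, not from the patent:

```python
import numpy as np

def cancel_range_ambiguity(main, secondary, weight):
    """Subtract the weighted secondary-channel output from the main
    channel; `weight` is the ratio of the ambiguous return's gain in
    the main feed to its gain in the secondary feed, so the ambiguous
    signal cancels while the desired signal is only slightly attenuated."""
    return main - weight * secondary
```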
K-edge subtraction synchrotron X-ray imaging in bio-medical research.
Thomlinson, W; Elleaume, H; Porra, L; Suortti, P
2018-05-01
High contrast in X-ray medical imaging, while maintaining acceptable radiation dose levels to the patient, has long been a goal. One of the most promising methods is that of K-edge subtraction imaging. This technique, first advanced as long ago as 1953 by B. Jacobson, uses the large difference in the absorption coefficient of elements at energies above and below the K-edge. Two images, one taken above the edge and one below the edge, are subtracted leaving, ideally, only the image of the distribution of the target element. This paper reviews the development of the KES techniques and technology as applied to bio-medical imaging from the early low-power tube sources of X-rays to the latest high-power synchrotron sources. Applications to coronary angiography, functional lung imaging and bone growth are highlighted. A vision of possible imaging with new compact sources is presented. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
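The two-energy subtraction at the heart of KES can be written in a few lines: tissue attenuation varies smoothly across the K-edge and largely cancels in the log difference, while the contrast element's attenuation jumps. A generic sketch, not tied to any particular system:

```python
import numpy as np

def kes_signal(i_below, i_above):
    """Logarithmic K-edge subtraction of the two transmitted images;
    the smooth tissue term cancels, leaving a signal proportional to
    the projected thickness of the K-edge element."""
    return np.log(i_below) - np.log(i_above)

def element_thickness(i_below, i_above, mu_above, mu_below):
    """Projected element thickness from the jump in its linear
    attenuation coefficient across the edge."""
    return kes_signal(i_below, i_above) / (mu_above - mu_below)
```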
ERIC Educational Resources Information Center
Dixon, Juli K.; Andreasen, Janet B.; Avila, Cheryl L.; Bawatneh, Zyad; Deichert, Deana L.; Howse, Tashana D.; Turner, Mercedes Sotillo
2014-01-01
A goal of this study was to examine elementary preservice teachers' (PSTs) ability to contextualize and decontextualize fraction subtraction by asking them to write word problems to represent fraction subtraction expressions and to choose prewritten word problems to support given fraction subtraction expressions. Three themes emerged from the…
ERIC Educational Resources Information Center
Canobi, Katherine H.; Bethune, Narelle E.
2008-01-01
Three studies addressed children's arithmetic. First, 50 3- to 5-year-olds judged physical demonstrations of addition, subtraction and inversion, with and without number words. Second, 20 3- to 4-year-olds made equivalence judgments of additions and subtractions. Third, 60 4- to 6-year-olds solved addition, subtraction and inversion problems that…
Cloud characterization and clear-sky correction from Landsat-7
Cahalan, Robert F.; Oreopoulos, L.; Wen, G.; Marshak, S.; Tsay, S. -C.; DeFelice, Tom
2001-01-01
Landsat, with its wide swath and high resolution, fills an important mesoscale gap between atmospheric variations seen on a few kilometer scale by local surface instrumentation and the global view of coarser resolution satellites such as MODIS. In this important scale range, Landsat reveals radiative effects on the few hundred-meter scale of common photon mean-free-paths, typical of scattering in clouds at conservative (visible) wavelengths, and even shorter mean-free-paths of absorptive (near-infrared) wavelengths. Landsat also reveals shadowing effects caused by both cloud and vegetation that impact both cloudy and clear-sky radiances. As a result, Landsat has been useful in development of new cloud retrieval methods and new aerosol and surface retrievals that account for photon diffusion and shadowing effects. This paper discusses two new cloud retrieval methods: the nonlocal independent pixel approximation (NIPA) and the normalized difference nadir radiance method (NDNR). We illustrate the improvements in cloud property retrieval enabled by the new low gain settings of Landsat-7 and difficulties found at high gains. Then, we review the recently developed “path radiance” method of aerosol retrieval and clear-sky correction using data from the Department of Energy Atmospheric Radiation Measurement (ARM) site in Oklahoma. Nearby clouds change the solar radiation incident on the surface and atmosphere due to indirect illumination from cloud sides. As a result, if clouds are nearby, this extra side-illumination causes clear pixels to appear brighter, which can be mistaken for extra aerosol or higher surface albedo. Thus, cloud properties must be known in order to derive accurate aerosol and surface properties. A three-dimensional (3D) Monte Carlo (MC) radiative transfer simulation illustrates this point and suggests a method to subtract the cloud effect from aerosol and surface retrievals. 
The main conclusion is that cloud, aerosol, and surface retrievals are linked and must be treated as a combined system. Landsat provides the range of scales necessary to observe the 3D cloud radiative effects that influence joint surface-atmospheric retrievals.
NASA Astrophysics Data System (ADS)
Fredenberg, Erik; Cederström, Björn; Lundqvist, Mats; Ribbing, Carolina; Åslund, Magnus; Diekmann, Felix; Nishikawa, Robert; Danielsson, Mats
2008-03-01
Dual-energy subtraction imaging (DES) is a method to improve the detectability of contrast agents over a lumpy background. Two images, acquired at x-ray energies above and below an absorption edge of the agent material, are logarithmically subtracted, resulting in suppression of the signal from the tissue background and a relative enhancement of the signal from the agent. Although promising, DES is still not widely used in clinical practice. One reason may be the need for two distinctly separated x-ray spectra that are still close to the absorption edge, realized through dual exposures which may introduce motion unsharpness. In this study, electronic spectrum-splitting with a silicon-strip detector is theoretically and experimentally investigated for a mammography model with iodinated contrast agent. Comparisons are made to absorption imaging and a near-ideal detector using a signal-to-noise ratio that includes both statistical and structural noise. Similar to previous studies, heavy absorption filtration was needed to narrow the spectra at the expense of a large reduction in x-ray flux. Therefore, potential improvements using a chromatic multi-prism x-ray lens (MPL) for filtering were evaluated theoretically. The MPL offers a narrow tunable spectrum, and we show that the image quality can be improved compared to conventional filtering methods.
Intensity Mapping Foreground Cleaning with Generalized Needlet Internal Linear Combination
NASA Astrophysics Data System (ADS)
Olivari, L. C.; Remazeilles, M.; Dickinson, C.
2018-05-01
Intensity mapping (IM) is a new observational technique to survey the large-scale structure of matter using spectral emission lines. IM observations are contaminated by instrumental noise and astrophysical foregrounds. The foregrounds are at least three orders of magnitude larger than the signals being sought. In this work, we apply the Generalized Needlet Internal Linear Combination (GNILC) method to subtract radio foregrounds and to recover the cosmological HI and CO signals within the IM context. For the HI IM case, we find that GNILC can reconstruct the HI plus noise power spectra with 7.0% accuracy for z = 0.13-0.48 (960-1260 MHz) and ℓ ≲ 400, while for the CO IM case, we find that it can reconstruct the CO plus noise power spectra with 6.7% accuracy for z = 2.4-3.4 (26-34 GHz) and ℓ ≲ 3000.
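GNILC builds on the internal linear combination: minimum-variance weights across frequency channels, constrained to unit response to the signal's frequency scaling a. The needlet-domain localization and the PCA-based estimation of the foreground dimension that distinguish GNILC are omitted in this core sketch (names are ours):

```python
import numpy as np

def ilc_weights(channel_cov, mixing):
    """Minimum-variance weights with unit response to the signal's
    frequency scaling vector a:  w = C^-1 a / (a^T C^-1 a)."""
    cinv_a = np.linalg.solve(channel_cov, mixing)
    return cinv_a / (mixing @ cinv_a)

def ilc_clean(channel_maps, mixing):
    """Apply the ILC across channels (rows = frequency channels,
    columns = pixels) using the empirical channel covariance."""
    cov = np.cov(channel_maps)
    return ilc_weights(cov, mixing) @ channel_maps
```

The unit-response constraint guarantees the cosmological signal passes through unchanged while the variance contributed by foregrounds and noise is minimized.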
Heusermann, Wolf; Ludin, Beat; Pham, Nhan T; Auer, Manfred; Weidemann, Thomas; Hintersteiner, Martin
2016-05-09
The increasing involvement of academic institutions and biotech companies in drug discovery calls for cost-effective methods to identify new bioactive molecules. Affinity-based on-bead screening of combinatorial one-bead one-compound libraries combines a split-mix synthesis design with a simple protein binding assay operating directly at the bead matrix. However, one bottleneck for academic scale on-bead screening is the unavailability of a cheap, automated, and robust screening platform that still provides a quantitative signal related to the amount of target protein binding to individual beads for hit bead ranking. Wide-field fluorescence microscopy has long been considered unsuitable due to significant broad spectrum autofluorescence of the library beads in conjunction with low detection sensitivity. Herein, we demonstrate how such a standard microscope equipped with LED-based excitation and a modern CMOS camera can be successfully used for selecting hit beads. We show that the autofluorescence issue can be overcome by an optical image subtraction approach that yields excellent signal-to-noise ratios for the detection of bead-associated target proteins. A polymer capillary attached to a semiautomated bead-picking device allows the operator to efficiently isolate individual hit beads in less than 20 s. The system can be used for ultrafast screening of >200,000 bead-bound compounds in 1.5 h, thereby making high-throughput screening accessible to a wider group within the scientific community.
NASA Technical Reports Server (NTRS)
Dittmer, P. H.; Scherrer, P. H.; Wilcox, J. M.
1978-01-01
The large-scale solar velocity field has been measured over an aperture of radius 0.8 solar radii on 121 days between April and September, 1976. Measurements are made in the Fe I 5123.730 Å line, employing a velocity subtraction technique similar to that of Severny et al. (1976). Comparisons of the amplitude and frequency of the five-minute resonant oscillation with the geomagnetic C9 index and magnetic sector boundaries show no evidence of any relationship between the oscillations and coronal holes or sector structure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Robert Y., E-mail: rx-tang@laurentian.ca; McDonald, Nancy, E-mail: mcdnancye@gmail.com; Laamanen, Curtis, E-mail: cx-laamanen@laurentian.ca
Purpose: To develop a method to estimate the mean fractional volume of fat (ν{sup ¯}{sub fat}) within a region of interest (ROI) of a tissue sample for wide-angle x-ray scatter (WAXS) applications. A scatter signal from the ROI was obtained and use of ν{sup ¯}{sub fat} in a WAXS fat subtraction model provided a way to estimate the differential linear scattering coefficient μ{sub s} of the remaining fatless tissue. Methods: The efficacy of the method was tested using animal tissue from a local butcher shop. Formalin fixed samples, 5 mm in diameter 4 mm thick, were prepared. The two mainmore » tissue types were fat and meat (fibrous). Pure as well as composite samples consisting of a mixture of the two tissue types were analyzed. For the latter samples, ν{sub fat} for the tissue columns of interest were extracted from corresponding pixels in CCD digital x-ray images using a calibration curve. The means ν{sup ¯}{sub fat} were then calculated for use in a WAXS fat subtraction model. For the WAXS measurements, the samples were interrogated with a 2.7 mm diameter 50 kV beam and the 6° scattered photons were detected with a CdTe detector subtending a solid angle of 7.75 × 10{sup −5} sr. Using the scatter spectrum, an estimate of the incident spectrum, and a scatter model, μ{sub s} was determined for the tissue in the ROI. For the composite samples, a WAXS fat subtraction model was used to estimate the μ{sub s} of the fibrous tissue in the ROI. This signal was compared to μ{sub s} of fibrous tissue obtained using a pure fibrous sample. Results: For chicken and beef composites, ν{sup ¯}{sub fat}=0.33±0.05 and 0.32 ± 0.05, respectively. The subtractions of these fat components from the WAXS composite signals provided estimates of μ{sub s} for chicken and beef fibrous tissue. The differences between the estimates and μ{sub s} of fibrous obtained with a pure sample were calculated as a function of the momentum transfer x. 
A t-test showed that the mean of the differences did not differ from zero in a statistically significant way, thereby validating the method. Conclusions: The methodology to estimate ν̄_fat in an ROI of a tissue sample via CCD x-ray imaging was quantitatively accurate. The WAXS fat subtraction model allowed the μ_s of fibrous tissue to be obtained from an ROI containing some fat. The fat estimation method coupled with the WAXS models can be used to compare the μ_s coefficients of fibroglandular and cancerous breast tissue.
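Under a linear mixture assumption, the fat subtraction described above reduces to simple algebra on the measured coefficients. A minimal sketch (the linear mixture form, the function name, and all numbers are illustrative assumptions, not the paper's exact model):

```python
import numpy as np

def subtract_fat(mu_composite, mu_fat, v_fat):
    """Estimate mu_s of the fat-free (fibrous) tissue from a composite signal.

    Assumes a linear mixture model over the momentum-transfer grid:
        mu_composite = v_fat * mu_fat + (1 - v_fat) * mu_fibrous
    and solves for mu_fibrous.
    """
    mu_composite = np.asarray(mu_composite, dtype=float)
    mu_fat = np.asarray(mu_fat, dtype=float)
    return (mu_composite - v_fat * mu_fat) / (1.0 - v_fat)
```

For example, with ν̄_fat = 0.25 and a flat fat coefficient of 1.0, a composite value of 1.75 recovers a fibrous value of 2.0.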
NASA Astrophysics Data System (ADS)
Fang, M.; Hager, B. H.
2014-12-01
In geophysical applications the boundary element method (BEM) often carries the essential physics in addition to being an efficient numerical scheme. For use of the BEM in a self-gravitating uniform half-space, we made an extra effort and succeeded in deriving the fundamental solution analytically in closed form. A problem that goes to the heart of the classic BEM is encountered when we try to apply the new fundamental solution in the BEM to the deformation field induced by a magma chamber or a fluid-filled reservoir. The central issue of the BEM is the singular integral arising from determination of the boundary values. A widely employed technique is to rescale the singular boundary point into a small finite volume and then shrink it to extract the limits. This operation boils down to the calculation of the so-called C-matrix. Authors in the past have taken the liberty of either adding or subtracting a small volume. By subtracting a small volume, the C-matrix is (1/2)I on a smooth surface, where I is the identity matrix; by adding a small volume, we arrive at the same C-matrix in the form of I - (1/2)I. This evenness is a result of the spherical symmetry of Kelvin's fundamental solution. When the spherical symmetry is broken by gravity, the C-matrix is polarized, and we face a choice between right and wrong, for adding and subtracting a small volume yield different C-matrices. Close examination reveals that both derivations, addition and subtraction of a small volume, are ad hoc. To resolve the issue we revisit the Somigliana identity with a new derivation and a careful step-by-step anatomy. The result proves that even though both adding and subtracting a small volume appear to twist the original boundary, only addition essentially modifies the original boundary, and consequently modifies the physics of the original problem in a subtle way. The correct procedure is subtraction.
We complete a new BEM theory by introducing in full analytical form what we call the singular stress tensor for the fundamental solution. We partition the stress tensor of the fundamental solution into a singular part and a regular part. In this way all singular integrals systematically shift into the easy singular stress tensor. Applications of this new BEM to deformation and gravitational perturbation induced by magma chambers of finite volume will be presented.
Balk, Benjamin; Elder, Kelly
2000-01-01
We model the spatial distribution of snow across a mountain basin using an approach that combines binary decision tree and geostatistical techniques. In April 1997 and 1998, intensive snow surveys were conducted in the 6.9‐km2 Loch Vale watershed (LVWS), Rocky Mountain National Park, Colorado. Binary decision trees were used to model the large‐scale variations in snow depth, while the small‐scale variations were modeled through kriging interpolation methods. Binary decision trees related depth to the physically based independent variables of net solar radiation, elevation, slope, and vegetation cover type. These decision tree models explained 54–65% of the observed variance in the depth measurements. The tree‐based modeled depths were then subtracted from the measured depths, and the resulting residuals were spatially distributed across LVWS through kriging techniques. The kriged estimates of the residuals were added to the tree‐based modeled depths to produce a combined depth model. The combined depth estimates explained 60–85% of the variance in the measured depths. Snow densities were mapped across LVWS using regression analysis. Snow‐covered area was determined from high‐resolution aerial photographs. Combining the modeled depths and densities with a snow cover map produced estimates of the spatial distribution of snow water equivalence (SWE). This modeling approach offers improvement over previous methods of estimating SWE distribution in mountain basins.
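The two-stage scheme above, a tree model for the large-scale structure plus interpolated residuals for the small-scale structure, can be sketched as follows. Inverse-distance weighting stands in for kriging, and all function names and inputs are hypothetical:

```python
import numpy as np

def idw(xy_known, resid, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance weighting of residuals: a simple stand-in for kriging."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w * resid).sum(axis=1) / w.sum(axis=1)

def combined_depth(tree_pred_at_obs, obs_depth, xy_obs, tree_pred_grid, xy_grid):
    """Two-stage estimate: the tree model captures large-scale variation,
    and spatially interpolated residuals add back the small-scale variation."""
    resid = obs_depth - tree_pred_at_obs          # what the tree model missed
    return tree_pred_grid + idw(xy_obs, resid, xy_grid)
```

If the tree model underpredicts every survey point by the same amount, the interpolated residual field simply shifts the gridded prediction by that amount.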
Brain Activation during Addition and Subtraction Tasks In-Noise and In-Quiet
Abd Hamid, Aini Ismafairus; Yusoff, Ahmad Nazlim; Mukari, Siti Zamratol-Mai Sarah; Mohamad, Mazlyfarina
2011-01-01
Background: In spite of extensive research conducted to study how the human brain works, little is known about a special function of the brain that stores and manipulates information—the working memory—and how noise influences this special ability. In this study, functional magnetic resonance imaging (fMRI) was used to investigate brain responses to arithmetic problems solved in noisy and quiet backgrounds. Methods: Eighteen healthy young males performed simple arithmetic operations of addition and subtraction in in-quiet and in-noise backgrounds. The MATLAB-based Statistical Parametric Mapping (SPM8) was implemented on the fMRI datasets to generate and analyse the activated brain regions. Results: Group results showed that addition and subtraction operations evoked extended activation in the left inferior parietal lobe, left precentral gyrus, left superior parietal lobe, left supramarginal gyrus, and left middle temporal gyrus. This supported the hypothesis that the human brain activates its left hemisphere more than the right hemisphere when solving arithmetic problems. The insula, middle cingulate cortex, and middle frontal gyrus, however, showed more extended right hemispheric activation, potentially due to the involvement of attention, executive processes, and working memory. For addition operations, there was extensive left hemispheric activation in the superior temporal gyrus, inferior frontal gyrus, and thalamus. In contrast, subtraction tasks evoked a greater activation of similar brain structures in the right hemisphere. For both addition and subtraction operations, the total number of activated voxels was higher for in-noise than in-quiet conditions. Conclusion: These findings suggest that when arithmetic operations were delivered auditorily, the auditory, attention, and working memory functions were required to accomplish the executive processing of the mathematical calculation.
The respective brain activation patterns appear to be modulated by the noisy background condition. PMID:22135581
A new PIC noise reduction technique
NASA Astrophysics Data System (ADS)
Barnes, D. C.
2014-10-01
Numerical solution of the Vlasov equation is considered in a general situation in which there is an underlying static solution (equilibrium). There are no further assumptions about dimensionality, smallness of orbits, or disparate time scales. The semi-characteristic (SC) method for Vlasov solution is described. The usual characteristics of the equation, which are the single-particle orbits, are modified in such a way that the equilibrium phase-space flow is removed. In this way, the shot noise introduced by the usual discrete particle representation of the equilibrium is static in time and can be removed completely by subtraction. An almost exact algorithm for this is based on the observation that an (infinitesimal or) discrete time step of any equilibrium MC realization is again a realization of the equilibrium, building up strings of associated simulation particles. In this way, the only added discretization error arises from the need to extrapolate the chain end points backward in time by one time step dt using a canonical transformation. Previously developed energy-conserving time-implicit methods are applied without modification. 1D electrostatic (ES) examples of Landau damping and velocity-space instability are given to illustrate the method.
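The central idea, that shot noise from a discrete equilibrium representation is static and therefore cancels exactly under subtraction, can be illustrated with a toy deposition step. This sketches only the subtraction idea, not the SC method itself; the nearest-grid-point weighting and the imposed perturbation are assumptions:

```python
import numpy as np

def deposit(x, grid_n, length):
    """Nearest-grid-point charge deposition (the simplest PIC weighting)."""
    idx = np.floor(x / length * grid_n).astype(int) % grid_n
    rho = np.zeros(grid_n)
    np.add.at(rho, idx, 1.0)          # unbuffered accumulation per particle
    return rho

# The same marker set sampled from the equilibrium deposits a *static* noisy
# density, which is removed exactly by subtracting it at every step.
rng = np.random.default_rng(0)
L, N, NG = 1.0, 10_000, 32
x_eq = rng.random(N) * L                                  # equilibrium markers
rho_eq = deposit(x_eq, NG, L)                             # noisy but static
x_pert = (x_eq + 0.01 * np.sin(2 * np.pi * x_eq / L)) % L # perturbed state
delta_rho = deposit(x_pert, NG, L) - rho_eq               # noise cancels
```

With zero perturbation the difference is exactly zero on every grid point, i.e. the shot noise of the equilibrium never enters the dynamics.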
NASA Astrophysics Data System (ADS)
Inanç, Arda; Kösoğlu, Gülşen; Yüksel, Heba; Naci Inci, Mehmet
2018-06-01
A new fibre optic Lloyd's mirror method is developed for extracting the 3-D height distribution of various objects at the micron scale with a resolution of 4 μm. The fibre optic assembly is elegantly integrated with an optical microscope and a CCD camera. It is demonstrated that the proposed technique is quite suitable and practical for producing an interference pattern with an adjustable frequency. By increasing the distance between the fibre and the mirror with a micrometre stage in the Lloyd's mirror assembly, the separation between two bright fringes is lowered to the micron scale without using any additional elements as part of the optical projection unit. A fibre optic cable, whose polymer jacket is partially stripped, and a microfluidic channel are used as test objects to extract their surface topographies. The point-by-point sensitivity of the method is found to be around 8 μm, varying by a couple of microns depending on the fringe frequency and the measured height. A straightforward calibration procedure for the phase-to-height conversion is also introduced by making use of the vertical moving stage of the optical microscope. The phase analysis of the acquired image is carried out by a one-dimensional continuous wavelet transform, for which the chosen wavelet is the Morlet wavelet, and the carrier removal of the projected fringe patterns is achieved by reference subtraction. Furthermore, the flexible multi-frequency property of the proposed method allows measuring discontinuous heights, where there are phase ambiguities like 2π, by lowering the fringe frequency and eliminating the phase ambiguity.
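Carrier removal by reference subtraction amounts to subtracting the phase of a flat reference plane from the object phase before converting to height. A minimal sketch (the calibration constant k and the wrapping convention are assumptions; the wavelet phase extraction itself is omitted):

```python
import numpy as np

def phase_to_height(phi_obj, phi_ref, k):
    """Carrier removal by reference subtraction.

    The linear carrier phase of the projected fringes is identical in the
    object and reference measurements, so it cancels in phi_obj - phi_ref,
    leaving only the height-induced phase. k is an assumed calibration
    constant (radians per micron) from the vertical-stage procedure.
    """
    dphi = np.angle(np.exp(1j * (phi_obj - phi_ref)))  # wrap to (-pi, pi]
    return dphi / k
```

For height variations small enough that the phase difference stays within one fringe, the recovered profile equals the true one up to the calibration constant.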
A rational interpolation method to compute frequency response
NASA Technical Reports Server (NTRS)
Kenney, Charles; Stubberud, Stephen; Laub, Alan J.
1993-01-01
A rational interpolation method for approximating a frequency response is presented. The method is based on a product formulation of finite differences, thereby avoiding the numerical problems incurred by subtraction of nearly equal values. Also, resonant pole and zero cancellation schemes are developed that increase the accuracy and efficiency of the interpolation method. Techniques for selecting interpolation points are also discussed.
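The cancellation problem that the product formulation avoids is easy to demonstrate: a naive finite difference subtracts nearly equal function values and loses most of its significant digits, while an algebraically refactored product form does not. The example below uses the exponential function as a stand-in; it illustrates the numerical issue, not the paper's rational interpolation scheme:

```python
import math

def naive_diff(f, x, h):
    """Forward difference: subtracts nearly equal values when h is tiny."""
    return (f(x + h) - f(x)) / h

def expm1_diff(x, h):
    """For f = exp, factor the difference as exp(x) * (exp(h) - 1) / h and
    evaluate exp(h) - 1 with math.expm1, which avoids the cancellation."""
    return math.exp(x) * math.expm1(h) / h
```

At h = 1e-12 the naive form typically retains only a few correct digits of the derivative exp(1), while the product form is accurate to near machine precision.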
NASA Astrophysics Data System (ADS)
Denz, Cornelia; Dellwig, Thilo; Lembcke, Jan; Tschudi, Theo
1996-02-01
We propose and demonstrate experimentally a method for utilizing a dynamic phase-encoded photorefractive memory to realize parallel optical addition, subtraction, and inversion operations on stored images. The phase-encoded holographic memory is realized in photorefractive BaTiO3, storing eight images using Walsh-Hadamard binary phase codes and an incremental recording procedure. By subsampling the set of reference beams during the recall operation, the selectivity of the phase address is decreased, allowing one to combine images in such a way that different linear combinations of the images can be realized at the output of the memory.
Zimmermann's forest formula, infrared divergences and the QCD beta function
NASA Astrophysics Data System (ADS)
Herzog, Franz
2018-01-01
We review Zimmermann's forest formula, which solves Bogoliubov's recursive R-operation for the subtraction of ultraviolet divergences in perturbative Quantum Field Theory. We further discuss a generalisation of the R-operation which subtracts Euclidean infrared divergences in addition to ultraviolet ones. This generalisation, which goes under the name of the R*-operation, can be used efficiently to compute renormalisation constants. We will discuss several results obtained by this method, with a focus on the QCD beta function at five loops as well as the application to hadronic Higgs boson decay rates at N4LO. This article summarizes a talk given at the Wolfhart Zimmermann Memorial Symposium.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkov, S. A., E-mail: volkoff-sergey@mail.ru
2016-06-15
A new subtractive procedure for canceling ultraviolet and infrared divergences in Feynman integrals is developed here for calculating QED corrections to the electron anomalous magnetic moment. The procedure, formulated in the form of a forest expression with linear operators applied to the Feynman amplitudes of UV-divergent subgraphs, makes it possible to represent the contribution of each Feynman graph containing only electron and photon propagators as a converging integral over Feynman parameters. The application of the developed method to the numerical calculation of two- and three-loop contributions is described.
Salem, Hesham; Mohamed, Dalia
2015-04-05
Six simple, specific, accurate and precise spectrophotometric methods were developed and validated for the simultaneous determination of the analgesic drug paracetamol (PARA) and the skeletal muscle relaxant dantrolene sodium (DANT). Three methods manipulate ratio spectra, namely ratio difference (RD), ratio subtraction (RS) and mean centering (MC). The other three methods utilize the isoabsorptive point, either at zero order, namely absorbance ratio (AR) and absorbance subtraction (AS), or at the ratio spectrum, namely amplitude modulation (AM). The proposed spectrophotometric procedures do not require any preliminary separation step. The accuracy, precision and linearity ranges of the proposed methods were determined. The selectivity of the developed methods was investigated by analyzing laboratory-prepared mixtures of the drugs and their combined dosage form. Standard deviation values are less than 1.5 in the assay of raw materials and capsules. The obtained results were statistically compared with each other and with those of reported spectrophotometric methods. The comparison showed that there is no significant difference between the proposed methods and the reported methods regarding both accuracy and precision. Copyright © 2015 Elsevier B.V. All rights reserved.
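The ratio subtraction (RS) step for a binary mixture can be sketched as follows, using synthetic Gaussian spectra. Dividing the mixture spectrum by the divisor spectrum of one component yields the other component's ratio spectrum plus a constant plateau; the plateau location and the spectra here are illustrative assumptions:

```python
import numpy as np

def ratio_subtraction(mixture, divisor, plateau_idx):
    """Ratio-subtraction sketch for a binary mixture M = X + c * divisor.

    Dividing by the divisor spectrum gives X/divisor + c; the constant c is
    read from a plateau region where X does not absorb, subtracted, and the
    result is multiplied back by the divisor to recover the spectrum of X.
    """
    ratio = mixture / divisor
    c = ratio[plateau_idx].mean()   # plateau value = divisor's contribution
    return (ratio - c) * divisor
```

With two well-separated Gaussian bands, the recovered spectrum matches the pure component to machine precision, since the algebra is exact when the plateau is truly flat.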
NASA Astrophysics Data System (ADS)
Lotfy, Hayam Mahmoud; Fayez, Yasmin Mohammed; Tawakkol, Shereen Mostafa; Fahmy, Nesma Mahmoud; Shehata, Mostafa Abd El-Atty
2017-09-01
Simultaneous determination of miconazole (MIC), mometasone furoate (MF), and gentamicin (GEN) in their pharmaceutical combination is presented. Gentamicin determination is based on derivatization with o-phthalaldehyde reagent (OPA) without any interference from the other cited drugs, while the spectra of MIC and MF are resolved using both successive and progressive resolution techniques. The first derivative spectrum of MF is measured using constant multiplication or spectrum subtraction, and its recovered zero-order spectrum is obtained using derivative transformation, besides the application of the constant value method. The zero-order spectrum of MIC is obtained by derivative transformation after getting its first derivative spectrum by the derivative subtraction method. The novel method, namely differential amplitude modulation, is used to obtain the concentrations of MF and MIC, while the novel graphical method, namely concentration value, is used to obtain the concentrations of MIC, MF, and GEN. Accuracy and precision testing of the developed methods show good results. Specificity of the methods is ensured, and they are successfully applied for the analysis of the pharmaceutical formulation of the three drugs in combination. ICH guidelines are used for validation of the proposed methods. Statistical data are calculated, and the results are satisfactory, revealing no significant difference regarding accuracy and precision.
Abdel-Ghany, Maha F; Abdel-Aziz, Omar; Mohammed, Yomna Y
2015-01-01
Four simple, specific, accurate and precise spectrophotometric methods were developed and validated for the simultaneous determination of domperidone (DP) and ranitidine hydrochloride (RT) in bulk powder and pharmaceutical formulation. The first method was simultaneous ratio subtraction (SRS), the second was ratio subtraction (RS) coupled with zero-order spectrophotometry (D⁰), the third was first derivative of the ratio spectra (¹DD) and the fourth was mean centering of ratio spectra (MCR). The calibration curves are linear over the concentration ranges of 0.5-5 and 1-45 μg mL⁻¹ for DP and RT, respectively. The proposed spectrophotometric methods can analyze both drugs without any prior separation steps. The selectivity of the adopted methods was tested by analyzing synthetic mixtures of the investigated drugs, as well as their pharmaceutical formulation. The suggested methods were validated according to International Conference on Harmonization (ICH) guidelines and the results revealed that they were precise and reproducible. All the obtained results were statistically compared with those of the reported method, and no significant difference was found. Copyright © 2015 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Robert, Nicole D.; LeFevre, Jo-Anne
2013-01-01
Does solving subtraction problems with negative answers (e.g., 5-14) require different cognitive processes than solving problems with positive answers (e.g., 14-5)? In a dual-task experiment, young adults (N=39) combined subtraction with two working memory tasks, verbal memory and visual-spatial memory. All of the subtraction problems required…
NASA Astrophysics Data System (ADS)
Helama, S.; Lindholm, M.; Timonen, M.; Eronen, M.
2004-12-01
Tree-ring standardization methods were compared. Traditional methods along with the recently introduced approaches of regional curve standardization (RCS) and power-transformation (PT) were included. The difficulty in removing non-climatic variation (noise) while simultaneously preserving the low-frequency variability in the tree-ring series was emphasized. The potential risk of obtaining inflated index values was analysed by comparing methods to extract tree-ring indices from the standardization curve. The material for the tree-ring series, previously used in several palaeoclimate predictions, came from living and dead wood of high-latitude Scots pine in northernmost Europe. This material provided a useful example of a long composite tree-ring chronology with the typical strengths and weaknesses of such data, particularly in the context of standardization. PT stabilized the heteroscedastic variation in the original tree-ring series more efficiently than any other standardization practice expected to preserve the low-frequency variability. RCS showed great potential in preserving variability in tree-ring series at centennial time scales; however, this method requires a homogeneous sample for reliable signal estimation. It is not recommended to derive indices by subtraction without first stabilizing the variance in the case of series of forest-limit tree-ring data. Index calculation by division did not seem to produce inflated chronology values for the past one and a half centuries of the chronology (where mean sample cambial age is high). On the other hand, potential bias of high RCS chronology values was observed during the period of anomalously low mean sample cambial age. An alternative technique for chronology construction was proposed based on series age decomposition, where indices in the young vigorously behaving part of each series are extracted from the curve by division and in the mature part by subtraction. 
Because of their specific nature, the dendrochronological data here should not be generalized to all tree-ring records. The examples presented should be used as guidelines for detecting potential sources of bias and as illustrations of the usefulness of tree-ring records as palaeoclimate indicators.
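The index-extraction choices compared above, and the proposed age decomposition (division in the young, vigorously growing part of each series, subtraction in the mature part), can be sketched as below; the split age is a tunable assumption, and all names are hypothetical:

```python
import numpy as np

def indices(raw, curve, method):
    """Tree-ring indices from a fitted standardization curve."""
    if method == "division":        # ratio indices, mean ~1
        return raw / curve
    if method == "subtraction":     # difference indices, mean ~0
        return raw - curve
    raise ValueError(method)

def age_decomposed_indices(raw, curve, cambial_age, split_age):
    """Sketch of the proposed age decomposition: indices are extracted by
    division while rings are young and by subtraction once mature."""
    young = cambial_age < split_age
    return np.where(young, raw / curve, raw - curve)
```

Division keeps index variance proportional to the (large) juvenile growth level, while subtraction avoids inflating indices where the curve value is small, which is the bias the abstract warns about.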
N3LO corrections to jet production in deep inelastic scattering using the Projection-to-Born method
NASA Astrophysics Data System (ADS)
Currie, J.; Gehrmann, T.; Glover, E. W. N.; Huss, A.; Niehues, J.; Vogt, A.
2018-05-01
Computations of higher-order QCD corrections for processes with exclusive final states require a subtraction method for real-radiation contributions. We present the first-ever generalisation of a subtraction method for third-order (N3LO) QCD corrections. The Projection-to-Born method is used to combine inclusive N3LO coefficient functions with an exclusive second-order (NNLO) calculation for a final state with an extra jet. The input requirements, advantages, and potential applications of the method are discussed, and validations at lower orders are performed. As a test case, we compute the N3LO corrections to kinematical distributions and production rates for single-jet production in deep inelastic scattering in the laboratory frame, and compare them with data from the ZEUS experiment at HERA. The corrections are small in the central rapidity region, where they stabilize the predictions to the sub-per-cent level. The corrections increase substantially towards forward rapidity, where large logarithmic effects are expected, thereby yielding an improved description of the data in this region.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almeida, Leandro G.; Physics Department, Brookhaven National Laboratory, Upton, New York 11973; Sturm, Christian
2010-09-01
Light quark masses can be determined through lattice simulations in regularization-invariant momentum-subtraction (RI/MOM) schemes. Subsequently, matching factors, computed in continuum perturbation theory, are used to convert these quark masses from an RI/MOM scheme to the MS-bar scheme. We calculate the two-loop corrections in QCD to these matching factors as well as the three-loop mass anomalous dimensions for the RI/SMOM and RI/SMOM_γμ schemes. These two schemes are characterized by a symmetric subtraction point. Providing the conversion factors in the two different schemes allows for a better understanding of the systematic uncertainties. The two-loop expansion coefficients of the matching factors for both schemes turn out to be small compared to the traditional RI/MOM schemes. For n_f = 3 quark flavors they are about 0.6%-0.7% and 2%, respectively, of the leading-order result at scales of about 2 GeV. Therefore, they will allow for a significant reduction of the systematic uncertainty of light quark mass determinations obtained through this approach. The determination of these matching factors requires the computation of amputated Green's functions with insertions of quark bilinear operators. As a by-product of our calculation we also provide the corresponding results for the tensor operator.
Tillman, F.D.; Callegary, J.B.; Nagler, P.L.; Glenn, E.P.
2012-01-01
Groundwater is a vital water resource in the arid to semi-arid southwestern United States. Accurate accounting of inflows to and outflows from the groundwater system is necessary to effectively manage this shared resource, including the important outflow component of groundwater discharge by vegetation. A simple method for estimating basin-scale groundwater discharge by vegetation is presented that uses remote sensing data from satellites, geographic information systems (GIS) land cover and stream location information, and a regression equation developed within the Southern Arizona study area relating the Enhanced Vegetation Index from the MODIS sensors on the Terra satellite to measured evapotranspiration. Results computed for 16-day composited satellite passes over the study area during the 2000 through 2007 time period demonstrate a sinusoidal pattern of annual groundwater discharge by vegetation with median values ranging from around 0.3 mm per day in the cooler winter months to around 1.5 mm per day during summer. Maximum estimated annual volume of groundwater discharge by vegetation was between 1.4 and 1.9 billion m3 per year with an annual average of 1.6 billion m3. A simplified accounting of the contribution of precipitation to vegetation greenness was developed whereby monthly precipitation data were subtracted from computed vegetation discharge values, resulting in estimates of minimum groundwater discharge by vegetation. Basin-scale estimates of minimum and maximum groundwater discharge by vegetation produced by this simple method are useful bounding values for groundwater budgets and groundwater flow models, and the method may be applicable to other areas with similar vegetation types.
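The minimum-discharge accounting described above, subtracting precipitation from vegetation water use, reduces to a clipped per-period difference. A sketch (the zero floor is an assumed convention for months when rainfall exceeds vegetation water use, and the function name is hypothetical):

```python
import numpy as np

def min_groundwater_discharge(et_veg_mm, precip_mm):
    """Minimum groundwater discharge by vegetation: the portion of vegetation
    water use (e.g., EVI-derived ET, in mm) not explainable by precipitation.
    Negative differences are clipped to zero, since groundwater discharge
    cannot be negative."""
    diff = np.asarray(et_veg_mm, float) - np.asarray(precip_mm, float)
    return np.clip(diff, 0.0, None)
```

The unclipped ET series then serves as the maximum estimate, bracketing the true groundwater discharge between the two, as the abstract suggests for water budgets.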
Activity Approach to the Formation of the Method of Addition and Subtraction in Elementary Students
ERIC Educational Resources Information Center
Maksimov, L. K.; Maksimova, L. V.
2013-01-01
One of the main tasks in teaching mathematics to elementary students is to form calculating methods and techniques. The efforts of teachers and methodologists are aimed at solving this problem. Educational and psychological research is devoted to it. At the same time school teaching experience demonstrates some difficulties in learning methods of…
Mixedness determination of rare earth-doped ceramics
NASA Astrophysics Data System (ADS)
Czerepinski, Jennifer H.
The lack of chemical uniformity in a powder mixture, such as clustering of a minor component, can lead to deterioration of materials properties. A method to determine powder mixture quality is to correlate the chemical homogeneity of a multi-component mixture with its particle size distribution and mixing method. This is applicable to rare earth-doped ceramics, which require at least 1-2 nm dopant ion spacing to optimize optical properties. Mixedness simulations were conducted for random heterogeneous mixtures of Nd-doped LaF3 mixtures using the Concentric Shell Model of Mixedness (CSMM). Results indicate that when the host to dopant particle size ratio is 100, multi-scale concentration variance is optimized. In order to verify results from the model, experimental methods that probe a mixture at the micro, meso, and macro scales are needed. To directly compare CSMM results experimentally, an image processing method was developed to calculate variance profiles from electron images. An in-lens (IL) secondary electron image is subtracted from the corresponding Everhart-Thornley (ET) secondary electron image in a Field-Emission Scanning Electron Microscope (FESEM) to produce two phases and pores that can be quantified with 50 nm spatial resolution. A macro was developed to quickly analyze multi-scale compositional variance from these images. Results for a 50:50 mixture of NdF3 and LaF3 agree with the computational model. The method has proven to be applicable only for mixtures with major components and specific particle morphologies, but the macro is useful for any type of imaging that produces excellent phase contrast, such as confocal microscopy. Fluorescence spectroscopy was used as an indirect method to confirm computational results for Nd-doped LaF3 mixtures. Fluorescence lifetime can be used as a quantitative method to indirectly measure chemical homogeneity when the limits of electron microscopy have been reached. 
Fluorescence lifetime represents the compositional fluctuations of a dopant on the nanoscale while accounting for billions of particles in a fast, non-destructive manner. The significance of this study will show how small-scale fluctuations in homogeneity limit the optimization of optical properties, which can be improved by the proper selection of particle size and mixing method.
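The image-subtraction and multi-scale variance steps described above can be sketched as below. Simple block averaging stands in for the concentric shells of the CSMM, and the threshold and function names are assumptions:

```python
import numpy as np

def phase_map(il_image, et_image, threshold=0.0):
    """Subtract the Everhart-Thornley (ET) image from the in-lens (IL) image
    and binarize, a sketch of the two-detector phase-segmentation idea."""
    diff = il_image.astype(float) - et_image.astype(float)
    return diff > threshold

def multiscale_variance(binary, scales):
    """Variance of the local phase fraction at several box sizes: a
    multi-scale mixedness profile via block averaging."""
    out = {}
    for s in scales:
        h = (binary.shape[0] // s) * s
        w = (binary.shape[1] // s) * s
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        out[s] = blocks.var()
    return out
```

For a perfectly mixed (checkerboard-like) map the variance collapses to zero once the box size exceeds the mixing scale, which is the signature a mixedness analysis looks for.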
The functional architectures of addition and subtraction: Network discovery using fMRI and DCM.
Yang, Yang; Zhong, Ning; Friston, Karl; Imamura, Kazuyuki; Lu, Shengfu; Li, Mi; Zhou, Haiyan; Wang, Haiyuan; Li, Kuncheng; Hu, Bin
2017-06-01
The neuronal mechanisms underlying arithmetic calculations are not well understood but the differences between mental addition and subtraction could be particularly revealing. Using fMRI and dynamic causal modeling (DCM), this study aimed to identify the distinct neuronal architectures engaged by the cognitive processes of simple addition and subtraction. Our results revealed significantly greater activation during subtraction in regions along the dorsal pathway, including the left inferior frontal gyrus (IFG), middle portion of dorsolateral prefrontal cortex (mDLPFC), and supplementary motor area (SMA), compared with addition. Subsequent analysis of the underlying changes in connectivity - with DCM - revealed a common circuit processing basic (numeric) attributes and the retrieval of arithmetic facts. However, DCM showed that addition was more likely to engage (numeric) retrieval-based circuits in the left hemisphere, while subtraction tended to draw on (magnitude) processing in bilateral parietal cortex, especially the right intraparietal sulcus (IPS). Our findings endorse previous hypotheses about the differences in strategic implementation, dominant hemisphere, and the neuronal circuits underlying addition and subtraction. Moreover, for simple arithmetic, our connectivity results suggest that subtraction calls on more complex processing than addition: auxiliary phonological, visual, and motor processes, for representing numbers, were engaged by subtraction, relative to addition. Hum Brain Mapp 38:3210-3225, 2017. © 2017 Wiley Periodicals, Inc.
Snow Depth Mapping at a Basin-Wide Scale in the Western Arctic Using UAS Technology
NASA Astrophysics Data System (ADS)
de Jong, T.; Marsh, P.; Mann, P.; Walker, B.
2015-12-01
Assessing snow depths across the Arctic has proven to be extremely difficult due to the variability of snow depths at scales from metres to 100's of metres. New Unmanned Aerial Systems (UAS) technology provides the possibility of obtaining centimetre-level resolution imagery (~3 cm) and of creating Digital Surface Models (DSM) based on the Structure from Motion method. However, there is an ongoing need to quantify the accuracy of this method over different terrain and vegetation types across the Arctic. In this study, we used a small UAS equipped with a high-resolution RGB camera to create DSMs over a 1 km2 watershed in the western Canadian Arctic during snow-covered (end of winter) and snow-free periods. To improve the image georeferencing, 15 Ground Control Points were marked across the watershed and incorporated into the DSM processing. The summer DSM was subtracted from the snow-covered DSM to deliver snow depth measurements across the entire watershed. These snow depth measurements were validated against over 2000 snow depth measurements. This technique has the potential to improve larger-scale snow depth mapping across watersheds by providing snow depth measurements at ~3 cm resolution. The ability to map both shallow snow (less than 75 cm) covering much of the basin and snow patches (up to 5 m in depth) that cover less than 10% of the basin, but contain a significant portion of the total basin snowcover, is important both for water resource applications and for testing snow models.
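The DSM differencing step is a per-pixel subtraction of the snow-free surface from the snow-covered surface. A sketch (clipping small negatives arising from co-registration or SfM error is an assumed post-processing choice, not stated in the abstract):

```python
import numpy as np

def snow_depth(dsm_winter, dsm_summer, min_depth=0.0):
    """Snow depth by DSM differencing: snow-covered surface minus snow-free
    surface, with small negative values from elevation error clipped to zero."""
    depth = np.asarray(dsm_winter, float) - np.asarray(dsm_summer, float)
    return np.clip(depth, min_depth, None)
```

The clipped depth raster can then be validated pixel-by-pixel against probe measurements, as done in the study.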
NASA Astrophysics Data System (ADS)
Kureba, C. O.; Buthelezi, Z.; Carter, J.; Cooper, G. R. J.; Fearick, R. W.; Förtsch, S. V.; Jingo, M.; Kleinig, W.; Krugmann, A.; Krumbolz, A. M.; Kvasil, J.; Mabiala, J.; Mira, J. P.; Nesterenko, V. O.; von Neumann-Cosel, P.; Neveling, R.; Papka, P.; Reinhard, P.-G.; Richter, A.; Sideras-Haddad, E.; Smit, F. D.; Steyn, G. F.; Swartz, J. A.; Tamii, A.; Usman, I. T.
2018-04-01
The phenomenon of fine structure of the Isoscalar Giant Quadrupole Resonance (ISGQR) has been studied with high energy-resolution proton inelastic scattering at iThemba LABS in the chain of stable even-mass Nd isotopes covering the transition from spherical to deformed ground states. A wavelet analysis of the background-subtracted spectra in the deformed 146,148,150Nd isotopes reveals characteristic scales in correspondence with scales obtained from a Skyrme RPA calculation using the SVmas10 parameterization. A semblance analysis shows that these scales arise from the energy shift between the main fragments of the K = 0, 1 and K = 2 components.
Qing Liu; Zhihui Lai; Zongwei Zhou; Fangjun Kuang; Zhong Jin
2016-01-01
Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. The truncated nuclear norm has recently been proposed as a better approximation to the rank of a matrix than the nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extension model (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. ETNNR-WRE is much more robust to the number of subtracted singular values than the TNNR-WRE, TNNR alternating direction method of multipliers, and TNNR accelerated proximal gradient with line search methods. Experimental results using both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than the TNNR and Iteratively Reweighted Nuclear Norm (IRNN) methods.
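The quantity TNNR works with, the sum of singular values excluding the r largest, can be illustrated in a few lines (a minimal sketch of the norm itself, not of the TNNR-WRE solver):

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    """Sum of the singular values of X excluding the r largest.

    TNNR minimizes this quantity, so the r dominant singular values
    (assumed to carry the true low-rank structure) are left untouched.
    """
    s = np.linalg.svd(X, compute_uv=False)  # singular values, descending
    return s[r:].sum()

# Rank-1 example: every singular value beyond the first is ~0, so
# truncating one value drives the norm to (numerically) zero.
X = np.outer([1.0, 2.0], [3.0, 4.0])
print(round(truncated_nuclear_norm(X, 0), 3))  # full nuclear norm → 11.18
print(round(truncated_nuclear_norm(X, 1), 3))  # → 0.0 for a rank-1 matrix
```

This is why the truncated norm approximates rank better than the plain nuclear norm: it penalizes only the small singular values that a low-rank model should eliminate.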
Bifurcated method and apparatus for floating point addition with decreased latency time
Farmwald, Paul M.
1987-01-01
Apparatus for decreasing the latency time associated with floating point addition and subtraction in a computer, using a novel bifurcated, pre-normalization/post-normalization approach that distinguishes between differences of floating point exponents.
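The idea behind such a bifurcated design, widely known as a two-path floating-point adder, can be illustrated with a toy path classifier. This is a hedged sketch of the general technique, not the patented circuit:

```python
# Illustrative sketch (not the patented apparatus): a two-path FP adder
# runs both cases in parallel and keeps whichever applies, so the worst-case
# latency is one long shift, not two in sequence.
def classify_path(exp_a, exp_b, subtract):
    """Near path: effective subtraction with exponents differing by at most 1,
    where massive cancellation can occur and a long post-normalization shift
    is needed (but alignment is trivial). Far path: alignment may need a long
    pre-normalization shift, but post-normalization is at most a bit or two."""
    if subtract and abs(exp_a - exp_b) <= 1:
        return "near"   # post-normalization dominates latency
    return "far"        # pre-normalization (alignment) dominates latency

print(classify_path(10, 10, subtract=True))   # → near
print(classify_path(10, 3, subtract=True))    # → far
print(classify_path(10, 10, subtract=False))  # → far
```

The key observation is that the long alignment shift and the long normalization shift can never both be required for the same operand pair, which is what lets the two cases be separated.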
Longitudinal development of subtraction performance in elementary school.
Artemenko, Christina; Pixner, Silvia; Moeller, Korbinian; Nuerk, Hans-Christoph
2017-10-05
A major goal of education in elementary mathematics is the mastery of arithmetic operations. However, research on subtraction is rather scarce, probably because subtraction is often implicitly assumed to be cognitively similar to addition, its mathematical inverse. To evaluate this assumption, we examined the relation between the borrow effect in subtraction and the carry effect in addition, and the developmental trajectory of the borrow effect in children using a choice reaction paradigm in a longitudinal study. In contrast to the carry effect in adults, carry and borrow effects in children were found to be categorical rather than continuous. From grades 3 to 4, children became more proficient in two-digit subtraction in general, but not in performing the borrow operation in particular. Thus, we observed no specific developmental progress in place-value computation, but a general improvement in subtraction procedures. Statement of contribution What is already known on this subject? The borrow operation increases difficulty in two-digit subtraction in adults. The carry effect in addition, as the inverse operation of borrowing, comprises categorical and continuous processing characteristics. What does this study add? In contrast to the carry effect in adults, the borrow and carry effects are categorical in elementary school children. Children generally improve in subtraction performance from grades 3 to 4 but do not progress in place-value computation in particular. © 2017 The British Psychological Society.
Alternative method for determining the constant offset in lidar signal
Vladimir A. Kovalev; Cyle Wold; Alexander Petkov; Wei Min Hao
2009-01-01
We present an alternative method for determining the total offset in lidar signal created by a daytime background-illumination component and electrical or digital offset. Unlike existing techniques, here the signal square-range-correction procedure is initially performed using the total signal recorded by lidar, without subtraction of the offset component. While...
Towards a self-consistent dynamical nuclear model
NASA Astrophysics Data System (ADS)
Roca-Maza, X.; Niu, Y. F.; Colò, G.; Bortignon, P. F.
2017-04-01
Density functional theory (DFT) is a powerful and accurate tool, exploited in nuclear physics to investigate the ground-state and some of the collective properties of nuclei along the whole nuclear chart. Models based on DFT are not, however, suitable for the description of single-particle dynamics in nuclei. Following the field theoretical approach by A Bohr and B R Mottelson to describe nuclear interactions between single-particle and vibrational degrees of freedom, we have taken important steps towards the building of a microscopic dynamic nuclear model. In connection with this, one important issue that needs to be better understood is the renormalization of the effective interaction in the particle-vibration approach. One possible way to renormalize the interaction is by the so-called subtraction method. In this contribution, we will implement the subtraction method in our model for the first time and study its consequences.
Accurate ω-ψ Spectral Solution of the Singular Driven Cavity Problem
NASA Astrophysics Data System (ADS)
Auteri, F.; Quartapelle, L.; Vigevano, L.
2002-08-01
This article provides accurate spectral solutions of the driven cavity problem, calculated in the vorticity-stream function representation without smoothing the corner singularities—a prima facie impossible task. As in a recent benchmark spectral calculation by primitive variables of Botella and Peyret, closed-form contributions of the singular solution for both zero and finite Reynolds numbers are subtracted from the unknown of the problem tackled here numerically in biharmonic form. The method employed is based on a split approach to the vorticity and stream function equations, a Galerkin-Legendre approximation of the problem for the perturbation, and an evaluation of the nonlinear terms by Gauss-Legendre numerical integration. Results computed for Re=0, 100, and 1000 compare well with the benchmark steady solutions provided by the aforementioned collocation-Chebyshev projection method. The validity of the proposed singularity subtraction scheme for computing time-dependent solutions is also established.
Ayoub, Bassam M
2017-07-01
Introducing green analysis to pharmaceutical products is considered a significant approach to preserving the environment. Such a method can be an environmentally friendly alternative to existing methods, accompanied by a validated automated procedure for the analysis of a drug with the lowest possible number of samples. Different simple spectrophotometric methods were developed for the simultaneous determination of empagliflozin (EG) and metformin (MT) by manipulating their ratio spectra, applied to a recently approved pharmaceutical combination, Synjardy tablets. A spiking technique was used to increase the concentration of EG in samples prepared from the tablets to allow the simultaneous determination of EG and MT without prior separation. Validation parameters according to International Conference on Harmonization guidelines were acceptable over a concentration range of 2-12 μg/mL for both drugs using derivative ratio and ratio subtraction coupled with extended ratio subtraction. The optimized methods were compared using one-way analysis of variance and proved to be suitable as ecofriendly approaches for industrial QC laboratories.
Computer analysis of ATR-FTIR spectra of paint samples for forensic purposes
NASA Astrophysics Data System (ADS)
Szafarska, Małgorzata; Woźniakiewicz, Michał; Pilch, Mariusz; Zięba-Palus, Janina; Kościelniak, Paweł
2009-04-01
A method of subtraction and normalization of IR spectra (MSN-IR) was developed and successfully applied to mathematically extract the pure paint spectrum from the spectrum of a paint coat on different bases, both acquired by the Attenuated Total Reflectance Fourier Transform Infrared (ATR-FTIR) technique. The method consists of several stages encompassing normalization and subtraction processes. The similarity of the spectrum obtained to the reference spectrum was estimated by means of the normalized Manhattan distance. The utility and performance of the proposed method were tested by examination of five different paints sprayed on plastic (polyester) foil and on fabric material (cotton). It was found that the numerical algorithm applied is able, in contrast to other mathematical approaches conventionally used for the same aim, to reconstruct a pure paint IR spectrum effectively without loss of the chemical information provided. The approach allows the physical separation of a paint from its base to be avoided, hence considerably reducing the time and workload of the analysis. The results obtained prove that the method can be considered a useful tool for forensic purposes.
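A minimal sketch of the similarity measure mentioned above follows; one common form of the normalized Manhattan (city-block) distance is shown, and the exact normalization used in MSN-IR may differ:

```python
import numpy as np

def normalized_manhattan(a, b):
    """Mean absolute difference between two spectra sampled on the same
    wavenumber grid: 0 for identical spectra, larger for dissimilar ones.
    (One common normalization; the paper's exact variant may differ.)"""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.abs(a - b).sum() / len(a)

# Toy absorbance spectra (illustrative values only).
reference = [0.10, 0.50, 0.90, 0.50]
recovered = [0.10, 0.50, 0.90, 0.50]
print(normalized_manhattan(reference, recovered))  # → 0.0 (identical spectra)
```

In the paper's workflow, this distance would be evaluated between the mathematically recovered paint spectrum and a reference spectrum of the pure paint.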
A calibration-free electrode compensation method
Rossant, Cyrille; Fontaine, Bertrand; Magnusson, Anna K.
2012-01-01
In a single-electrode current-clamp recording, the measured potential includes both the response of the membrane and that of the measuring electrode. The electrode response is traditionally removed using bridge balance, where the response of an ideal resistor representing the electrode is subtracted from the measurement. Because the electrode is not an ideal resistor, this procedure produces capacitive transients in response to fast or discontinuous currents. More sophisticated methods exist, but they all require a preliminary calibration phase, to estimate the properties of the electrode. If these properties change after calibration, the measurements are corrupted. We propose a compensation method that does not require preliminary calibration. Measurements are compensated offline by fitting a model of the neuron and electrode to the trace and subtracting the predicted electrode response. The error criterion is designed to avoid the distortion of compensated traces by spikes. The technique allows electrode properties to be tracked over time and can be extended to arbitrary models of electrode and neuron. We demonstrate the method using biophysical models and whole cell recordings in cortical and brain-stem neurons. PMID:22896724
[Detection of lung nodules. New opportunities in chest radiography].
Pötter-Lang, S; Schalekamp, S; Schaefer-Prokop, C; Uffmann, M
2014-05-01
Chest radiography still represents the most commonly performed X-ray examination because it is readily available, requires low radiation doses and is relatively inexpensive. However, as previously published, many initially undetected lung nodules are retrospectively visible in chest radiographs. The great improvements in detector technology, with increasing dose efficiency and improved contrast resolution, provide better image quality and reduced dose requirements. The dual-energy acquisition technique and advanced image processing methods (e.g. digital bone subtraction and temporal subtraction) reduce the anatomical background noise by reducing overlapping structures in chest radiography. Computer-aided detection (CAD) schemes increase the awareness of radiologists for suspicious areas. The advanced image processing methods show clear improvements for the detection of pulmonary nodules in chest radiography and strengthen the role of this method in comparison to 3D acquisition techniques, such as computed tomography (CT). Many of these methods will probably be integrated into standard clinical practice in the near future. Digital software solutions offer advantages as they can be easily incorporated into radiology departments and are often more affordable than hardware solutions.
White-Light Optical Information Processing and Holography.
1982-05-03
...artifact noise. However, the deblurring spatial filter that we used was a narrow spectral band centered at 5154 Å green light. To compensate for the scaling... Keywords: White-Light Holography, Image Processing, Optical Signal Processing, Image Subtraction, Image Deblurring. ...optical processing technique, we had shown that the incoherent source technique provides better image quality and very low coherent artifact noise
NASA Astrophysics Data System (ADS)
Wei, Chen-Wei; Xia, Jinjun; Pelivanov, Ivan; Hu, Xiaoge; Gao, Xiaohu; O'Donnell, Matthew
2012-10-01
Results on magnetically trapping and manipulating micro-scale beads circulating in a flow field mimicking metastatic cancer cells in human peripheral vessels are presented. Composite contrast agents combining magneto-sensitive nanospheres and highly optically absorptive gold nanorods were conjugated to micro-scale polystyrene beads. To efficiently trap the targeted objects in a fast stream, a dual magnet system consisting of two flat magnets to magnetize (polarize) the contrast agent and an array of cone magnets producing a sharp gradient field to trap the magnetized contrast agent was designed and constructed. A water-ink solution with an optical absorption coefficient of 10 cm-1 was used to mimic the optical absorption of blood. Magnetomotive photoacoustic imaging helped visualize bead trapping, dynamic manipulation of trapped beads in a flow field, and the subtraction of stationary background signals insensitive to the magnetic field. The results show that trafficking micro-scale objects can be effectively trapped in a stream with a flow rate up to 12 ml/min and the background can be significantly (greater than 15 dB) suppressed. This makes the proposed method very promising for sensitive detection of rare circulating tumor cells within high-flow vessels with a highly absorptive optical background.
Luke, Paul
1996-01-01
An ionization detector electrode and signal subtraction apparatus and method provides at least one first conductive trace formed onto the first surface of an ionization detector. The first surface opposes a second surface of the ionization detector. At least one second conductive trace is also formed on the first surface of the ionization detector in a substantially interlaced and symmetrical pattern with the at least one first conductive trace. Both of the traces are held at a voltage potential of a first polarity type. By forming the traces in a substantially interlaced and symmetric pattern, signals generated by a charge carrier are substantially of equal strength with respect to both of the traces. The only significant difference in measured signal strength occurs when the charge carrier moves to within close proximity of the traces and is received at the collecting trace. The measured signals are then subtracted and compared to quantitatively measure the magnitude of the charge and to determine the position at which the charge carrier originated within the ionization detector.
Choudhuri, Samir; Bharadwaj, Somnath; Roy, Nirupam; Ghosh, Abhik; Ali, Sk Saiyad
2016-06-11
It is important to correctly subtract point sources from radio-interferometric data in order to measure the power spectrum of diffuse radiation like the Galactic synchrotron or the Epoch of Reionization 21-cm signal. It is computationally very expensive and challenging to image a very large area and accurately subtract all the point sources from the image. The problem is particularly severe at the sidelobes and the outer parts of the main lobe, where the antenna response is highly frequency dependent and the calibration also differs from that of the phase centre. Here, we show that it is possible to overcome this problem by tapering the sky response. Using simulated 150 MHz observations, we demonstrate that it is possible to suppress the contribution due to point sources from the outer parts by using the Tapered Gridded Estimator to measure the angular power spectrum C_ℓ of the sky signal. We also show from the simulation that this method can self-consistently compute the noise bias and accurately subtract it to provide an unbiased estimate of C_ℓ.
NASA Astrophysics Data System (ADS)
Amanullah Tomal, A. N. M.; Saleh, Tanveer; Raisuddin Khan, Md.
2017-11-01
At present, two important processes, namely CNC machining and rapid prototyping (RP), are being used to create prototypes and functional products. Combining both additive and subtractive processes into a single platform would be advantageous. However, two important aspects need to be taken into consideration for this process hybridization: first, the integration of two different control systems for the two processes, and second, maximizing workpiece alignment accuracy during the changeover step. Recently we have developed a new hybrid system which incorporates Fused Deposition Modelling (FDM) as the RP process and a CNC grinding operation as the subtractive manufacturing process in a single setup. Several objects were produced with different layer thicknesses, for example 0.1 mm, 0.15 mm and 0.2 mm. It was observed that the pure FDM method is unable to attain the desired dimensional accuracy, which can be improved by a considerable margin, about 66% to 80%, if a finishing operation by grinding is carried out. It was also observed that layer thickness plays a role in dimensional accuracy, with the best accuracy achieved at the minimum layer thickness (0.1 mm).
Albin, Thomas J; Vink, Peter
2015-01-01
Anthropometric data are assumed to have a Gaussian (Normal) distribution, but if non-Gaussian, accommodation estimates are affected. When data are limited, users may choose to combine anthropometric elements by Combining Percentiles (CP) (adding or subtracting), despite known adverse effects. This study examined whether global anthropometric data are Gaussian distributed. It compared the Median Correlation Method (MCM) of combining anthropometric elements with unknown correlations to CP to determine if MCM provides better estimates of percentile values and accommodation. Percentile values of 604 male and female anthropometric data drawn from seven countries worldwide were expressed as standard scores. The standard scores were tested to determine if they were consistent with a Gaussian distribution. Empirical multipliers for determining percentile values were developed. In a test case, five anthropometric elements descriptive of seating were combined in addition and subtraction models. Percentile values were estimated for each model by CP, MCM with Gaussian-distributed data, or MCM with empirically distributed data. The 5th and 95th percentile values of a dataset of global anthropometric data are shown to be asymmetrically distributed. MCM with empirical multipliers gave more accurate estimates of 5th and 95th percentile values. Anthropometric data are not Gaussian distributed. The MCM method is more accurate than adding or subtracting percentiles.
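A small simulation illustrates why Combining Percentiles has known adverse effects: for independent elements, variability partly cancels, so summed percentiles overestimate the true combined percentile. The synthetic Gaussian data below are illustrative, not the study's dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Two hypothetical, independent Gaussian body dimensions (mm).
a = rng.normal(500, 30, n)
b = rng.normal(400, 25, n)

# "Combining Percentiles": add the two 95th percentiles directly.
cp = np.percentile(a, 95) + np.percentile(b, 95)

# True 95th percentile of the combined dimension.
true_p95 = np.percentile(a + b, 95)

# CP overestimates, because independent variability partly cancels:
# sd(a + b) = sqrt(30^2 + 25^2) ≈ 39 mm, not 30 + 25 = 55 mm.
print(cp > true_p95)  # → True
```

Here CP lands near 990 mm while the true combined 95th percentile is near 964 mm, roughly a 26 mm overestimate, which is the kind of error the MCM approach is designed to avoid.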
Ale, Angelique; Ermolayev, Vladimir; Deliolanis, Nikolaos C; Ntziachristos, Vasilis
2013-05-01
The ability to visualize early stage lung cancer is important in the study of biomarkers and targeting agents that could lead to earlier diagnosis. The recent development of hybrid free-space 360-deg fluorescence molecular tomography (FMT) and x-ray computed tomography (XCT) imaging yields a superior optical imaging modality for three-dimensional small animal fluorescence imaging over stand-alone optical systems. Imaging accuracy was improved by using XCT information in the fluorescence reconstruction method. Despite this progress, the detection sensitivity of targeted fluorescence agents remains limited by nonspecific background accumulation of the fluorochrome employed, which complicates early detection of murine cancers. Therefore we examine whether x-ray CT information and bulk fluorescence detection can be combined to increase detection sensitivity. Accordingly, we investigate the performance of a data-driven fluorescence background estimator employed for subtraction of background fluorescence from acquisition data. Using mice containing known fluorochromes ex vivo, we demonstrate the reduction of background signals from reconstructed images and the resulting sensitivity improvements. Finally, by applying the method to in vivo data from K-ras transgenic mice developing lung cancer, we find small tumors at an early stage compared with reconstructions performed using raw data. We conclude with the benefits of employing fluorescence subtraction in hybrid FMT-XCT for early detection studies.
Digital subtraction angiography of the pulmonary arteries for the diagnosis of pulmonary embolism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ludwig, J.W.; Verhoeven, L.A.J.; Kersbergen, J.J.
1983-06-01
A comparative study of radionuclide scanning (perfusion studies in all 18 patients and ventilation studies in 9) and digital subtraction angiography (DSA) was performed in 18 patients with suspected pulmonary thromboembolism. In 17 patients good visualization of the arteries was obtained with DSA; 10 of these patients had no pre-existing lung disease, and 7 had chronic obstructive pulmonary disease (COPD). The information provided by DSA in this small group was equal to or better than that of scintigraphy, especially in patients with COPD, and the reliability of DSA was superior to that of radionuclide scintigraphy. Methods for preventing motion artifacts with DSA are also described.
Iwanishi, Katsuhiro; Watabe, Hiroshi; Hayashi, Takuya; Miyake, Yoshinori; Minato, Kotaro; Iida, Hidehiro
2009-06-01
Cerebral blood flow (CBF), cerebral metabolic rate of oxygen (CMRO(2)), oxygen extraction fraction (OEF), and cerebral blood volume (CBV) are quantitatively measured with PET with (15)O gases. Kudomi et al. developed a dual tracer autoradiographic (DARG) protocol that enables the duration of a PET study to be shortened by sequentially administering (15)O(2) and C(15)O(2) gases. In this protocol, before the sequential PET scan with (15)O(2) and C(15)O(2) gases (the (15)O(2)-C(15)O(2) PET scan), a PET scan with C(15)O must be performed first to obtain a CBV image. C(15)O has a high affinity for red blood cells and a very slow washout rate, and residual radioactivity from C(15)O might exist during a (15)O(2)-C(15)O(2) PET scan. As the current DARG method assumes no residual C(15)O radioactivity before scanning, we performed computer simulations to evaluate the influence of the residual C(15)O radioactivity on the accuracy of CBF and OEF values measured with the DARG method, and also propose a subtraction technique to minimize the error due to the residual C(15)O radioactivity. In the simulation, normal and ischemic conditions were considered. The (15)O(2) and C(15)O(2) PET count curves with the residual C(15)O PET counts were generated from the arterial input function with the residual C(15)O radioactivity. The amounts of residual C(15)O radioactivity were varied by changing the interval between the C(15)O PET scan and the (15)O(2)-C(15)O(2) PET scan, and the absolute inhaled radioactivity of the C(15)O gas. Using the simulated input functions and the PET counts, the CBF and OEF were computed by the DARG method. Furthermore, we evaluated a subtraction method that subtracts the influence of the C(15)O gas in the input function and PET counts. Our simulations revealed that the CBF and OEF values were underestimated owing to the residual C(15)O radioactivity. The magnitude of this underestimation depended on the amount of C(15)O radioactivity and the physiological conditions.
This underestimation was corrected by the subtraction method. This study showed the influence of C(15)O radioactivity in DARG protocol, and the magnitude of the influence was affected by several factors, such as the radioactivity of C(15)O, and the physiological condition.
Anatomical noise in contrast-enhanced digital mammography. Part II. Dual-energy imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, Melissa L.; Yaffe, Martin J.; Mainprize, James G.
2013-08-15
Purpose: Dual-energy (DE) contrast-enhanced digital mammography (CEDM) uses an iodinated contrast agent in combination with digital mammography (DM) to evaluate lesions on the basis of tumor angiogenesis. In DE imaging, low-energy (LE) and high-energy (HE) images are acquired after contrast administration and their logarithms are subtracted to cancel the appearance of normal breast tissue. Often there is incomplete signal cancellation in the subtracted images, creating a background "clutter" that can impair lesion detection. This is the second component of a two-part report on anatomical noise in CEDM. In Part I the authors characterized the anatomical noise for single-energy (SE) temporal subtraction CEDM by a power law, with model parameters α and β. In this work the authors quantify the anatomical noise in DE CEDM clinical images and compare this with the noise in SE CEDM. The influence on the anatomical noise of the presence of iodine in the breast, the timing of imaging postcontrast administration, and the x-ray energy used for acquisition are each evaluated. Methods: The power law parameters, α and β, were measured from unprocessed LE and HE images and from DE subtracted images to quantify the anatomical noise. A total of 98 DE CEDM cases acquired in a previous clinical pilot study were assessed. Conventional DM images from 75 of the women were evaluated for comparison with DE CEDM. The influence of the imaging technique on anatomical noise was determined from an analysis of differences between the power law parameters as measured in DM, LE, HE, and DE subtracted images for each subject. Results: In DE CEDM, weighted image subtraction lowers β to about 1.1 from 3.2 and 3.1 in LE and HE unprocessed images, respectively. The presence of iodine has a small but significant effect in LE images, reducing β by about 0.07 compared to DM, with α unchanged.
Increasing the x-ray energy, from that typical in DM to a HE beam, significantly decreases α by about 2 × 10⁻⁵ mm², and lowers β by about 0.14 compared to LE images. A comparison of SE and DE CEDM at 4 min postcontrast shows equivalent power law parameters in unprocessed images, and lower α and β by about 3 × 10⁻⁵ mm² and 0.50, respectively, in DE versus SE subtracted images. Conclusions: Image subtraction in both SE and DE CEDM reduces β by over a factor of 2, while maintaining α below that in DM. Given the equivalent α between SE and DE unprocessed CEDM images, and the smaller anatomical noise in the DE subtracted images, the DE approach may have an advantage over SE CEDM. It will be necessary to test this potential advantage in future lesion detectability experiments, which account for realistic lesion signals. The authors' results suggest that LE images could be used in place of DM images in CEDM exam interpretation.
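The weighted log subtraction underlying DE CEDM can be sketched in a noiseless toy model; the attenuation coefficients and thickness maps below are illustrative assumptions, not the clinical calibration:

```python
import numpy as np

# Illustrative linear attenuation coefficients (per cm): iodine attenuates
# the HE beam relatively more because of its K-edge; values are made up.
mu_tissue_le, mu_tissue_he = 0.80, 0.50
mu_iodine_le, mu_iodine_he = 1.00, 2.00

t_tissue = np.array([[3.0, 4.0], [5.0, 3.5]])   # tissue thickness map (cm)
t_iodine = np.array([[0.0, 0.0], [0.1, 0.0]])   # iodine in one pixel only

# Simulated transmitted intensities (Beer-Lambert, unit incident flux).
le = np.exp(-(mu_tissue_le * t_tissue + mu_iodine_le * t_iodine))
he = np.exp(-(mu_tissue_he * t_tissue + mu_iodine_he * t_iodine))

# Weight w = mu_tissue_he / mu_tissue_le cancels the tissue term exactly
# in this noiseless model, leaving a signal proportional to iodine alone.
w = mu_tissue_he / mu_tissue_le
de = np.log(le) * w - np.log(he)
print(np.round(de, 4))  # tissue cancels; only the iodine pixel is nonzero
```

In clinical images the cancellation is imperfect (the residual "clutter" discussed above), which is exactly what the power-law noise analysis quantifies.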
NASA Astrophysics Data System (ADS)
Zhang, Yan; Tang, Baoping; Liu, Ziran; Chen, Rengxiang
2016-02-01
Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noises and the interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation has been an effective technique to detect bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference, and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequency associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the production functions (PFs). The bandwidth parameter is optimized using a local ‘greedy’ scheme along with Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is elaborated after wavelet filtering. Instead of resorting to a reference signal as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. 
The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting the fault-related impulses hidden in vibration signals and performs well for bearing fault diagnosis.
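A generic magnitude-domain spectral subtraction step looks like the following; the paper estimates the noise PSD from a production function, which is simplified here to a known noise power (a sketch, not the authors' implementation):

```python
import numpy as np

def spectral_subtract(signal, noise_power):
    """Basic magnitude spectral subtraction: subtract an estimated noise
    magnitude from each bin, half-wave rectify (floor at zero), and keep
    the noisy phase. noise_power is the assumed white-noise variance."""
    spec = np.fft.rfft(signal)
    mag, phase = np.abs(spec), np.angle(spec)
    # Expected per-bin noise magnitude for white noise of this variance.
    noise_mag = np.sqrt(noise_power * len(signal))
    clean_mag = np.maximum(mag - noise_mag, 0.0)
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(signal))

# Toy signal: an 8-cycle "fault" tone buried in white noise.
rng = np.random.default_rng(1)
t = np.arange(512)
x = np.sin(2 * np.pi * 8 * t / 512) + 0.3 * rng.standard_normal(512)
y = spectral_subtract(x, noise_power=0.09)
```

After subtraction, the tone at bin 8 still dominates the spectrum while the broadband noise floor is suppressed, which is the effect exploited before envelope analysis of the bearing signal.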
Chavez, P.S.
1988-01-01
Digital analysis of remotely sensed data has become an important component of many earth-science studies. These data are often processed through a set of preprocessing or "clean-up" routines that includes a correction for atmospheric scattering, often called haze. Various methods to correct or remove the additive haze component have been developed, including the widely used dark-object subtraction technique. A problem with most of these methods is that the haze values for each spectral band are selected independently. This can create problems because atmospheric scattering is highly wavelength-dependent in the visible part of the electromagnetic spectrum and the scattering values are correlated with each other. Therefore, multispectral data such as from the Landsat Thematic Mapper and Multispectral Scanner must be corrected with haze values that are spectral band dependent. An improved dark-object subtraction technique is demonstrated that allows the user to select a relative atmospheric scattering model to predict the haze values for all the spectral bands from a selected starting band haze value. The improved method normalizes the predicted haze values for the different gain and offset parameters used by the imaging system. Examples of haze value differences between the old and improved methods for Thematic Mapper Bands 1, 2, 3, 4, 5, and 7 are 40.0, 13.0, 12.0, 8.0, 5.0, and 2.0 vs. 40.0, 13.2, 8.9, 4.9, 16.7, and 3.3, respectively, using a relative scattering model of a clear atmosphere. In one Landsat multispectral scanner image the haze value differences for Bands 4, 5, 6, and 7 were 30.0, 50.0, 50.0, and 40.0 for the old method vs. 30.0, 34.4, 43.6, and 6.4 for the new method using a relative scattering model of a hazy atmosphere. © 1988.
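The correction itself is a simple per-band offset subtraction. The sketch below uses the improved-method Thematic Mapper haze values quoted in the abstract; the toy image values are illustrative:

```python
import numpy as np

# Improved-method haze values for TM Bands 1, 2, 3, 4, 5, 7 (from the
# abstract, clear-atmosphere scattering model). The point of the improved
# technique is that these are predicted from one starting band rather
# than picked independently per band.
haze = np.array([40.0, 13.2, 8.9, 4.9, 16.7, 3.3])

# Toy 6-band image (bands stacked along axis 0), constant DN of 50.
image = np.full((6, 2, 2), 50.0)

# Subtract each band's haze value, clipping negative results to zero.
corrected = np.clip(image - haze[:, None, None], 0.0, None)
print(np.round(corrected[:, 0, 0], 1))  # per-band corrected DNs
```

For a DN of 50 the corrected per-band values are 10.0, 36.8, 41.1, 45.1, 33.3, and 46.7, showing the strongly wavelength-dependent offset the abstract emphasizes.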
Proper Image Subtraction—Optimal Transient Detection, Photometry, and Hypothesis Testing
NASA Astrophysics Data System (ADS)
Zackay, Barak; Ofek, Eran O.; Gal-Yam, Avishay
2016-10-01
Transient detection and flux measurement via image subtraction stand at the base of time domain astronomy. Due to the varying seeing conditions, the image subtraction process is non-trivial, and existing solutions suffer from a variety of problems. Starting from basic statistical principles, we develop the optimal statistic for transient detection, flux measurement, and any image-difference hypothesis testing. We derive a closed-form statistic that: (1) is mathematically proven to be the optimal transient detection statistic in the limit of background-dominated noise, (2) is numerically stable, (3) for accurately registered, adequately sampled images, does not leave subtraction or deconvolution artifacts, (4) allows automatic transient detection to the theoretical sensitivity limit by providing credible detection significance, (5) has uncorrelated white noise, (6) is a sufficient statistic for any further statistical test on the difference image, and, in particular, allows us to distinguish particle hits and other image artifacts from real transients, (7) is symmetric to the exchange of the new and reference images, (8) is at least an order of magnitude faster to compute than some popular methods, and (9) is straightforward to implement. Furthermore, we present extensions of this method that make it resilient to registration errors, color-refraction errors, and any noise source that can be modeled. In addition, we show that the optimal way to prepare a reference image is the proper image coaddition presented in Zackay & Ofek. We demonstrate this method on simulated data and real observations from the PTF data release 2. We provide an implementation of this algorithm in MATLAB and Python.
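In the background-dominated limit, the proper difference image can be written in closed form in Fourier space. The sketch below assumes equal flux zero-points and uniform background noise in the two images, which is a simplification of the published statistic:

```python
import numpy as np

def proper_difference(N, R, Pn, Pr, sigma_n, sigma_r):
    """Proper image difference in the style of Zackay, Ofek & Gal-Yam,
    assuming the new image N and reference R share a flux zero-point and
    have uniform background noise sigma_n, sigma_r. Pn, Pr are the PSFs
    sampled on the same grid as the images."""
    N_f, R_f = np.fft.fft2(N), np.fft.fft2(R)
    Pn_f, Pr_f = np.fft.fft2(Pn), np.fft.fft2(Pr)
    # the denominator whitens the difference so its noise is uncorrelated
    denom = np.sqrt(sigma_n ** 2 * np.abs(Pr_f) ** 2 +
                    sigma_r ** 2 * np.abs(Pn_f) ** 2)
    D_f = (Pr_f * N_f - Pn_f * R_f) / denom
    return np.real(np.fft.ifft2(D_f))
```

As a sanity check, subtracting an image from itself (same PSF, same noise) yields an identically zero difference, reflecting the statistic's symmetry between the new and reference images.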
NASA Astrophysics Data System (ADS)
Malfense Fierro, Gian Piero; Meo, Michele
2017-04-01
Currently there are numerous phased array techniques, such as Full Matrix Capture (FMC) and the Total Focusing Method (TFM), that provide good damage assessment for composite materials. However, linear methods struggle to evaluate and assess low levels of damage, while nonlinear methods have shown great promise in early damage detection. A sweep-and-subtraction evaluation method coupled with a constructive nonlinear array method (CNA) is proposed in order to assess damage-specific nonlinearities, address issues with frequency selection when using nonlinear ultrasound imaging techniques, and reduce equipment-generated nonlinearities. These methods were evaluated using multiple excitation locations on an impacted composite panel with complex damage (barely visible impact damage). According to various recent works, damage excitation can be accentuated by exciting at local defect resonance (LDR) frequencies, although these frequencies are not always easily determined. The sweep methodology uses broadband excitation to determine both local defect and material resonances; by assessing locally generated defect nonlinearities with a laser vibrometer, it is possible to identify which frequencies excite the complex geometry of the crack. The combined effect of accurately determining local defect resonances, the use of an image subtraction method, and the reduction of equipment-based nonlinearities using CNA results in greater repeatability and clearer nonlinear imaging (NIM).
Contrast enhanced imaging with a stationary digital breast tomosynthesis system
NASA Astrophysics Data System (ADS)
Puett, Connor; Calliste, Jabari; Wu, Gongting; Inscoe, Christina R.; Lee, Yueh Z.; Zhou, Otto; Lu, Jianping
2017-03-01
Digital breast tomosynthesis (DBT) captures some depth information and thereby improves the conspicuity of breast lesions, compared to standard mammography. Using contrast during DBT may also help distinguish malignant from benign sites. However, adequate visualization of the low iodine signal requires a subtraction step to remove background signal and increase lesion contrast. Additionally, attention to factors that limit contrast, including scatter, noise, and artifact, are important during the image acquisition and post-acquisition processing steps. Stationary DBT (sDBT) is an emerging technology that offers a higher spatial and temporal resolution than conventional DBT. This phantom-based study explored contrast-enhanced sDBT (CE sDBT) across a range of clinically-appropriate iodine concentrations, lesion sizes, and breast thicknesses. The protocol included an effective scatter correction method and an iterative reconstruction technique that is unique to the sDBT system. The study demonstrated the ability of this CE sDBT system to collect projection images adequate for both temporal subtraction (TS) and dual-energy subtraction (DES). Additionally, the reconstruction approach preserved the improved contrast-to-noise ratio (CNR) achieved in the subtraction step. Finally, scatter correction increased the iodine signal and CNR of iodine-containing regions in projection views and reconstructed image slices during both TS and DES. These findings support the ongoing study of sDBT as a potentially useful tool for contrast-enhanced breast imaging and also highlight the significant effect that scatter has on image quality during DBT.
Momeni, Saba; Pourghassem, Hossein
2014-08-01
Recently, image fusion has taken on a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely used imaging techniques for diagnosing brain vascular diseases and for radiosurgery of the brain. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of a brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of vessel dispersion generated by the injected contrast material. The proposed fusion scheme applies different fusion methods to the high- and low-frequency contents, based on the coefficient characteristics of the wrapping second-generation curvelet transform and a novel content selection strategy. The content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In the fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules to the high-frequency coefficients. For the low-frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. The proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of the proposed fusion algorithm in comparison with common and basic fusion algorithms.
NASA Astrophysics Data System (ADS)
Ikejimba, Lynda; Kiarashi, Nooshin; Lin, Yuan; Chen, Baiyu; Ghate, Sujata V.; Zerhouni, Moustafa; Samei, Ehsan; Lo, Joseph Y.
2012-03-01
Digital breast tomosynthesis (DBT) is a novel x-ray imaging technique that provides 3D structural information of the breast. In contrast to 2D mammography, DBT minimizes tissue overlap, potentially improving cancer detection and reducing the number of unnecessary recalls. The addition of a contrast agent to DBT and mammography for lesion enhancement has the benefit of providing functional information about a lesion, as lesion contrast uptake and washout patterns may help differentiate between benign and malignant tumors. This study used a task-based method to determine the optimal imaging approach by analyzing six imaging paradigms in terms of their ability to resolve iodine at a given dose: contrast-enhanced mammography and tomosynthesis, temporal subtraction mammography and tomosynthesis, and dual-energy subtraction mammography and tomosynthesis. Imaging performance was characterized using a detectability index d', derived from the system task transfer function (TTF), an imaging task, iodine contrast, and the noise power spectrum (NPS). The task modeled a 5 mm lesion containing iodine concentrations between 2.1 mg/cc and 8.6 mg/cc. TTF was obtained using an edge phantom, and the NPS was measured over several exposure levels, energies, and target-filter combinations. Using a structured CIRS phantom, d' was generated as a function of dose and iodine concentration. In general, higher dose gave higher d', but for the lowest iodine concentration and lowest dose, dual-energy subtraction tomosynthesis and temporal subtraction tomosynthesis demonstrated the highest performance.
Systematic effects of foreground removal in 21-cm surveys of reionization
NASA Astrophysics Data System (ADS)
Petrovic, Nada; Oh, S. Peng
2011-05-01
21-cm observations have the potential to revolutionize our understanding of the high-redshift Universe. Whilst extremely bright radio continuum foregrounds exist at these frequencies, their spectral smoothness can be exploited to allow efficient foreground subtraction. It is well known that - regardless of other instrumental effects - this removes power on scales comparable to the survey bandwidth. We investigate associated systematic biases. We show that removing line-of-sight fluctuations on large scales aliases into suppression of the 3D power spectrum across a broad range of scales. This bias can be dealt with by correctly marginalizing over small wavenumbers in the 1D power spectrum; however, the unbiased estimator will have unavoidably larger variance. We also show that Gaussian realizations of the power spectrum permit accurate and extremely rapid Monte Carlo simulations for error analysis; repeated realizations of the fully non-Gaussian field are unnecessary. We perform Monte Carlo maximum likelihood simulations of foreground removal which yield unbiased, minimum variance estimates of the power spectrum in agreement with Fisher matrix estimates. Foreground removal also distorts the 21-cm probability distribution function (PDF), reducing the contrast between neutral and ionized regions, with potentially serious consequences for efforts to extract information from the PDF. We show that it is the subtraction of large-scale modes which is responsible for this distortion, and that it is less severe in the earlier stages of reionization. It can be reduced by using larger bandwidths. In the late stages of reionization, identification of the largest ionized regions (which consist of foreground emission only) provides calibration points which potentially allow recovery of large-scale modes. 
Finally, we also show that (i) the broad frequency response of synchrotron and free-free emission will smear out any features in the electron momentum distribution and ensure spectrally smooth foregrounds and (ii) extragalactic radio recombination lines should be negligible foregrounds.
Renormalization of the Brazilian chiral nucleon-nucleon potential
NASA Astrophysics Data System (ADS)
Da Rocha, Carlos A.; Timóteo, Varese S.
2013-03-01
In this work we present a renormalization of the Brazilian nucleon-nucleon (NN) potential using a subtractive method. We show that correlated two-pion exchange is important for isovector channels, mainly in the tensor and central potentials.
Geary, D C; Frensch, P A; Wiley, J G
1993-06-01
Thirty-six younger adults (10 male, 26 female; ages 18 to 38 years) and 36 older adults (14 male, 22 female; ages 61 to 80 years) completed simple and complex paper-and-pencil subtraction tests and solved a series of simple and complex computer-presented subtraction problems. For the computer task, strategies and solution times were recorded on a trial-by-trial basis. Older Ss used a developmentally more mature mix of problem-solving strategies to solve both simple and complex subtraction problems. Analyses of component scores derived from the solution times suggest that the older Ss are slower at number encoding and number production but faster at executing the borrow procedure. In contrast, groups did not appear to differ in the speed of subtraction fact retrieval. Results from a computational simulation are consistent with the interpretation that older adults' advantage for strategy choices and for the speed of executing the borrow procedure might result from more practice solving subtraction problems.
NASA Astrophysics Data System (ADS)
Tan, Jun; Song, Peng; Li, Jinshan; Wang, Lei; Zhong, Mengxuan; Zhang, Xiaobo
2017-06-01
The surface-related multiple elimination (SRME) method is based on the feedback formulation and has become one of the most widely used multiple-suppression methods. However, there are apparent differences between the predicted multiples and those in the source seismic records, so conventional adaptive multiple subtraction methods may be barely able to suppress multiples effectively in actual production. This paper introduces a combined adaptive multiple attenuation method based on an optimized event tracing technique and extended Wiener filtering. The method first uses the multiple records predicted by SRME to generate a multiple velocity spectrum, then separates the original record into an approximate primary record and an approximate multiple record by applying the optimized event tracing method and short-time-window FK filtering. After applying the extended Wiener filtering method, residual multiples in the approximate primary record can be eliminated and the damaged primaries can be restored from the approximate multiple record. This method combines the advantages of multiple elimination based on optimized event tracing with the extended Wiener filtering technique. It is well suited to suppressing typical hyperbolic and other types of multiples, with the advantage of minimizing damage to the primaries. Synthetic and field data tests show that this method produces better multiple elimination results than the traditional multi-channel Wiener filter method and is more suitable for multiple elimination in complicated geological areas.
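The baseline Wiener matching step that the paper extends can be sketched for a single trace as follows. The filter length, prewhitening value, and single-trace setup are illustrative simplifications, not the paper's extended formulation:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def wiener_subtract(data, mult, nf=15, eps=1e-3):
    """Estimate primaries by least-squares matching the predicted multiples
    `mult` to the recorded trace `data` with a length-`nf` shaping filter,
    then subtracting the matched multiples."""
    n = len(data)
    # autocorrelation of the predicted multiples (lags 0..nf-1)
    r = np.array([np.dot(mult[: n - k], mult[k:n]) for k in range(nf)])
    # crosscorrelation between the data and the predicted multiples
    g = np.array([np.dot(data[k:n], mult[: n - k]) for k in range(nf)])
    r[0] *= 1.0 + eps           # prewhitening keeps the normal equations stable
    f = solve_toeplitz((r, r), g)  # solve the symmetric Toeplitz normal equations
    matched = np.convolve(mult, f)[:n]
    return data - matched
```

If the recorded trace consists purely of filtered multiples, the residual after subtraction is close to zero; in practice the primaries survive because they do not correlate with the predicted multiples.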
The Super-Linear Slope Of The Spatially-resolved Star Formation Law In NGC 3521 And NGC 5194 (m51a)
NASA Astrophysics Data System (ADS)
Liu, Guilin; Koda, J.; Calzetti, D.; Fukuhara, M.; Momose, R.
2011-01-01
We have conducted interferometric observations with CARMA and OTF mapping with the 45-m telescope at NRO in the CO (1-0) emission line of NGC 3521. Combining these new data with similar data for M51a and archival SINGS H-alpha, 24um, THINGS H I, and GALEX FUV data for both galaxies, we investigate the empirical scaling law that connects the surface density of star formation rate (SFR) and cold gas (the Schmidt-Kennicutt law) on a spatially resolved basis, and find a super-linear slope when carefully subtracting the background emission in the SFR image. We argue that plausibly deriving SFR maps of nearby galaxies requires the diffuse stellar/dust background emission to be carefully subtracted (especially in the mid-IR). An approach to this task is presented and applied in our pixel-by-pixel analysis of both galaxies, showing that the controversy over whether the molecular S-K law is super-linear or basically linear arises from removing or preserving the local background. In both galaxies, the power index of the molecular S-K law is super-linear (1.5-1.9) at the highest available resolution (230 pc) and decreases monotonically with decreasing resolution, while the scatter (mainly intrinsic) increases as the resolution becomes higher, indicating a trend for the S-K law to break down below some scale. Both quantities are systematically larger in M51a than in NGC 3521, but when plotted against the de-projected scale they become highly consistent between the two galaxies, tentatively suggesting that the sub-kpc molecular S-K law in spiral galaxies depends only on the scale being considered, without varying among spiral galaxies. We obtain slope = -1.1[log(scale/kpc)] + 1.4 and scatter = -0.2[scale/kpc] + 0.7 through fitting to the M51a data, which describes both galaxies impressively well on sub-kpc scales. However, a larger sample of galaxies with better sensitivity, better resolution, and a broader FoV is required to test these results.
Three Research Strategies of Neuroscience and the Future of Legal Imaging Evidence.
Jun, Jinkwon; Yoo, Soyoung
2018-01-01
Neuroscientific imaging evidence (NIE) has become an integral part of the criminal justice system in the United States. However, in most legal cases, NIE is submitted and used only to mitigate penalties because the court does not recognize it as substantial evidence, considering its lack of reliability. Nevertheless, we here discuss how neuroscience is expected to improve the use of NIE in the legal system. For this purpose, we classified the efforts of neuroscientists into three research strategies: cognitive subtraction, the data-driven approach, and the brain-manipulation approach. Cognitive subtraction is outdated and problematic; consequently, the court deemed it to be an inadequate approach in terms of legal evidence in 2012. In contrast, the data-driven and brain manipulation approaches, which are state-of-the-art approaches, have overcome the limitations of cognitive subtraction. The data-driven approach brings data science into the field and is benefiting immensely from the development of research platforms that allow automatized collection, analysis, and sharing of data. This broadens the scale of imaging evidence. The brain-manipulation approach uses high-functioning tools that facilitate non-invasive and precise human brain manipulation. These two approaches are expected to have synergistic effects. Neuroscience has strived to improve the evidential reliability of NIE, with considerable success. With the support of cutting-edge technologies, and the progress of these approaches, the evidential status of NIE will be improved and NIE will become an increasingly important part of legal practice.
Destructive effect of HIFU on rabbit embedded endometrial carcinoma tissues and their vascularities.
Guan, Liming; Xu, Gang
2017-03-21
To evaluate the damage effect of high-intensity focused ultrasound (HIFU) on early-stage endometrial cancer tissues and their vascularities. Rabbit endometrial cancer models were established via tumor block implantation for a prospective controlled study. Ultrasonic ablation efficacy was evaluated by pathologic and imaging changes. The target lesions of experimental rabbits before and after ultrasonic ablation were examined at autopsy. Slides were prepared with hematoxylin-eosin staining, elastic fiber staining, and endothelial cell staining and observed by optical microscopy; one slide was observed by electron microscopy. The target lesions of ablated animals were then examined by vascular imaging: one group was visualized by digital subtraction angiography, one group was quantified by color Doppler flow imaging, and one group was assessed by dye perfusion. SPSS 19.0 software was used for statistical analyses. Histological examination indicated that HIFU caused coagulative necrosis of the tumor tissues and their vascularities. Tumor vascular structural components, including elastic fibers and endothelial cells, were all destroyed by ultrasonic ablation. Digital subtraction angiography showed that tumor vascular shadows disappeared after ultrasonic ablation. After ultrasonic ablation, the gray-scale of tumor nodules was enhanced on ultrasonography, and tumor peripheral and internal blood flow signals disappeared or were significantly reduced on color Doppler flow imaging. When vascular perfusion was performed after ultrasonic ablation, tumor vessels could not be filled by the dye. High-intensity focused ultrasound, as a noninvasive method, can destroy whole endometrial cancer cells and their supplying vascularities, and may be an alternative approach for targeted therapy and a new antiangiogenic strategy for endometrial cancer.
Improved LSB matching steganography with histogram characters reserved
NASA Astrophysics Data System (ADS)
Chen, Zhihong; Liu, Wenyao
2008-03-01
This letter builds on research into the LSB (least significant bit, i.e., the last bit of a binary pixel value) matching steganographic method and the steganalytic methods that target the histograms of cover images, and proposes a modification to LSB matching. In LSB matching, if the LSB of the next cover pixel matches the next bit of secret data, nothing is done; otherwise, one is added to or subtracted from the cover pixel value at random. In the improved method, a steganographic information table is defined that records the changes introduced by the embedded secret bits. Using this table, the decision to add or subtract one at the next pixel with the same value is made dynamically, so that the change to the cover image's histogram is minimized. The modified method therefore embeds the same payload as LSB matching but with improved steganographic security and less vulnerability to attacks. Experimental results for the new method show that the histograms preserve their attributes, such as peak values and alternating trends, to an acceptable degree, and perform better than LSB matching with respect to histogram distortion and resistance against existing steganalysis.
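A rough sketch of the table-based ±1 rule might look like this; the exact bookkeeping in the paper may differ, so the `tally` logic below should be read as a hypothetical reconstruction of the idea, not the authors' implementation:

```python
import random

def embed(pixels, bits):
    """LSB-matching embedding with a per-value change tally so that +1 and -1
    modifications roughly cancel in the histogram. Assumes len(pixels) >= len(bits)
    and 8-bit pixel values."""
    tally = {}        # net histogram drift introduced so far, per pixel value
    out = list(pixels)
    for i, bit in enumerate(bits):
        p = out[i]
        if p & 1 == bit:
            continue                      # LSB already matches: do nothing
        drift = tally.get(p, 0)
        if p == 0:
            step = 1                      # stay within the 8-bit range
        elif p == 255:
            step = -1
        elif drift > 0:
            step = -1                     # balance earlier +1 changes at this value
        elif drift < 0:
            step = 1                      # balance earlier -1 changes at this value
        else:
            step = random.choice((-1, 1))
        out[i] = p + step                 # +/-1 both flip the LSB, so the bit embeds
        tally[p] = drift + step
    return out
```

Extraction is unchanged from plain LSB matching: read the LSB of each stego pixel.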
Computation of wind tunnel wall effects for complex models using a low-order panel method
NASA Technical Reports Server (NTRS)
Ashby, Dale L.; Harris, Scott H.
1994-01-01
A technique for determining wind tunnel wall effects for complex models using the low-order, three dimensional panel method PMARC (Panel Method Ames Research Center) has been developed. Initial validation of the technique was performed using lift-coefficient data in the linear lift range from tests of a large-scale STOVL fighter model in the National Full-Scale Aerodynamics Complex (NFAC) facility. The data from these tests served as an ideal database for validating the technique because the same model was tested in two wind tunnel test sections with widely different dimensions. The lift-coefficient data obtained for the same model configuration in the two test sections were different, indicating a significant influence of the presence of the tunnel walls and mounting hardware on the lift coefficient in at least one of the two test sections. The wind tunnel wall effects were computed using PMARC and then subtracted from the measured data to yield corrected lift-coefficient versus angle-of-attack curves. The corrected lift-coefficient curves from the two wind tunnel test sections matched very well. Detailed pressure distributions computed by PMARC on the wing lower surface helped identify the source of large strut interference effects in one of the wind tunnel test sections. Extension of the technique to analysis of wind tunnel wall effects on the lift coefficient in the nonlinear lift range and on drag coefficient will require the addition of boundary-layer and separated-flow models to PMARC.
Wiegers, Evita C; Philips, Bart W J; Heerschap, Arend; van der Graaf, Marinette
2017-12-01
J-difference editing is often used to select resonances of compounds with coupled spins in ¹H-MR spectra. Accurate phase and frequency alignment prior to subtracting J-difference-edited MR spectra is important to avoid artefactual contributions to the edited resonance. In-vivo J-difference-edited MR spectra were aligned by maximizing the normalized scalar product between two spectra (i.e., the correlation over a spectral region). The performance of our correlation method was compared with alignment by spectral registration and with alignment of the highest point in two spectra. The correlation method was tested at different SNR levels and for a broad range of phase and frequency shifts. In-vivo application of the proposed correlation method showed reduced subtraction errors and increased fit reliability in difference spectra as compared with conventional peak alignment. The correlation method and the spectral registration method generally performed equally well. However, better alignment using the correlation method was obtained for spectra with a low SNR (down to ~2) and for relatively large frequency shifts. Our correlation method for simultaneous phase and frequency alignment is able to correct both small and large phase and frequency drifts and also performs well at low SNR levels.
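The core of such a correlation-based alignment can be sketched as a search that maximizes the normalized scalar product. The integer-shift grid search below is a simplification: the published method also optimizes phase and would use sub-point interpolation on complex spectra:

```python
import numpy as np

def align(spec, ref, max_shift=20):
    """Frequency-align `spec` to `ref` by maximizing the normalized scalar
    product (correlation) over integer point shifts."""
    best, best_shift = -np.inf, 0
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(spec, s)
        # normalized scalar product = cosine similarity of the two spectra
        score = np.dot(shifted, ref) / (np.linalg.norm(shifted) * np.linalg.norm(ref))
        if score > best:
            best, best_shift = score, s
    return np.roll(spec, best_shift), best_shift
```

Applied to a spectrum that is a shifted copy of the reference, the search recovers the shift exactly and the aligned spectrum matches the reference.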
NASA Astrophysics Data System (ADS)
Ribeiro, Raquel; Santos, Xavier; Sillero, Neftali; Carretero, Miguel A.; Llorente, Gustavo A.
2009-03-01
The human exploitation of land resources (land use) has been considered the major factor responsible for changes in biodiversity within terrestrial ecosystems, given that it directly affects the distribution of the fauna. Reptiles are known to be particularly sensitive to habitat change due to their ecological constraints. Here, the impact of land use on reptile diversity was analysed, choosing Catalonia (NE Iberia) as a case study. This region provides a suitable scenario for such a biogeographical study since it harbours: 1) a rich reptile fauna; 2) a highly diverse environment showing strong variation in those variables usually shaping reptile distributions; and 3) good species distribution data. Potential species richness was calculated using ecological modelling techniques (Ecological Niche Factor Analysis - ENFA). The subtraction of the observed from the potential species richness was the dependent variable in a backwards multiple linear regression using land use variables. Agriculture was the land use with the strongest relation to the non-fulfilment of the potential species richness, indicating a trend towards a deficit of biodiversity. Deciduous forest was the only land use negatively related to the subtracted species richness. The results indicate a clear relationship between land use and biodiversity at a mesoscale. This finding represents an important baseline for conservation guidelines within the habitat change framework because it has been achieved at the same spatial scale as chorological studies and management policies.
Contexts for Column Addition and Subtraction
ERIC Educational Resources Information Center
Lopez Fernandez, Jorge M.; Velazquez Estrella, Aileen
2011-01-01
In this article, the authors discuss their approach to column addition and subtraction algorithms. Adapting an original idea of Paul Cobb and Erna Yackel's from "A Contextual Investigation of Three-Digit Addition and Subtraction" related to packing and unpacking candy in a candy factory, the authors provided an analogous context by…
Developing a Model to Support Students in Solving Subtraction
ERIC Educational Resources Information Center
Murdiyani, Nila Mareta; Zulkardi; Putri, Ratu Ilma Indra; van Eerde, Dolly; van Galen, Frans
2013-01-01
Subtraction has two meanings and each meaning leads to the different strategies. The meaning of "taking away something" suggests a direct subtraction, while the meaning of "determining the difference between two numbers" is more likely to be modeled as indirect addition. Many prior researches found that the second meaning and…
SNIa detection in the SNLS photometric analysis using Morphological Component Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Möller, A.; Ruhlmann-Kleider, V.; Neveu, J.
2015-04-01
Detection of supernovae (SNe) and, more generally, of transient events in large surveys can provide numerous false detections. In the case of a deferred processing of survey images, this implies reconstructing complete light curves for all detections, requiring sizable processing time and resources. Optimizing the detection of transient events is thus an important issue for both present and future surveys. We present here the optimization done in the SuperNova Legacy Survey (SNLS) for the 5-year data deferred photometric analysis. In this analysis, detections are derived from stacks of subtracted images, with one stack per lunation. The 3-year analysis provided 300,000 detections dominated by signals of bright objects that were not perfectly subtracted. Allowing these artifacts to be detected leads not only to a waste of resources but also to possible signal coordinate contamination. We developed a subtracted image stack treatment to reduce the number of non-SN-like events using morphological component analysis. This technique exploits the morphological diversity of objects to be detected to extract the signal of interest. At the level of our subtraction stacks, SN-like events are rather circular objects, while most spurious detections exhibit different shapes. A two-step procedure was necessary to have a proper evaluation of the noise in the subtracted image stacks and thus a reliable signal extraction. We also set up a new detection strategy to obtain coordinates with good resolution for the extracted signal. SNIa Monte-Carlo (MC) generated images were used to study detection efficiency and coordinate resolution. When tested on SNLS 3-year data this procedure decreases the number of detections by a factor of two, while losing only 10% of SN-like events, almost all faint ones. MC results show that SNIa detection efficiency is equivalent to that of the original method for bright events, while the coordinate resolution is improved.
NASA Astrophysics Data System (ADS)
Choi, Myoung-Hwan; Ahn, Jungryul; Park, Dae Jin; Lee, Sang Min; Kim, Kwangsoo; Cho, Dong-il Dan; Senok, Solomon S.; Koo, Kyo-in; Goo, Yong Sook
2017-02-01
Objective. Direct stimulation of retinal ganglion cells in degenerate retinas by implanting epi-retinal prostheses is a recognized strategy for restoration of visual perception in patients with retinitis pigmentosa or age-related macular degeneration. Elucidating the best stimulus-response paradigms in the laboratory using multielectrode arrays (MEA) is complicated by the fact that the short-latency spikes (within 10 ms) elicited by direct retinal ganglion cell (RGC) stimulation are obscured by the stimulus artifact which is generated by the electrical stimulator. Approach. We developed an artifact subtraction algorithm based on topographic prominence discrimination, wherein the duration of prominences within the stimulus artifact is used as a strategy for identifying the artifact for subtraction and clarifying the obfuscated spikes which are then quantified using standard thresholding. Main results. We found that the prominence discrimination based filters perform creditably in simulation conditions by successfully isolating randomly inserted spikes in the presence of simple and even complex residual artifacts. We also show that the algorithm successfully isolated short-latency spikes in an MEA-based recording from degenerate mouse retinas, where the amplitude and frequency characteristics of the stimulus artifact vary according to the distance of the recording electrode from the stimulating electrode. By ROC analysis of false positive and false negative first spike detection rates in a dataset of one hundred and eight RGCs from four retinal patches, we found that the performance of our algorithm is comparable to that of a generally-used artifact subtraction filter algorithm which uses a strategy of local polynomial approximation (SALPA). Significance. We conclude that the application of topographic prominence discrimination is a valid and useful method for subtraction of stimulation artifacts with variable amplitudes and shapes. 
We propose that our algorithm may be used as stand-alone or supplementary to other artifact subtraction algorithms like SALPA.
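The width-based discrimination idea can be sketched with off-the-shelf peak finding. The thresholds and the use of `scipy.signal.find_peaks` are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np
from scipy.signal import find_peaks

def split_artifact_peaks(trace, width_thresh=30, prom=1.0):
    """Separate stimulus-artifact peaks from spike peaks by the duration
    (width) of their topographic prominences: long-lasting prominences are
    attributed to the artifact, short ones to putative spikes."""
    peaks, props = find_peaks(np.abs(trace), prominence=prom, width=1)
    widths = props["widths"]          # widths measured at half-prominence
    artifact = peaks[widths >= width_thresh]
    spikes = peaks[widths < width_thresh]
    return artifact, spikes
```

On a synthetic trace containing one broad artifact-like bump and one narrow spike, the rule assigns each peak to the expected class; the artifact peaks would then be modeled and subtracted before standard spike thresholding.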
A Low-Stress Algorithm for Fractions
ERIC Educational Resources Information Center
Ruais, Ronald W.
1978-01-01
An algorithm is given for the addition and subtraction of fractions based on dividing the sum of diagonal numerator and denominator products by the product of the denominators. As an explanation of the teaching method, activities used in teaching are demonstrated. (MN)
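The rule described above reduces to a single formula per operation. A minimal sketch follows; the reduction step via `gcd` is added for tidiness and is not part of the stated low-stress algorithm:

```python
from math import gcd

def add_fractions(a, b, c, d):
    """a/b + c/d via the low-stress rule: sum of diagonal products over the
    product of the denominators, i.e. (a*d + c*b) / (b*d), then reduced."""
    num, den = a * d + c * b, b * d
    g = gcd(num, den)
    return num // g, den // g

def sub_fractions(a, b, c, d):
    """a/b - c/d by the same rule with subtraction in the numerator."""
    num, den = a * d - c * b, b * d
    g = gcd(num, den)
    return num // g, den // g
```

For example, 1/2 + 1/3 gives (1·3 + 1·2)/(2·3) = 5/6, and 3/4 − 1/2 gives (3·2 − 1·4)/(4·2) = 2/8 = 1/4.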
Cryptosporidium spp. and Toxoplasma gondii are important coccidian parasites that have caused waterborne and foodborne disease outbreaks worldwide. Techniques like subtractive hybridization, microarrays, and quantitative reverse transcriptase real-time polymerase chain reaction (...
Knobloch, Gesine; Lauff, Marie-Teres; Hirsch, Sebastian; Schwenke, Carsten; Hamm, Bernd; Wagner, Moritz
2016-12-01
To prospectively compare 3D flow-dependent subtractive MRA vs. 2D flow-independent non-subtractive MRA for assessment of the calf arteries at 3 Tesla. Forty-two patients with peripheral arterial occlusive disease underwent nonenhanced MRA of calf arteries at 3 Tesla with 3D flow-dependent subtractive MRA (fast spin echo sequence; 3D-FSE-MRA) and 2D flow-independent non-subtractive MRA (balanced steady-state-free-precession sequence; 2D-bSSFP-MRA). Moreover, all patients underwent contrast-enhanced MRA (CE-MRA) as standard-of-reference. Two readers performed a per-segment evaluation for image quality (4 = excellent to 0 = non-diagnostic) and severity of stenosis. Image quality scores of 2D-bSSFP-MRA were significantly higher compared to 3D-FSE-MRA (medians across readers: 4 vs. 3; p < 0.0001) with lower rates of non-diagnostic vessel segments on 2D-bSSFP-MRA (reader 1: <1 % vs. 15 %; reader 2: 1 % vs. 29 %; p < 0.05). Diagnostic performance of 2D-bSSFP-MRA and 3D-FSE-MRA across readers showed sensitivities of 89 % (214/240) vs. 70 % (168/240), p = 0.0153; specificities: 91 % (840/926) vs. 63 % (585/926), p < 0.0001; and diagnostic accuracies of 90 % (1054/1166) vs. 65 % (753/1166), p < 0.0001. 2D flow-independent non-subtractive MRA (2D-bSSFP-MRA) is a robust nonenhanced MRA technique for assessment of the calf arteries at 3 Tesla with significantly higher image quality and diagnostic accuracy compared to 3D flow-dependent subtractive MRA (3D-FSE-MRA). • 2D flow-independent non-subtractive MRA (2D-bSSFP-MRA) is a robust NE-MRA technique at 3T • 2D-bSSFP-MRA outperforms 3D flow-dependent subtractive MRA (3D-FSE-MRA) as NE-MRA of calf arteries • 2D-bSSFP-MRA is a promising alternative to CE-MRA for calf PAOD evaluation.
NASA Astrophysics Data System (ADS)
Lotfy, Hayam M.; Mohamed, Dalia; Elshahed, Mona S.
2018-01-01
In the presented work, several spectrophotometric methods were developed for the simultaneous quantification of canagliflozin (CGZ) and metformin hydrochloride (MTF) in their binary mixture. Two of these methods, response correlation (RC) and advanced balance point-spectrum subtraction (ABP-SS), were developed and introduced for the first time in this work; the latter (ABP-SS) was applied to both the zero-order and first-derivative spectra of the drugs. In addition, two recently established methods, advanced amplitude modulation (AAM) and advanced absorbance subtraction (AAS), were also applied. All the proposed methods were validated in accordance with the ICH guidelines and proved to be accurate and precise. Additionally, the linearity range, limit of detection, and limit of quantification were determined, and selectivity was examined through the analysis of laboratory-prepared mixtures and the combined dosage form of the drugs. The proposed methods were capable of determining the two drugs in the ratio present in the pharmaceutical formulation, CGZ:MTF (1:17), without any preliminary separation, further dilution, or standard spiking. When compared statistically, the results obtained by the proposed methods were in compliance with those of the reported chromatographic method, proving the absence of any significant difference in accuracy and precision between the proposed and reported methods.
Parametric Imaging Of Digital Subtraction Angiography Studies For Renal Transplant Evaluation
NASA Astrophysics Data System (ADS)
Gallagher, Joe H.; Meaney, Thomas F.; Flechner, Stuart M.; Novick, Andrew C.; Buonocore, Edward
1981-11-01
A noninvasive method for diagnosing acute tubular necrosis and rejection would be an important tool for the management of renal transplant patients. From a sequence of digital subtraction angiographic images acquired after an intravenous injection of radiographic contrast material, the parametric images of the maximum contrast, the time when the maximum contrast is reached, and two times the time at which one half of the maximum contrast is reached are computed. The parametric images of the time when the maximum is reached clearly distinguish normal from abnormal renal function. However, it is the parametric image of two times the time when one half of the maximum is reached which provides some assistance in differentiating acute tubular necrosis from rejection.
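The three parametric images described above can be sketched in a few lines of NumPy, assuming `seq` is a time-ordered stack of contrast values already obtained by subtraction and `times` holds the frame acquisition times (both hypothetical names for illustration):

```python
import numpy as np

def parametric_maps(seq, times):
    """seq: (T, H, W) stack of subtraction-angiography contrast frames.
    Returns per-pixel maps of (1) peak contrast, (2) time of peak, and
    (3) twice the time at which contrast first reaches half its peak,
    a sketch of the three parameters described in the abstract."""
    cmax = seq.max(axis=0)                       # maximum contrast
    tmax = times[seq.argmax(axis=0)]             # time when maximum is reached
    half_reached = seq >= (cmax / 2.0)           # (T, H, W) boolean
    t_half = times[half_reached.argmax(axis=0)]  # first time half-max is reached
    return cmax, tmax, 2.0 * t_half
```

For a pixel whose contrast curve rises to a peak of 4 at t = 3 and first crosses half-max at t = 2, the maps give (4, 3, 4).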
Comparative efficiency of a scheme of cyclic alternating-period subtraction
NASA Astrophysics Data System (ADS)
Golikov, V. S.; Artemenko, I. G.; Malinin, A. P.
1986-06-01
The estimation of the detection quality of a signal on a background of correlated noise according to the Neumann-Pearson criterion is examined. It is shown that, in a number of cases, the cyclic alternating-period subtraction scheme has a higher noise immunity than the conventional alternating-period subtraction scheme.
Relearning To Teach Arithmetic Addition and Subtraction: A Teacher's Study Guide.
ERIC Educational Resources Information Center
Russell, Susan Jo
This package features videotapes and a study guide that are designed to help teachers revisit the operations of addition and subtraction and consider how students can develop meaningful approaches to these operations. The study guide's sessions are on addition, subtraction, the teacher's role, and goals for students and teachers. The readings in…
Mental Computation or Standard Algorithm? Children's Strategy Choices on Multi-Digit Subtractions
ERIC Educational Resources Information Center
Torbeyns, Joke; Verschaffel, Lieven
2016-01-01
This study analyzed children's use of mental computation strategies and the standard algorithm on multi-digit subtractions. Fifty-eight Flemish 4th graders of varying mathematical achievement level were individually offered subtractions that either stimulated the use of mental computation strategies or the standard algorithm in one choice and two…
The Use of Procedural Knowledge in Simple Addition and Subtraction Problems
ERIC Educational Resources Information Center
Fayol, Michel; Thevenot, Catherine
2012-01-01
In a first experiment, adults were asked to solve one-digit additions, subtractions and multiplications. When the sign appeared 150 ms before the operands, addition and subtraction were solved faster than when the sign and the operands appeared simultaneously on screen. This priming effect was not observed for multiplication problems. A second…
ERIC Educational Resources Information Center
Watson, Silvana Maria R.; Lopes, João; Oliveira, Célia; Judge, Sharon
2018-01-01
Purpose: The purpose of this descriptive study is to investigate why some elementary children have difficulties mastering addition and subtraction calculation tasks. Design/methodology/approach: The researchers have examined error types in addition and subtraction calculation made by 697 Portuguese students in elementary grades. Each student…
NASA Astrophysics Data System (ADS)
Elghobashy, Mohamed R.; Bebawy, Lories I.; Shokry, Rafeek F.; Abbas, Samah S.
2016-03-01
A sensitive and selective stability-indicating successive ratio subtraction coupled with constant multiplication (SRS-CM) spectrophotometric method was developed for the spectral resolution of a five-component mixture without prior separation. The components were hydroquinone in combination with tretinoin, the polymer formed from hydroquinone alkali degradation, 1,4-benzoquinone, and the preservative methyl paraben. The proposed method was used for their determination in pure form and in pharmaceutical formulation. The zero-order absorption spectra of hydroquinone, tretinoin, 1,4-benzoquinone, and methyl paraben were measured at 293, 357.5, 245, and 255.2 nm, respectively. The calibration curves were linear over the concentration ranges of 4.00-46.00, 1.00-7.00, 0.60-5.20, and 1.00-7.00 μg mL⁻¹ for hydroquinone, tretinoin, 1,4-benzoquinone, and methyl paraben, respectively. The pharmaceutical formulation was subjected to mild alkali conditions and measured by this method, resulting in the polymerization of hydroquinone and the formation of toxic 1,4-benzoquinone. The proposed method was validated according to ICH guidelines. The results obtained were statistically analyzed and compared with those obtained by applying the reported method.
NASA Astrophysics Data System (ADS)
Liu, Yun; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Hui, Mei; Liu, Xiaohua; Wu, Yijian
2015-09-01
As an important branch of infrared imaging technology, infrared target tracking and detection has great scientific value and a wide range of applications in both military and civilian areas. For infrared images, which are characterized by low SNR and serious disturbance from background noise, an effective target detection algorithm is proposed in this paper, based on OpenCV, exploiting the frame-to-frame correlation of a moving target and the irrelevance of noise in sequential images. Firstly, since temporal differencing and background subtraction are highly complementary, we use a combined detection method of frame difference and background subtraction based on adaptive background updating. Results indicate that the method is simple and can stably extract the foreground moving target from the video sequence. Because the background updating mechanism continuously updates each pixel, we can detect the infrared moving target more accurately. This paves the way for eventually realizing real-time infrared target detection and tracking, when transplanting the OpenCV algorithms to a DSP platform. Afterwards, we use an optimal thresholding algorithm to segment the image, transforming the gray images into black-and-white images in order to provide a better condition for detection in the image sequences. Finally, using the relevance of moving objects between different frames and mathematical morphology processing, we can eliminate noise, decrease the area, and smooth region boundaries. Experimental results prove that our algorithm achieves the purpose of rapid detection of small infrared targets.
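The combined frame-difference/background-subtraction scheme with adaptive background updating can be sketched without OpenCV, in plain NumPy (the update rate `alpha` and threshold `thresh` are illustrative assumptions, and the foreground-frozen update is one common variant of adaptive updating, not necessarily the paper's exact rule):

```python
import numpy as np

def detect_moving(frames, alpha=0.05, thresh=25.0):
    """Sketch of the combined detector: a pixel is flagged as foreground
    only where BOTH the frame difference and the background subtraction
    exceed `thresh` (the two cues are complementary). The background is
    then updated with a running average, frozen on foreground pixels."""
    bg = frames[0].astype(float)
    prev = frames[0].astype(float)
    masks = []
    for f in frames[1:]:
        f = f.astype(float)
        fd = np.abs(f - prev) > thresh        # frame difference cue
        bs = np.abs(f - bg) > thresh          # background subtraction cue
        fg = fd & bs                          # combined foreground mask
        rate = np.where(fg, 0.0, alpha)       # adaptive background update
        bg = (1 - rate) * bg + rate * f
        prev = f
        masks.append(fg)
    return masks, bg
```

With a single bright pixel moving one step per frame over a static background, only the pixel's current location is flagged; the vacated location is rejected because the (frozen) background still matches it.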
Volumetric display containing multiple two-dimensional color motion pictures
NASA Astrophysics Data System (ADS)
Hirayama, R.; Shiraki, A.; Nakayama, H.; Kakue, T.; Shimobaba, T.; Ito, T.
2014-06-01
We have developed an algorithm which can record multiple two-dimensional (2-D) gradated projection patterns in a single three-dimensional (3-D) object. Each recorded pattern has an individual projection direction and can only be seen from that direction. The proposed algorithm has two important features: the number of recorded patterns is theoretically infinite, and no meaningful pattern can be seen outside of the projection directions. In this paper, we extended the algorithm to record multiple 2-D projection patterns in color. There are two common ways of color mixing: additive and subtractive. Additive color mixing, used to mix light, is based on RGB colors; subtractive color mixing, used to mix inks, is based on CMY colors. We devised two coloring methods, one based on additive and one on subtractive mixing, performed numerical simulations of both, and confirmed their effectiveness. We also fabricated two types of volumetric display and applied the proposed algorithm to them. One is a cubic display constructed from light-emitting diodes (LEDs) in an 8×8×8 array, whose lighting patterns are controlled by a microcomputer board. The other is made of a 7×7 array of threads, each illuminated by a projector connected to a PC. As a result of the implementation, we succeeded in recording multiple 2-D color motion pictures in the volumetric displays. Our algorithm can be applied to digital signage, media art, and so forth.
Chibinski, Ana Cláudia; Coelho, Ulisses; Wambier, Letícia Stadler; Zedebski, Rosário de Arruda Moura; de Moraes, Mari Eli Leonelli; de Moraes, Luiz Cesar
2016-01-01
Purpose This study employed a posteriori registration and subtraction of radiographic images to quantify the apical root resorption in maxillary permanent central incisors after orthodontic treatment, and assessed whether the external apical root resorption (EARR) was related to a range of parameters involved in the treatment. Materials and Methods A sample of 79 patients (mean age, 13.5±2.2 years) with no history of trauma or endodontic treatment of the maxillary permanent central incisors was selected. Periapical radiographs taken before and after orthodontic treatment were digitized and imported to the Regeemy software. Based on an analysis of the posttreatment radiographs, the length of the incisors was measured using Image J software. The mean EARR was described in pixels and relative root resorption (%). The patient's age and gender, tooth extraction, use of elastics, and treatment duration were evaluated to identify possible correlations with EARR. Results The mean EARR observed was 15.44±12.1 pixels (5.1% resorption). No differences in the mean EARR were observed according to patient characteristics (gender, age) or treatment parameters (use of elastics, treatment duration). The only parameter that influenced the mean EARR of a patient was the need for tooth extraction. Conclusion A posteriori registration and subtraction of periapical radiographs was a suitable method to quantify EARR after orthodontic treatment, and the need for tooth extraction increased the extent of root resorption after orthodontic treatment. PMID:27051635
Rheology of concentrated suspensions of non-colloidal rigid fibers
NASA Astrophysics Data System (ADS)
Guazzelli, Elisabeth; Tapia, Franco; Shaikh, Saif; Butler, Jason E.; Pouliquen, Olivier
2017-11-01
Pressure- and volume-imposed rheology is used to study suspensions of non-colloidal, rigid fibers in the concentrated regime for aspect ratios ranging from 3 to 15. The suspensions exhibit yield stresses. Subtracting these apparent yield stresses reveals a viscous scaling for both the shear and normal stresses. The variation in aspect ratio does not affect the friction coefficient (ratio of shear and normal stresses), but increasing the aspect ratio lowers the maximum volume fraction at which the suspension flows. Constitutive laws are proposed for the viscosities and the friction coefficient close to this maximum flowable fraction. The scaling of the stresses near this jamming transition is found to differ substantially from that of a suspension of spheres.
Graphene-assisted multiple-input high-base optical computing
Hu, Xiao; Wang, Andong; Zeng, Mengqi; Long, Yun; Zhu, Long; Fu, Lei; Wang, Jian
2016-01-01
We propose graphene-assisted multiple-input high-base optical computing. We fabricate a nonlinear optical device based on a fiber-pigtail cross-section coated with single-layer graphene grown by the chemical vapor deposition (CVD) method. An approach to implementing modulo 4 operations of three-input hybrid addition and subtraction of quaternary base numbers in the optical domain is presented, using multiple non-degenerate four-wave mixing (FWM) processes in the graphene-coated optical fiber device and (differential) quadrature phase-shift keying ((D)QPSK) signals. We demonstrate 10-Gbaud modulo 4 operations of three-input quaternary hybrid addition and subtraction (A + B − C, A + C − B, B + C − A) in the experiment. The measured optical signal-to-noise ratio (OSNR) penalties for these operations are less than 7 dB at a bit-error rate (BER) of 2 × 10−3. The BER performance as a function of the relative time offset between the three signals (signal offset) is also evaluated, showing favorable performance. PMID:27604866
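The logical operation the device realizes optically is just digit-wise modular arithmetic on quaternary digits; an electrical-domain reference sketch of the three demonstrated operations:

```python
def quaternary_hybrid_ops(A, B, C):
    """Reference for the three optical modulo-4 operations in the paper:
    each input is a quaternary digit (0-3, carried optically by a (D)QPSK
    symbol) and each output is a hybrid addition/subtraction mod 4."""
    assert all(0 <= x <= 3 for x in (A, B, C))
    return ((A + B - C) % 4, (A + C - B) % 4, (B + C - A) % 4)
```

For example, inputs (3, 2, 1) yield (0, 2, 0).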
Patterns of problem-solving in children's literacy and arithmetic.
Farrington-Flint, Lee; Vanuxem-Cotterill, Sophie; Stiller, James
2009-11-01
Patterns of problem-solving among 5- to 7-year-olds were examined on a range of literacy (reading and spelling) and arithmetic-based (addition and subtraction) problem-solving tasks using verbal self-reports to monitor strategy choice. The results showed higher levels of variability in the children's strategy choice across Years 1 and 2 on the arithmetic (addition and subtraction) than on the literacy-based tasks (reading and spelling). However, across all four tasks, the children showed a tendency to move from less sophisticated procedural-based strategies, which included phonological strategies for reading and spelling and counting-all and finger modelling for addition and subtraction, to more efficient retrieval methods from Years 1 to 2. Distinct patterns in children's problem-solving skill were identified on the literacy and arithmetic tasks using two separate cluster analyses. There was a strong association between these two profiles, showing that those children with more advanced problem-solving skills on the arithmetic tasks also showed more advanced profiles on the literacy tasks. The results highlight how children of different ages show flexibility in their use of problem-solving strategies across literacy and arithmetical contexts, and reinforce the importance of studying variations in children's problem-solving skill across different educational contexts.
NASA Astrophysics Data System (ADS)
Pandiyan, Vimal Prabhu; Khare, Kedar; John, Renu
2017-09-01
A constrained optimization approach with faster convergence is proposed to recover the complex object field from a near on-axis digital holography (DH) recording. We subtract the DC from the hologram after recording the object-beam and reference-beam intensities separately. The DC-subtracted hologram is then used to recover the complex object information, and the recovered complex object field is back-propagated to the image plane using the Fresnel back-propagation method. This approach provides high-resolution images compared with the conventional Fourier filtering approach and is 25% faster than the previously reported constrained optimization approach, owing to the subtraction of the two DC terms from the cost function. We demonstrate the approach in DH and digital holographic microscopy using the U.S. Air Force resolution target as the object, retrieving a high-resolution image free of DC and twin-image interference. We also demonstrate the high potential of this technique on a transparent microelectrode patterned on indium tin oxide-coated glass by reconstructing a high-resolution quantitative phase microscope image, and by imaging yeast cells.
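The DC-subtraction step can be checked numerically. For synthetic fields (purely illustrative; the object and reference below are arbitrary), subtracting the separately recorded object and reference intensities from the hologram leaves only the interference cross terms that carry the object information:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic object field and a near on-axis (tilted plane wave) reference
O = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
R = np.exp(1j * 0.2 * np.arange(64))[None, :] * np.ones((64, 1))

holo = np.abs(O + R) ** 2               # recorded hologram intensity
dc = np.abs(O) ** 2 + np.abs(R) ** 2    # separately recorded intensities
cross = holo - dc                       # O R* + O* R: interference terms only
```

Since |O + R|² = |O|² + |R|² + 2 Re(O R*), the residual `cross` equals exactly the twin interference terms, which is what the optimization step then works on.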
NASA Astrophysics Data System (ADS)
Dong, Zhichao; Cheng, Haobo
2018-01-01
A highly noise-tolerant hybrid algorithm (NTHA) is proposed in this study for phase retrieval from a single-shot spatial carrier fringe pattern (SCFP), effectively combining the merits of the spatial carrier phase shift method and the two-dimensional continuous wavelet transform (2D-CWT). NTHA first extracts three phase-shifted fringe patterns from the SCFP with one-pixel malposition; it then calculates phase gradients by subtracting the reference phase from the other two target phases, which are retrieved from the three phase-shifted fringe patterns by 2D-CWT; finally, it reconstructs the phase map by a least-squares gradient integration method. Its main characteristics include, but are not limited to: (1) it does not require the spatial carrier to be constant; (2) the subtraction mitigates the edge errors of 2D-CWT; (3) it is highly noise-tolerant, because not only is 2D-CWT noise-insensitive, but the noise in the fringe pattern also does not directly take part in the phase reconstruction as it does in the previous hybrid algorithm. Its feasibility and performance are validated extensively by simulations and by comparative experiments against the temporal phase shift, Fourier transform, and 2D-CWT methods.
The potential for neurovascular intravenous angiography using K-edge digital subtraction angiography
NASA Astrophysics Data System (ADS)
Schültke, E.; Fiedler, S.; Kelly, M.; Griebel, R.; Juurlink, B.; LeDuc, G.; Estève, F.; Le Bas, J.-F.; Renier, M.; Nemoz, C.; Meguro, K.
2005-08-01
Background: Catheterization of small-caliber blood vessels in the central nervous system can be extremely challenging. Alternatively, intravenous (i.v.) administration of contrast agent is minimally invasive and therefore carries a much lower risk for the patient. With conventional X-ray equipment, volumes of contrast agent that could be safely administered to the patient do not allow acquisition of high-quality images after i.v. injection, because the contrast bolus is extremely diluted by passage through the heart. However, synchrotron-based digital K-edge subtraction angiography does allow acquisition of high-quality images after i.v. administration of relatively small doses of contrast agent. Materials and methods: Eight adult male New Zealand rabbits were used for our experiments. Animals were submitted to both angiography with conventional X-ray equipment and synchrotron-based digital subtraction angiography. Results: With conventional X-ray equipment, no contrast was seen in either cerebral or spinal blood vessels after i.v. injection of iodinated contrast agent. However, using K-edge digital subtraction angiography, as little as 1 ml of iodinated contrast agent, when administered as an i.v. bolus, yielded images of small-caliber blood vessels in the central nervous system (both brain and spinal cord). Conclusions: If it were possible to image blood vessels of the same diameter in the central nervous system of human patients, the synchrotron-based technique could yield high-quality images at a significantly lower risk for the patient than conventional X-ray imaging. Images could be acquired where catheterization of feeding blood vessels has proven impossible.
NASA Astrophysics Data System (ADS)
Tachibana, Rie; Kohlhase, Naja; Näppi, Janne J.; Hironaka, Toru; Ota, Junko; Ishida, Takayuki; Regge, Daniele; Yoshida, Hiroyuki
2016-03-01
Accurate electronic cleansing (EC) for CT colonography (CTC) enables the visualization of the entire colonic surface without residual materials. In this study, we evaluated the accuracy of a novel multi-material electronic cleansing (MUMA-EC) scheme for non-cathartic ultra-low-dose dual-energy CTC (DE-CTC). The MUMA-EC performs a water-iodine material decomposition of the DE-CTC images and calculates virtual monochromatic images at multiple energies, after which a random forest classifier is used to label the images into the regions of lumen air, soft tissue, fecal tagging, and two types of partial-volume boundaries based on image-based features. After the labeling, materials other than soft tissue are subtracted from the CTC images. For pilot evaluation, 384 volumes of interest (VOIs), which represented sources of subtraction artifacts observed in current EC schemes, were sampled from 32 ultra-low-dose DE-CTC scans. The voxels in the VOIs were labeled manually to serve as a reference standard. The metric for EC accuracy was the mean overlap ratio between the labels of the reference standard and the labels generated by the MUMA-EC, a dual-energy EC (DE-EC), and a single-energy EC (SE-EC) scheme. Statistically significant differences were observed between the performance of the MUMA/DE-EC and the SE-EC methods (p<0.001). Visual assessment confirmed that the MUMA-EC generated fewer subtraction artifacts than did the DE-EC and SE-EC. Our MUMA-EC scheme yielded superior performance over the conventional SE-EC scheme in identifying and minimizing subtraction artifacts on non-cathartic ultra-low-dose DE-CTC images.
Delayed ripple counter simplifies square-root computation
NASA Technical Reports Server (NTRS)
Cliff, R.
1965-01-01
Ripple subtract technique simplifies the logic circuitry required in a binary computing device to derive the square root of a number. Successively higher numbers are subtracted from a register containing the number out of which the square root is to be extracted. The last number subtracted will be the closest integer to the square root of the number.
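One standard way to realize a square root by repeated subtraction, in the spirit of the ripple-subtract idea (the brief's exact register layout is not reproduced here), is to subtract successive odd integers and count the subtractions, using the identity 1 + 3 + 5 + … + (2k − 1) = k²:

```python
def isqrt_by_subtraction(n):
    """Integer square root by repeated subtraction of successive odd
    numbers: the count of successful subtractions is floor(sqrt(n)),
    since the first k odd numbers sum to k**2."""
    odd, count = 1, 0
    while n >= odd:
        n -= odd      # subtract the next odd number from the register
        odd += 2
        count += 1
    return count
```

For example, 16 − 1 − 3 − 5 − 7 = 0 after four subtractions, so the result is 4.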
ERIC Educational Resources Information Center
Wiles, Clyde A.
The study's purpose was to investigate the differential effects on the achievement of second-grade students that could be attributed to three instructional sequences for the learning of the addition and subtraction algorithms. One sequence presented the addition algorithm first (AS), the second presented the subtraction algorithm first (SA), and…
Improving 3D Wavelet-Based Compression of Hyperspectral Images
NASA Technical Reports Server (NTRS)
Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh
2009-01-01
Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. 
The resulting data are converted to sign-magnitude form and compressed in a manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but also spatially-low-pass, spectrally-high-pass subbands are further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image (see figure). Alternatively, the two methods can be combined by first performing the modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
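The mean-subtraction step itself is simple; a NumPy sketch, assuming the spatially-low-pass subband is held as a (bands, rows, cols) array (the wavelet decomposition and entropy coding around it are out of scope here):

```python
import numpy as np

def mean_subtract_planes(subband):
    """Mean subtraction for a spatially-low-pass subband: subtract each
    spatial plane's mean before encoding, returning the centered data
    and the per-plane means as side information to be encoded in the
    bit stream and added back at decompression."""
    means = subband.mean(axis=(1, 2))            # one mean per spectral plane
    centered = subband - means[:, None, None]    # zero-mean spatial planes
    return centered, means
```

The centered planes are zero-mean, which suits coders designed for 2-D image subbands, and adding the stored means back reconstructs the original data exactly.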
ERIC Educational Resources Information Center
Mathematics Teacher, 1985
1985-01-01
Discusses: (1) use of matrix techniques to write secret codes (includes ready-to-duplicate worksheets); (2) a method of multiplication and division of polynomials in one variable that is not tedious, time-consuming, or dependent on guesswork; and (3) adding and subtracting rational expressions and solving rational equations. (JN)
Arterial Blood Flow Measurement Using Digital Subtraction Angiography (DSA)
NASA Astrophysics Data System (ADS)
Swanson, David K.; Myerowitz, P. David; Van Lysel, Michael S.; Peppler, Walter W.; Fields, Barry L.; Watson, Kim M.; O'Connor, Julia
1984-08-01
Standard angiography demonstrates the anatomy of arterial occlusive disease but not its physiological significance. Using intravenous digital subtraction angiography (DSA), we investigated transit-time videodensitometric techniques for measuring femoral arterial flows in dogs. These methods have been successfully applied to intraarterial DSA but not to intravenous DSA. Eight 20 kg dogs were instrumented with an electromagnetic flow probe and a balloon occluder above an imaged segment of femoral artery. 20 cc of Renografin 76 was power-injected at 15 cc/sec into the right atrium. Flow in the femoral artery was varied by partial balloon occlusion or by peripheral dilatation following induced ischemia, resulting in 51 flow measurements varying from 15 to 270 cc/min. Three different transit-time techniques were studied: cross-correlation, mean square error, and two leading-edge methods. Correlation between videodensitometry and flowmeter measurements using these different techniques ranged from 0.78 to 0.88, with a mean square error of 29 to 37 cc/min. Blood flow information using several different transit-time techniques can thus be obtained with intravenous DSA.
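The cross-correlation variant of the transit-time method can be sketched with NumPy, assuming `curve_up` and `curve_down` are time-density curves sampled at two points along the imaged segment (hypothetical inputs; flow then follows as segment volume divided by transit time):

```python
import numpy as np

def transit_time_xcorr(curve_up, curve_down, dt):
    """Sketch of the cross-correlation transit-time technique: the lag
    that maximizes the correlation of the downstream time-density curve
    with the upstream one estimates the bolus transit time (in the same
    units as the frame interval `dt`)."""
    n = len(curve_up)
    lags = np.arange(-(n - 1), n)               # lag of each 'full' output sample
    xc = np.correlate(curve_down - curve_down.mean(),
                      curve_up - curve_up.mean(), mode="full")
    return lags[np.argmax(xc)] * dt
```

With two identical density curves offset by six frames at 0.5 s per frame, the estimated transit time is 3.0 s.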
NASA Astrophysics Data System (ADS)
Ravenni, Andrea; Liguori, Michele; Bartolo, Nicola; Shiraishi, Maresuke
2017-09-01
Cross-correlations between Cosmic Microwave Background (CMB) temperature and y-spectral distortion anisotropies have been previously proposed as a way to measure the local bispectrum parameter f_NL^loc in a range of scales inaccessible to either CMB (T, E) bispectra or μT correlations. This is useful e.g. to test the scale dependence of primordial non-Gaussianity. Unfortunately, the primordial yT signal is strongly contaminated by the late-time correlation between the Integrated Sachs-Wolfe and Sunyaev-Zel'dovich (SZ) effects. Moreover, SZ itself generates a large noise contribution in the y-parameter map. We consider two original ways to address these issues. In order to remove the bias due to the SZ-CMB temperature coupling, while also providing additional signal, we include in the analysis the cross-correlation between y-distortions and CMB polarization. In order to reduce the noise, we propose to clean the y-map by subtracting an SZ template, reconstructed via cross-correlation with external tracers (CMB and galaxy-lensing signals). We combine this SZ template subtraction with the previously suggested solution of directly masking detected clusters. Our final forecasts show that, using y-distortions, a PRISM-like survey can achieve 1σ(f_NL^loc) = 300, while an ideal experiment will achieve 1σ(f_NL^loc) = 130, with improvements of a factor between 2.1 and 3.8, depending on the considered survey, from adding the yE signal, and a further 20-30% from template cleaning. These forecasts are much worse than current f_NL^loc bounds from Planck, but we stress that they refer to completely different scales.
NASA Astrophysics Data System (ADS)
Magdy, Nancy; Ayad, Miriam F.
2015-02-01
Two simple, accurate, precise, sensitive and economic spectrophotometric methods were developed for the simultaneous determination of Simvastatin and Ezetimibe in fixed dose combination products without prior separation. The first method depends on a new chemometrics-assisted ratio spectra derivative method using moving window polynomial least square fitting method (Savitzky-Golay filters). The second method is based on a simple modification for the ratio subtraction method. The suggested methods were validated according to USP guidelines and can be applied for routine quality control testing.
Color enhancement and image defogging in HSI based on Retinex model
NASA Astrophysics Data System (ADS)
Gao, Han; Wei, Ping; Ke, Jun
2015-08-01
Retinex is a luminance perception algorithm based on color constancy. It has a good performance in color enhancement. But in some cases, the traditional Retinex algorithms, both Single-Scale Retinex (SSR) and Multi-Scale Retinex (MSR) in RGB color space, do not work well and cause color deviation. To solve this problem, we present improved SSR and MSR algorithms. Compared to other Retinex algorithms, we implement Retinex in HSI (Hue, Saturation, Intensity) color space and use a parameter α to improve the quality of the image. Moreover, the algorithms presented in this paper have a good performance in image defogging. In contrast with traditional Retinex algorithms, we use the intensity channel to obtain the reflection information of an image. The intensity channel is processed with a Gaussian center-surround image filter to estimate the light information, which should be removed from the intensity channel. After that, we subtract the light information from the intensity channel to obtain the reflection image, which includes only the attributes of the objects in the image. Using the reflection image and the parameter α, an arbitrary scale factor set manually, we improve the intensity channel and complete the color enhancement. Our experiments show that this approach works well compared with existing methods for color enhancement. Besides better handling of the color deviation problem and image defogging, a visible improvement in image quality for human contrast perception is also observed.
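The center-surround step described above can be sketched in one dimension: blur the intensity channel with a Gaussian to estimate the illumination, subtract in the log domain so only reflectance remains, and scale by α. This is a simplified single-scale illustration, not the paper's HSI implementation; the kernel size and α defaults are arbitrary assumptions.

```python
import math

def gaussian_kernel(radius, sigma):
    """Normalized 1-D Gaussian weights over [-radius, radius]."""
    vals = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(vals)
    return [v / s for v in vals]

def blur_row(row, kernel):
    """Convolve one intensity row with the kernel, clamping indices at the borders."""
    r = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - r, 0), len(row) - 1)
            acc += w * row[j]
        out.append(acc)
    return out

def retinex_intensity(row, alpha=1.0, radius=3, sigma=2.0):
    """Single-scale Retinex on an intensity row:
    reflection = log(I) - log(blur(I)), scaled by alpha, mapped back via exp()."""
    light = blur_row(row, gaussian_kernel(radius, sigma))
    return [math.exp(alpha * (math.log(v + 1e-6) - math.log(l + 1e-6)))
            for v, l in zip(row, light)]
```

On a uniformly lit row the estimated illumination equals the input, so the reflectance output is flat, which is the behavior the center-surround subtraction is designed to produce.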
ERIC Educational Resources Information Center
Baroody, Arthur J.
2016-01-01
Six widely used US Grade 1 curricula do not adequately address the following three developmental prerequisites identified by a proposed learning trajectory for the meaningful learning of the subtraction-as-addition strategy (e.g., for 13-8 think "what + 8 = 13?"): (a) reverse operations (adding 8 is undone by subtracting 8); (b) common…
Dual-tracer background subtraction approach for fluorescent molecular tomography
Holt, Robert W.; El-Ghussein, Fadi; Davis, Scott C.; Samkoe, Kimberley S.; Gunn, Jason R.; Leblond, Frederic
2013-01-01
Diffuse fluorescence tomography requires high contrast-to-background ratios to accurately reconstruct inclusions of interest. This is a problem when imaging the uptake of fluorescently labeled, molecularly targeted tracers in tissue, which can result in high levels of heterogeneously distributed background uptake. We present a dual-tracer background subtraction approach, wherein signal from the uptake of an untargeted tracer is subtracted from targeted tracer signal prior to image reconstruction, resulting in maps of targeted tracer binding. The approach is demonstrated in simulations, a phantom study, and a mouse glioma imaging study, showing substantial improvement over conventional and homogeneous background subtraction image reconstruction approaches. PMID:23292612
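The core arithmetic of the dual-tracer idea reduces to a voxelwise subtraction of the untargeted-tracer image from the targeted-tracer image. The optional scale factor and the zero-clamping in this toy sketch are illustrative assumptions, not details taken from the paper, which performs the subtraction on measured signals prior to tomographic reconstruction.

```python
def dual_tracer_subtract(targeted, untargeted, scale=1.0):
    """Subtract (optionally scaled) untargeted-tracer signal from targeted-tracer
    signal, clamping at zero, to approximate a map of specific binding."""
    return [max(t - scale * u, 0.0) for t, u in zip(targeted, untargeted)]
```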
Physical renormalization condition for de Sitter QED
NASA Astrophysics Data System (ADS)
Hayashinaka, Takahiro; Xue, She-Sheng
2018-05-01
We consider a new renormalization condition for the vacuum expectation values of the scalar and spinor currents induced by a homogeneous and constant electric field background in de Sitter spacetime. Following a semiclassical argument, the condition, named maximal subtraction, imposes exponential suppression on the renormalized currents in the large-mass limit of the charged particles. Maximal subtraction changes the behavior of the induced currents previously obtained with the conventional minimal subtraction scheme, and is favored for several physically reasonable predictions, including the identical asymptotic behavior of the scalar and spinor currents, the removal of the IR hyperconductivity from the scalar current, and a finite current for massless fermions.
Unmanned Vehicle Guidance Using Video Camera/Vehicle Model
NASA Technical Reports Server (NTRS)
Sutherland, T.
1999-01-01
A video guidance sensor (VGS) system has flown on both STS-87 and STS-95 to validate a single camera/target concept for vehicle navigation. The main part of the image algorithm was the subtraction of two consecutive images using software. For a nominal-size image of 256 × 256 pixels, this subtraction can take a large portion of the time between successive frames in standard-rate video, leaving very little time for other computations. The purpose of this project was to move the software subtraction into hardware to speed up the subtraction process and allow more complex algorithms to be performed, both in hardware and software.
Fast and fully automatic phalanx segmentation using a grayscale-histogram morphology algorithm
NASA Astrophysics Data System (ADS)
Hsieh, Chi-Wen; Liu, Tzu-Chiang; Jong, Tai-Lang; Chen, Chih-Yen; Tiu, Chui-Mei; Chan, Din-Yuen
2011-08-01
Bone age assessment is a common radiological examination used in pediatrics to diagnose the discrepancy between the skeletal and chronological age of a child; therefore, it is beneficial to develop a computer-based bone age assessment system to help junior pediatricians estimate bone age easily. Unfortunately, the phalanges on radiograms are not easily separated from the background and soft tissue. Therefore, we proposed a new method, called the grayscale-histogram morphology algorithm, to segment the phalanges quickly and precisely. The algorithm includes three parts: a tri-stage sieve algorithm to eliminate the background of hand radiograms, a centroid-edge dual scanning algorithm to frame the phalanx region, and a segmentation algorithm based on a disk traverse-subtraction filter to segment the phalanx. Moreover, two other segmentation methods, adaptive two-mean and adaptive two-mean clustering, were performed, and their results were compared with the disk traverse-subtraction segmentation using five indices: misclassification error, relative foreground area error, modified Hausdorff distance, edge mismatch, and region nonuniformity. In addition, the CPU time of the three segmentation methods is discussed. The results showed that our method had a better performance than the other two methods. Furthermore, satisfactory segmentation results were obtained with a low standard error.
René de Cotret, Laurent P; Siwick, Bradley J
2017-07-01
The general problem of background subtraction in ultrafast electron powder diffraction (UEPD) is presented, with a focus on diffraction patterns obtained from materials of moderately complex structure which contain many overlapping peaks and effectively no scattering-vector regions that can be considered exclusively background. We compare the performance of background subtraction algorithms based on discrete and dual-tree complex wavelet transforms (DTCWT) when applied to simulated UEPD data on the M1-R phase transition in VO2 with a time-varying background. We find that the DTCWT approach is capable of extracting intensities that are accurate to better than 2% across the whole range of scattering vectors simulated, effectively independent of delay time. A Python package is available.
Stand-off transmission lines and method for making same
Tuckerman, David B.
1991-01-01
Standoff transmission lines in an integrated circuit structure are formed by etching away or removing the portion of the dielectric layer separating the microstrip metal lines and the ground plane from the regions that are not under the lines. The microstrip lines can be fabricated by a subtractive process of etching a metal layer; an additive process of direct laser writing of fine lines followed by plating up the lines; or a subtractive/additive process in which a trench is etched over a nucleation layer and the wire is electrolytically deposited. Microstrip lines supported on freestanding posts of dielectric material surrounded by air gaps are produced. The average dielectric constant between the lines and ground plane is reduced, resulting in higher characteristic impedance, less crosstalk between lines, increased signal propagation velocities, and reduced wafer stress.
Kumar, M Kishore; Sreekanth, V; Salmon, Maëlle; Tonne, Cathryn; Marshall, Julian D
2018-08-01
This study uses spatiotemporal patterns in ambient concentrations to infer the contribution of regional versus local sources. We collected 12 months of monitoring data for outdoor fine particulate matter (PM2.5) in rural southern India. Rural India includes more than one-tenth of the global population and annually accounts for around half a million air pollution deaths, yet little is known about the relative contribution of local sources to outdoor air pollution. We measured 1-min averaged outdoor PM2.5 concentrations during June 2015-May 2016 in three villages, which varied in population size, socioeconomic status, and type and usage of domestic fuel. The daily geometric-mean PM2.5 concentration was ∼30 μg m⁻³ (geometric standard deviation: ∼1.5). Concentrations exceeded the Indian National Ambient Air Quality standard (60 μg m⁻³) on 2-5% of observation days. Average concentrations were ∼25 μg m⁻³ higher during winter than during the monsoon and ∼8 μg m⁻³ higher during morning hours than the diurnal average. A moving average subtraction method based on 1-min average PM2.5 concentrations indicated that local contributions (e.g., nearby biomass combustion, brick kilns) were greater in the most populated village, and that overall the majority of ambient PM2.5 in our study was regional, implying that local air pollution control strategies alone may have limited influence on local ambient concentrations. We compared the relatively new moving average subtraction method against a more established approach; both methods broadly agree on the relative contribution of local sources across the three sites. The moving average subtraction method has broad applicability across locations. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
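A moving average subtraction of the kind described can be sketched as follows: smooth the 1-min trace to obtain a slowly varying regional baseline, and treat the positive excess above that baseline as the local contribution. The window length and the zero-clamping here are illustrative; the published method's exact windowing differs.

```python
def moving_average(series, window):
    """Centered moving average; `window` should be odd. Edges use the available samples."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

def split_local_regional(series, window=5):
    """Split a 1-min PM2.5 trace into a smooth regional baseline and the
    short-lived local excess above it (clamped at zero)."""
    baseline = moving_average(series, window)
    local = [max(x - b, 0.0) for x, b in zip(series, baseline)]
    return baseline, local
```

A brief combustion spike rises far above the smoothed baseline and is attributed to local sources, while the flat portions of the trace contribute nothing to the local term.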
SU-F-I-08: CT Image Ring Artifact Reduction Based On Prior Image
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, C; Qi, H; Chen, Z
Purpose: In a computed tomography (CT) system, CT images with ring artifacts are reconstructed when some adjacent bins of the detector do not work. Ring artifacts severely degrade CT image quality. We present a CT ring artifact reduction method based on projection data correction, aiming to estimate the missing projection data accurately and thus remove the ring artifacts from CT images. Methods: The method consists of ten steps: 1) identification of the abnormal pixel line in the projection sinogram; 2) linear interpolation within the pixel line of the projection sinogram; 3) FBP reconstruction using the interpolated projection data; 4) filtering of the FBP image using a mean filter; 5) forward projection of the filtered FBP image; 6) subtraction of the forward projection from the original projection; 7) linear interpolation of the abnormal pixel line area in the subtraction projection; 8) addition of the interpolated subtraction projection to the forward projection; 9) FBP reconstruction using the corrected projection data; 10) return to step 4 until the pre-set iteration number is reached. The method is validated on simulated and real data to restore missing projection data and reconstruct ring artifact-free CT images. Results: We have studied the impact of the number of dead bins of the CT detector on the accuracy of missing data estimation in the projection sinogram. For the simulated case with a 256 × 256 Shepp-Logan phantom, three iterations are sufficient to restore the projection data and reconstruct ring artifact-free images when the dead-bin rate is under 30%. The dead-bin-induced artifacts are substantially reduced. More iterations are needed to reconstruct satisfactory images as the rate of dead bins increases. Similar results were found for a real head phantom case. Conclusion: A practical CT image ring artifact correction scheme based on projection data is developed. This method can produce ring artifact-free CT images feasibly and effectively.
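The linear interpolation steps of the scheme can be sketched per sinogram row as below: each dead bin is filled from the nearest live bins on either side. The handling of edge bins (copying the nearest live value) is an assumption for illustration, not taken from the paper.

```python
def interpolate_dead_bins(row, dead):
    """Linearly interpolate sinogram values across a set of dead detector bins,
    using the nearest live bins on each side; edge bins copy the nearest live value."""
    out = list(row)
    live = [i for i in range(len(row)) if i not in dead]
    for i in sorted(dead):
        left = max((j for j in live if j < i), default=None)
        right = min((j for j in live if j > i), default=None)
        if left is None:
            out[i] = row[right]
        elif right is None:
            out[i] = row[left]
        else:
            t = (i - left) / (right - left)
            out[i] = (1 - t) * row[left] + t * row[right]
    return out
```

In the full algorithm this fill-in is applied first to the raw sinogram (step 2) and then, inside the iteration loop, to the subtraction projection (step 7).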
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rozario, T; Bereg, S; Chiu, T
Purpose: In order to locate lung tumors on projection images without internal markers, a digitally reconstructed radiograph (DRR) is created and compared with projection images. Since lung tumors move and their locations change on projection images while they are static on DRRs, a special DRR (background DRR) is generated based on a modified anatomy from which the lung tumors are removed. In addition, global discrepancies exist between DRRs and projections due to their different image origination, scattering, and noise, which adversely affects comparison accuracy. A simple but efficient comparison algorithm is reported. Methods: This method divides global images into a matrix of small tiles and evaluates similarity by calculating the normalized cross correlation (NCC) between corresponding tiles on projections and DRRs. The tile configuration (tile locations) is automatically optimized to keep the tumor within a single tile, which matches poorly with the corresponding DRR tile. A pixel-based linear transformation is determined by linear interpolation of the tile transformation results obtained during tile matching. The DRR is transformed to the projection image level and subtracted from it; the resulting subtracted image contains only the tumor. A DRR of the tumor is then registered to the subtracted image to locate the tumor. Results: This method has been successfully applied to kV fluoro images (about 1000 images) acquired on a Vero (Brainlab) for dynamic tumor tracking in phantom studies. Radio-opaque markers were implanted and used as ground truth for tumor positions. Although other organs and bony structures introduce strong signals superimposed on tumors at some angles, this method accurately locates tumors on every projection over 12 gantry angles. The maximum error is less than 2.6 mm, and the overall average error is 1.0 mm.
Conclusion: This algorithm is capable of detecting tumors without markers despite strong background signals.
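The tile similarity measure used above, normalized cross correlation, can be sketched for two flattened tiles. Values near 1 indicate a good match up to brightness and contrast changes, so a tile containing the moving tumor scores poorly against its static DRR counterpart. This is a generic NCC, not the authors' code.

```python
def ncc(a, b):
    """Normalized cross-correlation between two equal-size tiles (flattened lists).
    Returns a value in [-1, 1]; 1 means identical up to a linear intensity change."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    da = [x - ma for x in a]
    db = [y - mb for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den if den else 0.0
```

Because NCC subtracts the mean and normalizes by the standard deviation, it is insensitive to the global brightness and contrast discrepancies between DRRs and projections that the abstract mentions.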
Teaching Integer Operations Using Ring Theory
ERIC Educational Resources Information Center
Hirsch, Jenna
2012-01-01
A facility with signed numbers forms the basis for effective problem solving throughout developmental mathematics. Most developmental mathematics textbooks explain signed number operations using absolute value, a method that involves considering the problem in several cases (same sign, opposite sign), and in the case of subtraction, rewriting the…
Optical computation using residue arithmetic.
Huang, A; Tsunoda, Y; Goodman, J W; Ishihara, S
1979-01-15
Using residue arithmetic it is possible to perform additions, subtractions, multiplications, and polynomial evaluation without the necessity for carry operations. Calculations can, therefore, be performed in a fully parallel manner. Several different optical methods for performing residue arithmetic operations are described. A possible combination of such methods to form a matrix vector multiplier is considered. The potential advantages of optics in performing these kinds of operations are discussed.
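The carry-free property described above is easy to demonstrate in software: with pairwise-coprime moduli, addition, subtraction, and multiplication act independently on each residue digit, and the Chinese Remainder Theorem recovers the result. This sketch shows the arithmetic only, not any optical encoding.

```python
from math import prod

def to_residues(x, moduli):
    """Represent x by its residues modulo each pairwise-coprime modulus."""
    return [x % m for m in moduli]

def residue_op(a, b, moduli, op):
    """Carry-free digitwise operation: each residue channel is computed independently,
    so all channels could run in parallel."""
    return [op(x, y) % m for x, y, m in zip(a, b, moduli)]

def from_residues(r, moduli):
    """Reconstruct the integer (mod the product of the moduli) via the CRT."""
    M = prod(moduli)
    x = 0
    for ri, mi in zip(r, moduli):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)  # modular inverse (Python 3.8+)
    return x % M
```

With moduli (5, 7, 9) the representable range is 0..314, and sums, differences, and products within that range are exact with no carries propagating between channels.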
DOE Office of Scientific and Technical Information (OSTI.GOV)
Podoshvedov, S. A., E-mail: podoshvedov@mail.ru
A method to generate Schrödinger cat states in freely propagating optical fields based on the use of displaced states (or displacement operators) is developed. Some optical schemes with photon-added coherent states are studied. The schemes are modifications of the general method based on a sequence of displacements and photon additions or subtractions, adjusted to generate Schrödinger cat states of a larger size. The effects of detection inefficiency are taken into account.
Haro 11: Where is the Lyman Continuum Source?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keenan, Ryan P.; Oey, M. S.; Jaskot, Anne E.
2017-10-10
Identifying the mechanism by which high-energy Lyman continuum (LyC) photons escaped from early galaxies is one of the most pressing questions in cosmic evolution. Haro 11 is the best known local LyC-leaking galaxy, providing an important opportunity to test our understanding of LyC escape. The observed LyC emission in this galaxy presumably originates from one of the three bright, photoionizing knots known as A, B, and C. Knot C has strong Lyα emission, and Knot B hosts an unusually bright ultraluminous X-ray source, which may be a low-luminosity active galactic nucleus. To clarify the LyC source, we carry out ionization-parameter mapping (IPM) by obtaining narrow-band imaging with the Hubble Space Telescope WFC3 and ACS cameras to construct spatially resolved ratio maps of [O iii]/[O ii] emission from the galaxy. IPM traces the ionization structure of the interstellar medium and allows us to identify optically thin regions. To optimize the continuum subtraction, we introduce a new method for determining the best continuum scale factor, derived from the mode of the continuum-subtracted image flux distribution. We find no conclusive evidence of LyC escape from Knots B or C; instead, we identify a high-ionization region extending over at least 1 kpc from Knot A. This knot shows evidence of an extremely young age (≲1 Myr), perhaps containing very massive stars (>100 M⊙). It is weak in Lyα, so if it is confirmed as the LyC source, our results imply that LyC emission may be independent of Lyα emission.
Kimori, Yoshitaka; Baba, Norio; Morone, Nobuhiro
2010-07-08
A reliable extraction technique for resolving multiple spots in light or electron microscopic images is essential in investigations of the spatial distribution and dynamics of specific proteins inside cells and tissues. Currently, automatic spot extraction and characterization in complex microscopic images poses many challenges to conventional image processing methods. A new method to extract closely located, small target spots from biological images is proposed. This method starts with a simple but practical operation based on the extended morphological top-hat transformation to subtract an uneven background. The core of our novel approach is the following: first, the original image is rotated in an arbitrary direction and each rotated image is opened with a single straight line-segment structuring element. Second, the opened images are unified and then subtracted from the original image. To evaluate these procedures, model images of simulated spots with closely located targets were created, and the efficacy of our method was compared with that of conventional morphological filtering methods. The results showed the better performance of our method. Spots in real microscope images were also quantified, confirming that the method is applicable in practice. Our method achieved effective spot extraction under various image conditions, including aggregated target spots, poor signal-to-noise ratio, and large variations in the background intensity. Furthermore, it has no restrictions with respect to the shape of the extracted spots. These features allow its broad application in biological and biomedical image analysis.
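The opening-and-subtract idea can be sketched in one dimension: an opening with a flat line element removes peaks narrower than the element, so subtracting the opening from the original isolates small spots while wider structures cancel. The 1-D simplification (a single orientation, no rotation) is ours; the paper unifies openings over many rotated line elements in 2-D.

```python
def erode(signal, size):
    """Grayscale erosion with a flat line element: running minimum over `size` samples."""
    h = size // 2
    return [min(signal[max(0, i - h):i + h + 1]) for i in range(len(signal))]

def dilate(signal, size):
    """Grayscale dilation with a flat line element: running maximum over `size` samples."""
    h = size // 2
    return [max(signal[max(0, i - h):i + h + 1]) for i in range(len(signal))]

def line_tophat(signal, size):
    """Top-hat with a line element: the opening (erosion then dilation) keeps the
    background; subtracting it from the original leaves spots narrower than `size`."""
    opening = dilate(erode(signal, size), size)
    return [s - o for s, o in zip(signal, opening)]
```

A one-sample spike survives the subtraction while a plateau as wide as the structuring element is reproduced by the opening and cancels exactly.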
ERIC Educational Resources Information Center
Karp, Karen; Caldwell, Janet; Zbiek, Rose Mary; Bay-Williams, Jennifer
2011-01-01
What is the relationship between addition and subtraction? How do individuals know whether an algorithm will always work? Can they explain why order matters in subtraction but not in addition, or why it is false to assert that the sum of any two whole numbers is greater than either number? It is organized around two big ideas and supported by…
Subtraction with hadronic initial states at NLO: an NNLO-compatible scheme
NASA Astrophysics Data System (ADS)
Somogyi, Gábor
2009-05-01
We present an NNLO-compatible subtraction scheme for computing QCD jet cross sections of hadron-initiated processes at NLO accuracy. The scheme is constructed specifically with those complications in mind that emerge when extending the subtraction algorithm to next-to-next-to-leading order. It is therefore possible to embed the present scheme in a full NNLO computation without any modifications.
Qualitative assessment of gene expression in Affymetrix GeneChip arrays
NASA Astrophysics Data System (ADS)
Nagarajan, Radhakrishnan; Upreti, Meenakshi
2007-01-01
Affymetrix GeneChip microarrays are widely used to determine the simultaneous expression of genes in a given biological paradigm. Probes on the GeneChip array are atomic entities which by definition are randomly distributed across the array and in turn govern the gene expression. In the present study, we make several interesting observations. We show that there is considerable correlation between the probe intensities across the array, which defies the independence assumption. While the mechanism behind such correlations is unclear, we show that the scaling behavior and profiles of perfect match (PM) as well as mismatch (MM) probes are similar and immune to background subtraction. We believe that the observed correlations are possibly an outcome of inherent non-stationarities or patchiness in the array devoid of biological significance. This is demonstrated by inspecting the scaling behavior and profiles of the PM and MM probe intensities obtained from publicly available GeneChip arrays from three eukaryotic genomes, namely Drosophila melanogaster (fruit fly), Homo sapiens (human) and Mus musculus (house mouse), across distinct biological paradigms and across laboratories, with and without background subtraction. The fluctuation functions were estimated using detrended fluctuation analysis (DFA) with fourth-order polynomial detrending. The results presented in this study provide new insights into correlation signatures of PM and MM probe intensities and suggest the choice of DFA as a tool for qualitative assessment of Affymetrix GeneChip microarrays prior to their analysis. A more detailed investigation is necessary in order to understand the source of these correlations.
Multifractal structures in radial velocity measurements for exoplanets
NASA Astrophysics Data System (ADS)
Del Sordo, Fabio; Agarwal, Sahil; Fischer, Debra A.; Wettlaufer, John S.
2015-01-01
The radial velocity method is a powerful way to search for exoplanetary systems and has led to many discoveries of exoplanets in the last 20 years. Nevertheless, in order to observe Earth-like planets, the method needs to be refined, i.e. one needs to improve the signal-to-noise ratio. On one hand this can be achieved by building spectrographs with better performance, but on the other hand it is also essential to understand the noise present in the data. Radial-velocity data are time series that contain the effects of planets as well as of stellar disturbances; they are therefore the result of different physical processes which operate on different time scales, acting in a not always periodic fashion. I present here a possible approach to this problem, which consists of looking for multifractal structures in the time series coming from radial velocity measurements, identifying the underlying long-range correlations and fractal scaling properties, and connecting them to the underlying physical processes, such as stellar oscillation, granulation, rotation, and magnetic activity. This method has previously been applied to satellite data on Arctic sea-ice albedo, where it is relevant for identifying trends and noise in the Arctic sea ice (Agarwal, Moon and Wettlaufer, Proc. R. Soc., 2012). Here we use this analysis for exoplanetary data related to possible Earth-like planets. Moreover, we apply the same procedure to synthetic data from numerical simulations of stellar dynamos, which give insight into the mechanisms responsible for the noise. In this way we can raise the signal-to-noise ratio in the data, using the synthetic data as predicted noise to be subtracted from the observations.
The spectrum of static subtracted geometries
NASA Astrophysics Data System (ADS)
Andrade, Tomás; Castro, Alejandra; Cohen-Maldonado, Diego
2017-05-01
Subtracted geometries are black hole solutions of the four dimensional STU model with rather interesting ties to asymptotically flat black holes. A peculiar feature is that the solutions to the Klein-Gordon equation on this subtracted background can be organized according to representations of the conformal group SO(2, 2). We test if this behavior persists for the linearized fluctuations of gravitational and matter fields on static, electrically charged backgrounds of this kind. We find that there is a subsector of the modes that do display conformal symmetry, while some modes do not. We also discuss two different effective actions that describe these subtracted geometries and how the spectrum of quasinormal modes is dramatically different depending upon the action used.
Complete Nagy-Soper subtraction for next-to-leading order calculations in QCD
NASA Astrophysics Data System (ADS)
Bevilacqua, G.; Czakon, M.; Kubocz, M.; Worek, M.
2013-10-01
We extend the Helac-Dipoles package with the implementation of a new subtraction formalism, first introduced by Nagy and Soper in the formulation of an improved parton shower. We discuss a systematic, semi-numerical approach for the evaluation of the integrated subtraction terms for both massless and massive partons, which provides the missing ingredient for a complete implementation. In consequence, the new scheme can now be used as part of a complete NLO QCD calculation for processes with arbitrary parton masses and multiplicities. We assess its overall performance through a detailed comparison with results based on Catani-Seymour subtraction. The importance of random polarization and color sampling of the external partons is also examined.
Ambient-Light-Canceling Camera Using Subtraction of Frames
NASA Technical Reports Server (NTRS)
Morookian, John Michael
2004-01-01
The ambient-light-canceling camera (ALCC) is a proposed near-infrared electronic camera that would utilize a combination of (1) synchronized illumination during alternate frame periods and (2) subtraction of readouts from consecutive frames to obtain images without a background component of ambient light. The ALCC is intended especially for use in tracking the motion of an eye by the pupil center corneal reflection (PCCR) method. Eye tracking by the PCCR method has shown potential for application in human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological deficiencies. In the PCCR method, an eye is illuminated by near-infrared light from a light-emitting diode (LED). Some of the infrared light is reflected from the surface of the cornea. Some of the infrared light enters the eye through the pupil and is reflected from the back of the eye out through the pupil, a phenomenon commonly observed as the red-eye effect in flash photography. An electronic camera is oriented to image the user's eye. The output of the camera is digitized and processed by algorithms that locate the two reflections. Then, from the locations of the centers of the two reflections, the direction of gaze is computed. As described thus far, the PCCR method is susceptible to errors caused by reflections of ambient light. Although a near-infrared band-pass optical filter can be used to discriminate against ambient light, some sources of ambient light have enough in-band power to compete with the LED signal. The mode of operation of the ALCC would complement or supplant spectral filtering by providing more nearly complete cancellation of the effect of ambient light. In the operation of the ALCC, a near-infrared LED would be pulsed on during one camera frame period and off during the next frame period.
Thus, the scene would be illuminated by both the LED (signal) light and the ambient (background) light during one frame period, and with only ambient (background) light during the next frame period. The camera output would be digitized and sent to a computer, wherein the pixel values of the background-only frame would be subtracted from the pixel values of the signal-plus-background frame to obtain signal-only pixel values (see figure). To prevent artifacts of motion from entering the images, it would be necessary to acquire image data at a rate greater than the standard video rate of 30 frames per second. For this purpose, the ALCC would exploit a novel control technique developed at NASA's Jet Propulsion Laboratory for advanced charge-coupled-device (CCD) cameras. This technique provides for readout from a subwindow [region of interest (ROI)] within the image frame. Because the desired reflections from the eye would typically occupy a small fraction of the area within the image frame, the ROI capability would make it possible to acquire and subtract pixel values at rates of several hundred frames per second, considerably greater than the standard video rate and sufficient to both (1) suppress motion artifacts and (2) track the motion of the eye between consecutive subtractive frame pairs.
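The frame-pair arithmetic described above amounts to a per-pixel subtraction of the ambient-only frame from the LED-plus-ambient frame, optionally restricted to a region of interest. The zero-clamping of negative differences and the ROI convention here are illustrative assumptions, not JPL's implementation.

```python
def cancel_ambient(signal_plus_bg, bg_only, roi=None):
    """Subtract the ambient-only frame from the LED+ambient frame, clamping
    negative differences (noise) to zero. `roi` optionally restricts the work
    to a (row0, row1, col0, col1) subwindow, as in the ROI readout scheme."""
    rows = range(len(signal_plus_bg)) if roi is None else range(roi[0], roi[1])
    cols = range(len(signal_plus_bg[0])) if roi is None else range(roi[2], roi[3])
    return [[max(signal_plus_bg[r][c] - bg_only[r][c], 0) for c in cols] for r in rows]
```

Restricting the subtraction to a small ROI is what makes frame rates well above standard video plausible: only the pixels near the expected corneal and pupil reflections need to be read out and differenced.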
Extended Colour--Some Methods and Applications.
ERIC Educational Resources Information Center
Dean, P. J.; Murkett, A. J.
1985-01-01
Describes how color graphics are built up on microcomputer displays and how a range of colors can be produced. Discusses the logic of color formation, noting that adding/subtracting color can be conveniently demonstrated. Color generating techniques in physics (resistor color coding and continuous spectrum production) are given with program…
Integers Made Easy: Just Walk It Off
ERIC Educational Resources Information Center
Nurnberger-Haag, Julie
2007-01-01
This article describes a multisensory method for teaching students how to multiply and divide as well as add and subtract integers. The author uses sidewalk chalk and the underlying concept of integers to physically and mentally engage students in understanding the concepts of integers, making connections, and developing computational fluency.…
Meli, Leonardo; Pacchierotti, Claudio; Prattichizzo, Domenico
2014-04-01
This study presents a novel approach to force feedback in robot-assisted surgery. It consists of substituting haptic stimuli, composed of a kinesthetic component and a skin deformation, with cutaneous stimuli only. The force fed back can then be thought of as the complete haptic interaction (cutaneous plus kinesthetic) minus its kinesthetic part; for this reason, we refer to this approach as sensory subtraction. Sensory subtraction aims at outperforming other nonkinesthetic feedback techniques in teleoperation (e.g., sensory substitution) while guaranteeing the stability and safety of the system. We tested the proposed approach in a challenging 7-DoF bimanual teleoperation task, similar to the Pegboard experiment of the da Vinci Skills Simulator. Sensory subtraction showed improved performance in terms of completion time, force exerted, and total displacement of the rings with respect to two popular sensory substitution techniques. Moreover, it guaranteed a stable interaction in the presence of a communication delay in the haptic loop.
Kim, Dong-Yeon; Kim, Eo-Bin; Kim, Hae-Young; Kim, Ji-Hwan
2017-01-01
PURPOSE To evaluate the fit of three-unit metal frameworks for fixed dental prostheses made by subtractive and additive manufacturing. MATERIALS AND METHODS One metal master model was fabricated. Twenty silicone impressions were made of the master die; 10 working dies were poured with Type IV stone and 10 with scannable stone. Ten three-unit wax frameworks were fabricated by conventional wax-up on the Type IV working dies. Stereolithography files of 10 three-unit frameworks were obtained with a model scanner and three-dimensional design software on the scannable working dies. From these files, wax frameworks were produced by subtractive manufacturing (SM) and resin frameworks by additive manufacturing (AM); both were then cast in metal alloy to obtain the metal frameworks. Marginal and internal gaps were measured using the silicone replica technique and a digital microscope. Measurement data were analyzed by the Kruskal-Wallis H test and Mann-Whitney U-test (α=.05). RESULTS The lowest and highest gaps at the premolar and molar margins were found in the SM group and the AM group, respectively. There was a statistically significant difference in the marginal gap among the 3 groups (P<.001). In the marginal area of the pontic, the largest gap was 149.39 ± 42.30 µm in the AM group, and the smallest was 24.40 ± 11.92 µm in the SM group. CONCLUSION Three-unit metal frameworks made by subtractive manufacturing are clinically applicable. However, additive manufacturing requires more research before clinical application. PMID:29279766
Wizard CD Plus and ProTaper Universal: analysis of apical transportation using new software
GIANNASTASIO, Daiana; da ROSA, Ricardo Abreu; PERES, Bernardo Urbanetto; BARRETO, Mirela Sangoi; DOTTO, Gustavo Nogara; KUGA, Milton Carlos; PEREIRA, Jefferson Ricardo; SÓ, Marcus Vinícius Reis
2013-01-01
Objective This study has two aims: 1) to evaluate the apical transportation of the Wizard CD Plus and ProTaper Universal after preparation of simulated root canals; 2) to compare, against Adobe Photoshop, the ability of a new software package (Regeemy) to superpose and subtract images. Material and Methods Twenty-five simulated root canals in acrylic-resin blocks (with 20º curvature) underwent cone beam computed tomography before and after preparation with the rotary systems (70 kVp, 4 mA, 10 s, 8×8 cm FoV). Canals were prepared up to the F2 (ProTaper) and 24.04 (Wizard CD Plus) instruments and the working length was set at 15 mm. The tomographic images were imported into iCAT Vision and CorelDraw for standardization. The superposition of pre- and post-instrumentation images from both systems was performed using Regeemy and Adobe Photoshop. The apical transportation was measured in millimetres using ImageJ. Five acrylic-resin blocks were used to validate the superposition achieved by the software. Student's t-test for independent samples was used to evaluate the apical transportation of the rotary systems with each software package individually. Student's t-test for paired samples was used to compare the ability of each software package to superpose and subtract images from one rotary system at a time. Results The values obtained with Regeemy and Adobe Photoshop were similar for both rotary systems (P>0.05). ProTaper Universal and Wizard CD Plus promoted similar apical transportation regardless of the software used for image superposition and subtraction (P>0.05). Conclusion Wizard CD Plus and ProTaper Universal promoted little apical transportation. Regeemy is a feasible software tool for superposing and subtracting images and appears to be an alternative to Adobe Photoshop. PMID:24212994
Hayward, David C.; Hetherington, Suzannah; Behm, Carolyn A.; Grasso, Lauretta C.; Forêt, Sylvain; Miller, David J.; Ball, Eldon E.
2011-01-01
Background A successful metamorphosis from a planktonic larva to a settled polyp, which under favorable conditions will establish a future colony, is critical for the survival of corals. However, in contrast to the situation in other animals, e.g., frogs and insects, little is known about the molecular basis of coral metamorphosis. We have begun to redress this situation with previous microarray studies, but there is still a great deal to learn. In the present paper we have utilized a different technology, subtractive hybridization, to characterize genes differentially expressed across this developmental transition and to compare the success of this method to microarray. Methodology/Principal Findings Suppressive subtractive hybridization (SSH) was used to identify two pools of transcripts from the coral, Acropora millepora. One is enriched for transcripts expressed at higher levels at the pre-settlement stage, and the other for transcripts expressed at higher levels at the post-settlement stage. Virtual northern blots were used to demonstrate the efficacy of the subtractive hybridization technique. Both pools contain transcripts coding for proteins in various functional classes but transcriptional regulatory proteins were represented more frequently in the post-settlement pool. Approximately 18% of the transcripts showed no significant similarity to any other sequence on the public databases. Transcripts of particular interest were further characterized by in situ hybridization, which showed that many are regulated spatially as well as temporally. Notably, many transcripts exhibit axially restricted expression patterns that correlate with the pool from which they were isolated. Several transcripts are expressed in patterns consistent with a role in calcification. Conclusions We have characterized over 200 transcripts that are differentially expressed between the planula larva and post-settlement polyp of the coral, Acropora millepora. 
Sequence, putative function, and in some cases temporal and spatial expression are reported. PMID:22065994
Image Subtraction Reduction of Open Clusters M35 & NGC 2158 in the K2 Campaign 0 Super Stamps
NASA Astrophysics Data System (ADS)
Soares-Furtado, M.; Hartman, J. D.; Bakos, G. Á.; Huang, C. X.; Penev, K.; Bhatti, W.
2017-04-01
We observed the open clusters M35 and NGC 2158 during the initial K2 campaign (C0). Reducing these data to high-precision photometric time series is challenging due to the wide point-spread function (PSF) and the blending of stellar light in such dense regions. We developed an image-subtraction-based K2 reduction pipeline that is applicable to both crowded and sparse stellar fields. We applied our pipeline to the data-rich C0 K2 super stamp, containing the two open clusters, as well as to the neighboring postage stamps. In this paper, we present our image subtraction reduction pipeline and demonstrate that this technique achieves ultra-high photometric precision for sources in the C0 super stamp. We extract the raw light curves of 3960 stars taken from the UCAC4 and EPIC catalogs and de-trend them for systematic effects. We compare our photometric results with the prior reductions published in the literature. For de-trended TFA-corrected sources in the 12-12.25 Kp magnitude range, we achieve a best 6.5-hour window running rms of 35 ppm, falling to 100 ppm for fainter stars in the 14-14.25 Kp magnitude range. For stars with Kp > 14, our de-trended and 6.5-hour binned light curves achieve the highest photometric precision. Moreover, all our TFA-corrected sources have higher precision on all timescales investigated. This work represents the first published image subtraction analysis of a K2 super stamp. This method will be particularly useful for analyzing the Galactic bulge observations carried out during K2 campaign 9. The raw light curves and the final results of our de-trending processes are publicly available at http://k2.hatsurveys.org/archive/.
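As a rough illustration of the precision metric quoted above, a "6.5-hour window running rms" can be computed by sliding a window of that length over a normalized light curve and taking the scatter of the window means. This is a hedged sketch under assumed conventions (median normalization, ~29.4-minute K2 long cadence); the pipeline's actual definition may differ.

```python
import numpy as np

def window_running_rms(flux, cadence_min=29.4, window_hr=6.5):
    """Scatter (in ppm) of sliding-window means of a normalized light curve."""
    n = max(1, int(round(window_hr * 60.0 / cadence_min)))  # points per window
    f = flux / np.median(flux) - 1.0                        # relative flux
    means = np.convolve(f, np.ones(n) / n, mode="valid")    # window means
    return 1e6 * np.std(means)                              # rms in ppm

rng = np.random.default_rng(0)
flux = 1.0 + 4e-4 * rng.standard_normal(2000)  # white noise, ~400 ppm per point
rms = window_running_rms(flux)                 # roughly 400/sqrt(13) ≈ 110 ppm
print(round(rms))
```

For pure white noise the windowed rms falls as 1/sqrt(N); residual systematics (the reason for the TFA de-trending above) break that scaling.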
Progressive disease in glioblastoma: Benefits and limitations of semi-automated volumetry
Alber, Georgina; Bette, Stefanie; Kaesmacher, Johannes; Boeckh-Behrens, Tobias; Gempt, Jens; Ringel, Florian; Specht, Hanno M.; Meyer, Bernhard; Zimmer, Claus
2017-01-01
Purpose Unambiguous evaluation of glioblastoma (GB) progression is crucial, both for clinical trials and for the day-to-day routine management of GB patients. 3D-volumetry in the follow-up of GB provides quantitative data on tumor extent and growth, and therefore has the potential to facilitate objective disease assessment. The present study investigated the utility of absolute changes in volume (delta) or regional, segmentation-based subtractions for detecting disease progression in longitudinal MRI follow-ups. Methods 165 high resolution 3-Tesla MRIs of 30 GB patients (23m, mean age 60.2y) were retrospectively included in this single center study. Contrast enhancement (CV) and tumor-related signal alterations in FLAIR images (FV) were semi-automatically segmented. Delta volume (dCV, dFV) and regional subtractions (sCV, sFV) were calculated. Disease progression was classified for every follow-up according to histopathologic results, decisions of the local multidisciplinary CNS tumor board and a consensus rating of the neuro-radiologic report. Results A generalized logistic mixed model for disease progression (yes / no) with dCV, dFV, sCV and sFV as input variables revealed that only dCV was significantly associated with the prediction of disease progression (P = .005). Delta volume had a better accuracy than regional, segmentation-based subtractions (79% versus 72%) and a higher area under the curve by trend in ROC curves (.83 versus .75). Conclusion Absolute volume changes of the contrast-enhancing tumor part were the most accurate volumetric determinant for detecting progressive disease in the assessment of GB and outweighed FLAIR changes as well as regional, segmentation-based image subtractions. This parameter might be useful in upcoming objective response criteria for glioblastoma. PMID:28245291
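The two volumetric measures compared in the study, absolute delta volume and a regional, segmentation-based subtraction, can be illustrated on binary masks. This is a minimal sketch with assumed 1 mm³ voxels; the paper's semi-automated segmentation and its exact regional definition are not reproduced.

```python
import numpy as np

VOXEL_ML = 0.001  # assumed 1 mm^3 voxels, i.e. 0.001 mL each (illustrative)

def delta_volume(seg_prev, seg_curr):
    """dCV-style measure: change in total segmented volume (mL)."""
    return (int(seg_curr.sum()) - int(seg_prev.sum())) * VOXEL_ML

def regional_subtraction(seg_prev, seg_curr):
    """sCV-style measure: volume of voxels newly segmented at follow-up,
    ignoring regions where enhancement regressed."""
    return int(np.logical_and(seg_curr, ~seg_prev).sum()) * VOXEL_ML

# Tumor cube that grows and shifts between two MRI follow-ups.
prev = np.zeros((20, 20, 20), dtype=bool); prev[5:10, 5:10, 5:10] = True
curr = np.zeros((20, 20, 20), dtype=bool); curr[6:12, 6:12, 6:12] = True
print(delta_volume(prev, curr), regional_subtraction(prev, curr))
```

A tumor that shifts as it grows inflates the regional subtraction relative to the net delta, which is one reason the two measures can disagree on the same scan pair.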
2014-01-01
Background Fractal geometry has been the basis for the development of a diagnosis of preneoplastic and neoplastic cells that resolves the indeterminacy of the atypical squamous cells of undetermined significance (ASCUS). Methods Pictures of 40 cervix cytology samples diagnosed with conventional parameters were taken. A blind study was developed in which the clinical diagnosis of 10 normal cells, 10 ASCUS, 10 L-SIL and 10 H-SIL was masked. Cellular nucleus and cytoplasm were evaluated in the generalized Box-Counting space, calculating the fractal dimension and the number of spaces occupied by the frontier of each object. Further, the number of pixels occupied by the surface of each object was calculated. Later, the mathematical features of the measures were studied to establish differences or equalities useful for diagnostic application. Finally, the sensitivity, specificity, negative likelihood ratio and diagnostic concordance (Kappa coefficient) were calculated. Results Simultaneous measures of the nuclear surface and the subtraction between the boundaries of cytoplasm and nucleus differentiate normality, L-SIL and H-SIL. Normality shows values less than or equal to 735 in nucleus surface and values greater than or equal to 161 in the cytoplasm-nucleus subtraction. L-SIL cells exhibit a nucleus surface with values greater than or equal to 972 and a cytoplasm-nucleus subtraction greater than 130. H-SIL cells show cytoplasm-nucleus values less than 120. The range 120-130 in the cytoplasm-nucleus subtraction corresponds to evolution between L-SIL and H-SIL. Sensitivity and specificity values were 100%, the negative likelihood ratio was zero and the Kappa coefficient was equal to 1. Conclusions A new diagnostic methodology of clinical applicability was developed based on fractal and Euclidean geometry, which is useful for the evaluation of cervix cytology. PMID:24742118
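The box-counting measure underlying the method can be sketched for a binary (boundary) image: count the boxes of side s that the object touches, then fit log N(s) against log(1/s). This is an illustrative sketch of plain box counting only; the paper's generalized Box-Counting space and its diagnostic thresholds (735, 161, ...) are specific to its own procedure.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Fractal dimension estimate of a square binary image: the slope of
    log(box count) versus log(1/box size)."""
    n = mask.shape[0]
    counts = []
    for s in sizes:
        m = n - n % s                                  # crop to a multiple of s
        grid = mask[:m, :m].reshape(m // s, s, m // s, s)
        counts.append(grid.any(axis=(1, 3)).sum())     # boxes touched at scale s
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

line = np.zeros((64, 64), dtype=bool); line[32, :] = True  # 1-D object
disk = np.ones((64, 64), dtype=bool)                       # 2-D object
print(round(box_counting_dimension(line), 2),
      round(box_counting_dimension(disk), 2))              # 1.0 2.0
```

A straight line recovers dimension 1 and a filled square recovers 2; irregular nuclear or cytoplasmic frontiers fall in between, which is what the diagnostic measures exploit.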
Koyama, Tomonori; Inada, Naoko; Tsujii, Hiromi; Kurita, Hiroshi
2008-08-01
An original combination score (i.e. the sum of Vocabulary and Comprehension subtracted from the sum of Block Design and Digit Span) was created from the four Wechsler Intelligence Scale for Children-Third Edition (WISC-III) subtests identified by discriminant analysis on WISC-III data from 139/129 children with/without pervasive developmental disorders (PDD; mean, 8.3/8.1 years) and its utility examined for predicting PDD. Its best cut-off was 2/3, with sensitivity, specificity, positive and negative predictive values of 0.68, 0.61, 0.65 and 0.64, respectively. The score seems useful, so long as clinicians are aware of its limitations and use it only as a supplemental measure in PDD diagnosis.
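The combination score itself is simple arithmetic over four WISC-III subtest scaled scores. A minimal sketch follows; on the usual reading of a 2/3 cut-off, scores of 3 or more are test-positive, and, as the authors stress, the result is only a supplemental screen, never a diagnosis.

```python
def pdd_combination_score(block_design, digit_span, vocabulary, comprehension):
    """(Block Design + Digit Span) - (Vocabulary + Comprehension)."""
    return (block_design + digit_span) - (vocabulary + comprehension)

def screens_positive(score, cutoff=3):
    """Cut-off of 2/3: a score at or above 3 flags possible PDD."""
    return score >= cutoff

score = pdd_combination_score(12, 11, 9, 8)
print(score, screens_positive(score))  # 6 True
```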
Ma, Chao; Ouyang, Jihong; Chen, Hui-Ling; Zhao, Xue-Hua
2014-01-01
A novel hybrid method named SCFW-KELM, which integrates effective subtractive clustering features weighting (SCFW) and a fast kernel-based extreme learning machine (KELM) classifier, has been introduced for the diagnosis of Parkinson's disease (PD). In the proposed method, SCFW is used as a data preprocessing tool, which aims at decreasing the variance in features of the PD dataset in order to further improve the diagnostic accuracy of the KELM classifier. The impact of the type of kernel function on the performance of KELM has been investigated in detail. The efficiency and effectiveness of the proposed method have been rigorously evaluated on the PD dataset in terms of classification accuracy, sensitivity, specificity, area under the receiver operating characteristic (ROC) curve (AUC), f-measure, and kappa statistic. Experimental results have demonstrated that the proposed SCFW-KELM significantly outperforms SVM-based, KNN-based, and ELM-based approaches and other methods in the literature, and achieved the highest classification results reported so far under a 10-fold cross-validation scheme, with a classification accuracy of 99.49%, sensitivity of 100%, specificity of 99.39%, AUC of 99.69%, f-measure of 0.9964, and kappa value of 0.9867. Promisingly, the proposed method might serve as a powerful new candidate for the diagnosis of PD. PMID:25484912
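The KELM classifier at the core of the method has a closed-form solution: with kernel matrix K and one-hot targets T, the output weights are beta = (I/C + K)^{-1} T. The sketch below is a generic two-class KELM on toy data; the RBF kernel, gamma, and regularization C are assumed values, and the SCFW preprocessing step and the PD dataset are not reproduced.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_train(X, y, C=100.0, gamma=0.5):
    """Closed-form KELM training: beta = (I/C + K)^-1 T."""
    K = rbf_kernel(X, X, gamma)
    T = np.eye(2)[y]                              # one-hot two-class targets
    return np.linalg.solve(np.eye(len(X)) / C + K, T)

def kelm_predict(X_train, beta, X_new, gamma=0.5):
    """Class scores for new samples; argmax gives the predicted label."""
    return rbf_kernel(X_new, X_train, gamma) @ beta

# Toy two-cluster data standing in for (pre-weighted) diagnostic features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
beta = kelm_train(X, y)
pred = kelm_predict(X, beta, np.array([[0.1, 0.0], [2.1, 1.9]])).argmax(1)
print(pred)  # expect [0 1]
```

Unlike a basic ELM there is no random hidden layer to tune; the kernel replaces it, which is what makes KELM fast and deterministic to train.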
Compressive Sensing for Background Subtraction
2009-12-20
i) reconstructing an image using only a single optical photodiode (infrared, hyperspectral, etc.) along with a digital micromirror device (DMD)... curves, we use the full images, run the background subtraction algorithm proposed in [19], and obtain baseline background-subtracted images. We then... the images to generate the ROC curve. 5.5 Silhouettes vs. Difference Images. We have used a multi-camera setup for a 3D voxel reconstruction using the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shiinoki, T; Shibuya, K; Sawada, A
Purpose: A new real-time tumor-tracking radiotherapy (RTRT) system was installed in our institution. This system consists of two x-ray tubes and color image intensifiers (I.I.s). A fiducial marker implanted near the tumor was tracked using color fluoroscopic images. However, implantation of the fiducial marker is very invasive. Color fluoroscopic images increase the visibility of the tumor, but on their own they were not sufficient to track the tumor without a fiducial marker. The purpose of this study was to investigate the feasibility of markerless tracking using dual-energy color fluoroscopic images for a real-time tumor-tracking radiotherapy system. Methods: Color fluoroscopic images of a static and a moving phantom containing a simulated tumor (30 mm diameter sphere) were experimentally acquired using the RTRT system. The programmable respiratory-motion phantom was driven with a sinusoidal pattern in the cranio-caudal direction (amplitude: 20 mm, period: 4 s). The x-ray conditions were set to 55 kV, 50 mA and 105 kV, 50 mA for low and high energy, respectively. Dual-energy images were calculated by weighted logarithmic subtraction of the high- and low-energy RGB images. The usefulness of dual-energy imaging for real-time tracking with an automated template-matching algorithm was investigated. Results: The proposed dual-energy subtraction improved the contrast between tumor and background by suppressing the bone structure. For both the static and the moving phantom, the results showed good tracking accuracy using dual-energy subtraction images. However, tracking accuracy depended on tumor position, tumor size, and x-ray conditions. Conclusion: We demonstrated the feasibility of markerless tracking using dual-energy fluoroscopic images for a real-time tumor-tracking radiotherapy system.
Furthermore, the tracking accuracy of the proposed dual-energy subtraction images needs to be investigated for clinical cases.
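The weighted logarithmic subtraction referred to above can be illustrated with a one-dimensional Beer-Lambert toy model. All attenuation coefficients and the weight w below are invented for illustration; in practice w is tuned per x-ray condition so that the bone signal cancels.

```python
import numpy as np

def dual_energy_subtract(high, low, w):
    """Weighted logarithmic subtraction: DE = log(I_high) - w * log(I_low).
    Choosing w as the ratio of bone attenuation at the two energies cancels
    bone while soft-tissue (tumor) contrast survives."""
    return np.log(high) - w * np.log(low)

# Toy attenuation coefficients (low-kV, high-kV) and three "pixels":
mu_bone, mu_tumor = (4.0, 2.0), (1.0, 0.8)
t_bone = np.array([0.0, 1.0, 1.0])     # background, bone only, bone + tumor
t_tumor = np.array([0.0, 0.0, 0.5])
low = np.exp(-(mu_bone[0] * t_bone + mu_tumor[0] * t_tumor))
high = np.exp(-(mu_bone[1] * t_bone + mu_tumor[1] * t_tumor))

de = dual_energy_subtract(high, low, w=mu_bone[1] / mu_bone[0])
print(np.round(de, 3))  # bone pixel equals background; tumor pixel stands out
```

Because log-intensity is linear in material thickness, one weight cancels exactly one material; the tumor survives only because its attenuation ratio differs from bone's.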
Prado, Jérôme; Mutreja, Rachna; Zhang, Hongchuan; Mehta, Rucha; Desroches, Amy S.; Minas, Jennifer E.; Booth, James R.
2010-01-01
It has been proposed that recent cultural inventions such as symbolic arithmetic recycle evolutionarily older neural mechanisms. A central assumption of this hypothesis is that the degree to which a pre-existing mechanism is recycled depends upon the degree of similarity between its initial function and the novel task. To test this assumption, we investigated whether the brain region involved in magnitude comparison in the intraparietal sulcus (IPS), localized by a numerosity comparison task, is recruited to a greater degree by arithmetic problems that involve number comparison (single-digit subtractions) than by problems that involve retrieving facts from memory (single-digit multiplications). Our results confirmed that subtractions are associated with greater activity in the IPS than multiplications, whereas multiplications elicit greater activity than subtractions in regions involved in verbal processing, including the middle temporal gyrus and inferior frontal gyrus, that were localized by a phonological processing task. Pattern analyses further indicated that the neural mechanisms more active for subtraction than multiplication in the IPS overlap with those involved in numerosity comparison, and that the strength of this overlap predicts inter-individual performance in the subtraction task. These findings provide novel evidence that elementary arithmetic relies on the co-option of evolutionarily older neural circuits. PMID:21246667
Measurement of kT splitting scales in W→ℓν events at [Formula: see text] with the ATLAS detector.
Aad, G; Abajyan, T; Abbott, B; Abdallah, J; Abdel Khalek, S; Abdelalim, A A; Abdinov, O; Aben, R; Abi, B; Abolins, M; AbouZeid, O S; Abramowicz, H; Abreu, H; Acharya, B S; Adamczyk, L; Adams, D L; Addy, T N; Adelman, J; Adomeit, S; Adragna, P; Adye, T; Aefsky, S; Aguilar-Saavedra, J A; Agustoni, M; Ahlen, S P; Ahles, F; Ahmad, A; Ahsan, M; Aielli, G; Åkesson, T P A; Akimoto, G; Akimov, A V; Alam, M A; Albert, J; Albrand, S; Aleksa, M; Aleksandrov, I N; Alessandria, F; Alexa, C; Alexander, G; Alexandre, G; Alexopoulos, T; Alhroob, M; Aliev, M; Alimonti, G; Alison, J; Allbrooke, B M M; Allison, L J; Allport, P P; Allwood-Spiers, S E; Almond, J; Aloisio, A; Alon, R; Alonso, A; Alonso, F; Altheimer, A; Alvarez Gonzalez, B; Alviggi, M G; Amako, K; Amelung, C; Ammosov, V V; Amor Dos Santos, S P; Amorim, A; Amoroso, S; Amram, N; Anastopoulos, C; Ancu, L S; Andari, N; Andeen, T; Anders, C F; Anders, G; Anderson, K J; Andreazza, A; Andrei, V; Anduaga, X S; Angelidakis, S; Anger, P; Angerami, A; Anghinolfi, F; Anisenkov, A; Anjos, N; Annovi, A; Antonaki, A; Antonelli, M; Antonov, A; Antos, J; Anulli, F; Aoki, M; Aperio Bella, L; Apolle, R; Arabidze, G; Aracena, I; Arai, Y; Arce, A T H; Arfaoui, S; Arguin, J-F; Argyropoulos, S; Arik, E; Arik, M; Armbruster, A J; Arnaez, O; Arnal, V; Artamonov, A; Artoni, G; Arutinov, D; Asai, S; Ask, S; Åsman, B; Asquith, L; Assamagan, K; Astalos, R; Astbury, A; Atkinson, M; Auerbach, B; Auge, E; Augsten, K; Aurousseau, M; Avolio, G; Axen, D; Azuelos, G; Azuma, Y; Baak, M A; Baccaglioni, G; Bacci, C; Bach, A M; Bachacou, H; Bachas, K; Backes, M; Backhaus, M; Backus Mayes, J; Badescu, E; Bagnaia, P; Bai, Y; Bailey, D C; Bain, T; Baines, J T; Baker, O K; Baker, S; Balek, P; Balli, F; Banas, E; Banerjee, P; Banerjee, Sw; Banfi, D; Bangert, A; Bansal, V; Bansil, H S; Barak, L; Baranov, S P; Barber, T; Barberio, E L; Barberis, D; Barbero, M; Bardin, D Y; Barillari, T; Barisonzi, M; Barklow, T; Barlow, N; Barnett, B M; Barnett, R M; Baroncelli, A; 
Barone, G; Barr, A J; Barreiro, F; Barreiro Guimarães da Costa, J; Bartoldus, R; Barton, A E; Bartsch, V; Basye, A; Bates, R L; Batkova, L; Batley, J R; Battaglia, A; Battistin, M; Bauer, F; Bawa, H S; Beale, S; Beau, T; Beauchemin, P H; Beccherle, R; Bechtle, P; Beck, H P; Becker, K; Becker, S; Beckingham, M; Becks, K H; Beddall, A J; Beddall, A; Bedikian, S; Bednyakov, V A; Bee, C P; Beemster, L J; Beermann, T A; Begel, M; Behar Harpaz, S; Belanger-Champagne, C; Bell, P J; Bell, W H; Bella, G; Bellagamba, L; Bellomo, M; Belloni, A; Beloborodova, O; Belotskiy, K; Beltramello, O; Benary, O; Benchekroun, D; Bendtz, K; Benekos, N; Benhammou, Y; Benhar Noccioli, E; Benitez Garcia, J A; Benjamin, D P; Benoit, M; Bensinger, J R; Benslama, K; Bentvelsen, S; Berge, D; Bergeaas Kuutmann, E; Berger, N; Berghaus, F; Berglund, E; Beringer, J; Bernat, P; Bernhard, R; Bernius, C; Bernlochner, F U; Berry, T; Bertella, C; Bertin, A; Bertolucci, F; Besana, M I; Besjes, G J; Besson, N; Bethke, S; Bhimji, W; Bianchi, R M; Bianchini, L; Bianco, M; Biebel, O; Bieniek, S P; Bierwagen, K; Biesiada, J; Biglietti, M; Bilokon, H; Bindi, M; Binet, S; Bingul, A; Bini, C; Biscarat, C; Bittner, B; Black, C W; Black, J E; Black, K M; Blair, R E; Blanchard, J-B; Blazek, T; Bloch, I; Blocker, C; Blocki, J; Blum, W; Blumenschein, U; Bobbink, G J; Bobrovnikov, V S; Bocchetta, S S; Bocci, A; Boddy, C R; Boehler, M; Boek, J; Boek, T T; Boelaert, N; Bogaerts, J A; Bogdanchikov, A; Bogouch, A; Bohm, C; Bohm, J; Boisvert, V; Bold, T; Boldea, V; Bolnet, N M; Bomben, M; Bona, M; Boonekamp, M; Bordoni, S; Borer, C; Borisov, A; Borissov, G; Borjanovic, I; Borri, M; Borroni, S; Bortfeldt, J; Bortolotto, V; Bos, K; Boscherini, D; Bosman, M; Boterenbrood, H; Bouchami, J; Boudreau, J; Bouhova-Thacker, E V; Boumediene, D; Bourdarios, C; Bousson, N; Boutouil, S; Boveia, A; Boyd, J; Boyko, I R; Bozovic-Jelisavcic, I; Bracinik, J; Branchini, P; Brandt, A; Brandt, G; Brandt, O; Bratzler, U; Brau, B; Brau, J E; 
Braun, H M; Brazzale, S F; Brelier, B; Bremer, J; Brendlinger, K; Brenner, R; Bressler, S; Bristow, T M; Britton, D; Brochu, F M; Brock, I; Brock, R; Broggi, F; Bromberg, C; Bronner, J; Brooijmans, G; Brooks, T; Brooks, W K; Brown, G; Bruckman de Renstrom, P A; Bruncko, D; Bruneliere, R; Brunet, S; Bruni, A; Bruni, G; Bruschi, M; Bryngemark, L; Buanes, T; Buat, Q; Bucci, F; Buchanan, J; Buchholz, P; Buckingham, R M; Buckley, A G; Buda, S I; Budagov, I A; Budick, B; Bugge, L; Bulekov, O; Bundock, A C; Bunse, M; Buran, T; Burckhart, H; Burdin, S; Burgess, T; Burke, S; Busato, E; Büscher, V; Bussey, P; Buszello, C P; Butler, B; Butler, J M; Buttar, C M; Butterworth, J M; Buttinger, W; Byszewski, M; Cabrera Urbán, S; Caforio, D; Cakir, O; Calafiura, P; Calderini, G; Calfayan, P; Calkins, R; Caloba, L P; Caloi, R; Calvet, D; Calvet, S; Camacho Toro, R; Camarri, P; Cameron, D; Caminada, L M; Caminal Armadans, R; Campana, S; Campanelli, M; Canale, V; Canelli, F; Canepa, A; Cantero, J; Cantrill, R; Cao, T; Capeans Garrido, M D M; Caprini, I; Caprini, M; Capriotti, D; Capua, M; Caputo, R; Cardarelli, R; Carli, T; Carlino, G; Carminati, L; Caron, S; Carquin, E; Carrillo-Montoya, G D; Carter, A A; Carter, J R; Carvalho, J; Casadei, D; Casado, M P; Cascella, M; Caso, C; Castaneda-Miranda, E; Castillo Gimenez, V; Castro, N F; Cataldi, G; Catastini, P; Catinaccio, A; Catmore, J R; Cattai, A; Cattani, G; Caughron, S; Cavaliere, V; Cavalleri, P; Cavalli, D; Cavalli-Sforza, M; Cavasinni, V; Ceradini, F; Cerqueira, A S; Cerri, A; Cerrito, L; Cerutti, F; Cetin, S A; Chafaq, A; Chakraborty, D; Chalupkova, I; Chan, K; Chang, P; Chapleau, B; Chapman, J D; Chapman, J W; Charlton, D G; Chavda, V; Chavez Barajas, C A; Cheatham, S; Chekanov, S; Chekulaev, S V; Chelkov, G A; Chelstowska, M A; Chen, C; Chen, H; Chen, S; Chen, X; Chen, Y; Cheng, Y; Cheplakov, A; Cherkaoui El Moursli, R; Chernyatin, V; Cheu, E; Cheung, S L; Chevalier, L; Chiefari, G; Chikovani, L; Childers, J T; Chilingarov, A; 
Chiodini, G; Chisholm, A S; Chislett, R T; Chitan, A; Chizhov, M V; Choudalakis, G; Chouridou, S; Chow, B K B; Christidi, I A; Christov, A; Chromek-Burckhart, D; Chu, M L; Chudoba, J; Ciapetti, G; Ciftci, A K; Ciftci, R; Cinca, D; Cindro, V; Ciocio, A; Cirilli, M; Cirkovic, P; Citron, Z H; Citterio, M; Ciubancan, M; Clark, A; Clark, P J; Clarke, R N; Cleland, W; Clemens, J C; Clement, B; Clement, C; Coadou, Y; Cobal, M; Coccaro, A; Cochran, J; Coffey, L; Cogan, J G; Coggeshall, J; Colas, J; Cole, S; Colijn, A P; Collins, N J; Collins-Tooth, C; Collot, J; Colombo, T; Colon, G; Compostella, G; Conde Muiño, P; Coniavitis, E; Conidi, M C; Consonni, S M; Consorti, V; Constantinescu, S; Conta, C; Conti, G; Conventi, F; Cooke, M; Cooper, B D; Cooper-Sarkar, A M; Cooper-Smith, N J; Copic, K; Cornelissen, T; Corradi, M; Corriveau, F; Cortes-Gonzalez, A; Cortiana, G; Costa, G; Costa, M J; Costanzo, D; Côté, D; Cottin, G; Courneyea, L; Cowan, G; Cox, B E; Cranmer, K; Crépé-Renaudin, S; Crescioli, F; Cristinziani, M; Crosetti, G; Cuciuc, C-M; Cuenca Almenar, C; Cuhadar Donszelmann, T; Cummings, J; Curatolo, M; Curtis, C J; Cuthbert, C; Cwetanski, P; Czirr, H; Czodrowski, P; Czyczula, Z; D'Auria, S; D'Onofrio, M; D'Orazio, A; Da Cunha Sargedas De Sousa, M J; Da Via, C; Dabrowski, W; Dafinca, A; Dai, T; Dallaire, F; Dallapiccola, C; Dam, M; Damiani, D S; Danielsson, H O; Dao, V; Darbo, G; Darlea, G L; Darmora, S; Dassoulas, J A; Davey, W; Davidek, T; Davidson, N; Davidson, R; Davies, E; Davies, M; Davignon, O; Davison, A R; Davygora, Y; Dawe, E; Dawson, I; Daya-Ishmukhametova, R K; De, K; de Asmundis, R; De Castro, S; De Cecco, S; de Graat, J; De Groot, N; de Jong, P; De La Taille, C; De la Torre, H; De Lorenzi, F; De Nooij, L; De Pedis, D; De Salvo, A; De Sanctis, U; De Santo, A; De Vivie De Regie, J B; De Zorzi, G; Dearnaley, W J; Debbe, R; Debenedetti, C; Dechenaux, B; Dedovich, D V; Degenhardt, J; Del Peso, J; Del Prete, T; Delemontex, T; Deliyergiyev, M; Dell'Acqua, A; 
Dell'Asta, L; Della Pietra, M; Della Volpe, D; Delmastro, M; Delsart, P A; Deluca, C; Demers, S; Demichev, M; Demirkoz, B; Denisov, S P; Derendarz, D; Derkaoui, J E; Derue, F; Dervan, P; Desch, K; Deviveiros, P O; Dewhurst, A; DeWilde, B; Dhaliwal, S; Dhullipudi, R; Di Ciaccio, A; Di Ciaccio, L; Di Donato, C; Di Girolamo, A; Di Girolamo, B; Di Luise, S; Di Mattia, A; Di Micco, B; Di Nardo, R; Di Simone, A; Di Sipio, R; Diaz, M A; Diehl, E B; Dietrich, J; Dietzsch, T A; Diglio, S; Dindar Yagci, K; Dingfelder, J; Dinut, F; Dionisi, C; Dita, P; Dita, S; Dittus, F; Djama, F; Djobava, T; do Vale, M A B; Do Valle Wemans, A; Doan, T K O; Dobbs, M; Dobos, D; Dobson, E; Dodd, J; Doglioni, C; Doherty, T; Dohmae, T; Doi, Y; Dolejsi, J; Dolezal, Z; Dolgoshein, B A; Donadelli, M; Donini, J; Dopke, J; Doria, A; Dos Anjos, A; Dotti, A; Dova, M T; Doyle, A T; Dressnandt, N; Dris, M; Dubbert, J; Dube, S; Dubreuil, E; Duchovni, E; Duckeck, G; Duda, D; Dudarev, A; Dudziak, F; Duerdoth, I P; Duflot, L; Dufour, M-A; Duguid, L; Dührssen, M; Dunford, M; Duran Yildiz, H; Düren, M; Duxfield, R; Dwuznik, M; Ebenstein, W L; Ebke, J; Eckweiler, S; Edson, W; Edwards, C A; Edwards, N C; Ehrenfeld, W; Eifert, T; Eigen, G; Einsweiler, K; Eisenhandler, E; Ekelof, T; El Kacimi, M; Ellert, M; Elles, S; Ellinghaus, F; Ellis, K; Ellis, N; Elmsheuser, J; Elsing, M; Emeliyanov, D; Enari, Y; Engelmann, R; Engl, A; Epp, B; Erdmann, J; Ereditato, A; Eriksson, D; Ernst, J; Ernst, M; Ernwein, J; Errede, D; Errede, S; Ertel, E; Escalier, M; Esch, H; Escobar, C; Espinal Curull, X; Esposito, B; Etienne, F; Etienvre, A I; Etzion, E; Evangelakou, D; Evans, H; Fabbri, L; Fabre, C; Facini, G; Fakhrutdinov, R M; Falciano, S; Fang, Y; Fanti, M; Farbin, A; Farilla, A; Farley, J; Farooque, T; Farrell, S; Farrington, S M; Farthouat, P; Fassi, F; Fassnacht, P; Fassouliotis, D; Fatholahzadeh, B; Favareto, A; Fayard, L; Federic, P; Fedin, O L; Fedorko, W; Fehling-Kaschek, M; Feligioni, L; Feng, C; Feng, E J; Fenyuk, A B; 
Ferencei, J; Fernando, W; Ferrag, S; Ferrando, J; Ferrara, V; Ferrari, A; Ferrari, P; Ferrari, R; Ferreira de Lima, D E; Ferrer, A; Ferrere, D; Ferretti, C; Ferretto Parodi, A; Fiascaris, M; Fiedler, F; Filipčič, A; Filthaut, F; Fincke-Keeler, M; Fiolhais, M C N; Fiorini, L; Firan, A; Fischer, J; Fisher, M J; Fitzgerald, E A; Flechl, M; Fleck, I; Fleischmann, P; Fleischmann, S; Fletcher, G T; Fletcher, G; Flick, T; Floderus, A; Flores Castillo, L R; Florez Bustos, A C; Flowerdew, M J; Fonseca Martin, T; Formica, A; Forti, A; Fortin, D; Fournier, D; Fowler, A J; Fox, H; Francavilla, P; Franchini, M; Franchino, S; Francis, D; Frank, T; Franklin, M; Franz, S; Fraternali, M; Fratina, S; French, S T; Friedrich, C; Friedrich, F; Froidevaux, D; Frost, J A; Fukunaga, C; Fullana Torregrosa, E; Fulsom, B G; Fuster, J; Gabaldon, C; Gabizon, O; Gadatsch, S; Gadfort, T; Gadomski, S; Gagliardi, G; Gagnon, P; Galea, C; Galhardo, B; Gallas, E J; Gallo, V; Gallop, B J; Gallus, P; Gan, K K; Gandrajula, R P; Gao, Y S; Gaponenko, A; Garay Walls, F M; Garberson, F; García, C; García Navarro, J E; Garcia-Sciveres, M; Gardner, R W; Garelli, N; Garonne, V; Gatti, C; Gaudio, G; Gaur, B; Gauthier, L; Gauzzi, P; Gavrilenko, I L; Gay, C; Gaycken, G; Gazis, E N; Ge, P; Gecse, Z; Gee, C N P; Geerts, D A A; Geich-Gimbel, Ch; Gellerstedt, K; Gemme, C; Gemmell, A; Genest, M H; Gentile, S; George, M; George, S; Gerbaudo, D; Gerlach, P; Gershon, A; Geweniger, C; Ghazlane, H; Ghodbane, N; Giacobbe, B; Giagu, S; Giangiobbe, V; Gianotti, F; Gibbard, B; Gibson, A; Gibson, S M; Gilchriese, M; Gillam, T P S; Gillberg, D; Gillman, A R; Gingrich, D M; Ginzburg, J; Giokaris, N; Giordani, M P; Giordano, R; Giorgi, F M; Giovannini, P; Giraud, P F; Giugni, D; Giunta, M; Gjelsten, B K; Gladilin, L K; Glasman, C; Glatzer, J; Glazov, A; Glonti, G L; Goddard, J R; Godfrey, J; Godlewski, J; Goebel, M; Goeringer, C; Goldfarb, S; Golling, T; Golubkov, D; Gomes, A; Gomez Fajardo, L S; Gonçalo, R; Goncalves Pinto 
Firmino Da Costa, J; Gonella, L; González de la Hoz, S; Gonzalez Parra, G; Gonzalez Silva, M L; Gonzalez-Sevilla, S; Goodson, J J; Goossens, L; Göpfert, T; Gorbounov, P A; Gordon, H A; Gorelov, I; Gorfine, G; Gorini, B; Gorini, E; Gorišek, A; Gornicki, E; Goshaw, A T; Gössling, C; Gostkin, M I; Gough Eschrich, I; Gouighri, M; Goujdami, D; Goulette, M P; Goussiou, A G; Goy, C; Gozpinar, S; Graber, L; Grabowska-Bold, I; Grafström, P; Grahn, K-J; Gramstad, E; Grancagnolo, F; Grancagnolo, S; Grassi, V; Gratchev, V; Gray, H M; Gray, J A; Graziani, E; Grebenyuk, O G; Greenshaw, T; Greenwood, Z D; Gregersen, K; Gregor, I M; Grenier, P; Griffiths, J; Grigalashvili, N; Grillo, A A; Grimm, K; Grinstein, S; Gris, Ph; Grishkevich, Y V; Grivaz, J-F; Grohs, J P; Grohsjean, A; Gross, E; Grosse-Knetter, J; Groth-Jensen, J; Grybel, K; Guest, D; Gueta, O; Guicheney, C; Guido, E; Guillemin, T; Guindon, S; Gul, U; Gunther, J; Guo, B; Guo, J; Gutierrez, P; Guttman, N; Gutzwiller, O; Guyot, C; Gwenlan, C; Gwilliam, C B; Haas, A; Haas, S; Haber, C; Hadavand, H K; Hadley, D R; Haefner, P; Hajduk, Z; Hakobyan, H; Hall, D; Halladjian, G; Hamacher, K; Hamal, P; Hamano, K; Hamer, M; Hamilton, A; Hamilton, S; Han, L; Hanagaki, K; Hanawa, K; Hance, M; Handel, C; Hanke, P; Hansen, J R; Hansen, J B; Hansen, J D; Hansen, P H; Hansson, P; Hara, K; Harenberg, T; Harkusha, S; Harper, D; Harrington, R D; Harris, O M; Hartert, J; Hartjes, F; Haruyama, T; Harvey, A; Hasegawa, S; Hasegawa, Y; Hassani, S; Haug, S; Hauschild, M; Hauser, R; Havranek, M; Hawkes, C M; Hawkings, R J; Hawkins, A D; Hayakawa, T; Hayashi, T; Hayden, D; Hays, C P; Hayward, H S; Haywood, S J; Head, S J; Heck, T; Hedberg, V; Heelan, L; Heim, S; Heinemann, B; Heisterkamp, S; Helary, L; Heller, C; Heller, M; Hellman, S; Hellmich, D; Helsens, C; Henderson, R C W; Henke, M; Henrichs, A; Henriques Correia, A M; Henrot-Versille, S; Hensel, C; Hernandez, C M; Hernández Jiménez, Y; Herrberg, R; Herten, G; Hertenberger, R; Hervas, L; 
Hesketh, G G; Hessey, N P; Hickling, R; Higón-Rodriguez, E; Hill, J C; Hiller, K H; Hillert, S; Hillier, S J; Hinchliffe, I; Hines, E; Hirose, M; Hirsch, F; Hirschbuehl, D; Hobbs, J; Hod, N; Hodgkinson, M C; Hodgson, P; Hoecker, A; Hoeferkamp, M R; Hoffman, J; Hoffmann, D; Hohlfeld, M; Holmgren, S O; Holy, T; Holzbauer, J L; Hong, T M; Hooft van Huysduynen, L; Hostachy, J-Y; Hou, S; Hoummada, A; Howard, J; Howarth, J; Hrabovsky, M; Hristova, I; Hrivnac, J; Hryn'ova, T; Hsu, P J; Hsu, S-C; Hu, D; Hubacek, Z; Hubaut, F; Huegging, F; Huettmann, A; Huffman, T B; Hughes, E W; Hughes, G; Huhtinen, M; Hülsing, T A; Hurwitz, M; Huseynov, N; Huston, J; Huth, J; Iacobucci, G; Iakovidis, G; Ibbotson, M; Ibragimov, I; Iconomidou-Fayard, L; Idarraga, J; Iengo, P; Igonkina, O; Ikegami, Y; Ikematsu, K; Ikeno, M; Iliadis, D; Ilic, N; Ince, T; Ioannou, P; Iodice, M; Iordanidou, K; Ippolito, V; Irles Quiles, A; Isaksson, C; Ishino, M; Ishitsuka, M; Ishmukhametov, R; Issever, C; Istin, S; Ivashin, A V; Iwanski, W; Iwasaki, H; Izen, J M; Izzo, V; Jackson, B; Jackson, J N; Jackson, P; Jaekel, M R; Jain, V; Jakobs, K; Jakobsen, S; Jakoubek, T; Jakubek, J; Jamin, D O; Jana, D K; Jansen, E; Jansen, H; Janssen, J; Jantsch, A; Janus, M; Jared, R C; Jarlskog, G; Jeanty, L; Jeng, G-Y; Jen-La Plante, I; Jennens, D; Jenni, P; Jeske, C; Jež, P; Jézéquel, S; Jha, M K; Ji, H; Ji, W; Jia, J; Jiang, Y; Jimenez Belenguer, M; Jin, S; Jinnouchi, O; Joergensen, M D; Joffe, D; Johansen, M; Johansson, K E; Johansson, P; Johnert, S; Johns, K A; Jon-And, K; Jones, G; Jones, R W L; Jones, T J; Joram, C; Jorge, P M; Joshi, K D; Jovicevic, J; Jovin, T; Ju, X; Jung, C A; Jungst, R M; Juranek, V; Jussel, P; Juste Rozas, A; Kabana, S; Kaci, M; Kaczmarska, A; Kadlecik, P; Kado, M; Kagan, H; Kagan, M; Kajomovitz, E; Kalinin, S; Kama, S; Kanaya, N; Kaneda, M; Kaneti, S; Kanno, T; Kantserov, V A; Kanzaki, J; Kaplan, B; Kapliy, A; Kar, D; Karagounis, M; Karakostas, K; Karnevskiy, M; Kartvelishvili, V; Karyukhin, A N; 
Kashif, L; Kasieczka, G; Kass, R D; Kastanas, A; Kataoka, Y; Katzy, J; Kaushik, V; Kawagoe, K; Kawamoto, T; Kawamura, G; Kazama, S; Kazanin, V F; Kazarinov, M Y; Keeler, R; Keener, P T; Kehoe, R; Keil, M; Keller, J S; Kenyon, M; Keoshkerian, H; Kepka, O; Kerschen, N; Kerševan, B P; Kersten, S; Kessoku, K; Keung, J; Khalil-Zada, F; Khandanyan, H; Khanov, A; Kharchenko, D; Khodinov, A; Khomich, A; Khoo, T J; Khoriauli, G; Khoroshilov, A; Khovanskiy, V; Khramov, E; Khubua, J; Kim, H; Kim, S H; Kimura, N; Kind, O; King, B T; King, M; King, R S B; Kirk, J; Kiryunin, A E; Kishimoto, T; Kisielewska, D; Kitamura, T; Kittelmann, T; Kiuchi, K; Kladiva, E; Klein, M; Klein, U; Kleinknecht, K; Klemetti, M; Klier, A; Klimek, P; Klimentov, A; Klingenberg, R; Klinger, J A; Klinkby, E B; Klioutchnikova, T; Klok, P F; Klous, S; Kluge, E-E; Kluge, T; Kluit, P; Kluth, S; Kneringer, E; Knoops, E B F G; Knue, A; Ko, B R; Kobayashi, T; Kobel, M; Kocian, M; Kodys, P; Koenig, S; Koetsveld, F; Koevesarki, P; Koffas, T; Koffeman, E; Kogan, L A; Kohlmann, S; Kohn, F; Kohout, Z; Kohriki, T; Koi, T; Kolanoski, H; Kolesnikov, V; Koletsou, I; Koll, J; Komar, A A; Komori, Y; Kondo, T; Köneke, K; König, A C; Kono, T; Kononov, A I; Konoplich, R; Konstantinidis, N; Kopeliansky, R; Koperny, S; Köpke, L; Kopp, A K; Korcyl, K; Kordas, K; Korn, A; Korol, A; Korolkov, I; Korolkova, E V; Korotkov, V A; Kortner, O; Kortner, S; Kostyukhin, V V; Kotov, S; Kotov, V M; Kotwal, A; Kourkoumelis, C; Kouskoura, V; Koutsman, A; Kowalewski, R; Kowalski, T Z; Kozanecki, W; Kozhin, A S; Kral, V; Kramarenko, V A; Kramberger, G; Krasny, M W; Krasznahorkay, A; Kraus, J K; Kravchenko, A; Kreiss, S; Krejci, F; Kretzschmar, J; Kreutzfeldt, K; Krieger, N; Krieger, P; Kroeninger, K; Kroha, H; Kroll, J; Kroseberg, J; Krstic, J; Kruchonak, U; Krüger, H; Kruker, T; Krumnack, N; Krumshteyn, Z V; Kruse, M K; Kubota, T; Kuday, S; Kuehn, S; Kugel, A; Kuhl, T; Kukhtin, V; Kulchitsky, Y; Kuleshov, S; Kuna, M; Kunkle, J; Kupco, A; 
Kurashige, H; Kurata, M; Kurochkin, Y A; Kus, V; Kuwertz, E S; Kuze, M; Kvita, J; Kwee, R; La Rosa, A; La Rotonda, L; Labarga, L; Lablak, S; Lacasta, C; Lacava, F; Lacey, J; Lacker, H; Lacour, D; Lacuesta, V R; Ladygin, E; Lafaye, R; Laforge, B; Lagouri, T; Lai, S; Laisne, E; Lambourne, L; Lampen, C L; Lampl, W; Lançon, E; Landgraf, U; Landon, M P J; Lang, V S; Lange, C; Lankford, A J; Lanni, F; Lantzsch, K; Lanza, A; Laplace, S; Lapoire, C; Laporte, J F; Lari, T; Larner, A; Lassnig, M; Laurelli, P; Lavorini, V; Lavrijsen, W; Laycock, P; Le Dortz, O; Le Guirriec, E; Le Menedeu, E; LeCompte, T; Ledroit-Guillon, F; Lee, H; Lee, J S H; Lee, S C; Lee, L; Lefebvre, M; Legendre, M; Legger, F; Leggett, C; Lehmacher, M; Lehmann Miotto, G; Leister, A G; Leite, M A L; Leitner, R; Lellouch, D; Lemmer, B; Lendermann, V; Leney, K J C; Lenz, T; Lenzen, G; Lenzi, B; Leonhardt, K; Leontsinis, S; Lepold, F; Leroy, C; Lessard, J-R; Lester, C G; Lester, C M; Levêque, J; Levin, D; Levinson, L J; Lewis, A; Lewis, G H; Leyko, A M; Leyton, M; Li, B; Li, B; Li, H; Li, H L; Li, S; Li, X; Liang, Z; Liao, H; Liberti, B; Lichard, P; Lie, K; Liebal, J; Liebig, W; Limbach, C; Limosani, A; Limper, M; Lin, S C; Linde, F; Linnemann, J T; Lipeles, E; Lipniacka, A; Lisovyi, M; Liss, T M; Lissauer, D; Lister, A; Litke, A M; Liu, D; Liu, J B; Liu, L; Liu, M; Liu, Y; Livan, M; Livermore, S S A; Lleres, A; Llorente Merino, J; Lloyd, S L; Lo Sterzo, F; Lobodzinska, E; Loch, P; Lockman, W S; Loddenkoetter, T; Loebinger, F K; Loevschall-Jensen, A E; Loginov, A; Loh, C W; Lohse, T; Lohwasser, K; Lokajicek, M; Lombardo, V P; Long, R E; Lopes, L; Lopez Mateos, D; Lorenz, J; Lorenzo Martinez, N; Losada, M; Loscutoff, P; Losty, M J; Lou, X; Lounis, A; Loureiro, K F; Love, J; Love, P A; Lowe, A J; Lu, F; Lubatti, H J; Luci, C; Lucotte, A; Ludwig, D; Ludwig, I; Ludwig, J; Luehring, F; Lukas, W; Luminari, L; Lund, E; Lundberg, B; Lundberg, J; Lundberg, O; Lund-Jensen, B; Lundquist, J; Lungwitz, M; Lynn, D; Lysak, 
R; Lytken, E; Ma, H; Ma, L L; Maccarrone, G; Macchiolo, A; Maček, B; Machado Miguens, J; Macina, D; Mackeprang, R; Madar, R; Madaras, R J; Maddocks, H J; Mader, W F; Madsen, A; Maeno, M; Maeno, T; Magnoni, L; Magradze, E; Mahboubi, K; Mahlstedt, J; Mahmoud, S; Mahout, G; Maiani, C; Maidantchik, C; Maio, A; Majewski, S; Makida, Y; Makovec, N; Mal, P; Malaescu, B; Malecki, Pa; Malecki, P; Maleev, V P; Malek, F; Mallik, U; Malon, D; Malone, C; Maltezos, S; Malyshev, V; Malyukov, S; Mamuzic, J; Manabe, A; Mandelli, L; Mandić, I; Mandrysch, R; Maneira, J; Manfredini, A; Manhaes de Andrade Filho, L; Manjarres Ramos, J A; Mann, A; Manning, P M; Manousakis-Katsikakis, A; Mansoulie, B; Mantifel, R; Mapelli, A; Mapelli, L; March, L; Marchand, J F; Marchese, F; Marchiori, G; Marcisovsky, M; Marino, C P; Marroquim, F; Marshall, Z; Marti, L F; Marti-Garcia, S; Martin, B; Martin, B; Martin, J P; Martin, T A; Martin, V J; Martin Dit Latour, B; Martinez, H; Martinez, M; Martinez Outschoorn, V; Martin-Haugh, S; Martyniuk, A C; Marx, M; Marzano, F; Marzin, A; Masetti, L; Mashimo, T; Mashinistov, R; Masik, J; Maslennikov, A L; Massa, I; Massol, N; Mastrandrea, P; Mastroberardino, A; Masubuchi, T; Matsunaga, H; Matsushita, T; Mättig, P; Mättig, S; Mattravers, C; Maurer, J; Maxfield, S J; Maximov, D A; Mazini, R; Mazur, M; Mazzaferro, L; Mazzanti, M; Mc Donald, J; Mc Kee, S P; McCarn, A; McCarthy, R L; McCarthy, T G; McCubbin, N A; McFarlane, K W; Mcfayden, J A; Mchedlidze, G; Mclaughlan, T; McMahon, S J; McPherson, R A; Meade, A; Mechnich, J; Mechtel, M; Medinnis, M; Meehan, S; Meera-Lebbai, R; Meguro, T; Mehlhase, S; Mehta, A; Meier, K; Meineck, C; Meirose, B; Melachrinos, C; Mellado Garcia, B R; Meloni, F; Mendoza Navas, L; Meng, Z; Mengarelli, A; Menke, S; Meoni, E; Mercurio, K M; Meric, N; Mermod, P; Merola, L; Meroni, C; Merritt, F S; Merritt, H; Messina, A; Metcalfe, J; Mete, A S; Meyer, C; Meyer, C; Meyer, J-P; Meyer, J; Meyer, J; Michal, S; Micu, L; Middleton, R P; Migas, S; 
Mijović, L; Mikenberg, G; Mikestikova, M; Mikuž, M; Miller, D W; Miller, R J; Mills, W J; Mills, C; Milov, A; Milstead, D A; Milstein, D; Minaenko, A A; Miñano Moya, M; Minashvili, I A; Mincer, A I; Mindur, B; Mineev, M; Ming, Y; Mir, L M; Mirabelli, G; Mitrevski, J; Mitsou, V A; Mitsui, S; Miyagawa, P S; Mjörnmark, J U; Moa, T; Moeller, V; Mohapatra, S; Mohr, W; Moles-Valls, R; Molfetas, A; Mönig, K; Monini, C; Monk, J; Monnier, E; Montejo Berlingen, J; Monticelli, F; Monzani, S; Moore, R W; Mora Herrera, C; Moraes, A; Morange, N; Morel, J; Moreno, D; Moreno Llácer, M; Morettini, P; Morgenstern, M; Morii, M; Morley, A K; Mornacchi, G; Morris, J D; Morvaj, L; Möser, N; Moser, H G; Mosidze, M; Moss, J; Mount, R; Mountricha, E; Mouraviev, S V; Moyse, E J W; Mueller, F; Mueller, J; Mueller, K; Mueller, T; Muenstermann, D; Müller, T A; Munwes, Y; Murray, W J; Mussche, I; Musto, E; Myagkov, A G; Myska, M; Nackenhorst, O; Nadal, J; Nagai, K; Nagai, R; Nagai, Y; Nagano, K; Nagarkar, A; Nagasaka, Y; Nagel, M; Nairz, A M; Nakahama, Y; Nakamura, K; Nakamura, T; Nakano, I; Namasivayam, H; Nanava, G; Napier, A; Narayan, R; Nash, M; Nattermann, T; Naumann, T; Navarro, G; Neal, H A; Nechaeva, P Yu; Neep, T J; Negri, A; Negri, G; Negrini, M; Nektarijevic, S; Nelson, A; Nelson, T K; Nemecek, S; Nemethy, P; Nepomuceno, A A; Nessi, M; Neubauer, M S; Neumann, M; Neusiedl, A; Neves, R M; Nevski, P; Newcomer, F M; Newman, P R; Nguyen, D H; Nguyen Thi Hong, V; Nickerson, R B; Nicolaidou, R; Nicquevert, B; Niedercorn, F; Nielsen, J; Nikiforou, N; Nikiforov, A; Nikolaenko, V; Nikolic-Audit, I; Nikolics, K; Nikolopoulos, K; Nilsen, H; Nilsson, P; Ninomiya, Y; Nisati, A; Nisius, R; Nobe, T; Nodulman, L; Nomachi, M; Nomidis, I; Norberg, S; Nordberg, M; Novakova, J; Nozaki, M; Nozka, L; Nuncio-Quiroz, A-E; Nunes Hanninger, G; Nunnemann, T; Nurse, E; O'Brien, B J; O'Neil, D C; O'Shea, V; Oakes, L B; Oakham, F G; Oberlack, H; Ocariz, J; Ochi, A; Ochoa, M I; Oda, S; Odaka, S; Odier, J; Ogren, H; 
Oh, A; Oh, S H; Ohm, C C; Ohshima, T; Okamura, W; Okawa, H; Okumura, Y; Okuyama, T; Olariu, A; Olchevski, A G; Olivares Pino, S A; Oliveira, M; Oliveira Damazio, D; Oliver Garcia, E; Olivito, D; Olszewski, A; Olszowska, J; Onofre, A; Onyisi, P U E; Oram, C J; Oreglia, M J; Oren, Y; Orestano, D; Orlando, N; Oropeza Barrera, C; Orr, R S; Osculati, B; Ospanov, R; Osuna, C; Otero Y Garzon, G; Ottersbach, J P; Ouchrif, M; Ouellette, E A; Ould-Saada, F; Ouraou, A; Ouyang, Q; Ovcharova, A; Owen, M; Owen, S; Ozcan, V E; Ozturk, N; Pacheco Pages, A; Padilla Aranda, C; Pagan Griso, S; Paganis, E; Pahl, C; Paige, F; Pais, P; Pajchel, K; Palacino, G; Paleari, C P; Palestini, S; Pallin, D; Palma, A; Palmer, J D; Pan, Y B; Panagiotopoulou, E; Panduro Vazquez, J G; Pani, P; Panikashvili, N; Panitkin, S; Pantea, D; Papadelis, A; Papadopoulou, Th D; Paramonov, A; Paredes Hernandez, D; Park, W; Parker, M A; Parodi, F; Parsons, J A; Parzefall, U; Pashapour, S; Pasqualucci, E; Passaggio, S; Passeri, A; Pastore, F; Pastore, Fr; Pásztor, G; Pataraia, S; Patel, N D; Pater, J R; Patricelli, S; Pauly, T; Pearce, J; Pedersen, M; Pedraza Lopez, S; Pedraza Morales, M I; Peleganchuk, S V; Pelikan, D; Peng, H; Penning, B; Penson, A; Penwell, J; Perez Cavalcanti, T; Perez Codina, E; Pérez García-Estañ, M T; Perez Reale, V; Perini, L; Pernegger, H; Perrino, R; Perrodo, P; Peshekhonov, V D; Peters, K; Peters, R F Y; Petersen, B A; Petersen, J; Petersen, T C; Petit, E; Petridis, A; Petridou, C; Petrolo, E; Petrucci, F; Petschull, D; Petteni, M; Pezoa, R; Phan, A; Phillips, P W; Piacquadio, G; Pianori, E; Picazio, A; Piccaro, E; Piccinini, M; Piec, S M; Piegaia, R; Pignotti, D T; Pilcher, J E; Pilkington, A D; Pina, J; Pinamonti, M; Pinder, A; Pinfold, J L; Pingel, A; Pinto, B; Pizio, C; Pleier, M-A; Pleskot, V; Plotnikova, E; Plucinski, P; Poblaguev, A; Poddar, S; Podlyski, F; Poettgen, R; Poggioli, L; Pohl, D; Pohl, M; Polesello, G; Policicchio, A; Polifka, R; Polini, A; Poll, J; Polychronakos, V; 
Pomeroy, D; Pommès, K; Pontecorvo, L; Pope, B G; Popeneciu, G A; Popovic, D S; Poppleton, A; Portell Bueso, X; Pospelov, G E; Pospisil, S; Potrap, I N; Potter, C J; Potter, C T; Poulard, G; Poveda, J; Pozdnyakov, V; Prabhu, R; Pralavorio, P; Pranko, A; Prasad, S; Pravahan, R; Prell, S; Pretzl, K; Price, D; Price, J; Price, L E; Prieur, D; Primavera, M; Proissl, M; Prokofiev, K; Prokoshin, F; Protopapadaki, E; Protopopescu, S; Proudfoot, J; Prudent, X; Przybycien, M; Przysiezniak, H; Psoroulas, S; Ptacek, E; Pueschel, E; Puldon, D; Purohit, M; Puzo, P; Pylypchenko, Y; Qian, J; Quadt, A; Quarrie, D R; Quayle, W B; Quilty, D; Raas, M; Radeka, V; Radescu, V; Radloff, P; Ragusa, F; Rahal, G; Rahimi, A M; Rajagopalan, S; Rammensee, M; Rammes, M; Randle-Conde, A S; Randrianarivony, K; Rangel-Smith, C; Rao, K; Rauscher, F; Rave, T C; Ravenscroft, T; Raymond, M; Read, A L; Rebuzzi, D M; Redelbach, A; Redlinger, G; Reece, R; Reeves, K; Reinsch, A; Reisinger, I; Relich, M; Rembser, C; Ren, Z L; Renaud, A; Rescigno, M; Resconi, S; Resende, B; Reznicek, P; Rezvani, R; Richter, R; Richter-Was, E; Ridel, M; Rieck, P; Rijssenbeek, M; Rimoldi, A; Rinaldi, L; Rios, R R; Ritsch, E; Riu, I; Rivoltella, G; Rizatdinova, F; Rizvi, E; Robertson, S H; Robichaud-Veronneau, A; Robinson, D; Robinson, J E M; Robson, A; Rocha de Lima, J G; Roda, C; Roda Dos Santos, D; Roe, A; Roe, S; Røhne, O; Rolli, S; Romaniouk, A; Romano, M; Romeo, G; Romero Adam, E; Rompotis, N; Roos, L; Ros, E; Rosati, S; Rosbach, K; Rose, A; Rose, M; Rosenbaum, G A; Rosendahl, P L; Rosenthal, O; Rosselet, L; Rossetti, V; Rossi, E; Rossi, L P; Rotaru, M; Roth, I; Rothberg, J; Rousseau, D; Royon, C R; Rozanov, A; Rozen, Y; Ruan, X; Rubbo, F; Rubinskiy, I; Ruckstuhl, N; Rud, V I; Rudolph, C; Rudolph, M S; Rühr, F; Ruiz-Martinez, A; Rumyantsev, L; Rurikova, Z; Rusakovich, N A; Ruschke, A; Rutherfoord, J P; Ruthmann, N; Ruzicka, P; Ryabov, Y F; Rybar, M; Rybkin, G; Ryder, N C; Saavedra, A F; Sadeh, I; Sadrozinski, H F-W; 
Sadykov, R; Safai Tehrani, F; Sakamoto, H; Salamanna, G; Salamon, A; Saleem, M; Salek, D; Salihagic, D; Salnikov, A; Salt, J; Salvachua Ferrando, B M; Salvatore, D; Salvatore, F; Salvucci, A; Salzburger, A; Sampsonidis, D; Sanchez, A; Sánchez, J; Sanchez Martinez, V; Sandaker, H; Sander, H G; Sanders, M P; Sandhoff, M; Sandoval, T; Sandoval, C; Sandstroem, R; Sankey, D P C; Sansoni, A; Santamarina Rios, C; Santoni, C; Santonico, R; Santos, H; Santoyo Castillo, I; Sapp, K; Saraiva, J G; Sarangi, T; Sarkisyan-Grinbaum, E; Sarrazin, B; Sarri, F; Sartisohn, G; Sasaki, O; Sasaki, Y; Sasao, N; Satsounkevitch, I; Sauvage, G; Sauvan, E; Sauvan, J B; Savard, P; Savinov, V; Savu, D O; Sawyer, L; Saxon, D H; Saxon, J; Sbarra, C; Sbrizzi, A; Scannicchio, D A; Scarcella, M; Schaarschmidt, J; Schacht, P; Schaefer, D; Schaelicke, A; Schaepe, S; Schaetzel, S; Schäfer, U; Schaffer, A C; Schaile, D; Schamberger, R D; Scharf, V; Schegelsky, V A; Scheirich, D; Schernau, M; Scherzer, M I; Schiavi, C; Schieck, J; Schillo, C; Schioppa, M; Schlenker, S; Schmidt, E; Schmieden, K; Schmitt, C; Schmitt, C; Schmitt, S; Schneider, B; Schnellbach, Y J; Schnoor, U; Schoeffel, L; Schoening, A; Schorlemmer, A L S; Schott, M; Schouten, D; Schovancova, J; Schram, M; Schroeder, C; Schroer, N; Schultens, M J; Schultes, J; Schultz-Coulon, H-C; Schulz, H; Schumacher, M; Schumm, B A; Schune, Ph; Schwartzman, A; Schwegler, Ph; Schwemling, Ph; Schwienhorst, R; Schwindling, J; Schwindt, T; Schwoerer, M; Sciacca, F G; Scifo, E; Sciolla, G; Scott, W G; Searcy, J; Sedov, G; Sedykh, E; Seidel, S C; Seiden, A; Seifert, F; Seixas, J M; Sekhniaidze, G; Sekula, S J; Selbach, K E; Seliverstov, D M; Sellden, B; Sellers, G; Seman, M; Semprini-Cesari, N; Serfon, C; Serin, L; Serkin, L; Serre, T; Seuster, R; Severini, H; Sfyrla, A; Shabalina, E; Shamim, M; Shan, L Y; Shank, J T; Shao, Q T; Shapiro, M; Shatalov, P B; Shaw, K; Sherwood, P; Shimizu, S; Shimojima, M; Shin, T; Shiyakova, M; Shmeleva, A; Shochet, M J; Short, 
D; Shrestha, S; Shulga, E; Shupe, M A; Sicho, P; Sidoti, A; Siegert, F; Sijacki, Dj; Silbert, O; Silva, J; Silver, Y; Silverstein, D; Silverstein, S B; Simak, V; Simard, O; Simic, Lj; Simion, S; Simioni, E; Simmons, B; Simoniello, R; Simonyan, M; Sinervo, P; Sinev, N B; Sipica, V; Siragusa, G; Sircar, A; Sisakyan, A N; Sivoklokov, S Yu; Sjölin, J; Sjursen, T B; Skinnari, L A; Skottowe, H P; Skovpen, K; Skubic, P; Slater, M; Slavicek, T; Sliwa, K; Smakhtin, V; Smart, B H; Smestad, L; Smirnov, S Yu; Smirnov, Y; Smirnova, L N; Smirnova, O; Smith, B C; Smith, K M; Smizanska, M; Smolek, K; Snesarev, A A; Snidero, G; Snow, S W; Snow, J; Snyder, S; Sobie, R; Sodomka, J; Soffer, A; Soh, D A; Solans, C A; Solar, M; Solc, J; Soldatov, E Yu; Soldevila, U; Solfaroli Camillocci, E; Solodkov, A A; Solovyanov, O V; Solovyev, V; Soni, N; Sood, A; Sopko, V; Sopko, B; Sosebee, M; Soualah, R; Soueid, P; Soukharev, A; South, D; Spagnolo, S; Spanò, F; Spighi, R; Spigo, G; Spiwoks, R; Spousta, M; Spreitzer, T; Spurlock, B; St Denis, R D; Stahlman, J; Stamen, R; Stanecka, E; Stanek, R W; Stanescu, C; Stanescu-Bellu, M; Stanitzki, M M; Stapnes, S; Starchenko, E A; Stark, J; Staroba, P; Starovoitov, P; Staszewski, R; Staude, A; Stavina, P; Steele, G; Steinbach, P; Steinberg, P; Stekl, I; Stelzer, B; Stelzer, H J; Stelzer-Chilton, O; Stenzel, H; Stern, S; Stewart, G A; Stillings, J A; Stockton, M C; Stoebe, M; Stoerig, K; Stoicea, G; Stonjek, S; Strachota, P; Stradling, A R; Straessner, A; Strandberg, J; Strandberg, S; Strandlie, A; Strang, M; Strauss, E; Strauss, M; Strizenec, P; Ströhmer, R; Strom, D M; Strong, J A; Stroynowski, R; Stugu, B; Stumer, I; Stupak, J; Sturm, P; Styles, N A; Su, D; Subramania, Hs; Subramaniam, R; Succurro, A; Sugaya, Y; Suhr, C; Suk, M; Sulin, V V; Sultansoy, S; Sumida, T; Sun, X; Sundermann, J E; Suruliz, K; Susinno, G; Sutton, M R; Suzuki, Y; Suzuki, Y; Svatos, M; Swedish, S; Swiatlowski, M; Sykora, I; Sykora, T; Ta, D; Tackmann, K; Taffard, A; Tafirout, R; 
Taiblum, N; Takahashi, Y; Takai, H; Takashima, R; Takeda, H; Takeshita, T; Takubo, Y; Talby, M; Talyshev, A; Tam, J Y C; Tamsett, M C; Tan, K G; Tanaka, J; Tanaka, R; Tanaka, S; Tanaka, S; Tanasijczuk, A J; Tani, K; Tannoury, N; Tapprogge, S; Tardif, D; Tarem, S; Tarrade, F; Tartarelli, G F; Tas, P; Tasevsky, M; Tassi, E; Tayalati, Y; Taylor, C; Taylor, F E; Taylor, G N; Taylor, W; Teinturier, M; Teischinger, F A; Teixeira Dias Castanheira, M; Teixeira-Dias, P; Temming, K K; Ten Kate, H; Teng, P K; Terada, S; Terashi, K; Terron, J; Testa, M; Teuscher, R J; Therhaag, J; Theveneaux-Pelzer, T; Thoma, S; Thomas, J P; Thompson, E N; Thompson, P D; Thompson, P D; Thompson, A S; Thomsen, L A; Thomson, E; Thomson, M; Thong, W M; Thun, R P; Tian, F; Tibbetts, M J; Tic, T; Tikhomirov, V O; Tikhonov, Y A; Timoshenko, S; Tiouchichine, E; Tipton, P; Tisserant, S; Todorov, T; Todorova-Nova, S; Toggerson, B; Tojo, J; Tokár, S; Tokushuku, K; Tollefson, K; Tomlinson, L; Tomoto, M; Tompkins, L; Toms, K; Tonoyan, A; Topfel, C; Topilin, N D; Torrence, E; Torres, H; Torró Pastor, E; Toth, J; Touchard, F; Tovey, D R; Tran, H L; Trefzger, T; Tremblet, L; Tricoli, A; Trigger, I M; Trincaz-Duvoid, S; Tripiana, M F; Triplett, N; Trischuk, W; Trocmé, B; Troncon, C; Trottier-McDonald, M; Trovatelli, M; True, P; Trzebinski, M; Trzupek, A; Tsarouchas, C; Tseng, J C-L; Tsiakiris, M; Tsiareshka, P V; Tsionou, D; Tsipolitis, G; Tsiskaridze, S; Tsiskaridze, V; Tskhadadze, E G; Tsukerman, I I; Tsulaia, V; Tsung, J-W; Tsuno, S; Tsybychev, D; Tua, A; Tudorache, A; Tudorache, V; Tuggle, J M; Turala, M; Turecek, D; Turk Cakir, I; Turra, R; Tuts, P M; Tykhonov, A; Tylmad, M; Tyndel, M; Tzanakos, G; Uchida, K; Ueda, I; Ueno, R; Ughetto, M; Ugland, M; Uhlenbrock, M; Ukegawa, F; Unal, G; Undrus, A; Unel, G; Ungaro, F C; Unno, Y; Urbaniec, D; Urquijo, P; Usai, G; Vacavant, L; Vacek, V; Vachon, B; Vahsen, S; Valencic, N; Valentinetti, S; Valero, A; Valery, L; Valkar, S; Valladolid Gallego, E; Vallecorsa, S; 
Valls Ferrer, J A; Van Berg, R; Van Der Deijl, P C; van der Geer, R; van der Graaf, H; Van Der Leeuw, R; van der Poel, E; van der Ster, D; van Eldik, N; van Gemmeren, P; Van Nieuwkoop, J; van Vulpen, I; Vanadia, M; Vandelli, W; Vaniachine, A; Vankov, P; Vannucci, F; Vari, R; Varnes, E W; Varol, T; Varouchas, D; Vartapetian, A; Varvell, K E; Vassilakopoulos, V I; Vazeille, F; Vazquez Schroeder, T; Veloso, F; Veneziano, S; Ventura, A; Ventura, D; Venturi, M; Venturi, N; Vercesi, V; Verducci, M; Verkerke, W; Vermeulen, J C; Vest, A; Vetterli, M C; Vichou, I; Vickey, T; Vickey Boeriu, O E; Viehhauser, G H A; Viel, S; Villa, M; Villaplana Perez, M; Vilucchi, E; Vincter, M G; Vinek, E; Vinogradov, V B; Virzi, J; Vitells, O; Viti, M; Vivarelli, I; Vives Vaque, F; Vlachos, S; Vladoiu, D; Vlasak, M; Vogel, A; Vokac, P; Volpi, G; Volpi, M; Volpini, G; von der Schmitt, H; von Radziewski, H; von Toerne, E; Vorobel, V; Vorwerk, V; Vos, M; Voss, R; Vossebeld, J H; Vranjes, N; Vranjes Milosavljevic, M; Vrba, V; Vreeswijk, M; Vu Anh, T; Vuillermet, R; Vukotic, I; Vykydal, Z; Wagner, W; Wagner, P; Wahlen, H; Wahrmund, S; Wakabayashi, J; Walch, S; Walder, J; Walker, R; Walkowiak, W; Wall, R; Waller, P; Walsh, B; Wang, C; Wang, H; Wang, H; Wang, J; Wang, J; Wang, K; Wang, R; Wang, S M; Wang, T; Wang, X; Warburton, A; Ward, C P; Wardrope, D R; Warsinsky, M; Washbrook, A; Wasicki, C; Watanabe, I; Watkins, P M; Watson, A T; Watson, I J; Watson, M F; Watts, G; Watts, S; Waugh, A T; Waugh, B M; Weber, M S; Webster, J S; Weidberg, A R; Weigell, P; Weingarten, J; Weiser, C; Wells, P S; Wenaus, T; Wendland, D; Weng, Z; Wengler, T; Wenig, S; Wermes, N; Werner, M; Werner, P; Werth, M; Wessels, M; Wetter, J; Weydert, C; Whalen, K; White, A; White, M J; White, S; Whitehead, S R; Whiteson, D; Whittington, D; Wicke, D; Wickens, F J; Wiedenmann, W; Wielers, M; Wienemann, P; Wiglesworth, C; Wiik-Fuchs, L A M; Wijeratne, P A; Wildauer, A; Wildt, M A; Wilhelm, I; Wilkens, H G; Will, J Z; Williams, E; 
Williams, H H; Williams, S; Willis, W; Willocq, S; Wilson, J A; Wilson, M G; Wilson, A; Wingerter-Seez, I; Winkelmann, S; Winklmeier, F; Wittgen, M; Wittig, T; Wittkowski, J; Wollstadt, S J; Wolter, M W; Wolters, H; Wong, W C; Wooden, G; Wosiek, B K; Wotschack, J; Woudstra, M J; Wozniak, K W; Wraight, K; Wright, M; Wrona, B; Wu, S L; Wu, X; Wu, Y; Wulf, E; Wynne, B M; Xella, S; Xiao, M; Xie, S; Xu, C; Xu, D; Xu, L; Yabsley, B; Yacoob, S; Yamada, M; Yamaguchi, H; Yamaguchi, Y; Yamamoto, A; Yamamoto, K; Yamamoto, S; Yamamura, T; Yamanaka, T; Yamauchi, K; Yamazaki, T; Yamazaki, Y; Yan, Z; Yang, H; Yang, H; Yang, U K; Yang, Y; Yang, Z; Yanush, S; Yao, L; Yasu, Y; Yatsenko, E; Ye, J; Ye, S; Yen, A L; Yilmaz, M; Yoosoofmiya, R; Yorita, K; Yoshida, R; Yoshihara, K; Young, C; Young, C J S; Youssef, S; Yu, D; Yu, D R; Yu, J; Yu, J; Yuan, L; Yurkewicz, A; Zabinski, B; Zaidan, R; Zaitsev, A M; Zambito, S; Zanello, L; Zanzi, D; Zaytsev, A; Zeitnitz, C; Zeman, M; Zemla, A; Zenin, O; Ženiš, T; Zerwas, D; Zevi Della Porta, G; Zhang, D; Zhang, H; Zhang, J; Zhang, L; Zhang, X; Zhang, Z; Zhao, L; Zhao, Z; Zhemchugov, A; Zhong, J; Zhou, B; Zhou, N; Zhou, Y; Zhu, C G; Zhu, H; Zhu, J; Zhu, Y; Zhuang, X; Zhuravlov, V; Zibell, A; Zieminska, D; Zimin, N I; Zimmermann, R; Zimmermann, S; Zimmermann, S; Zinonos, Z; Ziolkowski, M; Zitoun, R; Živković, L; Zmouchko, V V; Zobernig, G; Zoccoli, A; Zur Nedden, M; Zutshi, V; Zwalinski, L
A measurement of splitting scales, as defined by the k_T clustering algorithm, is presented for final states containing a W boson produced in proton-proton collisions at a centre-of-mass energy of 7 TeV. The measurement is based on the full 2010 data sample, corresponding to an integrated luminosity of 36 pb⁻¹, which was collected using the ATLAS detector at the CERN Large Hadron Collider. Cluster splitting scales are measured in events containing W bosons decaying to electrons or muons. The measurement comprises the four hardest splitting scales in a k_T cluster sequence of the hadronic activity accompanying the W boson, and ratios of these splitting scales. Backgrounds such as multi-jet and top-quark-pair production are subtracted and the results are corrected for detector effects. Predictions from various Monte Carlo event generators at particle level are compared to the data. Overall, reasonable agreement is found with all generators, but larger deviations between the predictions and the data are evident in the soft regions of the splitting scales.
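The splitting scales √d_k are a by-product of the sequential k_T clustering itself. As a rough illustration only (this is not the ATLAS analysis code: the input kinematics, the radius parameter R = 0.6, and the simplified pt-weighted merging scheme are all assumptions for the sketch), a toy exclusive clustering in Python:

```python
import math

def kt_splitting_scales(particles, R=0.6):
    """Toy k_T clustering over (pt, y, phi) tuples.

    Records sqrt(d_min) at every clustering step, mimicking the
    splitting scales d_k of the k_T algorithm. Merging via pt sum and
    pt-weighted (y, phi) is a simplification of true 4-momentum addition.
    """
    objs = [list(p) for p in particles]
    scales = []
    while objs:
        best = None  # (d, i, j); j is None for a beam distance d_iB
        for i, (pti, yi, phii) in enumerate(objs):
            d_iB = pti ** 2
            if best is None or d_iB < best[0]:
                best = (d_iB, i, None)
            for j in range(i + 1, len(objs)):
                ptj, yj, phij = objs[j]
                dphi = abs(phii - phij)
                dphi = min(dphi, 2 * math.pi - dphi)
                dR2 = (yi - yj) ** 2 + dphi ** 2
                d_ij = min(pti, ptj) ** 2 * dR2 / R ** 2
                if d_ij < best[0]:
                    best = (d_ij, i, j)
        d, i, j = best
        scales.append(math.sqrt(d))
        if j is None:
            objs.pop(i)            # object clustered with the beam
        else:
            pti, yi, phii = objs[i]
            ptj, yj, phij = objs[j]
            pt = pti + ptj         # simplified merge
            y = (pti * yi + ptj * yj) / pt
            phi = (pti * phii + ptj * phij) / pt
            objs[j] = [pt, y, phi]
            objs.pop(i)
    return scales

scales = kt_splitting_scales([(30.0, 0.1, 0.0), (20.0, 0.3, 0.2), (10.0, -1.0, 2.5)])
print(scales)  # one splitting scale per clustering step
```

Each step removes exactly one object (either merged into another or assigned to the beam), so three input particles yield three splitting scales; the hardest ones dominate the distributions measured in the paper.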
NASA Astrophysics Data System (ADS)
Barth, Johannes; van Geldern, Robert; Veizer, Jan; Karim, Ajaz; Freitag, Heiko; Fowler, Hayley
2017-04-01
Comparison of water stable isotopes of rivers to those of precipitation enables separation of evaporation from transpiration on the catchment scale. The method exploits isotope ratio changes that are caused exclusively by evaporation over longer time periods of at least one hydrological year. When interception is quantified by mapping plant types in catchments, the amount of water lost by transpiration can be determined. When in turn pairing transpiration with the water use efficiency (WUE, i.e. water loss by transpiration per unit uptake of CO2) and subtracting heterotrophic soil respiration fluxes (Rh), catchment-wide carbon balances can be established. This method was applied to several regions including the Great Lakes and the Clyde River catchments (Barth et al., 2007; Karim et al., 2008). In these studies evaporation loss was 24 % and 1.3 %, and transpiration loss was 47 % and 22 %, of incoming precipitation for the Great Lakes and the Clyde catchments, respectively. Applying WUE values for typical plant covers and using area-typical Rh values led to estimates of a CO2 uptake of 251 g C m⁻² a⁻¹ for the Great Lakes Catchment and a CO2 loss of 21 g C m⁻² a⁻¹ for the Clyde Catchment. These discrepancies are most likely due to different vegetation covers. The method applies to scales of several thousand km² and has good potential for improvement via calibration on smaller scales. This can, for instance, be achieved by separate treatment of sub-catchments with more detailed mapping of interception as a major unknown. These previous studies have shown that better uncertainty analyses are necessary in order to estimate errors in water and carbon balances. The stable isotope method is also a good basis for comparison with other landscape carbon balances, for instance from eddy covariance techniques. This independent method and its up-scaling, combined with the stable isotope and area-integrating methods, can provide cross-validation of large-scale carbon budgets.
Together they can help to constrain relationships between carbon and water balances on the continental scale.
References:
Barth JAC, Freitag H, Fowler HJ, Smith A, Ingle C, Karim A (2007) Water fluxes and their control on the terrestrial carbon balance: Results from a stable isotope study on the Clyde Watershed (Scotland). Appl Geochem 22: 2684-2694. DOI 10.1016/j.apgeochem.2007.06.002
Karim A, Veizer J, Barth JAC (2008) Net ecosystem production in the Great Lakes basin and its implications for the North American missing carbon sink: A hydrologic and stable isotope approach. Global and Planetary Change 61: 15-27. DOI 10.1016/j.gloplacha.2007.08.004
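The bookkeeping behind such catchment carbon budgets is a short chain of multiplications and a subtraction. A minimal sketch with purely illustrative numbers (the precipitation, transpiration fraction, WUE and Rh values below are hypothetical placeholders, not those of the cited studies; WUE is written here as carbon gained per unit water transpired, the reciprocal of the water-per-carbon phrasing above):

```python
def catchment_carbon_balance(precip_mm, transpiration_frac, wue_gC_per_kg, rh_gC_m2_a):
    """Net CO2 uptake of a catchment in g C m^-2 a^-1.

    1 mm of water over 1 m^2 equals 1 kg, so transpiration in mm/yr
    converts directly to kg H2O m^-2 a^-1. Gross uptake is transpiration
    times WUE; heterotrophic respiration Rh is then subtracted.
    """
    transpiration_kg = precip_mm * transpiration_frac  # kg H2O m^-2 a^-1
    gross_uptake = transpiration_kg * wue_gC_per_kg    # g C m^-2 a^-1
    return gross_uptake - rh_gC_m2_a

# Illustrative values only:
net = catchment_carbon_balance(precip_mm=900.0, transpiration_frac=0.47,
                               wue_gC_per_kg=2.0, rh_gC_m2_a=600.0)
print(net)  # positive -> net uptake, negative -> net carbon source
```

The sign of the result distinguishes a net sink from a net source, which is exactly the contrast reported between the two catchments.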
Merit and Justice: An Experimental Analysis of Attitude to Inequality
Rustichini, Aldo; Vostroknutov, Alexander
2014-01-01
Merit and justice play a crucial role in ethical theory and political philosophy. Some theories view justice as allocation according to merit; others view justice as based on criteria of its own, and take merit and justice as two independent values. We study experimentally how these views are perceived. In our experiment subjects played two games (both against the computer): a game of skill and a game of luck. After each game they observed the earnings of all the subjects in the session, and thus the differences in outcomes. Each subject could reduce the winnings of one other person at a cost. The majority of the subjects used the option to subtract. The decision to subtract and the amount subtracted depended on whether the game was one of skill or luck, and on the distance between the earnings of the subject and those of others. Everything else being equal, subjects subtracted more in the luck game than in the skill game. In the skill game, but not in the luck game, subtraction became more likely, and the amount larger, as the distance increased. The results show that individuals considered favorable outcomes in the luck game to be undeserved, and thus felt more justified in subtracting. In the skill game, instead, they considered more favorable outcomes (their own as well as others') as a signal of ability and perhaps effort, which thus deserved merit; hence, they felt less motivated to subtract. However, a larger unfavorable gap from the others increased the unpleasantness of poor performance, which in turn motivated larger subtraction. In conclusion, merit is attributed if and only if effort or skill significantly affect the outcome. An inequality of outcomes is viewed differently depending on whether merit causes the difference or not. Thus, merit and justice are strongly linked in the human perception of social order. PMID:25490094
Perturbative Gaussianizing transforms for cosmological fields
NASA Astrophysics Data System (ADS)
Hall, Alex; Mead, Alexander
2018-01-01
Constraints on cosmological parameters from large-scale structure have traditionally been obtained from two-point statistics. However, non-linear structure formation renders these statistics insufficient in capturing the full information content available, necessitating the measurement of higher order moments to recover information which would otherwise be lost. We construct quantities based on non-linear and non-local transformations of weakly non-Gaussian fields that Gaussianize the full multivariate distribution at a given order in perturbation theory. Our approach does not require a model of the fields themselves and takes as input only the first few polyspectra, which could be modelled or measured from simulations or data, making our method particularly suited to observables lacking a robust perturbative description such as the weak-lensing shear. We apply our method to simulated density fields, finding a significantly reduced bispectrum and an enhanced correlation with the initial field. We demonstrate that our method reconstructs a large proportion of the linear baryon acoustic oscillations, improving the information content over the raw field by 35 per cent. We apply the transform to toy 21 cm intensity maps, showing that our method still performs well in the presence of complications such as redshift-space distortions, beam smoothing, pixel noise and foreground subtraction. We discuss how this method might provide a route to constructing a perturbative model of the fully non-Gaussian multivariate likelihood function.
Neutron monitor generated data distributions in quantum variational Monte Carlo
NASA Astrophysics Data System (ADS)
Kussainov, A. S.; Pya, N.
2016-08-01
We have assessed the potential applications of neutron monitor hardware as a random number generator for normal and uniform distributions. The data tables from the acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and a variance of one is sufficient to obtain a stable standard normal random variate. The distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as a source of uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is of importance and the conventional one-minute-resolution neutron count is insufficient, we could always settle for an efficient seed generator to feed into a faster algorithmic random number generator, or create a buffer.
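The detrending and standardization steps above can be sketched as follows (a minimal Python illustration, not the authors' code; the one-minute count series is simulated, and `norm.cdf` stands in for the inverse-transform route to uniform variates):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.stats import norm

def extract_standard_normal(counts):
    """Fit a smoothing spline to the raw counts, subtract the fit to keep
    only the stochastic component, then scale to zero mean, unit variance."""
    t = np.arange(len(counts), dtype=float)
    trend = UnivariateSpline(t, counts)(t)
    resid = counts - trend
    return (resid - resid.mean()) / resid.std()

# Hypothetical one-minute neutron counts: slow diurnal drift + Poisson noise.
rng = np.random.default_rng(0)
t = np.arange(1440)
raw = 1000 + 50 * np.sin(2 * np.pi * t / 1440) + rng.poisson(1000, 1440) - 1000
z = extract_standard_normal(raw)
u = norm.cdf(z)   # map standard-normal variates to uniforms on [0, 1]
```

The uniform variates `u` could then feed a Metropolis-style sampler such as the variational Monte Carlo step mentioned in the abstract.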
Reducing false-positive detections by combining two stage-1 computer-aided mass detection algorithms
NASA Astrophysics Data System (ADS)
Bedard, Noah D.; Sampat, Mehul P.; Stokes, Patrick A.; Markey, Mia K.
2006-03-01
In this paper we present a strategy for reducing the number of false-positives in computer-aided mass detection. Our approach is to only mark "consensus" detections from among the suspicious sites identified by different "stage-1" detection algorithms. By "stage-1" we mean that each of the Computer-aided Detection (CADe) algorithms is designed to operate with high sensitivity, allowing for a large number of false positives. In this study, two mass detection methods were used: (1) Heath and Bowyer's algorithm based on the average fraction under the minimum filter (AFUM) and (2) a low-threshold bi-lateral subtraction algorithm. The two methods were applied separately to a set of images from the Digital Database for Screening Mammography (DDSM) to obtain paired sets of mass candidates. The consensus mass candidates for each image were identified by a logical "and" operation of the two CADe algorithms so as to eliminate regions of suspicion that were not independently identified by both techniques. It was shown that by combining the evidence from the AFUM filter method with that obtained from bi-lateral subtraction, the same sensitivity could be reached with fewer false-positives per image relative to using the AFUM filter alone.
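The consensus step can be sketched as follows; a hypothetical distance tolerance stands in for the paper's logical "and" of suspicion regions, and the candidate coordinates are invented for illustration:

```python
import numpy as np

def consensus_detections(cands_a, cands_b, tol=10.0):
    """Logical-'and' of two stage-1 detectors: keep a candidate from A only
    if detector B independently flagged a site within `tol` pixels."""
    return [a for a in cands_a
            if any(np.hypot(a[0] - b[0], a[1] - b[1]) <= tol for b in cands_b)]

afum = [(120, 85), (300, 410), (50, 50)]        # hypothetical AFUM-filter candidates
bilateral = [(118, 88), (302, 405), (500, 20)]  # hypothetical bilateral-subtraction candidates
consensus = consensus_detections(afum, bilateral)
```

Only the two sites flagged by both detectors survive; the isolated candidates from each method are discarded as likely false positives.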
ViBe: a universal background subtraction algorithm for video sequences.
Barnich, Olivier; Van Droogenbroeck, Marc
2011-06-01
This paper presents a technique for motion detection that incorporates several innovative mechanisms. For example, our proposed technique stores, for each pixel, a set of values taken in the past at the same location or in the neighborhood. It then compares this set to the current pixel value in order to determine whether that pixel belongs to the background, and adapts the model by choosing randomly which values to substitute from the background model. This approach differs from those based upon the classical belief that the oldest values should be replaced first. Finally, when the pixel is found to be part of the background, its value is propagated into the background model of a neighboring pixel. We describe our method in full detail (including pseudo-code and the parameter values used) and compare it to other background subtraction techniques. Efficiency figures show that our method outperforms recent and proven state-of-the-art methods in terms of both computation speed and detection rate. We also analyze the performance of a downscaled version of our algorithm, reduced to the absolute minimum of one comparison and one byte of memory per pixel. It appears that even such a simplified version of our algorithm performs better than mainstream techniques.
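A minimal single-pixel sketch of the mechanisms described above (sample set, match counting, random replacement, neighbor propagation); the parameter values are the commonly cited ViBe defaults, but the code is an illustration, not the authors' implementation:

```python
import random

N_SAMPLES, RADIUS, MIN_MATCHES, SUBSAMPLING = 20, 20, 2, 16

def is_background(pixel, samples):
    """Background if at least MIN_MATCHES stored samples lie within RADIUS."""
    return sum(abs(pixel - s) < RADIUS for s in samples) >= MIN_MATCHES

def update(pixel, samples, neighbour_samples):
    """Random-replacement update: with probability 1/SUBSAMPLING overwrite a
    random sample (not the oldest), and likewise propagate the value into a
    neighbour's model."""
    if random.randrange(SUBSAMPLING) == 0:
        samples[random.randrange(N_SAMPLES)] = pixel
    if random.randrange(SUBSAMPLING) == 0:
        neighbour_samples[random.randrange(N_SAMPLES)] = pixel

random.seed(1)
model = [100 + random.randint(-5, 5) for _ in range(N_SAMPLES)]  # pixel history
neighbour = list(model)
if is_background(102, model):   # value close to the stored history
    update(102, model, neighbour)
```

A value far from the stored history (say 200) would be classified as foreground and would not trigger the update.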
Reduction of EEG artefacts induced by vibration in the MR-environment.
Rothlübbers, Sven; Relvas, Vânia; Leal, Alberto; Figueiredo, Patrícia
2013-01-01
The EEG acquired simultaneously with functional magnetic resonance imaging (fMRI) is distorted by a number of artefacts related to the presence of strong magnetic fields. In order to allow for a useful interpretation of the EEG data, it is necessary to reduce these artefacts. For the two most prominent artefacts, associated with magnetic field gradient switching and the heart beat, reduction methods have been developed and applied successfully. Due to their repetitive nature, such artefacts can be reduced by subtraction of the respective template retrieved by averaging across cycles. In this paper, we investigate additional artefacts related to the MR environment and propose a method for the reduction of the vibration artefact caused by the cryo-cooler compression pumps system. Data were collected from the EEG cap placed on an MR head phantom, in order to characterise the MR environment related artefacts. Since the vibration artefact was found to be repetitive, a template subtraction method was developed for its reduction, and this was then adjusted to meet the specific requirements of patient data. The developed methodology successfully reduced the vibration artefact by about 90% in five EEG-fMRI datasets collected from two epilepsy patients.
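Template subtraction for a strictly repetitive artifact, as used above, can be sketched as follows (synthetic data; the cycle length `period` is assumed known, e.g. from the pump frequency):

```python
import numpy as np

def template_subtract(signal, period):
    """Average the signal across artifact cycles to build a template,
    then subtract the tiled template from every cycle."""
    n_cycles = len(signal) // period
    cycles = signal[:n_cycles * period].reshape(n_cycles, period)
    template = cycles.mean(axis=0)
    cleaned = signal.copy()
    cleaned[:n_cycles * period] -= np.tile(template, n_cycles)
    return cleaned, template

# Synthetic EEG: white 'brain' activity plus a strictly periodic vibration
# artifact of 50 samples per cycle (hypothetical pump period).
rng = np.random.default_rng(0)
period = 50
brain = rng.normal(0.0, 1.0, 5000)
artifact = np.tile(5.0 * np.sin(2 * np.pi * np.arange(period) / period), 100)
eeg = brain + artifact
cleaned, template = template_subtract(eeg, period)
```

Because the non-artifact activity is uncorrelated with the cycle, averaging over many cycles leaves an almost pure artifact template, and the subtraction removes the bulk of the artifact power.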
Image-Subtraction Photometry of Variable Stars in the Globular Clusters NGC 6388 and NGC 6441
NASA Technical Reports Server (NTRS)
Corwin, Michael T.; Sumerel, Andrew N.; Pritzl, Barton J.; Smith, Horace A.; Catelan, M.; Sweigart, Allen V.; Stetson, Peter B.
2006-01-01
We have applied Alard's image subtraction method (ISIS v2.1) to the observations of the globular clusters NGC 6388 and NGC 6441 previously analyzed using standard photometric techniques (DAOPHOT, ALLFRAME). In this reanalysis of observations obtained at CTIO, besides recovering the variables previously detected on the basis of our ground-based images, we have also been able to recover most of the RR Lyrae variables previously detected only in the analysis of Hubble Space Telescope WFPC2 observations of the inner region of NGC 6441. In addition, we report five possible new variables not found in the analysis of the HST observations of NGC 6441. This dramatically illustrates the capabilities of image subtraction techniques applied to ground-based data to recover variables in extremely crowded fields. We have also detected twelve new variables and six possible variables in NGC 6388 not found in our previous ground-based studies. Revised mean periods for RRab stars in NGC 6388 and NGC 6441 are 0.676 day and 0.756 day, respectively. These values are among the largest known for any Galactic globular cluster. Additional probable type II Cepheids were identified in NGC 6388, confirming its status as a metal-rich globular cluster rich in Cepheids.
NASA Astrophysics Data System (ADS)
Odaka, Shigeru; Kurihara, Yoshimasa
2016-12-01
An event generator for diphoton (γ γ ) production in hadron collisions that includes associated jet production up to two jets has been developed using a subtraction method based on the limited leading-log subtraction. The parton shower (PS) simulation to restore the subtracted divergent components involves both quantum electrodynamic (QED) and quantum chromodynamic radiation, and QED radiation at very small Q2 is simulated by referring to a fragmentation function (FF). The PS/FF simulation has the ability to enforce the radiation of a given number of energetic photons. The generated events can be fed to PYTHIA to obtain particle (hadron) level event information, which enables us to perform realistic simulations of photon isolation and hadron-jet reconstruction. The simulated events, in which the loop-mediated g g →γ γ process is involved, reasonably reproduce the diphoton kinematics measured at the LHC. Using the developed simulation, we found that the two-jet processes significantly contribute to diphoton production. A large two-jet contribution can be considered as a common feature in electroweak-boson production in hadron collisions although the reason is yet to be understood. Discussion concerning the treatment of the underlying events in photon isolation is necessary for future higher precision measurements.
Drawing lithography for microneedles: a review of fundamentals and biomedical applications.
Lee, Kwang; Jung, Hyungil
2012-10-01
A microneedle is a three-dimensional (3D) micromechanical structure that has recently been in the spotlight as a drug delivery system (DDS). Because a microneedle delivers the target drug after penetrating the skin barrier, the therapeutic effects of microneedles proceed from their 3D structural geometry. Various types of microneedles have been fabricated using subtractive micromanufacturing methods, which are based on inherently planar two-dimensional (2D) geometries. However, traditional subtractive processes offer limited structural flexibility for microneedles and make functional biomedical applications for efficient drug delivery difficult. The authors of the present study propose drawing lithography as a unique additive process for the fabrication of a microneedle directly from 2D planar substrates, thus overcoming this shortcoming of subtractive processes. The present article provides the first overview of the principal drawing lithography technology: fundamentals and biomedical applications. The continuous drawing technique for an ultrahigh-aspect-ratio (UHAR) hollow microneedle, the stepwise controlled drawing technique for a dissolving microneedle, and the drawing technique with antidromic isolation for a hybrid electro-microneedle (HEM) are reviewed, and efficient biomedical applications of drawing lithography-mediated microneedles as an innovative drug and gene delivery system are described. Drawing lithography can provide a great breakthrough in the development of materials science and biotechnology. Copyright © 2012 Elsevier Ltd. All rights reserved.
Chen, Yunyun; Sanchez, Carlos; Yue, Yuan; ...
2016-03-25
Background: The potential transfer of engineered nanoparticles (ENPs) from plants into the food chain has raised widespread concerns. In order to investigate the effects of ENPs on plants, young cabbage plants (Brassica oleracea) were exposed to a hydroponic system containing yttrium oxide (yttria) ENPs. The objective of this study was to reveal the impacts of NPs on plants by using the K-edge subtraction imaging technique. Results: Using synchrotron dual-energy X-ray micro-tomography with the K-edge subtraction technique, we studied the uptake, accumulation, distribution and concentration mapping of yttria ENPs in cabbage plants. It was found that yttria ENPs were taken up by the cabbage roots but were not effectively transferred and mobilized through the cabbage stem and leaves. This could be due to the accumulation of yttria ENPs being blocked at the primary-lateral-root junctions. Instead, non-yttria minerals were found in the xylem vessels of roots and stem. Conclusions: Synchrotron dual-energy X-ray micro-tomography is an effective method to observe yttria NPs inside cabbage plants at both the whole-body and microscale levels. Furthermore, the blockage of a plant's roots by nanoparticles is likely the first and potentially fatal environmental effect of this type of nanoparticle.
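The K-edge subtraction principle can be sketched as follows (toy log-attenuation images and hypothetical attenuation coefficients, not the study's data):

```python
import numpy as np

def k_edge_map(above, below, mu_above, mu_below):
    """Attenuation jumps sharply across an element's K-edge, so the
    difference of log-attenuation images taken just above and just below
    the edge isolates that element; smooth backgrounds cancel."""
    return (above - below) / (mu_above - mu_below)

# Toy 4x4 log-attenuation images: identical background at both energies,
# yttria present only in one pixel (hypothetical values throughout).
background = np.full((4, 4), 0.8)
yttria_density = np.zeros((4, 4))
yttria_density[0, 0] = 1.0
mu_above, mu_below = 3.0, 0.5          # yttrium attenuation above/below the edge
below = background + mu_below * yttria_density
above = background + mu_above * yttria_density
density = k_edge_map(above, below, mu_above, mu_below)
```

The subtraction recovers the yttria column density in the one loaded pixel and zero elsewhere, because only yttrium's attenuation changes across its K-edge.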
Manual for the Jet Event and Background Simulation Library (JEBSimLib)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinz, Matthias; Soltz, Ron; Angerami, Aaron
Jets are the collimated streams of particles resulting from hard scattering in the initial state of high-energy collisions. In heavy-ion collisions, jets interact with the quark-gluon plasma (QGP) before freezeout, providing a probe into the internal structure and properties of the QGP. In order to study jets, background must be subtracted from the measured event, potentially introducing a bias. We aim to understand and quantify this subtraction bias. PYTHIA, a library to simulate pure jet events, is used to simulate a model for a signature with one pure jet (a photon) and one quenched jet, where all quenched particle momenta are reduced by a user-defined constant fraction. Background for the event is simulated using multiplicity values generated by the TRENTO initial state model of heavy-ion collisions fed into a thermal model consisting of a 3-dimensional Boltzmann distribution for particle types and momenta. Data from the simulated events are used to train a statistical model, which computes a posterior distribution of the quench factor for a data set. The model was tested first on pure jet events and then on full events including the background. This model will allow for a quantitative determination of biases induced by various methods of background subtraction.
NASA Astrophysics Data System (ADS)
Fauziah; Wibowo, E. P.; Madenda, S.; Hustinawati
2018-03-01
Capturing and recording human motion is mostly done for sports, health, animation films, criminality, and robotics applications. This study combined background subtraction with a back-propagation neural network in order to identify hand movements and find their similarity. Video was acquired with an 8 MP camera in MP4 format (duration 48 seconds, 30 frames/s); extraction of the video produced 1444 frames used in the hand-motion identification process. The image processing stages were segmentation, feature extraction, and identification. Segmentation used background subtraction; the extracted features are used to distinguish one object from another. Feature extraction was performed using motion-based morphology analysis with seven invariant moments, producing four different motion classes: no object, hand down, hand to side, and hands up. The identification process recognized the hand movement using the seven inputs. Testing and training with a variety of parameters showed that an architecture with one hundred hidden neurons provides the highest accuracy. This architecture is used to propagate the input values of the system implementation process into the user interface. Identification of the type of human movement achieved a highest accuracy of 98.5447%. The training process is done to get the best results.
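The background-subtraction segmentation stage can be sketched as a thresholded frame difference (the threshold and pixel values here are hypothetical, and the classification network is omitted):

```python
import numpy as np

def segment_hand(frame, background, threshold=25):
    """Mark as foreground every pixel whose absolute difference from the
    static background model exceeds the threshold."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return (diff > threshold).astype(np.uint8)

background = np.full((6, 6), 30, dtype=np.uint8)  # toy static background
frame = background.copy()
frame[2:4, 2:5] = 200        # hypothetical hand region in the current frame
mask = segment_hand(frame, background)
```

Invariant-moment features would then be computed from the binary mask and fed to the back-propagation network for classification into the four motion classes.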
... Carotid angiogram; Cervicocerebral catheter-based angiography; Intra-arterial digital subtraction angiography; IADSA ... with the dye are seen. This is called digital subtraction angiography (DSA). After the x-rays are ...
Zeeshan, Farrukh; Tabbassum, Misbah; Jorgensen, Lene; Medlicott, Natalie J
2018-02-01
Protein drugs may encounter conformational perturbations during the formulation processing of lipid-based solid dosage forms. In aqueous protein solutions, attenuated total reflection Fourier transform infrared (ATR FT-IR) spectroscopy can investigate these conformational changes following the subtraction of the spectral interference of the solvent with protein amide I bands. However, in solid dosage forms, the possible spectral contribution of lipid carriers to the protein amide I band may be an obstacle to determining conformational alterations. The objective of this study was to develop an ATR FT-IR spectroscopic method for the analysis of protein secondary structure embedded in solid lipid matrices. Bovine serum albumin (BSA) was chosen as a model protein, while Precirol ATO 5 (glycerol palmitostearate, melting point 58 °C) was employed as the model lipid matrix. Bovine serum albumin was incorporated into lipid using physical mixing, melting and mixing, or wet granulation mixing methods. Attenuated total reflection FT-IR spectroscopy and size exclusion chromatography (SEC) were performed for the analysis of BSA secondary structure and its dissolution in aqueous media, respectively. The results showed significant interference of Precirol ATO 5 with the BSA amide I band, which was subtracted for lipid contents up to 90% w/w to analyze BSA secondary structure. In addition, ATR FT-IR spectroscopy also detected thermally denatured BSA solids alone and in the presence of the lipid matrix, indicating its suitability for the detection of denatured protein solids in lipid matrices. Despite being in the solid state, conformational changes occurred to BSA upon incorporation into solid lipid matrices. However, the extent of these conformational alterations was found to be dependent on the mixing method employed, as indicated by area overlap calculations.
For instance, the melting and mixing method imparted negligible effect on BSA secondary structure, whereas the wet granulation mixing method promoted more changes. Size exclusion chromatography analysis depicted the complete dissolution of BSA in the aqueous media employed in the wet granulation method. In conclusion, an ATR FT-IR spectroscopic method was successfully developed to investigate BSA secondary structure in solid lipid matrices following the subtraction of lipid spectral interference. The ATR FT-IR spectroscopy could further be applied to investigate the secondary structure perturbations of therapeutic proteins during their formulation development.
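The spectral-subtraction idea underlying the method can be sketched as follows; the band shapes, positions, and the carrier-only scaling window are hypothetical stand-ins for the measured spectra:

```python
import numpy as np

def subtract_carrier(mixture, carrier_ref, window):
    """Estimate the carrier (lipid) scale factor by least squares over a
    region where only the carrier absorbs, then subtract the scaled
    reference spectrum from the mixture."""
    m, r = mixture[window], carrier_ref[window]
    scale = float(np.dot(m, r) / np.dot(r, r))
    return mixture - scale * carrier_ref, scale

# Hypothetical band shapes: protein amide I near 1655 1/cm, lipid C=O near 1738 1/cm.
wavenumber = np.linspace(1500.0, 1800.0, 301)
protein = np.exp(-((wavenumber - 1655.0) / 15.0) ** 2)
lipid = np.exp(-((wavenumber - 1738.0) / 12.0) ** 2)
mixture = protein + 0.7 * lipid            # 0.7 = unknown lipid contribution
window = wavenumber > 1710.0               # region dominated by the lipid band
recovered, scale = subtract_carrier(mixture, lipid, window)
```

Because the protein band is negligible inside the scaling window, the fitted factor recovers the lipid contribution and the subtraction leaves a clean amide I band for secondary-structure analysis.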
Calendar methods of fertility regulation: a rule of thumb.
Colombo, B; Scarpa, B
1996-01-01
"[Many] illiterate women, particularly in the third world, find [it] difficult to apply usual calendar methods for the regulation of fertility. Some of them are even unable to make simple subtractions. In this paper we are therefore trying to evaluate the applicability and the efficiency of an extremely simple rule which entails only [the ability to count] a number of days, and always the same way." (SUMMARY IN ITA) excerpt
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor-Pashow, Kathryn M. L.; Jones, Daniel H.
A non-aqueous titration method has been used for quantifying the suppressor concentration in the MCU solvent hold tank (SHT) monthly samples since the Next Generation Solvent (NGS) was implemented in 2013. The titration method measures the concentration of the NGS suppressor (TiDG) as well as the residual tri-n-octylamine (TOA) that is a carryover from the previous solvent. As the TOA concentration has decreased over time, it has become difficult to resolve the TiDG equivalence point as the TOA equivalence point has moved closer. In recent samples, the TiDG equivalence point could not be resolved, and therefore the TiDG concentration was determined by subtracting the TOA concentration, as measured by semi-volatile organic analysis (SVOA), from the total base concentration as measured by titration. In order to improve the titration method so that the TiDG concentration can be measured directly, without the need for the SVOA data, a new method has been developed that involves spiking the sample with additional TOA to further separate the two equivalence points in the titration. This method has been demonstrated on four recent SHT samples, and comparison to results obtained using the SVOA TOA-subtraction method shows good agreement. Therefore, it is recommended that the titration procedure be revised to include the TOA spike addition, and that this become the primary method for quantifying the TiDG.
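The SVOA-subtraction arithmetic described above is a simple difference of two measured concentrations; a sketch with hypothetical values (the real sample concentrations are not given in the abstract):

```python
def tidg_by_subtraction(total_base_molar, toa_molar):
    """Both bases titrate together, so the suppressor (TiDG) concentration
    is the total base minus the independently measured TOA."""
    return total_base_molar - toa_molar

# Hypothetical SHT sample values (molar)
total_base = 0.0125   # from non-aqueous titration
toa = 0.0018          # from SVOA
tidg = tidg_by_subtraction(total_base, toa)
```

The TOA-spike variant instead adds a known excess of TOA before titrating, moving the TOA equivalence point far enough from the TiDG point that TiDG can be read off directly.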
Multislice CT perfusion imaging of the lung in detection of pulmonary embolism
NASA Astrophysics Data System (ADS)
Hong, Helen; Lee, Jeongjin
2006-03-01
We propose a new subtraction technique for accurately imaging lung perfusion and efficiently detecting pulmonary embolism in chest MDCT angiography. Our method is composed of five stages. First, an optimal segmentation technique is performed to extract the same volume of the lungs, major airway and vascular structures from pre- and post-contrast images with different lung density. Second, an initial registration based on the apex, hilar point and center of inertia (COI) of each unilateral lung is proposed to correct the gross translational mismatch. Third, the initial alignment is refined by iterative surface registration. For fast and robust convergence of the distance measure to the optimal value, a 3D distance map is generated by narrow-band distance propagation. Fourth, a 3D nonlinear filter is applied to the lung parenchyma to compensate for residual spiral artifacts and artifacts caused by heart motion. Fifth, enhanced vessels are visualized by subtracting the registered pre-contrast images from the post-contrast images. To facilitate visualization of parenchyma enhancement, color-coded mapping and image fusion are used. Our method has been successfully applied to pre- and post-contrast images from ten patients in chest MDCT angiography. Experimental results show that the performance of our method is very promising compared with conventional methods in terms of visual inspection, accuracy and processing time.
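The final subtraction stage (stage five) can be sketched as follows, assuming the pre-contrast image has already been registered to the post-contrast image; the pixel values are toy data:

```python
import numpy as np

def perfusion_map(post, pre_registered):
    """Subtract the registered pre-contrast image from the post-contrast
    image; the positive part shows contrast enhancement."""
    diff = post.astype(float) - pre_registered.astype(float)
    return np.clip(diff, 0.0, None)

pre = np.full((4, 4), 100.0)      # toy registered pre-contrast values
post = pre.copy()
post[1, 1] = 160.0                # hypothetical contrast-enhanced vessel
enhancement = perfusion_map(post, pre)
```

The enhancement map would then be color-coded and fused with the anatomical image for inspection.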
Zhao, Gang; Tan, Wei; Hou, Jiajia; Qiu, Xiaodong; Ma, Weiguang; Li, Zhixin; Dong, Lei; Zhang, Lei; Yin, Wangbao; Xiao, Liantuan; Axner, Ove; Jia, Suotang
2016-01-25
A methodology for calibration-free wavelength modulation spectroscopy (CF-WMS) that is based upon an extensive empirical description of the wavelength-modulation frequency response (WMFR) of a DFB laser is presented. An assessment of the WMFR of a DFB laser by the use of an etalon confirms that it consists of two parts: a 1st harmonic component with an amplitude that is linear with the sweep, and a nonlinear 2nd harmonic component with a constant amplitude. Simulations show that, among the various factors that affect the line shape of a background-subtracted peak-normalized 2f signal, such as concentration, phase shifts between intensity modulation and frequency modulation, and the WMFR, only the last factor has a decisive impact. Based on this, and to avoid the impractical use of an etalon, a novel method to pre-determine the parameters of the WMFR by fitting to a background-subtracted peak-normalized 2f signal has been developed. The accuracy of the new scheme to determine the WMFR is demonstrated and compared with that of conventional methods in CF-WMS by detection of trace acetylene. The results show that the new method provides a four times smaller fitting error than the conventional methods and retrieves the concentration more accurately.
A simultaneous all-optical half/full-subtraction strategy using cascaded highly nonlinear fibers
NASA Astrophysics Data System (ADS)
Singh, Karamdeep; Kaur, Gurmeet; Singh, Maninder Lal
2018-02-01
Using non-linear effects such as cross-gain modulation (XGM) and cross-phase modulation (XPM) inside two highly non-linear fibres (HNLFs) arranged in a cascaded configuration, a simultaneous half/full-subtracter is proposed. The proposed simultaneous half/full-subtracter design is attractive due to several features, such as input data pattern independence and usage of a minimal number of non-linear elements, i.e. HNLFs. Proof-of-concept simulations have been conducted at a 100 Gbps rate, indicating good performance: extinction ratios > 6.28 dB and eye-opening factors (EO) > 77.1072% are recorded for each implemented output. The proposed simultaneous half/full-subtracter can be used as a key component in all-optical information processing circuits.
Stand-off transmission lines and method for making same
Tuckerman, D.B.
1991-05-21
Standoff transmission lines in an integrated circuit structure are formed by etching away or removing the portion of the dielectric layer separating the microstrip metal lines and the ground plane from the regions that are not under the lines. The microstrip lines can be fabricated by a subtractive process of etching a metal layer, an additive process of direct laser writing fine lines followed by plating up the lines or a subtractive/additive process in which a trench is etched over a nucleation layer and the wire is electrolytically deposited. Microstrip lines supported on freestanding posts of dielectric material surrounded by air gaps are produced. The average dielectric constant between the lines and ground plane is reduced, resulting in higher characteristic impedance, less crosstalk between lines, increased signal propagation velocities, and reduced wafer stress. 16 figures.
Reference layer adaptive filtering (RLAF) for EEG artifact reduction in simultaneous EEG-fMRI.
Steyrl, David; Krausz, Gunther; Koschutnig, Karl; Edlinger, Günter; Müller-Putz, Gernot R
2017-04-01
Simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) combines advantages of both methods, namely the high temporal resolution of EEG and the high spatial resolution of fMRI. However, EEG quality is limited due to severe artifacts caused by fMRI scanners. To improve EEG data quality substantially, we introduce methods that use a reusable reference layer EEG cap prototype in combination with adaptive filtering. The first method, reference layer adaptive filtering (RLAF), uses adaptive filtering with reference layer artifact data to optimize artifact subtraction from EEG. In the second method, multi-band reference layer adaptive filtering (MBRLAF), adaptive filtering is performed on bandwidth-limited sub-bands of the EEG and the reference channels. The results suggest that RLAF outperforms the baseline method, average artifact subtraction, in all settings and also its direct predecessor, reference layer artifact subtraction (RLAS), in lower (<35 Hz) frequency ranges. MBRLAF is computationally more demanding than RLAF, but highly effective in all EEG frequency ranges. Effectivity is determined by visual inspection, as well as by root-mean-square voltage reduction and power reduction of the EEG, provided that physiological EEG components such as occipital EEG alpha power and visual evoked potentials (VEP) are preserved. We demonstrate that both RLAF and MBRLAF improve VEP quality. For that, we calculate the mean-squared distance of single-trial VEPs to the mean VEP and estimate single-trial VEP classification accuracies. We found that the average mean-squared distance is lowest and the average classification accuracy is highest after MBRLAF. RLAF was second best. In conclusion, the results suggest that RLAF and MBRLAF are potentially very effective in improving EEG quality of simultaneous EEG-fMRI.
Highlights: We present a new and reusable reference layer cap prototype for simultaneous EEG-fMRI. We introduce new algorithms for reducing EEG artifacts due to simultaneous fMRI. The algorithms combine a reference layer and adaptive filtering. Several evaluation criteria suggest superior effectivity in terms of artifact reduction. We demonstrate that physiological EEG components are preserved.
Reference layer adaptive filtering (RLAF) for EEG artifact reduction in simultaneous EEG-fMRI
NASA Astrophysics Data System (ADS)
Steyrl, David; Krausz, Gunther; Koschutnig, Karl; Edlinger, Günter; Müller-Putz, Gernot R.
2017-04-01
Objective. Simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) combines advantages of both methods, namely the high temporal resolution of EEG and the high spatial resolution of fMRI. However, EEG quality is limited due to severe artifacts caused by fMRI scanners. Approach. To improve EEG data quality substantially, we introduce methods that use a reusable reference layer EEG cap prototype in combination with adaptive filtering. The first method, reference layer adaptive filtering (RLAF), uses adaptive filtering with reference layer artifact data to optimize artifact subtraction from EEG. In the second method, multi-band reference layer adaptive filtering (MBRLAF), adaptive filtering is performed on bandwidth-limited sub-bands of the EEG and the reference channels. Main results. The results suggest that RLAF outperforms the baseline method, average artifact subtraction, in all settings and also its direct predecessor, reference layer artifact subtraction (RLAS), in lower (<35 Hz) frequency ranges. MBRLAF is computationally more demanding than RLAF, but highly effective in all EEG frequency ranges. Effectivity is determined by visual inspection, as well as by root-mean-square voltage reduction and power reduction of the EEG, provided that physiological EEG components such as occipital EEG alpha power and visual evoked potentials (VEP) are preserved. We demonstrate that both RLAF and MBRLAF improve VEP quality. For that, we calculate the mean-squared distance of single-trial VEPs to the mean VEP and estimate single-trial VEP classification accuracies. We found that the average mean-squared distance is lowest and the average classification accuracy is highest after MBRLAF. RLAF was second best. Significance. In conclusion, the results suggest that RLAF and MBRLAF are potentially very effective in improving EEG quality of simultaneous EEG-fMRI.
Highlights: We present a new and reusable reference layer cap prototype for simultaneous EEG-fMRI. We introduce new algorithms for reducing EEG artifacts due to simultaneous fMRI. The algorithms combine a reference layer and adaptive filtering. Several evaluation criteria suggest superior effectivity in terms of artifact reduction. We demonstrate that physiological EEG components are preserved.
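The reference-layer adaptive-filtering idea can be sketched with a basic LMS filter (synthetic signals; the tap count and step size are illustrative choices, not the authors' settings):

```python
import numpy as np

def lms_artifact_subtract(eeg, ref, n_taps=8, mu=0.01):
    """Adapt an FIR filter so the filtered reference-layer signal matches
    the artifact in the EEG channel; the LMS error is the cleaned EEG."""
    w = np.zeros(n_taps)
    cleaned = np.zeros_like(eeg)
    for n in range(len(eeg)):
        x = ref[max(0, n - n_taps + 1):n + 1][::-1]
        x = np.pad(x, (0, n_taps - len(x)))   # most-recent-first tap vector
        e = eeg[n] - np.dot(w, x)             # artifact-reduced sample
        w += mu * e * x                       # LMS weight update
        cleaned[n] = e
    return cleaned

rng = np.random.default_rng(2)
artifact = rng.normal(0.0, 1.0, 4000)                     # reference-layer recording
brain = 0.3 * np.sin(2 * np.pi * 0.01 * np.arange(4000))  # slow physiological signal
eeg = brain + 2.0 * artifact                              # contaminated channel
cleaned = lms_artifact_subtract(eeg, artifact)
```

Because the physiological signal is uncorrelated with the reference layer, the filter converges on the artifact coupling alone and leaves the brain signal largely intact.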
1994-03-01
labels of α, which are called significance levels. The hypothesis tests are done based on the α levels. The maximum probabilities of making a type II error...critical values at specific α levels. This procedure is done for each of the 50,000 samples. The number of the samples passing each test at those specific...α levels is counted. The ratio of the number of accepted samples to 50,000 gives the percentage point. Then, subtracting that value from one would
Dynamic pulse difference circuit
Erickson, Gerald L.
1978-01-01
A digital electronic circuit of especial use for subtracting background activity pulses in gamma spectrometry comprises an up-down counter connected to count up with signal-channel pulses and to count down with background-channel pulses. A detector responsive to the count position of the up-down counter provides a signal when the up-down counter has completed one scaling sequence cycle of counts in the up direction. In an alternate embodiment, a detector responsive to the count position of the up-down counter provides a signal upon overflow of the counter.
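The counting logic of the circuit can be modeled in software: signal-channel pulses count up, background-channel pulses count down, and a detector flags completion of one full scaling cycle in the up direction. The modulus and the flag behaviour below are illustrative assumptions, not the patent's specification.

```python
class UpDownCounter:
    """Software model of a dynamic pulse difference circuit: net counts
    (signal minus background) accumulate, and completing one scaling
    sequence of `modulus` counts upward raises a flag.

    Modulus value and reset behaviour are hypothetical choices."""

    def __init__(self, modulus=256):
        self.modulus = modulus
        self.count = 0
        self.cycle_complete = False      # detector output

    def signal_pulse(self):
        self.count += 1
        if self.count == self.modulus:   # one full scaling cycle up
            self.count = 0
            self.cycle_complete = True

    def background_pulse(self):
        self.count -= 1
```

In the patent's alternate embodiment the detector fires on counter overflow instead; the same model applies with the flag moved to the wrap-around condition.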
Strong running coupling at τ and Z(0) mass scales from lattice QCD.
Blossier, B; Boucaud, Ph; Brinet, M; De Soto, F; Du, X; Morenas, V; Pène, O; Petrov, K; Rodríguez-Quintero, J
2012-06-29
This Letter reports on the first computation, from data obtained in lattice QCD with u, d, s, and c quarks in the sea, of the running strong coupling via the ghost-gluon coupling renormalized in the momentum-subtraction Taylor scheme. We provide readers with estimates of α_MS̄(m_τ²) and α_MS̄(m_Z²) in very good agreement with experimental results. Including a dynamical c quark makes the needed running of α_MS̄ safer.
Comparison of prosthetic models produced by traditional and additive manufacturing methods.
Park, Jin-Young; Kim, Hae-Young; Kim, Ji-Hwan; Kim, Jae-Hong; Kim, Woong-Chul
2015-08-01
The purpose of this study was to verify the clinical feasibility of additive manufacturing by comparing the accuracy of four different manufacturing methods for metal copings: the conventional lost wax technique (CLWT); a subtractive method, wax blank milling (WBM); and two additive methods, multi jet modeling (MJM) and micro-stereolithography (Micro-SLA). Thirty study models were created using an acrylic model with the maxillary upper right canine, first premolar, and first molar teeth. Based on the scan files from a non-contact blue light scanner (Identica; Medit Co. Ltd., Seoul, Korea), thirty copings each were produced using the WBM, MJM, and Micro-SLA methods, and another thirty frameworks were produced using the CLWT method. To measure the marginal and internal gaps, the silicone replica method was adopted, and the silicone images obtained were evaluated using a digital microscope (KH-7700; Hirox, Tokyo, Japan) at 140× magnification. Analyses were performed using two-way analysis of variance (ANOVA) and the Tukey post hoc test (α=.05). The mean marginal and internal gaps showed significant differences according to tooth type (P<.001 and P<.001, respectively) and manufacturing method (P<.037 and P<.001, respectively). Unlike the WBM and MJM methods, Micro-SLA showed no significant difference from CLWT in mean marginal gap. The mean gaps resulting from the four different manufacturing methods were all within a clinically allowable range; thus, the clinical use of additive manufacturing methods is acceptable as an alternative to the traditional lost wax technique and subtractive manufacturing.
The Sixth Data Release of the Sloan Digital Sky Survey
NASA Astrophysics Data System (ADS)
Adelman-McCarthy, Jennifer K.; Agüeros, Marcel A.; Allam, Sahar S.; Allende Prieto, Carlos; Anderson, Kurt S. J.; Anderson, Scott F.; Annis, James; Bahcall, Neta A.; Bailer-Jones, C. A. L.; Baldry, Ivan K.; Barentine, J. C.; Bassett, Bruce A.; Becker, Andrew C.; Beers, Timothy C.; Bell, Eric F.; Berlind, Andreas A.; Bernardi, Mariangela; Blanton, Michael R.; Bochanski, John J.; Boroski, William N.; Brinchmann, Jarle; Brinkmann, J.; Brunner, Robert J.; Budavári, Tamás; Carliles, Samuel; Carr, Michael A.; Castander, Francisco J.; Cinabro, David; Cool, R. J.; Covey, Kevin R.; Csabai, István; Cunha, Carlos E.; Davenport, James R. A.; Dilday, Ben; Doi, Mamoru; Eisenstein, Daniel J.; Evans, Michael L.; Fan, Xiaohui; Finkbeiner, Douglas P.; Friedman, Scott D.; Frieman, Joshua A.; Fukugita, Masataka; Gänsicke, Boris T.; Gates, Evalyn; Gillespie, Bruce; Glazebrook, Karl; Gray, Jim; Grebel, Eva K.; Gunn, James E.; Gurbani, Vijay K.; Hall, Patrick B.; Harding, Paul; Harvanek, Michael; Hawley, Suzanne L.; Hayes, Jeffrey; Heckman, Timothy M.; Hendry, John S.; Hindsley, Robert B.; Hirata, Christopher M.; Hogan, Craig J.; Hogg, David W.; Hyde, Joseph B.; Ichikawa, Shin-ichi; Ivezić, Željko; Jester, Sebastian; Johnson, Jennifer A.; Jorgensen, Anders M.; Jurić, Mario; Kent, Stephen M.; Kessler, R.; Kleinman, S. J.; Knapp, G. 
R.; Kron, Richard G.; Krzesinski, Jurek; Kuropatkin, Nikolay; Lamb, Donald Q.; Lampeitl, Hubert; Lebedeva, Svetlana; Lee, Young Sun; French Leger, R.; Lépine, Sébastien; Lima, Marcos; Lin, Huan; Long, Daniel C.; Loomis, Craig P.; Loveday, Jon; Lupton, Robert H.; Malanushenko, Olena; Malanushenko, Viktor; Mandelbaum, Rachel; Margon, Bruce; Marriner, John P.; Martínez-Delgado, David; Matsubara, Takahiko; McGehee, Peregrine M.; McKay, Timothy A.; Meiksin, Avery; Morrison, Heather L.; Munn, Jeffrey A.; Nakajima, Reiko; Neilsen, Eric H., Jr.; Newberg, Heidi Jo; Nichol, Robert C.; Nicinski, Tom; Nieto-Santisteban, Maria; Nitta, Atsuko; Okamura, Sadanori; Owen, Russell; Oyaizu, Hiroaki; Padmanabhan, Nikhil; Pan, Kaike; Park, Changbom; Peoples, John, Jr.; Pier, Jeffrey R.; Pope, Adrian C.; Purger, Norbert; Raddick, M. Jordan; Re Fiorentin, Paola; Richards, Gordon T.; Richmond, Michael W.; Riess, Adam G.; Rix, Hans-Walter; Rockosi, Constance M.; Sako, Masao; Schlegel, David J.; Schneider, Donald P.; Schreiber, Matthias R.; Schwope, Axel D.; Seljak, Uroš; Sesar, Branimir; Sheldon, Erin; Shimasaku, Kazu; Sivarani, Thirupathi; Allyn Smith, J.; Snedden, Stephanie A.; Steinmetz, Matthias; Strauss, Michael A.; SubbaRao, Mark; Suto, Yasushi; Szalay, Alexander S.; Szapudi, István; Szkody, Paula; Tegmark, Max; Thakar, Aniruddha R.; Tremonti, Christy A.; Tucker, Douglas L.; Uomoto, Alan; Vanden Berk, Daniel E.; Vandenberg, Jan; Vidrih, S.; Vogeley, Michael S.; Voges, Wolfgang; Vogt, Nicole P.; Wadadekar, Yogesh; Weinberg, David H.; West, Andrew A.; White, Simon D. M.; Wilhite, Brian C.; Yanny, Brian; Yocum, D. R.; York, Donald G.; Zehavi, Idit; Zucker, Daniel B.
2008-04-01
This paper describes the Sixth Data Release of the Sloan Digital Sky Survey. With this data release, the imaging of the northern Galactic cap is now complete. The survey contains images and parameters of roughly 287 million objects over 9583 deg², including scans over a large range of Galactic latitudes and longitudes. The survey also includes 1.27 million spectra of stars, galaxies, quasars, and blank sky (for sky subtraction) selected over 7425 deg². This release includes much more stellar spectroscopy than was available in previous data releases and also includes detailed estimates of stellar temperatures, gravities, and metallicities. The results of improved photometric calibration are now available, with uncertainties of roughly 1% in g, r, i, and z, and 2% in u, substantially better than the uncertainties in previous data releases. The spectra in this data release have improved wavelength and flux calibration, especially in the extreme blue and extreme red, leading to the qualitatively better determination of stellar types and radial velocities. The spectrophotometric fluxes are now tied to point-spread function magnitudes of stars rather than fiber magnitudes. This gives more robust results in the presence of seeing variations, but also implies a change in the spectrophotometric scale, which is now brighter by roughly 0.35 mag. Systematic errors in the velocity dispersions of galaxies have been fixed, and the results of two independent codes for determining spectral classifications and redshifts are made available. Additional spectral outputs are made available, including calibrated spectra from individual 15-minute exposures and the sky spectrum subtracted from each exposure. We also quantify a recently recognized underestimation of the brightnesses of galaxies of large angular extent due to poor sky subtraction; the bias can exceed 0.2 mag for galaxies brighter than r = 14 mag.
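The quoted 0.35 mag change in the spectrophotometric scale translates to a flux factor via the standard magnitude relation; a one-line helper makes the size of the change concrete (the function name is ours, not from the paper):

```python
def mag_offset_to_flux_ratio(delta_mag):
    """Pogson relation: a magnitude offset dm corresponds to a flux
    ratio of 10**(dm / 2.5). The 0.35 mag brightening of the DR6
    spectrophotometric scale is thus roughly a 38% flux increase."""
    return 10 ** (delta_mag / 2.5)
```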
PSF subtraction to search for distant Jupiters with SPITZER
NASA Astrophysics Data System (ADS)
Rameau, Julien; Artigau, Etienne; Baron, Frédérique; Lafrenière, David; Doyon, Rene; Malo, Lison; Naud, Marie-Eve; Delorme, Philippe; Janson, Markus; Albert, Loic; Gagné, Jonathan; Beichman, Charles
2015-12-01
In the course of the search for extrasolar planets, the focus has been on rocky planets very close (within a few AU) to their parent stars. However, planetary systems may host gas giants as well, possibly at larger separations from the central star. Direct imaging is the only technique able to probe the outer parts of planetary systems. With the advent of the new generation of planet finders like GPI and SPHERE, extrasolar systems are now studied at the solar system scale. Nevertheless, very extended planetary systems do exist and have been found (GU Psc b, AB Pic b, etc.). They are easier to detect and characterize, and they are also excellent proxies for the close-in gas giants that are detected from the ground. These planets have no equivalent in our solar system, and their origin remains a matter of speculation. Studying a planetary system from its innermost to its outermost parts is therefore mandatory to obtain a clear understanding of its architecture, and hence hints of its formation and evolution. We are carrying out a space-based survey using SPITZER to search for distant companions around a well-characterized sample of 120 young and nearby stars. We designed an observing strategy that allows building a very homogeneous PSF library. With this library, we perform a PSF subtraction to search for planets from 10'' down to 1''. In this poster, I will present the library, the different algorithms used to subtract the PSF, and the promising detection sensitivity that we are able to reach with this survey. This project to search for the most extreme planetary systems is unique in the exoplanet community. It is also the only realistic means of directly imaging, and subsequently obtaining spectroscopy of, young Saturn- or Jupiter-mass planets in the JWST era.
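Library-based PSF subtraction can be sketched in its most basic form: median-combine the reference frames into an empirical PSF model, flux-match it to the target frame, and subtract, leaving any faint companion in the residual. Real pipelines use far more elaborate schemes (e.g. LOCI or KLIP); the version below is a hypothetical minimal illustration, not the survey's algorithm.

```python
import numpy as np

def psf_subtract(target, library):
    """Minimal library-based PSF subtraction sketch.

    `library` is a stack of reference frames of the same star field;
    the median combine and global flux scaling are simplifying
    assumptions for illustration."""
    model = np.median(library, axis=0)       # empirical PSF model
    scale = target.sum() / model.sum()       # flux-match model to target
    return target - scale * model            # residual: companion survives
```

A point source injected off-axis survives the subtraction while the stellar halo cancels almost entirely.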
Effective real-time vehicle tracking using discriminative sparse coding on local patches
NASA Astrophysics Data System (ADS)
Chen, XiangJun; Ye, Feiyue; Ruan, Yaduan; Chen, Qimei
2016-01-01
A visual tracking framework that provides an object detector and tracker, focused on effective and efficient visual tracking in surveillance for real-world intelligent transport system applications, is proposed. The framework casts the tracking task as problems of object detection, feature representation, and classification, which differs from appearance model-matching approaches. Through a feature representation called discriminative sparse coding on local patches (DSCLP), which trains a dictionary on locally clustered patches sampled from both positive and negative datasets, the discriminative power and robustness have been improved remarkably, making the method more robust in complex realistic settings with various kinds of degraded image quality. Moreover, by catching objects through one-time background subtraction, along with offline dictionary training, computation time is dramatically reduced, which enables the framework to achieve real-time tracking performance even in a high-definition sequence with heavy traffic. Experimental results show that this work outperforms some state-of-the-art methods in terms of speed, accuracy, and robustness, and exhibits increased robustness in a complex real-world scenario with degraded image quality caused by vehicle occlusion, image blur from rain or fog, and changes in viewpoint or scale.
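The background-subtraction detection stage can be illustrated with a one-pass running-average model: pixels that differ from the slowly updated background by more than a threshold are flagged as foreground. This is a generic simplified stand-in, not the paper's implementation, and the parameter values are assumptions.

```python
import numpy as np

def detect_foreground(frames, alpha=0.05, thresh=30):
    """Running-average background subtraction over a frame sequence.

    `alpha` (background learning rate) and `thresh` (difference
    threshold) are illustrative choices. Returns one boolean
    foreground mask per frame after the first."""
    background = frames[0].astype(float)
    masks = []
    for frame in frames[1:]:
        diff = np.abs(frame.astype(float) - background)
        masks.append(diff > thresh)                       # foreground mask
        background = (1 - alpha) * background + alpha * frame
    return masks
```

Detected foreground regions would then be passed to the feature-representation and classification stages.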
Soares, Marcelo Bento; Bonaldo, Maria de Fatima
1998-01-01
This invention provides a method to normalize a cDNA library comprising: (a) constructing a directionally cloned library containing cDNA inserts wherein the insert is capable of being amplified by polymerase chain reaction; (b) converting a double-stranded cDNA library into single-stranded DNA circles; (c) generating single-stranded nucleic acid molecules complementary to the single-stranded DNA circles converted in step (b) by polymerase chain reaction with appropriate primers; (d) hybridizing the single-stranded DNA circles converted in step (b) with the complementary single-stranded nucleic acid molecules generated in step (c) to produce partial duplexes to an appropriate Cot; and (e) separating the unhybridized single-stranded DNA circles from the hybridized DNA circles, thereby generating a normalized cDNA library. This invention also provides a method to normalize a cDNA library wherein the generating of single-stranded nucleic acid molecules complementary to the single-stranded DNA circles converted in step (b) is by excising cDNA inserts from the double-stranded cDNA library; purifying the cDNA inserts from cloning vectors; and digesting the cDNA inserts with an exonuclease. This invention further provides a method to construct a subtractive cDNA library following the steps described above. This invention further provides normalized and/or subtractive cDNA libraries generated by the above methods.
A video method to study Drosophila sleep.
Zimmerman, John E; Raizen, David M; Maycock, Matthew H; Maislin, Greg; Pack, Allan I
2008-11-01
To use video to determine the accuracy of the infrared beam-splitting method for measuring sleep in Drosophila, and to determine the effect of time of day, sex, genotype, and age on sleep measurements. A digital image analysis method based on the frame subtraction principle was developed to distinguish a quiescent from a moving fly. Data obtained using this method were compared with data obtained using the Drosophila Activity Monitoring System (DAMS). The location of the fly was identified by its centroid location in the subtracted images. The error associated with the identification of total sleep using DAMS ranged from 7% to 95% and depended on genotype, sex, age, and time of day. The degree of the total sleep error was dependent on genotype during the daytime (P < 0.001) and on age during both the daytime and the nighttime (P < 0.001 for both). The DAMS method overestimated sleep bout duration during both the day and the night, and the degree of these errors was genotype dependent (P < 0.001). Brief movements that occur during sleep bouts can be accurately identified using video. Both video and DAMS detected a homeostatic response to sleep deprivation. Video digital analysis is more accurate than DAMS for fly sleep measurements. In particular, conclusions drawn from DAMS measurements regarding daytime sleep and sleep architecture should be made with caution. Video analysis also permits the assessment of fly position and brief movements during sleep.
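The frame-subtraction step can be sketched directly: difference two consecutive frames, threshold the change, and report the centroid of the changed pixels as the fly's position. The threshold value and function name below are illustrative assumptions, not the study's calibrated parameters.

```python
import numpy as np

def fly_centroid_and_motion(prev, curr, thresh=25):
    """Frame subtraction: the fly is 'moving' if any pixel changes by
    more than `thresh`; its position is the centroid of changed pixels.

    `thresh` is a hypothetical value for illustration."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    changed = diff > thresh
    if not changed.any():
        return None, False                  # quiescent: no position update
    ys, xs = np.nonzero(changed)
    return (ys.mean(), xs.mean()), True     # centroid of moved pixels
```

A fly that jumps between two positions yields a centroid midway between them (both the vacated and the newly occupied pixels change), which is why brief movements are detectable even within sleep bouts.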
Nagashima, Shiori; Yoshida, Akihiro; Suzuki, Nao; Ansai, Toshihiro; Takehara, Tadamichi
2005-01-01
Genomic subtractive hybridization was used to design Prevotella nigrescens-specific primers and TaqMan probes. Based on this technique, a TaqMan real-time PCR assay was developed for quantifying four oral black-pigmented Prevotella species. The combination of real-time PCR and genomic subtractive hybridization is useful for preparing species-specific primer-probe sets for closely related species. PMID:15956428
Rowing Sport in Learning Fractions of the Fourth Grade Students
ERIC Educational Resources Information Center
Nasution, Marhamah Fajriyah; Putri, Ratu Ilma Indra; Zulkardi
2018-01-01
This study aimed to produce a learning trajectory with a rowing context that can help students understand addition and subtraction of fractions. Subjects of the research were fourth-grade students of MIN 2 Palembang. The method used was design research with three stages: preparing for the experiment, the design experiment, and the retrospective analysis.…
The Effects of the "Fraction Ruler" Manipulative for Teaching Computation of Fractions
ERIC Educational Resources Information Center
Schiller, Diane Profita
1977-01-01
Explores the hypothesis that students in the fourth, fifth and sixth grade who were exposed to the "fraction ruler" as a manipulative for exploring basic fraction operations would perform more successfully in addition, subtraction and multiplication problems than students taught fraction operations by the traditional method. (Author/RK)
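The operations these fraction-ruler exercises target can be checked exactly with rational arithmetic; Python's `fractions` module keeps every result in lowest terms, mirroring the common-denominator and cancellation steps students practice:

```python
from fractions import Fraction

# The three operations studied: addition, subtraction, multiplication.
total = Fraction(1, 4) + Fraction(1, 2)       # common denominator: 3/4
difference = Fraction(5, 6) - Fraction(1, 3)  # 5/6 - 2/6 = 1/2
product = Fraction(2, 3) * Fraction(3, 4)     # cross-cancels to 1/2
```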
A simple, gravimetric method to quantify inorganic carbon in calcareous soils
USDA-ARS?s Scientific Manuscript database
Total carbon (TC) in calcareous soils has two components: inorganic carbon (IC) as calcite and or dolomite and organic carbon (OC) in the soil organic matter. The IC must be measured and subtracted from TC to obtain OC. Our objective was to develop a simple gravimetric technique to quantify IC. Th...
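The carbon bookkeeping described above is a simple subtraction, with the gravimetric step converting the measured CO2 mass loss to carbonate carbon via molar masses. A minimal sketch follows; the molar masses are standard values, but the helper names and example numbers are hypothetical, not from the manuscript.

```python
# Standard molar masses in g/mol.
M_C, M_CO2 = 12.011, 44.009

def inorganic_carbon_from_co2_loss(co2_mass_g):
    """Gravimetric step: carbonate carbon inferred from the mass of
    CO2 evolved when the carbonates are decomposed."""
    return co2_mass_g * (M_C / M_CO2)

def organic_carbon(total_c_g, inorganic_c_g):
    """OC by difference, as the abstract states: OC = TC - IC."""
    return total_c_g - inorganic_c_g
```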
2, 4, 8: Doubling Snakes, Caterpillars and Goats Made Easy!
ERIC Educational Resources Information Center
Kartambis, Kathy
2007-01-01
Research has established that children's development of addition and subtraction skills progresses through a hierarchy of strategies that begin with counting-by-one methods through to flexible mental strategies using a combination of knowledge of basic facts and understanding of place value. An important transition point is the shift from the…
26 CFR 1.923-1T - Temporary regulations; exempt foreign trade income.
Code of Federal Regulations, 2011 CFR
2011-04-01
... FSC's gross income is determined by subtracting from its foreign trading gross receipts the transfer price determined under the transfer pricing methods of section 925(a). If the FSC is the commission... uses either of the two administrative pricing rules, provided for by sections 925(a)(1) and (2), to...
Human gait recognition by pyramid of HOG feature on silhouette images
NASA Astrophysics Data System (ADS)
Yang, Guang; Yin, Yafeng; Park, Jeanrok; Man, Hong
2013-03-01
As an uncommon biometric modality, human gait recognition has the great advantage of identifying people at a distance, without requiring high-resolution images. It has attracted much attention in recent years, especially in the fields of computer vision and remote sensing. In this paper, we propose a human gait recognition framework that consists of a reliable background subtraction method, followed by pyramid of Histogram of Oriented Gradients (pHOG) feature extraction on the silhouette image, and a Hidden Markov Model (HMM) based classifier. Through background subtraction, the silhouette of the human gait in each frame is extracted and normalized from the raw video sequence. After removing the shadow and noise in each region of interest (ROI), the pHOG feature is computed on the silhouette images. The pHOG features of each gait class are then used to train a corresponding HMM. In the test stage, the pHOG feature is extracted from each test sequence and used to calculate the posterior probability with respect to each trained HMM model. Experimental results on the CASIA Gait Dataset B1 demonstrate that our proposed method can achieve a very competitive recognition rate.
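A pHOG descriptor concatenates gradient-orientation histograms over successively finer grids. The sketch below shows the idea for a binary silhouette; the level and bin counts are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def phog(silhouette, levels=2, bins=8):
    """Pyramid-of-HOG sketch: orientation histograms at grid levels
    1x1, 2x2, 4x4, ... concatenated into one normalized vector.

    `levels` and `bins` are hypothetical values for illustration."""
    gy, gx = np.gradient(silhouette.astype(float))
    mag = np.hypot(gx, gy)                          # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)         # unsigned orientation
    feats = []
    for lv in range(levels + 1):
        cells = 2 ** lv                             # cells per side
        for rows in np.array_split(np.arange(silhouette.shape[0]), cells):
            for cols in np.array_split(np.arange(silhouette.shape[1]), cells):
                hist, _ = np.histogram(ang[np.ix_(rows, cols)],
                                       bins=bins, range=(0.0, np.pi),
                                       weights=mag[np.ix_(rows, cols)])
                feats.append(hist)
    f = np.concatenate(feats).astype(float)
    return f / (f.sum() + 1e-9)                     # L1-normalized feature
```

With `levels=2` and `bins=8` this yields a (1 + 4 + 16) × 8 = 168-dimensional feature per silhouette, one observation vector per frame for the HMM.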
NASA Astrophysics Data System (ADS)
Colecchia, Federico
2014-03-01
Low-energy strong interactions are a major source of background at hadron colliders, and methods of subtracting the associated energy flow are well established in the field. Traditional approaches treat the contamination as diffuse, and estimate background energy levels either by averaging over large data sets or by restricting to given kinematic regions inside individual collision events. On the other hand, more recent techniques take into account the discrete nature of background, most notably by exploiting the presence of substructure inside hard jets, i.e. inside collections of particles originating from scattered hard quarks and gluons. However, none of the existing methods subtract background at the level of individual particles inside events. We illustrate the use of an algorithm that will allow particle-by-particle background discrimination at the Large Hadron Collider, and we envisage this as the basis for a novel event filtering procedure upstream of the official reconstruction chains. Our hope is that this new technique will improve physics analysis when used in combination with state-of-the-art algorithms in high-luminosity hadron collider environments.
In Vivo Small Animal Imaging using Micro-CT and Digital Subtraction Angiography
Badea, C.T.; Drangova, M.; Holdsworth, D.W.; Johnson, G.A.
2009-01-01
Small animal imaging has a critical role in phenotyping, drug discovery, and in providing a basic understanding of mechanisms of disease. Translating imaging methods from humans to small animals is not an easy task. The purpose of this work is to review in vivo X-ray based small animal imaging, with a focus on in vivo micro-computed tomography (micro-CT) and digital subtraction angiography (DSA). We present the principles, technologies, image quality parameters and types of applications. We show that both methods can be used not only to provide morphological, but also functional information, such as cardiac function estimation or perfusion. Compared to other modalities, x-ray based imaging is usually regarded as being able to provide higher throughput at lower cost and adequate resolution. The limitations are usually associated with the relatively poor contrast mechanisms and potential radiation damage due to ionizing radiation, although the use of contrast agents and careful design of studies can address these limitations. We hope that the information will effectively address how x-ray based imaging can be exploited for successful in vivo preclinical imaging. PMID:18758005
Humans do not have direct access to retinal flow during walking
Souman, Jan L.; Freeman, Tom C.A.; Eikmeier, Verena; Ernst, Marc O.
2013-01-01
Perceived visual speed has been reported to be reduced during walking. This reduction has been attributed to a partial subtraction of walking speed from visual speed (Durgin & Gigone, 2007; Durgin, Gigone, & Scott, 2005). We tested whether observers still have access to the retinal flow before subtraction takes place. Observers performed a 2IFC visual speed discrimination task while walking on a treadmill. In one condition, walking speed was identical in the two intervals, while in a second condition walking speed differed between intervals. If observers have access to the retinal flow before subtraction, any changes in walking speed across intervals should not affect their ability to discriminate retinal flow speed. Contrary to this “direct-access hypothesis”, we found that observers were worse at discrimination when walking speed differed between intervals. The results therefore suggest that observers do not have access to retinal flow before subtraction. We also found that the amount of subtraction depended on the visual speed presented, suggesting that the interaction between the processing of visual input and of self-motion is more complex than previously proposed. PMID:20884509
Carvalho, Fabiola B; Gonçalves, Marcelo; Tanomaru-Filho, Mário
2007-04-01
The purpose of this study was to describe a new technique by using Adobe Photoshop CS (San Jose, CA) image-analysis software to evaluate the radiographic changes of chronic periapical lesions after root canal treatment by digital subtraction radiography. Thirteen upper anterior human teeth with pulp necrosis and radiographic image of chronic periapical lesion were endodontically treated and radiographed 0, 2, 4, and 6 months after root canal treatment by using a film holder. The radiographic films were automatically developed and digitized. The radiographic images taken 0, 2, 4, and 6 months after root canal therapy were submitted to digital subtraction in pairs (0 and 2 months, 2 and 4 months, and 4 and 6 months) choosing "image," "calculation," "subtract," and "new document" tools from Adobe Photoshop CS image-analysis software toolbar. The resulting images showed areas of periapical healing in all cases. According to this methodology, the healing or expansion of periapical lesions can be evaluated by means of digital subtraction radiography by using Adobe Photoshop CS software.
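The subtraction step itself is a pixelwise difference of the two registered radiographs plus a mid-grey offset, so that unchanged anatomy cancels to grey while healing appears brighter and bone loss darker. The sketch below uses NumPy rather than Photoshop; the offset of 128 is a common convention in subtraction radiography, assumed here rather than taken from the paper.

```python
import numpy as np

def subtract_radiographs(follow_up, baseline, offset=128):
    """Digital subtraction radiography sketch: difference of two
    registered 8-bit radiographs, shifted to mid-grey.

    `offset=128` is an assumed convention, not the study's setting."""
    diff = follow_up.astype(int) - baseline.astype(int) + offset
    return np.clip(diff, 0, 255).astype(np.uint8)   # back to 8-bit range
```

Regions of periapical healing (increased density at follow-up) end up above 128 in the subtracted image, which is what the authors scored across the 0-, 2-, 4-, and 6-month pairs.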
Robinson, Katherine M; Ninowski, Jerilyn E
2003-12-01
Problems of the form a + b - b have been used to assess conceptual understanding of the relationship between addition and subtraction. No study has investigated the same relationship between multiplication and division on problems of the form d x e / e. In both types of inversion problems, no calculation is required if the inverse relationship between the operations is understood. Adult participants solved addition/subtraction and multiplication/division inversion (e.g., 9 x 22 / 22) and standard (e.g., 2 + 27 - 28) problems. Participants started to use the inversion strategy earlier and more frequently on addition/subtraction problems. Participants took longer to solve both types of multiplication/division problems. Overall, conceptual understanding of the relationship between multiplication and division was not as strong as that between addition and subtraction. One explanation for this difference in performance is that the operation of division is more weakly represented and understood than the other operations and that this weakness affects performance on problems of the form d x e / e.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturm, C.; Soni, A.; Aoki, Y.
2009-07-01
We extend the Rome-Southampton regularization independent momentum-subtraction renormalization scheme (RI/MOM) for bilinear operators to one with a nonexceptional, symmetric subtraction point. Two-point Green's functions with the insertion of quark bilinear operators are computed with scalar, pseudoscalar, vector, axial-vector and tensor operators at one-loop order in perturbative QCD. We call this new scheme RI/SMOM, where the S stands for 'symmetric'. Conversion factors are derived, which connect the RI/SMOM scheme and the MS scheme and can be used to convert results obtained in lattice calculations into the MS scheme. Such a symmetric subtraction point involves nonexceptional momenta, implying a lattice calculation with substantially suppressed contamination from infrared effects. Further, we find that the size of the one-loop corrections for these infrared-improved kinematics is substantially decreased in the case of the pseudoscalar and scalar operators, suggesting a much better behaved perturbative series. Therefore it should allow us to reduce the error in the determination of the quark mass appreciably.
Two-dimensional real-time imaging system for subtraction angiography using an iodine filter
NASA Astrophysics Data System (ADS)
Umetani, Keiji; Ueda, Ken; Takeda, Tohoru; Anno, Izumi; Itai, Yuji; Akisada, Masayoshi; Nakajima, Teiichi
1992-01-01
A new type of subtraction imaging system was developed using an iodine filter and a single-energy, broad-bandwidth monochromatized x-ray beam. The x-ray images of coronary arteries made after intravenous injection of a contrast agent are enhanced by an energy-subtraction technique. Filter chopping of the x-ray beam switches energies rapidly, so that a nearly simultaneous pair of filtered and nonfiltered images can be made. By using a high-speed video camera, a pair of 512 × 512 pixel images can be obtained within 9 ms. Three hundred eighty-four images (raw data) are stored in a 144-Mbyte frame memory. After phantom studies, in vivo subtracted images of coronary arteries in dogs were obtained at a rate of 15 images/s.
NASA Astrophysics Data System (ADS)
Shairsingh, Kerolyn K.; Jeong, Cheol-Heon; Wang, Jonathan M.; Evans, Greg J.
2018-06-01
Vehicle emissions represent a major source of air pollution in urban districts, producing highly variable concentrations of some pollutants within cities. The main goal of this study was to identify a deconvolution method to characterize variability in the local, neighbourhood, and regional background concentration signals. This method was validated by examining how traffic-related and non-traffic-related sources influenced the different signals. Sampling with a mobile monitoring platform was conducted across the Greater Toronto Area over a seven-day period during summer 2015. This mobile monitoring platform was equipped with instruments for measuring a wide range of pollutants at time resolutions of 1 s (ultrafine particles, black carbon) to 20 s (nitric oxide, nitrogen oxides). The monitored neighbourhoods were selected based on their land use categories (e.g. industrial, commercial, parks, and residential areas). The high time-resolution data allowed pollutant concentrations to be separated into signals representing background and local concentrations. The background signals were determined using a spline of minimums; local signals were derived by subtracting the background concentration from the total concentration. Our study showed that temporal scales of 500 s and 2400 s were associated with the neighbourhood and regional background signals, respectively. The percent contribution of the pollutant concentration attributed to local signals was highest for nitric oxide (NO) (37-95%) and lowest for ultrafine particles (9-58%); the ultrafine particles were predominantly regional (32-87%) in origin on these days. Local concentrations showed stronger associations than total concentrations with traffic intensity in a 100 m buffer (ρ: 0.21-0.44). The neighbourhood-scale signal also showed stronger associations with industrial facilities than the total concentrations.
That the signals show stronger associations with different land uses suggests that resolving the ambient concentrations differentiates which emission sources drive the variability in each signal. The benefit of this deconvolution method is that it may reduce exposure misclassification when coupled with predictive models.
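The background/local decomposition can be sketched with a rolling minimum as a simple stand-in for the paper's spline of minimums: the background at each sample is the minimum concentration in a surrounding window, and the local signal is what remains after subtracting it. Window length in samples (e.g. 500 s vs 2400 s at 1 Hz) selects the neighbourhood or regional scale; the implementation below is an illustrative simplification, not the authors' exact procedure.

```python
import numpy as np

def split_signal(conc, window):
    """Separate a 1 Hz concentration time series into background and
    local components via a centered rolling minimum.

    A rolling minimum is an assumed stand-in for the published
    'spline of minimums'; `window` is in samples."""
    n = len(conc)
    half = window // 2
    background = np.array([conc[max(0, i - half):i + half + 1].min()
                           for i in range(n)])
    local = conc - background          # local = total - background
    return background, local
```

Short plumes from nearby traffic survive in `local`, while the slowly varying floor ends up in `background`, mirroring the study's separation of scales.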
Video Analytics Evaluation: Survey of Datasets, Performance Metrics and Approaches
2014-09-01
training phase and a fusion of the detector outputs. 6.3.1 Training Techniques 1. Bagging: The basic idea of Bagging is to train multiple classifiers...can reduce more noise interesting points. Person detection and background subtraction methods were used to create hot regions. The hot regions were...detection algorithms are incorporated with MHT to construct one integrated detector/tracker. 6.8 IRDS-CASIA team IRDS-CASIA proposed a method to solve a
Resummation of jet veto logarithms at N 3 LL a + NNLO for W + W ? production at the LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, S.; Jaiswal, P.; Li, Ye
We compute the resummed on-shell W+W- production cross section under a jet veto at the LHC to partial N3LL order matched to the fixed-order NNLO result. Differential NNLO cross sections are obtained from an implementation of qT subtraction in Sherpa. The two-loop virtual corrections to the qq¯→W+W- amplitude, used in both fixed-order and resummation predictions, are extracted from the public code qqvvamp. We perform resummation using soft collinear effective theory, with approximate beam functions where only the logarithmic terms are included at two-loop. In addition to scale uncertainties from the hard matching scale and the factorization scale, rapidity scale variations are obtained within the analytic regulator approach. Our resummation results show a decrease in the jet veto cross section compared to NNLO fixed-order predictions, with reduced scale uncertainties compared to NNLL+NLO resummed predictions. We include the loop-induced gg contribution with jet veto resummation to NLL+LO. The prediction shows good agreement with recent LHC measurements.
Resummation of jet veto logarithms at N 3 LL a + NNLO for W + W ? production at the LHC
Dawson, S.; Jaiswal, P.; Li, Ye; ...
2016-12-01
We compute the resummed on-shell W+W- production cross section under a jet veto at the LHC to partial N3LL order matched to the fixed-order NNLO result. Differential NNLO cross sections are obtained from an implementation of qT subtraction in Sherpa. The two-loop virtual corrections to the qq¯→W+W- amplitude, used in both fixed-order and resummation predictions, are extracted from the public code qqvvamp. We perform resummation using soft collinear effective theory, with approximate beam functions where only the logarithmic terms are included at two-loop. In addition to scale uncertainties from the hard matching scale and the factorization scale, rapidity scale variations are obtained within the analytic regulator approach. Our resummation results show a decrease in the jet veto cross section compared to NNLO fixed-order predictions, with reduced scale uncertainties compared to NNLL+NLO resummed predictions. We include the loop-induced gg contribution with jet veto resummation to NLL+LO. The prediction shows good agreement with recent LHC measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jornet, N; Carrasco de Fez, P; Jordi, O
Purpose: To evaluate the accuracy of total scatter factor (Sc,p) determination for small fields using a commercial plastic scintillator detector (PSD). The manufacturer's spectral discrimination method to subtract Cerenkov light from the signal is discussed. Methods: Sc,p for field sizes ranging from 0.5 to 10 cm were measured using the Exradin PSD (Standard Imaging) connected to a two-channel electrometer that measures the signal in two different spectral regions so that the Cerenkov contribution can be subtracted from the PSD signal. A PinPoint ionisation chamber 31006 (PTW) and a non-shielded semiconductor detector EFD (Scanditronix) were used for comparison. Measurements were performed for a 6 MV X-ray beam. The Sc,p were measured at 10 cm depth in water for an SSD of 100 cm and normalized to a 10×10 cm² field size at the isocenter. All detectors were placed with their symmetry axis parallel to the beam axis. We followed the manufacturer's recommended calibration methodology to subtract the Cerenkov contribution to the signal, as well as a modified method using smaller field sizes. The Sc,p calculated using the two calibration methodologies were compared. Results: Sc,p measured with the semiconductor and the PinPoint detectors agree within 1.5% for field sizes between 10×10 and 1×1 cm². Sc,p measured with the PSD using the manufacturer's calibration methodology were systematically 4% higher than those measured with the semiconductor detector for field sizes smaller than 5×5 cm². By using a modified calibration methodology for small fields, and keeping the manufacturer's calibration methodology for fields larger than 5×5 cm², Sc,p matched the semiconductor results within 2% for field sizes larger than 1.5 cm. Conclusion: The calibration methodology proposed by the manufacturer is not appropriate for dose measurements in small fields. The calibration parameters are not independent of the incident radiation spectrum for this PSD.
This work was partially financed by a 2012 grant from the Barcelona board of the AECC.
Artifacts and power corrections: Reexamining Z{sub {psi}}(p{sup 2}) and Z{sub V} in the momentum-subtraction scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boucaud, Ph.; Leroy, J. P.; Le Yaouanc, A.
2006-08-01
The next-to-leading-order (NLO) term in the operator product expansion (OPE) of the quark propagator vector part Z{sub {psi}} and the vertex function g{sub 1} of the vector current in the Landau gauge should be dominated by the same condensate as in the gluon propagator. On the other hand, the perturbative part has been calculated to very high precision thanks to Chetyrkin and collaborators. We test this on the lattice, with both clover and overlap fermion actions at {beta}=6.0, 6.4, 6.6, 6.8. Elucidation of discretization artifacts appears to be absolutely crucial. First, hypercubic artifacts are eliminated by a powerful method, which gives results notably different from the standard democratic method. Then, the presence of unexpected, very large, nonperturbative, O(4)-symmetric discretization artifacts, increasing towards small momenta, is demonstrated by considering Z{sub V}{sup MOM}, which should be constant in the absence of such artifacts. They impede in general the analysis of the OPE. However, in two special cases with the overlap action--(1) for Z{sub {psi}}; (2) for g{sub 1}, but only at large p{sup 2}--we are able to identify the condensate; it agrees with the one resulting from gluonic Green functions. We conclude that the OPE analysis of quark and gluon Green functions has reached a quite consistent status, and that the power corrections have been correctly identified. A practical consequence of the whole analysis is that the renormalization constant Z{sub {psi}} (=Z{sub 2}{sup -1} of the momentum-subtraction (MOM) scheme) may differ sizably from the one given by democratic selection methods. More generally, the values of the renormalization constants may be seriously affected by differences in the treatment of the various types of artifacts, and by the subtraction of power corrections.
Three-photon N00N states generated by photon subtraction from double photon pairs.
Kim, Heonoh; Park, Hee Su; Choi, Sang-Kyung
2009-10-26
We describe an experimental demonstration of a novel three-photon N00N state generation scheme using a single source of photons based on spontaneous parametric down-conversion (SPDC). The three-photon entangled state is generated when a photon is subtracted from a double pair of photons and detected by a heralding counter. Interference fringes measured with an emulated three-photon detector reveal the three-photon de Broglie wavelength and exhibit visibility > 70% without background subtraction.
Color Addition and Subtraction Apps
NASA Astrophysics Data System (ADS)
Ruiz, Frances; Ruiz, Michael J.
2015-10-01
Color addition and subtraction apps in HTML5 have been developed for students as an online hands-on experience so that they can more easily master principles introduced through traditional classroom demonstrations. The evolution of the additive RGB color model is traced through the early IBM color adapters so that students can proceed step by step in understanding mathematical representations of RGB color. Finally, color addition and subtraction are presented for the X11 colors from web design to illustrate yet another real-life application of color mixing.
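A minimal sketch of the two mixing models the apps illustrate, assuming 8-bit RGB channels: additive mixing sums light channel-wise, while subtractive mixing multiplies per-channel transmittances as stacked filters do. The function names are ours, not the apps'.

```python
# Additive mixing: superposed lights sum channel-wise, clipped to 255.
def add_colors(c1, c2):
    return tuple(min(255, a + b) for a, b in zip(c1, c2))

# Subtractive mixing: stacked filters multiply per-channel transmittance.
def subtract_colors(c1, c2):
    return tuple(a * b // 255 for a, b in zip(c1, c2))

red, green = (255, 0, 0), (0, 255, 0)
print(add_colors(red, green))                         # (255, 255, 0) = yellow
print(subtract_colors((0, 255, 255), (255, 255, 0)))  # (0, 255, 0) = green
```

Red and green lights add to yellow, while cyan and yellow filters pass only green, which is the core contrast the apps demonstrate.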
Global Energetics of Thirty-Eight Large Solar Eruptive Events
2012-10-17
is the energy radiated in the narrow GOES band from 1 to 8 Å, obtained directly from background -subtracted data (Section 2.1). The second is the energy... background -subtracted fluxes over the duration of the flare, from the GOES start time (given by NOAA and listed in Table 1) to the time when the flux...had decreased to 10% of the peak value. The background that was subtracted was taken as the lowest flux in the hour or so before and/or after the flare
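The integration procedure in the fragment above (subtract the lowest nearby flux as background, integrate from the start until the flux decays to 10% of its peak) can be sketched as follows; the time grid and Gaussian flux profile are synthetic stand-ins for GOES 1-8 Å data.

```python
import numpy as np

def flare_radiated_energy(t, flux):
    """Integrate the background-subtracted flux from the start of the
    series until the flux has decayed to 10% of its net peak value
    (trapezoid rule; returns a band fluence in J m^-2 for W m^-2 input)."""
    background = flux.min()               # lowest flux before/after the flare
    net = flux - background
    peak = net.argmax()
    after = np.nonzero(net[peak:] <= 0.1 * net[peak])[0]
    end = peak + (after[0] if after.size else len(net) - 1 - peak)
    dt = np.diff(t[:end + 1])
    return np.sum(0.5 * (net[:end] + net[1:end + 1]) * dt)

t = np.arange(0.0, 600.0, 10.0)                            # seconds
flux = 1e-6 + 5e-5 * np.exp(-0.5 * ((t - 200) / 60) ** 2)  # W m^-2
E = flare_radiated_energy(t, flux)
```

Converting this fluence to a total radiated energy would further require the Sun-Earth distance factor, which the sketch omits.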
NASA Astrophysics Data System (ADS)
Shirasaki, Masato; Takada, Masahiro; Miyatake, Hironao; Takahashi, Ryuichi; Hamana, Takashi; Nishimichi, Takahiro; Murata, Ryoma
2017-09-01
We develop a method to simulate galaxy-galaxy weak lensing by utilizing all-sky, light-cone simulations and their inherent halo catalogues. Using the mock catalogue to study the error covariance matrix of galaxy-galaxy weak lensing, we compare the full covariance with the 'jackknife' (JK) covariance, the method often used in the literature that estimates the covariance from resamples of the data itself. We show that the JK covariance varies over realizations of the mock lensing measurements, while the average JK covariance over mocks gives a reasonably accurate estimate of the true covariance up to separations comparable with the size of a JK subregion. The scatter in JK covariances is found to be ∼10 per cent after we subtract the lensing measurement around random points. However, the JK method tends to underestimate the covariance at larger separations, increasingly so for a survey with a higher number density of source galaxies. We apply our method to the Sloan Digital Sky Survey (SDSS) data, and show that the 48 mock SDSS catalogues nicely reproduce the signals and the JK covariance measured from the real data. We then argue that the use of the accurate covariance, compared to the JK covariance, allows us to use the lensing signals at large scales beyond the size of a JK subregion, which contain cleaner cosmological information in the linear regime.
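The delete-one jackknife covariance the paper compares against can be sketched as below, for N subregion measurements of a signal binned into d separations; the data here are synthetic.

```python
import numpy as np

def jackknife_covariance(samples):
    """Delete-one jackknife covariance of the mean for an (N, d) array of
    per-subregion measurements; the (N-1)/N prefactor accounts for the
    strong correlation between the leave-one-out means."""
    samples = np.asarray(samples, dtype=float)
    n = samples.shape[0]
    loo = (samples.sum(axis=0) - samples) / (n - 1)   # leave-one-out means
    dev = loo - loo.mean(axis=0)
    return (n - 1) / n * dev.T @ dev

rng = np.random.default_rng(0)
signal = rng.normal(size=(48, 3))    # e.g. 48 JK subregions, 3 radial bins
cov = jackknife_covariance(signal)
```

The paper's point is that this estimator is only trustworthy up to separations comparable to the subregion size, which is exactly where the `loo` resamples stop being quasi-independent.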
NASA Astrophysics Data System (ADS)
Belfort, Benjamin; Weill, Sylvain; Lehmann, François
2017-04-01
A novel, non-invasive imaging technique that determines 2D maps of water content in unsaturated porous media is presented. This method directly relates digitally measured intensities to the water content of the porous medium. It requires the classical image analysis steps, i.e., normalization, filtering, background subtraction, scaling and calibration. The main advantages of this approach are that no calibration experiment is needed and that no tracer or dye is injected into the flow tank. The procedure enables effective processing of a large number of photographs and thus produces 2D water content maps at high temporal resolution. A drainage/imbibition experiment in a 2D flow tank with inner dimensions of 40 cm x 14 cm x 6 cm (L x W x D) is carried out to validate the methodology. The accuracy of the proposed approach is assessed using numerical simulations with a state-of-the-art computational code that solves the Richards equation. Comparison of the cumulative mass leaving and entering the flow tank and of the water content maps produced by the photographic measurement technique and the numerical simulations demonstrates the efficiency and high accuracy of the proposed method for investigating vadose zone flow processes. Application examples in a larger flow tank with various boundary conditions are finally presented to illustrate the potential of the methodology.
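The chain of steps named above (normalization/filtering, background subtraction, scaling) can be sketched on a toy image. The single dry-state reference, the 3x3 mean filter, the darker-means-wetter convention and the linear intensity-to-water-content scaling are all assumptions of this sketch, not the paper's calibrated relation.

```python
import numpy as np

def water_content_map(image, dry_ref, theta_dry, theta_sat):
    """Toy processing chain: 3x3 mean filter, subtraction of a dry-state
    reference image (background subtraction), then linear scaling between
    assumed dry and saturated water contents (wetter pixels are darker)."""
    img = image.astype(float)
    padded = np.pad(img, 1, mode='edge')
    filt = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    diff = dry_ref.astype(float) - filt          # background subtraction
    norm = np.clip(diff / max(dry_ref.max(), 1e-9), 0.0, 1.0)
    return theta_dry + norm * (theta_sat - theta_dry)

dry = np.full((4, 4), 200.0)        # bright, dry reference frame
img = dry.copy()
img[1:3, 1:3] = 120.0               # darker (wetter) central patch
theta = water_content_map(img, dry, theta_dry=0.05, theta_sat=0.40)
```

Pixels darker than the dry reference map to higher water content, which is the qualitative behaviour the photographic method exploits.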
NASA Astrophysics Data System (ADS)
Chen, Biao; Jing, Zhenxue; Smith, Andrew P.; Parikh, Samir; Parisky, Yuri
2006-03-01
Dual-energy contrast enhanced digital mammography (DE-CEDM), which is based upon the digital subtraction of low/high-energy image pairs acquired before/after the administration of contrast agents, may provide physicians with physiologic and morphologic information on breast lesions and help characterize their probability of malignancy. This paper proposes to use only one pair of post-contrast low/high-energy images to obtain digitally subtracted dual-energy contrast-enhanced images, with an optimal weighting factor deduced from simulated characteristics of the imaging chain. Based upon our previous CEDM framework, quantitative characteristics of the materials and imaging components in the x-ray imaging chain, including the x-ray tube (tungsten) spectrum, filters, breast tissues/lesions, contrast agents (non-ionized iodine solution), and selenium detector, were systematically modeled. Using the base-material (polyethylene-PMMA) decomposition method based on entrance low/high-energy x-ray spectra and breast thickness, the optimal weighting factor was calculated to cancel the contrast between fatty and glandular tissues while enhancing the contrast of iodized lesions. By contrast, previous work determined the optimal weighting factor through either a calibration step or the acquisition of a pre-contrast low/high-energy image pair. Computer simulations were conducted to determine weighting factors, lesions' contrast signal values, and dose levels as functions of x-ray techniques and breast thicknesses. Phantom and clinical feasibility studies were performed on a modified Selenia full field digital mammography system to verify the proposed method and the computer-simulated results. The resultant conclusions from the computer simulations and phantom/clinical feasibility studies will be used in the upcoming clinical study.
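The role of the weighting factor can be sketched with a monoenergetic toy model: choose w so that the fat/gland difference cancels in the weighted log-subtraction. The attenuation coefficients below are placeholder numbers, not the paper's simulated spectra.

```python
import numpy as np

# Illustrative linear attenuation coefficients (cm^-1) at the effective
# low/high energies -- placeholder values, not the paper's.
mu_gland = {'low': 0.80, 'high': 0.50}
mu_fat = {'low': 0.60, 'high': 0.40}

# Cancel fat/gland contrast in the log-subtracted image:
# (mu_gland - mu_fat)_high = w * (mu_gland - mu_fat)_low
w = (mu_gland['high'] - mu_fat['high']) / (mu_gland['low'] - mu_fat['low'])

def dual_energy_subtraction(I_low, I_high, w):
    """Weighted log-subtraction of a post-contrast low/high-energy pair."""
    return np.log(I_high) - w * np.log(I_low)

# A 4 cm path of pure gland and one of pure fat give the same DE value,
# so only iodine (whose mu ratio differs) survives the subtraction.
t = 4.0
de_gland = dual_energy_subtraction(np.exp(-mu_gland['low'] * t),
                                   np.exp(-mu_gland['high'] * t), w)
de_fat = dual_energy_subtraction(np.exp(-mu_fat['low'] * t),
                                 np.exp(-mu_fat['high'] * t), w)
```

In the paper, w is derived from the full polychromatic imaging-chain model rather than from two fixed coefficients, but the cancellation condition is the same idea.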
NASA Astrophysics Data System (ADS)
Lee, Youngjin; Lee, Amy Candy; Kim, Hee-Joung
2016-09-01
Recently, significant effort has been devoted to the development of photon counting detectors (PCDs) based on CdTe for applications in X-ray imaging systems. The motivation for developing PCDs is higher image quality. In particular, the K-edge subtraction (KES) imaging technique using a PCD can improve image quality and is useful for increasing the contrast resolution of a target material by utilizing a contrast agent. Based on this technique, we present an idea for an improved K-edge log-subtraction (KELS) imaging technique. The KELS imaging technique based on PCDs can be realized by using different subtraction energy widths of the energy window. In this study, the effects of the KELS imaging technique and of the subtraction energy width of the energy window were investigated with respect to the contrast, standard deviation, and contrast-to-noise ratio (CNR) using a Monte Carlo simulation. We simulated a PCD X-ray imaging system based on CdTe and a polymethylmethacrylate (PMMA) phantom containing various iodine contrast agents. To acquire KELS images, images of the phantom were acquired in energy ranges above and below the K-edge absorption energy of the iodine contrast agent (33.2 keV). According to the results, the contrast and standard deviation decreased as the subtraction energy width of the energy window increased. Also, the CNR of the KELS imaging technique is higher than that of images acquired using the whole energy range. In particular, the maximum differences in CNR between the whole-energy-range and KELS images using 1, 2, and 3 mm diameter iodine contrast agents were 11.33, 8.73, and 8.29 times, respectively. Additionally, the optimum subtraction energy widths of the energy window were 5, 4, and 3 keV for the 1, 2, and 3 mm diameter iodine contrast agents, respectively.
In conclusion, we successfully established an improved KELS imaging technique and optimized subtraction energy width of the energy window, and based on our results, we recommend using this technique for high image quality.
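The log-subtraction and the CNR figure of merit can be sketched on synthetic transmitted intensities; the geometry, noise level and attenuation values below are ours, chosen only so the iodine region stands out.

```python
import numpy as np

def k_edge_log_subtraction(I_below, I_above):
    """Difference of log images acquired just below/above the iodine
    K-edge (33.2 keV); iodine absorbs much more strongly above the edge,
    so iodinated regions dominate the subtracted image."""
    return np.log(I_above) - np.log(I_below)

def cnr(image, signal_mask, background_mask):
    """Contrast-to-noise ratio of a target region against background."""
    contrast = abs(image[signal_mask].mean() - image[background_mask].mean())
    return contrast / image[background_mask].std()

rng = np.random.default_rng(1)
below = 0.80 * rng.normal(1.0, 0.01, (8, 8))   # transmitted fractions
above = 0.78 * rng.normal(1.0, 0.01, (8, 8))
above[3:5, 3:5] *= 0.4                         # iodine target region
sub = k_edge_log_subtraction(below, above)
target = np.zeros((8, 8), bool)
target[3:5, 3:5] = True
```

Narrowing the energy windows around the edge (the paper's "subtraction energy width") trades photon statistics (noise) against tissue cancellation, which is the optimum the study maps out.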
NASA Astrophysics Data System (ADS)
Malinauskas, Mangirdas; Lukoševičius, Laurynas; Mackevičiūtė, Dovilė; Balčiūnas, Evaldas; Rekštytė, Sima; Paipulas, Domas
2014-05-01
A novel approach for the efficient manufacturing of three-dimensional (3D) microstructured scaffolds designed for cell studies and tissue engineering applications is presented. A thermal extrusion (fused filament fabrication) 3D printer is employed as a simple and low-cost tabletop device enabling rapid materialization of CAD models out of biocompatible and biodegradable polylactic acid (PLA). Here it was used to produce cm-scale microporous (pore size varying from 100 to 400 µm) scaffolds. The fabricated objects were further laser processed by direct laser writing (DLW) in both subtractive (ablation) and additive (lithography) manners. The first approach enables precise surface modification by creating micro-craters, holes and grooves, thus increasing the surface roughness. An alternative way is to immerse the 3D PLA scaffold in a monomer solution and use the same DLW setup to refine its inner structure by fabricating dots, lines or a fine mesh on top of, as well as inside, the pores of previously produced scaffolds. The DLW technique is empowered by ultrafast lasers; it allows 3D structuring with high spatial resolution in a great variety of photosensitive materials. Structure geometry on macro- to micro-scales can be finely tuned by combining these two fabrication techniques. Such artificial 3D substrates could be used for cell growth or as biocompatible-biodegradable implants. This combination of distinct material processing techniques enables rapid fabrication of diverse functional micro-featured and integrated devices. Hopefully, the proposed approach will find numerous applications in the fields of ms, microfluidics, microoptics and many others.
Quantum implications of a scale invariant regularization
NASA Astrophysics Data System (ADS)
Ghilencea, D. M.
2018-04-01
We study scale invariance at the quantum level in a perturbative approach. For a scale-invariant classical theory, the scalar potential is computed at three-loop level while keeping this symmetry manifest. Spontaneous scale symmetry breaking is transmitted at the quantum level to the visible sector (of ϕ) by the associated Goldstone mode (the dilaton σ), which enables a scale-invariant regularization and whose vacuum expectation value ⟨σ⟩ generates the subtraction scale (μ). While the hidden (σ) and visible (ϕ) sectors are classically decoupled in d = 4 due to an enhanced Poincaré symmetry, they interact through (a series of) evanescent couplings ∝ ε, dictated by the scale invariance of the action in d = 4 − 2ε. At the quantum level, these couplings generate new corrections to the potential, as scale-invariant nonpolynomial effective operators ϕ^(2n+4)/σ^(2n). These are comparable in size to "standard" loop corrections and are important for values of ϕ close to ⟨σ⟩. For n = 1, 2, the beta functions of their coefficients are computed at three loops. In the IR limit, dilaton fluctuations decouple, the effective operators are suppressed by large ⟨σ⟩, and the effective potential becomes that of a renormalizable theory with explicit scale symmetry breaking by the DR scheme (of μ = constant).
Kong, Gang; Dai, Dao-Qing; Zou, Lu-Min
2008-07-01
To remove the artifacts of peripheral digital subtraction angiography (DSA), an affine transformation-based automatic image registration algorithm is introduced here. The whole process is as follows: First, rectangular feature templates centered on the extracted Harris corners are constructed in the mask, and the motion vectors of the central feature points are estimated using template matching with maximum histogram energy as the similarity measure. Then the optimal parameters of the affine transformation are calculated with the matrix singular value decomposition (SVD) method. Finally, bilinear intensity interpolation is applied to the mask according to the resulting affine transformation. More than 30 peripheral DSA registrations were performed with the presented algorithm; as a result, motion artifacts were removed with sub-pixel precision, and the time consumption is low enough to satisfy clinical requirements. Experimental results show the efficiency and robustness of the algorithm.
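The affine-parameter step can be sketched as a least-squares fit from matched feature points (NumPy's `lstsq` solves it via SVD internally). Corner extraction and histogram-energy template matching are omitted here, and the point sets are synthetic.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src -> dst point sets
    (both N x 2 arrays): solve [x y 1] @ M = [x' y'] for the 3 x 2
    parameter matrix M."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def apply_affine(M, pts):
    """Transform an N x 2 point set with a fitted 3 x 2 affine matrix."""
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ M

# Synthetic correspondences: four corners mapped by a known affine.
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
true_M = np.array([[1.02, 0.01], [-0.02, 0.99], [0.5, -0.3]])
dst = apply_affine(true_M, src)
M = fit_affine(src, dst)
```

With noisy motion vectors the same `lstsq` call gives the best-fit parameters, after which the mask would be resampled by bilinear interpolation as in the paper.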
A method to characterise site, urban and regional ambient background radiation.
Passmore, C; Kirr, M
2011-03-01
Control dosemeters are routinely provided to customers to monitor the background radiation so that it can be subtracted from the gross response of the dosemeter to arrive at the occupational dose. Landauer, the largest dosimetry processor in the world with subsidiaries in Australia, Brazil, China, France, Japan, Mexico and the UK, has clients in approximately 130 countries. The Glenwood facility processes over 1.1 million controls per year. This network of clients around the world provides a unique ability to monitor the world's ambient background radiation. Control data can be mined to provide useful historical information regarding ambient background rates and provide a historical baseline for geographical areas. Historical baseline can be used to provide site or region-specific background subtraction values, document the variation in ambient background radiation around a client's site or provide a baseline for measuring the efficiency of clean-up efforts in urban areas after a dirty bomb detonation.
An Evaluation of Pixel-Based Methods for the Detection of Floating Objects on the Sea Surface
NASA Astrophysics Data System (ADS)
Borghgraef, Alexander; Barnich, Olivier; Lapierre, Fabian; Van Droogenbroeck, Marc; Philips, Wilfried; Acheroy, Marc
2010-12-01
Ship-based automatic detection of small floating objects on an agitated sea surface remains a hard problem. Our main concern is the detection of floating mines, which proved a real threat to shipping in confined waterways during the first Gulf War, but applications include salvaging, search-and-rescue operations, and perimeter or harbour defense. Detection in infrared (IR) is challenging because a rough sea is seen as a dynamic background of moving objects with size, shape, and temperature similar to those of a floating mine. In this paper we apply a selection of background subtraction algorithms to the problem, and we show that recent algorithms such as ViBe and behaviour subtraction, which take into account spatial and temporal correlations within the dynamic scene, significantly outperform the more conventional parametric techniques, with only few prior assumptions about the physical properties of the scene.
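For contrast with ViBe and behaviour subtraction, a minimal example of the kind of conventional parametric baseline they are compared against is a per-pixel exponential moving average with a fixed threshold; the parameters below are illustrative, and ViBe itself maintains per-pixel sample sets rather than a single mean.

```python
import numpy as np

class RunningAverageBackground:
    """Per-pixel exponential moving average background model with a fixed
    foreground threshold -- a conventional parametric baseline, not ViBe."""
    def __init__(self, first_frame, alpha=0.05, threshold=0.15):
        self.bg = first_frame.astype(float)
        self.alpha = alpha          # background adaptation rate
        self.threshold = threshold  # foreground decision level

    def apply(self, frame):
        frame = frame.astype(float)
        mask = np.abs(frame - self.bg) > self.threshold   # foreground mask
        self.bg = (1 - self.alpha) * self.bg + self.alpha * frame
        return mask

model = RunningAverageBackground(np.full((5, 5), 0.5))
frame = np.full((5, 5), 0.5)
frame[2, 2] = 0.9                   # a small bright object appears
mask = model.apply(frame)
```

On a rough sea this model flags waves as foreground because it ignores the spatial and temporal correlations that ViBe and behaviour subtraction exploit, which is the paper's central finding.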
NASA Astrophysics Data System (ADS)
Boughezal, Radja; Isgrò, Andrea; Petriello, Frank
2018-04-01
We present a detailed derivation of the power corrections to the factorization theorem for the 0-jettiness event shape variable T . Our calculation is performed directly in QCD without using the formalism of effective field theory. We analytically calculate the next-to-leading logarithmic power corrections for small T at next-to-leading order in the strong coupling constant, extending previous computations which obtained only the leading-logarithmic power corrections. We address a discrepancy in the literature between results for the leading-logarithmic power corrections to a particular definition of 0-jettiness. We present a numerical study of the power corrections in the context of their application to the N -jettiness subtraction method for higher-order calculations, using gluon-fusion Higgs production as an example. The inclusion of the next-to-leading-logarithmic power corrections further improves the numerical efficiency of the approach beyond the improvement obtained from the leading-logarithmic power corrections.
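For orientation, the N-jettiness subtraction method the numerical study targets splits the cross section at a cut on the 0-jettiness variable; schematically (our notation, not taken verbatim from the paper):

```latex
\sigma \;=\; \int_{0}^{\mathcal{T}^{\mathrm{cut}}} d\mathcal{T}\,
             \frac{d\sigma}{d\mathcal{T}}
       \;+\; \int_{\mathcal{T}^{\mathrm{cut}}}^{\infty} d\mathcal{T}\,
             \frac{d\sigma}{d\mathcal{T}} .
```

The below-cut piece is evaluated from the factorization theorem and is accurate up to power corrections suppressed by T^cut/Q times logarithms of T^cut/Q; computing the next-to-leading-logarithmic tower of these corrections, as the paper does, weakens the dependence of the combined result on the unphysical cut.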
Validation of two ribosomal RNA removal methods for microbial metatranscriptomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Shaomei; Wurtzel, Omri; Singh, Kanwar
2010-10-01
The predominance of rRNAs in the transcriptome is a major technical challenge in sequence-based analysis of cDNAs from microbial isolates and communities. Several approaches have been applied to deplete rRNAs from (meta)transcriptomes, but no systematic investigation of potential biases introduced by any of these approaches has been reported. Here we validated the effectiveness and fidelity of the two most commonly used approaches, subtractive hybridization and exonuclease digestion, as well as combinations of these treatments, on two synthetic five-microorganism metatranscriptomes using massively parallel sequencing. We found that the effectiveness of rRNA removal was a function of community composition and RNA integrity for these treatments. Subtractive hybridization alone introduced the least bias in relative transcript abundance, whereas exonuclease and in particular combined treatments greatly compromised mRNA abundance fidelity. Illumina sequencing itself can also compromise quantitative data analysis by introducing a G+C bias between runs.
Detection of Lettuce Discoloration Using Hyperspectral Reflectance Imaging
Mo, Changyeun; Kim, Giyoung; Lim, Jongguk; Kim, Moon S.; Cho, Hyunjeong; Cho, Byoung-Kwan
2015-01-01
Rapid visible/near-infrared (VNIR) hyperspectral imaging methods, employing both a single waveband algorithm and multi-spectral algorithms, were developed in order to discriminate between sound and discolored lettuce. Reflectance spectra for sound and discolored lettuce surfaces were extracted from hyperspectral reflectance images obtained in the 400–1000 nm wavelength range. The optimal wavebands for discriminating between discolored and sound lettuce surfaces were determined using one-way analysis of variance. Multi-spectral imaging algorithms developed using ratio and subtraction functions resulted in enhanced classification accuracy of above 99.9% for discolored and sound areas on both adaxial and abaxial lettuce surfaces. Ratio imaging (RI) and subtraction imaging (SI) algorithms at wavelengths of 552/701 nm and 557–701 nm, respectively, exhibited better classification performances compared to results obtained for all possible two-waveband combinations. These results suggest that hyperspectral reflectance imaging techniques can potentially be used to discriminate between discolored and sound fresh-cut lettuce. PMID:26610510
Detection of Lettuce Discoloration Using Hyperspectral Reflectance Imaging.
Mo, Changyeun; Kim, Giyoung; Lim, Jongguk; Kim, Moon S; Cho, Hyunjeong; Cho, Byoung-Kwan
2015-11-20
Rapid visible/near-infrared (VNIR) hyperspectral imaging methods, employing both a single waveband algorithm and multi-spectral algorithms, were developed in order to discriminate between sound and discolored lettuce. Reflectance spectra for sound and discolored lettuce surfaces were extracted from hyperspectral reflectance images obtained in the 400-1000 nm wavelength range. The optimal wavebands for discriminating between discolored and sound lettuce surfaces were determined using one-way analysis of variance. Multi-spectral imaging algorithms developed using ratio and subtraction functions resulted in enhanced classification accuracy of above 99.9% for discolored and sound areas on both adaxial and abaxial lettuce surfaces. Ratio imaging (RI) and subtraction imaging (SI) algorithms at wavelengths of 552/701 nm and 557-701 nm, respectively, exhibited better classification performances compared to results obtained for all possible two-waveband combinations. These results suggest that hyperspectral reflectance imaging techniques can potentially be used to discriminate between discolored and sound fresh-cut lettuce.
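The two-band classifiers can be sketched as below. The threshold values and the convention that discolored pixels score higher on both images are placeholders of this sketch; the paper derives its cutoffs from ANOVA-selected wavebands.

```python
import numpy as np

def classify_discolored(R552, R557, R701, ri_thresh=1.2, si_thresh=0.08):
    """Flag pixels as discolored using the paper's two band images:
    ratio image RI = R552/R701 and subtraction image SI = R557 - R701.
    Thresholds and the 'greater means discolored' sign convention are
    assumptions of this sketch."""
    ri = R552 / np.maximum(R701, 1e-9)   # ratio image
    si = R557 - R701                     # subtraction image
    return (ri > ri_thresh) & (si > si_thresh)

# Reflectance bands for a two-pixel scene: [discolored-like, sound-like]
R552 = np.array([0.30, 0.20])
R557 = np.array([0.30, 0.20])
R701 = np.array([0.15, 0.30])
mask = classify_discolored(R552, R557, R701)
```

In practice each pixel of the hyperspectral cube passes through the same two-band arithmetic, so the classifier runs as fast as simple band math.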
Evolution of digital angiography systems.
Brigida, Raffaela; Misciasci, Teresa; Martarelli, Fabiola; Gangitano, Guido; Ottaviani, Pierfrancesco; Rollo, Massimo; Marano, Pasquale
2003-01-01
The innovations introduced by digital subtraction angiography in digital radiography are briefly illustrated with the description of its components and functioning. The pros and cons of digital subtraction angiography are analyzed in light of present and future imaging technologies. In particular, among advantages there are: automatic exposure, digital image subtraction, digital post-processing, high number of images per second, possible changes in density and contrast. Among disadvantages there are: small round field of view, geometric distortion at the image periphery, high sensitivity to patient movements, not very high spatial resolution. At present, flat panel detectors represent the most suitable substitutes for digital subtraction angiography, with the introduction of novel solutions for those artifacts which for years have hindered its diagnostic validity. The concept of temporal artifact, reset light and possible future evolutions of this technology that may afford both diagnostic and protectionist advantages, are analyzed.
Evaluating attention in delirium: A comparison of bedside tests of attention.
Adamis, Dimitrios; Meagher, David; Murray, Orla; O'Neill, Donagh; O'Mahony, Edmond; Mulligan, Owen; McCarthy, Geraldine
2016-09-01
Impaired attention is a core diagnostic feature of delirium. The present study examined the discriminating properties, for patients with delirium versus those with dementia and/or no neurocognitive disorder, of four objective tests of attention: digit span, the vigilance "A" test, serial 7s subtraction and months of the year backwards, together with a global clinical subjective rating of attention. This was a prospective study of older patients admitted consecutively to a general hospital. Participants were assessed using the Confusion Assessment Method, the Delirium Rating Scale-98 Revised and the Montreal Cognitive Assessment scales, and months of the year backwards. Pre-existing dementia was diagnosed according to the Diagnostic and Statistical Manual of Mental Disorders fourth edition criteria. The sample consisted of 200 participants (mean age 81.1 ± 6.5 years; 50% women; pre-existing cognitive impairment in 126 [63%]). A total of 34 (17%) were identified with delirium (Confusion Assessment Method positive). The five approaches to assessing attention had statistically significant correlations (P < 0.05). Discriminant analysis showed that the clinical subjective rating of attention in conjunction with months of the year backwards had the best discriminatory ability to identify Confusion Assessment Method-defined delirium, and to discriminate patients with delirium from those with dementia and/or normal cognition. Both of these approaches had high sensitivity but modest specificity. Objective tests are useful for prediction of non-delirium, but lack specificity for a delirium diagnosis. Global attentional deficits were more indicative of delirium than deficits in specific domains of attention. Geriatr Gerontol Int 2016; 16: 1028-1035. © 2015 The Authors. Geriatrics & Gerontology International published by Wiley Publishing Asia Pty Ltd on behalf of the Japanese Geriatrics Society.
On the Difference Between Additive and Subtractive QM/MM Calculations
Cao, Lili; Ryde, Ulf
2018-01-01
The combined quantum mechanical (QM) and molecular mechanical (MM) approach (QM/MM) is a popular method to study reactions in biochemical macromolecules. Even if the general procedure of using QM for a small, but interesting part of the system and MM for the rest is common to all approaches, the details of the implementations vary extensively, especially the treatment of the interface between the two systems. For example, QM/MM can use either additive or subtractive schemes, of which the former is often said to be preferable, although the two schemes are often mixed up with mechanical and electrostatic embedding. In this article, we clarify the similarities and differences of the two approaches. We show that inherently, the two approaches should be identical and in practice require the same sets of parameters. However, the subtractive scheme provides an opportunity to correct errors introduced by the truncation of the QM system, i.e., the link atoms, but such corrections require additional MM parameters for the QM system. We describe and test three types of link-atom correction, viz. for van der Waals, electrostatic, and bonded interactions. The calculations show that electrostatic and bonded link-atom corrections often give rise to problems in the geometries and energies. The van der Waals link-atom corrections are quite small and give results similar to a pure additive QM/MM scheme. Therefore, both approaches can be recommended. PMID:29666794
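In symbols, with S the full system, Q the QM region and L the link atoms (standard notation, not taken verbatim from the article), the two schemes read:

```latex
E^{\mathrm{add}}_{\mathrm{QM/MM}} = E_{\mathrm{QM}}(Q+L) + E_{\mathrm{MM}}(S \setminus Q)
                                  + E^{\mathrm{coupling}}_{\mathrm{QM-MM}}(Q,\, S \setminus Q),
\qquad
E^{\mathrm{sub}}_{\mathrm{QM/MM}} = E_{\mathrm{MM}}(S) + E_{\mathrm{QM}}(Q+L) - E_{\mathrm{MM}}(Q+L).
```

The two coincide when the MM description of Q+L reproduces the coupling terms of the additive form, which is why the subtractive scheme needs MM parameters for the QM region and, as the article discusses, offers a natural place to attach link-atom corrections.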
Fast Image Subtraction Using Multi-cores and GPUs
NASA Astrophysics Data System (ADS)
Hartung, Steven; Shukla, H.
2013-01-01
Many important image processing techniques in astronomy require a massive number of computations per pixel. Among them is an image differencing technique known as Optimal Image Subtraction (OIS), which is very useful for detecting and characterizing transient phenomena. Like many image processing routines, OIS computations increase proportionally with the number of pixels being processed, and the number of pixels in need of processing is increasing rapidly. Utilizing many-core graphics processing unit (GPU) technology in hybrid conjunction with multi-core CPU and computer clustering technologies, this work presents a new astronomy image processing pipeline architecture. The chosen OIS implementation focuses on the 2nd order spatially-varying kernel with the Dirac delta function basis, a powerful image differencing method that has seen limited deployment in part because of the heavy computational burden. This tool can process standard image calibration and OIS differencing in a fashion that is scalable with the increasing data volume. It employs several parallel processing technologies in a hierarchical fashion in order to best utilize each of their strengths. The Linux/Unix-based application can operate on a single computer, or on an MPI-configured cluster, with or without GPU hardware. With GPU hardware available, even low-cost commercial video cards, the OIS convolution and subtraction times for large images can be accelerated by up to three orders of magnitude.
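The convolve-and-subtract core of image differencing can be sketched in a few lines of numpy/scipy. This is a minimal stand-in, not the authors' pipeline: the fit of the (spatially varying) matching kernel that OIS actually performs is omitted, and the function name is illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def difference_image(science, reference, kernel):
    """Convolve the reference image with a PSF-matching kernel, then
    subtract it from the science image.

    In OIS the kernel is fitted (here it is supplied); anything left in
    the residual that is not noise is a candidate transient.
    """
    matched = fftconvolve(reference, kernel, mode="same")
    return science - matched
```

With an identity (delta) kernel and identical frames, the residual is zero everywhere, as expected.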
NASA Astrophysics Data System (ADS)
Facio, Jorge I.; Betancourth, D.; Cejas Bolecek, N. R.; Jorge, G. A.; Pedrazzini, Pablo; Correa, V. F.; Cornaglia, Pablo S.; Vildosola, V.; García, D. J.
2016-06-01
We analyze theoretically a common experimental process used to obtain the magnetic contribution to the specific heat of a given magnetic material. In the procedure, the specific heat of a non-magnetic analog is measured and used to subtract the non-magnetic contributions, which are generally dominated by the lattice degrees of freedom in a wide range of temperatures. We calculate the lattice contribution to the specific heat for the magnetic compounds GdMIn5 (M=Co, Rh) and for the non-magnetic YMIn5 and LaMIn5 (M=Co, Rh), using density functional theory based methods. We find that the best non-magnetic analog for the subtraction depends on the magnetic material and on the range of temperatures. While the phonon specific heat contribution of YRhIn5 is an excellent approximation to the one of GdCoIn5 in the full temperature range, for GdRhIn5 we find a better agreement with LaCoIn5, in both cases, as a result of an optimum compensation effect between masses and volumes. We present measurements of the specific heat of the compounds GdMIn5 (M=Co, Rh) up to room temperature where it surpasses the value expected from the Dulong-Petit law. We obtain a good agreement between theory and experiment when we include anharmonic effects in the calculations.
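The experimental procedure analyzed above, subtracting a non-magnetic analog's specific heat to isolate the magnetic contribution, amounts to an interpolation and a pointwise subtraction. The sketch below is a schematic illustration under that assumption; the function name and inputs are hypothetical, and the physics of choosing the best analog (the paper's actual subject) is not modeled.

```python
import numpy as np

def magnetic_specific_heat(T, C_magnetic, T_analog, C_analog):
    """Estimate the magnetic contribution to the specific heat.

    Interpolates the non-magnetic analog's specific heat onto the
    measurement temperatures T, then subtracts it from the magnetic
    compound's data: C_mag(T) = C(T) - C_lattice(T).
    """
    lattice = np.interp(T, T_analog, C_analog)
    return C_magnetic - lattice
```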
GUIMARÃES, Maria do Carmo Machado; PASSANEZI, Euloir; SANT’ANA, Adriana Campos Passanezi; GREGHI, Sebastião Luiz Aguiar; TABA JUNIOR, Mario
2010-01-01
Objectives This study assessed the bone density gain and its relationship with the periodontal clinical parameters in a case series of a regenerative therapy procedure. Material and Methods Using a split-mouth study design, 10 pairs of infrabony defects from 15 patients were treated with a pool of bovine bone morphogenetic proteins associated with collagen membrane (test sites) or collagen membrane only (control sites). The periodontal healing was clinically and radiographically monitored for six months. Standardized presurgical and 6-month postoperative radiographs were digitized for digital subtraction analysis, which showed relative bone density gain in both groups of 0.034 ± 0.423 and 0.105 ± 0.423 in the test and control group, respectively (p>0.05). Results As regards the area size of bone density change, the influence of the therapy was detected in 2.5 mm2 in the test group and 2 mm2 in the control group (p>0.05). Additionally, no correlation was observed between the favorable clinical results and the bone density gain measured by digital subtraction radiography (p>0.05). Conclusions The findings of this study suggest that the clinical benefit of the regenerative therapy observed did not come with significant bone density gains. Long-term evaluation may lead to different conclusions. PMID:20835573
On the difference between additive and subtractive QM/MM calculations
NASA Astrophysics Data System (ADS)
Cao, Lili; Ryde, Ulf
2018-04-01
The combined quantum mechanical (QM) and molecular mechanical (MM) approach (QM/MM) is a popular method to study reactions in biochemical macromolecules. Even if the general procedure of using QM for a small, but interesting part of the system and MM for the rest is common to all approaches, the details of the implementations vary extensively, especially the treatment of the interface between the two systems. For example, QM/MM can use either additive or subtractive schemes, of which the former is often said to be preferable, although the two schemes are often mixed up with mechanical and electrostatic embedding. In this article, we clarify the similarities and differences of the two approaches. We show that inherently, the two approaches should be identical and in practice require the same sets of parameters. However, the subtractive scheme provides an opportunity to correct errors introduced by the truncation of the QM system, i.e. the link atoms, but such corrections require additional MM parameters for the QM system. We describe and test three types of link-atom correction, viz. for van der Waals, electrostatic and bonded interactions. The calculations show that electrostatic and bonded link-atom corrections often give rise to problems in the geometries and energies. The van der Waals link-atom corrections are quite small and give results similar to a pure additive QM/MM scheme. Therefore, both approaches can be recommended.
Kreich, Eliane Maria; Chibinski, Ana Cláudia; Coelho, Ulisses; Wambier, Letícia Stadler; Zedebski, Rosário de Arruda Moura; de Moraes, Mari Eli Leonelli; de Moraes, Luiz Cesar
2016-03-01
This study employed a posteriori registration and subtraction of radiographic images to quantify the apical root resorption in maxillary permanent central incisors after orthodontic treatment, and assessed whether the external apical root resorption (EARR) was related to a range of parameters involved in the treatment. A sample of 79 patients (mean age, 13.5±2.2 years) with no history of trauma or endodontic treatment of the maxillary permanent central incisors was selected. Periapical radiographs taken before and after orthodontic treatment were digitized and imported to the Regeemy software. Based on an analysis of the posttreatment radiographs, the length of the incisors was measured using Image J software. The mean EARR was described in pixels and relative root resorption (%). The patient's age and gender, tooth extraction, use of elastics, and treatment duration were evaluated to identify possible correlations with EARR. The mean EARR observed was 15.44±12.1 pixels (5.1% resorption). No differences in the mean EARR were observed according to patient characteristics (gender, age) or treatment parameters (use of elastics, treatment duration). The only parameter that influenced the mean EARR of a patient was the need for tooth extraction. A posteriori registration and subtraction of periapical radiographs was a suitable method to quantify EARR after orthodontic treatment, and the need for tooth extraction increased the extent of root resorption after orthodontic treatment.
Reprocessing of Archival Direct Imaging Data of Herbig Ae/Be Stars
NASA Astrophysics Data System (ADS)
Safsten, Emily; Stephens, Denise C.
2017-01-01
Herbig Ae/Be (HAeBe) stars are intermediate mass (2-10 solar mass) pre-main sequence stars with circumstellar disks. They are the higher mass analogs of the better-known T Tauri stars. Observing planets within these young disks would greatly aid in understanding planet formation processes and timescales, particularly around massive stars. So far, only one planet, HD 100546b, has been confirmed to orbit a HAeBe star. With over 250 HAeBe stars known, and several observed to have disks with structures thought to be related to planet formation, it seems likely that there are as yet undiscovered planetary companions within the circumstellar disks of some of these young stars. Direct detection of a low-luminosity companion near a star requires high contrast imaging, often with the use of a coronagraph, and the subtraction of the central star's point spread function (PSF). Several processing algorithms have been developed in recent years to improve PSF subtraction and enhance the signal-to-noise of sources close to the central star. However, many HAeBe stars were observed via direct imaging before these algorithms were developed. We present here current work with the PSF subtraction program PynPoint, which employs a method of principal component analysis, to reprocess archival images of HAeBe stars to increase the likelihood of detecting a planet in their disks.
Estimated Student Score Gain on the ACT COMP Exam: Valid Tool for Institutional Assessment?
ERIC Educational Resources Information Center
Banta, Trudy W.; And Others
1987-01-01
An institution can test seniors with the ACT College Outcome Measures Project (COMP) exam, then subtract from the senior score an estimated freshman score. Studies at the University of Tennessee, Knoxville, indicate that this method is not reliable to make judgments about the quality of general education programs. (Author/MLW)
Back to Basics: Algebraic Foundations of the Statement of Cash Flows
ERIC Educational Resources Information Center
Joyner, Donald T.; Banatte, Jean-Marie; Dondeti, V. Reddy
2014-01-01
The indirect method for preparing the statement of cash flows, as described in many standard textbooks, involves an item-by-item approach, telling you to add to or subtract from the net income, the increases or decreases in the balance sheet items, such as accounts payable or accounts receivable. Many business students, especially at the…
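The item-by-item add-or-subtract rule described above has a simple algebraic core. The toy function below illustrates it for two hypothetical balance sheet items only; it is a sketch of the indirect-method logic, not a textbook-complete statement of cash flows.

```python
def operating_cash_flow(net_income, delta_receivables, delta_payables):
    """Indirect-method operating cash flow (two-item toy version).

    Subtract increases in accounts receivable (revenue booked but cash
    not yet collected); add increases in accounts payable (expenses
    booked but cash not yet paid out).
    """
    return net_income - delta_receivables + delta_payables
```

For example, net income of 100 with receivables up 20 and payables up 10 yields operating cash flow of 90.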
26 CFR 1.412(c)(2)-1 - Valuation of plan assets; reasonable actuarial valuation methods.
Code of Federal Regulations, 2014 CFR
2014-04-01
... computed by— (i) Determining the fair market value of plan assets at least annually, (ii) Adding the...) In determining the adjusted value of plan assets for a prior valuation date, there is added to the... market value, amounts are subtracted from this account and added, to the extent necessary, to raise the...
26 CFR 1.451-4 - Accounting for redemption of trading stamps and coupons.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 26 Internal Revenue 6 2013-04-01 2013-04-01 false Accounting for redemption of trading stamps and... Included § 1.451-4 Accounting for redemption of trading stamps and coupons. (a) In general—(1) Subtraction from receipts. If an accrual method taxpayer issues trading stamps or premium coupons with sales, or an...
26 CFR 1.451-4 - Accounting for redemption of trading stamps and coupons.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 26 Internal Revenue 6 2012-04-01 2012-04-01 false Accounting for redemption of trading stamps and... Included § 1.451-4 Accounting for redemption of trading stamps and coupons. (a) In general—(1) Subtraction from receipts. If an accrual method taxpayer issues trading stamps or premium coupons with sales, or an...
26 CFR 1.451-4 - Accounting for redemption of trading stamps and coupons.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 26 Internal Revenue 6 2014-04-01 2014-04-01 false Accounting for redemption of trading stamps and... Included § 1.451-4 Accounting for redemption of trading stamps and coupons. (a) In general—(1) Subtraction from receipts. If an accrual method taxpayer issues trading stamps or premium coupons with sales, or an...
Developmental dissociation in the neural responses to simple multiplication and subtraction problems
Prado, Jérôme; Mutreja, Rachna; Booth, James R.
2014-01-01
Mastering single-digit arithmetic during school years is commonly thought to depend upon an increasing reliance on verbally memorized facts. An alternative model, however, posits that fluency in single-digit arithmetic might also be achieved via the increasing use of efficient calculation procedures. To test between these hypotheses, we used a cross-sectional design to measure the neural activity associated with single-digit subtraction and multiplication in 34 children from 2nd to 7th grade. The neural correlates of language and numerical processing were also identified in each child via localizer scans. Although multiplication and subtraction were indistinguishable in terms of behavior, we found a striking developmental dissociation in their neural correlates. First, we observed grade-related increases of activity for multiplication, but not for subtraction, in a language-related region of the left temporal cortex. Second, we found grade-related increases of activity for subtraction, but not for multiplication, in a region of the right parietal cortex involved in the procedural manipulation of numerical quantities. The present results suggest that fluency in simple arithmetic in children may be achieved by both increasing reliance on verbal retrieval and by greater use of efficient quantity-based procedures, depending on the operation. PMID:25089323
Summation and subtraction using a modified autoshaping procedure in pigeons.
Ploog, Bertram O
2008-06-01
A modified autoshaping paradigm (significantly different from those previously reported in the summation literature) was employed to allow for the simultaneous assessment of stimulus summation and subtraction in pigeons. The response requirements and the probability of food delivery were adjusted such that towards the end of training 12 of 48 trials ended in food delivery, the same proportion as under testing. Stimuli (outlines of squares of three sizes and colors: A, B, and C) were used that could be presented separately or in any combination of two or three stimuli. Twelve of the pigeons (summation groups) were trained with either A, B, and C or with AB, BC, and CA, and tested with ABC. The remaining 12 pigeons (subtraction groups) received training with ABC but were tested with A, B, and C or with AB, BC, and CA. These groups were further subdivided according to whether stimulus elements were presented either in a concentric or dispersed manner. Summation did not occur; subtraction occurred in the two concentric groups. For interpretation of the results, configural theory, the Rescorla-Wagner model, and the composite-stimulus control model were considered. The results suggest different mechanisms responsible for summation and subtraction.
Improvements in floating point addition/subtraction operations
Farmwald, P.M.
1984-02-24
Apparatus is described for decreasing the latency time associated with floating point addition and subtraction in a computer, using a novel bifurcated, pre-normalization/post-normalization approach that distinguishes between differences of floating point exponents.
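The pre-normalization/post-normalization split that the patent abstract refers to can be illustrated with a toy software model of floating point addition. This is a schematic sketch of the two phases only, not the patented bifurcated apparatus; the function name, the (mantissa, exponent) representation, and the default 24-bit precision are all illustrative choices.

```python
def fp_add(m1, e1, m2, e2, precision=24):
    """Toy binary floating-point addition on (mantissa, exponent) pairs.

    Pre-alignment shifts the operand with the smaller exponent so both
    share one exponent; post-normalization brings the result mantissa
    back within `precision` bits after the add.
    """
    # Pre-alignment: make operand 1 the one with the larger exponent.
    if e1 < e2:
        m1, e1, m2, e2 = m2, e2, m1, e1
    m2 >>= (e1 - e2)          # align; low-order bits are discarded
    m, e = m1 + m2, e1
    # Post-normalization: renormalize the mantissa after the add.
    while m.bit_length() > precision:
        m >>= 1
        e += 1
    return m, e
```

Adding two equal 24-bit mantissas overflows one bit, so post-normalization shifts the sum right and bumps the exponent.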
High-Order Accurate Solutions to the Helmholtz Equation in the Presence of Boundary Singularities
2015-03-31
...the FD scheme is only consistent for classical solutions of the PDE. For this reason, we implement the method of singularity subtraction as a means for ... regularity due to the boundary conditions. ... In the present work, we develop a high-order numerical method for solving linear elliptic PDEs with well-behaved variable coefficients on
Background suppression of infrared small target image based on inter-frame registration
NASA Astrophysics Data System (ADS)
Ye, Xiubo; Xue, Bindang
2018-04-01
We propose a multi-frame background suppression method for remote infrared small target detection. Inter-frame information is necessary when the heavy background clutters make it difficult to distinguish real targets and false alarms. A registration procedure based on points matching in image patches is used to compensate the local deformation of background. Then the target can be separated by background subtraction. Experiments show our method serves as an effective preliminary of target detection.
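The compensate-then-subtract step described above can be sketched with an integer translation as the registration model. This is a minimal stand-in for the patch-based point-matching registration in the abstract; the function name is hypothetical and real deformations are of course not pure integer shifts.

```python
import numpy as np

def suppress_background(current, previous, shift):
    """Register the previous frame by an estimated (dy, dx) translation,
    then subtract it to suppress background clutter.

    Whatever survives the subtraction (and is not noise) is a candidate
    small target.
    """
    dy, dx = shift
    registered = np.roll(previous, (dy, dx), axis=(0, 1))
    return current - registered
```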
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Kyung J.; Cho, David R.
Purpose: To evaluate the safety and the effectiveness of CO{sub 2} splenoportography with the 'skinny' needle. Methods: A flexible, 22 gauge needle ('skinny' needle) was introduced into the exteriorized spleens of five pigs. After checking the intrasplenic positioning with CO{sub 2} injection, increasing doses of CO{sub 2} (10-60 cm{sup 3}) were injected using a dedicated CO{sub 2} injector with digital imaging. The puncture sites were observed during and after CO{sub 2} injections, and after removal of the needle. The spleens were then removed for gross and microscopic examination. Results: In all animals digital subtraction CO{sub 2} splenoportograms showed the splenic, extra- and intrahepatic portal veins, and the most distal portion of the superior mesenteric vein. No CO{sub 2} extravasation occurred in the spleen. There was no significant bleeding from the puncture site after removal of the needle. Gross and microscopic examination revealed no evidence of splenic rupture or intrasplenic hematoma. Conclusion: CO{sub 2} splenoportography with the 'skinny' needle is a safe and simple method of visualizing the portal vein and its branches. Careful appraisals of the clinical usefulness of the method will be needed in various clinical settings.
Spectral K-edge subtraction imaging
NASA Astrophysics Data System (ADS)
Zhu, Y.; Samadi, N.; Martinson, M.; Bassey, B.; Wei, Z.; Belev, G.; Chapman, D.
2014-05-01
We describe a spectral x-ray transmission method to provide images of independent material components of an object using a synchrotron x-ray source. The imaging system and process are similar to K-edge subtraction (KES) imaging, where two imaging energies are prepared above and below the K-absorption edge of a contrast element and a quantifiable image of the contrast element and a water equivalent image are obtained. The spectral method, termed 'spectral-KES', employs a continuous spectrum encompassing an absorption edge of an element within the object. The spectrum is prepared by a bent Laue monochromator with good focal and energy dispersive properties. The monochromator focuses the spectral beam at the object location, which then diverges onto an area detector such that one dimension in the detector is an energy axis. A least-squares method is used to interpret the transmitted spectral data, with fits to measured and/or calculated absorption of the contrast element and the matrix material (water). The spectral-KES system is very simple to implement and is comprised of a bent Laue monochromator, a stage for sample manipulation for projection and computed tomography imaging, and a pixelated area detector. The imaging system and examples of its applications to biological imaging are presented. The system is particularly well suited for a synchrotron bend magnet beamline with white beam access.
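The least-squares fit along the energy axis described above is, per pixel, a small linear problem. The sketch below illustrates a two-material decomposition under that assumption; the function name is hypothetical and the attenuation coefficients would in practice be measured or tabulated.

```python
import numpy as np

def kes_decompose(mu_contrast, mu_water, attenuation):
    """Two-material least-squares decomposition along the energy axis.

    Solves -ln(I/I0)(E) ~ mu_contrast(E)*t_c + mu_water(E)*t_w for the
    projected thicknesses (t_c, t_w) of the contrast element and the
    water-equivalent matrix.
    """
    A = np.column_stack([mu_contrast, mu_water])
    (t_c, t_w), *_ = np.linalg.lstsq(A, attenuation, rcond=None)
    return t_c, t_w
```

On noiseless synthetic data the fit recovers the thicknesses used to generate the attenuation exactly.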
Foreground extraction for moving RGBD cameras
NASA Astrophysics Data System (ADS)
Junejo, Imran N.; Ahmed, Naveed
2017-02-01
In this paper, we propose a simple method to perform foreground extraction for a moving RGBD camera. These cameras have now been available for quite some time. Their popularity is primarily due to their low cost and ease of availability. Although the field of foreground extraction or background subtraction has been explored by computer vision researchers for a long time, depth-based subtraction is relatively new and has not been extensively addressed as of yet. Most of the current methods make heavy use of geometric reconstruction, making the solutions quite restrictive. In this paper, we make novel use of RGB and RGBD data: from the RGB frame, we extract corner features (FAST) and then represent these features with the histogram of oriented gradients (HoG) descriptor. We train a non-linear SVM on these descriptors. During the test phase, we make use of the fact that the foreground object has distinct depth ordering with respect to the rest of the scene. That is, we use the positively classified FAST features on the test frame to initiate region growing to obtain an accurate segmentation of the foreground object from just the RGBD data. We demonstrate the proposed method on synthetic datasets, with encouraging quantitative and qualitative results.
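The final region-growing step, expanding a mask from classified seed points over pixels of similar depth, can be sketched as a breadth-first flood fill. This is an illustrative sketch of generic depth-based region growing, not the authors' exact segmentation; the function name and tolerance are assumptions.

```python
from collections import deque
import numpy as np

def region_grow(depth, seed, tol=0.05):
    """Grow a foreground mask from a seed pixel over similar depths.

    4-connected BFS: a neighbour joins the region when its depth differs
    from the current pixel's by less than `tol` (metres, say).
    """
    h, w = depth.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(depth[ny, nx] - depth[y, x]) < tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```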
Ding, Liang-Hao; Xie, Yang; Park, Seongmi; Xiao, Guanghua; Story, Michael D.
2008-01-01
Despite the tremendous growth of microarray usage in scientific studies, there is a lack of standards for background correction methodologies, especially in single-color microarray platforms. Traditional background subtraction methods often generate negative signals and thus cause large amounts of data loss. Hence, some researchers prefer to avoid background corrections, which typically result in the underestimation of differential expression. Here, by utilizing nonspecific negative control features integrated into Illumina whole genome expression arrays, we have developed a method of model-based background correction for BeadArrays (MBCB). We compared the MBCB with a method adapted from the Affymetrix robust multi-array analysis algorithm and with no background subtraction, using a mouse acute myeloid leukemia (AML) dataset. We demonstrated that differential expression ratios obtained by using the MBCB had the best correlation with quantitative RT–PCR. MBCB also achieved better sensitivity in detecting differentially expressed genes with biological significance. For example, we demonstrated that the differential regulation of Tnfr2, Ikk and NF-kappaB, the death receptor pathway, in the AML samples, could only be detected by using data after MBCB implementation. We conclude that MBCB is a robust background correction method that will lead to more precise determination of gene expression and better biological interpretation of Illumina BeadArray data. PMID:18450815
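The data-loss problem that motivates MBCB is easy to demonstrate: plain subtraction of a background estimate drives dim probes negative. The sketch below shows the naive negative-control subtraction that MBCB improves upon; it is not the MBCB model itself, and the function name is illustrative.

```python
import numpy as np

def naive_background_subtract(signal, negative_controls):
    """Subtract the mean of negative-control probe intensities.

    This is the traditional approach the abstract criticizes: any probe
    dimmer than the estimated background comes out negative and is
    typically discarded, which MBCB's model-based estimate avoids.
    """
    background = np.mean(negative_controls)
    return signal - background
```

With control intensities averaging 65, a probe at 50 is pushed to -15, exactly the kind of value lost under the traditional method.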
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aad, G.; Abajyan, T.; Abbott, B.
2013-05-15
A measurement of splitting scales, as defined by the k_T clustering algorithm, is presented for final states containing a W boson produced in proton–proton collisions at a centre-of-mass energy of 7 TeV. The measurement is based on the full 2010 data sample, corresponding to an integrated luminosity of 36 pb⁻¹, which was collected using the ATLAS detector at the CERN Large Hadron Collider. Cluster splitting scales are measured in events containing W bosons decaying to electrons or muons. The measurement comprises the four hardest splitting scales in a k_T cluster sequence of the hadronic activity accompanying the W boson, and ratios of these splitting scales. Backgrounds such as multi-jet and top-quark-pair production are subtracted and the results are corrected for detector effects. Predictions from various Monte Carlo event generators at particle level are compared to the data. Overall, reasonable agreement is found with all generators, but larger deviations between the predictions and the data are evident in the soft regions of the splitting scales.
Mills, Travis; Lalancette, Marc; Moses, Sandra N; Taylor, Margot J; Quraan, Maher A
2012-07-01
Magnetoencephalography provides precise information about the temporal dynamics of brain activation and is an ideal tool for investigating rapid cognitive processing. However, in many cognitive paradigms visual stimuli are used, which evoke strong brain responses (typically 40-100 nAm in V1) that may impede the detection of weaker activations of interest. This is particularly a concern when beamformer algorithms are used for source analysis, due to artefacts such as "leakage" of activation from the primary visual sources into other regions. We have previously shown (Quraan et al. 2011) that we can effectively reduce leakage patterns and detect weak hippocampal sources by subtracting the functional images derived from the experimental task and a control task with similar stimulus parameters. In this study we assess the performance of three different subtraction techniques. In the first technique we follow the same post-localization subtraction procedures as in our previous work. In the second and third techniques, we subtract the sensor data obtained from the experimental and control paradigms prior to source localization. Using simulated signals embedded in real data, we show that when beamformers are used, subtraction prior to source localization allows for the detection of weaker sources and higher localization accuracy. The improvement in localization accuracy exceeded 10 mm at low signal-to-noise ratios, and sources down to below 5 nAm were detected. We applied our techniques to empirical data acquired with two different paradigms designed to evoke hippocampal and frontal activations, and demonstrated our ability to detect robust activations in both regions with substantial improvements over image subtraction. 
We conclude that removal of the common-mode dominant sources through data subtraction prior to localization further improves the beamformer's ability to project the n-channel sensor-space data to reveal weak sources of interest and allows more accurate localization.
Analysis of Acoustic Ambient Noise in Monterey Bay, California.
1982-12-01
1/3-octave band levels, calculated from subroutines "Sub7", "Sub8", or "SubS" for center frequencies of 125 Hz and 250 Hz for ... 256 bins; calculates overall band levels for "corrected plots" based on the analyzer scale selected; Subroutines "Sub7", "SubS", and ... levels are calculated as positive values to be added to other values in eq. (3), vice negative values that would be subtracted: "Sub7": calculates 1
Bridwell, Keith H
2006-09-01
Author experience and literature review. To investigate and discuss decision-making on when to perform a Smith-Petersen osteotomy as opposed to a pedicle subtraction procedure and/or a vertebral column resection. Articles have been published regarding Smith-Petersen osteotomies, pedicle subtraction procedures, and vertebral column resections. Expectations and complications have been reviewed. However, decision-making regarding which of the 3 procedures is most useful for a particular spinal deformity case is not clearly investigated. Discussed in this manuscript is the author's experience and the literature regarding the operative options for a fixed coronal or sagittal deformity. There are roles for Smith-Petersen osteotomy, pedicle subtraction, and vertebral column resection. Each has specific applications and potential complications. As the magnitude of resection increases, the ability to correct deformity improves, but also the risk of complication increases. Therein, an understanding of potential applications and complications is helpful.
Sky Subtraction with Fiber-Fed Spectrograph
NASA Astrophysics Data System (ADS)
Rodrigues, Myriam
2017-09-01
Historically, fiber-fed spectrographs have been deemed inadequate for the observation of faint targets, mainly because of the difficulty of achieving high accuracy in the sky subtraction. The impossibility of sampling the sky in the immediate vicinity of the target in fiber instruments has led to a commonly held view that a multi-object fibre spectrograph cannot achieve a sky subtraction accurate to better than 1%, contrary to their slit counterparts. The next generation multi-object spectrograph at the VLT (MOONS) and the planned MOS for the E-ELT (MOSAIC) are fiber-fed instruments, and are intended to observe targets fainter than the sky continuum level. In this talk, I will present the state of the art in sky subtraction strategies and data reduction algorithms specifically developed for fiber-fed spectrographs. I will also present the main results of an observational campaign to better characterise the sky's spatial and temporal variations (in particular the continuum and faint sky lines).
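One simple baseline among the sky subtraction strategies discussed here is to build a master sky from dedicated sky fibers and subtract it from each target fiber. The sketch below shows only that baseline; it is not the MOONS/MOSAIC algorithm, and real pipelines must also model fiber throughput differences and sky-line variability.

```python
import numpy as np

def sky_subtract(target_spectrum, sky_fiber_spectra):
    """Subtract a master sky built from dedicated sky fibers.

    The median across sky fibers is one common, robust estimator of the
    sky spectrum; the residual is the sky-subtracted target spectrum.
    """
    master_sky = np.median(sky_fiber_spectra, axis=0)
    return target_spectrum - master_sky
```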
NASA Astrophysics Data System (ADS)
Hemdan, A.
2016-07-01
Three simple, selective, and accurate spectrophotometric methods have been developed and then validated for the analysis of Benazepril (BENZ) and Amlodipine (AML) in bulk powder and pharmaceutical dosage form. The first method is the absorption factor (AF) for zero order and amplitude factor (P-F) for first order spectrum, where both BENZ and AML can be measured from their resolved zero order spectra at 238 nm or from their first order spectra at 253 nm. The second method is the constant multiplication coupled with constant subtraction (CM-CS) for zero order and successive derivative subtraction-constant multiplication (SDS-CM) for first order spectrum, where both BENZ and AML can be measured from their resolved zero order spectra at 240 nm and 238 nm, respectively, or from their first order spectra at 214 nm and 253 nm for Benazepril and Amlodipine respectively. The third method is the novel constant multiplication coupled with derivative zero crossing (CM-DZC) which is a stability indicating assay method for determination of Benazepril and Amlodipine in presence of the main degradation product of Benazepril which is Benazeprilate (BENZT). The three methods were validated as per the ICH guidelines and the standard curves were found to be linear in the range of 5-60 μg/mL for Benazepril and 5-30 for Amlodipine, with well accepted mean correlation coefficient for each analyte. The intra-day and inter-day precision and accuracy results were well within the acceptable limits.
Hemdan, A
2016-07-05
Three simple, selective, and accurate spectrophotometric methods have been developed and then validated for the analysis of Benazepril (BENZ) and Amlodipine (AML) in bulk powder and pharmaceutical dosage form. The first method is the absorption factor (AF) for zero order and amplitude factor (P-F) for first order spectrum, where both BENZ and AML can be measured from their resolved zero order spectra at 238nm or from their first order spectra at 253nm. The second method is the constant multiplication coupled with constant subtraction (CM-CS) for zero order and successive derivative subtraction-constant multiplication (SDS-CM) for first order spectrum, where both BENZ and AML can be measured from their resolved zero order spectra at 240nm and 238nm, respectively, or from their first order spectra at 214nm and 253nm for Benazepril and Amlodipine respectively. The third method is the novel constant multiplication coupled with derivative zero crossing (CM-DZC) which is a stability indicating assay method for determination of Benazepril and Amlodipine in presence of the main degradation product of Benazepril which is Benazeprilate (BENZT). The three methods were validated as per the ICH guidelines and the standard curves were found to be linear in the range of 5-60μg/mL for Benazepril and 5-30 for Amlodipine, with well accepted mean correlation coefficient for each analyte. The intra-day and inter-day precision and accuracy results were well within the acceptable limits. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Fink, P. W.; Khayat, M. A.; Wilton, D. R.
2005-01-01
It is known that higher order modeling of the sources and the geometry in Boundary Element Modeling (BEM) formulations is essential to highly efficient computational electromagnetics. However, in order to achieve the benefits of higher order basis and geometry modeling, the singular and near-singular terms arising in BEM formulations must be integrated accurately. In particular, the accurate integration of near-singular terms, which occur when observation points are near but not on source regions of the scattering object, has been considered one of the remaining limitations on the computational efficiency of integral equation methods. The method of singularity subtraction has been used extensively for the evaluation of singular and near-singular terms. Piecewise integration of the source terms in this manner, while manageable for bases of constant and linear orders, becomes unwieldy and prone to error for bases of higher order. Furthermore, we find that the singularity subtraction method is not conducive to object-oriented programming practices, particularly in the context of multiple operators. To extend the capabilities, accuracy, and maintainability of general-purpose codes, the subtraction method is being replaced in favor of purely numerical quadrature schemes. These schemes employ singularity cancellation methods in which a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. An example of the singularity cancellation approach is the Duffy method, which has two major drawbacks: 1) In the resulting integrand, it produces an angular variation about the singular point that becomes nearly-singular for observation points close to an edge of the parent element, and 2) it appears not to work well when applied to nearly-singular integrals.
Recently, the authors have introduced the transformation u(x′) = sinh⁻¹(x′/√(y′² + z²)) for integrating functions of the form I = ∫_D Λ(r′) e^(−jkR)/(4πR) dD, where Λ(r′) is a vector or scalar basis function and R = √(x′² + y′² + z²) is the distance between source and observation points. This scheme has all of the advantages of the Duffy method while avoiding the disadvantages listed above. In this presentation we will survey similar approaches for handling singular and near-singular terms for kernels with 1/R² type behavior, addressing potential pitfalls and offering techniques to efficiently handle special cases.
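The benefit of such a sinh-type cancellation transformation can be illustrated in one dimension, where the analogous substitution x′ = z sinh u makes the Jacobian cancel the near-singular denominator exactly. This is a simplified analogue of the transformation above, not the authors' 2D implementation:

```python
import numpy as np

def naive_quadrature(z, n):
    # Gauss-Legendre directly in x': the integrand 1/sqrt(x'^2 + z^2)
    # peaks sharply near x' = 0 when z is small, so accuracy is poor.
    x, w = np.polynomial.legendre.leggauss(n)
    return np.sum(w / np.sqrt(x**2 + z**2))

def sinh_transformed(z, n):
    # Substitute x' = z*sinh(u): dx' = z*cosh(u) du and sqrt(x'^2 + z^2) = z*cosh(u),
    # so the Jacobian cancels the near-singularity and the integrand becomes smooth.
    u_max = np.arcsinh(1.0 / z)
    u, w = np.polynomial.legendre.leggauss(n)
    u = u * u_max                      # map [-1, 1] -> [-u_max, u_max]
    w = w * u_max
    xp = z * np.sinh(u)
    jac = z * np.cosh(u)
    return np.sum(w * jac / np.sqrt(xp**2 + z**2))

z = 1e-4                               # observation point very close to the element
exact = 2.0 * np.arcsinh(1.0 / z)      # exact value of the 1D integral
print("exact            :", exact)
print("naive, n=16      :", naive_quadrature(z, 16))
print("transformed, n=16:", sinh_transformed(z, 16))
```

With only 16 points, the transformed rule reproduces the exact value to machine precision, while the untransformed rule misses badly; this is the cancellation behavior the abstract describes.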
Unbiased methods for removing systematics from galaxy clustering measurements
NASA Astrophysics Data System (ADS)
Elsner, Franz; Leistedt, Boris; Peiris, Hiranya V.
2016-02-01
Measuring the angular clustering of galaxies as a function of redshift is a powerful method for extracting information from the three-dimensional galaxy distribution. The precision of such measurements will dramatically increase with ongoing and future wide-field galaxy surveys. However, these are also increasingly sensitive to observational and astrophysical contaminants. Here, we study the statistical properties of three methods proposed for controlling such systematics - template subtraction, basic mode projection, and extended mode projection - all of which make use of externally supplied template maps, designed to characterize and capture the spatial variations of potential systematic effects. Based on a detailed mathematical analysis, and in agreement with simulations, we find that the template subtraction method in its original formulation returns biased estimates of the galaxy angular clustering. We derive closed-form expressions that should be used to correct results for this shortcoming. Turning to the basic mode projection algorithm, we prove it to be free of any bias, whereas we conclude that results computed with extended mode projection are biased. Within a simplified setup, we derive analytical expressions for the bias and discuss the options for correcting it in more realistic configurations. Common to all three methods is an increased estimator variance induced by the cleaning process, albeit at different levels. These results enable unbiased high-precision clustering measurements in the presence of spatially varying systematics, an essential step towards realizing the full potential of current and planned galaxy surveys.
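Basic mode projection, which the analysis finds to be bias-free, amounts to projecting the data onto the subspace orthogonal to the templates. The following is a toy pixel-space sketch; the real estimator operates on clustering statistics, and the template maps here are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
npix, ntmp = 500, 3

signal = rng.normal(size=npix)                 # true clustering fluctuations (toy)
T = rng.normal(size=(npix, ntmp))              # systematics template maps
alpha = np.array([0.5, -1.2, 0.8])             # unknown contamination amplitudes
data = signal + T @ alpha + 0.1 * rng.normal(size=npix)

# Basic mode projection: project the data onto the subspace orthogonal
# to the templates, removing any component that the templates can mimic.
P = np.eye(npix) - T @ np.linalg.solve(T.T @ T, T.T)
cleaned = P @ data

# The template-shaped contamination is removed exactly...
residual_contamination = P @ (T @ alpha)
print("max residual contamination:", np.abs(residual_contamination).max())
# ...at the cost of also discarding the part of the signal lying in that
# subspace, which is the source of the increased estimator variance.
```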
To BG or not to BG: Background Subtraction for EIT Coronal Loops
NASA Astrophysics Data System (ADS)
Beene, J. E.; Schmelz, J. T.
2003-05-01
One of the few observational tests for various coronal heating models is to determine the temperature profile along coronal loops. Since loops are such an abundant coronal feature, this method originally seemed quite promising - that the coronal heating problem might actually be solved by determining the temperature as a function of arc length and comparing these observations with predictions made by different models. But there are many instruments currently available to study loops, as well as various techniques used to determine their temperature characteristics. Consequently, there are many different, mostly conflicting temperature results. We chose data for ten coronal loops observed with the Extreme ultraviolet Imaging Telescope (EIT), and chose specific pixels along each loop, as well as corresponding nearby background pixels where the loop emission was not present. Temperature analysis from the 171-to-195 and 195-to-284 angstrom image ratios was then performed on three forms of the data: the original data alone, the original data with a uniform background subtraction, and the original data with a pixel-by-pixel background subtraction. The original results show loops of constant temperature, as other authors have found before us, but the 171-to-195 and 195-to-284 results are significantly different. Background subtraction does not change the constant-temperature result or the value of the temperature itself. This does not mean that loops are isothermal, however, because the background pixels, which are not part of any contiguous structure, also produce a constant-temperature result with the same value as the loop pixels. These results indicate that EIT temperature analysis should not be trusted, and the isothermal loops that result from EIT (and TRACE) analysis may be an artifact of the analysis process. Solar physics research at the University of Memphis is supported by NASA grants NAG5-9783 and NAG5-12096.
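The three treatments of the data can be sketched numerically. The pixel values below are hypothetical, and the final conversion of band ratios to temperature through the instrument response curves is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20                                     # pixels sampled along the loop

# Hypothetical intensities in two EIT bandpasses for loop pixels and
# corresponding nearby background pixels (arbitrary units).
loop_171 = 120.0 + rng.normal(0, 5, n)
loop_195 = 100.0 + rng.normal(0, 5, n)
bkg_171 = 60.0 + rng.normal(0, 5, n)
bkg_195 = 55.0 + rng.normal(0, 5, n)

# The three treatments used in the study:
ratio_raw = loop_171 / loop_195                              # original data alone
ratio_uniform = (loop_171 - bkg_171.mean()) / (loop_195 - bkg_195.mean())
ratio_pixel = (loop_171 - bkg_171) / (loop_195 - bkg_195)    # pixel-by-pixel

for name, r in [("raw", ratio_raw), ("uniform", ratio_uniform), ("pixel", ratio_pixel)]:
    print(f"{name:8s} 171/195 ratio: {r.mean():.2f} +/- {r.std():.2f}")
# Each band ratio would then be mapped to a temperature via the
# instrument's temperature-response curves (not reproduced here).
```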
Quantifying the Relative Contributions of Divisive and Subtractive Feedback to Rhythm Generation
Tabak, Joël; Rinzel, John; Bertram, Richard
2011-01-01
Biological systems are characterized by a high number of interacting components. Determining the role of each component is difficult, addressed here in the context of biological oscillations. Rhythmic behavior can result from the interplay of positive feedback that promotes bistability between high and low activity, and slow negative feedback that switches the system between the high and low activity states. Many biological oscillators include two types of negative feedback process: divisive (decreasing the gain of the positive feedback loop) and subtractive (increasing the input threshold), both of which contribute to slowly moving the system between the high- and low-activity states. Can we determine the relative contribution of each type of negative feedback process to the rhythmic activity? Does one dominate? Do they control the active and silent phases equally? To answer these questions we use a neural network model with excitatory coupling, regulated by synaptic depression (divisive feedback) and cellular adaptation (subtractive feedback). We first attempt to apply standard experimental methodologies: either passive observation, to correlate the variations of a variable of interest with system behavior, or deletion of a component, to establish whether that component is critical for the system. We find that these two strategies can lead to contradictory conclusions, and at best their interpretive power is limited. We instead develop a computational measure of the contribution of a process by evaluating the sensitivity of the active (high activity) and silent (low activity) phase durations to the time constant of the process. The measure shows that both processes control the active phase, in proportion to their speed and relative weight. However, only the subtractive process plays a major role in setting the duration of the silent phase. This computational method can be used to analyze the role of negative feedback processes in a wide range of biological rhythms. PMID:21533065
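The two definitions, divisive feedback lowering the gain of the positive feedback loop and subtractive feedback raising the input threshold, can be illustrated on a static sigmoidal input-output curve. This sketch uses an arbitrary sigmoid and parameter values, not the paper's network model:

```python
import numpy as np

def f(x):
    # Sigmoidal activation; the steepness constant is chosen for illustration.
    return 1.0 / (1.0 + np.exp(-x / 0.05))

x = np.linspace(-1.0, 1.0, 2001)
g, theta = 1.0, 0.0                         # baseline gain and threshold

out_base = f(g * x - theta)
out_divisive = f(0.5 * g * x - theta)       # divisive feedback: halves the gain
out_subtractive = f(g * x - (theta + 0.3))  # subtractive feedback: raises threshold

def halfway_point(y):
    # Input at which the output crosses 0.5 (the effective threshold).
    return x[np.argmin(np.abs(y - 0.5))]

print("half-activation input, base       :", halfway_point(out_base))
print("half-activation input, divisive   :", halfway_point(out_divisive))
print("half-activation input, subtractive:", halfway_point(out_subtractive))
```

Divisive feedback flattens the curve but leaves the half-activation input unchanged, while subtractive feedback shifts the whole curve to higher inputs; in the full dynamical model both act slowly to move the network between its high and low activity states.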
Portes, L da Silva; Kioshima, E S; de Camargo, Z P; Batista, W L; Xander, P
2017-11-01
Paracoccidioidomycosis (PCM) is a systemic granulomatous disease endemic in Latin America whose aetiologic agents are the thermodimorphic fungi Paracoccidioides brasiliensis and Paracoccidioides lutzii. Despite technological advances, some problems have been noted with the fungal antigens used for serological diagnosis, and inconsistencies among laboratories have been reported. The use of synthetic peptides in the serological diagnosis of infectious diseases has proved to be a valuable strategy because in some cases the reactions are more specific and sensitive. In this study, we used subtractive selection with a phage display library against purified polyclonal antibodies from sera negative and positive for PCM caused by P. brasiliensis. The binding phages were sequenced and tested in a binding assay to evaluate their interaction with sera from normal individuals and PCM patients. Synthetic peptides derived from these phage clones were tested in a serological assay, and we observed significant recognition of LP15 by sera from PCM patients infected with P. brasiliensis. Our results demonstrated that subtractive phage display selection may be useful for identifying new epitopes that can be applied to the serodiagnosis of PCM caused by P. brasiliensis. Currently, there is no standardized method for the preparation of paracoccidioidomycosis (PCM) antigens, which has resulted in differences in the antigens used for serological diagnosis. Here, we report a procedure that uses subtractive phage display selection to select and identify new epitopes for the serodiagnosis of PCM caused by Paracoccidioides brasiliensis. A synthetic peptide obtained using this methodology was successfully recognized by sera from PCM patients, thus demonstrating its potential use for improving the serodiagnosis of this mycosis. The development of synthetic peptides for the serodiagnosis of PCM could be a promising alternative for better standardization of diagnoses among laboratories.
© 2017 The Society for Applied Microbiology.
Determination of Surface Charge of Titanium Dioxide (Anatase) at High Ionic Strength
NASA Astrophysics Data System (ADS)
Schoonen, M. A.; Strongin, D. R.
2014-12-01
Charge development on mineral surfaces is an important control on the fate of minor and trace elements in a wide range of environments, including possible radioactive waste repositories. Formation waters often have a high ionic strength. In this study, we determined the zeta potential (ζ) of anatase in potassium chloride solutions with concentrations up to 3M (25°C). The zeta potential is the potential at the hydrodynamic shear plane. We made use of the electro-acoustic effect, which is based on the development of a measurable potential/current when the electrical double layer outside the shear plane is separated from a charged particle through rapid oscillation induced by a sound wave. The advantage of this type of measurement is that the particles are not subjected to a high electric field (common to typical zeta potential measurements), which can lead to electrode reactions and a shift in solution pH. Measurements were corrected by subtracting the ion vibration current (IVI) due to the presence of potassium and chloride ions from the colloid vibration current (CVI). The correction is necessary for measurements in solutions with I > 0.25 M. This subtraction was done at each of the measurement conditions by centrifuging the slurry, measuring the IVI of the supernatant, reconstituting the slurry, and then measuring the CVI of the slurry. Subtraction of the IVI at each condition is critical because the IVI changes with pH and accounts for most of the raw signal. The results show that the anatase isoelectric point shifts from pH ~6.5 to a value of ~4.5 at 1M KCl. At ionic strengths in excess of 1 M KCl, the surface appears to be slightly negatively charged across the pH range accessible by this technique (pH 2.5-10). The loss of an isoelectric point suggests that KCl is no longer an indifferent electrolyte at 1 M KCl and higher. The results are in disagreement with earlier measurements in which anatase was shown to have a positive charge at high ionic strength across the pH scale.
The difference between the current and earlier work is likely a result of the IVI correction. While anatase is unlikely to be of importance in a waste environment, the work provides a method to determine charge on more relevant mineral surfaces. This can then lead to a better representation of the fate for radionuclides in the subsurface.
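The per-condition correction amounts to subtracting the supernatant (ion-only) signal from the slurry signal at each pH set point. The readings below are hypothetical, chosen only to show the bookkeeping:

```python
import numpy as np

# Hypothetical electro-acoustic readings at a series of pH set points
# (arbitrary signal units; values are illustrative only).
ph = np.array([2.5, 4.0, 5.5, 7.0, 8.5, 10.0])
cvi_slurry = np.array([5.1, 2.9, 1.2, -0.8, -2.6, -4.3])       # particles + ions
ivi_supernatant = np.array([4.0, 2.5, 1.3, -0.2, -1.7, -3.1])  # ions only

# The particle contribution is the slurry signal minus the ion vibration
# current measured on the centrifuged supernatant at the same condition.
cvi_particle = cvi_slurry - ivi_supernatant

for p, c in zip(ph, cvi_particle):
    print(f"pH {p:4.1f}: particle CVI = {c:+.2f}")
```

Because the IVI dominates the raw signal and varies with pH, a single fixed correction would not suffice, which is why the subtraction is repeated at every measurement condition.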
Pediatric head and neck lesions: assessment of vascularity by MR digital subtraction angiography.
Chooi, Weng Kong; Woodhouse, Neil; Coley, Stuart C; Griffiths, Paul D
2004-08-01
Pediatric head and neck lesions can be difficult to characterize on clinical grounds alone. We investigated the use of dynamic MR digital subtraction angiography as a noninvasive adjunct for the assessment of the vascularity of these abnormalities. Twelve patients (age range, 2 days to 16 years) with known or suspected vascular abnormalities were studied. Routine MR imaging, time-of-flight MR angiography, and MR digital subtraction angiography were performed in all patients. The dynamic sequence was acquired in two planes at one frame per second by using a thick section (6-10 cm) selective radio-frequency spoiled fast gradient-echo sequence and an IV administered bolus of contrast material. The images were subtracted from a preliminary mask sequence and viewed as a video-inverted cine loop. In all cases, MR digital subtraction angiography was successfully performed. The technique showed the following: 1) slow flow lesions (two choroidal angiomas, eyelid hemangioma, and scalp venous malformation); 2) high flow lesions that were not always suspected by clinical examination alone (parotid hemangioma, scalp, occipital, and eyelid arteriovenous malformations plus a palatal teratoma); 3) a hypovascular tumor for which a biopsy could be safely performed (Burkitt lymphoma); and 4) a hypervascular tumor of the palate (cystic teratoma). Our early experience suggests that MR digital subtraction angiography can be reliably performed in children of all ages without complication. The technique provided a noninvasive assessment of the vascularity of each lesion that could not always have been predicted on the basis of clinical examination or routine MR imaging alone.
Rosenberg-Lee, Miriam; Ashkenazi, Sarit; Chen, Tianwen; Young, Christina B.; Geary, David C.; Menon, Vinod
2014-01-01
Developmental dyscalculia (DD) is marked by specific deficits in processing numerical and mathematical information despite normal intelligence (IQ) and reading ability. We examined how brain circuits used by young children with DD to solve simple addition and subtraction problems differ from those used by typically developing (TD) children who were matched on age, IQ, reading ability, and working memory. Children with DD were slower and less accurate during problem solving than TD children, and were especially impaired on their ability to solve subtraction problems. Children with DD showed significantly greater activity in multiple parietal, occipito-temporal and prefrontal cortex regions while solving addition and subtraction problems. Despite poorer performance during subtraction, children with DD showed greater activity in multiple intra-parietal sulcus (IPS) and superior parietal lobule subdivisions in the dorsal posterior parietal cortex as well as fusiform gyrus in the ventral occipito-temporal cortex. Critically, effective connectivity analyses revealed hyper-connectivity, rather than reduced connectivity, between the IPS and multiple brain systems including the lateral fronto-parietal and default mode networks in children with DD during both addition and subtraction. These findings suggest the IPS and its functional circuits are a major locus of dysfunction during both addition and subtraction problem solving in DD, and that inappropriate task modulation and hyper-connectivity, rather than under-engagement and under-connectivity, are the neural mechanisms underlying problem solving difficulties in children with DD. We discuss our findings in the broader context of multiple levels of analysis and performance issues inherent in neuroimaging studies of typical and atypical development. PMID:25098903
Rosenberg-Lee, Miriam; Ashkenazi, Sarit; Chen, Tianwen; Young, Christina B; Geary, David C; Menon, Vinod
2015-05-01
Developmental dyscalculia (DD) is marked by specific deficits in processing numerical and mathematical information despite normal intelligence (IQ) and reading ability. We examined how brain circuits used by young children with DD to solve simple addition and subtraction problems differ from those used by typically developing (TD) children who were matched on age, IQ, reading ability, and working memory. Children with DD were slower and less accurate during problem solving than TD children, and were especially impaired on their ability to solve subtraction problems. Children with DD showed significantly greater activity in multiple parietal, occipito-temporal and prefrontal cortex regions while solving addition and subtraction problems. Despite poorer performance during subtraction, children with DD showed greater activity in multiple intra-parietal sulcus (IPS) and superior parietal lobule subdivisions in the dorsal posterior parietal cortex as well as fusiform gyrus in the ventral occipito-temporal cortex. Critically, effective connectivity analyses revealed hyper-connectivity, rather than reduced connectivity, between the IPS and multiple brain systems including the lateral fronto-parietal and default mode networks in children with DD during both addition and subtraction. These findings suggest the IPS and its functional circuits are a major locus of dysfunction during both addition and subtraction problem solving in DD, and that inappropriate task modulation and hyper-connectivity, rather than under-engagement and under-connectivity, are the neural mechanisms underlying problem solving difficulties in children with DD. We discuss our findings in the broader context of multiple levels of analysis and performance issues inherent in neuroimaging studies of typical and atypical development. © 2014 John Wiley & Sons Ltd.
Appearance of the canine meninges in subtraction magnetic resonance images.
Lamb, Christopher R; Lam, Richard; Keenihan, Erin K; Frean, Stephen
2014-01-01
The canine meninges are not visible as discrete structures in noncontrast magnetic resonance (MR) images, and are incompletely visualized in T1-weighted, postgadolinium images, reportedly appearing as short, thin curvilinear segments with minimal enhancement. Subtraction imaging facilitates detection of enhancement of tissues, hence may increase the conspicuity of meninges. The aim of the present study was to describe qualitatively the appearance of canine meninges in subtraction MR images obtained using a dynamic technique. Images were reviewed of 10 consecutive dogs that had dynamic pre- and postgadolinium T1W imaging of the brain that was interpreted as normal, and had normal cerebrospinal fluid. Image-anatomic correlation was facilitated by dissection and histologic examination of two canine cadavers. Meningeal enhancement was relatively inconspicuous in postgadolinium T1-weighted images, but was clearly visible in subtraction images of all dogs. Enhancement was visible as faint, small-rounded foci compatible with vessels seen end on within the sulci, a series of larger rounded foci compatible with vessels of variable caliber on the dorsal aspect of the cerebral cortex, and a continuous thin zone of moderate enhancement around the brain. Superimposition of color-encoded subtraction images on pregadolinium T1- and T2-weighted images facilitated localization of the origin of enhancement, which appeared to be predominantly dural, with relatively few leptomeningeal structures visible. Dynamic subtraction MR imaging should be considered for inclusion in clinical brain MR protocols because of the possibility that its use may increase sensitivity for lesions affecting the meninges. © 2014 American College of Veterinary Radiology.
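The color-encoded superimposition described above can be sketched as an alpha blend of a subtraction map onto a grayscale anatomic image; the images below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
ny, nx = 32, 32

t1_pre = rng.uniform(0.2, 0.8, size=(ny, nx))    # grayscale anatomy, values in [0, 1]
enhancement = np.zeros((ny, nx))
enhancement[2:5, :] = 0.6                         # thin enhancing rim (toy stand-in)

# Color-encode the subtraction map (enhancement -> red) and superimpose it
# on the grayscale anatomy, as in the color-encoded overlay images.
rgb = np.stack([t1_pre, t1_pre, t1_pre], axis=-1)
alpha = np.clip(enhancement / enhancement.max(), 0, 1)[..., None]
overlay_color = np.array([1.0, 0.0, 0.0])         # pure red for enhancement
rgb_overlay = (1 - alpha) * rgb + alpha * overlay_color

print("overlay shape:", rgb_overlay.shape)
print("red excess in enhancing rim:",
      float((rgb_overlay[3, :, 0] - rgb_overlay[3, :, 1]).mean()))
```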
NASA Astrophysics Data System (ADS)
Wei, Xiaohua; Zhang, Mingfang
2010-12-01
Climatic variability and forest disturbance are commonly recognized as two major drivers influencing streamflow change in large-scale forested watersheds. The greatest challenge in evaluating quantitative hydrological effects of forest disturbance is the removal of climatic effect on hydrology. In this paper, a method was designed to quantify respective contributions of large-scale forest disturbance and climatic variability on streamflow using the Willow River watershed (2860 km2) located in the central part of British Columbia, Canada. Long-term (>50 years) data on hydrology, climate, and timber harvesting history represented by equivalent clear-cutting area (ECA) were available to discern climatic and forestry influences on streamflow in three steps. First, effective precipitation, an integrated climatic index, was generated by subtracting evapotranspiration from precipitation. Second, modified double mass curves were developed by plotting accumulated annual streamflow against annual effective precipitation, which presented a much clearer picture of the cumulative effects of forest disturbance on streamflow following removal of climatic influence. The average annual streamflow changes that were attributed to forest disturbances and climatic variability were then estimated to be +58.7 and -72.4 mm, respectively. The positive (increasing) and negative (decreasing) values in streamflow change indicated opposite change directions, which suggest an offsetting effect between forest disturbance and climatic variability in the study watershed. Finally, a multivariate Autoregressive Integrated Moving Average (ARIMA) model was generated to establish quantitative relationships between accumulated annual streamflow deviation attributed to forest disturbances and annual ECA. The model was then used to project streamflow change under various timber harvesting scenarios.
The methodology can be effectively applied to any large-scale single watershed where long-term data (>50 years) are available.
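The first two steps, computing effective precipitation and building a modified double mass curve whose post-disturbance deviation isolates the forest effect, can be sketched on synthetic annual series. The numbers are not Willow River data, and the ARIMA step is omitted:

```python
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1950, 2005)

# Hypothetical annual series (mm): precipitation, evapotranspiration, streamflow.
precip = rng.normal(600, 60, size=years.size)
et = rng.normal(350, 30, size=years.size)
flow = 0.9 * (precip - et) + rng.normal(0, 15, size=years.size)
flow[30:] += 50.0                        # step increase after heavy harvesting (toy)

# Step 1: effective precipitation integrates the climatic signal.
p_eff = precip - et

# Step 2: modified double mass curve -- accumulated flow vs accumulated P_eff.
cum_flow, cum_peff = np.cumsum(flow), np.cumsum(p_eff)

# Fit the pre-disturbance portion; deviations afterwards are attributed
# to forest disturbance once the climatic influence has been removed.
k = np.polyfit(cum_peff[:30], cum_flow[:30], 1)[0]
deviation = cum_flow - k * cum_peff
annual_forest_effect = np.diff(deviation)[30:].mean()
print(f"estimated disturbance effect: {annual_forest_effect:+.1f} mm/yr")
```

The recovered effect is close to the +50 mm/yr step built into the toy data, illustrating how the double mass curve separates the disturbance signal from climatic variability.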
Chouhan, Manil D; Mookerjee, Rajeshwar P; Bainbridge, Alan; Punwani, Shonit; Jones, Helen; Davies, Nathan; Walker-Samuel, Simon; Patch, David; Jalan, Rajiv; Halligan, Steve; Lythgoe, Mark F; Taylor, Stuart A
2017-03-01
Caval subtraction phase-contrast magnetic resonance imaging (PCMRI) noninvasive measurements of total liver blood flow (TLBF) and hepatic arterial (HA) flow have been validated in animal models and translated into normal volunteers, but not patients. This study aims to demonstrate its use in patients with liver cirrhosis, evaluate measurement consistency, correlate measurements with portal hypertension severity, and invasively validate TLBF measurements. Local research ethics committee approval was obtained. Twelve patients (mean, 50.8 ± 3.1 years; 10 men) with histologically confirmed cirrhosis were recruited prospectively, undergoing 2-dimensional PCMRI of the portal vein (PV) and the infrahepatic and suprahepatic inferior vena cava. Total liver blood flow and HA flow were estimated by subtracting infrahepatic from suprahepatic inferior vena cava flow and PV flow from estimated TLBF, respectively. Invasive hepatic venous pressure gradient (HVPG) and indocyanine green (ICG) clearance TLBF were measured within 7 days of PCMRI. Bland-Altman (BA) analysis of agreement, coefficients of variation, and Pearson correlation coefficients were calculated for comparisons with direct inflow PCMRI, HVPG, and ICG clearance. The mean difference between caval subtraction TLBF and direct inflow PCMRI was 6.3 ± 4.2 mL/min/100 g (BA 95% limits of agreement, ±28.7 mL/min/100 g). Significant positive correlations were observed between HVPG and caval subtraction HA fraction (r = 0.780, P = 0.014), but not for HA flow (r = 0.625, P = 0.053), PV flow (r = 0.244, P = 0.469), or caval subtraction TLBF (r = 0.473, P = 0.141). Caval subtraction and ICG TLBF agreement was modest (mean difference, -32.6 ± 16.6 mL/min/100 g; BA 95% limits of agreement, ±79.7 mL/min/100 g), but coefficients of variation were not different (65.7% vs 48.1%, P = 0.28). 
In this proof-of-principle study, caval subtraction PCMRI measurements are consistent with direct inflow PCMRI, correlate with portal hypertension severity, and demonstrate modest agreement with invasive TLBF measurements. Larger studies investigating the clinical role of TLBF and HA flow measurement in patients with liver disease are justified.
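The caval subtraction arithmetic and the Bland-Altman limits of agreement can be sketched directly from the definitions in the abstract; the per-patient flows below are hypothetical:

```python
import numpy as np

def caval_subtraction(suprahepatic_ivc, infrahepatic_ivc, portal_vein):
    """Caval subtraction estimates (mL/min/100 g), per the abstract:
    TLBF = suprahepatic IVC flow - infrahepatic IVC flow
    HA   = TLBF - portal vein flow
    """
    tlbf = suprahepatic_ivc - infrahepatic_ivc
    ha = tlbf - portal_vein
    return tlbf, ha

# Hypothetical per-patient flows (mL/min/100 g), for illustration only.
supra = np.array([160.0, 145.0, 170.0])
infra = np.array([60.0, 55.0, 72.0])
pv = np.array([70.0, 65.0, 80.0])

tlbf, ha = caval_subtraction(supra, infra, pv)
ha_fraction = ha / tlbf                 # the quantity that correlated with HVPG
print("TLBF:", tlbf)
print("HA fraction:", ha_fraction)

# Bland-Altman 95% limits of agreement against a reference measurement:
reference = np.array([95.0, 92.0, 101.0])
diff = tlbf - reference
loa = 1.96 * diff.std(ddof=1)
print(f"mean difference {diff.mean():+.1f}, 95% LoA +/-{loa:.1f}")
```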
Three-Jet Production in Electron-Positron Collisions at Next-to-Next-to-Leading Order Accuracy
NASA Astrophysics Data System (ADS)
Del Duca, Vittorio; Duhr, Claude; Kardos, Adam; Somogyi, Gábor; Trócsányi, Zoltán
2016-10-01
We introduce a completely local subtraction method for fully differential predictions at next-to-next-to-leading order (NNLO) accuracy for jet cross sections and use it to compute event shapes in three-jet production in electron-positron collisions. We validate our method on two event shapes, thrust and C parameter, which are already known in the literature at NNLO accuracy and compute for the first time oblateness and the energy-energy correlation at the same accuracy.