Sample records for spectral subtraction algorithm

  1. Adaptive Noise Suppression Using Digital Signal Processing

    NASA Technical Reports Server (NTRS)

    Kozel, David; Nelson, Richard

    1996-01-01

    A signal-to-noise-ratio-dependent adaptive spectral subtraction algorithm is developed to eliminate noise from noise-corrupted speech signals. The algorithm determines the signal-to-noise ratio and adjusts the spectral subtraction proportion appropriately. After spectral subtraction, low-amplitude signals are squelched. A single microphone is used to obtain both the noise-corrupted speech and the average noise estimate. This is done by determining whether the frame of data being sampled is a voiced or unvoiced frame. During unvoiced frames an estimate of the noise is obtained. A running average of the noise is used to approximate the expected value of the noise. Applications include the emergency egress vehicle and the crawler transporter.
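
    The frame-wise procedure described above can be sketched compactly. The following is a minimal, hypothetical numpy illustration of SNR-dependent spectral subtraction with squelching; the over-subtraction schedule and floor constant are assumptions, not the authors' values.

      import numpy as np

      def spectral_subtract(frame, noise_mag, floor=0.02):
          """Subtract a running-average noise magnitude estimate from one frame."""
          spec = np.fft.rfft(frame * np.hanning(len(frame)))
          mag, phase = np.abs(spec), np.angle(spec)
          # SNR-dependent proportion: subtract more aggressively at low SNR.
          snr_db = 10 * np.log10(np.sum(mag**2) / (np.sum(noise_mag**2) + 1e-12))
          alpha = np.clip(4.0 - 0.15 * snr_db, 1.0, 5.0)  # illustrative schedule
          clean = np.maximum(mag - alpha * noise_mag, floor * mag)  # squelch residual
          return np.fft.irfft(clean * np.exp(1j * phase), n=len(frame))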

  2. Power Spectral Density Error Analysis of Spectral Subtraction Type of Speech Enhancement Methods

    NASA Astrophysics Data System (ADS)

    Händel, Peter

    2006-12-01

    A theoretical framework for analysis of speech enhancement algorithms is introduced for performance assessment of spectral subtraction type of methods. The quality of the enhanced speech is related to physical quantities of the speech and noise (such as stationarity time and spectral flatness), as well as to design variables of the noise suppressor. The derived theoretical results are compared with the outcome of subjective listening tests as well as successful design strategies, performed by independent research groups.

  3. Detection of Lettuce Discoloration Using Hyperspectral Reflectance Imaging

    PubMed Central

    Mo, Changyeun; Kim, Giyoung; Lim, Jongguk; Kim, Moon S.; Cho, Hyunjeong; Cho, Byoung-Kwan

    2015-01-01

    Rapid visible/near-infrared (VNIR) hyperspectral imaging methods, employing both a single waveband algorithm and multi-spectral algorithms, were developed in order to discriminate between sound and discolored lettuce. Reflectance spectra for sound and discolored lettuce surfaces were extracted from hyperspectral reflectance images obtained in the 400–1000 nm wavelength range. The optimal wavebands for discriminating between discolored and sound lettuce surfaces were determined using one-way analysis of variance. Multi-spectral imaging algorithms developed using ratio and subtraction functions resulted in enhanced classification accuracy of above 99.9% for discolored and sound areas on both adaxial and abaxial lettuce surfaces. Ratio imaging (RI) and subtraction imaging (SI) algorithms at wavelengths of 552/701 nm and 557–701 nm, respectively, exhibited better classification performances compared to results obtained for all possible two-waveband combinations. These results suggest that hyperspectral reflectance imaging techniques can potentially be used to discriminate between discolored and sound fresh-cut lettuce. PMID:26610510

  4. Detection of Lettuce Discoloration Using Hyperspectral Reflectance Imaging.

    PubMed

    Mo, Changyeun; Kim, Giyoung; Lim, Jongguk; Kim, Moon S; Cho, Hyunjeong; Cho, Byoung-Kwan

    2015-11-20

    Rapid visible/near-infrared (VNIR) hyperspectral imaging methods, employing both a single waveband algorithm and multi-spectral algorithms, were developed in order to discriminate between sound and discolored lettuce. Reflectance spectra for sound and discolored lettuce surfaces were extracted from hyperspectral reflectance images obtained in the 400-1000 nm wavelength range. The optimal wavebands for discriminating between discolored and sound lettuce surfaces were determined using one-way analysis of variance. Multi-spectral imaging algorithms developed using ratio and subtraction functions resulted in enhanced classification accuracy of above 99.9% for discolored and sound areas on both adaxial and abaxial lettuce surfaces. Ratio imaging (RI) and subtraction imaging (SI) algorithms at wavelengths of 552/701 nm and 557-701 nm, respectively, exhibited better classification performances compared to results obtained for all possible two-waveband combinations. These results suggest that hyperspectral reflectance imaging techniques can potentially be used to discriminate between discolored and sound fresh-cut lettuce.

  5. A multi-band spectral subtraction-based algorithm for real-time noise cancellation applied to gunshot acoustics

    NASA Astrophysics Data System (ADS)

    Ramos, António L. L.; Holm, Sverre; Gudvangen, Sigmund; Otterlei, Ragnvald

    2013-06-01

    Acoustical sniper positioning is based on the detection and direction-of-arrival estimation of the shockwave and the muzzle blast acoustical signals. In real-life situations, the detection and direction-of-arrival estimation processes are usually performed under the influence of background noise sources, e.g., vehicle noise, and might suffer non-negligible inaccuracies that can affect the system performance and reliability negatively, especially when detecting the muzzle sound at long range and over absorbing terrain. This paper introduces a multi-band spectral subtraction based algorithm for real-time noise reduction, applied to gunshot acoustical signals. The ballistic shockwave and the muzzle blast signals exhibit distinct frequency contents that are affected differently by additive noise. In most real situations, the noise component is colored, and a multi-band spectral subtraction approach for noise reduction contributes to reducing the presence of artifacts in denoised signals. The proposed algorithm is tested using a dataset generated by combining signals from real gunshots and real vehicle noise. The noise component was generated using a steel-tracked military tank running on asphalt and includes, therefore, the sound from the vehicle engine, which varies slightly in frequency over time according to the engine's rpm, and the sound from the steel tracks as the vehicle moves.
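
    As a rough illustration of the multi-band idea for colored noise, the sketch below applies a separate over-subtraction factor per frequency band; the band edges and per-band factors are assumptions for illustration only, not values from the paper.

      import numpy as np

      def multiband_subtract(mag, noise_mag,
                             edges=(0, 32, 128, 512),
                             deltas=(1.0, 2.5, 1.5)):
          """Spectral subtraction with a per-band over-subtraction factor."""
          out = np.empty_like(mag)
          for (lo, hi), delta in zip(zip(edges[:-1], edges[1:]), deltas):
              band_snr = 10 * np.log10(np.sum(mag[lo:hi]**2) /
                                       (np.sum(noise_mag[lo:hi]**2) + 1e-12))
              alpha = np.clip(4.0 - band_snr / 5.0, 1.0, 6.0) * delta
              out[lo:hi] = np.maximum(mag[lo:hi] - alpha * noise_mag[lo:hi],
                                      0.01 * mag[lo:hi])  # spectral floor
          return out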

  6. SEARCHING FOR THE HR 8799 DEBRIS DISK WITH HST/STIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerard, B.; Marois, C.; Tannock, M.

    We present a new algorithm for space telescope high contrast imaging of close-to-face-on planetary disks called Optimized Spatially Filtered (OSFi) normalization. This algorithm is used on HR 8799 Hubble Space Telescope (HST) Space Telescope Imaging Spectrograph (STIS) coronagraphic archival data, showing an over-luminosity after reference star point-spread function (PSF) subtraction that may be from the inner disk and/or planetesimal belt components of this system. The PSF-subtracted radial profiles in two separate epochs from 2011 and 2012 are consistent with one another, and self-subtraction shows no residual in either epoch. We explore a number of possible false-positive scenarios that could explain this residual flux, including telescope breathing, spectral differences between HR 8799 and the reference star, imaging of the known warm inner disk component, OSFi algorithm throughput and consistency with the standard spider normalization HST PSF subtraction technique, and coronagraph misalignment from pointing accuracy. In comparison to another similar STIS data set, we find that the over-luminosity is likely a result of telescope breathing and spectral differences between HR 8799 and the reference star. Thus, assuming a non-detection, we derive upper limits on the HR 8799 dust belt mass in small grains. In this scenario, we find that the flux of these micron-sized dust grains leaving the system due to radiation pressure is small enough to be consistent with measurements of other debris disk halos.

  7. Speech enhancement based on modified phase-opponency detectors

    NASA Astrophysics Data System (ADS)

    Deshmukh, Om D.; Espy-Wilson, Carol Y.

    2005-09-01

    A speech enhancement algorithm based on a neural model was presented by Deshmukh et al. [149th meeting of the Acoustical Society of America, 2005]. The algorithm consists of a bank of Modified Phase Opponency (MPO) filter pairs tuned to different center frequencies. This algorithm is able to enhance salient spectral features in speech signals even at low signal-to-noise ratios. However, the algorithm introduces musical noise and sometimes misses a spectral peak that is close in frequency to a stronger spectral peak. A refinement in the design of the MPO filters was recently made that takes advantage of the falling spectrum of the speech signal in sonorant regions. The modified set of filters leads to better separation of the noise and speech signals, and more accurate enhancement of spectral peaks. The improvements also lead to a significant reduction in musical noise. Continuity algorithms based on the properties of speech signals are used to further reduce the musical noise effect. The efficiency of the proposed method in enhancing the speech signal when the level of the background noise is fluctuating will be demonstrated. The performance of the improved speech enhancement method will be compared with various spectral subtraction-based methods. [Work supported by NSF BCS0236707.]

  8. Speech enhancement on smartphone voice recording

    NASA Astrophysics Data System (ADS)

    Tris Atmaja, Bagus; Nur Farid, Mifta; Arifianto, Dhany

    2016-11-01

    Speech enhancement is a challenging task in audio signal processing: enhancing the quality of a targeted speech signal while suppressing other noise. Speech enhancement algorithms have developed rapidly, from spectral subtraction and Wiener filtering through the spectral amplitude MMSE estimator to Non-negative Matrix Factorization (NMF). The smartphone, a revolutionary device, is now being used in all aspects of life, including journalism, both personally and professionally. Although many smartphones have two microphones (main and rear), only the main microphone is widely used for voice recording; this is why the single-channel NMF algorithm is widely used for this speech enhancement purpose. This paper evaluates speech enhancement on smartphone voice recordings using the algorithms mentioned previously. We also extend the NMF algorithm to Kullback-Leibler NMF with supervised separation. The last algorithm shows improved results compared to the others under spectrogram and PESQ score evaluation.
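
    A minimal sketch of the supervised KL-NMF separation step is given below; the dictionaries W_s and W_n are assumed to be pre-trained on clean speech and noise examples, and only the activations are updated on the noisy magnitude spectrogram. This is an illustrative reconstruction, not the authors' code.

      import numpy as np

      def kl_nmf_separate(V, W_s, W_n, n_iter=100, eps=1e-10):
          """V: noisy magnitude spectrogram; W_s, W_n: fixed dictionaries."""
          W = np.hstack([W_s, W_n])
          H = np.abs(np.random.rand(W.shape[1], V.shape[1]))
          for _ in range(n_iter):  # multiplicative updates for KL divergence
              H *= (W.T @ (V / (W @ H + eps))) / (W.T.sum(axis=1, keepdims=True) + eps)
          k = W_s.shape[1]
          S = W_s @ H[:k]               # speech component estimate
          return V * S / (W @ H + eps)  # Wiener-style mask on the mixture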

  9. New algorithm for lossless hyper-spectral image compression with mixing transform to eliminate redundancy

    NASA Astrophysics Data System (ADS)

    Xie, ChengJun; Xu, Lin

    2008-03-01

    This paper presents a new algorithm based on a mixing transform to eliminate redundancy: SHIRCT and a subtraction mixing transform are used to eliminate spectral redundancy, and 2D-CDF(2,2)DWT to eliminate spatial redundancy. This transform is convenient to realize in hardware, since it can be fully implemented by add and shift operations. Its redundancy elimination effect is better than that of (1D+2D)CDF(2,2)DWT. An improved SPIHT+CABAC mixed compression coding algorithm is used to implement the compression coding. The experimental results show that in lossless image compression applications the effect of this method is slightly better than the result acquired using (1D+2D)CDF(2,2)DWT + improved SPIHT+CABAC, and it is much better than the results acquired by JPEG-LS, WinZip, ARJ, DPCM, the research achievements of a research team of the Chinese Academy of Sciences, NMST and MST. Using the hyper-spectral image Canal of the American JPL laboratory as the data set for the lossless compression test, on average the compression ratio of this algorithm exceeds the above algorithms by 42%, 37%, 35%, 30%, 16%, 13%, and 11%, respectively.

  10. Dereverberation and denoising based on generalized spectral subtraction by multi-channel LMS algorithm using a small-scale microphone array

    NASA Astrophysics Data System (ADS)

    Wang, Longbiao; Odani, Kyohei; Kai, Atsuhiko

    2012-12-01

    A blind dereverberation method based on power spectral subtraction (SS) using a multi-channel least mean squares algorithm was previously proposed to suppress reverberant speech without additive noise. The results of isolated word speech recognition experiments showed that this method achieved significant improvements over conventional cepstral mean normalization (CMN) in a reverberant environment. In this paper, we propose a blind dereverberation method based on generalized spectral subtraction (GSS), which has been shown to be effective for noise reduction, instead of power SS. Furthermore, we extend the missing feature theory (MFT), which was initially proposed to enhance robustness against additive noise, to dereverberation. A one-stage dereverberation and denoising method based on GSS is presented to simultaneously suppress both the additive noise and nonstationary multiplicative noise (reverberation). The proposed dereverberation method based on GSS with MFT is evaluated on a large vocabulary continuous speech recognition task. When the additive noise was absent, the dereverberation method based on GSS with MFT using only 2 microphones achieves relative word error reduction rates of 11.4% and 32.6% compared to the dereverberation method based on power SS and the conventional CMN, respectively. For reverberant and noisy speech, the dereverberation and denoising method based on GSS achieves a relative word error reduction rate of 12.8% compared to the conventional CMN with a GSS-based additive noise reduction method. We also analyze the factors affecting the compensation parameter estimation for the SS-based dereverberation method, such as the number of channels (the number of microphones), the length of reverberation to be suppressed, and the length of the utterance used for parameter estimation. The experimental results showed that the SS-based method is robust in a variety of reverberant environments for both isolated and continuous speech recognition and under various parameter estimation conditions.
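
    In generalized spectral subtraction the subtraction is carried out on spectra raised to an exponent, with power SS as the special case of exponent 2. A one-function numpy sketch follows; the exponent, over-subtraction factor, and floor values are illustrative assumptions.

      import numpy as np

      def gss(mag, noise_mag, gamma=0.5, alpha=1.0, beta=0.01):
          """Generalized spectral subtraction; gamma=2 recovers power SS."""
          diff = mag**gamma - alpha * noise_mag**gamma
          return np.maximum(diff, beta * noise_mag**gamma) ** (1.0 / gamma)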

  11. Hyper-spectral image compression algorithm based on mixing transform of wave band grouping to eliminate redundancy

    NASA Astrophysics Data System (ADS)

    Xie, ChengJun; Xu, Lin

    2008-03-01

    This paper presents an algorithm based on a mixing transform with wave-band grouping to eliminate spectral redundancy. The algorithm adapts to the correlation differences between different spectral bands, and it still works well when the band number is not a power of 2. Non-boundary-extension CDF(2,2)DWT and a subtraction mixing transform are used to eliminate spectral redundancy, CDF(2,2)DWT is employed to eliminate spatial redundancy, and SPIHT+CABAC is used for compression coding; the experiments show that a satisfactory lossless compression result can be achieved. Using the hyper-spectral image Canal of the American JPL laboratory as the data set for the lossless compression test, when the band number is not a power of 2, the lossless compression result of this algorithm is much better than the results acquired by JPEG-LS, WinZip, ARJ, DPCM, the research achievements of a research team of the Chinese Academy of Sciences, Minimum Spanning Tree and Near Minimum Spanning Tree; on average the compression ratio of this algorithm exceeds the above algorithms by 41%, 37%, 35%, 29%, 16%, 10%, and 8%, respectively. When the band number is a power of 2, for the 128 frames of the image Canal, taking 8, 16 and 32 respectively as the group sizes and considering factors such as compression storage complexity, the type of wave band, and the compression effect, we suggest using 8 bands per group to achieve a better compression effect. The algorithm of this paper has advantages in operation speed and hardware realization convenience.

  12. Subjective comparison and evaluation of speech enhancement algorithms

    PubMed Central

    Hu, Yi; Loizou, Philipos C.

    2007-01-01

    Making meaningful comparisons between the performance of the various speech enhancement algorithms proposed over the years has been elusive due to the lack of a common speech database, differences in the types of noise used, and differences in testing methodology. To facilitate such comparisons, we report on the development of a noisy speech corpus suitable for evaluation of speech enhancement algorithms. This corpus is subsequently used for the subjective evaluation of 13 speech enhancement methods encompassing four classes of algorithms: spectral subtractive, subspace, statistical-model based, and Wiener-type algorithms. The subjective evaluation was performed by Dynastat, Inc. using the ITU-T P.835 methodology designed to evaluate speech quality along three dimensions: signal distortion, noise distortion, and overall quality. This paper reports the results of the subjective tests. PMID:18046463

  13. Comparison of Three Instructional Sequences for the Addition and Subtraction Algorithms. Technical Report 273.

    ERIC Educational Resources Information Center

    Wiles, Clyde A.

    The study's purpose was to investigate the differential effects on the achievement of second-grade students that could be attributed to three instructional sequences for the learning of the addition and subtraction algorithms. One sequence presented the addition algorithm first (AS), the second presented the subtraction algorithm first (SA), and…

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sedlacek, A. J., III; Finfrock, C.

    As a member of the science-support part of the ITT-led LISA development program, BNL is tasked with the acquisition of UV Raman spectral fingerprints and associated scattering cross-sections for those chemicals of interest to the program's sponsor. In support of this role, the present report contains the first installment of UV Raman spectral fingerprint data on the initial subset of chemicals. Because of the unique nature associated with the acquisition of spectral fingerprints for use in spectral pattern matching algorithms (i.e., CLS, PLS, ANN), great care has been undertaken to maximize the signal-to-noise ratio and to minimize unnecessary spectral subtractions, in an effort to provide the highest quality spectral fingerprints. This report is divided into 4 sections. The first is an Experimental section that outlines how the Raman spectra are acquired. This is then followed by a section on Sample Handling. Following this, the spectral fingerprints are presented in the Results section, where the data reduction process is outlined. Finally, a Photographs section is included.

  15. Mental Computation or Standard Algorithm? Children's Strategy Choices on Multi-Digit Subtractions

    ERIC Educational Resources Information Center

    Torbeyns, Joke; Verschaffel, Lieven

    2016-01-01

    This study analyzed children's use of mental computation strategies and the standard algorithm on multi-digit subtractions. Fifty-eight Flemish 4th graders of varying mathematical achievement level were individually offered subtractions that either stimulated the use of mental computation strategies or the standard algorithm in one choice and two…

  16. Advanced Background Subtraction Applied to Aeroacoustic Wind Tunnel Testing

    NASA Technical Reports Server (NTRS)

    Bahr, Christopher J.; Horne, William C.

    2015-01-01

    An advanced form of background subtraction is presented and applied to aeroacoustic wind tunnel data. A variant of this method has seen use in other fields such as climatology and medical imaging. The technique, based on an eigenvalue decomposition of the background noise cross-spectral matrix, is robust against situations where isolated background auto-spectral levels are measured to be higher than levels of combined source and background signals. It also provides an alternate estimate of the cross-spectrum, which previously might have poor definition for low signal-to-noise ratio measurements. Simulated results indicate similar performance to conventional background subtraction when the subtracted spectra are weaker than the true contaminating background levels. Superior performance is observed when the subtracted spectra are stronger than the true contaminating background levels. Experimental results show limited success in recovering signal behavior for data where conventional background subtraction fails. They also demonstrate the new subtraction technique's ability to maintain a proper coherence relationship in the modified cross-spectral matrix. Beam-forming and de-convolution results indicate the method can successfully separate sources. Results also show a reduced need for the use of diagonal removal in phased array processing, at least for the limited data sets considered.
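
    A hedged sketch of the eigenvalue-based idea: subtract the background cross-spectral matrix (CSM), then project the difference back onto the positive-semidefinite cone so that auto-spectral levels cannot go negative. This illustrates the concept rather than the authors' exact formulation.

      import numpy as np

      def subtract_background_csm(csm_total, csm_background):
          diff = csm_total - csm_background    # naive subtraction
          w, v = np.linalg.eigh(diff)          # Hermitian eigendecomposition
          w = np.maximum(w, 0.0)               # clip negative eigenvalues
          return (v * w) @ v.conj().T          # PSD estimate of the source CSM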

  17. Digital Noise Reduction: An Overview

    PubMed Central

    Bentler, Ruth; Chiou, Li-Kuei

    2006-01-01

    Digital noise reduction schemes are being used in most hearing aids currently marketed. Unlike the earlier analog schemes, these manufacturer-specific algorithms are developed to acoustically analyze the incoming signal and alter the gain/output characteristics according to their predetermined rules. Although most are modulation-based schemes (ie, differentiating speech from noise based on temporal characteristics), spectral subtraction techniques are being applied as well. The purpose of this article is to overview these schemes in terms of their differences and similarities. PMID:16959731

  18. Communication system with adaptive noise suppression

    NASA Technical Reports Server (NTRS)

    Kozel, David (Inventor); Devault, James A. (Inventor); Birr, Richard B. (Inventor)

    2007-01-01

    A signal-to-noise ratio dependent adaptive spectral subtraction process eliminates noise from noise-corrupted speech signals. The process first pre-emphasizes the frequency components of the input sound signal which contain the consonant information in human speech. Next, a signal-to-noise ratio is determined and a spectral subtraction proportion adjusted appropriately. After spectral subtraction, low amplitude signals can be squelched. A single microphone is used to obtain both the noise-corrupted speech and the average noise estimate. This is done by determining if the frame of data being sampled is a voiced or unvoiced frame. During unvoiced frames an estimate of the noise is obtained. A running average of the noise is used to approximate the expected value of the noise. Spectral subtraction may be performed on a composite noise-corrupted signal, or upon individual sub-bands of the noise-corrupted signal. Pre-averaging of the input signal's magnitude spectrum over multiple time frames may be performed to reduce musical noise.

  19. Spectral characterization of plastic scintillation detector response as a function of magnetic field strength

    NASA Astrophysics Data System (ADS)

    Simiele, E.; Kapsch, R.-P.; Ankerhold, U.; Culberson, W.; DeWerd, L.

    2018-04-01

    The purpose of this work was to characterize intensity and spectral response changes in a plastic scintillation detector (PSD) as a function of magnetic field strength. Spectral measurements as a function of magnetic field strength were performed using an optical spectrometer. The response of both a PSD and a PMMA fiber was investigated to isolate the changes in response from the scintillator and the noise signal as a function of magnetic field strength. All irradiations were performed in water at a photon beam energy of 6 MV. Magnetic field strengths of (0, ±0.35, ±0.70, ±1.05, and ±1.40) T were investigated. Four noise subtraction techniques were investigated to evaluate the impact on the resulting noise-subtracted scintillator response with magnetic field strength. The noise subtraction methods included direct spectral subtraction, the spectral method, and variants thereof. The PMMA fiber exhibited changes in response of up to 50% with magnetic field strength due to the directional light emission from Čerenkov radiation. The PSD showed increases in response of up to 10% when not corrected for the noise signal, which agrees with previous investigations of scintillator response in magnetic fields. Decreases in the Čerenkov light ratio with negative field strength were observed, with a maximum change at -1.40 T of 3.2% compared to 0 T. The change in the noise-subtracted PSD response as a function of magnetic field strength varied with the noise subtraction technique used. Even after noise subtraction, the PSD exhibited changes in response of up to 5.5% over the four noise subtraction methods investigated.

  20. Verification of IEEE Compliant Subtractive Division Algorithms

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.; Leathrum, James F., Jr.

    1996-01-01

    A parameterized definition of subtractive floating point division algorithms is presented and verified using PVS. The general algorithm is proven to satisfy a formal definition of an IEEE standard for floating point arithmetic. The utility of the general specification is illustrated using a number of different instances of the general algorithm.

  1. New subtraction algorithms for evaluation of lesions on dynamic contrast-enhanced MR mammography.

    PubMed

    Choi, Byung Gil; Kim, Hak Hee; Kim, Euy Neyng; Kim, Bum-soo; Han, Ji-Youn; Yoo, Seung-Schik; Park, Seog Hee

    2002-12-01

    We report new subtraction algorithms for the detection of lesions in dynamic contrast-enhanced MR mammography (CE MRM). Twenty-five patients with suspicious breast lesions underwent dynamic CE MRM using 3D fast low-angle shot. After the acquisition of the T1-weighted scout images, dynamic images were acquired six times after the bolus injection of contrast media. Serial subtractions, step-by-step subtractions, and reverse subtractions were performed. Two radiologists attempted to differentiate benign from malignant lesions in consensus. The sensitivity, specificity, and accuracy of the method for differentiating malignant tumors from benign lesions were 85.7%, 100%, and 96%, respectively. Subtraction images allowed for better visualization of the enhancement, as well as its temporal pattern, than visual inspection of the dynamic images alone. Our findings suggest that the new subtraction algorithm is adequate for screening malignant breast lesions and can potentially replace the time-intensity profile analysis on user-selected regions of interest.

  2. DFT Calculation of IR Absorption Spectra for PCE-nH2O, TCE-nH2O, DCE-nH2O, VC-nH2O for Small and Water-Dominated Molecular Clusters

    DTIC Science & Technology

    2017-10-31

    of isolated molecules and that of bulk systems. DFT calculated absorption spectra represent quantitative estimates that can be correlated with...spectra, can be correlated with the presence of these hydrocarbons (see reference [1]). Accordingly, the molecular structure and IR absorption spectra of...associated with different types of ambient molecules, e.g., H2O, in order to apply background subtraction or spectral-signature-correlation algorithms

  3. Computer image processing: Geologic applications

    NASA Technical Reports Server (NTRS)

    Abrams, M. J.

    1978-01-01

    Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, and (2) normalization using ground spectral measurements. Of the two, the first technique proved to be the more successful for removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm could be applied to both frames, and there is no seam where the two images are joined.

  4. Informed baseline subtraction of proteomic mass spectrometry data aided by a novel sliding window algorithm.

    PubMed

    Stanford, Tyman E; Bagley, Christopher J; Solomon, Patty J

    2016-01-01

    Proteomic matrix-assisted laser desorption/ionisation (MALDI) linear time-of-flight (TOF) mass spectrometry (MS) may be used to produce protein profiles from biological samples with the aim of discovering biomarkers for disease. However, the raw protein profiles suffer from several sources of bias or systematic variation which need to be removed via pre-processing before meaningful downstream analysis of the data can be undertaken. Baseline subtraction, an early pre-processing step that removes the non-peptide signal from the spectra, is complicated by the following: (i) each spectrum has, on average, wider peaks for peptides with higher mass-to-charge ratios (m/z), and (ii) the time-consuming and error-prone trial-and-error process for optimising the baseline subtraction input arguments. With reference to the aforementioned complications, we present an automated pipeline that includes (i) a novel 'continuous' line segment algorithm that efficiently operates over data with a transformed m/z-axis to remove the relationship between peptide mass and peak width, and (ii) an input-free algorithm to estimate peak widths on the transformed m/z scale. The automated baseline subtraction method was deployed on six publicly available proteomic MS datasets using six different m/z-axis transformations. Optimality of the automated baseline subtraction pipeline was assessed quantitatively using the mean absolute scaled error (MASE) when compared to a gold-standard baseline subtracted signal. Several of the transformations investigated were able to reduce, if not entirely remove, the peak width and peak location relationship, resulting in near-optimal baseline subtraction using the automated pipeline. The proposed novel 'continuous' line segment algorithm is shown to far outperform naive sliding window algorithms with regard to the computational time required. The improvement in computational time was at least four-fold on real MALDI TOF-MS data and at least an order of magnitude on many simulated datasets. The advantages of the proposed pipeline include informed and data-specific input arguments for baseline subtraction methods, the avoidance of time-intensive and subjective piecewise baseline subtraction, and the ability to automate baseline subtraction completely. Moreover, individual steps can be adopted as stand-alone routines.
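
    For context, a naive sliding-window baseline subtraction of the kind the proposed 'continuous' line segment algorithm is designed to outperform can be written in a few lines; the window size here is an assumption and would in practice be tied to the estimated peak width on the transformed m/z axis.

      import numpy as np
      from scipy.ndimage import minimum_filter1d, uniform_filter1d

      def baseline_subtract(intensity, window=201):
          """Rolling-minimum baseline, lightly smoothed, then subtracted."""
          baseline = minimum_filter1d(intensity, size=window, mode='nearest')
          baseline = uniform_filter1d(baseline, size=window, mode='nearest')
          return intensity - baseline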

  5. Noise suppression methods for robust speech processing

    NASA Astrophysics Data System (ADS)

    Boll, S. F.; Ravindra, H.; Randall, G.; Armantrout, R.; Power, R.

    1980-05-01

    Robust speech processing in practical operating environments requires effective environmental and processor noise suppression. This report describes the technical findings and accomplishments during this reporting period for the research program funded to develop real-time, compressed speech analysis-synthesis algorithms whose performance is invariant under signal contamination. Fulfillment of this requirement is necessary to ensure reliable, secure, compressed speech transmission within realistic military command and control environments. Overall contributions resulting from this research program include the understanding of how environmental noise degrades narrow-band coded speech, development of appropriate real-time noise suppression algorithms, and development of speech parameter identification methods that consider signal contamination as a fundamental element in the estimation process. This report describes the current research and results in the areas of noise suppression using dual input adaptive noise cancellation and short-time Fourier transform algorithms, articulation rate change techniques, and a description of an experiment which demonstrated that the spectral subtraction noise suppression algorithm can improve the intelligibility of 2400 bps, LPC-10 coded, helicopter speech by 10.6 points.

  6. Fast sparse Raman spectral unmixing for chemical fingerprinting and quantification

    NASA Astrophysics Data System (ADS)

    Yaghoobi, Mehrdad; Wu, Di; Clewes, Rhea J.; Davies, Mike E.

    2016-10-01

    Raman spectroscopy is a well-established spectroscopic method for the detection of condensed phase chemicals. It is based on scattered light from exposure of a target material to a narrowband laser beam. The information generated enables presumptive identification from measuring correlation with library spectra. Whilst this approach is successful in identification of chemical information of samples with one component, it is more difficult to apply to spectral mixtures. The capability of handling spectral mixtures is crucial for defence and security applications as hazardous materials may be present as mixtures due to the presence of degradation, interferents or precursors. A novel method for spectral unmixing is proposed here. Most modern decomposition techniques are based on the sparse decomposition of mixture and the application of extra constraints to preserve the sum of concentrations. These methods have often been proposed for passive spectroscopy, where spectral baseline correction is not required. Most successful methods are computationally expensive, e.g. convex optimisation and Bayesian approaches. We present a novel low complexity sparsity based method to decompose the spectra using a reference library of spectra. It can be implemented on a hand-held spectrometer in near to real-time. The algorithm is based on iteratively subtracting the contribution of selected spectra and updating the contribution of each spectrum. The core algorithm is called fast non-negative orthogonal matching pursuit, which has been proposed by the authors in the context of nonnegative sparse representations. The iteration terminates when the maximum number of expected chemicals has been found or the residual spectrum has a negligible energy, i.e. in the order of the noise level. A backtracking step removes the least contributing spectrum from the list of detected chemicals and reports it as an alternative component. This feature is particularly useful in detection of chemicals with small contributions, which are normally not detected. The proposed algorithm is easily reconfigurable to include new library entries and optional preferential threat searches in the presence of predetermined threat indicators. Under Ministry of Defence funding, we have demonstrated the algorithm for fingerprinting and rough quantification of the concentration of chemical mixtures using a set of reference spectral mixtures. In our experiments, the algorithm successfully managed to detect the chemicals with concentrations below 10 percent. The running time of the algorithm is in the order of one second, using a single core of a desktop computer.
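
    A plain (not "fast") non-negative orthogonal matching pursuit over a spectral library can be sketched as follows; the names and stopping rule are illustrative assumptions, and the authors' fast variant avoids the repeated least-squares refit shown here.

      import numpy as np
      from scipy.optimize import nnls

      def nn_omp(y, library, max_atoms=5, tol=1e-3):
          """y: measured spectrum; library: columns are reference spectra."""
          support, residual, coef = [], y.copy(), np.zeros(0)
          while len(support) < max_atoms and \
                  np.linalg.norm(residual) > tol * np.linalg.norm(y):
              corr = library.T @ residual
              k = int(np.argmax(corr))
              if corr[k] <= 0:
                  break  # no remaining spectrum contributes positively
              support.append(k)
              coef, _ = nnls(library[:, support], y)  # non-negative LS refit
              residual = y - library[:, support] @ coef
          return support, coef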

  7. Robust Speech Enhancement Using Two-Stage Filtered Minima Controlled Recursive Averaging

    NASA Astrophysics Data System (ADS)

    Ghourchian, Negar; Selouani, Sid-Ahmed; O'Shaughnessy, Douglas

    In this paper we propose an algorithm for estimating noise in highly non-stationary noisy environments, which is a challenging problem in speech enhancement. This method is based on minima-controlled recursive averaging (MCRA) whereby an accurate, robust and efficient noise power spectrum estimation is demonstrated. We propose a two-stage technique to prevent the appearance of musical noise after enhancement. This algorithm filters the noisy speech to achieve a robust signal with minimum distortion in the first stage. Subsequently, it estimates the residual noise using MCRA and removes it with spectral subtraction. The proposed Filtered MCRA (FMCRA) performance is evaluated using objective tests on the Aurora database under various noisy environments. These measures indicate the higher output SNR and lower output residual noise and distortion.

  8. Improving and Assessing Planet Sensitivity of the GPI Exoplanet Survey with a Forward Model Matched Filter

    NASA Astrophysics Data System (ADS)

    Ruffio, Jean-Baptiste; Macintosh, Bruce; Wang, Jason J.; Pueyo, Laurent; Nielsen, Eric L.; De Rosa, Robert J.; Czekala, Ian; Marley, Mark S.; Arriaga, Pauline; Bailey, Vanessa P.; Barman, Travis; Bulger, Joanna; Chilcote, Jeffrey; Cotten, Tara; Doyon, Rene; Duchêne, Gaspard; Fitzgerald, Michael P.; Follette, Katherine B.; Gerard, Benjamin L.; Goodsell, Stephen J.; Graham, James R.; Greenbaum, Alexandra Z.; Hibon, Pascale; Hung, Li-Wei; Ingraham, Patrick; Kalas, Paul; Konopacky, Quinn; Larkin, James E.; Maire, Jérôme; Marchis, Franck; Marois, Christian; Metchev, Stanimir; Millar-Blanchaer, Maxwell A.; Morzinski, Katie M.; Oppenheimer, Rebecca; Palmer, David; Patience, Jennifer; Perrin, Marshall; Poyneer, Lisa; Rajan, Abhijith; Rameau, Julien; Rantakyrö, Fredrik T.; Savransky, Dmitry; Schneider, Adam C.; Sivaramakrishnan, Anand; Song, Inseok; Soummer, Remi; Thomas, Sandrine; Wallace, J. Kent; Ward-Duong, Kimberly; Wiktorowicz, Sloane; Wolff, Schuyler

    2017-06-01

    We present a new matched-filter algorithm for direct detection of point sources in the immediate vicinity of bright stars. The stellar point-spread function (PSF) is first subtracted using a Karhunen-Loéve image processing (KLIP) algorithm with angular and spectral differential imaging (ADI and SDI). The KLIP-induced distortion of the astrophysical signal is included in the matched-filter template by computing a forward model of the PSF at every position in the image. To optimize the performance of the algorithm, we conduct extensive planet injection and recovery tests and tune the exoplanet spectra template and KLIP reduction aggressiveness to maximize the signal-to-noise ratio (S/N) of the recovered planets. We show that only two spectral templates are necessary to recover any young Jovian exoplanets with minimal S/N loss. We also developed a complete pipeline for the automated detection of point-source candidates, the calculation of receiver operating characteristics (ROC), contrast curves based on false positives, and completeness contours. We process in a uniform manner more than 330 data sets from the Gemini Planet Imager Exoplanet Survey and assess GPI typical sensitivity as a function of the star and the hypothetical companion spectral type. This work allows for the first time a comparison of different detection algorithms at a survey scale accounting for both planet completeness and false-positive rate. We show that the new forward model matched filter allows the detection of 50% fainter objects than a conventional cross-correlation technique with a Gaussian PSF template for the same false-positive rate.
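
    The PSF-subtraction stage can be illustrated with a bare-bones KLIP projection: build eigenimages from a reference stack and remove the science frame's projection onto the leading modes. This is a generic sketch under simplifying assumptions, not the GPI pipeline.

      import numpy as np

      def klip_subtract(science, references, n_modes=10):
          """science: flattened image; references: (n_ref, n_pix) stack."""
          R = references - references.mean(axis=1, keepdims=True)
          s = science - science.mean()
          w, v = np.linalg.eigh(R @ R.T)           # reference covariance
          order = np.argsort(w)[::-1][:n_modes]    # keep leading modes
          Z = (v[:, order].T @ R) / np.sqrt(w[order] + 1e-12)[:, None]  # KL eigenimages
          return s - Z.T @ (Z @ s)                 # subtract stellar PSF model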

  9. Wind profiling for a coherent wind Doppler lidar by an auto-adaptive background subtraction approach.

    PubMed

    Wu, Yanwei; Guo, Pan; Chen, Siying; Chen, He; Zhang, Yinchao

    2017-04-01

    Auto-adaptive background subtraction (AABS) is proposed as a denoising method for data processing of coherent Doppler lidar (CDL), specifically for the low-signal-to-noise-ratio regime, in which drifting of the power spectral density of CDL data occurs. Unlike the periodogram maximum (PM) and adaptive iteratively reweighted penalized least squares (airPLS) methods, the proposed method presents reliable peaks and is thus advantageous in identifying peak locations. According to the analysis results of simulated and actually measured data, the proposed method outperforms the airPLS method and the PM algorithm in the furthest detectable range. The proposed method improves the detection range by up to approximately 16.7% and 40% when compared to the airPLS method and the PM method, respectively. It also has smaller mean wind velocity and standard error values than the airPLS and PM methods. The AABS approach improves the quality of Doppler shift estimates and can be applied to obtain whole wind profiles with the CDL.

  10. Unmanned Vehicle Guidance Using Video Camera/Vehicle Model

    NASA Technical Reports Server (NTRS)

    Sutherland, T.

    1999-01-01

    A video guidance sensor (VGS) system has flown on both STS-87 and STS-95 to validate a single camera/target concept for vehicle navigation. The main part of the image algorithm was the subtraction of two consecutive images using software. For a nominal size image of 256 x 256 pixels this subtraction can take a large portion of the time between successive frames in standard rate video leaving very little time for other computations. The purpose of this project was to integrate the software subtraction into hardware to speed up the subtraction process and allow for more complex algorithms to be performed, both in hardware and software.
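
    The software step that was moved into hardware amounts to frame differencing; a minimal sketch is shown below (the threshold value is an assumption for illustration).

      import numpy as np

      def frame_difference(prev_frame, curr_frame, threshold=20):
          """Binary change mask from two consecutive 8-bit frames."""
          diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
          return (diff > threshold).astype(np.uint8)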

  11. A comparative intelligibility study of single-microphone noise reduction algorithms.

    PubMed

    Hu, Yi; Loizou, Philipos C

    2007-09-01

    The evaluation of the intelligibility of noise reduction algorithms is reported. IEEE sentences and consonants were corrupted by four types of noise including babble, car, street and train at two signal-to-noise ratio levels (0 and 5 dB), and then processed by eight speech enhancement methods encompassing four classes of algorithms: spectral subtractive, subspace, statistical-model based and Wiener-type algorithms. The enhanced speech was presented to normal-hearing listeners for identification. With the exception of a single noise condition, no algorithm produced significant improvements in speech intelligibility. Information transmission analysis of the consonant confusion matrices indicated that no algorithm significantly improved the place feature score, which is critically important for speech recognition. The algorithms which were found in previous studies to perform the best in terms of overall quality were not the same algorithms that performed the best in terms of speech intelligibility. The subspace algorithm, for instance, was previously found to perform the worst in terms of overall quality, but performed well in the present study in terms of preserving speech intelligibility. Overall, the analysis of consonant confusion matrices suggests that in order for noise reduction algorithms to improve speech intelligibility, they need to improve the place and manner feature scores.

  12. Subtraction CT angiography in head and neck with low radiation and contrast dose dual-energy spectral CT using rapid kV-switching technique.

    PubMed

    Ma, Guangming; Yu, Yong; Duan, Haifeng; Dou, Yuequn; Jia, Yongjun; Zhang, Xirong; Yang, Chuangbo; Chen, Xiaoxia; Han, Dong; Guo, Changyi; He, Taiping

    2018-06-01

    To investigate the application of low radiation and contrast dose spectral CT angiography using the rapid kV-switching technique in the head and neck with a subtraction method for bone removal. This prospective study was approved by the local ethics committee. 64 cases for head and neck CT angiography were randomly divided into Groups A (n = 32) and B (n = 32). Group A underwent unenhanced CT with 100 kVp, 200 mA and contrast-enhanced CT in spectral CT mode with body mass index-dependent low dose protocols. Group B used conventional helical scanning with 120 kVp, auto mA for a noise index of 12 HU (Hounsfield units) for both the unenhanced and contrast-enhanced CT. Subtraction images were formed by subtracting the unenhanced images from the enhanced images (with the 65 keV-enhanced spectral CT image in Group A). CT numbers and their standard deviations in the aortic arch, carotid arteries, middle cerebral artery and air were measured in the subtraction images. The signal-to-noise ratio and contrast-to-noise ratio for the common and internal carotid arteries and middle cerebral artery were calculated. Image quality in terms of bone removal effect was evaluated by two experienced radiologists independently and blindly using a 4-point system. Radiation dose and total iodine load were recorded. Measurements were statistically compared between the two groups. The two groups had the same demographic characteristics. There was no difference in the CT number, signal-to-noise and contrast-to-noise ratio values for carotid arteries and middle cerebral artery in the subtraction images between the two groups (p > 0.05). However, the bone removal effect score [median (min-max)] in Group A [4 (3-4)] was rated better than in Group B [3 (2-4)] (p < 0.001), with excellent agreement between the two observers (κ > 0.80). The radiation dose in Group A (average of 2.64 mSv) was 57% lower than the 6.18 mSv in Group B (p < 0.001). The total iodine intake in Group A was 13.5 g, 36% lower than the 21 g in Group B. Spectral CT imaging with rapid kV-switching for subtraction angiography in the head and neck provides better bone removal with significantly reduced radiation and contrast dose compared with the conventional subtraction method. Advances in knowledge: This novel method provides better bone removal with significant radiation and contrast dose reduction compared with conventional subtraction CT, and may be used clinically to protect the thyroid gland and ocular lenses from unnecessarily high radiation.

  13. Transactional Algorithm for Subtracting Fractions: Go Shopping

    ERIC Educational Resources Information Center

    Pinckard, James Seishin

    2009-01-01

    The purpose of this quasi-experimental research study was to examine the effects of an alternative or transactional algorithm for subtracting mixed numbers within the middle school setting. Initial data were gathered from the student achievement of four mathematics teachers at three different school sites. The results indicated students who…

  14. Novel full-spectral flow cytometry with multiple spectrally-adjacent fluorescent proteins and fluorochromes and visualization of in vivo cellular movement.

    PubMed

    Futamura, Koji; Sekino, Masashi; Hata, Akihiro; Ikebuchi, Ryoyo; Nakanishi, Yasutaka; Egawa, Gyohei; Kabashima, Kenji; Watanabe, Takeshi; Furuki, Motohiro; Tomura, Michio

    2015-09-01

    Flow cytometric analysis with multicolor fluoroprobes is an essential method for detecting biological signatures of cells. Here, we present a new full-spectral flow cytometer (spectral-FCM). Unlike a conventional flow cytometer, this spectral-FCM acquires the emitted fluorescence of all probes across the full spectrum from each cell with a 32-channel sequential PMT unit after dispersion with a prism, and extracts the signals of each fluoroprobe based on its spectral shape using a unique algorithm, in a high-speed, highly sensitive, accurate, automatic and real-time manner. The spectral-FCM detects the continuous changes in emission spectra from green to red of the photoconvertible protein KikGR with high spectral resolution and separates spectrally-adjacent fluoroprobes, such as FITC (emission peak (Em) 519 nm) and EGFP (Em 507 nm). Moreover, the spectral-FCM can measure and subtract the autofluorescence of each cell, providing increased signal-to-noise ratios and improved resolution of dim samples, which leads to a transformative technology for the investigation of single-cell state and function. These advances make it possible to perform 11-color fluorescence analysis to visualize the movement of multilineage immune cells by using KikGR-expressing mice. Thus, the novel spectral flow cytometry improves the combinational use of spectrally-adjacent fluorescent proteins and multicolor fluorochromes in metabolically active cells for the investigation of not only the immune system but also other research and clinical fields. © 2015 International Society for Advancement of Cytometry.

  15. Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors

    PubMed Central

    Yao, Guangle; Lei, Tao; Zhong, Jiandan; Jiang, Ping; Jia, Wenwu

    2017-01-01

    Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important and common to many fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges, and the pixel-wise ground truth of the foreground (FG) for each frame is also provided. A series of experiments were conducted to evaluate BS algorithms on this proposed dataset. The overall performance of the BS algorithms and their processor/memory requirements were compared. Proper evaluation metrics or criteria were employed to evaluate the capability of each BS algorithm to handle the different kinds of BS challenges represented in this dataset. The results and conclusions in this paper provide valid references for developing new BS algorithms for remote scene IR video sequences, and some of them are not limited to remote scenes or IR video sequences but are generic for background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR. PMID:28837112

  16. Improving and Assessing Planet Sensitivity of the GPI Exoplanet Survey with a Forward Model Matched Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruffio, Jean-Baptiste; Macintosh, Bruce; Nielsen, Eric L.

    We present a new matched-filter algorithm for direct detection of point sources in the immediate vicinity of bright stars. The stellar point-spread function (PSF) is first subtracted using a Karhunen-Loéve image processing (KLIP) algorithm with angular and spectral differential imaging (ADI and SDI). The KLIP-induced distortion of the astrophysical signal is included in the matched-filter template by computing a forward model of the PSF at every position in the image. To optimize the performance of the algorithm, we conduct extensive planet injection and recovery tests and tune the exoplanet spectra template and KLIP reduction aggressiveness to maximize the signal-to-noise ratio (S/N) of the recovered planets. We show that only two spectral templates are necessary to recover any young Jovian exoplanets with minimal S/N loss. We also developed a complete pipeline for the automated detection of point-source candidates, the calculation of receiver operating characteristics (ROC), contrast curves based on false positives, and completeness contours. We process in a uniform manner more than 330 data sets from the Gemini Planet Imager Exoplanet Survey and assess GPI typical sensitivity as a function of the star and the hypothetical companion spectral type. This work allows for the first time a comparison of different detection algorithms at a survey scale accounting for both planet completeness and false-positive rate. We show that the new forward model matched filter allows the detection of 50% fainter objects than a conventional cross-correlation technique with a Gaussian PSF template for the same false-positive rate.

  17. Relationships between leaf chlorophyll content and spectral reflectance and algorithms for non-destructive chlorophyll assessment in higher plant leaves.

    PubMed

    Gitelson, Anatoly A; Gritz, Yuri; Merzlyak, Mark N

    2003-03-01

    Leaf chlorophyll content provides valuable information about physiological status of plants. Reflectance measurement makes it possible to quickly and non-destructively assess, in situ, the chlorophyll content in leaves. Our objective was to investigate the spectral behavior of the relationship between reflectance and chlorophyll content and to develop a technique for non-destructive chlorophyll estimation in leaves with a wide range of pigment content and composition using reflectance in a few broad spectral bands. Spectral reflectance of maple, chestnut, wild vine and beech leaves over a wide range of pigment content and composition was investigated. It was shown that the reciprocal reflectance (Rλ)⁻¹ in the spectral range λ from 520 to 550 nm and 695 to 705 nm related closely to the total chlorophyll content in leaves of all species. Subtraction of the near-infrared reciprocal reflectance, (RNIR)⁻¹, from (Rλ)⁻¹ made the index [(Rλ)⁻¹ − (RNIR)⁻¹] linearly proportional to the total chlorophyll content in the spectral ranges λ from 525 to 555 nm and from 695 to 725 nm, with coefficient of determination r² > 0.94. To adjust for differences in leaf structure, the product of the latter index and NIR reflectance, [(Rλ)⁻¹ − (RNIR)⁻¹]·RNIR, was used; this further increased the accuracy of the chlorophyll estimation in the ranges λ from 520 to 585 nm and from 695 to 740 nm. Two independent data sets were used to validate the developed algorithms. The root mean square error of the chlorophyll prediction did not exceed 50 μmol/m² in leaves with total chlorophyll ranging from 1 to 830 μmol/m².
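
    The published index can be computed directly from band-averaged reflectances. In the sketch below the band positions follow the abstract, while the array layout, the NIR band choice, and the band-averaging step are assumptions for illustration.

      import numpy as np

      def chlorophyll_index(R, wavelengths, band=(695, 725), nir=(750, 800)):
          """[(R_lambda)^-1 - (R_NIR)^-1] * R_NIR from mean band reflectances."""
          r_band = R[(wavelengths >= band[0]) & (wavelengths <= band[1])].mean()
          r_nir = R[(wavelengths >= nir[0]) & (wavelengths <= nir[1])].mean()
          return (1.0 / r_band - 1.0 / r_nir) * r_nir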

  18. Detection and Characterization of Exoplanets using Projections on Karhunen-Loeve Eigenimages: Forward Modeling

    NASA Astrophysics Data System (ADS)

    Pueyo, Laurent

    2016-01-01

    A new class of high-contrast image analysis algorithms that empirically fit and subtract systematic noise has led to recent discoveries of faint exoplanet/substellar companions and scattered-light images of circumstellar disks. The consensus emerging in the community is that these methods are extremely efficient at enhancing the detectability of faint astrophysical signals, but generally create systematic biases in their observed properties. This poster provides a solution to this outstanding problem. We present an analytical derivation of a linear expansion that captures the impact of astrophysical over/self-subtraction in current image analysis techniques. We examine the general case for which the reference images of the astrophysical scene move azimuthally and/or radially across the field of view as a result of the observation strategy. Our new method is based on perturbing the covariance matrix underlying any least-squares speckle problem and propagating this perturbation through the data analysis algorithm. This work is presented in the framework of Karhunen-Loeve Image Processing (KLIP), but it can be easily generalized to methods relying on linear combinations of images (instead of eigen-modes). Based on this linear expansion, obtained in the most general case, we then demonstrate practical applications of this new algorithm. We first consider the case of the spectral extraction of faint point sources in IFS data and illustrate, using public Gemini Planet Imager commissioning data, that our novel perturbation-based Forward Modeling (which we named KLIP-FM) can indeed alleviate algorithmic biases. We then apply KLIP-FM to the detection of point sources and show how it decreases the rate of false negatives while keeping the rate of false positives unchanged when compared to classical KLIP. This can potentially have important consequences for the design of follow-up strategies of ongoing direct imaging surveys.

  19. An HF and lower VHF spectrum assessment system exploiting instantaneously wideband capture

    NASA Astrophysics Data System (ADS)

    Barnes, Rod I.; Singh, Malkiat; Earl, Fred

    2017-09-01

    We report on a spectral environment evaluation and recording (SEER) system, for instantaneously wideband spectral capture and characterization in the HF and lower VHF band, utilizing a direct digital receiver coupled to a data recorder. The system is designed to contend with a wide variety of electromagnetic environments and to provide accurately calibrated spectral characterization and display from very short (ms) to synoptic scales. The system incorporates a novel RF front end involving automated gain and equalization filter selection which provides an analogue frequency-dependent gain characteristic that mitigates the high dynamic range found across the HF and lower VHF spectrum. The system accurately calibrates its own internal noise and automatically subtracts this from low variance, external spectral estimates, further extending the dynamic range over which robust characterization is possible. Laboratory and field experiments demonstrate that the implementation of these concepts has been effective. Sensitivity to varying antenna load impedance of the internal noise reduction process has been examined. Examples of software algorithms to provide extraction and visualization of spectral behavior over narrowband, wideband, short, and synoptic scales are provided. Application in HF noise spectral density monitoring, spectral signal strength assessment, and electromagnetic interference detection is possible with examples provided. The instantaneously full bandwidth collection provides some innovative applications, and this is demonstrated by the collection of discrete lightning emissions, which form fast ionograms called "flashagrams" in power-delay-frequency plots.

  20. Contexts for Column Addition and Subtraction

    ERIC Educational Resources Information Center

    Lopez Fernandez, Jorge M.; Velazquez Estrella, Aileen

    2011-01-01

    In this article, the authors discuss their approach to column addition and subtraction algorithms. Adapting an original idea of Paul Cobb and Erna Yackel's from "A Contextual Investigation of Three-Digit Addition and Subtraction" related to packing and unpacking candy in a candy factory, the authors provided an analogous context by…

  1. An automatic fuzzy-based multi-temporal brain digital subtraction angiography image fusion algorithm using curvelet transform and content selection strategy.

    PubMed

    Momeni, Saba; Pourghassem, Hossein

    2014-08-01

    Image fusion has recently come to play a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely used imaging techniques for diagnosing brain vascular diseases and for brain radiosurgery. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of vessel dispersion generated by the injected contrast material. Our proposed fusion scheme applies different fusion methods to the high- and low-frequency contents, based on the coefficient characteristics of the wrapping second-generation curvelet transform and a novel content selection strategy. The content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In the proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules to the high-frequency coefficients. For the low-frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. The proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of our proposed fusion algorithm in comparison with common and basic fusion algorithms.
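    The coefficient-level fusion idea can be illustrated with a short sketch. For self-containedness it uses a plain discrete wavelet transform (PyWavelets) in place of the paper's wrapping curvelet transform, and generic average/maximum rules rather than the fuzzy, correlation-based selection described above.

```python
import numpy as np
import pywt

def fuse_pair(img_a, img_b, wavelet="db2", level=3):
    """Coefficient-level fusion of two registered images.

    Generic rule set: average the low-pass (approximation) band, keep
    the maximum-magnitude coefficient in each high-pass (detail) band.
    """
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [0.5 * (ca[0] + cb[0])]                 # low frequency: average
    for da, db in zip(ca[1:], cb[1:]):              # detail bands per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```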

  2. Optimizing Energy Consumption in Vehicular Sensor Networks by Clustering Using Fuzzy C-Means and Fuzzy Subtractive Algorithms

    NASA Astrophysics Data System (ADS)

    Ebrahimi, A.; Pahlavani, P.; Masoumi, Z.

    2017-09-01

    Traffic monitoring and management in urban intelligent transportation systems (ITS) can be carried out using vehicular sensor networks. In a vehicular sensor network, vehicles equipped with sensors such as GPS can act as mobile sensors, sensing the urban traffic and sending reports to a traffic monitoring center (TMC) for traffic estimation. The energy consumed by the sensor nodes is a major problem in wireless sensor networks (WSNs); moreover, it is the most important consideration in designing these networks. Clustering the sensor nodes is considered an effective way to reduce the energy consumption of a WSN. Each cluster has a Cluster Head (CH) and a number of nodes located within its supervision area. The cluster heads are responsible for gathering and aggregating the information of their clusters and transmitting it to the data collection center. Hence, clustering decreases the volume of transmitted information and, consequently, reduces the energy consumption of the network. In this paper, the Fuzzy C-Means (FCM) and Fuzzy Subtractive algorithms are employed to cluster sensors, and their effect on the energy consumption of the sensors is investigated. The FCM and Fuzzy Subtractive algorithms reduced the energy consumption of the vehicle sensors by up to 90.68% and 92.18%, respectively; a comparison of the two thus shows an improvement of about 1.5 percentage points in favor of the Fuzzy Subtractive algorithm.
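    For readers unfamiliar with FCM, a minimal numpy implementation of the standard iteration follows; parameter names and defaults are ours, and the network-energy modeling of the paper is not included.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain Fuzzy C-Means; X is (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    # Random initial membership matrix, rows sum to one
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Squared distances of every point to every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        d2 = np.fmax(d2, 1e-12)
        # Membership update for the standard FCM objective
        inv = d2 ** (-1.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```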

  3. Image Processing Of Images From Peripheral-Artery Digital Subtraction Angiography (DSA) Studies

    NASA Astrophysics Data System (ADS)

    Wilson, David L.; Tarbox, Lawrence R.; Cist, David B.; Faul, David D.

    1988-06-01

    A system is being developed to test the possibility of performing peripheral digital subtraction angiography (DSA) with a single contrast injection using a moving-gantry system. Given the repositioning errors that occur between the mask and contrast-containing images, factors affecting the success of subtraction following image registration have been investigated theoretically and experimentally. For a 1 mm gantry displacement, parallax and geometric image distortion (pin-cushion) each give subtraction errors following registration that are approximately 25% of the error resulting from no registration. Image processing techniques improve the subtractions. The geometric distortion effect is reduced using a piecewise, eight-parameter unwarping method. Plots of image similarity measures versus pixel shift are well behaved and well fit by a parabola, leading to the development of an iterative, automatic registration algorithm that uses parabolic prediction of the new minimum. The registration algorithm converges quickly (in less than 1 second on a MicroVAX) and is relatively insensitive to the region of interest (ROI) selected.
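    The parabolic-prediction idea can be sketched in a few lines: sample the similarity cost at three shifts, fit a parabola, and jump to its vertex. This is a generic one-dimensional illustration under our own naming, not the authors' implementation.

```python
import numpy as np

def parabolic_min(shifts, costs):
    """Vertex of the parabola fitted through three (shift, cost) samples."""
    a, b, c = np.polyfit(shifts, costs, 2)
    return -b / (2.0 * a)          # minimum of a*x^2 + b*x + c

def register_1d(cost, x0=0.0, step=1.0, n_iter=10, tol=1e-3):
    """Iteratively refine a 1D shift by parabolic prediction.

    `cost` is any similarity measure of the shift, e.g. the SSD between
    the shifted contrast image and the mask.
    """
    x = x0
    for _ in range(n_iter):
        xs = np.array([x - step, x, x + step])
        x_new = parabolic_min(xs, np.array([cost(s) for s in xs]))
        if abs(x_new - x) < tol:
            return x_new
        x, step = x_new, step * 0.5   # shrink the bracket around the minimum
    return x
```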

  4. SU-D-17A-02: Four-Dimensional CBCT Using Conventional CBCT Dataset and Iterative Subtraction Algorithm of a Lung Patient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, E; Lasio, G; Yi, B

    2014-06-01

    Purpose: The Iterative Subtraction Algorithm (ISA) method retrospectively generates a pre-selected motion-phase cone-beam CT image from the full-motion cone-beam CT acquired at standard rotation speed. This work evaluates the ISA method with real lung patient data. Methods: The goal of the ISA algorithm is to extract the motion and no-motion components from the full-reconstruction CBCT. The workflow consists of subtracting from the full CBCT all of the undesired motion phases to obtain a motion-deblurred single-phase CBCT image, followed by iteration of this subtraction process. ISA is realized as follows: 1) The projections are sorted into the various phases, and from all phases a full reconstruction is performed to generate an image CTM. 2) Forward projections of CTM are generated at the desired-phase projection angles; the subtraction of the projections and the forward projections is used to reconstruct CTSub1, which diminishes the desired phase component. 3) By adding CTSub1 back to CTM, a no-motion CBCT, CTS1, can be computed. 4) CTS1 still contains a residual motion component. 5) This residual motion component can be further reduced by iteration. The ISA 4DCBCT technique was implemented using the Varian Trilogy accelerator OBI system. To evaluate the method, a lung patient CBCT dataset was used; the reconstruction algorithm is FDK. Results: The single-phase CBCT reconstruction generated via ISA successfully isolates the desired motion phase from the full-motion CBCT, effectively reducing motion blur. It also shows improved image quality, with reduced streak artifacts with respect to reconstructions from the unprocessed phase-sorted projections alone. Conclusion: A CBCT motion-deblurring algorithm, ISA, has been developed and evaluated with lung patient data. The algorithm allows improved visualization of a single motion phase extracted from a standard CBCT dataset. This study has been supported by the National Institutes of Health through R01CA133539.

  5. Topographic prominence discriminator for the detection of short-latency spikes of retinal ganglion cells

    NASA Astrophysics Data System (ADS)

    Choi, Myoung-Hwan; Ahn, Jungryul; Park, Dae Jin; Lee, Sang Min; Kim, Kwangsoo; Cho, Dong-il Dan; Senok, Solomon S.; Koo, Kyo-in; Goo, Yong Sook

    2017-02-01

    Objective. Direct stimulation of retinal ganglion cells in degenerate retinas by implanting epi-retinal prostheses is a recognized strategy for restoring visual perception in patients with retinitis pigmentosa or age-related macular degeneration. Elucidating the best stimulus-response paradigms in the laboratory using multielectrode arrays (MEA) is complicated by the fact that the short-latency spikes (within 10 ms) elicited by direct retinal ganglion cell (RGC) stimulation are obscured by the stimulus artifact generated by the electrical stimulator. Approach. We developed an artifact subtraction algorithm based on topographic prominence discrimination, wherein the duration of prominences within the stimulus artifact is used to identify the artifact for subtraction and to recover the obscured spikes, which are then quantified using standard thresholding. Main results. We found that the prominence-discrimination-based filters perform creditably in simulation, successfully isolating randomly inserted spikes in the presence of simple and even complex residual artifacts. We also show that the algorithm successfully isolated short-latency spikes in MEA-based recordings from degenerate mouse retinas, where the amplitude and frequency characteristics of the stimulus artifact vary with the distance of the recording electrode from the stimulating electrode. By ROC analysis of false-positive and false-negative first-spike detection rates in a dataset of one hundred and eight RGCs from four retinal patches, we found that the performance of our algorithm is comparable to that of a generally used artifact subtraction filter algorithm that uses a strategy of local polynomial approximation (SALPA). Significance. We conclude that topographic prominence discrimination is a valid and useful method for subtracting stimulation artifacts of variable amplitude and shape. We propose that our algorithm may be used stand-alone or as a supplement to other artifact subtraction algorithms such as SALPA.
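    A toy version of prominence-based artifact rejection can be written with SciPy's peak utilities, which expose peak prominences and widths directly. The thresholds below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import find_peaks, peak_widths

def remove_wide_artifacts(trace, prom=50.0, max_spike_width=10):
    """Toy prominence-based artifact rejection for an extracellular trace.

    Peaks whose prominence persists longer than max_spike_width samples
    are treated as stimulus artifact and bridged by linear interpolation;
    narrow prominences are kept as candidate spikes.
    """
    x = np.abs(trace)
    peaks, _ = find_peaks(x, prominence=prom)
    if len(peaks) == 0:
        return trace.astype(float)
    widths, _, lips, rips = peak_widths(x, peaks, rel_height=0.9)
    cleaned = trace.astype(float).copy()
    for w, l, r in zip(widths, lips, rips):
        if w > max_spike_width:               # long prominence: artifact
            l = max(int(np.floor(l)), 0)
            r = min(int(np.ceil(r)), len(cleaned) - 1)
            cleaned[l:r + 1] = np.linspace(cleaned[l], cleaned[r], r - l + 1)
    return cleaned
```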

  6. Miniature Color Display Phase 4

    DTIC Science & Technology

    1993-05-01

    …is used to generate full color. By spectral tuning of the xenon arc-lamp backlight and the color polarizers, a color gamut comparable to that of a… (The remainder of this record excerpt consists of table-of-contents fragments: Phase IV Accomplishments; Subtractive Color Gamut; Technical Achievements; Sub Color LC Technology.)

  7. Multivariate Spatial Condition Mapping Using Subtractive Fuzzy Cluster Means

    PubMed Central

    Sabit, Hakilo; Al-Anbuky, Adnan

    2014-01-01

    Wireless sensor networks are usually deployed to monitor given physical phenomena taking place in a specific space over a specific duration of time. The spatio-temporal distribution of these phenomena often correlates with certain physical events. To appropriately characterise these event-phenomena relationships over a given space for a given time frame, we require continuous monitoring of the conditions. WSNs are perfectly suited for these tasks due to their inherent robustness. This paper presents a subtractive fuzzy cluster means algorithm and its application to data stream mining for wireless sensor systems over a cloud-computing-like architecture, which we call sensor cloud data stream mining. Benchmarked against standard mining algorithms, the k-means and FCM algorithms, the subtractive fuzzy cluster means model is demonstrated to perform high-quality distributed data stream mining tasks comparable to centralised data stream mining. PMID:25313495
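    For readers unfamiliar with subtractive clustering, the following numpy sketch implements the classic potential-based center selection (in the style of Chiu's method, which subtractive fuzzy clustering builds on); the radii and acceptance threshold are generic defaults, not the paper's.

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, accept=0.5):
    """Chiu-style subtractive clustering on normalized data X of shape (n, d).

    Each point's potential is a sum of Gaussian contributions from all
    points; the highest-potential point becomes a center, and its
    neighborhood potential is subtracted before the next selection.
    """
    alpha = 4.0 / ra ** 2
    beta = 4.0 / (1.5 * ra) ** 2              # squash radius rb = 1.5 * ra
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    P = np.exp(-alpha * d2).sum(axis=1)
    p_ref = P.max()
    centers = []
    for _ in range(len(X)):                   # at most n centers
        k = int(np.argmax(P))
        if P[k] < accept * p_ref:
            break
        centers.append(X[k])
        # Remove this center's influence from the remaining potentials
        P = P - P[k] * np.exp(-beta * ((X - X[k]) ** 2).sum(axis=1))
    return np.array(centers)
```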

  8. The Application of Continuous Wavelet Transform Based Foreground Subtraction Method in 21 cm Sky Surveys

    NASA Astrophysics Data System (ADS)

    Gu, Junhua; Xu, Haiguang; Wang, Jingying; An, Tao; Chen, Wen

    2013-08-01

    We propose a continuous-wavelet-transform-based non-parametric foreground subtraction method for the detection of the redshifted 21 cm signal from the epoch of reionization. The method is based on the assumption that the foreground spectra are smooth in the frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures, so their characteristic scales are significantly different. We can therefore distinguish them easily in wavelet-coefficient space and perform the foreground subtraction. Compared with the traditional spectral-fitting-based method, our method is more tolerant of complex foregrounds. Furthermore, we find that when the instrument has uncorrected response errors, our method also performs significantly better than the spectral-fitting-based method. Our method obtains results similar to those of the Wp smoothing method, which is also non-parametric, but consumes much less computing time.

  9. Improving chlorophyll-a retrievals and cross-sensor consistency through the OCI algorithm concept

    NASA Astrophysics Data System (ADS)

    Feng, L.; Hu, C.; Lee, Z.; Franz, B. A.

    2016-02-01

    The recently developed band-subtraction-based OCI chlorophyll-a algorithm is more tolerant than the band-ratio OCx algorithms to errors from atmospheric correction and other sources in oligotrophic oceans (Chl ≤ 0.25 mg m-3), and it has been implemented by NASA as the default algorithm to produce global Chl data from all ocean color missions. However, two areas still require improvement in its current implementation. Firstly, the originally proposed algorithm switch between oligotrophic and more productive waters has been changed from 0.25-0.3 mg m-3 to 0.15-0.2 mg m-3 to account for an observed discontinuity in the data statistics. Additionally, the algorithm does not account for variable proportions of colored dissolved organic matter (CDOM) in different ocean basins. Here, new step-wise regression equations with fine-tuned regression coefficients are used to raise the algorithm switch zone and to improve the data statistics as well as the retrieval accuracy. A new CDOM index (CDI) based on three spectral bands (412, 443 and 490 nm) is used as a weighting factor to adjust the algorithm for the optical disparities between different oceans. The updated Chl OCI algorithm is then evaluated for its overall accuracy using field observations from the SeaBASS data archive, and for its cross-sensor consistency using multi-sensor observations over the global oceans. Keywords: Chlorophyll-a, Remote sensing, Ocean color, OCI, OCx, CDOM, MODIS, SeaWiFS, VIIRS
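    The color-index (CI) branch of the OCI concept can be sketched as follows. The band-subtraction index is the departure of the green band from a blue-to-red baseline; the regression coefficients below are the commonly quoted Hu et al. (2012) values and should be treated as illustrative, since the paper above retunes both the coefficients and the switch zone.

```python
def oci_chl(rrs443, rrs555, rrs670):
    """Sketch of the CI branch of the OCI chlorophyll-a algorithm.

    rrs443, rrs555, rrs670: remote-sensing reflectances (sr^-1) at the
    blue, green, and red bands. Returns Chl in mg m^-3; valid only in
    the oligotrophic branch (the OCx blend above the switch zone is
    not shown).
    """
    ci = rrs555 - (rrs443 + (555.0 - 443.0) / (670.0 - 443.0)
                   * (rrs670 - rrs443))
    return 10.0 ** (-0.4909 + 191.6590 * ci)   # illustrative coefficients
```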

  10. Developing Essential Understanding of Addition and Subtraction for Teaching Mathematics in Pre-K-Grade 2

    ERIC Educational Resources Information Center

    Karp, Karen; Caldwell, Janet; Zbiek, Rose Mary; Bay-Williams, Jennifer

    2011-01-01

    What is the relationship between addition and subtraction? How do individuals know whether an algorithm will always work? Can they explain why order matters in subtraction but not in addition, or why it is false to assert that the sum of any two whole numbers is greater than either number? It is organized around two big ideas and supported by…

  11. Subtraction with hadronic initial states at NLO: an NNLO-compatible scheme

    NASA Astrophysics Data System (ADS)

    Somogyi, Gábor

    2009-05-01

    We present an NNLO-compatible subtraction scheme for computing QCD jet cross sections of hadron-initiated processes at NLO accuracy. The scheme is constructed specifically with those complications in mind that emerge when extending the subtraction algorithm to next-to-next-to-leading order. It is therefore possible to embed the present scheme in a full NNLO computation without any modifications.

  12. A retention-time-shift-tolerant background subtraction and noise reduction algorithm (BgS-NoRA) for extraction of drug metabolites in liquid chromatography/mass spectrometry data from biological matrices.

    PubMed

    Zhu, Peijuan; Ding, Wei; Tong, Wei; Ghosal, Anima; Alton, Kevin; Chowdhury, Swapan

    2009-06-01

    A retention-time-shift-tolerant background subtraction and noise reduction algorithm (BgS-NoRA) is implemented using the statistical programming language R to remove non-drug-related ion signals from accurate-mass liquid chromatography/mass spectrometry (LC/MS) data. The background-subtraction part of the algorithm is similar to a previously published procedure (Zhang H and Yang Y. J. Mass Spectrom. 2008, 43: 1181-1190). The noise reduction algorithm (NoRA) is an add-on feature that helps further clean up residual matrix ion noise after background subtraction. It functions by removing ion signals that are not consistent across many adjacent scans. The effectiveness of BgS-NoRA was examined in biological matrices by spiking blank plasma extract, bile, and urine with diclofenac and ibuprofen that had been pre-metabolized by microsomal incubation. Efficient removal of background ions permitted the detection of drug-related ions in in vivo samples (plasma, bile, urine, and feces) obtained from rats orally dosed with (14)C-loratadine with minimal interference. Results from these experiments demonstrate that BgS-NoRA is more effective at removing analyte-unrelated ions than background subtraction alone. NoRA is shown to be particularly effective in the early retention region for urine samples and the middle retention region for bile samples, where matrix ion signals still dominate the total ion chromatograms (TICs) after background subtraction. In most cases, the TICs after BgS-NoRA show excellent qualitative correlation with the radiochromatograms. BgS-NoRA will be a very useful tool in metabolite detection and identification work, especially in first-in-human (FIH) studies and multiple-dose toxicology studies where non-radio-labeled drugs are administered. Data from these types of studies are critical to meet the latest FDA guidance on Metabolites in Safety Testing (MIST). Copyright (c) 2009 John Wiley & Sons, Ltd.
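    The scan-consistency idea behind NoRA can be illustrated with a toy filter: keep only ion signals that persist across a minimum number of adjacent scans. The function and thresholds below are our own simplified stand-in, not the published R implementation.

```python
import numpy as np

def noise_reduce(intensity, min_run=5, frac=0.5):
    """Toy scan-consistency filter in the spirit of NoRA.

    intensity: matrix of shape (n_mz_bins, n_scans) after background
    subtraction. A signal is kept only if it stays above `frac` of its
    own peak for at least `min_run` adjacent scans; everything else is
    treated as residual matrix noise.
    """
    out = np.zeros_like(intensity)
    for i, row in enumerate(intensity):
        peak = row.max()
        mask = row > frac * peak if peak > 0 else np.zeros_like(row, bool)
        run_start = None
        for j, m in enumerate(np.append(mask, False)):   # sentinel closes runs
            if m and run_start is None:
                run_start = j
            elif not m and run_start is not None:
                if j - run_start >= min_run:             # persistent signal
                    out[i, run_start:j] = row[run_start:j]
                run_start = None
    return out
```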

  13. On-Board Cryospheric Change Detection By The Autonomous Sciencecraft Experiment

    NASA Astrophysics Data System (ADS)

    Doggett, T.; Greeley, R.; Castano, R.; Cichy, B.; Chien, S.; Davies, A.; Baker, V.; Dohm, J.; Ip, F.

    2004-12-01

    The Autonomous Sciencecraft Experiment (ASE) is operating on board Earth Observing-1 (EO-1) with the Hyperion hyperspectral visible/near-IR spectrometer. ASE science activities include autonomous monitoring of cryospheric changes, triggering the collection of additional data when change is detected and filtering out null data such as no change or cloud cover. This has application to the study of cryospheres on Earth, Mars, and the icy moons of the outer solar system. A cryosphere classification algorithm, in combination with a previously developed cloud algorithm [1], was tested on board ten times from March through August 2004. The cloud algorithm correctly screened out three scenes with total cloud cover, while the cryosphere algorithm detected alpine snow cover in the Rocky Mountains, lake thaw near Madison, Wisconsin, and the presence and subsequent break-up of sea ice in the Barrow Strait of the Canadian Arctic. Hyperion has 220 bands ranging from 400 to 2400 nm, with a spatial resolution of 30 m/pixel and a spectral resolution of 10 nm. Limited on-board memory and processing speed imposed the constraint that only partially processed Level 0.5 data (with dark image subtraction and gain factors applied, but not full radiometric calibration) could be used. In addition, a maximum of 12 bands could be used for any stacked sequence of algorithms run for a scene on board. The cryosphere algorithm was developed to classify snow, water, ice, and land, using six Hyperion bands at 427, 559, 661, 864, 1245, and 1649 nm. Of these, only the 427 nm band overlaps with those used by the cloud algorithm. The cloud algorithm was developed with Level 1 data, which introduces complications because of the incomplete calibration of the SWIR in Level 0.5 data, including a high level of noise in the 1377 nm band used by the cloud algorithm. Development of a more robust cryosphere classifier, including cloud classification specifically adapted to Level 0.5 data, is in progress for deployment on EO-1 as part of continued ASE operations. [1] Griffin, M.K. et al., Cloud Cover Detection Algorithm For EO-1 Hyperion Imagery, SPIE 17, 2003.

  14. Compressive Sensing for Background Subtraction

    DTIC Science & Technology

    2009-12-20

    (i) reconstructing an image using only a single optical photodiode (infrared, hyperspectral, etc.) along with a digital micromirror device (DMD)… curves: we use the full images, run the background subtraction algorithm proposed in [19], and obtain baseline background-subtracted images. We then… the images to generate the ROC curve. 5.5 Silhouettes vs. Difference Images: we have used a multi-camera setup for a 3D voxel reconstruction using the…

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Law, David R.; Cherinka, Brian; Yan, Renbin

    Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) is an optical fiber-bundle integral-field unit (IFU) spectroscopic survey that is one of three core programs in the fourth-generation Sloan Digital Sky Survey (SDSS-IV). With a spectral coverage of 3622-10354 Å and an average footprint of ~500 arcsec² per IFU, the scientific data products derived from MaNGA will permit exploration of the internal structure of a statistically large sample of 10,000 low-redshift galaxies in unprecedented detail. Comprising 174 individually pluggable science and calibration IFUs with a near-constant data stream, MaNGA is expected to obtain ~100 million raw-frame spectra and ~10 million reduced galaxy spectra over the six-year lifetime of the survey. In this contribution, we describe the MaNGA Data Reduction Pipeline algorithms and centralized metadata framework that produce sky-subtracted spectrophotometrically calibrated spectra and rectified three-dimensional data cubes that combine individual dithered observations. For the 1390 galaxy data cubes released in Summer 2016 as part of SDSS-IV Data Release 13, we demonstrate that the MaNGA data have nearly Poisson-limited sky subtraction shortward of ~8500 Å and reach a typical 10σ limiting continuum surface brightness μ = 23.5 AB arcsec⁻² in a five-arcsecond-diameter aperture in the g-band. The wavelength calibration of the MaNGA data is accurate to 5 km s⁻¹ rms, with a median spatial resolution of 2.54 arcsec FWHM (1.8 kpc at the median redshift of 0.037) and a median spectral resolution of σ = 72 km s⁻¹.

  16. Advances in structure elucidation of small molecules using mass spectrometry

    PubMed Central

    Fiehn, Oliver

    2010-01-01

    The structural elucidation of small molecules using mass spectrometry plays an important role in modern life sciences and bioanalytical approaches. This review covers different soft and hard ionization techniques and figures of merit for modern mass spectrometers, such as mass resolving power, mass accuracy, isotopic abundance accuracy, and accurate-mass multiple-stage MS(n) capability, as well as hybrid mass spectrometric and orthogonal chromatographic approaches. The latter part discusses mass spectral data handling strategies, which include background and noise subtraction, adduct formation and detection, charge state determination, accurate mass measurements, elemental composition determination, and complex data-dependent setups with ion maps and ion trees. The importance of mass spectral library search algorithms for tandem and multiple-stage MS(n) mass spectra, as well as mass spectral tree libraries that combine multiple-stage mass spectra, is outlined. The subsequent chapter discusses mass spectral fragmentation pathways, biotransformation reactions and drug metabolism studies, the simulation and generation of in silico mass spectra, expert systems for mass spectral interpretation, and the use of computational chemistry to explain gas-phase phenomena. A further chapter discusses data handling for hyphenated approaches, including mass spectral deconvolution for clean mass spectra, cheminformatics approaches and structure-retention relationships, and retention index predictions for gas and liquid chromatography. The last section reviews the current state of electronic data sharing of mass spectra and discusses the importance of software development for the advancement of structure elucidation of small molecules. Electronic supplementary material The online version of this article (doi:10.1007/s12566-010-0015-9) contains supplementary material, which is available to authorized users. PMID:21289855

  17. Self-Adaptive Prediction of Cloud Resource Demands Using Ensemble Model and Subtractive-Fuzzy Clustering Based Fuzzy Neural Network

    PubMed Central

    Chen, Zhijia; Zhu, Yuanchang; Di, Yanqiang; Feng, Shaochong

    2015-01-01

    In an IaaS (infrastructure as a service) cloud environment, users are provisioned with virtual machines (VMs). To allocate resources to users dynamically and effectively, accurate prediction of resource demands is essential. For this purpose, this paper proposes a self-adaptive prediction method using an ensemble model and a subtractive-fuzzy-clustering-based fuzzy neural network (ESFCFNN). We analyze the characteristics of user preferences and demands, and then construct the architecture of the prediction model. Several base predictors are adopted to compose the ensemble model. The structure and learning algorithm of the fuzzy neural network are then investigated. To obtain the number of fuzzy rules and the initial values of the premise and consequent parameters, this paper proposes fuzzy c-means combined with a subtractive clustering algorithm, that is, subtractive-fuzzy clustering. Finally, different criteria are adopted to evaluate the proposed method. The experimental results show that the method is accurate and effective in predicting resource demands. PMID:25691896

  18. True ion pick (TIPick): a denoising and peak picking algorithm to extract ion signals from liquid chromatography/mass spectrometry data.

    PubMed

    Ho, Tsung-Jung; Kuo, Ching-Hua; Wang, San-Yuan; Chen, Guan-Yuan; Tseng, Yufeng J

    2013-02-01

    Liquid chromatography-time-of-flight mass spectrometry has become an important technique for toxicological screening and metabolomics. We describe TIPick, a novel algorithm that accurately and sensitively detects target compounds in biological samples. TIPick comprises two main steps: background subtraction and peak picking. By subtracting a blank chromatogram, TIPick eliminates the chemical signals of blank injections and reduces false-positive results. TIPick detects peaks by calculating the S(CC(INI)) values of extracted ion chromatograms (EICs) without considering peak shapes, and it is able to detect tailing and fronting peaks. TIPick also uses duplicate injections to enhance peak signals and thus improve the peak detection power. Split peaks, commonly caused either by saturation of the mass spectrometer detector or by a mathematical background subtraction algorithm, can be resolved by adjusting the mass error tolerance of the EICs and by comparing the EICs before and after background subtraction. The performance of TIPick was tested on a data set containing 297 standard mixtures; the recall, precision, and F-score were 0.99, 0.97, and 0.98, respectively. TIPick was successfully used to construct and analyze the NTU MetaCore metabolomics chemical standards library, and it has been applied to toxicological screening and metabolomics studies. Copyright © 2013 John Wiley & Sons, Ltd.

  19. Optical constants of solid ammonia in the infrared

    NASA Technical Reports Server (NTRS)

    Robertson, C. W.; Downing, H. D.; Curnutte, B.; Williams, D.

    1975-01-01

    No direct measurements of the refractive index of solid ammonia could be obtained because of failures in attempts to map the reflection spectrum. Kramers-Kronig techniques were therefore used in the investigation. The subtractive Kramers-Kronig techniques employed are similar to those discussed by Ahrenkiel (1971). The subtractive method provides more rapid convergence than the conventional techniques when data are available over only a limited spectral range.
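    For reference, the singly subtractive Kramers-Kronig relation of the kind described (our transcription of the standard form, with \nu_0 an anchor frequency at which n is known) reads:

```latex
% Singly subtractive Kramers-Kronig relation: given the extinction
% coefficient k(\nu) and one anchor value n(\nu_0), the refractive index is
n(\nu) = n(\nu_0)
  + \frac{2\,(\nu^{2} - \nu_0^{2})}{\pi}\,
    \mathcal{P}\!\int_{0}^{\infty}
      \frac{\nu'\,k(\nu')}
           {(\nu'^{2} - \nu^{2})\,(\nu'^{2} - \nu_0^{2})}\;
    \mathrm{d}\nu'
```

    The extra factor (\nu'^2 - \nu_0^2) in the denominator is what yields the more rapid convergence when k(\nu') is known only over a limited spectral range.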

  20. Spectral K-edge subtraction imaging

    NASA Astrophysics Data System (ADS)

    Zhu, Y.; Samadi, N.; Martinson, M.; Bassey, B.; Wei, Z.; Belev, G.; Chapman, D.

    2014-05-01

    We describe a spectral x-ray transmission method to provide images of independent material components of an object using a synchrotron x-ray source. The imaging system and process are similar to K-edge subtraction (KES) imaging, where two imaging energies are prepared above and below the K-absorption edge of a contrast element, and a quantifiable image of the contrast element and a water-equivalent image are obtained. The spectral method, termed 'spectral-KES', employs a continuous spectrum encompassing an absorption edge of an element within the object. The spectrum is prepared by a bent Laue monochromator with good focal and energy-dispersive properties. The monochromator focuses the spectral beam at the object location; the beam then diverges onto an area detector such that one dimension of the detector is an energy axis. A least-squares method is used to interpret the transmitted spectral data, with fits to measured and/or calculated absorption of the contrast element and the matrix material (water). The spectral-KES system is very simple to implement and comprises a bent Laue monochromator, a stage for sample manipulation for projection and computed tomography imaging, and a pixelated area detector. The imaging system and examples of its applications to biological imaging are presented. The system is particularly well suited for a synchrotron bend-magnet beamline with white-beam access.
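    The least-squares interpretation step can be sketched as a two-material Beer-Lambert fit along the energy axis; the function name and the two-material restriction are our simplifications of the fitting described above.

```python
import numpy as np

def kes_decompose(I, I0, mu_contrast, mu_water):
    """Least-squares two-material decomposition along the energy axis.

    I, I0: transmitted and incident spectra sampled at n energies
    (energy is the detector's dispersive axis in spectral-KES);
    mu_contrast, mu_water: attenuation coefficients at the same
    energies. Solves the Beer-Lambert model
        -ln(I/I0) = mu_c * t_c + mu_w * t_w
    for the areal densities (t_c, t_w) of one detector pixel.
    """
    y = -np.log(I / I0)                        # line integrals vs energy
    A = np.column_stack([mu_contrast, mu_water])
    t, *_ = np.linalg.lstsq(A, y, rcond=None)
    return t                                   # [t_contrast, t_water]
```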

  1. Accurate ω-ψ Spectral Solution of the Singular Driven Cavity Problem

    NASA Astrophysics Data System (ADS)

    Auteri, F.; Quartapelle, L.; Vigevano, L.

    2002-08-01

    This article provides accurate spectral solutions of the driven cavity problem, calculated in the vorticity-stream function representation without smoothing the corner singularities—a prima facie impossible task. As in a recent benchmark spectral calculation by primitive variables of Botella and Peyret, closed-form contributions of the singular solution for both zero and finite Reynolds numbers are subtracted from the unknown of the problem tackled here numerically in biharmonic form. The method employed is based on a split approach to the vorticity and stream function equations, a Galerkin-Legendre approximation of the problem for the perturbation, and an evaluation of the nonlinear terms by Gauss-Legendre numerical integration. Results computed for Re=0, 100, and 1000 compare well with the benchmark steady solutions provided by the aforementioned collocation-Chebyshev projection method. The validity of the proposed singularity subtraction scheme for computing time-dependent solutions is also established.

  2. Multiresolution image registration in digital x-ray angiography with intensity variation modeling.

    PubMed

    Nejati, Mansour; Pourghassem, Hossein

    2014-02-01

    Digital subtraction angiography (DSA) is a widely used technique for visualizing vessel anatomy in diagnosis and treatment. However, due to unavoidable patient motion, both external and internal, the subtracted angiography images often suffer from motion artifacts that adversely affect the quality of the medical diagnosis. To cope with this problem and improve the quality of DSA images, registration algorithms are often employed before subtraction. In this paper, a novel elastic registration algorithm for digital X-ray angiography images, particularly of the coronary arteries, is proposed. The algorithm includes a multiresolution search strategy in which a global transformation is calculated iteratively based on local searches in coarse and fine sub-image blocks. The local searches are accomplished in a differential multiscale framework that allows us to capture both large- and small-scale transformations. The local registration transformation also explicitly accounts for local variations in the image intensities, which are incorporated into the model as changes in local contrast and brightness. These local transformations are then smoothly interpolated using a thin-plate spline interpolation function to obtain the global model. Experimental results with several clinical datasets demonstrate the effectiveness of our algorithm in motion artifact reduction.
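    The thin-plate spline interpolation step can be sketched with SciPy's RBFInterpolator (SciPy >= 1.7 provides a 'thin_plate_spline' kernel). This shows only the densification of a sparse displacement field, not the paper's full multiresolution search.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_displacement_field(points, displacements, shape):
    """Interpolate a sparse displacement field with thin-plate splines.

    points:        (n, 2) control-point coordinates (row, col)
    displacements: (n, 2) estimated displacements at those points
    shape:         (rows, cols) of the image grid
    Returns a dense (rows, cols, 2) displacement field.
    """
    tps = RBFInterpolator(points, displacements,
                          kernel="thin_plate_spline", smoothing=0.0)
    rr, cc = np.mgrid[0:shape[0], 0:shape[1]]
    grid = np.column_stack([rr.ravel(), cc.ravel()]).astype(float)
    return tps(grid).reshape(shape[0], shape[1], 2)
```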

  3. An environment-adaptive management algorithm for hearing-support devices incorporating listening situation and noise type classifiers.

    PubMed

    Yook, Sunhyun; Nam, Kyoung Won; Kim, Heepyung; Hong, Sung Hwa; Jang, Dong Pyo; Kim, In Young

    2015-04-01

    In order to provide more consistent sound intelligibility for hearing-impaired persons regardless of environment, it is necessary to adjust the settings of the hearing-support (HS) device to accommodate various environmental circumstances. In this study, a fully automatic HS device management algorithm that can adapt to various environmental situations is proposed; it is composed of a listening-situation classifier, a noise-type classifier, an adaptive noise-reduction algorithm, and a management algorithm that can selectively turn on/off one or more of the three basic algorithms (beamforming, noise reduction, and feedback cancellation) and can also adjust internal gains and parameters of the wide-dynamic-range compression (WDRC) and noise-reduction (NR) algorithms in accordance with variations in the environmental situation. Experimental results demonstrated that the implemented algorithms can classify both the listening situation and the ambient noise type with high accuracy (92.8-96.4% and 90.9-99.4%, respectively), and that the gains and parameters of the WDRC and NR algorithms were successfully adjusted according to variations in the environmental situation. For the adaptive multiband spectral subtraction (MBSS) algorithm, the average signal-to-noise ratio (SNR), frequency-weighted segmental SNR, Perceptual Evaluation of Speech Quality score, and mean opinion test score from 10 normal-hearing volunteers improved by 1.74 dB, 2.11 dB, 0.49, and 0.68, respectively, compared to the conventional fixed-parameter MBSS algorithm. These results indicate that the proposed environment-adaptive management algorithm can be applied to HS devices to improve sound intelligibility for hearing-impaired individuals in various acoustic environments. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
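    The core of a spectral subtraction stage, which the MBSS algorithm applies per frequency band with adaptively chosen parameters, can be sketched as follows; the single-band simplification and all constants are illustrative.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(x, fs, noise_seconds=0.5, alpha=2.0, beta=0.02):
    """Minimal single-band spectral subtraction (MBSS additionally uses
    band-specific over-subtraction factors).

    Assumes the first `noise_seconds` of `x` are noise-only: estimate
    the noise magnitude spectrum there, over-subtract it (factor
    `alpha`), and floor the result at `beta` times the noise estimate.
    """
    f, t, X = stft(x, fs=fs, nperseg=512)
    n_noise = max(1, int(noise_seconds * fs / 256))   # hop = nperseg / 2
    noise_mag = np.abs(X[:, :n_noise]).mean(axis=1, keepdims=True)
    mag = np.abs(X)
    cleaned = np.maximum(mag - alpha * noise_mag, beta * noise_mag)
    # Recombine with the noisy phase and invert
    _, y = istft(cleaned * np.exp(1j * np.angle(X)), fs=fs, nperseg=512)
    return y
```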

  4. [Background sky subtraction around the [OIII] line in LAMOST QSO spectra].

    PubMed

    Shi, Zhi-Xin; Comte, Georges; Luo, A-Li; Tu, Liang-Ping; Zhao, Yong-Heng; Wu, Fu-Chao

    2014-11-01

    At present, most sky-subtraction methods focus on the full spectrum rather than on particular spectral regions, such as the background sky around the [OIII] line, which is very important for low-redshift quasars. A new method to precisely subtract sky lines in a local region is proposed in the present paper, which solves the problem that the width of the Hβ-[OIII] lines is affected by the background sky subtraction. The experimental results show that, for quasars at different redshifts, the spectral quality is significantly improved using our method relative to the original LAMOST batch program. It provides a complementary solution for the small fraction of LAMOST spectra that are not well handled by the LAMOST 2D pipeline. This method has also been used in searching for candidates of double-peaked active galactic nuclei.

  5. STATCONT: A statistical continuum level determination method for line-rich sources

    NASA Astrophysics Data System (ADS)

    Sánchez-Monge, Á.; Schilke, P.; Ginsburg, A.; Cesaroni, R.; Schmiedeke, A.

    2018-01-01

    STATCONT is a python-based tool designed to determine the continuum emission level in spectral data, in particular for sources with line-rich spectra. The tool inspects the intensity distribution of a given spectrum and automatically determines the continuum level using different statistical approaches. The different methods included in STATCONT are tested against synthetic data. We conclude that the sigma-clipping algorithm provides the most accurate continuum level determination, together with information on the uncertainty of that determination. This uncertainty can be used to correct the final continuum emission level, resulting in what we call the 'corrected sigma-clipping method' (c-SCM). The c-SCM has been tested against more than 750 different synthetic spectra reproducing typical conditions found toward astronomical sources. The continuum level is determined with a discrepancy of less than 1% in 50% of the cases, and less than 5% in 90% of the cases, provided at least 10% of the channels are line-free. The main products of STATCONT are the continuum emission level, together with a conservative value of its uncertainty, and datacubes containing only spectral line emission, i.e., continuum-subtracted datacubes. STATCONT also includes an option to estimate the spectral index when files covering different frequency ranges are provided.
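    A minimal sigma-clipping continuum estimator in numpy follows, as an illustration of the approach rather than STATCONT's actual code.

```python
import numpy as np

def sigma_clip_continuum(spectrum, sigma=3.0, n_iter=20):
    """Estimate the continuum of a line-rich spectrum by iterative
    sigma-clipping.

    Channels deviating more than `sigma` standard deviations from the
    running estimate are rejected; the mean of the surviving,
    presumably line-free channels is the continuum level.
    """
    data = np.asarray(spectrum, dtype=float)
    mask = np.ones(data.size, dtype=bool)
    for _ in range(n_iter):
        mu, sd = data[mask].mean(), data[mask].std()
        new_mask = np.abs(data - mu) < sigma * sd
        if new_mask.sum() == mask.sum():      # converged
            break
        mask = new_mask
    return data[mask].mean(), data[mask].std()   # level and its scatter
```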

  6. Adaptive handling of Rayleigh and Raman scatter of fluorescence data based on evaluation of the degree of spectral overlap

    NASA Astrophysics Data System (ADS)

    Hu, Yingtian; Liu, Chao; Wang, Xiaoping; Zhao, Dongdong

    2018-06-01

    At present, general scatter handling methods are unsatisfactory when scatter and fluorescence overlap severely in the excitation-emission matrix. In this study, an adaptive method for scatter handling of fluorescence data is proposed. First, the Raman scatter is corrected by subtracting the baseline of deionized water, collected in each experiment to adapt to intensity fluctuations. Then, the degree of spectral overlap between Rayleigh scatter and fluorescence is classified into three categories based on the distance between the spectral peaks. The corresponding algorithms, including setting to zero and fitting on one or both sides, are applied after the evaluation of the degree of overlap for each individual emission spectrum. The proposed method minimizes the number of fitting and interpolation operations, which reduces complexity, saves time, avoids overfitting, and, most importantly, preserves the authenticity of the data. Furthermore, the effect of this procedure on subsequent PARAFAC analysis was assessed and compared to Delaunay interpolation in experiments with four typical organic chemicals and real water samples. Using this method, we conducted long-term monitoring of tap water and of river water near a dyeing and printing plant. This method can improve adaptability and accuracy in the scatter handling of fluorescence data.

  7. Relational Thinking: What's the Difference?

    ERIC Educational Resources Information Center

    Whitacre, Ian; Schoen, Robert C.; Champagne, Zachary; Goddard, Andrea

    2017-01-01

    Data (Schoen et al. 2016) suggests that because many students' understanding of subtraction is limited by thinking about the operation only as take-away or by using a default procedure, such as the standard subtraction algorithm in the United States, second graders are much more likely to solve 100 minus 3 correctly than 201 minus 199. This…

  8. A Novel Sky-Subtraction Method Based on Non-negative Matrix Factorisation with Sparsity for Multi-object Fibre Spectroscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Zhang, Long; Ye, Zhongfu

    2016-12-01

    A novel sky-subtraction method based on non-negative matrix factorisation with sparsity is proposed in this paper. The method is redesigned for sky-subtraction by taking the characteristics of the skylights into account, and it has two constraint terms, one for sparsity and the other for homogeneity. Unlike standard sky-subtraction techniques, such as B-spline curve fitting methods and Principal Component Analysis approaches, the sky-subtraction method based on non-negative matrix factorisation with sparsity offers higher accuracy and flexibility. The method is of research value for sky-subtraction in multi-object fibre spectroscopic telescope surveys. To demonstrate the effectiveness and superiority of the proposed algorithm, experiments are performed on Large Sky Area Multi-Object Fiber Spectroscopic Telescope data, as the mechanisms of multi-object fibre spectroscopic telescopes are similar.
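    A sparsity-regularized NMF of a sky-spectra matrix can be sketched with scikit-learn (>= 1.0 for the alpha_W/l1_ratio parameters). The L1 penalty stands in for the paper's sparsity constraint; the homogeneity term has no direct scikit-learn equivalent and is omitted.

```python
from sklearn.decomposition import NMF

def sky_components(sky_spectra, n_components=8):
    """Factor a library of sky spectra into non-negative components.

    sky_spectra: array of shape (n_fibres, n_pixels), non-negative.
    Returns per-fibre amplitudes and the component spectra, which can
    then be scaled and subtracted from science fibres.
    """
    model = NMF(n_components=n_components, init="nndsvda",
                alpha_W=0.1, l1_ratio=1.0, max_iter=500)
    weights = model.fit_transform(sky_spectra)    # (n_fibres, k)
    return weights, model.components_             # (k, n_pixels)
```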

  9. The Data Reduction Pipeline for The SDSS-IV Manga IFU Galaxy Survey

    DOE PAGES

    Law, David R.; Cherinka, Brian; Yan, Renbin; ...

    2016-09-12

    Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) is an optical fiber-bundle integral-field unit (IFU) spectroscopic survey that is one of three core programs in the fourth-generation Sloan Digital Sky Survey (SDSS-IV). With a spectral coverage of 3622-10354 Å and an average footprint of ~500 arcsec² per IFU, the scientific data products derived from MaNGA will permit exploration of the internal structure of a statistically large sample of 10,000 low-redshift galaxies in unprecedented detail. Comprising 174 individually pluggable science and calibration IFUs with a near-constant data stream, MaNGA is expected to obtain ~100 million raw-frame spectra and ~10 million reduced galaxy spectra over the six-year lifetime of the survey. In this contribution, we describe the MaNGA Data Reduction Pipeline algorithms and centralized metadata framework that produce sky-subtracted spectrophotometrically calibrated spectra and rectified three-dimensional data cubes that combine individual dithered observations. For the 1390 galaxy data cubes released in Summer 2016 as part of SDSS-IV Data Release 13, we demonstrate that the MaNGA data have nearly Poisson-limited sky subtraction shortward of ~8500 Å and reach a typical 10σ limiting continuum surface brightness μ = 23.5 AB arcsec⁻² in a five-arcsecond-diameter aperture in the g-band. The wavelength calibration of the MaNGA data is accurate to 5 km s⁻¹ rms, with a median spatial resolution of 2.54 arcsec FWHM (1.8 kpc at the median redshift of 0.037) and a median spectral resolution of σ = 72 km s⁻¹.

  10. THE DATA REDUCTION PIPELINE FOR THE SDSS-IV MaNGA IFU GALAXY SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Law, David R.; Cherinka, Brian; Yan, Renbin

    2016-10-01

    Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) is an optical fiber-bundle integral-field unit (IFU) spectroscopic survey that is one of three core programs in the fourth-generation Sloan Digital Sky Survey (SDSS-IV). With a spectral coverage of 3622-10354 Å and an average footprint of ~500 arcsec² per IFU, the scientific data products derived from MaNGA will permit exploration of the internal structure of a statistically large sample of 10,000 low-redshift galaxies in unprecedented detail. Comprising 174 individually pluggable science and calibration IFUs with a near-constant data stream, MaNGA is expected to obtain ~100 million raw-frame spectra and ~10 million reduced galaxy spectra over the six-year lifetime of the survey. In this contribution, we describe the MaNGA Data Reduction Pipeline algorithms and centralized metadata framework that produce sky-subtracted spectrophotometrically calibrated spectra and rectified three-dimensional data cubes that combine individual dithered observations. For the 1390 galaxy data cubes released in Summer 2016 as part of SDSS-IV Data Release 13, we demonstrate that the MaNGA data have nearly Poisson-limited sky subtraction shortward of ~8500 Å and reach a typical 10σ limiting continuum surface brightness μ = 23.5 AB arcsec⁻² in a five-arcsecond-diameter aperture in the g-band. The wavelength calibration of the MaNGA data is accurate to 5 km s⁻¹ rms, with a median spatial resolution of 2.54 arcsec FWHM (1.8 kpc at the median redshift of 0.037) and a median spectral resolution of σ = 72 km s⁻¹.

  11. The Data Reduction Pipeline for the SDSS-IV MaNGA IFU Galaxy Survey

    NASA Astrophysics Data System (ADS)

    Law, David R.; Cherinka, Brian; Yan, Renbin; Andrews, Brett H.; Bershady, Matthew A.; Bizyaev, Dmitry; Blanc, Guillermo A.; Blanton, Michael R.; Bolton, Adam S.; Brownstein, Joel R.; Bundy, Kevin; Chen, Yanmei; Drory, Niv; D'Souza, Richard; Fu, Hai; Jones, Amy; Kauffmann, Guinevere; MacDonald, Nicholas; Masters, Karen L.; Newman, Jeffrey A.; Parejko, John K.; Sánchez-Gallego, José R.; Sánchez, Sebastian F.; Schlegel, David J.; Thomas, Daniel; Wake, David A.; Weijmans, Anne-Marie; Westfall, Kyle B.; Zhang, Kai

    2016-10-01

    Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) is an optical fiber-bundle integral-field unit (IFU) spectroscopic survey that is one of three core programs in the fourth-generation Sloan Digital Sky Survey (SDSS-IV). With a spectral coverage of 3622-10354 Å and an average footprint of ~500 arcsec² per IFU, the scientific data products derived from MaNGA will permit exploration of the internal structure of a statistically large sample of 10,000 low-redshift galaxies in unprecedented detail. Comprising 174 individually pluggable science and calibration IFUs with a near-constant data stream, MaNGA is expected to obtain ~100 million raw-frame spectra and ~10 million reduced galaxy spectra over the six-year lifetime of the survey. In this contribution, we describe the MaNGA Data Reduction Pipeline algorithms and centralized metadata framework that produce sky-subtracted spectrophotometrically calibrated spectra and rectified three-dimensional data cubes that combine individual dithered observations. For the 1390 galaxy data cubes released in Summer 2016 as part of SDSS-IV Data Release 13, we demonstrate that the MaNGA data have nearly Poisson-limited sky subtraction shortward of ~8500 Å and reach a typical 10σ limiting continuum surface brightness μ = 23.5 AB arcsec⁻² in a five-arcsecond-diameter aperture in the g-band. The wavelength calibration of the MaNGA data is accurate to 5 km s⁻¹ rms, with a median spatial resolution of 2.54 arcsec FWHM (1.8 kpc at the median redshift of 0.037) and a median spectral resolution of σ = 72 km s⁻¹.

  12. Improved Savitzky-Golay-method-based fluorescence subtraction algorithm for rapid recovery of Raman spectra.

    PubMed

    Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan

    2014-08-20

    In this paper, we propose an improved subtraction algorithm for the rapid recovery of Raman spectra that substantially reduces computation time. The algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method involving two key novel elements: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying a successive relaxation (SG-SR) iterative method with this relaxation factor, additional improvement in convergence speed over the standard Savitzky-Golay procedure is realized. The proposed algorithm (RIA-SG-SR), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as with experimentally measured Raman spectra from non-biological and biological samples. The method significantly reduces computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In simulation, RIA-SG-SR achieved one order of magnitude improvement in iteration number and two orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore, the processing time for an experimentally measured raw Raman spectrum from skin tissue decreased from 6.72 to 0.094 s. In general, SG-SR processing can be completed within tens of milliseconds, enabling real-time use in practical situations.
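    For orientation, the plain (un-accelerated) iterative Savitzky-Golay baseline estimation that RIA-SG-SR speeds up can be sketched as follows; the window length and iteration count are illustrative, and the Gauss-Seidel/relaxation acceleration itself is not reproduced.

```python
import numpy as np
from scipy.signal import savgol_filter

def iterative_sg_baseline(spectrum, window=101, poly=3, n_iter=100):
    """Classic iterative Savitzky-Golay fluorescence baseline estimation.

    Repeatedly smooth the current baseline estimate and clamp it from
    above by the data, so sharp Raman peaks are progressively excluded
    while the smooth fluorescence background remains. `window` must be
    odd and shorter than the spectrum.
    """
    baseline = np.asarray(spectrum, dtype=float).copy()
    for _ in range(n_iter):
        smoothed = savgol_filter(baseline, window, poly)
        baseline = np.minimum(baseline, smoothed)
    return spectrum - baseline, baseline   # (Raman signal, fluorescence)
```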

  13. [An Algorithm to Eliminate Power Frequency Interference in ECG Using Template].

    PubMed

    Shi, Guohua; Li, Jiang; Xu, Yan; Feng, Liang

    2017-01-01

    We investigate an algorithm to eliminate power frequency interference in the ECG. The algorithm first creates a power frequency interference template, then subtracts the template from the original ECG signals, and finally obtains the ECG signals without interference. Experiments show the algorithm can eliminate the interference effectively with no side effects on the normal signal. It is efficient and suitable for practical use.
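    A minimal period-folding version of the template idea follows; the sampling rate, mains frequency, and the integer-ratio assumption are ours, not from the paper.

```python
import numpy as np

def subtract_powerline(ecg, fs=500, mains_hz=50):
    """Template subtraction of power-line interference.

    Folds the signal at the mains period and averages to build one
    interference template (ECG content averages out when the heart
    rate is not locked to the mains), then tiles and subtracts it.
    Requires fs to be an integer multiple of mains_hz.
    """
    period = fs // mains_hz                      # samples per mains cycle
    n_cycles = len(ecg) // period
    folded = ecg[:n_cycles * period].reshape(n_cycles, period)
    template = folded.mean(axis=0)               # one-cycle template
    cleaned = ecg.astype(float).copy()
    cleaned[:n_cycles * period] -= np.tile(template, n_cycles)
    return cleaned
```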

  14. Linear: A Novel Algorithm for Reconstructing Slitless Spectroscopy from HST/WFC3

    NASA Astrophysics Data System (ADS)

    Ryan, R. E., Jr.; Casertano, S.; Pirzkal, N.

    2018-03-01

    We present a grism extraction package (LINEAR) designed to reconstruct 1D spectra from a collection of slitless spectroscopic images, ideally taken at a variety of orientations, dispersion directions, and/or dither positions. Our approach is to enumerate every transformation between all direct-image positions (i.e., potential sources) and the collection of grism images at all relevant wavelengths. This leads to a large, sparse system of linear equations, which we invert using the standard LSQR algorithm. We implement a number of color and geometric corrections (such as flat field, pixel-area map, source morphology, and spectral bandwidth), but assume many effects have been calibrated out (such as basic reductions, background subtraction, and astrometric refinement). We demonstrate the power of our approach with several Monte Carlo simulations and the analysis of archival data. The simulations include astrometric and photometric uncertainties, sky-background estimation, and signal-to-noise calculations. The data are G141 observations of the Hubble Ultra-Deep Field obtained with the Wide Field Camera 3, and show the power of our formalism by improving the spectral resolution without sacrificing signal-to-noise (a tradeoff often made by current approaches). Additionally, our approach naturally accounts for source contamination, which is only handled heuristically by present software. We conclude with a discussion of various observations for which our approach will provide much improved 1D spectra, such as crowded fields (star or galaxy clusters), spatially resolved spectroscopy, or surveys with strict completeness requirements. At present our software is heavily geared toward Wide Field Camera 3 IR; however, we plan to extend the codebase to additional instruments.
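    The solver step reduces to a sparse least-squares problem, for which SciPy exposes LSQR directly. The sketch below uses a random stand-in for the design matrix, since constructing the real pixel-to-wavelength mapping is the instrument-specific part.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

# A x = b: each row of A maps source fluxes per wavelength (x) to one
# grism-image pixel (b). Here A is a random sparse placeholder.
rng = np.random.default_rng(42)
A = sparse_random(2000, 300, density=0.01, random_state=42, format="csr")
x_true = rng.normal(size=300)
b = A @ x_true + 0.01 * rng.normal(size=2000)

# LSQR iteratively solves the (optionally damped) least-squares problem
solution = lsqr(A, b, damp=0.0, atol=1e-8, btol=1e-8)
x_hat = solution[0]
print("rms error:", np.sqrt(np.mean((x_hat - x_true) ** 2)))
```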

  15. Development of gradient descent adaptive algorithms to remove common mode artifact for improvement of cardiovascular signal quality.

    PubMed

    Ciaccio, Edward J; Micheli-Tzanakou, Evangelia

    2007-07-01

    Common-mode noise degrades cardiovascular signal quality and diminishes measurement accuracy. Filtering to remove noise components in the frequency domain often distorts the signal. Two adaptive noise canceling (ANC) algorithms were tested that adjust weighted reference signals for optimal subtraction from a primary signal. The update of the weight w was based upon the gradient term of the steepest descent equation [see text], where the error ε is the difference between the primary and weighted reference signals. The gradient ∇ was estimated from Δε² and Δw without using a variable Δw in the denominator, which can cause instability. The Parallel Comparison (PC) algorithm computed Δε² using fixed finite differences ±Δw in parallel at each discrete time k. The ALOPEX algorithm computed Δε² × Δw from time k to k + 1 to estimate ∇, with a random number added to account for Δε² · Δw → 0 near the optimal weighting. Using simulated data, both algorithms stably converged to the optimal weighting within 50-2000 discrete sample points k, even with an SNR of 1:8 and weights initialized far from the optimal. Using a sharply pulsatile cardiac electrogram signal with added noise such that the SNR was 1:5, both algorithms exhibited stable convergence within 100 ms (100 sample points). Fourier spectral analysis revealed minimal distortion when comparing the signal without added noise to the ANC-restored signal. ANC algorithms based upon difference calculations can rapidly and stably converge to the optimal weighting in simulated and real cardiovascular data. Signal quality is restored with minimal distortion, increasing the accuracy of biophysical measurement.

  16. B-spline based image tracking by detection

    NASA Astrophysics Data System (ADS)

    Balaji, Bhashyam; Sithiravel, Rajiv; Damini, Anthony; Kirubarajan, Thiagalingam; Rajan, Sreeraman

    2016-05-01

    Visual image tracking involves the estimation of the motion of desired targets in a surveillance region using a sequence of images. A standard method of isolating moving targets in image tracking uses background subtraction. The standard background subtraction method is often affected by irrelevant information in the images, which can lead to poor performance in image-based target tracking. In this paper, a B-spline based image tracking method is implemented. The method models the background and foreground using B-splines, followed by a tracking-by-detection algorithm. The effectiveness of the proposed algorithm is demonstrated.

  17. Nonrigid Image Registration in Digital Subtraction Angiography Using Multilevel B-Spline

    PubMed Central

    2013-01-01

    We address the problem of motion artifact reduction in digital subtraction angiography (DSA) using image registration techniques. Most registration algorithms proposed for DSA have been designed for peripheral and cerebral angiography images, in which mainly global rigid motions are involved. These algorithms do not yield good results when applied to coronary angiography images because of the complex nonrigid motions present in this type of image. Multiresolution and iterative algorithms have been proposed to cope with this problem, but they carry a high computational cost that makes them unacceptable for real-time clinical applications. In this paper we propose a nonrigid image registration algorithm for coronary angiography images that is significantly faster than multiresolution and iterative blocking methods and outperforms competing algorithms evaluated on the same data sets. The algorithm is based on a sparse set of matched feature point pairs, and the elastic registration is performed by means of multilevel B-spline image warping. Experimental results with several clinical data sets demonstrate the effectiveness of our approach. PMID:23971026

  18. Modeling Self-subtraction in Angular Differential Imaging: Application to the HD 32297 Debris Disk

    NASA Astrophysics Data System (ADS)

    Esposito, Thomas M.; Fitzgerald, Michael P.; Graham, James R.; Kalas, Paul

    2014-01-01

    We present a new technique for forward-modeling self-subtraction of spatially extended emission in observations processed with angular differential imaging (ADI) algorithms. High-contrast direct imaging of circumstellar disks is limited by quasi-static speckle noise, and ADI is commonly used to suppress those speckles. However, the application of ADI can result in self-subtraction of the disk signal due to the disk's finite spatial extent. This signal attenuation varies with radial separation and biases measurements of the disk's surface brightness, thereby compromising inferences regarding the physical processes responsible for the dust distribution. To compensate for this attenuation, we forward model the disk structure and compute the form of the self-subtraction function at each separation. As a proof of concept, we apply our method to 1.6 and 2.2 μm Keck adaptive optics NIRC2 scattered-light observations of the HD 32297 debris disk reduced using a variant of the "locally optimized combination of images" algorithm. We are able to recover disk surface brightness that was otherwise lost to self-subtraction and produce simplified models of the brightness distribution as it appears with and without self-subtraction. From the latter models, we extract radial profiles for the disk's brightness, width, midplane position, and color that are unbiased by self-subtraction. Our analysis of these measurements indicates a break in the brightness profile power law at r ≈ 110 AU and a disk width that increases with separation from the star. We also verify disk curvature that displaces the midplane by up to 30 AU toward the northwest relative to a straight fiducial midplane.

  19. Novel full‐spectral flow cytometry with multiple spectrally‐adjacent fluorescent proteins and fluorochromes and visualization of in vivo cellular movement

    PubMed Central

    Futamura, Koji; Sekino, Masashi; Hata, Akihiro; Ikebuchi, Ryoyo; Nakanishi, Yasutaka; Egawa, Gyohei; Kabashima, Kenji; Watanabe, Takeshi; Furuki, Motohiro

    2015-01-01

    Flow cytometric analysis with multicolor fluoroprobes is an essential method for detecting biological signatures of cells. Here, we present a new full-spectral flow cytometer (spectral-FCM). Unlike a conventional flow cytometer, this spectral-FCM acquires the fluorescence emitted by all probes across the full spectrum from each cell with a 32-channel sequential PMT unit after dispersion by a prism, and extracts the signal of each fluoroprobe from its spectral shape using a unique algorithm that is fast, sensitive, accurate, automatic, and real-time. The spectral-FCM detects the continuous changes in emission spectra, from green to red, of the photoconvertible protein KikGR with high spectral resolution, and separates spectrally adjacent fluoroprobes such as FITC (emission peak (Em) 519 nm) and EGFP (Em 507 nm). Moreover, the spectral-FCM can measure and subtract the autofluorescence of each cell, providing increased signal-to-noise ratios and improved resolution of dim samples, which makes it a transformative technology for the investigation of single-cell state and function. These advances make it possible to perform 11-color fluorescence analysis to visualize the movement of multilineage immune cells in KikGR-expressing mice. Thus, the novel spectral flow cytometry improves the combined use of spectrally adjacent fluorescent proteins and multicolor fluorochromes in metabolically active cells for investigations not only of the immune system but also in other research and clinical fields. © 2015 The Authors. Cytometry Part A Published by Wiley Periodicals, Inc. on behalf of ISAC PMID:26217952
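
    The extraction step can be illustrated, under simplifying assumptions, as an ordinary least-squares fit of the measured 32-channel signal to known reference spectra; the instrument's actual algorithm is not published in this abstract, so the sketch below is generic and all names are illustrative.

    ```python
    import numpy as np

    def unmix(measured, reference_spectra):
        """Least-squares unmixing of a 32-channel spectral measurement.

        measured: (32,) detector counts for one cell.
        reference_spectra: (32, n_probes) matrix whose columns are the unit
        emission spectra of each fluoroprobe (an autofluorescence column can
        be included, mimicking the paper's autofluorescence subtraction).
        Returns the estimated abundance of each probe.
        """
        abundances, *_ = np.linalg.lstsq(reference_spectra, measured, rcond=None)
        return abundances
    ```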

  20. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, "images" signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating the detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially-low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for, and subtracted from, each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values, only a few bits per spectral band, is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but spatially-low-pass, spectrally-high-pass subbands are also further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image. Alternatively, the two methods can be combined by first performing the modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
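
    A minimal sketch of the mean-subtraction step, assuming a subband stored as a 3-D array indexed (spectral plane, row, column); the stored means would be encoded into the bit stream and added back on decompression.

    ```python
    import numpy as np

    def mean_subtract(subband):
        """Remove the mean of each spatial plane of a spatially-low-pass subband.

        subband: 3-D array of wavelet coefficients, one spatial plane per
        spectral index. Returns the zero-mean data plus the per-plane means.
        """
        means = subband.mean(axis=(1, 2))            # one mean per spectral plane
        centered = subband - means[:, None, None]    # zero-mean planes compress better
        return centered, means
    ```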

  1. Volumetric display containing multiple two-dimensional color motion pictures

    NASA Astrophysics Data System (ADS)

    Hirayama, R.; Shiraki, A.; Nakayama, H.; Kakue, T.; Shimobaba, T.; Ito, T.

    2014-06-01

    We have developed an algorithm which can record multiple two-dimensional (2-D) gradated projection patterns in a single three-dimensional (3-D) object. Each recorded pattern has an individual projection direction and can only be seen from that direction. The proposed algorithm has two important features: the number of recorded patterns is theoretically infinite, and no meaningful pattern can be seen outside of the projection directions. In this paper, we expanded the algorithm to record multiple 2-D projection patterns in color. There are two popular ways of color mixing: additive and subtractive. Additive color mixing, used to mix light, is based on RGB colors, and subtractive color mixing, used to mix inks, is based on CMY colors. We devised two coloring methods based on additive and subtractive mixing, performed numerical simulations of both, and confirmed their effectiveness. We also fabricated two types of volumetric display and applied the proposed algorithm to them. One is a cubic display constructed from light-emitting diodes (LEDs) in an 8×8×8 array, whose lighting patterns are controlled by a microcomputer board. The other is made of a 7×7 array of threads, each illuminated by a projector connected to a PC. As a result of the implementation, we succeeded in recording multiple 2-D color motion pictures in the volumetric displays. Our algorithm can be applied to digital signage, media art, and so forth.
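
    The two mixing rules can be summarized in a few lines of Python; the clip-to-range additive model and the complement-based (CMY = 1 − RGB) subtractive model below are standard simplifications, not necessarily the exact formulas used by the authors.

    ```python
    import numpy as np

    # Additive mixing (light): contributions of overlapping sources sum in RGB.
    def mix_additive(colors):            # colors: (n, 3) RGB values in [0, 1]
        return np.clip(np.sum(colors, axis=0), 0.0, 1.0)

    # Subtractive mixing (ink): each layer absorbs; transmittances multiply.
    def mix_subtractive(colors):
        return np.prod(np.asarray(colors), axis=0)

    # Example: red light + green light -> yellow additively,
    # while red ink over green ink transmits almost nothing (near black).
    print(mix_additive(np.array([[1, 0, 0], [0, 1, 0]])))     # [1. 1. 0.]
    print(mix_subtractive(np.array([[1, 0, 0], [0, 1, 0]])))  # [0. 0. 0.]
    ```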

  2. A generalized time-frequency subtraction method for robust speech enhancement based on wavelet filter banks modeling of human auditory system.

    PubMed

    Shao, Yu; Chang, Chip-Hong

    2007-08-01

    We present a new speech enhancement scheme for a single-microphone system to meet the demand for quality noise reduction algorithms capable of operating at a very low signal-to-noise ratio. A psychoacoustic model is incorporated into the generalized perceptual wavelet denoising method to reduce the residual noise and improve the intelligibility of speech. The proposed method is a generalized time-frequency subtraction algorithm, which advantageously exploits the wavelet multirate signal representation to preserve the critical transient information. Simultaneous masking and temporal masking of the human auditory system are modeled by the perceptual wavelet packet transform via the frequency and temporal localization of speech components. The wavelet coefficients are used to calculate the Bark spreading energy and temporal spreading energy, from which a time-frequency masking threshold is deduced to adaptively adjust the subtraction parameters of the proposed method. An unvoiced speech enhancement algorithm is also integrated into the system to improve the intelligibility of speech. Through rigorous objective and subjective evaluations, it is shown that the proposed speech enhancement system is capable of reducing noise with little speech degradation in adverse noise environments and the overall performance is superior to several competitive methods.
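
    For orientation, classic short-time magnitude spectral subtraction, the Fourier-domain ancestor of the paper's perceptual wavelet-packet method, can be sketched as follows; the frame length, floor factor, and function names are illustrative.

    ```python
    import numpy as np
    from scipy.signal import stft, istft

    def spectral_subtract(noisy, noise, fs, beta=0.02):
        """Magnitude spectral subtraction (a simplified stand-in for the
        paper's generalized time-frequency subtraction).

        noisy: noisy speech samples; noise: a noise-only segment used to
        estimate the average noise magnitude spectrum; beta: spectral floor
        that limits musical-noise artifacts.
        """
        f, t, S = stft(noisy, fs=fs, nperseg=512)
        _, _, N = stft(noise, fs=fs, nperseg=512)
        noise_mag = np.abs(N).mean(axis=1, keepdims=True)   # average noise spectrum
        mag = np.abs(S) - noise_mag                         # subtract, then floor
        mag = np.maximum(mag, beta * np.abs(S))
        _, clean = istft(mag * np.exp(1j * np.angle(S)), fs=fs, nperseg=512)
        return clean
    ```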

  3. Real-time out-of-plane artifact subtraction tomosynthesis imaging using prior CT for scanning beam digital x-ray system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Meng, E-mail: mengwu@stanford.edu; Fahrig, Rebecca

    2014-11-01

    Purpose: The scanning beam digital x-ray system (SBDX) is an inverse-geometry fluoroscopic system with high dose efficiency and the ability to perform continuous real-time tomosynthesis in multiple planes. This system could be used for image guidance during lung nodule biopsy. However, the reconstructed images suffer from strong out-of-plane artifacts due to the small tomographic angle of the system. Methods: The authors propose an out-of-plane artifact subtraction tomosynthesis (OPAST) algorithm that utilizes a prior CT volume to augment the run-time image processing. A blur-and-add (BAA) analytical model, derived from the project-to-backproject physical model, permits the generation of tomosynthesis images that are a good approximation to the shift-and-add (SAA) reconstructed image. A computationally practical algorithm is proposed to simulate images and out-of-plane artifacts from patient-specific prior CT volumes using the BAA model. A 3D image registration algorithm to align the simulated and reconstructed images is described. The accuracy of the BAA analytical model and the OPAST algorithm was evaluated using three lung cancer patients' CT data. The OPAST and image registration algorithms were also tested with added nonrigid respiratory motions. Results: Image similarity measurements, including the correlation coefficient, mean squared error, and structural similarity index, indicated that the BAA model is very accurate in simulating the SAA images from the prior CT for the SBDX system. The shift-variant effect of the BAA model can be ignored when the shifts between SBDX images and CT volumes are within ±10 mm in the x and y directions. The nodule visibility and depth resolution are improved by subtracting simulated artifacts from the reconstructions. The image registration and OPAST are robust in the presence of added respiratory motions. The dominant artifacts in the subtraction images are caused by mismatches between the real object and the prior CT volume. Conclusions: The proposed prior CT-augmented OPAST reconstruction algorithm improves lung nodule visibility and depth resolution for the SBDX system.

  4. Demonstration of a single-wavelength spectral-imaging-based Thai jasmine rice identification

    NASA Astrophysics Data System (ADS)

    Suwansukho, Kajpanya; Sumriddetchkajorn, Sarun; Buranasiri, Prathan

    2011-07-01

    A single-wavelength spectral-imaging-based Thai jasmine rice breed identification is demonstrated. Our nondestructive identification approach relies on a combination of fluorescence imaging and simple image processing techniques. In particular, we apply simple image thresholding, blob filtering, and image subtraction to either a 545 or a 575 nm image in order to distinguish our desired Thai jasmine rice breed from others. Other key advantages include no waste product and a fast identification time. In our demonstration, UVC light is used as the excitation light, a liquid crystal tunable optical filter is used as the wavelength selector, and a digital camera with 640 × 480 active pixels is used to capture the desired spectral image. Eight Thai rice breeds having similar size and shape are tested. Our experimental proof of concept shows that by suitably applying image thresholding, blob filtering, and image subtraction to the selected fluorescence image, the Thai jasmine rice breed can be identified with measured false acceptance rates of <22.9% and <25.7% for spectral images at the 545 and 575 nm wavelengths, respectively. The measured identification time is 25 ms, showing high potential for real-time applications.

  5. A Low-Stress Algorithm for Fractions

    ERIC Educational Resources Information Center

    Ruais, Ronald W.

    1978-01-01

    An algorithm is given for the addition and subtraction of fractions based on dividing the sum of diagonal numerator and denominator products by the product of the denominators. As an explanation of the teaching method, activities used in teaching are demonstrated. (MN)
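
    In symbols, the rule is a/b + c/d = (ad + bc)/(bd); a direct Python rendering, with the final reduction step added for completeness:

    ```python
    from math import gcd

    def add_fractions(a, b, c, d):
        """a/b + c/d by the 'low-stress' rule: the sum of the diagonal (cross)
        products over the product of the denominators, reduced at the end."""
        num = a * d + c * b          # diagonal products
        den = b * d                  # product of the denominators
        g = gcd(num, den)
        return num // g, den // g

    print(add_fractions(1, 2, 1, 3))   # (5, 6): 1/2 + 1/3 = (1*3 + 1*2)/(2*3)
    ```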

  6. Excess Hα emission in chromospherically active binaries.

    NASA Astrophysics Data System (ADS)

    Montes, D.; Fernandez-Figueroa, M. J.; de Castro, E.; Cornide, M.

    1995-02-01

    We study the behaviour of the excess Hα emission in a sample of 51 chromospherically active binary systems (RS CVn and BY Dra classes) of different activity levels. This sample includes the 27 stars analysed by Fernandez-Figueroa et al. (1994) and the new observations of 24 systems described by Montes et al. (1994b). By using the spectral subtraction technique (subtraction of a synthesized stellar spectrum constructed from reference stars of similar spectral type and luminosity class) we obtain the active-chromosphere contribution to the Hα line in these 51 systems. We have determined the excess Hα emission equivalent widths and converted them to surface fluxes. The Hα emission arising from each component star was obtained when it was possible to deblend both contributions. The comparison of the excess Hα emission, obtained with the spectral subtraction technique, with other Hα activity indices allows us to conclude that this is the preferable activity indicator for binaries. The behaviour of the excess Hα emission as a function of rotation has been analyzed. A slight decline toward longer rotational periods, P_rot_, and larger Rossby numbers, R_0_, is present, in agreement with previous results using other activity indicators. We have compared the derived excess Hα emission fluxes with those obtained in the Ca II K and Hɛ lines, finding that a good correlation exists between these three chromospheric activity indicators. The Hα losses seem to be more important than the Ca II K losses for cooler stars; in fact, all the systems with Hα emission above the continuum are cooler than 5000 K. Correlations with other activity indicators (C IV in the transition region, and X-rays in the corona) indicate that the exponents of the power-law relations increase with the formation temperature of the spectral features.
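
    A minimal sketch of the spectral subtraction technique, assuming continuum-normalized spectra on a common wavelength grid; the integration window half-width is illustrative, not a value from the paper.

    ```python
    import numpy as np

    def excess_emission_ew(wave, flux, template):
        """Excess H-alpha emission via the spectral subtraction technique.

        wave: wavelengths (Angstrom); flux: continuum-normalized spectrum of
        the active binary; template: synthesized inactive spectrum of matching
        spectral type and luminosity class. Integrating the residual over the
        line gives the excess-emission equivalent width (positive = emission).
        """
        residual = flux - template                 # chromospheric contribution
        in_line = np.abs(wave - 6562.8) < 10.0     # +/- 10 A window (illustrative)
        return np.trapz(residual[in_line], wave[in_line])
    ```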

  7. Mitigating fluorescence spectral overlap in wide-field endoscopic imaging

    PubMed Central

    Hou, Vivian; Nelson, Leonard Y.; Seibel, Eric J.

    2013-01-01

    The number of molecular species suitable for multispectral fluorescence imaging is limited due to the overlap of the emission spectra of indicator fluorophores, e.g., dyes and nanoparticles. To remove fluorophore emission cross-talk in wide-field multispectral fluorescence molecular imaging, we evaluate three different solutions: (1) image stitching, (2) concurrent imaging with a cross-talk ratio subtraction algorithm, and (3) frame-sequential imaging. A phantom with fluorophore emission cross-talk is fabricated, and a 1.2-mm ultrathin scanning fiber endoscope (SFE) is used to test and compare these approaches. Results show that fluorophore emission cross-talk could be successfully avoided or significantly reduced. Near term, the concurrent imaging method of wide-field multispectral fluorescence SFE is viable for early stage cancer detection and localization in vivo. Furthermore, a means to enhance the exogenous fluorescence target-to-background ratio by reduction of the tissue autofluorescence background is demonstrated. PMID:23966226

  8. Theoretical and Monte Carlo optimization of a stacked three-layer flat-panel x-ray imager for applications in multi-spectral diagnostic medical imaging

    NASA Astrophysics Data System (ADS)

    Lopez Maurino, Sebastian; Badano, Aldo; Cunningham, Ian A.; Karim, Karim S.

    2016-03-01

    We propose a new design of a stacked three-layer flat-panel x-ray detector for dual-energy (DE) imaging. Each layer consists of its own scintillator of individual thickness and an underlying thin-film-transistor-based flat-panel. Three images are obtained simultaneously in the detector during the same x-ray exposure, thereby eliminating any motion artifacts. The detector operation is two-fold: a conventional radiography image can be obtained by combining all three layers' images, while a DE subtraction image can be obtained from the front and back layers' images, where the middle layer acts as a mid-filter that helps achieve spectral separation. We proceed to optimize the detector parameters for two sample imaging tasks that could particularly benefit from this new detector by obtaining the best possible signal to noise ratio per root entrance exposure using well-established theoretical models adapted to fit our new design. These results are compared to a conventional DE temporal subtraction detector and a single-shot DE subtraction detector with a copper mid-filter, both of which underwent the same theoretical optimization. The findings are then validated using advanced Monte Carlo simulations for all optimized detector setups. Given the performance expected from initial results and the recent decrease in price for digital x-ray detectors, the simplicity of the three-layer stacked imager approach appears promising to usher in a new generation of multi-spectral digital x-ray diagnostics.

  9. Demonstration of spectral correlation control in a source of polarization-entangled photon pairs at telecom wavelength.

    PubMed

    Lutz, Thomas; Kolenderski, Piotr; Jennewein, Thomas

    2014-03-15

    Spectrally correlated photon pairs can be used to improve the performance of long-range fiber-based quantum communication protocols. We present a source based on spontaneous parametric downconversion, which allows one to control spectral correlations within the entangled photon pair without spectral filtering, by changing the pump-pulse duration or the characteristics of the coupled spatial modes. The spectral correlations and polarization entanglement are characterized. We find that the generated photon pairs can feature positive spectral correlations, decorrelation, or negative correlations, simultaneously with polarization entanglement of high fidelity, 0.97 (no background subtraction), with the expected Bell state.

  10. Data Processing Algorithm for Diagnostics of Combustion Using Diode Laser Absorption Spectrometry.

    PubMed

    Mironenko, Vladimir R; Kuritsyn, Yuril A; Liger, Vladimir V; Bolshov, Mikhail A

    2018-02-01

    A new algorithm for evaluating the integral line intensity, needed to infer the correct temperature of a hot zone in combustion diagnostics by absorption spectroscopy with diode lasers, is proposed. The algorithm is based not on fitting the baseline (BL) but on expanding the experimental and simulated spectra in a series of orthogonal polynomials, subtracting the first three components of the expansion from both the experimental and simulated spectra, and fitting the spectra thus modified. The algorithm is tested in a numerical experiment by simulating the absorption spectra using a spectroscopic database and adding white noise and a parabolic BL. The spectra so constructed are treated as experimental in further calculations. The theoretical absorption spectra were simulated with parameters (temperature, total pressure, concentration of water vapor) close to those used for simulating the experimental data. Then both spectra were expanded in the series of orthogonal polynomials and the first components were subtracted from both. The correct integral line intensities, and hence the correct temperature evaluation, were obtained by fitting the experimental and simulated spectra modified in this way. The dependence of the mean and standard deviation of the integral line intensity estimate on the linewidth and on the number of subtracted components (first two or three) was examined. The proposed algorithm provides a correct estimation of temperature with a standard deviation better than 60 K (for T = 1000 K) for line half-widths up to 0.6 cm^-1. The proposed algorithm allows the parameters of a hot zone to be obtained without fitting the usually unknown BL.
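
    A hedged sketch of the expansion-and-subtraction step, using Legendre polynomials as one possible orthogonal basis (the abstract does not name the basis); applied identically to the experimental and simulated spectra, it removes the slowly varying baseline without ever fitting it explicitly.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    def strip_low_order(spectrum, n_components=3):
        """Remove the first n_components of a Legendre expansion of a spectrum.

        spectrum: 1-D array of absorbance samples. The low-order components
        absorb the unknown baseline; what remains is fitted for the integral
        line intensity.
        """
        x = np.linspace(-1.0, 1.0, spectrum.size)    # domain for orthogonality
        coeffs = legendre.legfit(x, spectrum, deg=n_components - 1)
        return spectrum - legendre.legval(x, coeffs)
    ```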

  11. Two-step digit-set-restricted modified signed-digit addition-subtraction algorithm and its optoelectronic implementation.

    PubMed

    Qian, F; Li, G; Ruan, H; Jing, H; Liu, L

    1999-09-10

    A novel, to our knowledge, two-step digit-set-restricted modified signed-digit (MSD) addition-subtraction algorithm is proposed. With the introduction of reference digits, the operand words are mapped into an intermediate carry word with all digits restricted to the set {-1, 0} and an intermediate sum word with all digits restricted to the set {0, 1}, which can be summed to form the final result without carry generation. The operation can be performed in parallel using binary logic. An optical system that utilizes an electron-trapping device is suggested for accomplishing the required binary logic operations. By programming the illumination of the data arrays, any complex logic operation of multiple variables can be realized without additional temporal latency for the intermediate results. This technique has a high space-bandwidth product and signal-to-noise ratio. The main structure can be stacked to construct a compact optoelectronic MSD adder-subtracter.

  12. Correction of Atmospheric Haze in RESOURCESAT-1 LISS-4 MX Data for Urban Analysis: AN Improved Dark Object Subtraction Approach

    NASA Astrophysics Data System (ADS)

    Mustak, S.

    2013-09-01

    The correction of atmospheric effects is essential because visible bands of shorter wavelength are strongly affected by atmospheric scattering, especially Rayleigh scattering. The objectives of this paper are to find the haze values present in all spectral bands and to correct them for urban analysis. In this paper, the Improved Dark Object Subtraction method of Chavez (1988) is applied to correct atmospheric haze in Resourcesat-1 LISS-4 multispectral satellite imagery. Dark Object Subtraction is a very simple image-based atmospheric haze correction which assumes that there are at least a few pixels within an image which should be black (0% reflectance); such dark objects, e.g., clear water bodies and shadows, have DN values of zero or close to zero in the image. The Simple Dark Object Subtraction method is a first-order atmospheric correction, whereas the Improved Dark Object Subtraction method corrects the haze in terms of atmospheric scattering and path radiance based on a power law describing the relative scattering effect of the atmosphere. The haze values extracted using the Simple Dark Object Subtraction method for the Green band (Band 2), Red band (Band 3), and NIR band (Band 4) are 40, 34, and 18, whereas the haze values extracted using the Improved Dark Object Subtraction method are 40, 18.02, and 11.80 for the aforesaid bands. It is concluded that the haze values extracted by the Improved Dark Object Subtraction method provide more realistic results than those of the Simple Dark Object Subtraction method.
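
    A hedged sketch of the improved correction, following Chavez's idea of scaling the haze measured in a reference band by a relative-scattering power law rather than reading each band's haze independently; the exponent n is scene-dependent and the value used here is illustrative.

    ```python
    import numpy as np

    def improved_dos(bands, wavelengths, haze_ref, ref_index=0, n=4.0):
        """Improved dark-object subtraction (after Chavez, 1988).

        bands: list of 2-D DN arrays; wavelengths: band-center wavelengths (um);
        haze_ref: haze DN read from the dark object in the reference band.
        n ~ 4 corresponds to a very clear atmosphere; hazier conditions call
        for smaller exponents.
        """
        lam0 = wavelengths[ref_index]
        corrected = []
        for band, lam in zip(bands, wavelengths):
            haze = haze_ref * (lam / lam0) ** (-n)   # power-law haze estimate
            corrected.append(np.clip(band.astype(float) - haze, 0, None))
        return corrected
    ```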

  13. Parallel optoelectronic trinary signed-digit division

    NASA Astrophysics Data System (ADS)

    Alam, Mohammad S.

    1999-03-01

    The trinary signed-digit (TSD) number system has been found to be very useful for parallel addition and subtraction of any arbitrary length operands in constant time. Using the TSD addition and multiplication modules as the basic building blocks, we develop an efficient algorithm for performing parallel TSD division in constant time. The proposed division technique uses one TSD subtraction and two TSD multiplication steps. An optoelectronic correlator based architecture is suggested for implementation of the proposed TSD division algorithm, which fully exploits the parallelism and high processing speed of optics. An efficient spatial encoding scheme is used to ensure better utilization of space bandwidth product of the spatial light modulators used in the optoelectronic implementation.

  14. An automatic detection software for differential reflection spectroscopy

    NASA Astrophysics Data System (ADS)

    Yuksel, Seniha Esen; Dubroca, Thierry; Hummel, Rolf E.; Gader, Paul D.

    2012-06-01

    Recent terrorist attacks have spurred the need for a large-scale explosive detector. Our group has developed differential reflection spectroscopy, which can detect explosive residue on surfaces such as parcels, cargo, and luggage. In short, broadband ultraviolet and visible light is shone onto a material (such as a parcel) moving on a conveyor belt. Upon reflection off the surface, the light intensity is recorded with a spectrograph (a spectrometer in combination with a CCD camera). This reflected light intensity is then subtracted from and normalized by the next data point collected, resulting in differential reflection spectra in the 200-500 nm range. Explosives show spectral fingerprints at specific wavelengths; for example, the spectrum of 2,4,6-trinitrotoluene (TNT) shows an absorption edge at 420 nm. Additionally, we have developed automated software which detects the characteristic features of explosives. One of the biggest challenges for the algorithm is to reach a practical limit of detection. In this study, we introduce our automatic detection software, which is a combination of principal component analysis and support vector machines. Finally, we present the sensitivity and selectivity of our algorithm as a function of the amount of explosive detected on a given surface.
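
    The pipeline the abstract describes, principal component analysis for dimensionality reduction followed by a support vector machine classifier, maps directly onto scikit-learn; the data below are synthetic placeholders for the unpublished training spectra, and the component count and kernel choice are illustrative.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # X: differential reflection spectra (n_samples, n_wavelengths in 200-500 nm);
    # y: 1 for explosive residue present, 0 for a clean surface. Placeholders only.
    X = np.random.rand(200, 300)
    y = np.random.randint(0, 2, 200)

    detector = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
    detector.fit(X, y)
    print(detector.predict(X[:5]))    # flag spectra carrying explosive signatures
    ```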

  15. Reducing false-positive detections by combining two stage-1 computer-aided mass detection algorithms

    NASA Astrophysics Data System (ADS)

    Bedard, Noah D.; Sampat, Mehul P.; Stokes, Patrick A.; Markey, Mia K.

    2006-03-01

    In this paper we present a strategy for reducing the number of false-positives in computer-aided mass detection. Our approach is to only mark "consensus" detections from among the suspicious sites identified by different "stage-1" detection algorithms. By "stage-1" we mean that each of the Computer-aided Detection (CADe) algorithms is designed to operate with high sensitivity, allowing for a large number of false positives. In this study, two mass detection methods were used: (1) Heath and Bowyer's algorithm based on the average fraction under the minimum filter (AFUM) and (2) a low-threshold bi-lateral subtraction algorithm. The two methods were applied separately to a set of images from the Digital Database for Screening Mammography (DDSM) to obtain paired sets of mass candidates. The consensus mass candidates for each image were identified by a logical "and" operation of the two CADe algorithms so as to eliminate regions of suspicion that were not independently identified by both techniques. It was shown that by combining the evidence from the AFUM filter method with that obtained from bi-lateral subtraction, the same sensitivity could be reached with fewer false-positives per image relative to using the AFUM filter alone.
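
    A minimal sketch of the consensus step, treating each stage-1 output as a list of candidate coordinates; the matching radius is an illustrative tolerance, not a value from the paper.

    ```python
    import numpy as np

    def consensus_sites(sites_a, sites_b, radius=10.0):
        """'Logical AND' of two stage-1 candidate lists.

        sites_a, sites_b: (N, 2) arrays of (row, col) mass candidates from the
        AFUM filter and the bilateral-subtraction detector. A candidate is
        kept only if the other algorithm found one within `radius` pixels.
        """
        keep = []
        for p in sites_a:
            if np.any(np.hypot(*(sites_b - p).T) <= radius):
                keep.append(p)
        return np.array(keep)
    ```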

  16. [Affine transformation-based automatic registration for peripheral digital subtraction angiography (DSA)].

    PubMed

    Kong, Gang; Dai, Dao-Qing; Zou, Lu-Min

    2008-07-01

    In order to remove the artifacts of peripheral digital subtraction angiography (DSA), an affine transformation-based automatic image registration algorithm is introduced here. The whole process is described as follows: First, rectangular feature templates are constructed, centered on Harris corners extracted from the mask image, and the motion vectors of the central feature points are estimated using template matching with maximum histogram energy as the similarity measure. Then the optimal parameters of the affine transformation are calculated with the matrix singular value decomposition (SVD) method. Finally, bilinear intensity interpolation is applied to the mask according to the resulting affine transformation. More than 30 peripheral DSA registrations were performed with the presented algorithm; as a result, motion artifacts were removed with sub-pixel precision, and the computation time is low enough to satisfy clinical requirements. Experimental results show the efficiency and robustness of the algorithm.
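
    The affine-estimation step can be sketched as a linear least-squares problem over the matched point pairs; NumPy's lstsq solves it via the SVD, echoing the paper's use of singular value decomposition. The function name and return convention are illustrative.

    ```python
    import numpy as np

    def fit_affine(src, dst):
        """Least-squares affine transform mapping src points onto dst points.

        src, dst: (N, 2) matched feature-point coordinates, N >= 3.
        Returns (A, t) such that dst ~= src @ A.T + t.
        """
        ones = np.ones((len(src), 1))
        M, *_ = np.linalg.lstsq(np.hstack([src, ones]), dst, rcond=None)
        A, t = M[:2].T, M[2]
        return A, t
    ```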

  17. Quantitative Analysis of Drugs with Highly Different Concentrations of Pharmaceutical Components Using Spectral Subtraction Techniques

    NASA Astrophysics Data System (ADS)

    Ayoub, B. M.

    2017-11-01

    Two simple spectrophotometric methods were developed for the determination of empagliflozin and metformin by manipulating their ratio spectra, with application to a recently approved pharmaceutical combination, Synjardy® tablets. A spiking technique was used to increase the concentration of empagliflozin after extraction from the tablets to allow its simultaneous determination with metformin. Validation parameters according to ICH guidelines were acceptable over the concentration range of 2-12 μg/mL for both drugs using the constant multiplication and spectrum subtraction methods. The optimized methods are suitable for QC labs.

  18. Research on the algorithm of infrared target detection based on the frame difference and background subtraction method

    NASA Astrophysics Data System (ADS)

    Liu, Yun; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Hui, Mei; Liu, Xiaohua; Wu, Yijian

    2015-09-01

    As an important branch of infrared imaging technology, infrared target tracking and detection has great scientific value and a wide range of applications in both military and civilian areas. For infrared images, which are characterized by low SNR and serious disturbance from background noise, an innovative and effective target detection algorithm is proposed in this paper, exploiting the frame-to-frame correlation of moving targets and the lack of correlation of noise in sequential images, implemented with OpenCV. Firstly, since temporal differencing and background subtraction are very complementary, we use a combined detection method of frame difference and background subtraction based on adaptive background updating. Results indicate that it is simple and can stably extract the foreground moving target from a video sequence. Because the background updating mechanism continuously updates each pixel, we can detect the infrared moving target more accurately. This paves the way for eventually realizing real-time infrared target detection and tracking, once the OpenCV algorithms are ported to a DSP platform. Afterwards, we use optimal thresholding to segment the image, transforming the gray images into binary images in order to provide a better basis for detection in the image sequences. Finally, exploiting the relevance of moving objects between different frames and mathematical morphology processing, we can eliminate noise, reduce spurious areas, and smooth region boundaries. Experimental results prove that our algorithm precisely achieves the purpose of rapid detection of small infrared targets.
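
    A compact sketch of the combined frame-difference/background-subtraction rule with adaptive background updating; the thresholds and learning rate are illustrative, and the paper's OpenCV implementation may differ in detail.

    ```python
    import numpy as np

    def detect(frames, alpha=0.05, t_diff=25, t_bg=25):
        """Combined frame-difference / background-subtraction detection.

        A pixel is declared foreground only if it changed since the previous
        frame AND differs from the adaptive background, which suppresses both
        ghosting (a frame-difference weakness) and stale-background errors.
        """
        prev, bg = None, None
        for frame in frames:
            f = frame.astype(np.float32)
            if prev is None:
                prev, bg = f, f.copy()
                continue
            moving = np.abs(f - prev) > t_diff       # frame difference
            changed = np.abs(f - bg) > t_bg          # background subtraction
            target = moving & changed
            # Adaptive update: refresh the background only where no target is found.
            bg = np.where(target, bg, (1 - alpha) * bg + alpha * f)
            prev = f
            yield target
    ```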

  19. VizieR Online Data Catalog: Excess CaII H&K emission in active binaries (Montes+, 1996)

    NASA Astrophysics Data System (ADS)

    Montes, D.; Fernandez-Figueroa, M. J.; Cornide, M.; de Castro, E.

    1996-05-01

    In this work we analyze the behaviour of the excess CaII H & K and H_epsilon emissions in a sample of 73 chromospherically active binary systems (RS CVn and BY Dra classes), of different activity levels and luminosity classes. This sample includes the 53 stars analyzed by Fernandez-Figueroa et al. (1994) and the observations of 28 systems described by Montes et al. (1995). By using the spectral subtraction technique (subtraction of a synthesized stellar spectrum constructed from reference stars of spectral type and luminosity class similar to those of the binary star components) we obtain the active-chromosphere contribution to the CaII H & K lines in these 73 systems. We have determined the excess CaII H & K emission equivalent widths and converted them into surface fluxes. The emissions arising from each component were obtained when it was possible to deblend both contributions. (4 data files).

  20. Fast and fully automatic phalanx segmentation using a grayscale-histogram morphology algorithm

    NASA Astrophysics Data System (ADS)

    Hsieh, Chi-Wen; Liu, Tzu-Chiang; Jong, Tai-Lang; Chen, Chih-Yen; Tiu, Chui-Mei; Chan, Din-Yuen

    2011-08-01

    Bone age assessment is a common radiological examination used in pediatrics to diagnose discrepancies between the skeletal and chronological age of a child; therefore, it is beneficial to develop a computer-based bone age assessment to help junior pediatricians estimate bone age easily. Unfortunately, the phalanges on radiograms are not easily separated from the background and soft tissue. Therefore, we proposed a new method, called the grayscale-histogram morphology algorithm, to segment the phalanges quickly and precisely. The algorithm includes three parts: a tri-stage sieve algorithm used to eliminate the background of hand radiograms, a centroid-edge dual scanning algorithm to frame the phalanx region, and finally a segmentation algorithm based on a disk traverse-subtraction filter to segment the phalanx. Moreover, two further segmentation methods, adaptive two-mean and adaptive two-mean clustering, were performed, and their results were compared with those of the disk traverse-subtraction segmentation algorithm using five indices comprising misclassification error, relative foreground area error, modified Hausdorff distance, edge mismatch, and region nonuniformity. In addition, the CPU time of the three segmentation methods was discussed. The results showed that our method performed better than the other two methods. Furthermore, satisfactory segmentation results were obtained with a low standard error.

  1. Analytical optimization of digital subtraction mammography with contrast medium using a commercial unit.

    PubMed

    Rosado-Méndez, I; Palma, B A; Brandan, M E

    2008-12-01

    Contrast-medium-enhanced digital mammography (CEDM) is an image subtraction technique which might help unmask lesions embedded in very dense breasts. Previous works have established the feasibility of CEDM and the imperative need for radiological optimization. This work presents an extension of a former analytical formalism to predict the contrast-to-noise ratio (CNR) in subtracted mammograms. The goal is to optimize the radiological parameters available in a clinical mammographic unit (x-ray tube anode/filter combination, voltage, and loading) by maximizing CNR and minimizing total mean glandular dose (D(gT)), simulating the experimental application of an iodine-based contrast medium and image subtraction in dual-energy nontemporal and single- or dual-energy temporal modalities. Total breast-entrance air kerma is limited to a fixed 8.76 mGy (1 R, similar to screening studies). Mathematical expressions obtained from the formalism are evaluated using computed mammographic x-ray spectra attenuated by an adipose/glandular breast containing an elongated structure filled with an iodinated solution in various concentrations. A systematic study of contrast, its associated variance, and CNR for different spectral combinations is performed, concluding with the proposal of optimum x-ray spectra. The linearity between contrast in subtracted images and iodine mass thickness is proven, including the determination of iodine visualization limits based on Rose's detection criterion. Finally, the total breast-entrance air kerma is distributed between both images in various proportions in order to maximize the figure of merit CNR^2/D(gT). Predicted results indicate the advantage of temporal subtraction (in either single- or dual-energy modality), with optimum parameters corresponding to high-voltage, strongly hardened Rh/Rh spectra. For temporal techniques, CNR was found to depend mostly on the energy of the iodinated image, and thus a reduction in D(gT) could be achieved if the spectral energy of the noniodinated image is decreased and the breast-entrance air kerma is evenly distributed between both acquisitions. The predicted limits, in terms of iodine concentration, are found to guarantee the visualization of common clinical angiogenic concentrations in the breast.

  2. Spectral assessment of new ASTER SWIR surface reflectance data products for spectroscopic mapping of rocks and minerals

    USGS Publications Warehouse

    Mars, J.C.; Rowan, L.C.

    2010-01-01

    ASTER reflectance spectra from Cuprite, Nevada, and Mountain Pass, California, were compared to spectra of field samples and to ASTER-resampled AVIRIS reflectance data to determine the spectral accuracy and spectroscopic mapping potential of two new ASTER SWIR reflectance datasets: RefL1b and AST_07XT. RefL1b is a new reflectance dataset produced for this study using ASTER Level 1B data, crosstalk correction, radiance correction factors, and concurrently acquired Level 2 MODIS water vapor data. The AST_07XT data product, available from EDC and ERSDAC, incorporates crosstalk correction and non-concurrently acquired MODIS water vapor data for atmospheric correction. Spectral accuracy was determined using difference values compiled from ASTER band 5/6 and 9/8 ratios of AST_07XT or RefL1b data subtracted from similar ratios calculated for field sample and AVIRIS reflectance data. In addition, Spectral Analyst, a statistical program that utilizes a Spectral Feature Fitting algorithm, was used to quantitatively assess the spectral accuracy of AST_07XT and RefL1b data. Spectral Analyst matched more minerals correctly and had higher scores for the RefL1b data than for the AST_07XT data. The radiance correction factors used in the RefL1b data corrected a low band 5 reflectance anomaly observed in the AST_07XT and AST_07 data, but also produced anomalously high band 5 reflectance in RefL1b spectra for minerals with strong band 5 absorption, such as alunite. Thus, the band 5 anomaly seen in the RefL1b data cannot be corrected using additional gain adjustments. In addition, the use of concurrent MODIS water vapor data in the atmospheric correction of the RefL1b data produced datasets that had lower band 9 reflectance anomalies than the AST_07XT data. Although the assessment of spectral data suggests that RefL1b data are more consistent and spectrally more correct than AST_07XT data, the Spectral Analyst results indicate that spectral discrimination between some minerals, such as alunite and kaolinite, is still not possible unless additional spectral calibration using site-specific spectral data is performed. © 2010.

  3. HYMOSS signal processing for pushbroom spectral imaging

    NASA Technical Reports Server (NTRS)

    Ludwig, David E.

    1991-01-01

    The objective of the Pushbroom Spectral Imaging Program was to develop on-focal plane electronics which compensate for detector array non-uniformities. The approach taken was to implement a simple two point calibration algorithm on focal plane which allows for offset and linear gain correction. The key on focal plane features which made this technique feasible were the use of a high quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate and dump TIA. Offset correction is performed by storing offsets in a special on focal plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program which proved that nonuniformity compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated the following innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future IC's because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, reduced number of dewar connections, as well as the noise immunity of a digital interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems. These imaging systems may be used for: security monitoring systems, manufacturing process monitoring, robotics, and for spectral imaging when used in analytical instrumentation.
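
    The two-point calibration itself reduces to an offset subtraction and a gain normalization per detector channel. A digital-domain sketch follows; on the HYMOSS chip the gain step is done in analog, by trimming the TIA feedback capacitance, so the formulation below is an illustration rather than the chip's circuit.

    ```python
    import numpy as np

    def two_point_calibrate(raw, dark, flat):
        """Two-point non-uniformity correction.

        raw, dark, flat: per-detector readings of the scene, a dark (zero-flux)
        reference, and a uniform bright reference. Offsets are subtracted, then
        each channel's gain is normalized so a uniform scene reads uniformly.
        """
        gain = np.mean(flat - dark) / (flat - dark)   # per-detector gain correction
        return (raw - dark) * gain
    ```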

  4. HYMOSS signal processing for pushbroom spectral imaging

    NASA Astrophysics Data System (ADS)

    Ludwig, David E.

    1991-06-01

    The objective of the Pushbroom Spectral Imaging Program was to develop on-focal plane electronics which compensate for detector array non-uniformities. The approach taken was to implement a simple two point calibration algorithm on focal plane which allows for offset and linear gain correction. The key on focal plane features which made this technique feasible was the use of a high quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate and dump TIA. Offset correction is performed by storing offsets in a special on focal plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program which proved that nonuniformity compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated the following innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future IC's because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, reduced number of dewar connections, as well as the noise immunity of a digital interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems. These imaging systems may be used for: security monitoring systems, manufacturing process monitoring, robotics, and for spectral imaging when used in analytical instrumentation.

  5. Observer model optimization of a spectral mammography system

    NASA Astrophysics Data System (ADS)

    Fredenberg, Erik; Åslund, Magnus; Cederström, Björn; Lundqvist, Mats; Danielsson, Mats

    2010-04-01

    Spectral imaging is a method in medical x-ray imaging to extract information about the object constituents by the material-specific energy dependence of x-ray attenuation. Contrast-enhanced spectral imaging has been thoroughly investigated, but unenhanced imaging may be more useful because it comes as a bonus to the conventional non-energy-resolved absorption image at screening; there is no additional radiation dose and no need for contrast medium. We have used a previously developed theoretical framework and system model that include quantum and anatomical noise to characterize the performance of a photon-counting spectral mammography system with two energy bins for unenhanced imaging. The theoretical framework was validated with synthesized images. Optimal combination of the energy-resolved images for detecting large unenhanced tumors corresponded closely, but not exactly, to minimization of the anatomical noise, which is commonly referred to as energy subtraction. In that case, an ideal-observer detectability index could be improved by close to 50% compared to absorption imaging. Optimization with respect to the signal-to-quantum-noise ratio, commonly referred to as energy weighting, deteriorated detectability. For small microcalcifications or tumors on uniform backgrounds, however, energy subtraction was suboptimal, whereas energy weighting provided a minute improvement. The performance was largely independent of beam quality, detector energy resolution, and bin count fraction. It is clear that the inclusion of anatomical noise and the imaging task in spectral optimization may yield completely different results than an analysis based solely on quantum noise.

  6. Photoelectrochromism in Tungsten Trioxide Colloidal Solutions

    ERIC Educational Resources Information Center

    Chenthamarakshan, C. R.; Tacconi, N. R. de; Xu, Lucy; Rajeshwar, Krishnan

    2004-01-01

    Photophysical and photochemical properties of semiconductor metal oxide colloids are studied in the context of photoelectrochemical conversion and storage of solar energy. The experiment teaches the instrumental principles of UV-visible spectrophotometry, spectral acquisition and background subtraction strategies and diode array spectrometers.

  7. Sky Subtraction with Fiber-Fed Spectrograph

    NASA Astrophysics Data System (ADS)

    Rodrigues, Myriam

    2017-09-01

    "Historically, fiber-fed spectrographs had been deemed inadequate for the observation of faint targets, mainly because of the difficulty to achieve high accuracy on the sky subtraction. The impossibility to sample the sky in the immediate vicinity of the target in fiber instruments has led to a commonly held view that a multi-object fibre spectrograph cannot achieve an accurate sky subtraction under 1% contrary to their slit counterpart. The next generation of multi-objects spectrograph at the VLT (MOONS) and the planed MOS for the E-ELT (MOSAIC) are fiber-fed instruments, and are aimed to observed targets fainter than the sky continuum level. In this talk, I will present the state-of-art on sky subtraction strategies and data reduction algorithm specifically developed for fiber-fed spectrographs. I will also present the main results of an observational campaign to better characterise the sky spatial and temporal variations ( in particular the continuum and faint sky lines)."

  8. An adaptive demodulation approach for bearing fault detection based on adaptive wavelet filtering and spectral subtraction

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Tang, Baoping; Liu, Ziran; Chen, Rengxiang

    2016-02-01

    Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noises and by the interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation has been an effective technique for detecting bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequencies associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the production functions (PFs). The bandwidth parameter is optimized using a local ‘greedy’ scheme along with a Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is carried out after wavelet filtering. Instead of resorting to a reference signal, as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting the fault-related impulses hidden in vibration signals and performs well for bearing fault diagnosis.
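
    A simplified stand-in for the demodulation chain, using a Butterworth bandpass in place of the paper's optimized Morlet wavelet filter and Welch PSD estimates for the spectral subtraction step; the band edges, orders, and segment lengths are illustrative.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert, welch

    def envelope_spectrum(x, fs, band):
        """Narrow-band envelope analysis of a vibration signal.

        x: vibration samples; fs: sampling rate (Hz); band: (low, high) Hz.
        Returns (frequencies, PSD) of the demodulated envelope, in which
        bearing fault characteristic frequencies appear as peaks.
        """
        b, a = butter(4, np.array(band) / (fs / 2), btype="band")
        env = np.abs(hilbert(filtfilt(b, a, x)))     # demodulated envelope
        return welch(env - env.mean(), fs=fs, nperseg=4096)

    def subtract_noise_floor(pxx, noise_pxx, floor=1e-12):
        """Spectral subtraction of an estimated in-band noise PSD, clipped so
        the result stays positive; fault peaks stand out in the residual."""
        return np.maximum(pxx - noise_pxx, floor)
    ```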

  9. Spectral domain optical coherence tomography with dual-balanced detection

    NASA Astrophysics Data System (ADS)

    Bo, En; Liu, Xinyu; Chen, Si; Luo, Yuemei; Wang, Nanshuo; Wang, Xianghong; Liu, Linbo

    2016-03-01

    We developed a spectral domain optical coherence tomography (SD-OCT) system employing dual-balanced detection (DBD) for direct-current term suppression and SNR enhancement, and especially for autocorrelation artifact reduction. The DBD was achieved by using a beam splitter to build a free-space Michelson interferometer, which generated two interferometric spectra with a phase difference of π. These two phase-opposed spectra were guided to the spectrometer through two single-mode fibers of an 8-fiber v-groove array and acquired using the upper two lines of a three-line CCD camera. We rotated this fiber v-groove array by 1.35 degrees to focus the two spectra onto the first and second lines of the CCD camera. The two spectra were aligned by an optimum spectrum-matching algorithm. By subtracting one spectrum from the other, this dual-balanced detection system experimentally achieved direct-current term suppression of ~30 dB, SNR enhancement of ~3 dB, and autocorrelation artifact reduction of ~10 dB. Finally, we validated the feasibility and performance of dual-balanced detection by imaging a glass plate and swine corneal tissue ex vivo. The quality of images obtained using dual-balanced detection was significantly improved relative to conventional single-detection (SD) images.
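
    After the two camera lines are aligned, the balanced output is a scaled subtraction. A minimal sketch follows; the common-mode scaling shown is an assumption for illustration, not the authors' exact spectrum-matching procedure.

    ```python
    import numpy as np

    def balanced_spectrum(s_plus, s_minus):
        """Dual-balanced detection of two phase-opposed spectra.

        The interference fringes in the two spectra have opposite sign, so
        their difference doubles the fringes while the common DC and
        autocorrelation terms cancel. Assumes the two spectra are already
        aligned pixel-to-pixel.
        """
        # Scale one channel so the common-mode background matches before subtracting.
        scale = np.sum(s_plus) / np.sum(s_minus)
        return s_plus - scale * s_minus
    ```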

  10. Estimating the marine signal in the near infrared for atmospheric correction of satellite ocean-color imagery over turbid waters

    NASA Astrophysics Data System (ADS)

    Bourdet, Alice; Frouin, Robert J.

    2014-11-01

    The classic atmospheric correction algorithm, routinely applied to second-generation ocean-color sensors such as SeaWiFS, MODIS, and MERIS, consists of (i) estimating the aerosol reflectance in the red and near infrared (NIR), where the ocean is considered black (i.e., totally absorbing), and (ii) extrapolating the estimated aerosol reflectance to shorter wavelengths. The marine reflectance is then retrieved by subtraction. Variants and improvements have been made over the years to deal with non-null reflectance in the red and near infrared, a common situation in estuaries and the coastal zone, but the solutions proposed so far still suffer some limitations, due to uncertainties in marine reflectance modeling in the near infrared or the difficulty of extrapolating the aerosol signal to the blue when using observations in the shortwave infrared (SWIR), a spectral range far from the ocean-color wavelengths. To estimate the marine signal (i.e., the product of marine reflectance and atmospheric transmittance) in the near infrared, the proposed approach is to decompose the aerosol reflectance in the near infrared to shortwave infrared into principal components. Since aerosol scattering is spectrally smooth, a few components are generally sufficient to represent the perturbing signal; that is, the aerosol reflectance in the near infrared can be determined from measurements in the shortwave infrared, where the ocean is black. This gives access to the marine signal in the near infrared, which can then be used in the classic atmospheric correction algorithm. The methodology is evaluated theoretically from simulations of the top-of-atmosphere reflectance for a wide range of geophysical conditions and angular geometries, and applied to actual MODIS imagery acquired over the Gulf of Mexico. The number of discarded pixels is reduced by over 80% when using the PC modeling to determine the marine signal in the near infrared prior to applying the classic atmospheric correction algorithm.
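
    A hedged sketch of the PC-based estimate for a single pixel, assuming the principal components of simulated aerosol spectra (with any mean removal folded in offline) are precomputed; all names, shapes, and indices are illustrative.

    ```python
    import numpy as np

    def marine_signal_nir(rho_toa, pcs, nir_idx, swir_idx):
        """Estimate the aerosol reflectance in the NIR from SWIR bands via
        principal components, then recover the marine signal by subtraction.

        rho_toa: (n_bands,) Rayleigh-corrected reflectance for one pixel;
        pcs: (n_components, n_bands) principal components of simulated
        aerosol reflectance spectra spanning NIR+SWIR;
        nir_idx, swir_idx: band index arrays for the two spectral ranges.
        """
        # Fit the PC amplitudes using only the SWIR bands, where the ocean is black.
        amp, *_ = np.linalg.lstsq(pcs[:, swir_idx].T, rho_toa[swir_idx], rcond=None)
        rho_aer_nir = amp @ pcs[:, nir_idx]           # aerosol reflectance in the NIR
        return rho_toa[nir_idx] - rho_aer_nir         # marine signal in the NIR
    ```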

  11. Iterative atmospheric correction scheme and the polarization color of alpine snow

    NASA Astrophysics Data System (ADS)

    Ottaviani, Matteo; Cairns, Brian; Ferrare, Rich; Rogers, Raymond

    2012-07-01

    Characterization of the Earth's surface is crucial to remote sensing, both to map geomorphological features and because subtracting this signal is essential during retrievals of the atmospheric constituents located between the surface and the sensor. Current operational algorithms model the surface total reflectance through a weighted linear combination of a few geometry-dependent kernels, each devised to describe a particular scattering mechanism. The information content of these measurements is overwhelmed by that of instruments with polarization capabilities: proposed models in this case are based on the Fresnel reflectance of an isotropic distribution of facets. Because of its remarkable lack of spectral contrast, the polarized reflectance of land surfaces in the shortwave infrared spectral region, where atmospheric scattering is minimal, can be used to model the surface also at shorter wavelengths, where aerosol retrievals are attempted based on well-established scattering theories. In radiative transfer simulations, straightforward separation of the surface and atmospheric contributions is not possible without approximations because of the coupling introduced by multiple reflections. Within a general inversion framework, the problem can be eliminated by linearizing the radiative transfer calculation and making the Jacobian (i.e., the derivative expressing the sensitivity of the reflectance with respect to model parameters) available at output. We present a general methodology based on a Gauss-Newton iterative search, which automates this procedure and eliminates de facto the need for an ad hoc atmospheric correction. In this case study we analyze the color variations in the polarized reflectance measured by the NASA Goddard Institute for Space Studies Research Scanning Polarimeter during a survey of late-season snowfields in the High Sierra. This thus far unique dataset presents challenges linked to the rugged topography of the alpine environment and a likely high water content due to melting. The analysis benefits from ancillary information provided by the NASA Langley High Spectral Resolution Lidar deployed on the same aircraft. The results obtained from the iterative scheme are contrasted with the surface polarized reflectance obtained by ignoring multiple reflections, via the simplistic subtraction of the atmospheric scattering contribution. Finally, the retrieved reflectance is modeled after the scattering properties of a dense collection of ice crystals at the surface. Confirming that the polarized reflectance of snow is spectrally flat would allow the techniques already in use for polarimetric retrievals of aerosol properties over land to be extended to the large portion of snow-covered pixels plaguing orbital and suborbital observations.

  12. Iterative Atmospheric Correction Scheme and the Polarization Color of Alpine Snow

    NASA Technical Reports Server (NTRS)

    Ottaviani, Matteo; Cairns, Brian; Ferrare, Rich; Rogers, Raymond

    2012-01-01

    Characterization of the Earth's surface is crucial to remote sensing, both to map geomorphological features and because subtracting this signal is essential during retrievals of the atmospheric constituents located between the surface and the sensor. Current operational algorithms model the surface total reflectance through a weighted linear combination of a few geometry-dependent kernels, each devised to describe a particular scattering mechanism. The information content of these measurements is overwhelmed by that of instruments with polarization capabilities: proposed models in this case are based on the Fresnel reflectance of an isotropic distribution of facets. Because of its remarkable lack of spectral contrast, the polarized reflectance of land surfaces in the shortwave infrared spectral region, where atmospheric scattering is minimal, can be used to model the surface also at shorter wavelengths, where aerosol retrievals are attempted based on well-established scattering theories. In radiative transfer simulations, straightforward separation of the surface and atmospheric contributions is not possible without approximations because of the coupling introduced by multiple reflections. Within a general inversion framework, the problem can be eliminated by linearizing the radiative transfer calculation and making the Jacobian (i.e., the derivative expressing the sensitivity of the reflectance with respect to model parameters) available at output. We present a general methodology based on a Gauss-Newton iterative search, which automates this procedure and eliminates de facto the need for an ad hoc atmospheric correction. In this case study we analyze the color variations in the polarized reflectance measured by the NASA Goddard Institute for Space Studies Research Scanning Polarimeter during a survey of late-season snowfields in the High Sierra. This thus far unique dataset presents challenges linked to the rugged topography of the alpine environment and a likely high water content due to melting. The analysis benefits from ancillary information provided by the NASA Langley High Spectral Resolution Lidar deployed on the same aircraft. The results obtained from the iterative scheme are contrasted with the surface polarized reflectance obtained by ignoring multiple reflections, via the simplistic subtraction of the atmospheric scattering contribution. Finally, the retrieved reflectance is modeled after the scattering properties of a dense collection of ice crystals at the surface. Confirming that the polarized reflectance of snow is spectrally flat would allow the techniques already in use for polarimetric retrievals of aerosol properties over land to be extended to the large portion of snow-covered pixels plaguing orbital and suborbital observations.

  13. Spectroscopic photon localization microscopy: breaking the resolution limit of single molecule localization microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Dong, Biqin; Almassalha, Luay Matthew; Urban, Ben E.; Nguyen, The-Quyen; Khuon, Satya; Chew, Teng-Leong; Backman, Vadim; Sun, Cheng; Zhang, Hao F.

    2017-02-01

    Distinguishing minute differences in spectroscopic signatures is crucial for revealing the fluorescence heterogeneity among fluorophores to achieve high molecular specificity. Here we report spectroscopic photon localization microscopy (SPLM), a newly developed far-field spectroscopic imaging technique, which achieves nanoscopic resolution based on the principle of single-molecule localization microscopy while simultaneously uncovering the inherent molecular spectroscopic information associated with each stochastic event (Dong et al., Nature Communications 2016, in press). In SPLM, using a slit-less monochromator, both the zero-order and the first-order diffractions from a grating are recorded simultaneously by an electron-multiplying charge-coupled device to reveal, respectively, the spatial distribution and the associated emission spectra of individual stochastic radiation events. As a result, the origins of photon emissions from different molecules can be identified according to their spectral differences with sub-nm spectral resolution, even when the molecules are in close proximity. With newly developed algorithms, including background subtraction and spectral overlap unmixing, we established and tested a method which can significantly extend the fundamental spatial resolution limit of single-molecule localization microscopy by molecular discrimination through spectral regression. Taking advantage of this unique capability, we demonstrated up to tenfold improvement in the spatial resolution of PALM/STORM with selected fluorophores. This technique can be readily adopted by other research groups to greatly enhance the optical resolution of single-molecule localization microscopy without the need to modify their existing staining methods and protocols. This new resolving capability can potentially provide new insights into biological phenomena and enable significant research progress in the life sciences.

  14. Improving Arterial Spin Labeling by Using Deep Learning.

    PubMed

    Kim, Ki Hwan; Choi, Seung Hong; Park, Sung-Hong

    2018-05-01

    Purpose To develop a deep learning algorithm that generates arterial spin labeling (ASL) perfusion images with higher accuracy and robustness by using a smaller number of subtraction images. Materials and Methods For ASL image generation from pairwise subtraction, we used a convolutional neural network (CNN) as a deep learning algorithm. The ground truth perfusion images were generated by averaging six or seven pairwise subtraction images acquired with (a) conventional pseudocontinuous arterial spin labeling from seven healthy subjects or (b) Hadamard-encoded pseudocontinuous ASL from 114 patients with various diseases. CNNs were trained to generate perfusion images from a smaller number (two or three) of subtraction images and evaluated by means of cross-validation. CNNs from the patient data sets were also tested on 26 separate stroke data sets. CNNs were compared with the conventional averaging method in terms of mean square error and radiologic score by using a paired t test and/or Wilcoxon signed-rank test. Results Mean square errors were approximately 40% lower than those of the conventional averaging method for the cross-validation with the healthy subjects and patients and the separate test with the patients who had experienced a stroke (P < .001). Region-of-interest analysis in stroke regions showed that cerebral blood flow maps from CNN (mean ± standard deviation, 19.7 mL per 100 g/min ± 9.7) had smaller mean square errors than those determined with the conventional averaging method (43.2 ± 29.8) (P < .001). Radiologic scoring demonstrated that CNNs suppressed noise and motion and/or segmentation artifacts better than the conventional averaging method did (P < .001). Conclusion CNNs provided superior perfusion image quality and more accurate perfusion measurement compared with those of the conventional averaging method for generation of ASL images from pairwise subtraction images. © RSNA, 2017.
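
    The abstract does not specify the network, so the following is a minimal sketch, assuming TensorFlow/Keras, a hypothetical 64×64 matrix size, and an illustrative two-layer architecture mapping two pairwise subtraction images to one perfusion map:

```python
# Minimal sketch (not the authors' architecture): a small fully convolutional
# network mapping 2 pairwise ASL subtraction images to 1 perfusion image.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_asl_cnn(height=64, width=64, n_pairs=2):
    inp = layers.Input(shape=(height, width, n_pairs))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 3, padding="same")(x)  # predicted perfusion map
    return models.Model(inp, out)

model = build_asl_cnn()
# Ground truth: the average of six or seven pairwise subtraction images.
model.compile(optimizer="adam", loss="mse")
```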

  15. Dynamic cone beam CT angiography of carotid and cerebral arteries using canine model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai Weixing; Zhao Binghui; Conover, David

    2012-01-15

    Purpose: This research is designed to develop and evaluate a flat-panel detector-based dynamic cone beam CT system for dynamic angiography imaging, which is able to provide both dynamic functional information and dynamic anatomic information from one multirevolution cone beam CT scan. Methods: A dynamic cone beam CT scan acquired projections over four revolutions within a time window of 40 s after contrast agent injection through a femoral vein to cover the entire wash-in and wash-out phases. A dynamic cone beam CT reconstruction algorithm was utilized and a novel recovery method was developed to correct the time-enhancement curve of contrast flow. From the same data set, both projection-based subtraction and reconstruction-based subtraction approaches were utilized and compared to remove the background tissues and visualize the 3D vascular structure to provide the dynamic anatomic information. Results: Through computer simulations, the new recovery algorithm for dynamic time-enhancement curves was optimized and showed excellent accuracy in recovering the actual contrast flow. Canine model experiments also indicated that the recovered time-enhancement curves from dynamic cone beam CT imaging agreed well with those of an intravenous digital subtraction angiography (IV-DSA) study. The dynamic vascular structures reconstructed using both projection-based subtraction and reconstruction-based subtraction were almost identical, as the differences between them were comparable to the background noise level. At the enhancement peak, all the major carotid and cerebral arteries and the Circle of Willis could be clearly observed. Conclusions: The proposed dynamic cone beam CT approach can accurately recover the actual contrast flow, and dynamic anatomic imaging can be obtained with high isotropic 3D resolution. This approach is promising for diagnosis and treatment planning of vascular diseases and strokes.

  16. Efficient algorithm for baseline wander and powerline noise removal from ECG signals based on discrete Fourier series.

    PubMed

    Bahaz, Mohamed; Benzid, Redha

    2018-03-01

    Electrocardiogram (ECG) signals are often contaminated with artefacts and noises which can lead to incorrect diagnosis when they are visually inspected by cardiologists. In this paper, the well-known discrete Fourier series (DFS) is re-explored and an efficient DFS-based method is proposed to reduce the contribution of both baseline wander (BW) and powerline interference (PLI) noises in ECG records. In the first step, the exact number of low-frequency harmonics contributing to the BW is determined. Next, the baseline drift is estimated as the sum of all associated Fourier sinusoid components. Then, the baseline shift is efficiently removed by subtracting its approximation from the original biased ECG signal. Concerning the PLI, subtracting the contributing harmonics, calculated in the same manner, efficiently reduces this type of noise. In addition to visual quality results, the proposed algorithm shows superior performance in terms of higher signal-to-noise ratio and smaller mean square error when compared with the DCT-based algorithm.
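
    A minimal sketch of the subtract-the-harmonics idea, assuming a fixed low-frequency cutoff and a 50 Hz powerline; the paper instead determines the exact number of contributing harmonics:

```python
import numpy as np

def dfs_denoise(ecg, fs, bw_cutoff=0.5, pli_freq=50.0, pli_bw=1.0):
    # Reconstruct the baseline wander from Fourier harmonics below bw_cutoff Hz
    # and the powerline interference from harmonics around pli_freq Hz, then
    # subtract both estimates from the biased ECG signal.
    n = len(ecg)
    spec = np.fft.rfft(ecg)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    baseline = np.fft.irfft(np.where(freqs < bw_cutoff, spec, 0), n)
    pli_mask = np.abs(freqs - pli_freq) < pli_bw
    pli = np.fft.irfft(np.where(pli_mask, spec, 0), n)
    return ecg - baseline - pli
```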

  17. Linear model for fast background subtraction in oligonucleotide microarrays.

    PubMed

    Kroll, K Myriam; Barkema, Gerard T; Carlon, Enrico

    2009-11-16

    One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters such that minimization can be performed through linear algebra. The model incorporates two effects: 1) Correlated intensities between neighboring features in the chip and 2) sequence-dependent affinities for non-specific hybridization fitted by an extended nearest-neighbor model. The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments as well as between the free-energy parameters and their counterparts in aqueous solution indicate that the model captures a significant part of the underlying physical chemistry.
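
    The computational core, reducing a cost that is quadratic in the fitting parameters to a single linear solve, can be sketched as follows; the design matrix X and its dimensions are placeholders, not the paper's actual neighbor-intensity and nearest-neighbor sequence features:

```python
import numpy as np

# A quadratic cost in theta is minimized by one least-squares solve.
rng = np.random.default_rng(0)
X = rng.random((5000, 24))             # placeholder features, one row per probe
y = rng.random(5000)                   # placeholder raw probe intensities
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
background = X @ theta                 # fitted background, subtracted from y
```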

  18. Multipulse technique exploiting the intermodulation of ultrasound waves in a nonlinear medium.

    PubMed

    Biagi, Elena; Breschi, Luca; Vannacci, Enrico; Masotti, Leonardo

    2009-03-01

    In recent years, the nonlinear properties of materials have attracted much interest in nondestructive testing and in ultrasound diagnostic applications. Acoustic nonlinear parameters represent an opportunity to improve the information that can be extracted from a medium, such as the structural organization and pathologic status of tissue. In this paper, a method called pulse subtraction intermodulation (PSI), based on a multipulse technique, is presented and investigated both theoretically and experimentally. This method allows separation of the intermodulation products, which arise when two separate frequencies are transmitted in a nonlinear medium, from the fundamental and second-harmonic components, making them available for improved imaging techniques or signal processing algorithms devoted to tissue characterization. The theory of intermodulation product generation was developed according to the Khokhlov-Zabolotskaya-Kuznetsov (KZK) nonlinear propagation equation and is consistent with experimental results. The description of the proposed method, characterization of the intermodulation spectral contents, and quantitative results from in vitro experimentation are reported and discussed in this paper.

  19. A Novel Technique to Detect Code for SAC-OCDMA System

    NASA Astrophysics Data System (ADS)

    Bharti, Manisha; Kumar, Manoj; Sharma, Ajay K.

    2018-04-01

    The main task of an optical code division multiple access (OCDMA) system is the detection of the code used by a user in the presence of multiple access interference (MAI). In this paper, a new method of detection known as XOR subtraction detection for spectral amplitude coding OCDMA (SAC-OCDMA) based on double weight codes is proposed and presented. As MAI is the main source of performance deterioration in OCDMA systems, the SAC technique is used in this paper to eliminate the effect of MAI to a large extent. A comparative analysis is then made between the proposed scheme and other conventional detection schemes, such as complementary subtraction detection, AND subtraction detection, and NAND subtraction detection. The system performance is characterized by Q-factor, BER, and received optical power (ROP) with respect to input laser power and fiber length. The theoretical and simulation investigations reveal that the proposed detection technique provides a better quality factor, security, and received power in comparison to the other conventional techniques. The wider eye opening obtained with the proposed technique also demonstrates its robustness.

  20. Spectral compression algorithms for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2007-10-16

    A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.
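
    A minimal sketch of the factored representation, using a truncated SVD as one way to obtain the PCA factorization the patent mentions:

```python
import numpy as np

def spectral_compress(cube, k):
    # Factor the unfolded image (n_pixels x n_channels) and keep only the
    # k most significant factors; analyses then operate on the factors.
    mean = cube.mean(axis=0)
    U, s, Vt = np.linalg.svd(cube - mean, full_matrices=False)
    scores = U[:, :k] * s[:k]      # spatial factors
    loadings = Vt[:k]              # spectral factors
    return scores, loadings, mean

def spectral_reconstruct(scores, loadings, mean):
    # Reconstruction from the factored form is a single matrix product.
    return scores @ loadings + mean
```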

  1. Boundary layer noise subtraction in hydrodynamic tunnel using robust principal component analysis.

    PubMed

    Amailland, Sylvain; Thomas, Jean-Hugh; Pézerat, Charles; Boucheron, Romuald

    2018-04-01

    The acoustic study of propellers in a hydrodynamic tunnel is of paramount importance during the design process, but can involve significant difficulties due to the boundary layer noise (BLN). Indeed, advanced denoising methods are needed to recover the acoustic signal in cases of poor signal-to-noise ratio. The technique proposed in this paper is based on the decomposition of the wall-pressure cross-spectral matrix (CSM) by taking advantage of both the low-rank property of the acoustic CSM and the sparse property of the BLN CSM. Thus, the algorithm belongs to the class of robust principal component analysis (RPCA), which derives from the widely used principal component analysis. If the BLN is spatially decorrelated, the proposed RPCA algorithm can blindly recover the acoustic signals even for negative signal-to-noise ratios. Unfortunately, in a realistic case, acoustic signals recorded in a hydrodynamic tunnel show that the noise may be partially correlated. A prewhitening strategy is then considered in order to take into account the spatially coherent background noise. Numerical simulations and experimental results show an improvement in terms of BLN reduction in the large hydrodynamic tunnel. The effectiveness of the denoising method is also investigated in the context of acoustic source localization.
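
    A generic principal-component-pursuit sketch of the low-rank-plus-sparse split (not the authors' exact scheme, and without their prewhitening step):

```python
import numpy as np

def soft(M, tau):
    # Magnitude shrinkage that also works for complex-valued CSM entries.
    mag = np.abs(M)
    return M / np.maximum(mag, 1e-12) * np.maximum(mag - tau, 0)

def rpca(D, n_iter=100):
    # Split the wall-pressure CSM D into a low-rank acoustic part L and a
    # sparse boundary-layer-noise part S via inexact augmented Lagrangian.
    mu = D.size / (4 * np.abs(D).sum())
    lam = 1.0 / np.sqrt(max(D.shape))
    S = np.zeros_like(D)
    Y = np.zeros_like(D)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1 / mu, 0)) @ Vt  # singular value thresholding
        S = soft(D - L + Y / mu, lam / mu)
        Y += mu * (D - L - S)
    return L, S
```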

  2. Fiber optic sensor for continuous health monitoring in CFRP composite materials

    NASA Astrophysics Data System (ADS)

    Rippert, Laurent; Papy, Jean-Michel; Wevers, Martine; Van Huffel, Sabine

    2002-07-01

    An intensity-modulated sensor, based on the microbending concept, has been incorporated in laminates produced from a C/epoxy prepreg. Pencil lead break tests (Hsu-Nielsen sources) and tensile tests have been performed on this material. In this research study, fibre optic sensors are shown to offer an alternative to the robust piezoelectric transducers used for Acoustic Emission (AE) monitoring. The main emphasis is on the use of advanced signal processing techniques based on time-frequency analysis. The Short Time Fourier Transform (STFT) of the signal has been computed and several robust noise reduction algorithms, such as Wiener adaptive filtering, improved spectral subtraction filtering, and Singular Value Decomposition (SVD)-based filtering, have been applied. An energy- and frequency-based detection criterion is put forward to detect transient signals that can be correlated with Modal Acoustic Emission (MAE) results and thus with damage in the composite material. There is a strong indication that time-frequency analysis and the Hankel Total Least Squares (HTLS) method can also be used for damage characterization. This study shows that the signal from a quite simple microbend optical sensor contains information on the elastic energy released whenever damage is introduced in the host material by mechanical loading. Robust algorithms can be used to retrieve and analyze this information.
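
    Since spectral subtraction filtering is one of the denoising steps named above, here is a textbook magnitude-domain sketch with illustrative parameter values, assuming the first STFT frames are noise-only:

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(x, fs, noise_frames=20, alpha=2.0, beta=0.05):
    # Estimate the noise magnitude from the first noise_frames STFT frames,
    # over-subtract by alpha, and keep a spectral floor beta to limit
    # musical noise; the noisy phase is reused for resynthesis.
    f, t, X = stft(x, fs, nperseg=512)
    noise_mag = np.abs(X[:, :noise_frames]).mean(axis=1, keepdims=True)
    mag = np.maximum(np.abs(X) - alpha * noise_mag, beta * noise_mag)
    _, y = istft(mag * np.exp(1j * np.angle(X)), fs, nperseg=512)
    return y
```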

  3. The Origin of the Excess Near-Infrared Diffuse Sky Brightness: Population III Stars or Zodiacal Light?

    NASA Technical Reports Server (NTRS)

    Dwek, Eli

    2006-01-01

    The intensity of the diffuse 1 to 5 micron sky emission from which solar system and Galactic foregrounds have been subtracted is in excess of that expected from energy released by galaxies and stars that formed during the z < 5 redshift interval. The spectral signature of this excess near-infrared background light (NIRBL) component is almost identical to that of reflected sunlight from the interplanetary dust cloud, and could therefore be the result of the incomplete subtraction of this foreground emission component from the diffuse sky maps. Alternatively, this emission component could be extragalactic. Its spectral signature is consistent with that of redshifted continuum and recombination line emission from H-II regions formed by the first generation of very massive stars. In this talk I will present the implications of this excess emission for our understanding of the zodiacal dust cloud, the formation rate of Pop III stars, and the TeV gamma-ray opacity to nearby blazars.

  4. AccuTyping: new algorithms for automated analysis of data from high-throughput genotyping with oligonucleotide microarrays

    PubMed Central

    Hu, Guohong; Wang, Hui-Yun; Greenawalt, Danielle M.; Azaro, Marco A.; Luo, Minjie; Tereshchenko, Irina V.; Cui, Xiangfeng; Yang, Qifeng; Gao, Richeng; Shen, Li; Li, Honghua

    2006-01-01

    Microarray-based analysis of single nucleotide polymorphisms (SNPs) has many applications in large-scale genetic studies. To minimize the influence of experimental variation, microarray data usually need to be processed in different respects, including background subtraction, normalization, and low-signal filtering, before genotype determination. Although many sophisticated algorithms exist for these purposes, biases are still present. In the present paper, new algorithms for SNP microarray data analysis and the software, AccuTyping, developed based on these algorithms are described. The algorithms take advantage of the large number of SNPs included in each assay and the fact that the top and bottom 20% of SNPs can be safely treated as homozygous after sorting by their signal-intensity ratios. These SNPs are then used as controls for color channel normalization and background subtraction. Genotype calls are made based on the logarithms of signal intensity ratios using two cutoff values, which were determined after training the program with a dataset of ∼160 000 genotypes and validated by non-microarray methods. AccuTyping was used to determine >300 000 genotypes of DNA and sperm samples. The accuracy was shown to be >99%. AccuTyping can be downloaded from . PMID:16982644
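
    A sketch of the described ratio-sorting logic; the function name and cutoff values are illustrative, not the trained ones:

```python
import numpy as np

def call_genotypes(red, green, lo=-0.6, hi=0.6):
    # Sort SNPs by their between-channel intensity ratio, treat the top and
    # bottom 20% as homozygous controls to normalize the color channels,
    # then call genotypes from the log-ratio using two cutoffs.
    r = np.log2(red) - np.log2(green)
    order = np.argsort(r)
    k = max(1, int(0.2 * r.size))
    center = (np.median(r[order[:k]]) + np.median(r[order[-k:]])) / 2
    r = r - center                       # channel normalization
    return np.where(r <= lo, "BB", np.where(r >= hi, "AA", "AB"))
```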

  5. GPI Spectra of HR 8799 c, d, and e from 1.5 to 2.4 μm with KLIP Forward Modeling

    NASA Astrophysics Data System (ADS)

    Greenbaum, Alexandra Z.; Pueyo, Laurent; Ruffio, Jean-Baptiste; Wang, Jason J.; De Rosa, Robert J.; Aguilar, Jonathan; Rameau, Julien; Barman, Travis; Marois, Christian; Marley, Mark S.; Konopacky, Quinn; Rajan, Abhijith; Macintosh, Bruce; Ansdell, Megan; Arriaga, Pauline; Bailey, Vanessa P.; Bulger, Joanna; Burrows, Adam S.; Chilcote, Jeffrey; Cotten, Tara; Doyon, Rene; Duchêne, Gaspard; Fitzgerald, Michael P.; Follette, Katherine B.; Gerard, Benjamin; Goodsell, Stephen J.; Graham, James R.; Hibon, Pascale; Hung, Li-Wei; Ingraham, Patrick; Kalas, Paul; Larkin, James E.; Maire, Jérôme; Marchis, Franck; Metchev, Stanimir; Millar-Blanchaer, Maxwell A.; Nielsen, Eric L.; Norton, Andrew; Oppenheimer, Rebecca; Palmer, David; Patience, Jennifer; Perrin, Marshall D.; Poyneer, Lisa; Rantakyrö, Fredrik T.; Savransky, Dmitry; Schneider, Adam C.; Sivaramakrishnan, Anand; Song, Inseok; Soummer, Rémi; Thomas, Sandrine; Wallace, J. Kent; Ward-Duong, Kimberly; Wiktorowicz, Sloane; Wolff, Schuyler

    2018-06-01

    We explore KLIP forward modeling spectral extraction on Gemini Planet Imager coronagraphic data of HR 8799, using PyKLIP, and show algorithm stability with varying KLIP parameters. We report new and re-reduced spectrophotometry of HR 8799 c, d, and e in the H and K bands. We discuss a strategy for choosing optimal KLIP PSF subtraction parameters by injecting simulated sources and recovering them over a range of parameters. The K1/K2 spectra for HR 8799 c and d are similar to previously published results from the same data set. We also present a K-band spectrum of HR 8799 e for the first time and show that our H-band spectra agree well with previously published spectra from the VLT/SPHERE instrument. We find that HR 8799 c and d show significant differences in their H and K spectra, but do not find any conclusive differences between d and e, nor between c and e, likely due to large error bars in the recovered spectrum of e. Compared to M-, L-, and T-type field brown dwarfs, all three planets are most consistent with mid- and late-L spectral types. All objects are consistent with low gravity, but a lack of standard spectra for low gravity limits the ability to fit the best spectral type. We discuss how dedicated modeling efforts can better fit HR 8799 planets’ near-IR flux, as well as how differences between the properties of these planets can be further explored.
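
    A minimal KLIP-style PSF subtraction sketch: project the science frame on a truncated Karhunen-Loeve basis built from reference frames and subtract the projection. The forward modeling of algorithmic flux loss used in the paper is omitted here:

```python
import numpy as np

def klip_subtract(science, refs, k=10):
    # science: flattened frame (n_pixels,); refs: (n_refs, n_pixels).
    R = refs - refs.mean(axis=1, keepdims=True)
    s = science - science.mean()
    vals, vecs = np.linalg.eigh(R @ R.T)          # reference covariance
    order = np.argsort(vals)[::-1][:k]            # k strongest modes
    Z = vecs[:, order].T @ R                      # KL modes, (k, n_pixels)
    Z /= np.linalg.norm(Z, axis=1, keepdims=True)
    return s - Z.T @ (Z @ s)                      # PSF-subtracted residual
```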

  6. Semi-supervised spectral algorithms for community detection in complex networks based on equivalence of clustering methods

    NASA Astrophysics Data System (ADS)

    Ma, Xiaoke; Wang, Bingbo; Yu, Liang

    2018-01-01

    Community detection is fundamental for revealing the structure-functionality relationship in complex networks, and it involves two issues: the quantitative function for community as well as the algorithms to discover communities. Despite significant research on both, few attempts have been made to establish a connection between the two issues. To address this problem, a generalized quantification function is proposed for community in weighted networks, which provides a framework that unifies several well-known measures. Then, we prove that the trace optimization of the proposed measure is equivalent to the objective functions of algorithms such as nonnegative matrix factorization, kernel K-means, and spectral clustering. This serves as the theoretical foundation for designing algorithms for community detection. On the second issue, a semi-supervised spectral clustering algorithm is developed by exploiting the equivalence relation, combining nonnegative matrix factorization and spectral clustering. Different from traditional semi-supervised algorithms, the partial supervision is integrated directly into the objective of the spectral algorithm. Finally, through extensive experiments on both artificial and real-world networks, we demonstrate that the proposed method improves the accuracy of traditional spectral algorithms in community detection.
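
    The unsupervised core, standard normalized spectral clustering, can be sketched as follows; the paper's contribution of integrating partial supervision into this objective is not reproduced here:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_communities(W, k):
    # W: weighted adjacency matrix; k: number of communities.
    d = np.maximum(W.sum(axis=1), 1e-12)
    D = np.diag(1.0 / np.sqrt(d))
    L = np.eye(W.shape[0]) - D @ W @ D              # normalized Laplacian
    _, vecs = eigh(L, subset_by_index=[0, k - 1])   # k smallest eigenvectors
    U = vecs / np.maximum(np.linalg.norm(vecs, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)
```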

  7. Spectral mapping tools from the earth sciences applied to spectral microscopy data.

    PubMed

    Harris, A Thomas

    2006-08-01

    Spectral imaging, originating from the field of earth remote sensing, is a powerful tool that is being increasingly used in a wide variety of applications for material identification. Several workers have used techniques like linear spectral unmixing (LSU) to discriminate materials in images derived from spectral microscopy. However, many spectral analysis algorithms rely on assumptions that are often violated in microscopy applications. This study explores algorithms originally developed as improvements on early earth imaging techniques that can be easily translated for use with spectral microscopy. To best demonstrate the application of earth remote sensing spectral analysis tools to spectral microscopy data, earth imaging software was used to analyze data acquired with a Leica confocal microscope with mechanical spectral scanning. For this study, spectral training signatures (often referred to as endmembers) were selected with the ENVI (ITT Visual Information Solutions, Boulder, CO) "spectral hourglass" processing flow, a series of tools that use the spectrally over-determined nature of hyperspectral data to find the most spectrally pure (or spectrally unique) pixels within the data set. This set of endmember signatures was then used in the full range of mapping algorithms available in ENVI to determine locations, and in some cases subpixel abundances of endmembers. Mapping and abundance images showed a broad agreement between the spectral analysis algorithms, supported through visual assessment of output classification images and through statistical analysis of the distribution of pixels within each endmember class. The powerful spectral analysis algorithms available in COTS software, the result of decades of research in earth imaging, are easily translated to new sources of spectral data. Although the scale between earth imagery and spectral microscopy is radically different, the problem is the same: mapping material locations and abundances based on unique spectral signatures. (c) 2006 International Society for Analytical Cytology.

  8. Automatic frequency and phase alignment of in vivo J-difference-edited MR spectra by frequency domain correlation.

    PubMed

    Wiegers, Evita C; Philips, Bart W J; Heerschap, Arend; van der Graaf, Marinette

    2017-12-01

    J-difference editing is often used to select resonances of compounds with coupled spins in 1H MR spectra. Accurate phase and frequency alignment prior to subtracting J-difference-edited MR spectra is important to avoid artefactual contributions to the edited resonance. In vivo J-difference-edited MR spectra were aligned by maximizing the normalized scalar product between two spectra (i.e., the correlation over a spectral region). The performance of our correlation method was compared with alignment by spectral registration and with alignment of the highest point in the two spectra. The correlation method was tested at different SNR levels and for a broad range of phase and frequency shifts. In vivo application of the proposed correlation method showed reduced subtraction errors and increased fit reliability in difference spectra as compared with conventional peak alignment. The correlation method and the spectral registration method generally performed equally well. However, better alignment using the correlation method was obtained for spectra with a low SNR (down to ~2) and for relatively large frequency shifts. Our correlation method for simultaneous phase and frequency alignment is able to correct both small and large phase and frequency drifts and also performs well at low SNR levels.
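
    A brute-force sketch of the correlation criterion, searching the frequency shift and zero-order phase on a grid; the ranges are illustrative, not the authors' optimizer:

```python
import numpy as np

def align(ref, spec, max_shift=50, n_phases=181):
    # Find the shift (in points) and phase that maximize the normalized
    # scalar product (correlation) with the real-valued reference spectrum;
    # spec is the complex spectrum to be aligned.
    best = (-np.inf, 0, 0.0)
    for shift in range(-max_shift, max_shift + 1):
        cand = np.roll(spec, shift)
        for phase in np.linspace(-np.pi, np.pi, n_phases):
            s = np.real(cand * np.exp(1j * phase))
            corr = s @ ref / (np.linalg.norm(s) * np.linalg.norm(ref))
            if corr > best[0]:
                best = (corr, shift, phase)
    return best  # (correlation, shift, phase)
```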

  9. Limitations and potential of spectral subtractions in fourier-transform infrared (FTIR) spectroscopy of soil samples

    USDA-ARS?s Scientific Manuscript database

    Soil science research is increasingly applying Fourier transform infrared (FTIR) spectroscopy for analysis of soil organic matter (SOM). However, the compositional complexity of soils and the dominance of the mineral component can limit spectroscopic resolution of SOM and other minor components. The...

  10. Spectral matching technology for light-emitting diode-based jaundice photodynamic therapy device

    NASA Astrophysics Data System (ADS)

    Gan, Ru-ting; Guo, Zhen-ning; Lin, Jie-ben

    2015-02-01

    The objective of this paper is to obtain the spectrum of a light-emitting diode (LED)-based jaundice photodynamic therapy device (JPTD); the in vivo bilirubin absorption spectrum was regarded as the target spectrum. Based on spectral constructing theory, a simple genetic algorithm was first proposed in this study as the spectral matching algorithm. The optimal combination ratios of LEDs were obtained, and the required number of LEDs was then calculated. Meanwhile, the algorithm was compared with existing spectral matching algorithms. The results show that this algorithm runs faster with higher efficiency, the switching time consumed is 2.06 s, and the fitted spectrum is very similar to the target spectrum with a 98.15% matching degree. Thus, a blue LED-based JPTD can replace the traditional blue fluorescent tube, and the proposed spectral matching technology can be applied to light source spectral matching for jaundice photodynamic therapy and other medical phototherapy.

  11. Gaussian diffusion sinogram inpainting for X-ray CT metal artifact reduction.

    PubMed

    Peng, Chengtao; Qiu, Bensheng; Li, Ming; Guan, Yihui; Zhang, Cheng; Wu, Zhongyi; Zheng, Jian

    2017-01-05

    Metal objects implanted in the bodies of patients usually generate severe streaking artifacts in reconstructed X-ray computed tomography images, which degrade the image quality and affect the diagnosis of disease. Therefore, it is essential to reduce these artifacts to meet clinical demands. In this work, we propose a Gaussian diffusion sinogram inpainting metal artifact reduction algorithm based on prior images to reduce these artifacts for fan-beam computed tomography reconstruction. In this algorithm, prior information originating from a tissue-classified prior image is used for the inpainting of metal-corrupted projections, and it is incorporated into a Gaussian diffusion function. The prior knowledge is particularly designed to locate the diffusion position and improve the sparsity of the subtraction sinogram, which is obtained by subtracting the prior sinogram of the metal regions from the original sinogram. The sinogram inpainting algorithm is implemented through an approach of diffusing prior energy and is then solved by gradient descent. The performance of the proposed metal artifact reduction algorithm is compared with two conventional metal artifact reduction algorithms, namely the interpolation metal artifact reduction algorithm and the normalized metal artifact reduction algorithm. The experimental datasets included both simulated and clinical datasets. Evaluated subjectively, the proposed metal artifact reduction algorithm causes fewer secondary artifacts than the two conventional algorithms, which lead to severe secondary artifacts resulting from improper interpolation and normalization. Additionally, the objective evaluation shows that the proposed approach has the smallest normalized mean absolute deviation and the highest signal-to-noise ratio, indicating that the proposed method produced the images with the best quality. For both the simulated and the clinical datasets, the proposed algorithm clearly reduced the metal artifacts.

  12. Separation of Atmospheric and Surface Spectral Features in Mars Global Surveyor Thermal Emission Spectrometer (TES) Spectra

    NASA Technical Reports Server (NTRS)

    Smith, Michael D.; Bandfield, Joshua L.; Christensen, Philip R.

    2000-01-01

    We present two algorithms for the separation of spectral features caused by atmospheric and surface components in Thermal Emission Spectrometer (TES) data. One algorithm uses radiative transfer and successive least squares fitting to find spectral shapes first for atmospheric dust, then for water-ice aerosols, and finally for surface emissivity. A second independent algorithm uses a combination of factor analysis, target transformation, and deconvolution to simultaneously find dust, water ice, and surface emissivity spectral shapes. Both algorithms have been applied to TES spectra, and both find very similar atmospheric and surface spectral shapes. For TES spectra taken in nadir geometry during the aerobraking and science phasing periods, these two algorithms give meaningful and usable surface emissivity spectra that can be used for mineralogical identification.

  13. Software algorithm and hardware design for real-time implementation of new spectral estimator

    PubMed Central

    2014-01-01

    Background Real-time spectral analyzers can be difficult to implement for PC computer-based systems because of the potential for high computational cost, and algorithm complexity. In this work a new spectral estimator (NSE) is developed for real-time analysis, and compared with the discrete Fourier transform (DFT). Method Clinical data in the form of 216 fractionated atrial electrogram sequences were used as inputs. The sample rate for acquisition was 977 Hz, or approximately 1 millisecond between digital samples. Real-time NSE power spectra were generated for 16,384 consecutive data points. The same data sequences were used for spectral calculation using a radix-2 implementation of the DFT. The NSE algorithm was also developed for implementation as a real-time spectral analyzer electronic circuit board. Results The average interval for a single real-time spectral calculation in software was 3.29 μs for NSE versus 504.5 μs for DFT. Thus for real-time spectral analysis, the NSE algorithm is approximately 150× faster than the DFT. Over a 1 millisecond sampling period, the NSE algorithm had the capability to spectrally analyze a maximum of 303 data channels, while the DFT algorithm could only analyze a single channel. Moreover, for the 8 second sequences, the NSE spectral resolution in the 3-12 Hz range was 0.037 Hz while the DFT spectral resolution was only 0.122 Hz. The NSE was also found to be implementable as a standalone spectral analyzer board using approximately 26 integrated circuits at a cost of approximately $500. The software files used for analysis are included as a supplement, please see the Additional files 1 and 2. Conclusions The NSE real-time algorithm has low computational cost and complexity, and is implementable in both software and hardware for 1 millisecond updates of multichannel spectra. The algorithm may be helpful to guide radiofrequency catheter ablation in real time. PMID:24886214

  14. Context Modeler for Wavelet Compression of Spectral Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    A context-modeling subalgorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, "ICER-3D Hyperspectral Image Compression Software" (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.

  15. The XMM Cluster Survey: X-ray analysis methodology

    NASA Astrophysics Data System (ADS)

    Lloyd-Davies, E. J.; Romer, A. Kathy; Mehrtens, Nicola; Hosmer, Mark; Davidson, Michael; Sabirli, Kivanc; Mann, Robert G.; Hilton, Matt; Liddle, Andrew R.; Viana, Pedro T. P.; Campbell, Heather C.; Collins, Chris A.; Dubois, E. Naomi; Freeman, Peter; Harrison, Craig D.; Hoyle, Ben; Kay, Scott T.; Kuwertz, Emma; Miller, Christopher J.; Nichol, Robert C.; Sahlén, Martin; Stanford, S. A.; Stott, John P.

    2011-11-01

    The XMM Cluster Survey (XCS) is a serendipitous search for galaxy clusters using all publicly available data in the XMM-Newton Science Archive. Its main aims are to measure cosmological parameters and trace the evolution of X-ray scaling relations. In this paper we describe the data processing methodology applied to the 5776 XMM observations used to construct the current XCS source catalogue. A total of 3675 > 4σ cluster candidates with >50 background-subtracted X-ray counts are extracted from a total non-overlapping area suitable for cluster searching of 410 deg2. Of these, 993 candidates are detected with >300 background-subtracted X-ray photon counts, and we demonstrate that robust temperature measurements can be obtained down to this count limit. We describe in detail the automated pipelines used to perform the spectral and surface brightness fitting for these candidates, as well as to estimate redshifts from the X-ray data alone. A total of 587 (122) X-ray temperatures to a typical accuracy of <40 (<10) per cent have been measured to date. We also present the methodology adopted for determining the selection function of the survey, and show that the extended source detection algorithm is robust to a range of cluster morphologies by inserting mock clusters derived from hydrodynamical simulations into real XMM images. These tests show that a simple isothermal β-profile is sufficient to capture the essential details of the cluster population detected in the archival XMM observations. The redshift follow-up of the XCS cluster sample is presented in a companion paper, together with a first data release of 503 optically confirmed clusters.

  16. Temporal subtraction contrast-enhanced dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Gazi, Peymon M.; Aminololama-Shakeri, Shadi; Yang, Kai; Boone, John M.

    2016-09-01

    The development of a framework of deformable image registration and segmentation for the purpose of temporal subtraction contrast-enhanced breast CT is described. An iterative histogram-based two-means clustering method was used for the segmentation. Dedicated breast CT images were segmented into background (air), adipose, fibroglandular and skin components. Fibroglandular tissue was classified as either normal or contrast-enhanced then divided into tiers for the purpose of categorizing degrees of contrast enhancement. A variant of the Demons deformable registration algorithm, intensity difference adaptive Demons (IDAD), was developed to correct for the large deformation forces that stemmed from contrast enhancement. In this application, the accuracy of the proposed method was evaluated in both mathematically-simulated and physically-acquired phantom images. Clinical usage and accuracy of the temporal subtraction framework was demonstrated using contrast-enhanced breast CT datasets from five patients. Registration performance was quantified using normalized cross correlation (NCC), symmetric uncertainty coefficient, normalized mutual information (NMI), mean square error (MSE) and target registration error (TRE). The proposed method outperformed conventional affine and other Demons variations in contrast enhanced breast CT image registration. In simulation studies, IDAD exhibited improvement in MSE (0-16%), NCC (0-6%), NMI (0-13%) and TRE (0-34%) compared to the conventional Demons approaches, depending on the size and intensity of the enhancing lesion. As lesion size and contrast enhancement levels increased, so did the improvement. The drop in the correlation between the pre- and post-contrast images for the largest enhancement levels in phantom studies is less than 1.2% (150 Hounsfield units). Registration error, measured by TRE, shows only submillimeter mismatches between the concordant anatomical target points in all patient studies. The algorithm was implemented using a parallel processing architecture resulting in rapid execution time for the iterative segmentation and intensity-adaptive registration techniques. Characterization of contrast-enhanced lesions is improved using temporal subtraction contrast-enhanced dedicated breast CT. Adaptation of Demons registration forces as a function of contrast-enhancement levels provided a means to accurately align breast tissue in pre- and post-contrast image acquisitions, improving subtraction results. Spatial subtraction of the aligned images yields useful diagnostic information with respect to enhanced lesion morphology and uptake.

  17. An Evaluation of Pixel-Based Methods for the Detection of Floating Objects on the Sea Surface

    NASA Astrophysics Data System (ADS)

    Borghgraef, Alexander; Barnich, Olivier; Lapierre, Fabian; Van Droogenbroeck, Marc; Philips, Wilfried; Acheroy, Marc

    2010-12-01

    Ship-based automatic detection of small floating objects on an agitated sea surface remains a hard problem. Our main concern is the detection of floating mines, which proved a real threat to shipping in confined waterways during the first Gulf War, but applications include salvaging, search-and-rescue operations, and perimeter or harbour defense. Detection in the infrared (IR) is challenging because a rough sea is seen as a dynamic background of moving objects with sizes, shapes, and temperatures similar to those of a floating mine. In this paper we apply a selection of background subtraction algorithms to the problem, and we show that recent algorithms such as ViBe and behaviour subtraction, which take into account spatial and temporal correlations within the dynamic scene, significantly outperform the more conventional parametric techniques, with few prior assumptions about the physical properties of the scene.
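
    A minimal sketch of this kind of pipeline; ViBe is patented and not in core OpenCV, so the built-in MOG2 subtractor stands in as the dynamic-background model, and the file name is hypothetical:

```python
import cv2

cap = cv2.VideoCapture("ir_sea_sequence.avi")   # hypothetical IR sequence
bg = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                                 # foreground candidates
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove wave speckle
cap.release()
```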

  18. Automatic detection of typical dust devils from Mars landscape images

    NASA Astrophysics Data System (ADS)

    Ogohara, Kazunori; Watanabe, Takeru; Okumura, Susumu; Hatanaka, Yuji

    2018-02-01

    This paper presents an improved algorithm for automatic detection of Martian dust devils that successfully extracts tiny bright dust devils and obscured large dust devils from two subtracted landscape images. These dust devils are frequently observed using visible cameras onboard landers or rovers. Nevertheless, previous research on automated detection of dust devils has not focused on these common types of dust devils, but on dust devils that appear on images to be irregularly bright and large. In this study, we detect these common dust devils automatically using two kinds of parameter sets for thresholding when binarizing subtracted images. We automatically extract dust devils from 266 images taken by the Spirit rover to evaluate our algorithm. Taking dust devils detected by visual inspection to be ground truth, the precision, recall and F-measure values are 0.77, 0.86, and 0.81, respectively.
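
    A sketch of the two-parameter-set thresholding on subtracted frames; all threshold values are illustrative, not the paper's:

```python
import numpy as np
from scipy import ndimage

def detect_dust_devils(img_a, img_b, thr_bright=12.0, thr_dark=-8.0, min_pix=4):
    # Subtract two co-registered landscape frames, binarize with one
    # parameter set tuned for tiny bright devils and one for obscured large
    # ones, and keep connected components above a minimum size.
    diff = img_a.astype(float) - img_b.astype(float)
    candidates = (diff > thr_bright) | (diff < thr_dark)
    labels, n = ndimage.label(candidates)
    sizes = np.bincount(labels.ravel())
    return [i for i in range(1, n + 1) if sizes[i] >= min_pix]
```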

  19. Eliminating Bias In Acousto-Optical Spectrum Analysis

    NASA Technical Reports Server (NTRS)

    Ansari, Homayoon; Lesh, James R.

    1992-01-01

    Scheme for digital processing of video signals in acousto-optical spectrum analyzer provides real-time correction for signal-dependent spectral bias. Spectrum analyzer described in "Two-Dimensional Acousto-Optical Spectrum Analyzer" (NPO-18092); related apparatus described in "Three-Dimensional Acousto-Optical Spectrum Analyzer" (NPO-18122). Essence of correction is to average over digitized outputs of pixels in each CCD row and to subtract this average from the digitized output of each pixel in the row. Signal processed electro-optically with reference-function signals to form two-dimensional spectral image in CCD camera.
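
    The row-average correction is a one-liner; a sketch assuming the frame is a 2D array of digitized pixel outputs:

```python
import numpy as np

def remove_row_bias(frame):
    # Average the digitized outputs of the pixels in each CCD row and
    # subtract that average from every pixel in the row, removing the
    # signal-dependent spectral bias.
    return frame - frame.mean(axis=1, keepdims=True)
```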

  20. Evaluation of ERTS imagery for spectral geological mapping in diverse terranes of New York State

    NASA Technical Reports Server (NTRS)

    Isachsen, Y. W. (Principal Investigator); Rickard, L. V.

    1972-01-01

    The author has identified the following significant results. Preliminary visual examination of film positives of thirty ERTS-1 scenes obtained over New York State and adjacent areas indicates the following: (1) sixty percent of the imagery has a cloud cover of 70-100 percent, twenty-five percent has a cloud cover of 0-30 percent, and the remainder has a cover of 40-65 percent; (2) on the useable imagery, the spectral lines that may turn out to be geologically linked total as follows: spectral linears, 5200 km; broadly curved lines (spectral curvilinears), 700 km; major forest boundaries, 3100 km; areas with spectral geological fabric, 3100 sq km. In the central and northwest Adirondacks, known lineaments and faults were subtracted from the spectral linears, leaving a residue that totals 160 km in the central Adirondacks and 230 km in the northwest Adirondacks. It must be emphasized that these are spectral linears which have not yet been checked against any ground truth except geological.

  1. Spectral Learning for Supervised Topic Models.

    PubMed

    Ren, Yong; Wang, Yining; Zhu, Jun

    2018-03-01

    Supervised topic models simultaneously model the latent topic structure of large collections of documents and a response variable associated with each document. Existing inference methods are based on variational approximation or Monte Carlo sampling, which often suffer from the local-minimum defect. Spectral methods have been applied to learn unsupervised topic models, such as latent Dirichlet allocation (LDA), with provable guarantees. This paper investigates the possibility of applying spectral methods to recover the parameters of supervised LDA (sLDA). We first present a two-stage spectral method, which recovers the parameters of LDA followed by a power update method to recover the regression model parameters. Then, we further present a single-phase spectral algorithm to jointly recover the topic distribution matrix as well as the regression weights. Our spectral algorithms are provably correct and computationally efficient. We prove a sample complexity bound for each algorithm and subsequently derive a sufficient condition for the identifiability of sLDA. Thorough experiments on synthetic and real-world datasets verify the theory and demonstrate the practical effectiveness of the spectral algorithms. In fact, our results on a large-scale review rating dataset demonstrate that our single-phase spectral algorithm alone achieves comparable or even better performance than state-of-the-art methods, while previous work on spectral methods has rarely reported such promising performance.

  2. Investigation of contrast-enhanced subtracted breast CT images with MAP-EM based on projection-based weighting imaging.

    PubMed

    Zhou, Zhengdong; Guan, Shaolin; Xin, Runchao; Li, Jianbo

    2018-06-01

    Contrast-enhanced subtracted breast computed tomography (CESBCT) images acquired using an energy-resolved photon counting detector can be helpful to enhance the visibility of breast tumors. In such technology, one challenge is the limited number of photons in each energy bin, which can lead to high noise in the separate images from each energy bin, the projection-based weighted image, and the subtracted image. In conventional low-dose CT imaging, iterative image reconstruction provides a superior signal-to-noise ratio compared with the filtered back projection (FBP) algorithm. In this paper, maximum a posteriori expectation maximization (MAP-EM) based on projection-based weighting imaging is proposed for the reconstruction of CESBCT images acquired using an energy-resolving photon counting detector, and its performance is investigated in terms of contrast-to-noise ratio (CNR). The simulation study shows that MAP-EM based on projection-based weighting imaging can improve the CNR in CESBCT images by 117.7%-121.2% compared with FBP based on projection-based weighting imaging. When compared with energy-integrating imaging that uses the MAP-EM algorithm, projection-based weighting imaging that uses the MAP-EM algorithm can improve the CNR of CESBCT images by 10.5%-13.3%. In conclusion, MAP-EM based on projection-based weighting imaging shows significant improvement in the CNR of CESBCT images compared with FBP based on projection-based weighting imaging, and MAP-EM based on projection-based weighting imaging outperforms MAP-EM based on energy-integrating imaging for CESBCT imaging.
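
    For orientation, a plain ML-EM update loop is sketched below; the MAP variant adds a prior term to each update, and the system matrix A and sinogram y here are toy placeholders:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    # A: toy system matrix (n_rays x n_voxels); y: measured sinogram.
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity normalization
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)       # forward projection
        x *= (A.T @ (y / proj)) / np.maximum(sens, 1e-12)
    return x
```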

  3. Hyperspectral feature mapping classification based on mathematical morphology

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Li, Junwei; Wang, Guangping; Wu, Jingli

    2016-03-01

    This paper proposed a hyperspectral feature mapping classification algorithm based on mathematical morphology. Without the priori information such as spectral library etc., the spectral and spatial information can be used to realize the hyperspectral feature mapping classification. The mathematical morphological erosion and dilation operations are performed respectively to extract endmembers. The spectral feature mapping algorithm is used to carry on hyperspectral image classification. The hyperspectral image collected by AVIRIS is applied to evaluate the proposed algorithm. The proposed algorithm is compared with minimum Euclidean distance mapping algorithm, minimum Mahalanobis distance mapping algorithm, SAM algorithm and binary encoding mapping algorithm. From the results of the experiments, it is illuminated that the proposed algorithm's performance is better than that of the other algorithms under the same condition and has higher classification accuracy.

  4. Noise Power Spectrum Measurements in Digital Imaging With Gain Nonuniformity Correction.

    PubMed

    Kim, Dong Sik

    2016-08-01

    The noise power spectrum (NPS) of an image sensor provides the spectral noise properties needed to evaluate sensor performance. Hence, measuring an accurate NPS is important. However, the fixed pattern noise from the sensor's nonuniform gain inflates the NPS, which is measured from images acquired by the sensor. Detrending the low-frequency fixed pattern is traditionally used to accurately measure the NPS. However, detrending methods cannot remove high-frequency fixed patterns. In order to efficiently correct the fixed pattern noise, a gain-correction technique based on the gain map can be used. The gain map is generated using the average of uniformly illuminated images without any objects. Increasing the number of images n for averaging can reduce the remaining photon noise in the gain map and yield accurate NPS values. However, for practical finite n, the photon noise also significantly inflates the NPS. In this paper, a nonuniform-gain image formation model is proposed and the performance of the gain correction is theoretically analyzed in terms of the signal-to-noise ratio (SNR). It is shown that the SNR is O(√n). An NPS measurement algorithm based on the gain map is then proposed for any given n. Under a weak nonuniform-gain assumption, another measurement algorithm based on the image difference is also proposed. For real radiography image detectors, the proposed algorithms are compared with traditional detrending and subtraction methods, and it is shown that as few as two images (n = 1) can provide an accurate NPS because of the compensation constant (1 + 1/n).
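
    A sketch of the image-difference route, under the assumption that fixed patterns cancel in the subtraction and halving the power restores the single-image noise level; this is one reading of the abstract, not the paper's exact estimator:

```python
import numpy as np

def nps_from_difference(img1, img2, pixel_pitch=1.0):
    # Fixed-pattern (gain) structure cancels in the difference of two
    # flat-field images; dividing by 2 restores single-image noise power.
    diff = img1.astype(float) - img2.astype(float)
    diff -= diff.mean()
    ny, nx = diff.shape
    return pixel_pitch**2 * np.abs(np.fft.fft2(diff))**2 / (nx * ny * 2)
```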

  5. Intermediate Palomar Transient Factory: Realtime Image Subtraction Pipeline

    DOE PAGES

    Cao, Yi; Nugent, Peter E.; Kasliwal, Mansi M.

    2016-09-28

    A fast-turnaround pipeline for realtime data reduction plays an essential role in discovering and permitting follow-up observations of young supernovae and fast-evolving transients in modern time-domain surveys. In this paper, we present the realtime image subtraction pipeline in the intermediate Palomar Transient Factory. By using high-performance computing, efficient databases, and machine-learning algorithms, this pipeline manages to reliably deliver transient candidates within 10 minutes of images being taken. Our experience in using high-performance computing resources to process big data in astronomy serves as a trailblazer for dealing with data from large-scale time-domain facilities in the near future.

  6. Intermediate Palomar Transient Factory: Realtime Image Subtraction Pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Yi; Nugent, Peter E.; Kasliwal, Mansi M.

    A fast-turnaround pipeline for realtime data reduction plays an essential role in discovering and permitting follow-up observations of young supernovae and fast-evolving transients in modern time-domain surveys. In this paper, we present the realtime image subtraction pipeline in the intermediate Palomar Transient Factory. By using high-performance computing, efficient databases, and machine-learning algorithms, this pipeline manages to reliably deliver transient candidates within 10 minutes of images being taken. Our experience in using high-performance computing resources to process big data in astronomy serves as a trailblazer for dealing with data from large-scale time-domain facilities in the near future.

  7. An improved dark-object subtraction technique for atmospheric scattering correction of multispectral data

    USGS Publications Warehouse

    Chavez, P.S.

    1988-01-01

    Digital analysis of remotely sensed data has become an important component of many earth-science studies. These data are often processed through a set of preprocessing or "clean-up" routines that includes a correction for atmospheric scattering, often called haze. Various methods to correct or remove the additive haze component have been developed, including the widely used dark-object subtraction technique. A problem with most of these methods is that the haze values for each spectral band are selected independently. This can create problems because atmospheric scattering is highly wavelength-dependent in the visible part of the electromagnetic spectrum and the scattering values are correlated with each other. Therefore, multispectral data such as from the Landsat Thematic Mapper and Multispectral Scanner must be corrected with haze values that are spectral band dependent. An improved dark-object subtraction technique is demonstrated that allows the user to select a relative atmospheric scattering model to predict the haze values for all the spectral bands from a selected starting band haze value. The improved method normalizes the predicted haze values for the different gain and offset parameters used by the imaging system. Examples of haze value differences between the old and improved methods for Thematic Mapper Bands 1, 2, 3, 4, 5, and 7 are 40.0, 13.0, 12.0, 8.0, 5.0, and 2.0 vs. 40.0, 13.2, 8.9, 4.9, 16.7, and 3.3, respectively, using a relative scattering model of a clear atmosphere. In one Landsat multispectral scanner image the haze value differences for Bands 4, 5, 6, and 7 were 30.0, 50.0, 50.0, and 40.0 for the old method vs. 30.0, 34.4, 43.6, and 6.4 for the new method using a relative scattering model of a hazy atmosphere. © 1988.
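
    A simplified reading of the improved technique, with a hypothetical power-law relative scattering model and per-band gain/offset arrays; the exponent values are illustrative:

```python
import numpy as np

def predict_haze_dn(start_dn, wavelengths, gains, offsets, power=4.0):
    # Take one starting-band haze DN (band index 0), scale it across bands
    # with a relative scattering model lambda**-power (power near 4 for a
    # very clear atmosphere, smaller for hazier ones), and renormalize by
    # each band's gain and offset (assuming DN = gain * radiance + offset).
    rel = (wavelengths / wavelengths[0]) ** (-power)
    haze_rad = (start_dn - offsets[0]) / gains[0] * rel
    return haze_rad * gains + offsets     # per-band haze DN to subtract
```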

  8. Spectral Diffusion: An Algorithm for Robust Material Decomposition of Spectral CT Data

    PubMed Central

    Clark, Darin P.; Badea, Cristian T.

    2014-01-01

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piece-wise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg/mL), gold (0.9 mg/mL), and gadolinium (2.9 mg/mL) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen. PMID:25296173

  9. Spectral diffusion: an algorithm for robust material decomposition of spectral CT data.

    PubMed

    Clark, Darin P; Badea, Cristian T

    2014-11-07

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piecewise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg mL⁻¹), gold (0.9 mg mL⁻¹), and gadolinium (2.9 mg mL⁻¹) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen.

  10. A wavelet and least square filter based spatial-spectral denoising approach of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Li, Ting; Chen, Xiao-Mei; Chen, Gang; Xue, Bo; Ni, Guo-Qiang

    2009-11-01

    Noise reduction is a crucial step in hyperspectral imagery pre-processing. Owing to sensor characteristics, the noise of hyperspectral imagery appears in both the spatial and the spectral domain. However, most prevailing denoising techniques process the imagery in only one of these domains and thus fail to exploit its multi-domain nature. In this paper, a new spatial-spectral noise reduction algorithm is proposed, based on wavelet analysis and least-squares filtering techniques. First, in the spatial domain, a new stationary wavelet shrinking algorithm with an improved threshold function is used to suppress noise band by band. This algorithm uses BayesShrink for threshold estimation and amends the traditional soft-threshold function by adding shape tuning parameters. Compared with the soft or hard threshold functions, the improved one, which is first-order differentiable and has a smooth transitional region between noise and signal, preserves more edge detail and weakens pseudo-Gibbs artifacts. Then, in the spectral domain, a cubic Savitzky-Golay filter based on the least-squares method is used to remove spectral noise and any artificial noise introduced during the spatial denoising. With the filter window width selected appropriately from prior knowledge, this step effectively smooths the spectral curve. The performance of the new algorithm is evaluated on a set of Hyperion images acquired in 2007. The results show that the new spatial-spectral denoising algorithm provides a larger signal-to-noise-ratio improvement than traditional spatial or spectral methods while better preserving local spectral absorption features.
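
    The two-stage idea lends itself to a compact sketch, shown below under several simplifying assumptions: an ordinary discrete wavelet transform stands in for the stationary one, and a plain BayesShrink-style soft threshold stands in for the paper's tuned threshold function.

    ```python
    import numpy as np
    import pywt
    from scipy.signal import savgol_filter

    def denoise_cube(cube, wavelet="db4", level=2, window=9, polyorder=3):
        """Simplified spatial-spectral denoising for a cube of shape
        (bands, rows, cols): per-band wavelet shrinkage followed by cubic
        Savitzky-Golay smoothing along each pixel's spectrum. `window` must
        be odd and no larger than the number of bands.
        """
        out = np.empty(cube.shape, dtype=float)
        for b, band in enumerate(cube.astype(float)):
            coeffs = pywt.wavedec2(band, wavelet, level=level)
            # Noise sigma from the finest diagonal detail (robust MAD estimate).
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
            new_coeffs = [coeffs[0]]
            for details in coeffs[1:]:
                shrunk = []
                for d in details:
                    # BayesShrink threshold: sigma^2 over the signal std.
                    sig_x = np.sqrt(max(np.var(d) - sigma**2, 1e-12))
                    shrunk.append(pywt.threshold(d, sigma**2 / sig_x, mode="soft"))
                new_coeffs.append(tuple(shrunk))
            out[b] = pywt.waverec2(new_coeffs, wavelet)[:band.shape[0], :band.shape[1]]
        # Spectral smoothing along the band axis.
        return savgol_filter(out, window_length=window, polyorder=polyorder, axis=0)
    ```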

  11. Effects of global signal regression and subtraction methods on resting-state functional connectivity using arterial spin labeling data.

    PubMed

    Silva, João Paulo Santos; Mônaco, Luciana da Mata; Paschoal, André Monteiro; Oliveira, Ícaro Agenor Ferreira de; Leoni, Renata Ferranti

    2018-05-16

    Arterial spin labeling (ASL) is an established magnetic resonance imaging (MRI) technique that is finding broader applications in functional studies of the healthy and diseased brain. To improve cerebral blood flow (CBF) signal specificity, many algorithms and imaging procedures, such as subtraction methods, have been proposed to eliminate or at least minimize noise sources. This study therefore addressed how CBF functional connectivity (FC) is changed, regarding resting brain network (RBN) identification and correlations between regions of interest (ROI), by different subtraction methods and by the removal of residual motion artifacts and global signal fluctuations (RMAGSF). Twenty young healthy participants (13 M/7 F, mean age = 25 ± 3 years) underwent an MRI protocol with a pseudo-continuous ASL (pCASL) sequence. Perfusion-based images were obtained using simple, sinc, and running subtraction. RMAGSF removal was applied to all CBF time series. Independent component analysis (ICA) was used for RBN identification, while Pearson correlation was used for ROI-based FC analysis. Temporal signal-to-noise ratio (tSNR) was higher in CBF maps obtained by sinc subtraction, although RMAGSF removal had a significant effect on maps obtained with simple and running subtraction. Neither the subtraction method nor the RMAGSF removal directly affected the identification of RBNs. However, the number of correlated and anti-correlated voxels varied across subtraction and filtering methods. At the ROI-to-ROI level, changes were prominent in FC values and their statistical significance. Our study showed that both RMAGSF filtering and the subtraction method may influence resting-state FC results, especially at the ROI level, consequently affecting FC analysis and its interpretation. Taken together, our results suggest that for an exploratory assessment of the brain one could avoid removing RMAGSF so as not to bias FC measures, while using sinc subtraction to minimize low-frequency contamination. However, CBF signal specificity and the frequency range for filtering purposes still need to be assessed in future studies.
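
    For readers unfamiliar with the subtraction variants being compared, a minimal sketch follows. The sign convention and the assumption that control frames come first are illustrative; sinc subtraction (temporal sinc interpolation in place of neighbour averaging) is only noted, not implemented.

    ```python
    import numpy as np

    def simple_subtraction(control, label):
        """Pairwise control-label subtraction: one perfusion image per pair."""
        return control - label

    def running_subtraction(series):
        """Running subtraction: each frame is subtracted from the average of
        its two temporal neighbours, which are of the opposite type. `series`
        is assumed to alternate control, label, control, label, ... Sinc
        subtraction would replace the neighbour average with temporal sinc
        interpolation at the frame's acquisition time.
        """
        perf = []
        for i in range(1, len(series) - 1):
            neighbour_avg = 0.5 * (series[i - 1] + series[i + 1])
            sign = 1.0 if i % 2 else -1.0  # odd-indexed frames are labels here
            perf.append(sign * (neighbour_avg - series[i]))
        return np.array(perf)
    ```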

  12. Efficient Computation of Difference Vibrational Spectra in Isothermal-Isobaric Ensemble.

    PubMed

    Joutsuka, Tatsuya; Morita, Akihiro

    2016-11-03

    Difference spectroscopy between two close systems is widely used to augment selectivity to the differing parts of the observed system, though computing a tiny difference spectrum by molecular dynamics subtraction of two full spectra would be extraordinarily demanding. We have therefore proposed an efficient computational algorithm for difference spectra that does not resort to subtraction. The present paper reports our extension of the theoretical method to the isothermal-isobaric (NPT) ensemble, which widens its applications to include the pressure dependence of spectra. We verified that the present theory yields accurate difference spectra in the NPT condition as well, with remarkable computational efficiency, surpassing straightforward subtraction by several orders of magnitude. The method is further applied to vibrational spectra of liquid water with varying pressure and succeeded in reproducing the tiny difference spectra induced by pressure change. The anomalous pressure dependence is elucidated in relation to other properties of liquid water.

  13. Automated image-based colon cleansing for laxative-free CT colonography computer-aided polyp detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linguraru, Marius George; Panjwani, Neil; Fletcher, Joel G.

    2011-12-15

    Purpose: To evaluate the performance of a computer-aided detection (CAD) system for detecting colonic polyps at noncathartic computed tomography colonography (CTC) in conjunction with an automated image-based colon cleansing algorithm. Methods: An automated colon cleansing algorithm was designed to detect and subtract tagged stool, accounting for heterogeneity and poor tagging, to be used in conjunction with a colon CAD system. The method is locally adaptive and combines intensity, shape, and texture analysis with probabilistic optimization. CTC data from cathartic-free bowel preparation were acquired for testing and training the parameters. Patients underwent various colonic preparations with barium or Gastroview in divided doses over 48 h before scanning. No laxatives were administered and no dietary modifications were required. Cases were selected from a polyp-enriched cohort and included scans in which at least 90% of the solid stool was visually estimated to be tagged and each colonic segment was distended in either the prone or supine view. The CAD system was run comparatively with and without the stool subtraction algorithm. Results: The dataset comprised 38 CTC scans from prone and/or supine scans of 19 patients containing 44 polyps larger than 10 mm (22 unique polyps, if matched between prone and supine scans). The results are robust for fine details around folds, for thin stool linings on the colonic wall, near polyps, and in large fluid/stool pools. The sensitivity of the CAD system is 70.5% per polyp at a rate of 5.75 false positives/scan without the stool subtraction module. Detection improved significantly (p = 0.009) after automated colon cleansing on cathartic-free data, to an 86.4% true positive rate at 5.75 false positives/scan. Conclusions: An automated image-based colon cleansing algorithm designed to overcome the challenges of the noncathartic colon significantly improves the sensitivity of colon CAD, by approximately 15%.

  14. Multi scales based sparse matrix spectral clustering image segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin

    2018-04-01

    In image segmentation, spectral clustering algorithms have to adopt an appropriate scaling parameter to calculate the similarity matrix between pixels, and this choice can have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm increase greatly. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method, then extract the features of the image at different scales, and finally use the feature information to construct a sparse similarity matrix, which improves the operational efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm has better accuracy and robustness.
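
    A hedged sketch of the general approach, not the authors' exact algorithm: multi-scale box-filtered intensities serve as stand-in features, and a k-nearest-neighbour graph supplies the sparse similarity matrix.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.neighbors import kneighbors_graph
    from sklearn.cluster import SpectralClustering

    def segment(image, n_segments=4, k=10, scales=(1, 2, 4)):
        """Multi-scale features plus a k-NN sparse affinity matrix.

        Box-filtered intensities at several scales stand in for the paper's
        feature extractor; the k-nearest-neighbour graph keeps the similarity
        matrix sparse, which is what makes the eigendecomposition tractable
        for large images.
        """
        h, w = image.shape
        feats = np.stack([uniform_filter(image.astype(float), s) for s in scales], axis=-1)
        X = feats.reshape(h * w, len(scales))
        A = kneighbors_graph(X, n_neighbors=k, mode="connectivity", include_self=False)
        A = 0.5 * (A + A.T)  # symmetrize the sparse affinity
        labels = SpectralClustering(n_clusters=n_segments, affinity="precomputed",
                                    assign_labels="kmeans", random_state=0).fit_predict(A)
        return labels.reshape(h, w)
    ```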

  15. Fast Constrained Spectral Clustering and Cluster Ensemble with Random Projection

    PubMed Central

    Liu, Wenfen

    2017-01-01

    Constrained spectral clustering (CSC) can greatly improve clustering accuracy by incorporating constraint information into spectral clustering, and it has therefore received wide academic attention. In this paper, we propose a fast CSC algorithm that encodes landmark-based graph construction into a new CSC model and applies random sampling to decrease the data size after spectral embedding. Compared with the original model, the new algorithm gives similar results as its model size increases asymptotically; compared with the most efficient CSC algorithm known, the new algorithm runs faster and suits a wider range of data sets. Meanwhile, a scalable semisupervised cluster ensemble algorithm is also proposed by combining our fast CSC algorithm with dimensionality reduction via random projection in the process of spectral ensemble clustering. We demonstrate, through theoretical analysis and empirical results, that the new cluster ensemble algorithm has advantages in terms of efficiency and effectiveness. Furthermore, the approximate preservation of clustering accuracy under random projection, proved in the consensus clustering stage, also holds for weighted k-means clustering and thus gives a theoretical guarantee for this special kind of k-means clustering, where each point has its corresponding weight. PMID:29312447

  16. Automated detection of jet contrails using the AVHRR split window

    NASA Technical Reports Server (NTRS)

    Engelstad, M.; Sengupta, S. K.; Lee, T.; Welch, R. M.

    1992-01-01

    This paper investigates the automated detection of jet contrails using data from the Advanced Very High Resolution Radiometer. A preliminary algorithm subtracts the 11.8-micron image from the 10.8-micron image, creating a difference image on which contrails are enhanced. Then a three-stage algorithm searches the difference image for the nearly-straight line segments which characterize contrails. First, the algorithm searches for elevated, linear patterns called 'ridges'. Second, it applies a Hough transform to the detected ridges to locate nearly-straight lines. Third, the algorithm determines which of the nearly-straight lines are likely to be contrails. The paper applies this technique to several test scenes.
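
    The split-window differencing and line search can be sketched as follows; the simple threshold stands in for the paper's ridge detector, and all parameter values are illustrative.

    ```python
    import numpy as np
    from skimage.transform import hough_line, hough_line_peaks

    def detect_contrail_lines(t108, t118, diff_threshold=1.0):
        """Split-window contrail enhancement and straight-line search.

        `t108` and `t118` are brightness-temperature images at ~10.8 and
        ~11.8 microns; contrails appear as positive ridges in the difference.
        The paper's ridge/Hough stages are collapsed here into a threshold
        followed by a straight-line Hough transform.
        """
        diff = t108 - t118                 # contrails enhanced in this image
        ridges = diff > diff_threshold     # crude stand-in for ridge detection
        h, angles, dists = hough_line(ridges)
        # Keep the strongest nearly-straight candidates.
        peaks = hough_line_peaks(h, angles, dists, num_peaks=10)
        return list(zip(*peaks))           # (votes, angle, distance) triples
    ```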

  17. Noniterative algorithm for improving the accuracy of a multicolor-light-emitting-diode-based colorimeter.

    PubMed

    Yang, Pao-Keng

    2012-05-01

    We present a noniterative algorithm to reliably reconstruct the spectral reflectance from discrete reflectance values measured by using multicolor light emitting diodes (LEDs) as probing light sources. The proposed algorithm estimates the spectral reflectance by a linear combination of product functions of the detector's responsivity function and the LEDs' line-shape functions. After introducing suitable correction, the resulting spectral reflectance was found to be free from the spectral-broadening effect due to the finite bandwidth of LED. We analyzed the data for a real sample and found that spectral reflectance with enhanced resolution gives a more accurate prediction in the color measurement.

  18. Noniterative algorithm for improving the accuracy of a multicolor-light-emitting-diode-based colorimeter

    NASA Astrophysics Data System (ADS)

    Yang, Pao-Keng

    2012-05-01

    We present a noniterative algorithm to reliably reconstruct the spectral reflectance from discrete reflectance values measured by using multicolor light emitting diodes (LEDs) as probing light sources. The proposed algorithm estimates the spectral reflectance by a linear combination of product functions of the detector's responsivity function and the LEDs' line-shape functions. After introducing suitable correction, the resulting spectral reflectance was found to be free from the spectral-broadening effect due to the finite bandwidth of LED. We analyzed the data for a real sample and found that spectral reflectance with enhanced resolution gives a more accurate prediction in the color measurement.

  19. Residual aneurysm after metal coils treatment detected by spectral CT

    PubMed Central

    Wang, Yang; Gao, Xiaolei; Lu, Aixun; Zhou, Zhengyang; Li, Baoxin

    2012-01-01

    Digital subtraction angiography (DSA) is currently the gold standard for diagnosing the residue or recurrence of an aneurysm after treatment, especially in the presence of metal coils. However, DSA is an invasive procedure which may cause additional trauma and economic burden to patients. Spectral CT imaging, a newly introduced CT imaging mode, produces monochromatic image sets that are able to reduce beam-hardening and other metal-related artifacts, and it has found use in several clinical applications, including brain imaging. In this study, we describe a case of spectral CT imaging in the follow-up of metal coil treatment and the detection of a small residual leaf of an aneurysm after coiling. PMID:23256074

  20. Real time standoff gas detection and environmental monitoring with LWIR hyperspectral imager

    NASA Astrophysics Data System (ADS)

    Prel, Florent; Moreau, Louis; Lavoie, Hugo; Bouffard, François; Thériault, Jean-Marc; Vallieres, Christian; Roy, Claude; Dubé, Denis

    2012-10-01

    MR-i is a dual-band hyperspectral imaging spectro-radiometer. This field instrument generates spectral datacubes in the MWIR and LWIR. MR-i is modular and can be configured in different ways. One of its configurations is optimized for standoff measurements of gases in differential mode. In this mode, the instrument is equipped with a dual-input telescope to perform optical background subtraction. The resulting signal is the difference between the spectral radiances entering the two input ports, so the background signal is automatically removed from the signal of the target of interest. The spectral range of this configuration extends into the VLWIR (cut-off near 14 μm) to take full advantage of the LW atmospheric window.

  1. Implementation of spectral clustering on microarray data of carcinoma using k-means algorithm

    NASA Astrophysics Data System (ADS)

    Frisca; Bustamam, Alhadi; Siswantining, Titin

    2017-03-01

    Clustering is a data analysis method that aims to group data with similar characteristics. Spectral clustering is one of the most popular modern clustering algorithms; as an effective clustering technique, it emerged from the concepts of spectral graph theory. Spectral clustering requires a partitioning algorithm, and several partitioning methods exist, including PAM, SOM, fuzzy c-means, and k-means. Based on research done by Capital and Choudhury in 2013, the k-means algorithm provides better accuracy than the PAM algorithm when the Euclidean distance is used, so in this paper we use k-means as our partitioning algorithm. The major advantage of spectral clustering is in reducing data dimension, in this case the dimension of a large microarray dataset. A microarray is a small chip made of a glass plate containing thousands or even tens of thousands of kinds of genes in DNA fragments derived from doubling cDNA. Microarray data are widely used to detect cancer, for example carcinoma, in which cancer cells express abnormalities in their genes. The purpose of this research is to place data with high similarity in the same group and data with low similarity in different groups. The carcinoma microarray data used in this research comprise 7457 genes, and partitioning with the k-means algorithm yields two clusters.
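
    A minimal, from-scratch sketch of the spectral embedding plus k-means partitioning described above; the Gaussian affinity and normalized Laplacian are common choices assumed here rather than taken from the paper.

    ```python
    import numpy as np
    from scipy.linalg import eigh
    from sklearn.cluster import KMeans

    def spectral_kmeans(X, n_clusters=2, sigma=1.0):
        """Spectral embedding plus k-means for a samples-by-genes matrix X.

        Gaussian affinity -> normalized Laplacian -> k-means on the leading
        eigenvectors; the dimension drops from thousands of genes to
        n_clusters before partitioning.
        """
        sq = np.sum(X**2, axis=1)
        d2 = np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0.0)
        W = np.exp(-d2 / (2 * sigma**2))
        np.fill_diagonal(W, 0.0)
        d = W.sum(axis=1)
        L = np.eye(len(W)) - W / np.sqrt(d[:, None] * d[None, :])
        # Eigenvectors for the smallest eigenvalues span the embedding.
        _, vecs = eigh(L, subset_by_index=[0, n_clusters - 1])
        U = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
        return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(U)
    ```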

  2. Reconstructing Spectral Scenes Using Statistical Estimation to Enhance Space Situational Awareness

    DTIC Science & Technology

    2006-12-01

    simultaneously spatially and spectrally deblur the images collected from ASIS. The algorithms are based on proven estimation theories and do not... collected with any system using a filtering technology known as Electronic Tunable Filters (ETFs). Previous methods to deblur spectral images collected... spectrally deblurring than the previously investigated methods. This algorithm expands on a method used for increasing the spectral resolution in gamma-ray

  3. ViBe: a universal background subtraction algorithm for video sequences.

    PubMed

    Barnich, Olivier; Van Droogenbroeck, Marc

    2011-06-01

    This paper presents a technique for motion detection that incorporates several innovative mechanisms. For example, our proposed technique stores, for each pixel, a set of values taken in the past at the same location or in the neighborhood. It then compares this set to the current pixel value in order to determine whether that pixel belongs to the background, and it adapts the model by choosing randomly which values to substitute from the background model. This approach differs from those based upon the classical belief that the oldest values should be replaced first. Finally, when the pixel is found to be part of the background, its value is propagated into the background model of a neighboring pixel. We describe our method in full detail (including pseudo-code and the parameter values used) and compare it to other background subtraction techniques. Efficiency figures show that our method outperforms recent and proven state-of-the-art methods in terms of both computation speed and detection rate. We also analyze the performance of a version of our algorithm downscaled to the absolute minimum of one comparison and one byte of memory per pixel. It appears that even such a simplified version of our algorithm performs better than mainstream techniques.
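
    A heavily simplified, vectorized approximation of the ViBe idea for grayscale frames is sketched below; sample counts, radius, and the subsampling factor follow commonly quoted defaults, and the original's per-pixel random propagation is only loosely mimicked.

    ```python
    import numpy as np

    class ViBeSketch:
        """Minimal approximation of ViBe for grayscale frames.

        Each pixel keeps n_samples past values; it is background if at least
        min_matches samples lie within `radius` of the current value.
        Background pixels randomly refresh one of their own samples and one
        sample of a random neighbour (the spatial propagation step).
        """

        def __init__(self, first_frame, n_samples=20, radius=20,
                     min_matches=2, subsample=16, seed=0):
            self.rng = np.random.default_rng(seed)
            h, w = first_frame.shape
            noise = self.rng.integers(-10, 11, size=(n_samples, h, w))
            self.samples = np.clip(first_frame[None].astype(int) + noise, 0, 255)
            self.radius, self.min_matches = radius, min_matches
            self.subsample = subsample

        def apply(self, frame):
            close = np.abs(self.samples - frame[None].astype(int)) < self.radius
            background = close.sum(axis=0) >= self.min_matches
            # Conservative in-place update with probability 1/subsample.
            update = background & (self.rng.random(frame.shape) < 1.0 / self.subsample)
            idx = self.rng.integers(0, len(self.samples), size=frame.shape)
            ys, xs = np.nonzero(update)
            self.samples[idx[ys, xs], ys, xs] = frame[ys, xs]
            # Spatial propagation into a random (8-neighbourhood) neighbour.
            ny = np.clip(ys + self.rng.integers(-1, 2, len(ys)), 0, frame.shape[0] - 1)
            nx = np.clip(xs + self.rng.integers(-1, 2, len(xs)), 0, frame.shape[1] - 1)
            self.samples[idx[ys, xs], ny, nx] = frame[ys, xs]
            return ~background  # True where foreground
    ```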

  4. Highly noise-tolerant hybrid algorithm for phase retrieval from a single-shot spatial carrier fringe pattern

    NASA Astrophysics Data System (ADS)

    Dong, Zhichao; Cheng, Haobo

    2018-01-01

    A highly noise-tolerant hybrid algorithm (NTHA) is proposed in this study for phase retrieval from a single-shot spatial carrier fringe pattern (SCFP); it effectively combines the merits of the spatial carrier phase-shift method and the two-dimensional continuous wavelet transform (2D-CWT). NTHA first extracts three phase-shifted fringe patterns from the SCFP with a one-pixel malposition; it then calculates phase gradients by subtracting the reference phase from the two target phases, each retrieved from the phase-shifted fringe patterns by 2D-CWT; finally, it reconstructs the phase map by a least-squares gradient integration method. Its typical characteristics include, but are not limited to: (1) it does not require the spatial carrier to be constant; (2) the subtraction mitigates the edge errors of 2D-CWT; (3) it is highly noise-tolerant, because not only is 2D-CWT insensitive to noise, but the noise in the fringe pattern does not directly take part in the phase reconstruction as it does in the previous hybrid algorithm. Its feasibility and performance are validated extensively by simulations and comparative experiments against the temporal phase-shift, Fourier transform, and 2D-CWT methods.

  5. Evaluating an image-fusion algorithm with synthetic-image-generation tools

    NASA Astrophysics Data System (ADS)

    Gross, Harry N.; Schott, John R.

    1996-06-01

    An algorithm that combines spectral mixing and nonlinear optimization is used to fuse multiresolution images. Image fusion merges images of different spatial and spectral resolutions to create a high spatial resolution multispectral combination. High spectral resolution allows identification of materials in the scene, while high spatial resolution locates those materials. In this algorithm, conventional spectral mixing estimates the percentage of each material (called endmembers) within each low resolution pixel. Three spectral mixing models are compared; unconstrained, partially constrained, and fully constrained. In the partially constrained application, the endmember fractions are required to sum to one. In the fully constrained application, all fractions are additionally required to lie between zero and one. While negative fractions seem inappropriate, they can arise from random spectral realizations of the materials. In the second part of the algorithm, the low resolution fractions are used as inputs to a constrained nonlinear optimization that calculates the endmember fractions for the high resolution pixels. The constraints mirror the low resolution constraints and maintain consistency with the low resolution fraction results. The algorithm can use one or more higher resolution sharpening images to locate the endmembers to high spatial accuracy. The algorithm was evaluated with synthetic image generation (SIG) tools. A SIG developed image can be used to control the various error sources that are likely to impair the algorithm performance. These error sources include atmospheric effects, mismodeled spectral endmembers, and variability in topography and illumination. By controlling the introduction of these errors, the robustness of the algorithm can be studied and improved upon. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors. Although the hyperspectral images will be of modest to low resolution, fusing them with high resolution sharpening images will produce a higher spatial resolution land cover or material map.
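
    A sketch of the fully constrained mixing step for a single low-resolution pixel follows; imposing the sum-to-one constraint softly via a heavily weighted extra row is an assumption of convenience, not the paper's specific optimizer.

    ```python
    import numpy as np
    from scipy.optimize import lsq_linear

    def unmix_pixel(endmembers, pixel, weight=1e3):
        """Fully constrained unmixing: solve E @ f ~= pixel subject to
        0 <= f <= 1 and sum(f) = 1. The sum-to-one constraint is imposed
        softly by appending a heavily weighted row to the system.

        `endmembers` is (n_bands, n_endmembers); `pixel` is (n_bands,).
        Dropping the bounds gives the partially constrained variant;
        dropping the extra row as well gives the unconstrained one.
        """
        n = endmembers.shape[1]
        A = np.vstack([endmembers, weight * np.ones((1, n))])
        b = np.append(pixel, weight)
        return lsq_linear(A, b, bounds=(0.0, 1.0)).x
    ```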

  6. Systematic toxicological analysis: computer-assisted identification of poisons in biological materials.

    PubMed

    Stimpfl, Th; Demuth, W; Varmuza, K; Vycudilik, W

    2003-06-05

    New software was developed to improve the chances of identifying a "general unknown" in complex biological materials. To achieve this goal, the total ion current chromatogram was simplified by filtering the acquired mass spectra via an automated subtraction procedure, which removed mass spectra originating from the sample matrix as well as interfering substances from the extraction procedure. It could be shown that this tool emphasizes mass spectra of exceptional compounds and therefore provides the forensic toxicologist with further evidence, even in cases where mass spectral data of the unknown compound are not available in "standard" spectral libraries.

  7. New Spectral Evidence of an Unaccounted Component of the Near-infrared Extragalactic Background Light from the CIBER

    NASA Astrophysics Data System (ADS)

    Matsuura, Shuji; Arai, Toshiaki; Bock, James J.; Cooray, Asantha; Korngut, Phillip M.; Kim, Min Gyu; Lee, Hyung Mok; Lee, Dae Hee; Levenson, Louis R.; Matsumoto, Toshio; Onishi, Yosuke; Shirahata, Mai; Tsumura, Kohji; Wada, Takehiko; Zemcov, Michael

    2017-04-01

    The extragalactic background light (EBL) captures the total integrated emission from stars and galaxies throughout cosmic history. The amplitude of the near-infrared EBL from space absolute photometry observations has been controversial and depends strongly on the modeling and subtraction of the zodiacal light (ZL) foreground. We report the first measurement of the diffuse background spectrum at 0.8-1.7 μm from the CIBER experiment. The observations were obtained with an absolute spectrometer over two flights in multiple sky fields to enable the subtraction of ZL, stars, terrestrial emission, and diffuse Galactic light. After subtracting foregrounds and accounting for systematic errors, we find the nominal EBL brightness, assuming the Kelsall ZL model, is 42.7 (+11.9/-10.6) nW m⁻² sr⁻¹ at 1.4 μm. We also analyzed the data using the Wright ZL model, which results in a worse statistical fit to the data and an unphysical EBL, falling below the known background light from galaxies at λ < 1.3 μm. Using a model-independent analysis based on the minimum EBL brightness, we find an EBL brightness of 28.7 (+5.1/-3.3) nW m⁻² sr⁻¹ at 1.4 μm. While the derived EBL amplitude strongly depends on the ZL model, we find that we cannot fit the spectral data to ZL, Galactic emission, and an EBL composed solely of the integrated light from galaxy counts. The results require a new diffuse component, such as an additional foreground or an excess EBL with a redder spectrum than that of ZL.

  8. Temporal subtraction contrast-enhanced dedicated breast CT

    PubMed Central

    Gazi, Peymon M.; Aminololama-Shakeri, Shadi; Yang, Kai; Boone, John M.

    2016-01-01

    Purpose: To develop a framework of deformable image registration and segmentation for temporal subtraction contrast-enhanced dedicated breast CT. Methods: An iterative histogram-based two-means clustering method was used for the segmentation. Dedicated breast CT images were segmented into background (air), adipose, fibroglandular, and skin components. Fibroglandular tissue was classified as either normal or contrast-enhanced and then divided into tiers for the purpose of categorizing degrees of contrast enhancement. A variant of the Demons deformable registration algorithm, Intensity Difference Adaptive Demons (IDAD), was developed to correct for the large deformation forces that stem from contrast enhancement. The accuracy of the proposed method was evaluated in both mathematically simulated and physically acquired phantom images. Clinical usage and accuracy of the temporal subtraction framework were demonstrated using contrast-enhanced breast CT datasets from five patients. Registration performance was quantified using normalized cross correlation (NCC), symmetric uncertainty coefficient (SUC), normalized mutual information (NMI), mean square error (MSE), and target registration error (TRE). Results: The proposed method outperformed conventional affine and other Demons variations in contrast-enhanced breast CT image registration. In simulation studies, IDAD exhibited improvements in MSE (0-16%), NCC (0-6%), NMI (0-13%), and TRE (0-34%) compared to the conventional Demons approaches, depending on the size and intensity of the enhancing lesion. As lesion size and contrast enhancement levels increased, so did the improvement. The drop in the correlation between the pre- and post-contrast images for the largest enhancement levels (150 Hounsfield units) in phantom studies is less than 1.2%. Registration error, measured by TRE, shows only submillimeter mismatches between the concordant anatomical target points in all patient studies. The algorithm was implemented using a parallel processing architecture, resulting in rapid execution time for the iterative segmentation and intensity-adaptive registration techniques. Conclusion: Characterization of contrast-enhanced lesions is improved using temporal subtraction contrast-enhanced dedicated breast CT. Adaptation of the Demons registration forces as a function of contrast-enhancement level provides a means to accurately align breast tissue in pre- and post-contrast image acquisitions, improving subtraction results. Spatial subtraction of the aligned images yields useful diagnostic information with respect to enhanced lesion morphology and uptake. PMID:27494376

  9. Classification of natural formations based on their optical characteristics using small volumes of samples

    NASA Astrophysics Data System (ADS)

    Abramovich, N. S.; Kovalev, A. A.; Plyuta, V. Y.

    1986-02-01

    A computer algorithm has been developed to classify the spectral bands of natural scenes on Earth according to their optical characteristics. The algorithm is written in FORTRAN IV and can be used in spectral data processing programs requiring small data loads. The spectral classifications of several types of green vegetation canopies are given to illustrate the effectiveness of the algorithm.

  10. Influence of synchrotron self-absorption on 21-cm experiments

    NASA Astrophysics Data System (ADS)

    Zheng, Qian; Wu, Xiang-Ping; Gu, Jun-Hua; Wang, Jingying; Xu, Haiguang

    2012-08-01

    The spectral curvature introduced by the synchrotron self-absorption of extragalactic radio sources could break down the spectral smoothness on which rests the premise that the bright radio foreground can be successfully removed in 21-cm experiments searching for the epoch of reionization (EoR). We present a quantitative estimate of the effect of this spectral curvature on the measurement of the angular power spectrum of the low-frequency sky. We incorporate a phenomenological model, characterized by the fraction (f) of radio sources with turnover frequencies in the range of 100-1000 MHz and by a broken power law for the spectral transition around the turnover frequencies νm, into simulated radio sources over a small sky area of 10° × 10°. We compare statistically the changes in their residual maps with and without the inclusion of the synchrotron self-absorption of extragalactic radio sources after the bright sources with S(150 MHz) ≥ 100 mJy are excised. Furthermore, the best-fitting polynomials in the frequency domain on each pixel are subtracted. It has been shown that the effect of synchrotron self-absorption on the detection of the EoR depends sensitively on the spectral profiles of the radio sources around the turnover frequencies νm. A hard transition model, described by the broken power law with the turnover of spectral index at νm, would leave pronounced imprints on the residual background and would therefore cause serious confusion with the cosmic EoR signal. However, the spectral signatures on the angular power spectrum of the extragalactic foreground, generated by a soft transition model in which the rising and falling power laws of the spectral distribution around νm are connected through a smooth transition spanning ≥200 MHz in characteristic width, can be fitted and consequently subtracted by the use of polynomials to an acceptable degree (δT < 1 mK). As this latter scenario seems to be favoured in both theoretical expectations and radio spectral observations, we conclude that the contamination of extragalactic radio sources by synchrotron self-absorption in 21-cm experiments is probably very minor.
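
    The per-pixel polynomial foreground fit can be sketched in a few lines; fitting in log-log space, as below, is a common convention for power-law-like foregrounds and is assumed here rather than taken from the paper.

    ```python
    import numpy as np

    def subtract_foreground(spectra, freqs, order=3):
        """Per-pixel smooth-foreground removal: fit a low-order polynomial in
        log(frequency) to log(temperature) along each line of sight and
        return the residuals. A smooth, power-law-like foreground cancels,
        while spectrally structured signals survive. `spectra` has shape
        (n_pixels, n_freqs) and must be positive for the log fit.
        """
        x = np.log(freqs)
        coeffs = np.polyfit(x, np.log(spectra).T, order)  # vectorized over pixels
        powers = x[:, None] ** np.arange(order, -1, -1)[None, :]
        model = (powers @ coeffs).T
        return spectra - np.exp(model)
    ```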

  11. Red Blood Cell Count Automation Using Microscopic Hyperspectral Imaging Technology.

    PubMed

    Li, Qingli; Zhou, Mei; Liu, Hongying; Wang, Yiting; Guo, Fangmin

    2015-12-01

    The red blood cell count is one of the most frequently performed blood tests and is valuable for the early diagnosis of some diseases. This paper describes an automated red blood cell counting method based on microscopic hyperspectral imaging technology. Unlike light-microscopy-based red blood cell counting methods, the proposed method identifies red blood cells with a combined spatial and spectral algorithm, integrating active contour models and automated two-dimensional k-means with the spectral angle mapper algorithm. Experimental results show that the proposed algorithm performs better than a spatial-only algorithm because it jointly uses the spatial and spectral information of blood cells.
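
    The spectral angle mapper step is simple enough to show directly; the threshold in the usage line is an illustrative assumption.

    ```python
    import numpy as np

    def spectral_angle(cube, reference):
        """Spectral angle mapper: angle in radians between each pixel spectrum
        in `cube` (rows, cols, bands) and a reference spectrum; small angles
        mean the pixel matches the reference material.
        """
        dot = np.tensordot(cube, reference, axes=([-1], [0]))
        norms = np.linalg.norm(cube, axis=-1) * np.linalg.norm(reference)
        cos = np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0)
        return np.arccos(cos)

    # e.g. candidate_mask = spectral_angle(cube, rbc_reference) < 0.1
    ```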

  12. Reprocessing of Archival Direct Imaging Data of Herbig Ae/Be Stars

    NASA Astrophysics Data System (ADS)

    Safsten, Emily; Stephens, Denise C.

    2017-01-01

    Herbig Ae/Be (HAeBe) stars are intermediate-mass (2-10 solar mass) pre-main-sequence stars with circumstellar disks. They are the higher-mass analogs of the better-known T Tauri stars. Observing planets within these young disks would greatly aid in understanding planet formation processes and timescales, particularly around massive stars. So far, only one planet, HD 100546b, has been confirmed to orbit a HAeBe star. With over 250 HAeBe stars known, and several observed to have disks with structures thought to be related to planet formation, it seems likely that there are as yet undiscovered planetary companions within the circumstellar disks of some of these young stars. Direct detection of a low-luminosity companion near a star requires high-contrast imaging, often with the use of a coronagraph, and the subtraction of the central star's point spread function (PSF). Several processing algorithms have been developed in recent years to improve PSF subtraction and enhance the signal-to-noise ratio of sources close to the central star. However, many HAeBe stars were observed via direct imaging before these algorithms became available. We present here current work with the PSF subtraction program PynPoint, which employs a method of principal component analysis, to reprocess archival images of HAeBe stars and increase the likelihood of detecting a planet in their disks.
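
    A minimal sketch of the principal-component PSF subtraction that PynPoint-style processing performs; flattened images and the fixed mode count are assumptions of this toy version.

    ```python
    import numpy as np

    def pca_psf_subtract(science, references, n_modes=5):
        """PCA PSF subtraction. `science` is one flattened image (n_pixels,);
        `references` is a stack of flattened reference PSF images
        (n_refs, n_pixels). The science frame is projected onto the first
        n_modes principal components of the reference stack, and the
        projection is subtracted, leaving residuals in which a faint
        companion can stand out.
        """
        mean = references.mean(axis=0)
        _, _, vt = np.linalg.svd(references - mean, full_matrices=False)
        basis = vt[:n_modes]                 # (n_modes, n_pixels)
        sci = science - mean
        model = basis.T @ (basis @ sci)      # projection onto the PSF subspace
        return sci - model
    ```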

  13. Data compressive paradigm for multispectral sensing using tunable DWELL mid-infrared detectors.

    PubMed

    Jang, Woo-Yong; Hayat, Majeed M; Godoy, Sebastián E; Bender, Steven C; Zarkesh-Ha, Payman; Krishna, Sanjay

    2011-09-26

    While quantum dots-in-a-well (DWELL) infrared photodetectors have the feature that their spectral responses can be shifted continuously by varying the applied bias, the width of the spectral response at any applied bias is not sufficiently narrow for use in multispectral sensing without the aid of spectral filters. To achieve higher spectral resolutions without using physical spectral filters, algorithms have been developed for post-processing the DWELL's bias-dependent photocurrents resulting from probing an object of interest repeatedly over a wide range of applied biases. At the heart of these algorithms is the ability to approximate an arbitrary spectral filter, which we desire the DWELL-algorithm combination to mimic, by forming a weighted superposition of the DWELL's non-orthogonal spectral responses over a range of applied biases. However, these algorithms assume the availability of abundant DWELL data over a large number of applied biases (>30), leading to overall acquisition times that grow in proportion to the number of biases. This paper reports a new multispectral sensing algorithm that substantially compresses the number of necessary bias values subject to a prescribed performance level across multiple sensing applications. The algorithm identifies a minimal set of biases to be used in sensing only the relevant spectral information for remote-sensing applications of interest. Experimental results on target spectrometry and classification demonstrate a reduction in the number of required biases by a factor of 7 (e.g., from 30 to 4). The tradeoff between performance and bias compression is thoroughly investigated.
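
    The superposition-of-responses idea can be sketched as a least-squares fit, with a naive greedy bias-selection pass standing in for the paper's compression algorithm; the greedy strategy is an assumption, not the authors' method.

    ```python
    import numpy as np

    def filter_weights(responses, target, n_biases=None):
        """Find weights w so that responses @ w approximates a desired
        spectral filter: `responses` is (n_wavelengths, n_biases), `target`
        is (n_wavelengths,). With n_biases set, a greedy pass keeps only
        that many bias columns.
        """
        if n_biases is None:
            w, *_ = np.linalg.lstsq(responses, target, rcond=None)
            return w, np.arange(responses.shape[1])
        chosen = []
        for _ in range(n_biases):
            best, best_err = None, np.inf
            for j in range(responses.shape[1]):
                if j in chosen:
                    continue
                cols = chosen + [j]
                w, *_ = np.linalg.lstsq(responses[:, cols], target, rcond=None)
                err = np.linalg.norm(responses[:, cols] @ w - target)
                if err < best_err:
                    best, best_err = j, err
            chosen.append(best)
        w, *_ = np.linalg.lstsq(responses[:, chosen], target, rcond=None)
        return w, np.array(chosen)
    ```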

  14. A reconstruction algorithm for three-dimensional object-space data using spatial-spectral multiplexing

    NASA Astrophysics Data System (ADS)

    Wu, Zhejun; Kudenov, Michael W.

    2017-05-01

    This paper presents a reconstruction algorithm for the Spatial-Spectral Multiplexing (SSM) optical system. The goal of this algorithm is to recover the three-dimensional spatial and spectral information of a scene, given that a one-dimensional spectrometer array is used to sample the pupil of the spatial-spectral modulator. The challenge of the reconstruction is that the non-parametric representation of the three-dimensional spatial and spectral object requires a large number of variables, leading to an underdetermined linear system that is hard to recover uniquely. We propose to reparameterize the spectrum using B-spline functions to reduce the number of unknown variables. Our reconstruction algorithm then solves the improved linear system via a least-squares optimization of the B-spline coefficients with additional spatial smoothness regularization. The ground-truth object and the optical model for the measurement matrix are simulated with both spatial and spectral assumptions according to a realistic field of view. In order to test the robustness of the algorithm, we add Poisson noise to the measurement and test on both two-dimensional and three-dimensional spatial and spectral scenes. Our analysis shows that the root mean square error of the recovered results can be kept within 5.15%.

  15. A Background Noise Reduction Technique Using Adaptive Noise Cancellation for Microphone Arrays

    NASA Technical Reports Server (NTRS)

    Spalt, Taylor B.; Fuller, Christopher R.; Brooks, Thomas F.; Humphreys, William M., Jr.

    2011-01-01

    Background noise in wind tunnel environments poses a challenge to acoustic measurements due to possible low or negative signal-to-noise ratios (SNRs) in the testing environment. This paper overviews the application of time-domain adaptive noise cancellation (ANC) to microphone array signals, with the intended application of background noise reduction in wind tunnels. An experiment was conducted to simulate background noise from a wind tunnel circuit measured by an out-of-flow microphone array in the tunnel test section. A reference microphone was used to acquire a background noise signal which interfered with the desired primary noise source signal at the array. The technique's efficacy was investigated using frequency spectra from the array microphones, array beamforming of the point source region, and subsequent deconvolution using the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) algorithm. Comparisons were made with the conventional SNR-improvement techniques of spectral subtraction and cross-spectral matrix subtraction. The method was seen to recover the primary signal level at SNRs as low as -29 dB and to outperform the conventional methods. A second processing approach, using the center array microphone as the noise reference, was investigated for more general applicability of the ANC technique. It outperformed the conventional methods at the -29 dB SNR but yielded less accurate results when coherence over the array dropped. This approach could improve conventional testing methodology but must be investigated further under more realistic testing conditions.
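
    A minimal time-domain LMS noise canceller of the kind applied here is sketched below; the filter length and step size are illustrative assumptions.

    ```python
    import numpy as np

    def lms_anc(primary, reference, n_taps=64, mu=1e-3):
        """Time-domain adaptive noise cancellation with LMS. The filter
        learns to predict, from the noise-only `reference` channel, the
        noise component of the `primary` (array microphone) channel; the
        prediction error is the noise-reduced output.
        """
        w = np.zeros(n_taps)
        out = np.zeros(len(primary))
        for n in range(n_taps, len(primary)):
            x = reference[n - n_taps:n][::-1]  # most recent samples first
            y = w @ x                          # noise estimate
            e = primary[n] - y                 # error = cleaned sample
            w += 2 * mu * e * x                # LMS weight update
            out[n] = e
        return out
    ```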

  16. A Feasibility Study for Measuring Accurate Chest Compression Depth and Rate on Soft Surfaces Using Two Accelerometers and Spectral Analysis

    PubMed Central

    Gutiérrez, J. J.; Russell, James K.

    2016-01-01

    Background. Cardiopulmonary resuscitation (CPR) feedback devices are being increasingly used. However, current accelerometer-based devices overestimate chest displacement when CPR is performed on soft surfaces, which may lead to insufficient compression depth. Aim. To assess the performance of a new algorithm for measuring compression depth and rate based on two accelerometers in a simulated resuscitation scenario. Materials and Methods. Compressions were provided to a manikin on two mattresses, foam and sprung, with and without a backboard. One accelerometer was placed on the chest and the second at the manikin's back. Chest displacement and mattress displacement were calculated from the spectral analysis of the corresponding accelerations every 2 seconds and subtracted to compute the actual sternal-spinal displacement. Compression rate was obtained from the chest acceleration. Results. Median unsigned error in depth was 2.1 mm (4.4%). Error was 2.4 mm on the foam and 1.7 mm on the sprung mattress (p < 0.001). Error was 3.1/2.0 mm and 1.8/1.6 mm with/without a backboard for foam and sprung, respectively (p < 0.001). Median error in rate was 0.9 cpm (1.0%), with no significant differences between test conditions. Conclusion. The system provided accurate feedback on chest compression depth and rate on soft surfaces. Our solution compensated for mattress displacement, avoiding overestimation of compression depth when CPR is performed on soft surfaces. PMID:27999808

  17. Development of Fire Detection Algorithm at Its Early Stage Using Fire Colour and Shape Information

    NASA Astrophysics Data System (ADS)

    Suleiman Abdullahi, Zainab; Hamisu Dalhatu, Shehu; Hassan Abdullahi, Zakariyya

    2018-04-01

    Fire can be defined as a state in which substances combine chemically with oxygen from the air and give out heat, smoke, and flame. Most conventional fire detection techniques, such as smoke, flame, and heat detectors, suffer from travelling delay and give high false alarm rates. The algorithm begins by loading the selected video clip from the database developed to identify the presence or absence of fire in a frame. In this approach, background subtraction is employed. If the result of the subtraction is less than the set threshold, the difference is ignored and the next frame is taken. However, if the difference is equal to or greater than the set threshold, the frame is subjected to a colour and shape test, performed using a combined RGB colour model and a shape signature. The proposed technique was very effective in detecting fire compared with techniques using only motion or colour cues.

  18. An improved feature extraction algorithm based on KAZE for multi-spectral image

    NASA Astrophysics Data System (ADS)

    Yang, Jianping; Li, Jun

    2018-02-01

    Multi-spectral images contain abundant spectral information and are widely used in fields such as resource exploration, meteorological observation, and modern military applications. Image preprocessing, such as feature extraction and matching, is indispensable when dealing with multi-spectral remote sensing images. Although feature matching algorithms based on a linear scale space, such as SIFT and SURF, are strongly robust, their local accuracy cannot be guaranteed. This paper therefore proposes an improved KAZE algorithm, based on a nonlinear scale space, to raise the number of features and to enhance the matching rate by using the adjusted-cosine vector. The experimental results show that the number of features and the matching rate of the improved KAZE are remarkably higher than those of the original KAZE algorithm.
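
    For reference, the stock KAZE pipeline the paper starts from can be run with OpenCV as below; the improved algorithm's adjusted-cosine matching stage is not reproduced, and a Lowe-style ratio test is assumed for match filtering.

    ```python
    import cv2

    def kaze_match(img1, img2, ratio=0.8):
        """Baseline KAZE detection and matching on 8-bit grayscale images,
        with a ratio test to keep only distinctive matches.
        """
        kaze = cv2.KAZE_create()
        kp1, des1 = kaze.detectAndCompute(img1, None)
        kp2, des2 = kaze.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                if m.distance < ratio * n.distance]
        return kp1, kp2, good
    ```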

  19. Real-time vehicle noise cancellation techniques for gunshot acoustics

    NASA Astrophysics Data System (ADS)

    Ramos, Antonio L. L.; Holm, Sverre; Gudvangen, Sigmund; Otterlei, Ragnvald

    2012-06-01

    Acoustical sniper positioning systems rely on the detection and direction-of-arrival (DOA) estimation of the shockwave and the muzzle blast in order to provide an estimate of a potential sniper's location. Field tests have shown that detecting and estimating the DOA of the muzzle blast is a rather difficult task in the presence of background noise sources, e.g., vehicle noise, especially in long-range detection and over absorbing terrain. In our previous work, presented in the 2011 edition of this conference, we highlighted the importance of improving the SNR of gunshot signals prior to the detection and recognition stages, aiming at lowering the false alarm and missed-detection rates and thereby increasing the reliability of the system. This paper reports on real-time noise cancellation techniques, such as spectral subtraction and adaptive filtering, applied to gunshot signals. Our model assumes the background noise to be short-time stationary and uncorrelated with the impulsive gunshot signals. In practice, relatively long periods without signal occur and can be used to estimate the noise spectrum and its first- and second-order statistics, as required by the spectral subtraction and adaptive filtering techniques, respectively. The results presented in this work are supported by extensive simulations based on real data.
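
    A minimal magnitude spectral-subtraction sketch of the kind discussed above follows, assuming the first half-second of the recording is noise-only; the over-subtraction factor and spectral floor are illustrative.

    ```python
    import numpy as np
    from scipy.signal import stft, istft

    def spectral_subtraction(x, fs, noise_seconds=0.5, alpha=2.0, floor=0.05):
        """Magnitude spectral subtraction. The noise magnitude spectrum is
        estimated from an initial signal-free stretch, over-subtracted by
        `alpha`, and a spectral floor keeps magnitudes from going negative
        (pushing alpha too high causes the familiar "musical noise").
        """
        f, t, X = stft(x, fs=fs, nperseg=512)
        mag, phase = np.abs(X), np.angle(X)
        noise_mag = mag[:, t < noise_seconds].mean(axis=1, keepdims=True)
        cleaned = np.maximum(mag - alpha * noise_mag, floor * noise_mag)
        _, y = istft(cleaned * np.exp(1j * phase), fs=fs, nperseg=512)
        return y
    ```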

  20. Libraries of High and Mid-Resolution Spectra of F, G, K, and M Field Stars

    NASA Astrophysics Data System (ADS)

    Montes, D.

    1998-06-01

    I have compiled here the three libraries of high- and mid-resolution optical spectra of late-type stars I have recently published. The libraries include F, G, K, and M field stars, from dwarfs to giants. The spectral coverage is from 3800 to 10000 Å, with spectral resolution ranging from 0.09 to 3.0 Å. These spectra include many of the spectral lines most widely used as optical and near-infrared indicators of chromospheric activity. The spectra have been obtained with the aim of providing a library of high- and mid-resolution spectra to be used in the study of active-chromosphere stars by applying a spectral subtraction technique. However, the data set presented here can also be utilized in a wide variety of ways. A digital version of all the fully reduced spectra is available via FTP and the World Wide Web (WWW) in FITS format.

  1. Laboratory test of a polarimetry imaging subtraction system for the high-contrast imaging

    NASA Astrophysics Data System (ADS)

    Dou, Jiangpei; Ren, Deqing; Zhu, Yongtian; Zhang, Xi; Li, Rong

    2012-09-01

    We propose a polarimetry imaging subtraction test system that can be used for the direct imaging of the reflected light from exoplanets. Such a system is able to remove the speckle noise scattered by the wave-front error and can thus enhance high-contrast imaging. In this system, we use a Wollaston prism (WP) to divide the incoming light into two simultaneous images with perpendicular linear polarizations. One of the images is used as the reference image, while both phase and geometric distortion corrections are performed on the other. The reference image is then subtracted from the corrected image to remove the speckles. The whole procedure is based on an optimization algorithm whose target function minimizes the residual speckles after subtraction. For demonstration purposes, here we use only a circular pupil in the test, without integrating our apodized-pupil coronagraph. It is shown that the best result is obtained by including both phase and distortion corrections. The system achieved an average extra contrast gain of a factor of 50, which is promising for the direct imaging of exoplanets.

  2. WE-DE-207B-04: Quantitative Contrast-Enhanced Spectral Mammography Based On Photon-Counting Detectors: A Feasibility Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, H; Zhou, B; Beidokhti, D

    Purpose: To investigate the feasibility of accurate quantification of iodine mass thickness in contrast-enhanced spectral mammography. Methods: Experimental phantom studies were performed on a spectral mammography system based on Si strip photon-counting detectors. Dual-energy images were acquired using 40 kVp and a splitting energy of 34 keV with 3 mm Al pre-filtration. The initial calibration was done with glandular and adipose tissue equivalent phantoms of uniform thicknesses and iodine disk phantoms of various concentrations. A secondary calibration was carried out using the iodine signal obtained from the dual-energy decomposed images and the known background phantom thicknesses and densities. The iodine signal quantification method was validated using phantoms composed of a mixture of glandular and adipose materials, for various breast thicknesses and densities. Finally, the traditional dual-energy weighted subtraction method was also studied as a comparison. The measured iodine signal from both methods was compared to the known iodine concentrations of the disk phantoms to characterize the quantification accuracy. Results: There was good agreement between the iodine mass thicknesses measured using the proposed method and the known values. The root-mean-square (RMS) error was estimated to be 0.2 mg/cm2. The traditional weighted subtraction method also predicted a linear correlation between the measured signal and the known iodine mass thickness. However, the correlation slope and offset values were strongly dependent on the total breast thickness and density. Conclusion: The results of the current study suggest that iodine mass thickness can be accurately quantified with contrast-enhanced spectral mammography. The quantitative information can potentially improve the differentiation between benign and malignant lesions. Grant funding from Philips Medical Systems.

  3. Motion induced second order temperature and y-type anisotropies after the subtraction of linear dipole in the CMB maps

    NASA Astrophysics Data System (ADS)

    Sunyaev, Rashid A.; Khatri, Rishi

    2013-03-01

    y-type spectral distortions of the cosmic microwave background allow us to detect clusters and groups of galaxies, filaments of hot gas, and the non-uniformities in the warm hot intergalactic medium. Several CMB experiments (on small areas of sky) and theoretical groups (for the full sky) have recently published y-type distortion maps. We propose to search for two artificial hot spots in such y-type maps resulting from the incomplete subtraction of the effect of the motion-induced dipole on the cosmic microwave background sky. This dipole introduces, at second order, additional temperature and y-distortion anisotropy on the sky of amplitude of a few μK, which could potentially be measured by the Planck HFI and Pixie experiments and can be used as a source of cross-channel calibration by CMB experiments. This y-type distortion is present in every pixel and is not the result of averaging the whole sky. This distortion, calculated exactly from the known linear dipole, can be subtracted from the final y-type maps, if desired.

  4. Motion induced second order temperature and y-type anisotropies after the subtraction of linear dipole in the CMB maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunyaev, Rashid A.; Khatri, Rishi, E-mail: sunyaev@mpa-garching.mpg.de, E-mail: khatri@mpa-garching.mpg.de

    2013-03-01

    y-type spectral distortions of the cosmic microwave background allow us to detect clusters and groups of galaxies, filaments of hot gas, and the non-uniformities in the warm hot intergalactic medium. Several CMB experiments (on small areas of sky) and theoretical groups (for the full sky) have recently published y-type distortion maps. We propose to search for two artificial hot spots in such y-type maps resulting from the incomplete subtraction of the effect of the motion-induced dipole on the cosmic microwave background sky. This dipole introduces, at second order, additional temperature and y-distortion anisotropy on the sky of amplitude of a few μK, which could potentially be measured by the Planck HFI and Pixie experiments and can be used as a source of cross-channel calibration by CMB experiments. This y-type distortion is present in every pixel and is not the result of averaging the whole sky. This distortion, calculated exactly from the known linear dipole, can be subtracted from the final y-type maps, if desired.

  5. Joint demosaicking and zooming using moderate spectral correlation and consistent edge map

    NASA Astrophysics Data System (ADS)

    Zhou, Dengwen; Dong, Weiming; Chen, Wengang

    2014-07-01

    The recently published joint demosaicking and zooming algorithms for single-sensor digital cameras all overfit the popular Kodak test images, which have been found to have higher spectral correlation than typical color images. Their performance can therefore degrade significantly on other datasets, such as the McMaster test images, which have weak spectral correlation. A new joint demosaicking and zooming algorithm is proposed for the Bayer color filter array (CFA) pattern, in which the edge direction information (edge map) extracted from the raw CFA data is used consistently in both demosaicking and zooming. It also makes moderate use of the spectral correlation between color planes. The experimental results confirm that the proposed algorithm performs excellently on both the Kodak and McMaster datasets in terms of both subjective and objective measures. Our algorithm also has high computational efficiency, providing a better tradeoff among adaptability, performance, and computational cost than the existing algorithms.

  6. Efficient spectral-Galerkin algorithms for direct solution of second-order differential equations using Jacobi polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E.; Bhrawy, A.

    2006-06-01

    It is well known that spectral methods (tau, Galerkin, collocation) have a condition number of O(N⁴), where N is the number of retained modes of the polynomial approximations. This paper presents some efficient spectral algorithms, which have a condition number of O(1), based on the Jacobi-Galerkin methods for second-order elliptic equations in one and two space variables. The key to the efficiency of these algorithms is to construct appropriate base functions, which lead to systems with specially structured matrices that can be efficiently inverted. The complexities of the algorithms are a small multiple of N^(d+1) operations for a d-dimensional domain with N^d unknowns, while the convergence rates of the algorithms are exponential for smooth solutions.

  7. Filtered gradient reconstruction algorithm for compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Mejia, Yuri; Arguello, Henry

    2017-04-01

    Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure unusually follows a dense matrix distribution, such as the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm, which introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm that yields improved image quality, is proposed. Motivated by the structure of the CSI matrix, Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φᵀy, where y is the compressive measurement vector. We show that the filtered-based algorithm converges to better quality performance results than the unfiltered version. Simulation results highlight the relative performance gain over the existing iterative algorithms.

  8. Multiway spectral community detection in networks

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao; Newman, M. E. J.

    2015-11-01

    One of the most widely used methods for community detection in networks is the maximization of the quality function known as modularity. Of the many maximization techniques that have been used in this context, some of the most conceptually attractive are the spectral methods, which are based on the eigenvectors of the modularity matrix. Spectral algorithms have, however, been limited, by and large, to the division of networks into only two or three communities, with divisions into more than three being achieved by repeated two-way division. Here we present a spectral algorithm that can directly divide a network into any number of communities. The algorithm makes use of a mapping from modularity maximization to a vector partitioning problem, combined with a fast heuristic for vector partitioning. We compare the performance of this spectral algorithm with previous approaches and find it to give superior results, particularly in cases where community sizes are unbalanced. We also give demonstrative applications of the algorithm to two real-world networks and find that it produces results in good agreement with expectations for the networks studied.
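
    For context, the classical two-way building block that the multiway algorithm generalizes can be sketched in a few lines: split the network by the sign of the leading eigenvector of the modularity matrix. This is the standard textbook construction, not the vector-partitioning method of the paper.

      import numpy as np

      def spectral_bisection(A):
          """Two-way community division from the leading eigenvector of the
          modularity matrix. A is a symmetric adjacency matrix (numpy array).
          """
          k = A.sum(axis=1)                     # degrees
          m2 = k.sum()                          # 2m, twice the edge count
          B = A - np.outer(k, k) / m2           # modularity matrix
          vals, vecs = np.linalg.eigh(B)
          leading = vecs[:, np.argmax(vals)]
          groups = (leading >= 0).astype(int)   # split by eigenvector sign
          s = 2 * groups - 1
          Q = (s @ B @ s) / (2 * m2)            # modularity of the division
          return groups, Q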

  9. Analysis of variation matrix array by bilinear least squares-residual bilinearization (BLLS-RBL) for resolving and quantifying of foodstuff dyes in a candy sample.

    PubMed

    Asadpour-Zeynali, Karim; Maryam Sajjadi, S; Taherzadeh, Fatemeh; Rahmanian, Reza

    2014-04-05

    The bilinear least squares (BLLS) method is one of the most suitable algorithms for second-order calibration. The original BLLS method is not applicable to second-order pH-spectral data when an analyte has more than one spectroscopically active species. Bilinear least squares-residual bilinearization (BLLS-RBL) was developed to achieve the second-order advantage for the analysis of complex mixtures. Although the modified method is useful, the pure profiles cannot be obtained; only linear combinations of them can. Moreover, for prediction of the analyte in an unknown sample, the original RBL algorithm may diverge instead of converging to the desired analyte concentrations, so a Gauss-Newton RBL algorithm must be used, which is not as simple as the original protocol. Also, the analyte concentration can be predicted on the basis of each of the equilibrating species of the component of interest, and these predictions are not exactly the same. The aim of the present work is to tackle the non-uniqueness problem in the second-order calibration of monoprotic acid mixtures and the divergence of RBL. Each pH-absorbance matrix was pretreated by subtracting the first spectrum from the other spectra in the data set to produce a full-rank array called the variation matrix. The variation matrices were then analyzed uniquely by the original BLLS-RBL, which is more parsimonious than its modified counterpart. The proposed method was applied to simulated data as well as to the analysis of real data: Sunset yellow and Carmosine, as monoprotic acids, were determined in a candy sample in the presence of unknown interference.

  10. A comparison of spectral decorrelation techniques and performance evaluation metrics for a wavelet-based, multispectral data compression algorithm

    NASA Technical Reports Server (NTRS)

    Matic, Roy M.; Mosley, Judith I.

    1994-01-01

    Future space-based, remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based, image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques, including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.
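
    One of the spectral decorrelation options compared above, principal component analysis along the spectral axis, can be sketched as follows; the cube layout and the dense eigendecomposition are simplifying assumptions.

      import numpy as np

      def spectral_pca_decorrelate(cube):
          """Decorrelate a multispectral cube (rows x cols x bands) along the
          spectral axis with PCA so that a 2-D spatial coder can compress
          each component plane independently. Sketch, not the paper's code.
          """
          r, c, b = cube.shape
          X = cube.reshape(-1, b).astype(np.float64)
          mean = X.mean(axis=0)
          Xc = X - mean
          cov = np.cov(Xc, rowvar=False)
          vals, vecs = np.linalg.eigh(cov)          # ascending eigenvalues
          order = np.argsort(vals)[::-1]            # strongest component first
          pcs = Xc @ vecs[:, order]                 # decorrelated components
          return pcs.reshape(r, c, b), vecs[:, order], mean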

  11. Tensor Spectral Clustering for Partitioning Higher-order Network Structures.

    PubMed

    Benson, Austin R; Gleich, David F; Leskovec, Jure

    2015-01-01

    Spectral graph theory-based methods represent an important class of tools for studying the structure of networks. Spectral methods are based on a first-order Markov chain derived from a random walk on the graph and thus they cannot take advantage of important higher-order network substructures such as triangles, cycles, and feed-forward loops. Here we propose a Tensor Spectral Clustering (TSC) algorithm that allows for modeling higher-order network structures in a graph partitioning framework. Our TSC algorithm allows the user to specify which higher-order network structures (cycles, feed-forward loops, etc.) should be preserved by the network clustering. Higher-order network structures of interest are represented using a tensor, which we then partition by developing a multilinear spectral method. Our framework can be applied to discovering layered flows in networks as well as graph anomaly detection, which we illustrate on synthetic networks. In directed networks, a higher-order structure of particular interest is the directed 3-cycle, which captures feedback loops in networks. We demonstrate that our TSC algorithm produces large partitions that cut fewer directed 3-cycles than standard spectral clustering algorithms.

  12. Tensor Spectral Clustering for Partitioning Higher-order Network Structures

    PubMed Central

    Benson, Austin R.; Gleich, David F.; Leskovec, Jure

    2016-01-01

    Spectral graph theory-based methods represent an important class of tools for studying the structure of networks. Spectral methods are based on a first-order Markov chain derived from a random walk on the graph and thus they cannot take advantage of important higher-order network substructures such as triangles, cycles, and feed-forward loops. Here we propose a Tensor Spectral Clustering (TSC) algorithm that allows for modeling higher-order network structures in a graph partitioning framework. Our TSC algorithm allows the user to specify which higher-order network structures (cycles, feed-forward loops, etc.) should be preserved by the network clustering. Higher-order network structures of interest are represented using a tensor, which we then partition by developing a multilinear spectral method. Our framework can be applied to discovering layered flows in networks as well as graph anomaly detection, which we illustrate on synthetic networks. In directed networks, a higher-order structure of particular interest is the directed 3-cycle, which captures feedback loops in networks. We demonstrate that our TSC algorithm produces large partitions that cut fewer directed 3-cycles than standard spectral clustering algorithms. PMID:27812399

  13. High Energy Resolution Hyperspectral X-Ray Imaging for Low-Dose Contrast-Enhanced Digital Mammography.

    PubMed

    Pani, Silvia; Saifuddin, Sarene C; Ferreira, Filipa I M; Henthorn, Nicholas; Seller, Paul; Sellin, Paul J; Stratmann, Philipp; Veale, Matthew C; Wilson, Matthew D; Cernik, Robert J

    2017-09-01

    Contrast-enhanced digital mammography (CEDM) is an alternative to conventional X-ray mammography for imaging dense breasts. However, conventional approaches to CEDM require a double exposure of the patient, implying higher dose, and risk of incorrect image registration due to motion artifacts. A novel approach is presented, based on hyperspectral imaging, where a detector combining positional and high-resolution spectral information (in this case based on Cadmium Telluride) is used. This allows simultaneous acquisition of the two images required for CEDM. The approach was tested on a custom breast-equivalent phantom containing iodinated contrast agent (Niopam 150®). Two algorithms were used to obtain images of the contrast agent distribution: K-edge subtraction (KES), providing images of the distribution of the contrast agent with the background structures removed, and a dual-energy (DE) algorithm, providing an iodine-equivalent image and a water-equivalent image. The high energy resolution of the detector allowed the selection of two close-by energies, maximising the signal in KES images, and enhancing the visibility of details with the low surface concentration of contrast agent. DE performed consistently better than KES in terms of contrast-to-noise ratio of the details; moreover, it allowed a correct reconstruction of the surface concentration of the contrast agent in the iodine image. Comparison with CEDM with a conventional detector proved the superior performance of hyperspectral CEDM in terms of the image quality/dose tradeoff.
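
    The KES step can be illustrated with a short sketch, assuming flat-field-corrected intensity images acquired just below and just above the K-edge: the difference of log-attenuation images cancels backgrounds whose attenuation varies smoothly across the edge. Calibration to mg/cm² of iodine, omitted here, needs the tabulated attenuation jump.

      import numpy as np

      def kedge_subtraction(I_below, I_above, I0_below, I0_above):
          """K-edge subtraction (KES): difference of log-attenuation images
          below and above the iodine K-edge. I0_* are the unattenuated
          (flat-field) intensities. The result is roughly proportional to
          the iodine areal density.
          """
          mu_t_below = -np.log(I_below / I0_below)
          mu_t_above = -np.log(I_above / I0_above)
          return mu_t_above - mu_t_below   # iodine-enhanced image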

  14. The Sixth Data Release of the Sloan Digital Sky Survey

    NASA Astrophysics Data System (ADS)

    Adelman-McCarthy, Jennifer K.; Agüeros, Marcel A.; Allam, Sahar S.; Allende Prieto, Carlos; Anderson, Kurt S. J.; Anderson, Scott F.; Annis, James; Bahcall, Neta A.; Bailer-Jones, C. A. L.; Baldry, Ivan K.; Barentine, J. C.; Bassett, Bruce A.; Becker, Andrew C.; Beers, Timothy C.; Bell, Eric F.; Berlind, Andreas A.; Bernardi, Mariangela; Blanton, Michael R.; Bochanski, John J.; Boroski, William N.; Brinchmann, Jarle; Brinkmann, J.; Brunner, Robert J.; Budavári, Tamás; Carliles, Samuel; Carr, Michael A.; Castander, Francisco J.; Cinabro, David; Cool, R. J.; Covey, Kevin R.; Csabai, István; Cunha, Carlos E.; Davenport, James R. A.; Dilday, Ben; Doi, Mamoru; Eisenstein, Daniel J.; Evans, Michael L.; Fan, Xiaohui; Finkbeiner, Douglas P.; Friedman, Scott D.; Frieman, Joshua A.; Fukugita, Masataka; Gänsicke, Boris T.; Gates, Evalyn; Gillespie, Bruce; Glazebrook, Karl; Gray, Jim; Grebel, Eva K.; Gunn, James E.; Gurbani, Vijay K.; Hall, Patrick B.; Harding, Paul; Harvanek, Michael; Hawley, Suzanne L.; Hayes, Jeffrey; Heckman, Timothy M.; Hendry, John S.; Hindsley, Robert B.; Hirata, Christopher M.; Hogan, Craig J.; Hogg, David W.; Hyde, Joseph B.; Ichikawa, Shin-ichi; Ivezić, Željko; Jester, Sebastian; Johnson, Jennifer A.; Jorgensen, Anders M.; Jurić, Mario; Kent, Stephen M.; Kessler, R.; Kleinman, S. J.; Knapp, G. R.; Kron, Richard G.; Krzesinski, Jurek; Kuropatkin, Nikolay; Lamb, Donald Q.; Lampeitl, Hubert; Lebedeva, Svetlana; Lee, Young Sun; French Leger, R.; Lépine, Sébastien; Lima, Marcos; Lin, Huan; Long, Daniel C.; Loomis, Craig P.; Loveday, Jon; Lupton, Robert H.; Malanushenko, Olena; Malanushenko, Viktor; Mandelbaum, Rachel; Margon, Bruce; Marriner, John P.; Martínez-Delgado, David; Matsubara, Takahiko; McGehee, Peregrine M.; McKay, Timothy A.; Meiksin, Avery; Morrison, Heather L.; Munn, Jeffrey A.; Nakajima, Reiko; Neilsen, Eric H., Jr.; Newberg, Heidi Jo; Nichol, Robert C.; Nicinski, Tom; Nieto-Santisteban, Maria; Nitta, Atsuko; Okamura, Sadanori; Owen, Russell; Oyaizu, Hiroaki; Padmanabhan, Nikhil; Pan, Kaike; Park, Changbom; Peoples, John, Jr.; Pier, Jeffrey R.; Pope, Adrian C.; Purger, Norbert; Raddick, M. Jordan; Re Fiorentin, Paola; Richards, Gordon T.; Richmond, Michael W.; Riess, Adam G.; Rix, Hans-Walter; Rockosi, Constance M.; Sako, Masao; Schlegel, David J.; Schneider, Donald P.; Schreiber, Matthias R.; Schwope, Axel D.; Seljak, Uroš; Sesar, Branimir; Sheldon, Erin; Shimasaku, Kazu; Sivarani, Thirupathi; Allyn Smith, J.; Snedden, Stephanie A.; Steinmetz, Matthias; Strauss, Michael A.; SubbaRao, Mark; Suto, Yasushi; Szalay, Alexander S.; Szapudi, István; Szkody, Paula; Tegmark, Max; Thakar, Aniruddha R.; Tremonti, Christy A.; Tucker, Douglas L.; Uomoto, Alan; Vanden Berk, Daniel E.; Vandenberg, Jan; Vidrih, S.; Vogeley, Michael S.; Voges, Wolfgang; Vogt, Nicole P.; Wadadekar, Yogesh; Weinberg, David H.; West, Andrew A.; White, Simon D. M.; Wilhite, Brian C.; Yanny, Brian; Yocum, D. R.; York, Donald G.; Zehavi, Idit; Zucker, Daniel B.

    2008-04-01

    This paper describes the Sixth Data Release of the Sloan Digital Sky Survey. With this data release, the imaging of the northern Galactic cap is now complete. The survey contains images and parameters of roughly 287 million objects over 9583 deg², including scans over a large range of Galactic latitudes and longitudes. The survey also includes 1.27 million spectra of stars, galaxies, quasars, and blank sky (for sky subtraction) selected over 7425 deg². This release includes much more stellar spectroscopy than was available in previous data releases and also includes detailed estimates of stellar temperatures, gravities, and metallicities. The results of improved photometric calibration are now available, with uncertainties of roughly 1% in g, r, i, and z, and 2% in u, substantially better than the uncertainties in previous data releases. The spectra in this data release have improved wavelength and flux calibration, especially in the extreme blue and extreme red, leading to qualitatively better determination of stellar types and radial velocities. The spectrophotometric fluxes are now tied to point-spread function magnitudes of stars rather than fiber magnitudes. This gives more robust results in the presence of seeing variations, but also implies a change in the spectrophotometric scale, which is now brighter by roughly 0.35 mag. Systematic errors in the velocity dispersions of galaxies have been fixed, and the results of two independent codes for determining spectral classifications and redshifts are made available. Additional spectral outputs are made available, including calibrated spectra from individual 15 minute exposures and the sky spectrum subtracted from each exposure. We also quantify a recently recognized underestimation of the brightnesses of galaxies of large angular extent due to poor sky subtraction; the bias can exceed 0.2 mag for galaxies brighter than r = 14 mag.

  15. Quantitation of Fine Displacement in Echography

    NASA Astrophysics Data System (ADS)

    Masuda, Kohji; Ishihara, Ken; Yoshii, Ken; Furukawa, Toshiyuki; Kumagai, Sadatoshi; Maeda, Hajime; Kodama, Shinzo

    1993-05-01

    High-speed digital subtraction echography was developed to visualize the fine displacement of human internal organs. This method indicates differences in position through time-series images of high-frame-rate echography, so that fine displacements of less than the ultrasonic wavelength can be observed. The method, however, lacks the ability to measure displacement length quantitatively: the subtraction between two successive images was affected by the displacement direction even when the displacement length was the same. To solve this problem, convolution of the echogram with a Gaussian distribution was used. To express displacement length quantitatively as brightness, normalization using the brightness gradient was applied. The quantitation algorithm was applied to successive B-mode images. Compared to the simply subtracted images, the quantitated images express the motion of organs more precisely. Expansion of the carotid artery and fine motion of the ventricular walls can be visualized more easily, and displacement length can be quantitated to within a wavelength. Under more static conditions, the system quantitates displacement lengths much less than a wavelength.

  16. Suppressing multiples using an adaptive multichannel filter based on L1-norm

    NASA Astrophysics Data System (ADS)

    Shi, Ying; Jing, Hongliang; Zhang, Wenwu; Ning, Dezhi

    2017-08-01

    Adaptive subtraction is an important step in removing surface-related multiples in wave equation-based methods. In this paper, we propose an adaptive multichannel subtraction method based on the L1-norm, which provides enhanced compensation for the mismatch between the input seismogram and the predicted multiples in terms of amplitude, phase, frequency band, and travel time. Unlike the conventional L2-norm, the proposed method does not rely on the assumption that the primaries and the multiples are orthogonal, and it takes advantage of the fact that the L1-norm is more robust when dealing with outliers. In addition, we propose a frequency-band extension via modulation to reconstruct the high frequencies and compensate for the frequency misalignment. We present a parallel computing scheme that accelerates the subtraction algorithm on graphics processing units (GPUs), significantly reducing the computational cost. Synthetic and field seismic data tests show that the proposed method effectively suppresses the multiples.
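
    A single-channel sketch of the L1 matching-filter idea, solved by iteratively reweighted least squares (IRLS); the paper's method is multichannel and GPU-accelerated, so this shows only the core estimation step under simplifying assumptions (the wrap-around of np.roll is acceptable for a sketch).

      import numpy as np

      def l1_adaptive_subtraction(d, m, nf=21, n_irls=10, eps=1e-6):
          """Estimate a matching filter f minimizing ||d - M f||_1 by IRLS,
          then subtract the shaped multiples. d: input trace; m: predicted
          multiples; nf: filter length.
          """
          # Convolution matrix of the predicted multiples.
          M = np.column_stack([np.roll(m, s) for s in range(-(nf // 2), nf // 2 + 1)])
          f = np.linalg.lstsq(M, d, rcond=None)[0]       # L2 starting point
          for _ in range(n_irls):
              r = d - M @ f
              w = 1.0 / np.sqrt(np.abs(r) + eps)         # L1 reweighting
              Mw = M * w[:, None]
              f = np.linalg.lstsq(Mw, d * w, rcond=None)[0]
          return d - M @ f                               # estimated primaries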

  17. Research on the Improved Image Dodging Algorithm Based on Mask Technique

    NASA Astrophysics Data System (ADS)

    Yao, F.; Hu, H.; Wan, Y.

    2012-08-01

    The remote sensing image dodging algorithm based on the Mask technique is a good method for removing uneven lightness within a single image. However, the algorithm has some open problems, such as how to set an appropriate filter size, for which there is no good solution. To address these problems, an improved algorithm is proposed. In the improved algorithm, the original image is divided into blocks, and the image blocks with different levels of detail are smoothed using low-pass filters with different cut-off frequencies to obtain the background image; after subtraction, regions with different lightness are processed using different linear transformation models. The improved algorithm produces a better dodging result than the original one and makes the contrast of the whole image more consistent.
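
    The underlying Mask dodging operation can be sketched as follows, with a single Gaussian low-pass standing in for the filter bank and one global linear stretch in place of the per-region models of the improved algorithm; the filter size is exactly the parameter the paper's improvement addresses.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def mask_dodging(img, sigma=64):
          """Mask-style dodging: estimate the uneven background with a heavy
          low-pass filter, subtract it, then stretch back to the original
          global mean and contrast. Illustrative single-filter sketch.
          """
          img = img.astype(np.float64)
          background = gaussian_filter(img, sigma)   # low-frequency lightness
          residual = img - background                # detail after subtraction
          out = residual - residual.mean()
          out *= img.std() / (residual.std() + 1e-12)
          return out + img.mean()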

  18. LASR-Guided Variability Subtraction: The Linear Algorithm for Significance Reduction of Stellar Seismic Activity

    NASA Astrophysics Data System (ADS)

    Horvath, Sarah; Myers, Sam; Ahlers, Johnathon; Barnes, Jason W.

    2017-10-01

    Stellar seismic activity produces variations in brightness that introduce oscillations into transit light curves, which can create challenges for traditional fitting models. These oscillations disrupt baseline stellar flux values and potentially mask transits. We develop a model that removes these oscillations from transit light curves by minimizing the significance of each oscillation in frequency space. By removing stellar variability, we prepare each light curve for traditional fitting techniques. We apply our model to the δ Scuti star KOI-976 and demonstrate that our variability subtraction routine successfully allows for measuring bulk system characteristics using traditional light curve fitting. These results open a new window for characterizing bulk system parameters of planets orbiting seismically active stars.
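
    A generic prewhitening sketch in the spirit of the routine above: find the strongest periodogram peak, fit and subtract a sinusoid at that frequency, and repeat. This is not the published LASR code, and the frequency-grid limits are assumptions.

      import numpy as np

      def subtract_dominant_oscillations(t, flux, n_modes=5, fmax=50.0, nf=5000):
          """Iteratively remove the most significant sinusoid from a light
          curve: locate the strongest peak on a trial frequency grid, fit
          sine and cosine terms by least squares, subtract, and repeat.
          """
          resid = flux - flux.mean()
          freqs = np.linspace(1.0 / (t[-1] - t[0]), fmax, nf)
          for _ in range(n_modes):
              # Power at each trial frequency via projection onto exp(-2πift).
              power = [np.abs(np.sum(resid * np.exp(-2j * np.pi * f * t)))
                       for f in freqs]
              f0 = freqs[int(np.argmax(power))]
              A = np.column_stack([np.sin(2 * np.pi * f0 * t),
                                   np.cos(2 * np.pi * f0 * t)])
              coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
              resid = resid - A @ coef
          return resid + flux.mean()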

  19. Addition and subtraction by students with Down syndrome

    NASA Astrophysics Data System (ADS)

    Noda Herrera, Aurelia; Bruno, Alicia; González, Carina; Moreno, Lorenzo; Sanabria, Hilda

    2011-01-01

    We present a research report on addition and subtraction conducted with Down syndrome students between the ages of 12 and 31. We interviewed a group of students with Down syndrome who executed algorithms and solved problems using specific materials and paper and pencil. The results show that students with Down syndrome progress through the same procedural levels as those without disabilities though they have difficulties in reaching the most abstract level (numerical facts). The use of fingers or concrete representations (balls) appears as a fundamental process among these students. As for errors, these vary widely depending on the students, and can be attributed mostly to an incomplete knowledge of the decimal number system.

  20. Combining spatial and spectral information to improve crop/weed discrimination algorithms

    NASA Astrophysics Data System (ADS)

    Yan, L.; Jones, G.; Villette, S.; Paoli, J. N.; Gée, C.

    2012-01-01

    Reduction of herbicide spraying is an important key to improving weed management environmentally and economically. To achieve this, remote sensors such as imaging systems are commonly used to detect weed plants. We developed spatial algorithms that detect the crop rows to discriminate crop from weeds. These algorithms have been thoroughly tested and provide robust and accurate results without a learning process, but their detection is limited to inter-row areas. Crop/weed discrimination using spectral information is able to detect intra-row weeds but generally needs a prior learning process. We propose a method based on spatial and spectral information to enhance the discrimination and overcome the limitations of both approaches: the classification from the spatial algorithm is used to build the training set for the spectral discrimination method. With this approach we are able to extend weed detection to the entire field (inter- and intra-row). To test the efficiency of these algorithms, a relevant database of virtual images generated by the SimAField model was combined with the LOPEX93 spectral database. The developed method is evaluated and compared with the initial method, and shows a substantial improvement in weed detection from 86% to more than 95%.

  1. Spectral band selection for classification of soil organic matter content

    NASA Technical Reports Server (NTRS)

    Henderson, Tracey L.; Szilagyi, Andrea; Baumgardner, Marion F.; Chen, Chih-Chien Thomas; Landgrebe, David A.

    1989-01-01

    This paper describes the spectral-band-selection (SBS) algorithm of Chen and Landgrebe (1987, 1988, and 1989) and uses the algorithm to classify the organic matter content in the earth's surface soil. The effectiveness of the algorithm was evaluated by comparing the results of classification of soil organic matter using SBS bands with those obtained using Landsat MSS bands and TM bands, showing that the algorithm was successful in finding important spectral bands for classification of organic matter content. Using the calculated bands, the probabilities of correct classification for climate-stratified data were found to range from 0.910 to 0.980.

  2. Microscopic image analysis for reticulocyte based on watershed algorithm

    NASA Astrophysics Data System (ADS)

    Wang, J. Q.; Liu, G. F.; Liu, J. G.; Wang, G.

    2007-12-01

    We present a watershed-based algorithm for the analysis of light microscopic images of reticulocytes (RETs), to be used in an automated RET recognition system for peripheral blood. The original images, obtained by micrography, are segmented by a modified watershed algorithm and recognized in terms of gray entropy and the area of connected regions. In the watershed step, judgment conditions are controlled according to the character of the image, and the segmentation is performed by morphological subtraction. The algorithm was simulated with MATLAB software. Automated and manual scoring gave similar results, with good correlation (r = 0.956) between the two methods over 50 RET images. The results indicate that the algorithm is comparable to conventional manual scoring for peripheral blood RETs and is superior in objectivity. The algorithm avoids time-consuming calculations such as ultra-erosion and region growth, which consequently speeds up the computation.
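
    A generic marker-controlled watershed pipeline for touching cells can be sketched as follows; the Otsu threshold and distance-transform markers are common defaults, not the paper's modified watershed with gray-entropy recognition.

      import numpy as np
      from scipy import ndimage as ndi
      from skimage.filters import threshold_otsu
      from skimage.segmentation import watershed
      from skimage.feature import peak_local_max

      def segment_cells(gray):
          """Marker-controlled watershed for separating touching blood cells
          in a grayscale micrograph (cells assumed darker than background).
          """
          mask = gray < threshold_otsu(gray)             # foreground mask
          distance = ndi.distance_transform_edt(mask)
          peaks = peak_local_max(distance, min_distance=7, labels=mask)
          markers = np.zeros(gray.shape, dtype=int)
          markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
          labels = watershed(-distance, markers, mask=mask)
          return labels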

  3. Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Gilat Schmidt, Taly; Barber, Rina F.; Sidky, Emil Y.

    2017-03-01

    Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained `One-Step' Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data is excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.

  4. Spectral identification of minerals using imaging spectrometry data: Evaluating the effects of signal to noise and spectral resolution using the tricorder algorithm

    NASA Technical Reports Server (NTRS)

    Swayze, Gregg A.; Clark, Roger N.

    1995-01-01

    The rapid development of sophisticated imaging spectrometers and resulting flood of imaging spectrometry data has prompted a rapid parallel development of spectral-information extraction technology. Even though these extraction techniques have evolved along different lines (band-shape fitting, endmember unmixing, near-infrared analysis, neural-network fitting, and expert systems to name a few), all are limited by the spectrometer's signal to noise (S/N) and spectral resolution in producing useful information. This study grew from a need to quantitatively determine what effects these parameters have on our ability to differentiate between mineral absorption features using a band-shape fitting algorithm. We chose to evaluate the AVIRIS, HYDICE, MIVIS, GERIS, VIMS, NIMS, and ASTER instruments because they collect data over wide S/N and spectral-resolution ranges. The study evaluates the performance of the Tricorder algorithm in differentiating between mineral spectra in the 0.4-2.5 micrometer spectral region. The strength of the Tricorder algorithm is in its ability to produce an easily understood comparison of band shape that can concentrate on small relevant portions of the spectra, giving it an advantage over most unmixing schemes, and in that it need not spend large amounts of time reoptimizing each time a new mineral component is added to its reference library, as is the case with neural-network schemes. We believe the flexibility of the Tricorder algorithm is unparalleled among spectral-extraction techniques and that the results from this study, although dealing with minerals, will have direct applications to spectral identification in other disciplines.

  5. Optimization of dual-energy subtraction chest radiography by use of a direct-conversion flat-panel detector system.

    PubMed

    Fukao, Mari; Kawamoto, Kiyosumi; Matsuzawa, Hiroaki; Honda, Osamu; Iwaki, Takeshi; Doi, Tsukasa

    2015-01-01

    We aimed to optimize the exposure conditions in the acquisition of soft-tissue images using dual-energy subtraction chest radiography with a direct-conversion flat-panel detector system. Two separate chest images were acquired at high- and low-energy exposures with standard or thick chest phantoms. The high-energy exposure was fixed at 120 kVp with the use of an auto-exposure control technique. For the low-energy exposure, the tube voltages ranged from 40 to 80 kVp and the entrance surface doses from 20 to 100% of the dose required for the high-energy exposure. Further, a repetitive processing algorithm was used to reduce the image noise generated by the subtraction process. Seven radiology technicians ranked the soft-tissue images, and the results were analyzed using the normalized-rank method. Images acquired at 60 kVp were of acceptable quality regardless of the entrance surface dose and phantom size. Using the repetitive processing algorithm, the minimum acceptable doses were reduced from 75 to 40% for the standard phantom and to 50% for the thick phantom. We determined that the optimum low-energy exposure was 60 kVp at 50% of the dose required for the high-energy exposure. This allowed the simultaneous acquisition of standard radiographs and soft-tissue images at 1.5 times the dose required for a standard radiograph, which is significantly lower than the values reported previously.
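
    The subtraction step itself is a weighted log subtraction of the two exposures; a minimal sketch, assuming the weight w has been calibrated to cancel bone (w = 0.5 here is only illustrative).

      import numpy as np

      def soft_tissue_image(high, low, w=0.5):
          """Dual-energy soft-tissue image: weighted log subtraction of the
          high- and low-kVp exposures. Bone cancels when w matches the ratio
          of bone attenuation at the two effective energies; in practice w
          is calibrated per system.
          """
          eps = 1e-6
          return np.log(np.maximum(high, eps)) - w * np.log(np.maximum(low, eps))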

  6. Super-resolution algorithm based on sparse representation and wavelet preprocessing for remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Ren, Ruizhi; Gu, Lingjia; Fu, Haoyang; Sun, Chenglin

    2017-04-01

    An effective super-resolution (SR) algorithm is proposed for actual spectral remote sensing images based on sparse representation and wavelet preprocessing. The proposed SR algorithm mainly consists of dictionary training and image reconstruction. Wavelet preprocessing is used to establish four subbands, i.e., low frequency, horizontal, vertical, and diagonal high frequency, for an input image. As compared to the traditional approaches involving the direct training of image patches, the proposed approach focuses on the training of features derived from these four subbands. The proposed algorithm is verified using different spectral remote sensing images, e.g., moderate-resolution imaging spectroradiometer (MODIS) images with different bands, and the latest Chinese Jilin-1 satellite images with high spatial resolution. According to the visual experimental results obtained from the MODIS remote sensing data, the SR images using the proposed SR algorithm are superior to those using a conventional bicubic interpolation algorithm or traditional SR algorithms without preprocessing. Fusion algorithms, e.g., standard intensity-hue-saturation, principal component analysis, wavelet transform, and the proposed SR algorithms are utilized to merge the multispectral and panchromatic images acquired by the Jilin-1 satellite. The effectiveness of the proposed SR algorithm is assessed by parameters such as peak signal-to-noise ratio, structural similarity index, correlation coefficient, root-mean-square error, relative dimensionless global error in synthesis, relative average spectral error, spectral angle mapper, and the quality index Q4, and its performance is better than that of the standard image fusion algorithms.

  7. Home Camera-Based Fall Detection System for the Elderly.

    PubMed

    de Miguel, Koldo; Brunete, Alberto; Hernando, Miguel; Gambao, Ernesto

    2017-12-09

    Falls are the leading cause of injury and death in elderly individuals. Unfortunately, fall detectors are typically based on wearable devices, and the elderly often forget to wear them. In addition, fall detectors based on artificial vision are not yet available on the market. In this paper, we present a new low-cost fall detector for smart homes based on artificial vision algorithms. Our detector combines several algorithms (background subtraction, Kalman filtering and optical flow) as input to a machine learning algorithm with high detection accuracy. Tests conducted on over 50 different fall videos have shown a detection ratio of greater than 96%.
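
    The front end described above can be sketched with standard OpenCV building blocks: MOG2 background subtraction plus Farneback dense optical flow, yielding per-frame features (mean vertical velocity, blob aspect ratio) for a downstream classifier. The feature choice and thresholds are illustrative assumptions, and the filename is a placeholder.

      import cv2
      import numpy as np

      cap = cv2.VideoCapture("video.mp4")   # placeholder path
      bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
      prev_gray = None
      features = []
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          fg = bg.apply(frame)                      # foreground mask
          ys, xs = np.nonzero(fg > 127)             # exclude shadows (value 127)
          if prev_gray is not None and len(ys) > 50:
              flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                  0.5, 3, 15, 3, 5, 1.2, 0)
              vy = flow[ys, xs, 1].mean()           # mean vertical velocity
              h = ys.max() - ys.min()
              w = xs.max() - xs.min()
              features.append((vy, w / max(h, 1)))  # fast downward motion, flat blob
          prev_gray = gray
      cap.release()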

  8. Home Camera-Based Fall Detection System for the Elderly

    PubMed Central

    de Miguel, Koldo

    2017-01-01

    Falls are the leading cause of injury and death in elderly individuals. Unfortunately, fall detectors are typically based on wearable devices, and the elderly often forget to wear them. In addition, fall detectors based on artificial vision are not yet available on the market. In this paper, we present a new low-cost fall detector for smart homes based on artificial vision algorithms. Our detector combines several algorithms (background subtraction, Kalman filtering and optical flow) as input to a machine learning algorithm with high detection accuracy. Tests conducted on over 50 different fall videos have shown a detection ratio of greater than 96%. PMID:29232846

  9. FIVQ algorithm for interference hyper-spectral image compression

    NASA Astrophysics Data System (ADS)

    Wen, Jia; Ma, Caiwen; Zhao, Junsuo

    2014-07-01

    Based on the improved vector quantization (IVQ) algorithm [1] proposed in 2012, this paper proposes a further improved vector quantization (FIVQ) algorithm for LASIS (Large Aperture Static Imaging Spectrometer) interference hyper-spectral image compression. To get better image quality, the IVQ algorithm takes both the mean values and the VQ indices as the encoding rules. Although the IVQ algorithm improves both the bit rate and the image quality, it can be improved further to achieve a much lower bit rate for the LASIS interference pattern, whose special optical characteristics stem from the pushing and sweeping of the LASIS imaging principle. In the proposed FIVQ algorithm, the neighborhood of each encoding block of the interference pattern image that uses the mean-value rule is checked for whether it has the same mean value as the current processing block. Experiments show that the proposed FIVQ algorithm achieves a lower bit rate than the IVQ algorithm for LASIS interference hyper-spectral sequences.

  10. Techniques for detection and localization of weak hippocampal and medial frontal sources using beamformers in MEG.

    PubMed

    Mills, Travis; Lalancette, Marc; Moses, Sandra N; Taylor, Margot J; Quraan, Maher A

    2012-07-01

    Magnetoencephalography provides precise information about the temporal dynamics of brain activation and is an ideal tool for investigating rapid cognitive processing. However, in many cognitive paradigms visual stimuli are used, which evoke strong brain responses (typically 40-100 nAm in V1) that may impede the detection of weaker activations of interest. This is particularly a concern when beamformer algorithms are used for source analysis, due to artefacts such as "leakage" of activation from the primary visual sources into other regions. We have previously shown (Quraan et al. 2011) that we can effectively reduce leakage patterns and detect weak hippocampal sources by subtracting the functional images derived from the experimental task and a control task with similar stimulus parameters. In this study we assess the performance of three different subtraction techniques. In the first technique we follow the same post-localization subtraction procedures as in our previous work. In the second and third techniques, we subtract the sensor data obtained from the experimental and control paradigms prior to source localization. Using simulated signals embedded in real data, we show that when beamformers are used, subtraction prior to source localization allows for the detection of weaker sources and higher localization accuracy. The improvement in localization accuracy exceeded 10 mm at low signal-to-noise ratios, and sources down to below 5 nAm were detected. We applied our techniques to empirical data acquired with two different paradigms designed to evoke hippocampal and frontal activations, and demonstrated our ability to detect robust activations in both regions with substantial improvements over image subtraction. We conclude that removal of the common-mode dominant sources through data subtraction prior to localization further improves the beamformer's ability to project the n-channel sensor-space data to reveal weak sources of interest and allows more accurate localization.

  11. Algorithms for Spectral Decomposition with Applications to Optical Plume Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Matthews, Bryan; Das, Santanu

    2008-01-01

    The analysis of spectral signals for features that represent physical phenomenon is ubiquitous in the science and engineering communities. There are two main approaches that can be taken to extract relevant features from these high-dimensional data streams. The first set of approaches relies on extracting features using a physics-based paradigm where the underlying physical mechanism that generates the spectra is used to infer the most important features in the data stream. We focus on a complementary methodology that uses a data-driven technique that is informed by the underlying physics but also has the ability to adapt to unmodeled system attributes and dynamics. We discuss the following four algorithms: Spectral Decomposition Algorithm (SDA), Non-Negative Matrix Factorization (NMF), Independent Component Analysis (ICA) and Principal Components Analysis (PCA) and compare their performance on a spectral emulator which we use to generate artificial data with known statistical properties. This spectral emulator mimics the real-world phenomena arising from the plume of the space shuttle main engine and can be used to validate the results that arise from various spectral decomposition algorithms and is very useful for situations where real-world systems have very low probabilities of fault or failure. Our results indicate that methods like SDA and NMF provide a straightforward way of incorporating prior physical knowledge while NMF with a tuning mechanism can give superior performance on some tests. We demonstrate these algorithms to detect potential system-health issues on data from a spectral emulator with tunable health parameters.
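
    A small NMF example on synthetic spectra illustrates the data-driven route discussed above: nonnegative basis spectra and their activations are recovered, and frames with large reconstruction error can be flagged. The synthetic data and the anomaly criterion are stand-ins, not the OPAD setup.

      import numpy as np
      from sklearn.decomposition import NMF

      # Synthetic stand-in: rows are time frames, columns are wavelength bins.
      rng = np.random.default_rng(0)
      basis = np.abs(rng.normal(size=(3, 500)))        # 3 "true" spectral shapes
      weights = np.abs(rng.normal(size=(200, 3)))      # their activity over time
      spectra = weights @ basis + 0.01 * rng.random((200, 500))

      model = NMF(n_components=3, init='nndsvda', max_iter=500)
      W = model.fit_transform(spectra)   # estimated activations per frame
      H = model.components_              # estimated basis spectra
      # Flag frames with unusually large reconstruction error as anomalies.
      err = np.linalg.norm(spectra - W @ H, axis=1)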

  12. Impact of JPEG2000 compression on spatial-spectral endmember extraction from hyperspectral data

    NASA Astrophysics Data System (ADS)

    Martín, Gabriel; Ruiz, V. G.; Plaza, Antonio; Ortiz, Juan P.; García, Inmaculada

    2009-08-01

    Hyperspectral image compression has received considerable interest in recent years. However, an important issue that has not been investigated in the past is the impact of lossy compression on spectral mixture analysis applications, which characterize mixed pixels in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. In this paper, we specifically investigate the impact of JPEG2000 compression of hyperspectral images on the quality of the endmembers extracted by algorithms that incorporate both the spectral and the spatial information (useful for incorporating contextual information in the spectral endmember search). The two considered algorithms are the automatic morphological endmember extraction (AMEE) and the spatial spectral endmember extraction (SSEE) techniques. Experimental results are conducted using a well-known data set collected by AVIRIS over the Cuprite mining district in Nevada and with detailed ground-truth information available from the U.S. Geological Survey. Our experiments reveal some interesting findings that may be useful to specialists applying spatial-spectral endmember extraction algorithms to compressed hyperspectral imagery.

  13. Initial Assessment of Acoustic Source Visibility with a 24-Element Microphone Array in the Arnold Engineering Development Center 80- by 120-Foot Wind Tunnel at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Horne, William C.

    2011-01-01

    Measurements of background noise were recently obtained with a 24-element phased microphone array in the test section of the Arnold Engineering Development Center 80- by 120-Foot Wind Tunnel at speeds of 50 to 100 knots (27.5 to 51.4 m/s). The array was mounted in an aerodynamic fairing positioned with the array center 1.2 m from the floor and 16 m from the tunnel centerline. The array plate was mounted flush with the fairing surface as well as recessed 0.5 in. (1.27 cm) behind a porous Kevlar screen. Wind-off speaker measurements were also acquired every 15° on a 10 m semicircular arc to assess the directional resolution of the array with various processing algorithms, and to estimate minimum detectable source strengths for future wind tunnel aeroacoustic studies. The dominant background noise of the facility is from the six drive fans downstream of the test section and the first set of turning vanes. Directional array response and processing methods such as background-noise cross-spectral-matrix subtraction suggest that sources 10-15 dB weaker than the background can be detected.

  14. Multi-pass encoding of hyperspectral imagery with spectral quality control

    NASA Astrophysics Data System (ADS)

    Wasson, Steven; Walker, William

    2015-05-01

    Multi-pass encoding is a technique employed in the field of video compression that maximizes the quality of an encoded video sequence within the constraints of a specified bit rate. This paper presents research where multi-pass encoding is extended to the field of hyperspectral image compression. Unlike video, which is primarily intended to be viewed by a human observer, hyperspectral imagery is processed by computational algorithms that generally attempt to classify the pixel spectra within the imagery. As such, these algorithms are more sensitive to distortion in the spectral dimension of the image than they are to perceptual distortion in the spatial dimension. The compression algorithm developed for this research, which uses the Karhunen-Loeve transform for spectral decorrelation followed by a modified H.264/Advanced Video Coding (AVC) encoder, maintains a user-specified spectral quality level while maximizing the compression ratio throughout the encoding process. The compression performance may be considered near-lossless in certain scenarios. For qualitative purposes, this paper presents the performance of the compression algorithm for several Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Hyperion datasets using spectral angle as the spectral quality assessment function. Specifically, the compression performance is illustrated in the form of rate-distortion curves that plot spectral angle versus bits per pixel per band (bpppb).
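
    The spectral quality function used here, the spectral angle, is simple to compute per pixel; a minimal sketch:

      import numpy as np

      def spectral_angle(x, y, axis=-1):
          """Spectral angle (radians) between original and reconstructed
          pixel spectra, the per-pixel distortion measure used as the
          quality control in the encoder described above.
          """
          num = np.sum(x * y, axis=axis)
          den = np.linalg.norm(x, axis=axis) * np.linalg.norm(y, axis=axis)
          return np.arccos(np.clip(num / np.maximum(den, 1e-12), -1.0, 1.0))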

  15. Accuracy Improvement for Light-Emitting-Diode-Based Colorimeter by Iterative Algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Pao-Keng

    2011-09-01

    We present a simple algorithm, combining an interpolation method with an iterative calculation, that enhances the resolution of spectral reflectance by removing the spectral broadening caused by the finite bandwidth of the light-emitting diode (LED). The proposed algorithm can be used to improve the accuracy of a reflective colorimeter using multicolor LEDs as probing light sources, and it is also applicable when the probing LEDs have different bandwidths in different spectral ranges, a case to which the powerful deconvolution method cannot be applied.
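
    A Van Cittert-type iteration is one simple way to realize the iterative broadening removal described above; whether the paper's iteration matches this form is an assumption, and the LED band profile must be measured separately.

      import numpy as np

      def van_cittert(measured, led_band, n_iter=20):
          """Iterative deconvolution sketch (Van Cittert type): repeatedly
          add the difference between the measurement and the re-broadened
          estimate. led_band is the measured LED spectral profile.
          """
          led_band = led_band / led_band.sum()
          estimate = measured.astype(np.float64).copy()
          for _ in range(n_iter):
              reblurred = np.convolve(estimate, led_band, mode='same')
              estimate = estimate + (measured - reblurred)
          return estimate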

  16. A new stellar spectrum interpolation algorithm and its application to Yunnan-III evolutionary population synthesis models

    NASA Astrophysics Data System (ADS)

    Cheng, Liantao; Zhang, Fenghui; Kang, Xiaoyu; Wang, Lang

    2018-05-01

    In evolutionary population synthesis (EPS) models, we need to convert stellar evolutionary parameters into spectra via interpolation in a stellar spectral library. For theoretical stellar spectral libraries, the spectrum grid is homogeneous on the effective-temperature and gravity plane for a given metallicity. It is relatively easy to derive stellar spectra. For empirical stellar spectral libraries, stellar parameters are irregularly distributed and the interpolation algorithm is relatively complicated. In those EPS models that use empirical stellar spectral libraries, different algorithms are used and the codes are often not released. Moreover, these algorithms are often complicated. In this work, based on a radial basis function (RBF) network, we present a new spectrum interpolation algorithm and its code. Compared with the other interpolation algorithms that are used in EPS models, it can be easily understood and is highly efficient in terms of computation. The code is written in MATLAB scripts and can be used on any computer system. Using it, we can obtain the interpolated spectra from a library or a combination of libraries. We apply this algorithm to several stellar spectral libraries (such as MILES, ELODIE-3.1 and STELIB-3.2) and give the integrated spectral energy distributions (ISEDs) of stellar populations (with ages from 1 Myr to 14 Gyr) by combining them with Yunnan-III isochrones. Our results show that the differences caused by the adoption of different EPS model components are less than 0.2 dex. All data about the stellar population ISEDs in this work and the RBF spectrum interpolation code can be obtained by request from the first author or downloaded from http://www1.ynao.ac.cn/˜zhangfh.
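
    In Python, the same idea can be sketched with SciPy's RBFInterpolator, treating each library star as a point in normalized (Teff, log g, [Fe/H]) space and interpolating all wavelength bins at once; the library file names are placeholders, and the published code itself is in MATLAB.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      params = np.load("library_params.npy")    # (n_stars, 3): Teff, logg, [Fe/H]
      spectra = np.load("library_spectra.npy")  # (n_stars, n_wavelengths)

      # Normalize parameters so one RBF length scale suits all three axes.
      lo, hi = params.min(axis=0), params.max(axis=0)
      interp = RBFInterpolator((params - lo) / (hi - lo), spectra,
                               kernel='thin_plate_spline')

      target = (np.array([[5777.0, 4.44, 0.0]]) - lo) / (hi - lo)
      sun_like_spectrum = interp(target)[0]     # interpolated spectrum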

  17. A Spectral Algorithm for Envelope Reduction of Sparse Matrices

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen T.; Pothen, Alex; Simon, Horst D.

    1993-01-01

    The problem of reordering a sparse symmetric matrix to reduce its envelope size is considered. A new spectral algorithm for computing an envelope-reducing reordering is obtained by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. This Laplacian eigenvector solves a continuous relaxation of a discrete problem related to envelope minimization called the minimum 2-sum problem. The permutation vector computed by the spectral algorithm is a closest permutation vector to the specified Laplacian eigenvector. Numerical results show that the new reordering algorithm usually computes smaller envelope sizes than those obtained from the current standard algorithms such as Gibbs-Poole-Stockmeyer (GPS) or SPARSPAK reverse Cuthill-McKee (RCM), in some cases reducing the envelope by more than a factor of two.
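
    The core of the method fits in a few lines: form the graph Laplacian of the matrix's sparsity structure, take the eigenvector of the second-smallest eigenvalue (the Fiedler vector), and sort its entries. The dense eigensolver below is a simplification; production code would use a sparse Lanczos solver.

      import numpy as np

      def spectral_envelope_order(A):
          """Reordering permutation for a sparse symmetric matrix from the
          sorted Fiedler vector of its associated graph Laplacian.
          """
          A = (A != 0).astype(np.float64)     # sparsity structure only
          np.fill_diagonal(A, 0.0)
          L = np.diag(A.sum(axis=1)) - A      # graph Laplacian
          vals, vecs = np.linalg.eigh(L)
          fiedler = vecs[:, 1]                # 2nd-smallest eigenvalue's eigenvector
          return np.argsort(fiedler)          # permutation for reordering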

  18. Preliminary evaluation of the Environmental Research Institute of Michigan crop calendar shift algorithm for estimation of spring wheat development stage. [North Dakota, South Dakota, Montana, and Minnesota

    NASA Technical Reports Server (NTRS)

    Phinney, D. E. (Principal Investigator)

    1980-01-01

    An algorithm for estimating spectral crop calendar shifts of spring small grains was applied to 1978 spring wheat fields. The algorithm provides estimates of the date of peak spectral response by maximizing the cross correlation between a reference profile and the observed multitemporal pattern of Kauth-Thomas greenness for a field. A methodology was developed for estimation of crop development stage from the date of peak spectral response. Evaluation studies showed that the algorithm provided stable estimates with no geographical bias. Crop development stage estimates had a root mean square error near 10 days. The algorithm was recommended for comparative testing against other models which are candidates for use in AgRISTARS experiments.
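
    The shift estimate itself is a cross-correlation peak search; a minimal sketch, assuming the reference and observed greenness profiles are sampled on the same regular time grid:

      import numpy as np

      def peak_greenness_shift(reference, observed):
          """Estimate the crop-calendar shift (in sampling steps) maximizing
          the cross correlation between a reference greenness profile and a
          field's observed multitemporal greenness profile.
          """
          ref = reference - reference.mean()
          obs = observed - observed.mean()
          xc = np.correlate(obs, ref, mode='full')
          lag = int(np.argmax(xc)) - (len(ref) - 1)
          return lag   # positive lag: observed peak occurs later than reference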

  19. Geologist's Field Assistant: Developing Image and Spectral Analyses Algorithms for Remote Science Exploration

    NASA Technical Reports Server (NTRS)

    Gulick, V. C.; Morris, R. L.; Bishop, J.; Gazis, P.; Alena, R.; Sierhuis, M.

    2002-01-01

    We are developing science analyses algorithms to interface with a Geologist's Field Assistant device to allow robotic or human remote explorers to better sense their surroundings during limited surface excursions. Our algorithms will interpret spectral and imaging data obtained by various sensors. Additional information is contained in the original extended abstract.

  20. Soft X-Ray Absorption Spectroscopy of High-Abrasion-Furnace Carbon Black

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muramatsu, Yasuji; Harada, Ryusuke; Gullikson, Eric M.

    2007-02-02

    The soft x-ray absorption spectra of high-abrasion-furnace carbon black were measured to obtain local-structure/chemical-state information on the primary particles and/or crystallites. The soft x-ray absorption spectral features of carbon black show broader π* and σ* peak structures compared to highly oriented pyrolytic graphite (HOPG). The subtracted spectra between the carbon black and HOPG, (carbon black) - (HOPG), show double-peak structures on both sides of the π* peak. The lower-energy peak, denoted as the 'pre-peak', in the subtracted spectra and the π*/σ* peak-intensity ratio in the absorption spectra clearly depend on the specific surface area by nitrogen adsorption (NSA). It is therefore concluded that the pre-peak intensity and the π*/σ* ratio reflect the local graphitic structure of carbon black.

  1. NIKOS II - A System For Non-Invasive Imaging Of Coronary Arteries

    NASA Astrophysics Data System (ADS)

    Dix, Wolf-Rainer; Engelke, Klaus; Heintze, Gerhard; Heuer, Joachim; Graeff, Walter; Kupper, Wolfram; Lohmann, Michael; Makin, I.; Moechel, Thomas; Reumann, Reinhold; Stellmaschek, Karl-Heinz

    1989-05-01

    This paper presents results of the initial in-vivo investigations with the system NIKOS II (NIKOS = Nicht-invasive Koronarangiographie mit Synchrotronstrahlung), an advanced version of NIKOS I, which has been under development since 1981. The aim of the work is to visualize coronary arteries down to 1 mm diameter with an iodine mass density of 1 mg/cm², thus allowing non-invasive investigations by intravenous injection of the contrast agent. For this purpose, Digital Subtraction Angiography (DSA) in energy subtraction mode (dichromography) is employed. The two images for subtraction are taken at photon energies just below and above the iodine K-edge (33.17 keV). After subtraction, the background contrast from bone and soft tissue is suppressed and the iodinated structures are strongly enhanced because of the abrupt change of absorption at the K-edge. The two monoenergetic beams are filtered out of a synchrotron radiation beam by a crystal monochromator and measured with a two-line detector. One scan (two images) lasts between 250 ms (final version) and 1 s (at present). The images from the in-vivo investigations of dogs have been promising: the right coronary artery (diameter 1.5 mm) was clearly visible. With the application of better image processing algorithms, the images illustrated in this paper have a definite potential for improvement.

  2. Spectral reconstruction of signals from periodic nonuniform subsampling based on a Nyquist folding scheme

    NASA Astrophysics Data System (ADS)

    Jiang, Kaili; Zhu, Jun; Tang, Bin

    2017-12-01

    Periodic nonuniform sampling occurs in many applications, and the Nyquist folding receiver (NYFR) is an efficient, low-complexity, broadband spectrum sensing architecture. In this paper, we first derive that the radio frequency (RF) sample clock function of the NYFR is periodic nonuniform. Then, the classical results of periodic nonuniform sampling are applied to the NYFR. We extend the spectral reconstruction algorithm of the time-series decomposition model to the subsampling case, which is common in broadband spectrum surveillance, by using the spectrum characteristics of the NYFR. Finally, we take a large-bandwidth LFM signal as an example to verify the proposed algorithm and compare it with the orthogonal matching pursuit (OMP) algorithm.

  3. Estimation's Role in Calculations with Fractions

    ERIC Educational Resources Information Center

    Johanning, Debra I.

    2011-01-01

    Estimation is more than a skill or an isolated topic. It is a thinking tool that needs to be emphasized during instruction so that students will learn to develop algorithmic procedures and meaning for fraction operations. For students to realize when fractions should be added, subtracted, multiplied, or divided, they need to develop a sense of…

  4. Modified signed-digit trinary arithmetic by using optical symbolic substitution.

    PubMed

    Awwal, A A; Islam, M N; Karim, M A

    1992-04-10

    Carry-free addition and borrow-free subtraction of modified signed-digit trinary numbers with optical symbolic substitution are presented. The proposed two-step and three-step algorithms can be easily implemented by using phase-only holograms, optical content-addressable memories, a multichannel correlator, or a polarization-encoded optical shadow-casting system.

  5. Modified signed-digit trinary arithmetic by using optical symbolic substitution

    NASA Astrophysics Data System (ADS)

    Awwal, A. A. S.; Islam, M. N.; Karim, M. A.

    1992-04-01

    Carry-free addition and borrow-free subtraction of modified signed-digit trinary numbers with optical symbolic substitution are presented. The proposed two-step and three-step algorithms can be easily implemented by using phase-only holograms, optical content-addressable memories, a multichannel correlator, or a polarization-encoded optical shadow-casting system.

  6. Blood flow measurement using digital subtraction angiography for assessing hemodialysis access function

    NASA Astrophysics Data System (ADS)

    Koirala, Nischal; Setser, Randolph M.; Bullen, Jennifer; McLennan, Gordon

    2017-03-01

    Blood flow rate is a critical parameter for diagnosing dialysis access function during fistulography, where a flow rate of 600 ml/min in an arteriovenous graft or 400-500 ml/min in an arteriovenous fistula is considered the clinical threshold for fully functioning access. In this study, a flow rate computational model for calculating intra-access flow to evaluate dialysis access patency was developed and validated in an in vitro setup using digital subtraction angiography. Flow rates were computed by tracking the bolus through two regions of interest using cross correlation (XCOR) and mean arrival time (MAT) algorithms, and correlated against an in-line transonic flow meter measurement. The mean difference (mean +/- standard deviation) between XCOR and in-line flow measurements for the in vitro setup at 3, 6, 7.5, and 10 frames/s was 118+/-63, 37+/-59, 31+/-31, and 46+/-57 ml/min, respectively, while for the MAT method it was 86+/-56, 57+/-72, 35+/-85, and 19+/-129 ml/min, respectively. The results of this investigation will be helpful for selecting candidate algorithms as a blood flow computational tool is developed for clinical application.
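
    The MAT method named above reduces to a short computation on the two ROI time-density curves; a sketch, with ROI extraction and curve conditioning omitted and units only illustrative:

      import numpy as np

      def bolus_flow(c1, c2, fps, distance_cm, area_cm2):
          """Estimate intra-access flow from the time-density curves of two
          ROIs a known distance apart: the bolus transit time is the
          difference of density-weighted mean arrival times (MAT), and flow
          is velocity times lumen cross-section.
          """
          t = np.arange(len(c1)) / fps
          mat1 = np.sum(t * c1) / np.sum(c1)      # mean arrival time, ROI 1
          mat2 = np.sum(t * c2) / np.sum(c2)      # mean arrival time, ROI 2
          transit_s = max(mat2 - mat1, 1.0 / fps)
          velocity = distance_cm / transit_s       # cm/s
          return velocity * area_cm2 * 60.0        # ml/min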

  7. Toward particle-level filtering of individual collision events at the Large Hadron Collider and beyond

    NASA Astrophysics Data System (ADS)

    Colecchia, Federico

    2014-03-01

    Low-energy strong interactions are a major source of background at hadron colliders, and methods of subtracting the associated energy flow are well established in the field. Traditional approaches treat the contamination as diffuse, and estimate background energy levels either by averaging over large data sets or by restricting to given kinematic regions inside individual collision events. On the other hand, more recent techniques take into account the discrete nature of background, most notably by exploiting the presence of substructure inside hard jets, i.e. inside collections of particles originating from scattered hard quarks and gluons. However, none of the existing methods subtract background at the level of individual particles inside events. We illustrate the use of an algorithm that will allow particle-by-particle background discrimination at the Large Hadron Collider, and we envisage this as the basis for a novel event filtering procedure upstream of the official reconstruction chains. Our hope is that this new technique will improve physics analysis when used in combination with state-of-the-art algorithms in high-luminosity hadron collider environments.

  8. Predicting speech intelligibility based on the signal-to-noise envelope power ratio after modulation-frequency selective processing.

    PubMed

    Jørgensen, Søren; Dau, Torsten

    2011-09-01

    A model for predicting the intelligibility of processed noisy speech is proposed. The speech-based envelope power spectrum model has a similar structure as the model of Ewert and Dau [(2000). J. Acoust. Soc. Am. 108, 1181-1196], developed to account for modulation detection and masking data. The model estimates the speech-to-noise envelope power ratio, SNR(env), at the output of a modulation filterbank and relates this metric to speech intelligibility using the concept of an ideal observer. Predictions were compared to data on the intelligibility of speech presented in stationary speech-shaped noise. The model was further tested in conditions with noisy speech subjected to reverberation and spectral subtraction. Good agreement between predictions and data was found in all cases. For spectral subtraction, an analysis of the model's internal representation of the stimuli revealed that the predicted decrease of intelligibility was caused by the estimated noise envelope power exceeding that of the speech. The classical concept of the speech transmission index fails in this condition. The results strongly suggest that the signal-to-noise ratio at the output of a modulation frequency selective process provides a key measure of speech intelligibility.
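
    As a rough illustration of the envelope-power ratio at the core of the model, the following single-band sketch extracts temporal envelopes with the Hilbert transform, band-passes them in one modulation band, and forms the speech-to-noise envelope power ratio; the full model uses a complete modulation filterbank and an ideal-observer stage, and the band edges here are assumptions.

      import numpy as np
      from scipy.signal import hilbert, butter, sosfiltfilt

      def snr_env(noisy_speech, noise, fs, band=(4.0, 16.0)):
          """Single-modulation-band SNRenv estimate in dB: envelope powers
          of the noisy speech and of the noise alone, band-passed in one
          modulation band, combined as (P_mix - P_noise) / P_noise.
          """
          sos = butter(2, band, btype='bandpass', fs=fs, output='sos')
          env_mix = sosfiltfilt(sos, np.abs(hilbert(noisy_speech)))
          env_noise = sosfiltfilt(sos, np.abs(hilbert(noise)))
          p_mix = np.mean(env_mix ** 2)
          p_noise = np.mean(env_noise ** 2)
          return 10 * np.log10(max(p_mix - p_noise, 1e-12) / p_noise)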

  9. Neural representation of the self-heard biosonar click in bottlenose dolphins (Tursiops truncatus).

    PubMed

    Finneran, James J; Mulsow, Jason; Houser, Dorian S; Schlundt, Carolyn E

    2017-05-01

    The neural representation of the dolphin broadband biosonar click was investigated by measuring auditory brainstem responses (ABRs) to "self-heard" clicks masked with noise bursts having various high-pass cutoff frequencies. Narrowband ABRs were obtained by sequentially subtracting responses obtained with noise having lower high-pass cutoff frequencies from those obtained with noise having higher cutoff frequencies. For comparison to the biosonar data, ABRs were also measured in a passive listening experiment, where external clicks and masking noise were presented to the dolphins and narrowband ABRs were again derived using the subtractive high-pass noise technique. The results showed little change in the peak latencies of the ABR to the self-heard click from 28 to 113 kHz; i.e., the high-frequency neural responses to the self-heard click were delayed relative to those of an external, spectrally "pink" click. The neural representation of the self-heard click is thus highly synchronous across the echolocation frequencies and does not strongly resemble that of a frequency modulated downsweep (i.e., decreasing-frequency chirp). Longer ABR latencies at higher frequencies are hypothesized to arise from spectral differences between self-heard clicks and external clicks, forward masking from previously emitted biosonar clicks, or neural inhibition accompanying the emission of clicks.

  10. Neural representation of the self-heard biosonar click in bottlenose dolphins (Tursiops truncatus)

    PubMed Central

    Finneran, James J.; Mulsow, Jason; Houser, Dorian S.; Schlundt, Carolyn E.

    2017-01-01

    The neural representation of the dolphin broadband biosonar click was investigated by measuring auditory brainstem responses (ABRs) to “self-heard” clicks masked with noise bursts having various high-pass cutoff frequencies. Narrowband ABRs were obtained by sequentially subtracting responses obtained with noise having lower high-pass cutoff frequencies from those obtained with noise having higher cutoff frequencies. For comparison to the biosonar data, ABRs were also measured in a passive listening experiment, where external clicks and masking noise were presented to the dolphins and narrowband ABRs were again derived using the subtractive high-pass noise technique. The results showed little change in the peak latencies of the ABR to the self-heard click from 28 to 113 kHz; i.e., the high-frequency neural responses to the self-heard click were delayed relative to those of an external, spectrally “pink” click. The neural representation of the self-heard click is thus highly synchronous across the echolocation frequencies and does not strongly resemble that of a frequency modulated downsweep (i.e., decreasing-frequency chirp). Longer ABR latencies at higher frequencies are hypothesized to arise from spectral differences between self-heard clicks and external clicks, forward masking from previously emitted biosonar clicks, or neural inhibition accompanying the emission of clicks. PMID:28599518

  11. High-resolution Observations of Hα Spectra with a Subtractive Double Pass

    NASA Astrophysics Data System (ADS)

    Beck, C.; Rezaei, R.; Choudhary, D. P.; Gosain, S.; Tritschler, A.; Louis, R. E.

    2018-02-01

    High-resolution imaging spectroscopy in solar physics has relied on Fabry-Pérot interferometers (FPIs) in recent years. FPI systems, however, become technically challenging and expensive for telescopes larger than the 1 m class. A conventional slit spectrograph with diffraction-limited performance over a large field of view (FOV) can be built at much lower cost and effort. It can be converted into an imaging spectro(polari)meter using the concept of a subtractive double pass (SDP). We demonstrate that an SDP system can reach a performance similar to that of FPI-based systems, with high spatial and moderate spectral resolution across a FOV of 100'' × 100'' and a spectral coverage of 1 nm. We use Hα spectra taken with an SDP system at the Dunn Solar Telescope and complementary full-disc data to infer the properties of small-scale superpenumbral filaments. We find that the majority of all filaments end in patches of opposite-polarity fields. The internal fine structure in the line-core intensity of Hα at spatial scales of about 0.5'' exceeds that in other parameters such as the line width, indicating small-scale opacity effects in a larger-scale structure with common properties. We conclude that SDP systems in combination with (multi-conjugate) adaptive optics are a valid alternative to FPI systems when high spatial resolution and a large FOV are required. They can also reach a cadence comparable to that of FPI systems, while providing a much larger spectral range and a simultaneous multi-line capability.

  12. Comparison of three methods for materials identification and mapping with imaging spectroscopy

    NASA Technical Reports Server (NTRS)

    Clark, Roger N.; Swayze, Gregg; Boardman, Joe; Kruse, Fred

    1993-01-01

    We compare three mapping analysis methods for imaging spectroscopy data. The purpose of this comparison is to understand the advantages and disadvantages of each algorithm, so that others can better choose the best algorithm or combination of algorithms for a particular problem. The three algorithms are: (1) the spectral-feature modified-least-squares mapping algorithm of Clark et al. (1990, 1991): programs mbandmap and tricorder; (2) the Spectral Angle Mapper algorithm (Boardman, 1993) found in the CU CSES SIPS package; and (3) the expert system of Kruse et al. (1993). The comparison uses a ground-calibrated 1990 AVIRIS scene of 400 by 410 pixels over Cuprite, Nevada, along with a spectral library of 38 minerals. Each algorithm is tested with the same AVIRIS data set and spectral library. Field work has confirmed the presence of many of these minerals in the AVIRIS scene (Swayze et al. 1992).

  13. A spectral, quasi-cylindrical and dispersion-free Particle-In-Cell algorithm

    DOE PAGES

    Lehe, Remi; Kirchen, Manuel; Andriyash, Igor A.; ...

    2016-02-17

    We propose a spectral Particle-In-Cell (PIC) algorithm that is based on the combination of a Hankel transform and a Fourier transform. For physical problems that have close-to-cylindrical symmetry, this algorithm can be much faster than full 3D PIC algorithms. In addition, unlike standard finite-difference PIC codes, the proposed algorithm is free of spurious numerical dispersion in vacuum. The algorithm is benchmarked in several situations that are of interest for laser-plasma interactions. These benchmarks show that it avoids a number of numerical artifacts that would otherwise affect the physics in a standard PIC algorithm, including the zero-order numerical Cherenkov effect.

  14. SU-E-J-23: An Accurate Algorithm to Match Imperfectly Matched Images for Lung Tumor Detection Without Markers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozario, T; Bereg, S; Chiu, T

    Purpose: In order to locate lung tumors on projection images without internal markers, a digitally reconstructed radiograph (DRR) is created and compared with the projection images. Since lung tumors move and their locations change on projection images while they are static on DRRs, a special DRR (background DRR) is generated based on modified anatomy from which the lung tumor is removed. In addition, global discrepancies exist between DRRs and projections due to their different image originations, scattering, and noise, which adversely affects comparison accuracy. A simple but efficient comparison algorithm is reported. Methods: This method divides the global images into a matrix of small tiles and evaluates similarity by calculating the normalized cross correlation (NCC) between corresponding tiles on the projections and DRRs. The tile configuration (tile locations) is automatically optimized to keep the tumor within a single tile, which then matches poorly with the corresponding DRR tile. A pixel-based linear transformation is determined by linear interpolation of the tile transformation results obtained during tile matching. The DRR is transformed to the projection image level and subtracted from it; the resulting subtracted image contains only the tumor. A DRR of the tumor is then registered to the subtracted image to locate the tumor. Results: This method has been successfully applied to kV fluoro images (about 1000 images) acquired on a Vero (Brainlab) for dynamic tumor tracking in phantom studies. Radiation-opaque markers were implanted and used as ground truth for tumor positions. Although other organs and bony structures introduce strong signals superimposed on tumors at some angles, this method accurately locates tumors on every projection over 12 gantry angles. The maximum error is less than 2.6 mm, while the overall average error is 1.0 mm. Conclusion: This algorithm is capable of detecting tumors without markers despite strong background signals.
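
    A minimal sketch of the per-tile normalized cross correlation step described above; the tiling scheme, tile size, and array shapes are illustrative assumptions:

        import numpy as np

        def ncc(tile_a, tile_b):
            """Normalized cross correlation between two equally sized tiles."""
            a = tile_a - tile_a.mean()
            b = tile_b - tile_b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return (a * b).sum() / denom if denom > 0 else 0.0

        def tile_similarity(projection, drr, tile=64):
            """NCC score for each tile of a projection/DRR pair."""
            h, w = projection.shape
            scores = {}
            for y in range(0, h - tile + 1, tile):
                for x in range(0, w - tile + 1, tile):
                    scores[(y, x)] = ncc(projection[y:y+tile, x:x+tile],
                                         drr[y:y+tile, x:x+tile])
            return scores  # low-scoring tiles are candidates for containing the tumor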

  15. Design and Implementation of Hybrid CORDIC Algorithm Based on Phase Rotation Estimation for NCO

    PubMed Central

    Zhang, Chaozhu; Han, Jinan; Li, Ke

    2014-01-01

    The numerically controlled oscillator (NCO) has wide application in radar, digital receivers, and software radio systems. This paper first introduces the traditional CORDIC algorithm. Then, in order to improve computing speed and save resources, it proposes a hybrid CORDIC algorithm based on phase rotation estimation applied to the NCO. By estimating the direction of part of the phase rotations, the algorithm reduces the number of phase rotations and add-subtract units, thereby decreasing delay. Furthermore, the paper simulates and implements the NCO using Quartus II and ModelSim software. Simulation results indicate that an improvement over the traditional CORDIC algorithm is achieved in terms of ease of computation, resource utilization, and computing speed/delay while maintaining precision. It is suitable for high-speed, high-precision digital modulation and demodulation. PMID:25110750
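
    For context, a plain (non-hybrid) CORDIC rotation in Python; the paper's hybrid variant replaces part of the iteration with a phase-rotation estimate, which this sketch does not reproduce:

        import math

        def cordic_sin_cos(angle, n_iters=16):
            """Compute (cos, sin) of `angle` (radians, |angle| < pi/2) by CORDIC."""
            # Arctan table and the CORDIC gain compensation K = prod 1/sqrt(1 + 2^-2i)
            atans = [math.atan(2.0 ** -i) for i in range(n_iters)]
            k = 1.0
            for i in range(n_iters):
                k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
            x, y, z = 1.0, 0.0, angle
            for i in range(n_iters):
                d = 1.0 if z >= 0 else -1.0                       # rotation direction
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * atans[i]                                 # residual angle
            return x * k, y * k                                   # (cos, sin)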

  16. A Subsystem Test Bed for Chinese Spectral Radioheliograph

    NASA Astrophysics Data System (ADS)

    Zhao, An; Yan, Yihua; Wang, Wei

    2014-11-01

    The Chinese Spectral Radioheliograph (CSRH) is a solar-dedicated radio interferometric array that will produce high-spatial-resolution, high-temporal-resolution, and high-spectral-resolution images of the Sun simultaneously in the decimetre and centimetre wave range. Digital processing of the intermediate frequency (IF) signal is an important part of a radio telescope. This paper describes a flexible and high-speed digital down conversion (DDC) system for the CSRH that applies complex mixing, parallel filtering, and extraction algorithms to process the IF signal, and incorporates canonic signed digit coding and a bit-plane method to improve program efficiency. The DDC system is intended to be a subsystem test bed for simulation and testing for CSRH. Software algorithms for simulation and FPGA-based hardware description language implementations are written that use fewer hardware resources while achieving high performance, such as processing a high-speed (1 GHz) data flow with 10 MHz spectral resolution. An experiment with the test bed is illustrated using geostationary satellite data observed on March 20, 2014. Due to the easy alterability of the algorithms on the FPGA, the data can be recomputed with different digital signal processing algorithms to select the optimum algorithm.

  17. A real-time spectral mapper as an emerging diagnostic technology in biomedical sciences.

    PubMed

    Epitropou, George; Kavvadias, Vassilis; Iliou, Dimitris; Stathopoulos, Efstathios; Balas, Costas

    2013-01-01

    Real-time spectral imaging and mapping at video rates can have tremendous impact not only on diagnostic sciences but also on fundamental physiological problems. We report the first real-time spectral mapper based on the combination of snapshot spectral imaging and spectral estimation algorithms. Performance evaluation revealed that six-band imaging combined with the Wiener algorithm provided high estimation accuracy, with error levels lying within the experimental noise. High accuracy is accompanied by spectral mapping that is faster by three orders of magnitude than scanning spectral systems. This new technology is intended to enable spectral mapping at nearly video rates in all kinds of dynamic bio-optical effects, as well as in applications where the target-probe relative position changes rapidly and randomly.
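
    A hedged sketch of Wiener spectral estimation from a small number of band responses: the estimation matrix is learned from training pairs of known reflectance spectra and the corresponding camera responses. The matrix names, regularization term, and data layout are assumptions for illustration:

        import numpy as np

        def wiener_matrix(R_train, C_train, noise_var=1e-4):
            """W maps camera responses c (n_bands) to spectra r (n_wavelengths).

            R_train : (n_wavelengths, n_samples) known reflectance spectra.
            C_train : (n_bands, n_samples) corresponding camera responses.
            """
            K_rc = R_train @ C_train.T                 # cross-correlation matrix
            K_cc = C_train @ C_train.T                 # response autocorrelation
            K_cc += noise_var * np.eye(K_cc.shape[0])  # regularize against noise
            return K_rc @ np.linalg.inv(K_cc)

        # Usage: r_hat = W @ c for a measured six-band response vector c.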

  18. An Analysis of Periodic Components in BL Lac Object S5 0716 +714 with MUSIC Method

    NASA Astrophysics Data System (ADS)

    Tang, J.

    2012-01-01

    Multiple signal classification (MUSIC) algorithms are introduced for estimating the variation period of BL Lac objects. The principle of the MUSIC spectral analysis method and a theoretical analysis of its frequency resolution, using analog signals, are included. From the literature, we collected effective observational data of the BL Lac object S5 0716+714 in the V, R, and I bands from 1994 to 2008. The light variation periods of S5 0716+714 were obtained by means of the MUSIC spectral analysis method and the periodogram spectral analysis method. There exist two major periods for all bands: (3.33±0.08) years and (1.24±0.01) years. The period estimate based on the MUSIC spectral analysis method is compared with that based on the periodogram spectral analysis method. MUSIC is a super-resolution algorithm that works with short data records and can be used to detect the variation periods of weak signals.
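
    A minimal MUSIC pseudospectrum for an evenly sampled series, sketching the mechanics of the method; astronomical light curves are unevenly sampled, which needs extra care not shown here, and each real sinusoid occupies two complex-exponential signal dimensions:

        import numpy as np

        def music_spectrum(x, n_sources, m, freqs, dt=1.0):
            """MUSIC pseudospectrum of a 1-D series x at trial frequencies `freqs`."""
            # Build the m x m sample covariance from sliding snapshots
            snaps = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
            R = snaps.T @ snaps.conj() / len(snaps)
            w, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
            En = V[:, : m - n_sources]               # noise subspace (smallest eigenvalues)
            p = []
            for f in freqs:
                a = np.exp(-2j * np.pi * f * dt * np.arange(m))   # steering vector
                p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
            return np.array(p)                       # peaks mark candidate periods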

  19. Performing target specific band reduction using artificial neural networks and assessment of its efficacy using various target detection algorithms

    NASA Astrophysics Data System (ADS)

    Yadav, Deepti; Arora, M. K.; Tiwari, K. C.; Ghosh, J. K.

    2016-04-01

    Hyperspectral imaging is a powerful tool in the field of remote sensing and has been used for many applications such as mineral detection, detection of landmines, and target detection. Major issues in target detection using HSI are spectral variability, noise, small target size, huge data dimensions, high computation cost, and complex backgrounds. Many popular detection algorithms do not work for difficult targets (small, camouflaged, etc.) and may result in high false alarm rates. Thus, target/background discrimination is a key issue, and analyzing a target's behaviour in realistic environments is crucial for the accurate interpretation of hyperspectral imagery. Using standard spectral libraries to study a target's spectral behaviour has the limitation that library targets are measured under environmental conditions different from those of the application. This study instead uses spectral data of the same targets collected during acquisition of the HSI image. The paper analyzes target spectra such that each target can be spectrally distinguished from a mixture of spectral data. An artificial neural network (ANN) is used to identify the spectral ranges for reducing the data, and its efficacy for improving target detection is then verified. The ANN results propose discriminating band ranges for the targets; these ranges were further used to perform target detection with four popular spectral matching detection algorithms. The results of the algorithms were analyzed using ROC curves to evaluate the effectiveness of the ANN-suggested ranges against the full spectrum for detecting the desired targets, and a comparative assessment of the algorithms was also performed using ROC.

  20. A comparison of change detection methods using multispectral scanner data

    USGS Publications Warehouse

    Seevers, Paul M.; Jones, Brenda K.; Qiu, Zhicheng; Liu, Yutong

    1994-01-01

    Change detection methods were investigated as a cooperative activity between the U.S. Geological Survey and the National Bureau of Surveying and Mapping, People's Republic of China. Subtraction of band 2, band 3, normalized difference vegetation index, and tasseled cap bands 1 and 2 data from two multispectral scanner images was tested using two sites in the United States and one in the People's Republic of China. A new statistical method was also tested. Band 2 subtraction gives the best results for detecting change from vegetative cover to urban development. The statistical method identifies areas that have changed and uses a fast classification algorithm to classify the original data of the changed areas by the land cover type present on each image date.
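
    A sketch of band-subtraction change detection on two co-registered band arrays; the k-sigma threshold is a common convention used here as a placeholder, not the statistical method of the paper:

        import numpy as np

        def band_change_mask(band_t1, band_t2, k=2.0):
            """Flag pixels whose band difference departs k sigma from the mean."""
            diff = band_t2.astype(float) - band_t1.astype(float)
            mu, sigma = diff.mean(), diff.std()
            return np.abs(diff - mu) > k * sigma   # boolean change mask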

  1. A complex guided spectral transform Lanczos method for studying quantum resonance states

    DOE PAGES

    Yu, Hua-Gen

    2014-12-28

    A complex guided spectral transform Lanczos (cGSTL) algorithm is proposed to compute both bound and resonance states, including energies, widths, and wavefunctions. The algorithm comprises two layers of complex-symmetric Lanczos iterations. A short inner-layer iteration produces a set of complex formally orthogonal Lanczos (cFOL) polynomials, which are used to span the guided spectral transform function determined by a retarded Green operator. An outer-layer iteration is then carried out with the transform function to compute the eigenpairs of the system. The guided spectral transform function is designed to have the same wavefunctions as the eigenstates of the original Hamiltonian in the spectral range of interest. Therefore the energies and/or widths of bound or resonance states can be easily computed with their wavefunctions, or by using a root-searching method from the guided spectral transform surface. The new cGSTL algorithm is applied to bound and resonance states of HO₂ and compared to previous calculations.

  2. Method and algorithm for efficient calibration of compressive hyperspectral imaging system based on a liquid crystal retarder

    NASA Astrophysics Data System (ADS)

    Shecter, Liat; Oiknine, Yaniv; August, Isaac; Stern, Adrian

    2017-09-01

    Recently we presented a Compressive Sensing Miniature Ultra-spectral Imaging System (CS-MUSI). This system consists of a single liquid crystal (LC) phase retarder as a spectral modulator and a grayscale sensor array that captures a multiplexed signal of the imaged scene. By designing the LC spectral modulator in compliance with compressive sensing (CS) guidelines and applying appropriate algorithms, we demonstrated reconstruction of spectral (hyper/ultra) datacubes from an order of magnitude fewer samples than taken by conventional sensors. The LC modulator is designed to have an effective width of a few tens of micrometers, so it is prone to imperfections and spatial nonuniformity. In this work, we study this nonuniformity and present a mathematical algorithm that allows inference of the spectral transmission over the entire cell area from only a few calibration measurements.

  3. Passive Fourier-transform infrared spectroscopy of chemical plumes: an algorithm for quantitative interpretation and real-time background removal

    NASA Astrophysics Data System (ADS)

    Polak, Mark L.; Hall, Jeffrey L.; Herr, Kenneth C.

    1995-08-01

    We present a ratioing algorithm for quantitative analysis of the passive Fourier-transform infrared spectrum of a chemical plume. We show that the transmission of a near-field plume is given by tau_plume = (L_obsd - L_bb-plume)/(L_bkgd - L_bb-plume), where tau_plume is the frequency-dependent transmission of the plume, L_obsd is the spectral radiance of the scene that contains the plume, L_bkgd is the spectral radiance of the same scene without the plume, and L_bb-plume is the spectral radiance of a blackbody at the plume temperature. The algorithm simultaneously achieves background removal, elimination of the spectrometer internal signature, and quantification of the plume spectral transmission. It has applications to both real-time processing for plume visualization and quantitative measurement of plume column densities. The plume temperature, which determines L_bb-plume and is not always precisely known, can have a profound effect on the quantitative interpretation of the algorithm and is discussed in detail. Finally, we provide an illustrative example of the use of the algorithm on a trichloroethylene and acetone plume.
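
    The ratioing algorithm translated into code, with the blackbody term evaluated from Planck's law; treat this as a sketch, since units and calibration details follow the instrument rather than the idealized radiance used here:

        import numpy as np

        H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann (SI)

        def planck_radiance(wavelength_m, T):
            """Blackbody spectral radiance (W sr^-1 m^-3) at temperature T."""
            a = 2.0 * H * C ** 2 / wavelength_m ** 5
            return a / np.expm1(H * C / (wavelength_m * KB * T))

        def plume_transmission(L_obsd, L_bkgd, wavelength_m, T_plume):
            """Frequency-dependent plume transmission from the ratioing formula."""
            L_bb = planck_radiance(wavelength_m, T_plume)
            return (L_obsd - L_bb) / (L_bkgd - L_bb)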

  4. Spectral analysis of the Crab Nebula and GRB 160530A with the Compton Spectrometer and Imager

    NASA Astrophysics Data System (ADS)

    Sleator, Clio; Boggs, Steven E.; Chiu, Jeng-Lun; Kierans, Carolyn; Lowell, Alexander; Tomsick, John; Zoglauer, Andreas; Amman, Mark; Chang, Hsiang-Kuang; Tseng, Chao-Hsiung; Yang, Chien-Ying; Lin, Chih H.; Jean, Pierre; von Ballmoos, Peter

    2017-08-01

    The Compton Spectrometer and Imager (COSI) is a balloon-borne soft gamma-ray (0.2-5 MeV) telescope designed to study astrophysical sources including gamma-ray bursts and compact objects. As a compact Compton telescope, COSI has inherent sensitivity to polarization. COSI utilizes 12 germanium detectors to provide excellent spectral resolution. On May 17, 2016, COSI was launched from Wanaka, New Zealand and completed a successful 46-day flight on NASA’s new Superpressure balloon. To perform spectral analysis with COSI, we have developed an accurate instrument model as required for the response matrix. With carefully chosen background regions, we are able to fit the background-subtracted spectra in XSPEC. We have developed a model of the atmosphere above COSI based on the NRLMSISE-00 Atmosphere Model to include in our spectral fits. The Crab and GRB 160530A are among the sources detected during the 2016 flight. We present spectral analysis of these two point sources. Our GRB 160530A results are consistent with those from other instruments, confirming COSI’s spectral abilities. Furthermore, we discuss prospects for measuring the Crab polarization with COSI.

  5. Super-Nyquist shaping and processing technologies for high-spectral-efficiency optical systems

    NASA Astrophysics Data System (ADS)

    Jia, Zhensheng; Chien, Hung-Chang; Zhang, Junwen; Dong, Ze; Cai, Yi; Yu, Jianjun

    2013-12-01

    Implementations of super-Nyquist pulse generation, either digitally using a digital-to-analog converter (DAC) or with an optical filter at the transmitter side, are introduced. Three corresponding signal processing algorithms at the receiver are presented and compared for high-spectral-efficiency (SE) optical systems employing spectral prefiltering. These algorithms are designed to mitigate the inter-symbol interference (ISI) and inter-channel interference (ICI) impairments caused by the bandwidth constraint, and include a 1-tap constant modulus algorithm (CMA) with 3-tap maximum likelihood sequence estimation (MLSE), a regular CMA and digital filter with 2-tap MLSE, and a constant multi-modulus algorithm (CMMA) with 2-tap MLSE. The principles and prefiltering tolerance are presented through numerical and experimental results.
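
    A one-tap CMA update as a sketch of the first receiver option; the step size and target modulus are placeholders, and the MLSE stages are omitted:

        import numpy as np

        def cma_1tap(x, mu=1e-3, R2=1.0):
            """One-tap constant modulus equalizer over received complex samples x."""
            w = 1.0 + 0j                            # single complex tap
            y = np.empty_like(x)
            for n, xn in enumerate(x):
                yn = w * xn
                e = (np.abs(yn) ** 2 - R2) * yn     # CMA error term
                w -= mu * e * np.conj(xn)           # stochastic gradient update
                y[n] = yn
            return y, w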

  6. Hyperspectral image classification by a variable interval spectral average and spectral curve matching combined algorithm

    NASA Astrophysics Data System (ADS)

    Senthil Kumar, A.; Keerthi, V.; Manjunath, A. S.; Werff, Harald van der; Meer, Freek van der

    2010-08-01

    Classification of hyperspectral images has been receiving considerable attention, with many new applications reported from commercial and military sectors. Hyperspectral images are composed of a large number of spectral channels and have the potential to deliver a great deal of information about a remotely sensed scene. However, in addition to high dimensionality, hyperspectral image classification is compounded by the coarse ground pixel size of the sensor, required to achieve an adequate signal-to-noise ratio within a fine spectral passband. This results in multiple ground features jointly occupying a single pixel. Spectral mixture analysis typically begins with pixel classification using spectral matching techniques, followed by the use of spectral unmixing algorithms for estimating endmember abundance values in the pixel. Spectral matching techniques are analogous to supervised pattern recognition approaches and try to estimate the similarity between the spectral signatures of the pixel and a reference target. In this paper, we propose a spectral matching approach that combines two schemes: the variable interval spectral average (VISA) method and the spectral curve matching (SCM) method. The VISA method helps to detect transient spectral features at different scales of spectral windows, while the SCM method finds a match between these features of the pixel and one of the library spectra by least squares fitting. We also compare the performance of the combined algorithm with other spectral matching techniques using simulated and AVIRIS hyperspectral data sets. Our results indicate that the proposed combination technique exhibits stronger performance than the other methods in the classification of both pure and mixed-class pixels simultaneously.

  7. A Spectral Algorithm for Solving the Relativistic Vlasov-Maxwell Equations

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    2001-01-01

    A spectral method algorithm is developed for the numerical solution of the full six-dimensional Vlasov-Maxwell system of equations. Here, the focus is on the electron distribution function, with positive ions providing a constant background. The algorithm consists of a Jacobi polynomial-spherical harmonic formulation in velocity space and a trigonometric formulation in position space. A transform procedure is used to evaluate nonlinear terms. The algorithm is suitable for performing moderate resolution simulations on currently available supercomputers for both scientific and engineering applications.

  8. How to Collect National Institute of Standards and Technology (NIST) Traceable Fluorescence Excitation and Emission Spectra.

    PubMed

    Gilmore, Adam Matthew

    2014-01-01

    Contemporary spectrofluorimeters comprise exciting light sources, excitation and emission monochromators, and detectors that, without correction, yield data not conforming to an ideal spectral response. The correction of the spectral properties of the exciting and emission light paths first requires calibration of the wavelength and spectral accuracy. The exciting beam path can be corrected up to the sample position using a spectrally corrected reference detection system. The corrected reference response accounts for both the spectral intensity and drift of the exciting light source relative to the emission and/or transmission detector responses. The emission detection path must also be corrected for the combined spectral bias of the sample compartment optics, emission monochromator, and detector. There are several crucial issues associated with both excitation and emission correction, including the requirement to account for spectral band-pass and resolution, optical band-pass or neutral density filters, and the position and direction of polarizing elements in the light paths. In addition, secondary correction factors are described, including (1) subtraction of the solvent's fluorescence background, (2) removal of Rayleigh and Raman scattering lines, and (3) correction for sample concentration-dependent inner-filter effects. The importance of National Institute of Standards and Technology (NIST) traceable calibration and correction protocols is explained in light of valid intra- and interlaboratory studies and effective qualitative and quantitative spectral analyses, including multivariate spectral modeling.
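
    Of the secondary corrections listed, the inner-filter correction has a widely used closed-form approximation; a sketch assuming absorbances measured over a standard 1 cm path at the excitation and emission wavelengths:

        def inner_filter_correct(F_obs, A_ex, A_em):
            """Correct fluorescence for primary/secondary inner-filter effects.

            F_obs      : observed fluorescence intensity.
            A_ex, A_em : absorbances at the excitation and emission wavelengths
                         (1 cm path assumed).
            """
            return F_obs * 10 ** ((A_ex + A_em) / 2.0)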

  9. Mapping minerals, amorphous materials, environmental materials, vegetation, water, ice and snow, and other materials: The USGS tricorder algorithm

    NASA Technical Reports Server (NTRS)

    Clark, Roger N.; Swayze, Gregg A.

    1995-01-01

    One of the challenges of imaging spectroscopy is the identification, mapping, and abundance determination of materials, whether mineral, vegetable, or liquid, given enough spectral range, spectral resolution, signal-to-noise, and spatial resolution. Many materials show diagnostic absorption features in the visual and near-infrared region (0.4 to 2.5 micrometers) of the spectrum. This region is covered by modern imaging spectrometers such as AVIRIS. The challenge is to identify the materials from absorption bands in their spectra and to determine what specific analyses must be done to derive particular parameters of interest, ranging from simply identifying a material's presence to deriving its abundance or determining its specific chemistry. Recently, a new analysis algorithm was developed that uses a digital spectral library of known materials and a fast, modified-least-squares method of determining whether a single spectral feature for a given material is present. Clark et al. made another advance in the mapping algorithm: simultaneously mapping multiple minerals using multiple spectral features. This was done by a modified-least-squares fit of spectral features, from data in a digital spectral library, to corresponding spectral features in the image data. This version has now been superseded by a more comprehensive spectral analysis system called Tricorder.

  10. Automatic detection of the breast border and nipple position on digital mammograms using genetic algorithm for asymmetry approach to detection of microcalcifications.

    PubMed

    Karnan, M; Thangavel, K

    2007-07-01

    The presence of microcalcifications in breast tissue is one of the most important signs considered by radiologists for an early diagnosis of breast cancer, which is one of the most common forms of cancer among women. In this paper, a genetic algorithm (GA) is proposed to automatically locate the breast border and nipple position and to discover suspicious regions on digital mammograms, based on asymmetries between the left and right breast images. The basic idea of the asymmetry approach is that the left and right images are subtracted to extract suspicious regions. The proposed system consists of two steps. First, the mammogram images are enhanced using a median filter and normalized; the pectoral muscle region is excluded, and the breast borders of the left and right images are extracted from the binary images and compared. The GA is then applied to refine the detected border, and a figure of merit is calculated to evaluate whether the detected border is accurate. The nipple position is also identified using the GA, and comparison methods are adopted for detection of the suspected areas. Second, using the border points and nipple position as references, the mammogram images are aligned and subtracted to extract the suspicious regions. The algorithms are tested on 114 abnormal digitized mammograms from the Mammographic Image Analysis Society database.

  11. Third Grade Students' Performance on Calculator and Calculator-Related Tasks. Technical Report No. 498.

    ERIC Educational Resources Information Center

    Weaver, J. Fred

    Refinements of work with calculator algorithms previously conducted by the author are reported. Work with "chaining" and the doing/undoing property in addition and subtraction was tested with 24 third-grade students. Results indicated the need for further instruction with both ideas. Students were able to manipulate the calculator keyboard, but…

  12. Ambient-Light-Canceling Camera Using Subtraction of Frames

    NASA Technical Reports Server (NTRS)

    Morookian, John Michael

    2004-01-01

    The ambient-light-canceling camera (ALCC) is a proposed near-infrared electronic camera that would utilize a combination of (1) synchronized illumination during alternate frame periods and (2) subtraction of readouts from consecutive frames to obtain images without a background component of ambient light. The ALCC is intended especially for use in tracking the motion of an eye by the pupil center corneal reflection (PCCR) method. Eye tracking by the PCCR method has shown potential for application in human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological deficiencies. In the PCCR method, an eye is illuminated by near-infrared light from a light-emitting diode (LED). Some of the infrared light is reflected from the surface of the cornea. Some of the infrared light enters the eye through the pupil and is reflected from the back of the eye out through the pupil, a phenomenon commonly observed as the red-eye effect in flash photography. An electronic camera is oriented to image the user's eye. The output of the camera is digitized and processed by algorithms that locate the two reflections; from the locations of the centers of the two reflections, the direction of gaze is computed. As described thus far, the PCCR method is susceptible to errors caused by reflections of ambient light. Although a near-infrared band-pass optical filter can be used to discriminate against ambient light, some sources of ambient light have enough in-band power to compete with the LED signal. The mode of operation of the ALCC would complement or supplant spectral filtering by providing more nearly complete cancellation of the effect of ambient light. In the operation of the ALCC, a near-infrared LED would be pulsed on during one camera frame period and off during the next frame period. Thus, the scene would be illuminated by both the LED (signal) light and the ambient (background) light during one frame period, and by only ambient (background) light during the next frame period. The camera output would be digitized and sent to a computer, wherein the pixel values of the background-only frame would be subtracted from the pixel values of the signal-plus-background frame to obtain signal-only pixel values. To prevent artifacts of motion from entering the images, it would be necessary to acquire image data at a rate greater than the standard video rate of 30 frames per second. For this purpose, the ALCC would exploit a novel control technique developed at NASA's Jet Propulsion Laboratory for advanced charge-coupled-device (CCD) cameras. This technique provides for readout from a subwindow [region of interest (ROI)] within the image frame. Because the desired reflections from the eye would typically occupy a small fraction of the area within the image frame, the ROI capability would make it possible to acquire and subtract pixel values at rates of several hundred frames per second, considerably greater than the standard video rate and sufficient to both (1) suppress motion artifacts and (2) track the motion of the eye between consecutive subtractive frame pairs.
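
    The frame-pair arithmetic reduces to a clipped subtraction; a sketch assuming synchronized LED-on and LED-off ROI frames as integer pixel arrays:

        import numpy as np

        def ambient_cancel(frame_led_on, frame_led_off):
            """Subtract the background-only frame from the signal+background frame."""
            diff = frame_led_on.astype(np.int32) - frame_led_off.astype(np.int32)
            return np.clip(diff, 0, None).astype(np.uint16)  # signal-only image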

  13. Algorithms for Solvents and Spectral Factors of Matrix Polynomials

    DTIC Science & Technology

    1981-01-01

    A generalized Newton method, based on the contracted gradient of a matrix polynomial, is derived for solving the right (left) solvents and spectral factors of matrix polynomials. Two methods of selecting initial estimates for rapid convergence of the newly developed numerical method are proposed. Also, new algorithms for solving complete sets of the right ...

  14. Multi-layer imager design for mega-voltage spectral imaging

    NASA Astrophysics Data System (ADS)

    Myronakis, Marios; Hu, Yue-Houng; Fueglistaller, Rony; Wang, Adam; Baturin, Paul; Huber, Pascal; Morf, Daniel; Star-Lack, Josh; Berbeco, Ross

    2018-05-01

    The architecture of multi-layer imagers (MLIs) can be exploited to provide megavoltage spectral imaging (MVSPI) for specific imaging tasks. In the current work, we investigated bone suppression and gold fiducial contrast enhancement as two clinical tasks which could be improved with spectral imaging. A method based on analytical calculations that enables rapid investigation of MLI component materials and thicknesses was developed and validated against Monte Carlo computations. The figure of merit for task-specific imaging performance was the contrast-to-noise ratio (CNR) of the gold fiducial when the CNR of bone was equal to zero after a weighted subtraction of the signals obtained from each MLI layer. Results demonstrated a sharp increase in the CNR of gold when the build-up component or scintillation materials and thicknesses were modified. The potential for low-cost, prompt implementation of specific modifications (e.g. composition of the build-up component) could accelerate clinical translation of MVSPI.
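
    A sketch of the weighted-subtraction step, where the weight is chosen to null the bone signal between two layer images; the signal models and names are placeholders for illustration:

        import numpy as np

        def bone_nulling_weight(bone_signal_1, bone_signal_2):
            """Weight w such that bone contrast cancels in layer1 - w * layer2."""
            return bone_signal_1 / bone_signal_2   # scalar bone signals per layer

        def weighted_subtraction(layer1, layer2, w):
            """Bone-suppressed image; gold fiducial contrast survives."""
            return layer1 - w * layer2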

  15. Segmentation methods for breast vasculature in dual-energy contrast-enhanced digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Lau, Kristen C.; Lee, Hyo Min; Singh, Tanushriya; Maidment, Andrew D. A.

    2015-03-01

    Dual-energy contrast-enhanced digital breast tomosynthesis (DE CE-DBT) uses an iodinated contrast agent to image the three-dimensional breast vasculature. The University of Pennsylvania has an ongoing DE CE-DBT clinical study in patients with known breast cancers. The breast is compressed continuously and imaged at four time points (1 pre-contrast; 3 post-contrast). DE images are obtained by a weighted logarithmic subtraction of the high-energy (HE) and low-energy (LE) image pairs. Temporal subtraction of the post-contrast DE images from the pre-contrast DE image is performed to analyze iodine uptake. Our previous work investigated image registration methods to correct for patient motion, enhancing the evaluation of vascular kinetics. In this project we investigate a segmentation algorithm which identifies blood vessels in the breast from our temporal DE subtraction images. Anisotropic diffusion filtering, Gabor filtering, and morphological filtering are used for the enhancement of vessel features. Vessel labeling methods are then used to distinguish vessel and background features successfully. Statistical and clinical evaluations of segmentation accuracy in DE-CBT images are ongoing.
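
    A sketch of the weighted logarithmic subtraction and temporal subtraction described above; the weight w would in practice be calibrated to cancel the uninvolved breast tissue, and the names are hypothetical:

        import numpy as np

        def dual_energy_image(I_he, I_le, w):
            """Weighted log subtraction of a high/low-energy image pair."""
            return np.log(I_he) - w * np.log(I_le)

        def temporal_subtraction(de_post, de_pre):
            """Iodine uptake map: post-contrast DE minus pre-contrast DE."""
            return de_post - de_pre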

  16. Combining Visible and Infrared Spectral Tests for Dust Identification

    NASA Technical Reports Server (NTRS)

    Zhou, Yaping; Levy, Robert; Kleidman, Richard; Remer, Lorraine; Mattoo, Shana

    2016-01-01

    The MODIS Dark Target aerosol algorithm over Ocean (DT-O) uses spectral reflectance in the visible, near-IR, and SWIR wavelengths to determine aerosol optical depth (AOD) and Angstrom exponent (AE). Even though DT-O does have "dust-like" models to choose from, dust is not identified a priori before inversion. The "dust-like" models are not true dust models, as they are spherical and do not have enough absorption at short wavelengths, so retrieved AOD and AE for dusty regions tend to be biased. The inference of "dust" is based on post-processing criteria for AOD and AE applied by users. Dust aerosol has known spectral signatures in the near-UV (deep blue), visible, and thermal infrared (TIR) wavelength regions. Multiple dust detection algorithms have been developed over the years with varying detection capabilities. Here, we test a few of these dust detection algorithms to determine whether they can usefully inform the choices made by the DT-O algorithm. We evaluate the following methods: the multichannel imager (MCI) algorithm, which uses spectral threshold tests in the 0.47, 0.64, 0.86, 1.38, 2.26, 3.9, 11.0, and 12.0 micrometer channels together with a spatial uniformity test [Zhao et al., 2010], and the NOAA dust aerosol index (DAI), which uses spectral contrast in the blue channels (412 nm and 440 nm) [Ciren and Kundragunta, 2014]. The MCI is already included as tests within the "Wisconsin" (MOD35) cloud mask algorithm.

  17. ARGUS/LLNL IR Camera Calibration and Characterization

    DTIC Science & Technology

    1989-11-01

    122 of the 244 rows, once every 1/60 second. The even-numbered detector rows, beginning with row zero, are read out in one field; the odd-numbered ... Radiometrically, a very cold reference scene is desirable because the absolute signal level of the reference scene is subtracted from all subsequent ... to have effectively zero radiant energy within the spectral passband of the sensor, and so may be ignored.

  18. White-Light Optical Information Processing and Holography.

    DTIC Science & Technology

    1982-05-03

    ... artifact noise. However, the deblurring spatial filters that we used were of a narrow spectral band centered at 5154 Å green light. To compensate for the scaling ... Keywords: White-Light Holography, Image Processing, Optical Signal Processing, Image Subtraction, Image Deblurring ... Using the white-light optical processing technique, we had shown that the incoherent source technique provides better image quality and very low coherent artifact noise.

  19. Roy-Steiner equations for pion-nucleon scattering

    NASA Astrophysics Data System (ADS)

    Ditsche, C.; Hoferichter, M.; Kubis, B.; Meißner, U.-G.

    2012-06-01

    Starting from hyperbolic dispersion relations, we derive a closed system of Roy-Steiner equations for pion-nucleon scattering that respects analyticity, unitarity, and crossing symmetry. We work out analytically all kernel functions and unitarity relations required for the lowest partial waves. In order to suppress the dependence on the high-energy regime we also consider once- and twice-subtracted versions of the equations, where we identify the subtraction constants with subthreshold parameters. Assuming Mandelstam analyticity we determine the maximal range of validity of these equations. As a first step towards the solution of the full system we cast the equations for the ππ → N̄N partial waves into the form of a Muskhelishvili-Omnès problem with finite matching point, which we solve numerically in the single-channel approximation. We investigate in detail the role of individual contributions to our solutions and discuss some consequences for the spectral functions of the nucleon electromagnetic form factors.

  20. Statistical analysis and machine learning algorithms for optical biopsy

    NASA Astrophysics Data System (ADS)

    Wu, Binlin; Liu, Cheng-hui; Boydston-White, Susie; Beckman, Hugh; Sriramoju, Vidyasagar; Sordillo, Laura; Zhang, Chunyuan; Zhang, Lin; Shi, Lingyan; Smith, Jason; Bailin, Jacob; Alfano, Robert R.

    2018-02-01

    Analyzing spectral or imaging data collected with various optical biopsy methods is oftentimes difficult due to the complexity of the underlying biology. Robust methods that can utilize the spectral or imaging data and detect the characteristic spectral or spatial signatures of different tissue types are challenging to develop but highly desired. In this study, we used various machine learning algorithms to analyze a spectral dataset acquired from normal and cancerous human skin tissue samples using resonance Raman spectroscopy with 532 nm excitation. Algorithms including principal component analysis, nonnegative matrix factorization, and an autoencoder artificial neural network are used to reduce the dimension of the dataset and detect features. A support vector machine with a linear kernel is used to classify the normal and cancerous tissue samples. The efficacies of the methods are compared.
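
    A compact pipeline in the spirit described, using PCA for dimension reduction and a linear SVM for classification; scikit-learn is used for illustration, and the data below are random placeholders, not the study's spectra:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        # X: (n_samples, n_wavenumbers) Raman spectra; y: 0 = normal, 1 = cancerous
        X = np.random.rand(60, 1024)                 # placeholder spectra
        y = np.random.randint(0, 2, 60)              # placeholder labels

        clf = make_pipeline(PCA(n_components=10), SVC(kernel="linear"))
        print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy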

  1. Efficient geometric rectification techniques for spectral analysis algorithm

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Pang, S. S.; Curlander, J. C.

    1992-01-01

    The spectral analysis algorithm is a viable technique for processing synthetic aperture radar (SAR) data at near-real-time throughput rates by trading off image resolution. One major challenge of the spectral analysis algorithm is that the output image, often referred to as the range-Doppler image, is represented along iso-range and iso-Doppler lines, a curved grid format; this phenomenon is known as the fan-shape effect. Therefore, resampling is required to convert the range-Doppler image into a rectangular grid format before the individual images can be overlaid to form seamless multi-look strip imagery. An efficient algorithm for geometric rectification of the range-Doppler image is presented. The proposed algorithm, realized in two one-dimensional resampling steps, takes into consideration the fan-shape phenomenon of the range-Doppler image as well as high squint angles and updates of the cross-track and along-track Doppler parameters. No ground reference points are required.

  2. An Extended Spectral-Spatial Classification Approach for Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Akbari, D.

    2017-11-01

    In this paper an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different methods of dimension reduction are first used to obtain the subspace of hyperspectral data: (1) unsupervised feature extraction methods, including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction methods, including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); and (3) a genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm, in which the markers are extracted from the classification maps obtained by both an SVM and a watershed segmentation algorithm. The proposed approach is evaluated on the Pavia University hyperspectral data. Experimental results show that the proposed approach using GA achieves approximately 8% higher overall accuracy than the original MSF-based algorithm.

  3. A joint resonance frequency estimation and in-band noise reduction method for enhancing the detectability of bearing fault signals

    NASA Astrophysics Data System (ADS)

    Bozchalooi, I. Soltani; Liang, Ming

    2008-05-01

    The vibration signal measured from a bearing contains vital information for prognostic and health assessment purposes. However, when bearings are installed as part of a complex mechanical system, the measured signal is often heavily clouded by various noises due to the compounded effect of interference from other machine elements and background noise present in the measuring device. As such, reliable condition monitoring would not be possible without proper de-noising. This is particularly true for incipient bearing faults with very weak signature signals. A new de-noising scheme is proposed in this paper to enhance the vibration signals acquired from faulty bearings. This de-noising scheme features a spectral subtraction to trim down the in-band noise prior to wavelet filtering. The Gabor wavelet is used in the wavelet transform, and its parameters, i.e., scale and shape factor, are selected in separate steps. The proper scale is found with a novel resonance estimation algorithm. This algorithm makes use of information derived from variable shaft rotational speed, even though such variation is highly undesirable in fault detection since it complicates the process substantially. The shape factor value is then selected by minimizing a smoothness index, defined as the ratio of the geometric mean to the arithmetic mean of the wavelet coefficient moduli. De-noising results are presented for simulated signals and experimental data acquired from both normal and faulty bearings with defective outer race, inner race, and rolling element.
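
    The smoothness index translates directly into code; a sketch over the moduli of Gabor wavelet coefficients, which are assumed computed elsewhere:

        import numpy as np

        def smoothness_index(coeff_moduli, eps=1e-12):
            """Ratio of geometric to arithmetic mean of wavelet coefficient moduli."""
            m = np.asarray(coeff_moduli, dtype=float) + eps
            geometric = np.exp(np.mean(np.log(m)))
            arithmetic = m.mean()
            return geometric / arithmetic   # minimized over candidate shape factors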

  4. Detection of illicit substances in fingerprints by infrared spectral imaging.

    PubMed

    Ng, Ping Hei Ronnie; Walker, Sarah; Tahtouh, Mark; Reedy, Brian

    2009-08-01

    FTIR and Raman spectral imaging can be used to simultaneously image a latent fingerprint and detect exogenous substances deposited within it. These substances might include drugs of abuse or traces of explosives or gunshot residue. In this work, spectral searching algorithms were tested for their efficacy in finding targeted substances deposited within fingerprints. "Reverse" library searching, where a large number of possibly poor-quality spectra from a spectral image are searched against a small number of high-quality reference spectra, poses problems for common search algorithms as they are usually implemented. Out of a range of algorithms which included conventional Euclidean distance searching, the spectral angle mapper (SAM) and correlation algorithms gave the best results when used with second-derivative image and reference spectra. All methods tested gave poorer performances with first derivative and undifferentiated spectra. In a search against a caffeine reference, the SAM and correlation methods were able to correctly rank a set of 40 confirmed but poor-quality caffeine spectra at the top of a dataset which also contained 4,096 spectra from an image of an uncontaminated latent fingerprint. These methods also successfully and individually detected aspirin, diazepam and caffeine that had been deposited together in another fingerprint, and they did not indicate any of these substances as a match in a search for another substance which was known not to be present. The SAM was used to successfully locate explosive components in fingerprints deposited on silicon windows. The potential of other spectral searching algorithms used in the field of remote sensing is considered, and the applicability of the methods tested in this work to other modes of spectral imaging is discussed.
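
    The SAM score is an arccosine of a normalized inner product; a sketch that, as in the paper, can equally be applied to second-derivative spectra:

        import numpy as np

        def spectral_angle(s, r):
            """Angle (radians) between a measured spectrum s and a reference r."""
            cos_theta = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
            return np.arccos(np.clip(cos_theta, -1.0, 1.0))  # smaller = better match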

  5. Parallel exploitation of a spatial-spectral classification approach for hyperspectral images on RVC-CAL

    NASA Astrophysics Data System (ADS)

    Lazcano, R.; Madroñal, D.; Fabelo, H.; Ortega, S.; Salvador, R.; Callicó, G. M.; Juárez, E.; Sanz, C.

    2017-10-01

    Hyperspectral Imaging (HI) assembles high-resolution spectral information from hundreds of narrow bands across the electromagnetic spectrum, generating 3D data cubes in which each spatial pixel gathers the spectral reflectance information of the scene. As a result, each image is composed of large volumes of data, which turns its processing into a challenge, as performance requirements have been continuously tightened. For instance, new HI applications demand real-time responses; hence, parallel processing becomes a necessity, and the intrinsic parallelism of the algorithms must be exploited. In this paper, a spatial-spectral classification approach has been implemented using a dataflow language known as RVC-CAL. This language represents a system as a set of functional units, and its main advantage is that it simplifies the parallelization process by mapping the different blocks over different processing units. The spatial-spectral classification approach aims at refining the classification results previously obtained by using a K-Nearest Neighbors (KNN) filtering process, in which both the pixel spectral value and the spatial coordinates are considered. To do so, KNN needs two inputs: a one-band representation of the hyperspectral image and the classification results provided by a pixel-wise classifier. Thus, the spatial-spectral classification algorithm is divided into three stages: a Principal Component Analysis (PCA) algorithm for computing the one-band representation of the image, a Support Vector Machine (SVM) classifier, and the KNN-based filtering algorithm. The parallelization of these algorithms shows promising results in terms of computational time, as mapping them over different cores presents a speedup of 2.69x when using 3 cores. Consequently, experimental results demonstrate that real-time processing of hyperspectral images is achievable.

  6. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions

    NASA Astrophysics Data System (ADS)

    Novosad, Philip; Reader, Andrew J.

    2016-06-01

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [11C]SCH23390 data, showing promising results.

  7. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions.

    PubMed

    Novosad, Philip; Reader, Andrew J

    2016-06-21

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [(18)F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [(11)C]SCH23390 data, showing promising results.

  8. Improved wavelet packet classification algorithm for vibrational intrusions in distributed fiber-optic monitoring systems

    NASA Astrophysics Data System (ADS)

    Wang, Bingjie; Pi, Shaohua; Sun, Qi; Jia, Bo

    2015-05-01

    An improved classification algorithm that considers multiscale wavelet packet Shannon entropy is proposed. Decomposition coefficients at all levels are obtained to build the initial Shannon entropy feature vector. After subtracting the Shannon entropy map of the background signal, the components with the strongest discriminating power in the initial feature vector are picked out to rebuild the Shannon entropy feature vector, which is then fed to a radial basis function (RBF) neural network for classification. Four types of man-made vibrational intrusion signals are recorded with a modified Sagnac interferometer. The performance of the improved classification algorithm is evaluated in classification experiments with the RBF neural network under different diffusion coefficients. An 85% classification accuracy rate is achieved, which is higher than that of other common algorithms. The classification results show that this improved classification algorithm can be used to classify vibrational intrusion signals in an automatic real-time monitoring system.
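
    A rough Python sketch of the entropy feature-extraction step follows, using PyWavelets; the wavelet, decomposition depth, and variable names are assumptions rather than the authors' settings.

      import numpy as np
      import pywt

      def wp_shannon_entropy(signal, wavelet='db4', maxlevel=4):
          # Shannon entropy of every wavelet-packet node up to maxlevel.
          wp = pywt.WaveletPacket(signal, wavelet, maxlevel=maxlevel)
          feats = []
          for level in range(1, maxlevel + 1):
              for node in wp.get_level(level, order='natural'):
                  c2 = node.data ** 2
                  p = c2 / (c2.sum() + 1e-12)           # energy distribution
                  feats.append(-np.sum(p * np.log(p + 1e-12)))
          return np.array(feats)

      # Background-referenced feature vector, as in the abstract:
      # feats = wp_shannon_entropy(intrusion) - wp_shannon_entropy(background)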

  9. Anatomy-Based Algorithms for Detecting Oral Cancer Using Reflectance and Fluorescence Spectroscopy

    PubMed Central

    McGee, Sasha; Mardirossian, Vartan; Elackattu, Alphi; Mirkovic, Jelena; Pistey, Robert; Gallagher, George; Kabani, Sadru; Yu, Chung-Chieh; Wang, Zimmern; Badizadegan, Kamran; Grillone, Gregory; Feld, Michael S.

    2010-01-01

    Objectives We used reflectance and fluorescence spectroscopy to noninvasively and quantitatively distinguish benign from dysplastic/malignant oral lesions. We designed diagnostic algorithms to account for differences in the spectral properties among anatomic sites (gingiva, buccal mucosa, etc.). Methods In vivo reflectance and fluorescence spectra were collected from 71 patients with oral lesions. The tissue was then biopsied and the specimen evaluated by histopathology. Quantitative parameters related to tissue morphology and biochemistry were extracted from the spectra. Diagnostic algorithms specific for combinations of sites with similar spectral properties were developed. Results Discrimination of benign from dysplastic/malignant lesions was most successful when algorithms were designed for individual sites (area under the receiver operator characteristic curve [ROC-AUC], 0.75 for the lateral surface of the tongue) and was least accurate when all sites were combined (ROC-AUC, 0.60). The combination of sites with similar spectral properties (floor of mouth and lateral surface of the tongue) yielded an ROC-AUC of 0.71. Conclusions Accurate spectroscopic detection of oral disease must account for spectral variations among anatomic sites. Anatomy-based algorithms for single sites or combinations of sites demonstrated good diagnostic performance in distinguishing benign lesions from dysplastic/malignant lesions and consistently performed better than algorithms developed for all sites combined. PMID:19999369

  10. The IPAC Image Subtraction and Discovery Pipeline for the Intermediate Palomar Transient Factory

    NASA Astrophysics Data System (ADS)

    Masci, Frank J.; Laher, Russ R.; Rebbapragada, Umaa D.; Doran, Gary B.; Miller, Adam A.; Bellm, Eric; Kasliwal, Mansi; Ofek, Eran O.; Surace, Jason; Shupe, David L.; Grillmair, Carl J.; Jackson, Ed; Barlow, Tom; Yan, Lin; Cao, Yi; Cenko, S. Bradley; Storrie-Lombardi, Lisa J.; Helou, George; Prince, Thomas A.; Kulkarni, Shrinivas R.

    2017-01-01

    We describe the near real-time transient-source discovery engine for the intermediate Palomar Transient Factory (iPTF), currently in operation at the Infrared Processing and Analysis Center (IPAC), Caltech. We coin this system the IPAC/iPTF Discovery Engine (or IDE). We review the algorithms used for PSF-matching, image subtraction, detection, photometry, and machine-learned (ML) vetting of extracted transient candidates. We also review the performance of our ML classifier. For a limiting signal-to-noise ratio of 4 in relatively unconfused regions, bogus candidates from processing artifacts and imperfect image subtractions outnumber real transients by ≃10:1. This can be considerably higher for image data with inaccurate astrometric and/or PSF-matching solutions. Despite this occasionally high contamination rate, the ML classifier is able to identify real transients with an efficiency (or completeness) of ≃97% for a maximum tolerable false-positive rate of 1% when classifying raw candidates. All subtraction-image metrics, source features, ML probability-based real-bogus scores, contextual metadata from other surveys, and possible associations with known Solar System objects are stored in a relational database for retrieval by the various science working groups. We review our efforts in mitigating false positives and our experience in optimizing the overall system in response to the multitude of science projects underway with iPTF.
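
    The toy Python sketch below illustrates only the core idea of PSF-matched image subtraction, blurring the sharper reference to the science image's seeing before differencing; production pipelines such as IDE fit spatially varying matching kernels, so everything here is an illustrative assumption.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def difference_image(science, reference, sci_fwhm, ref_fwhm):
          # Blur the sharper reference to the science seeing (Gaussian PSFs
          # assumed, FWHM in pixels), then subtract to reveal transients.
          sigma = np.sqrt(max(sci_fwhm ** 2 - ref_fwhm ** 2, 0.0)) / 2.355
          return science - gaussian_filter(reference, sigma)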

  11. The IPAC Image Subtraction and Discovery Pipeline for the Intermediate Palomar Transient Factory

    NASA Technical Reports Server (NTRS)

    Masci, Frank J.; Laher, Russ R.; Rebbapragada, Umaa D.; Doran, Gary B.; Miller, Adam A.; Bellm, Eric; Kasliwal, Mansi; Ofek, Eran O.; Surace, Jason; Shupe, David L.; hide

    2016-01-01

    We describe the near real-time transient-source discovery engine for the intermediate Palomar Transient Factory (iPTF), currently in operation at the Infrared Processing and Analysis Center (IPAC), Caltech. We coin this system the IPAC/iPTF Discovery Engine (or IDE). We review the algorithms used for PSF-matching, image subtraction, detection, photometry, and machine-learned (ML) vetting of extracted transient candidates. We also review the performance of our ML classifier. For a limiting signal-to-noise ratio of 4 in relatively unconfused regions, bogus candidates from processing artifacts and imperfect image subtractions outnumber real transients by approximately 10:1. This can be considerably higher for image data with inaccurate astrometric and/or PSF-matching solutions. Despite this occasionally high contamination rate, the ML classifier is able to identify real transients with an efficiency (or completeness) of approximately 97% for a maximum tolerable false-positive rate of 1% when classifying raw candidates. All subtraction-image metrics, source features, ML probability-based real-bogus scores, contextual metadata from other surveys, and possible associations with known Solar System objects are stored in a relational database for retrieval by the various science working groups. We review our efforts in mitigating false positives and our experience in optimizing the overall system in response to the multitude of science projects underway with iPTF.

  12. Broadband Gerchberg-Saxton algorithm for freeform diffractive spectral filter design.

    PubMed

    Vorndran, Shelby; Russo, Juan M; Wu, Yuechen; Pelaez, Silvana Ayala; Kostuk, Raymond K

    2015-11-30

    A multi-wavelength expansion of the Gerchberg-Saxton (GS) algorithm is developed to design and optimize a surface relief Diffractive Optical Element (DOE). The DOE simultaneously diffracts distinct wavelength bands into separate target regions. A description of the algorithm is provided, and parameters that affect filter performance are examined. Performance is based on the spectral power collected within specified regions on a receiver plane. The modified GS algorithm is used to design spectrum splitting optics for CdSe and Si photovoltaic (PV) cells. The DOE has an average optical efficiency of 87.5% over the spectral bands of interest (400-710 nm and 710-1100 nm). Simulated PV conversion efficiency is 37.7%, which is 29.3% higher than the efficiency of the better-performing PV cell without spectrum splitting optics.
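
    A minimal Python sketch of a multi-wavelength Gerchberg-Saxton loop for a phase-only surface relief is shown below; the idealised FFT far-field propagation, height wrapping, and phase-averaging scheme are assumptions for illustration, not the paper's optimisation.

      import numpy as np

      def broadband_gs(targets, wavelengths, n=256, n_iter=100):
          # targets: dict wavelength -> desired far-field amplitude (n x n),
          # nonzero only inside that band's target region.  The DOE is kept
          # as a surface height h; each band sees phase 2*pi*h/lam.
          h = np.random.rand(n, n)
          for _ in range(n_iter):
              heights = []
              for lam in wavelengths:
                  field = np.exp(2j * np.pi * h / lam)
                  far = np.fft.fft2(field)
                  far = targets[lam] * np.exp(1j * np.angle(far))  # impose amplitude
                  back = np.fft.ifft2(far)
                  heights.append(np.angle(back) * lam / (2 * np.pi))
              h = np.mean(heights, axis=0) % max(wavelengths)      # wrap height
          return h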

  13. Methodology for the Elimination of Reflection and System Vibration Effects in Particle Image Velocimetry Data Processing

    NASA Technical Reports Server (NTRS)

    Bremmer, David M.; Hutcheson, Florence V.; Stead, Daniel J.

    2005-01-01

    A methodology to eliminate model reflection and system vibration effects from post-processed particle image velocimetry (PIV) data is presented. Reflection and vibration lead to loss of data and biased velocity calculations in PIV processing. A series of algorithms were developed to alleviate these problems. Reflections emanating from the model surface caused by the laser light sheet are removed from the PIV images by subtracting an image in which only the reflections are visible from all of the images within a data acquisition set. The result is a set of PIV images where only the seeded particles are apparent. Fiduciary marks painted on the surface of the test model were used as reference points in the images. By locating the centroids of these marks it was possible to shift all of the images to a common reference frame. This image alignment procedure, as well as the subtraction of model reflection, is performed in a first algorithm. Once the images have been shifted, they are compared with a background image that was recorded under no-flow conditions. The second and third algorithms find the coordinates of fiduciary marks in the acquisition set images and the background image and calculate the displacement between these images. The final algorithm shifts all of the images so that the fiduciary mark centroids lie in the same location as the background image centroids. This methodology effectively eliminated the effects of vibration so that unbiased data could be used for PIV processing. The PIV data used for this work were generated at the NASA Langley Research Center Quiet Flow Facility. The experiment entailed flow visualization near the flap side edge region of an airfoil model. Commercial PIV software was used for data acquisition and processing. In this paper, the experiment and the PIV acquisition of the data are described. The methodology used to develop the algorithms for reflection and system vibration removal is stated, and the implementation, testing and validation of these algorithms are presented.

  14. Quantitative Image Quality and Histogram-Based Evaluations of an Iterative Reconstruction Algorithm at Low-to-Ultralow Radiation Dose Levels: A Phantom Study in Chest CT

    PubMed Central

    Lee, Ki Baek

    2018-01-01

    Objective To describe the quantitative image quality and histogram-based evaluation of an iterative reconstruction (IR) algorithm in chest computed tomography (CT) scans at low-to-ultralow CT radiation dose levels. Materials and Methods In an adult anthropomorphic phantom, chest CT scans were performed with 128-section dual-source CT at 70, 80, 100, 120, and 140 kVp, and at the reference (3.4 mGy in volume CT Dose Index [CTDIvol]), 30%-, 60%-, and 90%-reduced radiation dose levels (2.4, 1.4, and 0.3 mGy). The CT images were reconstructed by using filtered back projection (FBP) algorithms and an IR algorithm with strengths 1, 3, and 5. Image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were statistically compared between different dose levels, tube voltages, and reconstruction algorithms. Moreover, histograms of subtraction images before and after standardization in the x- and y-axes were visually compared. Results Compared with FBP images, IR images with strengths 1, 3, and 5 demonstrated image noise reduction up to 49.1%, SNR increase up to 100.7%, and CNR increase up to 67.3%. Noteworthy image quality degradations on IR images, including a 184.9% increase in image noise, a 63.0% decrease in SNR, and a 51.3% decrease in CNR, were shown between the 60%- and 90%-reduced levels of radiation dose (p < 0.0001). Subtraction histograms between FBP and IR images showed progressively increased dispersion with increased IR strength and increased dose reduction. After standardization, the histograms appeared deviated and ragged between FBP images and IR images with strength 3 or 5, but almost normally distributed between FBP images and IR images with strength 1. Conclusion The IR algorithm may be used to save radiation doses without substantial image quality degradation in chest CT scanning of the adult anthropomorphic phantom, down to approximately 1.4 mGy in CTDIvol (60% reduced dose). PMID:29354008
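
    For concreteness, a minimal Python sketch of the ROI-based noise, SNR, and CNR computations used in studies like this one is given below; the ROI definitions and names are hypothetical.

      import numpy as np

      def roi_metrics(img, signal_roi, background_roi):
          # signal_roi / background_roi: tuples of slices selecting ROIs.
          sig, bkg = img[signal_roi], img[background_roi]
          noise = bkg.std()                        # image noise (SD, in HU)
          snr = sig.mean() / noise
          cnr = (sig.mean() - bkg.mean()) / noise
          return noise, snr, cnr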

  15. A spectral image processing algorithm for evaluating the influence of the illuminants on the reconstructed reflectance

    NASA Astrophysics Data System (ADS)

    Toadere, Florin

    2017-12-01

    A spectral image processing algorithm that allows the illumination of the scene with different illuminants, together with the reconstruction of the scene's reflectance, is presented. A color checker spectral image and CIE A (warm light, 2700 K), D65 (cold light, 6500 K) and Cree TW Series LED T8 (4000 K) illuminants are employed for scene illumination. The illuminants used in the simulations have different spectra and, as a result of their illumination, the colors of the scene change. The influence of the illuminants on the reconstruction of the scene's reflectance is estimated. Demonstrative images and reflectance spectra illustrating the operation of the algorithm are presented.

  16. Visible-infrared micro-spectrometer based on a preaggregated silver nanoparticle monolayer film and an infrared sensor card

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Peng, Jing-xiao; Ho, Ho-pui; Song, Chun-yuan; Huang, Xiao-li; Zhu, Yong-yuan; Li, Xing-ao; Huang, Wei

    2018-01-01

    By using a preaggregated silver nanoparticle monolayer film and an infrared sensor card, we demonstrate a miniature spectrometer design that covers a broad wavelength range from visible to infrared with high spectral resolution. The spectral contents of an incident probe beam are reconstructed by solving a matrix equation with a smoothing simulated annealing algorithm. The proposed spectrometer offers significant advantages over current instruments that are based on Fourier transform and grating dispersion, in terms of size, resolution, spectral range, cost and reliability. The spectrometer contains three components, which are used for dispersion, frequency conversion and detection. Disordered silver nanoparticles in the dispersion component reduce the fabrication complexity. An infrared sensor card in the conversion component broadens the operational spectral range of the system into the visible and infrared bands. Since the CCD used in the detection component provides a very large number of intensity measurements, one can reconstruct the final spectrum with high resolution. As an additional feature of our algorithm for solving the matrix equation, which makes it suitable for reconstructing both broadband and narrowband signals, we have adopted a smoothing step based on simulated annealing. This step improves the accuracy of the spectral reconstruction.
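
    To make the matrix-equation step concrete, the following Python stand-in recovers a spectrum from a calibrated response matrix with a smoothness penalty; it substitutes Tikhonov-regularised least squares for the paper's smoothing simulated annealing solver, and all symbols are assumptions.

      import numpy as np

      def reconstruct_spectrum(R, m, lam=1e-2):
          # Solve m = R @ s for the incident spectrum s, where R is the
          # (n_measurements x n_wavelengths) calibrated response matrix.
          # A second-difference penalty supplies the smoothing step.
          n = R.shape[1]
          D = np.diff(np.eye(n), n=2, axis=0)      # second-difference operator
          A = R.T @ R + lam * (D.T @ D)
          return np.linalg.solve(A, R.T @ m)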

  17. Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation

    PubMed Central

    Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

    This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain. This is in contrast to most current models, which work in the frequency domain. Exact signal estimation is a computationally intractable problem. We derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using a minimal Kullback–Leibler (KL) divergence criterion. The frequency domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude. Correspondingly, the log-spectral domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, gain and noise spectrum adaptation are implemented using the expectation–maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, a lower word recognition error rate, and less spectral distortion. PMID:20428253

  18. Attenuated Total Reflection Fourier Transform Infrared (ATR FT-IR) Spectroscopy as an Analytical Method to Investigate the Secondary Structure of a Model Protein Embedded in Solid Lipid Matrices.

    PubMed

    Zeeshan, Farrukh; Tabbassum, Misbah; Jorgensen, Lene; Medlicott, Natalie J

    2018-02-01

    Protein drugs may encounter conformational perturbations during the formulation processing of lipid-based solid dosage forms. In aqueous protein solutions, attenuated total reflection Fourier transform infrared (ATR FT-IR) spectroscopy can investigate these conformational changes following the subtraction of the spectral interference of the solvent with the protein amide I band. However, in solid dosage forms, the possible spectral contribution of lipid carriers to the protein amide I band may be an obstacle to determining conformational alterations. The objective of this study was to develop an ATR FT-IR spectroscopic method for the analysis of protein secondary structure embedded in solid lipid matrices. Bovine serum albumin (BSA) was chosen as a model protein, while Precirol ATO 5 (glycerol palmitostearate, melting point 58 °C) was employed as the model lipid matrix. Bovine serum albumin was incorporated into the lipid using physical mixing, melting and mixing, or wet granulation mixing methods. Attenuated total reflection FT-IR spectroscopy and size exclusion chromatography (SEC) were performed for the analysis of BSA secondary structure and its dissolution in aqueous media, respectively. The results showed significant spectral interference of Precirol ATO 5 with the BSA amide I band, which could be subtracted at lipid contents of up to 90% w/w to analyze the BSA secondary structure. In addition, ATR FT-IR spectroscopy also detected thermally denatured BSA solids alone and in the presence of the lipid matrix, indicating its suitability for the detection of denatured protein solids in lipid matrices. Despite being in the solid state, conformational changes occurred in BSA upon incorporation into solid lipid matrices. However, the extent of these conformational alterations was found to depend on the mixing method employed, as indicated by area overlap calculations. For instance, the melting and mixing method had a negligible effect on BSA secondary structure, whereas the wet granulation mixing method promoted more changes. Size exclusion chromatography analysis showed the complete dissolution of BSA in the aqueous media employed in the wet granulation method. In conclusion, an ATR FT-IR spectroscopic method was successfully developed to investigate BSA secondary structure in solid lipid matrices following the subtraction of the lipid spectral interference. The ATR FT-IR spectroscopy could further be applied to investigate secondary structure perturbations of therapeutic proteins during formulation development.
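
    A minimal Python sketch of the two spectral operations described here, scaled subtraction of the lipid reference and an area-overlap comparison of amide I band shapes, is given below; the anchor-band scaling rule and all names are assumptions.

      import numpy as np

      def subtract_lipid(mixture, lipid, anchor):
          # Scale the pure-lipid reference so a lipid-only marker band
          # (the 'anchor' slice) cancels, then subtract, exposing the
          # protein amide I region.
          k = mixture[anchor].sum() / lipid[anchor].sum()
          return mixture - k * lipid

      def area_overlap(a, b):
          # Normalised area overlap (0-1) between two baseline-corrected,
          # non-negative amide I band shapes on a common wavenumber axis.
          a = a / np.trapz(a)
          b = b / np.trapz(b)
          return np.trapz(np.minimum(a, b))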

  19. A general method for baseline-removal in ultrafast electron powder diffraction data using the dual-tree complex wavelet transform.

    PubMed

    René de Cotret, Laurent P; Siwick, Bradley J

    2017-07-01

    The general problem of background subtraction in ultrafast electron powder diffraction (UEPD) is presented with a focus on the diffraction patterns obtained from materials of moderately complex structure which contain many overlapping peaks and effectively no scattering vector regions that can be considered exclusively background. We compare the performance of background subtraction algorithms based on discrete and dual-tree complex (DTCWT) wavelet transforms when applied to simulated UEPD data on the M1-R phase transition in VO2 with a time-varying background. We find that the DTCWT approach is capable of extracting intensities that are accurate to better than 2% across the whole range of scattering vector simulated, effectively independent of delay time. A Python package is available.
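
    Since the abstract mentions a Python package, an illustrative baseline-removal loop is sketched below; it uses the ordinary discrete wavelet transform from PyWavelets as a stand-in for the dual-tree complex transform, with wavelet, level, and iteration count as assumptions.

      import numpy as np
      import pywt

      def wavelet_baseline(y, wavelet='sym6', level=6, n_iter=50):
          # Keep only the coarse approximation at each pass and clamp the
          # estimate below the data so diffraction peaks are not absorbed
          # into the baseline.
          baseline = y.astype(float).copy()
          for _ in range(n_iter):
              coeffs = pywt.wavedec(baseline, wavelet, level=level)
              coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]
              approx = pywt.waverec(coeffs, wavelet)[: len(y)]
              baseline = np.minimum(baseline, approx)
          return baseline

      # peaks_only = y - wavelet_baseline(y)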

  20. The behaviour of the excess CA II H and K and Hɛ emissions in chromospherically active binaries.

    NASA Astrophysics Data System (ADS)

    Montes, D.; Fernandez-Figueroa, M. J.; Cornide, M.; de Castro, E.

    1996-08-01

    In this work we analyze the behaviour of the excess Ca II H and K and Hɛ emissions in a sample of 73 chromospherically active binary systems (RS CVn and BY Dra classes), of different activity levels and luminosity classes. This sample includes the 53 stars analyzed by Fernandez-Figueroa et al. (1994) and the observations of 28 systems described by Montes et al. (1995c). By using the spectral subtraction technique (subtraction of a synthesized stellar spectrum constructed from reference stars of spectral type and luminosity class similar to those of the binary star components) we obtain the active-chromosphere contribution to the Ca II H and K lines in these 73 systems. We have determined the excess Ca II H and K emission equivalent widths and converted them into surface fluxes. The emissions arising from each component were obtained when it was possible to deblend both contributions. We have found that the components of active binaries are generally stronger emitters than single active stars for a given effective temperature and rotation rate. A slight decline of the excess Ca II H and K emissions towards longer rotation periods, P_rot, and larger Rossby numbers, R_0, is found. When we use R_0 instead of P_rot the scatter is reduced and a saturation at R_0 ≈ 0.3 is observed. A good correlation between the excess Ca II K and Hɛ chromospheric emission fluxes has been found. The correlations obtained between the excess Ca II K emission and other activity indicators, (C IV in the transition region, and X-rays in the corona) indicate that the exponents of the power-law relations increase with the formation temperature of the spectral features.
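
    A bare-bones Python rendering of the spectral subtraction step follows: subtract the synthesized inactive-star spectrum and integrate the residual over the line window to obtain the excess-emission equivalent width; variable names are hypothetical and the spectra are assumed continuum-normalised.

      import numpy as np

      def excess_ew(wave, flux, synth, window):
          # Subtract the synthesized spectrum and integrate the residual
          # emission over the line window (a slice) to get the excess
          # equivalent width in wavelength units.
          residual = flux - synth
          return np.trapz(residual[window], wave[window])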

  1. Robust numerical electromagnetic eigenfunction expansion algorithms

    NASA Astrophysics Data System (ADS)

    Sainath, Kamalesh

    This thesis summarizes developments in rigorous, full-wave, numerical spectral-domain (integral plane wave eigenfunction expansion [PWE]) evaluation algorithms concerning time-harmonic electromagnetic (EM) fields radiated by generally-oriented and positioned sources within planar and tilted-planar layered media exhibiting general anisotropy, thickness, layer number, and loss characteristics. The work is motivated by the need to accurately and rapidly model EM fields radiated by subsurface geophysical exploration sensors probing layered, conductive media, where complex geophysical and man-made processes can lead to micro-laminate and micro-fractured geophysical formations exhibiting, at the lower (sub-2MHz) frequencies typically employed for deep EM wave penetration through conductive geophysical media, bulk-scale anisotropic (i.e., directional) electrical conductivity characteristics. When the planar-layered approximation (layers of piecewise-constant material variation and transversely-infinite spatial extent) is locally, near the sensor region, considered valid, numerical spectral-domain algorithms are suitable due to their strong low-frequency stability characteristic, and ability to numerically predict time-harmonic EM field propagation in media with response characterized by arbitrarily lossy and (diagonalizable) dense, anisotropic tensors. If certain practical limitations are addressed, PWE can robustly model sensors with general position and orientation that probe generally numerous, anisotropic, lossy, and thick layers. The main thesis contributions, leading to a sensor and geophysical environment-robust numerical modeling algorithm, are as follows: (1) Simple, rapid estimator of the region (within the complex plane) containing poles, branch points, and branch cuts (critical points) (Chapter 2), (2) Sensor and material-adaptive azimuthal coordinate rotation, integration contour deformation, integration domain sub-region partition and sub-region-dependent integration order (Chapter 3), (3) Integration partition-extrapolation-based (Chapter 3) and Gauss-Laguerre Quadrature (GLQ)-based (Chapter 4) evaluations of the deformed, semi-infinite-length integration contour tails, (4) Robust in-situ-based (i.e., at the spectral-domain integrand level) direct/homogeneous-medium field contribution subtraction and analytical curbing of the source current spatial spectrum function's ill behavior (Chapter 5), and (5) Analytical re-casting of the direct-field expressions when the source is embedded within a NBAM, short for non-birefringent anisotropic medium (Chapter 6). The benefits of these contributions are, respectively, (1) Avoiding computationally intensive critical-point location and tracking (computation time savings), (2) Sensor and material-robust curbing of the integrand's oscillatory and slow decay behavior, as well as preventing undesirable critical-point migration within the complex plane (computation speed, precision, and instability-avoidance benefits), (3) sensor and material-robust reduction (or, for GLQ, elimination) of integral truncation error, (4) robustly stable modeling of scattered fields and/or fields radiated from current sources modeled as spatially distributed (10 to 1000-fold compute-speed acceleration also realized for distributed-source computations), and (5) numerically stable modeling of fields radiated from sources within NBAM layers. Having addressed these limitations, are PWE algorithms applicable to modeling EM waves in tilted planar-layered geometries too? 
    This question is explored in Chapter 7 using a Transformation Optics-based approach, which allows one to model wave propagation through layered media that (in the sensor's vicinity) possess tilted planar interfaces. The technique leads to spurious wave scattering, however, whose induced degradation of computation accuracy requires analysis. A mathematical exposition, together with an exhaustive simulation-based study and analysis of the limitations of this novel tilted-layer modeling formulation, is Chapter 7's main contribution.

  2. In vivo optical imaging of amblyopia: Digital subtraction autofluorescence and split-spectrum amplitude-decorrelation angiography.

    PubMed

    Guo, Lei; Tao, Jun; Xia, Fan; Yang, Zhi; Ma, Xiaoli; Hua, Rui

    2016-09-01

    Amblyopia is a visual impairment that is attributed to either abnormal binocular interactions or visual deprivation. The retina and choroids have been shown to be involved in the development of amblyopia. The purpose of this study was to investigate the retinal and choroidal microstructural abnormalities of amblyopia using digital subtraction autofluorescence and split-spectrum amplitude-decorrelation angiography (SSADA) approaches. This prospective study included 44 eyes of 22 patients with unilateral amblyopia. All patients underwent pupil dilation and received indirect ophthalmoscopy, combined depth imaging spectral domain optical coherence tomography (OCT), SSADA-OCT, and macular blue light (BL) and near-infrared (NIR) autofluorescence imaging. The subfoveal choroidal thickness (SFCT) was measured. BL and NIR autofluorescence images were acquired for all patients and used to generate subtraction images with ImageJ software. The superficial and deep layers of the retina and the inner choroid layer were acquired with SSADA-OCT. For the normal eyes, a regularly increasing signal was observed in the central macula on the subtraction images. In contrast, a decreased signal for the central patch or a reduced peak was detected in 16 of 22 amblyopic eyes (72.7%). The mean SFCT of the amblyopic eyes was greater than that of the fellow normal eyes (399.25 ± 4.944 µm vs. 280.58 ± 6.491 µm, respectively, P < 0.05). SSADA-OCT revealed a normal choroidal capillary network in all fellow normal eyes. However, 18 of 22 amblyopic eyes (86.4%) exhibited a blurry choroidal capillary network, and 15 of 22 amblyopic eyes (68.2%) displayed a dark atrophic patch. This is the first report of amblyopia using SSADA-OCT and digital subtraction images of autofluorescence. The mechanistic relationship of a thicker choroid and choroidal capillary atrophy with amblyopia remains to be described. The digital subtraction image confirmed the changes in the microstructure of the amblyopic retina as a supplementary approach to detect the progression of amblyopia. Lasers Surg. Med. 48:660-667, 2016. © 2016 Wiley Periodicals, Inc.

  3. Jeffries Matusita-Spectral Angle Mapper (JM-SAM) spectral matching for species level mapping at Bhitarkanika, Muthupet and Pichavaram mangroves

    NASA Astrophysics Data System (ADS)

    Padma, S.; Sanjeevi, S.

    2014-12-01

    This paper proposes a novel hyperspectral matching algorithm by integrating the stochastic Jeffries-Matusita measure (JM) and the deterministic Spectral Angle Mapper (SAM), to accurately map the species and the associated landcover types of the mangroves of the east coast of India using hyperspectral satellite images. The JM-SAM algorithm signifies the combination of a qualitative distance measure (JM) and a quantitative angle measure (SAM). The spectral capabilities of both measures are orthogonally projected using the tangent and sine functions to yield the combined algorithm. The developed JM-SAM algorithm is implemented to discriminate the mangrove species and the landcover classes of the Pichavaram (Tamil Nadu), Muthupet (Tamil Nadu) and Bhitarkanika (Odisha) mangrove forests along the eastern Indian coast using the 242-band Hyperion image datasets. The developed algorithm is extended in a supervised framework for accurate classification of the Hyperion image. The pixel-level matching performance of the developed algorithm is assessed by the Relative Spectral Discriminatory Probability (RSDPB) and Relative Spectral Discriminatory Entropy (RSDE) measures. From the values of RSDPB and RSDE, it is inferred that the hybrid JM-SAM matching measure results in improved discriminability of the mangrove species and the associated landcover types compared with the individual SAM and JM algorithms. This performance is reflected in the classification accuracies of the species and landcover maps of the Pichavaram mangrove ecosystem. Thus, the JM-SAM (TAN) matching algorithm yielded an accuracy better than the SAM and JM measures by an average difference of 13.49% and 7.21%, respectively, followed by JM-SAM (SIN) at 12.06% and 5.78%, respectively. Similarly, in the case of Muthupet, JM-SAM (TAN) yielded higher accuracy than the SAM and JM measures by an average difference of 12.5% and 9.72%, respectively, followed by JM-SAM (SIN) at 8.34% and 5.55%, respectively. For Bhitarkanika, the combined JM-SAM (TAN) and (SIN) measures improved the performance of the individual SAM by 16.1% and 15%, and of JM by 10.3% and 9.2%, respectively.
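
    One common way to combine the two measures, shown below in Python, multiplies the JM distance by the tangent or sine of the SAM angle; the exact projection used by the authors may differ, so treat the formulas as an assumption.

      import numpy as np

      def sam(s, r):
          # Spectral angle (radians) between test and reference spectra.
          c = s @ r / (np.linalg.norm(s) * np.linalg.norm(r))
          return np.arccos(np.clip(c, -1.0, 1.0))

      def jm(s, r):
          # Jeffries-Matusita distance, spectra treated as probability vectors.
          p, q = s / s.sum(), r / r.sum()
          b = -np.log(np.sum(np.sqrt(p * q)))      # Bhattacharyya distance
          return np.sqrt(2.0 * (1.0 - np.exp(-b)))

      def jm_sam(s, r, mode='tan'):
          # Hybrid measure: orthogonal projection via tangent or sine.
          f = np.tan if mode == 'tan' else np.sin
          return jm(s, r) * f(sam(s, r))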

  4. Evaluation of the morphology structure of meibomian glands based on mask dodging method

    NASA Astrophysics Data System (ADS)

    Yan, Huangping; Zuo, Yingbo; Chen, Yisha; Chen, Yanping

    2016-10-01

    Low contrast and non-uniform illumination of infrared (IR) meibography images make the detection of meibomian glands challenging. An improved Mask dodging algorithm is proposed. To overcome the low contrast left by the traditional Mask dodging method, a scale factor is used to enhance the image after subtracting the background image from the original one. Meibomian glands are detected and the ratio of the meibomian gland area to the measurement area is calculated. The results show that the improved Mask algorithm has an ideal dodging effect, which can eliminate non-uniform illumination and improve the contrast of meibography images effectively.
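
    The improved Mask dodging operation can be sketched in a few lines of Python: subtract a heavily blurred background estimate and rescale about mid-grey; the blur width, scale factor, and 8-bit range are assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def mask_dodge(img, sigma=50, scale=3.0):
          # Subtract a heavily blurred background estimate (the "mask"),
          # then apply the scale factor about mid-grey to restore contrast.
          background = gaussian_filter(img.astype(float), sigma)
          out = scale * (img - background) + 128.0
          return np.clip(out, 0, 255).astype(np.uint8)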

  5. An experimental SMI adaptive antenna array simulator for weak interfering signals

    NASA Technical Reports Server (NTRS)

    Dilsavor, Ronald S.; Gupta, Inder J.

    1991-01-01

    An experimental sample matrix inversion (SMI) adaptive antenna array for suppressing weak interfering signals is described. The experimental adaptive array uses a modified SMI algorithm to increase the interference suppression. In the modified SMI algorithm, the sample covariance matrix is redefined to reduce the effect of thermal noise on the weights of an adaptive array. This is accomplished by subtracting a fraction of the smallest eigenvalue of the original covariance matrix from its diagonal entries. The test results obtained using the experimental system are compared with theoretical results, and the two show good agreement.
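
    A minimal Python sketch of the modified SMI weight computation follows, with the eigenvalue-subtraction step taken directly from the description above; the steering-vector normalisation and names are assumptions.

      import numpy as np

      def modified_smi_weights(snapshots, steering, fraction=0.9):
          # snapshots: (n_elements, n_snapshots) complex array data.
          R = snapshots @ snapshots.conj().T / snapshots.shape[1]
          lam_min = np.linalg.eigvalsh(R)[0]       # smallest eigenvalue
          R_mod = R - fraction * lam_min * np.eye(R.shape[0])  # de-weight noise
          w = np.linalg.solve(R_mod, steering)
          return w / (steering.conj() @ w)         # unit gain toward desired signal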

  6. Wide-band array signal processing via spectral smoothing

    NASA Technical Reports Server (NTRS)

    Xu, Guanghan; Kailath, Thomas

    1989-01-01

    A novel algorithm for the estimation of the directions of arrival (DOA) of multiple wide-band sources via spectral smoothing is presented. The proposed algorithm does not require an initial DOA estimate or a specific signal model. The advantages of replacing the MUSIC search with an ESPRIT search are discussed.

  7. Selecting algorithms, sensors, and linear bases for optimum spectral recovery of skylight.

    PubMed

    López-Alvarez, Miguel A; Hernández-Andrés, Javier; Valero, Eva M; Romero, Javier

    2007-04-01

    In a previous work [Appl. Opt. 44, 5688 (2005)] we found the optimum sensors for a planned multispectral system for measuring skylight in the presence of noise by adapting a linear spectral recovery algorithm proposed by Maloney and Wandell [J. Opt. Soc. Am. A 3, 29 (1986)]. Here we continue along these lines by simulating the responses of three to five Gaussian sensors and recovering spectral information from noise-affected sensor data by trying out four different estimation algorithms, three different sizes for the training set of spectra, and various linear bases. We attempt to find the optimum combination of sensors, recovery method, linear basis, and matrix size to recover the best skylight spectral power distributions from colorimetric and spectral (in the visible range) points of view. We show how all these parameters play an important role in the practical design of a real multispectral system and how to obtain several relevant conclusions from simulating the behavior of sensors in the presence of noise.
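
    A compact Python sketch of the underlying linear recovery model (in the spirit of the Maloney-Wandell method cited above) is given below; the matrix shapes and names are assumptions.

      import numpy as np

      def recover_spectrum(responses, S, B):
          # responses = S.T @ B @ a for basis coefficients a, with
          # S: (n_wavelengths, n_sensors) sensor sensitivities and
          # B: (n_wavelengths, n_basis) linear basis; invert the small
          # system and rebuild the spectrum as B @ a.
          M = S.T @ B
          a, *_ = np.linalg.lstsq(M, responses, rcond=None)
          return B @ a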

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    I. W. Ginsberg

    Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method, and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.
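
    For reference, the currently used fingerprinting method reduces to first-derivative Gaussian filtering at several scales, e.g. in Python (scales and names hypothetical):

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      def spectral_fingerprint(spectrum, scales=(2, 4, 8, 16)):
          # Convolve with first-derivative Gaussians at several widths.
          return np.stack([gaussian_filter1d(spectrum, s, order=1)
                           for s in scales])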

  9. A Voxel-by-Voxel Comparison of Deformable Vector Fields Obtained by Three Deformable Image Registration Algorithms Applied to 4DCT Lung Studies.

    PubMed

    Fatyga, Mirek; Dogan, Nesrin; Weiss, Elizabeth; Sleeman, William C; Zhang, Baoshe; Lehman, William J; Williamson, Jeffrey F; Wijesooriya, Krishni; Christensen, Gary E

    2015-01-01

    Commonly used methods of assessing the accuracy of deformable image registration (DIR) rely on image segmentation or landmark selection. These methods are very labor intensive and thus limited to a relatively small number of image pairs. The direct voxel-by-voxel comparison can be automated to examine fluctuations in DIR quality on a long series of image pairs. A voxel-by-voxel comparison of three DIR algorithms applied to lung patients is presented. Registrations are compared by comparing volume histograms formed both with individual DIR maps and with a voxel-by-voxel subtraction of the two maps. When two DIR maps agree, one concludes that both maps are interchangeable in treatment planning applications, though one cannot conclude that either one agrees with the ground truth. If two DIR maps significantly disagree, one concludes that at least one of the maps deviates from the ground truth. We use the method to compare 3 DIR algorithms applied to peak inhale-peak exhale registrations of 4DFBCT data obtained from 13 patients. All three algorithms appear to be nearly equivalent when compared using DICE similarity coefficients. A comparison based on Jacobian volume histograms shows that all three algorithms measure changes in total volume of the lungs with reasonable accuracy, but show large differences in the variance of the Jacobian distribution on contoured structures. Analysis of voxel-by-voxel subtraction of DIR maps shows differences between algorithms that exceed a centimeter for some registrations. Deformation maps produced by DIR algorithms must be treated as mathematical approximations of physical tissue deformation that are not self-consistent and may thus be useful only in applications for which they have been specifically validated. The three algorithms tested in this work perform fairly robustly for the task of contour propagation, but produce potentially unreliable results for the task of DVH accumulation or measurement of local volume change. Performance of DIR algorithms varies significantly from one image pair to the next, hence validation efforts which are exhaustive but performed on a small number of image pairs may not reflect the performance of the same algorithm in practical clinical situations. Such efforts should be supplemented by validation based on a longer series of images of clinical quality.
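
    A minimal Python sketch of the Jacobian computation underlying the volume histograms is shown below; voxel spacing handling and names are assumptions.

      import numpy as np

      def jacobian_determinant(dvf, spacing=(1.0, 1.0, 1.0)):
          # dvf: (nx, ny, nz, 3) displacement field; the deformation is
          # identity + displacement, so J = det(I + grad u); values near 1
          # indicate local volume preservation.
          J = np.zeros(dvf.shape[:3] + (3, 3))
          for i in range(3):                       # displacement component
              for j in range(3):                   # derivative direction
                  J[..., i, j] = np.gradient(dvf[..., i], spacing[j], axis=j)
          return np.linalg.det(np.eye(3) + J)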

  10. Planck early results. XVII. Origin of the submillimetre excess dust emission in the Magellanic Clouds

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Balbi, A.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Battaner, E.; Benabed, K.; Benoît, A.; Bernard, J.-P.; Bersanelli, M.; Bhatia, R.; Bock, J. J.; Bonaldi, A.; Bond, J. R.; Borrill, J.; Bot, C.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Cabella, P.; Cardoso, J.-F.; Catalano, A.; Cayón, L.; Challinor, A.; Chamballu, A.; Chiang, L.-Y.; Chiang, C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Couchot, F.; Coulais, A.; Crill, B. P.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Gasperis, G.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Dobashi, K.; Donzelli, S.; Doré, O.; Dörl, U.; Douspis, M.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Fukui, Y.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Harrison, D.; Helou, G.; Henrot-Versillé, S.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hovest, W.; Hoyland, R. J.; Huffenberger, K. M.; Jaffe, A. H.; Jones, W. C.; Juvela, M.; Kawamura, A.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knox, L.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Laureijs, R. J.; Lawrence, C. R.; Leach, S.; Leonardi, R.; Leroy, C.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; MacTavish, C. J.; Madden, S.; Maffei, B.; Mandolesi, N.; Mann, R.; Maris, M.; Martínez-González, E.; Masi, S.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Onishi, T.; Osborne, S.; Pajot, F.; Paladini, R.; Paradis, D.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Poutanen, T.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Smoot, G. F.; Starck, J.-L.; Stivoli, F.; Stolyarov, V.; Sudiwala, R.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Torre, J.-P.; Tristram, M.; Tuovinen, J.; Umana, G.; Valenziano, L.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Wilkinson, A.; Ysard, N.; Yvon, D.; Zacchei, A.; Zonca, A.

    2011-12-01

    The integrated spectral energy distributions (SED) of the Large Magellanic Cloud (LMC) and Small Magellanic Cloud (SMC) appear significantly flatter than expected from dust models based on their far-infrared and radio emission. The still unexplained origin of this millimetre excess is investigated here using the Planck data. The integrated SEDs of the two galaxies before subtraction of the foreground (Milky Way) and background (CMB fluctuations) emission are in good agreement with previous determinations, confirming the presence of the millimetre excess. In the context of this preliminary analysis we do not propose a full multi-component fitting of the data, but instead subtract contributions unrelated to the galaxies and to dust emission. The background CMB contribution is subtracted using an internal linear combination (ILC) method performed locally around the galaxies. The foreground emission from the Milky Way is subtracted using a Galactic H I template, and the dust emissivity is derived in a region surrounding the two galaxies and dominated by Milky Way emission. After subtraction, the remaining emission of both galaxies correlates closely with the atomic and molecular gas emission of the LMC and SMC. The millimetre excess in the LMC can be explained by CMB fluctuations, but a significant excess is still present in the SMC SED. The Planck and IRAS-IRIS data at 100 μm are combined to produce thermal dust temperature and optical depth maps of the two galaxies. The LMC temperature map shows the presence of a warm inner arm already found with the Spitzer data, but which also shows the existence of a previously unidentified cold outer arm. Several cold regions are found along this arm, some of which are associated with known molecular clouds. The dust optical depth maps are used to constrain the thermal dust emissivity power-law index (β). The average spectral index is found to be consistent with β = 1.5 and β = 1.2 below 500 μm for the LMC and SMC respectively, significantly flatter than the values observed in the Milky Way. Also, there is evidence in the SMC of a further flattening of the SED in the sub-mm, unlike for the LMC where the SED remains consistent with β = 1.5. The spatial distribution of the millimetre dust excess in the SMC follows the gas and thermal dust distribution. Different models are explored in order to fit the dust emission in the SMC. It is concluded that the millimetre excess is unlikely to be caused by very cold dust emission and that it could be due to a combination of spinning dust emission and thermal dust emission by more amorphous dust grains than those present in our Galaxy. Corresponding author: J.-P. Bernard, e-mail: jean-philippe.bernard@cesr.fr

  11. Accurate phase extraction algorithm based on Gram–Schmidt orthonormalization and least square ellipse fitting method

    NASA Astrophysics Data System (ADS)

    Lei, Hebing; Yao, Yong; Liu, Haopeng; Tian, Yiting; Yang, Yanfu; Gu, Yinglong

    2018-06-01

    An accurate algorithm combining Gram-Schmidt orthonormalization and least-squares ellipse fitting technology is proposed, which can be used for phase extraction from two or three interferograms. The DC term of the background intensity is suppressed by a subtraction operation on three interferograms or by a high-pass filter on two interferograms. By performing Gram-Schmidt orthonormalization on the pre-processed interferograms, the phase shift error is corrected and a general ellipse form is derived. The background intensity error and the residual error can then be compensated by the least-squares ellipse fitting method. Finally, the phase can be extracted rapidly. The algorithm can cope with two or three interferograms affected by environmental disturbance, low fringe number or small phase shifts. The accuracy and effectiveness of the proposed algorithm are verified by both numerical simulations and experiments.
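
    The Gram-Schmidt step can be sketched in Python as follows for two DC-suppressed interferograms; this is only the orthonormalisation stage, without the ellipse-fitting correction, and all names are assumptions.

      import numpy as np

      def gs_phase(i1, i2):
          # Orthonormalise the second DC-suppressed fringe pattern against
          # the first, then take the arctangent of the quadrature pair.
          i1 = i1 - i1.mean()
          i2 = i2 - i2.mean()
          u1 = i1 / np.linalg.norm(i1)
          u2 = i2 - (i2.ravel() @ u1.ravel()) * u1 # remove component along u1
          u2 = u2 / np.linalg.norm(u2)
          return np.arctan2(u2, u1)                # wrapped phase map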

  12. Imaging Sensor Development for Scattering Atmospheres.

    DTIC Science & Technology

    1983-03-01

    The background-subtracted output from a CCD imaging detector for a single frame can be written as a signal term plus shot noise, thermal noise, and dark-current shot noise terms [Eq. (2-22)]. In addition, the spectral responses of current devices are limited to the visible region and their sensitivities are not very high. Solid state detectors are generally much more sensitive than spatial light modulators, and some (e.g., HgCdTe detectors) can respond up to the 10 μm region. Several

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jornet, N; Carrasco de Fez, P; Jordi, O

    Purpose: To evaluate the accuracy of total scatter factor (Sc,p) determination for small fields using a commercial plastic scintillator detector (PSD). The manufacturer's spectral discrimination method to subtract Cerenkov light from the signal is discussed. Methods: Sc,p for field sizes ranging from 0.5 to 10 cm were measured using the Exradin PSD (Standard Imaging) connected to a two-channel electrometer measuring the signals in two different spectral regions in order to subtract the Cerenkov contribution from the PSD signal. A PinPoint ionisation chamber 31006 (PTW) and a non-shielded semiconductor detector EFD (Scanditronix) were used for comparison. Measurements were performed for a 6 MV X-ray beam. The Sc,p were measured at 10 cm depth in water for SSD = 100 cm and normalized to a 10×10 cm² field size at the isocenter. All detectors were placed with their symmetry axis parallel to the beam axis. We followed the manufacturer's recommended calibration methodology to subtract the Cerenkov contribution to the signal, as well as a modified method using smaller field sizes. The Sc,p calculated using both calibration methodologies were compared. Results: Sc,p measured with the semiconductor and the PinPoint detectors agree, within 1.5%, for field sizes between 10×10 and 1×1 cm². Sc,p measured with the PSD using the manufacturer's calibration methodology were systematically 4% higher than those measured with the semiconductor detector for field sizes smaller than 5×5 cm². By using a modified calibration methodology for small fields, and keeping the manufacturer's calibration methodology for fields larger than 5×5 cm², Sc,p matched the semiconductor results within 2% for field sizes larger than 1.5 cm. Conclusion: The calibration methodology proposed by the manufacturer is not appropriate for dose measurements in small fields. The calibration parameters are not independent of the incident radiation spectrum for this PSD. This work was partially financed by grant 2012 of the Barcelona board of the AECC.
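
    The two-channel spectral discrimination can be written schematically in Python as below; the linear chromatic-removal form is a common one from the plastic-scintillator literature and is an assumption, not necessarily this vendor's exact formula.

      def cerenkov_subtracted_signal(q1, q2, clr):
          # q1, q2: charges collected in the two spectral windows;
          # clr: calibrated Cerenkov light ratio.  The scintillation-only
          # signal is taken proportional to q1 - clr * q2.
          return q1 - clr * q2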

  14. Verification of Loop Diagnostics

    NASA Technical Reports Server (NTRS)

    Winebarger, A.; Lionello, R.; Mok, Y.; Linker, J.; Mikic, Z.

    2014-01-01

    Many different techniques have been used to characterize the plasma in the solar corona: density-sensitive spectral line ratios are used to infer the density, the evolution of coronal structures in different passbands is used to infer the temperature evolution, and the simultaneous intensities measured in multiple passbands are used to determine the emission measure. All these analysis techniques assume that the intensity of the structures can be isolated through background subtraction. In this paper, we use simulated observations from a 3D hydrodynamic simulation of a coronal active region to verify these diagnostics. The density and temperature from the simulation are used to generate images in several passbands and spectral lines. We identify loop structures in the simulated images and calculate the loop background. We then determine the density, temperature and emission measure distribution as a function of time from the observations and compare with the true temperature and density of the loop. We find that the overall characteristics of the temperature, density, and emission measure are recovered by the analysis methods, but the details of the true temperature and density are not. For instance, the emission measure curves calculated from the simulated observations are much broader than the true emission measure distribution, though the average temperature evolution is similar. These differences are due, in part, to inadequate background subtraction, but also indicate a limitation of the analysis methods.

  15. Confocal Raman spectroscopy to trace lipstick with their smudges on different surfaces.

    PubMed

    López-López, Maria; Özbek, Nil; García-Ruiz, Carmen

    2014-06-01

    Lipsticks are very popular cosmetic products that can be transferred by contact to different surfaces, making them important forensic evidence, albeit intricate to analyze, when found at a crime scene. This study evaluates the use of confocal Raman microscopy at a 780 nm excitation wavelength for the nondestructive identification of 49 lipsticks of different brands and colors, overcoming the lipstick fluorescence problem reported by previous works using other laser wavelengths. Although the lipstick samples showed some fluorescence, this effect was not intense enough to completely overwhelm the Raman spectra. Lipstick smudges on twelve different surfaces commonly stained with these samples were also analyzed. Some of the surfaces contributed several bands to the smudge spectra, compromising the identification of the lipstick. For these samples, spectral subtraction of the interfering bands from the surface was performed. Finally, five different red lipsticks with very similar colors were measured on different surfaces to evaluate the traceability of a lipstick to its smudges even on interfering surfaces. Although prior spectral subtraction was needed in some cases, all the smudges were linked to their corresponding lipsticks even when smeared on the interfering surfaces. As a consequence, confocal Raman microscopy using the 780 nm excitation laser is presented as a nondestructive, powerful tool for the identification of these tricky samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Recording high quality speech during tagged cine-MRI studies using a fiber optic microphone.

    PubMed

    NessAiver, Moriel S; Stone, Maureen; Parthasarathy, Vijay; Kahana, Yuvi; Paritsky, Alexander

    2006-01-01

    To investigate the feasibility of obtaining high quality speech recordings during cine imaging of tongue movement using a fiber optic microphone. A Complementary Spatial Modulation of Magnetization (C-SPAMM) tagged cine sequence triggered by an electrocardiogram (ECG) simulator was used to image a volunteer while speaking the syllable pairs /a/-/u/, /i/-/u/, and the words "golly" and "Tamil" in sync with the imaging sequence. A noise-canceling, optical microphone was fastened approximately 1-2 inches above the mouth of the volunteer. The microphone was attached via optical fiber to a laptop computer, where the speech was sampled at 44.1 kHz. A reference recording of gradient activity with no speech was subtracted from target recordings. Good quality speech was discernible above the background gradient sound using the fiber optic microphone without reference subtraction. The audio waveform of gradient activity was extremely stable and reproducible. Subtraction of the reference gradient recording further reduced gradient noise by roughly 21 dB, resulting in exceptionally high quality speech waveforms. It is possible to obtain high quality speech recordings using an optical microphone even during exceptionally loud cine imaging sequences. This opens up the possibility of more elaborate MRI studies of speech including spectral analysis of the speech signal in all types of MRI.

  17. Theoretical study on electronic excitation spectra: A matrix form of numerical algorithm for spectral shift

    NASA Astrophysics Data System (ADS)

    Ming, Mei-Jun; Xu, Long-Kun; Wang, Fan; Bi, Ting-Jun; Li, Xiang-Yuan

    2017-07-01

    In this work, a matrix form of the numerical algorithm for spectral shift is presented, based on the novel nonequilibrium solvation model established by introducing the constrained equilibrium manipulation. This form is convenient for the development of codes for numerical solution. By means of the integral equation formulation polarizable continuum model (IEF-PCM), a subroutine has been implemented to compute the spectral shift numerically. Here, the spectral shifts of the absorption spectra of several popular chromophores, N,N-diethyl-p-nitroaniline (DEPNA), methylenecyclopropene (MCP), acrolein (ACL) and p-nitroaniline (PNA), were investigated in solvents of various polarities. The computed spectral shifts can explain the available experimental findings reasonably well. The contributions of solute geometry distortion, electrostatic polarization and other non-electrostatic interactions to the spectral shift are discussed.

  18. NIR small arms muzzle flash

    NASA Astrophysics Data System (ADS)

    Montoya, Joseph; Kennerly, Stephen; Rede, Edward

    2010-04-01

    Utilization of Near-Infrared (NIR) spectral features in a muzzle flash will allow for small arms detection using low cost silicon (Si)-based imagers. Detection of a small arms muzzle flash in a particular wavelength region is dependent on the intensity of that emission, the efficiency of source emission transmission through the atmosphere, and the relative intensity of the background scene. The NIR muzzle flash signature exists in the relatively large Si spectral response wavelength region of 300 nm-1100 nm, which allows for use of commercial-off-the-shelf (COTS) Si-based detectors. The alkali metal origin of the NIR spectral features in the 7.62 × 39-mm round muzzle flash is discussed, and the basis for the spectral bandwidth is examined, using a calculated Voigt profile. This report will introduce a model of the 7.62 × 39-mm NIR muzzle flash signature based on predicted source characteristics. Atmospheric limitations based on NIR spectral regions are investigated in relation to the NIR muzzle flash signature. A simple signal-to-clutter ratio (SCR) metric is used to predict sensor performance based on a model of radiance for the source and solar background and pixel registered image subtraction.

  19. Metal implants on CT: comparison of iterative reconstruction algorithms for reduction of metal artifacts with single energy and spectral CT scanning in a phantom model.

    PubMed

    Fang, Jieming; Zhang, Da; Wilcox, Carol; Heidinger, Benedikt; Raptopoulos, Vassilios; Brook, Alexander; Brook, Olga R

    2017-03-01

    To assess single energy metal artifact reduction (SEMAR) and spectral energy metal artifact reduction (MARS) algorithms in reducing artifacts generated by different metal implants. A phantom with various metal implants was scanned with and without SEMAR (Aquilion One, Toshiba) and MARS (Discovery CT750 HD, GE). Images were evaluated objectively by measuring the standard deviation in regions of interest and subjectively by two independent reviewers grading on a scale of 0 (no artifact) to 4 (severe artifact). Reviewers also graded new artifacts introduced by the metal artifact reduction algorithms. SEMAR and MARS significantly decreased the variability of the density measurement adjacent to the metal implant, with a median SD (standard deviation of the density measurement) of 52.1 HU without SEMAR vs. 12.3 HU with SEMAR, p < 0.001. The median SD without MARS of 63.1 HU decreased to 25.9 HU with MARS, p < 0.001. The median SD with SEMAR is significantly lower than the median SD with MARS (p = 0.0011). SEMAR improved subjective image quality, with a reduction in overall artifact grading from 3.2 ± 0.7 to 1.4 ± 0.9, p < 0.001. The improvement of overall image quality by MARS did not reach statistical significance (3.2 ± 0.6 to 2.6 ± 0.8, p = 0.088). New artifacts introduced by the metal artifact reduction algorithm were significant for MARS (2.4 ± 1.0) but minimal for SEMAR (0.4 ± 0.7), p < 0.001. CT iterative reconstruction algorithms with single and spectral energy are both effective in the reduction of metal artifacts. The single energy-based algorithm provides better overall image quality than the spectral CT-based algorithm. The spectral metal artifact reduction algorithm introduces mild to moderate artifacts in the far field.

  20. Trace gas detection in hyperspectral imagery using the wavelet packet subspace

    NASA Astrophysics Data System (ADS)

    Salvador, Mark A. Z.

    This dissertation describes research into a new remote sensing method to detect trace gases in hyperspectral and ultra-spectral data. The new method is based on the wavelet packet transform and attempts to improve both the computational tractability and the detection of trace gases in airborne and spaceborne spectral imagery. Atmospheric trace gas research supports various Earth science disciplines, including climatology, vulcanology, pollution monitoring, natural disasters, and intelligence and military applications. Hyperspectral and ultra-spectral data significantly increase the data glut of existing Earth science data sets. Spaceborne spectral data in particular significantly increase spectral resolution while providing daily global collections of the Earth. Application of the wavelet packet transform to the spectral space of hyperspectral and ultra-spectral imagery data potentially improves remote sensing detection algorithms. It also facilitates the parallelization of these methods for high performance computing. This research pursues two science goals: (1) developing a new spectral imagery detection algorithm, and (2) facilitating the parallelization of trace gas detection in spectral imagery data.
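
    A minimal sketch of projecting per-pixel spectra onto a wavelet packet subspace, using the PyWavelets package; the wavelet choice, decomposition level, and feature layout are illustrative assumptions, not the dissertation's actual detector.

        import numpy as np
        import pywt  # PyWavelets

        def wavelet_packet_features(spectrum, wavelet="db4", level=3):
            """Wavelet packet coefficients of one pixel's spectrum.

            Returns the concatenated leaf-node coefficients at the given
            level, ordered by frequency; a detector could operate on (a
            subset of) these coefficients instead of raw spectral channels.
            """
            wp = pywt.WaveletPacket(data=spectrum, wavelet=wavelet,
                                    mode="symmetric", maxlevel=level)
            leaves = wp.get_level(level, order="freq")
            return np.concatenate([node.data for node in leaves])

        # Toy hyperspectral cube: 32 x 32 pixels, 128 spectral channels.
        cube = np.random.rand(32, 32, 128)
        features = np.apply_along_axis(wavelet_packet_features, 2, cube)
        print(features.shape)  # (32, 32, number_of_coefficients)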

  1. SMV⊥: Simplex of maximal volume based upon the Gram-Schmidt process

    NASA Astrophysics Data System (ADS)

    Salazar-Vazquez, Jairo; Mendez-Vazquez, Andres

    2015-10-01

    In recent years, different algorithms for Hyperspectral Image (HI) analysis have been introduced. The high spectral resolution of these images allows the development of algorithms for target detection, material mapping, and material identification, with applications in agriculture, security and defense, industry, etc. Therefore, from the computer science point of view, there is a fertile field of research for improving and developing algorithms in HI analysis. In some applications, the spectral pixels of a HI can be classified using laboratory spectral signatures. Nevertheless, for many others, not enough prior information or spectral signatures are available, making any analysis a difficult task. One of the most popular algorithms for HI analysis is the N-FINDR, because it is easy to understand and provides a way to unmix the original HI into the respective material compositions. However, the N-FINDR is computationally expensive, and its performance depends on a random initialization process. This paper proposes a novel idea to reduce the complexity of the N-FINDR by implementing a bottom-up approach based on an observation from linear algebra and the use of the Gram-Schmidt process. The Simplex of Maximal Volume Perpendicular (SMV⊥) algorithm is thus proposed for fast endmember extraction in hyperspectral imagery. This novel algorithm has complexity O(n) with respect to the number of pixels. In addition, the evidence shows that SMV⊥ finds a larger simplex volume, at lower computational cost, than other popular algorithms on synthetic and real scenarios.
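
    The bottom-up idea can be sketched under a common greedy formulation: each new endmember is the pixel with the largest component orthogonal to the span of those already selected, which is one Gram-Schmidt step and one pass over the pixels per endmember. This is a hedged approximation of the approach, not the authors' exact SMV⊥ code (which, for example, may work with mean-subtracted data).

        import numpy as np

        def greedy_endmembers(pixels, k):
            """Greedy maximal-volume endmember selection via Gram-Schmidt.

            pixels: (n_pixels, n_bands) spectral vectors. The first pick is
            the pixel farthest from the origin; each later pick maximizes
            the norm of its residual after projection onto the span of the
            endmembers found so far (one O(n) pass per endmember).
            """
            first = int(np.argmax(np.linalg.norm(pixels, axis=1)))
            chosen = [first]
            basis = [pixels[first] / np.linalg.norm(pixels[first])]
            for _ in range(k - 1):
                B = np.stack(basis)                    # orthonormal rows
                resid = pixels - pixels @ B.T @ B      # orthogonal parts
                j = int(np.argmax(np.linalg.norm(resid, axis=1)))
                chosen.append(j)
                basis.append(resid[j] / np.linalg.norm(resid[j]))
            return pixels[chosen], chosen

        pixels = np.random.rand(10000, 50)     # synthetic HI, 50 bands
        endmembers, indices = greedy_endmembers(pixels, k=5)
        print(endmembers.shape, indices)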

  2. Low complexity feature extraction for classification of harmonic signals

    NASA Astrophysics Data System (ADS)

    William, Peter E.

    In this dissertation, feature extraction algorithms are developed for extracting characteristic features from harmonic signals. The common theme of all the developed algorithms is the simplicity of generating a significant set of features directly from the time domain harmonic signal. The features are a time domain representation of the composite, yet sparse, harmonic signature in the spectral domain. The algorithms are suited to low-power unattended sensors that perform sensing, feature extraction, and classification in a standalone scenario. The first algorithm generates the characteristic features using only the durations between successive zero crossings. The second algorithm estimates the amplitudes of the harmonic structure employing a simplified least squares method, without the need to estimate the true harmonic parameters of the source signal. The third algorithm, resulting from a collaborative effort with Daniel White at the DSP Lab, University of Nebraska-Lincoln, presents an analog front end approach that uses multichannel analog projection and integration to extract the sparse spectral features from the analog time domain signal. Classification is performed using a multilayer feedforward neural network. Evaluation of the proposed feature extraction algorithms for classification, through the processing of several acoustic and vibration data sets (including military vehicles and rotating electric machines) and comparison with spectral features, shows that, for harmonic signals, time domain features are simpler to extract and provide equivalent or improved reliability over spectral features in both detection probability and false alarm rate.
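
    A minimal sketch of the first algorithm's feature, assuming the straightforward definition of intervals between successive zero crossings:

        import numpy as np

        def zero_crossing_intervals(x, fs):
            """Durations (seconds) between successive zero crossings of x."""
            signs = np.signbit(x)
            crossings = np.flatnonzero(signs[1:] != signs[:-1])
            return np.diff(crossings) / fs

        # A 50 Hz fundamental plus its third harmonic, sampled at 8 kHz.
        fs = 8000
        t = np.arange(0, 0.1, 1 / fs)
        x = np.sin(2 * np.pi * 50 * t) + 0.4 * np.sin(2 * np.pi * 150 * t)
        print(zero_crossing_intervals(x, fs)[:6])  # pattern reflects harmonics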

  3. High spatiotemporal resolution measurement of regional lung air volumes from 2D phase contrast x-ray images.

    PubMed

    Leong, Andrew F T; Fouras, Andreas; Islam, M Sirajul; Wallace, Megan J; Hooper, Stuart B; Kitchen, Marcus J

    2013-04-01

    Described herein is a new technique for measuring regional lung air volumes from two-dimensional propagation-based phase contrast x-ray (PBI) images at very high spatial and temporal resolution. Phase contrast dramatically increases lung visibility, and the outlined volumetric reconstruction technique quantifies dynamic changes in respiratory function. These methods can be used for assessing pulmonary disease and injury and for optimizing mechanical ventilation techniques for preterm infants using animal models. The volumetric reconstruction combines the algorithms of temporal subtraction and single image phase retrieval (SIPR) to isolate the image of the lungs from the thoracic cage in order to measure regional lung air volumes. The SIPR algorithm was used to recover the change in projected thickness of the lungs on a pixel-by-pixel basis (pixel dimensions ≈ 16.2 μm). The technique has been validated using numerical simulation, comparing results of measuring regional lung air volumes with and without the use of temporal subtraction for removing the thoracic cage. To test this approach, a series of PBI images of newborn rabbit pups mechanically ventilated at different frequencies was employed. Regional lung air volumes measured from PBI images of newborn rabbit pups showed on average an improvement of at least 20% in 16% of pixels within the lungs in comparison to those measured without the use of temporal subtraction. The majority of pixels that showed an improvement were found to be in regions occupied by bone. Applying the volumetric technique to sequences of PBI images of newborn rabbit pups, it is shown that lung aeration at birth can be highly heterogeneous. This paper presents an image segmentation technique based on temporal subtraction that has successfully been used to isolate the lungs from PBI chest images, allowing the change in lung air volume to be measured over regions as small as the pixel size. Using this technique, it is possible to measure changes in regional lung volume at high spatial and temporal resolution during breathing, at a much lower x-ray dose than would be required using computed tomography.

  4. Matched-filter algorithm for subpixel spectral detection in hyperspectral image data

    NASA Astrophysics Data System (ADS)

    Borough, Howard C.

    1991-11-01

    Hyperspectral imagery, spatial imagery with associated wavelength data for every pixel, offers significant potential for improved detection and identification of certain classes of targets. The ability to make spectral identifications of objects which only partially fill a single pixel (due to range or small size) is of considerable interest. Multiband imagery such as Landsat's 5- and 7-band imagery has demonstrated significant utility in the past; hyperspectral imaging systems with hundreds of spectral bands offer improved performance. To explore the application of different subpixel spectral detection algorithms, a synthesized set of hyperspectral image data (hypercubes) was generated utilizing NASA earth resources and other spectral data. The data were modified using LOWTRAN 7 to model the illumination, atmospheric contributions, attenuations, and viewing geometry, representing a nadir view from 10,000 ft altitude. The base hypercube (HC) represented 16 by 21 spatial pixels with 101 wavelength samples from 0.5 to 2.5 micrometers for each pixel. Insertions were made into the base data with random location, random pixel percentage, and random material. Fifteen different hypercubes were generated for blind testing of candidate algorithms. An algorithm utilizing a matched filter in the spectral dimension proved surprisingly good, yielding 100% detection for pixels filled greater than 40% with a standard camouflage paint, and a 50% probability of detection for pixels filled 20% with the paint, with no false alarms. The false alarm rate as a function of the number of spectral bands, over the range from 101 to 12 bands, was measured and found to increase from zero to 50%, illustrating the value of a large number of spectral bands. This test was on imagery without system noise; the next step is to incorporate typical system noise sources.
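
    The classic covariance-whitened spectral matched filter is a reasonable stand-in for the filter described; the normalization and the diagonal regularization below are common choices, not necessarily the author's.

        import numpy as np

        def spectral_matched_filter(cube, target):
            """Matched filter score in the spectral dimension, per pixel.

            cube: (rows, cols, bands); target: (bands,) reference signature.
            Scores are scaled so a pure target pixel scores near 1 and
            background pixels scatter around 0.
            """
            X = cube.reshape(-1, cube.shape[-1]).astype(float)
            mu = X.mean(axis=0)
            C = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            w = np.linalg.solve(C, target - mu)
            scores = (X - mu) @ w / ((target - mu) @ w)
            return scores.reshape(cube.shape[:2])

        cube = np.random.rand(16, 21, 101)      # mimic the 16 x 21 x 101 HC
        target = np.random.rand(101)            # e.g. a paint signature
        cube[8, 10] = 0.4 * target + 0.6 * cube[8, 10]   # 40% subpixel fill
        print(spectral_matched_filter(cube, target)[8, 10])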

  5. Adiabatic Quantum Search in Open Systems.

    PubMed

    Wild, Dominik S; Gopalakrishnan, Sarang; Knap, Michael; Yao, Norman Y; Lukin, Mikhail D

    2016-10-07

    Adiabatic quantum algorithms represent a promising approach to universal quantum computation. In isolated systems, a key limitation to such algorithms is the presence of avoided level crossings, where gaps become extremely small. In open quantum systems, the fundamental robustness of adiabatic algorithms remains unresolved. Here, we study the dynamics near an avoided level crossing associated with the adiabatic quantum search algorithm, when the system is coupled to a generic environment. At zero temperature, we find that the algorithm remains scalable provided the noise spectral density of the environment decays sufficiently fast at low frequencies. By contrast, higher order scattering processes render the algorithm inefficient at any finite temperature regardless of the spectral density, implying that no quantum speedup can be achieved. Extensions and implications for other adiabatic quantum algorithms will be discussed.

  6. Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm

    NASA Technical Reports Server (NTRS)

    Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin

    1994-01-01

    The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low bit rate image coding. The embedding algorithm orders the bits in the bit stream by numerical importance, so that a given code contains all lower-rate encodings of the same algorithm. Therefore, precise bit rate control is achievable, and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression is presented which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.
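
    A minimal sketch of the spectral KLT step, assuming the usual eigendecomposition of the band covariance; each decorrelated component image could then be handed to a 2-D EZW coder (not shown here).

        import numpy as np

        def klt_spectral(cube):
            """Karhunen-Loeve Transform across the spectral dimension.

            cube: (rows, cols, bands). Returns the decorrelated component
            images, sorted by decreasing variance, and the eigenvector
            basis needed to invert the transform after decoding.
            """
            X = cube.reshape(-1, cube.shape[-1]).astype(float)
            X = X - X.mean(axis=0)
            evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
            order = np.argsort(evals)[::-1]
            comps = X @ evecs[:, order]
            return comps.reshape(cube.shape), evecs[:, order]

        cube = np.random.rand(64, 64, 7)    # e.g. 7 Landsat-like bands
        comps, basis = klt_spectral(cube)
        var = comps.reshape(-1, 7).var(axis=0)
        print((var / var.sum()).round(3))   # energy packs into first components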

  7. Testing random forest classification for identifying lava flows and mapping age groups on a single Landsat 8 image

    NASA Astrophysics Data System (ADS)

    Li, Long; Solana, Carmen; Canters, Frank; Kervyn, Matthieu

    2017-10-01

    Mapping lava flows using satellite images is an important application of remote sensing in volcanology. Several volcanoes have been mapped through remote sensing using a wide range of data, from optical to thermal infrared and radar images, using techniques such as manual mapping, supervised/unsupervised classification, and elevation subtraction. So far, spectral-based mapping applications have mainly focused on traditional pixel-based classifiers, without much investigation into the added value of object-based approaches or the advantages of machine learning algorithms. In this study, Nyamuragira, characterized by a series of >20 overlapping lava flows erupted over the last century, was used as a case study. The random forest classifier was tested to map lava flows based on pixels and objects. Image classification was conducted for the 20 individual flows and for 8 groups of flows of similar age, using a Landsat 8 image and a DEM of the volcano, both at 30 m spatial resolution. Results show that object-based classification produces maps with continuous and homogeneous lava surfaces, in agreement with the physical characteristics of lava flows, while lava flows mapped through pixel-based classification are heterogeneous and fragmented, with much "salt-and-pepper" noise. In terms of accuracy, both pixel-based and object-based classification perform well, but the former results in higher accuracies than the latter, except for mapping lava flow age groups without using topographic features. It is concluded that despite spectral similarity, lava flows of contrasting age can be well discriminated and mapped by means of image classification. The classification approach demonstrated in this study requires only easily accessible image data and can be applied to other volcanoes as well, provided there is sufficient information to calibrate the mapping.

  8. Benchmarking of data fusion algorithms in support of earth observation based Antarctic wildlife monitoring

    NASA Astrophysics Data System (ADS)

    Witharana, Chandi; LaRue, Michelle A.; Lynch, Heather J.

    2016-03-01

    Remote sensing is a rapidly developing tool for mapping the abundance and distribution of Antarctic wildlife. While both panchromatic and multispectral imagery have been used in this context, image fusion techniques have received little attention. We tasked seven widely used fusion algorithms: Ehlers fusion, hyperspherical color space fusion, high-pass fusion, principal component analysis (PCA) fusion, Gram-Schmidt fusion, University of New Brunswick fusion, and wavelet-PCA fusion, with resolution-enhancing a series of single-date QuickBird-2 and WorldView-2 image scenes comprising penguin guano, seals, and vegetation. Fused images were assessed for spectral and spatial fidelity using a variety of quantitative quality indicators and visual inspection methods. Our visual evaluation selected the high-pass fusion algorithm and the University of New Brunswick fusion algorithm as best for manual wildlife detection, while the quantitative assessment suggested the Gram-Schmidt fusion algorithm and the University of New Brunswick fusion algorithm as best for automated classification. The hyperspherical color space fusion algorithm exhibited mediocre results in terms of spectral and spatial fidelity. The PCA fusion algorithm showed spatial superiority at the expense of spectral inconsistencies. The Ehlers fusion algorithm and the wavelet-PCA algorithm showed the weakest performances. As remote sensing becomes a more routine method of surveying Antarctic wildlife, these benchmarks will provide guidance for image fusion and pave the way for more standardized products for specific types of wildlife surveys.

  9. A three-dimensional spectral algorithm for simulations of transition and turbulence

    NASA Technical Reports Server (NTRS)

    Zang, T. A.; Hussaini, M. Y.

    1985-01-01

    A spectral algorithm for simulating three-dimensional, incompressible, parallel shear flows is described. It applies to the channel, to the parallel boundary layer, and to other shear flows with one wall-bounded and two periodic directions. Representative applications to the channel and to the heated boundary layer are presented.

  10. Atmospheric correction over case 2 waters with an iterative fitting algorithm: relative humidity effects.

    PubMed

    Land, P E; Haigh, J D

    1997-12-20

    In algorithms for the atmospheric correction of visible and near-IR satellite observations of the Earth's surface, it is generally assumed that the spectral variation of aerosol optical depth is characterized by an Ångström power law or similar dependence. In an iterative fitting algorithm for atmospheric correction of ocean color imagery over case 2 waters, this assumption leads to an inability to retrieve the aerosol type, and to spectral effects actually caused by the water contents being attributed to the aerosol. An improvement to this algorithm is described in which the spectral variation of optical depth is calculated as a function of aerosol type and relative humidity, and an attempt is made to retrieve the relative humidity in addition to the aerosol type. The aerosol is treated as a mixture of aerosol components (e.g., soot) rather than of aerosol types (e.g., urban). We demonstrate the improvement over the previous method using simulated case 1 and case 2 sea-viewing wide field-of-view sensor data, although the retrieval of relative humidity was not successful.

  11. Spectral Anonymization of Data

    PubMed Central

    Lasko, Thomas A.; Vinterbo, Staal A.

    2011-01-01

    The goal of data anonymization is to allow the release of scientifically useful data in a form that protects the privacy of its subjects. This requires more than simply removing personal identifiers from the data, because an attacker can still use auxiliary information to infer sensitive individual information. Additional perturbation is necessary to prevent these inferences, and the challenge is to perturb the data in a way that preserves its analytic utility. No existing anonymization algorithm provides both perfect privacy protection and perfect analytic utility. We make the new observation that anonymization algorithms are not required to operate in the original vector-space basis of the data, and many algorithms can be improved by operating in a judiciously chosen alternate basis. A spectral basis derived from the data’s eigenvectors is one that can provide substantial improvement. We introduce the term spectral anonymization to refer to an algorithm that uses a spectral basis for anonymization, and we give two illustrative examples. We also propose new measures of privacy protection that are more general and more informative than existing measures, and a principled reference standard with which to define adequate privacy protection. PMID:21373375
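
    One toy illustration of operating in a spectral basis, assuming a simple swap-style perturbation (the paper's own examples differ in detail): rotate the records into the covariance eigenbasis, permute each spectral coordinate independently across records, and rotate back.

        import numpy as np

        def spectral_swap_anonymize(X, rng):
            """Toy spectral-basis anonymization of a (records x attrs) table.

            Second moments are approximately preserved because the spectral
            coordinates are uncorrelated, while individual rows no longer
            correspond to original records. Illustrative only; this is not
            a proof of any formal privacy guarantee.
            """
            mu = X.mean(axis=0)
            _, V = np.linalg.eigh(np.cov(X - mu, rowvar=False))
            S = (X - mu) @ V                   # spectral coordinates
            for j in range(S.shape[1]):
                S[:, j] = rng.permutation(S[:, j])
            return S @ V.T + mu

        rng = np.random.default_rng(1)
        X = rng.multivariate_normal([0, 0], [[2.0, 1.2], [1.2, 1.0]], 500)
        Xa = spectral_swap_anonymize(X, rng)
        print(np.cov(X, rowvar=False).round(2))
        print(np.cov(Xa, rowvar=False).round(2))  # analytic utility survives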

  12. Hazardous gas detection for FTIR-based hyperspectral imaging system using DNN and CNN

    NASA Astrophysics Data System (ADS)

    Kim, Yong Chan; Yu, Hyeong-Geun; Lee, Jae-Hoon; Park, Dong-Jo; Nam, Hyun-Woo

    2017-10-01

    Recently, hyperspectral imaging systems (HIS) with Fourier transform infrared (FTIR) spectrometers have been widely used due to their strengths in detecting gaseous fumes. Even though numerous algorithms for detecting gaseous fumes have been studied, it is still difficult to detect target gases properly because of atmospheric interference substances and the unclear characteristics of low-concentration gases. In this paper, we propose detection algorithms for classifying hazardous gases using a deep neural network (DNN) and a convolutional neural network (CNN). For both the DNN and the CNN, spectral signal preprocessing, e.g., offset, noise, and baseline removal, is carried out. In the DNN algorithm, the preprocessed spectral signals are used as feature maps of a five-layer DNN trained by a stochastic gradient descent (SGD) algorithm (batch size 50) with dropout regularization (ratio 0.7). In the CNN algorithm, preprocessed spectral signals are trained with 1 × 3 convolution layers and 1 × 2 max-pooling layers. As a result, the proposed algorithms improve the classification accuracy rate by 1.5% over the existing support vector machine (SVM) algorithm for detecting and classifying hazardous gases.
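
    A hedged sketch of such a 1-D CNN in PyTorch; the channel counts, spectrum length, and number of gas classes are assumptions, while the 1 × 3 convolutions, 1 × 2 max-pooling, and 0.7 dropout follow the description above.

        import torch
        import torch.nn as nn

        n_samples, n_classes = 512, 5   # assumed spectrum length / gas classes

        model = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),   # 1 x 3 convolutions
            nn.ReLU(),
            nn.MaxPool1d(2),                              # 1 x 2 max-pooling
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Flatten(),
            nn.Dropout(0.7),                              # dropout ratio 0.7
            nn.Linear(32 * (n_samples // 4), n_classes),
        )

        x = torch.randn(50, 1, n_samples)   # a batch of preprocessed spectra
        print(model(x).shape)               # torch.Size([50, 5]) class logits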

  13. Spectral multigrid methods for the solution of homogeneous turbulence problems

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Zang, T. A.; Hussaini, M. Y.

    1987-01-01

    New three-dimensional spectral multigrid algorithms are analyzed and implemented to solve the variable coefficient Helmholtz equation. Periodicity is assumed in all three directions which leads to a Fourier collocation representation. Convergence rates are theoretically predicted and confirmed through numerical tests. Residual averaging results in a spectral radius of 0.2 for the variable coefficient Poisson equation. In general, non-stationary Richardson must be used for the Helmholtz equation. The algorithms developed are applied to the large-eddy simulation of incompressible isotropic turbulence.

  14. Sensitive Dual Color in vivo Bioluminescence Imaging Using a New Red Codon Optimized Firefly Luciferase and a Green Click Beetle Luciferase

    DTIC Science & Technology

    2011-04-01

    Sensitive Dual Color In Vivo Bioluminescence Imaging Using a New Red Codon Optimized Firefly Luciferase and a Green Click Beetle Luciferase Laura... 20 nm). Spectral unmixing algorithms were applied to the images, where good separation of signals was observed. Furthermore, HEK293 cells that... spectral emissions using a suitable spectral unmixing algorithm. This new D-luciferin-dependent reporter gene couplet opens up the possibility in the future

  15. Spectral implementation of some quantum algorithms by one- and two-dimensional nuclear magnetic resonance

    NASA Astrophysics Data System (ADS)

    Das, Ranabir; Kumar, Anil

    2004-10-01

    Quantum information processing has been effectively demonstrated on a small number of qubits by nuclear magnetic resonance. An important subroutine in any computing is the readout of the output. "Spectral implementation," originally suggested by Z. L. Madi, R. Bruschweiler, and R. R. Ernst [J. Chem. Phys. 109, 10603 (1999)], provides an elegant method of readout with the use of an extra "observer" qubit. At the end of the computation, detection of the observer qubit provides the output via the multiplet structure of its spectrum. In spectral implementation by a two-dimensional experiment, the observer qubit retains the memory of the input state during computation, thereby providing correlated information on input and output in the same spectrum. Spectral implementation of Grover's search algorithm, approximate quantum counting, a modified version of the Bernstein-Vazirani problem, and Hogg's algorithm is demonstrated here in three- and four-qubit systems.

  16. Study on text mining algorithm for ultrasound examination of chronic liver diseases based on spectral clustering

    NASA Astrophysics Data System (ADS)

    Chang, Bingguo; Chen, Xiaofei

    2018-05-01

    Ultrasonography is an important examination for the diagnosis of chronic liver disease. The doctor reports liver indicators and suggests the patient's condition according to the description in the ultrasound report. With the rapid increase in the amount of ultrasound report data, the workload for professional physicians to manually classify ultrasound results increases significantly. In this paper, we use the spectral clustering method to cluster the descriptions in ultrasound reports and automatically generate the ultrasonic diagnosis by machine learning. 110 groups of ultrasound examination reports of chronic liver disease were selected as test samples in this experiment; the results of spectral clustering were validated and compared against the k-means clustering algorithm. The results show that the accuracy of spectral clustering is 92.73%, higher than that of the k-means clustering algorithm, providing powerful ultrasound-assisted diagnosis for patients with chronic liver disease.
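
    A minimal sketch of the pipeline with scikit-learn, assuming TF-IDF features and a cosine-similarity affinity; the report snippets are invented English stand-ins for the actual ultrasound descriptions.

        from sklearn.cluster import SpectralClustering
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # Hypothetical stand-ins for ultrasound report descriptions.
        reports = [
            "liver parenchyma coarse, echogenicity increased, surface nodular",
            "liver size normal, parenchyma homogeneous, vessels clear",
            "liver shrunken, nodular surface, splenomegaly, ascites present",
            "normal liver echo pattern, no focal lesion",
        ]

        X = TfidfVectorizer().fit_transform(reports)
        affinity = cosine_similarity(X)       # similarity graph over reports
        labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                    random_state=0).fit_predict(affinity)
        print(labels)   # e.g. cirrhotic-sounding vs. normal-sounding groups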

  17. A Spectral Reconstruction Algorithm of Miniature Spectrometer Based on Sparse Optimization and Dictionary Learning.

    PubMed

    Zhang, Shang; Dong, Yuhan; Fu, Hongyan; Huang, Shao-Lun; Zhang, Lin

    2018-02-22

    The miniaturization of spectrometers can broaden the application area of spectrometry and has huge academic and industrial value. Among the various miniaturization approaches, filter-based miniaturization is a promising implementation that utilizes broadband filters with distinct transmission functions. Mathematically, filter-based spectral reconstruction can be modeled as solving a system of linear equations. In this paper, we propose an algorithm for spectral reconstruction based on sparse optimization and dictionary learning. To verify the feasibility of the reconstruction algorithm, we design and implement a simple prototype of a filter-based miniature spectrometer. The experimental results demonstrate that sparse optimization is well applicable to spectral reconstruction whether the spectra are directly sparse or not. As for the non-directly sparse spectra, their sparsity can be enhanced by dictionary learning. In conclusion, the proposed approach has a bright application prospect for fabricating a practical miniature spectrometer.
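
    The sparse-optimization step can be sketched as an L1-regularized least-squares problem; the filter matrix and test spectrum below are synthetic, and for non-directly-sparse spectra one would substitute a learned dictionary D so that s = Dz with z sparse.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        m, n = 32, 128                    # 32 broadband filters, 128 channels
        A = rng.random((m, n))            # measured filter transmission curves

        s_true = np.zeros(n)              # a directly sparse test spectrum
        s_true[[20, 55, 90]] = [1.0, 0.6, 0.8]
        y = A @ s_true + rng.normal(0, 1e-3, m)   # filter readings

        # L1-regularized inversion of the underdetermined linear system.
        lasso = Lasso(alpha=1e-4, positive=True, max_iter=50000)
        lasso.fit(A, y)
        print(np.flatnonzero(lasso.coef_ > 0.1))  # indices near [20, 55, 90]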

  18. A Spectral Reconstruction Algorithm of Miniature Spectrometer Based on Sparse Optimization and Dictionary Learning

    PubMed Central

    Zhang, Shang; Fu, Hongyan; Huang, Shao-Lun; Zhang, Lin

    2018-01-01

    The miniaturization of spectrometers can broaden the application area of spectrometry and has huge academic and industrial value. Among the various miniaturization approaches, filter-based miniaturization is a promising implementation that utilizes broadband filters with distinct transmission functions. Mathematically, filter-based spectral reconstruction can be modeled as solving a system of linear equations. In this paper, we propose an algorithm for spectral reconstruction based on sparse optimization and dictionary learning. To verify the feasibility of the reconstruction algorithm, we design and implement a simple prototype of a filter-based miniature spectrometer. The experimental results demonstrate that sparse optimization is well applicable to spectral reconstruction whether the spectra are directly sparse or not. As for the non-directly sparse spectra, their sparsity can be enhanced by dictionary learning. In conclusion, the proposed approach has a bright application prospect for fabricating a practical miniature spectrometer. PMID:29470406

  19. Application of modern radiative transfer tools to model laboratory quartz emissivity

    NASA Astrophysics Data System (ADS)

    Pitman, Karly M.; Wolff, Michael J.; Clayton, Geoffrey C.

    2005-08-01

    Planetary remote sensing of regolith surfaces requires the use of theoretical models for the interpretation of constituent grain physical properties. In this work, we review and critically evaluate past efforts to strengthen numerical radiative transfer (RT) models with comparison to a trusted set of nadir-incidence laboratory quartz emissivity spectra. By first establishing a baseline statistical metric to rate successful model-laboratory emissivity spectral fits, we assess the efficacy of hybrid computational solutions (Mie theory + numerically exact RT algorithm) for calculating theoretical emissivity values for micron-sized α-quartz particles in the thermal infrared (2000-200 cm^-1) wave number range. We show that Mie theory, a widely used but poor approximation to irregular grain shape, fails to produce the single scattering albedo and asymmetry parameter needed to arrive at the desired laboratory emissivity values. Through simple numerical experiments, we show that corrections to the single scattering albedo and asymmetry parameter values generated via Mie theory become more necessary with increasing grain size. We directly compare the performance of diffraction subtraction and static structure factor corrections to the single scattering albedo, asymmetry parameter, and emissivity for dense packings of grains. Through these sensitivity studies, we provide evidence that, assuming RT methods work well given sufficiently well-quantified inputs, assumptions about the scatterer itself constitute the most crucial aspect of modeling emissivity values.

  20. In situ optical properties of foliar flavonoids: Implication for non-destructive estimation of flavonoid content.

    PubMed

    Gitelson, Anatoly; Chivkunova, Olga; Zhigalova, Tatiana; Solovchenko, Alexei

    2017-11-01

    Flavonoids (Flv) are a ubiquitous, multifunctional group of phenolics of paramount importance for terrestrial plants, involved in protection from biotic and abiotic stresses, color and chemical signaling, and other functions. Deciphering the in situ absorption of foliar Flv is important but was thought to be impossible due to the strong overlap with other pigments, the complex in situ chemistry of Flv, and sophisticated leaf optics. We deduced the in situ absorbance of foliar Flv, introduced a concept of the specific absorbance spectrum indicative of each pigment group's contribution to light absorption, and provided a rationale for the choice of spectral bands for non-destructive assessment of Flv in leaves with variable content of other pigments, including anthocyanins. Only a narrow band, 400-430 nm, was suitable for Flv assessment; however, the effect of other pigments remained substantial, so subtraction of their contribution was necessary. The devised leaf absorbance-based algorithm allowed estimating Flv with an error below 21%. Absorption by Flv in plant tissues might extend into the blue and can be commensurate with that of chlorophylls and carotenoids. The potential capacity of Flv to shield the cell in situ from visible light might be essential for assessments of the high-light stress tolerance of plants.

  1. Surface emissivity and temperature retrieval for a hyperspectral sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borel, C.C.

    1998-12-01

    With the growing use of hyperspectral imagers, e.g., AVIRIS in the visible and short-wave infrared, there is hope of using such instruments in the mid-wave and thermal IR (TIR) some day. The author believes that this will enable him to get around the present temperature-emissivity separation algorithms by using methods that take advantage of the many channels available in hyperspectral imagers. A simple fact exploited in the novel algorithm is that typical surface emissivity spectra are rather smooth compared with the spectral features introduced by the atmosphere. Thus, an iterative solution technique can be devised that retrieves emissivity spectra based on spectral smoothness. To make the emissivities realistic, atmospheric parameters are varied using approximations, look-up tables derived from a radiative transfer code, and spectral libraries. One such iterative algorithm solves the radiative transfer equation for the radiance at the sensor for the unknown emissivity and uses the blackbody temperature computed in an atmospheric window as a guess for the unknown surface temperature. By varying the surface temperature over a small range, a series of emissivity spectra is calculated, and the one with the smoothest characteristic is chosen. The algorithm was tested on synthetic data using MODTRAN and the Salisbury emissivity database.
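
    A toy version of the smoothness-based retrieval, assuming a simplified TIR radiative transfer with constant atmospheric terms plus sharp downwelling lines; the real algorithm varies atmospheric parameters via look-up tables, which is omitted here.

        import numpy as np

        C1, C2 = 1.191042e8, 1.4387752e4   # W um^4 m^-2 sr^-1, um K

        def planck(wl_um, T):
            """Blackbody spectral radiance, W / (m^2 sr um)."""
            return C1 / (wl_um**5 * (np.exp(C2 / (wl_um * T)) - 1.0))

        def smoothest_temperature(L_sensor, wl, tau, L_up, L_down, T_grid):
            """Pick the trial T whose implied emissivity is smoothest.

            For each T, invert L_sensor = tau*(eps*B(T) + (1-eps)*L_down) + L_up
            for eps; score roughness by summed squared second differences.
            """
            best = None
            for T in T_grid:
                B = planck(wl, T)
                eps = ((L_sensor - L_up) / tau - L_down) / (B - L_down)
                rough = np.sum(np.diff(eps, 2) ** 2)
                if best is None or rough < best[0]:
                    best = (rough, T, eps)
            return best[1], best[2]

        wl = np.linspace(8.0, 12.0, 100)            # TIR window, micrometers
        tau, L_up = 0.85, 0.4                       # toy atmosphere
        L_down = (1.0 + 1.5 * np.exp(-((wl - 9.6) / 0.05) ** 2)
                  + 1.0 * np.exp(-((wl - 11.3) / 0.04) ** 2))  # sharp lines
        eps_true = 0.96 - 0.02 * np.sin(wl)         # smooth emissivity
        L = tau * (eps_true * planck(wl, 300.0) + (1 - eps_true) * L_down) + L_up
        T_hat, eps_hat = smoothest_temperature(L, wl, tau, L_up, L_down,
                                               np.arange(295.0, 305.0, 0.1))
        print(round(T_hat, 1))                      # close to 300.0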

  2. Alternative techniques for high-resolution spectral estimation of spectrally encoded endoscopy

    NASA Astrophysics Data System (ADS)

    Mousavi, Mahta; Duan, Lian; Javidi, Tara; Ellerbee, Audrey K.

    2015-09-01

    Spectrally encoded endoscopy (SEE) is a minimally invasive optical imaging modality capable of fast confocal imaging of internal tissue structures. Modern SEE systems use coherent sources to image deep within the tissue, and data are processed similarly to optical coherence tomography (OCT); however, standard processing of SEE data via the Fast Fourier Transform (FFT) leads to degradation of the axial resolution as the bandwidth of the source shrinks, resulting in a well-known trade-off between speed and axial resolution. Recognizing that the FFT, as a general spectral estimation algorithm, takes into account only the samples collected by the detector, in this work we investigate alternative high-resolution spectral estimation algorithms that exploit information such as sparsity and the general position of the bulk sample to improve the axial resolution of processed SEE data. We validate the performance of these algorithms using both MATLAB simulations and analysis of experimental results generated from a home-built OCT system to simulate an SEE system with variable scan rates. Our results open a new door towards using non-FFT algorithms to generate higher quality (i.e., higher resolution) SEE images at correspondingly fast scan rates, resulting in systems that are more accurate and more comfortable for patients due to the reduced imaging time.

  3. Adaptation of a Hyperspectral Atmospheric Correction Algorithm for Multi-spectral Ocean Color Data in Coastal Waters. Chapter 3

    NASA Technical Reports Server (NTRS)

    Gao, Bo-Cai; Montes, Marcos J.; Davis, Curtiss O.

    2003-01-01

    This SIMBIOS contract supports several activities over its three-year time-span. These include certain computational aspects of atmospheric correction, including the modification of our hyperspectral atmospheric correction algorithm Tafkaa for various multi-spectral instruments, such as SeaWiFS, MODIS, and GLI. Additionally, since absorbing aerosols are becoming common in many coastal areas, we are making the model calculations to incorporate various absorbing aerosol models into tables used by our Tafkaa atmospheric correction algorithm. Finally, we have developed the algorithms to use MODIS data to characterize thin cirrus effects on aerosol retrieval.

  4. Spectral methods to detect surface mines

    NASA Astrophysics Data System (ADS)

    Winter, Edwin M.; Schatten Silvious, Miranda

    2008-04-01

    Over the past five years, advances have been made in the spectral detection of surface mines under minefield detection programs at the U. S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD). The problem of detecting surface land mines ranges from the relatively simple, the detection of large anti-vehicle mines on bare soil, to the very difficult, the detection of anti-personnel mines in thick vegetation. While spatial and spectral approaches can be applied to the detection of surface mines, spatial-only detection requires many pixels-on-target such that the mine is actually imaged and shape-based features can be exploited. This method is unreliable in vegetated areas because only part of the mine may be exposed, while spectral detection is possible without the mine being resolved. At NVESD, hyperspectral and multi-spectral sensors throughout the reflection and thermal spectral regimes have been applied to the mine detection problem. Data has been collected on mines in forest and desert regions and algorithms have been developed both to detect the mines as anomalies and to detect the mines based on their spectral signature. In addition to the detection of individual mines, algorithms have been developed to exploit the similarities of mines in a minefield to improve their detection probability. In this paper, the types of spectral data collected over the past five years will be summarized along with the advances in algorithm development.

  5. Molecular spectral imaging system for quantitative immunohistochemical analysis of early diabetic retinopathy.

    PubMed

    Li, Qingli; Zhang, Jingfa; Wang, Yiting; Xu, Guoteng

    2009-12-01

    A molecular spectral imaging system has been developed based on microscopy and spectral imaging technology. The system is capable of acquiring molecular spectral images from 400 nm to 800 nm with 2 nm wavelength increments. The basic principles, instrumental system, and system calibration method, as well as applications for calculating stain uptake by tissues, are introduced. As a case study, the system is used for determining the pathogenesis of diabetic retinopathy and evaluating the therapeutic effects of erythropoietin. Molecular spectral images of retinal sections of normal, diabetic, and treated rats were collected and analyzed. The typical transmittance curves of positive spots stained for albumin and advanced glycation end products are retrieved from the molecular spectral data with the spectral response calibration algorithm. To explore and evaluate the protective effect of erythropoietin (EPO) on retinal albumin leakage of streptozotocin-induced diabetic rats, an algorithm based on the Beer-Lambert law is presented. The algorithm can assess the uptake, by histologic retinal sections, of stains used in quantitative pathology to label albumin leakage and advanced glycation end-product formation. Experimental results show that the system is helpful for the ophthalmologist in revealing the pathogenesis of diabetic retinopathy and exploring the protective effect of erythropoietin on the retinal cells of diabetic rats. It also highlights the potential of molecular spectral imaging technology to provide more effective and reliable diagnostic criteria in pathology.
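
    A minimal sketch of the Beer-Lambert step, assuming transmittance images and placeholder band indices for the stain's absorption band:

        import numpy as np

        def absorbance(transmittance):
            """Beer-Lambert absorbance A = -log10(T), clipped for stability."""
            return -np.log10(np.clip(transmittance, 1e-6, 1.0))

        # Toy spectral image: 100 x 100 pixels, transmittance at 200 bands.
        T = np.random.default_rng(0).uniform(0.5, 1.0, (100, 100, 200))
        A = absorbance(T)
        # Under Beer-Lambert, absorbance scales with concentration x path
        # length, so absorbance summed over the stain's band (placeholder
        # indices 50:80) is a relative per-pixel measure of stain uptake.
        uptake = A[..., 50:80].sum(axis=-1)
        print(uptake.shape)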

  6. Demosaicking for full motion video 9-band SWIR sensor

    NASA Astrophysics Data System (ADS)

    Kanaev, Andrey V.; Rawhouser, Marjorie; Kutteruf, Mary R.; Yetzbacher, Michael K.; DePrenger, Michael J.; Novak, Kyle M.; Miller, Corey A.; Miller, Christopher W.

    2014-05-01

    Short wave infrared (SWIR) spectral imaging systems are vital for Intelligence, Surveillance, and Reconnaissance (ISR) applications because of their ability to autonomously detect targets and classify materials. Typically, spectral imagers are incapable of providing Full Motion Video (FMV) because of their reliance on line scanning. We enable FMV capability for a SWIR multi-spectral camera by creating a repeating pattern of 3x3 spectral filters on a staring focal plane array (FPA). In this paper we present imagery from an FMV SWIR camera with nine discrete bands and discuss the image processing algorithms necessary for its operation. The main task of image processing in this case is demosaicking of the spectral bands, i.e., reconstructing full spectral images at the original FPA resolution from the spatially subsampled and incomplete spectral data acquired with the chosen filter array pattern. To the best of the authors' knowledge, demosaicking algorithms for nine or more equally sampled bands have not been reported before. Moreover, all existing algorithms developed for demosaicking visible color filter arrays with fewer than nine colors assume either certain relationships between the visible colors, which are not valid for SWIR imaging, or the presence of one color band with a higher sampling rate than the rest of the bands, which does not conform to our spectral filter pattern. We discuss and present results for two novel approaches to demosaicking: interpolation using multi-band edge information, and application of multi-frame super-resolution to single-frame resolution enhancement of multi-spectral, spatially multiplexed images.

  7. GIFTS SM EDU Level 1B Algorithms

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Gazarik, Michael J.; Reisse, Robert A.; Johnson, David G.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the GIFTS SM EDU Level 1B algorithms involved in the calibration. The GIFTS Level 1B calibration procedures can be subdivided into four blocks. In the first block, the measured raw interferograms are first corrected for detector nonlinearity distortion, followed by a complex filtering and decimation procedure. In the second block, a phase correction algorithm is applied to the filtered and decimated complex interferograms. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected spectrum. The phase correction and spectral smoothing operations are performed on a set of interferogram scans for both ambient and hot blackbody references. To continue with the calibration, we compute the spectral responsivity based on the previous results, from which the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. We can then estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. Correction schemes that compensate for fore-optics offsets and off-axis effects are also implemented. In the third block, we develop an efficient method of generating pixel performance assessments, and a random pixel selection scheme is designed based on the pixel performance evaluation. Finally, in the fourth block, the single-pixel algorithms are applied to the entire FPA.

  8. The Brown Dwarf Kinematics Project (BDKP). III. Parallaxes for 70 Ultracool Dwarfs

    DTIC Science & Technology

    2012-06-10

    highest mass exoplanets (Saumon et al. 1996; Chabrier & Baraffe 1997). In early 2000, the standard stellar spectral classification scheme was extended... The routine xdimsum was used to perform sky subtractions and mask holes from bright stars... epoch. The precise centroids of the stars were measured by binning the stellar profile in the X and Y directions using a box of ∼2′′ around the pixel

  9. Investigation of physical parameters in stellar flares observed by GINGA

    NASA Technical Reports Server (NTRS)

    Stern, Robert A.

    1994-01-01

    This program involves analysis and interpretation of results from GINGA Large Area Counter (LAC) observations from a group of large stellar x-ray flares. All LAC data are re-extracted using the standard Hayashida method of LAC background subtraction and analyzed using various models available with the XSPEC spectral fitting program. Temperature-emission measure histories are available for a total of 5 flares observed by GINGA. These will be used to compare physical parameters of these flares with solar and stellar flare models.

  11. Motion compensation in digital subtraction angiography using graphics hardware.

    PubMed

    Deuerling-Zheng, Yu; Lell, Michael; Galant, Adam; Hornegger, Joachim

    2006-07-01

    An inherent disadvantage of digital subtraction angiography (DSA) is its sensitivity to patient motion, which causes artifacts in the subtraction images. These artifacts can reduce the diagnostic value of the technique, so automated, fast, and accurate motion compensation is required. To cope with this requirement, we first examine a method explicitly designed to detect local motions in DSA. Then, we implement a motion compensation algorithm by means of block matching on modern graphics hardware. Both methods search for maximal local similarity by evaluating a histogram-based measure. In this context, we are the first to have mapped an optimizing search strategy onto graphics hardware while parallelizing block matching. Moreover, we provide an innovative method for creating histograms on graphics hardware with vertex texturing and frame buffer blending. It turns out that both methods can effectively correct the artifacts in most cases, and the hardware implementation of block matching performs much faster: the displacements of two 1024 x 1024 images can be calculated at 3 frames/s with integer precision or 2 frames/s with sub-pixel precision. Preliminary clinical evaluation indicates that computation with integer precision may already be sufficient.
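
    A plain-CPU sketch of block matching for motion estimation in DSA; sum-of-absolute-differences and an exhaustive search stand in for the paper's histogram-based measure and GPU-optimized search strategy.

        import numpy as np

        def best_shift(block, search_area):
            """Exhaustive block matching: position minimizing the SAD score."""
            bh, bw = block.shape
            best, best_pos = np.inf, (0, 0)
            for dy in range(search_area.shape[0] - bh + 1):
                for dx in range(search_area.shape[1] - bw + 1):
                    sad = np.abs(search_area[dy:dy+bh, dx:dx+bw] - block).sum()
                    if sad < best:
                        best, best_pos = sad, (dy, dx)
            return best_pos

        rng = np.random.default_rng(0)
        mask = rng.random((64, 64))                      # pre-contrast image
        contrast = np.roll(mask, (3, 2), axis=(0, 1))    # patient moved (3, 2)
        block = contrast[19:35, 18:34]                   # block in live image
        dy, dx = best_shift(block, mask[10:48, 10:48])   # search in the mask
        print(19 - (10 + dy), 18 - (10 + dx))            # recovered motion (3, 2)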

  12. Characterizing Sky Spectra Using SDSS BOSS Data

    NASA Astrophysics Data System (ADS)

    Florez, Lina Maria; Strauss, Michael A.

    2018-01-01

    In the optical/near-infrared spectra gathered by a ground-based telescope observing very faint sources, the strengths of the emission lines due to the Earth's atmosphere can be many times larger than the fluxes of the sources of interest. Thus the limiting factor in faint-object spectroscopy is the degree to which systematics in the sky subtraction can be minimized. Longwards of 6000 Angstroms, the night-sky spectrum is dominated by multiple vibrational/rotational transitions of the OH radical from the upper atmosphere. While the wavelengths of these lines are the same in each sky spectrum, their relative strengths vary considerably as a function of time and position on the sky. The better we can model their strengths, the better we can hope to subtract them off. We expect the strengths of lines from common upper energy levels to be correlated with one another. We used flux-calibrated sky spectra from the Sloan Digital Sky Survey Baryon Oscillation Spectroscopic Survey (SDSS BOSS) to explore these correlations. Our aim is to use these correlations to create improved sky subtraction algorithms for the Prime Focus Spectrograph (PFS) on the 8.2-meter Subaru Telescope. When PFS starts gathering data in 2019, it will be the most powerful multi-object spectrograph in the world. Since PFS will be gathering data on sources as faint as 24th magnitude and fainter, it is of utmost importance to be able to accurately measure and subtract sky spectra from the data.

  13. Dissociating functional brain networks by decoding the between-subject variability

    PubMed Central

    Seghier, Mohamed L.; Price, Cathy J.

    2009-01-01

    In this study we illustrate how the functional networks involved in a single task (e.g. the sensory, cognitive and motor components) can be segregated without cognitive subtractions at the second-level. The method used is based on meaningful variability in the patterns of activation between subjects with the assumption that regions belonging to the same network will have comparable variations from subject to subject. fMRI data were collected from thirty nine healthy volunteers who were asked to indicate with a button press if visually presented words were semantically related or not. Voxels were classified according to the similarity in their patterns of between-subject variance using a second-level unsupervised fuzzy clustering algorithm. The results were compared to those identified by cognitive subtractions of multiple conditions tested in the same set of subjects. This illustrated that the second-level clustering approach (on activation for a single task) was able to identify the functional networks observed using cognitive subtractions (e.g. those associated with vision, semantic associations or motor processing). In addition the fuzzy clustering approach revealed other networks that were not dissociated by the cognitive subtraction approach (e.g. those associated with high- and low-level visual processing and oculomotor movements). We discuss the potential applications of our method which include the identification of “hidden” or unpredicted networks as well as the identification of systems level signatures for different subgroupings of clinical and healthy populations. PMID:19150501

  14. Spectral binning for mitigation of polarization mode dispersion artifacts in catheter-based optical frequency domain imaging

    PubMed Central

    Villiger, Martin; Zhang, Ellen Ziyi; Nadkarni, Seemantini K.; Oh, Wang-Yuhl; Vakoc, Benjamin J.; Bouma, Brett E.

    2013-01-01

    Polarization mode dispersion (PMD) has been recognized as a significant barrier to sensitive and reproducible birefringence measurements with fiber-based, polarization-sensitive optical coherence tomography systems. Here, we present a signal processing strategy that reconstructs the local retardation robustly in the presence of system PMD. The algorithm uses a spectral binning approach to limit the detrimental impact of system PMD and benefits from the final averaging of the PMD-corrected retardation vectors of the spectral bins. The algorithm was validated with numerical simulations and experimental measurements of a rubber phantom. When applied to the imaging of human cadaveric coronary arteries, the algorithm was found to yield a substantial improvement in the reconstructed birefringence maps. PMID:23938487

  15. Convex Accelerated Maximum Entropy Reconstruction

    PubMed Central

    Worley, Bradley

    2016-01-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476

  16. Processing MALDI mass spectra to improve mass spectral direct tissue analysis

    NASA Astrophysics Data System (ADS)

    Norris, Jeremy L.; Cornett, Dale S.; Mobley, James A.; Andersson, Malin; Seeley, Erin H.; Chaurand, Pierre; Caprioli, Richard M.

    2007-02-01

    Profiling and imaging biological specimens using MALDI mass spectrometry has significant potential to contribute to our understanding and diagnosis of disease. The technique is efficient and high-throughput, providing a wealth of data about the biological state of the sample from a very simple and direct experiment. However, for these techniques to be put to use for clinical purposes, the approaches used to process and analyze the data must improve. This study examines some of the existing tools for baseline subtraction, normalization, alignment, and spectral noise removal for MALDI data, comparing the advantages of each. A preferred workflow is presented that can be easily implemented for data in ASCII format. The advantages of using such an approach are discussed for both molecular profiling and imaging mass spectrometry.
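
    One common preprocessing recipe can be sketched as follows (baseline subtraction by a smoothed rolling minimum, mild denoising, total-ion-current normalization); the window sizes are arbitrary assumptions, and alignment across spectra is omitted.

        import numpy as np
        from scipy.ndimage import minimum_filter1d, uniform_filter1d

        def preprocess(intensities, base_win=301, smooth_win=9):
            """Baseline-subtract, denoise, and normalize one MALDI spectrum."""
            baseline = uniform_filter1d(
                minimum_filter1d(intensities, size=base_win), size=base_win)
            y = np.clip(intensities - baseline, 0, None)
            y = uniform_filter1d(y, size=smooth_win)   # mild noise smoothing
            tic = y.sum()                              # total-ion-current norm
            return y / tic if tic > 0 else y

        mz = np.linspace(2000, 20000, 5000)
        spectrum = (40 * np.exp(-(mz - 8000) ** 2 / 50.0 ** 2)   # a peak
                    + 30 * np.exp(-mz / 6000)                    # baseline drift
                    + np.random.default_rng(0).normal(0, 0.5, mz.size))
        clean = preprocess(spectrum)
        print(mz[np.argmax(clean)])   # ~8000; the peak survives baseline removal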

  17. Enhancement of digital images through band ratio techniques for geological applications

    NASA Technical Reports Server (NTRS)

    Filho, R. A. (Principal Investigator); Vitorello, I.

    1982-01-01

    The fundamentals of using band ratio techniques to enhance spectral signatures of geologic interest are discussed. The path radiance, the additive term of the measured radiance at any given wavelength, is almost completely eliminated from LANDSAT images by subtracting the smallest radiance value measured in each channel, found in shadows caused by topographic relief and clouds, and over deep, clear water bodies. By ratioing successive spectral channels, the effect of solar elevation angle is minimized, and the product expresses, to a first approximation, a relationship between reflectances, which are intrinsic characteristics of the targets. Ratios between noncorrelated channels, such as R 7/4, R 7/5, and R 6/4, are useful for showing variations in the vegetation cover, probably related to geobotanical associations.
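
    A minimal sketch of the procedure described above: subtract a per-band dark value approximating the additive path radiance, then ratio the bands.

        import numpy as np

        def path_corrected_ratio(band_a, band_b, dark_a=None, dark_b=None):
            """Band ratio after first-order path-radiance (haze) removal.

            Each band's additive path radiance is approximated by its
            minimum (ideally sampled over deep shadow or clear deep water)
            and subtracted before ratioing.
            """
            dark_a = band_a.min() if dark_a is None else dark_a
            dark_b = band_b.min() if dark_b is None else dark_b
            num = band_a.astype(float) - dark_a
            den = band_b.astype(float) - dark_b
            return np.divide(num, den, out=np.zeros_like(num), where=den > 0)

        # Toy Landsat-like bands with different additive haze levels.
        reflectance = np.random.default_rng(0).random((100, 100))
        band7 = 0.9 * reflectance + 12.0   # haze adds ~12 counts in band 7
        band4 = 0.5 * reflectance + 30.0   # and ~30 counts in band 4
        print(path_corrected_ratio(band7, band4).mean())  # ~1.8 = 0.9 / 0.5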

  18. New correction procedures for the fast field program which extend its range

    NASA Technical Reports Server (NTRS)

    West, M.; Sack, R. A.

    1990-01-01

    A fast field program (FFP) algorithm was developed, based on the method of Lee et al., for the prediction of sound pressure level from low-frequency, high-intensity sources. To permit accurate predictions at distances greater than 2 km, new correction procedures had to be included in the algorithm. Certain functions, whose Hankel transforms can be determined analytically, are subtracted from the depth-dependent Green's function. The distance response is then obtained as the sum of these transforms and the Fast Fourier Transform (FFT) of the residual k-dependent function. One procedure, which permits the elimination of most complex exponentials, has allowed significant changes in the structure of the FFP algorithm, resulting in a substantial reduction in computation time.

  19. Fast computational scheme of image compression for 32-bit microprocessors

    NASA Technical Reports Server (NTRS)

    Kasperovich, Leonid

    1994-01-01

    This paper presents a new computational scheme for image compression based on the discrete cosine transform (DCT), underlying the JPEG and MPEG international standards. The algorithm for the 2-D DCT computation uses integer operations only (register shifts and additions/subtractions); its computational complexity is about 8 additions per image pixel. As a meaningful example of an on-board image compression application, we consider the software implementation of the algorithm for the Mars Rover (Marsokhod, in Russian) imaging system being developed as part of the Mars-96 international space project. It is shown that a fast software solution for 32-bit microprocessors may compete with DCT-based image compression hardware.

  20. Three-dimensional monochromatic x-ray CT

    NASA Astrophysics Data System (ADS)

    Saito, Tsuneo; Kudo, Hiroyuki; Takeda, Tohoru; Itai, Yuji; Tokumori, Kenji; Toyofuku, Fukai; Hyodo, Kazuyuki; Ando, Masami; Nishimura, Ktsuyuki; Uyama, Chikao

    1995-08-01

    In this paper, we describe a 3D computed tomography (3D CT) system using monochromatic x-rays generated by synchrotron radiation, which performs direct reconstruction of a 3D volume image of an object from its cone-beam projections. For the development of the 3D CT, the scanning orbit of the x-ray source needed to obtain complete 3D information about an object, and the corresponding 3D image reconstruction algorithm, are considered. Computer simulation studies demonstrate the validity of the proposed scanning method and reconstruction algorithm. A prototype experimental 3D CT system was constructed. Basic phantom examinations and a material-specific CT image obtained by energy subtraction in this experimental system are shown.

  1. Synthetic aperture radar target detection, feature extraction, and image formation techniques

    NASA Technical Reports Server (NTRS)

    Li, Jian

    1994-01-01

    This report presents new algorithms for target detection, feature extraction, and image formation with the synthetic aperture radar (SAR) technology. For target detection, we consider target detection with SAR and coherent subtraction. We also study how the image false alarm rates are related to the target template false alarm rates when target templates are used for target detection. For feature extraction from SAR images, we present a computationally efficient eigenstructure-based 2D-MODE algorithm for two-dimensional frequency estimation. For SAR image formation, we present a robust parametric data model for estimating high resolution range signatures of radar targets and for forming high resolution SAR images.

  2. The multi-spectral line-polarization MSE system on Alcator C-Mod

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mumgaard, R. T., E-mail: mumgaard@psfc.mit.edu; Khoury, M.; Scott, S. D.

    A multi-spectral line-polarization motional Stark effect (MSE-MSLP) diagnostic has been developed for the Alcator C-Mod tokamak wherein the Stokes vector is measured in multiple wavelength bands simultaneously on the same sightline to enable better polarized background subtraction. A ten-sightline, four-wavelength MSE-MSLP detector system was designed, constructed, and qualified. This system consists of a high-throughput polychromator for each sightline designed to provide large étendue and precise spectral filtering in a cost-effective manner. Each polychromator utilizes four narrow bandpass interference filters and four custom large-diameter avalanche photodiode detectors. Two filters collect light to the red and blue of the MSE emission spectrum while the remaining two filters collect the beam pi and sigma emission generated at the same viewing volume. The filter wavelengths are temperature tuned using custom ovens in an automated manner. All system functions are remote controllable and the system can be easily retrofitted to existing single-wavelength line-polarization MSE systems.

  3. The multi-spectral line-polarization MSE system on Alcator C-Mod

    DOE PAGES

    Mumgaard, R. T.; Scott, S. D.; Khoury, M.

    2016-08-17

    A multi-spectral line-polarization motional Stark effect (MSE-MSLP) diagnostic has been developed for the Alcator C-Mod tokamak wherein the Stokes vector is measured in multiple wavelength bands simultaneously on the same sightline to enable better polarized background subtraction. A ten-sightline, four-wavelength MSE-MSLP detector system was designed, constructed, and qualified. This system consists of a high-throughput polychromator for each sightline designed to provide large étendue and precise spectral filtering in a cost-effective manner. Each polychromator utilizes four narrow bandpass interference filters and four custom large-diameter avalanche photodiode detectors. Two filters collect light to the red and blue of the MSE emission spectrum while the remaining two filters collect the beam pi and sigma emission generated at the same viewing volume. The filter wavelengths are temperature tuned using custom ovens in an automated manner. Furthermore, all system functions are remote controllable and the system can be easily retrofitted to existing single-wavelength line-polarization MSE systems.

  4. Neural correlates of mathematical problem solving.

    PubMed

    Lin, Chun-Ling; Jung, Melody; Wu, Ying Choon; She, Hsiao-Ching; Jung, Tzyy-Ping

    2015-03-01

    This study explores electroencephalography (EEG) brain dynamics associated with mathematical problem solving. EEG and solution latencies (SLs) were recorded as 11 neurologically healthy volunteers worked on intellectually challenging math puzzles that involved combining four single-digit numbers through basic arithmetic operators (addition, subtraction, division, multiplication) to create an arithmetic expression equaling 24. Estimates of EEG spectral power were computed in three frequency bands - θ (4-7 Hz), α (8-13 Hz) and β (14-30 Hz) - over a widely distributed montage of scalp electrode sites. The magnitude of the power estimates was found to change linearly with SLs: relative to baseline spectral power, theta power increased with longer SLs, while alpha and beta power tended to decrease. Further, the topographic distribution of spectral fluctuations was characterized by more pronounced asymmetries along the left-right and anterior-posterior axes for solutions that involved a longer search phase. These findings reveal for the first time the topography and dynamics of the EEG spectral activities important for sustained solution search during arithmetical problem solving.
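
    Band-limited spectral power of the kind reported above is commonly estimated with Welch's method; a hedged sketch with an assumed sampling rate and the study's three band definitions:

    ```python
    import numpy as np
    from scipy.signal import welch

    def band_power(eeg, fs, bands):
        """Absolute spectral power per band for a single EEG channel,
        estimated with Welch's method. The sampling rate and segment
        length are illustrative choices, not the study's pipeline.
        """
        f, pxx = welch(eeg, fs=fs, nperseg=int(2 * fs))
        return {name: np.trapz(pxx[(f >= lo) & (f <= hi)],
                               f[(f >= lo) & (f <= hi)])
                for name, (lo, hi) in bands.items()}

    fs = 256.0                                   # Hz, assumed
    bands = {"theta": (4, 7), "alpha": (8, 13), "beta": (14, 30)}
    print(band_power(np.random.randn(int(30 * fs)), fs, bands))
    ```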

  5. A GIHS-based spectral preservation fusion method for remote sensing images using edge restored spectral modulation

    NASA Astrophysics Data System (ADS)

    Zhou, Xiran; Liu, Jun; Liu, Shuguang; Cao, Lei; Zhou, Qiming; Huang, Huawen

    2014-02-01

    High spatial resolution and spectral fidelity are basic standards for evaluating an image fusion algorithm. Numerous fusion methods for remote sensing images have been developed. Some of these methods are based on the intensity-hue-saturation (IHS) transform and the generalized IHS (GIHS), which may cause serious spectral distortion. Spectral distortion in the GIHS is proven to result from changes in saturation during fusion. Therefore, reducing such changes can achieve high spectral fidelity. A GIHS-based spectral preservation fusion method that can theoretically reduce spectral distortion is proposed in this study. The proposed algorithm consists of two steps. The first step is spectral modulation (SM), which uses the Gaussian function to extract spatial details and conduct SM of multispectral (MS) images. This method yields a desirable visual effect without requiring histogram matching between the panchromatic image and the intensity of the MS image. The second step uses the Gaussian convolution function to restore lost edge details during SM. The proposed method is proven effective and shown to provide better results compared with other GIHS-based methods.
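
    For orientation, the GIHS baseline the method builds on injects the Pan-minus-intensity detail into every band; a minimal sketch, with a Gaussian high-pass shown separately as the kind of detail extraction the SM step uses (sigma is an assumption):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def gihs_fuse(ms, pan):
        """Baseline GIHS fusion: ms is (bands, H, W), pan is (H, W),
        co-registered. The same Pan-minus-intensity detail is injected
        into every band. The authors' spectral-modulation and edge
        restoration steps are not reproduced here.
        """
        intensity = ms.mean(axis=0)
        return ms + (pan - intensity)          # broadcast over bands

    def gaussian_detail(pan, sigma=2.0):
        """Gaussian high-pass detail of the kind the SM step extracts
        (sigma is an illustrative choice)."""
        return pan - gaussian_filter(pan, sigma)
    ```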

  6. Method to analyze remotely sensed spectral data

    DOEpatents

    Stork, Christopher L [Albuquerque, NM; Van Benthem, Mark H [Middletown, DE

    2009-02-17

    A fast and rigorous multivariate curve resolution (MCR) algorithm is applied to remotely sensed spectral data. The algorithm is applicable in the solar-reflective spectral region, comprising the visible to the shortwave infrared (ranging from approximately 0.4 to 2.5 μm) and the midwave infrared, and in the thermal emission spectral region, comprising the thermal infrared (ranging from approximately 8 to 15 μm). For example, employing minimal a priori knowledge, notably non-negativity constraints on the extracted endmember profiles and a constant abundance constraint for the atmospheric upwelling component, MCR can be used to successfully compensate thermal infrared hyperspectral images for atmospheric upwelling and, thereby, transmittance effects. Further, MCR can accurately estimate the relative spectral absorption coefficients and thermal contrast distribution of a gas plume component near the minimum detectable quantity.
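
    The core of MCR can be sketched as alternating least squares with non-negativity; this generic version, with clipping in place of a proper constrained solver, omits the patent's constant-abundance constraint:

    ```python
    import numpy as np

    def mcr_als(D, k, n_iter=100, seed=0):
        """Minimal MCR by alternating least squares with non-negativity
        enforced by clipping: D (pixels x bands) is factored as C @ S.T.
        A generic sketch of the core idea only, not the patented method.
        """
        rng = np.random.default_rng(seed)
        S = rng.random((D.shape[1], k))                  # endmember spectra
        for _ in range(n_iter):
            C = np.clip(D @ np.linalg.pinv(S.T), 0, None)    # abundances
            S = np.clip(D.T @ np.linalg.pinv(C.T), 0, None)  # spectra
        return C, S
    ```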

  7. Algorithmic Foundation of Spectral Rarefaction for Measuring Satellite Imagery Heterogeneity at Multiple Spatial Scales

    PubMed Central

    Rocchini, Duccio

    2009-01-01

    Measuring heterogeneity in satellite imagery is an important task. Most measures of spectral diversity have been based on Shannon information theory. However, this approach does not inherently address different scales, ranging from local (hereafter referred to as alpha diversity) to global scales (gamma diversity). The aim of this paper is to propose a method for measuring spectral heterogeneity at multiple scales based on rarefaction curves. An algorithmic solution of rarefaction applied to image pixel values (Digital Numbers, DNs) is provided and discussed; a minimal illustrative sketch follows. PMID:22389600
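
    A Monte Carlo sketch of rarefaction applied to pixel DNs (subsample sizes and draw counts are illustrative, not the paper's exact estimator):

    ```python
    import numpy as np

    def dn_rarefaction(image, sample_sizes, n_draws=100, seed=0):
        """Rarefaction curve over pixel values: the expected number of
        distinct DNs in random subsamples of increasing size (sample
        sizes must not exceed the pixel count).
        """
        rng = np.random.default_rng(seed)
        dns = np.asarray(image).ravel()
        return np.array([
            np.mean([np.unique(rng.choice(dns, size=n, replace=False)).size
                     for _ in range(n_draws)])
            for n in sample_sizes])
    ```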

  8. Mapping the mineralogy and lithology of Canyonlands, Utah with imaging spectrometer data and the multiple spectral feature mapping algorithm

    NASA Technical Reports Server (NTRS)

    Clark, Roger N.; Swayze, Gregg A.; Gallagher, Andrea

    1992-01-01

    The sedimentary sections exposed in the Canyonlands and Arches National Parks region of Utah (generally referred to as 'Canyonlands') consist of sandstones, shales, limestones, and conglomerates. Reflectance spectra of weathered surfaces of rocks from these areas show two components: (1) variations in spectrally detectable mineralogy, and (2) variations in the relative ratios of the absorption bands between minerals. Both types of information can be used together to map each major lithology, and the Clark spectral-feature mapping algorithm is applied for this purpose.

  9. Report of the 1988 2-D Intercomparison Workshop, chapter 3

    NASA Technical Reports Server (NTRS)

    Jackman, Charles H.; Brasseur, Guy; Soloman, Susan; Guthrie, Paul D.; Garcia, Rolando; Yung, Yuk L.; Gray, Lesley J.; Tung, K. K.; Ko, Malcolm K. W.; Isaken, Ivar

    1989-01-01

    Several factors contribute to the errors encountered. With the exception of the line-by-line model, all of the models employ simplifying assumptions that place fundamental limits on their accuracy and range of validity. For example, all 2-D modeling groups use the diffusivity factor approximation. This approximation produces little error in tropospheric H2O and CO2 cooling rates, but can produce significant errors in CO2 and O3 cooling rates at the stratopause. All models suffer from fundamental uncertainties in the shapes and strengths of spectral lines. Thermal flux algorithms being used in 2-D tracer transport models produce cooling rates that differ by as much as 40 percent for the same input model atmosphere. Disagreements of this magnitude are important, since the thermal cooling rates must be subtracted from the almost-equal solar heating rates to derive the net radiative heating rates and the 2-D model diabatic circulation. For much of the annual cycle, the net radiative heating rates are comparable in magnitude to the cooling rate differences described. Many of the models underestimate the cooling rates in the middle and lower stratosphere. The consequences of these errors for the net heating rates and the diabatic circulation will depend on their meridional structure, which was not tested here. Other models underestimate the cooling near 1 mbar. Such errors pose potential problems for future interactive ozone assessment studies, since they could produce artificially high temperatures and increased O3 destruction at these levels. These concerns suggest that a great deal of work is needed to improve the performance of thermal cooling rate algorithms used in the 2-D tracer transport models.

  10. SpecOp: Optimal Extraction Software for Integral Field Unit Spectrographs

    NASA Astrophysics Data System (ADS)

    McCarron, Adam; Ciardullo, Robin; Eracleous, Michael

    2018-01-01

    The Hobby-Eberly Telescope’s new low resolution integral field spectrographs, LRS2-B and LRS2-R, each cover a 12”x6” area on the sky with 280 fibers and generate spectra with resolutions between R=1100 and R=1900. To extract 1-D spectra from the instrument’s 3D data cubes, a program is needed that is flexible enough to work for a wide variety of targets, including continuum point sources, emission line sources, and compact sources embedded in complex backgrounds. We therefore introduce SpecOp, a user-friendly python program for optimally extracting spectra from integral-field unit spectrographs. As input, SpecOp takes a sky-subtracted data cube consisting of images at each wavelength increment set by the instrument’s spectral resolution, and an error file for each count measurement. All of these files are generated by the current LRS2 reduction pipeline. The program then collapses the cube in the image plane using the optimal extraction algorithm detailed by Keith Horne (1986). The various user-selected options include the fraction of the total signal enclosed in a contour-defined region, the wavelength range to analyze, and the precision of the spatial profile calculation. SpecOp can output the weighted counts and errors at each wavelength in various table formats using python’s astropy package. We outline the algorithm used for extraction and explain how the software can be used to easily obtain high-quality 1-D spectra. We demonstrate the utility of the program by applying it to spectra of a variety of quasars and AGNs. In some of these targets, we extract the spectrum of a nuclear point source that is superposed on a spatially extended galaxy.
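
    The weighting SpecOp applies is the standard Horne (1986) optimal extraction; a minimal per-slice sketch, with array shapes assumed for illustration:

    ```python
    import numpy as np

    def horne_extract(cube, var, profile):
        """Optimal extraction in the sense of Horne (1986), applied per
        wavelength slice of a sky-subtracted IFU cube. cube and var are
        (n_wave, H, W); profile is the normalized (H, W) spatial profile.
        A sketch of the algorithm SpecOp implements; SpecOp's actual
        interface and options differ.
        """
        P = profile / profile.sum()
        w = P / var                                # inverse-variance weights
        norm = (w * P).sum(axis=(1, 2))
        flux = (w * cube).sum(axis=(1, 2)) / norm  # optimal 1-D spectrum
        return flux, 1.0 / norm                    # spectrum and its variance
    ```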

  11. Semi-automated detection of trace explosives in fingerprints on strongly interfering surfaces with Raman chemical imaging.

    PubMed

    Tripathi, Ashish; Emmons, Erik D; Wilcox, Phillip G; Guicheteau, Jason A; Emge, Darren K; Christesen, Steven D; Fountain, Augustus W

    2011-06-01

    We have previously demonstrated the use of wide-field Raman chemical imaging (RCI) to detect and identify the presence of trace explosives in contaminated fingerprints. In this current work we demonstrate the detection of trace explosives in contaminated fingerprints on strongly Raman scattering surfaces such as plastics and painted metals using an automated background subtraction routine. We demonstrate the use of partial least squares subtraction to minimize the interfering surface spectral signatures, allowing the detection and identification of explosive materials in the corrected Raman images. The resulting analyses are then visually superimposed on the corresponding bright field images to physically locate traces of explosives. Additionally, we attempt to address the question of whether a complete RCI of a fingerprint is required for trace explosive detection or whether a simple non-imaging Raman spectrum is sufficient. This investigation further demonstrates the ability to nondestructively identify explosives on fingerprints present on commonly found surfaces such that the fingerprint remains intact for further biometric analysis.

  12. Fusion of spectral models for dynamic modeling of sEMG and skeletal muscle force.

    PubMed

    Potluri, Chandrasekhar; Anugolu, Madhavi; Chiu, Steve; Urfer, Alex; Schoen, Marco P; Naidu, D Subbaram

    2012-01-01

    In this paper, we present a method of combining spectral models using a Kullback Information Criterion (KIC) data fusion algorithm. Surface electromyographic (sEMG) signals and their corresponding skeletal muscle force signals are acquired from three sensors and pre-processed using a half-Gaussian filter and a Chebyshev Type-II filter, respectively. Spectral models - Spectral Analysis (SPA), Empirical Transfer Function Estimate (ETFE), and Spectral Analysis with Frequency Dependent Resolution (SPFRD) - are extracted with the sEMG signals as input and the skeletal muscle force as output. These signals are then employed in a System Identification (SI) routine to establish the dynamic models relating the input and output. After the individual models are extracted, they are fused by a probability-based KIC fusion algorithm. The results show that the SPFRD spectral models perform better than the SPA and ETFE models in modeling the frequency content of the sEMG/skeletal muscle force data.

  13. Advanced synthetic image generation models and their application to multi/hyperspectral algorithm development

    NASA Astrophysics Data System (ADS)

    Schott, John R.; Brown, Scott D.; Raqueno, Rolando V.; Gross, Harry N.; Robinson, Gary

    1999-01-01

    The need for robust image data sets for algorithm development and testing has prompted the consideration of synthetic imagery as a supplement to real imagery. The unique ability of synthetic image generation (SIG) tools to supply per-pixel truth allows algorithm writers to test difficult scenarios that would require expensive collection and instrumentation efforts. In addition, SIG data products can supply the user with 'actual' truth measurements of the entire image area that are not subject to measurement error, thereby allowing the user to more accurately evaluate the performance of their algorithm. Advanced algorithms place a high demand on synthetic imagery to reproduce both the spectro-radiometric and spatial character observed in real imagery. This paper describes a synthetic image generation model that strives to include the radiometric processes that affect spectral image formation and capture. In particular, it addresses recent advances in SIG modeling that attempt to capture the spatial/spectral correlation inherent in real images. The model is capable of simultaneously generating imagery from a wide range of sensors, allowing it to generate daylight, low-light-level and thermal image inputs for broadband, multi- and hyperspectral exploitation algorithms.

  14. Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images

    NASA Astrophysics Data System (ADS)

    Awumah, Anna; Mahanti, Prasun; Robinson, Mark

    2016-10-01

    Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion for planetary images are rare, although image fusion is well known for its applications to Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performances were verified with images from the Lunar Reconnaissance Orbiter (LRO) Camera. The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm yields a product of high spatial quality while the Wavelet-based image fusion algorithm best preserves spectral quality among all the algorithms. In this work we show the results of a hybrid IHS-Wavelet image fusion algorithm when applied to LROC MS images. The hybrid method provides the best HRMS product, both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images. [1] Pohl, C., and J. L. Van Genderen, "Review article: Multisensor image fusion in remote sensing: concepts, methods and applications," International Journal of Remote Sensing 19.5 (1998): 823-854. [2] Zhang, Y., "Understanding image fusion," Photogramm. Eng. Remote Sens. 70.6 (2004): 657-661. [3] Mahanti, P., et al., "Enhancement of spatial resolution of the LROC Wide Angle Camera images," XXIII ISPRS Congress Archives (2016).

  15. Effects of signal artefacts on electroencephalography spectral power during sleep: quantifying the effectiveness of automated artefact-rejection algorithms.

    PubMed

    Liu, Jianbo; Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Neal, Maxwell; Cashmere, David J; Germain, Anne; Reifman, Jaques

    2018-02-01

    Electroencephalography (EEG) recordings during sleep are often contaminated by muscle and ocular artefacts, which can affect the results of spectral power analyses significantly. However, the extent to which these artefacts affect EEG spectral power across different sleep states has not been quantified explicitly. Consequently, the effectiveness of automated artefact-rejection algorithms in minimizing these effects has not been characterized fully. To address these issues, we analysed standard 10-channel EEG recordings from 20 subjects during one night of sleep. We compared their spectral power when the recordings were contaminated by artefacts and after we removed them by visual inspection or by using automated artefact-rejection algorithms. During both rapid eye movement (REM) and non-REM (NREM) sleep, muscle artefacts contaminated no more than 5% of the EEG data across all channels. However, they corrupted delta, beta and gamma power levels substantially by up to 126, 171 and 938%, respectively, relative to the power level computed from artefact-free data. Although ocular artefacts were infrequent during NREM sleep, they affected up to 16% of the frontal and temporal EEG channels during REM sleep, primarily corrupting delta power by up to 33%. For both REM and NREM sleep, the automated artefact-rejection algorithms matched power levels to within ~10% of the artefact-free power level for each EEG channel and frequency band. In summary, although muscle and ocular artefacts affect only a small fraction of EEG data, they affect EEG spectral power significantly. This suggests the importance of using artefact-rejection algorithms before analysing EEG data. © 2017 European Sleep Research Society.

  16. Modified fuzzy c-means applied to a Bragg grating-based spectral imager for material clustering

    NASA Astrophysics Data System (ADS)

    Rodríguez, Aida; Nieves, Juan Luis; Valero, Eva; Garrote, Estíbaliz; Hernández-Andrés, Javier; Romero, Javier

    2012-01-01

    We have modified the fuzzy c-means algorithm for an application related to the segmentation of hyperspectral images. The classical fuzzy c-means algorithm uses the Euclidean distance to compute sample membership in each cluster. We have introduced a different distance metric, the Spectral Similarity Value (SSV), in order to have a more suitable similarity measure for reflectance information. The SSV distance metric considers both magnitude difference (through the Euclidean distance) and spectral shape (through the Pearson correlation). Experiments confirmed that the introduction of this metric improves the quality of hyperspectral image segmentation, creating spectrally denser clusters and increasing the number of correctly classified pixels.

  17. GIFTS SM EDU Radiometric and Spectral Calibrations

    NASA Technical Reports Server (NTRS)

    Tian, J.; Reisse, R. A.; Johnson, D. G.; Gazarik, J. J.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high-resolution spectral imager designed to measure infrared (IR) radiance using a Fourier transform spectrometer (FTS). The GIFTS instrument gathers measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration. The calibration procedures can be subdivided into three stages: pre-calibration, calibration, and post-calibration. Detailed derivations for each stage are presented in this paper.

  18. Spectral analysis using CCDs

    NASA Technical Reports Server (NTRS)

    Hewes, C. R.; Brodersen, R. W.; De Wit, M.; Buss, D. D.

    1976-01-01

    Charge-coupled devices (CCDs) are ideally suited for performing sampled-data transversal filtering operations in the analog domain. Two algorithms have been identified for performing spectral analysis in which the bulk of the computation can be performed in a CCD transversal filter; the chirp z-transform and the prime transform. CCD implementation of both these transform algorithms is presented together with performance data and applications.
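
    Of the two transforms, the chirp z-transform is easily sketched numerically; Bluestein's identity turns it into the convolution that, in the paper, is carried out by the CCD transversal filter (the FFT below stands in for that analog convolution):

    ```python
    import numpy as np

    def czt(x, m=None, w=None, a=1.0):
        """Chirp z-transform X_k = sum_n x_n a^(-n) w^(nk), k = 0..m-1,
        via Bluestein's FFT convolution; with the defaults it reproduces
        the DFT. A numerical sketch only, not the CCD implementation.
        """
        x = np.asarray(x, dtype=complex)
        n = len(x)
        m = n if m is None else m
        w = np.exp(-2j * np.pi / m) if w is None else w
        k = np.arange(max(m, n))
        chirp = w ** (k ** 2 / 2.0)
        nfft = 1 << int(np.ceil(np.log2(n + m - 1)))
        xp = np.fft.fft(x * a ** -np.arange(n) * chirp[:n], nfft)
        ic = np.fft.fft(1.0 / np.hstack((chirp[n - 1:0:-1], chirp[:m])), nfft)
        return np.fft.ifft(xp * ic)[n - 1:n + m - 1] * chirp[:m]

    x = np.random.randn(64)
    assert np.allclose(czt(x), np.fft.fft(x))   # sanity check against the DFT
    ```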

  19. A framelet-based iterative maximum-likelihood reconstruction algorithm for spectral CT

    NASA Astrophysics Data System (ADS)

    Wang, Yingmei; Wang, Ge; Mao, Shuwei; Cong, Wenxiang; Ji, Zhilong; Cai, Jian-Feng; Ye, Yangbo

    2016-11-01

    Standard computed tomography (CT) cannot reproduce spectral information of an object. Hardware solutions include dual-energy CT, which scans the object twice at different x-ray energy levels, and energy-discriminating detectors, which can separate lower and higher energy levels from a single x-ray scan. In this paper, we propose a software solution and give an iterative algorithm that reconstructs an image with spectral information from just one scan with a standard energy-integrating detector. The spectral information obtained can be used to produce color CT images, spectral curves of the attenuation coefficient μ(r,E) at points inside the object, and photoelectric images, which are all valuable imaging tools in cancer diagnosis. Our software solution requires no change to the hardware of a CT machine. With the Shepp-Logan phantom, we have found that although the photoelectric and Compton components were not perfectly reconstructed, their composite effect was very accurately reconstructed as compared to the ground truth and the dual-energy CT counterpart. This means that our proposed method has an intrinsic benefit in beam hardening correction and metal artifact reduction. The algorithm is based on a nonlinear polychromatic acquisition model for x-ray CT. The key technique is a sparse representation of iterations in a framelet system. Convergence of the algorithm is studied. This is believed to be the first application of framelet imaging tools to a nonlinear inverse problem.

  20. Spectral Prior Image Constrained Compressed Sensing (Spectral PICCS) for Photon-Counting Computed Tomography

    PubMed Central

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-01-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in-vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43~73%) without sacrificing CT number accuracy or spatial resolution. PMID:27551878

  1. Spectral prior image constrained compressed sensing (spectral PICCS) for photon-counting computed tomography

    NASA Astrophysics Data System (ADS)

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-09-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution.

  2. Quantitative contrast-enhanced spectral mammography based on photon-counting detectors: A feasibility study.

    PubMed

    Ding, Huanjun; Molloi, Sabee

    2017-08-01

    To investigate the feasibility of accurate quantification of iodine mass thickness in contrast-enhanced spectral mammography. A computer simulation model was developed to evaluate the performance of a photon-counting spectral mammography system in the application of contrast-enhanced spectral mammography. A figure-of-merit (FOM), which was defined as the decomposed iodine signal-to-noise ratio (SNR) with respect to the square root of the mean glandular dose (MGD), was chosen to optimize the imaging parameters, in terms of beam energy, splitting energy, and prefiltrations for breasts of various thicknesses and densities. Experimental phantom studies were also performed using a beam energy of 40 kVp and a splitting energy of 34 keV with 3 mm Al prefiltration. A two-step calibration method was investigated to quantify the iodine mass thickness, and was validated using phantoms composed of a mixture of glandular and adipose materials, for various breast thicknesses and densities. Finally, the traditional dual-energy log-weighted subtraction method was also studied as a comparison. The measured iodine signal from both methods was compared to the known value to characterize the quantification accuracy and precision. The optimal imaging parameters, which lead to the highest FOM, were found at a beam energy between 42 and 46 kVp with a splitting energy at 34 keV. The optimal tube voltage decreased as the breast thickness or the Al prefiltration increased. The proposed quantification method was able to measure iodine mass thickness on phantoms of various thicknesses and densities with high accuracy. The root-mean-square (RMS) error for cm-scale lesion phantoms was estimated to be 0.20 mg/cm². The precision of the technique, characterized by the standard deviation of the measurements, was estimated to be 0.18 mg/cm². The traditional weighted subtraction method also predicted a linear correlation between the measured signal and the known iodine mass thickness. However, the correlation slope and offset values were strongly dependent on the total breast thickness and density. The results of this study suggest that iodine mass thickness for cm-scale lesions can be accurately quantified with contrast-enhanced spectral mammography. The quantitative information can potentially improve the differential power for malignancy. © 2017 American Association of Physicists in Medicine.
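
    A hedged sketch of the traditional log-weighted subtraction mentioned above; the weight estimator shown is one common choice, not necessarily the authors':

    ```python
    import numpy as np

    def weighted_log_subtraction(low, high, w):
        """Traditional dual-energy log-weighted subtraction: the weight w
        is chosen to cancel background tissue contrast. Names here are
        illustrative, not the authors' notation."""
        return np.log(high) - w * np.log(low)

    def tissue_cancellation_weight(low_bg, high_bg):
        """Estimate w by regressing ln(high) on ln(low) over a lesion-free
        background region, so the subtraction flattens the tissue signal.
        An assumed calibration strategy, shown for illustration only."""
        x, y = np.log(low_bg).ravel(), np.log(high_bg).ravel()
        return np.cov(x, y, bias=True)[0, 1] / np.var(x)
    ```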

  3. Modeling the contributions of phytoplankton and non-algal particles to spectral scattering properties in near-shore and lagoon waters

    NASA Astrophysics Data System (ADS)

    Vadakke-Chanat, Sayoob; Shanmugam, Palanisamy

    2017-03-01

    Particular attention was focused on modeling the spectral scattering properties of phytoplankton (bph(λ)) and non-algal particles (detrital organic and inorganic sediments, bNAP(λ)) from absorption and attenuation measurements in near-shore and lagoon waters. The absorption line height (aLH(676)), measured above a linear background between 648 nm and 714 nm in particulate and dissolved organic matter absorption spectra (ap(λ)), is a spectral feature that is primarily associated with chlorophyll, with significantly less pigment package effect compared to the blue peak, and hence it is attributed solely to phytoplankton absorption (aph). The correlation of aph(λ) with bph(λ) in terms of spectral shape and the relation of aLH(676) with chlorophyll concentration hold the key to deriving bph(648) from the aLH(676) measurements. bNAP(648) values are then determined by subtracting bph(648) from bp(648), allowing a power-law model to derive bNAP(λ). In-situ determination of bph(λ) is subsequently achieved by subtracting the featureless bNAP(λ) from the bp(λ) provided by the ac-s sensor. These data form the basis for the development of models for independent estimates of bph(λ) and bNAP(λ) based on measurements of aLH and suspended sediment concentration or turbidity. The validity of this method was demonstrated on a wide variety of samples from coastal and inland environments. Comparison of the modeled and measured spectral variations of bph(λ) showed a mean relative percent difference between the two datasets within 20%. bNAP(λ) predictions also had an error of a few percent and a correlation coefficient close to unity. When comparing the modeled bph(λ) with laboratory culture data, the results were exceptionally good, although discrepancies remained owing to differences in the size and refractive index of cells between monospecific lab-culture samples and natural assemblages containing different species. The proposed approach and models are highly instrumental in investigating the scattering properties of phytoplankton and non-living constituents, and will provide new tools for improving our current understanding of particle dynamics, advancing biogeochemical and ecosystem modeling, and assessing phytoplankton blooms and sediment plumes within inland and coastal environments.
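
    A minimal sketch of the partition arithmetic described above, with the power-law slope gamma and the bph(648) input treated as placeholders for the paper's calibrated values:

    ```python
    import numpy as np

    def partition_scattering(bp, wavelengths, bph648, gamma=0.7):
        """Partition total particulate scattering bp(lambda) into algal
        and non-algal parts following the described scheme: bNAP(648) =
        bp(648) - bph(648), a power-law extrapolation for bNAP(lambda),
        and bph(lambda) as the residual. gamma and the aLH-to-bph(648)
        regression are placeholders for the paper's calibrated values.
        """
        i648 = np.argmin(np.abs(wavelengths - 648.0))
        bnap648 = bp[i648] - bph648
        bnap = bnap648 * (wavelengths / 648.0) ** (-gamma)  # featureless NAP
        return bp - bnap, bnap                              # (bph, bNAP)
    ```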

  4. Evaluation of Algorithms for Compressing Hyperspectral Data

    NASA Technical Reports Server (NTRS)

    Cook, Sid; Harsanyi, Joseph; Faber, Vance

    2003-01-01

    With EO-1 Hyperion in orbit, NASA is showing its continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost-effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and in developing special-purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), which has an extensive heritage in HSI spectral compression, and Mapping Science (MSI), which provides JPEG 2000 spatial compression expertise, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor > 100, while retaining the necessary spectral and spatial fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our compression algorithms leverage commercial-off-the-shelf (COTS) spectral and spatial exploitation algorithms. We are currently evaluating these compression algorithms using statistical analysis and assessments by NASA scientists. We are also developing special-purpose processors for executing these algorithms onboard a spacecraft.

  5. DOA estimation of noncircular signals for coprime linear array via locally reduced-dimensional Capon

    NASA Astrophysics Data System (ADS)

    Zhai, Hui; Zhang, Xiaofei; Zheng, Wang

    2018-05-01

    We investigate the issue of direction of arrival (DOA) estimation of noncircular signals for a coprime linear array (CLA). The noncircular property enhances the degrees of freedom and improves angle estimation performance, but it leads to a more complex angle ambiguity problem. To eliminate ambiguity, we theoretically prove that the actual DOAs of noncircular signals can be uniquely estimated by finding the coincident results from the two decomposed subarrays based on coprimeness. We propose a locally reduced-dimensional (RD) Capon algorithm for DOA estimation of noncircular signals for CLA. The RD processing is used in the proposed algorithm to avoid a two-dimensional (2D) spectral peak search, and coprimeness is employed to avoid the global spectral peak search. The proposed algorithm requires only a one-dimensional local spectral peak search, and it has very low computational complexity. Furthermore, the proposed algorithm needs no prior knowledge of the number of sources. We also derive the Cramér-Rao bound of DOA estimation of noncircular signals in CLA. Numerical simulation results demonstrate the effectiveness and superiority of the algorithm.
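
    For reference, the spectral peak search being reduced is the standard Capon spectrum; a minimal sketch for a uniform linear array follows (the noncircular extension and coprime decomposition of the paper are not shown):

    ```python
    import numpy as np

    def capon_spectrum(R, angles_deg, d=0.5):
        """Standard 1-D Capon spatial spectrum for an M-element uniform
        linear array with spacing d (in wavelengths). R is the sample
        covariance of the array snapshots; angles in degrees.
        """
        M = R.shape[0]
        Rinv = np.linalg.inv(R)
        spectrum = []
        for theta in np.deg2rad(angles_deg):
            a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))
            spectrum.append(1.0 / np.real(a.conj() @ Rinv @ a))
        return np.array(spectrum)   # peaks indicate candidate DOAs
    ```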

  6. Spectral correction algorithm for multispectral CdTe x-ray detectors

    NASA Astrophysics Data System (ADS)

    Christensen, Erik D.; Kehres, Jan; Gu, Yun; Feidenhans'l, Robert; Olsen, Ulrik L.

    2017-09-01

    Compared to the dual-energy scintillator detectors widely used today, pixelated multispectral X-ray detectors show the potential to improve material identification in various radiography and tomography applications used for industrial and security purposes. However, detector effects, such as charge sharing and photon pileup, distort the measured spectra in high-flux pixelated multispectral detectors. These effects significantly reduce the detectors' capability for material identification, which requires accurate spectral measurements. We have developed a semi-analytical computational algorithm for multispectral CdTe X-ray detectors that corrects the measured spectra for severe spectral distortions caused by the detector. The algorithm is developed for the Multix ME100 CdTe X-ray detector, but could potentially be adapted for any pixelated multispectral CdTe detector. The calibration of the algorithm is based on simple attenuation measurements of commercially available materials using standard laboratory sources, making the algorithm applicable in any X-ray setup. The validation of the algorithm has been done using experimental data acquired with both standard lab equipment and synchrotron radiation. The experiments show that the algorithm is fast, reliable even at X-ray fluxes up to 5 Mphotons/s/mm², and greatly improves the accuracy of the measured X-ray spectra, making the algorithm very useful for both security and industrial applications where multispectral detectors are used.

  7. Spectral editing of weakly coupled spins using variable flip angles in PRESS constant echo time difference spectroscopy: Application to GABA

    NASA Astrophysics Data System (ADS)

    Snyder, Jeff; Hanstock, Chris C.; Wilman, Alan H.

    2009-10-01

    A general in vivo magnetic resonance spectroscopy editing technique is presented to detect weakly coupled spin systems through subtraction, while preserving singlets through addition, and is applied to the specific brain metabolite γ-aminobutyric acid (GABA) at 4.7 T. The new method uses double spin echo localization (PRESS) and is based on a constant echo time difference spectroscopy approach employing subtraction of two asymmetric echo timings, which is normally only applicable to strongly coupled spin systems. By utilizing flip angle reduction of one of the two refocusing pulses in the PRESS sequence, we demonstrate that this difference method may be extended to weakly coupled systems, thereby providing a very simple yet effective editing process. The difference method is first illustrated analytically using a simple two spin weakly coupled spin system. The technique was then demonstrated for the 3.01 ppm resonance of GABA, which is obscured by the strong singlet peak of creatine in vivo. Full numerical simulations, as well as phantom and in vivo experiments were performed. The difference method used two asymmetric PRESS timings with a constant total echo time of 131 ms and a reduced 120° final pulse, providing 25% GABA yield upon subtraction compared to two short echo standard PRESS experiments. Phantom and in vivo results from human brain demonstrate efficacy of this method in agreement with numerical simulations.

  8. The Effects of Using Direct Instruction and the Equal Additions Algorithm to Promote Subtraction with Regrouping Skills of Students with Emotional and Behavioral Disorders with Mathematics Difficulties

    ERIC Educational Resources Information Center

    Fain, Angela Christine

    2013-01-01

    Students with emotional and behavioral disorders (E/BD) display severe social and academic deficits that can adversely affect their academic performance in mathematics and result in higher rates of failure throughout their schooling compared to other students with disabilities (U.S. Department of Education, 2005; Webber & Plotts, 2008).…

  9. Radioisotope dilution analyses of geological samples using 236U and 229Th

    USGS Publications Warehouse

    Rosholt, J.N.

    1984-01-01

    The use of 236U and 229Th in alpha spectrometric measurements has some advantages over the use of other tracers and measurement techniques in isotope dilution analyses of most geological samples. The advantages are: (1) these isotopes do not occur in terrestrial rocks, (2) they have negligible decay losses because of their long half-lives, (3) they cause minimal recoil contamination to surface-barrier detectors, (4) they allow for simultaneous determination of the concentration and isotopic composition of uranium and thorium in a variety of sample types, and (5) they allow for simple and constant corrections for spectral interferences: 0.5% of the 238U activity is subtracted to account for the contribution of 235U to the 236U peak, and 1% of the 229Th activity is subtracted from the 230Th activity. Disadvantages in using 236U and 229Th are: (1) individual separates of uranium and thorium must be prepared as very thin sources for alpha spectrometry, (2) good resolution in the spectrometer system is required for thorium isotopic measurements, where measurement times may extend to 300 h, and (3) separate calibrations of the 236U and 229Th spike solution with both uranium and thorium standards are required. The use of these tracers in applications of uranium-series disequilibrium studies has simplified the measurements required for the determination of the isotopic composition of uranium and thorium because of the minimal corrections needed for alpha spectral interferences. © 1984.

  10. Spectral dispersion and fringe detection in IOTA

    NASA Technical Reports Server (NTRS)

    Traub, W. A.; Lacasse, M. G.; Carleton, N. P.

    1990-01-01

    Pupil plane beam combination, spectral dispersion, detection, and fringe tracking are discussed for the IOTA interferometer. A new spectrometer design is presented in which the angular dispersion with respect to wavenumber is nearly constant. The dispersing element is a type of grism, a series combination of grating and prism, in which the constant parts of the dispersion add, but the slopes cancel. This grism is optimized for the display of channelled spectra. The dispersed fringes can be tracked by a matched-filter photon-counting correlator algorithm. This algorithm requires very few arithmetic operations per detected photon, making it well-suited for real-time fringe tracking. The algorithm is able to adapt to different stellar spectral types, intensity levels, and atmospheric time constants. The results of numerical experiments are reported.

  11. PCA-based approach for subtracting thermal background emission in high-contrast imaging data

    NASA Astrophysics Data System (ADS)

    Hunziker, S.; Quanz, S. P.; Amara, A.; Meyer, M. R.

    2018-03-01

    Aims. Ground-based observations at thermal infrared wavelengths suffer from large background radiation due to the sky, telescope and warm surfaces in the instrument. This significantly limits the sensitivity of ground-based observations at wavelengths longer than 3 μm. The main purpose of this work is to analyse this background emission in infrared high-contrast imaging data, show how it can be modelled and subtracted, and demonstrate that doing so can improve the detection of faint sources, such as exoplanets. Methods: We used principal component analysis (PCA) to model and subtract the thermal background emission in three archival high-contrast angular differential imaging datasets in the M' and L' filters. We used an M' dataset of β Pic to describe in detail how the algorithm works and to explain how it can be applied. The results of the background subtraction are compared to the results from a conventional mean background subtraction scheme applied to the same dataset. Finally, both methods for background subtraction are compared by performing complete data reductions. We analysed the results from the M' dataset of HD 100546 only qualitatively. For the M' band dataset of β Pic and the L' band dataset of HD 169142, which was obtained with an angular groove phase mask vortex vector coronagraph, we also calculated and analysed the achieved signal-to-noise ratio (S/N). Results: We show that applying PCA is an effective way to remove spatially and temporally varying thermal background emission down to close to the background limit. The procedure also proves to be very successful at reconstructing the background that is hidden behind the point spread function. In the complete data reductions, we find at least qualitative improvements for HD 100546 and HD 169142; however, we fail to find a significant increase in the S/N of β Pic b. We discuss these findings and argue that datasets with strongly varying observing conditions or an infrequently sampled sky background will particularly benefit from the new approach.
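
    A minimal numpy sketch of this kind of PCA background modeling, assuming co-registered frame stacks and omitting the masking of the source region that a careful reduction would include:

    ```python
    import numpy as np

    def pca_background_subtract(science, sky, n_modes=10):
        """Subtract a PCA model of the thermal background from each
        science frame. science, sky: (n_frames, H, W) stacks. A sketch
        of the general idea, not the authors' pipeline; masking of the
        stellar PSF region is omitted.
        """
        h, w = science.shape[1:]
        mean_sky = sky.mean(axis=0).ravel()
        S = sky.reshape(len(sky), -1) - mean_sky
        _, _, vt = np.linalg.svd(S, full_matrices=False)
        basis = vt[:n_modes]                      # orthonormal PCA modes
        X = science.reshape(len(science), -1) - mean_sky
        bg = (X @ basis.T) @ basis                # projection onto modes
        return (X - bg).reshape(-1, h, w)
    ```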

  12. GIFTS SM EDU Data Processing and Algorithms

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Johnson, David G.; Reisse, Robert A.; Gazarik, Michael J.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three Focal Plane Arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration stage. The calibration procedures can be subdivided into three stages. In the pre-calibration stage, a phase correction algorithm is applied to the decimated and filtered complex interferogram. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected blackbody reference spectra. In the radiometric calibration stage, we first compute the spectral responsivity based on the previous results, from which, the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. During the post-processing stage, we estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. We then implement a correction scheme that compensates for the effect of fore-optics offsets. Finally, for off-axis pixels, the FPA off-axis effects correction is performed. To estimate the performance of the entire FPA, we developed an efficient method of generating pixel performance assessments. In addition, a random pixel selection scheme is designed based on the pixel performance evaluation.

  13. The 3XMM spectral fit database

    NASA Astrophysics Data System (ADS)

    Georgantopoulos, I.; Corral, A.; Watson, M.; Carrera, F.; Webb, N.; Rosen, S.

    2016-06-01

    I will present the XMMFITCAT database, which is a spectral-fit inventory of the sources in the 3XMM catalogue. Spectra are made available by the XMM/SSC for all 3XMM sources that have more than 50 background-subtracted counts per module. This work is funded in the framework of the ESA Prodex project. The 3XMM catalogue currently covers 877 sq. degrees and contains about 400,000 unique sources. Spectra are available for over 120,000 sources. Spectral fits have been performed with various spectral models. The results are available on the web page http://xraygroup.astro.noa.gr/ and also at the University of Leicester LEDAS database webpage ledas-www.star.le.ac.uk/. The database description as well as some science results in the area of overlap with SDSS are presented in two recent papers: Corral et al. 2015, A&A, 576, 61 and Corral et al. 2014, A&A, 569, 71. At least for extragalactic sources, the spectral fits will acquire added value when photometric redshifts become available. In the framework of a new Prodex project we have been funded to derive photometric redshifts for the 3XMM sources using machine learning techniques. I will present the techniques as well as the optical near-IR databases that will be used.

  14. Film thickness measurement based on nonlinear phase analysis using a Linnik microscopic white-light spectral interferometer.

    PubMed

    Guo, Tong; Chen, Zhuo; Li, Minghui; Wu, Juhong; Fu, Xing; Hu, Xiaotang

    2018-04-20

    Based on white-light spectral interferometry and the Linnik microscopic interference configuration, the nonlinear phase components of the spectral interferometric signal were analyzed for film thickness measurement. The spectral interferometric signal was obtained using a Linnik microscopic white-light spectral interferometer, which includes the nonlinear phase components associated with the effective thickness, the nonlinear phase error caused by the double-objective lens, and the nonlinear phase of the thin film itself. To determine the influence of the effective thickness, a wavelength-correction method was proposed that converts the effective thickness into a constant value; the nonlinear phase caused by the effective thickness can then be determined and subtracted from the total nonlinear phase. A method for the extraction of the nonlinear phase error caused by the double-objective lens was also proposed. Accurate thickness measurement of a thin film can be achieved by fitting the nonlinear phase of the thin film after removal of the nonlinear phase caused by the effective thickness and by the nonlinear phase error caused by the double-objective lens. The experimental results demonstrated that both the wavelength-correction method and the extraction method for the nonlinear phase error caused by the double-objective lens improve the accuracy of film thickness measurements.

  15. An image hiding method based on cascaded iterative Fourier transform and public-key encryption algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Sang, Jun; Alam, Mohammad S.

    2013-03-01

    An image hiding method based on the cascaded iterative Fourier transform (CIFT) and a public-key encryption algorithm was proposed. First, the original secret image was encrypted into two phase-only masks M1 and M2 via the CIFT algorithm. Then, the public-key encryption algorithm RSA was adopted to encrypt M2 into M2'. Finally, a host image was enlarged by extending each pixel into 2×2 pixels, and each element in M1 and M2' was multiplied by a superimposition coefficient and added to or subtracted from two different elements in the 2×2 pixels of the enlarged host image. To recover the secret image, the two masks were extracted from the stego-image without the original host image. By applying a public-key encryption algorithm, key distribution was facilitated; moreover, compared with image hiding methods based on optical interference, the proposed method may achieve higher robustness by exploiting the characteristics of the CIFT algorithm. Computer simulations show that this method has good robustness against image processing.

  16. Infrared dim-small target tracking via singular value decomposition and improved Kernelized correlation filter

    NASA Astrophysics Data System (ADS)

    Qian, Kun; Zhou, Huixin; Rong, Shenghui; Wang, Bingjian; Cheng, Kuanhong

    2017-05-01

    Infrared small target tracking plays an important role in applications including military reconnaissance, early warning and terminal guidance. In this paper, an effective algorithm based on Singular Value Decomposition (SVD) and an improved Kernelized Correlation Filter (KCF) is presented for infrared small target tracking. Firstly, the strength of the SVD-based stage is that it takes advantage of the image's global information to obtain a background estimate of the infrared image. A dim target is enhanced by subtracting the continuously updated background estimate from the original image. Secondly, the KCF algorithm is combined with a Gaussian Curvature Filter (GCF) to eliminate the excursion problem. The GCF technology is adopted to preserve the edges and eliminate the noise of the base sample in the KCF algorithm, helping to calculate the classifier parameters for a small target. At last, the target position is estimated with a response map, which is obtained via the kernelized classifier. Experimental results demonstrate that the presented algorithm performs favorably in terms of efficiency and accuracy, compared with several state-of-the-art algorithms.
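
    A hedged sketch of the background-estimation stage, assuming a single 2-D frame and an illustrative rank (the paper's rank selection and update scheme are not reproduced):

    ```python
    import numpy as np

    def svd_background_subtract(frame, rank=3):
        """Estimate the slowly varying background of an IR frame as its
        rank-k SVD reconstruction and subtract it to enhance a dim small
        target. Sketch of the SVD stage only; the KCF/GCF tracking
        stages are separate.
        """
        u, s, vt = np.linalg.svd(frame.astype(float), full_matrices=False)
        background = (u[:, :rank] * s[:rank]) @ vt[:rank]
        return frame - background
    ```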

  17. Difference optimization: Automatic correction of relative frequency and phase for mean non-edited and edited GABA 1H MEGA-PRESS spectra

    NASA Astrophysics Data System (ADS)

    Cleve, Marianne; Krämer, Martin; Gussew, Alexander; Reichenbach, Jürgen R.

    2017-06-01

    Phase and frequency corrections of magnetic resonance spectroscopic data are of major importance to obtain reliable and unambiguous metabolite estimates, as validated in recent research for single-shot scans with the same spectral fingerprint. However, when using the J-difference editing technique 1H MEGA-PRESS, misalignment between mean edited (ON‾) and non-edited (OFF‾) spectra that may remain even after correction of the corresponding individual single-shot scans results in subtraction artefacts compromising reliable GABA quantitation. We present a fully automatic routine that iteratively optimizes simultaneously the relative frequencies and phases between the mean ON‾ and OFF‾ 1H MEGA-PRESS spectra while minimizing the sum of the magnitude of the difference spectrum (L1 norm). The proposed method was applied to simulated spectra at different SNR levels with deliberately preset frequency and phase errors. Difference optimization proved to be more sensitive to small signal fluctuations, as e.g. arising from subtraction artefacts, and outperformed the alternative spectral registration approach, which, in contrast to our proposed linear approach, uses a nonlinear least squares minimization (L2 norm), at all investigated levels of SNR. Moreover, the proposed method was applied to 47 MEGA-PRESS datasets acquired in vivo at 3 T. The results of the alignment between the mean OFF‾ and ON‾ spectra were compared by applying (a) no correction, (b) difference optimization or (c) spectral registration. Since the true frequency and phase errors are not known for in vivo data, manually corrected spectra were used as the gold standard reference (d). Automatically corrected data applying either method (b) or method (c) showed distinct improvements of spectral quality, as revealed by the mean Pearson correlation coefficient between corresponding real-part mean DIFF‾ spectra of Rbd = 0.997 ± 0.003 (method (b) vs. (d)), compared to Rad = 0.764 ± 0.220 (method (a) vs. (d)) with no alignment between OFF‾ and ON‾. Method (c) revealed a slightly lower correlation coefficient of Rcd = 0.972 ± 0.028 compared to Rbd, which can be ascribed to small remaining subtraction artefacts in the final DIFF‾ spectrum. In conclusion, difference optimization performs robustly with no restrictions regarding the input data range or user intervention and represents a complementary tool to optimize the final DIFF‾ spectrum following the mandatory frequency and phase corrections of single ON and OFF scans prior to averaging.
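
    A minimal numerical sketch of such an L1 difference optimization, assuming complex-valued mean spectra and using a derivative-free simplex search (the function name, spectral-axis handling and starting values are illustrative, not the authors' implementation):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def align_on_to_off(on_spec, off_spec, df_hz):
        """Estimate frequency and phase offsets of the mean ON spectrum
        that minimize the L1 norm of the ON-OFF difference spectrum, in
        the spirit of the method above. on_spec/off_spec are complex
        spectra with point spacing df_hz; all names are illustrative.
        """
        n = len(on_spec)
        t = np.arange(n) / (n * df_hz)          # time axis of the FID
        fid = np.fft.ifft(on_spec)
        def cost(p):
            shifted = np.fft.fft(fid * np.exp(2j * np.pi * p[0] * t + 1j * p[1]))
            return np.abs(shifted - off_spec).sum()   # L1 of the difference
        res = minimize(cost, x0=[0.0, 0.0], method="Nelder-Mead")
        return res.x   # (frequency shift in Hz, phase shift in rad)
    ```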

  18. Difference optimization: Automatic correction of relative frequency and phase for mean non-edited and edited GABA 1H MEGA-PRESS spectra.

    PubMed

    Cleve, Marianne; Krämer, Martin; Gussew, Alexander; Reichenbach, Jürgen R

    2017-06-01

    Phase and frequency corrections of magnetic resonance spectroscopic data are of major importance to obtain reliable and unambiguous metabolite estimates, as validated in recent research for single-shot scans with the same spectral fingerprint. However, when using the J-difference editing technique 1H MEGA-PRESS, misalignment between mean edited (ON‾) and non-edited (OFF‾) spectra that may remain even after correction of the corresponding individual single-shot scans results in subtraction artefacts compromising reliable GABA quantitation. We present a fully automatic routine that iteratively optimizes simultaneously the relative frequencies and phases between the mean ON‾ and OFF‾ 1H MEGA-PRESS spectra while minimizing the sum of the magnitude of the difference spectrum (L1 norm). The proposed method was applied to simulated spectra at different SNR levels with deliberately preset frequency and phase errors. Difference optimization proved to be more sensitive to small signal fluctuations, as e.g. arising from subtraction artefacts, and outperformed the alternative spectral registration approach, which, in contrast to our proposed linear approach, uses a nonlinear least squares minimization (L2 norm), at all investigated levels of SNR. Moreover, the proposed method was applied to 47 MEGA-PRESS datasets acquired in vivo at 3 T. The results of the alignment between the mean OFF‾ and ON‾ spectra were compared by applying (a) no correction, (b) difference optimization or (c) spectral registration. Since the true frequency and phase errors are not known for in vivo data, manually corrected spectra were used as the gold standard reference (d). Automatically corrected data applying either method (b) or method (c) showed distinct improvements of spectral quality, as revealed by the mean Pearson correlation coefficient between corresponding real-part mean DIFF‾ spectra of Rbd = 0.997 ± 0.003 (method (b) vs. (d)), compared to Rad = 0.764 ± 0.220 (method (a) vs. (d)) with no alignment between OFF‾ and ON‾. Method (c) revealed a slightly lower correlation coefficient of Rcd = 0.972 ± 0.028 compared to Rbd, which can be ascribed to small remaining subtraction artefacts in the final DIFF‾ spectrum. In conclusion, difference optimization performs robustly with no restrictions regarding the input data range or user intervention and represents a complementary tool to optimize the final DIFF‾ spectrum following the mandatory frequency and phase corrections of single ON and OFF scans prior to averaging. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Demonstration of an optoelectronic interconnect architecture for a parallel modified signed-digit adder and subtracter

    NASA Astrophysics Data System (ADS)

    Sun, Degui; Wang, Na-Xin; He, Li-Ming; Weng, Zhao-Heng; Wang, Daheng; Chen, Ray T.

    1996-06-01

    A space-position-logic-encoding scheme is proposed and demonstrated. This encoding scheme not only makes the best use of the convenience of binary logic operation, but is also suitable for the trinary property of modified signed-digit (MSD) numbers. Based on the space-position-logic-encoding scheme, a fully parallel modified signed-digit adder and subtractor is built using optoelectronic switch technologies in conjunction with fiber-multistage 3D optoelectronic interconnects. Thus an effective combination of a parallel algorithm and a parallel architecture is implemented. In addition, the performance of the optoelectronic switches used in this system is experimentally studied and verified. Both the 3-bit experimental model and the experimental results of a parallel addition and a parallel subtraction are provided and discussed. Finally, the speed ratio between the MSD adder and binary adders is discussed and the advantage of the MSD in operating speed is demonstrated.

  20. Accuracy Assessment of Crown Delineation Methods for the Individual Trees Using LIDAR Data

    NASA Astrophysics Data System (ADS)

    Chang, K. T.; Lin, C.; Lin, Y. C.; Liu, J. K.

    2016-06-01

    Forest canopy density and height are used as variables in a number of environmental applications, including the estimation of biomass, forest extent and condition, and biodiversity. Airborne Light Detection and Ranging (LiDAR) is very useful for estimating forest canopy parameters from the generated canopy height models (CHMs). The purpose of this work is to introduce an algorithm that delineates crown parameters, e.g. tree height and crown radii, from rasterized CHMs, and to assess the accuracy of the extracted volumetric parameters of single trees against manual measurements on corresponding aerial photo pairs. A LiDAR dataset of a golf course acquired by a Leica ALS70-HP is used in this study. First, two algorithms are used to generate the CHMs: a traditional one, which subtracts a digital elevation model (DEM) from a digital surface model (DSM), and a pit-free approach. Then two algorithms, a multilevel morphological active-contour (MMAC) and a variable window filter (VWF), are implemented and used for individual tree delineation. Finally, the experimental results of the two automatic estimation methods are evaluated against manually measured stand-level parameters, i.e. tree height and crown diameter. The CHM generated by simple subtraction is full of empty pixels (called "pits") that have a detrimental impact on the subsequent individual tree delineation. The experimental results indicate that after the pit-free process more individual trees can be extracted and their crown shapes become more complete in the CHM data.
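    The baseline CHM step above is a raster subtraction. A minimal sketch (illustrative Python, assuming co-registered DSM and DEM arrays; the median filter is only a crude stand-in for a proper pit-free pipeline):

```python
import numpy as np
from scipy.ndimage import median_filter

def canopy_height_model(dsm, dem, suppress_pits=True):
    """CHM = DSM - DEM, with optional suppression of spurious low pixels
    ("pits") before individual tree delineation."""
    chm = dsm - dem
    chm[chm < 0] = 0.0                      # clip below-ground artefacts
    if suppress_pits:
        chm = median_filter(chm, size=3)    # crude pit removal
    return chm
```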

  1. Computational solution of spike overlapping using data-based subtraction algorithms to resolve synchronous sympathetic nerve discharge

    PubMed Central

    Su, Chun-Kuei; Chiang, Chia-Hsun; Lee, Chia-Ming; Fan, Yu-Pei; Ho, Chiu-Ming; Shyu, Liang-Yu

    2013-01-01

    Sympathetic nerves conveying central commands to regulate visceral functions often display activities in synchronous bursts. To understand how individual fibers fire synchronously, we establish “oligofiber recording techniques” to record “several” nerve fiber activities simultaneously, using in vitro splanchnic sympathetic nerve–thoracic spinal cord preparations of neonatal rats as experimental models. While distinct spike potentials were easily recorded from collagenase-dissociated sympathetic fibers, a problem arising from synchronous nerve discharges is a higher incidence of complex waveforms resulting from spike overlapping. Because commercial software does not provide an explicit solution for spike overlapping, a series of custom-made LabVIEW programs incorporating MATLAB scripts was written for spike sorting. Spikes were represented as data points after waveform feature extraction and automatically grouped by k-means clustering, followed by principal component analysis (PCA) to verify their waveform homogeneity. For dissimilar waveforms whose Hotelling's T2 distances from the cluster centroids were too large, a unique data-based subtraction algorithm (SA) was used to determine whether they were complex waveforms resulting from superimposing a spike pattern close to the cluster centroid with other signals observed in the original recordings. For synthetic data containing synchronous spiking and complex waveforms, analyses using our algorithms achieved higher accuracy than commercial software. Moreover, both T2-selected and SA-retrieved spikes were combined as unit activities. Quantitative analyses were performed to evaluate whether unit activities truly originated from single fibers. We conclude that applications of our programs can help to resolve synchronous sympathetic nerve discharges (SND). PMID:24198782
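    The subtraction idea, peeling a cluster template off a suspected overlapped waveform and re-examining the residual, can be sketched as follows (illustrative Python with scikit-learn; the array shapes, the simplified T2 statistic and the cutoff are our assumptions, not the authors' implementation):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def sort_and_subtract(waveforms, n_units, t2_cutoff):
    """waveforms: (n_spikes, n_samples). Cluster spikes, flag outliers by a
    Hotelling's T2-like distance in PCA space, then try template subtraction."""
    pca = PCA(n_components=3).fit(waveforms)
    feats = pca.transform(waveforms)
    labels = KMeans(n_clusters=n_units, n_init=10).fit_predict(feats)

    templates = np.array([waveforms[labels == k].mean(axis=0)
                          for k in range(n_units)])
    var = feats.var(axis=0)                      # per-component variance
    residual_labels = np.full(len(waveforms), -1)  # -1 = not overlapped

    for i, (w, f, k) in enumerate(zip(waveforms, feats, labels)):
        centroid = feats[labels == k].mean(axis=0)
        t2 = np.sum((f - centroid) ** 2 / var)   # simplified T2 statistic
        if t2 > t2_cutoff:                       # candidate overlapped waveform
            resid = w - templates[k]             # subtract the nearest template
            residual_labels[i] = np.argmin(      # re-assign the residual
                [np.sum((resid - tpl) ** 2) for tpl in templates])
    return labels, residual_labels
```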

  2. [Local Regression Algorithm Based on Net Analyte Signal and Its Application in Near Infrared Spectral Analysis].

    PubMed

    Zhang, Hong-guang; Lu, Jian-gang

    2016-02-01

    To overcome the problems of significant differences among samples and of nonlinearity between the property and the spectra of samples in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method was first used to obtain the net analyte signal of the calibration samples and of the unknown samples; the Euclidean distance between the net analyte signal of an unknown sample and those of the calibration samples was then calculated and used as a similarity index. According to this similarity index, a local calibration set was individually selected for each unknown sample. Finally, a local PLS regression model was built on each local calibration set for each unknown sample. The proposed method was applied to a set of near infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and of a conventional local regression algorithm based on spectral Euclidean distance.
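    A minimal sketch of the selection-plus-local-PLS loop (illustrative Python with scikit-learn; the NAS step is reduced here to projecting out an interference subspace `Z`, an assumption about the exact NAS computation):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def nas_transform(X, Z):
    """Remove the subspace spanned by interference spectra Z (rows) from X."""
    P = Z.T @ np.linalg.pinv(Z.T)       # orthogonal projector onto span(Z)
    return X - X @ P

def local_pls_predict(X_cal, y_cal, X_new, Z, k=30, n_comp=5):
    """For each unknown spectrum, select its k nearest calibration spectra by
    NAS Euclidean distance and fit a local PLS model on that subset."""
    N_cal = nas_transform(X_cal, Z)
    N_new = nas_transform(X_new, Z)
    preds = []
    for x_raw, x_nas in zip(X_new, N_new):
        d = np.linalg.norm(N_cal - x_nas, axis=1)   # similarity index
        idx = np.argsort(d)[:k]                     # local calibration set
        pls = PLSRegression(n_components=n_comp).fit(X_cal[idx], y_cal[idx])
        preds.append(pls.predict(x_raw[None, :])[0])
    return np.array(preds)
```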

  3. Saliency detection algorithm based on LSC-RC

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Tian, Weiye; Wang, Ding; Luo, Xin; Wu, Yingfei; Zhang, Yu

    2018-02-01

    The salient region is the most important region of an image, the one that attracts human visual attention and response. Preferentially allocating computational resources to this region benefits image analysis and synthesis, so improving salient region detection is of great significance. As a preprocessing step for other tasks in image processing, saliency detection has wide applications in image retrieval and image segmentation. Among existing approaches, saliency detection based on super-pixel segmentation by linear spectral clustering (LSC) has achieved good results. The saliency detection algorithm proposed in this paper improves on region contrast (RC) by replacing its region formation step with super-pixel blocks obtained by linear spectral clustering. Combined with a recent deep learning method, the accuracy of salient region detection is greatly improved. Finally, comparative tests demonstrate the superiority and feasibility of the proposed super-pixel saliency detection algorithm based on linear spectral clustering.
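    The region-contrast scoring that the method builds on can be sketched as follows (illustrative Python with scikit-image; SLIC stands in for LSC superpixels here, and the spatial weighting parameter is a guess):

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.segmentation import slic

def region_contrast_saliency(img, n_segments=200, sigma2=0.4):
    """Superpixel region contrast: a region is salient if its mean Lab color
    differs from other regions, weighted by their size and spatial closeness."""
    labels = slic(img, n_segments=n_segments, compactness=10)
    lab = rgb2lab(img)
    ids = np.unique(labels)
    colors = np.array([lab[labels == i].mean(axis=0) for i in ids])
    h, w = labels.shape
    grid = np.stack(np.mgrid[0:h, 0:w], axis=-1) / np.array([h, w])
    centers = np.array([grid[labels == i].mean(axis=0) for i in ids])
    sizes = np.array([(labels == i).sum() for i in ids], dtype=float)

    sal = np.zeros(len(ids))
    for k in range(len(ids)):
        d_col = np.linalg.norm(colors - colors[k], axis=1)
        d_pos = np.sum((centers - centers[k]) ** 2, axis=1)
        sal[k] = np.sum(sizes * d_col * np.exp(-d_pos / sigma2))
    sal = (sal - sal.min()) / (np.ptp(sal) + 1e-12)
    return sal[np.searchsorted(ids, labels)]   # map region scores to pixels
```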

  4. Recovery of a spectrum based on a compressive-sensing algorithm with weighted principal component analysis

    NASA Astrophysics Data System (ADS)

    Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang

    2017-07-01

    The purpose of this study is to improve the reconstruction precision and to better reproduce the colors of spectral image surfaces. A new spectral reflectance reconstruction algorithm based on iterative thresholding combined with a weighted principal component space is presented in this paper, with the principal components weighted by visual features serving as the sparse basis. Different numbers of color cards are selected as the training samples, a multispectral image is the testing sample, and the color differences of the reconstructions are compared. The channel response values are obtained by a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on the weighted principal component space is superior in performance to that based on the traditional principal component space. Therefore, the color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is less than that obtained using the algorithm with traditional principal component analysis, and better reconstructed color consistency with human vision is achieved.
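    The iterative-threshold recovery can be sketched as basic ISTA in a principal-component basis (illustrative Python; naming `S` as the camera response matrix and `B` as the weighted PCA basis is our assumption about the paper's setup):

```python
import numpy as np

def ista_recover(y, S, B, lam=1e-3, n_iter=500):
    """Recover reflectance r = B @ c from channel responses y = S @ r by
    solving min ||y - S B c||^2 + lam * ||c||_1 with iterative soft
    thresholding (ISTA)."""
    A = S @ B
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ c - y)
        z = c - grad / L
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return B @ c                            # reconstructed reflectance spectrum
```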

  5. A two-step along-track spectral analysis for estimating the magnetic signals of magnetospheric ring current from Swarm data

    NASA Astrophysics Data System (ADS)

    Martinec, Zdeněk; Velímský, Jakub; Haagmans, Roger; Šachl, Libor

    2018-02-01

    This study deals with the analysis of Swarm vector magnetic field measurements in order to estimate the magnetic field of the magnetospheric ring current. For a single Swarm satellite, the magnetic measurements are processed by along-track spectral analysis on a track-by-track basis. The main and lithospheric magnetic fields are modelled by the CHAOS-6 field model and subtracted from the along-track Swarm magnetic data. The mid-latitude residual signal is then spectrally analysed and extrapolated to the polar regions. The resulting model of the magnetosphere (model MME) is compared to the existing Swarm Level 2 magnetospheric field model (MMA_SHA_2C). Differences of up to 10 nT are found in the nightside Swarm data from 2014 April 8 to May 10, which are due to the different processing schemes used to construct the two magnetospheric magnetic field models. The forward-simulated magnetospheric magnetic field generated by the external part of model MME then demonstrates the consistency of the separation of the Swarm along-track signal into external and internal parts by the two-step along-track spectral analysis.

  6. Reduction of Metal Artifact in Single Photon-Counting Computed Tomography by Spectral-Driven Iterative Reconstruction Technique

    PubMed Central

    Nasirudin, Radin A.; Mei, Kai; Panchev, Petar; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Fiebich, Martin; Noël, Peter B.

    2015-01-01

    Purpose The exciting prospect of spectral CT (SCT) using photon-counting detectors (PCD) will lead to new techniques in computed tomography (CT) that take advantage of the additional spectral information provided. We introduce a method to reduce metal artifacts in X-ray tomography by incorporating knowledge obtained from SCT into a statistical iterative reconstruction scheme. We call our method Spectral-driven Iterative Reconstruction (SPIR). Method The proposed algorithm consists of two main components: material decomposition and penalized maximum likelihood iterative reconstruction. In this study, spectral data acquisitions with an energy-resolving PCD were simulated using a Monte-Carlo simulator based on the EGSnrc C++ class library. A jaw phantom with a dental implant made of gold was used as the object. A total of three dental implant shapes were simulated separately to test the influence of prior knowledge on the overall performance of the algorithm. The generated projection data were first decomposed into three basis functions: photoelectric absorption, Compton scattering and the attenuation of gold. A pseudo-monochromatic sinogram was calculated and used as input to the reconstruction, while the spatial information of the gold implant was used as a prior. The results of the algorithm were assessed and benchmarked against state-of-the-art reconstruction methods. Results The decomposition results illustrate that a gold implant of any shape can be distinguished from the other components of the phantom. Additionally, the results of the penalized maximum likelihood iterative reconstruction show that artifacts are significantly reduced in SPIR-reconstructed slices in comparison to other known techniques, while at the same time details around the implant are preserved. Quantitatively, the SPIR algorithm best reflects the true attenuation values in comparison to the other algorithms. Conclusion It is demonstrated that combining the additional information from spectral CT with statistical reconstruction can significantly improve image quality, especially by reducing the streak artifacts caused by the presence of materials with high atomic numbers. PMID:25955019

  7. Subtraction of subcutaneous fat to improve the prediction of visceral adiposity: exploring a new anthropometric track in overweight and obese youth.

    PubMed

    Samouda, H; De Beaufort, C; Stranges, S; Van Nieuwenhuyse, J-P; Dooms, G; Keunen, O; Leite, S; Vaillant, M; Lair, M-L; Dadoun, F

    2017-08-01

    The efficiency of traditional anthropometric measurements such as body mass index (BMI) or waist circumference (Waist C), used to replace biomedical imaging for assessing visceral adipose tissue (VAT), is still highly controversial in youth. We evaluated the most accurate model for predicting VAT in overweight/obese youth, using various anthropometric measurements and their correlations with different body fat compartments, in particular by testing, for the first time in youth, the hypothesis that subtracting the anthropometric measurement most highly correlated with subcutaneous abdominal adipose tissue (SAAT), and as weakly correlated as possible with VAT, from an anthropometric abdominal measurement highly correlated with visceral and total abdominal adipose tissue (TAAT) predicts VAT with higher accuracy. VAT and SAAT data resulted from magnetic resonance imaging (MRI) analysis performed on 181 boys and girls (7-17 y) from the Diabetes & Endocrinology Care Paediatrics Clinic in Luxembourg. Height, weight, abdominal diameters, waist, hip, and thigh circumferences were measured with a view to developing the anthropometric VAT predictive algorithms. In girls, subtracting proximal thigh circumference (Proximal Thigh C), the anthropometric measurement most closely correlated with SAAT, from Waist C, the anthropometric measurement most closely correlated with VAT, was instrumental in improving VAT prediction in comparison with the most accurate single VAT anthropometric surrogate. [Formula: see text] Residual analysis showed a negligible estimation error (5 cm²). In boys, Waist C was the best VAT predictor. Subtraction of abdominal subcutaneous fat is important to predict VAT in overweight/obese girls. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  8. Handwritten text line segmentation by spectral clustering

    NASA Astrophysics Data System (ADS)

    Han, Xuecheng; Yao, Hui; Zhong, Guoqiang

    2017-02-01

    Since handwritten text lines are generally skewed and not obviously separated, text line segmentation of handwritten document images is still a challenging problem. In this paper, we propose a novel text line segmentation algorithm based on the spectral clustering. Given a handwritten document image, we convert it to a binary image first, and then compute the adjacent matrix of the pixel points. We apply spectral clustering on this similarity metric and use the orthogonal kmeans clustering algorithm to group the text lines. Experiments on Chinese handwritten documents database (HIT-MW) demonstrate the effectiveness of the proposed method.

  9. Device, Algorithm and Integrated Modeling Research for Performance-Drive Multi-Modal Optical Sensors

    DTIC Science & Technology

    2012-12-17

    …to feature-aided tracking using spectral information. … A novel technique for spectral waveband selection was developed and used as part of … of spectral information using the tunable single-pixel spectrometer concept. … A database was developed of spectral reflectance measurements … exploring the utility of spectral and polarimetric information to help with the vehicle tracking application. Through the use of both…

  10. Fast parallel DNA-based algorithms for molecular computation: quadratic congruence and factoring integers.

    PubMed

    Chang, Weng-Long

    2012-03-01

    Assume that n is a positive integer. If there is an integer M such that M² ≡ C (mod n), i.e., the congruence has a solution, then C is said to be a quadratic congruence (mod n). If the congruence does not have a solution, then C is said to be a quadratic noncongruence (mod n). The task of solving this problem is central to many important applications, the most obvious being cryptography. In this article, we describe a DNA-based algorithm for solving quadratic congruence and factoring integers. In addition to this novel contribution, we also show the utility of our encoding scheme and of the algorithm's submodules. We demonstrate how a variety of arithmetic, shift and comparative operations, namely bitwise and full addition, subtraction, left shift and comparison, can be performed using strands of DNA.

  11. Hyperspectral Remote Sensing of Terrestrial Ecosystem Productivity from ISS

    NASA Astrophysics Data System (ADS)

    Huemmrich, K. F.; Campbell, P. K. E.; Gao, B. C.; Flanagan, L. B.; Goulden, M.

    2017-12-01

    Data from the Hyperspectral Imager for the Coastal Ocean (HICO), mounted on the International Space Station (ISS), were used to develop and test algorithms for remotely retrieving ecosystem productivity. The ISS orbit introduces both limitations and opportunities for observing ecosystem dynamics. Twenty-six HICO images were used from four study sites representing different vegetation types: grasslands, shrubland, and forest. Gross ecosystem production (GEP) data from eddy covariance were matched with HICO-derived spectra. Multiple algorithms successfully related spectral reflectance to GEP, including: spectral vegetation indices (SVI), SVI in a light use efficiency model framework, spectral shape characteristics through spectral derivatives and absorption feature analysis, and statistical models leading to multiband hyperspectral indices (MHI) from stepwise regressions and partial least squares regression (PLSR). The algorithms achieved r2 better than 0.7 for both GEP at the overpass time and daily GEP. They were successful on a diverse set of observations combining data from multiple years, multiple times during the growing season, different times of day, different view angles, and different vegetation types. The demonstrated robustness of the algorithms over these conditions provides some confidence in mapping spatial patterns of GEP, describing variability within fields as well as regional patterns, based only on spectral reflectance information. The ISS orbit provides periods with multiple observations collected at different times of day within a few days. Diurnal GEP patterns were estimated by comparing the half-hourly average GEP from the flux tower against HICO estimates of GEP (r2 = 0.87) when morning, midday, and afternoon observations were available for average fluxes in the time period.

  12. Crustal interpretation of the MAGSAT data in the continental United States

    NASA Technical Reports Server (NTRS)

    Won, I. J.; Son, K. H.

    1982-01-01

    The processing of MAGSAT scalar data to construct a crustal magnetic anomaly map over the continental U.S. involves removal of the reference field model, a path-by-path subtraction of a low-order polynomial fitted by least squares to reduce orbital offset errors, and a two-dimensional spectral filtering to mitigate the spectral bias induced by the path-by-path orbital correction scheme. The resulting anomaly map shows reasonably good correlation with an aeromagnetic map derived from Project MAGNET. Prominent satellite magnetic anomalies are identified in terms of geological provinces and age boundaries. An inversion method was applied to the MAGSAT data which produces both the Curie depth topography and the laterally varying magnetic susceptibility of the crust. A contoured Curie depth map thus derived shows general agreement with a crustal thickness map based on seismic data.
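    The path-by-path correction amounts to fitting and removing a low-order polynomial along each orbit track. A minimal sketch (illustrative Python, assuming per-track arrays of along-track coordinate and residual field after reference-model removal):

```python
import numpy as np

def detrend_track(s, b_residual, order=2):
    """Fit and subtract a low-order polynomial along one satellite track to
    reduce orbit-level offset and drift errors before spectral filtering."""
    coeffs = np.polyfit(s, b_residual, order)   # least-squares polynomial fit
    return b_residual - np.polyval(coeffs, s)
```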

  13. Computer-aided diagnosis and artificial intelligence in clinical imaging.

    PubMed

    Shiraishi, Junji; Li, Qiang; Appelbaum, Daniel; Doi, Kunio

    2011-11-01

    Computer-aided diagnosis (CAD) is rapidly entering the radiology mainstream. It has already become a part of the routine clinical work for the detection of breast cancer with mammograms. The computer output is used as a "second opinion" in assisting radiologists' image interpretations. The computer algorithm generally consists of several steps that may include image processing, image feature analysis, and data classification via the use of tools such as artificial neural networks (ANN). In this article, we will explore these and other current processes that have come to be referred to as "artificial intelligence." One element of CAD, temporal subtraction, has been applied for enhancing interval changes and for suppressing unchanged structures (eg, normal structures) between 2 successive radiologic images. To reduce misregistration artifacts on the temporal subtraction images, a nonlinear image warping technique for matching the previous image to the current one has been developed. Development of the temporal subtraction method originated with chest radiographs, with the method subsequently being applied to chest computed tomography (CT) and nuclear medicine bone scans. The usefulness of the temporal subtraction method for bone scans was demonstrated by an observer study in which reading times and diagnostic accuracy improved significantly. An additional prospective clinical study verified that the temporal subtraction image could be used as a "second opinion" by radiologists with negligible detrimental effects. ANN was first used in 1990 for computerized differential diagnosis of interstitial lung diseases in CAD. Since then, ANN has been widely used in CAD schemes for the detection and diagnosis of various diseases in different imaging modalities, including the differential diagnosis of lung nodules and interstitial lung diseases in chest radiography, CT, and positron emission tomography/CT. It is likely that CAD will be integrated into picture archiving and communication systems and will become a standard of care for diagnostic examinations in daily clinical work. Copyright © 2011 Elsevier Inc. All rights reserved.
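    In outline, temporal subtraction registers the previous image to the current one and subtracts. A minimal sketch (illustrative Python with scikit-image; a rigid translation stands in for the nonlinear warping described above):

```python
import numpy as np
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

def temporal_subtraction(current, previous):
    """Register `previous` to `current` (translation only) and subtract,
    enhancing interval change while suppressing unchanged anatomy."""
    offset, _, _ = phase_cross_correlation(current, previous)
    registered = shift(previous, offset)
    return current - registered
```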

  14. Spatial-Spectral Approaches to Edge Detection in Hyperspectral Remote Sensing

    NASA Astrophysics Data System (ADS)

    Cox, Cary M.

    This dissertation advances geoinformation science at the intersection of hyperspectral remote sensing and edge detection methods. A relatively new phenomenology among its remote sensing peers, hyperspectral imagery (HSI) comprises only about 7% of all remote sensing research - there are five times as many radar-focused peer-reviewed journal articles as hyperspectral-focused ones. Similarly, edge detection studies comprise only about 8% of image processing research, most of which is dedicated to the image processing techniques most closely associated with end results, such as image classification and feature extraction. Given the centrality of edge detection to mapping, that most important of geographic functions, improving the collective understanding of hyperspectral imagery edge detection methods constitutes a research objective aligned to the heart of the geoinformation sciences. Consequently, this dissertation endeavors to narrow the HSI edge detection research gap by advancing three HSI edge detection methods designed to leverage HSI's unique chemical identification capabilities in pursuit of generating accurate, high-quality edge planes. The Di Zenzo-based gradient edge detection algorithm, an innovative version of the Resmini HySPADE edge detection algorithm and a level set-based edge detection algorithm are tested against 15 traditional and non-traditional HSI datasets spanning a range of HSI data configurations, spectral resolutions, spatial resolutions, bandpasses and applications. This study empirically measures algorithm performance against Dr. John Canny's six criteria for a good edge operator: false positives, false negatives, localization, single-point response, robustness to noise and unbroken edges. The end state is a suite of spatial-spectral edge detection algorithms that produce satisfactory edge results against a range of hyperspectral data types applicable to a diverse set of earth remote sensing applications. This work also explores the concept of an edge within hyperspectral space, the relative importance of spatial and spectral resolutions as they pertain to HSI edge detection, and how effectively compressed HSI data improves edge detection results. The HSI edge detection experiments yielded valuable insights into the algorithms' strengths, weaknesses and optimal alignment to remote sensing applications. The gradient-based edge operator produced strong edge planes across a range of evaluation measures and applications, particularly with respect to false negatives, unbroken edges, urban mapping, vegetation mapping and oil spill mapping applications. False positives and uncompressed HSI data presented occasional challenges to the algorithm. The HySPADE edge operator produced satisfactory results with respect to localization, single-point response, oil spill mapping and trace chemical detection, and was challenged by false positives, declining spectral resolution and vegetation mapping applications. The level set edge detector produced high-quality edge planes for most tests and demonstrated strong performance with respect to false positives, single-point response, oil spill mapping and mineral mapping. False negatives were a regular challenge for the level set edge detection algorithm. Finally, HSI data optimized for spectral information compression and noise was shown to improve edge detection performance across all three algorithms, while the gradient-based algorithm and HySPADE demonstrated significant robustness to declining spectral and spatial resolutions.
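    The Di Zenzo operator combines per-band gradients into a 2x2 structure tensor whose largest eigenvalue gives the multiband edge strength. A minimal sketch (illustrative Python, assuming a hyperspectral cube of shape (rows, cols, bands)):

```python
import numpy as np

def di_zenzo_edge_strength(cube):
    """Largest eigenvalue of the structure tensor accumulated over bands:
    gxx = sum dx^2, gyy = sum dy^2, gxy = sum dx*dy."""
    dy, dx = np.gradient(cube.astype(float), axis=(0, 1))
    gxx = np.sum(dx * dx, axis=2)
    gyy = np.sum(dy * dy, axis=2)
    gxy = np.sum(dx * dy, axis=2)
    # closed-form largest eigenvalue of [[gxx, gxy], [gxy, gyy]]
    trace = gxx + gyy
    root = np.sqrt((gxx - gyy) ** 2 + 4 * gxy ** 2)
    return 0.5 * (trace + root)
```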

  15. Status of the NPP and J1 NOAA Unique Combined Atmospheric Processing System (NUCAPS): recent algorithm enhancements geared toward validation and near real time users applications.

    NASA Astrophysics Data System (ADS)

    Gambacorta, A.; Nalli, N. R.; Tan, C.; Iturbide-Sanchez, F.; Wilson, M.; Zhang, K.; Xiong, X.; Barnet, C. D.; Sun, B.; Zhou, L.; Wheeler, A.; Reale, A.; Goldberg, M.

    2017-12-01

    The NOAA Unique Combined Atmospheric Processing System (NUCAPS) is the NOAA operational algorithm for retrieving thermodynamic and composition variables from hyperspectral thermal sounders such as CrIS, IASI and AIRS. The combined use of microwave sounders, such as ATMS, AMSU and MHS, enables full sounding of the atmospheric column under all-sky conditions. NUCAPS retrieval products are accessible in near real time (about 1.5 hours' delay) through the NOAA Comprehensive Large Array-data Stewardship System (CLASS). Since February 2015, NUCAPS retrievals have also been accessible via Direct Broadcast, with an unprecedented low latency of less than 0.5 hours. NUCAPS builds on a long-term, multi-agency investment in algorithm research and development. The uniqueness of this algorithm consists in a number of features that are key to providing highly accurate and stable atmospheric retrievals, suitable for real-time weather and air quality applications. Firstly, maximizing the use of the information content present in hyperspectral thermal measurements forms the foundation of the NUCAPS retrieval algorithm. Secondly, NUCAPS is a modular, name-list driven design: it can process multiple hyperspectral infrared sounders (on Aqua, NPP, MetOp and the JPSS series) by means of the same retrieval software executable and underlying spectroscopy. Finally, a cloud-clearing algorithm and the synergetic use of microwave radiance measurements enable full vertical sounding of the atmosphere under all-sky regimes. As we transition toward improved hyperspectral missions, assessing retrieval skill and consistency across multiple platforms becomes a priority for real-time user applications. The focus of this presentation is a general introduction to the recent improvements delivered in the NUCAPS full spectral resolution upgrade and an overview of the lessons learned from the 2017 Hazardous Weather Testbed Spring Experiment. Test cases will be shown on the use of NPP and MetOp NUCAPS under pre-convective, capping inversion and dry layer intrusion events.

  16. Stellar spectral classification of previously unclassified stars GSC 4461-698 and GSC 4466-870

    NASA Astrophysics Data System (ADS)

    Grau, Darren Moser

    Stellar spectral classification is one of the first efforts undertaken to begin defining the physical characteristics of stars. However, many stars lack even this basic information, which is the foundation for later research to constrain stellar effective temperatures, masses, radial velocities, the number of stars in the system, and age. This research obtained visible-λ stellar spectra via the testing and commissioning of a Santa Barbara Instruments Group (SBIG) Self-Guiding Spectrograph (SGS) at the UND Observatory. Utilizing a 16-inch-aperture telescope on Internet Observatory #3, the SGS obtained spectra of GSC 4461-698 and GSC 4466-870 in the low-resolution mode using an 18-µm wide slit with a dispersion of 4.3 Å/pixel, a resolution of 8 Å, and a spectral range from 3800-7500 Å. Observational protocols included automatic bias/dark frame subtraction for each stellar spectrum obtained, followed by spectral averaging to obtain a combined spectrum for each star observed. Image calibration and spectral averaging were performed using the software programs Maxim DL, ImageJ, Microsoft Excel, and Winmk. For wavelength calibration, spectra of an Hg/Ne source were obtained, allowing the conversion of spectrograph channels into wavelengths. Stellar emission and absorption lines, such as those of hydrogen (H) and helium (He), were identified, extracted, and rectified. Each average spectrum was compared to the MK stellar spectral standards to determine an initial spectral classification for each star. The hope is that successful completion of this project will allow long-term stellar spectral observations to begin at the UND Observatory.
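    The channel-to-wavelength step is a low-order polynomial fit through identified calibration lines. A minimal sketch (illustrative Python; the channel positions below are placeholders, only the Hg/Ne laboratory wavelengths are real line values):

```python
import numpy as np

# Identified Hg/Ne emission lines: detector channel -> laboratory wavelength (Å).
# Channel values are placeholders, not the actual UND calibration data.
channels = np.array([112.0, 257.0, 404.0, 588.0])
wavelengths = np.array([4358.3, 5460.7, 6143.1, 6929.5])

fit = np.polyfit(channels, wavelengths, deg=1)   # ~4.3 Å/pixel dispersion

def channel_to_wavelength(ch):
    """Convert spectrograph channel numbers to wavelengths in Å."""
    return np.polyval(fit, ch)
```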

  17. GOME Total Ozone and Calibration Error Derived Using Version 8 TOMS Algorithm

    NASA Technical Reports Server (NTRS)

    Gleason, J.; Wellemeyer, C.; Qin, W.; Ahn, C.; Gopalan, A.; Bhartia, P.

    2003-01-01

    The Global Ozone Monitoring Experiment (GOME) is a hyperspectral satellite instrument measuring the ultraviolet backscatter at relatively high spectral resolution. GOME radiances have been slit-averaged to emulate measurements of the Total Ozone Mapping Spectrometer (TOMS) made at discrete wavelengths and processed using the new TOMS Version 8 ozone algorithm. Compared to Differential Optical Absorption Spectroscopy (DOAS) techniques based on local structure in the Huggins bands, the TOMS algorithm uses differential absorption between a pair of wavelengths including the local structure as well as the background continuum. This makes the TOMS algorithm more sensitive to ozone, but it also makes it more sensitive to instrument calibration errors. While calibration adjustments are not needed for fitting techniques like the DOAS employed in GOME algorithms, some adjustment is necessary when applying the TOMS algorithm to GOME. Using spectral discrimination at near-ultraviolet wavelength channels unabsorbed by ozone, the GOME wavelength-dependent calibration drift is estimated and then checked using pair justification. In addition, the day-one calibration offset is estimated based on the residuals of the Version 8 TOMS algorithm. The estimated drift in the 2b detector of GOME is small through the first four years and then increases rapidly to +5% in normalized radiance at 331 nm relative to 385 nm by mid 2000. The 1b detector appears to be quite well behaved throughout this time period.

  18. [A cloud detection algorithm for MODIS images combining Kmeans clustering and multi-spectral threshold method].

    PubMed

    Wang, Wei; Song, Wei-Guo; Liu, Shi-Xing; Zhang, Yong-Ming; Zheng, Hong-Yang; Tian, Wei

    2011-04-01

    An improved method for detecting clouds, combining K-means clustering and a multi-spectral threshold approach, is described. On the basis of landmark spectrum analysis, MODIS data are initially categorized into two major classes by the K-means method. The first class includes clouds, smoke and snow, and the second class includes vegetation, water and land. A multi-spectral threshold detection is then applied to the first class to eliminate interference such as smoke and snow. The method was tested with MODIS data at different times and under different underlying surface conditions. Visual inspection of the algorithm's performance showed that it can effectively detect smaller areas of cloud pixels and exclude the interference of the underlying surface, which provides a good foundation for subsequent fire detection.
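    A minimal sketch of the two-stage idea (illustrative Python with scikit-learn; the band choices and threshold values are ours, not the paper's):

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_cloud(refl_066, refl_138, bt_11um):
    """Stage 1: K-means split into bright/cold vs. dark/warm classes.
    Stage 2: multi-spectral thresholds remove smoke/snow from class 1."""
    X = np.column_stack([refl_066.ravel(), bt_11um.ravel()])
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
    # the cluster with higher mean visible reflectance holds cloud/smoke/snow
    bright_id = np.argmax([refl_066.ravel()[labels == k].mean() for k in (0, 1)])
    bright = labels.reshape(refl_066.shape) == bright_id
    # illustrative thresholds: clouds are cold at 11 um and bright at 1.38 um
    return bright & (bt_11um < 273.0) & (refl_138 > 0.03)
```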

  19. Constrained spectral clustering under a local proximity structure assumption

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri; Xu, Qianjun; des Jardins, Marie

    2005-01-01

    This work focuses on incorporating pairwise constraints into a spectral clustering algorithm. A new constrained spectral clustering method is proposed, as well as an active constraint acquisition technique and a heuristic for parameter selection. We demonstrate that our constrained spectral clustering method, CSC, works well when the data exhibits what we term local proximity structure.

  20. An efficient quantum algorithm for spectral estimation

    NASA Astrophysics Data System (ADS)

    Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth

    2017-03-01

    We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
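    The classical matrix pencil method that the quantum algorithm speeds up is compact enough to sketch directly (illustrative Python; `p`, the assumed number of damped sinusoids, matches the paper's regime of few components relative to the number of samples):

```python
import numpy as np

def matrix_pencil(x, p, dt=1.0):
    """Estimate frequencies and damping factors of p exponentially damped
    sinusoids from uniform samples x[n] via the matrix pencil method."""
    N = len(x)
    L = N // 2                                  # pencil parameter
    # Hankel data matrices: Y0 (columns 0..L-1) and Y1 (columns 1..L)
    Y = np.array([x[i:i + L + 1] for i in range(N - L)])
    Y0, Y1 = Y[:, :-1], Y[:, 1:]
    # eigenvalues of pinv(Y0) @ Y1 contain the signal poles z_k (plus ~0s)
    z = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
    z = z[np.argsort(-np.abs(z))][:p]           # keep the p dominant poles
    s = np.log(z) / dt                          # s = alpha + i * 2*pi*f
    return s.imag / (2 * np.pi), s.real         # frequencies, damping factors
```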

  1. Statistical iterative material image reconstruction for spectral CT using a semi-empirical forward model

    NASA Astrophysics Data System (ADS)

    Mechlem, Korbinian; Ehn, Sebastian; Sellerer, Thorsten; Pfeiffer, Franz; Noël, Peter B.

    2017-03-01

    In spectral computed tomography (spectral CT), the additional information about the energy dependence of attenuation coefficients can be exploited to generate material selective images. These images have found applications in various areas such as artifact reduction, quantitative imaging or clinical diagnosis. However, significant noise amplification on material decomposed images remains a fundamental problem of spectral CT. Most spectral CT algorithms separate the process of material decomposition and image reconstruction. Separating these steps is suboptimal because the full statistical information contained in the spectral tomographic measurements cannot be exploited. Statistical iterative reconstruction (SIR) techniques provide an alternative, mathematically elegant approach to obtaining material selective images with improved tradeoffs between noise and resolution. Furthermore, image reconstruction and material decomposition can be performed jointly. This is accomplished by a forward model which directly connects the (expected) spectral projection measurements and the material selective images. To obtain this forward model, detailed knowledge of the different photon energy spectra and the detector response was assumed in previous work. However, accurately determining the spectrum is often difficult in practice. In this work, a new algorithm for statistical iterative material decomposition is presented. It uses a semi-empirical forward model which relies on simple calibration measurements. Furthermore, an efficient optimization algorithm based on separable surrogate functions is employed. This partially negates one of the major shortcomings of SIR, namely high computational cost and long reconstruction times. Numerical simulations and real experiments show strongly improved image quality and reduced statistical bias compared to projection-based material decomposition.

  2. A hyperspectral imagery anomaly detection algorithm based on local three-dimensional orthogonal subspace projection

    NASA Astrophysics Data System (ADS)

    Zhang, Xing; Wen, Gongjian

    2015-10-01

    Anomaly detection (AD) is becoming increasingly important in hyperspectral imagery analysis, with many practical applications. The local orthogonal subspace projection (LOSP) detector is a popular anomaly detector which exploits local endmembers/eigenvectors around the pixel under test (PUT) to construct a background subspace. However, this subspace only takes advantage of the spectral information, while the spatial correlation of the background clutter is neglected, which makes the anomaly detection result sensitive to the accuracy of the estimated subspace. In this paper, a local three-dimensional orthogonal subspace projection (3D-LOSP) algorithm is proposed. Firstly, jointly using both spectral and spatial information, three directional background subspaces are created, along the image height direction, the image width direction and the spectral direction, respectively. Then the three corresponding orthogonal subspaces are calculated. After that, each vector of the local cube along each of the three directions is projected onto the corresponding orthogonal subspace. Finally, a composite score is formed from the three directional operators. In 3D-LOSP, anomalies are redefined as targets that are not only spectrally different from the background but also spatially distinct. Thanks to the addition of spatial information, the proposed 3D-LOSP algorithm greatly improves the robustness of the anomaly detection result. It is noteworthy that the proposed algorithm is an expansion of LOSP, and this idea can inspire many other spectral-based anomaly detection methods. Experiments with real hyperspectral images have proved the stability of the detection result.
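    The core single-direction operator, projecting a test spectrum onto the orthogonal complement of a local background subspace, can be sketched as follows (illustrative Python; taking the leading covariance eigenvectors as the background basis is our reading of the LOSP description):

```python
import numpy as np

def osp_anomaly_score(pixel, background, n_eig=5):
    """pixel: (bands,) spectrum under test; background: (n_pixels, bands)
    local neighborhood. Score = energy remaining after removing the
    background subspace spanned by the leading eigenvectors."""
    cov = np.cov(background, rowvar=False)
    _, vecs = np.linalg.eigh(cov)            # eigenvectors, ascending order
    U = vecs[:, -n_eig:]                     # leading eigenvectors as subspace
    P_perp = np.eye(len(pixel)) - U @ U.T    # orthogonal-complement projector
    r = P_perp @ (pixel - background.mean(axis=0))
    return float(r @ r)
```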

  3. Spectral Unmixing Based Construction of Lunar Mineral Abundance Maps

    NASA Astrophysics Data System (ADS)

    Bernhardt, V.; Grumpe, A.; Wöhler, C.

    2017-07-01

    In this study we apply a nonlinear spectral unmixing algorithm to a nearly global lunar spectral reflectance mosaic derived from hyper-spectral image data acquired by the Moon Mineralogy Mapper (M3) instrument. Corrections for topographic effects and for thermal emission were performed. A set of 19 laboratory-based reflectance spectra of lunar samples published by the Lunar Soil Characterization Consortium (LSCC) were used as a catalog of potential endmember spectra. For a given spectrum, the multi-population population-based incremental learning (MPBIL) algorithm was used to determine the subset of endmembers actually contained in it. However, as the MPBIL algorithm is computationally expensive, it cannot be applied to all pixels of the reflectance mosaic. Hence, the reflectance mosaic was clustered into a set of 64 prototype spectra, and the MPBIL algorithm was applied to each prototype spectrum. Each pixel of the mosaic was assigned to the most similar prototype, and the set of endmembers previously determined for that prototype was used for pixel-wise nonlinear spectral unmixing using the Hapke model, implemented as linear unmixing of the single-scattering albedo spectrum. This procedure yields maps of the fractional abundances of the 19 endmembers. Based on the known modal abundances of a variety of mineral species in the LSCC samples, a conversion from endmember abundances to mineral abundances was performed. We present maps of the fractional abundances of plagioclase, pyroxene and olivine and compare our results with previously published lunar mineral abundance maps.

  4. SCP -- A Simple CCD Processing Package

    NASA Astrophysics Data System (ADS)

    Lewis, J. R.

    This note describes a small set of programs, written at RGO, which deal with basic CCD frame processing (e.g. bias subtraction, flat fielding, trimming etc.). The need to process large numbers of CCD frames from devices such as FOS or ISIS in order to extract spectra has prompted the writing of routines which will do the basic hack-work with a minimal amount of interaction from the user. Although they were written with spectral data in mind, there are no ``spectrum-specific'' features in the software which means they can be applied to any CCD data.
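    The basic reductions such a package automates are essentially one-liners. A minimal sketch (illustrative Python, assuming combined master bias and flat frames):

```python
import numpy as np

def reduce_ccd_frame(raw, master_bias, master_flat, trim=None):
    """Bias-subtract, flat-field, and optionally trim a CCD frame."""
    frame = raw.astype(float) - master_bias
    flat = master_flat - master_bias
    frame /= flat / np.median(flat)        # normalize the flat to unit median
    if trim is not None:                   # trim = (row_slice, col_slice)
        frame = frame[trim]
    return frame
```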

  5. Instrumental and atmospheric background lines observed by the SMM gamma-ray spectrometer

    NASA Technical Reports Server (NTRS)

    Share, G. H.; Kinzer, R. L.; Strickman, M. S.; Letaw, J. R.; Chupp, E. L.

    1989-01-01

    Preliminary identifications of instrumental and atmospheric background lines detected by the gamma-ray spectrometer on NASA's Solar Maximum Mission satellite (SMM) are presented. The long-term and stable operation of this experiment has provided data of high quality for use in this analysis. Methods are described for identifying radioactive isotopes which use their different decay times. Temporal evolution of the features are revealed by spectral comparisons, subtractions, and fits. An understanding of these temporal variations has enabled the data to be used for detecting celestial gamma-ray sources.

  6. Biologically inspired binaural hearing aid algorithms: Design principles and effectiveness

    NASA Astrophysics Data System (ADS)

    Feng, Albert

    2002-05-01

    Despite rapid advances in the sophistication of hearing aid technology and microelectronics, listening in noise remains problematic for people with hearing impairment. To solve this problem, two algorithms were designed for use in binaural hearing aid systems. The signal processing strategies are based on principles in auditory physiology and psychophysics: (a) the location/extraction (L/E) binaural computational scheme determines the directions of source locations and cancels noise by applying a simple subtraction method over every frequency band; and (b) the frequency-domain minimum-variance (FMV) scheme extracts a target sound from a known direction amidst multiple interfering sound sources. Both algorithms were evaluated using standard metrics such as signal-to-noise-ratio gain and articulation index, and the results were compared with those from conventional adaptive beam-forming algorithms. In free-field tests with multiple interfering sound sources our algorithms performed better than conventional algorithms. Preliminary intelligibility and speech reception results in multitalker environments showed gains for every listener with normal or impaired hearing when the signals were processed in real time with the FMV binaural hearing aid algorithm. [Work supported by NIH-NIDCD Grant No. R21DC04840 and the Beckman Institute.]
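    The FMV scheme is, in essence, a per-frequency-bin minimum-variance beamformer. A minimal sketch of the weight computation (illustrative Python; the diagonal loading and the steering-vector convention are our assumptions, not details from the abstract):

```python
import numpy as np

def fmv_weights(R, d, loading=1e-3):
    """Minimum-variance weights for one frequency bin:
    w = R^{-1} d / (d^H R^{-1} d), with diagonal loading for stability.
    R: (mics, mics) cross-spectral matrix; d: (mics,) steering vector."""
    m = len(d)
    Rl = R + loading * (np.trace(R).real / m) * np.eye(m)
    Ri_d = np.linalg.solve(Rl, d)
    return Ri_d / (d.conj() @ Ri_d)

# per-bin usage: output_bin = w.conj() @ x_bin for each STFT frequency bin
```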

  7. Analysis of Modified SMI Method for Adaptive Array Weight Control. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Dilsavor, Ronald Louis

    1989-01-01

    An adaptive array is used to receive a desired signal in the presence of weak interference signals which need to be suppressed. A modified sample matrix inversion (SMI) algorithm controls the array weights. The modification leads to increased interference suppression by subtracting a fraction of the noise power from the diagonal elements of the covariance matrix. The modified algorithm maximizes an intuitive power ratio criterion. The expected values and variances of the array weights, output powers, and power ratios as functions of the fraction and the number of snapshots are found and compared to computer simulation and real experimental array performance. Reduced-rank covariance approximations and errors in the estimated covariance are also described.
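    The modification is a negative diagonal adjustment of the sample covariance before inversion. A minimal sketch (illustrative Python; the minimum-eigenvalue noise estimate is a stand-in for however the noise power is actually estimated):

```python
import numpy as np

def modified_smi_weights(snapshots, steering, fraction):
    """snapshots: (n_snapshots, n_elements) array outputs. Subtract a
    fraction of the estimated noise power from the diagonal of the sample
    covariance, then form SMI weights w = R_mod^{-1} s. Use fraction < 1
    to keep R_mod invertible."""
    R = np.cov(snapshots, rowvar=False)
    noise_power = np.min(np.linalg.eigvalsh(R)).real  # crude noise estimate
    R_mod = R - fraction * noise_power * np.eye(R.shape[0])
    return np.linalg.solve(R_mod, steering)
```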

  8. Video Analytics Evaluation: Survey of Datasets, Performance Metrics and Approaches

    DTIC Science & Technology

    2014-09-01

    …training phase and a fusion of the detector outputs. … The basic idea of Bagging is to train multiple classifiers… can reduce noisy interest points. Person detection and background subtraction methods were used to create hot regions. The hot regions were… detection algorithms are incorporated with MHT to construct one integrated detector/tracker. … IRDS-CASIA proposed a method to solve a…

  9. Spread Spectrum Signal Characteristic Estimation Using Exponential Averaging and an AD-HOC Chip rate Estimator

    DTIC Science & Technology

    2007-03-01

    QPSK: Quadrature Phase-Shift Keying; RV: Random Variable; SHAC: Single-Hop-Observation Auto-Correlation; SINR: Signal-to-Interference… The fast Fourier transform (FFT) accumulation method and the strip spectral correlation algorithm subdivide the support region in the bi-frequency… diamond shapes, while the strip spectral correlation algorithm subdivides the region into strips. Each strip covers a number of the FFT accumulation…

  10. Designing a practical system for spectral imaging of skylight.

    PubMed

    López-Alvarez, Miguel A; Hernández-Andrés, Javier; Romero, Javier; Lee, Raymond L

    2005-09-20

    In earlier work [J. Opt. Soc. Am. A 21, 13-23 (2004)], we showed that a combination of linear models and optimum Gaussian sensors obtained by an exhaustive search can recover daylight spectra reliably from broadband sensor data. Thus our algorithm and sensors could be used to design an accurate, relatively inexpensive system for spectral imaging of daylight. Here we improve our simulation of the multispectral system by (1) considering the different kinds of noise inherent in electronic devices such as charge-coupled devices (CCDs) or complementary metal-oxide-semiconductor (CMOS) sensors and (2) extending our research to a different kind of natural illumination, skylight. Because exhaustive searches are computationally expensive, here we switch to a simulated annealing algorithm to define the optimum sensors for recovering skylight spectra. The annealing algorithm requires us to minimize a single cost function, and so we develop one that calculates both the spectral and colorimetric similarity of any pair of skylight spectra. We show that the simulated annealing algorithm yields results similar to the exhaustive search but with much less computational effort. Our technique lets us study the properties of optimum sensors in the presence of noise, one side effect of which is that adding more sensors may not improve the spectral recovery.

  11. A simulation of remote sensor systems and data processing algorithms for spectral feature classification

    NASA Technical Reports Server (NTRS)

    Arduini, R. F.; Aherron, R. M.; Samms, R. W.

    1984-01-01

    A computational model of the deterministic and stochastic processes involved in multispectral remote sensing was designed to evaluate the performance of sensor systems and data processing algorithms for spectral feature classification. Accuracy in distinguishing between categories of surfaces or between specific types is developed as a means to compare sensor systems and data processing algorithms. The model allows studies to be made of the effects of variability of the atmosphere and of surface reflectance, as well as the effects of channel selection and sensor noise. Examples of these effects are shown.

  12. Preconditioned Mixed Spectral Element Methods for Elasticity and Stokes Problems

    NASA Technical Reports Server (NTRS)

    Pavarino, Luca F.

    1996-01-01

    Preconditioned iterative methods for the indefinite systems obtained by discretizing the linear elasticity and Stokes problems with mixed spectral elements in three dimensions are introduced and analyzed. The resulting stiffness matrices have the structure of saddle point problems with a penalty term, which is associated with the Poisson ratio for elasticity problems or with stabilization techniques for Stokes problems. The main results of this paper show that the convergence rate of the resulting algorithms is independent of the penalty parameter and the number of spectral elements N, and mildly dependent on the spectral degree n via the inf-sup constant. The preconditioners proposed for the whole indefinite system are block-diagonal and block-triangular. Numerical experiments presented in the final section show that these algorithms are a practical and efficient strategy for the iterative solution of the indefinite problems arising from mixed spectral element discretizations of elliptic systems.
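    For reference, the indefinite systems in question have the saddle-point-with-penalty structure sketched below (our notation, not the paper's: t denotes the penalty parameter and C the penalty/stabilization block):

```latex
\begin{pmatrix} A & B^{T} \\ B & -t\,C \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ 0 \end{pmatrix},
\qquad t \ge 0
```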

  13. Spectral Unmixing With Multiple Dictionaries

    NASA Astrophysics Data System (ADS)

    Cohen, Jeremy E.; Gillis, Nicolas

    2018-02-01

    Spectral unmixing aims at recovering the spectral signatures of materials, called endmembers, mixed in a hyperspectral or multispectral image, along with their abundances. A typical assumption is that the image contains one pure pixel per endmember, in which case spectral unmixing reduces to identifying these pixels. Many fully automated methods have been proposed in recent years, but little work has been done to allow users to select areas where pure pixels are present, either manually or using a segmentation algorithm. Additionally, in a non-blind approach, several spectral libraries may be available rather than a single one, with a fixed number (or an upper or lower bound) of endmembers to choose from each. In this paper, we propose a multiple-dictionary constrained low-rank matrix approximation model that addresses these two problems. We propose an algorithm to compute this model, dubbed M2PALS, and its performance is discussed on both synthetic and real hyperspectral images.

  14. Wavelet compression techniques for hyperspectral data

    NASA Technical Reports Server (NTRS)

    Evans, Bruce; Ringer, Brian; Yeates, Mathew

    1994-01-01

    Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. In the first two cases, a wavelet transform coder was used for the two-dimensional compression. The third case used a three dimensional extension of this same algorithm.
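    The third approach can be sketched with a separable 3-D wavelet transform plus coefficient thresholding (illustrative Python with PyWavelets; the keep-fraction rule stands in for the full quantize-and-entropy-code stage):

```python
import numpy as np
import pywt

def compress_cube(cube, wavelet="db4", level=3, keep=0.05):
    """3-D DWT of a (rows, cols, bands) cube, keep the largest `keep`
    fraction of coefficients, then inverse transform."""
    coeffs = pywt.wavedecn(cube, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < thresh] = 0.0                 # hard threshold
    kept = pywt.array_to_coeffs(arr, slices, output_format="wavedecn")
    return pywt.waverecn(kept, wavelet)
```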

  15. Mandarin Chinese Tone Identification in Cochlear Implants: Predictions from Acoustic Models

    PubMed Central

    Morton, Kenneth D.; Torrione, Peter A.; Throckmorton, Chandra S.; Collins, Leslie M.

    2015-01-01

    It has been established that current cochlear implants do not supply adequate spectral information for perception of tonal languages. Comprehension of a tonal language, such as Mandarin Chinese, requires recognition of lexical tones. New strategies of cochlear stimulation such as variable stimulation rate and current steering may provide the means of delivering more spectral information and thus may provide the auditory fine structure required for tone recognition. Several cochlear implant signal processing strategies are examined in this study, the continuous interleaved sampling (CIS) algorithm, the frequency amplitude modulation encoding (FAME) algorithm, and the multiple carrier frequency algorithm (MCFA). These strategies provide different types and amounts of spectral information. Pattern recognition techniques can be applied to data from Mandarin Chinese tone recognition tasks using acoustic models as a means of testing the abilities of these algorithms to transmit the changes in fundamental frequency indicative of the four lexical tones. The ability of processed Mandarin Chinese tones to be correctly classified may predict trends in the effectiveness of different signal processing algorithms in cochlear implants. The proposed techniques can predict trends in performance of the signal processing techniques in quiet conditions but fail to do so in noise. PMID:18706497

  16. A review of spectral methods

    NASA Technical Reports Server (NTRS)

    Lustman, L.

    1984-01-01

    An outline of spectral methods for partial differential equations is presented. The basic spectral algorithm is defined, collocation methods are emphasized, and the main advantage of the method, its infinite order of accuracy for problems with smooth solutions, is discussed. Examples of theoretical numerical analysis of spectral calculations are presented, together with an application of spectral methods to transonic flow. The full potential transonic equation is among the best understood nonlinear equations.
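    As a concrete instance of collocation, the Chebyshev differentiation matrix below (a sketch following Trefethen's classic construction, not taken from the reviewed report) differentiates smooth functions with spectral accuracy:

```python
import numpy as np

def cheb(N):
    """Chebyshev collocation points and differentiation matrix on [-1, 1]."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # diagonal via negative row sums
    return D, x

# spectral accuracy on a smooth function: error near machine precision
D, x = cheb(16)
err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))
```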

  17. Radiation anomaly detection algorithms for field-acquired gamma energy spectra

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ron; Guss, Paul; Mitchell, Stephen

    2015-08-01

    The Remote Sensing Laboratory (RSL) is developing a tactical, networked radiation detection system that will be agile, reconfigurable, and capable of rapid threat assessment with a high degree of fidelity and certainty. Our design is driven by the needs of users such as law enforcement personnel who must make decisions by evaluating threat signatures in urban settings. The most efficient tool available to identify the nature of a threat object is real-time gamma spectroscopic analysis, as it is fast and has a very low probability of producing false positive alarm conditions. Urban radiological searches are inherently challenged by the rapid and large spatial variation of the background gamma radiation, the presence of benign radioactive materials in the form of naturally occurring radioactive materials (NORM), and shielded and/or masked threat sources. Multiple spectral anomaly detection algorithms have been developed by national laboratories and commercial vendors. For example, the Gamma Detector Response and Analysis Software (GADRAS), a one-dimensional deterministic radiation transport code capable of calculating gamma-ray spectra using physics-based detector response functions, was developed at Sandia National Laboratories. The nuisance-rejection spectral comparison ratio anomaly detection algorithm (NSCRAD), developed at Pacific Northwest National Laboratory, uses spectral comparison ratios to detect deviations from benign medical and NORM radiation sources and can work in spite of a strong presence of NORM and/or medical sources. RSL has developed its own wavelet-based gamma energy spectral anomaly detection algorithm called WAVRAD. Test results and the relative merits of these different algorithms will be discussed and demonstrated.

  18. Novel Spectral Representations and Sparsity-Driven Algorithms for Shape Modeling and Analysis

    NASA Astrophysics Data System (ADS)

    Zhong, Ming

    In this dissertation, we focus on extending classical spectral shape analysis by incorporating spectral graph wavelets and sparsity-seeking algorithms. Defined with the graph Laplacian eigenbasis, the spectral graph wavelets are localized both in the vertex domain and the graph spectral domain, and thus are very effective in describing local geometry. With a rich dictionary of elementary vectors and suitable sparsity constraints, a real-life signal can often be well approximated by a very sparse coefficient representation. The many successful applications of sparse signal representation in computer vision and image processing inspire us to explore the idea of employing sparse modeling techniques with dictionaries of spectral bases to solve various shape modeling problems. Conventional spectral mesh compression uses the eigenfunctions of the mesh Laplacian as shape bases, which are highly inefficient in representing local geometry. To ameliorate this, we advocate an innovative approach to 3D mesh compression using spectral graph wavelets as the dictionary to encode mesh geometry. The spectral graph wavelets are locally defined at individual vertices and can better capture local shape information than the Laplacian eigenbasis. The multi-scale spectral graph wavelets form a redundant dictionary as shape basis, so we formulate the compression of a 3D shape as a sparse approximation problem that can be readily handled by greedy pursuit algorithms. Surface inpainting refers to the completion or recovery of missing shape geometry based on the shape information that is currently available. We devise a new surface inpainting algorithm founded upon the theory and techniques of sparse signal recovery. Instead of estimating the missing geometry directly, our method finds a low-dimensional representation that describes the entire original shape. More specifically, we find that, for many shapes, the vertex coordinate function can be well approximated by a very sparse coefficient representation with respect to the dictionary comprising its Laplacian eigenbasis, and it is then possible to recover this sparse representation from partial measurements of the original shape. Taking advantage of the sparsity cue, we advocate a novel variational approach for surface inpainting, integrating data fidelity constraints on the shape domain with coefficient sparsity constraints on the transformed domain. Because of the powerful properties of the Laplacian eigenbasis, the inpainting results of our method tend to be globally coherent with the remaining shape. Informative and discriminative feature descriptors are vital in qualitative and quantitative shape analysis for a large variety of graphics applications. We advocate novel strategies to define generalized, user-specified features on shapes. Our new region descriptors are primarily built upon the coefficients of spectral graph wavelets that are both multi-scale and multi-level in nature, consisting of both local and global information. Based on our novel spectral feature descriptor, we develop a user-specified feature detection framework and a tensor-based shape matching algorithm. Through various experiments, we demonstrate the competitive performance of our proposed methods and the great potential of spectral bases and sparsity-driven methods for shape modeling.
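
    The greedy pursuit step at the heart of the compression formulation can be sketched compactly. The following is a minimal orthogonal matching pursuit (OMP) in Python over a synthetic random dictionary; in the dissertation's setting the columns would be spectral graph wavelets and the signal a vertex-coordinate function, so treat the sketch as illustrative only.

      import numpy as np

      def omp(D, y, n_nonzero):
          # Greedy pursuit: pick the atom most correlated with the residual,
          # then re-fit all selected coefficients by least squares.
          residual, support = y.copy(), []
          for _ in range(n_nonzero):
              j = int(np.argmax(np.abs(D.T @ residual)))
              if j not in support:
                  support.append(j)
              coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
              residual = y - D[:, support] @ coef
          x = np.zeros(D.shape[1])
          x[support] = coef
          return x

      rng = np.random.default_rng(1)
      D = rng.standard_normal((64, 256))
      D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
      x_true = np.zeros(256)
      x_true[[3, 70, 151]] = [1.0, -0.5, 2.0]
      x_hat = omp(D, D @ x_true, n_nonzero=3)
      print("recovered support:", np.flatnonzero(x_hat))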

  19. Velocity interferometer signal de-noising using modified Wiener filter

    NASA Astrophysics Data System (ADS)

    Rav, Amit; Joshi, K. D.; Roy, Kallol; Kaushik, T. C.

    2017-05-01

    The accuracy and precision of the non-contact velocity interferometer system for any reflector (VISAR) depend not only on good optical design and a linear optical-to-electrical conversion system, but also on accurate and robust post-processing techniques. The performance of these techniques, such as the phase unwrapping algorithm, depends on the signal-to-noise ratio (SNR) of the recorded signal. In the present work, a novel method of improving the SNR of the recorded VISAR signal, based on knowledge of the noise characteristics of the signal conversion and recording system, is presented. The proposed method uses a modified Wiener filter, for which the signal power spectrum estimate is obtained using a spectral subtraction method (SSM), and the noise power spectrum estimate is obtained by averaging the recorded signal during the period when no target movement is expected. Since the noise power spectrum estimate is dynamic in nature and is obtained individually for each experimental record, the improvement in signal quality is high. The proposed method is applied to simulated standard signals and is found not only to be better than the SSM, but also less sensitive to the selection of the noise floor during signal power spectrum estimation. Finally, the proposed method is applied to a recorded experimental signal and an improvement in the SNR is reported.
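
    A minimal sketch of the filtering scheme described above, with all signals synthetic: the noise power spectrum comes from a noise-only (pre-movement) record, the signal power spectrum from spectral subtraction, and the two together define the Wiener gain.

      import numpy as np

      def wiener_denoise(signal, quiet, eps=1e-12):
          # Noise PSD from the quiet (no target movement) record; signal
          # PSD by spectral subtraction, floored at zero; Wiener gain
          # applied in the frequency domain.
          n = len(signal)
          noise_psd = np.abs(np.fft.rfft(quiet, n=n)) ** 2 / n
          spec = np.fft.rfft(signal)
          sig_psd = np.maximum(np.abs(spec) ** 2 / n - noise_psd, 0.0)
          gain = sig_psd / (sig_psd + noise_psd + eps)
          return np.fft.irfft(gain * spec, n=n)

      t = np.linspace(0.0, 1.0, 4096)
      clean = np.sin(2 * np.pi * (50.0 + 40.0 * t) * t)   # chirp-like fringes
      rng = np.random.default_rng(2)
      noisy = clean + 0.5 * rng.standard_normal(t.size)
      quiet = 0.5 * rng.standard_normal(t.size)           # noise-only record
      denoised = wiener_denoise(noisy, quiet)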

  20. Data Reduction Pipeline for the CHARIS Integral-Field Spectrograph I: Detector Readout Calibration and Data Cube Extraction

    NASA Technical Reports Server (NTRS)

    Groff, Tyler; Rizzo, Maxime; Greco, Johnny P.; Loomis, Craig; Mede, Kyle; Kasdin, N. Jeremy; Knapp, Gillian; Tamura, Motohide; Hayashi, Masahiko; Galvin, Michael

    2017-01-01

    We present the data reduction pipeline for CHARIS, a high-contrast integral-field spectrograph for the Subaru Telescope. The pipeline constructs a ramp from the raw reads using the measured nonlinear pixel response and reconstructs the data cube using one of three extraction algorithms: aperture photometry, optimal extraction, or chi-squared fitting. We measure and apply both a detector flatfield and a lenslet flatfield and reconstruct the wavelength- and position-dependent lenslet point-spread function (PSF) from images taken with a tunable laser. We use these measured PSFs to implement a chi-squared-based extraction of the data cube, with typical residuals of approximately 5 percent due to imperfect models of the under-sampled lenslet PSFs. The full two-dimensional residual of the chi-squared extraction allows us to model and remove correlated read noise, dramatically improving CHARIS's performance. The chi-squared extraction produces a data cube that has been deconvolved with the line-spread function and never performs any interpolations of either the data or the individual lenslet spectra. The extracted data cube also includes uncertainties for each spatial and spectral measurement. CHARIS's software is parallelized, written in Python and Cython, and freely available on github with a separate documentation page. Astrometric and spectrophotometric calibrations of the data cubes and PSF subtraction will be treated in a forthcoming paper.

  1. Data reduction pipeline for the CHARIS integral-field spectrograph I: detector readout calibration and data cube extraction

    NASA Astrophysics Data System (ADS)

    Brandt, Timothy D.; Rizzo, Maxime; Groff, Tyler; Chilcote, Jeffrey; Greco, Johnny P.; Kasdin, N. Jeremy; Limbach, Mary Anne; Galvin, Michael; Loomis, Craig; Knapp, Gillian; McElwain, Michael W.; Jovanovic, Nemanja; Currie, Thayne; Mede, Kyle; Tamura, Motohide; Takato, Naruhisa; Hayashi, Masahiko

    2017-10-01

    We present the data reduction pipeline for CHARIS, a high-contrast integral-field spectrograph for the Subaru Telescope. The pipeline constructs a ramp from the raw reads using the measured nonlinear pixel response and reconstructs the data cube using one of three extraction algorithms: aperture photometry, optimal extraction, or χ2 fitting. We measure and apply both a detector flatfield and a lenslet flatfield and reconstruct the wavelength- and position-dependent lenslet point-spread function (PSF) from images taken with a tunable laser. We use these measured PSFs to implement a χ2-based extraction of the data cube, with typical residuals of ˜5% due to imperfect models of the undersampled lenslet PSFs. The full two-dimensional residual of the χ2 extraction allows us to model and remove correlated read noise, dramatically improving CHARIS's performance. The χ2 extraction produces a data cube that has been deconvolved with the line-spread function and never performs any interpolations of either the data or the individual lenslet spectra. The extracted data cube also includes uncertainties for each spatial and spectral measurement. CHARIS's software is parallelized, written in Python and Cython, and freely available on github with a separate documentation page. Astrometric and spectrophotometric calibrations of the data cubes and PSF subtraction will be treated in a forthcoming paper.

  2. Instrumental Response Model and Detrending for the Dark Energy Camera

    DOE PAGES

    Bernstein, G. M.; Abbott, T. M. C.; Desai, S.; ...

    2017-09-14

    We describe the model for mapping from sky brightness to the digital output of the Dark Energy Camera (DECam) and the algorithms adopted by the Dark Energy Survey (DES) for inverting this model to obtain photometric measures of celestial objects from the raw camera output. This calibration aims for fluxes that are uniform across the camera field of view and across the full angular and temporal span of the DES observations, approaching the accuracy limits set by shot noise for the full dynamic range of DES observations. The DES pipeline incorporates several substantive advances over standard detrending techniques, including principal-components-based sky and fringe subtraction; correction of the "brighter-fatter" nonlinearity; use of internal consistency in on-sky observations to disentangle the influences of quantum efficiency, pixel-size variations, and scattered light in the dome flats; and pixel-by-pixel characterization of instrument spectral response, through combination of internal-consistency constraints with auxiliary calibration data. This article provides conceptual derivations of the detrending/calibration steps, and the procedures for obtaining the necessary calibration data. Other publications will describe the implementation of these concepts for the DES operational pipeline, the detailed methods, and the validation that the techniques can bring DECam photometry and astrometry within ≈2 mmag and ≈3 mas, respectively, of fundamental atmospheric and statistical limits. In conclusion, the DES techniques should be broadly applicable to wide-field imagers.

  3. Instrumental Response Model and Detrending for the Dark Energy Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, G. M.; Abbott, T. M. C.; Desai, S.

    We describe the model for mapping from sky brightness to the digital output of the Dark Energy Camera (DECam) and the algorithms adopted by the Dark Energy Survey (DES) for inverting this model to obtain photometric measures of celestial objects from the raw camera output. This calibration aims for fluxes that are uniform across the camera field of view and across the full angular and temporal span of the DES observations, approaching the accuracy limits set by shot noise for the full dynamic range of DES observations. The DES pipeline incorporates several substantive advances over standard detrending techniques, including principal-components-based sky and fringe subtraction; correction of the "brighter-fatter" nonlinearity; use of internal consistency in on-sky observations to disentangle the influences of quantum efficiency, pixel-size variations, and scattered light in the dome flats; and pixel-by-pixel characterization of instrument spectral response, through combination of internal-consistency constraints with auxiliary calibration data. This article provides conceptual derivations of the detrending/calibration steps, and the procedures for obtaining the necessary calibration data. Other publications will describe the implementation of these concepts for the DES operational pipeline, the detailed methods, and the validation that the techniques can bring DECam photometry and astrometry within ≈2 mmag and ≈3 mas, respectively, of fundamental atmospheric and statistical limits. In conclusion, the DES techniques should be broadly applicable to wide-field imagers.

  4. Jacobi spectral Galerkin method for elliptic Neumann problems

    NASA Astrophysics Data System (ADS)

    Doha, E.; Bhrawy, A.; Abd-Elhameed, W.

    2009-01-01

    This paper is concerned with fast spectral-Galerkin Jacobi algorithms for solving one- and two-dimensional elliptic equations with homogeneous and nonhomogeneous Neumann boundary conditions. The paper extends the algorithms proposed by Shen (SIAM J Sci Comput 15:1489-1505, 1994) and Auteri et al. (J Comput Phys 185:427-444, 2003), based on Legendre polynomials, to Jacobi polynomials with arbitrary α and β. The key to the efficiency of our algorithms is to construct appropriate basis functions with zero slope at the endpoints, which lead to systems with sparse matrices for the discrete variational formulations. The direct solution algorithm developed for the homogeneous Neumann problem in two dimensions relies upon a tensor product process. Nonhomogeneous Neumann data are accounted for by means of a lifting. Numerical results indicating the high accuracy and effectiveness of these algorithms are presented.

  5. Distortion correction and cross-talk compensation algorithm for use with an imaging spectrometer based spatially resolved diffuse reflectance system

    NASA Astrophysics Data System (ADS)

    Cappon, Derek J.; Farrell, Thomas J.; Fang, Qiyin; Hayward, Joseph E.

    2016-12-01

    Optical spectroscopy of human tissue has been widely applied within the field of biomedical optics to allow rapid, in vivo characterization and analysis of the tissue. When designing an instrument of this type, an imaging spectrometer is often employed to allow for simultaneous analysis of distinct signals. This is especially important when performing spatially resolved diffuse reflectance spectroscopy. In this article, an algorithm is presented that allows for the automated processing of 2-dimensional images acquired from an imaging spectrometer. The algorithm automatically defines distinct spectrometer tracks and adaptively compensates for distortion introduced by optical components in the imaging chain. Crosstalk resulting from the overlap of adjacent spectrometer tracks in the image is detected and subtracted from each signal. The algorithm's performance is demonstrated in the processing of spatially resolved diffuse reflectance spectra recovered from an Intralipid and ink liquid phantom and is shown to increase the range of wavelengths over which usable data can be recovered.

  6. Optimizing interconnections to maximize the spectral radius of interdependent networks

    NASA Astrophysics Data System (ADS)

    Chen, Huashan; Zhao, Xiuyan; Liu, Feng; Xu, Shouhuai; Lu, Wenlian

    2017-03-01

    The spectral radius (i.e., the largest eigenvalue) of the adjacency matrices of complex networks is an important quantity that governs the behavior of many dynamic processes on the networks, such as synchronization and epidemics. Studies in the literature focused on bounding this quantity. In this paper, we investigate how to maximize the spectral radius of interdependent networks by optimally linking k internetwork connections (or interconnections for short). We derive formulas for the estimation of the spectral radius of interdependent networks and employ these results to develop a suite of algorithms that are applicable to different parameter regimes. In particular, a simple algorithm is to link the k nodes with the largest k eigenvector centralities in one network to the node in the other network with a certain property related to both networks. We demonstrate the applicability of our algorithms via extensive simulations. We discuss the physical implications of the results, including how the optimal interconnections can more effectively decrease the threshold of epidemic spreading in the susceptible-infected-susceptible model and the threshold of synchronization of coupled Kuramoto oscillators.
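
    The simple heuristic mentioned above can be made concrete with a short sketch: compute eigenvector centralities by power iteration on the adjacency matrix and take the k nodes with the largest values as the interconnection endpoints. The graph below is random and purely illustrative.

      import numpy as np

      def eigenvector_centrality(A, iters=500, tol=1e-12):
          # Power iteration converges to the eigenvector of the largest
          # eigenvalue (the spectral radius) for a connected network.
          v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
          for _ in range(iters):
              w = A @ v
              w /= np.linalg.norm(w)
              if np.linalg.norm(w - v) < tol:
                  break
              v = w
          return v

      rng = np.random.default_rng(3)
      A = (rng.random((30, 30)) < 0.15).astype(float)
      A = np.triu(A, 1)
      A = A + A.T                                   # undirected adjacency
      c = eigenvector_centrality(A)
      k = 3
      print("candidate nodes to interconnect:", np.argsort(c)[-k:][::-1])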

  7. An analysis of spectral envelope-reduction via quadratic assignment problems

    NASA Technical Reports Server (NTRS)

    George, Alan; Pothen, Alex

    1994-01-01

    A new spectral algorithm for reordering a sparse symmetric matrix to reduce its envelope size is described. The ordering is computed by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. In this paper, we provide an analysis of the spectral envelope reduction algorithm. We describe the related 1- and 2-sum problems; the former is related to the envelope size, while the latter is related to an upper bound on the work involved in an envelope Cholesky factorization scheme. We formulate these two problems as quadratic assignment problems and then study the 2-sum problem in more detail. We obtain lower bounds on the 2-sum by considering a projected quadratic assignment problem, and then show that finding a permutation matrix closest to an orthogonal matrix attaining one of the lower bounds justifies the spectral envelope reduction algorithm. The lower bound on the 2-sum is seen to be tight for reasonably 'uniform' finite element meshes. We also obtain asymptotically tight lower bounds for the envelope size for certain classes of meshes.
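
    The core of the spectral reordering step can be written in a few lines: form the Laplacian of the matrix's adjacency structure and sort vertices by the entries of the Fiedler vector. A dense-matrix Python sketch (illustrative; production code would use sparse eigensolvers):

      import numpy as np

      def spectral_ordering(A):
          # Laplacian of the sparsity graph of the symmetric matrix A.
          adj = (A != 0).astype(float)
          np.fill_diagonal(adj, 0.0)
          L = np.diag(adj.sum(axis=1)) - adj
          vals, vecs = np.linalg.eigh(L)
          return np.argsort(vecs[:, 1])   # sort by the Fiedler vector

      # Scrambled tridiagonal matrix: the ordering should recover a band.
      n = 20
      B = np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
      P = np.eye(n)[np.random.default_rng(4).permutation(n)]
      A = P @ B @ P.T
      print("envelope-reducing permutation:", spectral_ordering(A))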

  8. Multitaper scan-free spectrum estimation using a rotational shear interferometer.

    PubMed

    Lepage, Kyle; Thomson, David J; Kraut, Shawn; Brady, David J

    2006-05-01

    Multitaper methods for scan-free spectrum estimation using a rotational shear interferometer are investigated. Before source spectra can be estimated, the sources must be detected. A source detection algorithm based upon the multitaper F-test is proposed. The algorithm is simulated with additive white Gaussian detector noise. A source with a signal-to-noise ratio (SNR) of 0.71 is detected 2.9 degrees from a source with an SNR of 70.1, at a significance level of 10⁻⁴, approximately 4 orders of magnitude more significant than the source detection obtained with a standard detection algorithm. Interpolation and the use of prewhitening filters are investigated in the context of rotational shear interferometer (RSI) source spectrum estimation. Finally, a multitaper spectrum estimator is proposed, simulated, and compared with untapered estimates. The multitaper estimate is found via simulation to distinguish a spectral feature with an SNR of 1.6 near a large spectral feature; this feature is not distinguished by the untapered spectrum estimate. The findings are consistent with the strong capability of the multitaper estimate to reduce out-of-band spectral leakage.

  9. Multitaper scan-free spectrum estimation using a rotational shear interferometer

    NASA Astrophysics Data System (ADS)

    Lepage, Kyle; Thomson, David J.; Kraut, Shawn; Brady, David J.

    2006-05-01

    Multitaper methods for scan-free spectrum estimation using a rotational shear interferometer are investigated. Before source spectra can be estimated, the sources must be detected. A source detection algorithm based upon the multitaper F-test is proposed. The algorithm is simulated with additive white Gaussian detector noise. A source with a signal-to-noise ratio (SNR) of 0.71 is detected 2.9° from a source with an SNR of 70.1, at a significance level of 10⁻⁴, ~4 orders of magnitude more significant than the source detection obtained with a standard detection algorithm. Interpolation and the use of prewhitening filters are investigated in the context of rotational shear interferometer (RSI) source spectrum estimation. Finally, a multitaper spectrum estimator is proposed, simulated, and compared with untapered estimates. The multitaper estimate is found via simulation to distinguish a spectral feature with an SNR of 1.6 near a large spectral feature; this feature is not distinguished by the untapered spectrum estimate. The findings are consistent with the strong capability of the multitaper estimate to reduce out-of-band spectral leakage.
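
    A minimal multitaper estimate along these lines can be built from Slepian (DPSS) tapers; the sketch below uses SciPy's dpss window function and synthetic data with a weak line 8 Hz from a strong one, roughly mimicking the leakage scenario described.

      import numpy as np
      from scipy.signal.windows import dpss

      def multitaper_psd(x, NW=4.0, K=7, fs=1.0):
          # Average periodograms over K orthogonal Slepian tapers to
          # suppress out-of-band spectral leakage.
          tapers = dpss(len(x), NW, Kmax=K)          # shape (K, len(x))
          spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
          return np.fft.rfftfreq(len(x), 1.0 / fs), spectra.mean(axis=0) / fs

      fs, n = 1000.0, 4096
      t = np.arange(n) / fs
      x = (np.sin(2 * np.pi * 100.0 * t)             # strong line
           + 0.02 * np.sin(2 * np.pi * 108.0 * t)    # weak neighbour
           + 0.01 * np.random.default_rng(5).standard_normal(n))
      freqs, psd = multitaper_psd(x, fs=fs)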

  10. [State Recognition of Solid Fermentation Process Based on Near Infrared Spectroscopy with Adaboost and Spectral Regression Discriminant Analysis].

    PubMed

    Yu, Shuang; Liu, Guo-hai; Xia, Rong-sheng; Jiang, Hui

    2016-01-01

    In order to achieve rapid monitoring of the process state of solid state fermentation (SSF), this study attempted qualitative identification of the process state of the SSF of feed protein using the Fourier transform near-infrared (FT-NIR) spectroscopy analysis technique. More specifically, FT-NIR spectroscopy combined with the Adaboost-SRDA-NN integrated learning algorithm was used as an analysis tool to accurately and rapidly monitor chemical and physical changes in the SSF of feed protein without the need for chemical analysis. Firstly, the raw spectra of all 140 fermentation samples were collected with a Fourier transform near-infrared spectrometer (Antaris II), and the raw spectra were preprocessed with the standard normal variate transformation (SNV) spectral preprocessing algorithm. Thereafter, the characteristic information of the preprocessed spectra was extracted by spectral regression discriminant analysis (SRDA). Finally, the nearest neighbors (NN) algorithm was selected as the basic classifier, and a state recognition model was built to identify the different fermentation samples in the validation set. Experimental results showed that the SRDA-NN model revealed superior performance compared with two other NN models, developed using feature information from principal component analysis (PCA) and linear discriminant analysis (LDA), achieving a correct recognition rate of 94.28% in the validation set. To further improve the recognition accuracy of the final model, the Adaboost-SRDA-NN ensemble learning algorithm was proposed by integrating the Adaboost and SRDA-NN methods, and the presented algorithm was used to construct an online monitoring model of the process state of the SSF of feed protein. Experimental results showed that the prediction performance of the SRDA-NN model was further enhanced by the Adaboost lifting algorithm, with the correct recognition rate of the Adaboost-SRDA-NN model reaching 100% in the validation set. The overall results demonstrate that the SRDA algorithm can effectively extract spectral feature information and reduce the spectral dimension in the model calibration process of qualitative NIR spectroscopic analysis. In addition, the Adaboost lifting algorithm can improve the classification accuracy of the final model. The results obtained in this work provide a research foundation for developing online monitoring instruments for SSF processes.

  11. Automated detection of Martian water ice clouds: the Valles Marineris

    NASA Astrophysics Data System (ADS)

    Ogohara, Kazunori; Munetomo, Takafumi; Hatanaka, Yuji; Okumura, Susumu

    2016-10-01

    We need to extract water ice clouds from the large number of Mars images in order to reveal spatial and temporal variations of water ice cloud occurrence and to understand the climatology of water ice clouds meteorologically. However, the visible images observed by Mars orbiters over several years are too numerous to inspect visually, even when the inspection is limited to one region. Therefore, an automated detection algorithm for Martian water ice clouds is necessary for collecting ice cloud images efficiently. In addition, it may reveal new aspects of the spatial and temporal variations of water ice clouds of which we have not previously been aware. We present a method for automatically evaluating the presence of Martian water ice clouds using difference images and cross-correlation distributions calculated from blue band images of the Valles Marineris obtained by the Mars Orbiter Camera onboard the Mars Global Surveyor (MGS/MOC). We derived one subtracted image and one cross-correlation distribution from two reflectance images. The difference between the maximum and the average, the variance, the kurtosis, and the skewness of the subtracted image were calculated, as were those of the cross-correlation distribution. These eight statistics were used as feature vectors for training a Support Vector Machine, and its generalization ability was tested using 10-fold cross-validation. The F-measure and accuracy tended to be approximately 0.8 when the maximum of the normalized reflectance and the difference between the maximum and the average of the cross-correlation were chosen as features. In the process of developing the detection algorithm, we found many cases where the Valles Marineris became clearly brighter than adjacent areas in the blue band. It is at present unclear whether the bright Valles Marineris indicates the occurrence of water ice clouds inside the Valles Marineris. Therefore, subtracted images showing the bright Valles Marineris were excluded from the detection of water ice clouds.

  12. Target tracking and 3D trajectory acquisition of cabbage butterfly (P. rapae) based on the KCF-BS algorithm.

    PubMed

    Guo, Yang-Yang; He, Dong-Jian; Liu, Cong

    2018-06-25

    Insect behaviour is an important research topic in plant protection. To study insect behaviour accurately, it is necessary to observe and record flight trajectories quantitatively and precisely in three dimensions (3D). The goal of this research was to analyse frames extracted from videos using Kernelized Correlation Filters (KCF) and Background Subtraction (BS) (KCF-BS) to plot the 3D trajectory of the cabbage butterfly (P. rapae). Considering the experimental environment with a wind tunnel, a quadrature binocular vision insect video capture system was designed and applied in this study. The KCF-BS algorithm was used to track the butterfly in video frames and obtain the coordinates of the target centroid in the two videos. Finally, the 3D trajectory was calculated according to the matching relationship in the corresponding frames of the two camera angles. To verify the validity of the KCF-BS algorithm, Compressive Tracking (CT) and Spatio-Temporal Context Learning (STC) algorithms were run for comparison. The results revealed that the KCF-BS tracking algorithm performed more favourably than CT and STC in terms of accuracy and robustness.
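
    The background-subtraction stage of such a pipeline is straightforward to prototype; the sketch below uses OpenCV's MOG2 model to segment the moving target and report the centroid of the largest foreground blob. It is an illustration of the BS component only, not the authors' KCF-BS implementation, and the video file name is hypothetical.

      import cv2
      import numpy as np

      cap = cv2.VideoCapture("butterfly.avi")        # hypothetical input file
      subtractor = cv2.createBackgroundSubtractorMOG2(history=200,
                                                      varThreshold=16)
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          mask = subtractor.apply(frame)             # foreground mask
          mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                  np.ones((3, 3), np.uint8))
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          if contours:
              m = cv2.moments(max(contours, key=cv2.contourArea))
              if m["m00"] > 0:                       # centroid of largest blob
                  print(m["m10"] / m["m00"], m["m01"] / m["m00"])
      cap.release()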

  13. Resolution Study of a Hyperspectral Sensor using Computed Tomography in the Presence of Noise

    DTIC Science & Technology

    2012-06-14

    diffraction efficiency is dependent on wavelength. Compared to techniques developed by later work, simple algebraic reconstruction techniques were used ... spectral dimension, using computed tomography (CT) techniques with only a finite number of diverse images. CTHIS require a reconstruction algorithm in ... many frames are needed to reconstruct the spectral cube of a simple object using a theoretical lower bound. In this research a new algorithm is derived

  14. Two-dimensional imaging of gas temperature and concentration based on hyperspectral tomography

    NASA Astrophysics Data System (ADS)

    Xin, Ming-yuan; Jin, Xing; Wang, Guang-yu; Song, Junling

    2016-10-01

    Two-dimensional imaging of gas temperature and concentration is realized by hyperspectral tomography, which exploits multi-wavelength absorption spectral information so that imaging can be accomplished with a small number of projections and viewing angles. A temperature and concentration model is established to simulate combustion conditions, and a total of 10 near-infrared absorption features of H2O are used. An improved simulated annealing algorithm, which adjusts the search step, serves as the main search algorithm for the tomography. By adding random errors to the absorption area information, the stability of the algorithm is tested, and the results are compared with reconstructions provided by the algebraic reconstruction technique, which uses only 2 spectral features for imaging. The results show that the two methods perform equivalently in low-noise environments, but at high noise levels hyperspectral tomography turns out to be more stable.

  15. Model-based spectral estimation of Doppler signals using parallel genetic algorithms.

    PubMed

    Solano González, J; Rodríguez Vázquez, K; García Nocetti, D F

    2000-05-01

    Conventional spectral analysis methods use a fast Fourier transform (FFT) on consecutive or overlapping windowed data segments. For Doppler ultrasound signals, this approach suffers from inadequate frequency resolution due to the short segment duration and the non-stationary characteristics of the signals. Parametric or model-based estimators can give significant improvements in time-frequency resolution at the expense of higher computational complexity. This work describes an approach that implements, in real time, a parametric spectral estimation method using genetic algorithms (GAs) to find the optimum set of parameters for the adaptive filter that minimises the error function. The aim is to reduce the computational complexity of the conventional algorithm by exploiting the simplicity and parallel characteristics of GAs. This allows the implementation of higher-order filters, increasing the spectral resolution, and opens a greater scope for using more complex methods.
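
    A toy version of the idea, with all modelling choices (AR filter order, GA operators, population size) invented for illustration: a small genetic algorithm searches for autoregressive coefficients that minimise the one-step prediction error, which is the quantity a parametric Doppler spectral estimator needs to optimise. The per-generation fitness evaluations are independent, which is what makes the approach naturally parallel.

      import numpy as np

      def prediction_error(coeffs, x):
          # Mean squared one-step prediction error of an AR model.
          p = len(coeffs)
          pred = np.convolve(x, coeffs)[p - 1:len(x) - 1]
          return float(np.mean((x[p:] - pred) ** 2))

      def ga_fit(x, p=4, pop=60, gens=80, sigma=0.1, seed=6):
          rng = np.random.default_rng(seed)
          population = rng.normal(0.0, 0.5, size=(pop, p))
          for _ in range(gens):
              fitness = np.array([prediction_error(c, x) for c in population])
              parents = population[np.argsort(fitness)[:pop // 2]]   # selection
              pairs = rng.integers(0, len(parents),
                                   size=(pop - len(parents), 2))
              children = (parents[pairs].mean(axis=1)                # crossover
                          + rng.normal(0.0, sigma,                   # mutation
                                       (pop - len(parents), p)))
              population = np.vstack([parents, children])
          errs = [prediction_error(c, x) for c in population]
          return population[int(np.argmin(errs))]

      rng = np.random.default_rng(7)
      x = np.sin(2 * np.pi * 0.11 * np.arange(2000))
      x += 0.1 * rng.standard_normal(x.size)          # noisy narrowband signal
      print("fitted AR coefficients:", ga_fit(x))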

  16. Automated computation of autonomous spectral submanifolds for nonlinear modal analysis

    NASA Astrophysics Data System (ADS)

    Ponsioen, Sten; Pedergnana, Tiemo; Haller, George

    2018-04-01

    We discuss an automated computational methodology for computing two-dimensional spectral submanifolds (SSMs) in autonomous nonlinear mechanical systems of arbitrary degrees of freedom. In our algorithm, SSMs, the smoothest nonlinear continuations of modal subspaces of the linearized system, are constructed up to arbitrary orders of accuracy, using the parameterization method. An advantage of this approach is that the construction of the SSMs does not break down when the SSM folds over its underlying spectral subspace. A further advantage is an automated a posteriori error estimation feature that enables a systematic increase in the orders of the SSM computation until the required accuracy is reached. We find that the present algorithm provides a major speed-up, relative to numerical continuation methods, in the computation of backbone curves, especially in higher-dimensional problems. We illustrate the accuracy and speed of the automated SSM algorithm on lower- and higher-dimensional mechanical systems.

  17. The problem of scattering in fibre-fed VPH spectrographs and possible solutions

    NASA Astrophysics Data System (ADS)

    Ellis, S. C.; Saunders, Will; Betters, Chris; Croom, Scott

    2014-07-01

    All spectrographs unavoidably scatter light. Scattering in the spectral direction is problematic for sky subtraction, since atmospheric spectral lines are blurred. Scattering in the spatial direction is problematic for fibre fed spectrographs, since it limits how closely fibres can be packed together. We investigate the nature of this scattering and show that the scattering wings have both a Lorentzian component, and a shallower (1/r) component. We investigate the causes of this from a theoretical perspective, and argue that for the spectral PSF the Lorentzian wings are in part due to the profile of the illumination of the pupil of the spectrograph onto the diffraction grating, whereas the shallower component is from bulk scattering. We then investigate ways to mitigate the diffractive scattering by apodising the pupil. In the ideal case of a Gaussian apodised pupil, the scattering can be significantly improved. Finally we look at realistic models of the spectrograph pupils of fibre fed spectrographs with a centrally obstructed telescope, and show that it is possible to apodise the pupil through non-telecentric injection into the fibre.
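
    The effect of apodisation on the diffraction wings is easy to reproduce numerically. The sketch below compares the far-field point-spread function of a hard-edged circular pupil with a Gaussian-apodised one; all dimensions are arbitrary.

      import numpy as np

      n = 1024
      x = np.linspace(-1.0, 1.0, n)
      X, Y = np.meshgrid(x, x)
      r = np.hypot(X, Y)
      hard = (r <= 0.25).astype(float)               # top-hat pupil
      apod = hard * np.exp(-(r / 0.12) ** 2)         # Gaussian apodisation

      def psf_cut(pupil):
          # Far-field intensity (normalised) along a central cut.
          p = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
          return p[n // 2] / p.max()

      cut_hard, cut_apod = psf_cut(hard), psf_cut(apod)
      # A few resolution elements from the core the apodised profile sits
      # orders of magnitude below the hard-edged one, at the cost of a
      # slightly broader core.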

  18. Nonlinear ultrasonic wave modulation for online fatigue crack detection

    NASA Astrophysics Data System (ADS)

    Sohn, Hoon; Lim, Hyung Jin; DeSimio, Martin P.; Brown, Kevin; Derriso, Mark

    2014-02-01

    This study presents a fatigue crack detection technique using nonlinear ultrasonic wave modulation. Ultrasonic waves at two distinctive driving frequencies are generated and corresponding ultrasonic responses are measured using permanently installed lead zirconate titanate (PZT) transducers with a potential for continuous monitoring. Here, the input signal at the lower driving frequency is often referred to as a 'pumping' signal, and the higher frequency input is referred to as a 'probing' signal. The presence of a system nonlinearity, such as a crack formation, can provide a mechanism for nonlinear wave modulation, and create spectral sidebands around the frequency of the probing signal. A signal processing technique combining linear response subtraction (LRS) and synchronous demodulation (SD) is developed specifically to extract the crack-induced spectral sidebands. The proposed crack detection method is successfully applied to identify actual fatigue cracks grown in metallic plate and complex fitting-lug specimens. Finally, the effect of pumping and probing frequencies on the amplitude of the first spectral sideband is investigated using the first sideband spectrogram (FSS) obtained by sweeping both pumping and probing signals over specified frequency ranges.
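
    The sideband-extraction step can be sketched as a software lock-in: after an optional linear response subtraction, the measured signal is mixed with a complex reference at the expected sideband frequency (f_probe + f_pump) and low-pass filtered, leaving the slowly varying sideband amplitude. All frequencies and amplitudes below are made up.

      import numpy as np
      from scipy.signal import butter, sosfiltfilt

      fs = 1.0e6
      t = np.arange(0, 0.05, 1.0 / fs)
      f_pump, f_probe = 1.0e3, 50.0e3
      x = (np.sin(2 * np.pi * f_probe * t)                # probing response
           + 0.5 * np.sin(2 * np.pi * f_pump * t)         # pumping response
           + 0.01 * np.sin(2 * np.pi * (f_probe + f_pump) * t))  # sideband

      ref = np.exp(-2j * np.pi * (f_probe + f_pump) * t)  # complex reference
      sos = butter(4, 200.0 / (fs / 2), output='sos')     # 200 Hz low-pass
      demod = x * ref
      low = sosfiltfilt(sos, demod.real) + 1j * sosfiltfilt(sos, demod.imag)
      sideband = 2.0 * np.abs(low)                        # demodulated amplitude
      print("estimated sideband amplitude:", sideband[sideband.size // 2])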

  19. Simulating the WFIRST coronagraph integral field spectrograph

    NASA Astrophysics Data System (ADS)

    Rizzo, Maxime J.; Groff, Tyler D.; Zimmermann, Neil T.; Gong, Qian; Mandell, Avi M.; Saxena, Prabal; McElwain, Michael W.; Roberge, Aki; Krist, John; Riggs, A. J. Eldorado; Cady, Eric J.; Mejia Prada, Camilo; Brandt, Timothy; Douglas, Ewan; Cahoy, Kerri

    2017-09-01

    A primary goal of direct imaging techniques is to spectrally characterize the atmospheres of planets around other stars at extremely high contrast levels. To achieve this goal, coronagraphic instruments have favored integral field spectrographs (IFS) as the science cameras to disperse the entire search area at once and obtain spectra at each location, since the planet position is not known a priori. These spectrographs are useful against confusion from speckles and background objects, and can also help in the speckle subtraction and wavefront control stages of the coronagraphic observation. We present a software package, the Coronagraph and Rapid Imaging Spectrograph in Python (crispy) to simulate the IFS of the WFIRST Coronagraph Instrument (CGI). The software propagates input science cubes using spatially and spectrally resolved coronagraphic focal plane cubes, transforms them into IFS detector maps and ultimately reconstructs the spatio-spectral input scene as a 3D datacube. Simulated IFS cubes can be used to test data extraction techniques, refine sensitivity analyses and carry out design trade studies of the flight CGI-IFS instrument. crispy is a publicly available Python package and can be adapted to other IFS designs.

  20. Contrast-enhanced spectral mammography with a photon-counting detector.

    PubMed

    Fredenberg, Erik; Hemmendorff, Magnus; Cederström, Björn; Aslund, Magnus; Danielsson, Mats

    2010-05-01

    Spectral imaging is a method in medical x-ray imaging to extract information about the object constituents by the material-specific energy dependence of x-ray attenuation. The authors have investigated a photon-counting spectral imaging system with two energy bins for contrast-enhanced mammography. System optimization and the potential benefit compared to conventional non-energy-resolved absorption imaging was studied. A framework for system characterization was set up that included quantum and anatomical noise and a theoretical model of the system was benchmarked to phantom measurements. Optimal combination of the energy-resolved images corresponded approximately to minimization of the anatomical noise, which is commonly referred to as energy subtraction. In that case, an ideal-observer detectability index could be improved close to 50% compared to absorption imaging in the phantom study. Optimization with respect to the signal-to-quantum-noise ratio, commonly referred to as energy weighting, yielded only a minute improvement. In a simulation of a clinically more realistic case, spectral imaging was predicted to perform approximately 30% better than absorption imaging for an average glandularity breast with an average level of anatomical noise. For dense breast tissue and a high level of anatomical noise, however, a rise in detectability by a factor of 6 was predicted. Another approximately 70%-90% improvement was found to be within reach for an optimized system. Contrast-enhanced spectral mammography is feasible and beneficial with the current system, and there is room for additional improvements. Inclusion of anatomical noise is essential for optimizing spectral imaging systems.

  1. Overlapping communities detection based on spectral analysis of line graphs

    NASA Astrophysics Data System (ADS)

    Gui, Chun; Zhang, Ruisheng; Hu, Rongjing; Huang, Guoming; Wei, Jiaxuan

    2018-05-01

    Community in networks are often overlapping where one vertex belongs to several clusters. Meanwhile, many networks show hierarchical structure such that community is recursively grouped into hierarchical organization. In order to obtain overlapping communities from a global hierarchy of vertices, a new algorithm (named SAoLG) is proposed to build the hierarchical organization along with detecting the overlap of community structure. SAoLG applies the spectral analysis into line graphs to unify the overlap and hierarchical structure of the communities. In order to avoid the limitation of absolute distance such as Euclidean distance, SAoLG employs Angular distance to compute the similarity between vertices. Furthermore, we make a micro-improvement partition density to evaluate the quality of community structure and use it to obtain the more reasonable and sensible community numbers. The proposed SAoLG algorithm achieves a balance between overlap and hierarchy by applying spectral analysis to edge community detection. The experimental results on one standard network and six real-world networks show that the SAoLG algorithm achieves higher modularity and reasonable community number values than those generated by Ahn's algorithm, the classical CPM and GN ones.

  2. A differential optical absorption spectroscopy method for retrieval from ground-based Fourier transform spectrometers measurements of the direct solar beam

    NASA Astrophysics Data System (ADS)

    Huo, Yanfeng; Duan, Minzheng; Tian, Wenshou; Min, Qilong

    2015-08-01

    A differential optical absorption spectroscopy (DOAS)-like algorithm is developed to retrieve the column-averaged dryair mole fraction of carbon dioxide from ground-based hyper-spectral measurements of the direct solar beam. Different to the spectral fitting method, which minimizes the difference between the observed and simulated spectra, the ratios of multiple channel-pairs—one weak and one strong absorption channel—are used to retrieve from measurements of the shortwave infrared (SWIR) band. Based on sensitivity tests, a super channel-pair is carefully selected to reduce the effects of solar lines, water vapor, air temperature, pressure, instrument noise, and frequency shift on retrieval errors. The new algorithm reduces computational cost and the retrievals are less sensitive to temperature and H2O uncertainty than the spectral fitting method. Multi-day Total Carbon Column Observing Network (TCCON) measurements under clear-sky conditions at two sites (Tsukuba and Bremen) are used to derive xxxx for the algorithm evaluation and validation. The DOAS-like results agree very well with those of the TCCON algorithm after correction of an airmass-dependent bias.

  3. Exploratory Item Classification Via Spectral Graph Clustering

    PubMed Central

    Chen, Yunxiao; Li, Xiaoou; Liu, Jingchen; Xu, Gongjun; Ying, Zhiliang

    2017-01-01

    Large-scale assessments are supported by a large item pool. An important task in test development is to assign items into scales that measure different characteristics of individuals, and a popular approach is cluster analysis of items. Classical methods in cluster analysis, such as the hierarchical clustering, K-means method, and latent-class analysis, often induce a high computational overhead and have difficulty handling missing data, especially in the presence of high-dimensional responses. In this article, the authors propose a spectral clustering algorithm for exploratory item cluster analysis. The method is computationally efficient, effective for data with missing or incomplete responses, easy to implement, and often outperforms traditional clustering algorithms in the context of high dimensionality. The spectral clustering algorithm is based on graph theory, a branch of mathematics that studies the properties of graphs. The algorithm first constructs a graph of items, characterizing the similarity structure among items. It then extracts item clusters based on the graphical structure, grouping similar items together. The proposed method is evaluated through simulations and an application to the revised Eysenck Personality Questionnaire. PMID:29033476
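
    A compact sketch of the pipeline described, using NumPy only: build an item-similarity graph, embed items with the bottom eigenvectors of the normalized Laplacian, and group them with a plain k-means. The similarity choice (absolute correlation of simulated responses) is an illustrative assumption, not the article's exact construction.

      import numpy as np

      def spectral_clusters(S, k, iters=100, seed=8):
          d = S.sum(axis=1)
          L = np.eye(len(S)) - S / np.sqrt(np.outer(d, d))  # normalized Laplacian
          _, vecs = np.linalg.eigh(L)
          X = vecs[:, :k]                                   # spectral embedding
          X = X / np.linalg.norm(X, axis=1, keepdims=True)
          rng = np.random.default_rng(seed)
          centers = X[rng.choice(len(X), k, replace=False)]
          for _ in range(iters):                            # plain k-means
              labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
              for j in range(k):
                  if np.any(labels == j):
                      centers[j] = X[labels == j].mean(axis=0)
          return labels

      rng = np.random.default_rng(9)
      responses = rng.normal(size=(500, 20))                # persons x items
      responses[:, 10:] += rng.normal(size=(500, 1))        # correlated item block
      S = np.abs(np.corrcoef(responses.T))
      np.fill_diagonal(S, 0.0)
      print("item clusters:", spectral_clusters(S, k=2))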

  4. A Polarimetric Approach for Constraining the Dynamic Foreground Spectrum for Cosmological Global 21 cm Measurements

    NASA Astrophysics Data System (ADS)

    Nhan, Bang D.; Bradley, Richard F.; Burns, Jack O.

    2017-02-01

    The cosmological global (sky-averaged) 21 cm signal is a powerful tool to probe the evolution of the intergalactic medium in the high-redshift universe (z ≥ 6). One of the biggest observational challenges is to remove the foreground spectrum, which is at least four orders of magnitude brighter than the cosmological 21 cm emission. Conventional global 21 cm experiments rely on the spectral smoothness of the foreground synchrotron emission to separate it from the unique 21 cm spectral structures in a single total-power spectrum. However, frequency-dependent instrumental and observational effects are known to corrupt such smoothness and complicate the foreground subtraction. We introduce a polarimetric approach to measure the projection-induced polarization of the anisotropic foreground onto a stationary dual-polarized antenna. Due to Earth rotation, when pointing the antenna at a celestial pole, the revolving foreground will modulate this polarization with a unique frequency-dependent sinusoidal signature as a function of time. In our simulations, by harmonically decomposing this dynamic polarization, our technique produces two separate spectra in parallel from the same observation: (I) a total sky power consisting of both the foreground and the 21 cm background, and (II) a model-independent measurement of the foreground spectrum at a harmonic consistent with twice the sky rotation rate. In the absence of any instrumental effects, by scaling and subtracting the latter from the former, we recover the injected global 21 cm model within the assumed uncertainty. We further discuss several limiting factors and potential remedies for future implementation.

  5. Fast algorithm for bilinear transforms in optics

    NASA Astrophysics Data System (ADS)

    Ostrovsky, Andrey S.; Martinez-Niconoff, Gabriel C.; Ramos Romero, Obdulio; Cortes, Liliana

    2000-10-01

    The fast algorithm for calculating the bilinear transform in the optical system is proposed. This algorithm is based on the coherent-mode representation of the cross-spectral density function of the illumination. The algorithm is computationally efficient when the illumination is partially coherent. Numerical examples are studied and compared with the theoretical results.

  6. Imaging spectroscopy: Earth and planetary remote sensing with the USGS Tetracorder and expert systems

    USGS Publications Warehouse

    Clark, Roger N.; Swayze, Gregg A.; Livo, K. Eric; Kokaly, Raymond F.; Sutley, Steve J.; Dalton, J. Brad; McDougal, Robert R.; Gent, Carol A.

    2003-01-01

    Imaging spectroscopy is a tool that can be used to spectrally identify and spatially map materials based on their specific chemical bonds. Spectroscopic analysis requires significantly more sophistication than has been employed in conventional broadband remote sensing analysis. We describe a new system that is effective at material identification and mapping: a set of algorithms within an expert system decision‐making framework that we call Tetracorder. The expertise in the system has been derived from scientific knowledge of spectral identification. The expert system rules are implemented in a decision tree where multiple algorithms are applied to spectral analysis, additional expert rules and algorithms can be applied based on initial results, and more decisions are made until spectral analysis is complete. Because certain spectral features are indicative of specific chemical bonds in materials, the system can accurately identify and map those materials. In this paper we describe the framework of the decision making process used for spectral identification, describe specific spectral feature analysis algorithms, and give examples of what analyses and types of maps are possible with imaging spectroscopy data. We also present the expert system rules that describe which diagnostic spectral features are used in the decision making process for a set of spectra of minerals and other common materials. We demonstrate the applications of Tetracorder to identify and map surface minerals, to detect sources of acid rock drainage, and to map vegetation species, ice, melting snow, water, and water pollution, all with one set of expert system rules. Mineral mapping can aid in geologic mapping and fault detection and can provide a better understanding of weathering, mineralization, hydrothermal alteration, and other geologic processes. Environmental site assessment, such as mapping source areas of acid mine drainage, has resulted in the acceleration of site cleanup, saving millions of dollars and years in cleanup time. Imaging spectroscopy data and Tetracorder analysis can be used to study both terrestrial and planetary science problems. Imaging spectroscopy can be used to probe planetary systems, including their atmospheres, oceans, and land surfaces.

  7. Automated cellular pathology in noninvasive confocal microscopy

    NASA Astrophysics Data System (ADS)

    Ting, Monica; Krueger, James; Gareau, Daniel

    2014-03-01

    A computer algorithm was developed to automatically identify and count melanocytes and keratinocytes in 3D reflectance confocal microscopy (RCM) images of the skin. Computerized pathology increases our understanding and enables prevention of superficial spreading melanoma (SSM). Machine learning involved analyzing the images to measure the size of cells through a 2-D Fourier transform and developing an appropriate mask with the erf() function to model the cells. Implementation involved processing the images to identify cells whose image segments provided the least difference when subtracted from the mask. With further simplification of the algorithm, the program may be implemented directly on RCM images to indicate the presence of keratinocytes in seconds and to quantify keratinocyte size in the en face plane as a function of depth. Using this system, the algorithm can identify irregularities in the maturation and differentiation of keratinocytes, thereby signaling the possible presence of cancer.

  8. Development of Cloud and Precipitation Property Retrieval Algorithms and Measurement Simulators from ASR Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mace, Gerald G.

    What has made the ASR program unique is the amount of information that is available. The suite of recently deployed instruments significantly expands the scope of the program (Mather and Voyles, 2013). The breadth of this information allows us to pose sophisticated process-level questions. Our ASR project, now entering its third year, has been about developing algorithms that use this information in ways that fully exploit the new capacity of the ARM data streams. Using optimal estimation (OE) and Markov Chain Monte Carlo (MCMC) inversion techniques, we have developed methodologies that allow us to use multiple radar frequency Doppler spectra along with lidar and passive constraints, where data streams can be added or subtracted efficiently and algorithms can be reformulated for various combinations of hydrometeors by exchanging sets of empirical coefficients. These methodologies have been applied to boundary layer clouds, mixed-phase snow cloud systems, and cirrus.

  9. Comparison of maximum intensity projection and digitally reconstructed radiographic projection for carotid artery stenosis measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hyde, Derek E.; Habets, Damiaan F.; Fox, Allan J.

    2007-07-15

    Digital subtraction angiography is being supplanted by three-dimensional imaging techniques in many clinical applications, leading to extensive use of maximum intensity projection (MIP) images to depict volumetric vascular data. The MIP algorithm produces intensity profiles that are different than conventional angiograms, and can also increase the vessel-to-tissue contrast-to-noise ratio. We evaluated the effect of the MIP algorithm in a clinical application where quantitative vessel measurement is important: internal carotid artery stenosis grading. Three-dimensional computed rotational angiography (CRA) was performed on 26 consecutive symptomatic patients to verify an internal carotid artery stenosis originally found using duplex ultrasound. These volumes of data were visualized using two different postprocessing projection techniques: MIP and digitally reconstructed radiographic (DRR) projection. A DRR is a radiographic image simulating a conventional digitally subtracted angiogram, but it is derived computationally from the same CRA dataset as the MIP. By visualizing a single volume with two different projection techniques, the postprocessing effect of the MIP algorithm is isolated. Vessel measurements were made, according to the NASCET guidelines, and percentage stenosis grades were calculated. The paired t-test was used to determine if the measurement difference between the two techniques was statistically significant. The CRA technique provided an isotropic voxel spacing of 0.38 mm. The MIPs and DRRs had a mean signal-difference-to-noise-ratio of 30:1 and 26:1, respectively. Vessel measurements from MIPs were, on average, 0.17 mm larger than those from DRRs (P<0.0001). The NASCET-type stenosis grades tended to be underestimated on average by 2.4% with the MIP algorithm, although this was not statistically significant (P=0.09). The mean interobserver variability (standard deviation) of both the MIP and DRR images was 0.35 mm. It was concluded that the MIP algorithm slightly increased the apparent dimensions of the arteries, when applied to these intra-arterial CRA images. This subpixel increase was smaller than both the voxel size and interobserver variability, and was therefore not clinically relevant.
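
    The essential difference between the two projections is just the reduction applied along the ray direction, which a two-line NumPy sketch makes explicit (a real DRR would additionally weight by attenuation physics; the volume here is a random stand-in):

      import numpy as np

      volume = np.random.default_rng(10).random((128, 128, 128))  # stand-in CT volume
      mip = volume.max(axis=0)    # maximum intensity projection: per-ray maximum
      drr = volume.sum(axis=0)    # DRR-like projection: per-ray line integral

    Because the maximum is an extreme-value statistic, noise biases MIP values upward at vessel edges, consistent with the slight measurement inflation reported above.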

  10. SU-F-J-198: A Cross-Platform Adaptation of An a Priori Scatter Correction Algorithm for Cone-Beam Projections to Enable Image- and Dose-Guided Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersen, A; Casares-Magaz, O; Elstroem, U

    Purpose: Cone-beam CT (CBCT) imaging may enable image- and dose-guided proton therapy, but is challenged by image artefacts. The aim of this study was to demonstrate the general applicability of a previously developed a priori scatter correction algorithm to allow CBCT-based proton dose calculations. Methods: The a priori scatter correction algorithm used a plan CT (pCT) and raw cone-beam projections acquired with the Varian On-Board Imager. The projections were initially corrected for bow-tie filtering and beam hardening and subsequently reconstructed using the Feldkamp-Davis-Kress algorithm (rawCBCT). The rawCBCTs were intensity normalised before rigid and deformable registrations were applied to map the pCTs onto the rawCBCTs. The resulting images were forward projected onto the same angles as the raw CB projections. The two sets of projections were subtracted from each other, Gaussian and median filtered, and then subtracted from the raw projections and finally reconstructed to the scatter-corrected CBCTs. For evaluation, water equivalent path length (WEPL) maps (from anterior to posterior) were calculated on different reconstructions of three data sets (CB projections and pCT) of three parts of an Alderson phantom. Finally, single-beam spot scanning proton plans (0-360 deg gantry angle in steps of 5 deg; using PyTRiP) treating a 5 cm central spherical target in the pCT were re-calculated on scatter-corrected CBCTs with identical targets. Results: The scatter-corrected CBCTs resulted in sub-mm mean WEPL differences relative to the rigid registration of the pCT for all three data sets. These differences were considerably smaller than what was achieved with the regular Varian CBCT reconstruction algorithm (1-9 mm mean WEPL differences). Target coverage in the re-calculated plans was generally improved using the scatter-corrected CBCTs compared to the Varian CBCT reconstruction. Conclusion: We have demonstrated the general applicability of a priori CBCT scatter correction, potentially opening for CBCT-based image/dose-guided proton therapy, including adaptive strategies. Research agreement with Varian Medical Systems, not connected to the present project.
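
    The correction step for a single projection can be sketched as follows, with synthetic arrays standing in for the measured and forward-projected data: the raw-minus-forward difference is median and Gaussian filtered into a smooth scatter estimate, which is then subtracted from the raw projection.

      import numpy as np
      from scipy.ndimage import gaussian_filter, median_filter

      rng = np.random.default_rng(11)
      forward_proj = rng.random((256, 256))           # scatter-free forward projection
      scatter_true = gaussian_filter(rng.random((256, 256)), sigma=40)
      raw_proj = forward_proj + scatter_true          # measured projection

      diff = raw_proj - forward_proj                  # scatter + residuals
      scatter_est = gaussian_filter(median_filter(diff, size=9), sigma=15)
      corrected = raw_proj - scatter_est
      rms = np.sqrt(np.mean((corrected - forward_proj) ** 2))
      print("residual scatter RMS:", rms)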

  11. Chromospheric activity on late-type star DM UMa using high-resolution spectroscopic observations

    NASA Astrophysics Data System (ADS)

    Zhang, LiYun; Pi, QingFeng; Han, Xianming L.; Chang, Liang; Wang, Daimei

    2016-06-01

    We present 14 new high-resolution echelle spectra and discuss the level of chromospheric activity of DM UMa in the He I D3, Na I D1 and D2, Hα, and Ca II infrared triplet (IRT) lines. Emission above the continuum in the He I D3 line was discovered for the first time, on 2015 February 9 and 10. The emission on February 9 is the strongest ever detected for DM UMa. We analysed these chromospheric activity indicators by employing the spectral subtraction technique. The subtracted spectra reveal weak emission in the Na I D1 and D2 lines, strong emission in the Hα line, and clear excess emission in the Ca II IRT lines. Our values for the EW8542/EW8498 ratio are on the low side, in the range 1.0-1.7. There are also clear phase variations in the level of chromospheric activity in the equivalent width (EW) light curves of these chromospherically active lines (especially the Hα line). These phenomena might be explained by flare events or by rotational modulation of the level of chromospheric activity.
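
    The spectral subtraction step reduces to removing an inactive-template profile and integrating what remains. A synthetic sketch (wavelength grid, template, and line window are placeholders, not DM UMa data):

      import numpy as np

      wave = np.linspace(6540.0, 6590.0, 2000)               # Angstroms, around H-alpha
      template = 1.0 - 0.6 * np.exp(-0.5 * ((wave - 6562.8) / 0.8) ** 2)
      active = template + 0.25 * np.exp(-0.5 * ((wave - 6562.8) / 1.2) ** 2)

      residual = active - template                           # chromospheric excess
      in_line = np.abs(wave - 6562.8) < 5.0
      dlam = wave[1] - wave[0]
      ew = residual[in_line].sum() * dlam                    # rectangle-rule EW of excess
      print("excess equivalent width: %.3f Angstrom" % ew)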

  12. Comparative study of novel versus conventional two-wavelength spectrophotometric methods for analysis of spectrally overlapping binary mixture.

    PubMed

    Lotfy, Hayam M; Hegazy, Maha A; Rezk, Mamdouh R; Omran, Yasmin Rostom

    2015-09-05

    Smart spectrophotometric methods have been applied and validated for the simultaneous determination of a binary mixture of chloramphenicol (CPL) and prednisolone acetate (PA) without preliminary separation. Two novel methods have been developed; the first method depends upon advanced absorbance subtraction (AAS), while the other method relies on advanced amplitude modulation (AAM); in addition to the well established dual wavelength (DW), ratio difference (RD) and constant center coupled with spectrum subtraction (CC-SS) methods. Accuracy, precision and linearity ranges of these methods were determined. Moreover, selectivity was assessed by analyzing synthetic mixtures of both drugs. The proposed methods were successfully applied to the assay of drugs in their pharmaceutical formulations. No interference was observed from common additives and the validity of the methods was tested. The obtained results have been statistically compared to that of official spectrophotometric methods to give a conclusion that there is no significant difference between the proposed methods and the official ones with respect to accuracy and precision. Copyright © 2015 Elsevier B.V. All rights reserved.
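
    The dual-wavelength principle is simple enough to sketch: choose two wavelengths at which the interfering component absorbs equally, so that the absorbance difference of the mixture depends only on the analyte. The Gaussian spectra below are synthetic, not CPL/PA data.

      import numpy as np

      wave = np.linspace(220.0, 320.0, 1001)
      analyte = np.exp(-0.5 * ((wave - 260.0) / 12.0) ** 2)      # unit-concentration spectra
      interferent = np.exp(-0.5 * ((wave - 280.0) / 20.0) ** 2)

      i1 = 300                                   # 250 nm, left of interferent peak
      right = np.arange(601, wave.size)          # search right of the peak
      i2 = right[np.argmin(np.abs(interferent[right] - interferent[i1]))]

      mixture = 0.7 * analyte + 1.3 * interferent
      delta_mix = mixture[i1] - mixture[i2]      # interferent contribution cancels
      delta_analyte = analyte[i1] - analyte[i2]
      print("estimated analyte concentration:", delta_mix / delta_analyte)

    The printed estimate recovers the injected analyte concentration of 0.7.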

  13. Evaluating the portability of satellite derived chlorophyll-a algorithms for temperate inland lakes using airborne hyperspectral imagery and dense surface observations.

    PubMed

    Johansen, Richard; Beck, Richard; Nowosad, Jakub; Nietch, Christopher; Xu, Min; Shu, Song; Yang, Bo; Liu, Hongxing; Emery, Erich; Reif, Molly; Harwood, Joseph; Young, Jade; Macke, Dana; Martin, Mark; Stillings, Garrett; Stumpf, Richard; Su, Haibin

    2018-06-01

    This study evaluated the performances of twenty-nine algorithms that use satellite-based spectral imager data to derive estimates of chlorophyll-a concentrations that, in turn, can be used as an indicator of the general status of algal cell densities and the potential for a harmful algal bloom (HAB). The performance assessment was based on making relative comparisons between two temperate inland lakes: Harsha Lake (7.99 km²) in Southwest Ohio and Taylorsville Lake (11.88 km²) in central Kentucky. Of interest was identifying algorithm-imager combinations that had high correlation with coincident chlorophyll-a surface observations for both lakes, as this suggests portability for regional HAB monitoring. The spectral data utilized to estimate surface water chlorophyll-a concentrations were derived from the airborne Compact Airborne Spectral Imager (CASI) 1500 hyperspectral imager, which was then used to derive synthetic versions of currently operational satellite-based imagers using spatial resampling and spectral binning. The synthetic data mimic the configurations of spectral imagers on current satellites in Earth's orbit, including WorldView-2/3, Sentinel-2, Landsat-8, Moderate-resolution Imaging Spectroradiometer (MODIS), and Medium Resolution Imaging Spectrometer (MERIS). High correlations were found between the direct measurements and the imagery-estimated chlorophyll-a concentrations at both lakes. Eleven of the twenty-nine algorithms were considered portable, with r² values greater than 0.5 for both lakes. Even though the two lakes differ in background water quality, size, and shape, with Taylorsville being generally less impaired, larger, but much narrower throughout, the results support the portability of a suite of certain algorithms across multiple sensors to detect potential algal blooms through the use of chlorophyll-a as a proxy. Furthermore, the strong performance of the Sentinel-2 algorithms is exceptionally promising, due to the recent launch of the second satellite in the constellation, which will provide higher temporal resolution for temperate inland water bodies. Additionally, scripts were written for the open-source statistical software R that automate much of the spectral data processing. This allows the simultaneous consideration of numerous algorithms across multiple imagers over an expedited time frame for the near real-time monitoring required to detect algal blooms and mitigate their adverse impacts. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Reliable and Efficient Parallel Processing Algorithms and Architectures for Modern Signal Processing. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Liu, Kuojuey Ray

    1990-01-01

    Least-squares (LS) estimations and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms onto parallel processing architectures such as systolic arrays with efficient fault-tolerant schemes are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to the recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.

  15. Cloud and aerosol optical depths

    NASA Technical Reports Server (NTRS)

    Pueschel, R. F.; Russell, P. B.; Ackerman, Thomas P.; Colburn, D. C.; Wrigley, R. C.; Spanner, M. A.; Livingston, J. M.

    1988-01-01

    An airborne Sun photometer was used to measure optical depths in clear atmospheres between the appearances of broken stratus clouds, and the optical depths in the vicinity of smokes. Results show that (human) activities can alter the chemical and optical properties of background atmospheres and thereby affect their spectral optical depths. Effects of water vapor adsorption on aerosol optical depths are apparent, based on data from the water vapor absorption band centered around 940 nm. Smoke optical depths show increases above the background atmosphere by up to two orders of magnitude. When the total optical depths measured through clouds were corrected for molecular scattering and gaseous absorption by subtracting the total optical depths measured through the background atmosphere, the resultant values were lower than those of the background aerosol at short wavelengths. The spectral dependence of these cloud optical depths is neutral, however, in contrast to that of the background aerosol or the molecular atmosphere.

  16. Reference-free fatigue crack detection using nonlinear ultrasonic modulation under various temperature and loading conditions

    NASA Astrophysics Data System (ADS)

    Lim, Hyung Jin; Sohn, Hoon; DeSimio, Martin P.; Brown, Kevin

    2014-04-01

    This study presents a reference-free fatigue crack detection technique using nonlinear ultrasonic modulation. When low-frequency (LF) and high-frequency (HF) inputs generated by two surface-mounted lead zirconate titanate (PZT) transducers are applied to a structure, the presence of a fatigue crack can provide a mechanism for nonlinear ultrasonic modulation and create spectral sidebands around the frequency of the HF signal. The crack-induced spectral sidebands are isolated using a combination of linear response subtraction (LRS), synchronous demodulation (SD), and continuous wavelet transform (CWT) filtering. Then, a sequential outlier analysis is performed on the extracted sidebands to identify the presence of a crack without reference to any baseline data obtained from the intact condition of the structure. Finally, the robustness of the proposed technique is demonstrated using actual test data obtained from simple aluminum plate and complex aircraft fitting-lug specimens under varying temperature and loading conditions.

  17. Methods for gas detection using stationary hyperspectral imaging sensors

    DOEpatents

    Conger, James L [San Ramon, CA; Henderson, John R [Castro Valley, CA

    2012-04-24

    According to one embodiment, a method comprises producing a first hyperspectral imaging (HSI) data cube of a location at a first time using data from a HSI sensor; producing a second HSI data cube of the same location at a second time using data from the HSI sensor; subtracting on a pixel-by-pixel basis the second HSI data cube from the first HSI data cube to produce a raw difference cube; calibrating the raw difference cube to produce a calibrated raw difference cube; selecting at least one desired spectral band based on a gas of interest; producing a detection image based on the at least one selected spectral band and the calibrated raw difference cube; examining the detection image to determine presence of the gas of interest; and outputting a result of the examination. Other methods, systems, and computer program products for detecting the presence of a gas are also described.
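
    The differencing step of the claimed method can be sketched in a few lines (the cube layout, placeholder calibration, and band averaging are illustrative assumptions):

        import numpy as np

        def detection_image(cube_t1, cube_t2, band_indices, gain=1.0, offset=0.0):
            """cube_*: (rows, cols, bands) HSI data cubes of the same scene at
            two times; returns a 2-D detection image for the selected gas bands."""
            raw_diff = cube_t1 - cube_t2                 # raw difference cube
            calibrated = gain * raw_diff + offset        # placeholder calibration
            return calibrated[:, :, band_indices].mean(axis=2)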

  18. Reduction of background clutter in structured lighting systems

    DOEpatents

    Carlson, Jeffrey J.; Giles, Michael K.; Padilla, Denise D.; Davidson, Jr., Patrick A.; Novick, David K.; Wilson, Christopher W.

    2010-06-22

    Methods for segmenting the reflected light of an illumination source having a characteristic wavelength from background illumination (i.e., clutter) in structured lighting systems can comprise: pulsing the light source used to illuminate a scene; pulsing the light source synchronously with the opening of a shutter in an imaging device; estimating the contribution of background clutter by interpolation of images of the scene collected at multiple spectral bands not including the characteristic wavelength, and subtracting the estimated background contribution from an image of the scene comprising the wavelength of the light source; and placing a polarizing filter between the imaging device and the scene, where the illumination source can be polarized in the same orientation as the polarizing filter. Apparatus for segmenting the light of an illumination source from background illumination can comprise an illuminator, an image receiver for receiving images of multiple spectral bands, a processor for calculations and interpolations, and a polarizing filter.
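
    The interpolation-based clutter estimate can be sketched as follows (linear interpolation between two flanking bands is an assumption; the patent only requires that the bands exclude the characteristic wavelength):

        import numpy as np

        def segment_laser_light(img_laser_band, img_below, img_above,
                                wl_laser, wl_below, wl_above):
            """Estimate background clutter at the laser wavelength from two
            flanking spectral bands and subtract it from the laser-band image."""
            t = (wl_laser - wl_below) / (wl_above - wl_below)
            clutter = (1.0 - t) * np.asarray(img_below) + t * np.asarray(img_above)
            return np.asarray(img_laser_band) - clutter   # reflected source light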

  19. A New Algorithm for Detecting Cloud Height using OMPS/LP Measurements

    NASA Technical Reports Server (NTRS)

    Chen, Zhong; DeLand, Matthew; Bhartia, Pawan K.

    2016-01-01

    The Ozone Mapping and Profiler Suite Limb Profiler (OMPS/LP) ozone product requires the determination of cloud height for each event to establish the lower boundary of the profile for the retrieval algorithm. We have created a revised cloud detection algorithm for LP measurements that uses the spectral dependence of the vertical gradient in radiance between two wavelengths in the visible and near-IR spectral regions. This approach provides better discrimination between clouds and aerosols than results obtained using a single wavelength. Observed LP cloud height values show good agreement with coincident Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) measurements.
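
    A heavily simplified sketch of the two-wavelength idea (the thresholds, the use of log-radiance gradients, and the spectral-neutrality test are illustrative assumptions, not the OMPS/LP production algorithm):

        import numpy as np

        def cloud_top_altitude(alt_km, rad_vis, rad_nir,
                               grad_thresh=0.3, ratio_thresh=0.8):
            """Flag levels where the vertical radiance gradient is strong at
            both wavelengths and spectrally flat (cloud-like, not aerosol)."""
            g_vis = np.abs(np.gradient(np.log(rad_vis), alt_km))
            g_nir = np.abs(np.gradient(np.log(rad_nir), alt_km))
            spectrally_flat = (np.minimum(g_vis, g_nir) /
                               (np.maximum(g_vis, g_nir) + 1e-12))
            cloudlike = (g_vis > grad_thresh) & (spectrally_flat > ratio_thresh)
            idx = np.where(cloudlike)[0]
            return alt_km[idx.max()] if idx.size else None   # highest flagged level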

  20. A cloud detection algorithm using the downwelling infrared radiance measured by an infrared pyrometer of the ground-based microwave radiometer

    DOE PAGES

    Ahn, M. H.; Han, D.; Won, H. Y.; ...

    2015-02-03

    For better utilization of the ground-based microwave radiometer, it is important to detect the presence of clouds in the measured data. Here, we introduce a simple and fast cloud detection algorithm that uses the optical characteristics of clouds in the infrared atmospheric window region. The new algorithm utilizes the brightness temperature (Tb) measured by an infrared radiometer installed on top of a microwave radiometer. The two-step algorithm consists of a spectral test followed by a temporal test. The measured Tb is first compared with a predicted clear-sky Tb obtained from an empirical formula as a function of surface air temperature and water vapor pressure. For the temporal test, the temporal variability of the measured Tb during one minute is compared with a dynamic threshold value representing the variability of clear-sky conditions. Data are designated cloud-free only when both the spectral and temporal tests indicate clear sky. Overall, most of the thick and uniform clouds are successfully detected by the spectral test, while broken and fast-varying clouds are detected by the temporal test. The algorithm is validated by comparison with collocated ceilometer data for six months, from January to June 2013. The overall proportion of correctness is about 88.3% and the probability of detection is 90.8%, which are comparable with or better than those of previous similar approaches. Two-thirds of the discrepancies occur when the new algorithm detects clouds while the ceilometer does not, resulting in different values of the probability of detection for different cloud-base altitudes: 93.8, 90.3, and 82.8% for low, mid, and high clouds, respectively. Finally, due to the characteristics of the spectral range, the new algorithm is found to be insensitive to the presence of inversion layers.
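
    A minimal sketch of the two-step test (the clear-sky regression coefficients and thresholds below are illustrative placeholders, not the paper's fitted values):

        import numpy as np

        def is_cloud_free(tb_one_minute, t_air_k, e_vapor_hpa,
                          a=0.85, b=12.0, margin=5.0, var_thresh=0.2):
            """tb_one_minute: IR brightness temperatures (K) over one minute."""
            tb_clear = a * t_air_k + b * np.log(e_vapor_hpa)   # empirical clear-sky Tb
            spectral_ok = np.mean(tb_one_minute) < tb_clear + margin
            temporal_ok = np.std(tb_one_minute) < var_thresh   # clear skies vary little
            return spectral_ok and temporal_ok                 # both tests must pass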

  1. Terascale spectral element algorithms and implementations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, P. F.; Tufo, H. M.

    1999-08-17

    We describe the development and implementation of an efficient spectral element code for multimillion gridpoint simulations of incompressible flows in general two- and three-dimensional domains. We review basic and recently developed algorithmic underpinnings that have resulted in good parallel and vector performance on a broad range of architectures, including the terascale computing systems now coming online at the DOE labs. Sustained performance of 219 GFLOPS has been recently achieved on 2048 nodes of the Intel ASCI-Red machine at Sandia.

  2. Calculation and experimental validation of spectral properties of microsize grains surrounded by nanoparticles.

    PubMed

    Yu, Haitong; Liu, Dong; Duan, Yuanyuan; Wang, Xiaodong

    2014-04-07

    Opacified aerogels are particulate thermal insulating materials in which micrometric opacifier mineral grains are surrounded by silica aerogel nanoparticles. A geometric model was developed to characterize the spectral properties of such microsize grains surrounded by much smaller particles. The model represents the material's microstructure with the spherical opacifier's spectral properties calculated using the multi-sphere T-matrix (MSTM) algorithm. The results are validated by comparing the measured reflectance of an opacified aerogel slab against the value predicted using the discrete ordinate method (DOM) based on calculated optical properties. The results suggest that the large particles embedded in the nanoparticle matrices show different scattering and absorption properties from the single scattering condition and that the MSTM and DOM algorithms are both useful for calculating the spectral and radiative properties of this particulate system.

  3. Arc-welding quality assurance by means of embedded fiber sensor and spectral processing combining feature selection and neural networks

    NASA Astrophysics Data System (ADS)

    Mirapeix, J.; García-Allende, P. B.; Cobo, A.; Conde, O.; López-Higuera, J. M.

    2007-07-01

    A new spectral processing technique designed for its application in the on-line detection and classification of arc-welding defects is presented in this paper. A non-invasive fiber sensor embedded within a TIG torch collects the plasma radiation originated during the welding process. The spectral information is then processed by means of two consecutive stages. A compression algorithm is first applied to the data allowing real-time analysis. The selected spectral bands are then used to feed a classification algorithm, which will be demonstrated to provide an efficient weld defect detection and classification. The results obtained with the proposed technique are compared to a similar processing scheme presented in a previous paper, giving rise to an improvement in the performance of the monitoring system.

  4. Spectral unmixing of multi-color tissue specific in vivo fluorescence in mice

    NASA Astrophysics Data System (ADS)

    Zacharakis, Giannis; Favicchio, Rosy; Garofalakis, Anikitos; Psycharakis, Stylianos; Mamalaki, Clio; Ripoll, Jorge

    2007-07-01

    Fluorescence Molecular Tomography (FMT) has emerged as a powerful tool for monitoring biological functions in vivo in small animals. It provides the means to determine volumetric images of fluorescent protein concentration by applying the principles of diffuse optical tomography. Using different probes tagged to different proteins or cells, different biological functions and pathways can be simultaneously imaged in the same subject. In this work we present a spectral unmixing algorithm capable of separating signals from different probes when combined with the tomographic imaging modality. We show results of two-color imaging when the algorithm is applied to separate fluorescence activity originating from phantoms containing two different fluorophores, namely CFSE and SNARF, with well separated emission spectra, as well as DsRed- and GFP-fused cells in F5-b10 transgenic mice in vivo. The same algorithm can furthermore be applied to tissue-specific spectroscopy data. Spectral analysis of a variety of organs from control, DsRed, and GFP F5/B10 transgenic mice showed that fluorophore detection by optical systems is highly tissue-dependent. Spectral data collected from different organs can provide useful insight into experimental parameter optimisation (choice of filters, fluorophores, excitation wavelengths), and spectral unmixing can be applied to measure the tissue dependency, thereby taking into account localized fluorophore efficiency. In summary, tissue spectral unmixing can be used as a criterion in choosing the most appropriate tissue targets as well as fluorescent markers for specific applications.
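
    Spectral unmixing of this kind is commonly posed as a linear least-squares problem: each measured spectrum is modeled as a non-negative combination of known fluorophore emission spectra. A generic sketch under that assumption (not the authors' exact solver):

        import numpy as np

        def unmix(measured, endmembers):
            """measured: (n_wavelengths,) spectrum; endmembers: (n_wavelengths,
            n_fluorophores) reference emission spectra. Returns abundances,
            clipped to be non-negative for physical plausibility."""
            coeffs, *_ = np.linalg.lstsq(endmembers, measured, rcond=None)
            return np.clip(coeffs, 0.0, None)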

  5. Radionuclide identification algorithm for organic scintillator-based radiation portal monitor

    NASA Astrophysics Data System (ADS)

    Paff, Marc Gerrit; Di Fulvio, Angela; Clarke, Shaun D.; Pozzi, Sara A.

    2017-03-01

    We have developed an algorithm for on-the-fly radionuclide identification for radiation portal monitors using organic scintillation detectors. The algorithm was demonstrated on experimental data acquired with our pedestrian portal monitor on moving special nuclear material and industrial sources at a purpose-built radiation portal monitor testing facility. The experimental data also included common medical isotopes. The algorithm takes the power spectral density of the cumulative distribution function of the measured pulse height distributions and matches these to reference spectra using a spectral angle mapper. F-score analysis showed that the new algorithm exhibited significant performance improvements over previously implemented radionuclide identification algorithms for organic scintillators. Reliable on-the-fly radionuclide identification would help portal monitor operators more effectively screen out the hundreds of thousands of nuisance alarms they encounter annually due to recent nuclear-medicine patients and cargo containing naturally occurring radioactive material. Portal monitor operators could instead focus on the rare but potentially high impact incidents of nuclear and radiological material smuggling detection for which portal monitors are intended.
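
    The matching step can be sketched with a plain spectral angle mapper, with the PSD-of-CDF preprocessing shown in one line (reference spectra and common grids are assumed given; this is an illustration, not the authors' code):

        import numpy as np

        def psd_of_cdf(pulse_height_hist):
            """Power spectral density of the cumulative pulse-height distribution."""
            return np.abs(np.fft.rfft(np.cumsum(pulse_height_hist))) ** 2

        def spectral_angle(a, b):
            cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            return np.arccos(np.clip(cos_t, -1.0, 1.0))

        def identify(measured_psd, reference_psds):
            """reference_psds: dict mapping nuclide name -> reference PSD."""
            return min(reference_psds,
                       key=lambda k: spectral_angle(measured_psd, reference_psds[k]))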

  6. Three-dimensional monochromatic x-ray computed tomography using synchrotron radiation

    NASA Astrophysics Data System (ADS)

    Saito, Tsuneo; Kudo, Hiroyuki; Takeda, Tohoru; Itai, Yuji; Tokumori, Kenji; Toyofuku, Fukai; Hyodo, Kazuyuki; Ando, Masami; Nishimura, Katsuyuki; Uyama, Chikao

    1998-08-01

    We describe a technique of 3D computed tomography (3D CT) using monochromatic x rays generated by synchrotron radiation, which performs a direct reconstruction of a 3D volume image of an object from its cone-beam projections. For the development, we propose a practical scanning orbit of the x-ray source to obtain complete 3D information on an object, and its corresponding 3D image reconstruction algorithm. The validity and usefulness of the proposed scanning orbit and reconstruction algorithm were confirmed by computer simulation studies. Based on these investigations, we have developed a prototype 3D monochromatic x-ray CT using synchrotron radiation, which provides exact 3D reconstruction and material-selective imaging by using the K-edge energy subtraction technique.
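
    The K-edge energy subtraction mentioned above has a simple core: reconstruct at two monochromatic energies bracketing the contrast material's K-edge and subtract, so that only the material whose attenuation jumps at the edge remains. A hedged sketch (the background-suppression weight is a placeholder):

        def k_edge_subtraction(mu_above, mu_below, weight=1.0):
            """mu_above/mu_below: reconstructed attenuation volumes (arrays) at
            energies just above and below the K-edge; the weighted difference
            highlights the K-edge material while suppressing background."""
            return mu_above - weight * mu_below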

  7. Spectral element multigrid. Part 2: Theoretical justification

    NASA Technical Reports Server (NTRS)

    Maday, Yvon; Munoz, Rafael

    1988-01-01

    A multigrid algorithm is analyzed which is used for iteratively solving the algebraic system resulting from the approximation of a second-order problem by spectral or spectral element methods. The analysis, performed here in the one-dimensional case, justifies the good smoothing properties of the Jacobi preconditioner that was presented in Part 1 of this paper.

  8. Onboard spectral imager data processor

    NASA Astrophysics Data System (ADS)

    Otten, Leonard J.; Meigs, Andrew D.; Franklin, Abraham J.; Sears, Robert D.; Robison, Mark W.; Rafert, J. Bruce; Fronterhouse, Donald C.; Grotbeck, Ronald L.

    1999-10-01

    Previous papers have described the concept behind the MightySat II.1 program, the satellite's Fourier Transform imaging spectrometer's optical design, the design for the spectral imaging payload, and its initial qualification testing. This paper discusses the on board data processing designed to reduce the amount of downloaded data by an order of magnitude and provide a demonstration of a smart spaceborne spectral imaging sensor. Two custom components, a spectral imager interface 6U VME card that moves data at over 30 MByte/sec, and four TI C-40 processors mounted to a second 6U VME and daughter card, are used to adapt the sensor to the spacecraft and provide the necessary high speed processing. A system architecture that offers both on board real time image processing and high-speed post data collection analysis of the spectral data has been developed. In addition to the on board processing of the raw data into a usable spectral data volume, one feature extraction technique has been incorporated. This algorithm operates on the basic interferometric data. The algorithm is integrated within the data compression process to search for uploadable feature descriptions.

  9. Directly data processing algorithm for multi-wavelength pyrometer (MWP).

    PubMed

    Xing, Jian; Peng, Bo; Ma, Zhao; Guo, Xin; Dai, Li; Gu, Weihong; Song, Wenlong

    2017-11-27

    Data processing for the multi-wavelength pyrometer (MWP) is a difficult problem because the emissivity is unknown. Solutions developed so far have generally assumed particular mathematical relations for emissivity versus wavelength or emissivity versus temperature. Because of deviations between these hypotheses and the actual situation, the inversion results can be seriously affected. The main aim of this study is therefore a direct data processing algorithm for the MWP that does not need to assume a spectral emissivity model in advance. Two new data processing algorithms for the MWP, the Gradient Projection (GP) algorithm and the Internal Penalty Function (IPF) algorithm, neither of which requires fixing an emissivity model in advance, are proposed. The core idea is that the MWP data processing problem is transformed into a constrained optimization problem, which can then be solved by the GP or IPF algorithm. Comparison of simulation results for some typical spectral emissivity models shows that the IPF algorithm is superior to the GP algorithm in terms of accuracy and efficiency. Rocket nozzle temperature experiments show that the true temperature inversion results from the IPF algorithm agree well with the theoretical design temperature. The proposed combination of the IPF algorithm with the MWP is thus expected to serve as a direct data processing algorithm that clears the unknown-emissivity obstacle for the MWP.
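
    One way to see "data processing as constrained optimization" is the bounded least-squares sketch below. A generic SciPy minimizer with an illustrative smoothness regularizer stands in for the authors' GP/IPF solvers; the radiation constants are in micron-based units.

        import numpy as np
        from scipy.optimize import minimize

        C1, C2 = 1.191e8, 1.4388e4        # W*um^4/(m^2*sr), um*K

        def planck(wl_um, t_k):
            return C1 / (wl_um ** 5 * (np.exp(C2 / (wl_um * t_k)) - 1.0))

        def fit_true_temperature(wl_um, radiance, t0=1500.0, lam=1e-3):
            """Fit T plus one emissivity per wavelength, each bounded to (0, 1];
            the bounds and roughness penalty regularize the underdetermined fit."""
            n = len(wl_um)
            def cost(x):
                t_k, eps = x[0], x[1:]
                misfit = np.sum((eps * planck(wl_um, t_k) - radiance) ** 2)
                rough = lam * np.sum(np.diff(eps) ** 2)   # illustrative regularizer
                return misfit + rough
            x0 = np.concatenate([[t0], np.full(n, 0.5)])
            bounds = [(300.0, 4000.0)] + [(1e-3, 1.0)] * n
            return minimize(cost, x0, bounds=bounds).x[0]   # estimated true T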

  10. TES Level 1 Algorithms: Interferogram Processing, Geolocation, Radiometric, and Spectral Calibration

    NASA Technical Reports Server (NTRS)

    Worden, Helen; Beer, Reinhard; Bowman, Kevin W.; Fisher, Brendan; Luo, Mingzhao; Rider, David; Sarkissian, Edwin; Tremblay, Denis; Zong, Jia

    2006-01-01

    The Tropospheric Emission Spectrometer (TES) on the Earth Observing System (EOS) Aura satellite measures the infrared radiance emitted by the Earth's surface and atmosphere using Fourier transform spectrometry. The measured interferograms are converted into geolocated, calibrated radiance spectra by the L1 (Level 1) processing, and are the inputs to L2 (Level 2) retrievals of atmospheric parameters, such as vertical profiles of trace gas abundance. We describe the algorithmic components of TES Level 1 processing, giving examples of the intermediate results and diagnostics that are necessary for creating TES L1 products. An assessment of noise-equivalent spectral radiance levels and current systematic errors is provided. As an initial validation of our spectral radiances, TES data are compared to the Atmospheric Infrared Sounder (AIRS) (on EOS Aqua), after accounting for spectral resolution differences by applying the AIRS spectral response function to the TES spectra. For the TES L1 nadir data products currently available, the agreement with AIRS is 1 K or better.

  11. Spectrum synthesis for a spectrally tunable light source based on a DMD-convex grating Offner configuration

    NASA Astrophysics Data System (ADS)

    Ma, Suodong; Pan, Qiao; Shen, Weimin

    2016-09-01

    As one kind of light source simulation device, spectrally tunable light sources are able to generate outputs with specific spectral shapes and radiant intensities according to different application requirements, and they are in urgent demand in many fields of the national economy and the national defense industry. Compared with the LED-type spectrally tunable light source, one based on a DMD-convex grating Offner configuration has the advantages of high spectral resolution, strong digital controllability, high spectrum synthesis accuracy, etc. As the key link enabling this type of light source to achieve target spectrum outputs, the spectrum synthesis algorithm based on spectrum matching is therefore very important. An improved spectrum synthesis algorithm based on linear least-squares initialization and Levenberg-Marquardt iterative optimization is proposed in this paper on the basis of an in-depth study of the spectrum matching principle. The effectiveness of the proposed method is verified by a series of simulations and experimental work.
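
    The linear least-squares initialization has a natural reading: the output spectrum is a weighted sum of per-channel basis spectra, so the initial weights solve a non-negative least-squares problem. A sketch under that assumption (the Levenberg-Marquardt refinement stage is omitted):

        import numpy as np
        from scipy.optimize import nnls

        def initial_weights(basis_spectra, target_spectrum):
            """basis_spectra: (n_wavelengths, n_channels) measured responses of
            the DMD channels; returns non-negative channel weights."""
            weights, residual_norm = nnls(basis_spectra, target_spectrum)
            return weights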

  12. Spectral CT Reconstruction with Image Sparsity and Spectral Mean

    PubMed Central

    Zhang, Yi; Xi, Yan; Yang, Qingsong; Cong, Wenxiang; Zhou, Jiliu

    2017-01-01

    Photon-counting detectors can acquire x-ray intensity data in different energy bins. The signal-to-noise ratio of the resultant raw data in each energy bin is generally low due to the narrow bin width and quantum noise. To address this problem, here we propose an image reconstruction approach for spectral CT that simultaneously reconstructs x-ray attenuation coefficients in all the energy bins. Because the measured spectral data are highly correlated among the x-ray energy bins, the intra-image sparsity and inter-image similarity are important prior knowledge for image reconstruction. Inspired by this observation, the total variation (TV) and spectral mean (SM) measures are combined to improve the quality of the reconstructed images. For this purpose, a linear mapping function is used to minimize image differences between energy bins. The split Bregman technique is applied to perform image reconstruction. Our numerical and experimental results show that the proposed algorithm outperforms competing iterative algorithms in this context. PMID:29034267

  13. Onboard Science and Applications Algorithm for Hyperspectral Data Reduction

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Davies, Ashley G.; Silverman, Dorothy; Mandl, Daniel

    2012-01-01

    An onboard processing mission concept is under development for a possible Direct Broadcast capability for the HyspIRI mission, a Hyperspectral remote sensing mission under consideration for launch in the next decade. The concept would intelligently spectrally and spatially subsample the data as well as generate science products onboard to enable return of key rapid response science and applications information despite limited downlink bandwidth. This rapid data delivery concept focuses on wildfires and volcanoes as primary applications, but also has applications to vegetation, coastal flooding, dust, and snow/ice applications. Operationally, the HyspIRI team would define a set of spatial regions of interest where specific algorithms would be executed. For example, known coastal areas would have certain products or bands downlinked, ocean areas might have other bands downlinked, and during fire seasons other areas would be processed for active fire detections. Ground operations would automatically generate the mission plans specifying the highest priority tasks executable within onboard computation, setup, and data downlink constraints. The spectral bands of the TIR (thermal infrared) instrument can accurately detect the thermal signature of fires and send down alerts, as well as the thermal and VSWIR (visible to short-wave infrared) data corresponding to the active fires. Active volcanism also produces a distinctive thermal signature that can be detected onboard to enable spatial subsampling. Onboard algorithms and ground-based algorithms suitable for onboard deployment are mature. On HyspIRI, the algorithm would perform a table-driven temperature inversion from several spectral TIR bands, and then trigger downlink of the entire spectrum for each of the hot pixels identified. Ocean and coastal applications include sea surface temperature (using a small spectral subset of TIR data, but requiring considerable ancillary data), and ocean color applications to track biological activity such as harmful algal blooms. Measuring surface water extent to track flooding is another rapid response product leveraging VSWIR spectral information.
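
    The onboard trigger logic can be sketched very simply (the temperature threshold is a placeholder, and the table-driven multi-band inversion itself is not reproduced here):

        import numpy as np

        def spectra_to_downlink(retrieved_temp_k, vswir_cube, threshold_k=400.0):
            """retrieved_temp_k: (rows, cols) per-pixel temperatures from the TIR
            inversion; vswir_cube: (rows, cols, bands). Returns the full spectra
            of only the hot pixels, the subset flagged for downlink."""
            rows, cols = np.where(retrieved_temp_k > threshold_k)
            return vswir_cube[rows, cols, :]    # (n_hot_pixels, bands)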

  14. Development of a two wheeled self balancing robot with speech recognition and navigation algorithm

    NASA Astrophysics Data System (ADS)

    Rahman, Md. Muhaimin; Ashik-E-Rasul, Haq, Nowab. Md. Aminul; Hassan, Mehedi; Hasib, Irfan Mohammad Al; Hassan, K. M. Rafidh

    2016-07-01

    This paper discusses the modeling, construction, and navigation algorithm development of a two-wheeled self-balancing mobile robot in an enclosure. We discuss the design of the two main controller algorithms, namely PID algorithms, on the robot model. Simulation is performed in the SIMULINK environment. The controller is developed primarily for self-balancing of the robot and also for its positioning. For navigation in an enclosure, a template matching algorithm is proposed for precise measurement of the robot position. The navigation system needs to be calibrated before the navigation process starts. Almost all of the earlier template matching algorithms in the open literature can only trace the robot, but the algorithm proposed here can also locate the positions of other objects in the enclosure, such as furniture and tables. This enables the robot to know the exact location of every stationary object in the enclosure. Moreover, some additional features, such as speech recognition and object detection, are added. For object detection, the single-board computer Raspberry Pi is used. The system is programmed to analyze images captured via the camera, which are then processed through background subtraction, followed by active noise reduction.
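
    The position-measurement step can be sketched with off-the-shelf normalized cross-correlation (OpenCV shown for illustration; the paper's own template matching implementation is not specified):

        import cv2

        def locate_object(scene_gray, template_gray, threshold=0.7):
            """Return the top-left corner of the best template match in the
            scene image, or None if the match score is too weak."""
            scores = cv2.matchTemplate(scene_gray, template_gray,
                                       cv2.TM_CCOEFF_NORMED)
            _, max_val, _, max_loc = cv2.minMaxLoc(scores)
            return max_loc if max_val >= threshold else None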

  15. A wavelet transform algorithm for peak detection and application to powder x-ray diffraction data.

    PubMed

    Gregoire, John M; Dale, Darren; van Dover, R Bruce

    2011-01-01

    Peak detection is ubiquitous in the analysis of spectral data. While many noise-filtering algorithms and peak identification algorithms have been developed, recent work [P. Du, W. Kibbe, and S. Lin, Bioinformatics 22, 2059 (2006); A. Wee, D. Grayden, Y. Zhu, K. Petkovic-Duran, and D. Smith, Electrophoresis 29, 4215 (2008)] has demonstrated that both of these tasks are efficiently performed through analysis of the wavelet transform of the data. In this paper, we present a wavelet-based peak detection algorithm with user-defined parameters that can be readily applied to any spectral data. Particular attention is given to the algorithm's resolution of overlapping peaks. The algorithm is implemented for the analysis of powder diffraction data, and successful detection of Bragg peaks is demonstrated for both low signal-to-noise data from theta-theta diffraction of nanoparticles and combinatorial x-ray diffraction data from a composition-spread thin film. These datasets have different types of background signals which are effectively removed by the wavelet-based method, and the results demonstrate that the algorithm provides a robust method for automated peak detection.
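
    In the same spirit (though not the authors' implementation), SciPy ships a CWT-based peak finder that follows the Du et al. approach cited above:

        import numpy as np
        from scipy.signal import find_peaks_cwt

        def detect_bragg_peaks(intensity, min_width=2, max_width=20):
            """Detect peaks in a 1-D diffraction pattern across a range of
            candidate widths (in samples); slowly varying background is
            suppressed implicitly by the wavelet transform."""
            widths = np.arange(min_width, max_width)
            return find_peaks_cwt(intensity, widths)   # indices of detected peaks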

  16. Method for hyperspectral imagery exploitation and pixel spectral unmixing

    NASA Technical Reports Server (NTRS)

    Lin, Ching-Fang (Inventor)

    2003-01-01

    An efficient hybrid approach to exploiting hyperspectral imagery and unmixing spectral pixels. This hybrid approach uses a genetic algorithm to solve for the abundance vector of the first pixel of a hyperspectral image cube. This abundance vector is used as the initial state in a robust filter to derive the abundance estimate for the next pixel. By using a Kalman filter, the abundance estimate for a pixel can be obtained in a one-iteration procedure, which is much faster than the genetic algorithm. The output of the robust filter is fed to the genetic algorithm again to derive an accurate abundance estimate for the current pixel. Using the robust filter solution as the starting point of the genetic algorithm speeds up the evolution of the genetic algorithm. After obtaining the accurate abundance estimate, the procedure moves to the next pixel, using the output of the genetic algorithm as the previous state estimate to derive the abundance estimate for this pixel with the robust filter, and again using the genetic algorithm to derive an accurate abundance estimate efficiently based on the robust filter solution. This iteration continues until all pixels in the hyperspectral image cube have been processed.

  17. An Analysis of Light Periods of BL Lac Object S5 0716+714 with the MUSIC Algorithm

    NASA Astrophysics Data System (ADS)

    Tang, Jie

    2012-07-01

    The multiple signal classification (MUSIC) algorithm is introduced for the estimation of the light periods of BL Lac objects. The principle of the MUSIC algorithm is given, together with a test of its spectral resolution using a simulated signal. From the literature, we have collected a large number of effective observational data points for the BL Lac object S5 0716+714 in the three optical wavebands V, R, and I from 1994 to 2008. The light periods of S5 0716+714 are obtained by means of the MUSIC algorithm and the average periodogram algorithm, respectively. We find two major periodic components: one with a period of (3.33±0.08) yr and another with a period of (1.24±0.01) yr. A comparison of the periodicity analysis performance of the two algorithms indicates that the MUSIC algorithm requires a smaller sample length, and has good spectral resolution and anti-noise ability, improving the accuracy of periodicity analysis in the case of short sample lengths.
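
    For a uniformly sampled series, the MUSIC pseudospectrum can be sketched as below (real light curves are irregularly sampled; interpolation onto a uniform grid, the correlation order m, and the assumed number of sinusoids p are all user choices in this illustration):

        import numpy as np

        def music_pseudospectrum(x, p, freqs, m=None):
            """x: uniformly sampled series; p: assumed sinusoid count;
            freqs: trial frequencies in cycles per sample."""
            m = m or 4 * p
            snaps = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
            R = snaps.conj().T @ snaps / len(snaps)     # sample correlation matrix
            _, v = np.linalg.eigh(R)                    # eigenvalues ascending
            noise = v[:, :m - 2 * p]                    # noise subspace (2p signal dims)
            n = np.arange(m)
            spec = [1.0 / np.linalg.norm(noise.conj().T
                                         @ np.exp(2j * np.pi * f * n)) ** 2
                    for f in freqs]
            return np.array(spec)                       # peaks mark candidate periods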

  18. Assessing the use of an infrared spectrum hyperpixel array imager to measure temperature during additive and subtractive manufacturing

    NASA Astrophysics Data System (ADS)

    Whitenton, Eric; Heigel, Jarred; Lane, Brandon; Moylan, Shawn

    2016-05-01

    Accurate non-contact temperature measurement is important to optimize manufacturing processes. This applies to both additive (3D printing) and subtractive (material removal by machining) manufacturing. Performing accurate single wavelength thermography suffers numerous challenges. A potential alternative is hyperpixel array hyperspectral imaging. Focusing on metals, this paper discusses issues involved such as unknown or changing emissivity, inaccurate greybody assumptions, motion blur, and size of source effects. The algorithm which converts measured thermal spectra to emissivity and temperature uses a customized multistep non-linear equation solver to determine the best-fit emission curve. Emissivity dependence on wavelength may be assumed uniform or have a relationship typical for metals. The custom software displays residuals for intensity, temperature, and emissivity to gauge the correctness of the greybody assumption. Initial results are shown from a laser powder-bed fusion additive process, as well as a machining process. In addition, the effects of motion blur are analyzed, which occurs in both additive and subtractive manufacturing processes. In a laser powder-bed fusion additive process, the scanning laser causes the melt pool to move rapidly, causing a motion blur-like effect. In machining, measuring temperature of the rapidly moving chip is a desirable goal to develop and validate simulations of the cutting process. A moving slit target is imaged to characterize how the measured temperature values are affected by motion of a measured target.

  19. Tomographic digital subtraction angiography for lung perfusion estimation in rodents.

    PubMed

    Badea, Cristian T; Hedlund, Laurence W; De Lin, Ming; Mackel, Julie S Boslego; Samei, Ehsan; Johnson, G Allan

    2007-05-01

    In vivo measurements of perfusion present a challenge to existing small animal imaging techniques such as magnetic resonance microscopy, micro computed tomography, micro positron emission tomography, and microSPECT, due to combined requirements for high spatial and temporal resolution. We demonstrate the use of tomographic digital subtraction angiography (TDSA) for estimation of perfusion in small animals. TDSA augments conventional digital subtraction angiography (DSA) by providing three-dimensional spatial information using tomosynthesis algorithms. TDSA is based on the novel paradigm that the same time-density curves can be reproduced in a number of consecutive injections of µL volumes of contrast at a series of different angles of rotation. The capabilities of TDSA are established in studies on lung perfusion in rats. Using an imaging system developed in-house, we acquired data for four-dimensional (4D) imaging with a temporal resolution of 140 ms, an in-plane spatial resolution of 100 µm, and a slice thickness on the order of millimeters. Based on a structured experimental approach, we optimized TDSA imaging to provide a good trade-off between slice thickness, number of injections, contrast to noise, and immunity to artifacts. Both DSA and TDSA images were used to create parametric maps of perfusion. TDSA imaging has potential application in a number of areas where functional perfusion measurements in 4D can provide valuable insight into animal models of disease and response to therapeutics.

  20. Color analysis and image rendering of woodblock prints with oil-based ink

    NASA Astrophysics Data System (ADS)

    Horiuchi, Takahiko; Tanimoto, Tetsushi; Tominaga, Shoji

    2012-01-01

    This paper proposes a method for analyzing the color characteristics of woodblock prints having oil-based ink and rendering realistic images based on camera data. The analysis results of woodblock prints show some characteristic features in comparison with oil paintings: 1) A woodblock print can be divided into several cluster areas, each with similar surface spectral reflectance; and 2) strong specular reflection from the influence of overlapping paints arises only in specific cluster areas. By considering these properties, we develop an effective rendering algorithm by modifying our previous algorithm for oil paintings. A set of surface spectral reflectances of a woodblock print is represented by using only a small number of average surface spectral reflectances and the registered scaling coefficients, whereas the previous algorithm for oil paintings required surface spectral reflectances of high dimension at all pixels. In the rendering process, in order to reproduce the strong specular reflection in specific cluster areas, we use two sets of parameters in the Torrance-Sparrow model for cluster areas with or without strong specular reflection. An experiment on a woodblock printing with oil-based ink was performed to demonstrate the feasibility of the proposed method.

  1. Models of formation and some algorithms of hyperspectral image processing

    NASA Astrophysics Data System (ADS)

    Achmetov, R. N.; Stratilatov, N. R.; Yudakov, A. A.; Vezenov, V. I.; Eremeev, V. V.

    2014-12-01

    Algorithms and information technologies for processing Earth hyperspectral imagery are presented. Several new approaches are discussed. Peculiar properties of processing the hyperspectral imagery, such as multifold signal-to-noise reduction, atmospheric distortions, access to spectral characteristics of every image point, and high dimensionality of data, were studied. Different measures of similarity between individual hyperspectral image points and the effect of additive uncorrelated noise on these measures were analyzed. It was shown that these measures are substantially affected by noise, and a new measure free of this disadvantage was proposed. The problem of detecting the observed scene object boundaries, based on comparing the spectral characteristics of image points, is considered. It was shown that contours are processed much better when spectral characteristics are used instead of energy brightness. A statistical approach to the correction of atmospheric distortions, which makes it possible to solve the stated problem based on analysis of a distorted image in contrast to analytical multiparametric models, was proposed. Several algorithms used to integrate spectral zonal images with data from other survey systems, which make it possible to image observed scene objects with a higher quality, are considered. Quality characteristics of hyperspectral data processing were proposed and studied.

  2. Spectral-spatial classification of hyperspectral imagery with cooperative game

    NASA Astrophysics Data System (ADS)

    Zhao, Ji; Zhong, Yanfei; Jia, Tianyi; Wang, Xinyu; Xu, Yao; Shu, Hong; Zhang, Liangpei

    2018-01-01

    Spectral-spatial classification is known to be an effective way to improve classification performance by integrating spectral information and spatial cues for hyperspectral imagery. In this paper, a game-theoretic spectral-spatial classification algorithm (GTA) using a conditional random field (CRF) model is presented, in which CRF is used to model the image considering the spatial contextual information, and a cooperative game is designed to obtain the labels. The algorithm establishes a one-to-one correspondence between image classification and game theory. The pixels of the image are considered as the players, and the labels are considered as the strategies in a game. Similar to the idea of soft classification, the uncertainty is considered to build the expected energy model in the first step. The local expected energy can be quickly calculated, based on a mixed strategy for the pixels, to establish the foundation for a cooperative game. Coalitions can then be formed by the designed merge rule based on the local expected energy, so that a majority game can be performed to make a coalition decision to obtain the label of each pixel. The experimental results on three hyperspectral data sets demonstrate the effectiveness of the proposed classification algorithm.

  3. Interference graph-based dynamic frequency reuse in optical attocell networks

    NASA Astrophysics Data System (ADS)

    Liu, Huanlin; Xia, Peijie; Chen, Yong; Wu, Lan

    2017-11-01

    An indoor optical attocell network may achieve higher capacity than radio frequency (RF) or infrared (IR)-based wireless systems. It is proposed as a special type of visible light communication (VLC) system using light-emitting diodes (LEDs). However, the system spectral efficiency may be severely degraded owing to inter-cell interference (ICI), particularly in dense deployment scenarios. To address these issues, we construct the spectral interference graph for an indoor optical attocell network, and propose the Dynamic Frequency Reuse (DFR) and Weighted Dynamic Frequency Reuse (W-DFR) algorithms to decrease ICI and improve spectral efficiency. The interference graph lets LEDs transmit data without interference and select the minimum number of sub-bands needed for frequency reuse. The DFR algorithm then reuses the system frequency equally across service-providing cells to mitigate spectrum interference, while the W-DFR algorithm reuses the system frequency using the bandwidth weight (BW), which is defined based on the number of service users. Numerical results show that both of the proposed schemes can effectively improve the average spectral efficiency (ASE) of the system. An improvement in the user data rate is also obtained, as shown by analyzing its cumulative distribution function (CDF).

  4. On increasing the spectral efficiency and transmissivity in the data transmission channel on the spacecraft-ground tracking station line

    NASA Astrophysics Data System (ADS)

    Andrianov, M. N.; Kostenko, V. I.; Likhachev, S. F.

    2018-01-01

    Algorithms for achieving a practical increase in the data transmission rate on the spacecraft-ground tracking station line are considered. This increase is achieved by applying spectrally efficient modulation techniques and the technology of orthogonal frequency compression of signals using millimeter-range radio waves. The advantages and disadvantages of each of the three algorithms are discussed. A significant advantage of data transmission in the millimeter range is indicated.

  5. Model Order Reduction Algorithm for Estimating the Absorption Spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Beeumen, Roel; Williams-Young, David B.; Kasper, Joseph M.

    The ab initio description of the spectral interior of the absorption spectrum poses both a theoretical and computational challenge for modern electronic structure theory. Due to the often spectrally dense character of this domain in the quantum propagator’s eigenspectrum for medium-to-large sized systems, traditional approaches based on the partial diagonalization of the propagator often encounter oscillatory and stagnating convergence. Electronic structure methods which solve the molecular response problem through the solution of spectrally shifted linear systems, such as the complex polarization propagator, offer an alternative approach which is agnostic to the underlying spectral density or domain location. This generality comes at a seemingly high computational cost associated with solving a large linear system for each spectral shift in some discretization of the spectral domain of interest. In this work, we present a novel, adaptive solution to this high computational overhead based on model order reduction techniques via interpolation. Model order reduction reduces the computational complexity of mathematical models and is ubiquitous in the simulation of dynamical systems and control theory. The efficiency and effectiveness of the proposed algorithm in the ab initio prediction of X-ray absorption spectra is demonstrated using a test set of challenging water clusters which are spectrally dense in the neighborhood of the oxygen K-edge. On the basis of a single, user-defined tolerance we automatically determine the order of the reduced models and approximate the absorption spectrum up to the given tolerance. We also illustrate that, for the systems studied, the automatically determined model order increases logarithmically with the problem dimension, compared to a linear increase of the number of eigenvalues within the energy window. Furthermore, we observed that the computational cost of the proposed algorithm only scales quadratically with respect to the problem dimension.

  6. Quantized Spectral Compressed Sensing: Cramer–Rao Bounds and Recovery Algorithms

    NASA Astrophysics Data System (ADS)

    Fu, Haoyu; Chi, Yuejie

    2018-06-01

    Efficient estimation of wideband spectrum is of great importance for applications such as cognitive radio. Recently, sub-Nyquist sampling schemes based on compressed sensing have been proposed to greatly reduce the sampling rate. However, the important issue of quantization has not been fully addressed, particularly for high-resolution spectrum and parameter estimation. In this paper, we aim to recover spectrally-sparse signals and the corresponding parameters, such as frequencies and amplitudes, from heavy quantizations of their noisy complex-valued random linear measurements, e.g. only the quadrant information. We first characterize the Cramer-Rao bound under Gaussian noise, which highlights the trade-off between sample complexity and bit depth under different signal-to-noise ratios for a fixed budget of bits. Next, we propose a new algorithm based on atomic norm soft thresholding for signal recovery, which is equivalent to proximal mapping of properly designed surrogate signals with respect to the atomic norm that motivates spectral sparsity. The proposed algorithm can be applied to both the single measurement vector case and the multiple measurement vector case. It is shown that under the Gaussian measurement model, the spectral signals can be reconstructed accurately with high probability as soon as the number of quantized measurements exceeds the order of K log n, where K is the level of spectral sparsity and n is the signal dimension. Finally, numerical simulations are provided to validate the proposed approaches.

  7. Optimizing Algorithm Choice for Metaproteomics: Comparing X!Tandem and Proteome Discoverer for Soil Proteomes

    NASA Astrophysics Data System (ADS)

    Diaz, K. S.; Kim, E. H.; Jones, R. M.; de Leon, K. C.; Woodcroft, B. J.; Tyson, G. W.; Rich, V. I.

    2014-12-01

    The growing field of metaproteomics links microbial communities to their expressed functions by using mass spectrometry methods to characterize community proteins. Comparison of mass spectrometry protein search algorithms and their biases is crucial for maximizing the quality and amount of protein identifications in mass spectral data. Available algorithms employ different approaches when mapping mass spectra to peptides against a database. We compared mass spectra from four microbial proteomes derived from high-organic-content soils searched with two search algorithms: 1) Sequest HT as packaged within Proteome Discoverer (v.1.4) and 2) X!Tandem as packaged in TransProteomicPipeline (v.4.7.1). Searches used matched metagenomes, and results were filtered to allow identification of high-probability proteins. There was little overlap in proteins identified by both algorithms, on average just ~24% of the total. However, when adjusted for spectral abundance, the overlap improved to ~70%. Proteome Discoverer generally outperformed X!Tandem, identifying an average of 12.5% more proteins than X!Tandem, with X!Tandem identifying more proteins only in the first two proteomes. For spectrally-adjusted results, the algorithms were similar, with X!Tandem marginally outperforming Proteome Discoverer by an average of ~4%. We then assessed differences in heat shock protein (HSP) identification by the two algorithms by BLASTing identified proteins against the Heat Shock Protein Information Resource, because HSP hits typically account for the majority of signal in proteomes, due to extraction protocols. Total HSP identifications for each of the 4 proteomes were approximately 15%, 11%, 17%, and 19%, with ~14% for total HSPs with redundancies removed. Of the ~15% average of proteins from the 4 proteomes identified as HSPs, ~10% of proteins and spectra were identified by both algorithms. On average, Proteome Discoverer identified ~9% more HSPs than X!Tandem.

  8. Imaging spectrometer measurement of water vapor in the 400 to 2500 nm spectral region

    NASA Technical Reports Server (NTRS)

    Green, Robert O.; Roberts, Dar A.; Conel, James E.; Dozier, Jeff

    1995-01-01

    The Airborne Visible-Infrared Imaging Spectrometer (AVIRIS) measures the total upwelling spectral radiance from 400 to 2500 nm sampled at 10 nm intervals. The instrument acquires spectral data at an altitude of 20 km above sea level, as images of 11 by up to 100 km at 17x17 meter spatial sampling. We have developed a nonlinear spectral fitting algorithm coupled with a radiative transfer code to derive the total path water vapor from the spectrum measured for each spatial element in an AVIRIS image. The algorithm compensates for variation in the surface spectral reflectance and atmospheric aerosols. It uses water vapor absorption bands centered at 940 nm, 1040 nm, and 1380 nm. We analyze data sets with water vapor abundances ranging from 1 to 40 precipitable millimeters. In one data set, the total path water vapor varies from 7 to 21 mm over a distance of less than 10 km. We have analyzed a time series of five images acquired at 12 minute intervals; these show spatially heterogeneous changes of advected water vapor of 25 percent over 1 hour. The algorithm determines water vapor for images with a range of ground covers, including bare rock and soil, sparse to dense vegetation, snow and ice, open water, and clouds. The precision of the water vapor determination approaches one percent. However, the precision is sensitive to the absolute abundance and the absorption strength of the atmospheric water vapor band analyzed. We have evaluated the accuracy of the algorithm by comparing several surface-based determinations of water vapor at the time of the AVIRIS data acquisition. The agreement between the AVIRIS measured water vapor and the in situ surface radiometer and surface interferometer measured water vapor is 5 to 10 percent.
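
    A full treatment couples a radiative transfer code to a nonlinear fit, but the core sensitivity can be illustrated with a continuum-interpolated band ratio at 940 nm (a simpler, related technique, not the AVIRIS production algorithm; the band positions below are nominal):

        import numpy as np

        def band_depth_940(wave_nm, radiance):
            """Continuum-interpolated depth of the 940 nm water vapor band;
            deeper absorption (a smaller ratio) means more path water vapor."""
            wave_nm = np.asarray(wave_nm)
            radiance = np.asarray(radiance)
            def band_mean(lo, hi):
                sel = (wave_nm >= lo) & (wave_nm <= hi)
                return radiance[sel].mean()
            shoulders = 0.5 * (band_mean(865, 885) + band_mean(1000, 1020))
            return band_mean(930, 950) / shoulders   # CIBR ratio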

  9. Community structure from spectral properties in complex networks

    NASA Astrophysics Data System (ADS)

    Servedio, V. D. P.; Colaiori, F.; Capocci, A.; Caldarelli, G.

    2005-06-01

    We analyze the spectral properties of complex networks focusing on their relation to the community structure, and develop an algorithm based on correlations among components of different eigenvectors. The algorithm applies to general weighted networks and, in a suitably modified version, to the case of directed networks. Our method allows us to correctly detect communities in sharply partitioned graphs, and it is also useful for the analysis of more complex networks without a well-defined cluster structure, such as social and information networks. As an example, we test the algorithm on a large-scale data set from a psychological experiment on free word association, where it proves to be successful both in clustering words and in uncovering mental association patterns.

  10. Highlights of TOMS Version 9 Total Ozone Algorithm

    NASA Technical Reports Server (NTRS)

    Bhartia, Pawan; Haffner, David

    2012-01-01

    The fundamental basis of TOMS total ozone algorithm was developed some 45 years ago by Dave and Mateer. It was designed to estimate total ozone from satellite measurements of the backscattered UV radiances at few discrete wavelengths in the Huggins ozone absorption band (310-340 nm). Over the years, as the need for higher accuracy in measuring total ozone from space has increased, several improvements to the basic algorithms have been made. They include: better correction for the effects of aerosols and clouds, an improved method to account for the variation in shape of ozone profiles with season, latitude, and total ozone, and a multi-wavelength correction for remaining profile shape errors. These improvements have made it possible to retrieve total ozone with just 3 spectral channels of moderate spectral resolution (approx. 1 nm) with accuracy comparable to state-of-the-art spectral fitting algorithms like DOAS that require high spectral resolution measurements at large number of wavelengths. One of the deficiencies of the TOMS algorithm has been that it doesn't provide an error estimate. This is a particular problem in high latitudes when the profile shape errors become significant and vary with latitude, season, total ozone, and instrument viewing geometry. The primary objective of the TOMS V9 algorithm is to account for these effects in estimating the error bars. This is done by a straightforward implementation of the Rodgers optimum estimation method using a priori ozone profiles and their error covariances matrices constructed using Aura MLS and ozonesonde data. The algorithm produces a vertical ozone profile that contains 1-2.5 pieces of information (degrees of freedom of signal) depending upon solar zenith angle (SZA). The profile is integrated to obtain the total column. We provide information that shows the altitude range in which the profile is best determined by the measurements. One can use this information in data assimilation and analysis. A side benefit of this algorithm is that it is considerably simpler than the present algorithm that uses a database of 1512 profiles to retrieve total ozone. These profiles are tedious to construct and modify. Though conceptually similar to the SBUV V8 algorithm that was developed about a decade ago, the SBUV and TOMS V9 algorithms differ in detail. The TOMS algorithm uses 3 wavelengths to retrieve the profile while the SBUV algorithm uses 6-9 wavelengths, so TOMS provides less profile information. However both algorithms have comparable total ozone information and TOMS V9 can be easily adapted to use additional wavelengths from instruments like GOME, OMI and OMPS to provide better profile information at smaller SZAs. The other significant difference between the two algorithms is that while the SBUV algorithm has been optimized for deriving monthly zonal means by making an appropriate choice of the a priori error covariance matrix, the TOMS algorithm has been optimized for tracking short-term variability using month and latitude dependent covariance matrices.
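
    The Rodgers optimal-estimation step referred to above has a compact linear form; a minimal sketch (linearized about the a priori, with all matrices assumed given) is:

        import numpy as np

        def oe_update(xa, Sa, K, Se, y_minus_f_xa):
            """xa: a priori profile; Sa, Se: a priori and measurement error
            covariances; K: Jacobian; y_minus_f_xa: measurement residual."""
            Sa_inv = np.linalg.inv(Sa)
            Se_inv = np.linalg.inv(Se)
            S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)  # retrieval covariance
            x_hat = xa + S_hat @ K.T @ Se_inv @ y_minus_f_xa  # retrieved profile
            A = S_hat @ K.T @ Se_inv @ K                      # averaging kernel
            return x_hat, S_hat, np.trace(A)                  # trace(A) = DFS

    Integrating the retrieved profile over altitude then gives the total ozone column, and the trace of the averaging kernel reproduces the 1-2.5 degrees of freedom of signal quoted above.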

  11. Gauge invariant spectral Cauchy characteristic extraction

    NASA Astrophysics Data System (ADS)

    Handmer, Casey J.; Szilágyi, Béla; Winicour, Jeffrey

    2015-12-01

    We present gauge invariant spectral Cauchy characteristic extraction. We compare gravitational waveforms extracted from a head-on black hole merger simulated in two different gauges by two different codes. We show rapid convergence, demonstrating both gauge invariance of the extraction algorithm and consistency between the legacy Pitt null code and the much faster spectral Einstein code (SpEC).

  12. Phase Retrieval from Modulus Using Homeomorphic Signal Processing and the Complex Cepstrum: An Algorithm for Lightning Protection Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, G A

    2004-06-08

    In general, the phase retrieval from modulus problem is very difficult. In this report, we solve the difficult but somewhat more tractable case in which we constrain the solution to a minimum-phase reconstruction. We exploit the real- and imaginary-part sufficiency properties of the Fourier and Hilbert transforms of causal sequences to develop an algorithm for reconstructing spectral phase given only the spectral modulus. The algorithm uses homomorphic signal processing methods with the complex cepstrum. The formal problem of interest is: given measurements of only the modulus |H(k)| (no phase) of the Discrete Fourier Transform (DFT) of a real, finite-length, stable, causal time-domain signal h(n), compute a minimum-phase reconstruction ĥ(n) of the signal. Then compute the phase of ĥ(n) using a DFT, and use the result as an estimate of the phase of h(n). The development of the algorithm is quite involved, but the final algorithm and its implementation are very simple. This work was motivated by a phase retrieval from modulus problem that arose in LLNL Defense Sciences Engineering Division (DSED) projects in lightning protection for buildings. The measurements are limited to modulus-only spectra from a spectrum analyzer. However, it is desired to perform system identification on the building to compute impulse responses and transfer functions that describe the amount of lightning energy that will be transferred from the outside of the building to the inside. This calculation requires knowledge of the entire signals (both modulus and phase). The algorithm and software described in this report are proposed as an approach to phase retrieval that can be used for programmatic needs. This report presents a brief tutorial description of the mathematical problem and the derivation of the phase retrieval algorithm. The efficacy of the theory is demonstrated using simulated signals that meet the assumptions of the algorithm. For the noiseless case, the reconstructions are extremely accurate. When moderate to heavy simulated white Gaussian noise was added, the algorithm's performance remained reasonably robust, especially in the low-frequency part of the spectrum, which is the part of most interest for lightning protection. Limitations of the algorithm include the following: (1) It does not account for noise in the given spectral modulus. Fortunately, the lightning protection signals of interest generally have a reasonably high signal-to-noise ratio (SNR). (2) The DFT length N must be even and larger than the length of the nonzero part of the measured signals. These constraints are simple to meet in practice. (3) Regardless of the properties of the actual signal h(n), the phase retrieval results are constrained to have the minimum-phase property. In most problems of practical interest, these assumptions are very reasonable and probably valid; they are reasonable assumptions for lightning protection applications.
    Proposed future work includes (a) evaluating the efficacy of the algorithm with real lightning protection signals from programmatic applications, (b) performing a more rigorous analysis of noise effects, (c) using the algorithm along with advanced system identification algorithms to estimate impulse responses and transfer functions, (d) developing algorithms to deal with measured partial (truncated) spectral moduli, and (e) R&D of phase retrieval algorithms that specifically deal with general (not necessarily minimum-phase) signals and noisy spectral moduli.
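
    The core homomorphic step (cepstral folding) can be sketched in a few lines; this is a minimal illustration under the report's stated assumptions (even DFT length, minimum-phase constraint, noiseless modulus), not the LLNL implementation:

    ```python
    import numpy as np

    def min_phase_from_modulus(mod):
        """Minimum-phase reconstruction h_hat(n) from the DFT modulus |H(k)|.
        mod: length-N magnitude spectrum, N even (per the report's assumptions).
        Returns the reconstructed signal and its phase estimate."""
        N = len(mod)
        # Real cepstrum of the log-magnitude spectrum (real for symmetric mod)
        c = np.fft.ifft(np.log(np.maximum(mod, 1e-12))).real
        # Fold the cepstrum: keep c[0] and c[N/2], double positive quefrencies
        w = np.zeros(N)
        w[0], w[N // 2] = 1.0, 1.0
        w[1:N // 2] = 2.0
        H_min = np.exp(np.fft.fft(c * w))      # minimum-phase spectrum
        h_hat = np.fft.ifft(H_min).real
        return h_hat, np.angle(np.fft.fft(h_hat))
    ```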

  13. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    NASA Astrophysics Data System (ADS)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance of the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology for incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.
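
    The abstract does not spell out the dynamic program; as a generic stand-in (hypothetical code: the segment cost here is simple variance about the segment mean rather than the actual TD model error), globally optimal event locations minimizing a summed per-segment cost can be found as follows:

    ```python
    import numpy as np

    def optimal_event_segmentation(frames, K):
        """Split T spectral-parameter frames into K contiguous events so that
        the summed per-segment error is globally minimal (dynamic programming).
        frames: (T, D) array. Returns (min_cost, interior boundary indices)."""
        frames = np.asarray(frames, dtype=float)
        T = len(frames)
        pref = np.cumsum(frames, axis=0)
        pref2 = np.cumsum(frames ** 2, axis=0)

        def seg_cost(i, j):                      # cost of frames i..j-1, j > i
            s = pref[j - 1] - (pref[i - 1] if i else 0.0)
            s2 = pref2[j - 1] - (pref2[i - 1] if i else 0.0)
            return float(np.sum(s2 - s ** 2 / (j - i)))   # SSE about segment mean

        INF = float("inf")
        D = np.full((K + 1, T + 1), INF)         # D[k, j]: best cost, k segments/j frames
        back = np.zeros((K + 1, T + 1), dtype=int)
        D[0, 0] = 0.0
        for k in range(1, K + 1):
            for j in range(k, T + 1):
                for i in range(k - 1, j):
                    c = D[k - 1, i] + seg_cost(i, j)
                    if c < D[k, j]:
                        D[k, j], back[k, j] = c, i
        bounds, j = [], T                        # backtrack segment start indices
        for k in range(K, 0, -1):
            j = back[k, j]
            bounds.append(j)
        return D[K, T], bounds[::-1][1:]         # drop the leading 0
    ```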

  14. Fusion of spectral and panchromatic images using false color mapping and wavelet integrated approach

    NASA Astrophysics Data System (ADS)

    Zhao, Yongqiang; Pan, Quan; Zhang, Hongcai

    2006-01-01

    With the development of sensor technology, new image sensors have been introduced that provide a greater range of information to users. But because of radiation power limitations, there will always be some trade-off between spatial and spectral resolution in the images captured by a specific sensor. Images with high spatial resolution can locate objects with high accuracy, whereas images with high spectral resolution can be used to identify materials. Many applications in remote sensing require fusing low-resolution imaging spectral images with panchromatic images to identify materials at high resolution in clutter. A pixel-based fusion algorithm integrating false color mapping and the wavelet transform is presented in this paper; the resulting images have a higher information content than either of the original images and retain sensor-specific image information. Simulation results show that this algorithm can enhance the visibility of certain details and preserve the differences between materials.
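
    A minimal sketch of the wavelet half of such a fusion scheme, assuming the spectral band has already been upsampled to the panchromatic grid (the false color mapping half and the authors' exact fusion rules are omitted):

    ```python
    import numpy as np
    import pywt

    def wavelet_fuse(pan, ms_band, wavelet="db2", levels=3):
        """Fuse a panchromatic image with one (upsampled) spectral band.
        Approximation coefficients come from the spectral band (spectral
        fidelity); detail coefficients take the larger magnitude (spatial
        detail). Both inputs must share the same 2D shape."""
        cp = pywt.wavedec2(pan, wavelet, level=levels)
        cm = pywt.wavedec2(ms_band, wavelet, level=levels)
        fused = [cm[0]]                      # keep spectral approximation
        for dp, dm in zip(cp[1:], cm[1:]):   # (cH, cV, cD) per level
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in zip(dp, dm)))
        return pywt.waverec2(fused, wavelet)
    ```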

  15. A semi-supervised classification algorithm using the TAD-derived background as training data

    NASA Astrophysics Data System (ADS)

    Fan, Lei; Ambeau, Brittany; Messinger, David W.

    2013-05-01

    In general, spectral image classification algorithms fall into one of two categories: supervised and unsupervised. In unsupervised approaches, the algorithm automatically identifies clusters in the data without a priori information about those clusters (except perhaps the expected number of them). Supervised approaches require an analyst to identify training data to learn the characteristics of the clusters so that all other pixels can then be classified into one of the pre-defined groups. The classification algorithm presented here is a semi-supervised approach based on the Topological Anomaly Detection (TAD) algorithm. The TAD algorithm defines background components based on a mutual k-nearest neighbor graph model of the data, along with a spectral connected-components analysis. Here, the largest components produced by TAD are used as regions of interest (ROIs), or training data, for a supervised classification scheme. By combining those ROIs with a Gaussian Maximum Likelihood (GML) or a Minimum Distance to the Mean (MDM) algorithm, we obtain a semi-supervised classification method. We test this classification algorithm against data collected by the HyMAP sensor over the Cooke City, MT area and the University of Pavia scene.
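
    The MDM half of this scheme is simple enough to sketch directly (a minimal illustration; the label and array conventions here are assumptions):

    ```python
    import numpy as np

    def mdm_train(pixels, labels, classes):
        """Class means computed from the TAD-derived ROI pixels (training data)."""
        return np.array([pixels[labels == c].mean(axis=0) for c in classes])

    def mdm_classify(pixels, means):
        """Minimum Distance to the Mean: assign each spectrum to the nearest
        class mean in Euclidean distance. pixels: (N, B); means: (C, B)."""
        d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        return np.argmin(d, axis=1)
    ```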

  16. Regional regularization method for ECT based on spectral transformation of Laplacian

    NASA Astrophysics Data System (ADS)

    Guo, Z. H.; Kan, Z.; Lv, D. C.; Shao, F. Q.

    2016-10-01

    Image reconstruction in electrical capacitance tomography is an ill-posed inverse problem, and regularization techniques are usually used to suppress noise. An anisotropic regional regularization algorithm for electrical capacitance tomography is constructed using a novel approach called spectral transformation. Its function is derived and applied to the weighted gradient magnitude of the sensitivity of the Laplacian as a regularization term. With the optimal regional regularizer, a priori knowledge of the local degree of nonlinearity of the forward map is incorporated into the proposed online reconstruction algorithm. Simulation experiments were performed to verify that the new regularization algorithm reconstructs higher-quality images than two conventional Tikhonov regularization approaches. The advantage of the new algorithm in improving performance and reducing shape distortion is demonstrated with experimental data.
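
    For reference, the conventional (non-regional) Tikhonov step that the proposed algorithm is compared against can be sketched as follows (a minimal linearized illustration; names and shapes are assumptions):

    ```python
    import numpy as np

    def tikhonov_reconstruct(J, y, L, lam):
        """Standard Tikhonov step for linearized ECT: minimize
        ||J g - y||^2 + lam ||L g||^2 over the permittivity image g.
        J: sensitivity matrix; y: normalized capacitance measurements;
        L: regularization operator (identity or discrete Laplacian)."""
        A = J.T @ J + lam * (L.T @ L)
        return np.linalg.solve(A, J.T @ y)
    ```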

  17. Modified algorithm for mineral identification in LWIR hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Yousefi, Bardia; Sojasi, Saeed; Liaigre, Kévin; Ibarra Castanedo, Clemente; Beaudoin, Georges; Huot, François; Maldague, Xavier P. V.; Chamberland, Martin

    2017-05-01

    The applications of hyperspectral infrared imagery in different fields of research are significant and growing. It is mainly used in remote sensing for target detection, vegetation detection, urban area categorization, astronomy, and geological applications. The geological applications of this technology mainly consist of mineral identification in airborne or satellite imagery. We address a quantitative and qualitative assessment of mineral identification under laboratory conditions. We strive to identify nine different mineral grains (Biotite, Diopside, Epidote, Goethite, Kyanite, Scheelite, Smithsonite, Tourmaline, Quartz). A hyperspectral camera in the Long Wave Infrared (LWIR, 7.7-11.8 μm) with a LW-macro lens providing a spatial resolution of 100 μm, an infragold plate, and a heating source are the instruments used in the experiment. The proposed algorithm clusters all the pixel spectra into different categories. The best representatives of each cluster are then chosen and compared with the ASTER spectral library of JPL/NASA through spectral comparison techniques such as the spectral angle mapper (SAM) and normalized cross-correlation (NCC). The results indicate significant computational efficiency (more than 20 times faster than previous algorithms) and show a promising performance for mineral identification.
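
    The spectral comparison step is standard; a minimal sketch of SAM matching against library spectra (illustrative only; library entries are assumed resampled to the sensor bands):

    ```python
    import numpy as np

    def spectral_angle(s, r):
        """Spectral Angle Mapper: angle (radians) between a measured spectrum s
        and a reference spectrum r; a smaller angle means a better match."""
        cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def best_library_match(s, library):
        """Index of the library spectrum with the smallest angle to s."""
        return min(range(len(library)),
                   key=lambda i: spectral_angle(s, library[i]))
    ```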

  18. Detection of cardiac activity using a 5.8 GHz radio frequency sensor.

    PubMed

    Vasu, V; Fox, N; Brabetz, T; Wren, M; Heneghan, C; Sezer, S

    2009-01-01

    A 5.8-GHz ISM-band radio-frequency sensor has been developed for non-contact measurement of respiration and heart rate from stationary and semi-stationary subjects at a distance of 0.5 to 1.5 meters. We report on the accuracy of the heart rate measurements obtained using two algorithmic approaches, as compared to a reference heart rate obtained using a pulse oximeter. Simultaneous photoplethysmograph (PPG) and non-contact sensor recordings were made over fifteen-minute periods for ten healthy subjects (8M/2F, ages 29.6 ± 5.6 yrs). One algorithm is based on automated detection of individual peaks associated with each cardiac cycle; a second algorithm extracts a heart rate over a 60-second period using spectral analysis. Peaks were also extracted manually for comparison with the automated method. The peak-detection methods were less accurate than the spectral methods, but suggest the possibility of acquiring beat-by-beat data; the spectral algorithms measured heart rate to within ±10% for the ten subjects. Non-contact measurement of heart rate will be useful in chronic disease monitoring for conditions such as heart failure and cardiovascular disease.
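
    A minimal sketch of the second (spectral) approach, assuming a sampled sensor signal with fs of at least a few hertz and scipy available (not the authors' exact pipeline):

    ```python
    import numpy as np
    from scipy.signal import welch

    def heart_rate_bpm(x, fs):
        """Estimate heart rate from a 60 s non-contact sensor segment by
        locating the dominant PSD peak in the cardiac band
        (0.7-3.0 Hz, i.e., 42-180 bpm)."""
        f, p = welch(x, fs=fs, nperseg=min(len(x), 8 * int(fs)))
        band = (f >= 0.7) & (f <= 3.0)
        return 60.0 * f[band][np.argmax(p[band])]
    ```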

  19. [A spatial adaptive algorithm for endmember extraction on multispectral remote sensing image].

    PubMed

    Zhu, Chang-Ming; Luo, Jian-Cheng; Shen, Zhan-Feng; Li, Jun-Li; Hu, Xiao-Dong

    2011-10-01

    Because the convex cone analysis (CCA) method can extract only a limited number of endmembers from multispectral imagery, this paper proposes a new endmember extraction method based on spatially adaptive spectral feature analysis of multispectral remote sensing images, using spatial clustering and image slicing. Firstly, to remove spatial and spectral redundancy, the principal component analysis (PCA) algorithm was used to lower the dimensionality of the multispectral data. Secondly, the iterative self-organizing data analysis technique algorithm (ISODATA) was used to cluster the image based on the similarity of the pixel spectra. Then, through post-processing of the clusters and merging of small clusters, the whole image was divided into several blocks (tiles). Lastly, the number of endmembers is determined from the landscape complexity of each image block and an analysis of the scatter diagrams, and the hourglass algorithm is used to extract the endmembers. An endmember extraction experiment on TM multispectral imagery showed that the method can extract endmember spectra from multispectral imagery effectively. Moreover, the method resolves the limitation on the number of endmembers and improves the accuracy of endmember extraction. The method provides a new way to extract endmembers from multispectral images.
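
    The PCA dimensionality-reduction step in the first stage can be sketched as follows (illustrative only; the clustering, slicing, and hourglass stages are omitted):

    ```python
    import numpy as np

    def pca_reduce(X, k):
        """Project pixel spectra X (N pixels x B bands) onto the top-k
        principal components to remove spatial/spectral redundancy before
        ISODATA clustering."""
        Xc = X - X.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:k].T
    ```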

  20. Algorithmic aspects for the reconstruction of spatio-spectral data cubes in the perspective of the SKA

    NASA Astrophysics Data System (ADS)

    Mary, D.; Ferrari, A.; Ferrari, C.; Deguignet, J.; Vannier, M.

    2016-12-01

    With millions of receivers leading to terabyte-scale data cubes, the story of the giant SKA telescope is also one of collaborative efforts across radio astronomy, signal processing, optimization, and computer science. Reconstructing SKA cubes poses two challenges. First, the majority of existing algorithms work in 2D and cannot be directly translated into 3D. Second, the reconstruction implies solving an inverse problem, and it is not clear what ultimate limit we can expect on the error of this solution. This study addresses both challenges, at least partially. We consider an extremely simple data acquisition model and focus on strategies that make it possible to implement 3D reconstruction algorithms using state-of-the-art image/spectral regularization. The proposed approach has two main features: (i) reduced memory storage with respect to a previous approach; (ii) efficient parallelization and distribution of the computational load over the spectral bands. This work will make it possible to implement and compare various 3D reconstruction approaches in a large-scale framework.
